Search Results

Search found 11671 results on 467 pages for 'man pages'.


  • SPARC SuperCluster Papers

    - by user12616590
    Oracle has been publishing white papers that describe uses and characteristics of the SPARC SuperCluster product. Here are just a few:
    - A Technical Overview of the Oracle SPARC SuperCluster T4-4: SPARC SuperCluster T4-4 is a high-performance, multi-purpose engineered system that has been designed, tested and integrated to run a wide array of enterprise applications. It is well suited for multi-tier enterprise applications with Web, database and application components. This 20-page paper discusses the components and technical characteristics of the product.
    - SPARC SuperCluster T4-4 Platform Security Principles and Capabilities: The security capabilities designed into the SPARC SuperCluster, and architectural, deployment, and operational best practices for taking advantage of them.
    - Consolidating Oracle E-Business Suite on Oracle's SPARC SuperCluster: This 30-page Oracle Optimized Solution paper describes the implementation and use of SPARC SuperCluster as a consolidation platform for E-Business Suite.
    - Oracle Optimized Solution for Oracle PeopleSoft Human Capital Management on SPARC SuperCluster: The industry's only proven, tested, applications-to-disk solution that maintains excellence managing absences, optimizing collaborative activities, streamlining knowledge and honing processes; 31 pages.
    I hope you find some of those papers useful.

    Read the article

  • how to store and retrieve/generate UI?

    - by thindery
    I'm working on a site that will have hundreds, and eventually thousands, of paper products that users can customize online. Here is a very simple sample of what needs to be generated based on the product id: demo. This is a very simple version; I plan on replacing the text fields with prettier elements (like the slider on tab 3), and I imagine most of this can be achieved via jQuery. So basically a product will have multiple pages (tabs), with multiple form elements on each page. I've never done a large-scale project like this before, and I am looking for ideas/suggestions for how I can store the info for each product that is needed to generate its UI. For each product, I need to store how many pages there are, what form fields are on each page, and the order of the fields on the page, as well as default text values and form options (font size, etc.). With all this info stored somewhere, the web app can retrieve it and generate the UI, with text fields, sliders, and other jQuery-ish form enhancements, for that particular product. Can anyone toss out some suggestions, links, blogs, or tutorials? I'm not really sure where to begin with this or what I need to start investigating. I have experience with PHP, MySQL, JavaScript, jQuery, HTML and CSS, and that is really about it. I'm open to learning (and would enjoy exploring) new frameworks, programming, etc. that will really get this web app working correctly, efficiently, and effectively. Maybe I should start looking into an MVC framework? Like I said, I really have no idea what the best approach is. Please let me know your suggestions!

    Read the article

  • MVC 4 Authentication

    - by Aligned
    First: after searching for a while to figure out what's new/different with MVC 4 and forms authentication, this is the best article I've found on the subject: http://weblogs.asp.net/jgalloway/archive/2012/08/29/simplemembership-membership-providers-universal-providers-and-the-new-asp-net-4-5-web-forms-and-asp-net-mvc-4-templates.aspx
    Some quotes from the article:
    - "The ASP.NET Web Pages team designed SimpleMembership to (wait for it) simplify the task of dealing with membership"
    - "WSAT is built to work with ASP.NET Membership, and is not compatible with Simple Membership. There are two main options there: use the WebSecurity and OAuthWebSecurity API to manage the users and roles, or create a web admin using the above APIs. Since SimpleMembership runs on top of your database, you can update your users as you would any other data - via EF or even in direct database edits (in development, of course)"
    - "If you want to use an existing ASP.NET Membership Provider in ASP.NET MVC 4, you can't use the new AccountController. You can do a few things:"
    - "Universal Providers do not work with Simple Membership." ~ this post (look for Bob.at.SBS's answer) says Universal Providers are not needed for MVC 4 to work in Azure.
    I've been trying to figure out forms authentication in MVC 4. It's different from the past approach (aspnet_regsql). If you do File > New Project -> MVC 4 -> Internet Application, you get a really nice template with the controller and model set up for you. However, the tables are different from those created by aspnet_regsql, and the ASP.NET Configuration tool (WSAT) wasn't connecting to the data I had (it was creating an App_Data/aspnet.mdf file, which I didn't see right away).
    Points of note:
    - The database tables are created in the SimpleMembershipInitializer class, when you first run your app, using Entity Framework 5 migration functionality.
    - The tables created are webpages_Membership, webpages_OAuthMembership, webpages_Roles, webpages_UsersInRoles and UserProfile.
    - Web.config settings don't seem to be needed.
    Scott Hanselman on Universal Providers was also useful, if somewhat outdated. Universal Providers and SimpleMembership are not compatible. http://www.asp.net/web-pages/tutorials/security/16-adding-security-and-membership - walk-through
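    For reference, the default MVC 4 Internet Application template wires SimpleMembership up with a single call from the SimpleMembershipInitializer mentioned above. Below is a minimal sketch of that call, assuming the template defaults: a "DefaultConnection" connection string and a UserProfile table with UserId/UserName columns (your names may differ, and the wrapper class name here is illustrative):

    using WebMatrix.WebData;

    public static class SimpleMembershipBootstrap
    {
        // Call once at startup; the MVC 4 template does this lazily via a filter attribute.
        public static void EnsureInitialized()
        {
            if (!WebSecurity.Initialized)
            {
                // Creates the webpages_* tables if they do not exist yet and maps
                // membership onto the existing UserProfile table's UserId/UserName columns.
                WebSecurity.InitializeDatabaseConnection(
                    "DefaultConnection", "UserProfile", "UserId", "UserName",
                    autoCreateTables: true);
            }
        }
    }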

    Read the article

  • Can't mount my USB Drive

    - by user996056
    ~> sudo mount /dev/sdb1 /media/Disk
    Did not find any restart pages in $LogFile and it was not empty.
    The file system wasn't safely closed on Windows. Fixing.
    I have a USB hard disk and Windows can't detect it, so I tried to open it up on Ubuntu using GParted. GParted detects the NTFS partition, so everything seems to be fine (note, though, that the total size of the files on this disk is over 1 TB). I tried to mount it using:
    sudo mount /dev/sdb1 /media/Disk
    But I got:
    Did not find any restart pages in $LogFile and it was not empty.
    The file system wasn't safely closed on Windows. Fixing.
    Then the process just sits there blinking. Any ideas on how to fix it? It's taking forever (over 10 minutes); should I wait, or cancel it and do something else? Thank you in advance.

    Read the article

  • SEO consideration for duplicate sites

    - by Malk
    I am building a brochure-ware website for a company that sells products all across the world. They need the site to ask the user what region they are in before using the site; there are 5 regions. This is because different products are offered to different regions, and each region may or may not want to customize its own content. However, at launch and likely forever, most of the pages will be exactly the same apart from what is listed in the footer and in the product selection menu. My question is how I should structure the sitemap for this site for best SEO. Should I be concerned with duplicate content penalties and/or cannibalizing the site's presence on the SERP?
    Some considerations:
    - The client wants to be able to print links directly to region-specific content, bypassing any prompt for the user to select a region (to ensure they land on the target page).
    - The client cannot have a 'default' region, so the user must have a region specified.
    - "Clean" URLs are important, but there is wiggle room.
    - The client does not want each region to have its own domain.
    - There will be a link on the page to allow users to specify a different region.
    - The client is not concerned with localization... at this time.
    - Some products are available in multiple regions.
    A quick list of options I am considering:
    - www.site.com/region/page
    - region.site.com/page
    - www.site.com/page?region (no cookie; pages require the parameter. If visited without it, the user must select a region)
    - www.site.com/page (using a cookie and a splash screen if needed; could pass a parameter in to set the region for direct linking)
    Thanks in advance for your advice.

    Read the article

  • Ubuntu is unresponsive and scrolling is very jerky

    - by peterpipe
    I'm new to Ubuntu and have been using 12.04 LTS for over a week now. For 3 days it was great, but now I'm having a number of problems. I'm using an Acer Aspire One D255 with a 250 GB HDD and 1 GB of memory; my browser is Firefox. Pages freeze occasionally, and after waiting for over 10 minutes I have to use the computer's power button. Pages crash. On Facebook some pictures don't load, and I have to close down and restart a couple of times. Scrolling can sometimes be very jerky. Downloading software has become a problem, and I have a 'no entry' sign in the top taskbar. I've looked at this page for some help, but the words used and the explanations are not helpful to someone like me who has little knowledge of this. Please - 'Ubuntu for dummies'. I'd like to continue using Ubuntu but am close to giving up.

    Read the article

  • Can using div with width = 0px affect SEO? [closed]

    - by user989084
    Possible Duplicate: Does Google always downrank pages with hidden text?
    Right now I'm working on my new website, and I'm really concerned about SEO, since the old version of my site (which is built on a script that is unusable now) has a PR of 4 and I don't want to lose it. So here is the situation: there is a panel that has 4 tabs. Each tab has an anchor tag with an href like "/box-page/tab/2"; when JavaScript is not enabled, clicking it goes to this page and shows the corresponding tab, and if it is enabled it just runs a simple animation to show the other tab. There are four boxes (one for each tab), and since I needed to fix the height of the panel I had to use width: 0 for the rest of the tabs to keep the height of the box the same as the tallest one. Inside these boxes (which have width: 0) there is some information that can be indexed by Google. As you know, Google doesn't execute JavaScript, so it will crawl /box-page/tab/2, /box-page/tab/3, and so on; on all of these pages the information is the same, but with a different box showing up on the page. So here is my question: does Google penalize using a div with width: 0px? And if not, does it just ignore the content of the div with width 0 (which is perfect for me ^^)? Thanks

    Read the article

  • search a collection for a specific keyword

    - by icelated
    What I want to do is search a HashSet with a keyword. I have 3 classes: main(), Library, and Item (with CD, DVD and Book subclasses). In Library I am trying to do my search of the items in the hash sets; the Item class is where I have the getKeywords function. Here is the Item class (with its subclasses):

    import java.io.PrintStream;
    import java.util.Collection;
    import java.util.*;

    class Item {
        private String title;
        private String[] keywords;

        public String toString() {
            String line1 = "title: " + title + "\n" + "keywords: " + Arrays.toString(keywords);
            return line1;
        }

        public void print() { System.out.println(toString()); }

        public Item() { }

        public Item(String theTitle, String... theKeyword) {
            this.title = theTitle;
            this.keywords = theKeyword;
        }

        public String getTitle() { return title; }

        public String[] getKeywords() { return keywords; }
    }

    class CD extends Item {
        private String artist;
        private String[] members;
        // private String[] keywords;
        private int number;

        public CD(String theTitle, String theBand, int Snumber, String... keywords) {
            super(theTitle, keywords);
            this.artist = theBand;
            this.number = Snumber;
            // this.keywords = keywords;
        }

        public void addband(String... member) { this.members = member; }

        public String getArtist() { return artist; }

        public String[] getMembers() { return members; }

        // public String[] getKeywords() { return keywords; }

        public String toString() {
            return "-Music-" + "\n" + "band: " + artist + "\n" + "# songs: " + number + "\n"
                    + "members: " + Arrays.toString(members) + "\n" + super.toString()
                    // + "keywords: " + Arrays.toString(keywords) + "\n"
                    + "\n";
        }

        public void print() { System.out.println(toString()); }
    }

    class DVD extends Item {
        private String director;
        private String[] cast;
        private int scenes;
        // private String[] keywords;

        public DVD(String theTitle, String theDirector, int nScenes, String... keywords) {
            super(theTitle, keywords);
            this.director = theDirector;
            this.scenes = nScenes;
            // this.keywords = keywords;
        }

        public void addmoviecast(String... members) { this.cast = members; }

        public String[] getCast() { return cast; }

        public String getDirector() { return director; }

        // public String[] getKeywords() { return keywords; }

        public String toString() {
            return "-Movie-" + "\n" + "director: " + director + "\n" + "# scenes: " + scenes + "\n"
                    + "cast: " + Arrays.toString(cast) + "\n" + super.toString()
                    // + "keywords: " + Arrays.toString(keywords) + "\n"
                    + "\n";
        }

        public void print() { System.out.println(toString()); }
    }

    class Book extends Item {
        private String author;
        private int pages;

        public Book(String theTitle, String theAuthor, int nPages, String... keywords) {
            super(theTitle, keywords);
            this.author = theAuthor;
            this.pages = nPages;
            // this.keywords = keywords;
        }

        public String getAuthor() { return author; }

        // public String[] getKeywords() { return keywords; }

        public void print() { System.out.println(toString()); }

        public String toString() {
            return "-Book-" + "\n" + "Author: " + author + "\n" + "# pages " + pages + "\n" + super.toString()
                    // + "keywords: " + Arrays.toString(keywords) + "\n"
                    + "\n";
        }
    }

    I hope I didn't confuse you. I need help with the itemsForKeyword(String keyword) function: the first keyword being passed in is "science fiction", and I want to search the keywords in the sets and return the matches. What am I doing wrong? Thank you

    Read the article

  • AsyncPostBackTrigger Just flashing/flicking the UpdatePanel but not updating it

    - by Pankaj
    I am trying to use an UpdatePanel and AsyncPostBackTrigger with master pages via the FindControl method, but the problem is that when I click the button (btnRefresh) the UpdatePanel just flashes/flickers (no full postback), yet it still doesn't update or refresh the GridView (images) inside the UpdatePanel. I have placed the ScriptManager on the master page and an AJAX UpdatePanel in a ContentPlaceHolder on the child page. Also, in another ContentPlaceHolder there is an ASP.NET button (outside of the UpdatePanel). I want to refresh/reload the AJAX UpdatePanel with this button. Thanks for any suggestions.
    Child page code:

    protected void Page_Load(object sender, EventArgs e)
    {
        ScriptManager ScriptManager1 = (ScriptManager)Master.FindControl("ScriptManager1");
        ContentPlaceHolder cph = (ContentPlaceHolder)Master.FindControl("cp_Button");
        Button btnRefresh = (Button)cph.FindControl("btnRefresh");
        ScriptManager1.RegisterAsyncPostBackControl(btnRefresh);
    }

    protected void btnRefresh_Click(object sender, EventArgs e)
    {
        UpdatePanel1.Update();
    }

    <%@ Page Title="" Language="C#" MasterPageFile="~/InnerMaster.master" AutoEventWireup="true"
        CodeFile="A.aspx.cs" Inherits="A" Async="true" %>

    <asp:Content ID="Content3" ContentPlaceHolderID="MainContent" Runat="Server">
        <asp:UpdatePanel ID="UpdatePanel1" UpdateMode="Conditional" runat="server">
            <ContentTemplate>
                <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False"
                    DataKeyNames="id" DataSourceID="SqlDataSource1">
                    <Columns>
                        <asp:TemplateField>
                            <ItemTemplate>
                                <asp:Image ID="img12" runat="server" Width="650px" Height="600" ToolTip="A"
                                    ImageUrl='<%# Page.ResolveUrl(string.Format("~/Cli/{0}", Eval("image"))) %>' />
                            </ItemTemplate>
                        </asp:TemplateField>
                    </Columns>
                </asp:GridView>
            </ContentTemplate>
        </asp:UpdatePanel>
    </asp:Content>

    <asp:Content ID="Content4" ContentPlaceHolderID="cp_Button" Runat="Server">
        <asp:Button ID="btnRefresh" runat="server" onclick="btnRefresh_Click" Height="34" Width="110"
            Text="More Images" />
    </asp:Content>

    Updated code (now on the click event the whole page is refreshed):

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web;
    using System.Web.UI;

    namespace EProxy
    {
        public class EventProxy : Control, IPostBackEventHandler
        {
            public EventProxy() { }

            public void RaisePostBackEvent(string eventArgument) { }

            public event EventHandler<EventArgs> EventProxied;

            protected virtual void OnEventProxy(EventArgs e)
            {
                if (this.EventProxied != null)
                {
                    this.EventProxied(this, e);
                }
            }

            public void ProxyEvent(EventArgs e)
            {
                OnEventProxy(e);
            }
        }
    }

    On the master page (btnRefresh click):

    protected void btnRefresh_Click(object sender, EventArgs e)
    {
        ContentPlaceHolder cph = (ContentPlaceHolder)this.FindControl("MainContent");
        EventProxy eventProxy = (EventProxy)cph.FindControl("ProxyControl") as EventProxy;
        eventProxy.ProxyEvent(e);
    }

    Web.config:

    <pages maintainScrollPositionOnPostBack="true" enableViewStateMac="true">
        <controls>
            <add tagPrefix="it" namespace="EProxy" assembly="App_Code"/>
        </controls>
    </pages>
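    One thing worth trying for the "panel flickers but the grid never changes" symptom described above is to register the cross-placeholder button earlier in the page life cycle and to rebind the GridView explicitly before calling Update(). This is only a minimal sketch against the markup above (control IDs are taken from it); whether anything visibly changes still depends on the data behind SqlDataSource1 actually changing between clicks:

    using System;
    using System.Web.UI;
    using System.Web.UI.WebControls;

    public partial class A : Page
    {
        protected void Page_Init(object sender, EventArgs e)
        {
            // Locate the ScriptManager on the master page and the button that lives
            // in the other ContentPlaceHolder, then register the button so its
            // postback is handled asynchronously.
            ScriptManager sm = (ScriptManager)Master.FindControl("ScriptManager1");
            ContentPlaceHolder cph = (ContentPlaceHolder)Master.FindControl("cp_Button");
            Button btnRefresh = (Button)cph.FindControl("btnRefresh");
            sm.RegisterAsyncPostBackControl(btnRefresh);
        }

        protected void btnRefresh_Click(object sender, EventArgs e)
        {
            // Rebind the grid so there is new markup to render, then refresh
            // only this panel (UpdateMode="Conditional").
            GridView1.DataBind();
            UpdatePanel1.Update();
        }
    }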

    Read the article

  • Another Call to a member function get_segment() on a non-object question

    - by hogofwar
    I get the above error when calling this code: <? class Test1 extends Core { function home(){ ?> This is the INDEX of test1 <? } function test2(){ echo $this->uri->get_segment(1); //this is where the error comes from ?> This is the test2 of test1 testing URI <? } } ?> I get the error where commentated. This class extends this class: <?php class Core { public function start() { require("funk/funks/libraries/uri.php"); $this->uri = new uri(); require("funk/core/loader.php"); $this->load = new loader(); if($this->uri->get_segment(1) != "" and file_exists("funk/pages/".$this->uri->get_segment(1).".php")){ include("funk/pages/". $this->uri->get_segment(1).".php"); $var = $this->uri->get_segment(2); if ($var != ""){ $home= $this->uri->get_segment(1); $Index= new $home(); $Index->$var(); }else{ $home= $this->uri->get_segment(1); $Index = new $home(); $Index->home(); } }elseif($this->uri->get_segment(1) and ! file_exists("funk/pages/".$this->uri->get_segment(1).".php")){ echo "404 Error"; }else{ include("funk/pages/index.php"); $Index = new Index(); $Index->home(); //$this->Index->index(); echo "<!--This page was created with FunkyPHP!-->"; } } } ?> And here is the contents of uri.php: <?php class uri { private $server_path_info = ''; private $segment = array(); private $segments = 0; public function __construct() { $segment_temp = array(); $this->server_path_info = preg_replace("/\?/", "", $_SERVER["PATH_INFO"]); $segment_temp = explode("/", $this->server_path_info); foreach ($segment_temp as $key => $seg) { if (!preg_match("/([a-zA-Z0-9\.\_\-]+)/", $seg) || empty($seg)) unset($segment_temp[$key]); } foreach ($segment_temp as $k => $value) { $this->segment[] = $value; } unset($segment_temp); $this->segments = count($this->segment); } public function segment_exists($id = 0) { $id = (int)$id; if (isset($this->segment[$id])) return true; else return false; } public function get_segment($id = 0) { $id--; $id = (int)$id; if ($this->segment_exists($id) === true) return $this->segment[$id]; else return false; } } ?> i have asked a similar question to this before but the answer does not apply here. I have rewritten my code 3 times to KILL and Delimb this godforsaken error! but nooooooo....

    Read the article

  • web application with secured sections, sessions and related trouble

    - by spirytus
    I would like to create a web application with the admin/checkout sections secured. Assuming I have SSL set up for subdomain.mydomain.com, I would like to make sure that all the top-secret stuff ;) like checkout pages and the admin section is transferred securely. Would it be OK to structure my application as below?

    subdomain.mydomain.com
        adminSectionFolder
            adminPage1.php
            adminPage2.php
        checkoutPagesFolder
            checkoutPage1.php
            checkoutPage2.php
            checkoutPage3.php
        homepage.php
        loginPage.php
        someOtherPage.php
        someNonSecureFolder
            nonSecurePage1.php
            nonSecurePage2.php
            nonSecurePage3.php
        imagesFolder
            image1.jpg
            image2.jpg
            image3.jpg

    Users would access my web application via http, as there is no need for SSL for the homepage and similar pages. Checkout/admin pages would have to be accessed via https though (which I would ensure via .htaccess redirects). I would also like to have a login form on every page of the site, including non-secure pages. Now my questions are:
    1. If I have a form on a non-secure page, e.g. http://subdomain.mydomain.com/homepage.php, and that form sends data to http://subdomain.mydomain.com/loginPage.php, is the data sent encrypted as if it were sent from https://subdomain.mydomain.com/homepage.php? I do realize users will not see the padlock, but the browser should still encrypt it, is that right?
    2. If on a secure page, loginPage.php (or any other page accessed via https for that instance), I created a session, a session ID would be assigned, and in the case of my web app something like the username of the logged-in user. Would I be able to access these session variables from http://subdomain.mydomain.com/homepage.php to, for example, display a greeting message? If the session ID is stored in cookies then it would be trouble, I assume, but could someone clarify how it should be done? It seems important to have the username and password sent over SSL.
    3. Related to the above question, I think: would it actually make any sense to have the login secured via SSL, so the username/password are transferred securely, and then the session ID transferred with no SSL? I mean, wouldn't it really be the same if someone caught the username and password being transferred, or caught the session ID? Please let me know if I make sense here, because it feels like I'm missing something important.
    EDIT: I came up with an idea, but again, please let me know if it would work. Given the above, and assuming that sharing a session between http and https is as secure as logging the user in via plain http (not https), I guess on all non-secure pages, like the homepage etc., I could check if the user is already logged in, and if so redirect from PHP to the https version of the same page. So the user fills in the login form from homepage.php; over SSL the details are sent to the backend, so probably https://.../homepage.php. Trying to access http://.../someOtherPage.php, the script would always check if a session exists and if so redirect the user to the https version of this page, https://.../someOtherPage.php. Would that work?
    4. To avoid the browser popping up the message "this page contains non-secure items...", should my links to CSS, images and all assets, e.g. in the case of http://subdomain.mydomain.com/checkoutPage1.php, be absolute, like "/images/image1.jpg", or relative, like "../images/image1.jpg"? I guess one of those would have to work :)
    Wow, that's a long post; thanks for your patience if you got this far, and for any answers :) Oh yeah, and I use PHP/Apache on shared hosting.

    Read the article

  • SSRS 2008 printing single page renders different for print

    - by user270437
    I have a problem with SSRS 2008 reports rendering differently on the reporting server than they do when you print the report. I'm trying to figure out how to print a single page and have the printout show the same records as I see on the report on screen. As a test, I created a simple report with no headers or footers and just added a tablix table to display the records (no groupings). My data set for this test displays 2 ¼ pages of records when I deploy it to our Reporting Services server and run it. If I click the print icon and preview, the report is 2 ¾ pages. I haven't found anything searching on this, so it makes me think it is something simple I'm missing. Basically, I want the report to render the same records on each page in Report Manager as it does when it prints. How do I accomplish this?
    (In response to the answer posted by Chris)... If that is the case then it is disappointing. Customers are accustomed to WYSIWYG and will have a hard time understanding that; I imagine we will be getting a lot of support calls. This still leaves an issue. I tried using print preview and could not find any way to single out a page. If I select a page up front to print, or preview it, it renders differently so I get different records. And if I preview the entire document, I can only print the entire document. You mentioned the Excel renderer; we have customers that will want that also. The problem I have found with Excel exports is that even a basic report winds up merging some cells, and that messes up sorting. I'm going to try your tip about grouping to see if I can get a clean export to a page. It would have been nice if they had created a property for certain controls, like the tablix table, called "ExcelSheet": then all you would have to do is give it a name and it would create a new sheet for each named control, with the name becoming the sheet title. Thanks for the information you supplied; it is very useful as I'm new to SSRS. If you know how I can preview in the print renderer and select individual pages to print from that render, let me know.
    Update 02/19/2010: After testing this more, I now realize it is just a bad design of Report Manager's print driver, or a limitation because it is server based. The options work differently than Windows application print drivers, but I did find a workaround. Here is the test I performed comparing Excel to Report Manager. I bring up a report that will render more than one page when printed. I then export to Excel, and in Excel I select print preview. I can navigate the pages in preview and then select a single page, like page 3. I can then print just page 3 without leaving print preview, and it prints just like it rendered. I cannot do this using print in Report Manager. If I select print preview in Report Manager and then try to print while in preview, it always prints the entire document. However, if I close out of print preview, I can then select page 3 and print it as rendered. It is just one additional step once you know what to do, but it took some time to figure it out.

    Read the article

  • how to version minder for web application data

    - by dankyy1
    hi all;I'm devoloping a web application which renders data from DB and also updates datas with editor UI Pages.So i want to implement a versioning mechanism for render pages got data over db again if only data on db updated by editor pages.. I decided to use Session objects for the version information that client had taken latestly.And the Application object that the latest DB version of objects ,i used the data objects guid as key for each data item client version holder class like below ItemRunnigVersionInformation class holds currentitem guid and last loadtime from DB public class ClientVersionManager { public static List<ItemRunnigVersionInformation> SessionItemRunnigVersionInformation { get { if (HttpContext.Current.Session["SessionItemRunnigVersionInformation"] == null) HttpContext.Current.Session["SessionItemRunnigVersionInformation"] = new List<ItemRunnigVersionInformation>(); return (List<ItemRunnigVersionInformation>)HttpContext.Current.Session["SessionItemRunnigVersionInformation"]; } set { HttpContext.Current.Session["SessionItemRunnigVersionInformation"] = value; } } /// <summary> /// this will be updated when editor pages /// </summary> /// <param name="itemRunnigVersionInformation"></param> public static void UpdateItemRunnigSessionVersion(string itemGuid) { ItemRunnigVersionInformation itemRunnigVersionAtAppDomain = PlayListVersionManager.GetItemRunnigVersionInformationByID(itemGuid); ItemRunnigVersionInformation itemRunnigVersionInformationAtSession = SessionItemRunnigVersionInformation.FirstOrDefault(t => t.ItemGuid == itemGuid); if ((itemRunnigVersionInformationAtSession == null) && (itemRunnigVersionAtAppDomain != null)) { ExtensionMethodsForClientVersionManager.ExtensionMethodsForClientVersionManager.Add(SessionItemRunnigVersionInformation, itemRunnigVersionAtAppDomain); } else if (itemRunnigVersionAtAppDomain != null) { ExtensionMethodsForClientVersionManager.ExtensionMethodsForClientVersionManager.Remove(SessionItemRunnigVersionInformation, itemRunnigVersionInformationAtSession); ExtensionMethodsForClientVersionManager.ExtensionMethodsForClientVersionManager.Add(SessionItemRunnigVersionInformation, itemRunnigVersionAtAppDomain); } } /// <summary> /// by given parameters check versions over PlayListVersionManager versions and /// adds versions to clientversion manager if any item version on /// playlist not found it will also added to PlaylistManager list /// </summary> /// <param name="playList"></param> /// <param name="userGuid"></param> /// <param name="ownerGuid"></param> public static void UpdateCurrentSessionVersion(PlayList playList, string userGuid, string ownerGuid) { ItemRunnigVersionInformation tmpItemRunnigVersionInformation; List<ItemRunnigVersionInformation> currentItemRunnigVersionInformationList = new List<ItemRunnigVersionInformation>(); if (!string.IsNullOrEmpty(userGuid)) { tmpItemRunnigVersionInformation = PlayListVersionManager.GetItemRunnigVersionInformationByID(userGuid); if (tmpItemRunnigVersionInformation == null) { tmpItemRunnigVersionInformation = new ItemRunnigVersionInformation(userGuid, DateTime.Now.ToUniversalTime()); PlayListVersionManager.UpdateItemRunnigAppDomainVersion(tmpItemRunnigVersionInformation); } ExtensionMethodsForClientVersionManager.ExtensionMethodsForClientVersionManager.Add(currentItemRunnigVersionInformationList, tmpItemRunnigVersionInformation); } if (!string.IsNullOrEmpty(ownerGuid)) { tmpItemRunnigVersionInformation = PlayListVersionManager.GetItemRunnigVersionInformationByID(ownerGuid); if 
(tmpItemRunnigVersionInformation == null) { tmpItemRunnigVersionInformation = new ItemRunnigVersionInformation(ownerGuid, DateTime.Now.ToUniversalTime()); PlayListVersionManager.UpdateItemRunnigAppDomainVersion(tmpItemRunnigVersionInformation); } ExtensionMethodsForClientVersionManager.ExtensionMethodsForClientVersionManager.Add(currentItemRunnigVersionInformationList, tmpItemRunnigVersionInformation); } if ((playList != null) && (playList.PlayListItemCollection != null)) { tmpItemRunnigVersionInformation = PlayListVersionManager.GetItemRunnigVersionInformationByID(playList.GUID); if (tmpItemRunnigVersionInformation == null) { tmpItemRunnigVersionInformation = new ItemRunnigVersionInformation(playList.GUID, DateTime.Now.ToUniversalTime()); PlayListVersionManager.UpdateItemRunnigAppDomainVersion(tmpItemRunnigVersionInformation); } currentItemRunnigVersionInformationList.Add(tmpItemRunnigVersionInformation); foreach (PlayListItem playListItem in playList.PlayListItemCollection) { tmpItemRunnigVersionInformation = PlayListVersionManager.GetItemRunnigVersionInformationByID(playListItem.GUID); if (tmpItemRunnigVersionInformation == null) { tmpItemRunnigVersionInformation = new ItemRunnigVersionInformation(playListItem.GUID, DateTime.Now.ToUniversalTime()); PlayListVersionManager.UpdateItemRunnigAppDomainVersion(tmpItemRunnigVersionInformation); } currentItemRunnigVersionInformationList.Add(tmpItemRunnigVersionInformation); foreach (SoftKey softKey in playListItem.PlayListSoftKeys) { tmpItemRunnigVersionInformation = PlayListVersionManager.GetItemRunnigVersionInformationByID(softKey.GUID); if (tmpItemRunnigVersionInformation == null) { tmpItemRunnigVersionInformation = new ItemRunnigVersionInformation(softKey.GUID, DateTime.Now.ToUniversalTime()); PlayListVersionManager.UpdateItemRunnigAppDomainVersion(tmpItemRunnigVersionInformation); } ExtensionMethodsForClientVersionManager.ExtensionMethodsForClientVersionManager.Add(currentItemRunnigVersionInformationList, tmpItemRunnigVersionInformation); } foreach (MenuItem menuItem in playListItem.MenuItems) { tmpItemRunnigVersionInformation = PlayListVersionManager.GetItemRunnigVersionInformationByID(menuItem.Guid); if (tmpItemRunnigVersionInformation == null) { tmpItemRunnigVersionInformation = new ItemRunnigVersionInformation(menuItem.Guid, DateTime.Now.ToUniversalTime()); PlayListVersionManager.UpdateItemRunnigAppDomainVersion(tmpItemRunnigVersionInformation); } ExtensionMethodsForClientVersionManager.ExtensionMethodsForClientVersionManager.Add(currentItemRunnigVersionInformationList, tmpItemRunnigVersionInformation); } } } SessionItemRunnigVersionInformation = currentItemRunnigVersionInformationList; } public static ItemRunnigVersionInformation GetItemRunnigVersionInformationById(string itemGuid) { return SessionItemRunnigVersionInformation.FirstOrDefault(t => t.ItemGuid == itemGuid); } public static void DeleteItemRunnigAppDomain(string itemGuid) { ExtensionMethodsForClientVersionManager.ExtensionMethodsForClientVersionManager.Remove(SessionItemRunnigVersionInformation, NG.IPTOffice.Paloma.Helper.ExtensionMethodsFoPlayListVersionManager.ExtensionMethodsFoPlayListVersionManager.GetMatchingItemRunnigVersionInformation(SessionItemRunnigVersionInformation, itemGuid)); } } and that was for server one public class PlayListVersionManager { public static List<ItemRunnigVersionInformation> AppDomainItemRunnigVersionInformation { get { if (HttpContext.Current.Application["AppDomainItemRunnigVersionInformation"] == null) 
HttpContext.Current.Application["AppDomainItemRunnigVersionInformation"] = new List<ItemRunnigVersionInformation>(); return (List<ItemRunnigVersionInformation>)HttpContext.Current.Application["AppDomainItemRunnigVersionInformation"]; } set { HttpContext.Current.Application["AppDomainItemRunnigVersionInformation"] = value; } } public static ItemRunnigVersionInformation GetItemRunnigVersionInformationByID(string itemGuid) { return ExtensionMethodsFoPlayListVersionManager.ExtensionMethodsFoPlayListVersionManager.GetMatchingItemRunnigVersionInformation(AppDomainItemRunnigVersionInformation, itemGuid); } /// <summary> /// this will be updated when editor pages /// if any record at playlistversion is found it will be addedd /// </summary> /// <param name="itemRunnigVersionInformation"></param> public static void UpdateItemRunnigAppDomainVersion(ItemRunnigVersionInformation itemRunnigVersionInformation) { ItemRunnigVersionInformation itemRunnigVersionInformationAtAppDomain = NG.IPTOffice.Paloma.Helper.ExtensionMethodsFoPlayListVersionManager.ExtensionMethodsFoPlayListVersionManager.GetMatchingItemRunnigVersionInformation(AppDomainItemRunnigVersionInformation, itemRunnigVersionInformation.ItemGuid); if (itemRunnigVersionInformationAtAppDomain == null) { ExtensionMethodsFoPlayListVersionManager.ExtensionMethodsFoPlayListVersionManager.Add(AppDomainItemRunnigVersionInformation, itemRunnigVersionInformation); } else { ExtensionMethodsFoPlayListVersionManager.ExtensionMethodsFoPlayListVersionManager.Remove(AppDomainItemRunnigVersionInformation, itemRunnigVersionInformationAtAppDomain); ExtensionMethodsFoPlayListVersionManager.ExtensionMethodsFoPlayListVersionManager.Add(AppDomainItemRunnigVersionInformation, itemRunnigVersionInformation); } } //this will be checked each time if needed to update item over DB public static bool IsRunnigItemLastVersion(ItemRunnigVersionInformation itemRunnigVersionInformation, bool ignoreNullEntry, out bool itemNotExistsAtAppDomain) { itemNotExistsAtAppDomain = false; if (itemRunnigVersionInformation != null) { ItemRunnigVersionInformation itemRunnigVersionInformationAtAppDomain = AppDomainItemRunnigVersionInformation.FirstOrDefault(t => t.ItemGuid == itemRunnigVersionInformation.ItemGuid); itemNotExistsAtAppDomain = (itemRunnigVersionInformationAtAppDomain == null); if (itemNotExistsAtAppDomain && (ignoreNullEntry)) { ExtensionMethodsFoPlayListVersionManager.ExtensionMethodsFoPlayListVersionManager.Add(AppDomainItemRunnigVersionInformation, itemRunnigVersionInformation); return true; } else if (!itemNotExistsAtAppDomain && (itemRunnigVersionInformationAtAppDomain.LastLoadTime <= itemRunnigVersionInformation.LastLoadTime)) return true; else return false; } else return ignoreNullEntry; } public static void DeleteItemRunnigAppDomain(string itemGuid) { ExtensionMethodsFoPlayListVersionManager.ExtensionMethodsFoPlayListVersionManager.Remove(AppDomainItemRunnigVersionInformation, NG.IPTOffice.Paloma.Helper.ExtensionMethodsFoPlayListVersionManager.ExtensionMethodsFoPlayListVersionManager.GetMatchingItemRunnigVersionInformation(AppDomainItemRunnigVersionInformation, itemGuid)); } } when more than one client requests the page i got "Collection was modified; enumeration operation may not execute." like below.. xception: System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. ---> System.InvalidOperationException: Collection was modified; enumeration operation may not execute. 
    at System.ThrowHelper.ThrowInvalidOperationException(ExceptionResource resource)
    at System.Collections.Generic.List`1.Enumerator.MoveNextRare()
    at System.Collections.Generic.List`1.Enumerator.MoveNext()
    at System.Linq.Enumerable.FirstOrDefault[TSource](IEnumerable`1 source, Func`2 predicate)
    at NG.IPTOffice.Paloma.Helper.PlayListVersionManager.UpdateItemRunnigAppDomainVersion(ItemRunnigVersionInformation itemRunnigVersionInformation) in
    at System.Web.UI.Control.LoadRecursive()
    at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
    --- End of inner exception stack trace ---
    at System.Web.UI.Page.HandleError(Exception e)
    at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
    at System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
    at System.Web.UI.Page.ProcessRequest()
    at System.Web.UI.Page.ProcessRequestWithNoAssert(HttpContext context)
    at System.Web.UI.Page.ProcessRequest(HttpContext context)
    at ASP.playlistwebform_aspx.ProcessRequest(HttpContext context) in c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\ipservicestest\8921e5c8\5d09c94d\App_Web_n4qdnfcq.2.cs:line 0
    at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
    at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)
    How should I implement version management for this scenario, and how can I avoid this exception? Thanks.
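    The "Collection was modified; enumeration operation may not execute" error above is the classic symptom of several concurrent requests enumerating and mutating the same List<ItemRunnigVersionInformation> stored in Application state without synchronization. One way to make the shared, application-wide version map safe is sketched below. This is not the author's code: it swaps the shared List<T> for a ConcurrentDictionary keyed by item GUID (which requires .NET 4; on earlier frameworks, a lock around every read and write of the shared list achieves the same effect), and the member names are illustrative:

    using System;
    using System.Collections.Concurrent;

    // Illustrative replacement for Application["AppDomainItemRunnigVersionInformation"]:
    // a process-wide, thread-safe map from item GUID to its last load time.
    public static class ItemVersionStore
    {
        private static readonly ConcurrentDictionary<string, DateTime> Versions =
            new ConcurrentDictionary<string, DateTime>(StringComparer.OrdinalIgnoreCase);

        // Record (or overwrite) the version for an item; safe to call from any request.
        public static void Update(string itemGuid, DateTime lastLoadTimeUtc)
        {
            Versions[itemGuid] = lastLoadTimeUtc;
        }

        // True when the caller's copy is at least as new as the stored one;
        // unknown items are treated as up to date, mirroring the ignoreNullEntry flag.
        public static bool IsUpToDate(string itemGuid, DateTime callerLoadTimeUtc)
        {
            DateTime stored;
            return !Versions.TryGetValue(itemGuid, out stored) || stored <= callerLoadTimeUtc;
        }

        // Forget an item, e.g. when it is deleted by the editor pages.
        public static void Remove(string itemGuid)
        {
            DateTime ignored;
            Versions.TryRemove(itemGuid, out ignored);
        }
    }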

    Read the article

  • WCF REST on .Net 4.0

    - by AngelEyes
    A simple and straight forward article taken from: http://christopherdeweese.com/blog2/post/drop-the-soap-wcf-rest-and-pretty-uris-in-net-4 Drop the Soap: WCF, REST, and Pretty URIs in .NET 4 Years ago I was working in libraries when the Web 2.0 revolution began.  One of the things that caught my attention about early start-ups using the AJAX/REST/Web 2.0 model was how nice the URIs were for their applications.  Those were my first impressions of REST; pretty URIs.  Turns out there is a little more to it than that. REST is an architectural style that focuses on resources and structured ways to access those resources via the web.  REST evolved as an “anti-SOAP” movement, driven by developers who did not want to deal with all the complexity SOAP introduces (which is al lot when you don’t have frameworks hiding it all).  One of the biggest benefits to REST is that browsers can talk to rest services directly because REST works using URIs, QueryStrings, Cookies, SSL, and all those HTTP verbs that we don’t have to think about anymore. If you are familiar with ASP.NET MVC then you have been exposed to rest at some level.  MVC is relies heavily on routing to generate consistent and clean URIs.  REST for WCF gives you the same type of feel for your services.  Let’s dive in. WCF REST in .NET 3.5 SP1 and .NET 4 This post will cover WCF REST in .NET 4 which drew heavily from the REST Starter Kit and community feedback.  There is basic REST support in .NET 3.5 SP1 and you can also grab the REST Starter Kit to enable some of the features you’ll find in .NET 4. This post will cover REST in .NET 4 and Visual Studio 2010. Getting Started To get started we’ll create a basic WCF Rest Service Application using the new on-line templates option in VS 2010: When you first install a template you are prompted with this dialog: Dude Where’s my .Svc File? The WCF REST template shows us the new way we can simply build services.  Before we talk about what’s there, let’s look at what is not there: The .Svc File An Interface Contract Dozens of lines of configuration that you have to change to make your service work REST in .NET 4 is greatly simplified and leverages the Web Routing capabilities used in ASP.NET MVC and other parts of the web frameworks.  With REST in .NET 4 you use a global.asax to set the route to your service using the new ServiceRoute class.  From there, the WCF runtime handles dispatching service calls to the methods based on the Uri Templates. global.asax using System; using System.ServiceModel.Activation; using System.Web; using System.Web.Routing; namespace Blog.WcfRest.TimeService {     public class Global : HttpApplication     {         void Application_Start(object sender, EventArgs e)         {             RegisterRoutes();         }         private static void RegisterRoutes()         {             RouteTable.Routes.Add(new ServiceRoute("TimeService",                 new WebServiceHostFactory(), typeof(TimeService)));         }     } } The web.config contains some new structures to support a configuration free deployment.  Note that this is the default config generated with the template.  I did not make any changes to web.config. 
web.config <?xml version="1.0"?> <configuration>   <system.web>     <compilation debug="true" targetFramework="4.0" />   </system.web>   <system.webServer>     <modules runAllManagedModulesForAllRequests="true">       <add name="UrlRoutingModule" type="System.Web.Routing.UrlRoutingModule,            System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />     </modules>   </system.webServer>   <system.serviceModel>     <serviceHostingEnvironment aspNetCompatibilityEnabled="true"/>     <standardEndpoints>       <webHttpEndpoint>         <!--             Configure the WCF REST service base address via the global.asax.cs file and the default endpoint             via the attributes on the <standardEndpoint> element below         -->         <standardEndpoint name="" helpEnabled="true" automaticFormatSelectionEnabled="true"/>       </webHttpEndpoint>     </standardEndpoints>   </system.serviceModel> </configuration> Building the Time Service We’ll create a simple “TimeService” that will return the current time.  Let’s start with the following code: using System; using System.ServiceModel; using System.ServiceModel.Activation; using System.ServiceModel.Web; namespace Blog.WcfRest.TimeService {     [ServiceContract]     [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]     [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]     public class TimeService     {         [WebGet(UriTemplate = "CurrentTime")]         public string CurrentTime()         {             return DateTime.Now.ToString();         }     } } The endpoint for this service will be http://[machinename]:[port]/TimeService.  To get the current time http://[machinename]:[port]/TimeService/CurrentTime will do the trick. The Results Are In Remember That Route In global.asax? Turns out it is pretty important.  When you set the route name, that defines the resource name starting after the host portion of the Uri. Help Pages in WCF 4 Another feature that came from the starter kit are the help pages.  To access the help pages simply append Help to the end of the service’s base Uri. Dropping the Soap Having dabbled with REST in the past and after using Soap for the last few years, the WCF 4 REST support is certainly refreshing.  I’m currently working on some REST implementations in .NET 3.5 and VS 2008 and am looking forward to working on REST in .NET 4 and VS 2010.
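    To go one small step beyond the article's CurrentTime example, a UriTemplate can also carry parameters that bind to string arguments by name. The extra operation below is illustrative only (it is not part of the original article) and assumes the same ServiceRoute registered in global.asax above, so it would answer at /TimeService/CurrentTime/{format}:

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Activation;
    using System.ServiceModel.Web;

    [ServiceContract]
    [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
    [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
    public class TimeService
    {
        [WebGet(UriTemplate = "CurrentTime")]
        public string CurrentTime()
        {
            return DateTime.Now.ToString();
        }

        // Illustrative addition: GET /TimeService/CurrentTime/yyyy-MM-dd
        // Path segments in the UriTemplate are bound to string parameters by name.
        [WebGet(UriTemplate = "CurrentTime/{format}")]
        public string CurrentTimeFormatted(string format)
        {
            return DateTime.Now.ToString(format);
        }
    }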

    Read the article

  • Professional WordPress Business Themes

    - by Matt
    Every now and then JustSkins.com receives quote requests for WordPress design for business websites. Most companies now keep up to date with a blog on their corporate website, that showcases their day to day activities & progresses.  Getting such professional wordpress driven website designed from the scratch costs you a lot. If you have decided to make WordPress the CMS for your business website, there are some Professional WordPress themes you can take a look at. We have created this list to help you save some time to do all the trying and the testing. Optimize by WooThemes Last year one of the most popular Business theme by WooThemes was the Coffee Break theme, Optimize is further adaptation of the same. It is simple, sleek design with great functionality. The customizable front page lets you showcase your work or product etc. Demo | Price: $70, Developer Price: $150 | DOWNLOAD WooThemes is also offering their whole Business theme pack for a very very reasonable fee, If you like multiple designs from them you can get this big deal for only $125 Onyx , Impacto by Simple Themes Simple Themes has been making very crisp & beautiful WordPress Themes & are also very reasonably priced. If their themes solve your purpose $39 membership for 3 months is a good deal.  If you are looking to create quick website, landing page or micro site their templates are best. Demo | Price: $39 for 3 Months Membership Rejuvenate by Templatic One of the most beautiful Premium WordPress Theme, Available in 4 elegant color schemes. This theme can be used for your Beauty, Spa and Studio Business. Demo | Price: $65  | DOWNLOAD Templatic has created great professional business templates, such as Gourmet, Real Estate, Job Board, Automobile & lots More. You can also get a Best Value Offer in $299 for all of Templatic Themes. TheProfessional by ElegantThemes Elegant Themes is known to provide very beautiful & straightforward designs. The professional wordpress theme is a simple, crisp & concise Theme you can use to create a business website. The 3 short blurbs on the homepage are simple, which can be used to point them to your major offerings and the prominent slider indicates a clear call to action. There are 52 themes to choose from & Elegant Themes is giving a great offer at such a small yearly fee. Demo | Price: $39 Yearly Membership  | DOWNLOAD Elegant Themes has a cluster of 52 magnificent themes, and all you have to do is pay $39 to win access to all of them. Join today! Some of the Professional designs that I like for a business website are SimplePress and Corporation. Extatic by Chimera Themes The theme includes plenty of great features including custom feature tour pages, portfolio sections, static feature areas, pricing table page, 20+ shortcodes, multiple page/post options, unlimited custom sidebars which can be assigned to posts/pages, advanced theme style editor and options page and much more. Its a must buy Demo | Price: $37 | DOWNLOAD Corporate by Clover Themes Simple Theme for a small business. Corporate is an clean, powerful and feature-rich corporate theme with dynamic and energy design. Demo | Price: $69.95 | DOWNLOAD Bizco by Themify Bizco is a very professional template for wordpress targeted at corporate and product based businesses. This theme is simple yet highly functional and is suitable for showcasing features of your service or product. With the custom page template you can change the display of your pages and posts easily with our visual custom panel. 
Demo | Price: $70  |DOWNLOAD Devision by Themetrust Devision is a small business wordpress theme that can be used to make a business website within a few minutes. It makes it very easy to showcase and highlight your services or product on the homepage. Demo | Price: Euro 39 | DOWNLOAD BizPress by WPZoom A professional business WordPress theme from WPZoom suitable for companies, organizations, product showcases or other business websites. The theme comes with 4 colour options, featured products / services slider on the homepage, drop down menus, theme options page etc. Demo | Price: $ 69 | DOWNLOAD Clean Classy Corporate by ThemeFuse A very impressive WordPress business theme, that can be used in multiple ways. It is suitable for many kinds, like web products, services, hosting etc etc. Clean Classy Corporate WordPress Theme has a clean crisp look and is professional in appeal. Demo | Price: $49  | DOWNLOAD Insdustry by ThemeJam A powerful Business WordPress Template along with lots of options, colors, and customizable features. This is one for almost any kind of blogger, corporate, or organization. Lots of features, gives it the kind of scalability you might need to create any kind of website. Demo | Price: $ 59 | DOWNLOAD AppPress by ChimeraThemes This professional business WordPress theme includes 5 different colour schemes, advanced theme options page, multiple homepage sliders, custom widgets and page templates. The theme also includes a range of other unique features such as custom title, live style editor to modify colours, font styles, sizes etc, and 20+ shortcodes for creating pricing tables, content columns, boxes, buttons and others. Demo | Price: $ 37 | DOWNLOAD Why WordPress Professional Template? You can modify them, these usually come with a lot of fancy features that enable you to create the website as per your usability & choice. In some cases the  Premium WordPress business themes can be accessed through a subscription service. Premium Vs Free WordPress Themes There are very good Free WordPress themes out there that you can use to modify and code further or create what you want, but this possible when you are technically able. On the contrary Premium WordPress business themes offers great features & can save you a lot of time and money. It varies from business to business, some like to keep their website simple while most want to keep cool nifty features and abilities to scale it differently for various sections, products or categories. All this & more is possible with a Professional Business theme that is suitable/close to your needs.

    Read the article

  • List of blogs - year 2010

    - by hajan
    This is the last day of year 2010 and I would like to add links to all blogs I have posted in this year. First, I would like to mention that I started blogging in ASP.NET Community in May / June 2010 and have really enjoyed writing for my favorite technologies, such as: ASP.NET, jQuery/JavaScript, C#, LINQ, Web Services etc. I also had great feedback either through comments on my blogs or in Twitter, Facebook, LinkedIn where I met many new experts just as a result of my blog posts. Thanks to the interesting topics I have in my blog, I became DZone MVB. Here is the list of blogs I made in 2010 in my ASP.NET Community Weblog: (newest to oldest) Great library of ASP.NET videos – Pluralsight! NDepend – Code Query Language (CQL) NDepend tool – Why every developer working with Visual Studio.NET must try it! jQuery Templates in ASP.NET - Blogs Series jQuery Templates - XHTML Validation jQuery Templates with ASP.NET MVC jQuery Templates - {Supported Tags} jQuery Templates – tmpl(), template() and tmplItem() Introduction to jQuery Templates ViewBag dynamic in ASP.NET MVC 3 - RC 2 Today I had a presentation on "Deep Dive into jQuery Templates in ASP.NET" jQuery Data Linking in ASP.NET How do you prefer getting bundles of technologies?? Case-insensitive XPath query search on XML Document in ASP.NET jQuery UI Accordion in ASP.NET MVC - feed with data from database (Part 3) jQuery UI Accordion in ASP.NET WebForms - feed with data from database (Part 2) jQuery UI Accordion in ASP.NET – Client side implementation (Part 1) Using Images embedded in Project’s Assembly Macedonian Code Camp 2010 event has finished successfully Tips and Tricks: Deferred execution using LINQ Using System.Diagnostics.Stopwatch class to measure the elapsed time Speaking at Macedonian Code Camp 2010 URL Routing in ASP.NET 4.0 Web Forms Conflicts between ASP.NET AJAX UpdatePanels & jQuery functions Integration of jQuery DatePicker in ASP.NET Website – Localization (part 3) Why not to use HttpResponse.Close and HttpResponse.End Calculate Business Days using LINQ Get Distinct values of an Array using LINQ Using CodeRun browser-based IDE to create ASP.NET Web Applications Using params keyword – Methods with variable number of parameters Working with Code Snippets in VS.NET  Working with System.IO.Path static class Calculating GridView total using JavaScript/JQuery The new SortedSet<T> Collection in .NET 4.0 JavaScriptSerializer – Dictionary to JSON Serialization and Deserialization Integration of jQuery DatePicker in ASP.NET Website – JS Validation Script (part 2) Integration of jQuery DatePicker in ASP.NET Website (part 1) Transferring large data when using Web Services Forums dedicated to WebMatrix Microsoft WebMatrix – Short overview & installation Working with embedded resources in Project's assembly Debugging ASP.NET Web Services Save and Display YouTube Videos on ASP.NET Website Hello ASP.NET World... In addition, I would like to mention that I have big list of blog posts in CodeASP.NET Community (total 60 blogs) and the local MKDOT.NET Community (total 61 blogs). You may find most of my weblogs.asp.net/hajan blogs posted there too, but there you can find many others. In my blog on MKDOT.NET Community you can find most of my ASP.NET Weblog posts translated in Macedonian language, some of them posted in English and some other blogs that were posted only there. By reading my blogs, I hope you have learnt something new or at least have confirmed your knowledge. 
And also, if you haven't, I encourage you to start blogging and share your Microsoft Tech. thoughts with all of us... Sharing and spreading knowledge is definitely one of the noblest things which we can do in our life. "Give a man a fish and he will eat for a day. Teach a man to fish and he will eat for a lifetime" HAPPY NEW 2011 YEAR!!! Best Regards, Hajan

    Read the article

  • What&rsquo;s New in ASP.NET 4.0 Part Two: WebForms and Visual Studio Enhancements

    - by Rick Strahl
    In the last installment I talked about the core changes in the ASP.NET runtime that I’ve been taking advantage of. In this column, I’ll cover the changes to the Web Forms engine and some of the cool improvements in Visual Studio that make Web and general development easier. WebForms The WebForms engine is the area that has received most significant changes in ASP.NET 4.0. Probably the most widely anticipated features are related to managing page client ids and of ViewState on WebForm pages. Take Control of Your ClientIDs Unique ClientID generation in ASP.NET has been one of the most complained about “features” in ASP.NET. Although there’s a very good technical reason for these unique generated ids - they guarantee unique ids for each and every server control on a page - these unique and generated ids often get in the way of client-side JavaScript development and CSS styling as it’s often inconvenient and fragile to work with the long, generated ClientIDs. In ASP.NET 4.0 you can now specify an explicit client id mode on each control or each naming container parent control to control how client ids are generated. By default, ASP.NET generates mangled client ids for any control contained in a naming container (like a Master Page, or a User Control for example). The key to ClientID management in ASP.NET 4.0 are the new ClientIDMode and ClientIDRowSuffix properties. ClientIDMode supports four different ClientID generation settings shown below. For the following examples, imagine that you have a Textbox control named txtName inside of a master page control container on a WebForms page. <%@Page Language="C#"      MasterPageFile="~/Site.Master"     CodeBehind="WebForm2.aspx.cs"     Inherits="WebApplication1.WebForm2"  %> <asp:Content ID="content"  ContentPlaceHolderID="content"               runat="server"               ClientIDMode="Static" >       <asp:TextBox runat="server" ID="txtName" /> </asp:Content> The four available ClientIDMode values are: AutoID This is the existing behavior in ASP.NET 1.x-3.x where full naming container munging takes place. <input name="ctl00$content$txtName" type="text"        id="ctl00_content_txtName" /> This should be familiar to any ASP.NET developer and results in fairly unpredictable client ids that can easily change if the containership hierarchy changes. For example, removing the master page changes the name in this case, so if you were to move a block of script code that works against the control to a non-Master page, the script code immediately breaks. Static This option is the most deterministic setting that forces the control’s ClientID to use its ID value directly. No naming container naming at all is applied and you end up with clean client ids: <input name="ctl00$content$txtName"         type="text" id="txtName" /> Note that the name property which is used for postback variables to the server still is munged, but the ClientID property is displayed simply as the ID value that you have assigned to the control. This option is what most of us want to use, but you have to be clear on that because it can potentially cause conflicts with other controls on the page. If there are several instances of the same naming container (several instances of the same user control for example) there can easily be a client id naming conflict. 
Note that if you assign Static to a data-bound control, like a list child control in templates, you do not get unique ids either, so for list controls where you rely on unique id for child controls, you’ll probably want to use Predictable rather than Static. I’ll write more on this a little later when I discuss ClientIDRowSuffix. Predictable The previous two values are pretty self-explanatory. Predictable however, requires some explanation. To me at least it’s not in the least bit predictable. MSDN defines this value as follows: This algorithm is used for controls that are in data-bound controls. The ClientID value is generated by concatenating the ClientID value of the parent naming container with the ID value of the control. If the control is a data-bound control that generates multiple rows, the value of the data field specified in the ClientIDRowSuffix property is added at the end. For the GridView control, multiple data fields can be specified. If the ClientIDRowSuffix property is blank, a sequential number is added at the end instead of a data-field value. Each segment is separated by an underscore character (_). The key that makes this value a bit confusing is that it relies on the parent NamingContainer’s ClientID to build its own ClientID value. This effectively means that the value is not predictable at all but rather very tightly coupled to the parent naming container’s ClientIDMode setting. For my simple textbox example, if the ClientIDMode property of the parent naming container (Page in this case) is set to “Predictable” you’ll get this: <input name="ctl00$content$txtName" type="text"         id="content_txtName" /> which gives an id that based on walking up to the currently active naming container (the MasterPage content container) and starting the id formatting from there downward. Think of this as a semi unique name that’s guaranteed unique only for the naming container. If, on the other hand, the Page is set to “AutoID” you get the following with Predictable on txtName: <input name="ctl00$content$txtName" type="text"         id="ctl00_content_txtName" /> The latter is effectively the same as if you specified AutoID because it inherits the AutoID naming from the Page and Content Master Page control of the page. But again - predictable behavior always depends on the parent naming container and how it generates its id, so the id may not always be exactly the same as the AutoID generated value because somewhere in the NamingContainer chain the ClientIDMode setting may be set to a different value. For example, if you had another naming container in the middle that was set to Static you’d end up effectively with an id that starts with the NamingContainers id rather than the whole ctl000_content munging. The most common use for Predictable is likely to be for data-bound controls, which results in each data bound item getting a unique ClientID. Unfortunately, even here the behavior can be very unpredictable depending on which data-bound control you use - I found significant differences in how template controls in a GridView behave from those that are used in a ListView control. For example, GridView creates clean child ClientIDs, while ListView still has a naming container in the ClientID, presumably because of the template container on which you can’t set ClientIDMode. Predictable is useful, but only if all naming containers down the chain use this setting. Otherwise you’re right back to the munged ids that are pretty unpredictable. 
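As a quick editorial aside (this snippet is not from the original article): ClientIDMode is just a property on System.Web.UI.Control in .NET 4.0, so the same settings can also be applied from code-behind rather than markup. A minimal sketch, assuming the WebForm2 page and txtName TextBox from the markup shown earlier:

using System;

namespace WebApplication1
{
    public partial class WebForm2 : System.Web.UI.Page
    {
        protected void Page_Init(object sender, EventArgs e)
        {
            // Same effect as ClientIDMode="Static" in the ASPX markup:
            // txtName renders with id="txtName" regardless of naming containers.
            txtName.ClientIDMode = System.Web.UI.ClientIDMode.Static;

            // Controls default to Inherit, so a mode set on the Page (or any
            // naming container) flows down to children that don't override it.
            this.ClientIDMode = System.Web.UI.ClientIDMode.Predictable;

            // ClientID reflects whichever mode ends up applying to the control.
            System.Diagnostics.Debug.WriteLine(txtName.ClientID);
        }
    }
}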
Another property, ClientIDRowSuffix, can be used in combination with ClientIDMode of Predictable to force a suffix onto list client controls. For example: <asp:GridView runat="server" ID="gvItems"              AutoGenerateColumns="false"             ClientIDMode="Static"              ClientIDRowSuffix="Id">     <Columns>     <asp:TemplateField>         <ItemTemplate>             <asp:Label runat="server" id="txtName"                        Text='<%# Eval("Name") %>'                   ClientIDMode="Predictable"/>         </ItemTemplate>     </asp:TemplateField>     <asp:TemplateField>         <ItemTemplate>         <asp:Label runat="server" id="txtId"                     Text='<%# Eval("Id") %>'                     ClientIDMode="Predictable" />         </ItemTemplate>     </asp:TemplateField>     </Columns>  </asp:GridView> generates client Ids inside of a column in the master page described earlier: <td>     <span id="txtName_0">Rick</span> </td> where the value after the underscore is the ClientIDRowSuffix field - in this case “Id” of the item data bound to the control. Note that all of the child controls require ClientIDMode=”Predictable” in order for the ClientIDRowSuffix to be applied, and the parent GridView controls need to be set to Static either explicitly or via Naming Container inheritance to give these simple names. It’s a bummer that ClientIDRowSuffix doesn’t work with Static to produce this automatically. Another real problem is that other controls process the ClientIDMode differently. For example, a ListView control processes the Predictable ClientIDMode differently and produces the following with the Static ListView and Predictable child controls: <span id="ctrl0_txtName_0">Rick</span> I couldn’t even figure out a way using ClientIDMode to get a simple ID that also uses a suffix short of falling back to manually generated ids using <%= %> expressions instead. Given the inconsistencies inside of list controls using <%= %>, ids for the ListView might not be a bad idea anyway. Inherit The final setting is Inherit, which is the default for all controls except Page. This means that controls by default inherit the parent naming container’s ClientIDMode setting. For more detailed information on ClientID behavior and different scenarios you can check out a blog post of mine on this subject: http://www.west-wind.com/weblog/posts/54760.aspx. ClientID Enhancements Summary The ClientIDMode property is a welcome addition to ASP.NET 4.0. To me this is probably the most useful WebForms feature as it allows me to generate clean IDs simply by setting ClientIDMode="Static" on either the page or inside of Web.config (in the Pages section) which applies the setting down to the entire page which is my 95% scenario. For the few cases when it matters - for list controls and inside of multi-use user controls or custom server controls) - I can use Predictable or even AutoID to force controls to unique names. For application-level page development, this is easy to accomplish and provides maximum usability for working with client script code against page controls. ViewStateMode Another area of large criticism for WebForms is ViewState. ViewState is used internally by ASP.NET to persist page-level changes to non-postback properties on controls as pages post back to the server. 
It’s a useful mechanism that works great for the overall mechanics of WebForms, but it can also cause all sorts of overhead for page operation as ViewState can very quickly get out of control and consume huge amounts of bandwidth in your page content. ViewState can also wreak havoc with client-side scripting applications that modify control properties that are tracked by ViewState, which can produce very unpredictable results on a Postback after client-side updates. Over the years in my own development, I’ve often turned off ViewState on pages to reduce overhead. Yes, you lose some functionality, but you can easily implement most of the common functionality in non-ViewState workarounds. Relying less on heavy ViewState controls and sticking with simpler controls or raw HTML constructs avoids getting around ViewState problems. In ASP.NET 3.x and prior, it wasn’t easy to control ViewState - you could turn it on or off and if you turned it off at the page or web.config level, you couldn’t turn it back on for specific controls. In short, it was an all or nothing approach. With ASP.NET 4.0, the new ViewStateMode property gives you more control. It allows you to disable ViewState globally either on the page or web.config level and then turn it back on for specific controls that might need it. ViewStateMode only works when EnableViewState="true" on the page or web.config level (which is the default). You can then use ViewStateMode of Disabled, Enabled or Inherit to control the ViewState settings on the page. If you’re shooting for minimal ViewState usage, the ideal situation is to set ViewStateMode to disabled on the Page or web.config level and only turn it back on particular controls: <%@Page Language="C#"      CodeBehind="WebForm2.aspx.cs"     Inherits="Westwind.WebStore.WebForm2"        ClientIDMode="Static"                ViewStateMode="Disabled"     EnableViewState="true"  %> <!-- this control has viewstate  --> <asp:TextBox runat="server" ID="txtName"  ViewStateMode="Enabled" />       <!-- this control has no viewstate - it inherits  from parent container --> <asp:TextBox runat="server" ID="txtAddress" /> Note that the EnableViewState="true" at the Page level isn’t required since it’s the default, but it’s important that the value is true. ViewStateMode has no effect if EnableViewState="false" at the page level. The main benefit of ViewStateMode is that it allows you to more easily turn off ViewState for most of the page and enable only a few key controls that might need it. For me personally, this is a perfect combination as most of my WebForm apps can get away without any ViewState at all. But some controls - especially third party controls - often don’t work well without ViewState enabled, and now it’s much easier to selectively enable controls rather than the old way, which required you to pretty much turn off ViewState for all controls that you didn’t want ViewState on. Inline HTML Encoding HTML encoding is an important feature to prevent cross-site scripting attacks in data entered by users on your site. In order to make it easier to create HTML encoded content, ASP.NET 4.0 introduces a new Expression syntax using <%: %> to encode string values. 
The encoding expression syntax looks like this: <%: "<script type='text/javascript'>" +     "alert('Really?');</script>" %> which produces properly encoded HTML: &lt;script type=&#39;text/javascript&#39; &gt;alert(&#39;Really?&#39;);&lt;/script&gt; Effectively this is a shortcut to: <%= HttpUtility.HtmlEncode( "<script type='text/javascript'>" + "alert('Really?');</script>") %> Of course the <%: %> syntax can also evaluate expressions just like <%= %> so the more common scenario applies this expression syntax against data your application is displaying. Here’s an example displaying some data model values: <%: Model.Address.Street %> This snippet shows displaying data from your application’s data store or more importantly, from data entered by users. Anything that makes it easier and less verbose to HtmlEncode text is a welcome addition to avoid potential cross-site scripting attacks. Although I listed Inline HTML Encoding here under WebForms, anything that uses the WebForms rendering engine including ASP.NET MVC, benefits from this feature. ScriptManager Enhancements The ASP.NET ScriptManager control in the past has introduced some nice ways to take programmatic and markup control over script loading, but there were a number of shortcomings in this control. The ASP.NET 4.0 ScriptManager has a number of improvements that make it easier to control script loading and addresses a few of the shortcomings that have often kept me from using the control in favor of manual script loading. The first is the AjaxFrameworkMode property which finally lets you suppress loading the ASP.NET AJAX runtime. Disabled doesn’t load any ASP.NET AJAX libraries, but there’s also an Explicit mode that lets you pick and choose the library pieces individually and reduce the footprint of ASP.NET AJAX script included if you are using the library. There’s also a new EnableCdn property that forces any script that has a new WebResource attribute CdnPath property set to a CDN supplied URL. If the script has this Attribute property set to a non-null/empty value and EnableCdn is enabled on the ScriptManager, that script will be served from the specified CdnPath. [assembly: WebResource(    "Westwind.Web.Resources.ww.jquery.js",    "application/x-javascript",    CdnPath =  "http://mysite.com/scripts/ww.jquery.min.js")] Cool, but a little too static for my taste since this value can’t be changed at runtime to point at a debug script as needed, for example. Assembly names for loading scripts from resources can now be simple names rather than fully qualified assembly names, which make it less verbose to reference scripts from assemblies loaded from your bin folder or the assembly reference area in web.config: <asp:ScriptManager runat="server" id="Id"          EnableCdn="true"         AjaxFrameworkMode="disabled">     <Scripts>         <asp:ScriptReference          Name="Westwind.Web.Resources.ww.jquery.js"         Assembly="Westwind.Web" />     </Scripts>        </asp:ScriptManager> The ScriptManager in 4.0 also supports script combining via the CompositeScript tag, which allows you to very easily combine scripts into a single script resource served via ASP.NET. Even nicer: You can specify the URL that the combined script is served with. 
Check out the following script manager markup that combines several static file scripts and a script resource into a single ASP.NET served resource from a static URL (allscripts.js): <asp:ScriptManager runat="server" id="Id"          EnableCdn="true"         AjaxFrameworkMode="disabled">     <CompositeScript          Path="~/scripts/allscripts.js">         <Scripts>             <asp:ScriptReference                    Path="~/scripts/jquery.js" />             <asp:ScriptReference                    Path="~/scripts/ww.jquery.js" />             <asp:ScriptReference            Name="Westwind.Web.Resources.editors.js"                 Assembly="Westwind.Web" />         </Scripts>     </CompositeScript> </asp:ScriptManager> When you render this into HTML, you’ll see a single script reference in the page: <script src="scripts/allscripts.debug.js"          type="text/javascript"></script> All you need to do to make this work is ensure that allscripts.js and allscripts.debug.js exist in the scripts folder of your application - they can be empty but the file has to be there. This is pretty cool, but you want to be real careful that you use unique URLs for each combination of scripts you combine or else browser and server caching will easily screw you up royally. The script manager also allows you to override native ASP.NET AJAX scripts now as any script references defined in the Scripts section of the ScriptManager trump internal references. So if you want custom behavior or you want to fix a possible bug in the core libraries that normally are loaded from resources, you can now do this simply by referencing the script resource name in the Name property and pointing at System.Web for the assembly. Not a common scenario, but when you need it, it can come in real handy. Still, there are a number of shortcomings in this control. For one, the ScriptManager and ClientScript APIs still have no common entry point so control developers are still faced with having to check and support both APIs to load scripts so that controls can work on pages that do or don’t have a ScriptManager on the page. The CdnUrl is static and compiled in, which is very restrictive. And finally, there’s still no control over where scripts get loaded on the page - ScriptManager still injects scripts into the middle of the HTML markup rather than in the header or optionally the footer. This, in turn, means there is little control over script loading order, which can be problematic for control developers. MetaDescription, MetaKeywords Page Properties There are also a number of additional Page properties that correspond to some of the other features discussed in this column: ClientIDMode, ClientTarget and ViewStateMode. Another minor but useful feature is that you can now directly access the MetaDescription and MetaKeywords properties on the Page object to set the corresponding meta tags programmatically. Updating these values programmatically previously required either <%= %> expressions in the page markup or dynamic insertion of literal controls into the page. 
You can now just set these properties programmatically on the Page object in any Control derived class on the page or the Page itself: Page.MetaKeywords = "ASP.NET,4.0,New Features"; Page.MetaDescription = "This article discusses the new features in ASP.NET 4.0"; Note, that there’s no corresponding ASP.NET tag for the HTML Meta element, so the only way to specify these values in markup and access them is via the @Page tag: <%@Page Language="C#"      CodeBehind="WebForm2.aspx.cs"     Inherits="Westwind.WebStore.WebForm2"      ClientIDMode="Static"                MetaDescription="Article that discusses what's                      new in ASP.NET 4.0"     MetaKeywords="ASP.NET,4.0,New Features" %> Nothing earth shattering but quite convenient. Visual Studio 2010 Enhancements for Web Development For Web development there are also a host of editor enhancements in Visual Studio 2010. Some of these are not Web specific but they are useful for Web developers in general. Text Editors Throughout Visual Studio 2010, the text editors have all been updated to a new core engine based on WPF which provides some interesting new features for various code editors including the nice ability to zoom in and out with Ctrl-MouseWheel to quickly change the size of text. There are many more API options to control the editor and although Visual Studio 2010 doesn’t yet use many of these features, we can look forward to enhancements in add-ins and future editor updates from the various language teams that take advantage of the visual richness that WPF provides to editing. On the negative side, I’ve noticed that occasionally the code editor and especially the HTML and JavaScript editors will lose the ability to use various navigation keys like arrows, back and delete keys, which requires closing and reopening the documents at times. This issue seems to be well documented so I suspect this will be addressed soon with a hotfix or within the first service pack. Overall though, the code editors work very well, especially given that they were re-written completely using WPF, which was one of my big worries when I first heard about the complete redesign of the editors. Multi-Targeting Visual Studio now targets all versions of the .NET framework from 2.0 forward. You can use Visual Studio 2010 to work on your ASP.NET 2, 3.0 and 3.5 applications which is a nice way to get your feet wet with the new development environment without having to make changes to existing applications. It’s nice to have one tool to work in for all the different versions. Multi-Monitor Support One cool feature of Visual Studio 2010 is the ability to drag windows out of the Visual Studio environment and out onto the desktop including onto another monitor easily. Since Web development often involves working with a host of designers at the same time - visual designer, HTML markup window, code behind and JavaScript editor - it’s really nice to be able to have a little more screen real estate to work on each of these editors. Microsoft made a welcome change in the environment. IntelliSense Snippets for HTML and JavaScript Editors The HTML and JavaScript editors now finally support IntelliSense scripts to create macro-based template expansions that have been in the core C# and Visual Basic code editors since Visual Studio 2005. Snippets allow you to create short XML-based template definitions that can act as static macros or real templates that can have replaceable values that can be embedded into the expanded text. 
The XML syntax for these snippets is straightforward and it’s pretty easy to create custom snippets manually. You can easily create snippets using XML and store them in your custom snippets folder (C:\Users\rstrahl\Documents\Visual Studio 2010\Code Snippets\Visual Web Developer\My HTML Snippets and My JScript Snippets), but it helps to use one of the third-party tools that exist to simplify the process for you. I use SnippetEditor, by Bill McCarthy, which makes short work of creating snippets interactively (http://snippeteditor.codeplex.com/). Note: You may have to manually add the Visual Studio 2010 user-specific snippet folders to this tool to see existing ones you’ve created. Code snippets are some of the biggest time savers, and HTML editing more than anything deals with lots of repetitive tasks that lend themselves to text expansion. Visual Studio 2010 includes a slew of built-in snippets (that you can also customize!) and you can create your own very easily. If you haven’t done so already, I encourage you to spend a little time examining your coding patterns, find the repetitive code that you write, and convert it into snippets. I’ve been using CodeRush for this for years, but now you can do much of the basic expansion natively for HTML and JavaScript snippets. jQuery Integration Is Now Native jQuery is a popular JavaScript library and Microsoft has recently stated that it will become the primary client-side scripting technology to drive higher level script functionality in various ASP.NET Web projects that Microsoft provides. In Visual Studio 2010, the default full project template includes jQuery as part of a new project including the support files that provide IntelliSense (-vsdoc files). IntelliSense support for jQuery is now also baked into Visual Studio 2010, so unlike Visual Studio 2008 which required a separate download, no further installs are required for a rich IntelliSense experience with jQuery. Summary ASP.NET 4.0 brings many useful improvements to the platform, but thankfully most of the changes are incremental changes that don’t compromise backwards compatibility and they allow developers to ease into the new features one feature at a time. None of the changes in ASP.NET 4.0 or Visual Studio 2010 are monumental or game changers. The bigger features are language and .NET Framework changes that are also optional. This ASP.NET and tools release feels more like fine tuning and getting some long-standing kinks worked out of the platform. It shows that the ASP.NET team is dedicated to paying attention to community feedback and responding with changes to the platform and development environment based on this feedback. If you haven’t gotten your feet wet with ASP.NET 4.0 and Visual Studio 2010, there’s no reason not to give it a shot now - the ASP.NET 4.0 platform is solid and Visual Studio 2010 works very well for a brand new release. Check it out. © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET

    Read the article

  • Testing Mobile Websites with Adobe Shadow

    - by dwahlin
    It’s no surprise that mobile development is all the rage these days. With all of the new mobile devices being released nearly every day the ability for developers to deliver mobile solutions is more important than ever. Nearly every developer or company I’ve talked to recently about mobile development in training classes, at conferences, and on consulting projects says that they need to find a solution to get existing websites into the mobile space. Although there are several different frameworks out there that can be used such as jQuery Mobile, Sencha Touch, jQTouch, and others, how do you test how your site renders on iOS, Android, Blackberry, Windows Phone, and the variety of mobile form factors out there? Although there are different virtual solutions that can be used including Electric Plum for iOS, emulators, browser plugins for resizing the laptop/desktop browser, and more, at some point you need to test on as many physical devices as possible. This can be extremely challenging and quite time consuming though especially when you consider that you have to manually enter URLs into devices and click links on each one to drill-down into sites. Adobe Labs just released a product called Adobe Shadow (thanks to Kurt Sprinzl for letting me know about it) that significantly simplifies testing sites on physical devices, debugging problems you find, and even making live modifications to HTML and CSS content while viewing a site on the device to see how rendering changes. You can view a page in your laptop/desktop browser and have it automatically pushed to all of your devices without actually touching the device (a huge time saver). See a problem with a device? Locate it using the free Chrome extension, pull up inspection tools (based on the Chrome Developer tools) and make live changes through Chrome that appear on the respective device so that it’s easy to identify how problems can be resolved. I’ve been using Adobe Shadow and am very impressed with the amount of time saved and the different features that it offers. In the rest of the post I’ll walk through how to get it installed, get it started, and use it to view and debug pages.   Getting Adobe Shadow Installed The following steps can be used to get Adobe Shadow installed: 1. Download and install Adobe Shadow on your laptop/desktop 2. Install the Adobe Shadow extension for Chrome 3. Install the Adobe Shadow app on all of your devices (you can find it in various app stores) 4. Connect your devices to Wifi. Make sure they’re on the same network that your laptop/desktop machine is on   Getting Adobe Shadow Started Once Adobe Shadow is installed, you’ll need to get it running on your laptop/desktop and on all your mobile devices. The following steps walk through that process: 1. Start the Adobe Shadow application on your laptop/desktop 2. Start the Adobe Shadow app on each of your mobile devices 3. Locate the laptop/desktop name in the list that’s shown on each mobile device: 4. Select the laptop/desktop name and a passcode will be shown: 5. Open the Adobe Shadow Chrome extension on the laptop/desktop and enter the passcode for the given device: Using Adobe Shadow to View and Modify Pages Once Adobe Shadow is up and running on your laptop/desktop and on all of your mobile devices you can navigate to a page in Chrome on the laptop/desktop and it will automatically be pushed out to all connected mobile devices. If you have 5 mobile devices setup they’ll all navigate to the page displayed in Chrome (pretty awesome!). 
This makes it super easy to see how a given page looks on your iPad, Android device, etc. without having to touch the device itself. If you find a problem with a page on a device you can select the device in the Chrome Adobe Shadow extension on your laptop/desktop and select the remote inspector icon (it’s the < > icon): This will pull up the Adobe Shadow remote debugging window which contains the standard Chrome Developer tool tabs such as Elements, Resources, Network, etc. Click on the Elements tab to see the HTML rendered for the target device and then drill into the respective HTML content, CSS styles, etc. As HTML elements are selected in the Adobe Shadow debugging tool they’ll be highlighted on the device itself just like they would if you were debugging a page directly in Chrome with the developer tools. Here’s an example from my Android device that shows how the page looks on the device as I select different HTML elements on the laptop/desktop: Conclusion I’m really impressed with what I’ve seen so far from Adobe Shadow. Controlling pages that display on devices directly from my laptop/desktop is a big time saver and the ability to remotely see changes made through the Chrome Developer Tools (on my laptop/desktop) really pushes the tool over the top. If you’re developing mobile applications it’s definitely something to check out. It’s currently free to download and use. For additional details check out the video below:

    Read the article

  • IPS Facets and Info files

    - by mkupfer
One of the unusual things about IPS is its "facet" feature. For example, if you're a developer using the foo library, you don't install a libfoo-dev package to get the header files. Instead, you install the libfoo package, and your facet.devel setting controls whether you get header files. I was reminded of this recently when I tried to look at some documentation for Emacs Org mode. I was surprised when Emacs's Info browser said it couldn't find the top-level Info directory. I poked around in /usr/share but couldn't find any info files. $ ls -l /usr/share/info ls: cannot access /usr/share/info: No such file or directory Was I missing a package? $ pkg list -a | egrep "info|emacs" editor/gnu-emacs 23.1-0.175.0.0.0.2.537 i-- editor/gnu-emacs/gnu-emacs-gtk 23.1-0.175.0.0.0.2.537 i-- editor/gnu-emacs/gnu-emacs-lisp 23.1-0.175.0.0.0.2.537 --- editor/gnu-emacs/gnu-emacs-no-x11 23.1-0.175.0.0.0.2.537 --- editor/gnu-emacs/gnu-emacs-x11 23.1-0.175.0.0.0.2.537 i-- system/data/terminfo 0.5.11-0.175.0.0.0.2.1 i-- system/data/terminfo/terminfo-core 0.5.11-0.175.0.0.0.2.1 i-- text/texinfo 4.7-0.175.0.0.0.2.537 i-- x11/diagnostic/x11-info-clients 7.6-0.175.0.0.0.0.1215 i-- $ Hmm. I didn't have the gnu-emacs-lisp package. That seemed an unlikely place to stick the Info files, and pkg(1) confirmed that the info files were not there: $ pkg contents -r gnu-emacs-lisp | grep info usr/share/emacs/23.1/lisp/info-look.el.gz usr/share/emacs/23.1/lisp/info-xref.el.gz usr/share/emacs/23.1/lisp/info.el.gz usr/share/emacs/23.1/lisp/informat.el.gz usr/share/emacs/23.1/lisp/org/org-info.el.gz usr/share/emacs/23.1/lisp/org/org-jsinfo.el.gz usr/share/emacs/23.1/lisp/pcvs-info.el.gz usr/share/emacs/23.1/lisp/textmodes/makeinfo.el.gz usr/share/emacs/23.1/lisp/textmodes/texinfo.el.gz $ Well, if I have what look like the right packages but don't have the right files, the next thing to check is the facets. The first check is whether there is a facet associated with the Info files: $ pkg contents -m gnu-emacs | grep usr/share/info dir facet.doc.info=true group=bin mode=0755 owner=root path=usr/share/info file [...] chash=[...] facet.doc.info=true group=bin mode=0444 owner=root path=usr/share/info/mh-e-1 [...] file [...] chash=[...] facet.doc.info=true group=bin mode=0444 owner=root path=usr/share/info/mh-e-2 [...] [...] Yes, they're associated with facet.doc.info. Now let's look at the facet settings on my desktop: $ pkg facet FACETS VALUE facet.locale.en* True facet.locale* False facet.doc.man True facet.doc* False $ Oops. I've got man pages and various English documentation files, but not the Info files. Let's fix that: # pkg change-facet facet.doc.info=True Packages to update: 970 Variants/Facets to change: 1 Create boot environment: No Create backup boot environment: Yes Services to change: 1 DOWNLOAD PKGS FILES XFER (MB) Completed 970/970 181/181 9.2/9.2 PHASE ACTIONS Install Phase 226/226 PHASE ITEMS Image State Update Phase 2/2 PHASE ITEMS Reading Existing Index 8/8 Indexing Packages 970/970 # Now we have the info files: $ ls -F /usr/share/info a2ps.info dir@ flex.info groff-2 regex.info aalib.info dired-x flex.info-1 groff-3 remember ...

    Read the article

  • FireFox 6 Super Slow? Cache Settings Corruption

    - by Rick Strahl
For those of you that follow me on Twitter, you've probably seen some of my tweets regarding major performance problems I've seen with the install of FireFox 6.0. FireFox 6.0 was released a couple of weeks ago and is treated as a 'force feed' update for FireFox 5.0. I'm not sure what the deal is with this braindead versioning that Mozilla is doing with major version releases coming out, what now every other month? Seriously that's retarded especially given the limited number of new features these releases bring, and the upgrade pain for plug-ins that the major version release causes. Anyway, after the FireFox updater bugged me long enough I finally gave in last week and updated to FireFox 6. Immediately after install I noticed terrible performance. Everything was running at a snail's pace with Web pages loading slowly and most content actually slowly 'painting' the page. A typical sign of content downloading slowly. However these are pages that should be mostly cached on my system and even repeated accesses ran just as slow. Just for a reality check I ran the same sites in Chrome (blazing fast) and IE (fast enough :-)) but FireFox - dog on a stick. Why so slow Boss? While I was complaining, lots of people recommended ditching FireFox - use Chrome, yada yada yada. Yeah, Chrome is fast and getting better but I have a number of plug-ins that I use in FF that I can't easily give up. So I suffered and started looking around more closely at what was happening. The first thing I noticed when accessing pages was that I continually saw accesses to the Google CDN downloading jQuery and jQuery UI. UI especially is pretty heavy in size and currently I'm in a location with a fairly slow IP connection where large files are a bit of an issue. However, seeing the CDN urls pop up repeatedly raised a flag with me. That stuff should be caching and it looked like each and every hit was reloading these scripts and various images over and over again. Fired up FireBug and sure enough I saw something like this on a repeated hit to my blog: Those two highlights are jquery and the main CSS file for the site and both are being loaded fully and taking a while to load. However, since this page had been loaded before, these items should be cached and show 304 requests instead of the full HTTP requests returning 200 result codes. In short it looked like FireFox was not caching ANY content at all and constantly reloading all page resources. No wonder things were running dog slow. Once I realized what the problem was I took a look in the about:config settings and lo and behold a bunch of the cache settings were set to not cache: In my case ALL the main cache flags were set to false for some reason that I can't figure out. It appears that after the FireFox 6 update these flags somehow mysteriously changed and performance took a nose dive. Switching the .enable flags back to true and resetting all the cache settings to their defaults reverted performance back to the way it's supposed to be: reasonably fast and snappy as soon as content is cached and accessed again from cache. I try not to muck with the about:config settings much (other than turning off the IPV6 option) but when there are problems access to these features can be really nice. However, I treat this as a last resort so it took me quite some time before I started looking through ALL the settings. This takes a while, not knowing what I was looking for exactly. If Web load performance is slow it's a good idea to check the cache settings.
I have no idea what hosed these settings for me - I certainly didn't explicitly set them in about:config and while in FireFox's Options dialog I didn't see any option that would affect global caching like this, so this remains a mystery to me. Anyway, I hope that this is helpful to some, in case some of you end up running into a similar issue. © Rick Strahl, West Wind Technologies, 2005-2011. Posted in FireFox

    Read the article

  • Html.RenderAction Failed when Validation Failed

    - by Shaun
The RenderAction method was introduced in the MvcFuture assembly when ASP.NET MVC 1.0 was released, and then finally announced along with ASP.NET MVC 2.0. Similar to RenderPartial, RenderAction can display HTML markup defined in a partial view inside any parent view. But RenderAction gives us the ability to populate the data from an action which may be different from the action that populates the main view. For example, in Home/Index.aspx we can invoke Html.RenderPartial("MyPartialView"), but the data of MyPartialView must be populated by the Index action of the Home controller. If we need MyPartialView to be shown in Product/Create.aspx we have to copy (or invoke) the relevant code from the Index action in the Home controller to the Create action in the Product controller, which is painful. But if we are using Html.RenderAction we can tell ASP.NET MVC which action/controller the data should be populated from. In that way, in the Home/Index.aspx and Product/Create.aspx views we just need to call Html.RenderAction("CreateMyPartialView", "MyPartialView") and it will invoke the CreateMyPartialView action in the MyPartialView controller regardless of which main view it is called from. But in my current project we found a bug when I implemented a RenderAction method in the master page to show something that needs to connect to the backend data center, and the validation logic failed on some pages. I created a sample application below.   Demo application I created an ASP.NET MVC 2 application and here I need to display the current date and time on the master page. I created an action in the Home controller named TimeSlot and stored the current date into ViewData. This method was marked as HttpGet as it just retrieves some data instead of changing anything. 1: [HttpGet] 2: public ActionResult TimeSlot() 3: { 4: ViewData["timeslot"] = DateTime.Now; 5: return View("TimeSlot"); 6: } Next, I created a partial view under the Shared folder to display the date and time string. 1: <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<dynamic>" %> 2:  3: <span>Now: <%: ViewData["timeslot"].ToString() %></span> Then at the master page I used Html.RenderAction to display it in front of the logon link. 1: <div id="logindisplay"> 2: <% Html.RenderAction("TimeSlot", "Home"); %> 3:  4: <% Html.RenderPartial("LogOnUserControl"); %> 5: </div> It's fairly simple and works well when I navigate to any page. But when I moved to the logon page and clicked the LogOn button without entering anything in the username and password fields, the validation failed and my website crashed with the beautiful yellow page. (I really like its color style and fonts…)   How ASP.NET MVC executes Html.RenderAction In this example all other pages rendered successfully, which means ASP.NET MVC found the TimeSlot action under the Home controller - except in this situation. The only difference is that when I clicked the LogOn button the browser sent an HttpPost request to the server. Is that the reason for this bug? I created another action in the Home controller with the same action name but for HttpPost. 1: [HttpPost] 2: [ActionName("TimeSlot")] 3: public ActionResult TimeSlot(object dummy) 4: { 5: return TimeSlot(); 6: } Or, I can use the AcceptVerbsAttribute on the TimeSlot action to let it allow both HttpGet and HttpPost. 1: [AcceptVerbs("GET", "POST")] 2: public ActionResult TimeSlot() 3: { 4: ViewData["timeslot"] = DateTime.Now; 5: return View("TimeSlot"); 6: } I then repeated what I did before, and this time it worked well.
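One further hardening step is worth mentioning as an editorial note (it is not part of Shaun's original post): ASP.NET MVC 2 also ships the ChildActionOnlyAttribute, which can be combined with AcceptVerbs so that the child action keeps resolving for both GET and POST parent requests, but can no longer be requested directly as a full page via /Home/TimeSlot. A sketch of how that might look:

// Assumes the same Home controller and TimeSlot view used in the post above.
// [ChildActionOnly] is standard MVC 2 API; combining it with [AcceptVerbs] like
// this is a suggested variation, not something the original article shows.
[ChildActionOnly]
[AcceptVerbs("GET", "POST")]
public ActionResult TimeSlot()
{
    ViewData["timeslot"] = DateTime.Now;
    return View("TimeSlot");
}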
Why do we need an action for HttpPost here when it is just retrieving data? It is because of how ASP.NET MVC executes the RenderAction method. In the source code of ASP.NET MVC we can see that when performing the RenderAction, ASP.NET MVC creates a RequestContext instance from the current RequestContext and creates a ChildActionMvcHandler instance, which inherits from the MvcHandler class. Then ASP.NET MVC processes the handler through the HttpContext.Server.Execute method. That means it performs the action as a stand-alone request and flushes the result into the TextWriter which is being used to render the current page. Since the request was an HttpPost when I clicked the LogOn button, when ASP.NET MVC processed the ChildActionMvcHandler it looked for an action that allows the current request method, which is HttpPost. Our HttpGet-only TimeSlot method would therefore not be matched.   Summary In this post I described a bug in my current project regarding the new Html.RenderAction method provided with ASP.NET MVC 2 when processing an HttpPost request. In the ASP.NET MVC world, the underlying HTTP information becomes more important than in the ASP.NET WebForms world. We need to pay more attention to which kind of request is currently being created and how ASP.NET MVC processes it.   Hope this helps, Shaun   All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • The Case of the Extra Page: Rendering Reporting Services as PDF

    - by smisner
I had to troubleshoot a problem with a mysterious extra page appearing in a PDF this week. My first thought was that it was likely caused by one of the most common problems that people encounter when developing reports that eventually get rendered as PDF: getting blank pages inserted into the PDF document. The cause of the blank pages is usually related to sizing. You can learn more at Understanding Pagination in Reporting Services in Books Online. When designing a report, you have to be really careful with the layout of items in the body. As you move items around, the body will expand to accommodate the space you're using and you might eventually tighten everything back up again, but the body doesn't automatically collapse. One of my favorite things to do in Reporting Services 2005 - which I dubbed the "vacu-pack" method - was to just erase the size property of the Body and let it auto-calculate the new size, squeezing out all the extra space. Alas, that method no longer works beginning with Reporting Services 2008. Even when you make sure the body size is as small as possible (with no unnecessary extra space along the top, bottom, left, or right side of the body), it's important to calculate the body size plus header plus footer plus the margins and ensure that the calculated height and width do not exceed the report's height and width (shown as the page in the illustration above). This won't matter if users always render reports online, but they'll get extra pages in a PDF document if the report's height and width are smaller than the calculated space. Beginning the Investigation In the situation that I was troubleshooting, I checked the properties: Item Property Value Body Height 6.25in   Width 10.5in Page Header Height 1in Page Footer Height 0.25in Report Left Margin 0.1in   Right Margin 0.1in   Top Margin 0.05in   Bottom Margin 0.05in   Page Size - Height 8.5in   Page Size - Width 11in So I calculated the total width using Body Width + Left Margin + Right Margin and came up with a value of 10.7 inches. And then I calculated the total height using Body Height + Page Header Height + Page Footer Height + Top Margin + Bottom Margin and got 7.6 inches. Well, page sizing couldn't be the reason for the extra page in my report because 10.7 inches is smaller than the report's width of 11 inches and 7.6 inches is smaller than the report's height of 8.5 inches. I had to look elsewhere to find the culprit. Conducting the Third Degree My next thought was to focus on the rendering size of the items in the report. I've adapted my problem to use the Adventure Works database. At the top of the report are two charts, and then below each chart is a rectangle that contains a table. In the real-life scenario, there were some graphics present as a background for the tables which fit within the rectangles that were about 3 inches high, so the visual space of the rectangles matched the visual space of the charts - also about 3 inches high. But there was also a huge amount of white space at the bottom of the page, and as I mentioned at the beginning of this post, a second page which was blank except for the footer that appeared at the bottom. Placing a textbox beneath the rectangles to see if they would appear on the first page resulted in the textbox's appearance on the second page. For some reason, the rectangles wanted a buffer zone beneath them. What's going on? Taking the Suspect into Custody My next step was to see what was really going on with the rectangle.
The graphic appeared to be correctly sized, but the behavior in the report indicated the rectangle was growing. So I added a border to the rectangle to see what it was doing. When I added borders, I could see that the size of each rectangle was growing to accommodate the table it contains. The rectangle on the right is slightly larger than the one on the left because the table on the right contains an extra row. The rectangle is trying to preserve the whitespace that appears in the layout, as shown below. Closing the Case Now that I knew what the problem was, what could I do about it? Because of the graphic in the rectangle (not shown), I couldn't eliminate the use of the rectangles and just show the tables. But fortunately, there is a report property that comes to the rescue: ConsumeContainerWhitespace (accessible only in the Properties window). I set the value of this property to True. Problem solved. Now the rectangles remain fixed at the configured size and don't grow vertically to preserve the whitespace. Case closed.
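As a small editorial illustration of the sizing rule described at the start of this article (this sketch is not part of the original post; the numbers come from the property table above), the body-plus-margins check boils down to a couple of additions and comparisons:

// Illustrative only: values are taken from the troubleshooting table above (inches).
static class PageSizeCheck
{
    public static bool FitsOnPage()
    {
        double totalWidth  = 10.5 + 0.1 + 0.1;                  // body + left/right margins = 10.7
        double totalHeight = 6.25 + 1.0 + 0.25 + 0.05 + 0.05;   // body + header + footer + top/bottom margins = 7.6

        // The page is 11in x 8.5in, so both totals fit and page sizing
        // is ruled out as the cause of the extra page in this case.
        return totalWidth <= 11.0 && totalHeight <= 8.5;
    }
}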

    Read the article

  • ORACLE RIGHTNOW DYNAMIC AGENT DESKTOP CLOUD SERVICE - Putting the Dynamite into Dynamic Agent Desktop

    - by Andreea Vaduva
There’s a mountain of evidence to prove that a great contact centre experience results in happy, profitable and loyal customers. The very best Contact Centres are those with high first contact resolution, customer satisfaction and agent productivity. But how many companies really believe they are the best? And how many believe that they can be? We know that with the right tools, companies can aspire to greatness – and achieve it. Core to this is ensuring their agents have the best tools that give them the right information at the right time, so they can focus on the customer and provide a personalised, professional and efficient service. Today there are multiple channels through which customers can communicate with you - phone, web, chat and social, to name a few - but regardless of how they communicate, customers expect a seamless, quality experience. Most contact centre agents need to switch between lots of different systems to locate the right information. This hampers their productivity, frustrates both the agent and the customer and increases call handling times. With this in mind, Oracle RightNow has designed and refined a suite of add-ins to optimize the Agent Desktop. Each is designed to simplify and adapt the agent experience for any given situation and unify the customer experience across your media channels. Let’s take a brief look at some of the most useful tools available and see how they make a difference. Contextual Workspaces: The screen where agents do their job. Agents don’t want to be slowed down by busy screens, scrolling through endless tabs or links to find what they’re looking for. They want quick, accurate and easy. Contextual Workspaces are fully configurable and, through workspace rules, apply if, then, else logic to display only the information the agent needs for the issue at hand. Assigned at the Profile level, different levels of agent, from a novice to the most experienced, get a screen that is relevant to their role and responsibilities and ensures their job is done quickly and efficiently the first time round. Agent Scripting: Sometimes, agents need to deliver difficult or sensitive messages while maximising the opportunity to cross-sell and up-sell. After all, contact centres are now increasingly viewed as revenue generators. Containing sophisticated branching logic, scripting helps agents to capture the right level of information and guides the agent step by step, ensuring no mistakes, inconsistencies or missed opportunities. Guided Assistance: This is typically used to solve common troubleshooting issues, displaying a series of question and answer sets in a decision-tree structure. This means agents avoid having to bookmark favourites or rely on written notes. Agents find particular value in these guides - to quickly craft chat and email responses. What’s more, by publishing guides in answers on support pages, customers can resolve issues themselves without needing to contact your agents. And because it can also accelerate agent ramp-up time, it ensures that even novice agents can solve customer problems like an expert. Desktop Workflow: Take a step back and look at the full customer interaction of your agents. It probably spans multiple systems and multiple tasks. With Desktop Workflow you can design workflows that span the full customer interaction from start to finish. As sequences of decisions and actions, workflows are unique in that they can create or modify different records and provide automation behind the scenes.
This means your agents can save time and provide better quality of service by having the tools they need and the relevant information as required. And doing this boosts satisfaction among your customers, your agents and you – so win, win, win! I have highlighted above some of the tools which can be used to optimise the desktop; however, this is by no means an exhaustive list. In approaching your design, it’s important to understand why and how your customers contact you in the first place. Once you have this list of “whys” and “hows”, you can design effective policies and procedures to handle each category of problem, and then implement the right agent desktop user interface to support them. This will avoid duplication and wasted effort. Five Top Tips to take away: Start by working out “why” and “how” customers are contacting you. Implement a clean and relevant agent desktop to support your agents. If your workspaces are getting complicated, consider using Desktop Workflow to streamline the interaction. Enhance your Knowledgebase with Guides. Agents can access them proactively, and they can be published on your web pages for customers to help themselves. Script any complex, critical or sensitive interactions to ensure consistency and accuracy. Desktop optimization is an ongoing process, so continue to monitor and incorporate feedback from your agents and your customers to keep your Contact Centre successful.   Want to learn more? Having attended the 3-day Oracle RightNow Customer Service Administration class, your next step is to attend the 2-day Oracle RightNow Customer Portal Design and Dynamic Agent Desktop Administration class. Here you’ll learn not only how to leverage the Agent Desktop tools but also how to optimise your self-service pages to enhance your customers’ web experience.   Useful resources: Review the Best Practice Guide Review the tune-up guide   About the Author: Angela Chandler joined Oracle University as a Senior Instructor through the RightNow Customer Experience Acquisition. Her other areas of expertise include Business Intelligence and Knowledge Management. She currently delivers the following Oracle RightNow courses in the classroom and as a Live Virtual Class: RightNow Customer Service Administration (3 days) RightNow Customer Portal Design and Dynamic Agent Desktop Administration (2 days) RightNow Analytics (2 days) RightNow Chat Cloud Service Administration (2 days)

    Read the article

  • ASP.NET Web Forms Extensibility: Handler Factories

    - by Ricardo Peres
A handler factory is the class that implements IHttpHandlerFactory and is responsible for instantiating a handler (IHttpHandler) that will process the current request. This is true for all kinds of web requests, whether they are for ASPX pages, ASMX/SVC web services, ASHX/AXD handlers, or any other kind of file. Handler factories are also used to restrict access to certain file types, such as Config, Csproj, etc. Handler factories are registered in the global Web.config file, normally located at %WINDIR%\Microsoft.NET\Framework<x64>\vXXXX\Config, for a given path and request type (GET, POST, HEAD, etc.). This goes in the <httpHandlers> section. You would create a custom handler factory for a number of reasons; let me list just two: A centralized place for using dependency injection; Also a centralized place for invoking custom methods or performing some kind of validation on all pages. Let’s see an example using Unity for injecting dependencies into a page; suppose we have this on Global.asax.cs: 1: public class Global : HttpApplication 2: { 3: internal static readonly IUnityContainer Unity = new UnityContainer(); 4: 5: void Application_Start(Object sender, EventArgs e) 6: { 7: Unity.RegisterType<IFunctionality, ConcreteFunctionality>(); 8: } 9: } We instantiate Unity and register a concrete implementation for an interface; this could/should probably go in the Web.config file. Forget about its actual definition, it’s not important. Then, we create a custom handler factory: 1: public class UnityPageHandlerFactory : PageHandlerFactory 2: { 3: public override IHttpHandler GetHandler(HttpContext context, String requestType, String virtualPath, String path) 4: { 5: IHttpHandler handler = base.GetHandler(context, requestType, virtualPath, path); 6: 7: //one scenario: inject dependencies 8: Global.Unity.BuildUp(handler.GetType(), handler, String.Empty); 9:  10: return (handler); 11: } 12: } It inherits from PageHandlerFactory, which is .NET’s included factory for building regular ASPX pages. We override the GetHandler method and issue a call to the BuildUp method, which will inject required dependencies, if any exist. An example page with dependencies might be: 1: public class SomePage : Page 2: { 3: [Dependency] 4: public IFunctionality Functionality 5: { 6: get; 7: set; 8: } 9: } Notice the DependencyAttribute: it is used by Unity to identify properties that require dependency injection. When BuildUp is called, the Functionality property (or any other properties with the DependencyAttribute attribute) will receive the concrete implementation associated with its type, as registered in Unity. Another example: checking a page for authorization.
Let’s define an interface first: 1: public interface IRestricted 2: { 3: Boolean Check(HttpContext ctx); 4: } An a page implementing that interface: 1: public class RestrictedPage : Page, IRestricted 2: { 3: public Boolean Check(HttpContext ctx) 4: { 5: //check the context and return a value 6: return ...; 7: } 8: } For this, we would use an handler factory such as this: 1: public class RestrictedPageHandlerFactory : PageHandlerFactory 2: { 3: private static readonly IHttpHandler forbidden = new UnauthorizedHandler(); 4:  5: public override IHttpHandler GetHandler(HttpContext context, String requestType, String virtualPath, String path) 6: { 7: IHttpHandler handler = base.GetHandler(context, requestType, virtualPath, path); 8: 9: if (handler is IRestricted) 10: { 11: if ((handler as IRestricted).Check(context) == false) 12: { 13: return (forbidden); 14: } 15: } 16:  17: return (handler); 18: } 19: } 20:  21: public class UnauthorizedHandler : IHttpHandler 22: { 23: #region IHttpHandler Members 24:  25: public Boolean IsReusable 26: { 27: get { return (true); } 28: } 29:  30: public void ProcessRequest(HttpContext context) 31: { 32: context.Response.StatusCode = (Int32) HttpStatusCode.Unauthorized; 33: context.Response.ContentType = "text/plain"; 34: context.Response.Write(context.Response.Status); 35: context.Response.Flush(); 36: context.Response.Close(); 37: context.ApplicationInstance.CompleteRequest(); 38: } 39:  40: #endregion 41: } The UnauthorizedHandler is an example of an IHttpHandler that merely returns an error code to the client, but does not cause redirection to the login page, it is included merely as an example. One thing we must keep in mind is, there can be only one handler factory registered for a given path/request type (verb) tuple. A typical registration would be: 1: <httpHandlers> 2: <remove path="*.aspx" verb="*"/> 3: <add path="*.aspx" verb="*" type="MyNamespace.MyHandlerFactory, MyAssembly"/> 4: </httpHandlers> First we remove the previous registration for ASPX files, and then we register our own. And that’s it. A very useful mechanism which I use lots of times.

    Read the article

< Previous Page | 118 119 120 121 122 123 124 125 126 127 128 129  | Next Page >