Search Results

Search found 28992 results on 1160 pages for 'content pages'.


  • Creating Dynamic Web Content

    The most important part of your website is the quality of your content. Without good content you will not get the search-engine ranking you need to be successful.

    Read the article

  • Using real fonts in HTML 5 & CSS 3 pages

    - by nikolaosk
    This is going to be the fifth post in a series of posts regarding HTML 5. You can find the other posts here, here, here and here. In this post I will provide a hands-on example of how to use real fonts in HTML 5 pages with the help of CSS 3.

    Fonts have caused all sorts of problems for web designers. The real problem until now was that developers were forced to use only a handful of web-safe fonts - fonts that are installed on most users' operating systems. Some designers, when they wanted to make their site stand out, resorted to techniques like using images instead of text. That solution is not very accessible and is definitely less SEO friendly. CSS 3 (through its Fonts module) allows web developers to embed fonts directly in a web page: first we define the font, then we attach it to elements. There are various font formats; some are supported by all modern browsers and some are not. The most common formats are Embedded OpenType (EOT), TrueType (TTF) and OpenType (OTF). I will use the @font-face declaration to define the font used in this page.

    Before you download fonts (in any format) make sure you have understood all the licensing issues. Please note that these real fonts will be downloaded to the client's computer. A great resource on the web (maybe the best) is http://www.typekit.com/. They have an abundance of web fonts for use, but note that they sell those fonts. Another free (the best things in life are free, aren't they?) resource is the http://www.google.com/webfonts website. I visited that site and downloaded the Aladin web font. When you download any font you like, make sure you read the license first. The Aladin web font is released under the Open Font License (OFL).

    Before I go on with the actual demo I will use http://www.caniuse.com to check support for web fonts in the latest versions of modern browsers. Please have a look at the picture below: all the latest versions of modern browsers support this feature. To be absolutely clear, this is not (and could not be) a detailed tutorial on HTML 5. There are other great resources for that. Navigate to the excellent interactive tutorials of W3Schools; another excellent resource is HTML5 Doctor. Two very nice sites that show you which features and specifications are implemented by various browsers and their versions are http://caniuse.com/ and http://html5test.com/. At this time Chrome seems to support most of the HTML 5 specification. Another excellent way to find out whether a browser supports HTML 5 and CSS 3 features is to use the lightweight JavaScript library Modernizr.

    In this hands-on example I will be using Expression Web 4.0. This application is not free; you can use any HTML editor you like, for example Visual Studio 2012 Express edition. You can download it here. I create a simple HTML 5 page. The markup follows and it is very easy to understand:

    <!DOCTYPE html>
    <html lang="en">
      <head>
        <title>HTML 5, CSS3 and JQuery</title>
        <meta http-equiv="Content-Type" content="text/html;charset=utf-8">
        <link rel="stylesheet" type="text/css" href="style.css">
      </head>
      <body>
        <div id="header">
          <h1>Learn cutting edge technologies</h1>
          <p>HTML 5, JQuery, CSS3</p>
        </div>
        <div id="main">
          <h2>HTML 5</h2>
          <p>
            HTML5 is the latest version of HTML and XHTML. The HTML standard defines a single language that can be written in HTML and XML. It attempts to solve issues found in previous iterations of HTML and addresses the needs of Web Applications, an area previously not adequately covered by HTML.
          </p>
        </div>
      </body>
    </html>

    Then I create the style.css file:

    @font-face {
      font-family: Aladin;
      src: url('Aladin-Regular.ttf');
    }
    h1 {
      font-family: Aladin, Georgia, serif;
    }

    As you can see, we want to style the h1 tag in our HTML 5 markup. I just use the @font-face rule, specifying the font-family name and the source of the web font, and then use that name in the font-family property to style the h1 tag. Have a look below to see my page in IE10. Make sure you open this page in all the browsers installed on your machine, and that you have their latest versions. Now we can make our site stand out with web fonts and give it a really unique look and feel. Hope it helps!!!

    Read the article

  • HTTPS on all pages where user is logged on

    - by Tom Gullen
    I know this is considered best practice to prevent cookie hijacking. I would like to adopt this approach, but I ran across a problem on our forum: users post images whose URLs either aren't served over HTTPS or whose host doesn't support HTTPS at all. This throws up a lot of ugly browser warnings. I see I have two options: disable HTTPS for the forum, or force all user-posted content URLs to start with // so the browser selects the right protocol (if the host doesn't support HTTPS, so be it). Do I have any other options? How do other sites deal with this?
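
    A minimal sketch of the second option, assuming the posts are stored and rendered as HTML strings in PHP (the function name is illustrative; a DOM-based rewrite would be more robust than a regex). Hosts that never answer over HTTPS will still trigger warnings, so those images may need to be proxied or dropped separately.

    <?php
    // Rewrite absolute image URLs in user-posted markup to protocol-relative
    // form, so the browser requests them over whichever scheme the page uses.
    function make_img_urls_protocol_relative($html)
    {
        return preg_replace('~(<img[^>]+src=["\'])https?://~i', '$1//', $html);
    }

    echo make_img_urls_protocol_relative('<img src="http://example.com/cat.jpg" alt="cat">');
    // Output: <img src="//example.com/cat.jpg" alt="cat">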

    Read the article

  • Limit JavaScript and CSS files on ASP.NET MVC 2 Master Page based on Model and View content

    - by Zack Peterson
    I want to include certain .js and .css files only on pages that need them. For example, my EditorTemplate DateTime.ascx needs the files anytimec.js and anytimec.css. That template is applied whenever I use either the EditorFor or EditorForModel helper methods in a view for a model with a DateTime type value. I've put this condition into the <head> section of my master page; it checks for a DateTime type property in the ModelMetadata.

    <% if (this.ViewData.ModelMetadata.Properties.Any(p => p.ModelType == typeof(DateTime))) { %>
    <link href="../../Content/anytimec.css" rel="stylesheet" type="text/css" />
    <script src="../../Scripts/anytimec.js" type="text/javascript"></script>
    <% } %>

    This has two problems: it fails if I have nested child models of type DateTime, and it is unnecessarily triggered by views without EditorFor or EditorForModel methods (example: DisplayForModel). How can I improve this technique?

    Read the article

  • Best practice for SEO "special characters" in products pages

    - by rhodesit
    What's the best practice when I need to enter "ö" within the content/title/meta of a page? Should I spell the word without it and just use a "normal" character, or do I put the special character in everywhere? Or do I spell it half the time with and half the time without? What's the best practice for SEO? Google takes user intent into account, which makes things complicated (in my mind). The user will be searching without the "special characters", but because of the whole "user intent" thing I don't know what the best practice for this situation is. Should I use a mix of both spellings? Should I use the special characters in anchor text/headers/title/meta description?

    Read the article

  • Google not showing any pages from my site in the index after three months [on hold]

    - by Alex Coisman
    I have a sitemap and use Google Webmaster Tools, but it has been over 3 months and my site has not been added to the Google index at all. Here's the site: www.famouslefthandedpeople.com As far as I know, I have done everything correctly; however, there must be something I am overlooking that is preventing Google from indexing the site. I do not have a robots.txt file, so allow/disallow isn't the issue. Although the content of the site is sparse, it is original and not duplicated internally or externally, so Panda/Penguin should not be a problem. I have reviewed the answers at Why isn't my website in Google search results? and I don't think it applies here. If it matters, I am using WordPress to create the site. What other factors should I be looking at in order to troubleshoot this?

    Read the article

  • Improving FAQ SEO with multiple pages?

    - by asdfasdf
    I have a client who has over 200 question/answer style content blocks. Neither the questions nor the answers are very long, and most of the questions are almost identical, with only a word or two differentiating them from the rest. Would SEO be helped or hurt if I were to put each Q&A on its own page, with the question as the page title, and so on? Or would that be considered "farming"? If not, what would be the best way (in the SEO world) to present all these Q&As? Thanks for any advice.

    Read the article

  • How to calculate Content-Length for a file download within Kohana PHP?

    - by moritzd
    I'm trying to send a file from within a Kohana model to the browser, but as soon as I add a Content-Length header, the file doesn't start downloading right away. The problem seems to be that Kohana is already outputting buffers; an ob_clean at the beginning of the script doesn't help, and adding ob_get_length() to the Content-Length isn't helping either, since it just returns 0. The getFileSize() function returns the right number: if I run the script outside of Kohana, it works. I read that exit() still calls all destructors and it might be that something is output by Kohana afterwards, but I can't find out what exactly. Hope someone can help me out here. This is the piece of code I'm using:

    public function download()
    {
        header("Expires: ".gmdate("D, d M Y H:i:s", time()+(3600*7))." GMT\n");
        header("Content-Type: ".$this->getFileType()."\n");
        header("Content-Transfer-Encoding: binary\n");
        header("Last-Modified: ".gmdate("D, d M Y H:i:s", $this->getCreateTime())." GMT\n");
        header("Content-Length: ".($this->getFileSize()+ob_get_length())."\n");
        header('Content-Disposition: attachment; filename="'.basename($this->getFileName())."\"\n\n");
        ob_end_flush();
        readfile($this->getFilePath());
        exit();
    }
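
    A minimal sketch of one way around the stray buffered output, reusing the model's own accessors (whether anything still gets emitted after exit() depends on the Kohana version's shutdown handling): discard every open output buffer before the headers go out, and send only the real file size.

    public function download()
    {
        // Drop anything Kohana has buffered so far, at every nesting level,
        // so no stray bytes are counted against (or appended to) the body.
        while (ob_get_level() > 0) {
            ob_end_clean();
        }

        header('Content-Type: '.$this->getFileType());
        header('Content-Disposition: attachment; filename="'.basename($this->getFileName()).'"');
        header('Content-Length: '.$this->getFileSize());

        readfile($this->getFilePath());
        exit();
    }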

    Read the article

  • Set Error-Pages for all Applications in Tomcat

    - by user38511
    I'm trying to set up custom error pages in Tomcat 6, because I don't want the default ones to show up. My error pages are static HTML, no JSP or anything dynamic. I know how to do this through the web.xml in each application, but I'd prefer to set up the error pages only once for the entire server. I tried to add the following fragment to the global web.xml (in conf), but no matter what I add under location, it does not show. <error-page> <error-code>404</error-code> <location>/404.html</location> </error-page> What do I need to do to globally define custom error pages? Thanks!

    Read the article

  • Disabling the Squid Error pages

    - by Nicholas Smith
    I've just started looking at using Squid for a project and can't seem to see an easy way of disabling the Squid error pages (e.g. "Name Error: The domain name does not exist"). We use a custom browser which handles that scenario in our own way, so the Squid error pages are overriding our custom logic. Is it possible to set them to 'off'? I've been through the .conf and I've found where the error pages are stored, but no real options to disable them.

    Read the article

  • How to export a brochure into PDF with each layout as a page?

    - by nhj
    I have created a 3-fold brochure in Mac iWork Pages. If I print the brochure I can fold it in three and everything is fine. But if I export it as a PDF I get two A4-size pages, which confuses the reader about the order of the panels; I would like to export each layout as a separate page. The 'Layout Break' option is disabled and I don't know how to enable it. Any ideas? Thanks.

    Read the article

  • Plone site randomly serving wrong content

    - by Chris Miller
    I have a Plone site that has begun to randomly serve up the wrong content. Any given piece of content suddenly shows something else: sometimes a JPEG loads a stylesheet instead, or a stylesheet loads as a page, or a page as an image. The images move around too - sometimes our site logo shows a bullet, or one of the other site images. Fiddler shows the wrong content in the response, and the Apache logs show the content type of the incorrect file (so if an image loads in place of a stylesheet, Apache shows that). We thought mod_proxy was the source of our grief, but we get the problem hitting Zope directly. I never get the wrong content when using the Medusa Monitor to repeatedly hit the content. I do see ConflictErrors in the instance.log file, and they seem to be correlated to the problem, but not 100%. ZPublisher.Conflict ConflictError at \path\to\object: database conflict error (oid 0x3586, class BTrees._OIBTree.OIBTree, serial this txn started with blah, serial currently committed blah) (X conflicts (0 unresolved) since startup blah) I pulled that off the web, it's not from our logs, but it's the same message. This may be a red herring - it sounds like those messages are normal. We've updated to 3.3.5, same problems. I'm at a loss. I'm wondering if there is a good way to intercept what is being served? Secondly, is there a way to increase the verbosity of the access log to include the content type? I've even seen the problem manifest in ZMI. It happens more often when we're authenticated. Sometimes it can take a thousand reloads to see the problem; other times it happens in different ways every time we reload. I believe we've seen this problem for a couple of years, but it was very intermittent - a page would show the content of a GIF, then it wouldn't happen again for a long time. Now it's a huge problem.

    Read the article

  • Setup Firefox to save .pages as .zip automatically

    - by Mike Dtrick
    What do I want to do? I would like Firefox to save files with the .pages extension as .zip files automatically. Scenario You are browsing through your emails and you notice your friend just sent you an email with a file attached (a .pages in this example). Unfortunately, you have a laptop that runs Windows. Your friend continues to send tons of emails with .pages files attached and you are tired of manually saving the files as a .zip file. Ultimately, you would like Firefox to be set up so that the download/file manager recognizes the .pages extension and automatically converts it to a .zip file. What have I done? I have saved files manually by selecting save as "All Files" and setting the extension to .zip. I've gone through Firefox and their documentation and have not found anything on how to complete this task. Why am I doing this? To save time (only a few seconds, not the main reason). I would like to setup a simple solution that "converts" a file automatically without having to recall steps on how to achieve the task manually (for clients who aren't exactly tech savvy). So that clients with Windows can access the files. IMPORTANT NOTE: I am not trying to save the web page, rather an Apple document equivalent to Microsoft Word. UPDATE: The really easy method would be to save one file, right click it, choose properties and open all .pages files up with WinRAR (or any other program that extracts files from a compressed folder). For the sake of learning, I am going to "neglect" this method and continue to do some research on Firefox add-ons. I would still like to have Firefox or the download manager to do the bulk of the work for converting the file.

    Read the article

  • Index fragmentation and reorganizing database pages

    - by TiQ
    Say you have a database with heavy index fragmentation. Say this database also has a lot of free space due to frequent deletes in its data file. This free space is not contiguous. If I rebuild all indexes to remove fragmentation and then reorganize the database pages so allocated pages and free pages are contiguous, would this cause further fragmentation in my indexes? I guess the question can be posed as: if it matters, which should I do first, reorganize or rebuild?

    Read the article

  • Man pages in Linux

    - by Ayos
    I don't seem to have all the man pages that I need. For example, my college computers (running Fedora 14) have man pages for ASCII, all the standard C libraries (stdlib.h, stdio.h) and so on and so forth. I wish to "install" these pages; after looking around on the Internet I couldn't really find anything that made sense. How can I get, say, the man page for ASCII? (I know it's not really a command or a daemon or anything like that, but typing man ascii on the college computer gets me a page with the ASCII value table plus a little more information.) I don't want to keep using the Internet to look up man pages every time I need to check a function, a function prototype, the ASCII table or something like that.

    Read the article

  • Problem resolving many of the Web Pages

    - by Aditya
    I am presently running Ubuntu 12.04 and using Chrome/Firefox along with OpenDNS (I have tried Google Public DNS as well as my ISP's DNS). Suddenly, a lot of websites that I visit frequently don't load anymore. Some of them are imgur, Yahoo, fed-sudoku, Microsoft and the add-ons page of Firefox; I am sure there are many more that won't load. I have Windows 7 in dual-boot and there are no problems whatsoever opening these pages on Windows. A little history: two weeks back I installed Ubuntu 12.10 and faced this issue right away. I thought something must have gone wrong with the installation, so I removed Ubuntu 12.10 and instead installed Lubuntu 12.10, but the same issue persisted on Lubuntu as well. So I tried opening these web pages in live environments (Ubuntu 12.10, Lubuntu 12.10 and Ubuntu 12.04.1) from USB. The issue was there for Ubuntu 12.10 and Lubuntu 12.10; however, I was able to access these web pages from Ubuntu 12.04.1, so I installed 12.04.1 on my hard disk. Everything on 12.04 was fine until yesterday, but suddenly these sites don't load anymore. All this while, Windows 7 in dual-boot works flawlessly. Please help me resolve this issue.

    Read the article

  • WordPress page title repeated in SOME pages

    - by cmykrgbb
    I have created a WordPress site and the titles were working just fine. Then, some time and a few plugins later, I noticed that on SOME pages the title is repeated two times. Example of a wrong page title: Contact - NAME | NAME. Example of a normal title: Our Services | NAME. Now, if I go to General Settings and change the title, it changes both, so no improvement. SEO by Yoast has the option to reset page titles, but that just removes all titles, leaving the current URL as the page title, so that's no good either. Here is the code I originally had: <title><?php wp_title(''); ?><?php if(wp_title('', false)) { echo ' | '; } ?><?php bloginfo('name'); ?></title> Here is the code I am using now: <title><?php wp_title('|'); ?></title> To sum up, I think somewhere in the database there's a wp_title being added twice: once using '-' as the separator, and another (the current one) using '|'. Any help will be most appreciated, thanks!
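
    The "NAME | NAME" pattern usually means the site name is being appended in two places - for example by the theme's header template and again by a plugin filtering wp_title. A minimal sketch of the single-place pattern, assuming the theme's functions.php is where the site name gets appended (mytheme_wp_title is an illustrative name); if an SEO plugin is managing titles instead, the theme-side filter should be dropped so only one of the two appends the name:

    // functions.php: append the site name exactly once, via the wp_title filter.
    function mytheme_wp_title( $title, $sep ) {
        if ( is_feed() ) {
            return $title;
        }
        // wp_title( '|', true, 'right' ) leaves the separator on the right,
        // so the site name can simply be appended here.
        $title .= get_bloginfo( 'name', 'display' );
        return $title;
    }
    add_filter( 'wp_title', 'mytheme_wp_title', 10, 2 );

    // header.php: no bloginfo('name') here - the filter above adds it.
    // <title><?php wp_title( '|', true, 'right' ); ?></title>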

    Read the article

  • create previous next button for iframe pages

    - by Resu
    This topic may have lots of code out there, BUT I seem to be looking for a variation that isn't based on history - is it possible? So I have this code:

    <script type="text/javascript">
    var pages = new Array();
    pages[0] = "listItem1.html";
    pages[1] = "listItem2.html";
    pages[2] = "listItem3.html";
    pages[3] = "listItem4.html";
    pages[4] = "listItem5.html";
    var i = 0;
    var end = pages.length;
    end--;
    function changeSrc(operation) {
      if (operation == "next") {
        if (i == end) {
          document.getElementById('the_iframe').src = pages[end];
          i = 0;
        } else {
          document.getElementById('the_iframe').src = pages[i];
          i++;
        }
      }
      if (operation == "back") {
        if (i == 0) {
          document.getElementById('the_iframe').src = pages[0];
          i = end;
        } else {
          document.getElementById('the_iframe').src = pages[i];
          i--;
        }
      }
    }
    </script>
    </head>
    <body>
    <ul id="menu" role="group">
      <li><a href="listItem1.html" target="ifrm" role="treeitem">Welcome</a>
        <ul>
          <li><a href="listItem2.html" target="ifrm" role="treeitem">Ease of Access Center</a></li>
        </ul>
      </li>
      <li><a href="listItem3.html" target="ifrm">Getting Started</a>
        <ul>
          <li><a href="listItem4.html" target="ifrm">Considerations</a></li>
          <li><a href="listItem5.html" target="ifrm">Changing Perspective</a></li>
        </ul>
      </li>
    </ul>
    <iframe id="the_iframe" scrolling="no" src="listItem1.html" name="ifrm" style="width:540px;"></iframe>
    <input type="button" onClick="changeSrc('back');" value="Back" />
    <input type="button" onClick="changeSrc('next');" value="Next" />

    If I click on the next or prev button, it does move somewhere, but... let's say my iframe is showing listItem2, then I click on listItem4 in the menu (there is a tree menu involved), then I want to go to listItem3 and I hit the back button... instead of going to listItem3, it goes to listItem2 (or someplace that is not back a page from 4 to 3). It appears that the buttons are navigating based on history? But I just want a simple forward or backward movement... I don't want my buttons to have this browser-type functionality. If I'm on listItem4 and hit the next button, I want it to go to listItem5. Many thanks for any help!

    Read the article

  • Supporting users if they're not on your site

    - by Roger Hart
    Have a look at this Read Write Web article, specifically the paragraph in bold and the comments. Have a wry chuckle, or maybe weep for the future of humanity - your call. Then pause, and worry about information architecture. The short story: Read Write Web bumps up the Google rankings for "Facebook login" at the same time as Facebook makes UI changes, and a few hundred users get confused and leave comments on Read Write Web complaining about not being able to log in to their Facebook accounts.* Blindly clicking the first Google result is not a navigation behaviour I'd anticipated for folks visiting big-name sites like Facebook. But then, I use Launchy and don't know where any of my files are, depend on Firefox auto-complete, view Facebook through my IM client, and don't need a map to find my backside with both hands. Not all our users behave in the same way, which means not all of our architecture is within our control, and people can get to your content in all sorts of ways. Even if the Read Write Web episode is a prank of some kind (there are, after all, plenty of folks who enjoy orchestrated trolling) it's still a useful reminder. Your users may take paths through and to your content you cannot control, and they are unlikely to deconstruct their assumptions along the way. I guess the meaningful question is: can you still support those users? If they get to you from Google instead of your front door, does what they find still make sense? Does your information architecture still work if your guests come in through the bathroom window? Ok, so here they broke into the house next door - you can't be expected to deal with that. But the rest is well worth thinking about.

    Other off-site interaction

    It's rarely going to be as funny as the comments at Read Write Web, but your users are going to do, say, and read things they think of as being about you and your products, in places you don't control. That's good. If you pay attention to it, you get data. Your users get a better experience. There are easy wins, too.

    Blogs, forums, social media &c.

    People may look for and find help with your product on blogs and forums, on Twitter, and what have you. They may learn about your brand in the same way. That's fine, it's an interaction you can be part of. It's time-consuming, certainly, but you have the option. You won't get a blogger to incorporate your site navigation just in case your users end up there, but you can be there when they do. Again, Anne Gentle, Gordon McLean and others have covered this in more depth than I could.

    Direct contact

    Sales people, customer care, support - they all talk to people. Are they sending links to your content? If so, which bits? Do they know about all of it? Do they have the content they need to support them - messaging that funnels sales, FAQ that are realistically frequent, detailed examples of things people want to do, that kind of thing? Are they sending links because users can't find the good stuff? Are they sending précis of your content, or re-writes, or brand new stuff? If so, does that mean your content isn't up to scratch, or that you've got content missing? Direct sales/care/support interactions are enormously valuable, and can help you know what content your users find useful. You can't have a table of contents or a "See also" in a phone call, but your content strategy can support more interactions than browsing.

    *Passing observation about Facebook. For plenty of folks, it is the internet. Its services are simple versions of what a lot of people use the internet for, and they're aggregated into one stop. Flickr, Vimeo, WordPress, Twitter, LinkedIn, and all sorts of games have Facebook doppelgangers that are not only friendlier to entry-level users, they're right there, behind only one layer of authentication. As such, it could own a lot of interaction convention. Heavy users may well not be tech-savvy, and may be quite change-averse. That doesn't make this episode not dumb, but I'm happy to go easy on 'em.

    Read the article

  • ASP.NET MVC: Using ProfileRequiredAttribute to restrict access to pages

    - by DigiMortal
    If you are using AppFabric Access Control Services to authenticate users when they log in to your community site using Live ID, Google or some other popular identity provider, you need more than AuthorizeAttribute to make sure that users can access the content that is there for authenticated users only. In this posting I will show you how to extend the AuthorizeAttribute so users must also have a user profile filled in.

    Semi-authorized users

    When a user is authenticated through an external identity provider, not all identity providers give us the user name or other information we ask users for when they join our site. What all identity providers have in common is a unique ID that helps you identify the user. Example: users authenticated through Windows Live ID by AppFabric ACS have no name specified. Google's identity provider is able to provide you with the user name and e-mail address if the user agrees to publish this information to you. Both give you the unique ID of the user when the user is successfully authenticated in their service.

    There is a logical shift between ASP.NET and my site when considering a user as authorized. For ASP.NET MVC a user is authorized when the user has an identity. For my site a user is authorized when the user has a profile and a row in my users table. Having a profile means that the user has a unique username in my system and he or she is always identified by this username by other users. My solution is simple: I created my own action filter attribute that makes sure the user has a profile before accessing a given method, and if the user has no profile then the browser is redirected to the join page.

    Illustrating the problem

    Usually we restrict access to a page using AuthorizeAttribute. The code is something like this.

    [Authorize]
    public ActionResult Details(string id)
    {
        var profile = _userRepository.GetUserByUserName(id);
        return View(profile);
    }

    If this page is only for site users and we have user profiles, then all users – the ones that have a profile and all the others that are just authenticated – can access the information. It is okay because all these users have successfully logged in to some service that is supported by AppFabric ACS. In my site the users with no profile are in a grey spot. They are halfway to being users because they have no username and profile on my site yet. So, looking at the image above again, we need something that adds a profile-existence condition to user-only content.

    [ProfileRequired]
    public ActionResult Details(string id)
    {
        var profile = _userRepository.GetUserByUserName(id);
        return View(profile);
    }

    Now, this attribute will solve our problem as soon as we implement it.

    ProfileRequiredAttribute: profiles are required to be fully authorized

    Here is my implementation of ProfileRequiredAttribute. It is pretty new and right now it is more like a working draft, but you can already play with it.

    public class ProfileRequiredAttribute : AuthorizeAttribute
    {
        private readonly string _redirectUrl;

        public ProfileRequiredAttribute()
        {
            _redirectUrl = ConfigurationManager.AppSettings["JoinUrl"];
            if (string.IsNullOrWhiteSpace(_redirectUrl))
                _redirectUrl = "~/";
        }

        public override void OnAuthorization(AuthorizationContext filterContext)
        {
            base.OnAuthorization(filterContext);

            var httpContext = filterContext.HttpContext;
            var identity = httpContext.User.Identity;

            if (!identity.IsAuthenticated || identity.GetProfile() == null)
                if (filterContext.Result == null)
                    httpContext.Response.Redirect(_redirectUrl);
        }
    }

    All methods with this attribute work as follows: if the user is not authenticated then he or she is redirected to the AppFabric ACS identity provider selection page; if the user is authenticated but has no profile then the user is by default redirected to the main page of the site, but if you have an application setting with the name JoinUrl then the user is redirected to that URL. The first case is handled by AuthorizeAttribute and the second one is handled by the custom logic in the ProfileRequiredAttribute class.

    GetProfile() extension method

    To get the user profile using less code in places where profiles are needed, I wrote a GetProfile() extension method for the IIdentity interface. There are some more extension methods that read out the user and identity provider identifier from claims, and based on this information the user profile is read from the database. If you take this code with copy and paste I am sure it doesn't work for you, but you get the idea.

    public static User GetProfile(this IIdentity identity)
    {
        if (identity == null)
            return null;

        var context = HttpContext.Current;
        if (context.Items["UserProfile"] != null)
            return context.Items["UserProfile"] as User;

        var provider = identity.GetIdentityProvider();
        var nameId = identity.GetNameIdentifier();

        var rep = ObjectFactory.GetInstance<IUserRepository>();
        var profile = rep.GetUserByProviderAndNameId(provider, nameId);

        context.Items["UserProfile"] = profile;

        return profile;
    }

    To avoid round trips to the database I cache the user profile in the current request, because the chance that the profile gets changed meanwhile is very minimal. The other reason is maybe more tricky – the profile objects are coming from an Entity Framework context, and that context also has the HTTP request as its lifecycle.

    Conclusion

    This posting gave you some ideas on how to finish the user profiles stuff when you use AppFabric ACS as an external authentication provider. Although there was a little shift between us and ASP.NET MVC in the interpretation of "authorized", we were easily able to solve the problem by extending AuthorizeAttribute to get all our requirements fulfilled. We also wrote an extension method for IIdentity that returns a user profile based on the username and caches the profile in HTTP request scope.

    Read the article

  • IIS7 web farm - local or shared content?

    - by rbeier
    We're setting up an IIS7 web farm with two servers. Should each server have its own local copy of the content, or should they pull content directly from a UNC share? What are the pros and cons of each approach? We currently have a single live server WEB1, with content stored locally on a separate partition. A job periodically syncs WEB1 to a standby server WEB2, using robocopy for content and msdeploy for config. If WEB1 goes down, Nagios notifies us, and we manually run a script to move the IP addresses to WEB2's network interface. Both servers are actually VMs running on separate VMWare ESX 4 hosts. The servers are domain-joined. We have around 50-60 live sites on WEB1 - mostly ASP.NET, with a few that are just static HTML. Most are low-traffic "microsites". A few have moderate traffic, but none are massive. We'd like to change this so both WEB1 and WEB2 are actively serving content. This is mainly for reliability - if WEB1 goes down, we don't want to have to manually intervene to fail things over. Spreading the load is also nice, but the load is not high enough right now for us to need this. We're planning to configure our firewall to balance traffic across the two servers. It will detect when a server goes down and will send all the traffic to the remaining live server. We're planning to use sticky sessions for now... eventually we may move to SQL Server session state and stateless load balancing. But we need a way for the servers to share content. We were originally planning to move all the content to a UNC share. Our storage provider says they can set up a highly available SMB share for us. So if we go the UNC route, the storage shouldn't be a single point of failure. But we're wondering about the downsides to this approach: We'll need to change the physical paths for each site and virtual directory. There are also some projects that have absolute paths in their web.config files - we'll have to update those as well. We'll need to create a domain user for the web servers to access the share, and grant that user appropriate permissions. I haven't looked into this yet - I'm not sure if the application pool identity needs to be changed to this user, or if there's another way to tell IIS to use this account when connecting to the share. Sites will no longer be able to access their content if there's ever an Active Directory problem. In general, it just seems a lot more complicated, with more moving parts that could break. Our storage provider would create a volume for us on their redundant SAN. If I understand correctly, this SAN volume would be mounted on a VM running in their redundant VMWare environment; this VM would then expose the SMB share to our web servers. On the other hand, a benefit of the shared content approach is that we'd only need to deploy code to one place, and there would never be a temporary inconsistency between multiple copies of the content. This thread is pretty interesting, though some of these people are working at a much larger scale. I've just been discussing content so far, but we also need to think about configuration. I don't know if we can just use DFS replication for the applicationHost.config and other files, or if it's best to use the shared configuration feature with the config on a UNC share. What do you think? Thanks for your help, Richard

    Read the article

  • How to cache dynamic javascript/jquery/ajax/json content with Akamai

    - by Starfs
    I'm trying to wrap my head around how things are cached on a CDN, and it is new territory for me. In the document we received about sending in environment requests, it says "Dynamically-generated content will not benefit much from EdgeSuite". I feel like this is a simplified statement, and there has to be a way to cache dynamically generated content if the tools are configured correctly. The site we are working with runs off a WordPress database and uses JavaScript and Ajax to build the pages, based on the JSON objects that PHP scripts have generated. The process: the user's browser requests a URL, the browser talks to the EdgeSuite tools, which will have cached certain pre-defined elements, and then requests from the host web server anything that is not cached; once EdgeSuite has compiled a combination of the two, it sends that information back to the browser. Can we not simply cache all JSON objects (and of course images, JS, CSS) so the web browser never has to hit the host server's database - at which point, in essence, we have cached our dynamic content? Does anyone have any pointers on the most efficient configuration for this type of system - Akamai/CDN - to serve JavaScript/Ajax/JSON generated pages that ideally already hit pre-cached JSON data? Any and all feedback is welcome!
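
    One piece of the puzzle, sketched under the assumption that the Akamai property is configured to honour origin cache headers (edge TTLs can also be set in the property configuration itself): the PHP scripts that emit the JSON can mark their responses as cacheable, so the edge can answer repeat requests without touching the WordPress database. get_cached_posts() below is a hypothetical helper standing in for whatever builds the JSON today.

    <?php
    // Hypothetical JSON endpoint: the cache headers tell shared caches (the CDN)
    // and browsers how long the response may be reused before revalidating.
    header('Content-Type: application/json; charset=utf-8');
    header('Cache-Control: public, max-age=300, s-maxage=3600'); // s-maxage applies to shared caches only
    header('Vary: Accept-Encoding');

    $payload = array(
        'generated' => gmdate('c'),
        'posts'     => get_cached_posts(), // hypothetical helper that queries WordPress
    );
    echo json_encode($payload);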

    Read the article
