Search Results

Search found 21060 results on 843 pages for 'content disposition'.


  • SharePoint Content Type Cheat Sheet

    - by Bil Simser
    Principle: Any application or solution built in SharePoint must use a custom content type rather than adding columns to lists. The only exception is one-off solutions that have no life-cycle, proofs-of-concept, etc.

    Creating Content Types:
    - Web UI. Not portable; POC only.
    - C# or declarative (XML). Must be deployed as Features.

    Rule: Do not change the base XML for a Content Type after deploying it. The only exception to this rule is that you can re-deploy a modified Content Type definition after completely removing it from the environment (either programmatically or by hand).

    Updating Content Types (update and push down to child types):
    - Web UI. Manual for each environment; document the steps required for repeatability.
    - Feature Upgrade. The preferred solution.
    - C#. If you created the content type through code you might want to go this route.
    - Create new, modified Content Types and hide the old ones. Not recommended, but useful for legacy.

    References: Create Custom Content Types in SharePoint 2010 (C#); Content Type Definitions (XML); Creating Content Types (XML and C#); Updating Approaches; Updating Child Content Types.

    Agree or disagree?
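
    For the C# route, a minimal sketch of what creating and deploying a content type in code might look like (the "Invoice" type, the "Amount" site column and the site URL are hypothetical; error handling omitted):

        using System;
        using Microsoft.SharePoint;

        using (SPSite site = new SPSite("http://server/sites/demo"))
        using (SPWeb web = site.OpenWeb())
        {
            // Derive from the built-in Item content type.
            SPContentType parent = web.AvailableContentTypes[SPBuiltInContentTypeId.Item];
            SPContentType invoice = new SPContentType(parent, web.ContentTypes, "Invoice");
            web.ContentTypes.Add(invoice);

            // Columns attach to a content type via FieldLinks.
            SPField amount = web.Fields["Amount"]; // an existing site column
            invoice.FieldLinks.Add(new SPFieldLink(amount));
            invoice.Update(true); // true = push the change down to child types
        }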

    Read the article

  • Content management recommendations for website?

    - by Travis
    Hello, I am working on a website that has a wide range of content (news, FAQs, tutorials, a blog, articles, product pages, etc.). Currently a lot of this content is static or uses special-purpose scripts. I would like to move most of it under the wing of a single content manager. I have not used out-of-the-box content management software previously, so I am hoping for some recommendations on what the options are and which might be best suited to a project like this. Whether the manager is open source or commercial, and what language it is written in, are not so important; I can customize the environment as necessary. The most important things are:
    1) the ability to manage a wide variety of content;
    2) the ability to create highly customized templates for a single page of content or an entire category of content;
    3) flexibility, i.e. the ability to integrate managed content with other pages not controlled by the content manager.
    Thanks in advance for your help, Travis

    Read the article

  • jQuery / Loading content into a div and changing URLs (working but buggy)

    - by Bruno
    This is working, but I'm not able to set an index.html file at my server root where I can specify the first page to go to. It also gets very buggy in some situations. Basically it's a common site (menu + content), but the idea is to load the content without refreshing the page, define the div to load the content into, and make each page accessible by its URL. One of the biggest problems here is dealing with all the URL situations that may occur. The ideal would be to have a rel="divToLoadOn" attribute and then pass it to my loadContent() function... so I would like ideas/solutions for this please. Thanks in advance!

        //if page comes from URL
        if (window.location.hash != '') {
            var url = window.location.hash;
            url = '..' + url.substr(1, url.length);
            loadContent(url);
        }

        //if page comes from an internal link
        $("a:not([target])").click(function(e) {
            e.preventDefault();
            var url = $(this).attr("href");
            if (url != '#') {
                loadContent(url);
            }
        });

        //LOAD CONTENT
        function loadContent(url) {
            var contentContainer = $("#content");

            //set load animation
            $(contentContainer).ajaxStart(function() {
                $(this).html('loading...');
            });

            $.ajax({
                url: url,
                dataType: "html",
                success: function(data) {
                    //store data globally so it can be used on complete
                    window.data = data;
                },
                complete: function() {
                    var content = $(data).find("#content").html();
                    var contentTitle = $(data).find("title").text();

                    //change url
                    var parsedUrl = url.substr(2, url.length);
                    window.location.hash = parsedUrl;

                    //change title
                    var titleRegex = /<title>(.*)<\/title>/.exec(data);
                    contentTitle = titleRegex[1];
                    document.title = contentTitle;

                    //renew content
                    $(contentContainer).fadeOut(function() {
                        $(this).html(content).fadeIn();
                    });
                }
            });
        }
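
    As for the rel="divToLoadOn" idea the poster asks about, one hedged sketch (the attribute handling and the extra loadContent() parameter are assumptions, not tested code):

        // Sketch: read the target container from the link itself,
        // e.g. <a href="about.html" rel="sidebar">About</a>
        $("a:not([target])").click(function(e) {
            e.preventDefault();
            var url = $(this).attr("href");
            var target = "#" + ($(this).attr("rel") || "content"); // fall back to #content
            if (url && url != '#') {
                loadContent(url, target); // loadContent would need to accept the selector
            }
        });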

    Read the article

  • PHP Inverting content adding (sorting)

    - by Adrian
    Hello, I have this code, which includes the "template.php" file from inside each of these folders: "content/templates/id1", "content/templates/id2", "content/templates/id3", etc.:

        $page_file = basename(__FILE__, ".php");
        require("content/" . $page_file . "/content.php");

        $iterator = new RecursiveIteratorIterator(
            new RecursiveDirectoryIterator($page_path),
            RecursiveIteratorIterator::SELF_FIRST);

        foreach ($iterator as $file) {
            if ($file->isDir()) {
                include strtoupper($file . '/template.php');
            }
        }

    This code works pretty well. The problem is that I want to invert the order of the content adding, meaning that I want "content/templates/id9/template.php" included before "id8/template.php", and so on down to the first. How can I do this by modifying the code above? A million thanks!
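
    A hedged sketch of one way to get the reverse order (untested; it collects the template directories first, then sorts them descending before including, assuming the idN naming shown above):

        // Sketch: gather the template directories, then include in reverse order.
        $dirs = array();
        foreach ($iterator as $file) {
            if ($file->isDir()) {
                $dirs[] = (string) $file;
            }
        }
        rsort($dirs, SORT_NATURAL); // id9 before id8 ... (SORT_NATURAL needs PHP >= 5.4)
        foreach ($dirs as $dir) {
            include $dir . '/template.php';
        }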

    Read the article

  • Custom field not showing in Custom Content Type

    - by BeraCim
    Hi all: I created a custom column in a custom content type in a SharePoint web manually (e.g. /MySite/MyWeb). I now want to programmatically copy this content type across to another web (e.g. /MySite/MyWeb2). However, upon looping through the custom content type in code, I could only find 2 fields: Content Type and Title (expected: Title and the custom column). The custom column was missing. I'm very sure that the content type and field were added at the web level. The custom content type inherits from Item. When I loop through the web's fields, I can see the custom column, and it was copied to the new web. It is only within the content type that the custom column is not showing up. Any ideas why this is happening? Thanks.
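
    A hedged sketch of how one might diagnose this (the site URL, web name and content type name are placeholders): a content type carries its columns as FieldLinks, so comparing what Fields and FieldLinks report on both webs may show where the column got lost:

        using System;
        using Microsoft.SharePoint;

        using (SPSite site = new SPSite("http://server/MySite"))
        using (SPWeb web = site.OpenWeb("MyWeb"))
        {
            SPContentType ct = web.ContentTypes["MyContentType"]; // hypothetical name

            foreach (SPField field in ct.Fields)
                Console.WriteLine("Field: " + field.InternalName);

            foreach (SPFieldLink link in ct.FieldLinks)
                Console.WriteLine("FieldLink: " + link.Name);
        }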

    Read the article

  • Introducing the First Global Web Experience Management Content Management System

    - by kellsey.ruppel
    By Calvin Scharffs, VP of Marketing and Product Development, Lingotek

    Globalizing online content is more important than ever. The total spending power of online consumers around the world is nearly $50 trillion, a recent Common Sense Advisory report found. Three years ago, enterprises would have had to translate content into 37 languages to reach 98 percent of Internet users. This year, it takes 48 languages to reach the same share of users. For companies seeking to increase global market share, "translate frequently and fast" is the name of the game. Today's content is dynamic and ever-changing, covering the gamut from social media sites to company forums to press releases. With high-quality translation and localization, enterprises can tailor content to consumers around the world.

    Speed and Efficiency in Translation
    When it comes to the "frequently and fast" part of the equation, enterprises run into problems. Professional service providers deliver translated content in files, which company workers then have to insert manually into their CMS. When companies update or edit source documents, they have to hunt down all the translated content and change each document individually.

    Lingotek and Oracle have solved the problem by making the Lingotek Collaborative Translation Platform fully integrated and interoperable with Oracle WebCenter Sites Web Experience Management. Lingotek combines best-in-class machine translation solutions, real-time community/crowd translation and professional translation to enable companies to publish globalized content in an efficient and cost-effective manner. WebCenter Sites Web Experience Management simplifies the creation and management of different types of content across multiple channels, including social media.

    Globalization Without Interrupting the Workflow
    The combination of the Lingotek platform with WebCenter Sites ensures that the process of authoring, publishing, targeting, optimizing and personalizing global Web content is automated, saving companies the time and effort of manually entering content. Users can seamlessly integrate translation into their WebCenter Sites workflows, optimizing their translation and localization across web, social and mobile channels in multiple languages. The original structure and formatting of all translated content is maintained, saving workers the time and effort involved in inserting the translated text and reformatting.

    In addition, Lingotek's continuous publication model addresses the dynamic nature of content, automatically updating the status of translated documents within the WebCenter Sites workflow whenever users edit or update source documents. This enables users to sync translations in real time. The translation, localization, updating and publishing of Web Experience Management content happens in a single, uninterrupted workflow.

    The net result of Lingotek Inside for Oracle WebCenter Sites Web Experience Management is a system that more than meets the need for frequent and fast global translation. Workflows are accelerated. The globalization of content becomes faster and more streamlined. Enterprises save time, cost and effort in translation project management, and can address the needs of each of their global markets in a timely and cost-effective manner.

    About Lingotek
    Lingotek is an Oracle Gold Partner and is going to be one of the first Oracle Validated Integrator (OVI) partners with WebCenter Sites. Lingotek is also an OVI partner with Oracle WebCenter Content.

    Watch a video about how Lingotek Inside for Oracle WebCenter Sites works! Oracle WebCenter will be hosting a webinar, "Hitachi Data Systems Improves Global Web Experiences with Oracle WebCenter," tomorrow, September 13th. To attend the webinar, please register now! For more information about Lingotek for Oracle WebCenter, please visit http://www.lingotek.com/oracle.

    Read the article

  • Creating a separate static content site for IIS7 and MVC

    - by JK01
    With reference to this Server Fault blog post, A Few Speed Improvements, which describes how static content for Stack Exchange is served from a separate cookieless domain... How would someone go about doing this on IIS 7.5 for an ASP.NET MVC site? The plan so far:
    1. Register a domain, e.g. static.com, and create a new website in IIS.
    2. Manually copy the js/css/images folders from the MVC site as-is, so that they have the same paths on the new server.
    3. Enable IIS gzip settings (js/css = high compression, images = none).
    4. Set caching with far-future expiry dates: <clientCache cacheControlCustom="public" /> in the web.config (see the sketch below).
    5. Never set any cookies on the static.com site.
    6. Combine and minify js/css.
    7. Auto-deploy changes in static content with WebDeploy.
    Is this plan correct? And how can you use WebDeploy to deploy the whole web app to one server and then only the static items to another? I can see there is a similar question, but for Apache (Creating a cookie-free domain to serve static content), so it doesn't apply.
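
    For step 4, a sketch of what the static site's web.config might contain (the one-year max-age is an arbitrary example, not a recommendation):

        <!-- Sketch: far-future client caching plus static compression -->
        <configuration>
          <system.webServer>
            <staticContent>
              <clientCache cacheControlMode="UseMaxAge"
                           cacheControlMaxAge="365.00:00:00"
                           cacheControlCustom="public" />
            </staticContent>
            <urlCompression doStaticCompression="true" />
          </system.webServer>
        </configuration>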

    Read the article

  • Universal navigation menu across domains - would it be considered duplicate content?

    - by Jon Harley
    Across different sites on different second-level domains exists a universal navigation bar with a collection of roughly 30 links. This universal bar is exactly the same for every page on each domain. The bar's HTML, CSS and JavaScript are all stored in a subfolder for each domain; the HTML is embedded when the page is served, not injected on the client side. None of the links use any rel directives and are as vanilla as can be. My question is about Google's duplicate content rule: would something like this be considered duplicate content? Matt Cutts's blog post about duplicate content mentions boilerplate repetition, but then he mentions lengthy legalese. Since the text in this universal bar is brief and uses common terms, I wonder if the same rule applies. If this is considered duplicate content, what would be a good way to correct the problem?

    Read the article

  • Author Bio on all pages - Is it duplicate content?

    - by Rana Prathap
    On a website with user-generated content, I provide an author bio under every article on the site. The author bio is the same under every article the same author wrote. For some authors the bio is no longer than a couple of sentences, but for some descriptive writers it is a good 100 words. These 100 words get repeated on almost 15 pages, some of them without substantial original content (such as haikus). Will this lead to duplicate content?

    Read the article

  • How to handle possible duplicate content across multiple sites?

    - by ElHaix
    Let's say I have two sites that cover the same vertical/topic, one in the USA and one in Canada. Both sites have local content, which is obviously unique by location; however, they will share common news or blog pages. How do I avoid getting hit with duplicate content on both sites for those news/blog pages? If the content is exactly the same, I'm guessing I would have to pick which site's content to noindex,nofollow. Is that correct, and if so, is that all I have to add on the URL links to those pages and in the pages' meta tags?
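
    For reference, the noindex,nofollow approach the poster describes would be a meta tag on the duplicate site's pages; rel="canonical" is a common alternative (the URLs below are hypothetical):

        <!-- On the site whose copy should not be indexed -->
        <meta name="robots" content="noindex,nofollow">

        <!-- Alternative: point search engines at the preferred copy -->
        <link rel="canonical" href="http://www.example-us-site.com/blog/some-post">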

    Read the article

  • Free Document/Content Management System Using SharePoint 2010

    - by KunaalKapoor
    That's right, it's true: you can use the free version of SharePoint 2010 to meet your document and content management needs, and even run your public-facing website or an internal knowledge bank. SharePoint Foundation 2010 is free. It may not have all the features that you get in the enterprise license, but it still has enough to let you build a document management system and replace age-old file shares or folders. I've built a dozen content management sites for internal and public use exploiting SharePoint.

    There are hundreds of web content management systems out there (see CMS Matrix). On one hand we have commercial platforms like SharePoint, SiteCore and Ektron, which are the most frequently used; on the other hand there are free options like WordPress, Drupal, Joomla and Plone, which are pretty popular as well. But I would be very surprised if anyone was able to find a single CMS platform that is all things to all people. In fact, not a lot of people consider SharePoint's free version on the free-CMS side, but it's high time organizations benefited from this. Through this blog post I wanted to present SharePoint Foundation as an option for running a FREE CMS platform.

    Even if you knew that there is a free version of SharePoint, what most people don't realize is that SharePoint Foundation is a great option for running web sites of all kinds - not just team sites. It is a great option for many reasons: it is supported by Microsoft, above all it is FREE (yay!), and it is extremely easy to get started. From a functionality perspective it's hard to beat SharePoint. Even the free version, SharePoint Foundation, offers simple data connectivity (through BCS), cross-browser support, accessibility, support for Office Web Apps, blogs, wikis, templates, document support, a health analyzer, support for presence, and MUCH more.

    I often get asked: "Can I use SharePoint 2010 as a document management system?" The answer really depends on:
    - What are your specific requirements?
    - What systems do you currently have in place for managing documents?
    - And of course, how much money you have.

    Benefits? Not many large organizations have benefited from SharePoint yet. For some it has been an IT project to see what they can achieve with it; for others it has been used as a collaborative platform or, in many cases, an extended intranet. SharePoint 2010 has changed the game slightly: the improvements that Microsoft has made have been noted by organizations, and we are seeing a lot of companies starting to build specific business applications using SharePoint as the basis - and nearly every business process will require documents at some stage. If you require a document management system and have SharePoint in place, then it can be a relatively straightforward decision to use SharePoint, as long as you have reviewed the considerations just discussed. The collaborative nature of SharePoint 2010 is also a massive advantage, as specific departmental or project sites can be created quickly and easily that allow workers to interact in a variety of different ways using one source of information. This also benefits an organization with regard to how it manages its knowledge: if all of the information is in one source, it is naturally easier to search and manage.

    Is SharePoint right for your organization?
    As just discussed, this can only be determined after defining your requirements and planning a longer-term strategy for how you will manage your documents and information. A key factor to look at is how the users would interact with the system and how much value it would bring to your organization. The amount of data and documents that organizations are creating is increasing rapidly each year, so the ability to archive this information, while keeping the ability to know what you have and where it is, is vital to any organization's management of its information life cycle. SharePoint is best used for the initial life of business documents, where they need to be referenced and accessed over time. It is often beneficial to archive these later to avoid storage and performance issues.

    FREE CMS - SharePoint, Really?
    In order to show some of the completeness of what comes with this free version of SharePoint 2010, I thought it would make sense to use Wikipedia's definition (since everyone trusts it as a credible source). Wikipedia shows that a web content management system typically has the following components:

    Document Management:
    - CMS software may provide a means of managing the life cycle of a document from initial creation time, through revisions, publication, archive, and document destruction.
    SharePoint is king when it comes to document management: version history, exclusive check-out, security, publication, workflow, and so much more.

    Content Virtualization:
    - CMS software may provide a means of allowing each user to work within a virtual copy of the entire Web site, document set, and/or code base. This enables changes to multiple interdependent resources to be viewed and/or executed in-context prior to submission.
    Through the use of versioning, each content manager can preview, publish, and roll back content of pages, wiki entries, blog posts, documents, or any other type of content stored in SharePoint. The idea of each user having an entire copy of the website virtualized is a bit odd to me - not sure why anyone would need that for anything but the simplest of websites.

    Automated Templates:
    - Create standard output templates that can be automatically applied to new and existing content, allowing the appearance of all content to be changed from one central place.
    Through the use of Master Pages and Themes, SharePoint provides the ability to change the entire look and feel of a site. Of course, the older brother version of SharePoint - SharePoint Server 2010 - also introduces the concept of Page Layouts, which allows page-template-level customization and even switching the layout of an individual page using different page templates. I think many organizations really think they want this but rarely end up using this bit of functionality.

    Easy Edits:
    - Once content is separated from the visual presentation of a site, it usually becomes much easier and quicker to edit and manipulate. Most WCMS software includes WYSIWYG editing tools allowing non-technical individuals to create and edit content.
    This is probably easier described with a screen cap of a vanilla SharePoint Foundation page in edit mode. Notice the page editing toolbar and the multiple layout options... It's actually easier to use than Microsoft Word.

    Workflow Management:
    - Workflow is the process of creating cycles of sequential and parallel tasks that must be accomplished in the CMS.
    For example, a content creator can submit a story, but it is not published until the copy editor cleans it up and the editor-in-chief approves it.
    Workflow? It's in there. In fact, the same workflow engine runs under SharePoint Foundation as under the other versions of SharePoint. The primary difference is that with SharePoint Foundation you need to configure the workflows yourself.

    Web Standards:
    - Active WCMS software usually receives regular updates that include new feature sets and keep the system up to current web standards.
    SharePoint is in its fourth major iteration under Microsoft with the 2010 release. In addition to the innovation that Microsoft continuously adds, you have the entire global ecosystem available.

    Scalable Expansion:
    - Available in most modern WCMSs is the ability to expand a single implementation (one installation on one server) across multiple domains.
    SharePoint Foundation can run multiple sites using multiple URLs on a single server install. Even more powerful, SharePoint Foundation is scalable and can be part of a multi-server farm to ensure that it will handle any amount of traffic that can be thrown at it.

    Delegation & Security:
    - Some CMS software allows for various user groups to have limited privileges over specific content on the website, spreading out the responsibility of content management.
    SharePoint Foundation provides very granular security capabilities. Read more @ http://msdn.microsoft.com/en-us/library/ee537811.aspx

    Content Syndication:
    - CMS software often assists in content distribution by generating RSS and Atom data feeds to other systems. They may also e-mail users when updates are available as part of the workflow process.
    SharePoint Foundation nails it. With RSS syndication and email alerts available out of the box, content syndication is already in the platform.

    Multilingual Support:
    - Ability to display content in multiple languages.
    SharePoint Foundation 2010 supports more than 40 languages.

    Read More
    Read more @ http://msdn.microsoft.com/en-us/library/dd776256(v=office.12).aspx
    You can download the free version from http://www.microsoft.com/en-us/download/details.aspx?id=5970

    Read the article

  • displaying first few pages of a pdf on a page = duplicate content?

    - by Ace
    I am embedding Scribd PDFs on my website. These are exam-paper PDFs which are available on other websites. Since the Scribd document is an embed/iframe, I think Google considers my page to be empty, with no content; Google does see iframe content, right? So I decided to display the first pages of the PDF as text on the page for Google. Then, for user experience, I hide the text and replace it with the Scribd embed code using JavaScript. I have two worries about this method. Firstly, I am displaying the first pages of the PDF, and the latter may be hosted on other websites; will this be considered duplicate content? Secondly, I am hiding the content and replacing it with the Scribd embed via JavaScript; is this considered bad by Google?

    Read the article

  • Is it considered duplicate content when search results can be retrieved via 2 different urls? [closed]

    - by Floran
    Possible Duplicate: What is duplicate content and how can I avoid being penalized for it on my site?

    I'm building friendly URLs like so: http://www.1001locaties.nl/trouwlocaties. But the same content can also be viewed when using the filter options on the left side, via a different URL: http://www.1001locaties.nl/locaties/?search=1&category=Trouwlocaties. Is this considered duplicate content by Google? And if so, what can I do about it?

    Read the article

  • Squid proxy not serving modified html content

    - by Matthew
    I'm trying to use squid to modify the page content of web page requests. I followed the upside-down-ternet tutorial, which gives instructions for flipping images on pages. I need to change the actual HTML of the page, so I've been trying to do the same thing as in the tutorial, but editing the HTML page instead of the image. Below is the PHP script I'm using. All .jpg images get flipped, but the page content does not get edited: the edited index.html files that are written contain the edited content, but the pages the users receive don't.

        #!/usr/bin/php
        <?php
        $temp = array();
        while ( $input = fgets(STDIN) ) {
            $micro_time = microtime();

            // Split the output (space delimited) from squid into an array.
            $temp = split(' ', $input);

            // Flip jpg images, this works correctly
            if (preg_match("/.*\.jpg/i", $temp[0])) {
                system("/usr/bin/wget -q -O /var/www/cache/$micro_time.jpg " . $temp[0]);
                system("/usr/bin/mogrify -flip /var/www/cache/$micro_time.jpg");
                echo "http://127.0.0.1/cache/$micro_time.jpg\n";
            }
            // Don't edit files that are obviously not html. $temp[0] contains url of file to get
            elseif (preg_match("/(jpg|png|gif|css|js|\(|\))/i", $temp[0], $matches)) {
                echo $input;
            }
            // Otherwise, could be html (e.g. `wget http://www.google.com` downloads index.html)
            else {
                $time = time() . microtime();            // For unique directory names
                $time = preg_replace("/ /", "", $time);  // Simplify things by removing the spaces
                mkdir("/var/www/cache/" . $time);        // Create unique folder
                system("/usr/bin/wget -q --directory-prefix=\"/var/www/cache/$time/\" " . $temp[0]);
                $filename = system("ls /var/www/cache/$time/"); // Get filename of downloaded file

                // File is html, edit the content (this does not work)
                if (preg_match("/.*\.html/", $filename)) {
                    // Get the html file contents
                    $contentfh = fopen("/var/www/cache/$time/" . $filename, 'r');
                    $content = fread($contentfh, filesize("/var/www/cache/$time/" . $filename));
                    fclose($contentfh);

                    // Edit the html file contents
                    $content = preg_replace("/<\/body>/i", "<!-- content served by proxy --></body>", $content);

                    // Write the edited file
                    $contentfh = fopen("/var/www/cache/$time/" . $filename, 'w');
                    fwrite($contentfh, $content);
                    fclose($contentfh);

                    // Return the edited page
                    echo "http://127.0.0.1/cache/$time/$filename\n";
                }
                // Otherwise file is not html, don't edit
                else {
                    echo $input;
                }
            }
        }
        ?>
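
    For context, a rewrite script like this is normally wired into squid with the url_rewrite_program directive; a sketch of the relevant squid.conf lines (the script path is a placeholder):

        # Sketch: hook the PHP rewriter into squid
        url_rewrite_program /usr/local/bin/rewrite.php
        url_rewrite_children 10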

    Read the article

  • Combine two content encodings sections in a single page

    - by AmirGl
    I developed a web application that allows users to modify existing web pages. When a user types the URL of an existing web page, I read the content of this page and, using an ajax call, display the content in a div inside my web application. Now my problem is that often the content encoding of the existing web page is different from that of my web app (I use UTF-8). Is there a way to load content using an ajax call with a different content encoding than that of the main page? Thanks, Amir
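
    One possibility, sketched under the assumption that the fetched page's charset is known (ISO-8859-1 here is only an example): override the MIME type on the underlying XHR so the browser decodes the response correctly before it lands in the div:

        // Sketch: decode the ajax response as ISO-8859-1 instead of the page default.
        $.ajax({
            url: "http://example.com/legacy-page.html", // hypothetical URL
            beforeSend: function(xhr) {
                xhr.overrideMimeType("text/html; charset=ISO-8859-1");
            },
            success: function(html) {
                $("#editor-div").html(html); // hypothetical target div
            }
        });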

    Read the article

  • Firewall - Preventing Content Theft & Rogue Crawlers

    - by drodecker
    Our websites are being crawled by content thieves on a regular basis. We obviously want to let through the nice bots and legitimate user activity, but block questionable activity. We have tried IP blocking at our firewall, but the block lists become tedious to manage. We have also used IIS handlers; however, that complicates our web applications. Is anyone familiar with network appliances, firewalls or application services (say, for IIS) that can reduce or eliminate the content scrapers?

    Read the article

  • Where do I place XNA content pipeline references?

    - by Zabby Wabby
    I am relatively new to XNA, and have started to delve into the use of the content pipeline. I have already figured out the tricky issue of adding a game library containing classes for any type of .xml file I want to read. Here's the issue: I am trying to handle the reading of all XML content through an XMLHandler object that uses the intermediate deserializer. Any time reading of such data is required, the appropriate method within this object would be called. So, as a simple example, something like this would occur when a character levels:

        public Spell LevelUp(int levelAchived)
        {
            return XMLHandler.FindSkillsForLevel(levelAchived);
        }

    This method would then read the proper .xml file, sending back the spell for the character to learn. However, the XMLHandler is having issues even being created: I cannot get it to resolve the Microsoft.Xna.Framework.Content.Pipeline namespace. I get an error on the using statement in the XMLHandler class:

        using Microsoft.Xna.Framework.Content.Pipeline.Serialization.Intermediate;

    The error is a typical reference error: "Type or namespace name 'Pipeline' does not exist in the namespace 'Microsoft.Xna.Framework.Content' (are you missing an assembly reference?)"

    I THINK this is because this namespace is already referenced in my game's content. I would really have no issue placing this object within my game's content project (since that is ALL it deals with anyway), but the Content project does not seem capable of holding anything but content files. In summary, I need to use the Intermediate Deserializer in my main project's logic, but, as far as I can make out, I can't safely reference the associated namespace for it outside of the game's content. I'm not a terribly well-versed programmer, so I may just be missing some big detail I've never learned here. How can I make this object accessible to all projects within the solution? I will gladly post more information if needed!

    Read the article

  • Creating ASP.NET MVC Negotiated Content Results

    - by Rick Strahl
    In a recent ASP.NET MVC application I'm involved with, we had a late-in-the-process request to handle Content Negotiation: returning output based on the HTTP Accept header of the incoming HTTP request. This is standard behavior in ASP.NET Web API, but ASP.NET MVC doesn't support this functionality directly out of the box. Another reason this came up in discussion is last week's announcements of ASP.NET vNext, which seem to indicate that ASP.NET Web API is not going to be ported to the cloud version of vNext, but rather replaced by a combined version of MVC and Web API. While it's not clear what new API features will show up in this new framework, it's pretty clear that the ASP.NET MVC style syntax will be the new standard for the combined HTTP processing framework.

    Why negotiated Content?
    Content negotiation is one of the key features of Web API, even though it's such a relatively simple thing. But it's also something that's missing in MVC, and once you get used to automatically having your content returned based on Accept headers, it's hard to go back to manually creating separate methods for different output types, as you've had to with Microsoft server technologies all along (yes, yes, I know other frameworks - including my own - have done this for years, but for in-the-box features this is relatively new from Web API). As a quick review, Accept header content negotiation works off the request's HTTP Accept header:

        POST http://localhost/mydailydosha/Editable/NegotiateContent HTTP/1.1
        Content-Type: application/json
        Accept: application/json
        Host: localhost
        Content-Length: 76
        Pragma: no-cache

        { ElementId: "header", PageName: "TestPage", Text: "This is a nice header" }

    If I make this request I would expect to get back a JSON result based on my application/json Accept header. To request XML I'd just change the accept header:

        Accept: text/xml

    and now I'd expect the response to come back as XML. Now, this only works with media types that the server can process. In my case here I need to handle JSON, XML, HTML (using Views) and plain text. HTML results might need more than just a data return - you also probably need to specify a View to render the data into, either by specifying the view explicitly or by using some sort of convention that can automatically locate a view to match. Today ASP.NET MVC doesn't support this sort of automatic content switching out of the box.

    Unfortunately, in my application scenario we have an application that started out primarily with an AJAX backend that was implemented with JSON only. So there are lots of JSON results like this:

        [Route("Customers")]
        public ActionResult GetCustomers()
        {
            return Json(repo.GetCustomers(), JsonRequestBehavior.AllowGet);
        }

    These work fine, but they are of course JSON-specific. Then a couple of weeks ago a requirement came in that an old desktop application needs to also consume this API, and it has to use XML to do it because there's no JSON parser available for it. Ooops - stuck with JSON in this case. While it would have been easy to add XML-specific methods, I figured it's easier to add basic content negotiation. And that's what I show in this post.

    Missteps - IResultFilter, IActionFilter
    My first attempt was to use IResultFilter or IActionFilter, which look like they would be ideal to modify result content after it's been generated, using OnResultExecuted() or OnActionExecuted(). Filters are great because they can look globally at all controller methods, or at individual methods that are marked up with the filter's attribute. But it turns out these filters don't work for raw POCO result values from action methods. What we wanted to do for API calls is get back to using plain .NET types as results rather than result actions. That is, you write a method that doesn't return an ActionResult but a standard .NET type, like this:

        public Customer UpdateCustomer(Customer cust)
        {
            // ... do stuff to customer :-)
            return cust;
        }

    Unfortunately, both OnResultExecuted and OnActionExecuted receive an MVC ContentResult instance from the POCO object. MVC basically takes any non-ActionResult return value and turns it into a ContentResult by converting the value using .ToString(). Ugh. The ContentResult itself doesn't contain the original value, which is lost, AFAIK, with no way to retrieve it. So there's no way to access the raw customer object in the example above. Bummer.

    Creating a NegotiatedResult
    This leaves mucking around with custom ActionResults. ActionResults are MVC's standard way to return action method results - you basically specify that you would like to render your result in a specific format. Common ActionResults are ViewResult (ie. View(vn, model)), JsonResult, RedirectResult etc. They work well, are fairly testable, and form the 'standard' interface to return results from actions. The problem with this is mainly that you're explicitly saying that you want a specific result output type. This works well for many things, but sometimes you do want your result to be negotiated.

    My first crack at this is to create a simple ActionResult subclass that looks at the Accept header and, based on that, writes the output. I need to support JSON and XML content and HTML as well as text - so effectively 4 media types: application/json, text/xml, text/html and text/plain. Everything else is passed through as ContentResult - which effectively returns whatever .ToString() returns. Here's what the NegotiatedResult usage looks like:

        public ActionResult GetCustomers()
        {
            return new NegotiatedResult(repo.GetCustomers());
        }

        public ActionResult GetCustomer(int id)
        {
            return new NegotiatedResult("Show", repo.GetCustomer(id));
        }

    There are two overloads of this method - one that returns just the raw result value, and a second version that accepts an optional view name. The second version returns the specified Razor view only if text/html is requested - otherwise the raw data is returned. This is useful in applications where you have an HTML front end that can also double as an API endpoint using the same model data you send to the View. For the application I mentioned above, this was another actual use case we needed to address, so it was a welcome side effect of creating a custom ActionResult. There's also an extension method that directly attaches a Negotiated() method to the controller using the same syntax:

        public ActionResult GetCustomers()
        {
            return this.Negotiated(repo.GetCustomers());
        }

        public ActionResult GetCustomer(int id)
        {
            return this.Negotiated("Show", repo.GetCustomer(id));
        }

    Using either of these mechanisms now allows you to return JSON, XML, HTML or plain text results depending on the Accept header sent. Send application/json and you get just the Customer JSON data. Ditto for text/xml and XML data. Pass text/html for the Accept header and the "Show.cshtml" Razor view is rendered, passing the result model data and producing final HTML output. While this isn't as clean as passing just POCO objects back as I had intended originally, this approach fits better with how MVC action methods are intended to be used, and we get the bonus of being able to (optionally) specify a View to render for HTML.

    How does it work
    An ActionResult implementation is pretty straightforward: you inherit from ActionResult and implement the ExecuteResult method to send your output to the ASP.NET output stream. ActionResults are an easy way to effectively do post-processing on ASP.NET MVC controller actions just before the content is sent to the output stream, assuming your specific action result was used. Here's the full code of the NegotiatedResult class (you can also check it out on GitHub):

        /// <summary>
        /// Returns a content negotiated result based on the Accept header.
        /// Minimal implementation that works with JSON and XML content,
        /// can also optionally return a view with HTML.
        /// </summary>
        /// <example>
        /// // model data only
        /// public ActionResult GetCustomers()
        /// {
        ///     return new NegotiatedResult(repo.Customers.OrderBy( c=> c.Company) )
        /// }
        /// // optional view for HTML
        /// public ActionResult GetCustomers()
        /// {
        ///     return new NegotiatedResult("List", repo.Customers.OrderBy( c=> c.Company) )
        /// }
        /// </example>
        public class NegotiatedResult : ActionResult
        {
            /// <summary>
            /// Data stored to be 'serialized'. Public
            /// so it's potentially accessible in filters.
            /// </summary>
            public object Data { get; set; }

            /// <summary>
            /// Optional name of the HTML view to be rendered
            /// for HTML responses
            /// </summary>
            public string ViewName { get; set; }

            public static bool FormatOutput { get; set; }

            static NegotiatedResult()
            {
                FormatOutput = HttpContext.Current.IsDebuggingEnabled;
            }

            /// <summary>
            /// Pass in data to serialize
            /// </summary>
            /// <param name="data">Data to serialize</param>
            public NegotiatedResult(object data)
            {
                Data = data;
            }

            /// <summary>
            /// Pass in data and an optional view for HTML views
            /// </summary>
            /// <param name="viewName"></param>
            /// <param name="data"></param>
            public NegotiatedResult(string viewName, object data)
            {
                Data = data;
                ViewName = viewName;
            }

            public override void ExecuteResult(ControllerContext context)
            {
                if (context == null)
                    throw new ArgumentNullException("context");

                HttpResponseBase response = context.HttpContext.Response;
                HttpRequestBase request = context.HttpContext.Request;

                // Look for specific content types
                if (request.AcceptTypes.Contains("text/html"))
                {
                    response.ContentType = "text/html";

                    if (!string.IsNullOrEmpty(ViewName))
                    {
                        var viewData = context.Controller.ViewData;
                        viewData.Model = Data;

                        var viewResult = new ViewResult
                        {
                            ViewName = ViewName,
                            MasterName = null,
                            ViewData = viewData,
                            TempData = context.Controller.TempData,
                            ViewEngineCollection = ((Controller)context.Controller).ViewEngineCollection
                        };
                        viewResult.ExecuteResult(context.Controller.ControllerContext);
                    }
                    else
                        response.Write(Data);
                }
                else if (request.AcceptTypes.Contains("text/plain"))
                {
                    response.ContentType = "text/plain";
                    response.Write(Data);
                }
                else if (request.AcceptTypes.Contains("application/json"))
                {
                    using (JsonTextWriter writer = new JsonTextWriter(response.Output))
                    {
                        var settings = new JsonSerializerSettings();
                        if (FormatOutput)
                            settings.Formatting = Newtonsoft.Json.Formatting.Indented;

                        JsonSerializer serializer = JsonSerializer.Create(settings);
                        serializer.Serialize(writer, Data);
                        writer.Flush();
                    }
                }
                else if (request.AcceptTypes.Contains("text/xml"))
                {
                    response.ContentType = "text/xml";
                    if (Data != null)
                    {
                        using (var writer = new XmlTextWriter(response.OutputStream, new UTF8Encoding()))
                        {
                            if (FormatOutput)
                                writer.Formatting = System.Xml.Formatting.Indented;

                            XmlSerializer serializer = new XmlSerializer(Data.GetType());
                            serializer.Serialize(writer, Data);
                            writer.Flush();
                        }
                    }
                }
                else
                {
                    // just write data as a plain string
                    response.Write(Data);
                }
            }
        }

        /// <summary>
        /// Extends Controller with Negotiated() ActionResult that does
        /// basic content negotiation based on the Accept header.
        /// </summary>
        public static class NegotiatedResultExtensions
        {
            /// <summary>
            /// Return content-negotiated content of the data based on Accept header.
            /// Supports:
            ///   application/json - using JSON.NET
            ///   text/xml         - Xml as XmlSerializer XML
            ///   text/html        - as text, or an optional View
            ///   text/plain       - as text
            /// </summary>
            /// <param name="controller"></param>
            /// <param name="data">Data to return</param>
            /// <returns>serialized data</returns>
            public static NegotiatedResult Negotiated(this Controller controller, object data)
            {
                return new NegotiatedResult(data);
            }

            /// <summary>
            /// Return content-negotiated content of the data based on Accept header.
            /// Supports:
            ///   application/json - using JSON.NET
            ///   text/xml         - Xml as XmlSerializer XML
            ///   text/html        - as text, or an optional View
            ///   text/plain       - as text
            /// </summary>
            /// <param name="controller"></param>
            /// <param name="viewName">Name of the View to render when Accept is text/html</param>
            /// <param name="data">Data to return</param>
            /// <returns>serialized data</returns>
            public static NegotiatedResult Negotiated(this Controller controller, string viewName, object data)
            {
                return new NegotiatedResult(viewName, data);
            }
        }

    Output Generation - JSON and XML
    Generating output for XML and JSON is simple - you use the desired serializer and off you go. Using XmlSerializer and JSON.NET, it's just a handful of lines each to generate serialized output directly into the HTTP output stream. Please note this implementation uses JSON.NET for its JSON generation rather than the default JavaScriptSerializer that MVC uses, which I feel is an additional bonus of implementing this custom action. I'd already been using a custom JsonNetResult class previously, but now this is just rolled into this custom ActionResult. Just keep in mind that JSON.NET outputs slightly different JSON for certain things - collections, for example - so behavior may change. One addition to this implementation might be a flag to allow switching the JSON serializer.

    Html View Generation
    HTML view generation actually turned out to be easier than anticipated. Initially I used my generic ASP.NET ViewRenderer class that can render MVC views from any ASP.NET application. However, it turns out that since we are executing inside of an active MVC request, there's an easier way: we can simply create a custom ViewResult, populate its members and then execute it. The code in the text/html handling branch that renders the view is simply this:

        response.ContentType = "text/html";

        if (!string.IsNullOrEmpty(ViewName))
        {
            var viewData = context.Controller.ViewData;
            viewData.Model = Data;

            var viewResult = new ViewResult
            {
                ViewName = ViewName,
                MasterName = null,
                ViewData = viewData,
                TempData = context.Controller.TempData,
                ViewEngineCollection = ((Controller)context.Controller).ViewEngineCollection
            };
            viewResult.ExecuteResult(context.Controller.ControllerContext);
        }
        else
            response.Write(Data);

    which is a neat and easy way to render a Razor view, assuming you have an active controller that's ready for rendering. Sweet - dependency removed, which makes this class self-contained without any external dependencies other than JSON.NET.

    Summary
    While this isn't exactly a new topic, it's the first time I've actually delved into this with MVC. I've been doing content negotiation with Web API and, prior to that, with my REST library; this is the first time it's come up as an issue in MVC. But as I worked through this, I found that having a way to return both HTML Views *and* JSON and XML results from a single controller certainly is appealing in many situations, as in this particular application we are returning identical data models for each of these operations. Rendering content-negotiated views is something that I hope ASP.NET vNext will provide natively in the combined MVC and Web API model, but we'll see how this actually gets implemented. In the meantime, having a custom ActionResult that provides this functionality is a workable and easily adaptable way of handling this going forward. Whatever ends up happening in ASP.NET vNext, the abstraction can probably be changed to support the native features of the future. Anyway, I hope some of you found this useful, if not for direct integration then as insight into some of the rendering logic that MVC uses to get output into the HTTP stream...

    Related Resources:
    - Latest version of NegotiatedResult.cs on GitHub
    - Understanding Action Controllers
    - Rendering ASP.NET Views To String

    © Rick Strahl, West Wind Technologies, 2005-2014. Posted in MVC, ASP.NET, HTTP.

    Read the article

  • GNU/Linux interactive table content GUI editor?

    - by sdaau
    I often find myself needing to gather data (say from the internet) into a table, for comparison purposes. I usually need the final table output in HTML or MediaWiki mostly, but often also LaTeX. My biggest problem is that I often forget the correct table syntax for these markup languages, as well as what needs to be properly escaped in the inline data for the table to render correctly. So, I often wish there were a GUI application which provides a tabular framework - which I could stick "Always on Top" as a desktop window - into which I could paste content into specific cells, before finally exporting the table as code in the correct language. One application that partially allows this is Open/LibreOffice Calc.

    The good things here:
    - I can drag and drop browser content into a specifically targeted table cell (here B2)
    - "Rich" text / HTML code gets pasted
    - For long content, the cell (column) width stays put as it originally was

    The bad things:
    - When the cell height (due to content size) becomes larger than the Calc window, it becomes nearly impossible to scroll Calc contents up and down (at least with the mousewheel), as the view gets reset to the top-right corner of the selected cell
    - Calc shows an "endless"/unlimited field of cells, so not exactly a "table" - which I find visually very confusing (and cognitively taxing)
    - It can only export the table to HTML

    What I would need is an application that:
    - Allows for a limited-size table, but with quick adding of rows and columns (e.g. via corresponding + buttons)
    - Allows for quick setup of row and column height and width (as well as table size)
    - Stays put at those sizes, regardless of the size of content pasted in; if cell content overflows, cell scrollbars are shown (cell content could possibly be re-edited in a separate/new window); if the table overflows the window size, window scrollbars are shown
    - Exports the table in multiple formats (I'd need both HTML and MediaWiki), properly escaping cell content for each (the possibility to strip HTML tags from content pasted in cells, to get plain text, is a plus)
    - Lets me target a specific cell in the table for the paste operation - it doesn't have to be drag'n'drop though; a right click over a cell with "Paste content" is enough. I'd also want the ability to click in a specific cell and type in (plain text) content immediately.

    So, my question is: is there an application out there that already does something like this? The reason I'm asking is that - as the screenshots show - for instance Libre/OpenOffice allows it, but only somewhat (as using it for that purpose is tedious). I know there exist some GUI editors for Linux (both for UI like guile or HTML like amaya), but I don't know them well enough to pinpoint whether any of them would offer this kind of functionality (and at least in my searches, that kind of functionality, if present in diverse software, seems not to be advertised). Note I'm not interested in styling an HTML table, which is why I haven't used "table designer" in the title, but "table editor" (in lack of better terms) - I'm interested in (quickly) adjusting the row/column size of the table, populating it with pasted data (which is possibly HTML) in a GUI, and finally exporting such a table as self-contained HTML (or other) code.

    Read the article

  • PHP Calculating Text to Content Ratio

    - by James
    I am using the following code to calculate the text-to-code ratio. I think it is crazy that no one can agree on how to properly calculate the result. I am looking for any suggestions or ideas to improve this code that may make it more accurate.

        <?php
        // Returns the size of the content in bytes
        function findKb($content){
            $count = 0;
            $order = array("\r\n", "\n", "\r", "chr(13)", "\t", "\0", "\x0B");
            $content = str_replace($order, "12", $content);
            for ($index = 0; $index < strlen($content); $index++){
                $byte = ord($content[$index]);
                if ($byte <= 127) {
                    $count++;
                } else if ($byte >= 194 && $byte <= 223) {
                    $count = $count + 2;
                } else if ($byte >= 224 && $byte <= 239) {
                    $count = $count + 3;
                } else if ($byte >= 240 && $byte <= 244) {
                    $count = $count + 4;
                }
            }
            return $count;
        }

        // Collect size of entire code
        $filesize = findKb($content);

        // Remove anything within script tags
        $code = preg_replace("@<script[^>]*>.+</script[^>]*>@i", "", $content);

        // Remove anything within style tags (operating on $code so the
        // script stripping above is preserved)
        $code = preg_replace("@<style[^>]*>.+</style[^>]*>@i", "", $code);

        // Remove all tags from the markup
        $code = strip_tags($code);

        // Collapse extra whitespace in the content
        $code = preg_replace('/\s+/', ' ', $code);

        // Find the size of the remaining text
        $codesize = findKb($code);

        // Calculate percentage
        $percent = $codesize / $filesize;
        $percentage = $percent * 100;
        echo $percentage;
        ?>

    I don't know the exact calculations that are used, so this function is just my guess. Does anyone know what the proper calculations are, or whether my functions are close enough for a good judgement?

    Read the article

  • Overwriting the content from one MOSS content database to another

    - by 78lro
    We have a content database on our live MOSS server. It contains one site collection with several sub-sites. I'm using the stsadm export command to produce a .cmp file, then moving this to our test server in a different farm. I then want to import this content into the content database on our test farm, but using the stsadm import command leaves me with all the existing test data as well as the live data. I tried detaching the existing content database from test in Central Admin and creating a new, empty one, then running the import against that, but the import failed, as obviously there's no root site in the empty database. The aim is to have the data on test look like live, clearing out all the test data. Can anyone suggest a good approach to this type of problem?
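
    One commonly suggested approach, sketched with placeholder URLs and paths: since the content is a single site collection, stsadm backup/restore with -overwrite replaces the target site collection wholesale rather than merging the way import does:

        REM On the live farm: full-fidelity site collection backup
        stsadm -o backup -url http://live-server/sites/portal -filename portal.bak

        REM On the test farm: overwrite the existing site collection
        stsadm -o restore -url http://test-server/sites/portal -filename portal.bak -overwrite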

    Read the article

  • ASP.NET control in one content area needs to reference a control in another content area

    - by harrije
    I have a master page that divides the main content into two areas. There are two asp:ContentPlaceHolder controls in the body section of the master page, with IDs cphMain and cphSideBar respectively. One of the corresponding content pages has a control in cphMain that needs to refer to a control in cphSideBar. Specifically, a SqlDataSource in cphMain references a TextBox in cphSideBar to use as a parameter in the select command. When the content page loads, the following run-time error occurs:

        Could not find control 'TextBox1' in ControlParameter 'date_con'.
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.InvalidOperationException: Could not find control 'TextBox1' in ControlParameter 'date_con'.
        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

        Stack Trace:
        [InvalidOperationException: Could not find control 'TextBox1' in ControlParameter 'date_con'.]
           System.Web.UI.WebControls.ControlParameter.Evaluate(HttpContext context, Control control) +1753150
           System.Web.UI.WebControls.Parameter.UpdateValue(HttpContext context, Control control) +47
           System.Web.UI.WebControls.ParameterCollection.UpdateValues(HttpContext context, Control control) +114
           System.Web.UI.WebControls.SqlDataSource.LoadCompleteEventHandler(Object sender, EventArgs e) +43
           System.EventHandler.Invoke(Object sender, EventArgs e) +0
           System.Web.UI.Page.OnLoadComplete(EventArgs e) +8698566
           System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +735

    I kinda know what the problem is: ASP.NET can't resolve the ControlParameter because the SqlDataSource and the TextBox are in different asp:Content controls within the content page. As a workaround, I have another TextBox in cphMain alongside the SqlDataSource, with Visible=False. Then, in the Page_Load() event handler, the contents of the TextBox in cphSideBar are copied into the non-visible TextBox in cphMain. I get the results I want with this workaround, but it seems like such a hack. I was wondering if there is a better solution I'm missing. Please advise.
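
    A hedged sketch of an alternative that avoids the hidden TextBox (the parameter name and event wiring are assumptions based on the error above): remove the ControlParameter and supply the value in the SqlDataSource's Selecting event, locating the TextBox with a recursive search, since it sits in another ContentPlaceHolder:

        // Sketch: feed the cross-placeholder TextBox value to the query manually.
        protected void SqlDataSource1_Selecting(object sender, SqlDataSourceSelectingEventArgs e)
        {
            TextBox dateBox = (TextBox)FindControlRecursive(Page, "TextBox1");
            if (dateBox != null)
                e.Command.Parameters["@date_con"].Value = dateBox.Text;
        }

        // Page.FindControl alone won't cross naming containers, so walk the tree.
        private Control FindControlRecursive(Control root, string id)
        {
            if (root.ID == id)
                return root;
            foreach (Control child in root.Controls)
            {
                Control found = FindControlRecursive(child, id);
                if (found != null)
                    return found;
            }
            return null;
        }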

    Read the article
