Search Results

Search found 24641 results on 986 pages for 'content provider'.


  • A web app provider has asked for specific browser config

    - by Matthew
    They have asked us to turn off caching in our browsers. I was aghast that they would ask such a thing. I said to them: to avoid caching, it is best practice to use <meta http-equiv="pragma" content="no-cache" /> <meta http-equiv="cache-control" content="no-cache" />; this should work across all browsers. Their reply was: "We need to refresh JavaScript at runtime, this will not help us – any more ideas?" I replied that I was unsure what they meant by "refresh JavaScript at runtime". If they are using Ajax, browser caching can affect the XMLHttpRequest open method, and adding these meta tags to the source has fixed that for me in the past. Browser caching only caches resources; it should have no effect on site scripting, and these meta tags will bypass browser caching. This is a reasonable request, isn't it?
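    If the application happens to be ASP.NET (an assumption; the exchange above doesn't say what it is built with), the same no-cache directives can also be emitted as real HTTP response headers rather than meta tags, which caches and proxies tend to honour more reliably. A minimal sketch:

        using System;
        using System.Web;

        public partial class SomePage : System.Web.UI.Page   // hypothetical page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Same directives as the meta tags above, sent as HTTP headers.
                Response.Cache.SetCacheability(HttpCacheability.NoCache); // Cache-Control: no-cache
                Response.Cache.SetNoStore();                              // adds no-store
                Response.AppendHeader("Pragma", "no-cache");              // for older HTTP/1.0 caches
            }
        }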

    Read the article

  • ASP.NET Web Forms Extensibility: Providers

    - by Ricardo Peres
    Introduction
    This will be the first of a number of posts on ASP.NET extensibility. At this moment I don't know exactly how many there will be, and I only know a couple of subjects that I want to talk about, so more will come in the next days. I have the sensation that the providers offered by ASP.NET are not widely known; although everyone uses, for example, sessions, they may not be aware of the extensibility points that Microsoft included. This post won't go into details of how to configure and extend each of the providers, but will hopefully give some pointers in that direction.

    Canonical
    These are the most widely known and used providers, coming from ASP.NET 1; chances are, you have used them already. They have good support for being invoked client side, either from a .NET application or from JavaScript, and lots of server-side controls use them, such as the Login control, for example.

    Membership
    The Membership provider is responsible for managing registered users, including creating new ones, authenticating them, changing passwords, etc. ASP.NET comes with two implementations: one that uses a SQL Server database and another that uses Active Directory. The base class is MembershipProvider, and new providers are registered in the membership section of the Web.config file, together with parameters for specifying minimum password lengths, complexities, maximum age, etc. One reason for creating a custom provider would be, for example, storing membership information in a different database engine.

        <membership defaultProvider="MyProvider">
          <providers>
            <add name="MyProvider" type="MyClass, MyAssembly"/>
          </providers>
        </membership>

    Role
    The Role provider assigns roles to authenticated users. The base class is RoleProvider and there are three out of the box implementations: XML-based, SQL Server and Windows-based. It is also registered in Web.config, through the roleManager section, where you can also say whether your roles should be cached in a cookie. If you want your roles to come from a different place, implement a custom provider.

        <roleManager defaultProvider="MyProvider">
          <providers>
            <add name="MyProvider" type="MyClass, MyAssembly" />
          </providers>
        </roleManager>

    Profile
    The Profile provider allows defining a set of properties that are tied to and made available for authenticated users, or even anonymous ones, which must then be tracked through anonymous identification. The base class is ProfileProvider and the only included implementation stores these settings in a SQL Server database. It is configured through the profile section, where you also specify the properties to make available; a custom provider would allow storing these properties in different locations.

        <profile defaultProvider="MyProvider">
          <providers>
            <add name="MyProvider" type="MyClass, MyAssembly"/>
          </providers>
        </profile>

    Basic
    OK, I didn't know what to call these, so Basic is probably as good a name as anything else. They are not supported client-side (it doesn't even make sense).

    Session
    The Session provider allows storing data tied to the current "session", which is normally created when a user first accesses the site, even when not yet authenticated, and remains throughout the visit. The base class is SessionStateStoreProviderBase, and the included implementations can store data in one of three locations: in process memory (the default; not suitable for web farms or increased reliability); a SQL Server database (best for reliability and clustering); or the ASP.NET State Service, a Windows service installed with the .NET Framework (OK for clustering). The configuration is made through the sessionState section. By adding a custom Session provider, you can store the data in different locations; think, for example, of a distributed cache.

        <sessionState mode="Custom" customProvider="MyProvider">
          <providers>
            <add name="MyProvider" type="MyClass, MyAssembly" />
          </providers>
        </sessionState>

    Resource
    A not so well known provider, it allows you to change the origin of localized resource elements. By default, these come from RESX files and are used whenever you use the Resources expression builder or the GetGlobalResourceObject and GetLocalResourceObject methods, but if you implement a custom provider, you can have these elements come from someplace else, such as a database. The base class is ResourceProviderFactory and there's only one internal implementation, which uses these RESX files. Configuration is through the globalization section.

        <globalization resourceProviderFactoryType="MyClass, MyAssembly" />

    Health Monitoring
    Health Monitoring is also probably not so well known, and actually not a good name for it. First, in order to understand what it does, you have to know that ASP.NET fires "events" at specific times and when specific things happen, such as when a user logs in or an exception is raised. These are not user interface events; you can create your own and fire them, and nothing visible will happen, but the Health Monitoring provider will detect them. You can configure it to do things when certain conditions are met, such as a number of events being fired in a certain amount of time. You define these rules and route them to a specific provider, which must inherit from WebEventProvider. Out of the box implementations include sending mail, logging to a SQL Server database, writing to the Windows Event Log, Windows Management Instrumentation, the IIS 7 trace infrastructure or the debugger trace. Its configuration is achieved through the healthMonitoring section, and a reason for implementing a custom provider would be, for example, locking down a web application in the event of a significant number of failed login attempts occurring in a small period of time.

        <healthMonitoring>
          <providers>
            <add name="MyProvider" type="MyClass, MyAssembly"/>
          </providers>
        </healthMonitoring>

    Sitemap
    The Sitemap provider allows defining the site's navigation structure, and the permissions required for each node, in a tree-like fashion. Usually this is statically defined, and the included provider allows it by supplying this structure in a Web.sitemap XML file. The base class is SiteMapProvider and you can extend it in order to supply your own source for the site's structure, which may even be dynamic. Its configuration must be done through the siteMap section.

        <siteMap defaultProvider="MyProvider">
          <providers>
            <add name="MyProvider" type="MyClass, MyAssembly" />
          </providers>
        </siteMap>

    Web Part Personalization
    Web Parts are better known to SharePoint users, but since ASP.NET 2.0 they have been included in the core Framework. Web Parts are server-side controls that offer certain configuration possibilities to the clients visiting the page where they are located. The infrastructure handles this configuration per user or globally for all users, and this provider is responsible for just that. The base class is PersonalizationProvider and the only included implementation stores settings in SQL Server. Add new providers through the personalization section.

        <webParts>
          <personalization defaultProvider="MyProvider">
            <providers>
              <add name="MyProvider" type="MyClass, MyAssembly"/>
            </providers>
          </personalization>
        </webParts>

    Build
    The Build provider is responsible for compiling whatever files are present in your web folder. There's a base class, BuildProvider, and, as can be expected, internal implementations for building pages (ASPX), master pages (Master), user web controls (ASCX), handlers (ASHX), themes (Skin), XML schemas (XSD), web services (ASMX, SVC), resources (RESX), browser capabilities files (Browser) and so on. You would write a build provider if you wanted to generate code from any kind of non-code file so that you have strong typing at development time. Configuration goes in the buildProviders section and it is per extension.

        <buildProviders>
          <add extension=".ext" type="MyClass, MyAssembly" />
        </buildProviders>

    New in ASP.NET 4
    Not exactly new, since they have existed since 2010, but in ASP.NET terms, still new.

    Output Cache
    The Output Cache for ASPX pages and ASCX user controls is now extensible, through the Output Cache provider, which means you can implement a custom mechanism for storing and retrieving cached data, for example in a distributed fashion. The base class is OutputCacheProvider and the only included implementation is internal. Configuration goes in the outputCache section, and on each page and user control you can choose the provider you want to use.

        <caching>
          <outputCache defaultProvider="MyProvider">
            <providers>
              <add name="MyProvider" type="MyClass, MyAssembly"/>
            </providers>
          </outputCache>
        </caching>

    Request Validation
    A big change introduced in ASP.NET 4 (and refined in 4.5, by the way) is extensible request validation, by means of a Request Validation provider. This means we are no longer limited to enabling or disabling request validation for all pages or for a specific page; we now have fine control over each of the elements of the request, including cookies, headers, query string and form values. The base provider class is RequestValidator and the configuration goes in the httpRuntime section.

        <httpRuntime requestValidationType="MyClass, MyAssembly" />

    Browser Capabilities
    The Browser Capabilities provider is new in ASP.NET 4, although the concept has existed since ASP.NET 2.0. The idea is to map a browser brand and version to its supported capabilities, such as JavaScript version, Flash support, ActiveX support, and so on. Previously, this was all hardcoded in .browser files located in %WINDIR%\Microsoft.NET\Framework(64)\vXXXXX\Config\Browsers, but now you can have a class inherit from HttpCapabilitiesProvider and implement your own mechanism. Register it in the browserCaps section.

        <browserCaps provider="MyClass, MyAssembly" />

    Encoder
    The Encoder provider is responsible for encoding every string that is sent to the browser in a page or header. This includes, for example, converting special characters to their standard codes, and it is implemented by the base class HttpEncoder. Another implementation adds protection against cross-site scripting (XSS) attacks. Build your own by inheriting from one of these classes if you want to add some additional processing to these strings. The configuration goes in the httpRuntime section.

        <httpRuntime encoderType="MyClass, MyAssembly" />

    Conclusion
    That's about it for ASP.NET providers. It was by no means a thorough description, but I hope I managed to raise your interest in this subject. There are lots of pointers on the Internet, so I only included direct references to the Framework classes and configuration sections. Stay tuned for more extensibility!
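    To make the extensibility point a bit more concrete, here is a minimal sketch of what a custom Output Cache provider could look like; the in-memory dictionary is purely illustrative (a real one would likely talk to a distributed cache), but the four overrides are the ones the OutputCacheProvider base class requires.

        using System;
        using System.Collections.Concurrent;
        using System.Web.Caching;

        // Illustrative only: stores entries in a static dictionary instead of a
        // distributed cache, just to show the shape of a custom provider.
        public class MyOutputCacheProvider : OutputCacheProvider
        {
            private static readonly ConcurrentDictionary<string, Tuple<object, DateTime>> store =
                new ConcurrentDictionary<string, Tuple<object, DateTime>>();

            public override object Get(string key)
            {
                Tuple<object, DateTime> entry;
                if (store.TryGetValue(key, out entry) && entry.Item2 > DateTime.UtcNow)
                    return entry.Item1;
                store.TryRemove(key, out entry); // expired or missing
                return null;
            }

            public override object Add(string key, object entry, DateTime utcExpiry)
            {
                // Per the contract, Add returns the existing entry if one is already cached.
                var existing = Get(key);
                if (existing != null)
                    return existing;
                Set(key, entry, utcExpiry);
                return entry;
            }

            public override void Set(string key, object entry, DateTime utcExpiry)
            {
                store[key] = Tuple.Create(entry, utcExpiry);
            }

            public override void Remove(string key)
            {
                Tuple<object, DateTime> removed;
                store.TryRemove(key, out removed);
            }
        }

    It would then be registered in the outputCache section shown above, with type="MyOutputCacheProvider, MyAssembly".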

    Read the article

  • Full-text indexing? You must read this

    - by Kyle Hatlestad
    For those of you who may have missed it, Peter Flies, Principal Technical Support Engineer for WebCenter Content, gave an excellent webcast on database searching and indexing in WebCenter Content.  It's available for replay along with a download of the slidedeck.  Look for the one titled 'WebCenter Content: Database Searching and Indexing'. One of the items he led with...and concluded with...was a recommendation on optimizing your search collection if you are using full-text searching with the Oracle database.  This can greatly improve your search performance.  And this would apply to both Oracle Text Search and DATABASE.FULLTEXT search methods.  Peter describes how a collection can become fragmented over time as content is added, updated, and deleted.  Just like you should defragment your hard drive from time to time to get your files placed on the disk in the most optimal way, you should do the same for the search collection. And optimizing the collection is just a simple procedure call that can be scheduled to be run automatically.   [Read more] 

    Read the article

  • Is it wise to store a big lump of json on a database row

    - by Ieyasu Sawada
    I have this project which stores product details from amazon into the database. Just to give you an idea on how big it is: [{"title":"Genetic Engineering (Opposing Viewpoints)","short_title":"Genetic Engineering ...","brand":"","condition":"","sales_rank":"7171426","binding":"Book","item_detail_url":"http://localhost/wordpress/product/?asin=0737705124","node_list":"Books > Science & Math > Biological Sciences > Biotechnology","node_category":"Books","subcat":"","model_number":"","item_url":"http://localhost/wordpress/wp-content/ecom-plugin-redirects/ecom_redirector.php?id=128","details_url":"http://localhost/wordpress/product/?asin=0737705124","large_image":"http://localhost/wordpress/wp-content/plugins/ecom/img/large-notfound.png","medium_image":"http://localhost/wordpress/wp-content/plugins/ecom/img/medium-notfound.png","small_image":"http://localhost/wordpress/wp-content/plugins/ecom/img/small-notfound.png","thumbnail_image":"http://localhost/wordpress/wp-content/plugins/ecom/img/thumbnail-notfound.png","tiny_img":"http://localhost/wordpress/wp-content/plugins/ecom/img/tiny-notfound.png","swatch_img":"http://localhost/wordpress/wp-content/plugins/ecom/img/swatch-notfound.png","total_images":"6","amount":"33.70","currency":"$","long_currency":"USD","price":"$33.70","price_type":"List Price","show_price_type":"0","stars_url":"","product_review":"","rating":"","yellow_star_class":"","white_star_class":"","rating_text":" of 5","reviews_url":"","review_label":"","reviews_label":"Read all ","review_count":"","create_review_url":"http://localhost/wordpress/wp-content/ecom-plugin-redirects/ecom_redirector.php?id=132","create_review_label":"Write a review","buy_url":"http://localhost/wordpress/wp-content/ecom-plugin-redirects/ecom_redirector.php?id=19186","add_to_cart_action":"http://localhost/wordpress/wp-content/ecom-plugin-redirects/add_to_cart.php","asin":"0737705124","status":"Only 7 left in stock.","snippet_condition":"in_stock","status_class":"ninstck","customer_images":["http://localhost/wordpress/wp-content/uploads/2013/10/ecom_images/51M2vvFvs2BL.jpg","http://localhost/wordpress/wp-content/uploads/2013/10/ecom_images/31FIM-YIUrL.jpg","http://localhost/wordpress/wp-content/uploads/2013/10/ecom_images/51M2vvFvs2BL.jpg","http://localhost/wordpress/wp-content/uploads/2013/10/ecom_images/51M2vvFvs2BL.jpg"],"disclaimer":"","item_attributes":[{"attr":"Author","value":"Greenhaven Press"},{"attr":"Binding","value":"Hardcover"},{"attr":"EAN","value":"9780737705126"},{"attr":"Edition","value":"1"},{"attr":"ISBN","value":"0737705124"},{"attr":"Label","value":"Greenhaven Press"},{"attr":"Manufacturer","value":"Greenhaven Press"},{"attr":"NumberOfItems","value":"1"},{"attr":"NumberOfPages","value":"224"},{"attr":"ProductGroup","value":"Book"},{"attr":"ProductTypeName","value":"ABIS_BOOK"},{"attr":"PublicationDate","value":"2000-06"},{"attr":"Publisher","value":"Greenhaven Press"},{"attr":"SKU","value":"G0737705124I2N00"},{"attr":"Studio","value":"Greenhaven Press"},{"attr":"Title","value":"Genetic Engineering (Opposing Viewpoints)"}],"customer_review_url":"http://localhost/wordpress/wp-content/ecom-customer-reviews/0737705124.html","flickr_results":["http://localhost/wordpress/wp-content/uploads/2013/10/ecom_images/5105560852_06c7d06f14_m.jpg"],"freebase_text":"No around the web data available yet","freebase_image":"http://localhost/wordpress/wp-content/plugins/ecom/img/freebase-notfound.jpg","ebay_related_items":[{"title":"Genetic Engineering (Introducing Issues With Opposing Viewpoints), , 
Good Book","image":"http://localhost/wordpress/wp-content/uploads/2013/10/ecom_images/140.jpg","url":"http://localhost/wordpress/wp-content/ecom-plugin-redirects/ecom_redirector.php?id=12165","currency_id":"$","current_price":"26.2"},{"title":"Genetic Engineering Opposing Viewpoints by DAVID BENDER - 1964 Hardcover","image":"http://localhost/wordpress/wp-content/uploads/2013/10/ecom_images/140.jpg","url":"http://localhost/wordpress/wp-content/ecom-plugin-redirects/ecom_redirector.php?id=130","currency_id":"AUD","current_price":"11.99"}],"no_follow":"rel=\"nofollow\"","new_tab":"target=\"_blank\"","related_products":[],"super_saver_shipping":"","shipping_availability":"","total_offers":"7","added_to_cart":""}] So the structure for the table is: asin title details (the product details in json) Will the performance suffer if I have to store like 10,000 products? Is there any other way of doing this? I'm thinking of the following, but the current setup is really the most convenient one since I also have to use the data on the client side: store the product details in a file. So something like ASIN123.json store the product details in one big file. (I'm guessing it will be a drag to extract data from this file) store each of the fields in the details in its own table field Thanks in advance!

    Read the article

  • jQuery - Loading content into div, styles not applied?

    - by Kenny Bones
    Hi! I'm trying to get this content loader to work, and I've managed to get it to fetch new content, but once the content is loaded it isn't styled correctly. Also, the character "é" becomes a question mark. A doctype problem? On top of that, the h2 tag that normally has Cufon applied to it is not being rendered. Basically, this content loader requires me to have a bunch of pages that are essentially the same, except for the content I want to retrieve. This way, users can use the actual URLs as you'd normally expect. Only when a link is clicked on an already loaded page is the content of the #content div actually replaced. I can post code here, but I think it's better to just watch it happen on the test page. It's very low on graphics, btw ;) http://www.matkalenderen.no Just click the blue text link and you'll see it. Also, the red button on the second loaded content is supposed to revert the content back to the previous state, but it's not being triggered or something. What's happening?

    Read the article

  • SQL Server 2008 to Sybase Linked Server (x64) -- Provider and permissions issues

    - by Cory Larson
    Good morning, we're testing a new SQL Server 2008 setup (64-bit) and one of our requirements was to get a linked server up and talking to a Sybase database. We've successfully done so using Sybase's 64-bit 15.5 drivers; however, I can't expand the catalog list from a remote machine (connecting to the '08 box with SSMS) without having my network account added as an Administrator on the actual box and then using Windows Authentication to connect to the server instance. This is going to be problematic when we go live. Has anybody experienced this, or have any input on the permissions in SQL Server 2008 with regard to linked servers? If I remove my network account from the Administrators group, the big error I'm getting is 'Msg 7302, Level 16, State 1, Line 41' with a description something like "Cannot create an instance of OLE DB provider "ASEOLEDB" for linked server "", and all research points to permissions issues. Thoughts? This document talks about DCOM configuration and permissions, but we've tried all of it with no luck. Thanks

    Read the article

  • Apache port forwarding with ZTE ZXV10 W300 router (provider specific firmware)

    - by dannote
    I'm trying to configure port forwarding for Apache 2.2 installed on Windows XP SP3 with a ZTE ZXV10 W300 router. The computer has a static IP, 192.168.1.2. Port forwarding is configured as follows:
    Enable: true
    Name: Apache
    Protocol: TCP (also tried TCP and UDP)
    WAN Host Start IP Address: empty
    WAN Host End IP Address: empty
    WAN Connection: stream
    WAN Start Port: 8080
    WAN End Port: 8080
    LAN Host IP Address: 192.168.1.2
    LAN Host Start Port: 8080
    LAN Host End Port: 8080
    Port 8080 is open for both TCP and UDP in the Windows Firewall. Apache configuration: Listen 192.168.1.2:8080
    Router firmware:
    Hardware Version: V1.0.01
    Software Version: V8.0.02T03_CFA
    Boot Loader Version: V1.1.2
    The provider is COMSTAR. I'm not sure, but they are said to flash routers with modified firmware. I have also tried to set up BitComet port forwarding on port 13514 and failed.

    Read the article

  • The MAPI call 'OpenMsgStore' failed: The MAPI provider failed Exchange 2003

    - by realitnzsam
    Hi guys, Recently we moved our Exchange 2003 (SP2) database from one drive to another. Now every other day or so we get errors coming up in the event log: Event Type: Error Event Source: MSExchangeSA Event Category: MAPI Session Event ID: 9175 Date: 10/03/2010 Time: 8:06:15 a.m. User: N/A Computer: SERVER Description: The MAPI call 'OpenMsgStore' failed with the following error: The attempt to log on to the Microsoft Exchange Server computer has failed. The MAPI provider failed. Microsoft Exchange Server Information Store ID no: 8004011d-0512-00000000 For more information, click http://www.microsoft.com/contentredirect.asp. Restarting the Exchange Information Store fixes this instantly, but until we do it Outlook won't connect to Exchange and Blackberry emails aren't pushing out.

    Read the article

  • DNS Provider/Domain Registrar

    - by Arcath
    I have a whole bunch of domains with my current web host. When I bought the package it came with a few gigs of web space and a bunch of MySQL databases, but times have changed and I don't use the hosting I'm paying for; I just use my host as a DNS server to forward everything elsewhere. The process of removing the host is going to require me to transfer all the domains to another package etc., which is going to cause disruption, so my question is: who is the best provider for DNS only? I don't want any space or mail, just someone to hold the domains and let me set any DNS options I want (A/MX/CNAME records for everything, and possibly the ability to point my domains at my own DNS server).

    Read the article

  • CentOS 6.5 SVN https - Unknown DAV provider: svn

    - by Programster
    I am trying to set up a CentOS 6.5 64-bit server with SVN over HTTPS. Unfortunately, after configuring the /etc/httpd/conf.d/subversion.conf file as follows (paths changed):

        <Location /repos>
          DAV svn
          SVNParentPath /path/to/svn/repos
          # Limit write permission to list of valid users
          <LimitExcept GET PROPFIND OPTIONS REPORT>
            # Require SSL connection for password protection
            SSLRequireSSL
            AuthType Basic
            AuthName "Authorization Realm"
            AuthUserFile /path/to/passwdfile
            Require valid-user
          </LimitExcept>
        </Location>

    I get the following error message when restarting httpd:

        Starting httpd: Syntax error on line 3 of /etc/httpd/conf.d/subversion.conf: Unknown DAV provider: svn

    I have triple-checked that the mod_dav_svn package is already installed: Package mod_dav_svn-1.6.11-10.el6_5.x86_64 already installed and latest version. Is my config wrong, or are there other packages I need to set up?
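    For what it's worth, a common cause of "Unknown DAV provider: svn" (an assumption here, since the full Apache configuration isn't shown) is that mod_dav_svn is installed but never loaded, for example because the LoadModule lines that normally sit at the top of the package-supplied subversion.conf were lost when the file was edited. A sketch of the directives to check for, assuming the stock CentOS module paths:

        LoadModule dav_svn_module   modules/mod_dav_svn.so
        LoadModule authz_svn_module modules/mod_authz_svn.so

    mod_dav itself also has to be loaded, which on CentOS is normally done from the main httpd.conf.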

    Read the article

  • VPN provider for remote access to servers from a known IP address

    - by brentkeller
    My organization has a few servers that are hosted by a provider, and we limit remote access to a whitelist, denying access to any IPs not on it. We would like to find a hosted VPN service that we can connect to that would give us a known IP that we could add to our whitelist, so we can reach the servers while on the road. Does anyone know of any such services? I don't think we can just set up the VPN built into Windows Server, since the servers are hosted. Any suggestions would be appreciated.

    Read the article

  • IE Search Provider: Specifying gTLD / Country-Specific Site

    - by jwa
    I am based in the UK, and as such typically use google.co.uk as my search engine. However, my employer is based in continental Europe, and thus my internet proxy is located overseas. As a result, IP geolocation places me outside of the UK. Google detects this and will redirect my searches from the address bar to a foreign Google domain. This leads to "local" answers having a higher ranking, many of which are not written in English! Is there a specific search provider / URL I can give to IE which will use a specific country-specific Google domain (.co.uk), rather than performing the location-based redirect?
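    One approach (a sketch, assuming IE's OpenSearch-based custom search providers, which is how IE7/IE8 register address-bar search engines) is to point the provider's template at the .co.uk host explicitly, so the query is sent to the UK domain rather than through google.com's geo-redirect:

        <?xml version="1.0" encoding="UTF-8"?>
        <OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
          <ShortName>Google UK</ShortName>
          <Url type="text/html" template="http://www.google.co.uk/search?q={searchTerms}"/>
        </OpenSearchDescription>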

    Read the article

  • remove an open wifi of a service provider near my house

    - by wannik
    I'm using Win 7. There is a service provider's wifi access point near my house. The wifi is not free and not WEP/WPA protected. Everyone can connect to it, and it shows the company's login page. (If a username and password are entered correctly, their customers can use the net.) I'm not a customer of the company and have my own access point in my house, but my computer always connects to that company's network. I tried to remove the network, but it keeps coming back and connecting to that access point instead of mine. What can I do to make my computer choose my access point first?

    Read the article

  • Taking advantage of Windows Azure CDN and Dynamic Pages in ASP.NET - Caching content from hosted services

    - by Shawn Cicoria
    With the updates to Windows Azure CDN announced this week [1], I wanted to help illustrate the capability with a working sample that serves up dynamic content from an ASP.NET site hosted in a WebRole. First, to get a good overview of the capability, you can read the Overview of the Windows Azure CDN [2] content on MSDN. When you set up the ability to cache content from a hosted service, the requirement is to provide a path to your role's DNS endpoint that ends in the path "/cdn". Additionally, you then map the CDN to that service. What the Windows Azure CDN does is allow you to map requests through the CDN to your host; the CDN will then make a request to your host on your client's behalf. The requirement is still that your client, and any URLs that are to be serviced through the CDN and this capability, have to use the CDN DNS name and not your host – no different from what the CDN does for Blob storage. The following two URLs are samples of how the client needs to issue the requests.
    Windows Azure hosted service URL: http://myHostedService.cloudapp.net/cdn/music.aspx - for regular "dynamic" content
    Windows Azure CDN URL: http://<identifier>.vo.msecnd.net/music.aspx - for CDN "cacheable" content
    The first URL paths the request directly to your host in the Azure datacenter. The second URL paths the request through the CDN infrastructure, where the CDN decides whether to request the content on behalf of the client from the Azure datacenter and your host on the /cdn path. The big advantage here is that you can apply logic to your content creation. What's important is emitting the CDN-friendly headers that allow the CDN to request and re-request only when you designate, based upon its rules of "staleness" as described in the overview page. With IIS 7.5 there is an underlying issue when the managed module "OutputCache" is enabled: in order to emit a good header for your content, you'll need to remove it, which in my sample helps provide CDN-friendly headers. You get IIS 7.5 when running under OS Family "2" in your service configuration. By default, when the OutputCache managed module is loaded, if you use HttpResponse.CachePolicy to set the HTTP headers for "max-age" when the HttpCacheability is "Public", you will NOT get "max-age" emitted as part of the "Cache-Control:" header. Instead, the OutputCache module will remove "max-age" and just emit "public". It works OK when cacheability is set to "private". To work around the issue and ensure that code like the following emits the full max-age along with the public option, you need to remove the module as follows:

        <system.webServer>
          <modules runAllManagedModulesForAllRequests="true">
            <remove name="OutputCache"/>
          </modules>
        </system.webServer>

        Response.Cache.SetCacheability(HttpCacheability.Public);
        Response.Cache.SetMaxAge(TimeSpan.FromMinutes(rv));

    In the attached solution, the way I approached it was to have a VirtualApplication under the root site that has its own web.config. This VirtualApplication is the /cdn of the site and, when deployed to Azure as a Web Role, will surface as a distinct IIS Application, along with a separate AppDomain. The CDN sample is a simple Web Forms site whose /default landing page contains 3 IFrames hosting:
    1. Content direct from the host @ http://xxxx.cloudapp.net/cdn
    2. Content via the CDN @ http://azxxx.vo.msecnd.net
    3. A simple list of recent requests, showing where each request came from.
    When you run the sample and hit the page for the first time, both the host frame and the CDN frame cause two initial requests to hit the host. You won't see the first requests in the list because of timing, but if you refresh, you'll see that the list shows you have 2 requests initially:
    1. sourced direct from the browser to the HOST
    2. sourced via the CDN
    The picture above shows the call-outs of each of those requests: green rows showing requests coming direct to the HOST, yellow showing the CDN request. The IP addresses of the green items come direct from the client, whereas the CDN's come from the CDN data center. As you refresh the page (hit Ctrl+F5 to force a full refresh and avoid a "304 – Not Modified"), you'll see that the request to the HOST gets processed directly, but the request to the CDN endpoint is serviced from the CDN and doesn't incur any additional request back to the HOST.
    The following are the headers from the CDN response:
    (Status-Line) HTTP/1.1 200 OK
    Age: 13
    Cache-Control: public, max-age=300
    Connection: keep-alive
    Content-Length: 6212
    Content-Type: image/jpeg; charset=utf-8
    Date: Fri, 11 Mar 2011 20:47:14 GMT
    Expires: Fri, 11 Mar 2011 20:52:01 GMT
    Last-Modified: Fri, 11 Mar 2011 20:47:02 GMT
    Server: Microsoft-IIS/7.5
    X-AspNet-Version: 4.0.30319
    X-Powered-By: ASP.NET
    The following are the headers from the HOST response:
    (Status-Line) HTTP/1.1 200 OK
    Cache-Control: public, max-age=300
    Content-Length: 6189
    Content-Type: image/jpeg; charset=utf-8
    Date: Fri, 11 Mar 2011 20:47:15 GMT
    Last-Modified: Fri, 11 Mar 2011 20:47:02 GMT
    Server: Microsoft-IIS/7.5
    X-AspNet-Version: 4.0.30319
    X-Powered-By: ASP.NET
    You can see that with the CDN request, the countdown (Age) starts for aging the content. The full sample is located here: CDNSampleSite.zip
    [1] http://blogs.msdn.com/b/windowsazure/archive/2011/03/09/now-available-updated-windows-azure-sdk-and-windows-azure-management-portal.aspx
    [2] http://msdn.microsoft.com/en-us/library/ff919703.aspx
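    For reference, a minimal sketch of the kind of page that would live in the /cdn virtual application and emit those headers; the class name and content are illustrative, not taken from the attached solution.

        using System;
        using System.Web;

        public partial class Music : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // CDN-friendly headers: public cacheability plus an explicit max-age
                // (five minutes here, matching the max-age=300 seen in the captures).
                Response.Cache.SetCacheability(HttpCacheability.Public);
                Response.Cache.SetMaxAge(TimeSpan.FromMinutes(5));

                // Dynamic content; the CDN only re-requests it from the host
                // after the max-age window has expired.
                Response.Write("Generated at " + DateTime.UtcNow.ToString("o"));
            }
        }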

    Read the article

  • Great ADF Content at Collaborate 12

    - by Shay Shmeltzer
    If you are attending the Collaborate 12 conference this month in Las Vegas and you are interested in Oracle ADF, you are going to be very busy. There are more than 20 sessions covering ADF and a special Wednesday ADF Enterprise Methodology Group event focused on ADF. Sessions range from how to get started to deep technical dives and real-world war stories from customers and their implementations. Also, don't forget to drop by the Oracle ADF booth at the Oracle Demoground and say hello. Here is a quick list of sessions that list ADF as a keyword in their content:
    Sun. Apr. 22 | 9613 | A Fusion Approach to Building Unified and Scalable Applications With Rich User Experience | 1:00 pm - 2:00 pm
    Mon. Apr. 23 | 669 | Fusion DBA Boot Camp: Tailoring Your Application to Customer Needs in a Upgrade-safe Way - Support in ADF and Fusion Apps | 9:45 am - 10:45 am
    Mon. Apr. 23 | 438 | Oracle Fusion Applications Security | 9:45 am - 10:45 am
    Mon. Apr. 23 | 176 | How to get started with Oracle ADF | 12:15 pm - 12:45 pm
    Mon. Apr. 23 | 330 | Fusion DBA Boot Camp: Implementing Self-Service Portals for Partners/Distributors Using EBS/WebCenter/Fusion Technologies | 1:15 pm - 2:15 pm
    Mon. Apr. 23 | 288 | Working with Portlets in ADF and Webcenter | 2:30 pm - 3:30 pm
    Tue. Apr. 24 | 503 | Who's Converting My Portal? | 2:00 pm - 3:00 pm
    Tue. Apr. 24 | 9370 | Coexistence of Oracle E-Business Suite and Oracle Fusion Applications: Platform Perspective | 2:00 pm - 3:00 pm
    Wed. Apr. 25 | 647 | Developing Custom BI Solutions - OBIEE vs. Oracle Application Development Framework (ADF) | 9:30 am - 10:30 am
    Wed. Apr. 25 | 173 | ADF: A Path to the Future for Dinosaur Nerds | 11:00 am - 12:00 pm
    Wed. Apr. 25 | 581 | How Will You Build Your Next System? | 11:00 am - 12:00 pm
    Wed. Apr. 25 | 10351 | Integrating CRM On Demand With the E-Business Suite to Supercharge Your Sales Team | 1:00 pm - 2:00 pm
    Wed. Apr. 25 | 9348 | Mobile, ADF, Coherence and Live Data Streaming? A Herbalife Case Study | 1:00 pm - 2:00 pm
    Wed. Apr. 25 | 566 | Getting Started with ADF | 1:00 pm - 2:00 pm
    Wed. Apr. 25 | 775 | WebCenter Portal Template Design and Development Best Practices | 1:00 pm - 2:00 pm
    Wed. Apr. 25 | 791 | Surfacing Oracle Social Network into Your Business Applications | 1:00 pm - 2:00 pm
    Wed. Apr. 25 | 9407 | The Latest Oracle E-Business Suite Release User Interface and Usability Enhancements | 1:00 pm - 2:00 pm
    Wed. Apr. 25 | 100080 | Extending JD Edwards with Oracle ADF and Oracle SOA Suite | 3:00 pm - 4:00 pm
    Wed. Apr. 25 | 172 | JDeveloper ADF and the Oracle database – friends not foes | 3:00 pm - 4:00 pm
    Wed. Apr. 25 | 595 | Achieving Real-Time Social Collaboration in WebCenter 11g | 3:00 pm - 4:00 pm
    Wed. Apr. 25 | 164 | ADF + Faces: Do I Have to Write ANY Java code? | 4:15 pm - 5:15 pm
    Thu. Apr. 26 | 257 | Mobile App Development with Oracle ADF Mobile: develop once and for all | 8:30 am - 9:30 am
    Thu. Apr. 26 | 177 | Understanding Oracle ADF and its role in Oracle Fusion Middleware | 9:45 am - 10:45 am
    Thu. Apr. 26 | 523 | Making Next-Generation Mobile Apps With The Latest ADF Mobile Tools | 9:45 am - 10:45 am
    Thu. Apr. 26 | 356 | ADF Integration with WebCenter Content | 11:00 am - 12:00 pm

    Read the article

  • Designing a Content-Based ETL Process with .NET and SFDC

    - by Patrick
    As my firm makes the transition to using SFDC as our main operational system, we've spun together a couple of SFDC portals where we can post customer-specific documents to be viewed at will. As such, we've had the need for pseudo-ETL applications to be implemented that are able to extract metadata from the documents our analysts generate internally (most are industry-standard PDFs, XML, or MS Office formats) and place them in networked "queue" folders. From there, our applications scoop up the queued documents and upload them to the appropriate SFDC CRM Content Library along with some select pieces of metadata. I've mostly used DbAmp to broker communication with SFDC (DbAmp is a Linked Server provider that allows you to use SQL conventions to interact with your SFDC Org data). I've been able to create [console] applications in C# that work pretty well, and they're usually structured something like this:

        static void Main()
        {
            // Load parameters from app.config.
            // Get documents from queue.
            var files = someInterface.GetFiles(someFilterOrRegexPattern);
            foreach (var file in files)
            {
                // Extract metadata from the file.
                // Validate some attributes of the file; add any validation errors to an
                // in-memory structure (e.g. List<ValidationErrors>).
                if (isValid)
                {
                    var fileData = File.ReadAllBytes(file);
                    // Upload using some wrapper for an ORM or DAL.
                    someInterface.Upload(fileData, meta.Param1, meta.Param2, ...);
                }
                else
                {
                    // Bounce the file.
                }
            }
            // Report any validation errors (via message bus or SMTP or some such).
        }

    And that's pretty much it. Most of the time I wrap all these operations in a "Worker" class that takes the needed interfaces as constructor parameters. This approach has worked reasonably well, but I just get this feeling in my gut that there's something awful about it and would love some feedback. Is writing an ETL process as a C# console app a bad idea? I'm also wondering if there are some design patterns that would be useful in this scenario that I'm clearly overlooking. Thanks in advance!
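    A hedged sketch of the "Worker" arrangement described above; the interface and class names are illustrative, not taken from the actual code, but they show how constructor-injected dependencies keep the console Main() thin and the pipeline testable.

        using System;
        using System.Collections.Generic;
        using System.IO;

        public interface IDocumentQueue
        {
            IEnumerable<string> GetFiles(string filterPattern);
        }

        public interface IContentUploader
        {
            void Upload(byte[] fileData, string param1, string param2);
        }

        public class EtlWorker
        {
            private readonly IDocumentQueue queue;
            private readonly IContentUploader uploader;

            public EtlWorker(IDocumentQueue queue, IContentUploader uploader)
            {
                this.queue = queue;
                this.uploader = uploader;
            }

            // Returns the errors so the caller decides how to report them (mail, message bus, etc.).
            public IList<string> Run(string filterPattern)
            {
                var errors = new List<string>();
                foreach (var file in queue.GetFiles(filterPattern))
                {
                    try
                    {
                        // Metadata extraction and validation elided for brevity.
                        var data = File.ReadAllBytes(file);
                        uploader.Upload(data, "param1", "param2");
                    }
                    catch (Exception ex)
                    {
                        // Collect instead of throwing so one bad document doesn't stop the batch.
                        errors.Add(file + ": " + ex.Message);
                    }
                }
                return errors;
            }
        }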

    Read the article

  • how can a firefox extension detect content-type of the page loaded ?

    - by bosky101
    Since my extension's page-load handler is triggered even when I view CSS or JS files, I want to add another check so that my extension only runs when the current page's content type is text/html. For example, in my page load handler:

        function onPageload() {
            // only want to proceed if the content type reflects a text/html or */html page
            if (contentTypeIsHtml()) {
                // continue here
            }
        }

    What should contentTypeIsHtml() do?

    Read the article
