Search Results

Search found 6412 results on 257 pages for 'cache'.

Page 45/257

  • Output Caching with IIS7 - How To for a dynamic aspx page?

    - by Lieven Cardoen
    I have a RetrieveBlob.aspx that gets some query string variables and returns an asset. Each URL corresponds to a unique asset. In RetrieveBlob.aspx a cache profile is set. In Web.Config the profile looks like this (under the system.web tag): <caching> <outputCache enableOutputCache="true" /> <outputCacheSettings> <outputCacheProfiles> <add duration="14800" enabled="true" varyByParam="*" name="AssetCacheProfile" /> </outputCacheProfiles> </outputCacheSettings> </caching> Ok, this works fine. When I put a breakpoint in the code-behind of RetrieveBlob.aspx, it is hit the first time but not on any later request. Now, I throw away the cache profile and instead put this in my Web.Config under system.webServer: <caching> <profiles> <add extension=".swf" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" /> <add extension=".flv" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" /> <add extension=".gif" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" /> <add extension=".png" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" /> <add extension=".mp3" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" /> <add extension=".jpeg" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" /> <add extension=".jpg" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" /> </profiles> </caching> Now the caching doesn't work anymore. What am I doing wrong? Is it possible to configure a caching profile for a dynamic aspx page under the caching tag of system.webServer? I already tried adding something like this: <add extension="RetrieveBlob.aspx" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:00:30" varyByQueryString="assetId, assetFileId" /> But it doesn't work (see the sketch below for one possible direction). An example URL is: http://{server}/{application}/trunk/RetrieveBlob.aspx?assetId=31809&assetFileId=11829

    Read the article
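
    One possible direction for the question above, offered as an assumption rather than a verified answer: the extension attribute in system.webServer/caching/profiles matches a file extension rather than a file name, so registering ".aspx" with varyByQueryString may be closer to what IIS expects than registering "RetrieveBlob.aspx". Kernel-mode caching cannot vary by query string, which is why this sketch leaves it off; treat all attribute values as assumptions to check against the IIS documentation.

        <!-- Hypothetical sketch: cache .aspx output for 30 seconds,
             varying on the two query string parameters used by RetrieveBlob.aspx -->
        <caching>
          <profiles>
            <add extension=".aspx"
                 policy="CacheForTimePeriod"
                 kernelCachePolicy="DontCache"
                 duration="00:00:30"
                 varyByQueryString="assetId,assetFileId" />
          </profiles>
        </caching>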

  • Organizing memcache keys

    - by Industrial
    Hi! I'm trying to find a good way to handle memcache keys for storing, retrieving and updating data to/from the cache layer in a more civilized way. I found this pattern, which looks great, but how do I turn it into a functional part of a PHP application (a rough sketch follows below)? The Identity Map pattern: http://martinfowler.com/eaaCatalog/identityMap.html Thanks!

    Read the article
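
    A minimal sketch of one way to combine an identity map with memcache in PHP, assuming the pecl Memcached extension; the class name, key prefix and TTL are illustrative, not part of any framework:

        <?php
        // Per-request identity map in front of memcache: each object is looked up
        // at most once per request, and memcache is only hit on a local miss.
        class UserMapper
        {
            private $memcached;
            private $identityMap = array(); // id => object, lives for this request only

            public function __construct(Memcached $memcached)
            {
                $this->memcached = $memcached;
            }

            public function find($id)
            {
                if (isset($this->identityMap[$id])) {
                    return $this->identityMap[$id];           // already seen this request
                }
                $user = $this->memcached->get('user:' . $id);
                if ($user === false) {
                    $user = $this->loadFromDatabase($id);     // cache miss: go to the DB
                    $this->memcached->set('user:' . $id, $user, 300); // cache for 5 minutes
                }
                return $this->identityMap[$id] = $user;
            }

            private function loadFromDatabase($id)
            {
                // Placeholder for the real query.
                return array('id' => $id);
            }
        }

        // Usage sketch:
        // $mc = new Memcached();
        // $mc->addServer('localhost', 11211);
        // $mapper = new UserMapper($mc);
        // $user = $mapper->find(42);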

  • JBoss: Authentication caches wrong login credentials

    - by aliaslan
    I am using JBoss AS 4.2.3 and JBoss Seam 2.1. My problem is that I can log in and log out with different users as long as I do not enter a wrong password for one user. Once that happens, it is no longer possible to authenticate any user; authentication always fails. If I delete the browser cookies, everything works fine again. I have tried setting DefaultCacheTimeout and DefaultCacheResolution to 0, but without luck. Why does JBoss cache wrong credentials?

    Read the article

  • config sqlcachedependency

    - by hotyi
    I know SqlCacheDependency can push modified data from the database to the cache. I want to know whether I can configure SqlCacheDependency so that it does not push the modified data immediately, but instead every minute (a possible configuration sketch follows below).

    Read the article
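
    For the polling-based SqlCacheDependency model (the one backed by a change-notification table rather than SQL Server query notifications), the poll interval is configurable in web.config. A sketch assuming a connection string named "MyDbConnection" and a one-minute interval; the names are placeholders:

        <system.web>
          <caching>
            <!-- pollTime is in milliseconds: check the change table once per minute -->
            <sqlCacheDependency enabled="true" pollTime="60000">
              <databases>
                <add name="MyDb" connectionStringName="MyDbConnection" />
              </databases>
            </sqlCacheDependency>
          </caching>
        </system.web>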

  • Website files caching?

    - by purpler
    I want to know how long files like CSS, HTML and JS should be cached via .htaccess settings, and why a different time is used for each file type. In a few examples I saw someone cache HTML for 10 minutes, JS for a month and imagery for a year (one common pattern is sketched below).

    Read the article
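
    A common pattern, shown here as a sketch that assumes Apache with mod_expires enabled (the exact lifetimes and MIME types are illustrative): HTML changes often so it gets a short lifetime, while CSS/JS and images change rarely and can be cached much longer, ideally combined with versioned file names so a new release busts the cache.

        # .htaccess sketch using mod_expires (assumes the module is enabled;
        # some servers register JS as application/x-javascript instead)
        <IfModule mod_expires.c>
            ExpiresActive On
            ExpiresByType text/html "access plus 10 minutes"
            ExpiresByType text/css "access plus 1 month"
            ExpiresByType application/javascript "access plus 1 month"
            ExpiresByType image/png "access plus 1 year"
            ExpiresByType image/jpeg "access plus 1 year"
            ExpiresByType image/gif "access plus 1 year"
        </IfModule>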

  • Approach for authentication and storing user details.

    - by cappuccino
    Hey folks, I am using the Zend Framework but my question is broadly about sessions / databases / auth (PHP, MySQL). Currently this is my approach to authentication: 1) The user signs in and the details are checked in the database - standard stuff really. 2) If the details are correct, only the user's unique ID is stored in the session along with a security token (user unique ID + IP + browser info + salt). The session is written to the filesystem. I've been reading around and many are saying that storing stuff in sessions is not a good idea, and that you should really only write a unique ID which refers back to the user's details, plus a security token to prevent session hijacking. So this is the approach I've taken; I used to write the user's details into the session, but I've moved that out. I wanted to know your opinions on this. I'm keeping sessions in the filesystem since I don't run on multiple servers, and since I'm only writing a tiny bit of data to the session, I thought performance would be better keeping sessions in the filesystem and reducing the load on the database. Once the session is written on authentication, it really is only read-only from then on. 3) The rest of the user's details (like subscription details, permissions, account info etc.) are cached in the filesystem (this can always be easily moved to memory if I want even more performance). So rather than keeping the user's details in the session, they are cached in the file system. I'm using Zend_Cache and the unique cache id is something like md5(/cache/auth/2892), where the number is the unique id of the user. I guess the benefit of this method is that once the user is logged in, there are essentially no database queries being run to get the user's details. I just wonder if this approach is better than keeping the whole lot in the session... 4) As the user moves throughout the site the only things that are checked are the ID in the session and the security token. So, overall the questions are: 1) is the filesystem more efficient than a database for this purpose, 2) have I taken enough security precautions, and 3) is separating the user details from the session into a cached file a pointless task (a rough caching sketch follows below)? Thanks.

    Read the article
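
    A rough sketch of the caching half of the approach above, using Zend Framework 1's Zend_Cache with the File backend; the cache ID scheme, lifetime and helper function are assumptions for illustration (Zend_Cache IDs may only contain letters, digits and underscores, hence hashing the path-style key):

        <?php
        // Build a file-backed cache for user details (ZF1 style).
        $frontendOptions = array('lifetime' => 3600, 'automatic_serialization' => true);
        $backendOptions  = array('cache_dir' => '/tmp/cache/auth/');
        $cache = Zend_Cache::factory('Core', 'File', $frontendOptions, $backendOptions);

        $userId  = 2892;                          // would come from the session in practice
        $cacheId = 'auth_' . md5('/cache/auth/' . $userId);

        $details = $cache->load($cacheId);
        if ($details === false) {
            // Cache miss: load subscription, permissions, account info from the DB,
            // then store it so later requests skip the database entirely.
            $details = loadUserDetailsFromDb($userId); // hypothetical helper
            $cache->save($details, $cacheId);
        }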

  • roll out of java web start clients

    - by user271858
    We are about to roll out a client-server application built with Java Web Start. Our main server is located in a country in Europe and we will have many users downloading our client on the first day, from all over the world. Since the client application is quite big (in the MB range), our WAN will be utilized a lot. Is there a way to cache or pre-distribute Java Web Start clients to servers "closer" to the user (for example a local server)? Thanks.

    Read the article

  • Rails page caching and flash messages

    - by KJF
    I'm pretty sure I can page cache the vast majority of my site, but the one thing preventing me from doing so is that my flash messages will not show, or they'll show at the wrong time. One thing I'm considering is writing the flash message to a cookie, reading and displaying it via JavaScript, and clearing the cookie once the message has been displayed. Has anyone had any success doing this, or are there better methods (a rough client-side sketch follows below)? Thanks.

    Read the article
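
    A rough client-side sketch of the cookie idea above, in plain JavaScript; the cookie name and the #flash element are assumptions, and the server side still has to write the cookie instead of (or in addition to) the normal flash:

        // Read a cookie by name; returns null if it is not set.
        function readCookie(name) {
          var match = document.cookie.match(new RegExp('(?:^|; )' + name + '=([^;]*)'));
          return match ? decodeURIComponent(match[1]) : null;
        }

        // On page load, show the flash message (if any) and expire the cookie
        // immediately so cached pages never re-display a stale message.
        window.onload = function () {
          var message = readCookie('flash_message');
          if (!message) { return; }
          var el = document.getElementById('flash');   // assumed container element
          if (el) {
            el.innerHTML = message;
            el.style.display = 'block';
          }
          document.cookie = 'flash_message=; path=/; expires=Thu, 01 Jan 1970 00:00:00 GMT';
        };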

  • Rails caches_page :index in Wrong Location

    - by Andy
    I have a controller Projects in my Rails app with: caches_page :index However, instead of the cached file being generated at /public/projects/index.html it is located at /public/projects.html. The web server (currently Mongrel) looks for */ directories before *.html files. So the http://…/projects request is routed through Rails and my index cache file is never served. How can I tell caches_page :index to generate the file at /public/projects/index.html instead?

    Read the article

  • JSP page is not getting reflected in Jetty

    - by [email protected]
    I am using the Jetty web server, and when loading a JSP page it is served from a cache and not from the server; my updated JSP page is not reflected. I deleted the file and checked it out from CVS again, but it made no difference; the same result came back. What could be the reason (a configuration sketch follows below)? Thanks in advance.

    Read the article
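
    Two things worth checking for the Jetty/JSP question above, offered as assumptions rather than a confirmed diagnosis: stale pre-compiled JSPs left in Jetty's work/temp directory (deleting that directory forces a recompile on restart), and the Jasper JSP servlet not running in development mode, so it never checks the .jsp file for changes. A sketch of the relevant init-params in web.xml; the values are illustrative:

        <!-- Hypothetical web.xml fragment for the Jasper JSP servlet -->
        <servlet>
          <servlet-name>jsp</servlet-name>
          <servlet-class>org.apache.jasper.servlet.JspServlet</servlet-class>
          <init-param>
            <!-- recompile a JSP when its source changes -->
            <param-name>development</param-name>
            <param-value>true</param-value>
          </init-param>
          <init-param>
            <!-- seconds between modification checks; 0 = check on every access -->
            <param-name>modificationTestInterval</param-name>
            <param-value>0</param-value>
          </init-param>
        </servlet>

    If the page still looks stale after that, a hard refresh in the browser rules out plain client-side caching.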

  • How to disable caching in Rails by IP address?

    - by huacnlee
    I use caches_page/caches_action for some pages, set to expire after a period (1 hour or 1 day); I don't expire the cache when the data is updated. When the editors create or update content, they can't see the new result on the page. I want to disable caching globally when the visitor's IP is inside my company. How can I do that?

    Read the article

  • Silverlight database

    - by numanwon
    I'm looking for an embedded database for a Silverlight application. I have found several choices, like MCObjects (www.mcobject.com) and EffiProz (www.effiproz.com). Which one is most suitable as a local cache?

    Read the article

  • WURFL's New API Question

    - by silent
    Hi all, I have a simple, maybe stupid, question. Here's the problem: for years, my company has been using the old WURFL PHP API. Now, with the release of the new WURFL PHP API that promises better performance, we want to use it. What I want to ask is: how do I update the cache? Can I use the old update_cache.php?

    Read the article

  • Can't Install php5-msql

    - by user210445
    Hello friends, I'm finishing the process of installing Apache/PHP/MySQL, but this shows up:

        # sudo apt-get install mysql-server php5-msql
        Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package php5-msql

    After some adjustments this happened:

        angel@Voix:~$ sudo apt-get install mysql-server php5-mysql
        Reading package lists... Done Building dependency tree Reading state information... Done mysql-server is already the newest version. php5-mysql is already the newest version. The following packages were automatically installed and are no longer required: gir1.2-ubuntuoneui-3.0 libubuntuoneui-3.0-1 thunderbird-globalmenu Use 'apt-get autoremove' to remove them. The following extra packages will be installed: mysql-server-5.5 Suggested packages: tinyca mailx The following packages will be upgraded: mysql-server-5.5 1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 4 not fully installed or removed. Need to get 0 B/8,827 kB of archives. After this operation, 32.7 MB of additional disk space will be used. Do you want to continue [Y/n]? debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable (Reading database ... dpkg: warning: files list file for package `mysql-server-5.5' missing, assuming package has no files currently installed. (Reading database ... 172971 files and directories currently installed.) Preparing to replace mysql-server-5.5 5.5.34-0ubuntu0.12.04.1 (using .../mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb) ... debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable dpkg: error processing /var/cache/apt/archives/mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb (--unpack): subprocess new pre-installation script returned error exit status 1 debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable dpkg: error while cleaning up: subprocess new post-removal script returned error exit status 1 Errors were encountered while processing: /var/cache/apt/archives/mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1)

        angel@Voix:~$ sudo apt-get install mysql-server-5.5
        Reading package lists... Done Building dependency tree Reading state information... Done The following packages were automatically installed and are no longer required: gir1.2-ubuntuoneui-3.0 libubuntuoneui-3.0-1 thunderbird-globalmenu Use 'apt-get autoremove' to remove them. Suggested packages: tinyca mailx The following packages will be upgraded: mysql-server-5.5 1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 4 not fully installed or removed. Need to get 0 B/8,827 kB of archives. After this operation, 32.7 MB of additional disk space will be used. debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable (Reading database ... dpkg: warning: files list file for package `mysql-server-5.5' missing, assuming package has no files currently installed. (Reading database ... 172971 files and directories currently installed.) Preparing to replace mysql-server-5.5 5.5.34-0ubuntu0.12.04.1 (using .../mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb) ... debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable dpkg: error processing /var/cache/apt/archives/mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb (--unpack): subprocess new pre-installation script returned error exit status 1 debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable dpkg: error while cleaning up: subprocess new post-removal script returned error exit status 1 Errors were encountered while processing: /var/cache/apt/archives/mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1)

        angel@Voix:~$ sudo apt-get install mysql-server php5-mysql
        Reading package lists... Done Building dependency tree Reading state information... Done mysql-server is already the newest version. php5-mysql is already the newest version. The following packages were automatically installed and are no longer required: gir1.2-ubuntuoneui-3.0 libubuntuoneui-3.0-1 thunderbird-globalmenu Use 'apt-get autoremove' to remove them. The following extra packages will be installed: mysql-server-5.5 Suggested packages: tinyca mailx The following packages will be upgraded: mysql-server-5.5 1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 4 not fully installed or removed. Need to get 0 B/8,827 kB of archives. After this operation, 32.7 MB of additional disk space will be used. Do you want to continue [Y/n]? y debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable (Reading database ... dpkg: warning: files list file for package `mysql-server-5.5' missing, assuming package has no files currently installed. (Reading database ... 172971 files and directories currently installed.) Preparing to replace mysql-server-5.5 5.5.34-0ubuntu0.12.04.1 (using .../mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb) ... debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable dpkg: error processing /var/cache/apt/archives/mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb (--unpack): subprocess new pre-installation script returned error exit status 1 debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable dpkg: error while cleaning up: subprocess new post-removal script returned error exit status 1 Errors were encountered while processing: /var/cache/apt/archives/mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1)

    (A sketch of how to track down the debconf lock follows below.)

    Read the article
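
    A possible way to approach the errors above, sketched as shell commands rather than a confirmed fix: the recurring message says /var/cache/debconf/config.dat is locked by another process, so the first step is to find out what holds the lock (often another apt/dpkg, aptitude or Software Center instance), let it finish or close it, and then let dpkg complete the half-configured packages:

        # See which process holds the debconf database open (either tool works)
        sudo fuser -v /var/cache/debconf/config.dat
        sudo lsof /var/cache/debconf/config.dat

        # Once nothing else is using apt/dpkg, finish the interrupted installs
        sudo dpkg --configure -a
        sudo apt-get install -f

    Also note that the very first failure, "Unable to locate package php5-msql", was simply a typo: the package is php5-mysql.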

  • Proxied calls not working as expected

    - by AndyH
    I have been modifying an application to have a cleaner client/server split to allow for load splitting and resource sharing etc. Everything is written to an interface, so it was easy to add a remoting layer to the interface using a proxy. Everything worked fine. The next phase was to add a caching layer to the interface, and again this worked fine and speed was improved, but not as much as I would have expected. On inspection it became very clear what was going on. I feel sure that this behavior has been seen many times before and there is probably a design pattern to solve the problem, but it eludes me and I'm not even sure how to describe it. It is easiest explained with an example.

    Let's imagine the interface is interface IMyCode { List<IThing> getLots( List<String> ); IThing getOne( String id ); } The getLots() method calls getOne() and fills up the list before returning. The interface is implemented at the client, which is proxied to a remoting client, which then calls the remoting server, which in turn calls the implementation at the server. At the client and the server layers there is also a cache. So we have: Client interface | Client cache | Remote client | Remote server | Server cache | Server interface.

    If we call getOne("A") at the client interface, the call is passed to the client cache, which faults. This then calls the remote client, which passes the call to the remote server. This then calls the server cache, which also faults, and so the call is eventually passed to the server interface, which actually gets the IThing. In turn the server cache is filled and finally the client cache also. If getOne("A") is again called at the client interface, the client cache has the data and it gets returned immediately. If a second client called getOne("B") it would fill the server cache with "B" as well as its own client cache. Then, when the first client calls getOne("B"), the client cache faults but the server cache has the data. This is all as one would expect and works well.

    Now let's call getLots( [ "C", "D" ] ). This works as you would expect by calling getOne() twice, but there is a subtlety here. The call to getLots() cannot directly make use of the cache. Therefore the sequence is to call the client interface, which in turn calls the remote client, then the remote server and eventually the server interface. This then calls getOne() to fill the list before returning. The problem is that the getOne() calls are being satisfied at the server when ideally they should be satisfied at the client. If you imagine that the client/server link is really slow, it becomes clear why the client call is more efficient than the server call once the client cache has the data. This example is contrived to illustrate the point.

    The more general problem is that you cannot just keep adding proxied layers to an interface and expect it to work as you would imagine. As soon as the call goes 'through' the proxy, any subsequent calls are made on the proxied side rather than the 'self' side. Have I failed to learn something, or learned it incorrectly? All this is implemented in Java and I haven't used EJBs.

    It seems that the example may be confusing. The problem is nothing to do with cache efficiencies; it is more to do with an illusion created by the use of proxies, or AOP techniques in general. When you have an object whose class implements an interface, there is an assumption that a call on that object might make further calls on that same object. For example, public String getInternalString() { return InetAddress.getLocalHost().toString(); } public String getString() { return getInternalString(); } If you get an object and call getString(), the result depends on where the code is running. If you add a remoting proxy to the class, then the result could be different for calls to getString() and getInternalString() on the same object. This is because the initial call gets 'deproxied' before the actual method is called. I find this not only confusing, but I also wonder how I can control this behavior, especially as the proxy may be applied by a third party. The concept is fine but the practice is certainly not what I expected. Have I missed the point somewhere? (A small decorator sketch follows below.)

    Read the article
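
    A small Java sketch of one way around the problem described above, assuming the caching layer is a hand-written decorator rather than a generated proxy; IThing is simplified to String to keep it self-contained. Because getLots() is implemented in the decorator by looping over its own getOne(), the per-item lookups stay on the client/caching side instead of being resolved entirely behind the proxy:

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        interface IMyCode {
            List<String> getLots(List<String> ids);
            String getOne(String id);
        }

        // Client-side caching decorator wrapped around the remoting proxy.
        class CachingClient implements IMyCode {
            private final IMyCode remote;                     // the remoting proxy
            private final Map<String, String> cache = new HashMap<>();

            CachingClient(IMyCode remote) { this.remote = remote; }

            @Override
            public String getOne(String id) {
                // Only go across the wire on a local cache miss.
                return cache.computeIfAbsent(id, remote::getOne);
            }

            @Override
            public List<String> getLots(List<String> ids) {
                List<String> result = new ArrayList<>();
                for (String id : ids) {
                    result.add(getOne(id));                   // stays on the caching side
                }
                return result;
            }
        }

    The trade-off is the one hinted at in the question: each uncached item now costs a separate round trip, so a further refinement would be to batch only the missing ids into a single remote getLots() call.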

  • PowerDNS CNAME with multiple A records produces unexpected results

    - by bwight
    This problem, from what I can tell, is isolated to PowerDNS. The servers are running two packages, pdns-static-3.0.1-1.i386.rpm and pdns-recursor-3.3-1.i386.rpm, on the most recent version of Amazon Linux. The Amazon EC2 load balancers are assigned a CNAME with multiple hosts. Below is an example of the actual behavior. Notice how the hosts are always in the same order.

        [root@localhost ~]# host cache.domain.com
        cache.domain.com is an alias for xxxxx.us-east-1.elb.amazonaws.com.
        xxxxx.us-east-1.elb.amazonaws.com has address aaa.aaa.aaa.aaa
        xxxxx.us-east-1.elb.amazonaws.com has address bbb.bbb.bbb.bbb
        [root@localhost ~]# host cache.domain.com
        cache.domain.com is an alias for xxxxx.us-east-1.elb.amazonaws.com.
        xxxxx.us-east-1.elb.amazonaws.com has address aaa.aaa.aaa.aaa
        xxxxx.us-east-1.elb.amazonaws.com has address bbb.bbb.bbb.bbb
        [root@localhost ~]# host cache.domain.com
        cache.domain.com is an alias for xxxxx.us-east-1.elb.amazonaws.com.
        xxxxx.us-east-1.elb.amazonaws.com has address aaa.aaa.aaa.aaa
        xxxxx.us-east-1.elb.amazonaws.com has address bbb.bbb.bbb.bbb

    Expected behavior is round robin for the hosts:

        [root@localhost ~]# host cache.domain.com
        cache.domain.com is an alias for xxxxx.us-east-1.elb.amazonaws.com.
        xxxxx.us-east-1.elb.amazonaws.com has address aaa.aaa.aaa.aaa
        xxxxx.us-east-1.elb.amazonaws.com has address bbb.bbb.bbb.bbb
        [root@localhost ~]# host cache.domain.com
        cache.domain.com is an alias for xxxxx.us-east-1.elb.amazonaws.com.
        xxxxx.us-east-1.elb.amazonaws.com has address bbb.bbb.bbb.bbb
        xxxxx.us-east-1.elb.amazonaws.com has address aaa.aaa.aaa.aaa
        [root@localhost ~]# host cache.domain.com
        cache.domain.com is an alias for xxxxx.us-east-1.elb.amazonaws.com.
        xxxxx.us-east-1.elb.amazonaws.com has address aaa.aaa.aaa.aaa
        xxxxx.us-east-1.elb.amazonaws.com has address bbb.bbb.bbb.bbb

    The addresses eventually do swap, but it seems to be on a 30-minute cache timer; changing the TTL of the record doesn't appear to affect anything. It appears as though the resolver has a cache of the response. This adversely affects my application because all of the load is only being sent to one of the load balancers (Availability Zones), so if I have servers in two zones then only one zone is under load at a time. Do you know how I can fix this so that each time the host is resolved the order of the addresses alternates?

    Read the article

  • spring jboss ehcache

    - by boyd4715
    I am trying to configure my application to make use of ehcache. I am using Spring 2.5.6, JBoss 5.1.0 GA and its embedded version of Hibernate, along with ehcache-core 2.3.1. I have done the following configuration:

        <property name="hibernateProperties">
          <props>
            <prop key="hibernate.dialect">org.hibernate.dialect.MySQLDialect</prop>
            <prop key="hibernate.hbm2ddl.auto">update</prop>
            <prop key="hibernate.show_sql">true</prop>
            <prop key="hibernate.jdbc.batch_size">20</prop>
            <prop key="hibernate.cache.provider_class">net.sf.ehcache.hibernate.SingletonEhCacheProvider</prop>
            <prop key="net.sf.ehcache.configurationResourceName">ehcache.xml</prop>
            <prop key="hibernate.cache.use_second_level_cache">true</prop>
            <prop key="hibernate.cache.use_structured_entries">true</prop>
            <prop key="hibernate.cache.use_query_cache">true</prop>
            <prop key="hibernate.generate_statistics">true</prop>
            <!-- prop key="hibernate.cache.use_second_level_cache">true</prop>
                 <prop key="hibernate.cache.region.factory_class">net.sf.ehcache.hibernate.EhCacheRegionFactory</prop>
                 <prop key="hibernate.cache.provider_class">net.sf.ehcache.hibernate.SingletonEhCacheProvider</prop>
                 <prop key="hibernate.cache.use_second_level_cache">true</prop>
                 <prop key="hibernate.cache.use_query_cache">true</prop-->
          </props>
        </property>

    This is my ehcache.xml:

        <defaultCache eternal="false" overflowToDisk="false" maxElementsInMemory="50000" timeToIdleSeconds="30" timeToLiveSeconds="6000" memoryStoreEvictionPolicy="LRU" />
        <cache name="com.model.SystemProperty" maxElementsInMemory="5000" eternal="true" overflowToDisk="false" memoryStoreEvictionPolicy="LFU" />

    This file is located in my classpath. I have added the following to my domain object:

        @Cache(usage = CacheConcurrencyStrategy.READ_WRITE, region="vsg.ecotrak.admin.store.domain.Store", include="non-lazy")

    When I start up the server, it gets stuck. Here is the output:

        13:17:09,000 INFO [SettingsFactory] Second-level cache: enabled
        13:17:09,000 INFO [SettingsFactory] Query cache: enabled
        13:17:09,016 INFO [SettingsFactory] Cache region factory : org.hibernate.cache.impl.bridge.RegionFactoryCacheProviderBridge
        13:17:09,017 INFO [RegionFactoryCacheProviderBridge] Cache provider: net.sf.ehcache.hibernate.SingletonEhCacheProvider

    Any idea as to why this is happening? I am running on Windows 7 64-bit, if that matters. I downgraded the ehcache jar to 1.2.3 and the server now starts.

    Read the article

  • Creating PDF documents dynamically using Umbraco and XSL-FO part 2

    - by Vizioz Limited
    Since my last post I have made a few modifications to the PDF generation, the main one being that the files are now dynamically renamed so that they reflect the name of the case study instead of all being called PDF.PDF which was not a very helpful filename, I just wanted to get something live last week, so decided that something was better than nothing :)The issue with the filenames comes down to the way that the PDF's are being generated by using an alternative template in Umbraco, this means that all you need to do is add " /pdf " to the end of a case study URL and it will create a PDF version of the case study. The down side is that your browser will merrily download the file and save it as PDF.PDF because that is the name of the last part of the URL.What you need to do is set the content-disposition header to be equal to the name you would like the file use, Darren Ferguson mentioned this on the Change the name of the PDF forum post.We have used the same technique for downloading dynamically generated excel files, so I thought it would be useful to create a small macro to set both this header and also to set the caching headers to prevent any caching issues, I think in the past we have experienced all possible issues, including various issues where IE behaves differently to other browsers when you are using SSL and so the below code should work in all situations!The template for the PDF alternative template is very simple:<%@ Master Language="C#" MasterPageFile="~/umbraco/masterpages/default.master" AutoEventWireup="true" %><asp:Content ID="Content1" ContentPlaceHolderID="ContentPlaceHolderDefault" runat="server"> <umbraco:Macro Alias="PDFHeaders" runat="server"></umbraco:Macro> <umbraco:Macro xsl="FO-CaseStudy.xslt" Alias="PDFXSLFO" runat="server"></umbraco:Macro></asp:Content>The following code snippet is the XSLT macro that simply creates our file name and then passes the file name into the helper function:<xsl:template match="/"> <xsl:variable name="fileName"> <xsl:text>Vizioz_</xsl:text> <xsl:value-of select="$currentPage/@nodeName" /> <xsl:text>_case_study.pdf</xsl:text> </xsl:variable> <xsl:value-of select="Vizioz.Helper:AddDocumentDownloadHeaders('application/pdf', $fileName)"/> </xsl:template>And the following code is the helper function that clears the current response and adds all the appropriate headers:public static void AddDocumentDownloadHeaders(string contentType, string fileName){ HttpResponse response = HttpContext.Current.Response; HttpRequest request = HttpContext.Current.Request; response.Clear(); response.ClearHeaders(); if (request.IsSecureConnection & request.Browser.Browser == "IE") { // Don't use the caching headers if the browser is IE and it's a secure connection // see: http://support.microsoft.com/kb/323308 } else { // force not using the cache response.AppendHeader("Cache-Control", "no-cache"); response.AppendHeader("Cache-Control", "private"); response.AppendHeader("Cache-Control", "no-store"); response.AppendHeader("Cache-Control", "must-revalidate"); response.AppendHeader("Cache-Control", "max-stale=0"); response.AppendHeader("Cache-Control", "post-check=0"); response.AppendHeader("Cache-Control", "pre-check=0"); response.AppendHeader("Pragma", "no-cache"); response.Cache.SetCacheability(HttpCacheability.NoCache); response.Cache.SetNoStore(); response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(-1)); } response.AppendHeader("Expires", DateTime.Now.AddMinutes(-1).ToLongDateString()); response.AppendHeader("Keep-Alive", "timeout=3, max=993"); 
    response.AddHeader("content-disposition", "attachment; filename=\"" + fileName + "\""); response.ContentType = contentType; } I will write another blog post soon with some more details about XSL-FO and how to create the PDFs dynamically. Please do re-tweet if you find this interesting :)

    Read the article

  • CacheAdapter 2.4 – Bug fixes and minor functional update

    - by Glav
    Note: If you are unfamiliar with the CacheAdapter library and what it does, you can read all about its awesome ability to utilise memory, Asp.Net Web, Windows Azure AppFabric and memcached caching implementations via a single unified, simple to use API from here and here. The CacheAdapter library is receiving an update to version 2.4 and is currently available on Nuget here.

    Update: The CacheAdapter has actually just had a minor revision to 2.4.1. This significantly increases the performance and reliability in the memcached scenario under more extreme loads. General to moderate usage won't see any noticeable difference though.

    Bugs: This latest version fixes a bug that is only present in the memcached implementation and is only seen at rare, intermittent times (making it particularly hard to find). The bug is where a cache node would be removed from the farm when errors in deserialization of cached objects would occur due to serialised data not being read from the stream in its entirety. The code also contains enhancements to better surface serialization exceptions to aid in the debugging process. This is also specifically targeted at the memcached implementation. This is important when moving from something like the memory or Asp.Net Web caching mechanisms to memcached, where the serialization rules are not as lenient. There are a few other minor bug fixes, code cleanup and a little refactoring.

    Minor feature addition: In addition to this bug fix, many people have asked for a single setting to either enable or disable the cache. In this version, you can disable the cache by setting the IsCacheEnabled flag to false in the application configuration file. Something like the example below:

        <Glav.CacheAdapter.MainConfig>
          <setting name="CacheToUse" serializeAs="String">
            <value>memcached</value>
          </setting>
          <setting name="DistributedCacheServers" serializeAs="String">
            <value>localhost:11211</value>
          </setting>
          <setting name="IsCacheEnabled" serializeAs="String">
            <value>False</value>
          </setting>
        </Glav.CacheAdapter.MainConfig>

    Your reasons to use this feature may vary (perhaps some performance testing or problem diagnosis). At any rate, disabling the cache will cause every attempt to retrieve data from the cache to result in a cache miss, returning null. If you are using the ICacheProvider with the delegate/Func<T> syntax to populate the cache, this delegate method will get executed every single time. For example, when the cache is disabled, the following delegate/Func<T> code will be executed every time:

        var data1 = cacheProvider.Get<SomeData>("cache-key", DateTime.Now.AddHours(1), () =>
        {
            // With the cache disabled, this data access code is executed every attempt to
            // get this data via the CacheProvider.
            var someData = new SomeData() { SomeText = "cache example1", SomeNumber = 1 };
            return someData;
        });

    One final note: If you access the cache directly via the ICache instance, instead of the higher level ICacheProvider API, you bypass this setting and still access the underlying cache implementation. Only the ICacheProvider instance observes the IsCacheEnabled setting. Thanks to those individuals who have used this library and provided feedback. If you have any suggestions or ideas, please submit them to the issue register on bitbucket (which is where you can grab all the source code from too).

    Read the article

  • Drupal's not reading correct values from DB

    - by John
    Hey everyone, here is my current problem. I am working with the chat module and I'm building a module that notifies users via AJAX that they have been invited to a chat. The current table structure for the invites table looks like this:

        |-------------------------------------------------------------------------|
        | CCID | NID | INVITER_UID | INVITEE_UID | NOTIFIED | ACCEPTED |
        |-------------------------------------------------------------------------|
        | int  | int | int         | int         | (0 or 1) | (0 or 1) |
        |-------------------------------------------------------------------------|

    I'm using the periodical updater plug-in for jQuery to continually poll the server to check for invites. When an invite is found, I set notified from 0 to 1. However, my problem is the periodical updater. When I first see that there is an invite, I notify the user and set notified to 1. On the next select, though, I get the same results as before, as if the update didn't work. But when I go check the database, I can see that it worked just fine. It's as if the query is querying a cache, but I can't figure it out (one possible cause is sketched below). My code for the periodical updater is as follows:

        window.onload = function() { var uid = $('a#chat_uid').html(); $.PeriodicalUpdater( '/steelylib/sites/all/modules/_chat_whos_online/ajax/ajax.php', //url to service { method: 'get', //send data via... data: {uid: uid}, //data to send minTimeout: '1000', //min time before server is polled (milli-sec.) maxTimeout: '20000', //max time before server is polled (milli-sec.) multiplyer: '1.5', //multiply against curretn poll time every time constant data is returned type: 'text', //type of data recieved (response type) maxCalls: 0, //max calls to make (0=unlimited) autoStop: 0 //max calls with constant data (0=unlimited/disabled) }, function(data) //callback function { alert( data ); //for now, until i get it working } ); }

    And my code for the ajax call is as follows:

        <?php #bootstrap Drupal, and call function, passing current user's uid. function _create_chat_node_check_invites($uid) { cache_clear_all('chatroom_chat_list', 'cache'); $query = "SELECT * FROM {chatroom_chat_invite} WHERE notified=0 AND invitee_uid=%d and accepted=0"; $query_results = db_query( $query, $uid ); $json = '{"invites":['; while( $row = db_fetch_object($query_results) ) { var_dump($row); global $base_url; $url = $base_url . '/content/privatechat' . $uid .'-' . $row->inviter_uid; $inviter = db_fetch_object( db_query( "SELECT name FROM {users} WHERE uid = %d", $row->inviter_uid ) ); $invitee = db_fetch_object( db_query( "SELECT name FROM {users} WHERE uid = %d", $row->invitee_uid ) ); #reset table $query = "UPDATE {chatroom_chat_invite} " ."SET notified=1 " ."WHERE inviter_uid=%d AND invitee_uid=%d"; db_query( $query, $row->inviter_uid, $row->invitee_uid ); $json .= '['; $json .= '"' . $url . '",'; $json .= '"' . ($inviter->name) . '",'; $json .= '"' . ($invitee->name) . '"' ; $json .= '],'; } $json = substr($json, 0, -1); $json .= ']}'; return $json; } ?>

    I can't figure out what is going wrong; any help is greatly appreciated!

    Read the article
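
    One possible explanation for the behaviour above, offered as an assumption rather than a diagnosis: the polling request is a plain GET, so the browser (or an intermediate proxy) may be serving a cached copy of the ajax.php response instead of re-running the query. A quick way to rule that out from the jQuery side (the PHP script could also send Cache-Control: no-cache headers):

        // Ask jQuery to append a cache-busting timestamp (_=...) to every GET request.
        $.ajaxSetup({ cache: false });

    If the fresh rows then appear, the stale data was coming from HTTP-level caching rather than from the database layer.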
