Search Results

Search found 14653 results on 587 pages for 'disk cache'.

  • Best solution to keep data secure

    - by mrwooster
    What is the simplest and most elegant way of storing a small amount of data in a reasonably secure way? I am not looking for ridiculous levels of advanced encryption (AES-256 is more than enough) and I am only looking to encrypt a small number of files. The files I wish to encrypt are mostly password lists and SSH keys for servers. Unfortunately it is impossible to keep track of ever-changing passwords for my servers (and SSH keys), and so I need to keep a list of the passwords. Obviously this list needs to be secure, and also portable (I work from multiple locations). At the moment, I use a 10MB encrypted disk image on my Mac (a standard .dmg, AES-256) and just mount it whenever I need access to the data. To my knowledge this is very secure and I am very happy using it. However, the data is not very portable. I would like to be able to access my data from other machines (especially ones running Linux), and I am aware that there are quite a few issues trying to mount an encrypted .dmg on Linux. An alternative I have considered is to create a tar archive containing the files and use gpg --symmetric to encrypt it, but this is not a very elegant solution as it requires gpg to be installed on every system. So, what other solutions exist, and which ones would you consider to be the most elegant? Thank you

    Read the article

  • FDE / SSD - partition and leave some unencrypted?

    - by Web Design Hero
    Just bought a used beast of a desktop PC. The system drive is set up as a RAID 0 of SSDs (Intel 510 drives, 128 each). I will probably not have too many programs beyond Office and maybe Adobe CS if I spring for it; I will be keeping big data on a regular HDD. My question is about setting up TrueCrypt with my configuration. I have not previously done full disk encryption, but I feel that it's probably a good idea. I have done some speed tests using file containers on the HDD and the SSD with TrueCrypt. While there is a huge hit with the SSDs and TrueCrypt, it still outperforms the HDD on its own by a good margin, so I think I will be okay for my needs with TrueCrypt. I have seen in a few places that they recommend partitioning the drive and leaving some of the SSD outside TrueCrypt; does this really make a difference? If so, how much should I leave? Will there be any issue in the RAID 0 configuration? I am not really concerned about the wear-levelling issue; I would rather lose data and be secure. But since I don't necessarily need all that space, I would like to optimize my setup for security and speed.

    Read the article

  • How to access files on a USB-connected NTFS disk removed from a Win7 notebook?

    - by yosh m
    My daughter seems to have fried her motherboard in her Lenovo Notebook. The disk seems to be fine. I removed the disk and used a universal disk-to-USB kit to attach it to another computer. The disk is recognized fine and I can peruse it in Windows Explorer. The problem is that the files she would like to recover from it are located in places that Windows refuses to let me access. When I try, for example, to enter the directory "Documents and Settings" it gives me an "Access is denied" error. Same thing when I try to go into the various User directories and other locations. I thought to try creating a Ghost image & retrieve the files from that, but Ghost seems to croak when I try to run it - apparently it doesn't like accessing the disk via a USB connection (even though I've told it to install the drivers for USB). Any other ideas about how to get to the files I need, either through Windows or perhaps some other OS that I could boot from a CD that can read an NTFS disk? Thanks, Yosh

    Read the article

  • How to set the Cache-Control response header in Sitecore?

    - by user822211
    By default, it looks like Sitecore does not cache pages. In web.config I set <setting name="DisableBrowserCaching" value="false"/> and created a pipeline processor that calls page.Response.Cache.SetExpires(DateTime.Now.AddSeconds(60)); and page.Response.Cache.SetCacheability(HttpCacheability.Public); but it did not work; the response header stays no-cache. By the way, I added the processor to the renderLayout pipeline. Does anyone know why? Thanks!
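    A quick way to check whether something later in the request is overriding those values is to set the header as late as possible. The sketch below is not Sitecore's pipeline API; it is a plain ASP.NET IHttpModule (the class name and the 60-second window are made up for illustration) hooked to PreSendRequestHeaders and registered in web.config:

        using System;
        using System.Web;

        // Hypothetical module: re-applies the desired Cache-Control just before
        // the headers go out, so a later no-cache override becomes visible.
        public class PublicCacheModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.PreSendRequestHeaders += delegate(object sender, EventArgs e)
                {
                    HttpResponse response = ((HttpApplication)sender).Context.Response;
                    response.Cache.SetCacheability(HttpCacheability.Public);
                    response.Cache.SetExpires(DateTime.UtcNow.AddSeconds(60));
                    response.Cache.SetMaxAge(TimeSpan.FromSeconds(60));
                };
            }

            public void Dispose() { }
        }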

    Read the article

  • Using AMDU to extract datafiles from a DISKGROUP that cannot be MOUNTed

    - by Liu Maclean
    AMDU is a utility Oracle provides for ASM; its full name is ASM Metadata Dump Utility (AMDU). AMDU has three main capabilities: 1. dumping the metadata on the ASM disks to the file system for analysis; 2. extracting the contents of files stored in ASM and writing them out as OS files, whether or not the diskgroup is mounted; 3. printing block metadata, either as C structures or as a hex dump. What we cover here is using AMDU to extract datafiles from an ASM DISKGROUP. When an ASM instance or a DISKGROUP cannot be brought up, the question that usually matters is how to get the datafiles out of a DISKGROUP that will not MOUNT. AMDU provides exactly that: it can pull the RDBMS datafiles off the ASM disks without the DISKGROUP ever being MOUNTed. Note that AMDU ships with 11g, but it can also be used against 10g ASM.
    Suppose the following scenario: the database's SPFILE, CONTROLFILE and DATAFILEs all live in an ASM DISKGROUP, and an ASM ORA-600 error prevents the DISKGROUP from being MOUNTed. We can then use AMDU to extract these files directly from the ASM disks.
    Step 1: locate the SPFILE, CONTROLFILE and DATAFILEs. If you have a backup of the SPFILE, convert it to a PFILE, open it, and find the control_files parameter:
    SQL> show parameter control_files
    NAME          TYPE   VALUE
    ------------- ------ ------------------------------
    control_files string +DATA/prodb/controlfile/current.260.794687955, +FRA/prodb/controlfile/current.256.794687955
    The control_files parameter gives the location of the controlfile inside ASM. In +DATA/prodb/controlfile/current.260.794687955, 260 is the controlfile's FILE NUMBER within the +DATA DISKGROUP. You also need the ASM disk DISCOVERY PATH, which can be taken from the asm_diskstring parameter in the ASM SPFILE.
    [oracle@mlab2 oracle.SupportTools]$ unzip amdu_X86-64.zip
    Archive: amdu_X86-64.zip
    inflating: libskgxp11.so
    inflating: amdu
    inflating: libnnz11.so
    inflating: libclntsh.so.11.1
    [oracle@mlab2 oracle.SupportTools]$ export LD_LIBRARY_PATH=./
    [oracle@mlab2 oracle.SupportTools]$ ./amdu -diskstring '/dev/asm*' -extract data.260
    amdu_2009_10_10_20_19_17/
    AMDU-00204: Disk N0006 is in currently mounted diskgroup DATA
    AMDU-00201: Disk N0006: '/dev/asm-disk10'
    AMDU-00204: Disk N0003 is in currently mounted diskgroup DATA
    AMDU-00201: Disk N0003: '/dev/asm-disk5'
    AMDU-00204: Disk N0002 is in currently mounted diskgroup DATA
    AMDU-00201: Disk N0002: '/dev/asm-disk6'
    [oracle@mlab2 oracle.SupportTools]$ cd amdu_2009_10_10_20_19_17/
    [oracle@mlab2 amdu_2009_10_10_20_19_17]$ ls
    DATA_260.f report.txt
    [oracle@mlab2 amdu_2009_10_10_20_19_17]$ ls -l
    total 9548
    -rw-r--r-- 1 oracle oinstall 9748480 Oct 10 20:19 DATA_260.f
    -rw-r--r-- 1 oracle oinstall 9441 Oct 10 20:19 report.txt
    Next, point the instance at the extracted controlfile copy DATA_260.f and startup mount the RDBMS:
    SQL> alter system set control_files='/opt/oracle.SupportTools/amdu_2009_10_10_20_19_17/DATA_260.f' scope=spfile;
    System altered.
    SQL> startup force mount;
    ORACLE instance started.
    Total System Global Area 1870647296 bytes
    Fixed Size 2229424 bytes
    Variable Size 452987728 bytes
    Database Buffers 1409286144 bytes
    Redo Buffers 6144000 bytes
    Database mounted.
    SQL> select name from v$datafile;
    NAME
    --------------------------------------------------------------------------------
    +DATA/prodb/datafile/system.256.794687873
    +DATA/prodb/datafile/sysaux.257.794687875
    +DATA/prodb/datafile/undotbs1.258.794687875
    +DATA/prodb/datafile/users.259.794687875
    +DATA/prodb/datafile/example.265.794687995
    +DATA/prodb/datafile/mactbs.267.794688457
    6 rows selected.
    With the instance in mount state, v$datafile lists the datafile names, including their FILE NUMBERs within the DISKGROUP. Run ./amdu -diskstring '/dev/asm*' -extract for each file number to pull the datafiles out, then verify them with dbv. Sample output:
    [oracle@mlab2 oracle.SupportTools]$ ./amdu -diskstring '/dev/asm*' -extract data.256
    amdu_2009_10_10_20_22_21/
    AMDU-00204: Disk N0006 is in currently mounted diskgroup DATA
    AMDU-00201: Disk N0006: '/dev/asm-disk10'
    AMDU-00204: Disk N0003 is in currently mounted diskgroup DATA
    AMDU-00201: Disk N0003: '/dev/asm-disk5'
    AMDU-00204: Disk N0002 is in currently mounted diskgroup DATA
    AMDU-00201: Disk N0002: '/dev/asm-disk6'
    [oracle@mlab2 oracle.SupportTools]$ cd amdu_2009_10_10_20_22_21/
    [oracle@mlab2 amdu_2009_10_10_20_22_21]$ ls
    DATA_256.f report.txt
    [oracle@mlab2 amdu_2009_10_10_20_22_21]$ dbv file=DATA_256.f
    DBVERIFY: Release 11.2.0.3.0 - Production on Sat Oct 10 20:23:12 2009
    Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
    DBVERIFY - Verification starting : FILE = /opt/oracle.SupportTools/amdu_2009_10_10_20_22_21/DATA_256.f
    DBVERIFY - Verification complete
    Total Pages Examined : 90880
    Total Pages Processed (Data) : 59817
    Total Pages Failing (Data) : 0
    Total Pages Processed (Index): 12609
    Total Pages Failing (Index): 0
    Total Pages Processed (Other): 3637
    Total Pages Processed (Seg) : 1
    Total Pages Failing (Seg) : 0
    Total Pages Empty : 14817
    Total Pages Marked Corrupt : 0
    Total Pages Influx : 0
    Total Pages Encrypted : 0
    Highest block SCN : 1125305 (0.1125305)

    Read the article

  • Bad motherboard / controller / HDs?

    - by quidpro
    On a leased server, I am running into some timing issues with an application that requires precise timing. The server is a dual Xeon E5410 running on a Supermicro X7DVL-3 motherboard under CentOS 5.5 x64. The application I am running is timer-sensitive and keeps sensing drift, whether under load or at idle, but especially under load. I did some investigating with atop and dd and found some mind-blowing numbers. Mind you, I am no Linux guru, but something sure seems out of whack. I ran
    dd bs=4096 if=/dev/zero of=/bigtestfile
    to generate disk activity. Regardless of whether I wrote to sda or sdb, my DSK value in atop would go over 100%, at one time peaking at 1700%:
    DSK | sdb | busy 675% | read 0 | write 110 | avio 78 ms |
    Here are the smartctl outputs:
    # smartctl -A /dev/sda
    smartctl version 5.38 [x86_64-redhat-linux-gnu] Copyright (C) 2002-8 Bruce Allen
    Home page is http://smartmontools.sourceforge.net/
    === START OF READ SMART DATA SECTION ===
    SMART Attributes Data Structure revision number: 16
    Vendor Specific SMART Attributes with Thresholds:
    ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
      1 Raw_Read_Error_Rate     0x000b 200   200   051    Pre-fail Always  -           0
      3 Spin_Up_Time            0x0007 165   165   021    Pre-fail Always  -           2750
      4 Start_Stop_Count        0x0032 100   100   040    Old_age  Always  -           21
      5 Reallocated_Sector_Ct   0x0033 200   200   140    Pre-fail Always  -           0
      7 Seek_Error_Rate         0x000a 200   200   051    Old_age  Always  -           0
      9 Power_On_Hours          0x0032 065   065   000    Old_age  Always  -           25831
     10 Spin_Retry_Count        0x0012 100   253   051    Old_age  Always  -           0
     11 Calibration_Retry_Count 0x0012 100   253   051    Old_age  Always  -           0
     12 Power_Cycle_Count       0x0032 100   100   000    Old_age  Always  -           21
    194 Temperature_Celsius     0x0022 116   093   000    Old_age  Always  -           27
    196 Reallocated_Event_Count 0x0032 200   200   000    Old_age  Always  -           0
    197 Current_Pending_Sector  0x0012 200   200   000    Old_age  Always  -           0
    198 Offline_Uncorrectable   0x0012 200   200   000    Old_age  Always  -           0
    199 UDMA_CRC_Error_Count    0x000a 200   253   000    Old_age  Always  -           0
    200 Multi_Zone_Error_Rate   0x0008 200   200   051    Old_age  Offline -           0
    # smartctl -A /dev/sdb
    smartctl version 5.38 [x86_64-redhat-linux-gnu] Copyright (C) 2002-8 Bruce Allen
    Home page is http://smartmontools.sourceforge.net/
    === START OF READ SMART DATA SECTION ===
    SMART Attributes Data Structure revision number: 16
    Vendor Specific SMART Attributes with Thresholds:
    ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
      1 Raw_Read_Error_Rate     0x000f 200   200   051    Pre-fail Always  -           0
      3 Spin_Up_Time            0x0003 180   180   021    Pre-fail Always  -           3958
      4 Start_Stop_Count        0x0032 100   100   000    Old_age  Always  -           22
      5 Reallocated_Sector_Ct   0x0033 200   200   140    Pre-fail Always  -           0
      7 Seek_Error_Rate         0x000f 200   200   051    Pre-fail Always  -           0
      9 Power_On_Hours          0x0032 068   068   000    Old_age  Always  -           24087
     10 Spin_Retry_Count        0x0013 100   253   051    Pre-fail Always  -           0
     11 Calibration_Retry_Count 0x0013 100   253   051    Pre-fail Always  -           0
     12 Power_Cycle_Count       0x0032 100   100   000    Old_age  Always  -           21
    194 Temperature_Celsius     0x0022 122   096   000    Old_age  Always  -           25
    196 Reallocated_Event_Count 0x0032 200   200   000    Old_age  Always  -           0
    197 Current_Pending_Sector  0x0012 200   200   000    Old_age  Always  -           0
    198 Offline_Uncorrectable   0x0010 200   200   000    Old_age  Offline -           0
    199 UDMA_CRC_Error_Count    0x003e 200   200   000    Old_age  Always  -           0
    200 Multi_Zone_Error_Rate   0x0009 200   200   051    Pre-fail Offline -           0
    Any idea what's wrong here? Bad motherboard? It would seem rare that both drives are going bad (smartctl says they PASS), so that leaves the mobo as the culprit in my eyes.

    Read the article

  • Issues with Server 2012 using DFSR running on Hyper-V 2012

    - by Bryan
    We have a number of Server 2012 systems, all of which run virtualised on Hyper-V 2012 server. We are having problems with two such virtual instances, both of which are used as file servers, whereby they occasionally stop responding to requests to serve files to clients. After logging on to the server, attempts to shut it down gracefully fail (no error, it just fails to acknowledge a shutdown request); recovery is a case of power cycling the server(s) from the Hyper-V console. These two servers don't serve a large number of users (one serves no more than 6 users, the other around 20), they are in the same domain, but they are on different physical hardware (and at different sites), and they don't lock up at the same time. They both use DFSR to replicate a fairly large amount of data between themselves (200GB) over ADSL connections; this is working fine, and we have been using DFSR to do this on the previous two generations of server OS we have used (Server 2008 R2 and Server 2003 - both of which were physical installs, however). Today, when one of the servers crashed, I noticed an entry in the event log which looked similar to the following:
    Log Name: Application
    Source: ESENT
    Date: 27/11/2012 10:25:55
    Event ID: 533
    Task Category: General
    Level: Warning
    Keywords: Classic
    User: N/A
    Computer: HAL-FS-01.example.com
    Description: DFSRs (1500) \\.\E:\System Volume Information\DFSR\database_C8CC_101_CC00_EC0E\dfsr.db: A request to write to the file "\\.\E:\System Volume Information\DFSR\database_C8CC_101_CC00_EC0E\fsr.log" at offset 4423680 (0x0000000000438000) for 4096 (0x00001000) bytes has not completed for 36 second(s). This problem is likely due to faulty hardware. Please contact your hardware vendor for further assistance diagnosing the problem.
    When the server started up again, I went to find the event log entry to investigate further and found that it was no longer there (I assume it was in memory but failed to write to disk before the server was powered off, for the reason mentioned in the message); I found the above message by searching back further in the event log. Both of these virtual servers have their E: volumes fully allocated as opposed to dynamically expanding, and there are no other issues on any of the other virtual servers (which include Server 2012, Server 2008 R2 and Ubuntu 12.04 x64). There are no signs of IO, memory or CPU starvation on the host systems. I've used performance counters on the affected virtual servers to monitor memory usage (including non-paged pool usage), as well as CPU and network utilisation, and none of these show any signs of trouble when the issue arises. I would have thought our configuration isn't that uncommon, so I'm wondering if anyone else has seen this and managed to resolve the problem?

    Read the article

  • Converting an EC2 AMI to vmdk image

    - by Reed G. Law
    I've come quite close to getting Amazon Linux to boot inside VirtualBox, thanks to this answer and these websites. A quick overview of the steps I've taken:
    1. Launch an EC2 instance with the Amazon Linux 2011.09 64-bit AMI.
    2. dd the contents of the EBS volume over ssh to a local image file.
    3. Mount the image file as a loopback device and then to a local mount point.
    4. Create a new empty disk image file, partition with an offset for a bootloader, and create an ext4 filesystem.
    5. Mount the new image's partition and copy everything from the EC2 image.
    6. Install grub (using Ubuntu's grub-legacy-ec2 package, not grub2).
    7. Convert the image file to vmdk using qemu-img.
    8. Create a new VirtualBox VM with the vmdk.
    Now the VM boots, grub loads, and the kernel is found. But it fails when it tries to mount the root device:
    dracut Warning: No root device "block:/dev/xvda1" found
    dracut Warning: Boot has failed. To debug this issue add "rdshell" to the kernel command line.
    dracut Warning: Signal caught!
    dracut Warning: Boot has failed. To debug this issue add "rdshell" to the kernel command line.
    Kernel panic - not syncing: Attempted to kill init!
    Pid: 1, comm: init Not tainted 2.6.35.14-107.1.39.amzn1.x86_64 #1
    I have tried changing /boot/grub/menu.lst to find the root device by label and UUID, but nothing works. I'm guessing the Xen kernel is not compatible with VirtualBox. The reasoning behind all this effort is to make a Vagrant box that is as close as possible to the production environment, so deploys can be tested locally. I know it's cheap to do test runs on EC2, but poor connectivity often ruins the experience. Plus it would be really nice to have a virtual machine with the production environment so that co-workers don't have to install everything under the sun just to get up and running with app development. If I were to try running a different kernel, what kernel could I get to be as close as possible to Amazon Linux 2011.09?

    Read the article

  • Varnish cache and PHP session; setting header?

    - by StCee
    Varnish by default will not cache pages with cookies. I have read in some posts that one workaround for PHP pages is to set header('Cache-Control: public, s-maxage=60'); in the PHP pages. But would that make Varnish cache the page together with the session cookie? The session is started on that page, and although there is nothing personal on that page, I would still want the session to persist in case the user does something private later. So is there a way to cache the page without the session cookie, and still be able to pass the session between pages? I can imagine some sort of weird solution with a hidden form, but I would prefer if it can be done with VCL configuration or header settings. Thanks a lot!

    Read the article

  • Android 2.1 View's getDrawingCache() method always returns null

    - by Impression
    Hello, I'm working with Android 2.1 and have the following problem: the method View.getDrawingCache() always returns null. Example code:
    public void onCreate(final Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        final View view = findViewById(R.id.ImageView01);
        view.setDrawingCacheEnabled(true);
        view.buildDrawingCache();
        final Bitmap bmp = view.getDrawingCache();
        System.out.println(bmp);
    }
    I've already tried different ways to configure the View object for generating the drawing cache (e.g. View.setWillNotDraw(boolean) and View.setWillNotCacheDrawing(boolean)), but nothing works. What is the right way, or what am I doing wrong?

    Read the article

  • ASP.NET MVC Response Filter + OutputCache Attribute

    - by Shane Andrade
    I'm not sure if this is an ASP.NET MVC specific thing or ASP.NET in general, but here's what's happening. I have an action filter that removes whitespace by the use of a response filter:
    public class StripWhitespaceAttribute : ActionFilterAttribute
    {
        public StripWhitespaceAttribute() { }

        public override void OnResultExecuted(ResultExecutedContext filterContext)
        {
            base.OnResultExecuted(filterContext);
            filterContext.HttpContext.Response.Filter =
                new WhitespaceFilter(filterContext.HttpContext.Response.Filter);
        }
    }
    When used in conjunction with the OutputCache attribute, my calls to Response.WriteSubstitution for "donut hole caching" do not work. The first and second time the page loads, the callback passed to WriteSubstitution gets called; after that it is not called anymore until the output cache expires. I've noticed this not just with this particular filter but with any filter assigned to Response.Filter... am I missing something? I also forgot to mention that I've tried this without an MVC action filter attribute, by attaching to the PostReleaseRequestState event in global.asax and setting the Response.Filter value there... but still no luck.
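    For context, a filter assigned to Response.Filter is just a Stream that wraps the original output stream. The WhitespaceFilter class itself is not shown in the question, so the following is only an assumed sketch of what such a filter typically looks like (a production version would also need to buffer across Write calls, since a run of whitespace can straddle two chunks):

        using System;
        using System.IO;
        using System.Text;
        using System.Text.RegularExpressions;

        // Assumed sketch: collapses whitespace between tags in each chunk
        // before passing the bytes on to the wrapped response stream.
        public class WhitespaceFilter : Stream
        {
            private readonly Stream _inner;
            private static readonly Regex BetweenTags = new Regex(@">\s+<", RegexOptions.Compiled);

            public WhitespaceFilter(Stream inner) { _inner = inner; }

            public override void Write(byte[] buffer, int offset, int count)
            {
                string html = Encoding.UTF8.GetString(buffer, offset, count);
                byte[] output = Encoding.UTF8.GetBytes(BetweenTags.Replace(html, "><"));
                _inner.Write(output, 0, output.Length);
            }

            public override void Flush() { _inner.Flush(); }

            public override bool CanRead { get { return false; } }
            public override bool CanSeek { get { return false; } }
            public override bool CanWrite { get { return true; } }
            public override long Length { get { return 0; } }
            public override long Position { get; set; }
            public override int Read(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
            public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
            public override void SetLength(long value) { throw new NotSupportedException(); }
        }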

    Read the article

  • Caching the xml response from a HttpHandler

    - by Blankman
    My HttpHandler looks like:
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/xml";
        XmlWriter writer = new XmlTextWriter(context.Response.OutputStream, Encoding.UTF8);
        writer.WriteStartDocument();
        writer.WriteStartElement("ProductFeed");
        DataTable dt = GetStuff();
        for(...)
        {
        }
        writer.WriteEndElement();
        writer.WriteEndDocument();
        writer.Flush();
        writer.Close();
    }
    How can I cache the entire xml document that I am generating? Or do I only have the option of caching the DataTable object?
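    One possible approach (a sketch only; the cache key and the 10-minute lifetime below are arbitrary choices, not anything prescribed) is to build the XML into a string once and keep that string in HttpRuntime.Cache, so subsequent requests skip both GetStuff() and the serialization:

        using System;
        using System.Text;
        using System.Web;
        using System.Web.Caching;
        using System.Xml;

        public class ProductFeedHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                context.Response.ContentType = "text/xml";

                const string cacheKey = "product-feed-xml";        // assumed key
                string xml = HttpRuntime.Cache[cacheKey] as string;

                if (xml == null)                                   // cache miss: build once
                {
                    var sb = new StringBuilder();
                    using (XmlWriter writer = XmlWriter.Create(sb))
                    {
                        writer.WriteStartDocument();
                        writer.WriteStartElement("ProductFeed");
                        // ... loop over the rows from GetStuff() and write them here ...
                        writer.WriteEndElement();
                        writer.WriteEndDocument();
                    }
                    xml = sb.ToString();
                    HttpRuntime.Cache.Insert(cacheKey, xml, null,
                        DateTime.UtcNow.AddMinutes(10), Cache.NoSlidingExpiration);
                }

                context.Response.Write(xml);
            }
        }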

    Read the article

  • How to save the view count of a question in memory?

    - by Freewind
    My website is like stackoverflow, there are many questions. I want to record how many times a question has been visited. I have a column called "view_count" in the question table to save it. If a user visits a question many times, the view_count should be increased by only 1, so I have to record which user has visited which question, and I think it is too expensive to save this information in the database because the records would be huge. So, I would like to keep the information in memory and only persist the number to the database every 10 minutes. I have searched about "cache" in Rails, but I haven't found an example. I would like a simple sample of how to do this, thanks for the help~

    Read the article

  • A trigger on hibernate (maybe this is called interceptor)

    - by Ittai
    Hi, I have a module which uses Hibernate as an ORM solution with EHCache as a second-level cache. I have another, separate module which inserts into and updates the database. What I need is the ability to trigger an event when a row is inserted or updated. Let's say I have a Customers table and it is mapped to a Customer entity; I want some procedure to notify me that a new Customer has been added. The second, separate module also uses Hibernate, but at least for the time being the two are not connected (I point this out because if someone thinks I must share the Hibernate session, or something of the sort, between them, then that is something I will consider). Please note that I have limited experience with Hibernate. Thanks in advance

    Read the article

  • stupid caching in asp.net

    - by kusanagi
    I use code like string.Format("<img src='{0}'><br>", u.Avatar); where u.Avatar is something like '/img/path/pic.jpg'. On this site I can upload a new image in place of the old pic.jpg, so the picture is new but the name stays the same, and the browser keeps showing the OLD picture from its cache. If I append a random number, like /img/path/pic.jpg?123, it works fine, but I need that to happen only after an upload, not on every request. How can I solve this?
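    One way to get that behaviour (a sketch; the helper name is made up) is to derive the query-string token from the image file's last write time, so the URL changes exactly when a new file is uploaded over the old name and stays stable otherwise:

        using System;
        using System.IO;
        using System.Web;

        public static class ImageUrlHelper
        {
            // Hypothetical helper: appends the file's last write time as a
            // version token, so the browser re-fetches only after an upload.
            public static string Versioned(string virtualPath)
            {
                string physicalPath = HttpContext.Current.Server.MapPath(virtualPath);
                long version = File.GetLastWriteTimeUtc(physicalPath).Ticks;
                return virtualPath + "?v=" + version;
            }
        }

        // usage: string.Format("<img src='{0}'><br>", ImageUrlHelper.Versioned(u.Avatar));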

    Read the article

  • Neither IE nor Firefox respects the control values that are output

    - by Luke Rohde
    I'm writing a survey designer in ASP.NET MVC. It has buttons to move questions up and down; the buttons post the whole form back, and the affected questions are swapped on the server. When the form returns, the only things that have changed are the values for each survey question, and both Firefox and IE seem to ignore this change. Nothing is persisted to the database (until save) and the URL doesn't change, so the post just returns the same view, but I've stepped through my code to ensure the sequence of values being rendered in the view reflects the swap, which is OK. However, "view source" doesn't show the change, suggesting a caching issue (maybe autocomplete). I've tried autocomplete="off" on my form; Response.Cache.SetNoStore(); in my global.asax; [System.Web.Mvc.OutputCache(NoStore = true, Duration = 0, VaryByParam = "*")] before my controller; and the following in my page header. NOTHING!!! This must be really common. Anyone got a clue?
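    For reference, a sketch of the belt-and-braces header combination usually tried for this kind of symptom, wrapped in a hypothetical action filter; note it may not help if the stale values actually come from browser form autofill rather than from the HTTP cache:

        using System;
        using System.Web;
        using System.Web.Mvc;

        // Hypothetical filter: emits every common "do not cache" signal.
        public class NoCacheAttribute : ActionFilterAttribute
        {
            public override void OnResultExecuted(ResultExecutedContext filterContext)
            {
                HttpCachePolicyBase cache = filterContext.HttpContext.Response.Cache;
                cache.SetCacheability(HttpCacheability.NoCache);
                cache.SetNoStore();
                cache.SetExpires(DateTime.UtcNow.AddYears(-1));
                cache.AppendCacheExtension("must-revalidate");
                filterContext.HttpContext.Response.AppendHeader("Pragma", "no-cache");
                base.OnResultExecuted(filterContext);
            }
        }

        // usage: [NoCache] on the controller or on individual actions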

    Read the article

  • Cannot see changes made to aspx

    - by Senica Gonzalez
    I'm trying to edit an aspx page - mainly JavaScript - and I only randomly see the changes that I've made when refreshing. I'm using jQuery, but I'm not sure that jQuery is the culprit here. For example, if I add a simple alert("hello"); in the page I'm calling, I do not see it take effect until I have cleared all my temp files and cache and closed and reopened my browser... and even then, sometimes, I still don't see my changes. Any ideas?

    Read the article

  • Prevent IE caching

    - by Parhs
    I am developing a Java EE web application using Struts. The problem is with Internet Explorer caching. If a user logs out he can still access some pages, because they are cached and no request is made; if I hit refresh it works fine. Also, if a user goes to the login page again it won't redirect him, because that page is also cached. Two solutions come to mind: writing an interceptor (something like a servlet filter) to add no-cache headers to the response, or putting <meta> tags in each page. Which one should I do?

    Read the article

  • Nginx is serving proxy-cached content in gzip format

    - by Sandeep Manne
    Hi, I used the config given at http://www.webtatic.com/blog/2008/04/page-level-caching-with-nginx/ for page-level caching of PHP content. The problem is that the cached page is saved in gzip format, and the same gzip content is returned to the browser. The output should look like "12:15:37 12:15:47" (which is what comes back the first time, when the page is not yet cached), but if the request is sent again it returns ‹??????34²26±24à23Œ¸¸?`Î9”??? (a gzipped response; if I run it through zcat the content comes out fine).
    Response Headers
    Server: nginx/0.8.34
    Date: Wed, 17 Mar 2010 07:04:58 GMT
    Content-Type: text/html
    Last-Modified: Wed, 17 Mar 2010 07:04:20 GMT
    Transfer-Encoding: chunked
    Connection: keep-alive
    Vary: Accept-Encoding
    Expires: Wed, 17 Mar 2010 07:04:58 GMT
    Cache-Control: max-age=0
    Content-Encoding: gzip
    Request Headers
    Host: localhost
    User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.18) Gecko/2010021501 Ubuntu/9.04 (jaunty) Firefox/3.0.18 GTB6
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
    Accept-Language: en-us,en;q=0.5
    Accept-Encoding: gzip,deflate
    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
    Keep-Alive: 300
    Connection: keep-alive

    Read the article

  • Instant Messenger: How does gtalk/yahoo messenger populate the contact list?

    - by Owen
    Hi All, We are currently working on a small IM project which pretty much works like gtalk and yahoo messenger. We came across a problem that puzzled us how gtalk/ym populate their contact lists. Given that the user has let's say more or less 500 contacts, both IMs seem to readily load the contacts pretty fast and already sorted. Here are my questions(referring to either): Does it cache its contacts, like saving it in a file somewhere upon exit so that upon log-in it readily extracts the contacts and displays it in its contact list? Does it always request for the VCARDS upon log in? OR they have a VCARD push or whatever that simply updates the contacts' profiles (like that of their status [presence push - available, busy, etc...] )?

    Read the article

  • ASP.NET OutputCache VaryByParam and VaryByHeader with AJAX

    - by DennyDotNet
    I'm trying to do some caching using VaryByParam AND VaryByHeader. When an AJAX request comes in I return a partial XHTML. When a regular request comes in I send the partial XHTML page with header / footer. I tried to cache the page by doing: [OutputCache( Duration = 5, VaryByParam = "nickname,page", VaryByHeader = "X-Requested-With" )] However this doesn't work... if I do a regular request first then run the AJAX call I get the full cached page instead of the partial and vice-versa. Seems like VaryByHeader is being ignored. Is it because X-Requested-With is omitted on normal requests? Or perhaps it's doing VaryByParam OR VaryByHeader? My obvious way around this is for AJAX requests to call a different method which only returns partial pages, however I'd like to avoid that if possible. I'm using ASP.NET MVC 1.0 with the OutputCacheAttribute.
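    One commonly suggested alternative, sketched here on the assumption that the missing X-Requested-With header on normal requests is what defeats VaryByHeader, is to vary on a custom string computed in Global.asax instead, so AJAX and non-AJAX requests always map to distinct cache entries:

        using System.Web;

        public class MvcApplication : HttpApplication
        {
            public override string GetVaryByCustomString(HttpContext context, string custom)
            {
                if (custom == "ajax")
                {
                    // AJAX requests carry the header; normal requests omit it,
                    // so the two kinds of request get separate cache keys.
                    bool isAjax = context.Request.Headers["X-Requested-With"] == "XMLHttpRequest";
                    return "ajax=" + isAjax;
                }
                return base.GetVaryByCustomString(context, custom);
            }
        }

        // On the action:
        // [OutputCache(Duration = 5, VaryByParam = "nickname,page", VaryByCustom = "ajax")]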

    Read the article

  • Run PHP in a Drupal template uncached

    - by lokust
    Hi. I need to run a PHP code snippet in a Drupal template and not have it cached. The PHP snippet sniffs for a cookie and, if one is found, returns one of several messages according to the cookie value, i.e.:
    $cookievalue = '';
    if (isset($_GET['key'])) { $cookievalue = $_GET['key']; }
    if (isset($_COOKIE['cookname'])) { $cookievalue = $_COOKIE['cookname']; }
    switch ($cookievalue) {
        case 'hmm01': echo "abc"; break;
        case 'hmm02': echo "def"; break;
        case 'hmm03': echo "ghi"; break;
        default: echo "hello";
    }
    Right now, Drupal displays the messages randomly, according to when the page was first cached and with which cookie - no good at all! I can't see a great deal of info on how I might go about this - it seems that you have to turn the cache off for the page rather than run PHP code uncached.

    Read the article

  • Rails Asset Caching Breaks First few page loads

    - by Brian Armstrong
    We're using Rails asset caching for JS and CSS like this: <%= javascript_include_tag :defaults, 'autocomplete', 'searchbox', 'jqmodal', :cache => set_asset_cache(:admins) %> In our deploy we call rake tmp:assets:clear each time. The problem is that the first few page loads after a deploy come up with no css on the page. I guess until the cached all.js and all.css have been regenerated. We deploy many times per day and this is scary for any users who happen to come across a busted page. Have people found any way to make this smoother so the new cached assets are guaranteed to be there on the first new page load?

    Read the article
