Search Results

Search found 7323 results on 293 pages for 'cache manifest'.

  • Best practices for setting lm-factor in Squid refresh patterns

    - by Mpentecost
    I am running a Squid (3.1) cache in front of Django. The content of the site does not change very often, so Squid gives our backend much-needed breathing room. Currently, this is the refresh pattern that we are using to cache the content:

        refresh_pattern . 60 100% 60

    We basically want to cache everything for at least an hour (and only an hour) before Squid re-validates the content. My question is about the "100%" parameter, which sets the lm-factor. I'm not sure that setting it to 100% does what we want. The assumption was that setting it to 100% would ensure that objects stay in the cache for the max cache time. Is this an incorrect assumption? What best practices should one follow when setting up a refresh pattern like this?
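    For what it's worth, a hedged reading of how Squid evaluates such a pattern (per the documented freshness algorithm, and assuming the responses carry no explicit Expires/Cache-Control): an object older than max is stale, one younger than min is fresh, and the lm-factor only decides the cases in between. A commented sketch:

        # With min and max both 60 minutes, the in-between case is empty,
        # so the lm-factor (100%) never actually decides freshness here:
        refresh_pattern . 60 100% 60
        # The lm-factor matters only when min < age < max, where an object
        # stays fresh while its age is under lm-factor percent of the gap
        # between its Last-Modified time and the time it was fetched.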

  • This error appeared when upgrading 12.04 LTS to 12.10 [closed]

    - by habcity
    Possible Duplicate: How do I fix a "Problem with MergeList" error when trying to do an update?

        ryder@ryder-Q1500M:~$ do-release-upgrade
        Checking for a new Ubuntu release
        Get:1 Upgrade tool signature [198 B]
        Get:2 Upgrade tool [1,200 kB]
        Fetched 1,200 kB in 6s (6,988 B/s)
        authenticate 'quantal.tar.gz' against 'quantal.tar.gz.gpg'
        extracting 'quantal.tar.gz'
        [sudo] password for ryder:
        Reading cache
        A fatal error occurred
        Please report this as a bug and include the files
        /var/log/dist-upgrade/main.log and /var/log/dist-upgrade/apt.log
        in your report. The upgrade has aborted. Your original sources.list
        was saved in /etc/apt/sources.list.distUpgrade.
        Traceback (most recent call last):
          File "/tmp/update-manager-63XThv/quantal", line 10, in <module>
            sys.exit(main())
          File "/tmp/update-manager-63XThv/DistUpgrade/DistUpgradeMain.py", line 237, in main
            save_system_state(logdir)
          File "/tmp/update-manager-63XThv/DistUpgrade/DistUpgradeMain.py", line 130, in save_system_state
            scrub_sources=True)
          File "/tmp/update-manager-63XThv/DistUpgrade/apt_clone.py", line 146, in save_state
            self._write_state_installed_pkgs(sourcedir, tar)
          File "/tmp/update-manager-63XThv/DistUpgrade/apt_clone.py", line 173, in _write_state_installed_pkgs
            cache = self._cache_cls(rootdir=sourcedir)
          File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 102, in __init__
            self.open(progress)
          File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 145, in open
            self._cache = apt_pkg.Cache(progress)
        SystemError: E:Encountered a section with no Package: header, E:Problem with MergeList /var/lib/apt/lists/us.archive.ubuntu.com_ubuntu_dists_precise-backports_multiverse_i18n_Translation-en, E:The package lists or status file could not be parsed or opened.
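    The linked duplicate's standard fix (a sketch of the commonly cited remedy, not verified on this machine) is to delete the corrupted package lists and let apt rebuild them:

        # The file named in the MergeList error is the corrupt one;
        # removing the cached lists and refreshing regenerates them all.
        sudo rm /var/lib/apt/lists/* -vf
        sudo apt-get update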

  • Avoiding and Identifying False Sharing Among Threads

    In symmetric multiprocessor (SMP) systems, each processor has a local cache. The memory system must guarantee cache coherence. False sharing occurs when threads on different processors modify variables that reside on the same cache line. Learn methods to detect and correct false sharing.
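    A minimal C sketch of the pattern the article warns about (illustrative only, not taken from the article): two threads updating counters that sit on the same cache line will ping-pong that line between cores, and padding the counters one line apart removes the contention.

        #include <pthread.h>
        #define CACHE_LINE 64

        /* Adjacent fields land on one cache line (false sharing);
         * the padded variant keeps each counter on its own line.   */
        struct counters_bad  { long a; long b; };
        struct counters_good {
            long a; char pad[CACHE_LINE - sizeof(long)];
            long b;
        };

        static struct counters_good g;

        static void *worker_a(void *arg) {
            for (long i = 0; i < 100000000; i++) g.a++;  /* plain increments: fine for a timing demo */
            return 0;
        }
        static void *worker_b(void *arg) {
            for (long i = 0; i < 100000000; i++) g.b++;
            return 0;
        }

        int main(void) {
            pthread_t ta, tb;
            pthread_create(&ta, 0, worker_a, 0);
            pthread_create(&tb, 0, worker_b, 0);
            pthread_join(ta, 0);
            pthread_join(tb, 0);
            return 0;   /* time this with counters_bad vs counters_good */
        }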

  • dpkg snippet isn't working for me when fixing / installing dockbar

    - by user3924
    I'm trying to use this code from WebUpd8 to fix the problem I have, but it didn't work:

        sudo dpkg -i --force-overwrite /var/cache/apt/archives/smplayer_0.6.9+svn3595-1ppa1~maverick1_i386.deb

    Here is the output from my attempt to install dockbar:

        $ sudo dpkg -i --force-all var/apt/cache/archives/dockmanager_0.1.0~bzr80-0ubuntu1~10.10~dockers1_i386.deb
        dpkg: error processing var/apt/cache/archives/dockmanager_0.1.0~bzr80-0ubuntu1~10.10~dockers1_i386.deb (--install):
         cannot access archive: No such file or directory
        Errors were encountered while processing:
         var/apt/cache/archives/dockmanager_0.1.0~bzr80-0ubuntu1~10.10~dockers1_i386.deb

    Is there any way around this?
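    An observation on the quoted output rather than a tested fix: the failing command uses the relative, transposed path var/apt/cache/archives/..., while the WebUpd8 example uses the real archive directory /var/cache/apt/archives/. Something like this may be all that's needed:

        # Leading slash, and "cache/apt", not "apt/cache":
        sudo dpkg -i --force-all /var/cache/apt/archives/dockmanager_0.1.0~bzr80-0ubuntu1~10.10~dockers1_i386.deb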

  • Using Bulk Operations with Coherence Off-Heap Storage

    - by jpurdy
    Some NamedCache methods (including clear(), entrySet(Filter), aggregate(Filter, …), invoke(Filter, …)) may generate large intermediate results. The size of these intermediate results may result in out-of-memory exceptions on cache servers, and in some cases on cache clients. This may be particularly problematic if out-of-memory exceptions occur on more than one server (since these operations may be cluster-wide) or if these exceptions cause additional memory use on the surviving servers as they take over partitions from the failed servers. This may be particularly problematic with clusters that use off-heap storage (such as NIO or Elastic Data storage options), since these storage options allow greater than normal cache sizes but do nothing to address the size of intermediate results or final result sets. One workaround is to use a PartitionedFilter, which allows the application to break up a larger operation into a number of smaller operations, each targeting either a set of partitions (useful for reducing the load on each cache server) or a set of members (useful for managing client result set sizes). It is also possible to return a key set, and then pull in the full entries using that key set. This also allows the application to take advantage of near caching, though this may be of limited value if the result is large enough to result in near cache thrashing.
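    A rough Java sketch of the PartitionedFilter workaround described above (hedged: written from the Coherence 3.x API as commonly documented, so treat class names and signatures as assumptions to verify; types come from com.tangosol.net, com.tangosol.net.partition and com.tangosol.util.filter):

        // Run one filter query per batch of partitions so no single
        // request has to materialize the whole result set at once.
        PartitionedService service = (PartitionedService) cache.getCacheService();
        int cParts = service.getPartitionCount();
        int cBatch = 16;                       // partitions per query (tuning knob)
        Set results = new HashSet();
        for (int i = 0; i < cParts; i += cBatch) {
            PartitionSet parts = new PartitionSet(cParts);
            for (int p = i; p < Math.min(i + cBatch, cParts); p++) {
                parts.add(p);
            }
            results.addAll(cache.entrySet(new PartitionedFilter(filter, parts)));
        }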

  • Issue accessing remote Infinispan mbeans

    - by user1960172
    I am able to access the MBeans with a local JConsole, but I am not able to access them from a remote host. My configuration:

        <?xml version='1.0' encoding='UTF-8'?>
        <server xmlns="urn:jboss:domain:1.4">
          <extensions>
            <extension module="org.infinispan.server.endpoint"/>
            <extension module="org.jboss.as.clustering.infinispan"/>
            <extension module="org.jboss.as.clustering.jgroups"/>
            <extension module="org.jboss.as.connector"/>
            <extension module="org.jboss.as.jdr"/>
            <extension module="org.jboss.as.jmx"/>
            <extension module="org.jboss.as.logging"/>
            <extension module="org.jboss.as.modcluster"/>
            <extension module="org.jboss.as.naming"/>
            <extension module="org.jboss.as.remoting"/>
            <extension module="org.jboss.as.security"/>
            <extension module="org.jboss.as.threads"/>
            <extension module="org.jboss.as.transactions"/>
            <extension module="org.jboss.as.web"/>
          </extensions>
          <management>
            <security-realms>
              <security-realm name="ManagementRealm">
                <authentication>
                  <local default-user="$local"/>
                  <properties path="mgmt-users.properties" relative-to="jboss.server.config.dir"/>
                </authentication>
              </security-realm>
              <security-realm name="ApplicationRealm">
                <authentication>
                  <local default-user="$local" allowed-users="*"/>
                  <properties path="application-users.properties" relative-to="jboss.server.config.dir"/>
                </authentication>
              </security-realm>
            </security-realms>
            <management-interfaces>
              <native-interface security-realm="ManagementRealm">
                <socket-binding native="management-native"/>
              </native-interface>
              <http-interface security-realm="ManagementRealm">
                <socket-binding http="management-http"/>
              </http-interface>
            </management-interfaces>
          </management>
          <profile>
            <subsystem xmlns="urn:jboss:domain:logging:1.2">
              <console-handler name="CONSOLE">
                <level name="INFO"/>
                <formatter>
                  <pattern-formatter pattern="%K{level}%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%E%n"/>
                </formatter>
              </console-handler>
              <periodic-rotating-file-handler name="FILE" autoflush="true">
                <formatter>
                  <pattern-formatter pattern="%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%E%n"/>
                </formatter>
                <file relative-to="jboss.server.log.dir" path="server.log"/>
                <suffix value=".yyyy-MM-dd"/>
                <append value="true"/>
              </periodic-rotating-file-handler>
              <logger category="com.arjuna"><level name="WARN"/></logger>
              <logger category="org.apache.tomcat.util.modeler"><level name="WARN"/></logger>
              <logger category="org.jboss.as.config"><level name="DEBUG"/></logger>
              <logger category="sun.rmi"><level name="WARN"/></logger>
              <logger category="jacorb"><level name="WARN"/></logger>
              <logger category="jacorb.config"><level name="ERROR"/></logger>
              <root-logger>
                <level name="INFO"/>
                <handlers>
                  <handler name="CONSOLE"/>
                  <handler name="FILE"/>
                </handlers>
              </root-logger>
            </subsystem>
            <subsystem xmlns="urn:infinispan:server:endpoint:6.0">
              <hotrod-connector socket-binding="hotrod" cache-container="clustered">
                <topology-state-transfer lazy-retrieval="false" lock-timeout="1000" replication-timeout="5000"/>
              </hotrod-connector>
              <memcached-connector socket-binding="memcached" cache-container="clustered"/>
              <!-- <rest-connector virtual-server="default-host" cache-container="clustered" security-domain="other" auth-method="BASIC"/> -->
              <rest-connector virtual-server="default-host" cache-container="clustered"/>
              <websocket-connector socket-binding="websocket" cache-container="clustered"/>
            </subsystem>
            <subsystem xmlns="urn:jboss:domain:datasources:1.1">
              <datasources/>
            </subsystem>
            <subsystem xmlns="urn:infinispan:server:core:5.3" default-cache-container="clustered">
              <cache-container name="clustered" default-cache="default">
                <transport executor="infinispan-transport" lock-timeout="60000"/>
                <distributed-cache name="default" mode="SYNC" segments="20" owners="2" remote-timeout="30000" start="EAGER">
                  <locking isolation="READ_COMMITTED" acquire-timeout="30000" concurrency-level="1000" striping="false"/>
                  <transaction mode="NONE"/>
                </distributed-cache>
                <distributed-cache name="memcachedCache" mode="SYNC" segments="20" owners="2" remote-timeout="30000" start="EAGER">
                  <locking isolation="READ_COMMITTED" acquire-timeout="30000" concurrency-level="1000" striping="false"/>
                  <transaction mode="NONE"/>
                </distributed-cache>
                <distributed-cache name="namedCache" mode="SYNC" start="EAGER"/>
              </cache-container>
              <cache-container name="security"/>
            </subsystem>
            <subsystem xmlns="urn:jboss:domain:jca:1.1">
              <archive-validation enabled="true" fail-on-error="true" fail-on-warn="false"/>
              <bean-validation enabled="true"/>
              <default-workmanager>
                <short-running-threads>
                  <core-threads count="50"/>
                  <queue-length count="50"/>
                  <max-threads count="50"/>
                  <keepalive-time time="10" unit="seconds"/>
                </short-running-threads>
                <long-running-threads>
                  <core-threads count="50"/>
                  <queue-length count="50"/>
                  <max-threads count="50"/>
                  <keepalive-time time="10" unit="seconds"/>
                </long-running-threads>
              </default-workmanager>
              <cached-connection-manager/>
            </subsystem>
            <subsystem xmlns="urn:jboss:domain:jdr:1.0"/>
            <subsystem xmlns="urn:jboss:domain:jgroups:1.2" default-stack="${jboss.default.jgroups.stack:udp}">
              <stack name="udp">
                <transport type="UDP" socket-binding="jgroups-udp"/>
                <protocol type="PING"/>
                <protocol type="MERGE2"/>
                <protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/>
                <protocol type="FD_ALL"/>
                <protocol type="pbcast.NAKACK"/>
                <protocol type="UNICAST2"/>
                <protocol type="pbcast.STABLE"/>
                <protocol type="pbcast.GMS"/>
                <protocol type="UFC"/>
                <protocol type="MFC"/>
                <protocol type="FRAG2"/>
                <protocol type="RSVP"/>
              </stack>
              <stack name="tcp">
                <transport type="TCP" socket-binding="jgroups-tcp"/>
                <!-- <protocol type="MPING" socket-binding="jgroups-mping"/> -->
                <protocol type="TCPPING">
                  <property name="initial_hosts">10.32.50.53[7600],10.32.50.64[7600]</property>
                </protocol>
                <protocol type="MERGE2"/>
                <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
                <protocol type="FD"/>
                <protocol type="VERIFY_SUSPECT"/>
                <protocol type="pbcast.NAKACK">
                  <property name="use_mcast_xmit">false</property>
                </protocol>
                <protocol type="UNICAST2"/>
                <protocol type="pbcast.STABLE"/>
                <protocol type="pbcast.GMS"/>
                <protocol type="UFC"/>
                <protocol type="MFC"/>
                <protocol type="FRAG2"/>
                <protocol type="RSVP"/>
              </stack>
            </subsystem>
            <subsystem xmlns="urn:jboss:domain:jmx:1.1">
              <show-model value="true"/>
              <remoting-connector use-management-endpoint="false"/>
            </subsystem>
            <subsystem xmlns="urn:jboss:domain:modcluster:1.1">
              <mod-cluster-config advertise-socket="modcluster" connector="ajp" excluded-contexts="console">
                <dynamic-load-provider>
                  <load-metric type="busyness"/>
                </dynamic-load-provider>
              </mod-cluster-config>
            </subsystem>
            <subsystem xmlns="urn:jboss:domain:naming:1.2"/>
            <subsystem xmlns="urn:jboss:domain:remoting:1.1">
              <connector name="remoting-connector" socket-binding="remoting" security-realm="ApplicationRealm"/>
            </subsystem>
            <subsystem xmlns="urn:jboss:domain:security:1.2">
              <security-domains>
                <security-domain name="other" cache-type="infinispan">
                  <authentication>
                    <login-module code="Remoting" flag="optional">
                      <module-option name="password-stacking" value="useFirstPass"/>
                    </login-module>
                    <login-module code="RealmUsersRoles" flag="required">
                      <module-option name="usersProperties" value="${jboss.server.config.dir}/application-users.properties"/>
                      <module-option name="rolesProperties" value="${jboss.server.config.dir}/application-roles.properties"/>
                      <module-option name="realm" value="ApplicationRealm"/>
                      <module-option name="password-stacking" value="useFirstPass"/>
                    </login-module>
                  </authentication>
                </security-domain>
                <security-domain name="jboss-web-policy" cache-type="infinispan">
                  <authorization>
                    <policy-module code="Delegating" flag="required"/>
                  </authorization>
                </security-domain>
              </security-domains>
            </subsystem>
            <subsystem xmlns="urn:jboss:domain:threads:1.1">
              <thread-factory name="infinispan-factory" group-name="infinispan" priority="5"/>
              <unbounded-queue-thread-pool name="infinispan-transport">
                <max-threads count="25"/>
                <keepalive-time time="0" unit="milliseconds"/>
                <thread-factory name="infinispan-factory"/>
              </unbounded-queue-thread-pool>
            </subsystem>
            <subsystem xmlns="urn:jboss:domain:transactions:1.2">
              <core-environment>
                <process-id>
                  <uuid/>
                </process-id>
              </core-environment>
              <recovery-environment socket-binding="txn-recovery-environment" status-socket-binding="txn-status-manager"/>
              <coordinator-environment default-timeout="300"/>
            </subsystem>
            <subsystem xmlns="urn:jboss:domain:web:1.1" default-virtual-server="default-host" native="false">
              <connector name="http" protocol="HTTP/1.1" scheme="http" socket-binding="http"/>
              <connector name="ajp" protocol="AJP/1.3" scheme="http" socket-binding="ajp"/>
              <virtual-server name="default-host" enable-welcome-root="false">
                <alias name="localhost"/>
                <alias name="example.com"/>
              </virtual-server>
            </subsystem>
          </profile>
          <interfaces>
            <interface name="management">
              <inet-address value="${jboss.bind.address.management:10.32.222.111}"/>
            </interface>
            <interface name="public">
              <inet-address value="${jboss.bind.address:10.32.222.111}"/>
            </interface>
          </interfaces>
          <socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
            <socket-binding name="management-native" interface="management" port="${jboss.management.native.port:9999}"/>
            <socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/>
            <socket-binding name="management-https" interface="management" port="${jboss.management.https.port:9443}"/>
            <socket-binding name="ajp" port="8089"/>
            <socket-binding name="hotrod" port="11222"/>
            <socket-binding name="http" port="8080"/>
            <socket-binding name="https" port="8443"/>
            <socket-binding name="jgroups-mping" port="0" multicast-address="${jboss.default.multicast.address:234.99.54.14}" multicast-port="45700"/>
            <socket-binding name="jgroups-tcp" port="7600"/>
            <socket-binding name="jgroups-tcp-fd" port="57600"/>
            <socket-binding name="jgroups-udp" port="55200" multicast-address="${jboss.default.multicast.address:234.99.54.14}" multicast-port="45688"/>
            <socket-binding name="jgroups-udp-fd" port="54200"/>
            <socket-binding name="memcached" port="11211"/>
            <socket-binding name="modcluster" port="0" multicast-address="224.0.1.115" multicast-port="23364"/>
            <socket-binding name="remoting" port="4447"/>
            <socket-binding name="txn-recovery-environment" port="4712"/>
            <socket-binding name="txn-status-manager" port="4713"/>
            <socket-binding name="websocket" port="8181"/>
          </socket-binding-group>
        </server>

    Remote process: service:jmx:remoting-jmx://10.32.222.111:4447

    I added a user to both the management and application realms:

        admin=2a0923285184943425d1f53ddd58ec7a
        test=2b1be81e1da41d4ea647bd82fc8c2bc9

    But when I try to connect, it says: Connection failed: Retry. When I use the remote process address 10.32.222.111:4447, the server logs a warning:

        16:29:48,084 ERROR [org.jboss.remoting.remote.connection] (Remoting "djd7w4r1" read-1) JBREM000200: Remote connection failed: java.io.IOException: Received an invalid message length of -2140864253

    I also disabled remote authentication:

        -Dcom.sun.management.jmxremote
        -Dcom.sun.management.jmxremote.ssl=false
        -Dcom.sun.management.jmxremote.authenticate=false
        -Dcom.sun.management.jmxremote.port=12345

    Still not able to connect. Any help will be highly appreciated. Thanks
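    A couple of hedged suggestions based on the config shown (assumptions, not verified against this server): the remoting-jmx protocol is not plain RMI, so a stock JConsole needs the JBoss client library on its classpath before service:jmx:remoting-jmx://10.32.222.111:4447 will work; the "Received an invalid message length" error is what the remoting port typically logs when a plain JMX/RMI client talks to it. Something like:

        # Paths are illustrative; jboss-client.jar ships in the server's
        # bin/client directory.
        jconsole -J-Djava.class.path="$JAVA_HOME/lib/jconsole.jar:$JAVA_HOME/lib/tools.jar:$JBOSS_HOME/bin/client/jboss-client.jar"

    Also, the -Dcom.sun.management.jmxremote.port=12345 flags open a separate, plain-RMI connector, so the remote process address for that route would be 10.32.222.111:12345, not the remoting port 4447.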

  • Is there a way to tell Drupal not to cache a specific page?

    - by TechplexEngineer
    I have a custom PHP page that processes a feed of images and makes albums out of it. However, whenever I add pictures to my feed, the Drupal page doesn't change until I clear the caches. Is there a way to tell Drupal not to cache that specific page? Thanks, Blake

    Edit: Drupal v6.15. Not exactly sure what you mean, oswald; team2648.com/media is the page. I used the PHP interpreter module. Here is the PHP code:

        <?php
        //////// CODE by Pikori Web Designs - pikori.org      ///////////
        //////// Please do not remove this title,             ///////////
        //////// feel free to modify or copy this software    ///////////
        $feedURL = 'http://picasaweb.google.com/data/feed/base/user/Techplex.Engineer?alt=rss&kind=album&hl=en_US';
        $photoNodeNum = 4;
        $galleryTitle = 'Breakaway Pictures';
        $year = '2011';
        ?>
        <?php
        /////////////// DO NOT EDIT ANYTHING BELOW THIS LINE //////////////////
        $album = $_GET['album'];
        if($album != ""){
            //GENERATE PICTURES
            $feedURL = "http://".$album."&kind=photo&hl=en_US";
            $feedURL = str_replace("entry","feed",$feedURL);
            $sxml = simplexml_load_file($feedURL);
            $column = 0;
            $pix_count = count($sxml->channel->item);
            //print '<h2>'.$sxml->channel->title.'</h2>';
            print '<table cellspacing="0" cellpadding="0" style="font-size:10pt" width="100%"><tr>';
            for($i = 0; $i < $pix_count; $i++) {
                print '<td align="center">';
                $entry = $sxml->channel->item[$i];
                $picture_url = $entry->enclosure['url'];
                $time = $entry->pubDate;
                $time_ln = strlen($time)-14;
                $time = substr($time,0,$time_ln);
                $description = $entry->description;
                $tn_beg = strpos($description, "src=");
                $tn_end = strpos($description, "alt=");
                $tn_length = $tn_end - $tn_beg;
                $tn = substr($description, $tn_beg, $tn_length);
                $tn_small = str_replace("s288","s128",$tn);
                $picture_url = $tn;
                $picture_beg = strpos($picture_url,"http:");
                $picture_len = strlen($picture_url)-7;
                $picture_url = substr($tn, $picture_beg, $picture_len);
                $picture_url = str_replace("s288","s640",$picture_url);
                print '<a rel="lightbox[group]" href="'.$picture_url.'">';
                print '<img '.$tn_small.' style="border:1px solid #02293a"><br>';
                print '</a></td> ';
                if($column == 4){ print '</tr><tr>'; $column = 0; }
                else $column++;
            }
            print '</table>';
            print '<br><center><a href="media">Return to album</a></center>';
        } else {
            //GENERATE ALBUMS
            $sxml = simplexml_load_file($feedURL);
            $column = 0;
            $album_count = count($sxml->channel->item);
            //print '<h2>'.$galleryTitle.'</h2>';
            print '<table cellspacing="0" cellpadding="0" style="font-size:10pt" width="100%"><tr>';
            for($i = 0; $i < $album_count; $i++) {
                $entry = $sxml->channel->item[$i];
                $time = $entry->pubDate;
                $time_ln = strlen($time)-14;
                $time = substr($time,0,$time_ln);
                $description = $entry->description;
                $tn_beg = strpos($description, "src=");
                $tn_end = strpos($description, "alt=");
                $tn_length = $tn_end - $tn_beg;
                $tn = substr($description, $tn_beg, $tn_length);
                $albumrss = $entry->guid;
                $albumrsscount = strlen($albumrss) - 7;
                $albumrss = substr($albumrss, 7, $albumrsscount);
                $search = strstr($time, $year);
                if($search != FALSE || $year == ''){
                    print '<td valign="top">';
                    print '<a href="/node/'.$photoNodeNum.'?album='.$albumrss.'">';
                    print '<center><img '.$tn.' style="border:3px double #cccccc"><br>';
                    print $entry->title.'<br>'.$time.'</center>';
                    print '</a><br></td> ';
                    if($column == 3){ print '</tr><tr>'; $column = 0; }
                    else { $column++; }
                }
            }
            print '</table>';
        }
        ?>
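    For Drupal 6, one commonly cited trick (hedged: verify against your cache backend before relying on it) is to disable the page cache from within the page's own PHP, before the page is rendered, so Drupal skips storing this request:

        <?php
        // At the top of the PHP for this page: tell Drupal 6 not to
        // write this request to the page cache.
        $GLOBALS['conf']['cache'] = CACHE_DISABLED;
        ?>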

  • In Java Concurrency In Practice by Brian Goetz, why is the Memoizer class not annotated with @ThreadSafe?

    - by dig_dug
    Java Concurrency in Practice by Brian Goetz provides an example of an efficient, scalable cache for concurrent use. The final version of the example, the Memoizer class (p. 108), shows such a cache. I am wondering why the class is not annotated with @ThreadSafe. The client of the cache, class Factorizer, is properly annotated with @ThreadSafe. The appendix states that if a class is annotated with neither @ThreadSafe nor @Immutable, it should be assumed not to be thread-safe. Memoizer seems thread-safe, though. Here is the code for Memoizer:

        public class Memoizer<A, V> implements Computable<A, V> {
            private final ConcurrentMap<A, Future<V>> cache = new ConcurrentHashMap<A, Future<V>>();
            private final Computable<A, V> c;

            public Memoizer(Computable<A, V> c) { this.c = c; }

            public V compute(final A arg) throws InterruptedException {
                while (true) {
                    Future<V> f = cache.get(arg);
                    if (f == null) {
                        Callable<V> eval = new Callable<V>() {
                            public V call() throws InterruptedException {
                                return c.compute(arg);
                            }
                        };
                        FutureTask<V> ft = new FutureTask<V>(eval);
                        f = cache.putIfAbsent(arg, ft);
                        if (f == null) { f = ft; ft.run(); }
                    }
                    try {
                        return f.get();
                    } catch (CancellationException e) {
                        cache.remove(arg, f);
                    } catch (ExecutionException e) {
                        throw launderThrowable(e.getCause());
                    }
                }
            }
        }

  • Disable caching in JPA (eclipselink)

    - by James
    Hi, I want to use JPA (EclipseLink) to get data from my database. The database is changed by a number of other sources and I therefore want to go back to the database for every find I execute. I have read a number of posts on disabling the cache, but it does not seem to be working. Any ideas? I am trying to execute the following code:

        EntityManagerFactory entityManagerFactory = Persistence.createEntityManagerFactory("default");
        EntityManager em = entityManagerFactory.createEntityManager();
        MyLocation one = em.createNamedQuery("MyLocation.findMyLoc").getResultList().get(0);
        MyLocation two = em.createNamedQuery("MyLocation.findMyLoc").getResultList().get(0);
        System.out.println(one==two);

    one==two is true, while I want it to be false. I have tried adding each/all of the following to my persistence.xml:

        <property name="eclipselink.cache.shared.default" value="false"/>
        <property name="eclipselink.cache.size.default" value="0"/>
        <property name="eclipselink.cache.type.default" value="None"/>

    I have also tried adding the @Cache annotation to the entity itself:

        @Cache(
            type=CacheType.NONE, // Cache nothing
            expiry=0,
            alwaysRefresh=true
        )

    Am I misunderstanding something? Thank you, James
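    One hedged observation (standard JPA semantics rather than anything EclipseLink-specific): those settings control the shared, second-level cache, but both queries here run in the same EntityManager, whose persistence context guarantees one instance per entity identity, so one==two stays true regardless. Clearing the context between the finds is a quick way to test this:

        MyLocation one = em.createNamedQuery("MyLocation.findMyLoc").getResultList().get(0);
        em.clear();   // detach all managed entities (or use a fresh EntityManager)
        MyLocation two = em.createNamedQuery("MyLocation.findMyLoc").getResultList().get(0);
        System.out.println(one == two);   // now false: 'two' is a distinct instance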

  • using the ASP.NET Caching API via method annotations in C#

    - by craigmoliver
    In C#, is it possible to decorate a method with an annotation that populates the cache with the method's return value? Currently I'm using the following class to cache data objects:

        public class SiteCache
        {
            // 7 days + 6 hours (offset to avoid repeats at peak time)
            private const int KeepForHours = 174;

            public static void Set(string cacheKey, Object o)
            {
                if (o != null)
                    HttpContext.Current.Cache.Insert(cacheKey, o, null, DateTime.Now.AddHours(KeepForHours), TimeSpan.Zero);
            }

            public static object Get(string cacheKey)
            {
                return HttpContext.Current.Cache[cacheKey];
            }

            public static void Clear(string sKey)
            {
                HttpContext.Current.Cache.Remove(sKey);
            }

            public static void Clear()
            {
                foreach (DictionaryEntry item in HttpContext.Current.Cache)
                {
                    Clear(item.Key.ToString());
                }
            }
        }

    In methods I want to cache, I do this:

        [DataObjectMethod(DataObjectMethodType.Select)]
        public static SiteSettingsInfo SiteSettings_SelectOne_Name(string Name)
        {
            var ck = string.Format("SiteSettings_SelectOne_Name-Name_{0}-", Name.ToLower());
            var dt = (DataTable)SiteCache.Get(ck);
            if (dt == null)
            {
                dt = new DataTable();
                dt.Load(ModelProvider.SiteSettings_SelectOne_Name(Name));
                SiteCache.Set(ck, dt);
            }
            var info = new SiteSettingsInfo();
            foreach (DataRowView dr in dt.DefaultView)
                info = SiteSettingsInfo_Load(dr);
            return info;
        }

    Is it possible to separate those concerns like so (notice the new annotation)?

        [CacheReturnValue]
        [DataObjectMethod(DataObjectMethodType.Select)]
        public static SiteSettingsInfo SiteSettings_SelectOne_Name(string Name)
        {
            var dt = new DataTable();
            dt.Load(ModelProvider.SiteSettings_SelectOne_Name(Name));
            var info = new SiteSettingsInfo();
            foreach (DataRowView dr in dt.DefaultView)
                info = SiteSettingsInfo_Load(dr);
            return info;
        }
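    Attributes alone don't run code around a method call, so a [CacheReturnValue] attribute needs an interception layer (an AOP rewriter such as PostSharp, or a proxying DI container) to take effect. A framework-free alternative sketch that still centralizes the concern (GetOrAdd is a hypothetical helper, not an existing API):

        public static T GetOrAdd<T>(string cacheKey, Func<T> producer) where T : class
        {
            var cached = SiteCache.Get(cacheKey) as T;
            if (cached != null)
                return cached;
            var value = producer();        // compute once on a cache miss
            SiteCache.Set(cacheKey, value);
            return value;
        }

        // Usage inside the data method:
        // var dt = GetOrAdd(ck, () => {
        //     var t = new DataTable();
        //     t.Load(ModelProvider.SiteSettings_SelectOne_Name(Name));
        //     return t;
        // });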

  • Caching page by parts; how to pass variables calculated in cached parts into never-cached parts?

    - by Kirzilla
    Hello. Let's imagine that I have code like this:

        if (!$data = $cache->load("part1_cache_id")) {
            $item_id = $model->getItemId();
            ob_start();
            echo 'Here is the cached item id: '.$item_id;
            $data = ob_get_contents();
            ob_end_clean();
            $cache->save($data, "part1_cache_id");
        }
        echo $data;

        echo never_cache_function($item_id);

        if (!$data_2 = $cache->load("part2_cache_id")) {
            ob_start();
            echo 'Here is the another cached part of the page...';
            $data_2 = ob_get_contents();
            ob_end_clean();
            $cache->save("part2_cache_id");
        }
        echo $data_2;

    As you can see, I need to pass the $item_id variable into never_cache_function(), but if the first part is cached ("part1_cache_id"), then I have no way to get the $item_id value. The only solution I see is to serialize all data from the first part (including the $item_id value), cache the serialized string, and unserialize it every time the script is executed. Something like this:

        if (!$data = $cache->load("part1_cache_id")) {
            $item_id = $model->getItemId();
            $data['item_id'] = $item_id;
            ob_start();
            echo 'Here is the cached item id: '.$item_id;
            $data['html'] = ob_get_contents();
            ob_end_clean();
            $cache->save( serialize($data), "part1_cache_id" );
        }
        $data = unserialize($data);
        echo $data['html'];
        echo never_cache_function($data['item_id']);

    Is there any other way of doing such a trick? I'm looking for the highest-performance solution. Thank you.

    UPDATED: And another question is: how do I implement such caching in a controller without separating the page into two templates? Is it possible?

    PS: Please do not suggest Smarty; I'm really interested in implementing custom caching.
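    A hedged side note: many cache backends serialize values transparently (Zend_Cache, for instance, has an automatic_serialization option), in which case you can store the array directly and skip the manual serialize()/unserialize() round-trip:

        if (!$data = $cache->load("part1_cache_id")) {
            $data = array('item_id' => $model->getItemId());
            $data['html'] = 'Here is the cached item id: '.$data['item_id'];
            $cache->save($data, "part1_cache_id");   // backend serializes for us
        }
        echo $data['html'];
        echo never_cache_function($data['item_id']);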

  • How to get the actual address of a pointer in C?

    - by Airjoe
    BACKGROUND: I'm writing a single-level cache simulator in C for a homework assignment, and I've been given code that I must work from. In our discussions of the cache, we were told that the way a small cache can hold large addresses is by splitting the large address into a position in the cache and an identifying tag. That is, if you had an 8-slot cache but wanted to store something with an address larger than 8, you would take the 3 (because 2^3 = 8) rightmost bits and put the data in that position; so if you had address 22, for example, binary 10110, you would take the 3 rightmost bits, 110, which is decimal 6, and put it in slot 6 of the cache. You would also store in this position the tag, which is the remaining bits, 10. One function, cache_load, takes a single argument, an integer pointer. So effectively, I'm being given this int* addr, which is an actual address and points to some value. In order to store this value in the cache, I need to split addr. However, the compiler doesn't like it when I try to work with the pointer directly. So, for example, I try to get the position by doing:

        npos = addr % num_slots;

    The compiler gets angry and gives me errors. I tried casting to an int, but this actually got me the value that the pointer was pointing to, not the numerical address itself. Any help is appreciated, thanks!
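    A hedged sketch of the usual fix: cast the pointer to an unsigned integer type sized for pointers (uintptr_t from <stdint.h>) before doing arithmetic; it's dereferencing the pointer, not casting it, that yields the pointed-to value instead of the address.

        #include <stdint.h>

        /* addr is the int* handed to cache_load */
        uintptr_t a   = (uintptr_t)addr;   /* the address itself, as a number */
        unsigned  pos = a % num_slots;     /* low bits -> slot (e.g. 22 % 8 == 6) */
        uintptr_t tag = a / num_slots;     /* remaining high bits -> tag */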

  • Rackspace Cloud rewrite jpg causes Session reset

    - by willoller
    This may be the .NET version of this question. I have an image script with the following:

        ...
        Response.WriteFile(filename);
        Response.End();

    I am rewriting .jpg files using the following rewrite rule in web.config:

        <rule name="Image Redirect" stopProcessing="true">
          <match url="^product-images/(.*).jpg" />
          <conditions>
            <add input="{REQUEST_URI}" pattern="\.(jp?g|JP?G)$" />
          </conditions>
          <action type="Redirect" redirectType="SeeOther" url="/product-images/ProductImage.aspx?path=product-images/{tolower:{R:1}}.jpg" />
        </rule>

    It basically just rewrites the image path into a query parameter. The problem is that (intermittently, of course) Mosso returns a new ASP.NET session cookie, which breaks the whole world. Directly accessing a static .jpg file does not cause this problem. Directly accessing the image script does not cause it either. Only rewriting a .jpg file to the .aspx script causes the session loss.

    Things I have tried (from the Rackspace doc "How can I bypass the cache?"): I added Private cacheability to the image script itself:

        Response.Cache.SetCacheability(HttpCacheability.Private);

    I tried adding these cache-disabling nodes to web.config:

        <staticContent>
          <clientCache cacheControlMode="DisableCache" />
        </staticContent>

    and

        <httpProtocol>
          <customHeaders>
            <add name="Cache-Control private" value="Cache-Control private"
          </customHeaders>
        </httpProtocol>

    The solution I need: the browser cache cannot be disabled. This means potential solutions involving Cache.SetNoStore() or HttpCacheability.NoCache will not work.
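    One hedged thing to check: the rule's action is type="Redirect", so every image costs a 303 round-trip through Mosso's cache layer. An internal rewrite keeps the URL change inside IIS, which may sidestep whatever the cache does to Set-Cookie on redirect responses (an assumption, not verified on Mosso):

        <!-- Same rule, but rewritten server-side instead of redirected -->
        <action type="Rewrite" url="/product-images/ProductImage.aspx?path=product-images/{tolower:{R:1}}.jpg" />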

  • Using XPath to get the correct value in selectName

    - by Sangeeta
    Below I have written the code. I need help to get the right value in selectName; I am new to XPath. Basically, what I am trying to achieve with this code is: if employeeName = Chris, I need to return 23549 to the calling function. Please help. Part of the code:

        public static string getEmployeeeID(string employeeName)
        {
            Cache cache = HttpContext.Current.Cache;
            string cacheNameEmployee = employeeName + "Tech";
            if (cache[cacheNameEpm] == null)
            {
                XPathDocument doc = new XPathDocument(HttpContext.Current.Request.MapPath("inc/xml/" + SiteManager.Site.ToString() + "TechToolTip.xml"));
                XPathNavigator navigator = doc.CreateNavigator();
                string selectName = "/Technologies/Technology[FieldName='" + fieldName + "']/EpmId";
                XPathNodeIterator nodes = navigator.Select(selectName);
                if (nodes.Count > 0)
                {
                    if (nodes.Current.Name.Equals("FieldName"))
                        //nodes.Current.MoveToParent();
                        //nodes.Current.MoveToParent();
                        //nodes.Current.MoveToChild("//EpmId");
                        cache.Add(cacheNameEpm, nodes.Current.Value, null,
                            DateTime.Now + new TimeSpan(4, 0, 0),
                            System.Web.Caching.Cache.NoSlidingExpiration,
                            CacheItemPriority.Default, null);
                }
            }
            return cache[cacheNameEpm] as string;
        }

    XML file:

        <?xml version="1.0" encoding="utf-8" ?>
        <Employees>
          <Employee>
            <EName>Chris</EName>
            <ID>23556</ID>
          </Employee>
          <Employee>
            <EName>CBailey</EName>
            <ID>22222</ID>
          </Employee>
          <Employee>
            <EName>Meghan</EName>
            <ID>12345</ID>
          </Employee>
        </Employees>

    PLEASE NOTE: This XML file has 100 nodes; I just put a few in to indicate the structure.
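    A hedged sketch of an XPath that matches the XML actually shown (the code's /Technologies/Technology[...]/EpmId path doesn't correspond to the Employees document): select the ID of the Employee whose EName matches:

        // Assumes 'navigator' was created over the Employees document above.
        string xpath = "/Employees/Employee[EName='" + employeeName + "']/ID";
        XPathNavigator idNode = navigator.SelectSingleNode(xpath);
        if (idNode != null)
            return idNode.Value;   // "23556" for Chris in the sample file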

  • A member variable's hashCode() value is different

    - by Jacques René Mesrine
    There's a piece of code that looks like this. The problem is that during bootup, two initializations take place: (1) some method does a reflection on ForumRepository and performs a newInstance() purely to invoke #setCacheEngine; (2) another method following that invokes #start(). I am noticing that the hashCode of the #cache member variable is sometimes different, in some weird scenarios. Since only one piece of code invokes #setCacheEngine, how can the hashCode change during runtime (I am assuming that a different instance will have a different hashCode)? Is there a bug here somewhere?

        public class ForumRepository implements Cacheable {
            private static CacheEngine cache;
            private static ForumRepository instance;

            public void setCacheEngine(CacheEngine engine) {
                cache = engine;
            }

            public synchronized static void start() {
                instance = new ForumRepository();
            }

            public synchronized static void addForum( ... ) {
                cache.add( .. );
                System.out.println( cache.hashCode() ); // snipped
            }

            public synchronized static void getForum( ... ) {
                ...
                cache.get( .. );
                System.out.println( cache.hashCode() ); // snipped
            }
        }
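    Two hedged debugging notes (general Java semantics, not specific to this codebase): hashCode() may be overridden to reflect mutable state, so a changed value doesn't by itself prove a new instance, whereas System.identityHashCode() does; and if the class is loaded by two classloaders (common in web containers, especially across redeploys), each copy has its own static cache field. A one-line probe covers both:

        System.out.println(System.identityHashCode(cache) + " loaded by "
                + cache.getClass().getClassLoader());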

  • PHP-FPM and APC for shared hosting?

    - by Tiffany Walker
    We are looking into finding a way to get APC to create only one cache per account/site. This can be done with FastCGI (last updated 2006…), but with fcgid, APC has to create multiple caches for multiple processes run by the same account. To get around this problem, we have been looking into PHP-FPM. The PHP process manager allows multiple PHP processes to share a single APC cache. But from what I have read (I hope I'm wrong), even if you create a pool per account, all sites across all pools will share the same APC cache. This brings us back to the same problem as with shared memcached: it's not secure! On php-fpm's site I read that you can chroot php-fpm pools and define a specific UID and GID per pool… if this is the case, then shouldn't APC have to use this user and not have access to other pools' caches? An article from 2011 suggests that you would need to run one process per pool, creating multiple launchers on different ports and different config files with one pool per config file: http://groups.drupal.org/node/198168 Is this still necessary? If so, what would be the impact of running, say, 800 php-fpm processes? Would it be mainly memory? If so, how can I work out what the memory impact would be? I guess it would be better to run 800 php-fpm processes than to have accounts creating multiple APC caches for a single site. If on average an account creates a 50 MB cache and creates 3 caches per account, that makes 150 MB per account, which makes 120 GB… However, if each account uses on average only 50 MB, that would make 40 GB. We will have at least 128 GB of RAM on our next server, so 40 GB is acceptable if running 800 × PHP-FPM does not create an overhead of more than 20 GB! What do you think: is PHP-FPM the best way to provide a secure APC cache on shared hosting with a server that has a decent amount of memory? Or should I be looking at another system? Thanks!
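    For reference, a minimal sketch of what a per-account pool looks like (values are illustrative; note the caveat above that pools inside one php-fpm master still share a single APC cache, so true per-account isolation needs one master per account, as the linked article describes):

        ; /etc/php-fpm.d/site1.conf -- hypothetical per-account pool
        [site1]
        user = site1
        group = site1
        listen = /var/run/php-fpm/site1.sock
        chroot = /home/site1
        pm = dynamic
        pm.max_children = 5
        pm.start_servers = 1
        pm.min_spare_servers = 1
        pm.max_spare_servers = 2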

  • litespeed issue with content-type

    - by sandeep.s85
    I am running Magento with LiteSpeed. The problem I am facing is that an AJAX call is made whose response header is set to x-json, but LiteSpeed sets another header with a text/html content type. I've checked the page with Apache and everything works fine. I checked the response headers under Apache and LiteSpeed; here they are.

    With Apache:

        HTTP/1.1 200 OK
        Date: Fri, 07 Sep 2012 05:58:47 GMT
        Server: Apache
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Set-Cookie: frontend=164b21c64808a05e806027bdbd4d745d; expires=Fri, 07-Sep-2012 06:58:48 GMT; path=/; domain=mydomain.com; httponly
        Connection: close
        Transfer-Encoding: chunked
        Content-Type: application/x-json

    With LiteSpeed:

        HTTP/1.1 200 OK
        Date: Fri, 07 Sep 2012 06:10:55 GMT
        Server: LiteSpeed
        Connection: close
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Set-Cookie: frontend=164b21c64808a05e806027bdbd4d745d; expires=Fri, 07-Sep-2012 07:10:55 GMT; path=/; domain=mydomain.com; httponly
        Content-Type: text/html; charset=UTF-8
        Content-Length: 474
        Vary: User-Agent

    I've also added application/json to LiteSpeed's mime.properties and restarted it, but that did not work.

  • Serving and caching content from Amazon S3 with Tomcat

    - by Rob
    Hi all. We're looking to serve a range of content using Amazon S3 as a store for the content and Tomcat to host the web application. The content is divided into free and paid-for content. We intend to authenticate the users when they access the web application running in Tomcat. Based on their authentication we can tell whether the user has access to paid-for content or only the free stuff. So I envision the flow of a request being something like this:

    1. Authenticated request to Tomcat.
    2. If the user is a "paid" user, display links to premium content.
    3. Direct requests for paid content back through Tomcat, to prevent direct access to it by non-paying users.
    4. Tomcat makes a request to S3 through a web cache to keep our costs down.
    5. Content is returned to the user.

    As we have to pay for each request to S3, I'd ideally like to cache content locally to the Tomcat instance after it has been requested for the first time, to keep costs to a minimum and to speed things up. I would also like to be able to invalidate this cache when we publish fresh content to S3. So to confirm my proposal: Client Request - Tomcat - Web Cache - S3. To invalidate the cache, I was thinking of using something like PubSubHubbub, with the cache waiting for updates to the feed for content that it should invalidate. I'd appreciate some general feedback on this approach, as I've no real experience of caching and I'm sure I've made some invalid assumptions. I'd also appreciate any recommendations for caching technologies. Thanks.

  • MySQL query, 2 similar servers, 2 minute difference in execution times

    - by mr12086
    I had a similar question on Stack Overflow, but it seems to be more server/MySQL-setup related than coding. The queries below all execute instantly on our development server, whereas they can take up to 2 minutes 20 seconds on production. The query execution time seems to be affected by how ambiguous the LIKE strings are: if they closely match a country that has few matches, it will take less time, and if you use something like 'ge' for Germany, it will take longer to execute. But this doesn't always work out like that; at times it's quite erratic. "Sending data" appears to be the culprit, but why, and what does that mean? Also, free memory on production looks to be quite low.

    Production:

        Intel Quad Xeon E3-1220 3.1GHz
        4GB DDR3
        2x 1TB SATA in RAID1
        Network speed 100Mb
        Ubuntu

    Development:

        Intel Core i3-2100, 2C/4T, 3.10GHz
        500 GB SATA - No RAID
        4GB DDR3

    UPDATE 2: mysqltuner output.

    [prod]

        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.1.61-0ubuntu0.10.04.1
        [OK] Operating on 64-bit architecture
        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in MyISAM tables: 103M (Tables: 180)
        [--] Data in InnoDB tables: 491M (Tables: 19)
        [!!] Total fragmented tables: 38
        -------- Security Recommendations -------------------------------------------
        [OK] All database users have passwords assigned
        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 77d 4h 6m 1s (53M q [7.968 qps], 14M conn, TX: 87B, RX: 12B)
        [--] Reads / Writes: 98% / 2%
        [--] Total buffers: 58.0M global + 2.7M per thread (151 max threads)
        [OK] Maximum possible memory usage: 463.8M (11% of installed RAM)
        [OK] Slow queries: 0% (12K/53M)
        [OK] Highest usage of available connections: 22% (34/151)
        [OK] Key buffer size / total MyISAM indexes: 16.0M/10.6M
        [OK] Key buffer hit rate: 98.7% (162M cached / 2M reads)
        [OK] Query cache efficiency: 20.7% (7M cached / 36M selects)
        [!!] Query cache prunes per day: 3934
        [OK] Sorts requiring temporary tables: 1% (3K temp sorts / 230K sorts)
        [!!] Joins performed without indexes: 71068
        [OK] Temporary tables created on disk: 24% (3M on disk / 13M total)
        [OK] Thread cache hit rate: 99% (690 created / 14M connections)
        [!!] Table cache hit rate: 0% (64 open / 85M opened)
        [OK] Open file limit used: 12% (128/1K)
        [OK] Table locks acquired immediately: 99% (16M immediate / 16M locks)
        [!!] InnoDB data size / buffer pool: 491.9M/8.0M
        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            Enable the slow query log to troubleshoot bad queries
            Adjust your join queries to always utilize indexes
            Increase table_cache gradually to avoid file descriptor limits
        Variables to adjust:
            query_cache_size (> 16M)
            join_buffer_size (> 128.0K, or always use indexes with joins)
            table_cache (> 64)
            innodb_buffer_pool_size (>= 491M)

    [dev]

        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.1.62-0ubuntu0.11.10.1
        [!!] Switch to 64-bit OS - MySQL cannot currently use all of your RAM
        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in MyISAM tables: 185M (Tables: 632)
        [--] Data in InnoDB tables: 967M (Tables: 38)
        [!!] Total fragmented tables: 73
        -------- Security Recommendations -------------------------------------------
        [OK] All database users have passwords assigned
        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 1d 2h 26m 9s (5K q [0.058 qps], 1K conn, TX: 4M, RX: 1M)
        [--] Reads / Writes: 99% / 1%
        [--] Total buffers: 58.0M global + 2.7M per thread (151 max threads)
        [OK] Maximum possible memory usage: 463.8M (11% of installed RAM)
        [OK] Slow queries: 0% (0/5K)
        [OK] Highest usage of available connections: 1% (2/151)
        [OK] Key buffer size / total MyISAM indexes: 16.0M/18.6M
        [OK] Key buffer hit rate: 99.9% (60K cached / 36 reads)
        [OK] Query cache efficiency: 44.5% (1K cached / 2K selects)
        [OK] Query cache prunes per day: 0
        [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 44 sorts)
        [OK] Temporary tables created on disk: 24% (162 on disk / 666 total)
        [OK] Thread cache hit rate: 99% (2 created / 1K connections)
        [!!] Table cache hit rate: 1% (64 open / 4K opened)
        [OK] Open file limit used: 8% (88/1K)
        [OK] Table locks acquired immediately: 100% (1K immediate / 1K locks)
        [!!] InnoDB data size / buffer pool: 967.7M/8.0M
        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            Enable the slow query log to troubleshoot bad queries
            Increase table_cache gradually to avoid file descriptor limits
        Variables to adjust:
            table_cache (> 64)
            innodb_buffer_pool_size (>= 967M)

    UPDATE 1: When testing the queries listed here there is usually no more than one other query taking place, and usually none. Because production is actually handling Apache requests that development gets very few of (it's only myself and one other person who access it), could the 4GB of RAM be getting exhausted by using the single machine for both Apache and the MySQL server?

    Production:

        sudo hdparm -tT /dev/sda
        /dev/sda:
         Timing cached reads:   24872 MB in 2.00 seconds = 12450.72 MB/sec
         Timing buffered disk reads:  368 MB in 3.00 seconds = 122.49 MB/sec
        sudo hdparm -tT /dev/sdb
        /dev/sdb:
         Timing cached reads:   24786 MB in 2.00 seconds = 12407.22 MB/sec
         Timing buffered disk reads:  350 MB in 3.00 seconds = 116.53 MB/sec

    Server version (MySQL + Ubuntu versions): 5.1.61-0ubuntu0.10.04.1

    Development:

        sudo hdparm -tT /dev/sda
        /dev/sda:
         Timing cached reads:   10632 MB in 2.00 seconds = 5319.40 MB/sec
         Timing buffered disk reads:  400 MB in 3.01 seconds = 132.85 MB/sec

    Server version (MySQL + Ubuntu versions): 5.1.62-0ubuntu0.11.10.1

    ORIGINAL DATA: This query is NOT the query in question but is related, so I'll post it:

        SELECT f.form_question_has_answer_id
        FROM form_question_has_answer f
        INNER JOIN project_company_has_user p ON f.form_question_has_answer_user_id = p.project_company_has_user_user_id
        INNER JOIN company c ON p.project_company_has_user_company_id = c.company_id
        INNER JOIN project p2 ON p.project_company_has_user_project_id = p2.project_id
        INNER JOIN user u ON p.project_company_has_user_user_id = u.user_id
        INNER JOIN form f2 ON p.project_company_has_user_project_id = f2.form_project_id
        WHERE (f2.form_template_name = 'custom'
               AND p.project_company_has_user_garbage_collection = 0
               AND p.project_company_has_user_project_id = '29')
          AND (LCASE(c.company_country) LIKE '%ge%' OR LCASE(c.company_country) LIKE '%abcde%')
          AND f.form_question_has_answer_form_id = '174'

    The EXPLAIN plan for the above query, run on both dev and production, produces the same plan:

        id | select_type | table | type   | possible_keys | key | key_len | ref | rows | Extra
        1  | SIMPLE      | p2    | const  | PRIMARY | PRIMARY | 4 | const | 1 | Using index
        1  | SIMPLE      | f     | ref    | form_question_has_answer_form_id,form_question_has_answer_user_id | form_question_has_answer_form_id | 4 | const | 796 | Using where
        1  | SIMPLE      | u     | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using index
        1  | SIMPLE      | p     | ref    | project_company_has_user_unique_key,project_company_has_user_user_id,project_company_has_user_company_id,project_company_has_user_project_id | project_company_has_user_user_id | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using where
        1  | SIMPLE      | f2    | ref    | form_project_id | form_project_id | 4 | const | 15 | Using where
        1  | SIMPLE      | c     | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_company_id | 1 | Using where

    This query takes 2 minutes ~20 seconds to execute. The query that is ACTUALLY being run on the server is this one:

        SELECT COUNT(*) AS num_results
        FROM (
            SELECT f.form_question_has_answer_id
            FROM form_question_has_answer f
            INNER JOIN project_company_has_user p ON f.form_question_has_answer_user_id = p.project_company_has_user_user_id
            INNER JOIN company c ON p.project_company_has_user_company_id = c.company_id
            INNER JOIN project p2 ON p.project_company_has_user_project_id = p2.project_id
            INNER JOIN user u ON p.project_company_has_user_user_id = u.user_id
            INNER JOIN form f2 ON p.project_company_has_user_project_id = f2.form_project_id
            WHERE (f2.form_template_name = 'custom'
                   AND p.project_company_has_user_garbage_collection = 0
                   AND p.project_company_has_user_project_id = '29')
              AND (LCASE(c.company_country) LIKE '%ge%' OR LCASE(c.company_country) LIKE '%abcde%')
              AND f.form_question_has_answer_form_id = '174'
            GROUP BY f.form_question_has_answer_id
        ) dctrn_count_query;

    With EXPLAIN plans (again, the same on dev and production):

        id | select_type | table | type   | possible_keys | key | key_len | ref | rows | Extra
        1  | PRIMARY     | NULL  | NULL   | NULL | NULL | NULL | NULL | NULL | Select tables optimized away
        2  | DERIVED     | p2    | const  | PRIMARY | PRIMARY | 4 |  | 1 | Using index
        2  | DERIVED     | f     | ref    | form_question_has_answer_form_id,form_question_has_answer_user_id | form_question_has_answer_form_id | 4 |  | 797 | Using where
        2  | DERIVED     | p     | ref    | project_company_has_user_unique_key,project_company_has_user_user_id,project_company_has_user_company_id,project_company_has_user_project_id,project_company_has_user_garbage_collection | project_company_has_user_user_id | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using where
        2  | DERIVED     | f2    | ref    | form_project_id | form_project_id | 4 |  | 15 | Using where
        2  | DERIVED     | c     | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_company_id | 1 | Using where
        2  | DERIVED     | u     | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_user_id | 1 | Using where; Using index

    On the production server, the information I have is as follows. Upon execution:

        +-------------+
        | num_results |
        +-------------+
        |           3 |
        +-------------+
        1 row in set (2 min 14.28 sec)

    Show profile:

        Status                          Duration
        starting                        0.000016
        checking query cache for query  0.000057
        Opening tables                  0.004388
        System lock                     0.000003
        Table lock                      0.000036
        init                            0.000030
        optimizing                      0.000016
        statistics                      0.000111
        preparing                       0.000022
        executing                       0.000004
        Sorting result                  0.000002
        Sending data                    136.213836
        end                             0.000007
        query end                       0.000002
        freeing items                   0.004273
        storing result in query cache   0.000010
        logging slow query              0.000001
        logging slow query              0.000002
        cleaning up                     0.000002

    On development, the results are as follows:

        +-------------+
        | num_results |
        +-------------+
        |           3 |
        +-------------+
        1 row in set (0.08 sec)

    Again, the profile for this query:

        Status                          Duration
        starting                        0.000022
        checking query cache for query  0.000148
        Opening tables                  0.000025
        System lock                     0.000008
        Table lock                      0.000101
        optimizing                      0.000035
        statistics                      0.001019
        preparing                       0.000047
        executing                       0.000008
        Sorting result                  0.000005
        Sending data                    0.086565
        init                            0.000015
        optimizing                      0.000006
        executing                       0.000020
        end                             0.000004
        query end                       0.000004
        freeing items                   0.000028
        storing result in query cache   0.000005
        removing tmp table              0.000008
        closing tables                  0.000008
        logging slow query              0.000002
        cleaning up                     0.000005

    If I remove the user and/or project inner joins, the query time is reduced to 30s. Last bit of information I have: the MySQL server and Apache are on the same box; there is only one box for production. Production output from top, before and after:

        top - 15:43:25 up 78 days, 12:11, 4 users, load average: 1.42, 0.99, 0.78
        Tasks: 162 total, 2 running, 160 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.1%us, 50.4%sy, 0.0%ni, 49.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem:  4037868k total, 3772580k used,  265288k free,  243704k buffers
        Swap: 3905528k total,  265384k used, 3640144k free, 1207944k cached

        top - 15:44:31 up 78 days, 12:13, 4 users, load average: 1.94, 1.23, 0.87
        Tasks: 160 total, 2 running, 157 sleeping, 0 stopped, 1 zombie
        Cpu(s): 0.2%us, 50.6%sy, 0.0%ni, 49.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem:  4037868k total, 3834300k used,  203568k free,  243736k buffers
        Swap: 3905528k total,  265384k used, 3640144k free, 1207804k cached

    But this isn't a good representation of production's normal status, so here is a grab of it from today, outside of executing the queries:

        top - 11:04:58 up 79 days, 7:33, 4 users, load average: 0.39, 0.58, 0.76
        Tasks: 156 total, 1 running, 155 sleeping, 0 stopped, 0 zombie
        Cpu(s): 3.3%us, 2.8%sy, 0.0%ni, 93.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem:  4037868k total, 3676136k used,  361732k free,  271480k buffers
        Swap: 3905528k total,  268736k used, 3636792k free, 1063432k cached

    Development (this one doesn't change during or after):

        top - 15:47:07 up 110 days, 22:11, 7 users, load average: 0.17, 0.07, 0.06
        Tasks: 210 total, 2 running, 208 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.1%us, 0.2%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem:  4111972k total, 1821100k used, 2290872k free,  238860k buffers
        Swap: 4183036k total,   66472k used, 4116564k free,  921072k cached
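    A hedged reading of the numbers above (drawn from mysqltuner's own flags, so verify against your workload): both servers run an 8 MB InnoDB buffer pool against roughly 0.5-1 GB of InnoDB data, and production's query cache is being pruned thousands of times a day, so a long "Sending data" phase on the busy box is consistent with reading table data from disk under contention. The corresponding my.cnf changes would look something like:

        [mysqld]
        innodb_buffer_pool_size = 1G     # tuner suggests >= 491M (prod) / >= 967M (dev)
        query_cache_size        = 32M    # tuner suggests > 16M on prod
        table_cache             = 512    # tuner suggests > 64
        join_buffer_size        = 1M     # or add indexes so joins stop scanning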

  • CSS and JS files not being updated, supposedly because of Nginx Caching

    - by Alberto Elias
    I have my web app working with AppCache, and I would like that when I modify my HTML/CSS/JS files and then update my cache manifest, users get the updated version of those files when they access the app. If I change an HTML file, it works perfectly, but when I change CSS and JS files, the old version is still being used. I've been checking everything and I think it's related to my nginx configuration. I have a cache.conf file that contains the following:

        gzip on;
        gzip_types text/css application/x-javascript text/x-component text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon;

        location ~ \.(css|js|htc)$ {
            expires 31536000s;
            add_header Pragma "public";
            add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate";
        }

        location ~ \.(html|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml)$ {
            expires 3600s;
            add_header Pragma "public";
            add_header Cache-Control "max-age=3600, public, must-revalidate, proxy-revalidate";
        }

    And in default.conf I have my locations. I would like to have this caching working on all locations except one; how could I configure this? I've tried the following and it isn't working:

        location /dir1/dir2/ {
            root /var/www/dir1;
            add_header Pragma "no-cache";
            add_header Cache-Control "private";
            expires off;
        }

    Thanks
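    One hedged explanation for the override never firing: in nginx, regular-expression locations are checked after prefix locations and win when they match, so requests for /dir1/dir2/foo.css still land in the "~ \.(css|js|htc)$" block above. The "^~" modifier makes the prefix match final:

        # ^~ stops the regex-location search, so the css/js rules above
        # no longer apply under this path:
        location ^~ /dir1/dir2/ {
            root /var/www/dir1;
            add_header Cache-Control "private, no-cache";
            expires off;
        }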

  • nginx- Rewrite URL with Trailing Slash

    - by Bryan
    I have a specialized set of rewrite rules to accommodate a multi-site CMS setup. I am trying to have nginx force a trailing slash on the request URL: I would like it to redirect requests for domain.com/some-random-article to domain.com/some-random-article/. I know there are semantic considerations with this, but I would like to do it for SEO purposes. Here is my current server config:

        server {
            listen 80;
            server_name domain.com mirror.domain.com;
            root /rails_apps/master/public;
            passenger_enabled on;

            # Redirect from www to non-www
            if ($host = 'domain.com' ) {
                rewrite ^/(.*)$ http://www.domain.com/$1 permanent;
            }

            location /assets/ {
                expires 1y;
                rewrite ^/assets/(.*)$ /assets/$http_host/$1 break;
            }

            # / -> index.html
            if (-f $document_root/cache/$host$uri/index.html) {
                rewrite (.*) /cache/$host$1/index.html break;
            }

            # /about -> /about.html
            if (-f $document_root/cache/$host$uri.html) {
                rewrite (.*) /cache/$host$1.html break;
            }

            # other files
            if (-f $document_root/cache/$host$uri) {
                rewrite (.*) /cache/$host$1 break;
            }
        }

    How would I modify this to add the trailing slash? I would assume there has to be a check for the slash, so that you don't end up with domain.com/some-random-article//.
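    A hedged sketch of the usual pattern for this (placed near the top of the server block): redirect any URI whose last character is not a slash and which contains no dot, so real files keep working and already-slashed URLs can never pick up a second slash:

        # [^.]* = no file extension anywhere; trailing [^/] = last char is
        # not a slash, so // can never be produced.
        rewrite ^([^.]*[^/])$ $1/ permanent;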

  • Content not being compressed even though I'm using zlib in php.ini

    - by Tola Odejayi
    I've edited my php.ini file so that it has these two entries:

        zlib.output_compression = On
        zlib.output_compression_level = 4

    However, after restarting Apache, when I request PHP pages, the headers returned in the response indicate that my server is still NOT serving compressed pages (here are selected headers as viewed using Chrome's Network feature):

        Cache-Control:no-cache, must-revalidate, max-age=0
        Connection:Keep-Alive
        Content-Type:text/html; charset=UTF-8
        Date:Mon, 17 Sep 2012 23:46:13 GMT
        Expires:Wed, 11 Jan 1984 05:00:00 GMT
        Last-Modified:Mon, 17 Sep 2012 23:46:13 GMT
        Pragma:no-cache
        Proxy-Connection:Keep-Alive
        Server:Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/0.9.8e-fips-rhel5 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 PHP/5.2.17
        Transfer-Encoding:chunked
        Via:1.1 XXX-PRXY-07
        X-Powered-By:PHP/5.2.17

    What might I be doing wrong? Is there any other setting that I need to change?

    EDIT: Here is another set of headers returned to another computer:

        Cache-Control:no-cache, must-revalidate, max-age=0
        Connection:close
        Content-Type:text/html; charset=UTF-8
        Date:Thu, 20 Sep 2012 09:45:26 GMT
        Expires:Wed, 11 Jan 1984 05:00:00 GMT
        Last-Modified:Thu, 20 Sep 2012 09:45:26 GMT
        Pragma:no-cache
        Server:Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/0.9.8e-fips-rhel5 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 PHP/5.2.17
        Transfer-Encoding:chunked
        Vary:Cookie
        X-Powered-By:PHP/5.2.17
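    Two hedged checks worth making: confirm the running PHP actually picked up the setting (the edited php.ini may not be the one Apache loads), and test with a client that explicitly offers gzip, since the Via: header above suggests a proxy that may be stripping Accept-Encoding on the way in:

        php -i | grep zlib.output_compression    # CLI ini may differ; also check phpinfo() via Apache
        curl -sI -H 'Accept-Encoding: gzip,deflate' http://example.com/page.php
        # a compressed response should carry: Content-Encoding: gzip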

  • How can I use wildcards in an Nginx map directive?

    - by Ian Clelland
    I am trying to use nginx to serve cached files produced by a web application, and have spotted a potential problem: the URL space is wide, and will exceed the ext3 limit of 32000 subdirectories. I would like to break up the subdirectories, making, say, a two-level filesystem cache. So, where I currently cache a file at

        /var/cache/www/arbitrary_directory_name/index.html

    I would store it instead at something like

        /var/cache/www/a/r/arbitrary_directory_name/index.html

    My trouble is that I can't get try_files, or even rewrite, to make that mapping. My searching on the subject leads me to believe that I need to do something like this (heavily abbreviated):

        http {
            map $request_uri $prefix {
                /aa*  a/a;
                /ab*  a/b;
                /ac*  a/c;
                ...
                /zz*  z/z;
            }

            location / {
                try_files /var/cache/www/$prefix/$request_uri/index.html @fallback;
                # or
                # if (-f /var/cache/www/$prefix/$request_uri/index.html) {
                #     rewrite ^(.*)$ /var/cache/www/$prefix/$1/index.html;
                # }
            }
        }

    But I can't get the /aa* pattern to match the incoming URI. Without the *, it will match an exact URI, but I can't get it to match just the first two characters. The nginx documentation suggests that wildcards should be allowed, but I can't see a way to get them to work. Is there a way to do this? Am I missing something simple? Or am I going about this the wrong way?
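    A hedged pointer: map keys are exact strings unless the key starts with "~" (a regular expression); the glob-style /aa* form isn't supported (the wildcard forms the docs mention for map are the leading/trailing ones used for hostnames). One way to sidestep the map entirely is a regex location with named captures:

        # The first two path characters become $c1/$c2. Note that
        # try_files paths are relative to root, unlike the absolute
        # /var/cache/www/... paths attempted above.
        location ~ "^/(?<c1>.)(?<c2>.)" {
            root /var/cache/www;
            try_files /$c1/$c2$uri/index.html @fallback;
        }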
