Search Results

Search found 7827 results on 314 pages for 'counter cache'.


  • swapping or thrashing with vast amounts of unmapped pagecache

    - by Marco
    I'm using Kubuntu Jaunty (i386, 32-bit), kernel 2.6.28-13-generic. I have 4 GB of RAM, of which only 3317 MB are seen by the system (I guess because of the 32-bit kernel). I'm seeing that pagecache utilization grows continually, up to the point that the system becomes unusable (after a few days). This happens even when I don't do anything (all user applications closed and the bare minimum of services enabled). If swap is enabled, the system starts to use swap space (using it all in the end); even if swap is disabled, disk activity becomes continuous and the system unresponsive. For example, right now the system is working (albeit a tad slow), with only Firefox and Wing IDE running, and I have 2 GB cached with only 45 MB mapped:

        $ free
                     total       used       free     shared    buffers     cached
        Mem:       3346388    3247328      99060          0       8416    2117980
        -/+ buffers/cache:    1120932    2225456
        Swap:      2144668     519448    1625220

        $ cat /proc/meminfo
        MemTotal:        3346388 kB
        MemFree:           97128 kB
        Buffers:            7872 kB
        Cached:          2120224 kB
        SwapCached:       413860 kB
        Active:          2304596 kB
        Inactive:         865984 kB
        Active(anon):    2279168 kB
        Inactive(anon):   830236 kB
        Active(file):      25428 kB
        Inactive(file):    35748 kB
        Unevictable:          32 kB
        Mlocked:              32 kB
        HighTotal:       2492940 kB
        HighFree:           5456 kB
        LowTotal:         853448 kB
        LowFree:           91672 kB
        SwapTotal:       2144668 kB
        SwapFree:        1625244 kB
        Dirty:                84 kB
        Writeback:             0 kB
        AnonPages:        629304 kB
        Mapped:            45768 kB
        Slab:              45600 kB
        SReclaimable:      21756 kB
        SUnreclaim:        23844 kB
        PageTables:         4468 kB
        NFS_Unstable:          0 kB
        Bounce:                0 kB
        WritebackTmp:          0 kB
        CommitLimit:     3817860 kB
        Committed_AS:    3735020 kB
        VmallocTotal:     122880 kB
        VmallocUsed:        9352 kB
        VmallocChunk:      66600 kB
        HugePages_Total:       0
        HugePages_Free:        0
        HugePages_Rsvd:        0
        HugePages_Surp:        0
        Hugepagesize:       4096 kB
        DirectMap4k:       16376 kB
        DirectMap4M:      888832 kB

    If I try to drop the caches, little happens:

        # sync ; echo 3 > /proc/sys/vm/drop_caches ; free
                     total       used       free     shared    buffers     cached
        Mem:       3346388    3220580     125808          0       3020    2100600
        -/+ buffers/cache:    1116960    2229428
        Swap:      2144668     519356    1625312

    Right now I have vm.swappiness = 5, but I've also tried 0 and 1 (without noticeable differences). I've also tried vm.vfs_cache_pressure = 50 and 150 (again, no differences). As I said, the pagecache eats all memory even with swapping turned off. What is happening? How can I avoid this?
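    A quick way to watch where the memory is actually going over time is to sample the relevant /proc/meminfo counters; this is a minimal shell sketch (field names taken from the output above):

        # log page cache and anonymous memory counters once a minute
        while true; do
            date
            grep -E '^(Cached|SwapCached|AnonPages|Mapped|Slab):' /proc/meminfo
            sleep 60
        done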

    Read the article

  • Apache Caching and Expires configuration

    - by mcondiff
    I'm looking for the best possible caching/expires configuration for my specific situation. I realize that some sites have advocated turning ETags off (Header unset ETag, FileETag None). I know that I should use either Expires or Cache-Control and, in addition, either Last-Modified or ETags (per the YSlow docs). I inherited a client's server that uses the following in .htaccess:

        <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf|xml|txt|html|htm)$">
            Header set Cache-Control "max-age=172800, public, must-revalidate"
        </FilesMatch>

    With this server I am not going to be able to rely on staff to rename images, CSS and JS in web applications, so I do not want to set the expires date far in the future without knowing (with good certainty) that most or all browsers will check to see whether content has changed. What I do not want is someone calling me to say the website is broken because they replaced an image and it isn't showing up. But I do want to take as much advantage of caching and expires as I can while still ensuring that nearly all browsers will check with the server to see whether components have changed. I have access to both .htaccess and the Apache .conf file, and it is a single server; the content is not deployed on multiple servers. What would be the best .htaccess or .conf configuration for me to achieve my goals for this client's server? Thanks for your help
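    One hedged starting point, given the goal of revalidation over long expiry (a sketch only; the one-day max-age and the trimmed file-type list are assumptions, not a recommendation from the original post):

        # requires mod_headers; keep ETags off and rely on Last-Modified,
        # as the YSlow guidance above suggests
        Header unset ETag
        FileETag None
        <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$">
            # short max-age plus must-revalidate: browsers may reuse the copy for a
            # day, then send a conditional (If-Modified-Since) request once it is stale
            Header set Cache-Control "max-age=86400, public, must-revalidate"
        </FilesMatch>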

    Read the article

  • Conditionally changing MIME type in nginx

    - by Peter
    I'm using nginx as a frontend to Rails. All pages are cached as .html files on disk, and nginx serves these files if they exist. I want to send the correct MIME type for feeds (application/rss+xml), but the only way I have found so far is quite ugly, and I'm wondering if there is a cleaner way. Here is my config:

        location ~ /feed/$ {
            types {}
            default_type application/rss+xml;
            root /var/www/cache/;
            if (-f $request_filename/index.html) { rewrite (.*) $1/index.html break; }
            if (-f $request_filename.html)       { rewrite (.*) $1.html break; }
            if (-f $request_filename)            { break; }
            if (!-f $request_filename)           { proxy_pass http://mongrel; break; }
        }

        location / {
            root /var/www/cache/;
            if (-f $request_filename/index.html) { rewrite (.*) $1/index.html break; }
            if (-f $request_filename.html)       { rewrite (.*) $1.html break; }
            if (-f $request_filename)            { break; }
            if (!-f $request_filename)           { proxy_pass http://mongrel; break; }
        }

    My questions: Is there a better way to change the MIME type? All cached files have .html extensions and I cannot change this. Is there a way to factor out the if conditions in /feed/$ and /? I understand that I can use include, but I'm hoping for a better way; putting part of the config in a different file is not that readable. Can you spot any bugs in the if conditions? I'm using nginx 0.6.32 (Debian Lenny) and prefer to stay with the version in APT. Thanks.
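    For what it's worth, the include-based factoring mentioned in the question would look roughly like this (a sketch; the include path is made up):

        # hypothetical shared file: /etc/nginx/cache-fallback.conf
        root /var/www/cache/;
        if (-f $request_filename/index.html) { rewrite (.*) $1/index.html break; }
        if (-f $request_filename.html)       { rewrite (.*) $1.html break; }
        if (-f $request_filename)            { break; }
        if (!-f $request_filename)           { proxy_pass http://mongrel; break; }

        # and in the main config, only the MIME override stays inline:
        location ~ /feed/$ {
            types {}
            default_type application/rss+xml;
            include /etc/nginx/cache-fallback.conf;
        }
        location / {
            include /etc/nginx/cache-fallback.conf;
        }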

    Read the article

  • Unusual Caching Issue with IE 7/8 and IIS 7

    - by Daniel A. White
    We recently moved a site into production running Server 2008 x64 and IIS 7. The ASP.NET pages apparently load just fine, but with IE 7 and 8 a weird caching issue has cropped up with the CSS and JavaScript files on the page. On a very sporadic schedule, IE does not get all the files necessary to compose the page (i.e. CSS and JS files). When I manually go to the missing files from the address bar, they come back from local cache as empty. I hit F5 on these source files and magically they come down properly. I then refresh the site, and after loading a few files the cache seems to hold. This problem has only been reproduced (again, sporadically) on IE 7 and 8 running on XP; Chrome and Firefox appear to be immune. We have set IIS to use server-side kernel caching for CSS, JS and images, and we have also set content in the App_Themes and Scripts directories to expire immediately. One initial thought was that a SWF loading an FLV on page load was to blame. None of these fixes has remedied the problem. We had no problems on our staging server, which runs Server 2003 and IIS 6. Any ideas would be greatly appreciated. P.S. It sounds similar to this problem: http://serverfault.com/questions/115099/iis-content-length-0-for-css-javascript-and-images (but we do have the Static Content module installed).
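    For reference, the "expire immediately" setting described above is typically done with a web.config dropped into each directory; a sketch of what that looks like (IIS7 system.webServer syntax; the exact values here are assumed, not taken from the affected server):

        <configuration>
          <system.webServer>
            <staticContent>
              <!-- sends Cache-Control: max-age=0 on everything in this directory -->
              <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="00:00:00" />
            </staticContent>
          </system.webServer>
        </configuration>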

    Read the article

  • Output Caching with IIS7 - How To for a dynamic aspx page?

    - by Lieven Cardoen
    I have a RetrieveBlob.aspx that takes some query string variables and returns an asset. Each URL corresponds to a unique asset. In RetrieveBlob.aspx a Cache Profile is set. In Web.config the profile looks like this (under the system.web tag):

        <caching>
          <outputCache enableOutputCache="true" />
          <outputCacheSettings>
            <outputCacheProfiles>
              <add duration="14800" enabled="true" varyByParam="*" name="AssetCacheProfile" />
            </outputCacheProfiles>
          </outputCacheSettings>
        </caching>

    OK, this works fine: when I put a breakpoint in the code-behind of RetrieveBlob.aspx, it gets triggered the first time and not on subsequent requests. Now, I throw away the Cache Profile and instead put this in Web.config under system.webServer:

        <caching>
          <profiles>
            <add extension=".swf" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
            <add extension=".flv" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
            <add extension=".gif" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
            <add extension=".png" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
            <add extension=".mp3" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
            <add extension=".jpeg" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
            <add extension=".jpg" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
          </profiles>
        </caching>

    Now the caching doesn't work anymore. What am I doing wrong? Is it possible to configure a caching profile for a dynamic aspx page under the caching tag of system.webServer?
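    Since IIS-level profiles are keyed by extension, one hedged guess at an .aspx entry is sketched below. The attributes shown (including varyByQueryString) exist in the IIS7 caching schema, but whether this behaves as desired for RetrieveBlob.aspx is an assumption:

        <caching>
          <profiles>
            <!-- vary on the full query string, since each URL maps to a unique
                 asset; kernel caching cannot vary by query string, so it is off -->
            <add extension=".aspx" policy="CacheForTimePeriod" kernelCachePolicy="DontCache"
                 duration="00:08:00" varyByQueryString="*" />
          </profiles>
        </caching>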

    Read the article

  • Curious: What makes some CPUs better than others? [closed]

    - by Zizma
    I have been wondering about this for a long while now and was hoping someone here could answer it pretty easily. If I were looking for the most powerful CPU, what should I really be looking at? There are so many different parameters of a CPU, and I want to know what each one does and what really matters. Basically this: What is the deal with cores? If I take optimized applications out of the mix, would it theoretically be better to get a quad-core 1.0 GHz CPU or a single-core 4 GHz CPU? Also, what is the difference between, say, a Sandy Bridge CPU and an Ivy Bridge CPU? If they both had the same clock speed and number of cores, would the Ivy Bridge perform better? Does an older Xeon with a clock speed and core count equal to a new i7 really perform worse? Does size matter? Why would I go with a 22nm CPU over a 32nm one when the size difference is so trivial? What about the cache? When does the cache come into play for performance?
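    On the cores-versus-clock question, Amdahl's law is the usual back-of-the-envelope tool. If a fraction P of a program can run in parallel, the speedup on N cores is bounded by

        S(N) = 1 / ((1 - P) + P / N)

    As a worked instance: with P = 0.5, four 1 GHz cores give at most S(4) = 1 / (0.5 + 0.125) = 1.6x over a single 1 GHz core, while one 4 GHz core speeds up everything by roughly 4x (ignoring memory and per-clock efficiency differences, which in practice matter a great deal). Only as P approaches 1 does the quad-core close the gap.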

    Read the article

  • Offline copies in Windows file sharing

    - by netvope
    I frequently access media files (music or video) on a remote Windows file share. My Internet connection is not very fast, and I find it a waste of bandwidth to repeatedly fetch the same files; for example, I may listen to the same song 30 times in a month. So I would like to cache the files I've used. I know Windows has an "Always available offline" feature, but I think it doesn't suit my needs. I don't want to make the whole share available offline, as the remote share is huge (terabytes), and making individual files available offline is tedious because the files are scattered across many different directories. It would be much more convenient if I could simply cache the ones I've used. Also: the files on the share seldom change; many of the files are rarely used; some of the files are frequently used; and I don't have a list of the most frequently used files. So I think the best way is to have a caching proxy for the Windows share. What do you think? I have a Linux box sitting around; perhaps I should try to set up Samba4?

    Read the article

  • Trying to get dynamic content hole-punched through Magento's Full Page Cache

    - by rlflow
    I am using Magento Enterprise 1.10.1.1 and need to get some dynamic content on our product pages. I am inserting the current time in a block to quickly see whether it is working, but can't seem to get through the full page cache. I have tried a variety of implementations found here:

        http://tweetorials.tumblr.com/post/10160075026/ee-full-page-cache-hole-punching
        http://oggettoweb.com/blog/customizations-compatible-magento-full-page-cache/
        http://magentophp.blogspot.com/2011/02/magento-enterprise-full-page-caching.html
        http://www.exploremagento.com/magento/simple-custom-module.php (custom module)

    Any solutions, thoughts, comments, or advice are welcome. Here is my code:

    app/code/local/Fido/Example/etc/config.xml

        <?xml version="1.0"?>
        <config>
            <modules>
                <Fido_Example>
                    <version>0.1.0</version>
                </Fido_Example>
            </modules>
            <global>
                <blocks>
                    <fido_example>
                        <class>Fido_Example_Block</class>
                    </fido_example>
                </blocks>
            </global>
        </config>

    app/code/local/Fido/Example/etc/cache.xml

        <?xml version="1.0" encoding="UTF-8"?>
        <config>
            <placeholders>
                <fido_example>
                    <block>fido_example/view</block>
                    <name>example</name>
                    <placeholder>CACHE_TEST</placeholder>
                    <container>Fido_Example_Model_Container_Cachetest</container>
                    <cache_lifetime>86400</cache_lifetime>
                </fido_example>
            </placeholders>
        </config>

    app/code/local/Fido/Example/Block/View.php

        <?php
        /**
         * Example View block
         *
         * @codepool Local
         * @category Fido
         * @package  Fido_Example
         * @module   Example
         */
        class Fido_Example_Block_View extends Mage_Core_Block_Template
        {
            private $message;
            private $att;

            protected function createMessage($msg)
            {
                $this->message = $msg;
            }

            public function receiveMessage()
            {
                if ($this->message != '') {
                    return $this->message;
                } else {
                    $this->createMessage('Hello World');
                    return $this->message;
                }
            }

            protected function _toHtml()
            {
                $html = parent::_toHtml();
                if ($this->att = $this->getMyCustom() && $this->getMyCustom() != '') {
                    $html .= '<br />' . $this->att;
                } else {
                    $now = date('m-d-Y h:i:s A');
                    $html .= $now;
                    $html .= '<br />';
                }
                return $html;
            }
        }

    app/code/local/Fido/Example/Model/Container/Cachetest.php

        <?php
        class Fido_Example_Model_Container_Cachetest extends Enterprise_PageCache_Model_Container_Abstract
        {
            protected function _getCacheId()
            {
                return 'HOMEPAGE_PRODUCTS' . md5($this->_placeholder->getAttribute('cache_id') . $this->_getIdentifier());
            }

            protected function _renderBlock()
            {
                $blockClass = $this->_placeholder->getAttribute('block');
                $template = $this->_placeholder->getAttribute('template');
                $block = new $blockClass;
                $block->setTemplate($template);
                return $block->toHtml();
            }

            protected function _saveCache($data, $id, $tags = array(), $lifetime = null)
            {
                return false;
            }
        }

    app/design/frontend/enterprise/[mytheme]/template/example/view.phtml

        <?php
        /**
         * Fido view template
         *
         * @see Fido_Example_Block_View
         */
        ?>
        <div>
            <?php echo $this->receiveMessage(); ?>
        </div>

    Snippet from app/design/frontend/enterprise/[mytheme]/layout/catalog.xml:

        <reference name="content">
            <block type="catalog/product_view" name="product.info" template="catalog/product/view.phtml">
                <block type="fido_example/view" name="product.info.example" as="example" template="example/view.phtml" />

    Read the article

  • HTTP header 304 and caching?

    - by Royi Namir
    Our company uses these settings (don't ask me why): for every request they want a new request to the server. This is an intranet system which uses only IE, and the setting is defined in IE's options (the screenshot is not included here). We also have Windows authentication (NTLM) in IIS 7. I have two questions, please. Question #1: When the browser makes a request for a CSS file (leave the 401 response aside for now; that is just how NTLM works), it sends an If-Modified-Since header. Why does it add this header? How can I configure that? Why doesn't it honor the IE settings and try to download the file each time, as shown in the first picture (also not included here)? Question #2: The response (after the NTLM negotiation) was 304 Not Modified, and I assume that's because we sent the request with the If-Modified-Since header. But there is a problem: it is actually telling the browser to load the file from cache, even though I explicitly told IE in its settings not to load from cache. What am I missing here? Thanks a lot.
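    For context, the conditional-request mechanism in play looks like this (an illustrative exchange, not a capture from the system in question; host and file names are made up):

        GET /styles/site.css HTTP/1.1
        Host: intranet
        If-Modified-Since: Tue, 01 May 2012 10:00:00 GMT

        HTTP/1.1 304 Not Modified

    A 304 carries no body; it is the server's way of saying "your cached copy is still current", which is why the browser then reads the file from its cache even though it did revalidate with the server on every request.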

    Read the article

  • Caching all files in Varnish

    - by csgwro
    I want my Varnish servers to cache all files. The backend is lighttpd hosting only static files, and there is an md5 in the URL in case a file changes (e.g. /gfx/Bird.b6e0bc2d6cbb7dfe1a52bc45dd2b05c4.swf). However, my hit ratio is very poor (about 0.18). My config:

        sub vcl_recv {
            set req.backend = default;
            ### pass health checks to the backend
            if (req.url ~ "^/health.html$") {
                return (pass);
            }
            remove req.http.If-None-Match;
            remove req.http.cookie;
            remove req.http.authenticate;
            if (req.request == "GET") {
                return (lookup);
            }
        }

        sub vcl_fetch {
            ### do not cache error codes
            if (beresp.status == 404 || beresp.status >= 500) {
                set beresp.ttl = 0s;
            }
            remove beresp.http.Etag;
            remove beresp.http.Last-Modified;
        }

        sub vcl_deliver {
            set resp.http.expires = "Thu, 31 Dec 2037 23:55:55 GMT";
        }

    I have done some performance tuning:

        DAEMON_OPTS="${DAEMON_OPTS} -p thread_pool_min=200 -p thread_pool_max=4000 -p thread_pool_add_delay=2 -p session_linger=100"

    The main URL being missed is /health.html; is the forward to the backend configured correctly? With health checking disabled, the hit ratio increases to 0.45. Now mostly /crossdomain.xml is missed (from many domains, as it is a wildcard). How can I avoid that? Should I carry over other headers like User-Agent or Accept-Encoding? I think the default hashing mechanism uses URL + host/IP. Compression is done at the backend. What else can improve performance?
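    Since the default hash is indeed host plus URL, one hedged idea (sketched in Varnish 2.x VCL syntax, untested) is to hash /crossdomain.xml on the URL alone, so a single cached object serves every domain:

        sub vcl_hash {
            if (req.url == "/crossdomain.xml") {
                # ignore the Host header for this one object
                set req.hash += req.url;
                return (hash);
            }
            # fall through to the built-in host + URL hashing for everything else
        }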

    Read the article

  • Faster caching method

    - by pataroulis
    I have a service that provides HTML code which at some point is no longer updated. The code is always generated dynamically from a database with 10 million entries, so each HTML page render searches for, say, 60 or 70 of those entries and then renders the page. For those expired pages, I want to use a caching system which is VERY simple (just insert a record with the rendered HTML and, if needed, remove it). I tried doing it file-based, but checking for the existence of a file and then passing it through PHP to render it seems like too much for what I want to do. I was thinking of doing it in MySQL with a table of MEDIUMBLOBs (each page is around 100k); it would hold about 150,000 such records (for now, at least). My question is: would it be faster to let MySQL do the lookup of the page and pass it to PHP, or is the file-based approach faster? The lookup code for the file-based version looks like this:

        $page = @file_get_contents(getCacheFilename($pageId));
        if ($page != NULL) {
            echo $page;
        } else {
            renderAndCachePage($pageId);
        }

    which does a single lookup whether it finds the file or not. The MySQL table would just have an ID (the page id) and the blob entry. The disk is a simple SATA RAID 1, and the MySQL daemon can grab up to 2.5 GB of memory (I have a proxy running too, eating the rest of the machine's 16 GB). In general the disk is already quite busy. The reason I'm not using PEAR Cache is that I think (please feel free to correct me on this) it adds overhead I do not need, because the page rendering code is called about 2M times per day and I wouldn't want to go through all that code each time (and yes, I have eAccelerator to cache the code too). Any pointers on which direction I should go would be greatly welcome. Thanks!
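    For comparison, the MySQL-backed lookup would be roughly this (a sketch; the $pdo connection and the page_cache table name are assumptions, not part of the original setup):

        // hypothetical MySQL variant of the same single lookup
        $stmt = $pdo->prepare('SELECT html FROM page_cache WHERE id = ?');
        $stmt->execute(array($pageId));
        $page = $stmt->fetchColumn();
        if ($page !== false) {
            echo $page;
        } else {
            renderAndCachePage($pageId);
        }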

    Read the article

  • Meet the New Windows Azure

    - by Leniel Macaferi
    Today we are releasing a major set of improvements to Windows Azure. Below is a quick summary of just a few of them:

    New Admin Portal and Command-Line Tools. Today's release comes with a new Windows Azure portal that lets you manage all of the resources and services offered on Windows Azure in a seamlessly integrated way. The portal is very fast and fluid, supports filtering and sorting (which makes it very easy to use for large deployments), works on all browsers, and offers a lot of great new features - including built-in support for VMs (virtual machines), Web Sites, Storage, and monitoring of cloud-hosted Services. The new portal is built on top of a REST-based management API within Windows Azure - and everything you can do through the portal can also be done programmatically by calling this Web API. Today we are also releasing command-line tools (which, like the portal, call the REST management APIs) to make it even easier to script and automate your administration tasks. We are offering a downloadable set of tools for PowerShell (Windows) and Bash (Mac and Linux). Like our SDKs, the code for these tools is hosted on GitHub under an Apache 2 license.

    Virtual Machines (VMs). Windows Azure now supports the ability to deploy and run durable VMs in the cloud. You can easily create these VMs using a new Image Gallery built into the new Windows Azure Portal, or alternatively you can upload and run your own custom VHD images. Virtual machines are durable (meaning anything you install inside them persists across reboots) and you can run any operating system on them. Our built-in image gallery includes Windows Server images (including the new Windows Server 2012 RC) as well as Linux images (including Ubuntu, CentOS, and SUSE distributions). Once you create a VM instance, you can easily use Terminal Server or SSH to access it and configure and customize the machine however you want (and optionally capture a snapshot of the current image to use when creating new VM instances). This gives you the flexibility to run pretty much any workload within the Windows Azure platform. The new Windows Azure Portal provides a rich set of features for managing Virtual Machines - including the ability to monitor and control resource utilization within them. Our new VM support also makes it easy to attach multiple disks to a VM (which you can then mount and format as drives). Optionally, you can enable geo-replication support for these disks - which causes Windows Azure to continuously replicate your storage to a secondary data center (creating a backup) located at least 640 kilometers away from your primary data center.

    We use the same VHD format that is supported with Windows virtualization today (which we have released as an open specification), enabling you to easily migrate workloads you have already virtualized into Windows Azure. We also make it easy to download VHDs from Windows Azure, which gives you the flexibility to easily migrate cloud-based VM workloads to an on-premises environment. All you need to do is download the VHD file and boot it up locally - no import/export steps are required.

    Web Sites. Windows Azure now supports the ability to quickly and easily deploy ASP.NET, Node.js and PHP web sites to a highly scalable cloud environment that lets you start small (and for free) and then scale your application as your traffic grows. You can create a new web site in Azure and have it ready to deploy in under 10 seconds. The new Windows Azure Portal provides built-in administration support for Web Sites - including the ability to monitor and track resource utilization in real time. You can deploy to a web site in seconds using FTP, Git, TFS and Web Deploy. We are also releasing updates to the Visual Studio and WebMatrix tools that let developers easily deploy ASP.NET applications to this new offering. The VS and WebMatrix publishing support includes the ability to deploy SQL databases as part of the site deployment - as well as the ability to incrementally update the database schema with a later deployment. You can integrate web application publishing with source control by selecting the "Set up TFS publishing" or "Set up Git publishing" links on a web site's dashboard. Doing so enables integration with our new online TFS service (which supports a full TFS workflow - including elastic build and test support), or you can create a Git repository and reference it as a remote to perform automatic deployments. Once you deploy using TFS or Git, the deployments tab keeps track of the deployments you make and lets you select an older (or newer) deployment, so you can quickly roll your site back to an earlier state of your code. This provides a very powerful workflow experience. Windows Azure now lets you deploy up to 10 web sites into a free, shared multi-tenant hosting environment (where a site you deploy is one of multiple sites running on a shared set of server resources). This gives you an easy way to get started on projects at no cost.

    You can optionally upgrade your sites to run in a "reserved mode" that isolates them, so that you are the only customer within a virtual machine. And you can elastically scale the amount of resources your sites use - allowing you, for example, to increase your reserved instance capacity as your traffic grows. Windows Azure automatically handles load balancing traffic across VM instances, and you get the same super-fast deployment options (FTP, Git, TFS and Web Deploy) regardless of how many reserved instances you use. With Windows Azure you pay for compute capacity by the hour - which lets you scale your resources up and down to meet only what you need.

    Cloud Services and Distributed Caching. Windows Azure also supports the ability to build cloud services that support rich multi-tier architectures and automated application management, and that can scale to extremely large deployments. Previously we referred to this capability as "hosted services" - with this week's release we are renaming it to "cloud services". We are also enabling a bunch of new features with them.

    Distributed Cache. One of the really cool new features being enabled with cloud services is a new distributed cache capability that lets you set up and use a low-latency, in-memory distributed cache within your applications. This cache is isolated for use only by your applications, and it has no throttling limits. It can dynamically and elastically grow and shrink (without you having to redeploy your application or make code changes), and it supports the full richness of the AppFabric Cache Server API (including regions, high availability, notifications, local cache and more). In addition to the AppFabric Cache Server API, this new cache capability now also supports the Memcached protocol - which lets you point code written against Memcached at the distributed cache (with no code changes required). The new distributed cache can be set up to run in one of two ways:

    1) Using a co-located cache approach. With this option you allocate a percentage of memory from your existing web and worker roles to be used by the cache, and the cache then joins that memory into one large distributed cache. Any data put into the cache by one role instance can be accessed by other role instances in your application - regardless of whether the cached data is stored on that role or another one. The big benefit of the co-located cache option is that it is free (you don't have to pay anything to enable it), and it lets you use what might otherwise be unused memory within your application's VMs.

    2) Alternatively, you can add "cache worker roles" to your cloud service that are used solely for caching. These will also be joined into one large distributed cache ring that other roles within your application can access.

    You can use these roles to cache tens or hundreds of GBs of data in memory extremely effectively - and the cache can be elastically grown or shrunk at runtime within your application.

    New SDKs and Tooling Support. We have updated all of the Windows Azure SDKs with today's release to include new features and capabilities. Our SDKs are now available in several languages, and all of their source code is published under an Apache 2 license and maintained in GitHub repositories. The .NET SDK for Azure in particular has a lot of great improvements with today's release, and now includes tooling support for both VS 2010 and VS 2012 RC. We are also now shipping SDK downloads for Windows, Mac and Linux in the languages offered on all of those systems - enabling developers to build Windows Azure applications using any operating system during development.

    Much, Much More. The summary above is just a short list of some of the improvements shipping in either preview or final form today - there is a lot more in today's release. Among them: new Virtual Private Networking capabilities, a new Service Bus runtime and tooling support, the public preview of the new Azure Media Services, new data centers, significantly upgraded storage and networking hardware, SQL Reporting Services, new identity features, support for over 40 new countries and territories, and much, much more. You can learn more about Windows Azure and sign up to try it for free at http://windowsazure.com. You can also watch a live presentation I will be giving today, June 7th, at 1pm PDT (5pm Brasília time), where I will walk through all of the new features. We will be opening up the new features mentioned above for public use a few hours after the presentation ends. We are really excited to see the great applications you will build with them. Hope this helps, - Scott

    (Post translated from the original by Leniel Macaferi.)

    Read the article

  • How do I stop Gimp from autolaunching on startup?

    - by Joshua Fox
    Gimp launches every time I log into Xubuntu (v13.10). Gimp is not shown under Settings Manager > Session and Startup, and it does not appear in ~/.config/autostart. I immediately close Gimp in these cases, so it is not running when I shut down the session. How do I stop Gimp from autolaunching on startup? Diagnostic info: note that

        cd /
        find . -name gimp.desktop

    only produces ./usr/share/applications/gimp.desktop and nothing else. Here is the output of grep -lIr 'gimp' ~/ (saved to ~/Gimp-search-results.txt):

        sbin/vgimportclone
        home/joshua/.gimp-2.8/controllerrc
        home/joshua/.gimp-2.8/tags.xml
        home/joshua/.gimp-2.8/dockrc
        home/joshua/.gimp-2.8/gimprc
        home/joshua/.gimp-2.8/themerc
        home/joshua/.gimp-2.8/templaterc
        home/joshua/.gimp-2.8/gtkrc
        home/joshua/.gimp-2.8/sessionrc
        home/joshua/.gimp-2.8/toolrc
        home/joshua/.gimp-2.8/pluginrc
        home/joshua/.gimp-2.8/menurc
        home/joshua/Gimp-search-results.txt
        home/joshua/.local/share/ristretto/mime.db
        home/joshua/.wine/drive_c/windows/system32/gecko/1.4/wine_gecko/dictionaries/en-US.dic
        home/joshua/.wine/drive_c/windows/syswow64/gecko/1.4/wine_gecko/dictionaries/en-US.dic
        home/joshua/.cache/software-center/piston-helper/rec.ubuntu.com,api,1.0,recommend_app,skype,,0495938f41334883bd3a67d3b164c1d1
        home/joshua/.cache/software-center/piston-helper/rec.ubuntu.com,api,1.0,recommend_app,gnome-utils,,91bba9b826fb21dbfc3aad6d3bd771cb
        home/joshua/.cache/software-center/piston-helper/rec.ubuntu.com,api,1.0,recommend_app,icedtea-plugin,,7bb5e4ad0469ef8277032c048b9d7328
        home/joshua/.cache/software-center/piston-helper/reviews.ubuntu.com,reviews,api,1.0,review-stats,any,any,,1c66e24123164bb80c4253965e29eed7
        home/joshua/.cache/software-center/piston-helper/rec.ubuntu.com,api,1.0,recommend_app,wine1.4,,2bac05a75dcec604ee91e58027eb4165
        home/joshua/.cache/software-center/piston-helper/software-center.ubuntu.com,api,2.0,applications,en,ubuntu,saucy,amd64,,32b432ef7e12661055c87e3ea0f3b5d5
        home/joshua/.cache/software-center/apthistory.p
        home/joshua/.cache/software-center/reviews.ubuntu.com_reviews_api_1.0_review-stats-pkgnames.p
        home/joshua/.cache/oneconf/861c4e30b916e750f16fab5652ed5937/package_list_861c4e30b916e750f16fab5652ed5937
        home/joshua/.cache/sessions/xfwm4-23e853443-fb4b-42fd-aa61-33fa99fdc12c.state
        home/joshua/.cache/sessions/xfce4-session-athena:0
        home/joshua/.config/abiword/profile

    Read the article

  • Rails /tmp/cache/assets permissions issue using Debian virtual machine hosted on OS X Lion

    - by Jim
    I am running Parallels Desktop 7 on OS X Lion. I have a VM with Debian installed, and inside that VM I set up a Rails development environment. I am using Parallels Tools to share my OS X home directory with the VM; the goal is to run the Rails server on the VM but host the files on OS X (so they are automatically backed up, and so I can use tools like TextMate to develop with). Everything seems to work with the shared directory: my Debian user can read, write, and execute files. However, when I cloned a recent Rails project from Git, I got an error message when it tried to compile the CSS assets. My symptoms are exactly the same as in this question: http://stackoverflow.com/questions/7556774/rails-sprocket-error-compiling-css-assest-chown-issue

    I believe this is permissions-based, but it is really weird. My entire Rails project directory has permissions set to 777 and my Debian user owns it. If I navigate into /tmp/cache/assets, those permissions are the same. However, the three-character directories Rails is creating (DCE, DA1, D05, etc.) are being created without write permissions! If I refresh the Rails page a few times, about 4 or 5 (with Rails creating new three-character directories every time), eventually it will create one of the directories with the proper 777 permissions and everything will work. This persists until I make a change to the CSS files and it has to recompile. Does anyone have any idea what might be going on here? I can't fathom why it is creating temp directories with incorrect permissions, or why after a few refreshes the good permissions kick in. It definitely seems to be an issue with the share, since if I move the project into a different directory on the VM, it works fine. On the OS X side, I've given the shared folder 777 permissions as well, but no dice. Any ideas?

    Update: I've found that the number of refreshes I need is not random; it has to do with how many assets are being compiled. For example, if I edit one of my CSS files and there are four CSS files in the app/assets/stylesheets directory, I have to refresh four times before the app finally works without the "operation not permitted" error.

    Read the article

  • Does Visual Studio Localhost ASP.NET debugging allow caching?

    - by Joel
    The title pretty much says it all, but here's the issue. I have a generic handler that returns JavaScript; the only code dealing with caching that I put in is the following:

        context.Response.Cache.SetCacheability(HttpCacheability.Public);
        context.Response.Cache.SetExpires(DateTime.Now.AddYears(1));
        context.Response.ContentType = "text/javascript";
        context.Response.Write("result");

    I'm debugging on localhost, out of Visual Studio 2008 (that's "localhost." so Fiddler picks it up), and Fiddler2 sees the expire date but says that the cache header is set to private, and the page isn't being cached. Can anybody see what's going wrong here?

    Read the article

  • HTTP Module Event Does Not Fire When Clicking the Browser's Back Button

    - by Ali
    I wrote an HTTP module that disables images on the page if the logged-in user is restricted:

        void application_AuthorizeRequest(object sender, EventArgs e)
        {
            // ...
            if (context.User.IsInRole("Restricted"))
            {
                context.Response.StatusCode = 401;
                context.Response.End();
            }
        }

    The code works fine: when the page loads, every image on the screen disappears. But when I go to another page and then click the back button in the browser to return to the previous page, the images appear. What should I do? (I don't want to disable caching on every request with)

        context.Response.Cache.SetNoStore();
        context.Response.Cache.SetCacheability(HttpCacheability.NoCache);

    Read the article

  • Cloned Cached item has memory issues

    - by ioWint
    Hi there, we have values stored in the cache of the Enterprise Library Caching Block. The accessors of the cached item, which is a List, modify the values, and we didn't want the cached items to be affected. So first we returned a new List constructed from an IEnumerator over the cached item. This made sure that accessors adding and removing items had little effect on the original cached item. But we found that all the instances of the List we returned to accessors were still ALIVE: the object reference graph showed a relationship between each returned list and the EnterpriseLibrary CacheItem. So we changed the return value to a newly cloned List, using a LINQ query like

        (from item in Data select new DataClass(item)).ToList()

    Even when you do that, the object reference graph shows a relationship between this list and the CacheItem. Can't we do anything to create a CLONE of the list item present in the Enterprise Library cache that DOESN'T have ANY relationship with the CACHE?

    Read the article

  • URL with question mark considered as a new HTTP request?

    - by Navin Leon
    I am optimizing my web page by implementing caching, so if I want the browser not to take data from its cache, I append a dynamic number as a query value, e.g. google.com?val=823746. But sometimes, when I do want to bring data from the cache for the URL below, the browser makes a new HTTP request to the server; it does not take the data from the cache. Is that because of the question mark in the URL? E.g. http://google.com? Please provide some reference document link. Thanks in advance. Regards, Navin

    Read the article

  • Using IvyDE with different workspaces on different branches

    - by James Woods
    I am having problems using IvyDE when I have different workspaces for different branches. I have "Resolve dependencies in workspace" switched on, but every time I change to a different workspace I have to remember to manually clean the caches out. This is because IvyDE always uses the default cache for resolving dependencies within a workspace, so when switching between workspaces the cache can be polluted by different versions. It seems impossible to work with two different workspaces at the same time. I cannot find a way to configure the location that IvyDE uses to cache the project dependencies; it does not appear to use the caches defined in ivysettings.xml.
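    For reference, the ivysettings.xml cache configuration in question looks roughly like this (a sketch; the per-branch path is made up, and per the report above IvyDE may ignore it anyway):

        <ivysettings>
            <!-- point this branch's builds at their own resolution cache -->
            <caches defaultCacheDir="${user.home}/.ivy2/cache-branch-a" />
        </ivysettings>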

    Read the article

  • How to use a Zend_Cache identifier?

    - by ArneRie
    Hi folks, I think I'm going crazy. I'm trying to implement Zend_Cache to cache my database queries. I know how it works and how to configure it, but I can't find a good way to set the identifier for the cache entries. I have a method which searches for records in my database (based on an array of search values):

        /**
         * Find Record(s)
         * Returns one record, or an array of objects
         *
         * @param array   $search Search columns => value
         * @param integer $limit  Limit results
         * @return array One record, or an array of objects
         */
        public function find(array $search, $limit = null)
        {
            $identifier = 'NoIdea';
            if (!($data = $this->_cache->load($identifier))) {
                // fetch
                // save to cache with $identifier...
            }
        }

    But what kind of identifier can I use in this situation?
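    One hedged approach is to derive the identifier from the search parameters themselves; Zend_Cache IDs may only contain letters, digits and underscores, which an md5 hex digest satisfies (a sketch, not the only option):

        // inside find(): a deterministic ID per distinct search
        $identifier = 'find_' . md5(serialize($search) . '|' . (int) $limit);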

    Read the article

  • String loops in Python

    - by Steve Hunter
    I have two pools of strings and I would like to loop over both. For example, if I want to put two labeled apples on one plate I'll write:

        basket1 = ['apple#1', 'apple#2', 'apple#3', 'apple#4']
        for fruit1 in basket1:
            basket2 = ['apple#1', 'apple#2', 'apple#3', 'apple#4']
            for fruit2 in basket2:
                if fruit1 == fruit2:
                    print 'Oops!'
                else:
                    print "New Plate = %s and %s" % (fruit1, fruit2)

    However, I don't want order to matter; for example, I am considering apple#1-apple#2 equivalent to apple#2-apple#1. What's the easiest way to code this? I'm thinking about making a counter in the second loop to track the second basket, so the second loop doesn't start from point zero every time.
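    For what it's worth, the standard library already provides exactly this: itertools.combinations yields each unordered pair once (a minimal sketch, keeping the question's Python 2 print syntax):

        import itertools

        basket = ['apple#1', 'apple#2', 'apple#3', 'apple#4']
        # each unordered pair appears exactly once, and never as (x, x)
        for fruit1, fruit2 in itertools.combinations(basket, 2):
            print "New Plate = %s and %s" % (fruit1, fruit2)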

    Read the article

  • Is it possible for PHP to generate a fresh page on every JavaScript history.go(-1)?

    - by Ho
    Hello, I have a PHP page (a.php) which is already sending these headers:

        <?php
        header('Cache-Control: no-cache, no-store, max-age=0, must-revalidate');
        header('Expires: Mon, 26 Jul 1997 05:00:00 GMT');
        header('Pragma: no-cache');
        ?>

    The PHP page (a.php) has a link to another page (b.html), and b.html has JavaScript code to go back:

        <script type="text/javascript">
        history.go(-1);
        </script>

    It seems to me that when the browser "goes back" to a.php, the content isn't fresh at all. Could you please advise whether generating a completely fresh page on history.go(-1) is possible? Thank you.

    Read the article

  • How to handle EntityExistsException properly?

    - by Ivan Yatskevich
    I have two entities: Question and FavoritesCounter. A FavoritesCounter should be created when the question is added to favorites for the first time. Consider a use case where two users try to add a question to favorites simultaneously: this will cause an EntityExistsException when entityManager.persist(counter) is called for the second user. But the code below doesn't work, because when the EntityExistsException is thrown, the container marks the transaction as rollback-only, and the attempt to return getFavoritesCounter(question) fails with javax.resource.ResourceException: Transaction is not active.

        @Stateless
        public class FavoritesServiceBean implements FavoritesService {
            ...
            public void addToFavorites(Question question) {
                FavoritesCounter counter = getCounter(question);
                if (counter == null) {
                    counter = createCounter(question);
                }
                // increase counter
            }

            private FavoritesCounter createCounter(Question question) {
                try {
                    FavoritesCounter counter = new FavoritesCounter();
                    counter.setQuestion(question);
                    entityManager.persist(counter);
                    entityManager.flush();
                    return counter;
                } catch (EntityExistsException e) {
                    return getFavoritesCounter(question);
                }
            }

            private FavoritesCounter getFavoritesCounter(Question question) {
                Query counterQuery = entityManager.createQuery("SELECT counter FROM FavoritesCounter counter WHERE counter.question = :question");
                counterQuery.setParameter("question", question);
                List<FavoritesCounter> result = counterQuery.getResultList();
                if (result.isEmpty()) return null;
                return result.get(0);
            }
        }

    Question:

        @Entity
        public class Question implements Serializable {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            private Long id;
            // getter and setter for id
        }

    FavoritesCounter:

        @Entity
        public class FavoritesCounter implements Serializable {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            private Long id;

            @OneToOne
            @Column(unique = true)
            private Question question;
            // getter and setter
        }

    What is the best way to handle such a situation, i.e. return the already created entity after an EntityExistsException?
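    One commonly suggested pattern (a sketch, assuming an EJB container and a unique constraint on the question column) is to run the insert in its own transaction, so a failed persist cannot mark the caller's transaction rollback-only. REQUIRES_NEW only takes effect on calls through the container proxy, hence the separate bean:

        import javax.ejb.Stateless;
        import javax.ejb.TransactionAttribute;
        import javax.ejb.TransactionAttributeType;
        import javax.persistence.EntityManager;
        import javax.persistence.PersistenceContext;

        @Stateless
        public class CounterCreatorBean {

            @PersistenceContext
            private EntityManager entityManager;

            // runs in a fresh transaction; if another user won the race, only
            // this transaction rolls back, and the caller can simply re-query
            @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
            public FavoritesCounter createCounter(Question question) {
                FavoritesCounter counter = new FavoritesCounter();
                counter.setQuestion(question);
                entityManager.persist(counter);
                entityManager.flush(); // surface the constraint violation here
                return counter;
            }
        }

    addToFavorites would then call this bean inside a try/catch and fall back to getFavoritesCounter(question) when the insert fails.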

    Read the article

  • Is it possible in Perl to require that a subroutine call is made?

    - by MitchelWB
    I don't know enough about Perl to even know exactly what I'm asking for, but I'm writing a series of subroutines to be available to many individual scripts that all process different incoming flat files. The process is far from perfect, but it's what I've got to deal with, and I'm trying to build myself a small library of subs that make it easier to manage it all. Each script handles a different incoming flat file with its own formatting, sorting, grouping, and output requirements. One common aspect is that we have small text files that house counters, which are used to name the output files so that we have no duplicate file names. Because the processing of the data is different for each file, I need to open the counter file to get my counter value; since this is a common operation, I'd like to put it in a sub. I then need to write file-specific code to process the data, and I would like a second sub that lets me update the counter once I've processed the data. Is there a way to make the second sub call a requirement if the first one is called? Ideally it would even be an error that prevents the script from running at all, much like a syntax error. EDIT: Here is a little [ugly and simplified] pseudo-code to give a better feel for the current process:

        require "importLibrary.plx";

        # open data source file
        open DataIn, $filename;

        # call getCounterInfo from importLibrary.plx to get the counter
        # value from the counter file
        $counter = &getCounterInfo($counterFileName);

        while (<DataIn>) {
            # process data based on unique formatting and requirements
            # output to task files based on requirements; name files using $counter
            # increment $counter
        }

        # update counter file with new value of $counter
        &updateCounterInfo($counter);
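    Perl can't enforce this at compile time, but one hedged runtime approach is to have getCounterInfo return a guard object whose destructor complains if the update was never made (a sketch; all names are illustrative):

        package CounterGuard;
        use strict;
        use warnings;

        sub new {
            my ($class, $file, $value) = @_;
            return bless { file => $file, value => $value, updated => 0 }, $class;
        }

        sub value { return $_[0]{value} }

        sub update {
            my ($self, $new_value) = @_;
            # ... write $new_value back to $self->{file} here ...
            $self->{updated} = 1;
        }

        sub DESTROY {
            my $self = shift;
            warn "counter file $self->{file} was read but never updated!\n"
                unless $self->{updated};
        }

        1;

    If getCounterInfo returns a CounterGuard instead of a bare number, any script that forgets to call update() gets a loud warning (or a die, if preferred) as soon as the guard goes out of scope.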

    Read the article

  • iPhone and HTML5 Cache Manifest

    - by Jamey McElveen
    I am trying to build an iPhone web application using ASP.NET. The page is dynamically rendered once for each visitor; at that point the page can be bookmarked, and it will never change again for that visitor. For this reason it should be cached locally from then on, so the application will run from the bookmark even if no network connection is available. No matter what I try, the phone continues to request the page from the server, forcing a re-render, or it fails if the phone is offline. Louis Gerbarg suggested in this post that I use the HTML5 Cache Manifest to get this working; however, following the w3.org docs does not appear to work on the iPhone. Does anyone have a good example where the application cache is working?
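    For reference, the moving parts of the application cache are small; a minimal sketch (file names assumed, and the manifest must be served with the text/cache-manifest MIME type):

        <!-- index.html points at the manifest -->
        <html manifest="app.manifest">

        # app.manifest
        CACHE MANIFEST
        # v1 -- change this comment to force clients to re-download
        CACHE:
        index.html
        styles.css
        app.js

    One classic gotcha is that the page is only re-cached when the manifest file's bytes change, hence the version comment.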

    Read the article
