Search Results

Search found 7913 results on 317 pages for 'resource caching'.


  • iis7 .net webservice 404 error

    - by agilenoob
    I have a webservice /test/Service1.asmx in the same folder as a page /test/test.aspx. The page works fine, but I get the message below for the service in the same location. I know the file is there and the URL is correct, and I have added the script module and managed handler as well. If anyone knows what I'm missing here, I'd appreciate it.

        Server Error in '/' Application.
        The resource cannot be found.
        Description: HTTP 404. The resource you are looking for (or one of its dependencies) could have been removed, had its name changed, or is temporarily unavailable. Please review the following URL and make sure that it is spelled correctly.
        Requested URL: /test/Service1.asmx
        Version Information: Microsoft .NET Framework Version: 2.0.50727.4200; ASP.NET Version: 2.0.50727.4016

    Failed request log:

        ModuleName           ManagedPipelineHandler
        Notification         128
        HttpStatus           404
        HttpReason           Not Found
        HttpSubStatus        0
        ErrorCode            0
        ConfigExceptionInfo
        Notification         EXECUTE_REQUEST_HANDLER
        ErrorCode            The operation completed successfully. (0x0)
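
    One thing worth checking (a sketch of a possible cause, not a confirmed fix): in IIS 7 integrated mode, *.asmx requests go through a handler mapping like the one below, normally inherited from applicationHost.config. The names and version strings shown follow the default .NET 2.0 registration and may differ on your machine.

        <system.webServer>
          <handlers>
            <!-- default ASMX handler registration for the integrated pipeline -->
            <add name="WebServiceHandlerFactory-Integrated"
                 path="*.asmx" verb="GET,HEAD,POST,DEBUG"
                 type="System.Web.Services.Protocols.WebServiceHandlerFactory, System.Web.Services, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
                 preCondition="integratedMode" />
          </handlers>
        </system.webServer>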

    Read the article

  • php cache zend framework

    - by msaif
    The server side is PHP + Zend Framework. Problem: I have a large amount of data, approximately 5000 records with 5 columns, in an input.txt file. I would like to read all of the data into memory only once and send some of it with every browser request. But if I update input.txt, the updated data must be automatically synchronized to that memory location, so I need to solve this with a memory caching technique. A cache normally has an expiry time, but if input.txt is updated before the cache expires, I still need the cache to be refreshed automatically. I am using Zend Framework 1.10. Is this possible in Zend Framework? Can anybody give me a few lines of Zend Framework code? I have no option to use a (distributed) memcached server, only Zend Framework.
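
    A minimal sketch of one way to do this with Zend_Cache in ZF 1.x: the File frontend invalidates its entries automatically whenever the configured master_file changes, so the cache tracks input.txt without relying on a lifetime. The paths, the cache ID and the parsing step are assumptions:

        // the 'File' frontend watches master_file and invalidates when its mtime changes
        $frontendOptions = array(
            'master_file'             => '/path/to/input.txt',
            'lifetime'                => null,   // no time-based expiry; the file decides
            'automatic_serialization' => true,
        );
        $backendOptions = array('cache_dir' => '/tmp/zend_cache/');

        $cache = Zend_Cache::factory('File', 'File', $frontendOptions, $backendOptions);

        if (($records = $cache->load('input_records')) === false) {
            // cache miss (or input.txt changed): re-read and re-parse the file
            $records = array_map('str_getcsv', file('/path/to/input.txt'));
            $cache->save($records, 'input_records');
        }
        // $records is rebuilt only when input.txt changes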

    Read the article

  • Impossible to stop raid device

    - by Traroth
    I'm trying to stop a RAID device in order to replace it with a new one, as this one is not working properly. I'm typing mdadm --stop /dev/md1, and I'm getting an error message: mdadm: fail to stop array /dev/md1: Device or resource busy. I get this message even after rebooting the server, and I can't see a process that could cause it. The server is running Debian with a 2.6.18-4-amd64 kernel. Could you help? Edit: More details about what my colleague tried. After unmounting sda1, the command mdadm --remove /dev/md0 /dev/sda1 worked. But we still get an error message after mdadm --remove /dev/md1 /dev/sda5: mdadm: hot remove failed for /dev/sda5: Device or resource busy. I still don't completely understand how the different partitions are mounted, so I suppose there is something I don't understand about the current situation...
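
    A few diagnostic commands that are often suggested before stopping an array (a sketch; the device names are the ones from the question). An md device frequently stays busy because it is still mounted, used as swap, or claimed by LVM or another device:

        cat /proc/mdstat          # state of all md arrays and their member disks
        grep md1 /proc/mounts     # is /dev/md1 (or something on it) still mounted?
        cat /proc/swaps           # used as swap? if so: swapoff /dev/md1
        lsof /dev/md1             # processes holding the device open
        fuser -vm /dev/md1        # who is using a filesystem mounted on it
        pvs                       # is it an LVM physical volume? (lvm2 tools)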

    Read the article

  • How can Nagios handle non-threshold based plugins?

    - by FliesLikeABrick
    I am writing a Nagios plugin to monitor trends in the utilization of a certain storage resource (e.g. gradual increases are fine, but an instantaneous/sudden increase or decrease in resource usage may indicate a problem). For what it's worth, it reviews the last N entries in an RRD file generated by a custom Cacti data source/template. What is the "right" way to handle the Nagios notification config/implementation for this? The problem is that the plugin would exit as warning/critical for one polling period, but in the next it would be fine (or three polling periods later, if I look at three polling periods' worth of data). I guess the question is: should I just write it in such a way that it will alert for X polling periods, or should I find a way to write it such that manual intervention is required for it to clear (such as logging into the monitoring server or hitting a URL to run a script that submits a passive result)? Your input is appreciated, and if you have any tips for how to implement the latter I'm open to them (I can think of a few ways to possibly implement it).
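
    For the passive-result option, a minimal sketch of submitting a result through the Nagios external command file (the command-file path and the host/service names are assumptions):

        #!/bin/sh
        # Format: [timestamp] PROCESS_SERVICE_CHECK_RESULT;<host>;<service>;<return_code>;<plugin output>
        CMDFILE=/usr/local/nagios/var/rw/nagios.cmd
        NOW=$(date +%s)
        printf "[%s] PROCESS_SERVICE_CHECK_RESULT;storage01;Storage Trend;2;usage jumped unexpectedly\n" "$NOW" >> "$CMDFILE"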

    Read the article

  • PHP/HTML Strange refresh problem

    - by user270797
    After I upload my PHP files to my web host, I view the page by its URL. So what I usually do as I work on the page is make a change, upload it, then refresh the browser to view my changes. What I find is that sometimes I refresh and it shows me a previous version of the page; I hit refresh five times and it shows me five different versions of the page, some with old changes and some with new ones, which makes it really difficult to know which is the latest version. I don't think this is a local caching problem; I've disabled caching in all browsers, and the problem occurs in IE, FF and Chrome. I have a feeling it might be the web server. I believe the hosting company uses the Zeus web server product.

    Read the article

  • Puppet: Conditional file source based on naming convention

    - by thinice
    I'm getting the ball rolling on Puppet for my environment, and I'd like to have a conditional file resource based on whether or not the module itself contains a file following a naming convention. So visually, assume a module named 'mysql' and its layout:

        mysql/
          files/
            etc/
              my.cnf
              my.hostname1.cnf
              my.hostname2.cnf
          manifests/
            init.pp
            ...

    I'd like the block to verify whether the host-specific file exists in the module or not, and take action accordingly; in pseudo-terms:

        file { '/etc/my.cnf':
          if -f 'puppet:///mysql/etc/my.$hostname.cnf' {
            source => 'puppet:///mysql/etc/my.$hostname.cnf'
          } else {
            source => 'puppet:///mysql/etc/my.cnf'
          }
        }

    This way one wouldn't have to manage a CSV file or a .pp file with a host-specific case statement - is this possible?
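
    Puppet's file type accepts an array of sources and uses the first one that exists on the master, which handles exactly this convention without any conditional logic. A minimal sketch (URLs follow the question's style; depending on the Puppet version the path may need a 'modules/' prefix, e.g. puppet:///modules/mysql/etc/...):

        file { '/etc/my.cnf':
          ensure => file,
          # the first source found wins; falls back to the generic my.cnf
          source => [
            "puppet:///mysql/etc/my.${hostname}.cnf",
            "puppet:///mysql/etc/my.cnf"
          ],
        }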

    Read the article

  • Combining Session and Cache

    - by Zyphrax
    To make my extranet web application even faster/more scalable, I'm thinking of using some caching mechanisms. For some of the pages we'll use HTML caching; please ignore that technique for this question. E.g. at some point in time 2500 managers will simultaneously log in to our application (most of them with the same Account/Project). I'm thinking of storing an Account cache key and a Project cache key in the user's Session and using them to get the items from the Cache. I could have simply stored the Account in the Session, but that would result in 2500 copies of the same Account in memory. Is there a better solution to this, or does it make sense :)?
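
    A minimal C# sketch of that idea (the key format, the Account type and the LoadAccountFromDb call are assumptions): the Session holds only the small cache key, while a single shared Account instance lives in the application-wide Cache.

        // key derived from the account id, shared by every user of that account
        string accountKey = "Account:" + accountId;
        Session["AccountKey"] = accountKey;

        // all 2500 sessions referencing the same account share this one cache entry
        var account = (Account)HttpContext.Current.Cache[accountKey];
        if (account == null)
        {
            account = LoadAccountFromDb(accountId);      // hypothetical data-access call
            HttpContext.Current.Cache.Insert(
                accountKey, account, null,
                DateTime.UtcNow.AddMinutes(30),          // absolute expiration
                System.Web.Caching.Cache.NoSlidingExpiration);
        }

    If many requests can miss at the same moment, putting a lock around the load avoids thousands of simultaneous database hits for the same account.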

    Read the article

  • Understanding ESXi and Memory Usage

    - by John
    Hi, I am currently testing VMware ESXi on a test machine. My host machine has 4 GB of RAM. I have three guests and each is assigned a memory limit of 1 GB (with only 512 MB reserved). The host summary screen shows a memory capacity of 4082.55 MB and a usage of 2828 MB with two guests running. This seems to make sense: a gigabyte for each running VM plus an overhead for the host. 800 MB seems high, but that is still reasonable. But on the Resource Allocation screen I see a memory capacity of 2356 MB and an available capacity of 596 MB. Under the configuration tab, memory link, I see a physical total of 4082.5 MB, System of 531.5 MB and VM of 3551.0 MB. I have only allocated a gigabyte to each of my VMs, yet with two VMs running they appear to be taking up almost twice the amount of RAM allocated. Why is this, and why does the Resource Allocation screen short-change me so much?

    Read the article

  • Hyper-V CPU Utilization, Good Tools?

    - by yzorg
    I just learned a ton from this post: Host CPU% doesn't include child VM CPU%. Specifically, I learned that both the 'host OS' and the 'child VMs' are siblings within the hypervisor layer. Are there good utilities for watching the total CPU and other resource counters at the hypervisor (hardware) layer? I know perfmon (watching the special Hyper-V CPU counters) is the standard answer, but I've stayed away from perfmon for ad-hoc monitoring. Are there good OSS or free tools to watch resource utilization as I create multiple new VMs running on the server? I'm a developer, so if there aren't any good UI tools to surface this data I'd consider creating one, but only if needed. P.S. My specific scenario is that I'm creating new web, SQL and back-end server VMs for a new Windows 8 Server and SQL 2012 deployment (the entire application stack). I need to monitor their utilization and know when I need to grow beyond one host (I'll need to split the VMs into separate hosts as I hit the hardware limits of the first host, and diagnose problems).

    Read the article

  • Setup IPv6 over IPv4 tunnel in VPN

    - by bfmeb
    Let me explain my scenario: I have a Linux server, A. A is reachable in a VPN, so if I am connected to the VPN over the Internet I can successfully ping A. Server A is connected to a router, B. Router B has a local IPv6 address, and there are resources (each of them with a local IPv6 address) connected to Router B. Once I am connected to the VPN, I can use SSH to get access to A. From there I can use the ping6 command to ping Router B or one of its connected resources, and this works fine. The ping fails if I try to ping Router B from my computer directly. Overview: My Computer -- VPN -- Server A (IPv4) -- Router B (IPv6) -- Resource A (IPv6). On Resource A runs, for example, an HTTP server. My question is: how can I access Resource A (for example over HTTP) from my computer connected to the VPN? Is it possible? Should I set up a tunnel device? Sorry for this inexpert explanation, but I am new to networking!
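
    Short of a real IPv6 tunnel, one lightweight sketch for reaching a single service: forward a local port over the existing SSH connection to A, since A can already reach the IPv6 side (the address, user name and ports are assumptions):

        # forward local port 8080 to port 80 on Resource A's IPv6 address, via server A
        ssh -L 8080:[fd00:1234::10]:80 user@server-a
        # then, on the local machine, browse to http://localhost:8080/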

    Read the article

  • Has Object in VB 2010 received the same optimization as dynamic in C# 4.0?

    - by Abel
    Some people have argued that the C# 4.0 feature introduced with the dynamic keyword is the same as the "everything is an Object" feature of VB. However, any call on a dynamic variable will be translated into a delegate once, and from then on the delegate will be called. In VB, when using Object, no caching is applied, and each call on an untyped method involves a whole lot of under-the-hood reflection, sometimes totaling a whopping 400-fold performance penalty. Have the dynamic type's delegate optimization and caching also been added to VB's untyped method calls, or is VB's untyped Object still as slow?

    Read the article

  • Configuring IIS site to use HTTPS

    - by James
    I am working on a REST API which I have currently deployed on a Windows XP Professional SP2 development machine running IIS 5.1. The site is currently hosted on port 81 and accessed via HTTP. I would now like to configure the site to stop using HTTP and use HTTPS only. I have generated a self-signed certificate using the SelfSSL.exe tool from the IIS 6.0 Resource Kit Tools and set the Common Name to the IP of my server (as it's a local development machine, it has no domain name). I have also already configured the site to use SSL, using the "How To Set Up an HTTPS Service in IIS" tutorial as my guide. However, whenever I try to access a resource in the API via HTTPS I get a 404. Any ideas?

    Read the article

  • how to get apache mod_cache to work with mod_wsgi (django)?

    - by harmv
    I thought I'd speed up my Django projects by letting Apache do some caching for me. Unfortunately I see that Apache never caches my dynamic pages. Does mod_cache have problems with code served via mod_wsgi? My Apache config:

        <VirtualHost *:80>
            ServerName myserver.com

            CacheEnable mem /    # for testing only
            CacheIgnoreQueryString On
            CacheIgnoreCacheControl On

            WSGIDaemonProcess aname processes=1 threads=25
            WSGIProcessGroup aname

            Alias /media/ /home/harm/projects/test/media/
            WSGIScriptAlias / /home/harm/projects/test/wsgi.py
        </VirtualHost>

    The response does have the correct caching headers:

        Content-Length    2647
        Content-Encoding  gzip
        Vary              Accept-Encoding
        Cache-Control     public, max-age=3600
        Keep-Alive        timeout=15, max=100
        Connection        Keep-Alive
        Content-Type      application/x-javascript

    Am I missing something?
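
    One thing the configuration above does not show is whether the caching modules are loaded at all: on Apache 2.2, the "mem" cache type only works if both mod_cache and mod_mem_cache are loaded. A sketch of the relevant server-level directives (module paths and sizes are assumptions; adjust for your distribution):

        # mod_cache and mod_mem_cache must both be loaded for "CacheEnable mem" to work
        LoadModule cache_module modules/mod_cache.so
        LoadModule mem_cache_module modules/mod_mem_cache.so

        <IfModule mod_mem_cache.c>
            MCacheSize 65536              # total in-memory cache size, in KB
            MCacheMaxObjectCount 1000     # maximum number of cached objects
            MCacheMinObjectSize 1         # bytes
            MCacheMaxObjectSize 524288    # bytes; larger responses are not cached
        </IfModule>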

    Read the article

  • Akka framework support for finding duplicate messages

    - by scala_is_awesome
    I'm trying to build a high-performance distributed system with Akka and Scala. If a message requesting an expensive (and side-effect-free) computation arrives, and the exact same computation has already been requested before, I want to avoid computing the result again. If the previously requested computation has already completed and the result is available, I can cache it and re-use it. However, the time window in which a duplicate computation can be requested may be arbitrarily small: for all practical purposes, I could get a thousand or a million messages requesting the same expensive computation at the same instant. There is a commercial product called Gigaspaces that supposedly handles this situation, but there seems to be no framework support for dealing with duplicate work requests in Akka at the moment. Given that the Akka framework already has access to all the messages being routed through it, it seems that a framework solution could make a lot of sense here. Here is what I am proposing for the Akka framework to do:

    1. Create a trait to indicate a type of message (say, "ExpensiveComputation" or something similar) that is to be subject to the following caching approach.

    2. Smartly (hashing etc.) identify identical messages received by (the same or different) actors within a user-configurable time window. Other options: select a maximum buffer size of memory to be used for this purpose, subject to (say, LRU) replacement, etc. Akka could also choose to cache only the results of messages that were expensive to process; messages that took very little time to process can simply be re-processed if needed, so there is no need to waste precious buffer space caching them and their results.

    3. When identical messages (received within that time window, possibly "at the same time instant") are identified, avoid unnecessary duplicate computations. The framework would do this automatically; essentially, the duplicate messages would never be received by a new actor for processing. They would silently vanish, and the result from processing the message once (whether that computation was already done in the past or is ongoing right then) would be sent to all appropriate recipients, immediately if already available and upon completion of the computation if not.

    Note that messages should be considered identical even if the "reply" fields differ, as long as the semantics/computations they represent are identical in every other respect. Also note that the computation should be purely functional, i.e. free from side effects, for the suggested caching optimization to work without changing the program semantics at all. If what I am suggesting is not compatible with the Akka way of doing things, and/or if you see some strong reasons why this is a very bad idea, please let me know. Thanks, Is Awesome, Scala
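
    There is no such feature in the Akka framework itself, but the de-duplication can be sketched at the application level with an ordinary front-end actor. A minimal Scala sketch (classic actor API; Compute, Result, the worker actor and the unbounded, non-expiring caches are all simplifying assumptions):

        import akka.actor.{Actor, ActorRef}
        import scala.collection.mutable

        case class Compute(payload: String)
        case class Result(payload: String, value: String)

        // Routes every request through one point so identical payloads are computed only once.
        class DedupFrontEnd(worker: ActorRef) extends Actor {
          private val done   = mutable.Map.empty[String, String]          // finished results
          private val inWork = mutable.Map.empty[String, List[ActorRef]]  // requesters of in-flight work

          def receive = {
            case Compute(p) if done.contains(p) =>
              sender() ! Result(p, done(p))              // cached: answer immediately
            case Compute(p) if inWork.contains(p) =>
              inWork(p) = sender() :: inWork(p)          // duplicate while in flight: remember the sender
            case Compute(p) =>
              inWork(p) = List(sender())                 // first request: start the expensive work once
              worker ! Compute(p)
            case Result(p, v) =>                         // the worker replies here
              done(p) = v
              inWork.remove(p).getOrElse(Nil).foreach(_ ! Result(p, v))
          }
        }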

    Read the article

  • How can I change the link format in will_paginate for page_cache in Ruby on Rails?

    - by jaehyun
    I want to use page_cache with will_paginate. There is good information on these pages: http://railsenvy.com/2007/2/28/rails-caching-tutorial#pagination and http://railslab.newrelic.com/2009/02/05/episode-5-advanced-page-caching. My routes.rb looks like:

        map.connect '/products/page/:page', :controller => 'products', :action => 'index'

    But the links generated by the will_paginate helper are not changed to '/products/page/:page'; they are still '/products?page=2'. How can I change the URL format used by will_paginate?
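
    One commonly suggested arrangement is sketched below (Rails 2.x routing): because will_paginate builds its links through url_for, and url_for uses the first route that matches, the page route has to appear above the default catch-all routes. Whether this alone is enough depends on the rest of routes.rb.

        # config/routes.rb (inside the Routes.draw block)
        map.connect '/products/page/:page',
                    :controller   => 'products',
                    :action       => 'index',
                    :requirements => { :page => /\d+/ }

        # default routes stay below the more specific page route
        map.connect ':controller/:action/:id'
        map.connect ':controller/:action/:id.:format'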

    Read the article

  • Intermittent HTTP 401 errors

    - by forthrin
    I am using an intranet solution which requires basic HTTP login. However, there is an intermittent error which requires me to log in again, and then the server says "Forbidden" whether I give the correct login information or not. To add insult to injury, Safari (and Chrome) seems to show the login dialog for every included resource in the HTML, and it's impossible to cancel this modal dialog sequence, so the whole browser is blocked until I've pressed Esc some 30-odd times. After an hour, I may gain access again, without having really done anything. My questions: What could cause intermittent 401 errors? Why do the browsers show the login dialog 30 times per page load (presumably for every included resource in the HTML from the same domain)?

    Read the article

  • How to specify HTTP expiration header? (ASP.NET MVC+IIS)

    - by Marek
    I am already using output caching in my ASP.NET MVC application. Page Speed tells me to specify an HTTP cache expiration for CSS and images in the response header. I know that the Response object contains some properties that control cache expiration, and that these properties can be used to control HTTP caching for responses that I serve from my own code: Response.Expires, Response.ExpiresAbsolute and Response.CacheControl, or alternatively Response.AddHeader("Expires", "Thu, 01 Dec 1994 16:00:00 GMT"). The question is: how do I set the Expires header for resources that are served automatically, e.g. images, CSS and such?
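
    Assuming the images and CSS are served directly by IIS 7 or later as static files, a minimal web.config sketch that adds cache headers to everything handled by the static file handler (the 30-day max-age is an arbitrary example):

        <configuration>
          <system.webServer>
            <staticContent>
              <!-- sends Cache-Control: max-age=... for static files; use
                   cacheControlMode="UseExpires" with httpExpires for a fixed Expires date -->
              <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="30.00:00:00" />
            </staticContent>
          </system.webServer>
        </configuration>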

    Read the article

  • When to use Hibernate?

    - by Ramo
    Hi all, I was asked this question in an interview, and I answered with the following:

    - Better performance:
      - Efficient queries.
      - 1st and 2nd level caching.
      - Good caching gives better scalability.
    - Good database portability:
      - Changing the DB is as easy as changing the dialect configuration.
    - Increased developer productivity:
      - Think only in object terms, not in query-language terms.

    But I also feel that systems fall into one of the categories below, and Hibernate may not be suited to all of these cases. I'm interested in your thoughts about this; do you agree with me? Please let me know when you would use Hibernate in each of the following cases, and why:

    - Write-only systems
    - Read-only systems
    - Write-mostly systems
    - Read-mostly systems

    Regards, Ramo

    Read the article

  • database row/record pointers

    - by David
    Hi, I don't know the correct words for what I'm trying to find out about, and as such I'm having a hard time googling it. I want to know whether it's possible with databases (technology-independent, but I would be interested to hear whether it's possible with Oracle, MySQL and Postgres) to point to specific rows instead of executing my query again. So I might initially execute a query, find some rows of interest, and then wish to avoid searching for them again by keeping a list of pointers or some other metadata which indicates their location in the database, so I can go straight to them the next time I want those results. I realise there is caching in databases, but I want to keep these "pointers" elsewhere, so caching doesn't ultimately solve this problem. Is this just an index, and should I store the index and look rows up by it? Most of my current tables don't have indexes, and I don't want the speed decrease that sometimes comes with indexes. So what's the magic term I've been trying to put into Google? Cheers
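
    The usual terms are "row identifier" or "physical row locator": ROWID in Oracle and the ctid system column in PostgreSQL (MySQL/InnoDB has no user-visible equivalent; the primary key plays that role there). A short sketch using a made-up orders table; note that these locators can change when rows move (after updates, table reorganisation or VACUUM FULL), so they are risky as long-lived external pointers:

        -- Oracle: save the physical locator alongside the result
        SELECT t.ROWID AS row_locator, t.* FROM orders t WHERE status = 'OPEN';
        -- later, jump straight to the row (the locator value here is illustrative)
        SELECT * FROM orders WHERE ROWID = 'AAAR3sAAEAAAACXAAA';

        -- PostgreSQL: ctid is the (block, tuple) position of the row
        SELECT ctid, * FROM orders WHERE status = 'OPEN';
        SELECT * FROM orders WHERE ctid = '(0,3)';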

    Read the article

  • storing a huge number of records in the classic asp cache object is SLOW

    - by aspm
    We have some nasty legacy ASP that is performing like a dog, and I narrowed it down to the fact that we are trying to store 15K+ records in the Application cache object. But that's not the killer: before it stores the data, it converts the ADO stream to XML and then stores that. This conversion of the huge record set to XML spikes the CPU and causes all kinds of havoc for users while it's happening. And unfortunately we do this XML conversion to read the cache a lot, causing site-wide performance problems. I don't have the resources to convert everything to .NET, so that's out. I obviously still need caching, but in this case the caching is hurting instead of helping. Is there a more efficient way to store this data instead of doing this XML conversion to/from the cache every time we read/update it?
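
    A common classic-ASP alternative worth sketching: store the recordset as a plain 2D array via GetRows instead of serializing it to XML. Variant arrays are safe to keep in Application (unlike COM objects such as an open Recordset), and reading them back is just array indexing. The connection string, query and field positions below are assumptions:

        ' Fill the cache once: GetRows returns a variant 2D array indexed (column, row)
        Set conn = Server.CreateObject("ADODB.Connection")
        conn.Open "your-connection-string"
        Set rs = conn.Execute("SELECT id, name, price FROM products")
        arrProducts = rs.GetRows()
        rs.Close : conn.Close

        Application.Lock
        Application("ProductCache") = arrProducts
        Application.Unlock

        ' Reading the cache later: no XML parsing, just indexing
        arrCached = Application("ProductCache")
        firstName = arrCached(1, 0)    ' column 1 ("name") of the first row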

    Read the article

  • Learn Linux Command Line for Web Server Management [closed]

    - by Jonathan
    I've searched high and low for a good resource for learning the Linux command line. I've found a handful of separate resources, but none that can really assist with web server management. I'm currently learning through trial and error with 'man' pages, along with Google. I was just wondering if anyone had a solid resource that they used to learn and would be willing to share it with me. Thanks so much for your time, I really appreciate it! EDIT: I have a few CentOS servers at the moment, and I know the basics; I'm just trying to get to a more advanced level.

    Read the article

  • Using rspec to check creation of template

    - by Brian
    I am trying to use rspec with Puppet to check the generation of a configuration file from an .erb file. However, I get the error:

        1) customizations should generate valid logstash.conf
           Failure/Error: content = catalogue.resource('file', 'logstash.conf').send(:parameters)[:content]
           ArgumentError:
             wrong number of arguments (0 for 1)
           # ./spec/classes/logstash_spec.rb:29:in `catalogue'
           # ./spec/classes/logstash_spec.rb:29

    And the logstash_spec.rb:

        describe "customizations" do
          let(:params) { {:template => "profiles/logstash/output_broker.erb", :options => {'opt_a' => 'value_a' } } }

          it 'should generate valid logstash.conf' do
            content = catalogue.resource('file', 'logstash.conf').send(:parameters)[:content]
            content.should match('logstash')
          end
        end
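
    A sketch of the more idiomatic rspec-puppet way to make the same assertion, using the contain_file matcher instead of reaching into the catalogue directly (kept inside the existing class describe block; names follow the spec above):

        describe "customizations" do
          let(:params) do
            { :template => "profiles/logstash/output_broker.erb",
              :options  => { 'opt_a' => 'value_a' } }
          end

          it 'generates a valid logstash.conf' do
            should contain_file('logstash.conf').with_content(/logstash/)
          end
        end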

    Read the article

  • J2EE/EJB + service locator: is it safe to cache EJB Home lookup result ?

    - by Guillaume
    In a J2EE application we are using EJB 2 on WebLogic. To avoid losing time building the InitialContext and looking up the EJB home interface, I'm considering the Service Locator pattern. But after a little searching on the web I found that even if this pattern is often recommended for caching the InitialContext, there are some negative opinions about caching the EJB home. Questions: Is it safe to cache the result of an EJB home lookup? What will happen if one of my cluster nodes is no longer working? What will happen if I install a new version of the EJB without refreshing the service locator's cache?
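
    For reference, a minimal Java sketch of the pattern under discussion: a service locator that caches EJB 2.x home lookups and lets callers drop a stale entry (the JNDI handling and types are placeholders). The usual mitigation for the failover/redeploy concerns is exactly the invalidate step: when a call on the bean fails, discard the cached home and look it up again.

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import javax.naming.InitialContext;
        import javax.naming.NamingException;
        import javax.rmi.PortableRemoteObject;

        public final class ServiceLocator {
            private static final Map<String, Object> HOME_CACHE = new ConcurrentHashMap<String, Object>();

            /** Looks up (and caches) an EJB home interface. */
            public static <T> T getHome(String jndiName, Class<T> homeClass) throws NamingException {
                Object home = HOME_CACHE.get(jndiName);
                if (home == null) {
                    InitialContext ctx = new InitialContext();   // a cluster-aware provider URL can be passed here
                    home = PortableRemoteObject.narrow(ctx.lookup(jndiName), homeClass);
                    HOME_CACHE.put(jndiName, home);
                }
                return homeClass.cast(home);
            }

            /** Drop a stale home (e.g. after a node failure or a redeploy) so the next call re-looks it up. */
            public static void invalidate(String jndiName) {
                HOME_CACHE.remove(jndiName);
            }
        }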

    Read the article

  • Limit a process's relative (not absolute) processor consumption in Linux

    - by BobBanana
    What is the standard way in Linux to enforce a system policy that limits the relative CPU use of a single process? That is, on a quad-core machine, I never want a process to use more than 2 CPUs at once, even if the process creates more threads. I do not want an absolute time limit, just a relative limit, so that one task cannot dominate the machine. This is also different from renice, which allows a process to use all the resources but politely step aside if others need them too. ulimit is the usual resource-limiting tool, but it does not allow such CPU restrictions: it can limit the number of processes per user, or absolute CPU time, but it cannot restrict the maximum number of active threads of a single process. I've found a couple of user-level tools, like cpulimit, but not a system-level tool or setting. Does such a standard resource controller exist in Linux (Red Hat Enterprise, if it matters)? If such a limit were imposed, how would a user identify it?
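
    Two mechanisms commonly pointed at for this, sketched below (the CPU numbers, PID and cgroup name are examples): pinning a process to a subset of CPUs with taskset, or enforcing it through the cpuset cgroup controller (libcgroup tools, available on RHEL 6).

        # restrict an already-running process (PID 1234) to CPUs 0 and 1
        taskset -cp 0,1 1234

        # or start a command already restricted
        taskset -c 0,1 ./heavy_job

        # cgroup version: a cpuset that may only use CPUs 0-1
        cgcreate -g cpuset:/twocpus
        cgset -r cpuset.cpus=0-1 twocpus
        cgset -r cpuset.mems=0 twocpus        # cpuset also needs a memory node
        cgexec -g cpuset:/twocpus ./heavy_job

        # a user can see the restriction on a given process here
        grep Cpus_allowed_list /proc/1234/status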

    Read the article
