Search Results

Search found 57672 results on 2307 pages for 'caching application block'.

  • What's the best way to monitor a large number of application pools in IIS7?

    - by Kev
    Some background first: we're running IIS 7 on Windows 2008, with around 250 websites per server and each site in its own application pool. I need a way to monitor each application pool for crashes and hangs, and to send an email alert if an application pool is unresponsive for more than, say, 2 minutes. I thought about mapping a virtual directory into each site with an ASP.NET page that we could poll via our existing monitoring system (HostMonitor). Does anyone else have experience in this area?
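
    One way to approach the polling idea above is an external watchdog script. Below is a minimal sketch in Python, assuming each site exposes a hypothetical health-check page; the URLs, interval and alert hook are all illustrative, not part of any existing setup:

        import time
        import urllib.request

        # Hypothetical health-check URLs, one per site/application pool.
        SITES = [
            "http://site001.example.com/health.aspx",
            "http://site002.example.com/health.aspx",
        ]

        CHECK_INTERVAL = 30   # seconds between polling passes
        ALERT_AFTER = 120     # alert once a pool has been unresponsive this long

        down_since = {}       # url -> timestamp of the first failed check

        def is_responsive(url, timeout=10):
            """True if the health page answers with HTTP 200 within the timeout."""
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    return resp.status == 200
            except Exception:
                return False

        def alert(url, downtime):
            # Placeholder: hook this into your mail system or HostMonitor.
            print(f"ALERT: {url} unresponsive for {downtime:.0f}s")

        while True:
            now = time.time()
            for url in SITES:
                if is_responsive(url):
                    down_since.pop(url, None)
                else:
                    first = down_since.setdefault(url, now)
                    if now - first >= ALERT_AFTER:
                        alert(url, now - first)
            time.sleep(CHECK_INTERVAL)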

    Read the article

  • CQRS applicability when some commands need to block the UI

    - by regularfry
    I am working on an app which I would dearly love to transition from a fairly traditional layered architecture to CQRS, for a number of reasons, not least of which is that a robust event log would make a couple of feature requests I can see barrelling towards me trivial to accommodate. Now I have a conceptual problem: of the roughly 40 commands the user can initiate, there are three which the user needs to be sure have successfully completed before the UI lets them do anything else. Everything else fits the "submit a request, query for success later" model, except these three commands. How is this handled in CQRS-land? Do I separate the three blocking commands into effectively a third service, so I have Commands, Queries, and BlockingCommands? Do I have a two-stage event processor with an in-request blocking first stage which is only used for the blocking commands? Does the existence of these three commands mean the whole idea of applying CQRS is invalid? Should I just pretend they aren't blocking and poll for success in the UI? I'm sure this must come up on other projects; how is it usually handled?
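
    For illustration only, here is one way the "separate blocking commands" option could be shaped, sketched in Python; the command names and 30-second timeout are invented, and a real CQRS stack would signal completion from its event handlers rather than from an in-process worker thread:

        import queue
        import threading
        import uuid

        # The three commands the UI must wait for; everything else is fire-and-forget.
        BLOCKING_COMMANDS = {"PlaceOrder", "ConfirmPayment", "CloseAccount"}

        class CommandBus:
            def __init__(self):
                self._queue = queue.Queue()
                self._done = {}   # command id -> Event set when the command is handled
                threading.Thread(target=self._worker, daemon=True).start()

            def send(self, name, payload):
                cmd_id = uuid.uuid4().hex
                done = threading.Event()
                self._done[cmd_id] = done
                self._queue.put((cmd_id, name, payload))
                if name in BLOCKING_COMMANDS:
                    # The UI waits until the handler reports completion (or times out).
                    return {"id": cmd_id, "completed": done.wait(timeout=30)}
                # Non-blocking: return immediately; the client queries a read model later.
                return {"id": cmd_id, "completed": None}

            def _worker(self):
                while True:
                    cmd_id, name, payload = self._queue.get()
                    handle(name, payload)                # domain logic / event store append
                    self._done.pop(cmd_id).set()

        def handle(name, payload):
            pass  # stand-in for the real command handler

        bus = CommandBus()
        print(bus.send("PlaceOrder", {"sku": "A1"}))              # blocks until handled
        print(bus.send("UpdatePreferences", {"theme": "dark"}))   # returns immediately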

    Read the article

  • new block adding error

    - by ata ur rehman
        g++: error: ./gr_my_swig.cc: No such file or directory
        g++: fatal error: no input files
        compilation terminated.
        make[3]: *** [_gr_my_swig_la-gr_my_swig.lo] Error 1
        make[3]: Leaving directory `/home/ataurrehman/gr-my-basic/swig'
        make[2]: *** [all] Error 2
        make[2]: Leaving directory `/home/ataurrehman/gr-my-basic/swig'
        make[1]: *** [all-recursive] Error 1
        make[1]: Leaving directory `/home/ataurrehman/gr-my-basic'
        make: *** [all] Error 2

    Read the article

  • Can I block social share buttons with Privoxy?

    - by gojira
    A great many pages have these "Like", "Tweet", "G+1", "share" rows of buttons all over the place, and in each post in threads. Can I block this unwanted content with Privoxy? I am already using Privoxy and it blocks a lot of unwanted content, but these "social" buttons are still everywhere. I want to remove these buttons completely, specifically by using Privoxy. I know it is possible to block them with AdBlock LITE and other software, but my question is specific to Privoxy (the reason being that I want one point that blocks all unwanted content, and it needs to work on devices/software that do not have AdBlock LITE, which is why I use Privoxy). Software used: Privoxy 3.0.21 under Windows XP.

    Read the article

  • What reasons are there to reduce the max-age of a logo to just 8 days? [closed]

    - by callum
    Most websites set max-age=31536000 (1 year) in the Cache-Control headers of static assets such as logo images; examples include YouTube, Yahoo, Twitter and the BBC. But there is a notable exception: Google's logo has max-age=691200 (8 days). I've checked the headers on the Google logo in the past, and it definitely used to be 1 year. (Also, it used to be part of a sprite and is now a standalone logo image, but that's probably another question...) What valid technical reasons could there be for reducing its cache lifetime to just 8 days? Google's homepage is one of the most carefully optimised pages in the world, so I imagine there's a good reason. Edit: please make sure you understand these points before answering:
    1. Nobody uses short max-age lifetimes to allow modifying a static asset in future. When you modify it, you just serve it at a different URL. So no, it's nothing to do with Google doodles. Think about it: even if Google didn't understand this basic trick of HTTP, 8 days still wouldn't be appropriate, as only those users who don't have the original logo cached would see the doodle on doodle-day, and that group of users would then go on seeing the doodle for the following 8 days after Google changed it back :)
    2. Web servers do not worry about "filling up" the caches of clients (or proxies). The client manages this by itself: when it hits its own storage limit, it starts dropping the lowest-priority items to make space for new ones. The priority score is based on the question "How likely am I to benefit from having cached this URL?", which has nothing to do with the max-age value the server sent when the URL was originally requested; it's a heuristic based on the "frecency" of requests for that URL. The max-age simply lets the server set a cut-off point: the time at which the client is supposed to discard the item regardless of how often it's being re-used. It would be very nice and trusting of a downstream client/proxy to rely on all origin servers "holding back" from filling up their caches, but I don't think we live in that world ;)
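
    For what it's worth, the "serve it at a different URL" technique mentioned in the edit usually amounts to putting a content hash in the asset's filename and then caching it for a year. A hedged sketch of that pattern, illustrative only and not a description of Google's actual setup:

        import hashlib

        def versioned_url(path, content):
            """Embed a content hash in the filename so the asset can be cached for a year;
            changing the logo changes the URL, so no short max-age is needed."""
            digest = hashlib.sha1(content).hexdigest()[:10]
            name, _, ext = path.rpartition(".")
            return f"/static/{name}.{digest}.{ext}"

        def cache_headers(max_age=31536000):
            # One year: the URL itself is the cache-buster.
            return {"Cache-Control": f"public, max-age={max_age}"}

        logo_bytes = b"\x89PNG...stand-in image bytes..."
        print(versioned_url("logo.png", logo_bytes), cache_headers())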

    Read the article

  • Can I use GPL software in a commercial application?

    - by Petah
    I have three questions about the GPL here:
    1. If I use GPL software in my application, but don't modify or distribute it, do I have to release my application under the GPL?
    2. What if I modify some software that my application uses? Do I then have to release my application under the GPL, or can I just supply the modified software under the GPL's terms?
    3. What if I use GPL software but don't modify it? Can I distribute it with my application?
    My case in point: I have a PHP framework in which I use the GeSHi library to highlight some output. Because GeSHi is GPL, does my framework have to be GPL? Can I modify GeSHi for particular use cases of my application if I supply the modifications back to the GeSHi maintainers? Can I redistribute my framework with GeSHi?

    Read the article

  • Cloud Application Foundation kit

    - by JuergenKress
    Cloud Application Foundation is the next-generation application infrastructure. It delivers the most complete, best-of-breed platform for developing cloud applications and includes the following products: WebLogic Server, Coherence, WebTier, GlassFish, Oracle Public Cloud, and iAS. A whole kit is available here: Cloud Application Foundation: Technical Positioning; Oracle Cloud Strategy with Cloud Application Foundation; Cloud Application Foundation CVC Presentation; WebLogic Suite Technical with Business Presentation. For all whitepapers, please visit the WebLogic Community Workspace (WebLogic Community membership required). For regular information, become a member of the WebLogic Partner Community: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • Why is facebook cache buggy?

    - by IAdapter
    I just started using Facebook, and I see that many times when I add something to my profile and visit it later, it's not there. I bet the reason is that the page is cached and not updated very often. Is this on purpose, or is it a bug? P.S. For example, I added the music I like and later saw that I had not added it, but the next day when I visited again it was there. I saw this in two web browsers, so it's a Facebook bug. Does it have something to do with scalability?

    Read the article

  • returning a heap block by reference in c++

    - by basicR
    I was trying to brush up my C++ skills. I have two functions: concat_HeapVal() returns the output heap variable by value, and concat_HeapRef() returns the output heap variable by reference. When main() runs it will be on the stack, and s1 and s2 will be on the stack; I pass the values by reference only, and in each of the functions below I create a variable on the heap and concatenate onto it. When concat_HeapVal() is called it returns the correct output. When concat_HeapRef() is called it returns some memory address (wrong output). Why? I use the new operator in both functions, so the allocation is on the heap. So when I return by reference, the heap object will still be valid even when my main() stack memory goes out of scope, and it's left to the OS to clean up the memory. Right?

        string& concat_HeapRef(const string& s1, const string& s2)
        {
            string *temp = new string();
            temp->append(s1);
            temp->append(s2);
            return *temp;
        }

        string* concat_HeapVal(const string& s1, const string& s2)
        {
            string *temp = new string();
            temp->append(s1);
            temp->append(s2);
            return temp;
        }

        int main()
        {
            string s1, s2;
            string heapOPRef;
            string *heapOPVal;
            cout << "String Concat Experimentations\n";
            cout << "Enter s-1 : ";
            cin >> s1;
            cout << "Enter s-2 : ";
            cin >> s2;
            heapOPRef = concat_HeapRef(s1, s2);
            heapOPVal = concat_HeapVal(s1, s2);
            cout << heapOPRef << " " << heapOPVal << " " << endl;
            return -9;
        }

    Read the article

  • How can I best implement 'cache until further notice' with memcache in multiple tiers?

    - by ajreal
    the term "client" used here is not referring to client's browser, but client server Before cache workflow 1. client make a HTTP request --> 2. server process --> 3. store parsed results into memcache for next use (cache indefinitely) --> 4. return results to client --> 5. client get the result, store into client's local memcache with TTL After cache workflow 1. another client make a HTTP request --> 2. memcache found return memcache results to client --> 3. client get the result, store into client's local memcache with TTL TTL = time to live Is possible for me to know when the data was updated, and to expire relevant memcache(s) accordingly. However, the pitfalls on client site cache TTL Any data update before the TTL is not pick-up by client memcache. In reverse manner, where there is no update, client memcache still expire after the TTL First request (or concurrent requests) after cache TTL will get throttle as it need to repeat the "Before cache workflow" In the event where client require several HTTP requests on a single web page, it could be very bad in performance. Ideal solution should be client to cache indefinitely until further notice. Here are the three proposals about futher notice Proposal 1 : Make use on HTTP header (current implementation) 1. client sent HTTP request last modified time header 2. server check if last data modified time=last cache time return status 304 3. client based on header to decide further processing GOOD? ---- - save some parsing for client - lesser data transfer BAD? ---- - fire a HTTP request is still slow - server end still need to process lots of requests Proposal 2 : Consistently issue a HTTP request to check all data group last modified time 1. client fire a HTTP request 2. server to return last modified time for all data group 3. client compare local last cache time with the result 4. if data group last cache time < server last modified time then request again for that data group only GOOD? ---- - only fetch what is no up-to-date - less requests for server BAD? ---- - every web page require a HTTP request Proposal 3 : Tell client when new data is available (Push) 1. when server end notice there is a change on a data group 2. notify clients on the changes 3. help clients to fetch again data 4. then reset client local memcache after data is parsed GOOD? ---- - let the cache act/behave like a true cache BAD? ---- - encourage race condition My preference is on proposal 3, and something like Gearman could be ideal Where there is a change, Gearman server to sent the task to multiple clients (workers). Am I crazy? (I know my first question is a bit crazy)

    Read the article

  • New Endpoint options that enable additional application patterns

    - by kaleidoscope
    Two communication-related capabilities, (a) inter-role communication and (b) external endpoints on worker roles, enable new application patterns in Windows Azure-hosted services. Inter-role communication: a common application pattern this enables is client-server, where the server could be an application such as a database or a memory cache. External endpoints on worker roles: a common application type this enables is a self-hosted, Internet-exposed service, such as a custom application server. For further details, see the following link: http://blogs.msdn.com/windowsazure/archive/2009/11/24/new-endpoint-options-enable-additional-application-patterns.aspx   Tinu, O

    Read the article

  • Drupal 7: Documents as a node/block/field

    - by WernerCD
    I'm working on my first Drupal site. I've progressed through the basics, though I still have a lot to learn. Using FileViewer I can load a PDF saved in a field and view content of various types. I haven't found something that does the same for Word docs, Excel files, PDFs, etc. Does anyone know of something that works in Drupal 7 to load documents other than PDFs inside the browser, the way FileViewer does? Or the way Scribd does? (Scribd is hosted, and I am behind a firewall with limited access for users, so I don't want to use a Scribd-like service.)

    Read the article

  • Block an IP for a long time

    - by Tiziano Dan
    This question is about iptables. I want to know how I can block these IPs for 1 hour, not just for a short time, because they make too many SQL requests. I'm using the rules below to block them, but it's not enough, because there are still around 100k IPs attacking and sending too many requests to the SQL server.

        iptables -N SYN-LIMIT
        iptables -A SYN-LIMIT -m hashlimit --hashlimit 8/second --hashlimit-mode srcip --hashlimit-name SYN-LIMIT -j RETURN
        iptables -A SYN-LIMIT -j DROP
        iptables -I INPUT -p tcp --dport 80 --syn -j SYN-LIMIT
        iptables -I INPUT -p tcp --dport 80 -m connlimit --connlimit-above 6 -j REJECT --reject-with tcp-reset

    How can I do the same but block the IPs for a long time? (Not manually!)

    Read the article

  • Privoxy rule to block Facebook spying

    - by bignose
    Recently, my server's Privoxy rules to block Facebook's spying have failed. How can I block current Facebook spying links? Since soon after the inception of Facebook's so-called "Open Graph" cross-site tracking widgets (those "Like" bugs on numerous websites), I have blocked them by using this rule (in user.action) on our site's Privoxy server:

        { +block-as-image{People-tracking button.} }
        .facebook.com/(plugins|widgets)/(like|fan).*

    That worked fine; the spying bugs no longer appeared on any web page. Today I noticed that they were all making it past that filter (edit: no, they weren't). SOLUTION: the proxy was being silently ignored, though this was not obvious in the client. The rule above continues to work fine.

    Read the article

  • Block (or only allow certain) incoming IP addresses on Verizon FIOS Actiontec Router

    - by jmlumpkin
    I opened a few ports to the outside of my home network so I can get into a few of my machines from outside. When checking some logs, I noticed that I was being scanned on some ports from various other countries. I have already moved my port forwards to non-standard ports. I would like to be able to block specific IPs (or even subnets) on my Verizon FIOS router. There is a little documentation online, but I can't find anything specific on how to do this. To start, I just want to block a specific IP, but if it is not too hard, I would also like to know how to block a range of IPs. And, for the inverse: is there a way to allow only certain IPs or ranges?

    Read the article

  • Are HTTP requests cached? [closed]

    - by nischayn22
    Many HTTP requests are sent repeatedly by browsers on almost every page load, such as the request for the jQuery .js file. Since these are already used on so many sites, don't modern browsers keep a cache for them? I am thinking of a system where the browser has a cached copy of a .js file that is used very frequently. On a new request for the .js file, it asks the server for a hash of the file (provided the server can reply to that) and compares the returned hash with the cached copy's hash... the rest is intuitive.
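
    What is described here is roughly what conditional requests (ETag / If-None-Match) already provide. A hedged server-side sketch of the idea, not any particular browser's or server's implementation:

        import hashlib

        def etag_for(body):
            return '"' + hashlib.sha256(body).hexdigest()[:16] + '"'

        def serve_js(request_headers, body):
            """Conditional GET: the browser sends the validator it has cached and the
            server answers 304 if the file is unchanged, instead of resending it."""
            tag = etag_for(body)
            if request_headers.get("If-None-Match") == tag:
                return 304, {"ETag": tag}, b""
            return 200, {"ETag": tag, "Cache-Control": "public, max-age=31536000"}, body

        js = b"/* stand-in for a jQuery-sized payload */"
        status, headers, _ = serve_js({}, js)                               # first request: full body
        print(status, serve_js({"If-None-Match": headers["ETag"]}, js)[0])  # revalidation: 304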

    Read the article

  • block access to certain website types

    - by frustrated teacher
    I need to block access to certain website types without listing each URL to block. Students at a secondary school are going to porn sites, and I need to block all such access without having to list each possible site URL. Setting the Content -- Ratings tab to None for all categories in the ratings files listed on my computers does not prevent access. Unchecking "users may access sites with no rating", even with the security settings set to High, still allows the porn sites to come up. If that is checked, then ONLY listed sites can open, and students would not be able to do any research via Google, for example. I would rather not have to keep checking each computer and blocking sites as they find them.

    Read the article

  • Distributed cache and improvement

    - by philipl
    I had this question in an interview. A web service function is given x, and map is a static HashMap (a created singleton):

        if (!map.containsKey(x)) {
            // perform some function to retrieve result y
            map.put(x, y);
        }
        return y;

    The interviewer asked general questions, such as what is wrong with this distributed cache implementation, and then asked how to improve on it, given that distributed servers will end up with different cached key pairs in the map. There are simple mistakes to point out about synchronization and the key object, but what really startled me was that this guy thinks that moving to a database implementation solves the problem of different servers having different map contents, i.e. the situation where value x is cached on server B but not on server A, so redundant data has to be retrieved on server A. Does his thinking make any sense? (As I understand it, this is the basic drawback of a distributed cache compared with the database model; he seems not to understand it at all.) What is the typical solution for the cache growth issue (weak references?) and the sync issue (not knowing which server already has the key cached; use load balancing)? Thanks
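
    On the synchronization point, a minimal Python sketch of a thread-safe get-or-compute wrapper; this only fixes the race inside a single server and deliberately leaves the "each server caches its own copy" issue untouched:

        import threading

        class LocalCache:
            """Per-server memoising cache; every app server still holds its own copy,
            which is exactly the cross-server inconsistency discussed above."""
            def __init__(self, compute):
                self._compute = compute
                self._map = {}
                self._lock = threading.Lock()

            def get(self, x):
                with self._lock:   # avoids the unsynchronised check-then-put race
                    if x not in self._map:
                        self._map[x] = self._compute(x)
                    return self._map[x]

        expensive = LocalCache(lambda x: x * x)  # stand-in for "perform some function to retrieve y"
        print(expensive.get(12), expensive.get(12))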

    Read the article

  • Is the UX affected negatively by fully cacheable pages?

    - by ChocoDeveloper
    I want to have fully cacheable pages on my websites, but one cannot do that if the pages contain user-specific data, like the user bar or parts of the UI that change depending on the permissions the user has. So I was wondering whether it is possible to pull everything user-specific via ajax and update the UI accordingly. But I'm worried that this might be annoying for the user, and also that it might be difficult to develop. What do you think? Is there a pattern or something I can follow to deal with this?
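
    One common shape for this is to cache the shared page shell aggressively and serve the user-specific bits from a separate, uncached endpoint that the page requests via ajax. A hedged sketch with invented handler names and headers:

        import json

        def page_handler(request):
            # The HTML shell is identical for every user, so it can be cached aggressively.
            headers = {"Cache-Control": "public, max-age=3600"}
            body = "<html><body><div id='userbar'></div>...shared page content...</body></html>"
            return headers, body

        def current_user_handler(request):
            # Per-user data is served separately, marked uncacheable, and fetched via ajax.
            headers = {"Cache-Control": "private, no-store"}
            body = json.dumps({"name": request["user"], "permissions": ["edit", "delete"]})
            return headers, body

        print(current_user_handler({"user": "alice"}))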

    Read the article

  • Exchange 2007 automatically adding IP to block list

    - by Tim Anderson
    This puzzled me. We have all mail directed to an ISP's spam filter, then delivered to SBS 2008 Exchange. One of the ISP's IP addresses suddenly appeared in the Exchange 2007 block list, set to expire in 24 hours I think, so emails started bouncing. A quick look through the typically ponderous docs turns up nothing that says Exchange will auto-block an IP address, but nobody is admitting to adding it manually, so I think it must have done. Does anyone know about this, or where it is configured? Obviously one could disable block lists completely, but I'd like to know exactly why this happened.

    Read the article

  • Random Cache Expiry

    - by mahemoff
    I've been experimenting with random cache expiry times to avoid situations where a single request forces multiple things to update at once. For example, a web page might include five different components. If each is set to time out in 30 minutes, the user will have a long wait every 30 minutes. So instead, you set them all to a random time between 15 and 45 minutes, making it likely that at most one component will reload for any given page load. I'm trying to find any research or guidelines on this topic, e.g. optimal variance parameters. I do recall seeing one article about how Google (?) uses this technique, but I can't locate it, and there doesn't seem to be much written about the topic.
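
    A minimal sketch of the jitter itself, assuming a base lifetime of 30 minutes and +/-50% variance as in the example above:

        import random

        BASE_TTL = 30 * 60   # nominal 30-minute lifetime, in seconds
        JITTER = 0.5         # +/-50%, i.e. anywhere between 15 and 45 minutes

        def jittered_ttl(base=BASE_TTL, jitter=JITTER):
            """Spread expiry times so the five components of a page rarely expire together."""
            return base * random.uniform(1 - jitter, 1 + jitter)

        # e.g. cache.set(key, value, ttl=jittered_ttl())
        print([round(jittered_ttl() / 60, 1) for _ in range(5)])   # sample lifetimes in minutes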

    Read the article

  • melonJS: Entity and solid block on collision layer

    - by Arthur Halma
    I have a player entity with a 64x64 sprite animation and an 18x60 hitbox, and the map is made of 16x16 tiles. When my player moves in certain ways he can pass through blocks (but not all of them). For example, there are four situations: two good (the player can't pass a tile with the isSolid property on the collision layer) and two bad (the player passes a tile with the isSolid property on the collision layer). It looks like melonJS checks only the corners of the hitbox instead of the whole rectangle. Can anyone help me with this situation?
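
    For illustration (in Python, not melonJS), here is the difference between testing only the hitbox corners and testing every tile the hitbox overlaps; the tile size matches the question and the solid() callback is a stand-in for the collision layer lookup:

        TILE_SIZE = 16

        def solid_tiles_overlapping(hitbox, solid):
            """Yield every solid (tx, ty) tile index covered by the hitbox rectangle,
            not just the tiles under its four corners."""
            x, y, w, h = hitbox
            x0, y0 = int(x // TILE_SIZE), int(y // TILE_SIZE)
            x1, y1 = int((x + w - 1) // TILE_SIZE), int((y + h - 1) // TILE_SIZE)
            for ty in range(y0, y1 + 1):
                for tx in range(x0, x1 + 1):
                    if solid(tx, ty):
                        yield tx, ty

        # An 18x60 hitbox spans several 16x16 tiles; a corner-only test misses the
        # tiles in the middle rows and columns, while a full range test does not.
        solid = lambda tx, ty: (tx, ty) == (1, 2)   # one solid tile, for illustration
        print(list(solid_tiles_overlapping((10, 20, 18, 60), solid)))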

    Read the article

  • Application/Server dependency mapping

    - by David Stratton
    I'm just curious as to whether such a tool exists (free, open source, or commercial but for a reasonable price) before I build it myself. We're looking for a simple solution to simplify taking web apps online and offline when a server is undergoing maintenance. The idea is that we can mark a server as unavailable, and then mark all dependent apps (direct and indirect) as offline. Our first proof of concept is running: we created an aspx page that lists, in a GridView, the various applications that have an App_Offline.html file with a friendly "Down for Maintenance" message. In the GridView, each app has a LinkButton that, when clicked, renames App_Offline.htm to App_Offline.html or vice versa to take the app online or offline. The next step is to set up all of our dependencies. For example, our store locator depends on our web services, which in turn depend on our SQL Server. (That's a simple example; we can easily have several layers, or one app dependent on multiple servers, etc.) In this example, if the SQL Server goes down, we would need to drill down recursively to find all apps that depend on it, and then turn them off and on by renaming the App_Offline file appropriately. I realize this will be relatively simple to build, but it could be complex to manage. I'm sure we're not the first team to think of this concept, and I'm wondering if there are any open source tools, or if any of you have done something similar and can help us avoid pitfalls. Edit/update: I found the category of software I'm looking for; it's called a CMDB (Configuration Management Database), and it's generally more of a network admin tool than a developer tool. I found some open source products in this category, but none written in .NET. I had considered moving this question to ServerFault.com when I realized I was looking for a network admin tool, but since I'm looking for code and a modifiable solution I'll keep the question here.
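
    A minimal sketch of the recursive "who depends on this server" lookup described above, with an invented dependency map and a print standing in for the App_Offline rename:

        # Illustrative dependency graph: app/server -> what it depends on directly.
        DEPENDS_ON = {
            "store-locator": ["web-services"],
            "web-services": ["sql-server"],
            "reporting": ["sql-server"],
        }

        def dependents_of(target, graph=DEPENDS_ON):
            """Everything that directly or indirectly depends on `target`."""
            found = set()

            def visit(node):
                for app, deps in graph.items():
                    if node in deps and app not in found:
                        found.add(app)
                        visit(app)

            visit(target)
            return found

        def take_offline(server):
            for app in dependents_of(server):
                # Placeholder for renaming App_Offline.htm to App_Offline.html in that app.
                print(f"taking {app} offline")

        take_offline("sql-server")   # marks web-services, store-locator and reporting offline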

    Read the article
