Search Results

Search found 17984 results on 720 pages for 'log shipping'.


  • Coordinating the logging output from a web app installed in a SharePoint farm.

    - by Kelly French
    We are deploying web parts to SharePoint 2007 and would like to include logging (log4net). The ideal solution would be to use a database appender to avoid the problems with knowing which actual server is executing the web part. This question has been helpful: http://stackoverflow.com/questions/219668/sharepoint-and-log4net. I've got log4net working in a stand-alone web app on the Visual Studio dev server, using web.config for the log4net settings and a file appender for the output. I'd like to transition to SharePoint and still use the log file output so I can make sure it's all working first, then change the config around to log to a database. Is this going to be too much trouble? How have other developers added log4net into their SharePoint solutions?
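
    For the database step, a sketch of the shape a log4net AdoNetAppender configuration usually takes (the connection string, table name and column list below are placeholders, and the parameter list is trimmed to two columns):

        <appender name="AdoNetAppender" type="log4net.Appender.AdoNetAppender">
          <bufferSize value="1" />
          <connectionType value="System.Data.SqlClient.SqlConnection, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
          <connectionString value="Server=...;Database=Logs;Integrated Security=True" />
          <commandText value="INSERT INTO Log ([Date],[Message]) VALUES (@log_date, @message)" />
          <parameter>
            <parameterName value="@log_date" />
            <dbType value="DateTime" />
            <layout type="log4net.Layout.RawTimeStampLayout" />
          </parameter>
          <parameter>
            <parameterName value="@message" />
            <dbType value="String" />
            <size value="4000" />
            <layout type="log4net.Layout.PatternLayout">
              <conversionPattern value="%message" />
            </layout>
          </parameter>
        </appender>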

  • Logging in with WebFinger and OpenID

    - by Ryan
    I would like to apologize in advance for the ugly formatting. My lack of reputation has prevented me from tagging my question with 'webfinger' and 'google-profiles'. Onto my question: I am messing around with WebFinger and trying to create a small Rails app that enables a user to log in using nothing but their WebFinger account. I can successfully finger myself, and I get back an XRD file with the following snippet:

        <Link rel="http://specs.openid.net/auth/2.0/provider" href="http://www.google.com/profiles/{redacted}"/>

    Which, to me, reads, "I have an OpenID 2.0 login at the url: http://www.google.com/profiles/{redacted}". But when I try to use that URL to log in, I get the following error:

        OpenID::DiscoveryFailure (Failed to fetch identity URL http://www.google.com/profiles/{redacted} : Error encountered in redirect from http://www.google.com/profiles/{redacted}: Error fetching /profiles/{Redacted}: Connection refused - connect(2)):

    When I replace the profile URL with 'https://www.google.com/accounts/o8/id', the login works perfectly. Here is the code that I am using (I'm using RedFinger as a plugin, and JanRain's ruby-openid, installed without the gem):

        require "openid"
        require 'openid/store/filesystem.rb'

        class SessionsController < ApplicationController
          def new
            @session = Session.new
            # render a textbox requesting a webfinger address, and a submit button
          end

          def create
            #######################
            #
            # Pay attention to this section right here
            #
            #######################

            # use given webfinger address to retrieve openid login
            finger = Redfinger.finger(params[:session][:webfinger_address])
            openid_url = finger.open_id.first.to_s
            # openid_url is now: http://www.google.com/profiles/{redacted}

            # get needed info about the acquired OpenID login
            file_store = OpenID::Store::Filesystem.new("./noncedir/")
            consumer = OpenID::Consumer.new(session, file_store)
            response = consumer.begin(openid_url)   # ERROR HAPPENS HERE

            # send user to OpenID login for verification
            redirect_to response.redirect_url('http://localhost:3000/', 'http://localhost:3000/sessions/complete')
          end

          def complete
            # interpret return parameters
            file_store = OpenID::Store::Filesystem.new("./noncedir/")
            consumer = OpenID::Consumer.new(session, file_store)
            response = consumer.complete params
            case response.status
            when OpenID::SUCCESS
              session[:openid] = response.identity_url
              # redirect somewhere here
            end
          end
        end

    Is it possible for me to use the URL I received from my WebFinger to log in with OpenID?

  • C++ Project compiles as static lib, fails (linker error) as dynamic lib. why??

    - by Roey
    Hi All. I have a VS2008 native C++ project that I wish to compile as a DLL. It only references one external library (log4cplus.lib) and uses its functions (and, naturally, log4cplus's .h files). When I try to compile my project as a static library, it succeeds. When I try as a DLL, it fails:

        1>MessageWriter.obj : error LNK2019: unresolved external symbol "public: static class log4cplus::Logger __cdecl log4cplus::Logger::getInstance(class std::basic_string<wchar_t,struct std::char_traits<wchar_t>,class std::allocator<wchar_t> > const &)" (?getInstance@Logger@log4cplus@@SA?AV12@ABV?$basic_string@_WU?$char_traits@_W@std@@V?$allocator@_W@2@@std@@@Z) referenced in function "class log4cplus::Logger __cdecl Log(void)" (?Log@@YA?AVLogger@log4cplus@@XZ)

    There are 4 more errors just like this, related to functions within log4cplus.lib. It seems like something really stupid... please help me :) Thanks!

  • Configuration Log4j: ConsoleAppender to System.err?

    - by Gerard
    I saw that one of our tools uses a ConsoleAppender to System.err next to System.out in its log4j configuration. Fragments of the configuration:

        <appender name="CONSOLE" class="org.apache.log4j.ConsoleAppender">
          <!-- Log to STDOUT. -->
          <param name="Target" value="System.out"/>
          ....

        <appender name="CONSOLE_ERR" class="org.apache.log4j.ConsoleAppender">
          <!-- Log to STDERR. -->
          <param name="Target" value="System.err"/>

    In Eclipse this results in double messages to the console, so I believe that is of no use, right? On the Linux server I see only one message to the PuTTY console, so where would that System.err message go to?
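
    For what it's worth, a sketch of how the two appenders could split output rather than duplicate it, assuming log4j 1.2's varia package is available: give each ConsoleAppender a LevelRangeFilter so INFO and below go to System.out and WARN and above go to System.err. Where System.err ends up on the server depends on how the JVM is launched; a wrapper that only redirects stdout will send stderr somewhere else or drop it.

        <appender name="CONSOLE" class="org.apache.log4j.ConsoleAppender">
          <param name="Target" value="System.out"/>
          <filter class="org.apache.log4j.varia.LevelRangeFilter">
            <param name="LevelMax" value="INFO"/>
          </filter>
          ....
        </appender>

        <appender name="CONSOLE_ERR" class="org.apache.log4j.ConsoleAppender">
          <param name="Target" value="System.err"/>
          <filter class="org.apache.log4j.varia.LevelRangeFilter">
            <param name="LevelMin" value="WARN"/>
          </filter>
          ....
        </appender>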

  • Best solution for reporting database

    - by zzyzx
    Here is the situation: there is a transaction-intensive database, used for both routine transactions and reports. I was wondering if I could isolate these two workloads into two independent databases, so reports could run off one database and all the transactions could occur in the other. This would improve performance for the OLTP SQL database. I have gone over a few options like mirroring, log shipping, replication, snapshots and clustering, but would like to discuss the best possible strategy for the desired result. Please advise on the best solution to implement this strategy, or share any other thoughts/suggestions you may have.

  • Calling UIGetScreenImage() on manually-spawned thread prints "_NSAutoreleaseNoPool():" message to log

    - by jtrim
    This is the body of the selector that is specified in NSThread +detachNewThreadSelector:(SEL)aSelector toTarget:(id)aTarget withObject:(id)anArgument:

        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        while (doIt) {
            if (doItForSure) {
                NSLog(@"checking");
                doItForSure = NO;
                (void)gettimeofday(&start, NULL);

                /* do some stuff */

                // the next line prints "_NSAutoreleaseNoPool():" message to the log
                CGImageRef screenImage = UIGetScreenImage();

                /* do some other stuff */

                (void)gettimeofday(&end, NULL);
                elapsed = ((double)(end.tv_sec) + (double)(end.tv_usec) / 1000000) - ((double)(start.tv_sec) + (double)(start.tv_usec) / 1000000);
                NSLog(@"Time elapsed: %e", elapsed);

                [pool drain];
            }
        }
        [pool release];

    Even with the autorelease pool present, I get this printed to the log when I call UIGetScreenImage():

        2010-05-03 11:39:04.588 ProjectName[763:5903] *** _NSAutoreleaseNoPool(): Object 0x15a2e0 of class NSCFNumber autoreleased with no pool in place - just leaking

    Has anyone else seen this with UIGetScreenImage() on a separate thread?
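
    One likely cause, judging from the code above: [pool drain] releases the pool itself, so after the first pass through the if block any later autoreleased object on this thread has no pool, which matches the _NSAutoreleaseNoPool message. A sketch of a per-iteration pool (variables as in the original; whether the image returned by UIGetScreenImage() must be released by the caller is treated here as an assumption):

        while (doIt) {
            // each pass gets its own pool, drained at the end of the pass
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            if (doItForSure) {
                doItForSure = NO;
                CGImageRef screenImage = UIGetScreenImage();
                /* ... use the image ... */
                CGImageRelease(screenImage);   // assumption: the caller owns the returned image
            }
            [pool drain];
        }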

  • ajax response byte size

    - by Alex Pacurar
    I'm using jQuery's getJSON with JSONP and I want to log the duration of the call and the size of the response, to be able to have some statistics about the usage of my application. This is a cross-domain ajax call, so I need to use JSONP, but as the JSONP call is not done with an XMLHttpRequest object, the complete callback from jQuery's ajax doesn't pass the response content. So my question is how to get the response size (content length) from a JSONP call.

        $.ajaxSetup({
            complete: function (x, e) {
                log(x.responseText.length, x.responseText);
            }
        });

    Here x is an XMLHttpRequest object for a JSON call, but for a JSONP call it is undefined.
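
    A JSONP response arrives as an executed script tag, not through XMLHttpRequest, so there is no responseText or Content-Length to inspect on the client. A sketch of one workaround: time the call yourself and approximate the size by re-serializing the parsed data. The URL is a placeholder, log() is the function from the snippet above, JSON.stringify is assumed to be available (e.g. via json2.js on older browsers), and the result counts characters of the re-serialized payload rather than the exact bytes on the wire.

        var start = new Date().getTime();

        $.getJSON('http://example.org/api?callback=?', function (data) {
            var duration = new Date().getTime() - start;      // ms spent on the call
            var approxSize = JSON.stringify(data).length;     // rough response size
            log(duration, approxSize);
        });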

  • Android RandomAccessFile usage from resource

    - by lacas
    My code is:

        String fileIn = resources.getResourceName(resourceID);
        Log.e("fileIn", fileIn);
        //BufferedReader buffer = new BufferedReader(new InputStreamReader(fileIn));
        RandomAccessFile buffer = null;
        try {
            buffer = new RandomAccessFile(fileIn, "r");
        } catch (FileNotFoundException e) {
            Log.e("err", "" + e);
        }

    The log shows fileIn(6062): ls3d.gold.paper:raw/wwe_obj and I get:

        11-26 15:06:35.027: ERROR/err(6062): java.io.FileNotFoundException: /ls3d.gold.paper:raw/wwe_obj (No such file or directory)

    How can I access a file using RandomAccessFile in Java? How can I load it from a resource? (R.raw.wwe_obj)
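
    A raw resource is packed inside the APK, so it has no ordinary file path for RandomAccessFile to open; getResourceName() returns a resource identifier, not a path, which is why the FileNotFoundException shows /ls3d.gold.paper:raw/wwe_obj. A sketch of one workaround, copying the resource to a real file first (context is assumed to be an available android.content.Context, the file name and buffer size are arbitrary, and java.io imports are omitted):

        // copy R.raw.wwe_obj out of the APK into the app's cache directory
        InputStream in = context.getResources().openRawResource(R.raw.wwe_obj);
        File copy = new File(context.getCacheDir(), "wwe_obj.bin");
        OutputStream out = new FileOutputStream(copy);
        byte[] buf = new byte[8192];
        int len;
        while ((len = in.read(buf)) > 0) {
            out.write(buf, 0, len);
        }
        out.close();
        in.close();

        // the copy is an ordinary file, so random access works now
        RandomAccessFile raf = new RandomAccessFile(copy, "r");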

  • Seam cache provider with ehcache null

    - by Cateno Viglio
    Hi everyone, I'm trying to configure seam/ehcache following the tutorial from the JBoss page: http://docs.jboss.org/seam/2.1.2/reference/en-US/html/cache.html. I put ehcache.1.2.3.jar in project.ear/lib and injected CacheProvider as specified, but the CacheProvider always returns null. The documentation doesn't show any additional configuration for ehcache, just for JBoss Cache. I am probably doing something wrong; it can't be that easy :). Besides putting the jar in /lib, I created the following Seam component to test:

        @Scope(ScopeType.SESSION)
        @Name("cacheBean")
        public class CacheSeamBean implements java.io.Serializable {
            @In(required=false, create=true)
            private EntityManager em;
            @Logger
            private Log log;
            @In
            private Events events;
            @In
            CacheProvider cacheProvider;

            Boolean blLoaded = Boolean.FALSE;

            @Create
            public void buscar() {
                if (!blLoaded) {
                    List<Parametro> lstParametro = em.createQuery("select p from Parametro p").getResultList();
                    for (Parametro parametro : lstParametro) {
                        cacheProvider.put(parametro.getCodigo(), parametro.getValor());
                    }
                    blLoaded = Boolean.TRUE;
                }
            }
        }

    Thanks

  • SQL Query to delete oldest rows over a certain row count?

    - by Casey
    I have a table that contains log entries for a program I'm writing. I'm looking for ideas on an SQL query (I'm using SQL Server Express 2005) that will keep the newest X number of records and delete the rest. I have a datetime column that is a timestamp for the log entry. I figure something like the following would work, but I'm not sure of the performance with the IN clause for larger numbers of records. Performance isn't critical, but I might as well do the best I can the first time.

        DELETE FROM MyTable
        WHERE PrimaryKey NOT IN
            (SELECT TOP 10000 PrimaryKey FROM MyTable ORDER BY TimeStamp DESC)
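
    A sketch of one alternative that avoids the NOT IN over the whole key set: rank the rows with ROW_NUMBER() (available in SQL Server 2005) and delete everything past the cut-off. Table and column names are taken from the question; the cut-off is the same 10000.

        WITH Ranked AS (
            SELECT PrimaryKey,
                   ROW_NUMBER() OVER (ORDER BY TimeStamp DESC) AS rn
            FROM MyTable
        )
        DELETE FROM Ranked
        WHERE rn > 10000;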

  • ASP.NET 2.0 files work in one folder, but NOT in another

    - by Steve
    I am about to leap off a building. I have created an app for a client and all the files are in a folder on their D drive. Now it is time to go to production, so I copied all my files and folders to their existing classic ASP folder on the same drive. BUT NOTHING WORKS. The only difference I can see is that DEV does not require HTTPS like the production site. I also made sure all the permissions are the same on both folders. I made sure that the GAC has read rights using the aspnet_regiis tool. I am at the end of my debug knowledge, could someone please help me out. Here are the error messages I get from the application event log:

        Failed to initialize the AppDomain:/LM/W3SVC/3/Root
        Exception: System.Configuration.ConfigurationErrorsException
        Message: Exception of type 'System.Configuration.ConfigurationErrorsException' was thrown.
        StackTrace:
        at System.Web.Configuration.ErrorRuntimeConfig.ErrorConfigRecord.System.Configuration.Internal.IInternalConfigRecord.GetLkgSection(String configKey)
        at System.Web.Configuration.RuntimeConfigLKG.GetSectionObject(String sectionName)
        at System.Web.Configuration.RuntimeConfig.GetSection(String sectionName, Type type, ResultsIndex index)
        at System.Web.Configuration.RuntimeConfig.get_HostingEnvironment()
        at System.Web.Hosting.HostingEnvironment.StartMonitoringForIdleTimeout()
        at System.Web.Hosting.HostingEnvironment.Initialize(ApplicationManager appManager, IApplicationHost appHost, IConfigMapPathFactory configMapPathFactory, HostingEnvironmentParameters hostingParameters)
        at System.Web.Hosting.HostingEnvironment.Initialize(ApplicationManager appManager, IApplicationHost appHost, IConfigMapPathFactory configMapPathFactory, HostingEnvironmentParameters hostingParameters)
        at System.Web.Hosting.ApplicationManager.CreateAppDomainWithHostingEnvironment(String appId, IApplicationHost appHost, HostingEnvironmentParameters hostingParameters)
        at System.Web.Hosting.ApplicationManager.CreateAppDomainWithHostingEnvironmentAndReportErrors(String appId, IApplicationHost appHost, HostingEnvironmentParameters hostingParameters)
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

        ------------------------

        Failed to execute the request because the ASP.NET process identity does not have read permissions to the global assembly cache. Error: 0x80131902
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

        -------------------------

        aspnet_wp.exe (PID: 4568) stopped unexpectedly.
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    Here is the web error message:

        Server Application Unavailable
        The web application you are attempting to access on this web server is currently unavailable. Please hit the "Refresh" button in your web browser to retry your request.
        Administrator Note: An error message detailing the cause of this specific request failure can be found in the application event log of the web server. Please review this log entry to discover what caused this error to occur.

    Thank you for your help, Steve

  • Man machine interface command syntax and parsing

    - by idimba
    What I want is to add the possibility to interact with the application, and to be able to extract information from it or even ask it to change some state. For that purpose I thought of building a CLI utility. The utility will connect to the application, send user commands (one-line strings) to it, and wait for a response. A command should contain:

    - a command name (e.g. display-session-table/set-log-level etc.)
    - optionally, several arguments (e.g. log-level=10)

    The question is how to choose the syntax and how to parse it quickly and correctly. I don't want to reinvent the wheel, so maybe there's already an answer out there.

  • Concatenating rows from different tables into one field

    - by Markus
    Hi! In a project using an MSSQL 2005 database we are required to log all data-manipulating actions in a logging table. One field in that table is supposed to contain the row before it was changed. We have a lot of tables, so I was trying to write a stored procedure that would gather up all the fields in one row of a table that was given to it, concatenate them somehow, and then write a new log entry with that information. I already tried using FOR XML PATH and it worked, but the client doesn't like the XML notation; they want a CSV field. Here's what I had with FOR XML PATH:

        DECLARE @foo varchar(max);
        SET @foo = (SELECT * FROM table WHERE id = 5775 FOR XML PATH(''));

    The values for "table", "id" and the actual id (here: 5775) would later be passed in via the call to the stored procedure. Is there any way to do this without getting XML notation and without knowing in advance which fields are going to be returned by the SELECT statement?
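
    A sketch of one possible approach: build the concatenation expression from INFORMATION_SCHEMA.COLUMNS and run it with sp_executesql. The variable values, the id column name and the comma separator are illustrative; every value is CONVERTed to nvarchar(max), so column types that cannot be converted that way (e.g. legacy image columns) would need special handling.

        DECLARE @tableName sysname, @id int;
        DECLARE @cols nvarchar(max), @sql nvarchar(max), @csv nvarchar(max);

        SET @tableName = N'MyTable';   -- would come in as a procedure parameter
        SET @id = 5775;                -- likewise

        -- build "ISNULL(CONVERT(nvarchar(max), [col]), '')" for every column, comma-separated
        SELECT @cols = COALESCE(@cols + N' + '','' + ', N'')
                     + N'ISNULL(CONVERT(nvarchar(max), ' + QUOTENAME(COLUMN_NAME) + N'), '''')'
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_NAME = @tableName
        ORDER BY ORDINAL_POSITION;

        SET @sql = N'SELECT @csv = ' + @cols
                 + N' FROM ' + QUOTENAME(@tableName)
                 + N' WHERE id = @id';

        EXEC sp_executesql @sql, N'@csv nvarchar(max) OUTPUT, @id int', @csv OUTPUT, @id;

        SELECT @csv AS RowAsCsv;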

  • GDB skips over my code!

    - by sheepsimulator
    So, I've defined a class like:

        class DataLoggingSystemStateReceiver {
            DataLoggingSystemStateReceiver() {
                // stuff
            }
            // ... other functions here
        };

    In main, I instantiate DataLoggingSystemStateReceiver like so:

        int main() {
            // ... run stuff
            Sensor sensor(port, timer);
            DataLoggingSystemStateReceiver dlss();
            Log::notice("started");
            return 0;
        }

    However, when I step through this code in gdb, it runs Sensor sensor(port, timer);, skips DataLoggingSystemStateReceiver dlss(); and continues with Log::notice("started");. What gives?

    EDIT: By changing DataLoggingSystemStateReceiver dlss(); to DataLoggingSystemStateReceiver dlss; in main(), the line executes. Can someone explain why?
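
    A short illustration of why this happens, which matches the EDIT: DataLoggingSystemStateReceiver dlss(); is parsed as the declaration of a function named dlss taking no arguments and returning a DataLoggingSystemStateReceiver (the so-called "most vexing parse"), so no object is constructed and there is nothing for gdb to step into.

        #include <iostream>

        struct Widget {
            Widget() { std::cout << "constructed\n"; }
        };

        int main() {
            Widget a;      // default-constructed object: prints "constructed"
            Widget b();    // function declaration, not an object: nothing printed
            return 0;
        }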

  • What CPAN module can summarize error logs?

    - by mithaldu
    I'm maintaining some website code that will soon dump all its errors and warnings into a log file. In order to make this a bit more pro-active I plan to parse this log file daily, summarize the warnings and errors (i.e. count the occurrence of each specific one and group by warning/error) and then email this to the devs on the project. This would admittedly be rather trivial with a hash and some further fiddling, but I wondered if there is a suitable module on CPAN that I could use to do this task. It would either be one that summarizes Perl error/warning logs specifically or one that summarizes arbitrary text files. Any suggestions?

  • get all domain names on network

    - by user175084
    I need to get the list of domain names on my network, but I am only getting the domain name that I log into. So, for example, there are two domains, "xyz" and "xyz2", but I only get the domain that I log into. Here is my code:

        if (!IsPostBack)
        {
            StringCollection adDomains = this.GetDomainList();
            foreach (string strDomain in adDomains)
            {
                DropDownList1.Items.Add(strDomain);
            }
        }
    }   // closes the enclosing event handler

        private StringCollection GetDomainList()
        {
            StringCollection domainList = new StringCollection();
            try
            {
                DirectoryEntry en = new DirectoryEntry("LDAP://");
                // Search for objectCategory type "Domain"
                DirectorySearcher srch = new DirectorySearcher("objectCategory=Domain");
                SearchResultCollection coll = srch.FindAll();
                // Enumerate over each returned domain.
                foreach (SearchResult rs in coll)
                {
                    ResultPropertyCollection resultPropColl = rs.Properties;
                    foreach (object domainName in resultPropColl["name"])
                    {
                        domainList.Add(domainName.ToString());
                    }
                }
            }
            catch (Exception ex)
            {
                Trace.Write(ex.Message);
            }
            return domainList;
        }
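
    Binding to LDAP:// without a path most likely lands on the default naming context of the logon domain, which would explain why only that domain comes back. For comparison, a sketch that asks the forest for its domain list instead (assumes .NET 2.0+, the System.DirectoryServices.ActiveDirectory namespace, and that the machine is joined to the forest):

        using System.Collections.Specialized;
        using System.DirectoryServices.ActiveDirectory;

        private StringCollection GetForestDomainList()
        {
            StringCollection domainList = new StringCollection();
            // Forest.Domains lists every domain in the current forest,
            // not just the one the current user is logged into.
            foreach (Domain d in Forest.GetCurrentForest().Domains)
            {
                domainList.Add(d.Name);
            }
            return domainList;
        }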

  • Git ignore file for vb.net projects

    - by John C
    Placing a vb.net project under git control in windows (was previously under VSS - long sad story of repository corruption, etc). How should I set up the ignore file? The exclusions I'm thinking of using are:

        *.exe
        *.pdb
        *.manifest
        *.xml
        *.log   (is git case sensitive on windows? Should I exclude *.Log as well?)
        *.scc   (I gather these were left over from VSS - maybe I should delete them?)

    Is this a sensible list? Should I be excluding directories?
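
    For what it's worth, a sketch of a .gitignore for a Visual Studio/VB.NET tree. Whether ignore patterns match case-insensitively depends on the git version and core.ignorecase, so duplicating a pattern such as *.Log costs nothing; the directory entries answer the last question by excluding build output wholesale.

        # build output directories
        bin/
        obj/

        # per-user Visual Studio state
        *.suo
        *.user

        # VSS leftovers (safe to delete from the working tree as well)
        *.scc
        *.vssscc
        *.vspscc

        # logs
        *.log
        *.Log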

  • XMLHttpRequest fails in observer method

    - by Michael
    I'm developing a Firefox extension and this code belongs to a JavaScript module.

        var ajax = Cc["@mozilla.org/xmlextras/xmlhttprequest;1"].createInstance();
        ajax.open('GET', "http://www.google.com", true);
        ajax.onload = function () {
            Reader.log("Got News");
        };
        ajax.onerror = function () {
            Reader.log("Got Error");
        };
        ajax.send(null);

    This small piece of code always fails (onerror) when called from the observe method invoked by the preference service "@mozilla.org/preferences-service;1". Does anyone know how to make this work in an observe method?

  • Inaccurate Logarithm in Python

    - by Avihu Turzion
    I work daily with Python 2.4 at my company. I used the versatile logarithm function 'log' from the standard math library, and when I entered log(2**31, 2) it returned 31.000000000000004, which struck me as a bit odd. I did the same thing with other powers of 2, and it worked perfectly. I ran 'log10(2**31) / log10(2)' and I got a round 31.0 I tried running the same original function in Python 3.0.1, assuming that it was fixed in a more advanced version. Why does this happen? Is it possible that there are some inaccuracies in mathematical functions in Python?
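
    For context, a sketch of what is going on: math.log(x, 2) is computed as log(x)/log(2) in double precision, so tiny rounding errors can surface even for exact powers of two; rounding the result, or reading the exponent directly, avoids the surprise (plain standard library, no extra modules assumed):

        import math

        x = 2 ** 31

        print(math.log(x, 2))                 # may print 31.000000000000004 on some builds
        print(math.log10(x) / math.log10(2))  # happens to give 31.0 here, but is not guaranteed either
        print(round(math.log(x, 2)))          # 31.0 - fine when you know x is a power of two
        print(math.frexp(x)[1] - 1)           # 31 - exact: frexp(2**31) returns (0.5, 32)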

  • How to view Mercurial changeset changes using a GUI diff tool

    - by Marcus
    We use both ExamDiff and KDiff3 to view Mercurial changes. Just add this to .hgrc:

        [extdiff]
        cmd.kdiff3 =
        cmd.examdiff = C:\Program Files\ExamDiff Pro\ExamDiff.exe

    And then you can type hg examdiff or hg kdiff3 to see a complete diff of all your changes. What I would like is to do the same to see a "before and after" of files for a given changeset that was checked in by someone else. I know you can type hg log to see all changesets and then hg log -vprXX to see a text diff, but that's too hard for my GUI-preferring eyes. Anyone know how to do the equivalent with the GUI tools?
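
    A hedged pointer: recent versions of the extdiff extension give each generated command a -c/--change option that diffs a single changeset against its parent, so something like the following should open the same GUI tools for someone else's commit (the revision number is a placeholder):

        hg kdiff3 --change 1234
        hg examdiff --change 1234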

  • Why is the event object different coming from jquery bind vs. addEventListener

    - by yodaisgreen
    Why is it that when I use jQuery's bind, the event object I get back is different from the event object I get using addEventListener? The event object resulting from this jQuery bind does not have the targetTouches array (among other things), but the event from addEventListener does. Is it me or is something not right here?

        $(document).ready(function () {
            $("#test").bind("touchmove", function (event) {
                console.log(event.targetTouches[0].pageX); // targetTouches is undefined
            });
        });

    vs.

        $(document).ready(function () {
            var foo = document.querySelectorAll('#test');
            foo[0].addEventListener('touchmove', function (event) {
                console.log(event.targetTouches[0].pageX); // returns the correct values
            }, false);
        });
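
    The usual explanation: jQuery hands the handler a normalized jQuery.Event and only copies a fixed list of properties onto it; touch-specific properties are not on that list, but the browser's native event is still available as event.originalEvent. A sketch using the same element and event as above:

        $(document).ready(function () {
            $("#test").bind("touchmove", function (event) {
                // reach through to the native TouchEvent that addEventListener would deliver
                var touches = event.originalEvent.targetTouches;
                if (touches && touches.length) {
                    console.log(touches[0].pageX);
                }
            });
        });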

  • Emacs Lisp: How to use ad-get-arg and ad-get-args?

    - by RamyenHead
    I'm not sure I am using ad-get-args and ad-get-arg right. For example, the following code doesn't work:

        (defun my-add (a b)
          (+ a b))

        (defadvice my-add (after my-log-on activate)
          (message "my-add: %s" (ad-get-args)))

        (my-add 1 2)

    The last expression causes an error: Debugger entered--Lisp error: (void-function ad-get-args). The following doesn't work either:

        (defun my-substract (a b)
          (- a b))

        (defadvice my-substract (around my-log-on activate)
          (message "my-substract: %s" (ad-get-arg 0))
          (ad-do-it))

        (my-substract 10 1)

    The defadvice gives a warning:

        Warning: `(setq ad-return-value (ad-Orig-my-substract a b))' is a malformed function

    and the last expression gives an error:

        Debugger entered--Lisp error: (invalid-function (setq ad-return-value (ad-Orig-my-substract a b)))
          (setq ad-return-value (ad-Orig-my-substract a b))()

    I was trying to use defadvice to watch start-process arguments for debugging purposes and I found my way of using ad-get-arg didn't work.

  • Optimize php-fpm and varnish for a powerful server

    - by Jim
    My setup is: Intel® Core™ i7-2600 and RAM 16 GB DDR3 RAM varnish+nginx+php-fpm+apc for a not very heavy WordPress blog with W3 Total Cache and CDN My problem is that after 55 hits per second according to blitz.io varnish starts giving out timeouts. CPU usage at this time is hardly 1%. Free memory at all time remains 10GB+. I tried benchmarking php-fpm directly with result of 150hits/s without any timeouts. But after that the CPU usage goes 100% and it stops responding. Can you help me optimize it to handle more? As i understand nginx has nothing to do over here so i dont put its config. php-fpm config listen = /tmp/php5-fpm.sock listen.allowed_clients = 127.0.0.1 user = nginx group = nginx pm = dynamic pm.max_children = 150 pm.start_servers = 7 pm.min_spare_servers = 2 pm.max_spare_servers = 15 pm.max_requests = 500 slowlog = /var/log/php-fpm/www-slow.log php_admin_value[error_log] = /var/log/php-fpm/www-error.log php_admin_flag[log_errors] = on apc extension = apc.so apc.enabled=1 apc.shm_size=512MB apc.num_files_hint=0 apc.user_entries_hint=0 apc.ttl=7200 apc.use_request_time=1 apc.user_ttl=7200 apc.gc_ttl=3600 apc.cache_by_default=1 apc.filters apc.mmap_file_mask=/tmp/apc.XXXXXX apc.file_update_protection=2 apc.enable_cli=0 apc.max_file_size=1M apc.stat=1 apc.stat_ctime=0 apc.canonicalize=0 apc.write_lock=1 apc.report_autofilter=0 apc.rfc1867=0 apc.rfc1867_prefix =upload_ apc.rfc1867_name=APC_UPLOAD_PROGRESS apc.rfc1867_freq=0 apc.rfc1867_ttl=3600 apc.include_once_override=0 apc.lazy_classes=0 apc.lazy_functions=0 apc.coredump_unmap=0 apc.file_md5=0 apc.preload_path Varnish VCL backend default { .host = "127.0.0.1"; .port = "8080"; .connect_timeout = 6s; .first_byte_timeout = 6s; .between_bytes_timeout = 60s; } acl purgehosts { "localhost"; "127.0.0.1"; } # Called after a document has been successfully retrieved from the backend. sub vcl_fetch { # Uncomment to make the default cache "time to live" is 5 minutes, handy # but it may cache stale pages unless purged. (TODO) # By default Varnish will use the headers sent to it by Apache (the backend server) # to figure out the correct TTL. # WP Super Cache sends a TTL of 3 seconds, set in wp-content/cache/.htaccess set beresp.ttl = 24h; # Strip cookies for static files and set a long cache expiry time. 
if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") { unset beresp.http.set-cookie; set beresp.ttl = 24h; } # If WordPress cookies found then page is not cacheable if (req.http.Cookie ~"(wp-postpass|wordpress_logged_in|comment_author_)") { # set beresp.cacheable = false;#versions less than 3 #beresp.ttl>0 is cacheable so 0 will not be cached set beresp.ttl = 0s; } else { #set beresp.cacheable = true; set beresp.ttl=24h;#cache for 24hrs } # Varnish determined the object was not cacheable #if ttl is not > 0 seconds then it is cachebale if (!beresp.ttl > 0s) { # set beresp.http.X-Cacheable = "NO:Not Cacheable"; } else if ( req.http.Cookie ~"(wp-postpass|wordpress_logged_in|comment_author_)" ) { # You don't wish to cache content for logged in users set beresp.http.X-Cacheable = "NO:Got Session"; return(hit_for_pass); #previously just pass but changed in v3+ } else if ( beresp.http.Cache-Control ~ "private") { # You are respecting the Cache-Control=private header from the backend set beresp.http.X-Cacheable = "NO:Cache-Control=private"; return(hit_for_pass); } else if ( beresp.ttl < 1s ) { # You are extending the lifetime of the object artificially set beresp.ttl = 300s; set beresp.grace = 300s; set beresp.http.X-Cacheable = "YES:Forced"; } else { # Varnish determined the object was cacheable set beresp.http.X-Cacheable = "YES"; if (beresp.status == 404 || beresp.status >= 500) { set beresp.ttl = 0s; } # Deliver the content return(deliver); } sub vcl_hash { # Each cached page has to be identified by a key that unlocks it. # Add the browser cookie only if a WordPress cookie found. if ( req.http.Cookie ~"(wp-postpass|wordpress_logged_in|comment_author_)" ) { #set req.hash += req.http.Cookie; hash_data(req.http.Cookie); } } # vcl_recv is called whenever a request is received sub vcl_recv { # remove ?ver=xxxxx strings from urls so css and js files are cached. # Watch out when upgrading WordPress, need to restart Varnish or flush cache. set req.url = regsub(req.url, "\?ver=.*$", ""); # Remove "replytocom" from requests to make caching better. set req.url = regsub(req.url, "\?replytocom=.*$", ""); remove req.http.X-Forwarded-For; set req.http.X-Forwarded-For = client.ip; # Exclude this site because it breaks if cached if ( req.http.host == "sr.ituts.gr" ) { return( pass ); } # Serve objects up to 2 minutes past their expiry if the backend is slow to respond. set req.grace = 120s; # Strip cookies for static files: if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") { unset req.http.Cookie; return(lookup); } # Remove has_js and Google Analytics __* cookies. set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(__[a-z]+|has_js)=[^;]*", ""); # Remove a ";" prefix, if present. set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", ""); # Remove empty cookies. if (req.http.Cookie ~ "^\s*$") { unset req.http.Cookie; } if (req.request == "PURGE") { if (!client.ip ~ purgehosts) { error 405 "Not allowed."; } #previous version ban() was purge() ban("req.url ~ " + req.url + " && req.http.host == " + req.http.host); error 200 "Purged."; } # Pass anything other than GET and HEAD directly. if (req.request != "GET" && req.request != "HEAD") { return( pass ); } /* We only deal with GET and HEAD by default */ # remove cookies for comments cookie to make caching better. 
set req.http.cookie = regsub(req.http.cookie, "1231111111111111122222222333333=[^;]+(; )?", ""); # never cache the admin pages, or the server-status page, or your feed? you may want to..i don't if (req.request == "GET" && (req.url ~ "(wp-admin|bb-admin|server-status|feed)")) { return(pipe); } # don't cache authenticated sessions if (req.http.Cookie && req.http.Cookie ~ "(wordpress_|PHPSESSID)") { return(lookup); } # don't cache ajax requests if(req.http.X-Requested-With == "XMLHttpRequest" || req.url ~ "nocache" || req.url ~ "(control.php|wp-comments-post.php|wp-login.php|bb-login.php|bb-reset-password.php|register.php)") { return (pass); } return( lookup ); } Varnish Daemon options DAEMON_OPTS="-a :80 \ -T 127.0.0.1:6082 \ -f /etc/varnish/ituts.vcl \ -u varnish -g varnish \ -S /etc/varnish/secret \ -p thread_pool_add_delay=2 \ -p thread_pools=8 \ -p thread_pool_min=100 \ -p thread_pool_max=1000 \ -p session_linger=50 \ -p session_max=150000 \ -p sess_workspace=262144 \ -s malloc,5G" Im not sure where to start, should i for start optimize php-fpm and then go to varnish or php-fpm is at its max right now so i should start looking for the problem in varnish?

  • ASP:LinkButton and Eval

    - by sgibbons
    I'm using an ASP:LinkButton inside of an ItemTemplate inside of a TemplateField in a GridView. For the command argument for the link button I want to pass the ID of the row from the datasource that the gridview is bound to, so I'm doing something like this:

        <asp:LinkButton ID="viewLogButton" CommandName="viewLog" CommandArgument="<%#Eval("ID")%>" Text="View Log" runat="server"/>

    Unfortunately, the resulting HTML is this:

        <asp:LinkButton ID="viewLogButton" CommandName="viewLog" CommandArgument="3" Text="View Log" runat="server"/>

    It seems that it is parsing the Eval() properly, but this is somehow causing it not to parse the LinkButton tag and just dump it out as literal text. Does anyone know: a) why this is happening and, b) what a good solution to this problem is?
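
    The usual fix for this symptom: the inner double quotes around "ID" terminate the CommandArgument attribute early, so the tag no longer parses as a well-formed server control; wrapping the databinding expression in single quotes lets it databind normally. A sketch with the same control and properties as above:

        <asp:LinkButton ID="viewLogButton" runat="server"
                        CommandName="viewLog"
                        CommandArgument='<%# Eval("ID") %>'
                        Text="View Log" />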

  • Where to put my xUnit tests for an F# assembly?

    - by Benjol
    I'm working on my first 'real' F# assembly, and trying to do things right. I've managed to get xUnit working too, but currently my test module is inside the same assembly. This bothers me a bit, because it means I'll be shipping an assembly where nearly half the code (and 80% of the API) is test methods. What is the 'right' way to do this? If I put the tests in another assembly, I think that means I have to expose internals that I'd rather keep private. I know that in C# there is a friend mechanism for tests (if that's the right terminology), is there an equivalent in F#? Alternatively, can anyone point me to an example project where this is being done 'properly'?
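
    The equivalent does exist: F# honours the same InternalsVisibleTo "friend assembly" attribute that C# projects use for tests, so the tests can move to a separate project while the members stay internal. A sketch (the test assembly name is a placeholder, and a strong-named assembly would need the full public key in the attribute):

        module AssemblyInfo

        open System.Runtime.CompilerServices

        // allow the (hypothetical) MyLibrary.Tests assembly to see internal members
        [<assembly: InternalsVisibleTo("MyLibrary.Tests")>]
        do ()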
