Search Results

Search found 17616 results on 705 pages for 'uls log'.


  • Cannot right click with synaptics touchpad

    - by fluteflute
    I have a Sony VAIO E14. The touchpad detects all clicks as left clicks. In Windows 7, pressing on the right side of the touchpad is recognised as a right click. How can I enable right clicking?

        greg@greg-SVE14A1C5E:~$ xinput
        ⎡ Virtual core pointer                      id=2    [master pointer  (3)]
        ⎜   ↳ Virtual core XTEST pointer            id=4    [slave  pointer  (2)]
        ⎜   ↳ SynPS/2 Synaptics TouchPad            id=11   [slave  pointer  (2)]
        ...
        greg@greg-SVE14A1C5E:~$ grep "TouchPad: buttons:" /var/log/Xorg.0.log
        [    23.112] (--) synaptics: SynPS/2 Synaptics TouchPad: buttons: left double triple
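
    A possible direction, sketched rather than verified: the Xorg log reports only left/double/triple buttons, which is typical of a clickpad handled by the xf86-input-synaptics driver, so a soft right-button area can be defined. The coordinate values below are illustrative assumptions, not taken from this machine:

        # Runtime experiment (left/right/top/bottom edges of the right-button area;
        # 0 means "extend to the pad edge"):
        synclient RightButtonAreaLeft=3000 RightButtonAreaRight=0 RightButtonAreaTop=1700 RightButtonAreaBottom=0

        # Equivalent permanent setting in an xorg.conf.d snippet:
        # Section "InputClass"
        #     Identifier "clickpad soft right button"
        #     MatchDriver "synaptics"
        #     Option "SoftButtonAreas" "3000 0 1700 0 0 0 0 0"
        # EndSection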

  • VirtualHost and 403 Forbidden problem with Apache

    - by Joaquín L. Robles
    Hi people, I have exactly the same problem as in "403 Forbidden Error when accessing enabled virtual host", but I tried the provided solution and am still receiving 403 Forbidden. My site is called project, and its configuration file in /etc/apache2/sites-available is the following:

        <VirtualHost *:80>
            ServerAdmin webmaster@reweb
            ServerName www.online.project.com
            ErrorLog /logs/project-errors.log
            DocumentRoot /home/joarobles/Zend/workspaces/DefaultWorkspace7/online.project.com.ar/public
            <Directory /home/joarobles/Zend/workspaces/DefaultWorkspace7/online.project.com.ar>
                Order Deny,Allow
                Allow from all
                Options Indexes
            </Directory>
        </VirtualHost>

    I tried to change the owner of /home/joarobles/Zend/workspaces/DefaultWorkspace7/online.cobico.com.ar to www-data:www-data with recursion, but I am still getting this error in project-errors.log:

        [Mon Jan 31 01:26:01 2011] [crit] [client 127.0.0.1] (13)Permission denied: /home/joarobles/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable

    Any idea? Thanks!
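
    The logged (13)Permission denied on /home/joarobles/.htaccess means the Apache user cannot even traverse the home directory, so changing ownership of the workspace alone will not help. A hedged sketch of the two usual remedies (paths taken from the question; the AllowOverride line is an addition, not part of the original config):

        # let www-data descend through each directory on the path:
        chmod o+x /home/joarobles /home/joarobles/Zend /home/joarobles/Zend/workspaces \
                  /home/joarobles/Zend/workspaces/DefaultWorkspace7
        # or stop Apache from looking for .htaccess files, inside the <Directory> block:
        #     AllowOverride None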

  • vsftpd not allowing uploads. 550 response

    - by Josh
    I've set vsftpd up on a CentOS box. I keep trying to upload files, but I keep getting "550 Failed to change directory" and "550 Could not get file size." Here's my vsftpd.conf:

        # The default compiled in settings are fairly paranoid. This sample file
        # loosens things up a bit, to make the ftp daemon more usable.
        # Please see vsftpd.conf.5 for all compiled in defaults.
        #
        # READ THIS: This example file is NOT an exhaustive list of vsftpd options.
        # Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's
        # capabilities.
        #
        # Allow anonymous FTP? (Beware - allowed by default if you comment this out).
        anonymous_enable=YES
        #
        # Uncomment this to allow local users to log in.
        local_enable=YES
        #
        # Uncomment this to enable any form of FTP write command.
        write_enable=YES
        #
        # Default umask for local users is 077. You may wish to change this to 022,
        # if your users expect that (022 is used by most other ftpd's)
        local_umask=022
        #
        # Uncomment this to allow the anonymous FTP user to upload files. This only
        # has an effect if the above global write enable is activated. Also, you will
        # obviously need to create a directory writable by the FTP user.
        anon_upload_enable=YES
        #
        # Uncomment this if you want the anonymous FTP user to be able to create
        # new directories.
        anon_mkdir_write_enable=YES
        anon_other_write_enable=YES
        #
        # Activate directory messages - messages given to remote users when they
        # go into a certain directory.
        dirmessage_enable=YES
        #
        # The target log file can be vsftpd_log_file or xferlog_file.
        # This depends on setting xferlog_std_format parameter
        xferlog_enable=YES
        #
        # Make sure PORT transfer connections originate from port 20 (ftp-data).
        connect_from_port_20=YES
        #
        # If you want, you can arrange for uploaded anonymous files to be owned by
        # a different user. Note! Using "root" for uploaded files is not
        # recommended!
        #chown_uploads=YES
        #chown_username=whoever
        #
        # The name of log file when xferlog_enable=YES and xferlog_std_format=YES
        # WARNING - changing this filename affects /etc/logrotate.d/vsftpd.log
        #xferlog_file=/var/log/xferlog
        #
        # Switches between logging into vsftpd_log_file and xferlog_file files.
        # NO writes to vsftpd_log_file, YES to xferlog_file
        xferlog_std_format=NO
        #
        # You may change the default value for timing out an idle session.
        #idle_session_timeout=600
        #
        # You may change the default value for timing out a data connection.
        #data_connection_timeout=120
        #
        # It is recommended that you define on your system a unique user which the
        # ftp server can use as a totally isolated and unprivileged user.
        #nopriv_user=ftpsecure
        #
        # Enable this and the server will recognise asynchronous ABOR requests. Not
        # recommended for security (the code is non-trivial). Not enabling it,
        # however, may confuse older FTP clients.
        #async_abor_enable=YES
        #
        # By default the server will pretend to allow ASCII mode but in fact ignore
        # the request. Turn on the below options to have the server actually do ASCII
        # mangling on files when in ASCII mode.
        # Beware that on some FTP servers, ASCII support allows a denial of service
        # attack (DoS) via the command "SIZE /big/file" in ASCII mode. vsftpd
        # predicted this attack and has always been safe, reporting the size of the
        # raw file.
        # ASCII mangling is a horrible feature of the protocol.
        #ascii_upload_enable=YES
        #ascii_download_enable=YES
        #
        # You may fully customise the login banner string:
        #ftpd_banner=Welcome to blah FTP service.
        #
        # You may specify a file of disallowed anonymous e-mail addresses. Apparently
        # useful for combatting certain DoS attacks.
        #deny_email_enable=YES
        # (default follows)
        #banned_email_file=/etc/vsftpd/banned_emails
        #
        # You may specify an explicit list of local users to chroot() to their home
        # directory. If chroot_local_user is YES, then this list becomes a list of
        # users to NOT chroot().
        #chroot_list_enable=YES
        # (default follows)
        #chroot_list_file=/etc/vsftpd/chroot_list
        #
        # You may activate the "-R" option to the builtin ls. This is disabled by
        # default to avoid remote users being able to cause excessive I/O on large
        # sites. However, some broken FTP clients such as "ncftp" and "mirror" assume
        # the presence of the "-R" option, so there is a strong case for enabling it.
        #ls_recurse_enable=YES
        #
        # When "listen" directive is enabled, vsftpd runs in standalone mode and
        # listens on IPv4 sockets. This directive cannot be used in conjunction
        # with the listen_ipv6 directive.
        listen=YES
        # This directive enables listening on IPv6 sockets. To listen on IPv4 and IPv6
        # sockets, you must run two copies of vsftpd with two configuration files.
        # Make sure, that one of the listen options is commented !!
        #listen_ipv6=YES

        pam_service_name=vsftpd
        userlist_enable=YES
        tcp_wrappers=YES
        log_ftp_protocol=YES
        banner_file=/etc/vsftpd/issue
        local_root=/var/www
        guest_enable=YES
        guest_username=ftpusr
        ftp_username=nobody
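
    A hedged reading of those 550s, based only on the config above: with guest_enable=YES and local_root=/var/www, every login is mapped to user ftpusr inside /var/www, so that tree must exist and be traversable (and writable, for uploads) by ftpusr; anonymous uploads additionally require their own writable subdirectory. A sketch of the checks (the uploads directory is a hypothetical example):

        # Can ftpusr enter and write the FTP root?
        ls -ld /var/www
        chown -R ftpusr:ftpusr /var/www      # assumption: ftpusr is meant to own this tree
        # Anonymous uploads need a directory the ftp user can write, e.g.:
        mkdir /var/www/uploads && chown ftpusr /var/www/uploads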

  • Why does my Perl CGI script raise an internal server error on Apache?

    - by itcplpl
    I've installed apache2 on Ubuntu 11.04, and localhost is working. I created a simple printenv.pl script and put it in the following directory:

        $ mv printenv.pl /usr/lib/cgi-bin/
        $ chmod +rx /usr/lib/cgi-bin/printenv.pl

    However, when I go to http://127.0.0.1/cgi-bin/printenv.pl, I get a 500 Internal Server Error. I checked the error log at /var/log/apache2, and this is what it says:

        [Mon Oct 24 11:04:25 2011] [error] (13)Permission denied: exec of '/usr/lib/cgi-bin/printenv.pl' failed
        [Mon Oct 24 11:04:25 2011] [error] [client 127.0.0.1] Premature end of script headers: printenv.pl

    Any suggestions on how I can fix this and run CGI scripts on my localhost?
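
    "Permission denied: exec" points at the execute bit or the shebang line rather than Apache's CGI config. A minimal sketch of the usual checks (the script body shown is an illustrative stand-in, not the asker's file):

        # the first line must name the interpreter:
        head -1 /usr/lib/cgi-bin/printenv.pl      # expect: #!/usr/bin/perl
        # make it executable by the Apache user (www-data), not just readable:
        sudo chmod 755 /usr/lib/cgi-bin/printenv.pl

        # a minimal printenv.pl that emits the required header first:
        #   #!/usr/bin/perl
        #   print "Content-type: text/plain\n\n";
        #   print "$_=$ENV{$_}\n" for sort keys %ENV;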

  • C#: Why Decorate When You Can Intercept

    - by James Michael Hare
    We've all heard of the old Decorator Design Pattern, or used it at one time or another, either directly or indirectly. A decorator is a class that wraps a given abstract class or interface and presents the same (or a superset of the) public interface, but "decorated" with additional functionality. As a really simplistic example, consider System.IO.BufferedStream: it is itself a descendant of System.IO.Stream and wraps the given stream with buffering logic while still presenting System.IO.Stream's public interface:

        Stream buffStream = new BufferedStream(rawStream);

    Now, let's take a look at a custom-code example. Let's say that we have a class in our data access layer that retrieves a list of products from a database:

        // a class that handles our CRUD operations for products
        public class ProductDao
        {
            ...

            // a method that would retrieve all available products
            public IEnumerable<Product> GetAvailableProducts()
            {
                var results = new List<Product>();

                // must create the connection
                using (var con = _factory.CreateConnection())
                {
                    con.ConnectionString = _productsConnectionString;
                    con.Open();

                    // create the command
                    using (var cmd = _factory.CreateCommand())
                    {
                        cmd.Connection = con;
                        cmd.CommandText = _getAllProductsStoredProc;
                        cmd.CommandType = CommandType.StoredProcedure;

                        // get a reader and pass back all results
                        using (var reader = cmd.ExecuteReader())
                        {
                            while (reader.Read())
                            {
                                results.Add(new Product
                                {
                                    Name = reader["product_name"].ToString(),
                                    ...
                                });
                            }
                        }
                    }
                }

                return results;
            }
        }

    Yes, you could use EF or any of a myriad of other choices for this sort of thing, but the germane point is that you have some operation that takes a non-trivial amount of time. What if, during the production day, I notice that my application is performing slowly and I want to see how much of that slowness is in the query versus my code? Well, I could easily wrap the logic block in a System.Diagnostics.Stopwatch and log the results to log4net or another logging flavor of choice:

        // a class that handles our CRUD operations for products
        public class ProductDao
        {
            private static readonly ILog _log = LogManager.GetLogger(typeof(ProductDao));
            ...

            // a method that would retrieve all available products
            public IEnumerable<Product> GetAvailableProducts()
            {
                var results = new List<Product>();
                var timer = Stopwatch.StartNew();

                // must create the connection
                using (var con = _factory.CreateConnection())
                {
                    con.ConnectionString = _productsConnectionString;

                    // and all that other DB code...
                    ...
                }

                timer.Stop();

                if (timer.ElapsedMilliseconds > 5000)
                {
                    _log.WarnFormat("Long query in GetAvailableProducts() took {0} ms",
                        timer.ElapsedMilliseconds);
                }

                return results;
            }
        }

    In my eye, this is very ugly. It violates the Single Responsibility Principle (SRP), which says that a class should only ever have one responsibility, where responsibility is often defined as a reason to change.
    This class (and in particular this method) has two reasons to change:

        - if the method of retrieving products changes, and
        - if the method of logging changes.

    Well, we could "simplify" this using the Decorator Design Pattern. If we followed the pattern to the letter, we'd need to create a base decorator that implements the DAO's public interface and forwards to the wrapped instance. So let's assume we break out the ProductDao interface into IProductDao using your refactoring tool of choice (Resharper is great for this). Now ProductDao will implement IProductDao and get rid of all logging logic:

        public class ProductDao : IProductDao
        {
            // this reverts back to the original version except for the interface added
        }

    And we create the base decorator that also implements the interface and forwards all calls (the forwarding method is virtual so that concrete decorators can override it):

        public class ProductDaoDecorator : IProductDao
        {
            private readonly IProductDao _wrappedDao;

            // constructor takes the dao to wrap
            public ProductDaoDecorator(IProductDao wrappedDao)
            {
                _wrappedDao = wrappedDao;
            }

            ...

            // and then all methods just forward their calls
            public virtual IEnumerable<Product> GetAvailableProducts()
            {
                return _wrappedDao.GetAvailableProducts();
            }
        }

    This defines our base decorator; then we can create decorators that add the items of interest, and for any methods we don't decorate, we'll get the default behavior, which just forwards the call to the wrapped instance in the base decorator:

        public class TimedThresholdProductDaoDecorator : ProductDaoDecorator
        {
            private static readonly ILog _log = LogManager.GetLogger(typeof(TimedThresholdProductDaoDecorator));

            public TimedThresholdProductDaoDecorator(IProductDao wrappedDao) :
                base(wrappedDao)
            {
            }

            ...

            public override IEnumerable<Product> GetAvailableProducts()
            {
                var timer = Stopwatch.StartNew();

                var results = base.GetAvailableProducts();

                timer.Stop();

                if (timer.ElapsedMilliseconds > 5000)
                {
                    _log.WarnFormat("Long query in GetAvailableProducts() took {0} ms",
                        timer.ElapsedMilliseconds);
                }

                return results;
            }
        }

    Well, it's a bit better. Now the logging is in its own class, and the database logic is in its own class. But we've essentially multiplied the number of classes: we now have three classes and one interface! Now if you want to do that same logging decoration on all your DAOs, imagine the code bloat! Sure, you can simplify and avoid creating the base decorator, or chuck it all and just inherit directly. But regardless, all of these have the problem of tying the logging logic into the code itself.

    Enter the Interceptors. Things like this to me are a perfect example of when it's good to write an Interceptor using your class library of choice. Sure, you could design your own perfectly generic decorator with delegates and all that, but personally I'm a big fan of Castle's DynamicProxy, which is actually used by many projects, including Moq.
    What DynamicProxy allows you to do is intercept calls into any object by wrapping it on the fly with a proxy that intercepts each method and allows you to add functionality. Essentially, the code would now look like this using DynamicProxy:

        // Note: I like hiding DynamicProxy behind the scenes so users
        // don't have to explicitly add a reference to Castle's libraries.
        public static class TimeThresholdInterceptor
        {
            // Our logging handle
            private static readonly ILog _log = LogManager.GetLogger(typeof(TimeThresholdInterceptor));

            // Handle to Castle's proxy generator
            private static readonly ProxyGenerator _generator = new ProxyGenerator();

            // generic form for those who prefer it
            public static TInterface Create<TInterface>(object target, TimeSpan threshold)
                where TInterface : class
            {
                return (TInterface)Create(typeof(TInterface), target, threshold);
            }

            // form that uses a Type instead
            public static object Create(Type interfaceType, object target, TimeSpan threshold)
            {
                return _generator.CreateInterfaceProxyWithTarget(interfaceType, target,
                    new TimedThreshold(threshold));
            }

            // The interceptor that is created to intercept the interface calls.
            // Hidden as a private inner class so as not to expose Castle's libraries.
            private class TimedThreshold : IInterceptor
            {
                // The threshold as a positive timespan that triggers a log message.
                private readonly TimeSpan _threshold;

                // interceptor constructor
                public TimedThreshold(TimeSpan threshold)
                {
                    _threshold = threshold;
                }

                // Intercept functor for each method invocation
                public void Intercept(IInvocation invocation)
                {
                    // time the method invocation
                    var timer = Stopwatch.StartNew();

                    // the Castle magic that tells the method to go ahead
                    invocation.Proceed();

                    timer.Stop();

                    // check if threshold is exceeded
                    if (timer.Elapsed > _threshold)
                    {
                        _log.WarnFormat("Long execution in {0} took {1} ms",
                            invocation.Method.Name,
                            timer.ElapsedMilliseconds);
                    }
                }
            }
        }

    Yes, it's a bit longer, but notice that:

        - This class ONLY deals with logging long method calls; there are no DAO interface leftovers.
        - This class can be used to time ANY class that has an interface or virtual methods.

    Personally, I like to wrap and hide the usage of DynamicProxy and IInterceptor so that anyone who uses this class doesn't need to know to add a Castle library reference. As far as they are concerned, they're using my interceptor. If I change to a new library when a better one comes along, they're insulated.

    Now, all we have to do to use this is to tell it to wrap our ProductDao, and it does the rest:

        // wraps a new ProductDao with a timing interceptor with a threshold of 5 seconds
        IProductDao dao = TimeThresholdInterceptor.Create<IProductDao>(new ProductDao(), TimeSpan.FromSeconds(5));

    Automatic decoration of all methods! You can even refine the proxy so that it only intercepts certain methods.

    This is ideal for so many things. These are just some of the interceptors we've dreamed up and use:

        - Log parameters and returns of methods to XML for auditing.
        - Block invocations to methods and return a default value (stubbing).
        - Throw an exception if certain methods are called (good for blocking access to deprecated methods).
        - Log entrance and exit of a method and the duration.
        - Log a message if a method takes more than a given time threshold to execute.

    Whether you use DynamicProxy or some other technology, I hope you see the benefits this adds.
    Does it completely eliminate all need for the Decorator pattern? No, there may still be cases where you want to decorate a particular class with functionality that doesn't apply to the world at large. But for all those cases where you are using Decorator to add functionality that's truly generic, I strongly suggest you give this a try!
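
    As a footnote to the claim above that interception works for any class with an interface or virtual methods, here is a hedged sketch (not from the original article) of the class-proxy variant using Castle's CreateClassProxyWithTarget. The LogAllInterceptor name and console logging are illustrative assumptions, and ProductDao's methods would have to be declared virtual for the proxy to intercept them:

        // hypothetical interceptor; Console logging stands in for log4net here
        public class LogAllInterceptor : IInterceptor
        {
            public void Intercept(IInvocation invocation)
            {
                Console.WriteLine("entering " + invocation.Method.Name);
                invocation.Proceed();   // run the intercepted (virtual) method
                Console.WriteLine("leaving " + invocation.Method.Name);
            }
        }

        // a class proxy subclasses ProductDao on the fly and overrides its virtual
        // methods, so no interface is required (ProductDao needs a usable constructor):
        var generator = new ProxyGenerator();
        ProductDao dao = generator.CreateClassProxyWithTarget(new ProductDao(), new LogAllInterceptor());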

  • video and file caching with squid lusca?

    - by moon
    Hello all, I have configured squid (Lusca) on Ubuntu 11.04 and also configured video caching, but the problem is that squid cannot cache videos more than 2 minutes long, and files only up to about 5.xx MB. Here is my config; please guide me on how I can cache longer videos and larger files with squid:

        # PORT and Transparent Option
        http_port 8080 transparent
        server_http11 on
        icp_port 0

        # Cache Directory , modify it according to your system.
        # but first create directory in root by mkdir /cache1
        # and then issue this command chown proxy:proxy /cache1
        # [for ubuntu user is proxy, in Fedora user is SQUID]
        # I have set 500 MB for caching reserved just for caching ,
        # adjust it according to your need.
        # My recommendation is to have one cache_dir per drive. zzz

        #store_dir_select_algorithm round-robin
        cache_dir aufs /cache1 500 16 256
        cache_replacement_policy heap LFUDA
        memory_replacement_policy heap LFUDA

        # If you want to enable DATE time n SQUID Logs,use following
        emulate_httpd_log on
        logformat squid %tl %6tr %>a %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt
        log_fqdn off

        # How much days to keep users access web logs
        # You need to rotate your log files with a cron job. For example:
        # 0 0 * * * /usr/local/squid/bin/squid -k rotate
        logfile_rotate 14
        debug_options ALL,1
        cache_access_log /var/log/squid/access.log
        cache_log /var/log/squid/cache.log
        cache_store_log /var/log/squid/store.log

        #I used DNSAMSQ service for fast dns resolving
        #so install by using "apt-get install dnsmasq" first
        dns_nameservers 127.0.0.1 101.11.11.5
        ftp_user anonymous@
        ftp_list_width 32
        ftp_passive on
        ftp_sanitycheck on

        #ACL Section
        acl all src 0.0.0.0/0.0.0.0
        acl manager proto cache_object
        acl localhost src 127.0.0.1/255.255.255.255
        acl to_localhost dst 127.0.0.0/8
        acl SSL_ports port 443 563      # https, snews
        acl SSL_ports port 873          # rsync
        acl Safe_ports port 80          # http
        acl Safe_ports port 21          # ftp
        acl Safe_ports port 443 563     # https, snews
        acl Safe_ports port 70          # gopher
        acl Safe_ports port 210         # wais
        acl Safe_ports port 1025-65535  # unregistered ports
        acl Safe_ports port 280         # http-mgmt
        acl Safe_ports port 488         # gss-http
        acl Safe_ports port 591         # filemaker
        acl Safe_ports port 777         # multiling http
        acl Safe_ports port 631         # cups
        acl Safe_ports port 873         # rsync
        acl Safe_ports port 901         # SWAT
        acl purge method PURGE
        acl CONNECT method CONNECT
        http_access allow manager localhost
        http_access deny manager
        http_access allow purge localhost
        http_access deny purge
        http_access deny !Safe_ports
        http_access deny CONNECT !SSL_ports
        http_access allow localhost
        http_access allow all
        http_reply_access allow all
        icp_access allow all

        #==========================
        # Administrative Parameters
        #==========================

        # I used UBUNTU so user is proxy, in FEDORA you may use squid
        cache_effective_user proxy
        cache_effective_group proxy
        cache_mgr [email protected]
        visible_hostname proxy.aacable.net
        unique_hostname [email protected]

        #=============
        # ACCELERATOR
        #=============
        half_closed_clients off
        quick_abort_min 0 KB
        quick_abort_max 0 KB
        vary_ignore_expire on
        reload_into_ims on
        log_fqdn off
        memory_pools off

        # If you want to hide your proxy machine from being detected at various site use following
        via off

        #============================================
        # OPTIONS WHICH AFFECT THE CACHE SIZE / zaib
        #============================================
        # If you have 4GB memory in Squid box, we will use formula of 1/3
        # You can adjust it according to your need. IF squid is taking too much of RAM
        # Then decrease it to 128 MB or even less.

        cache_mem 256 MB
        minimum_object_size 512 bytes
        maximum_object_size 500 MB
        maximum_object_size_in_memory 128 KB

        #============================================================
        # SNMP , if you want to generate graphs for SQUID via MRTG
        #============================================================
        #acl snmppublic snmp_community gl
        #snmp_port 3401
        #snmp_access allow snmppublic all
        #snmp_access allow all

        #============================================================
        # ZPH , To enable cache content to be delivered at full lan speed,
        # To bypass the queue at MT.
        #============================================================
        tcp_outgoing_tos 0x30 all
        zph_mode tos
        zph_local 0x30
        zph_parent 0
        zph_option 136

        # Caching Youtube
        acl videocache_allow_url url_regex -i \.youtube\.com\/get_video\?
        acl videocache_allow_url url_regex -i \.youtube\.com\/videoplayback \.youtube\.com\/videoplay \.youtube\.com\/get_video\?
        acl videocache_allow_url url_regex -i \.youtube\.[a-z][a-z]\/videoplayback \.youtube\.[a-z][a-z]\/videoplay \.youtube\.[a-z][a-z]\/get_video\?
        acl videocache_allow_url url_regex -i \.googlevideo\.com\/videoplayback \.googlevideo\.com\/videoplay \.googlevideo\.com\/get_video\?
        acl videocache_allow_url url_regex -i \.google\.com\/videoplayback \.google\.com\/videoplay \.google\.com\/get_video\?
        acl videocache_allow_url url_regex -i \.google\.[a-z][a-z]\/videoplayback \.google\.[a-z][a-z]\/videoplay \.google\.[a-z][a-z]\/get_video\?
        acl videocache_allow_url url_regex -i proxy[a-z0-9\-][a-z0-9][a-z0-9][a-z0-9]?\.dailymotion\.com\/
        acl videocache_allow_url url_regex -i vid\.akm\.dailymotion\.com\/
        acl videocache_allow_url url_regex -i [a-z0-9][0-9a-z][0-9a-z]?[0-9a-z]?[0-9a-z]?\.xtube\.com\/(.*)flv
        acl videocache_allow_url url_regex -i \.vimeo\.com\/(.*)\.(flv|mp4)
        acl videocache_allow_url url_regex -i va\.wrzuta\.pl\/wa[0-9][0-9][0-9][0-9]?
        acl videocache_allow_url url_regex -i \.youporn\.com\/(.*)\.flv
        acl videocache_allow_url url_regex -i \.msn\.com\.edgesuite\.net\/(.*)\.flv
        acl videocache_allow_url url_regex -i \.tube8\.com\/(.*)\.(flv|3gp)
        acl videocache_allow_url url_regex -i \.mais\.uol\.com\.br\/(.*)\.flv
        acl videocache_allow_url url_regex -i \.blip\.tv\/(.*)\.(flv|avi|mov|mp3|m4v|mp4|wmv|rm|ram|m4v)
        acl videocache_allow_url url_regex -i \.apniisp\.com\/(.*)\.(flv|avi|mov|mp3|m4v|mp4|wmv|rm|ram|m4v)
        acl videocache_allow_url url_regex -i \.break\.com\/(.*)\.(flv|mp4)
        acl videocache_allow_url url_regex -i redtube\.com\/(.*)\.flv
        acl videocache_allow_dom dstdomain .mccont.com .metacafe.com .cdn.dailymotion.com
        acl videocache_deny_dom dstdomain .download.youporn.com .static.blip.tv
        acl dontrewrite url_regex redbot\.org \.php
        acl getmethod method GET

        storeurl_access deny dontrewrite
        storeurl_access deny !getmethod
        storeurl_access deny videocache_deny_dom
        storeurl_access allow videocache_allow_url
        storeurl_access allow videocache_allow_dom
        storeurl_access deny all

        storeurl_rewrite_program /etc/squid/storeurl.pl
        storeurl_rewrite_children 7
        storeurl_rewrite_concurrency 10

        acl store_rewrite_list urlpath_regex -i \/(get_video\?|videodownload\?|videoplayback.*id)
        acl store_rewrite_list urlpath_regex -i \.flv$ \.mp3$ \.mp4$ \.swf$ \
        storeurl_access allow store_rewrite_list
        storeurl_access deny all

        refresh_pattern -i \.flv$ 10080 80% 10080 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private ignore-auth
        refresh_pattern -i \.mp3$ 10080 80% 10080 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private ignore-auth
        refresh_pattern -i \.mp4$ 10080 80% 10080 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private ignore-auth
        refresh_pattern -i \.swf$ 10080 80% 10080 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private ignore-auth
        refresh_pattern -i \.gif$ 10080 80% 10080 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private ignore-auth
        refresh_pattern -i \.jpg$ 10080 80% 10080 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private ignore-auth
        refresh_pattern -i \.jpeg$ 10080 80% 10080 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private ignore-auth
        refresh_pattern -i \.exe$ 10080 80% 10080 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private ignore-auth

        # 1 year = 525600 mins, 1 month = 10080 mins, 1 day = 1440
        refresh_pattern (get_video\?|videoplayback\?|videodownload\?|\.flv?) 10080 80% 10080 ignore-no-cache ignore-private override-expire override-lastmod reload-into-ims
        refresh_pattern (get_video\?|videoplayback\?id|videoplayback.*id|videodownload\?|\.flv?) 10080 80% 10080 ignore-no-cache ignore-private override-expire override-lastmod reload-into-ims
        refresh_pattern \.(ico|video-stats) 10080 80% 10080 override-expire ignore-reload ignore-no-cache ignore-private ignore-auth override-lastmod negative-ttl=10080
        refresh_pattern \.etology\? 10080 80% 10080 override-expire ignore-reload ignore-no-cache
        refresh_pattern galleries\.video(\?|sz) 10080 80% 10080 override-expire ignore-reload ignore-no-cache
        refresh_pattern brazzers\? 10080 80% 10080 override-expire ignore-reload ignore-no-cache
        refresh_pattern \.adtology\? 10080 80% 10080 override-expire ignore-reload ignore-no-cache
        refresh_pattern ^.*(utm\.gif|ads\?|rmxads\.com|ad\.z5x\.net|bh\.contextweb\.com|bstats\.adbrite\.com|a1\.interclick\.com|ad\.trafficmp\.com|ads\.cubics\.com|ad\.xtendmedia\.com|\.googlesyndication\.com|advertising\.com|yieldmanager|game-advertising\.com|pixel\.quantserve\.com|adperium\.com|doubleclick\.net|adserving\.cpxinteractive\.com|syndication\.com|media.fastclick.net).* 10080 20% 10080 ignore-no-cache ignore-private override-expire ignore-reload ignore-auth negative-ttl=40320 max-stale=10
        refresh_pattern ^.*safebrowsing.*google 10080 80% 10080 override-expire ignore-reload ignore-no-cache ignore-private ignore-auth negative-ttl=10080
        refresh_pattern ^http://((cbk|mt|khm|mlt)[0-9]?)\.google\.co(m|\.uk) 10080 80% 10080 override-expire ignore-reload ignore-private negative-ttl=10080
        refresh_pattern ytimg\.com.*\.jpg 10080 80% 10080 override-expire ignore-reload
        refresh_pattern images\.friendster\.com.*\.(png|gif) 10080 80% 10080 override-expire ignore-reload
        refresh_pattern garena\.com 10080 80% 10080 override-expire reload-into-ims
        refresh_pattern photobucket.*\.(jp(e?g|e|2)|tiff?|bmp|gif|png) 10080 80% 10080 override-expire ignore-reload
        refresh_pattern vid\.akm\.dailymotion\.com.*\.on2\? 10080 80% 10080 ignore-no-cache override-expire override-lastmod
        refresh_pattern mediafire.com\/images.*\.(jp(e?g|e|2)|tiff?|bmp|gif|png) 10080 80% 10080 reload-into-ims override-expire ignore-private
        refresh_pattern ^http:\/\/images|pics|thumbs[0-9]\. 10080 80% 10080 reload-into-ims ignore-no-cache ignore-reload override-expire
        refresh_pattern ^http:\/\/www.onemanga.com.*\/ 10080 80% 10080 reload-into-ims ignore-no-cache ignore-reload override-expire
        refresh_pattern ^http://v\.okezone\.com/get_video\/([a-zA-Z0-9]) 10080 80% 10080 override-expire ignore-reload ignore-no-cache ignore-private ignore-auth override-lastmod negative-ttl=10080

        #images facebook
        refresh_pattern -i \.facebook.com.*\.(jpg|png|gif) 10080 80% 10080 ignore-reload override-expire ignore-no-cache
        refresh_pattern -i \.fbcdn.net.*\.(jpg|gif|png|swf|mp3) 10080 80% 10080 ignore-reload override-expire ignore-no-cache
        refresh_pattern static\.ak\.fbcdn\.net*\.(jpg|gif|png) 10080 80% 10080 ignore-reload override-expire ignore-no-cache
        refresh_pattern ^http:\/\/profile\.ak\.fbcdn.net*\.(jpg|gif|png) 10080 80% 10080 ignore-reload override-expire ignore-no-cache

        #All File
        refresh_pattern -i \.(3gp|7z|ace|asx|bin|deb|divx|dvr-ms|ram|rpm|exe|inc|cab|qt) 10080 80% 10080 ignore-no-cache override-expire override-lastmod reload-into-ims
        refresh_pattern -i \.(rar|jar|gz|tgz|bz2|iso|m1v|m2(v|p)|mo(d|v)|arj|lha|lzh|zip|tar) 10080 80% 10080 ignore-no-cache override-expire override-lastmod reload-into-ims
        refresh_pattern -i \.(jp(e?g|e|2)|gif|pn[pg]|bm?|tiff?|ico|swf|dat|ad|txt|dll) 10080 80% 10080 ignore-no-cache override-expire override-lastmod reload-into-ims
        refresh_pattern -i \.(avi|ac4|mp(e?g|a|e|1|2|3|4)|mk(a|v)|ms(i|u|p)|og(x|v|a|g)|rm|r(a|p)m|snd|vob) 10080 80% 10080 ignore-no-cache override-expire override-lastmod reload-into-ims
        refresh_pattern -i \.(pp(t?x)|s|t)|pdf|rtf|wax|wm(a|v)|wmx|wpl|cb(r|z|t)|xl(s?x)|do(c?x)|flv|x-flv) 10080 80% 10080 ignore-no-cache override-expire override-lastmod reload-into-ims

        refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
        refresh_pattern ^gopher: 1440 0% 1440
        refresh_pattern ^ftp: 10080 95% 10080 override-lastmod reload-into-ims
        refresh_pattern . 1440 95% 10080 override-lastmod reload-into-ims
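
    One hedged observation about the size ceiling in the question (worth verifying against the squid 2.7/Lusca documentation): squid's default maximum_object_size is 4 MB, and a cache_dir line picks up the maximum object size in effect at the moment it is parsed. In this config, cache_dir appears far above maximum_object_size 500 MB, which would explain a ceiling of a few MB. A sketch of the reordering:

        # Declare size limits BEFORE the cache_dir so the 500 MB limit actually applies:
        maximum_object_size 500 MB
        cache_dir aufs /cache1 500 16 256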

  • Trace flags - TF 1117

    - by Damian
    I had a session about trace flags this year at the SQL Day 2014 conference, which was held in Wroclaw at the end of April. The session topic is important to most DBAs, and the reason I did it was that I sometimes forget about various trace flags :). So I decided to prepare a presentation, but I think it is a good idea to write posts about trace flags, too. Let's start then - today I will describe TF 1117. I assume that we all know how to set up a TF using startup parameters, the registry, or at the session or query level. I will always write whether a trace flag is local or global, to make sure we know how to use it.

    Why do we need this trace flag? Let's create a test database first. This is a quite ordinary database: it has two data files (4 MB each) and a log file of 1 MB. The data files can expand by 1 MB at a time, and the log file grows by 10%:

        USE [master]
        GO
        CREATE DATABASE [TF1117]
         ON PRIMARY
        ( NAME = N'TF1117',
          FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\TF1117.mdf',
          SIZE = 4096KB,
          MAXSIZE = UNLIMITED,
          FILEGROWTH = 1024KB ),
        ( NAME = N'TF1117_1',
          FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\TF1117_1.ndf',
          SIZE = 4096KB,
          MAXSIZE = UNLIMITED,
          FILEGROWTH = 1024KB )
         LOG ON
        ( NAME = N'TF1117_log',
          FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\TF1117_log.ldf',
          SIZE = 1024KB,
          MAXSIZE = 2048GB,
          FILEGROWTH = 10% )
        GO

    Without TF 1117 turned on, the data files don't all grow at once. When the first file is full, SQL Server expands it, but the other file is not expanded until it is full too. Why is that so important? The SQL Server proportional fill algorithm directs new extent allocations to the file with the most available space, so new extents will be written to the file that was just expanded. When TF 1117 is enabled, it causes all files to auto-grow by their specified increment. That means all files keep the same percentage of free space, so we still have the benefit of evenly distributed IO. TF 1117 is a global flag, so it affects all databases on the instance. Of course, if a filegroup contains only one file, the TF has no effect on it.

    Now let's do a simple test. First let's create a table in which every row fits in a single page. The table definition is pretty simple: it has two integer columns and one character column of fixed size 8000 bytes:

        create table TF1117Tab
        (
            col1 int,
            col2 int,
            col3 char (8000)
        )
        go

    Now I load some data into the table to make sure that one of the data files must grow:

        declare @i int
        select @i = 1
        while (@i < 800)
        begin
            insert into TF1117Tab values (@i, @i+1000, 'hello')
            select @i = @i + 1
        end

    I can check the actual file sizes in the sys.database_files catalog view:

        SELECT name, (size*8)/1024 'Size in MB'
        FROM sys.database_files
        GO

    As you can see, only the first data file was expanded, and the other still has its initial size:

        name                  Size in MB
        --------------------- -----------
        TF1117                5
        TF1117_log            1
        TF1117_1              4

    There are also other methods of looking at file autogrow events.
    One possibility is to create an Extended Events session, and the other is to look into the default trace file:

        DECLARE @path NVARCHAR(260);

        SELECT @path = REVERSE(SUBSTRING(REVERSE([path]),
               CHARINDEX('\', REVERSE([path])), 260)) + N'log.trc'
        FROM   sys.traces
        WHERE  is_default = 1;

        SELECT DatabaseName,
               [FileName],
               SPID,
               Duration,
               StartTime,
               EndTime,
               FileType = CASE EventClass
                              WHEN 92 THEN 'Data'
                              WHEN 93 THEN 'Log'
                          END
        FROM sys.fn_trace_gettable(@path, DEFAULT)
        WHERE EventClass IN (92,93)
          AND StartTime > '2014-07-12'
          AND DatabaseName = N'TF1117'
        ORDER BY StartTime DESC;

    After running the query I can see that the file was expanded, and how long the process took, which might be useful from a performance perspective.

    Now it's time to turn on flag 1117 (with -1 so that it is enabled globally, which is how this flag operates):

        DBCC TRACEON (1117, -1)

    I dropped the database and recreated it once again. Then I ran the queries and observed the results. After loading the records I see that both files were evenly expanded:

        name                  Size in MB
        --------------------- -----------
        TF1117                5
        TF1117_log            1
        TF1117_1              5

    I also found the information in the default trace. The query returned three rows. The last one is connected to my first experiment, when the TF was turned off. The other two rows show that the first file was expanded by 1 MB and, right after that operation, the second file was expanded too. That is what this TF is all about. :)
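
    A closing note that is not from the original post but follows from later documentation: starting with SQL Server 2016 this trace flag no longer has any effect, and the same even-growth behavior is configured per filegroup instead:

        -- SQL Server 2016 and later: per-filegroup equivalent of TF 1117
        ALTER DATABASE [TF1117] MODIFY FILEGROUP [PRIMARY] AUTOGROW_ALL_FILES;
        -- revert to the default single-file growth:
        ALTER DATABASE [TF1117] MODIFY FILEGROUP [PRIMARY] AUTOGROW_SINGLE_FILE;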

  • How to load Xmir on Ubuntu Touch Saucy

    - by jaro123
    How do I load Xmir on an Ubuntu 13.10 Nexus 7 tablet (unity8 + lightdm + Mir, with unity-system-compositor + Xorg installed)? I am still getting the message below in /var/log/Xorg.0.log after a system reboot:

        [ 81.542] (II) "glx" will be loaded by default.
        [ 81.542] (WW) "xmir" is not to be loaded by default. Skipping.
        [ 81.542] (II) LoadModule: "dri2"

    Thanks. Note: no, this question is not a duplicate of "How to load Xmir on Ubuntu 13.10", because my note that this does not work on Ubuntu Touch Saucy on a Nexus 7 tablet was removed by fossfreedom. I do feel it is close to a duplicate, but this is a moderator issue: there is no way to respond on the equivalent topic, even when it does not obviously solve the problem. I consider this confusing, and it leads to incomplete answers.

  • apache fails to connect to tomcat (Worker config?)

    - by techventure
    I have Tomcat 6 with the following in server.xml:

        <Connector port="8253" maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
                   enableLookups="false" redirectPort="8445" acceptCount="100" debug="0"
                   connectionTimeout="20000" disableUploadTimeout="true" />
        <Connector port="8014" protocol="AJP/1.3" redirectPort="8445" />

    and added the following in worker.properties:

        # Set properties for worker4 (ajp13)
        worker.worker4.type=ajp13
        worker.worker4.host=localhost
        worker.worker4.port=8014

    and I put in httpd.conf:

        JkMount /myWebApp/* worker4

    It is not working, as trying to navigate to www1.myCompany.com/myWebApp gives "Service Temporarily Unavailable". I checked in Tomcat's catalina.out and it says:

        INFO: JK: ajp13 listening on /0.0.0.0:8014

    UPDATE: I set the mod_jk log level to debug and below is the result: [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] jk_set_time_fmt::jk_util.c (458): Pre-processed log time stamp format is '[%a %b %d %H:%M:%S %Y] ' [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] uri_worker_map_open::jk_uri_worker_map.c (770): rule map size is 8 [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] uri_worker_map_add::jk_uri_worker_map.c (720): wildchar rule '/myWebApp/*=worker4' source 'JkMount' was added [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] uri_worker_map_dump::jk_uri_worker_map.c (171): uri map dump after map open: index=0 file='(null)' reject_unsafe=0 reload=60 modified=0 checked=0 [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] uri_worker_map_dump::jk_uri_worker_map.c (176): generation 0: size=0 nosize=0 capacity=0 [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] uri_worker_map_dump::jk_uri_worker_map.c (176): generation 1: size=8 nosize=0 capacity=8 [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] uri_worker_map_dump::jk_uri_worker_map.c (186): NEXT (1) map #3: uri=/myWebApp/* worker=worker4 context=/myWebApp/* source=JkMount type=Wildchar len=6 [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] jk_set_time_fmt::jk_util.c (458): Pre-processed log time stamp format is '[%a %b %d %H:%M:%S %Y] ' [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] init_jk::mod_jk.c (3123): Setting default connection pool max size to 1 [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] jk_map_read_property::jk_map.c (491): Adding property 'worker.list' with value 'worker1,worker2,worker3,worker4' to map. [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] jk_map_read_property::jk_map.c (491): Adding property 'worker.worker4.type' with value 'ajp13' to map. [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] jk_map_read_property::jk_map.c (491): Adding property 'worker.worker4.host' with value 'localhost' to map. [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] jk_map_read_property::jk_map.c (491): Adding property 'worker.worker4.port' with value '8014' to map. [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] jk_map_resolve_references::jk_map.c (774): Checking for references with prefix worker. with wildcard (recursion 1) [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] jk_shm_calculate_size::jk_shm.c (132): shared memory will contain 4 ajp workers of size 256 and 0 lb workers of size 320 with 0 members of size 320+256 [Wed Jun 13 18:44:26 2012] [9552:3086317328] [error] init_jk::mod_jk.c (3166): Initializing shm:/var/log/httpd/mod_jk.shm.9552 errno=13. Load balancing workers will not function properly.
[Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] jk_map_dump::jk_map.c (589): Dump of map: 'ServerRoot' -> '/etc/httpd' [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] jk_map_dump::jk_map.c (589): Dump of map: 'worker.list' -> 'worker1,worker2,worker3,worker4' [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] jk_map_dump::jk_map.c (589): Dump of map: 'worker.worker1.type' -> 'ajp13' [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] jk_map_dump::jk_map.c (589): Dump of map: 'worker.worker1.host' -> 'localhost' [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] jk_map_dump::jk_map.c (589): Dump of map: 'worker.worker1.port' -> '8009' [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] jk_map_dump::jk_map.c (589): Dump of map: 'worker.worker2.type' -> 'ajp13' [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] jk_map_dump::jk_map.c (589): Dump of map: 'worker.worker2.host' -> 'localhost' [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] jk_map_dump::jk_map.c (589): Dump of map: 'worker.worker2.port' -> '8010' [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] jk_map_dump::jk_map.c (589): Dump of map: 'worker.worker3.type' -> 'ajp13' [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] jk_map_dump::jk_map.c (589): Dump of map: 'worker.worker3.host' -> 'localhost' [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] jk_map_dump::jk_map.c (589): Dump of map: 'worker.worker3.port' -> '8112' [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] jk_map_dump::jk_map.c (589): Dump of map: 'worker.worker4.type' -> 'ajp13' [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] jk_map_dump::jk_map.c (589): Dump of map: 'worker.worker4.host' -> 'localhost' [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] jk_map_dump::jk_map.c (589): Dump of map: 'worker.worker4.port' -> '8014' [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] build_worker_map::jk_worker.c (242): creating worker worker4 [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] wc_create_worker::jk_worker.c (146): about to create instance worker4 of ajp13 [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] wc_create_worker::jk_worker.c (159): about to validate and init worker4 [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] ajp_validate::jk_ajp_common.c (2512): worker worker4 contact is 'localhost:8014' [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] ajp_init::jk_ajp_common.c (2699): setting endpoint options: [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] ajp_init::jk_ajp_common.c (2702): keepalive: 0 [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] ajp_init::jk_ajp_common.c (2706): socket timeout: 0 [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] ajp_init::jk_ajp_common.c (2710): socket connect timeout: 0 [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] ajp_init::jk_ajp_common.c (2714): buffer size: 0 [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] ajp_init::jk_ajp_common.c (2718): pool timeout: 0 [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] ajp_init::jk_ajp_common.c (2722): ping timeout: 10000 [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] ajp_init::jk_ajp_common.c (2726): connect timeout: 0 [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] ajp_init::jk_ajp_common.c (2730): reply timeout: 0 [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] ajp_init::jk_ajp_common.c (2734): prepost timeout: 0 [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] ajp_init::jk_ajp_common.c (2738): recovery options: 0 [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] 
ajp_init::jk_ajp_common.c (2742): retries: 2 [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] ajp_init::jk_ajp_common.c (2746): max packet size: 8192 [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] ajp_init::jk_ajp_common.c (2750): retry interval: 100 [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] ajp_create_endpoint_cache::jk_ajp_common.c (2562): setting connection pool size to 1 with min 1 and acquire timeout 200 [Wed Jun 13 18:44:26 2012] [9552:3086317328] [info] init_jk::mod_jk.c (3183): mod_jk/1.2.28 initialized [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] wc_get_worker_for_name::jk_worker.c (116): found a worker worker4 [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] wc_get_name_for_type::jk_worker.c (293): Found worker type 'ajp13' [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] uri_worker_map_ext::jk_uri_worker_map.c (512): Checking extension for worker 3: worker4 of type ajp13 (2) [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] uri_worker_map_dump::jk_uri_worker_map.c (171): uri map dump after extension stripping: index=0 file='(null)' reject_unsafe=0 reload=60 modified=0 checked=0 [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] uri_worker_map_dump::jk_uri_worker_map.c (176): generation 0: size=0 nosize=0 capacity=0 [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] uri_worker_map_dump::jk_uri_worker_map.c (176): generation 1: size=8 nosize=0 capacity=8 [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] uri_worker_map_dump::jk_uri_worker_map.c (186): NEXT (1) map #3: uri=/myWebApp/* worker=worker4 context=/myWebApp/* source=JkMount type=Wildchar len=6 [Wed Jun 13 18:44:26 2012] [9552:3086317328] [debug] uri_worker_map_switch::jk_uri_worker_map.c (482): Switching uri worker map from index 0 to index 1 [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] jk_set_time_fmt::jk_util.c (458): Pre-processed log time stamp format is '[%a %b %d %H:%M:%S %Y] ' [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] uri_worker_map_open::jk_uri_worker_map.c (770): rule map size is 8 [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] uri_worker_map_add::jk_uri_worker_map.c (720): wildchar rule '/myWebApp/*=worker4' source 'JkMount' was added [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] uri_worker_map_dump::jk_uri_worker_map.c (171): uri map dump after map open: index=0 file='(null)' reject_unsafe=0 reload=60 modified=0 checked=0 [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] uri_worker_map_dump::jk_uri_worker_map.c (176): generation 0: size=0 nosize=0 capacity=0 [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] uri_worker_map_dump::jk_uri_worker_map.c (176): generation 1: size=8 nosize=0 capacity=8 [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] uri_worker_map_dump::jk_uri_worker_map.c (186): NEXT (1) map #0: uri=/jsp-examples/* worker=worker1 context=/jsp-examples/* source=JkMount type=Wildchar len=15 [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] uri_worker_map_dump::jk_uri_worker_map.c (186): NEXT (1) map #3: uri=/myWebApp/* worker=worker4 context=/myWebApp/* source=JkMount type=Wildchar len=6 [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] jk_set_time_fmt::jk_util.c (458): Pre-processed log time stamp format is '[%a %b %d %H:%M:%S %Y] ' [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] init_jk::mod_jk.c (3123): Setting default connection pool max size to 1 [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] jk_map_read_property::jk_map.c (491): Adding property 'worker.list' with value 
'worker1,worker2,worker3,worker4' to map. [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] jk_map_read_property::jk_map.c (491): Adding property 'worker.worker4.type' with value 'ajp13' to map. [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] jk_map_read_property::jk_map.c (491): Adding property 'worker.worker4.host' with value 'localhost' to map. [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] jk_map_read_property::jk_map.c (491): Adding property 'worker.worker4.port' with value '8014' to map. [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] jk_map_resolve_references::jk_map.c (774): Checking for references with prefix worker. with wildcard (recursion 1) [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] jk_shm_calculate_size::jk_shm.c (132): shared memory will contain 4 ajp workers of size 256 and 0 lb workers of size 320 with 0 members of size 320+256 [Wed Jun 13 18:44:26 2012] [9553:3086317328] [error] init_jk::mod_jk.c (3166): Initializing shm:/var/log/httpd/mod_jk.shm.9553 errno=13. Load balancing workers will not function properly. [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] jk_map_dump::jk_map.c (589): Dump of map: 'ServerRoot' -> '/etc/httpd' [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] jk_map_dump::jk_map.c (589): Dump of map: 'worker.list' -> 'worker1,worker2,worker3,worker4' [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] jk_map_dump::jk_map.c (589): Dump of map: 'worker.worker1.type' -> 'ajp13' [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] jk_map_dump::jk_map.c (589): Dump of map: 'worker.worker1.host' -> 'localhost' [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] jk_map_dump::jk_map.c (589): Dump of map: 'worker.worker1.port' -> '8009' [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] jk_map_dump::jk_map.c (589): Dump of map: 'worker.worker2.type' -> 'ajp13' [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] jk_map_dump::jk_map.c (589): Dump of map: 'worker.worker2.host' -> 'localhost' [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] jk_map_dump::jk_map.c (589): Dump of map: 'worker.worker2.port' -> '8010' [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] jk_map_dump::jk_map.c (589): Dump of map: 'worker.worker3.type' -> 'ajp13' [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] jk_map_dump::jk_map.c (589): Dump of map: 'worker.worker3.host' -> 'localhost' [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] jk_map_dump::jk_map.c (589): Dump of map: 'worker.worker3.port' -> '8112' [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] jk_map_dump::jk_map.c (589): Dump of map: 'worker.worker4.type' -> 'ajp13' [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] jk_map_dump::jk_map.c (589): Dump of map: 'worker.worker4.host' -> 'localhost' [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] jk_map_dump::jk_map.c (589): Dump of map: 'worker.worker4.port' -> '8014' [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] build_worker_map::jk_worker.c (242): creating worker worker4 [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] wc_create_worker::jk_worker.c (146): about to create instance worker4 of ajp13 [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] wc_create_worker::jk_worker.c (159): about to validate and init worker4 [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] ajp_validate::jk_ajp_common.c (2512): worker worker4 contact is 'localhost:8014' [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] ajp_init::jk_ajp_common.c (2699): setting endpoint options: [Wed Jun 13 18:44:26 2012] [9553:3086317328] 
[debug] ajp_init::jk_ajp_common.c (2702): keepalive: 0 [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] ajp_init::jk_ajp_common.c (2706): socket timeout: 0 [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] ajp_init::jk_ajp_common.c (2710): socket connect timeout: 0 [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] ajp_init::jk_ajp_common.c (2714): buffer size: 0 [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] ajp_init::jk_ajp_common.c (2718): pool timeout: 0 [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] ajp_init::jk_ajp_common.c (2722): ping timeout: 10000 [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] ajp_init::jk_ajp_common.c (2726): connect timeout: 0 [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] ajp_init::jk_ajp_common.c (2730): reply timeout: 0 [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] ajp_init::jk_ajp_common.c (2734): prepost timeout: 0 [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] ajp_init::jk_ajp_common.c (2738): recovery options: 0 [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] ajp_init::jk_ajp_common.c (2742): retries: 2 [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] ajp_init::jk_ajp_common.c (2746): max packet size: 8192 [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] ajp_init::jk_ajp_common.c (2750): retry interval: 100 [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] ajp_create_endpoint_cache::jk_ajp_common.c (2562): setting connection pool size to 1 with min 1 and acquire timeout 200 [Wed Jun 13 18:44:26 2012] [9553:3086317328] [info] init_jk::mod_jk.c (3183): mod_jk/1.2.28 initialized [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] wc_get_worker_for_name::jk_worker.c (116): found a worker worker4 [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] wc_get_name_for_type::jk_worker.c (293): Found worker type 'ajp13' [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] uri_worker_map_ext::jk_uri_worker_map.c (512): Checking extension for worker 3: worker4 of type ajp13 (2) [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] uri_worker_map_dump::jk_uri_worker_map.c (171): uri map dump after extension stripping: index=0 file='(null)' reject_unsafe=0 reload=60 modified=0 checked=0 [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] uri_worker_map_dump::jk_uri_worker_map.c (176): generation 0: size=0 nosize=0 capacity=0 [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] uri_worker_map_dump::jk_uri_worker_map.c (176): generation 1: size=8 nosize=0 capacity=8 [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] uri_worker_map_dump::jk_uri_worker_map.c (186): NEXT (1) map #3: uri=/myWebApp/* worker=worker4 context=/myWebApp/* source=JkMount type=Wildchar len=6 [Wed Jun 13 18:44:26 2012] [9553:3086317328] [debug] uri_worker_map_switch::jk_uri_worker_map.c (482): Switching uri worker map from index 0 to index 1 [Wed Jun 13 18:44:26 2012] [9555:3086317328] [debug] jk_child_init::mod_jk.c (3068): Initialized mod_jk/1.2.28 [Wed Jun 13 18:44:26 2012] [9556:3086317328] [debug] jk_child_init::mod_jk.c (3068): Initialized mod_jk/1.2.28 [Wed Jun 13 18:44:26 2012] [9557:3086317328] [debug] jk_child_init::mod_jk.c (3068): Initialized mod_jk/1.2.28 [Wed Jun 13 18:44:26 2012] [9558:3086317328] [debug] jk_child_init::mod_jk.c (3068): Initialized mod_jk/1.2.28 [Wed Jun 13 18:44:26 2012] [9559:3086317328] [debug] jk_child_init::mod_jk.c (3068): Initialized mod_jk/1.2.28 [Wed Jun 13 18:44:26 2012] [9560:3086317328] [debug] jk_child_init::mod_jk.c (3068): Initialized mod_jk/1.2.28 [Wed Jun 13 18:44:26 2012] 
[9561:3086317328] [debug] jk_child_init::mod_jk.c (3068): Initialized mod_jk/1.2.28 [Wed Jun 13 18:44:26 2012] [9562:3086317328] [debug] jk_child_init::mod_jk.c (3068): Initialized mod_jk/1.2.28 [Wed Jun 13 18:44:26 2012] [9563:3086317328] [debug] jk_child_init::mod_jk.c (3068): Initialized mod_jk/1.2.28 [Wed Jun 13 18:44:26 2012] [9564:3086317328] [debug] jk_child_init::mod_jk.c (3068): Initialized mod_jk/1.2.28 [Wed Jun 13 18:44:26 2012] [9565:3086317328] [debug] jk_child_init::mod_jk.c (3068): Initialized mod_jk/1.2.28 [Wed Jun 13 18:44:26 2012] [9567:3086317328] [debug] jk_child_init::mod_jk.c (3068): Initialized mod_jk/1.2.28 [Wed Jun 13 18:44:26 2012] [9568:3086317328] [debug] jk_child_init::mod_jk.c (3068): Initialized mod_jk/1.2.28 [Wed Jun 13 18:44:26 2012] [9566:3086317328] [debug] jk_child_init::mod_jk.c (3068): Initialized mod_jk/1.2.28 [Wed Jun 13 18:44:26 2012] [9569:3086317328] [debug] jk_child_init::mod_jk.c (3068): Initialized mod_jk/1.2.28 [Wed Jun 13 18:44:26 2012] [9570:3086317328] [debug] jk_child_init::mod_jk.c (3068): Initialized mod_jk/1.2.28 [Wed Jun 13 18:44:54 2012] [9555:3086317328] [debug] map_uri_to_worker_ext::jk_uri_worker_map.c (1036): Attempting to map URI '/myWebApp/jsp/login.faces' from 8 maps [Wed Jun 13 18:44:54 2012] [9555:3086317328] [debug] find_match::jk_uri_worker_map.c (850): Attempting to map context URI '/myWebApp/*=worker4' source 'JkMount' [Wed Jun 13 18:44:54 2012] [9555:3086317328] [debug] find_match::jk_uri_worker_map.c (863): Found a wildchar match '/myWebApp/*=worker4' [Wed Jun 13 18:44:54 2012] [9555:3086317328] [debug] jk_handler::mod_jk.c (2459): Into handler jakarta-servlet worker=worker4 r->proxyreq=0 [Wed Jun 13 18:44:54 2012] [9555:3086317328] [debug] wc_get_worker_for_name::jk_worker.c (116): found a worker worker4 [Wed Jun 13 18:44:54 2012] [9555:3086317328] [debug] wc_maintain::jk_worker.c (339): Maintaining worker worker1 [Wed Jun 13 18:44:54 2012] [9555:3086317328] [debug] wc_maintain::jk_worker.c (339): Maintaining worker worker2 [Wed Jun 13 18:44:54 2012] [9555:3086317328] [debug] wc_maintain::jk_worker.c (339): Maintaining worker worker3 [Wed Jun 13 18:44:54 2012] [9555:3086317328] [debug] wc_maintain::jk_worker.c (339): Maintaining worker worker4 [Wed Jun 13 18:44:54 2012] [9555:3086317328] [debug] wc_get_name_for_type::jk_worker.c (293): Found worker type 'ajp13' [Wed Jun 13 18:44:54 2012] [9555:3086317328] [debug] init_ws_service::mod_jk.c (977): Service protocol=HTTP/1.1 method=GET ssl=false host=(null) addr=167.184.214.6 name=www1.myCompany.com.au port=80 auth=(null) user=(null) laddr=10.215.222.78 raddr=167.184.214.6 uri=/myWebApp/jsp/login.faces [Wed Jun 13 18:44:54 2012] [9555:3086317328] [debug] ajp_get_endpoint::jk_ajp_common.c (2977): acquired connection pool slot=0 after 0 retries [Wed Jun 13 18:44:54 2012] [9555:3086317328] [debug] ajp_marshal_into_msgb::jk_ajp_common.c (605): ajp marshaling done [Wed Jun 13 18:44:54 2012] [9555:3086317328] [debug] ajp_service::jk_ajp_common.c (2283): processing worker4 with 2 retries [Wed Jun 13 18:44:54 2012] [9555:3086317328] [debug] ajp_send_request::jk_ajp_common.c (1501): (worker4) all endpoints are disconnected. 
[Wed Jun 13 18:44:54 2012] [9555:3086317328] [debug] jk_open_socket::jk_connect.c (452): socket TCP_NODELAY set to On [Wed Jun 13 18:44:54 2012] [9555:3086317328] [debug] jk_open_socket::jk_connect.c (576): trying to connect socket 18 to 127.0.0.1:8014 [Wed Jun 13 18:44:54 2012] [9555:3086317328] [info] jk_open_socket::jk_connect.c (594): connect to 127.0.0.1:8014 failed (errno=13) [Wed Jun 13 18:44:54 2012] [9555:3086317328] [info] ajp_connect_to_endpoint::jk_ajp_common.c (922): Failed opening socket to (127.0.0.1:8014) (errno=13) [Wed Jun 13 18:44:54 2012] [9555:3086317328] [error] ajp_send_request::jk_ajp_common.c (1507): (worker4) connecting to backend failed. Tomcat is probably not started or is listening on the wrong port (errno=13) [Wed Jun 13 18:44:54 2012] [9555:3086317328] [info] ajp_service::jk_ajp_common.c (2447): (worker4) sending request to tomcat failed (recoverable), because of error during request sending (attempt=1) [Wed Jun 13 18:44:54 2012] [9555:3086317328] [debug] ajp_service::jk_ajp_common.c (2304): retry 1, sleeping for 100 ms before retrying [Wed Jun 13 18:44:54 2012] [9555:3086317328] [debug] ajp_send_request::jk_ajp_common.c (1501): (worker4) all endpoints are disconnected. [Wed Jun 13 18:44:54 2012] [9555:3086317328] [debug] jk_open_socket::jk_connect.c (452): socket TCP_NODELAY set to On [Wed Jun 13 18:44:54 2012] [9555:3086317328] [debug] jk_open_socket::jk_connect.c (576): trying to connect socket 18 to 127.0.0.1:8014 [Wed Jun 13 18:44:54 2012] [9555:3086317328] [info] jk_open_socket::jk_connect.c (594): connect to 127.0.0.1:8014 failed (errno=13) [Wed Jun 13 18:44:54 2012] [9555:3086317328] [info] ajp_connect_to_endpoint::jk_ajp_common.c (922): Failed opening socket to (127.0.0.1:8014) (errno=13) [Wed Jun 13 18:44:54 2012] [9555:3086317328] [error] ajp_send_request::jk_ajp_common.c (1507): (worker4) connecting to backend failed. Tomcat is probably not started or is listening on the wrong port (errno=13) [Wed Jun 13 18:44:54 2012] [9555:3086317328] [info] ajp_service::jk_ajp_common.c (2447): (worker4) sending request to tomcat failed (recoverable), because of error during request sending (attempt=2) [Wed Jun 13 18:44:54 2012] [9555:3086317328] [error] ajp_service::jk_ajp_common.c (2466): (worker4) connecting to tomcat failed. [Wed Jun 13 18:44:54 2012] [9555:3086317328] [debug] ajp_reset_endpoint::jk_ajp_common.c (743): (worker4) resetting endpoint with sd = 4294967295 (socket shutdown) [Wed Jun 13 18:44:54 2012] [9555:3086317328] [debug] ajp_done::jk_ajp_common.c (2905): recycling connection pool slot=0 for worker worker4 [Wed Jun 13 18:44:54 2012] [9555:3086317328] [info] jk_handler::mod_jk.c (2615): Service error=-3 for worker=worker4 The error i get in browser is: Service Temporarily Unavailable Apache/2.2.3 (Red Hat) Server at www1.myCompany.com.au Port 80 can someone please help and explain what is going on and how it can be resolved?
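    One common cause of errno=13 (EACCES, "Permission denied") on a socket connect from Apache on Red Hat systems is SELinux blocking httpd from making outbound network connections, even when Tomcat is up and listening. A hedged diagnostic sketch, assuming SELinux is the culprit and using the port 8014 shown in the log:

      # Is anything actually bound to the AJP port from the log?
      netstat -tlnp | grep 8014

      # "Enforcing" means SELinux policy applies to httpd
      getenforce

      # Look for recent denials against httpd
      grep -i denied /var/log/audit/audit.log | grep httpd | tail

      # If denials appear, allow Apache to open outbound connections
      # (-P makes the boolean persistent across reboots)
      setsebool -P httpd_can_network_connect 1

    If SELinux is not the issue, the same errno can come from a local firewall rule rejecting loopback traffic to 8014, so checking the iptables rules is a reasonable second step.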

    Read the article

  • Centos 6.3 vsftp unable to upload file to apache webserver

    - by user148648
    I am new to Centos, I did work with Sun Solaris and upload files to Apache web server before. I create an end user account and manage to ftp using command prompt to the server, error message is '226 Transfer Done (but failed to open directory). Content of my vsftpd.conf as below # Example config file /etc/vsftpd/vsftpd.conf # # The default compiled in settings are fairly paranoid. This sample file # loosens things up a bit, to make the ftp daemon more usable. # Please see vsftpd.conf.5 for all compiled in defaults. # # READ THIS: This example file is NOT an exhaustive list of vsftpd options. # Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's # capabilities. # # Allow anonymous FTP? (Beware - allowed by default if you comment this out). anonymous_enable=YES # ** may need to comment it back # # Uncomment this to allow local users to log in. local_enable=YES # # Uncomment this to enable any form of FTP write command. write_enable=YES # # Default umask for local users is 077. You may wish to change this to 022, # if your users expect that (022 is used by most other ftpd's) #local_umask=022 local_umask=077 # # Uncomment this to allow the anonymous FTP user to upload files. This only # has an effect if the above global write enable is activated. Also, you will # obviously need to create a directory writable by the FTP user. anon_upload_enable=YES # *** maybe to comment it back!!! # # Uncomment this if you want the anonymous FTP user to be able to create # new directories. anon_mkdir_write_enable=YES # ** may need to comment it back!!! # # Activate directory messages - messages given to remote users when they # go into a certain directory. dirmessage_enable=YES # # The target log file can be vsftpd_log_file or xferlog_file. # This depends on setting xferlog_std_format parameter xferlog_enable=YES # # Make sure PORT transfer connections originate from port 20 (ftp-data). connect_from_port_20=YES # # If you want, you can arrange for uploaded anonymous files to be owned by # a different user. Note! Using "root" for uploaded files is not # recommended! #chown_uploads=YES #chown_username=whoever # # The name of log file when xferlog_enable=YES and xferlog_std_format=YES # WARNING - changing this filename affects /etc/logrotate.d/vsftpd.log xferlog_file=/var/log/xferlog # # Switches between logging into vsftpd_log_file and xferlog_file files. # NO writes to vsftpd_log_file, YES to xferlog_file xferlog_std_format=YES # # You may change the default value for timing out an idle session. #idle_session_timeout=600 # # You may change the default value for timing out a data connection. #data_connection_timeout=120 # # It is recommended that you define on your system a unique user which the # ftp server can use as a totally isolated and unprivileged user. #nopriv_user=ftpsecure # # Enable this and the server will recognise asynchronous ABOR requests. Not # recommended for security (the code is non-trivial). Not enabling it, # however, may confuse older FTP clients. #async_abor_enable=YES # # By default the server will pretend to allow ASCII mode but in fact ignore # the request. Turn on the below options to have the server actually do ASCII # mangling on files when in ASCII mode. # Beware that on some FTP servers, ASCII support allows a denial of service # attack (DoS) via the command "SIZE /big/file" in ASCII mode. vsftpd # predicted this attack and has always been safe, reporting the size of the # raw file. # ASCII mangling is a horrible feature of the protocol. 
ascii_upload_enable=YES ascii_download_enable=YES # # You may fully customise the login banner string: ftpd_banner=Warning, only for authorize login. # # You may specify a file of disallowed anonymous e-mail addresses. Apparently # useful for combatting certain DoS attacks. #deny_email_enable=YES # (default follows) #banned_email_file=/etc/vsftpd/banned_emails # # You may specify an explicit list of local users to chroot() to their home # directory. If chroot_local_user is YES, then this list becomes a list of # users to NOT chroot(). chroot_local_user=YES chroot_list_enable=YES # (default follows) #chroot_list_file=/etc/vsftpd/chroot_list local_root=/var/www # # You may activate the "-R" option to the builtin ls. This is disabled by # default to avoid remote users being able to cause excessive I/O on large # sites. However, some broken FTP clients such as "ncftp" and "mirror" assume # the presence of the "-R" option, so there is a strong case for enabling it. ls_recurse_enable=YES # # When "listen" directive is enabled, vsftpd runs in standalone mode and # listens on IPv4 sockets. This directive cannot be used in conjunction # with the listen_ipv6 directive. listen=YES # # This directive enables listening on IPv6 sockets. To listen on IPv4 and IPv6 # sockets, you must run two copies of vsftpd with two configuration files. # Make sure, that one of the listen options is commented !! #listen_ipv6=YES pam_service_name=vsftpd userlist_enable=YES tcp_wrappers=YES
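    A hedged place to start on CentOS: with chroot_local_user=YES and local_root=/var/www, SELinux (enforcing by default) will refuse to let vsftpd enter or write web content, and "failed to open directory" right after a successful login is a classic symptom. A sketch, assuming the SELinux booleans and directory ownership are the problem; the user name below is a placeholder:

      # "Enforcing" means the FTP booleans matter
      getenforce

      # Let vsftpd reach home directories and write content
      setsebool -P ftp_home_dir on
      setsebool -P allow_ftpd_full_access on

      # Confirm the FTP account can traverse and write the tree
      ls -ldZ /var/www
      chown -R ftpuser:ftpuser /var/www/html   # ftpuser is hypothetical

    If the listing works after a temporary setenforce 0, that confirms SELinux rather than vsftpd.conf is the blocker.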

    Read the article

  • Script to setup Ubuntu as a wireless access point on a bridge mode

    - by nixnotwin
    I use the following script to make my netbook a full-fledged wireless access point. It creates a bridge from eth0 and wlan0 and starts hostapd:

      #!/bin/bash
      service network-manager stop
      ifconfig eth0 0.0.0.0        # remove IP from eth0
      ifconfig eth0 up             # ensure the interface is up
      ifconfig wlan0 0.0.0.0       # remove IP from wlan0
      ifconfig wlan0 up            # ensure the interface is up
      brctl addbr br0              # create br0 node
      hostapd -d /etc/hostapd/hostapd.conf > /var/log/hostapd.log &
      sleep 5
      brctl addif br0 eth0         # add eth0 to bridge br0
      brctl addif br0 wlan0        # add wlan0 to bridge br0
      ifconfig br0 192.168.1.15 netmask 255.255.255.0   # IP for the bridge
      ifconfig br0 up              # bring up the interface
      route add default gw 192.168.1.1                  # gateway

    This script works reliably, but if I want to revert to using Network Manager, I cannot: the bridge simply cannot be deleted. How can I modify this script so that running bridge_script --stop deletes the bridge, starts Network Manager, and leaves the interfaces behaving as if the machine had freshly rebooted?
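    The likely reason the bridge "cannot be deleted" is that brctl refuses to remove a bridge that is still up. A minimal teardown sketch for a --stop branch, mirroring the start-up steps in reverse (it assumes hostapd was launched by this script):

      #!/bin/bash
      if [ "$1" = "--stop" ]; then
          killall hostapd              # stop the AP daemon first
          ifconfig br0 down            # brctl cannot delete an active bridge
          brctl delif br0 wlan0        # detach the slave interfaces
          brctl delif br0 eth0
          brctl delbr br0              # now the bridge can go
          ifconfig eth0 0.0.0.0        # clear leftover addressing
          ifconfig wlan0 0.0.0.0
          service network-manager start   # hand the interfaces back
      fi

    Wrapping the existing commands in a matching --start branch of the same if/elif block would give the single-script interface you describe.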

    Read the article

  • Performing a clean database creation using msbuild

    - by Robert May
    So I’m taking a break from writing about other Agile stuff for a post. :)  I’m still going to get back to the other subjects, but this is fun too. Something I’ve done quite a bit of is MSBuild and CI work.  I’m experimenting with ways to improve what I’ve done in the past, particularly around database CI. Today, I developed a mechanism for starting from scratch with your database.  By scratch, I mean blowing away the existing database and creating it again from a single command line call.  I’m a firm believer that developers should be able to get to a known clean state at the database level with a single command and that they should be operating off of their own isolated database to improve productivity.  These scripts will help that. Here’s how I did it.  First, we have to disconnect users.  I did so using the help of a script from sql server central.  Note that I’m using sqlcmd variable replacement. -- kills all the users in a particular database -- dlhatheway/3M, 11-Jun-2000 declare @arg_dbname sysname declare @a_spid smallint declare @msg varchar(255) declare @a_dbid int set @arg_dbname = '$(DatabaseName)' select @a_dbid = sdb.dbid from master..sysdatabases sdb where sdb.name = @arg_dbname declare db_users insensitive cursor for select sp.spid from master..sysprocesses sp where sp.dbid = @a_dbid open db_users fetch next from db_users into @a_spid while @@fetch_status = 0 begin select @msg = 'kill '+convert(char(5),@a_spid) print @msg execute (@msg) fetch next from db_users into @a_spid end close db_users deallocate db_users GO Once all users are booted from the database, we can commence with recreating the database.  I generated the script that is used to create a database from SQL Server management studio, so I’m only going to show the bits that weren’t generated that are important.  There are a bunch of Alter Database statements that aren’t shown. First, I had to find the default location of the database files in the install, since they can be in many different locations.  I used Method 1 from a technet blog and then modified it a bit to do what I needed to do.  I ended up using dynamic SQL because for the life of me, I couldn’t get the “Filename” property to not return an error when I used anything besides a string.  I’m dropping the database first, if it exists.  Here’s the code:   IF EXISTS(SELECT 1 FROM [master].[sys].[databases] WHERE [name] = N'$(DatabaseName)') BEGIN drop database $(DatabaseName) END; go IF EXISTS(SELECT 1 FROM [master].[sys].[databases] WHERE [name] = 'zzTempDBForDefaultPath') BEGIN DROP DATABASE zzTempDBForDefaultPath END; -- Create temp database. Because no options are given, the default data and --- log path locations are used CREATE DATABASE zzTempDBForDefaultPath; DECLARE @Default_Data_Path VARCHAR(512), @Default_Log_Path VARCHAR(512); --Get the default data path SELECT @Default_Data_Path = ( SELECT LEFT(physical_name,LEN(physical_name)-CHARINDEX('\',REVERSE(physical_name))+1) FROM sys.master_files mf INNER JOIN sys.[databases] d ON mf.[database_id] = d.[database_id] WHERE d.[name] = 'zzTempDBForDefaultPath' AND type = 0); --Get the default Log path SELECT @Default_Log_Path = ( SELECT LEFT(physical_name,LEN(physical_name)-CHARINDEX('\',REVERSE(physical_name))+1) FROM sys.master_files mf INNER JOIN sys.[databases] d ON mf.[database_id] = d.[database_id] WHERE d.[name] = 'zzTempDBForDefaultPath' AND type = 1); --Clean up. 
IF EXISTS(SELECT 1 FROM [master].[sys].[databases] WHERE [name] = 'zzTempDBForDefaultPath') BEGIN DROP DATABASE zzTempDBForDefaultPath END; DECLARE @SQL nvarchar(max) SET @SQL= 'CREATE DATABASE $(DatabaseName) ON PRIMARY ( NAME = N''$(DatabaseName)'', FILENAME = N''' + @Default_Data_Path + N'$(DatabaseName)' + '.mdf' + ''', SIZE = 2048KB , FILEGROWTH = 1024KB ) LOG ON ( NAME = N''$(DatabaseName)Log'', FILENAME = N''' + @Default_Log_Path + N'$(DatabaseName)' + '.ldf' + ''', SIZE = 1024KB , FILEGROWTH = 10%) ' exec (@SQL) GO And with that, your database is created.  You can run these scripts on any server and on any database name.  To do that, I created an MSBuild script that looks like this: <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="4.0"> <PropertyGroup> <DatabaseName>MyDatabase</DatabaseName> <Server>localhost</Server> <SqlCmd>sqlcmd -v DatabaseName=$(DatabaseName) -S $(Server) -i </SqlCmd> <ScriptDirectory>.\Scripts</ScriptDirectory> </PropertyGroup> <Target Name ="Rebuild"> <ItemGroup> <ScriptFiles Include="$(ScriptDirectory)\*.sql"/> </ItemGroup> <Exec Command="$(SqlCmd) &quot;%(ScriptFiles.Identity)&quot;" ContinueOnError="false"/> </Target> </Project> Note that the Scripts directory is underneath the directory where I’m running the msbuild command and is relative to that directory.  Note also that the target is using batching to run each script in the scripts subdirectory, one after the other.  Each script is passed to the sqlcmd command line execution using the .Identity property on the itemgroup that is created.  This target file is saved in the file “Database.target”. To make this work, you’ll need msbuild in your path, and then run the following command: msbuild database.target /target:Rebuild Once you’ve got your virgin database setup, you’d then need to use a tool like dbdeploy.net to determine that it was a virgin database, build a change script based on the change scripts, and then you’d want another sqlcmd call to update the database with the appropriate scripts.  I’m doing that next, so I’ll post a blog update when I’ve got it working. Technorati Tags: MSBuild,Agile,CI,Database
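    One detail worth adding, since the PropertyGroup values are only defaults: MSBuild lets you override any property per invocation with /p:, so the same target file can rebuild a differently named database on another server without editing the XML. A sketch (the database and server names are examples):

      msbuild database.target /target:Rebuild /p:DatabaseName=MyOtherDb /p:Server=devsql01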

    Read the article

  • How to install older ruby version on ubuntu Precise Pangolin?

    - by Codestudent
    I got a present: older Rails tutorials that need an old Ruby version. I tried to install ruby-1.8 with the package manager, but I still had problems with the tutorial example code. Next I tried rvm to install the old Ruby version. Unfortunately I got an error, and I do not know what to do. I have searched the internet; many people have no problems with rvm. rvm use gives

    *ERROR: Branch origin/ruby_1_8_4 not found.*

    and

    *ERROR: Error running 'GEM_PATH="/usr/share/ruby-rvm/gems/ruby-1.8.4-tv1_8_4:/usr/share/ruby-rvm/gems/ruby-1.8.4-tv1_8_4@global:/usr/share/ruby-rvm/gems/ruby-1.8.4-tv1_8_4:/usr/share/ruby-rvm/gems/ruby-1.8.4-tv1_8_4@global" GEM_HOME="/usr/share/ruby-rvm/gems/ruby-1.8.4-tv1_8_4" "/usr/share/ruby-rvm/rubies/ruby-1.8.4-tv1_8_4/bin/ruby" "/usr/share/ruby-rvm/src/rubygems-1.3.7/setup.rb"', please read /usr/share/ruby-rvm/log/ruby-1.8.4-tv1_8_4/rubygems.install.log*

    Please give me a hint.
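    Two observations, hedged: the /usr/share/ruby-rvm paths mean this is Ubuntu's packaged ruby-rvm, which lags far behind upstream and is a frequent source of exactly this breakage, and rvm has no ruby_1_8_4 build branch at all, so that exact version cannot be compiled this way. A sketch of a likely fix, assuming the tutorials run on any 1.8-series Ruby (1.8.7 is the 1.8 release rvm supports best):

      # Replace the packaged rvm with the upstream installer
      sudo apt-get remove ruby-rvm
      \curl -L https://get.rvm.io | bash -s stable
      source ~/.rvm/scripts/rvm

      # Install and select a 1.8-series Ruby that rvm can actually build
      rvm install 1.8.7
      rvm use 1.8.7
      ruby -v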

    Read the article

  • 500 Internal Server Error with PHP application

    - by James
    I have written a PHP application using Windows and XAMPP. I've been trying to run it on Ubuntu 10.10 with Lighttpd 1.4.26. Parts of the application work fine, but whenever I try to log in, I get a 500 - Internal Server Error page. The only thing that shows up in /var/log/lighttpd/error.log is 2011-02-25 13:43:13: (mod_fastcgi.c.2582) unexpected end-of-file (perhaps the fastcgi process died): pid: 1169 socket: unix:/tmp/php.socket-0 2011-02-25 13:43:13: (mod_fastcgi.c.3367) response not received, request sent: 1596 on socket: unix:/tmp/php.socket-0 for /~denton/customer-facing-portal/index.php?, closing connection If I had any output whatsoever from PHP, this would be a lot easier to debug. Any ideas on how to get some? Here is my /etc/lighttpd/lighttpd.conf file: # Debian lighttpd configuration file # ############ Options you really have to take care of #################### ## modules to load server.modules = ( "mod_alias", "mod_compress", # "mod_rewrite", # "mod_redirect", # "mod_usertrack", # "mod_expire", # "mod_flv_streaming", # "mod_evasive", "mod_setenv" ) ## a static document-root, for virtual-hosting take look at the ## server.virtual-* options server.document-root = "/var/www/" ## where to upload files to, purged daily. server.upload-dirs = ( "/var/cache/lighttpd/uploads" ) ## where to send error-messages to server.errorlog = "/var/log/lighttpd/error.log" ## files to check for if .../ is requested index-file.names = ( "index.php", "index.html", "index.htm", "default.htm", "index.lighttpd.html" ) ## Use the "Content-Type" extended attribute to obtain mime type if possible # mimetype.use-xattr = "enable" ## # which extensions should not be handle via static-file transfer # # .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" ) ######### Options that are good to be but not neccesary to be changed ####### ## Use ipv6 only if available. 
(disabled for while, check #560837) #include_shell "/usr/share/lighttpd/use-ipv6.pl" ## bind to port (default: 80) # server.port = 81 ## bind to localhost only (default: all interfaces) ## server.bind = "localhost" ## error-handler for status 404 #server.error-handler-404 = "/error-handler.html" #server.error-handler-404 = "/error-handler.php" ## to help the rc.scripts server.pid-file = "/var/run/lighttpd.pid" ## ## Format: <errorfile-prefix><status>.html ## -> ..../status-404.html for 'File not found' #server.errorfile-prefix = "/var/www/" ## virtual directory listings dir-listing.encoding = "utf-8" server.dir-listing = "enable" ### only root can use these options # # chroot() to directory (default: no chroot() ) #server.chroot = "/" ## change uid to <uid> (default: don't change) server.username = "www-data" ## change gid to <gid> (default: don't change) server.groupname = "www-data" #### compress module compress.cache-dir = "/var/cache/lighttpd/compress/" compress.filetype = ("text/plain", "text/html", "application/x-javascript", "text/css") #### url handling modules (rewrite, redirect, access) # url.rewrite = ( "^/$" => "/server-status" ) # url.redirect = ( "^/wishlist/(.+)" => "http://www.123.org/$1" ) #### expire module # expire.url = ( "/buggy/" => "access 2 hours", "/asdhas/" => "access plus 1 seconds 2 minutes") #### external configuration files ## mimetype mapping include_shell "/usr/share/lighttpd/create-mime.assign.pl" ## load enabled configuration files, ## read /etc/lighttpd/conf-available/README first include_shell "/usr/share/lighttpd/include-conf-enabled.pl" ## Set environment variables setenv.add-environment = ( "DB_URL__DEMO" => "192.168.1.231", "DB_NAME_DEMO" => "demo", "DB_USER_DEMO" => "user", "DB_PASS_DEMO" => "password", "DB_AGENCY_DEMO" => "demo" ) Here is my /etc/php5/cgi/php.ini file (sans 1641 lines of comments): [PHP] register_long_arrays = Off short_open_tag = Off engine = On short_open_tag = Off asp_tags = Off precision = 14 y2k_compliance = On output_buffering = 4096 zlib.output_compression = Off implicit_flush = Off unserialize_callback_func = serialize_precision = 100 allow_call_time_pass_reference = Off safe_mode = Off safe_mode_gid = Off safe_mode_include_dir = safe_mode_exec_dir = safe_mode_allowed_env_vars = PHP_ safe_mode_protected_env_vars = LD_LIBRARY_PATH disable_functions = disable_classes = expose_php = On max_execution_time = 30 max_input_time = 60 memory_limit = 128M error_reporting = E_ALL & ~E_DEPRECATED & ~E_STRICT display_errors = On display_startup_errors = On log_errors = On log_errors_max_len = 1024 ignore_repeated_errors = Off ignore_repeated_source = Off report_memleaks = On track_errors = On html_errors = On variables_order = "GPCS" request_order = "GP" register_globals = Off register_long_arrays = Off register_argc_argv = Off auto_globals_jit = On post_max_size = 8M magic_quotes_gpc = Off magic_quotes_runtime = Off magic_quotes_sybase = Off auto_prepend_file = auto_append_file = default_mimetype = "text/html" doc_root = user_dir = enable_dl = Off cgi.fix_pathinfo=1 file_uploads = On upload_max_filesize = 2M max_file_uploads = 20 allow_url_fopen = On allow_url_include = Off default_socket_timeout = 60 [Date] date.timezone = "America/Chicago" [filter] [iconv] [intl] [sqlite] [sqlite3] [Pcre] [Pdo] [Pdo_mysql] pdo_mysql.cache_size = 2000 pdo_mysql.default_socket= [Phar] [Syslog] define_syslog_variables = Off [mail function] SMTP = localhost smtp_port = 25 mail.add_x_header = On [SQL] sql.safe_mode = Off [ODBC] odbc.allow_persistent = On 
odbc.check_persistent = On odbc.max_persistent = -1 odbc.max_links = -1 odbc.defaultlrl = 4096 odbc.defaultbinmode = 1 [Interbase] ibase.allow_persistent = 1 ibase.max_persistent = -1 ibase.max_links = -1 ibase.timestampformat = "%Y-%m-%d %H:%M:%S" ibase.dateformat = "%Y-%m-%d" ibase.timeformat = "%H:%M:%S" [MySQL] mysql.allow_local_infile = On mysql.allow_persistent = On mysql.cache_size = 2000 mysql.max_persistent = -1 mysql.max_links = -1 mysql.default_port = mysql.default_socket = mysql.default_host = mysql.default_user = mysql.default_password = mysql.connect_timeout = 60 mysql.trace_mode = Off [MySQLi] mysqli.max_persistent = -1 mysqli.allow_persistent = On mysqli.max_links = -1 mysqli.cache_size = 2000 mysqli.default_port = 3306 mysqli.default_socket = mysqli.default_host = mysqli.default_user = mysqli.default_pw = mysqli.reconnect = Off [mysqlnd] mysqlnd.collect_statistics = On mysqlnd.collect_memory_statistics = Off [OCI8] [PostgresSQL] pgsql.allow_persistent = On pgsql.auto_reset_persistent = Off pgsql.max_persistent = -1 pgsql.max_links = -1 pgsql.ignore_notice = 0 pgsql.log_notice = 0 [Sybase-CT] sybct.allow_persistent = On sybct.max_persistent = -1 sybct.max_links = -1 sybct.min_server_severity = 10 sybct.min_client_severity = 10 [bcmath] bcmath.scale = 0 [browscap] [Session] session.save_handler = files session.use_cookies = 1 session.use_only_cookies = 1 session.name = PHPSESSID session.auto_start = 0 session.cookie_lifetime = 0 session.cookie_path = / session.cookie_domain = session.cookie_httponly = session.serialize_handler = php session.gc_probability = 1 session.gc_divisor = 1000 session.gc_maxlifetime = 1440 session.bug_compat_42 = Off session.bug_compat_warn = Off session.referer_check = session.entropy_length = 0 session.cache_limiter = nocache session.cache_expire = 180 session.use_trans_sid = 0 session.hash_function = 0 session.hash_bits_per_character = 5 url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=fakeentry" [MSSQL] mssql.allow_persistent = On mssql.max_persistent = -1 mssql.max_links = -1 mssql.min_error_severity = 10 mssql.min_message_severity = 10 mssql.compatability_mode = Off mssql.secure_connection = Off [Assertion] [COM] [mbstring] [gd] [exif] [Tidy] tidy.clean_output = Off [soap] soap.wsdl_cache_enabled=1 soap.wsdl_cache_dir="/tmp" soap.wsdl_cache_ttl=86400 soap.wsdl_cache_limit = 5 [sysvshm] [ldap] ldap.max_links = -1 [mcrypt] [dba] Update: here is /etc/lighttpd/conf-enabled/15-fastcgi-php.conf As far as I know, it's just the default config file the Ubuntu package installed. ## FastCGI programs have the same functionality as CGI programs, ## but are considerably faster through lower interpreter startup ## time and socketed communication ## ## Documentation: /usr/share/doc/lighttpd-doc/fastcgi.txt.gz ## http://redmine.lighttpd.net/projects/lighttpd/wiki/Docs:ConfigurationOptions#mod_fastcgi-fastcgi ## Start an FastCGI server for php (needs the php5-cgi package) fastcgi.server += ( ".php" => (( "bin-path" => "/usr/bin/php-cgi", "socket" => "/tmp/php.socket", "max-procs" => 1, "idle-timeout" => 20, "bin-environment" => ( "PHP_FCGI_CHILDREN" => "4", "PHP_FCGI_MAX_REQUESTS" => "10000" ), "bin-copy-environment" => ( "PATH", "SHELL", "USER" ), "broken-scriptfilename" => "enable" )) )
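    Since the FastCGI child is dying before it writes anything, a hedged way to see the underlying PHP error is to run the failing page through php-cgi by hand as the web user, bypassing lighttpd entirely. The script path below is inferred from the /~denton/ URL in the error log and may need adjusting:

      sudo -u www-data php-cgi /home/denton/public_html/customer-facing-portal/index.php

      # Also give PHP somewhere to write fatals: in /etc/php5/cgi/php.ini set
      #   error_log = /var/log/php_errors.log
      sudo touch /var/log/php_errors.log
      sudo chown www-data /var/log/php_errors.log

    If the manual run prints a fatal error (missing extension, bad include path, out-of-memory during login), that is almost certainly what is killing the FastCGI process.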

    Read the article

  • Rails noob - How to work on data stored in models

    - by Raghav Kanwal
    I'm a beginner with Ruby and Rails, and I have made a couple of applications, like a microposts clone and a to-do list, for starters, but now I'm starting work on another project. I've got 2 models, user and tracker: you log in via the username, which is authenticated, and you can log data, which is stored in the tracker table. The tracker has a column named "Calories", and I would like Rails to sum all of the values entered if they are on the same date, then output that sum subtracted from, say, 3000 in a new statement after the display of the model. I know this is just Ruby code; I'm just not sure how to incorporate it. :( Could someone please guide me through this? And also link me to some guides/tutorials which teach working with data from models? Thank you :)

    Read the article

  • Announcing MySQL Enterprise Backup 3.7.1

    - by Hema Sridharan
    The MySQL Enterprise Backup (MEB) Team is pleased to announce the release of MEB 3.7.1, a maintenance release that includes bug fixes and enhancements to some of the existing features. The most important feature introduced in this release is Automatic Incremental Backup. New argument syntax for the --incremental-base option makes it simpler to perform automatic incremental backups. When the options --incremental & --incremental-base=history:last_backup are combined, the mysqlbackup command uses the metadata in the mysql.backup_history table to determine the LSN to use as the lower limit of the incremental backup. You no longer need to keep track of the actual LSN (as in the option --start-lsn=LSN) or even the location of the previous backup (as in the option --incremental-base=dir:directory_path). This release also includes various bug fixes related to some options used in MEB. The most important of them are listed below:
    1. The option --force now allows overwriting InnoDB data and log files in combination with the apply-log and apply-incremental-backup options, and replacing the image file in combination with the backup-to-image and backup-dir-to-image options.
    2. Resolved a bug that prevented MEB from interfacing with third-party storage managers to execute backup and restore jobs in combination with the SBT interface and associated --sbt* options for mysqlbackup.
    3. When MEB is run with the copy-back option, it now displays warnings as existing files are overwritten.
    For more information about other bug fixes, please refer to the changelog at http://dev.mysql.com/doc/mysql-enterprise-backup/3.7/en/meb-news.html. The complete MEB documentation is located at http://dev.mysql.com/doc/mysql-enterprise-backup/3.7/en/index.html. You will find the binaries for the new release in My Oracle Support, https://support.oracle.com. Choose the "Patches & Updates" tab, and then use the "Product or Family (Advanced Search)" feature. If you haven't looked at MEB 3.7.1 recently, please do so now and let us know how MEB works for you. Send your feedback to [email protected].
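    A hedged usage sketch of the new syntax described above; the directory, user, and credential details are placeholders:

      mysqlbackup --user=admin --password \
        --incremental --incremental-base=history:last_backup \
        --incremental-backup-dir=/backups/inc-$(date +%F) \
        backup

    Because the lower LSN now comes from mysql.backup_history, the same command line can run from cron unchanged and will always take its base from the most recent backup.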

    Read the article

  • Debugging "Premature end of script headers" - WSGI/Django [migrated]

    - by Marcin
    I have recently deployed an app to a shared host (WebFaction) and, for no apparent reason, my site will not load at all (it worked until today). It is a Django app, but django.log is not even created; the only clue is that in one of the logs I get the error message "Premature end of script headers", identifying my WSGI file as the source. I've tried to add logging to my WSGI file, but I can't find any log created for it. Is there any recommended way to debug this error? I am on the point of tearing my hair out. My WSGI file:

      import os
      import sys
      from django.core.handlers.wsgi import WSGIHandler
      import logging

      logger = logging.getLogger(__name__)

      os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'
      os.environ['CELERY_LOADER'] = 'django'

      virtenv = os.path.expanduser("~/webapps/django/oneclickcosvirt/")
      activate_this = virtenv + "bin/activate_this.py"
      execfile(activate_this, dict(__file__=activate_this))
      # if 'VIRTUAL_ENV' not in os.environ:
      #     os.environ['VIRTUAL_ENV'] = virtenv

      sys.path.append(os.path.dirname(virtenv + 'oneclickcos/'))

      logger.debug('About to run WSGIHandler')
      try:
          application = WSGIHandler()
      except (Exception,), e:
          logger.debug('Exception starting wsgihandler: %s' % e)
          raise e
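    "Premature end of script headers" means the WSGI/FastCGI process died before it could emit an HTTP header, so the logging calls inside the file never reach a handler. A hedged way to surface the actual traceback is to execute the WSGI file directly with the virtualenv's interpreter; the interpreter path comes from the script's own virtenv variable, while the .wsgi path is a placeholder:

      ~/webapps/django/oneclickcosvirt/bin/python /path/to/your.wsgi
      echo $?    # a traceback plus non-zero exit confirms an import-time crash

    Import-time failures (a missing package in the virtualenv, a bad settings module, the execfile of activate_this.py blowing up) are the usual suspects on shared hosts.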

    Read the article

  • Varnish 3.0.2 to Apache2 sometimes return error 503

    - by Ronnie Jespersen
    Hey guys, I hope you can help me out here. I have Nginx passing http and https to a Varnish cache (3.0.2). From Varnish it is sent to apache2. Now I have for some time been tracking some strange 503 errors, but I can't seem to find the silver bullet. Currently I am logging the 503 errors through Varnish this way:

      sudo varnishlog -c -m TxStatus:503 >> /home/rj/varnishlog503.log

    and then referring to the Apache access log to see if any 503 requests have been handled. Today I had a health check from the firewall that failed:

      20 SessionOpen c 127.0.0.1 34319 :8081
      20 ReqStart c 127.0.0.1 34319 607335635
      20 RxRequest c HEAD
      20 RxURL c /health-check
      20 RxProtocol c HTTP/1.0
      20 RxHeader c X-Real-IP: 192.168.3.254
      20 RxHeader c Host: 192.168.3.189
      20 RxHeader c X-Forwarded-For: 192.168.3.254
      20 RxHeader c Connection: close
      20 RxHeader c User-Agent: Astaro Service Monitor 0.9
      20 RxHeader c Accept: */*
      20 VCL_call c recv lookup
      20 VCL_call c hash
      20 Hash c /health-check
      20 VCL_return c hash
      20 VCL_call c miss fetch
      20 Backend c 33 aurum aurum
      20 FetchError c http first read error: -1 11 (No error recorded)
      20 VCL_call c error deliver
      20 VCL_call c deliver deliver
      20 TxProtocol c HTTP/1.1
      20 TxStatus c 503
      20 TxResponse c Service Unavailable
      20 TxHeader c Server: Varnish
      20 TxHeader c Content-Type: text/html; charset=utf-8
      20 TxHeader c Retry-After: 5
      20 TxHeader c Content-Length: 879
      20 TxHeader c Accept-Ranges: bytes
      20 TxHeader c Date: Wed, 06 Jun 2012 12:35:12 GMT
      20 TxHeader c X-Varnish: 607335635
      20 TxHeader c Age: 60
      20 TxHeader c Via: 1.1 varnish
      20 TxHeader c Connection: close
      20 Length c 879
      20 ReqEnd c 607335635 1338986052.649786949 1338986112.648169994 0.000160217 59.997980356 0.000402689

    Now the backend server (Apache) does not have any 503 error in the access log at this point, so I am confused. Is Varnish throwing a 503 because it thinks Apache is too slow? There is a lot of traffic coming through at this point, so I know the server is up and running. I do have other 503 error codes with POSTs and GETs, so there is really no pattern. It seems to happen at random times on random requests, even in the morning when the server doesn't seem to be doing anything. I do see another pattern in the log:

      4 VCL_call c recv pass
      4 VCL_call c hash
      4 Hash c /?id=412
      4 VCL_return c hash
      4 VCL_call c pass pass
      4 FetchError c no backend connection
      4 VCL_call c error deliver
      4 VCL_call c deliver deliver

    Here FetchError says "no backend connection". A summary of the FetchErrors in today's log:

      16 FetchError c http first read error: -1 11 (No error recorded)
      5 FetchError c http first read error: -1 11 (No error recorded)
      4 FetchError c http first read error: -1 11 (No error recorded)
      19 FetchError c http first read error: -1 11 (No error recorded)
      5 FetchError c http first read error: -1 11 (No error recorded)
      23 FetchError c http first read error: -1 11 (No error recorded)
      24 FetchError c http first read error: -1 11 (No error recorded)
      16 FetchError c http first read error: -1 11 (No error recorded)
      6 FetchError c http first read error: -1 11 (No error recorded)
      4 FetchError c http first read error: -1 11 (No error recorded)
      5 FetchError c http first read error: -1 11 (No error recorded)
      4 FetchError c http first read error: -1 11 (No error recorded)
      4 FetchError c http first read error: -1 11 (No error recorded)
      22 FetchError c http first read error: -1 11 (No error recorded)
      6 FetchError c http first read error: -1 11 (No error recorded)
      21 FetchError c http first read error: -1 11 (No error recorded)
      26 FetchError c no backend connection
      4 FetchError c no backend connection
      20 FetchError c http first read error: -1 11 (No error recorded)
      39 FetchError c http first read error: -1 11 (No error recorded)

    I haven't changed the default timeout values for Varnish. This is my configuration for one of the backend servers:

      backend xenon {
          .host = "192.168.3.187";
          .port = "80";
          .probe = {
              .url = "/health-check/";
              .interval = 3s;
              .window = 5;
              .threshold = 2;
          }
      }

    I'm running the prefork module on apache2 with this configuration:

      <IfModule mpm_prefork_module>
          StartServers         1
          MinSpareServers      2
          MaxSpareServers      5
          MaxClients           200
          MaxRequestsPerChild  75
      </IfModule>

    and only PHP files are sent to the server; every other static file is handled by Nginx. Any ideas?
------- EDIT -------------- Some more debuging information I have run a varnishadm debug.health Backend radon is Healthy Current states good: 5 threshold: 2 window: 5 Average responsetime of good probes: 0.002560 Oldest Newest ================================================================ 4444444444444444444444444444444444444444444444444444444444444444 Good IPv4 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR Good Recv HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH Happy Backend xenon is Healthy Current states good: 5 threshold: 2 window: 5 Average responsetime of good probes: 0.002760 Oldest Newest ================================================================ 4444444444444444444444444444444444444444444444444444444444444444 Good IPv4 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR Good Recv HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH Happy Backend iridium is Healthy Current states good: 5 threshold: 2 window: 5 Average responsetime of good probes: 0.000849 Oldest Newest ================================================================ 4444444444444444444444444444444444444444444444444444444444444444 Good IPv4 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR Good Recv HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH Happy Backend aurum is Healthy Current states good: 5 threshold: 2 window: 5 Average responsetime of good probes: 0.002100 Oldest Newest ================================================================ 4444444444444444444444444444444444444444444444444444444444444444 Good IPv4 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR Good Recv HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH Happy And I have been monitoring varnishstat from the two load balancers 3224774 3.99 2.61 backend_conn - Backend conn. success 27 0.00 0.00 backend_unhealthy - Backend conn. not attempted 63 0.00 0.00 backend_fail - Backend conn. failures 358798 0.00 0.29 backend_reuse - Backend conn. reuses 21035 0.00 0.02 backend_toolate - Backend conn. was closed 379834 0.00 0.31 backend_recycle - Backend conn. recycles 26 0.00 0.00 backend_retry - Backend conn. retry 3217751 5.99 2.61 backend_conn - Backend conn. success 32 0.00 0.00 backend_fail - Backend conn. failures 364185 0.00 0.30 backend_reuse - Backend conn. reuses 27077 0.00 0.02 backend_toolate - Backend conn. was closed 391263 0.00 0.32 backend_recycle - Backend conn. recycles 36 0.00 0.00 backend_retry - Backend conn. retry Notice that none of them have reported backend_fail. /Ronnie
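    A hedged reading of the log above: "http first read error: -1 11" is Varnish giving up while waiting for the first response byte, and the failing health check's ReqEnd line shows the request hanging for 59.997 s, which matches Varnish's default first_byte_timeout of 60 s. So Apache accepted the connection but took over a minute to answer; in the "no backend connection" cases it could not even accept one. A first experiment could be to inspect and raise the runtime timeout while investigating why prefork stalls (MaxRequestsPerChild 75 recycles children very aggressively):

      # add -T/-S options if varnishadm cannot auto-detect the instance
      varnishadm param.show first_byte_timeout
      varnishadm param.set first_byte_timeout 120

    Note these are global defaults; a per-backend .first_byte_timeout in the VCL would override them. If requests genuinely take 60+ seconds, though, raising the timeout only hides the symptom and the Apache/PHP side still needs profiling.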

    Read the article

  • Issues when upgrading gnome-session

    - by Gabriel A. Zorrilla
    I'm having issues since yesterday, after an upgrade. The GNOME session got messed up somehow, and now I cannot install further upgrades or new apps. Here is what I got from the terminal after running the recommended apt-get install -f:

      (Reading database ... 166876 files and directories currently installed.)
      Preparing to replace gnome-session 3.0.1-0ubuntu1~build2 (using .../gnome-session_3.0.2-0ubuntu3~natty1_all.deb) ...
      Unpacking replacement gnome-session ...
      dpkg: error processing /var/cache/apt/archives/gnome-session_3.0.2-0ubuntu3~natty1_all.deb (--unpack):
       trying to overwrite '/usr/share/xsessions/gnome-shell.desktop', which is also in package gnome-shell 3.0.1-0ubuntu1~build1
      Errors were encountered while processing:
       /var/cache/apt/archives/gnome-session_3.0.2-0ubuntu3~natty1_all.deb
      E: Sub-process /usr/bin/dpkg returned an error code (1)

    In /var/log/apt/history.log, it says:

      Start-Date: 2011-05-29 17:56:16
      Commandline: apt-get -f install
      Upgrade: gnome-session:amd64 (3.0.1-0ubuntu1~build2, 3.0.2-0ubuntu3~natty1)
      Error: Sub-process /usr/bin/dpkg returned an error code (1)
      End-Date: 2011-05-29 17:56:21

    No idea what all this means.
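    The message is a file conflict: the new gnome-session package ships /usr/share/xsessions/gnome-shell.desktop, which the already-installed gnome-shell package also owns, and dpkg refuses to overwrite it. A hedged recovery sketch for this kind of conflict is to let dpkg force the overwrite once, then resume the upgrade:

      sudo dpkg -i --force-overwrite \
        /var/cache/apt/archives/gnome-session_3.0.2-0ubuntu3~natty1_all.deb
      sudo apt-get -f install
      sudo apt-get dist-upgrade

    A later gnome-shell build normally resolves the conflict properly (via dpkg's Replaces mechanism), so this is a one-time workaround rather than something to repeat.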

    Read the article

  • Still no keyboard after uninstalling Ubuntu

    - by Muhammad Rushdi Ibrahim
    I installed Ubuntu 11.04 for the first time yesterday. After rebooting for the first time, I couldn't log in because I couldn't type anything using the keyboard. After another reboot, the keyboard failed completely; I could only boot automatically into Windows, since I couldn't choose Ubuntu. Then the problem got worse: I had to use the On-Screen Keyboard to log into Windows. Still no keyboard. When I rebooted, my laptop couldn't reboot at all, and I had to hard-reboot. I decided to uninstall Ubuntu using the Add/Remove program in the Control Panel, and I uninstalled it successfully; my laptop now boots automatically into Windows without the Ubuntu option. However, I still don't have the keyboard! Please help me. Acer Aspire 4935, Windows 7 Ultimate. Thanks.

    Read the article

  • Ubuntu One using 500 MB memory also when idle

    - by cdysthe
    I'm a Dropbox convert (I hope!), but after having used Ubuntu One for a couple of weeks I notice a few differences from Dropbox. The most glaring difference is that the sync daemon constantly takes 500 MB of RAM on my system (Ubuntu 12.04 x64). It hogs this amount of memory as soon as I log in, does its initial sync/check, but keeps the memory. All in all, it seems to me that Ubuntu One uses more system resources than Dropbox. I am syncing the same folders and files with Ubuntu One as I was with Dropbox. Also, after I log in, Ubuntu One grinds at 100% CPU for at least five minutes, which can be annoying on a laptop but is not a showstopper. I'm wondering if this is a problem on my system, or if Ubuntu One is expected to use that amount of memory even when idle?
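    Before treating it as a leak, it is worth separating resident memory from mapped address space and watching whether the number actually grows after the initial scan. A small diagnostic sketch (the daemon is named ubuntuone-syncdaemon on 12.04):

      # RSS is what the daemon really holds in RAM; VSZ is just address space
      ps -C ubuntuone-syncdaemon -o pid,rss,vsz,%cpu,etime,cmd

    If your system monitor is reporting VSZ, the 500 MB figure may overstate the real cost. Re-running the command after an hour shows whether usage is static (plausible, since metadata for all synced files is held in memory) or climbing, which would be a genuine leak worth reporting.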

    Read the article

  • What is the current state of Ubuntu's transition from init scripts to Upstart?

    - by Adam Eberlin
    What is the current state of Ubuntu's transition from init.d scripts to upstart? I was curious, so I compared the contents of /etc/init.d/ to /etc/init/ on one of our development machines, which is running Ubuntu 12.04 LTS Server. # /etc/init.d/ # /etc/init/ acpid acpid.conf apache2 --------------------------- apparmor --------------------------- apport apport.conf atd atd.conf bind9 --------------------------- bootlogd --------------------------- cgroup-lite cgroup-lite.conf --------------------------- console.conf console-setup console-setup.conf --------------------------- container-detect.conf --------------------------- control-alt-delete.conf cron cron.conf dbus dbus.conf dmesg dmesg.conf dns-clean --------------------------- friendly-recovery --------------------------- --------------------------- failsafe.conf --------------------------- flush-early-job-log.conf --------------------------- friendly-recovery.conf grub-common --------------------------- halt --------------------------- hostname hostname.conf hwclock hwclock.conf hwclock-save hwclock-save.conf irqbalance irqbalance.conf killprocs --------------------------- lxc lxc.conf lxc-net lxc-net.conf module-init-tools module-init-tools.conf --------------------------- mountall.conf --------------------------- mountall-net.conf --------------------------- mountall-reboot.conf --------------------------- mountall-shell.conf --------------------------- mounted-debugfs.conf --------------------------- mounted-dev.conf --------------------------- mounted-proc.conf --------------------------- mounted-run.conf --------------------------- mounted-tmp.conf --------------------------- mounted-var.conf networking networking.conf network-interface network-interface.conf network-interface-container network-interface-container.conf network-interface-security network-interface-security.conf newrelic-sysmond --------------------------- ondemand --------------------------- plymouth plymouth.conf plymouth-log plymouth-log.conf plymouth-splash plymouth-splash.conf plymouth-stop plymouth-stop.conf plymouth-upstart-bridge plymouth-upstart-bridge.conf postgresql --------------------------- pppd-dns --------------------------- procps procps.conf rc rc.conf rc.local --------------------------- rcS rcS.conf --------------------------- rc-sysinit.conf reboot --------------------------- resolvconf resolvconf.conf rsync --------------------------- rsyslog rsyslog.conf screen-cleanup screen-cleanup.conf sendsigs --------------------------- setvtrgb setvtrgb.conf --------------------------- shutdown.conf single --------------------------- skeleton --------------------------- ssh ssh.conf stop-bootlogd --------------------------- stop-bootlogd-single --------------------------- sudo --------------------------- --------------------------- tty1.conf --------------------------- tty2.conf --------------------------- tty3.conf --------------------------- tty4.conf --------------------------- tty5.conf --------------------------- tty6.conf udev udev.conf udev-fallback-graphics udev-fallback-graphics.conf udev-finish udev-finish.conf udevmonitor udevmonitor.conf udevtrigger udevtrigger.conf ufw ufw.conf umountfs --------------------------- umountnfs.sh --------------------------- umountroot --------------------------- --------------------------- upstart-socket-bridge.conf --------------------------- upstart-udev-bridge.conf urandom --------------------------- --------------------------- ureadahead.conf --------------------------- ureadahead-other.conf 
--------------------------- wait-for-state.conf whoopsie whoopsie.conf To be honest, I'm not entirely sure if I'm interpreting the division of responsibilities properly, as I didn't expect to see any overlap (of what framework handles which services). So I was quite surprised to learn that there was a significant amount of overlap in service references, in addition to being unable to discern which of the two was intended to be the primary service framework. Why does there seem to be a fair amount of redundancy in individual service handling between init.d and upstart? Is something else at play here that I'm missing? What is preventing upstart from completely taking over for init.d? Is there some functionality that certain daemons require which upstart does not yet have, which are preventing some services from converting? Or is it something else entirely?
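    One way to see the split concretely on a given machine: anything Upstart lists is managed from /etc/init, while the remaining init.d scripts still run through the SysV compatibility layer. A quick sketch:

      initctl list                           # jobs Upstart knows about (from /etc/init)
      service --status-all                   # the SysV view, including init.d-only scripts
      ls /etc/rc2.d                          # symlinks the compat layer still fires at boot
      ls -l /etc/init.d | grep upstart-job   # init.d entries that are only shims

    Much of the apparent overlap is pairs where the init.d entry is kept so that service and chkconfig keep working (on Ubuntu, often as a symlink to /lib/init/upstart-job) while the real start logic lives in the .conf job.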

    Read the article

  • How To Disable the Charms Bar and Switcher Hot Corners in Windows 8

    - by Chris Hoffman
    Several programs can prevent the app switcher and charms from appearing when you move your mouse to the corners of the screen in Windows 8, but you can do it yourself with this quick registry hack. You can also hide the charms bar and switcher by installing an application like Classic Shell, which will also add a Start menu and let you log directly into the desktop.

    Read the article

  • Apache 2.4 PHP 5.4 mysql not found?

    - by jurchiks
    Just tried setting up the latest Apache (2.4.1 x86 VC10 from apachelounge) and PHP (5.4 VC9 TS x86, with php5apache2_4.dll) on my PC and ran into this weird problem. In my php.ini I have enabled all of the following:

      extension=php_mysql.dll
      extension=php_mysqli.dll
      extension=php_pdo_mysql.dll

    but when I try to print the available PDO drivers using print_r(PDO::getAvailableDrivers()); it gives me an empty array. phpMyAdmin and MantisBT also refuse to work, saying that the mysql extension is missing. phpinfo() shows "PDO support: enabled" but "PDO drivers: (no value)", and when searching for mysql, only the mysqlnd section pops up... The DLLs are there in the D:/php/ext folder, no errors pop up when starting Apache, there is nothing in the Windows Event Log, and PHP itself has no error log anyway. What could possibly be the problem? Before this I had Apache 2.2.22 from apachelounge and PHP 5.3.9, and I had no problems. Now nothing works...
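    A hedged first check: on Windows builds this is very often a php.ini problem rather than a DLL problem. Either Apache is loading a different php.ini than the one you edited, or extension_dir does not point at D:/php/ext, so the extension= lines fail silently. A sketch to verify from a command prompt:

      php --ini                        # which php.ini the CLI actually loads
      php -m                           # extensions really loaded; look for mysql, pdo_mysql
      php -i | find "extension_dir"    # should resolve to D:/php/ext

    If the CLI loads the extensions but Apache does not, pointing Apache at the right ini with a PHPIniDir "D:/php" directive in httpd.conf, and setting extension_dir = "D:/php/ext" explicitly, is the usual fix.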

    Read the article

  • SQL SERVER – Creating All New Database with Full Recovery Model

    - by pinaldave
    Sometimes, complex problems have very simple solutions. Let us see the following email which I received recently. “Hi Pinal, In our system when we create new database, by default, they are all created with the Simple Recovery Model. We have to manually change the recovery model after we create the database. We used the following simple T-SQL code: CREATE DATABASE dbname. We are very frustrated with this situation. We want all our databases to have the Full Recovery Model option by default. We are considering the following methods; please suggest the most efficient one among them. 1) Creating a Policy; when it is violated, the database model can be fixed 2) Triggers at Server Level 3) Automated Job which goes through all the databases and checks their recovery model; if the DBA has not changed the model, then the job will list the Databases and change their recovery model Also, we have a situation where we need a database in the Simple Recovery Model as well – how to white list them? Please suggest the best method.” Indeed, an interesting email! The answer to their question, i.e., which is the best method to fit their needs (white list, default, etc)? It will be NONE of the above. Here is the solution in one line and also the easiest way: Just go to your Model database: Path in SSMS >> Databases > System Databases >> model >> Right Click Properties >> Options >> Recovery Model - Select Full from dropdown. Every newly created database takes its base template from the Model Database. If you create a custom SP in the Model Database, when you create a new database, it will automatically exist in that database. Any database that was already created before making changes in the Model Database will not be affected at all. Creating Policy is also a good method, and I will blog about this in a separate blog post, but looking at current specifications of the reader, I think the Model Database should be modified to have a Full Recovery Option. While writing this blog post, I remembered my another blog post where the model database log file was growing drastically even though there were no transactions SQL SERVER – Log File Growing for Model Database – model Database Log File Grew Too Big. NOTE: Please do not touch the Model Database unnecessary. It is a strict “No.” If you want to create an object that you need in all the databases, then instead of creating it in model database, I suggest that you create a new database called maintenance and create the object there. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, Readers Question, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
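    For anyone scripting this rather than clicking through SSMS, the same change is a one-liner; a sketch using sqlcmd (the server name is a placeholder):

      sqlcmd -S localhost -Q "ALTER DATABASE model SET RECOVERY FULL;"
      sqlcmd -S localhost -Q "SELECT name, recovery_model_desc FROM sys.databases;"

    The second command confirms that the model database now reports FULL, and every database created afterwards will inherit it.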

    Read the article

< Previous Page | 161 162 163 164 165 166 167 168 169 170 171 172  | Next Page >