Search Results

Search found 26523 results on 1061 pages for 'jack back'.


  • Apache Front End....Tomcat back end...SSL question

    - by Jared
    Hi everyone. I have Apache set up as my webserver, and Tomcat is hooked into Apache via mod_jk, so the user never interacts with Tomcat directly. I have set up SSL on the Apache webserver and I can hit it with https://localhost, but when I try to access my application at https://localhost/app I get a "directory not found" error. The catch is that over plain HTTP it works fine: http://localhost/app. What do I have to edit for this connection to work? I have uncommented the AJP connector in server.xml and added my virtual host to httpd.conf. What am I missing? Thanks in advance. Jared
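
    A minimal sketch of the piece that is often missing in this kind of setup (the worker name and certificate paths are placeholders, and it assumes SSL terminates at Apache): JkMount rules defined in the main server config are not inherited by a virtual host unless JkMountCopy is enabled, so the HTTPS virtual host needs its own mount or the copy directive.

        <VirtualHost *:443>
            SSLEngine on
            SSLCertificateFile    /etc/httpd/ssl/server.crt
            SSLCertificateKeyFile /etc/httpd/ssl/server.key
            # either repeat the mount inside the SSL vhost...
            JkMount /app/* worker1
            # ...or inherit the mounts from the main config instead:
            # JkMountCopy On
        </VirtualHost>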

    Read the article

  • Can't connect to smtp (postfix, dovecot) after making a change and trying to change it back

    - by UberBrainChild
    I am using postfix and dovecot along with zpanel and I tried enabling SSL and then turned it off as I did not have SSL configured yet and I realized it was a bit stupid at the time. I am using CentOS 6.4. I get the following error in the mail log. (I changed my host name to "myhostname" and my domain to "mydomain.com") Oct 20 01:49:06 myhostname postfix/smtpd[4714]: connect from mydomain.com[127.0.0.1] Oct 20 01:49:16 myhostname postfix/smtpd[4714]: fatal: no SASL authentication mechanisms Oct 20 01:49:17 myhostname postfix/master[4708]: warning: process /usr/libexec/postfix/smtpd pid 4714 exit status 1 Oct 20 01:49:17 amyhostname postfix/master[4708]: warning: /usr/libexec/postfix/smtpd: bad command startup -- throttling Reading on forums and similar questions I figured it was just a service that was not running or installed. However I can see that saslauthd is currently up and running on my system and restarting it does not help. Here is my postfix master.cf # # Postfix master process configuration file. For details on the format # of the file, see the Postfix master(5) manual page. # # ***** Unused items removed ***** # ========================================================================== # service type private unpriv chroot wakeup maxproc command + args # (yes) (yes) (yes) (never) (100) # ========================================================================== smtp inet n - n - - smtpd # -o content_filter=smtp-amavis:127.0.0.1:10024 # -o receive_override_options=no_address_mappings pickup fifo n - n 60 1 pickup submission inet n - - - - smtpd -o content_filter= -o receive_override_options=no_header_body_checks cleanup unix n - n - 0 cleanup qmgr fifo n - n 300 1 qmgr #qmgr fifo n - n 300 1 oqmgr tlsmgr unix - - n 1000? 1 tlsmgr rewrite unix - - n - - trivial-rewrite bounce unix - - n - 0 bounce defer unix - - n - 0 bounce trace unix - - n - 0 bounce verify unix - - n - 1 verify flush unix n - n 1000? 0 flush proxymap unix - - n - - proxymap smtp unix - - n - - smtp smtps inet n - - - - smtpd # When relaying mail as backup MX, disable fallback_relay to avoid MX loops relay unix - - n - - smtp -o fallback_relay= # -o smtp_helo_timeout=5 -o smtp_connect_timeout=5 showq unix n - n - - showq error unix - - n - - error discard unix - - n - - discard local unix - n n - - local virtual unix - n n - - virtual lmtp unix - - n - - lmtp anvil unix - - n - 1 anvil scache unix - - n - 1 scache # # ==================================================================== # Interfaces to non-Postfix software. Be sure to examine the manual # pages of the non-Postfix software to find out what options it wants. # ==================================================================== maildrop unix - n n - - pipe flags=DRhu user=vmail argv=/usr/local/bin/maildrop -d ${recipient} uucp unix - n n - - pipe flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient) ifmail unix - n n - - pipe flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient) bsmtp unix - n n - - pipe flags=Fq. 
user=foo argv=/usr/local/sbin/bsmtp -f $sender $nexthop $recipient # # spam/virus section # smtp-amavis unix - - y - 2 smtp -o smtp_data_done_timeout=1200 -o disable_dns_lookups=yes -o smtp_send_xforward_command=yes 127.0.0.1:10025 inet n - y - - smtpd -o content_filter= -o smtpd_helo_restrictions= -o smtpd_sender_restrictions= -o smtpd_recipient_restrictions=permit_mynetworks,reject -o mynetworks=127.0.0.0/8 -o smtpd_error_sleep_time=0 -o smtpd_soft_error_limit=1001 -o smtpd_hard_error_limit=1000 -o receive_override_options=no_header_body_checks -o smtpd_bind_address=127.0.0.1 -o smtpd_helo_required=no -o smtpd_client_restrictions= -o smtpd_restriction_classes= -o disable_vrfy_command=no -o strict_rfc821_envelopes=yes # # Dovecot LDA dovecot unix - n n - - pipe flags=DRhu user=vmail:mail argv=/usr/libexec/dovecot/deliver -d ${recipient} # # Vacation mail vacation unix - n n - - pipe flags=Rq user=vacation argv=/var/spool/vacation/vacation.pl -f ${sender} -- ${recipient} And here is dovecot ## ## Dovecot config file ## listen = * disable_plaintext_auth = no protocols = imap pop3 lmtp sieve auth_mechanisms = plain login passdb { driver = sql args = /etc/zpanel/configs/dovecot2/dovecot-mysql.conf } userdb { driver = sql } userdb { driver = sql args = /etc/zpanel/configs/dovecot2/dovecot-mysql.conf } mail_location = maildir:/var/zpanel/vmail/%d/%n first_valid_uid = 101 #last_valid_uid = 0 first_valid_gid = 12 #last_valid_gid = 0 #mail_plugins = mailbox_idle_check_interval = 30 secs maildir_copy_with_hardlinks = yes service imap-login { inet_listener imap { port = 143 } } service pop3-login { inet_listener pop3 { port = 110 } } service lmtp { unix_listener lmtp { #mode = 0666 } } service imap { vsz_limit = 256M } service pop3 { } service auth { unix_listener auth-userdb { mode = 0666 user = vmail group = mail } # Postfix smtp-auth unix_listener /var/spool/postfix/private/auth { mode = 0666 user = postfix group = postfix } } service auth-worker { } service dict { unix_listener dict { mode = 0666 user = vmail group = mail } } service managesieve-login { inet_listener sieve { port = 4190 } service_count = 1 process_min_avail = 0 vsz_limit = 64M } service managesieve { } lda_mailbox_autocreate = yes lda_mailbox_autosubscribe = yes protocol lda { mail_plugins = quota sieve postmaster_address = [email protected] } protocol imap { mail_plugins = quota imap_quota trash imap_client_workarounds = delay-newmail } lmtp_save_to_detail_mailbox = yes protocol lmtp { mail_plugins = quota sieve } protocol pop3 { mail_plugins = quota pop3_client_workarounds = outlook-no-nuls oe-ns-eoh } protocol sieve { managesieve_max_line_length = 65536 managesieve_implementation_string = Dovecot Pigeonhole managesieve_max_compile_errors = 5 } dict { quotadict = mysql:/etc/zpanel/configs/dovecot2/dovecot-dict-quota.conf } plugin { # quota = dict:User quota::proxy::quotadict quota = maildir:User quota acl = vfile:/etc/dovecot/acls trash = /etc/zpanel/configs/dovecot2/dovecot-trash.conf sieve_global_path = /var/zpanel/sieve/globalfilter.sieve sieve = ~/dovecot.sieve sieve_dir = ~/sieve sieve_global_dir = /var/zpanel/sieve/ #sieve_extensions = +notify +imapflags sieve_max_script_size = 1M #sieve_max_actions = 32 #sieve_max_redirects = 4 } log_path = /var/log/dovecot.log info_log_path = /var/log/dovecot-info.log debug_log_path = /var/log/dovecot-debug.log mail_debug=yes ssl = no Does anyone have any ideas or tips on what I can try to get this working? 
Thanks for all the help EDIT: Output of postconf -n alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases broken_sasl_auth_clients = yes command_directory = /usr/sbin config_directory = /etc/postfix daemon_directory = /usr/libexec/postfix debug_peer_level = 2 delay_warning_time = 4 disable_vrfy_command = yes html_directory = no inet_interfaces = all mail_owner = postfix mailq_path = /usr/bin/mailq.postfix manpage_directory = /usr/share/man mydestination = localhost.$mydomain, localhost mydomain = control.yourdomain.com myhostname = control.yourdomain.com mynetworks = all newaliases_path = /usr/bin/newaliases.postfix queue_directory = /var/spool/postfix readme_directory = /usr/share/doc/postfix-2.2.2/README_FILES recipient_delimiter = + relay_domains = proxy:mysql:/etc/zpanel/configs/postfix/mysql-relay_domains_maps.cf sample_directory = /usr/share/doc/postfix-2.2.2/samples sendmail_path = /usr/sbin/sendmail.postfix setgid_group = postdrop smtp_use_tls = no smtpd_client_restrictions = smtpd_data_restrictions = reject_unauth_pipelining smtpd_helo_required = yes smtpd_helo_restrictions = smtpd_recipient_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_unauth_destination, reject_non_fqdn_sender, reject_non_fqdn_recipient, reject_unknown_recipient_domain smtpd_sasl_auth_enable = yes smtpd_sasl_local_domain = $myhostname smtpd_sasl_path = private/auth smtpd_sasl_security_options = noanonymous smtpd_sasl_type = dovecot smtpd_sender_restrictions = smtpd_use_tls = no soft_bounce = yes transport_maps = hash:/etc/postfix/transport unknown_local_recipient_reject_code = 550 virtual_alias_maps = proxy:mysql:/etc/zpanel/configs/postfix/mysql-virtual_alias_maps.cf, regexp:/etc/zpanel/configs/postfix/virtual_regexp virtual_gid_maps = static:12 virtual_mailbox_base = /var/zpanel/vmail virtual_mailbox_domains = proxy:mysql:/etc/zpanel/configs/postfix/mysql-virtual_domains_maps.cf virtual_mailbox_maps = proxy:mysql:/etc/zpanel/configs/postfix/mysql-virtual_mailbox_maps.cf virtual_minimum_uid = 101 virtual_transport = dovecot virtual_uid_maps = static:101
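
    A hedged checklist for narrowing this down (not a confirmed fix; the paths are taken from the configs quoted above): "fatal: no SASL authentication mechanisms" generally means smtpd could not obtain any mechanisms from its SASL backend, so the Dovecot auth socket is the first thing to verify.

        postconf -a                                 # SASL server types this Postfix build supports; "dovecot" must be listed
        ls -l /var/spool/postfix/private/auth       # the socket expected by smtpd_sasl_path = private/auth
        doveconf -n | grep -B2 -A4 private/auth     # does the running Dovecot (2.x) still define that unix_listener?
        tail /var/log/dovecot.log                   # auth errors on the Dovecot side (log_path from the config above)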

    Read the article

  • How to reverse-i-search back and forth?

    - by m-ric
    I use reverse-i-search often, and that's cool. Sometimes, though, when pressing Ctrl+R multiple times, I go past the command I am actually looking for. Because Ctrl+R searches backward in history, from newest to oldest, I then have to cancel, search again, and stop exactly at the command without passing it. While at the reverse-i-search prompt, is it possible to search forward, i.e. from where I stand toward the newest entries? I naively tried Ctrl+Shift+R, with no luck. I've heard about Ctrl+G, but that is not what I'm after here. Does anyone have an idea?
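
    A possible approach, sketched from standard readline behavior rather than the original thread: Ctrl+S is bound to forward-search-history, but most terminals intercept it for XON/XOFF flow control, so it has to be freed first.

        stty -ixon    # disable flow control so Ctrl+S reaches the shell
        # now Ctrl+R steps backward and Ctrl+S steps forward within the same search prompt;
        # adding the stty line to ~/.bashrc makes this permanent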

    Read the article

  • RAID10 without write-back cache = horrible write performance?

    - by Harry Mexican
    I have just provisioned a dedicated server on SingleHop. I'm running it through some tests to know what to expect performance-wise. On the I/O side (with 4 x 1 TB disks in RAID 10) I get, with the write cache disabled, 200 MB/s read throughput and 30 MB/s write throughput. I thought that was really low compared to my desktop HD, which gets around 150/150. So I had a chat with them and they suggested enabling the write cache. The new results, with the write cache enabled, are 280 MB/s read and 260 MB/s write, which is great and all, but it means I'd have to add a BBU for an additional monthly cost. Is it normal for the write throughput to be 1/4 of a regular drive's on RAID 10 if you don't have the write cache? It almost feels like it's intentionally bad to force you to pony up for the BBU. I'd be happy with normal non-RAID performance of 150/150.
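
    For reference, a sketch of how numbers in this ballpark are typically measured (the file path and sizes are placeholders; the direct-I/O flags bypass the page cache so the array and controller cache are what get exercised):

        dd if=/dev/zero of=/data/ddtest bs=1M count=4096 oflag=direct   # sequential write throughput
        dd if=/data/ddtest of=/dev/null bs=1M iflag=direct              # sequential read throughput
        rm /data/ddtest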

    Read the article

  • OS X wants to back up Macintosh HD to itself

    - by slhck
    From time to time, OS X (10.6) will ask me if I want to perform a backup of my Macintosh HD, but the backup destination is obviously the HD itself. I have not plugged in any external device, nor is there a Time Capsule network share. When the prompt appears, the Macintosh HD also suddenly gets a Time Machine icon in Finder. Has anybody experienced the same thing and could help me identify how to stop OS X from using its own disk as the backup destination?

    Read the article

  • back-end SQL server 2005 databases for website

    - by Datapimp23
    Hi, we're migrating an existing IIS website + MS SQL 2005 database (on the same server) to a new test set-up, because the existing set-up is too slow. I want one IIS server and two MS SQL Server 2005 servers: one live DB server for the website queries (inserts, updates) and another for backups, reports and stored procedures. The live DB should therefore be tuned for performance, and the other doesn't even need to be synced instantly. What is the best way to set this up in SQL Server 2005? Can somebody point me in the right direction and give me some pointers? Thanks
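
    SQL Server 2005 has built-in options for keeping a slightly delayed reporting copy, such as log shipping and transactional replication. As a minimal sketch, the simplest variant is a scheduled backup/restore (database name, paths and schedule are placeholders):

        -- on the live server, e.g. nightly from a SQL Agent job
        BACKUP DATABASE WebAppDb TO DISK = N'\\reportsrv\backup\WebAppDb.bak' WITH INIT;
        -- on the reporting server
        RESTORE DATABASE WebAppDb FROM DISK = N'D:\backup\WebAppDb.bak' WITH REPLACE;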

    Read the article

  • Any "Magic Tricks" For Getting Data Back After Windows 7 Install

    - by user163757
    My old man installed Windows 7 without making a proper backup, and now realizes he left behind some important data. He did a true "clean install", so there is no Windows.old folder in the root directory. However, I believe the format performed on the hard drive was only a quick format, so I am hoping there is some chance at data recovery. I took his hard drive out and have spent the majority of the weekend researching data recovery options. I paid $70 for the GetDataBack software, but have had little success with it. I can see all of the files I want to restore; however, they appear corrupt when I try to open them. With all that being said, does anyone know of a viable way to recover some of this data, or is it a lost cause altogether?

    Read the article

  • nginx: try_files not finding static files, falling back to PHP

    - by Wells Oliver
    Relevant configuration:

        location /myapp {
            root /home/me/myapp/www;
            try_files $uri $uri/ /myapp/index.php?url=$uri&$args;

            location ~ \.php {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include /etc/nginx/fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root/index.php;
            }
        }

    I absolutely have a file foo.html in /home/me/myapp/www, but when I browse to /myapp/foo.html it is handled by PHP, the final fallback in the try_files list. Why is this happening?
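
    A hedged guess at the cause rather than a confirmed answer: inside location /myapp, root has the full request URI appended, so foo.html is looked up at /home/me/myapp/www/myapp/foo.html, which doesn't exist and falls through to the PHP fallback. Either the static files need to live under a myapp/ subdirectory of that root, or alias can strip the prefix, roughly:

        location /myapp {
            alias /home/me/myapp/www;    # alias replaces the "/myapp" prefix instead of appending the URI to the root
            try_files $uri $uri/ /myapp/index.php?url=$uri&$args;
        }

    (try_files combined with alias has had quirks in some nginx versions, so moving the files under a myapp/ subdirectory and keeping root is the more conservative fix.)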

    Read the article

  • ssh without password does not work for some users

    - by joshxdr
    I have a new RHEL4 Linux box that I am using to copy data to old Solaris 2.6 and RHEL3 Linux boxes with scp. I have found that, with the same setup, it works for some users but not for others. For user jane, this works fine:

        jane@host1$ ssh -v remhost
        debug1: Next authentication method: publickey
        debug1: Trying private key: /mnt/home/osborjo/.ssh/identity
        debug1: Offering public key: /mnt/home/osborjo/.ssh/id_rsa
        debug1: Server accepts key: pkalg ssh-rsa blen 277
        debug1: read PEM private key done: type RSA
        debug1: Authentication succeeded (publickey).

    For user jack it does not:

        jack@host1$ ssh -v remhost
        debug1: Next authentication method: publickey
        debug1: Trying private key: /mnt/home/oper1/.ssh/identity
        debug1: Offering public key: /mnt/home/oper1/.ssh/id_rsa
        debug1: Authentications that can continue: publickey,password,keyboard-interactive

    I have looked at the permissions for all the keys and files, and they look the same. Since the home directories are mounted over NFS, the keys for both the remote host and the local host are in the same directory. This is how things look for jane:

        jane@host1$ ls -l $HOME/.ssh
        -rw-rw-r-- 1 jane operator  394 Jan 27 16:28 authorized_keys
        -rw------- 1 jane operator 1675 Jan 27 16:27 id_rsa
        -rw-r--r-- 1 jane operator  394 Jan 27 16:27 id_rsa.pub
        -rw-rw-r-- 1 jane operator 1205 Jan 27 16:46 known_hosts

    For user jack:

        jack@host1$ ls -l $HOME/.ssh
        -rw-rw-r-- 1 jack engineer  394 Jan 27 16:28 authorized_keys
        -rw------- 1 jack engineer 1675 Jan 27 16:27 id_rsa
        -rw-r--r-- 1 jack engineer  394 Jan 27 16:27 id_rsa.pub
        -rw-rw-r-- 1 jack engineer 1205 Jan 27 16:46 known_hosts

    As a last-ditch effort, I copied the authorized_keys, id_rsa, and id_rsa.pub from jane to jack and changed the username in authorized_keys and id_rsa.pub with vi. It still did not work. It seems there is something different between the two users, but I cannot figure out what it is.
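
    A hedged checklist rather than the accepted answer: the server side usually says exactly why a key was refused, and sshd's StrictModes setting silently rejects keys when the remote user's home directory or ~/.ssh is group- or world-writable, which can differ per user even when the .ssh contents look identical.

        # on the remote host, as root, see what sshd logs for the failing user
        grep sshd /var/log/secure | tail        # RHEL; the Solaris box logs via syslog (location varies)
        # permissions sshd checks under StrictModes: the home directory itself, ~/.ssh, and authorized_keys
        ls -ld ~jack ~jack/.ssh ~jack/.ssh/authorized_keys
        # tightening them for the failing user is a harmless test
        chmod go-w ~jack ~jack/.ssh ~jack/.ssh/authorized_keys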

    Read the article

  • Prevent failback to the master after a failure

    - by Chrille
    I'm using keepalived to set up a virtual IP that points to a master server. When a failover happens it should point the virtual IP to the backup, and the IP should stay there until I manually bring the master back (fix it). This is important because I'm running MySQL replication on the servers and writes should only go to the master; when I fail over, I promote the slave to master.

    The master server:

        global_defs {
            ! this is who emails will go to on alerts
            notification_email {
                [email protected]
                ! add a few more email addresses here if you would like
            }
            notification_email_from [email protected]
            ! I use the local machine to relay mail
            smtp_server 127.0.0.1
            smtp_connect_timeout 30
            ! each load balancer should have a different ID
            ! this will be used in SMTP alerts, so you should make
            ! each router easily identifiable
            lvs_id APP1
        }

        vrrp_instance APP1 {
            interface eth0
            state EQUAL
            virtual_router_id 61
            priority 999
            nopreempt
            virtual_ipaddress {
                217.x.x.129
            }
            smtp_alert
        }

    Backup server:

        global_defs {
            ! this is who emails will go to on alerts
            notification_email {
                [email protected]
                ! add a few more email addresses here if you would like
            }
            notification_email_from [email protected]
            ! I use the local machine to relay mail
            smtp_server 127.0.0.1
            smtp_connect_timeout 30
            ! each load balancer should have a different ID
            ! this will be used in SMTP alerts, so you should make
            ! each router easily identifiable
            lvs_id APP2
        }

        vrrp_instance APP2 {
            interface eth0
            state EQUAL
            virtual_router_id 61
            priority 100
            virtual_ipaddress {
                217.xx.xx.129
            }
            notify_master "/etc/keepalived/notify.sh del app2"
            notify_backup "/etc/keepalived/notify.sh add app2"
            notify_fault "/etc/keepalived/notify.sh add app2"
            smtp_alert
        }
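
    For comparison, a hedged sketch of the documented non-preempt pattern (not necessarily what the poster ended up with): keepalived only honors nopreempt when the instance starts in state BACKUP, so both nodes are declared BACKUP and the priority alone decides which one takes the VIP first; whichever node holds it then keeps it until it actually fails.

        vrrp_instance APP1 {
            state BACKUP          ! on both nodes; nopreempt is ignored for an instance in state MASTER
            nopreempt             ! the higher-priority node does not reclaim the VIP when it comes back
            interface eth0
            virtual_router_id 61
            priority 150          ! e.g. 150 on APP1, 100 on APP2
            virtual_ipaddress {
                217.x.x.129
            }
        }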

    Read the article

  • Rolling Back Microsoft CRM during testing

    - by npeterson
    Process-related question: we currently have a multi-tenant installation of MS CRM 4.0 on three servers: Dev, Test, and Live. We are actively working on customizing one of the tenants, but the others are static. During user testing, we often find it necessary to 'start fresh' in one of the tenants. Is it better to try to delete the changes from the tenant (created accounts, leads, etc.), or just revert the database to a backup taken before the testing started? Are there compelling reasons why bulk deletes are not advisable for MS CRM, or why reverting the database frequently could cause issues?
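
    A minimal sketch of the 'revert' option, assuming the usual CRM 4.0 layout where each tenant lives in its own OrganizationName_MSCRM database (names and paths are placeholders):

        -- take the pre-testing snapshot once
        BACKUP DATABASE [Tenant1_MSCRM] TO DISK = N'D:\backups\Tenant1_MSCRM_preUAT.bak' WITH INIT;
        -- restore it whenever the testers need a fresh start
        ALTER DATABASE [Tenant1_MSCRM] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        RESTORE DATABASE [Tenant1_MSCRM] FROM DISK = N'D:\backups\Tenant1_MSCRM_preUAT.bak' WITH REPLACE;
        ALTER DATABASE [Tenant1_MSCRM] SET MULTI_USER;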

    Read the article

  • How to restore backed-up email files in qmail

    - by Maysam
    I have a problem restoring some old backed-up mail files on a mail server that uses qmail. When I copy an email file into the /cur directory, the message count shown next to the inbox increases, but when I click on the inbox I don't see the newly copied email; I can only see the old emails. I also deleted the maildirsize and courierimapuiddb files and they were automatically created again, but it didn't help and I still cannot see the email in my inbox. Is there something I am missing? How can I restore the backed-up email files? Please note that when I copy the email files into the /.sent-mail/cur directory, they are all displayed in my sent box, but that doesn't happen for inbox files in the /cur directory.

    Read the article

  • Windows 7 can't make a backup

    - by J. Pablo Fernández
    I've been trying to get Windows 7 to make a backup for a week or so. I'm backing up to a local NAS and I'm getting this error:

        Windows Backup: Troubleshooting Options
        Check your backup
        Windows Backup could not create a zip file. This could be because the drive that Windows
        is installed on does not have enough space or it could be a temporary error.
        Make sure you have at least 400 MB of free space and try again.
        Backup time: 2009-09-07 14:48
        Backup location: \\VANGELIS\Shared\Backup\lennon\
        Error code: 0x81000015

    My local hard disk has 290 GB free and my NAS has 200 GB free. Any ideas what might be wrong?

    Read the article

  • Easiest driver backup solution?

    - by mgpyone
    I've faced missing drivers after formatting Windows. Graphics drivers, the touchpad driver, the fingerprint driver and so forth are particularly sensitive; they do not work until their drivers are reinstalled. I can download some from the vendor's official website, but some drivers are not available there. So I'm looking for a solution (maybe a piece of software) that can back up my drivers effectively (being able to run on a schedule would be great) and restore them easily locally, especially for Windows XP and Vista. Any suggestions will be really appreciated.

    Read the article

  • VMWare Server lck file keeps coming back

    - by muncherelli
    I am running VMware Server 2.0 on a Debian Lenny system as the host OS. I get this error when I try to start a virtual machine:

        Cannot open the disk '/var/lib/vmware/Virtual Machines//.vmdk' or one of the
        snapshot disks it depends on. Reason: Failed to lock the file.

    So I looked around on the web and found that I need to delete the .lck folder and file to get past this error. This seems to happen any time I reboot my Debian server; the virtual machines sometimes do not recover, and the lck files are causing problems. Should I create a cron script that does an rm *.lck on each of my machines on reboot? I'm looking for any direction on how to resolve this. It seems that when I issue a "reboot" command it may not be gracefully shutting down the VMware containers, so the lock files are still intact?
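
    A hedged workaround sketch for the stale locks themselves (it does not address the unclean shutdown; the path is taken from the error above): .lck entries are directories as well as files, so a recursive delete before the VMware service starts covers both.

        # e.g. from an init script ordered before the vmware service at boot
        find "/var/lib/vmware/Virtual Machines" -name '*.lck' -prune -exec rm -rf {} +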

    Read the article

  • Unidirectional synchronization and admin back-end

    - by HTF
    I have a WordPress installation on two web nodes (load balancing/failover). There is unidirectional synchronization from server A to server B, so any updates must occur on the first web node. My problem is the WordPress admin side. I'm using Nginx, and the initial plan was to create a rewrite rule from domain.com/wp-admin to wpadmin.domain.com pointing to the first node. The problem is that the WordPress installation can be accessed only via the main domain, and without an extra subdomain there is no way for the rewrite rule to distinguish between the two web servers. Could you please advise whether there is any other solution in this case? Regards
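
    One alternative, as a hedged sketch (the upstream address is a placeholder): rather than a separate subdomain, each node's Nginx could proxy just the admin and login paths to web node A, so all writes land on the synchronization source while the site stays on the main domain.

        location ~ ^/(wp-admin/|wp-login\.php) {
            proxy_pass http://192.0.2.10;        # web node A
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }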

    Read the article

  • Internet Explorer 8/9 JavaScript disabled, need to enable it again

    - by Shiro
    Dear all, this is not a programming issue; I just want to ask whether anyone else has had JavaScript end up disabled in Internet Explorer. I went to Internet Options, enabled everything including JavaScript, and restarted the browser, but all JavaScript is still disabled. When I go to the jQuery API website and try the demos, nothing shows. Even if I write an alert("hello"); in an HTML file, nothing comes up. I am not sure what has happened to my Internet Explorer. I was initially using IE8 and tried updating to IE9 Beta and then IE9 RC; the problem started on IE8 and is still not resolved on IE9 RC. I am not sure which application has interfered with IE. Please advise! It is urgent. Thanks!
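
    For reference, a minimal test page along the lines described; if even this alert never fires, scripting is being blocked below the page level (zone policy, group policy, or third-party security software) rather than by the page itself.

        <!DOCTYPE html>
        <html>
          <head><title>JS test</title></head>
          <body>
            <script type="text/javascript">alert("hello");</script>
            <noscript><p>Scripting is disabled or blocked in this browser.</p></noscript>
          </body>
        </html>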

    Read the article

  • mod_rpaf problems with Nginx front, Apache back-end after Ubuntu upgrade

    - by Kenn
    I'm running an Nginx front-end for static files, and proxying to an Apache backend for PHP and Passenger, using Apache's mod_rpaf to set the correct remote IP address on the backend. Everything worked fine until I upgraded to Ubuntu 12.04 (Precise). Now Apache reports all connections as coming from 127.0.0.1. Here's the relevant configuration; nothing here changed with the upgrade.

    Nginx:

        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    mod_rpaf:

        <IfModule mod_rpaf.c>
            RPAFenable On
            RPAFsethostname On
            RPAFproxy_ips 127.0.0.1 ::1
            RPAFheader X-Forwarded-For
        </IfModule>

    I'm using %{X-Forwarded-For}i in my Apache LogFormat directive and the access logs show the correct remote address, so I know Nginx is passing the address along properly. In a phpinfo() test, HTTP_X_FORWARDED_FOR shows the correct remote address, but REMOTE_ADDR is 127.0.0.1. This is reflected in PHP applications as well, such as WordPress comments. I've tried switching Nginx and mod_rpaf to X-Real-IP with no effect. Did something change that I missed? Relevant version info (everything installed from the Ubuntu repository): Nginx 1.1.19, Apache 2.2.22, mod_rpaf 0.6.
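
    A hedged first step, not the original resolution: confirm that the 12.04 upgrade didn't silently drop the module, and check the package state, since everything else in the chain is demonstrably passing the header along.

        apache2ctl -M | grep -i rpaf                  # is mod_rpaf actually loaded after the upgrade?
        ls /etc/apache2/mods-enabled/ | grep -i rpaf  # is it still enabled?
        dpkg -l libapache2-mod-rpaf                   # package state and version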

    Read the article

  • Way to back up a SQL Server 2005 database to get just the data and objects

    - by Caveatrob
    I'm trying to do a nightly backup of a SQL Server 2005 database to get the leanest, smallest file that includes all the table data, stored procedures, views, and functions. I'm downloading and saving this daily in case the server isn't available and I need to just get some data out. I don't care about the logs or any other parts of the DB but the data, procs, tables, and functions. What's the easiest way to get just this information backed up?
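
    A minimal sketch of one option (database name and path are placeholders; it assumes the log can be kept small by running in SIMPLE recovery, which is only acceptable if point-in-time restore isn't needed):

        ALTER DATABASE LeanDb SET RECOVERY SIMPLE;
        BACKUP DATABASE LeanDb TO DISK = N'D:\backups\LeanDb.bak' WITH INIT, COPY_ONLY;

    Scripting out the objects and bcp-ing the table data is the other common route when an even smaller, plain-text artifact is wanted.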

    Read the article

  • Can't connect back to the wireless network after the password was changed

    - by 7777
    Family changed the network password and some other network settings after new computers were brought into the house because apparently they wouldn't work with what we had. Actually an off-site tech remotely changed it, and I have no idea what he did. My laptop detects the network (it shows up under available networks) but whenever I try to connect it says: Windows is unable to connect to the selected network. The network may no longer be in range. Please refresh the list of available networks, and try to connect again. I wish I could give more details, config settings, but frankly I have no idea what I'm looking for. This is XP (also, not a password issue, I know the password, it's just that I have no idea where to enter it, etc.)

    Read the article

  • First request too slow even though I have a load balancer in the back

    - by adrian7
    I have Apache 2 on CentOS + BIND with a WordPress website on it (e.g. example.com). I have also set up, on another server in a different country, a load balancer (Varnish on :80 + Nginx on 127.0.0.1:8080) for it, whose task is to serve all static content under /wp-content/. Using Simple DNS editor I added an A record for cdn.example.com pointing to that server's IP, so there is no extra work for a second DNS server. Then, using .htaccess, I redirect all requests for jpg|gif|css|js files to cdn.example.com. That works, and all the files are saved on the "cdn" server and served right away. My problem is that the first time I visit example.com (e.g. after restarting the computer or closing the browser) the load time is 1 to 3 seconds, while any subsequent page loads take only 300 to 600 milliseconds. I know it might be a DNS issue, but I have done a cache check on several websites and cdn.example.com resolves to the right IP. Do you have any ideas where I should dig to solve this first-time slowness?
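
    A hedged way to see where the first hit spends its time (DNS lookup vs. TCP connect vs. first byte): run this once cold and once warm from a client machine and compare the fields.

        curl -o /dev/null -s -w 'dns=%{time_namelookup}s connect=%{time_connect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n' http://example.com/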

    Read the article

  • Rolling back Server edition from GRUB List (Ubuntu)

    - by A.Rashad
    It seems I made a mistake and installed the Ubuntu server edition on my home Ubuntu box, and the problem is that GRUB puts the server edition first in the boot menu, especially when I upgrade the kernel or the release. How can I remove the server entry from the GRUB menu permanently? Can I uninstall the Ubuntu server edition?
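
    A hedged sketch of the usual route (package names vary by release, so listing what is actually installed comes first): the server entries come from a server kernel package, and removing it and regenerating the menu makes them disappear.

        dpkg -l | grep -- '-server'                          # which server kernel packages / metapackages are installed
        sudo apt-get remove linux-server linux-image-server  # metapackages; also remove the specific linux-image-<version>-server packages listed above
        sudo update-grub                                     # regenerate the GRUB menu without those kernels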

    Read the article

  • How to downgrade a certain ubuntu 10.04 package (php) back to karmic

    - by Eugene
    Hi! I've updated from 9.10 to 10.04, but unfortunately the PHP provided with 10.04 is not yet supported by Zend Optimizer. As far as I understand, I need to somehow replace the PHP 5.3 package provided in 10.04 with the older PHP 5.2 package provided in 9.10. However, I am not sure whether this is the right way to downgrade PHP, and if it is, I don't know how to replace the 10.04 package with the 9.10 package. Could you please help me with that?
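
    A hedged starting point rather than a full recipe (the version string below is illustrative only, and the related php5-* packages would need the same version): apt can install a specific older version if a repository that still carries the karmic packages is present in sources.list, usually combined with pinning so the next upgrade doesn't pull 5.3 back in.

        apt-cache policy php5                              # every version apt can currently see and where it comes from
        sudo apt-get install php5=5.2.10.dfsg.1-2ubuntu6   # the version string must match one listed above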

    Read the article
