Search Results

Search found 1596 results on 64 pages for 'alias'.

  • Making one of the folders default in Apache

    - by OmerO
    Hello, the file & directory structure of my website is as follows: /Library/WebServer/mysite/joomla .. /Library/WebServer/mysite/wiki .. /Library/WebServer/mysite/forum .. /Library/WebServer/mysite/index.php As you see, there are various applications, each residing in a separate folder. Now, in order to define this structure, I have made this entry in the Apache http-vhosts.config file: ServerName mysite.com DocumentRoot "/Library/WebServer/mysite" And I already have the DirectoryIndex defined: DirectoryIndex index.html index.php, and so on. So far so good, but I want this specific functionality: When someone visits mysite, he/she should automatically be directed to: /Library/WebServer/mysite/joomla (and therefore /Library/WebServer/mysite/joomla/index.php) I don't want to achieve that functionality by putting redirection code inside /Library/WebServer/mysite/index.php or /Library/WebServer/mysite/index.htm because that causes time delays (because of the redirection, of course). But in this case, the only proper way of achieving it seems to be to set DocumentRoot this way: DocumentRoot "/Library/WebServer/mysite/joomla" But when I set it that way, then the other folders (/wiki, /forum, etc.) are simply not served by Apache. To work around it, I put directives like: Alias /wiki /Library/WebServer/mysite/wiki .. Alias /forum /library/WebServer/mysite/forum and it actually did work the way I wanted. But... I still cannot use it that way because in this case I just couldn't manage to make the wiki use Short URLs (as described in link text). So, I have to set the DocumentRoot back to /Library/WebServer/mysite and should be able to assign /Library/WebServer/mysite/joomla as the "default directory" (my own terminology :) Can I do it in Apache? Is there any other way you might suggest? Thanks.
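
    One approach that keeps DocumentRoot where it is and avoids a client-side redirect is an internal rewrite; the sketch below reuses the vhost and paths from the question and is untested against this exact setup:

      <VirtualHost *:80>
          ServerName mysite.com
          DocumentRoot "/Library/WebServer/mysite"
          DirectoryIndex index.html index.php

          RewriteEngine On
          # leave the other applications (and joomla itself) untouched
          RewriteCond %{REQUEST_URI} !^/(joomla|wiki|forum)(/|$)
          # map everything else onto the joomla folder internally, with no HTTP redirect
          RewriteRule ^/(.*)$ /joomla/$1 [PT,L]
      </VirtualHost>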

  • nginx reverse proxy to apache mod_wsgi doesn't work

    - by user11243
    I'm trying to run a django site with apache mod-wsgi with nginx as the front-end to reverse proxy into apache. In my Apache ports.conf file: NameVirtualHost 192.168.0.1:7000 Listen 192.168.0.1:7000 <VirtualHost 192.168.0.1:7000> DocumentRoot /var/apps/example/ ServerName example.com WSGIDaemonProcess example WSGIProcessGroup example Alias /m/ /var/apps/example/forum/skins/ Alias /upfiles/ /var/apps/example/forum/upfiles/ <Directory /var/apps/example/forum/skins> Order deny,allow Allow from all </Directory> WSGIScriptAlias / /var/apps/example/django.wsgi </VirtualHost> In my nginx config: server { listen 80; server_name example.com; location / { include /usr/local/nginx/conf/proxy.conf; proxy_pass http://192.168.0.1:7000; proxy_redirect default; root /var/apps/example/forum/skins/; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } } After restarting both apache and nginx, nothing works, example.com simply hangs or serves index.html in my /var/www/ folder. I'd appreciate any advice to point me in the right direction. I've tried several tutorials online to no avail.
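
    For comparison, a trimmed nginx server block that serves the static aliases directly and proxies only the rest to the Apache/mod_wsgi backend; the paths and upstream address are taken from the question, and the proxy headers are just the usual minimal set rather than anything specific to this setup:

      server {
          listen 80;
          server_name example.com;

          # let nginx serve the static content itself instead of proxying it
          location /m/       { alias /var/apps/example/forum/skins/; }
          location /upfiles/ { alias /var/apps/example/forum/upfiles/; }

          location / {
              proxy_set_header Host            $host;
              proxy_set_header X-Real-IP       $remote_addr;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_pass http://192.168.0.1:7000;
          }
      }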

  • Keytool and SSL Apache config

    - by Safari
    I have a question about SSL certificates... I have generated a certificate using this keytool command: keytool -genkey -alias myalias -keyalg RSA -keysize 2048 and I used this command to export the certificate: keytool -export -alias myalias -file certificate.crt So, I have a .crt file. Now I would like to configure my Apache SSL module. I need to use keytool... at the moment I can't use OpenSSL. How can I configure the module if I have only this certificate.crt file? I see these sections in my ssl.conf: # Server Certificate: # Point SSLCertificateFile at a PEM encoded certificate. If # the certificate is encrypted, then you will be prompted for a # pass phrase. Note that a kill -HUP will prompt again. A new # certificate can be generated using the genkey(1) command. #SSLCertificateFile /etc/pki/tls/certs/localhost.crt # Server Private Key: # If the key is not combined with the certificate, use this # directive to point at the key file. Keep in mind that if # you've both a RSA and a DSA private key you can configure # both in parallel (to also allow the use of DSA ciphers, etc.) #SSLCertificateKeyFile /etc/pki/tls/private/localhost.key # Server Certificate Chain: # Point SSLCertificateChainFile at a file containing the # concatenation of PEM encoded CA certificates which form the # certificate chain for the server certificate. Alternatively # the referenced file can be the same as SSLCertificateFile # when the CA certificates are directly appended to the server # certificate for convinience. #SSLCertificateChainFile /etc/pki/tls/certs/server-chain.crt How can I configure the correct section?
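
    mod_ssl cannot read a Java keystore: SSLCertificateFile and SSLCertificateKeyFile expect PEM files, and the private key still sits in the keystore created with -genkey. A possible first step, assuming the keystore and alias names from the question, is converting it to PKCS#12 (keytool supports this since Java 6):

      # convert the JKS keystore (which holds the private key for "myalias") to PKCS#12
      keytool -importkeystore -srckeystore keystore -srcalias myalias \
              -destkeystore keystore.p12 -deststoretype PKCS12
      # keytool alone cannot emit PEM; the resulting .p12 still has to be unpacked
      # (for example with openssl pkcs12 on another machine) before Apache can use it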

  • Hyper-V vss-writer not making current copies

    - by Martinnj
    I'm using diskshadow to back up live Hyper-V machines on a Windows 2008 server. The backup consists of 3 scripts: the first will create the shadow copies and expose them, the second uses robocopy to copy them to a remote location and the third unexposes the shadow copies again. The first script – the one that runs correctly but fails to do what it's supposed to: # DiskShadow script file to backup VM from a Hyper-V host # First, delete any shadow copies of the drives. System Drives needs to be included. Delete Shadows volume C: Delete Shadows volume D: Delete Shadows volume E: #Ensure that shadow copies will persist after DiskShadow has run set context persistent # make sure the path already exists set verbose on begin backup add volume D: alias VirtualDisk add volume C: alias SystemDrive # verify the "Microsoft Hyper-V VSS Writer" writer will be included in the snapshot # NOTE: The writer GUID is exclusive for this install/machine, must be changed on other machines! writer verify {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de} create end backup # Backup is exposed as drive X: make sure your drive letter X is not in use Expose %VirtualDisk% X: Exit The next is just a robocopy and then an unexpose. Now, when I run the above script, I get no errors from it, except that the "BITS" writer has been excluded because none of its components are included. That's okay because I really only need the Hyper-V writer. Also, I double checked the GUID for the writer; it's correct. During the time when the Hyper-V writer becomes active, 2 things will happen on the guest machines: The Debian/Linux machine will go to a saved state and restore when done, all fine. The Windows guests will be "creating vss snapshot sets" or something similar. Then X: gets exposed and I can copy the .vhd files over. The problem is, for some reason, the VHD files I get over seem to be old copies; they are missing files, users and updates that are on the actual machines. I also tried putting the machines in a saved state manually, which didn't change the outcome. I hope someone here has an idea of how to solve this.
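
    One generic check before the snapshot is taken is the state of the writer itself; both commands below are standard Windows Server 2008 tooling rather than anything specific to this script:

      REM confirm the Hyper-V VSS writer is listed as stable with no last error
      vssadmin list writers

      REM inside an interactive diskshadow session, the writers and their GUIDs
      REM on this host can be listed with:
      REM   list writers status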

  • How can I make non-anti-aliased text look good in Firefox on Mac OS X?

    - by cosmic.osmo
    After being a Windows user for the last 10 years, I got a MacBook Pro, which I'm working on configuring to my liking. I find small-size anti-aliased text to be blurry and hard to read, so I typically disable it. I've found the settings in the General Control Panel, and used TinkerTool to increase the anti-alias threshold size to 18pt. Mac OS X and other applications appear to respect these settings. A problem appears when I use Firefox. By default, it's configured to ignore the Mac OS anti-alias settings. This is changed by going to about:config, and setting gfx.use_text_smoothing_setting = true (default is false). However, even with this setting, it appears Firefox is still rendering the fonts under the assumption that they will be anti-aliased, which results in very odd and uneven spacing, as you can see in this example (pay attention to the placement of the "s" in "Disable"): With anti-aliasing: Without anti-aliasing: How can I configure Firefox to both not use anti-aliasing and to use correct font spacing? I'm using Mac OS X Lion and Firefox 5.
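
    For what it's worth, the about:config change already mentioned can be persisted in a user.js file in the Firefox profile so it survives upgrades and profile resets:

      // user.js - make Firefox honour the OS X text-smoothing threshold
      user_pref("gfx.use_text_smoothing_setting", true);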

  • Time Machine is getting stuck at "Preparing to Back Up" and my Trash isn't emptying

    - by zarose
    I have encountered two separate problems, but I am putting them in the same question in case they are related. First, my Trash would not empty. It seems to be getting stuck on certain files, because I will reset my Macbook and some of the files will be deleted, and then if I remove a file or two at random, more can be deleted. Some of these files had strange characters in their names. I tried changing the names to single characters, but this did not help. Next, I attempted to backup my Macbook using Time Machine. I plugged in the HDD I've been using for this, but every time I try to start the backup, Time Machine gets stuck at "Preparing to Back Up". I definitely need to know how to fix the Time Machine problem, but I am curious how to solve the trash problem as well, and whether or not these problems are related. EDIT: Console.app logged the following this morning before I left on a trip. I did not bring the HDD with me. 6/5/12 7:41:28.312 AM com.apple.backupd: Starting standard backup 6/5/12 7:41:46.877 AM com.apple.backupd: Error -35 while resolving alias to backup target 6/5/12 7:41:58.368 AM com.apple.backupd: Backup failed with error: 19 6/5/12 7:59:08.999 AM com.apple.backupd: Starting standard backup 6/5/12 7:59:10.187 AM com.apple.backupd: Backing up to: /Volumes/Seagate 3TB Mac/Backups.backupdb 6/5/12 7:59:13.308 AM com.apple.backupd: Event store UUIDs don't match for volume: Macintosh HD 6/5/12 7:59:13.331 AM com.apple.backupd: Event store UUIDs don't match for volume: Blank 6/5/12 7:59:13.683 AM com.apple.backupd: Deep event scan at path:/ reason:must scan subdirs|new event db| 6/5/12 8:23:31.807 AM com.apple.backupd: Backup canceled. 6/5/12 8:23:33.373 AM com.apple.backupd: Stopping backup to allow backup destination disk to be unmounted or ejected. 6/5/12 9:51:21.572 PM com.apple.backupd: Starting standard backup 6/5/12 9:51:22.515 PM com.apple.backupd: Error -35 while resolving alias to backup target 6/5/12 9:51:32.741 PM com.apple.backupd: Backup failed with error: 19
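
    Two generic checks that may help on OS X Lion, neither specific to this machine: emptying the Trash from the shell to get past the stuck files, and asking tmutil (which ships with Lion) what Time Machine currently thinks its destination is:

      # bypass Finder and remove the stuck items directly (double-check the path first)
      sudo rm -rf ~/.Trash/*

      # inspect the configured backup destination and the latest completed backup
      tmutil destinationinfo
      tmutil latestbackup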

  • Graphite not running

    - by River
    I'm currently trying to install graphite 0.9.9 on a gentoo box using these instructions from the graphite wiki. Essentially, it fronts graphite using apache and mod_wsgi. Everything seems to have gone well, except that apache / the graphite webapp never seem to return a response to the web browser (the browser continuously waits to load the page). I've turned on the graphite debug info, but the only message in the log files is this, repeated over and over again in info.log (with the pid always changing): Thu Feb 23 01:59:38 2012 :: graphite.wsgi - pid 4810 - reloading search index These instructions have worked for me before to set up graphite on an Ubuntu machine. I suspect that mod_wsgi is dying, but I have confirmed that mod_wsgi works fine when not serving the graphite webapp. This is what my graphite.conf vhost file looks like: WSGISocketPrefix /etc/httpd/wsgi/ <VirtualHost *:80> ServerName # Server name DocumentRoot "/opt/graphite/webapp" ErrorLog /opt/graphite/storage/log/webapp/error.log CustomLog /opt/graphite/storage/log/webapp/access.log common # I've found that an equal number of processes & threads tends # to show the best performance for Graphite (ymmv). WSGIDaemonProcess graphite processes=5 threads=5 display-name='%{GROUP}' inactivity-timeout=120 WSGIProcessGroup graphite WSGIApplicationGroup %{GLOBAL} WSGIImportScript /opt/graphite/conf/graphite.wsgi process-group=graphite application-group=%{GLOBAL} WSGIScriptAlias / /opt/graphite/conf/graphite.wsgi Alias /content/ /opt/graphite/webapp/content/ <Location "/content/"> SetHandler None </Location> # XXX In order for the django admin site media to work you # must change @DJANGO_ROOT@ to be the path to your django # installation, which is probably something like: # /usr/lib/python2.6/site-packages/django Alias /media/ "/usr/lib64/python2.6/site-packages/django/contrib/admin/media/" <Location "/media/"> SetHandler None </Location> # The graphite.wsgi file has to be accessible by apache. It won't # be visible to clients because of the DocumentRoot though. <Directory /opt/graphite/conf/> Order deny,allow Allow from all </Directory> </VirtualHost>
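
    One common cause of exactly this symptom (requests that hang while the daemon processes keep restarting) is a WSGISocketPrefix directory the Apache user cannot write to; the commands below assume the prefix from the vhost above and an Apache user called apache, so adjust both as needed:

      # the socket directory must exist and be writable by the Apache worker user
      mkdir -p /etc/httpd/wsgi
      chown apache:apache /etc/httpd/wsgi
      ls -ld /etc/httpd/wsgi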

  • Nagios send mail when server is down

    - by tzulberti
    I am using Nagios 3.06 to monitor the servers. When a service is critical, it sends a mail, but when a server is down no mail is sent. Even if all the services go to critical state, no mail is sent. I have the following configuration: define command {     command_name notify-host-by-email     command_line python /etc/nagios3/send_mail.py "[Nagios] $HOSTNAME$" "******** Nagios ****\n\n Host: $HOSTNAME$\n Description: the server is down" } define command{     command_name notify-service-by-email     command_line python /etc/nagios3/send_mail.py "[Nagios] $HOSTNAME$: $SERVICEDESC$ ($NOTIFICATIONTYPE$)" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\nDate/Time: $LONGDATETIME$\nAdditional Info:$SERVICEOUTPUT$" } The Python script is a script to send a mail. It works if I execute it from the command line, but it doesn't send an email from Nagios. What am I doing wrong? UPDATE: The contact data is: define contact{     contact_name root     alias Root     service_notification_period 24x7     host_notification_period 24x7     service_notification_options w,u,c,r     host_notification_options d,r     service_notification_commands notify-service-by-email     host_notification_commands notify-host-by-email     email [email protected] } define contactgroup{     contactgroup_name admins     alias Nagios Administrators     members root }
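
    Host notifications also depend on the host objects themselves: they must reference the contact group and have host notifications enabled. A sketch of the host-side settings, with host_name and address as placeholders:

      define host {
          use                    generic-host
          host_name              server1
          address                192.0.2.10
          contact_groups         admins        ; the contact group defined above
          notification_period    24x7
          notification_options   d,u,r         ; down, unreachable, recovery
          notifications_enabled  1
      }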

  • In Icinga (Nagios), how do I configure hosts with multiple IPs?

    - by gertvdijk
    I'm setting up Icinga (Nagios fork) and I have some machines with multiple interfaces. Some services are only listening on one of them and to check them correctly, I like to know if it's possible to have multiple IP addresses configured for a single host in Icinga. Here's a minimal example: Remote Server: eth0: 1.2.3.4 (public IP) eth1: 10.1.2.3 (private IP, secure tunnel) Apache listening on 1.2.3.4:80. (public only) OpenSSH listening on 10.1.2.3:22. (internal network only) Postfix SMTP listening on 0.0.0.0:25 (all interfaces) Icinga Server: eth0: 10.2.3.4 (private IP, internet access) Now if I define a host: define host { use generic-host host_name server1 alias server1.gertvandijk.net address 10.1.2.3 } This will not check the HTTP status correctly. And defining an additional host: define host { use generic-host host_name server1-public alias server1.gertvandijk.net address 1.2.3.4 } will check everything, but shows up as two independent hosts. Now I want to 'aggregate' these two hosts to show up as a single host, yet providing an easy configuration to check the services on their proper address. What is the most elegant number-of-configuration-lines-saving solution to this? I read about several plugins available to workaround this, but I can't figure out what is the current way to address it. Solutions go back to 2003, but I'm running Icinga 1.7.1, already capable of the address6 option, yet that triggers IPv6-only resolving on the hostname... Ideally, I wish to configure Icinga to be intelligent enough to know that the Postfix instance running on 10.1.2.3:25 is the same as 1.2.3.4:25 and thus not triggering two alarms. I guess this must have been tackled before and sysadmins have it set up now. Please share your solution to this. Thanks! :)
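
    One pattern that avoids a second host object is a custom object variable for the second address, which individual services can then reference; check_ssh_by_addr below is a placeholder for whatever command wrapper accepts an explicit address:

      define host {
          use        generic-host
          host_name  server1
          address    1.2.3.4        ; public interface
          _ADDRESS2  10.1.2.3       ; custom variable, exposed as $_HOSTADDRESS2$
      }

      define service {
          use                  generic-service
          host_name            server1
          service_description  SSH (internal)
          check_command        check_ssh_by_addr!$_HOSTADDRESS2$
      }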

  • phpmyadmin “Forbidden: You don't have permission to access /phpmyadmin on this server.”

    - by Caterpillar
    I need to modify the file /etc/httpd/conf.d/phpMyAdmin.conf in order to allow remote users (not only localhost) to login # phpMyAdmin - Web based MySQL browser written in php # # Allows only localhost by default # # But allowing phpMyAdmin to anyone other than localhost should be considered # dangerous unless properly secured by SSL Alias /phpMyAdmin /usr/share/phpMyAdmin Alias /phpmyadmin /usr/share/phpMyAdmin <Directory "/usr/share/phpMyAdmin/"> Options Indexes FollowSymLinks MultiViews AllowOverride all Order Allow,Deny Allow from all </Directory> <Directory /usr/share/phpMyAdmin/setup/> <IfModule mod_authz_core.c> # Apache 2.4 <RequireAny> Require ip 127.0.0.1 Require ip ::1 </RequireAny> </IfModule> <IfModule !mod_authz_core.c> # Apache 2.2 Order Deny,Allow Allow from All Allow from 127.0.0.1 Allow from ::1 </IfModule> </Directory> # These directories do not require access over HTTP - taken from the original # phpMyAdmin upstream tarball # <Directory /usr/share/phpMyAdmin/libraries/> Order Deny,Allow Deny from All Allow from None </Directory> <Directory /usr/share/phpMyAdmin/setup/lib/> Order Deny,Allow Deny from All Allow from None </Directory> <Directory /usr/share/phpMyAdmin/setup/frames/> Order Deny,Allow Deny from All Allow from None </Directory> # This configuration prevents mod_security at phpMyAdmin directories from # filtering SQL etc. This may break your mod_security implementation. # #<IfModule mod_security.c> # <Directory /usr/share/phpMyAdmin/> # SecRuleInheritance Off # </Directory> #</IfModule> When I get into phpmyadmin webpage, I am not prompted for user and password, before getting the error message: Forbidden: You don't have permission to access /phpmyadmin on this server. My system is Fedora 20
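
    Fedora 20 ships Apache 2.4, where the Order/Allow lines in the main <Directory> block are only honoured if mod_access_compat is loaded; the 2.4-style equivalent of "allow everyone" for that block would look roughly like this:

      <Directory "/usr/share/phpMyAdmin/">
          <IfModule mod_authz_core.c>
              # Apache 2.4
              Require all granted
          </IfModule>
          <IfModule !mod_authz_core.c>
              # Apache 2.2
              Order Allow,Deny
              Allow from All
          </IfModule>
      </Directory>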

  • Enable SSL with Jetty 8

    - by Jerec TheSith
    I received certificates from GoDaddy and I'm trying to enable SSL with Jetty, but I receive an error 107 (SSL protocol error) when connecting to https://server.com:8443 I generated the keystore using these commands: keytool -keystore keystore -import -alias gd_bundle -trustcacerts -file gd_bundle.crt keytool -keystore keystore -import -alias server.com -trustcacerts -file server.com.crt and placed it in /opt/jetty/etc/, and used the following configuration in jetty.xml: <Call name="addConnector"> <Arg> <New class="org.eclipse.jetty.server.ssl.SslSelectChannelConnector"> <Arg> <New class="org.eclipse.jetty.http.ssl.SslContextFactory"> <Set name="keyStore"><SystemProperty name="jetty.home" default="."/>/etc/keystore</Set> <Set name="keyStorePassword">**password1**</Set> <Set name="keyManagerPassword">**password1**</Set> <Set name="trustStore"><SystemProperty name="jetty.home" default="."/>/etc/keystore</Set> <Set name="trustStorePassword">**password1**</Set> </New> </Arg> <Set name="port">8443</Set> <Set name="maxIdleTime">30000</Set> <Set name="Acceptors">2</Set> <Set name="statsOn">false</Set> <Set name="lowResourcesConnections">20000</Set> <Set name="lowResourcesMaxIdleTime">5000</Set> </New> </Arg> </Call> Am I missing something in Jetty's configuration?
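
    For Jetty the keystore must contain the private key, i.e. the signed certificate has to be imported onto the same alias that was created with keytool -genkey, after the CA bundle. A quick way to verify, using the file names from the question (<private-key-alias> is a placeholder for that alias):

      # the server certificate entry must show "Entry type: PrivateKeyEntry",
      # not "trustedCertEntry"
      keytool -keystore keystore -list -v

      # if it does not, re-import the signed certificate onto the private-key alias
      keytool -keystore keystore -import -trustcacerts -alias <private-key-alias> -file server.com.crt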

  • 400 error with nginx subdomains over https

    - by aquavitae
    Not sure what I'm doing wrong, but I'm trying to get gunicorn/django through nginx using only https. Here is my nginx configuration: upstream app_server { server unix:/srv/django/app/run/gunicorn.sock fail_timeout=0; } server { listen 80; return 301 https://$host$request_uri; } server { listen 443; server_name app.mydomain.com; ssl on; ssl_certificate /etc/nginx/ssl/nginx.crt; ssl_certificate_key /etc/nginx/ssl/nginx.key; client_max_body_size 4G; access_log /srv/django/app/logs/nginx-access.log; error_log /srv/django/app/logs/nginx-error.log; location /static/ { alias /srv/django/app/data/static/; } location /media/ { alias /wrv/django/app/data/media/; } location / { proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto https; proxy_set_header Host $http_host; proxy_pass http://app_server; } } I get a 400 error on app.mydomain.com, but the app is published on mydomain.com. Is there an error in my configuration?
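
    Two details that commonly produce 400s in this kind of setup are a missing server_name on the port-80 block and a listener that is not marked as ssl; a sketch of both, reusing the names from the question:

      server {
          listen 80;
          server_name app.mydomain.com;
          return 301 https://$host$request_uri;
      }

      server {
          # "listen 443 ssl" avoids "plain HTTP request was sent to HTTPS port" 400s
          listen 443 ssl;
          server_name app.mydomain.com;
          # ... certificates, logs and locations as in the block above ...
      }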

  • Accessing a webpage folder with .htaccess in it via apache webdav?

    - by pingo
    I have setup webdav access in order to enable an external user to upload the content of his web page to his folder on my server that is served by apache to the web. This way he could update his web page via webdav. Now the problem is that the user requires a .htaccess file and of course .htaccess breaks webdav probably because it overrides settings. (new files cannot be uploaded anymore via webdav if below specified .htaccess exists) I am running Apache2.2.17 and this is my webdav config: Alias /folderDAV "d:/wamp/www/somewebsite/" <Location /folderDAV> Order Allow,Deny Allow from all Dav On AuthType Digest AuthName DAV-upload AuthUserFile "D:/wamp/passtore/user.passwd" AuthDigestProvider file require valid-user </Location> This config is part of my naive solution to fixing this problem. The idea was to specify an alias to the web page folder where webdav would be enabled and then set AllowOverride to none so that the .htaccess would have no effect. Of course I then found out that in <Location /> AllowOverride directive is not valid. The .htaccess file looks like this: #opencart settings Options +FollowSymlinks Options -Indexes <FilesMatch "\.(tpl|ini)"> Order deny,allow Deny from all </FilesMatch> RewriteEngine On RewriteBase / RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)\?*$ index.php?_route_=$1 [L,QSA] ErrorDocument 403 /403.html deny from 1.1.1.1/19 allow from 2.2.2.2 What would be the solution here? I would like to have the web page accessible from the web but at the same time be able to access and modify it via apache's webdav (with digest auth). How would I do that? Also if possible I would like a solution that permits the existence of the .htaccess so that the user still has the power to setup access rules for his web page.
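
    Because AllowOverride is only valid inside <Directory>, and a <Directory> section defined inside a virtual host only applies to requests served by that vhost, one workable pattern is to serve DAV from its own vhost with overrides switched off; the hostname and port below are placeholders:

      <VirtualHost *:8080>
          ServerName dav.somewebsite.example
          DocumentRoot "d:/wamp/www/somewebsite"
          <Directory "d:/wamp/www/somewebsite">
              Dav On
              # .htaccess is ignored for requests coming through this vhost only
              AllowOverride None
              AuthType Digest
              AuthName DAV-upload
              AuthUserFile "D:/wamp/passtore/user.passwd"
              AuthDigestProvider file
              Require valid-user
          </Directory>
      </VirtualHost>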

  • 404 with serving static files in a custom nginx configuration

    - by code90
    In my nginx configuration, I have the following: location /admin/ { alias /usr/share/php/wtlib_4/apps/admin/; location ~* .*\.php$ { try_files $uri $uri/ @php_admin; } location ~* \.(js|css|png|jpg|jpeg|gif|ico|pdf|zip|rar|air)$ { expires 7d; access_log off; } } location ~ ^/admin/modules/([^/]+)(.*\.(html|js|json|css|png|jpg|jpeg|gif|ico|pdf|zip|rar|air))$ { alias /usr/share/php/wtlib_4/modules/$1/admin/$2; } location ~ ^/admin/modules/([^/]+)(.*)$ { try_files $uri @php_admin_modules; } location @php_admin { if ($fastcgi_script_name ~ /admin(/.*\.php)$) { set $valid_fastcgi_script_name $1; } fastcgi_pass $byr_pass; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /usr/share/php/wtlib_4/apps/admin$valid_fastcgi_script_name; fastcgi_param REDIRECT_STATUS 200; include /etc/nginx/fastcgi_params; } location @php_admin_modules { if ($fastcgi_script_name ~ /admin/modules/([^/]+)(.*)$) { set $byr_module $1; set $byr_rest $2; } fastcgi_pass $byr_pass; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /usr/share/php/wtlib_4/modules/$byr_module/admin$byr_rest; fastcgi_param REDIRECT_STATUS 200; include /etc/nginx/fastcgi_params; } Following is the requested url which ends up with "404": http://www.{domainname}.com/admin/modules/cms/styles/cms.css Following is the error log: [error] 19551#0: *28 open() "/usr/share/php/wtlib_4/apps/admin/modules/cms/styles/cms.css" failed (2: No such file or directory), client: xxx.xxx.xxx.xxx, server: {domainname}.com, request: "GET /admin/modules/cms/styles/cms.css HTTP/1.1", host: "www.{domainname}.com" Following urls works fine: http://www.{domainname}.com/admin/modules/store/?a=manage http://www.{domainname}.com/admin/modules/cms/?a=cms.load Can anyone see what the problem could be? Thanks. PS. I am trying to migrate existing sites from apache to nginx.
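
    Since the error log shows the request being resolved against the /admin/ alias instead of the modules alias, one thing worth trying is nesting the modules regex inside the /admin/ prefix block ahead of the generic static-extension location so it is matched first; this is a sketch assembled from the locations above, not a tested configuration:

      location /admin/ {
          alias /usr/share/php/wtlib_4/apps/admin/;

          # check the modules path before the generic static-file regex below
          location ~ ^/admin/modules/([^/]+)(.*\.(html|js|json|css|png|jpg|jpeg|gif|ico|pdf|zip|rar|air))$ {
              alias /usr/share/php/wtlib_4/modules/$1/admin/$2;
          }

          location ~* \.(js|css|png|jpg|jpeg|gif|ico|pdf|zip|rar|air)$ {
              expires 7d;
              access_log off;
          }

          # ... php handling as in the original block ...
      }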

  • Timeperiod Setting

    - by Alvin G. Matunog
    I am running Nagios 3.2.3 on CentOS 5.7 32bit and I have a bit of a problem scheduling timeperiods. Please help me. Objective: to stop monitoring during server restarts (the server is restarted automatically because of updates and system backup). The restart is scheduled every 1st Sunday of every month. My monitoring runs 24x7, but during restarts I want to stop the monitoring and resume after 30 mins. So every 1st Sunday I want my schedule to be 00:00-11:30,12:00-24:00. This means that it will stop at 11:30 and resume at 12:00 noon. If I set this on every Sunday there is no problem. But if I set this time on every 1st Sunday, it stops at 11:30 but resumes on the next day (Monday) and not at 12:00 noon on Sunday. I don't know what I am missing. If I set it on regular weekdays there is no problem, but on offset weekdays (1st Sunday) it doesn't work the way it should. Here is my definition: define timeperiod{ name 1st_sunday timeperiod_name 1st_sunday alias No monitoring every 1st Sunday thursday 1 00:00-11:30,12:00-24:00 } define timeperiod{ timeperiod_name irregular alias regular checking use 1st_sunday sunday 08:30-22:00 monday 08:30-22:00 tuesday 08:30-22:00 wednesday 08:30-22:00 thursday 08:30-19:10,19:20-22:00 friday 08:30-22:00 saturday 08:30-22:00 } Can anyone help me? Please? Thank you.
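
    For reference, the Nagios 3 exception syntax for "the first Sunday of every month" is "<weekday> <ordinal>", so if the "thursday 1" line above is a paste slip, the exception would be expected to read:

      define timeperiod{
          name             1st_sunday
          timeperiod_name  1st_sunday
          alias            No monitoring every 1st Sunday
          sunday 1         00:00-11:30,12:00-24:00   ; 1st Sunday of every month
      }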

  • apache subdomain configuration

    - by terrid25
    I seem to be having a small problem with setting up a subdomain in apache under CentOS. I have the following: <VirtualHost *:80> ServerName www.domain.co.uk ServerAlias domain.co.uk dev.domain.co.uk DocumentRoot "/var/www/html/domain/web" DirectoryIndex index.php Alias /sf /var/www/html/symfony14/web/sf <Directory "/var/www/html/domain/web"> AllowOverride All Allow from All </Directory> </VirtualHost> <Directory "/var/www/html/symfony14/web/sf"> AllowOverride All Allow from All </Directory> <VirtualHost *:80> ServerName test.domain.co.uk DocumentRoot "/var/www/html/domain_test/web" DirectoryIndex index.php Alias /sf /var/www/html/symfony14/web/sf <Directory "/var/www/html/domain_test/web"> AllowOverride All Allow from All </Directory> </VirtualHost> So going to www.domain.co.uk and domain.co.uk display the contents from /var/www/html/domain, but going to test.domain.co.uk also displays the same folder contents. Is this because of the ServerAlias ? Thanks
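
    With Apache 2.2, name-based virtual hosts on the same address need a matching NameVirtualHost line, otherwise every request falls into the first (default) vhost; the snippet below only repeats names already present in the question:

      # must appear once for the address:port used by the <VirtualHost *:80> sections
      NameVirtualHost *:80

    After a reload, running apachectl -S (or httpd -S) prints the parsed vhost map, which shows directly whether test.domain.co.uk resolves to the second block or falls through to the default.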

  • Spring Security beginner's question. Build failed

    - by Nitesh Panchal
    Hello, I downloaded all jar files for Spring Security 3.0 and added them to my lib folder in Netbeans 6.8. Then i added Spring framework to my web application and tried to modify applicationContext.xml as given in the pdf that shipped with Spring Security. This is it's code :- <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:security="http://www.springframework.org/schema/security" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/security http://www.springframework.org/schema/security/spring-security-3.0.xsd"> <http auto-config='true'> <intercept-url pattern="/**" access="ROLE_USER" /> </http> <authentication-manager> <authentication-provider> <user-service> <user name="jimi" password="jimispassword" authorities="ROLE_USER, ROLE_ADMIN" /> <user name="bob" password="bobspassword" authorities="ROLE_USER" /> </user-service> </authentication-provider> </authentication-manager> <!--bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer" p:location="/WEB-INF/jdbc.properties" /> <bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource" p:driverClassName="${jdbc.driverClassName}" p:url="${jdbc.url}" p:username="${jdbc.username}" p:password="${jdbc.password}" /--> <!-- ADD PERSISTENCE SUPPORT HERE (jpa, hibernate, etc) --> </beans> This is my web.xml :- <?xml version="1.0" encoding="UTF-8"?> <web-app version="3.0" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"> <listener> <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class> </listener> <servlet> <servlet-name>dispatcher</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class> <load-on-startup>2</load-on-startup> </servlet> <servlet-mapping> <servlet-name>dispatcher</servlet-name> <url-pattern>*.htm</url-pattern> </servlet-mapping> <session-config> <session-timeout> 30 </session-timeout> </session-config> <context-param> <param-name>contextConfigLocation</param-name> <param-value>/WEB-INF/applicationContext.xml</param-value> </context-param> <listener> <listener-class> org.springframework.web.context.ContextLoaderListener </listener-class> </listener> <listener> <listener-class> org.springframework.web.context.request.RequestContextListener </listener-class> </listener> <filter> <filter-name>springSecurityFilterChain</filter-name> <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class> </filter> <filter-mapping> <filter-name>springSecurityFilterChain</filter-name> <url-pattern>/*</url-pattern> </filter-mapping> <welcome-file-list> <welcome-file>redirect.jsp</welcome-file> </welcome-file-list> </web-app> My web application doesn't compile. I simply keep getting build failed. 
This is the stacktrace :- INFO: PWC1412: WebModule[/SpringSecurityDemo] ServletContext.log():Initializing Spring root WebApplicationContext INFO: Root WebApplicationContext: initialization started INFO: Refreshing org.springframework.web.context.support.XmlWebApplicationContext@108026d: display name [Root WebApplicationContext]; startup date [Mon Mar 22 18:23:37 PDT 2010]; root of context hierarchy INFO: Loading XML bean definitions from ServletContext resource [/WEB-INF/applicationContext.xml] SEVERE: Context initialization failed org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: Line 11 in XML document from ServletContext resource [/WEB-INF/applicationContext.xml] is invalid; nested exception is org.xml.sax.SAXParseException: cvc-complex-type.2.4.a: Invalid content was found starting with element 'http'. One of '{"http://www.springframework.org/schema/beans":description, "http://www.springframework.org/schema/beans":import, "http://www.springframework.org/schema/beans":alias, "http://www.springframework.org/schema/beans":bean, WC[##other:"http://www.springframework.org/schema/beans"]}' is expected. at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:369) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:313) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:290) at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:142) at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:158) at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:124) at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:92) at org.springframework.context.support.AbstractRefreshableApplicationContext.refreshBeanFactory(AbstractRefreshableApplicationContext.java:97) at org.springframework.context.support.AbstractApplicationContext.obtainFreshBeanFactory(AbstractApplicationContext.java:411) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:338) at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:251) at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:190) at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:45) at org.apache.catalina.core.StandardContext.contextListenerStart(StandardContext.java:4591) at com.sun.enterprise.web.WebModule.contextListenerStart(WebModule.java:535) at org.apache.catalina.core.StandardContext.start(StandardContext.java:5193) at com.sun.enterprise.web.WebModule.start(WebModule.java:499) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:928) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:912) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:694) at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1933) at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1605) at com.sun.enterprise.web.WebApplication.start(WebApplication.java:90) at org.glassfish.internal.data.EngineRef.start(EngineRef.java:126) at 
org.glassfish.internal.data.ModuleInfo.start(ModuleInfo.java:241) at org.glassfish.internal.data.ApplicationInfo.start(ApplicationInfo.java:236) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:339) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:183) at org.glassfish.deployment.admin.DeployCommand.execute(DeployCommand.java:272) at com.sun.enterprise.v3.admin.CommandRunnerImpl$1.execute(CommandRunnerImpl.java:305) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:320) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:1176) at com.sun.enterprise.v3.admin.CommandRunnerImpl.access$900(CommandRunnerImpl.java:83) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1235) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1224) at com.sun.enterprise.v3.admin.AdminAdapter.doCommand(AdminAdapter.java:365) at com.sun.enterprise.v3.admin.AdminAdapter.service(AdminAdapter.java:204) at com.sun.grizzly.tcp.http11.GrizzlyAdapter.service(GrizzlyAdapter.java:166) at com.sun.enterprise.v3.server.HK2Dispatcher.dispath(HK2Dispatcher.java:100) at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:245) at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:791) at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:693) at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:954) at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:170) at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:135) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:102) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:88) at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:76) at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:53) at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:57) at com.sun.grizzly.ContextTask.run(ContextTask.java:69) at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:330) at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:309) at java.lang.Thread.run(Thread.java:619) Caused by: org.xml.sax.SAXParseException: cvc-complex-type.2.4.a: Invalid content was found starting with element 'http'. One of '{"http://www.springframework.org/schema/beans":description, "http://www.springframework.org/schema/beans":import, "http://www.springframework.org/schema/beans":alias, "http://www.springframework.org/schema/beans":bean, WC[##other:"http://www.springframework.org/schema/beans"]}' is expected. 
at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:195) at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.error(ErrorHandlerWrapper.java:131) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:384) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:318) at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator$XSIErrorReporter.reportError(XMLSchemaValidator.java:410) at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator.reportSchemaError(XMLSchemaValidator.java:3165) at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator.handleStartElement(XMLSchemaValidator.java:1777) at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator.startElement(XMLSchemaValidator.java:685) at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.scanStartElement(XMLNSDocumentScannerImpl.java:400) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2747) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:648) at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:140) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:510) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:807) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:107) at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:225) at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:283) at org.springframework.beans.factory.xml.DefaultDocumentLoader.loadDocument(DefaultDocumentLoader.java:78) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:361) ... 
53 more SEVERE: PWC1306: Startup of context /SpringSecurityDemo failed due to previous errors SEVERE: PWC1305: Exception during cleanup after start failed org.apache.catalina.LifecycleException: PWC2769: Manager has not yet been started at org.apache.catalina.session.StandardManager.stop(StandardManager.java:892) at org.apache.catalina.core.StandardContext.stop(StandardContext.java:5383) at com.sun.enterprise.web.WebModule.stop(WebModule.java:530) at org.apache.catalina.core.StandardContext.start(StandardContext.java:5211) at com.sun.enterprise.web.WebModule.start(WebModule.java:499) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:928) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:912) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:694) at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1933) at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1605) at com.sun.enterprise.web.WebApplication.start(WebApplication.java:90) at org.glassfish.internal.data.EngineRef.start(EngineRef.java:126) at org.glassfish.internal.data.ModuleInfo.start(ModuleInfo.java:241) at org.glassfish.internal.data.ApplicationInfo.start(ApplicationInfo.java:236) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:339) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:183) at org.glassfish.deployment.admin.DeployCommand.execute(DeployCommand.java:272) at com.sun.enterprise.v3.admin.CommandRunnerImpl$1.execute(CommandRunnerImpl.java:305) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:320) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:1176) at com.sun.enterprise.v3.admin.CommandRunnerImpl.access$900(CommandRunnerImpl.java:83) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1235) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1224) at com.sun.enterprise.v3.admin.AdminAdapter.doCommand(AdminAdapter.java:365) at com.sun.enterprise.v3.admin.AdminAdapter.service(AdminAdapter.java:204) at com.sun.grizzly.tcp.http11.GrizzlyAdapter.service(GrizzlyAdapter.java:166) at com.sun.enterprise.v3.server.HK2Dispatcher.dispath(HK2Dispatcher.java:100) at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:245) at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:791) at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:693) at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:954) at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:170) at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:135) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:102) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:88) at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:76) at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:53) at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:57) at com.sun.grizzly.ContextTask.run(ContextTask.java:69) at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:330) at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:309) at java.lang.Thread.run(Thread.java:619) SEVERE: 
ContainerBase.addChild: start: org.apache.catalina.LifecycleException: org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: Line 11 in XML document from ServletContext resource [/WEB-INF/applicationContext.xml] is invalid; nested exception is org.xml.sax.SAXParseException: cvc-complex-type.2.4.a: Invalid content was found starting with element 'http'. One of '{"http://www.springframework.org/schema/beans":description, "http://www.springframework.org/schema/beans":import, "http://www.springframework.org/schema/beans":alias, "http://www.springframework.org/schema/beans":bean, WC[##other:"http://www.springframework.org/schema/beans"]}' is expected. at org.apache.catalina.core.StandardContext.start(StandardContext.java:5216) at com.sun.enterprise.web.WebModule.start(WebModule.java:499) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:928) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:912) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:694) at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1933) at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1605) at com.sun.enterprise.web.WebApplication.start(WebApplication.java:90) at org.glassfish.internal.data.EngineRef.start(EngineRef.java:126) at org.glassfish.internal.data.ModuleInfo.start(ModuleInfo.java:241) at org.glassfish.internal.data.ApplicationInfo.start(ApplicationInfo.java:236) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:339) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:183) at org.glassfish.deployment.admin.DeployCommand.execute(DeployCommand.java:272) at com.sun.enterprise.v3.admin.CommandRunnerImpl$1.execute(CommandRunnerImpl.java:305) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:320) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:1176) at com.sun.enterprise.v3.admin.CommandRunnerImpl.access$900(CommandRunnerImpl.java:83) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1235) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1224) at com.sun.enterprise.v3.admin.AdminAdapter.doCommand(AdminAdapter.java:365) at com.sun.enterprise.v3.admin.AdminAdapter.service(AdminAdapter.java:204) at com.sun.grizzly.tcp.http11.GrizzlyAdapter.service(GrizzlyAdapter.java:166) at com.sun.enterprise.v3.server.HK2Dispatcher.dispath(HK2Dispatcher.java:100) at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:245) at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:791) at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:693) at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:954) at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:170) at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:135) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:102) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:88) at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:76) at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:53) at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:57) at com.sun.grizzly.ContextTask.run(ContextTask.java:69) at 
com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:330) at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:309) at java.lang.Thread.run(Thread.java:619) Caused by: org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: Line 11 in XML document from ServletContext resource [/WEB-INF/applicationContext.xml] is invalid; nested exception is org.xml.sax.SAXParseException: cvc-complex-type.2.4.a: Invalid content was found starting with element 'http'. One of '{"http://www.springframework.org/schema/beans":description, "http://www.springframework.org/schema/beans":import, "http://www.springframework.org/schema/beans":alias, "http://www.springframework.org/schema/beans":bean, WC[##other:"http://www.springframework.org/schema/beans"]}' is expected. at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:369) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:313) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:290) at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:142) at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:158) at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:124) at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:92) at org.springframework.context.support.AbstractRefreshableApplicationContext.refreshBeanFactory(AbstractRefreshableApplicationContext.java:97) at org.springframework.context.support.AbstractApplicationContext.obtainFreshBeanFactory(AbstractApplicationContext.java:411) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:338) at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:251) at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:190) at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:45) at org.apache.catalina.core.StandardContext.contextListenerStart(StandardContext.java:4591) at com.sun.enterprise.web.WebModule.contextListenerStart(WebModule.java:535) at org.apache.catalina.core.StandardContext.start(StandardContext.java:5193) ... 38 more Caused by: org.xml.sax.SAXParseException: cvc-complex-type.2.4.a: Invalid content was found starting with element 'http'. One of '{"http://www.springframework.org/schema/beans":description, "http://www.springframework.org/schema/beans":import, "http://www.springframework.org/schema/beans":alias, "http://www.springframework.org/schema/beans":bean, WC[##other:"http://www.springframework.org/schema/beans"]}' is expected. 
at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:195) at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.error(ErrorHandlerWrapper.java:131) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:384) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:318) at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator$XSIErrorReporter.reportError(XMLSchemaValidator.java:410) at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator.reportSchemaError(XMLSchemaValidator.java:3165) at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator.handleStartElement(XMLSchemaValidator.java:1777) at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator.startElement(XMLSchemaValidator.java:685) at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.scanStartElement(XMLNSDocumentScannerImpl.java:400) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2747) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:648) at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:140) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:510) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:807) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:107) at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:225) at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:283) at org.springframework.beans.factory.xml.DefaultDocumentLoader.loadDocument(DefaultDocumentLoader.java:78) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:361) ... 53 more
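
    The schema error in the trace says the <http> element is being validated against the beans schema; since the security schema is only bound to the "security" prefix in the configuration above, the Spring Security elements need that prefix (or the security namespace has to be made the default). A sketch of the prefixed form of the same configuration:

      <security:http auto-config="true">
          <security:intercept-url pattern="/**" access="ROLE_USER" />
      </security:http>

      <security:authentication-manager>
          <security:authentication-provider>
              <security:user-service>
                  <security:user name="jimi" password="jimispassword"
                                 authorities="ROLE_USER, ROLE_ADMIN" />
                  <security:user name="bob" password="bobspassword"
                                 authorities="ROLE_USER" />
              </security:user-service>
          </security:authentication-provider>
      </security:authentication-manager>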

  • What steps can you take to ensure sane build environments when compiling software?

    - by Chris Adams
    Hi guys, I've been stuck with a compilation problem when building a standardised virtual machine on CentOS 5.4, and I'm in the dark here as to a) why this error is occurring, and b) how to fix it, and in the hope that someone else stumbles across this problem too, I'm hoping someone can help me find the solution here. I'm getting a configure: error: newly created file is older than distributed files! error when trying to compile Ruby Enterprise like below when I try to run the installer, and the solutions offered to on the forums (of checking the tine, and touching the files to update the time associated with them) don't seem to be helping here. What steps can I take to work out what the cause of this problem? [vagrant@vagrant-centos-5 ruby-enterprise-1.8.7-2009.10]$ sudo ./installer Welcome to the Ruby Enterprise Edition installer This installer will help you install Ruby Enterprise Edition 1.8.7-2009.10. Don't worry, none of your system files will be touched if you don't want them to, so there is no risk that things will screw up. You can expect this from the installation process: 1. Ruby Enterprise Edition will be compiled and optimized for speed for this system. 2. Ruby on Rails will be installed for Ruby Enterprise Edition. 3. You will learn how to tell Phusion Passenger to use Ruby Enterprise Edition instead of regular Ruby. Press Enter to continue, or Ctrl-C to abort. Checking for required software... * C compiler... found at /usr/bin/gcc * C++ compiler... found at /usr/bin/g++ * The 'make' tool... found at /usr/bin/make * Zlib development headers... found * OpenSSL development headers... found * GNU Readline development headers... found -------------------------------------------- Target directory Where would you like to install Ruby Enterprise Edition to? (All Ruby Enterprise Edition files will be put inside that directory.) [/opt/ruby-enterprise] : -------------------------------------------- Compiling and optimizing the memory allocator for Ruby Enterprise Edition In the mean time, feel free to grab a cup of coffee. ./configure --prefix=/opt/ruby-enterprise --disable-dependency-tracking checking build system type... i686-pc-linux-gnu checking host system type... i686-pc-linux-gnu checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... configure: error: newly created file is older than distributed files! Check your system clock This is a virtual machine running on virtualbox, and the time of the host and the virtual machine are identical, and up to date. I've also tried running this after updating time with an ntp-client, so no avail. I tried this after reading this post here of someone having a similar problem [vagrant@vagrant-centos-5 ruby-enterprise-1.8.7-2009.10]$ date Tue Apr 27 08:09:05 BST 2010 The other approach I've tried is to touch the top level the files in the build folder like suggested here, but this hasn't worked either (an to be honest, I'm not sure why it would have worked either) [vagrant@vagrant-centos-5 ruby-enterprise-1.8.7-2009.10]$ sudo touch ruby-enterprise-1.8.7-2009.10/* I'm not sure what I can do next here - the problem seems to be the bash configure script that returns this error error: newly created file is older than distributed files!, at line :2214 { echo "$as_me:$LINENO: checking whether build environment is sane" >&5 echo $ECHO_N "checking whether build environment is sane... 
$ECHO_C" >&6; } # Just in case sleep 1 echo timestamp > conftest.file # Do `set' in a subshell so we don't clobber the current shell's # arguments. Must try -L first in case configure is actually a # symlink; some systems play weird games with the mod time of symlinks # (eg FreeBSD returns the mod time of the symlink's containing # directory). if ( set X `ls -Lt $srcdir/configure conftest.file 2> /dev/null` if test "$*" = "X"; then # -L didn't work. set X `ls -t $srcdir/configure conftest.file` fi rm -f conftest.file if test "$*" != "X $srcdir/configure conftest.file" \ && test "$*" != "X conftest.file $srcdir/configure"; then # If neither matched, then we have a broken ls. This can happen # if, for instance, CONFIG_SHELL is bash and it inherits a # broken ls alias from the environment. This has actually # happened. Such a system could not be considered "sane". { { echo "$as_me:$LINENO: error: ls -t appears to fail. Make sure there is not a broken alias in your environment" >&5 echo "$as_me: error: ls -t appears to fail. Make sure there is not a broken alias in your environment" >&2;} { (exit 1); exit 1; }; } fi ### PROBLEM LINE #### # this line is the problem line - this is returned true, sometimes it isn't and I can't # see a pattern that that determines when this will test will pass or not. test "$2" = conftest.file ) then # Ok. : else { { echo "$as_me:$LINENO: error: newly created file is older than distributed files! Check your system clock" >&5 echo "$as_me: error: newly created file is older than distributed files! Check your system clock" >&2;} { (exit 1); exit 1; }; } fi the thing that makes this really frustrating is that this script works sometimes, when the VM has been running for an hour or so it works, but not at boot. There's nothing I see in the crontab that suggests any hourly tasks are run that might change the state of the system enough make a difference to this script working. I'm totally at a loss when it comes to debugging beyond here. What's the best approach to take here? Thanks

    Read the article

  • Exchange 2010 PST-Export fails

    - by Chake
I'm horribly failing at exporting Exchange mailboxes to PST files. Perhaps you are able to help me?

The System

I'm running some legacy machines here. The one I'm currently working on (CurrentDC) is a Windows 2008 R2 Server with Exchange 2010 on it. Exchange seems to be poorly patched:

[PS] C:\>get-exchangeserver Name Site ServerRole Edition AdminDisplayVersion ---- ---- ---------- ------- ------------------- OldDC None Enterprise Version 6.5 (Bui... CurrentDC company.local Mailbox,... Enterprise Version 14.0 (Bu...

The Problem

After some trouble I managed to get the Export-Mailbox command to run:

[PS] C:\>Export-Mailbox -Identity marco -PSTFolderPath C:\ExchangeExport

According to several websites, that seems to be the right command to export the mailbox of the user "marco" to "C:\ExchangeExport". But after running the command an error occurs (I'm sorry, it is the German version of Windows 2008 - but if you translate Fehler as error and Vorgang as process you should be prepared enough to go ;))

[PS] C:\Export-Mailbox -Identity marco -PSTFolderPath C:\ExchangeExport Fehler für Marco S ([email protected]). Ursache: Fehler bei diesem Vorgang., Fehlercode: -2147467259. + CategoryInfo : InvalidOperation: (0:Int32) [Export-Mailbox], RecipientTaskException + FullyQualifiedErrorId : 2317FD3A,Microsoft.Exchange.Management.RecipientTasks.ExportMailbox

RunspaceId : 44415363-371e-44a1-a682-61e6a9b90c86 Identity : company.local/Company User/Marco S DistinguishedName : CN=Marco S,OU=Company User,DC=company,DC=local DisplayName : Marco S Alias : marco LegacyExchangeDN : /o=Erste Organisation/ou=Erste administrative Gruppe/cn=Recipients/cn=marco PrimarySmtpAddress : [email protected] SourceServer : CurrentDC.company.local SourceDatabase : Mailbox Database 0279110169 SourceGlobalCatalog : CurrentDC SourceDomainController : TargetGlobalCatalog : CurrentDC TargetDomainController : TargetMailbox : TargetServer : TargetDatabase : MailboxSize : 0 B (0 bytes) IsResourceMailbox : False SIDUsedInMatch : SMTPProxies : SourceManager : SourceDirectReports : SourcePublicDelegates : SourcePublicDelegatesBL : SourceAltRecipient : SourceAltRecipientBL : SourceDeliverAndRedirect : MatchedTargetNTAccountDN : IsMatchedNTAccountMailboxEnabled : MatchedContactsDNList : TargetNTAccountDNToCreate : TargetManager : TargetDirectReports : TargetPublicDelegates : TargetPublicDelegatesBL : TargetAltRecipient : TargetAltRecipientBL : TargetDeliverAndRedirect : Options : Default SourceForestCredential : TargetForestCredential : TargetFolder : PSTFilePath : C:\ExchangeExport\marco.pst RecoveryMailboxGuid : RecoveryMailboxLegacyExchangeDN : RecoveryMailboxDisplayName : RecoveryDatabaseGuid : StandardMessagesDeleted : 0 AssociatedMessagesDeleted : 0 DumpsterMessagesDeleted : 0 MoveType : ExportToPST MoveStage : Validation StartTime : 05.10.2012 13:55:46 EndTime : 05.10.2012 13:55:46 StatusCode : -2147467259 StatusMessage : Fehler bei diesem Vorgang. ReportFile : C:\Program Files\Microsoft\Exchange Server\V14\Logging\MigrationLogs\export-Mailbox20121005-135545-8170000.xml ServerName : CurrentDC.company.local

What I have done

Well, I must say I'm quite clueless. I was wondering why MailboxSize is 0 - so I checked it:

[PS] C:\>Get-MailboxStatistics marco | ft DisplayName, TotalItemSize, ItemCount DisplayName TotalItemSize ItemCount ----------- ------------- --------- Marco S 473 MB (496,011,572 bytes) 4173

Well, this is not 0 bytes - but I don't know what to do with this information.
I also had a look at the ReportFile mentioned in the output:

<?xml version="1.0"?> <export-Mailbox> <TaskHeader> <RunningAs>NT-AUTORITÄT\SYSTEM</RunningAs> <Name>export-Mailbox</Name> <Type>ExportToPST</Type> <MaxBadItems>0</MaxBadItems> <Version>14.0.639.21</Version> <StartTime>10.05.2012 14:19:12</StartTime> <Options Identity="marco" PSTFolderPath="C:\ExchangeExport" DeleteContent="False" DeleteAssociatedMessages="False" GlobalCatalog="CurrentDC" MaxThreads="4" BadItemLimit="0" ValidateOnly="False" IncludeFolders="" ExcludeFolders="" StartDate="01.01.0001 00:00:00" EndDate="31.12.9999 23:59:59" SubjectKeywords="" ContentKeywords="" AllContentKeywords="" AttachmentFilenames="" SenderKeywords="" RecipientKeywords="" Locale="" /> </TaskHeader> <TaskDetails> <Item MailboxName="Marco S"> <Source> <Identity>company.local/Company User/Marco S</Identity> <DistinguishName>CN=Marco Sc,OU=Company User,DC=company,DC=local</DistinguishName> <DisplayName>Marco S</DisplayName> <Alias>marco</Alias> <LegacyExchangeDN>/o=Erste Organisation/ou=Erste administrative Gruppe/cn=Recipients/cn=marco</LegacyExchangeDN> <PrimarySmtpAddress>[email protected]</PrimarySmtpAddress> <SourceServer>CurrentDC.company.local</SourceServer> <SourceDatabase>Mailbox Database 0279110169</SourceDatabase> <IsResourceMailbox>False</IsResourceMailbox> <SourceGlobalCatalog>CurrentDC</SourceGlobalCatalog> </Source> <Target> <PSTFilePath>C:\ExchangeExport\marco.pst</PSTFilePath> </Target> <MailboxSize>0 B (0 bytes)</MailboxSize> <Duration>00:00:00</Duration> <Result IsWarning="False" ErrorCode="-2147467259">Fehler bei diesem Vorgang.</Result> </Item> </TaskDetails> <TaskFooter> <EndTime>10.05.2012 14:19:13</EndTime> <TotalSize>0 B (0 bytes)</TotalSize> <StandardMessagesDeleted>0</StandardMessagesDeleted> <AssociatedMessagesDeleted>0</AssociatedMessagesDeleted> <DumpsterMessagesDeleted>0</DumpsterMessagesDeleted> <Result ErrorCount="1" CompletedCount="0" WarningCount="0" /> </TaskFooter> </export-Mailbox>

Do you have any clue?

<UPDATE> Regarding the answer from downthepub, I tried to use UNC paths - no change. I also tried installing the management tools on a client and running the scripts from there - that didn't work either. </UPDATE>

Thanks a lot for reading this mess!

    Read the article

  • Nginx no static files after update

    - by SomeoneS
First, I must say that I am not an expert in server administration; my site was set up by hosting admins (whom I cannot contact anymore). A few days ago, I updated Nginx to the latest version (the admin told me that it is safe to do). But after that, my site serves only HTML content - no CSS, images or JS. If I try to open some image I get the message "Welcome to nginx" (the same thing happens if I try to open static.mysitedomain.com).

More details: the site has a static. subdomain, but the static files are in the same directory as they used to be before the static subdomain was set up. I have been googling for solutions and tried to change a few things in /etc/nginx/, but no luck. I feel that this is some minor configuration problem - any ideas?

EDIT: Here is the /etc/nginx/nginx.conf file content:

user www-data; worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; # server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; # gzip_vary on; # gzip_proxied any; # gzip_comp_level 6; # gzip_buffers 16 8k; # gzip_http_version 1.1; # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; ## # nginx-naxsi config ## # Uncomment it if you installed nginx-naxsi ## #include /etc/nginx/naxsi_core.rules; ## # nginx-passenger config ## # Uncomment it if you installed nginx-passenger ## #passenger_root /usr; #passenger_ruby /usr/bin/ruby; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; }

Here is the /etc/nginx/sites-enabled/default file content:

server { #listen 80; ## listen for ipv4; this line is default and implied #listen [::]:80 default ipv6only=on; ## listen for ipv6 root /usr/share/nginx/www; index index.html index.htm; # Make site accessible from http://localhost/ server_name localhost; location / { # First attempt to serve request as file, then # as directory, then fall back to index.html try_files $uri $uri/ /index.html; # Uncomment to enable naxsi on this location # include /etc/nginx/naxsi.rules } location /doc/ { alias /usr/share/doc/; autoindex on; allow 127.0.0.1; deny all; } # Only for nginx-naxsi : process denied requests #location /RequestDenied { # For example, return an error code #return 418; #} #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # #error_page 500 502 503 504 /50x.html; #location = /50x.html { # root /usr/share/nginx/www; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # fastcgi_split_path_info ^(.+\.php)(/.+)$; # # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini # # # With php5-cgi alone: # fastcgi_pass 127.0.0.1:9000; # # With php5-fpm: # fastcgi_pass unix:/var/run/php5-fpm.sock; # fastcgi_index index.php; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # root html; # index index.html index.htm; # # location / { # try_files $uri 
$uri/ /index.html; # } #} # HTTPS server # #server { # listen 443; # server_name localhost; # # root html; # index index.html index.htm; # # ssl on; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # # ssl_session_timeout 5m; # # ssl_protocols SSLv3 TLSv1; # ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP; # ssl_prefer_server_ciphers on; # # location / { # try_files $uri $uri/ /index.html; # } #}

    Read the article

  • Building Queries Systematically

    - by Jeremy Smyth
The SQL language is a bit like a toolkit for data. It consists of lots of little fiddly bits of syntax that, taken together, allow you to build complex edifices and return powerful results. For the uninitiated, the many tools can be quite confusing, and it's sometimes difficult to decide how to go about the process of building non-trivial queries, that is, queries that are more than a simple SELECT a, b FROM c;

A System for Building Queries

When you're building queries, you could use a system like the following:

1. Decide which fields contain the values you want to use in your output, and how you wish to alias those fields:
   - Values you want to see in your output.
   - Values you want to use in calculations. For example, to calculate margin on a product, you could calculate price - cost and give it the alias margin.
   - Values you want to filter with. For example, you might only want to see products that weigh more than 2Kg or that are blue. The weight or colour columns could contain that information.
   - Values you want to order by. For example you might want the most expensive products first, and the least last. You could use the price column in descending order to achieve that.
2. Assuming the fields you've picked in point 1 are in multiple tables, find the connections between those tables:
   - Look for relationships between tables and identify the columns that implement those relationships. For example, the Orders table could have a CustomerID field referencing the same column in the Customers table.
   - Sometimes the problem doesn't use relationships but rests on a different field; sometimes the query is looking for a coincidence of fact rather than a foreign key constraint. For example you might have sales representatives who live in the same state as a customer; this information is normally not used in relationships, but if your query is for organizing events where sales representatives meet customers, it's useful in that query. In such a case you would record the names of columns at either end of such a connection.
   - Sometimes relationships require a bridge, a junction table that wasn't identified in point 1 above but is needed to connect the tables you need; these are used in "many-to-many relationships". In these cases you need to record the columns in each table that connect to similar columns in other tables.
3. Construct a join or series of joins using the fields and tables identified in point 2 above. This becomes your FROM clause.
4. Filter using some of the fields in point 1 above. This becomes your WHERE clause.
5. Construct an ORDER BY clause using values from point 1 above that are relevant to the desired order of the output rows.
6. Project the result using the remainder of the fields in point 1 above. This becomes your SELECT clause.

A Worked Example

Let's say you want to query the world database to find a list of countries (with their capitals) and the change in GNP, using the difference between the GNP and GNPOld columns, and that you only want to see results for countries with a population greater than 100,000,000. Using the system described above, we could do the following:

- The Country.Name and City.Name columns contain the name of the country and city respectively.
- The change in GNP comes from the calculation GNP - GNPOld. Both those columns are in the Country table. This calculation is also used to order the output, in descending order.
- To see only countries with a population greater than 100,000,000, you need the Population field of the Country table. There is also a Population field in the City table, so you'll need to specify the table name to disambiguate. You can also represent a number like 100 million as 100e6 instead of 100000000 to make it easier to read.

Because the fields come from the Country and City tables, you'll need to join them. There are two relationships between these tables: each city is hosted within a country, and the city's CountryCode column identifies that country. Also, each country has a capital city, whose ID is contained within the country's Capital column. This latter relationship is the one to use, so the relevant columns and the condition that uses them are represented by the following FROM clause:

FROM Country JOIN City ON Country.Capital = City.ID

The statement should only return countries with a population greater than 100,000,000. Country.Population is the relevant column, so the WHERE clause becomes:

WHERE Country.Population > 100e6

To sort the result set in reverse order of difference in GNP, you could use either the calculation, or the position in the output (it's the third column): ORDER BY GNP - GNPOld or ORDER BY 3

Finally, project the columns you wish to see by constructing the SELECT clause:

SELECT Country.Name AS Country, City.Name AS Capital,
       GNP - GNPOld AS `Difference in GNP`

The whole statement ends up looking like this:

mysql> SELECT Country.Name AS Country, City.Name AS Capital,
    ->        GNP - GNPOld AS `Difference in GNP`
    -> FROM Country JOIN City ON Country.Capital = City.ID
    -> WHERE Country.Population > 100e6
    -> ORDER BY 3 DESC;
+--------------------+------------+-------------------+
| Country            | Capital    | Difference in GNP |
+--------------------+------------+-------------------+
| United States      | Washington |         399800.00 |
| China              | Peking     |          64549.00 |
| India              | New Delhi  |          16542.00 |
| Nigeria            | Abuja      |           7084.00 |
| Pakistan           | Islamabad  |           2740.00 |
| Bangladesh         | Dhaka      |            886.00 |
| Brazil             | Brasília   |         -27369.00 |
| Indonesia          | Jakarta    |        -130020.00 |
| Russian Federation | Moscow     |        -166381.00 |
| Japan              | Tokyo      |        -405596.00 |
+--------------------+------------+-------------------+
10 rows in set (0.00 sec)

Queries with Aggregates and GROUP BY

While this system might work well for many queries, it doesn't cater for situations where you have complex summaries and aggregation. For aggregation, you'd start with choosing which columns to view in the output, but this time you'd construct them as aggregate expressions. For example, you could look at the average population, or the count of distinct regions. You could also perform more complex aggregations, such as the average of GNP per head of population, calculated as AVG(GNP/Population).

Having chosen the values to appear in the output, you must choose how to aggregate those values. A useful way to think about this is that every aggregate query is of the form X, Y per Z. The SELECT clause contains the expressions for X and Y, as already described, and Z becomes your GROUP BY clause. Ordinarily you would also include Z in the query so you see how you are grouping, so the output becomes Z, X, Y per Z.

As an example, consider the following, which shows a count of countries and the average population per continent:

mysql> SELECT Continent, COUNT(Name), AVG(Population)
    -> FROM Country
    -> GROUP BY Continent;
+---------------+-------------+-----------------+
| Continent     | COUNT(Name) | AVG(Population) |
+---------------+-------------+-----------------+
| Asia          |          51 |   72647562.7451 |
| Europe        |          46 |   15871186.9565 |
| North America |          37 |   13053864.8649 |
| Africa        |          58 |   13525431.0345 |
| Oceania       |          28 |    1085755.3571 |
| Antarctica    |           5 |          0.0000 |
| South America |          14 |   24698571.4286 |
+---------------+-------------+-----------------+
7 rows in set (0.00 sec)

In this case, X is the number of countries, Y is the average population, and Z is the continent. Of course, you could have more fields in the SELECT clause, and more fields in the GROUP BY clause as you require. You would also normally alias columns to make the output more suited to your requirements.

More Complex Queries

Queries can get considerably more interesting than this. You could also add joins and other expressions to your aggregate query, as in the earlier part of this post (a brief sketch of that follows below). You could have more complex conditions in the WHERE clause. Similarly, you could use queries such as these in subqueries of yet more complex super-queries. Each technique becomes another tool in your toolbox, until before you know it you're writing queries across 15 tables that take two pages to write out. But that's for another day...
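As a small sketch of the first of those ideas - an aggregate query that also uses a join - consider counting the cities and averaging the city population per country for one continent. This assumes the standard world sample schema, where City.CountryCode references Country.Code:

mysql> SELECT Country.Name AS Country,
    ->        COUNT(City.ID) AS Cities,
    ->        AVG(City.Population) AS `Average city population`
    -> FROM Country JOIN City ON City.CountryCode = Country.Code
    -> WHERE Country.Continent = 'Europe'
    -> GROUP BY Country.Name
    -> ORDER BY Cities DESC;

Here X is the count of cities, Y is the average city population, and Z is the country - the same X, Y per Z pattern as before, just with a join feeding the aggregate.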

    Read the article

  • Creating PDF documents dynamically using Umbraco and XSL-FO part 2

    - by Vizioz Limited
Since my last post I have made a few modifications to the PDF generation, the main one being that the files are now dynamically renamed so that they reflect the name of the case study instead of all being called PDF.PDF, which was not a very helpful filename. I just wanted to get something live last week, so decided that something was better than nothing :)

The issue with the filenames comes down to the way that the PDFs are being generated, using an alternative template in Umbraco: this means that all you need to do is add " /pdf " to the end of a case study URL and it will create a PDF version of the case study. The downside is that your browser will merrily download the file and save it as PDF.PDF, because that is the name of the last part of the URL.

What you need to do is set the content-disposition header to be equal to the name you would like the file to use - Darren Ferguson mentioned this on the Change the name of the PDF forum post.

We have used the same technique for downloading dynamically generated Excel files, so I thought it would be useful to create a small macro to set both this header and also the caching headers, to prevent any caching issues. I think in the past we have experienced all possible issues, including various issues where IE behaves differently to other browsers when you are using SSL, so the code below should work in all situations!

The PDF alternative template is very simple:

<%@ Master Language="C#" MasterPageFile="~/umbraco/masterpages/default.master" AutoEventWireup="true" %><asp:Content ID="Content1" ContentPlaceHolderID="ContentPlaceHolderDefault" runat="server"> <umbraco:Macro Alias="PDFHeaders" runat="server"></umbraco:Macro> <umbraco:Macro xsl="FO-CaseStudy.xslt" Alias="PDFXSLFO" runat="server"></umbraco:Macro></asp:Content>

The following code snippet is the XSLT macro that simply creates our file name and then passes the file name into the helper function:

<xsl:template match="/"> <xsl:variable name="fileName"> <xsl:text>Vizioz_</xsl:text> <xsl:value-of select="$currentPage/@nodeName" /> <xsl:text>_case_study.pdf</xsl:text> </xsl:variable> <xsl:value-of select="Vizioz.Helper:AddDocumentDownloadHeaders('application/pdf', $fileName)"/> </xsl:template>

And the following code is the helper function that clears the current response and adds all the appropriate headers:

public static void AddDocumentDownloadHeaders(string contentType, string fileName){ HttpResponse response = HttpContext.Current.Response; HttpRequest request = HttpContext.Current.Request; response.Clear(); response.ClearHeaders(); if (request.IsSecureConnection & request.Browser.Browser == "IE") { // Don't use the caching headers if the browser is IE and it's a secure connection // see: http://support.microsoft.com/kb/323308 } else { // force not using the cache response.AppendHeader("Cache-Control", "no-cache"); response.AppendHeader("Cache-Control", "private"); response.AppendHeader("Cache-Control", "no-store"); response.AppendHeader("Cache-Control", "must-revalidate"); response.AppendHeader("Cache-Control", "max-stale=0"); response.AppendHeader("Cache-Control", "post-check=0"); response.AppendHeader("Cache-Control", "pre-check=0"); response.AppendHeader("Pragma", "no-cache"); response.Cache.SetCacheability(HttpCacheability.NoCache); response.Cache.SetNoStore(); response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(-1)); } response.AppendHeader("Expires", DateTime.Now.AddMinutes(-1).ToLongDateString()); response.AppendHeader("Keep-Alive", "timeout=3, max=993"); 
response.AddHeader("content-disposition", "attachment; filename=\"" + fileName + "\""); response.ContentType = contentType;}I will write another blog soon with some more details about XSL-FO and how to create the PDF's dynamically.Please do re-tweet if you find this interest :)

    Read the article

  • OBIEE 11.1.1 - User Interface (UI) Performance Is Slow With Internet Explorer 8

    - by Ahmed A
The OBIEE 11g UI performance is slow in IE 8 and faster in Firefox. For VPN or WAN users, it takes a long time to display links on Dashboards via IE 8. The cause is that IE 8 generates many HTTP 304 return calls, and this makes the 11g UI slower when compared to the Mozilla Firefox browser. To resolve this issue, you can implement HTTP compression and caching. This is a best practice.

Why use Web Server Compression / Caching for OBIEE?

Bandwidth Savings: Enabling HTTP compression can have a dramatic improvement on the latency of responses. By compressing static files and dynamic application responses, it will significantly reduce the remote (high latency) user response time.

Improves request/response latency: Caching makes it possible to suppress the payload of the HTTP reply using the 304 status code. Minimizing round trips over the Web to re-validate cached items can make a huge difference in browser page load times.

This screen shot depicts the flow and where the compression and decompression occur.

Solution:

a. How to Enable HTTP Caching / Compression in Oracle HTTP Server (OHS) 11.1.1.x

1. To implement HTTP compression / caching, install and configure Oracle HTTP Server (OHS) 11.1.1.x for the bi_serverN Managed Servers (refer to the "OBIEE Enterprise Deployment Guide for Oracle Business Intelligence" document for details).

2. On the OHS machine, open the HTTP Server configuration file (httpd.conf) for editing. This file is located in the OHS installation directory. For example: ORACLE_HOME/Oracle_WT1/instances/instance1/config/OHS/ohs1

3. In the httpd.conf file, verify that the following directives are included and not commented out:

LoadModule expires_module "${ORACLE_HOME}/ohs/modules/mod_expires.so"
LoadModule deflate_module "${ORACLE_HOME}/ohs/modules/mod_deflate.so"

4. Add the following lines to the httpd.conf file below the LoadModule directives section and restart the OHS.

Note: For the Windows platform, you will need to enclose any paths in double quotes ("), for example:

Alias /analytics "ORACLE_HOME/bifoundation/web/app"
<Directory "ORACLE_HOME/bifoundation/web/app">

The lines to add are:

Alias /analytics ORACLE_HOME/bifoundation/web/app
# Please replace ORACLE_HOME with your actual BI ORACLE_HOME path
<Directory ORACLE_HOME/bifoundation/web/app>
  # We don't generate proper cross-server ETags so disable them
  FileETag none
  SetOutputFilter DEFLATE
  # Don't compress images
  SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary
  <FilesMatch "\.(gif|jpeg|png|js|x-javascript|javascript|css)$">
    # Enable future expiry of static files
    # 1 week; this stops the HTTP 304 calls generated by IE 8
    ExpiresActive on
    ExpiresDefault "access plus 1 week"
    Header set Cache-Control "max-age=604800"
  </FilesMatch>
  DirectoryIndex default.jsp
</Directory>
# Restrict access to WEB-INF
<Location /analytics/WEB-INF>
  Order Allow,Deny
  Deny from all
</Location>

Note: Make sure you replace the placeholder "ORACLE_HOME" above with your correct BI ORACLE_HOME path. For example, my BI Oracle Home path is /Oracle/BIEE11g/Oracle_BI1/bifoundation/web/app

Important Notes:

- The caching rules above are restricted to static files found inside the /analytics directory (/web/app). This approach is safer than setting static file caching globally.
- In some customer environments you may not get the full performance gain in the IE 8.0 browser; in that case you need to extend the caching rules to other directories with static file content.
- If OHS is installed on a separate, dedicated machine, make sure the static files in your BI ORACLE_HOME (../Oracle_BI1/bifoundation/web/app) are accessible to the OHS instance.

The following screen shot summarizes the before-and-after results and the improvements after enabling compression and caching.
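To confirm that the compression and caching headers are actually being served, you can check a static resource from a client machine - a quick sketch, where the host, port and file path are placeholders to be replaced with your own OHS instance and any .css or .js file under /analytics:

curl -s -o /dev/null -D - -H "Accept-Encoding: gzip" http://your-ohs-host:7777/analytics/res/some/static/file.css
# Expect to see, based on the configuration above:
#   Content-Encoding: gzip
#   Cache-Control: max-age=604800
#   Expires: <a date roughly one week in the future>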

    Read the article

< Previous Page | 24 25 26 27 28 29 30 31 32 33 34 35  | Next Page >