Search Results

Search found 6030 results on 242 pages for 'exists'.


  • Exchange 2010 OWA - a few questions about using multiple mailboxes

    - by Alexey Smolik
    We have an Exchange 2010 SP2 deployment and we need our users to be able to access multiple mailboxes in OWA. The problem is that a user (e.g. John Smith) needs to access not just somebody else's (e.g. Tom Anderson) mailboxes, but his OWN mailboxes in different domains: [email protected], [email protected], [email protected], etc. It is of course preferable for the user to work with all of his mailboxes from a single window. Such mailboxes can be added as multiple Exchange accounts in Outlook, and that works almost fine. But in OWA, there are problems:

    1) In the left pane -- as I've learned -- we can open only Inbox folders from other mailboxes. Is there no way to view all folders, like in Outlook?

    2) With Send-As permissions set, when trying to send a message from another address, that message is saved in the Sent Items folder of the mailbox that is opened in OWA, not in the mailbox the message is sent from. The same goes for the trash can. Is there a way to fix that? This problem also exists in desktop Outlook when mailboxes are added automatically via the Auto Mapping feature, so we have to turn it off and add the accounts manually. Is there a simpler workaround?

    3) Okay, suppose we only open Inbox folders in the left pane. The problem is that the mailbox names shown there are formed from the Display Name attribute, and those names are all identical: all the mailboxes are owned by John Smith, so they are all named "John Smith" -- so that the letter recipient sees "John Smith" in the "from" field, no matter which mailbox it is sent from. But the user already knows his own name; he wants to know which mailbox he is working with. So we need a way to either: a) customize OWA to show the mailbox email address instead of the user Display Name, or b) make Exchange use another attribute in the "from" field when sending letters.

    4) Okay, we can switch between mailboxes using "Open Other Mailbox" in the upper-right corner menu. But: a) to select a mailbox we need to enter its name (or its first letters). Is there a way to show a list of links to the mailboxes the user has full access to, e.g. in the page header? b) If we start entering the first letters, we see a popup list of possible mailboxes to open -- but it lists all mailboxes (apparently from the GAL), not only the mailboxes the user has permission to open. How can we filter that popup list? c) The same naming problem as in (3): we can see the opened mailbox's email address ONLY in the page URL, which is insufficient for many users, while the left pane shows "John Smith", which is useless.

    5) Each mailbox is tied to a separate user in AD. If one person has several mailboxes, we need additional dummy AD accounts, additional OUs to store them, etc. That's not very nice; is there any standardized, optimal way to build such a structure?

    We would really appreciate any answers or additional info on any of these questions. Thank you in advance.
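    For item 2, the Auto Mapping behaviour can be controlled at the moment the permission is granted rather than by editing Outlook profiles afterwards. A minimal Exchange Management Shell sketch (mailbox and user names are placeholders; the -AutoMapping parameter requires Exchange 2010 SP1 or later):

        # Grant John full access to his secondary mailbox, but opt out of
        # Outlook's automatic mailbox mapping so it can be added manually.
        Add-MailboxPermission -Identity "js-domain2" -User "john.smith" `
            -AccessRights FullAccess -AutoMapping $false

        # Verify who holds full access on that mailbox.
        Get-MailboxPermission -Identity "js-domain2" |
            Where-Object { $_.AccessRights -contains "FullAccess" }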

    Read the article

  • MySQL my.cnf file not being read, Ubuntu 10.04 64bit

    - by reallyordinary
    I've been researching this for a few hours with no luck. Basically it looks like my server's my.cnf file isn't being read at all. I've searched my server, and there's only one my.cnf file on it, located at /etc/mysql/my.cnf. Its ownership is root:root. I'm running Ubuntu 10.04 64bit on a Linode.com server, with the latest versions of MySQL and PHP installed.

    I've edited the my.cnf file, commented out "skip-innodb", and set InnoDB to be the default storage engine using:

        default-storage-engine = innodb

    I then restarted mysql. But when I do "show engines", MyISAM still comes up as the default engine. Also, none of the InnoDB settings I've added to the my.cnf file are being read. For example, I have this in my.cnf:

        innodb_buffer_pool_size=4G

    But in phpMyAdmin, InnoDB shows a buffer pool size of 8,192 KiB. Similarly, I have this in my.cnf:

        innodb_data-file_path = ibdata1:500M:autoextend

    But in phpMyAdmin it reads as ibdata1:10M:autoextend. It doesn't look like MyISAM info is being read from the my.cnf file either: the my.cnf file has skip-external-locking commented out, but it shows as "on" in phpMyAdmin. So -- yeah, it looks like nothing in the my.cnf file is being read at all. Yet the server still works; I'm running a Drupal site on it and it seems to operate fine. So mysql seems to be drawing default settings from... some mysterious secret location. Any idea how I can make mysql see and use this my.cnf file?

    Actually, wait -- it may be being read after all, I'm not sure. I checked the error log and found this:

        101128 4:28:52 [ERROR] Cannot find or open table databasename/cache_apachesolr from the internal data dictionary of InnoDB though the .frm file for the table exists. Maybe you have deleted and recreated InnoDB data files but have forgotten to delete the corresponding .frm files of InnoDB tables, or you have moved .frm files to another database? or, the table contains indexes that this version of the engine doesn't support. See http://dev.mysql.com/doc/refman/5.1/en/innodb-troubleshooting.html how you can resolve the problem.
        InnoDB: Error: auto-extending data file ./ibdata1 is of a different size
        InnoDB: 640 pages (rounded down to MB) than specified in the .cnf file:
        InnoDB: initial 32000 pages, max 0 (relevant if non-zero) pages!
        InnoDB: Could not open or create data files.
        InnoDB: If you tried to add new data files, and it failed here,
        InnoDB: you should now edit innodb_data_file_path in my.cnf back
        InnoDB: to what it was, and remove the new ibdata files InnoDB created
        InnoDB: in this failed attempt. InnoDB only wrote those files full of
        InnoDB: zeros, but did not yet use them in any way. But be careful: do not
        InnoDB: remove old data files which contain your precious data!
        101128 4:28:52 [ERROR] Plugin 'InnoDB' init function returned error.
        101128 4:28:52 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        101128 4:28:52 [ERROR] /usr/sbin/mysqld: unknown variable 'innodb_lock_wait_timout=50'
        101128 4:28:52 [ERROR] Aborting
        101128 4:28:52 [Note] /usr/sbin/mysqld: Shutdown complete
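    One way to confirm whether mysqld is actually reading /etc/mysql/my.cnf is to ask the binaries themselves. A quick diagnostic sketch (paths and variable names are taken from the question; --print-defaults and --verbose --help are standard MySQL options):

        # Show the config files mysqld searches, in order, and the options
        # it would pick up from them.
        mysqld --print-defaults
        mysqld --verbose --help 2>/dev/null | grep -A1 'Default options'

        # Compare what the *running* server actually has in effect.
        mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"

        # The error log above ends with an unknown-variable abort, so check
        # for the misspelled option (note 'timout') before restarting.
        grep -n 'innodb_lock_wait_timout' /etc/mysql/my.cnf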

    Read the article

  • Varnish cached 'MISS status' object?

    - by Hesey
    My site uses nginx, varnish and jboss, and some URLs are cached by varnish, depending on a response header from jboss. On the first request, jboss tells varnish not to cache the URL. On the second request, jboss tells varnish to cache it, but varnish won't. I used varnishstat and found that 1 object is cached in Varnish -- is that the 'MISS status' object? I removed the grace code and the problem still exists. When I PURGE the URL, varnish works fine and caches the URL from then on. But I can't PURGE that many URLs at every startup, so how can I fix this? The configuration:

        acl local {
            "localhost";
        }

        backend default {
            .host = "localhost";
            .port = "8080";
            .probe = {
                .url = "/preload.htm";
                .interval = 3s;
                .timeout = 1s;
                .window = 5;
                .threshold = 3;
            }
        }

        sub vcl_deliver {
            if (req.request == "PURGE") {
                remove resp.http.X-Varnish;
                remove resp.http.Via;
                remove resp.http.Age;
                remove resp.http.Content-Type;
                remove resp.http.Server;
                remove resp.http.Date;
                remove resp.http.Accept-Ranges;
                remove resp.http.Connection;
                set resp.http.keeplive="true";
            } else {
                if (obj.hits > 0) {
                    set resp.http.X-Cache = "HIT";
                } else {
                    set resp.http.X-Cache = "MISS";
                }
            }
        }

        sub vcl_recv {
            if (req.url ~ "/check.htm") {
                error 404 "N";
            }
            if (req.http.host ~ "store." || req.request == "POST") {
                return (pipe);
            }
            if (req.backend.healthy) {
                set req.grace = 30s;
            } else {
                set req.grace = 10m;
            }
            set req.http.x-cacheKey = "0";
            if (req.url ~ "/shop/view_shop.htm" || req.url ~ "/shop/viewShop.htm" || req.url ~ "/index.htm") {
                if (req.url ~ "search=y") {
                    set req.http.x-cacheKey = req.http.host + "/search.htm";
                } else if (req.url !~ "bbs=y" && req.url !~ "shopIntro=y" && req.url !~ "shop_intro=y") {
                    set req.http.x-cacheKey = req.http.host + "/index.htm";
                }
            } else if (req.url ~ "/search") {
                set req.http.x-cacheKey = req.http.host + "/search.htm";
            }
            if (req.http.x-cacheKey == "0" && req.url !~ "/i/") {
                return (pipe);
            }
            if (req.request == "PURGE") {
                if (client.ip ~ local) {
                    return (lookup);
                } else {
                    error 405 "Not allowed.";
                }
            }
            if (req.url ~ "/i/") {
                set req.http.x-shop-url = req.original_url;
            } else {
                unset req.http.cookie;
            }
        }

        sub vcl_fetch {
            set beresp.grace = 10m;
            #unset beresp.http.x-cacheKey;
            if (req.url ~ "/i/" || req.url ~ "status") {
                set beresp.ttl = 0s;  /* ttl=0 for dynamic content */
            } else if (beresp.http.x-varnish-cache != "1") {
                set beresp.do_esi = true;  /* Do ESI processing */
                set beresp.ttl = 0s;
                unset beresp.http.set-cookie;
            } else {
                set beresp.do_esi = true;  /* Do ESI processing */
                set beresp.ttl = 1800s;
                unset beresp.http.set-cookie;
            }
        }

        sub vcl_hash {
            hash_data(req.http.x-cacheKey);
            return (hash);
        }

        sub vcl_error {
            if (req.request == "PURGE") {
                return (deliver);
            } else {
                set obj.http.Content-Type = "text/html; charset=gbk";
                synthetic {"<!--ve-->"};
                return (deliver);
            }
        }

        sub vcl_hit {
            if (req.request == "PURGE") {
                set obj.ttl = 0s;
                error 200 "Purged.";
            }
        }

        sub vcl_miss {
            if (req.request == "PURGE") {
                error 404 "N";
            }
        }
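    Since the VCL above only accepts PURGE from localhost, a quick way to test whether a given URL is stuck on the stale 'MISS status' object is to purge it by hand and watch the X-Cache header set in vcl_deliver. A sketch (hostname and path are placeholders):

        # Issue a PURGE the way the vcl_recv block above expects it.
        curl -X PURGE -H "Host: shop.example.com" http://localhost/index.htm

        # Then request the page twice and watch X-Cache flip from MISS to HIT.
        curl -s -D - -o /dev/null -H "Host: shop.example.com" http://localhost/index.htm | grep X-Cache
        curl -s -D - -o /dev/null -H "Host: shop.example.com" http://localhost/index.htm | grep X-Cache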

    Read the article

  • Exchange 2003-Exchange 2010 post migration GAL/OAB problem

    - by user68726
    I am very new to Exchange so forgive my newbie-ness. I've exhausted Google trying to find a way to solve my problem, so I'm hoping some of you gurus can shed some light on my next steps. Please forgive my bungling around through this.

    The problem: I cannot download/update the Global Address List (GAL) and Offline Address Book (OAB) on my Outlook 2010 clients. I get:

        Task 'emailaddress' reported error (0x8004010F): 'The operation failed. An object cannot be found.'

    I'm using cached Exchange mode; if I turn it off, Outlook hangs completely from the moment I start it up. (Note I've replaced my actual email address with 'emailaddress'.)

    Background information: I migrated mailboxes, public store, etc. from a Small Business Server 2003 with Exchange 2003 box to a Server 2008 R2 with Exchange 2010 box, based primarily on an Experts Exchange how-to article. The Exchange server is up and running as an internet-facing Exchange server with all of the roles necessary to send and receive mail, and in that capacity is working fine. I "thought" I had successfully migrated everything from the SBS03 box, and due to huge amounts of errors in everything from AD to the Exchange install itself, I removed the reference to the SBS03 server in adsiedit. I've still got access to the old SBS03 box, but as I said, the number of errors in everything is preventing even the uninstall of Exchange (or the starting of the Exchange Information Store service), so I'm quite content to leave that box completely out of the picture while trying to solve my problem.

    After research I discovered this is most likely because I failed to run the "update-globaladdresslist" (or get/update) command from the Exchange shell before I removed the Exchange 2003 server from adsiedit (and the network). If I run the command now it gives me:

        WARNING: The recipient "domainname.com/Microsoft Exchange System Objects/Offline Address Book - first administrative group" is invalid and couldn't be updated.
        WARNING: The recipient "domainname.com/Microsoft Exchange System Objects/Schedule+ Free Busy Information - first administrative group" is invalid and couldn't be updated.
        WARNING: The recipient "domainname.com/Microsoft Exchange System Objects/ContainernameArchive" is invalid and couldn't be updated.
        WARNING: The recipient "domainname.com/Microsoft Exchange System Objects/ContainernameContacts" is invalid and couldn't be updated.

    (Note that I've replaced my domain with "domainname.com" and my organization name with "containername".)

    What I've tried: I don't want to use the old OAB or GAL -- I don't care about either; our GAL and distribution lists needed to be reorganized anyway -- so at this point I really just want to get rid of the old reference to the "first administrative group" and move on. I've tried to create a new GAL and tell Exchange 2010 to use that GAL instead of the old one, but I'm obviously missing some of the commands or something dumb I need to do to start over with a blank slate/GAL/OAB. I'm very tempted to completely delete the entire "first administrative group" tree from adsiedit and see if that gets rid of the ridiculous reference that no longer exists, but I don't want to break something else.

    Commands run to try to create a new GAL and tell Exchange 2010 to use that GAL:

        New-GlobalAddressList -Name NAMEOFNEWGAL
        Set-GlobalAddressList GUID -Name NAMEOFNEWGAL

    This did nothing for me, except that now when I run get-globaladdresslist (with or without the | FL pipe) I see two GALs listed: the "default global address list" and the "NAMEOFNEWGAL" that I created. After a little more research this morning, it looks like you can't change/delete/remove the default address list, and the only way to do what I'm trying to do would be to remove the default address list via adsiedit and recreate it with a command something like:

        New-GlobalAddressList -Name "Default Global Address List" -IncludedRecipients AllRecipients

    This would be acceptable, but I've searched and searched and can't find instructions or a breakdown of where exactly the default GAL lives in AD, and whether I'd have to remove multiple child references/records.

    Of interest: I'm getting an event ID 9337 in the application log on my Exchange 2010 box -- "OALGen did not find any recipients in address list '\Global Address List'. This offline address list will not be generated. -\NAMEOFMYOAB" -- which pretty much confirms my suspicion that the empty GAL/OAB is what's causing the Outlook client 0x8004010F error. Help please!
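    Since the OALGen event (9337) points at an empty GAL, one route that avoids touching adsiedit at all is to build a fresh OAB on top of the new GAL and point the mailbox databases at it. A hedged Exchange Management Shell sketch (all names are placeholders, and this assumes the new GAL actually resolves recipients):

        # Confirm the new GAL matches recipients before hanging an OAB off it.
        Get-GlobalAddressList NAMEOFNEWGAL | Update-GlobalAddressList

        # Create and generate a new OAB based on the new GAL.
        New-OfflineAddressBook -Name "NEWOAB" -AddressLists "NAMEOFNEWGAL"
        Update-OfflineAddressBook "NEWOAB"

        # Point each mailbox database at the new OAB so clients download it.
        Get-MailboxDatabase | Set-MailboxDatabase -OfflineAddressBook "NEWOAB"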

    Read the article

  • CakePHP on IIS: how can I edit the URL Rewrite module rules for SSL redirects?

    - by AdrianB
    I've not dealt much with IIS rewrites, but I was able to import (and edit) the rewrites found throughout the Cake structure (.htaccess files). I'll explain my configuration a little, then get to the meat of the problem.

    My CakePHP framework is working well, made possible by URL Rewrite Module 2.0, which I have successfully installed and configured for the site. The way Cake is set up, the webroot folder (for Cake, not IIS) is set as the default folder for the site and exists inside the following hierarchy:

        inetpub
        - wwwroot
        -- cakePhp root
        --- application
        ---- models
        ---- views
        ---- controllers
        ---- WEBROOT   // *** HERE ***
        --- cake core
        -- SomeOtherSite Folder

    For this implementation, the URL Rewrite Module uses the following rules (from the web.config file):

        <rewrite>
          <rules>
            <rule name="Imported Rule 1" stopProcessing="true">
              <match url="^(.*)$" ignoreCase="false" />
              <conditions logicalGrouping="MatchAll">
                <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
              </conditions>
              <action type="Rewrite" url="index.php?url={R:1}" appendQueryString="true" />
            </rule>
            <rule name="Imported Rule 2" stopProcessing="true">
              <match url="^$" ignoreCase="false" />
              <action type="Rewrite" url="/" />
            </rule>
            <rule name="Imported Rule 3" stopProcessing="true">
              <match url="(.*)" ignoreCase="false" />
              <action type="Rewrite" url="/{R:1}" />
            </rule>
            <rule name="Imported Rule 4" stopProcessing="true">
              <match url="^(.*)$" ignoreCase="false" />
              <conditions logicalGrouping="MatchAll">
                <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
              </conditions>
              <action type="Rewrite" url="index.php?url={R:1}" appendQueryString="true" />
            </rule>
          </rules>
        </rewrite>

    I've installed my SSL certificate and created a site binding, so if I use the https:// protocol, everything works fine within the site. The attempts I have made at writing the redirect rules myself have been too far off base to produce useful results. The new rules need to switch protocol without affecting the current set of rules, which pass URL components along to index.php (Cake's entry point).

    My goal is this: create a couple of rewrite rules that will [#1] redirect all user pages (in the general form http://domain.com/users/page/param/param/?querystring=value) to use SSL, and [#2] direct all other HTTPS requests back to plain HTTP (is this even necessary?). For example, http://domain.com/users/login, http://domain.com/users/profile/uid:12345 and http://domain.com/users/payments?firsttime=true should all become https://domain.com/users/login, https://domain.com/users/profile/uid:12345 and https://domain.com/users/payments?firsttime=true. Any help would be greatly appreciated.
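    A hedged sketch of what the two redirect rules could look like, placed above the imported rules so they run first. The {HTTPS} server variable and the Redirect action type are standard URL Rewrite Module features; the "users/" prefix is an assumption taken from the URLs above:

        <!-- #1: force HTTPS on user pages -->
        <rule name="Force HTTPS on user pages" stopProcessing="true">
          <match url="^users/.*" ignoreCase="true" />
          <conditions>
            <add input="{HTTPS}" pattern="off" />
          </conditions>
          <action type="Redirect" url="https://{HTTP_HOST}/{R:0}" redirectType="Found" />
        </rule>
        <!-- #2: send everything else back to plain HTTP -->
        <rule name="Force HTTP elsewhere" stopProcessing="true">
          <match url="^(?!users/).*" ignoreCase="true" />
          <conditions>
            <add input="{HTTPS}" pattern="on" />
          </conditions>
          <action type="Redirect" url="http://{HTTP_HOST}/{R:0}" redirectType="Found" />
        </rule>

    Redirect actions append the query string by default, so ?querystring=value should survive the hop.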

    Read the article

  • Apache doesn't run multiple requests concurrently

    - by Reinderien
    I'm currently running this simple Python CGI script to test rudimentary IPC:

        #!/usr/bin/python -u
        import cgi, errno, fcntl, os, os.path, sys, time

        print("""Content-Type: text/html; charset=utf-8

        <!doctype html>
        <html lang="en">
        <head>
            <meta charset="utf-8" />
            <title>IPC test</title>
        </head>
        <body>
        """)

        ftempname = '/tmp/ipc-messages'
        master = not os.path.exists(ftempname)
        if master:
            fmode = 'w'
        else:
            fmode = 'r'

        print('<p>Opening file</p>')
        sys.stdout.flush()
        ftemp = open(ftempname, fmode)
        print('<p>File opened</p>')

        if master:
            print('<p>Operating as master</p>')
            sys.stdout.flush()
            for i in range(10):
                print('<p>' + str(i) + '</p>')
                sys.stdout.flush()
                time.sleep(1)
            ftemp.close()
            os.remove(ftempname)
        else:
            print('<p>Operating as a slave</p>')
            ftemp.close()

        print("""
        </body>
        </html>""")

    The 'server-push' portion works; that is, for the first request, I do see piecewise updates. However, while the first request is being serviced, subsequent requests are not started, only to be started after the first request has finished. Any ideas on why, and how to fix it?

    Edit: I see the same non-concurrent behaviour with vanilla PHP, running this:

        <!doctype html>
        <html lang="en">
        <!-- $Id: $-->
        <head>
            <meta charset="utf-8" />
            <title>IPC test</title>
        </head>
        <body>
        <p>
        <?php
        function echofl($str) {
            echo $str . "</b>\n";
            ob_flush();
            flush();
        }

        define('tempfn', '/tmp/emailsync');

        if (file_exists(tempfn))
            $perms = 'r+';
        else
            $perms = 'w';

        assert($fsync = fopen(tempfn, $perms));
        assert(chmod(tempfn, 0600));

        if (!flock($fsync, LOCK_EX | LOCK_NB, $wouldblock)) {
            assert($wouldblock);
            $master = false;
        }
        else
            $master = true;

        if ($master) {
            echofl('Running as master.');
            assert(fwrite($fsync, 'content') != false);
            assert(sleep(5) == 0);
            assert(flock($fsync, LOCK_UN));
        }
        else {
            echofl('Running as slave.');
            echofl(fgets($fsync));
        }

        assert(fclose($fsync));
        echofl('Done.');
        ?>
        </p>
        </body>
        </html>
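    One variable worth eliminating is the browser itself: most browsers will not issue a second request to the same URL while the first is still streaming, which looks exactly like server-side serialization. A quick shell sketch to test concurrency without a browser (the URL is a placeholder):

        # Fire two requests at once and timestamp each line of output;
        # if Apache really serializes them, the second body only starts
        # arriving after the first completes.
        for i in 1 2; do
          ( curl -s http://localhost/cgi-bin/ipctest.py \
              | while read -r line; do echo "$(date +%T) [$i] $line"; done ) &
        done
        wait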

    Read the article

  • No device file for partition on logical volume (Linux LVM)

    - by Brian
    I created a logical volume (scandata) containing a single ext3 partition. It is the only logical volume in its volume group (case4t). Said volume group comprises 3 physical volumes, which are three primary partitions on a single block device (/dev/sdb). When I created it, I could mount the partition via the block device /dev/mapper/case4t-scandatap1. Since the last reboot, that block device file has disappeared. It may be of note -- I'm not sure -- that my superior (a college professor) had prompted this reboot by running sudo chmod -R [his name] /usr/bin, which obliterated all suid bits in its path, preventing both of us from sudo-ing. That issue has been (temporarily) rectified. Now I'll cut the chatter and get started with the terminal dumps:

        $ sudo pvs; sudo vgs; sudo lvs
        Logging initialised at Sat Jan 8 11:42:34 2011
        Set umask to 0077
        Scanning for physical volume names
          PV         VG     Fmt  Attr PSize   PFree
          /dev/sdb1  case4t lvm2 a-   819.32G    0
          /dev/sdb2  case4t lvm2 a-   866.40G    0
          /dev/sdb3  case4t lvm2 a-    47.09G    0
        Wiping internal VG cache
        Logging initialised at Sat Jan 8 11:42:34 2011
        Set umask to 0077
        Finding all volume groups
        Finding volume group "case4t"
          VG     #PV #LV #SN Attr   VSize VFree
          case4t   3   1   0 wz--n- 1.69T    0
        Wiping internal VG cache
        Logging initialised at Sat Jan 8 11:42:34 2011
        Set umask to 0077
        Finding all logical volumes
          LV       VG     Attr   LSize Origin Snap% Move Log Copy% Convert
          scandata case4t -wi-a- 1.69T
        Wiping internal VG cache

        $ sudo vgchange -a y
        Logging initialised at Sat Jan 8 11:43:14 2011
        Set umask to 0077
        Finding all volume groups
        Finding volume group "case4t"
        1 logical volume(s) in volume group "case4t" already active
        1 existing logical volume(s) in volume group "case4t" monitored
        Found volume group "case4t"
        Activated logical volumes in volume group "case4t"
        1 logical volume(s) in volume group "case4t" now active
        Wiping internal VG cache

        $ ls /dev | grep case4t
        case4t

        $ ls /dev/mapper
        case4t-scandata  control

        $ sudo fdisk -l /dev/case4t/scandata
        Disk /dev/case4t/scandata: 1860.5 GB, 1860584865792 bytes
        255 heads, 63 sectors/track, 226203 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0x00049bf5
                           Device Boot Start    End     Blocks      Id System
        /dev/case4t/scandata1              1   226203   1816975566  83 Linux

        $ sudo parted /dev/case4t/scandata print
        Model: Linux device-mapper (linear) (dm)
        Disk /dev/mapper/case4t-scandata: 1861GB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos
        Number  Start   End     Size    Type     File system  Flags
         1      32.3kB  1861GB  1861GB  primary  ext3

        $ sudo fdisk -l /dev/sdb
        Disk /dev/sdb: 1860.5 GB, 1860593254400 bytes
        255 heads, 63 sectors/track, 226204 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0x00000081
           Device Boot Start     End      Blocks      Id System
        /dev/sdb1           1    106955   859116006   83 Linux
        /dev/sdb2      113103    226204   908491815   83 Linux
        /dev/sdb3      106956    113102   49375777+   83 Linux
        Partition table entries are not in disk order

        $ sudo parted /dev/sdb print
        Model: DELL PERC 6/i (scsi)
        Disk /dev/sdb: 1861GB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos
        Number  Start   End     Size    Type     File system  Flags
         1      32.3kB  880GB   880GB   primary  reiserfs
         3      880GB   930GB   50.6GB  primary
         2      930GB   1861GB  930GB   primary

    I find it a bit strange that partition one above is said to be reiserfs -- or, if it matters, it was previously reiserfs, but LVM recognizes it as a PV. To reiterate, neither /dev/mapper/case4t-scandatap1 (which I had used previously) nor /dev/case4t/scandata1 (as printed by fdisk) exists. And /dev/case4t/scandata (no partition number) cannot be mounted:

        $ sudo mount -t ext3 /dev/case4t/scandata /mnt/new
        mount: wrong fs type, bad option, bad superblock on /dev/mapper/case4t-scandata,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so

    All I get in syslog is:

        [170059.538137] VFS: Can't find ext3 filesystem on dev dm-0.

    Thanks in advance for any help you can offer, Brian

    P.S. I am on Ubuntu GNU/Linux 2.6.28-11-server (Jaunty) (out of date, I know -- that's on the laundry list).
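    A partition table nested inside a logical volume normally needs its device-mapper entries recreated after a reboot; kpartx (from the multipath-tools package) can do that without touching the data. A hedged sketch, using the device names from the dumps above:

        # Read the partition table inside the LV and create the mapping
        # (the resulting name can be "...scandata1" or "...scandatap1").
        sudo kpartx -av /dev/mapper/case4t-scandata

        # Then mount the mapped partition rather than the whole LV.
        ls /dev/mapper/
        sudo mount -t ext3 /dev/mapper/case4t-scandata1 /mnt/new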

    Read the article

  • gallery2 and nginx with rewrite return "file not found" for file names with a space (or + sign in the URL)

    - by Vangel
    I have set up nginx with Gallery2 on an internal server. Everything worked fine under apache2, which I checked first (it used to run on apache2). The problem: Gallery2 generates URLs with a + sign in them for file names/images that had spaces, so a file like "may report.jpg" becomes "may+report.jpg". The URL rewrite works, but Gallery2 throws a file-not-found error. This does not happen under apache2. Here are my nginx rewrite rules:

        location / {
            index main.php index.html;
            default_type text/html;
            # If the file exists as a static file serve it directly
            # without running all the other rewrite tests on it
            if (-f $request_filename) {
                break;
            }
        }

        location /v/ {
            # if ($request_uri !~ /main.php)
            # {
            rewrite ^/v/(.*)$ /main.php?g2_view=core.ShowItem&g2_path=$1 last;
            # }
        }

        location /d/ {
            if ($request_uri !~ /main.php) {
                rewrite ^/d/([0-9]+)-([0-9]+)/(.*)$ /main.php?g2_view=core.DownloadItem&g2_itemId=$1&g2_serialNumber=$2&g2_fileName=$3 last;
            }
        }

        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:8889;
            fastcgi_index main.php;
            fastcgi_intercept_errors on;  # to support 404s for PHP files not found
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
            fastcgi_param SERVER_NAME $host;
            fastcgi_read_timeout 300;
        }

    The site on its own works fine. Only images with spaces in the file name do not display in album view, and clicking the image for full-page view throws this error:

        Error (ERROR_MISSING_OBJECT) : Parent 103759 path report+april+456.flv
        in modules/core/classes/helpers/GalleryFileSystemEntityHelper_simple.class at line 98 (GalleryCoreApi::error)
        in modules/core/classes/GalleryCoreApi.class at line 1853 (GalleryFileSystemEntityHelper_simple::fetchChildIdByPathComponent)
        in modules/core/classes/helpers/GalleryFileSystemEntityHelper_simple.class at line 53 (GalleryCoreApi::fetchChildIdByPathComponent)
        in modules/core/classes/GalleryCoreApi.class at line 1804 (GalleryFileSystemEntityHelper_simple::fetchItemIdByPath)
        in modules/rewrite/classes/RewriteSimpleHelper.class at line 45 (GalleryCoreApi::fetchItemIdByPath)
        in ??? at line 0 (RewriteSimpleHelper::loadItemIdFromPath)
        in modules/rewrite/classes/RewriteUrlGenerator.class at line 103
        in modules/rewrite/classes/parsers/modrewrite/ModRewriteUrlGenerator.class at line 37 (RewriteUrlGenerator::_onLoad)
        in init.inc at line 147 (ModRewriteUrlGenerator::initNavigation)
        in main.php at line 180
        in main.php at line 94
        in main.php at line 83

    System information:

        Gallery version    2.2.4
        PHP version        5.3.6 fpm-fcgi
        Webserver          nginx/0.8.55
        Database           mysqli 5.0.95
        Toolkits           ImageMagick, Thumbnail, Gd
        Operating system   Linux CentOS-55-64-minimal 2.6.18-274.18.1.el5 #1 SMP Thu Feb 9 12:45:44 EST 2012 x86_64
        Browser            Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.52 Safari/536.5

    There is usable system information in the report above if that helps. I know the nginx is old, but it comes as default in the CentOS repo and I am not sure if upgrading will fix the problem or break something else. It seems Gallery2 must map the + back to a space internally, but why it's not doing so with nginx I can't tell.

    EDIT: I just verified that if I change the '+' sign to %20, then Gallery2 works -- but Gallery2 is generating the URLs with +. I found a (maybe) related problem here for IIS7 and Gallery2: http://forums.asp.net/t/1431951.aspx

    EDIT2: Accessing the URL without rewrite, with the + sign, works. It must be something to do with the rewrite. Here is the relevant apache2 rule, which might help:

        RewriteCond %{THE_REQUEST} /d/([0-9]+)-([0-9]+)/([^/?]+)(\?.|\ .)
        RewriteCond %{REQUEST_URI} !/main\.php$
        RewriteRule . /main.php?g2_view=core.DownloadItem&g2_itemId=%1&g2_serialNumber=%2&g2_fileName=%3 [QSA,L]

        RewriteCond %{THE_REQUEST} /v/([^?]+)(\?.|\ .)
        RewriteCond %{REQUEST_URI} !/main\.php$
        RewriteRule . /main.php?g2_path=%1 [QSA,L]
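    A quick way to pin down where the + is being mishandled is to request the same item with both encodings directly against nginx, mirroring the EDIT above (hostname and album path are placeholders):

        # A '+' only means "space" in query strings; in a URL path it is a
        # literal plus, so these two requests name different files once decoded.
        curl -s -o /dev/null -w "%{http_code}\n" "http://gallery.internal/v/album/may+report.jpg"
        curl -s -o /dev/null -w "%{http_code}\n" "http://gallery.internal/v/album/may%20report.jpg"

    If the %20 form returns 200 while the + form errors through the /v/ rewrite, the difference lies in how the captured $1 is decoded before Gallery2 sees it, not in Gallery2 itself.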

    Read the article

  • Troubleshooting unwanted NTP Traffic

    - by Jaxaeon
    A domain controller running Windows Server 2012 is sending NTP and NetBIOS traffic to an address that has never been configured as a time provider. The server logs give no indication that any NTP traffic is failing. The only place I see any evidence of this traffic is in the pfSense system logs:

        (Blocked) Jun 9 08:48:50 DOMAIN 10.0.1.100:123 192.128.127.254:123 UDP
        (Blocked) Jun 9 08:48:53 DOMAIN 10.0.1.100:137 192.128.127.254:137 UDP

    As far as I can tell, the NTP service is working normally otherwise:

        DC2.domain.com[10.0.1.101:123]:
            ICMP: 0ms delay
            NTP: -0.0131705s offset from DC1.domain.com
            RefID: DC1.domain.com [10.0.1.100]
            Stratum: 3
        DC1.domain.com *** PDC ***[10.0.1.100:123]:
            ICMP: 0ms delay
            NTP: +0.0000000s offset from DC1.domain.com
            RefID: clock1.albyny.inoc.net [64.246.132.14]
            Stratum: 2

        The time provider NtpClient is currently receiving valid time data from 1.pool.ntp.org,0x1 (ntp.m|0x0|0.0.0.0:123->204.2.134.163:123).
        The time provider NtpClient is currently receiving valid time data from 0.pool.ntp.org,0x1 (ntp.m|0x0|0.0.0.0:123->64.246.132.14:123).
        The time service is now synchronizing the system time with the time source 0.pool.ntp.org,0x1 (ntp.m|0x0|0.0.0.0:123->64.246.132.14:123).

    I've been inside and out of the NTP configuration and cannot find any reason for this traffic. Reverse DNS points the destination address to nothing.attdns.com. Pinging nothing.attdns.com from the domain controller in question leads to a response from loopback (127.0.0.2), which makes my head hurt. Any ideas?

    EDIT1: It should probably be noted that after a DNS flush, nslookup 192.128.127.254 returns nothing.attdns.com. 192.128.127.254 is not present in domain.com DNS records. The attdns.com domain is not present in cached lookups. 127.in-addr.arpa is clean of any funkiness.

    EDIT2: The loopback ping response from nothing.attdns.com is possibly unrelated. Machines on other networks also display this behavior.

    EDIT3: As mentioned in the comments, I tracked the problem network adapter back to my pfSense VM hosted in ESXi 5.5 (I know, shame on me for virtualizing a firewall). pfSense was configured to use DC1.domain.com as its primary time provider, but upon changing it back to pool.ntp.org the problem persists. pfSense logs give no indication of NTP misconfiguration. Everywhere I can think to look, this VM is identified as 10.0.1.253, so I still have no idea why it's sending NTP requests as 192.128… This firewall was a temporary solution to a problem that no longer exists, so I am going to decommission it.

    EDIT4: The queries were coming from another machine sharing the same virtual adapter as the firewall. The machine has two local adapters: one for the LAN, and the other for attached hardware that uses an Ethernet connection. That hardware sits in the mystery subnet, and the machine is broadcasting NTP requests over both adapters.
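    For this kind of hunt, the Windows time service can report exactly which sources it has been told about, separating configured peers from traffic some other host is generating. A small diagnostic sketch using standard w32tm options (Server 2008 and later):

        :: List the configured time sources and where each setting came
        :: from (local settings, GPO, or defaults).
        w32tm /query /configuration
        w32tm /query /peers

        :: Sample every DC in the domain and see who actually answers.
        w32tm /monitor /domain:domain.com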

    Read the article

  • Starting MySQL database server: mysqld . . . . . . . . . . . . . . failed!

    - by meder
    I restarted my VPS box (manually/hard restart) and ever since, mysql fails to start for whatever reason. I did a tail /var/log/syslog and I get this:

        Feb 20 11:49:33 kyrgyznews mysqld[11461]: ) ;InnoDB: End of page dump
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: 110220 11:49:33 InnoDB: Page checksum 1045788239, prior-to-4.0.14-form checksum 236985105
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: InnoDB: stored checksum 1178062585, prior-to-4.0.14-form stored checksum 236985105
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: InnoDB: Page lsn 0 10651, low 4 bytes of lsn at page end 10651
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: InnoDB: Page number (if stored to page already) 3,
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: InnoDB: space id (if created with >= MySQL-4.1.1 and stored already) 0
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: InnoDB: Database page corruption on disk or a failed
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: InnoDB: file read of page 3.
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: InnoDB: You may have to recover from a backup.
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: InnoDB: It is also possible that your operating
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: InnoDB: system has corrupted its own file cache
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: InnoDB: and rebooting your computer removes the
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: InnoDB: error.
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: InnoDB: If the corrupt page is an index page
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: InnoDB: you can also try to fix the corruption
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: InnoDB: by dumping, dropping, and reimporting
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: InnoDB: the corrupt table. You can use CHECK
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: InnoDB: TABLE to scan your table for corruption.
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: InnoDB: See also InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: InnoDB: about forcing recovery.
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: InnoDB: Ending processing because of a corrupt database page.
        Feb 20 11:49:33 kyrgyznews mysqld_safe[11469]: ended
        Feb 20 11:49:47 kyrgyznews /etc/init.d/mysql[12228]: 0 processes alive and '/usr/bin/mysqladmin --defaults-file=/etc/mysql/debian.cnf ping' resulted in
        Feb 20 11:49:47 kyrgyznews /etc/init.d/mysql[12228]: ^G/usr/bin/mysqladmin: connect to server at 'localhost' failed
        Feb 20 11:49:47 kyrgyznews /etc/init.d/mysql[12228]: error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)'
        Feb 20 11:49:47 kyrgyznews /etc/init.d/mysql[12228]: Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists!
        Feb 20 11:49:47 kyrgyznews /etc/init.d/mysql[12228]:
        Feb 20 11:49:56 kyrgyznews mysqld_safe[13437]: started
        Feb 20 11:49:56 kyrgyznews mysqld[13440]: InnoDB: The log sequence number in ibdata files does not match
        Feb 20 11:49:56 kyrgyznews mysqld[13440]: InnoDB: the log sequence number in the ib_logfiles!
        Feb 20 11:49:56 kyrgyznews mysqld[13440]: 110220 11:49:56 InnoDB: Database was not shut down normally!
        Feb 20 11:49:56 kyrgyznews mysqld[13440]: InnoDB: Starting crash recovery.
        Feb 20 11:49:56 kyrgyznews mysqld[13440]: InnoDB: Reading tablespace information from the .ibd files...
        Feb 20 11:49:56 kyrgyznews mysqld[13440]: InnoDB: Restoring possible half-written data pages from the doublewrite
        Feb 20 11:49:56 kyrgyznews mysqld[13440]: InnoDB: buffer...
        Feb 20 11:49:56 kyrgyznews mysqld[13440]: InnoDB: Database page corruption on disk or a failed
        Feb 20 11:49:56 kyrgyznews mysqld[13440]: InnoDB: file read of page 3.
        Feb 20 11:49:56 kyrgyznews mysqld[13440]: InnoDB: You may have to recover from a backup.

    I have looked at the page it references, http://dev.mysql.com/doc/refman/5.0/en/forcing-innodb-recovery.html, but before messing with any settings I was wondering what experienced DBAs would suggest doing. Is there any harm in forcing the recovery?

    PS - I did not make any updates to mysql. The version is mysql Ver 14.12 Distrib 5.0.51a, for debian-linux-gnu (i486) using readline 5.2.
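    Before reaching for innodb_force_recovery, note that the last [ERROR] line in the first log block is an unknown variable -- 'innodb_lock_wait_timout=50' is missing an "e" -- and mysqld aborts outright on unknown variables. A cautious sequence, assuming standard Debian paths:

        # 1. Fix the misspelled option first; it alone prevents startup.
        grep -n 'innodb_lock_wait_timout' /etc/mysql/my.cnf
        sudo sed -i 's/innodb_lock_wait_timout/innodb_lock_wait_timeout/' /etc/mysql/my.cnf

        # 2. If page corruption still blocks startup, raise the recovery
        #    mode one level at a time (1 is the least destructive).
        printf '[mysqld]\ninnodb_force_recovery = 1\n' | sudo tee /etc/mysql/conf.d/recovery.cnf
        sudo /etc/init.d/mysql start

        # 3. Once it starts, dump everything, then remove recovery.cnf
        #    before restoring or resuming normal writes.
        mysqldump -u root -p --all-databases > /root/all-databases.sql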

    Read the article

  • Intel Rapid Storage / Smart Response SSD caching issue

    - by goober
    Background: Recently built my own PC. It works! Almost. It's been a while since getting into the guts of these things, so I'm familiar but may be missing something simple. FYI, I don't care about blowing the OS away -- it's brand new and we can go back from scratch as many times as necessary.

    Goal / Issue: I'd like to use the SSD to take advantage of Intel's Smart Response technology, which allows the SSD to act as a cache for HDDs. I would like the SSD cache to sit in front of my HDDs, which I would like to be in a RAID1 array (so I get the speed from the SSD and the redundancy from the RAID1). However, Windows only sees the SSD in Device Manager (not as a drive), so I'm unsure what to do about that. Related: as far as I know, for this to work, the drives all have to be in a single RAID array (i.e. a RAID0 pairing of the SSD and the RAID1 HDD array). However, when attempting this at the BIOS level, I am told there is not enough space for an array.

    Steps so far:

    - Moved the SSD onto the Intel controller. (I'd had it on the Marvell 6.0 controller instead of the Intel controller, so the BIOS was only seeing it in a strange way.)
    - Updated the BIOS of the motherboard to the latest version.
    - Reinstalled Intel's RST (iRST?) software several times, as some forums reported it working after reinstalling 3 times (which does not inspire confidence).
    - Checked Intel storage: it does see the SSD as a physical, non-RAID device. However, it says no space exists if I try to create an array.
    - Checked the BIOS: the SSD does not show up in the boot order, but is an option that can be selected under boot options.
    - Tried the firmware update for that model. Issue: the firmware CD doesn't detect a drive; maybe the Intel storage controller is making it difficult?
    - Moved the SSD to the Marvell controller. The firmware update CD appeared to hang while searching for drives.
    - Swapped out the SATA cable for the manufacturer's and moved back to the Intel storage controller. Noticed at this point that in the Intel RST software, a device DOES show up in addition to the RAID set -- only shown as a "60 GB internal disk". Windows doesn't appear to see it as a drive, but it does still show in Device Manager.
    - Moved the SSD to a port from 0-3 on the MOBO and set SATA mode to IDE (after disconnecting the RAID1 config) to allow the firmware update to work. The firmware was already at the latest version.

    Next steps: ?

    Components involved:

    - ASUS P8Z68-V PRO motherboard (Intel Z68 chipset)
    - Intel i7 2600k processor
    - 2 x 1TB 7200 RPM HDDs
    - 64 GB Crucial M4 SSD (M4-CT064M4SSD2)

    For reference -- storage configuration: [Diagram of the four SATA controllers (Intel 3 Gbps, Intel 3 Gbps, Intel 6 Gbps, Marvell 6 Gbps) with the two 1 TB HDDs and the 64 GB SSD attached.]

    For reference -- Intel RST (v10.8.0.1003) screenshot: Don't mind the "rebuilding" -- I knocked a power cable out at one point; it's doing its job, not an indicator of a bad HDD.

    Any thoughts? Thanks in advance for any help!

    Read the article

  • Difference between `curl -I` and `curl -X HEAD`

    - by chmeee
    I was watching the funny server type from http://www.reddit.com with curl -I http://www.reddit.com when I guessed that curl -X HEAD http://www.reddit.com would do the same. In fact, it doesn't, and I'm curious why. This is what I observe running the two commands:

    - curl -I: works as expected, outputs the headers and exits.
    - curl -X HEAD: does not show anything and seems to wait for user input.

    But sniffing with tshark, I see the second command actually sends the same HTTP request and receives the correct answer; it just does not show it and does not close the connection.

    curl -I:

        0.000000 333.33.33.33 -> 213.248.111.106 TCP 59675 > http [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=47267342 TSER=0 WS=6
        0.045392 213.248.111.106 -> 333.33.33.33 TCP http > 59675 [SYN, ACK] Seq=0 Ack=1 Win=5792 Len=0 MSS=1460 TSV=2552532839 TSER=47267342 WS=1
        0.045441 333.33.33.33 -> 213.248.111.106 TCP 59675 > http [ACK] Seq=1 Ack=1 Win=5888 Len=0 TSV=47267353 TSER=2552532839
        0.045623 333.33.33.33 -> 213.248.111.106 HTTP HEAD / HTTP/1.1
        0.091665 213.248.111.106 -> 333.33.33.33 TCP http > 59675 [ACK] Seq=1 Ack=155 Win=6432 Len=0 TSV=2552532886 TSER=47267353
        0.861782 213.248.111.106 -> 333.33.33.33 HTTP HTTP/1.1 200 OK
        0.861830 333.33.33.33 -> 213.248.111.106 TCP 59675 > http [ACK] Seq=155 Ack=321 Win=6912 Len=0 TSV=47267557 TSER=2552533656
        0.862127 333.33.33.33 -> 213.248.111.106 TCP 59675 > http [FIN, ACK] Seq=155 Ack=321 Win=6912 Len=0 TSV=47267557 TSER=2552533656
        0.910810 213.248.111.106 -> 333.33.33.33 TCP http > 59675 [FIN, ACK] Seq=321 Ack=156 Win=6432 Len=0 TSV=2552533705 TSER=47267557
        0.910880 333.33.33.33 -> 213.248.111.106 TCP 59675 > http [ACK] Seq=156 Ack=322 Win=6912 Len=0 TSV=47267570 TSER=2552533705

    curl -X HEAD:

        34.106389 333.33.33.33 -> 213.248.111.90 TCP 51690 > http [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=47275868 TSER=0 WS=6
        34.149507 213.248.111.90 -> 333.33.33.33 TCP http > 51690 [SYN, ACK] Seq=0 Ack=1 Win=5792 Len=0 MSS=1460 TSV=3920268348 TSER=47275868 WS=1
        34.149560 333.33.33.33 -> 213.248.111.90 TCP 51690 > http [ACK] Seq=1 Ack=1 Win=5888 Len=0 TSV=47275879 TSER=3920268348
        34.149646 333.33.33.33 -> 213.248.111.90 HTTP HEAD / HTTP/1.1
        34.191484 213.248.111.90 -> 333.33.33.33 TCP http > 51690 [ACK] Seq=1 Ack=155 Win=6432 Len=0 TSV=3920268390 TSER=47275879
        34.192657 213.248.111.90 -> 333.33.33.33 TCP [TCP Dup ACK 15#1] http > 51690 [ACK] Seq=1 Ack=155 Win=6432 Len=0 TSV=3920268390 TSER=47275879
        34.823399 213.248.111.90 -> 333.33.33.33 HTTP HTTP/1.1 200 OK
        34.823453 333.33.33.33 -> 213.248.111.90 TCP 51690 > http [ACK] Seq=155 Ack=321 Win=6912 Len=0 TSV=47276048 TSER=3920269022

    Any idea why this difference in behaviour?
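    For context on why the second form hangs: -X only replaces the method string in the request line, while curl otherwise still behaves as if a body will follow. After the server's header-only reply (whose Content-Length promises bytes that, per the HTTP spec, never arrive for HEAD), curl sits waiting for that body and never closes the connection. A couple of ways to see this, assuming a reasonably recent curl:

        # -I / --head tells curl both to send HEAD *and* to expect no body.
        curl -I http://www.reddit.com

        # With -X HEAD, bound the wait so the hang can be observed safely.
        curl -sv -X HEAD --max-time 5 http://www.reddit.com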

    Read the article

  • Backup a hosted Sharepoint

    - by David Mackintosh
    One of my customers has outsourced their SharePoint and Exchange services to a hosted services provider. I believe it is a SharePoint 2007 service. It is a shared hosting solution, so we do not have any kind of access to the server itself; we only have user-level and sharepoint-administrator-level access to the SharePoint application. They have come to the point where they would like to have a copy of everything that is on the SharePoint server.

    I have downloaded Office SharePoint Designer 2007, and it features three (!) ways to back up a SharePoint server, none (!) of which work for me:

    - File > Export > Personal Web Package: when selecting everything, it calculates a negative size and barfs with a "No 'content-type' in CGI environment" error.
    - File > Export > SharePoint Template: barfs with "A World Wide Web browser, such as Windows Internet Explorer, is required to use this feature".
    - Site > Administration > Backup Web Site: wants to create the backup .cmp file on the SharePoint server itself. I don't have access to any servers on the same network, so I can't redirect it to any form of the suggested \\server\place. Barfs with "The Web application at $URL could not be found. [...]". Possibly moot, because Google tells me that bad things happen using OSD to back up sites larger than 24MB (which this site most definitely is).

    So I called the helpdesk of the outsource provider, and got told that they recommend using OSD, but no, they don't actually provide any application support for OSD (not that I blame them for that); however, they could do a stsadm.exe backup and provide us with that, and OSD should be able to read the resulting cmp file. Then, for authorization reasons, they had my customer call them directly (since I can't authorize such an operation), and they told him that he didn't want a stsadm.exe backup, he wanted to get into an 'explorer view' and deal with things that way (they were vague). Google hasn't been much help in figuring out what an 'explorer view' is, let alone how I bring one up.

    The end goal of this operation is to have a backup of the site as it exists (hopefully today, but shortly anyways) in such a format that we don't need another SharePoint server to restore it to. I.e., we'd like to be able to pick individual content directly out of this backup. We are not excessively concerned with things like formatting; we just want the documents. This is a fairly complex site with multiple subsites and multiple folders per subsite, so sitting there and manually downloading each file isn't really going to happen if there is a better, easier way.

    So, my questions:

    - Is the stsadm.exe backup what I want? If not, what do I want?
    - If I manage to convince them that I do want the stsadm.exe backup, can I pick files out of the resulting backup file with OSD?
    - If OSD isn't going to let me extract individual files, is there a tool I can use that can?
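    For reference, the provider-side command being discussed is a one-liner they would run on the SharePoint box itself; stsadm's site-collection backup takes a URL and an output file (the URL and path here are placeholders):

        stsadm.exe -o backup -url http://hosted.example.com/sites/customer -filename C:\backups\customer.bak

    Worth noting the distinction from OSD's .cmp web packages: an stsadm backup of this form is a full site-collection image meant to be fed to stsadm -o restore on another SharePoint farm, which cuts against the stated goal of restoring without a SharePoint server.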

    Read the article

  • vconfig created virtual interface and trunking - is the interface untagged or tagged for that VLAN ID?

    - by kce
    I am trying to set up an additional VLAN on our Debian-based router/firewall (which exists as a virtual machine on Hyper-V), on our core switch (an HP ProCurve 5406), and on a remote HP ProCurve 2610 that is connected via a WAN Transparent LAN Service (TLS) link. Let's work backwards from the network edge:

    The Debian server has an external connection attached to eth0. The internal interface is eth1, which is connected directly from our Hyper-V host to the 5406. The port that eth1 is attached to is set up as Trk12. The 2610 is attached to Trk9 (which trunks a whole slew of VLANs -- Trk9 is our TLS head). I can successfully ping the management IP addresses for my VLAN from both switches, but I cannot ping, from either switch, the virtual interface for my new VLAN on the Debian-based router/firewall. The existing VLAN works fine. What gives?

    The port eth1 is attached to is a trunk; the existing VLAN (ID 98) is untagged on the trunk, and the new VLAN (ID 198) is tagged. VLAN 198 is tagged on Trk9 on the 5406 and on the 2610. I can ping each switch's management IP (10.100.198.2 and 10.100.198.3) from the other switch, so that leg of the VLAN works -- however, I cannot communicate with eth1.198's 10.100.198.1. I feel like I'm missing something elementary, but what it is remains elusive to me. I suspect the issue is with the vconfig-created eth1.198. It should pass the tagged VLAN 198 packets, correct? But they cannot seem to get any further than the 5406. Communication on the existing VLAN 98 works fine.

    From the Debian box:

        eth1      Link encap:Ethernet  HWaddr 00:15:5d:34:5e:03
                  inet addr:10.100.0.1  Bcast:10.100.255.255  Mask:255.255.0.0
                  inet6 addr: fe80::215:5dff:fe34:5e03/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:12179786 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:20210532 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:1586498028 (1.4 GiB)  TX bytes:26154226278 (24.3 GiB)
                  Interrupt:9 Base address:0xec00

        eth1.198  Link encap:Ethernet  HWaddr 00:15:5d:34:5e:03
                  inet addr:10.100.198.1  Bcast:10.100.198.255  Mask:255.255.255.0
                  inet6 addr: fe80::215:5dff:fe34:5e03/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1496  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:72 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:0 (0.0 B)  TX bytes:3528 (3.4 KiB)

        # cat /proc/net/vlan/eth1.198
        eth1.198  VID: 198  REORDER_HDR: 0  dev->priv_flags: 1
                  total frames received 0
                  total bytes received 0
                  Broadcast/Multicast Rcvd 0
                  total frames transmitted 72
                  total bytes transmitted 3528
                  total headroom inc 0
                  total encap on xmit 39
        Device: eth1
        INGRESS priority mappings: 0:0 1:0 2:0 3:0 4:0 5:0 6:0 7:0
        EGRESS priority mappings:

        # ip route
        10.100.198.0/24 dev eth1.198  proto kernel  scope link  src 10.100.198.1
        206.174.64.0/20 dev eth0  proto kernel  scope link  src 206.174.66.14
        10.100.0.0/16 dev eth1  proto kernel  scope link  src 10.100.0.1
        default via 206.174.64.1 dev eth0

        # iptables -L -v
        Chain INPUT (policy DROP 6875 packets, 637K bytes)
         pkts bytes target prot opt in   out source        destination
           41  4320 ACCEPT all  --  lo   any anywhere      anywhere
        11481 1560K ACCEPT all  --  any  any anywhere      anywhere      state RELATED,ESTABLISHED
          107  8058 ACCEPT icmp --  any  any anywhere      anywhere
            0     0 ACCEPT tcp  --  eth1 any 10.100.0.0/24 anywhere      tcp dpt:ssh
          701  317K ACCEPT udp  --  eth1 any anywhere      anywhere      udp dpts:bootps:bootpc

        Chain FORWARD (policy DROP 1 packets, 40 bytes)
         pkts bytes target prot opt in       out      source   destination
         156K   25M ACCEPT all  --  eth1     any      anywhere anywhere
         215K  248M ACCEPT all  --  eth0     eth1     anywhere anywhere  state RELATED,ESTABLISHED
            0     0 ACCEPT all  --  eth1.198 any      anywhere anywhere
            0     0 ACCEPT all  --  eth0     eth1.198 anywhere anywhere  state RELATED,ESTABLISHED

        Chain OUTPUT (policy ACCEPT 13048 packets, 1640K bytes)
         pkts bytes target prot opt in out source destination

    From the 5406:

        # show vlan ports trk12 detail

        Status and Counters - VLAN Information - for ports Trk12

        VLAN ID Name                 | Status     Voice Jumbo Mode
        ------- -------------------- + ---------- ----- ----- --------
        98      WIFI                 | Port-based No    No    Untagged
        198     VLAN198              | Port-based No    No    Tagged
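    When a vconfig-style subinterface shows TX but zero RX, as eth1.198 does above, the quickest check of where tagged frames die is to watch the parent NIC for 802.1Q frames while pinging from the switch. A diagnostic sketch (interface names are from the question; note that on Hyper-V the virtual switch port must be set to trunk mode for the guest, or the tag is stripped before eth1 ever sees it):

        # Watch for VLAN-198-tagged frames arriving on the parent interface.
        tcpdump -i eth1 -e -nn vlan 198

        # The iproute2 view of the vconfig-created interface, useful for
        # confirming the VLAN ID really is 198.
        ip -d link show eth1.198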

    Read the article

  • How to Eliminate Tape Backup and Off-site Storage Service?

    - by Daniel Lucas
    PLEASE READ THE UPDATES AT THE BOTTOM. THANKS! ;)

    Environment info (all Windows):

    - 2 sites
    - 30 servers at site #1 (3TB of backup data)
    - 5 servers at site #2 (1TB of backup data)
    - MPLS backbone tunnel connecting site #1 and site #2

    Current backup process:

    Online backup (disk-to-disk): Site #1 has a server running Symantec Backup Exec 12.5 with four 1TB USB 2.0 disks. BE jobs for full backups run nightly on all servers in site #1 to these disks. Site #2 backs up to a central file server there using software they already had when we purchased them. A BE job pulls that data nightly to site #1 and stores it on said disks.

    Off-site backup (tape): Connected to our backup server is a tape drive. BE backs up the external disks to tape once a week, which gets picked up by our off-site storage company. Obviously we rotate two tape libraries: one is always here and one is always there.

    Requirements:

    - Eliminate the need for tape and the off-site storage service by doing disk-to-disk at each site and replicating site #1 to site #2 and vice versa.
    - A software-based solution, as hardware options have been too pricey (e.g., SonicWall, Arkeia).
    - Agents for Exchange, SharePoint, and SQL.

    Some ideas so far:

    - Storage: DroboPro at each site with an initial 8TB of storage (these are expandable up to 16TB at present). I like these because they are rackmountable, allow disparate drives, and have iSCSI interfaces. They are relatively cheap too.
    - Software: Symantec Backup Exec 12.5 already has all the agents and licenses we need. I'd like to keep using it unless there is a better solution, similarly priced, that does everything BE does plus deduplication and replication.
    - Server: because there is no more need for a SCSI adapter (for the tape drive), we are going to virtualize our backup server, as it is currently the only physical machine save for the SQL boxes.

    Problems: When replicating between sites we want as little data as possible to go across the pipe. There is no deduplication or compression in what I have laid out here so far. The files being replicated are BE's virtual tape libraries from our disk-to-disk backup. Because of this, each of those huge files will go across the wire every week, because they change every day.

    And finally, the question: Is there any software out there that does deduplication, or at least compression, to handle just our site-to-site replication? Or, looking at our setup, is there any other solution that I am missing that might be cheaper, faster, better? Thanks. Sorry so long.

    UPDATE 1: I've set a bounty on this question to get it more attention. I'm looking for software that will handle replication of data between two sites using the least amount of data possible (either compression, deduplication, or some other method). Something similar to rsync would work, but it needs to be native to Windows and not a port involving shenanigans to get up and running. I prefer a GUI-based product, and I don't mind shelling out a few bones if it works. Please, answers that meet the above criteria only. If you don't think one exists or if you think I'm being too restrictive, keep it to yourself. If after seven days there is no answer at all, so be it. Thanks again everyone.

    UPDATE 2: I really appreciate everyone coming forward with suggestions. There is no way for me to try all of these before the bounty expires. For now I'm going to let this bounty run out, and whoever has the most votes will get the 100 rep points. Thanks again!

    Read the article

  • unattended-upgrades does not reboot

    - by Cheiron
    I am running Debian 7 stable with unattended-upgrades (every morning at 6 AM) to make sure I am always fully updated. I have the following config:

        $ cat /etc/apt/apt.conf.d/50unattended-upgrades
        // Automatically upgrade packages from these origin patterns
        Unattended-Upgrade::Origins-Pattern {
                // Archive or Suite based matching:
                // Note that this will silently match a different release after
                // migration to the specified archive (e.g. testing becomes the
                // new stable).
                "o=Debian,a=stable";
                "o=Debian,a=stable-updates";
        //      "o=Debian,a=proposed-updates";
                "origin=Debian,archive=stable,label=Debian-Security";
        };

        // List of packages to not update
        Unattended-Upgrade::Package-Blacklist {
        //      "vim";
        //      "libc6";
        //      "libc6-dev";
        //      "libc6-i686";
        };

        // This option allows you to control if on a unclean dpkg exit
        // unattended-upgrades will automatically run
        //   dpkg --force-confold --configure -a
        // The default is true, to ensure updates keep getting installed
        //Unattended-Upgrade::AutoFixInterruptedDpkg "false";

        // Split the upgrade into the smallest possible chunks so that
        // they can be interrupted with SIGUSR1. This makes the upgrade
        // a bit slower but it has the benefit that shutdown while a upgrade
        // is running is possible (with a small delay)
        //Unattended-Upgrade::MinimalSteps "true";

        // Install all unattended-upgrades when the machine is shuting down
        // instead of doing it in the background while the machine is running
        // This will (obviously) make shutdown slower
        //Unattended-Upgrade::InstallOnShutdown "true";

        // Send email to this address for problems or packages upgrades
        // If empty or unset then no email is sent, make sure that you
        // have a working mail setup on your system. A package that provides
        // 'mailx' must be installed. E.g. "[email protected]"
        Unattended-Upgrade::Mail "root";

        // Set this value to "true" to get emails only on errors. Default
        // is to always send a mail if Unattended-Upgrade::Mail is set
        Unattended-Upgrade::MailOnlyOnError "true";

        // Do automatic removal of new unused dependencies after the upgrade
        // (equivalent to apt-get autoremove)
        //Unattended-Upgrade::Remove-Unused-Dependencies "false";

        // Automatically reboot *WITHOUT CONFIRMATION* if a
        // the file /var/run/reboot-required is found after the upgrade
        Unattended-Upgrade::Automatic-Reboot "true";

        // Use apt bandwidth limit feature, this example limits the download
        // speed to 70kb/sec
        //Acquire::http::Dl-Limit "70";

    As you can see, Automatic-Reboot is true and thus the server should automatically reboot. Last time I checked, the server had been online for over 100 days, which means that the update from Debian 7.1 to Debian 7.2 happened while the server was up (and indeed, all updates were installed). That release involves kernel updates, which means that the server should have rebooted. It did not. The server was running very slowly, so I rebooted it manually, which fixed that.

    I did some research and found out that unattended-upgrades responds to the reboot-required file in /var/run/. I touched this file and waited one week; the file still exists and the server did not reboot. So I think that unattended-upgrades ignores the auto-reboot part.

    So, am I doing something wrong here? Why did the server not restart? The upgrade part works perfectly, by the way; it's just the reboot part that does not seem to work as it should.
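    A way to see what the nightly run will actually decide, including the reboot step, is to invoke the worker by hand in debug mode; it prints the options it parsed and what it would do. These are standard flags of the unattended-upgrade binary (note the singular name):

        # Confirm the flag file exists, then dry-run with verbose reasoning.
        ls -l /var/run/reboot-required
        sudo unattended-upgrade --dry-run --debug

        # Pin down which version is installed; older versions differ in
        # which options (including Automatic-Reboot) they honour.
        dpkg -s unattended-upgrades | grep Version

    One thing worth verifying in that debug output is whether the reboot check only fires at the end of a run that actually installed packages -- if so, touching /var/run/reboot-required and waiting on a quiet night would do nothing by itself.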

    Read the article

  • /usr/bin/sshd isn't linked against PAM on one of my systems. What is wrong and how can I fix it?

    - by marc.riera
    Hi, I'm using AD as my user account server with LDAP. Most of the servers run with UsePAM yes, except this one: its sshd lacks PAM support.

        root@linserv9:~# ldd /usr/sbin/sshd
        linux-vdso.so.1 => (0x00007fff621fe000)
        libutil.so.1 => /lib/libutil.so.1 (0x00007fd759d0b000)
        libz.so.1 => /usr/lib/libz.so.1 (0x00007fd759af4000)
        libnsl.so.1 => /lib/libnsl.so.1 (0x00007fd7598db000)
        libcrypto.so.0.9.8 => /usr/lib/libcrypto.so.0.9.8 (0x00007fd75955b000)
        libcrypt.so.1 => /lib/libcrypt.so.1 (0x00007fd759323000)
        libc.so.6 => /lib/libc.so.6 (0x00007fd758fc1000)
        libdl.so.2 => /lib/libdl.so.2 (0x00007fd758dbd000)
        /lib64/ld-linux-x86-64.so.2 (0x00007fd759f0e000)

    I have these packages installed:

        root@linserv9:~# dpkg -l|grep -E 'pam|ssh'
        ii denyhosts 2.6-2.1 an utility to help sys admins thwart ssh hac
        ii libpam-modules 0.99.7.1-5ubuntu6.1 Pluggable Authentication Modules for PAM
        ii libpam-runtime 0.99.7.1-5ubuntu6.1 Runtime support for the PAM library
        ii libpam-ssh 1.91.0-9.2 enable SSO behavior for ssh and pam
        ii libpam0g 0.99.7.1-5ubuntu6.1 Pluggable Authentication Modules library
        ii libpam0g-dev 0.99.7.1-5ubuntu6.1 Development files for PAM
        ii openssh-blacklist 0.1-1ubuntu0.8.04.1 list of blacklisted OpenSSH RSA and DSA keys
        ii openssh-client 1:4.7p1-8ubuntu1.2 secure shell client, an rlogin/rsh/rcp repla
        ii openssh-server 1:4.7p1-8ubuntu1.2 secure shell server, an rshd replacement
        ii quest-openssh 5.2p1_q13-1 Secure shell
        root@linserv9:~#

    What am I doing wrong? Thanks.

    Edit: root@linserv9:~# cat /etc/pam.d/sshd

        # PAM configuration for the Secure Shell service

        # Read environment variables from /etc/environment and
        # /etc/security/pam_env.conf.
        auth       required     pam_env.so # [1]
        # In Debian 4.0 (etch), locale-related environment variables were moved to
        # /etc/default/locale, so read that as well.
        auth       required     pam_env.so envfile=/etc/default/locale

        # Standard Un*x authentication.
        @include common-auth

        # Disallow non-root logins when /etc/nologin exists.
        account    required     pam_nologin.so

        # Uncomment and edit /etc/security/access.conf if you need to set complex
        # access limits that are hard to express in sshd_config.
        # account  required     pam_access.so

        # Standard Un*x authorization.
        @include common-account

        # Standard Un*x session setup and teardown.
        @include common-session

        # Print the message of the day upon successful login.
        session    optional     pam_motd.so # [1]

        # Print the status of the user's mailbox upon successful login.
        session    optional     pam_mail.so standard noenv # [1]

        # Set up user limits from /etc/security/limits.conf.
        session    required     pam_limits.so

        # Set up SELinux capabilities (need modified pam)
        # session  required     pam_selinux.so multiple

        # Standard Un*x password updating.
        @include common-password

    Edit 2: UsePAM yes fails. With this configuration, ssh fails to start:

        root@linserv9:/home/admmarc# cat /etc/ssh/sshd_config |grep -vE "^[ \t]*$|^#"
        Port 22
        Protocol 2
        ListenAddress 0.0.0.0
        RSAAuthentication yes
        PubkeyAuthentication yes
        AuthorizedKeysFile .ssh/authorized_keys
        ChallengeResponseAuthentication yes
        UsePAM yes
        Subsystem sftp /usr/lib/sftp-server
        root@linserv9:/home/admmarc#

    The error it gives is as follows:

        root@linserv9:/home/admmarc# /etc/init.d/ssh start
        * Starting OpenBSD Secure Shell server sshd
        /etc/ssh/sshd_config: line 75: Bad configuration option: UsePAM
        /etc/ssh/sshd_config: terminating, 1 bad configuration options
        ...fail!
        root@linserv9:/home/admmarc#
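
    A hedged diagnostic sketch (standard Debian/Ubuntu commands): the package list shows both openssh-server 4.7p1 and quest-openssh 5.2p1 installed, so the first thing to pin down is which package owns the sshd binary that actually runs, and whether that binary links PAM at all.

        # which package installed the binary the init script starts?
        dpkg -S /usr/sbin/sshd
        # does that binary link against PAM? (no output means it does not)
        ldd /usr/sbin/sshd | grep libpam
        # an sshd built without PAM rejects the UsePAM keyword in test mode
        /usr/sbin/sshd -t -f /etc/ssh/sshd_config

    If dpkg -S reports quest-openssh rather than openssh-server, that build has most likely replaced the stock one; reinstalling the distro package (apt-get install --reinstall openssh-server) should restore a PAM-linked sshd, assuming Quest's AD integration is not needed on this host.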

    Read the article

  • mysqld service crashes on restart after importing mysqldump #innodb

    - by ubunut
    I have 2 mysql servers. Let's call them server01 and server02. Both have the same configuration:

        mysqladmin Ver 8.42 Distrib 5.1.61, for redhat-linux-gnu on x86_64

        [client]
        default-character-set=utf8

        [mysqld]
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        user=mysql
        # Disabling symbolic-links is recommended to prevent assorted security risks
        symbolic-links=0
        max_allowed_packet = 16M
        default-character-set=utf8
        default-collation=utf8_unicode_ci
        character-set-server=utf8
        collation-server=utf8_unicode_ci
        default-storage-engine = InnoDB
        innodb_data_home_dir = /var/lib/mysql
        innodb_log_group_home_dir = /var/lib/mysql
        innodb_data_file_path = ibdata1:10M:autoextend
        innodb_additional_mem_pool_size = 2M
        innodb_log_file_size = 5M
        innodb_log_buffer_size = 8M
        innodb_lock_wait_timeout = 50
        innodb_flush_log_at_trx_commit = 1
        innodb_buffer_pool_size = 700M
        table_cache = 300
        thread_cache_size = 4
        query_cache_size = 200m
        query_cache_limit = 10m

        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid

    I make a mysqldump on server01: mysqldump -uuser -ppassword --all-databases > testservers.sql (most tables in these databases are InnoDB; some of the mysql.* tables are InnoDB too). Then I import testservers.sql on server02: mysql -uuser < testservers.sql (mysqld has been started with --skip-networking).

    So far so good: I can log in to mysql and everything seems to be OK. BUT when I exit to the shell and execute service mysqld restart, the service fails to start. Stack trace in /var/log/mysqld.log:

        121022 14:53:19 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
        121022 14:53:19 [Warning] '--default-character-set' is deprecated and will be removed in a future release. Please use '--character-set-server' instead.
        121022 14:53:19 [Warning] '--default-collation' is deprecated and will be removed in a future release. Please use '--collation-server' instead.
        12:53:19 UTC - mysqld got signal 11 ;
        This could be because you hit a bug. It is also possible that this binary
        or one of the libraries it was linked against is corrupt, improperly built,
        or misconfigured. This error can also be caused by malfunctioning hardware.
        We will try our best to scrape up some info that will hopefully help diagnose
        the problem, but since we have already crashed, something is definitely wrong
        and this may fail.

        key_buffer_size=8384512
        read_buffer_size=131072
        max_used_connections=0
        max_threads=151
        thread_count=0
        connection_count=0
        It is possible that mysqld could use up to
        key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 338324 K bytes of memory
        Hope that's ok; if not, decrease some variables in the equation.

        Thread pointer: 0x267e630
        Attempting backtrace. You can use the following information to find out
        where mysqld died. If you see no messages after this, something went
        terribly wrong...
        stack_bottom = 7fff3efe0be0 thread_stack 0x40000
        /usr/libexec/mysqld(my_print_stacktrace+0x29) [0x84bd89]
        /usr/libexec/mysqld(handle_fatal_signal+0x483) [0x6a0be3]
        /lib64/libpthread.so.0() [0x338d60f500]
        /usr/libexec/mysqld(ha_resolve_by_name(THD*, st_mysql_lex_string const*)+0x81) [0x6956e1]
        /usr/libexec/mysqld(open_table_def(THD*, st_table_share*, unsigned int)+0xe0a) [0x60e5ba]
        /usr/libexec/mysqld(get_table_share(THD*, TABLE_LIST*, char*, unsigned int, unsigned int, int*)+0x20b) [0x602b0b]
        /usr/libexec/mysqld() [0x603597]
        /usr/libexec/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x7a1) [0x6079a1]
        /usr/libexec/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5d0) [0x608570]
        /usr/libexec/mysqld(open_and_lock_tables_derived(THD*, TABLE_LIST*, bool)+0x6a) [0x60877a]
        /usr/libexec/mysqld(plugin_init(int*, char**, int)+0x622) [0x715af2]
        /usr/libexec/mysqld() [0x5bd3b2]
        /usr/libexec/mysqld(main+0x1b3) [0x5bfc93]
        /lib64/libc.so.6(__libc_start_main+0xfd) [0x338d21ecdd]
        /usr/libexec/mysqld() [0x5087b9]
        Trying to get some variables.
        Some pointers may be invalid and cause the dump to abort.
        Query (0): is an invalid pointer
        Connection ID (thread ID): 0
        Status: NOT_KILLED
        The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
        information that should help you find out what is causing the crash.
        121022 14:53:19 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended

    A typical mysqldump entry looks like this:

        DROP TABLE IF EXISTS `adodb_logsql`;
        /*!40101 SET @saved_cs_client     = @@character_set_client */;
        /*!40101 SET character_set_client = utf8 */;
        CREATE TABLE `adodb_logsql` (
          `id` bigint(10) unsigned NOT NULL AUTO_INCREMENT,
          `created` datetime NOT NULL,
          `sql0` varchar(250) NOT NULL DEFAULT '',
          `sql1` text,
          `params` text,
          `tracer` text,
          `timer` decimal(16,6) NOT NULL DEFAULT '0.000000',
          PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='to save some logs from ADOdb';
        /*!40101 SET character_set_client = @saved_cs_client */;

    IF I change all occurrences of "ENGINE=InnoDB" to "ENGINE=MyISAM" before import, then the service has no problem restarting. I'm quite puzzled as to what's happening; maybe I'm just an idiot, and if so, by all means tell me. Any help would be greatly appreciated!
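
    A hedged recovery sketch (database names and paths are placeholders): the backtrace dies in plugin_init/open_table, which suggests the imported mysql.* system tables are what InnoDB chokes on at startup. Re-dumping only the application schemas and letting MySQL rebuild its own system tables avoids that, and would also explain why converting the dump to MyISAM sidesteps the crash:

        # on server01: dump application schemas only, not the mysql schema
        mysqldump -uuser -p --databases app_db1 app_db2 > apps.sql

        # on server02: start over with a clean data directory
        service mysqld stop
        mv /var/lib/mysql /var/lib/mysql.broken   # keep the old datadir as a backup
        mysql_install_db --user=mysql             # recreate the system tables
        service mysqld start
        mysql -uuser -p < apps.sql

    Users and grants then need to be recreated separately (or exported with a grants-only dump) rather than carried over inside the mysql schema.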

    Read the article

  • Reinstall Postfix

    - by Kevin
    I tried to reinstall Postfix, but I get this bunch of errors:

        root@***:/etc/init.d# sudo apt-get install -f postfix
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Suggested packages:
          procmail postfix-mysql postfix-pgsql postfix-ldap postfix-pcre
          resolvconf postfix-cdb mail-reader
        The following NEW packages will be installed:
          postfix
        0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
        Need to get 0B/1,389kB of archives.
        After this operation, 3,531kB of additional disk space will be used.
        Preconfiguring packages ...
        Selecting previously deselected package postfix.
        (Reading database ... 56122 files and directories currently installed.)
        Unpacking postfix (from .../postfix_2.7.1-1ubuntu0.1_amd64.deb) ...
        Processing triggers for ureadahead ...
        Processing triggers for ufw ...
        Processing triggers for man-db ...
        Setting up postfix (2.7.1-1ubuntu0.1) ...

        Configuration file `/etc/init.d/postfix'
         ==> File on system created by you or by a script.
         ==> File also in package provided by package maintainer.
           What would you like to do about it ?  Your options are:
            Y or I  : install the package maintainer's version
            N or O  : keep your currently-installed version
              D     : show the differences between the versions
              Z     : start a shell to examine the situation
         The default action is to keep your current version.
        *** postfix (Y/I/N/O/D/Z) [default=N] ? Y
        Installing new version of config file /etc/init.d/postfix ...
        Adding group `postfix' (GID 109) ...
        Done.
        Adding system user `postfix' (UID 106) ...
        Adding new user `postfix' (UID 106) with group `postfix' ...
        Not creating home directory `/var/spool/postfix'.
        Creating /etc/postfix/dynamicmaps.cf
        Adding tcp map entry to /etc/postfix/dynamicmaps.cf
        Adding group `postdrop' (GID 115) ...
        Done.
        setting myhostname: ***.net
        setting alias maps
        setting alias database
        setting myorigin
        setting destinations: ***.net, localhost.***.net, , localhost
        setting relayhost:
        setting mynetworks: 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        setting mailbox_size_limit: 0
        setting recipient_delimiter: +
        setting inet_interfaces: all

        Postfix is now set up with a default configuration. If you need to make
        changes, edit /etc/postfix/main.cf (and others) as needed. To view Postfix
        configuration values, see postconf(1).

        After modifying main.cf, be sure to run '/etc/init.d/postfix reload'.

        Running newaliases
        postalias: fatal: /etc/mailname: cannot open file: Permission denied
        dpkg: error processing postfix (--configure):
         subprocess installed post-installation script returned error exit status 1
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place
        Errors were encountered while processing:
         postfix
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I tried aptitude purge, remove, autoclean and all of the dpkg options (configure, remove, purge), but nothing did the trick. /etc/mailname exists (0644 root:root) with *.net as its content (fetched from the hostname). What am I doing wrong?
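
    A hedged diagnostic sketch (the FQDN below is a placeholder): with 0644 root:root on the file itself, the remaining usual suspects are the parent directory's permissions or a symlink pointing nowhere, and rewriting the file plainly before re-running the failed step rules both out.

        ls -ld /etc /etc/mailname     # /etc needs o+rx; the file needs o+r
        file /etc/mailname            # catches a dangling symlink
        echo "mail.example.net" > /etc/mailname   # placeholder FQDN, one line
        chmod 644 /etc/mailname
        dpkg --configure -a           # re-runs the failed post-install step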

    Read the article

  • Windows 7, HTTPS WebDav: Asks for password twice and fails. Any workarounds?

    - by AutoDMC
    Howdy. I have a Dav server running with PHP SabreDAV (code.google.com/p/sabredav/wiki/Windows) on Cherokee at an HTTPS-secured URL. It's set to use HTTPS, and uses Digest authentication. I can log in with multiple browsers and a few third-party clients (BitKinex and Java AnyClient can connect and browse as well, caveats below). However, when attempting to log in with Windows 7 (surprise, surprise), it asks for my password twice, then tells me that my folder is invalid.

    I have verified that the server is using Digest authentication. I've verified multiple times that third-party software can connect. I even went out and bought a GoDaddy SSL certificate so my SSL wouldn't be self-signed anymore. I've applied the registry hacks here: support.microsoft.com/kb/943280 (note that the article says the "fix" already exists for Windows 7; I just need magical registry hax to get it to work). I've applied the registry hacks here: support.microsoft.com/kb/941050. I've applied the registry hacks here: support.microsoft.com/kb/841215 (supposedly allows Basic auth, which shouldn't apply, but why not?). All to no avail; Windows continues to ask for my password twice, then state that "The folder you entered does not appear to be valid. Please choose another."

    Try the command line? Sure:
    - NET USE "https://dav.example.com/" password /USER:me (System error 59)
    - NET USE "https://dav.example.com/" (System error 1790)
    - NET USE "https://dav.example.com/subdir/" password /USER:me (System error 59)
    - NET USE "https://dav.example.com/subdir/" (System error 1790)

    For good luck: ping dav.example.com ... works. And again, web browsers can access the share just fine, and so can third-party tools.

    Best I can tell at this point is "HAHA, NO WEBDAV FOR YOU ON WINDOWS 7", which would be fine except everyone who will be using this application... uses Windows 7. And most are not as persistent or pugnacious as I am. I feel like I've burned through every random suggestion I've found anywhere in the first 10 pages of Google on every search term I can think of. Any ideas? I need it to be WebDAV, I need it to be over HTTPS, and I really do need a method to access it from Windows 7.

    EXTRA DETAIL: However, the "third party" programs I've tried have either been buggy, incomplete, or have silly ... "glitches." For example, BitKinex seems to fixate on any HTTP error codes sent, so if there's a glitch reading a directory, BAM, that directory is always listed empty. Long directory listings also show up as blank, even though the transfer panel shows that directory listing is still taking place. In any case, BitKinex is useless for development purposes for the reasons above. And besides, I'm building this for people other than myself, people who will want to get this dav share working "the regular way."
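
    A hedged retry sketch (URL and credentials are the question's placeholders): Windows 7's WebDAV redirector lives in the WebClient service, and mapping to an explicit drive letter is the net use form that tends to behave most predictably.

        :: make sure the WebDAV redirector is installed and running
        sc config WebClient start= auto
        net start WebClient
        :: map to a drive letter; the password comes before /user in net use syntax
        net use Z: "https://dav.example.com/" password /user:me

    If the mapped form fails the same way, capturing the exchange server-side is worth it: the Windows client sends an OPTIONS and PROPFIND with its own Digest quirks, and a SabreDAV log of those two requests usually shows whether authentication or the folder probe is what fails.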

    Read the article

  • AWS Load balancer connection reset

    - by joshmmo
    I have an ELB set up with two instances. The issue I have with it is that when I do not add www, the ELB just hangs. This is some info I get when I spider with wget:

        Spider mode enabled. Check if remote file exists.
        --2013-06-20 13:40:54--  http://learning.example.com/
        Resolving learning.example.com... 54.xxx.x.x53, 50.xx.xxx.x71
        Connecting to learning.example.com|54.xxx.x.x53|:80... connected.
        HTTP request sent, awaiting response... No data received.
        Retrying.

    When I add www it works great. I have a GoDaddy SSL cert, added in the listener section, that covers 3 domains: www.learning.example.com, files.learning.example.com and learning.example.com. These are my listener settings (LB protocol/port, instance protocol/port, cipher, certificate):

        HTTP  80   ->  HTTPS 443   N/A     N/A
        SSL   443  ->  SSL   443   Change  canvasNew (Change)

    My EC2 instances are running apache2 on Ubuntu 12.04. I will be happy to post my vhosts file if needed. However, when I ran the server with the domains pointing to just one EC2 instance, things worked fine. How can I fix this issue for learning.example.com? Why does www work just fine? A second question would be: what is the difference between instance protocol and load balancer protocol?

    EDIT: Here are the dig results for learning.example.com from yesterday. (I have since changed the DNS entry to point to one instance to make sure it was the ELB; when I switch it back I will do the same for www.learning.example.com.)

        ; <<>> DiG 9.9.1-P2 <<>> learning.example.com
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 20210
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;learning.example.com. IN A

        ;; ANSWER SECTION:
        learning.example.com. 2559 IN CNAME canvas-22222222222.us-west-1.elb.amazonaws.com.
        canvas-22222222222.us-west-1.elb.amazonaws.com. 60 IN A 54.xxx.x.x53
        canvas-22222222222.us-west-1.elb.amazonaws.com. 60 IN A 50.xx.xxx.x71

        ;; Query time: 83 msec
        ;; SERVER: 10.x.xx.20#53(10.x.xx.20)
        ;; WHEN: Thu Jun 20 13:40:47 2013
        ;; MSG SIZE  rcvd: 137

    EDIT 2: Here is some more info that might be helpful. Port Configuration:

        80 (HTTP) forwarding to 443 (HTTPS)
            Backend Authentication: Disabled
            Stickiness: Disabled (edit)
        443 (SSL, Certificate: canvasNew) forwarding to 443 (SSL)
            Backend Authentication: Disabled

    So I switched everything to one EC2 IP address to bypass the ELB and make sure things are working. It's running great: both www and the non-www URL work perfectly fine. It's only when I switch things to the ELB that learning.example.com hangs while www.learning.example.com works. Hopefully you can get some ideas flowing.
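
    Since both names resolve to the same ELB and only the bare name hangs, one plausible culprit is name-based virtual hosting on the Apache backends: the request's Host header, not its DNS name, decides which vhost answers. A hedged vhost sketch (the DocumentRoot is hypothetical) that makes the www vhost also answer for the bare and files names:

        <VirtualHost *:443>
            ServerName www.learning.example.com
            # without these aliases, requests carrying the bare Host header can
            # fall through to a default vhost that never responds properly
            ServerAlias learning.example.com files.learning.example.com
            DocumentRoot /var/www/canvas
            SSLEngine on
        </VirtualHost>

    As for the second question: the load balancer protocol is what clients speak to the ELB, and the instance protocol is what the ELB speaks to the backends; they can differ, as in the HTTP 80 to HTTPS 443 listener above.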

    Read the article

  • batch file to deploy files

    - by Martin Michalak
    Hi, I have created a batch file which pulls info from a *.txt file and deploys code from the source to the destination:

        SET Source=%1
        if exist %Source% (
            ECHO Source for WEB exists
        ) else (
            ECHO Wrong build %Source% doesn't exist
            GOTO Menu
        )
        SET Server=%2
        SET AppPool=%3
        SET Destination=%4
        SET Folder=%5
        SET ENV=%6
        SET AppName=%7
        SET Envlog=%8
        ECHO Deployment of WEB > %Envlog% %Date% %Time%
        echo.
        @ECHO Stopping App Pools
        @ECHO Stopping App Pools >> %Envlog% %Date% %Time%
        D:\ICTTools\PSEXEC.EXE -d \\%Server% cmd.exe /c c:\windows\system32\inetsrv\appcmd STOP apppool /apppool.name:%AppPool%
        echo.
        @ECHO App Pools will be stopped in the background
        @ECHO App Pools will be stopped in the background >> %Envlog% %Date% %Time%
        Pause
        echo.
        IF EXIST "%Destination%" (
            ECHO Deleting %AppName% %Folder%
            RMDIR %Destination% /s /q
            ECHO Destination Folder %Folder% Deleted
            ECHO Destination Folder %Folder% Deleted >> %Envlog% %Date% %Time%
        ) else (
            ECHO Destination Folder %Destination% does not exist, please check
            ECHO Destination Folder %Destination% does not exist, please check >> %Envlog% %Date% %Time%
            Pause
        )
        echo.
        @ECHO Starting Robocopy for %AppName%
        @ECHO Starting Robocopy for %AppName% >> %Envlog% %Date% %Time%
        echo.
        START /WAIT /MIN ROBOCOPY.EXE %Source% %Destination% *.* /S /NP /R:3 /W:5 /LOG:"Logs\Robo%AppName%%ENV%.log"
        D:\Tools\Windiff\windiff.exe %Source% %Destination%
        echo.
        @ECHO Finished with Robocopy
        @ECHO Finished with Robocopy >> %Envlog% %Date% %Time%
        echo.
        @ECHO Checking if App pools stopped:
        @ECHO Checking if App pools stopped: >> %Envlog% %Date% %Time%
        D:\ICTTools\PSEXEC.EXE \\%Server% c:\windows\system32\inetsrv\appcmd LIST apppool /apppool.name:%AppPool%
        @echo off
        set /p ask=All app pools stopped? (y/n)
        if %ask%==y (echo Great, please continue with deployment) else echo Before continuing please check why app pools did not stop
        @echo App pools stopped?: %ask% >> %Envlog% %Date% %Time%
        DEL %Source%\web.config
        echo.
        @ECHO Production Config check
        if exist "%Destination%\%ENV%-Web.config" (
            echo.
            ECHO The Application production configuration file does exist.
            ECHO The Application production configuration file does exist. >> %Envlog% %Date% %Time%
            COPY %Destination%\%ENV%-Web.config web.config
            echo.
            ECHO Production %ENV%-Web.config has been renamed to web.config
            ECHO Production %ENV%-Web.config has been renamed to web.config >> %Envlog% %Date% %Time%
        ) else (
            ECHO The Application production configuration file is missing in Production %AppName%
            ECHO The Application production configuration file is missing in Production %AppName% >> %Envlog% %Date% %Time%
            explorer %Destination%
            Pause
        )
        echo.
        @ECHO Confirm that configs were renamed correctly, if yes please hit any key to START APP Pools
        @ECHO Confirm that configs were renamed correctly, if yes please hit any key to START APP Pools >> %Envlog% %Date% %Time%
        Pause
        echo.
        @ECHO Start %AppName% Application Pool >> %Envlog% %Date% %Time%
        D:\ICTTools\PSEXEC.EXE \\%Server% c:\windows\system32\inetsrv\appcmd START apppool /apppool.name:%AppPool%
        @echo off
        set /p ask=All app pools started? (y/n)
        if %ask%==y (echo Great, please continue with deployment) else echo Before continuing please check why app pools did not start
        @echo App pools started?: %ask% >> %Envlog% %Date% %Time%
        Pause
        echo.
        @ECHO Build Version for %AppName%
        @ECHO Build Version for %AppName% >> %Envlog% %Date% %Time%
        type %Destination%\buildinfo.xml
        echo.
        ECHO ...............................................
        @ECHO ...........Deployment Completed................
        @ECHO ...........Deployment Completed................ >> %Envlog% %Date% %Time%
        ECHO ...............................................

    Here are my issues. Let's say I am running the script against 3 servers; then for each instance:
    1. For all three servers I delete the destination folder, even though the destination folder is always the same; the script should only delete it in the first instance (when code is deployed to the first server). Better still, I would prefer the script to check whether the code in the source and the destination is the same, and delete the folder only when it differs.
    2. Based on 1:
       a) deleting web.config and renaming should only happen if the code in the destination is new
       b) Robocopy should not overwrite files if they are the same; I think there is an /XO option to do that.
    Any idea how to achieve that? :) (A sketch of one approach follows below.)
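
    A hedged sketch of both asks, reusing the script's own variables (test against a throwaway folder first). Robocopy's /XO switch skips files whose destination copy is not older, and a list-only pass (/L) returns an exit code that says whether anything would change, which can gate the RMDIR:

        :: 1) dry-run comparison before deleting anything:
        ::    exit code 0 = trees identical, 1-7 = differences, 8+ = errors
        ROBOCOPY.EXE %Source% %Destination% *.* /S /L /NJH /NJS /NP >nul
        IF ERRORLEVEL 8 (
            ECHO Robocopy comparison failed, not deleting anything
            GOTO AfterDelete
        )
        IF ERRORLEVEL 1 (
            ECHO Destination differs from source, refreshing %Folder%
            RMDIR %Destination% /s /q
        ) ELSE (
            ECHO Destination already matches source, skipping delete
        )
        :AfterDelete
        :: 2) /XO then copies only files that are newer in the source, so a
        ::    second server pass does not resend identical files
        START /WAIT /MIN ROBOCOPY.EXE %Source% %Destination% *.* /S /XO /NP /R:3 /W:5 /LOG:"Logs\Robo%AppName%%ENV%.log"

    The same IF ERRORLEVEL 1 test can also gate the web.config delete/rename block, so the config swap only happens when new code actually landed.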

    Read the article

  • why is my centos, nginx, php, mysql server using so much memory

    - by kb2tfa
    I'm not sure where the problem lies. I have: CentOS 6, MySQL, PHP, php-fpm, and WordPress (1 site). This is a dedicated server I'm learning on. As soon as I hit the web URL, memory usage slammed to 125% of a 512MB server, and I had to upgrade to 1GB, so I'm not sure where the problem lies. I thought that by switching from Apache to nginx I would have more memory free, but I'm still stuck with the problem. Before launching the site, with nginx and mysqld running, the server sat at about 5% memory. I renamed "my-small.cnf" to my.cnf and put it in /etc where the original was, but that seems not to have done it. After looking at my top results, I'm starting to think it may be php-fpm eating my memory, but I'm not sure. Is php-fpm the preferred way, or is there something better to use? I read about possible memory leaks in php-fpm. Here is what I have in php-fpm.conf:

        ;;;;;;;;;;;;;;;;;;;;;
        ; FPM Configuration ;
        ;;;;;;;;;;;;;;;;;;;;;

        ; All relative paths in this configuration file are relative to PHP's install
        ; prefix.

        ; Include one or more files. If glob(3) exists, it is used to include a bunch of
        ; files from a glob(3) pattern. This directive can be used everywhere in the
        ; file.
        include=/etc/php-fpm.d/*.conf

        ;;;;;;;;;;;;;;;;;;
        ; Global Options ;
        ;;;;;;;;;;;;;;;;;;

        [global]
        ; Pid file
        ; Default Value: none
        pid = /var/run/php-fpm/php-fpm.pid

        ; Error log file
        ; Default Value: /var/log/php-fpm.log
        error_log = /var/log/php-fpm/error.log

        ; Log level
        ; Possible Values: alert, error, warning, notice, debug
        ; Default Value: notice
        ;log_level = notice

        ; If this number of child processes exit with SIGSEGV or SIGBUS within the time
        ; interval set by emergency_restart_interval then FPM will restart. A value
        ; of '0' means 'Off'.
        ; Default Value: 0
        ;emergency_restart_threshold = 0

        ; Interval of time used by emergency_restart_interval to determine when
        ; a graceful restart will be initiated. This can be useful to work around
        ; accidental corruptions in an accelerator's shared memory.
        ; Available Units: s(econds), m(inutes), h(ours), or d(ays)
        ; Default Unit: seconds
        ; Default Value: 0
        ;emergency_restart_interval = 0

        ; Time limit for child processes to wait for a reaction on signals from master.
        ; Available units: s(econds), m(inutes), h(ours), or d(ays)
        ; Default Unit: seconds
        ; Default Value: 0
        ;process_control_timeout = 0

        ; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging.
        ; Default Value: yes
        ;daemonize = yes

        ;;;;;;;;;;;;;;;;;;;;
        ; Pool Definitions ;
        ;;;;;;;;;;;;;;;;;;;;

        ; See /etc/php-fpm.d/*.conf

    end php-fpm.conf
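
    Note that the main php-fpm.conf above contains no process-manager settings; on a stock CentOS layout the pool lives in /etc/php-fpm.d/www.conf, and resident memory is roughly pm.max_children multiplied by the size of one PHP worker. A hedged pool sketch for a 1GB box (the numbers are illustrative guesses, and dynamic is used because CentOS 6 ships php-fpm 5.3.3, which predates the ondemand manager):

        ; /etc/php-fpm.d/www.conf (assumed path), illustrative values
        pm = dynamic
        pm.max_children = 8         ; hard cap on simultaneous PHP workers
        pm.start_servers = 2
        pm.min_spare_servers = 1
        pm.max_spare_servers = 3
        pm.max_requests = 500       ; recycle workers to contain slow leaks

    With WordPress each worker commonly sits in the tens of megabytes, so the cap is what keeps a traffic burst from spawning enough workers to exhaust the box; my-small.cnf already keeps mysqld's buffers modest, which points at the workers as the likely consumer here.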

    Read the article

  • Exchange Server 2007 Send and Receive Connectors

    - by Mistiry
    I have gotten awesome advice from users on here for getting Exchange on Windows SBS 2008 set up. I think this is the final piece and I'm ready for roll-out! I need to set up Exchange so that it RECEIVES mail from our existing mail server as a forward [aliases on the existing mail server forward mail from [email protected] to [email protected]] (not using the POP3 Connector), and SENDS mail through that server as well (sends from [email protected] to [email protected] and then out to the world, showing in the headers as from [email protected], or at the absolute least having the reply-to set to this). Alternatively, as long as the .net email address doesn't show in the From and replies are directed to the .com account, email can go from Exchange to the outside world without routing through the existing mail server.

    - External Domain: domain.com
    - Internal Domain: domain.local
    - Internet Domain Name Set in SBS Console: domain.net

    When I go to http://remote.domain.net I get the Remote Web Workspace, and can log in to both SharePoint and OWA. I can send an email from OWA to a Gmail account; I receive it from [email protected], which is an alias of [email protected]. I cannot, however, send an email from OWA to ANY domain.com email addresses, and I am also not receiving any email to this Exchange account (except for NDRs). When I try sending an email to a domain.com account, here is the error (I had to replace all < and > with { and }):

        Delivery has failed to these recipients or distribution lists:

        [email protected]
        The recipient's e-mail address was not found in the recipient's e-mail
        system. Microsoft Exchange will not try to redeliver this message for you.
        Please check the e-mail address and try resending this message, or provide
        the following diagnostic text to your system administrator.

        Generating server: IFEXCHANGE.domain.local

        [email protected]
        #550 5.1.1 RESOLVER.ADR.RecipNotFound; not found ##

        Original message headers:

        Received: from IFEXCHANGE.domain.local ([fe80::4d34:abc5:f7fd:e51a]) by
         IFEXCHANGE.domain.local ([fe80::4d34:abc5:f7fd:e51a%10]) with mapi; Tue, 17
         Aug 2010 14:14:14 -0400
        Content-Type: application/ms-tnef; name="winmail.dat"
        Content-Transfer-Encoding: binary
        From: John Doe {[email protected]}
        To: "[email protected]" {[email protected]}
        Date: Tue, 17 Aug 2010 14:14:12 -0400
        Subject: asdf
        Thread-Topic: asdf
        Thread-Index: AQHLPjf+h6hA5MJ1JUu1WS4I4CiWeA==
        Message-ID: {E4E10393768D784D8760A51938BA456A029934BA30@IFEXCHANGE.domain.local}
        Accept-Language: en-US
        Content-Language: en-US
        X-MS-Has-Attach:
        X-MS-TNEF-Correlator: {E4E10393768D784D8760A51938BA456A029934BA30@IFEXCHANGE.domain.local}
        MIME-Version: 1.0

    I hope I explained the situation well enough for someone to be able to explain to me what I'm missing. If I could, I'd be putting a 10K bounty on this, but unfortunately I've got only 74 reputation (hey, I'm a newbie here!). I'm pretty sure the obvious "RecipNotFound" error is why it's not working; my question is how to resolve this. The email account exists and receives mail just fine, yet when I send to it from the Exchange server it fails.

    EDIT: In Organization Configuration > Hub Transport, the E-mail Address Policies tab has 2 entries. "Windows SBS Email Address Policy" is set up to: Include All Recipient Types, no conditions, and SMTP %[email protected]. "Default Policy" is set to: Include All Recipient Types, no conditions, and SMTP @domain.net.

    There are three Authoritative accepted domains:
    - domain.com
    - domain.local (Default)
    - domain.net

    The Remote Domains tab has two entries:
    - Default, with domain *
    - Windows SBS Company Web Domain, with domain companyweb
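
    Given those settings, one hedged reading (the commands below are standard Exchange 2007 Management Shell, using the question's placeholder names): because domain.com is an Authoritative accepted domain, Exchange expects every domain.com recipient to exist locally and returns RecipNotFound for the rest. Reclassifying it as an internal relay domain lets unresolved domain.com mail route back out to the existing server instead of generating an NDR:

        # relay unresolved domain.com recipients instead of NDR'ing them
        Set-AcceptedDomain -Identity "domain.com" -DomainType InternalRelay
        # confirm a Send Connector's address space covers domain.com
        Get-SendConnector | Format-List Name,AddressSpaces,SmartHosts

    The trade-off is that Exchange then accepts mail for any domain.com address and forwards the unknowns, so the existing server remains the authority for that namespace.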

    Read the article

< Previous Page | 209 210 211 212 213 214 215 216 217 218 219 220  | Next Page >