Search Results

Search found 16902 results on 677 pages for 'strange errors'.

Page 522/677 | < Previous Page | 518 519 520 521 522 523 524 525 526 527 528 529  | Next Page >

  • Sending mail results in "Sender address rejected: Domain not found"

    - by user1281413
    The setup: a WHM/cPanel CentOS 5 server running Exim and Courier for mail services, and BIND for domain name service. I recently moved servers. The old server ran a highly similar configuration, and all accounts were ported via WHM. However, the new server is unable to send, and sometimes receive, email. When I do get an error mail back, it states:

        450 4.1.8 : Sender address rejected: Domain not found

    Edit for clarity: this is the error response from remote mail servers, and numerous independent mail servers come back with the same error. (The email address is merely one valid example.) My first instinct, of course, was to check the domain records. However, k-t.org appears to have a valid record (including an MX record), even after running it through domain checks on a completely different server elsewhere and online. Note that the issue appears to happen with all the domains hosted on the server, not just k-t.org. I have also ensured that a PTR record was created.

    My Googling has only led me to people who had fairly basic DNS mistakes, but either I'm blind/dumb (possible, DNS is not my strong suit), or it's something a bit more arcane. I've run out of ideas, and I can't seem to find anything that could explain why servers are unable to resolve the domains. There doesn't seem to be anything missing or incorrect.
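
    A first check (a minimal sketch; k-t.org is the example domain from the question and 8.8.8.8 is just one public resolver) is to confirm the zone actually resolves from outside the new BIND server, since remote mail servers use their own resolvers:

        # Ask an outside resolver for the records a remote MTA would look up
        dig MX k-t.org @8.8.8.8
        dig A k-t.org @8.8.8.8
        # Walk the delegation from the roots to spot stale or lame NS records
        dig +trace k-t.org

    If +trace shows the registrar still delegating to the old server's nameservers, remote hosts will fail with exactly this "Domain not found" rejection even though queries against the new server look fine.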

    Read the article

  • How to migrate Fedora DS (389 DS) to a new machine?

    - by zengr
    Hello, I am trying to migrate a Fedora DS (1.2.2) instance to a new server (1.2.7.5). The process has been painful, to say the least. The old server (1.2.2) was itself an upgrade from an older Fedora DS setup, so it does not contain migrate-ds-admin.pl. I found a question about this, but the URL no longer opens. I am aware that I need to use migrate-ds-admin.pl, but I am clueless about how. I assume it works like this:

    1. Copy migrate-ds-admin.pl from the server which has 1.2.7 to the 1.2.2 server.
    2. Run migrate-ds-admin.pl to export the schema + LDIF from 1.2.2.
    3. Import the schema + LDIF into 1.2.7 using migrate-ds-admin.pl.

    If the above is true, then what parameters are needed for export and import? Note:

        ./ldif2db -n NetscapeRoot -i /root/NetscapeRoot.ldif
        ./ldif2db -n userRoot -i /root/userRoot.ldif

    The above two commands work like a charm, but since the (custom) schema is not migrated, I see a lot of errors during import.
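
    One workaround that avoids migrate-ds-admin.pl entirely (a hedged sketch; the instance names here are hypothetical, and it assumes both versions keep site-specific schema in the standard 99user.ldif file): copy the custom schema to the new server before running ldif2db, so the imported entries validate:

        # On the old 1.2.2 server: custom attributes/objectclasses live in 99user.ldif
        scp /etc/dirsrv/slapd-oldinstance/schema/99user.ldif newserver:/tmp/
        # On the new 1.2.7 server: drop it into the new instance's schema directory
        cp /tmp/99user.ldif /etc/dirsrv/slapd-newinstance/schema/
        service dirsrv restart
        # Now re-run the imports
        ./ldif2db -n userRoot -i /root/userRoot.ldif

    With the schema in place first, the "unknown attribute" class of import errors should disappear.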

    Read the article

  • SQL Server 2005 SE SP3 on Windows Server 2008 R2 x64 premature query disconnections

    - by southernpost
    New Dell PowerEdge R910: 4x8 Intel X7560, 192 GB RAM, hardware NUMA, local RAID, Broadcom NetXtreme II multiport NIC (unteamed), TCP Offload disabled, RSS disabled, NetDMA disabled, Hyperthreading disabled. SQL Server 2005 SE x64 SP3 on Windows Server 2008 R2 EE x64. No other apps on the server. Max Mem = 180 GB, Max DOP = 4. An existing Windows Server 2003 R2 EE x64 app server connects to the Dell through a firewall using SQL-authenticated logins.

    Symptoms: intermittent errors at the app server:

        A transport-level error has occurred when sending the request to the server.
        (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.)

    Findings: queries run from SSMS on another machine within the same domain as the SQL Server complete without error. SQLIO showed good performance. The Windows and SQL logs show no related messages. Microsoft reviewed a PssDiag trace and stated: "We are not seeing timeouts from SQL Side. The queries being run against the database are timing out within 9 seconds. This is a database connectivity error." and "we can also see from the AttnSeq column that we are also not seeing any Attentions from the SQL Side." Dell has confirmed that we are using the latest Broadcom drivers.
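
    Since the NIC-level offload settings are already disabled, one thing worth confirming (a hedged suggestion, not a confirmed fix; Broadcom NICs with chimney/TOE enabled are a known source of exactly this forcible-reset behavior) is that the offloads are also disabled at the OS level on both ends, which is separate from the adapter properties:

        rem Windows Server 2008 R2 side
        netsh int tcp show global
        netsh int tcp set global chimney=disabled
        netsh int tcp set global rss=disabled
        rem Windows Server 2003 side uses a registry value instead (Scalable Networking Pack)
        reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v EnableTCPChimney /t REG_DWORD /d 0 /f

    If the OS-level chimney state was still enabled on either box, a reboot after changing it is needed before re-testing.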

    Read the article

  • Joining new DC to AD - DNS name does not exist

    - by Andrew Connell
    I had a DC fail on me recently, and while trying to add a new one to my domain I'm sensing I might have other issues in the domain. I'm a dev at heart and know just enough about AD to be dangerous, so I'm looking for some assistance.

    My working DC is RIVERCITY-DC12. I'm trying to promote RIVERCITY-DC14 as a DC in the RIVERCITY domain, but when I run DCPROMO, at the NETWORK CREDENTIALS step where I point to the name of the domain (rivercity.local), I get "An AD DC for the domain rivercity.local cannot be contacted", and in the details I see "The error was: DNS name does not exist".

    Looking at RIVERCITY-DC12, I can see DNS is working: I've been able to query it from other machines in my domain, and no errors are reported in the DNS category within the Event Viewer. When I checked the FSMO roles, RIVERCITY-DC12 holds all listed roles. Not sure what I should do next or how to troubleshoot/investigate after searching around for a solution... ideas?

    Environment: Domain: rivercity (rivercity.local). Forest functional level: Windows 2000 (I'm more than happy to raise this). OS: Windows Server 2008; all servers are Windows Server 2008 R2 SP1 (fully patched).
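
    The usual culprit for this exact message (a hedged first check, not a guaranteed fix) is that the new box isn't using the existing DC for DNS, so it never finds the domain's SRV locator records. From RIVERCITY-DC14:

        rem Confirm the domain controller locator SRV record resolves
        nslookup -type=SRV _ldap._tcp.dc._msdcs.rivercity.local
        rem Quick end-to-end test of DC discovery
        nltest /dsgetdc:rivercity.local

    If the SRV lookup fails, check that DC14's NIC lists only RIVERCITY-DC12's address as its DNS server (not an ISP resolver), and that the _msdcs subdomain actually exists in the DNS zone on DC12; a failed DC can leave stale or missing locator records behind.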

    Read the article

  • vmware player won't run on CentOS due to missing /dev/vmmon, what could be the problem?

    - by Graphics Noob
    So I've tried installing VMware Player 3.1.4 and 3.1.3, and both times had the same problem: when I try to load a VM I get the error "Could not open /dev/vmmon". When I ls /dev/ I can see there is no "vmmon" device present. When I try running sudo /etc/init.d/vmware start I get the output:

        Starting VMware services:
           VMware USB Arbitrator                        [  OK  ]
           Virtual machine monitor                      [FAILED]
           Virtual machine communication interface      [  OK  ]
           VM communication interface socket family     [  OK  ]
           Blocking file system                         [  OK  ]
           Virtual ethernet                             [FAILED]

    which shows that the virtual machine monitor fails to load. I tried following the advice on this site and ran vmware-modconfig --console --install-all. I notice there are no errors during the compilation, but at the end I get the message:

        Starting VMware services:
           VMware USB Arbitrator                        [  OK  ]
           Virtual machine monitor                      [FAILED]
           Virtual machine communication interface      [  OK  ]
           VM communication interface socket family     [  OK  ]
           Blocking file system                         [  OK  ]
           Virtual ethernet                             [  OK  ]
        Unable to start services

    Out of curiosity I tried:

        sudo /sbin/insmod /lib/modules/2.6.18-238.9.1.el5xen/misc/vmmon.ko

    but got the error message:

        insmod: error inserting 'vmmon.ko': -1 Invalid module format

    I have a feeling this may be the root of the problem, but I don't know what could be causing it or how to fix it.
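
    "Invalid module format" from insmod almost always means the module was compiled against a different kernel than the one currently running (a hedged diagnosis based on the output above, where the module lives under a 2.6.18-238.9.1.el5xen tree). A quick sketch for checking and rebuilding on CentOS 5; deriving the package version from uname -r is an assumption:

        # Compare the running kernel against the tree the module was built for
        uname -r
        # On a Xen kernel, the matching headers come from kernel-xen-devel
        yum install kernel-xen-devel-$(uname -r | sed 's/xen$//')
        # Rebuild the VMware modules against the running kernel
        vmware-modconfig --console --install-all

    If uname -r reports a different (or non-Xen) kernel than 2.6.18-238.9.1.el5xen, that mismatch alone explains both the insmod failure and the FAILED monitor service.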

    Read the article

  • This web site needs a different Google Maps API key. A new key can be generated at http://code.googl

    - by MJI
    Apologies in advance if this is the wrong place to post. I tried searching for this issue, and all that seemed to come up were questions posted by people who had this issue with their own web pages. I couldn't find questions about it from a layperson's perspective.

    I'm not a developer. I have no domain, nor do I wish to have one at this time. I'm just a regular person who likes to upload photos to some photo-related sites, and my uploading process constantly gets interrupted by one of these annoying API errors. I get it at least twice: once when I click the page to upload, and again right after the upload finishes. It also pops up if I go to edit or delete a photo. This interrupts my browsing until I click OK.

    I just want a fix for the annoyance without having to register for a key. I tried before, and it required a web domain. I'd rather not create a domain and jump through such hoops just to fix this. Is there a solution for this problem that doesn't require registration? Another thing to note: I have used two computers. One shows the message pop-up and the other doesn't. What is different about the two computers?

    Read the article

  • csvde doesn't import users

    - by The Eighth Ero
    I have a small problem, as I'm a server-manager beginner. I installed a domain controller on Windows Server 2008 and created three OUs. Now I'm trying to add users to each OU via the csvde command, but the operation completes without importing anything, and without mentioning any errors:

        C:\csvde>csvde -i -f List.csv
        Connecting to "(null)"
        Logging in as current user using SSPI
        Importing directory from file "List.csv"
        Loading entries.
        0 entries modified successfully.

    Below is the CSV file I'm using to add 2 users to the "Offshoring1" OU; the domain name is "iado.lan":

        DN                                      objectClass  sAMAccountName  sn  givenName  userPrincipalNAme
        cn=BB NN,ou=Offshoring1,dc=iado,dc=lan  user         BB              NN  BB         [email protected]
        cn=II YY,ou=Offshoring1,dc=iado,dc=lan  user         II              YY  II         [email protected]

    and this is the CSV data as generated by Word 2011 on my Mac:

        DN;objectClass;sAMAccountName;sn;givenName;userPrincipalNAme
        cn=BB NN,ou=Offshoring1,dc=iado,dc=lan;user;BB;NN;BB;[email protected]
        cn=II YY,ou=Offshoring1,dc=iado,dc=lan;user;II;YY;II;[email protected]

    I do use the -k option to force the import, but still no success.
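
    One thing worth checking (a hedged observation based on the output above, where zero entries were even attempted): csvde expects comma-separated values with a comma-delimited header row, and the Word-generated file uses semicolons, so each whole line is likely being read as a single unrecognized column. A minimal file in the format csvde parses would look like this (the UPN column keeps the question's redacted placeholder):

        DN,objectClass,sAMAccountName,sn,givenName,userPrincipalName
        "cn=BB NN,ou=Offshoring1,dc=iado,dc=lan",user,BB,NN,BB,[email protected]
        "cn=II YY,ou=Offshoring1,dc=iado,dc=lan",user,II,YY,II,[email protected]

    The DN is quoted because it contains commas itself. Re-running csvde -i -f List.csv -v against a comma-delimited file should then report each entry as it is processed.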

    Read the article

  • Varnish does not start properly (crashes after startup) with no error messages

    - by Matthew Savage
    I am running Varnish (2.0.4 from the Ubuntu unstable apt repository, though I have also used the standard repository) in a test environment (virtual machines) on Ubuntu 9.10, soon to be 10.04. When I have a working configuration and the server starts successfully, everything seems fine. However if, for whatever reason, I stop and then restart the varnish daemon, it doesn't always start up properly, and there are no errors going into syslog or messages to indicate what might be wrong. If I run varnish in debug mode (-d) and issue start when prompted, then 7 times out of ten it will run, but occasionally it will just shut down 'silently'. My startup command is (the $1 allows me to pass -d to the script this lives in):

        varnishd -a :80 $1 \
            -T 127.0.0.1:6082 \
            -s malloc,1GB \
            -f /home/deploy/mysite.vcl \
            -u deploy \
            -g deploy \
            -p obj_workspace=4096 \
            -p sess_workspace=262144 \
            -p listen_depth=2048 \
            -p overflow_max=2000 \
            -p ping_interval=2 \
            -p log_hashstring=off \
            -h classic,5000009 \
            -p thread_pool_max=1000 \
            -p lru_interval=60 \
            -p esi_syntax=0x00000003 \
            -p sess_timeout=10 \
            -p thread_pools=1 \
            -p thread_pool_min=100 \
            -p shm_workspace=32768 \
            -p thread_pool_add_delay=1

    and the VCL looks like this:

        # nginx/passenger server, HTTP:81
        backend default {
            .host = "127.0.0.1";
            .port = "81";
        }

        sub vcl_recv {
            # Don't cache the /useradmin or /admin path
            if (req.url ~ "^/(useradmin|admin|session|sessions|login|members|logout|forgot_password)") {
                pipe;
            }
            # If cache is 'regenerating' then allow for old cache to be served
            set req.grace = 2m;
            # Forward to cache lookup
            lookup;
        }

        # This should be obvious
        sub vcl_hit {
            deliver;
        }

        sub vcl_fetch {
            # See link #16, allow for old cache serving
            set obj.grace = 2m;
            if (req.url ~ "\.(png|gif|jpg|swf|css|js)$") {
                deliver;
            }
            remove obj.http.Set-Cookie;
            remove obj.http.Etag;
            set obj.http.Cache-Control = "no-cache";
            set obj.ttl = 7d;
            deliver;
        }

    Any suggestions would be greatly appreciated. This is driving me absolutely crazy, especially because it's such an inconsistent behaviour.
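
    Two low-effort ways to catch what the 'silent' exits are hiding (a hedged sketch; varnishd and varnishlog are the stock 2.0 tools, and the VCL path comes from the config above):

        # Compile the VCL standalone; syntax problems print to stderr instead of vanishing
        varnishd -C -f /home/deploy/mysite.vcl > /dev/null
        # While reproducing the failed start, watch the shared-memory log for the dying child
        varnishlog
        # A classic cause on restart: something still holding port 80 or the old child lingering
        netstat -tlnp | grep :80

    If the management process starts but the child dies immediately, varnishlog usually shows the child's last words even when syslog stays empty.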

    Read the article

  • Problems Installing slapd On Ubuntu Server 11.10

    - by Zach Dziura
    I know that there's a Ubuntu-specific Stack Exchange website, but I thought I'd ask here because it's a server-specific question. If I'm wrong in my logic... well, you people are better at this than I am! O=) On with the show!

    I'm in the process of installing Oracle Database 11g R2 Standard Edition onto Ubuntu Server 11.10. I found a guide on the Oracle Support Forums that walks you through the process fairly easily. Unfortunately, I'm running into issues installing one particular dependency: slapd. When I go to install it, I get this error message:

        (Reading database ... 64726 files and directories currently installed.)
        Unpacking slapd (from .../slapd_2.4.25-1.1ubuntu4.1_amd64.deb) ...
        Processing triggers for man-db ...
        Processing triggers for ufw ...
        Processing triggers for ureadahead ...
        Setting up slapd (2.4.25-1.1ubuntu4.1) ...
        Usage: slappasswd [options]
          -c format     crypt(3) salt format
          -g            generate random password
          -h hash       password scheme
          -n            omit trailing newline
          -s secret     new password
          -u            generate RFC2307 values (default)
          -v            increase verbosity
          -T file       read file for new password
        Creating initial configuration...
        Loading the initial configuration from the ldif file () failed with
        the following error while running slapadd:
        str2entry: invalid value for attributeType olcRootPW #0 (syntax 1.3.6.1.4.1.1466.115.121.1.15)
        slapadd: could not parse entry (line=1051)
        dpkg: error processing slapd (--configure):
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         slapd
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    After many Google searches and much forum trolling, I have yet to find a definitive answer as to what's going wrong. The error messages seem straightforward enough, but I have no idea how to debug this. Can anyone offer some assistance? Again, if I'm asking in the wrong place, I apologize. If I'm indeed asking properly, then thank you for any and all help!
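
    The slappasswd usage dump in the middle of that output suggests the package's postinst script called slappasswd with arguments it rejected, leaving an empty olcRootPW value that slapadd then refuses to parse (a hedged reading of the log, not a confirmed bug). A common way out is to clear the half-configured state and reconfigure with a simple temporary admin password:

        # Remove the broken half-install, including stale config (destroys any slapd data)
        sudo apt-get purge slapd
        sudo rm -rf /etc/ldap/slapd.d /var/lib/ldap
        # Reinstall; when prompted, try a password without shell-special characters
        sudo apt-get install slapd
        sudo dpkg-reconfigure slapd

    If the original admin password contained quotes or other characters the maintainer script mishandles, a plain alphanumeric password (changeable later) often gets the configuration step past slapadd.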

    Read the article

  • Pages load fine in the browser, but a 404 Not Found is reported during the GET on all pages except the index

    - by user885983
    I believe this question is more suited to Server Fault (please correct me if not). This issue appears very similar to this question (except there are no 301 Moved Permanently responses for any pages). The domain is yorkshirebadges.co.uk. For example, loading yorkshirebadges.co.uk or yorkshirebadges.co.uk/index.php reports no 404s during network inspection, but every other page (/contact.php, /products.php) reports a Not Found.

    mod_rewrite is being used on the site. I checked it out but didn't see any obvious errors; it's included below for reference:

        RewriteEngine on
        RewriteRule ^store/material/([^/\.]+)/price/?([^/\.]+)?$ products.php?prodType=$1&price=$2
        RewriteRule ^store/price/?([^/\.]+)?$ products.php?price=$1;
        RewriteRule ^store/material/?([^/\.]+)?$ products.php?prodType=$1
        RewriteRule ^store/([^/\.]+)/?$ products.php?prodCat=$1
        RewriteRule ^store/([^/\.]+)/price/([^/\.]+)$ products.php?prodCat=$1&price=$2
        RewriteRule ^store/Type/?([^/\.]+) products.php?prodType=$1
        RewriteRule ^store/([^/\.]+)/?([^/\.]+)?$ view-product-details.php?cat=$1&prodName=$2
        RewriteRule ^store/([^/\.]+)/material/?([^/\.]+)?$ products.php?prodCat=$1&prodType=$2
        RewriteRule analytics http://www.google.com/analytics

        <IfModule mod_suphp.c>
            suPHP_ConfigPath /home/yorkshir
            <Files php.ini>
                order allow,deny
                deny from all
            </Files>
        </IfModule>

    Chrome's network inspection (and Firebug on Firefox) report 404s on all pages except the index; the server is Apache 2. Really scratching my head on this one!
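
    To see exactly what status line and headers the server sends, independent of any browser behavior (a quick hedged check; the URLs are taken from the question):

        # Headers only: does the server itself really answer 404?
        curl -I http://yorkshirebadges.co.uk/contact.php
        curl -I http://yorkshirebadges.co.uk/index.php

    If the page body renders but the status is 404, a common culprit is a CMS or custom 404 handler that serves real page content with the wrong status code, rather than mod_rewrite itself.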

    Read the article

  • Active Directory problems while trying to perform compare operation

    - by Alex
    I have CentOS 5.5 with Apache 2.2 and SVN installed. I also have Windows 2003 R2 with Active Directory. I'm trying to authorize users via AD, so each user has access to a repo if he is a member of the corresponding group in AD. Here is my Apache config:

        LoadModule dav_svn_module modules/mod_dav_svn.so
        LoadModule authz_svn_module modules/mod_authz_svn.so

        LDAPVerifyServerCert off
        ServerName svn.mydomain.com
        DocumentRoot /var/www/svn.mydomain.com/htdocs
        RewriteEngine On

        <Location />
            AuthType basic
            AuthBasicProvider ldap
            AuthzLDAPAuthoritative on
            AuthLDAPURL ldaps://comp1.mydomain.com:636/DC=mydomain,DC=com?sAMAccountName?sub?(objectClass=*)
            AuthLDAPBindDN [email protected]
            AuthLDAPBindPassword binduserpassword
        </Location>

        <Location /repos/test>
            DAV svn
            SVNPath /var/svn/repos/test
            AuthName "SVN repository for test"
            Require ldap-group CN=test,CN=ProjectGroups,DC=mydomain,DC=com
        </Location>

    When I use "Require valid-user" everything goes fine, and "Require ldap-user" also works. But as soon as I use "Require ldap-group", authorization fails. There are no errors in the Apache logs, but Active Directory shows the following:

        Event Type: Information
        Event Source: NTDS LDAP
        Event Category: LDAP Interface
        Event ID: 1138
        Date: 10/9/2010  Time: 1:28:52 PM
        User: MYDOMAIN\binduser
        Computer: COMP1
        Description: Internal event: Function ldap_compare entered.

        Event Type: Error
        Event Source: NTDS General
        Event Category: Internal Processing
        Event ID: 1481
        Date: 10/9/2010  Time: 1:28:52 PM
        User: MYDOMAIN\binduser
        Computer: COMP1
        Description: Internal error: The operation on the object failed.
        Additional Data
        Error value: 2
        0000208D: NameErr: DSID-031001CD, problem 2001 (NO_OBJECT), data 0, best match of: 'DC=mydomain,DC=com'

    I'm confused by this problem. What am I doing wrong?
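
    The NO_OBJECT result with a best match of only 'DC=mydomain,DC=com' hints that the group DN in the Require line doesn't exist as written (a hedged reading; if ProjectGroups is an organizational unit, its RDN would be OU=, not CN=). You can verify the exact DN from the CentOS box with OpenLDAP's client tools, reusing the bind credentials from the config above:

        # Search the whole domain for a group named 'test' and print its real DN
        LDAPTLS_REQCERT=never ldapsearch -x \
            -H ldaps://comp1.mydomain.com:636 \
            -D "[email protected]" -w binduserpassword \
            -b "DC=mydomain,DC=com" "(&(objectClass=group)(cn=test))" dn member

    Whatever DN comes back is the string the Require ldap-group directive needs, character for character.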

    Read the article

  • Broken Python installation on CentOS 5.8

    - by Beckett
    I have already searched for a solution to my problem via Google and Stack Overflow's search facility, but haven't found anything specifically related to it. Here's the problem: I needed Python 2.7.3 on a CentOS 5.8 machine which has only Python 2.4.3 preinstalled. There's no suitable version in its repositories, and I can't upgrade the installed version, so I decided to build Python from source. But I made a mistake: instead of make altinstall I did make install, thus changing the default Python of the current installation. It was only afterwards that I found the article "How to install Python 2.7.3 on CentOS 6.2"; I guess 5.8 and 6.2 aren't different to the extent that the article is inapplicable.

    After installing the new Python version I installed pip, but once I tried to invoke it, I got a "No module named pkg_resources" error. To solve this issue I installed setuptools from the repository, but that only led to another error: "Distribution Not Found". My final step was to follow the guide I linked to, but I was unable to perform the last step: the easy_install-2.7 virtualenv command threw "-bash: /usr/local/bin/easy_install-2.7: .: bad interpreter: Permission denied". Now when I try to invoke pip or pip-2.7, both commands raise the same error with different binary names after "-bash:".

    Is there any way to fix this problem, so I can install the new Python version (2.7.3) alongside the preinstalled one (2.4.3) according to the guide? Any help will be appreciated. P.S.: yum is working fine, although it needs Python to function, so I hope the damage I unknowingly caused isn't very severe. Also, I'm not a native English speaker, so I apologize for possible occasional grammatical and/or spelling errors.
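
    A recovery path that usually works here (a hedged sketch for CentOS 5, assuming make install went to the default /usr/local prefix and the source tree is still around): restore the system Python that yum depends on, then redo the build as an altinstall so the two versions coexist:

        # make install with the default prefix puts a 'python' in /usr/local/bin,
        # which shadows the system /usr/bin/python that yum uses
        sudo rm -f /usr/local/bin/python
        python -V                 # should report 2.4.3 again
        # Rebuild 2.7.3 side by side; altinstall creates only python2.7, no 'python'
        cd Python-2.7.3
        ./configure
        make
        sudo make altinstall
        python2.7 -V              # should report 2.7.3

    With the shadowing binary gone, the "bad interpreter" errors from pip/easy_install scripts (whose shebangs now point at a consistent interpreter) typically clear up after reinstalling setuptools and pip under python2.7.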

    Read the article

  • Mavericks permission issues with Windows Server deduplicated shares

    - by dmohlmaster
    We have a number of 10.9-10.9.3 (Mavericks) machines installed throughout our facility. Much of the user content is pulled from shares stored on our Windows Server 2012 file servers with deduplication enabled. I have found that files newly written or not yet optimized can be accessed without issue: read, written, modified, etc. Once a file gets optimized/deduplicated and Windows adds the P and L attributes (sparse and symlink), the Macs running Mavericks begin to have access issues.

    Once the files get deduplicated, users begin receiving read-access errors when copying files (see error 1 below). This happens when copying to folders within the current folder tree or copying somewhere on the local system. If you 'stop' the copy operation and retry a few more times, it may eventually work for the specific instance but fail again later. I am, however, able to copy these files without issue via the terminal. Other systems running 10.7 do not experience the same issues and are able to access file-server resources without issue. Many of the systems having issues are newer and thus can't be downgraded to 10.8 or 10.7. I have tried Finder replacements such as Path Finder, but the results are the same.

    I know this is at least similar to issues many Mac users are already experiencing and posting about, but I haven't seen it directly linked to deduplication and the attributes written by Windows Server. Has anyone seen this issue? Have any solutions been found?

    Error 1, when copying files after the P and L attributes have been set by deduplication:

        One or more items can't be copied to "Folder" because you don't have permissions to read them.

    Via system.log, I am also seeing the following error when accessing these deduplicated file shares (the reparse point tag listed below is IO_REPARSE_TAG_DEDUP):

        smbfs_nget: filename.ext - unknown reparse point tag 0x80000013
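
    One hedged check from the Mac side: Mavericks switched the default SMB client to SMB2, and the smbfs_nget complaint above comes from that client stumbling on the dedup reparse tag. Verifying the negotiated protocol, and trying an SMB1 mount, narrows down whether the new client is the trigger (the share path reuses the example from the mount output style above):

        # Show the SMB dialect each mounted share negotiated (10.9 and later)
        smbutil statshares -a
        # Remount forcing SMB1 by using the cifs:// scheme instead of smb://
        open 'cifs://srv1/UserShares/jrisk'

    If the share behaves reliably over cifs://, the problem sits in the SMB2 path of the 10.9 client rather than in actual permissions.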

    Read the article

  • MRTG Monitoring of Disk

    - by Antitribu
    I am trying to monitor disk usage over SNMP using MRTG on CentOS 5.2. I'm open to any suggestions as to the best way to achieve this (I would also like to do other metrics, like CPU). Please don't assume I know anything about MRTG. I am using the following config:

        LoadMIBs: /usr/share/snmp/mibs/UCD-SNMP-MIB.txt,/usr/share/snmp/mibs/TCP-MIB.txt
        workdir: /var/www/html/mrtg/temp/
        #
        # Disk Usage Monitoring
        #
        Target[servername.]: dskPercent.0&dskPercent.0:[email protected]
        Title[servername.]: / on servername
        routers.cgi*Desc[servername.]: / on servername
        routers.cgi*ShortDesc[servername.]: /
        MaxBytes[servername.]: 100
        AbsMax[servername.]: 100
        Options[servername.]: growright,nopercent,gauge
        YLegend[servername.]: used disk space
        ShortLegend[servername.]: % used
        Legend1[servername.]: usage
        Legend2[servername.]: usage
        Legend3[servername.]: peak usage
        Legend4[servername.]: peak usage
        LegendI[servername.]: usage
        LegendO[servername.]: usage
        routers.cgi*Icon[servername.]: disk-sm.gif
        routers.cgi*Options[servername.]: noo,nomax,noabsmax
        Unscaled[servername.]: dwmy

    I receive the errors:

        Unknown SNMP var dskPercent.0 at /usr/bin/mrtg line 2035
        Unknown SNMP var dskPercent.0 at /usr/bin/mrtg line 2035

    From forum surfing etc. the suggestion is to use the fully qualified OIDs, but I'd like to avoid this (for readability). So essentially I'm wondering where I can find a MIB file compatible with MRTG for it to reference, or a working config file.
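
    Two things worth checking before resorting to raw OIDs (a hedged sketch; 'community' and 'servername' are placeholders matching the redacted Target above): dskPercent is indexed from 1, not 0, and it only exists if the agent's snmpd.conf defines the disk. Both can be confirmed with snmpwalk from the MRTG box:

        # Does the agent expose the UCD disk table at all?
        snmpwalk -v 2c -c community servername UCD-SNMP-MIB::dskTable
        # If that's empty, define the disk on the monitored host and restart snmpd:
        #   /etc/snmp/snmpd.conf:   disk / 10000
        service snmpd restart

    If the walk then shows dskPercent.1, changing the Target to dskPercent.1&dskPercent.1:[email protected] should clear the "Unknown SNMP var" error.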

    Read the article

  • Using IIS6 to run a kill-process script; the executable hangs

    - by David
    I'm using the following code (and have tried many variations) in a web page that is supposed to kill a process on the server:

        Process scriptProc = new Process();
        SecureString password = new SecureString();
        password.AppendChar('p');
        password.AppendChar('s');
        password.AppendChar('s');
        password.AppendChar('w');
        password.AppendChar('d');
        scriptProc.StartInfo.UserName = "mylocaluser";
        scriptProc.StartInfo.Password = password;
        scriptProc.StartInfo.FileName = @"C:\WINDOWS\System32\WScript.exe";
        scriptProc.StartInfo.Arguments = @"c:\windows\system32\killMyApp.vbs";
        scriptProc.StartInfo.UseShellExecute = false;
        scriptProc.Start();
        scriptProc.WaitForExit();
        scriptProc.Close();

    The VBS file is supposed to kill a w3wp.exe process, but it never works. There are no errors in the application log. It works locally. I noticed WScript.exe is in Task Manager every time I run the page, and never goes away. The WScript.exe process (and I tried others, such as psexec.exe) is being run as a local user with admin rights (and I tried other types of users, including domain admins) when run from IIS, but it works when run from the command line on the server.
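
    One hedged tweak that often unsticks exactly this symptom (WScript.exe lingering forever): when launched from a service under alternate credentials there is no interactive desktop, so any prompt or error dialog WScript tries to show hangs invisibly. Running the script with the console host in batch mode suppresses all UI:

        rem //B = batch mode (no prompts), so a hidden dialog can't hang the process
        C:\WINDOWS\System32\cscript.exe //B //NoLogo c:\windows\system32\killMyApp.vbs

    In the C# above that means setting FileName to cscript.exe and Arguments to "//B //NoLogo c:\windows\system32\killMyApp.vbs". Note also that killing w3wp.exe from a page served by IIS risks killing the very worker process that is running the request.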

    Read the article

  • Outlook / Gmail 'too many simultaneous connections' error

    - by sam
    I'm setting up Outlook for Mac and trying to add a Google Apps for Business email account (Gmail). I've set it up correctly (the same details worked in Mac Mail), but I keep getting one of two errors: either the 'too many simultaneous connections' error from the title, or a prompt asking for the username and password again. Just to confirm, the username and password are correct, although when I go into Tools > Accounts and look in the password field for that account, it's blank. If I just click cancel on the popup asking for my username and password, it continues to get mail in the background for about 30 seconds before asking for the password again, or showing the above error, which I can click 'yes' to, and again it will get the mail. But after 30 seconds it does the same thing.

    I've got two other accounts set up fine: a Horde account (hosted webmail using POP3) and an iCloud .me account running on IMAP. What might be causing this, and how can I remedy it? A bit more background: the machine is a MacBook Pro running Mac OS X v10.7 (Lion). Update 2013-11-02: I've updated Outlook to SP3, but I still get the same error.

    Read the article

  • EC2 configuration for medium load service on Django

    - by Luberg
    I have created a very basic Django application which saves an email address to the database (a coming-soon page for a startup). I launched a t1.micro instance to find out what load it can handle, running nginx + FastCGI from Django + SQLite/Postgres (I tried both). A blitz.io test gave me a pretty unhappy result (just 100 users within 1 minute):

        This rush generated 542 successful hits in 1.0 min and we transferred 809.01 KB of data
        in and out of your app. The average hit rate of 8.81/second translates to about 761,612
        hits/day. You got bigger problems though: 87.28% of the users during this rush
        experienced timeouts or errors!

    I tried putting Varnish in front, disabled debug mode in Django, and started FastCGI in threaded mode; nothing helps. This is not going to be a super-high-load page, just a coming-soon page to save subscribers' email addresses, but it should handle at least 500-1000 simultaneous users at peak. I believe t1.micro is far too small for that, but I have also tried a small instance with no better result. Please let me know: should I use something other than Amazon EC2, pick something bigger than t1.micro, or is this definitely a configuration issue?
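
    Before changing instance types, it can help to reproduce the load on the box itself and watch where requests queue up (a hedged sketch using ApacheBench; the URL is a placeholder for the coming-soon page):

        # 1000 requests, 100 concurrent, against the signup page
        ab -n 1000 -c 100 http://your-ec2-host/
        # In a second terminal, watch CPU steal time during the run
        vmstat 1

    A high 'st' (steal) column in vmstat during the run points at t1.micro's burst throttling rather than the nginx/FastCGI stack, which would match the pattern of a test starting fine and then drowning in timeouts.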

    Read the article

  • Looking for way to log process terminations on OS X (Mac)

    - by Stan Sieler
    I'm looking for a way to log all process terminations on my Mac (OS X 10.6.8), recording the pid, timestamp, and process name. I've implemented something similar for HP-UX, but it required a kernel-level driver and intercepting several variations of "exit()" (the normal one, and the one invoked on behalf of a process while it's aborting).

    Why do I want the info? I've been seeing messages in my system log (dmesg) like:

        CODE SIGNING: cs_invalid_page(0x1000): p=91550[GoogleSoftwareUp] clearing CS_VALID
        CODE SIGNING: cs_invalid_page(0x1000): p=92088[GoogleSoftwareUp] clearing CS_VALID

    Although dmesg lacks timestamps, Applications > Utilities > Console : Database : all : search for CS_VALID shows that the message appears about once every 58.5 minutes. I suspect the number after "p=" is a process id (pid)... but for a process that has long since terminated by the time I see the message. So if there were a process-termination log mechanism that recorded the pid, the time of termination, the reason for termination, and the process name (at time of termination), that would probably allow me to determine what's causing those errors to be logged!

    (No, I'm not running Chrome on my Mac, and "ps -ef | grep -i goog" gets no hits either... I'm not consciously running any Google apps on the Mac.)

    thanks, Stan
    [email protected]
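
    On 10.6 this doesn't need a kernel driver: DTrace's proc provider fires a probe on every process exit (a hedged one-liner; the output formatting is mine, but proc:::exit is a documented probe on OS X):

        # Log pid, name, and wall-clock time of every process exit, system-wide
        sudo dtrace -qn 'proc:::exit {
            printf("%Y pid=%d name=%s\n", walltimestamp, pid, execname);
        }'

    Left running in a terminal (or with output redirected to a file), this should catch the short-lived GoogleSoftwareUp process, which from the truncated name looks very much like a Google Software Update helper launched periodically by launchd.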

    Read the article

  • Hard Disk Not Counting Reallocated Sectors

    - by MetaNova
    I have a drive that is reporting that the current pending sectors is "45". I have used badblocks to identify the sectors and I have been trying to write zeros to them with dd. From what I understand, when I attempt writing data directly to the bad sectors, it should trigger a reallocation, reducing current pending sectors by one and increasing the reallocated sector count. However, on this disk both Reallocated_Sector_Ct and Reallocated_Event_Count raw values are 0, and dd fails with I/O errors when I attempt to write zeros to the bad sectors. dd works fine, however, when I write to a good sector. # dd if=/dev/zero of=/dev/sdb bs=512 count=1 seek=217152 dd: error writing ‘/dev/sdb’: Input/output error Does this mean that my drive, in some way, has no spare sectors to be used for reallocation? Is my drive just in general a terrible person? (The drive isn't actually mine, I'm helping a friend out. They might have just gotten a cheap drive or something.) In case it is relevant, here is the output of smartctl -i : Model Family: Western Digital Caviar Green (AF) Device Model: WDC WD15EARS-00Z5B1 Serial Number: WD-WMAVU3027748 LU WWN Device Id: 5 0014ee 25998d213 Firmware Version: 80.00A80 User Capacity: 1,500,301,910,016 bytes [1.50 TB] Sector Size: 512 bytes logical/physical Device is: In smartctl database [for details use: -P show] ATA Version is: ATA8-ACS (minor revision not indicated) SATA Version is: SATA 2.6, 3.0 Gb/s Local Time is: Fri Oct 18 17:47:29 2013 CDT SMART support is: Available - device has SMART capability. SMART support is: Enabled UPDATE: I have run shred on the disk, which has caused Current_Pending_Sector to go to zero. However, Reallocated_Sector_Ct and Reallocated_Event_Count are still zero, and dd is now able to write data to the sectors it was previously unable to. This leads me with several other questions: Why aren't the reallocations being recored by the disk? I'm assuming the reallocation took place as I can now write data directly to the sector and couldn't before. Why did shred cause reallocation and not dd? Does the fact that shred writes random data instead of just zeros make a difference?

    Read the article

  • Suddenly getting lock timeouts with MySQL

    - by Marc Hughes
    We've got a web app hosted on Amazon Web Services. Our database is a Multi-AZ RDS MySQL server running 5.1.57, and 3-4 app servers talk to it. Today we started seeing a lot of errors along the lines of "Lock wait timeout exceeded; try restarting transaction"; almost 1% of POST requests are seeing this. There have been no modifications to the code running on the site, no schema changes, and no big spike in traffic. I've been looking at the processes running, and none seem out of control. I tried scaling our RDS instance from a small to a large, with no effect.

    Two days ago, Amazon had some outages. As part of the recovery from that, our RDS server and our app servers ended up in different availability zones, but all within the same region. But yesterday everything was fine, so I'm not convinced that's related. The lock timeouts occur in different types of requests and in different InnoDB tables. I have noticed that the number of open connections jumped when we started seeing problems, but that may be a symptom and not a cause. What are my next steps in debugging this?
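
    A reasonable next step (a hedged sketch; it needs only SQL access, so it works against RDS, and the endpoint and user are placeholders) is to catch the blocking transaction in the act rather than inferring it from the timeouts:

        # Run while the errors are occurring; the TRANSACTIONS section shows
        # which transaction holds the lock everyone else is waiting on
        mysql -h your-rds-endpoint -u admin -p -e 'SHOW ENGINE INNODB STATUS\G'
        # Long-running or idle-in-transaction sessions show up here
        mysql -h your-rds-endpoint -u admin -p -e 'SHOW FULL PROCESSLIST'

    If the status output shows one transaction idling with locks held, the post-outage AZ shuffle could be stretching transaction lifetimes just enough (cross-AZ round trips) for them to collide; InnoDB's innodb_lock_wait_timeout, 50 seconds by default, is only the messenger.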

    Read the article

  • Why does Windows Event Log stop logging events before maximum log size is reached?

    - by Tuure Laurinolli
    I have a service that produces a lot of event log output. Currently the event log is configured to overwrite old events as needed, to keep the log from ever getting full. We have also increased the event log size considerably (to about 600 MB). Recently the service started reporting errors to its clients, and the error message it was sending was "The event log file is full". How can this be, when the event log is configured to overwrite as necessary?

    In our hurry to get the service back up, we cleared the event log without saving its contents, but most likely it had not reached 600 MB yet, judging from the sizes of some earlier log dumps. There is also MS KB article 312571, which reports that a hotfix is available for a similar issue, but the configuration the fix applies to is not exactly the same as ours; specifically, the fix only applies if event logs are configured to never overwrite old events. I wonder if this has something to do with the fact that the log files apparently are memory-mapped. What happens if the system runs out of address space to map files to?
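
    If this is a Vista-or-later box (a hedged aside; on Windows 2000/2003, the era KB 312571 targets, the equivalent settings live in the registry under HKLM\SYSTEM\CurrentControlSet\Services\Eventlog), the effective size and retention settings can be read and set from the command line, which is worth comparing against what the GUI claims. "Application" below is just an example log name:

        rem Show configured max size, retention mode, and autobackup state
        wevtutil gl Application
        rem Example: raise the limit to 700 MB and overwrite as needed
        wevtutil sl Application /ms:734003200 /rt:false

    A mismatch between the GUI and what wevtutil reports would point at the service writing to a different log, or at a policy overriding the local setting.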

    Read the article

  • Help trying to figure out why IIS7 is crashing / locking up / denying connections

    - by Pure.Krome
    Hi folks, I've got a pretty busy website running on a single web front-end machine on W2K8 + IIS7. Every now and then (e.g. Monday at 3am, then a few days later in the early morning, then nothing for two weeks, etc.) the website stops responding to any client connections; no one can connect to it. I can remote desktop to the machine with no problems. I restart the app pool (the website is running in integrated mode): still nothing. I try to get a crash dump of the process (it's around 600 MB, maybe more), but that fails after about a minute of trying (and I have plenty of HD space). The only way to fix the issue is to manually stop the WWW service and then start it again. Stopping takes a while (a minute?) while starting is nearly instant.

    I'm at a loss to figure out what part of my code is causing this. At first I thought it might be a stack overflow caused by some error that goes to the error page, which in turn errors... rinse, repeat, boom. But I've had a look at the error page and it feels OK. So I'm hoping someone can help and say how I can correctly get a proper dump of the IIS process, so I can do some more autopsy on it. I would email Tess Ferrandez (the goddess of crash debugging) but I thought I'd try here before I spam her. Does anyone have any suggestions for how to start debugging this issue?
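
    For a hang (as opposed to a crash), Sysinternals ProcDump is usually the least painful way to capture a usable dump of the worker process (a hedged sketch; -ma is ProcDump's documented full-memory flag, and the PID placeholder is whatever the stuck w3wp.exe shows):

        rem Find the worker process ID
        tasklist /fi "imagename eq w3wp.exe"
        rem Full-memory dump of the hung worker, written where there's space
        procdump -ma <w3wp-pid> C:\dumps\w3wp_hung.dmp

    With the dump in hand, Tess Ferrandez's own debugging labs walk through loading it in WinDbg with SOS to inspect the managed threads, which is typically where a total hang like this shows its cause (e.g. all threads blocked on one lock).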

    Read the article

  • Ubuntu 10.04 bind9 local zone include files and apparmor

    - by Gilgongo
    Rather than putting all my zones in one named.conf.local file, I'd like to have them in groups that I can manage as separate files. So I've tried putting the following into named.conf.local:

        include "/home/zones/group1.conf";
        include "/home/zones/group2.conf";
        include "/home/zones/group3.conf";

    However, when I restart named, I see "permission denied" errors in the logs. Ubuntu uses AppArmor for BIND, so I also added the following in /etc/apparmor.d/usr.sbin.named:

        /home/zones/group1.conf r,
        /home/zones/group1.conf r,
        /home/zones/group1.conf r,

    Now when I restart named, all appears to be well, and zones are loaded (I think). However, a day or two later, I see my secondary name server complaining that the primary is telling it that it's not authoritative for those domains, and I have to put all the domains back into the named.conf.local file again. How can I get BIND9 to use include files in this way? I don't know much about AppArmor, so that may or may not be the issue here, but I've used include files this way on Debian without problems.
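
    Worth noting (a hedged observation on the profile lines exactly as quoted): all three AppArmor rules name group1.conf, so group2.conf and group3.conf are still unreadable. A glob covers the whole directory and survives adding new group files later:

        # /etc/apparmor.d/usr.sbin.named: allow named to read any zone-group file
        /home/zones/*.conf r,

        # Reload the profile, restart named, then confirm nothing is being denied
        sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.named
        sudo service bind9 restart
        grep -i denied /var/log/syslog | grep named

    If denials still appear, the zone data files referenced inside the group files need matching rules too, not just the .conf includes themselves.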

    Read the article

  • Windows 7 Install: No drives were found

    - by Albert Bori
    I was building a computer for my wife with an older SATA hard drive that I had lying around, and when attempting to do a new install of Windows 7 on it, the installer says: "No drives were found. Click Load Driver to provide a mass storage driver for installation."

    I ran the diskpart command list volume, and the drive showed up as "RAW". So I formatted it to NTFS, after which it showed up as a healthy drive in diskpart. I also ran check disk on it with no errors. The Windows 7 installer STILL can't find the drive. As far as BIOS settings go, I have tried "Native IDE", AHCI, and a combined AHCI/IDE mode (SATA slots 0-2 AHCI, 3-4 IDE). I tried all combinations... still "no drives were found". At this point I'm just scratching my head. Using the installation DOS window, I can see and talk to the drive just fine, but the installer just doesn't see it at all. I've even written folders and files to the drive, and it still "can't be seen". Any help would be great.

    Items of interest:
    Motherboard model: Gigabyte GA-A75M-UD2H - BIOS Version F5 (latest)
    Hard drive model: 80GB Seagate Barracuda 7200.7 ST380817AS (no other drives)
    Installing Windows 7 using a FAT32-formatted USB drive, which I've used for other installs
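
    Since the install environment can already talk to the disk, one hedged thing to try from that same command prompt (Shift+F10 during setup) is wiping the drive's partition structures entirely; leftover metadata on an old drive sometimes hides it from setup even though diskpart sees it. This destroys everything on the disk:

        diskpart
        list disk
        rem assuming the Seagate shows as Disk 0 in the list
        select disk 0
        rem clean removes all partition and boot-sector data from the drive
        clean
        create partition primary
        format fs=ntfs quick
        exit

    After the clean, go back to the installer's drive-selection screen and click Refresh. If the disk is still absent, pointing "Load Driver" at the AMD A75 SATA/AHCI driver from Gigabyte's support page for this board would be the next step.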

    Read the article

  • Multiple logins with pam_mount mean multiple (redundant) mounts ...

    - by Jamie
    I've configured pam_mount.so to automagically mount a CIFS share when users log in. The problem is that if a user logs in multiple times simultaneously, the mount command is repeated multiple times. So far this isn't a problem, but it's messy when you look at the output of the mount command:

        # mount
        /dev/sda1 on / type ext4 (rw,errors=remount-ro)
        proc on /proc type proc (rw,noexec,nosuid,nodev)
        none on /sys type sysfs (rw,noexec,nosuid,nodev)
        none on /sys/fs/fuse/connections type fusectl (rw)
        none on /sys/kernel/debug type debugfs (rw)
        none on /sys/kernel/security type securityfs (rw)
        none on /dev type devtmpfs (rw,mode=0755)
        none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
        none on /dev/shm type tmpfs (rw,nosuid,nodev)
        none on /var/run type tmpfs (rw,nosuid,mode=0755)
        none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
        none on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
        //srv1/UserShares/jrisk on /home/jrisk type cifs (rw,mand)
        //srv1/UserShares/jrisk on /home/jrisk type cifs (rw,mand)
        //srv1/UserShares/jrisk on /home/jrisk type cifs (rw,mand)

    I'm assuming I need to fiddle with either the pam.d/common-auth file or pam_mount.conf.xml to accomplish this. How can I instruct pam_mount.so to avoid duplicate mounts?
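
    A hedged workaround (this wraps the mount command rather than relying on any built-in pam_mount dedup option, and the wrapper path and argument layout are assumptions worth checking against man pam_mount.conf for your version): pam_mount.conf.xml lets you define the command run for each fstype, so a small script can skip the mount when the target is already mounted:

        #!/bin/sh
        # /usr/local/sbin/cifs-mount-once (hypothetical path):
        # skip the mount if the mountpoint is already in use.
        # Assumes the mountpoint is the final argument pam_mount passes.
        eval MNTPT=\${$#}
        mountpoint -q "$MNTPT" && exit 0
        exec mount -t cifs "$@"

    The <cifsmount> element in pam_mount.conf.xml would then point at the wrapper instead of mount. Some pam_mount releases already handle repeat logins via their own login counting, so checking the documentation first may make this unnecessary.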

    Read the article
