Search Results

Search found 21356 results on 855 pages for 'check digit'.


  • Everything on hard drive suddenly vanished without explanation, but the drive seems otherwise functional

    - by user160705
    Windows 7 Ultimate x64, custom-built desktop.

    I have a new desktop that I built a few months ago with a four-year-old WD hard drive and a two-year-old drive. I had set it up so that the newer drive had Windows and most of my files on it, while the older drive had my music library, some movies and games, and a backup of all of my documents.

    About a month ago, I installed some new case fans and, in the process, temporarily unplugged my hard drive for wire management (while the computer was off, of course - I took all the necessary precautions). I plugged it back in and didn't really think anything of it. At around that time, however, I noticed that my older hard drive wasn't showing up in Windows Explorer anymore, but I didn't really have time to check into it (I had just started college), and I'm finally getting a chance to now. That drive doesn't show up in Windows Explorer at all, but it does show up in Disk Management. That screen shows the following: http://puu.sh/17mMN

    Any idea what happened? Is there any way to recover my files? Thanks in advance for your help!

    EDIT: The music and games and stuff used to be on "Disk 1", the 465.71 GB that is now showing as unallocated space.
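
    One hedged recovery avenue, given that Disk Management shows the space as unallocated rather than RAW: the partition table entry was probably lost when the drive was re-detected, while the data itself may still be intact. TestDisk (from cgsecurity.org) scans a disk for lost partitions and can write a repaired table; the menu walk below is a sketch, and if the files are irreplaceable it is safer to image the drive first, since writing a wrong table makes recovery harder.

        REM Run TestDisk as administrator and walk the menus:
        REM   testdisk_win.exe -> select the 465 GB WD drive -> [Intel] partition table
        REM   -> Analyse -> Quick Search (then Deeper Search if nothing is found)
        REM If the old NTFS partition shows up, mark it P(rimary) and choose Write.
        testdisk_win.exe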


  • MS SQL - Problem running SQL Server Agent Job via service account credentials

    - by molecule
    There are 5 steps in this job. The first step runs an SSIS package from the package store; the second to fifth are file system tasks. We configured all steps to use Windows Authentication. Under Run As, we specified a user account which was created under Security > Credentials and SQL Server Agent > Proxies > SSIS Package Execution. The job runs without any problems under this user account.

    We then proceeded to configure the job to use a service account instead. The service account was likewise specified under Security > Credentials and SQL Server Agent > Proxies > SSIS Package Execution. The job now fails with this error:

        Executed as user: domain\serviceaccount. ....00 for 32-bit  Copyright (C) Microsoft Corp 1984-2005. All rights reserved.
        Started: 3:37:57 PM
        Error: 2010-03-09 15:37:57.95
            Code: 0xC0016016
            Source:
            Description: Failed to decrypt protected XML node "DTS:Password" with error 0x8009000B
            "Key not valid for use in specified state.". You may not be authorized to access this
            information. This error occurs when there is a cryptographic error. Verify that the
            correct key is available.
        End Error
        Error: 2010-03-09 15:38:01.19
            Code: 0xC0047062
            Source: Get CONT_VIEW_LADDER in latest 45days OracleFMDatabase [1]
            Description: System.Data.OracleClient.OracleException: ORA-01005: null password given;
            logon denied at System.Data.OracleClient.OracleException.Check(OciErrorHandle errorHandle,
            Int32 rc) at System.Data.OracleClient.OracleInternalConnection.OpenOnLocalTransaction(
            String userName, String password, String serverName, Boo... The package execution fa...
        The step failed.

    Based on some research, I opened the project in MS Visual Studio and changed the package's protection level from "EncryptSensitiveWithUserKey" to "DontSaveSensitive", but I still get the above error. I am new to this, so any help will be very much appreciated. Thanks in advance.
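
    The two errors are consistent with the ProtectionLevel settings tried so far: "EncryptSensitiveWithUserKey" protects DTS:Password with the authoring user's Windows profile key, so any other account (such as the service account) fails to decrypt it (0x8009000B), while "DontSaveSensitive" strips the password entirely, which then surfaces as ORA-01005 "null password given" unless the password is supplied at run time. A hedged sketch of one way through - re-saving with a package password; the path and password below are placeholders, and the copy in the package store must be redeployed as well, or the Agent keeps running the old user-key-encrypted version:

        REM After setting ProtectionLevel = EncryptSensitiveWithPassword and redeploying,
        REM supply the package password at execution time:
        dtexec /F "D:\packages\MyPackage.dtsx" /De "packagePassword"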


  • Getting at fsid under Linux? Or an alternate way of identifying filesystems?

    - by larsks
    In an environment with automounted home directories, where the same filesystem exported by a fileserver may be mounted multiple times on the client, I would like to be able to authoritatively identify whether two mountpoints are in fact the same filesystem. That is, if the remote server exports /home and the local client has:

        # mount
        fileserver:/home/l/lars on /home/lars type nfs (rw...)
        fileserver:/home/b/bob on /home/bob type nfs (rw...)

    I am looking for a way to identify that both /home/lars and /home/bob are in fact the same filesystem. In theory this is what the fsid member of the statvfs structure is for, but in all cases, for both local and remote filesystems, I am finding that the value of this member is 0. Is this some sort of client-side issue, or do most modern NFS servers simply decline to provide a useful fsid?

    The end goal of all of this is to robustly interpret the output from the quota command for NFS filesystems. For example, given the setup above, running quota as myself may return something like:

        Disk quotas for user lars (uid 6580):
             Filesystem                      blocks    quota    limit   grace    files      quota      limit   grace
             otherserver:/vol/home0/a/alice      12 52428800 52428800                4 4294967295 4294967295
             fileserver:/home/l/lars        9353032  9728000 10240000           124018          0          0

    ...the problem here being that there exists a quota for me on otherserver which is visible in the results of the quota command, even though my home directory is actually on a different device. My plan was to look up the fsid for each mountpoint listed in the quota output and check whether it matched the fsid associated with my home directory. It looks like this won't work, so... any suggestions?
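
    Given that fsid comes back as 0 here, a practical Linux-side fallback is to resolve each mountpoint to its fs_spec (the server:/export source) in /proc/mounts, then fold each server path onto the server's export list so /home/l/lars and /home/b/bob both map to the /home export. A minimal sketch under those assumptions (showmount must be able to query the servers, and the heuristic ignores corner cases like bind mounts):

        #!/usr/bin/env python
        import os
        import subprocess

        def mount_source(path):
            """Return the fs_spec (e.g. fileserver:/home/l/lars) of the mount containing path."""
            path = os.path.realpath(path)
            best = ("", "")
            with open("/proc/mounts") as mounts:
                for line in mounts:
                    spec, mountpoint = line.split()[:2]
                    # longest-prefix match so nested mounts resolve correctly
                    if (path == mountpoint or path.startswith(mountpoint.rstrip("/") + "/")) \
                            and len(mountpoint) > len(best[1]):
                        best = (spec, mountpoint)
            return best[0]

        def containing_export(host, path):
            """Map a mounted server path onto the server's export list (via showmount -e)."""
            out = subprocess.check_output(["showmount", "-e", host]).decode()
            exports = [l.split()[0] for l in out.splitlines()[1:] if l.strip()]
            hits = [e for e in exports if path == e or path.startswith(e.rstrip("/") + "/")]
            return max(hits, key=len) if hits else path

        def same_filesystem(a, b):
            host_a, _, path_a = mount_source(a).partition(":")
            host_b, _, path_b = mount_source(b).partition(":")
            return host_a == host_b and \
                containing_export(host_a, path_a) == containing_export(host_b, path_b)

        print(same_filesystem("/home/lars", "/home/bob"))

    The same (host, export) pair can then key the quota rows: a row whose source does not share your home directory's pair is the one to ignore.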


  • Remote SQL server connection failure

    - by Sevki
    I am trying to connect to my SQL Server 2008 Web instance and I'm failing horribly... I get error 26, and before you jump on me, I have already done the following:

    - Checked the spelling of the SQL Server instance name that is specified in the connection string.
    - Used the SQL Server Surface Area Configuration tool to enable SQL Server to accept remote connections over the TCP or named pipes protocols. (For more information about the SQL Server Surface Area Configuration tool, see Surface Area Configuration for Services and Connections.)
    - Made sure that the firewall on the server instance of SQL Server is configured to open ports for SQL Server and the SQL Server Browser port (UDP 1434).
    - Made sure that the SQL Server Browser service is started on the server.

    In addition to these, I have disabled the firewall completely and tried other ports. Nothing works; the same credentials work on the server but not on the client. This is the exact error message:

        A network-related or instance-specific error occurred while establishing a connection
        to SQL Server. The server was not found or was not accessible. Verify that the instance
        name is correct and that SQL Server is configured to allow remote connections.
        (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)
        (.Net SqlClient Data Provider)

    Can anybody help?
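
    Error 26 specifically means the client never got an instance-resolution answer from the SQL Server Browser (UDP 1434), so two hedged client-side checks can split the problem: ask whether UDP 1434 answers at all, and bypass the Browser by connecting straight to the instance's TCP port (server name and port below are placeholders; the real port is listed under SQL Server Configuration Manager > TCP/IP > IPAll):

        REM Does the Browser answer this client? (PortQry is a free Microsoft tool)
        portqry -n myserver -p udp -e 1434

        REM Bypass instance resolution entirely with an explicit host,port
        sqlcmd -S tcp:myserver,1433 -U myuser -P mypassword

    If the explicit host,port connection works, the instance itself is fine and something between client and server is eating UDP 1434 (a router or ISP, given the local firewall is already off).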


  • RPCSS kerberos issues on imaged Windows workstations

    - by sysadmin1138
    While doing some unrelated troubleshooting I came across a set of Event Log entries that have me concerned:

        Machine Name: labcomputer82
        Source: Security-Kerberos
        Event ID: 4
        Event Description: The Kerberos client received a KRB_AP_ERR_MODIFIED error from the
        server labcomputer143$. The target name used was RPCSS/imagemaster4.ad.domain.edu.
        This indicates that the target server failed to decrypt the ticket provided by the
        client. This can occur when the target server principal name (SPN) is registered on
        an account other than the account the target service is using. Please ensure that the
        target SPN is registered on, and only registered on, the account used by the server.
        This error can also happen when the target service is using a different password for
        the target service account than what the Kerberos Key Distribution Center (KDC) has
        for the target service account. Please ensure that the service on the server and the
        KDC are both updated to use the current password. If the server name is not fully
        qualified, and the target domain (AD.DOMAIN.EDU) is different from the client domain
        (AD.DOMAIN.EDU), check if there are identically named server accounts in these two
        domains, or use the fully-qualified name to identify the server.

    There are three machine names used in this message. It's generated on labcomputer82, which is attempting to talk to another lab workstation called labcomputer143, and the service in question (RPCSS) refers to the name of the machine that this machine was imaged from (and possibly also the machine labcomputer143 was imaged from; I'm not sure). The thing that has me raising both eyebrows is that the machine named labcomputer82 is attempting to use an SPN of RPCSS/imagemaster4.ad.domain.edu. The SPN attribute on the computer object in AD looks just fine; it has all the names it should have.

    Of the over 3,000 computer objects in our AD domain, somewhere around 1,700 of them are computer-lab seats that are frequently imaged. If we're doing something wrong, I'd like to know in time to get our procedures modified (and people retrained) for fall quarter. But if this is normal for imaged machines, I'll just continue ignoring these.
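
    One low-risk way to narrow this down is to ask AD which account actually holds the SPN being requested, and whether any SPNs are duplicated across those 1,700 imaged seats; setspn from the Server 2008-era tools supports both queries:

        REM Which account(s) have the SPN the client is building?
        setspn -Q RPCSS/imagemaster4.ad.domain.edu

        REM Search for duplicate SPNs across the forest
        setspn -X

    If the lab seats are requesting an SPN built from the image master's hostname, that would point at a stale name baked into the image (for example in DCOM/RPCSS configuration) rather than at the AD objects themselves, which the healthy-looking SPN attribute already suggests.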


  • Attempting to update Amazon Route53 using a script, but domain is not being updated

    - by ks78
    I have several Amazon EC2 instances, running Ubuntu 10.04, with which I'd like to use Amazon's Route53. I set up a script as described in Shlomo Swidler's article, but I'm still missing something. When the script runs, it doesn't return any output, which I initially assumed meant it ran correctly. However, when I check the DNS records using MyR53DNS, there are no entries for my instances.

    Here's my script:

        #!/bin/tcsh -f
        set root=`dirname $0`
        setenv EC2_HOME /usr/lib/ec2-api-tools
        setenv EC2_CERT /etc/cron.route53/ec2_x509_cert.pem
        setenv EC2_PRIVATE_KEY /etc/cron.route53/ec2_x509_private.pem
        setenv AWS_ACCESS_KEY_ID myaccesskeyid
        setenv AWS_SECRET_ACCESS_KEY mysecretaccesskey

        /user/bin/ec2-describe-instances | \
          perl -ne '/^INSTANCE\s+(i-\S+).*?(\S+\.amazonaws\.com)/ \
            and do { $dns = $2; print "$1 $dns\n" }; /^TAG.+\sShortName\s+(\S+)/ \
            and print "$1 $dns\n"' | \
          perl -ane 'print "$F[0] CNAME $F[1] --replace\n"' | \
          xargs -n 4 $/etc/cron.route53/cli53/cli53.py \
            rrcreate -x 60 mydomain.com

    Does anyone see a problem with this script? If it's not the script, what else could be preventing my Route53 domain from being updated? I am using Security Groups to IP-restrict the instances. I've tried opening port 53, but that didn't seem to have an effect. Is there another port that Route53 uses? I'd appreciate any help or guidance the ServerFault community can offer. Let me know if you need any further info.
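
    A pipeline that silently does nothing is easiest to corner by running each stage alone and checking what it prints. A hedged sketch - the paths are the script's own; note that stage 1 tests both spellings of the tools path, since the script says /user/bin while the EC2 API tools normally live in /usr/bin, and stage 3 checks what the shell makes of the stray $ in front of the cli53 path:

        # Stage 1: does the describe call run at all, from the path the script uses?
        ls -l /user/bin/ec2-describe-instances /usr/bin/ec2-describe-instances
        /usr/bin/ec2-describe-instances | head -5

        # Stage 2: what does the perl filtering actually hand to xargs?
        /usr/bin/ec2-describe-instances | \
          perl -ne '/^INSTANCE\s+(i-\S+).*?(\S+\.amazonaws\.com)/ and do { $dns = $2; print "$1 $dns\n" }; /^TAG.+\sShortName\s+(\S+)/ and print "$1 $dns\n"' | \
          perl -ane 'print "$F[0] CNAME $F[1] --replace\n"'

        # Stage 3: does cli53 work when handed one record by hand?
        /etc/cron.route53/cli53/cli53.py rrcreate -x 60 mydomain.com <the four fields from stage 2>

    Also, the Route 53 API is called over HTTPS (TCP 443), not port 53, so opening DNS ports in the security group shouldn't be relevant to the script's updates.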


  • Getting AWStats to work in Ubuntu 12.04

    - by koogee
    I'm new to Apache and I'm trying to set up AWStats on my Ubuntu 12.04 server. I've followed the guide in the Ubuntu docs (https://help.ubuntu.com/community/AWStats). I set it up according to the instructions, and AWStats is able to generate initial stats from the Apache log successfully. I placed the links to AWStats in the default virtual host file. However, when I try to open http://server-ip-address:8080/awstats/awstats.pl, I get:

        Error: SiteDomain parameter not defined in your config/domain file. You must edit it
        for using this version of AWStats.
        Setup ('/etc/awstats/awstats.conf' file, web server or permissions) may be wrong.
        Check config file, permissions and AWStats documentation (in 'docs' directory).

    Here is my /etc/apache2/sites-available/default file:

        <VirtualHost *:8080>
            ServerAdmin webmaster@localhost
            DocumentRoot /home/saad/www
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /home/saad/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride AuthConfig
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            Alias /awstatsclasses "/usr/share/awstats/lib/"
            Alias /awstats-icon "/usr/share/awstats/icon/"
            Alias /awstatscss "/usr/share/doc/awstats/examples/css"
            ScriptAlias /awstats/ /usr/lib/cgi-bin/
            Options ExecCGI -MultiViews +SymLinksIfOwnerMatch
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    The only three variables I edited in /etc/awstats/awstats.conf are:

        LogFile="/var/log/apache2/access.log"
        SiteDomain="server-name.noip.org"
        HostAliases="localhost 127.0.0.1 server-name.no-ip.org"

    The Apache server works fine and I'm able to access other pages stored on the server. Any guidance would be welcome.
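
    One thing worth knowing here: when awstats.pl is called without a config parameter, it derives the config name from the URL's hostname, so hitting it by raw IP can make it look for a config it can't match. A hedged check, using the names from the question:

        # Either name the config explicitly in the URL...
        http://server-ip-address:8080/awstats/awstats.pl?config=server-name.noip.org

        # ...and/or keep a matching per-site config file alongside the default one:
        /etc/awstats/awstats.server-name.noip.org.conf

    Also note that the config above spells the domain both "noip.org" (SiteDomain) and "no-ip.org" (HostAliases); AWStats will treat those as different names, so it is worth making them consistent.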


  • Apache reaching MaxClients and locking the server

    - by Rodrigo Sieiro
    Hi. I currently have an Apache2 server running with mpm-prefork and mod_php on an OpenVZ VPS with 512M real / 1024M burstable RAM (no swap). After running some tests, I found that the maximum process size Apache reaches is 23M, so I've set MaxClients to 25 (23M x 25 = 575 MB, OK for me).

    I decided to run some load tests on my server, and the results left me puzzled. I'm using ab on my desktop machine, requesting the main page from a WordPress blog. When I run ab with 24 concurrent connections, everything seems fine: sure, CPU goes up and free RAM goes down, but the result is about 2-3s response time per request. But if I run ab with 25 concurrent connections (my server limit), Apache just hangs after a couple of seconds. It starts processing the requests, then it stops responding, CPU goes back to 100% idle, and ab times out. The Apache log says it reached MaxClients.

    When this happens, Apache keeps itself locked up with 25 running processes (all in "W" state if I check server-status), and only after the TimeOut setting (45 in my case) do the processes start to die and the server start responding again.

    My question: is that expected behaviour? Why does Apache just die when it reaches MaxClients? If it works with 24 connections, shouldn't it work with 25, just taking maybe more time to respond to each request and queueing up the rest? It sounds kind of strange to me that any kid running ab can single-handedly kill a webserver just by setting the concurrent connections to the server's MaxClients.
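
    One plausible (not certain) reading of the symptom - all 25 slots stuck in "W" with an idle CPU until Timeout fires - is that every worker is pinned by the load generator's 25 connections and nothing can recycle, so queued requests sit in the listen backlog until the timeout sweeps the workers. A hedged tuning sketch for apache2.conf that makes slots recycle faster; the values are illustrative, not tuned:

        # give a stalled client less time to pin a worker
        Timeout 30
        KeepAlive On
        KeepAliveTimeout 2

        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            MaxClients           25
            MaxRequestsPerChild 500   # recycle children so per-process memory stays near the 23M estimate
        </IfModule>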


  • Ubuntu: package installed, but files missing?

    - by jeckyll2hide
    I have been playing around with the /etc/asterisk directory: installing the related package (asterisk-config), removing it, and removing the directory manually (just playing around to get the configuration synced to my configuration repo). Now I just want to reinstall the official package, so I do:

        root@tethys:/etc# apt-get install asterisk-config
        root@tethys:/etc# tree asterisk/
        asterisk/
        +-- manager.d

    What?! Empty?! Have I installed it?

        root@tethys:/etc# dpkg --get-selections | grep asterisk
        asterisk                        install
        asterisk-config                 install
        asterisk-core-sounds-en         install
        asterisk-core-sounds-en-gsm     install
        asterisk-modules                install
        asterisk-moh-opsound-gsm        install
        asterisk-voicemail              install

    Indeed! Let me check the contents of the package:

        root@tethys:/etc# dpkg -L asterisk-config
        ...
        /etc
        /etc/asterisk
        /etc/asterisk/res_snmp.conf
        /etc/asterisk/dbsep.conf
        /etc/asterisk/cel_custom.conf
        /etc/asterisk/cel.conf
        /etc/asterisk/meetme.conf
        /etc/asterisk/jingle.conf
        /etc/asterisk/queuerules.conf
        ...

    So, what have I done that makes the package install but the contents nowhere to be seen? And, more importantly, how can I force the contents to be installed, no matter what I have done before?
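
    This is documented dpkg behavior rather than a broken package: everything under /etc/asterisk is a conffile, and dpkg deliberately refuses to resurrect conffiles the administrator has deleted, even across remove/install cycles. A hedged sketch of the usual way to force them back:

        # reinstall and explicitly allow dpkg to recreate missing conffiles
        apt-get -o Dpkg::Options::="--force-confmiss" install --reinstall asterisk-config

    An alternative with the same effect is to purge (not just remove) the package first, since purging clears dpkg's record of the conffiles along with the files themselves.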

  • fail2ban errors on Gentoo

    - by Mark Davidson
    Hi all. I've recently set up a new VPS running Gentoo (my first time using the distro, so please forgive me if this is a really easy one) and, as I've done with other servers, installed fail2ban, setting it up to block hosts via iptables after too many unsuccessful SSH logins. However, I'm getting a strange error that I can't quite solve. When I start fail2ban, I get these lines in the error log:

        2009-11-13 18:02:01,290 fail2ban.jail   : INFO   Jail 'ssh-iptables' started
        2009-11-13 18:02:01,480 fail2ban.actions.action: ERROR  iptables -N fail2ban-SSH
        iptables -A fail2ban-SSH -j RETURN
        iptables -I INPUT -p tcp --dport ssh -j fail2ban-SSH returned 100

    If I try to force a ban, these errors show up in the log and the host is not banned:

        2009-11-13 11:23:26,905 fail2ban.actions: WARNING [ssh-iptables] Ban XXX.XXX.XXX.XXX
        2009-11-13 11:23:26,929 fail2ban.actions.action: ERROR  iptables -n -L INPUT | grep -q fail2ban-SSH returned 100
        2009-11-13 11:23:26,930 fail2ban.actions.action: ERROR  Invariant check failed. Trying to restore a sane environment
        2009-11-13 11:23:27,007 fail2ban.actions.action: ERROR  iptables -N fail2ban-SSH
        iptables -A fail2ban-SSH -j RETURN
        iptables -I INPUT -p tcp --dport ssh -j fail2ban-SSH returned 100
        2009-11-13 11:23:27,016 fail2ban.actions.action: ERROR  iptables -n -L INPUT | grep -q fail2ban-SSH returned 100
        2009-11-13 11:23:27,016 fail2ban.actions.action: CRITICAL Unable to restore environment

    My versions are as follows:

        Linux masked 2.6.18-xen-r12 #2 SMP Wed Mar 4 11:45:03 GMT 2009 x86_64 Intel(R) Xeon(R) CPU E5504 @ 2.00GHz GenuineIntel GNU/Linux
        net-analyzer/fail2ban-0.8.4
        net-firewall/iptables-1.4.3.2

    If anyone could shed some light on these errors, that would be great. I did wonder if it was a problem with iptables or some kernel modules, but I can block an IP manually:

        iptables -I INPUT -s 25.55.55.55 -j DROP

    so that makes me think it's something a bit more unusual. Thanks a lot in advance.
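
    Exit status 100 is just whatever the multi-command action returned, so a hedged next step is to run the jail's exact actionstart commands by hand as root and see which one fails, and with what message; on a trimmed Xen kernel a missing iptables match module is a common culprit:

        # the same commands fail2ban runs when the jail starts
        iptables -N fail2ban-SSH
        iptables -A fail2ban-SSH -j RETURN
        iptables -I INPUT -p tcp --dport ssh -j fail2ban-SSH; echo "exit: $?"

        # are the relevant netfilter modules present? (names are indicative)
        lsmod | grep -i -e ip_tables -e xt_tcpudp

    Note that the manual DROP rule working only proves basic iptables support: it uses no tcp match (-p tcp --dport), so it can succeed on a kernel where the jail's rule fails.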


  • PPTP Client setup, Fedora 17

    - by Suarez Romina
    I am trying to connect to hidemyass.com's VPN service via PPTP, but I am having trouble understanding why it isn't working: I get no warning or fatal error, yet my IP remains the same. This is how I create the connection:

        [root@lasvegas-nv-datacenter ~]# pptpsetup --create TUNNELNAME --server 199.58.165.20 \
            --username MYUSERNAME --password MYPASSWORD --encrypt --start

    And this is the output:

        Using interface ppp0
        Connect: ppp0 <--> /dev/pts/1
        CHAP authentication succeeded
        MPPE 128-bit stateless compression enabled
        local IP address 10.200.21.14
        remote IP address 10.200.20.1

    After that, I check the log and this is what I get:

        [root@lasvegas-nv-datacenter ~]# tail -f /var/log/messages
        Aug 24 11:25:33 lasvegas-nv-datacenter pptp[3892]: anon log[ctrlp_rep:pptp_ctrl.c:254]: Sent control packet type is 1 'Start-Control-Connection-Request'
        Aug 24 11:25:33 lasvegas-nv-datacenter pptp[3892]: anon log[ctrlp_disp:pptp_ctrl.c:754]: Received Start Control Connection Reply
        Aug 24 11:25:33 lasvegas-nv-datacenter pptp[3892]: anon log[ctrlp_disp:pptp_ctrl.c:788]: Client connection established.
        Aug 24 11:25:34 lasvegas-nv-datacenter pptp[3892]: anon log[ctrlp_rep:pptp_ctrl.c:254]: Sent control packet type is 7 'Outgoing-Call-Request'
        Aug 24 11:25:34 lasvegas-nv-datacenter pptp[3892]: anon log[ctrlp_disp:pptp_ctrl.c:873]: Received Outgoing Call Reply.
        Aug 24 11:25:34 lasvegas-nv-datacenter pptp[3892]: anon log[ctrlp_disp:pptp_ctrl.c:912]: Outgoing call established (call ID 0, peer's call ID 20096).
        Aug 24 11:25:38 lasvegas-nv-datacenter pppd[3884]: CHAP authentication succeeded
        Aug 24 11:25:38 lasvegas-nv-datacenter pppd[3884]: MPPE 128-bit stateless compression enabled
        Aug 24 11:25:38 lasvegas-nv-datacenter pppd[3884]: local IP address 10.200.21.14
        Aug 24 11:25:38 lasvegas-nv-datacenter pppd[3884]: remote IP address 10.200.20.1

    Can someone help me? Basically, I need to connect to the VPN and have my IP changed after the connection. I've read a lot of guides but still cannot understand why I don't get a connection.
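
    The log shows the tunnel itself comes up cleanly (CHAP and MPPE succeed, and ppp0 gets 10.200.21.14), so the missing piece is routing: pptpsetup brings the interface up but does not make it the default route, and without that the public IP never changes. A hedged sketch - the VPN server address is from the question, the gateway is a placeholder:

        # keep a host route to the VPN server via the existing gateway so the tunnel survives
        ip route add 199.58.165.20 via <current-default-gateway> dev eth0

        # send everything else through the tunnel
        ip route replace default dev ppp0

        # verify the visible IP actually changed
        curl ifconfig.me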


  • Building new computer, turns on, but no POST

    - by addybojangles
    Pardon my ignorance here; I finally decided to put together a computer and, egads. I purchased a new motherboard, power supply, processor, video card, and memory:

    - ASUS M4A79XTD EVO AM3 AMD 790X ATX motherboard
    - OCZ Fatal1ty OCZ550FTY 550W ATX12V v2.2 / EPS12V SLI-ready 80 PLUS certified modular active-PFC power supply
    - AMD Phenom II X4 965 Black Edition Deneb 3.4GHz (4 x 512KB L2 cache, 6MB L3 cache) Socket AM3 125W quad-core processor
    - XFX HD-577A-ZNFC Radeon HD 5770 (Juniper XT) 1GB 128-bit GDDR5 PCI Express 2.0 x16 HDCP-ready CrossFireX-capable video card
    - G.SKILL 4GB (2 x 2GB) 240-pin DDR3 SDRAM DDR3 1600 (PC3 12800) dual-channel kit desktop memory, model F3-12800CL9D-4GBNQ

    (I originally had links for you guys, but I lack the rep, sorry!)

    I've got it all in the tower: power supply in, processor installed on the motherboard, heatsink installed, RAM in, and I am using an older IDE hard disk. When I start the computer, the monitor tells me "check signal cable". As far as I can tell, the heatsink fan on the processor is spinning, the power supply is on (obviously), and the green LED on the motherboard is on.

    I originally had only the larger power connector plugged into the motherboard (which is what I saw in a YouTube video as well as the mobo instructions), but after doing some research, which said to plug in the other ATX power connector too, I did. Trying to power on the computer results in nothing: no beeps on startup, no POST. Anyone have any ideas? Your ideas and help are greatly appreciated.


  • VMWare Fusion: "No Permission to access this virtual machine"

    - by Craig Walker
    I had a VMware Fusion VM backed up on my home network file server (Ubuntu). I wanted to run it again, so I copied it back to my MacBook. When I tried to launch it in VMware, I got an error message:

        No permission to access this virtual machine.
        Configuration file: /Users/craig/WinXP Clean + Scanner.vmwarevm/WinXP Pro Test.vmx

    The permissions look fine to me:

    - The bundle directory is 777.
    - The bundle files (including the listed .vmx) are all 666.
    - User is craig (my current user); group is staff. I changed the group to wheel at the suggestion of this page, but that didn't help.
    - Finder shows read & write for craig, staff, and everyone on the bundle directory.
    - The bundle dir is also not locked.
    - Finder also shows rw and unlocked for the .vmx file.
    - The parent directory is also rw & unlocked.
    - A Disk Utility permissions check doesn't show any problems with any of the associated files.

    It sure looks like I should have wide-open access to run this VM; why is Fusion complaining?
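
    One thing POSIX permissions will not show: a Fusion VM bundle contains .lck lock folders, and a copy that has been through another machine (or a hard stop) can carry stale locks that Fusion reports as a permission problem. A hedged cleanup sketch, using the bundle path from the error message; the quarantine step is an assumption about the round-trip through the file server:

        cd "/Users/craig/WinXP Clean + Scanner.vmwarevm"
        ls -la                 # look for *.lck folders
        rm -rf *.lck
        # clear quarantine metadata picked up during the copy (assumption)
        xattr -rd com.apple.quarantine "/Users/craig/WinXP Clean + Scanner.vmwarevm"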


  • Accidentally overwrote system.dbf - What now?

    - by Filip Ekberg
    I accidentally overwrote system.dbf in /usr/lib/oracle/xe/oradata/XE/system.dbf. Well, I did not actually do it accidentally; I overwrote it because of other failures in the database. When I try running the following:

        SQL> shutdown
        ORA-01109: database not open
        Database dismounted.
        ORACLE instance shut down.
        SQL> startup
        ORACLE instance started.
        Total System Global Area  289406976 bytes
        Fixed Size                  1258488 bytes
        Variable Size              92277768 bytes
        Database Buffers          192937984 bytes
        Redo Buffers                2932736 bytes
        Database mounted.
        ORA-01589: must use RESETLOGS or NORESETLOGS option for database open

    Now I want to try to recover the database, because starting it mounted or normally surely doesn't work:

        SQL> recover database using backup controlfile;
        ORA-00283: recovery session canceled due to errors
        ORA-01110: data file 1: '/usr/lib/oracle/xe/oradata/XE/system.dbf'
        ORA-01122: database file 1 failed verification check
        ORA-01110: data file 1: '/usr/lib/oracle/xe/oradata/XE/system.dbf'
        ORA-01206: file is not part of this database - wrong database id

    How do I solve this? Is it even possible? My "real" problem was that I ran /etc/init.d/oracle-xe configure and it overwrote my old configuration and probably removed passwords and such, so my tables were gone. However, I found mytablespace.dbf, so I hope that it is possible to recover? Please shed some light on this.


  • How to back up non-standard directories in my user profile with Windows Backup?

    - by James Johnston
    I'm using Windows Backup to back up my Win7 Pro laptop. I'd like to use it to back up my complete user profile, but I only see standard profile directories (e.g. C:\Users\JohnstonJ\Documents) in the list; non-standard ones (e.g. C:\Users\JohnstonJ\MyCustomDirectory) aren't there.

    What's the best way to handle this? The only thing I can think of is to browse under the "Computer" entry, navigate directly to C:\Users\JohnstonJ, and check off the entire profile (to get what's in there, plus any new directories that come up). But is that going to back up the profile twice? Could it cause other unforeseen problems, given that I checked it off by navigating through the computer rather than picking it under the "Data Files" category - e.g. backing up temporary-file garbage, files-in-use problems, and other things the "Data Files" category might handle better?

    I'm looking for solutions that other people use, that are known to work well, and that still use the Windows Backup software - I don't really want to fuss with third-party backup software.

    Example - as you can see, I have two directories in my profile that Windows Backup is not offering to back up: "Dropbox" and "New folder". (Link to an image album, because I don't have enough reputation to embed them directly: http://imgur.com/a/Xyv5u)


  • Server crashes on CentOS

    - by Jackob
    Hello guys and girls. I have a big problem: it's the second time my server has crashed. I've checked everything and it all seems correct, except this in /var/log/messages:

        May 25 20:16:11 srv1 kernel: swap_free: Bad swap offset entry 00300000
        May 25 20:16:33 srv1 kernel: swap_free: Unused swap offset entry 00080000
        May 25 20:16:33 srv1 kernel: swap_free: Bad swap offset entry 00340000
        May 25 20:16:33 srv1 kernel: swap_free: Unused swap offset entry 000c0000
        May 25 20:16:33 srv1 kernel: swap_free: Bad swap offset entry 00a00000
        May 25 20:16:33 srv1 kernel: swap_free: Bad swap offset entry 00200000
        May 25 20:16:33 srv1 kernel: swap_free: Bad swap offset entry 00890000
        May 25 20:16:33 srv1 kernel: swap_free: Unused swap offset entry 00080000
        May 25 20:16:33 srv1 kernel: swap_free: Bad swap offset entry 00f00000
        May 25 20:16:33 srv1 kernel: swap_free: Bad swap offset entry 00f90000
        May 25 20:16:33 srv1 kernel: swap_free: Bad swap offset entry 00980000
        May 25 20:16:33 srv1 kernel: swap_free: Bad swap offset entry 00980000
        May 25 20:16:33 srv1 kernel: swap_free: Bad swap offset entry 00820000
        May 25 20:16:33 srv1 kernel: swap_free: Unused swap offset entry 000d0000
        May 25 20:16:33 srv1 kernel: swap_free: Bad swap offset entry 00a60000
        May 25 20:16:33 srv1 kernel: swap_free: Bad swap offset entry 00a20000
        May 25 20:16:33 srv1 kernel: swap_free: Bad swap offset entry 009a0000
        May 25 20:16:33 srv1 kernel: swap_free: Bad swap offset entry 00170000
        May 25 20:16:33 srv1 kernel: swap_free: Bad swap offset entry 00f20000
        May 25 20:16:33 srv1 kernel: swap_free: Bad swap offset entry 00b60000
        May 25 20:16:33 srv1 kernel: swap_free: Bad swap offset entry 00a30000
        May 25 20:16:33 srv1 kernel: swap_free: Bad swap offset entry 00320000
        May 25 20:16:47 srv1 kernel: swap_free: Bad swap offset entry 002c0000
        May 25 20:16:47 srv1 kernel: Eeek! page_mapcount(page) went negative! (-1)

    What could my problem be? Thanks so much!
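
    swap_free complaining about bad/unused offsets, followed by page_mapcount going negative, generally points at memory corruption: either the swap area's on-disk structure is damaged or the RAM itself is bad. Two hedged checks, assuming the swap device turns out to be /dev/sda2 (substitute whatever /proc/swaps reports):

        # which swap device is in use?
        cat /proc/swaps

        # rebuild the swap area in case its structure is corrupted
        swapoff -a
        mkswap /dev/sda2      # assumption: replace with the device from /proc/swaps
        swapon -a

    If the messages return with a freshly built swap area, bad RAM becomes the prime suspect, and a full memtest86+ pass from the boot menu is the usual next step.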


  • How to verify PostgreSQL 9 has installed correctly on a CentOS server?

    - by A4J
    I'm trying to install the pg (Postgres) gem on a CentOS server, but it keeps saying Postgres is too old, even though I have upgraded it to 9.1.3 (as per the instructions at http://www.davidghedini.com/pg/entry/install_postgresql_9_on_centos). I am using CentOS 5.8 and Ruby 1.9.3.

    Here is the error message:

        Building native extensions. This could take a while...
        ERROR: Error installing pg:
        ERROR: Failed to build gem native extension.
        /usr/local/bin/ruby extconf.rb
        checking for pg_config... yes
        Using config values from /usr/bin/pg_config
        checking for libpq-fe.h... yes
        checking for libpq/libpq-fs.h... yes
        checking for pg_config_manual.h... yes
        checking for PQconnectdb() in -lpq... yes
        checking for PQconnectionUsedPassword()... no
        Your PostgreSQL is too old. Either install an older version of this gem or upgrade your database.
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of necessary libraries and/or
        headers. Check the mkmf.log file for more details. You may need configuration options.

    psql --version confirms my version:

        psql (PostgreSQL) 9.1.3

    I can confirm the packages installed:

        Setting up Install Process
        Package postgresql91-9.1.3-1PGDG.rhel5.x86_64 already installed and latest version
        Package postgresql91-devel-9.1.3-1PGDG.rhel5.x86_64 already installed and latest version
        Package postgresql91-server-9.1.3-1PGDG.rhel5.x86_64 already installed and latest version
        Package postgresql91-libs-9.1.3-1PGDG.rhel5.x86_64 already installed and latest version
        Package postgresql91-contrib-9.1.3-1PGDG.rhel5.x86_64 already installed and latest version
        Nothing to do

    Any ideas on how to troubleshoot this? Thanks in advance.
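
    The build log gives the answer away: extconf.rb is reading /usr/bin/pg_config, which typically comes from the distribution's old PostgreSQL 8.x client packages on CentOS 5, while the PGDG 9.1 install lives under /usr/pgsql-9.1 and is not first in the build's path. A hedged sketch pointing the gem at the right pg_config:

        # confirm the two pg_configs disagree
        /usr/bin/pg_config --version
        /usr/pgsql-9.1/bin/pg_config --version

        # build the gem against the 9.1 headers and libraries
        gem install pg -- --with-pg-config=/usr/pgsql-9.1/bin/pg_config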


  • Page reload needed several times before loading normally

    - by tim peterson
    Sorry my question is so vague; I just have no idea where to start in solving it and am quite a novice with servers. Recently my site (an https connection, running on an Amazon EC2 Ubuntu instance with Apache 2.2) has this issue where I need to load a page several times (3-4) before it will load normally. It will then load normally as long as I keep loading pages regularly (every couple of seconds), and it will stall again if I don't load pages for a few minutes.

    It has nothing to do with my application, because I don't have this problem with the exact same app codebase on the Apache installation on my laptop. The only thing I changed, to my knowledge, is that I installed mod_pagespeed (https://developers.google.com/speed/pagespeed/mod). However, I have since turned it off by setting my pagespeed.conf to mod_pagespeed off. Unfortunately, that didn't solve the problem.

    I'm wondering about general advice on how to troubleshoot this problem. For instance, are there Linux commands to check page-loading performance? Also, it looks like I have lots of new error logs in my /var/log/apache2 directory, which I believe weren't there a few months ago. Lots of this:

        error.log               RewriteLog.log.24.gz    ssl_access.log.40.gz
        error.log.1             RewriteLog.log.25.gz    ssl_access.log.41.gz
        error.log.10.gz         RewriteLog.log.26.gz    ssl_access.log.42.gz
        error.log.11.gz         RewriteLog.log.27.gz

    Any thoughts? Thank you, tim
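
    For the "commands to check page loading performance" part, curl can split a request into phases, either from the server itself or from another machine, which shows whether the stall is DNS, TCP connect, the TLS handshake, or the server's time to first byte. A minimal sketch (the URL is a placeholder):

        # name lookup | connect | TLS done | first byte | total - one line per run
        for i in 1 2 3 4 5; do
          curl -o /dev/null -sk -w '%{time_namelookup} %{time_connect} %{time_appconnect} %{time_starttransfer} %{time_total}\n' \
            https://example.com/
        done

    If time_starttransfer is what balloons on the first request and shrinks on repeats, the delay is server-side state being rebuilt after idle periods, which would fit a loaded Apache module (mod_pagespeed's module staying loaded even with its rewriting switched off, for instance) better than a network problem.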


  • Help diagnosing Likewise Open Active Directory authentication problem

    - by purpletonic
    I have two servers which were, until recently, authenticating against the company's Active Directory domain controller. I believe a recent change to the Active Directory administrator password caused the servers to stop authenticating against AD. I tried to add the servers back to the domain using the command:

        domainjoin-cli join example.com adusername

    This seemed to work without complaint, but when I try to log in via ssh with my domain account, I get an invalid-password error. When I run lw-enum-users, it prints all of the domain users; looking up my own account, I see that it is valid and my password hasn't expired. I also ran lw-get-status and received the following:

        LSA Server Status:
        Agent version: 5.0.0
        Uptime: 0 days 3 hours 35 minutes 46 seconds

        [Authentication provider: lsa-activedirectory-provider]
        Status: Online
        Mode: Un-provisioned
        Domain: example.com
        Forest: example.com
        Site: Default-First-Site-Name
        Online check interval: 300 seconds

        [Trusted Domains: 1]
        [Domain: EXAMPLE]
        DNS Domain: example.com
        Netbios name: EXAMPLE
        Forest name: example.com
        Trustee DNS name:
        Client site name: Default-First-Site-Name
        Domain SID: S-1-5-24-1081533780-4562211299-822531512
        Domain GUID: 057f0239-7715-4711-e64b-eb5eeed20e65
        Trust Flags: [0x001d]
                     [0x0001 - In forest]
                     [0x0004 - Tree root]
                     [0x0008 - Primary]
                     [0x0010 - Native]
        Trust type: Up Level
        Trust Attributes: [0x0000]
        Trust Direction: Primary Domain
        Trust Mode: In my forest Trust (MFT)
        Domain flags: [0x0001]
                      [0x0001 - Primary]

        [Domain Controller (DC) Information]
        DC Name: dc1.example.com
        DC Address: 10.11.0.103
        DC Site: Default-First-Site-Name
        DC Flags: [0x000003fd]
        DC Is PDC: yes
        DC is time server: yes
        DC has writeable DS: yes
        DC is Global Catalog: yes
        DC is running KDC: yes

        [Authentication provider: lsa-local-provider]
        Status: Online
        Mode: Local system

    Anyone got any ideas what might be occurring? Thanks in advance!
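
    Two hedged things to try, given that the join reports success but password validation fails. Kerberos only tolerates about five minutes of clock skew, so compare clocks first; then do a clean leave/rejoin so the machine account secret is freshly set after the admin password change:

        # compare this host's clock with the DC from the lw-get-status output
        date; ntpdate -q dc1.example.com

        # clean rejoin to re-establish the machine account
        domainjoin-cli leave
        domainjoin-cli join example.com adusername

        # if this Likewise build ships a cache tool, flush stale entries too (assumption)
        lw-ad-cache --delete-all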


  • haproxy and tomcat intermittent hangs

    - by user7347
    I am trying to run HAProxy in front of Tomcat on a Solaris x86 box, but I am getting intermittent failures. At seemingly random intervals, a request just hangs until HAProxy times out the connection. I thought maybe it was my app, but I've been able to reproduce it with the Tomcat manager app, and hitting Tomcat directly there are no problems at all. Hitting it repeatedly with curl will cause the error within 10-15 tries:

        curl -ikL http://admin:admin@<my server>:81/manager/status

    HAProxy is running on port 81, Tomcat on port 7000. HAProxy returns a 504 gateway timeout to the client and puts this into the log file:

        Sep 7 21:39:53 localhost haproxy[16887]: xxx.xxx.xxx.xxx:65168 [07/Sep/2009:21:39:23.005] http_proxy http_proxy/tomcat7000 5/0/0/-1/30014 504 194 - - sHNN 0/0/0/0/0 0/0 "GET /manager/status HTTP/1.1"

    Tomcat shows nothing: no error in the logs and no indication that the request ever makes it to the Tomcat server. The request count is not incremented, and the manager app shows activity on only one thread, the one serving up the manager app itself.

    Here are my HAProxy and Tomcat connector settings. I've been playing with both a good deal trying to chase down the issue, so they may not be ideal, but they definitely don't seem like they should cause this error.

    server.xml:

        <Connector port="7000" protocol="HTTP/1.1"
                   enableLookups="false"
                   maxKeepAliveRequests="1"
                   connectionLinger="10" />

    haproxy config:

        global
                log loghost local0
                chroot /var/haproxy

        listen http_proxy :81
                mode http
                log global
                option httplog
                option httpclose
                clitimeout 150000
                srvtimeout 30000
                contimeout 3000
                balance roundrobin
                cookie SERVERID insert
                server tomcat7000 127.0.0.1:7000 cookie server00 check inter 2000
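
    Decoding the log line helps frame this: in the termination flags "sHNN", the "s" means a server-side timeout and the "H" means HAProxy was still waiting for the response headers, and the -1 in the timers says the backend never produced a byte before srvtimeout (30s) expired. That would fit a connection Tomcat has already decided to close (maxKeepAliveRequests="1" plus connectionLinger) being handed a new request. A hedged experiment with stock HAProxy 1.3 directives - the values are guesses:

        listen http_proxy :81
                mode http
                # actively close both directions instead of only adding "Connection: close"
                option forceclose
                # if a connection attempt fails, retry / move on rather than hanging the request
                retries 3
                option redispatch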


  • Network monitoring tools with API features

    - by Kev
    We use KS-Soft's Advanced Host Monitor package to monitor around 2,000 items on our network. We think it's great - the chap who supports it is fantastic, and the product is fast, stable and mature - but I feel that, as we grow as a company, it's beginning to show some friction points in the area of integration with our back-office admin systems.

    One of the things we'd like to do is add new tests to whatever monitoring tool we use via an API. For example, when orders for servers come in from our retail interface, the server gets built automatically, and as part of the automated build process we'd like to automatically add new tests to the network monitoring systems. HostMonitor has some support for this via a feature called HM Script, but we're starting to encounter some speedbumps:

    - we can't add new operators/users
    - we can't define new "Action Profiles" - these are the actions to be taken when a test goes good or bad

    What we love about HostMonitor, though, are the Action Profiles. For example, if a Windows IIS box goes bad, our action profile for a bad test does something like:

    1. Check host again (one time)
    2. Wait another 30 seconds, then test again
    3. Try restarting the app pool on the remote machine (up to two times)
    4. Send an email to ops about the restart failure
    5. Try restarting IIS on the remote machine (up to four times)
    6. Page duty admin (up to 5 times - stops after duty admin ACKs the alert)
    7. Page backup duty admin (5 times - stops after duty admin ACKs the alert)

    I'm starting to look around at other network monitoring tools, and I'm looking for:

    - a comprehensive API to add/remove/control tests, test "action profiles", and operators (not just plugins; we need control and admin interfaces)
    - the ability to have quite detailed action/escalation profiles (and to define these via an API)

    I've looked at Nagios and Icinga, but I can't glean from their documentation whether we could have these features or, if we could, how much work would be involved to implement/customise them. Can anyone provide any advice, guidance or experiences?
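
    For what it's worth, Nagios and Icinga 1.x do not expose an admin REST API; the "API" is the object configuration itself, so automated provisioning normally means the build pipeline writes a config fragment and reloads the daemon. A minimal sketch of that pattern - the host name, address, templates, contact group, and reload command are all assumptions to adapt:

        #!/usr/bin/env python
        # Emit a Nagios object file for a newly built server, then reload Nagios.
        # Assumes pre-defined 'linux-server' / 'generic-service' templates and an 'ops' contact group.
        import subprocess

        def nagios_fragment(hostname, address):
            return """
        define host {
            use         linux-server
            host_name   %(hostname)s
            address     %(address)s
        }

        define service {
            use                  generic-service
            host_name            %(hostname)s
            service_description  HTTP
            check_command        check_http
            contact_groups       ops
        }
        """ % {"hostname": hostname, "address": address}

        with open("/etc/nagios/conf.d/web042.cfg", "w") as f:
            f.write(nagios_fragment("web042", "10.0.0.42"))

        subprocess.check_call(["service", "nagios", "reload"])  # adapt to the init system

    Escalation ladders comparable to the Action Profiles are expressed with serviceescalation objects, and the restart-the-app-pool style steps map onto event_handler commands, so the seven-step chain above is reproducible, though it is assembled from several objects rather than one profile.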


  • Mystery undeletable file

    - by Hugh Allen
    I can't delete C:\Config.Msi\75ce84f.rbf:

    - it's not read-only, system or hidden
    - it's not in use by another process (according to Process Explorer)
    - the NT security permissions aren't the problem either - I am the owner and have Full Control; as a double-check, the Effective Permissions tab shows that I have permission to delete

    Yet trying to delete the file gives "Access is Denied" from both Explorer and cmd. I can, however, rename it or move it to another folder on the same drive. I can also read it, and VirusTotal says it's clean, which is what I would expect (it's just a Windows Installer temp file - a copy of some DLL, I think). The relevant line from Process Monitor is:

        6:52:14.3726983 PM  112  Explorer.EXE  SetDispositionInformationFile  C:\Config.Msi\75ce84f.rbf  CANNOT DELETE  Delete: True  Write  1232

    Background: I'm using XP SP2. I recently repaired my Adobe Reader installation to make it the default browser plugin again instead of Foxit (there seems to be no UI to do it otherwise?). So the installer did its thing and then asked to reboot. As is my habit when rebooting is inconvenient, I declined the offer and ran pendmoves to find out what files the installer had scheduled to move/delete. It wanted to delete two files with the .rbf extension (rollback files) located in C:\Config.Msi\ (this applies to both, even though I've been speaking about one). So I tried to delete them manually and couldn't.

    Does anyone have any ideas what could be preventing deletion? (I don't think it's malware, even though I'm not running AV at the moment.)
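
    Since pendmoves is already in play, its companion Sysinternals tool is the usual way out: movefile queues an operation through the same PendingFileRenameOperations mechanism the installer used, and an empty destination means "delete at next boot", which sidesteps whatever is vetoing the immediate delete:

        REM schedule the delete for the next reboot (empty destination = delete)
        movefile C:\Config.Msi\75ce84f.rbf ""

        REM confirm it is queued alongside the installer's entries
        pendmoves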


  • Unable to access Windows share

    - by mbnoimi
    I've installed Alfresco 4.2.d under Ubuntu 12.04 LTS. Everything went fine, except that I can't access it as a Windows share, although I got this link from Alfresco Explorer:

        file:///%5C%5CECSA%5CAlfresco%5CSites%5Cswsdp%5CdocumentLibrary%5CAgency%20Files%5CImages%5Ccoins.JPG

    I tried to access it from \\ECSA but failed there too, so I pinged the server (192.168.0.70 is the server IP):

        C:\Users\user>ping 192.168.0.70

        Pinging 192.168.0.70 with 32 bytes of data:
        Reply from 192.168.0.70: bytes=32 time<1ms TTL=64
        Reply from 192.168.0.70: bytes=32 time<1ms TTL=64
        Reply from 192.168.0.70: bytes=32 time<1ms TTL=64
        Reply from 192.168.0.70: bytes=32 time<1ms TTL=64

        Ping statistics for 192.168.0.70:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 0ms, Maximum = 0ms, Average = 0ms

        C:\Users\user>ping ECSA
        Ping request could not find host ECSA. Please check the name and try again.

    Some logs of what's going on:

        C:\Users\user>net view ECSA
        System error 1707 has occurred.
        The network address is invalid.

        C:\Users\user>nbtstat -a 192.168.0.70

        Local Area Connection:
        Node IpAddress: [192.168.0.84] Scope Id: []

            NetBIOS Remote Machine Name Table

            Name               Type         Status
            ---------------------------------------------
            ECSA           <20>  UNIQUE      Registered
            ECSA           <00>  UNIQUE      Registered
            WORKGROUP      <00>  GROUP       Registered

            MAC Address = 00-00-00-00-00-00

    CIFS server configuration in file-servers.properties:

        ### CIFS Server Configuration - file-servers.properties ###
        cifs.enabled=true
        cifs.serverName=${localname}A
        cifs.domain=
        cifs.broadcast=255.255.255.255
        cifs.bindto=192.168.0.70
        cifs.ipv6.enabled=false
        cifs.hostannounce=true
        cifs.disableNIO=false
        cifs.disableNativeCode=false
        cifs.sessionTimeout=900
        cifs.maximumVirtualCircuitsPerSession=16
        cifs.tcpipSMB.port=445
        cifs.netBIOSSMB.sessionPort=139
        cifs.netBIOSSMB.namePort=137
        cifs.netBIOSSMB.datagramPort=138
        cifs.WINS.autoDetectEnabled=true
        cifs.WINS.primary=192.168.0.70
        cifs.WINS.secondary=192.168.0.1
        cifs.sessionDebug=
        cifs.pseudoFiles.enabled=true
        cifs.pseudoFiles.explorerURL.enabled=true
        cifs.pseudoFiles.explorerURL.fileName=__Alfresco.url
        cifs.pseudoFiles.shareURL.enabled=false
        cifs.pseudoFiles.shareURL.fileName=__Share.url

    How can I fix this issue?
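
    The nbtstat output shows the server is registering the ECSA name, but the client cannot resolve it: NetBIOS broadcast resolution commonly fails across subnets or filtered networks, and nothing here publishes ECSA in DNS or on a WINS server the client uses. Two hedged client-side checks - talk to the share by IP to prove SMB itself works, then pin the name locally so resolution stops being the variable:

        REM bypass NetBIOS name resolution entirely - does SMB answer by IP?
        net view \\192.168.0.70

        REM C:\Windows\System32\drivers\etc\lmhosts - preload the mapping (edit as administrator)
        192.168.0.70    ECSA    #PRE

        REM purge and reload the NetBIOS name cache so the #PRE entry takes effect
        nbtstat -R

    If \\192.168.0.70 works while \\ECSA still fails, this is purely name resolution; fixing it centrally would mean a DNS A record for ECSA or a WINS server the clients actually point at.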


  • iTunes Home Sharing only works one way between 2 WinXP PCs on the same LAN

    - by scunliffe
    Both PCs have the latest iTunes installed. PC (A) can "see" that there is a shared library "B library", but attempts to connect to it return this error message:

        The shared library "{Username}'s Library" is not responding (-3259)
        Check that any firewall software running on either the shared computer or this computer
        has been set to allow communication on port 3689.

    However, the reverse works fine: PC (B) can "see" shared library "A library" and can access all content.

    Notes:

    - Both PCs have Home Sharing enabled (turned off/on several times to verify).
    - Both PCs have Windows Firewall turned on, but in the Exceptions tab iTunes is allowed, and port 3689 is also added as a firewall exception (just in case).
    - Both iTunes accounts have been "authorized" on both PCs.
    - Both PCs connect via LAN through a D-Link DIR-615 router. In the advanced application rules, iTunes has also been added to allow traffic on port 3689 unhindered.

    Is there any other magical setting/configuration option that I should be aware of and set in order to get this to work? I couldn't care less about sharing apps etc.; I just want the music sharing to work.

    Update: Solved! It turns out that on PC (B) there were multiple accounts set up, and one of those accounts had the "No exceptions" checkbox checked under the Windows Firewall "On" option. So even though iTunes was added to the exception list on the main user account, this other account was blocking access.

