Search Results

Search found 3942 results on 158 pages for 'logged'.


  • Hyper-V virtual machine can't be migrated to a specific host in the cluster

    - by Massimo
    I have a three-node Hyper-V cluster running on Windows Server 2008 R2 which is working quite flawlessly: there are no errors, live migration works, all hosts can and will happily run all virtual machines, and so on. But one specific virtual machine is trying to make me go mad: it works on two nodes of the cluster, but not on the third one. Whenever I try to move the VM to that node, be it in a live migration or with the VM powered off, it always fails. In the event log of the host these events are logged: Source: Hyper-V-VMMS Event ID: 16300 Cannot load a virtual machine configuration: General access denied error (0x80070005) (Virtual machine ID <GUID>) Source: Hyper-V-VMMS Event ID: 20100 The Virtual Machine Management Service failed to register the configuration for the virtual machine '<GUID>' at 'C:\ClusterStorage\<PATH>\<VM>': General access denied error (0x80070005) Source: Hyper-V-High-Availability Event ID: 21102 'Virtual Machine Configuration <VM>' failed to register the virtual machine with the virtual machine management service. All other VMs can be moved to/from the offending host, and the offending VM can be moved between the other two hosts. Also, this is not a storage problem, because there are other VMs in the same cluster volume, and the host has no trouble running them. What's going on here?

    Read the article

  • Can't start Bind9 on Ubuntu 10.04 + Plesk 10.1 - "named: no process found"

    - by bradley.ayers
    I've installed a fresh copy of Ubuntu 10.04 64bit; I didn't install bind when choosing what packages should be installed in the Ubuntu installer. I downloaded the auto installer for Plesk 10.1 and installed it successfully. When I logged into the Plesk control panel and tried to change the password, it failed because it couldn't restart bind. I SSH'd into the box, tried a sudo /etc/init.d/bind9 restart and got the following: brad@ws01:/root# sudo /etc/init.d/bind9 restart * Stopping domain name service... bind9 WARNING: key file (/etc/bind/rndc.key) exists, but using default configuration file (/etc/bind/rndc.conf) rndc: connect failed: 127.0.0.1#953: connection refused named: no process found [ OK ] * Starting domain name service... bind9 [fail] Looking at the tail of /var/log/messages reveals a whole bunch of: Feb 23 16:08:21 ws01 kernel: [ 3840.065851] type=1503 audit(1298441301.831:31): operation="open" pid=5565 parent=5563 profile="/usr/sbin/named" requested_mask="::r" denied_mask="::r" fsuid=108 ouid=0 name="/var/named/run-root/etc/named.conf" Edit: After following ooshro's advice, bind runs, however I still get the named: no process found error: brad@ws01:/etc/apparmor.d$ sudo /etc/init.d/bind9 restart * Stopping domain name service... bind9 WARNING: key file (/etc/bind/rndc.key) exists, but using default configuration file (/etc/bind/rndc.conf) named: no process found [ OK ] * Starting domain name service... bind9 [ OK ]
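
    The audit lines above are AppArmor denials: named is confined by the profile in /etc/apparmor.d/usr.sbin.named, which by default does not cover Plesk's chrooted configuration under /var/named/run-root. A hedged sketch of the usual adjustment (the exact globs are assumptions based on the denied path in the log, and writable paths inside the chroot may need entries too):

      # append inside /etc/apparmor.d/usr.sbin.named
      /var/named/run-root/etc/** r,

      sudo /etc/init.d/apparmor reload
      sudo /etc/init.d/bind9 restart

    The leftover "named: no process found" on the stop step most likely just means the init script found no running named to kill, and is usually harmless once bind starts cleanly.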

    Read the article

  • How to remove strict RSA key checking in SSH and what's the problem here?

    - by setatakahashi
    I have a Linux server that, whenever I connect, shows me a message saying the SSH host key has changed: $ ssh root@host1 @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ @ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY! Someone could be eavesdropping on you right now (man-in-the-middle attack)! It is also possible that the RSA host key has just been changed. The fingerprint for the RSA key sent by the remote host is 93:a2:1b:1c:5f:3e:68:47:bf:79:56:52:f0:ec:03:6b. Please contact your system administrator. Add correct host key in /home/emerson/.ssh/known_hosts to get rid of this message. Offending key in /home/emerson/.ssh/known_hosts:377 RSA host key for host1 has changed and you have requested strict checking. Host key verification failed. It keeps me logged in for only a few seconds and then closes the connection. host1:~/.ssh # Read from remote host host1: Connection reset by peer Connection to host1 closed. Does anyone know what's happening and what I could do to solve this problem?
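
    If the key change is expected (for example the server was reinstalled or its host keys were regenerated), the standard OpenSSH way to clear the stale entry is sketched below; host1 and line 377 come straight from the warning above:

      # remove the old key for host1 from known_hosts
      ssh-keygen -R host1
      # or delete the offending line directly
      sed -i '377d' ~/.ssh/known_hosts
      # reconnect and accept the new key only after verifying its fingerprint out of band
      ssh root@host1

    Turning off strict checking globally (StrictHostKeyChecking no in ~/.ssh/config) also silences the warning, but it removes exactly the man-in-the-middle protection the message exists to provide. The immediate disconnect after login is a separate symptom worth investigating on the server itself.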

    Read the article

  • Monitor Exchange Email Address and run scripts

    - by WernerCD
    Okay... Not sure how "out there" this thought is... Right now to send a pager message (aka text message), a user logs into our AS400... logs into the program... enters a user name and message and hits F10 to send. With a little looking, it seems that you can run remote commands on the AS400 via FTP. So I'm working on building a script (batch or otherwise) that, given two parameters (user, message), will FTP into the AS400 and run a remote command: c:\>ftp server user: admin password: ***** ftp> quote rcmd SNDPGRMSG TOPGR(JDOE) MSG('This is a Test') ftp> quit So... what I want to do is: (1) set up an email account on our Exchange server, (2) monitor the account for incoming mail, (3) upon receipt of incoming mail, parse it... say for example the subject is defined as "Recipient" and the email text is defined as "Pager message", (4) run a batch that uses the above-mentioned TOPGR and MSG as parameters... via FTP to the AS400, and (5) mark the email as "read". The main thing I'm not sure about is monitoring an Exchange account and running a script on incoming emails. I'm sure what I want to do is possible... but where would I start? EDIT: Clarification The main reasons for using this four-part system are logging (messages sent via this are logged and reported by the AS400 program) and the existing scheduler for redirecting pages (For example, the weekly on-call person = TOPGR(oncall) gets updated by the AS400 program). I'm also trying to remove duplicate work. If I can get this setup working, I can redirect pages from OTHER systems into this one. I then don't have to update 2, soon to be 3, systems with current phone numbers, carriers, on-call schedules, etc. System #2 and #3 can just "email" [email protected].
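
    The sending half can be wrapped in a small batch file driven by the stock Windows ftp client's -s: script option. This is only a hedged sketch (the host name, credentials and file names are placeholders), and the Exchange-monitoring half, something that polls the mailbox, extracts Recipient/Pager message and calls this script, still has to be built around it:

      @echo off
      rem sendpage.cmd <recipient> <"message"> -- hypothetical wrapper around the session shown above
      set RECIPIENT=%1
      set MESSAGE=%~2
      > ftpcmds.txt  echo user admin YOURPASSWORD
      >> ftpcmds.txt echo quote rcmd SNDPGRMSG TOPGR(%RECIPIENT%) MSG('%MESSAGE%')
      >> ftpcmds.txt echo quit
      ftp -n -s:ftpcmds.txt as400host
      del ftpcmds.txt

    Called as sendpage.cmd JDOE "This is a Test", this reproduces the interactive session non-interactively; keeping the password in a plain-text script is an obvious trade-off to weigh.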

    Read the article

  • Not Able To Connect to Windows Server 2008 R2 using FileZilla Externally

    - by obautista
    I configured the FTP Service/Role on my Windows Server 2008 R2 machine. I am able to connect from the inside, but not from the outside. On the inside I tested using the cmd prompt and IE FTP. On the outside, I am testing with FileZilla and IE FTP. From the outside, IE FTP prompts me to enter my username/password, but nothing happens; the page eventually times out and I get "Internet Explorer cannot display page". Using FileZilla, I get the following messages. Note that FileZilla resolves the domain name and authenticates. I did not configure FTP Firewall Support on the FTP site; I am not sure if I need to do this. I set up basic authentication, non-SSL, not allowing anonymous. I tested with the Windows Firewall turned off and on (I added a Windows Firewall rule for port 21). On my network firewall (Cisco), I added a rule to forward port 21 traffic to the FTP server. Status: Resolving address of ftp.technologyblends.com Status: Connecting to 75.149.66.201:21... Status: Connection established, waiting for welcome message... Response: 220 Microsoft FTP Service Command: USER * Response: 331 Password required for . Command: PASS *** Response: 230 User logged in. Command: SYST Response: 215 Windows_NT Command: FEAT Response: 211-Extended features supported: Response: LANG EN* Response: UTF8 Response: AUTH TLS;TLS-C;SSL;TLS-P; Response: PBSZ Response: PROT C;P; Response: CCC Response: HOST Response: SIZE Response: MDTM Response: REST STREAM Response: 211 END Command: OPTS UTF8 ON Response: 200 OPTS UTF8 command successful - UTF8 encoding now ON. Status: Connected Status: Retrieving directory listing... Command: PWD Response: 257 "/" is current directory. Command: TYPE I Response: 200 Type set to I. Command: PASV Error: Connection timed out Error: Failed to retrieve directory listing
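
    The trace above gets through login and stalls exactly at PASV, which is the classic sign that the passive-mode data ports are not reachable from outside. A hedged sketch of the usual remedy for IIS 7.5 FTP behind a NAT firewall (the port range below is an arbitrary example):

      rem 1) In IIS Manager, server node > FTP Firewall Support: set a data channel
      rem    port range (e.g. 50000-50100) and the server's external IP address, then
      rem    restart the Microsoft FTP Service.
      rem 2) Allow that range through Windows Firewall:
      netsh advfirewall firewall add rule name="FTP passive data" dir=in action=allow protocol=TCP localport=50000-50100
      rem 3) Forward the same range (in addition to port 21) on the Cisco firewall to the FTP server.

    So yes, configuring FTP Firewall Support is generally needed for external passive-mode clients; FileZilla and IE's FTP view typically default to passive mode, which is why they fail the same way.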

    Read the article

  • XDEBUG/PHP doesn't dump profile even when set up properly?

    - by John D.
    I installed xdebug from source, but also tried my package manager (separately), and both load correctly (verified by restarting Apache and seeing the xdebug copyright info in phpinfo()), but they do not dump profiling information. Out of about 40 different configuration attempts it logged once or twice, but I lost track of what I did. I first tried only loading the module in php.ini with no settings, but it didn't log to /tmp/. I have tried many different settings; my current ones are: xdebug.profiler_enable = Off xdebug.profiler_enable_trigger = 1 xdebug.profiler_output_dir = "/tmp/" xdebug.profiler_output_name = "profiler.%t" Of course I call my script through 127.0.0.1/test.php?XDEBUG_PROFILE, which is what enable_trigger expects. Do you know why it would not dump profiler information? nobody (Arch Linux) can write to /tmp/ as it has before, so I'm sure it is not a permissions error. Apache's error_log does not tell me anything about xdebug either, as it has loaded correctly. It just does not "work"! EDIT: I made a subfolder "xdebug_profiles" in /tmp/ and chown'ed it to nobody, and now it works flawlessly. I'm not sure why it couldn't write before; I guess it's just a caveat with nobody on Arch. I answered my own question here since I don't have enough points to post it as an answer or a comment, so consider this answered.
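
    For reference, a hedged sketch of the combination that ended up working (trigger-based profiling plus a dedicated output directory writable by the web server user); the module path is an assumption that varies per install:

      ; php.ini (xdebug 2.x setting names, as used above)
      zend_extension = /usr/lib/php/modules/xdebug.so
      xdebug.profiler_enable = 0
      xdebug.profiler_enable_trigger = 1
      xdebug.profiler_output_dir = "/tmp/xdebug_profiles"
      xdebug.profiler_output_name = "profiler.%t"

      # create the directory and hand it to the Apache user (nobody on Arch)
      sudo mkdir -p /tmp/xdebug_profiles
      sudo chown nobody /tmp/xdebug_profiles
      # trigger a profile run, then check for a new profiler file
      curl 'http://127.0.0.1/test.php?XDEBUG_PROFILE'
      ls -l /tmp/xdebug_profiles

    Why writing straight into /tmp/ failed remains speculation (some systems mount or clean /tmp in ways that interfere); the dedicated subdirectory sidesteps it either way.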

    Read the article

  • Newbie: getting access privileges

    - by Mellon
    I am a newbie on a Linux Ubuntu machine. I am logged in to Ubuntu with the username student. There are some directories that only the root user is allowed to access, for example /var/lib/mysql (I know I can use sudo to access them, but that is not what I want). If I want to get access to those directories with the student account, is it right that I can run the following command: chown student: PATH_TO_ROOT_USER_PRIVILEGED_DIR and after that access the directory using my own account? Am I right? If I am right, will the root user then lose its access privileges because I changed the owner to the student user? If I am wrong, please tell me the right solution. P.S. please don't worry about what I am going to do with the /var/lib/mysql directory; that is only an example. As mentioned above, I mean generally: for those directories which only root can access, can I use chown to change the access privileges, and will the root user then lose access because of the change made by chown? I just want to know the effect of chown.
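
    A short illustration of what chown actually changes here, plus ways to get read access without taking ownership away; the group name and paths are examples only:

      # Taking ownership: student becomes the owner, but root is NOT locked out,
      # because root bypasses normal file permission checks entirely.
      sudo chown -R student /var/lib/mysql     # would likely break the mysql service, though

      # Gentler alternatives: join the directory's group, or grant an ACL.
      sudo usermod -aG mysql student           # takes effect after logging out and back in
      ls -ld /var/lib/mysql                    # confirm the group permissions actually allow access
      sudo setfacl -R -m u:student:rX /var/lib/mysql   # requires ACL support on the filesystem

    So: chown does change who owns the directory, and an ordinary previous owner would lose owner-level rights, but root itself never loses access because root is not subject to those permission bits. Services that expect to own their data directory (like mysqld here) are a different story, which is why chown on /var/lib/mysql is a bad idea in practice.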

    Read the article

  • Cacti not working for SNMP data sources

    - by lorenzo-s
    I installed the cacti and snmpd packages on a Debian server. I'm able to display common graphs in Cacti (such as memory usage, load average, logged-in users, etc.) using the data templates listed as Unix. Now I want to replace these graphs with new ones using SNMP data sources, because I see there is also CPU usage, and because it's possible I will have to manage multiple hosts in the future. So, I installed snmpd on the machine and left snmpd.conf as it was. In Cacti, I created three new data sources from SNMP templates for the 127.0.0.1 host: ucd/net - CPU Usage - Nice, ucd/net - CPU Usage - System, ucd/net - CPU Usage - User. Then I created a new graph from the template ucd/net - CPU Usage, and selected the three data sources in the Graph Item Fields section. The graph is now enabled and running, but empty; no data have been collected. Under Console - Devices my SNMP host is listed as up and running: System:Linux ip-xx-xx-xxx-xxx 3.2.0-23-virtual #36-Ubuntu SMP Tue Apr 10 22:29:03 UTC 2012 x86_64 Uptime: 929267 (0 days, 2 hours, 34 minutes) Hostname: ip-xx-xx-xxx-xxx Location: Sitting on the Dock of the Bay Contact: Me [email protected] In SNMP Options I left everything as it was: SNMP Version: Version 1 SNMP Community: public SNMP Timeout: 500 ms Maximum OID's Per Get Request: 10 In Console - Utilities - Cacti Log I have multiple warnings (two for each data source) every 5 minutes: 10/29/2012 01:45:01 PM - CMDPHP: Poller[0] Host[2] DS[18] WARNING: Result from SNMP not valid. Partial Result: U 10/29/2012 01:45:01 PM - CMDPHP: Poller[0] WARNING: SNMP Get Timeout for Host:'127.0.0.1', and OID:'.1.3.6.1.4.1.2021.4.15.0' 10/29/2012 01:45:01 PM - CMDPHP: Poller[0] Host[1] DS[9] WARNING: Result from SNMP not valid. Partial Result: U 10/29/2012 01:45:01 PM - CMDPHP: Poller[0] WARNING: SNMP Get Timeout for Host:'127.0.0.1', and OID:'.1.3.6.1.4.1.2021.11.52.0' 10/29/2012 01:40:01 PM - CMDPHP: Poller[0] Host[2] DS[19] WARNING: Result from SNMP not valid. Partial Result: U 10/29/2012 01:40:01 PM - CMDPHP: Poller[0] WARNING: SNMP Get Timeout for Host:'127.0.0.1', and OID:'.1.3.6.1.4.1.2021.4.6.0' [...] I have the feeling I'm missing something, but I can't figure out what...
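
    The OIDs in those warnings are UCD-SNMP memory/CPU counters, so a quick way to separate a Cacti problem from an snmpd problem is to query them directly with the same community Cacti uses; a hedged sketch (Debian paths, v1/public as configured above):

      # query one of the exact OIDs Cacti is timing out on
      snmpget -v1 -c public 127.0.0.1 .1.3.6.1.4.1.2021.4.15.0
      # walk the whole UCD-SNMP subtree to see what the agent actually exposes
      snmpwalk -v1 -c public 127.0.0.1 .1.3.6.1.4.1.2021

    If these time out as well, the stock Debian snmpd.conf is the likely culprit: by default it tends to restrict both the listening address and the visible view. Loosening it for local polling, e.g. a line like "rocommunity public 127.0.0.1" in /etc/snmp/snmpd.conf followed by "sudo /etc/init.d/snmpd restart", is a common fix, though the exact default contents vary by release.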

    Read the article

  • smartctl not actually running self tests?

    - by canzar
    I want to run the smartctl self-tests to check the health of the drives in my RAID array (PERC 5/i). The array is on sda and comprises six drives. I can check the status using sudo smartctl /dev/sda -d megaraid,0 -a And I see that SMART is available and enabled on all the drives. I have tried to run self-tests using sudo smartctl /dev/sda -d megaraid,0 -t short and sudo smartctl /dev/sda -d megaraid,0 -t long I have also tried it on all of the drives 0-5. No matter what I try, when I run: sudo smartctl /dev/sda -d megaraid,0 -l selftest I always get the same result, which always reports that no self-test has ever been run. /dev/sda [megaraid_disk_00] [SAT]: Device open changed type from 'megaraid' to 'sat' ===START OF READ SMART DATA SECTION === SMART Self-test log structure revision number 1 No self-tests have been logged. [To run self-tests, use: smartctl -t] From what I read, I should have no problem running the short and long self-tests on the array while it is mounted. Does anyone else have experience running these tests on a PERC 5/i RAID array who could lend some insight into what is causing the problem? (smartmontools release 5.40 dated 2009-12-09 at 21:00:32 UTC)
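
    A small sanity check is to start and read back the test on every member disk with matching -d megaraid,N numbers, since the self-test log is per physical disk; a hedged sketch:

      # kick off a short self-test on each of the six members behind the PERC 5/i
      for N in 0 1 2 3 4 5; do
          sudo smartctl /dev/sda -d megaraid,$N -t short
      done
      # wait a few minutes, then read each disk's own self-test log
      for N in 0 1 2 3 4 5; do
          echo "== disk $N =="
          sudo smartctl /dev/sda -d megaraid,$N -l selftest
      done

    If every disk still shows an empty log, the 5.40 release may simply be mishandling the SAT/megaraid pass-through on this controller (note the "changed type from 'megaraid' to 'sat'" line); trying a newer smartmontools build before digging further seems reasonable.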

    Read the article

  • Firefox Won't Save My Google Cookies Between Restarts

    - by Tom Purl
    I'm using Firefox 11 on Ubuntu. For some strange reason, Firefox won't save my Google cookies between browser restarts. I have to log in to Gmail every time I restart my browser, even if I click on the check box that tells Google to remember me. The strange thing is that Firefox does actually store some Gmail cookies when I log in. It's just that those cookies disappear after restarting Firefox. The especially strange thing is that this only seems to happen with *.google.com URLs. I haven't noticed this problem with any other site that I use. Please note that I tried to see if this was a plugin-related problem. I therefore started Firefox in safe mode and turned off all plugins. I then logged into Gmail and told it to remember me. I then shut down Firefox and started it the same way in safe mode. I got the same bad results. Has anyone else ever seen anything like this before? Is there a reason that Firefox seems to be blacklisting Google cookies?

    Read the article

  • BOINC error code -1200 when opening...

    - by Erik Vold
    I installed BOINC 6.10.21 on my Mac OS X 10.5 machine today in order to upgrade from the 6.6 version I had been running. I am the admin user, and I was logged in as the admin user. As I was installing 6.10.21 I was asked if non-admin users should be allowed to use BOINC, and I said 'yes' to this. Then when I tried to open BOINC I got a message like the following: "You currently are not authorized to manage the client. Either re-install and allow non-admin users or contact your administrator to add you to the 'boinc_master' user group." So I tried to reinstall first, and I was not asked if non-admin users should be allowed to use BOINC... so I retried a few times and got no different result. So I downloaded 6.10.43 and installed that, and again I was not asked if non-admin users should be allowed to use BOINC... and when I tried to run BOINC I got the same message: "You currently are not authorized to manage the client. Either re-install and allow non-admin users or contact your administrator to add you to the 'boinc_master' user group." So I did a Google search trying to figure out how to add my admin user to the boinc_master user group and found this, which suggested I run the following in Terminal: "sudo dscl . -append /Groups/boinc_master GroupMembership <your user's short name> CR" So I did this and now I get the following error: BOINC ownership or permissions are not set properly; please reinstall BOINC (Error code -1200) So I reinstall, and I am never asked the question about allowing non-admin users again, and I still get this error message after every reinstall attempt. What should I do?

    Read the article

  • Can't find standalone Chrome Gmail client that I know exists

    - by Carson
    I'm on Windows. A couple of years ago when I switched from Outlook to Gmail (Google Apps), Google provided this awesome little standalone Gmail client that was just a single-purpose Chrome install. It launched like a normal application and stayed updated when I updated Chrome. It was Chrome in a separate application that launched only Gmail, stayed logged in really well, and "felt" like a Gmail mail client, with the Gmail interface. It had its own little red envelope icon; it was a Windows app. (I remember there was no Mac equivalent.) I found it while looking through the "this is how you get your company to switch to gmail" documentation that Google provided. I just repaved my box and now I'm looking for this thing again, and I had no idea it would be impossible to find. I've spent literally 2 hours looking, searching, googling, etc. I'm losing my mind. Anyone know how I can get my hands on this? I used it all day every day for 2 years, so I know it exists :), but I cannot find it. Any assistance would be gratefully received.

    Read the article

  • ProFTPD / PAM issues with new centos/virtualmin install

    - by iamthewit
    I just installed CentOS 5.4 on a Rackspace cloud server and installed Virtualmin, which all seemed to go fine. The only problem I have is that I cannot access the virtual servers' directories via FTP. I get the following from FileZilla: Status: Connecting to 1.1.1.1:21... Status: Connection established, waiting for welcome message... Response: 220 FTP Server ready. Command: USER username Response: 331 Password required for username. Command: PASS *************** Response: 230 User username logged in. Status: Connected Status: Retrieving directory listing... Command: PWD Response: 257 "/" is current directory. Command: TYPE I Response: 200 Type set to I Command: PASV Response: 227 Entering Passive Mode (1,1,1,1,216,214) Command: LIST Error: Connection timed out Error: Failed to retrieve directory listing and I get this from my /var/log/secure file: Sep 22 19:40:42 stickeeserver proftpd: pam_unix(proftpd:session): session opened for user username by (uid=0) Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - USER nastypasty: Login successful. Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - Preparing to chroot to directory '/home/username' Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - mod_delay/0.5: delaying for 728 usecs Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - error setting IPV6_V6ONLY: Protocol not available Any help would be greatly appreciated; I'm not totally new to Linux, but it's not my strongest subject. I do like to know exactly why problems occur and how exactly to fix them, so the more detail the better! Cheers
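
    The session gets through login and dies retrieving the directory listing, which points at passive-mode data connections being blocked rather than at PAM: the control connection and authentication clearly succeed. A hedged ProFTPD sketch for a cloud server behind NAT (the range and address are placeholders, and the config lives at /etc/proftpd.conf or /etc/proftpd/proftpd.conf depending on the build):

      # proftpd.conf
      PassivePorts 49152 49252
      MasqueradeAddress 1.1.1.1          # the server's public IP, if NATed

      # open the range and restart
      iptables -I INPUT -p tcp --dport 49152:49252 -j ACCEPT
      service proftpd restart

    The IPV6_V6ONLY line in the log is generally a harmless warning on kernels without that socket option and is unlikely to be the cause.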

    Read the article

  • Apache Virtual host not recognized

    - by Bozho
    I had been using one server, then I reinstalled everything on another server, and mod_jk stopped working. Here is the situation: Apache 2.0 sitting "in front", mod_jk used to connect Apache to Tomcat, and Tomcat 6.0.26 used to serve the actual requests. I followed this tutorial. The result is: accessing http://mysite.com opens the index.html in /var/www/, accessing http://mysite.com:8080/ works OK, and the logs at /var/log/apache2 show everything is OK: [Mon Mar 29 22:01:53.310 2010] [28349:3075389184] [info] init_jk::mod_jk.c (2830): mod_jk/1.2.26 initialized [Mon Mar 29 22:01:53 2010] [warn] No JkShmFile defined in httpd.conf. Using default /var/log/apache2/jk-runtime-status [Mon Mar 29 22:01:53 2010] [notice] Apache/2.2.9 (Debian) mod_jk/1.2.26 configured -- resuming normal operations I compared the server.xml, jk.conf and sites-enabled/mysite from the new server to those from the old one and they are identical. The domain name is the same (I updated the DNS record today, and it has refreshed successfully). So the question is, what can go wrong? Is there another place where problems would be logged, if they occur? Update: What I can be almost certain of is that the virtual host is not recognized; requests are always forwarded to the default virtual host. So, how do I make sure the virtual host is recognized and working?
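
    A hedged sketch of the usual checks on Debian when every request lands in the default virtual host; mysite is a placeholder for whatever the file in sites-enabled is actually called:

      # dump the virtual host configuration Apache actually parsed
      apache2ctl -S
      # make sure the site is enabled, then reload
      a2ensite mysite
      /etc/init.d/apache2 reload

    If apache2ctl -S only lists the default host, the usual suspects are a missing or mismatched NameVirtualHost *:80 versus <VirtualHost *:80> pair, or a ServerName that doesn't match the requested host name. The JkMount directives (e.g. JkMount /* workername) also have to sit inside that <VirtualHost> block, otherwise Apache serves /var/www even when the vhost matches.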

    Read the article

  • GTX 280 purple snow on bootup, card works without drivers

    - by Brokar
    I have owned an ASUS GTX 280 for 3 years now. The card has been great all along, but I started having problems 10 days ago. I had been playing Diablo 3 for a week on max settings with no problems, then suddenly my display kept showing weird purple colours/artifacts as soon as I booted and logged into Windows. Went into safe mode, updated drivers, and it kept crashing. Formatted the PC, did a fresh Windows install with new WHQL drivers, and again the same problem. Uninstalled the nvidia drivers and the PC has been running great for 4 days now; of course I cannot run games, but everything works at 1680x1050 resolution and I can browse the internet, watch movies and use my PC for everything but gaming. As soon as I install the nvidia drivers the PC won't boot. I only want to game a few hours a week (a very busy schedule with school this month, so it might be a blessing that I cannot game) and I would love it if I could keep the card. I am looking to upgrade later on when I will have time for gaming, but I wonder if I could still use the card somehow with different/new drivers (tried older drivers that came with the card on a CD as well). tl;dr: PC works fine with no nvidia drivers (apart from gaming, of course). Once I install WHQL drivers or older ones, I cannot even log into Windows. Fix?

    Read the article

  • Editing a Windows XP installation's registry without being able to log in

    - by Alain
    I've got a Windows XP installation that has a corrupt registry. A worm (which was removed) had hijacked the HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon entry (which should have a value of Userinit=C:\windows\system32\userinit.exe). When the worm was removed, the corrupt entry was deleted entirely, and now the system automatically logs off immediately after attempting to log in. Regardless of the user and boot mode, no accounts can be logged in to. The only thing required to correct this behavior is to restore the registry key, but I cannot come up with any way of editing the registry without logging in to an account. I tried remotely connecting to the registry but the required services aren't enabled on the machine. I tried booting on the same machine using the BartPE boot CD but I could not find any way of editing the registry of the C:\Windows installation - running regedit only modifies the X:\I386\ registry in memory. So, what can I use to modify the registry of an un-login-able Windows XP instance so that I can log in again? Thanks guys. EDIT: The fix worked. The solution to the auto-logoff problem was, as hoped, to simply add the value mentioned above to the appropriate registry entry. This can be done using the BartPE Boot CD, as described in the accepted answer below, but I used the Offline NT Registry Editor software mentioned in another answer. The steps were: (1) Boot from the NT Registry Editor CD. (2) Follow the directions until the appropriate boot sector is loaded. (3) Instead of using one of the default options for modifying passwords or user accounts, type "software" to edit that hive. (4) Type '9' to enter the command line based registry editor. (5) Type "cd Microsoft" (enter), "cd Windows NT" (enter), "cd CurrentVersion" (enter), "cd Winlogon" (enter). (6) Type "nv 1 Userinit" to create a new value under the Winlogon key. (7) Type "ev Userinit" to edit the new value, and when prompted, type "C:\windows\system32\userinit.exe" (enter). (8) Type 'q' to quit the registry editor, and as you back out of the system, follow the directions to write the hive back to disk. Restart your computer and log in - problem solved. (generic 'warning: back up your registry' disclaimer)
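
    For completeness, the same repair can usually be made from a BartPE command prompt with reg.exe by loading the offline SOFTWARE hive; this is a hedged sketch assuming Windows lives on C: and reg.exe is available on the boot CD:

      rem load the offline hive under a temporary key
      reg load HKLM\OFFLINE_SOFTWARE C:\Windows\System32\config\software

      rem restore the missing value (the stock value normally ends with a trailing comma)
      reg add "HKLM\OFFLINE_SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v Userinit /t REG_SZ /d "C:\windows\system32\userinit.exe," /f

      rem write the hive back to disk and unload it
      reg unload HKLM\OFFLINE_SOFTWARE

    Either route ends the same way: the Userinit value is back, and logins stop bouncing.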

    Read the article

  • How do I renew a Web Server certificate in Windows Server 2008?

    - by Mark Seemann
    The SSL certificate for my web site just expired a few days ago, and I would like to renew it. I originally issued it two years ago using my Windows 2008 Certificate Authority, and it's worked without a hitch in all that time, so I would like to renew the certificate as simply as possible to make sure that all the applications relying on that certificate continue to work. I can open an MMC instance and add the Certificates snap-in for the Local Computer. I can find the relevant certificate under Personal, but I can't renew it. When I select Renew certificate with new key I get the following message: Web Server Status: Unavailable The permissions on the certificate template do not allow the current user to enroll for this type of certificate. You do not have permission to request this type of certificate. However, I can't understand this, as I'm logged on as a Domain Admin and I'm running the MMC instance in elevated mode. I've checked the Web Server certificate template, and Domain Admins have the Enroll permission on this template. FWIW, I also tried rebooting the server. How can I renew the certificate?

    Read the article

  • Alternatives to Splunk?

    - by MichaelGG
    I'm pretty impressed with Splunk, especially version 4. Pretty graphs, alerting (Enterprise only), and fast, accurate searching. It's a great product. However, the cost is just way too high to consider for full production use for our company. All we really need is to be able to index different logs in a central place, and have reasonable searching on that. Having alerts based on a saved search is also really nice. We don't really go beyond that. In fact, our biggest usage has been in deploying new applications. Everything gets logged via log4net to either the Event log on Windows or a text file on Linux. Splunk makes it pretty easy to quickly search across those to make sure all the parts of the app are working OK -- that's saved us tons of time versus hunting down individual logging sources. What alternatives exist in this market? I have a sinking feeling Splunk's pricing is so high because they have the best product by far, and they know it. We want the server to run on Windows. I'd be open to a split model, using one product for general logs (collected via syslog/Snare), and a dedicated product for our custom apps (like Log4Net Dashboard). Would using a simple syslog server such as Kiwi, with the logs sent to SQL Server (perhaps with full-text indexing enabled), work? I'd hope the cost should be well under 5 figures, USD. (And yes, I know, we're cheap. We're a startup with little money, and BizSpark takes care of all our MS licensing.) Edit: I should add, we have about 10 physical servers, 20 VMs, and a couple of firewalls and switches. 90% is Windows.

    Read the article

  • Windows file locks allowing multiple users to write to open file over network

    - by JPbuntu
    I have 6 Windows computers (XP, Vista, 7) that need to access a Samba share (Ubuntu 12.04). I am trying to make it so only one client can open a file at a given time. I thought this was pretty standard behavior of file locks, but I can't get it to work. The way it is right now, a file can be opened by two users, and changed and saved by either one of them. The last file saved overwrites whatever changes the other user made. At first I thought this was a Samba configuration problem, but I get this behavior even between two Windows machines. So far I have only tested two setups: Windows XP with Windows Vista, and Windows XP and Windows Vista both against the Samba share; both give the same behavior. When I tested the Samba configuration, I had set strict locking = yes and got errors logged like this: close_remove_share_mode: Could not get share mode lock for file _prod/part_number_list_COPY.xlsx Eventually all of the files are going to be moved onto the Samba share, so that is the configuration I am most concerned about fixing. Any ideas? Thanks in advance. EDIT: I tested an Excel file again, and it is now working properly in both above-mentioned cases; I am also no longer getting the above-mentioned error. I don't know what happened; perhaps a restart fixed it? (It also works with strict locking = no.) I still need to find a solution for the CAD/CAM files we use, though: the software is Vector and it does not seem to use file locks. Is there any software that I can use to manage these files, so two people can't open/edit them at a time? Maybe a Windows application that forces file locks? Or a dirt-simple version control system? (The only ones I have seen are too complicated for our needs.)
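
    Whether locking works at all depends on the application requesting exclusive access when it opens the file: Excel does, which is why that case behaves, while an application that opens files with full sharing (as Vector appears to) cannot be locked out by the server alone. A hedged way to confirm what the server actually sees, plus the share options worth double-checking (these are the defaults; the share name and path are examples):

      # while a user has a file open from Windows, list Samba's open files and locks
      sudo smbstatus -L

      # relevant share options in smb.conf (defaults shown explicitly)
      [prod]
          path = /srv/prod
          locking = yes
          oplocks = yes

    If smbstatus never shows a restrictive deny mode on the CAD files, the fix has to come from the client side, e.g. a check-out/check-in tool or a simple version control front end, rather than from server-side locking.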

    Read the article

  • SQLVDI error - attempt to release mutex not owned by caller

    - by Chris W
    I've started getting some errors in the App event log of one of our database servers (Windows 2003 & SQL Server 2005). The nightly full database backups are completing successfully however immediately after the job success is written to the event log there is a run of entries that say: SQLVDI: Loc=CVDS. Desc=Release(ClientAliveMutex). ErrorCode=(288)Attempt to release mutex not owned by caller. There's five of these logged - the server itself has more than 20 databases on it which are all backed up successfully. The server is backed up by Bacula using a VSS backup. Has anyone got any ideas what would be causing the errors? They seem to have started after a re-boot on Friday to install some patches which included KB960089. Edit: After getting the errors for a few days they've now stopped without any action on my part other than letting the backups continue as they were. It may be a coincidence but they stopped after Bacula completed its weekly full rather than the daily incremental backup.

    Read the article

  • ESXi 4.0 - cannot copy files

    - by user21368
    I am unable to copy files or make directories on my installation of VMware ESXi 4.0. I have done so in the past (copied an iso onto a datastore). But something has changed and I have no idea what. I cannot copy using the datastore browser (I get a dialog saying "Expected a PUT_FILE_DONE message. Got SESSION_COMPLETE"). I cannot create a directory through the datastore browser (I get a dialog saying "Cannot complete file creation operation"). When I ssh to the ESXi server I cannot create files or folders under /vmfs/volumes. But I can manipulate files elsewhere (including /vmfs). Here are the permissions for the directories (I am logged in as root). ~ # ls -lh /vmfs/volumes/ drwxr-xr-t 1 root root 1.2k Sep 3 12:19 4a76f260-36b7eb85-c3b3-0024e8314929 drwxr-xr-x 1 root root 8 Jan 1 1970 4a76f261-d6190a9e-3b89-0024e8314929 drwxr-xr-t 1 root root 1.4k Sep 22 10:38 4a76f262-4ac21f0a-6bc1-0024e8314929 l--------- 0 root root 1.9k Jan 1 1970 Hypervisor1 - c42ce27f-eb8d7f70-7f70-0e7a85e8edc4 l--------- 0 root root 1.9k Jan 1 1970 Hypervisor2 - bbf1477b-4aec1d8c-caa5-5e8720bebd85 l--------- 0 root root 1.9k Jan 1 1970 Hypervisor3 - efd8efe3-03bc1cbf-15e0-080efd9e7379 drwxr-xr-x 1 root root 8 Jan 1 1970 bbf1477b-4aec1d8c-caa5-5e8720bebd85 drwxr-xr-x 1 root root 8 Jan 1 1970 c42ce27f-eb8d7f70-7f70-0e7a85e8edc4 l--------- 0 root root 1.9k Jan 1 1970 datastore1 - 4a76f260-36b7eb85-c3b3-0024e8314929 l--------- 0 root root 1.9k Jan 1 1970 datastore2 - 4a76f262-4ac21f0a-6bc1-0024e8314929 drwxr-xr-x 1 root root 8 Jan 1 1970 efd8efe3-03bc1cbf-15e0-080efd9e7379 ~ # touch /vmfs/foo.txt ~ # touch /vmfs/volumes/foo.txt touch: /vmfs/volumes/foo.txt: Operation not permitted I've googled and found nothing helpful. Does anyone out there have an idea as to what is going on? Thanks in advance. Pete.

    Read the article

  • auth.log is empty (Ubuntu)

    - by Vinicius Pinto
    The /var/log/auth.log file in my Ubuntu 9.04 is empty. syslogd is running and /etc/syslog.conf content is as follows. Any help? Thanks. # /etc/syslog.conf Configuration file for syslogd. # # For more information see syslog.conf(5) # manpage. # # First some standard logfiles. Log by facility. # auth,authpriv.* /var/log/auth.log *.*;auth,authpriv.none -/var/log/syslog #cron.* /var/log/cron.log daemon.* -/var/log/daemon.log kern.* -/var/log/kern.log lpr.* -/var/log/lpr.log mail.* -/var/log/mail.log user.* -/var/log/user.log # # Logging for the mail system. Split it up so that # it is easy to write scripts to parse these files. # mail.info -/var/log/mail.info mail.warning -/var/log/mail.warn mail.err /var/log/mail.err # Logging for INN news system # news.crit /var/log/news/news.crit news.err /var/log/news/news.err news.notice -/var/log/news/news.notice # # Some `catch-all' logfiles. # *.=debug;\ auth,authpriv.none;\ news.none;mail.none -/var/log/debug *.=info;*.=notice;*.=warning;\ auth,authpriv.none;\ cron,daemon.none;\ mail,news.none -/var/log/messages # # Emergencies are sent to everybody logged in. # *.emerg * # # I like to have messages displayed on the console, but only on a virtual # console I usually leave idle. # #daemon,mail.*;\ # news.=crit;news.=err;news.=notice;\ # *.=debug;*.=info;\ # *.=notice;*.=warning /dev/tty8 # The named pipe /dev/xconsole is for the `xconsole' utility. To use it, # you must invoke `xconsole' with the `-file' option: # # $ xconsole -file /dev/xconsole [...] # # NOTE: adjust the list below, or you'll go crazy if you have a reasonably # busy site.. # daemon.*;mail.*;\ news.err;\ *.=debug;*.=info;\ *.=notice;*.=warning |/dev/xconsole

    Read the article

  • cygwin sshd times out for remote login

    - by reve_etrange
    I have configured SSHD using Cygwin on Windows 7. I have checked and double-checked all of the following points: Port forwarding is correctly configured Windows Firewall is configured to pass port 22 Local login attempts (using Cygwin SSH) succeed sshd_config has UseDNS No Using nmap from remote machine confirms port 22 is accessible /etc/passwd and /etc/group are correctly populated However, remote login attempts time out. This includes from the local network. user@host:~$ ssh -vvv [email protected] OpenSSH_5.5p1 Debian-4ubuntu6, OpenSSL 0.9.8o 01 Jun 2010 debug1: Reading configuration data /home/user/.ssh/config debug1: Applying options for * debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to the.ip.add.ress [the.ip.add.ress] port 22. debug1: connect to address the.ip.add.ress port 22: Connection timed out ssh: connect to the.ip.add.ress port 22: Connection timed out No messages are logged to /var/log/sshd.log. I suspect that there is a permissions issue with a particular file somewhere, however I have checked the permissions of all my Cygwin binaries, DLLs and the particular files important to Cygwin sshd, including all of: /etc/passwd /etc/group /var /var/log/sshd.log /var/empty Others who have reported this or similar errors appear to have missed one of the points enumerated above. Can anyone point me to a possible solution?
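
    Given that local logins work but remote attempts time out without anything being logged, the next things to rule out are that sshd is really listening on all interfaces and that the connection is reaching the machine at all; a hedged checklist from a Cygwin shell (port 2222 is just an example for a throwaway debug instance):

      # is the service running, and is something listening on 0.0.0.0:22?
      cygrunsrv -Q sshd
      netstat -an | grep ':22'
      # does sshd_config restrict the listen address?
      grep -i '^ListenAddress' /etc/sshd_config
      # run a one-off debug instance on an alternate port and watch the remote attempt arrive
      /usr/sbin/sshd -d -p 2222        # remember to open/forward 2222 for the test

    If the debug instance never prints an incoming connection while the remote client is trying, the packets are being dropped before they reach sshd (router, ISP, or a second firewall layer) rather than by a file permission problem.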

    Read the article

  • The email address used to help trust my PC is no longer valid; how do I fix that?

    - by Rod
    A few nights ago I upgraded my Windows 7 PC to Windows 8. This morning, when I logged into my PC, I noticed the little flag in the system tray, in desktop mode, had a red "X" on it. I checked it, and one of the issues was that I needed to tell Windows 8 that I trusted my PC. I clicked on the link to trust my PC, saw the button to tell it to trust my PC, and did so. After doing that it said it had sent out an email to a really old email address of mine, which hasn't been valid for many years. Now what? Will that "trusting my PC" step be invalid because I can't respond to the email (which I certainly can't, since it's being sent to such an old address)? I didn't even know that Windows still had such an old email address for me. I'm concerned that I won't have a trust relationship with my own PC, that whatever holds onto my information has outdated details, and that I don't know how to change it as quickly as possible so that I can trust my own PC. How do I do these things?

    Read the article
