Search Results

Search found 34595 results on 1384 pages for 'vendor id'.


  • mcelog fails to start on PUIAS 6.4 AMD hardware

    - by Predrag Punosevac
    Folks, I am a total Linux n00b. I am trying to deploy mcelog on one of my computing nodes running PUIAS 6.4 (x86_64), a free clone of Red Hat 6.4, on AMD hardware.

      [root@lov3 edac]# uname -a
      Linux lov3.mylab.org 2.6.32-358.18.1.el6.x86_64 #1 SMP Tue Aug 27 22:40:32 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux

      [root@lov3 mcelog]# lscpu
      Architecture:          x86_64
      CPU op-mode(s):        32-bit, 64-bit
      Byte Order:            Little Endian
      CPU(s):                64
      On-line CPU(s) list:   0-63
      Thread(s) per core:    2
      Core(s) per socket:    8
      Socket(s):             4
      NUMA node(s):          8
      Vendor ID:             AuthenticAMD
      CPU family:            21
      Model:                 2
      Stepping:              0
      CPU MHz:               1400.000
      BogoMIPS:              4999.30
      Virtualization:        AMD-V
      L1d cache:             16K
      L1i cache:             64K
      L2 cache:              2048K
      L3 cache:              6144K
      NUMA node0 CPU(s):     0-7
      NUMA node1 CPU(s):     8-15
      NUMA node2 CPU(s):     16-23
      NUMA node3 CPU(s):     24-31
      NUMA node4 CPU(s):     32-39
      NUMA node5 CPU(s):     40-47
      NUMA node6 CPU(s):     48-55
      NUMA node7 CPU(s):     56-63

    My mcelog.conf file is more or less default, apart from the fact that I would like to run mcelog as a daemon and to log errors. When I start mcelog:

      [root@lov3 mcelog]# mcelog --config-file mcelog.conf
      AMD Processor family 21: Please load edac_mce_amd module.

    However, the module is present:

      [root@lov3 mcelog]# locate edac_mce_amd.ko
      /lib/modules/2.6.32-358.18.1.el6.x86_64/kernel/drivers/edac/edac_mce_amd.ko
      /lib/modules/2.6.32-358.el6.x86_64/kernel/drivers/edac/edac_mce_amd.ko

    and loaded:

      [root@lov3 edac]# lsmod | grep mce
      edac_mce_amd           14705  1 amd64_edac_mod

    Is there anything that I can do to get mcelog working? The only reference I found is this thread: http://lists.centos.org/pipermail/centos/2012-November/130226.html
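
    For what it's worth, a quick kernel-side sanity check looks like this (a hedged sketch; note that upstream mcelog has historically decoded AMD errors itself only up to family 15/K8, and on newer AMD CPUs the in-kernel edac_mce_amd decoder writes decoded errors to the syslog instead, which may be why mcelog bails even with the module loaded):

      # confirm the decoder module is bound into the running kernel
      modprobe edac_mce_amd
      lsmod | grep edac

      # raw machine-check records arrive on this character device
      ls -l /dev/mcelog

      # kernel-decoded errors land in the system log rather than mcelog's log
      grep -i 'machine check' /var/log/messages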


  • PHP running too slow, always showing "504 Gateway Time-out"

    - by komase
    PHP is running too slow, always showing "504 Gateway Time-out". My server spec:

      Dual core Atom 330 CPU
      2GB RAM
      nginx with PHP in FastCGI
      eAccelerator
      CPU 74.3% idle
      RAM used: 350MB of 2GB

    I have lots of sites on my server, with cron jobs running every minute, all the time; in some minutes two or three cron jobs run at the same time. All my sites' cron jobs are heavy, usually taking more than one minute to run. My nginx.conf became so big that nginx refused to start because of the number of sites in it; that was solved by increasing server_names_hash_max_size. I'm planning to add more sites to this server.

    Now, opening my website always shows 504 Gateway Time-out. I have tested many eAccelerator and PHP settings, but the 504 Gateway Time-out still happens. It disappears when cron is disabled.

    I have no idea: is this because of not enough processor power? What should I do? Upgrade my processor?

    Added: this is top for my CPU just now:

      Cpu(s): 17.5%us, 3.8%sy, 0.1%ni, 71.6%id, 6.9%wa, 0.1%hi, 0.1%si, 0.0%st
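
    Since the question mentions testing PHP settings but not nginx's FastCGI timeouts, here is the usual knob for 504s from nginx (a hedged sketch; the actual config is not shown in the question, and the values are illustrative):

      # nginx http{} or server{} context: give the slow FastCGI backend
      # more time before nginx gives up and returns 504
      fastcgi_connect_timeout 60s;
      fastcgi_send_timeout    300s;
      fastcgi_read_timeout    300s;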


  • Data Protection Manager System Protection Backups Failing

    - by TrueDuality
    I'm just starting to set up DPM 2010 in a test environment with a Domain Controller and a File Server. Everything seems to be working fairly well and I can get all of my backup jobs to succeed except for the "Computer\System Protection" backups. Both servers are running fully up-to-date 64-bit Windows Server 2008 R2 Enterprise with Service Pack 1. The error that is being provided is:

      DPM cannot create a backup because Windows Server Backup (WSB) on the protected computer encountered an error (WSB Event ID: 517, WSB Error Code: 0x8078001D). (ID 30229 Details: Internal error code: 0x809909FB)

    This Microsoft Knowledge Base article describes the issue perfectly and provides a hotfix. I downloaded the hotfix, moved it onto the affected server, attempted to run it, and received the following error:

      The update is not applicable to your computer.

    I've verified that I have indeed downloaded the 64-bit version. According to this thread the hotfix got rolled into Service Pack 1, yet I'm still experiencing the issue. Both machines do have the Windows Server Backup feature installed. Can anybody point me in the right direction? What am I missing?
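
    One way to narrow this down is to drive Windows Server Backup directly, bypassing DPM, since the WSB event and error code suggest the failure is on the protected computer itself (a hedged sketch; the backup target drive letter is assumed):

      rem run a system state backup with WSB itself and watch for event 517
      wbadmin start systemstatebackup -backupTarget:E: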


  • Ping: sendmsg: operation not permitted error after installing iptables on Arch GNU/Linux

    - by estol
    Yesterday I got a new computer as my home server, an HP ProLiant MicroServer. I installed Arch Linux on it, with kernel version 3.2.12. After installing iptables (1.4.12.2, the current version AFAIK), changing the net.ipv4.ip_forward key to 1, and enabling forwarding in the iptables configuration file (and rebooting), the system cannot use any of its network interfaces. Ping fails with:

      ping: sendmsg: operation not permitted

    If I remove iptables completely, networking is okay, but I need to share the Internet connection to the local network.

      eth0 - WAN NIC integrated on the motherboard (no idea of vendor, probably HP)
      eth1 - LAN NIC in a PCI Express slot (Intel Gigabit CT Desktop, http://www.intel.com/content/www/us/en/network-adapters/gigabit-network-adapters/gigabit-ct-desktop-adapter.html)

    Since it works without iptables (the server can access the Internet, and I can log in with ssh from the internal network), I assume it has something to do with iptables. I do not have much experience with iptables, so I used these as reference (separate from each other, of course):

      wiki.archlinux.org/index.php/Simple_stateful_firewall#Setting_up_a_NAT_gateway
      revsys.com/writings/quicktips/nat.html
      howtoforge.com/nat_iptables

    On my previous server, I used the revsys guide to set up NAT; it worked like a charm. Has anyone experienced anything like this before? What am I doing wrong? Thanks, estol
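
    For comparison, a minimal NAT gateway along the lines of the linked guides looks like this (a hedged sketch using the interface roles from the question; "operation not permitted" on ping typically means a default DROP policy with no rule matching the outbound ICMP):

      sysctl -w net.ipv4.ip_forward=1

      iptables -P OUTPUT ACCEPT
      iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
      iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
      iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
      iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE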


  • Authenticate users with Zimbra LDAP Server from other CentOS clients

    - by efesaid
    I'm wondering how I can integrate my database, web, backup, etc. CentOS servers with the Zimbra LDAP server. Does it require more advanced configuration than standard LDAP authentication?

    My Zimbra server version is:

      [zimbra@zimbra ~]$ zmcontrol -v
      Release 8.0.5_GA_5839.RHEL6_64_20130910123908 RHEL6_64 FOSS edition.

    My LDAP server status is:

      [zimbra@ldap ~]$ zmcontrol status
      Host ldap.domain.com
              ldap                    Running
              snmp                    Running
              stats                   Running
              zmconfigd               Running

    I already installed the nss-pam-ldapd packages on my servers:

      [root@www]# rpm -qa | grep ldap
      nss-pam-ldapd-0.7.5-18.2.el6_4.x86_64
      apr-util-ldap-1.3.9-3.el6_0.1.x86_64
      pam_ldap-185-11.el6.x86_64
      openldap-2.4.23-32.el6_4.1.x86_64

    My /etc/nslcd.conf is:

      [root@www]# tail -n 7 /etc/nslcd.conf
      uid nslcd
      gid ldap
      # This comment prevents repeated auto-migration of settings.
      uri ldap://ldap.domain.com
      base dc=domain,dc=com
      binddn uid=zimbra,cn=admins,cn=zimbra
      bindpw **pass**
      ssl no
      tls_cacertdir /etc/openldap/cacerts

    When I run:

      [root@www ~]# id username
      id: username: No such user

    But I am sure that the username user exists on the LDAP server.

    EDIT: When I run the ldapsearch command I get all results with credentials and dn:

      [root@www ~]# ldapsearch -H ldap://ldap.domain.com:389 -w **pass** -D uid=zimbra,cn=admins,cn=zimbra -x 'objectclass=*'
      # extended LDIF
      #
      # LDAPv3
      # base <dc=domain,dc=com> (default) with scope subtree
      # filter: objectclass=*
      # requesting: ALL
      #

      # domain.com
      dn: dc=domain,dc=com
      zimbraDomainType: local
      zimbraDomainStatus: active
      .
      .
      .
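
    Two things worth checking, sketched below: NSS only consults nslcd if nsswitch.conf says so, and id only resolves entries that expose posixAccount attributes (uidNumber/gidNumber), which a stock Zimbra LDAP tree does not carry unless the posixAccount schema has been added to the accounts (hedged; this is general nss-pam-ldapd behaviour, not verified against this setup):

      # /etc/nsswitch.conf
      passwd: files ldap
      group:  files ldap
      shadow: files ldap

      # restart the lookup daemon after any nslcd.conf change
      service nslcd restart

      # query through NSS directly
      getent passwd username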


  • Makecert.exe hangs

    - by Robert
    I was following the steps in Scott Hanselman's blog post describing how to create a certificate authority and code-signing certificate for PowerShell scripts. Initially, I created the certificate authority and a personal certificate and used it to sign a PowerShell script successfully. All went as described in the blog post.

    The problem starts (as most do) when I did something that was (probably) stupid, although it seemed reasonable at the time. I wanted to start over and repeat the process again with a clean slate, so from the MMC certificates snap-in console, I deleted the personal certificate and the certificate authority I created previously. After that, any time I try to use makecert (just as I did the first time around), makecert either hangs or faults (which prompts to end or debug).

    Did I hose something up by deleting via the certificates snap-in? It didn't complain or warn me that it could be potentially hazardous. Is this just coincidence, and something else entirely could be hosed? I have Event Log entries from the times when makecert crashed, which all look very similar; here is one:

      Log Name:      Application
      Source:        Application Error
      Date:          8/5/2009 3:55:04 PM
      Event ID:      1000
      Task Category: (100)
      Level:         Error
      Description:   Faulting application makecert.exe, version 6.0.6000.16384, time stamp 0x4545910b, faulting module ntdll.dll, version 6.0.6002.18005, time stamp 0x49e03821, exception code 0xc0000005, fault offset 0x00067409, process id 0xe58, application start time 0x01ca160efdf30625.

    Anyone have any ideas as to what exactly caused this and/or what I can do to fix it? I'm on 32-bit Vista Enterprise w/SP2.
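
    For reference, the kind of makecert invocation the blog post describes is shown below (a hedged sketch from the post's approach; the CN and file names are illustrative). If the original .pvk/.cer files are still sitting in the working directory, re-running against them is one more variable worth eliminating:

      makecert -n "CN=PowerShell Local Certificate Root" -a sha1 ^
               -eku 1.3.6.1.5.5.7.3.3 -r -sv root.pvk root.cer ^
               -ss Root -sr localMachine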


  • Windows Server 2008 R2 Virtual Lab activation strategies?

    - by William Hilsum
    I have an ESXi server that I use for testing; however, I often need to create additional Windows Server virtual machines. Typically, if I do not need a VM for more than 30 days, I simply do not activate. However, I have been doing a lot of HA/DRS testing recently and I have had a few servers up for more than this time. I have an MSDN account with Microsoft and have already received extra keys for Windows Server 2008 R2. I am doing nothing illegal and I am sure if I asked, they would issue more - but I do not want to tempt fate!

    I have got 3 different "activated" Windows snapshots I can get to at any time. If I try to clone these machines, I get the usual "did you copy or move the VM" message. If I choose copy, as far as I can see, it changes the BIOS ID and NIC MACs, which is enough to disable activation. If I choose move, it keeps the activation fine (obviously, I know to change the NIC MAC - I believe I can leave the BIOS ID without problems). However, either of these options keeps the same SID for the computer and user accounts.

    After the activation period has expired, as far as I can see, all that happens is that optional updates do not work - the normal updates seem to work fine. Based on this, as you can easily get into Windows when not activated without any sort of workaround, I was wondering if it is OK just to leave a machine unactivated? (However, I would obviously prefer it activated!) Alternatively, how dangerous is it to run multiple machines in a non-domain environment with the same SID?

    I am just interested to know if anyone can recommend a strategy for me. I have only found one solution that deals with bypassing activation - I am not interested in doing anything remotely dodgy... At a stretch, I am happy to rearm (I have never needed to keep a server past 100 days), but I would rather have a proper strategy in place.
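
    For the rearm route, the relevant commands are below (a hedged sketch; slmgr is standard on 2008 R2, and sysprep /generalize is the supported way to give a clone a new SID, at the cost of resetting some machine state):

      rem check license state, grace time and remaining rearm count
      slmgr /dlv

      rem reset the grace period (limited number of rearms per install)
      slmgr /rearm

      rem generalize a template VM before cloning so each copy gets a new SID
      C:\Windows\System32\sysprep\sysprep.exe /generalize /oobe /shutdown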


  • Where is the memory gone? (no, not buffers or cache)

    - by Marki
    Can anyone tell me where the memory is gone (no, this time neither buffers nor cache):

      # free
                   total       used       free     shared    buffers     cached
      Mem:       3928200    3868560      59640          0       2888      92924
      -/+ buffers/cache:    3772748     155452
      Swap:      4192956     226352    3966604

    top, sorted by memory, descending:

      top - 13:42:06 up 1 day,  3:47,  2 users,  load average: 0.08, 0.12, 0.36
      Tasks: 228 total,   1 running, 227 sleeping,   0 stopped,   0 zombie
      Cpu0  :  2.0%us,  4.0%sy,  0.0%ni, 90.1%id,  0.0%wa,  0.0%hi,  4.0%si,  0.0%st
      Cpu1  :  0.0%us,  0.0%sy,  0.0%ni,  0.0%id,100.0%wa,  0.0%hi,  0.0%si,  0.0%st
      Mem:   3928200k total,  3868020k used,    60180k free,     2896k buffers
      Swap:  4192956k total,   226048k used,  3966908k free,    82068k cached

        PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
       3863 root      20   0  902m 199m 3296 S    7  5.2  99:08.77 ndsd
      21906 root      20   0  138m 9076 2988 S    0  0.2   0:00.02 sfcbd
       2332 root      20   0  126m 4660 1332 S    0  0.1   0:17.72 mono
       4243 wwwrun    20   0  683m 4468  668 S    0  0.1   0:07.38 java
       2994 root      20   0  202m 2288 1660 S    0  0.1   6:10.02 httpstkd
       4338 root      20   0  184m 2240 1112 S    0  0.1   0:00.52 namcd
      21898 root      20   0 32368 1832 1256 R    1  0.0   0:00.08 top

    In fact, some time ago the OOM killer kicked in and crashed the system (kernel panic), and I'm afraid we're again not far from that point...

    UPDATE

      # cat /proc/meminfo
      MemTotal:        3928200 kB
      MemFree:           51336 kB
      Buffers:            2964 kB
      Cached:            72876 kB
      SwapCached:        29128 kB
      Active:           233440 kB
      Inactive:          88040 kB
      Active(anon):     188920 kB
      Inactive(anon):    56752 kB
      Active(file):      44520 kB
      Inactive(file):    31288 kB
      Unevictable:           0 kB
      Mlocked:               0 kB
      SwapTotal:       4192956 kB
      SwapFree:        3966824 kB
      Dirty:                32 kB
      Writeback:             0 kB
      AnonPages:        225112 kB
      Mapped:            11356 kB
      Shmem:                32 kB
      Slab:            1624080 kB
      SReclaimable:      13740 kB
      SUnreclaim:      1610340 kB
      KernelStack:        4176 kB
      PageTables:        10500 kB
      NFS_Unstable:          0 kB
      Bounce:                0 kB
      WritebackTmp:          0 kB
      CommitLimit:     6157056 kB
      Committed_AS:    2397684 kB
      VmallocTotal:   34359738367 kB
      VmallocUsed:      441372 kB
      VmallocChunk:   34359246755 kB
      HardwareCorrupted:     0 kB
      HugePages_Total:       0
      HugePages_Free:        0
      HugePages_Rsvd:        0
      HugePages_Surp:        0
      Hugepagesize:       2048 kB
      DirectMap4k:       10240 kB
      DirectMap2M:     4184064 kB

    slabtop:

      Active / Total Objects (% used)    : 9041019 / 9207548 (98.2%)
      Active / Total Slabs (% used)      : 401132 / 401156 (100.0%)
      Active / Total Caches (% used)     : 91 / 159 (57.2%)
      Active / Total Size (% used)       : 1491537.88K / 1519791.56K (98.1%)
      Minimum / Average / Maximum Object : 0.02K / 0.17K / 4096.00K

         OBJS  ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
      4240470 4240319  99%    0.12K 141349       30    565396K pid
      2245140 2219675  98%    0.25K 149676       15    598704K size-256
      2238090 2210087  98%    0.12K  74603       30    298412K size-128
      ...
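
    The numbers above point away from processes and at kernel slab: Slab is ~1.6 GB with SUnreclaim ~1.57 GB, dominated by the pid, size-256 and size-128 caches. A sketch for watching which cache keeps growing (standard tools, nothing system-specific assumed):

      # sort slab caches by total size and watch over time
      slabtop -s c

      # or snapshot the raw counters (columns: name, active_objs, num_objs, objsize)
      grep -v '^#' /proc/slabinfo | sort -rn -k2 | head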


  • Options for synchronizing Palm Desktop calendars

    - by Al Everett
    My wife and I each have Palm Centro phones and very full calendars. We've been using Palm products for years, are happy with them, and are not looking to switch (and don't have the budget for it even if we wanted to). What is a viable way we can synchronize our calendars?

    Back when we had Palm Z22s we used AirSet. It worked great at synchronizing our desktop calendars. Unfortunately, their sync software does not support Palm Desktop v6 and there is nothing in the pipeline to support it. (The third-party vendor is apparently not interested in updating for the newer Palm Desktop.) I would love to get back to having each other's appointments appear on the other's calendar. What can we do?

    Some limitations:

      A data plan is not an option (so this precludes over-the-air synchronization options)
      No Microsoft Outlook

    Edit: Syncing my Palm to my Palm Desktop is not the issue. Being able to sync, in some way, the Palm calendar databases of my wife's and my calendars is what's desired.


  • Ubuntu 11.10 VirtualBox Unity 3D not working

    - by naveen
    After struggling for four hours, I still cannot get Unity 3D of GNOME 3 to work on my VirtualBox; I have been pouring through Internet and forum posts but to no avail. Here's what I've done so far:

      VirtualBox 4.1.4r74921 on Windows 7
      Installed Ubuntu Desktop 11.10 (32-bit)
      Enabled 3D acceleration
      Allocated 1.5GB of RAM
      Allocated 50MB video memory (hope this is not the culprit)
      Installed Guest Additions 4.1.4
      Did apt-get update and apt-get upgrade
      Booted back into Ubuntu - falls back to Unity 2D

    Shared folders and mouse integration all work, so the Guest Additions are properly installed. I tried the command below; here is the output:

      /usr/lib/nux/unity_support_test -p
      OpenGL vendor string:   Mesa Project
      OpenGL renderer string: Software Rasterizer
      OpenGL version string:  2.1 Mesa 7.11

      Not software rendered:    no
      Not blacklisted:          yes
      GLX fbconfig:             yes
      GLX texture from pixmap:  no
      GL npot or rect textures: yes
      GL vertex program:        yes
      GL fragment program:      yes
      GL vertex buffer object:  yes
      GL framebuffer object:    yes
      GL version is 1.4+:       yes
      Unity 3D supported:       no

    I am trying to find out what the "no" means but cannot find any good answers. The host is:

      Intel Core i5 processor
      4GB of RAM
      Display adapter: NVIDIA GeForce 8400GS

    Is anyone else facing the same problem? If so, can you point me to a solution or any reference where I can find one?
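
    One reading of that output: "OpenGL renderer string: Software Rasterizer" together with "Not software rendered: no" means the guest is falling back to Mesa software rendering, i.e. VirtualBox's 3D passthrough is not actually in use, and Unity 3D refuses to run on a software rasterizer. A hedged sketch of the host-side settings usually involved (the VM name is assumed); 50MB of video memory is also below what the Guest Additions' 3D driver generally wants, so raising --vram is a cheap thing to try:

      VBoxManage modifyvm "Ubuntu 11.10" --accelerate3d on --vram 128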


  • How do I get a Mac to request a new IP address from another DHCP server running in parallel while Netbooting?

    - by huyqt
    I have an interesting situation. I'm trying to use a Linux-based machine to allow Macs to Netboot (similar to PXE boot) by running a DHCP service in parallel with the "global" DHCP server. The local DHCP server hands out IPs in a private subnet, e.g., 10.168.0.10-10.168.254.254, while the "global" DHCP server hands out IPs from the range 10.0.0.1-10.0.1.254. The local DHCP range is only supposed to be used for Preboot Execution Environment and Netboot. The local DHCP server is something I have control over, but I do not have access to the global DHCP server. I have a filter to only allow members with the vendor strings "AAPLBSDPC/i386" and "PXEClient".

    PXE works fine, but Netboot has a quirk. Apple systems that haven't been connected to the network yet can Netboot fine. But once one grabs a "real" IP address from the global DHCP server, it will "save" it and request it the next time we want it to Netboot (which the local DHCP server won't give it). This is what I want:

      Mar 30 10:52:28 dev01 dhcpd: DHCPDISCOVER from 34:15:xx:xx:xx:xx via eth1
      Mar 30 10:52:29 dev01 dhcpd: DHCPOFFER on 10.168.222.46 to 34:15:xx:xx:xx:xx via eth1
      Mar 30 10:52:31 dev01 dhcpd: DHCPREQUEST for 10.168.222.46 (10.168.0.1) from 34:15:xx:xx:xx:xx via eth1
      Mar 30 10:52:31 dev01 dhcpd: DHCPACK on 10.168.222.46 to 34:15:xx:xx:xx:xx via eth1
      Mar 30 10:52:32 dev01 in.tftpd[5890]: tftp: client does not accept options
      Mar 30 10:52:53 dev01 in.tftpd[5891]: tftp: client does not accept options
      Mar 30 10:52:53 dev01 in.tftpd[5893]: tftp: client does not accept options
      Mar 30 10:52:54 dev01 in.tftpd[5895]: tftp: client does not accept options

    This is what I get when it already has a "stored" IP:

      Mar 30 10:51:29 dev01 dhcpd: DHCPDISCOVER from 00:25:xx:xx:xx:xx via eth1
      Mar 30 10:51:30 dev01 dhcpd: DHCPOFFER on 10.168.222.45 to 00:25:xx:xx:xx:xx via eth1
      Mar 30 10:51:31 dev01 dhcpd: DHCPREQUEST for 10.0.0.61 (10.0.0.1) from 00:25:xx:xx:xx:xx via eth1: ignored (not authoritative).

    Do you have any suggestions? It would be much appreciated.
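
    The "ignored (not authoritative)" line suggests one concrete lever: an ISC dhcpd that is declared authoritative for the segment answers a DHCPREQUEST for a foreign address with a DHCPNAK, which pushes the Mac back to DHCPDISCOVER, where the Netboot pool can answer. A hedged sketch (note the trade-off: an authoritative local server may also NAK ordinary clients renewing their global 10.0.x.x leases on this segment):

      # /etc/dhcp/dhcpd.conf (local Netboot server)
      authoritative;

      subnet 10.168.0.0 netmask 255.255.0.0 {
          range 10.168.0.10 10.168.254.254;
          # existing vendor-class filtering for AAPLBSDPC/i386 and
          # PXEClient stays as-is
      }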


  • Sendmail in Nexenta core 3.0.1

    - by maximdim
    I'm trying to set up sendmail on Nexenta Core 3.0.1 (a Solaris-based OS). All I want is to be able to send emails from that host - notifications about failures, cron job output, etc. Nexenta Core doesn't ship sendmail, so here is what I've done:

      apt-get install sunwsndmu

    Now there is a sendmail in /usr/sbin/sendmail. When I try to send email from the command line:

      $ mail maxim
      test
      .

    It doesn't give me any error, but in the log file I see:

      Dec 20 12:41:08 nas sendmail[12295]: [ID 801593 mail.info] oBKHf8u7012295: from=maxim, size=107, class=0, nrcpts=1, msgid=<201012201741.oBKHf8u7012295@nas>, relay=maxim@localhost
      Dec 20 12:41:08 nas sendmail[12295]: [ID 801593 mail.info] oBKHf8u7012295: to=maxim, ctladdr=maxim (1000/10), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30107, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection refused by [127.0.0.1]

    So I guess I need to have an SMTP service running. How do I do that in Nexenta?

      svcs -a | grep sendmail

    doesn't return anything, and:

      # svcadm enable sendmail
      svcadm: Pattern 'sendmail' doesn't match any instances

    I'm not married to sendmail, so if there are easier ways to achieve my goal I'm open to suggestions as well. Thanks.
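
    In the absence of an SMF manifest for it, two hedged options: run the daemon directly, or skip the local queue and relay through a smarthost (the smarthost name below is a placeholder):

      # start sendmail as a listener/queue runner, e.g. from an init script
      /usr/sbin/sendmail -bd -q15m

      # or, in the sendmail .mc configuration, relay everything outbound:
      # define(`SMART_HOST', `mail.example.com')dnl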


  • Chmod 644 on /etc/ - any way to fix?

    - by DazSlayer
    I tried to tab-complete something and I guess it wasn't there. I know you are not supposed to set the permissions on /etc/ like that, but my permissions seem to be all messed up. whoami prints out "cannot find name for user ID 1002" and I cannot cd into /etc/ anymore. passwd and shadow use 640 and 644, so I am not sure why this is a problem. Regardless, is there any way to fix this? The command run was:

      sudo chmod 644 /etc/

      I have no name!@vpn-server:/$ whoami
      whoami: cannot find name for user ID 1002
      I have no name!@vpn-server:/$ cd etc
      bash: cd: etc: Permission denied
      I have no name!@vpn-server:/$ ls -al etc
      d????????? ? ? ? ? ? .
      d????????? ? ? ? ? ? ..
      d????????? ? ? ? ? ? acpi
      -????????? ? ? ? ? ? adduser.conf
      I have no name!@vpn-server:/$ sudo su
      sudo: can't open /etc/sudoers: Permission denied
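
    The underlying breakage: on a directory, mode 644 strips the execute (search) bit, so nothing can traverse /etc, and NSS can no longer read /etc/passwd (hence "cannot find name for user ID 1002"). With sudo broken too, the fix needs an existing root shell, single-user mode, or a rescue/live CD; from any of those it is one command (a sketch, assuming only /etc itself was changed, not its contents recursively):

      chmod 755 /etc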


  • What is the uninstall procedure for software installed via "make install" on CentOS 6.2?

    - by gkdsp
    I installed OCILIB on my CentOS 6.2 server some time ago, and now I want to install a newer version. The vendor requires an uninstall but doesn't provide instructions. I'm guessing that's because it's trivial for people with a Linux background. http://orclib.sourceforge.net/doc/html/group_g_install.html

    I installed this software using:

      step 1: # ./configure --with-oracle-headers-path=/usr/include/oracle/11.2/client64 --with-oracle-lib-path=/usr/lib/oracle/11.2/client64/lib
      step 2: # make
      step 3: # su root
      step 4: # make install
      step 5: # gcc -g -DOCI_IMPORT_LINKAGE -DOCI_CHARSET_ANSI -L/usr/lib/oracle/11.2/client64/lib -lclntsh -L/usr/local/lib -locilib conn.c -o conn

    How would I go about uninstalling this? I tried following http://www.cyberciti.biz/faq/delete-uninstall-software-linux-commands/ but nothing was found on my disk using rpm -qa *oci* or yum list *oci*. Maybe since it wasn't installed with yum or rpm I shouldn't expect either of these to find it. Are there generic instructions for uninstalling software on Linux, or do the instructions really depend on the specific software? Any help much appreciated.
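
    OCILIB builds with autotools, and autotools-generated Makefiles normally provide an uninstall target, so a hedged sketch (the version number and source path are placeholders) is to re-extract the same version, configure it with the same flags, and run:

      cd ocilib-x.y.z
      ./configure --with-oracle-headers-path=/usr/include/oracle/11.2/client64 \
                  --with-oracle-lib-path=/usr/lib/oracle/11.2/client64/lib
      make uninstall

      # failing that, list what a default-prefix install would have dropped:
      ls /usr/local/lib/libocilib* /usr/local/include/ocilib*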


  • Recommendation for Document Management Solution

    - by BillN
    We've just been informed by our software vendor that the custom document management system they'd written is no longer in development and will not be supported in the future. So we are looking at new document management systems. Requirements:

      Multiple input vectors: we receive documents via e-mail, fax, scanning, and from the originating application.
      Ability to redact or obscure data: customers may fax an order with credit card data; we want to attach the image of the order form to the order record, but the CC data needs to be protected. Same with Tax IDs. Certain users should be able to see the redacted data, but access should be logged.
      Version control on documents: we'd like Product Development and Marketing to be able to track various versions of documents like packaging designs, but ensure that users have the latest approved version.
      AD integration: my users don't need another password.
      Ability to integrate with other apps: our current system offers function keys in the order-entry system that spawn the viewer application and open the correct document.
      Mass import facility: we have half a terabyte of existing documents in the old system that we would like to import.
      Retention policy: I'd like the system to comply with the corporate retention policy, so that when a document of a certain type reaches a certain age it gets deleted, or at least marked for manual deletion.

    We are a Windows Server and HP-UX shop. Does anybody have experience with document management systems that they would like to share? Thanks.


  • To update or not to update?

    - by Massimo
    Since starting to work where I am working now, I've been in an endless struggle with my boss and coworkers in regard to updating systems. I of course totally agree that any update (be it firmware, O.S., or application) should not be applied carelessly as soon as it comes out, but I also firmly believe that there should be at least some reason the vendor released it; and the most common reason is usually fixing some bug... which maybe you're not experiencing now, but you could be experiencing soon if you don't keep up with them. This is especially true for security fixes; as an example, had anyone simply applied a patch that had already been available for months, the infamous SQL Slammer worm would have been harmless.

    I'm all for testing and evaluating updates before deploying them, but I strongly disagree with the "if it's not broken then don't touch it" approach to systems management, and it genuinely hurts me when I find production Windows 2003 SP1 or ESX 3.5 Update 2 systems, and the only answer I can get is "it's working, we don't want to break it".

    What do you think about this? What is your policy? And what is your company's policy, if it doesn't match your own?


  • Scheduled task does not run unattended on Windows 2003 server on VMware, runs fine otherwise

    - by lnm
    A scheduled task does not run on a Windows 2003 server on VMware; the same setup runs fine on a standalone server. The test below explains the problem. We really need to run a more complex bat file, but this shows the issue.

    I have a bat file that copies a file from server A to server B. I use full path names, no drive mapping. It runs fine on server B from a command prompt. I created a task that runs this bat file under a domain ID with password that is part of the Administrators group on both servers. The task runs fine from the Scheduled Tasks screen, and as a scheduled task as long as somebody is logged into the server. If nobody is logged in, the task does not run. There is no error message in the Task Scheduler log, just an entry that the task started, but no entry for the finish or an error code.

    To add insult to injury, if the task copies a file in the opposite direction, from server B to server A, it runs fine as a scheduled unattended task. If I copy a file from server B to server B, the task also runs fine unattended. I recreated exactly the same setup on a standalone server; no issues at all.

    I checked the obvious things: the task has "run only if logged in" unchecked, the domain ID has the "log on as a batch job" privilege and logon rights, and the Task Scheduler service runs as Local System with automatic start. Any suggestions?
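
    Since the Task Scheduler log only records the start, one cheap diagnostic is to make the bat file log its own outcome, so the unattended run leaves evidence either way (a hedged sketch; the paths and share names are placeholders):

      rem test.bat - capture everything the copy prints, plus the exit code
      copy \\serverA\share\testfile.txt \\serverB\share\ >> C:\tasklogs\copytest.log 2>&1
      echo exit code %ERRORLEVEL% >> C:\tasklogs\copytest.log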


  • Package upgrade on Ubuntu RAID server and grub setup issue

    - by RecNes
    I have a remote Ubuntu 10.10 server running on a RAID system. I did a package upgrade last night for security reasons. During the upgrade, the grub installation screen appeared and asked me which partition I wanted to install grub on. The options were sda, sdb, md1 and md2. I decided to install it on both the sda and sdb partitions. I'm wondering, did I make the right decision? If the machine gets rebooted, can it boot up safely? You can find the fdisk output and fstab mount points below.

    Fstab:

      proc      /proc     proc    defaults        0 0
      none      /dev/pts  devpts  gid=5,mode=620  0 0
      /dev/md0  none      swap    sw              0 0
      /dev/md1  /boot     ext3    defaults        0 0
      /dev/md2  /         ext3    defaults        0 0

    Fdisk:

      Disk /dev/sda: 750.2 GB, 750156374016 bytes
      255 heads, 63 sectors/track, 91201 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00029bb5

         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1               1         262     2102562   fd  Linux raid autodetect
      /dev/sda2             263         295      265072+  fd  Linux raid autodetect
      /dev/sda3             296       91201   730202445   fd  Linux raid autodetect

      Disk /dev/md0: 2152 MB, 2152923136 bytes
      2 heads, 4 sectors/track, 525616 cylinders
      Units = cylinders of 8 * 512 = 4096 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000

      Disk /dev/md0 doesn't contain a valid partition table

      Disk /dev/md1: 271 MB, 271319040 bytes
      2 heads, 4 sectors/track, 66240 cylinders
      Units = cylinders of 8 * 512 = 4096 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000

      Disk /dev/md1 doesn't contain a valid partition table

      Disk /dev/md2: 747.7 GB, 747727224832 bytes
      2 heads, 4 sectors/track, 182550592 cylinders
      Units = cylinders of 8 * 512 = 4096 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000

      Disk /dev/md2 doesn't contain a valid partition table

      Disk /dev/sdb: 750.2 GB, 750156374016 bytes
      255 heads, 63 sectors/track, 91201 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00088969

         Device Boot      Start         End      Blocks   Id  System
      /dev/sdb1               1         262     2102562   fd  Linux raid autodetect
      /dev/sdb2             263         295      265072+  fd  Linux raid autodetect
      /dev/sdb3             296       91201   730202445   fd  Linux raid autodetect
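
    Installing to both sda and sdb is the usual choice for a mirrored /boot: each physical disk gets grub in its MBR, so the box can still boot if one member dies. A hedged verification sketch for Ubuntu 10.10's grub-pc (worth running before any planned reboot):

      grub-install /dev/sda
      grub-install /dev/sdb
      update-grub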


  • Ubuntu 13.10 on Acer V5-472 with HD 4000

    - by Hyperboreus
    I have an Acer V5-472 with an Intel HD 4000 graphics chipset and a built-in 1366x768 display. I have installed Ubuntu 13.10 amd64 in legacy boot mode with an external monitor. Installation showed no problems; I can boot from the HDD and log into my system. The internal display doesn't work and I have to use an external monitor. I have tried the following (found in other threads) to no avail:

      Setting the grub option "acpi_backlight=vendor" or "acpi_osi=Linux" or both.
      Installing the Intel HD drivers for Linux from their homepage.
      Running in circles, screaming and shouting.

    The internal display lights up (I can change the brightness with Fn-Left and Fn-Right) but that's all. When I boot, I get a purple splash screen, and from then on only the external monitor works. I read somewhere that this might be a problem with kernel 3.11. Does anybody have Ubuntu running on an Acer V5-472? Should I change Ubuntu version or use 32-bit instead? In general, how can I get the internal display to work?

    Edit: The settings-display dialogue shows the internal display correctly, with a supported resolution of 1366x768.
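
    In case the boot options were only tried interactively, this is how they are made permanent, plus a quick check of whether X simply has the panel switched off (a hedged sketch; the output name eDP1 is typical for Intel HD graphics but should be read from xrandr's own output):

      # /etc/default/grub
      GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_backlight=vendor"

      sudo update-grub

      xrandr                        # look for eDP1/LVDS1 and its state
      xrandr --output eDP1 --auto   # ask X to light the internal panel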


  • Alternate way to connect a VPN through a MiFi

    - by questor
    This has become a major problem at our company, and depending on who I ask, the problem either does not really exist (according to the manufacturer and vendor) or is insoluble (according to most users, including techs who know how to prove their point). The problem involves getting a normal Windows 7 system to connect to a normal Server 2008 R2 server over a cellular router (usually called a MiFi). A very few brands/models appear to work, but the majority cannot make the connection. Since it is a cellular device, there are many variables that come into play, and I wondered if anyone had ever found a consistent way to either make one work or else prove to the providers that their equipment was at fault. They all specifically state "VPN use" in the sales brochures, but few if any work, and those that do are not reliable.

    From a standpoint of pure knowledge, I just wondered if anyone knew the real reason why they fail. PPTP, L2TP, IPsec - it doesn't matter. I have not tried Shrew or OpenVPN and am using strictly MS Windows protocols. Plenty of Google searches back up my complaints, but none seem any closer to knowing "why" they fail, just that they do. This is a "quest for knowledge" question; I don't expect a solution, just a reason for the problem, if anyone has any ideas.
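
    A partial technical answer hides in the protocol list: PPTP rides on GRE (IP protocol 47), which many cellular NATs will not pass, and IKE/IPsec needs UDP 500 plus NAT-T on UDP 4500. A hedged first-step check from behind the MiFi (the server name is a placeholder); if the control channel opens but the tunnel still dies, GRE is the prime suspect:

      rem PPTP's TCP control channel is port 1723
      telnet vpn.example.com 1723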


  • Mac OS X Client With Static DHCP Assignment Requests Wrong IP via Option 50

    - by Starchy
    I have a number of Mac (and a few Linux) laptops getting DHCP from a Force10 layer 3 switch, the only DHCP server on the subnet. There's a global dynamic pool, and for each full-time employee's laptop I have a single-IP static pool set by MAC address. One and only one of the clients, running OS X 10.7.5, consistently fails to get a static assignment. The MAC address in the static pool definition has been carefully re-checked.

    Running tcpdump on a mirrored port when the laptop connects, I see that it is specifically requesting 10.100.0.252 (a dynamic address):

      11:32:10.108280 IP (tos 0x0, ttl 255, id 28293, offset 0, flags [none], proto UDP (17), length 328)
          0.0.0.0.bootpc > broadcasthost.bootps: [udp sum ok] BOOTP/DHCP, Request from 3c:07:54:xx:xx:xx (oui Unknown), length 300, xid 0x1399da89, Flags [none] (0x0000)
            Client-Ethernet-Address 3c:07:54:xx:xx:xx (oui Unknown)
            Vendor-rfc1048 Extensions
              Magic Cookie 0x63825363
              DHCP-Message Option 53, length 1: Request
              Parameter-Request Option 55, length 9:
                Subnet-Mask, Default-Gateway, Domain-Name-Server, Domain-Name
                Option 119, LDAP, Option 252, Netbios-Name-Server
                Netbios-Node
              MSZ Option 57, length 2: 1500
              Client-ID Option 61, length 7: ether 3c:07:54:xx:xx:xx
              Requested-IP Option 50, length 4: 10.100.0.252
              Lease-Time Option 51, length 4: 7776000
              Hostname Option 12, length 10: "host-name"
              END Option 255, length 0
              PAD Option 0, length 0, occurs 8

    I haven't been able to find any extra system prefs or unusual software on the laptop. Disabling the interface and rebooting, or temporarily setting the IP manually, both fail to make any difference. Any suggestions appreciated.
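
    Option 50 here is the client replaying its last cached lease. On OS X the cache is a per-interface plist, so a hedged sketch for forcing a clean DHCPDISCOVER (the lease file name encodes the interface and MAC; read the actual name from the directory rather than trusting this pattern):

      sudo ls /var/db/dhcpclient/leases/
      sudo rm '/var/db/dhcpclient/leases/en0-1,3c:07:54:xx:xx:xx.plist'
      sudo ipconfig set en0 DHCP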


  • Mail server: can't connect via POP/IMAP

    - by MelkerOVan
    I've followed this guide on setting up a mail server on my dedicated server. I've been able to send mails from the PHP application I'm using and the Linux command line (using telnet, PHP, etc.). The problem is that I cannot connect to the server via IMAP/POP, which I've set up using Courier. I've tried using Thunderbird, but it complains that the username or password is wrong. I doubt it is the username/password, but I don't know how to troubleshoot this.

    Edit: Here are the messages in mail.log:

      Jan 9 22:43:38 mail authdaemond: received auth request, service=imap, authtype=login
      Jan 9 22:43:38 mail authdaemond: authmysql: trying this module
      Jan 9 22:43:38 mail authdaemond: SQL query: SELECT id, crypt, "", uid, gid, home, "", "", name, "" FROM users WHERE id = '[email protected]' AND (enabled=1)
      Jan 9 22:43:38 mail authdaemond: password matches successfully
      Jan 9 22:43:38 mail authdaemond: authmysql: sysusername=<null>, sysuserid=5000, sysgroupid=5000, homedir=/var/spool/mail/virtual, [email protected], fullname=peter, maildir=<null>, quota=<null>, options=<null>
      Jan 9 22:43:38 mail authdaemond: authmysql: clearpasswd=<null>, passwd=password
      Jan 9 22:43:38 mail authdaemond: Authenticated: sysusername=<null>, sysuserid=5000, sysgroupid=5000, homedir=/var/spool/mail/virtual, [email protected], fullname=peter, maildir=<null>, quota=<null>, options=<null>
      Jan 9 22:43:38 mail authdaemond: Authenticated: clearpasswd=peter, passwd=password
      Jan 9 22:43:38 mail imapd: chdir Maildir: No such file or directory
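
    The log itself is fairly pointed: authentication succeeds, then "chdir Maildir: No such file or directory". The login is fine, but there is no Maildir under the home directory authmysql hands back (and maildir=<null>, so Courier falls back to $HOME/Maildir). A hedged sketch using Courier's own tool, with ownership taken from the sysuserid/sysgroupid in the log; if several virtual users share this homedir, populating the maildir column in MySQL with a per-user path is the better fix:

      maildirmake /var/spool/mail/virtual/Maildir
      chown -R 5000:5000 /var/spool/mail/virtual/Maildir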


  • Service Unavailable - App Pool disabled error...anyone know why?

    - by andy
    Hi guys,

    A small number of our sites intermittently experience an error that is a mystery to us. Our setup is:

      Windows Server 2003
      IIS 6
      .NET 3.5

    We have about 35 websites running on IIS. Each site has its own app pool, and each app pool runs under the Network Service identity. The symptom is: the application pool for a site stops, and when you browse to the site, you only see:

      SERVICE UNAVAILABLE

    In the Event Logs, we have one error that looks like this:

      Application pool 'AppPool1' is being automatically disabled due to a series of failures in the process(es) serving that application pool.

    And we have several warnings leading up to the error, which look the same except for the process id:

      A process serving application pool 'AppPool1' suffered a fatal communication error with the World Wide Web Publishing Service. The process id was '292'. The data field contains the error number.

    What's causing this? As I said, it doesn't happen very often (the last time was about 6 months ago), and it usually happens with the same site/app pool. Any ideas? Cheers!
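
    The "automatically disabled" event is IIS 6's Rapid-Fail Protection reacting to a burst of w3wp.exe crashes, so the underlying question is why the worker process is dying; raising the thresholds only hides that. A hedged sketch of the metabase knobs involved (the app pool name is taken from the event; a debugger such as DebugDiag attached to that pool's w3wp.exe is the usual next step):

      cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs GET W3SVC/AppPools/AppPool1/RapidFailProtection
      cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs GET W3SVC/AppPools/AppPool1/RapidFailProtectionMaxCrashes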


  • Kerberos service on Win2k DC will not start following disk failure

    - by iwilson68
    I have a Win2k (mixed-mode) domain with 4 DCs. One of these also acts as an Exchange 2000 server, which uses 2 logical volumes from an MSA 2000 array. AD etc. is stored on local drives. We experienced a problem last week when the RAID array fell back to a redundant controller, and this temporarily meant that the two logical drives were not visible to the server for around 5 minutes and a couple of reboots. The log records these events as:

      Event Type:     Warning
      Event Source:   Disk
      Event Category: None
      Event ID:       51
      Date:           06/11/2009
      Time:           11:46:23
      User:           N/A
      Computer:       server1
      Description:    An error was detected on device \Device\Harddisk1\DR1 during a paging operation.

    Following these problems, the server's "Kerberos Key Distribution" service refuses to start with an error 31, "a device attached to the system is not functioning". All other automatic-start services (including Net Logon) are running and there are no DNS issues etc. All devices are also functioning, but the two logical MSA disks are now numbered in the Windows Disk Management MMC as 2 and 4, and I suspect that they may have previously been identified as disks 1 & 2; perhaps Windows still sees this as an ongoing failure?

    Replication has not been affected, but obviously there are many audit failures in the security log relating to users and workstations, presumably linked to the Kerberos issue. Attempting to manually start the Kerberos service generates the following in the System Log:

      Event Type:     Error
      Event Source:   Service Control Manager
      Event Category: None
      Event ID:       7023
      Date:           09/11/2009
      Time:           09:46:55
      User:           N/A
      Computer:       Server1
      Description:    The Kerberos Key Distribution Center service terminated with the following error: A device attached to the system is not functioning.

    DCDIAG passes all tests except "Advertising" and "Services", which I believe relate directly to the failure of Kerberos only. Any advice would be appreciated.
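
    A hedged first step is to re-run just the failing pieces verbosely and try the service by its short name, to confirm the KDC is the only thing wedged (standard Support Tools on a DC of this era):

      dcdiag /test:advertising /test:services /v
      net start kdc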


  • How does the LeftHand SAN perform in a Production environment?

    - by Keith Sirmons
    Howdy,

    I previously asked this ServerFault question: Does anyone have experience with LeftHand's VSA SAN? The general consensus there was that it does not perform well enough for a production SQL server, even at a light load. So the new question is: how does LeftHand's SAN perform on the HP or Dell dedicated hardware boxes?

    We are looking at the Starter SAN with 2 HP nodes in 2-way replication, and 2 ESX servers hosting a total of 2 Active Directory servers, 1 MS SQL server, 1 file server, and 1 general-purpose server for things like virus scanning (all Microsoft Server 2005 or 2008). The reason I am looking at LeftHand is the complete software package. I plan to have a DR site and like how the SAN can perform async replication to the offsite location without having to go back to the vendor for more licenses. I also like the redundancy built into the Network RAID architecture.

    I have looked at other SANs and found different faults with them. For example, Dell's EqualLogic: I found that although the individual box is very redundant in hardware, the data, once spanned across multiple boxes, is not redundant; if a node goes down, you have lost the only copy of the data sitting on that hardware (one thing is certain, all hardware fails... when? is the only question). I have also used a Xiotech SAN; well worth the money BTW, but I think it is overkill for the size of office I am targeting, and the cost to get the hardware redundancy in the Xiotech makes it a little out of reach for the budget I am working in.

    Thank you,
    Keith

