Search Results

Search found 10106 results on 405 pages for 'fail fast'.

  • Issues with Server 2012 using DFSR running on Hyper-V 2012

    - by Bryan
    We have a number of Server 2012 systems, all of which run virtualised on Hyper-V 2012 server. We are having problems with two such virtual instances, both of which are used as file servers, whereby they occasionally stop responding to requests to serve files to clients. After logging on to the server, attempts to shut it down gracefully fail (no error, it just fails to acknowledge a shutdown request). Recovery is a case of power cycling the server(s) from the Hyper-V console. These two servers don't serve a large number of users (one serves no more than 6 users, and the other serves around 20 users), they are in the same domain, but on different physical hardware (and at different sites). They don't lock up at the same time. They both use DFSR to replicate a fairly large amount of data between themselves (200GB) over ADSL connections; this is working fine, and we have been using DFSR to do this on the previous two generations of server OS we have used (Server 2008 R2 and Server 2003 - both of which were physical installs, however). Today, when one of the servers crashed, I noticed an entry in the event log, which looked similar to the following: Log Name: Application Source: ESENT Date: 27/11/2012 10:25:55 Event ID: 533 Task Category: General Level: Warning Keywords: Classic User: N/A Computer: HAL-FS-01.example.com Description: DFSRs (1500) \\.\E:\System Volume Information\DFSR\database_C8CC_101_CC00_EC0E\ dfsr.db: A request to write to the file "\\.\E:\System Volume Information\ DFSR\database_C8CC_101_CC00_EC0E\fsr.log" at offset 4423680 (0x0000000000438000) for 4096 (0x00001000) bytes has not completed for 36 second(s). This problem is likely due to faulty hardware. Please contact your hardware vendor for further assistance diagnosing the problem. When the server started up again, I went to find the event log entry to investigate further and found that the event log entry was no longer there (I assume it was in memory but failed to write to disk before the server was powered off, for the reason mentioned in the message). I found the above message by searching back further in the event log. Both of these virtual servers have their E: volumes fully allocated as opposed to dynamically expanding, and there are no other issues on any of the other virtual servers (which include Server 2012, Server 2008 R2 and Ubuntu 12.04 x64). There are no signs of IO, memory or CPU starvation on the host systems. I've used performance counters on the affected virtual servers to monitor memory usage (including non-paged pool usage), as well as CPU and network utilisation, and none of these show any signs of trouble when the issue arises. I would have thought our configuration isn't that uncommon, so I'm wondering if anyone else has seen this, and managed to resolve the problem?

    Read the article

  • Client A can ping server S, but client B cannot

    - by Soundar Rajan
    I moved the question here from stackoverflow.com http://stackoverflow.com/questions/2917569/unable-to-ping-server-from-client-b-but-able-to-ping-from-client-a-please-help I am trying to configure an IIS 6.0/Windows Server 2003 web server with an ASP.NET application. When I try to ping the server from client computer A I get the following: PING 74.208.192.xxx ==> Ping fails PING 74.208.192.xxx:80 ==> Ping succeeds! From client computer B, BOTH the pings fail. PING 74.208.192.xxx ==> Ping fails PING 74.208.192.xxx:80 ==> Ping fails with a message "Ping request could not find host 74.208.192.xxx:80" Both clients A and B are on the same subnet. The server is outside (a virtual server hosted by an ISP). I have an ASP.NET application in a virtual directory on the server. In IE or Firefox, if I enter http://74.208.192.xxx/subdir/subdir/../Default.aspx, it works from both the clients! The server has default firewall settings but web server enabled (Port 80 is open). From client A (note the different 'reply to' address when the ping succeeds). C:\Program Files\Microsoft Visual Studio 9.0\VC>ping 74.208.192.xx Pinging 74.208.192.xx with 32 bytes of data: Request timed out. ... Request timed out. Ping statistics for 74.208.192.xx: Packets: Sent = 4, Received = 0, Lost = 4 (100% loss), C:\Program Files\Microsoft Visual Studio 9.0\VC>ping 74.208.192.xx:80 Pinging 74.208.192.xx:80 [208.67.216.xxx] with 32 bytes of data: Reply from 208.67.216.xxx: bytes=32 time=35ms TTL=54 ... Reply from 208.67.216.xxx: bytes=32 time=33ms TTL=54 Ping statistics for 208.67.216.xxx: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 32ms, Maximum = 54ms, Average = 38ms From client B C:\Documents and Settings\user>ping 74.208.192.81 Pinging 74.208.192.81 with 32 bytes of data: Request timed out. ... Request timed out. Ping statistics for 74.208.192.81: Packets: Sent = 4, Received = 0, Lost = 4 (100% loss), C:\Documents and Settings\user>ping 74.208.192.81:80 Ping request could not find host 74.208.192.81:80. Please check the name and try again. My main problem is I have a web service (asmx) file and the web service client program is not able to access it from client B, but able to access it from client A. I am trying to find out why and thought this ping issue may shed some light. I can ping yahoo.com from both computers.

    Read the article

  • Guests can't access KVM host server by name although nslookup and dig returns correct record

    - by user190196
    So I have a KVM host that also runs an Apache server with some yum repos. The VM guests are connected to the default virtual network, which is configured to offer DHCP and forwarding with NAT on virbr0 (192.168.122.1). The guests can successfully access the yum repos on the host by IP address, so for example curl 192.168.122.1/repo1 returns the content without problems. But I'd like to have the guests be able to reach the web server on the host by name rather than by IP address. I added the desired name record to the host's /etc/hosts file and libvirt's dnsmasq service seems to be serving that correctly to the guests since nslookup and dig successfully resolve the name on the guests: [root@localhost ~]# nslookup repo Server: 192.168.122.1 Address: 192.168.122.1#53 Name: repo Address: 192.168.122.1 [root@localhost ~]# dig repo ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6 <<>> repo ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 55938 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;repo. IN A ;; ANSWER SECTION: repo. 0 IN A 192.168.122.1 ;; Query time: 0 msec ;; SERVER: 192.168.122.1#53(192.168.122.1) ;; WHEN: Tue Sep 17 02:10:46 2013 ;; MSG SIZE rcvd: 38 But curl/ping/etc still fail: [root@localhost ~]# curl repo curl: (6) Couldn't resolve host 'repo' While a request via IP address works: [root@localhost ~]# curl 192.168.122.1 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"> <html> <head> <title>Index of /</title> [...] Same with ping: [root@localhost ~]# ping repo ping: unknown host repo [root@localhost ~]# ping 192.168.122.1 PING 192.168.122.1 (192.168.122.1) 56(84) bytes of data. 64 bytes from 192.168.122.1: icmp_seq=1 ttl=64 time=0.110 ms 64 bytes from 192.168.122.1: icmp_seq=2 ttl=64 time=0.146 ms 64 bytes from 192.168.122.1: icmp_seq=3 ttl=64 time=0.191 ms ^C --- 192.168.122.1 ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2298ms rtt min/avg/max/mdev = 0.110/0.149/0.191/0.033 ms I tried adding repo 192.168.122.1 to the guests' /etc/hosts files but still no dice. Also tried changing guests' /etc/nsswitch.conf with both: hosts: files dns and hosts: dns files I've read the relevant libvirt documentation and I'm not sure where else to learn more about this and be able to move forward with it.
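
    One thing worth checking here (a diagnostic sketch, not a confirmed fix) is that curl and ping resolve names through the libc resolver and /etc/nsswitch.conf, while nslookup and dig query the DNS server directly, so the two can disagree. getent shows what the libc resolver actually returns, and a static /etc/hosts workaround needs the address first, then the name:

        # How the guest's libc resolver (used by curl/ping) sees the name:
        getent hosts repo
        grep '^hosts:' /etc/nsswitch.conf   # confirm dns is listed
        cat /etc/resolv.conf                # confirm 192.168.122.1 is the nameserver

        # If a static workaround is acceptable, /etc/hosts expects "address name":
        echo "192.168.122.1   repo" >> /etc/hosts
        getent hosts repo                   # should now return 192.168.122.1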

    Read the article

  • Cannot login to SQL Server 2008 R2 with Windows authentication

    - by Ian Boyd
    When I try to connect to SQL Server (2008 R2) using Windows authentication: I cannot: Checking the Windows Application event log, I find the error: Login failed for user 'AVATOPIA\ian'. Reason: Token-based server access validation failed with an infrastructure error. Check for previous errors. [CLIENT: ] Log Name: Application Source: MSSQLSERVER Event ID: 18456 Level: Information User: AVATOPIA\ian OpCode: Task Category: Logon I can log in to the computer itself using Windows authentication. I can log into SQL Server using the local Windows Administrator account. We can connect to 8 other SQL Servers on the domain using Windows Authentication. Just this one, which is the only one that is 2008 R2, is failing. So I assume it's a bug with 2008 R2. Note: I cannot log on locally, or remotely, using Windows authentication. I can log in locally and remotely using SQL Server Authentication. Update Note: It's not limited to SQL Server Management Studio, standalone applications that connect using Windows authentication fail: Note: It's not a client problem, as we can connect fine to other (non-SQL Server 2008 R2) machines: I'm sure there's a technote or knowledge base article describing why SQL Server 2008 R2 is broken by default, but I can't find it. Update 2 Matt figured out the change that Microsoft made so that SQL Server 2008 R2 is broken by default: Administrators are no longer administrators. All that remains is to figure out how to make Administrators administrators. One of these days I'm going to start a list of changes around Microsoft's "broken by default" initiative. Steps to reproduce the problem How do I add a group to the sysadmin fixed server role? Here are the steps I try, that don't work: Click Add: Click Object Types: Ensure that you have no ability to add groups: and click OK. Under Enter the object names to select, enter Administrators: Click Check Names, and ensure that you are not allowed to add groups: and click Cancel. Click Browse..., and ensure that you have no ability to add groups: You should now still not have added any group to the sysadmin role. Additional information SQL Server Management Studio is being run as an administrator: SQL Server is set to use Windows Authentication: tried while logged into SQL with both sa and the only other sysadmin domain account (screenshot can be supplied for those who don't believe)

    Read the article

  • Need to increase nginx throughput to an upstream unix socket -- linux kernel tuning?

    - by Ben Lee
    I am running an nginx server that acts as a proxy to an upstream unix socket, like this: upstream app_server { server unix:/tmp/app.sock fail_timeout=0; } server { listen ###.###.###.###; server_name whatever.server; root /web/root; try_files $uri @app; location @app { proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header Host $http_host; proxy_redirect off; proxy_pass http://app_server; } } Some app server processes, in turn, pull requests off /tmp/app.sock as they become available. The particular app server in use here is Unicorn, but I don't think that's relevant to this question. The issue is, it just seems that past a certain amount of load, nginx can't get requests through the socket at a fast enough rate. It doesn't matter how many app server processes I set up, it doesn't even matter what the app is (tried it with a dummy app with just a single endpoint that returned an empty page with status 404). The bottleneck seems to be the socket, not the app. I'm getting a flood of these messages in the nginx error log: connect() to unix:/tmp/app.sock failed (11: Resource temporarily unavailable) while connecting to upstream Many requests result in status code 502, and those that don't take a long time to complete. The nginx write queue stat hovers around 1000. Anyway, I feel like I'm missing something obvious here, because this particular configuration of nginx and app server is pretty common, especially with Unicorn (it's the recommended method in fact). Are there any Linux kernel options that need to be set, or something in nginx? Any ideas about how to increase the throughput to the upstream socket? Something that I'm clearly doing wrong? Additional information on the environment: $ uname -a Linux app1 3.2.0-24-generic #39-Ubuntu SMP Mon May 21 16:52:17 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux $ ruby -v ruby 1.9.3p194 (2012-04-20 revision 35410) [x86_64-linux] $ unicorn -v unicorn v4.3.1 $ nginx -V nginx version: nginx/1.2.1 built by gcc 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) TLS SNI support enabled Current kernel tweaks: net.core.rmem_default = 65536 net.core.wmem_default = 65536 net.core.rmem_max = 16777216 net.core.wmem_max = 16777216 net.ipv4.tcp_rmem = 4096 87380 16777216 net.ipv4.tcp_wmem = 4096 65536 16777216 net.ipv4.tcp_mem = 16777216 16777216 16777216 net.ipv4.tcp_window_scaling = 1 net.ipv4.route.flush = 1 net.ipv4.tcp_no_metrics_save = 1 net.ipv4.tcp_moderate_rcvbuf = 1 net.core.somaxconn = 8192 net.netfilter.nf_conntrack_max = 131072
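
    For what it's worth, the EAGAIN (11: Resource temporarily unavailable) on connect() usually points at a full accept queue on the socket rather than at nginx itself, and the backlog actually used is whatever the app server passes to listen(), so the kernel ceiling alone may not be enough. A rough checklist (the log path is an assumption):

        # Kernel-side ceiling for listen() backlogs (already raised to 8192 above):
        sysctl net.core.somaxconn

        # Watch how often nginx hits the full queue while load is applied:
        tail -f /var/log/nginx/error.log | grep --line-buffered 'Resource temporarily unavailable'

        # The app server's own backlog normally has to be raised as well; for
        # Unicorn that is the :backlog option on the "listen" line in its config
        # (mentioned here only as a reminder, not as nginx/kernel configuration).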

    Read the article

  • HT Link Sync Error after Ubuntu 10.04 LTS Installation

    - by marklab
    Update 1 I just assembled an exact replica of this server, and successfully installed Ubuntu 10.04 LTS in a RAID10 configuration. The success was confirmed by a login to the initial account. There must be a hardware component that is faulty. Since the error mentions HT, which I believe to be Hyper Threading, I will start with the CPUs. Please indicate if this error is more strongly associated with any other piece of hardware. Or make a recommendation of another approach that would be good for this issue. Issue I was attempting to install Ubuntu 10.04 LTS on this system with the board RAID10 configured. However, the installation failed at the partitioning stage by rebooting the system. Upon reboot, there is an error report after POST listing the following: Node0: NB WatchDog Timer Error Node1: HT Link Sync Error Node2: HT Link Sync Error ... Node7: HT Link Sync Error Press F1 to continue/resume. After pressing F1 the system will boot from the Ubuntu 10.04 LTS installation disc. However, it will fail at the same stage, and go through the same process from there. Hardware CPU: AMD OPTERON X12 6172 G34 2.1G 18MB Motherboard: Supermicro H8QG6-F HDD: WD Caviar Green 2TB 5.4K RPM Troubleshooting I disabled RAID10 on the system, and installed the Ubuntu on a single drive. It installed successfully. I then went back to a RAID10 setup and attempted to install on the system again, and was able to make it through the partitioning stage. However, upon reboot, the system reported: Error: file not found, and then booted me into the Grub Rescue console. I feel I have aggravated the problem at this point because when I attempted to install from the boot disc again, the system reboots upon hitting enter to even start the installation process. It does the same thing when trying to boot from an Ubuntu 11 disc. I have not been able to find any information on this HT Link Sync Error, which I feel may have started the problems I am experiencing now with the installation of the OS. I am also aware that Ubuntu is said not to be supported by the motherboard according to Supermicro's site. However, since I was able to install it successfully on a single drive, I do not believe it is incompatible. I would like to know a reason for why it's failing to install on/off.

    Read the article

  • DPM 2010 "Disk failed or disk not found"

    - by SysAdmin
    I have an HP Proliant ML110 G5 server with Windows Server 2008 R2, dedicated solely to DPM 2010. This server has a hard-drive capacity limit of 8TB, which has already been reached. I'm now stuck in this situation where my disk keeps failing with "Disk failed or Disk not found" in Disk Management. Only after I reboot the system does the disk come back up. Today I was running my monthly tape backup on a certain protection group and the disk failed again while the tape job was running (so the job wasn't completed). This is the description of the error in the alerts: "The disk Disk 1 - Hitachi HDS722020ALA330 SCSI Disk Device cannot be detected or has stopped responding. All subsequent protection activities that use this disk will fail until the disk is brought back online. (ID 3120)". My backup system is becoming useless! I don't think it is a hardware issue (please correct me if I'm wrong) since the HD works fine for a certain period of time, which is becoming shorter and shorter. I basically have no more options to fix this problem. I tried to fix any error that was coming up in the event viewer with no luck (including one regarding the SQL 2008 compatibility issue). The disk keeps failing! Now I'm only trying to recover/migrate the data from the disk that is having problems, but my issue now is that I cannot add any drives to my server since the maximum storage capacity of 8TB is already installed. I thought about 2 simple options. Please tell me what you guys think about them: Unplug one of the 2 storage pool disks (disk0, the one without problems) from the machine and install a new one in order to migrate the data with the Migration tool for DPM. Remove the defective disk (disk1), put back disk0 and run the synchronization/consistency check on all the groups to recreate replicas and recovery points. Run diskpart.exe and clean up the disk (losing all data) and hope that it will work after I sync all the protection groups. Neither solution is elegant but I have no better options at the moment. Please, I need some help. Thanks for your time Angelo

    Read the article

  • OCZ Vertex 2 not recognized by Ubuntu installer

    - by Zsub
    As I boot into the Ubuntu 10.10 (or 11.04, doesn't matter) live environment or installer, it just refuses to recognise my Vertex 2. It reports the disk as ATA and not supporting SMART, shows no serial number, and doesn't list the size correctly. All fdisk tells me is Unable to read /dev/sda (it's the only storage in the PC). I'm now running a temporary install of Windows 7 off of it, which worked like a charm, so where am I going wrong with Ubuntu... Specs: Asus M4N68T-M LE V2 (BIOS 0702, most recent) OCZ Vertex 2 SSD 60 GB AMD Athlon II X4 640 Patriot PSD34G13332 4GB DDR3 RAM (two banks) EDIT I installed a second drive, installed Ubuntu on that and booted, it recognised the SSD just fine. I'm now trying to apt-get upgrade the live-environment. I wonder if there is any way to sort of install Ubuntu from Ubuntu (I boot into the working install on the other drive, install it on the SSD and then boot from the SSD). EDIT2 Ok, so that doesn't work. The install detects the SSD, however, it cannot format it. EDIT3 After a fresh boot I can read out SMART-data and even perform a read-benchmark, but if I try to format it, or do a write-bench, it'll crap out and after that it says SMART is not supported. So basically it seems I can't write to the disk, as it will stop working when I do, I will try to run repeated read-benchmarks to see if that has any effect. EDIT4 I'm running several read benchmarks on the drive right now, they give results that are to be expected from an SSD. If the read-benches don't fail, I can use fdisk on the disk, but it is now stuck trying to re-read the partition table after issuing the 'w' command. EDIT5 Parted Magic did recognize the drive and with hdparm -I even could tell me the drive was in a frozen state. I powercycled it (just pull out the plug from the SSD and plug it back in) and it wasn't frozen anymore. After that I could upgrade the firmware on the drive (still using Parted Magic) and format it to Ext4. After I rebooted into the Ubuntu installer, it wouldn't get recognized and hdparm didn't want to talk to it saying HDIO_DRIVE_CMD(identify) failed: Invalid exchange. EDIT6 For some reason if I enable one of the RAID controllers (the one the SSD is connected to, obviously) Ubuntu will let me format it, mount it and write to it. The installer also recognizes it. However if the RAID controller is enabled but no array is defined the motherboard can't boot from it :(
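
    Since EDIT5 already showed the drive going into a frozen security state, one quick thing to check from the live environment (assuming the SSD is still /dev/sda) is whether it is frozen again before trying to partition or format it:

        # Security section of the identify data; look for "frozen" vs "not frozen":
        sudo hdparm -I /dev/sda | grep -i frozen

        # If it answers at all, basic identity and SMART output for comparison:
        sudo hdparm -I /dev/sda | head -n 20
        sudo smartctl -a /dev/sda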

    Read the article

  • OpenBSD pf 'match in all scrub (no-df)' causes HTTPS to be unreachable on mobile network

    - by Frank ter V.
    First of all: excuse me for my poor usage of the English language. For several years I have been experiencing problems with the 'match in all scrub (no-df)' rule in pf. I can't find out what's happening here. I'll try to be clear and simple. The pf.conf has been extremely shortened for this forum posting. Here is my pf.conf: set skip on lo0 match in all scrub (no-df) block all block in quick from urpf-failed pass in on em0 proto tcp from any to 213.125.xxx.xxx port 80 synproxy state pass in on em0 proto tcp from any to 213.125.xxx.xxx port 443 synproxy state pass out on em0 from 213.125.xxx.xxx to any modulate state HTTP and HTTPS are working fine. Until the moment a customer in France (Wanadoo DSL) couldn't view HTTPS pages! I blamed his provider and did no investigation on that problem. But then... I bought an Android Samsung Galaxy SII (Vodafone) to monitor my servers. Hours after I walked out of the telephone store: no HTTPS connections to my server! I thought my servers were down, drove back to the office very fast. But they were up. I discovered that disabling the rule match in all scrub (no-df) solves the problem. Android phone (Vodafone NL) and Wanadoo DSL FR are now OK on HTTPS. But now I don't have any scrubbing anymore. This is not what I want. Does anyone here understand what is going on? I don't. Enabling scrubbing causes HTTPS webpages not to be loaded on SOME ISPs, but not all. In systat, I strangely DO see a state created and packets received from those ISPs... Still confused. I'm using OpenBSD 5.1/amd64 and OpenBSD 5.0/i386. I have two ISPs at my office (one DSL and one cable). Affects both. This can be reproduced quite easily. I hope someone has experience with this problem. Greetings, Frank
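
    One variant that is sometimes suggested for exactly this symptom - kept here as an assumption to test, not a confirmed cause - is to leave scrubbing on but add random-id (and optionally an MSS clamp) next to no-df, since packets whose DF bit has been cleared can otherwise leave with an IP ID of zero and upset some carrier NATs. It can be tried without committing to it:

        # Candidate rule (edit /etc/pf.conf):
        #   match in all scrub (no-df random-id max-mss 1440)
        pfctl -nf /etc/pf.conf    # parse/syntax-check only
        pfctl -f /etc/pf.conf     # load the ruleset
        pfctl -s rules            # confirm what is active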

    Read the article

  • What do you use to store all of your personal data?

    - by codeflunky
    I have been on a quest for years to find the perfect tool to store all "my stuff". You know... personal information, code snippets, software keys, people's birthdays, whatever. There are lots of tools out there for this sort of thing, but I've never found any of them quite what I need. Ideally, I would just be able to type some notes, tag them (I don't like the idea of folder organization... too cumbersome) and then easily search and retrieve what I need later. It seems so simple, but for some reason I just can't find it. I currently use Backpack (sometimes), which is OK, but I hate the fact that you always have to create "pages" to store things. I don't want to have to do that. I want to just type some notes, tag it and save. That's it. And Backpack didn't even have search for a long time. What I do like about Backpack is that it's fast and it's web based. I've tried some desktop apps, which probably came closer to the functionality I want, but I just hate being tied to a single machine. I want to be able to get to my stuff anywhere, so the web based thing is a definite requirement. Anyway, I'm thinking about writing my own thing for this if I can't find anything, but before I make the attempt, I was wondering if anyone has any suggestions? I've used Backpack, Zoho Planner, Stikkit and Google Notes so far, and they are not quite to my liking. Anyone? (Sorry if this is off-topic, but I figured you guys might be legitimately into this kind of thing... you know, storing code snippets and such.) UPDATE: I've been using Evernote for a few days, and it is exactly what I've been looking for. It is totally tag based and allows both online and offline usage. The desktop app sits in your system tray and allows you to add whatever you want on the fly either as text notes or clippings from the browser. It also syncs it to the web (if you want) where you can get to it from anywhere using their web client. They even have a mobile client which I haven't used, but I will try it soon. Thanks again 18hrs. I wish I could give you 10 upvotes.

    Read the article

  • Deleting windows.edb and unchecking Indexing service lead to hard drive file records swapping

    - by linni
    I followed the instructions listed here: http://www.mydigitallife.info/2007/09/18/turn-off-and-disable-search-indexing-service-in-windows-xp/ to free up space on the hard drive by deleting the windows.edb indexing file... I also stopped the Windows Search service as mentioned in the comments following the article. In addition to unchecking the "Allow Indexing Service to index this disk for fast file searching" check box on the properties dialog for the C:\ drive, I did the same for two USB-connected hard drives (J:\ and I:\ ). I'm not sure why I did that, thought it might shrink the windows.edb file so I wouldn't have to delete it (which sounded a bit risky in my ears at the time). The file of course didn't shrink so I ended up deleting it and freeing up over 3 GB of space, yeehaw. However, as soon as I had done this I could not access the USB-connected hard drives anymore. The error I got was "I:\photos is not accessible" "The file or directory is corrupted and unreadable" when I tried to open the photos directory on I:\ Here is where I enter the twilight zone... I try disconnecting the I:\ USB hard drive. But XP shows me that instead the J:\ drive has disconnected and I:\ is still there. So I disconnect both drives and restart the computer. I then connect one drive, but it lists up the contents of the other drive on root level. I tried connecting the drives vice versa and the same thing happens. I try taking one of the hard drives to another computer and when I connect it there it lists up not its own contents but the contents of the other hard drive and gives the same error as above when I try and access any of the folders (even folders on the root that have the same name as folders on the other drive (e.g. J:\photos and I:\photos)??? And no, this is not me mixing up my drive letters. Computer Management - Disk Management shows the same result as Explorer: The drive size is correct (one is 500GB, the other is 640GB) but the drive name is that of the opposite drive, as well as the contents. Also, one drive was full of data and the other almost empty but they incorrectly show the free space status of the other drive. Somehow the USB drives seem to have switched file tables, file records, boot records or something, extremely weird! Even weirder, if I try and create a text file or folder on this drive, it works fine, accessing them, saving, whatever, all good, but accessing any other data on the drive gives me an error. Does anyone have a clue what is going on and more importantly, how I can restore the correct folder listings to access my family photos ??? cheers, linni

    Read the article

  • How to grow to be global sysadmin of an organization?

    - by user64729
    Bit of a non-technical question but I have seen questions of the career development type on here before so hopefully it is fine. I work for a fast growing but still small organization (~65 employees). I have been their external sysadmin for a while now, looking after hosted Linux servers and infrastructure. In the past 12 months I have been transforming into the internal sysadmin for our office too. I'm currently studying Cisco CCNA to cover the demands of being an internal sysadmin and looking after the office LAN, routers, switches and VPNs. Now they want me to look after the global sysadmin function of the organization as a whole. The organization has 3 offices in total, 2 in the UK and 1 in the US. I work in one of the UK offices. The other offices are primarily Windows desktop shops with AD domains. My office is primarily a Linux shop with a file-server and NFS/NIS (no AD domain for the Windows desktops yet but it's in the works). Each of the other offices has a sysadmin whom in theory I am supposed to supervise but in reality each is independent. I have a very competent junior sysadmin working with me who shares the day-to-day tasks and does some of the longer term projects with my supervision. My boss has asked me how to grow from being the external sysadmin to the global sysadmin. I am to ponder this and then report back to him on how to achieve this. My current thoughts are: Management training or professional development - e.g. reading books such as "Influencer" and "7 Habits". Also I feel I should take steps to improve my communication skills since a senior person is expected to talk and speak out more often. Learn more about Windows and Active Directory - I'm an LPI-certified guy and have a lot of experience in Linux (Ubuntu on the desktop, Debian/Ubuntu as server). Since the other offices are mainly Windows domains it makes sense to skill-up in that area so I can understand what the other admins are talking about. Talk to previous colleagues who are in this role already - to try and get the benefit of their experience. Produce an "IT Roadmap" or similar that maps out where we want the organization to be and when, plotted out over the next couple of years with regards to internal and external infrastructure. I have produced a "Security roadmap" already which does cover some of these things. I guess this can be summed up as "thinking more strategically"? I'd appreciate comments from anyone who has been through a similar situation, thanks.

    Read the article

  • Detecting upload success/failure in a scripted command-line SFTP session?

    - by Will Martin
    I am writing a BASH shell script to upload all the files in a directory to a remote server and then delete them. It'll run every few hours via a CRON job. My complete script is below. The basic problem is that the part that's supposed to figure out whether the file uploaded successfully or not doesn't work. The SFTP command's exit status is always "0" regardless of whether the upload actually succeeded or not. How can I figure out whether a file uploaded correctly or not so that I can know whether to delete it or let it be? #!/bin/bash # First, save the folder path containing the files. FILES=/home/bob/theses/* # Initialize a blank variable to hold messages. MESSAGES="" ERRORS="" # These are for notifications of file totals. COUNT=0 ERRORCOUNT=0 # Loop through the files. for f in $FILES do # Get the base filename BASE=`basename $f` # Build the SFTP command. Note space in folder name. CMD='cd "Destination Folder"\n' CMD="${CMD}put ${f}\nquit\n" # Execute it. echo -e $CMD | sftp -oIdentityFile /home/bob/.ssh/id_rsa [email protected] # On success, make a note, then delete the local copy of the file. if [ $? == "0" ]; then MESSAGES="${MESSAGES}\tNew file: ${BASE}\n" (( COUNT=$COUNT+1 )) # Next line commented out for ease of testing #rm $f fi # On failure, add an error message. if [ $? != "0" ]; then ERRORS="${ERRORS}\tFailed to upload file ${BASE}\n" (( ERRORCOUNT=$ERRORCOUNT+1 )) fi done SUBJECT="New Theses" BODY="There were ${COUNT} files and ${ERRORCOUNT} errors in the latest batch.\n\n" if [ "$MESSAGES" != "" ]; then BODY="${BODY}New files:\n\n${MESSAGES}\n\n" fi if [ "$ERRORS" != "" ]; then BODY="${BODY}Problem files:\n\n${ERRORS}" fi # Send a notification. echo -e $BODY | mail -s $SUBJECT [email protected] Due to some operational considerations that make my head hurt, I cannot use SCP. The remote server is using WinSSHD on windows, and does not have EXEC privileges, so any SCP commands fail with the message "Exec request failed on channel 0". The uploading therefore has to be done via the interactive SFTP command.
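
    One detail in the script above is that the second test reads $? after the first if has already run, so it never sees the sftp status; and sftp in interactive mode tends to exit 0 even when a put fails. A sketch of an alternative (paths and the remote address are taken from the script above) is to use a batch file, since sftp -b aborts and exits non-zero when a command such as put fails, and to save the exit status immediately:

        # Build a batch file so a failed "put" makes sftp exit non-zero.
        BATCH=$(mktemp)
        printf 'cd "Destination Folder"\nput %s\nquit\n' "$f" > "$BATCH"

        sftp -b "$BATCH" -oIdentityFile /home/bob/.ssh/id_rsa [email protected]
        STATUS=$?            # capture it before any other command overwrites $?
        rm -f "$BATCH"

        if [ "$STATUS" -eq 0 ]; then
            MESSAGES="${MESSAGES}\tNew file: ${BASE}\n"
            (( COUNT=COUNT+1 ))
            # rm "$f"        # re-enable once testing is done, as in the original
        else
            ERRORS="${ERRORS}\tFailed to upload file ${BASE}\n"
            (( ERRORCOUNT=ERRORCOUNT+1 ))
        fi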

    Read the article

  • Linux Mint 14 severe overheating

    - by agvares
    I've tried switching to Mint (it was Mint 12) about 6 months ago, but failed to do it due to enormous overheating which made it virtually impossible to run any applications more complex than Gedit. Time had passed, and now I'm back again with Mint 14, but feeling way more determined. What I face are the following issues: 1) Great overheating (and by great I mean that just by doing nothing my CPU temp is floating around 70-75 C, which I find a lot) 2) Running multiple applications (let's say Chrome, Skype and Pidgin) results in critical overheating and immediate shutdown of the system 3) Due to the stuff listed above, my battery drains in about 10-15 minutes, pretty much turning my laptop into a crippled desktop machine I've got a HP-dv6 laptop (i7, 6gb RAM, dual graphics) Here's the output of lspci: 00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09) 00:01.0 PCI bridge: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port (rev 09) 00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) 00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04) 00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 05) 00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 05) 00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b5) 00:1c.1 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 2 (rev b5) 00:1c.2 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 3 (rev b5) 00:1c.3 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 4 (rev b5) 00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 05) 00:1f.0 ISA bridge: Intel Corporation HM65 Express Chipset Family LPC Controller (rev 05) 00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller (rev 05) 00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 05) 01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI Whistler XT [AMD Radeon HD 6700M Series] 07:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 06) 0d:00.0 Network controller: Broadcom Corporation BCM4313 802.11b/g/n Wireless LAN Controller (rev 01) 13:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. RTS5209 PCI Express Card Reader (rev 01) 19:00.0 USB controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 04) what i've tried to do already: 1) I've edited my grub file to add some "splash" arguments there 2) Installed jupiter and powertop 3) Tried to upgrade to newer kernels (up to 3.8), btw running anything newer than 3.5 results in both resolution and wi-fi detection fail 4) Read lots of forum threads devoted to the topic So, my question is simple: What else might I do to become finally able to use Mint as my default OS without the risk of being burned alive by the CPU heat?
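
    On dual-graphics laptops like this dv6, one frequently reported cause of heat and battery drain under Linux is the discrete Radeon staying powered on next to the integrated GPU; whether that applies here is an assumption, but it is cheap to check (the vgaswitcheroo path requires the radeon driver and debugfs, and may not exist on every kernel):

        # Temperature/fan readings:
        sudo apt-get install lm-sensors
        sudo sensors-detect          # answer the prompts, then:
        sensors

        # Is the discrete GPU powered on?
        sudo cat /sys/kernel/debug/vgaswitcheroo/switch

        # Power the unused discrete GPU off for this session:
        echo OFF | sudo tee /sys/kernel/debug/vgaswitcheroo/switch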

    Read the article

  • SQL Server High Availability - Mirroring with MSCS?

    - by David
    I'm looking at options for high-availability for my SQL Server-powered application. The requirements are: HA protection from storage failure. Data accessibility when one of the DB servers is undergoing software updates (e.g. planned outage for Windows Update / SQL Server service-packs). Must not involve much in the way of hardware procurement. The application is an ASP.NET web application. The web application's users have their own database instances. I've seen two main options: SQL Server failover clustering, and SQL Server mirroring. I understand that SQL Server Failover Clustering requires the purchasing of a shared disk array and doesn't offer any protection if the shared storage goes down (so the documentation recommends setting up mirroring between two clusters). Database Mirroring seems the cheaper option (as it only requires two database servers and a simple witness box) - but I've heard it doesn't work well when you have a large number of databases. The application I'm developing involves giving each client their own database for their application - there could be hundreds of databases. Setting up the mirroring is no problem thanks to the automation systems we have in place. My final point concerns how failover works with respect to client connections - SQL Server Failover Clustering uses MSCS which means that the cluster is invisible to clients - a connection attempt might fail during the failover, but a simple reconnect will have it working again. However mirroring, as far as I know, requires that the client be aware of the mirrored partners: if the client cannot connect to the primary server then it tries the secondary server. I'm wondering how this works with respect to Connection Pooling in ASP.NET applications - does the client connection failover mean that there's a potential 2-second (assuming 2000ms TCP timeout policy) pause when the connection pool tries the primary server on every connection attempt? I read somewhere that Mirroring can be used on top of MSCS which means that the client does not need to be aware of mirroring (so there wouldn't be any potential delays during connection, and also that no changes would need to be made to the client, not even the connection string) - however I'm finding it hard to get documentation or white papers on this approach. But if true, then it means the best method is Mirroring (for HA) with MSCS (for client ignorance and connection performance). ...but how does this scale to a server instance that might contain hundreds of mirrored databases?

    Read the article

  • Using smartctl to get vendor specific Attributes from ssd drive behind a SmartArray P410 controller

    - by Lairsdragon
    Hi! Recently I have deployed some HP servers with SSDs behind a SmartArray P410 controller. While not officially supported by HP, the servers work well so far. Now I'd like to get wear-level info, error statistics, etc. from the drives. While the SA P410 supports a passthrough of the SMART command to a single drive in the array, I was not able to get the interesting values from the drive in the output. In this case the wear level indicator (Attr. ID 233) is of special interest to me, but this is only present if the drive is directly attached to a SATA controller. smartctl on directly connected ssd: # smartctl -A /dev/sda smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen Home page is http://smartmontools.sourceforge.net/ === START OF READ SMART DATA SECTION === SMART Attributes Data Structure revision number: 5 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 3 Spin_Up_Time 0x0000 100 000 000 Old_age Offline In_the_past 0 4 Start_Stop_Count 0x0000 100 000 000 Old_age Offline In_the_past 0 5 Reallocated_Sector_Ct 0x0002 100 100 000 Old_age Always - 0 9 Power_On_Hours 0x0002 100 100 000 Old_age Always - 8561 12 Power_Cycle_Count 0x0002 100 100 000 Old_age Always - 55 192 Power-Off_Retract_Count 0x0002 100 100 000 Old_age Always - 29 232 Unknown_Attribute 0x0003 100 100 010 Pre-fail Always - 0 233 Unknown_Attribute 0x0002 088 088 000 Old_age Always - 0 225 Load_Cycle_Count 0x0000 198 198 000 Old_age Offline - 508509 226 Load-in_Time 0x0002 255 000 000 Old_age Always In_the_past 0 227 Torq-amp_Count 0x0002 000 000 000 Old_age Always FAILING_NOW 0 228 Power-off_Retract_Count 0x0002 000 000 000 Old_age Always FAILING_NOW 0 smartctl on P410 connected ssd: # ./smartctl -A -d cciss,0 /dev/cciss/c1d0 smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build) Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net (Right, it is completely empty) smartctl on P410 connected hdd: # ./smartctl -A -d cciss,0 /dev/cciss/c0d0 smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build) Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net Current Drive Temperature: 27 C Drive Trip Temperature: 68 C Vendor (Seagate) cache information Blocks sent to initiator = 1871654030 Blocks received from initiator = 1360012929 Blocks read from cache and sent to initiator = 2178203797 Number of read and write commands whose size <= segment size = 46052239 Number of read and write commands whose size > segment size = 0 Vendor (Seagate/Hitachi) factory information number of hours powered up = 3363.25 number of minutes until next internal SMART test = 12 Am I hunting a bug here, or is this a limitation of the P410 SMART command passthrough?
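
    A few things that sometimes expose more data behind a Smart Array controller - all of them assumptions to try rather than a known fix, and the device paths and bay numbers will differ per box:

        # Walk the physical-drive IDs behind the logical volume:
        for i in 0 1 2 3; do smartctl -a -d cciss,$i /dev/cciss/c1d0; done

        # Some setups expose the drives via SCSI generic nodes instead:
        ls /dev/sg*
        smartctl -a -d cciss,0 /dev/sg1

        # HP's own CLI (if installed) reports per-drive health independently of smartctl:
        hpacucli ctrl all show config detail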

    Read the article

  • Scientific Linux - mysql and apache fail to start on reboot

    - by Derek Deed
    Both mysqld and httpd fail to restart following a reboot of the server, although chkconfig --list shows both daemons set to on for run levels 2,3,4 & 5 All control is being exectuted via Webmin Reboot server – MySQl and Apache not running MySQL Database Server MySQL version 5.1.69 MySQL is not running on your system - database list could not be retrieved. Click this button to start the MySQL database server on your system with the command /etc/rc.d/init.d/mysqld start. This Webmin module cannot administer the database until it is started. Apache Webserver Apache version 2.2.15 Start Apache Search Docs.. Global configuration Existing virtual hosts Create virtual host Select all. | Invert selection. Default Server Defines the default settings for all other virtual servers, and processes any unhandled requests. Address Any Port Any Server Name Automatic Document Root /var/www/drupal Virtual Server Processes all requests on port 443 not handled by other virtual servers. Address Any Port 443 Server Name Automatic Document Root /var/www/drupal Select all. | Invert selection. chkconfig --list mysqld mysqld 0:off 1:off 2:on 3:on 4:on 5:on 6:off chkconfig --list httpd httpd 0:off 1:off 2:on 3:on 4:on 5:on 6:off Manually Restart Apache chkconfig --list httpd httpd 0:off 1:off 2:on 3:on 4:on 5:on 6:off Manually Restart MySQL chkconfig --list mysqld mysqld 0:off 1:off 2:on 3:on 4:on 5:on 6:off Everything now running okay; but no difference in the chkconfig outputs above. Set chkconfig --levels 235 httpd on /etc/init.d/httpd start The same for mysqld but no change in operation. Log files show that the shutdown has been completed successfully; but there is no indication of the service restarting until it is executed manually: 131112 13:59:15 InnoDB: Starting shutdown... 131112 13:59:16 InnoDB: Shutdown completed; log sequence number 0 881747021 131112 13:59:16 [Note] /usr/libexec/mysqld: Shutdown complete 131112 13:59:16 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended 131112 14:09:52 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql 131112 14:09:52 InnoDB: Initializing buffer pool, size = 8.0M 131112 14:09:52 InnoDB: Completed initialization of buffer pool [Tue Nov 12 13:59:13 2013] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Tue Nov 12 13:59:13 2013] [notice] Digest: generating secret for digest authentication ... [Tue Nov 12 13:59:13 2013] [notice] Digest: done [Tue Nov 12 13:59:14 2013] [notice] Apache/2.2.15 (Unix) DAV/2 PHP/5.3.3 mod_ssl/2.2.15 OpenSSL/1.0.0-fips configured -- resuming normal operations [Tue Nov 12 13:59:14 2013] [notice] caught SIGTERM, shutting down [Tue Nov 12 14:27:13 2013] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Tue Nov 12 14:27:13 2013] [notice] Digest: generating secret for digest authentication ... [Tue Nov 12 14:27:13 2013] [notice] Digest: done [Tue Nov 12 14:27:13 2013] [notice] Apache/2.2.15 (Unix) DAV/2 PHP/5.3.3 mod_ssl/2.2.15 OpenSSL/1.0.0-fips configured -- resuming normal operations Is anyone able to shed any light on this problem, Cheers, Derek.
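
    A few checks that often narrow down a service that starts fine by hand but not at boot on a CentOS/Scientific Linux 6-style init (the paths below are the usual defaults):

        runlevel                                   # which runlevel does the box actually boot into?
        chkconfig --list | grep -E 'httpd|mysqld'  # links on for that runlevel?
        ls /etc/rc3.d/ | grep -Ei 'httpd|mysqld'   # are the S## start links really present?
        grep -Ei 'httpd|mysqld' /var/log/boot.log  # init script output from the last boot
        grep -Ei 'httpd|mysqld' /var/log/messages | tail -n 50

        # Re-creating the init links cures half-registered services in some cases:
        chkconfig --del httpd  && chkconfig --add httpd  && chkconfig httpd on
        chkconfig --del mysqld && chkconfig --add mysqld && chkconfig mysqld on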

    Read the article

  • DNS and DHCP dies after ~2 days of use on ClearOS

    - by TheLQ
    I'm using ClearOS (based on CentOS, so any info specific to it should apply here) as a gateway, DHCP, and DNS server. I had this server running perfectly for a month or two before replacing it with another server. However, due to DNS and DHCP failing 2 days in and a host of other performance issues (the box was a little underpowered), I changed back to the original server. However, 2 days in, DHCP and DNS are failing again, and I'm out of ideas on why. In both cases, to my knowledge no network or server changes occurred after installation. Right after installing (and at least a day in) DNS and DHCP were working just fine. However later (Day 2) I get a call saying their internet is down (translation: nobody can get to websites because DNS is down). I've tried to fix the problem by checking if dnsmasq is even running (it is), restarting the service, and restarting the server to no effect. I do have two internal servers that have static DHCP leases but one's lease must have expired as I can't connect to it anymore. I'm hesitant to do any DHCP testing on the last server as I'll not be able to connect to it anymore. Is there anything anyone can think of on why DNS and DHCP would fail 2 days into running perfectly? More info: Running dnsmasq in debug mode. This is all that's displayed even when running nslookup quackwall. I'm not sure though if nslookup commands should show up in the log [root@quackwall ~]# /usr/sbin/dnsmasq -dq dnsmasq: started, version 2.49 cachesize 150 dnsmasq: compile time options: IPv6 GNU-getopt no-DBus no-I18N DHCP TFTP dnsmasq-dhcp: DHCP, IP range 10.0.0.100 -- 10.0.0.254, lease time 12h dnsmasq: reading /etc/resolv.conf dnsmasq: using nameserver 74.128.17.114#53 dnsmasq: using nameserver 74.128.19.102#53 dnsmasq: read /etc/hosts - 5 addresses dnsmasq-dhcp: read /etc/ethers - 2 addresses On the other server DNS and the Gateway are all configured correctly (10.0.0.2 is quackwall) lordquackstar@quackgame:~$ netstat -rn Kernel IP routing table Destination Gateway Genmask Flags MSS Window irtt Iface 10.0.0.0 0.0.0.0 255.255.240.0 U 0 0 0 eth0 0.0.0.0 10.0.0.2 0.0.0.0 UG 0 0 0 eth0 lordquackstar@quackgame:~$ cat /etc/resolv.conf nameserver 10.0.0.2 domain highwow.lan search highwow.lan
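
    When the next outage happens, a handful of checks can tell a dnsmasq that has died from one that is running but no longer answering (the gateway address below is the 10.0.0.2 shown in the output above):

        pgrep -l dnsmasq                              # is the process still alive?
        netstat -lnup | grep ':53'                    # still listening for DNS?
        netstat -lnup | grep ':67'                    # still listening for DHCP?
        dig @10.0.0.2 google.com +short               # query the gateway directly from a client
        grep dnsmasq /var/log/messages | tail -n 50   # look for lease-limit or restart errors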

    Read the article

  • Ubuntu Newbie Needs Assistance!!

    - by Steve Greene
    New Ubuntu User Needs Help! - version 9.10 does not communicate with laptop Hello folks, Several days ago, I installed Ubuntu 9.10 onto my Acer Aspire 3100 laptop, running it alongside Windows Vista as a dual-bootable system. Creation of the Ubuntu boot CD went fine, and the installation onto my hard drive was flawless. Ubuntu opens and behaves as I would expect, except for one little problem. For reasons unknown to me, Ubuntu is not communicating with my laptop's networking hardware, and I have no internet connectivity, even when sitting directly under the wireless router at the local library (literally), which puts out a wickedly-fast signal that my Windows Vista OS auto-detects and immediately connects to. Up in the right side of the Ubuntu desktop, I click on the network icon and it does not show a wireless connection at all, even though I am only a few feet from the router. At home, where I use a dialup modem, I also see no means of getting online. My modem is an HDAUDIO Soft Data Fax Modem with Smart CP, manufactured by CXT (Conexant Systems Inc., file version 4.0.13.0, and the driver version is 7.58.0.0). I desperately wish to convert to Ubuntu. I used Mac for ten years, and then Windows for ten years. Now, after 20 years, I want to live out my days as an open-source Ubuntu fanatic. I am ready to give the old status quo the boot! I am an advanced computer user, but I am not a programmer. I seek a solution that is user-friendly for normal people, something equivalent to a driver that I can easily install or activate that will allow Ubuntu to see my hardware and get me connected. Can anyone help me over this hopefully-little glitch so that I can move on in total Ubuntu bliss? My processor is a Mobile AMD Sempron Processor 3500+ at 1.80 GHz, 1.50 GB RAM, and a 32-bit Operating System. I am running Windows Vista Home Basic, Service Pack 2. My current email is [email protected] if you have a workable solution that does not require programmer status to implement. Surely this must be a simple fix that I simply am overlooking, but being the new guy on the block, I have yet to be enlightened. Thanks for your help in coming up to speed!! Steve Wanna' be Ubuntu Fanatic "If you're not living on the edge, you're taking up too much space."
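
    A first step that usually shows why no wireless networks appear at all is to identify the network chips and whether a driver claims them; on many Acer laptops of that era the wireless is a Broadcom part that needs the restricted driver or firmware, but that is an assumption to verify rather than a diagnosis:

        lspci -nn | grep -i net        # wireless/ethernet chipset and PCI ID
        sudo lshw -C network           # shows driver= and whether the interface is UNCLAIMED
        rfkill list                    # hardware/software kill switch state
        lsmod | grep -Ei 'b43|wl|ath'  # common laptop wireless modules (names are guesses)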

    Read the article

  • Troubleshooting an unstable internet connection

    - by Konrad Rudolph
    My MacBook Pro running OS X (10.9, but I had the same problem before) is connected to a Belkin router via WiFi and, using Virgin Media as the ISP, to the internet. The connection is extremely unstable – on some days, I get a ping timeout every few seconds. In addition, some domains seem to suffer general connectivity issues. For instance, I often find that while the youtube.com website loads, none of the videos (which are hosted on a separate domain) do. At other times, videos load but always fail to buffer, even though the actual connection speed is ok, even though I’ve disabled dash playback. Since I’m living in a rented room and the ISP contract isn’t actually mine I’ve got only limited possibilities of addressing the problem. In particular, I have no access to the router configuration and my non tech savvy landlady, while sympathetic, is not in a great hurry to hand the problem over to the ISP’s customer support. What’s more, I seem to be the only person in the house experiencing these problems – but I can imagine that this is simply because I’m the only one who’s using the internet continuously. I’m searching for specific tests that might be able to pinpoint – and ideally solve – the problem. So far all I’ve managed to do is establish that Virgin is routing my traffic in mysterious ways. Here’s an excerpt from traceroute google.co.uk. It’s worth mentioning that the host name doesn’t seem to matter a lot, the trace route is always the same. traceroute: Warning: google.co.uk has multiple addresses; using 62.254.36.148 traceroute to google.co.uk (62.254.36.148), 64 hops max, 52 byte packets 1 (192.168.2.1) 1.112 ms 1.300 ms 2.359 ms 2 10.100.32.1 (10.100.32.1) 11.926 ms 10.217 ms 24.987 ms 3 cmbg-core-1a-ae3-610.network.virginmedia.net (80.1.202.93) 28.809 ms * 66.653 ms 4 popl-bb-1b-ae16-0.network.virginmedia.net (212.43.163.141) 13.759 ms 126.504 ms 20.472 ms 5 nrth-bb-1b-et-010-0.network.virginmedia.net (62.253.175.57) 28.357 ms 16.398 ms 42.387 ms 6 nrth-bb-1c-ae1-0.network.virginmedia.net (62.253.174.110) 27.441 ms 15.622 ms 12.044 ms 7 lutn-icdn-1-ae0-0.network.virginmedia.net (62.253.175.82) 16.678 ms 28.463 ms 28.253 ms 8 * * * 9 * * * 10 * * * ^C If I let it, this goes on until the end of time. It never seems to reach a destination. Is this normal? A friend living in the same town who is also with Virgin Media has a more conventional traceroute output: 7 hops to google.co.uk, all of which send the ICMP TIME_EXCEEDED response. The obvious fix – rebooting the router – doesn’t seem to help. As far as I can tell, the WiFi connection is stable (I can always ping the router) so the problem is further downstream. I’ve tried using an alternative DNS before (OpenDNS) but if anything, this made things worse. In fact, it made all Google services nigh unreachable.
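
    The all-asterisk traceroute mostly shows that the path filters the default UDP probes, so a couple of comparisons give more signal than repeating it (the flags below are the OS X variants; the address is the one from the trace above):

        # ICMP probes often complete where UDP ones show only "* * *":
        traceroute -I google.co.uk

        # A longer packet-loss sample against one fixed address:
        ping -c 200 62.254.36.148

        # Check for an MTU/fragmentation problem (1472 = 1500 minus 28 bytes of
        # headers; -D sets don't-fragment on OS X):
        ping -D -s 1472 -c 5 62.254.36.148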

    Read the article

  • Windows XP long login (15 minutes +)

    - by Emily Pinkerton
    I'm having a lot of issues with our Windows XP SP3 machines (about 5, but every week another gets on the bandwagon of this issue). They take forever (15 minutes) to apply the user settings once our employees enter their username and password to log in to our domain. It only happens, say, if a user has rebooted the machine; when they go to log back in, it hangs forever. Reboot and restart are definitely the key words I've noticed with this issue. Here are things I have tested: •Made sure the DNS was set to point to our two servers (Server01 & Server02 are DNS Domain Controllers, 01 is primary and 02 backup). •No major changes have been applied to our network. •All profiles are local, so I have deleted local profiles that aren't being used on those machines that run slowly. •Also I have tried to enable and disable the Enable Fast Login setting under the local machine's GP. It was not configured originally and when I tested both, it made the computer hang on "applying computer settings" for about 15 minutes. When it finally came up to the login screen, it was very quick to log in to the domain. However this doesn't fix my issue, and, even more frustrating, upon setting it back to Not Configured it still takes forever to apply computer settings. •I enabled the userenv log and here is what I see, but my experience is limited and I'm not sure how to read it exactly. (see below for log, this isn't the whole thing because it's really long) USERENV(2ec.2f0) 10:50:41:843 LoadUserProfile: LoadUserProfileP succeeded USERENV(2ec.2f0) 10:50:41:843 LoadUserProfile: Returning success. Final Information follows: USERENV(2ec.2f0) 10:50:41:843 lpProfileInfo-UserName = USERENV(2ec.2f0) 10:50:41:843 lpProfileInfo-lpProfilePath = < USERENV(2ec.2f0) 10:50:41:843 lpProfileInfo-dwFlags = 0x0 USERENV(2ec.2f0) 10:50:41:843 LoadUserProfile: Returning TRUE. hProfile = <0x818 USERENV(2ec.2f0) 10:50:41:984 IsSyncForegroundPolicyRefresh: Synchronous, Reason:NonCachedCredentials USERENV(2ec.248) 10:50:41:984 IsSyncForegroundPolicyRefresh: Synchronous, Reason:NonCachedCredentials USERENV(3c4.3dc) 10:51:26:166 LibMain: Process Name: C:\WINDOWS\system\wbem\wmiprvse.exe USERENV(2ec.5cc) 11:05:08:741 ProcessGPOs: network name is 192.168.49.0 USERENV(4a8.888) 11:05:08:804 GetProfileType: Profile already loaded. USERENV(4a8.888) 11:05:08:804 LoadProfileInfo: Failed to query central profile with error 2 USERENV(4a8.888) 11:05:08:804 GetProfileType: ProfileFlags is 0 Also this error is in the file quite a lot: USERENV(328.5bc) 11:05:29:733 GetUserDNSDomainName: Failed to impersonate user USERENV(328.834) 11:05:29:733 ImpersonateUser: Failed to impersonate user with 5. I'm really not sure what else to do with my limited experience, but I'm hoping someone can help me. I feel like I'm dealing with an issue way above my level and any knowledge I can gain out of getting this issue fixed would be amazing.

    Read the article

  • Python Django sites on Apache+mod_wsgi with nginx proxy: highly fluctuating performance

    - by Halfgaar
    I have an Ubuntu 10.04 box running several dozen Python Django sites using mod_wsgi (embedded mode; the faster mode, if properly configured). Performance highly fluctuates. Sometimes fast, sometimes several seconds delay. The smokeping graphs are all over the place. Recently, I also added an nginx proxy for the static content, in the hopes it would cure the highly fluctuating performance. But, even though it reduced the number of requests Apache has to process significantly, it didn't help with the main problem. When clicking around on websites while running htop, it can be seen that sometimes requests are almost instant, whereas sometimes it causes Apache to consume 100% CPU for a few seconds. I really don't understand where this fluctuation comes from. I have configured the mpm_worker for Apache like this: StartServers 1 MinSpareThreads 50 MaxSpareThreads 50 ThreadLimit 64 ThreadsPerChild 50 MaxClients 50 ServerLimit 1 MaxRequestsPerChild 0 MaxMemFree 2048 1 server with 50 threads, max 50 clients. Munin and apache2ctl -t both show a consistent presence of workers; they are not destroyed and created all the time. Yet it behaves as though they were. This tells me that once a sub interpreter is created, it should remain in memory, yet it seems sites have to reload all the time. I also have an nginx+gunicorn box, which performs quite well. I would really like to know why Apache is so random. This is a virtual host config: <VirtualHost *:81> ServerAdmin [email protected] ServerName example.com DocumentRoot /srv/http/site/bla Alias /static/ /srv/http/site/static Alias /media/ /srv/http/site/media WSGIScriptAlias / /srv/http/site/passenger_wsgi.py <Directory /> AllowOverride None </Directory> <Directory /srv/http/site> Options -Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> Ubuntu 10.04 Apache 2.2.14 mod_wsgi 2.8 nginx 0.7.65 Edit: I've put some code in the settings.py file of a site that writes the date to a tmp file whenever it's loaded. I can now see that the site is not randomly reloaded all the time, so Apache must be keeping it in memory. So, that's good, except it doesn't bring me closer to an answer... Edit: I just found an error that might also be related to this: File "/usr/lib/python2.6/subprocess.py", line 633, in __init__ errread, errwrite) File "/usr/lib/python2.6/subprocess.py", line 1049, in _execute_child self.pid = os.fork() OSError: [Errno 12] Cannot allocate memory The server has 600 of 2000 MB free, which should be plenty. Is there a limit that is set on Apache or WSGI somewhere?
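
    The os.fork() "Cannot allocate memory" failure despite free RAM is often a swap/overcommit question rather than an Apache one, since forking a large parent briefly has to be accountable for its full address space; whether that is the cause here is an assumption, but it is quick to check:

        free -m                             # is any swap configured at all?
        cat /proc/sys/vm/overcommit_memory  # 0 = heuristic, 2 = strict accounting
        grep -i commit /proc/meminfo        # CommitLimit vs Committed_AS

        # If there is no swap, even a modest swap file can stop fork() failures
        # from large Apache processes:
        sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
        sudo mkswap /swapfile && sudo swapon /swapfile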

    Read the article

  • Deleting windows.edb and unchecking Indexing service lead to hard drive file records swapping

    - by linni
    I followed the instructions listed here: http://www.mydigitallife.info/2007/09/18/turn-off-and-disable-search-indexing-service-in-windows-xp/ to free up space on the hard drive by deleting the windows.edb indexing file... I also stopped the Windows Search service as mentioned in the comments following the article. In addition to unchecking the "Allow Indexing Service to index this disk for fast file searching" check box on the properties dialog for the C:\ drive, I did the same for two USB-connected hard drives (J:\ and I:\). I'm not sure why I did that; I thought it might shrink the windows.edb file so I wouldn't have to delete it (which sounded a bit risky to me at the time). The file of course didn't shrink, so I ended up deleting it and freeing up over 3 GB of space, yeehaw.
    However, as soon as I had done this I could not access the USB-connected hard drives anymore. The error I got was "I:\photos is not accessible" / "The file or directory is corrupted and unreadable" when I tried to open the photos directory on I:\.
    Here is where I enter the twilight zone... I try disconnecting the I:\ USB hard drive, but XP shows me that the J:\ drive has disconnected instead and I:\ is still there. So I disconnect both drives and restart the computer. I then connect one drive, but it lists the contents of the other drive at the root level. I tried connecting the drives vice versa and the same thing happens. I tried taking one of the hard drives to another computer, and when I connect it there it lists not its own contents but the contents of the other hard drive, and gives the same error as above when I try to access any of the folders (even folders at the root that have the same name as folders on the other drive, e.g. J:\photos and I:\photos)??? And no, this is not me mixing up my drive letters.
    Computer Management - Disk Management shows the same result as Explorer: the drive size is correct (one is 500 GB, the other 640 GB) but the drive name shown is that of the opposite drive, as are the contents. Also, one drive was full of data and the other almost empty, but each incorrectly shows the free-space status of the other drive. Somehow the USB drives seem to have switched file tables, file records, boot records or something; extremely weird! Even weirder, if I try to create a text file or folder on such a drive it works fine (accessing it, saving, whatever, all good), but accessing any other data on the drive gives me an error.
    Does anyone have a clue what is going on and, more importantly, how I can restore the correct folder listings to access my family photos??? cheers, linni
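    (If it helps anyone narrowing this down: symptoms like these are often about which volume each drive letter is currently mapped to rather than damage on both disks. Two non-destructive ways to see the actual letter-to-volume mapping on XP are sketched below; they are purely diagnostic and there is no guarantee they explain the swap.)
    rem Diagnostic only -- lists each volume GUID and the drive letter / folder it is mounted at
    mountvol
    rem Or, from a command prompt as an administrator, inspect size, filesystem and label per volume
    rem (type "list volume" at the DISKPART> prompt)
    diskpart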

    Read the article

  • VMWare tools not installing with an error

    - by JDS
    VMware Tools is not installing on Ubuntu 12.04. I'm using Chef to manage the installation, but the apt commands fail even when run manually. I'm using the VMware Tools Debian repo. Example:
    $ cat /etc/apt/sources.list.d/vmware-tools-source.list
    deb http://packages.vmware.com/tools/esx/5.0u2/ubuntu precise main
    When trying to install, most packages seem to go OK, but one, "vmware-tools-foundation", does not. Example:
    $ apt-get -q -y install vmware-tools-esx-nox=8.6.10-1.precise
    Reading package lists...
    Building dependency tree...
    Reading state information...
    You might want to run 'apt-get -f install' to correct these:
    The following packages have unmet dependencies:
     vmware-tools-esx-kmods-3.2.0-23-generic : Depends: vmware-tools-foundation (>= 8.6.10) but it is not going to be installed
     vmware-tools-esx-nox : Depends: ...snip list of deps...
    E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).
    $ apt-get -f install
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Correcting dependencies... Done
    The following extra packages will be installed:
      vmware-tools-foundation
    The following NEW packages will be installed:
      vmware-tools-foundation
    0 upgraded, 1 newly installed, 0 to remove and 118 not upgraded.
    7 not fully installed or removed.
    Need to get 0 B/5,886 B of archives.
    After this operation, 86.0 kB of additional disk space will be used.
    Do you want to continue [Y/n]? y
    (Reading database ... 103499 files and directories currently installed.)
    Unpacking vmware-tools-foundation (from .../vmware-tools-foundation_8.6.10-1.precise_all.deb) ...
    VMware Tools cannot install because it appears that another installation of VMware Tools is already present. Please remove the previous installation and then attempt to install this copy of VMware Tools again.
    dpkg: error processing /var/cache/apt/archives/vmware-tools-foundation_8.6.10-1.precise_all.deb (--unpack):
     subprocess new pre-installation script returned error exit status 1
    Errors were encountered while processing:
     /var/cache/apt/archives/vmware-tools-foundation_8.6.10-1.precise_all.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    The key seems to be this error: "VMware Tools cannot install because it appears that another installation of VMware Tools is already present. Please remove the previous installation and then attempt to install this copy of VMware Tools again."
    However, I've tried removing and purging and can't seem to "trick" VMware Tools into thinking the packages are gone. Apt thinks they are gone. Is there some service/file/cache/lock left over that VMware Tools sees and that makes it think VMware Tools is still installed? I've googled and googled, but there is no answer to this question for my particular circumstances on the interwebs. VMware's documentation of this error is minimal.
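    (A hedged sketch of where that preinst message usually comes from: the operating-system-specific packages refuse to install while remnants of a tarball/Perl-installer copy of VMware Tools are still on disk, which removing and purging the dpkg packages won't touch. The paths and the uninstall script below are the usual tarball-install locations, assumed rather than confirmed for this box.)
    # Look for leftovers of a tarball install (paths are the usual defaults -- an assumption)
    ls -d /etc/vmware-tools /usr/lib/vmware-tools /usr/bin/vmware-uninstall-tools.pl 2>/dev/null
    # If the old uninstaller is there, run it to clear the previous copy
    sudo /usr/bin/vmware-uninstall-tools.pl
    # Then confirm what dpkg still knows about and retry the dependency fix
    dpkg -l | grep -i vmware
    sudo apt-get -f install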

    Read the article

  • Problem with authentication of users via IE when using "host header value"

    - by Richard
    Hi, I'm trying to set up multiple web sites in IIS 6. I've got a working virtual site residing under the default web site, but if I create a new web site in IIS, assign it a host header value, let it point to the very same file structure as the previously mentioned site, and finally allow only Windows Integrated security on the site, I still cannot log in to the new site using MSIE 6 or 8, while FF 3.5 works fine. In the web log I get these entries when I access the localhost site:
    2009-11-19 09:15:59 W3SVC1 127.0.0.1 GET /client/ - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.2;+Trident/4.0;+.NET+CLR+2.0.50727) 401 2 2148074254
    2009-11-19 09:15:59 W3SVC1 127.0.0.1 GET /client/ - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.2;+Trident/4.0;+.NET+CLR+2.0.50727) 401 1 0
    2009-11-19 09:15:59 W3SVC1 127.0.0.1 GET /client/Default.asp - 80 xxx\Administrator 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.2;+Trident/4.0;+.NET+CLR+2.0.50727) 200 0 0
    If I instead access via the host header value site, I get prompted to log in but the login fails, and I also get an error "401 1 2148074252" which is not present when it succeeds. Can this be the issue?
    Pre-login screen:
    2009-11-19 09:15:59 W3SVC1793297778 127.0.0.1 GET / - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+5.2;+Trident/4.0;+.NET+CLR+2.0.50727) 401 2 2148074254
    2009-11-19 09:15:59 W3SVC1793297778 127.0.0.1 GET / - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+5.2;+Trident/4.0;+.NET+CLR+2.0.50727) 401 1 2148074252
    2009-11-19 09:15:59 W3SVC1793297778 127.0.0.1 GET / - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+5.2;+Trident/4.0;+.NET+CLR+2.0.50727) 401 1 0
    Post-login screen (note that Windows credentials have not been submitted):
    2009-11-19 09:15:59 W3SVC1793297778 127.0.0.1 GET / - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+5.2;+Trident/4.0;+.NET+CLR+2.0.50727) 401 1 0
    2009-11-19 09:15:59 W3SVC1793297778 127.0.0.1 GET / - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+5.2;+Trident/4.0;+.NET+CLR+2.0.50727) 401 1 2148074252
    Firefox tries anonymous access and prompts for a login; after submitting Windows credentials it all works fine. For what reason is IE so stubbornly refusing to submit credentials to the "host header value" site? The site is in the Local intranet zone and automatic logon is ticked for that zone. No teamed NICs, no FW, no nothing; I'm clueless :( /Richard
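    (Worth noting when reading the logs above: both the working and the failing requests come from 127.0.0.1, i.e. the browser is running on the server itself. On Server 2003 SP1 and later, integrated auth that works against the machine name but fails against an added host header is frequently the NTLM loopback check. The registry names below are the documented ones from MS KB 896861; the host name is a placeholder, and whether this is actually the cause here is an assumption.)
    rem Sketch only -- host name is hypothetical; see MS KB 896861 before applying
    rem Preferred: register the host header name(s) used on this server
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0" /v BackConnectionHostNames /t REG_MULTI_SZ /d "mysite.example.com" /f
    rem Quick-test alternative (less secure): disable the loopback check entirely
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v DisableLoopbackCheck /t REG_DWORD /d 1 /f
    rem Reboot, or at least restart IIS, afterwards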

    Read the article
