Search Results

Search found 17984 results on 720 pages for 'log shipping'.

Page 479/720

  • Windows 7 system CPU bogged down by Windows services, no explanation

    - by Alex
    I'm looking at a laptop for a colleague which is running terribly slow. A quick look showed that the CPU was 100% used by 2-3 svchost processes, which of course doesn't tell much, since those are just 'cover' processes with services running underneath them. So I fired up Process Explorer hoping to find a shady rogue service bogging the system, but to my surprise I found that genuine MS Windows services (or at least damn well disguised ones) are bogging down the system: dnscache (DNS Client), IKEEXT (IKE and AuthIP IPsec Keying Modules), and iphlpsvc (IP Helper). Seen separately, it might seem odd for these services to use a lot of CPU, but taking a step back, one can conclude that all three are quite closely related to networking. I've tried running netsh int ip reset log.txt, which has helped me solve bizarre network-related problems in the past, but this didn't help. Of course I thought about a virus, but neither MS Security Essentials nor Malwarebytes found anything (I let both run a full scan).
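
    To pin down which service inside a shared svchost is actually burning the CPU, one approach (a sketch, assuming a stock Windows 7 install) is to list the service groupings and then split a suspect service into its own process so Task Manager can attribute the load to it:

        tasklist /svc /fi "imagename eq svchost.exe"
        sc config dnscache type= own
        net stop dnscache & net start dnscache

    (The space after type= is required by sc.) Once restarted, the service runs in its own svchost, so its CPU usage shows up separately.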

    Read the article

  • Error sysprepping Windows 8 Enterprise 90-day trial

    - by Philip
    I am using the Windows 8 Enterprise 90-day trial to evaluate the latest version of Windows for a private school. The way I work is that I use sysprep to prepare a generalized image, then I clone it to the school's computers. When I follow the instructions and try sysprep on my installation of Windows 8 in VirtualBox, sysprep thinks briefly and gives me an error message: "A fatal error occurred while trying to sysprep the machine." Once I acknowledge it, sysprep closes. I checked the Windows Event Log, and there's nothing there that I could see. I also followed some instructions to cure this problem, but nothing changed; the error remains. My best guess is that the 90-day trial prohibits the use of sysprep, but I can't be sure. It might also be my use of VirtualBox, or who-knows-what. Has anyone had success with this, or encountered the same issue on real hardware?
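
    Sysprep writes detailed logs that usually say more than the dialog does; on a default install they live under the Panther directory (a sketch, assuming standard paths):

        %WINDIR%\System32\Sysprep\Panther\setupact.log
        %WINDIR%\System32\Sysprep\Panther\setuperr.log
        slmgr /dlv

    slmgr /dlv reports the remaining rearm count; sysprep /generalize fails once that count reaches zero, which is easy to hit after several sysprep runs on the same trial install.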

    Read the article

  • Which particular file caused "Delayed write failed" error?

    - by user35020
    I sometimes get this error when resuming from hibernation: "Delayed Write Failed: Windows was unable to save all the data for the file G:\$Mft. The data has been lost. This error may be caused by a failure of your computer hardware or network connection. Please try to save this file elsewhere." I know this is caused because the hard drive (G:, an external USB drive) was (a) plugged in when I hibernated and wasn't ready at wake-up, or (b) simply not plugged in because I forgot when resuming from hibernation. My question is: is there any way to see which particular file/folder/etc. failed to be written? The hard drive functions correctly before and after, so there seems to be no permanent damage. Is there a detailed log someplace, or a utility? I've searched but found nothing. Thanks for any help!
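
    For what it's worth, G:\$Mft is not a regular user file: it is the NTFS Master File Table, the volume's own metadata file, so the failed write concerns the filesystem itself rather than a particular document. The checks worth running (a sketch, assuming the drive still mounts as G:):

        chkdsk G: /f
        fsutil fsinfo ntfsinfo G:

    Beyond that, Event Viewer (Windows Logs > System, sources "disk" and "Ntfs") is the closest thing to a detailed log of these failures that Windows keeps.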

    Read the article

  • Problem with Lenovo X200s WiFi under Ubuntu Karmic

    - by oneself
    Hi, I have just gotten my Lenovo X200s laptop, and I have installed Ubuntu 9.10 Karmic on it. The installation went through without a hitch, but I can't get my wifi to work. lspci | grep Network produces the following results: 00:19.0 Ethernet controller: Intel Corporation 82567LM Gigabit Network Connection (rev 03); 03:00.0 Network controller: Realtek Semiconductor Co., Ltd. Device 8172 (rev 10). The weird part is that when I turn the wifi hardware switch on the side of the laptop on and off, I get the following printed in /var/log/messages: Dec 30 23:24:48 temp-laptop kernel: [ 213.432302] usb 4-2: USB disconnect, address 2 / Dec 30 23:24:52 temp-laptop kernel: [ 217.276310] usb 4-2: new full speed USB device using uhci_hcd and address 3 / Dec 30 23:24:52 temp-laptop kernel: [ 217.441759] usb 4-2: configuration #1 chosen from 1 choice. Does Ubuntu think my wifi card is a USB device? Am I missing some driver? What can I do to fix this? Please, help!
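
    A sketch of what to check next (assuming the card really hangs off an internal USB bus, as the log suggests):

        lsusb
        sudo lshw -C network
        dmesg | grep -i -e firmware -e rtl

    If lsusb lists the Realtek device, the hardware switch is toggling a USB-attached radio, and the Karmic kernel may simply lack a driver for this RTL8172-family chip; Realtek's vendor driver or a backports/staging module would be the next thing to try.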

    Read the article

  • Automatic repair software

    - by ADOConnection
    Does anyone know any kind of apps or services for "taking care of servers" (besides managed servers)? There are hundreds of ways your server or application can stop working properly. Small things are easy to miss but usually easy to fix: log overgrowth, configuration issues, etc. Of course there are best-practice checklists, but it's not a human task to check configuration against best practices. I'm sure it can be automated: some kind of agent could monitor all system settings, say what is right and wrong, and give suggestions on how to make it right; a sketch of what I mean follows below. I have to administer several servers and I need some kind of overview of the overall situation, as well as a tool that will fix problems automatically. Can you people suggest something? (I know it's a little bit outside the rules of SF, but I think this particular question is quite specific.) It would be great to have something like http://stackoverflow.com/questions/1451319/asp-net-mvc-view-engine-comparison but for automation software.
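
    To make the "fix it automatically" part concrete, a minimal sketch of the kind of agent meant here, using monit as an example (paths and thresholds are invented):

        check system myserver
            if loadavg (5min) > 4 then alert
        check process apache with pidfile /var/run/apache2.pid
            start program = "/etc/init.d/apache2 start"
            if failed port 80 protocol http then restart
        check filesystem rootfs with path /
            if space usage > 90% then alert

    That covers restart-on-failure and disk/log growth alerts, but not the "audit my configuration against best practices" part, which is really the harder ask.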

    Read the article

  • Can I Store MediaWiki Files on the cloud?

    - by user219048
    I recently got a Chromebook, and I've been brainstorming different ways to put MediaWiki on it (with localhost, not a server). One way I've read about online is to go into developer mode to download and set up LAMP. I was wondering: wouldn't I be able to store the Apache, MySQL, PHP, and MediaWiki files on the cloud (Google Drive)? And if so, would anything prevent me from accessing my wiki on any other computer's localhost, assuming I could just log into Google Drive to access those files? Might there be any reduced performance when operating from the cloud?

    Read the article

  • Chrome logs me out of everything when I exit--tried cookie-related stuff already

    - by GreatBigBore
    I've been using Chrome very successfully for a long time. It has always kept me logged in to all my sites even after exiting the app. Recently it started logging me out of everything when I exit Chrome. I've fooled around with all the various advanced cookie settings, and I've cycled through the options hoping that Chrome just needed a wakeup call or a reset or something. I've also deleted all the cookies in case a corrupted one is confusing Chrome. Nothing works! I see cookies when I log in, but they all go away when I exit Chrome. I've searched all over the place and seen only the standard answers relating to resetting cookies, local data, sessions, that sort of thing. Any Chrome gurus out there, please send a telepathic message to my browser asking it to resume its previous excellent behavior. Alternatively, you could suggest other possible solutions.

    Read the article

  • Monitor someone on server

    - by edo
    I'm in the unfortunate position of having to give someone I do not fully trust privileged access to a webserver to finish work that they never completed. They will access the server remotely (i.e. I will not be able to see their screen). What can be done to (a) proactively limit any potential damage and (b) accurately log anything they do on the server for analysis afterwards, even if things seem OK? They will be updating a web application. Thanks in advance! More information: the server is an Ubuntu AWS server.
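
    For the logging half, a minimal sketch using auditd on Ubuntu (the uid and key name are placeholders for the contractor's account):

        sudo apt-get install auditd
        sudo auditctl -a always,exit -F arch=b64 -S execve -F auid=1001 -k contractor
        sudo ausearch -k contractor -i

    That records every command the login session executes, keyed for later review; giving them a dedicated account (never shared, no blanket sudo) is what makes the auid filter meaningful.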

    Read the article

  • Use SSH reverse tunnel to bypass VPN [on hold]

    - by John J. Camilleri
    I have shell access to a server M, but I need to log into a VPN on my machine L in order to access it. I want to be able to get around this VPN, and I've heard I can do this by creating a reverse SSH tunnel through an intermediate server E (which I can access without the VPN). This is what I am trying:

    1. Turn on the VPN on L, open an SSH session to M.
    2. On M, execute the command: ssh -f -N -T -R 22222:localhost:22 user@E
    3. From L, try to open an SSH session to E on port 22222, hoping to end up at M.

    Step 2 seems to work without any complaint, but on step 3 I keep getting "connection refused". I have made sure that port 22222 is open on E:

        7  ACCEPT  tcp  --  anywhere  anywhere  tcp dpt:22222

    I'm pretty new to SSH tunnelling and not sure what the problem could be. Any ideas what I can try?
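
    One thing worth checking (a sketch, assuming stock OpenSSH on E): by default sshd binds remote forwards to the loopback interface only, so port 22222 on E answers from E itself but refuses connections from anywhere else, which would explain step 3. Either hop through E's loopback:

        ssh -t user@E ssh -p 22222 user@localhost

    or set GatewayPorts clientspecified in /etc/ssh/sshd_config on E, restart sshd, and bind the forward explicitly on M:

        ssh -f -N -T -R 0.0.0.0:22222:localhost:22 user@E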

    Read the article

  • Website lookup extremely slow in Ubuntu

    - by ubuntulover
    Hi, I have a wireless broadband connection through a router and wireless modem. Everything works fine in Windows. However, in Ubuntu on the same machine, websites seem to take longer to start loading. I think the DNS lookup is slow, and https sites may be slower still, as I just can't log in to Gmail. I am also using a Mercurial repo with a remote origin, and it takes forever (like 5 minutes) to push one small change; I think it is because it has to communicate over https multiple times. Should I change my DNS server? I've noticed I don't have these problems on my work network (they have another DNS server). This happens with the IPv4 settings being automatic (DHCP). When I change it to automatic (DHCP) addresses only and add Google's 8.8.8.8 to the DNS servers, it still takes forever. Why is this happening?
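
    A quick way to confirm whether DNS is actually the bottleneck (a sketch; 8.8.8.8 stands in for any alternative resolver):

        dig gmail.com | grep "Query time"
        dig @8.8.8.8 gmail.com | grep "Query time"

    If the first number is large and the second small, the router's resolver is the problem; if both are small, the slowness is elsewhere (slow AAAA/IPv6 lookup timeouts are another common cause of exactly this "fine in Windows, slow in Ubuntu" pattern).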

    Read the article

  • Why are Full GCs not running at the gcInterval I set?

    - by Brad Wood
    ColdFusion 10 Update 10, Windows Server 2008 R2, Java 1.7.0_21. I am trying to get Full GCs to run every 10 minutes. I have used the gcInterval JVM arg in the past on earlier versions of ColdFusion with success, but I have confirmed with verbose GC logs that Full GCs are still happening on the hour (unless the old gen gets so full that it forces a full collection). Is there something else I need to be doing to get this working on ColdFusion 10? Here are the full JVM args from ColdFusion10\cfusion\bin\jvm.config (line breaks added for readability):

        java.args=
          -server -Xms4072m -Xmx4072m
          -XX:PermSize=512m -XX:MaxPermSize=512m
          -Dsun.rmi.dgc.client.gcInterval=600000
          -Dsun.rmi.dgc.server.gcInterval=600000
          -XX:+UseParallelGC -XX:+UseParallelOldGC
          -Xloggc:gc.log -verbose:gc
          -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps
          -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=1024K
          -Xbatch
          -Dcoldfusion.home={application.home}
          -Dcoldfusion.rootDir={application.home}
          -Dcoldfusion.libPath={application.home}/lib
          -Dorg.apache.coyote.USE_CUSTOM_STATUS_MSG_IN_HEADER=true
          -Dcoldfusion.jsafe.defaultalgo=FIPS186Random
          -Dcoldfusion.classPath={application.home}/lib/updates,{application.home}/lib,{application.home}/lib/axis2,{application.home}/gateway/lib/,{application.home}/wwwroot/WEB-INF/flex/jars,{application.home}/wwwroot/WEB-INF/cfform/jars
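
    One sanity check (a sketch, assuming the JDK's tools are available on the server): the hourly Full GC is exactly what the default sun.rmi.dgc interval of 3600000 ms produces, so it is worth confirming the running JVM actually picked up the overrides rather than some other jvm.config:

        jps -l
        jinfo -sysprops <pid> | findstr gcInterval

    If the properties show up as 600000 and the Full GCs are still hourly, something else is calling System.gc() on its own schedule.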

    Read the article

  • Rsyslog: copy with a changed facility

    - by Dom
    I have saslauthd saving its logs to LOG_AUTH on our rsyslogd server. That can't be changed without recompiling, and I don't want to do that. I would like to see all the LOG_AUTH messages in LOG_MAIL, because I export logs to an external machine, and I would like all the saslauthd logs to arrive under LOG_MAIL on the remote server. Locally, of course, I can add "auth.*" to the mail.log file section, but the export will still not land in the right file, because I filter the export by syslog facility/priority. How can I export all the AUTH logs as MAIL logs? Thanks
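
    One approach worth sketching (untested here): rsyslog cannot rewrite the facility of a message directly, but when forwarding you can override the PRI field with a template. PRI = facility * 8 + severity, so mail.info is <22>; note this hardcodes the severity as well:

        $template AsMail,"<22>%timestamp% %hostname% %syslogtag%%msg%"
        auth,authpriv.*    @@remote-server:514;AsMail

    The remote rsyslogd then files these messages under the mail facility, which is what the facility-based filtering there will see.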

    Read the article

  • What are the pros & cons of these MySQL engines for OLTP -- XtraDB, PBXT, or TokuDB?

    - by Continuation
    I'm working on a social website with an approximate read/write split of 90/10, and I'm trying to decide on a MySQL engine. The ones I'm interested in are: XtraDB, PBXT, and TokuDB. What are the pros and cons of each for my use case? A few specific questions: PBXT uses a log-based structure that avoids double-writes. It sounds very elegant, but the benchmarks I've seen don't show any real advantage over XtraDB. Do you have any experience with PBXT/XtraDB you can share? TokuDB sounds VERY interesting, but all the benchmarks I've seen are about single-threaded bulk inserts (inserting 100M rows, for example). That's not very relevant for OLTP. What about its performance with a large number of concurrent threads writing and reading at the same time? Has anyone tried that?
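
    Rather than trusting published single-threaded numbers, one option is to benchmark a mixed concurrent workload directly; a sketch with sysbench (0.4-era syntax; table size, thread count, and credentials are arbitrary):

        sysbench --test=oltp --mysql-table-engine=innodb --oltp-table-size=1000000 \
                 --mysql-user=test --mysql-password=test prepare
        sysbench --test=oltp --oltp-test-mode=complex --num-threads=16 \
                 --max-requests=100000 --mysql-user=test --mysql-password=test run

    Swapping the table engine (and the server binaries) between XtraDB, PBXT, and TokuDB should at least rank them on this particular 90/10 read/write mix.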

    Read the article

  • WAMP server does not start: Windows 7, 64-bit

    - by Ram
    I am trying to install WAMP server (the following is the exact setup name) on Windows 7, 64-bit. But it never starts; the icon stays orange, meaning some services did not start. wampserver2.2e-php5.3.13-httpd2.2.22-mysql5.5.24-x64. I have been searching for the last 3 hours but did not find any solution. Port 80 is not in use. In Windows services, when I try to start the wampapache service manually, it throws the following error: "Windows could not start the wampapache service on Local Computer. Error 1053: The service did not respond to the start or control request in a timely fashion." apache_error.log is empty. Things used to work fine in Windows XP. Maybe this is a repeated thread, but I did go through similar posts and nothing worked! Please help!!
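
    A couple of checks that usually narrow error 1053 down (a sketch; the Apache path is the default WAMP layout and may differ):

        netstat -ano | findstr :80
        c:\wamp\bin\apache\apache2.2.22\bin\httpd.exe -t

    The first confirms nothing else owns port 80 (Skype and IIS are the usual suspects); the second runs Apache's config syntax check, which prints errors to the console that never make it into apache_error.log when the service dies this early.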

    Read the article

  • How to disable password change for an OpenLDAP user?

    - by Keve
    Considering possible solutions for some improvements, I ran into this theoretical question and couldn't find a satisfying answer. Some of you may have first-hand experience with this in practice, so here the question goes: how can I disable password changing for an OpenLDAP user? The account must stay enabled, allowed to log on to workstations and work as usual, but should not be able to change its own password. Can this be done? If so, how difficult is it to implement? All suggestions are appreciated! For reference: servers and workstations run a mixture of FreeBSD and OpenBSD. The accounts to get password changing disabled are student or generic workstation accounts. The environment is a school.
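
    A sketch of the usual OpenLDAP approach, assuming a slapd.conf-style setup (placement relative to existing ACLs matters, since the first matching rule wins):

        # password usable for binding, but not readable or writable by the user
        access to attrs=userPassword
            by self auth
            by anonymous auth
            by * none

    The rootdn bypasses ACLs, so administrators can still (re)set passwords. If the ppolicy overlay is already in use, setting pwdAllowUserChange to FALSE in the policy entry is an alternative that keeps the ACLs untouched.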

    Read the article

  • Ubuntu 12.10 Quantal Quetzal and AMD 12.11 Beta Driver

    - by White
    I'm using a Quantal AMD64 install and an XFX Radeon HD5850 video card. I first enabled restricted drivers through Additional Drivers, but it resulted in breaking Unity and Compiz (I could only see my wallpaper and shortcuts, though the terminal still worked, and Nautilus too, however without Close/Maximize/Minimize and slower). Then I uninstalled it and everything went back to normal. Then I installed it via the terminal (12.10 version), and the result was the same. Then I downloaded it from ATI's web site (12.11 beta) and installed the .run file using the terminal, but the result was yet again the same. Then I went to the terminal and entered these commands:

        sudo apt-get remove --purge fglrx fglrx_* fglrx-amdcccle* fglrx-dev*   (it said it had nothing to uninstall)
        sudo rm /ect/x11/xorg.conf   (no such directory)
        sudo dpkg-reconfigure xserver-xorg
        sudo startx
        sudo cp /ect/x11/xorg.conf.orig /ect/x11/xorg.conf   (also, no such directory)
        sudo aticonfig --initial
        sudo reboot

    Then I was presented with the login screen, but when I tried to log in (with my account), it flashed a black screen and then threw me back. The guest account still works (without Unity and Compiz, though) and I can still use TTY. And I also got the "AMD Testing Only" watermark. Then I figured that I should stop messing with the terminal and get help before I unleashed Apocalypse XD. Side notes: my Ubuntu is installed on an ext4 partition with 60GB, and I dual boot with Windows 7 (at least for now). My internet is a 50kbps 3G-ish connection, so downloading even small files is a pain, let alone a video driver. I would rather not reinstall the OS; it was a herculean task to download everything I have in there, and I have very little free disk space for backups. I'm still new to Ubuntu (I know some basic commands), and I don't know how to debug, so please, be patient XD And using Windows, my internet is even slower (is that possible?), so it kind of leaves a torture aftertaste xD. So, if you guys could answer quickly, it would be greatly appreciated. Thanks in advance. If you need any info, just ask (and explain how to get it XD).
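
    Worth noting before anything else: the two commands that failed with "no such directory" contain a path typo; on Ubuntu the X configuration lives under /etc/X11, not /ect/x11. A corrected cleanup sequence might look like this (a sketch; xorg.conf.orig only exists if something backed it up earlier):

        sudo apt-get purge 'fglrx*'
        sudo rm /etc/X11/xorg.conf
        sudo dpkg-reconfigure xserver-xorg
        sudo reboot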

    Read the article

  • High system cpu load (%sys), system locks

    - by Mark
    For the last two weeks we have been having intermittent, severe spikes in system CPU usage (shown as %sys), which last maybe half a minute, locking most processes, including ssh. I've been trying to figure this out, but atop doesn't show anything relevant (the system usage it shows for processes is insignificant), the spikes are intermittent, and I could not reproduce a spike using any workload for the web application this webserver hosts. If you have any ideas on how to debug high %sys and (sometimes) %si CPU usage, please share them. System specs (don't know if any of this is relevant): dedicated server, CentOS 6, Core i7 950, a consistent 4 to 8 GB RAM free at any time, hard drives in RAID-1. Additional info: dmesg output doesn't change between spikes; /var/log/messages doesn't change between spikes. Here is cat /proc/vmstat. Here is the output of mpstat 1 during a typical spike. Added 07.11.11: it looks like a simple reboot restored the system state, and we might never know what caused the disturbance in the first place.
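
    For the next spike, a sketch of how to catch kernel-side CPU time in the act with perf (assuming the perf package for the running kernel is installable on this CentOS 6 box):

        sudo perf record -a -g -- sleep 30
        sudo perf report

    Started during a spike, the report shows which kernel symbols the %sys time is spent in, which is usually enough to tell interrupt storms, lock contention, and runaway filesystem activity apart.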

    Read the article

  • Run BGInfo At Startup For All Users

    - by slickboy
    I have a Windows 7 image which I intend to deploy across a business. For simplicity I intend to install BGInfo on each machine and have it update each time a user logs in. From what I can see, when BGInfo creates a configuration file, the file contains variables which are local to each account, and therefore the configuration file will only work on the user account that created it. Has anyone any idea how to make these configuration files 'generic' so that BGInfo will work for all accounts when they log in? At present I have the BGInfo application and a BGInfo configuration file saved on the C drive, and I have written a batch file which is stored in the 'All Users/Start Menu/Startup' directory (which executes every time any user on the computer logs in); however, this only works for the account which created the configuration file. Thanks for any help.
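
    For reference, the batch file can pass the configuration explicitly and suppress the prompts; a sketch (the paths are placeholders for wherever the files sit on the C drive):

        "C:\BGInfo\Bginfo.exe" "C:\BGInfo\default.bgi" /timer:0 /silent /nolicprompt

    The /timer:0 switch applies the wallpaper immediately, and /silent with /nolicprompt keeps anything from popping up at the user's logon. Keeping the .bgi's fields to machine-level data (hostname, IP, OS) rather than anything derived from per-user environment variables may be what keeps it portable across accounts.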

    Read the article

  • Nvidia driver on Windows 7 causing black screen

    - by inKit
    I have just installed Windows 7 on a desktop machine and for the first time ever have had a really tough time doing so; it's normally a nice smooth install. This time I found that the monitor would simply go black after completing the installation. I tried reinstalling about 3 times and this did not help. After much searching I discovered that it was the Nvidia drivers that were playing up with Win 7, so I booted into safe mode, disabled the device, then rebooted to complete the installation. Windows 7 now works fine as long as the Nvidia 9600 GT video card is disabled. The moment I enable it, the system requires a reboot and the screen goes black before even getting to the login screen. I have tried downloading the latest driver and installing it manually, and I have also tried uninstalling the device and allowing Windows 7 to install it itself. Nothing seems to work. Any clues?

    Read the article

  • Resize a new database to predicted maximum size

    - by John Oxley
    Currently I have a SQL Server database which is about 2 GB. I know that over the next year it's going to grow to a maximum of about 10 GB. Hard drive space is not an issue in the slightest. Is there a downside to resizing the data file to 20 GB now, then defragmenting the hard drive? Should I resize the log file to 1 GB as well? Something generously large, so that fragmentation doesn't happen there either. With this question I would like to avoid the data file becoming fragmented on the disk itself, but I don't want to negatively impact performance.
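
    The pre-sizing itself is one statement per file; a sketch (database and logical file names are placeholders, discoverable via sp_helpfile):

        ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_Data, SIZE = 20GB);
        ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_Log,  SIZE = 1GB);

    Growing the files once, up front, avoids both NTFS-level fragmentation from repeated autogrow and, for the log, an excessive number of virtual log files.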

    Read the article

  • Change domain password from non-domain computer (AD)

    - by Josh
    I have a domain controller on Windows Server 2008. When I set up my users, I gave them all a dummy password with "User must change password at next logon" checked. Everyone's machines are on the same network as the domain controller, but we are not forcing them to join their computers to the domain. The DC has a website which requires the use of domain accounts to access it. How do I tell my users to change their domain passwords without connecting their PCs to the domain or making them log in to a machine on the domain? I do not want anything I would have to install on each client to allow them to change their passwords (I have a password expiration policy). Most of these workstations are XP.

    Read the article

  • Solaris NFS: user permissions

    - by cjavapro
    I am very new to NFS, and I would like to make sure I am clear. If the NFS server shares a directory rw, and all the files in the directory have permissions 700 with user/group root/root, then on the client you would have to log in as root to see them. Is this correct? I am aware that a non-root user on the client could make a direct connection to override this (as in: don't use the mount, just use an NFS client hack). It really seems like anyone who has access to the client machine should have access to the files and that the client machine should be ignoring permissions; only the server should handle permissions. Am I correct in my understanding? Is it normal to have this type of layout? Is there a way to ignore the permissions on the client side?
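
    For context, a sketch of the Solaris export syntax involved: by default NFS servers map client root to nobody (root squashing), so even root on the client may not be able to read those root-owned files unless the share grants it explicitly:

        share -F nfs -o rw,root=clienthost /export/data

    In other words, the server does stay in charge: the client kernel enforces the mode bits it is shown, and the server decides how far to trust the client's claimed identities.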

    Read the article

  • Can't boot to Windows 7 after installing Ubuntu 11.10

    - by les02jen17
    Here's what happened: I have 2 HDDs.

    The 1st HDD is partitioned like this:
        C - Windows 7
        D* - empty partition where I installed Ubuntu
        E - personal files
        F - personal files
    The 2nd HDD is partitioned like this:
        G - personal files

    *The D partition was originally part of the C partition; I resized it (using EaseUS Partition Master in Windows) and defragged it prior to installing Ubuntu. I installed Ubuntu by booting to the Ubuntu Secure Remix CD and chose the D partition to install to. I did not create a swap partition, and I mounted / on the D partition. I didn't know where to mount the others, so I just thought that by mounting / on D it would be okay. After the long installation, upon rebooting, I couldn't access Windows OR Ubuntu: I got an infinite boot loop and eventually the choices to boot to Safe Mode, Last Known Good Configuration, or Start Windows Normally. After all of them failed, I put the CD back in and ran Boot Repair. I chose the MBR option first, which didn't work. I then chose the GRUB option, and now I am able to boot to the Ubuntu I installed, but not to my Windows 7! I'm using my newly installed Ubuntu while writing this. I hope you can help me; I did the best I could! Here's the link to the boot repair log: http://paste.ubuntu.com/919354/ Thanks in advance!

    Read the article

  • Send nginx X-Accel-Redirect request from remote server

    - by phingage
    I have 2 servers: the first (domain.com) is a Django/Apache server, the second (f1.domain.com) is a file server (nginx) where some files are protected and should only be available for download to registered users. So I have set up an nginx server with:

        server {
            listen 80 default_server;
            server_name *.domanin.com;
            access_log /home/domanin/logs/access.log;
            location /files/ {
                internal;
                root /home/domanin;
            }
        }

    and from Django I send a request via the X-Accel-Redirect header, but it doesn't work, I think because it comes from a remote server. How can I accomplish my task? Regards!
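
    For what it's worth, X-Accel-Redirect is only honored by the nginx instance that proxied the request, so the browser has to be talking to this nginx while Django acts as its upstream. A sketch of that arrangement (server names as in the config above; the /download/ prefix is invented):

        server {
            listen 80 default_server;
            server_name *.domanin.com;

            # protected files, reachable only via X-Accel-Redirect
            location /files/ {
                internal;
                root /home/domanin;
            }

            # auth check is proxied to the Django app; if its response
            # carries X-Accel-Redirect: /files/..., this nginx serves the file
            location /download/ {
                proxy_pass http://domain.com;
            }
        }

    If download requests cannot be routed through the file server, a signed-URL scheme such as nginx's secure_link module is the usual alternative.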

    Read the article
