Search Results

Search found 11601 results on 465 pages for 'bulk action'.

Page 176/465 | < Previous Page | 172 173 174 175 176 177 178 179 180 181 182 183  | Next Page >

  • Ensure Macs get correct machine name from DHCP?

    - by Greg Whitfield
    I have a problem in our network where our Macs occasionally get given the wrong machine name while, I guess, getting a new DHCP lease. The DHCP servers are Windows based - the bulk of our network is Windows, but we have some Linux machines and an increasing number of Macs. The specific problem is that occasionally a Mac will take on the name of another machine in the network. For example, I have a new MacBook Pro. In the OS X setup it gets called "gomez", and it initially starts up on the network with that name without any problems. But after a few days, when the machine had been restarted (it had several restarts in the meantime), it ended up being called "florrie", which is actually the name of another machine in another part of the network. All network ops work fine, and indeed you don't notice most of the time - it's only when you run apps like Perforce that require the hostname that you get problems. I'm sorry I don't have more info than that, but if I know what to look for I can dig out some more facts. Any hints on checking the network setup would be useful.
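    One thing worth trying, if the aim is simply to keep a Mac from adopting a DHCP-supplied name, is to pin the machine's names locally so a new lease cannot override them. A minimal sketch; the name "gomez" is just the example from the question:

        # Pin the names on the Mac itself so a DHCP lease cannot rename it
        sudo scutil --set ComputerName "gomez"
        sudo scutil --set LocalHostName "gomez"
        sudo scutil --set HostName "gomez"
        # Check what the machine currently believes its name is
        scutil --get HostName
        hostname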

    Read the article

  • NX Client for Windows 7 Opens Remote Desktop in Multiple Windows

    - by Corey Kennedy
    What I'm trying to do: access my Ubuntu desktop remotely via NX Client on my Windows 7 laptop. My environment: the server is Ubuntu 10.10 on an AMD 1GHz/512MB RAM PC; the client is Windows 7 on a ThinkPad SL510. Software: the server is running NXServer 3.4.0 with the xfce4 window manager; the laptop is using NX Client for Windows. In my NX Client "Desktop" settings I've selected "Unix" and "Custom" for OS and environment. I've also specified "startxfce4" as the application to launch when NX connects. I am able to authenticate an NX session on my laptop. By this I mean, I can start the client on my laptop, enter credentials for my Linux user, and NX establishes a connection to the server and attempts to open a remote desktop window. The problem, though, is that this remote desktop is "fragmented" into many windows. One window will display the bulk of my desktop (complete with desktop icons for "Home," "File System," and "Trash") while another window will contain the taskbar, and another window will contain the application strip. I can select each of these windows individually, but I cannot click on any objects within them. I've searched Super User, Ubuntu Forums, NX help, Server Fault, and tried many Google searches - none have turned up another case of this particular problem. I'm stumped. Does anyone have any suggestions for what I might try? I'm guessing the problem has to do with my xfce config files, but I've only just set up this server - it's been a long time since I've used Linux and there's a lot I just don't know. What I am NOT trying to do: use Desktop Sharing from Ubuntu, whereby I VNC into a desktop that I've already established on the server. I am trying to configure this Linux box as a headless server that I can stash someplace out-of-the-way in my house, then interact with through my laptop. I don't want to have a monitor or keyboard connected to the Linux box. Thanks for your help!

    Read the article

  • Windows 2008 R2 Task Scheduler Failure

    - by Jonathan Parker
    I have an application (.exe) which I am running via a scheduled task on Windows Server 2008 R2. The task runs fine, but when the .exe returns a non-zero exit code the task is still reported as successful when it should fail. I get this message: Task Scheduler successfully completed task "\CustomerDataSourceETL - Whics" , instance "{a574f6b4-2614-413c-8661-bc35eaeba7cd}" , action "E:\applications\CCDB-ETL\CustomerDataSourceETLConsole.exe" with return code 214794259. How can I get Task Scheduler to detect that the return code is not 0 and fail the task?

    Read the article

  • Install Python 2.6 on Debian Unix

    - by Bialecki
    I want to install Python 2.6, but as it's still experimental for Debian, I'm wondering what the best course of action might be. Is the right idea to install it into /usr/local on my system and then update the python symlink in /usr/bin to point to that version? Are there other considerations or ways to do it that I should be thinking about?
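    A common approach here is to build into /usr/local with "make altinstall" and leave /usr/bin/python alone, since Debian's own tooling depends on the system Python. A minimal sketch, assuming a source tarball (the version and URL are examples):

        # Build Python 2.6 from source into /usr/local without touching the system python
        wget http://www.python.org/ftp/python/2.6.5/Python-2.6.5.tgz
        tar xzf Python-2.6.5.tgz
        cd Python-2.6.5
        ./configure --prefix=/usr/local
        make
        sudo make altinstall   # installs /usr/local/bin/python2.6, leaves "python" untouched
        # Call the new interpreter explicitly instead of repointing /usr/bin/python
        /usr/local/bin/python2.6 --version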

    Read the article

  • Very long (>300s) request processing time on Apache Server serving static content from particular IP

    - by Ron Bieber
    We are running an Apache 2.2 server for a very large web site. Over the past few months we have had some users reporting slow response times, while others (including our own resources, both on the internal network and our home networks) do not see any degradation in performance. After a ton of investigation, we finally found a "Deny from none" statement in our configuration that was causing reverse DNS lookups (which were timing out); removing it solved the bulk of our issues, but we still have some customers that we are seeing in the Apache logs (using %D in the log format) with request processing times of 300s for images, CSS, JavaScript and other static content. We've checked all Deny / Allow statements for recurrences of "none", as well as all other things we know of that would cause reverse DNS lookups (such as using "REMOTE_HOST" in rewrite rules, or using %a instead of %h in our log format configuration), and verified that HostnameLookups is set to "Off". As an aside, we've also validated that reverse DNS lookups for the folks having this problem do not time out - so I'm fairly certain DNS is not an issue in this case. I've run out of ideas. Are there any Apache configuration scenarios that someone can point me to that I might be missing that would cause request times for static content to take so long only for certain users? Thank you in advance.
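    One way to narrow this down is to reproduce the timing from an affected client and double-check the running config for anything that still forces name lookups. A rough sketch, assuming a stock httpd layout and that %D is logged as the last field (the URL and paths are examples):

        # Time a single static request end-to-end from an affected client's machine
        curl -o /dev/null -s -w 'dns:%{time_namelookup} connect:%{time_connect} total:%{time_total}\n' \
             http://www.example.com/images/logo.png

        # Look for anything that could still trigger reverse DNS lookups
        grep -ri 'HostnameLookups\|Deny from\|Allow from' /etc/httpd/conf /etc/httpd/conf.d

        # Pull the slowest requests out of an access log where %D is the last field
        awk '{print $NF, $7}' /var/log/httpd/access_log | sort -rn | head -20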

    Read the article

  • VMWare converter performance

    - by bellocarico
    Hello, I have a question about my test lab. It's more to understand the concept than to apply this in production: I have an ESXi host with a few Linux/Windows VMs configured, and I'd like to use VMware Converter to create backups. To speed up the process I decided to create a Windows VM on the same ESXi host, where I've installed Windows 7 and VMware Converter. The host has a gigabit card but it's currently connected to a 100Mb FD port. Windows 7 sees a 1Gb card connected. When I do the backup using VMware Converter I specify the host IP as source and destination, so I thought the copy would be faster than using my laptop across the network. Well, to cut a long story short: I get dreadful performance (4Mb/sec). I'm a bit confused by this, because despite the fact that the host uplink is running at 100Mb, communication between VMs and the host shouldn't (correct me if I'm wrong) have any such limitation. I did tweak Windows 7 to optimise network performance, but I got just a little improvement. I still need 4 hours to back up a 50Gb (thin) VM. Additionally I wanted to ask: would jumbo frames help with this? I know that jumbo frames have to be supported end to end, and the network switch where the host is currently connected doesn't support them, but I was wondering: 1) Does the ESXi host support jumbo frames at all? 2) Can I enable them somehow? 3) If I do so, I guess bulk transfers between VMs and the host would improve, but would this affect the communication going through the real switch, as that doesn't do jumbo? Thanks for reading
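    On questions 1 and 2: ESX/ESXi of that era does support jumbo frames, but the vSwitch MTU is set from the command line rather than the GUI, and it only helps if every device in the path (vSwitch, VMkernel interface, physical switch, target NIC) carries the larger MTU. A hedged sketch from the host's Tech Support Mode console; vSwitch0 is an example name:

        # Raise the MTU on a standard vSwitch and confirm it afterwards
        esxcfg-vswitch -m 9000 vSwitch0
        esxcfg-vswitch -l
        # Any VMkernel interface carrying the traffic needs a matching MTU as well
        esxcfg-vmknic -l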

    Read the article

  • Utility to LOGICALLY compare two xml files?

    - by Matthew
    Right now we are attempting to build golden configurations for our environment. One piece of software that we use relies on large XML files to contain the bulk of its configuration. We want to take our lab environment, catalog it as our "golden configuration" and then be able to audit against that configuration in the future. Since diff is a bytewise comparison and NOT a logical comparison, we can't use it to compare files in this case (XML is unordered, so it won't work). What I am looking for is something that can parse the two XML files, and compare them element by element. So far we have yet to find any utilities that can do this. OS doesn't matter, I can do it on anything where it will work. The preference is something off the shelf. Any ideas? Edit: One issue we have run into is that one vendor's config files will occasionally mention the same element several times, each time with different attributes. Whatever diff utility we use would need to be able to identify either the set of attributes or identify them all as part of one element. Tall order :)
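    One off-the-shelf option is a tree-aware comparison tool such as the Python xmldiff package, which ships a command-line utility of the same name; whether it copes with the repeated-element case from the edit would need testing against the vendor's files. A sketch, assuming Python and pip are available (the file names are placeholders):

        # Install a tree-aware XML diff tool
        pip install xmldiff
        # Compare the golden configuration against the current one element by element
        xmldiff golden-config.xml current-config.xml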

    Read the article

  • nocheck within admin file for pkgadd still asks questions

    - by romant
    I place the following into an admin file called noask:

        mail=
        instance=overwrite
        partial=nocheck
        runlevel=nocheck
        idepend=nocheck
        rdepend=nocheck
        space=nocheck
        setuid=nocheck
        conflict=nocheck
        action=nocheck
        basedir=default

    Then I run pkgadd -a noask -d sed-4.1.5-sol10-x86-local - yet I am still queried with: 'Select package(s) you wish to process'. Is there a way around the questioning without doing an "echo yes" at the front? Thank you
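    The remaining prompt is the package-selection prompt, which the admin file does not cover; it normally goes away if the package instance (or the keyword all) is named on the command line, optionally combined with pkgadd's non-interactive flag. A minimal sketch:

        # Name the package(s) explicitly so pkgadd has nothing left to ask about
        pkgadd -a noask -d sed-4.1.5-sol10-x86-local all
        # or, additionally, run in non-interactive mode on top of the admin file
        pkgadd -n -a noask -d sed-4.1.5-sol10-x86-local all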

    Read the article

  • Merging cuesheet chapter halves into single track for an audiobook

    - by TheSavo
    I have an audiobook that I have ripped and I need some help constructing chapters. I have already made some cue sheets:

        TITLE "Bookname"
        PERFORMER "the Author"
        FILE "File1.FLAC" wave ; 23971906.667 milliseconds
          TRACK 01 AUDIO
            TITLE "_Intro"
            INDEX 01 00:00:00
          TRACK 02 AUDIO
            TITLE "CH 01"
            INDEX 01 24:15:50
          TRACK 03 AUDIO
            TITLE "CH 02"
            INDEX 01 66:21:00
          TRACK 04 AUDIO
            TITLE "CH 03"
            INDEX 01 87:05:00

    The audiobook is in two files. The chapter at the end of the first file is continued in the second file. However, the second file restates the publisher, the book title, and so on ("blah blah blah"). I would like to merge the two 'halves' of the chapter into one seamless track. The only way I can think to do this would be: bulk cut down the tracks, drop the junk info into a junk track, continue the track listings as normal, then take the two "halves" of the target chapter and build a separate cue sheet for it. I know there has to be an easier way. I am OK with making the 'junk' info a 'gap' or something. These are FLAC files that will be converted to MP3 for my phone and other portable devices. I have read the primers on cue sheets, but I am just not getting it.
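    One possible route is to cut the restated intro off the front of the second file with shntool and append the remainder to the first file, then re-cue against the single joined file. A sketch under the assumption that shntool with FLAC support is installed; the 0:45 split point is a placeholder for wherever the repeated material actually ends:

        # Split the second file where the restated intro ends (split point read from stdin)
        echo "0:45" | shntool split -o flac File2.flac
        # -> split-track01.flac (the repeated intro) and split-track02.flac (the rest)

        # Join the first file and the remainder of the second into one continuous file
        shntool join -o flac File1.flac split-track02.flac
        # -> joined.flac; build the final cue sheet against this single file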

    Read the article

  • How can I find the trigger of an ACPI event?

    - by n00ki3
    My server shuts down every time at midnight. The ACPI event power_button is triggered. In /etc/acpi/events/power_button I have:

        # care about the power button
        event=button/power.*
        action=/usr/lib/acpid/power_button

    How can I find out the "caller" or the trigger of this event?
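    Before blaming ACPI itself, it is worth checking what else fires at midnight and watching the events arrive live. A rough sketch; log locations vary by distribution:

        # Watch ACPI events in real time (leave running across midnight, capture the output)
        acpi_listen

        # Check whether anything scheduled actually calls shutdown/poweroff around 00:00
        grep -r 'shutdown\|poweroff\|halt' /etc/cron* /var/spool/cron 2>/dev/null
        crontab -l

        # acpid's own logging often names the event and the script it ran
        grep -i power /var/log/acpid /var/log/messages /var/log/syslog 2>/dev/null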

    Read the article

  • Can I take my ReadyNAS drive in RAID1 and plug it straight into a different machine?

    - by jacko
    I would assume that I can just take my HDD out of my NAS (in a RAID1 mirror), plug it into another enclosure, and have it work off the bat, but I'd like to make sure... Any ideas? Edit: My current setup is a Netgear ReadyNAS in (hardware) RAID1. I'm hoping to replace this with a home theatre type PC (possibly running Ubuntu), and would like to migrate my data without having to do a bulk transfer over my network between the 2 machines. Can anyone confirm the case for the Netgear ReadyNAS? Edit: OK, after further reading it seems that the ReadyNAS Duo formats my drive as ext3 in 16k blocks. There are instructions for mounting a drive into a Linux box here: Mounting Sparc-based ReadyNAS Drives in x86-based Linux. There is also talk about a Linux image here: ReadyNAS Data Recovery - VMware recovery tool. I'm not sure whether this means the ReadyNAS actually implements software RAID under the hood, or what. So it appears like it IS doable, but do any of you Linux gurus know whether this is viable, and whether the fact that the drives are in RAID1 affects matters?
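    A word of caution before relying on a straight mount: the sparc-based ReadyNAS units format their volumes as ext3 with a 16K block size, and a stock x86 Linux kernel (4K page size) cannot mount a filesystem whose block size exceeds its page size - which is exactly why the vendor points at the sparc-based recovery image. A sketch of what to check on the new machine; the device names are examples:

        # Identify the partitions and confirm the filesystem block size
        fdisk -l /dev/sdb
        dumpe2fs -h /dev/sdb3 | grep -i 'block size'

        # A read-only mount attempt; expect this to fail on x86 if the block size is 16384
        mkdir -p /mnt/readynas
        mount -t ext3 -o ro /dev/sdb3 /mnt/readynas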

    Read the article

  • HP-UX cannot boot from Ignite tape

    - by Spirit
    We have an HP rp2470 server running HP-UX 11.00, with LVM mirroring. For redundancy we have a second rp2470 with the same hardware (same two processors, same RAM, same two HDDs, same number of LAN cards). I want to clone the first one to the second. For that purpose I am making an Ignite tape with the following command: make_tape_recovery -x inc_entire=vg00. The Ignite tape finishes without problems. When I boot the second server from this Ignite tape, the server starts to boot and the Ignite restore finishes without any errors, only a few notes, which are normal. However, vmunix does not boot; when the restore finishes, it drops to the ISL prompt. From there I cannot boot /stand/vmunix. I tried to run the recovery shell, but with no success. When the recovery shell asks to do frecover to restore critical files, I receive the error: frecover(5405): unable to open /dev/rmt/0m. At first I thought that the problem might be the difference in the firmware versions of the servers: the fw version of the production server is Firmware Version 43.50 and the fw version of the backup server is Firmware Version 42.19. So I did a fw upgrade of my backup server so that both servers are at v43.50, and tried a recovery, but again I can't boot the system. Next I made another archive tape with the -I (Interactive) flag: make_tape_recovery -I -x inc_entire=vg00, and tried a recovery with it; again no good. I cannot find any errors or warnings in the Ignite log, and I cannot boot HP-UX. I only get to the ISL prompt. This is what I've noticed in the GSP logs:

        ************* SYSTEM ALERT **************
        SYSTEM NAME: mcnfwim1
        DATE: 07/27/2003   TIME: 10:18:49
        ALERT LEVEL: 6 = Boot possible, pending failure - action required
        REASON FOR ALERT
        SOURCE: 8 = I/O
        SOURCE DETAIL: 6 = disk   SOURCE ID: 0
        PROBLEM DETAIL: 0 = no problem detail
        LEDs: RUN ATTENTION FAULT REMOTE POWER
              FLASH OFF ON ON ON
        LED State: Boot Failed. Running non-OS code. Check Chassis and Console Logs for error messages.
        0x00000060860010B0 00000000 00000000 - type 0 = Data Field Unused
        0x58000860860010B0 00006706 1B0A1231 - type 11 = Timestamp 07/27/2003 10:18:49

    And another GSP log:

        Log Entry # 3 :
        SYSTEM NAME: mcnfwim1
        DATE: 07/27/2003   TIME: 10:12:20
        ALERT LEVEL: 6 = Boot possible, pending failure - action required
        SOURCE: 8 = I/O
        SOURCE DETAIL: 6 = disk   SOURCE ID: 0
        PROBLEM DETAIL: 0 = no problem detail
        CALLER ACTIVITY: 1 = test   STATUS: 0
        CALLER SUBACTIVITY: 0B = implementation dependent
        REPORTING ENTITY TYPE: 0 = system firmware   REPORTING ENTITY ID: 00
        0x00000060860010B0 00000000 00000000   type 0 = Data Field Unused
        0x58000860860010B0 00006706 1B0A0C14   type 11 = Timestamp 07/27/2003 10:12:20
        Type CR for next entry, - CR for previous entry, Q CR to quit.

    Please note that I cannot change anything on the production server. I can only make changes to the backup server. Any help is appreciated.
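    The "frecover(5405): unable to open /dev/rmt/0m" message suggests the restored system (or the recovery environment) simply has no usable device file for the tape drive. A hedged sketch of checks worth running from the recovery shell, or from a normally booted HP-UX system, before the frecover stage:

        # Confirm the tape drive is seen and claimed
        ioscan -fnC tape
        # Recreate any missing device special files for the tape class
        insf -e -C tape
        ls -l /dev/rmt/
        # Sanity-check the drive itself
        mt -f /dev/rmt/0m status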

    Read the article

  • Conditional formatting with date and time

    - by Kiran
    I have a problem with conditional formatting on a date and time value. I have a cell A1 that contains a date and time, and I want to conditionally format its adjacent cell: if the date in cell A1 is more than 3 days before today, then cell A2 should show "Follow-up Required" and the cell colour should turn red; if the date in cell A1 is less than 3 days before today, no action is required. Please help. Regards, Kiran

    Read the article

  • Google Chrome equivalent to Firefox microsummary bookmarks?

    - by Sam Hasler
    Is there a Google Chrome equivalent to Firefox microsummary bookmarks? (If there isn't one already I'm guessing it should be possible to automate turning them into Chrome extensions, although the summary would have to be in a popup from a Browser Action icon.) Update: to be specific, I want something to replace my Stackoverflow microsummaries; I'm not talking about the RSS live titles bookmarks.

    Read the article

  • Postfix flow/hook reference, or high-level overview?

    - by threecheeseopera
    The Postfix MTA consists of several components/services that work together to perform the different stages of delivery and receipt of mail; these include the smtp daemon, the pickup and cleanup processes, the queue manager, the smtp service, pipe/spawn/virtual/rewrite ... and others (including the possibility of custom components). Postfix also provides several types of hooks that allow it to integrate with external software, such as policy servers, filters, bounce handlers, loggers, and authentication mechanisms; these hooks can be connected to different components/stages of the delivery process, and can communicate via (at least) IPC, network, database, several types of flat files, or a predefined protocol (e.g. milter). An old and very limited example of this is shown at this page. My question: Does anyone have access to a resource that describes these hooks, the components/delivery stages that the hook can interact with, and the supported communication methods? Or, more likely, documentation of the various Postfix components and the hooks/methods that they support? For example: Given the requirement "if the recipient primary MX server matches 'shadysmtpd', check the recipient address against a list; if there is a match, terminate the SMTP connection without notice". My software would need to 1) integrate into the proper part of the SMTP process, 2) use some method to perform the address check (TCP map server? regular expressions? mysql?), and 3) implement the required action (connection termination). Additionally, there will probably be several methods to accomplish this, and another requirement would be to find that which best fits (ex: a network server might be faster than a flat-file lookup; or, if a large volume of mail might be affected by this check, it should be performed as early in the mail process as possible). Real-world example: The apolicy policy server (performs checks on addresses according to user-defined rules) is designed as a standalone TCP server that hooks into Postfix inside the smtpd component via the directive 'check_policy_service inet:127.0.0.1:10001' in the 'smtpd_client_restrictions' configuration option. This means that, when Postfix first receives an item of mail to be delivered, it will create a TCP connection to the policy server address:port for the purpose of determining if the client is allowed to send mail from this server (in addition to whatever other restrictions / restriction lookup methods are defined in that option); the proper action will be taken based on the server's response. Notes: 1)The Postfix architecture page describes some of this information in ascii art; what I am hoping for is distilled, condensed, reference material. 2) Please correct me if I am wrong on any level; there is a mountain of material, and I am just one man ;) Thanks!
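    To make the real-world example concrete: wiring such a policy server into the smtpd stage is a one-line restriction entry, and the same check_policy_service lookup can be attached to other restriction stages (client, sender, recipient) depending on how early in the conversation the decision must be made. A minimal sketch using postconf; the address and port are the example values from the question:

        # Register the external policy server with the smtpd component
        postconf -e 'smtpd_client_restrictions = check_policy_service inet:127.0.0.1:10001, permit'
        postfix reload

        # The same service could instead be consulted later in the SMTP conversation, e.g.
        #   smtpd_recipient_restrictions = ..., check_policy_service inet:127.0.0.1:10001, ...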

    Read the article

  • qsub: How can I find out what DRM middleware exactly is installed on a cluster?

    - by gojira
    I have a user account on a very big cluster. I have previous experience with Grid Engine and want to use the cluster for array jobs. The documentation tells me to use "qsub" for load balancing / submission of many jobs. Therefore I assumed this means the cluster has Grid Engine. However, all my Grid Engine scripts failed to run. I checked the documentation and it is a bit weird. Now I slowly suspect that this cluster does not actually have Grid Engine; maybe it's running something called Torque (?!). The whole terminology in the man pages is a bit weird for me as a Grid Engine user; for example they talk about "bulk jobs" instead of "array jobs". There is no reference to the variables I rely on, like SGE_TASK_ID etc. Instead they refer to variables starting with PBS_. Still, there are qsub and qstat commands. Also qsub behaves differently; apparently it is not possible to specify the command line parameters with bash-script comments etc. There is documentation for the cluster system, but it does not say what the DRM middleware actually is - it refers to the entire DRM system simply as "qsub". I tried qsub --version and got: qsub: 1.2 2010/8/17. I am not sure what I am actually running when I invoke qsub on that cluster! My question is, how can I find out if I am running Grid Engine or Torque (or whatever it is), and which version?
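    A few probes usually settle which DRM sits behind qsub, since each system has commands and environment variables the other lacks. A sketch; availability of each command depends on what is installed on the login node:

        qstat --version              # Torque/PBS replies with a version string; Grid Engine's qstat does not
        pbsnodes -a | head           # Torque/PBS node listing - not present on Grid Engine
        qconf -sss 2>/dev/null       # Grid Engine only: prints the scheduler host
        echo "$SGE_ROOT"             # set on Grid Engine installations
        man qsub | head              # the man page header names the product (PBS/Torque vs SGE)
        rpm -qa | grep -iE 'torque|pbs|gridengine|sge'   # if the package list is visible to you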

    Read the article

  • Stop fail2ban stop/start notifications

    - by michael
    If the server is restarted, or even if fail2ban is stopped/started, it sends a notification.

        [asterisk-iptables]
        enabled  = true
        filter   = asterisk
        action   = iptables-allports[name=ASTERISK, protocol=all]
                   sendmail-whois[name=ASTERISK, [email protected], [email protected]]
        logpath  = /var/log/asterisk/messages
        maxretry = 5
        bantime  = 259200

    Removing the sendmail-whois stops it, but it also stops the ban notifications. How can I get it to stop notifying me when the process starts/stops? Thanks
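    The start/stop mails come from the actionstart and actionstop blocks of the sendmail-whois action, so one workaround is a local copy of that action with those two blocks emptied, referenced from the jail instead of the stock action. A hedged sketch; the paths assume the stock action.d layout and "sendmail-whois-quiet" is just an example name:

        cp /etc/fail2ban/action.d/sendmail-whois.conf /etc/fail2ban/action.d/sendmail-whois-quiet.conf
        # Edit the copy so that actionstart and actionstop do nothing - both are multi-line
        # values in the stock file, so remove their continuation lines as well:
        #     actionstart =
        #     actionstop =
        # Then point the jail at the quiet copy:
        #     action = iptables-allports[name=ASTERISK, protocol=all]
        #              sendmail-whois-quiet[name=ASTERISK, ...]
        service fail2ban restart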

    Read the article

  • How to unblacklist an IP at Google?

    - by DJRayon
    I own a small business with two servers for webhosting. When setting up the primary server (CentOS 5.5 + WHM; the secondary is WHM DNS Only) I kinda messed up the firewall, so hackers could send stuff from my server. My primary IP is x.y.29.218. Anyway - I got blacklisted in several places, but those blacklistings have been gone for a week or so; Google, however, still has my IP blacklisted. I'm suffering serious damage because of that. Many clients want to switch from my hosting, etc. I've fixed the hole with the CSF firewall SMTP_BLOCK option and also enabled the WHM SMTP Tweak. Currently all I see from the Main Email View Mail Statistics (Errors section) in WHM is rows and rows of the following message:

        removed-the-email-address-for-security R=lookuphost T=remote_smtp: SMTP error from remote mail server after end of data: host aspmx.l.google.com [a.b.39.27]: 550-5.7.1 [x.y.29.218 1] Our system has detected an unusual rate of\n550-5.7.1 unsolicited mail originating from your IP address. To protect our\n550-5.7.1 users from spam, mail sent from your IP address has been blocked.\n550-5.7.1 Please visit http://www.google.com/mail/help/bulk_mail.html to review\n550 5.7.1 our Bulk Email Senders Guidelines. h24si3868764fas.171

    What are my options? I have one IP free. How can I configure Exim to send mail from that IP? My brain is constantly blowing up because of this problem. Please, someone who knows how to deal with this situation, give me some kind of help - any help, suggestions, etc. I've tried everything I know, and I still don't know much, because this is the first time (I just started webhosting) that I'm dealing with real physical servers and not some kind of pre-setup VPS solution. Many, many thanks to whoever has time to offer some help.
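    On the "send from the spare IP" part: Exim's smtp transport has an interface option that selects the outgoing source address, and cPanel/WHM exposes a supported way of doing the same via /etc/mailips so the change survives Exim config rebuilds. A heavily hedged sketch - the IP is a placeholder and the exact WHM option wording may differ between versions:

        # Plain Exim (outside WHM): pin the outgoing interface on the smtp transport
        #     remote_smtp:
        #       driver = smtp
        #       interface = x.y.30.10
        #
        # cPanel/WHM route: enable the option that references /etc/mailips for outgoing
        # SMTP in the Exim Configuration Manager, then map all (or specific) domains:
        echo "*: x.y.30.10" >> /etc/mailips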

    Read the article

  • How to Reinstall MSSQL Server 2008 with SP1? (Windows 7)

    - by user23884
    I am using Windows 7 Ultimate x64. I had earlier installed SQL Server 2008 with SP1 along with Visual Studio 2008 Team System with SP1. Now that VS2010 is out I wanted to install it, so I uninstalled Visual Studio, then MSSQL Server 2008 SP1, and then SQL Server 2008, as suggested here: h**p://mark.michaelis.net/Blog/SQLServer2008InstallNightmare.aspx. But now when I try to reinstall it I am unable to get it right; I am getting the ERROR: "Attempted to perform an unauthorized operation." (The following is part of the log file):

        2010-04-16 04:54:57 Slp: Sco: Attempting to replace account with sid in security descriptor D:(A;CI;KR;;;S-1-5-21-2213424280-2581054173-1939225444-1027)
        2010-04-16 04:54:57 Slp: ReplaceAccountWithSidInSddl -- SDDL to be processed: D:(A;CI;KR;;;S-1-5-21-2213424280-2581054173-1939225444-1027)
        2010-04-16 04:54:57 Slp: ReplaceAccountWithSidInSddl -- SDDL to be returned: D:(A;CI;KR;;;S-1-5-21-2213424280-2581054173-1939225444-1027)
        2010-04-16 04:54:57 Slp: Prompting user if they want to retry this action due to the following failure:
        2010-04-16 04:54:57 Slp: ----------------------------------------
        2010-04-16 04:54:57 Slp: The following is an exception stack listing the exceptions in outermost to innermost order
        2010-04-16 04:54:57 Slp: Inner exceptions are being indented
        2010-04-16 04:54:57 Slp:
        2010-04-16 04:54:57 Slp: Exception type: Microsoft.SqlServer.Configuration.Sco.ScoException
        2010-04-16 04:54:57 Slp: Message:
        2010-04-16 04:54:57 Slp: Attempted to perform an unauthorized operation.
        2010-04-16 04:54:57 Slp: Data:
        2010-04-16 04:54:57 Slp: WatsonData = Microsoft SQL Server
        2010-04-16 04:54:57 Slp: DisableRetry = true
        2010-04-16 04:54:57 Slp: Inner exception type: System.UnauthorizedAccessException
        2010-04-16 04:54:57 Slp: Message:
        2010-04-16 04:54:57 Slp: Attempted to perform an unauthorized operation.
        2010-04-16 04:54:57 Slp: Stack:
        2010-04-16 04:54:57 Slp: at System.Security.AccessControl.Win32.GetSecurityInfo(ResourceType resourceType, String name, SafeHandle handle, AccessControlSections accessControlSections, RawSecurityDescriptor& resultSd)
        2010-04-16 04:54:57 Slp: at System.Security.AccessControl.NativeObjectSecurity.CreateInternal(ResourceType resourceType, Boolean isContainer, String name, SafeHandle handle, AccessControlSections includeSections, Boolean createByName, ExceptionFromErrorCode exceptionFromErrorCode, Object exceptionContext)
        2010-04-16 04:54:57 Slp: at Microsoft.SqlServer.Configuration.Sco.SqlRegistrySecurity..ctor(ResourceType resourceType, SafeRegistryHandle handle, AccessControlSections includeSections)
        2010-04-16 04:54:57 Slp: at Microsoft.SqlServer.Configuration.Sco.SqlRegistrySecurity.Create(InternalRegistryKey key)
        2010-04-16 04:54:57 Slp: at Microsoft.SqlServer.Configuration.Sco.InternalRegistryKey.SetSecurityDescriptor(String sddl, Boolean overwrite)
        2010-04-16 04:54:57 Slp: ----------------------------------------
        2010-04-16 10:37:19 Slp: User has chosen to cancel this action
        2010-04-16 10:37:19 Slp: Watson Bucket 2 Original Parameter Values
        2010-04-16 10:37:19 Slp: Parameter 0 : SQL2008@RTM@
        2010-04-16 10:37:19 Slp: Parameter 2 : System.Security.AccessControl.Win32.GetSecurityInfo
        2010-04-16 10:37:19 Slp: Parameter 3 : Microsoft.SqlServer.Configuration.Sco.ScoException@1211@1
        2010-04-16 10:37:19 Slp: Parameter 4 : System.UnauthorizedAccessException@-2147024891
        2010-04-16 10:37:19 Slp: Parameter 5 : SqlBrowserConfigAction_install_ConfigNonRC
        2010-04-16 10:37:19 Slp: Parameter 7 : Microsoft SQL Server
        2010-04-16 10:37:19 Slp: Parameter 8 : Microsoft SQL Server
        2010-04-16 10:37:19 Slp: Final Parameter Values

    I have googled around for the given error, but all I could find is to use regedit and reset permissions on certain registry keys, and I don't see any registry keys with access problems in the log file. The log file can be downloaded here: http://www.mediafire.com/?dznizytjznn. Please help me out here, guys - I am a developer and I cannot afford an OS reinstallation! Thanks in advance…

    Read the article

  • My client's solution of a Windows SBS 2011 VM on an Ubuntu host and VirtualBox is pinning the host CPU

    - by Scott Stamp
    Here's my situation: I've got a client hosting two servers (one a VM), with the host providing VMware Zimbra and the guest running Windows Small Business Server 2011. Unfortunately, the person before me had configured this setup as follows. Host: Ubuntu Desktop Edition 10.04 (I know, again, not my choice) running VMware Zimbra; 8GB of RAM; on-board RAID1 of two 320GB Seagate Barracuda drives for the OS; software RAID5 of four 500GB WD Caviar Black drives on mdadm for bulk storage (sorry, I don't know the model #); a relatively competent quad-core Intel Core i7 CPU from the Nehalem architecture (not suspicious of this as the bottleneck). Guest: Windows Small Business Server 2011; 4GB of RAM; host-equivalent CPU allocation; VDI file for the OS hosted on the on-board RAID, VDI file for storage hosted on the on-board RAID. For some reason when running, the VM locks up when sitting nearly idle, and the VirtualBox process reports values of 240%+ in top (how is that even possible?!). Anyone have any ideas or suggestions? I'm totally stumped on this one. Happy to provide whatever logs you'd like to take a look at. Ideally I'd drop VirtualBox and provision this with VMware Workstation, but the client has objected to the (very nominal) costs involved. If hardware needs to be purchased to help, it will be, but we're considering upgrades a last resort at this time. Thanks in advance! *fingers crossed*
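    Two notes that may help. First, top reports a multi-threaded process as the sum of its usage across cores, so 240%+ for a single VirtualBox process is expected on a quad-core host; the lock-ups are the real symptom. Second, it is worth confirming that hardware virtualisation is actually available and enabled for the guest. A sketch from the Ubuntu host, with the guest shut down ("SBS2011" is a placeholder VM name):

        # Is VT-x/AMD-V exposed by the host CPU at all?
        egrep -c '(vmx|svm)' /proc/cpuinfo

        # Inspect and, if needed, adjust the guest's acceleration settings
        VBoxManage list vms
        VBoxManage showvminfo "SBS2011" | grep -iE 'hwvirt|nested|ioapic|cpus|memory'
        VBoxManage modifyvm "SBS2011" --hwvirtex on --nestedpaging on --ioapic on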

    Read the article

  • Including email, IMs, configs, etc. in documentation or notes

    - by Jason Antman
    The shop I work in is pretty laid-back. We're on a documentation kick, only because historically we've been very bad with it. We do a lot of our brainstorming in face-to-face meetings, and also do a lot of communication via IM in addition to email. While I'm usually pretty good about documentation and keeping copious lab notes, I just finished a build of a host and spent hours searching through IMs, emails, files on my workstation, etc. to pull out anything I missed in my lab notes, which formed a large part of the basis for the internal documentation. Does anyone have any thoughts on, aside from manually saving things to a project directory, managing various data sources (especially email and IM) and tracking them on a per-project basis? Ideally, I'd like an easy way to put copies of emails, IM logs, etc. into a project-specific directory on my workstation and then just have a cron job that syncs that up with a shared folder. This isn't really a candidate for anything more advanced, as the bulk of the data will be copies of configs, code, etc. Here are the big restrictions: Email is via a centralized Zimbra install, so nothing can happen server-side. My workstation is Linux. Aside from writing Pidgin and Thunderbird plugins that let me tag chats and emails as belonging to a project, and then copy them to the appropriate place... any thoughts? Suggestions? Thanks, Jason
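    The cron-plus-rsync half of this is simple enough to sketch; the manual step is still deciding what gets copied into each project directory. A minimal example, assuming per-project folders under ~/projects and Pidgin logs in their default location (all paths are placeholders):

        # crontab entry: push the local project notes to the shared folder nightly at 02:00
        0 2 * * *  rsync -a --delete "$HOME/projects/" /mnt/shared/projects/

        # one-off: copy the relevant Pidgin conversation logs into a project's directory
        cp -r ~/.purple/logs/jabber/me@example.com/colleague@example.com \
              ~/projects/hostbuild/im-logs/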

    Read the article
