Search Results

Search found 298 results on 12 pages for 'warren j thompson'.

Page 6/12 | < Previous Page | 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • What is the value/cost of enabling "spread spectrum clocking" on my hard drives?

    - by Stu Thompson
    I'm building up a biggish NAS box (10x WD RE4 2TB SATA RAID10) and ran into some problems. During the course of my research, debugging, investigations, etc., I discovered a jumper on the physical drives labeled "spread spectrum clocking". After some googling about this on the internet, it seems to be a feature that some suggest enabling (without reference or explanation) in 'a storage configuration', and that it makes the drive less susceptible to EMI. But why? I've got three core questions: Why is this feature not enabled by default? What are the actual benefits? Are there any costs?

    Read the article

  • Why does F@H not bind to more than one core on Windows?

    - by warren
    I have been contributing to Stanford's Folding@Home project for some time with most of the computers I own. I just installed the Windows client on a new machine running Windows 7, but see that the F@H process only binds to one CPU core. Is this due to it being run on Windows? (I have the 64-bit edition of Windows 7 installed.) On the Mac and under 64-bit Linux distros, it will run across all available CPU cores.

    Read the article

  • Recurring Apache 2.0.52 error on CentOS 4 - 'could not create `rewrite_log_lock`'

    - by warren
    I have been seeing a recurring issue on my web server:

        [Sun May 16 03:10:19 2010] [crit] (28)No space left on device: mod_rewrite: could not create rewrite_log_lock Configuration Failed
        [Sun May 16 04:10:05 2010] [crit] (28)No space left on device: mod_rewrite: could not create rewrite_log_lock Configuration Failed
        [Sun May 16 05:10:04 2010] [crit] (28)No space left on device: mod_rewrite: could not create rewrite_log_lock Configuration Failed
        [Sun May 16 05:17:13 2010] [crit] (28)No space left on device: mod_rewrite: could not create rewrite_log_lock Configuration Failed

    So far, the only fix I have found to this when it happens is to reboot my server. This is non-ideal :-\ Restarting httpd does not clear the error. df indicates I have 20+ gigs free, and top and free both report 800+ megs (or 1.2 gigs):

        > df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/simfs             40G   18G   23G  44% /

        > free
                     total       used       free     shared    buffers     cached
        Mem:       1474560     300832    1173728          0          0          0
        -/+ buffers/cache:     300832    1173728

    Any ideas on why this would recur, and how to prevent/fix it?
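    A diagnostic sketch for one commonly reported cause of this exact message: the "device" that is full can be the kernel's SysV semaphore table rather than any disk, since mod_rewrite's rewrite_log_lock is backed by a semaphore on this platform. That is an assumption to verify, not a confirmed diagnosis; the owner name shown by ipcs may be apache or httpd depending on the distribution, and /dev/simfs suggests an OpenVZ/Virtuozzo container, which carries its own IPC limits:

        # how many SysV semaphore arrays are allocated, and what the kernel limits are
        ipcs -s | grep -c '^0x'
        sysctl kernel.sem

        # with httpd stopped, remove stale semaphores left behind by crashed children
        # (older ipcrm builds also accept the form "ipcrm sem <id>")
        ipcs -s | awk '/apache|httpd/ {print $2}' | xargs -r -n1 ipcrm -s

    If the semaphore count keeps creeping up between restarts, raising the kernel.sem limits (or finding whatever is leaking the semaphores) would avoid the reboots.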

    Read the article

  • Can't `mount -o loop` an ISO from an NFS share (RHEL)

    - by warren
    I have a large NFS share with a variety of software ISOs on it. I've only tried this on Red Hat Enterprise Linux, but when trying to do the following, the mount comes back with an error indicating no permissions to mount. Why would this be happening? NFS is mounted thusly:

        mediaserver:path/to/isos  /media/nfs

    This is the mount call that fails:

        mount -o loop /media/nfs/product.iso /tmp/product

    If I copy the ISO, there is no issue. The NFS share is mounted rw. How can I loop mount the ISO from the NFS share without copying it first?
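    A sketch of a workaround that is often suggested for this symptom, assuming the export uses root_squash (the NFS default), so root on the client is mapped to an unprivileged user and cannot open the ISO for writing, which a plain "-o loop" attempts:

        # check how the share is exported; root_squash is the usual culprit
        showmount -e mediaserver

        # an explicitly read-only loop mount only needs read access to the file
        mount -o loop,ro /media/nfs/product.iso /tmp/product

    If the read-only mount still fails, confirming that the squashed user can actually read the file (a world-readable ISO) is the next thing to check.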

    Read the article

  • gitweb refusing to blame

    - by Slipp D. Thompson
    I'm attempting to get gitweb (git 1.8.4.2, via git instaweb) in a project dir on my Debian server to offer blame views. In my /etc/gitweb.conf:

        … # default logo, favicon, etc. settings
        $feature{'blame'}{'default'} = [1];
        $feature{'pickaxe'}{'default'} = [1];
        $feature{'snapshot'}{'default'} = ['tgz', 'txz', 'zip'];
        $feature{'highlight'}{'default'} = [1];
        $feature{'pathinfo'}{'default'} = [1];

    In my global config file:

        [gitweb]
            blame = true
            snapshot = tgz, txz, zip
            patches = 256
            avatar = gravatar
        [instaweb]
            local = false
            httpd = apache2 -f
            port = 4321

    In my project's .git/config file:

        [gitweb]
            blame = true

    And yet, when I try to load a git blame view (via hand-modifying the URL to http://myserversip:4321/?p=.git;a=blame;f=Tests/InchCoordProxyTests.m;h=b4b2…;hb=53b4, since blame action links don't show up), I get a “Blame view not allowed” error. Doing a quick search for “Blame view not allowed” in the gitweb.cgi source reveals plainly that the gitweb_check_feature('blame') conditional is failing. What am I doing wrong? Or, is there a way to verbosely print out why gitweb is doing what it's doing (e.g. which config files were read, which settings were loaded from each file, etc.)?
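    One avenue worth checking (an assumption based on how gitweb features work, not something confirmed in the question): per-repository [gitweb] settings are only consulted when a feature's 'override' knob is enabled, and git instaweb may be serving a generated config rather than /etc/gitweb.conf. A minimal sketch:

        # allow "[gitweb] blame = true" in a repository's config to take effect
        cat >> /etc/gitweb.conf <<'EOF'
        $feature{'blame'}{'override'} = 1;
        EOF

        # restart the instaweb instance so the change is picked up
        git instaweb --stop
        git instaweb --start

    It is also worth verifying which gitweb config file the instaweb-spawned httpd actually reads (gitweb.cgi honours the GITWEB_CONFIG environment variable, and a generated per-instance gitweb_config.perl takes precedence over the system-wide file), since that would explain settings from /etc/gitweb.conf silently not applying.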

    Read the article

  • What is the "real" difference between a NAS and NFS?

    - by warren
    From an end-user perspective, what is the difference between a NAS device and using NFS exports from a file server? They seem to accomplish the same end result. The difference between a SAN and other file storage is related (in my experience) to how they are connected to the server infrastructure. However, the difference between a NAS, connecting over standard ethernet, and NFS (sharing storage off specific servers, also over the network), seems more nebulous. Is there a good reason to pick a NAS filer over NFS on servers?

    Read the article

  • Allowing directory view/traversal for a specific VirtualHost in Apache 2.2

    - by warren
    I have the following vhost configured:

        <VirtualHost *:80>
            DocumentRoot /var/www/myvhost
            ServerName myv.host.com
            ServerAlias myv.host.com
            ErrorLog logs/myvhost-error_log
            CustomLog logs/myvhost-access_log combined
            ServerAdmin [email protected]
            <Directory /var/www/myvhost>
                AllowOverride All
                Options +Indexes
            </Directory>
        </VirtualHost>

    The configuration appears to be correct from the apachectl tool's perspective. However, I cannot get a directory listing on that vhost:

        Forbidden
        You don't have permission to access / on this server.
        Additionally, a 403 Forbidden error was encountered while trying to use an ErrorDocument to handle the request.

    The error log shows the following:

        [Wed Mar 07 19:23:33 2012] [error] [client 66.6.145.214] Directory index forbidden by Options directive: /var/www/******

    update 2: More recently, the following is now kicking-into the error.log:

        [Wed Mar 07 20:16:10 2012] [error] [client 192.152.243.233] Options FollowSymLinks or SymLinksIfOwnerMatch is off which implies that RewriteRule directive is forbidden: /var/www/error/noindex.html

    update 3: Today, the following is getting kicked-out:

        [Thu Mar 08 14:05:56 2012] [error] [client 66.6.145.214] Directory index forbidden by Options directive: /var/www/<mydir>
        [Thu Mar 08 14:05:56 2012] [error] [client 66.6.145.214] Options FollowSymLinks or SymLinksIfOwnerMatch is off which implies that RewriteRule directive is forbidden: /var/www/error/noindex.html
        [Thu Mar 08 14:05:57 2012] [error] [client 66.6.145.214] Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace.

    This is after modifying the vhosts.conf file thusly:

        <VirtualHost *:80>
            DocumentRoot /var/www/<mydir>
            ServerName myhost
            ServerAlias myhost
            ErrorLog logs/myhost-error_log
            CustomLog logs/myhost-access_log combined
            ServerAdmin admin@myhost
            <Directory "/var/www/<mydir>">
                Options All +Indexes +FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    What is missing?

    update 4: All subdirectories of the root directory do directory listings properly - it is only the root which cannot.
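    A hedged diagnostic sketch, assuming (as the noindex.html rewrite errors hint) that a distribution-supplied config such as welcome.conf, or the error-page handling under /var/www/error, intercepts requests for the root URL before the vhost's Options +Indexes applies; the paths are the usual CentOS/RHEL ones and may differ:

        # dump the parsed vhost layout to confirm which vhost answers the request
        httpd -S

        # find every config file that sets Options, RewriteRule or ErrorDocument
        grep -RnE 'Options|RewriteRule|ErrorDocument' /etc/httpd/conf /etc/httpd/conf.d

        # after commenting out the offending stanza (welcome.conf is a common one):
        apachectl configtest && apachectl graceful

    It is also worth noting that "Options All +Indexes +FollowSymLinks" mixes bare and "+"-prefixed keywords, which Apache rejects ("either all Options must start with + or -, or no Option may"), so that Directory block may not be applying at all.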

    Read the article

  • Is the decision to use SNI or IP based SSL made during cert purchase or cert installation?

    - by Neil Thompson
    It's time to renew an SSL cert - but the website will soon be moving from a dedicated machine with a fixed IP to a cloud-based host behind a load balancer. When I renew or re-purchase my SSL cert, do I make the decision about whether it should be an SNI or IP-based SSL cert at the point of purchase - or is a cert a cert, and is it all about where and how it's installed? I'm hoping the renewed cert can continue to be IP-based for now, and that in a few months, when the website (and its domain, of course) moves to the cloud, I can re-use the cert in 'SNI mode'.

    Read the article

  • Open Directory authenticated bind succeeds, but creates incomplete record

    - by Jay Thompson
    I have about a dozen Macs running 10.6.7 or 10.6.8, which are all failing to bind properly to my new 10.7.4 Server OD. I can bind them just fine via Directory Utility or dsconfigldap, and it reports success. However, when I look at the record, it is failing to write the MAC address. Even if I manually update the record with the MAC address, MCX doesn't do anything and clients can't log in to OD accounts. All of the affected clients have hundreds of lines in the /Library/Logs/DirectoryService.error.log like so: 2012-09-15 22:23:18 EDT - T[0x00007FFF70292CC0] - GetMACAddress returned 0x *** bad control string *** 8x I do know that all of these clients were previously managed with the Guest computer account, and I also know that they were all imaged with a DeployStudio image when they were purchased. I've tried dscacheutil -flushcache, but after that I'm drawing a blank. Google has a few hits, but nothing very helpful. Re-imaging would be ideal but probably isn't going to happen. Anyone come across this before?

    Read the article

  • Changing filesystem types "safely"

    - by warren
    Back in Windows 95 OSR2 (I believe), there was a conversion tool that would take your extant FAT16 partition and change it to FAT32 non-destructively (most of the time). Are there any tools like that now for going from one file system type to another in situ, without destroying the data? For example, from ext3 to ext4? Or NTFS to XFS?
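    For the ext3 to ext4 case specifically, a minimal sketch of the in-place upgrade path documented by the ext4 developers; the device name is a placeholder, the filesystem must be unmounted, and a verified backup is assumed. Files written before the conversion keep the old (non-extent) on-disk layout:

        umount /dev/sdXN
        tune2fs -O extents,uninit_bg,dir_index /dev/sdXN
        e2fsck -fD /dev/sdXN        # mandatory fsck after changing features
        # then change the fstab entry for this device from ext3 to ext4

    There is no comparably routine in-place path between unrelated on-disk formats such as NTFS and XFS; that generally means copying the data off and back.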

    Read the article

  • Is it possible to modify/rebuild an rpm without the srpm?

    - by warren
    I have an rpm for which I need to change the preinstall scriptlet for testing. However, I do not have the SRPM from which it was built. Is it possible to change the scriptlet and/or rebuild the rpm without having the SRPM? If so, how? I've tried using Midnight Commander (mc) to open the rpm as a directory structure and edit the contents, but even with 444 permissions, it won't let me save any changes.
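    A sketch using rpmrebuild (a separate tool, packaged in EPEL for RHEL/CentOS), which regenerates a spec file, scriptlets included, from an existing rpm and lets you edit it before rebuilding; the package filename is a placeholder:

        # inspect the current scriptlets without unpacking anything
        rpm -qp --scripts package.rpm

        # open the regenerated spec (including %pre) in $EDITOR, then rebuild
        rpmrebuild --edit-whole --package package.rpm

    The rebuilt package is no longer signed, which matters if the install process enforces gpgcheck.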

    Read the article

  • Application that will identify percentage of your system disk bandwidth used on a user-application by user-application basis?

    - by Warren P
    I always (subjectively) feel my computer is far too slow (however fast it is), and so I'm always looking for ways to measure and understand what my computer is actually doing that makes it seem "slow" to me. It has been my observation that my software-developer workload is most often disk-bound (I am waiting for disk I/O) more than CPU-bound. What has made it worse is that I am using a corporate PC that has in-memory active-scanning anti-virus software that I do not have control over, and also some IT-department-mandated services that seem to suck up a lot of available hard-disk bandwidth. The best tool I have seen (in Windows 7) is the Resource Monitor, which I usually access from the button in the Task Manager. The disk I/O page, however, seems to label disk activity at a very low level (for example, showing the Volume Shadow Storage, which is flushing information obviously written by something ELSE other than VSS itself, and then writes to Pagefile.sys, which are obviously due to virtual-memory faults in some application). What I would like to know is if a utility exists that can add up all direct disk input and output by user-level process, or find the process or service that caused VM or VSS activity. In that way, I hope, you could establish a real idea of how much of your computer's precious disk-subsystem bandwidth is attributable to a particular application. Here's a scenario: MyApp.exe writes 100k/s and reads 100k/s directly. VSS ends up writing another 100k/s. Page faults caused inside MyApp.exe cause another 100k/s of writes. So the total "cost" of MyApp.exe running, during a period of time (let's say 1 second), is 400k/s, whereas you can only directly observe half of that in Resource Monitor. Is there a smarter disk-IO-watching piece of software I can use?

    Read the article

  • New to server admin, Diagnosing Memory and CPU issues on DV

    - by G Thompson
    Sorry for my ignorance and lack of knowledge. I'm a PHP/front-end developer just now venturing into very minor server management/diagnostics. I have a Media Temple DV account. I have 2 sites that run a PHP script through a subscription service to an API. Basically, the API hits the site with said script; the script runs, gathers data from the API, and saves the data to a SQL database. I noticed that these sites seemed to be causing memory overages on my server (not sure why), so I temporarily disabled them. The memory overage alerts stopped, but my CPU still sits really high, like at 115% and above. I'm trying to diagnose this with tutorials and resources but just can't seem to find a solution. I'll attach screenshots (taken without the PHP scripts I assume are responsible for the memory issues running) that I'm assuming are important to the diagnosis, but if anyone can point me in the right direction to start (a) figuring out if and why the PHP scripts may be causing memory overages, and (b) why my CPU is always over 100%, I'd appreciate it. Thanks guys! Links to screenshots (can't post images with low points): http://i.stack.imgur.com/A64k4.png http://i.stack.imgur.com/qm1rV.png
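    A minimal first-pass sketch that applies to most Linux-based DV containers (exact tool availability can vary with the Plesk/Virtuozzo environment):

        # what is consuming CPU and memory right now
        ps aux --sort=-%cpu | head -n 15
        ps aux --sort=-rss  | head -n 15

        # watch it live, with full command lines, to catch periodic spikes
        top -c

        # is the pressure constant or bursty?
        uptime

    On a container with more than one core, a sustained figure above 100% simply means more than one core's worth of work; the ps output shows whether that work is Apache/PHP, MySQL, or something else entirely.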

    Read the article

  • What filesystem comes closest to matching NTFS for support of ACLs, and highly-granular permissioning?

    - by warren
    It seems that most other filesystems handle the basic *nix permissions (ugo±rwx), with maybe an addition here or there, or can be "made" to handle ACLs through the use of other tools on top of the filesystem. On the Wikipedia pages about filesystems (http://en.wikipedia.org/wiki/List%5Fof%5Ffile%5Fsystems & http://en.wikipedia.org/wiki/Comparison%5Fof%5Ffile%5Fsystems), it appears that while some do support extended metadata, none natively support the level of permissioning that NTFS does. Am I wrong in this understanding?
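    For reference, a short sketch of what the POSIX ACL layer (the "other tools on top" mentioned above) looks like on ext3/ext4/XFS, assuming the acl package is installed and the filesystem is mounted with ACL support; the names and paths are illustrative only:

        # grant a named user and a named group rights beyond the ugo/rwx bits
        setfacl -m u:alice:rwx,g:auditors:rx /srv/share/report.txt

        # a default ACL on a directory is inherited by newly created files
        setfacl -d -m g:auditors:rx /srv/share

        # show the effective entries, including the mask
        getfacl /srv/share/report.txt

    Even with this, POSIX ACLs stop well short of NTFS's inheritance model and per-right granularity (delete-child, take-ownership, and so on), which is the gap the question is really about.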

    Read the article

  • What is the "real" difference between a NAS and NFS? Or, why pick a NAS device over "mere" NFS?

    - by warren
    From an end-user perspective, what is the difference between a NAS device and using NFS exports from a file server? They seem to accomplish the same end result. The difference between a SAN and other file storage is related (in my experience) to how they are connected to the server infrastructure. However, the difference between a NAS, connecting over a standard ethernet port, and NFS (sharing storage off specific servers, also over the network), seems more nebulous. Is there a good reason to pick a NAS filer over just running NFS on servers?

    Read the article

  • Windows Server 2008 R2 creating a multi-year client certificate using the IIS certsrv page while deploying SSTP VPN

    - by Warren P
    I am trying to follow instructions on TechNet about deploying a standard (non-enterprise) SSTP-based VPN that were originally written for Server 2008, but I am using Server 2008 R2. I have gotten as far as the part where it asks you to request a Server Authentication certificate. I have deployed IIS and Active Directory Certificate Services, and chose a "Standalone", "Standard" (non-enterprise) Certificate Authority because I don't have an OID and don't think I should have to get one for a simple deployment of SSTP. The resulting certificates made by the Certification Authority's "Issue" command only have a 1-year period of validity; I want a multi-year certificate. At no point in this process is there any way to input this information, unless it's through the Attributes text input area on the Advanced Certificate Request page, which appears to be generated using an old ActiveX control, which means I can only do this using the workarounds in the article that I linked at the top, and only using Internet Explorer. Update: it may be that this question is pointless, since self-signed keys do not appear to work when I try them, using Windows 8 as the VPN client. The problem is that the keys that are self-created by the technique shown here do not have any Certificate Revocation Server URLs, so I get the error "The revocation function was unable to check revocation", and the VPN connection fails.

    Read the article

  • Exclude a process from being listed in `top`

    - by warren
    Is it possible to exclude some processes from being reported by top? For example, I would like to exclude top itself from its listing (i.e., I don't want top to show up in the process list). I would also like to be able to exclude processes that do not belong to the user running top (except for root). Is this possible? If so, how? If not, is there a similar tool that will do what I want (one that does not involve running something like ps frequently)?
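    top has no native "exclude this process" switch, but it can be narrowed from the command line; a small sketch, with the user name and process name as placeholders:

        # show only one user's processes (run as root to inspect any user)
        top -u someuser

        # or hand top an explicit pid list built with pgrep
        # (top accepts a limited number of -p pids, traditionally 20)
        top -p "$(pgrep -d, httpd)"

    Neither form hides top itself unless it happens to fall outside the filter, so a tool such as htop, with its interactive filter, is the usual suggestion when a true exclude list is needed.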

    Read the article

  • wifi key masking

    - by Warren Bullock III
    Hello. We currently utilize wifi access in some of our buildings. We are not using RADIUS at this point, but we are using WPA2 with a pre-shared key (PSK). The issue has recently come up that we want to keep our key private, so we generally set up access for our users ourselves rather than giving them the wifi key. The problem is that Windows gives the option to go back into the wireless properties and unmask the PSK. We need to resolve this ASAP. Is there a way to make certain that the PSK remains masked, even if you're logged in as a local administrator on the machine? Thanks in advance.

    Read the article

  • Is it possible with Google searches to ban any and all results from a domain? [closed]

    - by Stu Thompson
    Is it possible to configure Google somehow to permanently ban search results from domains that I know 100% are never, ever going to make me happy? Something cookie/session based maybe? E.g. I want to ban (permanently, forever and always) results from experts-exchange.com. Every time I click results that take me to their page I just want to scream. Update! Google has released a Chrome Extension to allow users to block individual site from Google search results! Personal Blocklist (by Google). (Since this question has been closed, I cannot answer it.)

    Read the article

  • Adding new virtual disks to a Windows host in ESX “live”

    - by warren
    Building on my previous question about RHEL, how do you get the guest OS to recognize that you've added new drives to it without a reboot? I have Windows 2003, XP, and 2008 guests running on ESX 4. I've added new virtual disks to the VMs, but have not figured out how to get the guests to recognize them without a reboot. Is this possible? If so, how?

    Read the article

  • Useful keyboard shortcuts on a Mac

    - by warren
    Most Mac OS users know about ⌘-Q for quit, and ⌘-C, ⌘-X, ⌘-V for copy, cut, and paste. Likewise, ⌘-P for print and ⌘-S for save. Is there a keyboard shortcut for maximizing a window? Or minimizing it? Are there other daily-use keyboard shortcuts you use on your Mac?

    Read the article

  • Windows 8 power-shell "update-help" is failing. Does anyone know how to fix it?

    - by Warren P
    Windows 8 includes PowerShell out of the box, but not its help files. To get the help you run PowerShell as administrator and type "update-help". I get this error:

        > update-help
        update-help : Failed to update Help for the module(s) 'BitLocker, NetWNV' with UI culture(s) {en-US} :
        The value of the HelpInfoUri key in the module manifest must resolve to a container or root URL on a website
        where the help files are stored. The HelpInfoUri 'http://technet.microsoft.com/library/cc732148.aspx' does not
        resolve to a container.
        At line:1 char:1
        + update-help
        + ~~~~~~~~~~~
            + CategoryInfo          : InvalidOperation: (:) [Update-Help], Exception
            + FullyQualifiedErrorId : InvalidHelpInfoUri,Microsoft.PowerShell.Commands.UpdateHelpCommand

    Can anyone tell me how to fix this, or whether it's not important? I'm guessing that since I don't need help on NetWNV or BitLocker, this is the only thing wrong?

    Read the article

  • Is it possible to recover from the Windows 8 refresh feature?

    - by Warren P
    The intention of the Windows 8 RTM (released version) Refresh feature is to restore the system to the way it was when I first installed. It didn't, though. Almost everything that came on the start screen (not a menu any more) is gone: not just the third-party apps I installed, but EVERYTHING other than the icon for Internet Explorer, the icon for the Store, and the desktop was wiped. Out of the box, Windows 8 had a pretty large list of things installed, and it seems that the Refresh feature wipes all of them out. Is it possible to really get the system back to a fresh-install state, or should I just re-install from the DVD I made? (I have access to Windows 8 RTM, legally, through the MS Action Pack subscription.) I suspect that if I create a new account, I might get a new desktop with a default set of icons, but I'm hoping it might be possible to do this without using a different login. The second problem with the Windows 8 Refresh is that it seems to trash the ACLs on my C: drive, taking all permissions away for access to the C: drive and making nothing on it visible or readable to my primary and only login account. I believe it might be possible to undo the damage done by the "refresh" with some judicious use of icacls from the command prompt.

    Read the article
