Search Results

Search found 23265 results on 931 pages for 'justin case'.


  • Get-EventLog issue

    - by Jim B
    I wanted to get a quick report of some log entries I saw on a server, so I ran:

        Get-EventLog -LogName system -Newest 10 -ComputerName fs1 | fl

    I got events back, but the descriptions were all wrong. Here's an example:

        Index              : 1260055
        EntryType          : Warning
        InstanceId         : 2186936367
        Message            : The description for Event ID '-2108030929' in Source 'W32Time' cannot be found.
                             The local computer may not have the necessary registry information or message
                             DLL files to display the message, or you may not have permission to access them.
                             The following information is part of the event: 'time.windows.com,0x1'
        Category           : (0)
        CategoryNumber     : 0
        ReplacementStrings : {time.windows.com,0x1}
        Source             : W32Time
        TimeGenerated      : 1/25/2010 10:43:31 AM
        TimeWritten        : 1/25/2010 10:43:31 AM
        UserName           :

    Note that if I pull the event ID property it is correct (in this case 38). Is this a known issue, or is something wrong? The messages resolve fine via Event Viewer, both locally and remotely. Here is the PowerShell host information:

        Name             : ConsoleHost
        Version          : 2.0
        InstanceId       : bc58fcf8-bba3-4ca8-8972-17dbd5d9ff08
        UI               : System.Management.Automation.Internal.Host.InternalHostUserInterface
        CurrentCulture   : en-US
        CurrentUICulture : en-US
        PrivateData      : Microsoft.PowerShell.ConsoleHost+ConsoleColorProxy
        IsRunspacePushed : False
        Runspace         : System.Management.Automation.Runspaces.LocalRunspace

    Here is the revised version info:

        Name                      Value
        ----                      -----
        CLRVersion                2.0.50727.3603
        BuildVersion              6.0.6002.18111
        PSVersion                 2.0
        WSManStackVersion         2.0
        PSCompatibleVersions      {1.0, 2.0}
        SerializationVersion      1.1.0.1
        PSRemotingProtocolVersion 2.1

  • SAN performance issues storing SQL Server tempdb on a SAN that's being backed up

    - by user42724
    I'm afraid I don't know much about SANs, so please forgive my lack of detail or technical terms. As a developer I've just completed and deployed a new application onto an existing production system, but it appears to have tipped the scales regarding the performance of the backups being taken from the SAN. As I understand it, a mirror of the SAN is taken more or less constantly at the block level. However, there now seem to be so many new writes to the disk that the SAN mirroring/backup process can no longer keep up. I believe I've narrowed this down to SQL Server's tempdb, which sits on a drive that contributes the largest portion of the problem! In fact, I think tempdb has been contributing the largest portion of the issues all along, regardless of my application.

    My question, therefore, is whether tempdb should ever be mirrored or backed up on the SAN, and whether anyone else has gone through this sort of pain already. I'm wondering whether it's a best practice to make sure that tempdb is never mirrored on a SAN, simply because any writes to it don't need to be saved. This also raises a slightly connected question: is it better to rely on SQL Server's built-in backup tools (database in full recovery mode with full/differential and transaction log backups), or, as is the case with our application, to leave SQL Server in simple recovery mode and never back it up, since the SAN is mirrored and backed up? Many thanks
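
    A hedged sketch for confirming which physical drive the tempdb files actually live on (so that LUN could be excluded from the mirror) and what recovery model each database uses; the server name and Windows authentication here are assumptions, not details from the question:

        # where do the tempdb files physically sit?
        sqlcmd -S myserver -E -Q "SELECT name, physical_name FROM sys.master_files WHERE database_id = DB_ID('tempdb');"

        # which databases are in FULL vs SIMPLE recovery?
        sqlcmd -S myserver -E -Q "SELECT name, recovery_model_desc FROM sys.databases;"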

  • How can I filter /var/adm/wtmpx on Solaris 10?

    - by Yanick Girouard
    Some of our Solaris 10 servers are monitored using SiteScope, which uses Telnet to probe certain ports (SSH is one of them) every few minutes. This creates an insane number of lines in /var/adm/wtmpx and eventually makes it so big (2.5 GB+) that we can no longer run the last command, and the uptime command is unable to accurately show the true uptime of the server. The error we get when trying to run the last command is this:

        /var/adm/wtmpx: Value too large for defined data type

    I have found ways to clean this accounting log using a cron job (with the command /usr/lib/acct/fwtmp), and this works; that is not the issue. I was wondering if there is a way to simply prevent connections from the monitoring user (in our case, user monsite) from creating entries in this accounting log at all. Is this possible, and if so, how can I do it? I've looked around and searched Google for a while, but couldn't find an answer to this question.

    NOTE: We are very well aware that the monitoring solution we employ is perhaps not the best one, but we cannot change it at this time. Therefore, suggesting that we change it is not pertinent to this question. If you want to read more on the SiteScope monitoring solution we employ for those servers, please see its documentation here and look for Port Monitor, and Connecting to remote UNIX servers, which explains how it works.
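
    For reference, a sketch of the cron-style cleanup mentioned above, extended to drop the monitoring user's records (the user name monsite comes from the question). This filters after the fact rather than preventing the writes, and rewriting wtmpx while logins are happening is racy, so it belongs in a quiet maintenance window:

        # fwtmp converts the binary wtmpx records to ASCII and, with -ic, back again
        /usr/lib/acct/fwtmp < /var/adm/wtmpx | awk '$1 != "monsite"' > /tmp/wtmpx.ascii
        /usr/lib/acct/fwtmp -ic < /tmp/wtmpx.ascii > /var/adm/wtmpx
        rm /tmp/wtmpx.ascii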

  • Why doesn't the system information message shown when accessing an Ubuntu server match free -m?

    - by Andres
    Each time I SSH into my AWS Ubuntu servers I see a system information message showing load, memory usage, and packages available to install, like this:

        Welcome to Ubuntu 12.04.3 LTS (GNU/Linux 3.2.0-51-virtual x86_64)

         * Documentation:  https://help.ubuntu.com/

          System information as of Sun Nov 10 18:06:43 EST 2013

          System load:  0.08              Processes:           127
          Usage of /:   4.9% of 98.43GB   Users logged in:     1
          Memory usage: 69%               IP address for eth0: 10.236.136.233
          Swap usage:   100%

          Graph this data and manage this system at https://landscape.canonical.com/

        13 packages can be updated.
        0 updates are security updates.

        Get cloud support with Ubuntu Advantage Cloud Guest
          http://www.ubuntu.com/business/services/cloud

        Use Juju to deploy your cloud instances and workloads.
          https://juju.ubuntu.com/#cloud-precise

        *** /dev/xvda1 will be checked for errors at next reboot ***
        *** System restart required ***

    My question is about the memory percentage shown. In this case it shows 69% memory usage, but since swap usage was at 100% I checked it myself. When I run free -m I get this:

                     total       used       free     shared    buffers     cached
        Mem:          1652       1635         17          0          4         29
        -/+ buffers/cache:       1601         51
        Swap:          895        895          0

    And that is of course closer to 100% than to 69%.
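
    Both numbers can be recomputed from /proc/meminfo at the same instant, which makes it easier to see whether the gap comes from how "used" is counted (with or without buffers/cache) or simply from the two figures being sampled at different times. A minimal sketch, not the exact formula the MOTD generator uses:

        awk '/^MemTotal/ {t=$2} /^MemFree/ {f=$2} /^Buffers/ {b=$2} /^Cached/ {c=$2}
             END {
                 printf "used/total               : %.0f%%\n", (t-f)/t*100
                 printf "used excl. buffers+cache : %.0f%%\n", (t-f-b-c)/t*100
             }' /proc/meminfo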

  • Hardware for a home server running Windows Server 2008 R2 Hyper-V or Microsoft Hyper-V Server 2008 R2

    - by David Hayes
    Hi, I'm planning to build a server to do the following:

    - Act as a file server (videos, pictures, music)
    - Run Squeezebox server
    - Run Zune software to allow wireless syncing to Windows Phone 7

    I'd also like to aim for:

    - Low power usage (I'd settle for less than the 90-100 watts I'm using at the moment)
    - Flexibility: I might want to add a web server or SharePoint or...
    - Something I can learn/test on; work is mainly a Windows shop but I do have Linux experience too
    - I'd like to take a look at App-V (application virtualization) too
    - I'd like it to cost less than $1000
    - Quiet would be nice but not essential (it'll be in the basement)

    I'm thinking of getting a TechNet subscription to get access to Windows Server 2008 R2 at a reasonable price ($199). So my plan was this:

    - Get a bunch of 2TB Caviar Green drives to RAID up (RAID 1 or 6, probably)
    - Get a quad-core CPU (Intel i5/i7, probably)
    - Install a hypervisor
    - Install W2K8 R2 Storage Server for a NAS
    - Install Windows 7 Pro to run Zune/Squeezebox
    - Install any other machines I want to play with

    Questions:

    - Can anyone see any issues with this or have any better ideas?
    - Do you think I'd need an i7 over an i5? Is 4 cores enough/too much?
    - Can anyone suggest a nice, reasonably priced case that will hold 6-8 drives and stay cool?
    - Should I wait for Sandy Bridge parts?

  • SQL Server transaction log backups

    - by krimerd
    Hi there, I have a question regarding transaction log backups in SQL Server 2008. I currently take full backups once a week (Sunday) and transaction log backups daily. I put the full backup in folder1 on Sunday, and then on Monday I also put the 1st transaction log backup in the same folder. On Tuesday, before I take the 2nd transaction log backup, I move the first transaction log backup from folder1 into folder2, then take the 2nd transaction log backup and put it in folder1. Same thing on Wednesday, Thursday and so on. Basically, folder1 always holds the latest full backup and the latest transaction log backup, while the other transaction log backups sit in folder2.

    My question is: when SQL Server is about to take, let's say, the 4th (Thursday) transaction log backup, does it look for the previous transaction log backups (1st, 2nd and 3rd) so that the new backup will only include the transactions since the last backup, or does it have some other way of knowing whether there are other transaction log backups? I am asking because all my transaction log backups seem to be about the same size, and I thought their size would depend on the amount of transactions since the last transaction log backup.

    Example: say you have a full backup, you then take a transaction log backup that comes out at, let's say, 200 MB, and you immediately take another transaction log backup. That second log backup should be considerably smaller than the first one, because no (or almost no) transactions occurred between the two backups, right? At least, that's what I've been assuming. What happens in my case is that the second backup is pretty much the same size as the first one, and I am wondering whether the reason is that I moved the first transaction log backup to a different folder, so SQL Server now thinks that all I have is a full backup and therefore puts all the transactions since the full backup into the 2nd transaction log backup. Can anyone please explain whether my assumptions are right? Thanks...
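
    One way to see what each log backup actually covered is the backup history SQL Server records in msdb: the log chain is tracked by LSNs stored there and in the database itself, not by which folder the backup files sit in. A hedged sketch; the server and database names are placeholders:

        sqlcmd -S myserver -E -Q "SET NOCOUNT ON;
            SELECT type, backup_start_date, first_lsn, last_lsn, backup_size
            FROM msdb.dbo.backupset
            WHERE database_name = 'MyDatabase'
            ORDER BY backup_start_date;"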

  • How does it hurt to use Linux (Ubuntu) as a guest OS for all my tasks?

    - by sauparna
    I have a machine running Windows, where the disk has two partitions, C (50 GB) and D (250 GB). I do research in Information Retrieval and need to work with a large corpus (more than 50 GB) and in Linux. So if I want to install Linux on the existing system, keeping the Windows installation intact, will it be fine to run it in a virtual machine (say QEMU, VMware, etc.)? An alternative is using Wubi; in that case the Linux installation has to be on drive C. Then, if I keep a small Linux installation (say 5 GB) on C, and my corpus on D (mounted in Linux), how will it affect the performance of my programs, which would be accessing the mounted Windows drive D? Is it feasible to use Linux this way? Which of the above is better, if they are a way out at all?

    Note: Since my post in July 2010, I have been using and have tried several ways of maintaining a disk image that I can mount in Linux. I had a 100 GB qcow2 disk and a 100 GB raw disk, both formatted to an EXT3 file system. I was mounting and connecting to the qcow2 disk using qemu-nbd. The problem was that every now and then the connection to the disk would get lost and the running programs would throw disk I/O errors. The raw disk would mount and work fine as a loop-mounted device, but when writing data to it, the mount.ntfs program would hog the CPU and the process would take an enormous amount of time. I was in fact running make on a piece of software located on this raw disk, and after a point make was waiting while mount.ntfs showed 100% CPU usage.
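
    For reference, a minimal sketch of the qemu-nbd workflow described in the note above; the image name, device node and mount point are placeholders:

        sudo modprobe nbd max_part=8
        sudo qemu-nbd --connect=/dev/nbd0 corpus.qcow2
        sudo mount /dev/nbd0p1 /mnt/corpus
        # ... work on the corpus ...
        sudo umount /mnt/corpus
        sudo qemu-nbd --disconnect /dev/nbd0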

  • 'pskill \\hostname winlogon' might budge a server "stuck rebooting", but why?

    - by Snoi
    Question: executing the remote (Sysinternals) command...

        pskill \\machine winlogon

    ...can budge a server that is stuck rebooting, but how/why does this work? How do you know which service to kill?

    To recreate (e.g.): you run Windows Update, allow a reboot, and... NOTHING! RDP gets cut off but the server does not reboot. Just about every other service seems to stay up.

    Further background: I've faced this problem on VMs hosted around the planet for some years, and have used various sc.exe and shutdown commands to learn the state of, and attempt a remote reboot of, servers in such a state, with limited success. Most datacentres don't offer any way to see the true console or power off/on such machines. They charge $$ for you to call them to do such simple things after hours, when you nearly always have to run your maintenance tasks. E.g.:

        NET USE \\machine\IPC$ /USER:login password
        sc \\machine query RpcSs
        sc \\machine query TermService
        sc \\machine query wuauserv
        tasklist /s machine

    This occasionally works for me...

        shutdown /m \\machine /r /f /t: 0

    ...but more often than not it fails with:

        A system shutdown is in progress (1115).

    I found this question, "Can not RDP to Win 2003 box or initiate remote restart", and the answer by @Tweek, and it worked really well, but was I just lucky? @Tweek said to run:

        pskill \\hostname winlogon

    ...and that got me past this situation in a new way (Server 2008 R2 in my most recent case) - really useful! I just need to understand if I got lucky or if there is more science here. What I'd like to know is: why the winlogon process? @Livne said to use "tasklist /s HostName" to see what the culprit is, but how do you tell from the listed output? It's just a list of running tasks. From that I would not know what to look for, nor could I see anything about the winlogon process that suggested it was the one to kill.

  • Why isn't MediaWiki loading?

    - by E L
    I recently set up MediaWiki on an Apache server with PostgreSQL. It installed successfully. However, when I try to access the website, I get a blank page. The error log reports the following:

        [error] PHP Fatal error:  require_once(): Failed opening required '/var/www/mediawiki-1.19.2/LocalSettings.php' (include_path='.:/usr/share/pear:/usr/share/php') in /var/www/mediawiki-1.19.2/includes/WebStart.php on line 134
        [error] PHP Warning:  require_once(/var/www/mediawiki-1.19.2/LocalSettings.php): failed to open stream: Permission denied in /var/www/mediawiki-1.19.2/includes/WebStart.php on line 134
        [error] PHP Fatal error:  require_once(): Failed opening required '/var/www/mediawiki-1.19.2/LocalSettings.php' (include_path='.:/usr/share/pear:/usr/share/php') in /var/www/mediawiki-1.19.2/includes/WebStart.php on line 134

    I've seen other people with similar problems and the solutions have involved using chmod on LocalSettings.php to 644 or in other cases 755. Others have said using chown to make LocalSettings match the Apache user, which is just 'apache' in my case. None of these solutions have worked for me. Does anyone have other suggestions or maybe I missed something?
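
    A hedged sketch of the usual permission fixes for this error, using the path and user from the question; the SELinux commands only apply if the distribution ships SELinux in enforcing mode, which is an assumption here:

        cd /var/www/mediawiki-1.19.2
        sudo chown apache:apache LocalSettings.php
        sudo chmod 644 LocalSettings.php

        # if SELinux is enforcing, the file context matters as much as the mode bits
        getenforce
        ls -Z LocalSettings.php
        sudo restorecon -v LocalSettings.php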

  • Catastrophic Failure opening ODBC via Citrix

    - by Joshdan
    We recently had our Citrix server crash unexpectedly. When it came back up, there was a new issue -- every ODBC connection fails with "Catastrophic Failure" (0x8000FFFF). The issue is limited to Citrix / ICA connections; logging in as the same user via RDP works as usual. The following code is my minimal test case (for wscript):

        ''// test_odbc.vbs
        strConn = "Driver={Microsoft Text Driver (*.txt; *.csv)};Dbq=c:\files\;"
        Set rs = CreateObject("ADODB.recordset")
        strSQL = "SELECT * FROM myFile.csv"
        wscript.echo "Press OK to Test"

        ''// This line breaks over Citrix, but not over Terminal Services
        ''// ----------------------
        rs.open strSQL, strConn, 3,3
        ''// ----------------------

        wscript.echo rs("a")

    Any insight would be greatly appreciated. Windows Server 2003 SP1, Citrix MetaFrame Presentation Server 4.0. Clients include at least versions 10.2-11 running on 2000-Vista, OS X. ODBC error happens whether a DSN is used or not, on at least Access, MS-SQL, and CSV. Connections both through the SSL Gateway and directly. There have been a few users actually able to log in without trouble, but I can't pin down anything special about them.

  • Login box not shown on Ubuntu

    - by Alexandre
    I've installed Ubuntu 10.04 (64-bit) as a guest OS in VirtualBox, using Windows 7 Professional (64-bit) as the host. After the Ubuntu install, I installed Xfce4 (sudo apt-get install xfce4), logged in using an Xfce session, and when I logged out I couldn't see the login box anymore, only the regular GNOME background of the login screen. Then I restarted the virtual machine, and now I'm not able to see the login box at all, only the GNOME background. Does anyone know how to solve this? Thanks in advance.

    Update: I've tried Xubuntu, which comes with Xfce, and I'm facing the same problem. As a common denominator between the two cases, I now see the problem arises after I've updated the system and then installed curl (via apt-get), zlib (make process) and git (make process). But I'm not sure this can be the cause...

    Update: I isolated the problem. The bad guy is zlib, but I don't have a clue why. In fact, it crashes not only Ubuntu and Xubuntu, but also Debian (yes, I've tested all three). I'll keep this question in case someone has the same problem, but I think my initial question is not valid anymore.

  • Migrating from "partial" Exchange 2003 to full Exchange 2003 usability

    - by TheCleaner
    I have a client that is using Exchange 2003 on SBS 2003 R2, but only for calendar and contact sharing. Their email still comes to their clients via a POP3 account in each client's Outlook. I'd like to move them over to using Exchange for email as well as the other things they are utilizing it for now. Can you folks guide me in the right direction?

    The setup:

    - external domain is akin to domain.com (and is where they get their POP3 email from now)
    - internal domain is akin to domain.local
    - only a simple hardware firewall (no ISA)
    - a static external IP is available to use

    My "assumptions":

    - Set up an SMTP default connector in Exchange for their existing external domain
    - Have their existing email backed up to PST files (just in case)
    - Set up the new MX records to point domain.com to the static external IP

    I'm a little confused about how I'm going to set up their existing Exchange accounts with the proper SMTP address, though. Right now it is just [email protected]. Do I just need to modify or create a new recipient policy? Are there other steps involved that I'm missing? Anyone with a walkthrough or even basic "steps" is fine. I'm fairly used to Exchange 03, but I've been on Exchange 07 for a while now, so going back is the weird part... plus I don't know what issues Exchange 03 on SBS has versus the normal "version". Thanks for all the help!

  • Weird mouse/keyboard freezups when using PowerPoint 2007 with IBM/Lenovo docking station

    - by DanM
    I'm not sure what part of my system is responsible for this, but when using PowerPoint I have problems when trying to resize drawing objects. I'll be dragging the handle and suddenly the object will deselect, and whatever is behind the object will get selected and start moving around. Next thing I know, the keyboard won't type anymore, and the only way to fix it is to unplug the USB and plug it back in. In case it's hardware related: I'm using an IBM ThinkPad T60p in a docking station, my keyboard is a Microsoft Natural Keyboard Pro, and my OS is Windows XP SP3. I've never noticed this happening in anything besides PowerPoint, and I don't know anyone else who has this problem (even people with similar setups). Any ideas what it could be?

    Edit: Well, it looks like I only get the problem if I plug the mouse into my docking station's USB. If I plug directly into the laptop's USB, everything works fine. And, again, this problem only occurs in PowerPoint. I tried playing with some drawing objects in Word and had no issue no matter where my mouse was plugged in. I should also mention I tried a different mouse (a standard Microsoft corded mouse instead of my Logitech trackball), but that made no difference. So I don't think it's anything specific to the trackball or the trackball's driver. I tried searching Google but came up empty, so I'm guessing this problem is something unique to my setup. If you have any thoughts or ideas to try, I'd love to hear them.

  • Help building Maya render node spec

    - by Ak
    Hi there, I'm looking to build 4x Maya render slaves/nodes for a friend of mine when his project gets green-lit. The project involves MentalRay and lots of glass. I'm unsure whether the new i7 9xx or 8xx parts with hyper-threading will do any better than a Core 2 Quad of the same (or close enough) speed. Does hyper-threading make a difference to Maya, or is it more about performance per core? I'm sure he'd prefer I build another render node than pay for a bleeding-edge CPU that only adds fractionally more GHz.

    The rest of the spec so far:

    - 4 GB - 8 GB RAM
    - 64-bit OS: probably Windows 7 (I know Linux is free, but I want to build something my friend can support himself as easily as he supports his own workstation)
    - 1 TB HDD to hold textures, Maya files and renders, which will be copied to central storage later
    - Motherboard with on-board video, gigabit NIC
    - 500 - 650 watt PSU
    - Desktop case, something like a Cooler Master ATCS 840

    The machines will be sold afterwards if necessary. If anyone has experience in Maya and has done any tests with the new CPUs vs. the older ones, I'd really appreciate your input.

  • Validating signature trust with gpg?

    - by larsks
    We would like to use gpg signatures to verify some aspects of our system configuration management tools. Additionally, we would like to use a "trust" model where individual sysadmin keys are signed with a master signing key, and then our systems trust that master key (and use the "web of trust" to validate signatures by our sysadmins). This gives us a lot of flexibility, such as the ability to easily revoke the trust on a key when someone leaves, but we've run into a problem. While the gpg command will tell you if a key is untrusted, it doesn't appear to return an exit code indicating this fact. For example:

        # gpg -v < foo.asc
        Version: GnuPG v1.4.11 (GNU/Linux)
        gpg: armor header:
        gpg: original file name=''
        this is a test
        gpg: Signature made Fri 22 Jul 2011 11:34:02 AM EDT using RSA key ID ABCD00B0
        gpg: using PGP trust model
        gpg: Good signature from "Testing Key <[email protected]>"
        gpg: WARNING: This key is not certified with a trusted signature!
        gpg:          There is no indication that the signature belongs to the owner.
        Primary key fingerprint: ABCD 1234 0527 9D0C 3C4A  CAFE BABE DEAD BEEF 00B0
        gpg: binary signature, digest algorithm SHA1

    The part we care about is this:

        gpg: WARNING: This key is not certified with a trusted signature!
        gpg:          There is no indication that the signature belongs to the owner.

    The exit code returned by gpg in this case is 0, despite the trust failure:

        # echo $?
        0

    How do we get gpg to fail in the event that something is signed with an untrusted signature? I've seen some suggestions that the gpgv command will return a proper exit code, but unfortunately gpgv doesn't know how to fetch keys from keyservers. I guess we can parse the status output (using --status-fd) from gpg, but is there a better way?
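
    For what it's worth, a minimal sketch of the --status-fd parsing mentioned above; GOODSIG and the TRUST_* lines are part of GnuPG's documented status-fd interface, but the script itself is only an illustration, not a vetted verifier:

        #!/bin/sh
        # verify-trusted.sh <signed-file>
        status=$(gpg --status-fd 1 --verify "$1" 2>/dev/null)

        # fail if the signature itself is bad or missing
        printf '%s\n' "$status" | grep -q '^\[GNUPG:\] GOODSIG' || exit 1

        # fail unless the signing key is fully or ultimately trusted
        printf '%s\n' "$status" | grep -Eq '^\[GNUPG:\] TRUST_(FULLY|ULTIMATE)' || exit 2

        exit 0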

  • Apache reaching MaxClients and locking the server

    - by Rodrigo Sieiro
    Hi. I currently have an Apache2 server running with mpm-prefork and mod_php on an OpenVZ VPS with 512M real / 1024M burstable RAM (no swap). After running some tests, I found that the maximum process size Apache gets is 23M, so I've set MaxClients to 25 (23M x 25 = 575 MB, OK for me). I decided to run some load tests on my server, and the results left me puzzled. I'm using ab on my desktop machine, requesting the main page from a WordPress blog.

    When I run ab with 24 concurrent connections, everything seems fine. Sure, CPU goes up and free RAM goes down, and the result is about 2-3s response time per request. But if I run ab with 25 concurrent connections (my server limit), Apache just hangs after a couple of seconds. It starts processing the requests, then it stops responding, CPU goes back to 100% idle and ab times out. The Apache log says it reached MaxClients. When this happens, Apache keeps itself locked up with 25 running processes (they're all in "W" if I check server status), and only after the TimeOut setting do the processes start to die and the server starts responding again (in my case it's set to 45).

    My question: is that expected behaviour? Why does Apache just die when it reaches MaxClients? If it works with 24 connections, shouldn't it work with 25, just taking maybe more time to respond to each request and queueing up the rest? It sounds kinda strange to me that any kid running ab can single-handedly kill a webserver just by setting the concurrent connections to the server's MaxClients.
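
    For anyone reproducing this, the sizing arithmetic and the load test from the question boil down to something like the following; the URL is a placeholder and the numbers are the poster's own:

        per_proc_mb=23
        apache_ram_mb=575
        echo "MaxClients ~ $(( apache_ram_mb / per_proc_mb ))"   # -> 25

        ab -n 500 -c 24 http://vps.example.com/    # behaves
        ab -n 500 -c 25 http://vps.example.com/    # hangs until TimeOut expires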

  • Apache2 - mod_rewrite : RequestHeader and environment variables

    - by Guillaume
    I am trying to get the value of the request parameter "authorization" and store it in the "Authorization" header of the request. The first rewrite rule works fine. In the second rewrite rule, the value of $2 does not seem to be stored in the environment variable; as a consequence, the request header "Authorization" is empty. Any idea? Thanks.

        <VirtualHost *:8010>
            RewriteLog "/var/apache2/logs/rewrite.log"
            RewriteLogLevel 9
            RewriteEngine On
            RewriteRule ^/(.*)&authorization=@(.*)@(.*) http://<ip>:<port>/$1&authorization=@$2@$3 [L,P]
            RewriteRule ^/(.*)&authorization=@(.*)@(.*) - [E=AUTHORIZATION:$2,NE]
            RequestHeader add "Authorization" "%{AUTHORIZATION}e"
        </VirtualHost>

    I need to handle several cases because sometimes the parameters are in the path and sometimes they are in the query, depending on the user. This last case fails; the header value for AUTHORIZATION looks empty.

        # if the query string includes the authorization parameter
        RewriteCond %{QUERY_STRING} ^(.*)authorization=@(.*)@(.*)$
        # keep the value of the parameter in the AUTHORIZATION variable and redirect
        RewriteRule ^/(.*) http://<ip>:<port>/ [E=AUTHORIZATION:%2,NE,L,P]
        # add the value of AUTHORIZATION in the header
        RequestHeader add "Authorization" "%{AUTHORIZATION}e"

  • mdadm+zfs vs mdadm+lvm

    - by Alex
    This may be a naive question since I'm new to this, and I cannot find any results about mdadm+zfs, but after some testing it seems it might work. The use case is a server with RAID6 for some data that is backed up somewhat infrequently. I think I'm well served by either ZFS or RAID6. The platform is Linux. Performance is secondary. So the two setups I am considering are:

    - A RAID6 array plus regular LVM and ext4
    - A RAID6 array plus ZFS (without redundancy). It's this second option that I don't see discussed at all.

    Why ZFS+RAID6? Mainly because of the inability of ZFS to grow a raidz2 with new disks. You can replace disks with larger ones, I know, but not add another disk. You can accomplish 2-disk redundancy and ZFS disk growth by using mdadm as the redundancy layer. Besides that main point (otherwise I could go directly to raidz2 without RAID under it), these are the pros and cons that I see for each option:

    - ZFS has snapshots without preallocated space; LVM requires preallocation (might be no longer true).
    - ZFS has checksumming (very interested in this) and compression (nice bonus).
    - LVM has online filesystem growth (ZFS can do it offline with export/mdadm --grow/import).
    - LVM has encryption (ZFS-on-Linux has not). This is the only major con of this combo I see. I guess I could go RAID6+LVM+ZFS... seems too heavy, or not?

    So, to close with a proper question:

    1) Is there anything that inherently discourages or precludes RAID6+ZFS? Does anyone have experience with a setup like this?
    2) Are there possibilities for checksumming and compression that would make ZFS unnecessary (maintaining the possibility of filesystem growth)? Because the RAID6+LVM combo seems the sanctioned, tested way.
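
    The layering itself is straightforward to set up; a minimal sketch with placeholder device names, meant as an illustration of the combination rather than a tuned configuration:

        # mdadm supplies the 2-disk redundancy and later growth...
        mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]
        # ...and ZFS sits on top for checksumming, compression and snapshots
        zpool create tank /dev/md0
        zfs set compression=on tank
        zfs snapshot tank@initial

        # growing later: add a disk, reshape, then let the pool expand into the new space
        mdadm --add /dev/md0 /dev/sdh
        mdadm --grow /dev/md0 --raid-devices=7
        zpool online -e tank /dev/md0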

  • What's the risk of running a Domain Controller so that it is accessible from the internet?

    - by Adrian Grigore
    I have three remote dedicated web servers at different web hosts. Adding them to a common domain would make a lot of administration tasks much easier. Since two of the servers are running Windows 2008 R2 Standard, I thought about promoting them to domain controllers in order to set up the Windows domain. There's another thread at Server Fault that recommends this. At the same time, I've read in a lot of places that this is not a good idea, because a domain controller should always be behind a firewall on a LAN. But I can't set up something like this because I don't have a LAN with a static IP accessible from the internet; in fact I don't even have a Windows server in my LAN.

    What I have not found out is why exposing a DC to the internet would be a bad idea. The only risk I can see is that if someone penetrates one of my web servers, it would be much easier to penetrate the others as well. But as far as I can see that's the worst-case scenario, since I am only joining my web servers to that domain, not any computers from my local network. Is this the only downside, or does it also make it easier to penetrate one of my web servers in the first place? Thanks, Adrian

  • Recursive move utility on Unix?

    - by Thomas Vander Stichele
    Sometimes I have two trees that used to have the same content but have grown out of sync (because I moved disks around or whatever). A good example is a tree where I mirror upstream packages from Fedora. I want to merge those two trees again by moving all of the files from tree1 into tree2. Usually I do this with:

        rsync -arv tree1/* tree2

    Then I delete tree1. However, this takes an awful lot of time and disk space, and it would be much easier to be able to do:

        mv -r tree1/* tree2

    In other words, a recursive move. It would be faster because, first of all, it would not even copy, just move the inodes, and second, I wouldn't need a delete at the end. Does this exist?

    As a test case, consider the following sequence of commands:

        $ mkdir -p a/b
        $ touch a/b/c1
        $ rsync -arv a/ a2
        sending incremental file list
        created directory ./
        b/
        b/c1
        b/c2
        sent 173 bytes  received 57 bytes  460.00 bytes/sec
        total size is 0  speedup is 0.00
        $ touch a/b/c2

    What command would now have the effect of moving a/b/c2 to a2/b/c2 and then deleting the a subtree (since everything in it is already in the destination tree)?
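
    There is no true recursive mv in coreutils, but two sketches come close, written against the tree1/tree2 example above. The rsync variant still copies file data (it just drops the separate delete pass); the find/mv variant stays at the inode level when both trees are on the same filesystem:

        # (a) rsync, removing source files as they are transferred
        rsync -a --remove-source-files tree1/ tree2/
        find tree1 -depth -type d -empty -delete

        # (b) per-file mv, which only moves inodes on the same filesystem
        ( cd tree1 && find . -type f -exec sh -c '
              for f; do
                  mkdir -p "../tree2/$(dirname "$f")"
                  mv -n "$f" "../tree2/$f"
              done' sh {} + )
        find tree1 -depth -type d -empty -delete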

  • Get-ChildItem fails to connect in SQLSERVER drive

    - by Norman Kelm
    I'm having some trouble with the SQLSERVER PSDrive; see the error below. I only have named instances on my PC, both 2005 and 2008, and I've added the SQL snap-ins. The PC is named YODA and the SQL instance is SQL2008. I navigate to the Databases folder for YODA\SQL2008 (you can see the path below). dir -name spits out a connection error trying to connect to YODASQL2008\DEFAULT, when it should be trying to connect to YODA\SQL2008, and then it outputs the db name, which is Twitter in this case. Is there something missing from my config?

    Output:

        PS SQLSERVER:\SQL\YODA\SQL2008\Databases> dir -name
        Get-ChildItem : SQL Server PowerShell provider error: Could not connect to 'YODASQL2008\DEFAULT'.
        [Failed to connect to server YODASQL2008. -- A network-related or instance-specific error occurred
        while establishing a connection to SQL Server. The server was not found or was not accessible.
        Verify that the instance name is correct and that SQL Server is configured to allow remote
        connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)]
        At line:1 char:4
        + dir <<<< -name
            + CategoryInfo          : OpenError: (SQLSERVER:\SQL\...tabases\Twitter:SqlPath) [Get-ChildItem], GenericProviderException
            + FullyQualifiedErrorId : ConnectFailed,Microsoft.PowerShell.Commands.GetChildItemCommand

        Twitter

    The error repeats for every database.

    Thanks, Norman

  • Subversion 1.6 + SASL : Only works with plaintext 'userPassword'?

    - by SiegeX
    I'm attempting to set up svnserve with SASL support on my Slackware 13.1 server, and after some trial and error I'm able to get it to work with the configuration listed below.

    svnserve.conf:

        [general]
        anon-access = read
        auth-access = write
        realm = myrepo

        [sasl]
        use-sasl = true
        min-encryption = 128
        max-encryption = 256

    /etc/sasl2/svn.conf:

        pwcheck_method: auxprop
        auxprop_plugin: sasldb
        sasldb_path: /etc/sasl2/my_sasldb
        mech_list: DIGEST-MD5

    sasldb users:

        $ sasldblistusers2 -f /etc/sasl2/my_sasldb
        test@myrepo: cmusaslsecretOTP
        test@myrepo: userPassword

    You'll notice that the output of sasldblistusers2 shows my test user as having both an encrypted cmusaslsecretOTP password and a plaintext userPassword password; i.e. if I were to run strings /etc/sasl2/my_sasldb, I would see the test user's password in plaintext. These two password entries were created with the following command, recommended by the Subversion book:

        saslpasswd2 -c -f /etc/sasl2/my_sasldb -u myrepo test

    After reading man saslpasswd2 I see the following option:

        -n    Don't set the plaintext userPassword property for the user. Only
              mechanism-specific secrets will be set (e.g. OTP, SRP)

    This is exactly what I want to do: suppress the plaintext password and only use the mechanism-specific secret (OTP in my case). So I clear out /etc/sasl2/my_sasldb and rerun saslpasswd2 as:

        saslpasswd2 -n -c -f /etc/sasl2/my_sasldb -u myrepo test

    I then follow it up with sasldblistusers2 and I see:

        $ sasldblistusers2 -f /etc/sasl2/my_sasldb
        test@myrepo: cmusaslsecretOTP

    Perfect, I think, now I have only encrypted passwords... only neither the Linux svn client nor the Windows TortoiseSVN client can connect to my repo anymore. They both present me with the user/pass challenge, but that's as far as I get.

    TLDR: So, what is the point of SVN supporting SASL if my sasldb must store its passwords in plaintext to work?

  • USB Wifi will not connect on Windows 7 (Even though the driver installs OK)

    - by Pete Roberts
    Windows 7 will not connect to a WiFi network using a USB network adapter. I have 3 adapters: a Senoa SUB 364 (EXT), a Repeatit SU2410 USB V2 and a ZYXEL G202. All of these devices install OK on Windows 7 Home Premium, both on my desktop PC (64-bit) and on my Asus Wii netbook (32-bit). In each case the adapter can be enabled/disabled, and the driver properties say it is working correctly. But when I try to connect to a network, Windows 7 behaves as though the adapter does not exist and reports no networks. The Wii has an integrated adapter which works perfectly under Windows and connects to any of the 3 networks available to me.

    I have done all the checks I can on the configuration. What seems odd to me is that it happens with all 3 devices on 2 different Windows 7 PCs, both of which are working perfectly in every other respect. This suggests the common denominator is me and I must be doing something wrong. What's also strange is that I cannot find any similar problems being reported on any of the forums. From what reading I've been able to do, it seems like the new WiFi virtualisation thingy in W7 is not recognising the adapters, which suggests I'm missing a configuration option somewhere. Looking forward to finding out if I'm not alone or just being stupid. Pete

  • MySQL socket connections working, but not port connections

    - by Neil
    I installed MySQL Community 5.1.45 on Snow Leopard 10.6, using the pkg from their site. I had previously installed a MySQL binary from entropy.ch. In the previous installation, connections were working fine before I upgraded to Snow Leopard; in Snow Leopard, both installations are problematic.

    Using an app called Sequel Pro, if I connect with the socket option, it connects properly. However, a standard connection with the same credentials doesn't work. From what I've understood, socket connections happen on the machine itself between processes, whereas normal connections occur over the network/ports, in this case a loopback to my machine, since the server and client are both on the same machine. My new CakePHP installation isn't able to connect to the db with the root credentials I provided.

    By the way, I've been starting the MySQL server using the preference pane. When I tried running mysqld from a terminal, it gave me:

        100323  1:54:37 [Warning] Can't create test file /usr/local/mysql-5.1.45-osx10.6-x86_64/data/mbp.lower-test
        100323  1:54:37 [Warning] Can't create test file /usr/local/mysql-5.1.45-osx10.6-x86_64/data/mbp.lower-test
        mysqld: Can't change dir to '/usr/local/mysql-5.1.45-osx10.6-x86_64/data/' (Errcode: 13)
        100323  1:54:37 [ERROR] Aborting

        100323  1:54:37 [Note] mysqld: Shutdown complete

    (mbp is the name of my machine.) How do I fix this so that my webserver can connect to the MySQL server?
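
    A quick way to separate the two connection paths from a shell, sketched with default credentials and port (adjust to your setup); if the TCP form fails while the socket form works, the usual suspects are skip-networking / bind-address in my.cnf and nothing listening on 3306:

        # socket connection (what currently works)
        mysql -u root -p --protocol=SOCKET -e "SHOW VARIABLES LIKE 'skip_networking';"

        # force a TCP connection, which is what a "standard" client connection uses
        mysql -u root -p --protocol=TCP -h 127.0.0.1 -P 3306 -e "SELECT 1;"

        # is mysqld listening on the port at all?
        netstat -an | grep 3306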
