Search Results

Search found 11409 results on 457 pages for 'large teams'.

Page 342 of 457

  • APC fragmentation woes on Apache AWS EC2 Small instance with WordPress and W3TC

    - by two7s_clash
    AWS EC2 Small instance, Apache 2 running WordPress and W3TC. Within an hour, my APC fragmentation hits 100%. My APC settings (/etc/php.d/apc.ini) are:

        apc.enabled = 1
        apc.shm_segments = 1
        apc.shm_size = 100M
        apc.optimization = 0
        apc.num_files_hint = 512
        apc.user_entries_hint = 1024
        apc.ttl = 7200
        apc.user_ttl = 7200
        apc.gc_ttl = 3600
        apc.cache_by_default = 1
        apc.use_request_time = 1
        apc.filters = "apc\.php$"
        apc.mmap_file_mask = "/tmp/apc.XXXXXX"
        apc.slam_defense = 0
        apc.file_update_protection = 2
        apc.enable_cli = 0
        apc.max_file_size = 2M
        apc.stat = 1
        apc.write_lock = 1
        apc.report_autofilter = 0
        apc.include_once_override = 0
        apc.rfc1867 = 0
        apc.rfc1867_prefix = "upload_"
        apc.rfc1867_name = "APC_UPLOAD_PROGRESS"
        apc.rfc1867_freq = 0
        apc.localcache = 0
        apc.localcache.size = 256M
        apc.coredump_unmap = 0
        apc.stat_ctime = 0
        apc.canonicalize = 1
        apc.lazy_functions = 0
        apc.lazy_classes = 0

    More poop can be seen here. The settings are mostly cribbed from here. The shm_size was meant to be whittled down from such a high value after some observation, but apparently even a value that large isn't high enough... I found a similar question/answer here. I do have some virtual hosts set up, but they aren't being touched much at all. Having users logged into the admin panel of WP does make things worse, but that's certainly not the main culprit. The question asker seems to suggest that W3TC is probably causing the problem, which the plugin author seems to agree with, but there aren't any helpful details beyond that. Why is it causing the problem? Do I just accept it for now and turn off object caching with APC? Is there nothing I can do? Does having APC enabled without using it for object caching actually help anything? Would memcache be an OK substitute just for object caching here? Finally, maybe I just shouldn't worry so much about the fragmentation?

    Read the article

  • Unconvert Text File from Binary Format

    - by Hammer Bro.
    I've got a rather large CSV file (~700MB) which I know to consist of lines of 27-character alpha-numeric hashes; no commas or anything fancy. Somehow, during its migration from Windows to Linux (via WinSCP and then a few regular SCPs), it has been converted into some kind of binary format I am unfamiliar with. If I open the file in vi, everything appears fine, and it says [converted] at the bottom, although I know it's not a line-endings issue (and dos2unix doesn't help). If I 'head' the file, it looks proper except for a "ÿþ" at the beginning of the first line. If I open the file in nano, however, I see the "ÿþ" at the start and then "^@" before every character (even newlines and EOF). If I try to re-save or copy the file (say via: head file.csv > short.txt), this special encoding is preserved. I copied the first ten lines out of vi (which displays it properly) into my Windows clipboard via my SSH client, then pasted them into a new text file, test.txt. This file is visually identical when opened in vi (and similar through 'head', minus the "ÿþ"), although it's roughly half the filesize. Additionally:

        file test.txt
        test.txt: ASCII text
        file short.txt
        short.txt:

    I have no idea what format this once-text file got converted to (it's notoriously hard to search the internet for symbols), but surely there must be some way to convert it back. Any ideas?
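
    The "ÿþ" prefix is the UTF-16 little-endian byte-order mark and the "^@" bytes are NUL padding, which strongly suggests the file was re-encoded as UTF-16 rather than turned into true binary. If that guess is right, a one-line conversion back should work; a sketch (file names are placeholders):

        # iconv reads the byte-order mark when the source encoding is given as UTF-16
        iconv -f UTF-16 -t UTF-8 file.csv > file-fixed.csv
        # verify the result
        file file-fixed.csv
        head file-fixed.csv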

    Read the article

  • Can compressing Program Files save space *and* give a significant boost to SSD performance?

    - by Christopher Galpin
    Considering solid-state disk space is still an expensive resource, compressing large folders has appeal. Thanks to VirtualStore, could Program Files be a case where it might even improve performance?

    Discovery. In particular I have been reading: "SSD and NTFS Compression Speed Increase?", "Does NTFS compression slow SSD/flash performance?", and "Will somebody benchmark whole disk compression (HD, SSD) please?" (may have to scroll up). The first link is particularly dreamy, but maybe has its head a little too far in the clouds. The third link has this sexy semi-log graph (logarithmic scale!). Quote (with notes): "Using highly compressible data (IOmeter), you get at most a 30x performance increase [for reads], and at least a 49x performance DECREASE [for writes]." Assuming I interpreted and clarified that sentence correctly, this single user's benchmark has me incredibly interested. Although write performance tanks wretchedly, read performance still soars. It gave me an idea.

    Idea: VirtualStore. It so happens that, thanks to sanity-saving security features introduced in Windows Vista, write access to certain folders such as Program Files is virtualized for non-administrator processes. Which means that, in normal (non-elevated) usage, a program or game's attempt to write data to its install location in Program Files (which is perhaps a poor location) is redirected to %UserProfile%\AppData\Local\VirtualStore, somewhere entirely different. Thus, to my understanding, writes to Program Files should primarily only occur when installing an application. This makes compressing it not only a huge source of space gain, but also a potential candidate for performance gain.

    Testing. The beginning of this post has me a bit timid: it suggests benchmarking NTFS compression on a whole drive is difficult because turning it off "doesn't decompress the objects". However, it seems to me the compact command is perfectly capable of doing so for both drives and individual folders. Could it be only marking them for decompression the next time the OS reads from them? I need to find the answer before I begin my own testing.

    Read the article

  • Setting up SQL Server 2005 to use all available memory in 32bit Windows Server 2003 - and verifying

    - by Rizwan Kassim
    There are a number of questions along this line - but they either sometimes contradict each other, or don't show how to properly verify that everything is actually working - hopefully this can be comprehensive... I'm running SQL Server 2005 SP3 Standard on Windows Server 2003 R2 Standard. My server has 8GB of memory installed and is almost entirely used as a database server - there are some services running on it, but the OS + services can run within 1GB of RAM. What I've done (please tell me if I'm doing something wrong):

      - /3GB in the boot.ini (to increase the amount of user-space memory available - info)
      - /PAE in the boot.ini (Windows claimed to be doing PAE even without this switch, somehow)
      - Enabled AWE in SQL Server
      - Enabled the Lock Pages in Memory option for users SYSTEM and Local Service (info). SQL Server Standard doesn't seem to use this until Cumulative Update 4, which isn't installed on my server. (info)
      - Set min/max memory to 1024MB/5112MB

    After doing all the above, we definitely saw a level of improvement - but I'd now like to verify my settings and make sure that I'm making full use of the memory available. (There appeared to be a slowdown when max = 7GB, so I edged off from that value, but it might have been just perceptual.) To verify, I checked the following counters in PerfMon:

        Process(sqlserv): Working Set : 76386304
        SQL Server (Memory Manager): Total Server Memory : 3538944 (I saw a doc noting that this isn't the full memory used by SQL Server, so I'm not sure whether to trust it)

    So -- my questions... Should my max be around 7GB? If not, what should it be? Why is total server memory at 3.5GB when it's been allocated 5GB? What is the proper metric for the amount of memory allocated to SQL Server? The Working Set seems a bit large... Am I possibly missing any steps in the setup? Any recommended resources on starting to tune the caching system now? Thanks

    Read the article

  • zfs setup question

    - by Staale
    Currently I have a Linux storage box and server with 4x750GB hard drives in RAID-5 with ext3. I have ordered 3x1.5TB disks to upgrade this. Here is my planned upgrade.

    Backup:
      - Format the 1.5TB disks
      - Copy all data from the RAID-5 disks to the 1.5TB disks
      - Destroy the RAID-5 array

    New setup:
      - Create a VirtualBox system and install Nexenta (OpenSolaris + Ubuntu) on it
      - Create a ZFS pool with raidz1 using the four 750GB disks
      - Copy from the 1.5TB disks to the VirtualBox ZFS pool
      - Format the 1.5TB disks
      - Replace three of the 750GB disks with the 1.5TB disks
      - Reuse the 750GB disks elsewhere

    The reason I wish to keep one 750GB disk is that I can't grow the disk count in a raidz array, and this gives me the option of replacing that disk later for an extra 750GB of storage. Would the ZFS performance be good running through VirtualBox, or will the performance overhead be too large? Will I get 1.5TB+1.5TB+750GB of storage in the raidz, or just 750GBx3 until all disks are 1.5TB?
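
    For reference, the capacity question can be illustrated with the usual zpool commands. A sketch only: device names are placeholders, and the autoexpand property is only available on builds that support it.

        # 4-disk raidz1 pool; each member only contributes as much as the smallest disk,
        # so a mix of 1.5TB and 750GB disks is treated as 4 x 750GB
        zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0

        # later, swap one 750GB member for a 1.5TB disk and wait for the resilver
        zpool replace tank c1t1d0 c2t0d0

        # the vdev only grows once every member has been replaced with a larger disk
        # (and, on builds that have the property, expansion has been allowed)
        zpool set autoexpand=on tank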

    Read the article

  • Throttling Postfix memory

    - by teddybeard
    I have a VPS on 1and1 similar to this configuration (512MB, burst up to 2GB). I run a web service where I crawl the web and notify my users through email and SMS when a certain online data feed changes. When I send the emails out, I just have PHP loop through the recipients list and send the emails using the mail() function. Whenever I try to send a large volume of these messages, my server starts acting funny. I can't even run an 'ls' sometimes because the shell tells me it 'cannot allocate memory'. The shell is unusable and yet my website is being served up fine. mail.err contains:

        Nov 14 17:30:09 s15351477 postfix/smtp[26000]: fatal: inet_addr_local[getifaddrs]: getifaddrs: Cannot allocate memory
        Nov 14 17:30:09 s15351477 postfix/sendmail[25999]: fatal: username(1000): unable to execute /usr/sbin/postdrop -r: Success
        Nov 14 18:29:14 s15351477 postfix/smtp[9911]: fatal: inet_addr_local[getifaddrs]: getifaddrs: Cannot allocate memory
        Nov 14 18:29:14 s15351477 postfix/sendmail[9910]: fatal: username(1000): unable to execute /usr/sbin/postdrop -r: Success

    Also, if relevant, my bean counters are:

        Version: 2.5
        uid 53907331:
            resource      held      maxheld   barrier               limit                 failcnt
            kmemsize      20779422  21041560  31457280              34603008              2989403
            lockedpages   0         0         512                   512                   0
            privvmpages   81488     82498     524288                576716                94640
            shmpages      2831      2831      32768                 32768                 0
            dummy         0         0         9223372036854775807   9223372036854775807   0
            numproc       90        91        128                   128                   6603
            physpages     32692     33531     2147483647            2147483647            0
            vmguarpages   0         0         131072                2147483647            0
            oomguarpages  32942     33781     9223372036854775807   2147483647            0
            numtcpsock    22        23        720                   720                   0
            numflock      27        28        376                   413                   0
            numpty        1         1         32                    32                    0
            numsiginfo    0         1         512                   512                   0
            tcpsndbuf     425888    441064    3440640               5406720               0
            tcprcvbuf     369200    376832    3440640               5406720               0
            othersockbuf  268000    268464    2252160               4194304               0
            dgramrcvbuf   0         8472      524288                576716                0
            numothersock  180       182       720                   720                   0
            dcachesize    952146    966231    5242880               5767168               0
            numfile       3609      3683      8192                  8192                  0
            dummy         0         0         0                     0                     0
            dummy         0         0         0                     0                     0
            dummy         0         0         0                     0                     0
            numiptent     25        25        200                   205                   0

    Is there some way I can throttle Postfix to keep it from swamping the system like this? Also wondering: why does email use so many resources? These emails are just short text.
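
    On the throttling question, one rough sketch is to cap how many Postfix processes run at once and slow per-destination delivery; the parameter values below are guesses to be tuned, this does not address the underlying VPS memory shortage, and default_destination_rate_delay needs Postfix 2.5 or later:

        # fewer simultaneous Postfix processes, fewer parallel deliveries, small delay per destination
        postconf -e 'default_process_limit = 10'
        postconf -e 'smtp_destination_concurrency_limit = 2'
        postconf -e 'default_destination_rate_delay = 1s'
        postfix reload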

    Read the article

  • Install Ubuntu on a new netbook?

    - by torbengb
    I'm planning to buy a netbook, and I am considering installing Ubuntu on it. It would mainly be used by my wife at home for web browsing and email (lightweight is important, hence a netbook!), and we'd occasionally bring it along on travels (mostly as a digital photo dropzone). I want to use Ubuntu instead of Windows because I'm sick of all the Windows hassle and updates. I'm not concerned about Windows applications; I'd switch to native alternatives as far as possible because really only Firefox and something like Picasa are needed. I'm considering an ASUS Eee PC 1001P, an MSI Wind U100, or a Point of View Mobii II (click the links for specs; never mind that the rest is German). I'm not in the USA. Whatever I buy will most likely have Windows 7 on it but no optical drive. I would also buy a large-ish USB stick but not an external optical drive. Should I (and can I) install Ubuntu alongside Windows 7, or remove Windows? If I remove Windows first, how would I be able to reinstall it if I change my mind? Can I make a backup? Is a recovery CD usually provided? Should I choose the regular Ubuntu, or the Ubuntu Netbook Remix (UNR)? Does UNR allow me to install additional applications just as easily? Note: I'm asking about Ubuntu vs. Windows; let's skip the hardware discussion for now. Edit: I'm assuming that Windows is already installed; if it isn't, then I would only install Ubuntu and this question is irrelevant.

    Read the article

  • How to replicate Lynda.com transitions in my own presentations (Keynote preferred, PowerPoint possible)

    - by jdmuys
    I recently noticed that lynda.com uses a very interesting transition in their training videos that I would like to replicate in my own presentations. It is somewhat similar to the computer screen animations in the "Minority Report" movie: they zoom out to a huge sheet, pan, and zoom back in to a new location. It's very good for navigating deep hierarchies. For example, when you are at level 1 you can see level 3, but it's tiny and unreadable. You then transition by zooming in to a deeper level that suddenly becomes legible. I can also imagine using it with a large "mind map" as a common Ariadne's thread, to dive into details (and back out) while still taking advantage of the audience's visual memory. It's a bit difficult to describe; I hope I managed to convey the idea. You can see it in action, for example, here (iPhone programming training): http://www.lynda.com/home/DisplayCourse.aspx?lpk2=48369 - then click on "What you should know" in the introduction chapter. See for example in that movie between 22s and 24s, then between 27s and 28s. How do they do that? How could I replicate that effect? TIA

    Read the article

  • Performance of file operations on thousands of files on NTFS vs HFS, ext3, others

    - by peterjmag
    [Crossposted from my Ask HN post. Feel free to close it if the question's too broad for superuser.] This is something I've been curious about for years, but I've never found any good discussions on the topic. Of course, my Google-fu might just be failing me... I often deal with projects involving thousands of relatively small files. This means that I'm frequently performing operations on all of those files or a large subset of them—copying the project folder elsewhere, deleting a bunch of temporary files, etc. Of all the machines I've worked on over the years, I've noticed that NTFS handles these tasks consistently slower than HFS on a Mac or ext3/ext4 on a Linux box. However, as far as I can tell, the raw throughput isn't actually slower on NTFS (at least not significantly), but the delay between each individual file is just a tiny bit longer. That little delay really adds up for thousands of files. (Side note: From what I've read, this is one of the reasons git is such a pain on Windows, since it relies so heavily on the file system for its object database.) Granted, my evidence is merely anecdotal—I don't currently have any real performance numbers, but it's something that I'd love to test further (perhaps with a Mac dual-booting into Windows). Still, my geekiness insists that someone out there already has. Can anyone explain this, or perhaps point me in the right direction to research it further myself?
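
    A rough, unscientific way to put numbers behind this comparison would be a timing run repeated on each filesystem of interest (a sketch only; the file count and payload are arbitrary):

        # create ten thousand tiny files, then time a bulk copy and a bulk delete
        mkdir smallfiles && cd smallfiles
        for i in $(seq 1 10000); do echo "payload $i" > "file_$i.txt"; done
        cd ..
        time cp -r smallfiles smallfiles-copy
        time rm -rf smallfiles-copy smallfiles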

    Read the article

  • Subversion and Quickbooks Files

    - by Jorge Fernandez
    I currently have a large problem on one of the file servers I manage for an accounting firm. QuickBooks has a tendency to create multiple files of the same thing over and over to prevent data loss. This is a good thing when you handle just a few files, but at an accounting firm it becomes a problem. Some of the older clients have 5-10 files in their respective folders, each with a different cut-off date. Because of user error, some of these files aren't labeled properly with their correct cut-off dates. This is where Subversion came to mind. Using the revision system would allow one file to be the master and carry all of its revisions. Has anyone ever tried this with QuickBooks files? I've only used SVN with code for applications, where each file is much smaller. How does SVN stand up with larger files like 10-25MB? I'm not exactly sure how SVN handles revisions - does it keep a duplicate of the files and double the disk space needed?

    Read the article

  • Sharepoint db issue after DB move to SQL 08

    - by JohnyV
    Recently we have moved our SharePoint 2007 DB from a SQL 2000 server to a 2008 x64 SQL server. All seems well; however, there is a problem where the SQL server stops running and the service has to be restarted. The errors mention insufficient internal memory etc. I have tried to start the DB using -g384, which is the default in SQL 2000, but 256 is the default for 2008 I believe. This has not rectified the issue. I was advised that perhaps the issue may be rectified by upgrading to WSS 3.0 SP2; however, when I tried to install this I got another error after the SP2 update and had to revert to a VM snapshot. The error after the service pack is: Server error: http://go.microsoft.com/fwlink?LinkID=96177. So I guess I have a few questions: how can I fix the first issue, and how can I fix the second issue? I have checked out many forums and posts and have tried a few things and still get no joy. Any assistance would be great.

    UPDATE: I have fixed the "Server error: http://go.microsoft.com/fwlink?LinkID=96177" issue - I needed to run the WSS SP2 as well as the Office Servers SP2, then the configuration wizard, and then the MOSS configuration worked. The errors I am getting in SQL are:

        SQL Server was unable to run a new system task, either because there is insufficient memory or the number of configured sessions exceeds the maximum allowed in the server. Verify that the server has adequate memory. Use sp_configure with option 'user connections' to check the maximum number of user connections allowed. Use sys.dm_exec_sessions to check the current number of sessions, including user processes.

        A read operation on a large object failed while sending data to the client. A common cause for this is if the application is running in READ UNCOMMITED isolation level. The connection will be terminated.

        There is insufficient system memory in resource pool 'internal' to run this query.

    These errors are raised by a user that was created as a service account for SharePoint.

    Read the article

  • How to inspect remote SMTP server's TLS certificate?

    - by Miles Erickson
    We have an Exchange 2007 server running on Windows Server 2008. Our client uses another vendor's mail server. Their security policies require us to use enforced TLS. This was working fine until recently. Now, when Exchange tries to deliver mail to the client's server, it logs the following:

        A secure connection to domain-secured domain 'ourclient.com' on connector 'Default external mail' could not be established because the validation of the Transport Layer Security (TLS) certificate for ourclient.com failed with status 'UntrustedRoot. Contact the administrator of ourclient.com to resolve the problem, or remove the domain from the domain-secured list.

    Removing ourclient.com from the TLSSendDomainSecureList causes messages to be delivered successfully using opportunistic TLS, but this is a temporary workaround at best. The client is an extremely large, security-sensitive international corporation. Our IT contact there claims to be unaware of any changes to their TLS certificate. I have asked him repeatedly to please identify the authority that generated the certificate so that I can troubleshoot the validation error, but so far he has been unable to provide an answer. For all I know, our client could have replaced their valid TLS certificate with one from an in-house certificate authority. Does anyone know a way to manually inspect a remote SMTP server's TLS certificate, as one can do for a remote HTTPS server's certificate in a web browser? It could be very helpful to determine who issued the certificate and compare that information against the list of trusted root certificates on our Exchange server.
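
    One way to do this from any machine that has OpenSSL available (the hostname below is a placeholder for the client's MX host) is a sketch along these lines:

        # open an SMTP session, upgrade it with STARTTLS, and dump the certificate chain
        openssl s_client -connect mail.ourclient.com:25 -starttls smtp -showcerts

        # if one of the printed certificates is saved to cert.pem, show its issuer and validity
        openssl x509 -noout -issuer -subject -dates < cert.pem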

    Read the article

  • How can I filter /var/adm/wtmpx on Solaris 10?

    - by Yanick Girouard
    Some of our Solaris 10 servers are monitored using SiteScope, which uses Telnet to probe certain ports (SSH is one of them) every few minutes. This is creating an insane number of lines in /var/adm/wtmpx, and eventually makes it so big (2.5G+) that we can no longer run the last command, or the uptime command is unable to accurately show the true uptime of the server. The error we get when trying to run the last command is this:

        /var/adm/wtmpx: Value too large for defined data type

    I have found ways we can clean this accounting log using a cron job (with the command /usr/lib/acct/fwtmp), and this works. That is not the issue. I was wondering if there would be a way to simply prevent connections from the monitoring user (in our case, user monsite) from creating entries in this accounting log at all. Is this possible, and if so, how can I do it? I've looked around and searched Google for a while, but couldn't find an answer to this question. NOTE: We are very well aware that the monitoring solution we employ is perhaps not the best one, but we cannot change it at this time. Therefore, suggesting that we change it is not pertinent to this question. If you want to read more on the SiteScope monitoring solution we employ for those servers, please see its documentation here and look for Port Monitor, and Connecting to remote UNIX servers, which explains how it works.
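
    For completeness, the cron-based cleanup mentioned above can also filter rather than truncate. A rough sketch, assuming the monitoring user shows up as 'monsite' in the records (this only trims existing entries; it does not prevent new ones from being written):

        # convert the binary wtmpx to ASCII, drop the monitoring user's lines, convert back
        /usr/lib/acct/fwtmp < /var/adm/wtmpx | grep -v monsite | /usr/lib/acct/fwtmp -ic > /var/adm/wtmpx.new
        mv /var/adm/wtmpx.new /var/adm/wtmpx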

    Read the article

  • How does it hurt to use Linux (Ubuntu) as a guest OS for all my tasks?

    - by sauparna
    I have a machine running Windows, where the disk has two partitions: C (50 GB) and D (250 GB). I do research in Information Retrieval and need to work with a large corpus (more than 50 GB) and in Linux. So if I want to install Linux on the existing system, keeping the Windows installation intact, will it be fine to run it in a virtual machine (say, QEMU, VMware, etc.)? An alternative is using Wubi. In that case the Linux installation has to be on drive C. Then, if I keep a small Linux installation (say 5 GB) on C, and my corpus on D (mounted in Linux), how will it affect the performance of my programs which would be accessing the mounted Windows drive D? Is it feasible to use Linux this way? Which of the above is better, if either is a way out at all?

    Note: Since my post in July 2010, I have been using and have tried several ways of maintaining a disk image that I can mount in Linux. I had a 100GB qcow2 disk and a 100GB raw disk, both formatted with an ext3 file system. I was mounting and connecting to the qcow2 disk using qemu-nbd. The problem was that every now and then the connection to the disk would get lost and the running programs would throw disk I/O errors. The raw disk would mount and work fine as a loop-mounted device, but when writing data to it, the mount.ntfs program would hog the CPU and the process would take an enormous amount of time. I was in fact running make on a piece of software located on this raw disk, and after a point in time make was waiting while mount.ntfs showed 100% CPU usage.

    Read the article

  • Is there a way to tell what the download speed is from a site/server

    - by Memor-X
    I'm looking into which ISP I should go with at the place I'm moving into. One ISP, which I have been told good things about, has data limits (which, when breached, will drop your speed to dial-up speed) but multiple memberships which, apart from the cheapest membership, have the same data limits (the cheapest has a 10GB data limit). In their fine print, they say that each membership has a different port speed, and one particular part jumps out at me:

        These speeds are the NBN (National Broadband Network) port speed and not the actual Internet data speed which will vary based on numerous factors including destination you are reaching, your network equipment, network congestion etc.

    I plan to use the net to download DLC and patch updates for games (particularly the insanely large update for the Wii U) and games from Steam (if I find any good ones other than this one JRPG), and to download development resources from free sites like Deposit Files and Mediafire. Since one membership with a 1000GB data limit is $145 with a port speed of 12Mbps/1Mbps (cheapest) while another with the same limit is $190 with a port speed of 100Mbps/40Mbps (expensive), I am wondering how I can tell what the speed coming from a site is, since I don't want to be wasting money on speed that makes no difference (unlike memory, which I'd rather have to spare).

    NOTE: the speeds are for a fiber-optic network; where my new place is, I can only connect via fixed wireless, which I may not be able to get with this ISP, but if I can get this network then good.
    NOTE 2: most of the resources I get from Deposit Files are about 200MB or less; if a resource pack is bigger, it's split into multiple archives (like .7z.part), while on Mediafire I have yet to see one bigger than 150MB.
    NOTE 3: one update patch for a PS3 game is close to 4GB (Disgaea 4), which I need to get access to the DLC, and on the weekend I downloaded 5GB for the Final Fantasy XIV open beta for the PS3, which took almost 5 hours.
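
    One rough way to see what a particular site actually delivers to a given connection (a sketch; the URL is a placeholder, and results vary with time of day and congestion) is to download to nowhere and read the reported rate:

        # report the average download rate in bytes per second
        curl -L -s -o /dev/null -w "%{speed_download} bytes/sec\n" http://example.com/some-large-file

        # or time a wget and read the rate it prints
        wget -O /dev/null http://example.com/some-large-file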

    Read the article

  • SCCM 2012 - some remote clients unable to download some applications, 401.2 error

    - by growse
    I've got a small SCCM 2012 deployment with about 35 clients attached. Most of these clients are in the same network as the single SCCM host, but three are about 1000 miles away. Oddly, these three clients have stopped being able to download some application packages over BITS. Publishing a new package works for all the other clients, but for these three it never seems to download. If I go to the software centre, it just hangs at "0% downloaded". On the client, DataTransfer.log says (repeatedly):

        CDTSJob::HandleErrors: DTS Job '{2DCBBB4C-6D84-479A-9218-885B72C834B9}' BITS Job '{E78147DD-4A26-4942-B4FD-6EC3EB77EECD}' under user 'S-1-5-18' OldErrorCount 442 NewErrorCount 443 ErrorCode 0x80072EE2 DataTransferService 30/07/2012 09:27:41 2964 (0x0B94)
        CDTSJob::HandleErrors: DTS Job ID='{2DCBBB4C-6D84-479A-9218-885B72C834B9}' URL='http://sccm-host:80/SMS_DP_SMSPKG$/Content_3e7f6982-6346-4f27-ae00-ad5dcb391455.1' ProtType=1 DataTransferService 30/07/2012 09:27:41 2964 (0x0B94)

    Cas.log says (repeatedly):

        Location update from CTM for content Content_3e7f6982-6346-4f27-ae00-ad5dcb391455.1 and request {AD041FCB-03D2-4FE6-A6FA-38A6B80FB2A1} ContentAccess 30/07/2012 08:33:39 5048 (0x13B8)
        Download location found 0 - http://lonsbrndsccm02.mcs.int.thomsonreuters.com/SMS_DP_SMSPKG$/Content_3e7f6982-6346-4f27-ae00-ad5dcb391455.1 ContentAccess 30/07/2012 08:33:39 5048 (0x13B8)
        Download request only, ignoring location update ContentAccess 30/07/2012 08:33:39 5048 (0x13B8)

    On the server, I've enabled failed request log tracing. The raw IIS log says the following:

        2012-07-30 08:28:42 10.13.111.35 GET /SMS_DP_SMSPKG$/Content_3e7f6982-6346-4f27-ae00-ad5dcb391455.1/sccm/NSCP-0.4.0.172-x64.msi 80 - 10.2.27.19 Microsoft+BITS/7.5 401 2 5 293

    That is a 401.2 error, meaning access denied. The failed request log is large, but the punchline is that it chucks out an "Unauthorized: Access is denied due to invalid credentials." message. All clients are members of the same domain and appear to be (otherwise) working great. I've re-installed the SCCM client, and deleted and re-added the computer to SCCM. Some other packages seem to work fine; the daily anti-malware delta gets downloaded and patched without issue. Why are these packages failing?

    Read the article

  • Virtual Win XP Mode stopped HP LJ Pro M1212nf MFP printing in Win 7 Pro

    - by Dee
    Virtual Win XP Mode stopped HP LJ Pro M1212nf MFP printing in Win 7 Pro: I am running Windows 7 Pro with Virtual Windows XP Mode. My printer is HP LaserJet Pro M1212nf MFP attached directly to a USB port of the computer. This printer was working fine in Windows 7, until I tried to attach the printer to the Virtual Windows XP Mode in order to load the printer driver in the Virtual Windows XP Mode. At that point, the printer disappeared from the list of USB devices on the toolbar at the top of the window of the Virtual Windows XP Mode. After installing the printer driver in the Virtual Windows XP Mode, the printer did not work in that mode and also no longer worked in Windows 7. In Windows 7 and in the Virtual Windows XP Mode, print files are sent to the print queue, but are never printed. In Windows 7, the print queue states that the printer is offline. In the Virtual Windows XP Mode, the printer can be toggled from "Print Offline" to "Print Online", but no print files are ever printed from the print queue. The printer acts as though it is no longer connected to the computer, even though it is still physically connected to the USB port of the computer. How can I get the printer to work again in Windows 7? (At this point, I am no longer interested in using the Virtual Windows XP Mode.) I have tried a large number of things to find and fix the printer problem, but have had no success. Device Manager cannot see the printer even though it is physically connected via USB port (have tried different USB ports) to the computer. Restoring Win 7 and Virtual Win XP Mode to times before the problem does not fix the problem. How can I get the computer to see the printer, so that I can print again in Win 7?

    Read the article

  • Why is datacenter water cooling not widespread?

    - by MainMa
    From what I read and hear about datacenters, there are not too many server rooms which use water cooling, and none of the largest datacenters use water cooling (correct me if I'm wrong). Also, it's relatively easy to buy ordinary PC components that use water cooling, while water-cooled rack servers are nearly nonexistent. On the other hand, using water can possibly (IMO):

      - Reduce the power consumption of large datacenters, especially if it is possible to create directly cooled facilities (i.e. the facility is located near a river or the sea).
      - Reduce noise, making it less painful for humans to work in datacenters.
      - Reduce the space needed for the servers: at the server level, I imagine that in both rack and blade servers it's easier to pass water cooling tubes than to waste space letting air pass inside; at the datacenter level, if it's still required to keep the alleys between servers for maintenance access, the empty space under the floor and at ceiling level used for the air can be removed.

    So why are water cooling systems not widespread, neither at the datacenter level nor at the rack/blade server level? Is it because:

      - Water cooling is hardly redundant at the server level?
      - The direct cost of a water-cooled facility is too high compared to an ordinary datacenter?
      - It is difficult to maintain such a system (regularly cleaning a water cooling system which uses water from a river is of course much more complicated and expensive than just vacuum-cleaning the fans)?

    Read the article

  • How to configure Hyper-V failover cluster to live migrate when dynamic memory runs out?

    - by Matt Johnson
    Apologies in advance that this is not a direct programming question, but I have a feeling that the solution involves custom PowerShell scripts (maybe), so this is as good a place to ask as any. I maintain a website that has a large Hyper-V cluster for SQL Servers. We are using Windows 2008 R2 SP1 and the new "dynamic memory" feature. I've already reviewed the Best Practices Guide and implemented its suggested configuration. Everything works well, except that when SQL demand increases memory pressure beyond the memory available on the physical machine, the memory status goes into the "Warning" state and stays there. I assume the hypervisor is using a swap file on the host to fulfill the memory requirement, thus slowing the virtual machine down. When this happens, there are plenty of other nodes in the cluster that have available resources. I can live-migrate the virtual server over there and everything works, and the warnings go away. Now how can I automate this? I see no menu options in either Hyper-V or the Failover Cluster Manager for performing a migration or shutdown when dynamic memory goes into the warning state. Any ideas about how to script this, or monitor it and invoke the action directly, would be helpful. If the solution involves coding, PowerShell would be ideal, but I could envision this as a .NET service that monitors for this state and kicks off the migration request. I just don't know what objects are involved in doing the monitoring or kicking off the live migration. Thanks in advance.

    Read the article

  • Apache Won't Restart After Compiling PHP with Postgres

    - by gonzofish
    I've compiled PHP (v5.3.1) with Postgres using the following configure:

        ./configure \
            --build=x86_64-redhat-linux-gnu \
            --host=x86_64-redhat-linux-gnu \
            --target=x86_64-redhat-linux-gnu \
            --program-prefix= \
            --prefix=/usr/ \
            --exec-prefix=/usr/ \
            --bindir=/usr/bin/ \
            --sbindir=/usr/sbin/ \
            --sysconfdir=/etc \
            --datadir=/usr/share \
            --includedir=/usr/include/ \
            --libdir=/usr/lib64 \
            --libexecdir=/usr/libexec \
            --localstatedir=/var \
            --sharedstatedir=/usr/com \
            --mandir=/usr/share/man \
            --infodir=/usr/share/info \
            --cache-file=../config.cache \
            --with-libdir=lib64 \
            --with-config-file-path=/etc \
            --with-config-file-scan-dir=/etc/php.d \
            --with-pic \
            --disable-rpath \
            --with-pear \
            --with-pic \
            --with-bz2 \
            --with-exec-dir=/usr/bin \
            --with-freetype-dir=/usr \
            --with-png-dir=/usr \
            --with-xpm-dir=/usr \
            --enable-gd-native-ttf \
            --with-t1lib=/usr \
            --without-gdbm \
            --with-gettext \
            --without-gmp \
            --with-iconv \
            --with-jpeg-dir=/usr \
            --with-openssl \
            --with-zlib \
            --with-layout=GNU \
            --enable-exif \
            --enable-ftp \
            --enable-magic-quotes \
            --enable-sockets \
            --enable-sysvsem \
            --enable-sysvshm \
            --enable-sysvmsg \
            --with-kerberos \
            --enable-ucd-snmp-hack \
            --enable-shmop \
            --enable-calendar \
            --with-libxml-dir=/usr \
            --enable-xml \
            --with-system-tzdata \
            --with-mime-magic=/usr/share/file/magic \
            --with-apxs2=/usr/sbin/apxs \
            --with-mysql=/usr/include/mysql \
            --without-gd \
            --with-dom=/usr/include/libxml2/libxml \
            --disable-dba \
            --without-unixODBC \
            --disable-pdo \
            --enable-xmlreader \
            --enable-xmlwriter \
            --without-sqlite \
            --without-sqlite3 \
            --disable-phar \
            --enable-fileinfo \
            --enable-json \
            --without-pspell \
            --disable-wddx \
            --with-curl=/usr/include/curl \
            --enable-posix \
            --with-mcrypt \
            --enable-mbstring \
            --with-pgsql=/mnt/mv/pgsql

    I'm using Postgres 8.4.0 and Apache 2.2.8; I have the following line in my Apache conf file:

        LoadModule php5_module /usr/lib64/httpd/modules/libphp5.so

    And when I attempt to restart Apache, I get the following error message:

        Starting httpd: httpd: Syntax error on line 205 of /etc/httpd/conf/httpd.conf: Cannot load /usr/lib64/httpd/modules/libphp5.so into server: /usr/lib64/httpd/modules/libphp5.so: undefined symbol: lo_import_with_oid

    Now, I know that this is a problem with Postgres within PHP, because lo_import_with_oid is a function in the Postgres source which allows the importing of large objects; also, if I remove the --with-pgsql option, PHP and Apache get along great. I've scoured the Internet looking for answers all day, but to no avail. Does anyone have ANY insight into what is causing my problem?
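
    An undefined symbol like this often means the libpq resolved at run time is older than the headers PHP was built against (lo_import_with_oid appears to have been introduced with PostgreSQL 8.4's libpq). A quick check, sketched here with library paths guessed from the configure output:

        # which libpq does the PHP module actually resolve at run time?
        ldd /usr/lib64/httpd/modules/libphp5.so | grep libpq

        # does that library export the symbol?
        nm -D /usr/lib64/libpq.so.5 | grep lo_import_with_oid

        # if nothing is found, an older libpq (e.g. from a stock distro RPM) is being
        # picked up instead of the 8.4 build under /mnt/mv/pgsql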

    Read the article

  • Experience with AMCC 3ware 9650se raid cards? Ours seems dead

    - by antiduh
    We have an 8-port 3ware 9650SE RAID card for our main disk array. We had to bring the server down for a pending power outage, and when we turned the machine back on, the RAID card never started. This card has been in service for a couple of years without problems, and was working up until the shutdown. Now, when we turn the machine on, the BIOS option ROM that normally kicks in before the bootloader doesn't show up, none of the drives start, and when the OS tries to access the device, it just times out. The firmware on it has been upgraded in the past, so it's possible we've hit some sort of firmware bug. We're using it in a Silicon Mechanics R272 machine with Gentoo for the OS. The OS eventually boots, but alas, without the card. We've ordered a new one, but I'm worried that if we replace the card it won't recognize the existing array. Has anybody performed a card swap before? Any help would be greatly appreciated. Edit: These are the kernel errors we see:

        3ware 9000 Storage Controller device driver for Linux v2.26.02.012.
        3w-9xxx 0000:09:00.0: PCI INT A -> GSI 18 (level, low) -> IRQ 18
        3w-9xxx 0000:09:00.0: setting latency timer to 64
        3w-9xxx: scsi0: ERROR: (0x06:0x000D): PCI Abort: clearing.
        3w-9xxx: scsi0: ERROR: (0x06:0x001F): Microcontroller not ready during reset sequence.
        3w-9xxx: scsi0: ERROR: (0x06:0x0036): Response queue (large) empty failed during reset sequence.
        3w-9xxx 0000:09:00.0: PCI INT A disabled

    Read the article

  • How can the little guys effectively learn and use puppet?

    - by drumfire
    Six months ago, in our not-for-profit project, we decided to start migrating our system management to a Puppet-controlled environment because we are expecting our number of servers to grow substantially between now and a year from now. Since the decision was made, our IT guys have become a bit too annoyed a bit too often. Their biggest objections are:

      - "We're not programmers, we're sysadmins."
      - Modules are available online but many differ from one another; wheels are being reinvented too often, so how do you decide which one fits the bill?
      - Code in our repo is not transparent enough; to find out how something works they have to recurse through manifests and modules they might even have written themselves a while ago.
      - One new daemon requires writing a new module, conventions have to be similar to other modules, and it's a difficult process: "Let's just run it and see how it works."
      - Tons of hardly known 'extensions' in community modules: 'trocla', 'augeas', 'hiera'... how can our sysadmins keep track?

    I can see why a large organisation would dispatch their sysadmins to Puppet courses to become Puppet masters. But how would smaller players get to learn Puppet to a professional level if they do not go to courses and basically learn it via their browser and editor?

    Read the article

  • NFS v4, HA Migration, and stale handles on clients

    - by Karl Katzke
    I'm managing a server running NFS v4 with Pacemaker/OpenAIS. NFS is configured to use TCP. When I migrate the NFS server to another node in the Pacemaker cluster, even though the metadata is persisted, connections from the clients 'hang' and eventually time out after 90 seconds. After those 90 seconds, the old mountpoint becomes 'stale' and the mounted files can no longer be accessed. The 90-second grace period seems to be part of the server configuration and not the client configuration. I see this message on the server:

        kernel: NFSD: starting 90-second grace period

    If I restart the NFS client on the client nodes after I migrate (unmounting and then remounting the share), then I don't experience the problem, but connections and file transfers are still interrupted. Three questions: What is the 90-second grace period, and what is it there for? How can I keep the files from going stale on the clients without restarting them after I migrate the NFS server to another node? Is it actually possible to migrate the NFS server without having large file uploads drop?
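
    For reference, on Linux the NFSv4 lease and grace durations are server-side tunables. A sketch of shortening them, assuming the kernel exposes these files (the grace file only exists on newer kernels, the values must be written before nfsd starts, and the init script name varies by distribution):

        # stop the NFS server, shorten the v4 lease/grace windows, start it again
        service nfs stop
        echo 10 > /proc/fs/nfsd/nfsv4leasetime
        echo 10 > /proc/fs/nfsd/nfsv4gracetime   # only present on newer kernels
        service nfs start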

    Read the article

  • Bash script 'while read' loop causes 'broken pipe' error when run with GNU Parallel

    - by Joe White
    According to the GNU Parallel mailing list this is not a GNU Parallel-specific problem. They suggested that I post my problem here. The error I'm getting is a "broken pipe" error, but I feel I should first explain the context of my problem and what causes this error. It happens when trying to use any bash script containing a 'while read' loop in GNU Parallel. I have a basic bash script like this:

        #!/bin/bash
        # linkcheck.sh
        while read domain
        do
            host "$domain"
        done

    Assume that I want to pipe in a large list (250MB, say):

        cat urllist | ./linkcheck.sh

    Running the host command on 250MB worth of URLs is rather slow. To speed things up I want to break the input into chunks before piping it and then run multiple jobs in parallel. GNU Parallel is capable of doing this:

        cat urllist | parallel --pipe -j0 parallel ./linkcheck.sh {}

    {} is substituted by the contents of urllist line by line. Assume that my system's default setup is capable of running 500-ish jobs per instance of parallel. To get round this limitation we can parallelize Parallel itself:

        cat urllist | parallel -j10 --pipe parallel -j0 ./linkcheck.sh {}

    This will run 5000-ish jobs. It will also, sadly, cause the error "broken pipe" (bash FAQ). Yet the script starts to work if I remove the while read loop and take input directly from whatever is fed into {}, e.g.:

        #!/bin/bash
        # linkchecker.sh
        domain="$1"
        host "$1"

    Why will it not work with a while read loop? Is it safe to just turn off the SIGPIPE signal to stop the "broken pipe" message, or will that have side effects such as data corruption? Thanks for reading.
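
    On the SIGPIPE part of the question, a minimal sketch of what ignoring the signal would look like (whether it is wise here is exactly what is being asked; the usual risk is that writes to an already-closed pipe fail and their output is silently lost, rather than existing data being corrupted):

        #!/bin/bash
        # linkcheck.sh - same loop, but the shell ignores SIGPIPE instead of being
        # killed when the pipe it writes to is closed early
        trap '' PIPE
        while read domain
        do
            host "$domain"
        done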

    Read the article

  • Mounting 2.5" SSD in Antec Atlas 550's 3.5" bay

    - by cecilkorik
    Just got some new SSDs to add to my servers, unfortunately there doesn't appear to be anywhere to actually mount them. The cases are Antec Atlas 550 mid-tower server cases. The problem is that the 3.5" bays are intended for 3.5" hard drives (obviously) not SSDs, so they are mounted from the bottom with rubber grommets to reduce vibration. There are NO side-holes in any of the 3.5" bays, all mounting has to be done from the bottom of the drive, through the thick rubber grommets, which requires special extra-long screws with large heads. Those screws will not work with the SSD, the thread is too coarse, 2.5" drives apparently use M3 screws with a finer pitch. The case's 3.5" bays have holes aligned for 2.5" drives and by moving the grommets into the appropriate holes they line up with the SSD. But that's as far as I can get. I don't have and cannot find any screws that use the finer M3 pitch that are long enough and have the wide head. I can't remove the grommets because the screw holes are much too wide without them, the entire head of the screw fits through. And again, there are no side-holes, so I can't use the mounting bracket that came with the drive nor any other 2.5"-3.5" bracket I've found. Here is a pic of what I am dealing with (note that the SSD is just resting on the case, it's not currently screwed in if that's not clear) Any ideas for safely mounting these little guys without resorting to duct tape or superglue would be very appreciated. If anyone knows where I could buy the appropriate type of M3 threaded screw or has any other ideas please help me out here.

    Read the article
