Search Results

Search found 11897 results on 476 pages for 'dean rather'.

Page 338/476 | < Previous Page | 334 335 336 337 338 339 340 341 342 343 344 345  | Next Page >

  • Weird glitches on Intel iGPU

    - by FrederikVds
    I have a weird problem that I can't manage to describe in one word, so I'm having trouble searching for a solution. My monitors sometimes go black for a tenth of a second. Other times, they show the image shifted a few centimeters to the left or to the right. This happens on both of my monitors, but not necessarily at the same time. I would say it happens about once a minute, unless under heavy load, in which case it can happen every second or so. Interestingly, heavy CPU/memory usage can also cause this, not just heavy GPU usage. This only happens when they are both at 1920x1080, not when one of them, or both, are at a lower resolution. It also happens when they are in mirrored mode instead of extended desktop mode. My GPU is obviously not at full clock speed most of the time: this happens at 350 MHz as well as at 1200 MHz, so it doesn't seem like a matter of too little performance. I'm not seeing any traditional artefacts, even when I use MSI Kombustor, only this kind of full-screen glitch. CPU stressing software isn't reporting any issues either. I'm thinking maybe the connection between my CPU and my PCH isn't fast enough, but I can't find anyone with the same problem to confirm that. I'd rather not invest in a discrete GPU without being certain it will fix something. Does anyone know how to search for this, or even better, does anyone have a solution? My full specs are below. Thanks in advance!
    Specs:
      - ASUS P8Z77-M
      - Intel Core i5-3570K (with Intel HD 4000 Graphics)
      - 2x4 GB AMD Performance Edition memory
      - Corsair Force 3 Series Rev. B 120GB SSD
      - Maxtor 200GB HD
      - Lite-On DVD-RW
      - Antec 350 Watt PSU
      - 64-bit Windows 7 Professional
      - 2x Iiyama ProLite E2208HDS displays

    Read the article

  • How does one skip “Windows did not shut down successfully” in Win7-64?

    - by XenonofArcticus
    Migrating an app from an expensive and unreliable dedicated embedded x86 box running WinXP-embedded to COTS hardware (Dell E6410 laptop) running normal Win7-64. At this time, it's not feasible to deploy using Windows 7 embedded. The problem is that the system is still sort of "embedded". The power could shut off at virtually any time without prior warning. We've stripped the OS down and removed the battery capability so that it will power down as desired. The app never writes to the disk, so it's not like we're going to corrupt anything terribly. The system is essentially idle after our app is up and running (with the exception of some computation, graphics, and TCP/IP and serial communications), so the OS enters a pretty stable state rather quickly. After a power loss, however, it rightly complains that Windows did not shut down successfully and presents the user with the Windows Error Recovery text screen. If left alone, it does eventually move on and boot just fine, but we'd like to skip that step if possible. WinXP-embedded is designed to do this automatically, so I know it's possible. I've looked at the kernel switches but I didn't see anything documented for "Skip Windows Error Recovery". I've also read extensively on the startup process: http://homepage.ntlworld.com./jonathan.deboynepollard/FGA/windows-nt-6-boot-process.html I know I can disable the auto chkdsk in the registry, but that's not the same thing either. So, how do I streamline the boot process to not hassle the user about a situation that will be the regular, normal situation?
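
    One approach worth noting here is changing the boot status policy with bcdedit so Windows ignores the unclean-shutdown flag instead of showing the recovery menu. A minimal sketch, run from an elevated command prompt and assuming the default {current} boot entry (test on a non-critical image first):

      rem skip "Windows Error Recovery" after an unclean shutdown
      bcdedit /set {current} bootstatuspolicy ignoreallfailures

      rem optionally remove the boot menu timeout as well
      bcdedit /timeout 0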

    Read the article

  • Rackspace Ubuntu 12.04 server stuck in initramfs after kernel upgrade

    - by Znarkus
    Can't boot after I did an aptitude full-upgrade and let it update menu.lst (did a diff first and it looked good). This is what I've done so far in the BusyBox shell:
      mkdir /tmp/xvda1
      mount /dev/xvda1 /tmp/xvda1
      chroot /dev/xvda1
      nano /boot/grub/menu.lst
    The file looks like this:
      title Ubuntu 12.04.1 LTS, kernel 3.2.0-31-virtual
      root (hd0,0)
      kernel /boot/vmlinuz-3.2.0-31-virtual root=UUID=/dev/xvda1 ro quiet splash
      initrd /boot/initrd.img-3.2.0-31-virtual

      title Ubuntu 12.04.1 LTS, kernel 3.2.0-31-virtual (recovery mode)
      root (hd0,0)
      kernel /boot/vmlinuz-3.2.0-31-virtual root=UUID=/dev/xvda1 ro single
      initrd /boot/initrd.img-3.2.0-31-virtual

      title Ubuntu 12.04.1 LTS, kernel 3.2.0-24-virtual
      root (hd0,0)
      kernel /boot/vmlinuz-3.2.0-24-virtual root=UUID=/dev/xvda1 ro quiet splash
      initrd /boot/initrd.img-3.2.0-24-virtual

      title Ubuntu 12.04.1 LTS, kernel 3.2.0-24-virtual (recovery mode)
      root (hd0,0)
      kernel /boot/vmlinuz-3.2.0-24-virtual root=UUID=/dev/xvda1 ro single
      initrd /boot/initrd.img-3.2.0-24-virtual

      title Ubuntu 12.04.1 LTS, kernel 3.2.0-24-generic
      root (hd0,0)
      kernel /boot/vmlinuz-3.2.0-24-generic root=UUID=/dev/xvda1 ro quiet splash
      initrd /boot/initrd.img-3.2.0-24-generic

      title Ubuntu 12.04.1 LTS, kernel 3.2.0-24-generic (recovery mode)
      root (hd0,0)
      kernel /boot/vmlinuz-3.2.0-24-generic root=UUID=/dev/xvda1 ro single
      initrd /boot/initrd.img-3.2.0-24-generic

      title Chainload into GRUB 2
      root (hd0,0)
      kernel /boot/grub/core.img

      title Ubuntu 12.04.1 LTS, memtest86+
      root (hd0,0)
      kernel /boot/memtest86+.bin
    From what I remember, the upgrade added the UUID= string. Should I remove these? Or rather, how do I get my system back online again? Thanks.
    Update: Seems like I can't even edit the file: [ Error writing /boot/grub/menu.lst: Read-only file system ]
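
    For reference, a rough sketch of how the file could be made writable and the root= entries checked from the rescue shell, assuming /dev/xvda1 really is the root file system (verify with blkid if it is available in the BusyBox environment):

      # remount the mounted root file system read-write before editing
      mount -o remount,rw /tmp/xvda1

      # confirm the actual UUID of the root device
      blkid /dev/xvda1

      # a kernel line normally uses either the plain device:
      #   kernel /boot/vmlinuz-3.2.0-31-virtual root=/dev/xvda1 ro quiet splash
      # or a real UUID, not "UUID=/dev/xvda1":
      #   kernel /boot/vmlinuz-3.2.0-31-virtual root=UUID=<uuid-from-blkid> ro quiet splash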

    Read the article

  • Is there a monitoring software suite that will alert me if it has received no activity in a time period?

    - by matt b
    This might be a very basic question, but I am not very familiar with the exact features of Nagios versus Munin versus other monitoring tools. Let's say we have a process that needs to run daily for some very important infrastructure reasons. We've had cases where the process did not run or was otherwise down for a number of days before anyone noticed. I'd like to set up a system that will enable me to easily know when the daily run did not take place for some reason. I can set up this process to send an email on every successful run (or every failed run), but I do not trust that the people receiving this email would notice an absence of an "I'm OK" message. What I am envisioning is some type of "tripwire" service which this V.I.P. (very-important-process) can send a status message to each time it runs, whether successfully or not; and if the "tripwire" service has not received any word from the VIP within a configurable amount of time, it can then send an alert to someone. (The difference between what I envision and the first approach I outlined is a service that sends a message only in abnormal conditions, rather than a service that sends messages each day that the status is normal/OK). Can Nagios be set up to send an alert like this, if it has not heard from a certain service/device/process in N days? Is there another tool out there which does have this feature?
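
    Nagios can model exactly this with a passive service plus freshness checking: the daily job submits a passive result, and Nagios alerts only when no result has arrived within the threshold. A minimal sketch using standard Nagios 3 directives (host, service and command names here are illustrative):

      define service {
          use                      generic-service
          host_name                batch-host
          service_description      Daily VIP run
          active_checks_enabled    0     ; never polled
          passive_checks_enabled   1     ; the job reports in
          check_freshness          1     ; alert when no report arrives
          freshness_threshold      93600 ; 26 hours, in seconds
          check_command            alert-stale-vip   ; runs only when stale
      }

      # the daily job submits its result through the external command file
      # (path varies by install), e.g.:
      #   echo "[$(date +%s)] PROCESS_SERVICE_CHECK_RESULT;batch-host;Daily VIP run;0;ran OK" \
      #       >> /usr/local/nagios/var/rw/nagios.cmd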

    Read the article

  • Can't find the Partition tab in Disk Utility on OS X 10.6.8

    - by John W
    I just got a used MacBook Pro. I created a new admin account and deleted the old one, as well as one other user. This is an older late-2007 MBP; the OS X upgrade to 10.6.8 was just performed. My Macintosh HD is showing up as Partition 2. I ran Disk Utility (not from the install disk), but there was no Partition tab. I have a 160GB drive with only 53GB of space left on it. Since I am the only user and have no files on the laptop yet, I don't understand why there is so little space left. Surely the OS can't use up over 100GB. I wanted to run Disk Utility to see if there were any recovery partitions or other partitions left over from the previous owner that could be erased to make room for expanding the main partition. Unfortunately, there is no Partition tab in Disk Utility. The documentation I have found online states that this version of OS X includes that utility. The OS X disks I have are for an older version, so I wasn't sure if they would be of any use in solving this problem. I was also afraid that if I used those disks I would lose the little bit of data and apps that I have assembled. I would rather not do a fresh install and have to do all the updates again. The previous owner left some apps that I don't want to lose, as I would have to pay handsomely to get them back. Simply put: if the previous users' data is still taking up space on a recovery partition that I can't see, I need to locate it, erase it, and expand the primary partition to reclaim disk space for my files. I am new to Mac, so please be as descriptive as possible. Thanks.
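
    Before reinstalling, Terminal can show what Disk Utility is hiding; these are standard OS X commands and only read information:

      # list every disk and partition, including hidden/recovery slices
      diskutil list

      # see which volumes and directories are actually using the space
      df -h
      sudo du -sh /Users /Applications /Library /private/var 2>/dev/null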

    Read the article

  • File system that allows specifying a different RAID level per directory and changing it afterward

    - by Adam Ryczkowski
    I have 5 hard drives on which I want to keep my data. Some of my files are more important and some of them are less, so some of them I wish to put on RAID-6, and for some RAID-5 is sufficient. It is difficult to predict, at the moment of creating the arrays, how much space of each type to declare. What I would do if I hadn't heard about ZFS is partition the hard drives into identical 100GB partitions and, as my needs grow, assemble those partitions into md devices using linux-raid. Then I'd combine those devices using LVM into logical volumes where I'd put my data. So when I needed more space of e.g. RAID-6, I'd take a 100GB partition from each hard drive, assemble them into another RAID-6 md device, and use it as physical storage for the logical volume group dedicated to RAID-6 data. Then I could grow the file system on that logical volume. On top of the RAID-6 and RAID-5 volume groups (managed by LVM) would reside completely independent file systems, which I'd later merge with multiple mount --bind into a single directory structure that would reflect the logical structure of the data rather than that of the storage. But now that I have heard about ZFS, with all its performance, data-healing and compression capabilities, I cannot stop wondering whether it can help me. If so, what do you think would be the best setup?
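
    One thing worth knowing before choosing: ZFS fixes redundancy per pool/vdev rather than per directory, but datasets recover some of the flexibility described above. A rough sketch with placeholder device names:

      # a single raidz2 pool across all five disks
      zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde

      # per-directory policy becomes per-dataset properties
      zfs create -o compression=on tank/important
      zfs set copies=2 tank/important   # extra copies for the critical data
      zfs create tank/bulk              # ordinary data, pool redundancy only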

    Read the article

  • "Ethernet" doesn't have a valid IP configuration

    - by Xuzuno
    I'm using an ethernet cord to connect to my internet, and it had been working well until Thursday morning, when I turned on my laptop (Windows 8) to see a yellow triangle sign in the bottom right-hand corner, in front of the ethernet connected symbol. Since then I haven't been able to access the internet from my computer. When I hover over it, it says that it is an "Unidentified network" and there is "No internet access". I've run the Windows 8 troubleshooting and it says that the problem found was ""Ethernet" doesn't have a valid IP configuration", but I'm unsure how to fix it. I'm thinking that the problem is with my computer rather than my network, because I've tried another laptop (Windows 7) through the same ethernet cable and connection and the internet works fine on the other laptop. I've tried so many fixes that I've found online, with none of them actually working. Yesterday I even tried a full system reset, where I re-installed Windows 8 and re-partitioned and wiped everything off the hard drive, but it still appears to have the exact same problem. Today I also purchased and tried a new ethernet cable, which didn't work, so I then purchased a USB-to-Ethernet adapter, to make sure that it wasn't the ethernet port on my laptop that was faulty. That didn't work either, and the same problem still remains. I feel like I've tried everything, so can someone please help me?

    Read the article

  • How do you apply proxy settings per computer instead of per user?

    - by Oliver Salzburg
    So far, I've used a user group policy object utilizing Internet Explorer maintenance to set a proxy for the user in IE. We have now deployed the Enterprise Client (EC) starter group policy to our domain and this policy affects this behavior. The EC group policy uses the policy Make proxy settings per-machine (rather than per-user). This policy describes itself as: This policy is intended to ensure that proxy settings apply uniformly to the same computer and do not vary from user to user. Great! This seems to be an improvement over my previous setup. If you enable this policy, users cannot set user-specific proxy settings. They must use the zones created for all users of the computer. What zones and where do I configure the proxy settings for them? I assumed the policy would simply take the user settings and apply them, but that's not what's happening. Now no proxy server is set at all. So my previous settings obviously no longer have any effect. So far, I've only come up with solutions that involved direct manipulation of the Windows registry. Which is fine, I guess, but the way the proxy is configured for users makes it appear as if there could be a higher level approach.
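
    For what it's worth, when that policy is enabled IE reads the proxy from HKLM instead of HKCU. A hedged sketch of the registry values involved (verify the paths in your own environment before relying on them):

      rem policy switch: 0 = per-machine, 1 = per-user
      reg query "HKLM\Software\Policies\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxySettingsPerUser

      rem machine-wide proxy used once the policy is in effect
      reg add "HKLM\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyEnable /t REG_DWORD /d 1 /f
      reg add "HKLM\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyServer /t REG_SZ /d "proxy.example.com:8080" /f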

    Read the article

  • Triple-Boot + 4 partition Limit

    - by dsimcha
    I just bought a new hard drive so that I could convert my XP-only machine into an XP-Ubuntu-Windows 7 triple boot machine. Since the drive is absurdly huge (1 TB) I wouldn't mind throwing ReactOS into the mix, too. I just found out that master boot records are limited to 4 entries, meaning 4 primary partitions. I had Windows XP set up on my old drive as a boot partition, a program files partition and a media partition. Since I really didn't want to install XP from scratch, I cloned this setup on my new drive. This leaves me one MBR partition entry for installing Windows 7, Ubuntu and ReactOS. I'd like to avoid having to install XP from scratch like the plague, partly because it's supposed to be a safety net in case things go wrong with my other OS's and because I've invested a lot of time getting it set up exactly the way I like it. Here are the options I've considered and why I don't like them: Install Windows 7 on my media partition. This would work, but I prefer to keep my media partition completely separate from any OS, so that I can reformat an OS partition without affecting my media partition at all. Use wubi or something to install Ubuntu in the same partition as something else. Again, this is brittle. Move all my media to a logical drive on an extended partition. Create another logical drive on this extended partition for Ubuntu. The problem here is that extended partitions are rather brittle--if you nuke one, it renders the rest useless. Just put the old drive back in my computer and run XP off it. Use the new one for the other OS's. The problem here is that the old drive is slower and uses extra power, generates extra heat, etc. Can anyone suggest any other possibilities that I may have overlooked?

    Read the article

  • Apache Server with memcache, varnish and php slow request times

    - by coolestdude1
    My issue is that these servers are taking rather long per request, about 2 seconds on average, just to serve files. When we had just one server doing everything it was noticeably faster, even with the same web apps (Drupal 6 and Drupal 7). I want to get this number down to a reasonable level, so I need some help getting to the bottom of why the request times are so slow. This can cause the web app to hang on POST or PUT and generally leads to a bad user experience on my sites. PS: I am more of a server newbie, so this has confounded me for quite some time.
    The domains: collabornation.net and nptrainingworks.com (they run off the same two webservers using vhost configs).
    The gear: two Rackspace 4 GB servers running CentOS 6.2 Final. They have a mounted file system (Gluster) that is used to keep files the same on both machines. They are behind a Rackspace load balancer running round robin. MySQL is accessed via php-pdo and php-mysql; MySQL itself runs on another instance, which also runs memcache and hosts phpMyAdmin. Apache version 2.2.15-15.el6.centos.1 (httpd.x86_64), Varnish version 3.0.2-1.el5 (varnish.x86_64), PHP version 5.3.14-1.el6.remi (php.x86_64).
    Configs linked below: Apache conf, vhost conf, Varnish backends, Varnish defaults, Varnish ACL, PHP INI. Again, need some help, much appreciated!
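
    A quick way to see where those 2 seconds go is to time the request stages and check whether Varnish is actually hitting its cache; a small diagnostic sketch (URLs are placeholders):

      # connect / first-byte / total timings for one request
      curl -o /dev/null -s -w 'connect:%{time_connect}  ttfb:%{time_starttransfer}  total:%{time_total}\n' http://collabornation.net/

      # Varnish hit/miss counters (Varnish 3 counter names)
      varnishstat -1 | egrep 'cache_hit|cache_miss'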

    Read the article

  • nginx short URLs for MediaWiki

    - by William
    I am trying to set up short URLs for a MediaWiki site. The wiki is in a subdirectory mydir (http://www.example.com/mywiki). I've already set up rewrites in /etc/nginx/sites-available so that example.com redirects to example.com/mywiki. Currently a URL looks like http://www.example.com/mywiki/index.php?title=Main_Page. I want to clean up the URL so that it looks like http://www.example.com/mywiki/Main_Page. I am having quite a bit of trouble doing this, as I am not familiar with regular expressions or the syntax that the nginx config files use. This is what I currently have:
      server_name example.com www.example.com;

      location / {
          rewrite ^.+ /mywiki/ permanent;
      }

      location /wiki/ {
          rewrite ^/mywiki/([^?]*)(?:\?(.*))? /mywiki/index.php?title=$1&$2 last;
      }
    The second rewrite is obviously the one that's broken. It is based off of "Page title -- nginx rewrite--root access" in the MediaWiki documentation. When I try to load the site, the browser tells me I get infinite redirects. Does anyone know how I should go about fixing this issue? Or rather, what is the correct way to implement this, and what do all those symbols mean?
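
    For comparison, a commonly used pattern for MediaWiki short URLs under nginx avoids matching the query string at all and hands unknown paths to index.php with try_files. A sketch assuming the wiki lives in /mywiki and PHP is already handled by an existing fastcgi block:

      location /mywiki/ {
          try_files $uri $uri/ @mediawiki;
      }

      location @mediawiki {
          # /mywiki/Main_Page  ->  /mywiki/index.php?title=Main_Page
          rewrite ^/mywiki/(.*)$ /mywiki/index.php?title=$1&$args last;
      }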

    Read the article

  • VMWare Server modifying files related to paused VMs, is this expected?

    - by David Spillett
    While refreshing the backup of a VM used for testing, I experienced the following warning from tar: "tar: /VMsR0/cli_noddyco_test/VM2K8_32_web.vmem: file changed as we read it". The VMs in question were paused at the time. My first thought was that I'd mixed up the machines and was trying to back up something that was still actively running. To be sure, I unpaused and properly shut down the VM, and the vmem files that tar reported changing vanished, as I would expect. Is it normal for VMware Server to touch or alter files for paused VMs like this, or is there likely something amiss with our setup? If this is expected behaviour, is it just touching the vmem file (and so altering the last modification date without actually changing content)? If it is normal for files relating to paused VMs to be updated, I shall have to revise our backup procedures to make sure the VMs are fully shut down rather than just paused (this isn't a problem, but it seems strange and I'd prefer to understand what VMware is doing and why, instead of just dismissing it as "one of those things" and working around it). For further detail: the host in question is VMware Server version 2.0.2 running on 64-bit Debian/Lenny, and the VM did not have any snapshots at the time. We have backed up paused VMs this way in the past with no such warnings from tar.

    Read the article

  • iptables, blocking large numbers of IP Addresses

    - by Twirrim
    I'm looking to block IP addresses in a relatively automated fashion if they look to be 'screen scraping' content from websites that we host. In the past this was achieved by some ingenious Perl scripts and OpenBSD's pf. pf is great in that you can hand it nice tables of IP addresses and it will efficiently handle blocking based on them. However, for various reasons (before my time) they made the decision to switch to CentOS. iptables doesn't natively provide the ability to block large numbers of addresses (I'm told it wasn't unusual to be blocking 5000+), and I'm a bit cautious about adding that many rules to an iptables chain. ipt_recent would be awesome for doing this, plus it provides a lot of flexibility for just severely slowing down access, but there is a bug in the CentOS kernel that is stopping me from using it (reported, but awaiting a fix). Using ipset would entail compiling a more up-to-date version of iptables than comes with CentOS, which, whilst I'm perfectly capable of doing it, I'd rather not do from a patching, security and consistency perspective. Other than those two, it looks like nfblock is a reasonable alternative. Is anyone aware of other ways of achieving this? Are my concerns about several thousand IP addresses in iptables as individual rules unfounded?
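
    For scale, the ipset approach keeps a single iptables rule and pushes the thousands of addresses into a kernel hash set; a sketch using the newer ipset syntax (command names differ slightly between ipset 4.x and 6.x, and the addresses are placeholders):

      # one set, one rule
      ipset create scrapers hash:ip hashsize 65536
      iptables -I INPUT -m set --match-set scrapers src -j DROP

      # the detection script only needs to add/remove members
      ipset add scrapers 203.0.113.45
      ipset del scrapers 203.0.113.45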

    Read the article

  • How can I measure actual memory usage from my running processes?

    - by NullUser
    I have two servers, server1 and server2. Both of them are identical HP blades running the exact same OS (RHEL 5.5). Here's the output of free for both of them:
      ### server1:
                   total       used       free     shared    buffers     cached
      Mem:       8017848    2746596    5271252          0     212772    1768800
      -/+ buffers/cache:     765024    7252824
      Swap:     14188536          0   14188536

      ### server2:
                   total       used       free     shared    buffers     cached
      Mem:       8017848    4494836    3523012          0     212724    3136568
      -/+ buffers/cache:    1145544    6872304
      Swap:     14188536          0   14188536
    If I understand correctly, server2 is using significantly more memory for disk I/O caching, which still counts as memory used. But both are running the same OS, and if I remember correctly I configured both with the same parameters when they were installed. I did a diff on /etc/sysctl.conf and they are identical. The problem is, I am collecting memory usage and other metrics over a period of time (e.g. vmstat, iostat, etc.) while a load is generated on the system. The memory used for caching is throwing off my calculations on the results. How can I measure actual memory usage from my running processes, rather than system usage? Is used - (buffers + cached) a valid way to measure this?
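
    Yes: "used minus (buffers + cached)" is exactly what the "-/+ buffers/cache" line already reports, so that row is the system-level number to collect. For a process-level cross-check, summing resident set sizes is a reasonable (if slightly overlapping) estimate:

      # application memory as free(1) sees it: second row, "used" column
      free -m

      # rough total of resident set sizes, in MB (RSS double-counts shared pages)
      ps -eo rss= | awk '{sum += $1} END {printf "%.0f MB\n", sum/1024}'

      # biggest consumers
      ps -eo pid,rss,comm --sort=-rss | head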

    Read the article

  • How to safely send newsletters on VPS (SMTP) w/ non-hosted domain as "From" email?

    - by Andy M
    Greetings, I'm trying to understand the safest way to use SMTP. I'm considering purchasing a second virtual server mainly for email sending, on which I will set up PHPlist (a free open-source mailing program), so we have the freedom to send unlimited newsletters (well, 10,000 per day at least, which requires a VPS rather than shared hosting). Here's my current setup with a paid mass-mailing software: I have a website, let's call it MyHostedDomain.org. I send newsletters with the From / Reply-To address as [email protected], which isn't being hosted by me, but I have access to the email account. Can I more or less safely set this up with an SMTP server on a VPS? I.e. send messages using [email protected] as the visible address, but having it all go through my VPS SMTP? I cannot authenticate it, right? Is this too risky a practice? Is my only hope to use an address with a domain on the VPS, i.e. [email protected]? I already have a reverse DNS record for the domain hosted on my current VPS. I also see other suggestions, like Sender ID and DKIM. But with all these things combined, will this still work? I don't want to get blacklisted, but the good thing is this is a somewhat private list, and users opt in to subscribe. So it's a self-made audience. (If it makes you feel better, this is related to a non-profit activity, not some marketing scam... it's for a good cause, I assure you!)
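
    The deliverability question mostly comes down to whether MyHostedDomain.org's DNS authorizes the VPS to send for it; a sketch of the kinds of records involved (addresses and selector are examples only, and the records belong in that domain's zone, not the VPS's):

      ; SPF: allow the VPS address to send mail for the domain
      MyHostedDomain.org.    IN TXT  "v=spf1 ip4:203.0.113.10 ~all"

      ; DKIM: published once a signing key pair is generated, e.g.
      ; mail._domainkey.MyHostedDomain.org.  IN TXT  "v=DKIM1; k=rsa; p=<public key>"

      ; plus reverse DNS on the VPS IP resolving back to the sending hostname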

    Read the article

  • Are ZFS snapshots + S3 a viable backup system for several VMs and general fileserver storage?

    - by AllanA
    I've been tasked with setting up a backup system for my small office (around 12 people). Most of our production stuff is on the AWS cloud, so what I need to back up are some small office/development files (under 100G right now), plus our operational VMs and development, which round out to a bit under 1T. I just need something reliable, convenient, and straightforward. I'm comfortable with Linux, FreeBSD, and to some extent Solaris 10, so I'm leaning toward a full server rather than an appliance system ala Openfiler or FreeNAS. What I'm contemplating is a small fileserver for general storage and nightly backups of the virtual machines, followed up by an offsite backup to Amazon's S3 storage service. It'd be the usual incremental backups nightly and full backup weekly. My question is if using ZFS snapshots, both locally and dumped to S3 via 'zfs send [-i]', is a viable backup tool? Or should I stick to using Duplicity, or some other method entirely? ZFS snapshots on the internal fileserver/backup machine sound like a perfect way to provide quick and convenient data recovery, so I'm likely to go with that for local redundancy. (If you folks see scenarios where relying on ZFS snapshots would be worse than a more traditional archiving backup, feel free to convince me.) But are snapshots flexible enough to lean on for recovery from the loss of my backup server? Or am I better off with something more traditional? (feel free to recommend free or commercial backup solutions you favor.)
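
    The snapshot/send mechanics themselves are simple; a rough sketch of a nightly incremental cycle (dataset names, dates and the S3 client are placeholders, adjust for whatever tooling you use):

      # nightly snapshot
      zfs snapshot tank/vms@2012-10-02

      # incremental stream since the previous night, compressed to a file
      zfs send -i tank/vms@2012-10-01 tank/vms@2012-10-02 | gzip > /backup/vms-2012-10-02.zfs.gz

      # ship it offsite (s3cmd shown purely as an example client)
      s3cmd put /backup/vms-2012-10-02.zfs.gz s3://my-backup-bucket/vms/

      # restore path: fetch, gunzip, and zfs receive onto a pool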

    Read the article

  • Half of installed RAM is hardware reserved

    - by user968270
    After a rather arduous and convoluted series of problems that left me without a desktop for ~80 days, I've finally got the thing up and running, having replaced the power supply, motherboard, graphics card and CPU. Now, however, I'm experiencing the 'hardware reserved RAM' issue. Perhaps this is the exhaustion talking, but looking at the question that tends to get pointed to when this kind of topic gets locked as a duplicate hasn't helped. I have 16 GB of RAM installed in an MSi 970A-G46, which is spec'd for up to 32 GB of RAM. The BIOS recognizes that I have 16 GB installed, and the resource monitor also shows the whole 16 GB, only it shows 8 GB as hardware reserved. I've seen suggestions that it's an OS issue, but the particular installation of Windows 7 (64-bit) which I'm running on my boot drive is the same as the one that could actually access the 16 GB in my previous motherboard (MSi 870A-G54). I've updated my BIOS using the MSi Live Update tool and restarted the machine with no effect, and I cannot seem to locate any 'Memory Remapping' option as I've seen mentioned. I've physically swapped the RAM between the slots to no effect. I've unchecked the Maximum Memory box in the msconfig Boot tab's advanced options, also to no effect.
    These are my system's basic specifications:
      - OS: Windows 7 Home Premium (64-bit)
      - Motherboard: MSi 970A-G46
      - CPU: AMD FX-8150
      - Graphics card: XFX Radeon HD 6870
      - Boot drive: OCZ Agility 3
      - Storage drive: Samsung Spinpoint F3 ST1000DM005/HD103SJ 1TB
      - PSU: Thermaltake TR-2 TR600 600W ATX12V v2.3

    Read the article

  • How do you keep server documentation from getting out of sync with the actual setup?

    - by Frerich Raabe
    I'm a hobbyist maintaining a small FreeBSD server serving mail via IMAP; it's an exercise in server administration. The setup does have reasonably good documentation (in AsciiDoc format), which recently allowed another person to recreate the entire setup from scratch in less than 30 minutes. However, I noticed that after the initial setup, it easily happens that small changes done to the system (say: inetd gets disabled, my IMAP server listens on an additional port for ManageSieve connections, a new router is added to the exim configuration) don't end up in the documentation immediately (if at all). My idea was to avoid this problem by (partially?) generating the documentation out of the configuration files and the comments therein. One way to implement this may be to put /etc and /usr/local/etc into some source code management system (say, git) and then run a script which regenerates the documentation on every commit. However, I'm not sure whether that would be overkill and/or too difficult to get right (after all, I don't want complete copies of the source files in my documentation, but rather just the diffs). How do other people avoid having their server documentation get outdated? Is there a good way to keep them in sync automatically, or do you just have the discipline to update the documentation at the same time you modify the system?
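
    A lightweight version of the idea in the question, tracking /etc in git and regenerating the AsciiDoc on each commit, might look like the sketch below; regen-server-docs is a placeholder for whatever script extracts the comments into the document (the existing etckeeper package already covers the /etc-in-git half):

      # track configuration in git
      cd /etc && git init && git add -A && git commit -m "baseline"

      # contents of /etc/.git/hooks/post-commit (make it executable):
      #!/bin/sh
      /usr/local/bin/regen-server-docs /etc /root/doc/server-setup.txt
      asciidoc /root/doc/server-setup.txt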

    Read the article

  • Windows XP usb drivers reinstalling upon reboot

    - by iWerner
    We have a Windows XP SP3 laptop (Acer Travelmate 7320) to which we connect a variety of astronomy equipment (a telescope, its mount, some cameras and others) all of which connect through USB. When we plug in these devices, Windows tells us that it detects the hardware and installs the driver. All of these devices then function correctly using the software that came from the vendor (unfortunately, one of the vendors does not support Vista 64, and that is why we're using our XP laptop). However when we reboot the computer we experience a variety of symptoms: Windows reports that it found new hardware for some of the devices and tries to reinstall their drivers, and for some of the other devices needs to be unplugged and plugged in again before they are detected again by the operating system, in which case Windows still tries to reinstall their drivers. It is as if Windows does not remember that it has already installed the drivers. Is this a common problem on Windows XP? If so, what can be done about it? Should we rather be looking at the laptop's firmware and drivers? We've looked into updating the drivers for the chipset, but this did not solve the problem. Thank you in advance.

    Read the article

  • Migrating from one linux install to another: How to keep the second disk around?

    - by Jim Miller
    I've got a Linux box running Fedora 19 that I want to move to CentOS 6.4. Rather than trying to do something fancy with the current disk (which has also accumulated a lot of sludge over the years), I'm going to get a new disk, put CentOS on that, and then move the to-be-preserved bits of stuff from the old disk to the new one. I haven't done this yet, but I presume it should be semi-straightforward: do the CentOS install on the new disk, mount the old disk on /olddisk or somesuch, and start copying. However, I'm not sure how to handle getting the machine to recognize the new, empty disk as the target of the CentOS install (I suppose I can just pull the old disk during the installation), making sure it remembers that this is the intended boot disk once the install has happened, and tweaking /etc/fstab (right?) to set up the old disk on the desired mount point. (Both disks are, or will be, SATA.) I could probably hack it together without losing too much hair or doing too much damage, but could anyone offer some advice that would get/keep me on the right track? Thanks!
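
    The fstab part is the simplest piece once CentOS is installed; a sketch with placeholder device names (take the real UUID from blkid on your own system):

      # identify the old disk's filesystem
      blkid /dev/sdb1

      # /etc/fstab entry mounting it at /olddisk
      UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /olddisk  ext4  defaults  0  2

      mkdir -p /olddisk && mount /olddisk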

    Read the article

  • What kind of hosting do I need? [closed]

    - by Robert Smith
    I have been trying to answer this question, but I haven't found a specific answer to my situation. As I want to pay for what I need, I thought I could get a good answer here. I have a custom-made forum (rather than a built-in forum like the ones you can find as plugins, e.g. WP-Forum or phpBB-type software) written in Django. I don't want to use Apache and mod_wsgi because it's usually very memory-hungry and I can't afford a big server. I prefer a combination of nginx and gunicorn, which I think is very efficient (maybe you can also tell me what you think about that). I'm expecting to receive 10,000 to 20,000 visits each month with 15,000 to 30,000 page impressions. I have reviewed some cloud services like Amazon EC2 or Rackspace and other more traditional services (Linode). This site won't use videos or big images and I certainly don't need a huge amount of bandwidth (200GB would definitely be too much). I need shell access, so shared hosting is out of the question. What do I need to run a website like that without problems? What about RAM? Would 256MB be enough (that's the amount of RAM offered by small instances in Amazon and Rackspace)? Do you know of any alternatives to those I mentioned? If you need more information to provide a useful answer, please don't hesitate to ask. Thanks a lot.

    Read the article

  • Command or tool to display list of connections to a Windows file share

    - by BizTalkMama
    Is there a Windows command or tool that can tell me what users or computers are connected to a Windows fileshare? Here's why I'm looking for this: I've run into issues in the past where our deployment team has deployed BizTalk applications to one of our environments using the wrong bindings, leaving us with two receive locations pointing to the same file share (i.e. both dev and test servers point to dev receive location uri). When this occurs, the two environments in question tend to take turns processing the files received (meaning if I am attempting to debug something in one environment and the other environment has picked the file up, it looks as if my test file has disappeared into thin air). We have several different environments, plus individual developer machines, and I'd rather not have to check each individually to find the culprit. I'm looking for a quick way to detect what locations are connected to the share once I notice my test files vanishing. If I can determine the connections that are invalid, I can go directly to the person responsible for that environment and avoid the time it takes to randomly ask around. Or if the connections appear to be correct, I can go directly to troubleshooting where in the process the message gets lost. Any suggestions?
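
    On the machine hosting the share, the built-in commands list exactly this; run from an elevated prompt on the file server (read-only queries):

      rem sessions: which computers and users currently hold a connection
      net session

      rem which shared files those sessions have open
      net file
      openfiles /query /fo table

    The same information is available interactively under Computer Management > Shared Folders > Sessions / Open Files.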

    Read the article

  • Router that allows custom Dynamic DNS server [closed]

    - by Thuy
    I've made my own DDNS service and it works fine using an application running on clients to update the IP. But if for some reason I don't have the choice of using my software and instead need to use a router to update the IP, it becomes troublesome. For example, I needed to set up IPsec from a customer to me, and the customer's router/firewall (a Netgear SRX5308) has a dynamic IP from an ISP which can't offer static IPs, so it needs to use dynamic DNS for the tunnel to work. In this case there really isn't a client to run the software on, since it's a router/firewall. Unfortunately, it seems that most routers are rather unfriendly towards custom DDNS solutions and most offer only dyndns.com or similar templates, which was the case with this router too, leaving me with no way to use my own dynamic DNS server IP. I do have the option of swapping out the customer's router, and I've been looking around for alternatives, so I was wondering if anyone on this great site might have been in a similar situation or might just know about some router/firewall that is friendlier towards custom DDNS solutions that I might be able to use. Thanks in advance for any help or guidance!

    Read the article

  • How to optimize a PostgreSQL server for a "write once, read many"-type infrastructure?

    - by mhu
    Greetings, I am working on a piece of software that logs entries (and related tagging) in a PostgreSQL database for storage and retrieval. We never update any data once it has been inserted; we might remove it when the entry gets too old, but this is done at most once a day. Stored entries can be retrieved by users. The insertion of new entries can happen rather fast and regularly, thus the database will commonly hold several million elements. The tables used are pretty simple: one table for ids, raw content and insertion date; and one table storing tags and their values associated with an id. User searches mostly concern tag values, so SELECTs usually consist of JOIN queries on ids across the two tables. To sum it up:
      - 2 tables
      - lots of INSERTs
      - no UPDATEs
      - some DELETEs, once a day at most
      - some user-generated SELECTs with JOINs
      - huge data set
    What would an optimal server configuration (software and hardware; I assume, for example, that RAID10 could help) be for my PostgreSQL server, given these requirements? By optimal, I mean one that allows SELECT queries to take a reasonably small amount of time. I can provide more information about the current setup (like tables, indexes ...) if needed.
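
    For the shape described (append-only entries plus a tag table joined on id), the usual first wins are indexes that match the tag lookups and memory settings sized to the working set; a hedged sketch where the table and column names are made up to match the description and the configuration values are starting points, not tuned numbers:

      -- illustrative names; the real schema comes from the application
      CREATE INDEX tags_name_value_idx ON tags (name, value);
      CREATE INDEX tags_entry_id_idx   ON tags (entry_id);

      -- postgresql.conf starting points for a read-mostly box:
      --   shared_buffers       ~ 25% of RAM
      --   effective_cache_size ~ 50-75% of RAM
      --   checkpoint_segments  raised if the INSERT bursts are heavy
      --   random_page_cost     lowered if the data sits on RAID10 or SSD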

    Read the article

  • Changing time intervals for vSphere performance monitoring, and is there a better way?

    - by user991710
    I have a set of experiments running on a cluster node which is running ESXi 5.1, and I want to monitor the resource consumption on the node itself. Specifically, I am currently running experiments on a subset of the VMs on the ESXi host and wish to monitor resource consumption on those specific VMs. Right now, since I'm using only a single ESXi host, I am using vSphere to access it and the performance reports. Ideally, I would like to get these reports for different time intervals. I can already get the charts for a time interval of 1h, but these are rather long-running experiments and something like 2h, 3h,... would be preferable. However, I cannot seem to change the time interval. Here is an example of what my Customize Performance Chart dialog shows: I am also running on a trial key at the moment. How can I change this interval? Do I need a standard license, or do I just need to turn off the VM (unlikely, but I haven't attempted it yet as these are long-running experiments)? Any help (or pointers to documentation which deals with the above -- I've already looked but did not find much) would be greatly appreciated.

    Read the article
