
  • How to configure Hyper-V failover cluster to live migrate when dynamic memory runs out?

    - by Matt Johnson
    Apologies in advance that this is not a direct programming question, but I have a feeling the solution involves custom PowerShell scripts (maybe), so this is as good a place to ask as any.

    I maintain a website that has a large Hyper-V cluster for SQL Servers. We are using Windows 2008 R2 SP1 and the new "dynamic memory" feature. I've already reviewed the Best Practices Guide and implemented its suggested configuration. Everything works well, except that when SQL demand pushes memory pressure beyond the memory available on the physical machine, the memory status goes into the "Warning" state and stays there. I assume the hypervisor is using a swap file on the host to fulfill the memory requirement, thus slowing the virtual machine down.

    When this happens, there are plenty of other nodes in the cluster with available resources. I can live-migrate the virtual server over there, everything works, and the warnings go away. Now, how can I automate this? I see no menu options in either Hyper-V or the Failover Cluster Manager for performing a migration or shutdown when dynamic memory goes into the warning state.

    Any ideas about how to script this, or monitor it and invoke the action directly, would be helpful. If the solution involves coding, PowerShell would be ideal, but I could envision this as a .NET service that monitors for this state and kicks off the migration request. I just don't know what objects are involved in doing the monitoring or kicking off the live migration. Thanks in advance.
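    A minimal polling sketch of the idea. It assumes the Hyper-V and FailoverClusters PowerShell modules (which ship with Server 2012; on 2008 R2 the same data has to come from the root\virtualization WMI namespace, and the -MigrationType parameter may not be available), and the node-selection logic and interval are placeholders:

        # Live-migrate any VM whose dynamic-memory status is "Warning" to the
        # cluster node that currently has the most free physical memory.
        Import-Module Hyper-V
        Import-Module FailoverClusters

        while ($true) {
            $starved = Get-VM | Where-Object { $_.DynamicMemoryEnabled -and $_.MemoryStatus -eq 'Warning' }
            foreach ($vm in $starved) {
                $target = Get-ClusterNode |
                    Where-Object { $_.State -eq 'Up' -and $_.Name -ne $vm.ComputerName } |
                    Sort-Object { (Get-WmiObject Win32_OperatingSystem -ComputerName $_.Name).FreePhysicalMemory } -Descending |
                    Select-Object -First 1
                # Assumes the cluster group carries the VM's name; adjust if yours differ.
                Move-ClusterVirtualMachineRole -Name $vm.Name -Node $target.Name -MigrationType Live
            }
            Start-Sleep -Seconds 60
        }

    Run it as a scheduled task on one node, or port the same loop to a .NET service via the System.Management namespace.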


  • nginx + Passenger + static websites = problems

    - by Eugene K
    I've got a Rails app that nginx serves through Passenger. I'd also like to serve some static content for a different domain name, but when I add another server block to my config, both websites become unavailable, returning HTTP 204. What have I done wrong? What can I do to fix it?

    Here's the http block of my nginx.conf: https://gist.github.com/4243256

    Here's what I added:

        server {
            listen 80;
            server_name website2;
            root /var/www/website2;
            location / {
                index index.html;
            }
        }

    It's going to be a Rails app as well at some point in the future (though I'm not really sure about that; maybe I'll use a different back-end solution). Either way, I don't want anything dynamic to eat away the resources just yet. As of now, this website consists of nothing but index.html and stylesheet.css files in the root directory. What should I do? Thank you in advance. Sincerely yours, Eugene.
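    One thing worth checking, offered as a guess since the linked gist isn't reproduced here: Passenger directives set in the http block are inherited by every server block, and a vhost Passenger tries to handle without a Rails app behind it can produce empty responses. A sketch of the static vhost that explicitly opts out, assuming the Passenger nginx module:

        server {
            listen 80;
            server_name website2.example.com;   # a real FQDN, not a bare label
            root /var/www/website2;
            passenger_enabled off;              # keep Passenger out of this vhost
            index index.html;
        }

    Also note that server_name matching is done on the Host header, so a bare name like "website2" will only ever match if browsers actually send that as the host.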


  • Has anyone used the Sharedband connection bonding product?

    - by John Rennie
    See http://www.sharedband.com/ for details on the product. Obviously Sharedband aren't too keen on giving away their technical secrets, but I would guess that it bonds the connections at the IP layer, i.e. their routers send the IP packets over all available lines to the Sharedband routers, which handle all the virtual circuitry and provide the NATing to whatever IP address(es) they've assigned you.

    It looks like a clever idea, and a good way to provide some resilience over ADSL links. You can even use ADSL links from different ISPs and Sharedband will still bond them for you. But I find myself wondering how well it really works, and whether it's worth it. The Draytek routers can already load-balance (though not bond) up to four ADSL lines, so the Sharedband product really only offers an advantage if you're hosting servers, i.e. you can have one IP address accepting incoming connections through all your (working) ADSL lines.

    But should you really try to host servers on ADSL lines, especially since ADSL upload performance isn't stellar? Wouldn't it be better to use a hosted server, or maybe pay up for a leased line with an SLA? So I'm asking: is anyone using Sharedband, and if so, what do you think of it? JR


  • Setting up multiple servers for one domain

    - by Joseph Torraca
    So I am starting up a new website and I was wondering how to set up 5 servers to host the site. I have already purchased 5 Apple Xserves; one will be used as a test server and the other 4 will be for the live site. I have read some websites, and they all reference using one server, installing software onto it, and having that server do the load balancing. I have also read that you could use a hardware, rack-mounted load balancer and plug the servers into it, and it would then distribute the load. So I have a few questions about each:

    1. How do you set up the software version, with one "master" directing traffic and the other servers as "slaves"? (A minimal example follows below.)
    2. Which of the two options above is more reliable and better suited for a startup that doesn't have many users per month, yet (hopefully)?
    3. Is there a theoretical maximum number of servers that can sit behind a software load balancer? Obviously this will change from software to software, but can the balancing server itself handle it?
    4. In your own opinion, what are you using for your sites? Have you had any problems setting up that system or operating it once it's running? Is there anything you would stay away from if you had to start over?
    5. I also purchased an Apple RAID system, so if you are familiar with it, is there any way to connect it to multiple Xserves so they all serve the same data?

    I'm a little confused on this, so thanks for all your help and for being patient with me. Note: Take it easy on me, I am learning this as I go along, so I may have used terms incorrectly or explained things that don't really make sense. Sorry. P.S. If you need me to supply the specs on the servers to determine which system makes the most sense, I can post them for you.
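    To make question 1 concrete, here is a minimal sketch of software load balancing with nginx (one choice among many; HAProxy is another popular one, and none of this is Xserve-specific). The "master" runs this and simply proxies to the four web nodes; the addresses are hypothetical:

        http {
            upstream app_pool {
                # the four live Xserves
                server 10.0.0.11;
                server 10.0.0.12;
                server 10.0.0.13;
                server 10.0.0.14;
            }
            server {
                listen 80;
                location / {
                    proxy_pass http://app_pool;
                }
            }
        }

    The web nodes don't need to know they're "slaves"; they just serve the site, and the balancer spreads requests across them (round-robin by default).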


  • Wicked VNC Viewer acting out on Windows desktop and CentOS 6.3 server

    - by Johnny Lee
    What we have here is that the only way to open the TightVNC viewer on this Windows XP desktop is to have a TigerVNC viewer open on the CentOS 6.3 server desktop. I know it sounds really weird, and we're looking for hints to make it go away. Any ideas? Here is the recipe:

    We are using PuTTY on the Windows desktop as an SSH (Secure Shell) client and terminal emulator. We open and log in to PuTTY, then open and log in to the TightVNC viewer. After many failed attempts, much Googling, and lots of reading to no avail, I decided to open the TigerVNC viewer on the CentOS 6.3 server itself, by way of the GNOME desktop Application menu, Internet tab. After opening and logging into the TigerVNC viewer on the CentOS 6.3 server, voila: we have a remote desktop opened on the server.

    The interesting discovery was that the TigerVNC viewer on the server showed a login request that never appeared on the Windows desktop. Once the password was entered there, the TightVNC viewer opened on the Windows desktop. Weird, huh? Why is that password request showing up on the CentOS 6.3 server in the TigerVNC viewer, as opposed to showing up on the Windows desktop, when logging in to the server with the TightVNC viewer?


  • Is there any way to customize the Windows 7 taskbar auto-hide behavior? Delay activation? Timer?

    - by calbar
    I'm becoming increasingly frustrated with the way Windows 7 handles showing a hidden taskbar. It's incredibly over-eager to pop out and obscure what I'm really trying to interact with, requiring me to move the mouse away, wait for it to auto-hide again, then resume what I was doing, but more deliberately.

    After closely examining the behavior, it appears that a hidden taskbar "peeks out" from the edge by 2 or 3 pixels, and slowly moving your mouse into this area activates it; you don't even need to touch the edge of the screen. I would love it if there were a way to customize or change this behavior.

    Ideally, the taskbar would only pop out if you are actively "pushing" the edge of the screen it is hidden on, so activation only occurs once you've reached the screen's edge and continue to move the mouse past a customizable threshold. Alternatively, a simple activation delay would suffice: the taskbar pops out only if the mouse remains in that 2-3 pixel area (i.e. on the taskbar) for longer than a customizable amount of time, which would only be a fraction of a second. Oftentimes the cursor simply careens off the edge of the screen while trying to focus on something nearby.

    Anyway, if there are any registry settings or utilities that can achieve either of these effects, that would be great! Thanks for your help.


  • Google Chrome mouse wheel takes keyboard focus

    - by Steve Crane
    I recently switched the default browser on all my machines from Firefox to Google Chrome. In general I'm loving Chrome, but there is one behaviour that is driving me nuts: scrolling with the mouse wheel takes keyboard focus away from the document.

    Here's what happens. I've just opened a page or switched to a new tab, and the page has keyboard focus, as I can use the keyboard up/down and PgUp/PgDn keys to scroll it; no problem there. But if I then use the wheel on my mouse to scroll the page, it loses the keyboard focus and no longer responds to up/down, PgUp/PgDn, or in fact any other keyboard keys. I have to click on the page background to restore the keyboard focus.

    This is a minor inconvenience for scrolling, but where it really drives me nuts is in Google Reader and Gmail, where I use keyboard shortcuts a lot. Here I find that I scroll the article or e-mail I'm reading with the mouse wheel, then get no response when I press j/k to move to the next or previous article or e-mail. I am using Windows 7 and the Chrome dev channel (version 4.0.249.43).


  • HA for Resque & Redis

    - by Chris Go
    Trying to avoid SPOFs for Resque and Redis. Ultimately the client is going to be PHP via php-resque (https://github.com/chrisboulton/php-resque). After finding workable HA for nginx+php-fpm and for MySQL (a master-master setup as a way to simplify master-slave promotion), next up is Resque+Redis. A standard install of Resque uses localhost Redis (this is at DigitalOcean). I am depending heavily on Amazon Route 53 DNS failover to try to solve this.

    Option 1, two servers:

        resque1.domain.com points to localhost Redis (redis1.domain.com) = same server
        resque2.domain.com points to localhost Redis (redis2.domain.com) = same server

    Create resque.domain.com with failover, resque1 as primary and resque2 as secondary. This means that most of the time (99%), resque1 should be getting hit, with resque2 as just a hot backup. It lets me get away with only 2 servers and makes sure that any hit to resque.domain.com goes somewhere.

    Option 2, four servers, breaking Resque and Redis apart:

        resque1.domain.com - redis.domain.com
        resque2.domain.com - redis.domain.com
        redis1.domain.com
        redis2.domain.com

    Then set up DNS failover:

        resque.domain.com - primary: resque1, secondary: resque2
        redis.domain.com - primary: redis1, secondary: redis2

    I'd like to get away with 2 servers if I can, but is the second setup much better, or is the difference negligible? Thanks, Chris
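    Either way, the client side stays trivial, which is part of the appeal of the DNS-failover approach: php-resque just needs to be pointed at the failover hostname instead of localhost. A sketch (Resque::setBackend and Resque::enqueue are the real php-resque calls; the hostname comes from the setup above):

        <?php
        require 'vendor/autoload.php';   // or however php-resque is loaded in your app

        // Point at the Route 53 failover name rather than a specific box, so a
        // failover event needs no code or config change on the PHP side.
        Resque::setBackend('resque.domain.com:6379');

        Resque::enqueue('default', 'My_Job', array('name' => 'Chris'));

    One caveat worth weighing against the 4-server layout: with pure DNS failover the secondary Redis shares no state with the primary, so queued-but-unprocessed jobs on a failed box are stranded until it returns; Redis replication is what closes that gap.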


  • Swap space maxing out - JVM dying

    - by travega
    I have a server running 3 WordPress instances, MySQL, Apache, and the Play Framework 2.0 on a 64m initial & max heap. If I increase the max heap of the JVM that Play is running in by even 16m, I see the 128m of swap space steadily fill up until the JVM dies. I notice that it is only when I am plugging away at the WordPress sites that the JVM will die; I assume this is because the JVM is not asking for memory at the time, so it gets collected. I notice that when I restart Apache I reclaim about half of my swap and RAM.

    So is there some way I can configure Apache to consume less memory? Also, what could be causing the swap space to get so heavily thrashed with just 16m added to the max heap size of the JVM?

    Server running: Ubuntu 12.04
    RAM: 408m
    Swap: 128m

    Apache mods: alias.conf alias.load auth_basic.load authn_file.load authz_default.load authz_groupfile.load authz_host.load authz_user.load autoindex.conf autoindex.load cgi.load deflate.conf deflate.load dir.conf dir.load env.load mime.conf mime.load negotiation.conf negotiation.load php5.conf php5.load proxy_ajp.load proxy_balancer.conf proxy_balancer.load proxy.conf proxy_connect.load proxy_ftp.conf proxy_ftp.load proxy_http.load proxy.load reqtimeout.conf reqtimeout.load setenvif.conf setenvif.load status.conf status.load rewrite.load
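    On a 408m box with mod_php loaded, the usual culprit is Apache's prefork MPM spawning more children than RAM can hold; each child carries a full PHP interpreter, easily 30-50m resident under WordPress. A hedged starting point rather than tuned numbers, for the Apache config on Ubuntu 12.04:

        <IfModule mpm_prefork_module>
            StartServers          2
            MinSpareServers       2
            MaxSpareServers       4
            MaxClients            6     # cap children: ~6 x ~40m PHP child roughly fits the box
            MaxRequestsPerChild 500     # recycle children so PHP memory creep is handed back
        </IfModule>

    The MaxRequestsPerChild recycling also explains why restarting Apache reclaims your swap: the same memory comes back either way, just all at once on a restart.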


  • Ripping a home video VCD on Linux or Windows with VLC or otherwise

    - by user259774
    I have a VCD with 22 minutes of video on it. I would like to retain this footage and throw away the VCD. I can play the whole thing with VLC ("open disc - VCD - /dev/sr0 - play"): all 22 minutes of the main track. I don't believe there's any other content aside from the main track, and I can seek to anywhere I want within the 22-minute track.

    If I mount /dev/sr0 on /media/vcd and then try to copy the only file from the MPEGAV folder, I get an I/O error, with an empty destination file.

    VLC has a "convert" option in addition to "play". When I use this I actually get a good OGG file back, after it runs through the video in painful real time; I guess it dubs it frame by frame. But the file is only 10 minutes long, leaving 12 minutes off the track. HandBrake doesn't detect its track titles, unfortunately.

    I don't know if I should start getting involved with GNU ddrescue, or whether it's because VCDs somehow encode their data sectors differently. Anyway, I'm in way over my head, and if anyone knows how I could get that video track off the thing, feel free to share!

    Edit: I should note that I also have access to a Windows computer.
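    For what it's worth, the I/O error on a straight copy is expected: VCD video lives in 2324-byte Mode 2 sectors rather than the 2048-byte sectors a normal filesystem mount expects, so tools have to read the disc raw. Two hedged things to try from a Linux command line (the output filename is arbitrary):

        # Dump the main track via VLC without re-encoding (still runs in real time,
        # but yields the original MPEG-1 stream rather than a transcoded OGG):
        cvlc vcd:///dev/sr0 --sout '#standard{access=file,mux=ps,dst=track.mpg}' vlc://quit

        # Or try a sector-level rip with vcdimager's companion tool, if that
        # package is available:
        vcdxrip --cdrom-device=/dev/sr0

    If the 10-minute truncation follows the footage into these dumps too, the disc itself is likely damaged at that point, and GNU ddrescue against the raw device would be the next step.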


  • E-mail hosting provider that can set up aliases with wildcards

    - by Richard Downer
    I am looking for an e-mail hosting provider that allows e-mail aliases containing wildcards. In more detail: I own my own domain, and I want an e-mail hosting provider to manage e-mail for it.

    Now, to help deal with spam, I often give different e-mail addresses to different organisations. These e-mail addresses always start with the same prefix, but then differ. So, for example, I might give out these e-mail addresses:

        [email protected]
        [email protected]
        [email protected]

    I want to be able to go to the e-mail provider's control panel and set up e-mail aliases like this:

        [email protected] -- bounce/discard (because this address has been sold to spammers)
        joe-*@sample.com -- redirect to [email protected]

    What I don't want to do is set up every single e-mail address individually (because I make them up whenever I need them), nor do I want a general catch-all for any unrecognised address in my domain (because I don't want to be carpet-bombed with spam when a spammer runs a dictionary attack against my domain name).

    Although this seems like a useful feature to have, it appears to be little known, and I've not seen anybody advertise it. My current hosting provider offers it, but I want to move away from them, so I need another provider that will continue to work with all the e-mail addresses I've been using for years. Alternatively, I could use mail server software that runs on Windows; I have seen some commercial packages offering this feature, but they cost more than I can afford. Are there any suggestions for low-cost software packages?


  • How to give a user NTFS rights to a folder, via PowerShell

    - by Don
    I'm trying to build a script that will create a folder for a new user on our file server, then take the inherited rights away from that folder and add specific rights back in. I have it successfully adding the folder (if I give it a static entry in the script), giving domain admin rights, removing inheritance, etc., but I'm having trouble getting it to use a variable I set as the user. I don't want there to be a static user each time: I want to run this script, have it ask me for a username, and have it create the folder and then give that same user full rights to it, based on the username I've supplied. I can use Smithd as a user, like this:

        New-Item \\fileserver\home$\Smithd -Type Directory

    But I can't get it to reference the user like this:

        New-Item \\fileserver\home$\$username -Type Directory

    Here's what I have (creating a new folder and setting NTFS permissions):

        $username = Read-Host -Prompt "Enter User Name"
        New-Item \\fileserver\home$\$username -Type Directory
        Get-Acl \\fileserver\home$\$username
        $acl = Get-Acl \\fileserver\home$\$username
        $acl.SetAccessRuleProtection($True, $False)
        $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("Administrators","FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
        $acl.AddAccessRule($rule)
        $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("Domain\Domain Admins","FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
        $acl.AddAccessRule($rule)
        $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("Domain\"+$username,"FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
        $acl.AddAccessRule($rule)
        Set-Acl \\fileserver\home$\$username $acl

    I've tried several ways to get it to work, but no luck. Any ideas or suggestions would be welcome, thanks.
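    For what it's worth, quoting the UNC path usually resolves exactly this symptom: in a double-quoted PowerShell string, $username expands while the $ in home$ stays literal, because a $ followed by a backslash is not a valid variable reference. A sketch of the same script with that change applied (and the path held in one variable so the quoting mistake is harder to reintroduce):

        $username = Read-Host -Prompt "Enter User Name"
        $path = "\\fileserver\home$\$username"      # quoted: home$ stays literal, $username expands
        New-Item -Path $path -ItemType Directory
        $acl = Get-Acl -Path $path
        $acl.SetAccessRuleProtection($true, $false) # protect the ACL; discard inherited rules
        $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("Domain\$username", "FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
        $acl.AddAccessRule($rule)
        Set-Acl -Path $path -AclObject $acl

    The same quoting applies to the Administrators and Domain Admins rules, which are unchanged from the original.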


  • PHP/Linux File Permissions

    - by user1733435
    May I ask a question about file permissions? I set up an Ubuntu server where Apache is running. I have a simple PHP upload form and am able to upload files to /var/www/site/uploads, as follows:

        sandbox@sandbox-virtual-machine:/var/www/site/uploads$ ll
        total 1736
        drwxrwxrwx 2 www-data www-data    4096 Oct 18 02:53 ./
        drwxrwxrwx 3 sandbox  sandbox     4096 Oct 18 00:42 ../
        -rw-r--r-- 1 www-data www-data  145998 Oct 18 02:53 3d wallpaper pic.jpg
        -rw-r--r-- 1 www-data www-data  166947 Oct 18 02:53 3D Wallpapers 9.jpg
        -rw-r--r-- 1 www-data www-data 1451489 Oct 18 02:53 6453_3d_landscape_hd_wallpapers_green.jpg

    Is there any way to upload files so that they show up as

        -rw-r--r-- 1 sandbox sandbox  145998 Oct 18 02:53 3d wallpaper pic.jpg
        -rw-r--r-- 1 sandbox sandbox  166947 Oct 18 02:53 3D Wallpapers 9.jpg
        -rw-r--r-- 1 sandbox sandbox 1451489 Oct 18 02:53 6453_3d_landscape_hd_wallpapers_green.jpg

    so that I could straight away feed them to a waiting/running shell script? Right now the waiting script (move, checksums, rename, resize, etc.) is unable to do anything with uploaded files that have the www-data attributes. If I just create a file as the local account, such as

        sandbox@sandbox-virtual-machine:/var/www/site/uploads$ touch testfile

    then the script is able to run as I would like it to. Any suggestion would be appreciated; thanks in advance.

    Update: thanks to everyone giving help, I was able to progress. Now I am close to getting it solved; here is the current output:

        sandbox@sandbox-virtual-machine:/var/www/site/uploads$ ll
        total 388
        drwxrwxrwx 2 www-data www-data   4096 Oct 18 04:22 ./
        drwxrwxrwx 3 sandbox  sandbox    4096 Oct 18 04:17 ../
        -rw-r--r-- 1 sandbox  sandbox  166947 Oct 18 04:21 3D Wallpapers 9.jpg
        -rw-r--r-- 1 sandbox  sandbox  219808 Oct 18 04:20 adafruit_pi.png
        -rw-rw-r-- 1 sandbox  sandbox       0 Oct 18 04:22 test

    How can I give uploaded files the same permissions as 'test', the only difference being the group bits (adafruit_pi.png vs test)? Which statement should I insert into the PHP code, please?
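    Assuming the goal is the group-writable 0664 mode shown on 'test', the straightforward statement is a chmod() after the upload is moved into place, since the upload machinery creates files without group write (hence the 0644 in the listing). A minimal sketch of the handler; the form field name is hypothetical:

        <?php
        $dest = '/var/www/site/uploads/' . basename($_FILES['upload']['name']);
        if (move_uploaded_file($_FILES['upload']['tmp_name'], $dest)) {
            chmod($dest, 0664);   // rw-rw-r--, matching the 'test' file
        }

    Note that a non-root PHP process cannot chown() files to sandbox, so the usual pattern is the reverse of what the listing shows: leave the owner as www-data, add sandbox to the www-data group (usermod -a -G www-data sandbox), and rely on the group write bit so the shell script can move and rename the files.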


  • Add bookmarks to Delicious and Google Bookmarks at the same time

    - by BrianH
    I have used delicious.com (or, back then, del.icio.us) to store my bookmarks for a long time now, and I love it. I was looking through some of my Google services and realized they have a bookmarking service that integrates with your Google searches (I thought they had a bookmarking service before, but it went away? Maybe not).

    I like Delicious just fine; I'm not interested in leaving. But I also like how my Google bookmarks are highlighted (and, I'm guessing, brought to the top) in my search results, so I can easily tell if I've bookmarked a site (kind of like the "promote up" feature). I can't even count the number of times I've searched for a site only to find I'd been there months or years ago. If sites I've bookmarked in the past are highlighted in my search results, it's easier to pick which result to go to.

    My question is about bookmarking tools: is there a bookmarklet or Firefox addon that will let me save a bookmark to multiple services at the same time, in this case Google and Delicious? Or maybe a service to sync my Delicious bookmarks to Google Bookmarks on a regular basis? I have used the Delicious addon since the beginning; it would just be nice to add a bookmark to multiple services with one addon. For that matter, it would be nice to add Evernote into the mix: click one button to save the page to Evernote and bookmark it in Google and Delicious.

    EDIT on 7/30/2009 - Summary: A proposed solution is to use the Delicious addon and the GMarks addon to keep the two services in sync. I was not able to get the two addons to keep everything in sync, so it was also suggested to use the Google Toolbar with the Delicious addon. Although I personally have reservations about letting Google know about every single site I visit, I believe this solution will work, so I am accepting it as the answer. I still wish there were a solution that would let you post a bookmark/page to multiple services at the same time (Delicious, Google, Evernote, Digg, Diigo, etc.). Thanks!


  • How can I stream audio signals from various devices/computers to my home server?

    - by Breakthrough
    I currently have a headless home server set up (running Ubuntu 12.04 Server Edition) running a simple Apache HTTP server. The server is near an audio receiver, which controls a set of indoor and outdoor speakers in my home. Recently, my father purchased a Bluetooth adapter which our various laptops and cellphones can connect to, outputting their music to the speakers. I was hoping to find a solution that works over Wi-Fi instead, namely because it won't cost anything (I already have a server with an audio card) and it doesn't depend on Bluetooth.

    Is there any cross-platform (preferably free and open-source) solution I can use that will allow me to stream audio to my home server, over my home network, from a wide variety of devices (laptops running Windows/Linux, or cellphones running Android/BB/iOS)? I need something that works at least with Windows and Android.

    Also, just to clarify, I want something that simply allows devices to connect to my server and output an audio signal without any action on the server end (since it's a server hidden away near my receiver). Any subsequent connection attempt should be dropped, so only one device can be in control of the stereo at once.
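    One candidate that fits the free/open-source constraint is PulseAudio, which is already the sound layer on Ubuntu and has Windows builds of the client tools (Android support would come from third-party apps, so treat that part as unsolved). A hedged sketch of the server side; the subnet is a placeholder, and note this alone does not enforce the one-device-at-a-time rule:

        # On the server: accept network audio from the LAN and play it out of
        # the sound card connected to the receiver.
        pactl load-module module-native-protocol-tcp "auth-ip-acl=127.0.0.1;192.168.1.0/24"

        # On a Linux laptop: point an application at the server's sound daemon.
        PULSE_SERVER=server.local mpv song.mp3

    The same load-module line can be made permanent in /etc/pulse/default.pa (without the pactl prefix).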


  • Issues installing new drivers

    - by Luke
    I have a Windows XP Home SP3 system that won't detect anything on USB. It works under an Ubuntu live session (booted off USB), and the USB keyboard and mouse work in the BIOS, so physically speaking I'm sure it's fine. I installed the SMBus drivers and the USB driver from the motherboard's website, and that went fine. If I plug anything in, it can detect the type of device it is (i.e. keyboard, mouse, flash drive, etc.) and sometimes even the name (i.e. Microsoft 5-button mouse), but it won't accept any drivers.

    I have tried putting the Windows CD in the drive, but that didn't help. I have scanned for viruses and run CHKDSK with no issues, and ran MemTest86 with no issues. I am limited to one PS/2 connection for input, so I'm using the keyboard and haven't tried Windows Update yet. A colleague suggested trying a new USB controller, so I put in a PCI card whose CD only had drivers for 9x, so I assume XP has them built in. It goes through the Found New Hardware wizard, but never actually finds drivers.

    I have also tried running SFC /SCANNOW and System Restore. SFC just flashes and goes away, making me suspect a hidden virus somewhere, but everything else seems to work, including MSE. I have reason to believe it's just an issue with detecting hardware, since even the USB controller card can't seem to find drivers, yet Windows can detect when a USB device is connected. Has anyone else run into this, or do you have a suggestion short of re-installing Windows?


  • How to improve Windows Server 2008 R2 to handle many connections?

    - by invisal
    I have been trying to figure out how to solve this problem for a few days now. First of all, I am running a website with an average of 350,000 page views daily. Previously, all ads management (tracking the clicks and impressions each ad has served) and content were served from a single server with the following spec:

    Server 1
        OS: Windows 2008 R2 64-bit
        CPU: Intel Core i5 - 4 cores
        RAM: 8 GB
        Storage: 2 x 1 TB hard drives
        Bandwidth: 10 TB per month

    To improve our website speed, I decided to move the ads-management script to another dedicated server, because we have 15 to 30 advertisers on each page:

    Server 2
        OS: Windows 2008 R2 64-bit
        CPU: Intel Core i5 - 4 cores
        RAM: 4 GB
        Storage: 2 x 300 GB hard drives
        Bandwidth: 10 TB per month

    The problem: Server 1 could handle both the content and the ads system. Now that I have taken the ads system away and put it on Server 2, Server 2 can barely serve the ads system alone.

    Test: first, I moved 75% of the ads to Server 2, then ran ping -t xxxxx for 10 minutes; it followed a pattern similar to this:

        Reply from xxxxx bytes=32 time=290ms TTL=116
        Reply from xxxxx bytes=32 time=289ms TTL=116
        Reply from xxxxx bytes=32 time=320ms TTL=116
        Reply from xxxxx bytes=32 time=286ms TTL=116
        Reply from xxxxx bytes=32 time=286ms TTL=116
        Reply from xxxxx bytes=32 time=348ms TTL=116
        Reply from xxxxx bytes=32 time=284ms TTL=116

    Then I moved 100% of the ads to Server 2 and pinged again for 10 minutes; the pattern looked like this:

        Reply from xxxxx bytes=32 time=290ms TTL=116
        Request timed out
        Reply from xxxxx bytes=32 time=320ms TTL=116
        Reply from xxxxx bytes=32 time=286ms TTL=116
        Request timed out
        Request timed out
        Reply from xxxxx bytes=32 time=284ms TTL=116

    Attempts so far:

    - Increased MaxUserPort and TcpNumConnections
    - Restarted the server
    - Increased the IIS Max Instances and Instance MaxRequests

    Server resources:

    - Only 10-15% of the network connection is used
    - Only 10-15% of the CPU is used
    - Only 25% of the memory is used
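    For what it's worth, MaxUserPort is a pre-Vista setting; on Windows 2008 R2 the ephemeral port range and the TIME_WAIT delay are controlled as below. These are hedged suggestions aimed at the many-short-lived-connections symptom, not a confirmed diagnosis of the packet loss:

        rem Inspect and widen the dynamic (ephemeral) port range
        netsh int ipv4 show dynamicport tcp
        netsh int ipv4 set dynamicport tcp start=10000 num=55000

        rem Shorten how long closed sockets linger in TIME_WAIT (seconds; 30 is the floor)
        reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v TcpTimedWaitDelay /t REG_DWORD /d 30 /f

    That said, given that CPU, memory, and bandwidth are all far from saturated while pings time out, it is also worth ruling out the provider's network or a packet-rate limit before tuning the TCP stack further.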


  • deploy LAMP config to new boxes with low/no effort

    - by user1444233
    I'm spending a lot of time setting up new CentOS 6 instances. I use a VCS for most of the config files (Subversion) and all of the webapp source files (GitHub), but even with excellent package managers (like yum, npm, easy_install, etc.) it still takes time. I'd like to get to the point where I could try out a new potential web host by just signing up for an account, logging in, and automatically pulling my standardised config onto the box.

    I know there is a set of tools that can help:

    - Puppet
    - Chef
    - Vagrant

    and a set of services that sell solutions:

    - Jumpbox: http://www.jumpbox.com/
    - BitNami Cloud: http://bitnami.org/cloud

    I don't mind investing time in learning a new tool, but as a no-budget start-up I'm keen to keep monthly costs down. My biggest concern is that time spent on server config is time away from the codebase, and that's where I think my team and I should be investing our energy, at least until we get funded and scale up a bit. I'd be grateful for some recommendations on which way to jump on config:

    1. Stick with SSH and manual deploys, at least until you get big.
    2. Bite the bullet and learn, say, Puppet. You may only use it 8-10 times, but it pays to have such an easily tunable server bootstrap (see the sketch below).
    3. Don't bother; just pay the $100/month for a standard config service. It'll cost you $1000/year, but you should focus on the code.

    Other questions in this domain: I use quite a complex stack (Drupal, Zend Server, MySQL, PHP, MongoDB, Python, Django), but are there standard(ish) setups that include these, or that I could build upon more quickly? Are the configs optimised for small, medium, and large VPSes (1 GB, 4 GB, 16 GB)? How secure are they?
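    To give option 2 some shape: a first Puppet manifest for a stack like this is small. A hedged sketch, not a tested module; package names vary by repo, the module file path is hypothetical, and Zend Server and MongoDB need their own yum repos configured first:

        # site.pp - minimal CentOS 6 LAMP bootstrap
        package { ['httpd', 'php', 'php-mysql', 'mysql-server']:
          ensure => installed,
        }

        service { ['httpd', 'mysqld']:
          ensure  => running,
          enable  => true,
          require => Package['httpd', 'mysql-server'],
        }

        # Version-controlled config dropped into place and tied to a restart.
        file { '/etc/httpd/conf.d/site.conf':
          source => 'puppet:///modules/site/httpd.conf',
          notify => Service['httpd'],
        }

    Applied standalone with "puppet apply site.pp", this needs no puppet master, which keeps the monthly cost at zero.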


  • Erase just the free space on my hard drive

    - by Patriot
    I'm about to give away an older computer with just the Windows XP operating system intact and all other programs uninstalled. However, upon peeking at the "free space" with software called Recuva, I notice lots of deleted things that could be recoverable. Some of these include sensitive data files, PDFs, and other personal items that I would not want retrieved. I ran a program called Eraser to try to overwrite that data, but it failed to do an adequate job. I also tried to do the job with Glary Utilities, but it failed too. Short of installing a new, very cheap hard drive and re-installing a bare-bones operating system, I'm out of ideas.

    EDIT - WOW!!! I was not really expecting this many GREAT ideas. My next question is this: if I go the DBAN route and truly wipe the hard drive, then restore my disc image (I use Acronis True Image), will it also restore the free-space data? Does imaging just copy readable data? I have an old image from when the OS was first installed.
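    For reference, XP also ships with a built-in free-space wiper that is easy to overlook, offered here as an alternative to the third-party tools; it overwrites only unallocated space, leaving the installed system untouched:

        rem Wipe free space on drive C: in three passes (zeros, ones, random data)
        cipher /w:C:

    On the imaging question: a standard Acronis image is file-based, recording only allocated data, so restoring it onto a DBAN-wiped disk does not bring the old free-space remnants back (a raw sector-by-sector image would).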


  • Windows 7 on an EEE PC 901 - Is it a practical change?

    - by Dave
    I am currently running Windows XP on my EEE PC 901, and I'm happy to say that it runs really well. But this did not come without significant manipulation of the OS. Here are the basic steps I took:

    1. Install XP.
    2. Modify the registry to install.
    3. Install the bare essential drivers.
    4. Relocate the page file to D:\ (remember, this model has two SSDs, one roughly 3.6 GB and the other roughly 16 GB; XP won't run on the bigger drive, only the smaller one).
    5. Install the remaining drivers.
    6. Skip normal updates; install Service Pack 2 straight away.
    7. Modify the system registry to place the service pack backup folder in the new Program Files directory on the D drive (where software is being installed).
    8. Change the My Documents folder to sit on the D drive.
    9. Install the .NET Framework.
    10. Install the remaining updates and Service Pack 3 (the hidden backup folders in C:\Windows are deleted after every update, as are the contents of the service pack downloads folder, in order to continually free up space).

    I have also found Disktrix UltimateDefrag to be brilliant at keeping the system clean and tidy. This is roughly the order I did things in, and in this configuration the machine works really well.

    QUESTION: Can this kind of configuration be implemented with Windows 7 to achieve the same result on this machine? Thanks in advance. Dave.


  • Can any iSCSI NAS appliance replicate / clone a LUN to an external drive?

    - by Boden
    I would like to back up using Windows imaging to some kind of NAS appliance; I believe this will require the NAS to support iSCSI. I would then like the appliance to replicate the iSCSI LUN to an external eSATA or USB disk connected directly to the appliance. I've found plenty of NAS appliances that can do iSCSI and plenty that can replicate to an external drive, but none so far that can do both at once; that is, the devices can do iSCSI, but then the replication feature doesn't work.

    The idea here is to back up to an appliance located in a secure office far away from the server room, with offsite backups to external hard drive managed from the appliance. The benefits of such a setup would be:

    1. It is very unlikely that fire or random theft would affect both the server-room backup and the "remote" backup appliance.
    2. Offsite backups could be managed by multiple trusted people without granting access to the server room.
    3. Windows imaging provides a poor man's deduplication, so each backup volume can contain a decent backup history.

    I understand why this would be a non-trivial thing to implement, but I'm wondering if such a thing exists? Preferably a tabletop, low-to-medium-cost device. Alternative solutions welcome.

    NOTE: I'm backing up very few but very large files, so file replication is not a good option.


  • Can't Login to phpPgAdmin

    - by Devin
    I'm trying to set up phpPgAdmin on my test machine so that I can interface with PostgreSQL without always having to use the psql CLI. I have PostgreSQL 9.1 installed via the RPM repository, while I installed phpPgAdmin 5.0.4 "manually" (by extracting the archive from the phpPgAdmin website). For the record, my host OS is CentOS 6.2. I have already made the following configuration changes.

    PostgreSQL:

    - Inside pg_hba.conf, I changed all METHODs to md5.
    - I gave the postgres account a password.
    - I added a new account named webuser with a password (note that I did not do anything else to the account, so I can't say that I know what permissions it has).

    phpPgAdmin config.inc.php:

    - Changed the line $conf['servers'][0]['host'] = ''; to $conf['servers'][0]['host'] = '127.0.0.1'; (I've also tried using localhost as the value there).
    - Set $conf['extra_login_security'] to false.

    Whenever I try to log in to phpPgAdmin, I get "Login failed", even with credentials that work in psql. I've tried to go through some of the steps noted in Question 3 of the FAQ, but it hasn't worked out so far. It likely does not help that this is my first day working with PostgreSQL; I'm fairly familiar with MySQL, but I have to use PostgreSQL for the project I'm working on.

    Could anyone offer some help with setting up phpPgAdmin on CentOS 6.2? If I've done something terribly wrong in my configuration so far, it's no big deal to blow something (or everything) away, as I haven't stored any data there yet. I appreciate any insight you may have!
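    Two things worth verifying from psql, offered as guesses rather than a diagnosis. First, roles created with CREATE ROLE lack the LOGIN attribute by default, which produces exactly this symptom; second, phpPgAdmin connects over TCP to 127.0.0.1, so pg_hba.conf needs a matching "host" line, not just "local" ones:

        -- as the postgres superuser:
        ALTER ROLE webuser LOGIN;
        -- or, when creating from scratch:
        -- CREATE ROLE webuser LOGIN PASSWORD 'secret';

    And in pg_hba.conf (reload PostgreSQL afterwards):

        host    all    all    127.0.0.1/32    md5

    When testing with psql, note whether it is using the Unix socket (psql -U webuser) or TCP (psql -h 127.0.0.1 -U webuser); only the latter exercises the same path phpPgAdmin uses.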


  • Changing MX records in named zone file

    - by Paul England
    I forgot how all this works. I have a GoDaddy account, using my own DNS and whatnot, and I'm having trouble getting my email to work. They said I need to update my MX records. Basically, I have the following (184.168.30.42 is the domain's IP address, obviously):

        gamengai.com.    14400  IN  NS  ns1
        gamengai.com.    14400  IN  NS  ns2
        ns1              14400  IN  A   184.168.30.42
        ns2              14400  IN  A   184.168.30.42
        gamengai.com.    14400  IN  A   184.168.30.42
        localhost        14400  IN  A   127.0.0.1
        ftp              14400  IN  A   184.168.30.42
        www              14400  IN  A   184.168.30.42
        mail             14400  IN  A   184.168.30.42
        subdomain        14400  IN  A   184.168.30.42
        gamengai.com     14400  IN  MX  10  mail

    Mail doesn't work, though. They say to make the following change:

        0   smtp.secureserver.net
        10  mailstore1.secureserver.net

    So should the last line point to mailstore1.secureserver.net instead of mail in the last field? What about the other line? I had this working at one time, but it's totally gotten away from me. It's a virtual dedicated server, and their support for this stuff is pretty bad... almost as bad as my admin skills, since I went the programmer route.
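    If it helps, a sketch of what the zone would look like with GoDaddy's suggested records in place of the old MX line. The leading numbers are priorities (lower wins), and the trailing dots matter because these hosts live outside your zone:

        ; replace "gamengai.com 14400 IN MX 10 mail" with:
        gamengai.com.    14400  IN  MX  0   smtp.secureserver.net.
        gamengai.com.    14400  IN  MX  10  mailstore1.secureserver.net.

    After bumping the zone serial and reloading named, mail for the domain routes to GoDaddy's mail servers rather than to your own mail A record, so the mail host entry only remains useful if something else still references it.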


  • How to wire 20 computers and 20 phones and 1 server into LAN?

    - by John Smith
    I currently have 3 switches: two Netgear JFS524s with 24 ports each and one Belkin with 16 ports, plus a server and a DSL internet router. The main question is how to connect the switches together. The two Netgears are next to each other, yet one switch is about 100 feet away and serves about 5 computers and 5 phones.

    If I connect them with only 1 wire, will that limit bandwidth? E.g., will all 23 computers be limited to the speed of one Cat5e cable? If I connect the switches with 2 cables, will this give a speed boost? What's the ideal scenario: should I just move the third switch next to the other two? Will the speed of a computer connected to the white switch be the same as a computer connected to the top switch? Will moving the white switch right next to the top switch, and having 16 wires coming 100 feet instead of 1 wire coming 100 feet, make it faster?

    EDIT 1: I actually have a NETGEAR ProSafe GS105 gigabit switch, though it only has 4 ports. Do you think I can make use of it in the current setup? For example, connect all 3 switches and the server into it, and keep the internet router and phone server on one of the slower switches.

    EDIT 2: Everyone mentions gigabit switches, but will they make any difference with 10/100 network cards? Would I then have to use gigabit cards in every computer too? I could in the server, perhaps, but the users will be 10/100.


  • Networking Home Office

    - by Matt
    I'm in the process of building an office in my garden, about 25m away from my house, and I'd like to run a wired network connection to it. I'd rather not go down the powerline route, as speeds don't seem great, and I'm likely to be moving a lot of data around on the internal network. I have an electrician who is running armoured electrical cable to the office and is providing conduit for me to run network cable. My questions are:

    1. What type of cable should I run?
    2. How do I terminate/connect it at both ends?

    I could get something like armoured Cat6 UTP solid-core cable (like this: http://www.netstoredirect.com/cat6-cable/289166-external-armoured-cat6-utp-solid-cable-price-per-metre.html), which seems fairly robust, but then I have to terminate it. Additionally, where the cable enters my house, there is another 15m or so to where my router is situated. I also read this article: http://www.audioholics.com/audio-video-cables/bjc-cat-network-cable-quality-interview which scared me into realising I don't know what I'm doing, particularly with termination.

    Or I could get an external Cat6 patch cable (e.g. http://www.netstoredirect.com/rj45-network-cables/239231-external-cat6-utp-ldpe-rj45-patch-leads.html), run that in the conduit, and work out how to terminate it at the house end. At the office end I guess I can just plug it into a switch. Any help? Thanks

