Search Results

Search found 10384 results on 416 pages for 'plan cache'.


  • Blue Screen of Death on install

    - by Toby Allen
    I have a machine with Windows Vista installed. It has an Intel X25 SSD as the system drive. I want to reinstall with XP (I plan to format and overwrite Vista). When I boot up using the Dell XP CD, it loads the initial drivers and then I get a Blue Screen. This is quite concerning. The installed OS works OK, but it's giving problems, so I want to remove it. Should I just format the SSD and try again? Will this make any difference? Can I do something to avoid hitting the Blue Screen? It's possible I had corrupt sectors on one of the other disks; will a new XP install use the system drive or drive 0? Can I force the install to use a specific drive when installing? The error is:

        *** STOP: 0x0000007B (0xF78D2524,0x0000034,0x00000000,0x00000000)

    I never did find the answer; however:

    - I removed the SSD and tried to install on the other disk - CRASH
    - I disconnected the other disk and tried to install with only the SSD plugged in - CRASH
    - I removed one stick of RAM - CRASH
    - I used a Windows 7 CD - NO CRASH


  • HP DL380 G3 2U For Basic Web Server in 2012

    - by ryandlf
    I have an opportunity to pick up a used HP DL380 G3 2U for $100. I'm looking for a basic entry-level web server that I can host a small-to-medium-size website on and more or less learn the ins and outs of running my own web server, before I bite the bullet and spend a couple grand on a server. The specs are:

    - Dual (2) Intel Xeon 2.4GHz, 400MHz FSB, 512KB cache
    - 4GB PC2100 ECC Registered memory
    - 6 x 72GB 10K U320 SCSI hard drives
    - Smart Array 5i RAID controller
    - Redundant power supplies
    - DVD/floppy, dual Intel GB NICs, USB

    Or would I be better off spending a couple hundred bucks on something like this new HP? It seems like the only major difference is SATA and a bit of storage, but I will likely be implementing a separate storage system of some sort anyway. I guess it also wouldn't hurt to mention that I plan on running a Linux server distro - would hardware that is four generations old be likely to support Linux? I don't mind spending a couple hundred extra dollars if it's a better solution, but as mentioned previously I am simply looking for a server to learn on and probably use for a year or so while I put together a small-to-medium-size website.


  • Plesk Uninstall Memory issue

    - by user115079
    I am trying to uninstall Plesk from my VPS by running the following command:

        yum remove sw-* psa-* plesk-*

    When I run this command I get the following error:

        Running rpm_check_debug
        Running Transaction Test
        memory alloc (4 bytes) returned NULL.

    The first time I ran the command, the alloc size was a very big number (67864987). Then I googled it, found some clear/ulimit commands, executed them, rebooted my system, stopped all processes and ran the command again, but I still get the 4-byte failure and don't know how to get rid of it. I also tried ulimit after the reboot, with no success. And yes, no swap is attached. These are the stats of my system:

        [root@vps ~]# free -m
                     total       used       free     shared    buffers     cached
        Mem:           384         67        316          0          0          0
        -/+ buffers/cache:          67        316
        Swap:            0          0          0

        top - 21:01:07 up 3:12, 1 user, load average: 0.24, 0.08, 0.03
        Tasks: 31 total, 2 running, 29 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem:  393216k total,  69832k used, 323384k free, 0k buffers
        Swap:      0k total,      0k used,      0k free, 0k cached

    Is there any other way to achieve my goal of uninstalling Plesk? Thanks.
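    One workaround worth trying, assuming the VPS's virtualization allows swap at all (OpenVZ containers typically don't): add a temporary swap file so yum has room to work, then re-run the removal. A minimal sketch with illustrative paths and sizes:

        # create and enable a temporary 512 MB swap file
        dd if=/dev/zero of=/swapfile bs=1M count=512
        chmod 600 /swapfile
        mkswap /swapfile
        swapon /swapfile

        # retry the removal, then clean up
        yum remove 'sw-*' 'psa-*' 'plesk-*'
        swapoff /swapfile
        rm -f /swapfile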


  • Replicated filesystem and EC2 MySQL

    - by El Yobo
    I'm currently investigating migrating our infrastructure over to run on Amazon's EC2 and am trying to figure out the best way to set up a MySQL service. I'm leaning towards running our own MySQL instances rather than going with Amazon's RDS, but am still considering the best approach for performance and cost on the instance itself. In order to have persistent data, the MySQL data needs to be on an EBS volume (with some form of striped RAID, e.g. RAID0 or RAID10) to improve persistence. However, EBS IO is limited by the network interface (gigabit, so a theoretical maximum of 128 MB/s), while the ephemeral volumes have no such problem. I did see a suggestion for running two MySQL servers on an instance, with a master running on the ephemeral disk (which we would also RAID) and a slave storing changes to an EBS volume, but this has some additional overhead and complexity (two servers). What I was imagining is using some form of replicated file system such that I could have:

    - a filesystem on top of a RAID0 of ephemeral volumes to maximise performance;
    - all changes from the above immediately replicated to another RAID1 volume backed by multiple EBS volumes, to ensure no data loss.

    The advantages of this would be:

    - best possible IO performance for the DB server, with no network delay in IO;
    - decreased IO on EBS volumes (as all read IO will be done on the ephemeral volumes), so decreased cost;
    - good data security, as it's backed onto redundant EBS volumes.

    However, I haven't seen an appropriate system to replicate all changes from one volume to the other. Is there a filesystem, or any other approach, which will do this? The distributed file systems, e.g. GlusterFS, DRBD etc., seem to focus on replicating disks between servers; can they be set up to do what I'm interested in here? I also haven't seen anything about others taking this approach. Do I have a solution in need of a problem here (i.e. is performance good enough, so this whole idea is redundant)? Is there some flaw in the plan?
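    One candidate worth evaluating, offered as an editor-suggested technique rather than something from the question: Linux md RAID1 can mirror a fast device onto a slow one, with the slow side marked write-mostly (reads stay on the fast side) and write-behind (the slow side's acknowledgements don't stall writes). A rough sketch with purely illustrative device names; note that write-behind means a bounded window of recent writes can be lost if the instance dies:

        # stripe the ephemeral disks for speed
        mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/xvdb /dev/xvdc

        # mirror the stripe onto an EBS-backed device; reads are served from
        # the ephemeral stripe, EBS writes are lazily acknowledged
        mdadm --create /dev/md1 --level=1 --raid-devices=2 \
              --bitmap=internal --write-behind=256 \
              /dev/md0 --write-mostly /dev/xvdf

        mkfs.ext4 /dev/md1 && mount /dev/md1 /var/lib/mysql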


  • Outbound ports to allow through firewall

    - by dunxd
    This question was asked before, but in a rather general way; I'm asking more specifically, based on my current requirements. We have a number of remote offices made up of a bunch of PCs and an ASA 5505 which is used as firewall and VPN termination point. In the offices we share the internet connection with one or more other organisations over whom we have very little control, aside from the config on the ASAs. For a bunch of reasons I'd like to lock down these ASA 5505s to only allow outbound traffic to ports used by applications we know we need. I'm putting together a standard config to roll out to all the ASAs, and if we need to open up ports for the other orgs we can do it on request. But I want to leave open the most commonly required ports, so we can get up and running without waiting on other folks' technical staff to get back to us. I plan to allow the following TCP ports to support commonly required resources:

    - POP3 (110 and 995)
    - HTTP (80 and 443)
    - IMAP4 (143 and 993)
    - SMTP (25 and 465)

    The question really is: what other ports do I need to leave open to allow for "normal" working? I've seen UDP port 53 for DNS as one. Are there any others that would be worth opening up? Just to note - I'll also be setting up monitoring systems to keep an eye on the ports we do allow; any of the above could be misused, of course. We'll also back all this up with signed agreements. But I'm aiming for a technical solution where I don't have to start out with the full requirements of everyone we share connections with. See also: outbound ports that are always open.
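    For concreteness, a sketch of what that baseline egress policy might look like in ASA access-list syntax; the ACL name and interface are illustrative, and this covers only the ports already listed plus DNS, not an endorsed final set:

        access-list OUTBOUND extended permit udp any any eq domain
        access-list OUTBOUND extended permit tcp any any eq smtp
        access-list OUTBOUND extended permit tcp any any eq www
        access-list OUTBOUND extended permit tcp any any eq https
        access-list OUTBOUND extended permit tcp any any eq pop3
        access-list OUTBOUND extended permit tcp any any eq 995
        access-list OUTBOUND extended permit tcp any any eq 143
        access-list OUTBOUND extended permit tcp any any eq 993
        access-list OUTBOUND extended permit tcp any any eq 465
        access-list OUTBOUND deny ip any any log
        access-group OUTBOUND in interface inside

    The final deny with log pairs naturally with the monitoring mentioned above: blocked attempts show up in the logs and become the request queue for opening further ports.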


  • Wipe free space on LVM-LUKS (dm-crypt) Volume

    - by peter4887
    My three system partitions are created with LVM on a LUKS partition (dm-crypt). These are /home, / and swap, and the filesystem is ext4. They are encrypted because they are on my laptop and I don't want laptop thieves to get my data. But I often share my laptop with other people, so they can access my encrypted partitions, and I don't want them to be able to recover my cache and all the data I deleted. So I'm now trying to wipe all the free space on /home to prevent recovery with tools like photorec. (One overwrite should do; the need for multiple overwrites is just a rumor.) But still I haven't found any way to wipe this free space successfully. I tried

        dd if=/dev/zero of=/home/fillitup bs=512 count=[count of free sectors]

    so my partition was completely full of data; df said /dev/mapper/home was 100% used, with 0 sectors available. But I could still recover gigs of data with photorec, even though I selected recovery from the free space only. photorec displays

        /dev/mapper/home - 340 GB / 317 GiB (RO)

    but df says the size of /home is just 313G. Why are these different, and what does the 340 GB mean? It looks like there is a place on my /dev/mapper/home partition that I can't access to overwrite, but can access to recover. I also checked for corrupted sectors, but there aren't any. Maybe this is the space between my existing files? Does anyone know why I can't wipe my free space with dd, and how I can find the location of the loads of recoverable files, to securely delete them?
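    Two observations, offered as likely explanations rather than confirmed facts: photorec's "340 GB / 317 GiB" is the same size in decimal and binary units, so the real gap is 317 GiB vs df's 313G, which is plausibly filesystem overhead (inode tables, journal); separately, ext4 reserves about 5% of blocks for root, and a fill file written as an ordinary user can never claim those blocks. A sketch of a fill pass that reaches them, run as root:

        # as root, so the blocks ext4 reserves for root get filled too
        dd if=/dev/zero of=/home/fillitup bs=1M    # runs until ENOSPC
        sync                                       # flush the zeros to disk
        rm /home/fillitup

        # inspect the reserved-block count for the filesystem
        tune2fs -l /dev/mapper/home | grep -i 'reserved block'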


  • Silent install of Japanese Language Pack in Win7

    - by Doltknuckle
    Every year, due to re-imaging, I am forced to find a way to install the Japanese language pack on a collection of 30 computers. Each year I look for a way to automate this process, and each year I am forced to do it manually. Maybe this year will be different. Has anyone had any luck with installing and configuring Far East language support for Windows 7 without user interaction? I have already downloaded KB972813 and have a way to get it out to the computers. What I normally do is this:

    1. Run the EXE, using the default settings.
    2. Open up language settings and create the JP keyboard.
    3. Configure the language bar settings.
    4. Copy the settings to the default user.
    5. Delete the local user cache.
    6. Sign the different user accounts in to make sure that the default settings are correct.

    This whole process takes about 10 minutes; multiply that out by 30 machines and you are looking at a 5-hour process. If I can log into all of the computers at once, I can normally cut that down to about an hour. Any ideas would be appreciated. Thanks in advance.
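    A possible automation path, offered as a sketch rather than a tested recipe: Windows 7 language packs ship as an lp.cab that DISM can add unattended, and regional/keyboard settings (including copying them to the default user and system accounts) can be applied from a regional-settings XML via intl.cpl. The paths below, and the XML file you would have to author for ja-JP, are assumptions:

        :: add the language pack (lp.cab extracted from the KB972813 package)
        dism /online /add-package /packagepath:C:\deploy\ja-jp\lp.cab

        :: apply locale, keyboard and language-bar settings from an XML
        :: answer file; the XML's CopySettings flags can cover the default
        :: user and system accounts
        control intl.cpl,, /f:"C:\deploy\ja-jp-settings.xml"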


  • Is NFS capable of preserving order of operations?

    - by JustJeff
    I have a diskless host 'A' that has a directory NFS-mounted on server 'B'. A process on A writes to two files F1 and F2 in that directory, and a process on B monitors these files for changes. Assume that B polls for changes faster than A is expected to make them. Process A seeks to the head of the files, writes data, and flushes. Process B seeks to the head of the files and does reads.

    Are there any guarantees about how the order of the changes performed by A will be detected at B? Specifically, if A alternately writes to one file and then the other, is it reasonable to expect that B will notice alternating changes to F1 and F2? Or could B conceivably detect a series of changes on F1 and then a series on F2?

    I know there are a lot of assumptions embedded in the question. For instance, I am virtually certain that, even operating on just one file, if A performs 100 operations on the file, B may see a smaller number of changes that give the same result, due to NFS caching some of the actions on A before they are communicated to B. And of course there would be issues with concurrent file access even if NFS weren't involved and both the reading and the writing process were running on the same real file system.

    The reason I'm even putting the question up here is that it seems like most of the time, the setup described above does detect the changes at B in the same order they are made at A, but occasionally some events come through in transposed order. So, is it worth trying to make this work? Is there some way to tune NFS to make it work, perhaps cache settings or something? Or is fine-grained behavior like this just too much to expect from NFS?
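    On the tuning question: since A is the NFS client doing the writes, the knobs that matter are A's client-side caches. A hedged sketch with standard mount options - these reduce the reordering and coalescing A's cache can introduce, at a real performance cost, but strict ordering still isn't contractually guaranteed by NFS:

        # on A: sync pushes each write to the server before the call returns,
        # and noac disables the attribute caching that lets writes coalesce
        mount -t nfs -o sync,noac serverB:/export /mnt/shared

        # alternatively, keep the mount as-is and have the writer open the
        # files with O_SYNC, or fsync() after each logical update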


  • How to make Microsoft JVM work on Windows 7?

    - by rics
    I am struggling with the following problem: I cannot install MS JVM 3810 properly on Windows 7. When I start Internet Explorer 8 without starting any Java 1.1 programs, choosing Java custom settings under Internet options crashes the browser. I have some Java 1.1 programs that work well in Internet Explorer 8 on Windows XP after the installation of MS JVM 3810. I know that it is not advisable to use this old JVM, but porting the programs to newer Java is not a short-term option, since they contain 3rd-party components; a complete rewrite is a long-term plan. Strangely, jview and appletviewer (jview /a) work from a console, so MS JVM 3810 is not completely busted - IE 8 just does not like it. The problem with the appletviewer is that it cannot connect to the server, even with both signed and unsigned content in Java custom settings set to Enable all. (Since Java custom settings was unreachable due to the crash, the modifications - including My Computer - were performed through the registry and pre-checked to behave correctly on Windows XP and Internet Explorer 8.) If jview were working, I could at least think of a workaround. Is there a way to configure MS JVM or jview properly on Windows 7? Other options would be:

    - checking Internet Explorer 9 Beta;
    - using VirtualBox with Windows XP and an older IE in it;
    - delaying the Windows 7 upgrade;
    - ...

    Update: Finally we have modified all the programs to work both as applet and as application. This way the programs can still be used from the browser on older Windows versions, while on Windows 7 the applications are started from the desktop. Installation to every user machine is easily solved, since they already have a large common application drive. The code update is fortunately only a few lines of modification: including a main method in the applet class. Furthermore, instead of the starting HTML page, a BAT file is used to set the classpath before startup with jview.


  • TrueCrypt and hidden volumes

    - by user51166
    I would like to know the opinion of users using (or deliberately not using) the hidden volume encryption feature of TrueCrypt. Personally, until now I have never used this feature: on Windows I encrypt the system drive as a standard volume; on GNU/Linux I encrypt using LUKS, which is TrueCrypt's equivalent of a standard volume. For data I use the standard volume approach as well. I read that this feature is nice and all, but it isn't really used by most people. Do you use it or not? Why? Do you store only VERY sensitive data inside it, or what else?

    Technically speaking, making a hidden volume that has (almost) the same size as the outer one doesn't make sense: the outer volume will be encrypted but hold no data, which will look very strange. So not only does one have to plan which data to store where, one even has to remember each time to mount the outer volume with hidden volume protection (otherwise there'll be data loss when writing to it). It's a bit messy: hidden OS + outer OS + outer volume + hidden volume = 4 partitions :(

    A similar question applies to the hidden operating system (which I don't use [yet]).


  • Why is my rsync so slow?

    - by iblue
    My laptop and my workstation are both connected to a Gigabit switch. Both are running Linux. But when I copy files with rsync, it performs badly: I get about 22 MB/s. Shouldn't I theoretically get about 125 MB/s? What is the limiting factor here?

    EDIT: I conducted some experiments.

    Write performance on the laptop. The laptop has an xfs filesystem with full disk encryption, using aes-cbc-essiv:sha256 cipher mode with 256-bit key length. Disk write performance is 58.8 MB/s.

        iblue@nerdpol:~$ LANG=C dd if=/dev/zero of=test.img bs=1M count=1024
        1073741824 Bytes (1.1 GB) copied, 18.2735 s, 58.8 MB/s

    Read performance on the workstation. The files I copied are on a software RAID-5 over 5 HDDs, with LVM on top of the RAID; the volume itself is encrypted with the same cipher. The workstation has an FX-8150 CPU with the native AES-NI instruction set, which speeds up encryption. Disk read performance is 256 MB/s (cache was cold).

        iblue@raven:/mnt/bytemachine/imgs$ dd if=backup-1333796266.tar.bz2 of=/dev/null bs=1M
        10213172008 bytes (10 GB) copied, 39.8882 s, 256 MB/s

    Network performance. I ran iperf between the two clients. Network performance is 939 Mbit/s.

        iblue@raven $ iperf -c 94.135.XXX
        ------------------------------------------------------------
        Client connecting to 94.135.XXX, TCP port 5001
        TCP window size: 23.2 KByte (default)
        ------------------------------------------------------------
        [  3] local 94.135.XXX port 59385 connected with 94.135.YYY port 5001
        [ ID] Interval       Transfer     Bandwidth
        [  3]  0.0-10.0 sec  1.09 GBytes   939 Mbits/sec
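    Given those numbers, the laptop's write path already caps the transfer near 59 MB/s, so the remaining gap down to 22 MB/s is often the ssh transport (cipher CPU cost) rather than disk or network. A way to test that hypothesis; the cipher choice and paths are illustrative:

        # same copy with a cheaper ssh cipher: if throughput jumps
        # noticeably, ssh encryption was the bottleneck
        rsync -a -e "ssh -c aes128-ctr" bigfile.img laptop:/tmp/

        # or bypass ssh entirely (requires an rsync daemon with a matching
        # module configured on the target)
        rsync -a bigfile.img rsync://laptop/tmp/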


  • How to find out which process is hogging the Linux server?

    - by user1149518
    We have a RHEL server. Today it suddenly became slow. Symptoms: it was responding slowly to ping queries from another server, and when I tried to log in using ssh, it took about 10 seconds. I was able to resolve the problem by guesswork: I killed the one process I thought was the culprit, which fixed it. But I would like to know the proper approach for detecting the culprit in this kind of "slow server" situation, and the proper way to resolve such slowness issues. These were the conditions when the server was slow:

        # vmstat 3 3
        procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
         r  b   swpd    free   buff   cache  si  so  bi   bo    in    cs us sy id wa st
         1  1    176 6730868 285052 4899676   0   0   3    4     0     0  1  1 97  1  0
         0  0    176 6751576 285064 4899704   0   0   0  115 15307 37171  1  1 96  3  0
         0  0    176 6751948 285068 4899700   0   0   0   23 14813 39559  1  1 98  1  0

        # top
        top - 16:38:18 up 150 days, 19:36, 64 users, load average: 1.68, 1.46, 1.44
        Tasks: 1287 total, 2 running, 1284 sleeping, 1 stopped, 0 zombie
        Cpu(s): 1.3%us, 1.7%sy, 0.1%ni, 95.9%id, 0.7%wa, 0.0%hi, 0.2%si, 0.0%st
        Mem:  16620824k total,  9867124k used,  6753700k free,  287424k buffers
        Swap:  8193140k total,      176k used,  8192964k free, 4898996k cached

          PID USER PR  NI VIRT RES  SHR S %CPU %MEM   TIME+  COMMAND
        26258 khk  34  19 130m 47m 7088 S 11.2  0.3 385:32.42 edm
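    A hedged starting kit for that triage - standard procps/sysstat tools, nothing specific to this server, and the sysstat/iotop packages may need installing first:

        # biggest CPU and memory consumers
        ps aux --sort=-pcpu | head -n 10
        ps aux --sort=-pmem | head -n 10

        # is the load I/O-wait rather than CPU? watch the 'wa' column
        vmstat 1 5

        # per-process disk I/O, and per-device utilisation/latency
        iotop -o
        iostat -x 1 5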


  • Setting up a test and live environment - how?

    - by Sean
    I am a bit new to servers and stuff, so I had a question. I have my development team working on my website. They are in different countries, and currently they put all the work live on the test site. But the test site is open to anyone who knows the URL. It is behind a directory, but this affects my QA process, because I cannot use the accurate URL structures to prevent the general public from seeing it. So what I want to do is:

    - Have my site live on the net, but only for me and my team - like an internal network.
    - Mirror this to my live site when I put it live.

    So I guess this is something like setting up a staging and a live environment. How do I do it, and are both environments on the same physical server, or do I need to buy two servers? If I set up a staging environment, how will my team and I access it, since we are all spread out - I assume we need to log into something? What about the URL - do I need a different URL for the test site, or can I use the same live URL for it? I plan to get a dedicated server + CDN for my site.
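    One common pattern, sketched under the assumption of an Apache-style setup (the question doesn't say what the stack is): run staging as a separate vhost on the same server, e.g. staging.example.com, so URL structures mirror production, and gate it with HTTP basic auth or an IP allow-list:

        # create a password file with one account per team member
        htpasswd -c /etc/apache2/.staging-htpasswd alice

        # then protect the staging vhost's document root with something like:
        #   <Directory /var/www/staging>
        #       AuthType Basic
        #       AuthName "Staging"
        #       AuthUserFile /etc/apache2/.staging-htpasswd
        #       Require valid-user
        #   </Directory>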


  • Windows 7: Dual Monitor Issue

    - by sdoca
    I have a Dell laptop with a docking station and two external monitors hooked up, running Windows 7 64-bit. Originally the two monitors were the same, Dell 1908FPs; I have since replaced one of them with an HP LA2405. I have set the HP 24" (1920 x 1200, connected via the DVI port) as monitor 1, extended to the Dell 19" (1280 x 1024, connected via the VGA port) as monitor 2. My power plan turns the monitors off after 10 minutes when the laptop is plugged in, but does not put the laptop to sleep. However, now that I have the HP monitor, when I unlock the laptop and the monitors come back on, all my windows are resized and shifted to the top left corner of the HP monitor, as if they had been resized for display on the smaller Dell monitor. A co-worker has the exact same laptop and monitor configuration but doesn't experience this issue, so I figure there's some configuration that's different, but we can't find it. I haven't been able to find any mention of a similar problem in an internet search, but I'm not really sure what terms to use. Does anybody have any suggestions as to what may be causing the issue? An OS setting? A monitor setting?

    EDIT: My laptop is using the Intel graphics drivers and the Intel Graphics and Media Control Panel.


  • How can I optimize my AJAX calls to deliver in 60 ms?

    - by Quintin Par
    I am building autocomplete functionality for my site, and Google's instant results are my benchmark. When I look at Google, its 50-60 ms response times baffle me; they look insane in comparison to mine, which are around 500 ms. To give you an idea: my results are cached on the load balancer and served from a machine that has httpd slow-start and initcwnd fixed, and my site is also behind CloudFlare. From a server-side perspective I don't think I can do anything more. Can someone help me take this 500 ms response time towards 60 ms? What more should I be doing to achieve Google-level performance?

    Edit: People seemed to be annoyed that I compared myself to Google and that the question was very generic; sorry about that. To rephrase: how can I bring the response time down from 500 ms to 60 ms, given that my server response time is just a fraction of a millisecond? Assume the results are served from Nginx - Varnish with a cache hit. Here are some answers I would offer myself, assuming the response sizes remain more or less the same:

    - Ensure the results are HTTP-compressed.
    - Ensure SPDY if you are on HTTPS.
    - Ensure initcwnd is set to 10 and slow start is disabled on Linux machines.
    - Etc.

    I don't think I'll end up at Google's 60 ms, but your collective expertise can easily help shave off 100 ms, and that's a big win.
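    Before tuning, it helps to see where the 500 ms actually goes. curl's -w timing variables split a request into DNS lookup, TCP connect, TLS handshake and time-to-first-byte; the URL here is a placeholder:

        curl -o /dev/null -s \
          -w "dns=%{time_namelookup} connect=%{time_connect} tls=%{time_appconnect} ttfb=%{time_starttransfer} total=%{time_total}\n" \
          "https://example.com/autocomplete?q=test"

    If connect and TLS dominate, the wins are in connection reuse (keep-alive, SPDY) and edge proximity; if ttfb minus connect dominates, the payload and congestion-window settings listed above are the place to look.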


  • Is there a simple but good To Do manager app for the Mac?

    - by Another Registered User
    Every morning I think about what I am going to do today. So I take a piece of paper and start to write things like:

    [ ] Call Mr. XYZ
    [ ] Answer support e-mails
    [ ] Reduce website header height by 20 px
    [ ] Create new navigation bar icons

    And every time I'm done with something, I draw a checkmark in the square - on paper. It would be fun to have something like this as an application, but I don't want a heavy project management tool or e-mail integration. It should be: download, install, use - without fat configuration or a steep learning curve. Usually I don't schedule my to-dos; I just write down every day what I want to accomplish today. In my experience it doesn't make sense to plan what to do next week, because next week everything looks totally different. It would be cool if such a simple utility existed. At the moment I use TextEdit and delete rows that are done. With a nice interface, this would be much more fun.


  • Windows Server 2008 R2 grinds to a screeching halt during file copy operations

    - by skolima
    When my Windows Server 2008 R2 machine is performing any large disk operation (copying 10GB files from one drive to another, copying similar files over the network, merging HyperV snapshots, compressing large files), performance of the whole machine slows down terribly and everything becomes unresponsive. This is noticeable in any situation where the disk access is large enough not to fit in the cache. Are there any settings available for tuning this behaviour? I can accept slower file transfers if that gives me more responsiveness.

    System details: Dell OptiPlex 960, Core 2 Quad Q9650, 8GB RAM, 2 SATA drives - 320GB (ST3320418AS) and 1TB (ST31000528AS), NCQ active on both, Intel 82564LM-3 Gigabit Ethernet, ATI HD 3450 graphics, Intel ICH10 bridge. We have multiple machines like this, and every one exhibits the same behaviour. I thought this was overkill for a workstation; apparently I was mistaken.

    Update: I guess I shouldn't have mentioned HyperV at all. The above configuration is a standard workstation setup at the company I work for; this is not a server of any kind. I have at most 3 virtual machines running, and usually I'm the only person accessing them. Nevertheless, the slowdown occurs even when no VMs are running. On a Linux machine I'd simply ionice the copy process and forget about it; is there any way to manage IO priorities on Windows?


  • What are secure ways of sharing a server (ssh+LAMP) with friends?

    - by Bran the Blessed
    What is the best way to share a virtual server with friends? More precisely, I have the following assets:

    - A virtual private server (Debian Lenny) with root access for myself, running SSH, apache2 and mysql
    - Some unused disk space
    - Some friends in need of hosting

    The problem: I would now like to do the following:

    - Host one or several domains per friend.
    - Give my friends full access to their domains, including running PHP scripts, for example.
    - Prevent my friends from poking around in other directories.
    - Keep the security of my server from being compromised by faulty PHP scripts.

    To clarify: I do trust my friends in the sense that they are not trying to do something evil with their access; I just do not trust the programs they are going to run. So, what are your recommendations for establishing such a scenario?

    Partial solution: I already came up with the following plan:

    - Add chrooted SSH users for my friends.
    - Add Apache vhosts per user, pointing the directories to subdirectories of the home directories, i.e. /home/alice/example.com, /home/bob/example.net, etc.

    But how can I enforce a chroot-like environment for the scripts they are running within these vhosts? Any pointers would be appreciated.
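    For that last question, one widely used (if not bullet-proof) containment knob is PHP's open_basedir, set per vhost. A sketch assuming mod_php on Apache, with the names carried over from the plan above; per-user php-fpm pools running under separate UIDs would be a stronger isolation if available:

        <VirtualHost *:80>
            ServerName example.com
            DocumentRoot /home/alice/example.com
            # confine PHP file access to alice's tree and a private tmp dir
            php_admin_value open_basedir "/home/alice/:/home/alice/tmp/"
            php_admin_value upload_tmp_dir "/home/alice/tmp/"
        </VirtualHost>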


  • Using nginx with proxy_pass on a subdomain

    - by marcus3006
    I have a Rails app that should listen on the subdomain redmine.example.com (using proxy_pass). All other requests for *.example.com should just get a normal index.html. Here is my configuration:

        server {
            server_name www.example.com example.com;
            root /home/deploy/static/example;
        }

        upstream redmine {
            server unix:/tmp/redmine.socket fail_timeout=0;
        }

        server {
            # you could put a list of other domain names this application answers
            server_name redmine.example.com;
            root /home/deploy/rails/redmine/public;
            access_log /var/log/nginx/redmine_access.log;
            rewrite_log on;

            location * {
                proxy_pass http://redmine;
            }

            location ~ ^/(assets)/ {
                root /home/deploy/rails/redmine/public;
                gzip_static on;  # to serve pre-gzipped version
                expires max;
                add_header Cache-Control public;
            }
        }

    Does anyone know what's going wrong here? Requests to example.com and www.example.com are handled correctly, but when I try to access redmine.example.com I get "couldn't resolve host".
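    Two things worth checking, offered as likely causes rather than certainties: "couldn't resolve host" is a client-side DNS failure, which suggests there is simply no DNS record for the subdomain yet (nginx never sees the request), and separately `location * { ... }` is not valid nginx syntax for a catch-all - that would be `location / { ... }`. A quick check of the first:

        # does the subdomain resolve at all? compare with the apex record
        dig +short redmine.example.com
        dig +short example.com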


  • How to get IE to open JavaScript as text

    - by Pete
    I am running IE 9. Up until sometime last week, if I put the URL of a JavaScript file in the address bar, it would show the JavaScript as text in the browser window. Now when I do that, it wants to download the JavaScript file. How can I revert to the previous handling? This is annoying, since I'm developing a web application, and if I can get the browser to display the .js files as text, I can refresh to force the cache to update.

    Update: I've tested on several co-workers' machines (IE 9 in all cases). For some, browsing to .js files renders them in the browser; for others, it asks for a download. File associations don't seem to have any effect. With one co-worker we tested both IE and Chrome: IE wanted to download the file, but Chrome rendered it as text. This makes me think it's an IE issue and not an OS issue.


  • Conflicts with file from package mysql-5.0.77

    - by Whiteyq
    I'm trying to install APC (Alternative PHP Cache) on a CentOS dedicated server. I have everything done apart from configuring phpize. Running

        yum -y install php-devel

    gives me the following error:

        file /usr/share/mysql/charsets/Index.xml from install of mysql-libs-5.1.57-1.el5.art.x86_64 conflicts with file from package mysql-5.0.77-4.el5_5.3.i386

    (and so on for the other charset files). So I think the MySQL version I have is too old, and I more than likely need to upgrade to version 5.1. I'm reluctant to do this because a) it's a live server (although only 3/4 domains) and b) I've read I'll need to recompile PHP if I upgrade. To add to this, I have Plesk installed for managing domains, which might also need reinstalling/reconfiguring. Sorry for the long intro, but it's my first post and best to give as much info as possible. My question is basically: is there any way I can run yum -y install php-devel - to get phpize working and complete the installation of APC - with the version of MySQL I currently have installed, i.e. 5.0.77?
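    One detail worth noticing in that error, flagged as an observation rather than a guaranteed fix: the conflict is between an x86_64 package and an i386 one, which on CentOS 5 often means leftover 32-bit MySQL packages are colliding with 64-bit updates. Checking for, and removing, the 32-bit copies might unblock yum without a full 5.1 upgrade:

        # list installed mysql packages with their architectures
        rpm -qa --queryformat '%{NAME}-%{VERSION}.%{ARCH}\n' | grep -i mysql

        # if 32-bit duplicates show up, they can be removed explicitly
        yum remove mysql.i386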


  • How to completely move the Users, Program Files, Program Files (x86) and ProgramData folders to other partitions on Windows 8?

    - by Enigma83
    I am attempting to move the folders Users, Program Files, Program Files (x86) and ProgramData (at the root of the C drive) to at least two other partitions, preferably on a fresh install. I have read that there are methods for doing this post-install, but it seems like that would be a bit more tedious. I want to move the two Program Files folders to one partition on the same HDD, and Users/ProgramData will go to yet another partition on the same HDD.

    I have done a bit of research on this and read up on the options: booting into Audit Mode; using the RoboCopy command to copy folders after booting from my Windows 8 USB drive; creating NTFS junctions/symbolic links; registry edits; and doing it all automatically with an autounattend file which Windows Setup processes before the user is booted in for the first time. I tried this morning and now have a basic installation in which programs like Internet Explorer fail to open, and certain files can't be found/opened even if I click on them directly (Regedit is an example). Also, I can't run the command prompt as Administrator (or as any other user), can't activate the real Administrator account, and can't open any of the Administrative Tools (despite having added them to my Start Screen). So far I have only tried RoboCopy-ing Program Files and Program Files (x86), creating junction points for them, and editing the registry in the relevant locations; this is what I'm left with. I also found a blog article which describes how to do this for Windows 7.

    So, where should I go from here, and where can I find more information? How can this be done without breaking the Metro apps, which I've read will stop working if you move ProgramData? And once I have everything moved, where do I install programs to - do I tell them to install to C:\Program Files and C:\Program Files (x86), or to the junctioned/symbolic-linked partition/drive? I plan to test in VMware virtual machines from here on, while using a baseline default install for daily tasks, until things are working correctly.
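    For reference, the copy-and-junction step the text describes usually looks something like this - a sketch with illustrative drive letters, run from an elevated prompt or from Windows PE while the folders aren't in use, not a complete or endorsed recipe (in-use system files will block it on a live install):

        :: copy the folder to the target partition, then replace the original
        :: with a junction so existing C:\ paths keep resolving
        robocopy "C:\Program Files" "D:\Program Files" /E /COPYALL /XJ
        rmdir /S /Q "C:\Program Files"
        mklink /J "C:\Program Files" "D:\Program Files"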


  • How to route outbound traffic to a specific domain "XYZ.org" via a specific NIC or public/static IP?

    - by user139943
    Within the next week or so, I'll be setting up an AT&T U-verse modem with 5 usable static public IP addresses. I plan to register a domain name to one of the 5 static IPs (leaving the remaining 4 unregistered) and run a website from a single server set up in my home LAN. I'll skip the long-winded reason why, but I need to somehow route outbound traffic originating from my server and destined for one public domain (i.e. http://www.sample.org) through one of the UNREGISTERED static IP addresses ONLY. Basically, I want this public domain to see connections coming from a bare IP address and not my domain name. If it makes things easier, this can apply to all outbound traffic from my server, as long as it doesn't impact users browsing my website: inbound connections should still go through the domain name / registered public IP. Can I accomplish this with my single server, with one or multiple NICs? Or do I need multiple servers, with one set up as a proxy? Please help, as my background is in software and not networking, and I don't think I can accomplish this at the software level (e.g. in Java). Thanks.
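    Assuming the server runs Linux and has the unregistered static IP configured as an additional address on its NIC (both assumptions - the question doesn't say), the usual single-box answer is source-address selection, either per-route or via NAT. Addresses below are documentation placeholders:

        # option 1: a host route that pins the source address used for
        # locally generated traffic to that one destination
        ip route add 203.0.113.10/32 via 192.168.1.1 src 198.51.100.2

        # option 2: rewrite the source for traffic to that destination with
        # SNAT (198.51.100.2 standing in for the unregistered static IP)
        iptables -t nat -A POSTROUTING -d 203.0.113.10 -j SNAT --to-source 198.51.100.2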


  • How to make lighttpd respect X-Forwarded-Proto when constructing redirects for directories?

    - by Tim Landscheidt
    We have an nginx proxy at tools.wmflabs.org that receives requests over http and https and passes them on over http to lighttpds on a grid (one lighttpd per top-level path). Requests that reach the proxy via https are received by the lighttpds like this:

        HEAD /lighttpd-test/test HTTP/1.1
        Connection: close
        Host: tools.wmflabs.org
        X-Forwarded-Proto: https
        X-Original-URI: /lighttpd-test/test
        User-Agent: curl/7.29.0
        Accept: */*

    This works great, except in the case where the URL references a physical directory and misses the trailing slash ("/"); lighttpd then generates a redirect to the http URL:

        HTTP/1.1 301 Moved Permanently
        Location: http://tools.wmflabs.org/lighttpd-test/test/
        Connection: close
        Date: Fri, 06 Jun 2014 14:50:29 GMT
        Server: lighttpd/1.4.28

    The relevant parts of our lighttpd configurations are:

        server.modules = (
            "mod_setenv",
            "mod_access",
            "mod_accesslog",
            "mod_alias",
            "mod_compress",
            "mod_redirect",
            "mod_rewrite",
            "mod_fastcgi",
            "mod_cgi",
        )
        server.port = $port
        [...]
        server.document-root = "$home/public_html"
        [...]
        server.follow-symlink = "enable"
        [...]
        server.stat-cache-engine = "fam"
        ssl.engine = "disable"
        alias.url = ( "/$tool" => "$home/public_html/" )
        index-file.names = ( "index.php", "index.html", "index.htm" )
        dir-listing.encoding = "utf-8"
        server.dir-listing = "disable"
        url.access-deny = ( "~", ".inc" )
        [...]

    How can I make lighttpd respect X-Forwarded-Proto and use it when constructing redirects for directories? I'm aware that I could try to tackle this in nginx, but I'd prefer to fix it in lighttpd.
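    Should the lighttpd side prove immovable on 1.4.28 (newer releases reportedly handle forwarded-scheme headers better via mod_extforward, but that doesn't help here), the proxy can paper over the symptom: nginx's proxy_redirect rewrites the scheme in upstream Location headers. A sketch of that fallback, explicitly not the preferred in-lighttpd fix the question asks for:

        # inside the nginx server block that terminates https
        location / {
            proxy_pass http://lighttpd_backend;
            # rewrite upstream redirects from http:// back to https://
            proxy_redirect http://tools.wmflabs.org/ https://tools.wmflabs.org/;
        }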


  • Suggestions for transitioning to new GW/private network

    - by Quinten
    I am replacing a private T1 link with a new firewall device and an IPsec tunnel for a branch office, and I am trying to figure out the right way to transition people at the site over to the new connection, so that they default to using the much faster tunnel.

    Existing network: 192.168.254.0/24, gw 192.168.254.253 (Cisco router plugged into the private T1).

    Test network I have been using with the IPsec tunnel: 192.168.1.0/24, gw 192.168.1.1 (pfSense firewall plugged into the public internet), also plugged into the same switch as the old network.

    There are probably 20-30 network devices in the existing subnet, about 5 with static IPs. The remote endpoint is already the firewall - I can't set up redundant links to the existing subnet. In other words, as soon as I change the tunnel configuration to point to 192.168.254.0/24, all devices in the existing subnet will stop working, because they point to the wrong gateway. I'd like some ability to do this slowly, so I can move over a few clients and verify the stability of the new link before moving critical services or less tolerant users over. What's the right way to do this? Change the netmask on all of the devices to /16 and update the gateway to point to the new device? Could this cause any problems?

    Also, how should I handle DNS? The pfSense box is not aware of my Active Directory environment, but if I change DNS to use the local servers, it will result in a huge slowdown, as DNS queries will still be routed over the private T1. I need some help coming up with a plan that's not too disruptive but will really let me thoroughly test the stability of the IPsec tunnel before I make the final switch. The AD version is 2008 R2, as are the servers; workstations are mostly Windows XP SP3. I have not configured 192.168.1.0/24 as a site in AD Sites and Services.
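    For the "move a few clients first" step, one low-risk variant (an editor sketch using this network's addresses, not a tested plan): since both subnets share the same switch, individual test machines can simply be moved onto 192.168.1.0/24 while keeping the AD DNS servers, avoiding the /16 re-mask entirely. On an XP test box, with the host address and the DNS server address being illustrative:

        :: put this workstation on the test subnet, gateway = pfSense
        netsh interface ip set address "Local Area Connection" static 192.168.1.50 255.255.255.0 192.168.1.1 1

        :: keep pointing at the existing AD DNS server while testing
        netsh interface ip set dns "Local Area Connection" static 192.168.254.10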

