Search Results

Search found 33468 results on 1339 pages for 'behaviour change'.


  • How can I ensure that my static IP address is read from /etc/network/interfaces rather than DHCP?

    - by jonderry
    This is a follow-up to a previous question. I'm trying to set a static IP by changing /etc/network/interfaces to the following:

        # interfaces(5) file used by ifup(8) and ifdown(8)
        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 192.168.2.133
            netmask 255.255.255.0
            gateway 192.168.2.1
            dns-nameservers 8.8.8.8

    and then running /sbin/ifdown eth0; /sbin/ifup eth0. However, the change in IP address doesn't take effect unless I first edit /etc/dhcp/dhclient.conf and comment out the following line:

        request subnet-mask, broadcast-address, time-offset, routers,
            domain-name, domain-name-servers, domain-search, host-name,
            dhcp6.name-servers, dhcp6.domain-search, netbios-name-servers,
            netbios-scope, interface-mtu, rfc3442-classless-static-routes,
            ntp-servers, dhcp6.fqdn, dhcp6.sntp-servers;

    Strangely, after commenting out this line, ifdown; ifup works; but when I uncomment it, the behaviour does not revert to ignoring changes to my settings in /etc/network/interfaces. (That isn't a problem in itself, but I really need to be able to reproduce the failure so that I can be confident that my solution is robust.) Also, I'd rather not have to edit /etc/dhcp/dhclient.conf to change my static IP, since it seems I should be able to do this by editing only interfaces. Can anyone explain the issues I'm seeing above, and suggest a reproducible way of making changes to static IP addresses take effect, so that I can be sure my approach works?
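
    A minimal sketch of the usual explanation, offered as a hedged guess rather than a confirmed diagnosis: if dhclient is still running for eth0, the lease it holds can re-apply the DHCP address after ifup. Releasing the lease and flushing the interface before bringing it back up avoids touching dhclient.conf at all (interface name and commands as in the question):

        # release any DHCP lease dhclient still holds on eth0,
        # then re-read /etc/network/interfaces
        sudo dhclient -r eth0
        sudo ip addr flush dev eth0
        sudo ifdown eth0; sudo ifup eth0

    If this works reliably, the dhclient.conf edit should be unnecessary.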


  • Cancelling Windows 7 shutdown disables power button

    - by Jens
    Normally, pressing the power button once initiates a shut-down in Windows 7. If any programs are still running that will not quit (e.g. waiting for a dialogue response), Windows overlays the screen with a dialogue allowing the user to cancel the shut-down. I've just noticed that on two different systems here, using this cancel option disables shutting down via the power button. The power button can still be used to kill the system by holding it for a few seconds, and shutting the PC down from the Start menu still works as well.

    Steps to reproduce:

      1. Open Notepad and type a few characters. Do not save.
      2. Press the computer's power button.
      3. Wait until the dark overlay screen appears, then press cancel.
      4. Press the power button again. Notice how nothing happens.

    What is the reason for this behaviour, and can it be changed so that Windows always tries to shut down the PC when the power button is pressed?
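
    One hedged check, not a confirmed fix: confirm that the power-button mapping for the active power plan is still "Shut down" after cancelling. powercfg can both query and reset it; the alias names below are standard powercfg aliases, and the index 3 means "Shut down":

        :: list the button settings for the active scheme, then force
        :: the power-button action back to "Shut down" and re-apply
        powercfg /query scheme_current sub_buttons
        powercfg /setacvalueindex scheme_current sub_buttons pbuttonaction 3
        powercfg /setactive scheme_current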


  • RDP with multiple monitors, display preferences get reset?

    - by Martijn Kooij
    Problem: When I connect to my PC at the office via RDP, all the application windows I had previously carefully placed on either monitor 1 or 2 get "scrambled". Either all applications show on monitor 1 and monitor 2 is empty, or they have swapped, 1 <-> 2.

    Expected behaviour: When I connect, I see all the application windows in exactly the same position and at exactly the same size as I left them the night before. I have the same monitors at home as I have at work: primary 2560x1440, secondary 900x1440. Yesterday I tried switching the physical cables on the host machine, hoping that the hardware order of the monitors was the difference. But this morning my secondary monitor was completely blank, not even showing the taskbar (which I had set to appear ONLY on the secondary). Somewhere there must be something that helps Windows map each physical monitor to the corresponding virtual RDP monitor and the "server" side's idea of that monitor. Are there more options than switching the cables? This one has been bothering me for a long, long time now; I hope someone has a solution or workaround for me.

    Edit: I want to use both monitors, so I have checked the "Use all monitors" setting in the RDP client. For example, I leave my mail and Total Commander on the right monitor, and Visual Studio and Firefox on the left monitor. When I connect via RDP, I want to see those applications in the same positions and at the same sizes.
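
    A small, hedged sketch (the host name is a placeholder): making sure the session itself spans both monitors rules out the client-side half of the mapping. The /multimon switch is a standard mstsc flag, and the equivalent line in a saved .rdp file is "use multimon:i:1":

        :: span the RDP session across all local monitors
        mstsc /v:office-pc /multimon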


  • What could cause a dual-monitor PC to suddenly stop using one of the screens?

    - by raldi
    I've got a dual-monitor setup using a GeForce 7900GT that was working fine for over a year... then suddenly, only one of the screens works. I didn't change anything to trigger this. What I've ruled out so far:

      • It's not OS-related: even on startup, only one screen displays the BIOS checks. In the past, both screens would show them together.
      • The monitor that gets a signal is random: sometimes the one on the left goes black, sometimes the one on the right.
      • The monitors and their cables are good: I can switch both or either, and I get a signal just fine. They're plugged in, too.
      • It's not the video card: I have an identical 7900GT in another machine, and swapping the two didn't fix anything.
      • It's not dust on the motherboard: I pulled everything out, cleaned it off, checked for obvious damage, put it all back together, and no change.

    My next two steps are going to be to reset the CMOS info and to try swapping out the motherboard. Before I do that, does anyone have any other ideas?


  • Non-restored Files Corrupted on System Restore

    - by Yar
    I restored OS X 10.6.2 today (it was 10.6.3 and not booting) by copying the system over from a backup. The data directories were not touched. In the data directories, I'm now seeing some files as 0 bytes, and getting permission-denied errors when copying, even when using sudo cp or the Finder itself. Some programs (such as zip) instead take the files at face value and see no permission problems, but they see the files as zero bytes, which would be game over for recovery.

        cp: .git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: could not copy
        extended attributes to /eraseme/blah/.git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca:
        Operation not permitted

    I have tried sudo chown, sudo chmod -R 777 and sudo chflags -R nouchg, none of which change the end result. Strangely, this is only affecting my .git directories (perhaps because they start with a period, though renaming them, which works, does not change anything). What else can I do to take ownership of these files?

    Edit: This question comes from Stack Overflow because I originally thought it was a git problem. It's definitely not (just) git. Anyway, this is to help put some of the comments in context.
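
    A hedged diagnostic sketch (paths taken from the question): on OS X, ls can show the BSD file flags and ACLs that chmod/chown don't touch, and chmod -N strips ACLs. None of this is a guaranteed fix; it just shows where the "Operation not permitted" may be coming from:

        ls -lOe .git/objects/fe/            # -O shows BSD flags, -e shows ACLs
        xattr -l .git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca
        sudo chmod -R -N .git               # remove ACLs, if any turn up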


  • Configuring iPad Mail app & Gmail app with different accounts? [migrated]

    - by Steve Crane
    I prefer to use the Gmail app over the standard Mail app on my iPad for reading my personal Gmail (I delete a lot of mails, newsletters, etc. after reading, and this is one tap in Gmail and several in Mail). I have them set up so my personal Gmail uses the Gmail app and my work email uses the standard Mail app. If I'm in Gmail or Mail and send an email, it sends from the relevant email address as expected.

    My problem is that when I share something via email from Safari or another app, it sends from the email address configured in Settings for Mail (the work one), and I would prefer to do such sharing from my personal email address. Does anyone know if there is a way to achieve this? I could switch the addresses to use the other app, but as I never delete work email and delete personal mail at least 50% of the time, the behaviour of the apps is perfect the way I have them set up; if only I could solve that one little problem of controlling where shared items are sent from. I am using an iPad 2 with iOS 5.1, should that be relevant.


  • How can I set up an nginx cache strategy that tries Amazon S3 first, then memcached, with a fallback on miss?

    - by Tim
    I have a large site with lots of pages that almost never change. Right now I am using two memcached servers (Amazon ElastiCache), but this is really expensive. That's why, for these files that barely ever change, I want to upload them to Amazon S3 and shut down one memcached server. Here is my conf:

        location ~ /longterm/(.*) {
            proxy_pass http://amazonS3bucket;
            proxy_intercept_errors on;
            proxy_next_upstream http_404;
            error_page 404 503 = @fallback_memcached;
        }

        location @fallback_memcache {
            set $memcached_key $uri;
            memcached_pass name:11211;
            error_page 404 @fallback;
        }

        location @fallback {
            try_files $uri $uri/index.html;
        }

    I don't know why, but the config doesn't work on the final fallback. If I get an Amazon S3 hit it works; if I get an S3 miss and a memcached hit it works; but on an S3 miss followed by a memcached miss, resolving the last fallback fails. I am also thinking of using the Amazon S3 FUSE filesystem http://code.google.com/p/s3fs/ instead of the proxy_pass; I think it would be easier to implement, but would it be less performant?
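
    One detail that stands out in the pasted config, offered as a hedged observation rather than a confirmed diagnosis: error_page names @fallback_memcached, while the named location is @fallback_memcache. If the live config has the same mismatch, the miss chain would break at exactly that hop, and nginx -t should flag the undefined name. A minimal check, assuming a placeholder config path:

        # align the two spellings, then validate and reload
        sudo sed -i 's/@fallback_memcached/@fallback_memcache/' /etc/nginx/conf.d/site.conf
        sudo nginx -t && sudo nginx -s reload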


  • UEFI boot options gone

    - by user1797930
    I ran into some issues booting Windows after trying to make a complete backup of the disc. After searching for information about some of the error codes, I found advice to change some BIOS settings, but instead I thought I would just "restore defaults" to make sure all settings were set as originally intended. After doing so, all UEFI boot options except for "Windows Boot Manager" are gone, including the CD/DVD drive, so I cannot even boot from a recovery DVD anymore; and as explained, Windows is not able to boot either. Do you have any advice?

    When I added a secondary drive originally, it was automatically added to the boot options menu. Even when removing and re-adding the drive physically, the option does not appear again. I have tried unplugging power and holding down the start button for 10 seconds before booting again; no change. It's a laptop, so removing the CMOS battery is not an option. I have read that this can be caused by data removed from NVRAM, but I am unable to find a way to recover it. "Add new boot options" requires a path, but the CD/DVD drive was originally available without any CD in it, so there is no path available to add the drive. I did try to open an EFI shell, but it seems not to be embedded in the UEFI/BIOS; it just says "not found". I'm really lost here; any advice is appreciated.


  • Trying to migrate old server to new. Getting duplicate name errors

    - by SpaceCowboy74
    I have an existing server on my network that is running Windows 2000 with SQL Server 2000 on it. We are in the process of moving the server to a Windows 2008 platform, with SQL 2008 as well. A few changes are happening, though. For one, applications that were on the old server will now be on a new application server. The issue is, the developers of the original applications hard-coded the server name in the apps and/or batch files. I could change all the code, but that would require weeks of work. My original idea was to change the hosts and lmhosts files to point to the new servers with a different IP. So I implemented the following, where oldserver is the original server and server is the new one brought online:

        hosts:
        192.168.1.10 oldserver
        192.168.1.15 server

        lmhosts:
        192.168.1.10 oldserver #pre
        192.168.1.15 server #pre

    Problem is, when I try to do this, I get the following errors:

        \\server\c$
        Logon Failure: The target account name is incorrect.

        \\oldserver\c$
        A duplicate name exists on the network.

    I know about renaming servers in AD, but can't do so yet, as the original server is in production and I cannot rename it without breaking a lot of things at the moment. I want to do a proof of concept for management before renaming the servers. Any idea how I should resolve this?
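
    A hedged sketch of the usual workaround, not a confirmed fix: "The target account name is incorrect" is the classic Kerberos symptom of reaching a machine under a name whose service principal belongs to another account. Letting the new server answer to an alias typically takes two steps (run for the new server; setspn needs AD rights):

        :: let the SMB server respond to names other than its own
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters" /v DisableStrictNameChecking /t REG_DWORD /d 1 /f
        :: register the old name against the new machine account
        setspn -A host/oldserver server

    Note that while the old machine is still online under its own name, registering its SPN elsewhere creates a conflict, so the second step is only safe once the old box is being retired.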


  • SSL certificates: how to use them?

    - by Rod
    I have a central server, and I want to purchase an SSL certificate for it. The architecture is based on this central server and many connected web servers on the client side (one for each user). A client can access both the main server and its local server, and the two servers exchange data between them. I would like the client's web browser to trust all servers, always activating HTTPS and a secure connection when connecting to them. Assuming I can name all servers under the same domain name (I was thinking about a wildcard certificate anyway):

      • Which kind of certificate, or which use of it, can make these secure connections work?
      • There is the possibility that the main server and a client-side server are not connected for a while. Is it possible in this case for a client to activate an HTTPS connection to its local server?
      • When I need to renew or change the certificate, I would like to change it only on the main server, avoiding having to touch all the servers on the clients' side. Can I do that in some way?
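
    For the wildcard-certificate route, a minimal sketch of generating the key and signing request (the domain is a placeholder; the same key/certificate pair would then be deployed to every server answering under *.example.com):

        # one key and CSR covering all hosts under the wildcard
        openssl req -new -newkey rsa:2048 -nodes \
          -keyout wildcard.example.com.key \
          -out wildcard.example.com.csr \
          -subj "/CN=*.example.com"

    Since each local server terminates its own TLS, it needs its own copy of the key and certificate. That also answers the offline case (no connection to the main server is needed for the handshake), but it means renewal does have to reach every server.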


  • What can cause the system to freeze in a way where even the reset button takes a long time to react?

    - by ThiefMaster
    What can be the reason for system freezes that are so "hard" that even the hardware reset button takes about 3 seconds until it actually resets the system? (And then it actually powers down and up again, instead of doing a "clean" hard reset like when pressing it on a normally running system.)

    Since it initially happened mainly while playing videos from YouTube, I suspected the graphics card; however, I replaced it recently and that did not change anything. It still happens from time to time, and sometimes more often, like a few times in the last few hours. The system is running Windows 7, but I don't think this matters, since I don't think any software, not even the OS, can affect the reset button's behaviour. The PC is not overheated and the freezes happen randomly. There is also no malware on the system. The CPU is an Intel Core i7-920 on a Gigabyte EX58-UD5 mainboard.

    What could be the cause of this problem? Faulty RAM? I have not run a full memtest86 check yet, but I wonder if there is a more likely issue than faulty RAM; checking 12 GB of RAM does take some time, after all! There are no entries in the event log, but that's what I expected, since the system freezes so hard that I doubt it has time to write anything to any log.


  • Very slow connection to SSH server from client (but not from other servers)

    - by AntonOfTheWoods
    I have an Ubuntu 12.04 laptop that is taking so long to connect to various servers (in different data centres) that it seems like a lottery whether I'll actually get a connection. Connections between the servers themselves are instantaneous, and on the servers I'm connecting to I've set (rebooting afterwards for good measure):

        UseDNS no
        AddressFamily inet

    I also put in the reverse DNS and IP of the cable connection I'm connecting from. If I connect from the laptop via telnet:

        telnet my.server 22

    then the connection is also instantaneous, so it doesn't appear to be a problem with an intervening firewall. I get the same behaviour whether I connect with the IP, a short name in my hosts file, or the FQDN. I'm connecting over a 50 Mbps (cable, synchronous) connection, so that doesn't appear to be the problem, and when I do finally get a connection it's a good, quick, stable one. I have tried listening on another port (8000) and that makes no difference. Web and other connections from the laptop to the machine are also very good. Does anyone have any ideas?
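
    Since telnet to port 22 is instant, the delay is inside the SSH handshake itself. A hedged first step (host name from the question, user name a placeholder): run the client verbosely and watch where it stalls; client-side GSSAPI negotiation is a common culprit for exactly this pattern, and can be switched off per connection:

        # -vvv prints each handshake step; the pause will sit right after one of them
        ssh -vvv -o GSSAPIAuthentication=no user@my.server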


  • IIS 7: launch unique site instance per host name

    - by OlduwanSteve
    Is it possible to configure IIS 7 so that a single site with multiple bindings (or wildcard bindings) will launch a unique instance for each unique host name? To explain why this is desirable, we have an application that retrieves its configuration from a remote system. The behaviour of the application is governed by this configuration and not by the 'web.config'. The application uses its host name as a key to retrieve the configuration. Currently it is a manual process to create an identical IIS site for each instance of the application, differing only by the bindings. My thought, if it were possible, is that it would be nice to have one IIS site that effectively works as a template for an arbitrary number of dynamic sites. Whenever it is accessed by a unique host name a new instance of the site would be launched, and all further requests to that host name would go to that instance just as though I had created the site by hand. I use IIS regularly, but only for fairly straightforward site hosting. I'd like to know if this could be configured with vanilla IIS 7, but would also welcome answers that require a plugin or 3rd party product. Programming/architectural suggestions about changes to the app wouldn't really be appropriate for serverfault.
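
    A hedged fallback, in case the dynamic behaviour turns out not to exist in vanilla IIS 7: the per-host copies can at least be stamped out by script instead of by hand. appcmd's add site verb takes the binding and path directly (the site name, host name and path below are placeholders):

        :: create one site per host name from a shared physical path
        %windir%\system32\inetsrv\appcmd add site /name:"app-newhost.example.com" ^
          /bindings:http/*:80:newhost.example.com /physicalPath:"C:\inetpub\app"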


  • How do you get Linux to honor setuid directories?

    - by Takigama
    Some time ago, in a conversation in IRC, one user in a channel I was in suggested someone setuid a directory in order for it to inherit the user ID on files, to solve a problem someone else was having. At the time I spoke up and said "Linux doesn't support setuid directories". After that, the person giving the advice showed me a pastebin (http://codepad.org/4In62f13) of his system honouring the setuid permission set on a directory.

    Just to explain: when I say "Linux doesn't support setuid directories", what I mean is that you can run "chmod u+s directory" and it will set the bit on the directory; however, Linux (as I understood it) ignores this bit on directories. Try as I might, I just can't quite replicate that pastebin. Someone suggested to me once that it might be possible to emulate the behaviour with SELinux, and playing around with rules, it's possible to force a UID on a file, but not from a setuid directory permission (that I can see). Reading around on the internet has been fairly uninformative; most places claim "no, setuid on directories does not work with Linux", with the occasional "it can be done under specific circumstances" (such as this: http://arstechnica.com/etc/linux/2003/linux.ars-12032003.html).

    I don't remember who the original person was, but the original system was a Debian 6 system, and the filesystem it was running was XFS mounted with "default,acl". I've tried replicating that, but no luck so far (I've tried various versions of Debian, Ubuntu, Fedora and CentOS). Can anyone clue me in on what or how you get a system to honor setuid on a directory?
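
    For contrast, a small sketch of what Linux does honour on directories: the setgid bit, which makes new files inherit the directory's group (the group name here is a placeholder). The setuid bit can be set the same way but has no effect on mainline Linux filesystems:

        mkdir /tmp/demo && sudo chgrp staff /tmp/demo
        sudo chmod g+s /tmp/demo        # new files will inherit group 'staff'
        sudo chmod u+s /tmp/demo        # bit shows in ls -ld, but does nothing
        touch /tmp/demo/f && ls -l /tmp/demo/f   # owner: creator; group: staff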


  • Is there a way to create a script/BAT that changes my desktop image... if so how? [duplicate]

    - by Radical924
    This question already has an answer here: "How do I set the desktop background on Windows from a script?"

    Okay, so I just got a program that lets me lock my PC screen (this info doesn't matter much); you can run files when the program starts/locks and closes/unlocks. What I would like to do is create a script/BAT that changes my desktop background to one image when I click "lock", and another script that changes it back when I "unlock". Is there a simple script or BAT file that does this, or does someone know how to write one? I would like to be able to modify it myself so that it uses the picture I want; all I would change is the path of the background image in the BAT file/script.

    Edit: Thank you for the link! It helped out a bit, but I still have one question; I will post it separately. Thanks!
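
    A commonly cited sketch, not a guaranteed one: the wallpaper path lives in the registry, and the rundll32 call asks Windows to re-read the per-user parameters. The refresh step is known to be unreliable on some systems, and C:\lock.jpg is a placeholder path:

        @echo off
        rem point the wallpaper at the lock-screen image, then ask for a refresh
        reg add "HKCU\Control Panel\Desktop" /v Wallpaper /t REG_SZ /d "C:\lock.jpg" /f
        rundll32.exe user32.dll,UpdatePerUserSystemParameters

    A second copy of the script with a different image path would cover the "unlock" side.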


  • What is the best time to set the IP address for a server headed to a server colocation facility?

    - by jim_m_somewhere
    What is the best time to set the IP address for a server? I have a server that I am going to install the OS on, and then I am going to send it to a server colocation facility. The server is going to have Internet-facing services (www, email, etc.). I can set up a "fake" IP address during install (by fake I mean private, as in RFC 1918) and change the fake IPs to the real IPs once I set up the colocation service. The other option is to set up the colocation service, wait for them to give me the real IPs, and use those during the OS install.

    The ramifications are that if I use fake IPs during install, I will have to wait before I set up things like SSL certs. If I wait for IPs from the colocation provider, then I can set up SSL certs that use the correct (as in real) IP addresses, with no changes to the certs until they expire. Do the gotchas of changing an IP address on a server outweigh the benefits of a quick install? The other danger with using fake IPs is that I could make a mistake when I go through the various files to change the IP address to the live one.

    Server OS: CentOS 6.2 or CentOS 6.3, 64 bit. Apps: Apache 2.4.x httpd, MySQL 5.x (will eventually use replication).
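
    If the fake-then-swap route wins, a hedged sketch for catching stragglers (both addresses below are documentation placeholders): grep lists every file still carrying the interim address before and after the swap, which addresses the "I could make a mistake" worry directly:

        # list every config file still referencing the interim address
        grep -rl '192.168.100.10' /etc /var/www
        # swap in the colo-assigned address on the interface config
        sudo sed -i 's/192\.168\.100\.10/203.0.113.10/g' \
            /etc/sysconfig/network-scripts/ifcfg-eth0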


  • SSL and regular VHost on the same server [duplicate]

    - by Pascal Boutin
    This question already has an answer here: "How to stop HTTPS requests for non-ssl-enabled virtual hosts from going to the first ssl-enabled virtualhost (Apache-SNI)"

    I have a server running Apache 2.4 hosting several virtual hosts. The problem I noticed is that if I try to access, let's say, https://example.com, which has no SSL set up, Apache will automatically serve the first VHost that has SSL activated (which is literally not the same site). How can we prevent this strange behaviour; in other words, how do I tell Apache to ignore SSL for a given site? Here's a sample of what my .conf files look like:

        <VirtualHost foobar.com:80>
            DocumentRoot /somepath/foobar.com
            <Directory /somepath/foobar.com>
                Options -Indexes
                Require all granted
                DirectoryIndex index.php
                AllowOverride All
            </Directory>
            ServerName foobar.com
            ServerAlias www.foobar.com
        </VirtualHost>

        <VirtualHost test.example.com:443>
            DocumentRoot /somepath/
            <Directory /somepath/>
                Options -Indexes
                Require all granted
                AllowOverride All
            </Directory>
            ServerName test.example.com
            SSLEngine on
            SSLCertificateFile [...]
            SSLCertificateKeyFile [...]
            SSLCertificateChainFile [...]
        </VirtualHost>

    With this, if I try to access https://foobar.com, Chrome shows me an SSL error mentioning that the server identified itself as test.example.com. Thanks in advance!
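
    A hedged sketch of the usual mitigation (certificate paths are placeholders): because the client connects by IP before Apache sees the host name, something has to answer; a default catch-all :443 vhost, loaded before the real ones, can take those requests and refuse them instead of leaking the wrong site:

        <VirtualHost _default_:443>
            # answers any HTTPS name that has no vhost of its own
            ServerName catchall.invalid
            SSLEngine on
            SSLCertificateFile /etc/ssl/certs/placeholder.pem
            SSLCertificateKeyFile /etc/ssl/private/placeholder.key
            <Location "/">
                Require all denied
            </Location>
        </VirtualHost>

    The browser will still warn about the certificate name, which is unavoidable; the TLS handshake completes before Apache can pick a vhost.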


  • Monitor programs accessing my keyboard?

    - by Anti Earth
    As of a few days ago, my computer has been behaving erratically. When I am typing, my pointer will randomly move to another place in the text and start typing a semi-random string of characters ("gvyfn" is common; it has typed this about 8 times whilst I composed the text above). It often highlights part or all of the text and overwrites it. It sometimes goes into loops of pressing Ctrl-Alt-Delete, bringing up the Windows 7 menu. It sometimes even messes with mouse clicks; they have unexpected results, like requesting admin privileges from applications instead of switching to their window. I believe this is because it is holding an alt/function key down. This behaviour happens periodically, in waves; it might subside for an hour, then continue to haunt me.

    I believe it to be a virus or malicious program. My anti-virus (Symantec) and multiple MS rootkit removers could not find anything suspicious. I've noticed that it sometimes re-maps keys and types gibberish when I press certain keys (though no pattern is evident). I believe a malicious program has installed a keyboard hook on my computer. I'm wondering:

      • Is there a way to view which programs are emulating keystrokes?
      • Is there a way to view what keyboard hooks are installed?

    (I'm also at liberty to try any other techniques to remove this blasted thing. It is easily the most frustrating computer problem I've encountered.) Thanks!


  • How to throttle HTTP requests on a Linux machine?

    - by hooraygradschool
    Edit, here is the summary: I need to reduce max connections, preferably system-wide on Ubuntu 11.04, but at least within Google Chrome. I do not need or want to throttle bandwidth; Verizon seems to only care about the number of connections, so that is all I want to change. Also, I don't want to use Firefox unless I have to; I have three other machines all using Chrome and synced, and I just prefer it over Firefox.

    I use tethering for my home internet connection via my Verizon cell phone, without paying for it. This works just fine for streaming Netflix via my Nintendo Wii and pretty much every other conceivable use I've had for it, except that during heavy usage with multiple tabs open on my laptop, the network connection on my phone will just turn off, then on again, then off, and it never fully connects. Based on this and other questions, I think this is caused by Verizon getting too many HTTP requests from my phone.

    Is there some software, script, setting or otherwise that would allow me to throttle my requests to, say, 5 or 10, or whatever turns out to be one less than Verizon is looking for, so that my phone's network connection is not lost? I would far prefer a slowdown to a complete shut-off of my internet connection. I am almost certain it's the quantity of requests and not the data, because, as I mentioned, Netflix will run all day without a hitch, and that uses more data than anything else I would be doing. If I had a router, I am pretty sure there are settings I could easily change to only allow so many requests at a time, but in this case my phone is my router, so no settings.

    I'm using Ubuntu 11.04 on my netbook with an HTC Incredible on Verizon (not that the phone details are relevant). I have been trying to figure this out for quite some time; currently the only fix is to ensure that all requests are stopped, and then sometimes it works again; other times I have to manually turn my 3G service off and then back on. Thank you so much for any assistance!
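
    An untested sketch of one possible system-wide approach (the limit of 8 is a guess to tune, not a known threshold): netfilter's connlimit match can refuse new outbound HTTP/HTTPS connections beyond a cap, which slows the browser down rather than cutting the link. Mask 0 groups all destinations together:

        # refuse new web connections once 8 are already open
        sudo iptables -A OUTPUT -p tcp --syn -m multiport --dports 80,443 \
             -m connlimit --connlimit-above 8 --connlimit-mask 0 \
             -j REJECT --reject-with tcp-reset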


  • What could possibly cause my computer to power down at random times?

    - by geoffreydv
    I recently bought a new power supply and a new graphics card. My PC ran smoothly for a few months, but for the last couple of days I've been having a strange problem. I am trying to isolate it to a specific piece of hardware (because if it's either the power supply or the graphics card, they are still under warranty).

    The problem started when I was playing a game (Diablo 3). My PC suddenly powered down. I was unable to turn it on again by pressing the power button. I unplugged the power cable for a few seconds and plugged it back in; this time the PC powered on, but the indicator light turned orange instead of white as it normally does. The fans were not spinning and I did not see anything on my screen. After trying a couple of times I gave up. Two days later I tried again, and this time the PC did boot up as usual. Everything looked okay until I tested whether the problem was resolved by starting Diablo again: after about two minutes it powered down again, as it did the first time. If I don't run any games, my PC powers down after about 3-5 hours.

    Another fact that might be relevant: one time the PC did not power down immediately; instead, first my graphics "powered down" while the music I was playing kept going, and after about 20 seconds the PC powered down completely as usual. I've also noticed that when I boot immediately after a power-down, the chance of another power-down occurring is much higher. Does anyone have an idea what could be causing this kind of behaviour, or a tool to diagnose the specific hardware parts? Thanks.

    Specs: 6 GB memory, Intel i5 processor, Windows 7 64-bit. The PC is a Dell Studio XPS 8100 with a replaced PSU (Corsair CX500, 500 W) and graphics card (AMD Radeon 6850).


  • Iptables and system-config-firewall

    - by nivde92
    I had a set of netfilter rules set with iptables, but someone else told me to use system-config-firewall to add a rule for sharing files with Windows (Samba). This rewrote the iptables rules file and I lost my own custom rules. I have a backup copy, but am having trouble restoring them.

    Edit: The server is CentOS. I already tried to restore the rules with

        iptables-restore < /root/working.iptables.rules

    but for some reason the rules don't change.

    What are you trying to do? Restore the iptables rules that I have in a backup file. What have you tried in order to make it happen? I've tried to modify the iptables file with vim, since the iptables-restore command was no help. What results did you expect? To get the old rules back. What actually happened? Nothing; when I run the command or edit the file by hand, the rules don't change at all. Maybe something else is overwriting them.
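
    A hedged sketch of the CentOS-specific wrinkle: the live rules sit in the kernel, while /etc/sysconfig/iptables is what the init script (and system-config-firewall) loads at boot, so editing a file under /root changes nothing by itself. Loading the backup and then persisting it would look like:

        sudo iptables-restore < /root/working.iptables.rules
        sudo iptables -L -n              # confirm the rules are actually live
        sudo service iptables save       # writes them to /etc/sysconfig/iptables

    Running system-config-firewall again after this would overwrite the file once more, so it has to be one tool or the other.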


  • Global Email Forwarding with EXIM?

    - by Dexirian
    I've been trying to find a solution to this for a while without success, so here I go. I was given the task of building a high-availability, load-balanced network cluster for our two Linux servers. I did some work and managed to get DNS + SQL + web folders + mail synchronisation going between both. Now I would like server 2 to only do mail and server 1 to only do web hosting. I transferred all the accounts from 1 to 2 using the WHM built-in account transfer feature. I created two different rsync jobs that sync, update, and delete the files for mail and websites. I was able to successfully transfer one mail account from 1 to 2, and server 2 works flawlessly; all I had to do was change the MX entries to point to the new server, and bingo.

    Now my problem is that some clients have their mail software configured to point at oldserver.domain.com. I can't make the (A) entry of oldserver.domain.com point to the new server, for obvious reasons. I thought of using .forward files and adding them to the home directories of the users concerned, but that would be very difficult. So my question is: is there a way to configure exim so that it will only forward mail to the new server? I need to move all the users to server 2 without them doing anything. Thanks!

    Edit, to clarify my problem: some clients have their mail pointed at oldserver.xyz instead of mail.oldserver.xyz. I want to know if I can do something to avoid modifying the clients' configuration. I would also like to know if there is a way to find out which clients aren't properly configured.
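
    A hedged sketch of the exim side (the lookup file and host name are placeholders, and on a cPanel/WHM box the exim config is generated, so treat this as the shape rather than a drop-in edit): a manualroute router placed before the local-delivery routers can push the migrated domains to the new machine:

        send_to_new_server:
          driver = manualroute
          domains = lsearch;/etc/exim.migrated_domains
          transport = remote_smtp
          route_list = * mail.newserver.example.com
          no_more

    The domains routed this way would also need to come out of local_domains on the old server; otherwise exim keeps delivering them locally.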


  • Is there any merit to routinely restoring a Linux system, even if unnecessary?

    - by field_guy
    I do fieldwork with a number of computers running Ubuntu that perform critical tasks. The computers are similarly configured, with slight variations. Since we've had some configuration issues in the past, my boss is pressing for us to take an image of the installation on each computer and restore each computer to that image before it goes into the field. My preferred solution would be to write a common script that checks that the configuration of the system is correct and that the system is operational. If a computer has been verified, isn't restoring it to that configuration redundant? And are there any inherent problems with doing so?

    My reluctance stems from the fact that our software and configuration are subject to change in the field, but these changes must be made across all the computers. That means that when a change is made, all the restoration images have to be updated as well. The differences in the configuration of each computer live in /etc. In the event that restoration is required, I would prefer to keep a single image containing everything that is common to all machines, plus a snapshot of each computer's /etc directory for restoring the state of that particular machine. What's the better approach?
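
    A minimal sketch of what the verification script could check, under the assumption that package files plus /etc cover what matters (the snapshot path is a placeholder, and debsums comes from the package of the same name):

        #!/bin/sh
        set -e
        # flag any installed package whose files no longer match their checksums
        debsums -s
        # flag drift between this machine's /etc and its recorded snapshot
        diff -r /etc /var/backups/etc-snapshot && echo "config OK"

    If both checks pass, re-imaging buys little beyond peace of mind; the known hazard of restoring anyway is silently rolling back a field fix that never made it into the image.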


  • No "New Folder" button in windows 7

    - by user1125620
    My sibling's laptop is running Windows 7 x64. The torrents folder in Documents doesn't show the New Folder button, and Ctrl+Shift+N doesn't work either. I tried everything here: "Can't create new folder from anywhere in Windows 7", but nothing worked. As with the OP there, running the .reg file brings up an error about not being able to change a registry value while something is using it. I removed one entry at a time from the .reg file until I narrowed down the ones causing the problem, which were in HKEY_CLASSES_ROOT\CLSID. The only differing value, however, was in HKEY_CLASSES_ROOT\CLSID\{11dbb47c-a525-400b-9e80-a54615a090c0}\InProcServer32, for which the default value was %SystemRoot%\system32\explorerframe.dll while the value being set was ExplorerFrame.dll. I'm on Windows 7 32-bit and that's the same value I have for the entry, so I doubt that's it. The only thing that seems slightly off is that there is a user group with a strange name that only has execute and read access, and I can't grant it full control; every time I try, it acts as if it works but doesn't change anything. I tried booting into safe mode and changing it, but it did the same thing. It is the folder where uTorrent puts new downloads, so it's possible uTorrent did something, though that's never happened to me before.

    Edit: I had renamed the folder to something else to avoid the problem, and then went onto my own computer to try to figure out what was wrong (I personally don't like using the touchpad on laptops). While I was searching, my sibling started watching a movie. I minimized the movie and saw that the same thing had happened to the folder I renamed. The file layout had also changed: it showed the files grouped by the days they were modified. I was able to fix it by clicking Organize > Layout > Menu Bar, then on the menu bar clicking View > Arrange By > Folder.
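
    For the record, a hedged sketch of the registry entry that fixes of this kind normally restore; the CLSID is the widely documented handler for the "New" submenu, but apply it only if that key is actually missing:

        Windows Registry Editor Version 5.00

        [HKEY_CLASSES_ROOT\Directory\Background\shellex\ContextMenuHandlers\New]
        @="{D969A300-E7FF-11d0-A93B-00A0C90F2719}"

    In this case, though, the cure turned out to be the view settings (Arrange By > Folder), not the registry.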


  • Copy/paste speed very slow for a large number of files on Windows [closed]

    - by Arno2501
    I've run the following test: I created a folder containing 15,000 files of 400 bytes using this batch script:

        @ECHO off
        SET times=15000
        FOR /L %%i IN (1,1,%times%) DO (
            fsutil file createnew filename%%i.txt 400
        )

    Then I copied it on my Windows computer using this command:

        robocopy LargeNumberOfFiles\ LargeNumberOfFiles2\

    After it completed, I could see that the transfer rate was 915,810 bytes/sec, which is less than 1 MB/s; it took several seconds to copy 7 MB. Please note that this is very slow. I tried the same with a folder containing a single file of 50 MB, and the transfer rate was 1,219,512,195 bytes/sec (yes, GB/s); the copy was instantaneous. Why does copying a large number of files take so much time and so many resources on a Windows filesystem?

    Note that I tried the same thing on a Linux system running on the same computer in a virtual machine (VMware Player) with an ext3 filesystem, using the cp command, and the copy was instantaneous. Please also note the following:

      • There is no antivirus.
      • I've tested this behaviour on multiple Windows computers (always NTFS) and I always get comparable results (transfer rate under 1 MB/s, 7-8 seconds on average to copy 7 MB).
      • I've tested on multiple Linux ext3 systems and the copy is always instantaneous for that amount (15,000 files of 400 bytes).

    The question is about understanding what makes a Windows filesystem so slow at copying a large number of files compared to, for instance, a Linux one.
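
    Not an answer to the "why", but a hedged datapoint worth collecting: robocopy runs single-threaded by default, so each tiny file pays the full open/copy/close round trip one at a time. The Windows 7 robocopy can overlap them (the thread count of 32 is arbitrary):

        :: multithreaded copy; /NFL /NDL drop per-file logging, which also costs time
        robocopy LargeNumberOfFiles\ LargeNumberOfFiles2\ /MT:32 /NFL /NDL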

