Search Results

Search found 11991 results on 480 pages for 'cedric copy'.


  • Copying a Windows Home Server backup offsite

    - by Simon
    What ways are there to copy a Windows Home Server backup to an offsite location? I'm talking specifically (and only) about the automated backup of my entire machine, not the shared network folders. I work away from home on my laptop 90% of the time, and it has a 640GB drive, so the shared folders are essentially useless to me. I back up every night, but if my house burns down or is broken into, I'm in serious trouble!

    I'm really looking for an alternative way to back up my entire machine, one that must not interfere with the reliability or speed with which WHS backs up my laptop every night. Either a way to 'export' a complete machine backup from the server, or recommendations for non-conflicting software I can use to back up to a 1TB drive at work, would be ideal. Note: I believe WHS uses its own completely proprietary backup format and doesn't rely on anything like a 'backup bit' or 'archive bit'; I just don't want to install other backup software that will conflict.

    PS: I'm now running Windows 7 and just realized I should probably check out the backup functionality it provides. I assume that won't conflict, right?

    Edit: Thanks for the hosted solutions. I'd also appreciate ways to back up to an 'offsite' location that I control - like my office vs. my home. The hosted solutions will, I think, be too slow or expensive for my needs.
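
    One low-tech sketch, assuming the WHS backup database lives in a GUID-named folder under D:\folders\ on the server (check your own server; the path and destination below are illustrative) and that you stop the backup service via services.msc first so the files aren't in use:

        robocopy "D:\folders\{00008086-058D-4C89-AB57-A7F909A47AB4}" "E:\WHS-offsite" /MIR /R:1 /W:1 /LOG:C:\whs-offsite.log

    Carrying the external drive to the office (or pointing the destination at a VPN-mapped share at work) keeps the copy under your control and doesn't touch the nightly client backup schedule.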

    Read the article

  • Synchronize folders on different computers without the cloud and without a local network, just the internet

    - by theimmortalbg
    I have two computers running Windows 7, one in my home town and one in another town, so they are not on a private network, but both have internet access. They have exactly the same file structure, and I am looking for a program that can keep the data identical on both.

    I know about Dropbox and Google Drive, but they are cloud services and I don't want to use them; besides, they require you to copy your data into a special folder. There are also server-like programs where you upload something and download it later, but I don't need those either. I just want to point the program at the folders to synchronize and have it do the synchronization. The sync can happen in real time while both computers are powered on, or shortly after they come online; it could even lock the synced folder on the other computer until it has been updated.

    In short, these are my documents, and I want them synced across all my computers and editable from whichever one I choose. I can move updates around on a flash drive, but a program that records the changes and applies them on the other computer with one click would make my work much easier.

    Read the article

  • On Windows machines, what is the typical toolchain for remote maintenance?

    - by Hanno Fietz
    I need to deploy PHP and Python code and the appropriate environment (web server, DB server) to remote Windows systems, and I don't know what toolchain would be the equivalent of ssh, scp, bash and the like. Basically, I need to be able to do the following:

    - access the remote Windows machine with the appropriate privileges in a secure manner, like I routinely do with ssh (I don't even know whether that would be a text or a graphical interface on Windows)
    - remotely install software: Apache or IIS, MySQL or Postgres, Python or PHP
    - copy files to the remote machine (the application we're deploying)
    - remotely configure the machine to run regular tasks (e.g. checking for updates to the application)
    - automate tasks like downloading files from a designated place

    The main question is probably how I get onto the machine securely in the first place; the rest is general Windows admin knowledge, which is probably too broad a scope to fit into one question. I have years of experience maintaining Linux boxes and have used tools of varying sophistication on them, ranging from plain scp-ing of PHP files to deploying Java application containers and even full VMs with Vagrant. On Windows I'm a complete novice and don't even know where to start; I have installed Apache, MySQL and PHP on a desktop machine maybe twice in my life, and that's about it. Bonus points for things that work from a Linux machine at my end, but I could run a VM and do everything from there.
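
    The rough Windows counterpart of the ssh/scp workflow is WinRM with PowerShell remoting. A minimal sketch, assuming WinRM can be enabled on the target; the hostname, share and paths below are placeholders:

        # on the target machine, once (elevated prompt)
        Enable-PSRemoting -Force

        # from the admin machine: run a command remotely
        Invoke-Command -ComputerName winbox01 -Credential (Get-Credential) -ScriptBlock { Get-Service W3SVC }

        # interactive session, roughly "ssh in"
        Enter-PSSession -ComputerName winbox01 -Credential (Get-Credential)

        # copy files, roughly "scp" (via an SMB admin share)
        Copy-Item .\app.zip \\winbox01\c$\deploy\app.zip

    For unattended installs of Apache/MySQL/PHP, most Windows installers accept silent switches that can be run through Invoke-Command, and a package manager such as Chocolatey is worth a look.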

    Read the article

  • What's the best way to clone multiple PCs from one machine?

    - by Jason T.
    Where I work we have dozens and dozens of old ThinkPad laptops. A lot of them can be reused, just not for our needs; they have long since been replaced, and the higher-ups have decided to donate them to charity. For better or for worse, I have been tasked with reimaging them.

    I took one laptop, installed the factory copy of Windows, updated it and configured it appropriately. Now I'm trying to apply that image to dozens of other laptops. What's some good software for doing this? First I used Clonezilla to clone the laptop's hard drive to an internal drive in an external enclosure, and that worked. Then I connected that base-image drive externally to a laptop that needed to be imaged, and I got it to work a few times. So far so good, right? Well, once I told my boss what I'd found and how I wanted to proceed, the images stopped working on new laptops. One of three things would happen: the ThinkPads would just blink at me and Windows wouldn't load; Windows would load but freeze within two minutes; or the laptop would BSOD during the Windows XP boot.

    These laptops are not going to be used by the company - they're going to charity. Can anyone recommend a way to reimage multiple laptops?
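
    Those symptoms (boot loop, early freeze, BSOD during XP boot) are what an image built on one machine typically does when dropped onto different hardware without being generalized first. On XP, sysprep ships in DEPLOY.CAB on the install CD; a sketch, with the flags as I recall them for the XP-era tool (confirm against the deploy.chm help file before relying on them):

        rem extract DEPLOY.CAB from \SUPPORT\TOOLS on the XP CD into C:\sysprep first
        C:\sysprep\sysprep.exe -reseal -mini -pnp -quiet

        rem after sysprep shuts the machine down, capture the disk with Clonezilla as before;
        rem each restored laptop then redetects its own hardware on first boot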

    Read the article

  • MySQL slave server from dumps

    - by HTF
    I've created a slave server from a live machine, which is now acting as the master. I used the following procedure to create it:

    mysqldump --opt -Q -B --master-data=2 --all-databases > dump.sql

    I then imported this dump on the new machine and applied the "CHANGE MASTER TO..." directive with the log file/position taken from the dump. Please note that I have around 8000 databases and I didn't stop the master while the dumps were running. The replication works fine, but is this a proper method for creating a slave server? I'm planning to promote this slave to a master (in a different location), so I would like to make sure there is 100% data consistency between the servers. I've found this article, where it says:

    "The naive approach is just to use mysqldump to export a copy of the master and load it on the slave server. This works if you only have one database. With multiple databases, you'll end up with inconsistent data. Mysqldump will dump data from each database on the server in a different transaction. That means that your export will have data from a different point in time for each database."

    Thank you
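
    A sketch of a dump that takes one consistent snapshot across all databases, assuming the tables are InnoDB (for MyISAM you would need --lock-all-tables instead, which blocks writes for the duration):

        mysqldump --single-transaction --master-data=2 --routines --all-databases > dump.sql

    With --single-transaction the whole export is read from a single snapshot, so every database corresponds to the same binlog position recorded by --master-data=2. Before promoting the slave, a checksum comparison (for example with Percona's pt-table-checksum) is a reasonable way to confirm the two servers really match.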

    Read the article

  • Windows 7 corrupted profile - can it be prevented?

    - by Radek
    I have a dedicated Windows 7 virtual machine (not on a domain) for overnight automated testing. Some commands (mysqldump, tscon.exe) must be run under the administrator account. Last week the administrator account's profile became corrupted. I fixed it by renaming the profile in the registry and logging in as administrator - and today it is corrupted again.

    I use the administrator account only to run the above commands via runas. The computer is also restarted quite often via the shutdown command, especially every night before the automated testing starts. I checked the machine for viruses with a full Avast scan, although I believed it was clean. Any idea how to prevent the profile from becoming corrupted again?

    Update: The first entry in the event log today is from 1.15am, and one of my scripts ran a runas command as administrator at exactly 1.15am. It was the second time runas was executed after the testing started, though, and the same thing happened two days in a row. Before the testing starts I need to copy one file that is locked, so I run handle.exe via runas to unlock it. That is what I think is corrupting the profile, but I am not able to reproduce it myself. The message in Event Viewer is: "Windows cannot load the locally stored profile. Possible causes of this error include insufficient security rights or a corrupt local profile. DETAIL - The process cannot access the file because it is being used by another process."

    Read the article

  • Issue with kernel boot [OVH SERVER]

    - by Conner Stephen McCabe
    I'm trying to install the OpenVZ kernel on CentOS 6.3. Yes, the kernel is installed - I can see it in the /boot folder - it is RHEL6, and everything is up to date (I checked with yum update). My issue is that when I reboot the server with that kernel set as the default, it doesn't load. Below are copies of my grub.conf and menu.lst files.

    grub.conf:

        default=0
        timeout=5
        title vzkernel (2.6.32-042stab057.1)
                root (hd0,0)
                kernel /boot/vmlinuz-2.6.32-042stab057.1 ro root=/dev/sda1
                initrd /initramfs-2.6.32-042stab057.1.img
        title linux centos6_64
                kernel /boot/bzImage-3.2.13-xxxx-grs-ipv6-64 root=/dev/sda1 ro
                root (hd0,0)

    menu.lst:

        # grub.conf generated by anaconda
        #
        # Note that you do not have to rerun grub after making changes to this file
        # NOTICE: You have a /boot partition. This means that
        # all kernel and initrd paths are relative to /boot/, eg.
        # root (hd0,0)
        # kernel /vmlinuz-version ro root=/dev/mapper/vg_stock-lv_root
        # initrd /initrd-[generic-]version.img
        #boot=/dev/sda
        default=0
        timeout=5
        splashimage=(hd0,0)/grub/splash.xpm.gz
        hiddenmenu
        title Linux OpenVZ (vmlinuz-2.6.32-042stab057.1)
                root (hd0,0)
                kernel /boot/vmlinuz-2.6.32-042stab057.1 ro root=/dev/mapper/vg_stock-lv_root rd_LVM_LV=vg_stock/lv_root rd_LVM_LV=vg_stock/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=l$
                initrd /initramfs-2.6.32-042stab057.1.img
        title CentOS (2.6.32-71.el6.x86_64)
                root (hd0,0)
                kernel /boot/bzImage-3.2.13-xxxx-grs-ipv6-64 ro root=/dev/mapper/vg_stock-lv_root rd_LVM_LV=vg_stock/lv_root rd_LVM_LV=vg_stock/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFO$
                initrd /initramfs-2.6.32-71.el6.x86_64.img
        # dummy text

    Somebody mentioned that OVH adds a script which changes the kernel settings, and suggested that we either remove the script or reinstall using a VNC, but we don't know how to go about doing either of these. It would really be great if you could help. Thanks in advance.
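
    Two things stand out when comparing the entries: the vzkernel stanza in grub.conf points root= at /dev/sda1 while the entries that do boot use the LVM root (/dev/mapper/vg_stock-lv_root), and the kernel lines carry a /boot/ prefix that the initrd lines do not. A corrected stanza worth trying - a sketch built only from the values visible above, so confirm which of the two files OVH's bootloader actually reads:

        title OpenVZ (2.6.32-042stab057.1)
                root (hd0,0)
                kernel /boot/vmlinuz-2.6.32-042stab057.1 ro root=/dev/mapper/vg_stock-lv_root rd_LVM_LV=vg_stock/lv_root rd_LVM_LV=vg_stock/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM
                initrd /boot/initramfs-2.6.32-042stab057.1.img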

    Read the article

  • Photoshop CS5 performance over network drive (cifs)

    - by grub
    Hello everyone. I installed a QNAP TS-410 NAS for a customer (a professional photographer) with three Hitachi Deskstar 7200rpm 2TB disks configured as RAID5. The NAS and the workstations are connected over a gigabit network. He and his co-worker access the photos (about 1TB of them) over a mapped network drive from their Windows machines (Windows XP 32-bit and Windows 7 Ultimate 32-bit), and both use Photoshop CS5 to edit them.

    The problem is that saving an edited photo takes a really long time - about three times as long as opening it. After some tests I can exclude the network, the NAS and the Windows machines as the source of the issue. I think the problem is the Photoshop software itself and its handling of network drives; officially, network drives are not supported by Adobe. I don't have any experience with Adobe products, especially Photoshop CS5.

    What are your recommendations for solving this performance issue? Should my customer copy the photos to the local drive, edit them and upload them back to the network drive, or is Adobe Drive or Adobe Version Cue the answer? One requirement is that the photos need to be accessible and editable from both computers even when one of them is offline. Adobe Version Cue needs a dedicated service running to be usable, so as far as I understand the Cue software, that solution is not possible. Thank you for your input, and have a nice day :-) Greetings, grub
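
    If the work-locally-then-sync-back route is acceptable, a minimal sketch (paths are placeholders) that pulls only newer files down before an editing session and pushes only changed files back afterwards:

        robocopy \\NAS\photos\job123 C:\Work\job123 /E /XO /R:1 /W:1
        rem ... edit locally in Photoshop ...
        robocopy C:\Work\job123 \\NAS\photos\job123 /E /XO /R:1 /W:1

    This keeps the NAS as the shared master copy while saves happen at local-disk speed; it does not, however, protect against both people editing the same file at the same time.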

    Read the article

  • Log shipping on select tables.

    - by Scott Chamberlain
    I know I am most likely using incorrect terminology, so please correct me if I use the wrong terms so I can search better. We have a very large database at a client's site, and we would like up-to-date copies of some of its tables sent across the internet to our servers at our office. We only want to copy a few of the tables because the bandwidth required to log-ship the entire database (our current solution) is too high. Replication directly to our servers is also out of the question, as our servers are not accessible from the internet, and management does not want to do replication (more on that later).

    One idea we had is to use some form of replication to copy the tables we need into a second, smaller database on the same server, and then log-ship that smaller database. However, management is concerned because the client has broken replication on us in the past (although that was between two servers on their internal network), and would like to stay away from it if possible. Any recommendations would be greatly appreciated. If some form of replication is the only solution, I am not against it; I just need compelling arguments to convince management. This will be set up at multiple sites running either SQL 2005 or SQL 2008, and we will have both versions on our end to restore the data to, so that is not an issue. Thank you.
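
    If replication stays off the table, one low-bandwidth sketch is to export just the needed tables on the client side with bcp, compress the files, and ship them over whatever outbound channel the site allows (server, database and table names below are placeholders):

        bcp ClientDB.dbo.Orders out C:\export\Orders.bcp -S CLIENTSQL01 -T -n
        bcp ClientDB.dbo.Customers out C:\export\Customers.bcp -S CLIENTSQL01 -T -n
        rem on the office server, after the files have been copied across:
        bcp OfficeCopy.dbo.Orders in C:\import\Orders.bcp -S OFFICESQL01 -T -n

    This gives periodic snapshots rather than continuous log shipping, so it suits tables where "up to date" can mean hourly or nightly; the scheduled export can be run from a SQL Agent job on the client's server.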

    Read the article

  • How can I avoid permission denied errors when attempting to deploy a rails app with capistrano?

    - by joshee
    Total noob here. I'm attempting to deploy an app through Capistrano, and I'm getting relentless permission denied errors when I run cap deploy:update. At least some of these errors seem to be due to missing directories that trigger a "Permission Denied". (I'm doing setup as root just temporarily.) My deploy.rb:

        set :user, 'root'
        set :domain, 'domainname.com'
        set :application, 'appname'

        # adjust if you are using RVM, remove if you are not
        $:.unshift(File.expand_path('./lib', ENV['rvm_path']))
        require "rvm/capistrano"
        set :rvm_ruby_string, '1.9.2'

        # file paths
        set :repository, "ssh://[email protected]/~/git/appname.git"
        set :deploy_to, "/var/rails/appname"

        # distribute your applications across servers (the instructions below put them
        # all on the same server, defined above as 'domain', adjust as necessary)
        role :app, domain
        role :web, domain
        role :db, domain, :primary => true

        set :deploy_via, :remote_cache
        set :scm, 'git'
        set :branch, 'master'
        set :scm_verbose, true
        set :use_sudo, false
        set :rails_env, :production

        namespace :deploy do
          desc "cause Passenger to initiate a restart"
          task :restart do
            run "touch #{current_path}/tmp/restart.txt"
          end

          desc "reload the database with seed data"
          task :seed do
            run "cd #{current_path}; rake db:seed RAILS_ENV=#{rails_env}"
          end
        end

        after "deploy:update_code", :bundle_install

        desc "install the necessary prerequisites"
        task :bundle_install, :roles => :app do
          run "cd #{release_path} && bundle install"
        end

    Here's my result:

        ** [domainname.com :: out] Cloning into '/var/rails/appname/shared/cached-copy'...
        ** [domainname.com :: err] Permission denied, please try again.
        ** [domainname.com :: err] Permission denied, please try again.
        ** [domainname.com :: err] Permission denied (publickey,gssapi-with-mic,password).
        ** [domainname.com :: err] fatal: The remote end hung up unexpectedly

    I'm able to ssh without a password, so I'm not sure about that publickey error. By the way, if I run cap deploy:update without set :deploy_via, :remote_cache, here's my result:

        ** [domainname.com :: out] Cloning into '/var/rails/appname/releases/20120326204237'...
        ** [domainname.com :: err] Permission denied, please try again.
        ** [domainname.com :: err] Permission denied, please try again.
        ** [domainname.com :: err] Permission denied (publickey,gssapi-with-mic,password).
        ** [domainname.com :: err] fatal: The remote end hung up unexpectedly
        command finished

    Thanks a lot for your help with this.
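
    Those messages are git on the deploy server trying to authenticate back to the repository host with no key available in that session: your key works when you ssh in yourself, but the root session Capistrano opens doesn't carry it. A common fix, as a sketch, is SSH agent forwarding so the remote git clone reuses your local key (this assumes your key is loaded in a local ssh-agent):

        # in deploy.rb
        ssh_options[:forward_agent] = true

    A quick sanity check from the workstation is ssh -A root@domainname.com 'ssh-add -l'; if that lists your key, forwarding reaches the server. Alternatively, generate a key on the server itself and add its public half to the git host.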

    Read the article

  • Replace Linux Boot-Drive | ext3 to btrfs

    - by bardiir
    I've got a headless server running Debian Linux:

        Linux vault 3.2.0-3-686-pae #1 SMP Mon Jul 23 03:50:34 UTC 2012 i686 GNU/Linux

    The root filesystem is on an ext3 partition on the main hard drive, and my data lives on multiple hard drives bundled into a storage pool running btrfs:

        UUID=072a7fce-bfea-46fa-923f-4fb0827ae428 /     ext3  errors=remount-ro            0 1
        UUID=b50965f1-a2e1-443f-876f-578b5f93cbf1 none  swap  sw                           0 0
        UUID=881e3ad9-31c4-4296-ae60-eae6c98ea45f none  swap  sw                           0 0
        UUID=30d8ae34-e2f0-44b4-bbcc-22d761a128f6 /data btrfs defaults,compress,autodefrag 0 0

    What I'd like to do is move / into the btrfs pool as well. The ideal solution would make the system bootable from any disk alike, so that if the main drive fails I just need to swap another one into the main slot and it will boot like the main one did. My main problem is that every step needs to leave me with a bootable system that accepts ssh logins over the network: this server is 100% headless, so there is no way to boot it from a live CD or anything like that, and I'd like to be extra sure everything works out fine :)

    How would I best go about this? Can anybody point me to guides, or sketch out these tasks? Anything I forgot to think about?

    - Copy the root data into the btrfs pool, adjust mount points, ...
    - Adjust GRUB to boot from the btrfs pool UUID or the local device where GRUB is installed
    - Sync GRUB to all hard drives so every drive is equally bootable (is this even possible without destroying the btrfs partitions on the drives, or would I need to disconnect the drives, install GRUB on them and then reconnect them with a slightly smaller partition?)
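
    A rough sketch of the copy-and-repoint part, assuming you create a dedicated subvolume for the root inside the existing pool and that the Debian grub-pc on this system has btrfs support; the old ext3 root is left untouched as a fallback:

        # create a subvolume for the new root and copy the system into it
        btrfs subvolume create /data/rootfs
        rsync -aAXv --exclude={"/data/*","/proc/*","/sys/*","/dev/*","/run/*","/tmp/*"} / /data/rootfs/

        # inside /data/rootfs/etc/fstab, point / at the pool, booting into the subvolume
        # (UUID is the existing /data pool UUID from the fstab above):
        # UUID=30d8ae34-e2f0-44b4-bbcc-22d761a128f6 / btrfs defaults,subvol=rootfs,compress 0 0

        # make every member disk bootable and regenerate the config and initramfs
        grub-install /dev/sda
        grub-install /dev/sdb     # repeat for each pool member
        update-initramfs -u
        update-grub

    Whether GRUB can boot directly from a multi-device btrfs pool depends on the GRUB version, so testing on a spare disk first (or keeping a small non-pool /boot on each drive) is worth the extra step before touching the only root you can reach over ssh.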

    Read the article

  • Accessing a ClearCase view drive from a virtual machine is slow

    - by PermanentGuest
    I have a Windows XP virtual machine running under a Windows XP host.

    On the host: ClearCase 7.1.1.2 is installed and I have a dynamic view mapped to a drive letter. The view contains a VOB/directory structure where my application DLLs from the nightly build and the config files are stored. When I run my application on the host, using the DLLs and config files from the VOB, everything runs smoothly. Now I want to move this setup to a virtual machine.

    On the guest: I run the guest with VMware Player. I don't want to install ClearCase on it, because I don't want to expose this machine to the network; the guest's network setting is 'host-only'. I have mapped the host's ClearCase view drive as a shared folder, and I'm able to access that drive from the virtual machine. The application runs, but accessing the ClearCase drive from the virtual machine is very slow - I can feel it even in Windows Explorer. Because of this, my application takes several seconds to start in the virtual machine, while on the host it comes up quickly.

    My question is: is there any way to speed up the performance? I have managed to copy some of the DLLs that don't change frequently into the virtual machine, which helps, but there are still a lot of DLLs that have to come from the ClearCase drive because they change frequently.

    VMware Player version: 3.0.1 build-227600. Both guest and host are Windows XP Service Pack 3. Host ClearCase version: 7.1.1.2.

    Read the article

  • Synchronize two directories on linux pc

    - by Gab
    I need a distributed filesystem (or a synchronization tool) that can keep a directory synchronized across 4 PCs. My requirements are:

    - offline access (the data must be available offline on each PC)
    - preserve execution rights: some files are marked executable on a Linux partition, and this flag should be replicated
    - an efficient sync strategy: some of my files are 20GB and are changed quite often, but only in very small parts (VirtualBox images), so delta transmission is welcome
    - efficient handling of space: no per-file history, and files shouldn't be copied to temp directories "just in case you break it"
    - deletions of files must be propagated
    - modifications can happen on any of the 4 PCs and should be propagated when the other PCs are connected

    Other details of my situation: the sync is over a LAN, the total amount of data to be synced is around 180GB in some tens of thousands of files, and changes are small but can happen in big files. At the moment I'm only interested in a Linux solution, and conflicts either don't happen or are resolved with "last one wins".

    I haven't found any good solution yet. I've been trying:

    - unison: the only one working at the moment, but during the hashing phase it hangs my PC for some minutes, with the disk light steady on
    - SparkleShare: doesn't handle large files nicely, and keeps a history of all your changes that grows indefinitely; they promise this will be fixed in upcoming releases, but at the moment it doesn't fit my needs
    - ownCloud: keeps a history of each file I change
    - Coda? (help! I couldn't set it up correctly!)
    - git-annex assistant: turns all your files into symlinks and marks the original file read-only ("just in case you make a mistake while you modify it"!); before you edit a file you have to issue a special command, "git-annex unlock", which creates a local copy of the file, and you have to remember to lock it again if you want it synchronized

    What should I try next?
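
    Since unison is the one that already works, a small profile sketch that may soften the hashing stalls (host and path names are placeholders; fastcheck makes unison trust timestamps instead of re-hashing unchanged files, and maxthreads limits concurrent transfers):

        # ~/.unison/sync.prf
        root = /home/gab/data
        root = ssh://pc2//home/gab/data
        fastcheck = true
        prefer = newer
        times = true
        maxthreads = 1

    Run it with "unison sync -batch" from cron or a shell alias. Unison also transfers only the changed parts of large files using an rsync-style algorithm, which helps with the 20GB VirtualBox images.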

    Read the article

  • Why can't I get out of display mirror mode?

    - by Roy Smith
    I've been running Ubuntu (10.04.1 LTS, 64-bit) for a while and just replaced my hardware with a faster machine with an ATI Radeon HD 5700 video card and twin 1920x1080 displays. I downloaded the latest driver (ati-driver-installer-10-9-x86.x86_64.run) from the ATI website and installed it. I've gone through a few rounds of playing with /etc/X11/xorg.conf and can't get things right: at the moment the displays are in mirroring mode, and I can't figure out how to get out of it.

    If I run Monitor Preferences, there's a "Same image in all monitors" checkbox. If I uncheck it, the little preview window switches to show two monitors, but when I click Apply it asks me to log out and log back in again, and when I do, I'm right back in mirrored mode.

    What's really weird is that I'm currently running a copy of xorg.conf from a coworker's machine. He has identical hardware, and his display works fine. So I'm inclined to think something else is going on besides the conf file. Any ideas what might be wrong?
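
    With the fglrx driver, the GNOME Monitor Preferences applet usually loses out to the driver's own configuration, so it is worth making the change through aticonfig (or the amdcccle GUI) instead. A sketch, assuming these flags are still present in the 10.9 driver (check aticonfig --help):

        sudo aticonfig --initial=dual-head --screen-layout=right
        # or, for one large desktop spanning both heads:
        sudo aticonfig --desktop-setup=horizontal
        # then restart X (log out and back in)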

    Read the article

  • Backup Picasa 'people' tags data

    - by pelms
    OK, so I've spent a fair amount of time putting names to faces in Picasa 3.5, but in a few days (hopefully) my copy of Windows 7 should arrive and I'll need to reinstall Windows. Does anyone know what I need to back up so that I don't have to re-enter all those name tags? N.B. I'm on Windows 7 RC, so I know I don't have to do a clean reinstall, but I would prefer to.

    Outcome: I clean-installed Windows 7 and downloaded and installed Picasa. Unfortunately, the download link on the UK Picasa homepage still pointed to Picasa 3.0 (rather than 3.5), which doesn't have face recognition. This scanned my photo folders and overwrote the picasa.ini files, along with the people information :-( Fortunately I'd backed up the photos before installing Windows 7, so after uninstalling Picasa 3.0 (along with its database), restoring the photos from backup and installing Picasa 3.5, I finally got my face names back.

    Extra: Google has now posted advice on how to migrate to Windows 7 and keep your Picasa database, meaning it will not need to rescan your photos and will retain all information about them, including name tags. They cover both an upgrade and a clean install of Windows 7. Basically you need to back up:

        C:\Users\%username%\AppData\Local\Google\Picasa2
        C:\Users\%username%\AppData\Local\Google\Picasa2Albums
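
    A one-liner sketch for grabbing both folders before the reinstall (the destination drive letter is a placeholder):

        robocopy "C:\Users\%username%\AppData\Local\Google\Picasa2" "E:\PicasaBackup\Picasa2" /E
        robocopy "C:\Users\%username%\AppData\Local\Google\Picasa2Albums" "E:\PicasaBackup\Picasa2Albums" /E

    Restoring them to the same locations after installing the same Picasa version, before its first run, should keep it from rescanning and overwriting the tag data.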

    Read the article

  • Does migrating 2 domain controllers between 2 datacentres require both virtual machines to be shut down at the same time?

    - by Imagineer
    I was attempting to migrate two virtual machines that are domain controllers between two datacentres running ESX 3.5 and ESX 4.1. I was advised to shut down both domain controllers at the same time during the migration, to avoid USN rollback and other replication issues. These are the steps I was planning to perform:

    1. Shut down both DCs.
    2. Copy both VMs' files across to the new datacentre using Veeam FastSCP (connecting to both vCenters by IP address instead of hostname).
    3. Power them up at the new datacentre.
    4. Configure the network interface/DNS/DHCP for both DCs in the new datacentre.

    I chose Veeam FastSCP rather than VMware Standalone Converter because it copies rather than converts. Someone also suggested a backup-and-restore application like Veeam Backup and Replication. It sounds like a simple job, but after shutting down both DCs, the transfer rate using FastSCP was very slow, registering only 1KB/s as opposed to the normal 1MB/s (or more). When that transfer attempt failed, I tried to cold-clone both DCs, which resulted in both ESX hosts becoming disconnected. I tried troubleshooting by referring to this: VMware KB - Diagnosing an ESX Server that is Disconnected or Not Responding in VirtualCenter.

    It seems that DNS being down was the cause of all the unusual behaviour: the moment I powered the DCs up via the VMware console command, the ESX hosts were able to connect to vCenter again. How can I avoid such a pitfall again? Am I doing this correctly? Any help would be greatly appreciated. Thank you.
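
    Whichever copy method ends up being used, it is worth confirming replication health before the move and again once both DCs are back up at the new site. A minimal check sketch, run on each DC:

        repadmin /replsummary
        repadmin /showrepl
        dcdiag /v > c:\dcdiag-after-move.txt

    Clean output from these before shutdown, and again after power-up, is a reasonable indication that the simultaneous-shutdown approach avoided USN rollback.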

    Read the article

  • Configuring IE to resolve DNS at the proxy rather than locally.

    - by dankilman
    With the intention of tunnelling web traffic through an SSH connection, I have done the following: I've manually configured a PAC file in IE7 in the LAN Settings dialog, and I've verified that traffic is routed through my SSH tunnel, which is set up for SOCKS5 dynamic port forwarding. However, I see that IE7 always tries to resolve the hostname locally first.

    What I'm looking for is the ability to have the DNS name resolved at the proxy rather than locally by the browser. There's a setting in Firefox that enables remote DNS resolution, and Safari does it automatically; I've verified correct operation in those two browsers. It would be nice if I could get IE to work as well.

    (This is for reference so you understand where the question originates: I actually found this question via Google, but with no answers available, and since it is exactly my question I figured I should copy it here, because I don't think I could describe it any better.)
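
    For reference, a minimal PAC file of the kind described, assuming the dynamic forward listens on local port 8080. As far as I know, IE's SOCKS support is SOCKS4-style through WinInet and it will keep resolving names locally regardless, which is exactly the behaviour Firefox's network.proxy.socks_remote_dns setting and Safari avoid:

        function FindProxyForURL(url, host) {
            // send everything to the local SSH dynamic forward
            return "SOCKS 127.0.0.1:8080";
        }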

    Read the article

  • Messed up partitions... system will not boot!

    - by someguy
    I did a really dumb thing. cfdisk threw an error at me saying "FATAL ERROR: Bad primary partition 3: Partition ends in the final partial cylinder", so I installed Partition Table Doctor to see if I could fix the problem. When the program started up, it told me there were problems with my partitions and asked if I wanted them fixed (I cannot remember the exact message, but I believe it had something to do with the cylinder boundaries), so blindly, without thinking of the consequences, I let it.

    Now my system will not boot. I tried booting from the Windows 7 installation CD; I went to install a fresh copy, but it said "No drives were found". I then opened diskpart. According to diskpart, there is only one partition, containing one volume, assigned the letter "C", and the file system is RAW. Before, I had four partitions! Is there any way I can fix this? I have important data that I do not want to lose.

    Later on, I tried fdisk with the -l option, which lists the partition table(s), and this is what I got:

        Ignoring extra extended partition 4
        Disk /dev/sda: 250.1 GB, 250059350016 bytes
        255 heads, 64 sectors/track, 30401 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x163df116

           Device Boot    Start       End      Blocks   Id  System
        /dev/sda1              6        18      102400    7  HPFS/NTFS
        Partition 1 does not end on cylinder boundary.
        /dev/sda2             18      7851    62918572+   7  HPFS/NTFS
        /dev/sda3          13073     30402   139196416    f  W95 Ext'd (LBA)
        /dev/sda3          13073     30402   139196416    f  W95 Ext'd (LBA)
        /dev/sda3          13073     30403   139203193    7  HPFS/NTFS

    I don't know if this will help, but it's extra information at least. Also, this is how I had my partitions:

        40MB   (unallocated)
        100MB  (System Reserved)
        60GB   (Windows, C:)
        40GB   (was reserved for a secondary OS)
        ~132GB (Home, E:)
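
    Before writing anything else to the disk, a scan with TestDisk is the usual next step: it can search for the lost NTFS boundaries and write a corrected partition table, and it can also copy files off a partition it finds without restoring the table at all. A sketch from a Linux live CD/USB (the tool is menu-driven, so these are just the entry points):

        # on a Debian/Ubuntu live session
        sudo apt-get install testdisk
        sudo testdisk /dev/sda
        # in the menus: Analyse -> Quick Search (then Deeper Search),
        # review the partitions it finds, and only then choose Write

        # photorec (ships with testdisk) can carve files out as a last resort
        sudo photorec /dev/sda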

    Read the article

  • Need to restore emails that Outlook deleted from the mail server

    - by Mugen
    I wanted to use POP for my Yahoo Mail account, so I went into the Yahoo Mail settings and enabled the POP feature. I installed Microsoft Outlook 2007, added the settings for my Yahoo account and performed a sync. I successfully received all the emails from my Yahoo account onto my PC. But when I logged into Yahoo Mail online, to my horror, all the mails had been deleted. It seems someone at Microsoft, for some reason, decided to leave the "Leave a copy of messages on the server" setting unchecked by default.

    This is a very old account and I need all the mails to stay on the mail server as well, so I can access them anytime and for future purposes. Is there any way I can restore these emails on the server? Any ideas how I can get it back to the way it was? I have been googling this for quite some time now but I'm not able to find anything. Any helpful suggestions are welcome. Thanks.

    Read the article

  • Caching/preloading files on Linux into RAM

    - by Andrioid
    I have a rather old server that has 4GB of RAM and is pretty much serving the same files all day, but it is doing so from the hard drive while 3GB of RAM sit "free". Anyone who has ever run a RAM drive can testify that it's awesome in terms of speed. The memory usage of this system usually never goes above 1GB of the 4GB, so I want to know if there is a way to use that extra memory for something good.

    Is it possible to tell the filesystem to always serve certain files out of RAM? Are there any other methods I can use to improve file-reading performance by using RAM? More specifically, I am not looking for a 'hack' here: I want filesystem calls to serve the files from RAM without needing to create a RAM drive and copy the files there manually, or at least a script that does this for me. Possible applications are:

    - web servers with static files that get read a lot
    - application servers with large libraries
    - desktop computers with too much RAM

    Any ideas?

    Edit: I found this very informative: The Linux Page Cache and pdflush. As Zan pointed out, the memory isn't actually free; what I mean is that it's not being used by applications, and I want to control what should be cached in memory.
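
    One tool that does more or less exactly this is vmtouch (a third-party utility, so treat the flags below as a sketch to verify against its man page): it pulls files into the Linux page cache and can optionally lock them there, so ordinary open()/read() calls are then served from RAM with no RAM drive involved.

        # warm the cache once, e.g. from cron after boot
        vmtouch -vt /var/www/static

        # or keep the files resident (daemonizes and holds them locked in memory)
        vmtouch -dl /var/www/static

        # check how much of a directory is currently cached
        vmtouch -v /var/www/static

    A cruder alternative with the same effect, minus the locking, is simply reading the files once (cat them to /dev/null) and letting the page cache keep them as long as memory pressure allows.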

    Read the article

  • How can I move linked Word/Excel files without breaking the links under Windows 7?

    - by DOUG NEEDHAM
    I currently work under Windows XP and have multiple links between my Word and Excel files. I have to upgrade to Windows 7. When the .doc and .xls files are converted to .docm and .xlsm respectively, the links no longer work: the Word document still points back to the old .xls file rather than the new one.

    Creating new links between Word and Excel within Office 2010 doesn't seem to work either. I create the new link and switch it from "Auto" to "Manual", and everything works fine; but when I copy the files to another folder, the Word document still tries to link to the file in the previous folder rather than the new one. This always worked under Windows XP.

    I've been using linked Word/Excel documents for 10+ years and have never really had a problem. I'm very careful to keep the Word and Excel filenames the same when moving the files to a new folder. The process has always been to 1) move the files, 2) update the links, 3) rename the files, and 4) update the links again. It's my understanding that under Windows XP links between Word and Excel are relative, but under Windows 7 (and Office 2010?) those same links become fixed.
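
    If the links do come across as absolute paths, one workaround sketch is a small Word VBA macro that rewrites each LINK field's source path after a move (the old and new folder names below are placeholders):

        Sub RepointExcelLinks()
            Dim fld As Field
            For Each fld In ActiveDocument.Fields
                If fld.Type = wdFieldLink Then
                    ' swap the old folder for the new one in the linked source path
                    fld.LinkFormat.SourceFullName = _
                        Replace(fld.LinkFormat.SourceFullName, _
                                "C:\OldFolder", "D:\NewFolder")
                End If
            Next fld
            ActiveDocument.Fields.Update
        End Sub

    Run it from the VBA editor (Alt+F11) in the moved document; the same idea works in Excel against each sheet's LinkSources.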

    Read the article

  • Export-Mailbox - fails with large folders

    - by grojo
    I am trying to move messages from a rather large mailbox to an archive mailbox, but I keep running into errors. The command I am executing is:

        Export-Mailbox -Identity MAILBOX_FROM -TargetMailbox ARCHIVE -TargetFolder ARCHIVE_FOLDER -StartDate 2009-02-01 -EndDate 2009-02-28 -DeleteContent -Confirm:$false

    I can copy/move some messages, but I frequently run into "an unknown error has occurred" (status code -1056749164). I run the console as an administrative user, and all permissions are set correctly as far as I can tell. I've restricted the start and end dates in case the number of messages moved/deleted was creating the problem. Is there anything I'm missing in my setup? Corrupted messages? Over-limit message sizes?

    Update: What I've learnt so far is that folders with more than approximately 3000 messages generate errors, and that if deleted-item retention is set (default 30 days), Export-Mailbox scans all messages whether or not they were deleted in previous runs, so restricting the dates to limit the number of messages does not help. To avoid the errors, I've switched off deleted-message retention for the mailbox and moved the messages from the one large folder into multiple folders, then moved those one by one.
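
    If per-run volume is the limiting factor, a sketch that slices the export into month-sized batches (same parameters as the command above, just looped; adjust the overall range):

        $start = Get-Date "2009-01-01"
        while ($start -lt (Get-Date "2010-01-01")) {
            $end = $start.AddMonths(1).AddDays(-1)
            Export-Mailbox -Identity MAILBOX_FROM -TargetMailbox ARCHIVE `
                -TargetFolder ARCHIVE_FOLDER -StartDate $start -EndDate $end `
                -DeleteContent -Confirm:$false
            $start = $start.AddMonths(1)
        }

    Export-Mailbox also accepts a -BadItemLimit parameter, which can let a batch survive individual corrupted messages instead of aborting the whole run.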

    Read the article

  • How do I connect over SSH without being asked for the password every time? - I already followed some answers here but it doesn't work

    - by MEM
    MAC OS X Lion 10.7.3

    1) On the host, I created an authorized_keys file inside the .ssh folder: touch authorized_keys
    2) I copied my public ssh key into the host's .ssh folder: scp ~/.ssh/mykey.pub [email protected]:/home/userhost/.ssh/mykey.pub
    3) I placed its contents inside the authorized_keys file: cat mykey.pub >> authorized_keys
    4) Then I removed the mykey.pub file: rm mykey.pub
    5) On my terminal, locally, inside my ~/.ssh folder, I ran: ssh-add mykey (notice that it is without the .pub extension)
    6) I closed and reopened the terminal.

    When I first connected to this host, it was added to the known_hosts file inside ~/.ssh; I opened known_hosts in pico and the hash is there. Still, every time I connect with ssh [email protected], it requests a password! What am I missing here?

    UPDATE: I've done EVEN TWO MORE THINGS here:

    7) Set the key as the default identity - if the file doesn't exist, create it: touch ~/.ssh/config and place the following line inside: IdentityFile ~/.ssh/yourkeyname (id_rsa is normally the default key; this tells outgoing ssh connections to use your key as the default identity).
    8) Add a bash process to your ssh-agent: ssh-agent bash, then ssh-add ~/.ssh/yourkeyname

    Lisinge's answer helped, but it's not definitive: if we restart the machine, the password gets prompted again!!! How can we debug this? What can we do here? How can we check where this process is failing?

    UPDATE 2: If I use ssh -v -i <keyfile> [email protected], I get, among other things:

        OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011
        Warning: Identity file yourkeyname not accessible: No such file or directory.

    What does this message refer to? Is the identity file not accessible on the localhost, or not accessible on the remote host? Please advise.
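
    Two things worth checking, as a sketch (host and key names below are placeholders): the warning means ssh is looking on the local machine for a file literally named "yourkeyname" relative to the current directory, so the IdentityFile path in ~/.ssh/config should be written out in full; and sshd silently ignores authorized_keys when the permissions on the remote ~/.ssh are too open. On OS X Lion, adding the key to the keychain also survives reboots:

        # on the remote host: tighten permissions so sshd will read authorized_keys
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys

        # on the Mac: a full ~/.ssh/config entry ...
        #   Host myserver
        #       HostName yourserver.example.com
        #       User youruser
        #       IdentityFile ~/.ssh/mykey
        # ... and store the passphrase in the keychain so it persists across restarts
        ssh-add -K ~/.ssh/mykey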

    Read the article

  • Linux: Can't overwrite files on samba store

    - by jonescb
    I'm using CentOS 5.5 with smbclient 3.0.33-3.28-el5 (the latest version in the repo), and I can't overwrite files in my Samba store. I am not the admin of the Samba server, so there isn't anything I can do server-side, but I do have write permission to the server. I only know the server runs Windows XP or Server 2003; I'm not sure which.

    I can delete a file and then copy the new version over, but I can't overwrite it in place. Using the cp command I get this error:

        [jonescb@localhost ~]$ cp foo.txt /mnt/si_storage/foo.txt
        cp: cannot create regular file `/mnt/si_storage/foo.txt': No such file or directory

    And if I edit a file on the server using vim, I can save it once, but if I save it again I get this:

        "/mnt/si_storage/foo.txt" E212: Can't open file for writing

    This is my /etc/fstab entry for the Samba server:

        //192.168.1.2/SI_STORAGE /mnt/si_storage cifs username=myuser,password=mypass 0 0

    Edit: I can overwrite files just fine from my XP machine. The CentOS box is the only one having problems.
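
    Purely as an experiment - these options are assumptions about what the old client/server combination might be tripping over, not a known fix - it may be worth remounting with the Unix extensions and server inode numbers disabled, and moving the credentials out of fstab while testing:

        # /etc/fstab (one line)
        //192.168.1.2/SI_STORAGE /mnt/si_storage cifs credentials=/root/.smbcred,nounix,noserverino 0 0

        # then
        umount /mnt/si_storage && mount /mnt/si_storage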

    Read the article

  • Multiple Remote Desktop Connections in Windows Server 2003?

    - by Joel Bradley
    My company is transitioning all user PCs to Windows 7 64-bit in anticipation of the 2014 cutoff for Windows XP support. So far everything has been going great, except for one specific piece of software that will not run on Windows 7. The current plan is to give everyone a cheap secondary PC to run this software, but that feels like a lot for software that's not even used all the time, although it is essential. I've suggested we install virtual machines, but the company does not want to pay for the XP licences.

    I have access to a copy of Windows Server 2003 that is no longer being used, and I was wondering if it is possible to set it up as a remote desktop server. I know it can be done on a one-to-one basis, but this is a 15-person helpdesk; I'd like to be able to support multiple remote desktop sessions, each with its own login and desktop. Is this possible? Are there any alternatives for my issue? FYI, I've been told that XP Mode is only free for consumers - there are costs when it's used in a corporate environment.

    Read the article
