Search Results

Search found 13165 results on 527 pages for 'hyper v tools'.


  • What other ways can I load balance EC2 servers without using Elastic Load Balancing?

    - by undefined
    I have a web application that consists of a web server managed by a web hosting firm, a set of EC2 instances in Amazon's cloud, and a MySQL database (hosted on the web server). MySQL is behind a firewall and is set to allow access from localhost and from a single IP address, namely an Amazon Elastic IP attached to the EC2 instance I have been running up to now.

    The problem is that I now want to look at my scaling and load-balancing strategy for my EC2 instances. To this end I have been investigating the Elastic Load Balancing and Auto Scaling tools that Amazon provides, and I have managed to set these up fine except for one thing: connecting to the MySQL database running on my web server. I realised (thanks to answers on Server Fault) that I needed to check the firewall settings and add the IP address of the load balancer. However, an Elastic Load Balancer gives you a DNS name rather than an IP address, and in fact its IP addresses change over time, so this will not work. The company hosting the database has told me that their firewall resolves a DNS name to an IP address and stores the IP rather than the name, so this will not work either, and the only way to allow access would be to open up the SQL port to everyone!

    Is that a viable idea? Should I look at moving my database into the cloud? Is there another firewall the hosting company could use? Should I find another way of load balancing (if so, what)? Tricky one, eh? Any help appreciated!
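    One workaround worth considering (a suggestion, not something from the question) is to run your own load balancer, such as HAProxy, on a single EC2 instance that keeps a fixed Elastic IP. A minimal sketch, assuming an Ubuntu instance and two hypothetical backend web servers at 10.0.0.11 and 10.0.0.12:

        # on the load-balancer instance (the one holding the Elastic IP)
        sudo apt-get install haproxy
        # on older Ubuntu packages you may also need ENABLED=1 in /etc/default/haproxy

        # append a simple round-robin frontend/backend to haproxy.cfg
        sudo tee -a /etc/haproxy/haproxy.cfg >/dev/null <<'EOF'
        frontend http-in
            mode http
            bind *:80
            default_backend web
        backend web
            mode http
            balance roundrobin
            server web1 10.0.0.11:80 check
            server web2 10.0.0.12:80 check
        EOF

        sudo service haproxy restart

    Note that this only fixes the inbound side: if autoscaled instances talk to MySQL directly, their outbound traffic would still need a fixed source address (for example by routing it through a NAT instance holding an Elastic IP) for the hosting firm's firewall to whitelist.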

    Read the article

  • Advice on cloning disk

    - by hks
    I'm going to buy a second disk for backups, the same size as my laptop's. I want to mount it in an external casing via USB and back up the entire HDD every so often, because I want the possibility of just switching drives in case something goes wrong. I'm using Linux, and the obvious tool seems to be dd.

    The thing is, my laptop drive has a speed of around 50-70 MB/s and USB 2.0 manages about 57 MB/s, so copying my 250 GB disk should take more than an hour if I'm lucky. I can't wait that long; I want some kind of differential backup. I read one of jwz's articles in which he gives details of using rsync on a Mac, and he writes that it is possible to make the rsync'ed disk bootable.

    So my question is: how do I make an rsync'ed HDD bootable under Linux? Or are there other 'quick backup' tools for Linux that would let me just swap drives? Or should I just stick with dd?
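    For what it's worth, the usual way to get an rsync'ed disk bootable on Linux is to copy the filesystem with permissions and attributes intact and then reinstall the bootloader on the target disk. A rough sketch, assuming the backup partition is already formatted and mounted at /mnt/backup, the external disk shows up as /dev/sdb, and GRUB is the bootloader:

        # 1. copy the running system, excluding pseudo-filesystems
        sudo rsync -aAXHv --delete \
            --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} \
            / /mnt/backup/

        # 2. make the copy bootable: bind-mount the virtual filesystems and
        #    reinstall GRUB from inside a chroot (update-grub is Debian/Ubuntu style)
        for d in dev proc sys; do sudo mount --bind /$d /mnt/backup/$d; done
        sudo chroot /mnt/backup grub-install /dev/sdb
        sudo chroot /mnt/backup update-grub

    You would also need to adjust /etc/fstab (and the GRUB root) on the copy if it refers to the original disk by UUID or device name. Subsequent runs of the same rsync command only transfer changed files, which is what makes this much faster than dd.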

    Read the article

  • How can I generate filesystem images that are usable on many different virtualization systems?

    - by Mark Longair
    I have written a script that generates a root filesystem image (based on Debian lenny) suitable for User-Mode Linux. (Essentially this script creates a filesystem image, mounts it with a loop device, uses debootstrap to create a lenny install, sets up a static IP for TUN/TAP networking, adds public keys for login by SSH and installs a web application.) These filesystem images work pretty well with UML, but it would be nice to be able to generate similar images that people can use on alternative virtualization software, and I'm not familiar with those options at all. In particular, since the idea is to use this image as a standalone server for testing the web application, it's important that the networking works.

    I wonder if anyone can suggest what would be involved in customizing such root filesystem images so that they could be used with other virtualization software, such as VMware, Xen, or as an Amazon EC2 instance? Two particular concerns are:

    - If such systems don't use a raw filesystem image (e.g. they need headers with metadata or are compressed in some particular way), do there exist tools to convert between the different formats?
    - I assume that, in the filesystem, at least /etc/network/interfaces will have to be customized, but are more involved changes likely to be necessary?

    Many thanks for any suggestions...
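    On the first concern, one route (an assumption on my part, not something stated in the question) is to keep generating the raw loopback image and convert it afterwards with qemu-img, which understands most of the common container formats:

        # raw loop-mounted image -> formats expected by other hypervisors
        qemu-img convert -f raw -O vmdk  rootfs.img rootfs.vmdk    # VMware
        qemu-img convert -f raw -O qcow2 rootfs.img rootfs.qcow2   # KVM/QEMU
        qemu-img convert -f raw -O vpc   rootfs.img rootfs.vhd     # VirtualPC/Hyper-V

        # Xen paravirtualised guests and EC2 AMIs can generally be built
        # from the raw image itself, so no conversion step is needed there.

    On the second concern, besides /etc/network/interfaces you would typically also revisit the kernel/initrd and the disk device names (the UML ubd* devices versus /dev/sda or /dev/xvda elsewhere) in /etc/fstab and the bootloader configuration.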

    Read the article

  • How to prevent the command prompt from closing after execution?

    - by Sk8erPeter
    My problem is that in Windows there are command line windows that close immediately after execution. To solve this, I want the default behavior to be that the window is kept open. Normally this can be achieved with three methods that come to mind:

    - Putting a pause line after batch programs to prompt the user to press a key before exiting
    - Running these batch files or other command-line tools (even service starting/restarting with net start xy or similar) from within cmd.exe (Start - Run - cmd.exe)
    - Running these programs with cmd /k, like this: cmd /k myprogram.bat

    But there are other cases in which the user:

    - runs the program for the first time and doesn't know that it will run in the Command Prompt (Windows Command Processor), e.g. when running a shortcut from the Start menu (or from somewhere else), OR
    - finds it a little uncomfortable to run cmd.exe every time, and doesn't have the time or opportunity to rewrite these commands everywhere to put a pause after them or otherwise avoid exiting.

    I've read an article about changing the default behavior of cmd.exe when it is opened explicitly, by creating an AutoRun entry and manipulating its content in these locations:

    - HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Command Processor\AutoRun
    - HKEY_CURRENT_USER\SOFTWARE\Microsoft\Command Processor\AutoRun

    (The AutoRun items are String values.) I put cmd /d /k as the value to give it a try, but this didn't change the behavior of the cases mentioned above at all; it only changed the behavior of the command line window when opening it explicitly (Start - Run - cmd.exe). So how does this work? Can you give me any ideas to solve this problem?

    Read the article

  • Removed Old Domain Trust. Now Progress (9.1D) can't open DB File

    - by RLH
    My company has an old server, running Progress 9.1D on a Windows 2000 VM, which was used by our company ERP system (Vantage 6 by Epicor). Vantage was our primary system for a very long time. About two years ago we migrated to a larger, corporate system and cancelled our service contract with Epicor. Yesterday we removed an AD trust between the corporate domain and the old AD domain we used in the days of Vantage. After restarting the virtual server, I have been able to start the "ProService for 9.1D" Windows service, however I cannot get Vantage to start back up. When I run the application, I get the error listed below.

    Transcript:

        ** Could not connect to server for database [progress db file], errno 0. (1432)

    How can I fix this? FYI, I haven't had to work with Progress in years, and even then I wouldn't have considered myself a "novice" -- I'm even less knowledgeable than that title would suggest. Vantage had a lot of internal tools, and I recall that Epicor support managed to prevent .pf scripts from being executed. If a Progress-specific patch needed to be applied, you had to do it within the Vantage software OR they had to remote into the machine to do it. I may not be able to run a .pf script, but I do know that I can log into the console-based server application. (Yes, I can't even recall what that utility was called. It is sad.) It's been a long time and I never had to dig into Progress that much. Please help, and feel free to ask questions. If you need more info, I'll update this post.

    Read the article

  • Having an issue trying to get Gigabit speed across my network (Ubuntu Server)

    - by user94217
    I've just started looking into the network speeds at my office; the entire network is set up to be "gigabit". This includes gigabit switches, gigabit network cards and Cat 5e cabling. I'm not expecting the full speed, I just want more than ~90 Mb/s. I've been running some tests with the Linux tool iperf and checking the hardware with ethtool.

    I have 3 servers, and when doing my checks/tests I discovered that the two backup servers can talk to each other at around 450 Mb/s, but when using either one of them to connect to and test the main server, I only get 90 Mb/s, even though ethtool shows the network card running at 1000/Full. The only difference between all the servers/network cards is the "Port" which ethtool reports: on the two backup servers the Port is shown as "MII", yet on the main server it's shown as "Twisted Pair". When using ethtool -s to manually set the Port to MII on the main server, it loses all connectivity and no longer shows "Speed" or "Duplex".

    Anyway, am I doing something wrong? Is there a specific reason my main server cannot use gigabit when there appears to be no difference except the "Port"?
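    A ~90 Mb/s ceiling often points at a link that has actually negotiated 100 Mbit rather than 1000 Mbit (frequently a cable or switch-port issue), so before forcing the port type it is worth re-checking negotiation and re-measuring. A quick sketch of the checks I would run (interface names are assumptions):

        # compare what each end actually negotiated
        sudo ethtool eth0 | egrep 'Speed|Duplex|Port|Auto-negotiation'

        # restart auto-negotiation instead of forcing MII/TP by hand
        sudo ethtool -r eth0

        # re-measure throughput between the boxes
        iperf -s                      # on the main server
        iperf -c mainserver -t 30     # from one of the backup servers

    If that one NIC stubbornly negotiates only 100 Mbit, swapping the patch cable and switch port is the usual next step, since a single bad pair in Cat 5e is enough to drop the link to 100 Mbit.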

    Read the article

  • How To Completely Move Users/Program Files/Program Files (x86)/ProgramData (Folders) To Another Partition(s) On Windows 8?

    - by Enigma83
    I am attempting to move the folders Users, Program Files, Program Files (x86) and ProgramData (at the root of the C: drive) to at least two other partitions, preferably on a fresh install. I have read that there are methods for doing this post-install, but it seems like it would be a bit more tedious to do things that way. I want to move the two Program Files folders to another partition on the same HDD, and Users/ProgramData will go to yet another partition on the same HDD.

    I have done a bit of research on this and read up on approaches that involve booting into Audit Mode, using RoboCopy to copy the folders after booting from my Windows 8 USB drive, creating NTFS junctions/symbolic links, Registry edits, as well as accomplishing this automatically by creating an unattended-setup answer file which Windows Setup processes before the user is ever booted in for the first time.

    I tried this morning and now have a basic installation in which programs like Internet Explorer fail to open and certain files can't be found/opened (even if I click on them directly), Regedit being one example. Also, I can't run the Command Prompt as Administrator (or as any other user), can't activate the real Administrator account, and can't open any of the Administrative Tools (despite having added them to my Start Screen). So far I have only tried RoboCopy-ing Program Files and Program Files (x86), creating junction points for them, and editing the Registry in the relevant locations. This is what I'm left with now. I also found a blog article which describes how to do this for Windows 7.

    So, where should I go from here, and where can I find more information? How can this be done without disabling the Metro apps, which I've read will stop working if you move ProgramData? Once I have everything moved, where do I install programs to: do I point installers at C:\Program Files / C:\Program Files (x86), or at the junctioned/symbolic-linked partition/drive? I plan to test in VMware virtual machines from here on until things are working correctly, while using a baseline default install for daily tasks.

    Read the article

  • How to backup Servers to an SSH-Host with low traffic and access to versions and encryption?

    - by leto
    Hello, I've not run backups of my personal stuff for more years than I can remember, until waking up lately and realising, contrary to my prior belief: actually, I care! :) Now I have a central data server at home to which I want to attach external media, onto which I want to save backups of my most important stuff, like years of self-written scripts, database dumps, you name it. I've tinkered with rsync+ssh over the last two years and have also tried tar over ssh, but I don't yet know the simplest and most maintainable way to do it.

    Here's my workload:

    - A typical LAMP server (<5 GB of data) which I'd like to back up fully, so lots of small files, connected via 10 Mbit
    - My personal stuff (<750 GB of data) from a Mac connected via GbE
    - My passwords in an encrypted container (100 MB) from OpenBSD connected via serial PPP
    - My e-mail from the last ten years (<25 GB) as a Maildir, which I need to keep in readable format
    - Some archives (tar.*) which I need to back up only once and keep in readable format

    (Deleted my own ideas, as I'm here for suggestions.)

    What I need:

    1. Use an SSH tunnel for data transfer
    2. Be quick with lots of small files
    3. Keep revisions
    4. Be sure the data I save is not corrupted
    5. Intelligent resume functions and the ability to deal with network congestion :)
    6. Compressed and optionally encrypted storage
    7. Easy extraction of data from the backup (filesystem-like usage would be nice)

    How, and with what software, would you back up this stuff? Hints to tools that can solve only part of my problem (like encryption) are also greatly appreciated. Greets
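    One pattern that covers most of these points with plain rsync over SSH is hard-linked snapshots via --link-dest, which gives cheap revisions while unchanged files are stored only once. A minimal sketch (host names, paths and the "latest" symlink convention are my assumptions, not from the question):

        #!/bin/bash
        # Hard-linked snapshot backup over SSH (rsync --link-dest sketch).
        SRC="/home/me/"
        HOST="backup@dataserver"
        BASE="/backups/mac"
        STAMP=$(date +%Y-%m-%d)

        # Files unchanged since the previous snapshot become hard links,
        # so each dated directory looks like a full copy but costs little space.
        rsync -az --delete \
              --link-dest="../latest" \
              -e ssh \
              "$SRC" "$HOST:$BASE/$STAMP/"

        # Point "latest" at the snapshot we just made.
        ssh "$HOST" "ln -sfn $STAMP $BASE/latest"

    This covers the SSH transport, revisions, resumability (add --partial) and filesystem-like restores; for the compressed/encrypted-storage requirement you would look at tools built on the same idea, such as rdiff-backup or duplicity.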

    Read the article

  • Is it possible to use ffmpeg to trim off X seconds from the beginning of a video with an unspecified length?

    - by marcelebrate
    I need to trim just the first 1 or 2 seconds off of a series of FLV recordings of varying, unspecified lengths. I've found plenty of resources for extracting a specified duration from a video (e.g. 30-second clips), but none for continuing to the end of a video. Both of these attempts just yield a copied version of the video, sans desired trimming:

        ffmpeg -ss 2 -vcodec copy -acodec copy -i input.flv output.flv
        ffmpeg -ss 2 -t 120 -vcodec copy -acodec copy -i input.flv output.flv

    The thought on the second one was: perhaps if I specified a length beyond what was possible, it would just go to the end. No dice. I know it's not an issue with codecs or with using seconds instead of timecode, since the following worked a charm:

        ffmpeg -ss 2 -t 5 -vcodec copy -acodec copy -i input.flv output.flv

    Any other ideas? I'm open to using other (Windows-based) command line tools, however I'm strongly favoring ffmpeg since I'm already using it for thumbnail creation and am familiar with it. If it helps, my videos will all be under 2 minutes.
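    For what it's worth, in the ffmpeg builds I've used the option placement matters: input options go before -i, output options after it, and omitting -t makes the copy run to the end of the file. A hedged sketch (the flag order is the point; your file names stay the same):

        # seek 2 seconds into the input, then stream-copy through to the end
        ffmpeg -ss 2 -i input.flv -vcodec copy -acodec copy output.flv

    Note that with stream copy the cut can only land on a keyframe, so the trim point may shift slightly; re-encoding instead of copying gives a frame-accurate cut at the cost of quality and time.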

    Read the article

  • IIS no longer saving session variables

    - by John
    I'm running IIS v7 on a Windows 7 development machine. I have PHP code that saves session variables and reads them back later. This has been working on this machine for some time. For some reason the session variables now disappear immediately after being saved. Code that used to work fine on http://localhost/ suddenly does not.

    I have tested different browsers; the variables disappear regardless of browser. I have tested identical code on different servers, and the problem exists only on this development machine. I tried some code that saves a session variable, then reads it back and displays it, then shows a link to click on to read it back and display it again. What happens is the session variable DOES get written, read back and displayed OK, but when you click the link to view it again, it's gone.

    I don't recall making any changes to IIS, but I did run several malware scanners and clean-up tools. Is anyone aware of any setting in IIS that disallows session variables? Any other thoughts?

    Read the article

  • puppet execution of a python script where os.system(...) command is not working

    - by philippe
    I am trying to manage Unix users with Puppet. Puppet provides enough tools to create accounts and provision authorized_keys files, for instance, but not to set up a user's password and tell it to the user. What I have done is a Python script which generates a random password and sends it to the user by email. The problem is that it is not possible to drive the interactive passwd command from Python directly, so I have written a bash script with these commands:

        echo -ne "$password\n$password\n" | passwd $user
        passwd -e $user

    Launched manually, the script works fine and the newly created user gets its password by email. But when Puppet launches it, only the Python script gets executed, as if the os.system('/bin/bash my_bash_script') call were ignored. No error is displayed; the user gets the password email, but the passwd commands are not run.

    Is there any limitation in Puppet that prevents what I described? Or how else can I change the user account and its expiration, and send the password by email? I can provide more information, but right now I don't know which details are relevant. Many thanks!
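    A guess worth checking (my assumption, not something established in the question): commands launched through Puppet run with a minimal environment and PATH, so a nested os.system() call can silently fail for environment reasons. Using chpasswd, which is non-interactive by design, also removes the need to pipe into passwd at all. A small sketch with hypothetical arguments:

        #!/bin/bash
        # Non-interactive password set; works from cron/Puppet-style contexts
        # where passwd's prompt handling can misbehave.
        user="$1"
        password="$2"

        echo "${user}:${password}" | /usr/sbin/chpasswd
        /usr/bin/passwd -e "$user"     # expire it so the user must change it at first login

    If you keep the os.system() route, logging the call's return code and using absolute paths for /bin/bash and the script should show whether the stripped-down environment is the culprit.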

    Read the article

  • GPG - why am I encrypting with subkey instead of primary key?

    - by khedron
    When encrypting a file to send to a collaborator, I see this message:

        gpg: using subkey XXXX instead of primary key YYYY

    Why would that be? I've noticed that when they send me an encrypted file, it also appears to be encrypted to my subkey instead of my primary key. For me this doesn't appear to be a problem; gpg (1.4.x, macosx) just handles it and moves on. But for them, with their automated tool setup, this seems to be an issue, and they've requested that I be sure to use their primary key.

    I've tried to do some reading, and I have Michael Lucas's "PGP & GPG" book on order, but I'm not seeing why there's this distinction. I have read that the key used for signing and the key used for encryption would be different, but I assumed at first that this was about public vs private keys. In case it was a trust/validation issue, I went through the process of comparing fingerprints and verifying that, yes, I trust this key. While I was doing that, I noticed the primary key and the subkey had different "usage" notes:

        primary: usage: SCA
        subkey:  usage: E

    "E" seems likely to mean "Encryption", but I haven't been able to find any documentation on this. Moreover, my collaborator has been using these tools and techniques for some years now, so why would this only be a problem for me?
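    For reference, the usage letters are S (sign), C (certify), A (authenticate) and E (encrypt), so a primary key marked SCA cannot be encrypted to at all, which is why GnuPG automatically picks the encryption-capable subkey. If the collaborator's primary key really does carry the E flag, you can force it by appending an exclamation mark to the key ID (the ID below is a placeholder):

        # the trailing '!' tells gpg to use exactly this key, not a subkey
        gpg --encrypt --recipient 'YYYY!' file.txt

    Without the '!', picking the encryption subkey is expected and correct behaviour.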

    Read the article

  • When using software RAID and LVM on Linux, which IO scheduler and readahead settings are honored?

    - by andrew311
    In the case of multiple layers (physical drives -> md -> dm -> LVM), how do the schedulers, readahead settings, and other disk settings interact?

    Imagine you have several disks (/dev/sda-/dev/sdd), all part of a software RAID device (/dev/md0) created with mdadm. Each device (including the physical disks and /dev/md0) has its own setting for the IO scheduler (changed like so) and readahead (changed using blockdev). When you throw in things like dm (crypto) and LVM, you add even more layers with their own settings.

    For example, if the physical device has a readahead of 128 blocks and the RAID has a readahead of 64 blocks, which is honored when I do a read from /dev/md0? Does the md driver attempt a 64-block read which the physical device driver then translates into a read of 128 blocks? Or does the RAID readahead "pass through" to the underlying device, resulting in a 64-block read?

    The same kind of question holds for schedulers: do I have to worry about multiple layers of IO schedulers and how they interact, or does /dev/md0 effectively override the underlying schedulers?

    In my attempts to answer this question, I've dug up some interesting data on schedulers and tools which might help figure this out:

    - Linux Disk Scheduler Benchmarking (from Google)
    - blktrace, which generates traces of the I/O traffic on block devices
    - a relevant Linux kernel mailing list thread
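    As a starting point for the empirical route, each layer exposes its own readahead and (for real disks) scheduler through blockdev and sysfs, so you can at least see what is set where before tracing with blktrace. A small sketch (the LVM device name is a placeholder):

        # readahead, reported in 512-byte sectors, at each layer
        blockdev --getra /dev/sda
        blockdev --getra /dev/md0
        blockdev --getra /dev/mapper/vg0-lv0

        # schedulers: only the bottom-level disks have a real elevator;
        # md/dm devices typically report none
        cat /sys/block/sda/queue/scheduler
        cat /sys/block/md0/queue/scheduler

    As far as I understand it, the readahead that matters for sequential reads is the one on the device you actually read from (the top layer), while the scheduler that matters is the one on the physical disks, since stacked md/dm devices pass requests straight down rather than queueing them through their own elevator.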

    Read the article

  • Weblogic WLST classpath

    - by user43736
    When I run the WLST setWLSEnv.sh script to set up the environment, as follows, why can't I see the updated path when I do echo?

        [linbox2 bin]$ ./setWLSEnv.sh
        CLASSPATH=/directory/ols_wls/patch_wlss1032/profiles/default/sys_manifest_classpath/weblogic_patch.jar:
        /directory/ols_wls/patch_wls1032/profiles/default/sys_manifest_classpath/weblogic_patch.jar:
        /directory/ols_wls/patch_oepe1032/profiles/default/sys_manifest_classpath/weblogic_patch.jar:
        /directory/ols_wls/patch_ocm1031/profiles/default/sys_manifest_classpath/weblogic_patch.jar:
        /directory/ols_wls/jrockit_160_14_R27.6.5-32/lib/tools.jar:
        /directory/ols_wls/utils/config/10.3/config-launch.jar:
        /directory/ols_wls/wlserver_10.3/server/lib/weblogic_sp.jar:
        /directory/ols_wls/wlserver_10.3/server/lib/weblogic.jar:
        /directory/ols_wls/modules/features/weblogic.server.modules_10.3.2.0.jar:
        /directory/ols_wls/wlserver_10.3/server/lib/webservices.jar:
        /directory/ols_wls/modules/org.apache.ant_1.7.0/lib/ant-all.jar:
        /directory/ols_wls/modules/net.sf.antcontrib_1.0.0.0_1-0b2/lib/ant-contrib.jar:
        PATH=/directory/ols_wls/wlserver_10.3/server/bin:
        /directory/ols_wls/modules/org.apache.ant_1.7.0/bin:
        /directory/ols_wls/jrockit_160_14_R27.6.5-32/jre/bin:
        /directory/ols_wls/jrockit_160_14_R27.6.5-32/bin:
        /usr/kerberos/bin:
        /usr/local/bin:
        /bin:
        /usr/bin:
        /usr/X11R6/bin:
        /usr/java/j2sdk1.4.2_11/bin/bin:
        /home/oracle/bin:
        /directory/wls_olwcs/jdk160_14_R27.6.5-32/bin:
        /directory/ccanywhere81/bin:/directory/oracle/oracle/product/10.2.0/client_1/bin
        Your environment has been set.
        [linbox2 bin]$ export CLASSPATH
        [linbox2 bin]$ export PATH
        [linbox2 bin]$ echo $PATH
        /usr/kerberos/bin:
        /usr/local/bin:
        /bin:
        /usr/bin:
        /usr/X11R6/bin:
        /usr/java/j2sdk1.4.2_11/bin/bin:
        /home/oracle/bin:
        /directory/wls_olwcs/jdk160_14_R27.6.5-32/bin:
        /directory/ccanywhere81/bin:
        /directory/oracle/oracle/product/10.2.0/client_1/bin
        [linbox2 bin]$
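    The usual explanation for this (an educated guess, not something stated in the post) is that running ./setWLSEnv.sh executes the script in a child shell, so the CLASSPATH and PATH it sets disappear when that shell exits; the later export commands only export whatever your current shell already had. Sourcing the script keeps the variables in the current shell:

        # run the script in the current shell instead of a sub-shell
        . ./setWLSEnv.sh        # or: source ./setWLSEnv.sh
        echo $CLASSPATH
        echo $PATH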

    Read the article

  • "Windows failed to start" loop with 0xc0000225. No install discs, EasyRE/USB iso hasn't worked

    - by mvidaure
    I've been suffering from this "Windows failed to start" loop with 0xc0000225 for three days now and I still can't fix it. The major problem is that I don't have any sort of installation disc. However, I have tried EasyRE via both CD and USB, and both result in the same problem: I try to perform an 'Automated Repair' on my computer and get, in red text, "The selected partition is corrupted and could not be accessed or repaired. Please select a different drive to continue." The partition is also labeled as NO under Active.

    Since I do not have the installation discs, I made a USB with a Windows_7_Recovery_Disc ISO (as shown here http://www.sevenforums.com/tutorials/31541-windows-7-usb-dvd-download-tool.html), but it also doesn't work. I get a blue screen that says:

        RECOVERY
        Your PC needs to be repaired. The application or operating system could not be loaded because a required file is missing or contains errors...
        File: \WINDOWS\system32\winload.efi
        Error code: 0xc0000225
        You'll need to use the recovery tools on your installation media. If you don't have any installation media, contact your system administrator or PC manufacturer.

    Thanks in advance! Miguel

    Read the article

  • 2 Web Servers, 2 Domains, 1 WAN IP configuration

    - by Tillman32
    The problem: I have 2 domains, 2 web servers, and 1 WAN IP. This is on a home network, so I don't have any crazy router or anything with the right tools, at least I don't think I do. I have a D-Link DIR-655 router. I need to be able to forward each website to the right box: one server is hosting www.site1.com, and the other should be hosting www.site2.com.

    I forwarded the domain to a port that is forwarded to that box: wanip:1235, and port 1235 is then forwarded to the internal IP of site2's server. But when I go to www.site2.com I get the 404 page of www.site1.com. I need www.site2.com to go to the right server. Is this something I can do in the virtual hosts of www.site1.com, or is it something else?

    Also, if this isn't entirely possible, or if it's a big job, I can just move www.site1.com to www.site2.com's server and use virtual hosts; that is an option I would be okay with.
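    Since only one port 80 exists on the WAN side, the common trick (my suggestion, not from the question) is to send both domains to the site1 box and let its web server reverse-proxy requests for www.site2.com to the other machine, routing on the Host header. A sketch assuming Apache on Debian/Ubuntu and a hypothetical internal IP of 192.168.0.20 for the site2 server:

        sudo a2enmod proxy proxy_http

        sudo tee /etc/apache2/sites-available/site2-proxy.conf >/dev/null <<'EOF'
        <VirtualHost *:80>
            ServerName www.site2.com
            ProxyPreserveHost On
            ProxyPass        / http://192.168.0.20/
            ProxyPassReverse / http://192.168.0.20/
        </VirtualHost>
        EOF

        sudo a2ensite site2-proxy && sudo service apache2 reload

    With this in place you forward only port 80 on the router to the site1 server; www.site2.com's DNS keeps pointing at the same WAN IP.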

    Read the article

  • Problems in Table of Contents formatting

    - by ChrisW
    Two questions about captions in Word (they are related, hence the same post):

    1. Using Word 2010 (and its built-in equation editor) I have figure captions which contain equations (well, actually they represent chemical formulae, such as nitrate, for which the correct representation is NO3- with the 3 subscript and the - superscript, in the same column). However, when I generate a figure list, the equation displays as NO3- with no subscript or superscript. Word does know it's an equation, though: the Equation Tools design ribbon/tab is displayed when I click on the NO3-. I've tried changing it from Professional to Linear and similar obvious options, but still can't get it to display correctly. File showing this problem in action: http://dl.dropbox.com/u/101867759/EqtnTest.docx - note how the (chemical) equation for nitrate is rendered correctly in the caption on page 2, but not in the ToC on page 1.

    2. I have another caption where the whole figure is included in my list of figures. When I double-click on the caption in my text, the caption is highlighted (as expected), but so is the figure (this doesn't happen with any of my other figures), so I assume the figure has been 'linked' in some way to the caption text. How do I remove this link?

    Read the article

  • Specific issue on data pump API in oracle

    - by Median Hilal
    I have a client/server architecture, using an Oracle DBMS on the database server side. I need to perform a user-triggered (from the client side) backup of the database, and the best way to do this is a stored procedure on the server side which the client can call, since the client has no Oracle tools with which to perform the backup. I've searched through the available solutions and found that a stored procedure is the best approach, and that the Oracle Data Pump API is the best thing to use inside a PL/SQL stored procedure.

    I would like to ask about two issues with the API:

    1. The DETACH function for detaching the job handle: is it necessary to call it at the end of the procedure, and what happens if I don't? I read the Oracle documentation but didn't get their point; they say it doesn't terminate the job but merely indicates that the user is no longer interested in it, yet when I use DETACH at the end of my procedure the exported .dmp file disappears.

    2. Since the user-triggered (client-side) backup only needs to cover changes to the data, I used TABLE mode for the export operation. But the VERSION parameter: what should it be? I also read the documentation but couldn't determine which I need (LATEST or COMPATIBLE).

    Thanks

    Read the article

  • Windows 7: moved system partition, need to update boot partition

    - by Actorclavilis
    So, I have a decently standard Windows7/Ubuntu dual-boot setup, and (since Ubuntu is my usual operating system) I found I needed to grow my Ubuntu partition and shrink my W7 partition. Originally, my system (500G) looked like this:

    - W7 boot partition (1.5G)
    - Ubuntu (around 240G)
    - W7 (same size as Ubuntu, on an extended partition, all by itself)
    - Swap (rest of disk, around 16G)

    Now I'm no stranger to partitioning and filesystem tools, especially GParted, which I used from a Linux boot disk. After my partition editing, the partitions are laid out the same, except the Ubuntu partition is now 407G and the W7 partition is smaller to compensate. I had supposed, based on http://www.gparted.org/faq.php, that I would be able to run the W7 install disk in recovery mode and have it deal with the rearrangement, then possibly reinstall GRUB or something.

    Well, now the W7 install disk doesn't even see my W7 installation. All my files are there and the NTFS is perfectly clean, no problems there, but the install disk won't notice it. (Of course, the GRUB entry works fine, but the W7 boot partition (which I didn't change) refuses to boot it.)

    So, basically, any ideas on how to fix this? I don't especially want to rerun the entire install procedure because I'll have a bunch of programs to reinstall (never mind redoing GRUB), but I fear that might be the only option. Thanks.

    Read the article

  • Debian on HP ProLiant server hangs (disk i/o is my guess)

    - by Martin
    I installed Debian (2.6.32-5-amd64) on my recently purchased HP ProLiant MicroServer. I also added three 2 TB hard drives in ZFS. I've experienced several server freezes. Sometimes it showed "soft lockup - CPU stuck for 61s!". Today I experienced a different problem (I think) and the messages looked like this:

        [431336.200002] Call Trace:
        [431336.200002] [<ffffffff812fcc7c>] ? _write_lock+0xe/0xf
        [431336.200002] [<ffffffff810d7a86>] ? __vmalloc_node+0x99/0xe2
        :
        :

    and (on a different screen):

        [431354.222318] Node 0 DMA32 free: 2064kB min:5520kB low:69900kB high:8280kB active_anon:181648kB inactive_anon:61728kB active_file:313152kB inactive_file:832456kB unevictable: 0kB isolated(anon): 0kB isolated(file):0kB present:1922596kB mlocked:0kB dirty:72kB writeback:0kB mapped:25620kB shmem:344kB slab_reclaimable:34460kB slab_unreclaimable:31400kB kernel_stack:2288kB pagetables:7556kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
        [431354.222431] lowmem_reserve[]: 0 0 0 0
        :
        :

    Is this a hardware problem? What tools or methods can I use to find the source of the problem? I've used Debian for years but never had a problem like this.
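    Before pointing at hardware, it may be worth gathering evidence between hangs; none of this is from the original post, just the checks I would start with (device names are assumptions):

        # disk health (smartmontools)
        sudo smartctl -a /dev/sda

        # recent kernel messages after a scare
        dmesg | tail -n 100

        # per-disk load and latency while the box is busy (sysstat)
        iostat -x 5

        # memory pressure over time
        vmstat 5

    The second trace pasted above is the memory-zone dump the kernel prints when an allocation struggles, which, if ZFS is in the picture on a small MicroServer, makes memory exhaustion at least as plausible a suspect as the disks themselves.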

    Read the article

  • Windows 7 Synchronize folders BUT while some files are open

    - by Nick
    I need some way to synchronize two drives I have, and I want to do this often (once a day or so). There is a main drive that I want to clone/synchronize onto a backup hard disk. The problem is that there is an open TrueCrypt file mounted as a drive, and it's live. If you don't know what TrueCrypt is: basically you create a file on a hard disk and that file is encrypted; it's constantly open and modified live. It's also pretty large, 100 GB+.

    I will use FreeFileSync; there are many tools that can do this. My question is: is it safe to copy the encrypted file while it's open? Does the software freeze the file in one state and copy it, or will I get a corrupted file on the other drive? It was not clear to me how Windows handles that. I read something about shadow copy, and the software says that it supports this. Does anyone know something on the matter that can help me, or some software that will work OK in my scenario? Thanks

    Read the article

  • Troubleshooting DTCPing Errors

    - by JimmyP
    So I am running DTCPing between two machines on our network and am getting the following error:

        ++++++++++++++++++++++++++++++++++++++++++++++
            DTCping 1.9 Report for WEB2
        ++++++++++++++++++++++++++++++++++++++++++++++
        RPC server is ready
        ++++++++++++Validating Remote Computer Name++++++++++++
        03-03, 13:39:45.099-->Start DTC connection test
        Name Resolution: internal-->10.20.3.236-->internal.something
        03-03, 13:39:45.114-->Start RPC test (WEB2-->internal)
        Problem:fail to invoke remote RPC method
        Error(0x6BA) at dtcping.cpp @303
        -->RPC pinging exception
        -->1722(The RPC server is unavailable.)
        RPC test failed

    I have also run rpcping, where I get what I believe is the same error:

        C:\Program Files\Windows Resource Kits\Tools>rpcping -s internal
        Exception 1722 (0x000006BA)
        Number of records is: 4
        ProcessID is 5876
        System Time is: 3/3/2011 2:44:12:822
        Generating component is 8
        Status is 1722
        Detection location is 323
        Flags is 0
        NumberOfParameters is 0
        ProcessID is 5876
        System Time is: 3/3/2011 2:44:12:822
        Generating component is 8
        Status is 1237
        Detection location is 313
        Flags is 0
        NumberOfParameters is 0
        ProcessID is 5876
        System Time is: 3/3/2011 2:44:12:822
        Generating component is 8
        Status is 10060
        Detection location is 311
        Flags is 0
        NumberOfParameters is 3
        Long val: 135
        Pointer val: 0
        Pointer val: 0
        ProcessID is 5876
        System Time is: 3/3/2011 2:44:12:822
        Generating component is 8
        Status is 10060
        Detection location is 318
        Flags is 0
        NumberOfParameters is 0

    I'm pretty sure the exception number 1722 is the key, but I can't find any information about it. There may be a firewall with ports that need opening between the machines, which I am checking with our sysadmins now, but I can do a regular ping between the machines. Other than that, I am reading a lot of articles about OS services and components I know nothing about and am having trouble finding any information on. Can anyone shed any light on this? FYI, the machine is running Windows Server 2003 R2 SP2.

    Read the article

  • LDAPS being redirected to 389

    - by Ikkoras
    We're trying to perform an LDAPS bind to a server which blocks port 389 with a firewall, so all traffic must travel over 636. In our test lab we're connecting to a test LDAP server (located on the same machine) which does not have this firewall, so both ports are exposed. Running ldp.exe on the test server, we generate the trace below, which seems to suggest that it is successfully binding over 636. However, if we monitor the traffic with Wireshark, all the traffic is being sent to 389 with no attempt to even contact 636. Other tools will bind only with SSL on 636, or without SSL on 389, which seems to suggest the server is behaving correctly, but Wireshark still shows 389. On the test server we are using RawCap to capture the local loopback traffic. Any ideas?

        0x0 = ldap_unbind(ld);
        ld = ldap_sslinit("WIN-GF49504Q77T.test.com", 636, 1);
        Error 0 = ldap_set_option(hLdap, LDAP_OPT_PROTOCOL_VERSION, 3);
        Error 0 = ldap_connect(hLdap, NULL);
        Error 0 = ldap_get_option(hLdap,LDAP_OPT_SSL,(void*)&lv);
        Host supports SSL, SSL cipher strength = 128 bits
        Established connection to WIN-GF49504Q77T.test.com.
        Retrieving base DSA information...
        Getting 1 entries:
        Dn: (RootDSE)
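    As a cross-check from outside the Windows tooling (purely a suggestion, and it assumes a machine with the OpenLDAP client utilities is available), forcing an ldaps:// URL makes the client fail loudly if port 636 is not really in use, and a capture taken during the query will then show only that port. The bind DN and base below are placeholders:

        # bind strictly over LDAPS on 636 and read the RootDSE
        # (prefix with LDAPTLS_REQCERT=never if the test certificate is untrusted)
        ldapsearch -H ldaps://WIN-GF49504Q77T.test.com:636 \
                   -x -D "cn=testuser,dc=test,dc=com" -W \
                   -b "" -s base "(objectclass=*)"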

    Read the article

  • Could I have destroyed Partitioning-Scheme/Filesystem of HDDs with External Harddrive Case with builtin Raid-Controller?

    - by th3m3s
    I recently bought a Fantec QB-35US3R so I would have a nice box on my desk to make backups to. Along with the HDD bay I ordered some 4 TB HDDs to run in RAID 5, which is handled by the hardware RAID controller of the Fantec bay. The QB-35US3R arrived a few days before the hard drives, so I got impatient and had the idea of putting three old 1 TB disks in the Fantec device, just to test it...

    Long story short: I had made a backup of the most important data on these three disks before they broke. I had set the configuration scheme on the Fantec device to RAID 3. It seems the Fantec RAID controller has "somehow" destroyed the partitioning scheme or the file system, because when put into a HDD docking station the disks get recognized by the OS (Ubuntu/Linux) but are no longer mountable. I tried to recover the data from one HDD via GParted (parted), which ran for some hours without success. I stopped there, before trying other tools, because I read that the longer a hard drive runs after its partitioning is destroyed, the worse things get.

    What could the HDD bay have done to my lovely hard drives? Is there some routine a RAID controller executes when it creates a RAID system, like erasing the partition table (seems implausible to me) or writing some metadata to every hard drive in the RAID (seems more likely to me)? Is there a chance to recover the data from these HDDs, or is the change a RAID controller makes so significant that no software can help?
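    If the controller only rewrote the partition table or stamped its own RAID metadata at the start or end of each disk, the filesystems themselves may still be largely intact, and a signature scan can often find and restore them. A cautious sketch (the device name is an assumption; work on one disk at a time, ideally on an image):

        sudo apt-get install testdisk gddrescue

        # optional but safer: clone the disk first and recover from the image
        sudo ddrescue /dev/sdX disk.img disk.log

        # scan for lost partitions / filesystem signatures (read-only analysis first)
        sudo testdisk /dev/sdX

    TestDisk's "Analyse" and "Quick/Deeper Search" steps are non-destructive until you explicitly write a new partition table, so it is a reasonable first look before anything heavier.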

    Read the article

  • OSX server setup suggestions

    - by Tom
    I am looking into the possibility of setting up an OS X server for my employees, and would like some input on the best approach to meet my needs, and perhaps some suggestions if I am moving in the wrong direction. I am thinking of a Mac Mini running OS X Server, and am not sure whether my needs will be met and what possibilities are out there. I want these capabilities:

    - Groups/users managed on the server
    - Shared folders and private folders for users/groups
    - Access to activated services
    - Server-hosted software for the users (development tools and more)
    - Something similar to Windows Terminal Server
    - A virtual desktop environment (both local and over internet/VPN)
    - Accessible from both Mac and Windows

    The reason I am looking at OS X Server is that my employees almost exclusively work in an OS X environment, and I want to offer the ability to log on to the server through some kind of terminal software and have full access to their OS X work environment and software from their Mac or PC, from anywhere they might be, instead of having multiple setups and spending a lot of time installing and setting up the needed software on every client.

    This is a small business, where some people work on the local network and others from the internet, preferably through VPN. A terminal-server-style solution that is fast and easy to manage would be perfect for our needs. So if anyone has experience with a similar setup, please let me know what you did and your experiences with it.

    Read the article
