Search Results

Search found 29203 results on 1169 pages for 'state machine workflow'.

Page 150 of 1169

  • Copy a harddrive from a failed desktop machine using a second working one. [closed]

    - by MrEyes
    Here's the scenario: I have PC-A, an old PC that runs Windows XP but now refuses to boot due to a failed motherboard (or maybe PSU). This PC has a single 80 GB IDE drive. I also have PC-B, running Windows Vista, which is working fine. I want to copy all the data off PC-A's HDD onto PC-B. To do this I have taken the HDD out of PC-A and connected it as a slave to PC-B. PC-B now boots and sees the additional drive. However, when I attempt to access/copy user folders (i.e. Documents and Settings/[username]/*) I am told that I cannot access the folders due to user permissions. I am doing this under an administrator account on PC-B. So the question is, how can I "backup" the data, preferably without making any changes to the drive contents? The reason for this is that it is possible that PC-A is failing due to a bad PSU, so I intend to replace it before writing off the machine. However, I would feel much happier if I had a backup of the data on the HDD.
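
    One approach worth sketching here: robocopy (which ships with Vista) has a backup mode, /B, that uses the administrator account's backup privilege to read files regardless of their NTFS ACLs, and it only ever reads from the source. A minimal sketch, assuming the slaved drive shows up as E: and D:\pca-backup is the target (both paths are placeholders):

        rem Run from an elevated command prompt on PC-B.
        rem /E copies all subfolders (including empty ones), /B uses backup mode,
        rem /COPY:DAT copies data, attributes and timestamps,
        rem /R:1 /W:1 keeps retries short for any unreadable files.
        robocopy "E:\Documents and Settings" "D:\pca-backup\Documents and Settings" /E /B /COPY:DAT /R:1 /W:1 /LOG:D:\pca-backup\copy.log

    Because robocopy only reads from the source in this mode, the old drive's contents stay untouched.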

    Read the article

  • What's the state of the art in image upscaling?

    - by monov
    I like to collect cool pics and use them as wallpapers or for other things. Often, artists publish only low-res versions, probably for fear of theft. Example: Gabriel Pulecio's BIRDS. Now, if I want to use that as a wallpaper, I'd have to upscale it, and obviously that would make it look blurry because of the bicubic interpolation. I realize there's no real way to get a high-res version from a low-res pic, because the information simply isn't there. That said, I'm wondering if heuristics have been developed for upscaling with less apparent loss of quality. Those would probably be optimized for specific image types: photorealistic pictures, cartoons with large flat areas, pixel art... One algorithm I'm aware of is seam carving. It works for some kinds of pics, especially ones with a plain, undetailed or uninteresting background and a subject that strongly stands out, but it's far from general-purpose. Applying it to the above pic produces this. It looks quite sharp, but the proportions are horribly distorted because the algorithm is not designed for this kind of pic. Another family is pixel-art scaling algorithms. Those are completely unfit for anything other than actual pixel art that's pixelized to begin with. For example, I tried the Scale2x Windows binary on my pic, but its output was nearly indistinguishable from nearest-neighbour scaling because the algorithm didn't detect any isolated pixely fragments to work from. Something else I tried: I enlarged the image in Photoshop with bicubic interpolation, then applied unsharp mask. The result looks pretty bad. The red blotch is actually resized reasonably well, but the dove is far from it. What I'm looking for is some app that makes a best-effort attempt at upscaling any input image while minimizing blurriness. If you know of any, I'll be thankful. Note that the subjective prettiness and sharpness of the result is what matters; the result doesn't need to be completely faithful to the original small image.
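
    For reference, the bicubic-plus-sharpening baseline described above can be reproduced from the command line with ImageMagick; this is a sketch of that baseline rather than one of the smarter heuristics being asked about, and the filter choice and sharpening amounts are just starting points to experiment with:

        # 4x upscale with a Catmull-Rom (bicubic-family) filter, then an unsharp mask
        convert input.png -filter Catrom -resize 400% -unsharp 0x1+0.5+0 output.png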

    Read the article

  • How to drop all subnets outside of the US using iptables

    - by Jim
    I want to block all subnets outside the US. I've made a script that has all of the US subnets in it. I want to disallow or DROP all but my list. Can someone give me an example of how I can start by denying everything? This is the output from -L:

        Chain INPUT (policy DROP)
        target     prot opt source      destination
        ACCEPT     all  --  anywhere    anywhere
        ACCEPT     all  --  anywhere    anywhere    state RELATED,ESTABLISHED
        ACCEPT     tcp  --  anywhere    anywhere    tcp dpt:ftp state NEW
        DROP       icmp --  anywhere    anywhere

        Chain FORWARD (policy DROP)
        target     prot opt source      destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source      destination

    And these are the rules:

        iptables --F
        iptables --policy INPUT DROP
        iptables --policy FORWARD DROP
        iptables --policy OUTPUT ACCEPT
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -p tcp -i eth0 --dport 21 -m state --state NEW -j ACCEPT
        iptables -A INPUT -p icmp -j DROP

    Just for clarity, with these rules I can still connect to port 21 without my subnet list. I want to block ALL subnets and just open those inside the US.
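
    One way to express "deny everything, then allow only my list" without thousands of individual iptables rules is an ipset set. A minimal sketch, assuming the subnet list is a plain file of CIDR blocks (us-subnets.txt is a placeholder name):

        # create a hash set sized for a large list and load the US CIDR blocks into it
        ipset create us hash:net maxelem 262144
        while read net; do ipset add us "$net"; done < us-subnets.txt

        # with the INPUT policy at DROP, accept new FTP connections only from the set
        iptables -A INPUT -p tcp --dport 21 -m set --match-set us src -m state --state NEW -j ACCEPT

    For large lists, generating a file of "add" lines and feeding it to ipset restore is much faster than adding entries one at a time.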

    Read the article

  • ssh over a tunnel in order to configure auto login

    - by Vihaan Verma
    I'm trying to copy the id_rsa.pub key to the server. The server in my case also has a virtual machine called dev which runs on the host machine. I copied the id_rsa.pub key to the host for auto login using this command:

        ssh-copy-id -i ~/.ssh/id_rsa.pub vickey@host

    which worked fine, and I can auto log in to the host. I also wanted to auto log in to the dev machine. I know I can just copy the contents of authorized_keys from the host machine to the dev machine, but I'm looking for a command-line way of doing things. Creating a tunnel seemed like the solution:

        ssh vickey@host -L 2000:dev:22 -N

    Now when I tried

        ssh-copy-id -i ~/.ssh/id_rsa.pub vickey@localhost -P 2000

    the password that worked here was my local machine's; I expected it to ask for the password of my dev machine. The above command adds the pub key to the local machine and not to the dev machine. However, this command asks me for the dev password and copies the files:

        scp -P 2000 vickey@localhost:/home/vickey/trash/vim .
        vickey@localhost's password:
        vim                          100%  111     0.1KB/s   00:00

    How do I do the same with ssh-copy-id?
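
    A hedged sketch of two variants to try, assuming the installed ssh-copy-id follows current OpenSSH usage (where, unlike scp, the port flag is lowercase -p and -o options are passed through to ssh):

        # with the tunnel from the question still running:
        ssh-copy-id -i ~/.ssh/id_rsa.pub -p 2000 vickey@localhost

        # or skip the tunnel entirely and jump through the host (OpenSSH 7.3+):
        ssh-copy-id -i ~/.ssh/id_rsa.pub -o ProxyJump=vickey@host vickey@dev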

    Read the article

  • Restoring the exact state of a linux install to a different laptop with different sized drives and other hardware

    - by user259774
    I have an IBM laptop running a Manjaro install that has already been used and settled into, with packages installed, browser profiles, etc. The drive is 60 GB, and it has a swap partition and an ext4 root partition. I need to move this install to a Toshiba computer with a 320 GB drive. How should I go about this? My inclination would be to shut down the Toshiba, boot a live Linux system, dd the whole 60 GB drive to a file, boot the Toshiba to a live system, then dd the file to its 320 GB drive. Would this work? I know that it wouldn't with Windows, but I believe this is an artificially imposed limitation from Microsoft. Is this correct, or is Linux similarly limited? If not, how could I go about this? Would Clonezilla work, or would the hardware disparities prevent it from working?
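
    For what it's worth, a minimal sketch of the dd route described above; the device name /dev/sda and the USB mount point are placeholders, so double-check them with lsblk before copying anything:

        # on the IBM, booted from a live system, with an external disk mounted at /mnt/usb
        dd if=/dev/sda of=/mnt/usb/ibm.img bs=4M status=progress

        # on the Toshiba, booted from a live system, with the same external disk attached
        dd if=/mnt/usb/ibm.img of=/dev/sda bs=4M status=progress

        # afterwards, grow the root partition into the remaining ~260 GB from the live
        # system, e.g. with gparted (or parted followed by resize2fs)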

    Read the article

  • Can I tell if crashplan has backed up a particular file in a particular state?

    - by Chris Cogdon
    I would like to be able to tell, programmatically, if CrashPlan has backed up a particular file, including the current updates to that file; i.e., that the current contents of the file are backed up. It's relatively easy to tell when CrashPlan last backed up a file: its file name appears in /usr/local/crashplan/log/backup_files.log.0, and with some accuracy I could compare the backup time with the last modification time of the file, but that method appears to be somewhat dubious.

    A couple of methods I could think of, but don't know how to carry out:

        1. Compare the current file to CrashPlan's metadata about that file. This needs knowledge of the format of CrashPlan's "cache" files as well as the hashing system used. This might be achievable through the CLI, but the CLI is just a portal into the GUI, and I need something that's scriptable.
        2. Restore the file to a temporary directory and compare it. Unfortunately, there is no CLI to do restores; the GUI is the only way.

    I'll describe what I'm trying to achieve; it would be nice to know how to do the above even if there are alternative methods for the following. I'm using CrashPlan for continuous backups of my PostgreSQL database, using WAL archives. In the current configuration, the archive command copies the files to an archive directory, which is backed up by CrashPlan. Every so often I manually confirm (or just trust) that a group of WALs is backed up, remove them from the archive directory, and occasionally do a restore through the GUI to ensure I can retrieve current and "deleted" WALs. The xlog directory is backed up too, so I have a good chance of doing a near-full restore even if a particular xlog hasn't been archived by PostgreSQL yet.

    I'd like to be able to automate this process, which necessitates either confirming the backup status and recency, or automating a restore for comparison purposes. (As a bonus, if the method is trustworthy, I could turn the archive_command from "copy to archive directory" into "confirm CrashPlan has backed up the current version" and do away with the archive directory completely.) (And, yes, I'm doing regular pg_dumpall runs in addition to the above.)
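
    As a rough illustration of the mtime-versus-log comparison dismissed as dubious above, a sketch like the following could automate it. The field positions in awk are pure assumption, since the layout of backup_files.log.0 isn't documented; the script (a hypothetical helper, not a CrashPlan tool) assumes each log line starts with a date-parsable timestamp followed later by the file path, and would need adjusting to the actual format:

        #!/bin/sh
        # usage: backed-up.sh /path/to/file
        LOG=/usr/local/crashplan/log/backup_files.log.0
        f="$1"
        # assumed: first two fields of a matching line form the backup timestamp
        last_backup=$(grep -F "$f" "$LOG" | tail -n 1 | awk '{print $1, $2}')
        [ -z "$last_backup" ] && { echo "never backed up"; exit 1; }
        backup_epoch=$(date -d "$last_backup" +%s)
        mtime_epoch=$(stat -c %Y "$f")
        if [ "$backup_epoch" -ge "$mtime_epoch" ]; then
            echo "current contents appear backed up"
        else
            echo "modified since last backup"
        fi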

    Read the article

  • Is it a good idea to have the operating system on a solid state drive?

    - by Kenji Kina
    There is something I don't quite understand. I know an SSD helps with OS load times, but I'm not sure whether all this boost is only noticeable/interesting when booting, or whether it gives an all-around considerably better experience thereafter. I am interested in having a quick and responsive environment after booting, which leads me to think that it'd be better to spend the SSD capacity on my most-used apps (and the pagefile? that's another side question) and not the OS itself. This, of course, means that I don't know just how much the OS reads/writes its files during normal usage. So, how good an idea is it to dump the whole 20 GB+ of the Windows 7 OS onto the SSD (considering the hefty price per GB of SSD capacity) if I can put up with the usual hard disk boot times? Would I be missing out on a lot if I didn't?

    Read the article

  • How to best convert a fully encrypted drive into a Virtual Machine?

    - by SiegeX
    I have a Windows XP laptop that uses GuardianEdge's Encryption Plus to fully encrypt the drive from bootup. What I would like to do is install a much larger (unencrypted) hard drive with Windows 7 on it and turn this fully encrypted drive into a virtual machine that can be run in either VirtualBox or VMware on the Windows 7 host. I've read many howtos that talk about using an imaging tool like Acronis True Image to image the drive, then passing that through VMware's vCenter Converter to turn it into a format that VMware can understand. Unfortunately this all seems to fall apart when you are dealing with a fully encrypted drive, because Acronis cannot recognize the file system and attempts to do a sector-by-sector copy of the entire hard drive. This is extremely wasteful, since the drive is 120 GB but the file system is only using 10 GB of it. Even if I were OK with an inefficient 120 GB sector-by-sector copy, I'm not sure that this would even work under VMware or VirtualBox. Unfortunately, the GuardianEdge boot-time login comes up only after the hard drive has been selected as the boot device, preventing me from decrypting the drive prior to booting an Acronis True Image CD so that it can recognize the underlying file system. I'm sure I'm not the first person to want to do this, but I am having a heck of a time finding solutions to this problem. All suggestions/answers welcome. Thanks
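
    One avenue that sidesteps imaging entirely, hedged as untested with GuardianEdge specifically: both hypervisors can point a VM at the physical disk itself, so the encrypted drive would boot inside the VM with its own pre-boot login, and decryption could then happen from inside the guest. A VirtualBox sketch, where PhysicalDrive1 is a placeholder for whichever number the old disk gets on the Windows 7 host:

        rem run from an elevated command prompt on the Windows 7 host
        VBoxManage internalcommands createrawvmdk -filename C:\vms\xp-raw.vmdk -rawdisk \\.\PhysicalDrive1

    Attaching xp-raw.vmdk to a new VM as its boot disk keeps the encrypted disk intact; once booted and decrypted in the guest, the install could be cloned to an ordinary virtual disk.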

    Read the article

  • Network speed between a VM and another machine which is not residing on the same host is 11MB/s at most

    - by Henno
    Problem: network speed between a VM and another machine which is not residing on the same host is 11 MB/s at most.

    Topology: (diagram omitted)

    Facts:

        ESXi5 version is 5.0.0.504890
        The VM has the latest VMware Tools installed
        The VM is using the E1000 network driver
        The physical box has Win Srv 2008 R2 as the OS
        CrystalDiskMark says the drive on the physical box can read/write 100 MB/s
        vCenter is another VM on the ESX host
        Both the VM and the physical box show 1 Gbps link speed
        Configuration > Networking shows vmnic0 as 1000 Full
        NTttcp is a client/server tool from Microsoft for measuring pure network throughput

    Here's what I've done so far.

    Test 1: the VM runs FileZilla FTP Server (default settings, one user account made) and the physical box runs FileZilla FTP Client (default settings). The physical box uploads a big file to the FTP server: transfer speed (as observed by Windows Task Manager on both machines) is ~11 MB/s (bad). The physical box then downloads that file from the FTP server: still ~11 MB/s (bad). Could it be a disk performance issue?

    Test 2: the physical box runs

        ntttcpr.exe -a 6 -m 6,0,VM_IP_ADDRESS

    and the VM runs

        ntttcps.exe -a 6 -m 6,0,PHY_BOX_IP_ADDRESS

    Transfer speed (as observed by Windows Task Manager on both machines): ~11 MB/s (bad). Could it be a switch performance issue?

    Test 3: the physical box runs vSphere Client; I open Summary > Storage > datastore > Browse Datastore... from the physical box and upload a file to the datastore. Transfer speed (as observed by Windows Task Manager on the physical box): ~26-36 MB/s (good). Could it be a VM-specific issue?

    Test 4: installed NTttcp on another VM on the same ESX server and measured network performance between VMs on the same ESX server. Transfer speed: ~90-120 MB/s (excellent :)

    Test 5: I have another ESX server on the same site, connecting to the same datastore and the same switch. Those two ESX servers both have 2 NICs; one NIC goes to the switch while the other goes directly to the other ESX server. I vMotioned one of the testing VMs off to the other ESX host and measured network performance between VMs on different ESX servers with NTttcp. Transfer speed: ~11 MB/s (bad).

    While I'm aware of these: ESXi 4.1 slow file transfer, ESXi 5 network performance is slow, Debian Etch and ESXi slow network speeds, VMWare ESXi slow file copy to guest; they did not help (or I must have missed something).
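
    As a cross-check on the NTttcp numbers, the same path could be measured with iperf (version 2 syntax shown; the address is a placeholder):

        rem on the physical box (server side)
        iperf -s

        rem on the VM (client side): a 30-second run with 4 parallel streams
        iperf -c PHY_BOX_IP_ADDRESS -t 30 -P 4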

    Read the article

  • Can't install MySQL 5.1 on a Windows machine because the last install left artifacts.

    - by Zombies
    After uninstalling MySQL 5.1 (64-bit version) I cannot install the win32 version! Apparently the devs felt it necessary to leave helpful artifacts behind? I have rebooted my machine, but no effect. Running this:

        C:\Users\User1>net start mysql
        The MySQL service is starting.
        The MySQL service could not be started.
        A system error has occurred.
        System error 1067 has occurred.
        The process terminated unexpectedly.

    And ran this:

        C:\Program Files (x86)\MySQL\MySQL Server 5.1\bin>mysqld --console
        100213 10:52:58 [Note] Plugin 'FEDERATED' is disabled.
        InnoDB: Error: log file .\ib_logfile0 is of different size 0 10485760 bytes
        InnoDB: than specified in the .cnf file 0 25165824 bytes!
        100213 10:52:59 [ERROR] Plugin 'InnoDB' init function returned error.
        100213 10:52:59 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        100213 10:52:59 [ERROR] Unknown/unsupported table type: INNODB
        100213 10:52:59 [ERROR] Aborting
        100213 10:52:59 [Note] mysqld: Shutdown complete

    Update: for some reason it looks like it is installing the 32-bit DB into the old 64-bit directory... will look into this (the bin directory is going into the 32-bit Program Files directory).
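
    The size-mismatch error above is the classic symptom of InnoDB redo logs left over from the previous install. A hedged sketch of the usual cleanup; the data directory path is an assumption and varies by installer, so adjust it to wherever ib_logfile0 actually lives:

        rem stop the service if it's running, move the stale redo logs aside, restart
        net stop mysql
        ren "C:\Program Files (x86)\MySQL\MySQL Server 5.1\data\ib_logfile0" ib_logfile0.bak
        ren "C:\Program Files (x86)\MySQL\MySQL Server 5.1\data\ib_logfile1" ib_logfile1.bak
        net start mysql

    Alternatively, setting innodb_log_file_size=10M in my.ini would match the leftover 10485760-byte files instead of deleting them.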

    Read the article

  • What is the state of ext3 support in Mac OS X 10.6? [closed]

    - by gzuki
    Possible Duplicate: Mount ext2/ext3 in Mac OS X Snow Leopard

    I have a 1 TB hard drive, and I want it to have one partition that can serve as an interchange between Linux (Ubuntu) and Mac (Snow Leopard). HFS+ scares me a bit, and I can't seem to get a clear picture on whether or not something like FUSE can reliably write ext3 partitions on a Mac. Any good advice on this topic? Should I just pick HFS+ or ext3 and hope for the best (or just deal with getting read-only access on one OS)?

    Read the article

  • How do I troubleshoot a program which regularly falls into a not-responding state?

    - by Dave
    Lately I've been using Visual Studio 2008, and about once a day, sometimes more, it will lock up. What advanced techniques can I use to determine what is causing the problem? I believe that it's one of the plug-ins I'm using *cough*Resharper*cough*, but I'd like to be sure. I've been losing work and I'd like to file a bug report somewhere, but I'm not seeing anything in my event logs which looks suspect to me. I'm working on a Windows XP box.
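
    One low-effort way to get hard evidence, assuming Sysinternals ProcDump is an acceptable download: it can watch a process and write a memory dump the moment its window stops responding, which yields call stacks to attach to a bug report. A sketch:

        rem wait for devenv.exe to hang, then write a full dump to c:\dumps
        procdump -ma -h devenv.exe c:\dumps\devenv-hang.dmp

    Opening the dump in WinDbg and looking at which module owns the stuck thread's stack frames would point at the offending plug-in.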

    Read the article

  • Clarification On Write-Caching Policy, Its Underlying Options And How It Applies To Hard Drives And Solid-State Drives

    - by Boris_yo
    In the last week, after doing more research on the subject, I have been wondering what I have been neglecting all those years to understand about write-caching policy, having always left it on the default setting. Write-caching policy improves write performance and consists of write-back caching and write-cache buffer flushing. This is how I understand all of the above, but correct me if I erred somewhere:

    Write-through caching is not itself part of the write-caching policy per se; it's when data is written to both the cache and the storage device, so if Windows needs that data again later, it is retrieved from the cache and not from the storage device. That means only read performance improves, since there is no waiting for the storage device to read the required data again. Because data is still written to the storage device, write performance isn't improved, and there is no risk of data loss or corruption in case of power failure or system crash; only the data in the cache gets lost. This option seems to be enabled by default and is recommended for removable devices, with no need for the user to use "Safely Remove Hardware".

    Write-back caching is similar to the above but without immediately writing data to the storage device: data is periodically released from the cache and written to the storage device when it is idle. In my opinion this option improves both read and write performance, but it carries risk if a power failure or system crash occurs: not only is data that was yet to be written to the storage device lost, but it can cause file inconsistencies or a corrupted file system. Write-back caching cannot be enabled together with write-through caching, and it is not recommended if no backup power supply is available.

    Write-cache buffer flushing, I reckon, is similar to write-back caching but enables immediate release and writing of data from the cache to the storage device right before a power outage occurs, though I don't know whether it also covers the occasional system crash. This option seems complementary to write-back caching, reducing or potentially eliminating the risk of data loss and file-system corruption.

    I have questions about the relevance of the last two options to today's modern SSDs, in order to get the best performance with less wear:

        I know that traditional hard drives come with onboard cache (I wonder what type of cache that is), but do SSDs also come with cache? Assuming they do, is this cache faster than their NAND flash and system RAM, and worth taking the risk of utilizing it by enabling write-back caching? I read somewhere that generally a storage device's cache is faster than RAM, but I want to be sure.
        Additionally, I read that write caching should be enabled, since data that is to be written later to NAND flash is kept for a while in cache, and provided the data gets modified a lot before finally being written, holding it and releasing it periodically reduces writes to the SSD, thereby reducing wear.
        Now regarding write-cache buffer flushing, I heard that SSD controllers are so fast by themselves that enabling this option is not required, because they manage the flushing. However, once again, I don't know whether SSDs have their own onboard cache and whether or not it is faster than their NAND flash and system RAM, because if it is, keeping this option enabled would make sense.

    Recently I posted a question about an issue with my Intel 330 SSD 120GB, which was the main reason to do this deeper research, suspecting the write-caching policy as the culprit of the SSD's freezing issue, on the assumption that the release of cached data is what causes the freezes. Currently I have write caching enabled and write-cache buffer flushing disabled, because I believe the SSD controller's management of write-cache flushing and Windows' write-cache buffer flushing conflict with each other. Since I want to troubleshoot in small steps to finally determine the source of the issue, I have decided to start with the write-caching policy, then move on to drivers, switching to AHCI later on, and finally disabling DIPM (device-initiated power management) through registry modification, thanks to @TomWijsman.
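
    As a side note on the "do drives have onboard cache" question: one way to inspect (and toggle) an ATA drive's own volatile write cache from the command line, assuming smartmontools is installed, is smartctl. A quick sketch (/dev/sda is a placeholder; adjust the device name for your platform):

        # query whether the drive's onboard write cache is enabled
        smartctl -g wcache /dev/sda

        # disable or enable it for testing
        smartctl -s wcache,off /dev/sda
        smartctl -s wcache,on /dev/sda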

    Read the article

  • Is it possible to prevent the win7 sleep state while using spotify?

    - by Skadlig
    Does anyone know if there is a way to prevent Windows 7 from going to sleep while using Spotify? I have read the answers in this question, but if possible I'd rather not resort to starting a third-party program like Insomnia every time I want to listen to music. So is there a setting or a registry entry buried somewhere deep in Windows that allows you to do this, either for a group like "all audio" or for specific programs?
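
    A registry-free workaround sketch, for what it's worth: Windows 7 ships the powercfg tool, so a small batch wrapper (spotify-music.bat below is hypothetical, and the Spotify path and 30-minute restore value are assumptions) could disable sleep on AC power for the listening session and restore it afterwards:

        rem spotify-music.bat: no sleep while Spotify runs, restore afterwards
        powercfg -change -standby-timeout-ac 0
        start /wait "" "%APPDATA%\Spotify\spotify.exe"
        powercfg -change -standby-timeout-ac 30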

    Read the article

  • Is it wise to use SSHDs (Solid state hybrid drives) on a server?

    - by Seb
    I have a bunch of servers with very heavy I/O that currently use SATA3/SAS drives but do suffer from I/O wait on the SATA drives, and I have just been alerted to the existence of SSHDs, which cost the same for 1 TB as the 1 TB SATA drives that we currently use. However, until Seagate shipped their first 3.5" SSHD in March, they seemed to be exclusively for netbooks/notebooks, which leads me to suspect they're not exactly built for the heavy I/O they'd be in for with my servers. So, would an SSHD give me a performance boost over my SATA3 drives in a heavy I/O environment (such as multiple very large high-speed file transfers), or is it best to stick with SATA3 and put up with the I/O wait?

    Read the article

  • How can I get write permission for the Web (Inetpub) directory on a new Win 7 machine?

    - by marcipollo
    I mirror my Web site on my laptop and am trying to move the mirror site to a new laptop. I copied the files to the Inetpub directory and can view them perfectly, but they are read-only (the check-mark is grey, not black), and I cannot change the permission. When I un-check the read-only attribute on the Inetpub directory and click "apply", it displays a dialog box stating that I need administrative permission to change the attributes (I am logged in as an administrator). When I click "continue", it pops up another dialog box saying access is denied to the attributes of the file c:\inetpub\custerr\en-us\500-100.asp. That dialog box has an "ignore" button, and if I click that, it appears to work through the directory tree setting the permissions. It leaves all of the files (leaves) set to read-write, but the directories remain read-only. I am using 64-bit Windows 7. I stopped the IIS service while doing all of this. Might it have something to do with the fact that I copied the files from a different machine in the workgroup (my old laptop)?
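
    If the cause turns out to be ownership carried over from the old machine's accounts, a common repair sketch from an elevated command prompt (the wwwroot path and the Administrators group are assumptions; adjust to the actual site folder and account):

        rem take ownership of the tree, then grant the local Administrators group full control
        takeown /F C:\inetpub\wwwroot /R /D Y
        icacls C:\inetpub\wwwroot /grant Administrators:(OI)(CI)F /T

    Incidentally, a greyed read-only box on a directory is normal in Windows; it reflects the files inside rather than the folder itself, so the remaining symptom may be purely a permissions issue.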

    Read the article

  • Packets marked INVALID in FORWARD rule

    - by Raphink
    I have a firewall that has 3 IP aliases on 1 physical interface. Packets get dropped between these 3 interfaces (either ICMP, HTTP, or anything else). We tracked it down to these packets being marked INVALID in the FORWARD chain and dropped due to this rule:

        chain FORWARD {
            policy DROP;

            # connection tracking
            mod state state INVALID LOG log-prefix 'INVALID FORWARD DROP: ';
            mod state state INVALID DROP;
            mod state state (ESTABLISHED RELATED) ACCEPT;
        }

    (That is, we see the INVALID FORWARD DROP logs in dmesg.) What could be causing this?
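
    A couple of hedged diagnostics: conntrack marks packets INVALID when it can't associate them with a tracked flow, which with several aliases on one NIC often points at asymmetric or hairpinned replies. If conntrack-tools is installed, its statistics and a temporarily relaxed TCP window check can help confirm the theory:

        # watch conntrack's own counters, including how many packets it deems invalid
        conntrack -S

        # temporarily relax strict TCP window tracking to test whether it's the cause
        sysctl -w net.netfilter.nf_conntrack_tcp_be_liberal=1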

    Read the article

  • outlook iptables configuration

    - by mediaexpert
    I have a Debian mail server, but the Outlook users are unable to download their email. I've seen a lot of posts about some kind of port-forwarding configuration and I've tried some commands, but I haven't been able to solve this problem; please help me. Below are my INPUT and FORWARD iptables chains:

        Chain INPUT (policy DROP 20 packets, 1016 bytes)
         pkts bytes target  prot opt in   out  source         destination
        60833   16M ACCEPT  tcp  --  eth0 *    0.0.0.0/0      0.0.0.0/0    tcp dpt:143 state NEW,ESTABLISHED
        18970  971K ACCEPT  tcp  --  *    *    0.0.0.0/0      0.0.0.0/0    tcp spts:1024:65535 dpt:110 state NEW,ESTABLISHED

        Chain FORWARD (policy DROP 0 packets, 0 bytes)
         pkts bytes target  prot opt in   out  source         destination
            0     0 ACCEPT  tcp  --  *    *    192.168.0.0/24 0.0.0.0/0    tcp dpt:110
            0     0 ACCEPT  all  --  *    *    0.0.0.0/0      0.0.0.0/0    state RELATED,ESTABLISHED
            0     0 ACCEPT  tcp  --  *    *    192.168.1.0/24 0.0.0.0/0    tcp dpt:110
            0     0 ACCEPT  all  --  *    *    0.0.0.0/0      0.0.0.0/0    state RELATED,ESTABLISHED
            0     0 ACCEPT  tcp  --  *    *    0.0.0.0/0      0.0.0.0/0    state NEW tcp dpt:25
            0     0 ACCEPT  tcp  --  *    *    0.0.0.0/0      0.0.0.0/0    state NEW tcp dpt:110
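
    One hedged observation turned into a test: the INPUT chain above accepts IMAP only on eth0 and POP3 only from source ports 1024-65535, and has no rules at all for the TLS variants (POP3S, IMAPS) or mail submission. Whether that matters depends on which protocols and ports the Outlook clients are configured for, and on the OUTPUT chain, which isn't shown. A sketch of broader rules to try, on the assumption the server actually offers these services:

        iptables -A INPUT -p tcp --dport 25  -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A INPUT -p tcp --dport 587 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A INPUT -p tcp --dport 995 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A INPUT -p tcp --dport 993 -m state --state NEW,ESTABLISHED -j ACCEPT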

    Read the article

  • iptables block everything except http

    - by arminb
    I'm trying to configure my iptables to block any network traffic except HTTP:

        iptables -P INPUT DROP    # set policy of INPUT to DROP
        iptables -P OUTPUT DROP   # set policy of OUTPUT to DROP
        iptables -A INPUT -p tcp --sport 80 -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A OUTPUT -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT

    The iptables output (iptables -L -v) gives me:

        Chain INPUT (policy DROP 0 packets, 0 bytes)
         pkts bytes target  prot opt in  out source    destination
            4   745 ACCEPT  tcp  --  any any anywhere  anywhere     tcp spt:http state RELATED,ESTABLISHED

        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target  prot opt in  out source    destination

        Chain OUTPUT (policy DROP 0 packets, 0 bytes)
         pkts bytes target  prot opt in  out source    destination
            2   330 ACCEPT  tcp  --  any any anywhere  anywhere     tcp dpt:http state NEW,ESTABLISHED

    When I try to wget 127.0.0.1 (yes, I do have a web server and it works fine) I get:

        --2012-11-14 16:29:01--  http://127.0.0.1/
        Connecting to 127.0.0.1:80...

    The request never finishes. What am I doing wrong? I'm setting iptables to DROP everything by default and adding rules to ACCEPT HTTP.
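
    One detail worth flagging for local tests like the wget above, as a hedged aside: traffic to 127.0.0.1 traverses the lo interface and is caught by the same DROP policies, so loopback is commonly exempted explicitly:

        iptables -A INPUT -i lo -j ACCEPT
        iptables -A OUTPUT -o lo -j ACCEPT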

    Read the article

  • Iptables rules make communication so slow

    - by mmc18
    When I send a request to an application running on a machine to which the following firewall rules are applied, it waits a long time. When I deactivate the iptables rules, it responds immediately. What makes the communication so slow?

        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
        -A INPUT -p esp -j ACCEPT
        -A INPUT -i ppp+ -j ACCEPT
        -A INPUT -p udp -m udp --dport 500 -j ACCEPT
        -A INPUT -p udp -m udp --dport 4500 -j ACCEPT
        -A INPUT -p udp -m udp --dport 1701 -j ACCEPT
        -A INPUT -i lo -j ACCEPT
        -A INPUT -i lo -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7
        -A FORWARD -i ppp+ -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
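
    A hedged way to narrow this down is to watch which rules (and the chain policies) actually match while a slow request runs; a classic culprit for waits like this is auxiliary traffic, such as DNS lookups or ident probes on port 113, falling through to the default policy and timing out rather than the application port itself being blocked:

        # watch packet/byte counters update live while reproducing the slow request
        watch -n 1 iptables -vnL

        # check the kernel log for hits on the LOG rule above
        dmesg | grep 'iptables denied:'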

    Read the article

  • Can my employer force me to backup my personal machine? [closed]

    - by Eric B
    Here's the background: approximately 1.25 years ago, the company I work for was acquired by a larger 400-person company. Before acquisition (and today still) we are all remote employees using our own personal hardware for work-related duties (coding, email, etc). We are approximately 15 employees within the larger organization.

    Some time after the acquisition, the now-owning company was slapped with a civil lawsuit. Part of this lawsuit (discovery) is requiring them to retrieve and store from us any related information. Because we were a separate company up until acquisition, there is a high probability that our personal machines might contain information about what the lawsuit alleges (email, documents, chat logs?, etc). Obviously, this depends largely on the person's job function (engineer vs. customer support vs. CEO). All employees are being required to comply.

    Since acquisition (1.25 yrs), the new company has not provided us with company laptops/desktops. We continue to use personal hardware, licenses, etc for work. Email is via POP3s and not hanging around on the mail server; it's on everyone's client. Documents are spread across personal machines.

    So, now they want us each to backup our complete personal machines. They are allowing us to create a "personal" folder where we can place personal documents. That single folder will be excluded from backup. Of course, that means total re-arrangement of documents, etc. For most of us, 99% of the data on the machine is NOT related to work.

    So, what's the consensus? Should we comply? What is their recourse if we do not?

    Read the article
