Search Results

Search found 49518 results on 1981 pages for 'configuration files'.

Page 763/1981 | < Previous Page | 759 760 761 762 763 764 765 766 767 768 769 770  | Next Page >

  • FDE / SSD - partition and leave some unencrypted?

    - by Web Design Hero
    Just bought a used beast of a desktop PC. The system drive is set up as RAID 0 across two Intel 510 SSDs, 128 GB each. I probably won't have too many programs beyond Office and maybe Adobe CS if I spring for it; I will be keeping big data on a regular HDD. My question is about setting up TrueCrypt with my configuration. I have not previously done full disk encryption, but I feel it's probably a good idea. I have done some speed tests using file containers on the HDD and the SSD with TrueCrypt. While there is a huge hit with the SSDs and TrueCrypt, it still outperforms the HDD on its own by a good margin, so I think I will be okay for my needs with TrueCrypt. I have seen in a few places a recommendation to partition the drive and leave some of the SSD outside TrueCrypt; does this really make a difference? If so, how much should I leave? Will there be any issue in the RAID 0 configuration? I am not really concerned about the wear-leveling issue; I'd rather lose data and be secure. But since I don't necessarily need all that space, I would like to optimize my setup for security and speed.
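    For reference, leaving space unpartitioned is normally done before encrypting, from Windows' diskpart; a sketch only (the disk number and size are assumptions, not taken from the question):

        diskpart
        DISKPART> select disk 0
        DISKPART> create partition primary size=230000
        rem ~230 GB partition on a ~256 GB RAID 0 set, leaving the
        rem remainder unallocated as spare area for the drives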

    Read the article

  • Passenger and ServerAlias not cooperating

    - by Pyzo
    I have a Ruby application that runs on a server with multiple IP addresses and multiple vhosts. Here is the configuration of the problematic virtual host:

        <VirtualHost 10.0.0.10:80>
            ServerName realname.example.com
            ServerAlias alias.example.com
            DocumentRoot /var/www/sites/example/current/public
            <Directory /var/www/sites/example/current/public>
                AllowOverride all
                Options -MultiViews
            </Directory>
            ErrorLog /var/log/httpd/example_error_log
            CustomLog /var/log/httpd/example_access_log common
            RailsEnv production
            RackEnv production
        </VirtualHost>

    When I pull up realname.example.com, the Ruby on Rails application works correctly. On the other hand, alias.example.com just gives me Not Found: /. I'm fairly certain the correct vhost is getting used, because alias.example.com produces a 404 in the correct log file. I've tried adding logging to the Passenger config and it seems to indicate that Passenger is getting the request. Note: I can't redirect alias.example.com to realname.example.com; realname is accessed through a CDN, whereas alias is accessed directly. Anyone have any ideas why this isn't working? I've been banging my head for days, and I've got a similar configuration in QA that works as expected.
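    A quick way to confirm which vhost and application answer for each name, bypassing DNS and the CDN entirely (a diagnostic sketch using the names from the question):

        curl -I -H "Host: alias.example.com" http://10.0.0.10/
        curl -I -H "Host: realname.example.com" http://10.0.0.10/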

    Read the article

  • How to tell rsync not to touch the destination directory permissions?

    - by Sorin Sbarnea
    I am using rsync to sync a directory from one machine to another, but I have encountered the following problem: the destination directory's permissions are altered.

        rsync -ahv defaults/ root@hostname:~/

    The problem is that in this case the permissions and ownership of the defaults folder are assigned to the destination folder. I want to keep the permissions for the files and subdirectories, but not for the source directory itself. Also, I do not want to remove any existing files from the destination (only update them if needed), but I think the current settings are already OK in that regard. How can I do this?
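    Since -a expands to -rlptgoD, one narrower invocation worth testing (a sketch, not a verified fix) drops the permission- and ownership-related flags so rsync never rewrites the destination directory's attributes; the trade-off is that newly created files then get umask-default permissions on the destination:

        rsync -rlthv defaults/ root@hostname:~/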

    Read the article

  • Host PC can't see filesystem on local VMware VM

    - by John
    I am running VMware Workstation 8.0 on a Windows 7 host, hosting a Windows 7 VM. It's working fine, except that the host cannot see the files on the VM while it's running. The VM runs locally on the host, not on the network. I found a page saying I simply need to share the folders on either side and the other can see them, but this isn't working. VMware Workstation has a handy tool to share folders from the host drive to the VM, but I need the opposite: I have software installed on the host (for licensing reasons) which I need to use to work on files within the VM.
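    Since both systems are Windows 7, plain SMB sharing in the other direction should work whenever the VM's network address is reachable from the host; a sketch (the VM's IP and share name are assumptions):

        rem on the host, after sharing the folder inside the VM:
        net use Z: \\192.168.100.20\WorkFiles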

    Read the article

  • Ubuntu 12.04 can't boot after installing with software RAID 1

    - by Bill
    I've been trying to install Ubuntu with software RAID on my server, and there is obviously something that I don't understand about the process. This is the guide that I followed: https://help.ubuntu.com/11.04/serverguide/advanced-installation.html I have two identical 1 TB disks in my server. I went through the initial install process and manually set up my partitions. On each disk I set up: a 100 MB partition for EFI boot (I didn't originally have this, but added it based on a forum post I found after my original install failed to boot; I ended up with EFI boot since that was what the 'guided partitioning' decided to do), a 970 GB partition for /, and a 30 GB partition for swap. I then created new RAID 1 devices combining the two partitions, one from each disk, so that each partition is mirrored, and configured their usage as stated above. After saving the configuration I said yes to booting in a degraded state. The rest of the setup went normally, with no errors of any kind. I saw GRUB being installed, and again no errors. However, after rebooting the server I get the dreaded 'Insert boot media' and nothing happens. I loaded up the recovery disk and the mdadm configuration looks correct: md0 is my EFI boot partition, md1 is my / partition using ext4, and md2 is my swap partition. Running file -s /dev/md0 doesn't indicate that GRUB is there, so I attempted to reinstall GRUB using the recovery disk. I selected the md0 disk and it appeared to install just fine. Running file -s /dev/md1 shows the error "needs journal recovery"; I'm not sure if that's related or how to fix it. Rebooting gives me the same problem: no boot media found. I've searched around the internet but can't figure out what to do next, or more importantly how to troubleshoot what exactly is going wrong. Thanks!
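    For the record, reinstalling GRUB from a recovery environment is usually done from a chroot rather than against the RAID device directly; a sketch, assuming md1 is / and md0 is the EFI partition as described (not a verified fix for this exact setup):

        e2fsck -f /dev/md1        # clears the "needs journal recovery" state
        mount /dev/md1 /mnt
        mount /dev/md0 /mnt/boot/efi
        for d in /dev /proc /sys; do mount --bind $d /mnt$d; done
        chroot /mnt grub-install /dev/sda
        chroot /mnt update-grub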

    Read the article

  • Backup software for Ubuntu - which one?

    - by Industrial
    Hi everybody, I have spent some time over the last weeks testing different backup solutions for my small home office, but still haven't found anything that works out well. We can definitely work with a non-GUI script if that's what it takes, as long as the requirements are fulfilled: Upload to Amazon S3 Europe (we get unbelievably slow upload speed to the US, so uploading 400+ GB of data will not be happening anytime this year). Incremental backups: only changed files should be uploaded, or we will have a big bill from Amazon at the end of each month. Files should not be uploaded as one big per-folder archive; this is not efficient at all, since if we change one file in a subfolder, a two-digit-GB archive would have to be re-uploaded during the next backup. Again, not good for the bill, or for the traffic overhead on our internet connection. What options are available to us? Thanks!
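    One candidate that ticks these boxes is duplicity, which does incremental backups and uploads only small delta volumes rather than one archive per folder; a sketch (the bucket name and source path are assumptions):

        export AWS_ACCESS_KEY_ID=...
        export AWS_SECRET_ACCESS_KEY=...
        duplicity --s3-use-new-style --s3-european-buckets \
            /home/office s3+http://my-office-backup-eu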

    Read the article

  • Is it possible to analyze the size of a Subversion repository?

    - by BrianH
    Is it possible to know how much disk space each project in a Subversion repository is using? I can check out a working copy of each project and look at the size each one takes up, but I don't think that encompasses the total size of the project (all revisions). I can look under the "db" directory of the repository, but none of the files in there make sense to me; I don't think it is possible to use them to figure out how much space each project occupies. I tried the svn ls --verbose command, but the size that it gives me is just the size of the actual files in the head revision; I don't think it includes all revisions. Maybe this isn't possible, but I thought I would ask. Thanks in advance!
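    One rough way to estimate per-project space across all revisions is to filter a full dump down to a single project and measure the resulting stream; a sketch (the project path is an assumption):

        svnadmin dump /path/to/repo --quiet | \
            svndumpfilter include projectA --quiet | wc -c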

    Read the article

  • Is there a tool for verifying the contents of a Zip archive against the source directory's contents?

    - by Basil
    Here's the scenario: I create a ZIP archive using some GUI package like WinZip, 7-Zip or whatever by right-clicking on a directory "somename" and selecting "Compress to archive 'somename.zip'". When the archive is complete, I open it and discover that some files don't exist in the archive (for reasons yet unknown). I want to find all the files that are missing from the archive without having to extract it to another directory, do a directory diff, and so on. So: is there a tool (GUI or command-line, standalone or built into a compressor, for Windows or Linux, I don't care) that can walk through an archive and compare its contents against a directory on the filesystem?
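    Absent a dedicated tool, a command-line sketch (Linux, assuming Info-ZIP's unzip is installed) that compares the archive's listing against the directory without extracting anything:

        unzip -Z1 somename.zip | sort > in-archive.txt
        (cd somename && find . -type f | sed 's|^\./||' | sort) > on-disk.txt
        diff on-disk.txt in-archive.txt
        # note: if the archive's entries carry a leading "somename/"
        # prefix, strip it from in-archive.txt first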

    Read the article

  • Static DHCP binding

    - by Alex
    Good time of day, SF people. I have created a manual DHCP binding entry on a Cisco router so that a client would always get the same lease. The client wants to get the same address on both of his dual-boot Linux systems. He tries to get an IP address leased and succeeds on one of the dual-boot operating systems; when he reboots into the other one, he gets a lease for a completely different address. I don't get it. The MAC addresses are the same (we checked in ifconfig), so what could be happening here? Why is the router confused? Or is it something else? Also, how can I check (on Linux) which DHCP server's IP address I got my lease from? Configuration on the Cisco:

        ip dhcp pool MANUAL_BINDING0001
           host 192.168.0.64 255.255.255.0
           hardware-address dead.beef.1337
           dns-server 192.168.8.11
           default-router 192.168.0.254
           domain-name verynicedomainigothere.cn

    PS. Is it mandatory to use the client-name configuration line?
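    On the side question of identifying the DHCP server from Linux: with dhclient, the lease file records which server answered (the path varies by distribution; a common location is assumed in this sketch):

        grep dhcp-server-identifier /var/lib/dhcp/dhclient.leases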

    Read the article

  • How do I sync the Solution Explorer with the current file in Visual Studio?

    - by thepaulpage
    When I have a code file open in Visual Studio that I am editing, I would like to keep that same file highlighted inside the Solution Explorer so that I know where I am. What I'd really like is for the Solution Explorer to follow along when I change focus to a different code file. Further explanation and example: I have a project with two files, Class1 and Class2. I open both files. The focus is on Class1. I click on the Class2 tab, thereby changing the file that I am editing to Class2. Desired behavior: the Solution Explorer highlights Class2.

    Read the article

  • Server 2008 R2 file access permissions

    - by Napster100
    I'm finding it awkward to sort out permissions for file sharing and access on my LAN. I've created an account on the server node (as a normal user) and shared a drive that has two folders at the root: one for personal file storage and the other for shared files. If I connect to the shared area from a workstation running Windows 7 and log in using the account I created on the server, I can look through directories but can't look inside some of them (which I wanted, since I changed the permissions for that to happen). My problem is that although the permissions are set for this user account to have full control of a specific folder, I can't create a folder in that area or upload files to it. Could someone explain why this is? Thanks in advance
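    Worth checking: share-level and NTFS permissions are evaluated together, and the more restrictive of the two wins, so a share created with read-only access caps full-control NTFS rights. A diagnostic sketch (the share and folder names are assumptions):

        net share UserData
        icacls D:\Shares\UserData\Personal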

    Read the article

  • Permissions issue on Fedora with separate home partition

    - by Tres
    I am running Fedora 12 and I've setup a partition separate from my root partition to keep shared files and home directories. Now, I've been having permission issues where it says the user cannot chdir into their home directory (/files/home/*). Now, I fixed this originally by chmodding / to 0755 and the home directories also to 0755. And yes, the user is the owner:group of their home directory. Now get this, I didn't change a thing, rebooted, everything still works. Great, right? I boot the server up a day later, and now same ol issue. This is a home server that wasn't on at all at any point in between the working state and non-working state. Also, nothing else was modified. Any ideas? Thanks!
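    If SELinux is enforcing, a home tree outside /home carries the wrong security context, and relabeling on boot can re-break what chmod temporarily fixed; this is only a guess consistent with the symptoms, but it is cheap to test (a sketch):

        getenforce
        # label /files/home equivalently to /home, then relabel
        semanage fcontext -a -e /home /files/home
        restorecon -R /files/home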

    Read the article

  • One subdomain is not working

    - by BFTrick
    Hello there, My main domain works just fine (www.example.com), and a subdomain set up by another developer works as well (sub1.example.com). But when I try to set up another subdomain, I go through the process and everything seems to work: the software creates the default files where the subdomain's files should go. But when I try to browse there, it doesn't work. My host uses Plesk for all of the hosting management. What do you think the problem is? I doubt it is some sort of cache issue, because I had the same problem on my phone, which I tried after the problems on the PC. Maybe for some reason Plesk needs time to set this up? I have used cPanel before and that works instantly.
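    Before blaming Plesk, it is worth confirming that the new name resolves at all; a diagnostic sketch (sub2 stands in for the new subdomain, which the question doesn't name):

        dig +short sub2.example.com
        dig +short sub1.example.com   # compare against the working one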

    Read the article

  • Can I store multiple private keys in one PuTTY PPK file?

    - by RedGrittyBrick
    I have multiple accounts on a server, say for example: red, webadmin, testuser. At the moment I have private keys for these in separate PPK files. I have shortcuts to these on the desktop, so clicking a desktop icon launches Pageant and prompts for a password. After doing this for each PPK file, I can log in and out of the server(s) multiple times during the day using various user IDs without entering passwords. So far so good. Could I streamline this further by somehow combining all these PPK files into a single PPK file? If so, how?
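    A PPK file holds a single key, but Pageant accepts several key files on one command line, so the three shortcuts can collapse into one (the paths are assumptions):

        "C:\Program Files\PuTTY\pageant.exe" red.ppk webadmin.ppk testuser.ppk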

    Read the article

  • How do I make stunnel verify a clients certificate?

    - by unixman83
    NOTE: The title is misleading. Please correct it if you know a better title. What I want to know is how to create the SSL keys / certificates needed for this. Hi. I am using stunnel to authenticate RDP (Remote Desktop), and I need to verify that a client possesses the proper credentials, so that people cannot brute-force their way into the machine. I am also using a bad (outdated) version of RDP that has security vulnerabilities, so stunnel is a must. I will pre-share the necessary .pem files between machines. What are the openssl commands I need to create the right .pem files on both the client and the server? What files need to be shared?
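    For a simple pre-shared setup, one self-signed key/certificate pair per side is enough; a sketch (the lifetimes and filenames are assumptions — each side keeps its own .pem and receives only the peer's .crt):

        # generate a key and self-signed certificate (run once per side)
        openssl req -new -x509 -days 3650 -nodes \
            -newkey rsa:2048 -keyout client.key -out client.crt
        cat client.key client.crt > client.pem
        # on the server, stunnel.conf then points at the peer's cert:
        #   verify = 3
        #   CAfile = /etc/stunnel/client.crt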

    Read the article

  • Multi-site Drupal install with sites on different ports using Apache IP-based hosting?

    - by MattB
    In the past we've used name-based virtual hosting in Apache. We recently converted our websites to SSL and had to go the IP-based route. As a result, we currently have an instance that is set up as follows: www.domain.com on port 80 and dev.domain.com on port 8080, both using the same IP. Is this scenario possible using Drupal's multi-site functionality? We find that dev.domain.com works and reads the correct "dev" database (using the dev settings), but it reads theme files from the "www" site instead, which is not what we want. Is the culprit the dev site's .htaccess file? Apache is listening on 8080 and does use the proper DB settings, just not the correct theme files. One other note: browsing dev.domain.com:8080 gives an error: "The page isn't redirecting properly". Should we just purchase a new IP address for the dev website, or would that still not help? Any advice would be appreciated. Thanks.
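    One Drupal-specific detail that may explain this: the multi-site lookup includes the port in the site directory name, so a site served on 8080 can be given its own settings (and with them its own theme configuration); a sketch of the expected layout under sites/, using names from the question:

        sites/www.domain.com/settings.php
        sites/8080.dev.domain.com/settings.php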

    Read the article

  • Restoring VMs in Hyper-V from a (somewhat) failed VSS backup...

    - by DeliriumTremens
    I attempted to do a backup of our virtualization server when moving from Hyper-V 2008 to R2. I changed the registry settings to register the Hyper-V VSS writer with Windows Server Backup, and sent the backup on its way while I went on to other things. Apparently the VSS portion didn't back up the VMs as I had hoped, and after updating Windows Server and Hyper-V to R2, I was unable to restore the VMs correctly. The backup completed, and I have all of the files from before the update backed up, but is there any way to restore the VMs from the backup? The VHDs all seem to have old modified dates, and when bringing the machines up from these VHDs they have very old settings. I have found a bunch of AVHD files (named according to GUID), but I'm not sure if I can create a VM with the old VHD and merge the snapshots into it.

    Read the article

  • Remove MySQL ibdata1 without dumping and restoring existing proper databases

    - by Halfgaar
    My MySQL server contains two databases of 100+ GB each. One was created with innodb_file_per_table and one wasn't. The one that wasn't has been dumped, ready to be reloaded. However, the ibdata1 file is still huge and I don't have enough free space. The normal advice in this situation is to dump and remove each database, stop MySQL, remove ibdata1 and the transaction logs, and then reload the databases. My specific question is: can I leave the databases that were created with innodb_file_per_table alone? Or will they be destroyed when I remove ibdata1, even though all their files are separate? I can't afford to take this database offline to dump and reload it, and because it's already properly made with separate files per table, doing so would feel pretty useless.
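    A caveat worth knowing before deleting anything: ibdata1 also holds InnoDB's shared data dictionary, so tables created with innodb_file_per_table still depend on it even though their rows live in separate files. A quick way to see which tables have their own tablespace (a sketch; the datadir path is an assumption):

        ls /var/lib/mysql/mydb/*.ibd
        # tables without a matching .ibd file live entirely inside ibdata1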

    Read the article

  • BeagleBone Black running Debian: does the device tree overlay act as an API?

    - by user3953989
    This may be more of a Linux-specific question, but... I've been reading many tutorials, and it seems that you can use JavaScript, Python, and C++ to write code for the BeagleBone Black (BBB). It looks like the way C++ interfaces with the BBB hardware is by reading/writing text files on the OS, while Python has its own library. All the C++ examples out there control the GPIO and PWM by reading/writing text files. Is this the only way to access the hardware, or is it just how Linux does drivers?
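    The file-based interface those C++ examples use is the kernel's sysfs GPIO layer, which any language (or a plain shell) can drive the same way; a sketch toggling one pin (the GPIO number is an example):

        echo 60 > /sys/class/gpio/export
        echo out > /sys/class/gpio/gpio60/direction
        echo 1 > /sys/class/gpio/gpio60/value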

    Read the article

  • Is there a way to create a copy-on-write copy of a directory?

    - by BCS
    I'm thinking of a situation where I would have something that creates a copy of a directory, tweaks a few files, and then does some processing on the result. This would be done fairly often, maybe a few dozen times a day. (The exact use case is testing patch submissions: dupe the code, patch it, build/test/report/etc.) What I'm looking for could be done by creating a new directory structure and populating it with hard links from the original. However, this only works if all the tools you use delete and recreate files rather than edit them in place. Is there a way to have the file system do copy-on-write for a file? Note: I'm aware that many filesystems use COW at a block level (all updates are done via writes to new blocks), but this is not what I want.
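    On a filesystem with reflink support, GNU cp can already produce exactly this: per-file copy-on-write clones that survive in-place edits. A sketch (assumes btrfs, where --reflink works; the paths are examples):

        cp -a --reflink=always /srv/code /srv/code-patched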

    Read the article

  • Trying to mount an NFS directory from a Mac with another user

    - by Yair
    I have a username on an Ubuntu server, let's call it user a. I want to mount a directory from that server to my Mac, on which I have another username, let's call it user b. My problem is that after I mount the directory (using the Disk Utility app), I can view files on the server but can't modify or create new files. I checked, and if I change the permissions of the server directory so that it's open to everyone (chmod 777), I can write to it. So what I need to know is: how can I specify the username and password in the NFS client when setting up the mount? That is, I want to specify that I'm trying to log in as user a to the server.
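    Background that frames the question: classic NFS (AUTH_SYS) has no username/password step at all; the server simply trusts the numeric uid the client sends, so the usual fix is to make the uids match rather than to "log in". A diagnostic sketch:

        id -u b              # on the Mac
        ssh a@server id -u   # on the server; if these differ, so will access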

    Read the article

  • How to open an EAR/WAR file in Windows 7

    - by Calm Storm
    I have a few EAR/WAR files, which are Java archives, and I would like Windows 7 to open these files the way it opens a file with the .zip extension. When I open a WAR file, the list of programs offered by "Open with" includes MS Word, Notepad, etc., but nothing about the compressed-folder view. I also tried manually specifying the location of the exe (I thought this was expand.exe), but that does not work. Does someone know if I can make this work? Or should I use WinZip or some such utility?
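    As a stopgap, WAR/EAR files are ordinary zip archives, and the JDK's jar tool can list or unpack them without any renaming; a sketch (the filename is an example):

        jar tf myapp.war
        jar xf myapp.war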

    Read the article

  • How do I delete Windows.old in Windows 7?

    - by CRK
    I used to run Windows 7 32-bit, but installed the 64-bit version because of a RAM upgrade. During the installation I got this message: "The partition you selected might contain files from a previous Windows installation. If it does, these files and folders will be moved to a folder named Windows.old. You will be able to access the information in Windows.old, but you will not be able to use your previous version of Windows." The C:\ drive now has two Windows folders: Windows (15.3 GB) and Windows.old (15.7 GB). I don't see why I need Windows.old taking up over 15 GB of space on my hard drive, so I tried to delete it. It didn't work. How can I safely delete this folder?
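    The supported route is Disk Cleanup run with administrator rights, which offers a "Previous Windows installation(s)" entry; a command-line sketch for launching it against C:

        cleanmgr /d C: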

    Read the article

  • about Linux read/write only permissions

    - by Bimal
    My question looks similar to another thread: Linux directory permissions read write but not delete. Here, I want to create a directory where I can set permissions like this: a user can create/upload any files; a user can re-upload and overwrite the files; a user cannot remove the files anymore. I am on CentOS 5.5, as a basic user only. How can I do that? Is there any third-party software that can be installed to do this? Or could I create a process that locks the permissions right after a new file is uploaded via SSH?
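    With plain mode bits this combination isn't expressible, because creating and deleting are both governed by write permission on the directory. On ext3/4 the append-only attribute comes close (root-only to set, so not available to a basic user; a sketch, with the caveat that overwriting is then governed by each file's own permissions):

        chattr +a /srv/uploads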

    Read the article

  • Finding documentation for /etc/**/*.dpkg-*

    - by intuited
    During upgrades, files with these extensions appear in /etc and its subdirectories. I gather that *.dpkg-dist contains the file that was distributed with the currently-installed version of a package, and *.dpkg-new contains the version from the package being installed; however, I'd like to see the docs to be sure that I'm getting it right. There are also occasionally other similarly named files, e.g. *.dpkg-original, and I'd like to be able to read up on these too. I've checked /usr/share/doc/dpkg for documentation on this and come up empty; there's no dpkg-doc package; Google doesn't have anything except unanswered questions. Can someone point me to the documentation for this aspect of Debian package management?
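    Two places that do cover these suffixes are the dpkg man page (for the files dpkg itself creates during conffile handling) and the Debian Policy Manual's configuration-file section; a quick way to search them locally (a sketch, assuming the debian-policy package is installed):

        man dpkg | grep -B1 -A3 'dpkg-'
        zgrep -i conffile /usr/share/doc/debian-policy/policy.txt.gz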

    Read the article
