Search Results

Search found 10285 results on 412 pages for 'enterprise repository'.

  • TortoiseSVN client slows Explorer to a crawl in Windows XP running in Parallels

    - by Cory Larson
    I thought I'd make my first SuperUser question relatively simple, though it's the kind of question that may not get many responses, as I'm not directly involved with the issue.

    A colleague does his development in Windows XP running in Parallels on his Mac. We've just migrated our VSS repository to SVN, and we've gone with TortoiseSVN as our client of choice, with the AnkhSVN plugin for Visual Studio. On his XP instance, after installing TortoiseSVN, browsing through folders in Explorer is extremely slow: about 15-30 seconds before the contents of the next folder display, and it's slowest when opening My Computer. Once he reaches a folder that contains the working content of an SVN project, Explorer behaves quickly again, as expected. It seems that TortoiseSVN may be spending a lot of time searching subfolders so it can do its icon-overlay thing, but that's just a guess.

    I've used TortoiseSVN for years on both XP and Vista on far less powerful machines without any Explorer issues, so I'm attributing the slowness to it being run in a VM, though that may not be the actual issue. Has anyone encountered similar performance issues, and/or does anyone know of a fix? Keep in mind that any requests to change his configuration will need to be communicated, so my response time might be slow. Thanks everyone!

  • How do you apply proxy settings per computer instead of per user?

    - by Oliver Salzburg
    So far, I've used a user group policy object utilizing Internet Explorer Maintenance to set a proxy for the user in IE. We have now deployed the Enterprise Client (EC) starter group policy to our domain, and it changes this behavior: the EC group policy enables the policy Make proxy settings per-machine (rather than per-user), which describes itself as:

    "This policy is intended to ensure that proxy settings apply uniformly to the same computer and do not vary from user to user."

    Great! This seems to be an improvement over my previous setup.

    "If you enable this policy, users cannot set user-specific proxy settings. They must use the zones created for all users of the computer."

    What zones, and where do I configure the proxy settings for them? I assumed the policy would simply take the user settings and apply them, but that's not what's happening: now no proxy server is set at all, so my previous settings obviously no longer have any effect.

    So far, I've only come up with solutions that involve direct manipulation of the Windows registry. Which is fine, I guess, but the way the proxy is configured for users makes it appear as if there could be a higher-level approach.
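
    For what it's worth, a minimal sketch of the machine-wide registry approach, assuming the standard Internet Settings locations (ProxySettingsPerUser is the value the policy above toggles; the proxy host/port below is a placeholder):

        :: Tell Windows to use per-machine proxy settings (0 = per-machine)
        reg add "HKLM\Software\Policies\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxySettingsPerUser /t REG_DWORD /d 0 /f

        :: Machine-wide proxy server and enable flag
        reg add "HKLM\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyEnable /t REG_DWORD /d 1 /f
        reg add "HKLM\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyServer /t REG_SZ /d "proxy.example.com:8080" /f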

  • IIS6 site using integrated authentication (NTLM) fails when accessed with Win7 / IE8

    - by Ciove
    Hi, I'm having pretty similar problems as described in case 139099, but the fix there doesn't seem to work for me. Here are the details:

    Server: Windows Server 2003 R2 SP2 (standalone, not a member of a domain), IIS6 on TCP/443 (https). Anonymous access disabled; Integrated Windows authentication enabled. Local user accounts; each user account has its own virtual folder with change access, plus read access to the site root. The 'adsutil NTAuthenticationProviders "NTLM"' setting is applied to the site root and to each user account's virtual folder.

    Client: Windows 7 Enterprise, member of an AD domain, running IE8. It allows three login attempts, using [webservername]\[username] in the logon window (Windows Security), then fails. Logging on with other browsers (Chrome and Firefox) works OK.

    The Web services log shows one 401.2 and two 401.1 events. The Security event log shows two events: the first is a Failure Audit (680); the second is a Failure Audit (529) with these details:

        Logon Failure:
        Reason:                 Unknown user name or bad password
        User Name:              [username]
        Domain:                 [webservername]
        Logon Type:             3
        Logon Process:          NtLmSsp
        Authentication Package: NTLM
        Workstation Name:       [MyWorkstation]
        Caller User Name:       -
        Caller Domain:          -
        Caller Logon ID:        -
        Caller Process ID:      -
        Transited Services:     -
        Source Network Address: [999.999.999.999]
        Source Port:            20089

    Any ideas appreciated.
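
    As an aside, the metabase change mentioned above is normally applied with adsutil.vbs from the IIS AdminScripts folder; a sketch, assuming the default site ID of 1 and the default install path:

        cscript C:\Inetpub\AdminScripts\adsutil.vbs set w3svc/1/root/NTAuthenticationProviders "NTLM"
        :: verify what is currently set:
        cscript C:\Inetpub\AdminScripts\adsutil.vbs get w3svc/1/root/NTAuthenticationProviders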

  • RAID-capable 3.5" SATA Drives

    - by nroam
    I recently purchased a pair of 1TB Western Digital WD1002FBYS RE3 drives for use in an external RAID enclosure. I have found that they tend to drop out of the array after a while. Thinking it was the enclosure, I tried them in another one, but found the same issue. A bit of googling turned up http://www.tomshardware.com/forum/251076-32-raid-issues-western-digital-hard-disk which suggests that:

    "WD's "RE" (RAID Edition) HDDs support Time-Limited Error Recovery ("TLER" ): http://www.wdc.com/en/products/productcatalog.asp?language=en As a non-TLER HDD fills up with data, the error detection firmware might take too long, and the RAID controller may drop that HDD from a RAID array."

    So now I wonder: which SATA drives have firmware that is compatible with RAID arrays (especially RAID 1 and 5, not 0)? I have not been able to come up with the magic set of keywords to elicit the answer from Google, though various sites suggest that Seagate and Hitachi are in general OK. Does anyone have any generic (or even specific) guidance on how to work out whether a drive's firmware may harbour code that is potentially an issue in a RAID setting, other than stating that it must be 'enterprise' ready?
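
    One software-side check worth knowing about: smartmontools can query (and on supporting drives, set) the SCT Error Recovery Control timeouts that implement TLER. A sketch, assuming the drive shows up as /dev/sda:

        # Query the current ERC (TLER) read/write timeouts; a drive
        # without support will report the command as unavailable
        smartctl -l scterc /dev/sda

        # On supporting drives, set 7-second limits (units are tenths of a second)
        smartctl -l scterc,70,70 /dev/sda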

  • SRM 4 test fails for some VMs with "Error: A specified parameter was not correct."

    - by Setesh
    Here is my architecture. For the protected site: four vSphere Enterprise Plus hosts, each with two FC HBAs connected to the switch fabric, attached to an EMC CX4-120; one vCenter; one SRM. For the recovery site: two vSphere 4 hosts, one vCenter, one SRM, one CX4-120.

    The first CX4-120 is connected to the second CX4-120 over iSCSI using MirrorView/Asynchronous. For the time being I synchronise six LUNs on an FC DAE and two on a SATA DAE. I have allocated 30% of the synchronised LUN capacity for snapshot use, but allocated it only on my SATA II DAE. That does not seem to be a problem; my snapshots are correctly active.

    The whole installation is new (hardware and software), installed in January with the latest files available for download. My problem is strange, and random: sometimes when I run a test of my recovery plan, some VMs fail with this error:

        Error: A specified parameter was not correct.

    I don't know where to look. Any help is appreciated...

    PS: I have checked all the VMs; no floppy disk or CD attached.
    PS2: There are several VMs with RDM and OCFS2 filesystems on them.

  • Rescue a system running TFS that BSODs, into VMware ESXi

    - by 3molo
    Hi,

    After moving to new facilities, one of our old Dell servers, a PowerEdge 2650 running Windows Server 2003 R2, BSODs with 0x8e. The server runs Team Foundation Server, so we have a few guys dependent on it; no one here knows TFS, so we have no idea how difficult it would be to set up from scratch. We have the MSSQL database(s) backed up, a recent and fresh copy. We tried removing/refitting the memory modules, but with no success. The system boots into safe mode but hangs occasionally.

    I booted a Linux live CD and did a dd of both C: and D:, so I have all the data in compressed images on a VMware machine. For the guest, I created a 38 GB (actually it became 40 GB) partition to act as C: and booted a live CD. I then uncompressed the disk image of C: and dd'd it to the new C: using 'gunzip -dc c.img.gz | dd of=/dev/sda1 bs=1M'. The operation ran for about 1000 seconds and completed successfully. I assumed it would at least try to boot Windows (though most likely BSOD due to not having the correct drivers), but the VMware ESXi guest does not even seem to recognize it as a bootable disk.

    We don't have the VMware enterprise license, so cold cloning with VMware Converter is not an option. Did I do something wrong in my dd's with the images, or why would it not even try to boot? Am I wasting my time? What other approach is there? I will continue trying to remove services and drivers so that the physical machine at least works reasonably well in safe mode.

    What do you suggest?
    1. Continue trying to get the dd'd images onto the virtual disk and make it boot.
    2. Install a new Windows server, set up Team Foundation Server and restore from backup.
    3. Focus on the old problematic hardware.

    Any help appreciated
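
    On the boot question: dd'ing a partition image into /dev/sda1 copies only the filesystem, not the master boot record or partition table, which live in the first sectors of the whole disk, so the guest has nothing to boot from. A sketch of imaging the whole disk instead, assuming the source disk is /dev/sda and the virtual disk is at least as large:

        # On the physical machine: image the entire disk (MBR + partition table + partitions)
        dd if=/dev/sda bs=1M | gzip > /mnt/backup/disk.img.gz

        # In the guest's live CD: restore to the whole virtual disk, not to a partition
        gunzip -c disk.img.gz | dd of=/dev/sda bs=1M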

  • Active Directory FRS problems. 13508 error and other problems

    - by user59232
    I have three domain controllers; call them DC1, DC2 and DC3. DC3 and DC2 show event ID 13508 in their FRS logs, with no follow-up event (13509, I think) to say the error has been fixed. DC1's FRS log, no matter what you do, never shows any events besides 'FRS service stopped' and 'started'. DC1 holds the SYSVOL that needs to be replicated to the other DCs; the other DCs' SYSVOL folders are empty.

    I have tried the BurFlags method of fixing this, but I haven't had any luck. My procedure was to stop the FRS service on all DCs, set the BurFlags value on DC1 to D4 and on the other two DCs to D2, and then start FRS on DC1. The only events I then see in DC1's FRS event log are the service stopped/started messages, which leads me to believe that something is wrong with FRS on DC1: I believe there should be events 13553 and 13516 in the FRS event log after an authoritative SYSVOL restore. The other two DCs do not have anything in their SYSVOL, otherwise I would have made one of them the authoritative copy.

    DC1 is MS Server 2003 Enterprise Edition SP2, DC2 is MS Server 2003 Standard Edition SP1, and DC3 is MS Server 2003 R2 Standard Edition SP2. I did not set up this domain originally, but I am now its administrator, so I don't have a lot of background on why certain things may have been done in the past. My main goal is to fix these issues to get myself better prepared to decommission DC1 and add a DC running Server 2008 to the domain. Thanks.
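
    For reference, a sketch of the BurFlags procedure described above, assuming the standard NtFrs registry location (D4 = authoritative, on DC1 only; D2 = non-authoritative, on DC2 and DC3):

        REM On every DC, stop the service first
        net stop ntfrs

        REM On DC1 (authoritative copy):
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\NtFrs\Parameters\Backup/Restore\Process at Startup" /v BurFlags /t REG_DWORD /d 0xD4 /f

        REM On DC2 and DC3 (non-authoritative):
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\NtFrs\Parameters\Backup/Restore\Process at Startup" /v BurFlags /t REG_DWORD /d 0xD2 /f

        REM Start FRS on DC1 first, then the others, and watch for events 13553/13516
        net start ntfrs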

  • User profile service fails

    - by s.r.a
    I have Windows 7 and three drives (partitions) on my HDD. The second drive is D:, and there were already some files on it. I decided to install Windows 8.1 Enterprise, so I installed it dual-boot beside 7, on the D: drive, which as I said was not empty; when installing 8.1, I didn't format D:.

    8.1 installed successfully on D: and was working fine. One time when I was booted into 7, I thought I should separate the 8.1 folders on D: from the other, non-8.1 folders, so I created a new folder named "Windows 8.1" and cut and pasted all the 8.1 folders into it. Now my D: drive was tidy. But when I restarted the PC and selected 8.1, it didn't come up like before; instead it now shows a blue screen (not a blue screen of death!) with the time in the lower-left corner, and when I click the screen this message appears:

        The User Profile Service service failed the sign-in. User profile cannot be loaded.

    I know two things: first, the problem has to do with that cutting and pasting of the 8.1 folders; second, if I reinstall 8.1 the problem will be solved (as long as I don't do the cutting and pasting again!). Is there any simpler way to solve the issue and keep the two OSs side by side?
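
    On the hypothesis that the moved folders are the cause, one hedged diagnostic: Windows records each profile's absolute path in the ProfileList registry key, so you can check (from the 8.1 side, or by loading its hive from 7) whether ProfileImagePath still points at the old location:

        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList" /s /v ProfileImagePath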

  • What's the difference between pulling from a branch into master and pushing that branch onto master?

    - by Justin808
    In TortoiseGit, on the repository, I right-click and select Sync. At the top of the dialog there are options for Local Branch and Remote Branch.

    If the local branch is named DeveloperA and the remote branch is master and I do a Push, what happens? If the local branch is master and the remote branch is DeveloperA and I Pull, what happens? If I am on the master branch, right-click, select Merge and change the From to be my DeveloperA branch, what happens?

    If I try to push from master to the remote master and the remote has been updated, git stops and tells me to pull. It seems that if I push from DeveloperA to master it doesn't stop, it just clobbers; is that correct?

    We're having an issue using git where the remote master branch gets clobbered at times, and we are trying to figure out why. For example, there is a developer working on his DeveloperA branch. He'll pull from master to get any updates, then push to master to push out his changes. But there are times when the push lists more files in the Out Commit list than he's edited. The odd thing is he can't revert those files, as git says they are up to date and have not been modified; yet when he pushes, git pushes the files out. The problem is that if there were changes between his pull and his push, those changes get clobbered.
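
    For reference, a sketch of roughly what those TortoiseGit actions map to on the command line (branch names from the question; 'origin' assumed as the remote name). Note that a plain push refuses non-fast-forward updates by default, so a genuine clobber normally involves a force push, or a merge/pull that quietly brought in unexpected commits:

        # Push local DeveloperA onto the remote's master
        git push origin DeveloperA:master

        # Pull the remote's DeveloperA into the currently checked-out branch
        git pull origin DeveloperA

        # Merge local DeveloperA into local master
        git checkout master
        git merge DeveloperA

        # See exactly which commits/files a push would send
        git log --name-status origin/master..DeveloperA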

  • VMware Player: changing DHCP server settings

    - by Tathagata
    I have a Windows Server 2003 guest running under VMware Player on a Windows 7 box. The idea is to test Windows Deployment Services (WDS) in the virtual network. Is it possible to configure the VMware DHCP server with the WDS-related options (66, 67)? I found a few references where people were using vnetlib.exe to start and stop the DHCP server, change the subnet mask and so on, but there's no info on how to set the DHCP server options.

    DHCP config from the Virtual Network Editor: I do have Workstation, but without a license for it. In the Virtual Network Editor, the DHCP settings for the network I'm using only allow me to set the subnet mask, IP ranges and things like that, but not DHCP options.

    DHCP server on the WDS server: authorizing the DHCP server in the guest WDS server fails. VMware Player can run its own DHCP server for the virtual network without any authorization from Active Directory; can I do the same with the Windows DHCP server in the guest Windows Server? 'Can I authorize W2K8 DHCP server for private network, even when prohibited in enterprise network?' says I would have to run a third-party DHCP server... :/
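
    A hedged sketch of one route people take with the VMware-provided DHCP server: its configuration is an ISC-dhcpd-style text file, so PXE boot parameters can be added by hand to the subnet block and the DHCP service restarted. The file location, addresses and boot file below are assumptions to adapt:

        # In C:\ProgramData\VMware\vmnetdhcp.conf, inside the subnet block
        # for the virtual network in use (e.g. VMnet8):
        subnet 192.168.111.0 netmask 255.255.255.0 {
            range 192.168.111.128 192.168.111.254;
            option routers 192.168.111.2;
            # Boot server and boot file for PXE/WDS (the role options 66/67 play)
            next-server 192.168.111.10;
            filename "boot\\x86\\wdsnbp.com";
        }
        # Then restart the "VMware DHCP Service":
        #   net stop vmnetdhcp && net start vmnetdhcp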

  • How to show what will be updated by the next pull?

    - by ???
    In SVN, doing svn update will show a list of full paths with a status prefix:

        $ svn update
        M    foo/bar
        U    another/bar
        Revision 123

    I need to get this update list to do some post-processing work. After I transferred the SVN repository to Git, I can't find a way to get the same update list:

        $ git pull
        Updating 9607ca4..61584c3
        Fast-forward
         .gitignore                                         |    1 +
         uni/.gitignore                                     |    1 +
         uni/package/cooldeb/.gitignore                     |    1 +
         uni/package/cooldeb/Makefile.am                    |    2 +-
         uni/package/cooldeb/VERSION.av                     |   10 +-
         uni/package/cooldeb/cideb                          |   10 +-
         uni/package/cooldeb/cooldeb.sh                     |    2 +-
         uni/package/cooldeb/newdeb                         |   53 +++-
         ...update-and-deb-redist => update-and-deb-redist} |    5 +-
         uni/utils/2tree/{list2tree => 2tree}               |   12 +-
         uni/utils/2tree/ChangeLog                          |    4 +-
         uni/utils/2tree/Makefile.am                        |    2 +-

    I can translate the git pull status list into SVN's format:

        M .gitignore
        M uni/.gitignore
        M uni/package/cooldeb/.gitignore
        M uni/package/cooldeb/Makefile.am
        M uni/package/cooldeb/VERSION.av
        M uni/package/cooldeb/cideb
        M uni/package/cooldeb/cooldeb.sh
        M uni/package/cooldeb/newdeb
        M ...update-and-deb-redist => update-and-deb-redist}
        M uni/utils/2tree/{list2tree => 2tree}
        M uni/utils/2tree/ChangeLog
        M uni/utils/2tree/Makefile.am

    However, entries with long path names are abbreviated; for example, uni/package/cooldeb/update-and-deb-redist is abbreviated to ...update-and-deb-redist. I assume I can do this with Git directly; maybe I can configure git pull's output format? Any idea?
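
    A sketch of one way to get an svn-style, unabbreviated list: the truncation comes from the diffstat display, not from the underlying data, so ask for names and statuses explicitly. After a merge or pull, ORIG_HEAD points at the pre-pull position:

        # After the pull: changed files as a status letter plus full path
        git diff --name-status ORIG_HEAD HEAD

        # Or preview what a pull would bring, without merging yet
        # (origin/master assumed as the tracked branch):
        git fetch
        git diff --name-status HEAD origin/master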

  • Moving files and directories between two machines, via a third, preserving permissions and usernames

    - by Jarmund
    The situation is as follows:

    - Machine A has a file repository accessible via rsync.
    - Machine B needs the above-mentioned files, with all permissions and ownerships intact (including groups etc.).
    - Machine C has access to both A and B, but has a completely different set of users.

    Normally, I would just rsync everything over directly between A and B, but due to severely limited bandwidth at the moment I need something different, as rsync times out after building the list of the 430 files (49 MB uncompressed; it can be compressed down to ~7 MB).

    What I've tried so far: rsync everything from A to C, tar it, copy the tarball over, and then untar it. However, this messes up the ownership and/or the permissions. To rsync from A to C, I run this command:

        rsync --numeric-ids --password-file=/root/rsync_pwd_file -oaPvu rsync://[email protected]/portal_2/ ./portal_2/

    ...and from the looks of things, the files do end up on C with the correct ownerships/permissions/flags/everything (not 100% sure, though... are there any more switches I can throw in there? Did I miss something?). Copying the tarball over is simple enough (slow as a one-legged turtle due to the bandwidth, but it checksums out alright).

    What I'm unsure of are the flags and switches for creating and extracting the tarball, so could someone please provide the full commands for creating a tarball from /root/portal_2 on machine C (with everything intact) and extracting it into /var/ex/portal_2 on machine B? Also, are there any other approaches worth mentioning that would let me do this? I have root access to A and C, whereas I only have rsync access to B.

    PS: I'm running rsync v2.6.9 on machine B, and unfortunately I do not have the opportunity to upgrade to v3.
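
    A sketch of the tar legs under those constraints: the key is --numeric-owner on both creation and extraction, so ownership travels as raw UID/GID numbers and machine C's unrelated user names are never consulted (paths as given above; run as root):

        # On machine C: create the tarball, preserving permissions and numeric owners
        tar --numeric-owner -czpf portal_2.tar.gz -C /root/portal_2 .

        # On machine B: extract; as root, GNU tar restores ownership by default
        mkdir -p /var/ex/portal_2
        tar --numeric-owner -xzpf portal_2.tar.gz -C /var/ex/portal_2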

  • Interaction between two Clouds

    - by Snehal Masne
    I have set up Cloud-A with one [CLC+CC] machine and two [NC] machines. I have another Cloud-B with the same configuration, both built using Ubuntu Enterprise Cloud, and both working fine individually on the same LAN.

    Now, if I want to add the NC of Cloud-A to the CC of Cloud-B (in case the resources of Cloud-B are exhausted), how can I make that possible? I guess this calls for the interoperability stuff... Could you please explain what exactly happens when we ask for an instance: does the direct interaction happen between the client and the NC, or does it go through the CLC and CC?

    What I want to say is: say there are multiple cloud providers, and a user is subscribed to one of them, say Cloud-A, for IaaS. As requirements are dynamic, all the resources of Cloud-A may get exhausted. There may be another Cloud-B which can provide the services, but Cloud-A can't ask the client to go to Cloud-B. So is it possible to have some coordination between these two providers to share resources mutually, keeping the client fully unaware of what's going on in the background?

  • Why is my global security group being filtered out of my logon token?

    - by Jay Michaud
    While investigating the effects of filtered tokens on my file permissions, I noticed that one of my global security groups is being filtered in addition to the regular system-defined filtered groups. My Active Directory environment is a single-domain forest on the Windows Server 2003 functional level; I'll call the domain "mydomain.example.com". I am logged onto a Windows Server 2008 Enterprise Edition machine (not a domain controller) as a member of the "MYDOMAIN\Domain Admins" group and the "MYDOMAIN\MySecurityGroup" global security group (among others).

    When I run "whoami /groups" from an elevated command prompt, I see the full list of groups to which my account belongs, as expected. When I run "whoami /groups" from a regular, non-elevated command prompt, I see the same list of groups, but the following groups are described as "Group used for deny only":

    1. BUILTIN\Administrators
    2. MYDOMAIN\Schema Admins
    3. MYDOMAIN\Offer Remote Assistance Helpers
    4. MYDOMAIN\MySecurityGroup

    Numbers 1 through 3 above are expected based on Microsoft documentation; number 4 is not. The "MYDOMAIN\MySecurityGroup" global security group is a group that I created. It contains three non-built-in global security groups, and these security groups contain only non-built-in user accounts. (That is, I created all of the accounts and groups that are members of the "MYDOMAIN\MySecurityGroup" global security group.) There are other, similar groups of which my account is a member that are not being filtered out of my logon token, and this group is not granted any specific user rights in the security settings of this computer or in Group Policy. What would cause this one group to be filtered out of my logon token?
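
    UAC token filtering marks deny-only any group the system classifies as administrative, so one hedged thing to check is whether MySecurityGroup ends up nested, perhaps indirectly, inside such a group. A sketch using the AD command-line tools:

        rem List every group MySecurityGroup belongs to, following nesting
        dsquery group -name MySecurityGroup | dsget group -memberof -expand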

  • Re-packaging commercial software into RPM packages

    - by gac
    The situation is this: I have a small CentOS 5 "cluster" (currently 7 machines, but with potential for more) which runs a commercially available software package that's distributed essentially in tarball format (it's actually a zip file with a mixture of Windows/Linux binaries and an installation shell script with no potential for automation). I'd like to re-package this somehow into an RPM package (ideally one I can throw onto a self-hosted yum repository) in order to keep these "cluster" machines both up to date and consistent. I could do 7 manual installations, but there's scope for error. As I understand it, I'll need to accomplish the following tasks:

    - add a non-privileged user to the target system for running the daemon without unnecessary root privileges
    - package up the binary files themselves from the final installation location on a separate build machine (probably under /opt/package for sanity's sake); no source is available
    - add a firewall hole so that end-users can communicate with the "cluster" nodes
    - add a cron task which can start the daemon on @reboot

    I'm coming up with plenty of good packaging resources so far, but all are based on the traditional method (i.e. as if I were the vendor packaging up my source files), rather than re-packaging a ton of binary files from an already-installed instance of the application, which is the only option available to me. Does anyone have any good resources they can share for achieving this goal? Thanks!
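
    A minimal sketch of what a binary-repack .spec covering those four tasks might look like. Every name, path and port here is a hypothetical placeholder; the important parts are AutoReqProv: no (so rpmbuild doesn't scan the foreign binaries for dependencies) and doing the user/cron/firewall work in scriptlets:

        Name:           acme-package
        Version:        1.0
        Release:        1%{?dist}
        Summary:        Re-packaged vendor binaries (no source available)
        License:        Proprietary
        # Tarball captured from a reference install of /opt/package
        Source0:        acme-package-1.0.tar.gz
        AutoReqProv:    no

        %description
        Vendor binaries re-packaged for internal yum distribution.

        %prep
        %setup -q -n package

        %install
        mkdir -p %{buildroot}/opt/package
        cp -a . %{buildroot}/opt/package/
        mkdir -p %{buildroot}/etc/cron.d
        echo '@reboot acmesvc /opt/package/bin/daemon' > %{buildroot}/etc/cron.d/acme-package

        %pre
        # non-privileged system account for the daemon
        getent passwd acmesvc >/dev/null || useradd -r -s /sbin/nologin -d /opt/package acmesvc

        %post
        # firewall hole (port is a placeholder; iptables persistence varies by site)
        iptables -I INPUT -p tcp --dport 12345 -j ACCEPT || :

        %files
        %defattr(-,acmesvc,acmesvc,-)
        /opt/package
        %attr(0644,root,root) /etc/cron.d/acme-package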

  • Is there a way to log commands that a user runs in Windows 7?

    - by camster342
    I manage a large enterprise environment, and while we try to advise users not to, there are inevitably users that need local admin access to their machines. The problem is that some of these users like to "fiddle" and sometimes screw up their machines in "wonderful" ways.

    Is there an easy way to log what a user does on a machine, specifically in the command prompt? Maybe there are third-party tools I could use to log this information? With the Linux systems I used in past ages, you could look at a user's bash history file to see what commands they had run. While I realise that particular log could also be altered by a user wanting to cover their tracks, that is the sort of log I'm looking for.

    If there are ways I can also log what other system-configuration changes they make (not necessarily command-line based), that's also useful. I know about the event/system logs and so on, but they don't necessarily catch all the information I need to figure out how a user has buggered their machine this time.
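
    One built-in option, as a hedged sketch: Windows 7 can audit process creation, which writes event 4688 to the Security log for every executable started. It isn't a shell history (stock Windows 7 doesn't record command-line arguments in 4688), but it does record what was run, by whom, and when:

        :: Enable success auditing of process creation
        auditpol /set /subcategory:"Process Creation" /success:enable

        :: Later, pull recent 4688 events from the Security log
        wevtutil qe Security /q:"*[System[(EventID=4688)]]" /c:20 /f:text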

  • MySQL Memory Limit Windows Server 2003

    - by Matt
    I am running MySQL 5.0.51a on Windows Server 2003 Standard Edition, on an HP DL580 G4 with 3 GB installed. One of my database tables has grown to 5.3 GB, with an index file of 2.5 GB, which I believe is causing MySQL to be slow due to having to constantly load and unload the index file when updates are made to the table. The server itself seems to be performing OK, because MySQL is only using about 500 MB of memory (there are other apps running on the system, but MySQL uses the most memory).

    The table is fairly active, with new records being added all day, but no deletes, ever. The MySQL server allows up to 600 connections, but only a small number (10 or 20) would actually be writing to this table. I increased the memory limits in MySQL, but since the max connections figure is so high, I don't think I can give each connection 1 GB without risking a problem. Is there some tuning that would let just certain connections get a lot of memory?

    So I have started to look for alternatives to avert the crisis I know is coming soon. Some of the options I have:

    1. Upgrade to Server 2003 Enterprise to install 64 GB of memory. Question: would 32-bit MySQL be able to access more than 2 GB? Would that be 2 GB per thread? That would still be smaller than the index file, so it might not solve the problem completely, but it would be better than now.
    2. Upgrade to a 64-bit Server 200x and 64-bit MySQL.
    3. Switch to a 64-bit *nix server.

    If anybody has suggestions for things to do in the meantime, opinions on which way to go, or other things that I have overlooked, I would appreciate the help. Thanks
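
    One hedged thing to check in the meantime, assuming the table is MyISAM (the separate 2.5 GB index file suggests it is): the index cache is a single shared buffer, key_buffer_size, so raising it does not multiply per connection; it's the per-connection buffers that have to stay small with 600 connections allowed. A my.ini sketch, subject to the roughly 2 GB address space of a 32-bit process:

        [mysqld]
        # Shared MyISAM index cache - one buffer for the whole server,
        # not one per connection
        key_buffer_size = 1G

        # Per-connection buffers: keep modest when max_connections is high
        sort_buffer_size = 2M
        read_buffer_size = 1M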

  • How do I setup a proper VPN for my friends to play LAN games AND give them internet access?

    - by Gizmo
    I'm trying to set up a VPN on my local network, but everyone who connects to me gets access to my laptop and not to the internet or to the other devices on the network. How can I properly configure a VPN server on Windows so that it works correctly, giving remote PCs internet access plus access to all devices on my network? Or is there software for Windows that makes creating a VPN server easier, or maybe a VMware image of a Linux VPN server? I can't find any of those!

    My requirement is that my friends don't have to install additional software; they have to be able to connect with the default Windows tools. My OS is Windows 8 Standard edition (not Pro or Enterprise), OEM. Most of my friends also have Windows 8, some Windows 7.

    Extra info: my device is DMZ'ed (placed in the router's demilitarized zone, so it is reachable from the WAN). I can access files, websites and services on the other devices on my network, and all devices can access file shares, websites and all other services on my device. When the VPN is enabled, everything works except that the client gets no internet access and no access to any device on my network; the client only has access to my device.

  • PowerShell vs GPO for installation, configuration, maintenance

    - by user52874
    My question is about using PowerShell scripts to install, configure, update and maintain Windows 7 Pro/Enterprise workstations in a 2008 R2 domain, versus using GPO/ADMX/MSI.

    Here's the situation: because of a comedy of cumulative corporate bumpfuggery, we suddenly found ourselves having to design, configure and deploy a full Windows Server 2008 R2 and Windows 7 Pro/Enterprise environment on very short notice and a tight delivery schedule. Of course, I'm not a Windows expert by any means, and we're so understaffed that our buzzword bingo includes 'automate', 'one-button' and 'it needs to Just Work'. (FWIW, I started with DEC, then moved on to Solaris and Cisco, then Linux of various flavours with a smattering of BSD nowadays. I use Windows for email and to fill out forms.) So we decided to bring in a contractor to do this for us, and they met the deadline. The system is up and mostly usable, and this is good; we would not have been able to do this ourselves. But it's the 'mostly' part that is proving to be the pain now, and I'm having to learn the Microsoft stack anyway until/unless we can get a new contract with these guys for ongoing operations.

    Here's my question: the contractor used PowerShell almost exclusively for deployment, configuration and updating. My intensive reading over the last week leads me to think that generally accepted practice for deploying, configuring and updating Microsoft environments uses elements of GPOs and ADMX templates, along with maybe some third-party tools like PolicyPak. Are there solid reasons, which I've not found yet, why PowerShell scripts would be preferred over the GPO methods? I'm going to discuss this with the contractor lead when he gets back from his vacation, and he'll be straight with me (nor do I think they set us up), but I can also see this might be a religious issue, so I would still like some background. Thoughts? Or weblinks? Thanks!

  • SQL Server 2008 service account error

    - by TheDude
    I installed SQL Server Enterprise but can't get it to work. It is a standalone instance, on a laptop, for development purposes; no network is involved and there are no other users. The OS is Windows 7.

    I keep receiving event ID 7000, which means that access is denied for the service account (the account was Network Service). After reading up on it, I kind of got the idea that a dedicated user account should be created with minimal privileges, so off I went and added a user, SQLservices. In SQL Server Configuration Manager I right-clicked SQL Server (MSSQLSERVER) and, in the properties, set the service to run as my new user. Well, here's mister event ID 7000 again.

    I don't get what I am doing wrong. Also, this new user ends up on my start-up screen, and I don't think I want that; it would be weird to have a crowd of users on my logon screen just because I created them for my Windows services.

    The error I get when I add the user in SQL Server Configuration Manager is as follows: Permission Denied. [0x80070005]
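
    Two hedged things this usually comes down to, sketched below: the service account needs the 'Log on as a service' right (grant it under secpol.msc > Local Policies > User Rights Assignment; event 7000 with access denied typically means it's missing), and a local service account can be hidden from the logon screen with a documented Winlogon registry key. Account name from the question; the password is a placeholder:

        :: Hide the SQLservices account from the Windows logon screen
        reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SpecialAccounts\UserList" /v SQLservices /t REG_DWORD /d 0 /f

        :: Re-point the service at the account (sc also re-stores the password)
        sc config MSSQLSERVER obj= ".\SQLservices" password= "PlaceholderPassword"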

  • What should I encrypt in Debian during install?

    - by ianfuture
    I have seen various guides and recommendations on the web about how best to do this, but nothing that clearly explains the best way and why.

    As I understand it, part of Debian needs to be unencrypted on its own partition so the system can boot; most guides call this /boot and set the boot flag on it. Next, I believe the best approach is to create another partition out of all the remaining disk space, encrypt it, create an LVM on top of it, and then within the LVM create my various partitions, name them, and select their sizes and filesystem types.

    Can I include swap in the encrypted LVM part? Is this approach sound? If so, what partitions should I use (this is going to be a minimal server install, with a view to installing what I need for a dev server as and when I need it)? Finally, how does the installer know what to put in each partition I define?

    I appreciate there is more than one question here, but any help and suggestions would be appreciated. If further clarification is needed, please mention it in the comments.

    EDIT 16/3/2010: After Richard Holloway's reply, I thought it relevant to add this. The reason I want to do this is to explore maximising security on any server install and setup, due to my interest in computer security and forensics. I am also trying to perform the task as if it were being done in an enterprise situation. On a technical note, once set up and configured with minimal packages and ssh, this server will not be physically easy to access, so I will only be entering via ssh. (Yes, I know: why encrypt something no one will ever be able to get their hands on? Because I can and I want to is the simple answer, but see above too.)
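
    For reference, a sketch of that layout as it would be built by hand; the Debian installer's guided "encrypted LVM" mode produces essentially the same thing, and the installer knows what goes where because you assign each logical volume a mount point in the partitioner. Device names and sizes below are illustrative. Swap inside the encrypted LVM is fine, and recommended, since unencrypted swap can leak memory contents to disk:

        # /dev/sda1 -> small unencrypted /boot, boot flag set
        # /dev/sda2 -> LUKS container holding an LVM physical volume
        cryptsetup luksFormat /dev/sda2
        cryptsetup luksOpen /dev/sda2 sda2_crypt

        pvcreate /dev/mapper/sda2_crypt
        vgcreate vg0 /dev/mapper/sda2_crypt
        lvcreate -L 1G  -n swap vg0
        lvcreate -L 10G -n root vg0
        lvcreate -l 100%FREE -n home vg0

        mkswap /dev/vg0/swap
        mkfs.ext3 /dev/vg0/root   # and similarly for the other volumes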

  • Windows 7: using LLT for IPv6

    - by Seoman
    The question asked below is about the specific implementations of each OS, not about the RFC.

    Looking for a way to assign a fixed IP address to a host before it boots, I found that CentOS 6 works fine with no modifications, while Windows 7 does not work at all. There are three valid ways of generating a DUID, as defined in RFC 3315:

    1. Link-layer address plus time (LLT)
    2. Vendor-assigned unique ID based on Enterprise Number (EN)
    3. Link-layer address (LL)

    On the CentOS box, which works fine, I can see the following autogenerated DUID:

        option dhcp6.client-id 0:1:0:1:19:60:25:f1:52:54:0:6b:b9:9e;

    and the MAC address for this host is:

        ifconfig eth1 | grep HWaddr
        eth1    Link encap:Ethernet  HWaddr 52:54:00:6B:B9:9E

    As you can see, the DUID contains the MAC address. I can assign a fixed IP address to this host by including an entry in my DHCP server similar to:

        host vm {
            hardware ethernet 52:54:00:6B:B9:9E;
            fixed-address6 2001:db8:0:1::200;
            if packet(0,1) = 1 {
                log(debug, "VM Request match!");
            }
        }

    and the CentOS 6 host gets its IP.

    On the Windows side, I faced a common problem explained in another post. In summary, Windows 7 uses option 2 (an Enterprise Number DUID) or a variation of it. That post explains how to move it to LLT (link-layer plus time), but it is not working for me. If I modify the DUID to one that looks like the one generated on CentOS (but with the right MAC), it works as expected.

    Question: how can I change the DUID generation for Windows 7 to be based on the MAC address, as CentOS 6 does? Thanks
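
    A hedged sketch of the registry route usually cited for this: Windows persists its active DUID as a binary value, which can be overwritten with a hand-built DUID-LLT (2-byte type 0001, 2-byte hardware type 0001, 4-byte time, 6-byte MAC) and is re-read when the interface restarts. The bytes below reuse the example DUID from the CentOS box above; build your own from the Windows host's MAC, and treat the value name as something to verify:

        reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters" /v Dhcpv6DUID /t REG_BINARY /d 00010001196025f15254006bb99e /f
        :: then disable/re-enable the network adapter (or reboot) so the new DUID is used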

  • Run as another user, but also as administrator

    - by Tewr
    I am trying to debug a virtual machine (VM) running on a remote computer, from my workstation (A). Both VM and A run Windows 7 Enterprise.

    Apparently, I need to start the Remote Debugger Service (RDS) on VM as an administrator. Apparently, I also need to run RDS as the user Tewr logged in on A (domain: DOM). VM runs the services I need to debug, as well as the remote desktop interface, with an account VMUSER in a domain called VMDOMAIN.

    I manage to start RDS as administrator, but then the RDS process is owned by VMUSER, and that's not good enough. I also manage to run RDS as DOM\Tewr, but then not as an administrator. I have added DOM\Tewr as an administrator on VM, but that's not good enough either, because the process is still not run elevated.

    How can I run the RDS process as DOM\Tewr and "As Administrator" while logged on to Windows as VMDOMAIN\VMUSER? (Note: I have tried creating an account with the same credentials/password as VMUSER, as hinted in the MS article, but with no luck...)
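
    A hedged sketch of one way to satisfy both requirements at once (a different domain user *and* elevation): start a shell as DOM\Tewr with runas, then trigger UAC elevation from inside it. The msvsmon.exe path below is a typical Remote Debugger location, not a known-good one:

        :: Step 1: a command prompt running as DOM\Tewr
        runas /user:DOM\Tewr cmd.exe

        :: Step 2, inside that prompt: launch the remote debugger elevated
        powershell -Command "Start-Process 'C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\Remote Debugger\x64\msvsmon.exe' -Verb RunAs"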

  • File exists but is unreadable by PHP

    - by Aron
    More than once I have run into this issue: I have a cache file that is automatically generated by PHP and contains some generated PHP code. However, for some reason the file cannot be read and parsed by PHP. These are the symptoms:

    - The file actually exists on the file system. Using Terminal you can navigate to it, view its contents (which are fully intact), etc.
    - PHP file_exists() reports that the file exists... which is correct, since it does :)
    - Then I include() the file. But when actually parsing the file, PHP just considers it an empty file: no fatal error, just no PHP code actually executed. It's as if the file were completely empty (which, I assure you, it is not).
    - It is not a permissions issue. Permissions are set as needed.
    - Workaround: open the file in Terminal via nano or some other text editor and just save it to disk again. After that (despite no changes to the content), PHP runs it just fine.

    As a clarification, I'd like to add that this happens rarely, but frequently enough to be a problem. Even when it does, there are hundreds of other similar files on the same system that work without a problem.

    If this were an issue affecting only my own scripts, I would consider that there must be a bug in the way I generate the PHP code. But no: the issue has occurred more than once when deploying to a server (usually from a Beanstalk repository via FTP), and has been present on various servers, Debian and Ubuntu, running Zend Community Server.

    Any ideas? One that crossed my mind was opcode caching (part of Zend Server CE): could it be that an empty version of the file is cached if it is requested while the write operation is still in progress?
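
    If the partial-write/opcode-cache theory is right, the usual defence on the writer's side is to never let a reader (or cache) see an in-progress file: write to a temporary name and rename() into place, which is atomic within a filesystem. A sketch, with a hypothetical cache path:

        <?php
        // Hypothetical generator writing a PHP cache file
        $cacheFile = '/var/cache/app/fragment.php';
        $tmp = $cacheFile . '.' . uniqid('tmp', true);

        // Write the complete contents under a private temporary name first
        file_put_contents($tmp, $generatedCode);

        // rename() is atomic on the same filesystem: readers see either the
        // old complete file or the new complete file, never a partial one
        rename($tmp, $cacheFile);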

  • Print job leaves queue but document isn't printed

    - by midnightstar
    I'm dealing with an HP Deskjet F380 All-in-One printer connected via USB to a desktop running Windows 7 Enterprise x64. If I attempt to print something like a web page or a Word document, the print job shows up in the print queue and the printer stirs; by stir, I mean it seems to prepare itself to print. However, the print job then leaves the queue (I'm thinking the computer sees it as completed) and the printer never actually prints anything. However, if I go into Devices and Printers from the Windows Start menu, open the printer properties and print a test page, the test page prints successfully. I also connected the printer to another computer, and there it will print just about anything.

    I have tried the following, with the same behaviour afterwards in every case:

    - Uninstalled and reinstalled the printer drivers.
    - Made sure the computer's OS is fully up to date.
    - Played with the way the computer handles print spooling: under the printer properties, on the "Advanced" tab, I set jobs to print directly to the printer.
    - Restarted the Print Spooler service.
    - Deleted the files sitting in C:\Windows\System32\spool\PRINTERS.
    - Ran SFC /scannow; the system found no integrity errors.
    - Cold-rebooted both the computer and the printer.

    The only lead I really have is that since the printer prints on other PCs, I can only assume there is something wrong with the way this PC is configured.
