Search Results

Search found 15209 results on 609 pages for 'configuration'.

Page 476 of 609

  • How to convert a raw disk image to a copy-on-write image based on another image for use with kvm and

    - by Jean-Paul Calderone
    I have a virtual Windows machine running on kvm. Presently it has a 90GB raw disk image. I would like to clone this VM without having to keep two copies of the 90GB raw disk image around. It seems like a good approach for doing this is to make two new qcow or qcow2 images based on the original. First I converted the raw image to a qcow2 image:

      qemu-img convert -O qcow2 basewindowsxp.img basewindowsxp.qcow2

    Then I tried creating a new image backed by this:

      qemu-img create -F qcow2 -f qcow2 -b `pwd`/basewindowsxp.qcow2 windowsxp-1.qcow2

    Then I used virt-manager to point the original VM at windowsxp-1.qcow2. However, when I try to start up the VM in this new configuration, virt-manager reports an error:

      Traceback (most recent call last):
        File "/usr/share/virt-manager/virtManager/engine.py", line 588, in run_domain
          vm.startup()
        File "/usr/share/virt-manager/virtManager/domain.py", line 150, in startup
          self._backend.create()
        File "/usr/lib/python2.6/dist-packages/libvirt.py", line 300, in create
          if ret == -1: raise libvirtError ('virDomainCreate() failed', dom=self)
      libvirtError: internal error unable to start guest: qemu: could not open disk image /var/lib/libvirt/images/windowsxp-1.qcow2

    The error suggests that the filename was misspecified or that the filesystem permissions are too restrictive, but neither of these is the case:

      $ ls -l /var/lib/libvirt/images/windowsxp-1.qcow2
      -rwxrwxrwx 1 root root 262144 2010-05-27 08:32 /var/lib/libvirt/images/windowsxp-1.qcow2

    Why won't virt-manager start this vm?
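
    One thing worth checking (a hedged sketch, not a confirmed diagnosis): qemu has to be able to open the backing file too, and the absolute path baked in at create time may sit somewhere the qemu process - or its AppArmor/SELinux profile - cannot read, such as a home directory. Paths below are illustrative:

      # show the backing file the overlay actually records
      qemu-img info /var/lib/libvirt/images/windowsxp-1.qcow2

      # recreate the overlay with the backing image inside the libvirt images dir
      cp basewindowsxp.qcow2 /var/lib/libvirt/images/
      qemu-img create -f qcow2 -b /var/lib/libvirt/images/basewindowsxp.qcow2 \
          /var/lib/libvirt/images/windowsxp-1.qcow2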


  • Cyrus: In practical terms, how do end users administer their shared mailboxes?

    - by Nick
    Let's say we have four customer service reps: Billy, Bob, Joe, and Tom. Tom is the department manager. There's a shared Customer Service mailbox on the Cyrus server that they all have access to. Tom, as the manager, also has administrative privileges for the shared mailbox.

    They decide they want to create sub-folders a certain way, and Tom creates them. They're all running Thunderbird, so Tom right-clicks the main folder and chooses "New Subfolder". Now Tom has the subfolders he needs and the other reps have... nothing! Because Cyrus created the subfolders giving Tom "Full Access" permissions, and everyone else gets no access.

    So how does Tom give the other reps in his department access to the new folders? As far as Cyrus is concerned, Tom has permission to grant others access to his new mailboxes - but as far as I can tell, there's no option in Thunderbird for granting mailbox permissions. An IT staff member should not have to receive a support request every time someone wants to add a subfolder to a shared mailbox. That's why we make certain users into mailbox admins in the first place! But asking (non-technical) users to SSH into an IMAP server to run cyradm seems like a bad idea too.

    Certainly someone has found a solution for this dilemma. Perhaps a Thunderbird extension for setting Cyrus permissions? Or something like umask that forces subfolders to have identical permissions to their parents on creation? And related, what about Sieve configuration? Is there any way that can be done from the client machine too? Thanks, Nick
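
    For what it's worth, the grant itself is a single IMAP ACL operation (RFC 4314), so it doesn't have to be done over SSH by hand - anything that speaks IMAP can issue it. The mailbox name and rights string below are hypothetical:

      # via cyradm, e.g. wrapped in a small script users can trigger
      cyradm --user tom imap.example.com
      sam "shared/Customer Service/NewFolder" billy lrswip

      # or as a raw IMAP command from any client library
      a1 SETACL "shared/Customer Service/NewFolder" billy lrswip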


  • How can I recover XFS partitions from a formatted HD?

    - by giuprivite
    I deleted the partition table of my HD. I wanted to format another one, but by mistake, I formatted the wrong one. Then I also created some new partitions on it. Now I would like, if possible, to recover my old data. The old configuration was this: a primary NTFS partition with Windows, and a secondary partition with four logical partitions: a swap and three XFS partitions (two for Ubuntu and OpenSuSE, and one with the home for both systems). This is the output I get when I run gpart in a terminal:

      ubuntu@ubuntu:~$ sudo gpart /dev/sdb
      Begin scan...
      Possible partition(Windows NT/W2K FS), size(39997mb), offset(0mb)
      Possible extended partition at offset(39997mb)
      Possible partition(Linux swap), size(8189mb), offset(39997mb)
      Possible partition(SGI XFS filesystem), size(40942mb), offset(48187mb)
      Possible partition(SGI XFS filesystem), size(40942mb), offset(89149mb)
      Possible partition(SGI XFS filesystem), size(175044mb), offset(130112mb)
      End scan.
      Checking partitions...
      Partition(OS/2 HPFS, NTFS, QNX or Advanced UNIX): primary
      Partition(Linux swap or Solaris/x86): logical
      Partition(Linux ext2 filesystem): logical
      Partition(Linux ext2 filesystem): orphaned logical
      Partition(Linux ext2 filesystem): orphaned logical
      Ok.
      Guessed primary partition table:
      Primary partition(1)
        type: 007(0x07)(OS/2 HPFS, NTFS, QNX or Advanced UNIX)
        size: 39997mb #s(81915360) s(63-81915422)
        chs: (0/1/1)-(1023/254/63)d (0/1/1)-(5098/254/51)r
      Primary partition(2)
        type: 015(0x0F)(Extended DOS, LBA)
        size: 265245mb #s(543221849) s(81915435-625137283)
        chs: (1023/254/63)-(1023/254/63)d (5099/0/1)-(38912/254/2)r
      Primary partition(3)
        type: 000(0x00)(unused)
        size: 0mb #s(0) s(0-0)
        chs: (0/0/0)-(0/0/0)d (0/0/0)-(0/0/0)r
      Primary partition(4)
        type: 000(0x00)(unused)
        size: 0mb #s(0) s(0-0)
        chs: (0/0/0)-(0/0/0)d (0/0/0)-(0/0/0)r

    Looking at the first eight lines, it seems the data are still there... but I don't know how to recover them. I have a free second HD of about 500 GB (the formatted one is 320 GB) that I can use for the recovery process.
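
    A cautious recovery sketch (device names and mount points are assumptions - verify them first, and image the disk before letting anything write to it):

      # 1. take a full image of the damaged disk so every later step is reversible
      sudo dd if=/dev/sdb of=/mnt/spare/sdb.img bs=4M conv=noerror,sync

      # 2. have gpart write back the table it guessed (destructive - only after step 1)
      sudo gpart -W /dev/sdb /dev/sdb

      # 3. check the XFS filesystems read-only before mounting
      sudo xfs_repair -n /dev/sdb6    # -n = no-modify; the partition number is a guess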


  • Upgrade an Ubuntu 8.04 installation with VMware Server 1.0.8 and lots of guest OSes to Something Else

    - by Glyph
    I have an Ubuntu 8.04 (Hardy Heron) host machine which is running a whole slew of virtual machines in VMware Server 1.0.8. Among other guest OSes, there is every release of Ubuntu since 6.06, OpenSolaris 2009.06, and Windows XP. Right now I access these VMs from a variety of client OSes as well: Linux and Windows via the VMware Server console, and MacOS via X-forwarding the host machine's server console.

    I'd like to upgrade the host to Ubuntu 10.04 (Lucid Lynx), but from what I can tell, getting VMware Server 1.x to work on a more recent version of Linux is a real pain. While VMware Server 2.x is a bit easier, it's still not packaged as Debian packages, so installing security updates is a big chore. As long as I'm upgrading anyway, I'd like to move to a virtualization solution that will allow me to automate applying updates. The options that I'm aware of right now are KVM (managed via virt-manager) and VirtualBox (as managed by its own tools or via its own libvirt bindings), but I'm open to other suggestions. For each option, I'd like to know:

    - how do I convert my guest images to the new format?
    - am I going to have to re-activate my Windows guests? (Alternatively: if the virtual hardware is different by default, can I avoid re-activation by changing some virtualization configuration to provide me with more similar virtual hardware?)
    - what are the management options like for each client OS (Mac, Linux, Windows)?

    Thanks.
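
    On the conversion question, the KVM route is roughly this (names are placeholders; split multi-file VMDKs may need consolidating first, and Windows guests may well demand re-activation when the virtual hardware changes):

      # convert a VMware disk to qcow2
      qemu-img convert -O qcow2 guest.vmdk guest.qcow2

      # wrap a libvirt guest around the converted disk
      virt-install --import --name guest --ram 1024 \
          --disk path=/var/lib/libvirt/images/guest.qcow2 --os-variant winxp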


  • There's no sound on Ubuntu with an Intel HDA onboard chip and Realtek ALC1200 codec.

    - by Hanno Fietz
    For a while now, my sound has not been working in Ubuntu. It used to play OK, but after some upgrade (might have been the distro upgrade to 9.10), it stopped working. I'm currently running 10.04 on an amd64 architecture. I'm using the built-in audio on a Foxconn motherboard; it's an ATI/Intel HDA chip with an Azalia controller, apparently using the Realtek ALC1200 codec. All the gory details here.

    I found a nice sound troubleshooting tutorial here, which is well-written and pretty extensive; however, I fail to look up the supported "models" for my sound card. The troubleshooting page says to look for a section giving the codec used by your sound card, which looks like this for me:

      !!HDA-Intel Codec information
      !!---------------------------
      --startcollapse--
      Codec: Realtek ALC1200

    Then, I'm supposed to look up the models for that codec in the file Documentation/ALSA-Configuration.txt in the appropriate directory of ALSA's git repository. Mine actually pointed me to a separate file, Documentation/HD-Audio-Models.txt, which, for my driver version, is located here and contains no section related to ALC1200 codecs.

    I tried putting the driver options probe-mask=1 and model=auto in a config file for modprobe, as suggested elsewhere, but this just led to snd-hda-intel not being able to load at all anymore. I also tried installing the linux-backports-modules-alsa package for my kernel, because the description sounded promising, but that didn't change anything either.
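
    In case it saves someone the same search: the ALC1200 appears to be handled by the driver's ALC882/883 family, so the model names under that section of HD-Audio-Models.txt are the ones to try - that mapping is an assumption worth verifying against your exact kernel. One value at a time, e.g.:

      # /etc/modprobe.d/alsa-base.conf
      options snd-hda-intel model=6stack-dig

      # then reload the driver and retest
      sudo alsa force-reload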


  • Need for explanation: NetBIOS over TCP/IP on VMware network adapter disturbs access to network share

    - by gyrolf
    (Moved here from Stack Overflow.) Some time ago nearly all workstations in our team (Windows XP SP2) exhibited intermittent but frequent delays when accessing shares on the network. Typically the first access to a share which hadn't been accessed for some time resulted in a nearly frozen workstation for up to 30 seconds. Then everything started working fine again. Using TCPView from Sysinternals I saw that during these delays there was a connection to the netbios-ssn port on the file server which was in state SYN_SENT.

    First try: disable NetBIOS over TCP/IP for the intranet network adapter. Problem solved, but I didn't like manipulating our centrally managed network configuration for the intranet.

    Second try: disable NetBIOS over TCP/IP only for the VMware network adapter (VMnet1, used for host-only communication). Problem solved again!

    My questions: Why does NetBIOS over TCP/IP on one network adapter disturb NetBIOS over TCP/IP on another network adapter? Is this problem specific to VMware network adapters? Has anybody else seen this phenomenon?

    Additional information: VMware Workstation version 6.0.3. At the time I started seriously analysing the problem, it was no longer possible to find out what had been changed on our systems at the time the problems started.
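
    For anyone scripting the same workaround, the per-adapter NetBIOS setting can be flipped from the command line (the adapter match string is an assumption - adjust it to your VMnet1 adapter's description):

      rem 2 = disable NetBIOS over TCP/IP on matching adapters
      wmic nicconfig where "Description like '%VMware%'" call SetTcpipNetbios 2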


  • SSO "Portal"

    - by Clinton Blackmore
    Pursuant to my question on alleviating the password explosion, I've contacted some of the services to whom we are paying money to access their websites, to ask if we could authenticate our own users, and some of them said yes and sent me specs on how to do so. (One of the sites called such a page a "portal"; I've never heard the term used in quite that way.) It is simple enough that I am tempted to roll my own. The largest complication is that one site wants us to store a key for every user in our database (and I think the LDAP database makes sense) after their initial login. So, non-trivial, but doable.

    The nature of these sorts of tasks, I expect, is that if they start out small and simple, they don't end that way. There must be some software that addresses this that is readily extended, surely. In my searching, I've come across:

    - SimpleSAMLphp
    - JOSSO
    - RubyCAS-Server
    - Shibboleth
    - Pubcookie
    - OpenID

    [Wow, gee. I'd missed some of those in my previous searches! The Wikipedia page on Central Authentication Service is useful, and the section on alternatives to OpenID makes it look like there is a lot of choice.]

    Can anyone recommend any of these, or suggest ones to avoid? Internally, we are authenticating using Apple's Open Directory [ == OpenLDAP + Kerberos + Password Server (which, I believe, == SASL) ]. As far as extending/tweaking/advanced configuration of a system goes, I am able to program in Python and C++, can do some basic PHP, and may be able to remember some Java. Looks like I need to pick up Ruby at some point.

    Addendum: I would also like users to be able to change their passwords over the web (and for certain users to change the passwords of other users).


  • Can I tell if CrashPlan has backed up a particular file in a particular state?

    - by Chris Cogdon
    I would like to be able to tell, programmatically, if CrashPlan has backed up a particular file, including the current updates to that file - i.e., that the current contents of a file are backed up. It's relatively easy to tell when CrashPlan last backed up a file: its file name appears in /usr/local/crashplan/log/backup_files.log.0, and with some accuracy I could compare the backup time with the last modification time of the file, but that method seems somewhat dubious.

    A couple of methods I could think of, but don't know how to do:

    - Compare the current file to CrashPlan's metadata about that file. This needs knowledge of the format of CrashPlan's "cache" files as well as the hashing system used. This might be achievable through the CLI, but the CLI is just a portal into the GUI, and I need something that's scriptable.
    - Restore the file to a temporary directory, and compare it. Unfortunately, there is no CLI to do restores; the GUI is the only way.

    I'll describe what I'm trying to achieve. It would be nice to know how to do the above, even if there are alternative methods for the following: I'm using CrashPlan for continuous backups of my PostgreSQL database, using WAL archives. In the current configuration, the archive command copies the files to an archive directory, which is backed up by CrashPlan. Every so often I manually confirm (or just trust) that a group of WALs is backed up, remove them from the archive directory, and occasionally do a restore through the GUI to ensure I can retrieve current and "deleted" WALs. The xlog directory is backed up too, so I have a good chance of doing a near-full restore even if a particular xlog hasn't been archived by PostgreSQL yet.

    I'd like to be able to automate this process, which necessitates either confirming the backup status and recency, or automating a restore for comparison purposes. (As a bonus, if the method is trustworthy, I could turn the archive_command from "copy to archive directory" into "confirm CrashPlan has backed up the current version", and do away with the archive directory completely.) (And, yes, I'm doing regular pg_dumpall runs, in addition to the above.)
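
    As a rough illustration of the mtime-vs-log comparison (explicitly the dubious method; the log's line format and timestamp layout are assumptions that need checking against a real backup_files.log):

      #!/bin/sh
      # succeed only if the file's last logged backup is newer than its mtime
      f="$1"
      log=/usr/local/crashplan/log/backup_files.log.0
      last=$(grep -F "$f" "$log" | tail -n 1 | awk '{print $1" "$2}')
      [ -n "$last" ] || exit 1
      [ "$(date -d "$last" +%s)" -ge "$(stat -c %Y "$f")" ]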



  • ESX 4.0 space: DASD, NAS, or ?

    - by thormj
    I put together an ESX box for better management, but its performance is a WTF item; I'm a noob at dealing with ESX, so I'm looking for a laundry list of reading material to help me straighten this out so I can go back to .NET programming.

    Current storage system: We're running RAID5+hotspare (8x500 GB spindles) on a PERC6i in a Dell 2910. Due to ESX limitations, the PERC is showing the storage as 1x2TB + 1x800GB "partitions." I'm not sure of the setup's configuration (stride / stripe / ???) at all.

    Our applications: We have an SBS server as well as a minor (2x50 GB, but growing at 10 GB/month) database server... The application that lives on the database VM is CPU- and I/O-intensive; it's a database-churning exercise mixed in with a lot of computation on the data (fixing that performance is what I'm supposed to be working on)...

    Performance issue: When I do a backup, restore, or worse (copy a backup from one VM to another to move it to the QA VM), the entire system slows to a crawl (even "unrelated" VMs). I originally thought a DASD situation would be quite good since you have PCI-X bandwidth, but the system-wide slowdown is killing productivity.

    Questions:

    - What should I do to make an intelligent decision about NAS vs RAID vs SAN vs DASD?
    - Are there sweet spots/ugly spots in the storage setup?
    - Can you use an SSD PCI-X card in ESX for the tempdb? Good/bad idea?
    - Is there any way to "share" some image in a copy-on-write fashion? Most of the backup-copy-restore is to "put a clean image on the dev boxes"; if I could have them "share" the master image, the "big copy" (2x50 GB) would only need to be done once per week instead of once per dev per week. [Runtime performance isn't a concern with the dev boxes, but the backup/copy/restore kills production, SBS, and everything else on the box.]
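
    On the last question: ESX won't share one master image copy-on-write between running VMs out of the box, but thin-provisioned clones at least make the weekly copy much cheaper; a hedged sketch with placeholder datastore paths:

      # clone the master disk as thin-provisioned - unwritten blocks cost nothing
      vmkfstools -i /vmfs/volumes/datastore1/master/master.vmdk \
          -d thin /vmfs/volumes/datastore1/dev1/dev1.vmdk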


  • IIS6: How to troubleshoot a 404 error in an ASP.NET application?

    - by Tomalak
    I have an ASP.NET application on a Windows Server 2003/IIS6 box that refuses to run for some reason (it's the Xerox Centre, if that info helps). It has been working flawlessly before, though, on this server. Now, all I get if I try to open the app homepage (http://some.intranet.server/XeroxCentreWareWeb/) is a "404 - File or directory not found" error.

    The app is configured to run in its own app pool, which runs as Network Service. The Network Service account has read access to the configured directory. If I stop the app pool, I get the expected "Service Unavailable" message, meaning the app and its pool are wired correctly.

    I tried to track down any file permission issues with procmon - nothing to be seen. There isn't even an access to the web app directory happening when the page loads. Interestingly, according to procmon, the web server accesses the 401-2 custom error file ("Logon failed due to server configuration") first, but then decides to send the 404 down to the client.

    EDIT: The app runs with Windows-integrated authentication. Regular users have access to the app directory as well (I would have noticed file system ACCESS DENIED messages in procmon, if there had been any). This makes me think there is some kind of weird permission problem that occurs even before the application files are accessed. I just have no idea where to look. I've tried running the app pool as Local System for a test, but to no avail. What else could I check in this case?
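
    One classic cause of an IIS6 404 where procmon never even touches the content directory is the ASP.NET web service extension being set to Prohibited - worth ruling out before digging further (the version string is an assumption; match it to the app pool's runtime):

      cscript %SystemRoot%\system32\iisext.vbs /ListExt
      cscript %SystemRoot%\system32\iisext.vbs /EnExt "ASP.NET v2.0.50727"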


  • How can I write automated tests for iptables?

    - by Phil Frost
    I am configuring a Linux router with iptables. I want to write acceptance tests for the configuration that assert things like: traffic from some guy on the internet is not forwarded, and TCP to port 80 on the webserver in the DMZ from hosts on the corporate LAN is forwarded.

    An ancient FAQ alludes to an iptables -C option which allows one to ask something like, "given a packet from X, to Y, on port Z, would it be accepted or dropped?" Although the FAQ suggests it works like this, for iptables (though maybe not for ipchains, which the FAQ uses in its examples) the -C option does not simulate a test packet running through all the rules, but rather checks for the existence of an exactly matching rule. This has little value as a test: I want to assert that the rules have the desired effect, not just that they exist.

    I've considered creating yet more test VMs and a virtual network, then probing with tools like nmap for effects. However, I'm avoiding this solution due to the complexity of creating all those additional virtual machines, which is really quite a heavy way to generate some test traffic. It would also be nice to have an automated testing methodology that can also work on a real server in production.

    How else might I solve this problem? Is there some mechanism I might use to generate or simulate arbitrary traffic, then know whether it was (or would be) dropped or accepted by iptables?
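
    One middle ground between checking rule existence and building a whole virtual network is the netfilter TRACE target, which logs every chain and rule a real packet traverses (it needs the trace/log kernel modules available; addresses are examples):

      # flag packets of interest before any other table sees them
      iptables -t raw -A PREROUTING -p tcp -s 203.0.113.7 --dport 80 -j TRACE
      # send one probe from the test host, then read the per-rule trail
      dmesg | grep 'TRACE:'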


  • Debian, Apache2, CGI: paths issue

    - by Bubnoff
    I have a Perl form email script in the server's cgi-bin directory (/usr/lib/cgi-bin). From /etc/apache2/sites-enabled/000-default:

      ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
      <Directory "/usr/lib/cgi-bin">
          AllowOverride None
          Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
          AddHandler cgi-script cgi pl
          Order allow,deny
          Allow from all
      </Directory>

    The issue is with paths. The HTML calls the script here:

      <form name="Request" method="post" action="http://server-test.local/cgi-bin/formprocessorpro.pl" onsubmit="return checkWholeForm49874(this)">

    The directory with the templates and configs is passed here:

      <input type="hidden" name="base_path" value="../contact" />

    The path to this form is http://server-test.local/formstest/contact.htm. No matter what variation I try for the base_path, I get an error from the formprocessor script that it can't find the directory:

      An error occurred when opening the Form Configuration File (../contact/form.cfg): No such file or directory.

    I need to move this script from an old server, configured by a previous sysadmin, to a new server. Since cgi-bin is automatically linked to /usr/lib/cgi-bin and linked such that the script resides at http://server-test.local/cgi-bin/formprocessorpro.pl, I would imagine that, given that the templates are in the webroot in a directory called contact, the correct path would be ../contact. Any ideas? It's been a while since I've messed with CGI. Bubnoff
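
    A likely explanation: a relative base_path is resolved against the CGI process's working directory, which is normally the script's own directory (/usr/lib/cgi-bin), not the webroot - so ../contact lands on /usr/lib/contact. A quick check, and the usual fix (the webroot path is an assumption):

      # see where ../contact really points from the script's directory
      cd /usr/lib/cgi-bin && readlink -f ../contact    # -> /usr/lib/contact

      # pass an absolute path instead
      <input type="hidden" name="base_path" value="/var/www/contact" />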


  • Network Misconfiguration when adding first host to new vSphere cluster

    - by dunxd
    I am building a new vSphere cluster from scratch. I have installed ESXi on the first host, and built a vCenter server on a VM residing on that host (storage is on the local hard drive, although we have iSCSI targets which I can reach from the host). The cluster is configured for HA. When I try to add the host to the cluster, I get an error at the point where HA is configured - "Cannot complete the ."

    I have stripped the network configuration of the host down to the most basic - a single NIC attached to a single vSwitch - running the VMkernel port on VLAN 8, which is our management VLAN. The vCenter server will have a network address on this VLAN, so I also set the initial Virtual Machine Port Group to this VLAN, and connected the vCenter server NIC to this port group.

    I understand I can't connect the vCenter server to the VMkernel port group, but shouldn't I be able to connect the vCenter server to a port group in the same VLAN? If not, do I need to create a VLAN specifically for the VMkernel port group? I plan to set up another port group for vMotion with a dedicated and isolated VLAN (i.e. a VLAN that isn't routed), so this wouldn't allow vCenter to communicate.

    Does anyone have any suggestions, or other ideas for what might be causing the problem? I've read through the documentation, but it isn't giving me any pointers, and the error message isn't helping me beyond telling me something is wrong with my network config.
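
    For what it's worth, HA configuration errors at exactly this stage are very often name resolution rather than port groups - the HA agent wants the host and vCenter to resolve each other consistently. Quick checks from the ESXi console (names and addresses below are placeholders):

      hostname                           # should match the name used to add the host
      nslookup vcenter.example.local     # the host must resolve vCenter
      vmkping 10.8.0.10                  # and reach it via the VMkernel interface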


  • Expanding to dual video cards

    - by Anthony Greco
    I know a lot of factors come into play here, so I will list my current hardware and setup:

    - Mobo: GIGABYTE GA-890FXA-UD5 [http://www.newegg.com/Product/Product.aspx?Item=N82E16813128441]
    - Processor: AMD Phenom II X6 1090T Black Edition Thuban 3.2GHz [http://www.newegg.com/Product/Product.aspx?Item=N82E16819103849]
    - RAM: G.SKILL Ripjaws Series 16GB (4 x 4GB) [https://secure.newegg.com/NewMyAccount/OrderHistory.aspx?RandomID=4933910872745320111128011418]
    - Current video card: EVGA 01G-P3-1366-TR GeForce GTX 460 SE [http://www.newegg.com/Product/Product.aspx?Item=N82E16814130591]
    - OS: Windows 7 Ultimate x64

    Currently I can run 2 monitors just fine in my setup. However, I want to upgrade this to 4 monitors. My question is, what is the best way to do this? I remember reading in the past that I need the same type of video card; however, would any GeForce GTX work, or do I need that very specific model (EVGA 01G-P3-1366-TR GeForce GTX 460 SE)? Are there any issues I should be aware of before I order 2 new monitors and a video card? Are there video cards better suited for this? I know NVIDIA offers SLI, but I do not know if my mobo is compliant. My mobo also offers CrossFireX configuration, though from what it says only Radeon cards are compliant.

    Any suggestions / feedback on my best route with my current setup is appreciated. Even if you suggest buying 2 new identical video cards, as long as you mention which and why that is better, I really appreciate it.

    Note: I really do not do any gaming. I sometimes do some 3D work in Unity and very rarely in Maya. Besides that I mostly do all my computer work in Visual Studio and Photoshop. I however need the 2 extra monitors because I sometimes monitor 5 remote desktops at once, and switching on only 2 is becoming a very big pain. Also, seeing 3 side by side while I work on the 4th will be very helpful. Again, I appreciate any feedback, as I have googled a bunch and just want to make sure what I buy will work.


  • Setting kernel memory for installing postgresql

    - by Matthieu Taymans
    My question is about setting the kernel shared memory for installing PostgreSQL on Mac OS X 10.6.8. The README file of PostgreSQL says:

      Shared Memory

      PostgreSQL uses shared memory extensively for caching and inter-process communication. Unfortunately, the default configuration of Mac OS X does not allow suitable amounts of shared memory to be created to run the database server.

      Before running the installation, please ensure that your system is configured to allow the use of larger amounts of shared memory. Note that this does not 'reserve' any memory so it is safe to configure much higher values than you might initially need. You can do this by editing the file /etc/sysctl.conf - e.g.

        % sudo vi /etc/sysctl.conf

      On a MacBook Pro with 2GB of RAM, the author's sysctl.conf contains:

        kern.sysv.shmmax=1610612736
        kern.sysv.shmall=393216
        kern.sysv.shmmin=1
        kern.sysv.shmmni=32
        kern.sysv.shmseg=8
        kern.maxprocperuid=512
        kern.maxproc=2048

      Note that (kern.sysv.shmall * 4096) should be greater than or equal to kern.sysv.shmmax. kern.sysv.shmmax must also be a multiple of 4096. Once you have edited (or created) the file, reboot before continuing with the installation. If you wish to check the settings currently being used by the kernel, you can use the sysctl utility:

        % sysctl -a

      The database server can now be installed.

    I'm a real beginner with all this but need to install PostgreSQL for academic purposes. Do you know how I can set this kernel shared memory? Won't that be harmful for my system? Thank you in advance. Matthieu
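
    The example values do satisfy the stated constraint - a quick arithmetic check, plus a read-only look at the live values:

      # shmall * page size (4096) must be >= shmmax
      echo $((393216 * 4096))    # 1610612736, exactly the shmmax above

      # current kernel settings (harmless to run)
      sysctl kern.sysv.shmmax kern.sysv.shmall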


  • How to subnet hosted VMs

    - by bwizzy
    I have a network of VMs, each having a LAN IP address and a public IP address. They each have a 1:1 NAT map for public access via the public IP for HTTP, SSH, etc. I'm trying to figure out a way to restrict the LAN IPs from talking to each other, but there are some cases where a group of LAN IPs will need to communicate. I'm using pfSense as a firewall/router on a 192.168.0.0/24 configuration.

    It seems like I could assign each VM its own subnet and add a static route to the firewall for that VM to get back to the firewall for internet access / other firewall rules. Is that right? I assigned one VM with:

      address 192.168.1.2
      netmask 255.255.255.254
      gateway 192.168.1.1

    Then I added a static route on the firewall's LAN interface using 192.168.1.0/30 as the destination network and 192.168.1.1 as the gateway. Nothing appears to be working. Anyone have any ideas? Please be aware I'm not that familiar with subnets. Thanks!
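
    One concrete thing stands out (offered as the likely fix, not a certainty): 255.255.255.254 is a /31, which leaves no room for a separate gateway and host. The /30 used in the static route corresponds to mask 255.255.255.252, with .1 and .2 as the two usable addresses:

      # per-VM /30 (Debian-style /etc/network/interfaces sketch)
      address 192.168.1.2
      netmask 255.255.255.252
      gateway 192.168.1.1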


  • How to find the real IP to which IPVS is routing a virtual IP

    - by Wayne Conrad
    I'm trying to find a problem server hiding behind a virtual IP (using LVS/ipvs). I've got a test program that sends requests to the virtual IP until it gets the bad response, but how can I tell to which real IP a request to the virtual IP got routed? On the box doing the virtual IP magic, here's the virtual IP configuration (for the service I care about):

      IP Virtual Server version 1.2.1 (size=4096)
      Prot LocalAddress:Port Scheduler Flags
        -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
      ...
      TCP  10.1.0.254:5025 nq
        -> 10.1.0.5:5025                Route   1      0          1
        -> 10.1.0.6:5025                Route   1      0          5
        -> 10.1.0.7:5025                Route   1      0          2
        -> 10.1.0.9:5025                Local   1      0          3
        -> 10.1.0.11:5025               Route   1      0          3
      ...

    My client program is sending TCP requests to 10.1.0.254:5025, usually getting a good response but sometimes a bad response. With this few servers, I could send my request to each server in turn until I discover the culprit, but I wonder if that technique will scale as we add servers. What means exist for me to find out where requests got routed?

    Kernel: Linux 2.6.32. OS: Debian testing (whatever that's called these days). ipvsadm is version 1.25, compiled with ipvs v1.2.1.
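
    One way to answer this from the director itself is the IPVS connection table, which records which real server each client connection was mapped to:

      # connection entries: client address -> virtual service -> real server
      ipvsadm -L -n -c | grep 10.1.0.254:5025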


  • "Safely remove hardware"...doesn't.

    - by Kev
    I have an external USB hard disk that I have scripted to safely shut down after a backup, so the backup operator can unplug it, and knows not to if the lights are still on for some reason. It's always worked fine using the DevEject command-line utility. This week it failed for some reason:

      DevEject 1.0
      2003 c't/Matthias Withopf
      Ejecting 'USB Mass Storage Device' [USB\VID_0411&PID_002A\00000704C8D2]...FAILED (23,5)
      Error ejecting device USB Mass Storage Device, vetoed (15,5)!

    Worse yet, using the Safely Remove Hardware tray icon, I click Stop, click OK, it pauses about 5 seconds with OK and Cancel greyed out, closes the sub-window, and then the main window with the Stop button still shows the device, and Stop is still available. I can keep doing that and it never gets rid of the device. I can still access it in Explorer. LockHunter reports that nothing is locking the drive. I've made no changes to the backup configuration or anything to do with the drive this week. Why the sudden flake-out? Short of a restart, which I can't do today before the backup operator goes home, how do I fix it?
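
    A stopgap until the vetoing component is found (hedged - this flushes pending writes but is not a true safe removal), plus a second opinion on open handles, both using Sysinternals tools; the drive letter is an assumption:

      rem flush the drive's write cache before the operator pulls the cable
      sync.exe e:

      rem search every process for handles open on the volume
      handle.exe e:\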


  • How do I remove a USB drive's write protection?

    - by nate
    I have a SanDisk Cruzer Blade USB stick that suddenly seems to be write protected. I tried running DiskPart, but after I enter the command "attributes disk clear readonly" it displays this:

      Microsoft DiskPart version 5.1.3565

      ADD      - Add a mirror to a simple volume.
      ACTIVE   - Marks the current basic partition as an active boot partition.
      ASSIGN   - Assign a drive letter or mount point to the selected volume.
      BREAK    - Break a mirror set.
      CLEAN    - Clear the configuration information, or all information, off the disk.
      CONVERT  - Converts between different disk formats.
      CREATE   - Create a volume or partition.
      DELETE   - Delete an object.
      DETAIL   - Provide details about an object.
      EXIT     - Exit DiskPart
      EXTEND   - Extend a volume.
      HELP     - Prints a list of commands.
      IMPORT   - Imports a disk group.
      LIST     - Prints out a list of objects.
      INACTIVE - Marks the current basic partition as an inactive partition.
      ONLINE   - Online a disk that is currently marked as offline.
      REM      - Does nothing. Used to comment scripts.
      REMOVE   - Remove a drive letter or mount point assignment.
      REPAIR   - Repair a RAID-5 volume.
      RESCAN   - Rescan the computer looking for disks and volumes.
      RETAIN   - Place a retainer partition under a simple volume.
      SELECT   - Move the focus to an object.

    It's like when you type help at the DiskPart prompt, so how do I get past this? This problem started when I plugged the stick into a laptop which had viruses, if that's any help.
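
    That help listing is DiskPart rejecting an unknown command: the XP-era DiskPart 5.1 simply has no "attributes" command (it appeared in later Windows versions). On XP the one software-side write-protect switch is the StorageDevicePolicies registry value; if clearing it doesn't help, the stick's own controller has most likely locked itself read-only, which no software can undo:

      reg add HKLM\SYSTEM\CurrentControlSet\Control\StorageDevicePolicies /v WriteProtect /t REG_DWORD /d 0 /f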


  • With no password expire notification at logon in Windows 7, how are you configuring password expire

    - by J. L.
    To my understanding, Windows 7 users do not receive password expiration notification during the logon process - it occurs strictly from the system tray. We currently have tray balloon notifications disabled to lessen user distraction, and I expect the password change process is a smoother one during logon rather than in an existing session. As a result, users will get prompted to change their passwords only at expiration. The users also connect to Terminal Services boxes, but receive the advance notification for password expiration there. So, Windows 7 is not notifying, but TS/RDS and XP boxes are.

    Any guidance on configuring this? Personally, I would turn off all expiration notices, but I understand most users would prefer to see the notification. Thoughts? Any GPO or other settings I might be overlooking? The interactive logon setting below is already enabled in our Win7 workstation GPO. My thought is balloon notifications will get turned back on for Windows 7, but I wanted to see if anyone was aware of alternatives. Thanks.

      Computer Configuration\Windows Settings\Security Settings\Local Policies - Security Options
      Interactive logon: Prompt user to change password before expiration
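
    For reference, that notification window boils down to a single registry value, so it's easy to confirm what any given box is actually configured with:

      reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v PasswordExpiryWarning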


  • Forcing a particular SSL protocol for an nginx proxying server

    - by vitch
    I am developing an application against a remote https web service. While developing I need to proxy requests from my local development server (running nginx on ubuntu) to the remote https web server. Here is the relevant nginx config:

      server {
          server_name project.dev;
          listen 443;
          ssl on;
          ssl_certificate /etc/nginx/ssl/server.crt;
          ssl_certificate_key /etc/nginx/ssl/server.key;

          location / {
              proxy_pass https://remote.server.com;
              proxy_set_header Host remote.server.com;
              proxy_redirect off;
          }
      }

    The problem is that the remote HTTPS server can only accept connections over SSLv3, as can be seen from the following openssl calls.

    Not working:

      $ openssl s_client -connect remote.server.com:443
      CONNECTED(00000003)
      139849073899168:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:177:
      ---
      no peer certificate available
      ---
      No client certificate CA names sent
      ---
      SSL handshake has read 0 bytes and written 226 bytes
      ---
      New, (NONE), Cipher is (NONE)
      Secure Renegotiation IS NOT supported
      Compression: NONE
      Expansion: NONE
      ---

    Working:

      $ openssl s_client -connect remote.server.com:443 -ssl3
      CONNECTED(00000003)
      <snip>
      ---
      SSL handshake has read 1562 bytes and written 359 bytes
      ---
      New, TLSv1/SSLv3, Cipher is RC4-SHA
      Server public key is 1024 bit
      Secure Renegotiation IS NOT supported
      Compression: NONE
      Expansion: NONE
      SSL-Session:
          Protocol : SSLv3
          Cipher   : RC4-SHA
      <snip>

    With the current setup my nginx proxy gives a 502 Bad Gateway when I connect to it in a browser. Enabling debug in the error log, I can see the message:

      [info] 1451#0: *16 peer closed connection in SSL handshake while SSL handshaking to upstream

    I tried adding ssl_protocols SSLv3; to the nginx configuration, but that didn't help. Does anyone know how I can set this up to work correctly?
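
    A note for later readers: ssl_protocols only governs the listening side; the handshake to the upstream is controlled separately. In nginx 1.5.6+ there is a directive for exactly this (older nginx would need something like an stunnel hop between nginx and the remote host that pins SSLv3):

      location / {
          proxy_pass https://remote.server.com;
          proxy_ssl_protocols SSLv3;
      }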


  • Can't make nodejs mingw32: pkg-config can't find gnutls

    - by valya
    I'm trying to compile nodejs using MSYS / mingw32 on Windows 7 64-bit:

      Valentin Golev@VALYASNOTEBOOK /home/Valentin_Golev/nodejs
      $ ./configure
      Checking for program CL              : ok C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\BIN\x86_amd64\CL.exe
      Checking for program CL              : ok C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\BIN\CL.exe
      Checking for program CL              : ok C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\BIN\amd64\CL.exe
      Checking for program CL              : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\CL.exe
      Checking for program CL              : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\CL.exe
      Checking for program CL              : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\x86_amd64\CL.exe
      Checking for program CL              : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\CL.exe
      Checking for program CL              : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\amd64\CL.exe
      Checking for program CL              : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\amd64\CL.exe
      Checking for program LINK            : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\amd64\LINK.exe
      Checking for program LIB             : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\amd64\LIB.exe
      Checking for program MT              : ok C:\Program Files\Microsoft SDKs\Windows\v6.0A\bin\x64\MT.exe
      Checking for program RC              : ok C:\Program Files\Microsoft SDKs\Windows\v6.0A\bin\x64\RC.exe
      Checking for msvc                    : ok
      Checking for msvc                    : ok
      Checking for library dl              : not found
      Checking for library execinfo        : not found
      Checking for gnutls >= 2.5.0         : fail
      --- libeio ---
      Checking for library pthread         : not found
      Checking for function pthread_create : not found
      error: the configuration failed (see 'C:\msys\1.0\home\Valentin_Golev\nodejs\build\config.log')

    I have gnutls built and installed! I checked config.log, and there was a command:

      pkg-config --errors-to-stdout --print-errors --atleast-version=2.5.0 gnutls

    I typed it in the console:

      $ pkg-config --errors-to-stdout --print-errors --atleast-version=2.5.0 gnutls
      Package gnutls was not found in the pkg-config search path.
      Perhaps you should add the directory containing `gnutls.pc'
      to the PKG_CONFIG_PATH environment variable
      No package 'gnutls' found

    But:

      $ $PKG_CONFIG_PATH
      sh: c:/msys/1.0/local/lib/pkgconfig: is a directory

      $ cd $PKG_CONFIG_PATH

      $ ls
      gnutls-extra.pc  gnutls.pc

    What am I doing wrong?
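
    A first thing to rule out (a guess consistent with the symptoms): a PKG_CONFIG_PATH that is set but not exported is visible at the prompt yet invisible to pkg-config when ./configure spawns it; and on mixed MSYS/Windows setups, the pkg-config found first in PATH may not be the MSYS one at all:

      export PKG_CONFIG_PATH=/local/lib/pkgconfig
      which pkg-config                             # confirm it's the MSYS build
      pkg-config --variable pc_path pkg-config     # its compiled-in search path
      pkg-config --modversion gnutls               # should now report the version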


  • Xen domU passwd file overwritten with console log output

    - by malfy
    I was setting up a Debian Xen domU and after booting it fine, I added basic configuration to /etc/network/interfaces and ran /etc/init.d/networking restart. This failed, so I decided to reboot. After the reboot I also ran xm shutdown box. When dropped to a shell prompt it wouldn't let me login. Upon further inspection, I now have garbage in some critical files in /etc:

      root@box:/# tail +1 mnt/etc/{passwd-,shadow}
      tail: cannot open `+1' for reading: No such file or directory

      ==> mnt/etc/passwd- <==
      0000000000100000 (reserved)
      Nov 23 02:02:39 box kernel: [ 0.000000] Xen: 0000000000100000 - 0000000004000000 (usable)
      Nov 23 02:02:39 box kernel: [ 0.000000] DMI not present or invalid.
      Nov 23 02:02:39 box kernel: [ 0.000000] last_pfn = 0x4000 max_arch_pfn = 0x1000000
      Nov 23 02:02:39 box kernel: [ 0.000000] initial memory mapped : 0 - 033ff000
      Nov 23 02:02:39 box kernel: [ 0.000000] init_memory_mapping: 0000000000000000-0000000004000000
      Nov 23 02:02:39 box kernel: [ 0.000000] NX (Execute Disable) protection: active
      Nov 23 02:02:39 box kernel: [ 0.000000] 0000000000 - 0004000000 page 4k
      Nov 23 02:02:39 box kernel: [ 0.000000] kernel direct mapping tables up to 4000000 @ 7000-2c000
      Nov 23 02:02:3

      ==> mnt/etc/shadow <==
      32 nr_cpumask_bits:32 nr_cpu_ids:1 nr_node_ids:1
      Nov 23 02:02:39 box kernel: [ 0.000000] PERCPU: Embedded 15 pages/cpu @c15b0000 s37688 r0 d23752 u65536
      Nov 23 02:02:39 box kernel: [ 0.000000] pcpu-alloc: s37688 r0 d23752 u65536 alloc=16*4096
      Nov 23 02:02:39 box kernel: [ 0.000000] pcpu-alloc: [0] 0
      Nov 23 02:02:39 box kernel: [ 0.000000] Xen: using vcpu_info placement
      Nov 23 02:02:39 box kernel: [ 0.000000] Built 1 zonelists in Zone order, mobility grouping on. Total pages: 16160
      Nov 23 02:02:39 box kernel: [ 0.000000] Kernel command line: root=/dev/mapper/xen-guest_root ro quiet root=/dev/xvda1 ro
      Nov 23 02:02:39 box kernel: [ 0.000000] PID hash table entries:

    The garbage is also present in the passwd file and the group file (although I didn't paste that above, since I have since run debootstrap on the filesystem again). Does anyone have any insight into what happened and why?


  • Multiple IPs using one NIC connectivity problem - Windows

    - by Vincent
    I have a frame relay network that is directly connected to a GPRS network. I also have an ADSL high-speed network, and recently I have been trying to achieve the following network configuration using Windows 7 (I also tried XP), with no success to date.

    On one server I have two NICs. On NIC1 I would like the following two static IP addresses: 10.0.1.110 and 10.0.1.200. The Cisco router has a default gateway of 10.0.1.1; the ADSL is DHCP. NIC1 and the Cisco router do not have access to the internet. NIC2 is set up for DHCP with a primary and secondary DNS configured to enable internet connectivity.

    With NIC1, all incoming TCP connections are from IP addresses starting with 10.192.x.x. I cannot establish a TCP connection to both 10.0.1.110 and 10.0.1.200 - it's either one or the other. I have a static route implemented in Windows:

      route -p add 10.192.0.0 mask 255.255.0.0 10.0.1.1 metric 1

    I have tried leaving out the gateway on NIC1, and many other combinations, with no success. Can anyone please help? What am I doing wrong?
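
    For completeness, both static addresses can be bound to the one adapter from an elevated prompt; the connection name and mask are assumptions:

      netsh interface ip add address "NIC1" 10.0.1.110 255.255.255.0
      netsh interface ip add address "NIC1" 10.0.1.200 255.255.255.0

      rem confirm what the stack is actually answering on
      ipconfig /all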

