Search Results

Search found 12919 results on 517 pages for 'tool pack'.

  • ESXi - change to thin - virtual disk filesize is the same

    - by sven
    Running ESXi 5.5 here with a datastore on a single SSD. I thought about changing the disks from thick to thin and found that I could use a tool on the ESXi host to do that. However, the file size of the newly created virtual disk is not changing. I run:

        vmkfstools -i loader.vmdk -d 'thin' thinloader.vmdk
        Destination disk format: VMFS thin-provisioned
        Cloning disk 'loader.vmdk'...
        Clone: 100% done.

    After that I compared the virtual disk sizes:

        ls -la *.vmdk
        -rw------- 1 root root 32212254720 Jun 10 08:25 loader-flat.vmdk
        -rw------- 1 root root 467 May 21 17:04 loader.vmdk
        -rw------- 1 root root 32212254720 Jun 10 08:27 thinloader-flat.vmdk
        -rw------- 1 root root 520 Jun 10 08:33 thinloader.vmdk

    Stats on the original file:

        stat loader.vmdk
        File: loader.vmdk
        Size: 467  Blocks: 0  IO Block: 131072  regular file
        Device: 8bf64d175e27544ch/10085333178302026828d  Inode: 419443780  Links: 1
        Access: (0600/-rw-------)  Uid: ( 0/ root)  Gid: ( 0/ root)
        Access: 2014-01-25 10:17:34.000000000
        Modify: 2014-05-21 17:04:06.000000000
        Change: 2014-05-21 17:04:06.000000000

    and on the thin file:

        stat thinloader.vmdk
        File: thinloader.vmdk
        Size: 520  Blocks: 0  IO Block: 131072  regular file
        Device: 8bf64d175e27544ch/10085333178302026828d  Inode: 432026692  Links: 1
        Access: (0600/-rw-------)  Uid: ( 0/ root)  Gid: ( 0/ root)
        Access: 2014-06-10 08:27:45.000000000
        Modify: 2014-06-10 08:33:30.000000000
        Change: 2014-06-10 08:33:30.000000000

    Anyone an idea why the thin disk is not freeing up any space on the datastore (tried with multiple VMs already, all the same)? Also, I have noticed that the newly created disk automatically appends "-flat" to the file name ... Thanks, Sven

    Update - diff of the vmdk descriptors:

        --- loader.vmdk
        +++ thinloader.vmdk
        @@ -7,15 +7,17 @@
         createType="vmfs"
        -RW 62914560 VMFS "loader-flat.vmdk"
        +RW 62914560 VMFS "thinloader-flat.vmdk"
         ddb.adapterType = "lsilogic"
        +ddb.deletable = "true"
         ddb.geometry.cylinders = "3916"
         ddb.geometry.heads = "255"
         ddb.geometry.sectors = "63"
         ddb.longContentID = "6d95855805dfa0079327dfee29b48dca"
        -ddb.uuid = "60 00 C2 98 d5 7d 17 bf-ac 54 70 b1 2d 39 43 d5"
        +ddb.thinProvisioned = "1"
        +ddb.uuid = "60 00 C2 93 c4 13 6c cf-bb 7b 34 c9 2c b4 dc 1e"
         ddb.virtualHWVersion = "8"
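
    A note on what to measure (a sketch, not part of the original question): ls -la always reports the logical size of a -flat.vmdk, so a thin clone of a 30 GB disk will still show 32212254720 bytes there. What actually shrinks is the number of blocks allocated on the VMFS datastore. Assuming the ESXi busybox shell's du and stat behave as usual, something like this should show whether the clone really is thin:

        # Logical size (what ls shows) vs. blocks actually allocated on the datastore
        ls -la thinloader-flat.vmdk
        du -h thinloader-flat.vmdk     # should be well under 30G for a mostly-empty thin disk
        stat thinloader-flat.vmdk      # compare the Blocks: field against the Size: field

    The datastore browser in the vSphere client also lists provisioned size and actual size separately, which is another quick way to confirm the clone is thin.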

    Read the article

  • Best way to execute a command after Linux system halt

    - by Lukas Loesche
    Problem: The SSDs in our servers require a power cycle (i.e. off/on, not reset/warm reboot) after a firmware update.

    Thoughts: Using 'ipmitool chassis power cycle' I can cycle the server's power. However this would cut the power while the system is still running, filesystems are mounted, etc. What I basically want is a delayed power cycle so the system has a chance to halt. But I guess that would have to be implemented on the server's IPMI board, so it's not really an option. My initial idea was to dynamically create a ramdisk containing the tool and libs and somehow integrate that into the halt process. I saw there's an /etc/init.d/halt, so that would be my starting point. Although I believe the kernel at some point in the shutdown process starts to kill off remaining processes, so I'm not even sure if that's a viable way.

    Question: What would be the best way to execute ipmitool (or any other command) after the system has halted and all regular filesystems are unmounted?
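
    One direction that might avoid the ramdisk entirely (a sketch assuming a SysV-style halt script; the hook placement and file name are my own, not anything standard): at the very end of the halt sequence the root filesystem is typically remounted read-only rather than unmounted, so ipmitool, its libraries and the OpenIPMI kernel modules are usually still usable. A small hook called from the tail end of /etc/init.d/halt, just before the final halt command, could then trigger the BMC:

        #!/bin/sh
        # Last-gasp hook, meant to be called after 'umount -a -r' in the halt script,
        # while the root fs is still mounted read-only so ipmitool can still be loaded.
        sync
        sleep 5                                 # let the disks settle
        ipmitool -I open chassis power cycle    # local BMC via the OpenIPMI driver

    Whether your distribution's halt script keeps enough of userland alive at that point is exactly the thing to test on one box first.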

    Read the article

  • Trouble cloning a Macbook Pro hard drive

    - by Mirko Froehlich
    I am trying to upgrade the 250GB hard drive in my MacBook Pro (early 2008 model) to a 750GB drive. I have connected the new drive via an external USB enclosure. The drive is recognized fine, I can format it, etc. However, every time I try to clone the drive, I am getting Input/Output errors. Before the clone operation, I have verified both the internal and the external drive using Disk Utility, and they both check out fine. After the clone operation, the external drive shows multiple "Invalid node structure" errors. I have tried two approaches for cloning the drive:

        - Using Disk Utility, by starting from the OSX install DVD
        - Using Carbon Copy Cloner

    The outcome is the same in both cases. The Carbon Copy Cloner logs show a handful of the following types of errors:

        rsync: mkstemp "<... an external filename ...>" failed: Input/output error (5)
        rsync: stat "<... an external filename ...>" failed: Input/output error (5)

    The actual files affected seem to be different across different runs of the application. Before the last run, I used Disk Utility to (once more) reformat the external drive and explicitly overwrite it with zeros, but this made no difference. I also tried running a surface scan in Tech Tool Pro overnight. It got about 2/3 of the way through before I had to disconnect the drive (had to take my MacBook Pro to work), but so far it didn't report any bad blocks. Assuming it scans the drive in the same order in which blocks would be allocated during actual use, it seems like if bad blocks were to blame for the clone failures, they should have been found already (given that the source drive is only 250GB). As a last attempt, I may try SuperDuper as well, although my understanding is that it uses the same underlying rsync approach as Carbon Copy Cloner, so it's unlikely to perform any better. Are there any other things I should try before I send the drive in for a replacement? Could these problems be caused by my internal drive, even though it works fine and checks out fine in Disk Utility?
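
    One further option that may be worth a try before sending the drive back (not from the original thread, just a sketch): GNU ddrescue, installable via MacPorts or Homebrew, does a block-level copy that keeps going past read errors and logs exactly which sectors failed, which would at least reveal whether the I/O errors come from the source disk, the target disk, or the enclosure. Ideally run it while booted from another volume so the source isn't in use; the disk identifiers below are examples only:

        # Check the real identifiers first
        diskutil list
        diskutil unmountDisk /dev/disk1
        # Block-level clone that tolerates read errors and records them in the map file
        sudo ddrescue -v /dev/disk0 /dev/disk1 ~/rescue.log

    If rescue.log ends up full of read errors from the source, the internal drive is the likelier culprit; errors only on writes would point back at the external drive or its enclosure.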

    Read the article

  • How to find the cause of locked user account in Windows AD domain

    - by Stephane
    After a recent incident with Outlook, I was wondering how I would most efficiently resolve the following problem: Assume a fairly typical small to medium sized AD infrastructure: several DCs, a number of internal servers and Windows clients, several services using AD and LDAP for user authentication from within the DMZ (SMTP relay, VPN, Citrix, etc.) and several internal services all relying on AD for authentication (Exchange, SQL Server, file and print servers, terminal services servers). You have full access to all systems, but they are a bit too numerous (counting the clients) to check individually.

    Now assume that, for some unknown reason, one (or more) user account gets locked out due to the password lockout policy every few minutes. What would be the best way to find the service/machine responsible for this? Assuming the infrastructure is pure, standard Windows with no additional management tools and few changes from the defaults, is there any way the process of finding the cause of such a lockout could be accelerated or improved? What could be done to improve the resilience of the system against such an account lockout DoS? Disabling account lockout is an obvious answer, but then you run into the issue of users having far too easily exploitable passwords, even with complexity enforced.

    Read the article

  • Investigating a potential CPU failure

    - by Jernej
    On an Ubuntu server that I am using for computations I have recently observed that some CPU-intensive programs (GUROBI, CPLEX) often segfault. Being in correspondence with tech support of the respective programs, it was suggested that it may be a hardware issue. The administrator of the server performed a detailed memtest and it turned out that the RAM modules appear to be fine. Hence I've used the tool mprime to test the CPU, and the following two lines appear multiple times during the execution of the stress tests:

        [Worker #4 Oct 18 18:47] FATAL ERROR: Rounding was 0.498046875, expected less than 0.4
        [Worker #4 Oct 18 18:47] Hardware failure detected, consult stress.txt file.

    The stress.txt file in itself is not very verbose about what could be the cause of this error, so I would like to ask whether anyone here happens to know what could be the cause of this issue. Is there some other test I could perform to nail the problem down further? The temperature of the system (and all cores) was fine during the entire stress test (+69.0°C (high = +80.0°C, crit = +98.0°C)). The CPU in question is an Intel Core i7-2600K @ 3.40GHz and is not overclocked or modified in any way. Also, what is interesting is that if I run mprime to only stress the CPU, all tests pass fine. The error is only triggered when I let mprime stress the CPU+RAM.
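
    A couple of follow-up checks that might narrow it down (a sketch; package names assume Ubuntu): the "CPU-only passes, CPU+RAM fails" pattern often implicates the memory controller, DIMM timings/voltage, or a marginal DIMM under real load rather than the cores themselves, and on an i7-2600K the memory controller sits on the CPU die, so a clean memtest does not fully exonerate it. Watching for machine check exceptions and hammering memory from userspace can add evidence either way:

        # Look for machine check exceptions logged by the kernel
        dmesg | grep -i -E 'mce|machine check'
        sudo apt-get install mcelog && sudo mcelog    # decodes MCEs if the kernel recorded any

        # Stress the memory path from userspace (size and iteration count are examples;
        # leave headroom, since memtester locks the RAM it tests)
        sudo apt-get install memtester
        sudo memtester 4G 3

    Running that alongside an mprime CPU-only torture test loads both paths at once, similar to the failing blend test but with separately attributable errors.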

    Read the article

  • Inconsistent DHCP replies with Windows 2008R2 DHCP server

    - by verbalicious
    I've got a Windows 2008 R2 Standard server running DHCP services. We've noticed that certain clients are receiving inconsistent DHCP replies. We have over 175 Windows workstations in this VLAN that don't seem to have trouble getting DHCP leases. However, PXE-booting clients trying to reach our DHCP server are only able to get a lease inconsistently. Additionally, we tried using the "dhcping" tool against our DHCP server and found that roughly two of every three requests time out with "no answer" -- and this holds true when we set the timeout value on dhcping to 20 seconds. After a failed attempt, however, we may get a DHCP lease reply immediately with dhcping. This leads me to believe that this issue isn't confined to PXE-booting clients, but is something more systemic with my LAN layer 2 or DHCP, and that possibly my 175 Windows clients are experiencing this in some form without my knowledge. We have over 30% of our scope available, so the addresses are there. I was unable to find anything in the Windows server "DHCP-Server" log. Of course, my goal is to have my DHCP server reply to every request that it receives on the LAN!
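
    To separate a flaky DHCP service from a layer 2 problem, it can help to watch the actual exchange on the wire. A capture from a Linux (or similar) host in the same VLAN, or from a mirror of the server's switch port, would show whether the DISCOVER/REQUEST packets reach the server and whether OFFER/ACK replies ever leave it. This is only a sketch and the interface name is an example:

        # Capture DHCP traffic (UDP 67/68); -e prints MAC addresses so the PXE
        # client's requests are easy to pick out of the noise
        sudo tcpdump -i eth0 -n -e -vv 'udp port 67 or udp port 68'

    If the requests show up in the capture but the Windows DHCP server never answers, server-side logging and conflict detection are the next place to look; if they never arrive at all, that points back at the switch or relay configuration.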

    Read the article

  • Interactive console based CSV editor

    - by Penguin Nurse
    Although spreadsheet applications for editing CSV files on the console used to be one of the earliest killer applications for personal computers, only a few of them are still actively maintained, and even less documentation about them is. After extensive searching of the web, manpages and source code, I ended up with the following three applications, all of which have fundamental drawbacks:

        - sc: abbreviation for "spreadsheet calculator"; a nice tool with vi keybindings, but it does not put strings containing the delimiter into quotes when exporting to delimiter-separated format, and it can't import CSV files correctly, i.e. all numbers are interpreted as strings
        - GNU oleo: doesn't seem to have been actively maintained since 2001, and there are therefore no packages for major Linux distributions
        - teapot: offers packages for various operating systems, but uses, for example, counter-intuitive naming for cells (numbers for row and column, i.e. 11 seems to be intended to mean row 1, column 1) and carries superfluous code for an FLTK GUI

    Various Emacs modes also do not quote strings containing the delimiter well, or require much more typing for entering the scaffold of a table. Therefore I would be very grateful for anything that overcomes one of these drawbacks, or for any hints towards another console-based CSV editor. It actually needn't do any calculations, just edit cells or whole columns and rows.
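
    Not an editor, but for quickly eyeballing a CSV in the console while evaluating the candidates above, the usual util-linux/less combination keeps the columns readable (data.csv is a placeholder, and this breaks on fields with quoted commas):

        # Align comma-separated columns for viewing only
        column -s, -t < data.csv | less -S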

    Read the article

  • Installing Windows 7 from USB on a Thinkpad T61

    - by Halik
    I am trying to install Windows 7 Professional from a USB 3.0 flash drive on a ThinkPad T61. The problem is that the ThinkPad's BIOS will not detect the flash drive as a bootable medium, and won't allow booting from it.

    What I did:

        - Enabled USB BIOS Support in the BIOS (it was on by default)
        - In the startup menu, added USB HDD to the boot order (it has a '-' sign in front of it)
        - Created the Windows 7 install media with UNetbootin, WinUSB (a Linux tool), dd and Grub4DOS. As you can tell, I currently only have access to a Linux machine to prepare the flash drive.

    What happens: The T61 BIOS shows '-USB HDD' in the boot order menu. The '-' sign suggests that the plugged-in flash drive is currently not considered bootable. The same flash drive (with the same Windows image on it) boots without any problems on a Dell D430 and a Lenovo Y550. Also, an Ubuntu 12.04 install USB created with UNetbootin shows as bootable ('+' sign in the BIOS boot order menu) and boots from the F12 boot menu.

    Additional info: thinkwiki.org says that some ThinkPad BIOSes do not use the MBR on flash drives. It suggests using the Extended-IPL boot loader, but the provided links are broken and there seem to be no mirrors.

    Solution: http://superuser.com/a/430186/54970
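
    For the record, the "BIOS refuses to even list the stick as bootable" symptom is often down to the stick lacking standard MBR boot code or an active partition after it was prepared with dd/UNetbootin. A sketch of how to rewrite both from the Linux machine (independent of the linked solution; /dev/sdX is a placeholder, double-check it with lsblk before writing anything):

        sudo apt-get install syslinux
        sudo parted /dev/sdX set 1 boot on                                  # mark partition 1 active
        sudo dd if=/usr/lib/syslinux/mbr.bin of=/dev/sdX bs=440 count=1     # standard MBR boot code

    The mbr.bin path is where Ubuntu 12.04's syslinux package puts it; the 440-byte count leaves the partition table and disk signature untouched.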

    Read the article

  • Can I take my ReadyNAS drive in Raid1 and plug it straight into new different machine?

    - by jacko
    I would assume that I can just take my HDD out of my NAS (in RAID 1 mirror) and plug it into another enclosure and have it work off the bat, but I'd like to make sure... Any ideas?

    Edit: My current setup is a Netgear ReadyNAS in (hardware) RAID 1. I'm hoping to replace this with a home theatre type PC (possibly running Ubuntu), and would like to migrate my data without having to do a bulk transfer over my network between the two machines. Can anyone confirm the case for the Netgear ReadyNAS?

    Edit: OK, after further reading it seems that the ReadyNAS Duo formats my drive as ext3 in 16k blocks. There are instructions for mounting a drive in a Linux box here: "Mounting Sparc-based ReadyNAS Drives in x86-based Linux". There is also talk about a Linux image here: "ReadyNAS Data Recovery - VMware recovery tool". I'm not sure whether this means the ReadyNAS actually implements software RAID under the hood, or what? So it appears to be do-able, but do any of you Linux gurus know whether this is viable, and whether the fact that the drives are in RAID 1 affects matters?
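
    To answer the "is it software RAID under the hood" part directly once a drive is hooked up to a Linux box, checking for Linux md superblocks is quick (a diagnostic sketch; /dev/sdb and the partition number are placeholders for whatever the drive shows up as):

        # Look for md (Linux software RAID) metadata on the disk and its partitions
        sudo mdadm --examine /dev/sdb*
        # If md metadata is found, a RAID 1 member can usually be started degraded:
        sudo mdadm --assemble --run /dev/md0 /dev/sdb3
        sudo mount -r /dev/md0 /mnt

    One caveat that matches the article you found: a Sparc-based ReadyNAS formats ext3 with 16k blocks, which a stock x86 kernel (4k page size) reportedly refuses to mount directly, so the workarounds in that article may still be needed even if the md layer assembles fine.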

    Read the article

  • NIC are not advertising Correct Speeds

    - by Squidly
    I have an IBM x336 that is not advertising the proper link speeds. One interface negotiates gigabit, the other does not. I've tried to force the slow one to 1000/Full, but then it just shows link down. I have confirmed the switch is set to auto-negotiate, like my other switches. I have also changed out my Ethernet cables. I'm at a loss where to look further. I have verified that the server will connect at 1G on a different switch. This has also happened on two different servers on the same switch. This is my output from mii-tool -v for each interface:

        eth0: negotiated 100baseTx-FD, link ok
          product info: vendor 00:08:18, model 24 rev 0
          basic mode:   autonegotiation enabled
          basic status: autonegotiation complete, link ok
          capabilities: 1000baseT-HD 1000baseT-FD 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD
          advertising:  100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD flow-control
          link partner: 1000baseT-HD 1000baseT-FD 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD

        eth1: negotiated 1000baseT-FD flow-control, link ok
          product info: vendor 00:08:18, model 24 rev 0
          basic mode:   autonegotiation enabled
          basic status: autonegotiation complete, link ok
          capabilities: 1000baseT-HD 1000baseT-FD 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD
          advertising:  1000baseT-FD 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD flow-control
          link partner: 1000baseT-HD 1000baseT-FD 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD
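
    Looking at the dump, eth0's own "advertising" line is missing the 1000baseT modes even though its "capabilities" line has them, which matches the symptom exactly. mii-tool predates gigabit and handles 1000baseT advertisement poorly, so as a sketch of the usual alternative, ethtool can inspect and restore the full advertisement while leaving autonegotiation on:

        ethtool eth0                                # check "Advertised link modes"
        # Re-advertise 10/100/1000, half and full, with autoneg still enabled
        # (0x03F = 10H|10F|100H|100F|1000H|1000F)
        ethtool -s eth0 autoneg on advertise 0x03F

    If the advertisement reverts after a reboot, whatever configures the interface (an ETHTOOL_OPTS line in the ifcfg file on Red Hat-style systems, a stale mii-tool call in a boot script, or a firmware setting for the onboard NIC) is probably resetting it.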

    Read the article

  • Any non-custom way to manage iptables with fail2ban and libvirt+kvm?

    - by Peter Hansen
    I have an Ubuntu 9.04 server running libvirt/kvm and fail2ban (for SSH attacks). Both libvirt and fail2ban integrate with iptables in different ways. Libvirt uses (I think) some XML config and during startup (?) configures forwarding to the VM subnet. Fail2ban installs a custom chain (probably at init) and periodically modifies it to ban/unban probable attackers. I also need to install my own rules to forward various ports to servers running in VMs and on other machines, and set up rudimentary security (e.g. drop all INPUT traffic except the few ports I want open), and of course I'd like the ability to add/remove rules safely without restarting. It seems to me iptables is a powerful tool that's sorely lacking some sort of standardized way of juggling all this stuff. Every project, and every sysadmin, seems to do it differently! (And I think there's lots of "cargo cult" admin going on here, with people cloning crude approaches like "use iptables-save like so".) Short of figuring out the gory details of exactly how both of these (and potentially other) tools manipulate the netfilter tables, and developing my own scripts or just manually executing iptables commands, is there any way to safely work with iptables while not breaking the functionality of these other tools? Any nascent standards or projects defined to bring sanity to this area? Even a helpful web page I missed that might cover at least these two packages together?
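
    There probably isn't a blessed standard, but one pattern that coexists reasonably well with both tools is to never touch the chains fail2ban and libvirt manage, and to keep all local policy in chains of your own, created idempotently at boot and jumped to from the built-in chains. A minimal sketch (the chain name, ports and the old-iptables-friendly duplicate check are all my own choices, nothing either package mandates):

        #!/bin/sh
        # Local firewall policy lives in its own chain; fail2ban and libvirt keep
        # managing their own chains in INPUT/FORWARD untouched.
        iptables -N my-input 2>/dev/null || iptables -F my-input
        iptables -A my-input -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A my-input -p tcp --dport 22 -j ACCEPT
        iptables -A my-input -j DROP
        # Add the jump only if it is not already there (old iptables lacks -C)
        iptables-save | grep -q -- '-j my-input' || iptables -A INPUT -j my-input

    Because the jump is appended, fail2ban's chain (which it normally inserts ahead of other rules) still gets first look at the traffic, and edits to my-input can be made at any time without restarting either service.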

    Read the article

  • How do you do a keyword search the Services.msc (mmc) window in Windows 7?

    - by Warren P
    When you want to run a service, you have very limited capabilities, in all current Windows versions, as far as I can tell. I usually start Services by typing "services.msc" into the Start-Run box, on most versions of Windows, this works. I know how to click the "Name" column in the MMC view of Windows Services. If you know what the first few characters of a service name is, you can usually sort by the name, and type the prefix to scroll the list down (find Windows Search for example). This seems pretty weak to me, so I spent some time searching the interwebs for tools that do a better job of managing services. Usually I have a keyword that I know "fooWare" might be the keyword, and I need to find the (usually badly named) service and start it and stop it. This is often WAY too hard. The best I could do is "NET SERVICES" from the command line, and maybe add a grep in there, but that doesn't list every service, only a few of them. And the MMC snap-in in Win7 now has an Export List button, exporting to csv text file feature which I have used from time to time, to export and then search. I have thought of writing my own tool. I'm hoping a better "service manager" utility exists out there that sysadmins use. I'd like a search box at the top right corner, kind of the same way that the Add-Remove-Programs dialog in Win7 and Vista has a search facility. Does such a services utility exist out there?

    Read the article

  • Attempting to set up xampp and zend server on the same machine

    - by umbregachoong
    I am attempting to set up Zend Server and XAMPP on the same machine, but I am running into problems. I came across documentation on the Zend site that said you cannot do this; however, the folks over at Apache Friends said you can. I have since discovered that I can run some of the Zend Framework examples within XAMPP by downloading the ZendFramework2 library and the skeleton app from git, and I am doing this right now. However, I would like to know how to set them both up without having any conflicts, both for the Apache servers and for phpMyAdmin. (One of the frustrating things is trying to load phpMyAdmin in the deployment dialog by using the zpk tool in Zend.)

    What I did in trying to set up both servers on Windows 7 is as follows: first I tried to set up the httpd.conf files separately for each server, XAMPP running on port 8082 and Zend running on port 8088. At that point XAMPP would work, but Zend Server would not. This is after setting up the virtual host files separately for each server.

    Question 1: Where are the Zend Server error logs?

    Earlier, I was able to get both of them running by configuring the XAMPP server's httpd.conf alone; however, I experienced problems with phpMyAdmin even after configuring phpMyAdmin on XAMPP to work on a port other than 3306.

    Second question: how do I set up the two MySQL/phpMyAdmin instances so they do not conflict with each other?

    Here is the XAMPP virtual host section:

        ##ServerAdmin [email protected]
        DocumentRoot "C:/xampp/htdocs/"
        ServerName localhost 8082
        ##ServerAlias www.dummy-host.example.com
        ##ErrorLog "logs/dummy-host.example.com-error.log"
        ##CustomLog "logs/dummy-host.example.com-access.log" common

    Here is the Zend virtual host section:

        DocumentRoot "C:\Program Files (x86)\Zend\Apache2/htdocs"
        ServerName localhost:8088
        </VirtualHost>

    I have looked at httpd.apache.org/docs/2.2/vhosts/ and at http://survivethedeepend.com/zendframeworkbook/en/1.0/creating.a.local.domain.using.apache.virtual.hosts but I am obviously doing something wrong here. I also have the Java SDK running on this machine with Tomcat and Apache and I have no conflicts; too bad this is not the case for Zend Server and XAMPP. Thanks, umbre gachoong

    Read the article

  • Ubuntu upgrade process failed

    - by Spin0us
    I tried to dist-upgrade my Ubuntu server in my Percona cluster, but it failed with this message:

        The following packages have unmet dependencies:
         libmysqlclient18 : Depends: libmariadbclient18 (= 5.5.33a+maria-1~precise) but it is not installable

    And here is the package listing:

        # dpkg --list | grep -E 'percona|mysql'
        ii  libdbd-mysql-perl                   4.020-1build2              Perl5 database interface to the MySQL database
        iU  libmysqlclient18                    5.5.33a+maria-1~precise    Virtual package to satisfy external depends
        ii  mariadb-common                      5.5.33a+maria-1~precise    MariaDB database common files (e.g. /etc/mysql/conf.d/mariadb.cnf)
        ii  percona-xtrabackup                  2.1.5-680-1.precise        Open source backup tool for InnoDB and XtraDB
        ii  percona-xtradb-cluster-client-5.5   5.5.31-23.7.5-438.precise  Percona Server database client binaries
        ii  percona-xtradb-cluster-common-5.5   5.5.33-23.7.6-496.precise  Percona Server database common files (e.g. /etc/mysql/my.cnf)
        ii  percona-xtradb-cluster-galera-2.x   157.precise                Galera components of Percona XtraDB Cluster
        ii  percona-xtradb-cluster-server-5.5   5.5.31-23.7.5-438.precise  Percona Server database server binaries
        ii  php5-mysql                          5.3.10-1ubuntu3.8          MySQL module for php5

    During the install of the server, MariaDB and the Galera cluster were installed first, then removed to be replaced by Percona XtraDB Cluster. So I think this is the source of the problem. But how can I resolve this without reinstalling everything?

    UPDATE 1

        # apt-cache policy libmariadbclient18
        libmariadbclient18:
          Installed: (none)
          Candidate: (none)
          Version table:
             5.5.32+maria-1~precise 0
                100 /var/lib/dpkg/status
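
    One possible cleanup path, sketched on the assumption that the MariaDB client library is a leftover nothing else still needs (worth confirming with apt-cache rdepends first, since php5-mysql and libdbd-mysql-perl link against a libmysqlclient):

        # See what would actually be affected before removing anything
        apt-cache rdepends libmysqlclient18 mariadb-common
        # Drop the MariaDB leftovers, including the half-installed 'iU' libmysqlclient18
        sudo apt-get remove libmysqlclient18 mariadb-common
        sudo apt-get -f install
        # Reinstall whichever client library the PHP/Perl bindings pull back in, this time
        # from the stock Ubuntu archive or the Percona repo, then retry the upgrade
        sudo apt-get install php5-mysql libdbd-mysql-perl
        sudo apt-get dist-upgrade

    Pinning the Percona repository above the MariaDB one in /etc/apt/preferences should also stop apt from preferring the maria build of libmysqlclient18 again later.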

    Read the article

  • Coming from Win XP to 7 and having new accessibility software problems

    - by Anonymous Jones
    I just switched from Windows XP Pro SP3 (32-bit) to Windows 7 Ultimate (32-bit) on a new PC. Now, both the new on-screen keyboard and a utility for sending mouse clicks are being problematic.

    The problem with 7's OSK is that some things I type only work intermittently or just dodgily: Alt+Tab with multiple Tabs, other Alt/Ctrl/Shift/Win key combinations, and the context menu key. Sometimes apps will not take focus for input at all. I use the OSK in 'hover' mode, set to 0.5 seconds.

    The clicking tool is Point-N-Click, which sends clicks when I dwell anywhere for 1.25 seconds with the mouse pointer: http://www.polital.com/pnc/ The problem with it is that sometimes it fails to click. Most often this happens in some of the control panel sections, on the taskbar, and when UAC pops up. It seems to occur in conjunction with OSK usage a bit too, I think. I'm using an Administrator account. DEP and UAC settings are default. What can I do to fix or work around either of these problems? I'm disabled, so this really is killing usability.

    Read the article

  • Where is the bare cygwin package list located and how do I manipulate it?

    - by matnagel
    Where is the bare Cygwin package list located, and how do I manipulate it programmatically, from a shell, or with some method other than the GUI? I know the GUI (setup.exe), and I'd love to go one or more levels deeper. I can retrieve a list of selected/installed packages ( http://serverfault.com/questions/83456/cygwin-package-management ), but how do I write it back, or to a different machine? What I have in mind is: when I install a new Windows I would like to start with my package list in text form, and apply or inject it somehow into the new system. Where is it? In the registry? In a binary file? In a local database? Or has anybody done this, is there a tool, a tutorial? The essence of what I want is to manipulate the selected package list with something other than the GUI. It is OK for me to use the GUI for the setup process itself. So I could imagine manipulating the package list and then running setup.exe and just clicking through it. Note: I do not want to manipulate the list of already installed packages but of packages that "should be installed". But if this is not possible, maybe there is some workaround, e.g. add an outdated version as installed and the installer will then install the new version.
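
    A sketch of one export/replay route that skips the GUI's internal state entirely (the setup.exe switches below do exist, but their exact behaviour is worth confirming with setup.exe --help for your version):

        # On the old machine: dump the installed package names as one comma-separated line
        cygcheck -c -d | tail -n +3 | awk '{print $1}' | paste -sd, - > packages.txt

        # On the new machine, from cmd.exe, replay it unattended:
        #   setup.exe -q -P <contents of packages.txt>
        # (-q = unattended mode, -P = comma-separated list of packages to install)

    The list the installer itself maintains lives in /etc/setup/installed.db on an installed system, but its format is setup.exe's own business, which is why going through cygcheck plus the -P switch tends to be the safer route.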

    Read the article

  • Domain workstation acting up and I can't track it down.

    - by DevNULL
    I have a developer with a Windows XP (SP2) 64-bit machine. If the machine is left on overnight (or for any period longer than 5-6 hours), it takes 2-3 minutes to open any local drive, and his network drives are no longer accessible. Here's what the system logs report. Any help?

    BTW: The problem just started a week ago and nothing has changed on the domain controller / AD or his machine.

    --- ERROR 1

        Event Type: Error
        Event Source: NETLOGON
        Event Category: None
        Event ID: 5719
        Date: 6/8/2010
        Time: 9:17:26 AM
        User: N/A
        Computer: BFC1
        Description: This computer was not able to set up a secure session with a domain controller in domain UR due to the following: There are currently no logon servers available to service the logon request. This may lead to authentication problems. Make sure that this computer is connected to the network. If the problem persists, please contact your domain administrator.
        ADDITIONAL INFO: If this computer is a domain controller for the specified domain, it sets up the secure session to the primary domain controller emulator in the specified domain. Otherwise, this computer sets up the secure session to any domain controller in the specified domain.
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
        Data: 0000: 5e 00 00 c0 ^..A

    --- ERROR 2

        The machine-default permission settings do not grant Local Activation permission for the COM Server application with CLSID {555F3418-D99E-4E51-800A-6E89CFD8B1D7} to the user NT AUTHORITY\LOCAL SERVICE SID (S-1-5-19). This security permission can be modified using the Component Services administrative tool.
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    --- ERROR 3

        Event Type: Error
        Event Source: RemoteAccess
        Event Category: None
        Event ID: 20106
        Date: 6/8/2010
        Time: 10:12:18 AM
        User: N/A
        Computer: BFC1
        Description: Unable to add the interface {E76F0A78-7A0B-4EBB-A081-BA3BD452FC4C} with the Router Manager for the IP protocol. The following error occurred: Cannot complete this function.
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
        Data: 0000: eb 03 00 00 e...

    Read the article

  • Kickstart: Serve dynamic kickstart images via a CGI or PHP script?

    - by Stefan Lasiewski
    I'd like to kickstart a couple dozen RHEL6/SL6 servers. However, some of these servers are different, and I don't want to create a new ks.cfg file for each class of server. Are there any products which can generate a Kickstart file dynamically on the fly, from a template? For example, if I append a line like this to the KERNEL:

        APPEND ks=http://192.168.1.100/cgi-bin/ks.cgi

    then the script ks.cgi can determine what host this is (via the MAC address) and print out Kickstart options which are appropriate for that host. I could optionally override some options by passing parameters to the script, like this:

        APPEND ks=http://192.168.1.100/cgi-bin/ks.cgi?NODETYPE=production&IP=192.168.2.80

    After we kickstart the server, we activate Cfengine/Puppet on the system and manage it using our favorite configuration management product. We're experimenting with xCAT but it is proving too cumbersome. I've looked into Cobbler, but I'm not sure it does this.

    Update: A roll-your-own solution is discussed in the O'Reilly book "Managing RPM-Based Systems with Kickstart and Yum", Chapter 3, "Customizing Your Kickstart Install - Dynamic ks.cfg", which echoes some of the comments in this thread:

        "To implement such a tool is beyond the scope of this Short Cut, but I can walk through the high-level design. Any such solution would mix a data store (the things that change) with a templating solution (the things that don't change). The data store would hold the per-machine data, such as the IP address and hostname. You would also need a unique identifier, perhaps the hostname, such that you could pick up a given machine's data. The data store could be a flat file, XML data, or a relational database such as PostgreSQL or MySQL. In turn, to invoke the system, you pass a machine's unique identifier as a URL parameter. For example:

            boot: linux ks=http://your.kickstart.server/gen_config?host-server25

        In this example, the CGI (or servlet, or whatever) generates a ks.cfg for the machine server25."

    But where, oh where, is the code for ks.cgi?
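
    Since the question ends with "where is the code for ks.cgi": a minimal sketch of what such a CGI can look like, written as a plain shell script. Everything here (paths, the NODETYPE handling, the kickstart body) is illustrative, not taken from the book or from any existing product:

        #!/bin/sh
        # Minimal dynamic-kickstart CGI sketch. Apache hands the query string to the
        # script in QUERY_STRING and the requesting client's IP in REMOTE_ADDR
        # (mapping that back to a MAC would need a lookup against the DHCP leases).
        NODETYPE=$(echo "$QUERY_STRING" | tr '&' '\n' | sed -n 's/^NODETYPE=//p')
        : "${NODETYPE:=default}"

        # A CGI must emit headers, then a blank line, then the body.
        echo "Content-Type: text/plain"
        echo ""

        cat <<EOF
        install
        url --url http://192.168.1.100/rhel6
        network --bootproto dhcp
        %packages
        @core
        %end
        EOF

        # Per-class additions keyed off the NODETYPE query parameter
        if [ "$NODETYPE" = "production" ]; then
            echo "%post"
            echo "echo production > /etc/nodetype"
            echo "%end"
        fi

    The same idea scales up to a real templating language and a host database; the shell version is just the shortest thing that answers a PXE request with a kickstart skeleton.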

    Read the article

  • Problems compiling coreutils-8.5 on Solaris 5.10 on Intel platform

    - by PP
    I am having trouble compiling coreutils-8.5 on Solaris 5.10 on the Intel platform using cc. Firstly, I had the following error during ./configure:

        checking whether <wchar.h> uses 'inline' correctly... no
        configure: error: <wchar.h> cannot be used with this compiler (/tool/sunstudio12.1/bin/cc -xc99=all -g -D_REENTRANT).

    This seemed similar to the problem in this question. The solution was to edit configure and replace the reference to -xc99=all with -xc99=all,no_lib. This permitted the configure to complete. Then I ran /usr/sfw/bin/gmake and it progressed until I received the following message:

        Making all in src
        gmake[2]: Entering directory `/home/peterp/src/coreutils-8.5/src'
        gmake all-am
        gmake[3]: Entering directory `/home/peterp/src/coreutils-8.5/src'
          CCLD   chroot
        Undefined                       first referenced
         symbol                             in file
        eaccess                             ../lib/libcoreutils.a(euidaccess.o)
        ld: fatal: Symbol referencing errors. No output written to chroot

    What could cause this problem? PS: I was only compiling coreutils because I wanted colour ls.
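
    A guess at the mechanism, offered as something to check rather than a confirmed fix: gnulib's euidaccess.o calls eaccess() when configure believes the function exists, so if nothing in the Solaris 10 libraries actually exports that symbol, overriding the configure cache variable makes gnulib fall back to its own implementation instead:

        # Does the obvious system library export eaccess at all?
        nm /lib/libc.so.1 | grep -w eaccess
        # If not, rerun configure telling it so, then rebuild
        ac_cv_func_eaccess=no ./configure CC=/tool/sunstudio12.1/bin/cc CFLAGS="-xc99=all,no_lib -g -D_REENTRANT"
        gmake clean && gmake

    The cache variable name follows the standard autoconf ac_cv_func_<name> convention; whether the fallback then links cleanly on this toolchain is the part that needs verifying.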

    Read the article

  • Apple Magic Trackpad 3-Finger Drop Lag

    - by activestylus
    After enabling three-finger dragging for my Trackpad, I notice that it drags well, but when I release there is about 1-2 seconds of lag before it actually drops. I understand this is supposed to be a feature so when you run out of space to drag, you have time to move your hand. But, for those of us powerusers, who move really fast, this is a BUG, not a feature. There should be some way to turn it off! For some perspective, I personally own a Fingerworks trackpad as well (the company Apple bought to make the Trackpad) and it does not suffer this problem. Drops are instantaneous no matter what program I am in. This is hugely frustrating for me, because I thought I was upgrading here and Apple's version does not perform as well as the Fingerworks model (which I purchased in 2004) I actually made a short video illustrating the problem, and why it is so frustrating for anyone who uses the pad as an artistic tool. Anyone here face this problem? If not, how would you recommend that I address Apple directly about this? PS - Already looked at this thread and the conclusion does not help me. I do not have one-finger drag enabled. PPS - I understand that for most people this is not an issue because they use the 'click' feature of the Trackpad. However, after years of using Fingerworks and not having to click ever, I find that it slows me down.

    Read the article

  • Autodiscover service seems to reply with User Principal Name instead of email address

    - by Jeff McJunkin
    After this latest round of Windows updates (on 1/11/11, in fact) my Exchange 2007 server of course rebooted. This may have had the side effect of making changes I'd inadvertently made take effect. Since then, the Autodiscover service in Exchange 2007, as seen from Outlook 2007, seems to reply with the User Principal Name instead of the user's primary SMTP address. I'm specifically seeing this from within the "Test Email AutoConfiguration" tool in Outlook (the UPN appears in the first text box, labeled "E-mail") and when creating a new profile in Outlook. If I disregard the UPN and instead fill in my email address, Autodiscover works as expected and I can connect without issue. I've confirmed using ADSI Edit that the SMTP email address is properly set for my users. I even went a bit crazy and set the UPN to the email address using ADSI Edit. I've re-installed the Client Access role on the server in question. The Exchange server is Server 2008, 64-bit of course. Clients are mostly XP 32-bit, though the issue happens from a Windows 7 machine as well.

    Read the article

  • Sync OneNote Notebooks to/on SkyDrive

    - by Sam
    I've got OneNote running on all computers in our house, and we use it all the time with several people and computers. The only drawback: I want to keep the copies of OneNote in sync without having to run a dedicated server myself. Right now one of my computers has a folder share that all the others sync to, but this is highly impractical since that computer is not always running. So my question is: is it possible to put the notebook files in a (private) SkyDrive folder and have all the computers sync to there? This way all computers could keep in sync whenever they get access to the web. Can this be done? And, of course, how?

    [Update] Maybe I should not have taken knowledge about OneNote for granted: OneNote uses a proprietary file format, but has very good in-file syncing that works on network shares. A generic 'just sync the complete file' approach won't be useful at all, because I'd just have 'file has changed on server and on client' conflicts all the time. The sync needs to understand OneNote files and be able to sync the content, e.g. OneNote itself needs to sync the files, not some generic sync tool.

    Read the article

  • How to best convert a fully encrypted drive into a Virtual Machine?

    - by SiegeX
    I have a Windows XP laptop that uses GuardianEdge's Encryption Plus to fully encrypt the drive from bootup. What I would like to do is install a much larger (unencrypted) hard drive with Windows 7 on it and turn this fully encrypted drive into a virtual machine that can be run in either VirtualBox or VMware on the Windows 7 host. I've read many howtos that talk about using an imaging tool like Acronis True Image to image the drive, then passing that through VMware's vCenter Converter to turn it into a format that VMware can understand. Unfortunately this all seems to fall apart when you are dealing with a fully encrypted drive, because Acronis cannot recognize the file system and attempts to do a sector-by-sector copy of the entire hard drive. This is extremely wasteful, since the drive is 120GB but the file system is only using 10GB of it. Even if I were OK with going with an inefficient 120GB sector-by-sector copy, I'm not sure that this would even work under VMware or VirtualBox. Unfortunately, the GuardianEdge boot-time login comes up only after the hard drive has been selected as the boot device, preventing me from being able to decrypt the drive prior to booting an Acronis True Image CD so that it can recognize the underlying file system. I'm sure I'm not the first person to want to do this, but I am having a heck of a time finding solutions to this problem. All suggestions/answers welcome. Thanks
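
    For what it's worth, a sector-by-sector image really is the only faithful option with pre-boot full-disk encryption, since nothing outside the GuardianEdge filter can see a filesystem to copy. A sketch of the low-tech route, done from a Linux live CD/USB with both drives attached (device names and output paths are examples only):

        # 1. Raw image of the whole encrypted disk - at 120 GB there is no way around the size
        sudo dd if=/dev/sda of=/mnt/external/xp-disk.img bs=4M conv=noerror,sync

        # 2a. Convert the raw image for VirtualBox...
        VBoxManage convertfromraw /mnt/external/xp-disk.img xp-disk.vdi

        # 2b. ...or for VMware
        qemu-img convert -f raw -O vmdk /mnt/external/xp-disk.img xp-disk.vmdk

    Whether the GuardianEdge pre-boot environment then behaves inside the VM (disk geometry, its own integrity checks) is a separate question this sketch can't answer, so it would be prudent to test the VM before wiping the original drive.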

    Read the article

  • How to pass alias through sudo

    - by Tanktalus
    I have an alias that passes in some parameters to a tool that I use often. Sometimes I run as myself, sometimes under sudo. Unfortunately, of course, sudo doesn't recognise the alias. Does anyone have a hint on how to pass the alias through? In this case, I have a bunch of options for perl when I'm debugging:

        alias pd='perl -Ilib -I/home/myuser/lib -d'

    Sometimes, I have to debug my tools as root, so instead of running:

        pd ./mytool --some params

    I need to run it under sudo. I've tried many ways:

        sudo eval $(alias pd)\; pd ./mytool --some params
        sudo $(alias pd)\; pd ./mytool --some params
        sudo bash -c "$(alias pd)\; pd ./mytool --some params"
        sudo bash -c "$(alias pd); pd ./mytool --some params"
        sudo bash -c eval\ "$(alias pd)\; pd ./mytool --some params"
        sudo bash -c eval\ "'$(alias pd)\; pd ./mytool --some params'"

    I was hoping for a nice, concise way to ensure that my current pd alias was fully used (in case I need to tweak it later), though some of my attempts weren't concise at all. My last resort is to put it into a shell script and put that somewhere that sudo will be able to find. But aliases are soooo handy sometimes, so it is a last resort.
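
    One standard-bash trick that might be the concise answer being looked for (documented under "Aliases" in the bash manual, so not specific to this setup): if an alias's value ends in a space, bash also checks the following word for alias expansion. Defining sudo itself that way makes "sudo pd ..." expand pd before sudo ever sees it:

        # The trailing space in the sudo alias is the important part
        alias sudo='sudo '
        alias pd='perl -Ilib -I/home/myuser/lib -d'

        sudo pd ./mytool --some params    # pd now expands inside the sudo command line

    This only helps in interactive shells where both aliases are defined; scripts and other users' shells still won't see pd, for which the shell-script-on-PATH fallback remains the robust option.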

    Read the article

  • Windows 7 library nightmare

    - by Lobuno
    In our Active Directory we deploy a policy to our clients where the personal directory (My Documents) is redirected to a file server of ours: \\server\share\username\Documents

    On older systems everything worked fine. On Windows 7 some users are experiencing the following symptoms:

        - The Documents library is EMPTY
        - Where the Documents library should be shown in Explorer, an empty white icon is displayed. No caption.
        - Right-clicking the Documents library to edit the folders that are part of the library brings the dialog up. However, that dialog is unusable: no folder is present there and clicking "Add folder" does nothing.
        - Deleting the library and auto-creating it doesn't solve the problem
        - The shared directory can be accessed via UNC paths and it can be mounted as a shared drive as well. The library is still broken.
        - The shared drives are on an indexed Windows 2008 server...
        - Using the Windows Library tool utility doesn't solve the problem.

    What can the cause of this problem be, and how can it be solved?

    Read the article
