Search Results

Search found 19017 results on 761 pages for 'purchase order'.


  • puppet cert mismatch in ec2

    - by Stick
    I'm setting up a puppetmaster (2.7.6) in EC2 via gems (on RHEL 6) and I'm running into problems with the cert names when getting the master to talk to itself. My puppet.conf looks like this:

        [main]
        logdir = /var/log/puppet
        rundir = /var/run/puppet
        vardir = /var/lib/puppet
        ssldir = $vardir/ssl
        pluginsync = true
        environment = production
        report = true
        certname = master

    When I start the puppetmaster process, the ssl directory looks like:

        ssl/private_keys/master.pem
        ssl/crl.pem
        ssl/public_keys/master.pem
        ssl/ca/ca_crl.pem
        ssl/ca/signed/master.pem
        ssl/ca/ca_crt.pem
        ssl/ca/ca_pub.pem
        ssl/ca/ca_key.pem
        ssl/certs/ca.pem
        ssl/certs/master.pem

    I have an /etc/hosts entry on the box pointing the 'puppet' hostname to localhost so that I don't have to change the 'server' option. When I run the agent I get the following:

        # puppet agent --test
        info: Retrieving plugin
        err: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate': Server hostname 'puppet' did not match server certificate; expected master
        err: /File[/var/lib/puppet/lib]: Could not evaluate: Server hostname 'puppet' did not match server certificate; expected master Could not retrieve file metadata for puppet://puppet/plugins: Server hostname 'puppet' did not match server certificate; expected master
        err: Could not retrieve catalog from remote server: Server hostname 'puppet' did not match server certificate; expected master
        warning: Not using cache on failed catalog
        err: Could not retrieve catalog; skipping run
        err: Could not send report: Server hostname 'puppet' did not match server certificate; expected master

    If I specify the certname as the server (with a corresponding hosts entry) I get:

        # puppet agent --test --server master
        info: Retrieving plugin
        err: /File[/var/lib/puppet/lib]: Could not evaluate: Could not retrieve information from environment production source(s) puppet://master/plugins
        info: Caching catalog for master
        info: Applying configuration version '1321805956'
        notice: Finished catalog run in 0.05 seconds

    which is success of a sort; that source error will bite me later when I'm applying manifests. I've tried a couple of other variations using the EC2 private hostname and gotten mixed results. I'd like to avoid setting server = 'x' and instead use DNS/hosts to control what 'puppet' resolves to in order to decide which server to use (it plays easier with availability zones, etc.).
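
    A setting worth trying here is dns_alt_names, which puts extra names into the master's certificate so it can answer as both 'master' and 'puppet'. A minimal sketch, assuming you can afford to regenerate the master's cert (stop the master first):

        # wipe the old master cert, then reissue it with alternate names
        puppet cert clean master
        puppet master --no-daemonize --verbose \
            --certname=master --dns_alt_names=master,puppet
        # Ctrl-C once the cert is generated, restart the master, re-run the agent

    With both names in the cert, the agent can keep reaching the master as 'puppet' via /etc/hosts without tripping the hostname check.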

  • Apache & SVN on Ubuntu - Post-commit hook fails silently, pre-commit hook “Permission Denied”

    - by 113169587962668775787
    I've been struggling for the past couple of days to get post-commit email notifications working on my SVN server (running via HTTP with Apache2 on Ubuntu 9.10). SVN commits work fine, but for some reason the hooks are not being properly executed. Here are the configuration settings: users access the repo via HTTP with the Apache dav_svn module (I created users/passwords via htpasswd in a dav_svn.passwd file). dav_svn.conf:

        <Location /svn/repos>
            DAV svn
            SVNPath /home/svn/repos
            AuthType Basic
            AuthName "Subversion Repository"
            AuthUserFile /etc/apache2/dav_svn.passwd
            Require valid-user
        </Location>

    I created a post-commit hook file that writes a simple message to a file in the repository root, /home/svn/repos/hooks/post-commit:

        #!/bin/sh
        REPOS="$1"
        REV="$2"
        /bin/echo 'worked' > ${REPOS}/postcommit.log

    I set the entire repository to be owned by www-data (the Apache user) and assigned 755 permissions to the post-commit script. When I test the post-commit script as the www-data user in an empty environment, it works:

        sudo -u www-data env - /home/svn/repos/hooks/post-commit /home/svn/repos 7

    But when I commit from a client machine, the commit is successful, yet the post-commit script does not seem to be executed. I also tried running a simple script for the pre-commit hook, and I get an error, even with an empty pre-commit script:

        Commit failed (details follow):
        Can't create null stdout for hook '/home/svn/repos/hooks/pre-commit': Permission denied

    I did a few searches on Google for this error and I presume this is an issue with the Apache user (www-data) not having adequate permissions, specifically to open /dev/null. I also read that the reason post-commit fails silently is that it doesn't report on stdout. Anyway, I've also tried giving the Apache user (www-data) ownership of the entire repository, and edited the Apache virtual host to allow operations on the server root, and I'm still getting permission denied. /etc/apache2/sites-available/primarydomain.conf:

        <Directory />
            Options FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

    Any ideas/suggestions would be greatly appreciated! Thanks
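
    Since the error is about creating a null stdout, the permissions on /dev/null itself are worth ruling out first; a broken device node produces exactly this symptom. A quick check, and the usual repair if the mode or device numbers turn out wrong (a sketch):

        ls -l /dev/null
        # healthy output looks like: crw-rw-rw- 1 root root 1, 3 ... /dev/null
        # if the mode or type is wrong, recreate the node:
        sudo rm /dev/null
        sudo mknod -m 666 /dev/null c 1 3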

  • Amazon AWS EC2 + Puppet, get Puppet to know AWS instance tags

    - by Piotr Jasiulewicz
    I am having a problem with my AWS deployment; I'm fairly new to AWS and Puppet. So, coming to my question: can you distinguish Puppet nodes by AWS machine tags or CNAME domains? A little background about the plan:

    - have multiple clusters of machines: one PHP cluster, one legacy PHP cluster, one Java cluster, one Perl cluster
    - control configuration with Puppet; still pretty new to Puppet, but as a developer I like the idea of being able to version-control the configuration of servers
    - have autoscaling enabled on those clusters; obviously the main benefit of the cloud, and what makes the high cost worth it once you need any reasonable performance (those Amazon machines are slower than my phone...)
    - deployment controlled by Capistrano, which makes things a lot easier

    In AWS you get those super nasty public/private machine DNS names; no way you can identify machines by those. To ease the problem, it seems AWS wants you to tag everything, so I did. I found a script that makes a CNAME record for each machine from its "ShortName" tag, thanks to the Route 53 API. Every machine has a ShortName tag that becomes its CNAME; unfortunately, Puppet still resolves the private DNS name. I'd like to have node 'perl-cluster' {} in Puppet. Anyone any clue how to achieve this? Thanks
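
    Puppet matches node blocks against the agent's certname, so one approach (a sketch, not the only way) is to derive each instance's certname from its ShortName tag at boot, before the first agent run. This assumes the old EC2 API tools are installed with credentials configured; the tag-fetch pipeline is illustrative, and the output parsing should be checked against your tooling:

        # fetch this instance's ShortName tag (hypothetical parsing of ec2-describe-tags output)
        INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
        SHORTNAME=$(ec2-describe-tags \
            --filter "resource-id=${INSTANCE_ID}" \
            --filter "key=ShortName" | awk '{ print $NF }')

        # pin the agent's certname before its first run
        echo "certname = ${SHORTNAME}" >> /etc/puppet/puppet.conf

    With certnames like perl-cluster-01, a regex node declaration such as node /^perl-cluster/ { } in site.pp then matches the whole cluster.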

  • Cachefilesd (cachefiles) everything seems to be set up, still not working

    - by Evgenius
    I'm trying to set up cachefilesd to work with my network folder shared using NFS. I have seemingly everything set up: cachefilesd starts normally, yet caching isn't functioning. Here is the output of the commands, which I ran in this order:

    1. sudo mount

        ...
        cache-1:/mnt/datashared on /mnt/nfsshare type nfs (rw,sync,ac,acregmin=3,acregmax=60,acdirmin=30,acdirmax=300,lookupcache=pos,vers=3,fsc)
        ...

    2. lsmod | grep cachefiles

        cachefiles 40555 1
        fscache 57430 4 nfs,cifs,cachefiles,nfsv4

    3. [edited - deleted]

    4. uname -r

        3.8.0-34-generic

    5. grep CONFIG_NFS_FSCACHE /boot/config-3.8.0-34-generic

        CONFIG_NFS_FSCACHE=y

    6. lsb_release -a

        No LSB modules are available.
        Distributor ID: Ubuntu
        Description: Ubuntu 13.04
        Release: 13.04
        Codename: raring

    7. sudo service cachefilesd restart

        * Restarting FilesCache daemon cachefilesd [ OK ]

    8. dmesg

        [6211206.141781] FS-Cache: Withdrawing cache "mycache"
        [6211210.135236] FS-Cache: Cache "mycache" added (type cachefiles)
        [6211210.135242] CacheFiles: File cache on sdb1 registered
        [6214644.348929] CacheFiles: File cache on sdb1 unregistering
        [6214644.348935] FS-Cache: Withdrawing cache "mycache"
        [6214654.575909] FS-Cache: Cache "mycache" added (type cachefiles)
        [6214654.575915] CacheFiles: File cache on sdb1 registered

    9. ps aux | grep cachefilesd

        root 65399 0.0 0.0 4460 540 ? Ss 23:14 0:00 /sbin/cachefilesd
        1000 65464 0.0 0.0 8160 916 pts/0 S+ 23:16 0:00 grep --color=auto cachefilesd

    Finally, the biggest problem is:

    10. cat /proc/fs/nfsfs/volumes

        NV SERVER PORT DEV FSID FSC
        v3 64476645 801 0:24 233e020f0da07a93 no

    tl;dr: I think I configured everything properly, but the fsc mount option plus cachefilesd don't seem to work.
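
    Since the FSC column still reads "no", the mount order is worth trying: the NFS client only negotiates fscache at mount time, so the share has to be (re)mounted while cachefilesd is already running. A minimal sketch, with the mount options taken from the output above:

        sudo umount /mnt/nfsshare
        sudo service cachefilesd restart
        sudo mount -t nfs -o fsc,vers=3 cache-1:/mnt/datashared /mnt/nfsshare
        cat /proc/fs/nfsfs/volumes    # FSC should now read "yes"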

  • Port forwarding + shared connection with Ubuntu

    - by Joey Adams
    Because my wireless router's ethernet ports are defective, I set up a shared wireless connection from my laptop (which has wifi) to my eMac (which does not) via a crossover ethernet cable. The laptop is behind a router as 192.168.1.131, and the eMac is behind the laptop as 10.42.43.1. The laptop is running Ubuntu 9.10 (Karmic).

    I achieved the shared connection through NetworkManager Applet: I right-clicked on the network icon at the top right, went to Edit Connections, selected the wired connection named "Auto eth0", clicked "Edit...", went to the "IPv4 Settings" tab, and selected the method "Shared to other computers". The eMac can now access the Internet.

    Now I want to enable port forwarding. There's a game I want to play that needs port 6112 forwarded (both TCP and UDP) in order to host games. I set up the router to enable port forwarding for 192.168.1.131 (the laptop), but port forwarding still isn't available on the eMac. I suppose I need to pretend my laptop is a router and configure port forwarding on it, indicating that incoming connections to the laptop (192.168.1.131) should be forwarded to the eMac on the shared connection (10.42.43.1). Thus, packets coming into the router on port 6112 would be redirected to the laptop (by the router), then to the eMac (by the laptop).

    My question is, how would I do that on Ubuntu (in light of NetworkManager's presence)? Also, if I can't get this to work, does anyone mind hosting a comp stomp? :D
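
    The second hop is a plain iptables DNAT on the laptop; NetworkManager's connection sharing already turns on forwarding and NAT, so these rules ride on top of it. A sketch, assuming wlan0 is the laptop's wifi interface and the eMac is 10.42.43.1 as described (rules added this way do not survive a reboot):

        sudo iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 6112 -j DNAT --to-destination 10.42.43.1
        sudo iptables -t nat -A PREROUTING -i wlan0 -p udp --dport 6112 -j DNAT --to-destination 10.42.43.1
        sudo iptables -A FORWARD -d 10.42.43.1 -p tcp --dport 6112 -j ACCEPT
        sudo iptables -A FORWARD -d 10.42.43.1 -p udp --dport 6112 -j ACCEPT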

  • Two domains, two servers, one dynamic IP address

    - by giantman
    I have two domains, hi.org and bye.net, one dynamic IP address, and two servers. I want to attach the domain bye.net to server 1 and hi.org to server 2. I'm using WampServer 2.0i, with the two servers behind one router on a dynamic IP address.

    httpd.conf additions:

        <IfModule mod_proxy.c>
            ProxyRequests Off
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
        </IfModule>

    vhost file additions:

        NameVirtualHost *:80

        # default
        <VirtualHost *:80>
            DocumentRoot "c:/wamp/www/fallback"
        </VirtualHost>

        # Server 1
        <VirtualHost *:80>
            DocumentRoot "c:/wamp/www"
            ServerName http://bye.net
            ServerAlias bye.net
        </VirtualHost>

        # Server 2
        <VirtualHost *:80>
            ProxyPreserveHost On
            ProxyPass / http://192.168.1.119/
            DocumentRoot "g:/wamp/www"
            ServerName http://hi.org
            ServerAlias hi.org
        </VirtualHost>

    After doing all this I fall back to server 1 only: I don't get the page for hi.org, I only get the page for bye.net, and I don't even get the default fallback page that should be served when a visitor enters the IP address rather than a domain name. I use Windows 7 (server 2) and Windows XP (server 1).

    UPDATE: I needed to remove the DocumentRoot "g:/wamp/www" line :D it was there by mistake! Things are working fine now. But one thing remains: the URL gets replaced by the local IP address. Any way to stop that happening?
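
    The local IP showing up in the address bar usually means server 2 is issuing redirects that name its own address, and the proxy passes them through untouched. mod_proxy's ProxyPassReverse directive rewrites those Location headers on the way back out; a sketch of the server 2 block with it added:

        # Server 2
        <VirtualHost *:80>
            ServerName hi.org
            ServerAlias hi.org
            ProxyPreserveHost On
            ProxyPass / http://192.168.1.119/
            ProxyPassReverse / http://192.168.1.119/
        </VirtualHost>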

  • Is there a simple LDAP-to-HTTP gateway out there?

    - by larsks
    We have a local LDAP directory that provides basic contact information about our user community. We would like to integrate this into some third-party hosted services that allow us to implement widgets that run arbitrary JavaScript. In order to connect JavaScript to our LDAP directory, I would like to set up a simple LDAP-to-HTTP proxy that would accept HTTP GET requests, translate them into an appropriate LDAP query, and respond with directory information as JSON-encoded data. In an ideal world, something like this:

        GET /[email protected]

    would get me something like this:

        {
            "cn": "Bob Person",
            "title": "System Administrator",
            "sn": "Person",
            "mail": "[email protected]",
            "telephoneNumber": "617-555-1212",
            "givenName": "Bob"
        }

    (This obviously assumes that the web application has locally configured information about what base DN to use, how to authenticate, etc.) I guess I could write one... but surely something like this already exists?

    UPDATE: The consensus seems to be that there isn't a pre-existing solution out there and that I should just get off my lazy derriere and write one. So I did, and it's here. It's not especially pretty, but it works for my prototyping and I figure maybe someone else will find it useful someday.
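
    For anyone sketching their own, the core of such a gateway is a single search; here is the equivalent query expressed with the stock OpenLDAP command-line client (host, base DN, and address are placeholder assumptions to adapt):

        ldapsearch -x -H ldap://ldap.example.com \
            -b "dc=example,dc=com" \
            "(mail=bob@example.com)" \
            cn title sn mail telephoneNumber givenName

    The HTTP wrapper then only has to map the query string onto the filter and re-encode the returned attributes as JSON.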

  • Can’t connect to SQL Server 2008 - looks like Shared Memory problem

    - by user38556
    I am unable to connect to my local instance of SQL Server 2008 Express using SQL Server Management Studio. I believe the problem is related to a change I made to the connection protocols. Before the error occurred, I had Shared Memory enabled and Named Pipes and TCP/IP disabled. I then enabled both Named Pipes and TCP/IP, and this is when I started experiencing the problem.

    When I try to connect to the server with SSMS (with either my SQL Server sysadmin login or with Windows authentication), I get the following error message:

        A connection was successfully established with the server, but then an error occurred during the login process. (provider: Named Pipes Provider, error: 0 - No process is on the other end of the pipe.) (Microsoft SQL Server, Error: 233)

    Why is it returning a Named Pipes error? Why would it not just use Shared Memory, which has a higher priority in the list of connection protocols? It seems like it is not listening on Shared Memory for some reason. When I set Named Pipes to enabled and try to connect, I get the same error message.

    My Windows account does not have administrator privileges on my computer; perhaps this makes a difference in some way (as some of the discussions in this post about a "SuperSocketNetLib\Lpc" registry key seem to suggest). I have tried restarting the SQL Server service, by the way, and also had someone log onto the machine with an admin account to restart the SQL Server service. Still no luck.
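
    One way to take protocol negotiation out of the picture while testing is to force a specific protocol with a prefix on the server name: "lpc:" requests shared memory, "np:" named pipes, "tcp:" TCP/IP. A sketch using sqlcmd with Windows authentication (the instance name SQLEXPRESS is an assumption):

        sqlcmd -S lpc:.\SQLEXPRESS -E

    If that connects while the unprefixed name does not, the instance is listening on shared memory after all and the client is simply choosing another protocol first.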

  • Nagios Apache Config with PHP-FPM downloading cgi files

    - by tubaguy50035
    I'm trying to set up Nagios 3 under Apache 2.4 with PHP-FPM. I've run into a couple of problems I could use help with. The PHP side of things seems to be working; I can see the home page and the sidebar. But all of the CGI files are downloading instead of executing, and when I try to click on "Read What's New In Nagios Core 3", I get an error that /nagios3/docs/whatsnew.html was not found on this server. Below is my vhost config for Nagios:

        <VirtualHost *:300>
            # apache configuration for nagios 3.x

            ScriptAlias /cgi-bin/nagios3 /usr/lib/cgi-bin/nagios3
            ScriptAlias /nagios3/cgi-bin /usr/lib/cgi-bin/nagios3

            # Where the stylesheets (config files) reside
            Alias /nagios3/stylesheets /etc/nagios3/stylesheets

            # Where the HTML pages live
            Alias /nagios3 /usr/share/nagios3/htdocs

            ProxyPassMatch ^/(.*\.php)$ fcgi://127.0.0.1:9001/usr/share/nagios3/htdocs/$1

            <DirectoryMatch (/usr/share/nagios3/htdocs|/usr/lib/cgi-bin/nagios3|/etc/nagios3/stylesheets)>
                Options FollowSymLinks ExecCGI
                AllowOverride AuthConfig
                Order Allow,Deny
                Allow From All
                AuthName "Nagios Access"
                AuthType Basic
                AuthUserFile /etc/nagios3/htpasswd.users
                require valid-user
            </DirectoryMatch>

            <Directory /usr/share/nagios3/htdocs>
                Options +ExecCGI
            </Directory>
        </VirtualHost>

    I also added this in my global Apache config:

        AddHandler cgi-script .cgi

    Any help or instructions you can give me would be much appreciated. If more information is needed, let me know.
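
    CGI files downloading rather than executing on Apache 2.4 is often a module problem rather than a vhost problem: no CGI handler module is loaded, and the Order/Allow directives above are 2.2 syntax that 2.4 only honours with the compatibility module. A sketch of the checks, with Debian/Ubuntu-style module names assumed:

        sudo a2enmod cgid            # CGI execution (use "cgi" instead with the prefork MPM)
        sudo a2enmod access_compat   # lets 2.4 honour Order/Allow from 2.2-era configs
        sudo apachectl configtest && sudo service apache2 restart

    The missing whatsnew.html looks like a separate issue: the /nagios3 alias points at /usr/share/nagios3/htdocs, so the docs would have to live under htdocs/docs for that link to resolve.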

  • Issues resolving DNS entries for multi-homed servers

    - by I.T. Support
    This is difficult to explain, so bear with me. We have two domain controllers, each multi-homed to straddle two internal subnets (subnet A and subnet B) and provide DNS, DHCP, and LDAP authentication. Each domain controller therefore has two DNS entries with identical host names, corresponding to subnet A and subnet B respectively (example entries shown):

        dc1 host 192.168.8.1
        dc1 host 192.168.9.1
        dc2 host 192.168.8.2
        dc2 host 192.168.9.2

    We also have a third subnet for our DMZ (subnet C), which neither domain controller has an IP address on; our firewall/routing tables provide access to subnet A from subnet C and vice versa, but don't allow access to subnet B from subnet C.

    Here's my issue: how can I force/determine which DNS entry is used when a server on subnet C queries either domain controller by host name? Right now it seems to randomly pick one of the two entries, swaps out the name for the IP address, and that's that. The problem is that if it randomly selects the entry corresponding to the 9.x subnet B (no access from subnet C), the server fails to resolve. If it picks the entry for the 8.x subnet A, it resolves (firewall/routing tables are defined for communication between these two subnets).

    Here's what I'd like to know:

    - What are best practices (if any) for dealing with DNS resolution on subnets that the DNS servers don't have a presence on?
    - Can I control something akin to a metric value to force an order of DNS resolution when there are multiple entries for the same host name corresponding to different IP subnets?
    - Should I even have two DNS host entries for the same name?

    Here's what I'd like to avoid:

    - Making edits to the HOSTS files of servers on subnet C to force DNS resolution of the hostname to the appropriate subnet
    - Adding NICs to the DCs to have them straddle the DMZ as well, thus obtaining a third DNS entry that corresponds to subnet C

    Again, my apologies if this was too verbose/unclear. Thanks!
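
    For what it's worth, Windows DNS has no per-record metric, but it does have two server-side ordering knobs worth inspecting before anything else: netmask ordering (prefer records on the querier's subnet) and round robin. A sketch of checking and setting them with dnscmd on a DC:

        dnscmd /Info
        dnscmd /Config /LocalNetPriority 1
        dnscmd /Config /RoundRobin 0

    Netmask ordering only helps clients that share a subnet with one of the addresses, so for subnet C the usual approach is to stop publishing the subnet-B addresses at all: uncheck "Register this connection's addresses in DNS" on the subnet-B NIC, and add static records for subnet B elsewhere if any clients genuinely need them.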

  • ESXi 4.0 Guests Locking up

    - by Brendan Sherwin
    I installed ESXi 4.0 on an HP ProLiant G5 with a 64-bit Xeon processor and took advantage of the free license, as I work for a public school. I created two instances of Server 2003 from scratch: one to be the DC and DHCP server, the other to be a file server and DNS/DHCP backup. I had both guests up and running fine, set up my user accounts, transferred the data, etc.

    Once I joined a client machine to the domain, I found that both of my Windows guests would lock up. Sometimes it would be for five or so minutes; once it was overnight. The "locked up" state means that, as far as I could tell, all services were stopped: DHCP no longer handed out IPs, DNS stopped working, and I couldn't RDP into the server. The ESXi host (my HP server) was still running fine. vSphere was working, and I could look at the performance of the individual guests. I would try powering off the guests from inside vSphere, and they would start powering off but get stuck at 95% and stay that way, sometimes only for 10 minutes, other times for hours. Several times I had to restart ESXi from its console in order to restart my machines.

    Now, can anyone tell me what is happening, how I can fix it, or what steps to take to prevent it? I hired a consultant to take a look at it, someone whose experience and knowledge I trust, and he told me he had never seen anything like this before. He spoke to a friend of his who is VM certified, and he also said he had never heard of this issue.

    Thanks for your replies, and I'll do my best to respond ASAP. Currently the server is powered off, I've reinstituted my nine-year-old Server 2000 boxes, and I'm considering installing ESXi 3.5. Does anyone know whether a guest created in 4.0 will work in 3.5? I'd really like to avoid having to rebuild those accounts! I know 4.0 works on this server, as I have another server at another school with the same exact hardware running 4.0 fine. Brendan

  • iTunes Home Sharing only works one way between 2 WinXP PC's on the same LAN

    - by scunliffe
    Both PCs have the latest iTunes installed. PC (A) can "see" that there is a shared library "B library", but attempts to connect to it return this error message:

        The shared library "{Username}'s Library" is not responding (-3259)
        Check that any firewall software running on either the shared computer or this computer has been set to allow communication on port 3689.

    However, the reverse works fine: PC (B) can "see" shared library "A library" and can access all content. Notes:

    - Both PCs have Home Sharing enabled (turned off/on several times to verify).
    - Both PCs have Windows Firewall turned on, but in the Exceptions tab iTunes is allowed, and port 3689 is also added as a firewall exception (just in case).
    - Both iTunes accounts have been "authorized" on both PCs.
    - Both PCs connect via LAN through a D-Link DIR-615 router. In the advanced application rules, iTunes has also been added to allow traffic on port 3689 unhindered.

    Is there any other magical setting/configuration option that I should be aware of and set in order to get this to work? I could care less about sharing apps etc.; I just want the music sharing to work.

    UPDATE: Solved! It turns out that on PC (B) there were multiple accounts set up. One of the accounts had the "No exceptions" checkbox checked under the Windows Firewall "On" option, so even though iTunes was added to the exception list on the main user account, this other account was blocking access.

  • hMail server - sending copy of an e-mail changing the sender

    - by Beggycev
    Dear all, please help me with the following request. I am using hMailServer in a company (test.com) and have several hundred guest e-mail accounts ([email protected]). I need to accomplish this: when any of the guest e-mails receives a message (from either an internal or an external sender), this e-mail (or a copy of it) is sent to another address, "[email protected]", which is the same for all of these guest e-mails. But I need the sender to be identified as [email protected], not as the original sender, which is what happens when I use forwarding.

    I tried to prepare a simple VBS script using the OnAcceptMessage event to accomplish this. It works on my testing machine without internet connectivity, but not in the production environment. To be specific: if I send an e-mail to [email protected] in my test environment, it is delivered to [email protected] with [email protected] as the sender. But in the production environment the e-mail stays in the guest mailbox with the original sender. I am interested in any solution, whether a rule in hMailServer or a script; anything is welcome. Thank you for any help!

    The script:

        Sub OnAcceptMessage(oClient, oMessage)
            'creating application object in order to perform operations as hMail server administrator
            Dim obApp
            Set obApp = CreateObject("hMailServer.Application")

            Dim adminLogin
            Dim adminPassword
            'Enter actual values for administrator account and password
            'CHANGE HERE:
            adminLogin = "Admin_login"
            adminPassword = "password"
            Call obApp.Authenticate(adminLogin, adminPassword)

            Dim addrStart
            'Take first 5 characters of recipient's address
            addrStart = Mid(oMessage.To, 1, 5)

            'if the recipient's address starts with "guest"
            if addrStart = "guest" then
                Dim recipient
                Dim recipientAddress
                'enter name of the recipient and respective e-mail address
                'CHANGE HERE:
                recipient = "FINAL"
                recipientAddress = "[email protected]"

                'change the sender and sender e-mail address to the guest
                oMessage.FromAddress = oMessage.To
                oMessage.From = oMessage.To & "<" & oMessage.To & ">"

                'delete recipients and enter a new one - the actual mps and its e-mail from the variables set above
                oMessage.ClearRecipients()
                oMessage.AddRecipient recipient, recipientAddress

                'save the e-mail
                oMessage.save
            end if
        End Sub
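
    One difference between the two environments worth checking: oMessage.To is the To header, and real-world senders usually put a display name there (e.g. "Guest One <guest1@test.com>"), in which case Mid(oMessage.To, 1, 5) no longer starts with "guest". A hypothetical variant keyed on the envelope recipient instead; the Recipients collection and its Address property are assumptions to verify against the hMailServer COM documentation before relying on this:

        'hypothetical: match on the envelope recipient, not the To header
        Dim recipAddr
        If oMessage.Recipients.Count > 0 Then
            recipAddr = LCase(oMessage.Recipients.Item(0).Address)
            If Mid(recipAddr, 1, 5) = "guest" Then
                '... rest of the handling as above
            End If
        End If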

  • BSOD trying to migrate Windows XP from a physical to a virtual machine

    - by pauldoo
    I am attempting to migrate a Windows XP Home installation from a physical machine to a virtual machine. The physical machine has two hard disks: the first is 250GB containing the "C:" drive, the second is 1TB containing "D:". I'd like to create a new virtual machine, stored on D:, which is a copy of the Windows XP Home installation currently on C:. (This will leave the 250GB drive clear for me to install a fresh copy of Windows 7, while still being able to access the old XP installation if necessary.)

    The first method I tried was to follow the instructions here: http://www.virtualbox.org/wiki/Migrate_Windows. I booted up from an Ubuntu live CD in order to execute the Linux commands while the Windows system wasn't running. With this method the virtual machine would always blue screen on startup with a "STOP 0x0000007B" message. The instructions above say to try a "repair install" using the Windows XP disc. Unfortunately for me, my XP disc is scratched and will not boot, so I was unable to try a repair install.

    The second method I tried was to use VMware Converter Standalone Client. This tool executed without any errors, but again produced a virtual machine that blue screens on startup with the same STOP message.

    Are there any other methods to move the Windows XP installation into a virtual machine? I think next I will try a more manual process to create the cloned virtual machine: install a fresh copy of Windows XP to a virtual machine, then once that is booting OK, ntfsclone the source C: partition over the top. Perhaps this will fix the booting problems if the issue is related to the MBR or partition table in some way.
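
    STOP 0x0000007B on a migrated XP is classically a boot-disk driver mismatch: the clone has only the physical machine's storage driver enabled, not the generic IDE driver the VM presents. The "MergeIDE" fix from the VirtualBox wiki re-enables the generic drivers in the registry before cloning; a fragment of what it does (the full .reg from the wiki also covers intelide and friends, plus CriticalDeviceDatabase entries):

        Windows Registry Editor Version 5.00

        ; make the generic ATAPI/IDE drivers start at boot (Start=0)
        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\atapi]
        "Start"=dword:00000000

        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\pciide]
        "Start"=dword:00000000

    Applying it to the physical install (or to the cloned registry offline) before the first VM boot is usually enough; also worth checking that the VM attaches the disk to an IDE controller rather than SATA.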

  • Starfield Wildcard SSL Certificate Not Trusted in All Browsers

    - by Austen Cameron
    I am at a loss as to what else I might try in order to debug this issue with a Starfield wildcard SSL certificate. The problem is that in certain browsers (Safari, or the most-updated Chrome you can get for OS X 10.5.8, for example) the certificate comes up as untrusted, even on the root domain.

    My server setup / background info:

    - General LAMP setup: CentOS 6.3, on a GoDaddy VPS
    - Starfield Technologies wildcard SSL certificate
    - Installed using the instructions from GoDaddy's support pages
    - ssl.conf lines are basically as follows:

        SSLCertificateFile /path/to/cert/mysite.com.cert
        SSLCertificateKeyFile /path/to/cert/mysite.key
        SSLCertificateChainFile /path/to/cert/sf_bundle.crt

    Everything seemingly worked fine until the other night, when I noticed the problem in OS X. I assume it's more browser-version related, but I have only been able to replicate it on that particular machine.

    What I have tried:

    - Updating sf_bundle.crt from GoDaddy's certificate repository and Starfield's repository versions
    - Following this Server Fault answer from Jim Phares: changing the SSLCertificateChainFile line to sf_intermediate.crt from Starfield's repository
    - Using http://www.sslshopper.com/ssl-checker.html on my URL; it says the domain is correctly listed on the certificate but comes up with an error that reads:

        The certificate is not trusted in all web browsers. You may need to install an Intermediate/chain certificate to link it to a trusted root certificate.

    What might I try next to remedy the untrusted certificate issue? Let me know if there is any other information needed that might help debug this issue. Thanks in advance!
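
    A useful way to see exactly what chain the server is handing out (rather than what the config intends) is openssl's test client; a sketch, with the hostname as a placeholder:

        openssl s_client -connect mysite.com:443 -showcerts

    The output lists every certificate sent: the site certificate should come first, followed by the Starfield intermediate(s). If only the site certificate appears, Apache is not actually serving the chain file (wrong path, or httpd never restarted after the change); browsers that don't fetch or cache intermediates on their own will then show exactly this kind of trust error while others sail through.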

  • Time drift in Cloud Server - need to manipulate GRUB config

    - by Aditya Advani
    We are hosting a VPS on a popular host and are experiencing a regular time drift of several minutes a day forward (approx. 7).

    - Linux kernel: 2.6.18-164.11.1.el5 GNU/Linux
    - Distro: CentOS release 5.4 (Final)

    We reached out to our hosting provider and their support advised us:

        This is a known issue with Cloud Servers. To fix this you will need to add one line to your grub config located at: /boot/grub/menu.lst
        The line you need to add is: noapic nolapic divider=10 nolapic_timer
        This should correct this issue. You will need to restart after this is added in.

    Because I am wary of manipulating GRUB (mostly, I'm terrified that our server may fail to restart), I ask you guys, the pro *nix admins: where exactly in this file does the recommended insertion below go?

        # line from 1&1 for time syncing issue (Case 5163)
        noapic nolapic divider=10 nolapic_timer

    Please specify where exactly, and whether the order of commands is or is not important. Why is the block below "title CentOS ..." indented? If someone could give me an overview of how this works or point me to a resource that's easy to follow, that's what I'm looking for immediately: a light overview or basic understanding of what I'm doing. If GRUB and bootloaders are a deep dark treasure trove of kernel hacking or something, that's great; well-recommended in-depth resources are also very welcome.

    This is my current /boot/grub/menu.lst:

        # grub.conf generated by anaconda
        #
        # Note that you do not have to rerun grub after making changes to this file
        #boot=/dev/sda
        # serial --unit=0 --speed=57600 terminal --timeout=5 serial console
        timeout=5

        title CentOS (2.6.18-164.11.1.el5)
                root (hd0,0)
                kernel /boot/vmlinuz-2.6.18-164.11.1.el5 ro root=/dev/hda1 console=tty0 console=tty
                initrd /boot/initrd-2.6.18-164.11.1.el5.img

    MOST IMPORTANT: I need to know where in the file above it is appropriate to paste the suggested line, so I can confidently restart my VPS after manipulating the GRUB config.
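
    For reference: kernel boot parameters go at the end of the existing "kernel" line of the entry you boot, separated by spaces; the relative order of these particular flags doesn't matter, and the indentation under "title" is purely cosmetic (every line up to the next "title" belongs to that entry). A sketch of the edited stanza, with nothing else in the file changed:

        title CentOS (2.6.18-164.11.1.el5)
                root (hd0,0)
                # line from 1&1 for time syncing issue (Case 5163): flags appended at the end
                kernel /boot/vmlinuz-2.6.18-164.11.1.el5 ro root=/dev/hda1 console=tty0 console=tty noapic nolapic divider=10 nolapic_timer
                initrd /boot/initrd-2.6.18-164.11.1.el5.img

    The suggested snippet is not a standalone line; pasted on its own, GRUB would reject it.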

  • Can you "swap" the Sysprep answer file in Windows 7

    - by Ben
    I have a load of new Lenovo laptops which I am due to distribute in my company. We are spread across multiple locations, and I want to ship the laptops "boxed" and untouched by IT hands before distribution. We are using LANDesk to do all the software distribution and provisioning, but are currently falling at the first hurdle: when booted, the laptops kick into the Lenovo mini-setup wizard. I assume this is because they have been sysprepped at Lenovo.

    In order to keep with our (almost) zero-touch strategy, I want the users to PXE boot into a PE of some sort, which will run a script on startup that replaces the Sysprep answer file with one of my own (i.e. prepopulated with product key, company info, etc.) and then reboots to complete Sysprep. The plan is that this will run, and then install the LANDesk agent as a post-Sysprep task, which in turn will complete the provisioning.

    Does anyone have any experience with this / know any pitfalls to look out for / can suggest a suitable PXE-bootable PE environment? Apologies for the verbosity of the question; it takes a bit of explaining! Thanks in advance, Ben
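
    For the swap itself, Windows PE runs X:\Windows\System32\startnet.cmd at boot, and C:\Windows\Panther is one of the locations Windows Setup/Sysprep searches for an unattend file. A minimal sketch of the idea (drive letters and the answer-file path are assumptions to verify against how PE enumerates the disk on these machines):

        @echo off
        rem startnet.cmd sketch: overwrite the OEM answer file with our own, then reboot
        wpeinit
        copy /Y X:\custom\unattend.xml C:\Windows\Panther\unattend.xml
        wpeutil reboot

    WinPE from the WAIK is PXE-bootable via WDS or plain pxelinux, which keeps the whole flow zero-touch.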

  • BOINC error code -1200 when opening...

    - by Erik Vold
    I installed BOINC 6.10.21 on my Mac OS X 10.5 machine in order to upgrade from the 6.6 version I had been running. I am the admin user, and I was logged in as the admin user. As I was installing 6.10.21 I was asked whether non-admin users should be allowed to use BOINC, and I said yes. Then when I tried to open BOINC I got a message like the following:

        You currently are not authorized to manage the client. Either re-install and allow non-admin users or contact your administrator to add you to the 'boinc_master' user group.

    So I tried to reinstall first, and I was not asked whether non-admin users should be allowed to use BOINC. I retried a few times and got no different result. So I downloaded 6.10.43 and installed that; again I was not asked about non-admin users, and when I tried to run BOINC I got the same message.

    So I did a Google search for how to add my admin user to the boinc_master user group and found a suggestion to run the following in Terminal:

        sudo dscl . -append /Groups/boinc_master GroupMembership <your user's short name>

    So I did this, and now I get the following error:

        BOINC ownership or permissions are not set properly; please reinstall BOINC (Error code -1200)

    So I reinstall, and I am never asked the question about allowing non-admin users again, and I still get this error message after every reinstall attempt. What should I do?
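
    It may be worth confirming what the group memberships actually look like now: BOINC on the Mac uses two groups, boinc_master and boinc_project, and error -1200 is its complaint that file ownership/permissions don't match them. A sketch of the checks (the data-folder path is the usual default, not guaranteed):

        dscl . -read /Groups/boinc_master GroupMembership
        dscl . -read /Groups/boinc_project GroupMembership
        ls -l "/Library/Application Support/BOINC Data"

    If the manual dscl append left the groups in a state the installer doesn't expect, deleting both groups and re-running the installer (which should recreate them) is one way back to a clean slate; hedged advice, so back up the BOINC Data folder first.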

  • Windows Server 2008 SMTP & POP3 Configuration

    - by Alex Hope O'Connor
    This is the first time I have ever configured a VPS without third-party applications such as the Plesk control panel. I have got most functionality working on all my websites, except that I am very unsure how to set up email on this new server. Basically I want standard POP3 functionality: a bunch of accounts with private mailboxes, all able to send and receive email using their individual usernames and passwords.

    My server setup is pretty simple: a VPS with IIS and DNS Server running. To set up SMTP and POP3, I have tried adding the SMTP Server feature through the Server Manager console (I am very unsure of the configuration, as the guides I found did not explain it), and I then installed a third-party application called Visendo SMTP Extender, as it claims to be a POP3 service providing accounting and the ability to communicate with email clients. That is as far as I have gotten, as I cannot seem to find much information on the subject.

    So can someone please tell me how to go about configuring these services in order to provide standard SMTP and POP3 functionality? Thanks, Alex.
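
    For context: Windows Server 2008 ships an SMTP server (the IIS 6-era service, managed from the IIS 6.0 Manager console) but, unlike Server 2003, no built-in POP3 service, which is why a third-party component is needed for the mailbox side. A sketch of adding the SMTP feature from an elevated prompt:

        servermanagercmd -install SMTP-Server

    Given that, the usual choice is between wiring the built-in SMTP service to a POP3 add-on like the one you found, or replacing both with a single third-party mail server that does SMTP and POP3 together, which tends to be simpler to configure.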

  • Issues with creating USB bootable Mountain Lion

    - by Sidd
    I am trying to set up a triple boot of Windows 8, Mountain Lion, and Ubuntu. I am stuck, though. I have Windows 8 on a partition, and at this point I am trying to get Mountain Lion on there. I installed VMware with a Snow Leopard 10.6.2 image on the Windows 8 platform and used the Disk Utility in it to prepare the Mountain Lion installer. This is what I did specifically:

    1. I got the InstallESD.dmg.
    2. I "mounted" that file, and out came something along the lines of "Install Mountain Lion OS X" (something like that; it appeared as a submenu under the InstallESD.dmg in Disk Utility).
    3. I took my PNY 8GB Attache flash drive, went to the Erase tab of Disk Utility, erased the drive using the Mac OS Extended (Journaled) setting, and called it "Mac".
    4. I went to the Restore tab, dragged "Mac" into Destination, and dragged "Install Mountain Lion OS X" into Source.

    Everything seemed to go well, but it didn't. When trying to boot from the flash drive (and yes, I set the BIOS correctly), the machine skips it and loads Windows 8 normally, as if nothing were plugged in. When I look at the flash drive in Windows 8, it comes up as a 200MB-capacity drive labeled "EFI" with nothing on it (remember, it was 8GB in the beginning). I downloaded Plop Boot Manager, but it did not recognize that a USB drive was plugged in. Does anyone know how I could fix this?

  • MS SQL Server slows down over time?

    - by Dave Holland
    Have any of you experienced the following, and have you found a solution? A large part of our website's back end is MS SQL Server 2005. Every week or two the site begins running slower, and I see queries taking longer and longer to complete in SQL. I have a query that I like to use:

        USE master
        select text, wait_time, blocking_session_id AS "Block", percent_complete, *
        from sys.dm_exec_requests
        CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS s2
        order by start_time asc

    It's fairly useful: it gives a snapshot of everything running right at that moment against your SQL Server. What's nice is that even if your CPU is pegged at 100% for some reason and Activity Monitor is refusing to load (I'm sure some of you have been there), this query still returns and you can see what query is killing your DB. When I run this, or Activity Monitor, during the times that SQL has begun to slow down, I don't see any specific queries causing the issue; they are ALL running slower across the board. If I restart the MS SQL service then everything is fine; it speeds right up, for a week or two, until it happens again. Nothing that I can think of has changed, but this just started a few months ago... Ideas?

    Added: Please note that when this database slowdown happens, it doesn't matter whether we are getting 100K page views an hour (busier time of day) or 10K page views an hour (slow time); the queries all take longer than normal to complete. The server isn't really under stress: the CPU isn't high, and the disk usage doesn't seem to be out of control. It feels like index fragmentation or something of the sort, but that doesn't seem to be the case.

    As far as pasting results of the query above, I really can't do that. It lists the login of the user performing the task, the entire query, etc., and I'd really rather not hand out the names of my databases, tables, columns, and logins online :) ... I can tell you that the queries running at that time are normal, standard queries for our site that run all the time, nothing out of the norm.
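
    Since a service restart fixes it, whatever is degrading resets with the instance; wait statistics, the plan cache, and memory grants are the usual suspects on 2005. A sketch of a first look at the accumulated waits (the counters reset on restart, so run it while things are slow):

        -- top waits since the service last started
        SELECT TOP 10 wait_type, wait_time_ms, waiting_tasks_count
        FROM sys.dm_os_wait_stats
        ORDER BY wait_time_ms DESC;

    A dominant wait type (e.g. PAGEIOLATCH_*, SOS_SCHEDULER_YIELD, or RESOURCE_SEMAPHORE) narrows the search considerably; if nothing stands out, comparing plan-cache size before and during a slowdown is a reasonable next step.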

  • How to list rpm packages/subpackages sorted by total size

    - by smci
    Looking for an easy way to postprocess rpm -q output so it reports the total size of all subpackages matching a regexp; e.g. see the aspell* example below. (Short of scripting it with Python/Perl/awk, which is the next step.)

    Motivation: I'm trying to remove a few GB of unnecessary packages from a CentOS install, so I'm trying to track down things that are (a) large, (b) unnecessary, and (c) not dependencies of anything useful like GNOME. Ultimately I want to pipe the output through sort -n to see what the space hogs are, before doing rpm -e.

    My reporting command looks like [1]:

        cat unwanted | xargs rpm -q --qf '%9.{size} %{name}\n' > unwanted.size

    and here's just one example where I'd like to see rpm's total for all aspell* subpackages:

        root# rpm -q --qf '%9.{size} %{name}\n' `rpm -qa | grep aspell`
         1040974 aspell
        16417158 aspell-es
         4862676 aspell-sv
         4334067 aspell-en
        23329116 aspell-fr
        13075210 aspell-de
        39342410 aspell-it
         8655094 aspell-ca
        62267635 aspell-cs
        16714477 aspell-da
        17579484 aspell-el
        10625591 aspell-no
        60719347 aspell-pl
        12907088 aspell-pt
         8007946 aspell-nl
         9425163 aspell-cy

    Three extra nice-to-have things:

    - list the dependencies/depending packages of each group (so I can figure out the uninstall order)
    - group them by package group; that would be totally neat
    - human-readable size units like 'M'/'G' (as ls -h does); can be done with a regexp and rounding on the size field

    Footnote 1: I'm surprised up2date and yum don't add this sort of intelligence. Ideally you would want to see a tree of group-package-subpackage, with rolled-up sizes.

    Footnote 2: I see yum erase aspell* does actually produce this summary, but not in a query command.

    [1] where unwanted.txt is a text file of unnecessary packages obtained by diffing the output of:

        yum list installed | sed -e 's/\..*//g' > installed.txt
        diff --suppress-common-lines centos4_minimal.txt installed.txt | grep '>'

    and centos4_minimal.txt came from the Google doc given by that helpful blogger.
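
    For the headline number, a short pipe does it; a sketch (rpm -qa accepts a glob directly, so the grep round-trip isn't needed):

        rpm -qa --qf '%{size} %{name}\n' 'aspell*' \
            | awk '{ total += $1; n++ }
                   END { printf "%.1f MiB across %d packages\n", total/1048576, n }'

    Swapping the pattern per group and feeding the per-package output through sort -n gives the space-hog ranking; rpm -q --whatrequires <pkg> answers the "what depends on this" part, one package at a time.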

  • Ntop monitoring - Hosts visible with no SPAN/mirroring

    - by Cory J
    I am attempting to use ntop to monitor traffic over a Cisco Catalyst switch. I assumed that in order to see any of the traffic, I'd have to use a monitor (SPAN) session, as described here: http://www.cisco.com/en/US/products/hw/switches/ps708/products_tech_note09186a008015c612.shtml. However, before I did anything on the switch, I simply plugged my ntop server in and fired up ntop. To my surprise, I instantly saw 3+ pages of hosts and thousands of packets. How is ntop seeing this? I have verified that no monitoring exists on the switch (run as en):

        cs1.pvdc#show monitor
        No SPAN configuration is present in the system.

    My ntop server is Ubuntu 8.04; I haven't done any configuration, I just installed the ntop package onto a fresh Ubuntu install. Is there anything else on my switch besides "monitor" that might cause it to mirror all its traffic like this? I've tried plugging ntop into different ports with the same results.

    UPDATE: It appears to be more than just broadcast traffic showing up in ntop; for example, I can see when my IPs have talked to the DNS server or generated HTTP traffic. If my switch is misconfigured, can anyone point me in the right direction to rectify this? I'm not a Cisco expert.
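
    A quick way to separate "the switch is flooding unicast to my port" from "ntop is inferring hosts from broadcast/ARP" is to watch the wire with promiscuous mode off (-p) and broadcast/multicast filtered out; anything left is unicast genuinely delivered to your port:

        sudo tcpdump -p -nn -i eth0 not broadcast and not multicast

    Seeing other hosts' unicast conversations here typically points at unicast flooding (a full or unstable MAC address table, or asymmetric paths) rather than a hidden SPAN session.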

  • WIM2VHD failing with "Cannot derive Volume GUID from mount point."

    - by Jacob
    I'm trying to use WIM2VHD according to the instructions in Scott Hanselman's blog post to create a sysprepped VHD image to boot from. I've installed the WAIK, and I have my Windows 7 sources mounted as a virtual drive. When I try to run WIM2VHD like this:

        cscript WIM2VHD.wsf /wim:F:\sources\install.wim /sku:Ultimate /vhd:E:\WindowsSeven.vhd /size:30721

    I get the following log:

        Log for WIM2VHD 6.1.7600.0 on 11/2/2009 at 10:51:18.16
        Copyright (C) Microsoft Corporation. All rights reserved.

        MACHINE INFO:
        Build=7600
        Platform=x86fre
        OS=Windows 7 Ultimate
        ServicePack=
        Version=6.1
        BuildLab=win7_rtm
        BuildDate=090713-1255
        Language=en-ZA

        INFO: Looking for IMAGEX.EXE...
        INFO: Looking for BCDBOOT.EXE...
        INFO: Looking for BCDEDIT.EXE...
        INFO: Looking for REG.EXE...
        INFO: Looking for DISKPART.EXE...
        INFO: Session key is E01E1ED7-C197-4814-BDE4-43B73E14FCC4
        INFO: Inspecting the WIM...
        INFO: Configuring and formatting the VHD...
        *******************************************************************************
        Error: 0: Cannot derive Volume GUID from mount point.
        *******************************************************************************
        INFO: Unmounting the VHD due to error...
        WARNING: In order to help resolve the issue, temporary files have not been deleted. They are in:
        C:\Users\Jacob\AppData\Local\Temp\WIM2VHD.WSF\E01E1ED7-C197-4814-BDE4-43B73E14FCC4

        Summary: Errors: 1, Warnings: 1, Successes: 0
        INFO: Done.

    Any ideas?

  • Resize Debian in VirtualBox

    - by Poni
    I have a VM with one hard disk of size 3GB, and I'd like to enlarge that disk to 7GB. So I execute this command on the host (while the guest is shut down):

        VBoxManage modifyhd debian.vdi --resize 7168

    Then I run the guest, Debian 6, and:

        smith@debian6:~$ df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda1             2.8G  2.6G   60M  98% /
        tmpfs                  61M     0   61M   0% /lib/init/rw
        udev                   57M  160K   57M   1% /dev
        tmpfs                  61M     0   61M   0% /dev/shm

        smith@debian6:~$ sudo parted /dev/sda print
        Model: ATA VBOX HARDDISK (scsi)
        Disk /dev/sda: 3221MB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos

        Number  Start   End     Size    Type      File system     Flags
         1      1049kB  3035MB  3034MB  primary   ext3            boot
         2      3036MB  3220MB  185MB   extended
         5      3036MB  3220MB  185MB   logical   linux-swap(v1)

        smith@debian6:~$ cat /proc/partitions
        major minor  #blocks  name
           8     0   3145728  sda
           8     1   2962432  sda1
           8     2         1  sda2
           8     5    180224  sda5

    So, no automatic resizing (detection) of the disk/partition, even though VirtualBox on the host shows it is 7GB now. OK... then I do:

        smith@debian6:~$ sudo resize2fs /dev/sda1
        resize2fs 1.41.12 (17-May-2010)
        The filesystem is already 740608 blocks long. Nothing to do!

        smith@debian6:~$ sudo parted
        GNU Parted 2.3
        Using /dev/sda
        Welcome to GNU Parted! Type 'help' to view a list of commands.
        (parted) select /dev/sda1
        Using /dev/sda1
        (parted) resize
        WARNING: you are attempting to use parted to operate on (resize) a file system.
        parted's file system manipulation code is not as robust as what you'll find in
        dedicated, file-system-specific packages like e2fsprogs. We recommend you use
        parted only to manipulate partition tables, whenever possible. Support for
        performing most operations on most types of file systems will be removed in an
        upcoming release.
        Partition number? 1
        Start? 0
        End? [3034MB]?

    Here I'm stuck: parted offers to resize only up to about 3GB, and there's no point in that, right? What should I do in order to enlarge this partition?
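
    The filesystem can only grow after the partition does, and the partition can only grow after the swap/extended partitions sitting right behind it are out of the way. A sketch of the usual sequence (ideally from a rescue/live CD with nothing on sda mounted, after a backup; sda1's exact start sector must be preserved):

        # 1. note sda1's exact start, then rewrite the table: drop swap + extended,
        #    recreate sda1 with the same start and a larger end, leaving room for new swap
        sudo fdisk /dev/sda   # interactive: d 5, d 2, d 1; n p 1 <same start> <new end>;
                              # n p 2 <rest of disk>; a 1; w

        # 2. check and grow the filesystem into the enlarged partition
        sudo e2fsck -f /dev/sda1
        sudo resize2fs /dev/sda1

        # 3. recreate swap at the end of the disk and point /etc/fstab at the new device/UUID
        sudo mkswap /dev/sda2 && sudo swapon /dev/sda2

    If parted still reports the old 3221MB disk size inside the guest, reboot the VM first so the kernel re-reads the enlarged geometry before repartitioning.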
