Search Results

Search found 17567 results on 703 pages for 'muon package manager'.

  • Strange permission errors in new PostgreSQL installation

    - by Bart van Heukelom
    A freshly installed PostgreSQL (with configuration overwritten) won't start:

        $ sudo service postgresql start
         * Starting PostgreSQL 9.1 database server
         * Error: could not read /etc/postgresql/9.1/main/postgresql.conf: Permission denied

    Looks like it should be able to read it though:

        $ ls -l postgresql.conf
        -rw------- 1 postgres postgres 19450 2012-06-14 10:07 postgresql.conf

    But fine, I'll chmod +r it to test if that works:

        $ sudo chmod +r postgresql.conf
        $ sudo service postgresql start
         * Starting PostgreSQL 9.1 database server
         * The PostgreSQL server failed to start. Please check the log output:
        Error: Could not open log file /var/log/postgresql/postgresql-9.1-main.log

    Huh?

        $ ls -l /var/log/postgresql/
        total 4
        -rw-r----- 1 postgres adm 827 2012-06-14 10:07 postgresql-9.1-main.log

    I don't get it. What can be wrong here? This used to work before. Can I maybe monitor what process attempts to open the file? It's Ubuntu 11.10 on EC2, using Chef. For completeness, here's the recipe:

        # Install PostgreSQL
        package "postgresql-9.1"

        # Stop server
        service "postgresql" do
          action :stop
        end

        # Overwrite configuration (setting data dir)
        template "/etc/postgresql/9.1/main/postgresql.conf" do
          source "postgresql-conf.erb"
          owner "postgres"
          group "postgres"
        end

        # Start server
        service "postgresql" do
          action :start
        end
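
    On the "can I monitor what process attempts to open the file" part, a minimal sketch, assuming strace is available on the instance: trace the start wrapper and all its children, then grep for the failing open.

        # Trace file opens made by the start wrapper and its children
        $ sudo strace -f -e trace=open,openat -o /tmp/pg-start.trace service postgresql start
        # Find which process touched the config file and with what result
        $ grep postgresql.conf /tmp/pg-start.trace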

  • SCOM, Server 2008 and SQL Server 2008

    - by Jacques
    Hi there, I'm trying to set up SCOM (System Center Operations Manager 2007 - platform monitoring) on my Server 2008 machine, using SQL Server 2008 running on the same machine. When I check my prerequisites I get problems on the SQL and Active Directory components. (I'm running SQL Server 2008, and Server 2008 without Active Directory installed.) Errors:

    1. Microsoft SQL Server 2005 Service Pack 1 required. Details: "SQL Server 2005 SP1 is the next version of SQL Server. SQL Server 2005 Enterprise Edition is a complete data and analysis platform for large mission-critical business applications. The link provided in the resolution column is a trial version of the product and is not supported by the Microsoft SQL Server team."

    2. In order to install, Active Directory needs to be present. Details: "Setup failed to verify the presence of Active Directory for this server."

    I've got a couple of questions I need answered; hope someone can help. Do I need to install Active Directory for SCOM to work? Can I run SCOM with a SQL 2008 database? How do I get past these problems?

  • procdump on w3wp.exe: Only part of a ReadProcessMemory or WriteProcessMemory request was completed

    - by JakeS
    I'm having a problem with an IIS application that occasionally spikes up in CPU usage, and am trying to use procdump to get a memory dump for examination. I'm running "procdump.exe -64 -mA 9999" where 9999 is the pid of the process. But every time I do it, I get an error:

        Only part of a ReadProcessMemory or WriteProcessMemory request was completed.

    Doing this also recycles the app pool, relieving the CPU spike, so I can't keep trying until I get it right. Does anyone know what is going wrong?

    EDIT WITH MORE INFO: So far I've failed to generate a debug dump no matter what tool I try. All of them seem to generate the same sort of error. This is 2008 R2 Datacenter running IIS7 with a 64-bit ASP.NET web site. My best guess is that something is getting blocked, causing some requests to remain open in IIS and gradually using up resources. If I monitor the worker process using the IIS Manager and view all requests, throughout the day I'll start to see some requests that "stick" and run forever. Some of these are for static files. Some are for aspx pages. I cannot see any common reason for them. Every once in a while the app pool starts taking up 100% CPU and the only remedy is to kill it.
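
    A minimal sketch, assuming Sysinternals ProcDump's CPU-trigger switches: instead of dumping on demand, have it wait for the spike and write a full dump once CPU has stayed above 90% for 10 seconds (the c:\dumps target folder is illustrative):

        :: Write up to 2 full dumps of PID 9999 when CPU >= 90% for 10s
        procdump.exe -64 -ma -c 90 -s 10 -n 2 9999 c:\dumps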

  • Are VMWare ESXi 5 patches cumulative?

    - by ewwhite
    It seems basic, but there's confusion about the patching strategy needed to manually update standalone VMware ESXi hosts. The VMware vSphere blog attempts to explain this, but it's still not clear. From the blog:

        Say Patch01 includes updates for the following VIBs: "esxi-base", "driver10" and "driver44".
        And then later Patch02 comes out with updates to "esxi-base", "driver20" and "driver44".
        Patch02 is cumulative in that the "esxi-base" and "driver44" VIBs will include the updates
        in Patch01. However, it's important to note that Patch02 does not include the "driver10"
        VIB, as that module was not updated.

    Many of my ESXi installations are standalone and do not make use of Update Manager. It is possible to update an individual host using the patches made available through the VMware patch download portal. The process is quite simple, and that part makes sense. The bigger issue is determining what to actually download and install. In my case, I have a good number of HP-specific ESXi builds that incorporate sensors and management for HP ProLiant hardware. Let's say that those servers start at ESXi build #474610 from 9/2011. Looking at the patch portal screenshot below, there is a patch for ESXi update01, build #623860. There are also patches for builds #653509 and #702118. Coming from the old version of ESXi, what is the proper approach to bring the system fully up to date? Which patches are cumulative and which need to be applied sequentially? Perhaps the download size is the confusing factor, but is installing the newest build the right approach, or do I need to step back and patch incrementally?
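
    A minimal sketch, assuming ESXi 5.x shell access and a patch bundle already copied to a datastore (the bundle file name is illustrative): "esxcli software vib update" applies only the VIBs newer than what is installed, which is why applying the most recent cumulative bundle is normally sufficient.

        # Enter maintenance mode, apply the bundle, reboot
        vim-cmd hostsvc/maintenance_mode_enter
        esxcli software vib update -d /vmfs/volumes/datastore1/ESXi500-201209001.zip
        reboot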

  • WmiPrvSE memory leak on Windows 2008 *R2*

    - by MichaelGG
    I've seen references on Windows 2008 to WmiPrvSE leaks, but nothing about Windows 2008 R2. We're running R2 on top of Hyper-V (2008). We are also running NSClient++ for monitoring from opsview.

    Over time, WmiPrvSE.exe starts to use a lot of memory, causing memory alert issues (less than 10% free). The VM has 2GB; WmiPrvSE consumes up to 500-600MB before I kill it. Killing the process doesn't seem to have any negative effect; it starts up again and I haven't noticed any problems. But after a day or two, it's back in the same situation. Any ideas on what to do? Resource Monitor doesn't show any disk or network IO by WmiPrvSE.exe. Just slowly climbing private memory...

    Edited to add: We aren't running clustering, or Windows System Resource Manager. The only regular WMI user I can guess is NSClient++, but we don't seem to have this problem on other servers.
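
    A minimal sketch (PowerShell), assuming the WMI diagnostic class Msft_Providers is queryable on the box, to see which providers the growing WmiPrvSE.exe instance is hosting before killing it:

        # Map each WmiPrvSE PID to the WMI providers it hosts
        Get-WmiObject -Namespace root\cimv2 -Class Msft_Providers |
            Select-Object HostProcessIdentifier, Namespace, Provider |
            Sort-Object HostProcessIdentifier | Format-Table -AutoSize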

  • How do you import CA certificates onto an Android phone?

    - by f50driver
    Hi all, I want to connect to my university's wireless using my Nexus One. When I go to "Add Wi-Fi network" in Wireless Settings, I fill in the Network SSID, select 802.1x Enterprise for the security and fill everything out. The problem is that our university's wireless uses the Thawte Premium Server CA certificate for certification. When I click the drop-down list for CA certificate I get nothing in the list (just N/A).

    I have the certificate (Thawte Premium Server CA.pem) and have moved it to my SD card, but it doesn't look like Android automatically detects it. Where should I put the certificate so that the Android wireless manager recognizes it? In other words, how can I import a CA certificate so that Android recognizes that it is on the phone and displays it in the CA certificate drop-down list? Thanks for any help, Tomek

    P.S. My phone is not rooted.

    EDIT: After doing some research it looks like you are able to install certificates by going to your phone's Settings, then Location & Security, then Install from SD card. Unfortunately it looks like the only accepted file extension is .p12; there does not appear to be a way to import .cer or .pem files (which are the only two files that come with the Thawte certificates) at this moment. It does look like you can use a converter such as https://www.sslshopper.com/ssl-converter.html to convert .cer or .pem files to .p12, but a key file is needed, and I do not know where to get this key file for the Thawte certificates.
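
    A minimal sketch, assuming OpenSSL is at hand: a CA certificate carries no private key of yours, so a certificate-only PKCS#12 can be built without any key file. Whether the stock Android importer accepts a cert-only .p12 is an assumption to verify.

        # Build a .p12 containing only the CA certificate (no private key)
        openssl pkcs12 -export -nokeys \
            -in "Thawte Premium Server CA.pem" \
            -out thawte-ca.p12 -name "Thawte Premium Server CA"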

  • System Expandable-String Environment Variables Can’t Reference User Environment Variables

    - by Synetech inc.
    Hi, I've run into a bit of a situation with Windows environment variables. I've narrowed it down to something that may or may not make sense and/or possibly be by design. It seems that expandable-string environment variables of the local machine cannot reference environment variables of the current user. For example, if you've got the following environment variables:

        [HKCU\Environment]
        "CU"="CU"
        "CU->LM"="%LM%"

        [HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment]
        "LM"="LM"
        "LM->CU"="%CU%"

    Then you get the following results:

        > set CU
        CU=CU
        CU->LM=LM

        > set LM
        LM=LM
        LM->CU=%CU%

    It seems that user variables can expand system variable references, but system variables cannot expand (access?) user variable references. I suppose that it makes sense if you think about it just right (e.g. like how user vars override/hide system vars of the same name), but it also doesn't make sense if you think about it in even more ways. So what's going on? Is there a way to get this to work as expected? Thanks.
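
    A minimal sketch (PowerShell, elevated for the HKLM part) to reproduce the four test values above; the REG_EXPAND_SZ entries are written as ExpandString, and single quotes keep the %...% text stored unexpanded:

        $cu = 'HKCU:\Environment'
        $lm = 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Environment'
        New-ItemProperty -Path $cu -Name 'CU' -Value 'CU' -PropertyType String -Force
        New-ItemProperty -Path $cu -Name 'CU->LM' -Value '%LM%' -PropertyType ExpandString -Force
        New-ItemProperty -Path $lm -Name 'LM' -Value 'LM' -PropertyType String -Force
        New-ItemProperty -Path $lm -Name 'LM->CU' -Value '%CU%' -PropertyType ExpandString -Force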

  • Windows Server Backup - Recover only shows the latest backup

    - by Steffen
    We're having quite some trouble at work using Windows Server Backup. We have a Hyper-V server (Win 2008) running 8 virtual web servers, running a variety of OSes: Win 2003, Win 2008 and a lone Debian. Each virtual server has a separate partition on the physical Hyper-V server, so e.g. E: is virtual server #1, F: is #2 and so forth.

    For backup we use Windows Server Backup, or more exactly the command-line tool wbadmin.exe. We need to make the backups without stopping the virtual servers, as we cannot afford the downtime (we've got users online both day and night), and Windows Server Backup offers to use the shadow copy provider to achieve this. We run wbadmin like this:

        wbadmin start backup -backuptarget:\\remotebackuplocation\somefolder -include:E: -quiet

    We run it once per partition, because we've got a script wrapped around that command for sending us an email about how it went. Each time wbadmin runs, it deletes the "Backup xxxx" folder it created in the last backup and just creates a new one. In order to prevent this from happening, we rename the "Backup xxxx" folder after each backup is run, before starting the next one. I realize we must rename it back to its original name prior to recovering, and we obviously do this.

    Now the issue is as follows: even though we have all the backed-up files, and rename whichever backup we want to use to its original name, we can only see the latest backup in the Windows Server Backup GUI when we select "Recover". This means we can only recover the last partition we backed up, so e.g. E: can never be recovered. In other words we're screwed :-(

    My question is: does anyone know how to use Windows Server Backup for a scenario like this? Or is it simply not possible due to the simplicity of Windows Server Backup? If it's not possible, could you recommend some backup software for this purpose? We've already looked at MS' System Center Data Protection Manager, however it's quite expensive and the boss doesn't like that :-/
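
    A minimal sketch (the version identifier is illustrative): wbadmin itself can list and restore every version stored at a target, so it's worth checking whether the renamed-back backups are visible on the command line even when the GUI hides them:

        :: List all recoverable versions at the target
        wbadmin get versions -backupTarget:\\remotebackuplocation\somefolder
        :: Recover a specific version by its identifier
        wbadmin start recovery -version:06/14/2012-10:07 -itemType:Volume -items:E: -backupTarget:\\remotebackuplocation\somefolder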

  • WinXP: Error 1167 -- Device (LPT1) not connected

    - by Thomas Matthews
    I am writing a program that opens LPT1 and writes a value to it. The WriteFile function is returning an error code of 1167, "The device is not connected". The Device Manager shows that LPT1 is present. I have a cable connected between a development board and the PC. The cable converts JTAG pin signals to signals on the parallel port. Power is applied and the cable is connected between the development board and the PC. The development board is powered on. I am using: Windows XP; MS Visual Studio 2008; C language; console application; debug environment. Here are the relevant code fragments (globals such as g_siIspPins and g_usOutPort are declared elsewhere):

        HANDLE parallel_port_handle;

        void initializePort(void)
        {
            TCHAR * port_name = TEXT("LPT1:");
            parallel_port_handle = CreateFile(
                port_name,
                GENERIC_READ | GENERIC_WRITE,
                0,              // must be opened with exclusive-access
                NULL,           // default security attributes
                OPEN_EXISTING,  // must use OPEN_EXISTING
                0,              // not overlapped I/O
                NULL            // hTemplate must be NULL for comm devices
                );
            if (parallel_port_handle == INVALID_HANDLE_VALUE)
            {
                // Handle the error.
                printf("CreateFile failed with error %d.\n", GetLastError());
                Pause();
                exit(1);
            }
            return;
        }

        void writePort(unsigned char a_ucPins, unsigned char a_ucValue)
        {
            DWORD dwResult;
            if (a_ucValue)
            {
                g_siIspPins = (unsigned char) (a_ucPins | g_siIspPins);
            }
            else
            {
                g_siIspPins = (unsigned char) (~a_ucPins & g_siIspPins);
            }

            /* This is sample code for Windows/DOS without a Windows driver. */
            // _outp( g_usOutPort, g_siIspPins );

            //------------------------------------------------------------
            // For Windows XP and later
            //------------------------------------------------------------
            if (!WriteFile(parallel_port_handle, &g_siIspPins, 1, &dwResult, NULL))
            {
                printf("Could not write to LPT1 (error %d)\n", GetLastError());
                Pause();
                return;
            }
        }

    If you believe this should be posted on Stack Overflow, please migrate it over (thanks).

  • Why does my PowerShell script hang when called in PSEXEC via a batch (.cmd) file?

    - by Kev
    I'm trying to remotely execute a PowerShell script using PSEXEC. The PowerShell script is called via a .cmd batch file. The reason we do this is to change the execution policy, run the PowerShell script, then reset the execution policy again. On the remote server, do-tasks.cmd looks like:

        powershell -command "&{ set-executionpolicy unrestricted}"
        powershell DoTasks.ps1
        powershell -command "&{ set-executionpolicy restricted}"

    The PowerShell script DoTasks.ps1 just does this for now:

        Write-Output "Hello World!"

    Both of these scripts live in c:\windows\system32 (for now) just so they're on the PATH. On the originating server I do this:

        psexec \\web1928 -u administrator -p "adminpassword" do-tasks.cmd

    When this runs I get the following response at the command line:

        c:\Windows\system32>powershell -command "&{ set-executionpolicy unrestricted}"

    and the script runs no further. I can't Ctrl-C to break the script and I just see ^C characters; I can type input from the keyboard and the characters are echoed to the console. On the remote server I see that PowerShell.exe and CMD.exe are running in Task Manager's Processes tab. If I end these processes then control returns to the command line on the originating server. I have tried this with just a simple .cmd batch file with an @echo hello world and it works just fine. Running do-tasks.cmd on the remote server via an RDP session works OK as well. Why is my remote batch file getting stuck when executing via PSEXEC?
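
    A minimal sketch of a workaround, assuming PowerShell 2.0 on the remote box: its -ExecutionPolicy switch scopes the bypass to the one process (so the machine-wide policy never changes), and redirecting stdin from NUL keeps PowerShell from sitting on console input under PSExec:

        :: One-line do-tasks.cmd replacement; path shown is the post's own
        powershell -NonInteractive -ExecutionPolicy Bypass -File C:\Windows\System32\DoTasks.ps1 < NUL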

  • SSL certificate on IIS 7

    - by comii
    I am trying to install an SSL certificate on IIS 7. I have downloaded a free trial certificate. After that, these are the steps I take:

    1. Click the Start menu and select Administrative Tools.
    2. Start Internet Services Manager and click the server name.
    3. In the center section, double-click the Server Certificates button in the Security section.
    4. From the Actions menu click Complete Certificate Request.
    5. Enter the location of the certificate file. Enter a friendly name. Click OK.
    6. Under Sites, select the site to be secured with the SSL certificate.
    7. From the Actions menu, click Bindings. This opens the Site Bindings window.
    8. In the Site Bindings window, click Add. This opens the Add Site Binding window.
    9. Select https from the Type menu. Set the port to 443. Select the SSL certificate you just installed from the SSL Certificate menu. Click OK.

    This is the step where I get the message:

        One or more intermediate certificates in the certificate chain are missing. To resolve
        this issue, make sure that all of intermediate certificates are installed. For more
        information, see http://support.microsoft.com/kb/954755

    After this, when I access the web site on its first page, I get this message: "There is a problem with this website's security certificate." What am I doing wrong?
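
    A minimal sketch (elevated prompt; the file name is illustrative): this error usually means the intermediate/chain certificate shipped alongside the trial certificate was never imported, which certutil can do into the Intermediate Certification Authorities store before re-adding the binding:

        :: Import the CA's intermediate certificate into the "CA" store
        certutil -addstore CA intermediate.cer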

  • Commvault Oracle RMAN Restore to new host

    - by Glenn Stauffer
    We use Commvault Simpana 8 and I have a situation where I have backups of an Oracle database on tape that were taken from Host A. Host A suffered a disk failure (lost its RAID configuration) and the sysadmins are trying to restore it; in the meantime, I'm working to bring the database back up on another host, Host B. I'm running into problems and am trying to sort out the parameters that need to be passed to the Commvault media agent to get this to work. Unfortunately, I do not have access to Commvault support and the backup person is unavailable. Anyone have a clue? The backups are there and the media agent reported a successful write when they ran last night. This is what fails:

        run {
          allocate channel t1 device type sbt_tape
            parms='SBT_LIBRARY=/usr/local/galaxy/Base/libobk.so,BLKSIZE=262144,
            ENV=(CvClientName=dbsrv2,CvInstanceName=Instance001,
            CVOraRacDBName=BBDB,CVOraRACDBClientName=BBDB)';
          restore spfile to pfile '/tmp/bbdb.ora' from autobackup;
        }

        allocated channel: t1
        channel t1: sid=34 devtype=SBT_TAPE
        channel t1: CommVault Systems for Oracle: Version 7.0.0(Build76)
        Starting restore at 09-MAY-10
        channel t1: looking for autobackup on day: 20100509
        channel t1: autobackup found: c-3941155360-20100509-01
        released channel: t1
        RMAN-00571: ===========================================================
        RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
        RMAN-00571: ===========================================================
        RMAN-03002: failure of restore command at 05/09/2010 18:01:35
        ORA-19870: error reading backup piece c-3941155360-20100509-01
        ORA-19507: failed to retrieve sequential file, handle="c-3941155360-20100509-01", parms=""
        ORA-27029: skgfrtrv: sbtrestore returned error
        ORA-19511: Error received from media manager layer, error text:
        sbtrestore: Job[0] thread[26316]: InitializeCLRestore() failed.
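
    A minimal sketch, assuming Oracle's sbttest utility is present in the database installation on Host B: it exercises the media-management library directly, which can confirm whether the Commvault SBT layer can even be initialized from that host before retrying the RMAN restore (the test file name is arbitrary):

        # Probe the SBT library outside of RMAN
        sbttest cv_probe_file -libname /usr/local/galaxy/Base/libobk.so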

  • ADODB DB2 DSN using IBMDADB2 provider

    - by Eli Sand
    I have a very bizarre issue with trying to establish a working connection to an IBM DB2 server from Classic ASP using ADODB.

    On my development server I am running IIS and a local instance of DB2. When I create a system DSN on this server and try to connect to it with ADODB, I have to specify Provider=IBMDADB2; in my connection string along with the DSN name - if I fail to include the provider, my connection won't work.

    On my production servers, I have one running IIS and a second system running an instance of DB2. When I create a system DSN on the production IIS server and try to connect to it with ADODB, I cannot specify the provider, otherwise it throws an uncatchable error in an external module (I assume it's referring to the DB2 module) if I try to do anything past getting a connection (oddly, opening the connection itself doesn't throw an error - but if I run a query it does). If I remove the Provider=IBMDADB2; from the connection string (thus I just have DSN=some_name), it works fine.

    On both systems I can verify through the ODBC connection manager that the DSNs work and can connect to the databases, and on both systems I have made sure to set the correct (only) instance of DB2 as the default. Can anyone tell me why I have to have different connection strings for the development and production servers? I would like to be able to use the same connection string for both environments if at all possible. If that means either specifying a provider for both, or for neither, I don't care which - I would just like to know what's going on and how to fix it.
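
    A minimal sketch (Classic ASP/VBScript; the DSN name is the post's placeholder) of keeping both connection-string forms side by side so either can be selected per environment while the difference is being diagnosed:

        ' Open the DB2 connection; swap the commented line per environment
        Dim conn
        Set conn = Server.CreateObject("ADODB.Connection")
        conn.Open "Provider=IBMDADB2;DSN=some_name;"  ' works on the development box
        ' conn.Open "DSN=some_name;"                  ' works on the production box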

  • Remote App Authentication Error (Code:0x507)

    - by CMK
    Hi, I'm trying to get RDP services running with Windows 2008 R2. I'm at a WinXP SP3 client that was modified to run RDP with NLA. When I start the client and connect to the local DC, I get an authentication error (code 0x507). I've already done the following:

    Server: set up to run as a standalone local "DC" to provide Terminal Services for a single application. The Remote Desktop Session Host CAL license is running and operational; RD Gateway Manager with local server RAP and CAP is running with NLA and operational, etc. The server has NLA and temporary use of port 3389, which is directly connected to and accessible from the internet (I am planning to change the port to 443, but want to get the current system running first).

    XP client(s): the RDP version on the WinXP clients is 6.1. If a client had SP2, I added SP3 and edited the registry settings to allow NLA, by:

    1. Click Start, click Run, type regedit, and then press ENTER.
    2. In the navigation pane, locate and then click the following registry subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa
    3. In the details pane, right-click Security Packages, and then click Modify.
    4. In the Value data box, type tspkg. Leave any data that is specific to other SSPs, and then click OK.
    5. In the navigation pane, locate and then click the following registry subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders
    6. In the details pane, right-click SecurityProviders, and then click Modify.
    7. In the Value data box, type credssp.dll. Leave any data that is specific to other SSPs, and then click OK.
    8. Exit Registry Editor. Restart the computer.

  • Primary IDE Channel: Ultra DMA Mode 5 >> PIO Mode

    - by Wesley
    Hi, my netbook was having huge audio lag and just abnormally slow processing. After doing some searching on the internet, I found out that I needed to uninstall/reinstall the Primary IDE Channel found under the IDE controller section in the Device Manager. I would then set the transfer mode to DMA if available and everything would be great. For a period of time, I would see that "Ultra DMA Mode 5" was the current transfer mode, but every so often it'd revert back to "PIO Mode", which is when it's really laggy.

    What can I do to prevent the Primary IDE Channel from reverting from Ultra DMA Mode to PIO Mode? Also, my netbook has BSODed a few times when it is in PIO Mode, without any real explanation.

    I have a Samsung N120. Specs are as follows: http://www.samsung.com/ca/consumer/office/mobile-computing/netbook/NP-N120-KA01CA/index.idx?pagetype=prd_detail&tab=spec&fullspec=F. The only difference is that I have upgraded to 2.0 GB of DDR2 RAM.

    EDIT: For all who are looking for an answer to this problem, click the link in Kythos's answer and look at number 6 (Re-enable DMA using the Registry Editor). This always works for me now. If on reboot you seem to only have a black screen after XP loads, just wait... it is still loading and will show signs of life after 2-3 minutes.
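
    A minimal sketch (cmd, run as administrator) of the "re-enable DMA using the Registry Editor" fix referenced above, as it is commonly described for XP; the 0001 subkey is typically the primary IDE channel but can differ per machine, so verify in regedit first, then reboot so Windows redetects the transfer mode:

        :: Force redetection of the drive's supported modes
        reg delete "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E96A-E325-11CE-BFC1-08002BE10318}\0001" /v MasterIdDataChecksum /f
        :: Reset the CRC error counter on successful transfers
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E96A-E325-11CE-BFC1-08002BE10318}\0001" /v ResetErrorCountersOnSuccess /t REG_DWORD /d 1 /f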

  • How do I upgrade PHP on CentOS and Kloxo?

    - by Emerson
    I need to upgrade PHP so that I can upgrade Joomla on my dedicated server. I have:

        Kloxo 6.1.6
        php-5.2.17-1
        Linux CentOS-55-64-minimal 2.6.18-194.32.1.el5 x86_64 x86_64 x86_64 GNU/Linux

    I searched everywhere and I could only find that PHP 5.3 isn't compatible with Zend. I would like to upgrade to 5.2.4, which is the minimum for Joomla 1.6 and 1.7. I tried to run:

        yum update php.x86_64

    which is the PHP package installed, but it didn't work. This is a production server with quite a few users across many sites, so I wanted to do it as safely as possible. Is it safe to run "yum update"? It showed me 6 packages to install and 125 packages to update, including a kernel. Is that safe? I haven't touched Kloxo's yum repositories.

    Update: I just successfully ran "yum update". Now I think I need to know how to add a new repository that has the 5.2.4 and how to update to that specific version. Any ideas?
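
    A minimal sketch, assuming a third-party repository such as Remi (a common source of newer PHP builds for CentOS 5 at the time); on a production box like this, test against a staging copy first:

        # Add the Remi repository for EL5 and update PHP from it
        rpm -Uvh http://rpms.famillecollet.com/enterprise/remi-release-5.rpm
        yum --enablerepo=remi update php php-*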

  • Having trouble redirecting frevvo using mod_proxy

    - by user38859
    This question is similar to: http://serverfault.com/questions/102868/how-to-access-webservers-running-on-ports-blocked-on-companys-network

    Basically, I'm using Confluence and a plugin called frevvo. Confluence sits on port 8080 while frevvo sits on port 8082. I want to redirect both of them to port 80 via the Apache HTTP web server so that they don't get blocked by company proxies. I've been using the document on Atlassian that shows me how to run Confluence behind Apache (I can't post a second URL due to being a newbie here). I've successfully redirected Confluence from port 8080 to port 80, so I can now access Confluence using www.example.com/confluence. Now I tried doing the same thing for frevvo with the following configurations.

    Apache httpd:

        ProxyRequests Off
        ProxyPreserveHost On
        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>
        ProxyPass /confluence http://localhost:8080/confluence
        ProxyPassReverse /confluence http://localhost:8080/confluence
        <Location /confluence>
            Order allow,deny
            Allow from all
        </Location>
        ProxyPass /frevvo http://localhost:8082/
        ProxyPassReverse /frevvo http://localhost:8082/
        <Location /forms>
            Order allow,deny
            Allow from all
        </Location>

    And in server.xml for the frevvo Tomcat instance, I added the following within the <Host> tag:

        <Context path=" " docBase="" debug="0" reloadable="false">
            <!-- Logger is deprecated in Tomcat 5.5. Logging configuration for
                 Confluence is specified in confluence/WEB-INF/classes/log4j.properties -->
            <Manager pathname="" />
        </Context>

    The plugin, frevvo, when accessed through the browser at http://localhost:8082, usually redirects to http://localhost:8082/frevvo/web. With the above configuration, accessing www.example.com.au/frevvo redirects to www.example.com/frevvo/web/static/login - which doesn't work. I hope the above details are clear, and I appreciate anyone who could give us some insight.

  • Could not continue scan with NOLOCK due to data movement during installation

    - by dbdev1
    I am running Windows Server 2008 Standard Edition R2 x64 and I installed SQL Server 2008 Developer Edition. All of the preliminary checks run fine (apart from a warning about Windows Firewall and opening ports, which is unrelated to this and shouldn't be an issue - I can open those ports). Half way through the actual installation, I get a popup with this error:

        Could not continue scan with NOLOCK due to data movement.

    The installation still runs to completion when I press OK. However, at the end, it states that the following services "failed":

        database engine services
        sql server replication
        full-text search
        reporting services

    How do I know if this actually means that anything from my installation (which is on a clean Windows Server setup - nothing else on there, no previous SQL Servers, no upgrades, etc.) is missing? I know from my programming experience that locks are for concurrency control, and the Microsoft help on this issue points to changing my query's locks/transactions in a certain way to fix the issue. But I am not touching any queries.

    Also, now that I have installed the app, when I log in, I keep getting this message:

        TITLE: Connect to Server
        ------------------------------
        Cannot connect to MSSQLSERVER.
        ------------------------------
        ADDITIONAL INFORMATION:
        A network-related or instance-specific error occurred while establishing a connection to
        SQL Server. The server was not found or was not accessible. Verify that the instance name
        is correct and that SQL Server is configured to allow remote connections.
        (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)
        (Microsoft SQL Server, Error: 67)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=67&LinkId=20476
        ------------------------------
        BUTTONS: OK
        ------------------------------

    I went into the Configuration Manager and enabled named pipes and restarted the service (this is something I have done before, as this message is common and not serious). I have disabled Windows Firewall temporarily. I have checked the instance name against the error logs. Please advise on both of these errors; I think they are related.
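
    A minimal sketch (cmd) of two quick checks before digging further: confirm the service is actually running, then try a direct local connection that sidesteps instance-name resolution:

        :: Is the default instance's service running at all?
        sc query MSSQLSERVER
        :: Connect locally with Windows auth and print the version
        sqlcmd -S . -E -Q "SELECT @@VERSION"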

  • Ubuntu 12.04 Preseed LDAP Config

    - by Arturo
    I'm trying to deploy Ubuntu 12.04 via xCAT. Everything works except the automatic configuration of LDAP: the preseed file is read, but /etc/nsswitch.conf is not written properly. My preseed file:

        [...]
        ### LDAP Setup
        nslcd nslcd/ldap-bindpw password
        ldap-auth-config ldap-auth-config/bindpw password
        ldap-auth-config ldap-auth-config/rootbindpw password
        ldap-auth-config ldap-auth-config/binddn string cn=proxyuser,dc=example,dc=net
        libpam-runtime libpam-runtime/profiles multiselect unix, ldap, gnome-keyring, consolekit, capability
        ldap-auth-config ldap-auth-config/dbrootlogin boolean false
        ldap-auth-config ldap-auth-config/rootbinddn string cn=manager,dc=xcat-domain,dc=com
        nslcd nslcd/ldap-starttls boolean false
        nslcd nslcd/ldap-base string dc=xcat-domain,dc=com
        ldap-auth-config ldap-auth-config/pam_password select md5
        ldap-auth-config ldap-auth-config/move-to-debconf boolean true
        ldap-auth-config ldap-auth-config/ldapns/ldap-server string ldap://192.168.32.42
        ldap-auth-config ldap-auth-config/ldapns/base-dn string dc=xcat-domain,dc=com
        ldap-auth-config ldap-auth-config/override boolean true
        libnss-ldapd libnss-ldapd/clean_nsswitch boolean false
        libnss-ldapd libnss-ldapd/nsswitch multiselect passwd,group,shadow
        nslcd nslcd/ldap-reqcert select
        ldap-auth-config ldap-auth-config/ldapns/ldap_version select 3
        ldap-auth-config ldap-auth-config/dblogin boolean false
        nslcd nslcd/ldap-uris string ldap://192.168.32.42
        nslcd nslcd/ldap-binddn string
        [...]

    After the installation, nsswitch.conf remains unchanged. Does anyone have an idea? Thanks!
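
    A minimal sketch of a workaround, assuming patching the file at the end of the install is acceptable: a d-i late_command can edit nsswitch.conf in the target system directly (add this as one line to the preseed file):

        # Rewrite the three NSS databases to consult LDAP after the base install
        d-i preseed/late_command string in-target sh -c "sed -i 's/^passwd:.*/passwd: compat ldap/; s/^group:.*/group: compat ldap/; s/^shadow:.*/shadow: compat ldap/' /etc/nsswitch.conf"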

  • pGina with automatic Kerberos ticket and OpenAFS token/ticket

    - by rolands
    I am currently updating our educational Windows lab images from XP to 7. In doing so we are also migrating from Comtarsia to pGina. Unfortunately, somewhere in the transition, our automation that fetched Kerberos tickets and OpenAFS tokens on login has completely stopped functioning.

    Basically, what used to happen was this: using kfw-3.2.2 and the old OpenAFS release (loopback adapter days), Comtarsia would share the password or something with NIM (Network Identity Manager), which would authenticate against the Kerberos server, gaining the ticket and AFS token needed to access the user's files. This was aided by the fact that the LDAP database Windows authenticates against is also what Kerberos uses to authenticate, so usernames/passwords are the same across both services.

    I have set up all of the tools, albeit newer 64-bit versions, which seem to have given me less trouble than the previous releases of NIM/OpenAFS/Krb5, and set their configurations back to what we used to use. Unfortunately this seems to be fubar'd in some way: all we get now is an OpenAFS token, most likely (I assume) from the AFScreds tool, which operates some kind of integrated login process. This does not help in getting a Kerberos ticket or an AFS ticket, for which a login box is provided by NIM after the user logs in.

    Does anyone know IF it is possible to do what we are trying, and if so, how? I was considering writing a pGina plugin which would interact with the server itself, but this seems slightly like overkill considering that all these applications already exist...
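
    A minimal sketch (cmd, assuming the Kerberos for Windows and OpenAFS command-line tools are on the PATH; the realm is illustrative) of the two steps the old automation effectively performed, runnable by hand or from a post-login script while debugging the pGina flow:

        :: Obtain a Kerberos TGT, then derive an AFS token from it (verbose)
        kinit %USERNAME%@EXAMPLE.EDU
        aklog -d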

  • SCVMM 2008 R2 problems migrating VM from VS2005 to Hyper-V host

    - by Scott Ivey
    I have System Center Virtual Machine Manager 2008 R2 installed, with a Hyper-V R2 host and a Virtual Server 2005 host. I'm trying to migrate my machines from the VS2005 host to the Hyper-V host, and keep getting the following error:

        VMM is unable to complete the requested file transfer. The connection to the HTTP server
        myserver.mydomain.local could not be established. (Unknown error (0x80072efd))

        Recommended Action
        Ensure that the HTTP service and/or the agent on the machine myserver.mydomain.local are
        installed and running and that a firewall is not blocking HTTPS traffic.

    (Note - migrations between Hyper-V hosts managed by the VMM server work fine; my problem is just going from VS2005 to Hyper-V hosts.)

    I have no firewalls turned on on either of the servers, and no firewalls in the middle. I've looked all over for answers to this problem, and am getting nowhere. All the articles I find when searching are talking about either V2V or P2V - and I'm just trying to do a straight migrate-VM. I've tried rebooting the boxes, changing the BITS SSL port number, restarting services, triple-checking firewalls, etc. Does anyone have any good suggestions as to how I can resolve this problem?

  • Why is the System process listening on port 80?

    - by Seth Spearman
    I am running Windows 7 RC1. I have multiple issues getting IIS to work on my system, and today, when I installed a new application and tried to load it using http://localhost/MyApplication, I got absolutely no errors and no page load - just a pretty, white, blank page.

    I did some digging and found something about some other process listening on port 80, so I did a scan using:

        netstat -aon | findstr 0.0:80

    and discovered that PID 4 was listening on that port. PID 4 does not show in Task Manager, so I fired up Process Explorer, which showed me that PID 4 is the System process. (Multiple Google searches seem to indicate that System always uses PID 4.) Since then I am basically stuck. I have no idea why System needs port 80 and what to do about it.

    If you google the following strings you will find two helpful Experts-Exchange articles at the top of the search results, and you can read them for some helpful information. (If I gave the direct URL to the pages then Experts-Exchange would ask you to pay... but when you click on the results from a Google search you can scroll all of the way to the bottom to read the exchanges.) Here are the Google searches:

        "System Process is listening on port 80 (Vista)"
        "SYSTEM Process is listening on Port 80 and Preventing IIS Default Website from Running"

    The last entry from the first result showed how to do a trace of http.sys at the following URL: http://blogs.msdn.com/wndp/archive/2007/01/18/event-tracing-in-http-sys-part-1-capturing-a-trace.aspx. The trace showed nothing useful. Any thoughts?
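
    A minimal sketch (cmd, elevated): PID 4 owning port 80 is normally http.sys, the kernel-mode HTTP listener, so the real question is which registered application reserved the URL. Its request queues can be listed with:

        :: Show http.sys request queues and the processes attached to them
        netsh http show servicestate view=requestq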

  • Unable to install PHP-FPM on Apache (Failed to connect to FastCGI server)

    - by Nyxynyx
    I have been having problems installing PHP-FPM for use with apache2-mpm-worker. This is the guide that I am following. According to the guide's Step 5:

        Alias /php5-fcgi /usr/lib/cgi-bin/php5-fcgi
        FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi -host 127.0.0.1:9000 -pass-header Authorization

    However I cannot find php5-fcgi at /usr/lib, only /usr/bin/php5-cgi and /usr/bin/php-cgi, which I am not sure are the same thing. So I changed the lines in Step 5 to:

        Alias /php5-fcgi /usr/bin/php5-fcgi
        FastCgiExternalServer /usr/bin/php5-fcgi -host 127.0.0.1:9000 -pass-header

    On restarting Apache, its logs gave the errors:

        [notice] caught SIGTERM, shutting down
        [alert] (4)Interrupted system call: FastCGI: read() from pipe failed (0)
        [alert] (4)Interrupted system call: FastCGI: the PM is shutting down, Apache seems to have disappeared - bye
        [notice] Apache/2.2.22 (Ubuntu) mod_fastcgi/mod_fastcgi-SNAP-0910052141 configured -- resuming normal operations
        [notice] FastCGI: process manager initialized (pid 16348)

    And on loading the index page:

        [error] [client 10.0.2.2] (111)Connection refused: FastCGI: failed to connect to server "/usr/bin/php5-cgi": connect() failed
        [error] [client 10.0.2.2] FastCGI: incomplete headers (0 bytes) received from server "/usr/bin/php5-cgi"
        [error] [client 10.0.2.2] File does not exist: /var/www/mydomain/public/favicon.ico

    Question: Any idea why php5-fcgi is missing, and how should this problem be fixed? Thank you!! :)
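
    A minimal sketch of the likely misunderstanding, assuming mod_fastcgi with PHP-FPM actually listening on 127.0.0.1:9000: the /usr/lib/cgi-bin/php5-fcgi path in the guide is only a virtual handle tying the Alias to the external FastCGI server - no such binary has to exist on disk - so the guide's original lines can stay as written, with a handler routing .php requests to it (mod_actions assumed enabled; the connection-refused errors also suggest checking that php-fpm is running on that port):

        # The aliased path is a handle, not a real file
        Alias /php5-fcgi /usr/lib/cgi-bin/php5-fcgi
        FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi -host 127.0.0.1:9000 -pass-header Authorization
        # Route PHP requests through the handle
        AddHandler php5-fcgi .php
        Action php5-fcgi /php5-fcgi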

  • How to install PHP5.3 and SQLite3 on Ubuntu 8.04

    - by richard
    Hello, I got a Ubuntu Hardy VPS and I am trying to install PHP 5.3 with SQLite. I added the dotdeb PHP 5.3 repository and succeeded in installing PHP 5.3, but I need to install SQLite as well. When I try to install php5-sqlite3 (sudo aptitude install php5-sqlite3), this is the output:

        The following packages are BROKEN:
          php5-sqlite3
        The following NEW packages will be automatically installed:
          php-db php-pear php-sqlite3
        The following NEW packages will be installed:
          php-db php-pear php-sqlite3
        0 packages upgraded, 4 newly installed, 0 to remove and 0 not upgraded.
        Need to get 460kB of archives. After unpacking 3027kB will be used.
        The following packages have unmet dependencies:
          php5-sqlite3: Depends: phpapi-20060613 which is a virtual package.
        Resolving dependencies...
        The following actions will resolve these dependencies:

        Remove the following packages:
          libapache2-mod-php5
          php5
          php5-mysql

        Install the following packages:
          php-pear [5.2.4-2ubuntu5.10 (hardy-updates, hardy-security)]

        Downgrade the following packages:
          php5-cli [5.3.1-0.dotdeb.1 (<NULL>, now) -> 5.2.4-2ubuntu5.10 (hardy-updates, hardy-security)]
          php5-common [5.3.1-0.dotdeb.1 (<NULL>, now) -> 5.2.4-2ubuntu5.10 (hardy-updates, hardy-security)]
          php5-suhosin [5.3.1-0.dotdeb.1 (<NULL>, now) -> 0.9.22-1 (hardy)]

        Score is 197

        Accept this solution? [Y/n/q/?]

    Obviously, downgrading PHP is not an option. Please help me! If upgrading the server to a newer release of Ubuntu makes things easier, that's not a problem.
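
    A minimal sketch of the likely cause and a check, assuming dotdeb ships its own SQLite module for its 5.3 packages: Hardy's php5-sqlite3 is compiled against the stock phpapi-20060613, which a dotdeb PHP 5.3 can never provide, so the SQLite module has to come from the same repository as the interpreter:

        # Install the SQLite module matching the dotdeb build, then verify
        sudo aptitude install php5-sqlite
        php -m | grep -i sqlite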

  • PHP-APC Installation

    - by Leo
    Trying to get my head around the way to install the APC cache on PHP 5.3.13, on a VPS with Apache, configured preferably through WHM/cPanel (although not exclusively). I read a bunch of articles where it was suggested to use FastCGI with APC, as suPHP doesn't do well with opcode caching, and fcgid_module doesn't do it right for APC either. I noted that fcgid_module is a newer package than FastCGI and that's what WHM/cPanel installs for you, but OK, that can be solved I guess.

    Then I read that php-fpm is a much better alternative for managing the PHP processes, especially for APC. Then I realised that php-fpm has been included in the PHP core since 5.3 and got confused.

    Does that mean I don't have to use FastCGI/fcgid_module (and what should I use instead of them - mod_php or CGI?)? Or does that mean that I still need to get the older FastCGI module and configure it to use one process per user (or just one process?)? Or would fcgid_module work as well?

    And how bad would it be just to go with mod_php/APC to avoid the trouble of installing php-fpm and FastCGI (WHM/cPanel supports neither), given that Varnish would serve most of the static content anyway - no PHP process needs to be created for static content?

    Any examples of your FastCGI/fcgid_module/php-fpm/APC configurations would be greatly appreciated as well.
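
    A minimal sketch, assuming APC is installed from PECL into whichever PHP (mod_php or php-fpm) ends up serving requests; the php.ini path is cPanel's usual default and the shm_size value is illustrative:

        # Build and enable the APC opcode cache, then verify it loaded
        pecl install apc
        echo 'extension=apc.so' >> /usr/local/lib/php.ini
        echo 'apc.shm_size=64M' >> /usr/local/lib/php.ini
        php -i | grep apc.shm_size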
