Search Results

Search found 15439 results on 618 pages for 'wls configuration'.

Page 383/618

  • How to fix the error dpkg: error processing colord (--configure):

    - by ranjitpradhan
    I have upgraded my Ubuntu from 11.10 to 12.04. Now I find that when I try to install some packages, I get an error. After reading some blogs I tried to fix it with "sudo dpkg --configure -a", but when I run this command it shows another error:

        Setting up colord (0.1.16-2) ...
        useradd: cannot lock /etc/passwd; try again later.
        adduser: `/usr/sbin/useradd -d /var/lib/colord -g colord -s /bin/false -u 115 colord' returned error code 1. Exiting.
        dpkg: error processing colord (--configure):
         subprocess installed post-installation script returned error exit status 1
        Setting up whoopsie (0.1.32) ...
        useradd: cannot lock /etc/passwd; try again later.
        adduser: `/usr/sbin/useradd -d /nonexistent -g whoopsie -s /bin/false -u 115 whoopsie' returned error code 1. Exiting.
        dpkg: error processing whoopsie (--configure):
         subprocess installed post-installation script returned error exit status 1
        Setting up lightdm (1.2.1-0ubuntu1) ...
        Adding system user `lightdm' (UID 115) ...
        Adding new user `lightdm' (UID 115) with group `lightdm' ...
        useradd: cannot lock /etc/passwd; try again later.
        adduser: `/usr/sbin/useradd -d /var/lib/lightdm -g lightdm -s /bin/false -u 115 lightdm' returned error code 1. Exiting.
        dpkg: error processing lightdm (--configure):
         subprocess installed post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of ubuntu-desktop:
         ubuntu-desktop depends on lightdm; however:
          Package lightdm is not configured yet.
        dpkg: error processing ubuntu-desktop (--configure):
         dependency problems - leaving unconfigured
        Errors were encountered while processing:
         colord whoopsie lightdm ubuntu-desktop

    What can I do now?
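
    A hedged starting point, not taken from the post above: the repeated "cannot lock /etc/passwd" message is often caused by stale lock files left behind by an interrupted upgrade. Assuming that is the cause here, something like the following checks for and removes them before re-running the configuration step:

        # look for leftover lock files from an interrupted adduser/useradd run
        ls -l /etc/passwd.lock /etc/shadow.lock /etc/group.lock /etc/gshadow.lock
        # remove them only if no other package or user-management process is running
        sudo rm -f /etc/passwd.lock /etc/shadow.lock /etc/group.lock /etc/gshadow.lock
        sudo dpkg --configure -a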

    Read the article

  • Overriding Apache auth directive

    - by Machine
    Hi! I'm trying to allow public access to a method that generates a WSDL-file for our API. The rest of the site is behind basic auth protection. Can you guys take a look at the following virtual-host configuration and see why the override does not take place?

        <VirtualHost *:80>
            ServerName xyz.mydomain.com
            DocumentRoot /var/www/dev/public
            <Directory /var/www/dev/public>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
                SetEnv APPLICATION_ENV testing
            </Directory>
            <Location />
                AuthName "XYZ Development Server"
                AuthType Basic
                AuthUserFile /etc/apache2/xyz.passwd
                Require valid-user
            </Location>
            <Location /api/soap/wsdl>
                Satisfy Any
                allow from all
            </Location>
        </VirtualHost>
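
    A sketch of one possible fix, assuming Apache 2.2 access-control semantics (not taken from an accepted answer): in 2.2 a Location block that relies on host-based access usually needs its own Order/Allow directives so that Satisfy Any has something to satisfy, roughly:

        <Location /api/soap/wsdl>
            Order allow,deny
            Allow from all
            Satisfy Any
        </Location>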

    Read the article

  • How to set up virtual users in vsftpd?

    - by ares94
    I've read this tutorial: http://howto.gumph.org/content/setup-virtual-users-and-directories-in-vsftpd/ My configuration is as follows:

        ---vsftpd.conf---
        listen=YES
        anonymous_enable=NO
        local_enable=YES
        virtual_use_local_privs=YES
        write_enable=YES
        connect_from_port_20=YES
        pam_service_name=vsftpd
        guest_enable=YES
        user_sub_token=$USER
        local_root=/var/www/sites/$USER
        chroot_local_user=YES
        hide_ids=YES

        ---/etc/pam.d/vsftpd---
        auth required pam_pwdfile.so pwdfile /etc/vsftpd/passwd
        account required pam_permit.so

    I created the file /etc/vsftpd/passwd and added users with htpasswd. I tried to log in but it didn't work:

        ftp 127.0.0.1
        Connected to 127.0.0.1 (127.0.0.1).
        220 vsFTPd 2.3.5+ (ext.1) ready...
        Name (127.0.0.1:root): user1
        331 Please specify the password.
        Password:
        530 Permission denied.
        Login failed.

    Everything seems fine except the permission denied thing. How can I fix this?
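
    One hedged guess, not confirmed by the post: pam_pwdfile generally expects crypt()- or MD5-crypt-style hashes, while htpasswd writes Apache's apr1 format by default, which produces exactly this kind of silent 530 failure. Assuming that is the issue, regenerating the entries with plain crypt hashes looks roughly like:

        # -c creates the file, -d forces old-style crypt() hashes that pam_pwdfile understands
        htpasswd -c -d /etc/vsftpd/passwd user1
        # subsequent users: omit -c so the file is not overwritten
        htpasswd -d /etc/vsftpd/passwd user2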

    Read the article

  • Dell PowerEdge 6850 Degraded HDD

    - by Matt
    Good morning. We have a Dell PowerEdge 6850 with a degraded drive in the RAID array. I have never had to recover from such an issue, so any help or advice would be welcome. It hasn't affected the server at an operating-system level, but it has slowed down performance. I have a replacement drive in hand, but as this is our main database server I want to proceed with extreme caution. My options as I see them are: (1) Can I just hot-swap the degraded drive with the new one, so that the data automatically re-syncs and we are back to normal? Presumably this depends on the current RAID configuration. (2) Reading various comments online, I may need to re-configure the RAID array and rebuild the failed drive. This screams disaster to me, with the main worry being that I wipe other data. Option 1 would of course make my day. Thanks in advance
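
    A hedged aside, not from the original thread: if Dell OpenManage Server Administrator is installed, the controller and disk state can be checked from the OS before anything is touched, along the lines of:

        # list physical disks and their state (Online, Degraded, Failed, Rebuilding)
        omreport storage pdisk controller=0
        # show the virtual disk, its RAID level and overall state
        omreport storage vdisk controller=0

    On most PERC controllers a hot-swapped replacement in the same slot starts rebuilding automatically, but verifying the RAID level and disk state first is the cautious path.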

    Read the article

  • Transferring DHCP using Windows Server Migration Tools - Why is PowerShell crashing on the import of the .mig file?

    - by Mike
    I am migrating DHCP from a Windows Server 2003 R2 DC to a Windows Server 2008 R2 DC. I've followed this video and its predecessor (Installing Windows Server Migration Tools): http://technet.microsoft.com/en-us/video/migrating-dhcp-using-the-windows-server-2008-r2-migration-tools.aspx Everything went smoothly until the last step. I exported a .mig file with my DHCP configuration on the old 2003 R2 server and transferred it to the 2008 R2 server. When I run the import command, it appears to work for a minute or two and then I get a generic Windows "PowerShell has stopped working" error and have to close the program. Under the problem details I see the following:

        FileVersionOfSystemManagementAutomation: 6.1.7600.16385
        InnermostExceptionType: System.AccessViolationException
        OutermostExceptionType: System.AccessViolationException
        DeepestPowerShellFrame: unknown
        OS Version: 6.1.7600.2.0.0.272.7
        LocaleID: 1033

    It seems like there might be permissions issues? I am running PowerShell as an administrator and am logged in to the server as a domain administrator. Any ideas? Thanks
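
    For reference, and purely as an assumption about the step being run (the post does not show the exact command, and the store path below is hypothetical), the import side of the Server Migration Tools walkthrough is normally a snap-in cmdlet along these lines:

        Import-SmigServerSetting -FeatureID DHCP -Path C:\MigStore -Verbose

    If the crash is reproducible, running it from the elevated "Windows Server Migration Tools" PowerShell session (rather than a plain PowerShell window) is the documented prerequisite worth double-checking.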

    Read the article

  • Alternatives to Citrix GoToAssist?

    - by Evan Carroll
    Citrix GoToAssist is a really nifty little web application for customer support that allows you to take control of someone's OS X or Windows machine. Essentially, it works like this:

        1. You log in to your management console.
        2. You get a code.
        3. You give them the code and a website (fastsupport.com).
        4. They go there and enter the code.
        5. They accept the browser applet, which installs a program on their computer.
        6. You have control of their desktop.
        7. You can see their desktop, configure applications, etc. They can also see when you disconnect.

    It is really rather nifty, but it doesn't support Linux and it is rather expensive ($660 a year). Does anyone know of any alternatives to this? I'm looking for a solution as simple for the user as this one, that doesn't require firewall configuration or setting up ssh/vnc/rdesktop etc.

    Read the article

  • Should I impersonate PHP via FastCGI?

    - by AKeller
    I am installing the latest version of PHP onto IIS 7.5 via FastCGI, and all of the instructions say that FastCGI should impersonate the calling client by setting fastcgi.impersonate = 1. If my website will have this configuration:

        - a dedicated application pool
        - an application pool identity of ApplicationPoolIdentity
        - anonymous authentication only (as IUSR)

    why do I want to impersonate? I come from an ASP.NET background, where the IUSR gets read-only permissions and the application pool identity gets any write permissions. Giving write access to the IUSR usually opens the door for WebDAV vulnerabilities, so I hesitate to let PHP run as the IUSR. I can't find many people asking this question (1 | 2), so I think I must be missing something. Can someone clarify this for me?

    Read the article

  • CentOS - Yum doesn't update anymore?

    - by Xanathos
    I've been trying to use yum, but for some reason not even search works anymore. I even tried putting packages I have already downloaded in the search criteria, and it's the same.

        [root@AMDFX03 Downloads]# yum search glibc
        Loaded plugins: fastestmirror, refresh-packagekit, security
        Loading mirror speeds from cached hostfile
        epel/metalink                                   | 22 kB  00:00
         * base: centos.secrel.com.br
         * epel: archive.linux.duke.edu
         * extras: centos.secrel.com.br
         * rpmforge: apt.sw.be
         * updates: centos.secrel.com.br
        adobe-linux-x86_64/primary                      | 1.2 kB 00:00
        http://linuxdownload.adobe.com/linux/x86_64/repodata/primary.xml.gz: [Errno -1] Metadata file does not match checksum
        Trying other mirror.
        Error: failure: repodata/primary.xml.gz from adobe-linux-x86_64: [Errno 256] No more mirrors to try.

    This error always appears no matter what I do. Please, can you tell me how to fix this, or at least how to reset yum's configuration?
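
    A hedged first step, not taken from the thread: the checksum complaint comes from the third-party Adobe repository, so clearing yum's cached metadata and, if that fails, skipping that repo usually isolates the problem:

        # throw away cached metadata so it is re-downloaded fresh
        yum clean metadata
        yum clean all
        # if the Adobe repo is still broken, bypass it for this query
        yum --disablerepo=adobe-linux-x86_64 search glibc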

    Read the article

  • Apache worker is crashing after 3,000 users

    - by user1618606
    I activated the Apache worker MPM on my VPS and I'm having problems: the website crashes when about 3,000 users are accessing it. I'm using http://whos.amung.us/stats/2jzwlvbhvpft/ as a counter. My Apache worker configuration:

        KeepAlive On
        MaxKeepAliveRequests 0
        KeepAliveTimeout 1
        <IfModule mpm_worker_module>
            ServerLimit          20000
            StartServer           8000
            MinSpareThreads      10400
            MaxSpareThreads      14200
            ThreadLimit              5
            ThreadsPerChild          5
            MaxClients           20000
            MaxRequestsPerChild      0
        </IfModule>

    The VPS runs Debian 64-bit with a LAMP stack, 14 GB of memory and 24 GHz of CPU. What could I do to get better performance?
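
    As a hedged illustration only (the numbers are assumptions, not tuned for this VPS): worker sizing usually keeps ThreadsPerChild fairly high and ServerLimit modest, and the directive is spelled StartServers, so a more conventional shape for roughly 3,000-4,000 concurrent connections would look something like:

        <IfModule mpm_worker_module>
            StartServers             4
            ServerLimit             64
            ThreadsPerChild         64
            ThreadLimit             64
            MinSpareThreads         75
            MaxSpareThreads        250
            MaxClients            4096
            MaxRequestsPerChild  10000
        </IfModule>

    MaxClients must stay at or below ServerLimit x ThreadsPerChild (here 64 x 64 = 4096), and very large spare-thread values like those in the original config tend to exhaust memory long before the client limit is reached.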

    Read the article

  • What should I do when the GETDEB repository is down?

    - by 13east
    I'm trying to install programs/files from the getdeb repo but get a "check your internet connection" error. Programs that I've tried include qBittorrent and mozilla-plugin-vlc. Through the terminal I get the following errors:

        Err http://archive.getdeb.net/ubuntu/ natty-getdeb/apps qbittorrent amd64 2.8.2-1~getdeb2
          Could not connect to archive.getdeb.net:80 (209.105.191.78). - connect (113: No route to host)
        E: Failed to fetch http://archive.getdeb.net/ubuntu/pool/apps/q/qbittorrent/qbittorrent_2.8.2-1~getdeb2_amd64.deb: Could not connect to archive.getdeb.net:80 (209.105.191.78). - connect (113: No route to host)

    and

        Err http://archive.getdeb.net/ubuntu/ natty-getdeb/apps mozilla-plugin-vlc amd64 1.1.10-1~getdeb1
          Could not connect to archive.getdeb.net:80 (209.105.191.78). - connect (113: No route to host)
        E: Failed to fetch http://archive.getdeb.net/ubuntu/pool/apps/v/vlc/mozilla-plugin-vlc_1.1.10-1~getdeb1_amd64.deb: Could not connect to archive.getdeb.net:80 (209.105.191.78). - connect (113: No route to host)

    I'm not having any problems adding/removing other programs/files on my machine. Is there a problem with the getdeb repo (and if so, is there any way to fix it on my end), or is there something wrong with my computer's configuration?
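
    A hedged aside, not from the thread: "No route to host" from a single mirror usually means the repository host itself is unreachable, so a quick check from the same machine, and temporarily disabling the repo while it is down, would look roughly like this (the sources file name is an assumption; the entries may live in /etc/apt/sources.list instead):

        # confirm it is the remote end, not local networking
        ping -c 3 archive.getdeb.net
        # temporarily comment out the getdeb entries while the mirror is down
        sudo sed -i 's|^deb http://archive.getdeb.net|# &|' /etc/apt/sources.list.d/getdeb.list
        sudo apt-get update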

    Read the article

  • Problem accessing MICROSOFT##SSEE database (Error: 18456, Severity: 14, State: 16.)

    - by Philipp Schmid
    After an unexpected server shutdown due to a power failure, I can no longer connect to the internal Windows database MICROSOFT##SSEE, which hosts Central Admin for my SBS 2008 server. The log shows: Error: 18456, Severity: 14, State: 16. Login failed for user 'NT AUTHORITY\NETWORK SERVICE'. [CLIENT: <named pipe>] I've tried to connect using SQL Server Management Studio (connecting to \\.\pipe\MSSQL$MICROSOFT##SSEE\sql\query) but no luck. The SQL Server Configuration Manager doesn't show an entry for 'Protocols for MICROSOFT##SSEE' (but shows one for two other databases hosted on the same SQL Server 2005 Express edition). I have tried to restore the master.mdf and mastlog.ldf files from a backup, but the issue persists.
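
    One hedged diagnostic step, not from the post: error 18456 with state 16 generally means the login succeeded but its default database could not be opened, so connecting as an administrator over the instance's named pipe and checking the databases is a reasonable next move:

        sqlcmd -E -S np:\\.\pipe\MSSQL$MICROSOFT##SSEE\sql\query
        1> SELECT name, state_desc FROM sys.databases;
        2> GO

    If that connection also fails, the problem is the instance itself (service or master database) rather than the NETWORK SERVICE login.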

    Read the article

  • What is the default value for Empty Temporary Internet Files when browser is closed in IE8?

    - by schellack
    We have four different machines that all have "Empty Temporary Internet Files when browser is closed" set to true (checked) in IE8's Internet Options (located under the Security section of the Advanced tab). No one remembers checking that checkbox to turn the setting on. What is the default value supposed to be? I'm specifically interested in Windows 7 and Windows XP. I have run rsop.msc on one of the corporate machines (3 of the 4 are members of a corporate network/domain) and see this under User Configuration, which makes the current scenario seem even stranger. The Local Group Policy Editor (gpedit.msc) also shows the Configure Delete Browsing History on exit setting as Not configured (under Computer Configuration\Administrative Templates\Windows Components\Internet Explorer\Delete Browsing History).

    Read the article

  • startx error no desktop manager

    - by WikiWitz
    I have BackTrack 5 R2 with KDE. I booted into recovery mode and did a failsafe xorg configuration. After that, I cannot load KDE when I enter the startx command after logging in. Whenever I run startx (as root), the result resembles the following (this is not the actual output; I drew it in MS Paint because I cannot take a screenshot): the screen is just black with an icon in the upper left corner, and a pop-up menu appears when left-clicking the mouse. I tried the cp xorg.conf.failsafe xorg.conf advice from other websites with no luck. I have also tried the 'reconfigure option(s)' from the recovery mode, with no success.
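
    A hedged suggestion, not from the thread: since the failsafe run overwrote the X configuration, moving the generated xorg.conf out of the way and letting the server autodetect (or regenerating a baseline config) is a common recovery path on Ubuntu-based systems like BackTrack:

        # keep a copy of the failsafe-generated config, then let X autodetect
        mv /etc/X11/xorg.conf /etc/X11/xorg.conf.failsafe-backup
        startx
        # alternatively, regenerate a baseline configuration
        dpkg-reconfigure xserver-xorg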

    Read the article

  • Clone a Red Hat RAID as part of a disaster recovery plan

    - by Campo
    I am looking for recommendations on cloning a Red Hat mirrored RAID to a single hard drive located in the same machine. The idea is that if the server's hardware ever has an issue, we have a similar hardware machine ready to go; all we would have to do is pop in the cloned drive. If the server's RAID ever failed, we could just switch to the single drive to maintain uptime and restore the original configuration on the spare server from a backup. This is a restaurant and they are open 7 days a week. We do have time from 12:00 am to 9:00 am to perform the necessary steps for a clone, and we are talking about under 10 GB of information. There is a database on the server. I have looked into rsync and Clonezilla, but I am just not confident either is capable of completing the task I want. Looking for some suggestions, and possibly a step-by-step if you could be so kind.
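
    Purely as a hedged sketch (device names are assumptions, and the database should be stopped or quiesced during the copy): a raw block-level clone of one member of the mirror onto the spare disk during the overnight window could be as simple as:

        # /dev/sda = one member disk of the mirror (or the hardware-RAID array device),
        # /dev/sdc = the spare single drive; both names are assumptions
        dd if=/dev/sda of=/dev/sdc bs=4M conv=noerror,sync

    For under 10 GB of data a nightly file-level copy with rsync onto a bootable spare is the other common approach; the block clone is simpler but copies the whole device every time.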

    Read the article

  • RandR 1.4 Optimus Dual Monitor

    - by mathepic
    So, I have a dual monitor setup. HDMI comes in through the Nvidia GPU; the main display is through Intel (I think). I want to use XMonad with the dual setup, and I want to be able to run with or without the second monitor. Is this even doable? I'm using RandR 1.4 and can get both monitors to display something at the same time (by messing with xrandr), but XMonad can never detect more than one rectangle from Xinerama. Does anyone have a working multi-monitor Xinerama or TwinView configuration that works with Optimus/RandR 1.4?
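
    A hedged sketch of the RandR 1.4 provider-offload steps (the output and provider names below are assumptions; they vary by driver):

        # list the render/output providers (the Intel iGPU and the NVIDIA GPU)
        xrandr --listproviders
        # route the NVIDIA-attached outputs through the Intel-driven X screen
        xrandr --setprovideroutputsource nouveau Intel
        # then enable the HDMI output next to the internal panel
        xrandr --output HDMI-1 --auto --right-of LVDS-1

    Once both outputs are active on one X screen, Xinerama-aware window managers such as XMonad normally see two rectangles without further configuration.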

    Read the article

  • What software is available to keep track of hundreds of servers?

    - by djangofan
    What good software is available (free or not) to help me keep track of information relating to hundreds of servers: their relationships to each other (parent/child, category, type) and information on connecting to them, as well as possibly showing a picture or grid of some kind that allows me to report these relationships and key information to my supervisor? I am trying to avoid the "spreadsheet solution" or "Visio solution" because I want to share this information and make changes together with other people on my server team. In other words, the solution I am looking for is a cross between a spreadsheet and Visio, providing both graphing and configuration information WITHOUT monitoring, and in a consistent format.

    Read the article

  • In Varnish, is it normal for the number of freed bytes to be 60% of those allocated?

    - by user331397
    I have an installation of Varnish 3.0.2 on an Amazon EC2 medium Linux instance in front of two relatively low-traffic websites. After an uptime of 2 hours, there are 3400 objects in the cache. Using varnishstat, I checked the variables SMA.s0.c_bytes and SMA.s0.c_freed, which I assume correspond to the total number of bytes allocated since startup and the number freed, respectively. No objects should have had time to expire during these two hours, but still about 60% of the memory allocated since startup (330 MB out of 560 MB) has already been freed. Do you know if this is normal? If not, do you know what kind of configuration could be wrong?
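
    As a hedged check (the counters are the ones named in the post; the field syntax below is the Varnish 3 form and varies between versions): watching the allocation counters together with current usage and eviction activity makes it easier to tell short-lived workspace allocations apart from evicted objects:

        # -1 prints once and exits; -f restricts output to a comma-separated field list
        varnishstat -1 -f SMA.s0.c_bytes,SMA.s0.c_freed,SMA.s0.g_bytes,SMA.s0.g_space
        # n_lru_nuked > 0 would indicate objects being evicted to make room
        varnishstat -1 -f n_lru_nuked

    c_bytes minus c_freed should roughly equal g_bytes (bytes currently in use), so a high freed fraction with stable g_bytes usually reflects transient allocations rather than lost cache objects.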

    Read the article

  • .VBS scripts have stopped running... No idea why!

    - by Django Reinhardt
    We have two .vbs scripts, run by our Task Scheduler, that have suddenly stopped working for no reason we can fathom. We haven't significantly altered our system configuration in the last 24 hours, and the scripts have run without a hitch for months. According to the Task Scheduler the scripts just keep running and never stop, which is normally never the case. I stopped all running instances through the Scheduler and manually ran one of the .vbs scripts. I got the following error message:

        Line: 15
        Error: The system cannot locate the resource specified.
        Code: 800C0005
        Source: msxml3.dll

    Line 15 (or 16 to be more accurate; line 15 itself is blank, but so is line 1) is: xml.Send What could have suddenly caused this? Looking in system32\ and SysWOW64\ shows that msxml3.dll exists. Anybody got any ideas? Thanks a lot!

    Read the article

  • VPS hosting for a social network

    - by Jana
    Hi, I've developed a social network and have been using shared hosting for it since it was launched. With that I wasn't able to send emails in bulk for things like newsletters and invitations to join my site; and, most importantly, most of the mails I send end up in users' SPAM folders. I'm planning to move to a VPS, as it may not have such limits. I'm wondering what the cheapest VPS host available is. I'm not very familiar with Linux commands and am looking for cPanel to do the work for me. Will the following configuration suit a new social network like mine, which has low load?

        1000 MHz guaranteed CPU
        512 MB guaranteed RAM
        20 GB (RAID) disk space
        1000 GB/month bandwidth
        2 IP(s) & 5 backups
        Semi-managed

    Thanks in advance

    Read the article

  • How to install PyQt on Mac OS X 10.6

    - by Albert
    I want to install PyQt. This seems kind of complicated to install on OS X. I haven't found any precompiled packages of it (are there any? I would really prefer those). So I downloaded PyQt, and SIP, because it depends on that. These files: http://www.riverbankcomputing.co.uk/static/Downloads/PyQt4/PyQt-mac-gpl-4.7.3.tar.gz http://www.riverbankcomputing.co.uk/static/Downloads/sip4/sip-4.10.2.tar.gz I did python configure.py && make && sudo make install on SIP; it installed without any problems. I tried the same on PyQt, and it failed, of course:

        /Library/Frameworks/QtCore.framework/Headers/qglobal.h:288:2: error: #error "You are building a 64-bit application, but using a 32-bit version of Qt. Check your build configuration."

    OK, so I tried with python configure.py --use-arch=i386. Same error. Any idea?
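
    A hedged note (the commands below are assumptions, not verified against this exact setup): the error means the Python interpreter is running 64-bit while the installed Qt frameworks are 32-bit, so forcing the SIP and PyQt configure/build steps to run under a 32-bit Python process is the usual workaround on 10.6:

        # re-run the SIP and then the PyQt build under a 32-bit Python process
        arch -i386 python configure.py --use-arch=i386
        arch -i386 make
        sudo make install

    Rebuilding SIP the same way first matters, because a 64-bit SIP module cannot be loaded by a 32-bit PyQt build.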

    Read the article

  • Apache conf for high traffic CMS with backend users?

    - by Annan
    I'm in a situation where a website is going to have a high number of web users and a few backend webmasters. Webmasters will upload images (and perform other memory-intensive tasks), and this bumps the memory allocation of the httpd child processes up to 100-150 MB. In order to stop swapping I'm currently setting MaxClients in httpd.conf to 20. However, this lowers the maximum number of simultaneous requests. Will this be a problem when the website goes live? What is the best configuration? Info: Drupal 6, PHP 5, Apache 2.2 (prefork at the moment). I'm thinking about the worker MPM, two Apache instances, or a low MaxRequestsPerChild.
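
    A hedged sketch of the "low MaxRequestsPerChild" option mentioned above, under prefork (the values are illustrative assumptions, not tuned for this site):

        <IfModule mpm_prefork_module>
            StartServers            5
            MinSpareServers         5
            MaxSpareServers        10
            MaxClients             20
            # retire a child after a modest number of requests so one 150 MB
            # upload-handling process does not stay resident indefinitely
            MaxRequestsPerChild   200
        </IfModule>

    The other common split is to keep heavy PHP uploads on this small prefork pool and serve static files from a separate lightweight instance or reverse proxy, so that MaxClients 20 only has to cover dynamic requests.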

    Read the article

  • Nginx proxy to Apache - resolve HTTP ORIGIN

    - by Fratyr
    I have a server set up with nginx serving static content and proxying all PHP/dynamic requests to Apache on 127.0.0.1. I'm building an API for my databases, and I need to allow clients by their origin (domain name), rather than just IP, based on CORS rules. So when I send an HTTP header header("Access-Control-Allow-Origin: www.client-requesting.myapi.com"); from my API server, I have to tell it which origin I allow; otherwise client-side requests to my API won't work, due to the same-origin policy. The question is: how can I know which domain name (if any) called my API? What should the nginx and Apache configuration be to pass the origin parameter? I tried to google, and all I found was a possible solution with mod_rpaf, but I wanted to be sure. Thanks!
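
    A hedged sketch (the upstream port and PHP usage below are assumptions about this setup): the Origin header is sent by the browser, and nginx forwards client request headers to the proxied backend by default, so the PHP side can usually read it directly; the proxy block only needs the usual host/client headers added:

        location ~ \.php$ {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host            $host;
            proxy_set_header X-Real-IP       $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

    On the Apache/PHP side the caller's origin then shows up as $_SERVER['HTTP_ORIGIN'] (absent for same-origin or non-CORS requests), which is what the Access-Control-Allow-Origin response header can be matched against.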

    Read the article

  • Thunderbird is unable to find server in a DHCP-controlled network

    - by Skizz
    I have a network which consists of a Linux server and a combination of WinXP, Win7 and Linux clients. All the systems are given dynamic IP addresses by the router which connects them. The server hosts an IMAP mail server. On Win7 and WinXP, Thunderbird can access the IMAP server without any problems. On the Linux client, using the same IMAP parameters, Thunderbird is unable to connect to the server. How do I get Thunderbird to find the server? I'm not sure if this is a Linux system configuration problem or a Thunderbird issue. Additional note: the Linux client is running GNOME, and the server has a series of Samba shares defined. On the client, doing Places - Connect to Server, selecting Windows Share and specifying the server name, the Samba share mounts OK.
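
    A hedged diagnostic, not from the thread: the Samba browsing working while IMAP fails suggests the hostname resolves via NetBIOS/WINS but not via DNS on the Linux box, and DNS is what Thunderbird uses. Checking both (the server name below is a placeholder) narrows it down:

        # how the C library (and Thunderbird) resolves the name
        getent hosts myserver
        # raw DNS lookup against the router's DNS
        nslookup myserver
        # quick workaround if DNS is the culprit: pin the name locally
        echo "192.168.1.10  myserver" | sudo tee -a /etc/hosts

    The IP and hostname in the last line are examples only; the real values come from the server's current DHCP lease (or a static reservation, which avoids the problem recurring).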

    Read the article

  • Problem connecting to Ubuntu Server in same local network.

    - by frbry
    I have my LAN set up as below:

        192.168.2.1:   ADSL router (DHCP range: 192.168.2.2-192.168.2.250)
        192.168.2.254: wireless access point
        192.168.2.253: Ubuntu server (static IP)
        192.168.2.2:   my laptop (connects to the Internet via the wireless AP)

    NAT on the router is active and set up to forward requests made over port 80 to 192.168.2.253. The router's firewall is inactive, and no IPs are in the DMZ. My friends get Apache's "It works" page when they browse to http://my_external_ip, but I get the router's configuration page instead. What should I check or do? Thanks.
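
    A hedged explanation and workaround, not from the thread: reaching the public IP from inside the LAN requires the router to support NAT loopback (hairpin NAT); many consumer ADSL routers do not, and answer with their own admin page instead. The usual workaround is to point the site's hostname at the server's LAN address on internal machines, for example (the hostname is a placeholder):

        # on the laptop, if it runs Linux/OS X; on Windows the file is
        # C:\Windows\System32\drivers\etc\hosts
        echo "192.168.2.253  www.example.com" | sudo tee -a /etc/hosts

    Machines outside the LAN keep using the public IP or DNS name unchanged.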

    Read the article

  • Protect individual sites on Ubuntu/Apache server

    - by Christoffer
    Hi! I need to set up an Apache server configuration for some client sites that run on the same Ubuntu 9.10 machine. All sites are allowed to run PHP, Python and Ruby on Rails. I do not control the source code of these sites, so I need to set up a filter to prevent one user from reaching files on another user's account. If I run a script that lists files in "/" from one account, I can browse some files and directories in the actual server root. I want to set the root for each account to /var/usersite.com/www/ instead, so that listing files in "/" shows the files in that client's root. How is this most easily configured? Cheers! /Christoffer
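
    For the PHP part only, a hedged sketch (paths are illustrative; Python and Rails processes need their own confinement, e.g. separate system users or chroots): Apache's per-vhost open_basedir restricts what PHP scripts can read, which covers the "listing /" case described above:

        <VirtualHost *:80>
            ServerName usersite.com
            DocumentRoot /var/usersite.com/www
            php_admin_value open_basedir "/var/usersite.com/www:/tmp"
        </VirtualHost>

    php_admin_value requires mod_php; under CGI/FastCGI the equivalent is an open_basedir line in that site's own php.ini.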

    Read the article
