Search Results

Search found 6020 results on 241 pages for 'valid'.


  • ConfigMgr 2012 - How to automatically make updates available to computers without forcing them to be installed?

    - by Massimo
    I'm using System Center Configuration Manager 2012 with the Software Update Point feature; however, in this environment patching has to be strictly manual, because server reboots need to be approved and scheduled by different people; thus, I need to use ConfigMgr's SUP like I would use a plain WSUS server with auto-approval but with manual installation. I created some Automatic Deployment Rules to automatically download and deploy critical updates, and to have an installation deadline of "as soon as possible"; but then, I've also configured those rules to not do anything when the deadline is reached, and to not perform system restarts even if needed (see image). Also, I've configured the device collection those rules deploy updates to so that it doesn't have any valid maintenance window. However, I'm experiencing quite the opposite of what I was expecting: as soon as the new updates are processed by the ADRs, they get automatically installed on all systems by the Software Center, and the computers are subsequently restarted. Why is this happening? Am I getting something wrong, or is ConfigMgr 2012 just not behaving as it should?

    Read the article

  • .htaccess issue on Apache Web Server in Ubuntu VM

    - by Neon Flash
    I just installed Apache Web Server on Ubuntu 11.04 in VMWare Workstation. I created a basic HTML page, named it index.html and placed it in the /var/www directory (the document root). I am able to access this web page from my host OS (Windows 7), by pointing the browser to: http://192.168.2.2/index.html where 192.168.2.2 is the IP address of the Ubuntu VM. Next, to test various configurations of .htaccess files, I created a new directory in /var/www called members. Inside this directory, I created and placed a .htaccess file with the following configuration: AuthUserFile /www/Neon/auth/.htpasswd AuthName "neon's home" AuthType Basic require valid-user IndexIgnore */* I created a directory path like /var/www/Neon/auth/ and then placed a .htpasswd file inside it. To populate the .htpasswd file, I created a username "neon", calculated the DES hash of a password and placed it inside the .htpasswd file in the format username:hash. Now, when I try to access the web page: http://192.168.2.2/members/ It does not prompt me to enter the username and password with a popup box. Instead it just displays the index.html that is placed inside the members directory. I would like to get this configuration working :)
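
    A couple of common causes are worth ruling out here (these are assumptions on my part, not stated in the post): Apache only reads .htaccess files where AllowOverride permits it, and Ubuntu's default site ships with AllowOverride None for /var/www; also, the AuthUserFile directive points at /www/Neon/auth/.htpasswd while the file was created under /var/www/Neon/auth/. A quick sketch of the checks:

        # does the path named in AuthUserFile actually exist?
        ls -l /www/Neon/auth/.htpasswd /var/www/Neon/auth/.htpasswd

        # is .htaccess honoured at all for /var/www?
        grep -n "AllowOverride" /etc/apache2/sites-enabled/* /etc/apache2/apache2.conf

        # let htpasswd generate the hash instead of hand-rolling the DES hash
        sudo htpasswd -c /var/www/Neon/auth/.htpasswd neon

        # after setting AllowOverride AuthConfig (or All) for /var/www, reload
        sudo service apache2 reload

    If AllowOverride is None, Apache never reads the .htaccess at all, which would match the symptom of index.html being served without any prompt.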

    Read the article

  • Issue with InnoDB engine while enabling and [ skip-innodb ]

    - by Ahn
    How do I enable InnoDB, which was previously disabled with the skip-innodb option? Case 1: Disabled InnoDB with the skip-innodb option, and show engines gives the output below. Engine | Support ... | InnoDB | NO ...... Case 2: As I want to enable InnoDB, I commented out the skip-innodb option and restarted. But now show engines does not even show the InnoDB engine in the list. MySQL Version : 5.1.57-community-log OS : CentOS release 5.7 (Final) Log: 120622 13:06:36 InnoDB: Initializing buffer pool, size = 8.0M 120622 13:06:36 InnoDB: Completed initialization of buffer pool InnoDB: No valid checkpoint found. InnoDB: If this error appears when you are creating an InnoDB database, InnoDB: the problem may be that during an earlier attempt you managed InnoDB: to create the InnoDB data files, but log file creation failed. InnoDB: If that is the case, please refer to InnoDB: http://dev.mysql.com/doc/refman/5.1/en/error-creating-innodb.html 120622 13:06:36 [ERROR] Plugin 'InnoDB' init function returned error. 120622 13:06:36 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed. 120622 13:06:36 [Note] Event Scheduler: Loaded 0 events 120622 13:06:36 [Note] /usr/sbin/mysqld: ready for connections. Version: '5.1.57-community-log' socket: '/data/mysqlsnd/mysql.sock1' port: 3307 MySQL Community Server (GPL)
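
    The "No valid checkpoint found" line usually means InnoDB is finding log files (ib_logfile0/1) that don't match its data files, typically left over from an earlier half-initialised start. On the assumption that there is no InnoDB data worth keeping yet, a common recovery is to move the old log files aside and let InnoDB recreate them on the next start (the datadir below is the default; the socket path in the log suggests yours may live elsewhere, so adjust):

        service mysqld stop

        # move the stale InnoDB log files out of the way (adjust the datadir to yours)
        cd /var/lib/mysql
        mv ib_logfile0 ib_logfile0.bak
        mv ib_logfile1 ib_logfile1.bak

        # make sure skip-innodb really is commented out in my.cnf, then restart
        service mysqld start
        mysql -e "SHOW ENGINES\G" | grep -A1 -i innodb

    If there is existing InnoDB data, back up ibdata1 and both log files before moving anything.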

    Read the article

  • Trouble with Russian PCs on my WiFi

    - by hogni89
    I have created a WiFi hotspot for the local community. The problem is, some Russian PCs (Windows XP, Windows Vista and Windows 7) can't get an internet connection (we have a lot of Russian fishing vessels / cargo ships passing by). The PCs obtain a valid IP address, and some of them even manage to send a few packets - but none of them are usable on the network. They all say "Limited internet access" or "???????????? ?????? ? ????????". The thing these PCs have in common is that they all run a Russian installation of Windows. No one else has problems with the WiFi hotspot - Danish and English Windows, Linux and OS X all work like a charm. Could it be that there is a difference between the Danish / English Windows installations and the Russian installation? EDIT START They can't ping the router (one PC got one response - ONCE), they can't access any sites, and Windows never asks "Is this network public, home or work?". EDIT END PS: The hotspot is an airMAX Rocket M from Ubiquiti Networks, Inc (www.ubnt.com)

    Read the article

  • Server 2003 and XP client: why are HTTP connections being silently dropped?

    - by Asa Yeamans
    On my network, my edge router, a Windows 2003 R2 server router with all the latest updates, will drop packets, but only under specific circumstances. I have troubleshot and isolated it down to the simplest configuration I can. There is NO NAT involved. Only fully-public IP addresses. No firewalls are running either; all have been disabled. No packet filters on any interfaces anywhere either. I have a single Windows XP virtual machine and my edge router (the Windows 2003 R2 server, also a virtual machine) running on a Windows 2008 x64 R2 system (running Virtual Server 2005 as I don't have an Intel VT-compatible chip yet). The edge router can access any external http site just fine, no issues. However the Windows XP machine is only able to access certain sites. These work: www.google.com www.txstate.edu www.workintexas.com www.thedailywtf.com . These don't: www.yahoo.com www.utexas.edu en.wikipedia.org slashdot.org www.bing.com. I have removed all possibility of DNS issues by connecting with netcat from the XP box and sending GET /\r\nHost: \r\n\r\n and that connection replicates the issue as well. The network setup: My statically assigned IP block: x.x.x.168/29 DSL Modem -----PPPoE Connection---- x.x.x.169[EdgeRouter] [EdgeRouter]x.x.x.170 -----Virtual Ethernet----- x.x.x.174 [Test2] Test2's default gateway is x.x.x.170 and Test2 can ping any and every valid, accessible, public IP address with no packet loss whatsoever. If I connect directly over PPPoE from Test2 (the XP box) everything works just fine... I'm at my wits' end; I have no idea what's causing this.
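
    The pattern of "some HTTP sites load, others silently hang" behind a PPPoE link is the classic signature of an MTU / path-MTU-discovery black hole; that diagnosis is not in the original post, so treat the following as a hedged way to test it from the XP box rather than a confirmed fix:

        rem 1472 bytes of payload fits a clean 1500-byte MTU path;
        rem PPPoE usually tops out at 1452 or less
        ping -f -l 1472 www.yahoo.com
        ping -f -l 1452 www.yahoo.com
        ping -f -l 1400 www.yahoo.com

    If the larger sizes come back with "Packet needs to be fragmented but DF set" while the smaller one succeeds, the fix is to lower the MTU on the client (or clamp the MSS on the router's PPPoE interface) rather than anything in the HTTP layer.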

    Read the article

  • How to make an x.509 certificate from a PEM one?

    - by Ken
    I'm trying to test a script, locally, which involves uploading a file using a Java-based program to a FileZilla FTPES server. For the real thing, there is a real certificate on the FZ server, and the upload step (tested alone) seems to work fine. I've installed FileZilla Server on my dev box (so it'll test uploading from localhost to localhost). I don't have a real certificate for it, of course, so I used the "Generate new certificate..." button in FZ. It works fine from an interactive FTPES program (as long as I OK the unknown cert), but from my Java program it throws a javax.net.ssl.SSLHandshakeException ("unable to find valid certification path to requested target"). So how do I tell Java that this certificate is OK with me? (I know there's a way to change the Java program to accept any certificate, but I don't want to go down that route. I want to test it just as it will happen in production, and I don't want to ignore unknown certificates in production.) I found that Java has a program called "keytool" that seems to be for managing this sort of thing, but it complains that the certificate file that FZ generated is not an "x.509" file. A posting from the FZ side said it was "PEM encoded". I have "openssl" here, which looks like it's perfect for converting between certificate formats, but I think my understanding of certificate formats is wrong because I'm not seeing anything obvious. My knowledge of security certificates is a bit shaky, so if my title is stupidly wrong, please help by fixing that. :-)
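
    In case it helps the next reader: keytool will import the FileZilla-generated certificate once it is handed a bare certificate file, and openssl can convert PEM to DER if keytool refuses the PEM as-is. A sketch, assuming the generated certificate is in certificate.crt and the Java client can be pointed at a custom truststore (file names here are placeholders):

        # optional: convert PEM to DER if keytool rejects the PEM file
        openssl x509 -in certificate.crt -inform PEM -out certificate.der -outform DER

        # import it into a dedicated truststore for the test
        keytool -importcert -alias filezilla-test -file certificate.der \
                -keystore fz-truststore.jks -storepass changeit

        # point the Java program at that truststore (MyUploader is a stand-in name)
        java -Djavax.net.ssl.trustStore=fz-truststore.jks \
             -Djavax.net.ssl.trustStorePassword=changeit MyUploader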

    Read the article

  • How do I log back into a Windows Server 2003 guest OS after installing Hyper-V integration services breaks my domain logins?

    - by Warren P
    After installing Hyper-V integration services, I appear to have a problem with logging in to my Windows Server 2003 virtual machine. Incorrect passwords and logins give the usual error message, but a correct login/password gives me this message: Windows cannot connect to the domain, either because the domain controller is down or otherwise unavailable, or because your computer account was not found. Please try again later. If this message continues to appear, contact your system administrator for assistance. Nothing pleases me more than Microsoft telling me (the ersatz system administrator) to contact my system administrator for help, when I suspect that I'm hooped. The virtual machine has a valid network connection, and has decided to invalidate all my previous logins on this account, so I can't log in and remotely fix anything, and I can't remotely connect to it from outside either. This appears to be a catch 22. Unfortunately I don't know any non-domain local logins for this virtual machine, so I suspect I am basically hooped, or that I need ophcrack. is there any alternative to ophcrack? Second and related question; I used Disk2VHD to do the conversion, and I could log in fine several times, until after the Hyper-V integration services were installed, then suddenly this happens and I can't log in now - was there something I did wrong? I can't get networking working inside the VM BEFORE I install integration services, and at the very moment that integration services is being installed, I'm getting locked out like this. I probably should always know the local login of any machine I'm upgrading so I don't get stuck like this in the future.... great. Now I am reminded again of this.

    Read the article

  • How can I measure actual memory usage from my running processes?

    - by NullUser
    I have two servers, server1 and server2. Both of them are identical HP blades, running the exact same OS (RHEL 5.5). Here's the output of free for both of them: ### server1: total used free shared buffers cached Mem: 8017848 2746596 5271252 0 212772 1768800 -/+ buffers/cache: 765024 7252824 Swap: 14188536 0 14188536 ### server2: total used free shared buffers cached Mem: 8017848 4494836 3523012 0 212724 3136568 -/+ buffers/cache: 1145544 6872304 Swap: 14188536 0 14188536 If I understand correctly, server2 is using significantly more memory for disk I/O caching, which still counts as memory used. But both are running the same OS and if I remember correctly, I configured both with the same parameters when they were installed. I did a diff on /etc/sysctl.conf and they are identical. The problem is, I am collecting memory usage and other metrics over a period of time, (eg: vmstat, iostat, etc.) while a load is generated on the system. The memory used for caching is throwing off my calculations on the results. How can I measure actual memory usage from my running processes, rather than system usage? Is used - (buffers + cached) a valid way to measure this?
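
    Yes - used minus (buffers + cached) is the usual way to estimate the memory actually held by processes, and it is exactly the figure free already prints on its "-/+ buffers/cache" line. A small sketch for pulling that number out during a test run:

        # application memory in kB (same value as the used column of -/+ buffers/cache)
        free | awk '/^Mem:/ {print $3 - $6 - $7}'

        # or sample it every 5 seconds alongside the other metrics
        while true; do free | awk -v d="$(date +%T)" '/^Mem:/ {print d, $3 - $6 - $7}'; sleep 5; done

    For server2 that works out to 4494836 - (212724 + 3136568) = 1145544 kB, which matches its -/+ buffers/cache line exactly.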

    Read the article

  • DRBD stacked resources: recovering from failure

    - by Marcus Downing
    We're running a stacked four-node DRBD setup like this: A --> B | | v v C D This means three DRBD resources running across these four servers. Servers A and B are Xen hosts running VMs, while servers C and D are for backups. A is in the same datacentre as C. From server A to server C, in the first datacentre, using protocol B From server B to server D, in the second datacentre, using protocol B From server A to server B, different datacentres, stacked resource using protocol A First question: booting a stacked resource We haven't got any vital data running on this setup yet - we're still making sure it works first. This means simulating power cuts, network outages etc and seeing what steps we need to recover. When we pull the power out of server A, both resources go down; it attempts to bring them back up at next boot. However, it only succeeds at bringing up the lower-level resource, A-C. The stacked resource A-B doesn't even try to connect, presumably because it can't find the device until it's a connected primary on the lower level. So if anything goes wrong we need to manually log in and bring that resource up, then start the virtual machine on top of it. Second question: setting the primary of a stacked resource Our lower-level resources are configured so that the right one is considered primary: resource test-AC { on A { ... } on C { ... } startup { become-primary-on A; } } But I don't see any way to do the same with a stacked resource, as the following isn't a valid config: resource test-AB { stacked-on-top-of test-AC { ... } stacked-on-top-of test-BD { ... } startup { become-primary-on test-AC; } } This too means that recovering from a failure requires manual intervention. Is there no way to set the automatic primary for a stacked resource?
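
    On the first question: the init script only handles the lower-level resources, so after a cold boot the stacked resource generally has to be brought up once the lower device is primary, either by hand or by a cluster manager. A sketch of the manual sequence on server A, using the resource names from the post (test-AC lower, test-AB stacked):

        # lower-level resource first, and make this node primary
        drbdadm up test-AC
        drbdadm primary test-AC

        # then the stacked resource that lives on top of it
        drbdadm --stacked up test-AB
        drbdadm --stacked primary test-AB

        cat /proc/drbd   # confirm both resources are Connected/UpToDate

    On the second question: as far as I can tell, become-primary-on is only accepted in ordinary resource sections; promotion of a stacked resource is normally left to a cluster manager such as Pacemaker/Heartbeat rather than the startup{} block, which is why a manual (or scripted) step remains.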

    Read the article

  • Hadoop streaming job on EC2 stays in "pending" state

    - by liamf
    Trying to experiment with Hadoop and Streaming using the Cloudera distribution CDH3 on Ubuntu. Have valid data in hdfs:// ready for processing. Wrote a little streaming mapper in Python. When I launch a mapper-only job using: hadoop jar /usr/lib/hadoop/contrib/streaming/hadoop-streaming*.jar -file /usr/src/mystuff/mapper.py -mapper /usr/src/mystuff/mapper.py -input /incoming/STBFlow/* -output testOP Hadoop duly decides it will use 66 mappers on the cluster to process the data. The testOP directory is created on HDFS. A job_conf.xml file is created. But the job tracker UI at port 50030 never shows the job moving out of the "pending" state and nothing else happens. CPU usage stays at zero. (the job is created though) If I give it a single file (instead of the entire directory) as input, same result (except Hadoop decides it needs 2 mappers instead of 66). I also tried using the "dumbo" Python utility and launching jobs using that: same result: permanently pending. So I am missing something basic: could someone help me out with what I should look for? The cluster is on Amazon EC2. Firewall issues maybe: ports are enabled explicitly, case by case, in the cluster security group.
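
    A job that sits in "pending" with zero CPU usually means the JobTracker has no live TaskTrackers to schedule the maps onto, which on EC2 very often traces back to security-group rules between the nodes. A few hedged checks (service names and log paths are the usual CDH3 ones and may differ on your AMI):

        # does the JobTracker see any TaskTrackers at all?
        hadoop job -list-active-trackers

        # is the TaskTracker daemon running on the worker nodes?
        sudo /etc/init.d/hadoop-0.20-tasktracker status

        # look for scheduling or connection errors on the master
        tail -n 100 /var/log/hadoop-0.20/*jobtracker*.log

    If -list-active-trackers comes back empty, the job will stay pending forever; in that case check that the workers can reach the JobTracker RPC port (8021 by default on CDH3) inside the security group, not just from your own IP.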

    Read the article

  • Subversion error: Repository moved permanently to please relocate

    - by Bart S.
    I've set up Subversion and Apache on my server. If I browse to it through my web browser it works fine (http://svn.host.com/reposname). However, if I do a checkout on my machine I get the following error: Command: Checkout from http://svn.host.com/reposname, revision HEAD, Fully recursive, Externals included Error: Repository moved permanently to 'http://svn.host.com/reposname/'; please relocate I checked Apache's error log, but it doesn't say anything. (it does now - see edit) My repositories are stored under: /var/www/svn/repos/ My website is stored under: /var/www/vhosts/x/... Here's the conf file for the subdomain: <Location /> DAV svn SVNParentPath /var/www/svn/repos/ AuthType Basic AuthName "Authorization Realm" AuthUserFile /var/www/svn/auth/svn.htpasswd Require valid-user </Location> Authentication works fine. Does anyone know what might be causing this? -- Edit So I restarted Apache (again) and tried it again and now it gives me an error message, but it doesn't really help. Anyone have an idea what it means? [Wed Mar 31 23:41:55 2010] [error] [client my.ip.he.re] Could not fetch resource information. [403, #0] [Wed Mar 31 23:41:55 2010] [error] [client my.ip.he.re] (2)No such file or directory: The URI does not contain the name of a repository. [403, #190001] -- Edit 2 If I do svn info it doesn't give anything useful: [root@eduro eduro.nl]# svn info http://svn.domain.com/repos/ Username: username Password for 'username': svn: Repository moved permanently to 'http://svn.domain.com/repos/'; please relocate I also tried doing a local checkout (svn checkout file:///var/www/svn/repos/reposname) and that works fine (also adding / committing works fine). So it seems it has something to do with Apache. Some other information: I'm running CentOS 5.3 Plesk 9.3 Subversion, version 1.6.9 (r901367) -- Edit 3 I tried moving the repositories, but it didn't make any difference. selinux is disabled so that isn't it either. -- Edit 4 Really? Nobody :(?
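
    "Repository moved permanently" is usually Apache answering the request as a plain directory (with a trailing-slash redirect) instead of handing it to mod_dav_svn, and the "The URI does not contain the name of a repository" log line points the same way: with SVNParentPath and <Location />, the URL has to name a repository directly under /var/www/svn/repos/, and the request has to land on this vhost rather than a Plesk catch-all. A hedged sanity check:

        # which repositories exist under the parent path?
        ls /var/www/svn/repos/

        # the parent itself is expected to fail with "moved permanently"...
        svn info http://svn.domain.com/

        # ...but a repository directly below it should answer, if this vhost serves the request
        svn info http://svn.domain.com/reposname
        svn checkout http://svn.domain.com/reposname

        # confirm which vhost Apache actually matches for svn.domain.com
        httpd -S 2>&1 | grep -i svn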

    Read the article

  • MySQL based authentication with crypt()ed password fails in Apache 2.2

    - by Fester Bestertester
    I'm trying to set up a simple CalDAV/CardDAV server with a Radicale backend and an Apache 2.2 frontend. So far, it's all nice and simple, but I can't get the MySQL based authentication to work. I'd like to authenticate users against an existing MySQL database, and I need the REMOTE_USER variable to be set (pretty much like in the configuration examples for Radicale). I've tried mod_auth_mysql, which authenticated the users nicely, but failed to set the REMOTE_USER variable. The newer alternative seems to be mod_authn_dbd, which doesn't seem to like the crypted passwords in the MySQL database. According to the documentation, crypted passwords should work, so maybe I'm just missing a simple parameter. The configuration looks like this: DBDriver mysql DBDParams "sock=/var/run/mysqld/mysqld.sock dbname=myAuthDB user=myAuthUser pass=myAuthPW" <Directory /> AllowOverride None Order allow,deny allow from all AuthName 'CalDav' AuthType Basic AuthBasicProvider dbd require valid-user AuthDBDUserPWQuery "SELECT crypt FROM myAuthTable WHERE id=%s" </Directory> I've tested the query, it works fine. And as mentioned before, mod_auth_mysql worked nicely against the same database, but didn't set the required variables. Am I just missing some configuration parameter? Or is mod_authn_dbd just not the right tool to achieve what I want?
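
    mod_authn_dbd simply hands whatever the query returns to APR's password check, and that check does understand traditional crypt() hashes as well as the $apr1$ MD5 format htpasswd produces. One thing worth ruling out (an assumption, not something visible in the post) is that the stored values are MySQL's own PASSWORD() output or some other format APR doesn't recognise, or that the query returns the wrong column. A quick test:

        # produce a crypt()-style hash the way htpasswd would (-d forces CRYPT format)
        htpasswd -nbd testuser testpass
        # paste the hash part into the crypt column of a test row, then check the query:

        mysql -u myAuthUser -p myAuthDB \
          -e "SELECT crypt FROM myAuthTable WHERE id='testuser'"

    If logging in as that test user works, the module is fine and the existing hashes are the problem; if it still fails, raising LogLevel to debug should show what authn_dbd got back from the query.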

    Read the article

  • Samba share will not connect (was working yesterday)

    - by David Gard
    I have a CentOS webserver with a Samba share set up (\\webserver\websites). I was connected to this share just yesterday without issue, but today my Windows 8 PC will not connect to it. I've also tried making a connection from Windows 7 and Windows XP, all without success. I initially tried restarting my computer, but that did not work. I then tried restarting the Samba service on the webserver (service smb restart), and when that failed I restarted the webserver. All of that was to no avail, and I still cannot connect to the share. The webserver is contactable from my PC (and the others I tried), as the websites it hosts work fine and I'm able to Putty to the server. When connected to the webserver, I can see that Samba is running by using service smb status - service smb status smbd (pid 4685) is running... nmbd (pid 4688) is running... Can anyone please help me to get this share working? Here is my full Samba config (/etc/samba/smb.conf) - [global] workgroup = MYGROUP server string = Samba Server %v log file = /var/log/samba/log.%m max log size = 50 security = user encrypt passwords = yes socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192 local master = no [websites] comment = Websites browseable = yes writable = yes path=/var/www/html/ valid users = dgard
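
    A few standard checks, sketched here because nothing in the post pins down the cause - the idea is to confirm the config still parses, the Samba account still exists, and the share answers locally before blaming the Windows clients:

        # validate smb.conf and dump the effective share definitions
        testparm -s

        # is the Samba account for dgard still present?
        pdbedit -L | grep dgard

        # try the share from the server itself, taking Windows name resolution out of the picture
        smbclient -L localhost -U dgard
        smbclient //localhost/websites -U dgard -c 'ls'

        # then look at the per-client log for the failing machine
        tail -n 50 /var/log/samba/log.*

    If the local smbclient test works, the next suspects are iptables on the server (ports 139/445) and, for the Windows 8 client, SMB protocol negotiation rather than the share definition itself.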

    Read the article

  • Replacing the HD in a Mac OS X 10.6.8 server caused all shares to fail

    - by Cheesus
    I'm hoping someone might have a helpful suggestion about this problem. We have 2 Mac OS X servers available for file sharing (quad Xeons - 2GB RAM, both 10.6.8). No.1 is an Open Directory Master with 50+ user accounts, No.2 has only 2 local accounts (/local/Default) and looks at the OD Master for all user accounts (/LDAPv3/10.x.x.20/) Both servers have 3 internal HD's: the boot volume with only the Server OS and minimal apps, a 'DataShare' HD (500GB) and a backup drive (500GB). After upgrading the DataShare HD in Server No.2 from a small internal HD (500GB) to a larger-capacity (2TB) drive, users are unable to connect to shares on Server No.2. Users get an error "There are no shares available or you are not allowed to access them on the server" The process I followed was to use Carbon Copy Cloner to create an exact copy of the original data drive (keeps all ownership data, UID, permissions, last edit date and time). Everything booted up OK, no indication there were any issues. (Paths to the sharepoint look good) Notes during troubleshooting - Server1 is operating perfectly, all users can access shares and authenticate etc. - I've checked that the SACL (Server Access Control List) settings are OK. - On Server2 in the Server Admin app, I can see all the shares listed OK. The paths seem valid, I can disable / re-enable the shares, no errors. - On Server2 'Workgroup Manager' lists all the accounts from the OD Master in the LDAP dir view. All seems fine from here. Basically everything looks normal but no file shares on Server2 can be accessed by regular users.
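
    One thing worth checking from the command line (an assumption, since Server Admin says everything looks fine): after a clone, the share point records can end up referring to a volume or path that no longer matches the new disk. The sharing tool shows what the server really thinks it is exporting, and a stale share point can be removed and re-added against the new volume (the share name and mount point below are placeholders):

        # list the share points with their paths and protocol status
        sudo sharing -l

        # if a share point is stale, drop it and add it back against the new volume
        sudo sharing -r DataShare
        sudo sharing -a /Volumes/DataShare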

    Read the article

  • How to connect to DB2 when the password ends with '!' in Windows

    - by AngocA
    I am facing a problem using the DB2 tools with a generic account whose generated password ends with a bang sign '!'. I am not allowed to change the password because it is already used by other processes. I know the user is valid and I can connect to the database with its credentials, but not from all DB2 tools. When using the Control Center it is okay. When using the Command Editor (GUI) or the Command Window, I get this error message: connect to WAREHOUS user administrator using ! SQL0104N An unexpected token "!" was found following "<identifier>". Expected tokens may include: "NEW". SQLSTATE=42601 Let's say that my password is: pass@! I am trying to use c:\>db2 connect to sample user administrator using "pass@!" or c:\>db2 connect to sample user administrator using pass@! And in both cases I get the same error message. I could change the way I connect, but it is not useful for me, for example: c:\>db2 connect to sample user administrator Enter current password for administrator: But I cannot use it from a batch file easily. I would like to know how I can connect from the Command Editor, in order to use this user from the graphical tools. BTW, I know that the Control Center is deprecated.
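
    A workaround worth trying (hedged - not verified against this exact password): quote the entire CLP statement rather than just the password, so the command window hands the whole string to DB2 untouched; and in batch files keep delayed expansion off, since an enabled '!' expansion will eat the character before DB2 ever sees it.

        db2 "connect to sample user administrator using pass@!"

        rem in a .bat file, keep delayed expansion disabled so the ! survives
        setlocal disabledelayedexpansion
        db2 "connect to sample user administrator using pass@!"
        endlocal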

    Read the article

  • Exchange 2010 certificate errors

    - by Frederik Nielsen
    I have a problem with my newly set up Exchange environment for our hosted customers. First off, when configuring the Outlook client, it gives a certificate warning although the certificate has been bought and set up. I am using a setup like this: autodiscover.CUSTOMERDOMAIN.TLD CNAME autodiscover.exchange.COMPANYDOMAIN.TLD (COMPANYDOMAIN is our company that hosts the Exchange servers, CUSTOMERDOMAIN being the customer's domain) Shouldn't that work? I know that Microsoft does something like that for Office 365, but I really don't think they buy a certificate for every customer.. So I guess some redirection should be set up somehow - any guidance? Next thing: when we accept that error, and move on to actually starting Outlook, it states that the certificate is not valid for the RPC proxy server exchange.COMPANYDOMAIN.TLD - this domain is not right, as that domain is not included in the certificate. I would instead like this domain to be mail.exchange.COMPANYDOMAIN.TLD I tried to run this script setting both internal and external URLs to be the same, with no luck. Any guidance on this one? I am running Exchange 2010 SP2, with CAS, HT and MBX split up on 3 different servers.
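
    For the RPC proxy name specifically, the host name Outlook complains about comes from the Outlook Anywhere / Outlook provider settings rather than from the certificate itself, so it can be pointed at a name the certificate does cover. A hedged Exchange Management Shell sketch (the identity string is a placeholder for your CAS server; the host name is the one from the post):

        # advertise the externally valid name for Outlook Anywhere
        Set-OutlookAnywhere -Identity "CASSERVER\Rpc (Default Web Site)" `
            -ExternalHostname mail.exchange.COMPANYDOMAIN.TLD

        # make Autodiscover hand out a matching certificate principal name
        Set-OutlookProvider EXPR -CertPrincipalName msstd:mail.exchange.COMPANYDOMAIN.TLD

        Get-OutlookProvider | fl Name,CertPrincipalName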

    Read the article

  • Virtual machine lost after power cut

    - by dannymcc
    We have just had a power issue and our ESX (ESXi 4.1.0) host lost power and then rebooted. All but one of the virtual servers have rebooted with no problem; however, one of them refused to power up. I try to power it on and I get the following error: File <unspecified filename> was not found Reason: The system cannot find the file specified. Cannot open the disk '/vmfs/volumes/4e03076e-90834647-b846-001185c38f42/LAMP-Stack/turnkey-lamp-11.3-lucid-x86.vmdk' or one of the snapshot disks it depends on. VMware ESX cannot find the virtual disk "/vmfs/volumes/4e03076e-90834647-b846-001185c38f42/LAMP-Stack/turnkey-lamp-11.3-lucid-x86.vmdk". Verify the path is valid and try again. I have logged into the ESX host to see if the file is there and have found only the following file that matches the filename: /vmfs/volumes/4e03076e-90834647-b846-001185c38f42/LAMP-Stack/turnkey-lamp-11.3-lucid-x86-s001.vmdk I notice that the above file has '-s001' after the filename. Is this recoverable? Any help or advice is greatly appreciated! EDIT: Running ls -l on the directory that contains the file shows this: drwxr-xr-t 1 root root 1680 Feb 9 09:49 4e03076e-90834647-b846-001185c38f42 The databrowser file system looks like this: and in a different directory there is the file that matches the missing one:
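
    This is usually recoverable: the -s001.vmdk file is a data extent, and what has gone missing is the small plain-text descriptor file (turnkey-lamp-11.3-lucid-x86.vmdk) that points at it. VMware documents a procedure for recreating a missing descriptor for split/sparse disks; before going down that road, it's worth confirming exactly what survived (a hedged, non-destructive first step):

        # list everything left for this VM - are there more extents (-s002, -s003, ...)?
        ls -lh /vmfs/volumes/4e03076e-90834647-b846-001185c38f42/LAMP-Stack/

        # check whether the descriptor simply ended up somewhere else on the datastore
        find /vmfs/volumes/4e03076e-90834647-b846-001185c38f42/ -name 'turnkey-lamp*.vmdk'

    If only the extents remain, the descriptor can be recreated (for example by generating a template descriptor with vmkfstools and editing its extent lines to reference the surviving -s00x files); don't register or delete anything until a copy of the extents exists elsewhere.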

    Read the article

  • Use autocomplete in dropdown cells with Excel 2007?

    - by Martin
    I want to make a survey with Excel and I therefore have defined the cells for the answers as a dropdown cell which only accepts answers from a certain list, e. g.: The two Lists List1 and List2 (yellow cells) are the possible answers for the questions in Block 1.x resp. 2.x (blue) . There might be a block 4 with more questions, which again use List1 for their possible answers. My problem is: I'd like to be able to use the autocompleate feature to fill in the blue cells with the dropdown menu, so that the user only types 5 and it automatically expands to "5: extremely important" or "5: extremely difficult". According to my research on the www, this should be possible if I add the list with possible answers directly above the cells where autocomplete should work (I did this with the green helper cells which could be hidden) . But I have to enter at least 4 characters 5: e to get the autocompleted suggestion. Is there a way to make autocomplete already replace a "5" by the corresponding valid term? As the survey file shall be distributed to a lot of people "outside", I can not use VBA magic because it may be blocked on their computer and might not work. EDIT: it seems to have to do with the numbers I use: If I'd start my List items with A, B, C instead of 1, 2, 3, it would work perfectly. Excel seems to ignore the pure numbers when they are entered and does not try to autocomplete them.. is there a workaround? (I hope it is clear what I want, it seems a little difficult to explain.)

    Read the article

  • Handling emails on a web server - Making sure the FQDN is set correctly based on the website sending the email

    - by webnoob
    I have a Windows 2008 Web Edition server hosting multiple websites using IIS 7.5. At the moment, all the emails are sent via the IIS6 SMTP service. The FQDN of the SMTP service is set to the computer name at the moment, which isn't correct as it doesn't resolve to a valid DNS entry and is not RFC compliant. Some questions: Is there any way I can change the FQDN of the SMTP service based on the site sending the email? Would it be OK to just set up mailserver.mydomain.com and use that as the FQDN for all the sites on multiple domains? Should I be using some other mail server software to handle this better? The reason I am asking is that lots of emails are hitting spam folders because the settings are incorrect. I have access to the code that is running the websites, so if something needs to be done there then that shouldn't be a problem. The sites are written using ASP.NET 2.0. EDIT: I have just found an option to create an SMTP virtual server. Would this be the way forward? Create a virtual server for each site? Thanks.
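
    The IIS 6 SMTP service only carries one FQDN per SMTP virtual server, so per-site FQDNs would mean one virtual server per sending identity (each on its own IP or port); a single, correctly named server shared by all sites is usually enough, since spam filters mostly care that the HELO name resolves and matches the reverse DNS of the sending IP. The FQDN itself can be set through the metabase; a hedged sketch using the stock adsutil.vbs script, assuming instance 1 is the default virtual server and the property name is as in the SMTP metabase:

        cscript %systemdrive%\inetpub\AdminScripts\adsutil.vbs ^
            SET smtpsvc/1/FullyQualifiedDomainName "mailserver.mydomain.com"

        rem restart the SMTP service so the banner picks up the new name
        net stop smtpsvc && net start smtpsvc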

    Read the article

  • Setting up a global MySQL Cluster in the cloud

    - by GregB
    I'm giving the question an overhaul to more specifically identify where I need help. I use two tools to manage a bunch of cloud servers: Puppet and Rundeck. Both of these can be configured to use a MySQL backend. I'd like to set up an instance of each application in both the U.S. and the U.K., treating the U.K. servers as hot standbys in case of failure in the U.S. I want to use a MySQL cluster so that the data is automatically replicated from the U.S. to the U.K. Because these are hot standbys, high performance is not a goal. Redundancy and data integrity are most important. My question revolves around the setup of the MySQL cluster. I want to run three servers, each one running a data node, a SQL node, and a management node. Is this a valid configuration for MySQL Cluster? If so, could someone point me in the right direction for creating such a setup? I've downloaded the official tarball, and the official Debian package, and the documentation for them contradicts many of the online tutorials. I'm installing on Ubuntu 10.04.
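
    One constraint to be aware of (from MySQL Cluster's own rules, not from the post): the number of data nodes must be a multiple of NoOfReplicas, so with two replicas a three-data-node layout isn't valid; the usual shape on three hosts is two data nodes plus a management node on the third, with SQL nodes wherever they are needed. A minimal config.ini sketch along those lines (host names are placeholders):

        [ndbd default]
        NoOfReplicas=2
        DataMemory=256M
        IndexMemory=32M

        [ndb_mgmd]
        hostname=host3.example.com

        [ndbd]
        hostname=host1.example.com
        [ndbd]
        hostname=host2.example.com

        [mysqld]
        hostname=host1.example.com
        [mysqld]
        hostname=host2.example.com

    Also worth noting: NDB replication is synchronous and heartbeat-sensitive, so stretching a single cluster across a transatlantic link is generally discouraged; the documented pattern for geographic redundancy is two sites tied together with ordinary MySQL replication.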

    Read the article

  • One Windows Domain workstation can ping gateway but gets no internet access

    - by dindeman
    One of the (Windows XP SP3) workstations of our Windows domain could not access the internet anymore; this problem suddenly happened overnight. The domain controllers (there are three of them) are all running Windows Server 2008. First I compared the output of ipconfig /all on the faulty workstation with the output of a working workstation and it was just fine, as it had always been. In particular the default gateway was correct and always remained pingable from the faulty workstation. I guessed that something was wrong with the DHCP service and I restarted the DHCP server service on all of our three DCs as well as the DHCP client service on the faulty workstation. This didn't solve the issue. I then thought of renewing the DHCP lease with ipconfig /release and ipconfig /renew, and here is my first question: why did this never work? The same IP address (192.168.0.45) kept being assigned despite all my attempts to renew it (note that all our workstations get their TCP/IP configuration automatically). Even after leaving the domain and changing the computer name, the same address was obtained yet again... Anyway I then proceeded to manually switch the TCP/IP configuration for that machine to another free valid IP address (192.168.0.41)... and then the internet access came back! I then cleared any traces of the previous IP from the DHCP leases list and the DNS tables of our DCs and, after setting the TCP/IP configuration back to 'automatic', finally, the new lease (192.168.0.41) would be granted along with the internet access. My second question: what went suddenly wrong with the original IP address?

    Read the article

  • Why isn't Apache Basic authentication working?

    - by Brad
    I just upgraded Apache from its 2003 build to a squeaky-clean, brand-new 2.4.1 build. All seems pretty good except for one glaring thing: In my httpd.conf file I have the following: <Directory /> AllowOverride none Options FollowSymLinks AuthType Basic AuthName "Enter Password" AuthUserFile /var/www/.htpasswd Require valid-user </Directory> This should allow only users in the specified auth file to access the server - just as it had under the older version of Apache. (Right?) However, it's not working. Requests are granted with no authentication provided. When I switch logging to LogLevel Debug, for the accesses, it says: [Sat Mar 24 21:32:00.585139 2012] [authz_core:debug] [pid 10733:tid 32771] mod_authz_core.c(783): [client 192.168.1.181:57677] AH01626: authorization result of Require all granted: granted [Sat Mar 24 21:32:00.585446 2012] [authz_core:debug] [pid 10733:tid 32771] mod_authz_core.c(783): [client 192.168.1.181:57677] AH01626: authorization result of <RequireAny>: granted I really don't know what this means - and I (to the best of my knowledge) don't have any "Require all granted" or "<RequireAny>" statements in any of my files. Any ideas why this isn't working, or where to debug??
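
    The debug lines show the request being authorized by a "Require all granted" that merges in from somewhere else; in a stock 2.4 install that directive ships in the default conf files (the DocumentRoot <Directory> block in httpd.conf and several files under conf/extra/), and a more specific <Directory> section wins over <Directory />. The "<RequireAny>" in the log is just how 2.4 wraps a group of Require directives internally. A couple of hedged checks, assuming a default source-build layout under /usr/local/apache2:

        # where is "Require all granted" coming from?
        grep -Rn "Require all granted" /usr/local/apache2/conf/

        # are the Basic auth modules actually loaded? (they are split up in 2.4)
        /usr/local/apache2/bin/apachectl -M | egrep 'auth_basic|authn_file|authn_core|authz_user|authz_core'

        # can the server user read the password file?
        ls -l /var/www/.htpasswd

    Moving the AuthType/AuthName/AuthUserFile/Require valid-user block into the most specific <Directory> matching the DocumentRoot (or removing the Require all granted there) is the usual fix.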

    Read the article

  • Apache2 403 permission denied on Ubuntu 12.04

    - by skeniver
    I have a sub-directory in my /var/www folder called prod, which is password protected. It was all working fine until I asked my server admin to help me set up open access to one particular file. Now the entire folder is just giving me a 403 error. This is the sites-enabled file: <VirtualHost *:80> ServerAdmin [email protected] # Server name ServerName prod.xxx.co.uk DocumentRoot /var/www/prod <Directory /var/www/prod> Options Indexes FollowSymLinks MultiViews +ExecCGI Includes AllowOverride None Order allow,deny AuthType Basic AuthName "Please log in" AuthUserFile /home/ubuntu/.htpasswd Require valid-user </Directory> <Directory /var/www/prod/xxx/cgi-bin/api.pl> Allow from All Satisfy Any </Directory> ScriptAlias /xxx/cgi-bin/ /var/www/prod/xxx/cgi-bin/ ErrorLog ${APACHE_LOG_DIR}/prod.xxx.error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/prod.xxx.access.log combined </VirtualHost> Now he's unsure why this is blocking me out completely. No permissions have been changed, but this is the /var/www/ folder: 4 drwxr-xr-x 2 root root 4096 Jan 3 21:10 images 4 drwxr-sr-x 4 root www-data 4096 Mar 31 14:47 jslib 4 drwxr-xr-x 7 root root 4096 Jun 2 13:00 prod When I try to visit http://prod.xxx.co.uk, I don't get asked for the password; I just get 403'd. I hope I've given enough information... Anyone able to spot something he can't?
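
    One thing that stands out in the quoted vhost (an observation, not something the post confirms): the /var/www/prod block has "Order allow,deny" but no "Allow from all" line. In Apache 2.2 that combination denies every client outright, and a host-based deny returns 403 before Basic auth ever gets the chance to ask for a password - which matches the symptom exactly. A minimal hedged correction of that block:

        <Directory /var/www/prod>
            Options Indexes FollowSymLinks MultiViews +ExecCGI Includes
            AllowOverride None
            Order allow,deny
            Allow from all
            AuthType Basic
            AuthName "Please log in"
            AuthUserFile /home/ubuntu/.htpasswd
            Require valid-user
        </Directory>

    Then run apachectl configtest and reload Apache; the per-file api.pl block with Satisfy Any can stay as it is.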

    Read the article

  • Cannot access new folders created in my Apache2 document-root

    - by user235101
    I have tried to create a new folder 'test' in the document root of my Apache2 installation; however, whenever I try to access it from a web browser it gives me a 403 (Forbidden) error. My virtualhosts file - <VirtualHost *:80> ServerAdmin webmaster@localhost ServerName REMOVED DocumentRoot /var/www/ <Directory /> Options FollowSymLinks AllowOverride All AuthType Digest AuthName "documentroot" AuthDigestProvider file AuthUserFile /etc/apache2/htpasswd Require user REMOVED AllowOverride Indexes </Directory> <Directory /var/www/> Options FollowSymLinks Options -Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> <Directory /var/www/share/> Order Deny,Allow Allow from all Satisfy any </Directory> <Directory /var/www/REMOVED/> Order Deny,Allow Allow from all Satisfy any </Directory> <Directory /var/www/stream/> Order Deny,Allow Allow from all Satisfy any </Directory> <Directory /var/www/test> Order Deny,Allow Allow from all Satisfy any </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined <Directory /var/www/REMOVED> AuthType Digest AuthName "rutorrent" AuthDigestDomain /var/www/REMOVED/ http://46.105.127.19/REMOVED AuthDigestProvider file AuthUserFile /etc/apache2/htpasswd Require valid-user SetEnv R_ENV "/var/www/REMOVED" AllowOverride Indexes </Directory> </VirtualHost> Image of the permissions - Other information - If I create a new folder (and use chmod --reference to ensure it has the same permissions as an accessible folder), I get a 403 client-side. If I copy folder 'rapidleech' to the name 'rapidleech1', it will let me access 'rapidleech1', but no longer 'rapidleech', until I delete the copy. In my logs I found nothing logged in error.log, and only that it delivered a 403 in access.log. All the appropriate users are members of www-data.
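
    A few hedged things to check, since the quoted config alone doesn't explain it: the /var/www/ block carries "Options -Indexes", so a brand-new directory with no index document will return 403 for the directory URL even when access is otherwise allowed, and the real reason is always spelled out in error.log once LogLevel is raised.

        # raise LogLevel to debug temporarily, reload, then watch while requesting /test/
        tail -f /var/log/apache2/error.log

        # does the new folder actually contain an index file?
        ls -la /var/www/test/

        # compare a working folder and the new one directly
        stat -c '%U:%G %a %n' /var/www/rapidleech /var/www/test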

    Read the article

  • Error authenticating git repository with Redmine

    - by woni
    I've set up Redmine 2.1 on my Debian Squeeze server following this tutorial: HowTo configure Redmine for advanced git integration (I tried to use the grack path). The Redmine server is running properly, but I have a problem granting users access to git repositories. When I try to clone a repository it says: error: The requested URL returned error: 500 while accessing The Apache error.log shows this entry: [Fri Sep 28 15:50:56 2012] [crit] [client xx.xx.xx.xx] configuration error: couldn't check user. Check your authn provider!: /repo.git/info/refs It also asks me for a user and password when cloning, but it shouldn't, if I understand the tutorial right. I'm using the Redmine authentication module: <VirtualHost *:80> ServerName my.server.at DocumentRoot "/var/www/my.server.at/public" PerlLoadModule Apache::Redmine <Directory "/var/www/my.server.at/public"> Options None AllowOverride None Order allow,deny Allow from all </Directory> SetEnv REMOTE_USER=$REDIRECT_REMOTE_USER" SetEnv GIT_PROJECT_ROOT /var/git/my.server.at/ SetEnv GIT_HTTP_EXPORT_ALL ScriptAlias /git/ /usr/lib/git-core/git-http-backend <Location /> Order allow,deny Allow from all AuthType Basic AuthName Git Require valid-user AuthBasicAuthoritative Off AuthUserFile /dev/null AuthGroupFile /dev/null PerlAccessHandler Apache::Authn::Redmine::access_handler PerlAuthenHandler Apache::Authn::Redmine::authen_handler RedmineDSN "DBI:mysql:database=redmine;host=localhost" RedmineDbUser "user" RedmineDbPass "password" RedmineGitSmartHttp yes </Location> </VirtualHost> Can someone please help me and explain the error and what I can do to solve my problem?
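
    Apache 2.2 prints "couldn't check user. Check your authn provider!" when the Basic-auth providers return nothing usable for the user; with this setup that usually means the Redmine.pm Perl handler never got to answer (mod_perl not loaded, Redmine.pm not found, or the DSN/credentials failing). A few hedged checks:

        # is mod_perl actually loaded?
        apache2ctl -M 2>/dev/null | grep -i perl

        # can the DSN and credentials from RedmineDSN be used at all?
        perl -MDBI -e 'DBI->connect("DBI:mysql:database=redmine;host=localhost","user","password") or die $DBI::errstr; print "ok\n"'

        # watch the error log with more detail while attempting the clone
        tail -f /var/log/apache2/error.log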

    Read the article
