Search Results

Search found 18475 results on 739 pages for 'auth to local'.

Page 148/739 | < Previous Page | 144 145 146 147 148 149 150 151 152 153 154 155  | Next Page >

  • ASP.NET MVC app on IIS7 with WebForms content is throwing NTLM authenticate box

    - by Jon
    I have an ASP.NET MVC app that also contains some WebForms content (for SSRS ReportViewer). This is deployed to IIS7 and the MVC pages of the app work fine, but when I try to browse to the aspx page I am prompted with the NTLM auth box. I do not have NTLM enabled, I only have Anonymous auth enabled. I have this deployed and fully working on an IIS6 box, the only other difference is that the IIS6 box is in our company domain, but the IIS7 box is not (I fail to see how this could be the issue as the MVC stuff is working fine). Any thoughts? Thanks.
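
    One hedged thing to check (an assumption, not a confirmed diagnosis): IIS7 applies authentication settings per application from the system.webServer section, and a WebForms handler can end up with different effective settings than the MVC routes. Something like the following in web.config forces anonymous-only authentication for the whole app; note these sections are often locked in applicationHost.config and may need unlocking first (appcmd unlock config -section:system.webServer/security/authentication/windowsAuthentication).

        <!-- hedged sketch: force anonymous auth, disable Windows/NTLM auth -->
        <system.webServer>
          <security>
            <authentication>
              <anonymousAuthentication enabled="true" />
              <windowsAuthentication enabled="false" />
            </authentication>
          </security>
        </system.webServer>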

  • How to install port versions of perl modules for perl5.14 in freebsd 9.0

    - by jm666
    Trying to use perl5.14 on FreeBSD with port-based p5- modules.

        # uname -impr
        9.0-RELEASE amd64 amd64 ALTQ

    I deleted all installed ports and started with a clean system:

        # pkg_delete -a
        # rm -rf /var/db/pkg /var/db/ports /usr/local

    I installed portmaster and checked /etc/make.conf (it contains only WITHOUT_X11=YES). Then I installed Perl:

        # portmaster -g --force-config lang/perl5.14
        # perl -v
        This is perl 5, version 14, subversion 2 (v5.14.2) built for amd64-freebsd-multi

    Now the Perl modules from ports:

        # portmaster -g devel/p5-Moose   # install Moose and its deps

    Checking with pkg_info produces a zillion errors like:

        # pkg_info
        pkg_info: corrupted record (pkgdep line without argument), ignoring

    A dependency check with portmaster shows dependencies on perl5.12:

        # portmaster --check-depends
        Checking p5-Class-C3-0.24
        ===>>> lang/perl5.12 is listed as a dependency
        ===>>> but there is no installed version
        ===>>> Delete this dependency data? y/n [n]

    And when I tried

        # perl-after-upgrade -f

    I got: Fixed 0 packages (0 files moved, 0 files modified). In short: Moose was installed into /usr/local/lib/perl5/site_perl/5.14.2/, but all of its dependencies went into /usr/local/lib/perl5/site_perl/5.12.4/. Yes, it is possible to fix this with

        # portmaster p5-

    which reinstalls every installed p5- package once again, now correctly for 5.14, but installing them all twice is terrible. Questions: What is the correct way to install p5- modules from ports on a clean system with perl5.14 installed? How can the wrong dependency data on perl5.12 be fixed without reinstalling everything? What am I doing wrong? PS: I know about perlbrew and local::lib, but in this case I want the port versions.
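
    A hedged guess at the root cause, based on how the ports tree of that era picked its default Perl: unless /etc/make.conf names the Perl you built, dependency registration for p5- ports can keep pointing at the default lang/perl5.12. Something like this, set before building any p5- ports (the exact knob depends on the age of the ports tree):

        # /etc/make.conf -- hedged sketch
        PERL_VERSION=5.14.2            # older ports trees read this
        # DEFAULT_VERSIONS+=perl5=5.14 # newer ports trees use this instead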

  • BlueCoat reverse proxy NTLM authentication

    - by mathieu
    Currently, when we want to access an internal site (IIS with NTLM auth) from the Internet, two login screens appear:

    Step 1: LDAPAuth, from the BlueCoat, which checks login/password validity against Active Directory.
    Step 2: NTLM auth, from our application.

    Is it possible to configure the reverse proxy to take the LDAP credentials provided at step 1 and hand them to whatever application requests them? Of course, if those credentials aren't valid, nothing happens. We're using a BlueCoat SG400. Update: we're not looking for SSO where the user doesn't have to enter a password. We want the user to enter his domain credentials in the LDAPAuth dialog box, and the proxy to reuse them to authenticate against our application, or any application that uses NTLM. We've only got one AD domain behind the reverse proxy.

  • How to stop Cron from sending messages about errors

    - by Beck
    I have these strange mails coming from cron:

        Return-Path: <[email protected]>
        Delivered-To: [email protected]
        Received: by domain.com (Postfix, from userid 0) id 6F944264D0; Mon, 10 Jan 2011 10:35:01 +0000 (UTC)
        From: [email protected] (Cron Daemon)
        To: [email protected]
        Subject: Cron <root@domain> lynx -dump http://www.domain.com/cron/realqueue
        Content-Type: text/plain; charset=ANSI_X3.4-1968
        X-Cron-Env: <SHELL=/bin/sh>
        X-Cron-Env: <PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin>
        X-Cron-Env: <HOME=/root>
        X-Cron-Env: <LOGNAME=root>
        Message-Id: <[email protected]>
        Date: Mon, 10 Jan 2011 10:35:01 +0000 (UTC)

        /bin/sh: lynx: not found

    I have these settings in the crontab file:

        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
        */5 * * * * lynx -dump http://www.domain.com/cron/realqueue
        17 * * * * root cd / && run-parts --report /etc/cron.hourly
        25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
        47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
        52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )

    Lynx is installed on my Ubuntu box as well. Of course, domain.com stands in for my real domain. Thanks ;)
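
    Two notes, offered as a sketch rather than a definitive fix. The mail body is "/bin/sh: lynx: not found", so cron's shell isn't finding lynx on that PATH; calling it by absolute path (compare with the output of which lynx) should cure the underlying error. Separately, cron mails a job's output whenever there is any, so silencing a job means discarding its output or blanking MAILTO:

        # hedged sketch of a quieter crontab entry
        MAILTO=""
        */5 * * * * /usr/bin/lynx -dump http://www.domain.com/cron/realqueue >/dev/null 2>&1

    One more hedged observation: the file above mixes two formats. If it is /etc/crontab, the lynx line is missing the user field (so "lynx" would be read as the user); if it is a user crontab, the run-parts lines shouldn't carry the "root" column.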

  • apache with php fastcgi keeps going down

    - by Josh Nankin
    I have an apache2 server configured with MPM worker and PHP FastCGI. Lately the Apache logs have been telling me that MaxClients is being reached frequently, even though it's already pretty high. My server is now constantly going down, and I see a bunch of lines like this in the log:

        [Sun Mar 06 04:25:40 2011] [error] [client 50.16.83.115] FastCGI: comm with (dynamic) server "/var/local/fcgi/php-cgi-wrapper.fcgi" aborted: (first read) idle timeout (20 sec)
        [Sun Mar 06 04:25:40 2011] [error] [client 50.16.83.115] FastCGI: incomplete headers (0 bytes) received from server "/var/local/fcgi/php-cgi-wrapper.fcgi"

    I can see that my php-cgi processes are pretty large (about 70 MB on average). Here's my Apache configuration for MPM worker:

        KeepAlive ON
        KeepAliveTimeout 2
        <IfModule mpm_worker_module>
            StartServers          5
            MinSpareThreads      10
            MaxSpareThreads      10
            ThreadLimit          64
            ThreadsPerChild      10
            MaxClients           20
            MaxRequestsPerChild 2000
        </IfModule>

    Here's my FastCGI Apache configuration:

        <IfModule mod_fastcgi.c>
            # One shared PHP-managed fastcgi for all sites
            Alias /fcgi /var/local/fcgi
            # IMPORTANT: without this we get more than one instance
            # of our wrapper, which itself spawns 20 PHP processes, so
            # that would be Bad (tm)
            FastCgiConfig -idle-timeout 20 -maxClassProcesses 1
            <Directory /var/local/fcgi>
                # Use the + so we don't clobber other options that
                # may be needed. You might want FollowSymLinks here
                Options +ExecCGI
            </Directory>
            AddType application/x-httpd-php5 .php
            AddHandler fastcgi-script .fcgi
            Action application/x-httpd-php5 /fcgi/php-cgi-wrapper.fcgi
        </IfModule>

    Here's my FastCGI wrapper:

        #!/bin/sh
        PHPRC="/etc/php5/apache2"
        export PHPRC
        PHP_FCGI_CHILDREN=8
        export PHP_FCGI_CHILDREN
        exec /usr/bin/php-cgi

    Any help would be very, very much appreciated!
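
    One hedged suspicion to rule out (an assumption based on default php-cgi behaviour, not something visible in the config above): php-cgi exits on its own after PHP_FCGI_MAX_REQUESTS requests (default 500), and with the class manager capped at one process, requests arriving during the respawn window can surface exactly as "idle timeout" and "incomplete headers (0 bytes)" errors. A sketch of the wrapper with the limit raised explicitly:

        #!/bin/sh
        PHPRC="/etc/php5/apache2"
        export PHPRC
        PHP_FCGI_CHILDREN=8
        export PHP_FCGI_CHILDREN
        # hedged addition: keep each child alive for many more requests
        PHP_FCGI_MAX_REQUESTS=10000
        export PHP_FCGI_MAX_REQUESTS
        exec /usr/bin/php-cgi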

  • jboss 4: enable UsersRolesLoginModule, where must users.properties files be placed?

    - by golemwashere
    I have an application (CQ5) that requires enabling unauthenticatedIdentity in jbossdir/conf/login-config.xml. I used:

        <authentication>
            <login-module code="org.jboss.security.auth.spi.UsersRolesLoginModule" flag="required">
                <module-option name="unauthenticatedIdentity">nobody</module-option>
            </login-module>
        </authentication>

    Then I tried copying jbossdir/conf/props/jmx-console-users.properties and jmx-console-roles.properties to users.properties and roles.properties (same directory). I still get this error:

        ERROR [org.jboss.security.auth.spi.UsersRolesLoginModule] Failed to load users/passwords/role files
        java.io.IOException: No properties file: users.properties or defaults: defaultUsers.properties found

    Where should I put those files?
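
    A hedged sketch of the usual JBoss 4 arrangement: the conf directory of the active server configuration (e.g. jbossdir/server/default/conf) is on the classpath, and UsersRolesLoginModule resolves its properties files from the classpath. So either drop users.properties and roles.properties directly into that conf directory, or name the files explicitly the way the jmx-console policy does (the cq5-*.properties names below are hypothetical):

        <!-- hedged sketch; paths resolve relative to the classpath,
             e.g. server/default/conf -->
        <login-module code="org.jboss.security.auth.spi.UsersRolesLoginModule" flag="required">
            <module-option name="unauthenticatedIdentity">nobody</module-option>
            <module-option name="usersProperties">props/cq5-users.properties</module-option>
            <module-option name="rolesProperties">props/cq5-roles.properties</module-option>
        </login-module>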

  • SharePoint Business Connectivity Services (BCS) Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'

    - by g18c
    I am running SharePoint 2010 with SQL 2012, and I am trying to get Business Connectivity Services (BCS) running, but I am facing a double-hop authentication issue. Every time I try to connect to the external BCS list created in SharePoint Designer, I get the error "Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'". In the event viewer on the SQL server I see a login failure for an anonymous user from the SP server IP address. Background information below.

    I have enabled Kerberos under SharePoint Central Administration. I have the following AD domain accounts:

        SP_Farm     - main website pool
        SP_Services - for SharePoint services (including BCS)
        SQL_Engine  - SQL database engine

    I then created the following with SetSPN:

        SetSPN -S http/intranet mydomain\SP_Farm
        SetSPN -S http/intranet.mydomain.local mydomain\SP_Farm
        SetSPN -S SPSvc/SPS mydomain\SP_Farm
        SetSPN -S MSSQLSvc/SQL1 mydomain\SQL_DatabaseEngine
        SetSPN -S MSSQLSvc/SQL1.mydomain.local mydomain\SQL_DatabaseEngine
        SetSPN -S MSSQLSvc/SQL1:1433 mydomain\SQL_DatabaseEngine
        SetSPN -S MSSQLSvc/SQL1.mydomain.local:1433 mydomain\SQL_DatabaseEngine

    I then delegated the AD accounts for any authentication protocol to the following:

        SP_Farm    - SP_Farm (http service type, intranet)
        SP_Farm    - SQL_DatabaseEngine (MSSQLSvc, sql1)
        SP_Service - SP_Service (SPSvc)
        SP_Service - SQL_DatabaseEngine (MSSQLSvc, sql1)

    I have also checked that the WFE is being logged on to with Kerberos: the WFE server event log shows event ID 4624 with Kerberos authentication, which is OK. SQL also shows connections from the WFE authenticated as Kerberos, per the following query:

        SELECT s.session_id, s.login_name, s.host_name, c.auth_scheme
        FROM sys.dm_exec_connections c
        INNER JOIN sys.dm_exec_sessions s ON c.session_id = s.session_id

    Despite the above, credentials are not passed from the client through the SharePoint server to the SQL server; only the anonymous account is used. I get the following error on the WFE server for 'BusinessData', ID 8080:

        Could not open connection using 'data source=sql1.mydomain.local;initial catalog=MSCRM;integrated security=SSPI;pooling=true;persist security info=false' in App Domain '/LM/W3SVC/1848937658/ROOT-1-129922939694071446'. The full exception text is: Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.

    If I set a username and password with the Secure Store Service and set the external list to use the impersonated credentials, the list works. Any ideas what I have missed and what can be tried next?
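
    Two hedged sanity checks on the Kerberos chain (not a definitive answer): duplicate SPNs silently break Kerberos and drop connections back to NTLM or anonymous, and the delegation that matters for BCS is on the account the BCS service application actually runs as. Listing what is really registered costs nothing:

        setspn -L mydomain\SP_Farm
        setspn -L mydomain\SP_Services
        setspn -L mydomain\SQL_Engine
        setspn -X    :: search the domain for duplicate SPNs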

  • Dovecot Virtual Users and Users Domain Mapping

    - by Stojko
    I have successfully compiled, configured, and run Dovecot with the virtual users feature. Here's part of my /etc/dovecot.conf:

        mail_location = maildir:/home/%d/%n/Maildir
        auth default {
          mechanisms = plain login
          userdb passwd-file {
            args = /home/%d/etc/passwd
          }
          passdb passwd-file {
            args = /home/%d/etc/shadow
          }
          socket listen {
            master {
              path = /var/run/dovecot/auth-worker
              mode = 0600
            }
          }
        }

    I face one issue I can't resolve myself: is there any way to create a users-to-domains mapping and put the username first in mail_location? Examples: (1) currently I have /home/domain.com/user/Maildir; (2) I'd like to have /home/USER/domain.com/user/Maildir. Can I achieve this somehow? Greets, Stojko
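
    If USER in the second example is the same login user, this looks achievable with mail_location alone, since it accepts the same %-variables everywhere: %n is the user part of the login and %d the domain. A hedged sketch:

        # /home/USER/domain.com/user/Maildir, assuming USER == user:
        mail_location = maildir:/home/%n/%d/%n/Maildir

    If USER is something not derivable from the login, the location would instead have to come from the userdb (e.g. a per-user home field in the passwd-file) rather than from a static mail_location.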

  • SQL Server 2008 R2 - Cannot create database snapshot

    - by Chris Diver
    Server: Windows Server 2008 R2 x64 Enterprise. SQL: SQL Server 2008 R2 Enterprise x64. I have a default SQL Server instance, and the SQL Server service account runs as a domain user. I am trying to create a database snapshot in the directory where the mdf files are stored. The T-SQL syntax is correct, and the file system is NTFS. The error message I get is:

        Msg 1823, Level 16, State 2, Line 1
        A database snapshot cannot be created because it failed to start.
        Msg 5119, Level 16, State 1, Line 1
        Cannot make the file "e:\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\TestDB.ss" a sparse file. Make sure the file system supports sparse files.

    The local SQLServerMSSQLUser$db$MSSQLSERVER group has Full Control permission on the folder where I am trying to create the snapshot. I can fix the problem in two ways, neither of which is suitable:

    1. Add the SQL Server service (domain) account to the local Administrators group and restart the SQL service.
    2. Grant the local SQLServerMSSQLUser$db$MSSQLSERVER group Full Control on E:\

    I have tried changing the owner of the DATA directory to SQLServerMSSQLUser$db$MSSQLSERVER, to no avail. I have no issue creating a new database. Why can I not create a snapshot by granting permission only on the DATA folder?

    Update 23/09/2010: I have tried mrdenny's suggestion with no luck (but learned something new in the process). I suspect the problem may be due to the fact that the domain is a Windows 2000 domain running in mixed mode. I had to install hotfix KB976494 for Server 2008 R2, as the SQL Server 2008 R2 installer would not verify the service account correctly with the domain. I noticed that Server 2000 isn't a supported operating system for SQL 2008 R2, but I cannot find anything suggesting it shouldn't work in a 2000 domain. I disjoined the test server from the domain and changed the service accounts to the local service account, and I still have the same issue. I will try to reinstall the server without joining the domain and without the hotfix, and see if the issue persists.
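
    A hedged middle step between the two unsuitable fixes: re-apply the grant on DATA with explicit inheritance flags, in case the effective ACL is not what the folder properties suggest, and confirm the volume actually reports sparse-file support, since Msg 5119 can come from either:

        :: hedged sketch, from an elevated prompt
        fsutil fsinfo volumeinfo E: | findstr /i sparse
        icacls "E:\MSSQL10_50.MSSQLSERVER\MSSQL\DATA" /grant "SQLServerMSSQLUser$db$MSSQLSERVER:(OI)(CI)F"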

  • User directive in nginx generates error despite running as UID root

    - by Joost Schuur
    I'm running nginx on a Mac OS X machine, installed with brew. When I launch nginx, even with sudo, I get the following warning in my log file over and over again:

        4/21/11 2:03:42 AM org.nginx[3788] nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /usr/local/etc/nginx/conf/nginx.conf:2

    From nginx.conf:

        user jschuur staff;

    I'm already launching nginx with sudo, since I want it to listen on port 80. Shouldn't that be enough to give it the proper super-user privileges? The nginx binary as it's installed:

        jschuur@Glenna:sbin ? master ls -la
        total 4544
        drwxr-xr-x   3 jschuur staff     102 Apr 12 20:53 .
        drwxrwxr-x  15 jschuur staff     510 Apr 12 15:25 ..
        -rwxr-xr-x   1 jschuur staff 2325648 Apr 12 20:39 nginx

    FWIW, I recompiled the binary to set Passenger up and moved it from its original location into /usr/local/sbin.

    Update: As it turns out, Mac OS X was restarting nginx after I'd stopped it, because the launchd plist in ~/Library/LaunchAgents had it set to 'KeepAlive'. However, because I installed this plist into my local user's LaunchAgents folder, as opposed to /Library/LaunchAgents (or better yet /Library/LaunchDaemons, which run before you even log on), it wasn't executed as root. Because of an error about not having permission to use port 80, it actually exited right away, but it still wrote to the same log file as the nginx process I started with sudo. I had thought the errors stemming from the automatic restart were actually coming from my manual restart via sudo. So, bottom line, problem solved. The real problem here was the homebrew instructions specifically asking you to install the plist file into a location that wouldn't allow a local site to use port 80.
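
    For reference, a hedged sketch of what the root-owned version of that plist might look like in /Library/LaunchDaemons (the label and binary path are assumptions; "daemon off;" keeps nginx in the foreground so launchd can supervise it):

        <?xml version="1.0" encoding="UTF-8"?>
        <plist version="1.0">
        <dict>
            <key>Label</key>
            <string>org.nginx</string>
            <key>ProgramArguments</key>
            <array>
                <string>/usr/local/sbin/nginx</string>
                <string>-g</string>
                <string>daemon off;</string>
            </array>
            <key>RunAtLoad</key>
            <true/>
            <key>KeepAlive</key>
            <true/>
        </dict>
        </plist>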

  • Cannot connect to postgres installed on Ubuntu

    - by Assaf
    I installed the Bitnami Django stack, which included PostgreSQL 8.4. When I run psql -U postgres I get the following error:

        psql: could not connect to server: No such file or directory
            Is the server running locally and accepting
            connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

    PG is definitely running, and the pg_hba.conf file looks like this:

        # TYPE  DATABASE    USER    CIDR-ADDRESS    METHOD
        # "local" is for Unix domain socket connections only
        local   all         all                     md5
        # IPv4 local connections:
        host    all         all     127.0.0.1/32    md5
        # IPv6 local connections:
        host    all         all     ::1/128         md5

    What gives? "Proof" that PG is running:

        root@assaf-desktop:/home/assaf# ps axf | grep postgres
        14338 ?        S      0:00 /opt/djangostack-1.3-0/postgresql/bin/postgres -D /opt/djangostack-1.3-0/postgresql/data -p 5432
        14347 ?        Ss     0:00  \_ postgres: writer process
        14348 ?        Ss     0:00  \_ postgres: wal writer process
        14349 ?        Ss     0:00  \_ postgres: autovacuum launcher process
        14350 ?        Ss     0:00  \_ postgres: stats collector process
        15139 pts/1    S+     0:00  \_ grep --color=auto postgres
        root@assaf-desktop:/home/assaf# netstat -nltp | grep 5432
        tcp    0    0 127.0.0.1:5432    0.0.0.0:*    LISTEN    14338/postgres
        tcp6   0    0 ::1:5432          :::*         LISTEN    14338/postgres
        root@assaf-desktop:/home/assaf#
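
    Worth noting from the "proof": netstat shows PostgreSQL listening on TCP 127.0.0.1:5432, while psql's error is about a Unix socket in /var/run/postgresql, a path this Bitnami build likely never uses. A hedged workaround is to skip the socket entirely, or to point psql at the socket directory the Bitnami server actually writes to (the path below is an assumption):

        # connect over TCP instead of the default socket path
        psql -h 127.0.0.1 -p 5432 -U postgres
        # or name the (assumed) Bitnami socket directory explicitly
        psql -h /opt/djangostack-1.3-0/postgresql -U postgres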

  • Using AT on Ubuntu to Background Downloads (w/ Queue)

    - by Nicholas Yost
    I am writing a PHP script, but I want to use the at command on Ubuntu to fetch a remote file via wget. I'm basically looking to background the process so PHP can finish fairly quickly. I cannot find any questions on here about how to use both, but I basically want to do the following pseudo-code:

        <?php
        exec('at now -q queuename wget http://path.to/remote/file.ext');
        ?>

    Additionally, I'd like to queue this between providers. I'd like each path.to to have its own queue, so I only download one file from each provider at a time. Meaning:

        <?php
        exec('at now -q remote wget http://path.to/remote/file.ext /local/path');
        exec('at now -q vendorone wget http://vendor.one/remote/file.ext /local/path');
        exec('at now -q vendortwo wget http://vendor.two/remote/file.ext /local/path');
        exec('at now -q vendorone wget http://vendor.one/remote/file.ext /local/path');
        ?>

    This should download the files from path.to, vendor.one and vendor.two immediately, and when the first file has finished downloading from vendor.one, start the second one. Does that make sense? I can't find anything like this anywhere on the web, much less on SO/SF. If we can use the crontab to run a one-off wget command, that's fine too.
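
    Two hedged notes on at(1) itself, since the pseudo-code wouldn't run as written: at takes its command on stdin (or via -f), not as trailing arguments, and -q expects a single queue letter (a-z or A-Z), not a word like "vendorone". Also, "now" jobs in the same lowercase queue all start immediately rather than waiting for each other; uppercase queue letters are treated like batch jobs, which run when the load average permits, so strict one-download-at-a-time per provider would still need something like a lock file. A sketch of one corrected call (the queue letter is arbitrary):

        echo "wget -P /local/path http://vendor.one/remote/file.ext" | at -q v now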

  • RemoteApp shows no certificate available but RD Session host finds it fine

    - by Scott Chamberlain
    I am trying to set up RemoteApp for an internal domain. I have a root CA that is trusted by all of the end computers; that cert has signed a wildcard cert I am trying to use for the server. I added the pfx of the wildcard cert to the local machine personal store. From there I can use it fine for signing the RD Session Host session. However, when I try to set up the signature for RemoteApp, the certificate does not show up. What do I need to do to make my certificate available for use?

    UPDATE: The certificate was generated with the following commands:

        makecert -pe -n "CN=*.vw.local" -a sha1 -sky signature -ic VetWebCA.cer -iv VetWebCA.pvk -sv VetWebComputerWildcard.pvk VetWebComputerWildcard.cer
        pvk2pfx -pvk VetWebComputerWildcard.pvk -spc VetWebComputerWildcard.cer -pfx VetWebComputerWildcard.pfx

    The resulting pfx was added to the machine's local store via mmc. Oddly, going into PowerShell, if I add the -CodeSigningCert flag to the search for the wildcard certificate, it is excluded from the Get-ChildItem results in my Cert:\LocalMachine\My path, but if I don't include the flag it is there.
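
    A hedged explanation that fits the PowerShell observation: RemoteApp signing wants a certificate valid for code signing, and makecert with a plain -sky signature issues no Code Signing EKU, which is exactly what -CodeSigningCert filters on. A sketch of regenerating the cert with the EKU added (OID 1.3.6.1.5.5.7.3.3 is Code Signing):

        makecert -pe -n "CN=*.vw.local" -a sha1 -sky signature ^
            -eku 1.3.6.1.5.5.7.3.3 ^
            -ic VetWebCA.cer -iv VetWebCA.pvk ^
            -sv VetWebComputerWildcard.pvk VetWebComputerWildcard.cer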

  • In Fedora, Perl program cannot find Time::Piece library

    - by Eric Leschinski
    I have a Perl program named /usr/bin/octbatch running as a script on Fedora 17 Linux. When I run this command:

        /usr/bin/octbatch

    I get the error:

        Can't locate Time/Piece.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5) at /usr/bin/octbatch line 6.
        BEGIN failed--compilation aborted at /usr/bin/octbatch line 6.

    Here are the relevant lines of the Perl script:

        #!/usr/bin/perl -wT
        $ENV{PATH} = "/bin:/usr/bin:/usr/local/bin";
        use strict;
        use POSIX qw(setsid :sys_wait_h);
        use Time::Piece;
        use Time::Local;

    I have to install Time::Piece so Perl can find it. I've already installed it with this command (using the defaults):

        /usr/bin/perl -MCPAN -e 'install Time::Piece'

    I have the Piece.pm file in /home/el/perl5/lib/perl5/x86_64-linux-thread-multi/, yet when I run the octbatch command I get the same error as above, as if it can't even find it. Here is my PERL5LIB variable:

        el@defiant ~/gnuoctbluehost/single_stock_analysis $ env | grep PERL5
        PERL5LIB=/home/el/perl5/lib/perl5/x86_64-linux-thread-multi:/home/el/perl5/lib/perl5

    And Piece.pm is located under /home/el/perl5/lib/perl5/x86_64-linux-thread-multi. So my question is: why is it not finding my Piece.pm file? And what are the ways I can get @INC to include it, or otherwise make Perl see it?
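
    A hedged but likely explanation: the shebang runs Perl with -T, and under taint mode Perl ignores the PERL5LIB and PERLLIB environment variables for security reasons, so the CPAN install under /home/el/perl5 stays invisible to this script no matter what the shell exports. Two sketches of a way out:

        # install the packaged module into the system paths instead
        yum install perl-Time-Piece

        # or hard-code the path in the script itself, which taint mode does honor:
        #   use lib '/home/el/perl5/lib/perl5/x86_64-linux-thread-multi';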

  • Login problems on SQL EXPRESS using a user

    - by meep
    Hello Server Fault. This is the first time I've set up a SQL server, so I hope you can help me out. I have a problem logging in using SQL auth on my SQL Express 2008. I added a user through the management interface, as you can see in the image below, but as soon as I try to log in using SQL auth I get an error that the login failed for the user. The server log says:

        Login failed for user 'zebisgaard'. Reason: Could not find a login matching the name provided. [CLIENT: <named pipe>]
        Error: 18456, Severity: 14, State: 5.

    Any idea why? I have triple-checked that the username/password is correct, tried recreating the user, and much more. And all this is on localhost.
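
    Two hedged checks that fit this error's State 5 ("could not find a login"): make sure the connection really lands on the instance where the login was created (SQL Express is usually the named instance .\SQLEXPRESS, and a named-pipe connection can quietly reach a different one), and that the instance allows SQL authentication at all:

        -- run on the instance you believe holds the login
        SELECT name FROM sys.sql_logins WHERE name = 'zebisgaard';
        SELECT SERVERPROPERTY('IsIntegratedSecurityOnly'); -- 1 = Windows auth only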

  • Admin access when forms authentication on sharepoint

    - by user33760
    Hi everyone, I'm fairly new to SharePoint; I'm catching up fast (reading + experimenting), but I can't seem to get around this. I have a web application with its respective site collection and sites, and I have anonymous access allowed for all the sites, with forms authentication. Everything is working fine, but I don't know how to log in with the administrator account from the Internet. With Windows auth you have the "login" link and you just have to use your admin credentials; how can I do that with forms auth? Any help will be highly appreciated. Thanks in advance.

  • How to properly create a startup script for tracd on Synology DS209+II?

    - by Daren Thomas
    I'm running tracd on a Synology DS209+II NAS. For that purpose, I created a script in /opt/etc/init.d called S81trac:

        myserver> ls -l /opt/etc/init.d
        -rwxr-xr-x    1 root    root    127 May 19 09:56 S80apache
        -rwxr-xr-x    1 root    root    122 Jun 10 10:17 S81trac

    This file has the following contents:

        #!/bin/sh
        # run tracd
        /opt/bin/tracd -p 8888 -auth=*,/volume1/svn/svn-auth-file,mydomain -e /volume1/trac-env

    And this actually works, except that the NAS never really finishes booting: the blue light keeps flashing. Also, reboot no longer works (it hangs) and I have to use killall init to reboot the machine. I have tried running tracd in the background by appending & to the last line of S81trac. After rebooting, the blue light stops flashing, but ps | grep tracd comes up empty and I can't connect to the trac instance from my PC. I guess I'm doing something wrong here, but what?
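
    A hedged sketch of the likely fix: init scripts need to return, and tracd stays in the foreground by default, which would leave the boot sequence hanging exactly as described. tracd can fork itself with -d (and record a pidfile, which also gives a clean stop action later):

        #!/bin/sh
        # hedged sketch of S81trac; -d daemonizes tracd properly
        /opt/bin/tracd -d --pidfile /var/run/tracd.pid -p 8888 \
            -auth=*,/volume1/svn/svn-auth-file,mydomain -e /volume1/trac-env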

  • JNDI Datasource Problem on Tomcat 6, Hibernate

    - by Asuman AKYILDIZ
    Hello. I am using Tomcat 6 as the application server, Struts with Hibernate, and MyEclipse 6.0. My application uses the JDBC driver directly, but I need to modify it to use a JNDI datasource. I followed the steps described in the Tomcat 6.0 how-to. I defined my resource in the Tomcat conf:

        <Resource name="jdbc/ats" global="jdbc/ats" auth="Container"
                  type="javax.sql.DataSource"
                  driverClassName="oracle.jdbc.OracleDriver"
                  url="jdbc:oracle:thin:@//localhost:1521/MISDEV"
                  username="TEST" password="TEST"
                  maxActive="20" maxIdle="10" maxWait="-1"
                  validationQuery="SELECT 1 from dual"
                  removeAbandoned="true" removeAbandonedTimeout="30"
                  logAbandoned="false"/>

    I referenced it in my application's web.xml:

        <resource-ref>
            <description>Oracle Datasource example</description>
            <res-ref-name>jdbc/ats</res-ref-name>
            <res-type>javax.sql.DataSource</res-type>
            <res-auth>Container</res-auth>
        </resource-ref>

    And I set the datasource and dialect in my hibernate.cfg.xml:

        <property name="connection.datasource">java:comp/env/jdbc/ats</property>
        <property name="dialect">org.hibernate.dialect.Oracle9Dialect</property>

    But when I create the Hibernate session, it cannot open the connection:

        09:18:11,322 ERROR JDBCExceptionReporter:72 - Connections could not be acquired from the underlying database!
        org.hibernate.exception.GenericJDBCException: Cannot open connection

    I also tried setting the properties at runtime:

        Configuration configuration = new Configuration();
        configuration.setProperty("hibernate.dialect", "org.hibernate.dialect.Oracle9Dialect");
        //configuration.setProperty("hibernate.connection.datasource", "java:comp/env/jdbc/ats");
        configuration.setProperty("hibernate.current_session_context_class", "thread");
        configuration.setProperty("hibernate.connection.provider_class", "org.hibernate.connection.C3P0ConnectionProvider");
        configuration.setProperty("hibernate.show_sql", "true");
        sessionFactory = configuration.configure().buildSessionFactory();

    It does not open the connection either. But when I use the JDBC driver directly, it works:

        Configuration configuration = new Configuration();
        configuration.setProperty("hibernate.dialect", "org.hibernate.dialect.Oracle9Dialect");
        //configuration.setProperty("hibernate.connection.datasource", "java:comp/env/jdbc/ats");
        configuration.setProperty("hibernate.connection.url", "jdbc:oracle:thin:@//localhost:1521/MISDEV");
        configuration.setProperty("hibernate.connection.username", "test");
        configuration.setProperty("hibernate.connection.password", "test");
        configuration.setProperty("hibernate.connection.driver_class", "oracle.jdbc.OracleDriver");
        configuration.setProperty("hibernate.transaction.factory_class", "org.hibernate.transaction.JDBCTransactionFactory");
        configuration.setProperty("hibernate.current_session_context_class", "thread");
        configuration.setProperty("hibernate.connection.provider_class", "org.hibernate.connection.C3P0ConnectionProvider");
        configuration.setProperty("hibernate.show_sql", "true");
        sessionFactory = configuration.configure().buildSessionFactory();

    I have been searching for three days with no success. What may be the problem?
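
    One hedged suspicion about the JNDI setup: if that <Resource> went into server.xml's GlobalNamingResources, it is not visible under the webapp's java:comp/env until it is linked in. Declaring the resource (or a ResourceLink to the global one) in the application's own META-INF/context.xml is the usual Tomcat 6 arrangement:

        <!-- hedged sketch: META-INF/context.xml inside the webapp -->
        <Context>
            <ResourceLink name="jdbc/ats" global="jdbc/ats"
                          type="javax.sql.DataSource"/>
        </Context>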

  • Logstash agent doesn't run as a daemon on MAC OS X 10.9.1

    - by user329324
    I need to run the logstash agent as a daemon on a Mac OS X system, starting whenever the system boots up. In a terminal this works:

        /usr/local/logstash/bin/logstash agent -f /usr/local/etc/cvlog.conf

    but as a daemon the program doesn't start. My com.bcd.logstash.plist:

        <plist version="1.0">
        <dict>
            <key>Label</key>
            <string>com.bcd.logstash</string>
            <key>KeepAlive</key>
            <dict>
                <key>SuccessfulExit</key>
                </false>
            </dict>
            <key>ProgramArguments</key>
            <array>
                <string>/usr/local/logstash/bin/logstash</string>
                <string>agent</string>
                <string>-f</string>
                <string>/usr/local/etc/cvlog.conf</string>
            </array>
            <key>RunAtLoad</key>
            </true>
        </dict>
        </plist>

    I start it with:

        launchctl load /Library/LaunchDaemons/com.bcd.logstash.plist

    Syslog error messages:

        com.apple.launchd[1] (com.bcd.logstash[pid]): Exited with code: 1
        com.apple.launchd[1] (com.bcd.logstash[pid]): Exited with code: 143

    What's wrong with my plist?
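
    Whether it is the actual file contents or a transcription artifact, the plist as quoted is not well-formed XML: launchd booleans are self-closing elements, so </false> and </true> should read <false/> and <true/>. (Exit code 143 is 128+15, i.e. the process dying on SIGTERM.) A corrected sketch:

        <?xml version="1.0" encoding="UTF-8"?>
        <plist version="1.0">
        <dict>
            <key>Label</key>
            <string>com.bcd.logstash</string>
            <key>KeepAlive</key>
            <dict>
                <key>SuccessfulExit</key>
                <false/>
            </dict>
            <key>ProgramArguments</key>
            <array>
                <string>/usr/local/logstash/bin/logstash</string>
                <string>agent</string>
                <string>-f</string>
                <string>/usr/local/etc/cvlog.conf</string>
            </array>
            <key>RunAtLoad</key>
            <true/>
        </dict>
        </plist>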

  • Sendmail : Mail delivery of same domain to internal or external mail server.

    - by Silkograph
    It's a bit difficult to explain, but a very simple problem. We have an internal sendmail server and a hosted server, both set to the same domain name, and we have mixed mail accounts. For example, we have two users in one office:

    [email protected] is local only. "Internal" means we create the user only on the local Linux box where sendmail is set up.
    [email protected] is internal plus external. "External" means we create the user on both the local box and the hosted server.

    [email protected] can send mail to any internal user created on the Linux box where sendmail is installed, but he cannot send mail to outside domains, and no mail can be sent to him from outside, as there is no account for him on the external hosted server. [email protected] can send mail to internal users as well as to all other domains through sendmail's smart_host feature, which uses the hosted server's SMTP, and he receives all his external mail internally through fetchmail on the Linux box.

    Now we have a third user, [email protected], who will always be out of the office and can use the external server only. I cannot create an account for him on the local Linux box, because then his mail would be delivered locally only. I don't want to create an alias that forwards his mail to a Gmail or Yahoo account. I want mail to reach his external account from any internal user. How can this be done? Thanks in advance.
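
    One hedged approach, given that outbound relay through the hosted server already works: give user3 a local alias that points at the hosted server under a hostname other than domain.com itself, since domain.com is in the local delivery list and mail addressed to it stays on the Linux box. The user and host names below are purely assumptions for illustration:

        # /etc/aliases -- mail.hostedprovider.example is hypothetical
        user3: user3@mail.hostedprovider.example

        # then rebuild the alias database
        newaliases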

  • How to setup Python with Lighttpd and FastCGI (like PHP)

    - by johndir
    Running lighttpd on Linux, I would like to be able to execute Python scripts the way I execute PHP scripts. The goal is to execute arbitrary script files stored in the WWW directory, e.g. http://www.example.com/*.py. I would not like to spawn a new Python instance (interpreter) for every request (as is done in regular CGI, if I'm not mistaken), which is why I'm using FastCGI. Following lighttpd's documentation, the FastCGI part of my config file is below. The problem is that it always runs the /usr/local/bin/python-fcgi script for every *.py file, regardless of the content of that file:

        http://www.example.com/script.py
        [output =>] "python-fcgi: test"   (regardless of the content of script.py)

    I'm not interested in using any framework, but simply in executing individual [web] scripts. How can I make it act like PHP, executing any script in the WWW directory by requesting its path?

    /etc/lighttpd/conf.d/fastcgi.conf:

        server.modules += ( "mod_fastcgi" )
        index-file.names += ( "index.php" )

        fastcgi.server = (
            ".php" => (
                "localhost" => (
                    "bin-path" => "/usr/bin/php-cgi",
                    "socket" => "/var/run/lighttpd/php-fastcgi.sock",
                    "max-procs" => 4, # default value
                    "bin-environment" => (
                        "PHP_FCGI_CHILDREN" => "1", # default value
                    ),
                    "broken-scriptfilename" => "enable"
                )
            ),
            ".py" => (
                "python-fcgi" => (
                    "socket" => "/var/run/lighttpd/fastcgi.python.socket",
                    "bin-path" => "/usr/local/bin/python-fcgi",
                    "check-local" => "disable",
                    "max-procs" => 1,
                )
            )
        )

    /usr/local/bin/python-fcgi:

        #!/usr/bin/python2

        def myapp(environ, start_response):
            start_response('200 OK', [('Content-Type', 'text/plain')])
            return ['python-fcgi: test\n']

        if __name__ == '__main__':
            from flup.server.fcgi import WSGIServer
            WSGIServer(myapp).run()

  • Apache is running the wrong version of PHP.

    - by The Rook
    I am trying to enable cURL in PHP, and possibly update PHP as well. I have run into a roadblock where Apache seems to be running the wrong version of PHP. Here is some evidence:

        # lsof | grep php
        httpd 18397 nobody 135w REG 8,3 242 528537 /usr/local/apache/modules/libphp5.so

    I downloaded the latest PHP, 5.2.13, and built it from source. I ran service httpd stop, did a make install, and then manually overwrote /usr/local/apache/modules/libphp5.so just to be safe. Using ldd I can see that it's my fresh binary, compiled with cURL (the old one doesn't have cURL):

        # ldd /usr/local/apache/modules/libphp5.so | grep curl
        libcurl.so.4 => /usr/local/lib/libcurl.so.4 (0x0064e000)

    Yet when I execute phpinfo(), 5.2.3 is reported instead of my new 5.2.13, and "curl" is nowhere to be found. What am I doing wrong? Why is cURL disabled even though it's statically linked? This is a RHEL 5.5 system.
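
    A few hedged diagnostics that usually settle which module Apache actually loads; the first command prints the config file the running build was compiled to use, which is a common source of surprises when more than one Apache or PHP has been installed from source:

        /usr/local/apache/bin/httpd -V | grep -i server_config
        grep -ri 'LoadModule php5_module' /usr/local/apache/conf/
        find / -name libphp5.so 2>/dev/null    # any stale copies elsewhere?
        /usr/local/apache/bin/apachectl stop && /usr/local/apache/bin/apachectl start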

  • Apache showing 500 error during Active Directory LDAP authentication

    - by Tyllyn
    I have Apache (on Windows Server) set up to authenticate one directory through Active Directory. The config settings are as follows:

        <LocationMatch "/trac/[^/]+/login">
            Order deny,allow
            Allow from all
            AuthBasicProvider ldap
            AuthzLDAPAuthoritative Off
            AuthLDAPURL ldap://<ip-redacted>:3268/cn=Users,OU=MyBusiness,DC=<dc-redacted>,DC=local?sAMAccountName?sub?(objectClass=*)
            AuthLDAPBindDN trac@<dc-redacted>.local
            AuthLDAPBindPassword "<password-redacted>"
            AuthType Basic
            AuthName "Protected"
            require valid-user
        </LocationMatch>

    Watching in Wireshark, I see the following get sent through when I visit the page. To the AD server:

        bindRequest(1) "trac@<dc-redacted>.local" simple

    And from the AD server:

        bindResponse(1) success

    I'm assuming this means that the auth was successful... but Apache doesn't think so. It returns a 500 error to me. The Apache logs show the following:

        [Thu Nov 18 16:21:12 2010] [debug] mod_authnz_ldap.c(379): [client 192.168.x.x] [7352] auth_ldap authenticate: using URL ldap://<ip-redacted>:3268/cn=Users,OU=MyBusiness,DC=<dc-redacted>,DC=local?sAMAccountName?sub?(objectClass=*), referer: http://192.168.x.x/trac/Trac/login
        [Thu Nov 18 16:21:12 2010] [info] [client 192.168.x.x] [7352] auth_ldap authenticate: user authentication failed; URI /trac/Trac/login [ldap_search_ext_s() for user failed][Filter Error], referer: http://192.168.x.x/trac/Trac/login

    Now, that log shows a failed auth for a blank user. I am confused. Any idea what I am doing wrong... and how I can get the Apache authentication working? :) Thanks!
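
    A hedged variation to try, since the log points at the search filter rather than the bind: drop the explicit (objectClass=*) segment (mod_authnz_ldap supplies a default filter when none is given) and quote the URL, which also sidesteps any parsing surprises around the parentheses:

        AuthLDAPURL "ldap://<ip-redacted>:3268/cn=Users,OU=MyBusiness,DC=<dc-redacted>,DC=local?sAMAccountName?sub"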

  • How do I delete hardlinks, symbolic links, junction points, etc please?

    - by jonny
    I could be wrong, but I have yet to hear a valid argument that the exploitability these things deliver is outweighed by their very dubious, debatable functionality. They seem marginally handy to me, but I don't think I have any need for them. I do have a need for security, however. How can I delete their entire functionality permanently from my hard drive, please? Microsoft only has pages on how to create them, which seems almost peculiar to the point of being dubious (at least, to me). And just a dumb command-line question: am I correct in assuming that fsutil hardlink list c: will enumerate every single hardlink on that drive?

        C:\Windows\system32>fsutil hardlink list c:
        \Windows\System32

    Also, how do I delete symbolic links, please? ;) But I'd just rather have all symbolic linking and recursion-creating functionality removed, if that's possible.

        C:\Windows\system32>fsutil behavior query symlinkevaluation
        Local to local symbolic links are enabled.
        Local to remote symbolic links are enabled.
        Remote to local symbolic links are disabled.
        Remote to remote symbolic links are disabled.
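
    For what it's worth, hardlinks and symlinks are NTFS/Windows features that cannot be uninstalled, but the symlink evaluation policy shown above can be tightened, and that much is scriptable. Also, fsutil hardlink list takes a single file and lists that file's names; it does not enumerate every hardlink on a drive. A hedged sketch:

        :: disable every class of symlink evaluation (local/remote, both ways)
        fsutil behavior set SymlinkEvaluation L2L:0 L2R:0 R2L:0 R2R:0
        :: list the hard links of one file
        fsutil hardlink list C:\Windows\explorer.exe
        :: symlinks themselves are removed like ordinary entries:
        ::   del <link>    for a file symlink
        ::   rmdir <link>  for a directory symlink or junction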

  • Playing iPad to iPad Wifi games over PPTP VPN

    - by Pez Cuckow
    I'm trying to use a VPN to play iPad-to-iPad Wifi (local) games over the internet. Normally you open the game on both iPads, connect to the same Wifi point, and they can "see" and speak to each other. I figure that with a VPN I can put them both on the same network (either both on the VPN, or one on the "real" network and one on the VPN). On my router I've set up a PPTP VPN with the IP range 192.168.1.2-50, where the PCs on the real local network are assigned 192.168.1.100+. When I connect one of the iPads to the VPN, using an external Wifi network (BT Openzone), I can ping it as expected from any machine on the local network. However, the iPads cannot "see" each other, and none of the Wifi-to-Wifi games work. I've also tried connecting both iPads to the same VPN, with the same result: all machines on the local network (and those on the VPN) can ping the iPads, but none of the Wifi-to-Wifi games work. I've set both iPads to send all traffic over the VPN, and if I check their external IPs they match that of the real network. Does anyone know how to fix this, and/or what is causing it? Or what further debug information I can provide?
