Search Results

Search found 5444 results on 218 pages for 'svn verify'.

Page 98/218 | < Previous Page | 94 95 96 97 98 99 100 101 102 103 104 105  | Next Page >

  • A Tale of Identifiers

    Identifiers aren't locators, and they aren't pointers or links either. They are a logical concept in a relational database, and, unlike the more traditional methods of accessing data, don't derive from the way that data gets stored. Identifiers uniquely identify members of the set, and it should be possible to validate and verify them. Celko somehow involves watches and taxi cabs to illustrate the point.

    Read the article

  • What Are Link Tools and How Do You Use Them?

    Link tools are a broad category of tools covering the discovery, analysis, and reporting of a range of website links (e.g. backlinks, outbound links, internal site links, etc.). They can be used to check which sites are linking to your (or your competitors'!) website, whether people are linking back to you, or simply to verify that your own internal linking structure is working correctly.

    Read the article

  • Storing a looong lookup table

    - by inquisitive
    Background: The product I am working on has a very long lookup table. The table contains static data and cannot be auto-generated. There are about 500 rows and 10 columns, holding mostly integers and strings. To complicate matters, there are actually two such tables: every row in table-1 maps to zero-or-more rows in table-2. We use an SQLite database with two tables. The product installer places the SQLite file in the installation directory. The application is written in .NET and we use ADO to load the data once on startup. Now, the lookup table grows: in each monthly release we add about 10 new entries and fine-tune existing ones.
    The problem: A team of (10) developers works on the lookup table. The code goes into SVN, but the little devil, the SQLite file, does not. This prevents multiple developers from working on it. We do take regular backups of the file, but proper versioning is not possible: we never know who made a breaking change. Worse, we don't know whether there was any change at all, since diff'ing databases is tedious if not impossible. The tables are expected to grow quite large in the years to come, and we will need developers to work on them in parallel. The data is business-critical, and we need to be able to audit changes made to it.
    Question: What would be a solution for the problems outlined above? One idea was to transform the whole thing to XML and treat it like just another source file; that way SVN can do the versioning and we could work in parallel. But the data is relational: with XML we lose the unique and foreign-key constraints, and we can't query it with SQL-like ease. Any help here will be appreciated.
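    One common way to get SVN-friendly versioning without giving up the relational model is to keep the canonical data as a SQL text dump and rebuild the SQLite file at build time. A minimal sketch, assuming the database file is named lookup.db:
        sqlite3 lookup.db .dump > lookup.sql                 # plain-text dump: diffable and committable
        svn add lookup.sql                                   # version the dump, not the binary .db
        rm -f lookup.db && sqlite3 lookup.db < lookup.sql    # rebuild the binary from the dump at build time
    Because the dump includes the CREATE TABLE statements, the unique and foreign-key constraints survive the round trip, which the XML route would lose.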

    Read the article

  • Archiving SQL Server Data Using Partitioning

    Many companies now have a requirement to keep data for long periods of time. While this data does have to be available if requested, it usually does not need to be accessible by the application for any current transactions. Data that falls into this category is a good candidate for archival.

    Read the article

  • ODEE Green Field (Windows) Part 2 - WebLogic

    - by AndyL-Oracle
    Welcome back to the next installment on how to install Oracle Documaker Enterprise Edition onto a green field environment! In my previous post, I went over some basic introductory information and we installed the Oracle database. Hopefully you've completed that step successfully, and we're ready to move on - so let's dive in! For this installment, we'll be installing WebLogic 10.3.6, which is a prerequisite for ODEE 12.3 and 12.2. Prior to installing the WebLogic application server, verify that you have met the software prerequisites. Review the documentation – specifically, you need to make sure that you have the appropriate JDK installed. There are advisories if you are using JDK 1.7. These instructions assume you are using JDK 1.6, which is available here. The first order of business is to unzip the installation package into a directory location. This procedure should create a single file, wls1036_generic.jar. Navigate to and execute this file by double-clicking it. This should launch the installer. Depending on your User Account Control rights you may need to approve running the setup application. Once the installer application opens, click Next. Select your Middleware Home. This should be within your ORACLE_HOME. The default is probably fine. Click Next. Uncheck the Email option. Click Yes. Click Next. Click Yes, click Yes, and Yes again (yes, it’s quite circular). Check that you wish to remain uninformed and click Continue. Click Custom and Next. Uncheck Evaluation Database and Coherence, then click Next. Select the appropriate JDK. This should be a 64-bit JDK if you’re running a 64-bit OS. You may need to browse to locate the appropriate JAVA_HOME location. Check the JDK and click Next. Verify the installation directory and click Next. Click Next. Allow the installation to progress… Uncheck Run Quickstart and click Done. And that's it! It's all quite painless - so let's proceed on to set up SOA Suite, shall we?
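    If double-clicking the JAR does nothing (a common symptom when .jar files are associated with an archive tool rather than Java), the installer can be launched from a command prompt with the JDK you intend to use. A minimal sketch - the JDK path below is a placeholder for your own JAVA_HOME:
        cd C:\installers
        "C:\Program Files\Java\jdk1.6.0_45\bin\java.exe" -jar wls1036_generic.jar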

    Read the article

  • Deploy and Test an Azure App with Platform Ready

    Microsoft Platform Ready provides technical and marketing resources for companies building applications for the Microsoft platform. Currently they are working with The Code Project on a promotion that will pay $250 USD to companies for their FIRST Windows Azure application that is verified compatible using the Microsoft Platform Ready testing tools. The contest runs through 21 June 2011 12:00 PST and is valid in the US only, but the walkthrough I’m about to show will work for any company that wishes to verify to customers that their application is running correctly on Windows Azure.

    Read the article

  • I want to start using TDD. Any tips for a beginner?

    - by Mike42
    I've never used an automated test mechanism in any of my projects, and I feel I'm missing a lot. I want to improve myself, so I have to start tackling issues I've been neglecting, like this one, and trying Git instead of being stuck on SVN. What's a good way to learn TDD? I'll probably be using Eclipse to program in Java. I've heard of JUnit, but I don't know if there's anything else I should consider.

    Read the article

  • Data Mining: Part 14 Export DMX results with Integration Services

    In this chapter we will explain how to work with Data Mining models and Integration Services. Specifically, we will talk about the Data Mining Query Task in SSIS.

    Read the article

  • Which devices is my app working on?

    - by Woojah
    My team is developing an app that will work on about 100 (or more) different Android devices. We are having trouble testing it, since we are not sure how to verify that it works on all the different devices. Can anybody suggest some best practices, a testing framework, or some way to test our app and/or get feedback from our users so they can tell us about the problems they are having?

    Read the article

  • Protecting the SQL Server Backup folder

    I want to back up my SQL Server databases to a folder, but I want to minimize who has access to the folder. In other words, I want to make sure that members of the Windows Local Administrators group don't get to the backups without intentionally trying to bypass the security. How do I do that?
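    A minimal sketch of one approach using icacls: break ACL inheritance on the backup folder, then grant access only to the SQL Server service account (the path and account name below are placeholders):
        icacls "D:\SQLBackups" /inheritance:r
        icacls "D:\SQLBackups" /grant "DOMAIN\sqlsvc:(OI)(CI)F"
    Note the caveat built into the question: local Administrators can always take ownership of the folder and re-grant themselves rights, so this deters casual browsing rather than a determined admin - which matches the stated goal.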

    Read the article

  • error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure(35)

    - by ArunS
    Hello there. We have an online shopping site, and when I go to the checkout page I get an error like this: "error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure(35)". From the Apache error log I can see some attempts to connect to api.paypal.com. Here is the relevant part of my Apache error log:
        About to connect() to api.paypal.com port 443 (#0)
        Trying 66.211.168.123... connected
        Connected to api.paypal.com (66.211.168.123) port 443 (#0)
        successfully set certificate verify locations:
        CAfile: none
        CApath: /etc/ssl/certs
        error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure
        Closing connection #0
    When I try to connect to api.paypal.com using curl I get an error like this:
        curl -iv https://api.paypal.com/
        * About to connect() to api.paypal.com port 443 (#0)
        * Trying 66.211.168.91... connected
        * Connected to api.paypal.com (66.211.168.91) port 443 (#0)
        * successfully set certificate verify locations:
        * CAfile: none
          CApath: /etc/ssl/certs
        * SSLv3, TLS handshake, Client hello (1):
        * SSLv3, TLS handshake, Server hello (2):
        * SSLv3, TLS handshake, CERT (11):
        * SSLv3, TLS handshake, Request CERT (13):
        * SSLv3, TLS handshake, Server finished (14):
        * SSLv3, TLS handshake, CERT (11):
        * SSLv3, TLS handshake, Client key exchange (16):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSLv3, TLS alert, Server hello (2):
        * error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure
        * Closing connection #0
        curl: (35) error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure
    Can anyone help me figure this out? Thanks in advance. Arun S
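    One hint is visible in the trace itself: the server sends "Request CERT (13)", i.e. api.paypal.com expects a client certificate (it is PayPal's certificate-based API endpoint), and the handshake dies right after the client replies with an empty certificate. A hedged sketch of the test, assuming your PayPal API certificate and key are concatenated in paypal_cert.pem (a placeholder name):
        curl -iv --cert paypal_cert.pem https://api.paypal.com/
    If that handshake completes, the shop software needs to be configured with the same API certificate - or switched to PayPal's signature-based endpoint, api-3t.paypal.com, which does not request a certificate.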

    Read the article

  • Error when trying to start Apache after installing SSL cert

    - by chris
    I am trying to install an SSL certificate, and I get the following errors:
        AH02241: Init: Unable to read server certificate from file /path/my.crt
        SSL Library Error: error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag
        SSL Library Error: error:0D07803A:asn1 encoding routines:ASN1_ITEM_EX_D2I:nested asn1 error (Type=X509)
        AH02312: Fatal error initialising mod_ssl, exiting.
    Here's the process I followed. I generated my private key with:
        openssl genrsa -out my.key 2048
    I created the CSR with:
        openssl req -new -key my.key -out my.csr
    I provided the CSR to our IT department, and they returned a crt - it starts with -----BEGIN CERTIFICATE-----. My ssl.conf has (my.example.com matches the Common Name used during the generation of the CSR):
        <VirtualHost my.example.com:443>
            SSLEngine On
            ServerName my.example.com
            SSLCertificateFile /path/my.crt
            SSLCertificateKeyFile /path/my.key
        </VirtualHost>
    I do not have SSLCertificateChainFile or SSLCACertificateFile set. The private key starts with -----BEGIN RSA PRIVATE KEY----- and the CSR starts with -----BEGIN CERTIFICATE REQUEST-----. I have verified that both:
        openssl rsa -noout -modulus -in my.key
        openssl req -noout -modulus -in my.csr
    produce the same output. I cannot figure out how to verify the crt - trying both x509 and rsa produces an error. Should this process have worked? Can I verify that my.crt matches the key somehow?
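    To answer the last question: the same modulus comparison works for the certificate, and piping through md5 makes the outputs easy to compare by eye. A sketch:
        openssl x509 -noout -modulus -in my.crt | openssl md5
        openssl rsa  -noout -modulus -in my.key | openssl md5
    If the x509 command itself errors out on my.crt, the file is probably not PEM at all; the "wrong tag" ASN.1 error Apache reports typically means a DER-encoded (or PKCS#7) file was handed back, which can be checked and converted with:
        openssl x509 -inform der -in my.crt -out my_pem.crt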

    Read the article

  • How to set up the CNAME in DNS zone record to work with Unbounce

    - by Lirik
    I'm trying to run split testing on some landing pages I "designed" with Unbounce, but it requires that I set the CNAME record for my domain/sub-domain, and I'm having trouble figuring out the right way to do it. My host is Arvixe (www.arvixe.com) and their customer support has failed to help me for the past 5 days (I spoke to them multiple times). I followed the directions for setting the CNAME record and I was able to set it, but I'm consistently unable to verify that it is set up correctly. I followed the instructions on Unbounce to verify the CNAME record for my sub-domain (beta.devboost.com) and here are the results:
        No records found
        Reported by ns1.SNARE.arvixe.com on Thursday, November 10, 2011 at 5:49:57 PM (GMT-6)
    Here is my DNS zone record from the control panel of my host (last record, CNAME unbouncepages.com). Is there something wrong with my DNS zone record? What's the right way to do this?
    Update: I also have a CNAME record for beta in my root domain (devboost.com). I've updated my sub-domain record now, removed most of the other DNS records, and removed the beta label for the CNAME record. Is that correct? Is there anything else I need to do?
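    Independent of the host's own checker, the record can be verified from any machine. A sketch using the sub-domain from the question (dig on Linux/Mac, nslookup on Windows):
        dig beta.devboost.com CNAME +short
        nslookup -type=CNAME beta.devboost.com
    If the record is in place, the first command should print unbouncepages.com. - and allow some time for DNS propagation after each change before concluding the record is wrong.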

    Read the article

  • How to avoid lftp Certificate verification error?

    - by pattulus
    I'm trying to get my Pelican blog working. It uses lftp to transfer the actual blog to one's server, but I always get an error:
        mirror: Fatal error: Certificate verification: subjectAltName does not match ‘blogname.com’
    I think lftp is checking the SSL certificate, and the quick Pelican setup just doesn't account for the fact that I don't have SSL on my FTP. This is the code in Pelican's Makefile:
        ftp_upload: $(OUTPUTDIR)/index.html
            lftp ftp://$(FTP_USER)@$(FTP_HOST) -e "mirror -R $(OUTPUTDIR) $(FTP_TARGET_DIR) ; quit"
    which renders in the terminal as:
        lftp ftp://[email protected] -e "mirror -R /Volumes/HD/Users/me/Test/output /myblog_directory ; quit"
    What I managed so far is denying the SSL check by changing the Makefile to:
        lftp ftp://$(FTP_USER)@$(FTP_HOST) -e "set ftp:ssl-allow no" "mirror -R $(OUTPUTDIR) $(FTP_TARGET_DIR) ; quit"
    Due to my incorrect implementation I get logged in correctly (lftp [email protected]:~>), but the one-liner doesn't work anymore and I have to enter the mirror command by hand:
        mirror -R /Volumes/HD/Users/me/Test/output/ /myblog_directory
    This works without an error or timeout. The question is how to do this with a one-liner. In addition I tried:
        set ssl:verify-certificate/ftp.myblog.com no
    and this trick to disable certificate verification in lftp:
        $ cat ~/.lftp/rc
        set ssl:verify-certificate no
    However, it seems there is no "rc" file in my ~/.lftp directory - so this approach has no chance to work.
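    For the one-liner: lftp accepts several commands inside a single -e string when they are separated by semicolons, so the setting can ride along with the mirror. A sketch in the Makefile's own variable style:
        lftp ftp://$(FTP_USER)@$(FTP_HOST) -e "set ssl:verify-certificate no; mirror -R $(OUTPUTDIR) $(FTP_TARGET_DIR); quit"
    Alternatively, ~/.lftp/rc is a file (not a folder) that lftp reads at startup, and it can simply be created:
        mkdir -p ~/.lftp
        echo 'set ssl:verify-certificate no' >> ~/.lftp/rc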

    Read the article

  • curl FTPS with client certificate to a vsftpd

    - by weeheavy
    I'd like to authenticate FTP clients either via username+password or a client certificate. Only FTPS is allowed. User/password works, but while testing with curl (I don't have another option) and a client certificate, I need to pass a user. Isn't it technically possible to authenticate only by providing a certificate? My vsftpd.conf:
        passwd_chroot_enable=YES
        chroot_local_user=YES
        ssl_enable=YES
        rsa_cert_file=/usr/local/ssl/certs/vsftpd.pem
        force_local_data_ssl=YES
        force_local_logins_ssl=YES
    Tested with:
        curl -v -k -E client-crt.pem --ftp-ssl-reqd ftp://server:21/testfile
    the output is:
        * SSLv3, TLS handshake, Client hello (1):
        * SSLv3, TLS handshake, Server hello (2):
        * SSLv3, TLS handshake, CERT (11):
        * SSLv3, TLS handshake, Request CERT (13):
        * SSLv3, TLS handshake, Server finished (14):
        * SSLv3, TLS handshake, CERT (11):
        * SSLv3, TLS handshake, Client key exchange (16):
        * SSLv3, TLS handshake, CERT verify (15):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSL connection using DES-CBC3-SHA
        * Server certificate:
        * SSL certificate verify result: self signed certificate (18), continuing anyway.
        > USER anonymous
        < 530 Anonymous sessions may not use encryption.
        * Access denied: 530
        * Closing connection #0
        * SSLv3, TLS alert, Client hello (1):
        curl: (67) Access denied: 530
    This is theoretically OK, as I forbid anonymous access. If I specify a user with -u username:pass it works, but it would without a certificate too. The client certificate seems to be OK; it looks like this:
        client-crt.pem
        -----BEGIN RSA PRIVATE KEY-----
        content
        -----END RSA PRIVATE KEY-----
        -----BEGIN CERTIFICATE-----
        content
        -----END CERTIFICATE-----
    What am I missing? Thanks in advance. (The OS is Solaris 10 SPARC.)
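    As far as I know, the FTP protocol always requires a USER, and stock vsftpd has no mechanism to map a client certificate to a login, so certificate-only authentication is not available; what vsftpd can do is require a valid client certificate in addition to the credentials. A hedged sketch of the relevant vsftpd.conf options (the CA bundle path is a placeholder):
        require_cert=YES
        validate_cert=YES
        ca_certs_file=/usr/local/ssl/certs/ca.pem
    With these set, a password login without a certificate signed by that CA is refused, which at least enforces the certificate half of the requirement.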

    Read the article

  • Zscaler. Certs, cookies, and port 80 traffic

    - by 54's_lol
    So I work at HQ for a large company that shall remain nameless. We use Zscaler, and I had to roll out a 2048-bit cert per Zscaler's request. People around me at work don't understand the technology and think that the certs are what is allowing internet connectivity. From my understanding (and please chime in), the cookie located at C:\Users\$$$$$$4$$\AppData\Roaming\Macromedia\Flash Player#SharedObjects\Q3JQJQJV\gateway.zscaler.net\zscaler.swf is what gets created when you provide your creds the first time you use the browser. The certs are simply a way of inspecting the SSL traffic, as Zscaler had no way of doing this before without them. They are essentially using the classic MITM approach to parse your SSL traffic. Gmail is smart enough to recognize this, as you get a warning. My question is this: is there a product or service I can use to verify my web browser, when at home (i.e. off the company network), isn't still getting routed to Zscaler's cloud? If I do a tracert, that works fine. It's the port 80 and 443 web traffic Zscaler and my company are after. I would like to verify that when I'm off their premises, my web traffic is using only my ISP and the path to whatever content I'm searching for. Do the certs I'm pushing and browser authentication do something behind the curtain that forces web traffic to get routed to Zscaler? I searched quite a bit and would very much like to know if I'm ever off company scrutiny. I do know Zscaler offers the service to force the scenario I'm asking about. Can I prove how my web traffic is getting routed? Thanks for any insight. I've been a fan for a long time and you guys' kung fu is very strong. :-)
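    A couple of checks, hedged because they rely on Zscaler-hosted tooling: Zscaler publishes a self-check page that reports whether the request reached it through their cloud, and the browser's proxy configuration (a PAC file or fixed proxy) is what would force traffic their way, so both are worth inspecting from home:
        curl -s http://ip.zscaler.com/
    If that page reports the request did not come through a Zscaler node, and the browser has no proxy or PAC URL configured, the port 80/443 traffic is taking the plain ISP path; the certificate alone cannot redirect traffic - it only lets a proxy that is already in the path inspect SSL.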

    Read the article

  • CVSROOT problem because of username string

    - by jatanp
    Hi, I have always been an SVN user, but currently I have to use CVS as the source repository. I am quite new to CVS and have been really confused many a time (the reason being I always tried to use CVS like SVN!). However, now I am really stuck on one problem wherein I am not able to do any cvs operations through Cygwin. I checked out the code using WinCVS, and while doing that it created the CVSROOT as follows:
        :pserver;username=<user_name>;password=<pwd>:<serverip>:/cvs/repository
    However, whenever I try to use the cvs command in Cygwin (after setting the CVSROOT variable using export) it fails with the following error:
        cvs update: Unknown option (`username') in CVSROOT.
        cvs update: in directory .:
        cvs update: ignoring CVS/Root because it does not contain a valid root.
        cvs update: Unknown option (`username') in CVSROOT.
        cvs [update aborted]: Bad CVSROOT: `:pserver;username=<user_name>;password=<pwd>:<serverip>:/cvs/repository'.
    However, the command works fine if invoked through the DOS command prompt. I learned that on the DOS prompt the cvs command is provided by CVSNT, whereas in Cygwin it's a different package. Please let me know where I have made a mistake and how it can be corrected! I need cvs to work inside Cygwin for scripting purposes.
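    The mistake is the CVSROOT syntax: the semicolon-delimited form is CVSNT-specific, while the stock cvs shipped with Cygwin expects the classic colon-delimited pserver form. A sketch using the question's own placeholders:
        export CVSROOT=':pserver:<user_name>:<pwd>@<serverip>:/cvs/repository'
        cvs login
        cvs update
    Note that the CVS/Root files written by the WinCVS checkout still contain the CVSNT-style root, so a fresh checkout with the classic root (or editing each CVS/Root file) is needed before cvs stops complaining about it.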

    Read the article

  • Problems configuring an SSH tunnel to a Nexentastor appliance for use with headless Crashplan

    - by Rob Smallshire
    Problem: I am attempting to configure an SSH tunnel to a NexentaStor appliance from either a Windows or Linux computer so that I can connect a CrashPlan Desktop GUI to a headless CrashPlan server running on the Nexenta box, according to these instructions on the CrashPlan support site: Connect to a Headless CrashPlan Desktop. So far, I've failed to get a working SSH tunnel from either a Windows client (using PuTTY) or a Linux client (using command-line SSH). I'm fairly sure the problem is at the receiving end with NexentaStor. A blog article - CrashPlan for Backup on Nexenta - indicates that it could be made to work only "after enabling TCP forwarding in Nexenta in /etc/ssh/sshd_config" - although I'm not sure how to go about that or specifically what I need to do.
    Things I have tried: Ensuring the CrashPlan server on the Nexenta box is listening on port 4243:
        $ netstat -na | grep LISTEN | grep 42
        127.0.0.1.4243    *.*    0    0    131072    0    LISTEN
        *.4242            *.*    0    0     65928    0    LISTEN
    Establishing a tunnel from a Linux host:
        $ ssh -L 4200:localhost:4243 admin@10.0.0.56
    and then, from another terminal on the Linux host, using telnet to verify the tunnel:
        $ telnet localhost 4200
        Trying ::1...
        Connected to localhost.
        Escape character is '^]'.
    with nothing more, although the CrashPlan server should respond with something. From Windows, using PuTTY, I have followed the instructions on the CrashPlan support site to establish an equivalent tunnel, but then telnet on Windows gives me no response at all and the CrashPlan GUI can't connect either. The PuTTY log for the tunnelled connection shows reasonable output:
        ...
        2011-11-18 21:09:57 Opened channel for session
        2011-11-18 21:09:57 Local port 4200 forwarding to localhost:4243
        2011-11-18 21:09:57 Allocated pty (ospeed 38400bps, ispeed 38400bps)
        2011-11-18 21:09:57 Started a shell/command
        2011-11-18 21:10:09 Opening forwarded connection to localhost:4243
    but the telnet localhost 4200 command from Windows does nothing at all - it just waits with a blank terminal. On the NexentaStor server I've examined the /etc/ssh/sshd_config file and everything seems 'normal' - and I've commented out the ListenAddress entries to ensure that I'm listening on all interfaces. How can I establish a tunnel, and how can I verify that it is working?
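    The sshd_config change the blog post alludes to is a single directive; a minimal sketch, assuming root access on the NexentaStor box (NexentaStor is OpenSolaris-based, hence svcadm for the service restart):
        # in /etc/ssh/sshd_config
        AllowTcpForwarding yes
        # then restart the SSH service so it takes effect
        svcadm restart ssh
    When forwarding is disabled server-side, an ssh -v session typically logs "administratively prohibited: open failed" at the moment the tunnel is used - a quick way to verify this is the actual blocker.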

    Read the article

  • Integration of SharePoint 2010 with TFS2010

    - by Kabir Rao
    We have performed the following steps so far: installed TFS2010 10.0.30319.1 (RTM) on Windows Server 2008 R2 Enterprise (app tier) and SQL 2008 SP1 with Cumulative Update 2 on Windows Server 2008 R2 Enterprise (data tier); Reporting Services is installed on the app tier. After this installation worked fine, we installed SharePoint 2010 on the app tier. After installation we followed http://blogs.msdn.com/b/team_foundation/archive/2010/03/06/configuring-sharepoint-server-2010-beta-for-dashboard-compatibility-with-tfs-2010-beta2-rc.aspx for configuration. We are not able to perform the last step described in the link, as the following error occurred:
        TF249063: The following Web service is not available: http://apptier:31254/_vti_bin/TeamFoundationIntegrationService.asmx. This Web service is used for the Team Foundation Server Extensions for SharePoint Products. The underlying error is: The remote server returned an error: (404) Not Found. Verify that the following URL points to a valid SharePoint Web application and that the application is available: http://apptier:31254. If the URL is correct and the Web application is operating normally, verify that a firewall is not blocking access to the Web application.
    We have also noticed that the Documents folder in the team project has a red X. Please help. Thanks upfront.

    Read the article

  • Mplayer no sound when playing some movies

    - by Ivan Peevski
    OK, this is a bit of a strange problem that somehow crept into my system; it used to work fine. Here is the problem as far as I can identify it: when I try to play certain video files with MPlayer, there is no sound. As far as I can tell, it is only an issue with AC3 and DTS soundtracks (using the ffmpeg decoder). MPlayer says:
        ==========================================================================
        Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders
        AUDIO: 48000 Hz, 6 ch, s16le, 1536.0 kbit/33.33% (ratio: 192000->576000)
        Selected audio codec: [ffdca] afm: ffmpeg (FFmpeg DTS)
        ==========================================================================
        [AO_ALSA] Playback open error: Device or resource busy
        Failed to initialize audio driver 'alsa'
        Could not open/initialize audio device -> no sound.
        Audio: no sound
    (It is similar with AC3 sound, but using the ffac3 audio codec.) Trying different audio outputs (-ao oss/pcm/sdl) doesn't fix the problem. The strange thing is that if I play these files directly with ffplay, they work fine, and MPlayer sound with MP3/OGG is fine. My ALSA configuration is standard (no /etc/asound.conf or ~/.asound*).
        OS: Linux Gentoo
        MPlayer: 1.0_rc4_p20100213 (SVN-r30554-4.3.4)
        FFmpeg: 0.5_p20601-r1 (SVN-r20601)
    Any other information I can provide?
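    "Device or resource busy" from AO_ALSA means some process is holding the audio device open. One way to identify the holder, as a sketch (fuser is in the psmisc package):
        fuser -v /dev/snd/* 2>/dev/null
    A hedged explanation for why only AC3/DTS is affected: stereo MP3/OGG playback can go through ALSA's dmix software mixer, while a 6-channel stream may try to open the hardware device directly - which fails if a sound daemon or another application already has it.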

    Read the article

  • openssl client authentication error: tlsv1 alert unknown ca: ... SSL alert number 48

    - by JoJoeDad
    I've generated a certificate using openssl and placed it on the client's machine, but when I try to connect to my server using that certificate, I get the error mentioned in the subject line back from my server. Here's what I've done.
    1) I do a test connect using openssl to see what the acceptable client certificate CA names are for my server, issuing this command from the client machine:
        openssl s_client -connect myupload.mysite.net:443/cgi-bin/posupload.cgi -prexit
    and part of what I get back is as follows:
        Acceptable client certificate CA names
        /C=US/ST=Colorado/L=England/O=Inteliware/OU=Denver Office/CN=Tim Drake/[email protected]
        /C=US/ST=Colorado/O=Inteliware/OU=Denver Office/CN=myupload.mysite.net/[email protected]
    2) Here is what is in the Apache configuration file on the server regarding SSL client authentication:
        SSLCACertificatePath /etc/apache2/certs
        SSLVerifyClient require
        SSLVerifyDepth 10
    3) I generated a self-signed client certificate called "client.pem" using mypos.pem and mypos.key, so when I run this command:
        openssl x509 -in client.pem -noout -issuer -subject -serial
    here is what is returned:
        issuer= /C=US/ST=Colorado/O=Inteliware/OU=Denver Office/CN=myupload.mysite.net/[email protected]
        subject= /C=US/ST=Colorado/O=Inteliware/OU=Denver Office/CN=mlR::mlR/[email protected]
        serial=0E
    (Please note that mypos.pem is in /etc/apache2/certs/ and mypos.key is saved in /etc/apache2/certs/private/.)
    4) I put client.pem on the client machine, and on the client machine I run the following command:
        openssl s_client -connect myupload.mysite.net:443/cgi-bin/posupload.cgi -status -cert client.pem
    and I get this error:
        CONNECTED(00000003)
        OCSP response: no response sent
        depth=1 /C=US/ST=Colorado/L=England/O=Inteliware/OU=Denver Office/CN=Tim Drake/[email protected]
        verify error:num=19:self signed certificate in certificate chain
        verify return:0
        574:error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca:/SourceCache/OpenSSL098/OpenSSL098-47/src/ssl/s3_pkt.c:1102:SSL alert number 48
        574:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:/SourceCache/OpenSSL098/OpenSSL098-47/src/ssl/s23_lib.c:182:
    I'm really stumped as to what I've done wrong. I've searched quite a bit on this error, and people are saying the issuing CA of the client's certificate is not trusted by the server; yet when I look at the issuer of my client certificate, it matches one of the acceptable CAs returned by my server. Can anyone help, please? Thank you in advance.
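    One common cause of "tlsv1 alert unknown ca" with SSLCACertificatePath (as opposed to SSLCACertificateFile) is that Apache locates CA certificates in that directory via hash-named symlinks rather than file names, so the directory must be rehashed after the CA certificate is dropped in. A sketch, assuming mypos.pem is the signing certificate:
        c_rehash /etc/apache2/certs
        apachectl restart
    If c_rehash is not on the PATH, the equivalent done by hand inside that directory is:
        ln -s mypos.pem "$(openssl x509 -hash -noout -in mypos.pem).0"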

    Read the article

  • Kerberos authentication not working for one single domain

    - by Buddy Casino
    We have a strange problem regarding Kerberos authentication with Apache mod_auth_kerb. We use a very simple krb5.conf, where only a single (main) AD server is configured. There are many domains in the forest, and it seems that SSO is working for most of them, except one. I don't know what is special about that domain; the error message that I see in the Apache logs is "Server not found in Kerberos database":
        [Wed Aug 31 14:56:02 2011] [debug] src/mod_auth_kerb.c(1025): [client xx.xxx.xxx.xxx] Using HTTP/[email protected] as server principal for password verification
        [Wed Aug 31 14:56:02 2011] [debug] src/mod_auth_kerb.c(714): [client xx.xxx.xxx.xxx] Trying to get TGT for user [email protected]
        [Wed Aug 31 14:56:02 2011] [debug] src/mod_auth_kerb.c(625): [client xx.xxx.xxx.xxx] Trying to verify authenticity of KDC using principal HTTP/[email protected]
        [Wed Aug 31 14:56:02 2011] [debug] src/mod_auth_kerb.c(640): [client xx.xxx.xxx.xxx] krb5_get_credentials() failed when verifying KDC
        [Wed Aug 31 14:56:02 2011] [error] [client xx.xxx.xxx.xxx] failed to verify krb5 credentials: Server not found in Kerberos database
        [Wed Aug 31 14:56:02 2011] [debug] src/mod_auth_kerb.c(1110): [client xx.xxx.xxx.xxx] kerb_authenticate_user_krb5pwd ret=401 user=(NULL) authtype=(NULL)
    When I try to kinit that user on the machine on which Apache is running, it works. I have also checked that DNS lookups work, including reverse lookup. Can anyone tell me what's going on?
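    "Server not found in Kerberos database" affecting a single domain in a forest often comes down to realm mapping: with only one KDC configured, hosts in the odd domain may not be mapped to the right realm. A hedged sketch of the krb5.conf section worth checking (all names below are placeholders):
        [domain_realm]
            .example.com = EXAMPLE.COM
            .odd-domain.example.com = ODD-DOMAIN.EXAMPLE.COM
    Reverse DNS matters here too: the name of the server is canonicalized via DNS before a realm is chosen, so a PTR record pointing at an unexpected name can send the lookup to the wrong realm even when the mapping itself is correct.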

    Read the article

< Previous Page | 94 95 96 97 98 99 100 101 102 103 104 105  | Next Page >