Search Results

Search found 74153 results on 2967 pages for 'test and set'.


  • A new name for unit tests

    - by Will
    I never used to like unit testing. I always thought it increased the amount of work I had to do. It turns out that's only true in terms of the actual number of lines of code you write, and even that is completely offset by the increase in the number of lines of useful code you can write in an hour with tests and test-driven development. Now I love unit tests, as they let me write useful code that quite often works the first time! (knock on wood)

    I have found that people are reluctant to do unit tests, or to start a project with test-driven development, if they are under strict timelines or in an environment where others don't do it. It's almost a cultural refusal to even try. I think one of the most powerful things about unit testing is the confidence it gives you to undertake refactoring. It also gives newfound hope: I can give my code to someone else to refactor or improve, and if my unit tests still pass, I can use the new version of the library they modified pretty much without fear.

    It's this last aspect of unit testing that I think needs a new name. The unit test is more like a contract of what this code should do, now and in the future. When I hear the word testing, I think of mice in cages, with multiple experiments done on them to see the effectiveness of a compound. That is not what unit testing is. We're not trying out different code to see which is the most effective approach; we're defining what outputs we expect with what inputs. In the mice example, unit tests are more like the definitions of how the universe will work than the experiments done on the mice.

    Am I on crack, or does anyone else see this refusal to do testing, and do they think it's for a similar reason? What reasons do you or others give for not testing? What do you think their motivations are in not unit testing? And as a new name for unit testing that might get past some of the objections, how about jContract? (A bit Java-centric, I know :) Or Unit Contracts?


  • Generic unit test scheduling

    - by Raphink
    Hello, I'm (re)writing a program that does generic unit test scheduling. The current program is a mono-threaded Perl program, but I'm willing to modularize it and parallelize the tests. I'm also considering rewriting it in Python. Here is what I need to do. I have a list of tests, with the following attributes:

    - uri: a URI to test (could be HTTP/HTTPS/SSH/local);
    - depends: an associative array of tests/values that this test depends on;
    - join: a list of DB joins to be added when selecting items to process in this test;
    - depends_db: additional conditions to add to the DB request when selecting items to process in this test.

    The program builds a dependency tree, beginning with the tests that have no dependencies. For each test: a list of items is selected from the database using the conditions (results of depending tests, joins and depends_db); the list of items is sent to the URI (using POST or stdin); the result is retrieved as a YAML file listing the state and comments for the test for each tested item; the results are stored in the DB; the test returns, allowing depending tests to be performed. Finally, the program generates reports (CSV, DB, graphviz) of the performed tests.

    The primary use of this program currently is to test a fleet of machines against services such as backup, DNS, etc. The tests can then be:

    - backup: hosted on the backup machine(s), called through HTTP, checks whether the machines' backups went well;
    - DNS: hosted on the local machine, called via stdin, checks whether the machines' fqdns have a valid DNS entry.

    Does such a tool/module already exist? What would be the best implementation to achieve this (using Perl or Python)?


  • android instrumentation testsuite

    - by siri
    Hi, I have written two test cases in a package com.app.myapp.test. When I try to run them, only one test case gets executed and then the run stops; the other never runs. I have written the following test suite, AllTests.java, in the same package:

        public class AllTests extends TestSuite {
            public static Test suite() {
                return new TestSuiteBuilder(AllTests.class)
                        .includePackages("./src/com.ni.mypaint.test", "./src/com.ni.mpaint.test")
                        .build();
                /* .includeAllPackagesUnderHere()
                   .build(); */
            }
        }

    Are the code and location of this test suite correct?
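
    For comparison, the example in the Android SDK documentation passes Java package names, not filesystem paths, to includePackages. A minimal sketch, assuming the tests really live under the com.app.myapp.test package mentioned above:

        import junit.framework.Test;
        import junit.framework.TestSuite;
        import android.test.suitebuilder.TestSuiteBuilder;

        public class AllTests extends TestSuite {
            // Build a suite from every test class found in the named package.
            public static Test suite() {
                return new TestSuiteBuilder(AllTests.class)
                        .includePackages("com.app.myapp.test")
                        .build();
            }
        }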


  • Ruby on Rails: What are partial hash arguments and full set arguments?

    - by williamjones
    I'm using assert_redirected_to in my unit tests, and I'm receiving this warning: DEPRECATION WARNING: Using assert_redirected_to with partial hash arguments is deprecated. Specify the full set arguments instead. What is a partial hash argument, and what is a full set argument? These aren't terms I've seen used in the Rails community before, and the only relevant results I can find on Google are in reference to this deprecation warning. Here is my code:

        assert_redirected_to :controller => :user, :action => :search

    I also tried:

        assert_redirected_to({:controller => :user, :action => :search})

    I might have guessed that it thinks I'm missing some parameters, but the API documentation explicitly says that not all parameters need to be included: http://rails.rubyonrails.org/classes/ActionController/Assertions/ResponseAssertions.html


  • Set iPhone Style Location Based Alerts On Your Android Device With Google Now

    - by Gopinath
    Location-based alerts on the iPhone are very useful. You can set an alert to pop up as soon as you reach a specific location, like "Pick up milk and eggs" when you're near a grocery store. This feature was missing from Android for a long time, but last week at the Google I/O conference Google released an update to Google Now that supports location-based alerts. To set up a location-based alert:

    1. Launch Google Now.
    2. Type or say "add reminder".
    3. By default it shows the time-based alert interface; switch it to location-based by touching the Location icon.
    4. Set the reminder text, choose a location and touch "Set reminder".
    5. Your alert is now set, and as soon as you are close to the specified location, you'll see an alert on your device.

    This is a nice feature and I've been using it quite often for the past couple of days. There are a couple of things missing from the current version of Google Now location-based alerts: recurring alerts, and the ability to set alerts on leaving a specific location. Location-based alerts cannot recur; you are alerted only once, as soon as you reach the location, and the alert will not repeat the next time you visit it. Let's say you want to be reminded to say hi to a friend's parents whenever you are travelling close to their home. That does not work. The second missing feature is something basic, and somehow Google did not incorporate it in this first iteration. Let's say you are at the office now and you want to set up an alert to pick up flowers when you leave the office. Sounds like a simple use case for location-based alerts, right? But there is no way to set this type of alert. Google Now alerts you as soon as you reach a location, but not when you leave one.

    Do you have an Android device that supports Google Now? If so, what are your thoughts on location-based alerts?


  • 12.04 on Pentium Dual Core with 1GB of RAM running slow

    - by Alex
    Hey, I have a Lenovo ThinkPad laptop with Ubuntu 12.04 installed, and it runs slow. I tried "System Profiler and Benchmark" to test the computer, but the application quits after the first few benchmark tests, before it even gets to the others. So I tried "Hardinfo", installed from the Puppy Linux live CD, and that did the same thing (the apps look just alike). Memory usage isn't the problem on this PC; it's the CPU. Just running the "System Profiler" app that comes with Ubuntu uses about 34% on each core, while idle with nothing running it's 5-10% on each core. I can't really find what the deal is, other than that Ubuntu is a CPU hog, so I'm testing Unity 2D at the moment to see how it goes. If you have any other suggestions, feel free to answer this question. Thanks.


  • Running Built-In Test Simulator with SOA Suite Healthcare 11g in PS4 and PS5

    - by Shub Lahiri, A-Team
    Background

    The SOA Suite for Healthcare Integration pack comes with a pre-installed simulator that can be used as an external endpoint to generate inbound and outbound HL7 traffic on specified MLLP ports. This is a command-line utility that can be very handy when trying to build a complete end-to-end demo within a standalone, closed environment. The ant-based utility accepts the name of a configuration file as the command-line input argument. The format of this configuration file has changed between PS4 and PS5. In PS4, the configuration file was XML based and in PS5, it is name-value property based. The rest of this note highlights these differences and provides samples that can be used to run the first scenario from the product samples set.

    PS4 - Configuration File

    The sample configuration file for PS4 is shown below. The configuration file contains information about the following items:

    - Directory for incoming and outgoing files for the host running SOA Suite Healthcare
    - Polling interval for the directory
    - External endpoint logical names
    - External endpoint server host name and ports
    - Message throughput to be simulated for generating outbound messages
    - Documents to be handled by different endpoints

    A copy of this file can be downloaded from here.

    PS5 - Configuration File

    The corresponding sample configuration file for PS5 is shown below. The configuration file contains similar information about the sample scenario but is not in XML format. It has name-value pairs specified in the form of a properties file. This sample file can be downloaded from here.

    Simulator Configuration

    Before running the simulator, the environment has to be set by defining the proper ANT_HOME and JAVA_HOME. The following extract is taken from a working sample shell script to set the environment. Also, as part of setting the environment, template jndi.properties and logging.properties files can be generated by using the following ant command:

        ant -f ant-b2bsimulator-util.xml b2bsimulator-prop

    Sample jndi.properties and logging.properties files are shown below and can be modified as needed. The jndi.properties file contains information about connectivity to the local WebLogic managed server instance, and the logging.properties file controls the amount of logging generated by the running simulator process.

    Simulator Usage - Start and Stop

    The command syntax to launch the simulator via ant is the same in PS4 and PS5. Only the appropriate configuration file has to be supplied as the command-line argument, for example:

        ant -f ant-b2bsimulator-util.xml b2bsimulatorstart -Dargs="simulator1.hl7-config.xml"

    This will start the simulator, which will keep running to provide an active external endpoint for the SOA Healthcare Integration engine. To stop the simulator, a similar ant command can be used, for example:

        ant -f ant-b2bsimulator-util.xml b2bsimulatorstop


  • Emulating Test::More::done_testing - what is the most idiomatic way?

    - by DVK
    I have to build unit tests in an environment with a very old version of Test::More (perl 5.8, with $Test::More::VERSION being '0.80'), which predates the addition of done_testing(). Upgrading to a newer Test::More is out of the question for practical reasons. And I am trying to avoid using no_plan, since it's generally a bad idea not to catch your unit test dying prematurely. What is the most idiomatic way of running a configurable number of tests, assuming neither no_plan nor done_testing() is used? Details: my unit tests usually take the form of:

        use Test::More;

        my @test_set = (
             [ "Test #1", $param1, $param2, ... ]
            ,[ "Test #2", $param1, $param2, ... ]
            # ,...
        );

        foreach my $test (@test_set) {
            run_test($test);
        }

        sub run_test {
            # $expected_tests += count_tests($test);
            ok(test1($test)) || diag("Test1 failed");
            # ...
        }

    The standard approach of use Test::More tests => 23; or BEGIN { plan tests => 23 } does not work, since both are obviously executed before @test_set is known. My current approach involves making @test_set global and defining it in the BEGIN {} block, as follows:

        use Test::More;

        BEGIN {
            our @test_set = (); # Same set of tests as above
            my $expected_tests = 0;
            foreach my $test (@test_set) {
                $expected_tests += count_tests($test);
            }
            plan tests => $expected_tests;
        }

        our @test_set; # Must re-declare!!! Since the first "our" was in BEGIN's scope :(

        foreach my $test (@test_set) { # Same
            run_test($test);
        }

        sub run_test {} # Same

    I feel this can be done more idiomatically but am not certain how to improve it. Chief among the smells is the duplicate our @test_set declaration, in BEGIN {} and after it.


  • Regular expression test can't decide between true and false (JavaScript)

    - by nw
    I get this behavior in both Chrome (Developer Tools) and Firefox (Firebug). Note the regex test returns alternating true/false values:

        > var re = /.*?\bbl.*\bgr.*/gi;
        undefined
        > re
        /.*?\bbl.*\bgr.*/gi
        > re.test("Blue-Green");
        true
        > re.test("Blue-Green");
        false
        > re.test("Blue-Green");
        true
        > re.test("Blue-Green");
        false

    However, testing the same regex as a literal:

        > /.*?\bbl.*\bgr.*/gi.test("Blue-Green");
        true
        > /.*?\bbl.*\bgr.*/gi.test("Blue-Green");
        true
        > /.*?\bbl.*\bgr.*/gi.test("Blue-Green");
        true
        > /.*?\bbl.*\bgr.*/gi.test("Blue-Green");
        true

    I can't explain this and it's making debugging very difficult. Can anyone explain this behavior?


  • What is a simple C library for a set of integer sets?

    - by conradlee
    I've got to modify a C program, and I need to include a set of unsigned integer sets. That is, I have millions of sets of integers (each containing between 3 and 100 integers), and I need to store them in some structure, let's call it the directory, that can tell me in logarithmic time whether a given integer set already exists in the directory. The only operations that need to be defined on the directory are lookup and insert. This would be easy in languages with built-in support for useful data structures, but I'm a foreigner to C, and looking around on Google (surprisingly) did not answer my question satisfactorily. This project looks about right: http://uthash.sourceforge.net/ but I would need to come up with my own hash key generator. This is a standard, simple problem, so I hope there is a standard and simple solution.
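
    To make the "easy in languages with built-in data structures" remark concrete, here is a minimal sketch in Java (purely illustrative; the question itself still needs a C solution such as the uthash route above). A built-in hash set of sets gives the lookup/insert directory directly, in amortized constant rather than logarithmic time:

        import java.util.HashSet;
        import java.util.Set;

        public class Directory {
            // HashSet hashes a member Set by combining its elements' hash codes
            // and compares sets element-wise, so whole integer sets can be used
            // as members directly.
            private final Set<Set<Integer>> sets = new HashSet<Set<Integer>>();

            // Returns true if the set was not already present.
            public boolean insert(Set<Integer> s) {
                return sets.add(s);
            }

            public boolean lookup(Set<Integer> s) {
                return sets.contains(s);
            }
        }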


  • How do I wrap a repeatable set of elements in jQuery?

    - by Globalz
    In jQuery, how would I go about wrapping a repeatable set of elements with a div? For example, I have:

        img
        h4
        p
        img
        h4
        p
        img
        h4
        p

    I need to wrap each img, h4, p set with a div class="container", so it will look like:

        <div class="container">
            img
            h4
            p
        </div>
        <div class="container">
            img
            h4
            p
        </div>
        <div class="container">
            img
            h4
            p
        </div>

    Any ideas on how I can achieve this? It's driving me nuts! I keep getting nested div.containers! Thanks!


  • A basic load test question

    - by user236131
    I have a very basic load test question. I am running a load test using VSTS 2008, and I have a test rig with a controller + 10 agents. This load test is against a SharePoint farm of mine. The goal of the load test is to find out the resource utilization on the web/app/DB tiers of the farm for a given load scenario. An example of a load scenario is:

    Usage profile: Average collaboration (as defined by SCCP)
    User load: 500 (using a step load pattern: a step of 50 every 2 mins, with a warm-up time of 2 mins for every step)
    Think time: 0
    Load duration: 8 hrs

    Now, the question is: is it fair to expect that metrics like requests/sec, % processor time on the web front end / app / DB, tests/sec, etc. become flat or enter a steady state at some point during the load test? Like I said, the goal is not to create a bottleneck but only to measure the utilization of resources under the above load profile. I am asking this question because I see something different. At one point in the load test, requests/sec becomes more or less flat, but processor utilization on the web/DB servers keeps increasing. After digging through the data a bit, I see that the "tests running" counter also steadily increased over time. So, if I run the load test for more than 8 hrs, % processor may go up further. This way, I don't know what to consider as the load exerted by the load profile. What does this "tests running" counter really signify, and how is it different from tests/sec? Another question: how can I find out why the "tests running" counter shows an increase over time? Thanks for your time.


  • Pass command line arguments to JUnit test case being run programmatically

    - by __nv__
    I am attempting to run a JUnit test from a Java class with:

        JUnitCore core = new JUnitCore();
        core.addListener(new RunListener());
        core.run(classToRun);

    The problem is that my JUnit test requires a database connection that is currently hardcoded in the JUnit test itself. What I am looking for is a way to run the JUnit test programmatically (as above) but pass it a database connection that I create in the Java class that runs the test, rather than hardcoding it within the JUnit class. Basically something like:

        JUnitCore core = new JUnitCore();
        core.addListener(new RunListener());
        core.addParameters(java.sql.Connection);
        core.run(classToRun);

    Then, within classToRun:

        @Test
        public void Test1(Connection dbConnection) {
            Statement st = dbConnection.createStatement();
            ResultSet rs = st.executeQuery("select total from dual");
            rs.next();
            String myTotal = rs.getString("TOTAL");
            // btw my tests are selenium testcases :)
            selenium.isTextPresent(myTotal);
        }

    I know about @Parameters, but it doesn't seem applicable here, as it is more for running the same test case multiple times with differing values. I want all of my test cases to share a database connection that I pass in through a configuration file to my Java client, which then runs those test cases (also passed in through the configuration file). Is this possible? P.S. I understand this seems like an odd way of doing things.
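
    For what it's worth, JUnitCore has no addParameters method, and stock JUnit 4 requires @Test methods to take no arguments, which is why something extra is needed here. One common workaround (a sketch under those assumptions, not an official JUnit facility) is to hand the connection to the test class through a static field before calling run:

        import java.sql.Connection;
        import java.sql.ResultSet;
        import java.sql.Statement;
        import org.junit.Test;
        import org.junit.runner.JUnitCore;
        import org.junit.runner.Result;
        import org.junit.runner.notification.RunListener;

        public class TestDriver {
            // Minimal stand-in for the asker's test class: the connection
            // arrives via a static field, so the @Test method stays no-arg.
            public static class DbTest {
                public static Connection dbConnection;

                @Test
                public void totalQueryRuns() throws Exception {
                    Statement st = dbConnection.createStatement();
                    ResultSet rs = st.executeQuery("select total from dual");
                    rs.next();
                    String myTotal = rs.getString("TOTAL");
                    // selenium.isTextPresent(myTotal) would go here.
                }
            }

            public static void main(String[] args) throws Exception {
                DbTest.dbConnection = openConnectionFromConfig();
                JUnitCore core = new JUnitCore();
                core.addListener(new RunListener());
                Result result = core.run(DbTest.class);
                System.out.println("Failures: " + result.getFailureCount());
            }

            // Hypothetical helper: build the Connection from the configuration
            // file, e.g. with java.sql.DriverManager.getConnection(...).
            private static Connection openConnectionFromConfig() {
                throw new UnsupportedOperationException("read settings from config");
            }
        }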


  • Homoscedasticity test for Two-Way ANOVA

    - by aL3xa
    I've been using var.test and bartlett.test to check basic ANOVA assumptions, among others homoscedasticity (homogeneity, i.e. equality, of variances). The procedure is quite simple for One-Way ANOVA:

        bartlett.test(x ~ g) # where x is numeric and g is a factor
        var.test(x ~ g)

    But for 2x2 tables, i.e. Two-Way ANOVAs, I want to do something like this:

        bartlett.test(x ~ c(g1, g2)) # or with a list; see below:
        var.test(x ~ list(g1, g2))

    Of course, ANOVA assumptions can be checked with graphical procedures, but what about "an arithmetic option"? Is that at all manageable? How do you test homoscedasticity in Two-Way ANOVA?


  • JUnit Best Practice: Different Fixtures for each @Test

    - by Juri Glass
    Hi, I understand that there are @Before and @BeforeClass, which are used to define fixtures for the @Test methods. But what should I use if I need a different fixture for each @Test? Should I define the fixture inside the @Test? Should I create a test class for each @Test? I am asking for best practices here, since neither solution is clean in my opinion. With the first solution, I would be testing the initialization code. And with the second solution I would break the "one test class for each class" pattern.
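
    A minimal sketch of the first option, building the fixture inline per test (a hypothetical example class, not from the question):

        import static org.junit.Assert.assertEquals;
        import java.util.ArrayDeque;
        import java.util.Deque;
        import org.junit.Test;

        public class StackTest {
            @Test
            public void popReturnsLastPush() {
                // Fixture built inline: only this test needs a one-element stack.
                Deque<Integer> stack = new ArrayDeque<Integer>();
                stack.push(42);
                assertEquals(Integer.valueOf(42), stack.pop());
            }

            @Test
            public void newStackIsEmpty() {
                // A different inline fixture for a different scenario.
                Deque<Integer> stack = new ArrayDeque<Integer>();
                assertEquals(0, stack.size());
            }
        }

    When the per-test setup stays this small, inlining it is usually considered acceptable; @Before remains the better home for whatever part of the fixture all tests share.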


  • Unit Testing in QTestLib - running single test / tests in class / all tests

    - by Dave
    I'm just starting to use QTestLib. I have gone through the manual and tutorial, and although I understand how to create tests, I'm just not getting how to make those tests convenient to run. My unit test background is NUnit and MSTest. In those environments it was trivial (using a GUI, at least) to alternate between running a single test, all tests in a single test class, or all tests in the entire project, just by clicking the right button. All I'm seeing in QTestLib is that either you use the QTEST_MAIN macro to run the tests in a single class, then compile and test each file separately; or you use QTest::qExec() in main() to define which objects to test, and then manually change that and recompile when you want to add or remove test classes. I'm sure I'm missing something. I'd like to be able to easily:

    - Run a single test method
    - Run the tests in an entire class
    - Run all tests

    Any of those would call the appropriate setup/teardown functions.


  • Perl unit test - start a tcp server & continue

    - by John
    I am trying to write a unit test for a client-server application. To test the client, in my unit test I want to first start my TCP server (which is itself another Perl file). I tried to start the TCP server by forking:

        if (! fork()) {
            system("$^X server.pl") == 0
                or die "couldn't start server";
        }

    So when I call "make test" after "perl Makefile.PL", this test starts and I can see the server starting, but after that the unit test just hangs. So I guess I need to start this server in the background; I tried the "&" at the end to force it to start in the background and let the test continue, but I still couldn't succeed. What am I doing wrong? Thanks.


  • MS Test : How do I enforce exception message with ExpectedException attribute

    - by CRice
    I thought these two tests should behave identically; in fact I wrote the test in my project using MS Test, only to find out now that it does not respect the expected message in the same way that NUnit does. NUnit (fails):

        [Test, ExpectedException(typeof(System.FormatException), ExpectedMessage = "blah")]
        public void Validate()
        {
            int.Parse("dfd");
        }

    MS Test (passes):

        [TestMethod, ExpectedException(typeof(System.FormatException), "blah")]
        public void Validate()
        {
            int.Parse("dfd");
        }

    No matter what message I give the MS Test version, it passes. Is there any way to get the MS Test to fail if the message is not right? Can I even create my own exception attribute? I would rather not have to write a try/catch block for every test where this occurs.


  • Maven - Selenium - Possible to run only one test

    - by Jonas Söderström
    Hi, we are using JUnit - Selenium for our web tests. We use Maven to start them and build a Surefire report. The test suite is pretty large and takes a while to run, and sometimes single tests fail because the browser won't start. I want to be able to run a SINGLE test using Maven so I can retest the tests that fail and update the report. I can use

        mvn test -Dtest=TESTCLASSNAME

    to run all the tests in one test class, but this is not good enough, since it takes about 10 minutes to run all the tests in our most complicated test classes, and it's very likely that some other test will fail (because the browser won't start) and mess up my report. I know I can run one test from Eclipse, but that is not what I am looking for. Any help on this would be very appreciated.
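
    If the Surefire version in use is recent enough (2.7.3 or later, if I recall correctly), a single test method can be selected with the ClassName#methodName form, which would retest exactly one failing test:

        mvn test -Dtest=TESTCLASSNAME#testMethodName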


  • Unit Test this - Simple method but don't know what's to test!

    - by user309705
    A very simple method, but I don't know what to test! I'd like to test this method in the business logic layer; the _dataAccess apparently is from the data layer.

        public DataSet GetLinksByAnalysisId(int analysisId)
        {
            DataSet result = new DataSet();
            result = _dataAccess.SelectAnalysisLinksOverviewByAnalysisId(analysisId);
            return result;
        }

    All I'm testing, really, is that _dataAccess.SelectAnalysisLinksOverviewByAnalysisId() gets called! Here's my test code (using Rhino Mocks):

        [TestMethod]
        public void Test()
        {
            var _dataAccess = MockRepository.GenerateMock<IDataAccess>();
            _dataAccess.Expect(x => x.SelectAnalysisLinksOverviewByAnalysisId(_settings.UserName, 0, out dateExecuted));
            var analysisBusinessLogic = new AnalysisLinksBusinessLogic(_dataAccess);
            analysisBusinessLogic.GetLinksByAnalysisId(_settings, 0);
            _dataAccess.VerifyAllExpectations();
        }

    If you were writing the test for this method, what would you test against? Many thanks!


  • Rails test across multiple environments

    - by DSimon
    Is there some way to change Rails environments mid-way through a test? Or, alternately, what would be the right way to set up a test suite that can start up Rails in one environment, run the first half of my test in it, then restart Rails in another environment to finish the test? The two environments have separate databases. Some necessary context: I'm writing a Rails plugin that allows multiple installations of a Rails app to communicate with each other with user assistance, so that a user without Internet access can still use the app. They'll run a local version of an app, and upload their work to the online app by saving a file to a thumbdrive and taking it to an Internet cafe. The plugin adds two special environments to Rails: "offline-production" and "offline-test". I want to write functional tests that involve both the "test" and "offline-test" environments, to represent the main online version of the app and the local offline version of the app respectively.


  • adding org.springframework.test package to spring-2.5.6-SEC01.jar in netbeans

    - by John
    This is probably very simple, but I can't get it to work. I want to use the AbstractDependencyInjectionSpringContextTests class, which is in the org.springframework.test package. This package is not included in NetBeans' Spring library, so I want to add it. What I have tried so far:

    1. Copy and paste the "test" directory (downloaded from Spring) into NetBeans' spring-2.5.6-SEC01.jar file (copying it into the org.springframework directory in that jar, so I can import org.springframework.test). If I go to project/libraries in NetBeans it is there, but when I try to import org.springframework.test.*; the autocompletion doesn't offer the test directory from the org.springframework package.

    2. Create a new library which points to the "test" directory and add it to the project. As there is no jar file in "test", I'm not sure what path I should use to import it.

    I'm pretty sure this is something very simple, but I'm still a novice and can't figure it out.


  • Cannot get principal id on my Spock test

    - by Ant's
    I have a controller action like this:

        @Secured(['ROLE_USER','IS_AUTHENTICATED_FULLY'])
        def userprofile() {
            def user = User.get(springSecurityService.principal.id)
            params.id = user.id
            redirect(action: "show", params: params)
        }

    I want to test the above controller in Spock, so I wrote a test like this:

        def 'userProfile test'() {
            setup:
            mockDomain(User, [new User(username: "amtoasd", password: "blahblah")])

            when:
            controller.userprofile()

            then:
            response.redirectUrl == "/user/show/1"
        }

    When I run my test, it fails with this error message:

        java.lang.NullPointerException: Cannot get property 'principal' on null object
        at mnm.schedule.UserController.userprofile(UserController.groovy:33)

    And in the case of an integration test:

        class UserSpec extends IntegrationSpec {
            def springSecurityService

            def 'userProfile test'() {
                setup:
                def userInstance = new User(username: "antoaravinth", password: "secrets").save()
                def userInstance2 = new User(username: "antoaravinthas", password: "secrets").save()
                def usercontroller = new UserController()
                usercontroller.springSecurityService = springSecurityService

                when:
                usercontroller.userprofile()

                then:
                response.redirectUrl == "/user/sho"
            }
        }

    I get the same error as well. What went wrong? Thanks in advance.


  • JUnit test that creates other tests

    - by Benju
    Normally I would have one JUnit test that shows up in my integration server of choice as one test that passes or fails (in this case I use TeamCity). What I need for this specific test is the ability to loop through a directory structure, testing that our data files can all be parsed without throwing an exception. Because we have 30,000+ files that take 1-5 seconds each to parse, this test will be run in its own suite. The problem is that I need a way to have one piece of code run as one JUnit test per file, so that if 12 files out of 30,000 fail, I can see which 12 failed, not just that one test failed, threw a RuntimeException and stopped. I realize that this is not a true "unit" test way of doing things, but this simulation is very important to make sure that our content providers are kept in check and do not check in invalid files. Any suggestions?
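
    One way to get one reported test per file with plain JUnit 4 is the Parameterized runner: each file becomes its own test instance, so individual failures show up separately in TeamCity. A sketch, in which the data root path and the Parser class are placeholders for whatever the real project uses:

        import java.io.File;
        import java.util.ArrayList;
        import java.util.Collection;
        import org.junit.Test;
        import org.junit.runner.RunWith;
        import org.junit.runners.Parameterized;
        import org.junit.runners.Parameterized.Parameters;

        @RunWith(Parameterized.class)
        public class DataFileParseTest {
            private final File file;

            public DataFileParseTest(File file) {
                this.file = file;
            }

            // One Object[] per file; JUnit creates one test instance for each.
            @Parameters
            public static Collection<Object[]> files() {
                Collection<Object[]> params = new ArrayList<Object[]>();
                collect(new File("/data/content"), params);  // assumed data root
                return params;
            }

            private static void collect(File dir, Collection<Object[]> params) {
                for (File f : dir.listFiles()) {
                    if (f.isDirectory()) {
                        collect(f, params);  // walk the directory structure
                    } else {
                        params.add(new Object[] { f });
                    }
                }
            }

            @Test
            public void parsesWithoutException() throws Exception {
                Parser.parse(file);  // hypothetical parser entry point; must not throw
            }
        }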


  • SSSD Authentication

    - by user24089
    I just built a test server running openSUSE 12.1 and am trying to learn how to configure sssd, but am not sure where to begin looking for why my configuration will not let me authenticate.

        server:/etc/sssd # cat sssd.conf
        [sssd]
        config_file_version = 2
        reconnection_retries = 3
        sbus_timeout = 30
        services = nss,pam
        domains = test.local

        [nss]
        filter_groups = root
        filter_users = root
        reconnection_retries = 3

        [pam]
        reconnection_retries = 3

        # Section created by YaST
        [domain/mose.cc]
        access_provider = ldap
        ldap_uri = ldap://server.test.local
        ldap_search_base = dc=test,dc=local
        ldap_schema = rfc2307bis
        id_provider = ldap
        ldap_user_uuid = entryuuid
        ldap_group_uuid = entryuuid
        ldap_id_use_start_tls = True
        enumerate = False
        cache_credentials = True
        chpass_provider = krb5
        auth_provider = krb5
        krb5_realm = TEST.LOCAL
        krb5_kdcip = server.test.local

        server:/etc # cat ldap.conf
        base dc=test,dc=local
        bind_policy soft
        pam_lookup_policy yes
        pam_password exop
        nss_initgroups_ignoreusers root,ldap
        nss_schema rfc2307bis
        nss_map_attribute uniqueMember member
        ssl start_tls
        uri ldap://server.test.local
        ldap_version 3
        pam_filter objectClass=posixAccount

        server:/etc # cat nsswitch.conf
        passwd: compat sss
        group: files sss
        hosts: files dns
        networks: files dns
        services: files
        protocols: files
        rpc: files
        ethers: files
        netmasks: files
        netgroup: files
        publickey: files
        bootparams: files
        automount: files ldap
        aliases: files
        shadow: compat

        server:/etc # cat krb5.conf
        [libdefaults]
        default_realm = TEST.LOCAL
        clockskew = 300

        [realms]
        TEST.LOCAL = {
            kdc = server.test.local
            admin_server = server.test.local
            database_module = ldap
            default_domain = test.local
        }

        [logging]
        kdc = FILE:/var/log/krb5/krb5kdc.log
        admin_server = FILE:/var/log/krb5/kadmind.log
        default = SYSLOG:NOTICE:DAEMON

        [dbmodules]
        ldap = {
            db_library = kldap
            ldap_kerberos_container_dn = cn=krbContainer,dc=test,dc=local
            ldap_kdc_dn = cn=Administrator,dc=test,dc=local
            ldap_kadmind_dn = cn=Administrator,dc=test,dc=local
            ldap_service_password_file = /etc/openldap/ldap-pw
            ldap_servers = ldaps://server.test.local
        }

        [domain_realm]
        .test.local = TEST.LOCAL

        [appdefaults]
        pam = {
            ticket_lifetime = 1d
            renew_lifetime = 1d
            forwardable = true
            proxiable = false
            minimum_uid = 1
            clockskew = 300
            external = sshd
            use_shmem = sshd
        }

    If I log onto the server as root I can su into an LDAP user, but if I try to log in on the local console or via ssh remotely, I am unable to authenticate. getent doesn't show the LDAP entries for users; I'm not sure if I need to look at LDAP, nsswitch, or what:

        server:~ # ssh localhost -l test
        Password:
        Password:
        Password:
        Permission denied (publickey,keyboard-interactive).

        server:~ # su test
        test@server:/etc> id
        uid=1000(test) gid=100(users) groups=100(users)

        server:~ # tail /var/log/messages
        Nov 24 09:36:44 server login[14508]: pam_sss(login:auth): system info: [Client not found in Kerberos database]
        Nov 24 09:36:44 server login[14508]: pam_sss(login:auth): authentication failure; logname=LOGIN uid=0 euid=0 tty=/dev/ttyS1 ruser= rhost= user=test
        Nov 24 09:36:44 server login[14508]: pam_sss(login:auth): received for user test: 4 (System error)
        Nov 24 09:36:44 server login[14508]: FAILED LOGIN SESSION FROM /dev/ttyS1 FOR test, System error

        server:~ # vi /etc/pam.d/common-auth
        auth required pam_env.so
        auth sufficient pam_unix2.so
        auth required pam_sss.so use_first_pass

        server:~ # vi /etc/pam.d/sshd
        auth requisite pam_nologin.so
        auth include common-auth
        account requisite pam_nologin.so
        account include common-account
        password include common-password
        session required pam_loginuid.so
        session include common-session
        session optional pam_lastlog.so silent noupdate showfailed

