Search Results

Search found 1220 results on 49 pages for 'nathan pk'.

Page 2/49 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Access - how to derive a field upon entry of a record

    - by jonos
    I have an Access database for a media rental company which includes the following tables, among others:
    LOAN: customer_id (pk), loan_datetimeLeant (pk), loan_dateReturned
    LOAN_ITEMS: customer_id (pk), loan_datetimeLeant (pk), item_id (pk), loanItem_cost
    ITEM: item_id (pk), product_id, item_availability
    PRODUCT: product_id (pk), product_name, product_type
    MEDIA_COST: product_type (pk), product_cost
    So basically the 'product type' (DVD, VHS, etc.) determines the cost, and the 'product id' determines which movie, platform, etc. each item is. My question is: when creating a form for the Loan_Items table, how can I populate the loanItem_cost field (so it is stored) whenever a new Loan_Item record is added? I need to store it because the cost of the item (product_cost) may change over time, and I'd like to record what the customer paid at the time the loan was made. Thanks in advance.
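
    A minimal Access-SQL sketch of the kind of insert the form could run; the literal customer and item values stand in for the form's current controls, and the table and field names are taken from the question:

        INSERT INTO LOAN_ITEMS (customer_id, loan_datetimeLeant, item_id, loanItem_cost)
        SELECT 123, Now(), i.item_id, mc.product_cost
        FROM (ITEM AS i
              INNER JOIN PRODUCT AS p ON p.product_id = i.product_id)
             INNER JOIN MEDIA_COST AS mc ON mc.product_type = p.product_type
        WHERE i.item_id = 456;

    In practice the same lookup is often done in the form's Before Insert event with DLookup, copying the current product_cost into loanItem_cost so the price paid is frozen at loan time.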

    Read the article

  • Is it good practice to keep two related tables (using auto_increment PKs) at the same max auto_increment ID when table1 is modified?

    - by Tum
    This question is about good design practice in programming. Consider two interrelated tables:
    Table1: textID - text (1 - love.., 2 - men..., ...)
    Table2: rID - textID (1 - 1, 2 - 2, ...)
    Note: in Table1, textID is an auto_increment primary key; in Table2, rID is an auto_increment primary key and textID is a foreign key. The relationship is that one rID has exactly one textID, but one textID can have several rIDs. So when table1 is modified, table2 should be updated accordingly. Now a fictitious example: you build a very complicated system, and when you modify a record in table1 you need to keep track of the related record in table2. You could do this two ways. Option 1: when you modify a record in table1, also modify the related record in table2. This can be quite hard to program, especially in a very complicated system. Option 2: instead of modifying the related record in table2, delete the old record in table2 and insert a new one. This is easier to program. For example, with option 2, after you modify records 1, 2, 3, ..., 100 in table1, table2 will look like this: rID - textID: 101 - 1, 102 - 2, ..., 200 - 100. This means the max auto_increment ID in table1 is still 100, but the max auto_increment ID in table2 has already reached 200. What if the user makes many modifications? Could table2 run out of IDs? We could use BIGINT, but would that make the app slower? Note: if you spend time programming updates to table2 whenever table1 is modified, it is hard and therefore error prone; if you just clear the old rows and insert new ones into table2, it is much easier to program, so the program is simpler and less error prone. So, is it good practice to keep two related tables (using auto_increment PKs) at the same max auto_increment ID when table1 is modified?
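
    A small MySQL-flavored sketch of the two options discussed, assuming the two-column table2 above; option 1 reuses the existing rID, while option 2 consumes a fresh auto_increment value on every change:

        -- Option 1: modify the related row in place; rID keeps its value
        UPDATE table2 SET textID = 1 WHERE rID = 1;

        -- Option 2: delete and re-insert; each cycle burns a new auto_increment rID
        DELETE FROM table2 WHERE textID = 1;
        INSERT INTO table2 (textID) VALUES (1);

    For scale, an unsigned BIGINT auto_increment column tops out at 18,446,744,073,709,551,615, so exhausting it through deletes and re-inserts is not a practical concern, and the wider key has no meaningful speed impact for most workloads.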

    Read the article

  • Adding a new entity gives me an error: EntityCommandCompilationException was unhandled by user code

    - by programmerist
    i have 5 tables in started projects. if i adds new table (Urun enttiy) writing below codes: project.BAL : public static List<Urun> GetUrun() { using (GenoTipSatisEntities genSatisUrunCtx = new GenoTipSatisEntities()) { ObjectQuery<Urun> urun = genSatisUrunCtx.Urun; return urun.ToList(); } } if i receive data form BAL in UI.aspx: using project.BAL; namespace GenoTip.Web.ContentPages.Satis { public partial class SatisUrun : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { if (!IsPostBack) { FillUrun(); } } void FillUrun() { ddlUrun.DataSource = SatisServices.GetUrun(); ddlUrun.DataValueField = "ID"; ddlUrun.DataTextField = "Ad"; ddlUrun.DataBind(); } } } i added URun later. error appears ToList method: EntityCommandCompilationException was unhandled bu user code error Detail: Error 1 Error 3007: Problem in Mapping Fragments starting at lines 659, 873: Non-Primary-Key column(s) [UrunID] are being mapped in both fragments to different conceptual side properties - data inconsistency is possible because the corresponding conceptual side properties can be independently modified. C:\Users\pc\Desktop\GenoTip.Satis\GenoTip.DAL\ModelSatis.edmx 660 15 GenoTip.DAL Error 2 Error 3012: Problem in Mapping Fragments starting at lines 659, 873: Data loss is possible in FaturaDetay.UrunID. An Entity with Key (PK) will not round-trip when: (PK does NOT play Role 'FaturaDetay' in AssociationSet 'FK_FaturaDetay_Urun' AND PK is in 'FaturaDetay' EntitySet) C:\Users\pc\Desktop\GenoTip.Satis\GenoTip.DAL\ModelSatis.edmx 874 11 GenoTip.DAL Error 3 Error 3012: Problem in Mapping Fragments starting at lines 659, 873: Data loss is possible in FaturaDetay.UrunID. An Entity with Key (PK) will not round-trip when: (PK is in 'FaturaDetay' EntitySet AND PK does NOT play Role 'FaturaDetay' in AssociationSet 'FK_FaturaDetay_Urun' AND Entity.UrunID is not NULL) C:\Users\pc\Desktop\GenoTip.Satis\GenoTip.DAL\ModelSatis.edmx 660 15 GenoTip.DAL Error 4 Error 3007: Problem in Mapping Fragments starting at lines 748, 879: Non-Primary-Key column(s) [UrunID] are being mapped in both fragments to different conceptual side properties - data inconsistency is possible because the corresponding conceptual side properties can be independently modified. C:\Users\pc\Desktop\GenoTip.Satis\GenoTip.DAL\ModelSatis.edmx 749 15 GenoTip.DAL Error 5 Error 3012: Problem in Mapping Fragments starting at lines 748, 879: Data loss is possible in Satis.UrunID. An Entity with Key (PK) will not round-trip when: (PK does NOT play Role 'Satis' in AssociationSet 'FK_Satis_Urun' AND PK is in 'Satis' EntitySet) C:\Users\pc\Desktop\GenoTip.Satis\GenoTip.DAL\ModelSatis.edmx 880 11 GenoTip.DAL Error 6 Error 3012: Problem in Mapping Fragments starting at lines 748, 879: Data loss is possible in Satis.UrunID. An Entity with Key (PK) will not round-trip when: (PK is in 'Satis' EntitySet AND PK does NOT play Role 'Satis' in AssociationSet 'FK_Satis_Urun' AND Entity.UrunID is not NULL) C:\Users\pc\Desktop\GenoTip.Satis\GenoTip.DAL\ModelSatis.edmx 749 15 GenoTip.DAL

    Read the article

  • Generate MERGE statements from a table

    - by Bill Graziano
    We have a requirement to build a test environment where certain tables get reset from production every night.  These are mainly lookup tables.  I played around with all kinds of fancy solutions and finally settled on a series of MERGE statements.  And being lazy I didn’t want to write them myself.  The stored procedure below will generate a MERGE statement for the table you pass it.  If you have identity values it populates those properly.  You need to have primary keys on the table for the joins to be generated properly.  The only thing hard coded is the source database.  You’ll need to update that for your environment.  We actually used a linked server in our situation. CREATE PROC dba_GenerateMergeStatement (@table NVARCHAR(128) )ASset nocount on; declare @return int;PRINT '-- ' + @table + ' -------------------------------------------------------------'--PRINT 'SET NOCOUNT ON;--'-- Set the identity insert on for tables with identitiesselect @return = objectproperty(object_id(@table), 'TableHasIdentity')if @return = 1 PRINT 'SET IDENTITY_INSERT [dbo].[' + @table + '] ON; 'declare @sql varchar(max) = ''declare @list varchar(max) = '';SELECT @list = @list + [name] +', 'from sys.columnswhere object_id = object_id(@table)SELECT @list = @list + [name] +', 'from sys.columnswhere object_id = object_id(@table)SELECT @list = @list + 's.' + [name] +', 'from sys.columnswhere object_id = object_id(@table)-- --------------------------------------------------------------------------------PRINT 'MERGE [dbo].[' + @table + '] AS t'PRINT 'USING (SELECT * FROM [source_database].[dbo].[' + @table + ']) as s'-- Get the join columns ----------------------------------------------------------SET @list = ''select @list = @list + 't.[' + c.COLUMN_NAME + '] = s.[' + c.COLUMN_NAME + '] AND 'from INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk , INFORMATION_SCHEMA.KEY_COLUMN_USAGE cwhere pk.TABLE_NAME = @tableand CONSTRAINT_TYPE = 'PRIMARY KEY'and c.TABLE_NAME = pk.TABLE_NAMEand c.CONSTRAINT_NAME = pk.CONSTRAINT_NAMESELECT @list = LEFT(@list, LEN(@list) -3)PRINT 'ON ( ' + @list + ')'-- WHEN MATCHED ------------------------------------------------------------------PRINT 'WHEN MATCHED THEN UPDATE SET'SELECT @list = '';SELECT @list = @list + ' [' + [name] + '] = s.[' + [name] +'],'from sys.columnswhere object_id = object_id(@table)-- don't update primary keysand [name] NOT IN (SELECT [column_name] from INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk , INFORMATION_SCHEMA.KEY_COLUMN_USAGE c where pk.TABLE_NAME = @table and CONSTRAINT_TYPE = 'PRIMARY KEY' and c.TABLE_NAME = pk.TABLE_NAME and c.CONSTRAINT_NAME = pk.CONSTRAINT_NAME)-- and don't update identity columnsand columnproperty(object_id(@table), [name], 'IsIdentity ') = 0 --print @list PRINT left(@list, len(@list) -3 )-- WHEN NOT MATCHED BY TARGET ------------------------------------------------PRINT ' WHEN NOT MATCHED BY TARGET THEN';-- Get the insert listSET @list = ''SELECT @list = @list + '[' + [name] +'], 'from sys.columnswhere object_id = object_id(@table)SELECT @list = LEFT(@list, LEN(@list) - 1)PRINT ' INSERT(' + @list + ')'-- get the values listSET @list = ''SELECT @list = @list + 's.[' +[name] +'], 'from sys.columnswhere object_id = object_id(@table)SELECT @list = LEFT(@list, LEN(@list) - 1)PRINT ' VALUES(' + @list + ')'-- WHEN NOT MATCHED BY SOURCEprint 'WHEN NOT MATCHED BY SOURCE THEN DELETE; 'PRINT ''PRINT 'PRINT ''' + @table + ': '' + CAST(@@ROWCOUNT AS VARCHAR(100));';PRINT ''-- Set the identity insert OFF for tables with identitiesselect @return = 
objectproperty(object_id(@table), 'TableHasIdentity')if @return = 1 PRINT 'SET IDENTITY_INSERT [dbo].[' + @table + '] OFF; 'PRINT ''PRINT 'GO'PRINT '';
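
    A possible usage sketch, with a lookup table name of your own substituted in; because the procedure PRINTs the generated script, the MERGE statement appears on the Messages tab ready to copy into the nightly job:

        EXEC dba_GenerateMergeStatement @table = N'StateCode';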

    Read the article

  • T-SQL - find differences for PKs in a single table (self join?)

    - by Pace
    Hi experts, my situation is this: I have a table of products with a PK "Parent", each of which has "Components". The data looks something like this:
    Parent (PK) - Component
    Car1 - Wheel, Car1 - Tyre, Car1 - Roof
    Car2 - Alloy, Car2 - Tyre, Car2 - Roof
    Car3 - Alloy, Car3 - Tyre, Car3 - Roof, Car3 - Leather Seats
    Now what I want is a query that I can feed two codes into and see the differences. For example, if I feed in "Car1", "Car2" it should return something like:
    Car1 - Wheel
    Car2 - Alloy
    as that is the difference between the two. If I fed in "Car1", "Car3" I would expect:
    Car1 - Wheel
    Car3 - Alloy
    Car3 - Leather Seats
    Your help with this would be greatly appreciated.
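
    A hedged T-SQL sketch of one way to get this difference, assuming a table named Products(Parent, Component) as above (the inline DECLARE initialization needs SQL Server 2008 or later); it keeps each row whose component appears under one of the two chosen parents but not the other:

        DECLARE @a varchar(20) = 'Car1', @b varchar(20) = 'Car2';

        SELECT p.Parent, p.Component
        FROM Products AS p
        WHERE p.Parent IN (@a, @b)
          AND NOT EXISTS (SELECT 1
                          FROM Products AS q
                          WHERE q.Component = p.Component
                            AND q.Parent IN (@a, @b)
                            AND q.Parent <> p.Parent);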

    Read the article

  • If I drop my clustered PK and add a new one, what order will my rows be in?

    - by stack
    In SQL Server, I'm looking at TableA, which currently has a uniqueidentifier clustered primary key. The GUID has no meaning in any context. (I'll give you a second to clean up your keyboard and monitor and set down the soda.) I'd like to drop that primary key and add a new unique integer primary key to the table. My question is this: when I drop the index, modify the column from uniqueidentifier to int, and add the new clustered unique primary key to the modified column, will the new PK values be in the order of insertion into the table, or will they be in some other order? Is this the right way to go here? Will this work? (I'm kind of a noobkin with regard to table creation/modification.)
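
    On the ordering question: a clustered primary key physically orders the rows by the new key, and when an identity column is added to an existing table, SQL Server gives no guarantee that the numbers it assigns follow the original insertion order (nothing in the table records that order once the rows are there; the values typically follow the current physical GUID order instead). A hedged sketch of the usual sequence, assuming the table and constraint are named dbo.TableA and PK_TableA; note that a uniqueidentifier column cannot simply be altered to int, so a new identity column is added instead:

        ALTER TABLE dbo.TableA DROP CONSTRAINT PK_TableA;            -- drop the old clustered PK; the table becomes a heap
        ALTER TABLE dbo.TableA ADD Id int IDENTITY(1,1) NOT NULL;    -- existing rows are numbered, but not necessarily in insert order
        ALTER TABLE dbo.TableA ADD CONSTRAINT PK_TableA_Id PRIMARY KEY CLUSTERED (Id);
        -- the old uniqueidentifier column can be dropped once nothing references it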

    Read the article

  • Getting field of type bytea in helper table when using GenerationType.IDENTITY

    - by dtrunk
    I'm creating my db scheme using Hibernate. There's a Table called "tbl_articles" and another one called "tbl_categories". To have a n-n relationship a helper table ("tbl_articles_categories") is needed. Here are all necessary Entities: @Entity @Table( name = "tbl_articles" ) public class Article implements Serializable { private static final long serialVersionUID = 1L; @Id @Column( nullable = false ) @GeneratedValue( strategy = GenerationType.IDENTITY ) private Integer id; // other fields... public Integer getId() { return id; } public void setId( Integer id ) { this.id = id; } // other fields... } @Entity @Table( name = "tbl_categories" ) public class Category implements Serializable { private static final long serialVersionUID = 1L; @Id @Column( nullable = false ) @GeneratedValue( strategy = GenerationType.IDENTITY ) private Integer id; // other fields public Integer getId() { return id; } public void setId( Integer id ) { this.id = id; } // other fields... } @Entity @Table( name = "tbl_articles_categories" ) @AssociationOverrides({ @AssociationOverride( name = "pk.article", joinColumns = @JoinColumn( name = "article_id" ) ), @AssociationOverride( name = "pk.category", joinColumns = @JoinColumn( name = "category_id" ) ) }) public class ArticleCategory { private ArticleCategoryPK pk = new ArticleCategoryPK(); public void setPk( ArticleCategoryPK pk ) { this.pk = pk; } @EmbeddedId public ArticleCategoryPK getPk() { return pk; } @Transient public Article getArticle() { return pk.getArticle(); } public void setArticle( Article article ) { pk.setArticle( article ); } @Transient public Category getCategory() { return pk.getCategory(); } public void setCategory( Category category ) { pk.setCategory( category ); } } @Embeddable public class ArticleCategoryPK implements Serializable { private static final long serialVersionUID = 1L; @ManyToOne @ForeignKey( name = "tbl_articles_categories_fkey_article_id" ) private Article article; @ManyToOne @ForeignKey( name = "tbl_articles_categories_fkey_category_id" ) private Category category; public ArticleCategoryPK( Article article, Category category ) { setArticle( article ); setCategory( category ); } public ArticleCategoryPK() { } public Article getArticle() { return article; } public void setArticle( Article article ) { this.article = article; } public Category getCategory() { return category; } public void setCategory( Category category ) { this.category = category; } } Now, I'm getting a serial type what I wanted in my articles table as well as in my categories table. But looking into my helper table, there aren't the expected fields article_id and category_id each of type integer - instead there are article and category of type bytea. What's wrong here? EDIT: Sorry, forgot to mention that I'm using PostgreSQL.
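
    For reference, a sketch of the PostgreSQL DDL the mapping is expected to produce for the join table, based on the integer column names the question asks for (it assumes the generated id columns on the two entity tables are named id):

        CREATE TABLE tbl_articles_categories (
            article_id  integer NOT NULL REFERENCES tbl_articles (id),
            category_id integer NOT NULL REFERENCES tbl_categories (id),
            PRIMARY KEY (article_id, category_id)
        );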

    Read the article

  • Does Microsoft Access use the PK fields for anything?

    - by chrismay
    OK, this is going to sound strange, but I have inherited an app that is an Access front end with a SQL Server back end. I am in the process of writing a new front end for it, but we need to continue using the Access front end for a while even after we deploy my new front end, for reasons I won't go into. So both the existing Access app and my new app will need to be able to access and work with the data. The problem is that the database design is a nightmare. For example, some simple parent-child table relationships have 4- and 5-part composite primary keys. I would REALLY like to remove these PKs, replace them with unique constraints or the like, and add a new identity column called ID to each of these tables. If I change the PKs and FKs on these tables to more manageable fields, will the Access app have problems? In other words, does Access use the metadata from the tables (PK and FK info) in such a way that changing them would break the app?
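
    A hedged T-SQL sketch of the change being proposed, for one such table; all names here are made up, and any foreign keys that reference the old composite PK would have to be dropped and re-pointed first:

        ALTER TABLE dbo.OrderDetail ADD ID int IDENTITY(1,1) NOT NULL;      -- new surrogate key column
        ALTER TABLE dbo.OrderDetail DROP CONSTRAINT PK_OrderDetail;         -- the 4- or 5-part composite PK
        ALTER TABLE dbo.OrderDetail ADD CONSTRAINT PK_OrderDetail_ID PRIMARY KEY (ID);
        ALTER TABLE dbo.OrderDetail ADD CONSTRAINT UQ_OrderDetail_Natural
            UNIQUE (CustomerID, OrderID, LineNumber, Revision);             -- keep the old natural key enforced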

    Read the article

  • Database model for keeping track of likes/shares/comments on blog posts over time

    - by gage
    My goal is to keep track of the popular posts on different blog sites based on social network activity at any given time. The goal is not simply to find the most popular posts right now, but to find posts that are popular compared to other posts on the same blog. For example, I follow a tech blog, a sports blog, and a gossip blog. The tech blog gets far more readership than the other two, so in raw numbers every post on the tech blog will always outnumber views on the other two. So let's say the average tech blog post gets 500 Facebook likes and the other two average 50 likes per post. Then when there is a sports blog post with 200 FB likes and a gossip blog post with 300, while today's tech blog posts have 500 likes, I want to highlight the sports and gossip posts (more likes than their blogs' averages, versus the tech blog post that has more likes in absolute terms but is only average for its blog). The approach I am thinking of taking is to make an entry in a database for each blog post. Every x minutes (say every 15 minutes) I will check how many likes/shares/comments an entry has received on all the social networks (Facebook, Twitter, Google+, LinkedIn). So over time there will be a history of likes for each blog post, e.g. for post 1234: after 15 min: 10 FB likes, 4 tweets, 6 G+; after 30 min: 15 FB likes, 15 tweets, 10 G+; ... after 48 hours: 200 FB likes, 25 tweets, 15 G+. By keeping a history like this for each blog post I can know the average number of likes/shares/tweets at any given time interval. So if, for example, the average number of FB likes for all blog posts 48 hours after posting is 50 and a particular post has 200, I can mark that as a popular post and feature/highlight it. A consideration in the design is to be able to easily query the values (likes/shares) for a specific time frame, i.e. FB likes after 30 min or tweets after 24 hrs, in order to compute the averages to compare against (or should the averages be stored in their own table?). If this approach is flawed or could use improvement please let me know, but it is not my main question. My main question is: what should a database schema for storing this info look like? Assuming that the above approach is taken, I am trying to figure out what a database schema for storing the likes over time would look like. I am brand new to databases; from some basic reading I see that it is advisable to aim for a 3NF database. I have come up with the following possible schemas.
    Schema 1 (DB "Popular Posts"):
    Table Post: post_id (primary key (pk)), url, title
    Table Social Activity: activity_id (pk), url (fk), type (i.e. facebook, twitter, g+), value, timestamp
    This was my initial instinct (based on my very limited DB knowledge). As far as I understand, this schema would be 3NF? I searched for designs of similar database models and found this question on Stack Overflow: http://stackoverflow.com/questions/11216080/data-structure-for-storing-height-and-weight-etc-over-time-for-multiple-users . The scenario in that question is similar (recording users' weight/height over time). Taking the accepted answer for that question and applying it to my model results in something like:
    Schema 2 (same as above, but with the social activity broken into 2 tables):
    Table Post: post_id (pk), url, title
    Table Social Measurement: measurement_id (pk), post_id (fk), timestamp
    Table Social Stat: stat_id (pk), measurement_id (fk), type (i.e. facebook, twitter, g+), value
    The advantage I see in schema 2 is that I will likely want to access all the values for a given time: when making a measurement 30 minutes after a post is published I will simultaneously check the number of FB likes, FB shares, FB comments, tweets, G+ shares, and LinkedIn shares. So with this schema it may be easier to get all stats for a measurement_id corresponding to a certain time, i.e. all social stats for post 1234 at time x. Another thought I had: since it doesn't make sense to compare the number of FB likes with the number of tweets or G+ shares, maybe it makes sense to separate each social measurement into its own table?
    Schema 3 (DB "Popular Posts"):
    Table Post: post_id (pk), url, title
    Table fb_likes: fb_like_id (pk), post_id (fk), timestamp, value
    Table fb_shares: fb_shares_id (pk), post_id (fk), timestamp, value
    Table tweets: tweets_id (pk), post_id (fk), timestamp, value
    Table google_plus: google_plus_id (pk), post_id (fk), timestamp, value
    As you can see, I am generally lost/unsure of what approach to take. This must be a typical kind of database problem (storing measurements over time, e.g. temperature statistics) with a common solution. Is there a design pattern/model for this, and does it have a name? I tried searching for "database periodic data collection" and "database measurements over time" but didn't find anything specific. What would be an appropriate model to solve the needs of this problem?
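
    A hedged DDL sketch of Schema 2 as described above; the type choices are assumptions, and the auto-generated key syntax (identity/auto_increment/serial) varies by database:

        CREATE TABLE post (
            post_id  integer PRIMARY KEY,             -- auto-generated
            url      varchar(500) NOT NULL,
            title    varchar(200)
        );

        CREATE TABLE social_measurement (
            measurement_id integer PRIMARY KEY,       -- auto-generated
            post_id        integer NOT NULL REFERENCES post (post_id),
            measured_at    timestamp NOT NULL
        );

        CREATE TABLE social_stat (
            stat_id        integer PRIMARY KEY,       -- auto-generated
            measurement_id integer NOT NULL REFERENCES social_measurement (measurement_id),
            type           varchar(30) NOT NULL,      -- e.g. 'fb_like', 'fb_share', 'tweet', 'g+'
            value          integer NOT NULL
        );

    Per-blog averages at a given offset can then be computed with a GROUP BY over social_stat joined to social_measurement, which is the query pattern the question says it needs.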

    Read the article

  • jQuery $.data(): Possible misuse?

    - by Rosarch
    Perhaps I'm using $.data incorrectly. Assigning the data: var course_li = sprintf('<li class="draggable course">%s</li>', course["fields"]["name"]); $(course_li).data('pk', course['pk']); alert(course['pk']); // shows a correct value Moving the li to a different ul: function moveToTerm(item, term) { item.fadeOut(function() { item.appendTo(term).fadeIn(); }); } Trying to access the data later: $.each($(term).children(".course"), function(index, course) { var pk = $(course).data('pk'); // pk is undefined courses.push(pk); }); What am I doing wrong? I have confirmed that the course li on which I am setting the data is the same as the one on which I am looking for it. (Unless I'm messing that up by calling appendTo() on it?)

    Read the article

  • Creating a Dynamic DataRow for easier DataRow Syntax

    - by Rick Strahl
    I've been thrown back into an older project that uses DataSets and DataRows as their entity storage model. I have several applications internally that I still maintain that run just fine (and I sometimes wonder if this wasn't easier than all this ORM crap we deal with with 'newer' improved technology today - but I disgress) but use this older code. For the most part DataSets/DataTables/DataRows are abstracted away in a pseudo entity model, but in some situations like queries DataTables and DataRows are still surfaced to the business layer. Here's an example. Here's a business object method that runs dynamic query and the code ends up looping over the result set using the ugly DataRow Array syntax:public int UpdateAllSafeTitles() { int result = this.Execute("select pk, title, safetitle from " + Tablename + " where EntryType=1", "TPks"); if (result < 0) return result; result = 0; foreach (DataRow row in this.DataSet.Tables["TPks"].Rows) { string title = row["title"] as string; string safeTitle = row["safeTitle"] as string; int pk = (int)row["pk"]; string newSafeTitle = this.GetSafeTitle(title); if (newSafeTitle != safeTitle) { this.ExecuteNonQuery("update " + this.Tablename + " set safeTitle=@safeTitle where pk=@pk", this.CreateParameter("@safeTitle",newSafeTitle), this.CreateParameter("@pk",pk) ); result++; } } return result; } The problem with looping over DataRow objecs is two fold: The array syntax is tedious to type and not real clear to look at, and explicit casting is required in order to do anything useful with the values. I've highlighted the place where this matters. Using the DynamicDataRow class I'll show in a minute this code can be changed to look like this:public int UpdateAllSafeTitles() { int result = this.Execute("select pk, title, safetitle from " + Tablename + " where EntryType=1", "TPks"); if (result < 0) return result; result = 0; foreach (DataRow row in this.DataSet.Tables["TPks"].Rows) { dynamic entry = new DynamicDataRow(row); string newSafeTitle = this.GetSafeTitle(entry.title); if (newSafeTitle != entry.safeTitle) { this.ExecuteNonQuery("update " + this.Tablename + " set safeTitle=@safeTitle where pk=@pk", this.CreateParameter("@safeTitle",newSafeTitle), this.CreateParameter("@pk",entry.pk) ); result++; } } return result; } The code looks much a bit more natural and describes what's happening a little nicer as well. Well, using the new dynamic features in .NET it's actually quite easy to implement the DynamicDataRow class. Creating your own custom Dynamic Objects .NET 4.0 introduced the Dynamic Language Runtime (DLR) and opened up a whole bunch of new capabilities for .NET applications. The dynamic type is an easy way to avoid Reflection and directly access members of 'dynamic' or 'late bound' objects at runtime. There's a lot of very subtle but extremely useful stuff that dynamic does (especially for COM Interop scenearios) but in its simplest form it often allows you to do away with manual Reflection at runtime. In addition you can create DynamicObject implementations that can perform  custom interception of member accesses and so allow you to provide more natural access to more complex or awkward data structures like the DataRow that I use as an example here. Bascially you can subclass DynamicObject and then implement a few methods (TryGetMember, TrySetMember, TryInvokeMember) to provide the ability to return dynamic results from just about any data structure using simple property/method access. 
In the code above, I created a custom DynamicDataRow class which inherits from DynamicObject and implements only TryGetMember and TrySetMember. Here's what simple class looks like:/// <summary> /// This class provides an easy way to turn a DataRow /// into a Dynamic object that supports direct property /// access to the DataRow fields. /// /// The class also automatically fixes up DbNull values /// (null into .NET and DbNUll to DataRow) /// </summary> public class DynamicDataRow : DynamicObject { /// <summary> /// Instance of object passed in /// </summary> DataRow DataRow; /// <summary> /// Pass in a DataRow to work off /// </summary> /// <param name="instance"></param> public DynamicDataRow(DataRow dataRow) { DataRow = dataRow; } /// <summary> /// Returns a value from a DataRow items array. /// If the field doesn't exist null is returned. /// DbNull values are turned into .NET nulls. /// /// </summary> /// <param name="binder"></param> /// <param name="result"></param> /// <returns></returns> public override bool TryGetMember(GetMemberBinder binder, out object result) { result = null; try { result = DataRow[binder.Name]; if (result == DBNull.Value) result = null; return true; } catch { } result = null; return false; } /// <summary> /// Property setter implementation tries to retrieve value from instance /// first then into this object /// </summary> /// <param name="binder"></param> /// <param name="value"></param> /// <returns></returns> public override bool TrySetMember(SetMemberBinder binder, object value) { try { if (value == null) value = DBNull.Value; DataRow[binder.Name] = value; return true; } catch {} return false; } } To demonstrate the basic features here's a short test: [TestMethod] [ExpectedException(typeof(RuntimeBinderException))] public void BasicDataRowTests() { DataTable table = new DataTable("table"); table.Columns.Add( new DataColumn() { ColumnName = "Name", DataType=typeof(string) }); table.Columns.Add( new DataColumn() { ColumnName = "Entered", DataType=typeof(DateTime) }); table.Columns.Add(new DataColumn() { ColumnName = "NullValue", DataType = typeof(string) }); DataRow row = table.NewRow(); DateTime now = DateTime.Now; row["Name"] = "Rick"; row["Entered"] = now; row["NullValue"] = null; // converted in DbNull dynamic drow = new DynamicDataRow(row); string name = drow.Name; DateTime entered = drow.Entered; string nulled = drow.NullValue; Assert.AreEqual(name, "Rick"); Assert.AreEqual(entered,now); Assert.IsNull(nulled); // this should throw a RuntimeBinderException Assert.AreEqual(entered,drow.enteredd); } The DynamicDataRow requires a custom constructor that accepts a single parameter that sets the DataRow. Once that's done you can access property values that match the field names. Note that types are automatically converted - no type casting is needed in the code you write. The class also automatically converts DbNulls to regular nulls and vice versa which is something that makes it much easier to deal with data returned from a database. What's cool here isn't so much the functionality - even if I'd prefer to leave DataRow behind ASAP -  but the fact that we can create a dynamic type that uses a DataRow as it's 'DataSource' to serve member values. It's pretty useful feature if you think about it, especially given how little code it takes to implement. 
By implementing these two simple methods we get to provide two features I was complaining about at the beginning that are missing from the DataRow: Direct Property Syntax Automatic Type Casting so no explicit casts are required Caveats As cool and easy as this functionality is, it's important to understand that it doesn't come for free. The dynamic features in .NET are - well - dynamic. Which means they are essentially evaluated at runtime (late bound). Rather than static typing where everything is compiled and linked by the compiler/linker, member invokations are looked up at runtime and essentially call into your custom code. There's some overhead in this. Direct invocations - the original code I showed - is going to be faster than the equivalent dynamic code. However, in the above code the difference of running the dynamic code and the original data access code was very minor. The loop running over 1500 result records took on average 13ms with the original code and 14ms with the dynamic code. Not exactly a serious performance bottleneck. One thing to remember is that Microsoft optimized the DLR code significantly so that repeated calls to the same operations are routed very efficiently which actually makes for very fast evaluation. The bottom line for performance with dynamic code is: Make sure you test and profile your code if you think that there might be a performance issue. However, in my experience with dynamic types so far performance is pretty good for repeated operations (ie. in loops). While usually a little slower the perf hit is a lot less typically than equivalent Reflection work. Although the code in the second example looks like standard object syntax, dynamic is not static code. It's evaluated at runtime and so there's no type recognition until runtime. This means no Intellisense at development time, and any invalid references that call into 'properties' (ie. fields in the DataRow) that don't exist still cause runtime errors. So in the case of the data row you still get a runtime error if you mistype a column name:// this should throw a RuntimeBinderException Assert.AreEqual(entered,drow.enteredd); Dynamic - Lots of uses The arrival of Dynamic types in .NET has been met with mixed emotions. Die hard .NET developers decry dynamic types as an abomination to the language. After all what dynamic accomplishes goes against all that a static language is supposed to provide. On the other hand there are clearly scenarios when dynamic can make life much easier (COM Interop being one place). Think of the possibilities. What other data structures would you like to expose to a simple property interface rather than some sort of collection or dictionary? And beyond what I showed here you can also implement 'Method missing' behavior on objects with InvokeMember which essentially allows you to create dynamic methods. It's all very flexible and maybe just as important: It's easy to do. There's a lot of power hidden in this seemingly simple interface. Your move…© Rick Strahl, West Wind Technologies, 2005-2011Posted in CSharp  .NET   Tweet (function() { var po = document.createElement('script'); po.type = 'text/javascript'; po.async = true; po.src = 'https://apis.google.com/js/plusone.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(po, s); })();

    Read the article

  • JavaScript: how to automatically change a word on click without needing to refresh the browser?

    - by Fakhrul Zakry
    im quite lost here and not really expert about javascript. I want to change the content when user click with "Thanks for vote" automatically without need to refresh the page. Here is my html: {% if poll.privacy == "own" and request.user.get_profile.parliment != poll.location %} You do not have permission to vote this. {% else %} {% if has_vote %} {% if poll.rating_option == '1to5' %} <div class="rate"> <div id="poll-rate-{{ poll.pk }}"></div> </div> {% else %} Thanks for your vote. {% endif %} {% else %} {% if poll.rating_option == 'yes_no' %} <a href="javascript:void(0)" class="rate btn btn-xs btn-success mr5 vote-positive" rel="{% url 'vote_vote' poll.pk 1 %}" alt="{{ poll.pk }}">Yes</a> <a href="javascript:void(0)" class="rate btn btn-xs btn-danger vote-negative" rel="{% url 'vote_vote' poll.pk 0 %}" alt="{{ poll.pk }}">No</a> {% elif poll.rating_option == 'like_dislike' %} <a href="javascript:void(0)" class="rate btn btn-xs btn-success mr5 vote-positive" rel="{% url 'vote_vote' poll.pk 1 %}" alt="{{ poll.pk }}">Like</a> <a href="javascript:void(0)" class="rate btn btn-xs btn-danger vote-negative" rel="{% url 'vote_vote' poll.pk 0 %}" alt="{{ poll.pk }}">Dislike</a> {% elif poll.rating_option == '1to5' %} <div class="rate"> <div id="poll-rate-{{ poll.pk }}"></div> </div> {% endif %} {% endif %} {% endif %} and here is my javascript: function bindVoteHandler() { $('a.vote-positive, a.vote-negative').click(function(event) { event.preventDefault(); var link = $(this).attr('rel'); var poll_pk = $(this).attr('alt'); var selected_div = $(this).parent('div'); selected_div.html('<img src="{{ STATIC_URL }}img/loading_small.gif" />'); $.ajax(link).done(function( data ) { var result_div = $('div#vote-result-'+poll_pk); result_div.html(data); result_div.removeClass('vote-result-grey-out'); selected_div.html('<small>Thanks for your vote.</small>'); }); }); }; did anyone know what is the problem why i need to refresh my page after Like/Vote/rate to make it become (Thanks For your vote) ? please someone know help or share link with me. Below is the image: before click Like: after click Like: then when refreshed the word just displayed, it supposed automatically display when click Like. Thank you in advance..

    Read the article

  • Java - Regex problem

    - by Yatendra Goel
    I have a list of URLs of the form http://www.abc.com/pk/ca and http://www.abc.com/pk. Now I want to find only those URLs that end with /pk or /pk/ and don't have anything in between .com and /pk.
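
    A sketch of a pattern that would do this, assuming the host is always www.abc.com and the scheme is http: ^http://www\.abc\.com/pk/?$ — the optional trailing slash accepts both forms, and the end anchor rejects anything after /pk (with Java's String.matches the anchors are implied, since the whole string must match).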

    Read the article

  • Java - Regex problem

    - by Yatendra Goel
    I have a list of URLs of these types: http://www.abc.com/pk/etc, http://www.abc.com/pk/etc/, and http://www.abc.com/pk/etc/etc, where etc can be anything. I want to match only those URLs that contain www.abc.com/pk/etc or www.abc.com/pk/etc/ (i.e. exactly one path segment after /pk, with or without a trailing slash).
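
    Under that reading, a hedged pattern sketch: ^http://www\.abc\.com/pk/[^/]+/?$ — the [^/]+ allows exactly one segment after /pk, the optional slash accepts the trailing-slash variant, and the end anchor excludes URLs with a further /etc segment.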

    Read the article

  • Do these tables respect 3NF database normalization?

    - by penas
    AUTHOR table: Author_ID (PK), First_Name, Last_Name
    TITLES table: TITLE_ID (PK), NAME, Author_ID (FK)
    DOMAIN table: DOMAIN_ID (PK), NAME, TITLE_ID (FK)
    READERS table: READER_ID (PK), First_Name, Last_Name, ADDRESS, CITY_ID (FK), PHONE
    CITY table: CITY_ID (PK), NAME
    BORROWING table: BORROWING_ID (PK), READER_ID (FK), TITLE_ID (FK), DATE
    HISTORY table: READER_ID, TITLE_ID, DATE_OF_BORROWING, DATE_OF_RETURNING
    Do these tables respect 3NF database normalization? What if two authors work together on the same title? Should the ADDRESS column have its own table? When a reader borrows a book, I make an entry in the BORROWING table. After he returns the book, I delete that entry and make another entry in the HISTORY table. Is this a good idea? Do I break any rule? Should I instead have one single BORROWING table with a DATE_OF_RETURNING column?
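
    On the two-authors question, a hedged sketch of the usual fix: drop Author_ID from TITLES and add a junction table, so a title can have any number of authors (the names reuse the tables above, with assumed integer keys):

        CREATE TABLE TITLE_AUTHORS (
            TITLE_ID  integer NOT NULL REFERENCES TITLES (TITLE_ID),
            Author_ID integer NOT NULL REFERENCES AUTHOR (Author_ID),
            PRIMARY KEY (TITLE_ID, Author_ID)
        );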

    Read the article

  • NHibernate - Mapping tagging of entities

    - by ZNS
    Hi! I've been racking my brain over how to get my tagging of entities to work. I'll get right into the database structure:
    tblTag: TagId (int32, PK), Name
    tblTagEntity: TagId (PK), EntityId (PK), EntityType (string, PK)
    tblImage: ImageId (int32, PK)
    tblBlog: BlogId (int32, PK)
    class Image: Id; EntityType { get { return "MyNamespace.Entities.Image"; } }; IList Tags
    class Blog: Id; EntityType { get { return "MyNamespace.Entities.Blog"; } }; IList Tags
    The obvious problem I have here is that EntityType is an identifier but doesn't exist in the database. If anyone could help with this mapping I'd be very grateful.

    Read the article

  • NHibernate and Composite Key References

    - by Rich
    I have a weird situation. I have four entities: Company, Employee, Plan, and Participation (in a retirement plan).
    Company PK: Company ID
    Plan PK: Company ID, Plan ID
    Employee PK: Company ID, SSN, Employee ID
    Participation PK: Company ID, SSN, Plan ID
    The problem is in linking the employee to the participation. From a DB perspective, Participation should have Employee ID in its PK, but it doesn't (it's not even in the table). NHibernate won't let me map the "has many" because the link expects 3 columns (since the Employee PK has 3 columns), but I can only provide 2. Any ideas on how to do this?

    Read the article

  • Database table design vs. ease of use.

    - by Gastoni
    I have a table with 3 fields: color, fruit, date. I can pick 1 fruit and 1 color, but I can do this only once each day. Examples:
    red, apple, monday
    red, mango, monday
    blue, apple, monday
    blue, mango, monday
    red, apple, tuesday
    The two ways in which I could build the table are:
    1. Have color, fruit, and date form a composite primary key (PK): PK color, PK fruit, PK date. This makes it easy to insert data into the table because all the validation needed is done by the database.
    2. Have an id column as the PK and then all the other fields: PK id, color, fruit, date. Many say that's the way it should be, because composite PKs are evil; for example, CakePHP does not support them.
    Both have advantages. Which would be the 'better' approach?
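
    A hedged sketch of both layouts (the table name and column sizes are assumptions); note that with the surrogate id the once-per-day rule still has to be enforced separately with a unique constraint:

        -- Option 1: the composite primary key carries the rule itself
        CREATE TABLE pick (
            color     varchar(20) NOT NULL,
            fruit     varchar(20) NOT NULL,
            pick_date date        NOT NULL,
            PRIMARY KEY (color, fruit, pick_date)
        );

        -- Option 2: surrogate id for frameworks like CakePHP, plus a unique constraint for the rule
        CREATE TABLE pick (
            id        integer     NOT NULL PRIMARY KEY,   -- auto-generated (identity/auto_increment/serial)
            color     varchar(20) NOT NULL,
            fruit     varchar(20) NOT NULL,
            pick_date date        NOT NULL,
            UNIQUE (color, fruit, pick_date)
        );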

    Read the article

  • OWA, Outlook Anywhere, RPCPing Inconsistencies

    - by pk.
    I'm troubleshooting an Outlook Anywhere issue with a new Exchange 2010 server. The server in question, MS2010, is behind a SonicWALL NSA 2400 device and works wonderfully except for Outlook Anywhere. Outlook Anywhere works internally and I've verified (through Ctrl-Right Click --> Connection Status) that I'm able to connect to MS2010 over HTTPS. When trying to connect to the server using HTTPS from outside the firewall, I'm unable to do so. A Wireshark trace shows 30 or so successful HTTPS packet transmissions, and then it fails with 3 straight transmissions to a destination port of 135. I have no idea why my computer is attempting to access anything on port 135 since I've setup my profile to use HTTPS on both slow and fast connections. I'm 99% certain that the firewall is configured correctly. I run Outlook Web Access (also HTTPS) on the same server and there are no issues with access. EDIT: My Autodiscover settings are correct (as far as I can tell). My server passes the Outlook Anywhere and Autodiscover tests at https://www.testexchangeconnectivity.com/. I've been using the RPCPing utility to troubleshoot and have come across the following results: Internally- >rpcping -t ncacn_http -s mail.mydomain.com -o RpcProxy=mail.mydomain.com -P "pk,mydomain,*" -I "pk,mydomain,*" -H 1 -u 10 -a connect -F 3 -v 3 -E -R none RPCPing v2.12. Copyright (C) Microsoft Corporation, 2002 OS Version is: 6.1, Service Pack 1 RPCPinging proxy server mail.mydomain.com with Echo Request Packet Sending ping to server Response from server received: 200 Pinging successfully completed in 93 ms Externally- >rpcping -t ncacn_http -s mail.mydomain.com -o RpcProxy=mail.mydomain.com -P "pk,mydomain,*" -I "pk,mydomain,*" -H 1 -u 10 -a connect -F 3 -v 3 -E -R none RPCPing v6.0. Copyright (C) Microsoft Corporation, 2002-2006 Enter password for RPC/HTTP proxy: RPCPing set Activity ID: {fc8411ba-2987-4175-b37b-801dc69d5ff9} RPCPinging proxy server mail.mydomain.com with Echo Request Packet Setting autologon policy to high WinHttpSetCredentials for target server called Error 87 : The parameter is incorrect. returned in WinHttpSetCredentials Ping failed What should I be checking in order to troubleshoot my Outlook Anywhere issues? I'm using Windows 7 SP1 for internal and external access. 
EDIT: Autodiscover.xml content <?xml version="1.0"?> <Autodiscover xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/exchange/autodiscover/responseschema/2006"> <Response xmlns="http://schemas.microsoft.com/exchange/autodiscover/outlook/responseschema/2006a"> <User> <DisplayName>John Doe</DisplayName> <LegacyDN>/o=MYDOMAIN/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Recipients/cn=pk</LegacyDN> <DeploymentId>d35170cc-f3a7-42c5-9427-1f554a469126</DeploymentId> </User> <Account> <AccountType>email</AccountType> <Action>settings</Action> <Protocol> <Type>EXCH</Type> <Server>MS2010.MYDOMAIN.local</Server> <ServerDN>/o=MYDOMAIN/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Configuration/cn=Servers/cn=MS2010</ServerDN> <ServerVersion>738180DA</ServerVersion> <MdbDN>/o=MYDOMAIN/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Configuration/cn=Servers/cn=MS2010/cn=Microsoft Private MDB</MdbDN> <ASUrl>https://MS2010.MYDOMAIN.local/EWS/Exchange.asmx</ASUrl> <OOFUrl>https://MS2010.MYDOMAIN.local/EWS/Exchange.asmx</OOFUrl> <OABUrl>http://MS2010.MYDOMAIN.local/OAB/2c34c9f5-5521-4c8c-b684-538df815052a/</OABUrl> <UMUrl>https://MS2010.MYDOMAIN.local/EWS/UM2007Legacy.asmx</UMUrl> <Port>0</Port> <DirectoryPort>0</DirectoryPort> <ReferralPort>0</ReferralPort> <PublicFolderServer>MS2007.MYDOMAIN.local</PublicFolderServer> <AD>dc1.MYDOMAIN.local</AD> <EwsUrl>https://MS2010.MYDOMAIN.local/EWS/Exchange.asmx</EwsUrl> <EcpUrl>https://MS2010.MYDOMAIN.local/ecp/</EcpUrl> <EcpUrl-um>?p=customize/voicemail.aspx&amp;exsvurl=1</EcpUrl-um> <EcpUrl-aggr>?p=personalsettings/EmailSubscriptions.slab&amp;exsvurl=1</EcpUrl-aggr> <EcpUrl-mt>PersonalSettings/DeliveryReport.aspx?exsvurl=1&amp;IsOWA=&lt;IsOWA&gt;&amp;MsgID=&lt;MsgID&gt;&amp;Mbx=&lt;Mbx&gt;</EcpUrl-mt> <EcpUrl-ret>?p=organize/retentionpolicytags.slab&amp;exsvurl=1</EcpUrl-ret> <EcpUrl-sms>?p=sms/textmessaging.slab&amp;exsvurl=1</EcpUrl-sms> </Protocol> <Protocol> <Type>EXPR</Type> <Server>mail.mycompany.com</Server> <ASUrl>https://mail.mycompany.com/ews/exchange.asmx</ASUrl> <OOFUrl>https://mail.mycompany.com/ews/exchange.asmx</OOFUrl> <OABUrl>https://mail.mycompany.com/OAB/2c34c9f5-5521-4c8c-b684-538df815052a/</OABUrl> <UMUrl>https://mail.mycompany.com/ews/UM2007Legacy.asmx</UMUrl> <Port>0</Port> <DirectoryPort>0</DirectoryPort> <ReferralPort>0</ReferralPort> <SSL>On</SSL> <AuthPackage>Basic</AuthPackage> <CertPrincipalName>msstd:mail.mycompany.com</CertPrincipalName> <EwsUrl>https://mail.mycompany.com/ews/exchange.asmx</EwsUrl> <EcpUrl>https://mail.mycompany.com/owa/</EcpUrl> <EcpUrl-um>?p=customize/voicemail.aspx&amp;exsvurl=1</EcpUrl-um> <EcpUrl-aggr>?p=personalsettings/EmailSubscriptions.slab&amp;exsvurl=1</EcpUrl-aggr> <EcpUrl-mt>PersonalSettings/DeliveryReport.aspx?exsvurl=1&amp;IsOWA=&lt;IsOWA&gt;&amp;MsgID=&lt;MsgID&gt;&amp;Mbx=&lt;Mbx&gt;</EcpUrl-mt> <EcpUrl-ret>?p=organize/retentionpolicytags.slab&amp;exsvurl=1</EcpUrl-ret> <EcpUrl-sms>?p=sms/textmessaging.slab&amp;exsvurl=1</EcpUrl-sms> </Protocol> <Protocol> <Type>WEB</Type> <Port>0</Port> <DirectoryPort>0</DirectoryPort> <ReferralPort>0</ReferralPort> <Internal> <OWAUrl AuthenticationMethod="Basic, Fba">https://MS2010.MYDOMAIN.local/owa/</OWAUrl> <Protocol> <Type>EXCH</Type> <ASUrl>https://MS2010.MYDOMAIN.local/EWS/Exchange.asmx</ASUrl> </Protocol> </Internal> <External> <OWAUrl AuthenticationMethod="Fba">https://mail.mycompany.com/owa/</OWAUrl> <Protocol> <Type>EXPR</Type> 
<ASUrl>https://mail.mycompany.com/ews/exchange.asmx</ASUrl> </Protocol> </External> </Protocol> </Account> </Response> </Autodiscover>

    Read the article

  • Heimdal Kerberos in OpenLDAP issue

    - by Brian
    I think I posted this on the wrong 'sister site', so here it is. I'm having a bit of trouble getting Kerberos (Heimdal version) to work nicely with OpenLDAP. The kerberos database is being stored in LDAP itself. The KDC uses SASL EXTERNAL authentication as root to access the container ou. I created the database in LDAP fine using kadmin -l, but it won't let me use kadmin without the -l flag: root@rds0:~# kadmin -l kadmin> list * krbtgt/REALM kadmin/changepw kadmin/admin changepw/kerberos kadmin/hprop WELLKNOWN/ANONYMOUS WELLKNOWN/org.h5l.fast-cookie@WELLKNOWN:ORG.H5L default brian.empson brian.empson/admin host/rds0.example.net ldap/rds0.example.net host/localhost kadmin> exit root@rds0:~# kadmin kadmin> list * brian.empson/admin@REALM's Password: <----- With right password kadmin: kadm5_get_principals: Key table entry not found kadmin> list * brian.empson/admin@REALM's Password: <------ With wrong password kadmin: kadm5_get_principals: Already tried ENC-TS-info, looping kadmin> I can get tickets without a problem: root@rds0:~# klist Credentials cache: FILE:/tmp/krb5cc_0 Principal: brian.empson@REALM Issued Expires Principal Nov 11 14:14:40 2012 Nov 12 00:14:37 2012 krbtgt/REALM@REALM Nov 11 14:40:35 2012 Nov 12 00:14:37 2012 ldap/rds0.example.net@REALM But I can't seem to change my own password without kadmin -l: root@rds0:~# kpasswd brian.empson@REALM's Password: <---- Right password New password: Verify password - New password: Auth error : Authentication failed root@rds0:~# kpasswd brian.empson@REALM's Password: <---- Wrong password kpasswd: krb5_get_init_creds: Already tried ENC-TS-info, looping kadmin's logs are not helpful at all: 2012-11-11T13:48:33 krb5_recvauth: Key table entry not found 2012-11-11T13:51:18 krb5_recvauth: Key table entry not found 2012-11-11T13:53:02 krb5_recvauth: Key table entry not found 2012-11-11T14:16:34 krb5_recvauth: Key table entry not found 2012-11-11T14:20:24 krb5_recvauth: Key table entry not found 2012-11-11T14:20:44 krb5_recvauth: Key table entry not found 2012-11-11T14:21:29 krb5_recvauth: Key table entry not found 2012-11-11T14:21:46 krb5_recvauth: Key table entry not found 2012-11-11T14:23:09 krb5_recvauth: Key table entry not found 2012-11-11T14:45:39 krb5_recvauth: Key table entry not found The KDC reports that both accounts succeed in authenticating: 2012-11-11T14:48:03 AS-REQ brian.empson@REALM from IPv4:192.168.72.10 for kadmin/changepw@REALM 2012-11-11T14:48:03 Client sent patypes: REQ-ENC-PA-REP 2012-11-11T14:48:03 Looking for PK-INIT(ietf) pa-data -- brian.empson@REALM 2012-11-11T14:48:03 Looking for PK-INIT(win2k) pa-data -- brian.empson@REALM 2012-11-11T14:48:03 Looking for ENC-TS pa-data -- brian.empson@REALM 2012-11-11T14:48:03 Need to use PA-ENC-TIMESTAMP/PA-PK-AS-REQ 2012-11-11T14:48:03 sending 294 bytes to IPv4:192.168.72.10 2012-11-11T14:48:03 AS-REQ brian.empson@REALM from IPv4:192.168.72.10 for kadmin/changepw@REALM 2012-11-11T14:48:03 Client sent patypes: ENC-TS, REQ-ENC-PA-REP 2012-11-11T14:48:03 Looking for PK-INIT(ietf) pa-data -- brian.empson@REALM 2012-11-11T14:48:03 Looking for PK-INIT(win2k) pa-data -- brian.empson@REALM 2012-11-11T14:48:03 Looking for ENC-TS pa-data -- brian.empson@REALM 2012-11-11T14:48:03 ENC-TS Pre-authentication succeeded -- brian.empson@REALM using aes256-cts-hmac-sha1-96 2012-11-11T14:48:03 ENC-TS pre-authentication succeeded -- brian.empson@REALM 2012-11-11T14:48:03 AS-REQ authtime: 2012-11-11T14:48:03 starttime: unset endtime: 2012-11-11T14:53:00 renew till: unset 2012-11-11T14:48:03 Client 
supported enctypes: aes256-cts-hmac-sha1-96, aes128-cts-hmac-sha1-96, des3-cbc-sha1, arcfour-hmac-md5, using aes256-cts-hmac-sha1-96/aes256-cts-hmac-sha1-96 2012-11-11T14:48:03 sending 704 bytes to IPv4:192.168.72.10 2012-11-11T14:45:39 AS-REQ brian.empson/admin@REALM from IPv4:192.168.72.10 for kadmin/admin@REALM 2012-11-11T14:45:39 Client sent patypes: REQ-ENC-PA-REP 2012-11-11T14:45:39 Looking for PK-INIT(ietf) pa-data -- brian.empson/admin@REALM 2012-11-11T14:45:39 Looking for PK-INIT(win2k) pa-data -- brian.empson/admin@REALM 2012-11-11T14:45:39 Looking for ENC-TS pa-data -- brian.empson/admin@REALM 2012-11-11T14:45:39 Need to use PA-ENC-TIMESTAMP/PA-PK-AS-REQ 2012-11-11T14:45:39 sending 303 bytes to IPv4:192.168.72.10 2012-11-11T14:45:39 AS-REQ brian.empson/admin@REALM from IPv4:192.168.72.10 for kadmin/admin@REALM 2012-11-11T14:45:39 Client sent patypes: ENC-TS, REQ-ENC-PA-REP 2012-11-11T14:45:39 Looking for PK-INIT(ietf) pa-data -- brian.empson/admin@REALM 2012-11-11T14:45:39 Looking for PK-INIT(win2k) pa-data -- brian.empson/admin@REALM 2012-11-11T14:45:39 Looking for ENC-TS pa-data -- brian.empson/admin@REALM 2012-11-11T14:45:39 ENC-TS Pre-authentication succeeded -- brian.empson/admin@REALM using aes256-cts-hmac-sha1-96 2012-11-11T14:45:39 ENC-TS pre-authentication succeeded -- brian.empson/admin@REALM 2012-11-11T14:45:39 AS-REQ authtime: 2012-11-11T14:45:39 starttime: unset endtime: 2012-11-11T15:45:39 renew till: unset 2012-11-11T14:45:39 Client supported enctypes: aes256-cts-hmac-sha1-96, aes128-cts-hmac-sha1-96, des3-cbc-sha1, arcfour-hmac-md5, using aes256-cts-hmac-sha1-96/aes256-cts-hmac-sha1-96 2012-11-11T14:45:39 sending 717 bytes to IPv4:192.168.72.10 I wish I had more detailed logging messages, running kadmind in debug mode seems to almost work but it just kicks me back to the shell when I type in the correct password. GSSAPI via LDAP doesn't work either, but I suspect it's because some parts of kerberos aren't working either: root@rds0:~# ldapsearch -Y GSSAPI -H ldaps:/// -b "o=mybase" o=mybase SASL/GSSAPI authentication started ldap_sasl_interactive_bind_s: Other (e.g., implementation specific) error (80) additional info: SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information () root@rds0:~# ldapsearch -Y EXTERNAL -H ldapi:/// -b "o=mybase" o=mybase SASL/EXTERNAL authentication started SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth SASL SSF: 0 # extended LDIF <snip> Would anyone be able to point me in the right direction?

    Read the article
