Search Results

Search found 18272 results on 731 pages for 'ldap query'.

Page 660/731 | < Previous Page | 656 657 658 659 660 661 662 663 664 665 666 667  | Next Page >

  • How to nest joins with CakePHP?

    - by Daren Thomas
    I'm trying to behave. So, instead of using the following SQL syntax: select * from tableA INNER JOIN tableB on tableA.id = tableB.tableA_id LEFT OUTER JOIN ( tableC INNER JOIN tableD on tableC.tableD_id = tableD.id) on tableC.tableA_id = tableA.id I'd like to use the CakePHP model->find(). This will let me use the Paginator too, since that will not work with custom SQL queries as far as I understand (unless you hardcode one single pagination query into the model, which seems a little inflexible to me). What I've tried so far: /* inside tableA_controller.php, inside an action, e.g. "view" */ $this->paginate['recursive'] = -1; # suppress model associations for now $this->paginate['joins'] = array( array( 'table' => 'tableB', 'alias' => 'TableB', 'type' => 'inner', 'conditions' => 'TableB.tableA_id = TableA.id', ), array( 'table' => 'tableC', 'alias' => 'TableC', 'type' => 'left', 'conditions' => 'TableC.tableA_id = TableA.id', 'joins' => array( # this would be the obvious way to do it, but doesn't work array( 'table' => 'tableD', 'alias' => 'TableD', 'type' => 'inner', 'conditions' => 'TableC.tableD_id = TableD.id' ) ) ) ) That is, nesting the joins into the structure. But that doesn't work (CakePHP just ignores the nested 'joins' element, which was kind of what I expected, but sad). I have seen hints in comments on how to do subqueries (in the where clause) using a statement builder. Can a similar trick be used here?
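
    A hedged, SQL-only sketch of one workaround: since CakePHP's 'joins' option only accepts flat join entries, the tableC/tableD inner join can be folded into a derived table so the outer query no longer needs nesting. Mapping this back into the find()/paginate options is left to the reader; the SQL itself is equivalent to the nested form above.

        -- Flat equivalent of the nested join: the C-D inner join becomes a
        -- derived table, so only plain, un-nested joins remain.
        SELECT a.*, cd.*
        FROM tableA a
        INNER JOIN tableB b
                ON b.tableA_id = a.id
        LEFT OUTER JOIN (
                SELECT c.tableA_id, c.tableD_id, d.id AS tableD_pk  -- project whatever tableC/tableD columns you need
                FROM tableC c
                INNER JOIN tableD d ON c.tableD_id = d.id
        ) cd ON cd.tableA_id = a.id;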

    Read the article

  • Index Tuning for SSIS tasks

    - by Raj More
    I am loading tables in my warehouse using SSIS. Since my SSIS is slow, it seemed like a great idea to build indexes on the tables. There are no primary keys (and therefore, foreign keys), indexes (clustered or otherwise), constraints, on this warehouse. In other words, it is 100% efficiency free. We are going to put indexes based on usage - by analyzing new queries and current query performance. So, instead of doing it our old fashioned sweat and grunt way of actually reading the SQL statements and execution plans, I thought I'd put the shiny new Database Engine Tuning Advisor to use. I turned SQL logging off in my SSIS package and ran a "Tuning" trace, saved it to a table and analyzed the output in the Tuning Advisor. Most of the lookups are done as: exec sp_executesql N'SELECT [Active], [CompanyID], [CompanyName], [CompanyShortName], [CompanyTypeID], [HierarchyNodeID] FROM [dbo].[Company] WHERE ([CompanyID]=@P1) AND ([StartDateTime] IS NOT NULL AND [EndDateTime] IS NULL)',N'@P1 int',1 exec sp_executesql N'SELECT [Active], [CompanyID], [CompanyName], [CompanyShortName], [CompanyTypeID], [HierarchyNodeID] FROM [dbo].[Company] WHERE ([CompanyID]=@P1) AND ([StartDateTime] IS NOT NULL AND [EndDateTime] IS NULL)',N'@P1 int',2 exec sp_executesql N'SELECT [Active], [CompanyID], [CompanyName], [CompanyShortName], [CompanyTypeID], [HierarchyNodeID] FROM [dbo].[Company] WHERE ([CompanyID]=@P1) AND ([StartDateTime] IS NOT NULL AND [EndDateTime] IS NULL)',N'@P1 int',3 exec sp_executesql N'SELECT [Active], [CompanyID], [CompanyName], [CompanyShortName], [CompanyTypeID], [HierarchyNodeID] FROM [dbo].[Company] WHERE ([CompanyID]=@P1) AND ([StartDateTime] IS NOT NULL AND [EndDateTime] IS NULL)',N'@P1 int',4 and when analyzed, these statements have the reason "Event does not reference any tables". Huh? Does it not see the FROM dbo.Company??!! What is going on here? So, I have multiple questions: How do I get it to capture the actual statement executing in my trace, not what was submitted in a batch? Are there any best practices to follow for tuning performance related to SSIS packages running against SQL Server 2008?
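
    Independent of what the Tuning Advisor reports, the repeated lookup pattern above usually points straight at a covering index on the filter column. A hedged sketch only (column list copied from the traced sp_executesql text; verify against the real table and workload before creating anything):

        -- Covering index for the CompanyID lookup issued by the SSIS lookups.
        CREATE NONCLUSTERED INDEX IX_Company_CompanyID_Lookup
        ON dbo.Company (CompanyID)
        INCLUDE (Active, CompanyName, CompanyShortName, CompanyTypeID,
                 HierarchyNodeID, StartDateTime, EndDateTime);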

    Read the article

  • How do I left join tables in unidirectional many-to-one in Hibernate?

    - by jbarz
    I'm piggy-backing off of http://stackoverflow.com/questions/2368195/how-to-join-tables-in-unidirectional-many-to-one-condition. If you have two classes: class A { @Id public Long id; } class B { @Id public Long id; @ManyToOne @JoinColumn(name = "parent_id", referencedColumnName = "id") public A parent; } B - A is a many to one relationship. I understand that I could add a Collection of Bs to A however I do not want that association. So my actual question is, Is there an HQL or Criteria way of creating the SQL query: select * from A left join B on (b.parent_id = a.id) This will retrieve all A records with a Cartesian product of each B record that references A and will include A records that have no B referencing them. If you use: from A a, B b where b.a = a then it is an inner join and you do not receive the A records that do not have a B referencing them. I have not found a good way of doing this without two queries so anything less than that would be great. Thanks.

    Read the article

  • Setting Connection Parameters via ADO for SQL Server

    - by taspeotis
    Is it possible to set a connection parameter on a connection to SQL Server and have that variable persist throughout the life of the connection? The parameter must be usable by subsequent queries. We have some old Access reports that use a handful of VBScript functions in the SQL queries (let's call them GetStartDate and GetEndDate) that return global variables. Our application would set these before invoking the query and then the queries can return information between date ranges specified in our application. We are looking at changing to a ReportViewer control running in local mode, but I don't see any convenient way to use these custom functions in straight T-SQL. I have two concept solutions (not tested yet), but I would like to know if there is a better way. Below is some pseudo code. Set all variables before running Recordset.OpenForward Connection->Execute("SET @GetStartDate = ..."); Connection->Execute("SET @GetEndDate = ..."); // Repeat for all parameters Will these variables persist to later calls of Recordset->OpenForward? Can anything reset the variables aside from another SET/SELECT @variable statement? Create an ADOCommand "factory" that automatically adds parameters to each ADOCommand object I will use to execute SQL // Command has previously been created ADOParameter *Parameter1 = Command->CreateParameter("GetStartDate"); ADOParameter *Parameter2 = Command->CreateParameter("GetEndDate"); // Set values and attach etc... What I would like to know is if there is something like: Connection->SetParameter("GetStartDate", "20090101"); Connection->SetParameter("GetEndDate", "20100101"); And these will persist for the lifetime of the connection, and the SQL can do something like @GetStartDate to access them. This may be exactly solution #1, if the variables persist throughout the lifetime of the connection.
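
    One note on the first concept, plus a hedged sketch: a plain @variable is scoped to the batch in which it is declared, so a SET issued in one Connection->Execute call will not be visible to a later Recordset->OpenForward. A mechanism that genuinely persists for the life of the connection is CONTEXT_INFO (SQL Server 2005 syntax; the packed date payload below is just an illustrative convention):

        -- CONTEXT_INFO is a 128-byte, connection-scoped value: set it once,
        -- and any later statement on the same connection can read it back.
        DECLARE @ctx varbinary(128);
        SET @ctx = CAST('20090101|20100101' AS varbinary(128));
        SET CONTEXT_INFO @ctx;

        -- ... later, in any query on the same connection:
        SELECT SUBSTRING(CAST(CONTEXT_INFO() AS varchar(128)), 1, 8)  AS StartDate,
               SUBSTRING(CAST(CONTEXT_INFO() AS varchar(128)), 10, 8) AS EndDate;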

    Read the article

  • SSRS2005 timeout error

    - by jaspernygaard
    Hi, I've been running around in circles for the last 2 days, trying to figure out a problem in our customer's live environment. I figured I might as well post it here, since Google gave me very limited information on the error message (5 results to be exact). The error boils down to a timeout when requesting a certain report in SSRS2005, when a certain parameter is used. The deployment scenario is: Machine #1 running Reporting Services (SQL2005, W2K3, IIS6). Machine #2 running the datawarehouse database (SQL2005, W2K3), which is the data source for #1. Both machines are running on the same VM cluster and LAN. The report requests a fairly simple SP - let's call it sp(param $a, param $b). When requested with param $a filled, it executes correctly. When using param $b, it times out after the global timeout period has passed. If I run the stored procedure with param $b directly from SQL Management Studio on #2, it returns the results perfectly fine (within 3-4s). I've profiled the datawarehouse database on #2 and when param $b is used, the query from the reporting service to the database never reaches #2. The error message that I get upon timeout, when using param $b, when invoking the report directly from the SSRS web interface is: "An error has occurred during report processing. Cannot read the next data row for the data set DataSet. A severe error occurred on the current command. The results, if any, should be discarded. Operation cancelled by user." The ExecutionLog for SSRS doesn't give me much information besides the error message rsProcessingAborted. I'm running out of ideas of how to nail this problem, so I would greatly appreciate any comments, suggestions or ideas. Thanks in advance!

    Read the article

  • Data Modeling of Entity with Attributes

    - by StackOverflowNewbie
    I'm storing some very basic information about "data sources" coming into my application. These data sources can be in the form of a document (e.g. PDF, etc.), audio (e.g. MP3, etc.) or video (e.g. AVI, etc.). Say, for example, I am only interested in the filename of the data source. Thus, I have the following table: DataSource Id (PK) Filename For each data source, I also need to store some of its attributes. An example for a PDF would be "number of pages." An example for audio would be "bit rate." An example for video would be "duration." Each DataSource will have different requirements for the attributes that need to be stored. So, I have modeled "data source attribute" this way: DataSourceAttribute Id (PK) DataSourceId (FK) Name Value Thus, I would have records like these: DataSource->Id = 1 DataSource->Filename = 'mydoc.pdf' DataSource->Id = 2 DataSource->Filename = 'mysong.mp3' DataSource->Id = 3 DataSource->Filename = 'myvideo.avi' DataSourceAttribute->Id = 1 DataSourceAttribute->DataSourceId = 1 DataSourceAttribute->Name = 'TotalPages' DataSourceAttribute->Value = '10' DataSourceAttribute->Id = 2 DataSourceAttribute->DataSourceId = 2 DataSourceAttribute->Name = 'BitRate' DataSourceAttribute->Value = '16' DataSourceAttribute->Id = 3 DataSourceAttribute->DataSourceId = 3 DataSourceAttribute->Name = 'Duration' DataSourceAttribute->Value = '1:32' My problem is that this doesn't seem to scale. For example, say I need to query for all the PDF documents along with their total number of pages: Filename, TotalPages 'mydoc.pdf', '10' 'myotherdoc.pdf', '23' ... The JOINs needed to produce the above result are just too costly. How should I address this problem?
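
    A hedged sketch of how this EAV layout is usually kept workable before abandoning it: lead a composite index with the attribute name so each attribute lookup becomes a narrow seek, and filter on the name inside the join condition so a query only needs one join per attribute. (The INCLUDE clause is SQL Server syntax; drop it on MySQL and similar.)

        -- A single seekable index covering the whole attribute table.
        CREATE INDEX IX_DataSourceAttribute_Name_Source
            ON DataSourceAttribute (Name, DataSourceId) INCLUDE (Value);

        -- All PDFs with their page counts: one join, filtered in the ON clause.
        SELECT ds.Filename,
               attr.Value AS TotalPages
        FROM   DataSource ds
        JOIN   DataSourceAttribute attr
               ON  attr.DataSourceId = ds.Id
               AND attr.Name = 'TotalPages'
        WHERE  ds.Filename LIKE '%.pdf';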

    Read the article

  • Automatic .NET code, nhibernate session, and LINQ datacontext clean-up?

    - by AverageJoe719
    Hi all, in my goal to adopt better coding practices I have a few questions in general about automatic handling of code. I have heard different answers both from online and from talking with other developers/programmers at my work. I am not sure if I should have split them into 3 questions, but they all seem sort of related: 1) How does .NET handle instances of classes and other code things that take up memory? I recently found out about using the factory pattern for certain things like service classes so that they are only instantiated once in the entire application, but then, when I mentioned it, I was told that ".NET handles a lot of that stuff automatically." 2) How does NHibernate's session handle automatic clean-up of unused things? I've seen some say that it is great at handling things automatically and you should just use a session factory and that's it, no need to close it. But I have also read and seen many examples where people close the Hibernate session. 3) How does LINQ's DataContext handle this? Most of the time I never disposed my DataContexts and the app didn't seem to take a performance hit (though I am not running anything super intensively), but it seems like most people recommend disposing of your DataContext after you are done with it. However, I have seen many many code examples where the dispose method is never called. Also, in general I found it kind of annoying that you couldn't access even one-deep child related objects after disposing of the DataContext unless you explicitly also grabbed them in the query. Thanks all. I am loving this site so far, I kind of get lost and spend hours just reading things on here. =)

    Read the article

  • Uninterruptible Windows Process

    - by Nullstr1ng
    Hi guys, we're starting a new custom project for a client right now, and one of the requirements is that the process cannot be terminated unless the system is shutting down, restarting, or logging off. This application monitors the USB interface. We will be using WMI to query the device periodically. The client wants to run the application on the Windows XP operating system and doesn't like installing .NET, so we targeted Visual Basic 6 as our language. My main concern is making sure this application cannot be terminated. Our project adviser mentions anti-virus software, and yes, some anti-virus processes cannot be terminated. I was thinking about how to do the same in Visual Basic 6. I know Windows API calls will be involved, and that's fine with me, but where should I start? I saw some articles that convert the EXE to a service, create a Windows service in Visual Basic 6, etc. So please share your thoughts.

    Read the article

  • JSON VIEW using GROUP_CONCAT question

    - by Dan Beam
    Hey DBAs and overall smart dudes. I have a question for you. We use MySQL VIEWs to format our data as JSON when it's returned (as a BLOB), which is convenient (though not particularly nice on performance, but we already know this). But, I can't seem to get a particular query working right now (each row contains NULL when it should contain a created JSON object with the values of multiple JOINs). Here's the general idea: SELECT CONCAT( "{", "\"some_list\":[", GROUP_CONCAT( DISTINCT t1.id ), "],", "\"other_list\":[", GROUP_CONCAT( DISTINCT t2.id ), "],", "}" ) cool_json FROM table_name tn INNER JOIN ( some_table st ) ON st.some_id = tn.id LEFT JOIN ( another_table at, another_one ao, used_multiple_times t1 ) ON st.id = at.some_id AND at.different_id = ao.different_id AND ao.different_id = t1.id LEFT JOIN ( another_table2 at2, another_one2 ao2, used_multiple_times t2 ) ON st.id = at2.some_id AND at2.different_id = ao2.different_id AND ao2.different_id = t2.id GROUP BY tn.id ORDER BY tn.name Anybody know the problem here? Am I missing something I should be grouping by? It was working when I was only doing 1 LEFT JOIN & GROUP_CONCAT, but now with multiple JOINs / GROUP_CONCATs it's messing it up. When I move the GROUP_CONCATs from the "cool_json" field they work as expected, but I'd like my data formatted as JSON so I can decode it server-side or client-side in one step.
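
    A hedged guess at the cause, with a sketch: in MySQL, CONCAT() returns NULL as soon as any argument is NULL, and GROUP_CONCAT() over zero joined rows is NULL, so a single empty LEFT JOIN nulls the entire cool_json value. Wrapping each aggregate in IFNULL() (and dropping the trailing comma before the closing brace, which would otherwise produce invalid JSON) is the usual fix; everything else is copied from the query above.

        SELECT CONCAT(
            '{',
            '"some_list":[',  IFNULL(GROUP_CONCAT(DISTINCT t1.id), ''), '],',
            '"other_list":[', IFNULL(GROUP_CONCAT(DISTINCT t2.id), ''), ']',
            '}'
        ) AS cool_json
        FROM table_name tn
        INNER JOIN ( some_table st ) ON st.some_id = tn.id
        LEFT JOIN ( another_table at, another_one ao, used_multiple_times t1 )
               ON st.id = at.some_id
              AND at.different_id = ao.different_id
              AND ao.different_id = t1.id
        LEFT JOIN ( another_table2 at2, another_one2 ao2, used_multiple_times t2 )
               ON st.id = at2.some_id
              AND at2.different_id = ao2.different_id
              AND ao2.different_id = t2.id
        GROUP BY tn.id
        ORDER BY tn.name;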

    Read the article

  • Inner join and outer join options in Entity Framework 4.0

    - by bigb
    I am using EF 4.0 and I need to implement query with one inner join and with N outer joins I started to implement this using different approaches but get into trouble at some point. Here is two examples how I started of doing this using ObjectQuery<'T' and Linq to Entity 1)Using ObjectQuery<'T' I implement flexible outer join but I don't know how to perform inner join with entity Rules in that case (by default Include("Rules") doing outer join, but i need to inner join by Id). public static IEnumerable<Race> GetRace(List<string> includes, DateTime date) { IRepository repository = new Repository(new BEntities()); ObjectQuery<Race> result = (ObjectQuery<Race>)repository.AsQueryable<Race>(); //perform outer joins with related entities if (includes != null) foreach (string include in includes) result = result.Include(include); //here i need inner join insteard of default outer join result = result.Include("Rules"); return result.ToList(); } 2)Using Linq To Entity I need to have kind of outer join(somethin like in GetRace()) where i may pass a List with entities to include) and also i need to perform correct inner join with entity Rules public static IEnumerable<Race> GetRace2(List<string> includes, DateTime date) { IRepository repository = new Repository(new BEntities()); IEnumerable<Race> result = from o in repository.AsQueryable<Race>() from b in o.RaceBetRules select new { o }); //I need here: // 1. to perform the same way inner joins with related entities like with ObjectQuery above //here i getting List<AnonymousType> which i cant cast to //IEnumerable<Race> when i did try to cast like //(IEnumerable<Race>)result.ToList(); i did get error: //Unable to cast object of type //'System.Collections.Generic.List`1[<>f__AnonymousType0`1[BetsTipster.Entity.Tip.Types.Race]]' //to type //'System.Collections.Generic.IEnumerable`1[BetsTipster.Entity.Tip.Types.Race]'. return result.ToList(); } May be someone have some ideas about that.

    Read the article

  • A Digg-like rotating homepage of popular content, how to include date as a factor?

    - by Ferdy
    I am building an advanced image sharing web application. As you may expect, users can upload images and others can comment on them, vote on them, and favorite them. These events determine the popularity of the image, which I capture in a "karma" field. Now I want to create a Digg-like homepage system, showing the most popular images. It's easy, since I already have the weighted karma score. I just sort on it in descending order to show the 20 most valued images. The part that is missing is time. I do not want extremely popular images to always be on the homepage. I guess an easy solution is to restrict the result set to the last 24 hours. However, I'm also thinking that in order to keep images rotating throughout the day, time could be a variable whose offset influences the image's sorting. Specific questions: Would you recommend the easy scenario (just sort for the best images within 24 hours) or the more sophisticated one (use the datetime offset as part of the sorting)? If you advise the latter, any help on the mathematical solution to this? Would it be best to run a scheduled service to mark images for the homepage, or would you advise a direct query (I'm using MySQL)? As an extra note, the homepage should support paging and on a quiet day should include entries from previous days in order to make sure it is always "filled". I'm not asking the community to build this algorithm, just looking for some advice :)
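
    If the more sophisticated option is chosen, one common shape for the math is a "hotness" score that divides karma by a power of the item's age, so popular images sink on their own as they get older. A hedged MySQL sketch (the images table and its karma/created columns are assumed from the description; the +2 offset and 1.5 exponent are tuning knobs, not magic numbers):

        SELECT id,
               karma,
               created,
               karma / POW(TIMESTAMPDIFF(HOUR, created, NOW()) + 2, 1.5) AS hot_score
        FROM   images
        ORDER  BY hot_score DESC
        LIMIT  20;

    Because the score decays continuously, a direct query like this stays "filled" on quiet days (older but popular items simply rank high again), which sidesteps the scheduled-service question unless the table grows large enough that precomputing scores becomes worthwhile.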

    Read the article

  • Write a sql for updating data based on time

    - by Lu Lu
    Hello everyone, I am new to SQL Server and T-SQL, so I will need your help. I have 2 tables: Realtime and EOD. To illustrate my question, here is example data for the two tables:

    ---Realtime table---
    Symbol  Date                 Value
    ABC     1/3/2009 03:05:01    327   // this day does not exist in EOD -> insert
    BBC     1/3/2009 03:05:01    458   // this day does not exist in EOD -> insert
    ABC     1/2/2009 03:05:01    326   // this day is new -> update
    BBC     1/2/2009 03:05:01    454   // this day is new -> update
    ABC     1/2/2009 02:05:01    323
    BBC     1/2/2009 02:05:01    453
    ABC     1/2/2009 01:05:01    313
    BBC     1/2/2009 01:05:01    423

    ---EOD table---
    Symbol  Date                 Value
    ABC     1/2/2009 02:05:01    323
    BBC     1/2/2009 02:05:01    453

    I need to create a stored procedure to update the values of symbols. If the Realtime data for a symbol's day is newer than what EOD holds (comparing Realtime and EOD), the procedure should update the value and date of the EOD row for that day if one exists, otherwise insert a new row. After running, the EOD table should contain the new data:

    ---EOD table---
    Symbol  Date                 Value
    ABC     1/3/2009 03:05:01    327
    BBC     1/3/2009 03:05:01    458
    ABC     1/2/2009 03:05:01    326
    BBC     1/2/2009 03:05:01    454

    P.S. I use SQL Server 2005, and I have a similar answered question here: http://stackoverflow.com/questions/2726369/help-to-the-way-to-write-a-query-for-the-requirement Please help me. Thanks.
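
    A hedged sketch of the kind of statement pair such a stored procedure could use (SQL Server 2005, so no MERGE): take the latest Realtime row per symbol per calendar day, update the EOD rows that already exist for that day, and insert the rest. Table and column names follow the example above; treat this as a starting point, not a finished procedure.

        -- 1. Latest Realtime row per (Symbol, calendar day).
        SELECT r.Symbol, r.[Date], r.Value
        INTO   #latest
        FROM   Realtime r
        JOIN  (SELECT Symbol,
                      DATEADD(day, DATEDIFF(day, 0, [Date]), 0) AS DayStart,
                      MAX([Date]) AS MaxDate
               FROM   Realtime
               GROUP  BY Symbol, DATEADD(day, DATEDIFF(day, 0, [Date]), 0)) m
          ON   m.Symbol = r.Symbol AND m.MaxDate = r.[Date];

        -- 2. Update EOD rows that already exist for that symbol/day.
        UPDATE e
        SET    e.[Date] = l.[Date],
               e.Value  = l.Value
        FROM   EOD e
        JOIN   #latest l
          ON   l.Symbol = e.Symbol
         AND   DATEDIFF(day, e.[Date], l.[Date]) = 0
        WHERE  l.[Date] > e.[Date];

        -- 3. Insert the symbol/day combinations EOD has never seen.
        INSERT INTO EOD (Symbol, [Date], Value)
        SELECT l.Symbol, l.[Date], l.Value
        FROM   #latest l
        WHERE  NOT EXISTS (SELECT 1
                           FROM   EOD e
                           WHERE  e.Symbol = l.Symbol
                             AND  DATEDIFF(day, e.[Date], l.[Date]) = 0);

        DROP TABLE #latest;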

    Read the article

  • JDO difficulties in retrieving persistent vector

    - by Michael Omer
    I know there are already some posts regarding this subject, but although I tried using them as a reference, I am still stuck. I have a persistent class as follows: @PersistenceCapable(identityType = IdentityType.APPLICATION) public class GameObject implements IMySerializable{ @PrimaryKey @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY) protected Key m_databaseKey; @NotPersistent private final static int END_GAME_VAR = -1000; @Persistent(defaultFetchGroup = "true") protected GameObjectSet m_set; @Persistent protected int m_databaseType = IDatabaseAccess.TYPE_NONE; where GameObjectSet is: @PersistenceCapable(identityType = IdentityType.APPLICATION) @FetchGroup(name = "mySet", members = {@Persistent(name = "m_set")}) public class GameObjectSet { @PrimaryKey @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY) private Key id; @Persistent private Vector<GameObjectSetPair> m_set; and GameObjectSetPair is: @PersistenceCapable(identityType = IdentityType.APPLICATION) public class GameObjectSetPair { @PrimaryKey @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY) private Key id; @Persistent private String key; @Persistent(defaultFetchGroup = "true") private GameObjectVar value; When I try to fetch the entire structure by fetching the GameObject, the set doesn't have any elements (they are all null) I tried adding the fetching group to the PM, but to no avail. This is my fetching code Vector<GameObject> ret = new Vector<GameObject>(); PersistenceManager pm = PMF.get().getPersistenceManager(); pm.getFetchPlan().setMaxFetchDepth(-1); pm.getFetchPlan().addGroup("mySet"); Query myQuery = pm.newQuery(GameObject.class); myQuery.setFilter("m_databaseType == objectType"); myQuery.declareParameters("int objectType"); try { List<GameObject> res = (List<GameObject>)myQuery.execute(objectType); ret = new Vector<GameObject>(res); for (int i = 0; i < ret.size(); i++) { ret.elementAt(i).getSet(); ret.elementAt(i).getSet().touchSet(); } } catch (Exception e) { } finally { pm.close(); } Does anyone have any idea? Thanks Mike

    Read the article

  • MySQL: Complex Join Statement involving two tables and a third correlation table

    - by Stephen
    I have two tables that were built for two disparate systems. I have records in one table (called "leads") that represent customers, and records in another table (called "manager") that are the exact same customers but "manager" uses different fields (For example, "leads" contains an email address, and "manager" contains two fields for two different emails--either of which might be the email from "leads"). So, I've created a correlation table that contains the lead_id and manager_id. currently this correlation table is empty. I'm trying to query the "leads" table to give me records that match either "manager" email field with the single "leads" email field, while at the same time ignoring fields that have already been added to the "correlated" table. (this way I can see how many leads that match have not yet been correlated.) Here's my current, invalid SQL attempt: SELECT leads.id, manager.id FROM leads, manager LEFT OUTER JOIN correlation ON correlation.lead_id = leads.id WHERE correlation.id IS NULL AND leads.project != "someproject" AND (manager.orig_email = leads.email OR manager.dest_email = leads.email) AND leads.created BETWEEN '1999-01-01 00:00:00' AND '2010-05-10 23:59:59' ORDER BY leads.created ASC; I get the error: Unknown column 'leads.id' in 'on clause' Before you wonder: there are records in the "leads" table where leads.project != "someproject" and leads.created falls between those dates. I've included those additional parameters for completeness.
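
    A hedged sketch of the usual fix: since MySQL 5.0, the comma-style join binds more loosely than JOIN ... ON, so `leads` is not yet in scope inside the correlation ON clause, which is exactly the "Unknown column 'leads.id' in 'on clause'" error. Writing every join explicitly avoids it (tables and conditions copied from the query above):

        SELECT leads.id, manager.id
        FROM leads
        INNER JOIN manager
                ON manager.orig_email = leads.email
                OR manager.dest_email = leads.email
        LEFT OUTER JOIN correlation
                ON correlation.lead_id = leads.id
        WHERE correlation.id IS NULL
          AND leads.project != 'someproject'
          AND leads.created BETWEEN '1999-01-01 00:00:00' AND '2010-05-10 23:59:59'
        ORDER BY leads.created ASC;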

    Read the article

  • Full Text Search in MSSQL2008 shows wrong display_item for Thai language

    - by ensecoz
    I am working with MSSQL2008. My task is to investigate the issue where FTS cannot find the right result for Thai. First, I have the table which enables the FTS on the column 'ItemName' which is nvarchar. The Catalog is created with the Thai Language. Note that the Thai language is one of the languages that doesn't separate the word by spaces, so '????' '???' '????' are written like this in a sentence: '???????????' In the table, there are many rows that include the word (????); for example row#1 (ItemName: '???????????') On the webpage, I try to search for '????' but SQLServer cannot find it. So I try to investigate it by trying the following query in SQLServer: select * from sys.dm_fts_parser(N'"???????????"', 1054, 0, 0) ...to see how the words are broken. The first one is the text to be broken. The second parameter is to specify that we're using Thai (WorkBreaker, so on). Here is the result: row#1 (display_item: '????', source_item: '???????????') row#2 (display_item: '????', source_item: '???????????') row#3 (display_item: '??', source_item: '???????????') Notice that the first and second row display the wrong display_item '?' in the '????' isn't even Thai characters. '?' in '????' is not a Thai character either. So the question is where did those alien characters come from? I guess this why I cannot search for '????' because the word breaker is broken and keeping the wrong character in the indexes. Please help!

    Read the article

  • Select * from 'many to many' SQL relationship

    - by Rampant Creative Group
    I'm still learning SQL and my brain is having a hard time with this one. Say I have 3 tables: teams, players, and teams_players as my link table. All I want to do is run a query to get each team and the players on them. I tried this: SELECT * FROM teams INNER JOIN teams_players ON teams.id = teams_players.team_id INNER JOIN players ON teams_players.player_id = players.id But it returned a separate row for each player on each team. Is JOIN the right way to do it or should I be doing something else?

    Edit: Ok, so from what I'm hearing, this isn't necessarily a bad way to do it. I'll just have to group the data by team while I'm doing my loop. I have not yet tried the modified SQL statements provided, but I will today and get back to you. To answer the question about structure - I guess I wasn't thinking about the returned row structure, which is part of what led to my confusion. In this particular case, each team is limited to 4 players (or fewer), so the structure that would be helpful to me is something like the following:

    teams.id, teams.name, players.id, players.name, players.id, players.name, players.id, players.name, players.id, players.name
    1, Team ABC, 1, Jim, 2, Bob, 3, Ned, 4, Roy
    2, Team XYZ, 2, Bob, 3, Ned, 5, Ralph, 6, Tom
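
    A hedged sketch of one way to get a single row per team in MySQL: keep the same joins, but collapse the players with GROUP_CONCAT. (The name columns on teams and players are assumed from the desired output above.)

        SELECT t.id,
               t.name,
               GROUP_CONCAT(p.name ORDER BY p.name SEPARATOR ', ') AS players
        FROM   teams t
        INNER JOIN teams_players tp ON tp.team_id = t.id
        INNER JOIN players p        ON p.id       = tp.player_id
        GROUP  BY t.id, t.name;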

    Read the article

  • NHibernate CreateSqlQuery and object graph

    - by magellings
    Hello, I'm a newbie to NHibernate. I'd like to make one SQL query to the database using joins across my three tables. I have an Application with many Roles with many Users. I'm trying to get NHibernate to properly form the object graph starting with the Application object. For example, if I have 10 application records, I want 10 application objects, and those objects have their roles, which have their users. What I'm getting, however, resembles a Cartesian product in which I have as many Application objects as total User records. I've looked into this quite a bit and am not sure if it is possible to form the application hierarchy correctly. I can only get the flattened objects to come back. It seems "maybe" possible, as in my research I've read about "grouped joins" and "hierarchical output" with an upcoming LINQ to NHibernate release. Again, though, I'm a newbie. [Update: Based on Frans' comment in Ayende's post here, I'm guessing what I want to do is not possible http://ayende.com/Blog/archive/2008/12/01/solving-the-select-n1-problem.aspx ] Thanks for your time in advance. Session.CreateSQLQuery(@"SELECT a.ID, a.InternalName, r.ID, r.ApplicationID, r.Name, u.UserID, u.RoleID FROM dbo.[Application] a JOIN dbo.[Roles] r ON a.ID = r.ApplicationID JOIN dbo.[UserRoleXRef] u ON u.RoleID = r.ID") .AddEntity("app", typeof(RightsBasedSecurityApplication)) .AddJoin("role", "app.Roles") .AddJoin("user", "role.RightsUsers") .List<RightsBasedSecurityApplication>().AsQueryable();

    Read the article

  • How to avoid overlapping date ranges when using a grouping clause?

    - by k rey
    I have a situation where I need to find time spans between value changes. I tried a simple group by clause but it eliminates overlapping changes. Consider the following example:

        create table #items ( code varchar(4), class varchar(4), txdate datetime )
        insert into #items (code, class, txdate) values ('A', 'C', '2010-01-01');
        insert into #items (code, class, txdate) values ('A', 'C', '2010-01-02');
        insert into #items (code, class, txdate) values ('A', 'C', '2010-01-03');
        insert into #items (code, class, txdate) values ('A', 'D', '2010-01-04');
        insert into #items (code, class, txdate) values ('A', 'D', '2010-01-05');
        insert into #items (code, class, txdate) values ('A', 'C', '2010-01-06');
        insert into #items (code, class, txdate) values ('A', 'C', '2010-01-07');
        insert into #items (code, class, txdate) values ('A', 'D', '2010-01-08');
        insert into #items (code, class, txdate) values ('A', 'D', '2010-01-09');

        select code, class, min(txdate) mindate, max(txdate) maxdate
        from #items
        group by code, class

    This returns the following results (notice the overlapping date ranges):

        |code|class|mindate   |maxdate   |
        ----------------------------------
        |A   |C    |2010-01-01|2010-01-07|
        |A   |D    |2010-01-04|2010-01-09|

    I would like to have the query return the following:

        |code|class|mindate   |maxdate   |
        ----------------------------------
        |A   |C    |2010-01-01|2010-01-03|
        |A   |D    |2010-01-04|2010-01-05|
        |A   |C    |2010-01-06|2010-01-07|
        |A   |D    |2010-01-08|2010-01-09|

    Any ideas and suggestions?
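
    A hedged sketch of the usual "gaps and islands" approach (SQL Server 2005 and later): the difference between a row number taken over all rows and a row number taken per class is constant within each unbroken run of the same class, so grouping by that difference splits the runs apart instead of merging them.

        SELECT code,
               class,
               MIN(txdate) AS mindate,
               MAX(txdate) AS maxdate
        FROM (
            SELECT code, class, txdate,
                   ROW_NUMBER() OVER (PARTITION BY code        ORDER BY txdate)
                 - ROW_NUMBER() OVER (PARTITION BY code, class ORDER BY txdate) AS grp
            FROM #items
        ) AS runs
        GROUP BY code, class, grp
        ORDER BY mindate;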

    Read the article

  • Modify request querystring parameters to build a new link without resorting to string manipulation

    - by Andrew M
    Hi All, I want to dynamically populate a link with the URI of the current request, but set one specific query string parameter. All other querystring parameters (if there are any) should be left untouched. And I don't know in advance what they might be. Eg, imagine I want to build a link back to the current page, but with the querystring parameter "valueOfInterest" always set to be "wibble" (I'm doing this from the code-behind of an aspx page, .Net 3.5 in C# FWIW). Eg, a request for either of these two: /somepage.aspx /somepage.aspx?valueOfInterest=sausages would become: /somepage.aspx?valueOfInterest=wibble And most importantly (perhaps) a request for: /somepage.aspx?boring=something /somepage.aspx?boring=something&valueOfInterest=sausages would preserve the boring params to become: /somepage.aspx?boring=something&valueOfInterest=wibble Caveats: I'd like to avoid string manipulation if there's something more elegant in asp.net that is more robust. However if there isn't something more elegant, so be it. I've done (a little) homework: I found a blog post which suggested copying the request into a local HttpRequest object, but that still has a read-only collection for the querystring params. I've also had a look at using a URI object, but that doesn't seem to have a querystring

    Read the article

  • Right way of making multi-site and multi-lingual website on CodeIgniter

    - by DR.GEWA
    Hi there. Before anything else, let me thank you all! Really, guys, you help a lot. When I finish my web site and have lots of time to watch how the user base grows, I will come here again and again to answer other people's questions (if I can). So here is the problem. I made a web site on CodeIgniter - a social network engine, something like phpfox, classmates.com or facebook. Right now it is not really multilingual: the UI strings are in the view files, and the next step will be to move them to the language files. I want the user to have the ability to change the language, so I assume that in the database the user will have a column "lang_local" which would be set to "en" by default and then to any other language the user changes to. What is eating my nerves and energy is the following. I will build several demographic social networks on this engine, and I would like to manage these web sites in a centralized manner with one backend. Whenever I want to make a new network, I just add the domain settings, install the script in a new folder, and add the site to a sites table in the database. I see it like this: every table in the database (users, comments, messages, categories, etc.) will have a column site_id, and on each add/update/delete query I add a WHERE SITE_ID=XXX; a table sites(site_id, site_name, domain_name) will hold all the domains, so that in the backend I can filter data by web site. Is this a good way? What if I later need to go multi-server - what about load balancing? Who can tell me what would be the right, PROFESSIONAL way? My maximum user limit for one database is something like 10,000 to start and 100,000 users within one or two years.
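
    A hedged sketch of the shared-schema layout described above (MySQL-flavoured DDL; table and column names beyond site_id, lang_local and sites are illustrative): every tenant-owned table carries a site_id pointing at sites, every query filters on it, and a composite index that leads with site_id keeps those filters cheap.

        CREATE TABLE sites (
            site_id     INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            site_name   VARCHAR(100)  NOT NULL,
            domain_name VARCHAR(255)  NOT NULL UNIQUE
        );

        CREATE TABLE users (
            user_id    INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            site_id    INT UNSIGNED  NOT NULL,
            username   VARCHAR(64)   NOT NULL,
            lang_local CHAR(5)       NOT NULL DEFAULT 'en',
            UNIQUE KEY uq_site_username (site_id, username),
            CONSTRAINT fk_users_site FOREIGN KEY (site_id) REFERENCES sites (site_id)
        ) ENGINE=InnoDB;

        -- Every per-site query filters on the tenant:
        SELECT user_id, username, lang_local
        FROM   users
        WHERE  site_id = 3;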

    Read the article

  • Access 2003 VBA: Return only the index of the last item selected in a ListBox

    - by Eric D. Johnson
    I will preface this with saying, this is my first time using listboxes and earlier posts were criticized for lacking detail. So, all help is greatly appreciated and I hope this is enough information without being overkill. Currently, I have a listbox updating a junction table with an on click event (iterates through selected items and if they are not in the table it adds them). The list box is also updated by an option group (based on the option group value a query populates the list with the appropriate items and they are selected/highlighted based on the junction table). Also, when items are a "sub-category" the "category" is also selected. This functions perfectly until I ask it to do more... Problem 1: I need to differentiate "categories" of items from each other. So, I have included a blank item to the list box to add a space between categories. When the blank items are present the listbox does not update the junction table properly and vice versa. Problem 2: My users want to be able to deselect the "category" under certain circumstances. This is fine, just de-select the "category" after the "sub-category" is selected. However, the "category" is re-selected whenever the listbox is clicked again because it iterates through all entries. Perceived solution for both problems: Return only the index of the item (de)selected and manipulate accordingly. Is this possible? If so, how? OR: Should I take a different approach?

    Read the article

  • Theory: "Lexical Encoding"

    - by _ande_turner_
    I am using the term "Lexical Encoding" for my lack of a better one. A Word is arguably the fundamental unit of communication as opposed to a Letter. Unicode tries to assign a numeric value to each Letter of all known Alphabets. What is a Letter to one language is a Glyph to another. Unicode 5.1 currently assigns more than 100,000 values to these Glyphs. Out of the approximately 180,000 Words being used in Modern English, it is said that with a vocabulary of about 2,000 Words, you should be able to converse in general terms. A "Lexical Encoding" would encode each Word not each Letter, and encapsulate them within a Sentence. // A simplified example of a "Lexical Encoding" String sentence = "How are you today?"; int[] sentence = { 93, 22, 14, 330, QUERY }; In this example each Token in the String was encoded as an Integer. The Encoding Scheme here simply assigned an int value based on a generalised statistical ranking of word usage, and assigned a constant to the question mark. Ultimately, a Word has both a Spelling & Meaning though. Any "Lexical Encoding" would preserve the meaning and intent of the Sentence as a whole, and not be language specific. An English sentence would be encoded into "...language-neutral atomic elements of meaning ..." which could then be reconstituted into any language with a structured Syntactic Form and Grammatical Structure. What are other examples of "Lexical Encoding" techniques? If you are interested in where the word-usage statistics come from: http://www.wordcount.org

    Read the article

  • Google App Engine Needs Index Error

    - by Andrew Johnson
    I am currently getting a needs index error on my app engine app: http://www.gaiagps.com/wiki/home. I believe this index should have been created automatically by my index.yaml file (see below). Googling a bit, I think I just need to wait for my index to be built. Is this correct, or do I need to do something manually? Is there some sort of index-building queue? My tables are very, very small right now. EDIT: I added the line "indexes:" to my app.yaml, and now app engine reports the index is building, so I think this is fixed. It's weird that this file was wrong considering I've never touched it. indexes: # AUTOGENERATED # This index.yaml is automatically updated whenever the dev_appserver # detects that a new type of query is run. If you want to manage the # index.yaml file manually, remove the above marker line (the line # saying "# AUTOGENERATED"). If you want to manage some indexes # manually, move them above the marker line. The index.yaml file is # automatically uploaded to the admin console when you next deploy # your application using appcfg.py. - kind: Revision properties: - name: name - name: created The app works on my dev server, but not in production. However, on my dev console, I have noticed this error (EDIT: THIS ERROR IS GONE NOW THAT I ADDED indexes: to the app.yaml file above): ERROR 2009-10-18 04:46:51,908 dev_appserver_index.py:176] Error parsing /gaiagps.com/index.yaml: 'NoneType' object is not callable in "<string>", line 13, column 3: - kind: Revision ^

    Read the article

  • Association and model data saving problem

    - by Zhlobopotam
    Developing with CakePHP 1.3 (latest from github). There are 2 models bound with hasAndBelongsToMany: documents and tags. In other words, a document can have many tags. I've added a new document submission form where the user can enter a list of tags separated by commas (a new tag will be added if it doesn't already exist). I looked at the CakePHP Bakery 2.0 source code on github and found the solution, but it seems that something is wrong. class Document extends AppModel { public $hasAndBelongsToMany = array('Tag'); public function beforeSave($options = array()) { if (isset($this->data[$this->alias]['tags']) && !empty($this->data[$this->alias]['tags'])) { $tagIds = $this->Tag->saveDocTags($this->data[$this->alias]['tags']); unset($this->data[$this->alias]['tags']); $this->data[$this->Tag->alias][$this->Tag->alias] = $tagIds; } return true; } } class Tag extends AppModel { public $hasAndBelongsToMany = array('Document'); public function saveDocTags($commalist = '') { if ($commalist == '') return null; $tags = explode(',', $commalist); if (empty($tags)) return null; $existing = $this->find('all', array( 'conditions' => array('title' => $tags) )); $return = Set::extract($existing, '/Tag/id'); if (sizeof($existing) == sizeof($tags)) { return $return; } $existing = Set::extract($existing, '/Tag/title'); foreach ($tags as $tag) { if (!in_array($tag, $existing)) { $this->create(array('title' => $tag)); $this->save(); $return[] = $this->id; } } return $return; } } So, new tag creation works well, but the Document model can't save the association data and reports: SQL Error: 1054: Unknown column 'Array' in 'field list' Query: INSERT INTO documents (title, content, shortnfo, date, status) VALUES ('Document with tags', '', '', Array, 1) Any ideas how to solve this problem?

    Read the article
