Search Results

Search found 28900 results on 1156 pages for 'sql 2005'.


  • How to create custom get and set methods for Linq2SQL object

    - by optician
    I have some objects which have been created automatically by Linq2SQL. I would like to write some code which should be run whenever the properties on these objects are read or changed. Can I use a typical get { //code } and set { //code } in my partial class file to add this functionality? Currently I get an error about this member already being defined, which all makes sense. Is it correct that I will have to create a method to serve as the entry point for this functionality, since I cannot redefine the get and set methods for this property? I was hoping to just update the get and set, as this would mean I wouldn't have to change all the reference points in my app. But I think I may just have to update it everywhere.
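
    For illustration, a minimal sketch of the hook that LINQ to SQL itself exposes for the write side: the designer-generated setters call partial methods named On<Property>Changing / On<Property>Changed, which a partial class can implement. The entity name Customer and property Name below are hypothetical; the getter has no such hook, so read-side logic still needs a separate wrapper member.

        public partial class Customer
        {
            // Runs inside the generated setter, just before the new value is stored.
            partial void OnNameChanging(string value)
            {
                // validation / audit logic for writes
            }

            // Runs inside the generated setter, after the value has changed.
            partial void OnNameChanged()
            {
                // react to the completed change
            }

            // There is no hook for reads, so expose a wrapper if read logic is needed.
            public string NameForDisplay
            {
                get { /* custom read logic here */ return Name; }
            }
        }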

    Read the article

  • ER Diagram flaws

    - by spacker_lechuck
    I have the following ER Diagram for a bank database - customers may have several accounts, accounts may be held jointly by several customers, and each customer is associated with an account set and accounts are members of one or more account sets. What design rules are violated? What modifications should be made and why? So far, a few flaws I'm not sure about are: 1) Redundant owner-address attribute in AcctSets Entity. 2) This ER does not include accounts with multiple owners with different addresses. My Question is: How would I go about fixing these flaws and/or other flaws that I may be missing from my analysis? Thanks!

    Read the article

  • TooManyRowsAffectedException with encrypted triggers

    - by Jon Masters
    I'm using NHibernate to update 2 columns in a table that has 3 encrypted triggers on it. The triggers are not owned by me and I cannot make changes to them, so unfortunately I can't SET NOCOUNT ON inside of them. Is there another way to get around the TooManyRowsAffectedException that is thrown on commit? Update 1 So far the only way I've gotten around the issue is to step around the .Save routine with

        var query = session.CreateSQLQuery("update Orders set Notes = :Notes, Status = :Status where OrderId = :Order");
        query.SetString("Notes", orderHeader.Notes);
        query.SetString("Status", orderHeader.OrderStatus);
        query.SetInt32("Order", orderHeader.OrderHeaderId);
        query.ExecuteUpdate();

    It feels dirty and is not easy to extend, but it doesn't crater.

    Read the article

  • How do I save an object that contains an EntitySet?

    - by Pete
    Say I have a User (mapped to a User table) and the Edit view (in MVC) displays a MultiSelectList of Modules (mapped to a Modules table) that the user can access, with the Modules pre-selected based on the User's EntitySet of Modules. I have tried saving the User, then deleting all User_Modules manually and adding them back based on what's selected on submit, but the User has a null EntitySet for User.User_Modules. I cannot find the correct way to handle this scenario anywhere online. Can anyone help?
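
    For illustration, a hedged sketch of one common pattern: rebuild the link rows inside a single unit of work. The names (MyDataContext, Users, User_Modules, UserID, ModuleID, selectedModuleIds) are guesses based on the question, not the actual model.

        // selectedModuleIds holds the values posted from the MultiSelectList.
        using (var scope = new System.Transactions.TransactionScope())
        using (var db = new MyDataContext())
        {
            var user = db.Users.Single(u => u.UserID == userId);

            // 1) Delete the existing link rows and flush, so the deletes reach
            //    the database before any re-inserted rows.
            db.User_Modules.DeleteAllOnSubmit(user.User_Modules.ToList());
            db.SubmitChanges();

            // 2) Add one link row per module selected in the view.
            foreach (var moduleId in selectedModuleIds)
            {
                user.User_Modules.Add(new User_Module { ModuleID = moduleId });
            }
            db.SubmitChanges();

            scope.Complete();   // both steps commit together
        }

    The key point is to work against a User instance loaded from the context rather than the one built by the model binder, whose association collections are not wired to anything.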

    Read the article

  • Using Linq2Sql to insert data into multiple tables using an auto incremented primary key

    - by Thomas
    I have a Customer table with a primary key (int, auto increment) and an Address table with a foreign key to the Customer table. I am trying to insert both rows into the database in one nice transaction.

        using (DatabaseDataContext db = new DatabaseDataContext())
        {
            Customer newCustomer = new Customer() { Email = customer.Email };
            Address b = new Address()
            {
                CustomerID = newCustomer.CustomerID,
                Address1 = billingAddress.Address1
            };
            db.Customers.InsertOnSubmit(newCustomer);
            db.Addresses.InsertOnSubmit(b);
            db.SubmitChanges();
        }

    When I run this I was hoping that the Customer and Address tables would automatically get the correct keys in the database, since the context knows this is an auto incremented key and will do two inserts with the right key in both tables. The only way I can get this to work is to do SubmitChanges() on the Customer object first, then create the Address and do SubmitChanges() on that as well. This creates two round trips to the database, and I would like to see if I can do this in one transaction. Is it possible? Thanks
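
    A hedged sketch of the association-based approach, assuming the DBML contains the foreign-key association between Customer and Address so that the generated Address class exposes a Customer property. Setting the association (instead of copying the not-yet-generated CustomerID) lets the context insert the parent first, read back the identity value, and fix up the child's key, all within the single transaction that SubmitChanges already uses.

        using (DatabaseDataContext db = new DatabaseDataContext())
        {
            Customer newCustomer = new Customer { Email = customer.Email };

            Address b = new Address
            {
                Address1 = billingAddress.Address1,
                Customer = newCustomer      // set the association, not the raw FK value
            };

            db.Customers.InsertOnSubmit(newCustomer);
            db.Addresses.InsertOnSubmit(b);
            db.SubmitChanges();             // both inserts, correct keys, one transaction
        }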

    Read the article

  • Strange: Planner picks the plan with the lower cost, but (very) long query runtime

    - by S38
    Facts:

    - PGSQL 8.4.2, Linux
    - I make use of table inheritance
    - Each table contains 3 million rows
    - Indexes on joining columns are set
    - Table statistics (analyze, vacuum analyze) are up-to-date
    - Only used table is "node" with various partitioned sub-tables
    - Recursive query (pg 8.4)

    Now here is the explained query:

        WITH RECURSIVE rows AS (
            SELECT * FROM (
                SELECT r.id, r.set, r.parent, r.masterid
                FROM d_storage.node_dataset r
                WHERE masterid = 3533933
            ) q
            UNION ALL
            SELECT * FROM (
                SELECT c.id, c.set, c.parent, r.masterid
                FROM rows r
                JOIN a_storage.node c ON c.parent = r.id
            ) q
        )
        SELECT r.masterid, r.id AS nodeid FROM rows r

        QUERY PLAN
        CTE Scan on rows r  (cost=2742105.92..2862119.94 rows=6000701 width=16) (actual time=0.033..172111.204 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..2742105.92 rows=6000701 width=28) (actual time=0.029..172111.183 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.025..0.027 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Hash Join  (cost=0.33..262208.33 rows=600070 width=28) (actual time=40628.371..57370.361 rows=1 loops=3)
                        Hash Cond: (c.parent = r.id)
                        ->  Append  (cost=0.00..211202.04 rows=12001404 width=20) (actual time=0.011..46365.669 rows=12000004 loops=3)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.002..0.002 rows=0 loops=3)
                              ->  Seq Scan on node_dataset c  (cost=0.00..55001.01 rows=3000001 width=20) (actual time=0.007..3426.593 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=0.008..9049.189 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten_adresse c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=3.455..8381.725 rows=3000001 loops=3)
                              ->  Seq Scan on node_testdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=1.810..5259.178 rows=3000001 loops=3)
                        ->  Hash  (cost=0.20..0.20 rows=10 width=16) (actual time=0.010..0.010 rows=1 loops=3)
                              ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.002..0.004 rows=1 loops=3)
        Total runtime: 172111.371 ms
        (16 rows)

    So far so bad; the planner decides to choose hash joins (good) but no indexes (bad). Now, after doing the following: SET enable_hashjoin TO false; the explained query looks like this:

        QUERY PLAN
        CTE Scan on rows r  (cost=15198247.00..15318261.02 rows=6000701 width=16) (actual time=0.038..49.221 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..15198247.00 rows=6000701 width=28) (actual time=0.032..49.201 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.028..0.031 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Nested Loop  (cost=0.00..1507822.44 rows=600070 width=28) (actual time=10.384..16.382 rows=1 loops=3)
                        Join Filter: (r.id = c.parent)
                        ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.001..0.003 rows=1 loops=3)
                        ->  Append  (cost=0.00..113264.67 rows=3001404 width=20) (actual time=8.546..12.268 rows=1 loops=4)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.001..0.001 rows=0 loops=4)
                              ->  Bitmap Heap Scan on node_dataset c  (cost=58213.87..113214.88 rows=3000001 width=20) (actual time=1.906..1.906 rows=0 loops=4)
                                    Recheck Cond: (c.parent = r.id)
                                    ->  Bitmap Index Scan on node_dataset_parent  (cost=0.00..57463.87 rows=3000001 width=0) (actual time=1.903..1.903 rows=0 loops=4)
                                          Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_parent on node_stammdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=3.272..3.273 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_adresse_parent on node_stammdaten_adresse c  (cost=0.00..8.60 rows=1 width=20) (actual time=4.333..4.333 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_testdaten_parent on node_testdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=2.745..2.746 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
        Total runtime: 49.349 ms
        (21 rows)

    Incredibly faster, because indexes were used. Notice: the cost of the second query is somewhat higher than for the first query. So the main question is: why does the planner make the first decision instead of the second? Also interesting: via SET enable_seqscan TO false; I temporarily disabled seq scans. Then the planner used indexes and hash joins, and the query was still slow. So the problem seems to be the hash join. Maybe someone can help in this confusing situation? thx, R.

    Read the article

  • Finding the number of days between two dates to make dynamic columns

    - by Chandradyani
    Dear all, I have a select query that currently produces the following results:

        DoctorName  Team  1 2 3 4 5 6 7 ... 31  Visited
        dr. As      A               x x   ...    2 times
        dr. Sc      A             x       ...    1 times
        dr. Gh      B                   x ...    1 times
        dr. Nd      C                     ... x  1 times

    Using the following query:

        DECLARE @startdate DATETIME = '1/1/2010', @enddate DATETIME = '1/31/2010'

        SELECT d.doctorname, t.teamname,
               MAX(CASE WHEN ca.visitdate = 1 THEN 'x' ELSE NULL END) AS [1],
               MAX(CASE WHEN ca.visitdate = 2 THEN 'x' ELSE NULL END) AS [2],
               MAX(CASE WHEN ca.visitdate = 3 THEN 'x' ELSE NULL END) AS [3],
               ...
               MAX(CASE WHEN ca.visitdate = 31 THEN 'x' ELSE NULL END) AS [31],
               COUNT(*) AS visited
        FROM CACTIVITY ca
        JOIN DOCTOR d ON d.id = ca.doctorid
        JOIN TEAM t ON t.id = ca.teamid
        WHERE ca.visitdate BETWEEN @startdate AND @enddate
        GROUP BY d.doctorname, t.teamname

    The problem is that I want the date columns to be dynamic. For example, if ca.visitdate BETWEEN '2/1/2012' AND '2/29/2012', the result should be:

        DoctorName  Team  1 2 3 4 5 6 7 ... 29  Visited
        (same rows as above)

    Can somebody help me work out the number of days between the two dates, and help me revise the query so it loops MAX(CASE WHEN ca.visitdate = n THEN 'x' ELSE NULL END) AS [n] as many times as there are days? Please please
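
    For illustration only, a client-side sketch of the usual dynamic-pivot idea: compute the day count from the two dates and emit one MAX(CASE ...) column per day before sending the SQL. Table and column names are copied from the question; the connection code around it is assumed, and the same loop could equally be written in T-SQL with a WHILE and EXEC(@sql).

        DateTime startDate = new DateTime(2010, 1, 1);
        DateTime endDate   = new DateTime(2010, 1, 31);
        int dayCount = (endDate - startDate).Days + 1;      // 31 days in this range

        var columns = new System.Text.StringBuilder();
        for (int day = 1; day <= dayCount; day++)
        {
            columns.AppendFormat(
                ", MAX(CASE WHEN ca.visitdate = {0} THEN 'x' ELSE NULL END) AS [{0}]", day);
        }

        string sql =
            "SELECT d.doctorname, t.teamname" + columns.ToString() +
            ", COUNT(*) AS visited " +
            "FROM CACTIVITY ca " +
            "JOIN DOCTOR d ON d.id = ca.doctorid " +
            "JOIN TEAM t ON t.id = ca.teamid " +
            "WHERE ca.visitdate BETWEEN @startdate AND @enddate " +
            "GROUP BY d.doctorname, t.teamname";
        // Execute sql with SqlCommand, passing @startdate and @enddate as parameters.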

    Read the article

  • How to differentiate between two similar fields in Linq Join tables

    - by Azhar
    How can I differentiate between two fields with the same name in a select new, e.g. c.Description and lt.Description?

        DataTable lDt = new DataTable();
        try
        {
            lDt.Columns.Add(new DataColumn("AreaTypeID", typeof(Int32)));
            lDt.Columns.Add(new DataColumn("CategoryRef", typeof(Int32)));
            lDt.Columns.Add(new DataColumn("Description", typeof(String)));
            lDt.Columns.Add(new DataColumn("CatDescription", typeof(String)));
            EzEagleDBDataContext lDc = new EzEagleDBDataContext();
            var lAreaType = (from lt in lDc.tbl_AreaTypes
                             join c in lDc.tbl_AreaCategories on lt.CategoryRef equals c.CategoryID
                             where lt.AreaTypeID == pTypeId
                             select new { lt.AreaTypeID, lt.Description, lt.CategoryRef, c.Description }).ToArray();
            for (int j = 0; j < lAreaType.Count; j++)
            {
                DataRow dr = lDt.NewRow();
                dr["AreaTypeID"] = lAreaType[j].LandmarkTypeID;
                dr["CategoryRef"] = lAreaType[j].CategoryRef;
                dr["Description"] = lAreaType[j].Description;
                dr["CatDescription"] = lAreaType[j].;
                lDt.Rows.Add(dr);
            }
        }
        catch (Exception ex)
        {
        }
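
    A minimal sketch of the usual fix: give the colliding members explicit names in the anonymous type and read them back by those names. Everything else mirrors the question's query; AreaDescription/CatDescription are just the names chosen here.

        var lAreaType = (from lt in lDc.tbl_AreaTypes
                         join c in lDc.tbl_AreaCategories on lt.CategoryRef equals c.CategoryID
                         where lt.AreaTypeID == pTypeId
                         select new
                         {
                             lt.AreaTypeID,
                             AreaDescription = lt.Description,   // explicit name for lt.Description
                             lt.CategoryRef,
                             CatDescription = c.Description      // explicit name for c.Description
                         }).ToArray();

        // later, when filling the DataRow:
        dr["Description"]    = lAreaType[j].AreaDescription;
        dr["CatDescription"] = lAreaType[j].CatDescription;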

    Read the article

  • Best way to store sales tax information

    - by Seph
    When designing a stock management database system (sales / purchases), what would be the best way to store the various taxes and other such amounts? A few of the fields that could be saved are:

    - Unit price excluding tax
    - Unit price including tax
    - Tax per item
    - Total excluding tax (rounded to 2 decimals)
    - Total including tax (rounded to 2 decimals)
    - Total tax (rounded to 2 decimals)

    Currently the most reasonable solution so far is storing down (roughly) item, quantity, total excluding tax (rounded) and the total tax (rounded). Can anyone suggest a better way of storing these details for a generic system? Also, given that the system needs to be robust, what should be done if there were multiple tax values (e.g. state and city) which might need to be separated? In this case a separate table would be in order, but would it be considered excessive to just have a rowID and some taxID mapping to a totalTax column?

    Read the article

  • Using outer query result in a subquery in postgresql

    - by brad
    I have two tables, points and contacts, and I'm trying to get the average points.score per contact grouped on a monthly basis. Note that points and contacts aren't related; I just want the sum of points created in a month divided by the number of contacts that existed in that month. So, I need to sum points grouped by the created_at month, and I need to take the count of contacts FOR THAT MONTH ONLY. It's that last part that's tripping me up. I'm not sure how I can use a column from an outer query in the subquery. I tried something like this:

        SELECT SUM(score) AS points_sum,
               EXTRACT(month FROM created_at) AS month,
               date_trunc('MONTH', created_at) + INTERVAL '1 month' AS next_month,
               (SELECT COUNT(id) FROM contacts WHERE contacts.created_at <= next_month) AS contact_count
        FROM points
        GROUP BY month, next_month
        ORDER BY month

    So, I'm extracting the actual month that my points are being summed, and at the same time getting the beginning of the next_month, so that I can say "Get me the count of contacts where their created_at is < next_month". But it complains that column next_month doesn't exist. This is understandable, as the subquery knows nothing about the outer query. Qualifying with points.next_month doesn't work either. So can someone point me in the right direction of how to achieve this? Tables:

        Points
        score | created_at
        10    | "2011-11-15 21:44:00.363423"
        11    | "2011-10-15 21:44:00.69667"
        12    | "2011-09-15 21:44:00.773289"
        13    | "2011-08-15 21:44:00.848838"
        14    | "2011-07-15 21:44:00.924152"

        Contacts
        id | created_at
        6  | "2011-07-15 21:43:17.534777"
        5  | "2011-08-15 21:43:17.520828"
        4  | "2011-09-15 21:43:17.506452"
        3  | "2011-10-15 21:43:17.491848"
        1  | "2011-11-15 21:42:54.759225"

        sum, month and next_month (without the subselect)
        sum | month | next_month
        14  | 7     | "2011-08-01 00:00:00"
        13  | 8     | "2011-09-01 00:00:00"
        12  | 9     | "2011-10-01 00:00:00"
        11  | 10    | "2011-11-01 00:00:00"
        10  | 11    | "2011-12-01 00:00:00"

    Read the article

  • Autoincrement uniqueidentifier in C#

    - by drasto
    Basically I want to use a uniqueidentifier in a similar way as identity. I don't want to insert values into it; it should just insert values automatically, a different value for each row. I'm not able to set autoincrement on columns of type uniqueidentifier (the property 'autoincrement' is set to false and is not editable).
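
    Two common routes, stated with the caveat that the exact tooling in use here isn't known: on the SQL Server side, a uniqueidentifier column can't use the identity/autoincrement property (that applies to numeric types), but it can be given a DEFAULT of NEWID() or NEWSEQUENTIALID() so the server fills it in; alternatively the value can be generated in client code. A minimal client-side sketch, with made-up table and column names:

        using (var conn = new System.Data.SqlClient.SqlConnection(connectionString))
        using (var cmd = new System.Data.SqlClient.SqlCommand(
            "INSERT INTO Widgets (WidgetId, Name) VALUES (@id, @name)", conn))
        {
            cmd.Parameters.AddWithValue("@id", Guid.NewGuid());   // a fresh value per row
            cmd.Parameters.AddWithValue("@name", "example");
            conn.Open();
            cmd.ExecuteNonQuery();
        }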

    Read the article

  • ORA-01722: invalid number

    - by Lluis Martinez
    I'm getting the infamous invalid number Oracle error. Hibernate is issuing an INSERT with a lot of columns, I want to know just the name of the column giving the problem. Is it possible? I hate Oracle messages, in 15 years they haven't improved a bit (the reason why is beyond my imagination). FYI the insert is this: insert into GEM_INVOICE_HEADER (ENDORSEE_ACCOUNT_ID, INVOICE_CODE, APPROVAL_ORGAN, APROVAL_DATE, APROVAL_REFERENCE, BALANCE_BASE_AMOUNT, BALANCE_DEDUCT_AMOUNT, BALANCE_TOTAL_AMOUNT, BALANCE_VAT_AMOUNT, BALANCE_VAT_DED_AMOUNT, BALANCE_VAT_NOT_DED_AMOUNT, DESCRIPTION, SUPPLIER_INVOICE_NUMBER, INVOICE_DATE, RECEIPT_DATE, MEMO, VAT_INTRACOM, INVOICE_BASE_AMOUNT, INVOICE_VAT_AMOUNT, INVOICE_VAT_DED_AMOUNT, INVOICE_VAT_NOT_DED_AMOUNT, INVOICE_DEDUCT_AMOUNT, INVOICE_TOTAL_AMOUNT, VAT_EXEMPT, RECTIFICATION_SIGN, REASON, LOT, FILE_ID, RETAINED, INSTITUTION_ID, PERIOD_CODE, IS_RECTIFIED, DEFAULT_OFFBUDGET_ACCOUNT, OFFBUDGET_DOC_ID, PHASE_OF_ACCOUNTING, ACCOUNTED_OFF_BUDGET, CANCEL_DOC_ID, BUDGET_TYPE, INVOICE_TYPE, SOURCE_ID, STATE_ID, MANAGER_UNIT_ID, DOCUMENT_TYPE_CODE, ACCOUNTED_DOC_ID, ACCOUNTING_LIST, ENDORSEE_ID, PAYMASTER_ID, SUPPLIER_ID, SUPPLIER_ACCOUNT_ID, PAY_JUSTIFY_ID, PETTY_CASH_ID, DBOID) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)

    Read the article

  • Converting delimited string to multiple values in mysql

    - by epo
    I have a MySQL legacy table which contains a client identifier and a list of items, the latter as a comma-delimited string, e.g. "xyz001", "foo,bar,baz". This is legacy stuff and the user insists on being able to edit a comma delimited string. They now have a requirement for a report table with the above broken into separate rows, e.g.

        "xyz001", "foo"
        "xyz001", "bar"
        "xyz001", "baz"

    Breaking the string into substrings is easily doable and I have written a procedure to do this by creating a separate table, but that requires triggers to deal with deletes, updates and inserts. This query is required rarely (say once a month) but has to be absolutely up to date when it is run, so e.g. the overhead of triggers is not warranted and scheduled tasks to create the table might not be timely enough. Is there any way to write a function to return a table or a set so that I can join the identifier with the individual items on demand?
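
    A hedged, client-side alternative rather than the in-database function the question asks for: since the report runs only about once a month, the delimited column can be expanded in code at report time. Requires using MySql.Data.MySqlClient and System.Collections.Generic; the table and column names (legacy_table, client_id, items) are made up.

        var rows = new List<(string ClientId, string Item)>();
        using (var conn = new MySqlConnection(connectionString))
        using (var cmd = new MySqlCommand("SELECT client_id, items FROM legacy_table", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    string clientId = reader.GetString(0);
                    foreach (var item in reader.GetString(1).Split(','))
                    {
                        rows.Add((clientId, item.Trim()));   // one output row per item
                    }
                }
            }
        }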

    Read the article

  • Chain LINQ IQueryable, and end with Stored Procedure

    - by Alex
    I'm chaining search criteria in my application through IQueryable extension methods, e.g.:

        public static IQueryable<Fish> AtAge(this IQueryable<Fish> fish, Int32 age)
        {
            return fish.Where(f => f.Age == age);
        }

    However, I also have a full text search stored procedure:

        CREATE PROCEDURE [dbo].[Fishes_FullTextSearch]
            @searchtext nvarchar(4000),
            @limitcount int
        AS
        SELECT Fishes.*
        FROM Fishes
        INNER JOIN CONTAINSTABLE(Fishes, *, @searchtext, @limitcount) AS KEY_TBL
            ON Fishes.Id = KEY_TBL.[KEY]
        ORDER BY KEY_TBL.[Rank]

    The stored procedure obviously doesn't return IQueryable; however, is it possible to somehow limit the result set of the stored procedure using IQueryables? I'm envisioning something like .AtAge(5).AboveWeight(100).Fishes_FulltextSearch("abc"). In this case, the full text search should execute on a smaller subset of my Fishes table (narrowed by Age and Weight). Is something like this possible? Sample code?

    Read the article

  • Can't figure out how to list all the people that don't live in same City as...

    - by AspOnMyNet
    I'd like to list all the people (Person table) that don't live in the same city as the cities listed in the Location table. Thus, if the Location table holds a record with City='New York' and State='Moon', but the Person table holds a record with FirstName='Someone', City='New York' and State='Mars', then Someone is listed in the resulting set, since she lives in the New York located on Mars and not the New York located on Moon; we're talking about different cities with the same name. I tried solving it with the following query, but the results are wrong:

        SELECT Person.FirstName, Person.LastName, Person.City, Person.State
        FROM Person
        INNER JOIN Location
            ON (Person.City <> Location.City AND Person.State = Location.State)
            OR (Person.City = Location.City AND Person.State <> Location.State)
            OR (Person.City <> Location.City AND Person.State <> Location.State)
        ORDER BY Person.LastName;

    Any ideas?

    Read the article

  • ASP.NET MVC - Bizarre problem - suddenly lost all LINQ to SQL data context objects

    - by MikeD
    I was making an edit to a long-existing project. Specifically, I added some fields to a table and had to delete the table from the LINQ to SQL designer and re-add it. I also had to do the same for a view. I made some other code changes and went to build. Now my project won't build because it can't resolve any of the data context objects (all tables and views) in my code. I don't know what I did or how this happened. I have many tables and views in the project's L2S data context, so I don't want to redo it from scratch. Please, any suggestions on how to resolve this problem are greatly appreciated. Desperate! The error messages I am getting are the familiar: The type or namespace name 'equipment' could not be found (are you missing a using directive or an assembly reference?)

    Read the article

  • Saving data to server with user accounts.

    - by AKRamkumar
    Ok, so for an app I am making, I want the user to be able to save data online. On my website, I will provide a web server with tables of UserName/Password/SaveData. How can I do this without overloading the server? How can I guarantee security? Is there a design pattern for this? Is there a better way of doing this? This is going to be a free application, available to the public, and I would like their settings to be available no matter which computer they are using. I am using MEF for plugins, so is there a way I can save plugin data as well?

    Read the article

  • Can I serialize an object if I didn't write the class used to instantiate that object?

    - by Richard77
    Hello, I have a simple class:

        [Serializable]
        public class MyClass
        {
            public String FirstName { get; set; }
            public String LastName { get; set; }

            // Below is what I would like to do, but it's not working.
            // I get an exception.
            ContactDataContext db = new ContactDataContext();

            public void Save()
            {
                Contact contact = new Contact();
                contact.FirstName = FirstName;
                contact.LastName = LastName;
                db.Contacts.InsertOnSubmit(contact);
                db.SubmitChanges();
            }
        }

    I wanted to attach a Save method to the class so that I could call it on each object. When I introduced the above statement which contains ContactDataContext, I got the following error: "In assembly ... PublicKeyToken=null' is not marked as serializable". It's clear that the DataContext class is generated by the framework. I checked and did not see where that class was marked serializable. What can I do to overcome that? What's the rule when I'm not the author of a class? Just go ahead and mark the DataContext class as serializable, and pretend that everything will work? Thanks for helping
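
    A minimal sketch of one way around it, assuming binary serialization is what trips the error: keep the DataContext out of the serialized state entirely by creating it inside Save() (or mark a cached field [NonSerialized]). Serializers walk instance fields, so a DataContext field drags the non-serializable type in.

        [Serializable]
        public class MyClass
        {
            public String FirstName { get; set; }
            public String LastName { get; set; }

            public void Save()
            {
                // The context exists only for the duration of the call,
                // so the serializer never sees it.
                using (var db = new ContactDataContext())
                {
                    Contact contact = new Contact();
                    contact.FirstName = FirstName;
                    contact.LastName = LastName;
                    db.Contacts.InsertOnSubmit(contact);
                    db.SubmitChanges();
                }
            }
        }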

    Read the article

  • Access 2007 file picker replaces all rows with the same choice

    - by SqlStruggle
    This code is from an Access 2007 project I've been struggling with. The mean part is the line where I should restrict the update to the current form's record only:

        DoCmd.RunSQL "Update Korut Set [PikkuKuva]=('" & varFile & "') ;"

    Could someone please help me with this? If I use it now, it updates all the rows with the same file picked. Here's the whole code:

        ' This requires a reference to the Microsoft Office 11.0 Object Library.
        Dim fDialog As Office.FileDialog
        Dim varFile As Variant
        Dim filePath As String

        ' Set up the File dialog box.
        Set fDialog = Application.FileDialog(msoFileDialogFilePicker)
        With fDialog
            ' Allow the user to make multiple selections in the dialog box.
            .AllowMultiSelect = False
            ' Set the title of the dialog box.
            .Title = "Valitse Tiedosto"
            ' Clear out the current filters, and then add your own.
            .Filters.Clear
            .Filters.Add "All Files", "*.*"
            ' user picked at least one file. If the .Show method returns
            ' False, the user clicked Cancel.
            If .Show = True Then
                ' Loop through each file that is selected and then add it to the list box.
                For Each varFile In .SelectedItems
                    DoCmd.SetWarnings True
                    DoCmd.RunSQL "Update Korut Set [PikkuKuva]=('" & varFile & "') ;"
                Next
            Else
                MsgBox "You clicked Cancel in the file dialog box."
            End If
        End With

    Read the article

  • Any way to speed up this hierarchical query?

    - by RenderIn
    I've got a serious performance problem with a hierarchical query that I can't seem to fix. I am modeling several organization charts in my database, each representing a virtual organization within our company. For example, we have several temporary committees that are created from time to time and there may be a Committee Organizer role at the top of this virtual hierarchy, with several people assigned to the Committee Member role beneath the organizer. Some of our virtual organizations have many levels and several branches at each level. I have a single table in which I represent all the role assignments. i.e. a ROLE_ID column and a PARENT_ROLE_ID column which is a foreign key to the ROLE_ID column. For each assignment we also store as a column the location in the company where this person has the assignment. For example, the Committee Organizer would have a company-level/ CEO assignment, while the committee members would have department-level assignments such as ACCOUNTING, MARKETING, etc. So to model the organizer/member relationship for two individuals we would have: ROLE_ID = 4 PARENT_ROLE_ID = NULL EMPLOYEE_NUMBER = 213423 COMPANY_LOCATION = CEO ROLE_ID = 5 PARENT_ROLE_ID = 4 EMPLOYEE_NUMBER = 838221 COMPANY_LOCATION = ACCOUNTING Here's where things get tricky. I have an application that every person in the organization can log in to. When they log in they should be able to view all the virtual organizations in our company. e.g. the committee members should be able to see the committee organizer and vice-versa. However, only the committee organizer should be able to edit the committee members. The difficulty is in determining whether an individual (who can have multiple role assignments) has edit access for each other assignment. While this seems simple in the example, consider a virtual organization in which we have President at the top, 5 departments directly beneath him, 2 subdepartments below each department. We only want people in the Accounting department to be able to edit individuals in the subdepartments belonging to the Accounting department. They should not have edit access to anybody in the Marketing department or its subdepartments. To determine edit access when a user views a virtual organization in our company I run a query that executes two inline views: A) Hierarchically query for all assignments in this virtual organization and using SYS_CONNECT_BY_PATH to store the entire path to each user/role/company_location and B) Hierarchically retrieve all the assignments the individual logged in has and using the SYS_CONNECT_BY_PATH to store the entire path to each of these assignments. The result of the query is all the records from A) plus a boolean determined by joining with B) which flags whether the logged in user has edit access for each record. Indexes don't seem to be helping... it simply appears that there is too much processing going on to separate all the records and then determine edit access. One issue is that I can't store the SYS_CONNECT_BY_PATH and index it... determining whether an individual record has edit access consists of comparing if: test_record_sys_path LIKE individual_record_sys_path || '%' Is a materialized view the answer?

    Read the article

  • Re-indexing table; update with from

    - by David Thorisson
    The query says it all; I can't find the right syntax without using a for..next:

        UPDATE Webtree
        SET Webtree.Sorting = w2.Sorting
        FROM (
            SELECT BranchID,
                   CASE WHEN @Index >= ROW_NUMBER() OVER (ORDER BY Sorting ASC)
                        THEN ROW_NUMBER() OVER (ORDER BY Sorting ASC)
                        ELSE ROW_NUMBER() OVER (ORDER BY Sorting ASC) + 1
                   END AS Sorting
            FROM Webtree w2
            WHERE w2.ParentID = @ParentID
        )
        WHERE Webtree.BranchID = w2.BranchID

    Read the article

  • How can I truncate multiple tables in MySQL?

    - by Luiscencio
    I need to clear all my inventory tables. I've tried

        SELECT 'TRUNCATE TABLE ' + TABLE_NAME
        FROM INFORMATION_SCHEMA.TABLES
        WHERE TABLE_NAME LIKE 'inventory%'

    but I get this error:

        Truncated incorrect DOUBLE value: 'TRUNCATE TABLE '
        Error Code 1292

    If this is the correct way, then what am I doing wrong?
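
    The error itself comes from + being numeric addition in MySQL (string concatenation is CONCAT), and even with CONCAT the SELECT only generates the TRUNCATE statements; they still have to be executed. A hedged sketch of doing both steps from C#, assuming MySql.Data.MySqlClient (or MySqlConnector), System.Collections.Generic, and a schema name of mydatabase:

        var tables = new List<string>();
        using (var conn = new MySqlConnection(connectionString))
        {
            conn.Open();

            // Collect the matching table names first.
            using (var cmd = new MySqlCommand(
                "SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES " +
                "WHERE TABLE_SCHEMA = @schema AND TABLE_NAME LIKE 'inventory%'", conn))
            {
                cmd.Parameters.AddWithValue("@schema", "mydatabase");  // assumed schema name
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        tables.Add(reader.GetString(0));
                    }
                }
            }

            // Then issue one TRUNCATE per table.
            foreach (var table in tables)
            {
                // TRUNCATE cannot take a parameter for the table name, so it is
                // inlined; it comes from INFORMATION_SCHEMA, not user input.
                using (var cmd = new MySqlCommand("TRUNCATE TABLE `" + table + "`", conn))
                {
                    cmd.ExecuteNonQuery();
                }
            }
        }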

    Read the article
