Search Results

Search found 54131 results on 2166 pages for 'database project'.


  • Inserting Rows in a Relationship using a Strongly Typed DataSet

    - by Manuel Faux
    I'm using ADO.NET with a strongly typed DataSet in C# (.NET 3.5). I want to insert a new row into each of two tables that are related in a 1:n relation. The table Attachments holds the primary key part of the relation and the table InvoiceAttachments holds the foreign key part.

        AttachmentsDataSet.InvoiceRow invoice;   // Set to a valid row, also referenced in InvoiceAttachments
        AttachmentsDataSet.AttachmentsRow attachment;

        attachment = attachmentsDataSet.Attachments.AddAttachmentsRow("Name", "Description");
        attachmentsDataSet.InvoiceAttachments.AddInvoiceAttachmentsRow(invoice, attachment);

    Of course, when I update the InvoiceAttachments table first, I get a foreign key violation from SQL Server, so I tried updating the Attachments table first. That creates the rows, but it removes the attachment association in the InvoiceAttachments table. Why? How do I solve this problem?


  • SQL Server: A Grouping question that's annoying me

    - by user366729
    I've been working with SQL Server for the better part of a decade, and this grouping (or partitioning, or ranking... I'm not sure what the answer is!) one has me stumped. Feels like it should be an easy one, too. I'll generalize my problem:

    Let's say I have 3 employees (don't worry about them quitting or anything... there's always 3), and I keep up with how I distribute their salaries on a monthly basis.

        Month  Employee  PercentOfTotal
        -------------------------------
        1      Alice     25%
        1      Barbara   65%
        1      Claire    10%
        2      Alice     25%
        2      Barbara   50%
        2      Claire    25%
        3      Alice     25%
        3      Barbara   65%
        3      Claire    10%

    As you can see, I've paid them the same percent in Months 1 and 3, but in Month 2, I've given Alice the same 25%, but Barbara got 50% and Claire got 25%. What I want to know is all the distinct distributions I've ever given. In this case there would be two -- one for months 1 and 3, and one for month 2. I'd expect the results to look something like this (NOTE: the ID, or sequencer, or whatever, doesn't matter):

        ID  Employee  PercentOfTotal
        -------------------------------
        X   Alice     25%
        X   Barbara   65%
        X   Claire    10%
        Y   Alice     25%
        Y   Barbara   50%
        Y   Claire    25%

    Seems easy, right? I'm stumped! Anyone have an elegant solution? I just put together this solution while writing this question, which seems to work, but I'm wondering if there's a better way. Or maybe a different way from which I'll learn something.

        WITH temp_ids (Month) AS
        (
            SELECT DISTINCT MIN(Month)
            FROM employees_paid
            GROUP BY PercentOfTotal
        )
        SELECT EMP.Month, EMP.Employee, EMP.PercentOfTotal
        FROM employees_paid EMP
        JOIN temp_ids IDS ON EMP.Month = IDS.Month
        GROUP BY EMP.Month, EMP.Employee, EMP.PercentOfTotal

    Thanks y'all! -Ricky
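
    One possible alternative, sketched below with the table and column names from the question: build a single text "signature" per month that encodes the whole distribution, then keep one representative month per distinct signature. This is a hedged sketch, not a definitive answer; it assumes SQL Server 2005 or later (for the CTE and the FOR XML PATH('') concatenation trick) and that PercentOfTotal casts cleanly to a string.

        -- Hedged sketch: one representative month per distinct salary distribution.
        WITH signatures AS
        (
            SELECT DISTINCT
                   Month,
                   Signature = (SELECT e2.Employee + '=' + CAST(e2.PercentOfTotal AS VARCHAR(20)) + ';'
                                FROM employees_paid e2
                                WHERE e2.Month = e1.Month
                                ORDER BY e2.Employee
                                FOR XML PATH(''))
            FROM employees_paid e1
        ),
        representatives AS
        (
            SELECT MIN(Month) AS Month
            FROM signatures
            GROUP BY Signature
        )
        SELECT e.Month AS ID, e.Employee, e.PercentOfTotal
        FROM employees_paid e
        JOIN representatives r ON r.Month = e.Month
        ORDER BY e.Month, e.Employee;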


  • archiving table records to another table by trigger (move daily table records to weekly table)

    - by sirvan
    I have written this trigger in MySQL 5:

        create trigger changeToWeeklly after insert on tbl_daily
        for each row
        begin
            insert into tbl_weeklly
                SELECT * FROM vehicleslocation v WHERE v.recivedate < curdate();
            delete FROM tbl_daily WHERE recivedate < curdate();
        end;

    I want to archive records by date: move yesterday's inserted records from the daily table to the weekly table, move last week's records from the weekly table to the monthly table, and delete the moved records from the previous table. This trigger raises the following error when an insert into the daily table occurs: "Can't update table 'tbl_daily' in stored function/trigger because it is already used by statement which invoked this stored function/trigger." Please help me solve the problem of archiving old data across these related tables. If there is a reliable solution, please tell me.
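
    MySQL will not let a trigger modify the table that fired it, so one commonly suggested workaround is to do the archiving from a scheduled event rather than a trigger. Below is a hedged sketch using the table and column names from the question; it assumes MySQL 5.1 or later (the event scheduler does not exist in 5.0) and that the rows to archive live in tbl_daily.

        -- Hedged sketch: daily archiving via the event scheduler instead of a trigger.
        -- (In the mysql client, wrap the BEGIN...END body with a changed DELIMITER.)
        SET GLOBAL event_scheduler = ON;

        CREATE EVENT archive_daily_to_weekly
        ON SCHEDULE EVERY 1 DAY
        STARTS CURRENT_DATE + INTERVAL 1 DAY
        DO
        BEGIN
            INSERT INTO tbl_weeklly
                SELECT * FROM tbl_daily WHERE recivedate < CURDATE();
            DELETE FROM tbl_daily WHERE recivedate < CURDATE();
        END;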


  • B-trees, B+ trees difference

    - by dta
    In a B-tree you can store both keys and data in the internal and leaf nodes, but in a B+ tree you have to store the data in the leaf nodes only. Is there any advantage of doing the above in a B+ tree? Why not use B-trees instead of B+ trees everywhere? Intuitively, they seem much faster. I mean, why do you need to replicate the keys (of the data) in a B+ tree?


  • SQL to retrieve the latest records, grouping by unique foreign keys

    - by jbox
    I'm creating a query to retrieve the latest posts in a forum using a SQL DB. I've got a table called "Post". Each post has a foreign key relation to a "Thread" and a "User", as well as a creation date. The trick is I don't want to show two posts by the same user or two posts in the same thread. Is it possible to create a query that contains all this logic?

        # Grab the last 10 posts.
        SELECT id, user_id, thread_id
        FROM posts
        ORDER BY created_at DESC
        LIMIT 10;

        # Grab the last 10 posts, max one post per user.
        SELECT id, user_id, thread_id
        FROM posts
        GROUP BY user_id
        ORDER BY created_at DESC
        LIMIT 10;

        # Grab the last 10 posts, max one post per user, max one post per thread???
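
    A hedged sketch of one way to approach the third query, using window functions (assumes MySQL 8.0+ or PostgreSQL; the question doesn't name the engine), with the table and column names from the first snippet. It keeps a post only if it is the newest post both for its author and for its thread, which enforces both "max one per user" and "max one per thread", though it may return fewer than 10 rows.

        -- Hedged sketch: latest posts, at most one per user and one per thread.
        SELECT id, user_id, thread_id
        FROM (
            SELECT id, user_id, thread_id, created_at,
                   ROW_NUMBER() OVER (PARTITION BY user_id   ORDER BY created_at DESC) AS rn_user,
                   ROW_NUMBER() OVER (PARTITION BY thread_id ORDER BY created_at DESC) AS rn_thread
            FROM posts
        ) ranked
        WHERE rn_user = 1 AND rn_thread = 1
        ORDER BY created_at DESC
        LIMIT 10;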


  • Performance problem: data warehouse with lots of indexes

    - by Lieven Cardoen
    Our product takes tests from some 350 candidates at the same time. At the end of a test, the results for each candidate are moved to a data warehouse that has lots of indexes on it. For each test there are some 400 records to be entered into the warehouse, so 400 x 350 is a lot of records. If there are not many records in the data warehouse yet, all goes well. But once the warehouse already holds lots of records, many of the inserts fail... Is there a way to have indexes that are only rebuilt at the end of the day, or isn't that the real problem? How would you solve this?
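
    One commonly suggested pattern is to pause index maintenance during the bulk load and rebuild the indexes once afterwards. The sketch below is hedged: it assumes a SQL Server warehouse (the question doesn't name the engine), and the table and index names are hypothetical.

        -- Hedged sketch: disable nonclustered indexes during the load, rebuild later.
        ALTER INDEX IX_Results_ByCandidate ON dbo.TestResults DISABLE;

        -- ... bulk insert the ~400 x 350 result rows here ...

        -- Rebuild once (e.g. in a nightly job); rebuilding also re-enables the index.
        ALTER INDEX ALL ON dbo.TestResults REBUILD;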


  • is it possible to have an SQLite database in a SQL Server field?

    - by Behrooz
    I think my question may seem vague. I am trying to save user settings in SQL Server, but the problem can be expressed in these terms: "it needs 20 tables with circular dependencies, and I have enough tables to fill 3 database diagrams". So the best way my brain has come up with is to save the settings as an SQLite database in a field, like this:

        Index | Name    | Data
        ------+---------+------------------------
        1     | Behrooz | *sqlite database here*
        2     | User1   | *sqlite database here*
        ...

    Is this way the right way? Is it stupid? Should I create more tables instead of doing all this? Does it increase database fragmentation?
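
    If the blob-in-a-field route is taken, a hedged sketch of what the SQL Server side might look like is below. The table and column names follow the example in the question; storing the file as varbinary(max) is an assumption, and the inserted value is just the SQLite file-header magic, shown purely for illustration.

        -- Hedged sketch: one serialized SQLite file per user, stored as a blob.
        CREATE TABLE dbo.UserSettings (
            [Index] INT IDENTITY(1,1) PRIMARY KEY,
            Name    NVARCHAR(100) NOT NULL,
            Data    VARBINARY(MAX) NULL      -- raw bytes of the per-user SQLite file
        );

        INSERT INTO dbo.UserSettings (Name, Data)
        VALUES (N'Behrooz', 0x53514C69746520666F726D6174203300);  -- "SQLite format 3\0" header bytes, illustration only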


  • Light-weight client/server DB?

    - by OverTheRainbow
    Hello,

    (This question falls between programming and finding a tool, so I guess I'll ask here in SO since it has more activity than SuperUser.)

    I like the simplicity of SQLite, but by design it doesn't support concurrent access. The apps I write don't have heavy needs, so I'd like to avoid heavier solutions like MySQL that are more difficult to deploy (remote customers with usually no computer personnel). Does someone know of a good solution that would offer the following features?

    - Client available for VB.Net applications.
    - The server itself doesn't have to be a .Net application. Actually, I'd rather have a bare-metal server so that it can run even on embedded Linux hosts with less RAM/CPU than regular PCs.
    - Easy install: the client part should either be statically linked inside the client application or be available as a single DLL, and the server should just be a single EXE listening for queries, à la Fossil (http://www.fossil-scm.org).
    - Clients can locate the server on the LAN by broadcasting data picked up by the server, so users don't have to write down the IP address and paste it into each client.
    - Open-source, or moderately priced closed-source.

    Thank you.


  • Empty data problem - data layer or DAL?

    - by luckyluke
    I'm designing a new app now and giving the following question a lot of thought. I consume a lot of data from the warehouse, and the entities have a lot of dictionary-based values (currency, country, tax-whatever data) - dimensions. I cannot be assured, though, that there won't be nulls. So I am thinking:

    - create an empty value in each of the dictionaries with a special keyID - i.e. -1
    - have the ETL (SSIS) do the correct stuff and insert -1 where it needs to
    - let the DAL know that -1 is special (static const, whatever thing)
    - don't bother in the code to check for nullness of dictionary entries, because THEY will always have a value

    But maybe I should be thinking:

    - import the data AS IS
    - let the DAL do the thinking, using the empty record pattern
    - still don't care in the code, because the business layer will have what it needs from the DAL

    I think this is more of an approach thing, but maybe I am missing something important here... What do you think? Am I clear? Please don't confuse it with the empty record problem - I do use the emptyCustomer thing all the time, and other defaults too.
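
    For the first approach, the "-1 / unknown member" dimension row is a common warehousing pattern. Below is a hedged sketch in generic SQL; the dimension and fact table names are hypothetical, since the question doesn't give a schema.

        -- Hedged sketch: an "unknown" dimension row with key -1, plus an ETL step
        -- that maps missing lookups to it so fact rows never carry NULL keys.
        INSERT INTO DimCurrency (CurrencyKey, CurrencyCode, CurrencyName)
        VALUES (-1, 'N/A', 'Unknown');

        INSERT INTO FactSales (SaleId, CurrencyKey, Amount)
        SELECT s.SaleId,
               COALESCE(d.CurrencyKey, -1),
               s.Amount
        FROM   staging_sales s
        LEFT JOIN DimCurrency d ON d.CurrencyCode = s.CurrencyCode;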


  • Datamapper, defining your own object methods, how?

    - by Dublinclontarf
    So let's say I have a class like the one below:

        class List
          include DataMapper::Resource

          property :id,       Serial
          property :username, String

          def self.my_username
            return self[:username]
          end
        end

        list = List.create(:username => 'jim')
        list.my_username

    When I run this, it tells me that the method cannot be found, and on more investigation, that you can only define class methods (not object methods) and that class methods don't have access to an object's data. Is there any way to have these methods included as object methods and get access to the object's data? I'm using Ruby 1.8.6 and the latest version of DataMapper.


  • When NOT to use Cassandra?

    - by JimJim
    There has been a lot of talk related to Cassandra lately. Twitter, Digg, Facebook, etc. all use it. When does it make sense to: use Cassandra, not use Cassandra, and use an RDBMS instead of Cassandra?


  • Importing json data into MySQL?

    - by AP257
    Pretty much what the title says :) At the moment I'm using Python to turn the JSON data into a plain-text tab-separated file, and then mysqlimport to pull that into my MySQL tables. Does anyone know a nicer / more direct way?
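
    For what it's worth, later MySQL versions can do this without the intermediate tab-separated file. The sketch below is hedged: it assumes MySQL 5.7 or later (which postdates the original question) and uses hypothetical table and column names.

        -- Hedged sketch: stage raw JSON documents, then extract fields in SQL.
        CREATE TABLE staging_json (doc JSON);

        INSERT INTO staging_json (doc)
        VALUES ('{"name": "Alice", "age": 30}'),
               ('{"name": "Bob",   "age": 25}');

        INSERT INTO people (name, age)
        SELECT JSON_UNQUOTE(JSON_EXTRACT(doc, '$.name')),
               JSON_EXTRACT(doc, '$.age')
        FROM staging_json;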


  • Re-using aggregate level formulas in SQL - any good tactics?

    - by Cade Roux
    Imagine this case, but with a lot more component buckets and a lot more intermediates and outputs. Many of the intermediates are calculated at the detail level, but a few things are calculated at the aggregate level:

        DECLARE @Profitability AS TABLE
        (
            Cust INT NOT NULL
           ,Category VARCHAR(10) NOT NULL
           ,Income DECIMAL(10, 2) NOT NULL
           ,Expense DECIMAL(10, 2) NOT NULL
        ) ;

        INSERT INTO @Profitability VALUES ( 1, 'Software', 100, 50 ) ;
        INSERT INTO @Profitability VALUES ( 2, 'Software', 100, 20 ) ;
        INSERT INTO @Profitability VALUES ( 3, 'Software', 100, 60 ) ;
        INSERT INTO @Profitability VALUES ( 4, 'Software', 500, 400 ) ;
        INSERT INTO @Profitability VALUES ( 5, 'Hardware', 1000, 550 ) ;
        INSERT INTO @Profitability VALUES ( 6, 'Hardware', 1000, 250 ) ;
        INSERT INTO @Profitability VALUES ( 7, 'Hardware', 1000, 700 ) ;
        INSERT INTO @Profitability VALUES ( 8, 'Hardware', 5000, 4500 ) ;

        SELECT Cust
              ,Profit = SUM(Income - Expense)
              ,Margin = SUM(Income - Expense) / SUM(Income)
        FROM @Profitability
        GROUP BY Cust

        SELECT Category
              ,Profit = SUM(Income - Expense)
              ,Margin = SUM(Income - Expense) / SUM(Income)
        FROM @Profitability
        GROUP BY Category

        SELECT Profit = SUM(Income - Expense)
              ,Margin = SUM(Income - Expense) / SUM(Income)
        FROM @Profitability

    Notice how the same formulae have to be used at the different aggregation levels. This results in code duplication. I have thought of using UDFs (either scalar, or table-valued with an OUTER APPLY, since many of the final results may share intermediates which have to be calculated at the aggregate level), but in my experience scalar and multi-statement table-valued UDFs perform very poorly. I have also thought about using more dynamic SQL and applying the formulas by name, basically. Any other tricks, techniques or tactics for keeping these kinds of formulae, which need to be applied at different levels, in sync and/or organized?
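
    One other tactic worth sketching: put the formulas in a single view that computes every aggregation level at once with GROUPING SETS, so each formula is written exactly once. This is a hedged sketch, assuming SQL Server 2008+ and a permanent dbo.Profitability table standing in for the @Profitability table variable from the question.

        -- Hedged sketch: one view, one copy of each formula, three aggregation levels.
        CREATE VIEW dbo.ProfitabilityRollup
        AS
        SELECT Cust
              ,Category
              ,Profit = SUM(Income - Expense)
              ,Margin = SUM(Income - Expense) / SUM(Income)
              ,IsCustLevel     = CASE WHEN GROUPING(Category) = 1 AND GROUPING(Cust) = 0 THEN 1 ELSE 0 END
              ,IsCategoryLevel = CASE WHEN GROUPING(Cust) = 1 AND GROUPING(Category) = 0 THEN 1 ELSE 0 END
        FROM dbo.Profitability
        GROUP BY GROUPING SETS ( (Cust), (Category), () );

        -- Usage: per-customer numbers come from the Cust-level rows.
        -- SELECT Cust, Profit, Margin FROM dbo.ProfitabilityRollup WHERE IsCustLevel = 1;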


  • Move a million records from a MEMORY table to a MyISAM table

    - by Prashant
    Hi, I am looking for a fast way to move records from a MEMORY table to a MyISAM table. The MEMORY table has around 0.5 million records. Both tables have exactly the same structure (same number of columns, data types, etc.), but the MyISAM table is indexed (B-TREE) on a few columns. There are around 25 columns, most of which are unsigned integers. I have already tried using an "INSERT INTO ... SELECT * FROM ..." query. But is there any faster way to do this? Appreciate your help. Prashant
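
    A commonly suggested variation is to defer index maintenance on the MyISAM side while the bulk copy runs. Hedged sketch below; the table names are hypothetical (the question doesn't give them), and note that ALTER TABLE ... DISABLE KEYS only affects non-unique indexes on MyISAM.

        -- Hedged sketch: copy with index maintenance deferred to one rebuild pass.
        ALTER TABLE myisam_target DISABLE KEYS;

        INSERT INTO myisam_target
        SELECT * FROM memory_source;

        ALTER TABLE myisam_target ENABLE KEYS;   -- rebuilds the non-unique indexes in one go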


  • How can I speed up queries against tables I cannot add indexes to?

    - by RenderIn
    I access several tables remotely via DB Link. They are very normalized and the data in each is effective-dated. Of the millions of records in each table, only a subset of ~50k are current records. The tables are internally managed by a commercial product that will throw a huge fit if I add indexes or make alterations to its tables in any way. What are my options for speeding up access to these tables?
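
    One option that doesn't touch the vendor's tables at all is to copy just the current subset into a local table you are free to index, refreshed on a schedule. The sketch below is hedged: it assumes Oracle (suggested by the "DB Link" terminology), and the table, column, and link names as well as the "current record" condition are all hypothetical.

        -- Hedged sketch: local, indexable snapshot of the ~50k current rows.
        CREATE TABLE local_current_rows AS
        SELECT *
        FROM   remote_table@remote_link
        WHERE  effective_end_date IS NULL;      -- stand-in for the real "current" test

        CREATE INDEX ix_local_current_rows_key ON local_current_rows (business_key);

        -- Refresh from a scheduled job:
        -- TRUNCATE TABLE local_current_rows;
        -- INSERT INTO local_current_rows
        -- SELECT * FROM remote_table@remote_link WHERE effective_end_date IS NULL;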


  • adding one-time options to items

    - by rap-uvic
    Hello, I'm building an event registration site. For any given event, we'll have a handful of items to choose from; I have a table for these items. For each event we might have special options for users. For example, for one of the events, new users get to buy an item which is not available to other users. This may not apply to all the events; for other events we might have some other restriction on items. I will obviously be checking this programmatically on the application side. I would like to set up a flag column in the items table, but I don't find that feasible, because the condition may only apply to one particular event, and I don't want all future items to have this column. What is a good approach to take in such a situation? Should I create a special "restrictions" table and just do a join? How would I handle this on the application side?
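
    The separate "restrictions" table mentioned at the end is the usual way to keep one-off rules out of the items table. A hedged sketch in generic SQL follows; every table and column name here is hypothetical, since the question doesn't show a schema.

        -- Hedged sketch: one row per event/item restriction, joined in when listing items.
        CREATE TABLE item_restrictions (
            event_id    INT NOT NULL,
            item_id     INT NOT NULL,
            restriction VARCHAR(50) NOT NULL,     -- e.g. 'NEW_USERS_ONLY'
            PRIMARY KEY (event_id, item_id, restriction)
        );

        -- Items for one event, with any restriction the application must enforce:
        SELECT i.item_id, i.name, r.restriction
        FROM   items i
        LEFT JOIN item_restrictions r
               ON r.item_id = i.item_id
              AND r.event_id = i.event_id
        WHERE  i.event_id = 123;                  -- 123: the event being displayed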


  • Searching 2 fields at the same time

    - by donpal
    I have a table of first and last names:

        firstname   lastname
        ---------   ---------
        Joe         Robertson
        Sally       Robert
        Jim         Green
        Sandra      Jordan

    I'm trying to search this table based on an input that consists of the full name. For example, input: Joe Robert. I thought about using SELECT * FROM tablename WHERE firstname LIKE ..., but the table stores the first and last name separately, so I'm not sure how to do the search in this case.
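
    A hedged sketch of one common approach: compare the input against the two columns concatenated together. MySQL syntax is assumed (the question doesn't name the engine; SQL Server would use firstname + ' ' + lastname instead of CONCAT), and @needle is a hypothetical stand-in for the user's input.

        -- Hedged sketch: match the full-name input against "firstname lastname".
        SET @needle = 'Joe Robert';

        SELECT *
        FROM   tablename
        WHERE  CONCAT(firstname, ' ', lastname) LIKE CONCAT('%', @needle, '%');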


  • how to effectively modify an index

    - by daedlus
    Hey everyone,

    Problem: I am looking for the right way to convert an index from clustered to non-clustered.

    Description: I have a table as below in a Sybase DB:

        dbo.UserLog
        Id | UserId | time | ....

    This is hash-partitioned using UserId. Currently it has 2 indexes:

    - UserId: non-clustered
    - time: clustered

    This table has about 20 million records. I now want to make UserId the clustered index and time a non-clustered index. Is it correct to use ALTER INDEX to change from clustered to non-clustered, or do I drop the indexes and recreate them? Does the fact that UserId is used in hash partitioning have any implications for this? To me, ALTER seems the way to go, but I have not yet tried it.
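
    For reference, a drop-and-recreate sketch is below. It is hedged: the syntax is Sybase ASE-style as best I recall, the index names are hypothetical, and whether ALTER INDEX can flip clustered/non-clustered in your Sybase version should be checked against its documentation. Rebuilding a clustered index on ~20 million rows rewrites the table, so a maintenance window is advisable.

        -- Hedged sketch: swap which column owns the clustered index by dropping and recreating.
        DROP INDEX UserLog.ix_userlog_time
        DROP INDEX UserLog.ix_userlog_userid

        CREATE CLUSTERED INDEX ix_userlog_userid ON dbo.UserLog (UserId)
        CREATE NONCLUSTERED INDEX ix_userlog_time ON dbo.UserLog (time)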


  • Why is Magento so slow?

    - by mr-euro
    Is Magento usually so terribly slow? This is my first experience with it, and the admin panel simply takes ages to load and to save changes. It is a default installation with the test data. The server it is hosted on serves other, non-Magento sites super fast. What is it about the PHP code that Magento uses that makes it so slow, and what can be done to fix it?

