let's say I have this query:
select * from table1 r where r.x = 5
Does the speed of this query depend on the number of rows present in table1?
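Broadly, yes: with no index on x the engine has to scan every row, so the time grows with the table; with an index it can seek straight to the matches. A small sqlite3 sketch of the difference (table1 and x are taken from the question; the data is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (x INTEGER, y TEXT)")
con.executemany("INSERT INTO table1 VALUES (?, ?)",
                [(i % 10, "row") for i in range(1000)])

def plan(sql):
    # EXPLAIN QUERY PLAN rows carry a human-readable detail string last.
    return " ".join(row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

# Without an index the engine must visit every row: expect a full table SCAN.
print(plan("SELECT * FROM table1 WHERE x = 5"))

# With an index on x, matching rows are located by a seek instead.
con.execute("CREATE INDEX idx_x ON table1 (x)")
print(plan("SELECT * FROM table1 WHERE x = 5"))
```

The same principle holds in any mainstream database, although each optimizer phrases its plan output differently.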
Hi guys,
I need to create an employee shift database. So far I have 3 tables: employee, employee_shift, and shift.
I'm supposed to calculate how many shifts an employee has done by the end of the month. My question arises because some months have 30 days, some 28, and some 31.
Does this mean I need to create 31 different variations in the shift table, one for each day of the month, in order to calculate which employee has worked the most?
My business rules say an employee has either 1 or 2 shifts per day, so do I need 60 different rows of variations? Am I right, or is there an easier way to work it out?
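You shouldn't need a shift row per calendar day: store the actual date on each worked shift and count per month. A minimal sketch of that shape using Python's sqlite3 (table names from the question; the columns and data are assumptions):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE shift (id INTEGER PRIMARY KEY, name TEXT);  -- e.g. 'day', 'night'
CREATE TABLE employee_shift (
    employee_id INTEGER REFERENCES employee(id),
    shift_id    INTEGER REFERENCES shift(id),
    work_date   TEXT  -- the actual calendar date worked
);
""")
con.execute("INSERT INTO employee VALUES (1,'Ann'),(2,'Bob')")
con.execute("INSERT INTO shift VALUES (1,'day'),(2,'night')")
con.executemany("INSERT INTO employee_shift VALUES (?,?,?)", [
    (1, 1, '2024-01-02'), (1, 2, '2024-01-02'), (2, 1, '2024-01-03'),
])

# Shifts per employee for a given month, whatever its length:
rows = con.execute("""
    SELECT e.name, COUNT(*) AS shifts
    FROM employee_shift es JOIN employee e ON e.id = es.employee_id
    WHERE strftime('%Y-%m', es.work_date) = '2024-01'
    GROUP BY e.name
    ORDER BY shifts DESC
""").fetchall()
print(rows)  # employee with the most shifts first
```

Because the month boundary is a filter on the date, month length never matters and the shift table stays tiny (one row per shift type).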
I have a hierarchical data structure which I'm displaying in a webpage as a treeview.
I want the data ordered to first show the nodes with no children, sorted alphabetically, and then under these the nodes with children, also sorted alphabetically. Currently I'm ordering all nodes in one group, which means nodes with children appear next to nodes with no children.
I'm using a recursive method to build up the treeview, which has this LINQ code at its heart:
var filteredCategory = from c in category
                       orderby c.Name ascending
                       where c.ParentCategoryId == parentCategoryId && c.Active == true
                       select c;
So this is the orderby statement I want to enhance.
Shown below is the database table structure:
CREATE TABLE [dbo].[Category](
    [CategoryId] [int] IDENTITY(1,1) NOT NULL,
    [Name] [varchar](100) NOT NULL,
    [Level] [tinyint] NOT NULL,
    [ParentCategoryId] [int] NOT NULL,
    [Selectable] [bit] NOT NULL CONSTRAINT [DF_Category_Selectable] DEFAULT ((1)),
    [Active] [bit] NOT NULL CONSTRAINT [DF_Category_Active] DEFAULT ((1))
)
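One way to get the childless-first ordering is a composite sort key: a has-children flag first, then the name. A small Python sketch of the idea with invented data; the same two-part key could drive the LINQ orderby:

```python
# Hypothetical flat category list; ParentCategoryId of None means root.
categories = [
    {"Id": 1, "Name": "Beta",  "ParentCategoryId": None},
    {"Id": 2, "Name": "Alpha", "ParentCategoryId": None},
    {"Id": 3, "Name": "Child", "ParentCategoryId": 1},
]

# Ids that appear as someone's parent have children.
parents = {c["ParentCategoryId"] for c in categories}

def sort_key(c):
    has_children = c["Id"] in parents
    # False sorts before True, so childless nodes come first,
    # and each group is alphabetical within itself.
    return (has_children, c["Name"])

roots = sorted((c for c in categories if c["ParentCategoryId"] is None),
               key=sort_key)
print([c["Name"] for c in roots])  # ['Alpha', 'Beta']
```

In the LINQ version the flag could be a subquery/Any() per node; precomputing the set of parent ids (as above) avoids an N+1 lookup.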
Hi, I'm trying to update a field in the database to the sum of its joined values:
UPDATE P
SET extrasPrice = SUM(E.price)
FROM dbo.BookingPitchExtras AS E
INNER JOIN dbo.BookingPitches AS P ON E.pitchID = P.ID
AND P.bookingID = 1
WHERE E.[required] = 1
When I run this I get the following error:
"An aggregate may not appear in the set list of an UPDATE statement."
Any ideas?
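One common workaround is to move the aggregate out of the SET list and into a correlated subquery. A sketch using Python's sqlite3, with table and column names borrowed from the question and invented data (T-SQL accepts the same correlated-subquery form):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE BookingPitches (ID INTEGER PRIMARY KEY, bookingID INTEGER, extrasPrice REAL);
CREATE TABLE BookingPitchExtras (pitchID INTEGER, price REAL, required INTEGER);
INSERT INTO BookingPitches VALUES (1, 1, NULL);
INSERT INTO BookingPitchExtras VALUES (1, 5.0, 1), (1, 2.5, 1), (1, 99.0, 0);
""")

# The aggregate lives inside a correlated subquery, not the UPDATE's SET list.
con.execute("""
    UPDATE BookingPitches
    SET extrasPrice = (
        SELECT SUM(E.price)
        FROM BookingPitchExtras AS E
        WHERE E.pitchID = BookingPitches.ID AND E.required = 1
    )
    WHERE bookingID = 1
""")
print(con.execute("SELECT extrasPrice FROM BookingPitches").fetchone())  # (7.5,)
```

In T-SQL you could alternatively keep the UPDATE ... FROM style but join against a pre-aggregated derived table.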
I am trying to find all deals' information along with how many comments each has received. My query:
select deals.*, count(comments.comments_id) as counts
from deals
left join comments on comments.deal_id = deals.deal_id
where cancelled = 'N'
But now it only shows the deals that have at least one comment. What is the problem?
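Assuming cancelled is a column on comments, the WHERE clause discards the NULL rows the left join produces for comment-less deals, effectively turning it into an inner join; moving the filter into the ON clause (and adding the missing GROUP BY) keeps those deals. A sqlite3 sketch with invented data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE deals (deal_id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE comments (comments_id INTEGER PRIMARY KEY, deal_id INTEGER, cancelled TEXT);
INSERT INTO deals VALUES (1, 'deal one'), (2, 'deal two');
INSERT INTO comments VALUES (10, 1, 'N'), (11, 1, 'Y');
""")

# Filtering in the ON clause keeps deals that have zero matching comments.
rows = con.execute("""
    SELECT d.deal_id, COUNT(c.comments_id) AS counts
    FROM deals d
    LEFT JOIN comments c
        ON c.deal_id = d.deal_id AND c.cancelled = 'N'
    GROUP BY d.deal_id
    ORDER BY d.deal_id
""").fetchall()
print(rows)  # [(1, 1), (2, 0)]
```

COUNT(c.comments_id) counts only non-NULL values, so the unmatched deal correctly reports 0.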
I have two tables like this:
Category:
Id Name
------------------
1 Cat1
2 Cat2
Feature:
Id Name CategoryId
--------------------------------
1 F1 1
2 F2 1
3 F3 2
4 F4 2
5 F5 2
In my .NET code, I have two POCO classes like this:
public class Category
{
    public int Id { get; set; }
    public string Name { get; set; }
    public IList<Feature> Features { get; set; }
}

public class Feature
{
    public int Id { get; set; }
    public int CategoryId { get; set; }
    public string Name { get; set; }
}
I am using a stored proc that returns me a result set by joining these 2 tables.
This is how my Stored Proc returns the result set.
SELECT
c.CategoryId, c.Name Category, f.FeatureId, f.Name Feature
FROM Category c
INNER JOIN
Feature f
ON c.CategoryId = f.CategoryId
ORDER BY c.Name
--Resultset produced by the above query
CategoryId CategoryName FeatureId FeatureName
---------------------------------------------------
1 Cat1 1 F1
1 Cat1 2 F2
2 Cat2 3 F3
2 Cat2 4 F4
2 Cat2 5 F5
Now if I want to build the list of categories in my .NET code, I have to loop through the result set and add features until the category changes.
This is the .NET code that builds the Categories and Features:
List<Category> categories = new List<Category>();
Int32 lastCategoryId = 0;
Category c = new Category();
using (SqlDataReader reader = cmd.ExecuteReader())
{
    while (reader.Read())
    {
        // If the category id differs from the previous row's, start a new
        // category; otherwise keep adding to the current one.
        if (lastCategoryId != Convert.ToInt32(reader["CategoryId"]))
        {
            c = new Category
            {
                Id = Convert.ToInt32(reader["CategoryId"]),
                Name = reader["CategoryName"].ToString()
            };
            c.Features = new List<Feature>();
            categories.Add(c);
        }
        lastCategoryId = Convert.ToInt32(reader["CategoryId"]);

        // Add the feature to the current category.
        c.Features.Add(new Feature
        {
            Name = reader["FeatureName"].ToString(),
            Id = Convert.ToInt32(reader["FeatureId"])
        });
    }
    return categories;
}
I was wondering if there is a better way to build the list of Categories?
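One alternative to hand-tracking lastCategoryId is to group the ordered rows. Here is the same pattern sketched in Python with itertools.groupby on invented rows; the C# analogue would be LINQ's GroupBy over the materialized result set:

```python
from itertools import groupby

# Rows as the stored proc returns them, already ordered by category.
rows = [
    (1, "Cat1", 1, "F1"),
    (1, "Cat1", 2, "F2"),
    (2, "Cat2", 3, "F3"),
    (2, "Cat2", 4, "F4"),
    (2, "Cat2", 5, "F5"),
]

categories = []
# groupby collapses consecutive rows sharing the same (id, name) key,
# which is exactly what the manual lastCategoryId bookkeeping does.
for (cat_id, cat_name), group in groupby(rows, key=lambda r: (r[0], r[1])):
    categories.append({
        "Id": cat_id,
        "Name": cat_name,
        "Features": [{"Id": fid, "Name": fname} for _, _, fid, fname in group],
    })

print([(c["Name"], len(c["Features"])) for c in categories])
# [('Cat1', 2), ('Cat2', 3)]
```

Like the reader loop, this relies on the ORDER BY in the stored procedure: groupby only merges adjacent rows.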
I have sql backups copied from server A to server B on a nightly basis.
We want to move the sql server from server A to server B without much downtime, but the files are very large.
I assumed that performing a differential backup and restore would solve the problem with the databases.
Copy the full backup from server A to server B (10+ GB)
Open SQL Server Management Studio on server B
Right-click on Databases
Restore Database
Type in the new DB name
Choose "From Device" and browse to the backup file
Click OK. This restores the original "full" backup.
Test the new DB with the dev application - everything works :)
On the original database, right-click -> Tasks -> Back Up...
Backup Type = Differential, back up to disk, add a new file, and remove the old one (it needs to be a small file to transfer, for the smallest amount of outage)
Copy the differential backup over to server B
Right-click on the DB -> Tasks -> Restore -> Database
This is where I get stuck. If I add both the new differential file, and the original backup to the restore process I get an error
The media loaded on "M:\path\to\backup\full.bak" is formatted to support 1 media families, but 2 media families are expected according to the backup device specification.
RESTORE HEADERONLY is terminating abnormally.
But if I try to restore using just the differential file I get
System.Data.SqlClient.SqlError: The log or differential backup cannot be restored because no files are ready to rollforward. (Microsoft.SqlServer.Smo)
Any idea how to do it? Is there a better way of restoring backups with limited downtime?
We have a large table (5,608,782 rows and growing) that has 3 columns: Zip1, Zip2, distance.
All columns are currently int. We would like to convert this table to use varchars for international usage, but we need to do a mass import into the new table, converting zips shorter than 5 digits to zero-padded varchars (123 becomes 00123, etc.). Is there a way to do this short of looping over each row and doing the translation programmatically?
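A single set-based INSERT ... SELECT can do the padding without any client-side loop. A sqlite3 sketch with invented table names and data (in SQL Server the padding expression would be something like RIGHT('00000' + CAST(Zip1 AS varchar(5)), 5)):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE zips_old (Zip1 INTEGER, Zip2 INTEGER, distance INTEGER);
CREATE TABLE zips_new (Zip1 TEXT, Zip2 TEXT, distance INTEGER);
INSERT INTO zips_old VALUES (123, 45678, 10), (7, 99999, 20);
""")

# One set-based statement pads every row; no per-row loop in client code.
con.execute("""
    INSERT INTO zips_new (Zip1, Zip2, distance)
    SELECT printf('%05d', Zip1), printf('%05d', Zip2), distance
    FROM zips_old
""")
print(con.execute("SELECT * FROM zips_new ORDER BY distance").fetchall())
# [('00123', '45678', 10), ('00007', '99999', 20)]
```

The database engine applies the expression to all rows in one pass, which is typically orders of magnitude faster than round-tripping each row through application code.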
I have a job which calls 10 other jobs using sp_start_job. The job has 10 steps, each step calling one sub-job. When I execute the main job, I can see it starts with step 1 and within a few seconds shows 'finished successfully'. But the sub-jobs take a long time, and the logging I have put inside shows all 10 of them running simultaneously in the background until they finish hours later. My requirement is that step 1 should finish first, and only then should step 2 start. Any help please?
SELECT id, SUM(amount), vat
FROM transactions
WHERE id = 1;
Each record in this table has a VAT percentage. I need to get the total amount over all records, but each amount has to be multiplied by its VAT %.
Is there a way to do this without looping through all records?
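Yes: the per-row multiplication can live inside the aggregate itself. A sqlite3 sketch with invented data, assuming vat is stored as a percentage:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE transactions (id INTEGER, amount REAL, vat REAL)")
con.executemany("INSERT INTO transactions VALUES (?,?,?)",
                [(1, 100.0, 20.0), (1, 50.0, 10.0)])

# The multiplication happens per row, inside the SUM aggregate.
total = con.execute("""
    SELECT SUM(amount * (1 + vat / 100.0))
    FROM transactions
    WHERE id = 1
""").fetchone()[0]
print(round(total, 2))  # 100*1.2 + 50*1.1 = 175.0
```

If vat is instead stored as a fraction (0.20 rather than 20), drop the division by 100.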
I just wanted to hear the opinion of Hibernate experts about DB schema generation best practices for Hibernate/JPA based projects. Especially:
What strategy to use when the project has just started? Is it recommended to let Hibernate automatically generate the schema in this phase or is it better to create the database tables manually from earliest phases of the project?
Assuming that throughout the project the schema was generated using Hibernate, is it better to disable automatic schema generation and create the database schema manually just before the system is released into production?
And after the system has been released into production, what is the best practice for maintaining the entity classes and the DB schema (e.g. adding/renaming/updating columns, renaming tables, etc.)?
Thanks in advance.
Hi all,
I have a table in MS SQL Server 2008 (SP2) containing 30 million rows, table size 150 GB. There are a couple of int columns and two nvarchar(max) columns: one containing text (1-30000 characters) and one containing XML (up to 100000 characters).
The table doesn't have any primary keys or indexes (it is a staging table). At the moment I am running this query:
UPDATE [dbo].[stage_table]
SET [column2] = SUBSTRING([column1], 1, CHARINDEX('.', [column1])-1);
The query has been running for 3 hours (and it is still not complete), which I think is too long. Is it? I can see a constant read rate of 5 MB/s and a write rate of 10 MB/s to the .mdf file.
How can I find out why the query is running so long? The "server" is i7, 24GB of ram, SATA disks on RAID 10.
Many thanks!
I have two tables
Publishers and Campaigns, both have similar many-to-many relationships with Countries,Regions,Languages and Categories.
more info
Publisher2Categories has publisherID and categoryID, which are foreign keys to publisherID in Publishers and categoryID in Categories, which are identity columns. On the other side I have Campaigns2Categories with campaignID and categoryID columns, which are foreign keys to campaignID in Campaigns and categoryID in Categories, which again are identities.
Same goes for Regions, Languages and Countries relationships
I pass a certain publisherID to the query and want to get the campaignIDs of Campaigns that share at least one region, country, language, or category value with that Publisher.
thanks
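One shape that fits this is an EXISTS test per link table, OR'd together. A sqlite3 sketch covering just categories and regions (table and column names from the question, data invented); languages and countries would follow the same pattern:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Campaigns (campaignID INTEGER PRIMARY KEY);
CREATE TABLE Publisher2Categories (publisherID INTEGER, categoryID INTEGER);
CREATE TABLE Campaigns2Categories (campaignID INTEGER, categoryID INTEGER);
CREATE TABLE Publisher2Regions (publisherID INTEGER, regionID INTEGER);
CREATE TABLE Campaigns2Regions (campaignID INTEGER, regionID INTEGER);
INSERT INTO Campaigns VALUES (100), (200), (300);
INSERT INTO Publisher2Categories VALUES (7, 1);
INSERT INTO Campaigns2Categories VALUES (100, 1), (200, 2);
INSERT INTO Publisher2Regions VALUES (7, 5);
INSERT INTO Campaigns2Regions VALUES (300, 5);
""")

# Campaigns sharing at least one category OR region with publisher 7.
rows = con.execute("""
    SELECT c.campaignID FROM Campaigns c
    WHERE EXISTS (SELECT 1 FROM Campaigns2Categories cc
                  JOIN Publisher2Categories pc ON pc.categoryID = cc.categoryID
                  WHERE cc.campaignID = c.campaignID AND pc.publisherID = ?)
       OR EXISTS (SELECT 1 FROM Campaigns2Regions cr
                  JOIN Publisher2Regions pr ON pr.regionID = cr.regionID
                  WHERE cr.campaignID = c.campaignID AND pr.publisherID = ?)
    ORDER BY c.campaignID
""", (7, 7)).fetchall()
print([r[0] for r in rows])  # [100, 300]
```

EXISTS avoids the duplicate campaign rows a plain multi-join would produce when a campaign shares several values with the publisher.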
I have the following columns in TableA
TableA
Column1 varchar
Column2 int
Column3 bit
I am using this statement
IF Column3 = 0
    SELECT Column1, Column2 FROM TableA
    WHERE Column2 > 200
ELSE
    SELECT Column1, Column2 FROM TableA
    WHERE Column2 < 200
But the statement does not compile. It says Invalid Column Name 'Column3'.
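IF needs a scalar condition, but Column3 is a per-row value, which is why the statement doesn't compile. One option is to fold the branch into a single row-level WHERE. A sqlite3 sketch with invented data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE TableA (Column1 TEXT, Column2 INTEGER, Column3 INTEGER)")
con.executemany("INSERT INTO TableA VALUES (?,?,?)", [
    ("a", 250, 0),  # Column3 = 0 and Column2 > 200 -> returned
    ("b", 150, 0),  # Column3 = 0 but Column2 < 200 -> filtered out
    ("c", 150, 1),  # Column3 = 1 and Column2 < 200 -> returned
])

# The branch becomes a row-level predicate instead of a control-flow IF.
rows = con.execute("""
    SELECT Column1, Column2 FROM TableA
    WHERE (Column3 = 0 AND Column2 > 200)
       OR (Column3 <> 0 AND Column2 < 200)
    ORDER BY Column1
""").fetchall()
print(rows)  # [('a', 250), ('c', 150)]
```

If the intent was instead to branch on a single variable (not a column), a T-SQL IF over a @variable would compile fine.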
I'm curious to know how people are using table aliases. The other developers where I work always use table aliases, and always use the aliases a, b, c, etc.
Here's an example
SELECT a.TripNum, b.SegmentNum, b.StopNum, b.ArrivalTime
FROM Trip a, Segment b
WHERE a.TripNum = b.TripNum
I disagree with them, and think table aliases should be used more sparingly. I think they should be used when including the same table twice in a query, or when the table name is very long and a shorter name will make the query easier to read. I also think the alias should be a meaningful name instead of a single letter. In the above example, if I felt I needed one-letter aliases, I would use t for the Trip table and s for the Segment table.
I have to copy a bunch of data from one database table into another. I can't use SELECT ... INTO because one of the columns is an identity column. Also, I have some changes to make to the schema. I was able to use the export data wizard to create an SSIS package, which I then edited in Visual Studio 2005 to make the changes desired and whatnot. It's certainly faster than an INSERT INTO, but it seems silly to me to download the data to a different computer just to upload it back again. (Assuming that I am correct that that's what the SSIS package is doing). Is there an equivalent to BULK INSERT that runs directly on the server, allows keeping identity values, and pulls data from a table? (as far as I can tell, BULK INSERT can only pull data from a file)
Edit:
I do know about IDENTITY_INSERT, but because there is a fair amount of data involved, INSERT INTO ... SELECT is kind of slow. SSIS/BULK INSERT dumps the data into the table without regard to indexes and logging, so it's faster. (Of course, creating the clustered index on the table once it's populated is not fast, but it's still faster than the INSERT INTO ... SELECT that I tried in my first attempt.)
Edit 2:
The schema changes include (but are not limited to) the following:
1. Splitting one table into two new tables. In the future each will have its own IDENTITY column, but for the migration I think it will be simplest to use the identity from the original table as the identity for both new tables. Once the migration is over, one of the tables will have a one-to-many relationship to the other.
2. Moving columns from one table to another.
3. Deleting some cross reference tables that only cross referenced 1-to-1. Instead the reference will be a foreign key in one of the two tables.
4. Some new columns will be created with default values.
5. Some tables aren’t changing at all, but I have to copy them over due to the "put it all in a new DB" request.
I have two tables. One has rows with values A-Z and the other has D-G.
How do I make a select statement to return only the values A-C, H-Z?
(This is, of course, a dumbed-down version of my real tables and problem.)
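If the engine supports it, EXCEPT (set difference) does exactly this; NOT IN / NOT EXISTS are the common fallbacks. A sqlite3 sketch with hypothetical table names (the question doesn't give any):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t_all (val TEXT);
CREATE TABLE t_excl (val TEXT);
""")
# A-Z in one table, D-G in the other.
con.executemany("INSERT INTO t_all VALUES (?)", [(chr(c),) for c in range(65, 91)])
con.executemany("INSERT INTO t_excl VALUES (?)", [(chr(c),) for c in range(68, 72)])

# EXCEPT returns rows of the first query that are absent from the second.
rows = [r[0] for r in con.execute("""
    SELECT val FROM t_all
    EXCEPT
    SELECT val FROM t_excl
    ORDER BY val
""")]
print(rows)  # A-C and H-Z, 22 values in total
```

Note EXCEPT also deduplicates; if duplicates must survive, NOT EXISTS with a correlated subquery is the safer choice.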
Hi guys,
The question I'm struggling with is this:
I have a list of helicopter names in different charters and I need to find out WHICH helicopter has the least amount of charters booked. Once I find that out, I need to display ONLY the one that has the least.
I so far have this:
SELECT Helicopter_Name, COUNT(DISTINCT Charter_NUM)
FROM Charter_Table
GROUP BY Helicopter_Name
^ This is where I am stuck. I realise MIN could be used to pick out the smallest value, but I am not sure how to integrate it into the command.
Something like WHERE count = MIN value.
I'd really appreciate any help.
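One way to finish it: order the grouped counts ascending and keep only the first row. A sqlite3 sketch with invented data; note SQL Server would use SELECT TOP 1 ... instead of LIMIT 1, and ties would need TOP 1 WITH TIES or a subquery:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Charter_Table (Helicopter_Name TEXT, Charter_NUM INTEGER)")
con.executemany("INSERT INTO Charter_Table VALUES (?,?)",
                [("Huey", 1), ("Huey", 2), ("Robinson", 3)])

# Sort the grouped counts ascending and keep only the smallest.
row = con.execute("""
    SELECT Helicopter_Name, COUNT(DISTINCT Charter_NUM) AS charters
    FROM Charter_Table
    GROUP BY Helicopter_Name
    ORDER BY charters ASC
    LIMIT 1
""").fetchone()
print(row)  # ('Robinson', 1)
```

This sidesteps MIN entirely: ordering plus a row limit picks the minimum group directly.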
I have one table, which has three fields and data.
Name , Top , Total
cat , 1 , 10
dog , 2 , 7
cat , 3 , 20
horse , 4 , 4
cat , 5 , 10
dog , 6 , 9
I want to select the record which has highest value of Total for each Name, so my result should be like this:
Name , Top , Total
cat , 3 , 20
horse , 4 , 4
dog , 6 , 9
I tried GROUP BY Name ORDER BY Total, but it gives the top-most record of the grouped result. Can anyone guide me, please?
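A common approach to this greatest-per-group problem is to join each row against its group's maximum. A sqlite3 sketch using the sample data from the question (the table name t is an assumption):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (Name TEXT, Top INTEGER, Total INTEGER)")
con.executemany("INSERT INTO t VALUES (?,?,?)", [
    ("cat", 1, 10), ("dog", 2, 7), ("cat", 3, 20),
    ("horse", 4, 4), ("cat", 5, 10), ("dog", 6, 9),
])

# Join each row against its group's maximum Total and keep only the matches.
rows = con.execute("""
    SELECT t.Name, t.Top, t.Total
    FROM t
    JOIN (SELECT Name, MAX(Total) AS mx FROM t GROUP BY Name) m
      ON m.Name = t.Name AND m.mx = t.Total
    ORDER BY t.Top
""").fetchall()
print(rows)  # [('cat', 3, 20), ('horse', 4, 4), ('dog', 6, 9)]
```

If two rows in a group tie on the maximum Total, both will be returned; a tiebreaker on Top would be needed to keep exactly one.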
A colleague is adding a bit mask to all our database tables. In theory this is so we can track certain properties of each row across the entire system. For example...
Is the row shipped with the system or added by the client once they've started using the system
Has the row been deleted from the table (soft deletes)
Is the row a default value within a set of rows
Is this a good idea? Are there other uses where this approach would be beneficial?
My view is that these properties are obviously important, and having a dedicated column for each property is justified because it makes what is happening clearer to fellow developers.
When I run this code, it returns the topic fine...
$query = mysql_query("SELECT topic
FROM question
WHERE id = '$id'");
if(mysql_num_rows($query) > 0) {
$row = mysql_fetch_array($query) or die(mysql_error());
$topic = $row['topic'];
}
But when I change it to this, it doesn't run at all. Why is this happening?
$query = mysql_query("SELECT topic, lock
FROM question
WHERE id = '$id'");
if(mysql_num_rows($query) > 0) {
$row = mysql_fetch_array($query) or die(mysql_error());
$topic = $row['topic'];
$lockedThread = $row['lock'];
echo "here: " . $lockedThread;
}
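The likely culprit is that lock is a reserved word in MySQL, so the second query fails to parse; quoting it with backticks is the usual fix. Sketched here with Python's sqlite3, which happens to accept the same backtick quoting (data invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Backticks let a reserved (or awkward) word be used as a column name.
con.execute("CREATE TABLE question (id INTEGER, topic TEXT, `lock` INTEGER)")
con.execute("INSERT INTO question VALUES (1, 'hello', 0)")

row = con.execute("SELECT topic, `lock` FROM question WHERE id = 1").fetchone()
print(row)  # ('hello', 0)
```

A more durable fix is to rename the column (e.g. to locked or is_locked) so the quoting is never needed again.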
Hi,
I am trying to join 2 separate columns from 2 different sheets to make one longer column, which I can then use a VLOOKUP on.
Sheet1
A, B, C, D, E, F, G
Sheet2
A, B, C, D, E, F, G
I want to join (union) column B from Sheet1 and column C from Sheet2 together and find the distinct values of the new list. I have been working on this for weeks.
Thanks
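Outside of a worksheet formula, the union-then-distinct step is small. A Python sketch with hypothetical cell values, in case it helps to prototype the result:

```python
# Hypothetical values from Sheet1 column B and Sheet2 column C.
sheet1_b = ["apple", "pear", "apple", "plum"]
sheet2_c = ["pear", "cherry", "plum"]

# Union of the two columns, duplicates removed, sorted for stable lookups.
combined = sorted(set(sheet1_b) | set(sheet2_c))
print(combined)  # ['apple', 'cherry', 'pear', 'plum']
```

The combined list could then be written back to a helper column and used as the VLOOKUP range.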
It puts item1 down as DESC for some reason.
edit:
$sql_result = mysql_query("SELECT post, name, trip, Thread, sticky
    FROM (SELECT MIN(ID) AS min_id, MAX(ID) AS max_id, MAX(Date) AS max_date
          FROM test_posts GROUP BY Thread) t_min_max
    INNER JOIN test_posts ON test_posts.ID = t_min_max.min_id
    WHERE Board=".$board."
    ORDER BY sticky ASC, max_date DESC", $db);
http://prime.programming-designs.com/test_forum/viewboard.php?board=0&page=3
I am not able to find an answer to this. Does anybody know? I want to enable downloads of .bak files, and for that I need to know the MIME type so that I can configure it in IIS for the .bak extension.
Any help is appreciated.