Search Results

Search found 28930 results on 1158 pages for 'sql ce'.

  • Where Not In OR Except simulation of SQL in LINQ to Objects

    - by Thinking
    Suppose I have two lists that hold source file names and destination file names respectively. The source list contains 1.txt, 2.txt, 3.txt and 4.txt, while the destination list contains 1.txt and 2.txt. I need to write a LINQ query to find out which files in the source list are absent from the destination list; in this example the output should be 3.txt and 4.txt. I have done this with a foreach statement, but now I want to do the same using LINQ (C#). Edit: my code is

        List<FileList> sourceFileNames = new List<FileList>();
        sourceFileNames.Add(new FileList { fileNames = "1.txt" });
        sourceFileNames.Add(new FileList { fileNames = "2.txt" });
        sourceFileNames.Add(new FileList { fileNames = "3.txt" });
        sourceFileNames.Add(new FileList { fileNames = "4.txt" });

        List<FileList> destinationFileNames = new List<FileList>();
        destinationFileNames.Add(new FileList { fileNames = "1.txt" });
        destinationFileNames.Add(new FileList { fileNames = "2.txt" });

        IEnumerable<FileList> except = sourceFileNames.Except(destinationFileNames);

    FileList is a simple class with only one property, fileNames, of type string.
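
    For reference, the SQL pattern being emulated here is the anti-join; a minimal sketch in standard SQL, with hypothetical source_files/destination_files tables that are not from the question:

        -- Files present in the source but not in the destination,
        -- written once with EXCEPT and once with NOT IN (illustrative tables).
        SELECT file_name FROM source_files
        EXCEPT
        SELECT file_name FROM destination_files;

        SELECT file_name
        FROM   source_files
        WHERE  file_name NOT IN (SELECT file_name FROM destination_files);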

    Read the article

  • Active Record/ORM vs Normal Forms?

    - by Arsenal
    Hello, I've been playing around with Active Record a bit, and I have noticed that Active Record/ORM always uses the following database model when creating a one-to-one relationship:

        Person:  id | country_id | name | ...
        Country: id | tld | name | ...

    Now I wondered, isn't this a violation of third normal form? That form clearly states "Every non-prime attribute is non-transitively dependent on every key of the table". Well, this country_id isn't dependent on the person id, is it? So is this wrong, or am I just not getting the point?

    Read the article

  • Table names, and loop to describe

    - by Greg
    Working in Oracle 10g. Listing all table names is easy (select table_name from dba_tables where owner = 'me'), but now that I have the table names, is there an easy way to loop through them and do a 'describe' on each one in sequence?
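
    DESCRIBE is a SQL*Plus command rather than SQL, so one way to get the same information for every table in one pass is to query the data dictionary; a minimal sketch, assuming the owner is ME:

        -- Column listing for every table owned by ME, in DESCRIBE-like order.
        SELECT table_name, column_name, data_type, data_length, nullable
        FROM   all_tab_columns
        WHERE  owner = 'ME'
        ORDER  BY table_name, column_id;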

    Read the article

  • Multiple rows update trigger

    - by sony
    If I have a statement that updates multiple rows, the trigger seems to fire only for the first or last row being updated (I'm not sure which one). I need to make a trigger that handles ALL the records being updated in a particular table.
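
    A minimal sketch, assuming SQL Server (the question doesn't name the DBMS) and hypothetical Items/ItemsAudit tables: an AFTER UPDATE trigger fires once per statement, and the inserted and deleted pseudo-tables expose every affected row, so the trigger body should process them as a set rather than assume a single row.

        -- Hypothetical tables; the point is the set-based use of inserted/deleted.
        CREATE TRIGGER trg_items_update
        ON dbo.Items
        AFTER UPDATE
        AS
        BEGIN
            -- One audit row per updated record, however many rows the statement touched.
            INSERT INTO dbo.ItemsAudit (ItemId, OldPrice, NewPrice)
            SELECT d.ItemId, d.Price, i.Price
            FROM   inserted i
            JOIN   deleted  d ON d.ItemId = i.ItemId;
        END;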

    Read the article

  • RDLC item width is dynamic and causing extra pages to be generated (image included)?

    - by Paul Mendoza
    I'm trying to format an RDLC report file in Visual Studio 2008 and I am having a formatting issue. I have a list at the bottom that contains a matrix that expands horizontally to the right. The pink box is just there to visualize the problem I'm having. When the report is rendered, the matrix expands, and instead of the matrix filling the pink box, it pushes the space in the pink box to the right, resulting in an extra page when printing the reports. One solution would be to shrink the pink box to the size of the matrix, which I've done. But then, when the matrix grows, the fields at the top of the report get pushed to the right by the same amount as the matrix's growth. Can someone please let me know what they think the solution would be? Thank you!

    Read the article

  • (N)Hibernate: deleting orphaned ternary association rows when either associated row is deleted.

    - by anthony
    I have a ternary association table created using the following mapping:

        <map name="Associations" table="FooToBar">
          <key column="Foo_id"/>
          <index-many-to-many class="Bar" column="Bar_id"/>
          <element column="AssociationValue" />
        </map>

    I have 3 tables: Foo, Bar, and FooToBar. When I delete a row from the Foo table, the associated row (or rows) in FooToBar is automatically deleted. This is good. When I delete a row from the Bar table, the associated row (or rows) in FooToBar remain, with a stale reference to a Bar id that no longer exists. This is bad. How can I modify my hbm.xml to remove stale FooToBar rows when deleting from the Bar table?
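
    A hedged workaround at the database level, independent of the NHibernate mapping: declare a cascading foreign key on the link table so the database itself removes the stale rows. Table and column names are taken from the mapping above; the assumption that Bar's primary key column is named Id is mine.

        -- Assumes Bar (Id) is the referenced primary key column.
        ALTER TABLE FooToBar
          ADD CONSTRAINT fk_footobar_bar
          FOREIGN KEY (Bar_id) REFERENCES Bar (Id)
          ON DELETE CASCADE;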

    Read the article

  • Cannot call scalar-valued CLR UDF from select ... from table statement

    - by Henrik B
    I have created a scalar-valued CLR UDF (user-defined function). It takes a time zone id and a datetime and returns the datetime converted to that time zone. I can call it from a simple select without problems (@datetime is of course a valid datetime variable):

        select dbo.udfConvert('Romance Standard Time', @datetime)

    But if I call it passing in a datetime from a table, it fails (the StartTime column is of course of type datetime):

        select dbo.udfConvert('Romance Standard Time', StartTime) from sometable

    The error message is: "Cannot find either column "dbo" or the user-defined function or aggregate "dbo.udfConvert", or the name is ambiguous." This message is usually aimed at beginners who have misspelled something, but since it works in one case and not in the other, I don't think I have misspelled anything. Any ideas?
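
    One hedged thing to try: fully qualify the call with a three-part name to rule out a database-context mismatch between the two queries. SomeDb below is a placeholder for the database that actually holds the function, not a name from the question.

        -- SomeDb is a placeholder database name.
        SELECT SomeDb.dbo.udfConvert('Romance Standard Time', StartTime)
        FROM   SomeDb.dbo.sometable;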

    Read the article

  • Accessing some aggregate functions in a LINQ datasource in a GridView

    - by Stephen Pellicer
    I am working on a traditional WebForms project. In the project I am trying out some LINQ datasources, with plans to eventually migrate to an MVC architecture. I am still very new to LINQ. I have a GridView using a LINQ datasource. The entity I am showing has a to-many relationship, and I would like to get the maximum value of a column on the many side of the relationship. I can show properties of the base entity in the GridView:

        <asp:TemplateField HeaderText="Number" SortExpression="tJobBase.tJob.JobNumber">
          <ItemTemplate>
            <asp:Label ID="Label1" runat="server" Text='<%# Bind("tJobBase.tJob.JobNumber") %>'>
            </asp:Label>
          </ItemTemplate>
        </asp:TemplateField>

    I can also show the count of the related collection:

        <asp:TemplateField HeaderText="Number" SortExpression="tJobBase.tJob.tHourlies.Count">
          <ItemTemplate>
            <asp:Label ID="Label1" runat="server" Text='<%# Bind("tJobBase.tJob.tHourlies.Count") %>'>
            </asp:Label>
          </ItemTemplate>
        </asp:TemplateField>

    Is there a way to get the max value of a column called WeekEnding in the tHourlies collection to show in the GridView?

    Read the article

  • Index on column with only 2 distinct values

    - by Will
    I am wondering about the performance of this index. I have an "invalid" varchar(1) column that holds one of two values: NULL or 'Y'. I have an index on (invalid), as well as one on (invalid, last_validated); last_validated is a datetime used for an unrelated SELECT query. I am flagging a small proportion (1-5%) of the rows in the table as 'to be deleted', so that DELETE FROM items WHERE invalid='Y' does not perform a full table scan looking for the invalid items. The problem is that the actual DELETE is now quite slow, possibly because all the index entries have to be removed as the rows are deleted. Would a bitmap index provide better performance for this, or perhaps no index at all?
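
    A minimal sketch of the bitmap option, assuming Oracle (the mainstream DBMS with this index type); the index names are placeholders. Note that bitmap indexes suit low-cardinality columns but can serialize concurrent DML on the table, so this is a trade-off rather than a definitive fix.

        -- Placeholder index names; replace the b-tree index with a bitmap one.
        DROP INDEX items_invalid_ix;
        CREATE BITMAP INDEX items_invalid_bix ON items (invalid);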

    Read the article

  • Speeding up inner-joins and subqueries while restricting row size and table membership

    - by hiffy
    I'm developing an RSS feed reader that uses a Bayesian filter to filter out boring blog posts. The Stream table is meant to act as a FIFO buffer from which the webapp will consume 'entries'. I use it to store the temporary relationship between entries, users and Bayesian filter classifications. After a user marks an entry as read, it will be added to the metadata table (so that a user isn't presented with material they have already read) and deleted from the stream table. Every three minutes, a background process repopulates the Stream table with new entries (i.e. whenever the daemon adds new entries after it checks the RSS feeds for updates). Problem: the query I came up with is hella slow. More importantly, the Stream table only needs to hold one hundred unread entries at a time; that will reduce duplication, make processing faster and give me some flexibility in how I display the entries. The query (takes about 9 seconds on 3600 items with no indexes):

        insert into stream(entry_id, user_id)
        select entries.id, subscriptions_users.user_id
        from entries
        inner join subscriptions_users
            on subscriptions_users.subscription_id = entries.subscription_id
        where subscriptions_users.user_id = 1
          and entries.id not in (select entry_id from metadata where metadata.user_id = 1)
          and entries.id not in (select entry_id from stream where user_id = 1);

    The query explained: insert into stream all of the entries from a user's subscription list (subscriptions_users) that the user has not read (i.e. that do not exist in metadata) and that do not already exist in the stream. Attempted solution: adding limit 100 to the end speeds up the query considerably, but upon repeated executions it keeps adding a different set of 100 entries that do not already exist in the table (with each successful query taking longer and longer). This is close, but not quite what I wanted to do. Does anyone have any advice (nosql?) or know a more efficient way of composing the query?
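
    A hedged rewrite, assuming MySQL (implied by the LIMIT syntax): turning the two NOT IN subqueries into NOT EXISTS anti-joins often lets the optimizer use indexes on metadata(user_id, entry_id) and stream(user_id, entry_id), and the LIMIT then caps the batch at the hundred rows the buffer needs. Table and column names are taken from the question.

        -- Sketch only; relies on indexes over (user_id, entry_id) existing.
        insert into stream (entry_id, user_id)
        select e.id, su.user_id
        from entries e
        inner join subscriptions_users su on su.subscription_id = e.subscription_id
        where su.user_id = 1
          and not exists (select 1 from metadata m
                          where m.user_id = 1 and m.entry_id = e.id)
          and not exists (select 1 from stream s
                          where s.user_id = 1 and s.entry_id = e.id)
        limit 100;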

    Read the article

  • Freebase Query with "JOINS"

    - by codemonkey
    ... Yeah, yeah, I know traditional joins don't exist. I actually like the freebase query methodology in theory, just having a little trouble getting it to actually work for me : ) Anyone have a dumb-simple example of getting Freebase data via MQL that pulls from two different "tables"? In particular, I'm trying to get automotive data... so for example, pulling fields from both /automotive/model_year and /automotive/trim_year. I've read the documentation (for hours actually). There's a distinct possibility that I'm looking right at such an example somewhere and just not seeing it because my OLTP brain just doesn't comprehend what it's seeing. * Note * ... that the two "types" I'm working with above are siblings, not parent/child. Does freebase even allow joining data between sibling nodes... I see examples of queries pulling from parent/child, but not from siblings I don't think (or I've overlooked them).

    Read the article

  • Range partition skip check

    - by user289429
    We have a large amount of data range-partitioned on a year value in Oracle, and each partition contains data for only one year. When we write a query targeting a specific year, Oracle fetches the information from that partition but still checks whether the year matches the value we specified. Since the year column is not part of the index, it fetches the year from the table and compares it. We have seen that whenever the query has to fetch table data it gets too slow. Can we somehow stop Oracle from comparing the year values, since we know for sure that the partition contains data for only one year?
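
    Two hedged options; the table, column and partition names below are placeholders, not from the question. Either reference the partition explicitly so no year predicate is needed at all, or make the year column part of a local index so the comparison is answered from the index instead of the table.

        -- Option 1: partition-extended table name, no year filter required.
        SELECT * FROM sales PARTITION (p_2009) WHERE customer_id = 42;

        -- Option 2: a local index that also covers the year column.
        CREATE INDEX sales_cust_year_ix ON sales (customer_id, sale_year) LOCAL;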

    Read the article

  • How can I alter a temp table?

    - by William
    I need to create a temp table, then add a new int NOT NULL AUTO_INCREMENT field to it so I can use the new field as a row number. What's wrong with my query?

        SELECT post, newid FROM ((SELECT post`test_posts`) temp
        ALTER TABLE temp ADD COLUMN newid int NOT NULL AUTO_INCREMENT)
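
    A minimal sketch, assuming MySQL (implied by AUTO_INCREMENT): CREATE TEMPORARY TABLE and ALTER TABLE have to be separate statements, DDL cannot be wrapped inside a SELECT, and an AUTO_INCREMENT column must also be a key. Names are taken from the question.

        -- Build the temp table, then add the row-number column in its own statement.
        CREATE TEMPORARY TABLE temp AS
        SELECT post FROM test_posts;

        ALTER TABLE temp
          ADD COLUMN newid INT NOT NULL AUTO_INCREMENT,
          ADD PRIMARY KEY (newid);

        SELECT post, newid FROM temp;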

    Read the article

  • Best Way to Generate Unique and Consecutive Numbers in Oracle

    - by RRUZ
    I need to generate unique and consecutive numbers (for use on an invoice) in a fast and reliable way. I currently use an Oracle sequence, but in some cases the generated numbers are not consecutive because of exceptions that may occur. I have thought of a couple of solutions to manage this problem, but neither of them convinces me. What solution do you recommend?

        -- Use a select max():
        SELECT MAX(NVL(doc_num, 0)) + 1 FROM invoices

        -- Use a table to store the last number generated for the invoice:
        UPDATE docs_numbers SET last_invoice = last_invoice + 1

    Another solution?
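
    A hedged PL/SQL sketch of the counter-table option, reusing the question's docs_numbers/invoices names and assuming docs_numbers holds a single counter row: SELECT ... FOR UPDATE serializes concurrent invoice creation, and the numbers stay gap-free as long as the increment and the invoice insert commit (or roll back) together.

        DECLARE
          v_next docs_numbers.last_invoice%TYPE;
        BEGIN
          SELECT last_invoice + 1
          INTO   v_next
          FROM   docs_numbers
          FOR UPDATE;                      -- lock the counter row

          UPDATE docs_numbers SET last_invoice = v_next;

          INSERT INTO invoices (doc_num /*, ... */)
          VALUES (v_next /*, ... */);

          COMMIT;                          -- releases the lock
        END;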

    Read the article

  • Can't find which row is causing conversion error

    - by Marwan
    I have the following table:

        CREATE TABLE [dbo].[Accounts1](
            [AccountId] [nvarchar](50) NULL,
            [ExpiryDate] [nvarchar](50) NULL
        )

    I am trying to convert nvarchar to datetime using this query:

        select convert(datetime, expirydate) from accounts

    I get this error: "Conversion failed when converting datetime from character string." The status bar says "2390 rows", so I go to rows 2390, 2391 and 2392. There is nothing wrong with the data there; I even tried converting those particular rows and it works. How can I find out which row(s) is causing the conversion error?
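
    A hedged T-SQL sketch: ISDATE returns 0 for values that CONVERT cannot parse, so selecting on it lists the offending rows directly instead of guessing from the row count. The table and column names are taken from the CREATE TABLE above.

        -- Rows whose ExpiryDate text will not convert to datetime.
        SELECT AccountId, ExpiryDate
        FROM   dbo.Accounts1
        WHERE  ExpiryDate IS NOT NULL
          AND  ISDATE(ExpiryDate) = 0;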

    Read the article

  • Non-Access or -base QBE tool?

    - by idbin.cath0br0
    Hello. I'm currently looking for a QBE tool that can execute queries on PostgreSQL or MySQL. OS doesn't really matter. The reason is that we've got to do QBE at school, but I don't want to use either Microsoft Access or OpenOffice.org Base (lack of features). Any help would be appreciated.

    Read the article

  • Advice Needed To Normalise Database

    - by c11ada
    hey all, I'm trying to create a database for a feedback application in ASP.NET. I have the following database design (one line per table):

        Username (PK)
        QuestionNo (PK), QuestionText
        FeedbackNo (PK), Username
        UserFeedbackNo (PK), FeedbackNo (FK), QuestionNo (FK), Answer, Comment

    A user has a unique username, and a user can have multiple feedbacks. I was wondering if the database design I have here is normalised and suitable for the application.
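
    A hedged DDL sketch of one normalised reading of that design; the table names (Users, Questions, Feedbacks, FeedbackAnswers) and the column types are illustrative, not from the question.

        -- Illustrative table names and types only.
        CREATE TABLE Users     (Username   VARCHAR(50)  PRIMARY KEY);
        CREATE TABLE Questions (QuestionNo INT          PRIMARY KEY,
                                QuestionText VARCHAR(255) NOT NULL);
        CREATE TABLE Feedbacks (FeedbackNo INT          PRIMARY KEY,
                                Username   VARCHAR(50)  NOT NULL REFERENCES Users (Username));
        CREATE TABLE FeedbackAnswers (
            UserFeedbackNo INT PRIMARY KEY,
            FeedbackNo     INT NOT NULL REFERENCES Feedbacks (FeedbackNo),
            QuestionNo     INT NOT NULL REFERENCES Questions (QuestionNo),
            Answer         VARCHAR(255),
            Comment        VARCHAR(255)
        );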

    Read the article

  • LINQ Query returns nothing.

    - by gtas
    Why does this query return 0 rows? There is a record matching the arguments.

        Deafkaw.Where(p => (p.ImerominiaKataxorisis >= aDate && p.ImerominiaKataxorisis <= DateTime.Now)
                        && (p.Year == etos && p.IsYpodeigma == false)
                     ).ToList();

    Am I missing something?

    Read the article

  • max count with joins

    - by trixet
    I have 3 tables:

        users:
        Id  Login
        1   John
        2   Bill
        3   Jim

        computers:
        Id  Name
        1   Computer1
        2   Computer2
        3   Computer3
        4   Computer4
        5   Computer5

        sessions:
        UserId  ComputerId  Minutes
        1       2           47
        2       1           32
        1       4           15
        2       5           5
        1       2           7
        1       1           40
        2       5           31

    I would like to display this resulting table:

        Login  Total_sess  Total_min  Most_freq_computer  Sess_on_most_freq  Min_on_most_freq
        John   4           109        Computer2           2                  54
        Bill   3           68         Computer5           2                  36
        Jim    -           -          -                   -                  -

    Myself I can only cover the first 3 columns with:

        SELECT Login, COUNT(sessions.UserId), SUM(Minutes)
        FROM users
        LEFT JOIN sessions ON users.Id = sessions.UserId
        GROUP BY users.Id

    And some of the other columns with:

        SELECT main.*
        FROM (SELECT UserId, ComputerId, COUNT(*) AS cnt, SUM(Minutes)
              FROM sessions
              GROUP BY UserId, ComputerId) AS main
        INNER JOIN (SELECT ComputerId, MAX(cnt) AS maxCnt
                    FROM (SELECT ComputerId, UserId, COUNT(*) AS cnt
                          FROM sessions
                          GROUP BY ComputerId, UserId) AS Counts
                    GROUP BY ComputerId) AS maxes
            ON main.ComputerId = maxes.ComputerId AND main.cnt = maxes.maxCnt

    But I need to get the whole resulting table in one query. I feel I'm doing something completely wrong. Need help.
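
    A hedged single-query sketch, assuming a DBMS with window functions (e.g. MySQL 8+ or PostgreSQL; the question does not say which engine is in use): rank each user's computers by session count, keep the top-ranked one, and LEFT JOIN everything back to users so that Jim still appears with empty columns.

        SELECT u.Login,
               t.total_sess,
               t.total_min,
               c.Name AS most_freq_computer,
               f.sess_on_most_freq,
               f.min_on_most_freq
        FROM   users u
        LEFT JOIN (SELECT UserId, COUNT(*) AS total_sess, SUM(Minutes) AS total_min
                   FROM sessions GROUP BY UserId) t ON t.UserId = u.Id
        LEFT JOIN (SELECT UserId, ComputerId,
                          COUNT(*)     AS sess_on_most_freq,
                          SUM(Minutes) AS min_on_most_freq,
                          ROW_NUMBER() OVER (PARTITION BY UserId
                                             ORDER BY COUNT(*) DESC) AS rn
                   FROM sessions GROUP BY UserId, ComputerId) f
               ON f.UserId = u.Id AND f.rn = 1
        LEFT JOIN computers c ON c.Id = f.ComputerId;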

    Read the article

  • Can't add domain users to Reporting Services 2008

    - by Jeremy
    I have SSRS 2008 set up on the database server. The server is part of the domain, and Reporting Services is running under NetworkService. When I try to add a domain user using the web interface (Site Settings -- Security -- New Role Assignment), the page posts back but the user is not in the list. The server's log file contains the following unhandled exception:

        ui!ReportManager_0-1!954!01/12/2009-10:14:52:: Unhandled exception:
        System.Security.Principal.IdentityNotMappedException: Some or all identity references could not be translated.
           at System.Security.Principal.SecurityIdentifier.Translate(IdentityReferenceCollection sourceSids, Type targetType, Boolean forceSuccess)
           at System.Security.Principal.SecurityIdentifier.Translate(Type targetType)
           at System.Security.Principal.WindowsIdentity.GetName()
           at System.Security.Principal.WindowsIdentity.get_Name()
           at ReportingServicesHttpRuntime.RsWorkerRequest.GetServerVariable(String name)
           at System.Web.Security.WindowsAuthenticationModule.OnEnter(Object source, EventArgs eventArgs)
           at System.Web.HttpApplication.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
           at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

    Anyone have an idea on how to fix this?

    Read the article

  • Problems getting foreign keys working in MySQL

    - by thehuby
    I've been trying to get a delete to cascade and it just doesn't seem to work. I'm sure I am missing something obvious; can anyone help me find it? I would expect a delete on the 'articles' table to trigger a delete on the corresponding rows in the 'article_section_lt' table.

        CREATE TABLE articles (
            id INTEGER UNSIGNED PRIMARY KEY AUTO_INCREMENT,
            url_stub VARCHAR(255) NOT NULL UNIQUE,
            h1 VARCHAR(60) NOT NULL UNIQUE,
            title VARCHAR(60) NOT NULL,
            description VARCHAR(150) NOT NULL,
            summary VARCHAR(150) NOT NULL DEFAULT "",
            html_content TEXT,
            date DATE NOT NULL,
            updated TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
        ) ENGINE=INNODB;

        CREATE TABLE article_sections ( /* blog, news etc */
            id INTEGER UNSIGNED PRIMARY KEY AUTO_INCREMENT,
            url_stub VARCHAR(255) NOT NULL UNIQUE,
            h1 VARCHAR(60) NOT NULL,
            title VARCHAR(60) NOT NULL,
            description VARCHAR(150) NOT NULL,
            summary VARCHAR(150) NOT NULL DEFAULT "",
            html_content TEXT NOT NULL DEFAULT ""
        ) ENGINE=INNODB;

        CREATE TABLE article_section_lt (
            fk_article_id INTEGER UNSIGNED NOT NULL REFERENCES articles(id) ON DELETE CASCADE,
            fk_article_section_id INTEGER UNSIGNED NOT NULL
        ) ENGINE=INNODB;
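
    A hedged fix, assuming MySQL/InnoDB as in the DDL above: InnoDB silently ignores column-level REFERENCES clauses, so the cascade only takes effect when declared as a table-level FOREIGN KEY constraint. Constraint names are placeholders.

        -- Table-level foreign keys; InnoDB ignores inline REFERENCES clauses.
        CREATE TABLE article_section_lt (
            fk_article_id         INTEGER UNSIGNED NOT NULL,
            fk_article_section_id INTEGER UNSIGNED NOT NULL,
            CONSTRAINT fk_asl_article
                FOREIGN KEY (fk_article_id) REFERENCES articles (id)
                ON DELETE CASCADE,
            CONSTRAINT fk_asl_section
                FOREIGN KEY (fk_article_section_id) REFERENCES article_sections (id)
                ON DELETE CASCADE
        ) ENGINE=INNODB;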

    Read the article

  • Replication - synchronizing most of the data some of the time

    - by uncle brad
    I have some data that isn't properly "partitioned" (for lack of a better word). All inserts, processing and reporting happen on the same table. The bulk of the processing happens not long after the insert and not long after that it becomes immutable (we're talking days). I could do all inserts and processing on a new table that I replicate to the old table. When I detect that the data has become immutable I would delete the data from the new table, but I would edit the delete replication stored procedure so that the delete did not replicate. How bad an idea is this? It seems attractive at the moment (I haven't slept on it yet) because it might mitigate a performance problem with only very small changes to the application. It also seems like it might be a good way to shoot myself in the foot.

    Read the article
