Search Results

Search found 27368 results on 1095 pages for 'msaccess to sql'.

Page 664/1095

  • Query for multiple joins

    - by Shailaja
    I have 3 tables named dataset, dataelem and transformdataelem, with columns as below:

        main.Dataset:           datasetID (PK), applicationID
        main.Dataelem:          dataelemID (PK), datasetID (FK), dataelemname, biztermID
        main.Transformdataelem: OutputdataelemID, InputdataelemID

    My requirement is: all tables are referenced. Extract all dataelemID rows from the dataelem table where the applicationID of the dataset table equals 1044 and biztermID is null. The resulting dataelemIDs should then be matched against OutputdataelemID in the Transformdataelem table to get the corresponding InputdataelemIDs. Finally, those matched InputdataelemIDs should be used to look up the dataelemnames in the dataelem table. (One way to write this is sketched below.)

    Read the article
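
    One way to express the three steps as a single statement is a chain of joins. This is only a minimal sketch built from the names in the question; the table aliases and the join condition between Dataelem and Dataset (on datasetID) are assumptions:

        -- d1: dataelem rows filtered by application and null biztermID
        -- d2: the dataelem rows reached through Transformdataelem
        SELECT d2.dataelemID, d2.dataelemname
        FROM   main.Dataelem d1
               JOIN main.Dataset ds          ON ds.datasetID       = d1.datasetID
               JOIN main.Transformdataelem t ON t.OutputdataelemID = d1.dataelemID
               JOIN main.Dataelem d2         ON d2.dataelemID      = t.InputdataelemID
        WHERE  ds.applicationID = 1044
          AND  d1.biztermID IS NULL;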

  • My update query executes but doesn't update

    - by Kindson
    I have this update query:

        UPDATE production_shr_01
        SET    total_hours  = hours,
               total_weight = weight,
               percentage   = total_hours / 7893.3
        WHERE  (status = 'X')

    The query executes fine, but it doesn't update the percentage field. What might be the problem? (A likely cause and fix are sketched below.)

    Read the article
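
    In most SQL dialects (MySQL is a notable exception), every expression in a single UPDATE's SET clause is evaluated against the row as it was before the update, so total_hours / 7893.3 uses the old total_hours (possibly NULL), not the value just assigned from hours. A minimal sketch of one likely fix, computing the percentage from the source column directly (column names are taken from the question):

        UPDATE production_shr_01
        SET    total_hours  = hours,
               total_weight = weight,
               percentage   = hours / 7893.3   -- derive from the source column, not total_hours
        WHERE  status = 'X';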

  • Query - Trying to SUM one field based on content of another field

    - by ShaneL
    Table:

        DayOfWeek   Enrollments
        Monday      35
        Monday      12
        Saturday    25
        Tuesday     15
        Monday      9
        Tuesday     15

    Basically I'm trying to sum the total enrollments for each day, so the output will look like:

        DayOfWeek   Enrollments
        Monday      56
        Saturday    25
        Tuesday     30

    I've spent around 4 hours trying to work this out in many different ways, with no luck. The problem I'm having is that I can count the enrollments for each day, but I can't get the total aligned with the correct day when I run the query, i.e. I want the total to be on the same line as the day it was calculated from. (I hope that is clear enough; a sketch follows below.)

    Read the article
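
    This is what GROUP BY is for: it collapses the rows for each distinct day so the aggregate lands on the same output line as its day. A minimal sketch, assuming the table is called Enrollment (the question never names it):

        SELECT DayOfWeek, SUM(Enrollments) AS Enrollments
        FROM   Enrollment          -- table name assumed; the post does not give one
        GROUP  BY DayOfWeek;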

  • Database: Pipelined Functions

    - by Rachel
    I am new to the concept of pipelined functions and have some questions from a database point of view: What actually is a pipelined function? What is the advantage of using one? What challenges does it solve? Are there any optimization advantages to using one? Thanks. (A small illustrative example follows below.)

    Read the article
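
    For context, "pipelined function" usually refers to an Oracle PL/SQL table function that streams ("pipes") rows back to the SQL engine as they are produced, instead of building the whole collection in memory first; that reduces memory use and lets the caller start fetching before the function finishes. A minimal sketch (type and function names are illustrative, not from the post):

        -- Called as: SELECT * FROM TABLE(first_n(5));
        CREATE OR REPLACE TYPE num_tab AS TABLE OF NUMBER;
        /
        CREATE OR REPLACE FUNCTION first_n (p_limit IN NUMBER)
          RETURN num_tab PIPELINED
        IS
        BEGIN
          FOR i IN 1 .. p_limit LOOP
            PIPE ROW (i);   -- each row is emitted as soon as it is produced
          END LOOP;
          RETURN;           -- a pipelined function ends with a bare RETURN
        END;
        /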

  • Optimize INSERT / UPDATE / DELETE operation

    - by clime
    I wonder if the following script can be optimized somehow. It writes a lot to disk because it deletes possibly up-to-date rows and reinserts them. I was thinking about applying something like "insert ... on duplicate key update" and found some possibilities for single-row updates, but I don't know how to apply it in the context of an INSERT INTO ... SELECT query.

        CREATE OR REPLACE FUNCTION update_member_search_index() RETURNS VOID AS $$
        DECLARE
            member_content_type_id INTEGER;
        BEGIN
            member_content_type_id := (SELECT id FROM django_content_type
                                       WHERE app_label='web' AND model='member');

            DELETE FROM watson_searchentry WHERE content_type_id = member_content_type_id;

            INSERT INTO watson_searchentry (engine_slug, content_type_id, object_id,
                                            object_id_int, title, description, content,
                                            url, meta_encoded)
            SELECT 'default', member_content_type_id, web_member.id, web_member.id,
                   web_member.name, '',
                   web_user.email||' '||web_member.normalized_name||' '||web_country.name,
                   '', '{}'
            FROM web_member
            INNER JOIN web_user ON (web_member.user_id = web_user.id)
            INNER JOIN web_country ON (web_member.country_id = web_country.id)
            WHERE web_user.is_active=TRUE;
        END;
        $$ LANGUAGE plpgsql;

    EDIT: Schemas of web_member, watson_searchentry, web_user, web_country: http://pastebin.com/3tRVPPVi. (content_type_id, object_id_int) is a unique pair in watson_searchentry, but at the moment the index is not present (there is no use for it). This script should be run at most once a day for full rebuilds of the search index. (A possible upsert shape is sketched below.)

    Read the article
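
    One way to avoid the delete-and-reinsert churn in PostgreSQL is INSERT ... ON CONFLICT ... DO UPDATE (available from 9.5), which works directly with INSERT INTO ... SELECT. It does require an actual unique index on (content_type_id, object_id_int), which the post says is not currently present, so this is only a sketch of the shape it could take inside the same function (member_content_type_id is the function's variable):

        -- Prerequisite (currently missing, per the post):
        --   CREATE UNIQUE INDEX ON watson_searchentry (content_type_id, object_id_int);
        INSERT INTO watson_searchentry (engine_slug, content_type_id, object_id,
                                        object_id_int, title, description, content,
                                        url, meta_encoded)
        SELECT 'default', member_content_type_id, m.id, m.id, m.name, '',
               u.email||' '||m.normalized_name||' '||c.name, '', '{}'
        FROM web_member m
        JOIN web_user u    ON m.user_id = u.id
        JOIN web_country c ON m.country_id = c.id
        WHERE u.is_active = TRUE
        ON CONFLICT (content_type_id, object_id_int)
        DO UPDATE SET title = EXCLUDED.title, content = EXCLUDED.content;

    Entries whose members have since been deactivated or removed would still need a separate cleanup DELETE, since an upsert never removes rows.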

  • SqlCeCommand ExecuteNonQuery performance issue

    - by Michael
    I've been asked to resolve an issue with a .NET / SQL Server CE application. Specifically, after repeated inserts against the db, performance becomes increasingly degraded: in one instance at ~200 rows, in another at ~1000 rows. In the latter case the code being used looks like this:

        Dim cm1 As System.Data.SqlServerCe.SqlCeCommand = cn1.CreateCommand
        cm1.CommandText = "INSERT INTO Table1 Values(?,?,?,?,?,?,?,?,?,?,?,?,?)"
        For j = 0 To ds.Tables(0).Rows.Count - 1   'this is 3110
            For i = 0 To 12
                cm1.Parameters(tbl(i, 0)).Value = Vals(j, i)   'values taken from a different db
            Next
            cm1.ExecuteNonQuery()
        Next

    The specifics aren't super important (like what 'tbl' is, etc.); the question is rather whether this code should be expected to handle this number of inserts, or whether the crawl I'm witnessing is to be expected.

    Read the article

  • What is the most "database-independent" way of creating a variable-length text field in a database?

    - by Thibaut Colar
    I want to create a text field in the database with no specific size (in some cases it will store text of unknown length); the particular texts are serialized simple objects (~ JSON). What is the most database-independent way to do this:

    - a varchar with no size specified (I don't think all databases support this)
    - a 'text' field; this seems to be common, but I don't believe it's a standard
    - a blob or another object of that kind?
    - a varchar of a very large size (that's inefficient and probably wastes disk space)
    - other?

    I'm using JDBC, but I'd like to use something that is supported in most databases (Oracle, MySQL, PostgreSQL, Derby, HSQL, H2, etc...). Thanks. (Some candidate column types are sketched below.)

    Read the article
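
    For reference, the SQL standard type for arbitrarily long character data is CLOB, but not every engine on the list spells it that way, so there is no single portable piece of DDL. A minimal sketch of the usual candidates (table and column names are illustrative):

        -- Accepted by Oracle, Derby, HSQLDB and H2 (SQL standard spelling):
        CREATE TABLE payload_a (body CLOB);
        -- Accepted by PostgreSQL and MySQL (non-standard but very common):
        CREATE TABLE payload_b (body TEXT);

    In practice the DDL usually has to be generated per dialect (or left to an ORM / migration tool), while the JDBC access side can stay uniform with setString/getString.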

  • The best practice to set up hierarchy data

    - by eski
    I'm trying to figure out the best practice for setting up hierarchical data that I take from a database into some control that would show the hierarchy. Basically this would look like a normal tree, but when you press the items that are under "chapters" you get a link to another page. I have these tables, and this is the way they are connected: Period, Courses, Subjects, Chapters. I select a Period from a DropDownBox and then I want all the courses in that period to line up; under each course would be the subjects, and under them the chapters - a typical hierarchy. The tables are linked together with references to each other in a linear way. I have tried to use a TreeView to show this, but I don't understand how to do it. I thought I could use <ul>/<li> tags and do it at runtime. A Repeater or DataList, possibly? Is it better to do this with data binding in XAML or in code? (A query for pulling the hierarchy is sketched below.)

    Read the article
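
    Whichever control ends up rendering the tree, the usual first step is to pull the hierarchy as one flat, ordered result set and group it in code while binding. A minimal sketch with table, key and parameter names guessed from the description (the post does not give the actual schema, and the @-parameter syntax assumes SQL Server):

        SELECT c.CourseID, c.CourseName,
               s.SubjectID, s.SubjectName,
               ch.ChapterID, ch.ChapterName
        FROM   Courses  c
        JOIN   Subjects s  ON s.CourseID   = c.CourseID
        JOIN   Chapters ch ON ch.SubjectID = s.SubjectID
        WHERE  c.PeriodID = @SelectedPeriodID      -- value taken from the DropDownBox
        ORDER  BY c.CourseName, s.SubjectName, ch.ChapterName;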

  • XML Import: how would you do it?

    - by Rico
    XML is used as one of our main integration points. It comes in from many clients at a time, but too many clients importing at the same time can slow our database to a crawl. Someone must have solved a problem like this. I am basically using VB to parse through the data and import what I want and skip what I don't. Is there a better way?

    Read the article

  • MSSQL Server using multiple ID Numbers

    - by vincer
    I have a web application that creates printable forms. These forms have a unique number on them, and the problem is that I have two forms that need separate number series:

        Form1 - numbered 2000000-2999999
        Form2 - numbered 3000000-3999999

    dbo.test2 is my form information table, Tsel is my auto-increment table for the 3000000-series numbers, and Tadv is my auto-increment table for the 2000000-series numbers. What I have done is create two tables with just an auto-increment column (one for each series), then create a trigger that adds a record to the corresponding table, reads back the auto-increment number, and adds it to the table that stores the form information, so each form gets a number from the right series. Although it does work, I'm concerned that the numbers will get messed up under load; I'm not sure @@IDENTITY will always return the right value when many people are using the system. (I cannot have duplicates and I need to use the numbering scheme shown above.) Thanks for any help; see the trigger below. (A safer variant is sketched after this item.)

        ** TRIGGER **
        CREATE TRIGGER MAKEANID2 ON dbo.test2
        AFTER INSERT
        AS
        SET NOCOUNT ON
        declare @someid int
        declare @someid2 int
        declare @startfrom int
        declare @test1 varchar(10)

        select @someid = @@IDENTITY
        select @test1 = (select name1 from test2 where sysid = @someid)

        if @test1 = 'select'
        begin
            insert into Tsel default values
            select @someid2 = @@IDENTITY
        end
        if @test1 = 'adv'
        begin
            insert into Tadv default values
            select @someid2 = @@IDENTITY
        end

        update test2 set name2 = (@someid2) where sysid = @someid
        SET NOCOUNT OFF

    Read the article
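
    The worry about @@IDENTITY is justified: it returns the last identity value generated on the connection in any scope, so another trigger or nested insert can make it pick up the wrong value. Inside the trigger, the row that fired it is better read from the inserted pseudo-table, and the identities generated by the inserts into Tsel/Tadv are better read with SCOPE_IDENTITY(). A minimal sketch of that substitution (it still assumes single-row inserts into dbo.test2, as the original does; multi-row inserts would need to be handled by joining against inserted):

        CREATE TRIGGER MAKEANID2 ON dbo.test2
        AFTER INSERT
        AS
        SET NOCOUNT ON
        declare @someid int, @someid2 int, @test1 varchar(10)

        -- read the key and type of the row that fired the trigger
        select @someid = sysid, @test1 = name1 from inserted

        if @test1 = 'select'
        begin
            insert into Tsel default values
            select @someid2 = SCOPE_IDENTITY()   -- identity generated in this scope only
        end
        if @test1 = 'adv'
        begin
            insert into Tadv default values
            select @someid2 = SCOPE_IDENTITY()
        end

        update test2 set name2 = @someid2 where sysid = @someid
        SET NOCOUNT OFF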

  • Voting Script, Possibility of Simplifying Database Queries

    - by Sev
    I have a voting script which stores the post_id and the user_id in a table, to determine whether a particular user has already voted on a post and disallow them in the future. To do that, I am doing the following 3 queries:

        SELECT user_id, post_id FROM votes_table WHERE postid = ? AND user_id = ?

    If that returns no rows, then:

        UPDATE post_table SET votecount = votecount - 1 WHERE post_id = ?

    then:

        SELECT votecount FROM post WHERE post_id = ?

    to display the new votecount on the web page. Is there a better way to do this? Three queries are seriously slowing down the user's voting experience. (One alternative is sketched below.)

    Read the article
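
    One common way to cut this down is to put a UNIQUE key on (post_id, user_id) and let the database reject duplicate votes, which removes the existence check entirely. A minimal sketch, assuming the engine is MySQL (the post does not say) and reusing the names described in the question:

        -- One-time schema change:
        ALTER TABLE votes_table ADD UNIQUE KEY uq_vote (post_id, user_id);

        -- Per vote, from the application:
        INSERT IGNORE INTO votes_table (post_id, user_id) VALUES (?, ?);
        -- If the driver reports 1 affected row (i.e. this was a new vote), then:
        UPDATE post_table SET votecount = votecount - 1 WHERE post_id = ?;

    The third SELECT can often be dropped as well by letting the application adjust its cached count when the UPDATE succeeds, or kept as before if a fresh read is required.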

  • Embedded Java Databases for Large Data Sets

    - by ExAmerican
    I would like to port a PHP/MySQL-based client/server application to a standalone desktop application written in Java. The database has grown fairly large, with several tables holding hundreds of thousands of rows; I expect certain tables could grow to over a million entries. What embedded database would best handle this? HSQLDB and SQLite seem to be the obvious choices, though I'm guessing there are others out there as well. My main priorities are the ability to perform queries on large amounts of data efficiently (this thread seems to confirm SQLite can handle this) and the ease with which I can import the old data from MySQL (I remember HSQLDB being kind of a pain for that). Note: I am aware that similar questions comparing embedded databases have been posted before (for example here and here), but since my priorities differ somewhat from most applications, given the large data migration, I thought it justified a new question.

    Read the article

  • MySQL COUNT() total posts within a specific criteria?

    - by newbtophp
    Hey, I've been losing my hair trying to figure out what I'm doing wrong. Let me explain a bit about my MySQL structure (so you get a better understanding) before I go straight to the question. I have a simple PHP forum, and both tables (for posts and topics) have a column named 'deleted': if it equals 0 the row is displayed (considered not deleted / existing), and if it equals 1 it is hidden (considered deleted / non-existent) - boolean. Now, the 'specific criteria' I'm on about: I want to get a total post count within a specific forum using its id (forum_id), ensuring it only counts posts which are not deleted (deleted = 0) and whose parent topics are not deleted either (deleted = 0). The column/table names are self-explanatory (see my efforts below for them - if needed).

    I've tried the following (using a 'simple' JOIN):

        SELECT COUNT(t1.post_id)
        FROM forum_posts AS t1, forum_topics AS t2
        WHERE t1.forum_id = '{$forum_id}'
          AND t1.deleted = 0
          AND t1.topic_id = t2.topic_id
          AND t2.deleted = 0
        LIMIT 1

    I've also tried this (using a subquery):

        SELECT COUNT(t1.post_id)
        FROM forum_posts AS t1
        WHERE t1.forum_id = '{$forum_id}'
          AND t1.deleted = 0
          AND (SELECT deleted FROM forum_topics WHERE topic_id = t1.topic_id) = 0
        LIMIT 1

    But neither complies with the specific criteria. Appreciate all help! :) (An explicit-join variant is sketched below.)

    Read the article
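
    For what it's worth, both attempts express the stated criteria, so unexpected counts may come from the data rather than the SQL - for example, posts whose topic_id has no matching row in forum_topics are silently dropped by the first query and excluded by the second (the subquery yields NULL, which never equals 0). A minimal sketch of the same count written with an explicit INNER JOIN and a bound parameter instead of string interpolation (the placeholder style is an assumption):

        SELECT COUNT(*) AS post_count
        FROM   forum_posts  p
        INNER JOIN forum_topics t ON t.topic_id = p.topic_id
        WHERE  p.forum_id = ?
          AND  p.deleted  = 0
          AND  t.deleted  = 0;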

  • Results from two queries at once in sqlite?

    - by SF.
    I'm currently trying to optimize the sluggish process of retrieving a page of log entries from the SQLite database. I noticed I almost always retrieve the next entries along with a count of the available entries:

        SELECT time, level, type, text FROM Logs
        WHERE level IN (%s)
        ORDER BY time DESC, id DESC
        LIMIT LOG_REQ_LINES OFFSET %d * LOG_REQ_LINES;

    together with the total count of records that can match the current query:

        SELECT count(*) FROM Logs WHERE level IN (%s);

    (for a "page n of m" display). I wonder if I could concatenate the two queries and run them both in one sqlite3_exec() call, simply by concatenating the query strings. How should my callback function look then? Can I distinguish between the different types of data by argc? What other optimizations would you suggest? (A single-query alternative is sketched below.)

    Read the article
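
    One way to avoid the second query entirely, assuming a reasonably recent SQLite (3.25+, which added window functions), is to carry the total count on every returned row; the callback then just reads one extra column instead of having to tell two result shapes apart. A minimal sketch that keeps the original placeholders:

        SELECT time, level, type, text,
               COUNT(*) OVER () AS total_matching   -- same value on every row
        FROM   Logs
        WHERE  level IN (%s)
        ORDER  BY time DESC, id DESC
        LIMIT  LOG_REQ_LINES OFFSET %d * LOG_REQ_LINES;

    The window count is computed over the filtered result set before LIMIT/OFFSET is applied, so it reflects the total number of matching rows, not just the current page.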

  • Multiple FK columns all pointing to the same parent table - a good idea?

    - by Randy Minder
    For those of you who live and breathe database design: have you ever found compelling reasons to have multiple FKs in a table that all point to the same parent table? We recently had to deal with a situation where we had a table containing six columns, all of which were FK columns to the same parent table. We're debating whether this indicates a poor design on our part or whether this is more common than we think. Thanks very much.

    Read the article

  • How to hide folder in SSRS Report Builder?

    - by tnafoo
    When I click File - Open in Report Builder, I can see a list of folders under the Report Server Home root folder. But I don't want end users to see any of the folders under root unless I grant them access. I tried hiding the folders and removing permissions on them, but they are still visible in the root folder.

    Read the article

  • Update record only works when there is no auto_increment

    - by every_answer_gets_a_point
    I am accessing a MySQL table through an ODBC connection in Excel. Here is how I am updating the table:

        With rs
            .AddNew ' create a new record
            ' add values to each field in the record
            .Fields("datapath") = dpath
            .Fields("analysistime") = atime
            .Fields("reporttime") = rtime
            .Fields("lastcalib") = lcalib
            .Fields("analystname") = aname
            .Fields("reportname") = rname
            .Fields("batchstate") = "bstate"
            .Fields("instrument") = "NA"
            .Update ' stores the new record
        End With

    When the schema of the table is this, updating works:

        create table batchinfo(datapath text, analysistime text, reporttime text,
                               lastcalib text, analystname text, reportname text,
                               batchstate text, instrument text);

    but when I have auto_increment in there it does not work:

        CREATE TABLE batchinfo (
            rowid int(11) NOT NULL AUTO_INCREMENT,
            datapath text,
            analysistime text,
            reporttime text,
            lastcalib text,
            analystname text,
            reportname text,
            batchstate text,
            instrument text,
            PRIMARY KEY (rowid)
        ) ENGINE=InnoDB AUTO_INCREMENT=67 DEFAULT CHARSET=latin1

    Has anyone experienced a problem like this, where updating does not work when there is an auto_increment field involved?

    Connection string:

        Private Sub ConnectDB()
            Set oConn = New ADODB.Connection
            oConn.Open "DRIVER={MySQL ODBC 5.1 Driver};" & _
                       "SERVER=localhost;" & _
                       "DATABASE=employees;" & _
                       "USER=root;" & _
                       "PASSWORD=pas;" & _
                       "Option=3"
        End Sub

    Also, here's the rs.Open:

        rs.Open "batchinfo", oConn, adOpenKeyset, adLockOptimistic, adCmdTable

    Read the article

  • MSSQL. Compare columns in two tables.

    - by maxt3r
    Hi, I've recently done a migration from a really old version of an application to the current version, and I faced some problems while migrating the databases. I need a query that can help me compare the columns of two tables. I don't mean the data in the rows; I need to compare the columns themselves, to figure out which changes in table structure I've missed. (One approach is sketched below.)

    Read the article
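
    One way to compare structure rather than data on SQL Server is to query INFORMATION_SCHEMA.COLUMNS for both tables and line the results up; a FULL OUTER JOIN exposes columns that exist on only one side or whose types differ. A minimal sketch ('OldTable' and 'NewTable' are placeholders for the two tables being compared):

        SELECT COALESCE(o.COLUMN_NAME, n.COLUMN_NAME) AS column_name,
               o.DATA_TYPE AS old_type,
               n.DATA_TYPE AS new_type
        FROM  (SELECT COLUMN_NAME, DATA_TYPE
               FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'OldTable') o
        FULL OUTER JOIN
              (SELECT COLUMN_NAME, DATA_TYPE
               FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'NewTable') n
              ON o.COLUMN_NAME = n.COLUMN_NAME
        WHERE o.COLUMN_NAME IS NULL        -- column added
           OR n.COLUMN_NAME IS NULL        -- column dropped
           OR o.DATA_TYPE <> n.DATA_TYPE;  -- type changed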

  • How to know on which column the sequence is applied?

    - by Vineet
    I have to fetch all sequences with their table name, along with the column name on which each sequence is applied. Somehow I managed to fetch the table name corresponding to each sequence, because in my database each sequence is stored with the table name as the first part of its name (using the data dictionary views all_sequences and all_tables). Please let me know how to fetch the corresponding column name as well, if possible! (One approach is sketched below.)

    Read the article
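
    In Oracle (which the all_sequences/all_tables views suggest), a classic sequence is not formally bound to any column, so the association usually has to be recovered from wherever NEXTVAL is called - most often the table's triggers. A minimal sketch that searches trigger source for a given sequence name (MY_SEQ is a placeholder, and this only helps if the sequences really are used via triggers):

        SELECT t.table_name, t.trigger_name, s.line, s.text
        FROM   all_triggers t
        JOIN   all_source   s ON s.owner = t.owner
                             AND s.name  = t.trigger_name
                             AND s.type  = 'TRIGGER'
        WHERE  UPPER(s.text) LIKE '%MY_SEQ.NEXTVAL%';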
