Search Results

Search found 33316 results on 1333 pages for 'sql team'.

Page 665/1333 | < Previous Page | 661 662 663 664 665 666 667 668 669 670 671 672  | Next Page >

  • Why is doing a TOP(1) on an indexed column in MSSQL slow?

    - by reinier
    I'm puzzled by the following. I have a DB with around 10 million rows, and (among other indices) there is an index on one column. I have 700k rows where the campaignid is indeed 3835, and for all of those rows the connectionid is the same. I just want to find out this connectionid:

        USE messaging_db;
        SELECT TOP (1) connectionid
        FROM outgoing_messages WITH (NOLOCK)
        WHERE (campaignid_int = 3835)

    This query takes approximately 30 seconds to run! With my small DB knowledge, I would expect it to take any of the matching rows and return me that connectionid. If I test this same query for a campaign which has only one entry, it goes really fast, so the index works. How would I tackle this, and why does this not work?
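
    One thing to check (a sketch only, not a verified diagnosis) is whether the index on campaignid_int also covers connectionid; if it does not, the optimizer may prefer a clustered scan over 700k key lookups. A hypothetical covering index would let the TOP (1) be answered from the index alone:

        -- Hypothetical index name; campaignid_int is the key and connectionid rides along,
        -- so the query below can be satisfied with a single index seek and no lookups.
        CREATE NONCLUSTERED INDEX IX_outgoing_messages_campaign
            ON outgoing_messages (campaignid_int)
            INCLUDE (connectionid);

        SELECT TOP (1) connectionid
        FROM outgoing_messages WITH (NOLOCK)
        WHERE campaignid_int = 3835;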

    Read the article

  • Simple ASP function question

    - by J Harley
    Good morning, I have got the following function:

        FUNCTION queryDatabaseCount(sqlStr)
            SET queryDatabaseCountRecordSet = databaseConnection.Execute(sqlStr)
            If queryDatabaseCountRecordSet.EOF Then
                queryDatabaseCountRecordSet.Close
                queryDatabaseCount = 0
            Else
                QueryArray = queryDatabaseCountRecordSet.GetRows
                queryDatabaseCountRecordSet.Close
                queryDatabaseCount = UBound(QueryArray,2) + 1
            End If
        END FUNCTION

    And the following dbConnect:

        SET databaseConnection = Server.CreateObject("ADODB.Connection")
        databaseConnection.Open "Provider=SQLOLEDB; Data Source ="&dataSource&"; Initial Catalog ="&initialCatalog&"; User Id ="&userID&"; Password="&password&""

    But for some reason I get the following error:

        ADODB.Recordset error '800a0e78'
        Operation is not allowed when the object is closed.
        /UBS/DBMS/includes/blocks/block_databaseoverview.asp, line 30

    Does anyone have any suggestions? Many thanks, Joel

    Read the article

  • Disable animations when ending search in iPhone

    - by camilo
    Hi. A quick question: is there a way to dismiss the keyboard and the searchDisplayController without animation? I was able to do it when the user presses "Cancel", but when the user presses the black "window thingy" above the search field (only visible while the user hasn't entered any text), the animation always occurs, even when I change the delegate methods. Is there a way to control this or, as an alternative, to prevent the user from ending the search by pressing the black window? Thanks in advance.

    Read the article

  • Help! How do I get the total number of rows from my MSSQL paging procedure?

    - by The_AlienCoder
    OK, I have a table in my MSSQL database that stores comments. I want to be able to page through the records using [Back], [Next], page-number and [Last] buttons in my DataList. I figured the most efficient way was to use a stored procedure that only returns the rows within a particular range. Here is what I came up with:

        @PageIndex INT, @PageSize INT, @postid INT
        AS
        SET NOCOUNT ON
        BEGIN
            WITH tmp AS (
                SELECT comments.*, ROW_NUMBER() OVER (ORDER BY dateposted ASC) AS Row
                FROM comments
                WHERE (comments.postid = @postid))
            SELECT tmp.*
            FROM tmp
            WHERE Row BETWEEN (@PageIndex - 1) * @PageSize + 1 AND @PageIndex * @PageSize
        END
        RETURN

    Everything works fine and I have been able to implement the [Next] and [Back] buttons in my DataList pager. Now I need the total number of all comments (not just those on the current page) so that I can implement my page numbers and the [Last] button on my pager. In other words, I want to return the total number of rows matched by my first SELECT statement, i.e.

        WITH tmp AS (
            SELECT comments.*, ROW_NUMBER() OVER (ORDER BY dateposted ASC) AS Row
            FROM comments
            WHERE (comments.postid = @postid))
        SET @TotalRows = @@ROWCOUNT

    @@ROWCOUNT doesn't work here and raises an error, and I also can't get COUNT(*) to work. Is there another way to get the total number of rows, or is my approach doomed?
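
    One common way to return the total alongside the page (a sketch, not necessarily the poster's final solution) is to compute it as a window aggregate inside the same CTE, so every returned row carries the overall count:

        -- Sketch: TotalRows is repeated on every row of the page; the front end can
        -- read it from the first record. Table and column names are from the question.
        WITH tmp AS (
            SELECT comments.*,
                   ROW_NUMBER() OVER (ORDER BY dateposted ASC) AS Row,
                   COUNT(*) OVER () AS TotalRows
            FROM comments
            WHERE comments.postid = @postid
        )
        SELECT tmp.*
        FROM tmp
        WHERE Row BETWEEN (@PageIndex - 1) * @PageSize + 1 AND @PageIndex * @PageSize;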

    Read the article

  • How to return table names from a stored procedure in a DataSet?

    - by Shantanu Gupta
    I use a DataSet to store 15 tables that I need at load time. When I fill all the tables using a stored procedure, it returns all the tables, but the table names do not come through as the actual table names in the database; they arrive as Table1, Table2, Table3, and so on. I want them to carry the same names they actually have in the database.

        SELECT PK_GUEST_TYPE, [DESCRIPTION] FROM L_GUEST_TYPE
        SELECT PK_AGE_GROUP_ID, AGE_GROUP FROM L_AGE_GROUP
        SELECT PK_COMPANY_ID, COMPANY_NAME FROM M_COMPANY
        SELECT PK_COUNTRY_ID, COUNTRY FROM L_COUNTRY
        SELECT PK_EYE_COLOR_ID, [DESCRIPTION] FROM L_EYE_COLOR
        SELECT PK_GENDER_ID, [DESCRIPTION] FROM L_GENDER
        SELECT PK_HAIR_COLOR_ID, [DESCRIPTION] FROM L_HAIR_COLOR
        SELECT PK_STATE_PROVONCE_ID, [DESCRIPTION] FROM L_STATE_PROVINCE
        SELECT PK_STATUS_ID, [DESCRIPTION] FROM L_STATUS
        SELECT PK_TITLE_ID, [DESCRIPTION] FROM L_TITLE
        SELECT PK_TOWER_ID, [DESCRIPTION] FROM M_TOWER
        SELECT PK_CITY_ID, [DESCRIPTION] FROM L_CITY
        SELECT PK_REGISTER_TYPE_ID, [DESCRIPTION] FROM L_REGISTER_TYPE

    Here is my front-end code that fills the DataSet:

        OpenConnection();
        adp.Fill(ds);
        CloseConnection(true);

    Read the article

  • Circular database relationships. Good, Bad, Exceptions?

    - by jim
    I have been putting off developing this part of my app for some time, purely because I want to do it in a circular way but have the feeling it's a bad idea, from what I remember my lecturers telling me back in school. I have a design for an order system; ignoring everything that doesn't pertain to this example, I'm left with three entities: CreditCard, Customer and Order. I want the following relationships:

        - Customers can have credit cards (0-n)
        - Customers have orders (1-n)
        - Orders have one customer (1-1)
        - Orders have one credit card (1-1)
        - Credit cards can have one customer (1-1)

    (IDs are unique, so we can ignore uniqueness of the card number; husband and wife may share card instances, etc.)

    The last part is where the issue shows up. Sometimes credit cards are declined and the customer wishes to use a different one. This needs to update which card is their 'current' card, but it can only change the current card used for that order, not the other orders the customer may have on disk. Effectively this creates a circular design between the three tables. Possible solutions, as sketched below:

        - Create the circular design and give references: credit card references order, customer references credit card, customer references order; or customer references credit card and customer references order.
        - Create a new table that references all three table IDs and put a unique constraint on the order, so that only one credit card may be current for that order at any time.

    Essentially both model the same design but translate differently. I like the latter option best at this point in time because it seems less circular and more central (if that even makes sense). My questions are: what, if any, are the pros and cons of each? What are the pitfalls of circular relationships/dependencies? Is this a valid exception to the rule? Is there any reason I should pick the former over the latter? Thanks, and let me know if there is anything you need clarified or explained.

    --Update/Edit--
    I have noticed an error in the requirements I stated; basically I dropped the ball when trying to simplify things for SO. There is another table for Payments which adds another layer. The catch: orders can have multiple payments, with the possibility of using different credit cards (and, if you really want to know, even other forms of payment). I'm stating this here because I think the underlying issue is still the same and this only adds another layer of complexity.
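
    A minimal sketch of the second option (table and column names are hypothetical, and the DDL is only meant to illustrate the shape, not prescribe it):

        -- Link table that ties an order to the card currently charged for it.
        -- The UNIQUE constraint on order_id guarantees at most one current card per order.
        CREATE TABLE order_payment_card (
            order_id       INT NOT NULL REFERENCES orders (id),
            customer_id    INT NOT NULL REFERENCES customers (id),
            credit_card_id INT NOT NULL REFERENCES credit_cards (id),
            UNIQUE (order_id)
        );

        -- Switching to a different card after a decline is then a single UPDATE:
        UPDATE order_payment_card
        SET credit_card_id = 42        -- the replacement card (value is illustrative)
        WHERE order_id = 7;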

    Read the article

  • How can I "merge", "flatten" or "pivot" results from a query which returns multiple rows into a sing

    - by dsm
    I have a simple query over a table, which returns results like the following:

        id    id_type  id_ref
        2702  5        31
        2702  16       14
        2702  17       3
        2702  40       1
        2702  26       4

    And I would like to merge the results into a single row, for instance:

        id    concatenation
        2702  5,16,17,40,26:31,14,3,1,4

    Is there any way to do this within a trigger? NB: I know I can use a cursor, but I would really prefer not to unless there is no better way.
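
    Assuming SQL Server (the question does not say which engine, so this is only a sketch; the table name my_table is hypothetical), one cursor-free way to build such a string is the FOR XML PATH trick:

        -- Build the two comma-separated lists per id and join them with ':'.
        SELECT t.id,
               STUFF((SELECT ',' + CAST(id_type AS varchar(20))
                      FROM my_table WHERE id = t.id FOR XML PATH('')), 1, 1, '')
               + ':' +
               STUFF((SELECT ',' + CAST(id_ref AS varchar(20))
                      FROM my_table WHERE id = t.id FOR XML PATH('')), 1, 1, '')
               AS concatenation
        FROM my_table t
        GROUP BY t.id;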

    Read the article

  • Slow query with unexpected scan

    - by zerkms
    Hello, I have this query:

        SELECT *
        FROM SAMPLE SAMPLE
        INNER JOIN TEST TEST ON SAMPLE.SAMPLE_NUMBER = TEST.SAMPLE_NUMBER
        INNER JOIN RESULT RESULT ON TEST.TEST_NUMBER = RESULT.TEST_NUMBER
        WHERE SAMPLED_DATE BETWEEN '2010-03-17 09:00' AND '2010-03-17 12:00'

    The biggest table here is RESULT, which contains 11.1M records; the other two tables have about 1M each. This query runs slowly (more than 10 minutes) and returns about 800 records, and the execution plan shows a clustered index scan over all 11M records. RESULT.TEST_NUMBER is the clustered primary key.

    If I change '2010-03-17 09:00' to '2010-03-17 10:00', I get about 40 records, it executes in 300 ms, and the plan shows a clustered index seek. If I replace * in the SELECT clause with RESULT.TEST_NUMBER (covered by the index), everything becomes fast in the first case too. This points to disk I/O issues, but it doesn't explain the change of plan. So, any ideas?
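
    One way to probe the plan change (a sketch only; the FORCESEEK hint assumes SQL Server 2008 or later) is to force the narrow seek plan for the wide date range and compare the I/O of the two variants, which shows whether the optimizer is abandoning the seek because the estimated number of lookups gets too high:

        -- Force the join into RESULT to use a seek instead of the clustered scan,
        -- then compare actual plans and I/O statistics against the unhinted query.
        SET STATISTICS IO ON;

        SELECT *
        FROM SAMPLE SAMPLE
        INNER JOIN TEST TEST ON SAMPLE.SAMPLE_NUMBER = TEST.SAMPLE_NUMBER
        INNER JOIN RESULT RESULT WITH (FORCESEEK)
            ON TEST.TEST_NUMBER = RESULT.TEST_NUMBER
        WHERE SAMPLED_DATE BETWEEN '2010-03-17 09:00' AND '2010-03-17 12:00';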

    Read the article

  • How to optimize a SQL query to make it faster

    - by user502083
    Hello everyone. I have a very simple, small database; two of its tables are:

        Node (Node_ID, Node_name, Node_Date) : Node_ID is the primary key
        Citation (Origin_Id, Target_Id) : PRIMARY KEY (Origin_Id, Target_Id), each a foreign key into Node

    Now I write a query that first finds all citations whose Origin_Id has a specific date, and then I want to know the target dates of these records. I'm using SQLite in Python; the Node table has 3,000 records and Citation has 9,000 records, and my query lives in a function like this:

        def cited_years_list(self, date):
            c = self.cur
            try:
                c.execute("""select n.Node_Date, count(*)
                             from Node n
                             INNER JOIN (select c.Origin_Id AS Origin_Id, c.Target_Id AS Target_Id,
                                                n.Node_Date AS Date
                                         from CITATION c
                                         INNER JOIN NODE n ON c.Origin_Id = n.Node_Id
                                         where CAST(n.Node_Date as INT) = {0}) VW
                                  ON VW.Target_Id = n.Node_Id
                             GROUP BY n.Node_Date;""".format(date))
                cited_years = c.fetchall()
                self.conn.commit()
                print('Cited Years are : \n ', str(cited_years))
            except Exception as e:
                print('Cited Years retrieval failed ', e)
            return cited_years

    Then I call this function for some specific years, but it is painfully slow (around one minute for a single year). Although my query works fine, it is slow. Would you please give me a suggestion to make it faster? I'd appreciate any idea about optimizing this query. I should also mention that I have indices on Origin_Id and Target_Id, so the inner join should be pretty fast, but it's not!
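
    One possible culprit (just a guess from the text above) is the CAST(n.Node_Date as INT) in the WHERE clause, which can prevent SQLite from using an index on Node_Date. A sketch of an equivalent query written as two direct joins, plus a matching index, assuming Node_Date stores the year value being compared; :date is a bound parameter:

        -- Hypothetical index; SQLite can only use it if the filter does not wrap the column in CAST().
        CREATE INDEX IF NOT EXISTS idx_node_date ON Node (Node_Date);

        -- Origin nodes filtered by date, grouped by the date of the cited (target) node.
        SELECT tgt.Node_Date, COUNT(*)
        FROM Citation c
        JOIN Node org ON c.Origin_Id = org.Node_ID
        JOIN Node tgt ON c.Target_Id = tgt.Node_ID
        WHERE org.Node_Date = :date
        GROUP BY tgt.Node_Date;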

    Read the article

  • Change column names of a cube action as they appear in Visual Studio

    - by hermann
    The title pretty much says it all. I have a cube with data in it, and I have yet to find a way to change the column names. They appear in a very ugly manner, like [cubeName].[$dimension.columnName]. I have tried everything I know and anything I found on the web, but nothing seems to work. What I tried in most cases was to create an Action in the Actions tab and write some MDX in there. No results whatsoever, as if the action is never run. Does anyone know how to do this? I've spent about three days trying to figure this out. Thank you.

    Read the article

  • Select Columns Only if String length is greater than 2

    - by Zee-pro
    A similar question may have been asked, but I am unable to find anything that fits my needs. How can I select only the columns where the string length is greater than 2? This is how far I have got:

        SELECT * FROM Table1 WHERE (Table1.ID = @ID)

    or something like:

        WHERE (Table1.ID = @ID) AND (LEN(*) > 2)

    Thanks for all of your help. I have a table with 35 columns plus a user ID column; I want to select and display information only from those columns whose value is longer than 2 characters, for the ID the user supplies, not the whole row. I hope I am making sense. Table / desired result:

        DI 35
        Lesson 4   Maths
        Lesson 9   ICT
        Lesson 12  English
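
    SQL cannot vary the column list per row, but one sketch of the idea (assuming SQL Server, and hypothetical column names Lesson1 ... Lesson35 of a common character type) is to unpivot the columns into rows and then filter those rows by length:

        -- Only three of the 35 columns are listed here to keep the sketch short.
        SELECT u.LessonColumn, u.LessonValue
        FROM (SELECT Lesson1, Lesson2, Lesson3
              FROM Table1
              WHERE ID = @ID) AS src
        UNPIVOT (LessonValue FOR LessonColumn IN (Lesson1, Lesson2, Lesson3)) AS u
        WHERE LEN(u.LessonValue) > 2;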

    Read the article

  • Java Prepared Statement Error

    - by Suresh S
    Hi guys, the following code throws an error. I have an insert statement created once, and in the while loop I dynamically set its parameters; at the end I call ps2.addBatch():

        while ((eachLine = in.readLine()) != null) {
            for (int k = stat; k <= 45; k++) {
                ps2.setString(k, main[(k - 2)]);
            }
            stat = 45;
            for (int l = 1; l <= 2; l++) {
                ps2.setString((stat + l), pdp[(l - 1)]); // Exception
            }
            ps2.addBatch();
        }

    This is the error:

        java.lang.ArrayIndexOutOfBoundsException: 45
            at oracle.jdbc.dbaccess.DBDataSetImpl._getDBItem(DBDataSetImpl.java:378)
            at oracle.jdbc.dbaccess.DBDataSetImpl._createOrGetDBItem(DBDataSetImpl.java:781)
            at oracle.jdbc.dbaccess.DBDataSetImpl.setBytesBindItem(DBDataSetImpl.java:2450)
            at oracle.jdbc.driver.OraclePreparedStatement.setItem(OraclePreparedStatement.java:1155)
            at oracle.jdbc.driver.OraclePreparedStatement.setString(OraclePreparedStatement.java:1572)
            at Processor.main(Processor.java:233)

    Read the article

  • Extracting contents of ConnectionStrings in web.config in Silverlight Business application.

    - by webKite
    I am trying to read the data source and catalog from the connectionString in web.config in a Silverlight Business project. Unfortunately, when I used "SqlConnectionStringBuilder", I could not read a connection string of the form "connectionString="metadata=res:///MainDatabase.Main.csdl|res:///MainDatabase.Main.ssdl|......."", whereas it works for "connectionString="Data Source=My-PC\SQL_2008;Initial Catalog =...."". I could get them using "Split"; however, I don't like that solution. Is there any other way to meet my requirements? Thanks

    Read the article

  • A case-insensitive related implementation problem

    - by Robert
    Hi all, I am going through a final refinement posted by the client, which requires me to do a case-insensitive query. I will briefly walk through how this simple program works. First of all, in my Java class I do a fairly simple web page parse:

        title = (String) results.get("title");
        doc = docBuilder.parse("http://" + server + ":" + port + "/exist/rest/db/wb/xql/media_lookup.xql?" + "&title=" + title);

    This Java statement references an XQuery file, media_lookup.xql, which is stored on localhost, and the only parameter we pass is the string "title". Secondly, let's take a look at that XQuery file:

        $title := request:get-parameter('title',""),
        $mediaNodes := doc('/db/wb/portfolio/media_data.xml'),
        $query := $mediaNodes//media[contains(title,$title)],

    Then it evaluates that query. This XQuery takes the "title" parameter passed from our Java class and queries the media_data.xml file stored in the database, which contains a large number of media nodes, each with a 'title' element. As you would expect, this simple query matches those media nodes whose 'title' element contains the value of the 'title' string as a substring: so if 'title' is "Chi", it returns media nodes whose title may be "Chicago" or "Chicken".

    The refinement requested by the client is that there should be NO case sensitivity. The intuitive fix is to modify the XQuery statement by using a lower-case function in it, like:

        $query := $mediaNodes//media[contains(lower-case(title/text()), lower-case($title))],

    However, this modified query runs my machine into memory overflow. Since media_data.xml is quite huge and contains a very large number of media nodes, I assume the lower-case() function runs on every entry, causing the machine to crash. I've talked with some experienced XQuery programmers, and they think I should use an index to solve this problem; I will definitely research that. But before that, I am posting the problem here to get other ideas or suggestions: do you think any other approach may help? For example, could I tweak the Java parse statement to achieve the case insensitivity? I think I have seen people do some string concatenation using "contains." in Java before passing it to the server. Any idea or help is welcome; thanks in advance.

    Read the article

  • How can I open a shared sub calendar in Outlook 2010?

    - by Matt Love
    There is a team in my office that has a shared calendar (the team calendar is set up as a user in Active Directory/Exchange, so treat the team as a user). The team also has three sub-calendars for the different team members. Other people in the office need to be able to access this team's calendar. They can go to Open Calendar in Outlook and see the main calendar, but they cannot see the sub-calendars. The sub-calendars all have the Default user's permission level set to Reviewer. If you go to File → Account Settings → Change [logged-in Exchange account] → More Settings → Advanced and add the team's mailbox, the calendars do show up in Outlook, but they appear under My Calendars instead of Shared Calendars. We need to be able to go to Open Calendar and open the main calendar and all the sub-calendars that way. How is this possible?

    Read the article

  • MySQL: Efficient Blobbing?

    - by feklee
    I'm dealing with blobs of up to, I estimate, about 100 kilobytes in size. The data is already compressed. Storage engine: InnoDB on MySQL 5.1. Frontend: PHP (Symfony with the Propel ORM). Some questions:

        - I've read somewhere that it's not good to update blobs, because it leads to reallocation and fragmentation, and thus bad performance. Is that true? Any reference on this?
        - Initially the blobs get constructed by appending data chunks, each up to 16 kilobytes in size. Would it be more efficient to use a separate chunk table instead, for example with fields parent_id, position, chunk? Then, to get the entire blob, one would do something like the query below, and the result would be used in a PHP script:

              SELECT GROUP_CONCAT(chunk ORDER BY position)
              FROM chunks
              WHERE parent_id = 187

        - Is there any difference between the types of blobs, aside from the size needed for metadata, which should be negligible?
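
    A minimal sketch of the chunk-table idea described above (the DDL is an assumption for illustration, not tested advice; names follow the question). Note that GROUP_CONCAT is limited by group_concat_max_len, which defaults to 1024 bytes and would need raising for ~100 KB blobs:

        CREATE TABLE chunks (
            parent_id INT UNSIGNED NOT NULL,
            position  INT UNSIGNED NOT NULL,
            chunk     VARBINARY(16384) NOT NULL,   -- one <= 16 KB piece of the blob
            PRIMARY KEY (parent_id, position)
        ) ENGINE=InnoDB;

        -- Appending a new piece is a plain INSERT instead of an UPDATE of a growing blob.
        INSERT INTO chunks (parent_id, position, chunk) VALUES (187, 4, 'payload...');

        -- Reassembling the blob, as in the question (raise the limit first).
        SET SESSION group_concat_max_len = 1024 * 1024;
        SELECT GROUP_CONCAT(chunk ORDER BY position SEPARATOR '')
        FROM chunks
        WHERE parent_id = 187;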

    Read the article

  • Designing a table to store EXIF data

    - by rafale
    I'm looking to get the best performance out of querying a table containing EXIF data. The queries in question will only search the EXIF data for the specified strings and return the row index on a match. With that said, would it be better to store the EXIF data in a table with a separate column for each tag, or would storing all of the tags in a single column, as one long delimited string, suit me just as well? There are around 115 EXIF tags I'll be storing, and each record would be around 1,500 to 2,000 characters in length if concatenated into a single string.
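
    A rough sketch of the two layouts being compared (column names are hypothetical and the engine is unspecified, so this is generic SQL). The per-tag layout lets individual tags be indexed and matched exactly, while the single-string layout forces a LIKE scan over every row:

        -- Option 1: one column per tag (only a few of the ~115 shown)
        CREATE TABLE exif_per_tag (
            photo_id     INT PRIMARY KEY,
            camera_make  VARCHAR(64),
            camera_model VARCHAR(64),
            iso_speed    VARCHAR(16)
        );
        CREATE INDEX idx_exif_model ON exif_per_tag (camera_model);
        SELECT photo_id FROM exif_per_tag WHERE camera_model = 'E-M10';

        -- Option 2: all tags concatenated into one delimited string
        CREATE TABLE exif_blob (
            photo_id INT PRIMARY KEY,
            exif_all VARCHAR(2000)
        );
        SELECT photo_id FROM exif_blob WHERE exif_all LIKE '%E-M10%';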

    Read the article

  • What are the rules governing how a bind variable can be used in Postgres and where is this defined?

    - by Craig Miles
    I can have a table and function defined as:

        CREATE TABLE mytable (mycol integer);
        INSERT INTO mytable VALUES (1);

        CREATE OR REPLACE FUNCTION myfunction (l_myvar integer) RETURNS mytable AS $$
        DECLARE
            l_myrow mytable;
        BEGIN
            SELECT * INTO l_myrow FROM mytable WHERE mycol = l_myvar;
            RETURN l_myrow;
        END;
        $$ LANGUAGE plpgsql;

    In this case l_myvar acts as a bind variable for the value passed when I call:

        SELECT * FROM myfunction(1);

    and it returns the row where mycol = 1. If I redefine the function as:

        CREATE OR REPLACE FUNCTION myfunction (l_myvar integer) RETURNS mytable AS $$
        DECLARE
            l_myrow mytable;
        BEGIN
            SELECT * INTO l_myrow FROM mytable WHERE mycol IN (l_myvar);
            RETURN l_myrow;
        END;
        $$ LANGUAGE plpgsql;

        SELECT * FROM myfunction(1);

    it still returns the row where mycol = 1. However, if I now change the function definition to allow me to pass an integer array and try to use that array in the IN clause, I get an error:

        CREATE OR REPLACE FUNCTION myfunction (l_myvar integer[]) RETURNS mytable AS $$
        DECLARE
            l_myrow mytable;
        BEGIN
            SELECT * INTO l_myrow FROM mytable WHERE mycol IN (array_to_string(l_myvar, ','));
            RETURN l_myrow;
        END;
        $$ LANGUAGE plpgsql;

    Analysis reveals that although:

        SELECT array_to_string(ARRAY[1, 2], ',');

    returns 1,2 as expected,

        SELECT * FROM myfunction(ARRAY[1, 2]);

    returns the error "operator does not exist: integer = text" at the line:

        WHERE mycol IN (array_to_string(l_myvar, ','));

    If I execute:

        SELECT * FROM mytable WHERE mycol IN (1,2);

    I get the expected result. Given that array_to_string(l_myvar, ',') evaluates to 1,2 as shown, why aren't these statements equivalent? From the error message it is something to do with data types, but doesn't the IN (variable) construct appear to be behaving differently from the = variable construct? What are the rules here? I know that I could build a statement to EXECUTE, treating everything as a string, to achieve what I want to do, so I am not looking for that as a solution. I do want to understand, though, what is going on in this example. Is there a modification to this approach to make it work, the particular example being to pass in an array of values to build a dynamic IN clause without resorting to EXECUTE? Thanks in advance, Craig
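
    A sketch of the usual array-friendly form (assuming PostgreSQL 8.2 or later). This rewords the predicate rather than explaining the "why", which comes down to array_to_string() producing a single text value '1,2' that cannot be compared to an integer column:

        CREATE OR REPLACE FUNCTION myfunction (l_myvar integer[]) RETURNS mytable AS $$
        DECLARE
            l_myrow mytable;
        BEGIN
            -- = ANY(array) compares mycol against each element of the array,
            -- keeping everything as integers, so no text comparison is involved.
            SELECT * INTO l_myrow FROM mytable WHERE mycol = ANY (l_myvar);
            RETURN l_myrow;
        END;
        $$ LANGUAGE plpgsql;

        SELECT * FROM myfunction(ARRAY[1, 2]);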

    Read the article

  • Listing issue with GROUP BY in MySQL

    - by SethCodes
    Here is a mock-up example of the MySQL table:

        | ID | Country | City      |
        | 1  | Sweden  | Stockholm |
        | 2  | Sweden  | Stockholm |
        | 3  | Sweden  | Lund      |
        | 4  | Sweden  | Lund      |
        | 5  | Germany | Berlin    |
        | 6  | Germany | Berlin    |
        | 7  | Germany | Hamburg   |
        | 8  | Germany | Hamburg   |

    Notice how both the Country and City columns have repeated values inside them. Using GROUP BY country, city in my PDO query, the values will not repeat while in the loop. Here is the PDO for this:

        $query = "SELECT id, city, country FROM table GROUP BY country, city";
        $stmt = $db->query($query);
        while($row = $stmt->fetch(PDO::FETCH_ASSOC)) :

    The above code results in output like this (with some editing in between). GROUP BY works, but the country repeats:

        Sweden - Stockholm
        Sweden - Lund
        Germany - Berlin
        Germany - Hamburg

    Using Bootstrap collapse and the code above, I separate the country from the city with a simple drop-down collapse. Here is the code:

        <li>
            <a data-toggle="collapse" data-target="#<?= $row['id']; ?>" href="search.php?country=<?= $row['country']; ?>">
                <?= $row['country']; ?>
            </a>
            <div id="<?= $row['id']; ?>" class="collapse in"> <!-- collapse div here -->
                <a href="search.php?city=<?= $row['city']; ?>"><?= $row['city']; ?><br></a>
            </div> <!-- end -->
        </li>

    It then looks something like this (once the collapse is initiated):

        Sweden  > Stockholm
        Sweden  > Lund
        Germany > Berlin
        Germany > Hamburg

    Here is where I face the problem. The above lists the values Sweden and Germany two times each. I want Sweden and Germany to be listed only one time, with the cities listed below, so the desired output is this:

        Sweden      // listed one time
          > Stockholm
          > Lund
        Germany     // listed one time
          > Berlin
          > Hamburg

    I have tried using DISTINCT, GROUP_CONCAT and other methods, yet none gives my desired output (above). Suggestions? Below is my current full code in action:

        <?
        $query = "SELECT id, city, country FROM table GROUP BY country, city";
        $stmt = $db->query($query);
        while($row = $stmt->fetch(PDO::FETCH_ASSOC)) :
        ?>
        <li>
            <a data-toggle="collapse" data-target="#<?= $row['id']; ?>" href="search.php?country=<?= $row['country']; ?>">
                <?= $row['country']; ?>
            </a>
            <div id="<?= $row['id']; ?>" class="collapse in"> <!-- collapse div here -->
                <a href="search.php?city=<?= $row['city']; ?>"><?= $row['city']; ?><br></a>
            </div> <!-- end -->
        </li>
        <? endwhile ?>
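
    One way to make each country appear only once (a sketch; whether it fits the Bootstrap markup depends on the PHP loop, which would need adjusting) is to return one row per country with its cities already concatenated, and split them in PHP:

        -- One row per country; cities arrive as a comma-separated list
        -- (e.g. 'Lund,Stockholm') that the front end can explode() and loop over.
        SELECT country,
               GROUP_CONCAT(DISTINCT city ORDER BY city SEPARATOR ',') AS cities
        FROM `table`
        GROUP BY country;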

    Read the article

  • alter mysqldump file before import

    - by julio
    Hi, I have a mysqldump file created from an earlier version of a product that can't be imported into a new version of the product, since the DB structure has changed slightly (mainly, a column that was NOT NULL DEFAULT 0 is now UNIQUE KEY DEFAULT NULL). If I just import the old dump file, it errors out, since the column that has default values of 0 now breaks the UNIQUE constraint. It would be easy enough either to manually alter the mysqldump file, or to import into a temp table, change it, then copy to the new table. However, is there a way to do this programmatically, so it will be repeatable and not manual? (This will need to happen for many instances of this product.) I'm thinking something like disabling key constraints for the import, then setting all values that = 0 to NULL, then re-enabling the key constraints. Is this possible? Any help appreciated.
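
    A sketch of the "import into the old shape, fix, then migrate" idea described above; table and column names are placeholders, since the question does not give them:

        -- Load the old dump into a staging table that still has the old definition
        -- (NOT NULL DEFAULT 0), so the import itself cannot violate anything.
        CREATE TABLE staging_widgets LIKE widgets_old_structure;
        -- ... import the dump into staging_widgets here ...

        -- Turn the old 'no value' marker into NULL before the unique column is populated.
        UPDATE staging_widgets SET legacy_code = NULL WHERE legacy_code = 0;

        -- Copy into the new-structure table, whose legacy_code is UNIQUE KEY DEFAULT NULL.
        INSERT INTO widgets (id, name, legacy_code)
        SELECT id, name, legacy_code FROM staging_widgets;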

    Read the article
