Search Results

Search found 27337 results on 1094 pages for 'sql and the like'.

Page 618/1094

  • How to get an external variable value in a .dtsx package

    - by Rishabh
    Hi, I am executing a .dtsx package from C# and it runs fine. If I pass a variable value from the C# code, how can I read it inside the .dtsx package for my OLE DB source query? Here is my C# code: string file = @"D:\CYNCZFuzzy\CYNCZFuzzy\Contact.dtsx"; package = app.LoadPackage(file, null); Variables vars = package.Variables; vars["User::parentContactID"].Value = 1028203; pkgResults = package.Execute(); string result = pkgResults.ToString(); I need this 1028203 value in my OLE DB source query. Here is the query: select cr.MasterContactID as ParentContactID, c.ID, c.FirstName, c.MiddleName, c.LastName, c.ID as FieldID from Contact c inner join ContactRelation cr on cr.SlaveContactID = c.ID where RelationshipID = 1 AND cr.MasterContactID = ? What should I supply for the ? so that it receives the 1028203 value set from the C# code? Thanks in advance.

    Read the article

  • When should I consider representing the primary-key ...?

    - by JMSA
    When should I consider representing a primary key as a class? Should we only represent primary keys as classes when a table uses a composite key? For example: public class PrimaryKey { ... ... ... } Then private PrimaryKey _parentID; public PrimaryKey ParentID { get { return _parentID; } set { _parentID = value; } } And public void Delete(PrimaryKey id) {...} Also, when should I consider storing data as comma-separated values in a single column of a DB table rather than in separate columns?

    Read the article

  • java.sql.SQLException: Parameter index out of range (3 > number of parameters, which is 2)

    - by sam
    @WebMethod(operationName = "SearchOR") public SearchOR getSearchOR(@WebParam(name = "comp") String comp, @WebParam(name = "name") String name) { //TODO write your implementation code here: SearchOR ack = null; try { String simpleProc = "{ call getuser_info_or(?,?)}"; CallableStatement cs = con.prepareCall(simpleProc); cs.setString(1, comp); cs.setString(2, name); ResultSet rs = cs.executeQuery(); System.out.print("2"); /* int i = 0, j = 0; if (rs.last()) { i = rs.getRow(); ack = new SearchOR[i]; rs.beforeFirst(); } */ while (rs.next()) { // ack[j] = new SearchOR(rs.getString(1), rs.getString(2)); // j++; ve.add(rs.getString(1)); ve.add(rs.getString(2)); } } catch (Exception e) { e.printStackTrace(); System.out.print(e); } return ack; } I am getting the error at the marked part of the code; the stack trace points to that location. Here is my stored procedure: DELIMITER $$ DROP PROCEDURE IF EXISTS discoverdb.getuser_info_or$$ CREATE PROCEDURE discoverdb.getuser_info_or ( IN comp VARCHAR(100), IN name VARCHAR(100), OUT Login VARCHAR(100), OUT email VARCHAR(100) ) BEGIN SELECT sLogin, sEmail INTO Login, email FROM ad_user WHERE company = comp OR sName = name; END $$ DELIMITER ;

    Read the article
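
    A sketch of one way to reconcile the two sides in the question above: the procedure declares four parameters (two IN, two OUT) while the call string only has two placeholders, and the Java code reads a ResultSet rather than OUT values anyway. Dropping the OUT parameters and returning the rows directly keeps the existing { call getuser_info_or(?,?) } string valid. This assumes the two OUT values are not needed elsewhere.

        DELIMITER $$
        DROP PROCEDURE IF EXISTS discoverdb.getuser_info_or$$
        CREATE PROCEDURE discoverdb.getuser_info_or (
            IN comp VARCHAR(100),
            IN name VARCHAR(100)
        )
        BEGIN
            -- returned as a result set, which executeQuery() on the CallableStatement can read
            SELECT sLogin, sEmail
            FROM ad_user
            WHERE company = comp OR sName = name;
        END $$
        DELIMITER ;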

  • How to return a record from a function executed by an INSERT/UPDATE rule?

    - by seas
    I have the following schema in my database: create sequence data_sequence; create table data_table ( id integer primary key, field varchar(100) ); create view data_view as select id, field from data_table; create function data_insert(_new data_view) returns data_view as $$ declare _id integer; _result data_view%rowtype; begin _id := nextval('data_sequence'); insert into data_table(id, field) values(_id, _new.field); select * into _result from data_view where id = _id; return _result; end; $$ language plpgsql; create rule insert as on insert to data_view do instead select data_insert(new); Then I type in psql: insert into data_view(field) values('abc'); I would like to see something like: id | field ----+------- 1 | abc Instead I see: data_insert ------------- (1,"abc") Is it possible to fix this somehow? Thanks for any ideas. The ultimate goal is to use this from other functions, so that I can obtain the id of a just-inserted record without selecting for it again. Something like insert into data_view(field) values('abc') returning id into my_variable would be nice, but it fails with: ERROR: cannot perform INSERT RETURNING on relation "data_view" HINT: You need an unconditional ON INSERT DO INSTEAD rule with a RETURNING clause. I don't really understand that HINT. I use PostgreSQL 8.4.

    Read the article
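
    A sketch of what the HINT in the question above is asking for, using the names from the question and not verified against 8.4 here: replace the function-calling rule with a single unconditional ON INSERT DO INSTEAD rule whose INSERT carries a RETURNING list matching the view's columns. With that in place, INSERT ... RETURNING id on the view should work, including RETURNING id INTO my_variable from PL/pgSQL.

        -- after dropping the rule created in the question
        CREATE RULE data_view_insert AS
        ON INSERT TO data_view
        DO INSTEAD
            INSERT INTO data_table (id, field)
            VALUES (nextval('data_sequence'), new.field)
            RETURNING data_table.id, data_table.field;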

  • Linq to Sql: Update Entity through a new Object

    - by Dänu
    Hey guys, I'd like to update an entity via LINQ, but since I edit the entity in a view after serializing it, I don't have direct access to the entity inside the data context. I could do it like this: entity.property1 = obj.property1; entity.property2 = obj.property2; ... but that's not cool, not cool at all. The next thing I tried was doing it via Attach(), like so: context.Table.Attach(entity, obj); but that doesn't work either. So is there another option short of reflection?

    Read the article

  • What is the best design for these database tables?

    - by Mohammed Jamal
    I need to find the best solution to keep the DB normalized, with a large amount of data expected. My site has a Tags table (keyword, id) and also four types of data related to it (articles, resources, jobs, ...). The big question: for the relation with tags, which design is best for optimization and query speed? Make a table for each relation, like articlesToTags(ArticleID, TagID), jobsToTags(JobID, TagID), etc., or put it all in one table, like tagsRelation(TagID, ItemID, ItemType)? I need your help; please also point me to articles that can help with this design. Consider that in the future the site may contain new sections related to tags. Thanks

    Read the article
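
    A sketch of the single-relation-table option from the question above (column names taken from the question, types assumed): new sections only need a new ItemType value rather than a new table; the trade-off is that ItemID cannot carry a real foreign key to the four item tables.

        CREATE TABLE tagsRelation (
            TagID    INT NOT NULL,
            ItemID   INT NOT NULL,
            ItemType TINYINT NOT NULL,          -- e.g. 1 = article, 2 = resource, 3 = job
            PRIMARY KEY (TagID, ItemType, ItemID),
            KEY idx_item (ItemType, ItemID)     -- look up all tags of one item
        );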

  • Can SELECT INTO affect data in its original table during an UPDATE?

    - by driveby
    While asking another question (asp.net scheduling timed events), user murph posted some insightful information: "Point about this is that it's very, very simple: you have a process for exchange that is performing a clearly defined task, and you have a high-frequency task that is not doing anything particularly complex; it's a straightforward query (select from table where sent = false and send at < value), probably into a temporary table so that you can run a single update query after you've done the sends, one that you can optimise the index for. You're not trying to queue up a huge pile of event triggers, just one that fires once a minute and processes things that are due." Is it possible to SELECT data from table X INTO table Y and have UPDATEs performed on table Y pushed back into table X? I guess the alternative is that the data gets updated in table Y and then an UPDATE command is run on table X based on the data in table Y. What would be the advantage of selecting into another table? Thank you.

    Read the article
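
    A sketch of the pattern murph describes, as it is usually written (T-SQL assumed; the table and column names are hypothetical): nothing is pushed back automatically. The temporary table only holds the keys of the rows being processed, and the final UPDATE joins back to the original table.

        SELECT id
        INTO #due
        FROM Notifications
        WHERE sent = 0 AND send_at < GETDATE();

        -- ... process/send each row listed in #due ...

        UPDATE n
        SET n.sent = 1
        FROM Notifications AS n
        JOIN #due AS d ON d.id = n.id;

        DROP TABLE #due;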

  • MySQL DELETE from a table [closed]

    - by Hossein
    Possible Duplicate: MySQL DELETE in a single table. Hi, I have this table: userurltag(id, user, Url, tag). I want to remove the rows whose URLs are used by only one user; can someone help me? It seems that DELETE ... (SELECT ...) on the same table is not supported in MySQL.

    Read the article
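
    A sketch for the question above (MySQL; column names taken from the question): the target table cannot appear in a plain subquery of its own DELETE, but it can be joined to a derived table, which MySQL materializes first.

        DELETE u
        FROM userurltag AS u
        JOIN (
            SELECT Url
            FROM userurltag
            GROUP BY Url
            HAVING COUNT(DISTINCT user) = 1
        ) AS single_user ON single_user.Url = u.Url;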

  • MySQL break out group clause from subquery

    - by Anton Gildebrand
    Here is my query: SELECT COALESCE(js.name,'Lead saknas'), count(j.id) FROM jobs j LEFT JOIN job_sources js ON j.job_source=js.id LEFT JOIN (SELECT * FROM quotes GROUP BY job_id) q ON j.id=q.job_id GROUP BY j.job_source The problem is that each job is allowed to have more than one quote. Because of that, I group the quotes by job_id. Sure, this works, but I don't like the solution with a subquery. How can I move the grouping out of the subquery and into the main query? I have tried adding q.job_id to the main GROUP BY clause, both before and after the existing column, but I don't get the same results.

    Read the article
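
    A sketch of one way to drop the subquery in the question above: join quotes directly and de-duplicate inside the aggregate instead, so a job with several quotes is still counted once. This assumes the count is meant to be jobs per source, as in the original query.

        SELECT COALESCE(js.name, 'Lead saknas') AS source_name,
               COUNT(DISTINCT j.id)             AS job_count
        FROM jobs j
        LEFT JOIN job_sources js ON j.job_source = js.id
        LEFT JOIN quotes q       ON j.id = q.job_id
        GROUP BY j.job_source;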

  • Removing "Using temporary; Using filesort" from this MySQL select+join+group by query

    - by claytontstanley
    I have the following query:
        select t.Chunk as LeftChunk, t.ChunkHash as LeftChunkHash, q.Chunk as RightChunk, q.ChunkHash as RightChunkHash, count(t.ChunkHash) as ChunkCount
        from chunksubset as t
        join chunksubset as q on t.ID = q.ID
        group by LeftChunkHash, RightChunkHash
    And the following EXPLAIN output (columns: id, select_type, table, type, possible_keys, key, key_len, ref, rows, Extra):
        1 SIMPLE subsets ref PRIMARY,IDIndex,SubsetIndex SubsetIndex 767 const 522014 "Using where; Using temporary; Using filesort"
        1 SIMPLE subsets eq_ref PRIMARY,IDIndex,SubsetIndex PRIMARY 771 sotero.subsets.Id,const 1 "Using where; Using index"
        1 SIMPLE c ref IDIndex IDIndex 4 sotero.subsets.Id 12 "Using where"
        1 SIMPLE c ref IDIndex IDIndex 4 sotero.subsets.Id 12
    Note the "Using temporary; Using filesort". When this query runs, I quickly run out of RAM (presumably because of the temporary table), then the HDD kicks in and the query slows to a halt. I thought it might be an index issue, so I started adding a few that sort of made sense (SHOW INDEX output; columns: Table, Non_unique, Key_name, Seq_in_index, Column_name, Collation, Cardinality, Sub_part, Packed, Null, Index_type):
        chunks 0 PRIMARY 1 ChunkId A 17796190 NULL NULL BTREE
        chunks 1 ChunkHashIndex 1 ChunkHash A 243783 NULL NULL BTREE
        chunks 1 IDIndex 1 Id A 1483015 NULL NULL BTREE
        chunks 1 ChunkIndex 1 Chunk A 243783 NULL NULL BTREE
        chunks 1 ChunkTypeIndex 1 ChunkType A 2 NULL NULL BTREE
        chunks 1 chunkHashByChunkIDIndex 1 ChunkHash A 243783 NULL NULL BTREE
        chunks 1 chunkHashByChunkIDIndex 2 ChunkId A 17796190 NULL NULL BTREE
        chunks 1 chunkHashByChunkTypeIndex 1 ChunkHash A 243783 NULL NULL BTREE
        chunks 1 chunkHashByChunkTypeIndex 2 ChunkType A 261708 NULL NULL BTREE
        chunks 1 chunkHashByIDIndex 1 ChunkHash A 243783 NULL NULL BTREE
        chunks 1 chunkHashByIDIndex 2 Id A 17796190 NULL NULL BTREE
    But the query is still using a temporary table. The database engine is MyISAM. How can I get rid of the "Using temporary; Using filesort" in this query? Just switching to InnoDB without explaining the underlying cause is not a particularly satisfying answer; besides, if the solution is simply adding the proper index, that is much easier than migrating to another engine.

    Read the article
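
    One observation on the query above, offered as a sketch rather than a definitive fix: the GROUP BY mixes columns from both sides of the self-join (t.ChunkHash and q.ChunkHash), and MySQL cannot resolve a GROUP BY that spans two table aliases from a single index, so the temporary table is expected here. What may still help is making the per-alias access index-only with a covering composite index on the underlying table (assuming chunksubset is backed by the chunks table shown in the index listing):

        ALTER TABLE chunks
            ADD INDEX idx_id_hash_chunk (Id, ChunkHash, Chunk);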

  • MySQL Query: Winning Auction Bid

    - by mabwi
    I have a small bidding system that I'm using for a fantasy auction draft. I'm trying to use the query below to pull up the maximum bid on each player. However, it's not actually giving me the maximum bid; it's just giving me the first one entered into the database. SELECT Bid.id FROM bids AS Bid WHERE Bid.active =1 GROUP BY player_id HAVING MAX( Bid.amount ) Here's the bids table layout, in case it helps: CREATE TABLE IF NOT EXISTS `bids` ( `id` int(10) NOT NULL AUTO_INCREMENT, `user_id` int(10) NOT NULL, `player_id` int(10) NOT NULL, `amount` int(6) NOT NULL, `timestamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, `winning_bid` int(1) NOT NULL DEFAULT '0', `active` int(1) NOT NULL DEFAULT '1', PRIMARY KEY (`id`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 ;

    Read the article
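
    A sketch of the usual fix for the question above: HAVING MAX(Bid.amount) only tests that the maximum is non-zero, it does not pick the row holding it. Joining back against the per-player maximum does (column names from the table definition in the question; a tie would return more than one row per player).

        SELECT b.*
        FROM bids AS b
        JOIN (
            SELECT player_id, MAX(amount) AS max_amount
            FROM bids
            WHERE active = 1
            GROUP BY player_id
        ) AS m ON m.player_id = b.player_id
              AND m.max_amount = b.amount
        WHERE b.active = 1;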

  • Detecting abuse in a post rating system

    - by Steven smethurst
    I am using a WordPress plugin called "GD Star Rating" to allow my users to vote on stories that I post to one of my websites: http://everydayfiction.com/ Recently we have been having a lot of abuse of the system: stories that have obviously been voted up artificially. "GD Star Rating" creates detailed logs when a user votes on a story, including IP, time of vote, user agent, etc. For example, this story has 181 votes with an average of 5.7: http://www.everydayfiction.com/snowman-by-shaun-simon/ Most other stories only get around ~40 votes each day. At first I thought the story had gotten onto a social bookmarking site (Digg, StumbleUpon, etc.), but after checking the logs I found that it is getting the same amount of traffic a normal story gets (~2k-3k). I checked whether all the votes for this particular story were coming from the same IP address; I could see that happening if a user in a school's computer lab used all the lab computers to vote up the story. Not one duplicate IP address in the log for this story: SELECT ip, COUNT(*) as count FROM wp_gdsr_votes_log WHERE id=3932 GROUP BY (ip) ORDER BY count DESC Next I thought a user might be using a proxy to vote up the story. I checked this by grouping the browser user agents together to see if a single browser was voting in a suspicious pattern. At most 7 users were using a similar browser, and they voted sporadically (1-5); no evidence of wrongdoing: SELECT user_agent, COUNT(*) as count FROM wp_gdsr_votes_log WHERE id=3932 GROUP BY (user_agent) ORDER BY count DESC I also checked whether all the votes came in at once; maybe someone has a really interesting bot that can change the user agent and use proxies. At most 5 votes came within 2 minutes of each other, and there doesn't seem to be any regularity in how people vote (i.e. a 5 vote does not come in once a minute): SELECT * FROM wp_gdsr_votes_log WHERE id=3932 AND vote=5 ORDER BY wp_gdsr_votes_log.voted DESC The obvious solution to this problem is to force people to log in before they are allowed to vote, but I would prefer not to go down that route unless it is absolutely necessary. I'm looking for suggestions on things to test for to detect the abuse.

    Read the article
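
    One more check that could be added to the list above, offered as a sketch (same log table and story id as in the question): bucket the votes by hour and by rating value. A slow-drip bot or an unnaturally lopsided rating distribution tends to stand out in this view more than in raw chronological ordering.

        SELECT DATE_FORMAT(voted, '%Y-%m-%d %H:00') AS hour_bucket,
               vote,
               COUNT(*) AS votes
        FROM wp_gdsr_votes_log
        WHERE id = 3932
        GROUP BY hour_bucket, vote
        ORDER BY hour_bucket, vote;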

  • Linq ChangeConflictException (ASP.NET MVC) - occurs when submitting DataContext changes

    - by Alex
    System.Data.Linq.ChangeConflictException: 2 of X updates failed. at System.Data.Linq.ChangeProcessor.SubmitChanges(ConflictMode failureMode) at System.Data.Linq.DataContext.SubmitChanges(ConflictMode failureMode) at PROJECT.Controllers.HomeController.ClickProc(Int32 id, String code, String n) This is what I get very often. This action is performed thousands of times a day, and I get this exception about once every 5 seconds. From what I understand, it happens when something changes in the database between creating the DataContext and submitting its changes. Am I right? What's the problem here and how can I fix it?

    Read the article

  • Group by date range on weeks/months interval

    - by khelll
    I'm using MySQL and I have the following table:
        | clicks | int |
        | day | date |
    I want to be able to generate reports like this, where the periods are the last 4 weeks:
        | period | clicks |
        | 1/7 - 7/5 | 1000 |
        | 25/6 - 31/7 | .... |
        | 18/6 - 24/6 | .... |
        | 12/6 - 18/6 | .... |
    or the last 3 months:
        | period | clicks |
        | July | .... |
        | June | .... |
        | April | .... |
    Any ideas on how to write SELECT queries that can generate the equivalent date ranges and click counts?

    Read the article
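
    A sketch for the question above (MySQL; the table name clicks_per_day is an assumption, since the question does not give one):

        -- last 4 weeks, one row per ISO week
        SELECT CONCAT(MIN(`day`), ' - ', MAX(`day`)) AS period,
               SUM(clicks)                           AS clicks
        FROM clicks_per_day
        WHERE `day` >= CURDATE() - INTERVAL 4 WEEK
        GROUP BY YEARWEEK(`day`, 1)
        ORDER BY MIN(`day`) DESC;

        -- last 3 months, one row per calendar month
        SELECT DATE_FORMAT(`day`, '%M') AS period,
               SUM(clicks)              AS clicks
        FROM clicks_per_day
        WHERE `day` >= CURDATE() - INTERVAL 3 MONTH
        GROUP BY YEAR(`day`), MONTH(`day`)
        ORDER BY MIN(`day`) DESC;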

  • Does a clustered index on a foreign key column increase join performance vs. a non-clustered one?

    - by alpav
    In many places it's recommended that clustered indexes are best utilized when selecting a range of rows with a BETWEEN statement. When I join on a foreign key field in such a way that this clustered index is used, I would guess that clustering helps there too, because a range of rows is being selected even though they all share the same clustered key value and BETWEEN is not used. Considering that I care only about that one select with the join and nothing else, is my guess wrong?

    Read the article
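
    A sketch of the setup the question above describes (SQL Server syntax; the table and column names are hypothetical). Clustering the child table on the foreign key stores all rows with the same parent key contiguously, so the join for one parent reads a contiguous range of pages rather than scattered ones, which is the same access pattern that benefits BETWEEN.

        CREATE CLUSTERED INDEX IX_OrderLines_OrderID
            ON dbo.OrderLines (OrderID);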

  • Assign values from the same table

    - by Reddy S R
    I have a database table with parent/child relationships between rows. One parent can have any number of children, and children do not have children of their own. I want to copy Message from the parent category to its child categories.
        CategoryID Name Value Message ParentID DeptId
        1 Books 9 Specials 1
        2 Music 7 1
        3 Paperback 25 1 1
        4 PDFs 26 1 2
        5 CDs 35 2 1
    With that sample data, Paperback should have Specials as its Message after the query is run. I have fetched the child rows (the query runs very slowly, I don't know why), but how do I get the parent data and assign it to the appropriate child rows? --@DeptId = 1 select * from Categories where ParentID in ( select CategoryID from Categories where DeptID = @DeptId ) I would like to see a solution that does not use cursors. Thanks

    Read the article
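
    A sketch of one cursor-free way to do the copy described above (T-SQL UPDATE ... FROM syntax, assumed because the question already uses @DeptId; column names as shown in the sample):

        UPDATE child
        SET child.Message = parent.Message
        FROM Categories AS child
        JOIN Categories AS parent
            ON child.ParentID = parent.CategoryID
        WHERE parent.DeptId = @DeptId;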

  • Subreports: find the subtotal of each subreport and the grand total in the main report

    - by sonia
    I want to compute the grand total from the subreport subtotals. I have three subreports: 1. item report, 2. labor report, 3. machine report. I have found the total of each report using a shared variable, with a formula like: shared numbervar totalitem=sum({storedprocedrename.columnname}) I have done this in all the subreports. Now I want the grand total of all of these. The formula I have written in the main report is: shared numbervar totalitem; // same variable used in the item subreport shared numbervar labtotal; // same variable used in the labor subreport shared numbervar machinetotal; // same variable used in the machine subreport numbervar total; total=totalitem+labtotal+machinetotal; total But it is not giving the correct result, and the result is not in the correct format. Please show the main-report formula in detail. Thanks

    Read the article

  • How to integrate Active Directory with SharePoint on a virtual server?

    - by pointlesspolitics
    I have a main server running Windows Server 2008 with Active Directory installed. Additionally, I have created a Hyper-V virtual server with MOSS 2007 installed, using a dynamic IP address. I can access the SharePoint site as an intranet. How can I grant access to all the Active Directory users and bring their profiles into MOSS without adding them manually? If I am missing any information, please let me know.

    Read the article

  • LINQ2SQL: how to merge two columns from the same table into a single list

    - by TomL
    This is probably a simple question, but I'm only a beginner, so... Suppose I have a table containing the home and work locations (cities) certain people use, something like: ID (int), namePerson (string), homeLocation (string), workLocation (string), where homeLocation and workLocation can both be null. Now I want all the different locations that are used merged into a single list. Something like: var homeLocation = from hm in Places where hm.Home != null select hm.Home; var workLocation = from wk in Places where wk.Work != null select wk.Work; List<string> locationList = new List<string>(); locationList = homeLocation.Distinct().ToList<string>(); locationList.AddRange(workLocation.Distinct().ToList<string>()); (which I guess would still allow duplicates if a city appears in both columns, which I don't really want...) My question: how can this be put into a single LINQ statement? Thanks in advance for your help!

    Read the article
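
    For comparison, a sketch of the result the question above is after, expressed in raw SQL: a UNION of the two columns, which removes duplicates across both lists (table and column names as used in the LINQ snippets, which differ slightly from the prose description).

        SELECT Home AS Location FROM Places WHERE Home IS NOT NULL
        UNION
        SELECT Work FROM Places WHERE Work IS NOT NULL;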

  • Best performance approach to history mechanism?

    - by Royi Namir
    We are going to create a history mechanism for changes in our DB (DART in the picture) via triggers. We have 600 tables. For each record that is changed, the trigger will insert the deleted version into XXX. Regarding XXX, the options are: Option 1: clone each table in the DART DB, so each table gets a "sister" history table, e.g. Table1 gets Table1_History. Problems: we would have 1,200 tables, and a programmer could make mistakes by working on the wrong table. Option 2: create a new DB (DART_2005 in the picture) and put the history tables there. Option 3: use a linked server that stores the DB containing the history tables. Questions: 1) Which option gives the best performance (I guess 3 does not, but is it 1 or 2, or are they the same)? 2) Does option 2 behave like a "linked server" (in queries we will need to select from both DBs)? 3) What is the best-practice approach?

    Read the article
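
    A minimal sketch of what one such trigger could look like under option 2 (SQL Server syntax; the table, column, and database names are hypothetical). A cross-database INSERT like this is an ordinary local write on the same instance, unlike a linked-server call:

        CREATE TRIGGER trg_Table1_History
        ON dbo.Table1
        AFTER UPDATE, DELETE
        AS
        BEGIN
            SET NOCOUNT ON;
            -- "deleted" holds the pre-change image of every affected row
            INSERT INTO DART_2005.dbo.Table1_History (Id, Col1, Col2, ChangedAt)
            SELECT d.Id, d.Col1, d.Col2, GETDATE()
            FROM deleted AS d;
        END;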

  • Select where and where not

    - by Simon
    I have a table containing lessons that I called "cours" (French), with several courses in it, and I have linked them to students through a join table that records whether they attend the lessons or not. I would like to return both the matching data and the data that does NOT match: if a student follows 3 courses out of 5, I would like to return the 3 courses he follows and the 2 courses he does not follow. Is there a way to do it?

    Read the article
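
    A sketch for the question above: LEFT JOIN the enrolment rows for one student against all courses, so courses with no match come back with NULLs and can be flagged. The join table and column names are assumptions, since the question does not give them.

        SELECT c.id,
               c.name,
               CASE WHEN e.student_id IS NULL THEN 0 ELSE 1 END AS is_followed
        FROM cours AS c
        LEFT JOIN student_cours AS e
               ON e.cours_id   = c.id
              AND e.student_id = 42;   -- the student being asked about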

  • GridView will not update underlying data source

    - by John Christensen
    So I've been pounding on this problem all day. I've got a LinqDataSource that points to my model and a GridView that consumes it. When I attempt to do an update on the GridView, it does not update the underlying data source. I thought it might have to do with the LinqDataSource, so I added a SqlDataSource, and the same thing happens. The aspx is as follows (the code-behind page is empty): <asp:SqlDataSource ID="SqlDataSource1" runat="server" ConnectionString="Data Source=devsql32;Initial Catalog=Steam;Persist Security Info=True;" ProviderName="System.Data.SqlClient" SelectCommand="SELECT [LangID], [Code], [Name] FROM [Languages]" UpdateCommand="UPDATE [Languages] SET [Code]=@Code WHERE [LangID]=@LangId"> </asp:SqlDataSource> <asp:GridView ID="_languageGridView" runat="server" AllowPaging="True" AllowSorting="True" AutoGenerateColumns="False" DataKeyNames="LangId" DataSourceID="SqlDataSource1"> <Columns> <asp:CommandField ShowDeleteButton="True" ShowEditButton="True" /> <asp:BoundField DataField="LangId" HeaderText="Id" ReadOnly="True" /> <asp:BoundField DataField="Code" HeaderText="Code" /> <asp:BoundField DataField="Name" HeaderText="Name" /> </Columns> </asp:GridView> <asp:LinqDataSource ID="_languageDataSource" ContextTypeName="GeneseeSurvey.SteamDatabaseDataContext" runat="server" TableName="Languages" EnableInsert="True" EnableUpdate="true" EnableDelete="true"> </asp:LinqDataSource> What in the world am I missing here? This problem is driving me insane.

    Read the article
