Search Results

Search found 6017 results on 241 pages for 'universal records managem'.

Page 96/241 | < Previous Page | 92 93 94 95 96 97 98 99 100 101 102 103  | Next Page >

  • Guidelines for using Merge task in SSIS

    - by thursdaysgeek
    I have a table with three fields, one an identity field, and I need to add some new records from a source that has the other two fields. I'm using SSIS, and I think I should use the merge tool, because one of the sources is not in the local database. But I'm confused by the merge tool and the proper process.

    I have my one source (an Oracle table), and I get two fields, well_id and well_name, with a sort after, sorting by well_id. I have the destination table (SQL Server), and I'm also using that as a source. It has three fields: well_key (identity field), well_id, and well_name, and I then have a sort task, sorting on well_id. Both of those are input to my merge task. I was going to output to a temporary table, and then somehow get the new records back into the SQL Server table.

        Oracle Well      SQL Well
            |                |
            V                V
        Sort Source      Sort Well
            |                |
            ------> Merge* <------
                      |
                      V
               Temp well table

    I suspect this isn't the best way to use this tool, however. What are the proper steps for a merge like this? One of my reasons for questioning this method is that my merge has an error, telling me that the "Merge Input 2" must be sorted, but its source is a sort task, so it IS sorted.

    Example data:

        SQL Well (before merge)
        well_key  well_id  well_name
        1         123      well k
        2         292      well c
        3         344      well t
        5         439      well d

        Oracle Well
        well_id  well_name
        123      well k
        292      well c
        311      well y
        344      well t
        439      well d
        532      well j

        SQL Well (after merge)
        well_key  well_id  well_name
        1         123      well k
        2         292      well c
        3         344      well t
        5         439      well d
        6         311      well y
        7         532      well j

    Would it be better to load my Oracle Well to a temporary local file, and then just use a SQL insert statement on it?
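
    A minimal T-SQL sketch of the staging alternative asked about at the end: land the Oracle rows locally (for example in a staging table), then insert only the rows whose well_id is missing from the destination. The table names dbo.Well and dbo.Stage_OracleWell are placeholders:

        -- well_key is an identity column, so it is omitted and assigned automatically
        INSERT INTO dbo.Well (well_id, well_name)
        SELECT s.well_id, s.well_name
        FROM dbo.Stage_OracleWell AS s
        WHERE NOT EXISTS (SELECT 1 FROM dbo.Well AS w WHERE w.well_id = s.well_id);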

    Read the article

  • cascading combo box causing empty fields in next record

    - by glinch
    Hi there, I'm having problems with a cascading combo box. Everything works fine with the combo boxes and the values get populated correctly.

        Private Sub cmbAdjComp_AfterUpdate()
            Me.cboAdjOff.RowSource = "SELECT AdjusterCompanyOffice.ID, AdjusterCompanyOffice.Address1, " & _
                "AdjusterCompanyOffice.Address2, AdjusterCompanyOffice.Address3, " & _
                "AdjusterCompanyOffice.Address4, AdjusterCompanyOffice.Address5 FROM" & _
                " AdjusterCompanyOffice WHERE AdjusterCompanyOffice.AdjCompID = " & Me.cmbAdjComp.Column(1) & _
                " ORDER BY AdjusterCompanyOffice.Address1"
            Me.cboAdjOff = Me.cboAdjOff.ItemData(0)
        End Sub

    The secondary combo box has a row source query:

        SELECT AdjusterCompanyOffice.ID, AdjusterCompanyOffice.Address1, AdjusterCompanyOffice.Address2,
               AdjusterCompanyOffice.Address3, AdjusterCompanyOffice.Address4, AdjusterCompanyOffice.Address5
        FROM AdjusterCompanyOffice
        ORDER BY AdjusterCompanyOffice.Address1;

    Both combo boxes have the same control source. Everything works fine and dandy moving between records, and the boxes show the correct fields for each record. When I use the first combo box, and then select the appropriate option in the second combo box, everything works great on the specific record. However, when I move to the next record, the values in the second combo box are all empty. If I close the form and reopen it, and avoid using the cascading combo boxes, all the values are correct when I move between records. Somehow using the cascading combo boxes creates a conflict with the row source of the secondary combo box.

    Hope that is clear! Have been rummaging around for an answer but can't find anything. Any help would be greatly appreciated. Thanks, Noel
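
    One hedged possibility, sketched below: the AfterUpdate handler narrows the combo's RowSource, and that narrowed list persists when you navigate to other records, so their stored values have no matching row to display. Resetting the row source in the form's Current event (names taken from the post) restores the full list for each record:

        Private Sub Form_Current()
            'Restore the unfiltered list so every record can display its stored value
            Me.cboAdjOff.RowSource = "SELECT AdjusterCompanyOffice.ID, AdjusterCompanyOffice.Address1, " & _
                "AdjusterCompanyOffice.Address2, AdjusterCompanyOffice.Address3, " & _
                "AdjusterCompanyOffice.Address4, AdjusterCompanyOffice.Address5 " & _
                "FROM AdjusterCompanyOffice ORDER BY AdjusterCompanyOffice.Address1"
        End Sub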

    Read the article

  • Cannot rollback transaction with Entity Framework

    - by Luca
    I have to do queries on uncommitted changes, and I tried to use transactions, but I found that they do not work if there are exceptions. I made a simple example to reproduce the problem. I have a database with only one table called "Tabella", and the table has two fields: "ID" is an autogenerated integer, and "Valore" is an integer with a unique constraint. Then I try to run this code:

        using (TransactionScope scope = new TransactionScope())
        {
            Db1Container db1 = new Db1Container();
            try
            {
                db1.AddToTabella(new Tabella() { Valore = 1 });
                db1.SaveChanges();
            }
            catch { }
            try
            {
                db1.AddToTabella(new Tabella() { Valore = 1 });
                db1.SaveChanges(); // unique constraint is violated here and an exception is thrown
            }
            catch { }
            try
            {
                db1.AddToTabella(new Tabella() { Valore = 2 });
                db1.SaveChanges();
            }
            catch { }
            // scope.Complete(); // NEVER called
        } // here everything should be rolled back

    Now if I look into the database, it should contain no records because the transaction should roll back; instead I find two records: one with Valore=1 and one with Valore=2. Am I missing something? It looks like the second call to the SaveChanges method rolls back its own changes and "deletes" the transaction, then the third call to SaveChanges commits the changes of the first and the third insert (at this point it is as if the transaction does not exist). I also tried the SaveChanges(false) method (even without calling AcceptAllChanges), but with no success: I get the same behaviour. I do not want the transaction to be rolled back automatically by SaveChanges, because I want to correct the errors (for example by user interaction in the catch block) and retry. Can someone help me with this? It seems like a bug, and it is giving me a really big headache...
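
    One hedged thing to try, sketched below: by default the ObjectContext opens and closes the connection around each SaveChanges call, and once the ambient transaction has been doomed by the constraint violation, later calls can end up on a connection that no longer participates in it. Opening the context's connection explicitly for the lifetime of the scope keeps every SaveChanges on the same enlisted connection. This is a sketch of an experiment, not a confirmed fix; note also that after a failed SaveChanges the offending entity is still attached and will be retried on the next call:

        using (TransactionScope scope = new TransactionScope())
        using (Db1Container db1 = new Db1Container())
        {
            db1.Connection.Open(); // one connection, enlisted once, for the whole scope

            // ... the three try/catch blocks from above ...

            // scope.Complete() still intentionally not called
        }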

    Read the article

  • YUI DataTable with JSON and client side filtering Data Error

    - by user316574
    Hi, I don't understand what I'm doing wrong here! I keep getting a Data Error, but I validated the JSON and it's OK... Here is the markup and JavaScript from the YUI DataTable example (slightly modified):

        <div class="markup">
            <label for="filter">Filter by state:</label>
            <input type="text" id="filter" value="">
            <div id="tbl"></div>
        </div>

        YAHOO.util.Event.addListener(window, "load", function() {
            //var Ex = YAHOO.namespace('example');
            var dataSource = new YAHOO.util.DataSource("jsondb/json_meta_proxy.html", {
                responseType : YAHOO.util.DataSource.TYPE_JSON,
                responseSchema : {
                    resultsList : "records",
                    fields : [
                        { key : "idprojet" },
                        { key : "nomprojet" }
                    ],
                    metaFields : {
                        totalRecords : "totalRecords"
                    }
                },
                doBeforeCallback : function (req, raw, res, cb) {
                    // This is the filter function
                    var data = res.results || [],
                        filtered = [],
                        i, l;
                    if (req) {
                        req = req.toLowerCase();
                        for (i = 0, l = data.length; i < l; i++) {
                            // (the tail of this loop was cut off in the post; restored
                            // here following the YUI local-filtering example it adapts)
                            if (data[i].nomprojet.toLowerCase().indexOf(req) > -1) {
                                filtered.push(data[i]);
                            }
                        }
                        res.results = filtered;
                    }
                    return res;
                }
            });
        });

    and here is the JSON data in the file "jsondb/json_meta_proxy.html":

        {
            "recordsReturned": 1,
            "totalRecords": 1,
            "startIndex": 0,
            "sort": "idprojet",
            "dir": "asc",
            "records": [
                { "idprojet": "11256", "nomprojet": "" }
            ]
        }

    Many thanks for your help!!!
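
    The post breaks off before the table itself is created; a hedged sketch of the remaining wiring, adapted from the YUI 2 local-filtering example the code follows (column labels are assumptions):

        var columnDefs = [
            { key : "idprojet",  label : "ID",  sortable : true },
            { key : "nomprojet", label : "Nom", sortable : true }
        ];
        var dataTable = new YAHOO.widget.DataTable("tbl", columnDefs, dataSource);

        // Re-query the DataSource with the filter text on every keystroke
        YAHOO.util.Event.on("filter", "keyup", function () {
            dataSource.sendRequest(YAHOO.util.Dom.get("filter").value, {
                success : dataTable.onDataReturnInitializeTable,
                scope   : dataTable
            });
        });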

    Read the article

  • Subsonic Access To App.Config Connection Strings From Referenced DLL in Powershell Script

    - by J Wynia
    I've got a DLL that contains Subsonic-generated and augmented code to access a data model. Actually, it is a merged DLL of that original assembly, Subsonic itself, and a few other referenced DLLs in a single assembly, called "PowershellDataAccess.dll". However, it should be noted that I've also tried this referencing each assembly individually in the script as well, and that doesn't work either.

    I am then attempting to use the objects and methods in that assembly. In this case, I'm accessing a class that uses Subsonic to load a bunch of records and creates a Lucene index from those records. The problem I'm running into is that the call into the Subsonic method to retrieve data from the database says it can't find the connection string. I'm pointing the AppDomain at the appropriate config file, which does contain that connection string, by name. Here's the script:

        $ScriptDir = Get-Location
        [System.IO.Directory]::SetCurrentDirectory($ScriptDir)
        [Reflection.Assembly]::LoadFrom("PowershellDataAccess.dll")
        [System.AppDomain]::CurrentDomain.SetData("APP_CONFIG_FILE", "$ScriptDir\App.config")
        $indexer = New-Object LuceneIndexingEngine.LuceneIndexGenerator
        $indexer.GeneratePageTemplateIndex("PageTemplateIndex");

    I went digging into Subsonic itself, and the following line in Subsonic is what's looking for the connection string and throwing the exception:

        ConfigurationManager.ConnectionStrings[connectionStringName]

    So, out of curiosity, I created an assembly with a single class that has a single property that just runs that one line to retrieve the connection string name. I created a .ps1 that called that assembly and hit that property. That prototype can find the connection string just fine. Anyone have any idea why Subsonic's portion can't seem to see the connection strings?

    Read the article

  • Date query with Hibernate on Timestamp Column in PostgreSQL

    - by Shashikant Kore
    A table has a timestamp column. A sample value in it could be 2010-03-30 13:42:42. With Hibernate, I am doing a range query Restrictions.between("column-name", fromDate, toDate). The Hibernate mapping for this column is as follows:

        <property name="orderTimestamp" column="order_timestamp" type="java.util.Date" />

    Let's say I want to find all the records that have the date 30th March 2010 or 31st March 2010. A range query on this field is done as follows:

        Date fromDate = new SimpleDateFormat("yyyy-MM-dd").parse("2010-03-30");
        Date toDate = new SimpleDateFormat("yyyy-MM-dd").parse("2010-03-31");
        Expression.between("orderTimestamp", fromDate, toDate);

    This doesn't work. The query is converted to the respective timestamps "2010-03-30 00:00:00" and "2010-03-31 00:00:00", so all the records for 31st March 2010 are ignored. A simple solution to this problem could be to have the end date as "2010-03-31 23:59:59". But I would like to know if there is a way to match only the date part of the timestamp column. Also, is Expression.between() inclusive of both limits? The documentation doesn't throw any light on this.
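
    On the inclusivity question: between() translates to SQL BETWEEN, which is inclusive of both limits, and that is exactly why the trailing midnight cuts off the last day. A hedged sketch of the usual workaround: rather than matching on the date part (a function applied to the column would defeat any index on it), widen the range to a half-open interval ending strictly before the day after the range (criteria is an assumed Criteria instance):

        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd");
        Date fromDate = fmt.parse("2010-03-30");
        Date toDateExclusive = fmt.parse("2010-04-01"); // first day AFTER the range

        criteria.add(Restrictions.ge("orderTimestamp", fromDate));
        criteria.add(Restrictions.lt("orderTimestamp", toDateExclusive));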

    Read the article

  • Optimizing Levenshtein Distance Algorithm

    - by Matt
    I have a stored procedure that uses Levenshtein Distance to determine the result closest to what the user typed. The only thing really affecting the speed is the function that calculates the Levenshtein Distance for all the records before selecting the record with the lowest distance (I've verified this by putting a 0 in place of the call to the Levenshtein function). The table has 1.5 million records, so even the slightest adjustment may shave off a few seconds. Right now the entire thing runs over 10 minutes. Here's the method I'm using:

        ALTER function dbo.Levenshtein
        (
            @Source nvarchar(200),
            @Target nvarchar(200)
        )
        RETURNS int
        AS
        BEGIN
            DECLARE @Source_len int, @Target_len int, @i int, @j int,
                    @Source_char nchar, @Dist int, @Dist_temp int,
                    @Distv0 varbinary(8000), @Distv1 varbinary(8000)

            SELECT @Source_len = LEN(@Source), @Target_len = LEN(@Target),
                   @Distv1 = 0x0000, @j = 1, @i = 1, @Dist = 0

            WHILE @j <= @Target_len
            BEGIN
                SELECT @Distv1 = @Distv1 + CAST(@j AS binary(2)), @j = @j + 1
            END

            WHILE @i <= @Source_len
            BEGIN
                SELECT @Source_char = SUBSTRING(@Source, @i, 1),
                       @Dist = @i, @Distv0 = CAST(@i AS binary(2)), @j = 1

                WHILE @j <= @Target_len
                BEGIN
                    SET @Dist = @Dist + 1
                    SET @Dist_temp = CAST(SUBSTRING(@Distv1, @j+@j-1, 2) AS int) +
                        CASE WHEN @Source_char = SUBSTRING(@Target, @j, 1) THEN 0 ELSE 1 END
                    IF @Dist > @Dist_temp
                    BEGIN
                        SET @Dist = @Dist_temp
                    END
                    SET @Dist_temp = CAST(SUBSTRING(@Distv1, @j+@j+1, 2) AS int)+1
                    IF @Dist > @Dist_temp SET @Dist = @Dist_temp
                    BEGIN
                        SELECT @Distv0 = @Distv0 + CAST(@Dist AS binary(2)), @j = @j + 1
                    END
                END

                SELECT @Distv1 = @Distv0, @i = @i + 1
            END

            RETURN @Dist
        END

    Anyone have any ideas? Any input is appreciated. Thanks, Matt
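
    One cheap pruning worth considering at this scale, sketched below: the Levenshtein distance can never be smaller than the difference in string lengths, so rows whose length differs from the input by more than a chosen cutoff can skip the expensive function entirely. Table and column names here are assumptions:

        DECLARE @Cutoff int
        SET @Cutoff = 3   -- hypothetical maximum distance worth considering

        SELECT TOP 1 Word, Dist
        FROM (
            SELECT r.Word, dbo.Levenshtein(@Input, r.Word) AS Dist
            FROM dbo.Records AS r
            WHERE ABS(LEN(r.Word) - LEN(@Input)) <= @Cutoff  -- distance >= length difference
        ) AS d
        ORDER BY Dist;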

    Read the article

  • LINQ to SQL: filter nested objects with soft deletes

    - by Alex
    Hello everyone, I'm using soft deletes in my database (an IsDeleted field), and I'm actively using the LoadWith and AssociateWith methods to retrieve and filter nested records. The thing is, AssociateWith only works with properties that represent a one-to-many relationship.

        DataLoadOptions loadOptions = new DataLoadOptions();
        loadOptions.LoadWith<User>(u => u.Roles);
        loadOptions.AssociateWith<User>(u => u.Roles.Where(r => !r.IsDeleted));

    In the example above I just say: I want to retrieve users with related (undeleted) roles. But when I have a one-to-one relationship, e.g. Document - File (only one file is related to the document), I'm unable to filter the soft-deleted object:

        DataLoadOptions loadOptions = new DataLoadOptions();
        loadOptions.LoadWith<Document>(d => d.File);
        // the next certainly won't work
        loadOptions.AssociateWith<File>(f => !f.IsDeleted);

    So, is there any idea how to filter records within the one-to-one relationship? Thanks!
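
    AssociateWith only filters child collections, so for a one-to-one member there is nothing for it to filter at load time. A hedged sketch of one workaround, if excluding the affected parents is acceptable: filter on the association in the query itself (names from the post; db is an assumed DataContext). The navigation property access translates to a join:

        DataLoadOptions loadOptions = new DataLoadOptions();
        loadOptions.LoadWith<Document>(d => d.File);
        db.LoadOptions = loadOptions;

        // Keep documents whose file is absent or not soft-deleted
        var documents = db.Documents
                          .Where(d => d.File == null || !d.File.IsDeleted)
                          .ToList();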

    Read the article

  • adjacency list creation, out of memory error

    - by p1
    Hello, I am trying to create an adjacency list to store a graph. The implementation works fine while storing 100,000 records. However, when I tried to store around 1 million records I ran into an OutOfMemoryError:

        Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
            at java.util.Arrays.copyOfRange(Arrays.java:3209)
            at java.lang.String.<init>(String.java:215)
            at java.io.BufferedReader.readLine(BufferedReader.java:331)
            at java.io.BufferedReader.readLine(BufferedReader.java:362)
            at liarliar.main(liarliar.java:39)

    Following is my implementation:

        HashMap<String, ArrayList<String>> adj = new HashMap<String, ArrayList<String>>(num);
        while ((str = in.readLine()) != null) {
            StringTokenizer Tok = new StringTokenizer(str);
            name = (String) Tok.nextElement();
            cnt = Integer.valueOf(Tok.nextToken());
            ArrayList<String> templist = new ArrayList<String>(cnt);
            while (cnt > 0) {
                templist.add(in.readLine());
                cnt--;
            }
            adj.put(name, templist);
        }
        // done creating an adjacency list

    I am wondering if there is any better way to implement the adjacency list. Also, I know the number of nodes right at the beginning, and in the future I flatten the list as I visit nodes. Any suggestions? Thanks
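
    A hedged sketch of one memory-saving direction: map each node name to a small integer once, and store neighbours as integer ids instead of duplicated strings (every neighbour line currently becomes its own String object). The class below is an assumption about how the surrounding code could be arranged; the blunt alternative is simply a larger heap, e.g. java -Xmx1024m liarliar.

        import java.util.*;

        class Graph {
            private final Map<String, Integer> index = new HashMap<String, Integer>();
            private final List<List<Integer>> adj = new ArrayList<List<Integer>>();

            // Return the node's integer id, registering it on first sight.
            int idOf(String name) {
                Integer id = index.get(name);
                if (id == null) {
                    id = index.size();
                    index.put(name, id);
                    adj.add(new ArrayList<Integer>());
                }
                return id;
            }

            // One edge per neighbour line read from the file.
            void addEdge(String from, String to) {
                adj.get(idOf(from)).add(idOf(to));
            }
        }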

    Read the article

  • Displaying single-instance business object data on SSRS

    - by Lachlan
    I have a SQL Server Reporting Services local (i.e. RDLC) report displayed in a ReportViewer, with two subreports. I am using business objects to populate the datasets. What is the best way to populate single-instance data on my report, e.g. a dynamic title, or a text box that lists a calculated value not based on the report data? I am currently displaying data using the following style:

        public class MyRecordList
        {
            string Name { get; set; }
            List<MyRecord> Records { get; set; }
        }

        public class MyRecord
        {
            string Description { get; set; }
            string Value { get; set; }
        }

    I set the data source to the Records in an instance of MyRecordList, and they print out fine in a table. But adding a text box and referring to Name displays nothing. I also tried turning Name into a List and referring to the first item in the list, using:

        =First(Fields!Name.Value, "Report1_MyRecordList")

    but still nothing is printed on the report.
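
    Scalar values that aren't rows in a dataset usually travel more naturally as report parameters than as fields. A hedged sketch, assuming a report parameter named Name has been defined in the RDLC and rv is the ReportViewer instance:

        // Pass the single-instance value as a report parameter...
        rv.LocalReport.SetParameters(new ReportParameter[] {
            new ReportParameter("Name", myRecordList.Name)
        });
        // ...and reference it in the title text box as =Parameters!Name.Value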

    Read the article

  • Query Months help

    - by StealthRT
    Hey all, I am in need of some helpful tips/advice on how to go about my problem. I have a database that houses a "signup" table. The date for this table is formatted as such: 2010-04-03 00:00:00. Now suppose I have 10 records in this database:

        2010-04-03 00:00:00
        2010-01-01 00:00:00
        2010-06-22 00:00:00
        2010-02-08 00:00:00
        2010-02-05 00:00:00
        2010-03-08 00:00:00
        2010-09-29 00:00:00
        2010-11-16 00:00:00
        2010-04-09 00:00:00
        2010-05-21 00:00:00

    And I wanted to get each month's total registrations... so following the example above:

        Jan = 1   Feb = 2   Mar = 1   Apr = 2
        May = 1   Jun = 1   Jul = 0   Aug = 0
        Sep = 1   Oct = 0   Nov = 1   Dec = 0

    Now how can I use a query to do that, but not have to use a query like:

        WHERE left(date, 7) = '2010-01'

    and keep doing that 12 times? I would like it to be a single query call and just have it place each month's visits into an array like so:

        do until EOF
            theMonthArray[0] = "total for jan"
            theMonthArray[1] = "total for feb"
            theMonthArray[2] = "total for mar"
            theMonthArray[3] = "total for apr"
            ...etc
        loop

    I just can not think of a way to do that other than the example I posted, with the 12 queries called - one for each month. This is my query as of right now. Again, this only populates for one month, where I am trying to populate all 12 months at once:

        SELECT count(idNumber) as numVisits, theAccount, signUpDate, theActive
        from userinfo
        WHERE theActive = 'YES'
        AND idNumber = '0203'
        AND theAccount = 'SUB'
        AND left(signUpDate, 7) = '2010-04'
        GROUP BY idNumber
        ORDER BY numVisits;

    The example query above outputs this:

        numVisits | theAccount | signUpDate          | theActive
        2           SUB          2010-04-16 00:00:00   YES

    Which is correct, because I have 2 records within the month of April. But again, I am trying to do all 12 months at one time (in a single query), so I do not tax the database server as much compared to doing 12 different queries.

    UPDATE: I'm looking to do something along these lines:

        if NOT rst.EOF
            if left(rst("signUpDate"), 7) = "2010-01" then
                theMonthArray[0] = rst("numVisits")
            end if
            if left(rst("signUpDate"), 7) = "2010-02" then
                theMonthArray[1] = rst("numVisits")
            end if
            etc etc....
        end if

    Any help would be great! :) David
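
    A hedged single-query sketch: group on the month number and let the database return one row per month that has data, then fill the array from the rows (months with no row stay 0). MONTH() exists in both Access SQL and T-SQL; if this is Access, the date literals need # delimiters instead of quotes:

        SELECT MONTH(signUpDate) AS theMonth, COUNT(idNumber) AS numVisits
        FROM userinfo
        WHERE theActive = 'YES'
          AND idNumber = '0203'
          AND theAccount = 'SUB'
          AND signUpDate >= '2010-01-01' AND signUpDate < '2011-01-01'
        GROUP BY MONTH(signUpDate)
        ORDER BY theMonth;

    The fill loop then collapses to a single line per row: theMonthArray[rst("theMonth") - 1] = rst("numVisits").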

    Read the article

  • Excel Regex, or export to Python? "Vlookup" in Python?

    - by victorhooi
    heya,

    We have an Excel file with a worksheet containing people records.

    1. Phone Number Sanitation

    One of the fields is a phone number field, which contains phone numbers in the format e.g.: +XX(Y)ZZZZ-ZZZZ (where X, Y and Z are integers). There are also some records which have fewer digits, e.g.: +XX(Y)ZZZ-ZZZZ. And others with really screwed up formats: +XX(Y)ZZZZ-ZZZZ / ZZZZ, or: ZZZZZZZZ. We need to sanitise these all into the format 0YZZZZZZZZ (or 0YZZZZZZZ for those with fewer digits).

    2. Fill in Supervisor Details

    Each person also has a supervisor, given as a numeric ID. We need to do a lookup to get the name and email address of that supervisor, and add it to the line. This lookup will firstly be on the same worksheet (i.e. searching itself), and it can then fall back to another workbook with more people.

    3. Approach?

    For the first issue, I was thinking of using regex in Excel/VBA somehow to do the parsing. My Excel-fu isn't the best, but I suppose I can learn... lol. Any particular points on this one? However, would I be better off exporting the XLS to a CSV (e.g. using xlrd), then using Python to fix up the phone numbers?

    For the second approach, I was thinking of just using vlookups in Excel to pull in the data, and somehow having it fall through: first on searching itself, then on the external workbook, then just putting in error text. Not sure how to do that last part. However, if I do happen to choose to export to CSV and do it in Python, what's an efficient way of doing the vlookup? (Should I convert to a dict, or just iterate? Or is there a better, or more idiomatic way?)

    Cheers, Victor
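
    On the last question: a dict is the idiomatic answer - one pass to build it, after which each lookup is O(1), so there is no need to iterate per row. A hedged Python sketch of both halves under assumed file layouts (the column positions and file names are made up for illustration):

        import csv
        import re

        def sanitize(raw):
            """Normalise '+XX(Y)ZZZZ-ZZZZ'-style numbers to the 0Y... form."""
            raw = raw.split("/")[0]                       # keep the first number if several
            m = re.match(r"\+\d{2}\((\d)\)([\d-]+)", raw.strip())
            if not m:
                return re.sub(r"\D", "", raw)             # fallback: strip to digits only
            return "0" + m.group(1) + m.group(2).replace("-", "")

        # Build the "VLOOKUP" table once: supervisor id -> (name, email)
        with open("supervisors.csv") as f:
            supervisors = {row[0]: (row[1], row[2]) for row in csv.reader(f)}

        with open("people.csv") as f:
            for row in csv.reader(f):
                phone = sanitize(row[3])                          # phone column assumed
                name, email = supervisors.get(row[4], ("", ""))   # supervisor id column assumed
                print(phone, name, email)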

    Read the article

  • Oracle rownum in db2 - Java data archiving

    - by HonorGod
    I have a data archiving process in Java that moves data between DB2 and Sybase. FYI - this is not done through any import/export process, because there are several conditions on each table that are available at run time, and so this process is developed in Java. Right now I have a single DatabaseReader and DatabaseWriter defined for each source and destination combination, so that data is moved in multiple threads.

    I guess I wanted to expand this further, where I can have multiple DatabaseReaders and multiple DatabaseWriters defined for each source and destination combination. So, for example, if the source data is about 100 rows and I define 10 readers and 10 writers, each reader will read 10 rows and give them to a writer. I hope this will give me extreme performance, depending on the resources available on the server (CPU, memory etc.).

    But I guess the problem is that these source tables do not have primary keys, and it is extremely difficult to grab rows in multiple sets. Oracle provides the rownum concept, and I guess life is much simpler there... but how about DB2? How can I achieve this behavior with DB2? Is there a way to say fetch the first 10 records, then fetch the next 10 records, and so on? Any suggestions / ideas?

    DB2 version - DB2 v8.1.0.144, Fix Pack 16, Linux
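
    DB2 has an equivalent in the ROW_NUMBER() OLAP function, which is present in DB2 V8. A hedged paging sketch; note that with no primary key, the ORDER BY must name columns that produce a stable, deterministic order, or the readers' slices can overlap. Table and column names are assumptions:

        SELECT *
        FROM (
            SELECT t.*, ROW_NUMBER() OVER (ORDER BY t.acct_no, t.txn_ts) AS rn
            FROM source_table t
        ) AS numbered
        WHERE rn BETWEEN 11 AND 20   -- the second reader's slice of 10 rows

    DB2 also supports FETCH FIRST 10 ROWS ONLY, but on its own that cannot express "the next 10".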

    Read the article

  • SQL Joining on a one-to-many relationship

    - by Harley
    Ok, here was my original question: Table one contains

        ID | Name
        1    Mary
        2    John

    Table two contains

        ID | Color
        1    Red
        1    Blue
        2    Blue
        2    Green
        2    Black

    I want to end up with

        ID | Name | Red | Blue | Green | Black
        1    Mary   Y     Y
        2    John         Y     Y       Y

    It seems that because there are 11 unique values for color and 1000's upon 1000's of records in table one, there is no 'good' way to do this. So, two other questions. Is there an efficient way to query to get this result? I can then create a crosstab in my application to get the desired result.

        ID | Name | Color
        1    Mary   Red
        1    Mary   Blue
        2    John   Blue
        2    John   Green
        2    John   Black

    If I wanted to limit the number of records returned, how could I do a query to do something like this?

        Where ((color='blue') AND (color<>'red' OR color<>'green'))

    So using the above example I would then get back

        ID | Name | Color
        1    Mary   Blue
        2    John   Blue
        2    John   Black

    I connect to Visual FoxPro tables via ADODB to use SQL. Thanks!
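
    For the first follow-up, a plain join already produces the row-per-color shape for client-side crosstabbing; and one reading of the second filter - keep rows whose color is anything other than red or green, which matches the sample output - fits a NOT IN list. A hedged sketch against the VFP OLE DB SQL dialect (the intended WHERE semantics are an assumption):

        SELECT t1.ID, t1.Name, t2.Color
        FROM table1 t1
        INNER JOIN table2 t2 ON t2.ID = t1.ID
        WHERE t2.Color NOT IN ('Red', 'Green')
        ORDER BY t1.ID, t2.Color

    Dropping the WHERE clause gives the full row-per-color result.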

    Read the article

  • Query Optimizing Request

    - by mithilatw
    I am very sorry if this question is structured in a not very helpful manner, or the question itself is not a very good one! I need to update an MSSQL table called component every 10 minutes, based on information from another table called materials_progress. I have nearly 60,000 records in component and more than 10,000 records in materials_progress. I wrote an update query to do the job, but it takes longer than 4 minutes to complete! Here is the query:

        UPDATE component
        SET stage_id = CASE
                WHEN t.required_quantity <= t.total_received THEN 27
                WHEN t.total_ordered < t.total_received THEN 18
                ELSE 18
            END
        FROM (
            SELECT mp.job_id, mp.line_no, mp.component, l.quantity AS line_quantity,
                   CASE WHEN mp.component_name_id = 2 THEN l.quantity * 2
                        ELSE l.quantity END AS required_quantity,
                   SUM(ordered) AS total_ordered,
                   SUM(received) AS total_received,
                   c.component_id
            FROM line l
            LEFT JOIN component c ON c.line_id = l.line_id
            LEFT JOIN materials_progress mp ON l.job_id = mp.job_id
                AND l.line_no = mp.line_no
                AND c.component_name_id = mp.component_name_id
            WHERE mp.job_id IS NOT NULL
              AND (mp.cancelled IS NULL OR mp.cancelled = 0)
              AND (mp.manual_override IS NULL OR mp.manual_override = 0)
              AND c.stage_id = 18
            GROUP BY mp.job_id, mp.line_no, mp.component, l.quantity,
                     mp.component_name_id, component_id
        ) AS t
        WHERE component.component_id = t.component_id

    I am not going to explain the scenario, as it is too complex. Could somebody please tell me what makes this query this expensive, and a way to get around it? Thank you very very much in advance!!!
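
    Hard to say definitively without the execution plan, but with no supporting indexes this shape tends to scan and hash everything on each run. A hedged starting point - index the join and filter columns the derived table touches (column names are taken from the query, but verify which table ordered and received belong to; INCLUDE needs SQL Server 2005 or later):

        CREATE INDEX IX_component_line
            ON component (line_id, component_name_id, stage_id);

        CREATE INDEX IX_materials_progress_lookup
            ON materials_progress (job_id, line_no, component_name_id)
            INCLUDE (ordered, received, cancelled, manual_override);

    Skipping rows whose stage_id already holds the computed value would also avoid rewriting unchanged records every 10 minutes.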

    Read the article

  • Using Oracle hint "FIRST_ROWS" to improve Oracle database performances

    - by bobetko
    I have a statement that runs on an Oracle database server. The statement has about 5 joins, and there is nothing unusual there. It looks pretty much like this:

        SELECT field1, field2, field3, ...
        FROM table1, table2, table3, table4, table5
        WHERE table1.id = table2.id
        AND table2.id = table3.id
        AND ...
        table5.userid = 1

    The problem (and what is interesting) is that the statement for userid = 1 takes 1 second to return 590 records, while the statement for userid = 2 takes around 30 seconds to return 70 records. I don't understand why the difference is so big. It seems that a different execution plan is chosen for the statement with userid = 1 than for userid = 2. After I implemented the Oracle hint FIRST_ROWS, performance became significantly better: both statements (for both ids 1 and 2) return in under 1 second.

        SELECT /*+ FIRST_ROWS */ field1, field2, field3, ...
        FROM table1, table2, table3, table4, table5
        WHERE table1.id = table2.id
        AND table2.id = table3.id
        AND ...
        table5.userid = 1

    Questions:

    1) What are possible reasons for bad performance when userid = 2 (when the hint is not used)?
    2) Why would the execution plan be different for one statement vs. the other (when the hint is not used)?
    3) Is there anything that I should be careful about when deciding to add this hint to my queries?

    Thanks
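
    Not a full answer to all three, but one common culprit for a plan that is fast for one bind value and terrible for another is stale or missing optimizer statistics combined with skewed data in the userid column. A hedged first step before baking the hint in permanently - refresh statistics on the tables involved (owner and table names assumed):

        BEGIN
          DBMS_STATS.GATHER_TABLE_STATS(
            ownname => USER,
            tabname => 'TABLE5',
            cascade => TRUE   -- refresh the table's index statistics too
          );
        END;
        /

    On question 3: FIRST_ROWS optimizes for the time to return the first rows rather than total throughput, so it can make queries that must process their whole result set slower.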

    Read the article

  • SQL 2005 indexed queries slower than unindexed queries

    - by uos??
    Adding a seemingly perfect index is having an unexpectedly adverse effect on query performance...

        -- [Data] has a predictable structure and a simple clustered index on the primary key:
        ALTER TABLE [dbo].[Data] ADD PRIMARY KEY CLUSTERED ( [ID] )

        -- My query joins the table on itself, looking for a certain kind of "overlapping" records
        SELECT DISTINCT [Data].ID AS [ID]
        FROM dbo.[Data] AS [Data]
        JOIN dbo.[Data] AS [Compared]
            ON [Data].[A] = [Compared].[A]
            AND [Data].[B] = [Compared].[B]
            AND [Data].[C] = [Compared].[C]
            AND ([Data].[D] = [Compared].[D] OR [Data].[E] = [Compared].[E])
            AND [Data].[F] <> [Compared].[F]
        WHERE 1=1
            AND [Data].[A] = @A
            AND @CS <= [Data].[C] AND [Data].[C] < @CE -- Between a range

    [Data] has about a quarter-million records so far, and 10% to 50% of the data satisfies the where clause depending on @A, @CS, and @CE. As is, the query takes 1 second to return about 300 rows when querying 10%, and 30 seconds to return 3000 rows when querying 50% of the data. Curiously, the estimated/actual execution plan indicates two parallel Clustered Index Scans, but the clustered index is only on the ID, which isn't part of the conditions of the query, only the output. ??

    If I add this hand-crafted [IDX_A_B_C_D_E_F] index, which I fully expected to improve performance, the query slows down by a factor of 8 (8 seconds for 10% and 4 minutes for 50%). The estimated/actual execution plans show an Index Seek, which seems like the right thing to be doing, but why so slow??

        CREATE UNIQUE INDEX [IDX_A_B_C_D_E_F] ON [dbo].[Data]
            ([A], [B], [C], [D], [E], [F])
            INCLUDE ([ID], [X], [Y], [Z]);

    The Database Engine Tuning Advisor suggests a similar index, with no noticeable difference in performance from this one. Moving AND [Data].[F] <> [Compared].[F] from the join condition to the where clause makes no difference in performance. I need these and other indexes for other queries. I'm sure I could hint that the query should use the clustered index, since that's currently winning - but we all know it is not as optimized as it could be, and without a proper index, I can expect the performance to get much worse with additional data. What gives?
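
    One hedged suspicion: the OR between [D] and [E] in the join condition prevents a single clean seek, so the self-join degenerates into wide range scans whichever index is chosen (the "Clustered Index Scan" is simply a full-table scan; the clustered index comprises every row regardless of its key). A rewrite that splits the OR into a UNION of two independently seekable joins sometimes helps - UNION rather than UNION ALL, so the DISTINCT semantics are preserved:

        SELECT [Data].ID AS [ID]
        FROM dbo.[Data] AS [Data]
        JOIN dbo.[Data] AS [Compared]
            ON  [Data].[A] = [Compared].[A]
            AND [Data].[B] = [Compared].[B]
            AND [Data].[C] = [Compared].[C]
            AND [Data].[D] = [Compared].[D]
            AND [Data].[F] <> [Compared].[F]
        WHERE [Data].[A] = @A AND @CS <= [Data].[C] AND [Data].[C] < @CE

        UNION

        SELECT [Data].ID
        FROM dbo.[Data] AS [Data]
        JOIN dbo.[Data] AS [Compared]
            ON  [Data].[A] = [Compared].[A]
            AND [Data].[B] = [Compared].[B]
            AND [Data].[C] = [Compared].[C]
            AND [Data].[E] = [Compared].[E]
            AND [Data].[F] <> [Compared].[F]
        WHERE [Data].[A] = @A AND @CS <= [Data].[C] AND [Data].[C] < @CE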

    Read the article

  • SQL Full-Text Indexing Issue

    - by Phil
    UPDATE: I have figured out a way using a form of dynamic SQL to fix this problem. Thanks anyway for any help.

    Hi, there is something that I need to accomplish with the use of full-text indexing. When I run a query (within a stored procedure) that looks like the following - with a parameter (@name) that was defined above (not shown here), and which is sent to the stored procedure by an ASP.NET page, from user input:

        SELECT Name
        FROM dbo.UsersTable
        WHERE FREETEXT(Name, @name)

    Well, the fact of the matter is that a query like this will return values if, say, the parameter @name's value is Joe, and say there are 10 records of names with Joe in them. But if @name's value is just Jo, then it returns nothing, and this is the problem. Say there are other records in this table that have Jo in them, like for example Jole, or John. So the real question is: how do I get it to return values that are not full words or phrases, but just from part of the word/phrase (like I said above)? Like FREETEXT(Name, @name*), which is not allowed as a query, but you get the idea. Is there a way to accomplish this? I'm sure there must be; I need to figure this out. Thanks for any help.
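
    For anyone landing here: prefix matching in full-text search is spelled with CONTAINS rather than FREETEXT, and the prefix term has to be double-quoted inside the condition string, which also avoids the dynamic SQL. A hedged sketch:

        DECLARE @term nvarchar(210)
        SET @term = N'"' + @name + N'*"'   -- e.g. "Jo*"

        SELECT Name
        FROM dbo.UsersTable
        WHERE CONTAINS(Name, @term)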

    Read the article

  • Asynchronously populate datagridview in Windows Forms application

    - by dcryptd
    howzit! I'm a web developer that has been recently requested to develop a Windows Forms application, so please bear with me (or don't laugh!) if my question is a bit elementary.

    After many sessions with my client, we eventually decided on an interface that contains a tab control with 5 tabs. Each tab has a DataGridView that may eventually hold up to 25,000 rows of data (with about 6 columns each). I have successfully managed to bind the grids when the tab page is loaded, and it works fine for a few records, but the UI froze when I bound the grid with 20,000 dummy records. The "freeze" occurs when I click on the tab itself, and the UI only frees up (and the tab page is rendered) once the bind is complete. I communicated this to the client and mentioned the option of paging for each grid, but she is adamant about NOT wanting this. My only option then is to look for some asynchronous method of doing this in the background.

    I don't know much about threading in Windows Forms, but I know that I can use the BackgroundWorker control to achieve this. My only issue, after reading up a bit on it, is that it is ideally used for "long-running" tasks and I/O operations. My questions:

    How does one determine a long-running task?

    How does one NOT MISUSE the BackgroundWorker control, i.e. is there a general guideline to follow when using it? (I understand that opening/spawning multiple threads may be undesirable in certain instances.)

    Most importantly: how can I achieve (asynchronous) binding of the DataGridView after the tab page - and all its child controls - loads?

    Thank you for reading this (ahem) lengthy query, and I highly appreciate any responses/thoughts/directions on this matter! Cheers!
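
    Loosely, anything slow enough to make the UI visibly stutter (roughly beyond a hundred milliseconds) is worth moving off the UI thread, and a one-shot data fetch is exactly what BackgroundWorker was designed for. A hedged sketch of the usual shape - fetch in DoWork on a pool thread, bind in RunWorkerCompleted back on the UI thread (LoadRows and the control names are stand-ins):

        private readonly BackgroundWorker worker = new BackgroundWorker();

        public MainForm()
        {
            InitializeComponent();
            worker.DoWork += (s, e) => e.Result = LoadRows();   // slow query, off the UI thread
            worker.RunWorkerCompleted += (s, e) =>
                dataGridView1.DataSource = e.Result;            // back on the UI thread
        }

        private void tabPage1_Enter(object sender, EventArgs e)
        {
            if (!worker.IsBusy)
                worker.RunWorkerAsync();                        // the tab renders immediately
        }

    For 25,000 rows it may also be worth a look at the grid's VirtualMode, which only materialises the rows actually being displayed.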

    Read the article

  • GridViewColumn not subscribing to PropertyChanged event in a ListView

    - by Chris Wenham
    I have a ListView with a GridView that's bound to the properties of a class that implements INotifyPropertyChanged, like this:

        <ListView Name="SubscriptionView" Grid.Column="0" Grid.ColumnSpan="2" Grid.Row="2"
                  ItemsSource="{Binding Path=Subscriptions}">
            <ListView.View>
                <GridView>
                    <GridViewColumn Width="24" CellTemplate="{StaticResource IncludeSubscriptionTemplate}"/>
                    <GridViewColumn Width="150" DisplayMemberBinding="{Binding Path=Name}" Header="Subscription"/>
                    <GridViewColumn Width="75" DisplayMemberBinding="{Binding Path=RecordsWritten}" Header="Records"/>
                    <GridViewColumn Width="Auto" CellTemplate="{StaticResource FilenameTemplate}"/>
                </GridView>
            </ListView.View>
        </ListView>

    The class looks like this:

        public class Subscription : INotifyPropertyChanged
        {
            public int RecordsWritten
            {
                get { return _records; }
                set
                {
                    _records = value;
                    if (PropertyChanged != null)
                        PropertyChanged(this, new PropertyChangedEventArgs("RecordsWritten"));
                }
            }
            private int _records;
            ...
        }

    So I fire up a BackgroundWorker and start writing records, updating the RecordsWritten property and expecting the value to change in the UI, but it doesn't. In fact, the value of PropertyChanged on the Subscription objects is null. This is a puzzler, because I thought WPF was supposed to subscribe to the PropertyChanged event of data objects that implement INotifyPropertyChanged. Am I doing something wrong here?

    Read the article

  • should this database table be normalized?

    - by oo
    I have taken over a database that stores fitness information, and we were having a debate about a certain table and whether it should stay as one table or get broken up into three tables. Today there is one table, called workouts, that has the following fields:

        id, exercise_id, reps, weight, date, person_id

    So if I did 2 sets of 3 different exercises on one day, I would have 6 records in that table for that day. For example:

        id  exercise_id  reps  weight  date       person_id
        1   1            10    100     1/1/2010   10
        2   1            10    100     1/1/2010   10
        3   1            10    100     1/1/2010   10
        4   2            10    100     1/1/2010   10
        5   2            10    100     1/1/2010   10
        6   2            10    100     1/1/2010   10

    So the question is: given that there is some redundant data (date, person_id, exercise_id) in multiple records, should this be normalized into three tables?

        WorkoutSummary:
        - id
        - date
        - person_id

        WorkoutExercise:
        - id
        - workout_id (foreign key into WorkoutSummary)
        - exercise_id

        WorkoutSets:
        - id
        - workout_exercise_id (foreign key into WorkoutExercise)
        - reps
        - weight

    I would guess the downside is that queries would be slower after this refactoring, as now we would need to join 3 tables to do the same query that had no joins before. The benefit of the refactoring is that in the future we can add new fields at the workout summary level or the exercise level without adding more duplication. Any feedback on this debate?
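
    For concreteness, a hedged DDL sketch of the proposed three-table shape (T-SQL-flavoured types; everything here is inferred from the field lists above):

        CREATE TABLE WorkoutSummary (
            id        INT IDENTITY PRIMARY KEY,
            date      DATETIME NOT NULL,
            person_id INT      NOT NULL
        );

        CREATE TABLE WorkoutExercise (
            id          INT IDENTITY PRIMARY KEY,
            workout_id  INT NOT NULL REFERENCES WorkoutSummary (id),
            exercise_id INT NOT NULL
        );

        CREATE TABLE WorkoutSets (
            id                  INT IDENTITY PRIMARY KEY,
            workout_exercise_id INT NOT NULL REFERENCES WorkoutExercise (id),
            reps                INT NOT NULL,
            weight              INT NOT NULL
        );

    With indexes on the two foreign keys, the extra joins are usually cheap relative to the flexibility gained.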

    Read the article

  • OleDbException was unhandled in VB.Net

    - by ritch
        Syntax error (missing operator) in query expression '((ProductID = ?) AND ((? = 1 AND Product Name IS NULL)
        OR (Product Name = ?)) AND ((? = 1 AND Price IS NULL) OR (Price = ?)) AND ((? = 1 AND Quantity IS NULL)
        OR (Quantity = ?)))'.

    I need some help sorting this error out in Visual Basic .NET 2008. I am trying to update records in an MS Access database. I have it being able to update one table, but the other table is just not having it.

        Private Sub Admin_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
            'Reads Users into the program from the text file (Located at Module.VB)
            ReadUsers()

            'Connect To Access 2007 Database File
            con.ConnectionString = ("Provider=Microsoft.ACE.OLEDB.12.0;" & "Data Source=E:\Computing\Projects\Login\Login\bds.accdb;")
            con.Open()

            'SQL connect 1
            sql = "Select * From Clients"
            da = New OleDb.OleDbDataAdapter(sql, con)
            da.Fill(ds, "Clients")
            MaxRows = ds.Tables("Clients").Rows.Count
            intCounter = -1

            'SQL connect 2
            sql2 = "Select * From Products"
            da2 = New OleDb.OleDbDataAdapter(sql2, con)
            da2.Fill(ds, "Products")
            MaxRows2 = ds.Tables("Products").Rows.Count
            intCounter2 = -1

            'Show Clients From Database in a ComboBox
            ComboBoxClients.DisplayMember = "ClientName"
            ComboBoxClients.ValueMember = "ClientID"
            ComboBoxClients.DataSource = ds.Tables("Clients")
        End Sub

    The button; the error appears on da2.Update(ds, "Products"):

        Private Sub Button4_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button4.Click
            Dim cb2 As New OleDb.OleDbCommandBuilder(da2)
            ds.Tables("Products").Rows(intCounter2).Item("Price") = ProductPriceBox.Text
            da2.Update(ds, "Products")
            'Alerts the user that the Database has been updated
            MsgBox("Database Updated")
        End Sub

    However, the code works when updating another table:

        Private Sub UpdateButton_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles UpdateButton.Click
            'Allows users to update records in the Database
            Dim cb As New OleDb.OleDbCommandBuilder(da)

            'Changes the database contents with the content in the text fields
            ds.Tables("Clients").Rows(intCounter).Item("ClientName") = ClientNameBox.Text
            ds.Tables("Clients").Rows(intCounter).Item("ClientID") = ClientIDBox.Text
            ds.Tables("Clients").Rows(intCounter).Item("ClientAddress") = ClientAddressBox.Text
            ds.Tables("Clients").Rows(intCounter).Item("ClientTelephoneNumber") = ClientNumberBox.Text

            'Updates the table within the Database
            da.Update(ds, "Clients")

            'Alerts the user that the Database has been updated
            MsgBox("Database Updated")
        End Sub
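
    The quoted expression shows the auto-generated UPDATE tripping over the column names that contain spaces (Product Name): OleDbCommandBuilder does not bracket identifiers by default, which is why the Clients table (no spaces in its column names) works. A hedged fix - tell the builder how to quote before the update runs:

        Dim cb2 As New OleDb.OleDbCommandBuilder(da2)
        'Bracket identifiers so columns like [Product Name] survive in the generated SQL
        cb2.QuotePrefix = "["
        cb2.QuoteSuffix = "]"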

    Read the article

  • Mapping issue with multi-field primary keys using hibernate/JPA annotations

    - by Derek Clarkson
    Hi all, I'm stuck with a database which is using multi-field primary keys. I have a situation with master and details tables, where the details table's primary key contains fields which are also the foreign keys to the master table. Like this:

        Master primary key fields:
            master_pk_1

        Details primary key fields:
            master_pk_1
            details_pk_2
            details_pk_3

    In the Master class we define the Hibernate/JPA annotations like this:

        @Id
        @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "idGenerator")
        @Column(name = "master_pk_1")
        private long masterPk1;

        @OneToMany(cascade = CascadeType.ALL)
        @JoinColumn(name = "master_pk_1", referencedColumnName = "master_pk_1")
        private List<Details> details = new ArrayList<Details>();

    And in the Details class I have defined the id and back reference like this:

        @EmbeddedId
        @AttributeOverrides({
            @AttributeOverride(name = "masterPk1",  column = @Column(name = "master_pk_1")),
            @AttributeOverride(name = "detailsPk2", column = @Column(name = "details_pk_2")),
            @AttributeOverride(name = "detailsPk3", column = @Column(name = "details_pk_3"))
        })
        private DetailsPrimaryKey detailsPrimaryKey = new DetailsPrimaryKey();

        @ManyToOne
        @JoinColumn(name = "master_pk_1", referencedColumnName = "master_pk_1", insertable = false)
        private Master master;

    The goal of all of this was that I could create a new Master, add some Details to it, and when saved, JPA/Hibernate would generate the new id for the master in the masterPk1 field and automatically pass it down to the details records, storing it in the matching masterPk1 field in the DetailsPrimaryKey class. At least that's what the documentation I've been looking at implies. What actually happens is that Hibernate appears to correctly create and update the records in the database, but it does not pass the key down to the Details objects in memory. Instead I have to set it manually. I also found that without the insertable = false added to the back reference to Master, Hibernate would generate SQL with the master_pk_1 field listed twice in the insert statement, resulting in the database throwing an exception. My question is simply: is this arrangement of annotations correct? Or is there a better way of doing it?
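
    Since Hibernate writes the generated key to the rows but leaves the in-memory copies stale, one hedged workaround is to propagate it yourself once the master's id exists (the accessor names below are assumptions about the classes in the post):

        // After persisting the master (the id is assigned on save/flush),
        // stamp it into each detail's embedded key.
        session.save(master);
        for (Details d : master.getDetails()) {
            d.getDetailsPrimaryKey().setMasterPk1(master.getMasterPk1());
            d.setMaster(master);
        }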

    Read the article

  • How to import data to SAP

    - by Mehmet AVSAR
    Hi, as a complete stranger in the town of SAP, I want to transfer my own application's (mobile salesforce automation) data to SAP. My application has records of customers, stocks, inventory, invoices (and waybills), cheques, payments, collections, stock transfer data, etc. I have an additional database which holds matchings of records, i.e. a customer with ID 345 in my application has key 120-035-0223 in SAP. Every record, for sure, has to know its counterpart, including parameters.

    After searching Google and the SAP help site for a day, I found that it's going to be a bit more pain than I expected. Especially the SAP site does not give even a clue on it - or say I couldn't find one. We have transferred our data to some other ERP systems; some of them wanted XML files, some exposed their APIs. My point is: is SQL Server's SSIS an option for me? I hope it is, so I can fight on my own territory. Since client requests vary a lot, I count flexibility as the most important criterion. Also, I want to transfer as much data as I can. Any help is appreciated. Regards,

    Read the article

  • Is it possible to reuse subqueries?

    - by Gothmog
    Hello, I'm having some problems trying to perform a query. I have two tables: one with element information, and another with records related to the elements of the first table. The idea is to get, in the same row, the element information plus the information of several records. The structure could be explained like this:

        table [ id, name ]
            [1, '1']
            [2, '2']

        table2 [ id, type, value ]
            [1, 1, '2009-12-02']
            [1, 2, '2010-01-03']
            [1, 4, '2010-01-03']
            [2, 1, '2010-01-02']
            [2, 2, '2010-01-02']
            [2, 2, '2010-01-03']
            [2, 3, '2010-01-07']
            [2, 4, '2010-01-07']

    And this is what I would like to achieve:

        result [id, name, Column1, Column2, Column3, Column4]
            [1, '1', '2009-12-02', '2010-01-03', ,             '2010-01-03']
            [2, '2', '2010-01-02', '2010-01-02', '2010-01-07', '2010-01-07']

    The following query gets the proper result, but it seems extremely inefficient, having to iterate table2 for each column. Would it be possible in any way to do a subquery and reuse it?

        SELECT a.id, a.name,
            (select min(value) from table2 t where t.id = subquery.id and t.type = 1 group by t.type) as Column1,
            (select min(value) from table2 t where t.id = subquery.id and t.type = 2 group by t.type) as Column2,
            (select min(value) from table2 t where t.id = subquery.id and t.type = 3 group by t.type) as Column3,
            (select min(value) from table2 t where t.id = subquery.id and t.type = 4 group by t.type) as Column4
        FROM (
            SELECT distinct id
            FROM table2 t
            WHERE (t.type in (1, 2, 3, 4))
            AND t.value between '2010-01-01' and '2010-01-07'
        ) as subquery
        LEFT JOIN table a ON a.id = subquery.id
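
    A hedged rewrite using conditional aggregation, which walks table2 once instead of four times. Note one semantic assumption: here the per-type minimums come from the same date window used to pick the ids, whereas the original correlated subqueries searched all of table2:

        SELECT t.id,
               a.name,
               MIN(CASE WHEN t.type = 1 THEN t.value END) AS Column1,
               MIN(CASE WHEN t.type = 2 THEN t.value END) AS Column2,
               MIN(CASE WHEN t.type = 3 THEN t.value END) AS Column3,
               MIN(CASE WHEN t.type = 4 THEN t.value END) AS Column4
        FROM table2 t
        LEFT JOIN table a ON a.id = t.id
        WHERE t.type IN (1, 2, 3, 4)
          AND t.value BETWEEN '2010-01-01' AND '2010-01-07'
        GROUP BY t.id, a.name

    A CASE with no ELSE yields NULL, which MIN simply ignores.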

    Read the article
