Search Results

Search found 5233 results on 210 pages for 'a records'.


  • How to import data to SAP

    - by Mehmet AVSAR
    Hi, As a complete stranger to the world of SAP, I want to transfer my own application's (mobile salesforce automation) data to SAP. My application has records of customers, stocks, inventory, invoices (and waybills), cheques, payments, collections, stock transfer data etc. I have an additional database which holds matchings of records, i.e. a customer with ID 345 in my application has key 120-035-0223 in SAP. Every record, for sure, has to know its counterpart, including parameters. After searching Google and the SAP help site for a day, I discovered that it's going to be a bit more pain than I expected; the SAP site especially does not give even a clue on it, or at least I couldn't find one. We have transferred our data to some other ERP systems, some of which wanted XML files while others exposed their APIs. My point is: is SQL Server's SSIS an option for me? I hope it is, so I can fight on my own territory. Since client requests vary a lot, I count flexibility as the most important criterion. Also, I want to transfer as much data as I can. Any help is appreciated. Regards,

  • Is it possible to reuse subqueries?

    - by Gothmog
    Hello, I'm having some problems trying to perform a query. I have two tables, one with element information, and another one with records related to the elements of the first table. The idea is to get, in the same row, the element information plus the information of several records. The structure could be explained like this:

        table [ id, name ]
        [1, '1']
        [2, '2']

        table2 [ id, type, value ]
        [1, 1, '2009-12-02']
        [1, 2, '2010-01-03']
        [1, 4, '2010-01-03']
        [2, 1, '2010-01-02']
        [2, 2, '2010-01-02']
        [2, 2, '2010-01-03']
        [2, 3, '2010-01-07']
        [2, 4, '2010-01-07']

    And this is what I would like to achieve:

        result [ id, name, Column1, Column2, Column3, Column4 ]
        [1, '1', '2009-12-02', '2010-01-03', ,            '2010-01-03']
        [2, '2', '2010-01-02', '2010-01-02', '2010-01-07', '2010-01-07']

    The following query gets the proper result, but it seems extremely inefficient to me, having to iterate over table2 for each column. Would it be possible in any way to do a subquery and reuse it?

        SELECT a.id, a.name,
               (select min(value) from table2 t where t.id = subquery.id and t.type = 1 group by t.type) as Column1,
               (select min(value) from table2 t where t.id = subquery.id and t.type = 2 group by t.type) as Column2,
               (select min(value) from table2 t where t.id = subquery.id and t.type = 3 group by t.type) as Column3,
               (select min(value) from table2 t where t.id = subquery.id and t.type = 4 group by t.type) as Column4
        FROM (SELECT distinct id
              FROM table2 t
              WHERE (t.type in (1, 2, 3, 4))
                AND t.value between '2010-01-01' and '2010-01-07') as subquery
        LEFT JOIN table a ON a.id = subquery.id
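
    (Editor's sketch, not part of the original question: the four correlated subqueries can usually be collapsed into one pass with conditional aggregation. The date filter below only picks which ids qualify, mirroring the original query, while the MINs still aggregate over all of each id's rows; it assumes every qualifying id also exists in "table", otherwise the join must be made outer.)

        SELECT a.id, a.name,
               MIN(CASE WHEN t.type = 1 THEN t.value END) AS Column1,
               MIN(CASE WHEN t.type = 2 THEN t.value END) AS Column2,
               MIN(CASE WHEN t.type = 3 THEN t.value END) AS Column3,
               MIN(CASE WHEN t.type = 4 THEN t.value END) AS Column4
        FROM table2 t
        JOIN table a ON a.id = t.id
        WHERE t.id IN (SELECT id FROM table2
                       WHERE type IN (1, 2, 3, 4)
                         AND value BETWEEN '2010-01-01' AND '2010-01-07')
        GROUP BY a.id, a.name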

  • LINQ Normalizing data

    - by Brennan Mann
    I am using an OMS that stores up to three line items per record in the database. Below is an example of an order containing five line items:

        Order Header
            Order Detail
                Prod 1
                Prod 2
                Prod 3
            Order Detail
                Prod 4
                Prod 5

    That is one order header record and two detail records. My goal is to have a one-to-one relation for detail records (i.e., one detail record per line item). In the past, I used a UNION ALL SQL statement to extract the data. Is there a better approach to this problem using LINQ? Below is my first attempt at using LINQ. Any feedback, suggestions or recommendations would be greatly appreciated. From what I have read, a UNION statement can tax the process?

        var orderdetail =
            (from o in context.ORDERSUBHEADs
             select new { edpNo = o.EDPNOS_001, price = o.EXTPRICES_001, qty = o.ITEMQTYS_001 })
            .Union(from o in context.ORDERSUBHEADs
                   select new { edpNo = o.EDPNOS_002, price = o.EXTPRICES_002, qty = o.ITEMQTYS_002 })
            .Union(from o in context.ORDERSUBHEADs
                   select new { edpNo = o.EDPNOS_003, price = o.EXTPRICES_003, qty = o.ITEMQTYS_003 });
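
    (Editor's sketch, not part of the original question: on SQL Server 2008 and later, the same unpivoting can be done in one table scan instead of three by turning each row's repeating columns into rows with CROSS APPLY and a VALUES constructor. Table and column names follow the LINQ query above; the WHERE clause assumes an empty line-item slot is NULL.)

        SELECT d.edpNo, d.price, d.qty
        FROM ORDERSUBHEAD o
        CROSS APPLY (VALUES
            (o.EDPNOS_001, o.EXTPRICES_001, o.ITEMQTYS_001),
            (o.EDPNOS_002, o.EXTPRICES_002, o.ITEMQTYS_002),
            (o.EDPNOS_003, o.EXTPRICES_003, o.ITEMQTYS_003)
        ) AS d (edpNo, price, qty)
        WHERE d.edpNo IS NOT NULL   -- drop unused slots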

  • Core Data NSPredicate to filter results

    - by Bryan
    I have an NSManagedObject that contains a bID and a pID. Within the set of NSManagedObjects, I only want a subset returned, and I'm struggling to find the correct NSPredicate or way to get what I need out of Core Data. Here's my full list:

        bid  pid
        41   0
        42   41
        43   0
        44   0
        47   41
        48   0
        49   0
        50   43

    There is a parent-child relationship above. Rules: if a record's PID = 0, that record IS a parent record. If a record's PID != 0, then that record's PID refers to its parent record's BID. Example:

        1) BID = 41 is a parent record. Why? Because records BID = 42 and BID = 47 have PIDs of 41, meaning those are its children.
        2) BID = 42 has a parent record with a BID = 41.
        3) BID = 43 is a parent record.
        4) BID = 44 is a parent record.
        5) BID = 47 has a parent record with a BID = 41 because its PID = 41. See #1 above.
        6) BID = 48 is a parent record.
        7) BID = 49 is a parent record.
        8) BID = 50 is a child record, and its parent record has a BID = 43.

    See the pattern? Now, basically from that, I want only the following rows fetched:

        bid  pid
        44   0
        47   41
        48   0
        49   0
        50   43

    BID = 44, BID = 48, BID = 49 should all be returned because there are no records with a PID equal to their BID. BID = 47 should be returned because it is the most recent child of PID = 41. BID = 50 should be returned because it is the most recent child of PID = 43. Hope this helps explain it more.

  • SQL Server full-text search with CONTAINSTABLE is very slow when used in a JOIN

    - by Bob
    Hello, I am using SQL 2008 full-text search and I am having serious issues with performance depending on how I use CONTAINS or CONTAINSTABLE. Here are samples (table1 has about 5000 records, and there is a covering index on table1 which has all the fields in the where clause; I tried to simplify the statements, so forgive me if there are syntax issues):

    Scenario 1:

        select * from table1 as t1
        where t1.field1=90
          and t1.field2='something'
          and Exists(select top 1 * from containstable(table1,*, 'something') as t2
                     where t2.[key]=t1.id)

    Results: 10 seconds (very slow)

    Scenario 2:

        select * from table1 as t1
        join containstable(table1,*, 'something') as t2 on t2.[key] = t1.id
        where t1.field1=90 and t1.field2='something'

    Results: 10 seconds (very slow)

    Scenario 3:

        Declare @tbl Table(id uniqueidentifier primary key)
        insert into @tbl select [key] from containstable(table1,*, 'something')

        select * from table1 as t1
        where t1.field1=90
          and t1.field2='something'
          and Exists(select id from @tbl as tbl where id=t1.id)

    Results: fraction of a second (super fast)

    Bottom line: it seems that if I use CONTAINSTABLE in any kind of join or where-clause condition of a select statement that also has other conditions, the performance is really bad, and if you look at Profiler, the number of reads from the database goes through the roof. But if I first do the full-text search and put the results in a table variable, everything goes super fast and the number of reads is much lower. It seems that in the "bad" scenarios it somehow gets stuck in a loop which causes it to read from the database many times, but of course I don't understand why. Now, the first question is why is that happening? And the second question is how scalable are table variables: what if the search results in tens of thousands of records, is it still going to be fast? Any ideas? Thanks

  • Transitive SQL query on same table

    - by MiKu
    Hey, consider the following table and data...

        in_timestamp | out_timestamp | name  | in_id | out_id | in_server      | out_server     | status
        timestamp1   | timestamp2    | data1 | id1   | id2    | others-server1 | my-server1     | success
        timestamp2   | timestamp3    | data1 | id2   | id3    | my-server1     | my-server2     | success
        timestamp3   | timestamp4    | data1 | id3   | id4    | my-server2     | my-server3     | success
        timestamp4   | timestamp5    | data1 | id4   | id5    | my-server3     | others-server2 | success

    The above data represents the log of an execution flow of some data across servers, e.g. some data has flowed from 'others-server1' to a bunch of 'my-servers' and finally to the destined 'others-server2'. Questions:

    1) I need to give this log in a representable form to a client who doesn't need to know anything about the bunch of 'my-servers'. All I am supposed to give is the timestamp at which the data entered my infrastructure and when it left, drilling down to the following info:

        in_timestamp (of 'others-server1' to 'my-server1')
        out_timestamp (of 'my-server3' to 'others-server2')
        name
        status

    I want to write SQL for the same. Can someone help? NOTE: there might not be 3 'my-servers' all the time; it differs from situation to situation, e.g. there might be 4 'my-servers' involved for, say, data2!

    2) Are there any other alternatives to SQL? I mean stored procs, etc.?

    3) Optimizations? (The records are huge in number! As of now, around 5 million a day, and we are supposed to show records that are up to a week old.)

    In advance, THANKS FOR THE HELP! :)
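
    (Editor's sketch, not part of the original question: if internal hops are recognizable purely by the server-name prefix, the chain does not have to be walked at all; each flow's entry hop can be paired with its exit hop by name. The table and column names are assumed from the sample above, and it assumes one flow per name; if names repeat, the chain must be walked via in_id/out_id instead, e.g. with a recursive CTE.)

        SELECT f_in.in_timestamp,
               f_out.out_timestamp,
               f_in.name,
               f_out.status
        FROM   flow_log AS f_in          -- the hop that entered our infrastructure
        JOIN   flow_log AS f_out         -- the hop that left it
               ON f_out.name = f_in.name
        WHERE  f_in.in_server   NOT LIKE 'my-%'
          AND  f_out.out_server NOT LIKE 'my-%'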

  • Doing large updates against indexed view

    - by user217136
    We have an indexed view that runs across three large tables. Two of these tables (A & B) are constantly getting updated with user transactions, and the other table (C) contains product data that needs to be updated once a week. This product table contains over 6 million records. We need this view across these three tables for our core business process, and unfortunately we cannot change this aspect. We even had a SQL Server MVP come in to help test under load to make sure we have the most efficient configuration. There is one column in the product table that gets utilized in the view and has to be updated each week. The problem we are now encountering is that as volume increases on our transactions against tables A & B, the update to table C is causing deadlocks. I have tried several different methods, to no avail:

    1) I was hoping that we could change the view so that table C could be a dirty read "WITH (NOLOCK)", but apparently that functionality is not available with indexed views.

    2) I thought about updating a new column in table C and then just renaming it when the process is done, but you cannot do that due to the dependency in the view.

    3) I also entertained the idea of writing this value to a temporary product table and then running an ALTER statement against the view to have it point to my new table. However, when I did that, the indexes on my view were dropped and it took quite a bit of time to recreate them.

    4) We tried to do the weekly update in small chunks (as small as 100 records at a time), but we still run into deadlocks.

    Questions:

    a) We are using SQL Server 2005. Does SQL Server 2008 have new functionality for indexed views that would help us? Is there now a way to do dirty reads with an indexed view?

    b) Is there a better approach to altering an existing view to point to a new table?

    Thanks!

  • Optimize inserts

    - by ikerib
    Hi! I wrote an importer in VB.NET which gets data from a SQL Server database and inserts it over an ADSL connection into a remote MySQL server. At first there were about 200 records, but now there are more than 500,000 records and exporting all the data takes about 11 hours, which is bad, very bad. I need to optimize my importer, which currently reads the data into a DataTable and then loops row by row, inserting the data with an "insert into" query, like this:

        For Each dr As DataRow In dt.Rows
            Console.Write(".")
            Dim sql As String = "INSERT INTO clientes(id,nombrefis,nombrecom,direccion,codpos,municipio_id,telefono,fax,cif)" & _
                                "VALUES (@id,@nombrefis,@nombrecom,@direccion,@codpos,@municipio_id,@telefono,@fax,@cif)"
            cmd = New MySqlCommand(sql, cnn)
            cmd.Parameters.AddWithValue("id", Int32.Parse(dr("ID EMPRESA").ToString))
            cmd.Parameters.AddWithValue("nombrefis", dr("NOMEMP"))
            cmd.Parameters.AddWithValue("nombrecom", dr("EMPRESA"))
            cmd.Parameters.AddWithValue("direccion", dr("DIRECC"))
            cmd.Parameters.AddWithValue("codpos", dr("CODPOS"))
            cmd.Parameters.AddWithValue("municipio_id", Int32.Parse(dr("CODIGO MUNICIPIO")).ToString)
            cmd.Parameters.AddWithValue("telefono", dr("TELEF"))
            cmd.Parameters.AddWithValue("fax", dr("FAX"))
            cmd.Parameters.AddWithValue("cif", dr("CIF"))
            cmd.ExecuteNonQuery()
        Next

    Any ideas or advice? Thanks so much
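
    (Editor's note, not part of the original question: over a high-latency ADSL link, each single-row INSERT pays a full network round trip, so the usual first fix is to batch many rows into one multi-row INSERT and wrap batches in a transaction. A sketch of the SQL shape, using the same clientes table; the row values are made-up placeholders.)

        START TRANSACTION;

        -- one statement carries many rows, so one round trip per batch
        INSERT INTO clientes (id, nombrefis, nombrecom, direccion, codpos,
                              municipio_id, telefono, fax, cif)
        VALUES
            (1, 'Name 1', 'Trade 1', 'Addr 1', '48001', 1, '940000001', '940000002', 'A0000001'),
            (2, 'Name 2', 'Trade 2', 'Addr 2', '48002', 2, '940000003', '940000004', 'A0000002');
            -- ... a few hundred rows per statement works well in practice

        COMMIT;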

  • How can I serialize and communicate ActiveRecord instances across identical Rails apps?

    - by Blaine LaFreniere
    The main idea is that I have several worker instances of a Rails app, and then a main aggregate. I want to do something like this, in pseudo-code:

        posts = Post.all.to_json( :include => { :comments => { :include => :blah } })
        # send data to another, identical, exactly the same Rails app
        # ...
        # Fast forward to the separate but identical Rails app:
        # ...
        # remote_posts is the posts result from the first Rails app
        posts = JSON.parse(remote_posts)
        posts.each do |post|
          p = Post.new(post)  # build from the parsed attribute hash
          p.save
        end

    I'm shying away from Active Resource because I have thousands of records to create, which would mean one request per record. Unless there is a simple way to do it all in one request with Active Resource, I'd like to avoid it. Format doesn't matter; whatever makes it convenient. The IDs don't need to be sent, because the other app will just be creating records and assigning new IDs in the "aggregate" system. The hierarchy would need to be preserved (e.g. "Hey other Rails app, I have genres, and each genre has an artist, and each artist has an album, and each album has songs", etc.)

  • T-SQL Table Joins - Unique Situation

    - by Dimitri
    Hello everyone. This is my first time encountering a case like this and I don't quite know how to handle it. Situation: I have one table, tblSettingsDefinition, with fields ID, GroupID, Name, TypeID, DefaultValue. Then I have tblSettingTypes with fields TypeID, Name. And I have a final table, tblUserSettings, with fields SettingID, SettingDefinitionID, UserID, Value. The whole point of this is to have customizable settings. A setting can be defined for a group or as a global setting (if GroupID is NULL). It will have a default value, but if the user modifies the setting, an entry is added to tblUserSettings that stores the new value. I want a query that grabs user settings by first looking at tblUserSettings and, if it has records for the given user, grabbing them; if not, it should retrieve the default settings. But the trick is that whether or not the user has settings, I need the fields from the other two tables retrieved, to know the setting's Type, Name etc. (which are stored in those other tables). I'm writing a query something like this:

        SELECT *
        FROM tblSettingDefinition SD
        LEFT JOIN tblUserSettings US ON SD.SettingID = US.SettingDefinitionID
        JOIN tblSettingTypes ST ON SD.TypeID = ST.ID
        WHERE US.UserID = @UserID
           OR ((SD.GroupID IS NULL)
            OR (SD.GroupID = (SELECT GroupID FROM tblUser WHERE ID = @UserID)))

    but it retrieves settings for all users from tblUserSettings instead of just the ones that match the current @UserID. And if @UserID has no records in tblUserSettings, all user settings are still retrieved instead of the defaults from tblSettingsDefinition. Hope I made myself clear. Any help would be highly appreciated. Thank you.
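
    (Editor's sketch, not part of the original question: the usual fix is to move the user filter into the ON clause of the LEFT JOIN, so definitions without a row for this user survive the join with NULLs instead of being filtered out, and the default can then be picked up with COALESCE. Column names are taken from the description above.)

        SELECT SD.ID,
               SD.Name,
               ST.Name AS TypeName,
               COALESCE(US.Value, SD.DefaultValue) AS EffectiveValue
        FROM tblSettingsDefinition SD
        JOIN tblSettingTypes ST
             ON ST.TypeID = SD.TypeID
        LEFT JOIN tblUserSettings US
             ON US.SettingDefinitionID = SD.ID
            AND US.UserID = @UserID    -- filter here, not in the WHERE clause
        WHERE SD.GroupID IS NULL
           OR SD.GroupID = (SELECT GroupID FROM tblUser WHERE ID = @UserID)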

  • Can't load data (Google App Engine) from http://localhost:8100/remote_api

    - by zjm1126
    I can download data from GAE (http://zjm1126.appspot.com/remote_api). This is the code:

        appcfg.py download_data --application=zjm1126 --url=http://zjm1126.appspot.com/remote_api --filename=a.csv

    and it is successful:

        D:\zjm_demo\app>appcfg.py download_data --application=zjm1126 --url=http://zjm1126.appspot.com/remote_api --filename=a.csv
        Downloading data records.
        [INFO    ] Logging to bulkloader-log-20100618.162421
        [INFO    ] Throttling transfers:
        [INFO    ] Bandwidth: 250000 bytes/second
        [INFO    ] HTTP connections: 8/second
        [INFO    ] Entities inserted/fetched/modified: 20/second
        [INFO    ] Batch Size: 10
        [INFO    ] Opening database: bulkloader-progress-20100618.162421.sql3
        [INFO    ] Opening database: bulkloader-results-20100618.162421.sql3
        [INFO    ] Connecting to zjm1126.appspot.com/remote_api
        Please enter login credentials for zjm1126.appspot.com
        Email: [email protected]
        Password for [email protected]:
        [INFO    ] Downloading kinds: [u'LogText', u'Greeting', u'Forum', u'Thread']
        ....
        [INFO    ] Have 0 entities, 0 previously transferred
        [INFO    ] 0 entities (8804 bytes) transferred in 11.3 seconds

    So I want to know whether I can load data from 127.0.0.1. This is my code:

        appcfg.py download_data --application=zjm1126 --url=http://localhost:8100/remote_api --filename=a.csv

    and the error is:

        D:\zjm_demo\app>appcfg.py download_data --application=zjm1126 --url=http://localhost:8100/remote_api --filename=a.csv
        Downloading data records.
        [INFO    ] Logging to bulkloader-log-20100618.162325
        [INFO    ] Throttling transfers:
        [INFO    ] Bandwidth: 250000 bytes/second
        [INFO    ] HTTP connections: 8/second
        [INFO    ] Entities inserted/fetched/modified: 20/second
        [INFO    ] Batch Size: 10
        [INFO    ] Opening database: bulkloader-progress-20100618.162325.sql3
        [INFO    ] Opening database: bulkloader-results-20100618.162325.sql3
        Please enter login credentials for localhost
        Email: [email protected]
        Password for [email protected]:
        [INFO    ] Connecting to localhost:8100/remote_api
        [ERROR   ] Exception during authentication
        Traceback (most recent call last):
          File "d:\Program Files\Google\google_appengine\google\appengine\tools\bulkloader.py", line 3169, in Run
            self.request_manager.Authenticate()
          File "d:\Program Files\Google\google_appengine\google\appengine\tools\bulkloader.py", line 1178, in Authenticate
            remote_api_stub.MaybeInvokeAuthentication()
          File "d:\Program Files\Google\google_appengine\google\appengine\ext\remote_api\remote_api_stub.py", line 542, in MaybeInvokeAuthentication
            datastore_stub._server.Send(datastore_stub._path, payload=None)
          File "d:\Program Files\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 346, in Send
            f = self.opener.open(req)
          File "D:\Python25\lib\urllib2.py", line 387, in open
            response = meth(req, response)
          File "D:\Python25\lib\urllib2.py", line 498, in http_response
            'http', request, response, code, msg, hdrs)
          File "D:\Python25\lib\urllib2.py", line 425, in error
            return self._call_chain(*args)
          File "D:\Python25\lib\urllib2.py", line 360, in _call_chain
            result = func(*args)
          File "D:\Python25\lib\urllib2.py", line 506, in http_error_default
            raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
        HTTPError: HTTP Error 404: Not Found
        [INFO    ] Authentication Failed

    So what should I do? Thanks

  • How should I use BIT in MS SQL 2005?

    - by adopilot
    Regarding SQL performance: I have a scalar-valued function for checking a specific condition in the database. It returns a BIT value, true or false. I do not know how I should fill the @bit parameter. If I write

        set @bit = convert(bit, 1)

    or

        set @bit = 1

    or

        set @bit = 'true'

    the function will work either way, but I do not know which method is recommended for daily use.

    Another question: I have a table in my database with around 4 million records, and the daily insert into that table is about 4K records. Now I want to add a CONSTRAINT on that table with the scalar-valued function I mentioned already, something like this:

        ALTER TABLE fin_stavke
        ADD CONSTRAINT fin_stavke_knjizenje CHECK ( dbo.fn_ado_chk_fin(id) = convert(bit,1) )

    where field "id" is the primary key of table fin_stavke, and dbo.fn_ado_chk_fin looks like:

        create FUNCTION fn_ado_chk_fin
        (
            @stavka_id int
        )
        RETURNS bit
        AS
        BEGIN
            declare @bit bit
            if exists (select * from fin_stavke where id=@stavka_id and doc_id is null and protocol_id is null)
            begin
                set @bit=0
            end
            else
            begin
                set @bit=1
            end
            return @bit;
        END
        GO

    Will this type and method of checking constraint badly affect performance on my table and SQL Server in general? If there is also a better way to add this control on the table, please let me know.
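
    (Editor's note, not part of the original question: since fn_ado_chk_fin only inspects the columns of the very row being checked, the function, and the extra table access it performs on every insert, could likely be replaced by a plain column-level constraint, assuming the intent really is "doc_id and protocol_id must not both be NULL".)

        ALTER TABLE fin_stavke
        ADD CONSTRAINT fin_stavke_knjizenje
        CHECK (doc_id IS NOT NULL OR protocol_id IS NOT NULL)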

  • Linq to SQL generates StackOverflowException in tight Insert loop

    - by ChrisW
    I'm parsing an XML file and inserting the rows into a table (and related tables) using LINQ to SQL. I parse the XML file using LINQ to XML into an IEnumerable. Then I create a foreach loop, where I build my LINQ to SQL objects and call InsertOnSubmit and SubmitChanges at the end of each iteration. Nothing special here. Usually, I make it through around 4,100 records before receiving a StackOverflowException from LINQ to SQL, right as I call SubmitChanges. It's not always at 4,100; sometimes it's 4,102, sometimes less, etc. I've tried inserting the records that generate the failure individually, putting them in their own XML file, and they insert fine, so it's not the data. I'm running the whole process from an MVC2 app that uploads the XML file to the server. I've adjusted my WebRequest timeouts to appropriate values, and again, I'm not getting timeout errors, just StackOverflowExceptions. So is there some pattern I should follow for times when I have to do many insertions into the database? I never encounter this exception on smaller XML files, just larger ones.

  • IIS SMTP server (Installed on local server) in parallel to Google Apps

    - by sharru
    I am currently using the free version of Google Apps for hosting my email. It works great for my official mail; my email on Google is [email protected]. In addition, I'm sending out high-volume mail (registrations, forgotten passwords, newsletters etc.) from the website (www.mydomain.com) using IIS SMTP installed on my Windows machine. These emails are sent from [email protected]. My problem is that when I send email from the website using IIS SMTP to a mail address [email protected], I don't receive the email in Google Apps. (I only receive these emails if I install a POP service on the server with the [email protected] mailbox.) It seems that IIS SMTP is ignoring the domain's MX records and just delivers these emails to my local server. Here are my DNS records for mydomain.com:

        mydomain.com  A    82.80.200.20  3600s
        mydomain.com  TXT  v=spf1 ip4: 82.80.200.20 a mx ptr include:aspmx.googlemail.com ~all
        mydomain.com  MX   preference: 10  exchange: aspmx2.googlemail.com  3600s
        mydomain.com  MX   preference: 10  exchange: aspmx3.googlemail.com  3600s
        mydomain.com  MX   preference: 10  exchange: aspmx4.googlemail.com  3600s
        mydomain.com  MX   preference: 10  exchange: aspmx5.googlemail.com  3600s
        mydomain.com  MX   preference: 1   exchange: aspmx.l.google.com  3600s
        mydomain.com  MX   preference: 5   exchange: alt1.aspmx.l.google.com  3600s
        mydomain.com  MX   preference: 5   exchange: alt2.aspmx.l.google.com  3600s

    Please help! Thanks.

  • HOWTO - Compare a date string to datetime in SQL Server?

    - by Guy
    In SQL Server I have a DATETIME column which includes a time element. Example:

        '14 AUG 2008 14:23:019'

    What is the best method to select only the records for a particular day, ignoring the time part? Example (not safe, as it does not match the time part and returns no rows):

        DECLARE @p_date DATETIME
        SET     @p_date = CONVERT( DATETIME, '14 AUG 2008', 106 )

        SELECT *
        FROM   table1
        WHERE  column_datetime = @p_date

    Note: Given this site is also about jotting down notes and techniques you pick up and then forget, I'm going to post my own answer to this question, as DATETIME stuff in MSSQL is probably the topic I look up most in SQL BOL.

    Update: Clarified the example to be more specific.

    Edit: Sorry, but I've had to down-mod WRONG answers (answers that return wrong results).

    @Jorrit: WHERE (date > '20080813' AND date < '20080815') will return the 13th and the 14th.

    @wearejimbo: Close, but no cigar! Badge awarded to you. You missed out records written at 14/08/2008 23:59:001 to 23:59:999 (i.e. less than 1 second before midnight).
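
    (Editor's note, in the spirit of the answer the author says he posted: the widely used safe pattern is a half-open range on the raw column; it matches every time value within the day, including the just-before-midnight edge case mentioned above, and still allows an index on column_datetime to be used.)

        DECLARE @p_date DATETIME
        SET     @p_date = CONVERT( DATETIME, '14 AUG 2008', 106 )

        SELECT *
        FROM   table1
        WHERE  column_datetime >= @p_date                  -- midnight at the start of the day
          AND  column_datetime <  DATEADD(dd, 1, @p_date)  -- strictly before the next midnight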

  • Help! The log file for database 'tempdb' is full. Back up the transaction log for the database to free up some log space.

    - by michael.lukatchik
    We're running SQL Server 2000. In our database, we have an "Orders" table with approximately 750,000 rows. We can perform simple SELECT statements on this table. However, when we want to run a query like SELECT TOP 100 * FROM Orders ORDER BY Date_Ordered DESC, we receive the following message:

        Error: 9002, Severity: 17, State: 6
        The log file for database 'tempdb' is full.
        Back up the transaction log for the database to free up some log space.

    We have other tables in our database which are similar in size (i.e. around 700,000 records). On those tables, we can run any queries we'd like and we never receive a message about tempdb being full. To resolve this, we've backed up our database, shrunk the actual database, and also shrunk the database and files in the tempdb system database, but this hasn't resolved the issue. The size of our log file is set to autogrow. We're not sure where to go next. Are there any ideas why we still might be receiving this message?
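
    (Editor's note, not part of the original question: TOP 100 ... ORDER BY on a column with no supporting index forces a sort of all 750,000 rows, and that sort spills into tempdb, which would explain why this one query fills the tempdb log while plain SELECTs do not. If that is the cause, an index on the sort column lets the query read the top rows in order without sorting. A sketch, assuming the table and column names above:)

        -- assumes Orders.Date_Ordered currently has no supporting index
        CREATE INDEX IX_Orders_Date_Ordered
            ON Orders (Date_Ordered DESC)

        -- TOP 100 ... ORDER BY Date_Ordered DESC can now walk the index
        -- instead of sorting the whole table in tempdb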

  • MS Access: Why is ADODB.Recordset.BatchUpdate so much slower than Application.ImportXML?

    - by apenwarr
    I'm trying to run the code below to insert a whole lot of records (from a file with a weird file format) into my Access 2003 database from VBA. After many, many experiments, this code is the fastest I've been able to come up with: it does 10,000 records in about 15 seconds on my machine. At least 14.5 of those seconds (i.e. almost all the time) are spent in the single call to UpdateBatch. I've read elsewhere that the JET engine doesn't support UpdateBatch, so maybe there's a better way to do it. Now, I would just conclude the JET engine is plain slow, but that can't be it: after generating the 'testy' table with the code below, I right-clicked it, picked Export, and saved it as XML. Then I right-clicked, picked Import, and reloaded the XML. Total time to import the XML file? Less than one second, i.e. at least 15x faster. Surely there's an efficient way to insert data into Access that doesn't require writing a temp file?

        Sub TestBatchUpdate()
            CurrentDb.Execute "create table testy (x int, y int)"

            Dim rs As New ADODB.Recordset
            rs.CursorLocation = adUseServer
            rs.Open "testy", CurrentProject.AccessConnection, _
                    adOpenStatic, adLockBatchOptimistic, adCmdTableDirect

            Dim n, v
            n = Array(0, 1)
            v = Array(50, 55)

            Debug.Print "starting loop", Time
            For i = 1 To 10000
                rs.AddNew n, v
            Next i
            Debug.Print "done loop", Time

            rs.UpdateBatch
            Debug.Print "done update", Time

            CurrentDb.Execute "drop table testy"
        End Sub

    I would be willing to resort to C/C++ if there's some API that would let me do fast inserts that way, but I can't seem to find it. It can't be that Application.ImportXML is using undocumented APIs, can it?

  • How should I manage my many-to-many relationships?

    - by wes
    Hello all, I have a database containing a couple of tables: files and users. This relationship is many-to-many, so I also have a table called users_files_ref which holds foreign keys to both of the above tables. Here's the schema of each table:

        files           - file_id, file_name
        users           - user_id, user_name
        users_files_ref - user_file_ref_id, user_id, file_id

    I'm using CodeIgniter to build a file host application, and I'm right in the middle of adding the functionality that enables users to upload files. This is where I'm running into my problem. Once I add a file to the files table, I will need that new file's id to update the users_files_ref table. Right now I'm adding the record to the files table, and then I imagined I'd run a query to grab the last file added, so that I can get the ID, and then use that ID to insert the new users_files_ref record. I know this will work on a small scale, but I imagine there is a better way of managing these records, especially in a heavy-traffic scenario. I am new to relational database stuff but have been around PHP for a while, so please bear with me here :-) I have primary and foreign keys set up correctly for the files, users, and users_files_ref tables; I'm just wondering how to manage the adding of file records in this scenario. Thanks for any help provided, it's much appreciated. -Wes
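
    (Editor's sketch, not part of the original question, assuming the database is MySQL as is typical with CodeIgniter: LAST_INSERT_ID() is scoped to the current connection, so it stays correct under heavy concurrent traffic, unlike a "grab the newest row" query. The literal values are hypothetical.)

        INSERT INTO files (file_name)
        VALUES ('report.pdf');

        -- LAST_INSERT_ID() returns the AUTO_INCREMENT value generated by THIS
        -- connection's last insert, so concurrent uploads cannot interfere
        INSERT INTO users_files_ref (user_id, file_id)
        VALUES (42, LAST_INSERT_ID());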

  • Very interesting problem in Compact Framework

    - by Alexander
    Hi, I have a performance problem while inserting data into SQL CE. I'm reading a string and making inserts into my tables. Into the LU_MAM table I insert 1,000 records within 8 seconds. After the MAM tables I make some more inserts, but my largest table is CR_MUS. When I want to insert records into CR_MUS, it takes too much time: CR_MUS has 2,000 records and the insert takes 35 seconds. What can the reason be? I use the same logic in all my insert functions. Do you have any idea? I use VS 2008 SP1.

        Dim reader As StringReader
        reader = New StringReader(data)
        cn = New SqlCeConnection(General.ConnString)
        cn.Open()
        If myTransfer.ClearTables(cn, cmd) = True Then
            progress = 0
            '------------------------------------------
            cmd = New SqlServerCe.SqlCeCommand
            Dim rs As SqlCeResultSet
            cmd.Connection = cn
            cmd.CommandType = CommandType.TableDirect
            Dim rec As SqlCeUpdatableRecord
            ' name of table
            While reader.Peek > -1
                If strerr_col = "" Then
                    satir = reader.ReadLine()
                    ayrac = Split(satir, "|")
                    If ayrac(0).ToString() = "LC" Then
                        prgsbar.Maximum = Convert.ToInt32(ayrac(1))
                    ElseIf ayrac(0).ToString = "PPAR" Then
                        If ayrac(2).ToString <> General.PMVer Then
                            ShowWaitCursor(False)
                            txtDurum.Text = "Wrong Version"
                            Exit Sub
                        End If
                        If p_POCKET_PARAMETERS = True Then
                            cmd.CommandText = "POCKET_PARAMETERS"
                            txtDurum.Text = "POCKET_PARAMETERS"
                            rs = cmd.ExecuteResultSet(ResultSetOptions.Updatable)
                            rec = rs.CreateRecord()
                            p_POCKET_PARAMETERS = False
                        End If
                        strerr_col = myVERI_AL.POCKET_PARAMETERS_I(ayrac, cmd, rs, rec)
                        prgsbar.Value += 1
                    ElseIf ayrac(0).ToString() = "MAM" Then
                        If p_LU_MAM = True Then
                            txtDurum.Text = "LU_MAM"
                            cmd.CommandText = "LU_MAM"
                            rs = cmd.ExecuteResultSet(ResultSetOptions.Updatable)
                            rec = rs.CreateRecord()
                            p_LU_MAM = False
                        End If
                        strerr_col = myVERI_AL.LU_MAM_I(ayrac, cmd, rs, rec)
                        prgsbar.Value += 1
                    ElseIf ayrac(0).ToString = "KMUS" Then
                        If p_CR_MUS = True Then
                            cmd.CommandText = "CR_MUS"
                            txtDurum.Text = "CR_MUS"
                            rs = cmd.ExecuteResultSet(ResultSetOptions.Updatable)
                            rec = rs.CreateRecord()
                            p_TR_KAMPANYA_MALZEME = False
                        End If
                        strerr_col = myVERI_AL.CR_MUS_I(ayrac, cmd, rs, rec)
                        prgsbar.Value += 1
                    End If
                End If
            End While
            ' ... (rest of the routine omitted in the original post)

    The insert function looks like this:

        Public Function CR_KAMPANYA_MUSTERI_I(ByVal f_Line() As String, ByRef myComm As SqlCeCommand, ByRef rs As SqlCeResultSet, ByRef rec As SqlCeUpdatableRecord) As String
            Try
                rec.SetValue(0, If(f_Line(1) = String.Empty, DirectCast(DBNull.Value, Object), f_Line(1)))
                rec.SetValue(1, If(f_Line(2) = String.Empty, DirectCast(DBNull.Value, Object), f_Line(2)))
                rec.SetValue(2, If(f_Line(3) = String.Empty, DirectCast(DBNull.Value, Object), f_Line(3)))
                rec.SetValue(3, If(f_Line(5) = String.Empty, DirectCast(DBNull.Value, Object), f_Line(5)))
                rec.SetValue(4, If(f_Line(6) = String.Empty, DirectCast(DBNull.Value, Object), f_Line(6)))
                rs.Insert(rec)
            Catch ex As Exception
                strerr_col = ex.Message
            End Try
            Return strerr_col
        End Function

  • Hadoop Map/Reduce - simple use example to do the following...

    - by alexeypro
    I have a MySQL database where I store the following: a BLOB (which contains a JSON object) and an ID (for this JSON object). The JSON object contains a lot of different information, say, "city:Los Angeles" and "state:California". There are about 500k such records for now, but they are growing, and each JSON object is quite big. My goal is to do real-time searches in the MySQL database. Say, I want to search for all JSON objects which have "state" set to "California" and "city" set to "San Francisco". I want to utilize Hadoop for the task. My idea is that there will be a "job" which takes chunks of, say, 100 records (rows) from MySQL, verifies them according to the given search criteria, and returns those IDs which qualify. Pros/cons? I understand that one might think I should just utilize plain SQL power for that, but the thing is that the JSON object structure is pretty "heavy". If I lay it out as SQL schemas, there will be at least 3-5 table joins, which (I tried, really) creates quite a headache, and building all the right indexes eats RAM faster than one can think. ;-) And even then, every SQL query has to be analyzed to be sure it utilizes the indexes, otherwise with a full scan it literally is a pain. And with such a structure the only way "up" is vertical scaling. But I am not sure that's the best option for me, as I see how the JSON objects will grow (the data structure), and I see that the number of them will grow too. :-) Help? Can somebody point me to simple examples of how this can be done? Does it make sense at all? Am I missing something important? Thank you.

  • Query for rows including child rows

    - by MAD9
    A few weeks ago, I asked a question about how to generate hierarchical XML from a table that has a parentID column. It all works fine. Now, following the same hierarchy, I also want to query another table. I'll give you an example. This is the table with the codes:

        ID  CODE   NAME               PARENTID
        1   ROOT   IndustryCode       NULL
        2   IND    Industry           1
        3   CON    Consulting         1
        4   FIN    Finance            1
        5   PHARM  Pharmaceuticals    2
        6   AUTO   Automotive         2
        7   STRAT  Strategy           3
        8   IMPL   Implementation     3
        9   CFIN   Corporate Finance  4
        10  CMRKT  Capital Markets    9

    From which I generate (for displaying in a TreeView control) this XML:

        <record key="1" parentkey="" Code="ROOT" Name="IndustryCode">
          <record key="2" parentkey="1" Code="IND" Name="Industry">
            <record key="5" parentkey="2" Code="PHARM" Name="Pharmaceuticals" />
            <record key="6" parentkey="2" Code="AUTO" Name="Automotive" />
          </record>
          <record key="3" parentkey="1" Code="CON" Name="Consulting">
            <record key="7" parentkey="3" Code="STRAT" Name="Strategy" />
            <record key="8" parentkey="3" Code="IMPL" Name="Implementation" />
          </record>
          <record key="4" parentkey="1" Code="FIN" Name="Finance">
            <record key="9" parentkey="4" Code="CFIN" Name="Corporate Finance">
              <record key="10" parentkey="9" Code="CMRKT" Name="Capital Markets" />
            </record>
          </record>
        </record>

    As you can see, some codes are subordinate to others, for example AUTO << IND << ROOT. What I want (and have absolutely no idea how to realise, or even where to start) is to be able to query another table (where one column is, of course, this code) for a code and get all records with that specific code and all subordinate codes. For example: I query the other table for "IndustryCode = IND[ustry]" and get (of course) the records containing "IND", but also AUTO[motive] and PHARM[aceutical] (= all subordinates). It's a SQL Server 2008 Express with Advanced Services.
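
    (Editor's sketch, not part of the original question: SQL Server 2008 supports recursive common table expressions, which can collect the subtree of codes first and then join it to the data table. The codes table is assumed to be named Codes and the other table Sales, with an IndustryCode column; substitute the real names.)

        WITH Subtree AS (
            -- anchor: the code being searched for
            SELECT ID, CODE
            FROM   Codes
            WHERE  CODE = 'IND'

            UNION ALL

            -- recursive step: the direct children of everything collected so far
            SELECT c.ID, c.CODE
            FROM   Codes c
            JOIN   Subtree s ON c.PARENTID = s.ID
        )
        SELECT t.*
        FROM   Sales t
        JOIN   Subtree s ON t.IndustryCode = s.CODE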

  • How do I get the inserted id (or object) after an insert with the FormView/ObjectDataSource controls

    - by drs9222
    I have a series of classes that loosely fit the following pattern:

        public class CustomerInfo
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class CustomerTable
        {
            public bool Insert(CustomerInfo info) { /*...*/ }
            public bool Update(CustomerInfo info) { /*...*/ }
            public CustomerInfo Get(int id) { /*...*/ }
            /*...*/
        }

    After a successful insert, the Insert method sets the Id property of the CustomerInfo object that was passed in. I've used these classes in several places and would prefer not to alter them. Now I'm experimenting with writing an ASP.NET page for inserting and updating the records. I'm currently using the ObjectDataSource and FormView controls:

        <asp:ObjectDataSource TypeName="CustomerTable"
                              DataObjectTypeName="CustomerInfo"
                              InsertMethod="Insert"
                              UpdateMethod="Update"
                              SelectMethod="Get" />

    I can successfully insert and update records. I would like to switch the FormView's mode from Insert to Edit after a successful insert. My first attempt was to switch the mode in the ItemInserted event. This of course did not work: I was using a QueryStringParameter for the id, which of course isn't set when inserting. So I switched to manually populating the InputParameters during the ObjectDataSource's Selecting event. The problem with this is that I need to know the id of the newly inserted record, and I can't find a good way to get it. I understand that I can access the Insert method's return value and out parameters in the ItemInserted event, but of course my method doesn't return the id through any of these mechanisms. I can't find any way to access the id, or the CustomerInfo object that was inserted, after the insert completes. The best I've been able to do is to save the CustomerInfo object in the ObjectDataSource's Inserting event. This feels like an odd way to do it. I figure that there must be a better way and I'll kick myself when I see it. Any ideas?

  • Need a code snippet for backward paging...

    - by Ali
    Hi guys, I'm in a bit of a fix here. I know how easy it is to build simple pagination links for dynamic pages whereby you can navigate between partial sets of records from SQL queries. However, the situation I have is as follows. Consider that I wish to paginate between records listed in a flat file. I have no problem with the retrieval, and even the pagination, assuming that the flat file is a CSV file with the first field as an id and new records on new lines. However, I need to make a pagination system which paginates backwards, i.e. I want the LAST entry in the file to appear first, and so forth. Since I don't have the power of SQL to help me here I'm kinda stuck; all I have is a fixed sequence which needs to be paginated. Also note that the id mentioned as the first field is not necessarily numeric, so forget about sorting by numerics here. I basically need a way to loop through the file, but backwards, and paginate it as such. How can I do that? I'm working in PHP; I just need the code to loop through and paginate, i.e. how to tell which is the offset and which is the current page, etc.

  • WPF: How to connect to a SQL CE 3.5 database with ADO.NET?

    - by EV
    Hi, I'm trying to write an application that has two DataGrids: the first one displays customers, and the second one the selected customer's records. I have generated the ADO.NET Entity Model, and the connection properties are set in the App.config file. Now I want to populate the DataGrids with the data from a SQL CE 3.5 database, which does not support LINQ to SQL. I used this code for the first DataGrid in code-behind:

        CompanyDBEntities db;

        private void Window_Loaded(object sender, RoutedEventArgs e)
        {
            db = new CompanyDBEntities();
            ObjectQuery<Employees> departmentQuery = db.Employees;
            try
            {
                // lpgrid is the name of the first datagrid
                this.lpgrid.ItemsSource = departmentQuery;
            }
            catch (Exception ex)
            {
                MessageBox.Show(ex.Message);
            }
        }

    However, the DataGrid is empty although there are records in the database. It would also be better to use this with the MVVM pattern, but I'm not sure how ADO.NET and MVVM work together, or if it is even possible to use them like that. Could anyone help? Thanks in advance.

  • T-SQL - Left Outer Joins - Filters in the where clause versus the on clause

    - by Greg Potter
    I am trying to compare two tables to find rows in each table that are not in the other. Table 1 has a groupby column to create two sets of data within table 1:

        groupby     number
        ----------- -----------
        1           1
        1           2
        2           1
        2           2
        2           4

    Table 2 has only one column:

        number
        -----------
        1
        3
        4

    So table 1 has the values 1, 2, 4 in group 2, and table 2 has the values 1, 3, 4. I expect the following results when joining for group 2:

    Table 1 LEFT OUTER JOIN Table 2:

        T1_Groupby  T1_Number   T2_Number
        ----------- ----------- -----------
        2           2           NULL

    Table 2 LEFT OUTER JOIN Table 1:

        T1_Groupby  T1_Number   T2_Number
        ----------- ----------- -----------
        NULL        NULL        3

    The only way I can get this to work is if I put a where clause for the first join:

        PRINT 'Table 1 LEFT OUTER Join Table 2, with WHERE clause'
        select table1.groupby as [T1_Groupby],
               table1.number as [T1_Number],
               table2.number as [T2_Number]
        from table1
        LEFT OUTER join table2
        --******************************
        on table1.number = table2.number
        --******************************
        WHERE table1.groupby = 2
          AND table2.number IS NULL

    and a filter in the ON clause for the second:

        PRINT 'Table 2 LEFT OUTER Join Table 1, with ON clause'
        select table1.groupby as [T1_Groupby],
               table1.number as [T1_Number],
               table2.number as [T2_Number]
        from table2
        LEFT OUTER join table1
        --******************************
        on table2.number = table1.number AND table1.groupby = 2
        --******************************
        WHERE table1.number IS NULL

    Can anyone come up with a way of not using the filter in the ON clause, but in the WHERE clause instead? The context of this is that I have a staging area in a database and I want to identify new records and records that have been deleted. The groupby field is the equivalent of a batch id for an extract, and I am comparing the latest extract in a temp table to a batch from yesterday stored in a partitioned table, which also holds all the previously extracted batches. A sketch of one approach follows below.

    Code to create tables 1 and 2:

        create table table1 (number int, groupby int)
        create table table2 (number int)

        insert into table1 (number, groupby) values (1, 1)
        insert into table1 (number, groupby) values (2, 1)
        insert into table1 (number, groupby) values (1, 2)
        insert into table2 (number) values (1)
        insert into table1 (number, groupby) values (2, 2)
        insert into table2 (number) values (3)
        insert into table1 (number, groupby) values (4, 2)
        insert into table2 (number) values (4)
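
    (Editor's sketch, not part of the original question: one way to keep the batch filter out of the ON clause is to restrict table1 to the batch in a derived table first; the outer join then only ever sees group 2, and the anti-join test can stay in the WHERE clause.)

        PRINT 'Table 2 LEFT OUTER Join filtered Table 1'
        select t1.groupby as [T1_Groupby],
               t1.number as [T1_Number],
               table2.number as [T2_Number]
        from table2
        LEFT OUTER join (select groupby, number
                         from table1
                         where groupby = 2) as t1   -- batch filter applied before the join
            on t1.number = table2.number
        WHERE t1.number IS NULL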
