Search Results

Search found 27337 results on 1094 pages for 'sql'.

Page 627/1094 | < Previous Page | 623 624 625 626 627 628 629 630 631 632 633 634  | Next Page >

  • How do I update a database using ADODB.Recordset?

    - by every_answer_gets_a_point
    I am using Excel to connect to a MySQL database:

        Dim dpath, atime, rtime, lcalib, aname, rname, bstate, instrument As String
        Dim rs As ADODB.Recordset
        Set rs = New ADODB.Recordset
        ConnectDB
        With wsBooks
            rs.Open "batchinfo", oConn, adOpenKeyset, adLockOptimistic, adCmdTable
            Worksheets.Item("Report 1").Select
            dpath = Trim(Range("B2").Text)
            atime = Trim(Range("B3").Text)
            rtime = Trim(Range("B4").Text)
            lcalib = Trim(Range("B5").Text)
            aname = Trim(Range("B6").Text)
            rname = Trim(Range("B7").Text)
            bstate = Trim(Range("B8").Text)
            ' instrument = GetInstrFromXML(wbBook.FullName)
            With rs
                .AddNew ' create a new record
                ' add values to each field in the record
                .Fields("datapath") = dpath
                .Fields("analysistime") = atime
                .Fields("reporttime") = rtime
                .Fields("lastcalib") = lcalib
                .Fields("analystname") = aname
                .Fields("reportname") = rname
                .Fields("batchstate") = "bstate"
                .Fields("instrument") = "NA"
                .Update ' stores the new record
            End With

    What is the next step? How do I do an rs.Execute?
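
    A note and a sketch: after .Update the new row is already stored, and ADODB.Recordset has no Execute method (Execute belongs to the Connection and Command objects). For comparison, the same insert could be issued as a direct SQL statement and passed as a string to oConn.Execute; all values below are hard-coded placeholders for illustration:

        INSERT INTO batchinfo
            (datapath, analysistime, reporttime, lastcalib,
             analystname, reportname, batchstate, instrument)
        VALUES
            ('C:\data\run1', '09:00', '09:30', '2010-01-01',
             'analyst', 'report', 'open', 'NA');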

    Read the article

  • UUID collision risk using different algorithms

    - by Diego Jancic
    Hi Guys, I have a database where 2 (or maybe 3 or 4) different applications are inserting information. The new information has IDs of the type GUID/UUID, but each application is using a different algorithm to generate the IDs. For example, one is using NHibernate's "guid.comb", another is using SQL Server's NEWID(), and another might want to use .NET's Guid.NewGuid() implementation. Is there an above-normal risk of ID collision or duplicates? Thanks!

    Read the article

  • Oracle (Old?) Joins - A tool/script for conversion?

    - by Grasper
    I have been porting Oracle SELECTs, and I have been running across a lot of queries like this:

        SELECT e.last_name, d.department_name
        FROM employees e, departments d
        WHERE e.department_id(+) = d.department_id;

    ...and:

        SELECT last_name, d.department_id
        FROM employees e, departments d
        WHERE e.department_id = d.department_id(+);

    Are there any guides/tutorials for converting all of the variants of the (+) syntax? What is that syntax even called (so I can scour Google)? Even better: is there a tool/script that will do this conversion for me (preferably free)? An optimizer of some sort? I have around 500 of these queries to port. When was this standard phased out? Any info is appreciated.
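
    For reference, the (+) marker is Oracle's proprietary outer-join operator, placed on the side that may have no match. A sketch of the ANSI-join equivalents of the two queries above:

        SELECT e.last_name, d.department_name
        FROM employees e
        RIGHT OUTER JOIN departments d ON e.department_id = d.department_id;

        SELECT e.last_name, d.department_id
        FROM employees e
        LEFT OUTER JOIN departments d ON e.department_id = d.department_id;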

    Read the article

  • MySQL Query Join Table Selecting Highest Date Value

    - by ALHUI
    Here is the query that I run:

        SELECT cl.cl_id, cc_rego, cc_model, cl_dateIn, cl_dateOut
        FROM courtesycar cc
        LEFT JOIN courtesyloan cl ON cc.cc_id = cl.cc_id

    Results:

        1  NXI955  Prado   2013-10-24 11:48:38  NULL
        2  RJI603  Avalon  2013-10-24 11:48:42  2013-10-24 11:54:18
        3  RJI603  Avalon  2013-10-24 12:01:40  NULL

    The result I want is to group by the cc_rego values and show only the most recent cl_dateIn value (i.e. only display rows 1 and 3). I've tried to use MAX on the date with a GROUP BY clause, but it combines rows 2 and 3 together, showing both the highest value of dateIn and the highest value of dateOut. Any help will be appreciated.
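
    A sketch of the usual greatest-n-per-group pattern for this, assuming cl_dateIn is unique per car (cars with no loans at all would need the filter moved into the join condition to survive the LEFT JOIN):

        SELECT cl.cl_id, cc.cc_rego, cc.cc_model, cl.cl_dateIn, cl.cl_dateOut
        FROM courtesycar cc
        JOIN courtesyloan cl ON cc.cc_id = cl.cc_id
        WHERE cl.cl_dateIn = (SELECT MAX(cl2.cl_dateIn)
                              FROM courtesyloan cl2
                              WHERE cl2.cc_id = cl.cc_id);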

    Read the article

  • C# Dataset Dynamically Add DataColumn

    - by Wesley
    I am trying to add an extra column to a DataSet after a query has completed. I have a database relationship of the following:

             Employees
             /       \
        Groups     EmployeeGroups

    Employees holds all the data for each individual; I'll name the unique key UserID. Groups holds all the groups that an employee can be a part of, i.e. Super User, Admin, User, etc.; I'll name the unique key GroupID. EmployeeGroups holds all the associations of which groups each employee belongs to (UserID | GroupID).

    What I am trying to accomplish: after querying for all users, I want to loop through each user and add the groups that user is a part of, by adding a new column to the DataSet named 'Groups' (a string) to hold the values of a second query that fetches that user's groups. Then, by use of databinding, I want to populate a ListView with all employees and their group associations. My code is as follows; position 5 is the new column I am trying to add to the DataSet.

        string theQuery = "select UserID, FirstName, LastName, EmployeeID, Active from Employees";
        DataSet theEmployeeSet = itsDatabase.runQuery(theQuery);
        DataColumn theCol = new DataColumn("Groups", typeof(string));
        theEmployeeSet.Tables[0].Columns.Add(theCol);
        foreach (DataRow theRow in theEmployeeSet.Tables[0].Rows)
        {
            theRow.ItemArray[5] = "1234";
        }

    At the moment, the code will create the new column, but when I assign the data to that column nothing is assigned. What am I missing? If there is any further explanation or information I can provide, please let me know. Thank you all.
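
    A database-side alternative, named plainly since it replaces the per-row loop: fold the group names into the main query so no extra column needs populating afterwards. A T-SQL sketch, assuming the backing database is SQL Server and assuming a GroupName column on Groups (that column name is hypothetical):

        SELECT e.UserID, e.FirstName, e.LastName, e.EmployeeID, e.Active,
               STUFF((SELECT ',' + g.GroupName
                      FROM EmployeeGroups eg
                      JOIN Groups g ON g.GroupID = eg.GroupID
                      WHERE eg.UserID = e.UserID
                      FOR XML PATH('')), 1, 1, '') AS Groups
        FROM Employees e;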

    Read the article

  • How can I combine result and subquery for IN comparison (mysql)

    - by user325804
    For a school project I need to create the following situation within one MySQL query: a child's tags and a parent's tags need to be combined into one list and compared to a site's tags, depending on a few extra simple equals-to conditions. For this to happen, I only see the option of combining the result of a subquery with a subquery within that query, as such:

        SELECT tag.*,
               (SELECT group_concat(t1.id, ',',
                       (SELECT group_concat(tag.id)
                        FROM adcampaign
                        INNER JOIN adcampaign_tag ON adcampaign.id = adcampaign_tag.adcampaign_id
                        INNER JOIN tag ON adcampaign_tag.tag_id = tag.id
                        WHERE adcampaign.id = 1))
                FROM ad, ad_tag, tag AS t1
                WHERE ad.id = ad_tag.ad_id
                  AND ad_tag.tag_id = t1.id
                  AND ad.adcampaign_id = 1
                  AND ad.agecategory_id = 1
                  AND ad.adsize_id = 1
                  AND ad.adtype_id = 1) as tags
        FROM tag
        WHERE tag.id IN tags

    But the IN comparison only returns the first result, because the tags are no longer a list but a concatenated string. Anyone got any suggestions on this? I really need a way to combine it into one array.
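
    A common MySQL workaround when the id list arrives as a comma-separated string is FIND_IN_SET, which, unlike IN, does treat its second argument as such a list. A simplified sketch (the inner query here is trimmed down from the full one above):

        SELECT tag.*
        FROM tag
        WHERE FIND_IN_SET(tag.id,
              (SELECT GROUP_CONCAT(t1.id)
               FROM ad
               JOIN ad_tag ON ad.id = ad_tag.ad_id
               JOIN tag AS t1 ON ad_tag.tag_id = t1.id
               WHERE ad.adcampaign_id = 1)) > 0;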

    Read the article

  • Users Hierarchy Logic

    - by user342944
    Hi guys, I am writing a user security module using SQL Server 2008 and therefore need to design a database accordingly. Previously I had a UserInfo table with UserID, Username and ParentID to build a recursion, and populated a tree to represent the hierarchy, but now I have the following criteria to develop. I now have USERS, ADMINISTRATORS and GROUPS; each node in the user hierarchy is either a user, an administrator or a group.

    User: someone who has login access to my application.

    Administrator: a user who may also manage all their child user accounts (and their children, etc.). This may include creating new users and assigning permissions to those users. There is no limit to the number of administrators in the user structure. The higher up in the hierarchy I go, the more child accounts administrators have to manage, including other child administrators.

    Group: a user account can be designated as a group. This will be an account used to group one or more users together so that they can be managed as a unit, but no one can log in to my application using a group account.

    This is the structure I want to create:

        Super Administrator (administrator)
        |
        +-- Manager A (administrator)
        |   +-- Employee A (User)
        |   +-- Employee B (User)
        |   +-- Sales Employees (Group)
        |       +-- Emp C (User)
        |       +-- Emp D (User)
        |       +-- Emp E (User)
        +-- Manager B (administrator)
        +-- Manager C (administrator)

    Now, how do I build the table structure to achieve this? Do I need to create a Users table along with a Group table, or what? Please guide me; I would really appreciate it.
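
    One possible direction, offered as a sketch rather than a definitive design: keep the single adjacency-list table and add a node-type discriminator, since groups here are simply accounts that cannot log in (all names hypothetical):

        CREATE TABLE UserInfo (
            UserID   INT IDENTITY(1,1) PRIMARY KEY,
            ParentID INT NULL REFERENCES UserInfo (UserID),
            Username NVARCHAR(50) NOT NULL,
            NodeType TINYINT NOT NULL  -- 1 = User, 2 = Administrator, 3 = Group;
                                       -- the application refuses logins where NodeType = 3
        );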

    Read the article

  • SQL only row mapping record fetching

    - by Prasanna
    I have a customer call detail (CDR) table in which the call details of all customers are stored. I have to find the distinct A-party numbers (meaning our customers) who only call our customers (meaning the B-party must also be one of our numbers). In this case, no other domestic or international calls are made by the A-party (our customer). Could you please help me find this data?

    Sample CDR table columns:

        ANUMBER: any mobile number (domestic + international); for our customers it must start with 70, 070, 0070 or 9370
        BNUMBER: any mobile number (domestic + international); for our customers it must start with 70, 070, 0070 or 9370
        CALLTRANSACTION: e.g. 91, 92, 93, etc.
        CALLTRANSACTIONTYPEC: e.g. MOC, MTC
        FILENAME: MCS_01, etc.
        TIME: any time value

    Required output:

        DISTINCT ANUMBER: for our customers the mobile number must start with 70, 070, 0070 or 9370
        BNUMBER: for our customers the mobile number must start with 70, 070, 0070 or 9370

    In other words: our customers who only call other customers on our network (no other domestic or international calls made through our operator).
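
    A sketch of one way to phrase "callers with zero off-net B-parties" (the table name cdr and the LIKE prefix tests are assumptions; overlapping prefixes such as 070 vs 70 would need more care in a real system):

        SELECT anumber
        FROM cdr
        WHERE anumber LIKE '70%' OR anumber LIKE '070%'
           OR anumber LIKE '0070%' OR anumber LIKE '9370%'
        GROUP BY anumber
        HAVING SUM(CASE WHEN bnumber LIKE '70%' OR bnumber LIKE '070%'
                          OR bnumber LIKE '0070%' OR bnumber LIKE '9370%'
                   THEN 0 ELSE 1 END) = 0;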

    Read the article

  • Reducing a normalized table to one value

    - by Dio
    Hello, I'm sure this has been asked before, but I'm not quite sure how to properly search for this question; my apologies. I have two tables, Foo and Bar. Foo has one row per item; Bar has many matching descriptor rows per item.

        Foo:
            name    id
            Apple   1
            Orange  2

        Bar:
            id  description
            1   Tasty
            1   Ripe
            2   Sweet

    (Sorry for the somewhat contrived example.) I'm trying to write a query where, for each row in Foo, if Bar contains a descriptor in ('Tasty', 'Juicy'), it returns true, e.g.:

        Apple   True
        Orange  False

    I had been solving this somewhat trivially with a CASE when I only had one item to match:

        select Foo.name,
               case bar.description when 'Tasty' then 'True' else 'False' end
        from Foo
        left join Bar on foo.id = bar.id
        where bar.description = 'Tasty'

    But with multiple items, I keep ending up with extra rows:

        Apple   True
        Apple   False
        etc.

    Can someone point me in the right direction on how to think about this problem, or what I should be doing? Thank you.
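
    A sketch of one common shape for this: move the multi-descriptor test into an EXISTS so each Foo row appears exactly once:

        SELECT f.name,
               CASE WHEN EXISTS (SELECT 1
                                 FROM Bar b
                                 WHERE b.id = f.id
                                   AND b.description IN ('Tasty', 'Juicy'))
                    THEN 'True' ELSE 'False' END AS matches
        FROM Foo f;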

    Read the article

  • Can MySQL reasonably perform queries on billions of rows?

    - by haxney
    I am planning on storing scans from a mass spectrometer in a MySQL database and would like to know whether storing and analyzing this amount of data is remotely feasible. I know performance varies wildly depending on the environment, but I'm looking for the rough order of magnitude: will queries take 5 days or 5 milliseconds?

    Input format

    Each input file contains a single run of the spectrometer; each run is comprised of a set of scans, and each scan has an ordered array of datapoints. There is a bit of metadata, but the majority of the file is comprised of arrays of 32- or 64-bit ints or floats.

    Host system

        |----------------+-------------------------------|
        | OS             | Windows 2008 64-bit           |
        | MySQL version  | 5.5.24 (x86_64)               |
        | CPU            | 2x Xeon E5420 (8 cores total) |
        | RAM            | 8GB                           |
        | SSD filesystem | 500 GiB                       |
        | HDD RAID       | 12 TiB                        |
        |----------------+-------------------------------|

    There are some other services running on the server using negligible processor time.

    File statistics

        |------------------+--------------|
        | number of files  | ~16,000      |
        | total size       | 1.3 TiB      |
        | min size         | 0 bytes      |
        | max size         | 12 GiB       |
        | mean             | 800 MiB      |
        | median           | 500 MiB      |
        | total datapoints | ~200 billion |
        |------------------+--------------|

    The total number of datapoints is a very rough estimate.

    Proposed schema

    I'm planning on doing things "right" (i.e. normalizing the data like crazy) and so would have a runs table, a spectra table with a foreign key to runs, and a datapoints table with a foreign key to spectra.

    The 200 billion datapoint question

    I am going to be analyzing across multiple spectra and possibly even multiple runs, resulting in queries which could touch millions of rows. Assuming I index everything properly (which is a topic for another question) and am not trying to shuffle hundreds of MiB across the network, is it remotely plausible for MySQL to handle this?

    UPDATE: additional info

    The scan data will be coming from files in the XML-based mzML format. The meat of this format is in the <binaryDataArrayList> elements where the data is stored. Each scan produces >= 2 <binaryDataArray> elements which, taken together, form a 2-dimensional (or more) array of the form [[123.456, 234.567, ...], ...]. These data are write-once, so update performance and transaction safety are not concerns. My naïve plan for a database schema is:

    runs table

        | column name | type        |
        |-------------+-------------|
        | id          | PRIMARY KEY |
        | start_time  | TIMESTAMP   |
        | name        | VARCHAR     |
        |-------------+-------------|

    spectra table

        | column name    | type        |
        |----------------+-------------|
        | id             | PRIMARY KEY |
        | name           | VARCHAR     |
        | index          | INT         |
        | spectrum_type  | INT         |
        | representation | INT         |
        | run_id         | FOREIGN KEY |
        |----------------+-------------|

    datapoints table

        | column name | type        |
        |-------------+-------------|
        | id          | PRIMARY KEY |
        | spectrum_id | FOREIGN KEY |
        | mz          | DOUBLE      |
        | num_counts  | DOUBLE      |
        | index       | INT         |
        |-------------+-------------|

    Is this reasonable?
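
    To make the scale concrete, a hypothetical query of the kind described (run id 42 is made up): per-spectrum aggregates across a single run, which at the stated sizes (~200 billion points over ~16,000 runs) would already touch on the order of ten million datapoints rows:

        SELECT d.spectrum_id,
               MIN(d.mz) AS mz_min,
               MAX(d.mz) AS mz_max,
               SUM(d.num_counts) AS total_counts
        FROM datapoints d
        JOIN spectra s ON s.id = d.spectrum_id
        WHERE s.run_id = 42
        GROUP BY d.spectrum_id;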

    Read the article

  • CTE to build a list of departments and managers (hierarchical)

    - by Milky Joe
    I need to generate a list of users that are managers, or managers of managers, for company departments. I have two tables; one details the departments and one contains the manager hierarchy (simplified):

        CREATE TABLE [dbo].[Manager](
            [ManagerId] [int],
            [ParentManagerId] [int])

        CREATE TABLE [dbo].[Department](
            [DepartmentId] [int],
            [ManagerId] [int])

    Basically, I'm trying to build a CTE that will give me a list of DepartmentIds, together with all ManagerIds that are in the manager hierarchy for that department. So, say Manager 1 is the manager for Department 1, Manager 2 is Manager 1's manager, and Manager 3 is Manager 2's manager; I'd like to see:

        DepartmentId, ManagerId
        1, 1
        1, 2
        1, 3

    Basically, managers are able to deal with all of their sub-managers' departments. Building the CTE to return the manager hierarchy was fairly simple, but I'm struggling to inject the departments in there:

        WITH DepartmentManagers AS (
            SELECT ManagerId, ParentManagerId, 0 AS Depth
            FROM Manager
            UNION ALL
            SELECT Manager.ManagerId, Manager.ParentManagerId, DepartmentManagers.Depth + 1 AS Depth
            FROM Manager
            INNER JOIN DepartmentManagers ON DepartmentManagers.ManagerId = Manager.ParentManagerId
        )

    Can anyone help?
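
    A sketch of one way to bring the departments in: anchor the recursion at each department's direct manager, then walk up the ParentManagerId chain, carrying DepartmentId along:

        WITH DepartmentManagers AS (
            SELECT d.DepartmentId, m.ManagerId, m.ParentManagerId, 0 AS Depth
            FROM Department d
            INNER JOIN Manager m ON m.ManagerId = d.ManagerId
            UNION ALL
            SELECT dm.DepartmentId, m.ManagerId, m.ParentManagerId, dm.Depth + 1
            FROM Manager m
            INNER JOIN DepartmentManagers dm ON m.ManagerId = dm.ParentManagerId
        )
        SELECT DepartmentId, ManagerId
        FROM DepartmentManagers;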

    Read the article

  • How to increase the limit of exceptions in Oracle

    - by Arunachalam
    How do I increase the limit on exceptions in Oracle? I have an Excel sheet with about 900 records to be appended, so I converted the Excel file to a .dat file and wrote a batch file that reads from the .dat file and appends the records to the relevant table. But the batch file stops execution once the exceptions reach 51 (all integrity constraint violations: parent key not found), so the remaining valid records are not loaded. It's very difficult to find which records violate the integrity constraint. Is there a way to increase this exception limit?
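
    Two hedged possibilities, depending on what the batch file actually runs. If it drives SQL*Loader, the stop-after-50-errors behavior matches that tool's default ERRORS=50 cap, which can be raised with the ERRORS option on the command line or in the control file. If instead the rows go in through plain INSERT statements, Oracle's DML error logging (10gR2 and later) can log the bad rows and load everything else; a sketch with hypothetical table names:

        -- one-time setup: creates the err$_batch_target log table
        EXEC DBMS_ERRLOG.CREATE_ERROR_LOG('BATCH_TARGET');

        INSERT INTO batch_target
        SELECT * FROM staging_records
        LOG ERRORS INTO err$_batch_target ('batch-1') REJECT LIMIT UNLIMITED;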

    Read the article

  • Getting a linq table to be dynamically sent to a method

    - by Damian Spaulding
    I have a procedure:

        var Edit = (from R in Linq.Products where R.ID == RecordID select R).Single();

    ...in which I would like to make "Linq.Products" dynamic. Something like:

        protected void Page_Load(object sender, EventArgs e)
        {
            something(Linq.Products);
        }

        public void something(Object MyObject)
        {
            System.Data.Linq.Table<Product> Dynamic = (System.Data.Linq.Table<Product>)MyObject;
            var Edit = (from R in Dynamic where R.ID == RecordID select R).Single();
        }

    My problem is that my "something" method will not be able to know what table has been sent to it, so the static line:

        System.Data.Linq.Table<Product> Dynamic = (System.Data.Linq.Table<Product>)MyObject;

    would have to reflect something like:

        System.Data.Linq.Table<T> Dynamic = (System.Data.Linq.Table<T>)MyObject;

    ...with <T> being a dynamic catch-all type parameter, so that LINQ can just execute the code as if I had hand-coded it statically. I have been pulling my hair out with this one. Please help.

    Read the article

  • Insert Stored Procedure does not Create Database Record

    - by SidC
    Hello All, I have the following stored procedure:

        ALTER PROCEDURE Pro_members_Insert
            @id int outPut,
            @LoginName nvarchar(50),
            @Password nvarchar(15),
            @FirstName nvarchar(100),
            @LastName nvarchar(100),
            @signupDate smalldatetime,
            @Company nvarchar(100),
            @Phone nvarchar(50),
            @Email nvarchar(150),
            @Address nvarchar(255),
            @PostalCode nvarchar(10),
            @State_Province nvarchar(100),
            @City nvarchar(50),
            @countryCode nvarchar(4),
            @active bit,
            @activationCode nvarchar(50)
        AS
        declare @usName as varchar(50)
        set @usName=''
        select @usName=isnull(LoginName,'') from members where LoginName=@LoginName
        if @usName <> ''
        begin
            set @ID=-3
            RAISERROR('User Already exist.', 16, 1)
            return
        end
        set @usName=''
        select @usName=isnull(email,'') from members where Email=@Email
        if @usName <> ''
        begin
            set @ID=-4
            RAISERROR('Email Already exist.', 16, 1)
            return
        end
        declare @MemID as int
        select @memID=isnull(max(ID),0)+1 from members
        INSERT INTO members
            (id, LoginName, Password, FirstName, LastName, signupDate, Company, Phone,
             Email, Address, PostalCode, State_Province, City, countryCode, active, activationCode)
        VALUES
            (@Memid, @LoginName, @Password, @FirstName, @LastName, @signupDate, @Company, @Phone,
             @Email, @Address, @PostalCode, @State_Province, @City, @countryCode, @active, @activationCode)
        if @@error <> 0
            set @ID=-1
        else
            set @id=@memID

    Note that I've "inherited" this sproc and the database. I am trying to insert a new record from my signup.aspx page. My SqlDataSource is as follows:

        <asp:SqlDataSource runat="server" ID="dsAddMember"
            ConnectionString="rmsdbuser"
            InsertCommandType="StoredProcedure"
            InsertCommand="Pro_members_Insert"
            ProviderName="System.Data.SqlClient">

    The click handler for btnSave is as follows:

        Protected Sub btnSave_Click(ByVal sender As Object, ByVal e As EventArgs) Handles btnSave.Click
            Try
                dsAddMember.DataBind()
            Catch ex As Exception
            End Try
        End Sub

    When I run signup.aspx, provide the required fields and click submit, the page simply reloads and the database table does not reflect the newly inserted record. Questions: 1. How do I catch the error messages that might be returned from the sproc? 2. Please advise how to change signup.aspx so that the insert occurs. Thanks, Sid
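
    One aside on the procedure itself, offered as a sketch rather than the fix for the page: the select max(ID)+1 pattern races under concurrent signups. If the id column can be changed to an IDENTITY column (a schema change), the usual shape is:

        INSERT INTO members
            (LoginName, Password, FirstName, LastName, signupDate, Company, Phone,
             Email, Address, PostalCode, State_Province, City, countryCode, active, activationCode)
        VALUES
            (@LoginName, @Password, @FirstName, @LastName, @signupDate, @Company, @Phone,
             @Email, @Address, @PostalCode, @State_Province, @City, @countryCode, @active, @activationCode);

        SET @id = SCOPE_IDENTITY();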

    Read the article

  • Helping linqtosql datacontext use implicit conversion between varchar column in the database and tab

    - by user213256
    I am creating an MSSQL database table, "Orders", that will contain a varchar(50) field, "Value", containing a string that represents a slightly complex data type, "OrderValue". I am using a LINQ to SQL DataContext class, which automatically types the "Value" column as a string. I gave the "OrderValue" class implicit conversion operators to and from a string, so I can easily use implicit conversion with the LINQ to SQL classes like this:

        // get an order from the orders table
        MyDataContext db = new MyDataContext();
        Order order = db.Orders(o => o.id == 1);

        // use implicit conversion to turn the string representation of the order
        // value into the complex data type.
        OrderValue value = order.Value;

        // adjust one of the fields in the complex data type
        value.Shipping += 10;

        // use implicit conversion to store the string representation of the complex
        // data type back in the linqtosql order object
        order.Value = value;

        // save changes
        db.SubmitChanges();

    However, I would really like to be able to tell the LINQ to SQL class to type this field as "OrderValue" rather than as "string". Then I would be able to avoid complex code and re-write the above as:

        // get an order from the orders table
        MyDataContext db = new MyDataContext();
        Order order = db.Orders(o => o.id == 1);

        // The Value field is already typed as the "OrderValue" type rather than as string.
        // When a string value was read from the database table, it was implicitly converted
        // to "OrderValue" type.
        order.Value.Shipping += 10;

        // save changes
        db.SubmitChanges();

    In order to achieve this desired goal, I looked at the DataContext designer and selected the "Value" field of the "Order" table. Then, in properties, I changed "Type" to "global::MyApplication.OrderValue". The "Server Data Type" property was left as "VarChar(50) NOT NULL". The project built without errors. However, when reading from the database table, I was presented with the following error message:

        Could not convert from type 'System.String' to type 'MyApplication.OrderValue'.
           at System.Data.Linq.DBConvert.ChangeType(Object value, Type type)
           at Read_Order(ObjectMaterializer`1 )
           at System.Data.Linq.SqlClient.ObjectReaderCompiler.ObjectReader`2.MoveNext()
           at System.Linq.Buffer`1..ctor(IEnumerable`1 source)
           at System.Linq.Enumerable.ToArray[TSource](IEnumerable`1 source)
           at Example.OrdersProvider.GetOrders()
           at ... etc

    From the stack trace, I believe this error is happening while reading the data from the table. When presented with converting a string to my custom data type, even though the implicit conversion operators are present, the DBConvert class gets confused and throws an error. Is there anything I can do to help it not get confused and do the implicit conversion? Thanks in advance, and apologies if I have posted in the wrong forum. cheers / Ben

    Read the article

  • Loading Dimension Tables - Methodologies

    - by Nev_Rahd
    Hello, recently I have been working on a project where I need to populate dim (dimension) tables from EDW tables. The EDW tables are Type II, i.e. they maintain historical data. When it comes to loading a dim table, the source may be multiple EDW tables, or a single table with multi-level pivoting (on attributes). Meaning: there would be 10 records, one for each attribute, which need to be pivoted on domain_code to make a single row in the dim. Out of these 10 records there would be some attributes with the same domain_code but with different sub_domain_code values, which need further pivoting on subdomain code. For example: domain codes 01, 02, 03 are a straight pivot on domain code, but I would also have domain code 10 with subdomain code / version values such as 2006, 2007, 2008, 2009. That means I need to split my source table on the above attributes into two: one for domain_code and the other for domain_code + version. So far so good.

    When it comes to loading the dim table: as per the design specs for dimensions (originally written by a third party), what they want is that for every single change in an EDW attribute, the load should assemble all the related records for that NK (the new one plus the other attribute values which are current), process them to create a new dim record, and insert it. That means if a single extract contains 100 updated records (one for each NK), it should assemble 100 + (100*9) records to insert into / update the dim table. How good is this approach?

    The other way I tried is to just do a lookup into the dim table for that NK, get the values of the most recent record (the attributes which have not changed), insert the new record, and update the current one.

    Which would be the better approach: assembling records on the source side for one attribute change, or looking up the dim table's most recent record and processing it? If this doesn't make sense, I'd be happy to elaborate further. Thanks
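
    For reference, a sketch of the single-level domain_code pivot described above (all table and column names hypothetical):

        SELECT nk,
               MAX(CASE WHEN domain_code = '01' THEN attribute_value END) AS attr_01,
               MAX(CASE WHEN domain_code = '02' THEN attribute_value END) AS attr_02,
               MAX(CASE WHEN domain_code = '03' THEN attribute_value END) AS attr_03
        FROM edw_source
        GROUP BY nk;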

    Read the article

  • How can I do more than one level of cascading deletes in Linq?

    - by Gary McGill
    If I have a Customers table linked to an Orders table, and I want to delete a customer and its corresponding orders, then I can do:

        dataContext.Orders.DeleteAllOnSubmit(customer.Orders);
        dataContext.Customers.DeleteOnSubmit(customer);

    ...which is great. However, what if I also have an OrderItems table, and I want to delete the order items for each of the orders deleted? I can see how I could use DeleteAllOnSubmit to cause the deletion of all the order items for a single order, but how can I do it for all the orders?
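
    A database-side alternative, named plainly since it sidesteps LINQ entirely: declare the foreign keys with cascading deletes, so deleting the customer row removes its orders and their items in one statement. A T-SQL sketch, with the constraint and key column names as assumptions:

        ALTER TABLE Orders
            ADD CONSTRAINT FK_Orders_Customers
            FOREIGN KEY (CustomerId) REFERENCES Customers (CustomerId)
            ON DELETE CASCADE;

        ALTER TABLE OrderItems
            ADD CONSTRAINT FK_OrderItems_Orders
            FOREIGN KEY (OrderId) REFERENCES Orders (OrderId)
            ON DELETE CASCADE;

        -- with these in place, a single DELETE removes the whole graph:
        DELETE FROM Customers WHERE CustomerId = 1;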

    Read the article

  • Reuse select query in a procedure in Oracle

    - by Jer
    How would I store the result of a SELECT statement so I can reuse the results with an IN clause in other queries? Here's some pseudo code:

        declare
            ids <type?>;
        begin
            ids := select id from table_with_ids;
            select * from table1 where id in (ids);
            select * from table2 where id in (ids);
        end;

    ...or will the optimizer do this for me if I simply put the sub-query in both SELECT statements?
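
    A sketch of one concrete way to fill in the pseudo code with a collection type and TABLE() (the type must be created at the SQL level so it can be queried; names hypothetical):

        CREATE TYPE id_tab AS TABLE OF NUMBER;
        /
        DECLARE
            ids id_tab;
        BEGIN
            SELECT id BULK COLLECT INTO ids FROM table_with_ids;

            FOR r IN (SELECT * FROM table1
                      WHERE id IN (SELECT column_value FROM TABLE(ids)))
            LOOP
                NULL; -- process each table1 row here
            END LOOP;

            FOR r IN (SELECT * FROM table2
                      WHERE id IN (SELECT column_value FROM TABLE(ids)))
            LOOP
                NULL; -- process each table2 row here
            END LOOP;
        END;
        /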

    Read the article

  • What is the correct way to increment a field making up part of a composite key

    - by Tr1stan
    I have a bunch of tables whose primary key is made up of the foreign keys of other tables (a composite key). Therefore, for example, the attributes (as a very cut-down version) might look like this:

        A[aPK, SomeFields]
          1:M
        B[bPK, aFK, SomeFields]
          1:M
        C[cPK, bFK, aFK, SomeFields]

    As data this could look like:

        A[aPK, SomeFields]:
            1, Foo
            2, Bar

        B[bPK, aFK, SomeFields]:
            1, 1, FooData1
            2, 1, FooData2
            1, 2, BarData1
            2, 2, BarData2

        C[cPK, bFK, aFK, SomeFields]:
            1, 1, 1, FooData1More
            2, 1, 1, FooData1More
            1, 2, 1, FooData2More
            2, 2, 1, FooData2More
            1, 1, 2, BarData1More
            2, 1, 2, BarData1More
            1, 2, 2, BarData2More
            2, 2, 2, BarData2More

    I've got this running in an MSSQL DBMS, and I'm looking for the best way to increment the left-most column in each table when a new tuple is added to it. I can't use the auto-increment Identity Specification option, as that has no idea that it is part of a composite key. I also don't want to use an aggregate function such as MAX(field)+1, as this will have adverse effects with multiple users inputting data, rolling back, etc. There might however be a nice trigger-based option here, but I'm not sure. This must be a common issue, so I'm hoping that someone has a lovely solution. As an aside, which may or may not affect the answer: I'm using Entity Framework 1.0 as my ORM, within a C# MVC application.
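
    For what it's worth, a sketch of one common pattern: compute MAX+1 inside the inserting statement itself, under key-range locks, so concurrent inserts for the same parent serialize rather than collide (variable names hypothetical; this belongs inside the same transaction as any child inserts):

        INSERT INTO B (bPK, aFK, SomeFields)
        SELECT COALESCE(MAX(bPK), 0) + 1, @aFK, @SomeFields
        FROM B WITH (UPDLOCK, HOLDLOCK)
        WHERE aFK = @aFK;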

    Read the article
