Search Results

Search found 32223 results on 1289 pages for 'sql 2012'.

Page 731/1289

  • Highlight row in report?

    - by sanjeev40084
    I have an SSRS report that displays hundreds of rows. I was wondering if there is any way I can highlight rows so that I can easily tell which row I am on while reading the report. Any thoughts?
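
    One common approach (a sketch, not taken from the original post) is to set the BackgroundColor property of the detail row to an expression that alternates by row number, so each row is easier to follow across the columns:

        =IIf(RowNumber(Nothing) Mod 2 = 0, "WhiteSmoke", "White")

    The same property can also be driven by a field value if only specific rows should stand out.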

    Read the article

  • PgJDBC: "no suitable driver found" when following tutorial, why?

    - by Celeritas
    I'm writing a Java program that queries a PostgreSQL database. I'm following this example and have trouble here:

        connection = DriverManager.getConnection(
            "jdbc:postgresql://127.0.0.1:5432/testdb", "mkyong", "123456");

    According to the JavaDoc for DriverManager, the first string is "a database url of the form jdbc:subprotocol:subname". When I connect to the server I type psql -h dataserv.abc.company.com -d app -U emp24 and give the password qwe123 (for example's sake). What should the first argument of getConnection be? I've tried

        connection = DriverManager.getConnection(
            "jdbc:postgresql://dataserv.abc.company.com", "emp24", "qwe123");

    and get the runtime error: no suitable driver found. I've downloaded the PostgreSQL JDBC4 driver, version 9.2-1000.
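
    A minimal sketch of a working connection, assuming the database from the psql command line is named app and that postgresql-9.2-1000.jdbc4.jar is on the runtime classpath (both assumptions, not confirmed by the original post):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;

        public class PgConnect {
            public static void main(String[] args) throws SQLException {
                // JDBC4 drivers register themselves, but an explicit load rules out classpath problems.
                try {
                    Class.forName("org.postgresql.Driver");
                } catch (ClassNotFoundException e) {
                    throw new IllegalStateException("PostgreSQL driver jar is not on the classpath", e);
                }
                // URL form: jdbc:postgresql://host:port/database
                Connection connection = DriverManager.getConnection(
                        "jdbc:postgresql://dataserv.abc.company.com:5432/app", "emp24", "qwe123");
                System.out.println("Connected: " + !connection.isClosed());
                connection.close();
            }
        }

    The "no suitable driver found" message generally means either that no registered driver recognizes the URL or that the driver jar is missing from the classpath.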

    Read the article

  • Advice on Linq to SQL mapping object design

    - by fearofawhackplanet
    I hope the title and following text are clear; I'm not very familiar with the correct terms, so please correct me if I get anything wrong. I'm using the LINQ ORM for the first time and am wondering how to address the following. Say I have two DB tables:

        User
        ----
        Id
        Name

        Phone
        -----
        Id
        UserId
        Model

    The Linq code generator produces a bunch of entity classes. I then write my own classes and interfaces which wrap these Linq classes:

        class DatabaseUser : IUser
        {
            public DatabaseUser(User user) { _user = user; }
            public Guid Id { get { return _user.Id; } }
            ... etc
        }

    So far so good. Now it's easy enough to find a user's phones with Phones.Where(p => p.User == user), but surely consumers of the API shouldn't need to write their own LINQ queries to get at data, so I should wrap this query in a function or property somewhere. So the question is: in this example, would you add a Phones property to IUser or not? In other words, should my interfaces specifically model my database objects (in which case Phones doesn't belong in IUser), or do they simply provide a set of functions and properties which are conceptually associated with a User (in which case it does)? There seem to be drawbacks to both views, but I'm wondering if there is a standard approach to the problem, or just any general words of wisdom you could share. My first thought was to use extension methods, but in fact that doesn't work in this case.
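
    If the property does end up on the interface, one option is to expose it from the wrapper so callers never touch the LINQ layer. A sketch, assuming an IPhone wrapper analogous to IUser, a DatabasePhone implementation, and that the wrapper is handed the data context; none of these are in the original post:

        // Sketch only: IPhone, DatabasePhone and the injected DataContext are assumed.
        class DatabaseUser : IUser
        {
            private readonly User _user;
            private readonly DataContext _context;

            public DatabaseUser(User user, DataContext context)
            {
                _user = user;
                _context = context;
            }

            public Guid Id { get { return _user.Id; } }

            public IEnumerable<IPhone> Phones
            {
                get
                {
                    // Query runs against the context; the cast to IPhone happens in memory.
                    return _context.GetTable<Phone>()
                                   .Where(p => p.UserId == _user.Id)
                                   .AsEnumerable()
                                   .Select(p => (IPhone)new DatabasePhone(p));
                }
            }
        }

    Keeping the query inside the wrapper keeps the decision reversible: if Phones is later dropped from IUser, only this class changes.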

    Read the article

  • CakePHP repeats same queries

    - by Rytis
    I have a model structure: Category hasMany Product hasMany Stockitem belongsTo Warehouse, Manufacturer. I fetch data with this code, using containable so I can filter deeper in the associated models:

        $this->Category->find('all', array(
            'conditions' => array('Category.id' => $category_id),
            'contain' => array(
                'Product' => array(
                    'Stockitem' => array(
                        'conditions' => array('Stockitem.warehouse_id' => $warehouse_id),
                        'Warehouse',
                        'Manufacturer',
                    )
                )
            ),
        ));

    The data structure is returned just fine; however, I get multiple repeated queries like the one below, sometimes hundreds in a row, depending on the dataset:

        SELECT `Warehouse`.`id`, `Warehouse`.`title`
        FROM `beta_warehouses` AS `Warehouse`
        WHERE `Warehouse`.`id` = 2

    Basically, when building the data structure Cake is fetching data from MySQL over and over again, once for each row. We have datasets of several thousand rows, and I have a feeling this is going to hurt performance. Is it possible to make it cache results and not repeat the same queries?
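
    One knob worth trying (a sketch, not verified against this model setup): CakePHP models have a cacheQueries flag that caches identical SELECTs for the duration of a request, which can collapse repeated lookups like the Warehouse one above:

        // Sketch: enable per-request query caching on the models involved before the find().
        $this->Category->cacheQueries = true;
        $this->Category->Product->Stockitem->Warehouse->cacheQueries = true;
        $this->Category->Product->Stockitem->Manufacturer->cacheQueries = true;

    If the repetition comes from how containable assembles the deeper associations, restructuring the fetch (for example querying Stockitem directly and containing Product, Warehouse and Manufacturer from there) may reduce the query count more reliably.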

    Read the article

  • Can MySQL reasonably perform queries on billions of rows?

    - by haxney
    I am planning on storing scans from a mass spectrometer in a MySQL database and would like to know whether storing and analyzing this amount of data is remotely feasible. I know performance varies wildly depending on the environment, but I'm looking for the rough order of magnitude: will queries take 5 days or 5 milliseconds?

    Input format

    Each input file contains a single run of the spectrometer; each run is comprised of a set of scans, and each scan has an ordered array of datapoints. There is a bit of metadata, but the majority of the file is comprised of arrays of 32- or 64-bit ints or floats.

    Host system

        |----------------+-------------------------------|
        | OS             | Windows 2008 64-bit           |
        | MySQL version  | 5.5.24 (x86_64)               |
        | CPU            | 2x Xeon E5420 (8 cores total) |
        | RAM            | 8GB                           |
        | SSD filesystem | 500 GiB                       |
        | HDD RAID       | 12 TiB                        |
        |----------------+-------------------------------|

    There are some other services running on the server using negligible processor time.

    File statistics

        |------------------+--------------|
        | number of files  | ~16,000      |
        | total size       | 1.3 TiB      |
        | min size         | 0 bytes      |
        | max size         | 12 GiB       |
        | mean             | 800 MiB      |
        | median           | 500 MiB      |
        | total datapoints | ~200 billion |
        |------------------+--------------|

    The total number of datapoints is a very rough estimate.

    Proposed schema

    I'm planning on doing things "right" (i.e. normalizing the data like crazy) and so would have a runs table, a spectra table with a foreign key to runs, and a datapoints table with a foreign key to spectra.

    The 200 Billion datapoint question

    I am going to be analyzing across multiple spectra and possibly even multiple runs, resulting in queries which could touch millions of rows. Assuming I index everything properly (which is a topic for another question) and am not trying to shuffle hundreds of MiB across the network, is it remotely plausible for MySQL to handle this?

    UPDATE: additional info

    The scan data will be coming from files in the XML-based mzML format. The meat of this format is in the <binaryDataArrayList> elements where the data is stored. Each scan produces >= 2 <binaryDataArray> elements which, taken together, form a 2-dimensional (or more) array of the form [[123.456, 234.567, ...], ...]. These data are write-once, so update performance and transaction safety are not concerns. My naïve plan for a database schema is:

    runs table

        | column name | type        |
        |-------------+-------------|
        | id          | PRIMARY KEY |
        | start_time  | TIMESTAMP   |
        | name        | VARCHAR     |
        |-------------+-------------|

    spectra table

        | column name    | type        |
        |----------------+-------------|
        | id             | PRIMARY KEY |
        | name           | VARCHAR     |
        | index          | INT         |
        | spectrum_type  | INT         |
        | representation | INT         |
        | run_id         | FOREIGN KEY |
        |----------------+-------------|

    datapoints table

        | column name | type        |
        |-------------+-------------|
        | id          | PRIMARY KEY |
        | spectrum_id | FOREIGN KEY |
        | mz          | DOUBLE      |
        | num_counts  | DOUBLE      |
        | index       | INT         |
        |-------------+-------------|

    Is this reasonable?
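
    A minimal DDL sketch of the proposed schema (column names follow the tables above; the types, engine and the index on (spectrum_id, mz) are assumptions added for illustration):

        CREATE TABLE runs (
            id         INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            start_time TIMESTAMP,
            name       VARCHAR(255)
        ) ENGINE=InnoDB;

        CREATE TABLE spectra (
            id             BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            run_id         INT UNSIGNED NOT NULL,
            name           VARCHAR(255),
            `index`        INT,
            spectrum_type  INT,
            representation INT,
            FOREIGN KEY (run_id) REFERENCES runs (id)
        ) ENGINE=InnoDB;

        CREATE TABLE datapoints (
            id          BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            spectrum_id BIGINT UNSIGNED NOT NULL,
            mz          DOUBLE NOT NULL,
            num_counts  DOUBLE NOT NULL,
            `index`     INT,
            FOREIGN KEY (spectrum_id) REFERENCES spectra (id),
            KEY idx_spectrum_mz (spectrum_id, mz)   -- typical access path: points of a spectrum in an m/z range
        ) ENGINE=InnoDB;

    At ~200 billion rows, the secondary index on datapoints alone runs to terabytes, which is worth keeping in mind when sizing the storage.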

    Read the article

  • mysql count rows and group them by month

    - by user2661296
    I have a table called cc_calls containing many call records. I want to count them and group the counts by month. There is a timestamp column called starttime that I can use to extract the month, and the count should be limited to the last 12 months. The results should look like:

        Month      Count
        January    768768
        February   876786
        March      987979
        April      765765
        May        898797
        June       876876
        July       786575
        August     765765
        September  689787
        October    765879
        November   897989
        December   876876

    Can anyone guide me or show me the MySQL query I need to get this result?
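
    A sketch of one way to do it, assuming starttime is a DATETIME/TIMESTAMP column as described:

        SELECT MONTHNAME(starttime) AS Month,
               COUNT(*)             AS `Count`
        FROM cc_calls
        WHERE starttime >= DATE_SUB(CURDATE(), INTERVAL 12 MONTH)
        GROUP BY YEAR(starttime), MONTH(starttime)
        ORDER BY YEAR(starttime), MONTH(starttime);

    Grouping by year and month (rather than by month name alone) keeps calls from the same month of different years in separate rows.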

    Read the article

  • Stop invalid data in an attribute with a foreign key constraint using triggers?

    - by Eternal Learner
    How do I write a trigger that checks whether the value inserted into a table's foreign-key column actually exists in the referenced table? If it exists, no action should be taken; otherwise the trigger should delete the inserted tuple. For example, consider two tables R(A int PRIMARY KEY) and S(B int PRIMARY KEY, A int FOREIGN KEY REFERENCES R(A)). I have written a trigger like this:

        Create Trigger DelS BEFORE INSERT ON S
        FOR EACH ROW
        BEGIN
            Delete FROM S where New.A <> ( Select * from R;) );
        End;

    I am sure I am making a mistake in the inner subquery within the BEGIN and END blocks of the trigger. My question is: how do I write such a trigger?
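
    A minimal sketch of one way to enforce this, assuming MySQL 5.5 or later (an assumption; the post does not name the DBMS version): a BEFORE INSERT trigger cannot delete the row being inserted, but it can abort the insert with SIGNAL when the referenced value is missing.

        DELIMITER //
        CREATE TRIGGER chk_S_fk BEFORE INSERT ON S
        FOR EACH ROW
        BEGIN
            IF NOT EXISTS (SELECT 1 FROM R WHERE R.A = NEW.A) THEN
                SIGNAL SQLSTATE '45000'
                    SET MESSAGE_TEXT = 'S.A does not reference an existing R.A';
            END IF;
        END//
        DELIMITER ;

    Rejecting the row up front avoids ever having to delete it afterwards; a declared FOREIGN KEY constraint (where the engine supports it) achieves the same thing without a trigger.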

    Read the article

  • Database concurrency issue in .NET application

    - by MC.
    If userA deletes OrderA while userB is modifying OrderA, and userB then saves OrderA, there is no longer an order in the database to be updated. My problem is that there is no error! SqlDataAdapter.Update succeeds and returns 1, indicating a record was modified, when this is not true. Does anybody know how this is supposed to work? Thanks.
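
    A sketch of the usual optimistic-concurrency setup, assuming an adapter and an ordersTable DataTable (illustrative names, not from the original post): have the generated UPDATE compare the original column values in its WHERE clause, so a deleted or changed row matches nothing and the adapter raises a concurrency error instead of reporting success.

        using System.Data;
        using System.Data.SqlClient;

        var builder = new SqlCommandBuilder(adapter)
        {
            // Generated UPDATE/DELETE commands compare every original value in WHERE,
            // so a row deleted or modified by someone else matches zero rows.
            ConflictOption = ConflictOption.CompareAllSearchableValues
        };

        try
        {
            int affected = adapter.Update(ordersTable);
        }
        catch (DBConcurrencyException)
        {
            // Zero rows matched: the order was deleted or changed underneath userB.
            // Reload the row and let the user decide what to do.
        }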

    Read the article

  • SQL trigger to delete rows from database

    - by wpearse
    I have an industrial system that logs alarms to a remotely hosted MySQL database. The industrial system inserts a new row into a table named 'alarms' whenever a property of the alarm changes (such as the time the alarm was activated, acknowledged or switched off). I don't want multiple records for each alarm, so I have set up two database triggers. The first trigger mirrors each new record to a second table, creating/updating rows as required:

        CREATE TRIGGER `mirror_alarms` BEFORE INSERT ON `alarms`
        FOR EACH ROW
            INSERT INTO `alarm_display` (Tag,...,OffTime)
            VALUES (new.Tag,...,new.OffTime)
            ON DUPLICATE KEY UPDATE OnDate=new.OnDate,...,OffTime=new.OffTime

    The second trigger should execute after the first and (ideally) delete all rows from the alarms table. (I used the Tag property of the alarm because the Tag property never changes, although I suspect I could just use a 'DELETE FROM alarms WHERE 1' statement to the same effect.)

        CREATE TRIGGER `remove_alarms` AFTER INSERT ON `alarms`
        FOR EACH ROW
            DELETE FROM alarms WHERE Tag=new.Tag

    My problem is that the second trigger doesn't appear to run, or if it does, it doesn't delete any rows from the database. So here's the question: why doesn't my second trigger do what I expect it to do?
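
    For context (a general MySQL restriction, not something stated in the original post): a trigger is not allowed to modify the table that fired it, so an AFTER INSERT trigger on alarms cannot DELETE FROM alarms. A purge usually has to run outside the trigger; a minimal sketch using the event scheduler, assuming it is enabled and that a periodic sweep is acceptable:

        -- Sketch only: requires SET GLOBAL event_scheduler = ON; the interval is illustrative.
        CREATE EVENT purge_mirrored_alarms
        ON SCHEDULE EVERY 5 MINUTE
        DO
            DELETE FROM alarms;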

    Read the article

  • sql data source

    - by George
    I have a table (EmployeeID, EmployeeName, ManagerID). How can I create a SqlDataSource whose results also include the ManagerName, taken from the EmployeeName of the row where EmployeeID = ManagerID? In my GridView, after dragging in a DropDownList, what bindings should I set up to display the ManagerName? Is it possible to do this without writing custom SELECT, INSERT, DELETE and UPDATE commands? If not, what are the steps I need to take to write the whole thing, i.e. a custom grid and source? Thank you very much.
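
    A sketch of the SELECT side, assuming the table is named Employee (the original post does not name it): a self join resolves the manager's name, and the statement can serve as the SqlDataSource's SelectCommand.

        SELECT e.EmployeeID,
               e.EmployeeName,
               e.ManagerID,
               m.EmployeeName AS ManagerName
        FROM Employee AS e
        LEFT JOIN Employee AS m
            ON m.EmployeeID = e.ManagerID;

    With a joined SelectCommand like this, the automatically generated INSERT/UPDATE/DELETE commands generally no longer apply, so those would need to be written against the base table.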

    Read the article

  • Can I have a CASE statement within a WHILE loop?

    - by John
    This is what I'm doing:

        while (@counter < 3 and @newBalance > 0)
        begin
            CASE
                when @counter = 1 then ( @monFee1 = @monthlyFee, @newBalance = @newBalance - @fee)
                when @counter = 2 then ( @monFee2 = @monthlyFee, @newBalance = @newBalance - @fee)
            END
            @counter = @counter + 1
        end

    I get this error: Incorrect syntax near the keyword 'CASE'. No idea why. Please help!
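
    In T-SQL, CASE is an expression that yields a value, not a control-of-flow statement, so it cannot stand alone inside a loop. A sketch of the same logic rewritten with IF/ELSE (the variable declarations are assumed to exist elsewhere):

        WHILE (@counter < 3 AND @newBalance > 0)
        BEGIN
            IF @counter = 1
            BEGIN
                SET @monFee1 = @monthlyFee;
                SET @newBalance = @newBalance - @fee;
            END
            ELSE IF @counter = 2
            BEGIN
                SET @monFee2 = @monthlyFee;
                SET @newBalance = @newBalance - @fee;
            END

            SET @counter = @counter + 1;
        END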

    Read the article

  • how to save html to a database field

    - by ooo
    I have a tiny-editor web page where my users can use the editor, and I am saving the HTML it produces into my database. I am having issues saving this HTML: for example, if a name contains an apostrophe ('), or if there are other HTML characters such as < or >, my code seems to blow up on the INSERT. Are there any best practices for taking arbitrary HTML and persisting it to a DB field without worrying about specific characters?
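
    Breaking on apostrophes usually points to the HTML being concatenated into the SQL string. A sketch of a parameterized insert instead, assuming SQL Server and a table Pages(Id, Html NVARCHAR(MAX)), both illustrative names not taken from the original post:

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("INSERT INTO Pages (Html) VALUES (@html)", conn))
        {
            // The parameter carries the raw HTML; no manual escaping of ' or < is needed,
            // and this also avoids SQL injection from user-supplied markup.
            cmd.Parameters.Add("@html", SqlDbType.NVarChar, -1).Value = htmlFromEditor;
            conn.Open();
            cmd.ExecuteNonQuery();
        }

    The markup should still be encoded or sanitized when it is rendered back out, but that is a display concern rather than a storage one.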

    Read the article

  • How can I get the table names in a database and a table's column names?

    - by Phsika
    How can I get the table names in a database, and how can I get any table's column names? I tried:

        SELECT Col.COLUMN_NAME, Col.DATA_TYPE
        FROM INFORMATION_SCHEMA.COLUMNS AS Col
        LEFT OUTER JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE AS Usg
            ON Col.TABLE_NAME = Usg.TABLE_NAME AND Col.COLUMN_NAME = Usg.COLUMN_NAME
        LEFT OUTER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS AS Con
            ON Usg.CONSTRAINT_NAME = Con.CONSTRAINT_NAME
        WHERE Col.TABLE_NAME = 'Addresses_Temp'
          AND Con.Constraint_TYPE = 'PRIMARY KEY'

    But it returns no data :(
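
    The basic catalog lookups are simpler than the query above; a sketch (the PRIMARY KEY filter in the WHERE clause, which turns the outer join into an effective inner join, is a separate issue):

        -- all user tables in the current database
        SELECT TABLE_NAME
        FROM INFORMATION_SCHEMA.TABLES
        WHERE TABLE_TYPE = 'BASE TABLE';

        -- all columns of one table
        SELECT COLUMN_NAME, DATA_TYPE
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_NAME = 'Addresses_Temp'
        ORDER BY ORDINAL_POSITION;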

    Read the article

  • Migrating Data to MSSQL 2008

    - by Fred Clown
    I am trying to migrate data from an Informix database to MSSQL 2008. I've got quite a lot of data to move. I've been trying multiple methods to get the data over, and so far SqlBulkCopy in multiple chunks seems to be the fastest that I can find. Does anyone know of a faster means of getting the data over? I'm trying to cut down on the transfer time so that on my cut-over date I don't run out of time to do the full cut-over. Thanks.
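
    For reference, a sketch of a streamed SqlBulkCopy, assuming an open IDataReader over the Informix source (the option values are illustrative starting points, not tuned figures from the original post):

        // Streams rows from the source reader into SQL Server without buffering them all in memory.
        using (var bulk = new SqlBulkCopy(sqlServerConnectionString, SqlBulkCopyOptions.TableLock))
        {
            bulk.DestinationTableName = "dbo.TargetTable";
            bulk.BatchSize = 10000;        // rows sent per batch
            bulk.BulkCopyTimeout = 0;      // no timeout for long transfers
            bulk.WriteToServer(informixReader);
        }

    TableLock enables minimally logged bulk loading when the target database's recovery model allows it; dropping nonclustered indexes on the target table before the load and recreating them afterwards is another common way to shave transfer time.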

    Read the article

  • Avoiding repeated subqueries when 'WITH' is unavailable

    - by EloquentGeek
    MySQL v5.0.58. Tables, with foreign key constraints etc. and other non-relevant details omitted for brevity:

        CREATE TABLE `fixture` (
            `id` int(11) NOT NULL auto_increment,
            `competition_id` int(11) NOT NULL,
            `name` varchar(50) NOT NULL,
            `scheduled` datetime default NULL,
            `played` datetime default NULL,
            PRIMARY KEY (`id`)
        );

        CREATE TABLE `result` (
            `id` int(11) NOT NULL auto_increment,
            `fixture_id` int(11) NOT NULL,
            `team_id` int(11) NOT NULL,
            `score` int(11) NOT NULL,
            `place` int(11) NOT NULL,
            PRIMARY KEY (`id`)
        );

        CREATE TABLE `team` (
            `id` int(11) NOT NULL auto_increment,
            `name` varchar(50) NOT NULL,
            PRIMARY KEY (`id`)
        );

    Where:

        - A draw will set result.place to 0
        - result.place will otherwise contain an integer representing first place, second place, and so on

    The task is to return a string describing the most recently played result in a given competition for a given team. The format should be "def Team X,Team Y" if the given team was victorious, "lost to Team X" if the given team lost, and "drew with Team X" if there was a draw. And yes, in theory there could be more than two teams per fixture (though 1 v 1 will be the most common case). This works, but feels really inefficient:

        SELECT CONCAT(
            (SELECT CASE `result`.`place`
                        WHEN 0 THEN "drew with"
                        WHEN 1 THEN "def"
                        ELSE "lost to"
                    END
             FROM `result`
             WHERE `result`.`fixture_id` =
                   (SELECT `fixture`.`id`
                    FROM `fixture`
                    LEFT JOIN `result` ON `result`.`fixture_id` = `fixture`.`id`
                    WHERE `fixture`.`competition_id` = 2 AND `result`.`team_id` = 1
                    ORDER BY `fixture`.`played` DESC
                    LIMIT 1)
               AND `result`.`team_id` = 1),
            ' ',
            (SELECT GROUP_CONCAT(`team`.`name`)
             FROM `fixture`
             LEFT JOIN `result` ON `result`.`fixture_id` = `fixture`.`id`
             LEFT JOIN `team` ON `result`.`team_id` = `team`.`id`
             WHERE `fixture`.`id` =
                   (SELECT `fixture`.`id`
                    FROM `fixture`
                    LEFT JOIN `result` ON `result`.`fixture_id` = `fixture`.`id`
                    WHERE `fixture`.`competition_id` = 2 AND `result`.`team_id` = 1
                    ORDER BY `fixture`.`played` DESC
                    LIMIT 1)
               AND `team`.`id` != 1)
        )

    Have I missed something really obvious, or should I simply not try to do this in one query? Or does the current difficulty reflect a poor table design?
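
    On MySQL 5.0, where WITH is unavailable, one way to avoid repeating the "latest fixture" subquery is to compute it once in a derived table and join everything else to it. A sketch using the same example values as above (competition 2, team 1); the alias names are assumptions:

        SELECT CONCAT(
                   CASE my_result.place
                       WHEN 0 THEN 'drew with'
                       WHEN 1 THEN 'def'
                       ELSE 'lost to'
                   END,
                   ' ',
                   GROUP_CONCAT(other_team.name)
               ) AS summary
        FROM (SELECT f.id
              FROM fixture f
              JOIN result r ON r.fixture_id = f.id
              WHERE f.competition_id = 2 AND r.team_id = 1
              ORDER BY f.played DESC
              LIMIT 1) AS latest
        JOIN result my_result    ON my_result.fixture_id = latest.id AND my_result.team_id = 1
        JOIN result other_result ON other_result.fixture_id = latest.id AND other_result.team_id <> 1
        JOIN team other_team     ON other_team.id = other_result.team_id
        GROUP BY my_result.place;

    The derived table is evaluated once, so the ORDER BY ... LIMIT 1 lookup is no longer duplicated in two correlated subqueries.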

    Read the article

  • T-SQL Query, combine columns from multiple rows into single column

    - by Shayne
    I have seen some examples of what I am trying to do using COALESCE and FOR XML (which seems like the better solution); I just can't quite get the syntax right. Here is what I have (I will shorten the fields to only the key ones):

        Table             Fields
        ----------------  ----------------------------------------------------------------
        Requisition       ID, Number
        IssuedPO          ID, Number
        Job               ID, Number
        Job_Activity      ID, JobID (fkey)
        RequisitionItems  ID, RequisitionID (fkey), IssuedPOID (fkey), Job_ActivityID (fkey)

    I need a query that will list ONE Requisition per line with its associated Jobs and IssuedPOs. (The requisition numbers start with "R-" and the job numbers start with "J-".) Example:

        R-123 | "PO1; PO2; PO3" | "J-12345; J-6780"

    Sure thing Adam! Here is a query that returns multiple rows. I have to use outer joins, since not all Requisitions have RequisitionItems that are assigned to Jobs and/or IssuedPOs (in that case their fkey IDs would just be null, of course).

        SELECT DISTINCT Requisition.Number, IssuedPO.Number, Job.Number
        FROM Requisition
        INNER JOIN RequisitionItem ON RequisitionItem.RequisitionID = Requisition.ID
        LEFT OUTER JOIN Job_Activity ON RequisitionItem.JobActivityID = Job_Activity.ID
        LEFT OUTER JOIN Job ON Job_Activity.JobID = Job.ID
        LEFT OUTER JOIN IssuedPO ON RequisitionItem.IssuedPOID = IssuedPO.ID
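
    A sketch of the FOR XML PATH pattern for one of the two concatenated columns (the Job list would follow the same shape); the table and column names come from the question, but the exact correlation keys are assumptions:

        SELECT r.Number AS RequisitionNumber,
               STUFF((SELECT '; ' + po.Number
                      FROM RequisitionItems ri
                      INNER JOIN IssuedPO po ON po.ID = ri.IssuedPOID
                      WHERE ri.RequisitionID = r.ID
                      FOR XML PATH('')), 1, 2, '') AS IssuedPOs
        FROM Requisition r;

    STUFF(..., 1, 2, '') strips the leading "; " that the correlated subquery produces before the first item.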

    Read the article

  • sqlite3.OperationalError: database is locked - non-threaded application

    - by James C
    Hi, I have a Python application which throws the standard sqlite3.OperationalError: database is locked error. I have looked around the internet and could not find any solution which worked (please note that there is no multiprocessing/threading going on, and as you can see I have tried raising the timeout parameter). The sqlite file is stored on the local hard drive. The following function is one of many which access the sqlite database; it runs fine the first time it is called, but throws the above error the second time (it is called as part of a for loop in another function):

        def update_index(filepath):
            path = get_setting('Local', 'web')
            stat = os.stat(filepath)
            modified = stat.st_mtime
            index_file = get_setting('Local', 'index')
            connection = sqlite3.connect(index_file, 30)
            cursor = connection.cursor()
            head, tail = os.path.split(filepath)
            cursor.execute('UPDATE hwlive SET date=? WHERE path=? AND name=?;',
                           (modified, head, tail))
            connection.commit()
            connection.close()

    Many thanks.
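
    One pattern that often avoids the lock (a sketch, not taken from the original code): open a single connection outside the loop, pass it in, and let a with-block commit after each statement so no connection is left holding the write lock when the next call runs.

        import os
        import sqlite3

        def update_index(connection, filepath):
            modified = os.stat(filepath).st_mtime
            head, tail = os.path.split(filepath)
            with connection:  # commits on success, rolls back on error
                connection.execute(
                    'UPDATE hwlive SET date=? WHERE path=? AND name=?;',
                    (modified, head, tail))

        # Usage sketch: open once, reuse for every file in the loop.
        # conn = sqlite3.connect(get_setting('Local', 'index'), timeout=30)
        # for fp in files:
        #     update_index(conn, fp)
        # conn.close()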

    Read the article

  • How to do this query?

    - by Damiano
    Hello everybody! I have a MySQL table with these columns:

        ID (auto-increment)
        ID_BOOK (int)
        PRICE (double)
        DATA (date)

    I know two ID_BOOK values, for example 1 and 2. QUERY: I have to extract all the PRICE rows (of ID_BOOK=1 and ID_BOOK=2) where DATA is the same for both! Table example:

        1  1  10.00  2010-05-16
        2  1  11.00  2010-05-15
        3  1  12.00  2010-05-14
        4  2  18.00  2010-05-16
        5  2  11.50  2010-05-15

    Result example:

        1  1  10.00  2010-05-16
        4  2  18.00  2010-05-16
        2  1  11.00  2010-05-15
        5  2  11.50  2010-05-15

    ID_BOOK=2 has no 2010-05-14 row, so I skip it. Thank you so much!
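
    A sketch of one way to do it, assuming the table is named book_prices (the original post does not name it): a self join keeps only the dates that appear for both books.

        SELECT t1.ID, t1.ID_BOOK, t1.PRICE, t1.DATA
        FROM book_prices AS t1
        JOIN book_prices AS t2
            ON  t2.DATA = t1.DATA
            AND t2.ID_BOOK <> t1.ID_BOOK
        WHERE t1.ID_BOOK IN (1, 2)
          AND t2.ID_BOOK IN (1, 2)
        ORDER BY t1.DATA DESC, t1.ID_BOOK;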

    Read the article

  • MySQL: Efficient Blobbing?

    - by feklee
    I'm dealing with blobs of up to - I estimate - about 100 kilobytes in size. The data is compressed already.

        Storage engine: InnoDB on MySQL 5.1
        Frontend: PHP (Symfony with Propel ORM)

    Some questions:

    1. I've read somewhere that it's not good to update blobs, because it leads to reallocation, fragmentation, and thus bad performance. Is that true? Any reference on this?

    2. Initially the blobs get constructed by appending data chunks. Each chunk is up to 16 kilobytes in size. Is it more efficient to use a separate chunk table instead, for example with fields as below?

        parent_id, position, chunk

    Then, to get the entire blob, one would do something like:

        SELECT GROUP_CONCAT(chunk ORDER BY position)
        FROM chunks
        WHERE parent_id = 187

    The result would be used in a PHP script.

    3. Is there any difference between the types of blobs, aside from the size needed for metadata, which should be negligible?
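
    A minimal DDL sketch of the chunk-table variant from question 2 (the names and the exact column type are illustrative, not from the original post):

        CREATE TABLE chunks (
            parent_id INT UNSIGNED NOT NULL,
            position  INT UNSIGNED NOT NULL,
            chunk     BLOB NOT NULL,             -- one appended piece, up to 16 KB here
            PRIMARY KEY (parent_id, position)    -- keeps one blob's chunks clustered together
        ) ENGINE=InnoDB;

    If the GROUP_CONCAT reassembly is used, note that group_concat_max_len defaults to only 1024 bytes and would have to be raised well beyond the largest expected blob.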

    Read the article

  • Table for each region in MySQL

    - by King Wu
    There are four regions with more than one million records in total. Should I create one table with a region column, or a table for each region and combine them to get the top ranks? If I combine all four regions, none of my columns will be unique, so I will need to add an id column for my primary key; otherwise name, accountId and characterId would be candidate keys. Or should I just add an id column anyway? Table:

        | name | accountId | iconId | level | characterId | updateDate |
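
    A sketch of the single-table variant with a region column added; the column types, the unique key and the index are assumptions for illustration, not from the original post:

        CREATE TABLE ranking (
            id          INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            region      TINYINT UNSIGNED NOT NULL,   -- 1..4
            name        VARCHAR(50) NOT NULL,
            accountId   INT UNSIGNED NOT NULL,
            iconId      INT UNSIGNED NOT NULL,
            level       INT UNSIGNED NOT NULL,
            characterId INT UNSIGNED NOT NULL,
            updateDate  DATETIME NOT NULL,
            UNIQUE KEY uq_region_character (region, accountId, characterId),
            KEY idx_region_level (region, level)     -- supports per-region "top rank" queries
        ) ENGINE=InnoDB;

    With the region in the index prefix, a per-region leaderboard stays a simple range scan, and a few million rows is comfortably within what one indexed InnoDB table handles.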

    Read the article

  • How To Join Tables from Two Different Contexts with LINQ2SQL?

    - by RSolberg
    I have two data contexts in my application (different databases) and need to be able to query a table in context A with a right join on a table in context B. How do I go about doing this in LINQ2SQL?

    Why? We are using a SaaS product for tracking our time, projects, etc. and would like to send new service requests to this product to prevent our team from duplicating data entry.

    Context A: This DB stores service request information. It is a third-party DB and we are not able to make changes to its structure, as that could have unintended, non-supportable consequences downstream.

    Context B: This DB stores the "log" data of service requests that have been processed. My team and I have full control over this DB's structure. Unprocessed service requests should find their way into this DB, and another process will identify them as not yet processed and send the records to the SaaS product.

    This is the query that I am looking to modify. I was able to do a !list.Contains(c.swHDCaseId) initially, but this cannot handle more than 2100 items. Is there a way to add a join to the other context?

        var query = (from c in contextA.Cases
                     where monitoredInboxList.Contains(c.INBOXES.inboxName)
                     select new
                     {
                         //setup fields here...
                     });
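
    A sketch of one workaround (not from the original post): LINQ to SQL cannot translate a join across two DataContexts into a single SQL statement, so one option is to pull the key sets from each context separately and take the difference in memory, which also sidesteps the ~2100 parameter limit that a large Contains() list runs into. CaseLogs is an assumed name for the context B table.

        // Candidate case ids from context A (same filter as the original query).
        var candidateIds = (from c in contextA.Cases
                            where monitoredInboxList.Contains(c.INBOXES.inboxName)
                            select c.swHDCaseId).ToList();

        // Ids already logged in context B.
        var loggedIds = contextB.CaseLogs
                                .Select(l => l.swHDCaseId)
                                .ToList();

        // Service requests not yet processed; fetch their full rows afterwards,
        // batching the id list if it is large.
        var unprocessedIds = candidateIds.Except(loggedIds).ToList();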

    Read the article
