Search Results

Search found 28043 results on 1122 pages for 'sql replication'.

Page 665/1122

  • Database design advice needed.

    - by user346271
    Hi all, I'm a lone developer for a telecoms company and am after some database design advice from anyone with a bit of time to answer. I insert ~2 million rows each day into one table; these tables are then archived and compressed on a monthly basis. Each monthly table contains ~15,000,000 rows, and that figure is increasing month on month. For every insert above I combine the data from rows which belong together and write it to another "correlated" table. This table is currently not being archived, as I need to make sure I never miss an update to it (hope that makes sense), although in general this information should remain fairly static after a couple of days of processing. All of the above works perfectly. However, my company now wishes to run some stats against this data, and these tables are getting too large to return results in what would be deemed a reasonable time, even with the appropriate indexes set. So after all the above my question is quite simple: should I write a script which groups the data from my correlated table into smaller tables, or should I store the query result sets in something like memcached? I'm already using MySQL's query cache, but because I have limited control over how long the data is stored, it's not working ideally. The main advantages I can see of using something like memcached: no blocking on my correlated table after the query has been cached; greater flexibility in sharing the collected data between the backend collector and front-end processor (i.e. custom reports could be written in the backend and their results stored in the cache under a key which is then shared with anyone who wants to see that report's data); and redundancy and scalability if we start sharing this data with a large number of customers. The main disadvantage: data is not persistent if the machine is rebooted or the cache is flushed. The main advantages of using MySQL: persistent data, and fewer code changes (although adding something like memcached is trivial anyway). The main disadvantages of using MySQL: I have to define table templates every time I want to store a new set of grouped data; I have to write a program which loops through the correlated data and fills these new tables; and it will potentially still grow slower as the data continues to be filled. Apologies for quite a long question; it helped me to write down these thoughts anyway, and any advice/help/experience with dealing with this sort of problem would be greatly appreciated. Many thanks. Alan
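
    One way to picture the "smaller tables" route: pre-aggregate into a summary table on a schedule and point the reports at that. A minimal sketch, assuming a hypothetical correlated table called correlated_data with a created_at column (both names are illustrative, not from the original schema):

        CREATE TABLE stats_daily (
            stat_date    DATE        NOT NULL,
            metric       VARCHAR(64) NOT NULL,
            metric_value BIGINT      NOT NULL,
            PRIMARY KEY (stat_date, metric)
        );

        -- refresh the last few days from cron or a MySQL EVENT, once the
        -- correlated rows have settled; REPLACE keeps the run idempotent
        REPLACE INTO stats_daily (stat_date, metric, metric_value)
        SELECT DATE(created_at), 'rows_correlated', COUNT(*)
        FROM correlated_data
        WHERE created_at >= CURRENT_DATE - INTERVAL 3 DAY
        GROUP BY DATE(created_at);

    The stats queries (and anything cached in memcached) then read the small stats_daily table instead of scanning the 15M-row monthly tables.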

    Read the article

  • Eclipselink and update trigger on multiple access to the database

    - by Raven
    Hi, in my project I have a database which many clients connect to. Concurrent access and writing work well. The problem now is that I don't want to reload the data from the database every second just to always have the current state of the data. Does EclipseLink provide a trigger mechanism to (automatically?) reload the data when the database is changed? How would one use this trigger? Thanks!

    Read the article

  • How would I associate a "Note" class to 4+ classes without creating a lookup table for each association?

    - by Gthompson83
    I'm creating a project tasklist application. I have project, section, task, and issue classes, and would like to use one class to add simple notes to any object instance of those classes. The task and issue tables both use a standard identity field as a primary key. The section table has a two-field primary key. The project table has a single int primary key defined by the user. Is there a way to associate the note class with each of these without using a separate lookup table for each class? I'm not so sure my original idea is a decent way to implement this. I considered the following (each variable mapping to a field in the notes table):
        Private _NoteId As Integer
        Private _ProjectId As Integer
        Private _SectionId As Integer
        Private _SectionId2 As Integer
        Private _TaskId As Integer
        Private _IssueId As Integer
        Private _Note As String
        Private _UserId As Guid
    Then I would be able to write separate methods (getProjectNotes, getTaskNotes) to get notes attached to each class. I started writing this a few weeks ago but got pulled away before I could finish. When revisiting this code today my first thought was "this is retarded". Thoughts on drawbacks to this design?

    Read the article

  • 100+ tables to be joined

    - by deian
    Hi guys, I was wondering if anyone ever had a chance to measure how 100 joined tables would perform. Each table would have an ID column with a primary index, and all tables are 1:1 related. It is a common problem within many data entry applications where we need to collect 1000+ data points. One solution would be to have one big table with 1000+ columns; the alternative would be to split them into multiple tables and join them when necessary. So perhaps the more realistic question is how 30 tables (30 columns each) would behave with a multi-table join. 500K-1M rows should be the expected size of the tables. Cheers

    Read the article

  • Query to find duplicate items in 2 table columns

    - by Rico
    I have this table:
        Antecedent | Consequent
        I1         | I2
        I1         | I1,I2,I3
        I1         | I4,I1,I3,I4
        I1,I2      | I1
        I1,I2      | I1,I4
        I1,I2      | I1,I3
        I1,I4      | I3,I2
        I1,I2,I3   | I1,I4
        I1,I3,I4   | I4
    As you can see it's pretty messed up. Is there any way I can remove rows where an item in the consequent also exists in the antecedent (within the same row)? For example:
    INPUT:
        Antecedent | Consequent
        I1         | I2
        I1         | I1,I2,I3    <---- DELETE since I1 exists in antecedent
        I1         | I4,I1,I3,I4 <---- DELETE since I1 exists in antecedent
        I1,I2      | I1          <---- DELETE since I1 exists in antecedent
        I1,I2      | I1,I4       <---- DELETE since I1 exists in antecedent
        I1,I2      | I1,I3       <---- DELETE since I1 exists in antecedent
        I1,I4      | I3,I2
        I1,I2,I3   | I1,I4       <---- DELETE since I1 exists in antecedent
        I1,I3,I4   | I4          <---- DELETE since I4 exists in antecedent
    OUTPUT:
        Antecedent | Consequent
        I1         | I2
        I1,I4      | I3,I2
    Is there any way I can do that with a query?
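
    A sketch of one way to do it in MySQL: split the Consequent list with SUBSTRING_INDEX and test each piece against the Antecedent with FIND_IN_SET. The table name rules is hypothetical, and the inline numbers list only covers Consequent lists of up to 5 items:

        DELETE FROM rules
        WHERE EXISTS (
            SELECT 1
            FROM (SELECT 1 AS n UNION ALL SELECT 2 UNION ALL SELECT 3
                  UNION ALL SELECT 4 UNION ALL SELECT 5) nums
            WHERE nums.n <= 1 + LENGTH(rules.Consequent)
                               - LENGTH(REPLACE(rules.Consequent, ',', ''))
              AND FIND_IN_SET(
                    SUBSTRING_INDEX(SUBSTRING_INDEX(rules.Consequent, ',', nums.n), ',', -1),
                    rules.Antecedent) > 0
        );

    Running the same WHERE clause as a SELECT first shows which rows would be removed; on the sample data only I1 | I2 and I1,I4 | I3,I2 should survive.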

    Read the article

  • Select fields containing at least one non-space alphanumeric character

    - by zzapper
    (Sorry, I know this is an old chestnut; I have found similar answers here but not an exact answer.) These are frequent hand-written queries from a console, so what I am looking for is the easiest thing to type:
        SELECT * FROM tbl_loyalty_card WHERE CUSTOMER_ID REGEXP "[0-9A-Z]";
    or
        SELECT * FROM tbl_loyalty_card WHERE LENGTH(CUSTOMER_ID) > 0; -- could match spaces
    Do you have anything quicker to type, even if it's QAD?
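
    If "contains at least one non-space character" is close enough (note it also matches values made up only of punctuation, not strictly alphanumerics), a shorter thing to type is:

        SELECT * FROM tbl_loyalty_card WHERE TRIM(CUSTOMER_ID) <> '';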

    Read the article

  • allowing BBCode but not JavaScript

    - by user1405062
    Hello, I'm trying to get the hang of using BBCode on my normal PHP site (not a forum or anything, just a normal site). I have seen a few posts like this one http://www.pixel2life.com/forums/index.php?/topic/10659-php-bbcode-parser/ which say I need a BBCode parser. I was just wondering, can anyone show me how I would use one? I have a status box where users can update their status, and I would like to allow BBCode but not JavaScript. So when I'm inserting into the db I strip the status like so: $status = mysql_real_escape_string($_POST['status']); $status2 = strip_tags($status); That stops the JavaScript and tags from getting through, but I need the BBCode tags to come through while still blocking the JavaScript. Is there any way to do this? Also, I then just echo it out: echo $status2; But only plain text shows, so I was wondering if anyone knows how to let BBCode through while stopping JavaScript, and could someone show me how to use the BBCode parser? I also need to know how to echo out the BBCode.

    Read the article

  • How can I "merge" or "flatten" results from a query which returns multiple rows into a single result

    - by dsm
    I have a simple query over a table, which returns results like the following:
        id    id_type  id_ref
        2702  5        31
        2702  16       14
        2702  17       3
        2702  40       1
        2702  26       4
    And I would like to merge the results into a single row, for instance:
        id    concatenation
        2702  5,16,17,40,26:31,14,3,1,4
    Is there any way to do this within a trigger? NB: I know I can use a cursor, but I would really prefer not to unless there is no better way.
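
    Assuming MySQL (the post doesn't name the engine), GROUP_CONCAT does this flattening without a cursor; the table name id_refs is hypothetical:

        SELECT id,
               CONCAT(GROUP_CONCAT(id_type SEPARATOR ','), ':',
                      GROUP_CONCAT(id_ref  SEPARATOR ',')) AS concatenation
        FROM id_refs
        GROUP BY id;
        -- add ORDER BY inside GROUP_CONCAT if the two lists must stay in a fixed order,
        -- and raise group_concat_max_len if the result can exceed 1024 characters

    Inside a trigger the same expression can be selected INTO a variable; on SQL Server the equivalent idea is FOR XML PATH('') (or STRING_AGG on 2017+).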

    Read the article

  • Index for wildcard match of end of string

    - by Anders Abel
    I have a table of phone numbers, storing the phone number as varchar(20). I have a requirement to implement searching on both the entire number and only the last part of the number, so a typical query will be:
        SELECT * FROM PhoneNumbers WHERE Number LIKE '%1234'
    How can I put an index on the Number column to make those searches efficient? Is there a way to create an index that sorts the records on the reversed string? Another option might be to reverse the numbers before storing them, which gives queries like:
        SELECT * FROM PhoneNumbers WHERE ReverseNumber LIKE '4321%'
    However, that would require all users of the database to always reverse the string. It might be solved by storing both the normal and reversed number and having the reversed number updated by a trigger on insert/update, but that kind of solution is not very elegant. Any other suggestions?
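
    A sketch of the reversed-column idea that keeps callers unaware of the reversal, assuming MySQL 5.7+ generated columns (a SQL Server computed column or a trigger-maintained column achieves the same thing):

        ALTER TABLE PhoneNumbers
          ADD COLUMN NumberReversed VARCHAR(20)
            GENERATED ALWAYS AS (REVERSE(Number)) STORED,
          ADD INDEX idx_number_reversed (NumberReversed);

        -- a trailing-digits search becomes an index-friendly prefix search
        SELECT *
        FROM PhoneNumbers
        WHERE NumberReversed LIKE CONCAT(REVERSE('1234'), '%');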

    Read the article

  • Date/time query from Access table (last month)

    - by chupeman
    Hello, I am using the query builder from Visual Studio 2008 to extract data from an Access mdb (2003), but I can't make it work with a datetime field. It runs fine in a third-party query app I have, but when I try to implement it in Visual Studio I can't get it to work. What is the correct way to extract last month's data? This is what I have:
        SELECT [Datos].[ID], [Datos].[E-mail Address], [Datos].[ZIP/Postal Code],
               [Datos].[Store], [Datos].[date], [Datos].[gender], [Datos].[age]
        FROM [Datos]
        WHERE ([Datos].[date] =<|Last month|>)
    Any help is appreciated. Thank you
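
    With Access/Jet SQL (an assumption based on the .mdb), "last month" is usually written as a half-open date range built with DateSerial, which removes the need for the <|Last month|> placeholder:

        SELECT [Datos].[ID], [Datos].[E-mail Address], [Datos].[ZIP/Postal Code],
               [Datos].[Store], [Datos].[date], [Datos].[gender], [Datos].[age]
        FROM [Datos]
        WHERE [Datos].[date] >= DateSerial(Year(Date()), Month(Date()) - 1, 1)
          AND [Datos].[date] <  DateSerial(Year(Date()), Month(Date()), 1);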

    Read the article

  • Maintaining stored procedures in source control

    - by dub
    How do you guys maintain your stored procedures? I'd like to keep versions of them for a few different reasons. I also will be setting up cruisecontrol.net and nant this weekend to automate builds. I was thinking about coding something that would generate the create scripts for all tables/sprocs/udf/xml schemas in my development database. Then it would take those scripts and update them in source control every couple hours.... Ideally, I'd like to make this some sort of plugin/module for cruisecontrol.net. Any other ideas?

    Read the article

  • making an SQL date column visible to a Drupal view

    - by Donal
    I'm trying to make a table visible to Views. One of the columns has type date (as opposed to a Unix timestamp). The example I initially tried to copy from is in modules/comment.views.inc in the Views module:
        // timestamp (when comment was posted)
        $data['comments']['timestamp'] = array(
          'title' => t('Post date'),
          'help' => t('Date and time of when the comment was posted.'),
          'field' => array(
            'handler' => 'views_handler_field_date',
            'click sortable' => TRUE,
          ),
          'sort' => array(
            'handler' => 'views_handler_sort_date',
          ),
          'filter' => array(
            'handler' => 'views_handler_filter_date',
          ),
        );
    This makes the dates, which are all in the past year or so, show up as "1 Jan 1970 00:33", so evidently a value of '2010-05-12', for example, is being interpreted as 2010 seconds past 1 Jan 1970 00:00. Can anyone point me to a correct way of exporting date columns? EDIT: I'm following up on some clues found at http://drupal.org/node/476774 .

    Read the article

  • Select rows where column LIKE dictionary word

    - by Gerve
    I have 2 tables:
    Dictionary - contains roughly 36,000 words:
        CREATE TABLE IF NOT EXISTS `dictionary` (
          `word` varchar(255) NOT NULL,
          PRIMARY KEY (`word`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
    Datas - contains roughly 100,000 rows:
        CREATE TABLE IF NOT EXISTS `datas` (
          `ID` int(11) NOT NULL AUTO_INCREMENT,
          `hash` varchar(32) NOT NULL,
          `data` varchar(255) NOT NULL,
          `length` int(11) NOT NULL,
          `time` int(11) NOT NULL,
          PRIMARY KEY (`ID`),
          UNIQUE KEY `hash` (`hash`),
          KEY `data` (`data`),
          KEY `length` (`length`),
          KEY `time` (`time`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=105316 ;
    I would like to somehow select all the rows from datas where the column data contains 1 or more of the words. I understand this is a big ask; it would need to match all of these rows together in every combination possible, so it needs the best optimization. I have tried the query below, but it just hangs for ages:
        SELECT `datas`.*, `dictionary`.`word`
        FROM `datas`, `dictionary`
        WHERE `datas`.`data` LIKE CONCAT('%', `dictionary`.`word`, '%')
          AND LENGTH(`dictionary`.`word`) > 3
        ORDER BY `length` ASC
        LIMIT 15
    I have also tried something similar to the above with a LEFT JOIN and an ON clause that specified the LIKE condition.

    Read the article

  • [C#] How to create a constructor of a class that returns a collection of instances of that class?

    - by codemonkie
    My program has the following class definition:
        public sealed class Subscriber
        {
            private _GetSubscriptionResult subscription;

            public Subscriber(int id)
            {
                using (DataContext dc = new DataContext())
                {
                    this.subscription = dc._GetSubscription(id).SingleOrDefault();
                }
            }
        }
    where _GetSubscription() is a sproc which returns a value of type ISingleResult<_GetSubscriptionResult>. Say I have a List<int> full of 1000 ids and I want to create a collection of subscribers of type List<Subscriber>. How can I do that without calling the constructor in a loop 1000 times? I am trying to avoid opening and closing the DataContext so frequently, since that may stress the database. TIA.

    Read the article

  • getting complete sql query in jython

    - by kdev
    result = sqlstring.executeQuery("select distinct table_name,owner from all_tables")
        rs.append(str(i) + ' , ' + result.getString("table_name") + ' , ' + result.getString("owner"))
    If I want to display the query select * from all_tables or 'select count(*) from all_tables' instead, how can I get the output to display? Please suggest, thanks.

    Read the article

  • CLR: Multi Param Aggregate, Argument not in Final Output?

    - by OMG Ponies
    Why is my delimiter not appearing in the final output? It's initialized to be a comma, but I only get ~5 white spaces between each attribute using:
        SELECT [article_id]
             , dbo.GROUP_CONCAT(0, t.tag_name, ',') AS col
        FROM [AdventureWorks].[dbo].[ARTICLE_TAG_XREF] atx
        JOIN [AdventureWorks].[dbo].[TAGS] t ON t.tag_id = atx.tag_id
        GROUP BY article_id
    The bit for DISTINCT works fine, but it operates within the Accumulate scope... Output:
        article_id | col
        1          | a     a     b     c
    I only have rudimentary C# API knowledge... C# Code:
        using System;
        using System.Data;
        using System.Data.SqlClient;
        using System.Data.SqlTypes;
        using Microsoft.SqlServer.Server;
        using System.Xml.Serialization;
        using System.Xml;
        using System.IO;
        using System.Collections;
        using System.Text;

        [Serializable]
        [SqlUserDefinedAggregate(Format.UserDefined, MaxByteSize = 8000)]
        public struct GROUP_CONCAT : IBinarySerialize
        {
            ArrayList list;
            string delimiter;

            public void Init()
            {
                list = new ArrayList();
                delimiter = ",";
            }

            public void Accumulate(SqlBoolean isDistinct, SqlString Value, SqlString separator)
            {
                delimiter = (separator.IsNull) ? "," : separator.Value;
                if (!Value.IsNull)
                {
                    if (isDistinct)
                    {
                        if (!list.Contains(Value.Value))
                        {
                            list.Add(Value.Value);
                        }
                    }
                    else
                    {
                        list.Add(Value.Value);
                    }
                }
            }

            public void Merge(GROUP_CONCAT Group)
            {
                list.AddRange(Group.list);
            }

            public SqlString Terminate()
            {
                string[] strings = new string[list.Count];
                for (int i = 0; i < list.Count; i++)
                {
                    strings[i] = list[i].ToString();
                }
                return new SqlString(string.Join(delimiter, strings));
            }

            #region IBinarySerialize Members

            public void Read(BinaryReader r)
            {
                int itemCount = r.ReadInt32();
                list = new ArrayList(itemCount);
                for (int i = 0; i < itemCount; i++)
                {
                    this.list.Add(r.ReadString());
                }
            }

            public void Write(BinaryWriter w)
            {
                w.Write(list.Count);
                foreach (string s in list)
                {
                    w.Write(s);
                }
            }

            #endregion
        }

    Read the article

  • Generating a set of files containing dumps of individual tables in a way that guarantees database consistency

    - by intuited
    I'd like to dump a MySQL database in such a way that a file is created for the definition of each table, and another file is created for the data in each table. I'd like this to be done in a way that guarantees database integrity by locking the entire database for the duration of the dump. What is the best way to do this? Similarly, what's the best way to lock the database while restoring a set of these dump files? Edit: I can't assume that mysql will have permission to write to files.

    Read the article

  • How to store data with N columns

    - by iconiK
    I need a way to store an int for N columns. Basically what I have is this:
        Armies:
          ArmyID     - UINT
          UnitCount1 - UINT
          UnitCount2 - UINT
          UnitCount3 - UINT
          UnitCount4 - UINT
          ...
    I can't possibly add a column for each and every unit, so I need a fast way to store the number of each unit in an army (you might have guessed it's for a game by now). Using XML is not an option as it will be dead slow.
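
    The usual normalisation here is one row per unit type rather than one column per unit type; a minimal sketch with illustrative names:

        CREATE TABLE ArmyUnits (
            ArmyID    INT UNSIGNED NOT NULL,
            UnitType  INT UNSIGNED NOT NULL,
            UnitCount INT UNSIGNED NOT NULL,
            PRIMARY KEY (ArmyID, UnitType)
        );

        -- all unit counts for one army, one row per unit type it actually has
        SELECT UnitType, UnitCount FROM ArmyUnits WHERE ArmyID = 42;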

    Read the article

  • Using Linq, how to separate a list into grouped objects by name?

    - by Dr. Zim
    I have a table where a record looks like this:
        varchar(255) Name
        varchar(255) Text
        varchar(255) Value
    Name is the DDL name, Text is what is displayed, and Value is returned upon selection. There are between one and twenty options for each Name. Without iterating through each option like a cursor, is there any way to pull out a list of objects, one for each unique DDL Name, using Linq and C#? A sample of the data:
        Beds   '4 (10)'   4
        Beds   '5 (1)'    5
        Beds   '7 (1)'    7
        Baths  'NA (13)'  NULL
        Baths  '0 (1)'    0
        Baths  '1 (13)'   1
    I was thinking about doing an outer select to get the unique Names, then an inner select to get the list of options for each, then return the set as a List of a set of Lists.

    Read the article

  • count(*) vs count(row-name) - which is more correct?

    - by bread
    Does it make a difference if you do count(*) vs count(row-name) as in these two examples? I have a tendency to always write count(*) because it seems to fit better in my mind with the notion of it being an aggregate function, if that makes sense. But I'm not sure if it's technically best, as I tend to see example code written without the * more often than not.
    count(*):
        select customerid, count(*), sum(price)
        from items_ordered
        group by customerid
        having count(*) > 1;
    vs. count(row-name):
        SELECT customerid, count(customerid), sum(price)
        FROM items_ordered
        GROUP BY customerid
        HAVING count(customerid) > 1;
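
    The practical difference only shows up with NULLs: COUNT(*) counts rows, COUNT(col) counts non-NULL values of col. A tiny throwaway demo:

        CREATE TABLE count_demo (customerid INT, price DECIMAL(10,2));
        INSERT INTO count_demo VALUES (1, 9.99), (1, NULL), (2, 5.00);

        SELECT customerid, COUNT(*) AS all_rows, COUNT(price) AS priced_rows
        FROM count_demo
        GROUP BY customerid;
        -- customerid 1 -> all_rows = 2, priced_rows = 1; customerid 2 -> 1, 1

    In the queries above, customerid is the GROUP BY column and is never NULL within its own group, so count(*) and count(customerid) return the same numbers there.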

    Read the article

  • Inexplicably slow query in MySQL

    - by Brandon M.
    Given this result-set:
        mysql> EXPLAIN SELECT c.cust_name, SUM(l.line_subtotal) FROM customer c
            -> JOIN slip s ON s.cust_id = c.cust_id
            -> JOIN line l ON l.slip_id = s.slip_id
            -> JOIN vendor v ON v.vend_id = l.vend_id WHERE v.vend_name = 'blahblah'
            -> GROUP BY c.cust_name
            -> HAVING SUM(l.line_subtotal) > 49999
            -> ORDER BY c.cust_name;

        id | select_type | table | type   | possible_keys                   | key           | key_len | ref                  | rows | Extra
        1  | SIMPLE      | v     | ref    | PRIMARY,idx_vend_name           | idx_vend_name | 12      | const                | 1    | Using where; Using temporary; Using filesort
        1  | SIMPLE      | l     | ref    | idx_vend_id                     | idx_vend_id   | 4       | csv_import.v.vend_id | 446  |
        1  | SIMPLE      | s     | eq_ref | PRIMARY,idx_cust_id,idx_slip_id | PRIMARY       | 4       | csv_import.l.slip_id | 1    |
        1  | SIMPLE      | c     | eq_ref | PRIMARY,cIndex                  | PRIMARY       | 4       | csv_import.s.cust_id | 1    |
        4 rows in set (0.04 sec)
    I'm a bit baffled as to why the query referenced by this EXPLAIN statement is still taking about a minute to execute. Isn't it true that this query only has to search through 449 rows? Anyone have any idea as to what could be slowing it down so much?
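
    One thing worth trying (a sketch, not a verified fix for this schema) is making the 446-row lookups on line covering, so the vend_id probe can return slip_id and line_subtotal straight from the index instead of doing hundreds of random row reads per vendor:

        -- hypothetical composite index; column names taken from the query above
        ALTER TABLE line ADD INDEX idx_vend_slip_subtotal (vend_id, slip_id, line_subtotal);
        -- re-run EXPLAIN afterwards: 'Using index' in the Extra column for `l` means the scan is covering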

    Read the article

  • MySQL Query to find consecutive available times of variable length

    - by Armaconn
    I have an events table that has user_id, date ('2013-10-01'), time ('04:15:00'), and status_id. What I am looking for is a solution similar to http://stackoverflow.com/questions/2665574/find-consecutive-rows-calculate-duration but I need two additional components: 1) take date into consideration, so 10/1/2013 at 11:00 PM - 10/2/2013 at 3:00 AM counts as consecutive (feel free to just put in a fake date range like '2013-10-01' to '2013-10-31'); and 2) limit output to runs of 4+ consecutive times (each event is 15 minutes and I want to display minimum blocks of an hour, but I would also like to be able to switch this restriction to 1.5 hours or some other duration if possible). SUMMARY - I'm looking for a query that provides the start and end times for a set of events that have the same user_id and status_id and are in a continuous series based on date and time, and which lets me restrict results based on date range and minimum series duration. The output should have: user_id, date_start, time_start, date_end, time_end, status_id, duration.
        CREATE TABLE `events` (
          `event_id` int(11) NOT NULL auto_increment COMMENT 'ID',
          `user_id` int(11) NOT NULL,
          `date` date NOT NULL,
          `time` time NOT NULL,
          `status_id` int(11) default NULL,
          PRIMARY KEY (`event_id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=1568 ;

        INSERT INTO `events` VALUES(1, 101, '2013-08-14', '23:00:00', 2);
        INSERT INTO `events` VALUES(2, 101, '2013-08-14', '23:15:00', 2);
        INSERT INTO `events` VALUES(3, 101, '2013-08-14', '23:30:00', 2);
        INSERT INTO `events` VALUES(4, 101, '2013-08-14', '23:45:00', 2);
        INSERT INTO `events` VALUES(5, 101, '2013-08-15', '00:00:00', 2);
        INSERT INTO `events` VALUES(6, 101, '2013-08-15', '00:15:00', 1);
        INSERT INTO `events` VALUES(7, 500, '2013-08-14', '23:45:00', 1);
        INSERT INTO `events` VALUES(8, 500, '2013-08-15', '00:00:00', 1);
        INSERT INTO `events` VALUES(9, 500, '2013-08-15', '00:15:00', 2);
        INSERT INTO `events` VALUES(10, 500, '2013-08-15', '00:30:00', 2);
        INSERT INTO `events` VALUES(11, 500, '2013-08-15', '00:45:00', 1);
    Desired output:
        row | user_id | date_start   | time_start | date_end     | time_end   | status_id | duration
        1   | 101     | '2013-08-14' | '23:00:00' | '2013-08-15' | '00:15:00' | 2         | 5
        2   | 101     | '2013-08-15' | '00:00:15' | '2013-08-15' | '00:30:00' | 1         | 1
        3   | 500     | '2013-08-14' | '00:23:45' | '2013-08-15' | '00:15:00' | 1         | 2
        4   | 500     | '2013-08-15' | '00:00:15' | '2013-08-15' | '00:45:00' | 2         | 2
        5   | 500     | '2013-08-15' | '00:00:45' | '2013-08-15' | '01:00:00' | 2         | 1
    *except that rows 2 and 5 wouldn't appear if the duration had to be greater than 30 minutes. Thanks for any help that you can provide! And please let me know if there is anything I can further clarify!
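
    A sketch of the classic gaps-and-islands approach using user variables (pre-8.0 MySQL). The evaluation order of variable assignments inside a SELECT isn't formally guaranteed, so treat this as a starting point; on MySQL 8+ the same logic is safer with LAG()/SUM() OVER (). Here duration is the number of 15-minute slots, matching the desired output, and the HAVING line sets the one-hour minimum (use >= 6 for 1.5 hours):

        SELECT user_id, status_id,
               DATE(MIN(dt)) AS date_start, TIME(MIN(dt)) AS time_start,
               DATE(MAX(dt) + INTERVAL 15 MINUTE) AS date_end,
               TIME(MAX(dt) + INTERVAL 15 MINUTE) AS time_end,
               COUNT(*) AS duration
        FROM (
            SELECT e.user_id, e.status_id,
                   TIMESTAMP(e.`date`, e.`time`) AS dt,
                   -- start a new group whenever user, status, or the 15-minute chain breaks
                   @grp := IF(@prev_user = e.user_id
                              AND @prev_status = e.status_id
                              AND TIMESTAMP(e.`date`, e.`time`) = @prev_dt + INTERVAL 15 MINUTE,
                              @grp, @grp + 1) AS grp,
                   @prev_user   := e.user_id,
                   @prev_status := e.status_id,
                   @prev_dt     := TIMESTAMP(e.`date`, e.`time`)
            FROM events e
            CROSS JOIN (SELECT @grp := 0, @prev_user := NULL,
                               @prev_status := NULL, @prev_dt := NULL) init
            WHERE e.`date` BETWEEN '2013-08-01' AND '2013-08-31'
            ORDER BY e.user_id, dt
        ) runs
        GROUP BY user_id, status_id, grp
        HAVING COUNT(*) >= 4
        ORDER BY user_id, date_start, time_start;

    On the sample data this returns the user 101 / status 2 run (23:00 through 00:15, duration 5); the shorter runs are filtered out by the HAVING clause.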

    Read the article
