Search Results

Search found 269 results on 11 pages for 'bigint'.

Page 3/11 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11  | Next Page >

  • [Ruby] How can I randomly iterate through a large Range?

    - by void
    I would like to randomly iterate through a range. Each value will be visited only once and all values will eventually be visited. For example: (0..9).sort_by{rand}.map{|x| f(x)} where f(x) is some function that operates on each value. A Fisher-Yates shuffle could be used to increase efficiency, but this code is sufficient for many purposes. My problem is that sort_by will transform the range into an array, which is not cool because I am working with astronomically large numbers. Ruby will quickly consume a large amount of RAM trying to create a monstrous array. This is also why the following code will not work: tried = {} # store previous attempts bigint = 99**99 bigint.times { x = rand(bigint) redo if tried[x] tried[x] = true f(x) # some function } This code is very naive and quickly runs out of memory as tried obtains more entries. What sort of algorithm can accomplish what I am trying to do?

    Read the article

  • SQL SERVER – Parsing SSIS Catalog Messages – Notes from the Field #030

    - by Pinal Dave
    [Note from Pinal]: This is a new episode of Notes from the Field series. SQL Server Integration Service (SSIS) is one of the most key essential part of the entire Business Intelligence (BI) story. It is a platform for data integration and workflow applications. The tool may also be used to automate maintenance of SQL Server databases and updates to multidimensional cube data. In this episode of the Notes from the Field series I requested SSIS Expert Andy Leonard to discuss one of the most interesting concepts of SSIS Catalog Messages. There are plenty of interesting and useful information captured in the SSIS catalog and we will learn together how to explore the same. The SSIS Catalog captures a lot of cool information by default. Here’s a query I use to parse messages from the catalog.operation_messages table in the SSISDB database, where the logged messages are stored. This query is set up to parse a default message transmitted by the Lookup Transformation. It’s one of my favorite messages in the SSIS log because it gives me excellent information when I’m tuning SSIS data flows. The message reads similar to: Data Flow Task:Information: The Lookup processed 4485 rows in the cache. The processing time was 0.015 seconds. The cache used 1376895 bytes of memory. The query: USE SSISDB GO DECLARE @MessageSourceType INT = 60 DECLARE @StartOfIDString VARCHAR(100) = 'The Lookup processed ' DECLARE @ProcessingTimeString VARCHAR(100) = 'The processing time was ' DECLARE @CacheUsedString VARCHAR(100) = 'The cache used ' DECLARE @StartOfIDSearchString VARCHAR(100) = '%' + @StartOfIDString + '%' DECLARE @ProcessingTimeSearchString VARCHAR(100) = '%' + @ProcessingTimeString + '%' DECLARE @CacheUsedSearchString VARCHAR(100) = '%' + @CacheUsedString + '%' SELECT operation_id , SUBSTRING(MESSAGE, (PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1)) - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1))) AS LookupRowsCount , SUBSTRING(MESSAGE, (PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1)) - (PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1))) AS LookupProcessingTime , CASE WHEN (CONVERT(numeric(3,3),SUBSTRING(MESSAGE, (PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1)) - (PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1))))) = 0 THEN 0 ELSE CONVERT(bigint,SUBSTRING(MESSAGE, (PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1)) - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1)))) / CONVERT(numeric(3,3),SUBSTRING(MESSAGE, (PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1)) - (PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1)))) END AS LookupRowsPerSecond , SUBSTRING(MESSAGE, (PATINDEX(@CacheUsedSearchString,MESSAGE) + LEN(@CacheUsedString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@CacheUsedSearchString,MESSAGE) + LEN(@CacheUsedString) 
+ 1)) - (PATINDEX(@CacheUsedSearchString, MESSAGE) + LEN(@CacheUsedString) + 1))) AS LookupBytesUsed ,CASE WHEN (CONVERT(bigint,SUBSTRING(MESSAGE, (PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1)) - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1)))))= 0 THEN 0 ELSE CONVERT(bigint,SUBSTRING(MESSAGE, (PATINDEX(@CacheUsedSearchString,MESSAGE) + LEN(@CacheUsedString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@CacheUsedSearchString,MESSAGE) + LEN(@CacheUsedString) + 1)) - (PATINDEX(@CacheUsedSearchString, MESSAGE) + LEN(@CacheUsedString) + 1)))) / CONVERT(bigint,SUBSTRING(MESSAGE, (PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1)) - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1)))) END AS LookupBytesPerRow FROM [catalog].[operation_messages] WHERE message_source_type = @MessageSourceType AND MESSAGE LIKE @StartOfIDSearchString GO Note that you have to set some parameter values: @MessageSourceType [int] – represents the message source type value from the following results: Value     Description 10           Entry APIs, such as T-SQL and CLR Stored procedures 20           External process used to run package (ISServerExec.exe) 30           Package-level objects 40           Control Flow tasks 50           Control Flow containers 60           Data Flow task 70           Custom execution message Note: Taken from Reza Rad’s (excellent!) helper.MessageSourceType table found here. @StartOfIDString [VarChar(100)] – use this to uniquely identify the message field value you wish to parse. In this case, the string ‘The Lookup processed ‘ identifies all the Lookup Transformation messages I desire to parse. @ProcessingTimeString [VarChar(100)] – this parameter is message-specific. I use this parameter to specifically search the message field value for the beginning of the Lookup Processing Time value. For this execution, I use the string ‘The processing time was ‘. @CacheUsedString [VarChar(100)] – this parameter is also message-specific. I use this parameter to specifically search the message field value for the beginning of the Lookup Cache  Used value. It returns the memory used, in bytes. For this execution, I use the string ‘The cache used ‘. The other parameters are built from variations of the parameters listed above. The query parses the values into text. The string values are converted to numeric values for ratio calculations; LookupRowsPerSecond and LookupBytesPerRow. Since ratios involve division, CASE statements check for denominators that equal 0. Here are the results in an SSMS grid: This is not the only way to retrieve this information. And much of the code lends itself to conversion to functions. If there is interest, I will share the functions in an upcoming post. If you want to get started with SSIS with the help of experts, read more over at Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SSIS

    Read the article

  • SQL SERVER – Storing 64-bit Unsigned Integer Value in Database

    - by Pinal Dave
    Here is a very interesting question I received in an email just the other day. Some questions are just so good that they make me wonder how I have not faced them first hand. Anyway, here is the question - “Pinal, I am migrating my database from MySQL to SQL Server and I have faced a unique situation. I have been using an unsigned 64-bit integer in MySQL, but when I try to migrate that column to SQL Server I am facing an issue, as there is no datatype which I find appropriate for my column. It is now too late to change the datatype and I need an immediate solution. One chain of thought was to change the data type of the column from unsigned 64-bit (BIGINT) to VARCHAR(n), but that would just swap the data type in a way that will cause quite a lot of performance related issues in the future. SQL Server also has the BIGINT data type, but that is a signed 64-bit datatype. The BIGINT datatype in SQL Server has a range of -2^63 (-9,223,372,036,854,775,808) to 2^63-1 (9,223,372,036,854,775,807). However, my value is much larger than this number. Is there any way I can store my big 64-bit unsigned integer without losing much of the performance by converting it to VARCHAR?” Very interesting question. For the sake of argument, we could tell the user that there should be no need for such a big number, or, if we are talking about an identity column, that I really doubt the table will ever grow beyond that range. The real question I found interesting here was how to store a 64-bit unsigned integer value in SQL Server without converting it to a string data type. After thinking a bit, I found a fairly simple answer: I can use the NUMERIC data type. I can use the NUMERIC(20) datatype for a 64-bit unsigned integer value, NUMERIC(10) for a 32-bit unsigned integer value and NUMERIC(5) for a 16-bit unsigned integer value. The NUMERIC datatype supports a maximum precision of 38. Now here is another thing to keep in mind. Using the NUMERIC datatype will indeed accept the 64-bit unsigned integer, but if in the future you try to enter a negative value, it will allow that as well. Hence, you will need to put an additional constraint on the column so that it only accepts positive integers. Here is another big concern: SQL Server will store the number as numeric and will treat it as a positive integer for all practical purposes. You will have to write your application logic to interpret it as a 64-bit unsigned integer. On the other hand, if you are already using unsigned integers in your application, there is a good chance that you already have logic taking care of this. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Datatype
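
    As a rough illustration of the NUMERIC(20) approach described above (the table and column names here are invented for the example), a sketch of a column that holds the full unsigned 64-bit range while a CHECK constraint rejects the negative values NUMERIC would otherwise accept:

      -- Hypothetical table: NUMERIC(20,0) comfortably covers 0 .. 18,446,744,073,709,551,615 (2^64 - 1).
      CREATE TABLE dbo.MigratedCounters
      (
          CounterId  INT IDENTITY(1,1) PRIMARY KEY,
          BigCounter NUMERIC(20, 0) NOT NULL
              CONSTRAINT CK_MigratedCounters_BigCounter
              CHECK (BigCounter BETWEEN 0 AND 18446744073709551615)
      );

      -- The maximum unsigned 64-bit value fits without truncation:
      INSERT INTO dbo.MigratedCounters (BigCounter) VALUES (18446744073709551615);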

    Read the article

  • postgresql deleting old tables

    - by BB
    I have a postgresql database which stores my radius connection information. What I want to do is only store a month's worth of logs. How would I craft a sql statement that I can run from cron that would go and delete any rows that are older than a month? The date is taken from the acctstoptime column and looks like 2010-01-27 16:02:17-05. Format of the table in question: -- Table: radacct -- DROP TABLE radacct; CREATE TABLE radacct ( radacctid bigserial NOT NULL, acctsessionid character varying(32) NOT NULL, acctuniqueid character varying(32) NOT NULL, username character varying(253), groupname character varying(253), realm character varying(64), nasipaddress inet NOT NULL, nasportid character varying(15), nasporttype character varying(32), acctstarttime timestamp with time zone, acctstoptime timestamp with time zone, acctsessiontime bigint, acctauthentic character varying(32), connectinfo_start character varying(50), connectinfo_stop character varying(50), acctinputoctets bigint, acctoutputoctets bigint, calledstationid character varying(50), callingstationid character varying(50), acctterminatecause character varying(32), servicetype character varying(32), xascendsessionsvrkey character varying(10), framedprotocol character varying(32), framedipaddress inet, acctstartdelay integer, acctstopdelay integer, freesidestatus character varying(32), CONSTRAINT radacct_pkey PRIMARY KEY (radacctid) ) WITH (OIDS=FALSE); ALTER TABLE radacct OWNER TO radius; -- Index: freesidestatus -- DROP INDEX freesidestatus; CREATE INDEX freesidestatus ON radacct USING btree (freesidestatus); -- Index: radacct_active_user_idx -- DROP INDEX radacct_active_user_idx; CREATE INDEX radacct_active_user_idx ON radacct USING btree (username, nasipaddress, acctsessionid) WHERE acctstoptime IS NULL; -- Index: radacct_start_user_idx -- DROP INDEX radacct_start_user_idx; CREATE INDEX radacct_start_user_idx ON radacct USING btree (acctstarttime, username);
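
    A hedged sketch of the kind of statement being asked for here (it assumes acctstoptime is what decides a row's age and leans on PostgreSQL's interval arithmetic; from cron it would be wrapped in a psql call):

      -- Delete accounting rows whose session ended more than one month ago.
      -- Rows that are still open (acctstoptime IS NULL) are left alone.
      DELETE FROM radacct
      WHERE acctstoptime IS NOT NULL
        AND acctstoptime < now() - interval '1 month';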

    Read the article

  • postgresql deleting old records from log tables

    - by Max
    I have a postgresql database which stores my radius connection information. What I want to do is only store a month's worth of logs. How would I craft a sql statement that I can run from cron that would go and delete any rows that are older than a month? The date is taken from the acctstoptime column and looks like 2010-01-27 16:02:17-05. Format of the table in question: -- Table: radacct CREATE TABLE radacct ( radacctid bigserial NOT NULL, acctsessionid character varying(32) NOT NULL, acctuniqueid character varying(32) NOT NULL, username character varying(253), groupname character varying(253), realm character varying(64), nasipaddress inet NOT NULL, nasportid character varying(15), nasporttype character varying(32), acctstarttime timestamp with time zone, acctstoptime timestamp with time zone, acctsessiontime bigint, acctauthentic character varying(32), connectinfo_start character varying(50), connectinfo_stop character varying(50), acctinputoctets bigint, acctoutputoctets bigint, calledstationid character varying(50), callingstationid character varying(50), acctterminatecause character varying(32), servicetype character varying(32), xascendsessionsvrkey character varying(10), framedprotocol character varying(32), framedipaddress inet, acctstartdelay integer, acctstopdelay integer, freesidestatus character varying(32), CONSTRAINT radacct_pkey PRIMARY KEY (radacctid) ) WITH (OIDS=FALSE); ALTER TABLE radacct OWNER TO radius; -- Index: freesidestatus CREATE INDEX freesidestatus ON radacct USING btree (freesidestatus); -- Index: radacct_active_user_idx CREATE INDEX radacct_active_user_idx ON radacct USING btree (username, nasipaddress, acctsessionid) WHERE acctstoptime IS NULL; -- Index: radacct_start_user_idx CREATE INDEX radacct_start_user_idx ON radacct USING btree (acctstarttime, username);

    Read the article

  • getting rid of filesort on WordPress MySQL query

    - by Hans
    An instance of WordPress that I manage goes down about once a day due to this monster MySQL query taking far too long: SELECT SQL_CALC_FOUND_ROWS distinct wp_posts.* FROM wp_posts LEFT JOIN wp_term_relationships ON (wp_posts.ID = wp_term_relationships.object_id) LEFT JOIN wp_term_taxonomy ON wp_term_taxonomy.term_taxonomy_id = wp_term_relationships.term_taxonomy_id LEFT JOIN wp_ec3_schedule ec3_sch ON ec3_sch.post_id=id WHERE 1=1 AND wp_posts.ID NOT IN ( SELECT tr.object_id FROM wp_term_relationships AS tr INNER JOIN wp_term_taxonomy AS tt ON tr.term_taxonomy_id = tt.term_taxonomy_id WHERE tt.taxonomy = 'category' AND tt.term_id IN ('1050') ) AND wp_posts.post_type = 'post' AND (wp_posts.post_status = 'publish') AND NOT EXISTS (SELECT * FROM wp_term_relationships JOIN wp_term_taxonomy ON wp_term_taxonomy.term_taxonomy_id = wp_term_relationships.term_taxonomy_id WHERE wp_term_relationships.object_id = wp_posts.ID AND wp_term_taxonomy.taxonomy = 'category' AND wp_term_taxonomy.term_id IN (533,3567) ) AND ec3_sch.post_id IS NULL GROUP BY wp_posts.ID ORDER BY wp_posts.post_date DESC LIMIT 0, 10; What do I have to do to get rid of the very slow filesort? I would think that the multicolumn type_status_date index would be fast enough. The EXPLAIN EXTENDED output is below. +----+--------------------+-----------------------+--------+-----------------------------------+------------------+---------+---------------------------------------------------------------------------------+------+----------------------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+--------------------+-----------------------+--------+-----------------------------------+------------------+---------+---------------------------------------------------------------------------------+------+----------------------------------------------+ | 1 | PRIMARY | wp_posts | ref | type_status_date | type_status_date | 124 | const,const | 7034 | Using where; Using temporary; Using filesort | | 1 | PRIMARY | wp_term_relationships | ref | PRIMARY | PRIMARY | 8 | bwog_wordpress_w.wp_posts.ID | 373 | Using index | | 1 | PRIMARY | wp_term_taxonomy | eq_ref | PRIMARY | PRIMARY | 8 | bwog_wordpress_w.wp_term_relationships.term_taxonomy_id | 1 | Using index | | 1 | PRIMARY | ec3_sch | ref | post_id_index | post_id_index | 9 | bwog_wordpress_w.wp_posts.ID | 1 | Using where; Using index | | 3 | DEPENDENT SUBQUERY | wp_term_taxonomy | range | PRIMARY,term_id_taxonomy,taxonomy | term_id_taxonomy | 106 | NULL | 2 | Using where | | 3 | DEPENDENT SUBQUERY | wp_term_relationships | eq_ref | PRIMARY,term_taxonomy_id | PRIMARY | 16 | bwog_wordpress_w.wp_posts.ID,bwog_wordpress_w.wp_term_taxonomy.term_taxonomy_id | 1 | Using index | | 2 | DEPENDENT SUBQUERY | tt | const | PRIMARY,term_id_taxonomy,taxonomy | term_id_taxonomy | 106 | const,const | 1 | | | 2 | DEPENDENT SUBQUERY | tr | eq_ref | PRIMARY,term_taxonomy_id | PRIMARY | 16 | func,const | 1 | Using index | +----+--------------------+-----------------------+--------+-----------------------------------+------------------+---------+---------------------------------------------------------------------------------+------+----------------------------------------------+ 8 rows in set, 2 warnings (0.05 sec) And CREATE TABLE: CREATE TABLE `wp_posts` ( `ID` bigint(20) unsigned NOT NULL auto_increment, `post_author` bigint(20) unsigned NOT NULL default '0', `post_date` datetime NOT NULL default '0000-00-00 00:00:00', `post_date_gmt` datetime 
NOT NULL default '0000-00-00 00:00:00', `post_content` longtext NOT NULL, `post_title` text NOT NULL, `post_excerpt` text NOT NULL, `post_status` varchar(20) NOT NULL default 'publish', `comment_status` varchar(20) NOT NULL default 'open', `ping_status` varchar(20) NOT NULL default 'open', `post_password` varchar(20) NOT NULL default '', `post_name` varchar(200) NOT NULL default '', `to_ping` text NOT NULL, `pinged` text NOT NULL, `post_modified` datetime NOT NULL default '0000-00-00 00:00:00', `post_modified_gmt` datetime NOT NULL default '0000-00-00 00:00:00', `post_content_filtered` text NOT NULL, `post_parent` bigint(20) unsigned NOT NULL default '0', `guid` varchar(255) NOT NULL default '', `menu_order` int(11) NOT NULL default '0', `post_type` varchar(20) NOT NULL default 'post', `post_mime_type` varchar(100) NOT NULL default '', `comment_count` bigint(20) NOT NULL default '0', `robotsmeta` varchar(64) default NULL, PRIMARY KEY (`ID`), KEY `post_name` (`post_name`), KEY `type_status_date` (`post_type`,`post_status`,`post_date`,`ID`), KEY `post_parent` (`post_parent`), KEY `post_date` (`post_date`), FULLTEXT KEY `post_related` (`post_title`,`post_content`) )

    Read the article

  • Postgre varchar field between

    - by Anton
    I have an addresses table with a ZIP code field which has type VARCHAR. I need to select all addresses from this table using a ZIP code range. If I use the following code: select * from address where cast(zip as bigint) between 90210 and 90220 I get an error on rows where the ZIP code can't be cast as bigint. How can I resolve this issue?
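
    One hedged way around the failing cast, sticking with the address table from the question: let a CASE decide whether the value is purely numeric before it is ever cast, so non-numeric ZIP codes are simply skipped rather than raising an error:

      -- The CASE guarantees the cast only runs on values made up entirely of digits.
      SELECT *
      FROM   address
      WHERE  CASE WHEN zip ~ '^[0-9]+$'
                  THEN CAST(zip AS bigint) BETWEEN 90210 AND 90220
                  ELSE false
             END;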

    Read the article

  • How to convert DateTime to a number with a precision greater than days in T-SQL?

    - by Jader Dias
    Both queries below translate to the same number: SELECT CONVERT(bigint,CONVERT(datetime,'2009-06-15 15:00:00')) SELECT CAST(CONVERT(datetime,'2009-06-15 23:01:00') as bigint) Result 39978 39978 The generated number will be different only if the days are different. Is there any way to convert the DateTime to a more precise number, as we do in .NET with the .Ticks property? I need at least minute precision.
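
    A hedged alternative to the day-level cast above: DATEDIFF against a fixed epoch returns an integer at whatever granularity is asked for, here minutes, so the two timestamps no longer collapse to the same value:

      -- Minutes elapsed since SQL Server's base date; the two results differ by 481 minutes.
      SELECT DATEDIFF(MINUTE, '19000101', CONVERT(datetime, '2009-06-15 15:00:00'));
      SELECT DATEDIFF(MINUTE, '19000101', CONVERT(datetime, '2009-06-15 23:01:00'));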

    Read the article

  • Many-To-Many Query with Linq-To-NHibernate

    - by rjygraham
    Ok guys (and gals), this one has been driving me nuts all night and I'm turning to your collective wisdom for help. I'm using Fluent Nhibernate and Linq-To-NHibernate as my data access story and I have the following simplified DB structure: CREATE TABLE [dbo].[Classes]( [Id] [bigint] IDENTITY(1,1) NOT NULL, [Name] [nvarchar](100) NOT NULL, [StartDate] [datetime2](7) NOT NULL, [EndDate] [datetime2](7) NOT NULL, CONSTRAINT [PK_Classes] PRIMARY KEY CLUSTERED ( [Id] ASC ) CREATE TABLE [dbo].[Sections]( [Id] [bigint] IDENTITY(1,1) NOT NULL, [ClassId] [bigint] NOT NULL, [InternalCode] [varchar](10) NOT NULL, CONSTRAINT [PK_Sections] PRIMARY KEY CLUSTERED ( [Id] ASC ) CREATE TABLE [dbo].[SectionStudents]( [SectionId] [bigint] NOT NULL, [UserId] [uniqueidentifier] NOT NULL, CONSTRAINT [PK_SectionStudents] PRIMARY KEY CLUSTERED ( [SectionId] ASC, [UserId] ASC ) CREATE TABLE [dbo].[aspnet_Users]( [ApplicationId] [uniqueidentifier] NOT NULL, [UserId] [uniqueidentifier] NOT NULL, [UserName] [nvarchar](256) NOT NULL, [LoweredUserName] [nvarchar](256) NOT NULL, [MobileAlias] [nvarchar](16) NULL, [IsAnonymous] [bit] NOT NULL, [LastActivityDate] [datetime] NOT NULL, PRIMARY KEY NONCLUSTERED ( [UserId] ASC ) I omitted the foreign keys for brevity, but essentially this boils down to: A Class can have many Sections. A Section can belong to only 1 Class but can have many Students. A Student (aspnet_Users) can belong to many Sections. I've setup the corresponding Model classes and Fluent NHibernate Mapping classes, all that is working fine. Here's where I'm getting stuck. I need to write a query which will return the sections a student is enrolled in based on the student's UserId and the dates of the class. Here's what I've tried so far: 1. var sections = (from s in this.Session.Linq<Sections>() where s.Class.StartDate <= DateTime.UtcNow && s.Class.EndDate > DateTime.UtcNow && s.Students.First(f => f.UserId == userId) != null select s); 2. var sections = (from s in this.Session.Linq<Sections>() where s.Class.StartDate <= DateTime.UtcNow && s.Class.EndDate > DateTime.UtcNow && s.Students.Where(w => w.UserId == userId).FirstOrDefault().Id == userId select s); Obviously, 2 above will fail miserably if there are no students matching userId for classes the current date between it's start and end dates...but I just wanted to try. The filters for the Class StartDate and EndDate work fine, but the many-to-many relation with Students is proving to be difficult. Everytime I try running the query I get an ArgumentNullException with the message: Value cannot be null. Parameter name: session I've considered going down the path of making the SectionStudents relation a Model class with a reference to Section and a reference to Student instead of a many-to-many. I'd like to avoid that if I can, and I'm not even sure it would work that way. Thanks in advance to anyone who can help. Ryan
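
    Setting the LINQ translation problem aside for a moment, the result being reached for can be written directly against the tables above; a hedged T-SQL sketch (the parameter name and value are invented) that the eventual LINQ query should be equivalent to:

      DECLARE @UserId uniqueidentifier = '00000000-0000-0000-0000-000000000000'; -- placeholder value

      -- Sections of currently running classes in which the given student is enrolled.
      SELECT  s.Id, s.ClassId, s.InternalCode
      FROM    dbo.Sections        AS s
      JOIN    dbo.Classes         AS c  ON c.Id = s.ClassId
      JOIN    dbo.SectionStudents AS ss ON ss.SectionId = s.Id
      WHERE   c.StartDate <= GETUTCDATE()
        AND   c.EndDate   >  GETUTCDATE()
        AND   ss.UserId   =  @UserId;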

    Read the article

  • Using Distinct or Not

    - by RPS
    In the below SQL statement, should I be using DISTINCT, given that I have a GROUP BY in the outer query? Thoughts? SELECT [OrderUser].OrderUserId, ISNULL(SUM(total.FileSize), 0), ISNULL(SUM(total.CompressedFileSize), 0) FROM ( SELECT DISTINCT ProductSize.OrderUserId, ProductSize.FileInfoId, CAST(ProductSize.FileSize AS BIGINT) AS FileSize, CAST(ProductSize.CompressedFileSize AS BIGINT) AS CompressedFileSize FROM ProductSize WITH (NOLOCK) INNER JOIN [Version] ON ProductSize.VersionId = [Version].VersionId ) AS total RIGHT OUTER JOIN [OrderUser] WITH (NOLOCK) ON total.OrderUserId = [OrderUser].OrderUserId WHERE NOT ([OrderUser].isCustomer = 1 AND [OrderUser].isEndOrderUser = 0 OR [OrderUser].isLocation = 1) AND [OrderUser].OrderUserId = 1 GROUP BY [OrderUser].OrderUserId

    Read the article

  • MySQL forgot about automatically creating an index for a foreign key?

    - by bobo
    After running the following SQL statements, you will see that, MySQL has automatically created the non-unique index question_tag_tag_id_tag_id on the tag_id column for me after the first ALTER TABLE statement has run. But after the second ALTER TABLE statement has run, I think MySQL should also automatically create another non-unique index question_tag_question_id_question_id on the question_id column for me. But as you can see from the SHOW INDEXES statement output, it's not there. Why does MySQL forget about the second ALTER TABLE statement? By the way, since I have already created a unique index question_id_tag_id_idx used by both question_id and tag_id columns. Is creating a separate index for each of them redundant? mysql> DROP DATABASE mydatabase; Query OK, 1 row affected (0.00 sec) mysql> CREATE DATABASE mydatabase; Query OK, 1 row affected (0.00 sec) mysql> USE mydatabase; Database changed mysql> CREATE TABLE question (id BIGINT AUTO_INCREMENT, html TEXT, PRIMARY KEY(id)) ENGINE = INNODB; Query OK, 0 rows affected (0.05 sec) mysql> CREATE TABLE tag (id BIGINT AUTO_INCREMENT, name VARCHAR(10) NOT NULL, UNIQUE INDEX name_idx (name), PRIMARY KEY(id)) ENGINE = INNODB; Query OK, 0 rows affected (0.05 sec) mysql> CREATE TABLE question_tag (question_id BIGINT, tag_id BIGINT, UNIQUE INDEX question_id_tag_id_idx (question_id, tag_id), PRIMARY KEY(question_id, tag_id)) ENGINE = INNODB; Query OK, 0 rows affected (0.00 sec) mysql> ALTER TABLE question_tag ADD CONSTRAINT question_tag_tag_id_tag_id FOREIGN KEY (tag_id) REFERENCES tag(id); Query OK, 0 rows affected (0.10 sec) Records: 0 Duplicates: 0 Warnings: 0 mysql> ALTER TABLE question_tag ADD CONSTRAINT question_tag_question_id_question_id FOREIGN KEY (question_id) REFERENCES question(id); Query OK, 0 rows affected (0.13 sec) Records: 0 Duplicates: 0 Warnings: 0 mysql> SHOW INDEXES FROM question_tag; +--------------+------------+----------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+ | Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | +--------------+------------+----------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+ | question_tag | 0 | PRIMARY | 1 | question_id | A | 0 | NULL | NULL | | BTREE | | | question_tag | 0 | PRIMARY | 2 | tag_id | A | 0 | NULL | NULL | | BTREE | | | question_tag | 0 | question_id_tag_id_idx | 1 | question_id | A | 0 | NULL | NULL | | BTREE | | | question_tag | 0 | question_id_tag_id_idx | 2 | tag_id | A | 0 | NULL | NULL | | BTREE | | | question_tag | 1 | question_tag_tag_id_tag_id | 1 | tag_id | A | 0 | NULL | NULL | | BTREE | | +--------------+------------+----------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+ 5 rows in set (0.01 sec) mysql>

    Read the article

  • Postgresql - Edit function signature

    - by drave
    POSTGRESQL 8.4.3 - I created a function with this signature: CREATE OR REPLACE FUNCTION logcountforlasthour() RETURNS SETOF record AS then realised I wanted to change it to this: CREATE OR REPLACE FUNCTION logcountforlasthour() RETURNS TABLE(ip bigint, count bigint) record AS but when I apply that change in the query tool it isn't accepted - or rather it is accepted, there is no syntax error, but the text of the function has not been changed. Even if I run "DROP FUNCTION logcountforlasthour()" between edits, the old syntax comes back. If I edit the body of the function, that's fine, it changes, but not the signature. Is there something I'm missing? Thanks
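
    For what it is worth, a hedged sequence that normally takes care of a changed result signature on 8.4 is to drop the old function and recreate it in full (the body below is only a stand-in, and note that RETURNS TABLE is not followed by an extra record keyword):

      DROP FUNCTION IF EXISTS logcountforlasthour();

      CREATE FUNCTION logcountforlasthour()
      RETURNS TABLE(ip bigint, count bigint) AS
      $$
          -- placeholder body: the real per-IP aggregation goes here
          SELECT 0::bigint, 0::bigint;
      $$ LANGUAGE sql;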

    Read the article

  • Need help with 2 MySql Queries. Join vs Subqueries.

    - by BugBusterX
    I have 2 tables: user: id, name message: sender_id, receiver_id, message, read_at, created_at There are 2 results I need to retrieve and I'm trying to find the best solution. I have included queries that I'm using in the very end. I need to retrieve a list of users, and also with each user have information available whether there are any unread messages from each user (them as sender, me as receiver) and whether or not there are any read messages between us ( they send I'm receiver or I send they are receivers) I need Same as above, but only those members where there has been any messaging between us, sorted by unread first, then by last message received. Can you advise please? Should this be done with joins or subqueries? In first case I do not need the count, I just need to know whether or not there is at least one unread message. I'm posting code and my current queries, please have a look when you get a chance: BTW, everything is the way I want in fist query. My concern is: In second query I would like to order by messages.created_at, but I dont think I can do that with grouping? And also I dont know if this approach is the most optimized and fast. CREATE TABLE `user` ( `id` bigint(20) NOT NULL AUTO_INCREMENT, `name` varchar(255) NOT NULL, PRIMARY KEY (`id`) ) INSERT INTO `user` VALUES (1,'User 1'),(2,'User 2'),(3,'User 3'),(4,'User 4'),(5,'User 5'); CREATE TABLE `message` ( `id` bigint(20) NOT NULL AUTO_INCREMENT, `sender_id` bigint(20) DEFAULT NULL, `receiver_id` bigint(20) DEFAULT NULL, `message` text, `read_at` datetime DEFAULT NULL, `created_at` datetime NOT NULL, PRIMARY KEY (`id`) ) INSERT INTO `message` VALUES (1,3,1,'Messge',NULL,'2010-10-10 10:10:10'),(2,1,4,'Hey','2010-10-10 10:10:12','2010-10-10 10:10:11'),(3,4,1,'Hello','2010-10-10 10:10:19','2010-10-10 10:10:15'),(4,1,4,'Again','2010-10-10 10:10:25','2010-10-10 10:10:21'),(5,3,1,'Hiii',NULL,'2010-10-10 10:10:21'); SELECT u.*, m_new.id as have_new, m.id as have_any FROM user u LEFT JOIN message m_new ON (u.id = m_new.sender_id AND m_new.receiver_id = 1 AND m_new.read_at IS NULL) LEFT JOIN message m ON ((u.id = m.sender_id AND m.receiver_id = 1) OR (u.id = m.receiver_id AND m.sender_id = 1)) GROUP BY u.id SELECT u.*, m_new.id as have_new, m.id as have_any FROM user u LEFT JOIN message m_new ON (u.id = m_new.sender_id AND m_new.receiver_id = 1 AND m_new.read_at IS NULL) LEFT JOIN message m ON ((u.id = m.sender_id AND m.receiver_id = 1) OR (u.id = m.receiver_id AND m.sender_id = 1)) where m.id IS NOT NULL GROUP BY u.id
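
    For the second result set, one hedged variation on the queries above is to let the aggregation carry both flags plus the latest message date, then sort on those (untested sketch; it keeps the same join conditions as the original queries):

      SELECT    u.*,
                MAX(m_new.id)     AS have_new,        -- non-NULL when u has sent me something unread
                MAX(m.id)         AS have_any,
                MAX(m.created_at) AS last_message_at
      FROM      user u
      JOIN      message m
                  ON (u.id = m.sender_id   AND m.receiver_id = 1)
                  OR (u.id = m.receiver_id AND m.sender_id   = 1)
      LEFT JOIN message m_new
                  ON  u.id = m_new.sender_id
                  AND m_new.receiver_id = 1
                  AND m_new.read_at IS NULL
      GROUP BY  u.id
      ORDER BY  (MAX(m_new.id) IS NOT NULL) DESC, last_message_at DESC;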

    Read the article

  • Session memory – who’s this guy named Max and what’s he doing with my memory?

    - by extended_events
    SQL Server MVP Jonathan Kehayias (blog) emailed me a question last week when he noticed that the total memory used by the buffers for an event session was larger than the value he specified for the MAX_MEMORY option in the CREATE EVENT SESSION DDL. The answer here seems like an excellent subject for me to kick-off my new “401 – Internals” tag that identifies posts where I pull back the curtains a bit and let you peek into what’s going on inside the extended events engine. In a previous post (Option Trading: Getting the most out of the event session options) I explained that we use a set of buffers to store the event data before  we write the event data to asynchronous targets. The MAX_MEMORY along with the MEMORY_PARTITION_MODE defines how big each buffer will be. Theoretically, that means that I can predict the size of each buffer using the following formula: max memory / # of buffers = buffer size If it was that simple I wouldn’t be writing this post. I’ll take “boundary” for 64K Alex For a number of reasons that are beyond the scope of this blog, we create event buffers in 64K chunks. The result of this is that the buffer size indicated by the formula above is rounded up to the next 64K boundary and that is the size used to create the buffers. If you think visually, this means that the graph of your max_memory option compared to the actual buffer size that results will look like a set of stairs rather than a smooth line. You can see this behavior by looking at the output of dm_xe_sessions, specifically the fields related to the buffer sizes, over a range of different memory inputs: Note: This test was run on a 2 core machine using per_cpu partitioning which results in 5 buffers. (Seem my previous post referenced above for the math behind buffer count.) input_memory_kb total_regular_buffers regular_buffer_size total_buffer_size 637 5 130867 654335 638 5 130867 654335 639 5 130867 654335 640 5 196403 982015 641 5 196403 982015 642 5 196403 982015 This is just a segment of the results that shows one of the “jumps” between the buffer boundary at 639 KB and 640 KB. You can verify the size boundary by doing the math on the regular_buffer_size field, which is returned in bytes: 196403 – 130867 = 65536 bytes 65536 / 1024 = 64 KB The relationship between the input for max_memory and when the regular_buffer_size is going to jump from one 64K boundary to the next is going to change based on the number of buffers being created. The number of buffers is dependent on the partition mode you choose. If you choose any partition mode other than NONE, the number of buffers will depend on your hardware configuration. (Again, see the earlier post referenced above.) With the default partition mode of none, you always get three buffers, regardless of machine configuration, so I generated a “range table” for max_memory settings between 1 KB and 4096 KB as an example. 
start_memory_range_kb end_memory_range_kb total_regular_buffers regular_buffer_size total_buffer_size 1 191 NULL NULL NULL 192 383 3 130867 392601 384 575 3 196403 589209 576 767 3 261939 785817 768 959 3 327475 982425 960 1151 3 393011 1179033 1152 1343 3 458547 1375641 1344 1535 3 524083 1572249 1536 1727 3 589619 1768857 1728 1919 3 655155 1965465 1920 2111 3 720691 2162073 2112 2303 3 786227 2358681 2304 2495 3 851763 2555289 2496 2687 3 917299 2751897 2688 2879 3 982835 2948505 2880 3071 3 1048371 3145113 3072 3263 3 1113907 3341721 3264 3455 3 1179443 3538329 3456 3647 3 1244979 3734937 3648 3839 3 1310515 3931545 3840 4031 3 1376051 4128153 4032 4096 3 1441587 4324761 As you can see, there are 21 “steps” within this range and max_memory values below 192 KB fall below the 64K per buffer limit so they generate an error when you attempt to specify them. Max approximates True as memory approaches 64K The upshot of this is that the max_memory option does not imply a contract for the maximum memory that will be used for the session buffers (Those of you who read Take it to the Max (and beyond) know that max_memory is really only referring to the event session buffer memory.) but is more of an estimate of total buffer size to the nearest higher multiple of 64K times the number of buffers you have. The maximum delta between your initial max_memory setting and the true total buffer size occurs right after you break through a 64K boundary, for example if you set max_memory = 576 KB (see the green line in the table), your actual buffer size will be closer to 767 KB in a non-partitioned event session. You get “stepped up” for every 191 KB block of initial max_memory which isn’t likely to cause a problem for most machines. Things get more interesting when you consider a partitioned event session on a computer that has a large number of logical CPUs or NUMA nodes. Since each buffer gets “stepped up” when you break a boundary, the delta can get much larger because it’s multiplied by the number of buffers. For example, a machine with 64 logical CPUs will have 160 buffers using per_cpu partitioning or if you have 8 NUMA nodes configured on that machine you would have 24 buffers when using per_node. If you’ve just broken through a 64K boundary and get “stepped up” to the next buffer size you’ll end up with total buffer size approximately 10240 KB and 1536 KB respectively (64K * # of buffers) larger than max_memory value you might think you’re getting. Using per_cpu partitioning on large machine has the most impact because of the large number of buffers created. If the amount of memory being used by your system within these ranges is important to you then this is something worth paying attention to and considering when you configure your event sessions. The DMV dm_xe_sessions is the tool to use to identify the exact buffer size for your sessions. In addition to the regular buffers (read: event session buffers) you’ll also see the details for large buffers if you have configured MAX_EVENT_SIZE. The “buffer steps” for any given hardware configuration should be static within each partition mode so if you want to have a handy reference available when you configure your event sessions you can use the following code to generate a range table similar to the one above that is applicable for your specific machine and chosen partition mode. 
DECLARE @buf_size_output table (input_memory_kb bigint, total_regular_buffers bigint, regular_buffer_size bigint, total_buffer_size bigint) DECLARE @buf_size int, @part_mode varchar(8) SET @buf_size = 1 -- Set to the begining of your max_memory range (KB) SET @part_mode = 'per_cpu' -- Set to the partition mode for the table you want to generate WHILE @buf_size <= 4096 -- Set to the end of your max_memory range (KB) BEGIN     BEGIN TRY         IF EXISTS (SELECT * from sys.server_event_sessions WHERE name = 'buffer_size_test')             DROP EVENT SESSION buffer_size_test ON SERVER         DECLARE @session nvarchar(max)         SET @session = 'create event session buffer_size_test on server                         add event sql_statement_completed                         add target ring_buffer                         with (max_memory = ' + CAST(@buf_size as nvarchar(4)) + ' KB, memory_partition_mode = ' + @part_mode + ')'         EXEC sp_executesql @session         SET @session = 'alter event session buffer_size_test on server                         state = start'         EXEC sp_executesql @session         INSERT @buf_size_output (input_memory_kb, total_regular_buffers, regular_buffer_size, total_buffer_size)             SELECT @buf_size, total_regular_buffers, regular_buffer_size, total_buffer_size FROM sys.dm_xe_sessions WHERE name = 'buffer_size_test'     END TRY     BEGIN CATCH         INSERT @buf_size_output (input_memory_kb)             SELECT @buf_size     END CATCH     SET @buf_size = @buf_size + 1 END DROP EVENT SESSION buffer_size_test ON SERVER SELECT MIN(input_memory_kb) start_memory_range_kb, MAX(input_memory_kb) end_memory_range_kb, total_regular_buffers, regular_buffer_size, total_buffer_size from @buf_size_output group by total_regular_buffers, regular_buffer_size, total_buffer_size Thanks to Jonathan for an interesting question and a chance to explore some of the details of Extended Event internals. - Mike

    Read the article

  • How can I randomly iterate through a large Range?

    - by void
    I would like to randomly iterate through a range. Each value will be visited only once and all values will eventually be visited. For example: class Array def shuffle ret = dup j = length i = 0 while j > 1 r = i + rand(j) ret[i], ret[r] = ret[r], ret[i] i += 1 j -= 1 end ret end end (0..9).to_a.shuffle.each{|x| f(x)} where f(x) is some function that operates on each value. A Fisher-Yates shuffle is used to efficiently provide random ordering. My problem is that shuffle needs to operate on an array, which is not cool because I am working with astronomically large numbers. Ruby will quickly consume a large amount of RAM trying to create a monstrous array. Imagine replacing (0..9) with (0..99**99). This is also why the following code will not work: tried = {} # store previous attempts bigint = 99**99 bigint.times { x = rand(bigint) redo if tried[x] tried[x] = true f(x) # some function } This code is very naive and quickly runs out of memory as tried obtains more entries. What sort of algorithm can accomplish what I am trying to do? [Edit1]: Why do I want to do this? I'm trying to exhaust the search space of a hash algorithm for a N-length input string looking for partial collisions. Each number I generate is equivalent to a unique input string, entropy and all. Basically, I'm "counting" using a custom alphabet. [Edit2]: This means that f(x) in the above examples is a method that generates a hash and compares it to a constant, target hash for partial collisions. I do not need to store the value of x after I call f(x) so memory should remain constant over time. [Edit3/4/5/6]: Further clarification/fixes.

    Read the article

  • Creating a MySQL view to SELECT columns from different tables

    - by user330429
    I need help in constructing a VIEW on 4 tables. The view should contain coloumns: ER.ID, ER.EMPID, ER.CUSTID, ER.STATUS, ER.DATEREPORTED, ER.REPORT, EB.NAME, CR.CUSTNAME, CR.LOCID, CL.LOCNAME, DI.DEPTNAME ALIASES EMP_REPORT ER , EMP_BIO EB, CUST_RECORD CR, CUST_LOC CL, DEPT_ID DI THE DATA MODELS ARE: describe EMP_REPORT; +--------------+-------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +--------------+-------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | empid | int(11) | NO | | NULL | | | custid | int(11) | NO | | NULL | | | status | varchar(32) | NO | | NULL | | | datereported | bigint(20) | NO | | NULL | | | report | text | YES | | NULL | | +--------------+-------------+------+-----+---------+----------------+ describe EMP_BIO; +--------+-------------+------+-----+---------+-------+ | Field | Type | Null | Key | Default | Extra | +--------+-------------+------+-----+---------+-------+ | empid | int(11) | NO | PRI | NULL | | | name | varchar(56) | NO | | NULL | | | sex | char(1) | NO | | NULL | | | deptid | int(11) | NO | | NULL | | | email | varchar(32) | NO | | NULL | | | mobile | bigint(20) | YES | | NULL | | | gtlk | varchar(32) | YES | | NULL | | | skype | varchar(32) | YES | | NULL | | | cvid | int(11) | YES | | NULL | | +--------+-------------+------+-----+---------+-------+ describe CUST_RECORD; +----------+--------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +----------+--------------+------+-----+---------+----------------+ | custid | int(11) | NO | PRI | NULL | auto_increment | | custname | varchar(32) | NO | | NULL | | | address | varchar(255) | YES | | NULL | | | contactp | varchar(32) | YES | | NULL | | | mobile | bigint(20) | YES | | NULL | | | locid | int(11) | NO | | NULL | | | remarks | text | YES | | NULL | | | date | int(11) | YES | | NULL | | | addedby | int(11) | YES | | NULL | | +----------+--------------+------+-----+---------+----------------+ describe CUST_LOC; +---------+-------------+------+-----+---------+-------+ | Field | Type | Null | Key | Default | Extra | +---------+-------------+------+-----+---------+-------+ | locid | int(11) | NO | PRI | 0 | | | locname | varchar(32) | NO | | NULL | | +---------+-------------+------+-----+---------+-------+ describe DEPT_ID; +----------+-------------+------+-----+---------+-------+ | Field | Type | Null | Key | Default | Extra | +----------+-------------+------+-----+---------+-------+ | deptid | int(11) | NO | | NULL | | | deptname | varchar(32) | YES | | NULL | | +----------+-------------+------+-----+---------+-------+ The table EMP_REPORT contains reports submitted by employees, all the coloumns in it needs to be fetched. The empid in this table should be used to fetch corresponding name in EMP_BIO (employee biodata) table. The custid in EMP_REPORT should be used to fetch corresponding locid in CUST_RECORD(customer record) which is used to fetch locname in CUST_LOC(customer location) table. The empid in EMP_REPORT is used to fetch corresponding deptid in EMP_BIO table which is then used to fetch corresponding deptname from DEPT_ID(department id) table. I tried constructing view using union of different select queries, but dint get proper results. Please help me. Thanks in advance. PS: sorry for my poor english
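
    A hedged attempt at the view described above, built with LEFT JOINs rather than a UNION and using the aliases the question lists (the view name is invented and the join keys are inferred from the descriptions; untested against the real data):

      CREATE OR REPLACE VIEW EMP_REPORT_FULL AS
      SELECT ER.ID, ER.EMPID, ER.CUSTID, ER.STATUS, ER.DATEREPORTED, ER.REPORT,
             EB.NAME, CR.CUSTNAME, CR.LOCID, CL.LOCNAME, DI.DEPTNAME
      FROM   EMP_REPORT ER
             LEFT JOIN EMP_BIO     EB ON EB.EMPID  = ER.EMPID
             LEFT JOIN CUST_RECORD CR ON CR.CUSTID = ER.CUSTID
             LEFT JOIN CUST_LOC    CL ON CL.LOCID  = CR.LOCID
             LEFT JOIN DEPT_ID     DI ON DI.DEPTID = EB.DEPTID;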

    Read the article

  • power and modulo on the fly for big numbers

    - by user unknown
    I raise some basis b to the power p and take the modulo m of that. Let's assume b=55170 or 55172 and m=3043839241 (which happens to be the square of 55171). The linux-calculator bc gives the results (we need this for control): echo "p=5606;b=55171;m=b*b;((b-1)^p)%m;((b+1)^p)%m" | bc 2734550616 309288627 Now calculating 55170^5606 gives a somewhat large number, but since I have to do a modulooperation, I can circumvent the usage of BigInt, I thought, because of: (a*b) % c == ((a%c) * (b%c))%c i.e. (9*7) % 5 == ((9%5) * (7%5))%5 => 63 % 5 == (4 * 2) %5 => 3 == 8 % 5 ... and a^d = a^(b+c) = a^b * a^c, therefore I can divide b+c by 2, which gives, for even or odd ds d/2 and d-(d/2), so for 8^5 I can calculate 8^2 * 8^3. So my (defective) method, which always cut's off the divisor on the fly looks like that: def powMod (b: Long, pot: Int, mod: Long) : Long = { if (pot == 1) b % mod else { val pot2 = pot/2 val pm1 = powMod (b, pot, mod) val pm2 = powMod (b, pot-pot2, mod) (pm1 * pm2) % mod } } and feeded with some values, powMod (55170, 5606, 3043839241L) res2: Long = 1885539617 powMod (55172, 5606, 3043839241L) res4: Long = 309288627 As we can see, the second result is exactly the same as the one above, but the first one looks quiet different. I'm doing a lot of such calculations, and they seem to be accurate as long as they stay in the range of Int, but I can't see any error. Using a BigInt works as well, but is way too slow: def calc2 (n: Int, pri: Long) = { val p: BigInt = pri val p3 = p * p val p1 = (p-1).pow (n) % (p3) val p2 = (p+1).pow (n) % (p3) print ("p1: " + p1 + " p2: " + p2) } calc2 (5606, 55171) p1: 2734550616 p2: 309288627 (same result as with bc) Can somebody see the error in powMod?

    Read the article

  • Denali Paging–Key seek lookups

    - by Dave Ballantyne
    In my previous post “Denali Paging – is it win.win ?” I demonstrated the use of using the Paging functionality within Denali.  On reflection,  I think i may of been a little unfair and should of continued always planned to continue my investigations to the next step. In Pauls article, he uses a combination of ctes to first scan the ordered keys which is then filtered using TOP and rownumber and then uses those keys to seek the data.  So what happens if we replace the scanning portion of the code with the denali paging functionality. Heres the original procedure,  we are going to replace the functionality of the Keys and SelectedKeys ctes : CREATE  PROCEDURE dbo.FetchPageKeySeek         @PageSize   BIGINT,         @PageNumber BIGINT AS BEGIN         -- Key-Seek algorithm         WITH    Keys         AS      (                 -- Step 1 : Number the rows from the non-clustered index                 -- Maximum number of rows = @PageNumber * @PageSize                 SELECT  TOP (@PageNumber * @PageSize)                         rn = ROW_NUMBER() OVER (ORDER BY P1.post_id ASC),                         P1.post_id                 FROM    dbo.Post P1                 ORDER   BY                         P1.post_id ASC                 ),                 SelectedKeys         AS      (                 -- Step 2 : Get the primary keys for the rows on the page we want                 -- Maximum number of rows from this stage = @PageSize                 SELECT  TOP (@PageSize)                         SK.rn,                         SK.post_id                 FROM    Keys SK                 WHERE   SK.rn > ((@PageNumber - 1) * @PageSize)                 ORDER   BY                         SK.post_id ASC                 )         SELECT  -- Step 3 : Retrieve the off-index data                 -- We will only have @PageSize rows by this stage                 SK.rn,                 P2.post_id,                 P2.thread_id,                 P2.member_id,                 P2.create_dt,                 P2.title,                 P2.body         FROM    SelectedKeys SK         JOIN    dbo.Post P2                 ON  P2.post_id = SK.post_id         ORDER   BY                 SK.post_id ASC; END; and here is the replacement procedure using paging: CREATE  PROCEDURE dbo.FetchOffsetPageKeySeek         @PageSize   BIGINT,         @PageNumber BIGINT AS BEGIN         -- Key-Seek algorithm         WITH    SelectedKeys         AS      (                 SELECT  post_id                 FROM    dbo.Post P1                 ORDER   BY post_id ASC                 OFFSET  @PageSize * (@PageNumber-1) ROWS                 FETCH NEXT @PageSize ROWS ONLY                 )         SELECT  P2.post_id,                 P2.thread_id,                 P2.member_id,                 P2.create_dt,                 P2.title,                 P2.body         FROM    SelectedKeys SK         JOIN    dbo.Post P2                 ON  P2.post_id = SK.post_id         ORDER   BY                 SK.post_id ASC; END; Notice how all i have done is replace the functionality with the Keys and SelectedKeys CTEs with the paging functionality. So , what is the comparative performance now ?. Exactly the same amount of IO and memory usage , but its now pretty obvious that in terms of CPU and overall duration we are onto a winner.    

    Read the article

  • How to determine if you should use full or differential backup?

    - by Peter Larsson
    Or ask yourself, "How much of the database has changed since last backup?". Here is a simple script that will tell you how much (in percent) have changed in the database since last backup. -- Prepare staging table for all DBCC outputs DECLARE @Sample TABLE         (             Col1 VARCHAR(MAX) NOT NULL,             Col2 VARCHAR(MAX) NOT NULL,             Col3 VARCHAR(MAX) NOT NULL,             Col4 VARCHAR(MAX) NOT NULL,             Col5 VARCHAR(MAX)         )   -- Some intermediate variables for controlling loop DECLARE @FileNum BIGINT = 1,         @PageNum BIGINT = 6,         @SQL VARCHAR(100),         @Error INT,         @DatabaseName SYSNAME = 'Yoda'   -- Loop all files to the very end WHILE 1 = 1     BEGIN         BEGIN TRY             -- Build the SQL string to execute             SET     @SQL = 'DBCC PAGE(' + QUOTENAME(@DatabaseName) + ', ' + CAST(@FileNum AS VARCHAR(50)) + ', '                             + CAST(@PageNum AS VARCHAR(50)) + ', 3) WITH TABLERESULTS'               -- Insert the DBCC output in the staging table             INSERT  @Sample                     (                         Col1,                         Col2,                         Col3,                         Col4                     )             EXEC    (@SQL)               -- DCM pages exists at an interval             SET    @PageNum += 511232         END TRY           BEGIN CATCH             -- If error and first DCM page does not exist, all files are read             IF @PageNum = 6                 BREAK             ELSE                 -- If no more DCM, increase filenum and start over                 SELECT  @FileNum += 1,                         @PageNum = 6         END CATCH     END   -- Delete all records not related to diff information DELETE FROM    @Sample WHERE   Col1 NOT LIKE 'DIFF%'   -- Split the range UPDATE  @Sample SET     Col5 = PARSENAME(REPLACE(Col3, ' - ', '.'), 1),         Col3 = PARSENAME(REPLACE(Col3, ' - ', '.'), 2)   -- Remove last paranthesis UPDATE  @Sample SET     Col3 = RTRIM(REPLACE(Col3, ')', '')),         Col5 = RTRIM(REPLACE(Col5, ')', ''))   -- Remove initial information about filenum UPDATE  @Sample SET     Col3 = SUBSTRING(Col3, CHARINDEX(':', Col3) + 1, 8000),         Col5 = SUBSTRING(Col5, CHARINDEX(':', Col5) + 1, 8000)   -- Prepare data outtake ;WITH cteSource(Changed, [PageCount]) AS (     SELECT      Changed,                 SUM(COALESCE(ToPage, FromPage) - FromPage + 1) AS [PageCount]     FROM        (                     SELECT CAST(Col3 AS INT) AS FromPage,                             CAST(NULLIF(Col5, '') AS INT) AS ToPage,                             LTRIM(Col4) AS Changed                     FROM    @Sample                 ) AS d     GROUP BY    Changed     WITH ROLLUP ) -- Present the final result SELECT  COALESCE(Changed, 'TOTAL PAGES') AS Changed,         [PageCount],         100.E * [PageCount] / SUM(CASE WHEN Changed IS NULL THEN 0 ELSE [PageCount] END) OVER () AS Percentage FROM    cteSource

    Read the article

  • PHP-MySQL: Arranging rows from separate tables together/Expression to determine row origin

    - by Koroviev
    I'm new to PHP and have a two part question. I need to take rows from two separate tables, and arrange them in descending order by their date. The rows do not correspond in order or number and have no relationship with each other. ---EDIT--- They each contain updates on a site, one table holds text, links, dates, titles etc. from a blog. The other has titles, links, specifications, etc. from images. I want to arrange some basic information (title, date, small description) in an updates section on the main page of the site, and for it to be in order of date. Merging them into one table and modifying it to suit both types isn't what I'd like to do here, the blog table is Wordpress' standard wp_posts and I don't feel comfortable adding columns to make it suit the image table too. I'm afraid it could clash with upgrading later on and it seems like a clumsy solution (but that doesn't mean I'll object if people here advise me it's the best solution). ------EDIT 2------ Here are the DESCRIBES of each table: mysql> describe images; +---------+--------------+------+-----+-------------------+----------------+ | Field | Type | Null | Key | Default | Extra | +---------+--------------+------+-----+-------------------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | project | varchar(255) | NO | | NULL | | | title | varchar(255) | NO | | NULL | | | time | timestamp | NO | | CURRENT_TIMESTAMP | | | img_url | varchar(255) | NO | | NULL | | | alt_txt | varchar(255) | YES | | NULL | | | text | text | YES | | NULL | | | text_id | int(11) | YES | | NULL | | +---------+--------------+------+-----+-------------------+----------------+ mysql> DESCRIBE wp_posts; +-----------------------+---------------------+------+-----+---------------------+----------------+ | Field | Type | Null | Key | Default | Extra | +-----------------------+---------------------+------+-----+---------------------+----------------+ | ID | bigint(20) unsigned | NO | PRI | NULL | auto_increment | | post_author | bigint(20) unsigned | NO | | 0 | | | post_date | datetime | NO | | 0000-00-00 00:00:00 | | | post_date_gmt | datetime | NO | | 0000-00-00 00:00:00 | | | post_content | longtext | NO | | NULL | | | post_title | text | NO | | NULL | | | post_excerpt | text | NO | | NULL | | | post_status | varchar(20) | NO | | publish | | | comment_status | varchar(20) | NO | | open | | | ping_status | varchar(20) | NO | | open | | | post_password | varchar(20) | NO | | | | | post_name | varchar(200) | NO | MUL | | | | to_ping | text | NO | | NULL | | | pinged | text | NO | | NULL | | | post_modified | datetime | NO | | 0000-00-00 00:00:00 | | | post_modified_gmt | datetime | NO | | 0000-00-00 00:00:00 | | | post_content_filtered | text | NO | | NULL | | | post_parent | bigint(20) unsigned | NO | MUL | 0 | | | guid | varchar(255) | NO | | | | | menu_order | int(11) | NO | | 0 | | | post_type | varchar(20) | NO | MUL | post | | | post_mime_type | varchar(100) | NO | | | | | comment_count | bigint(20) | NO | | 0 | | +-----------------------+---------------------+------+-----+---------------------+----------------+ ---END EDIT--- I can do this easily with a single table like this (I include it here in case I'm using an over-elaborate method without knowing it): $content = mysql_query("SELECT post_title, post_text, post_date FROM posts ORDER BY post_date DESC"); while($row = mysql_fetch_array($content)) { echo $row['post_date'], $row['post_title'], $row['post_text']; } But how is it possible to call both tables into the same 
array to arrange them correctly? By correctly, I mean that they will intermix their echoed results based on their date. Maybe I'm looking at this from the wrong perspective, and calling them to a single array isn't the answer? Additionally, I need a way to form a conditional expression based on which table they came from, so that rows from table 1 get echoed differently than rows from table 2? I want results from table 1 to be echoed differently (with different strings concatenated around them, I mean) for the purpose of styling them differently than those from table two. And vice versa. I know an if...else statement would work here, but I have no idea how can I write the expression that would determine which table the row is from. All and any help is appreciated, thanks.
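
    Rather than interleaving in PHP, one hedged option is to let MySQL merge the two tables with a UNION ALL that tags every row with its source, which also answers the "which table did this row come from" part; the columns picked below are guesses based on the descriptions above:

      -- 'source' is what the PHP loop can branch on to style each row differently.
      (SELECT 'post'  AS source, post_title AS title, post_date AS item_date, post_excerpt AS summary
         FROM wp_posts
        WHERE post_type = 'post' AND post_status = 'publish')
      UNION ALL
      (SELECT 'image' AS source, title      AS title, time      AS item_date, text         AS summary
         FROM images)
      ORDER BY item_date DESC
      LIMIT 20;

    In the display loop, $row['source'] then drives the if...else that decides which markup to echo.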

    Read the article

  • MySql - Get row number on select

    - by George
    Can I run a select statement and get the row number if the items are sorted? I have a table like this: mysql> describe orders; +-------------+---------------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-------------+---------------------+------+-----+---------+----------------+ | orderID | bigint(20) unsigned | NO | PRI | NULL | auto_increment | | itemID | bigint(20) unsigned | NO | | NULL | | +-------------+---------------------+------+-----+---------+----------------+ I can then run this query to get the number of orders by ID: SELECT itemID, COUNT(*) as ordercount FROM orders GROUP BY itemID ORDER BY ordercount DESC; This gives me a count of each itemID in the table like this: +--------+------------+ | itemID | ordercount | +--------+------------+ | 388 | 3 | | 234 | 2 | | 3432 | 1 | | 693 | 1 | | 3459 | 1 | +--------+------------+ I want to get the row number as well, so I could tell that itemID 388 is the first row, 234 is second, etc (essentially the ranking of the orders, not just a raw count). I know I can do this in java when I get the result set back, but I was wondering if there was a way to handle it purely in SQL.
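
    On the MySQL versions of that era there are no window functions, so the usual hedged trick is a user variable that numbers the already-ordered rows (on MySQL 8.0 and later this would simply be ROW_NUMBER() OVER (ORDER BY ...)):

      SELECT (@row_num := @row_num + 1) AS row_num,
             ranked.itemID,
             ranked.ordercount
      FROM  (SELECT itemID, COUNT(*) AS ordercount
             FROM   orders
             GROUP  BY itemID
             ORDER  BY ordercount DESC) AS ranked,
            (SELECT @row_num := 0) AS init;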

    Read the article

  • Hibernate: Parent/Child relationship in a single-table

    - by Dee
    I hardly see any pointer on the following problem related to Hibernate. This pertains to implementing inheritance using a single database table with a parent-child relationship to itself. For example: CREATE TABLE Employee ( empId BIGINT NOT NULL AUTO_INCREMENT, empName VARCHAR(100) NOT NULL, managerId BIGINT, CONSTRAINT pk_employee PRIMARY KEY (empId) ) Here, the managerId column may be null, or may point to another row of the Employee table. Business rule requires the Employee to know about all his reportees and for him to know about his/her manager. The business rules also allow rows to have null managerId (the CEO of the organisation doesn't have a manager). How do we map this relationship in Hibernate, standard many-to-one relationship doesn't work here? Example code would be appreciated.

    Read the article

  • Entity Framework and Sql Server view question

    - by Sergio Romero
    Hi to all, For several reasons that I don't have the liberty to talk about, we are defining a view on our Sql Server 2005 database like so: CREATE VIEW [dbo].[MeterProvingStatisticsPoint] AS SELECT CAST(0 AS BIGINT) AS 'RowNumber', CAST(0 AS BIGINT) AS 'ProverTicketId', CAST(0 AS INT) AS 'ReportNumber', GETDATE() AS 'CompletedDateTime', CAST(1.1 AS float) AS 'MeterFactor', CAST(1.1 AS float) AS 'Density', CAST(1.1 AS float) AS 'FlowRate', CAST(1.1 AS float) AS 'Average', CAST(1.1 AS float) AS 'StandardDeviation', CAST(1.1 AS float) AS 'MeanPlus2XStandardDeviation', CAST(1.1 AS float) AS 'MeanMinus2XStandardDeviation' WHERE 0 = 1 The idea is that the Entity Framework will create an entity based on this query, which it does, but it generates it with an error that states the following: "warning 6002: The table/view 'Keystone_Local.dbo.MeterProvingStatisticsPoint' does not have a primary key defined. The key has been inferred and the definition was created as a read-only table/view." And it decides that the CompletedDateTime field will be this entity primary key. We are using EdmGen to generate the model. Is there a way not to have the entity framework include any field of this view as a primary key? Thanks for help.

    Read the article

  • 8 byte Integer with Doctrine and PHP

    - by Rufinus
    Hi, the players: 64-bit Linux with PHP 5 (ZendFramework 1.10.2), PostgreSQL 7.3, Doctrine 1.2. Via a Flash/Flex client I get an 8-byte integer value. The field in the database is a BIGINT (8 bytes). PHP_INT_SIZE shows that the system supports 8-byte integers. Printing out the value in the code as-is and as intval() leads to this: Plain: 1269452776100 intval: 1269452776099 A float rounding failure? But what is really driving me nuts is ERROR: invalid input syntax for integer: "1269452776099.000000" when I try to use it in a query, like: Doctrine_Core::getTable('table')->findBy('external_id',$external_id); or Doctrine_Core::getTable('table')->findBy('external_id',intval($external_id)); How am I supposed to handle this? Or how can I give Doctrine a floating point number which it should use on a bigint field? Any help is much appreciated! TIA

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11  | Next Page >