Search Results

Search found 15456 results on 619 pages for 'global temporary tables'.

Page 127 of 619

  • How to pass a reference to a string in JavaScript?

    - by ijjo
    Maybe a closure is my solution? Not exactly sure how to pull it off though. The code is set up like so:
    var globalVar = '';
    var globalVar2 = '';
    function func() {
      if (condition) func2(globalVar)
      else func2(globalVar2)
    }
    In func2() I cache some HTML from the main container into the appropriate global variable that I pass to it. Basically I have a main container that holds different pages depending on what tab they choose. For performance I want to cache the page into global vars so I need to know what tab is active to figure out which global var to assign the HTML to.

    Read the article

  • How do you pass objects between View Controllers in Objective-C?

    - by editor
    I've been trudging through some code for two days trying to figure out why I couldn't fetch a global NSMutableArray variable I declared in the .h and implemented in .m and set in the viewDidLoad function. It finally dawned on me: there's no such thing as a global variable in Objective-C, at least not in the PHP sense I've come to know. I never really read the Xcode error warnings, but there it was, even if not quite plain English: "Instance variable 'blah' accessed in class method." My question: What am I supposed to do now? I've got two View Controllers that need to access a central NSMutableDictionary I generate from a JSON file via URL. It's basically an extended menu for all my Table View drill downs, and I'd like to have a couple of other "global" (non-static) variables. Do I have to grab the JSON each time I want to generate this NSMutableDictionary or is there some way to set it once and access it from various classes via #import? Do I have to write data to a file, or is there another way people usually do this?

    Read the article

  • Sorting Algorithms

    - by MarkPearl
    General Every time I go back to university I find myself wading through sorting algorithms and their implementation in C++. Up to now I haven’t really appreciated their true value. However as I discovered this last week with Dictionaries in C# – having a knowledge of some basic programming principles can greatly improve the performance of a system and make one think twice about how to tackle a problem. I’m going to cover briefly in this post the following: Selection Sort, Insertion Sort, Shellsort, Quicksort, Mergesort, Heapsort (not complete). Selection Sort Array based selection sort is a simple approach to sorting an unsorted array. Simply put, it repeats two basic steps to achieve a sorted collection. It starts with a collection of data and repeatedly parses it, each time sorting out one element and reducing the size of the next iteration of parsed data by one. So the first iteration would go something like this… Go through the entire array of data and find the lowest value Place the value at the front of the array The second iteration would go something like this… Go through the array from position two (position one has already been sorted with the smallest value) and find the next lowest value in the array. Place the value at the second position in the array This process would be completed until the entire array had been sorted. A positive about selection sort is that it does not make many item movements. In fact, in a worst case scenario every item is only moved once. Selection sort is however a comparison intensive sort. If you had 10 items in a collection, just to parse the collection you would have 10+9+8+7+6+5+4+3+2=54 comparisons to sort regardless of how sorted the collection was to start with. If you think about it, if you applied selection sort to a collection already sorted, you would still perform relatively the same number of iterations as if it was not sorted at all. Many of the following algorithms try and reduce the number of comparisons if the list is already sorted – leaving one with a best case and worst case scenario for comparisons. Likewise different approaches have different levels of item movement. Depending on what is more expensive – a comparison or an item move – one may give priority to one approach over another. Insertion Sort Insertion sort tries to reduce the number of key comparisons it performs compared to selection sort by not “doing anything” if things are sorted. Assume you had a collection of numbers in the following order… 10 18 25 30 23 17 45 35 There are 8 elements in the list. If we were to start at the front of the list – 10 18 25 & 30 are already sorted. Element 5 (23) however is smaller than element 4 (30) and so needs to be repositioned. We do this by copying the value at element 5 to a temporary holder, and then begin shifting the elements before it up one. So… Element 5 would be copied to a temporary holder 10 18 25 30 23 17 45 35 – T 23 Element 4 would shift to Element 5 10 18 25 30 30 17 45 35 – T 23 Element 3 would shift to Element 4 10 18 25 25 30 17 45 35 – T 23 Element 2 (18) is smaller than the temporary holder so we put the temporary holder value into Element 3. 10 18 23 25 30 17 45 35 – T 23 We now have a sorted list up to element 5. And so we would repeat the same process by moving element 6 to a temporary value and then shifting everything up by one from element 2 to element 5.
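    To make the two approaches described above concrete, here is a rough array-based sketch of both sorts in C# (a minimal illustration assuming a plain int[] and ignoring edge cases, not a polished implementation):
    static void SelectionSort(int[] a)
    {
        // Repeatedly find the smallest remaining value and swap it into place.
        for (int i = 0; i < a.Length - 1; i++)
        {
            int min = i;
            for (int j = i + 1; j < a.Length; j++)
                if (a[j] < a[min]) min = j;                // many comparisons...
            int swap = a[i]; a[i] = a[min]; a[min] = swap; // ...but few item moves
        }
    }
    static void InsertionSort(int[] a)
    {
        // Grow a sorted prefix; shift larger elements up to make room for each new value.
        for (int i = 1; i < a.Length; i++)
        {
            int temp = a[i];                // the "temporary holder" from the example
            int j = i - 1;
            while (j >= 0 && a[j] > temp)
            {
                a[j + 1] = a[j];            // shift up by one
                j--;
            }
            a[j + 1] = temp;
        }
    }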
    As you can see, one major setback for this technique is the shifting of values up by one – this is because up to now we have been considering the collection to be an array. If however the collection was a linked list, we would not need to shift values up, but merely remove the link from the unsorted value and “reinsert” it in a sorted position, which would reduce the number of transactions performed on the collection. So… insertion sort seems to perform better than selection sort – however an implementation is slightly more complicated. This is typical with most sorting algorithms – generally, greater performance leads to greater complexity. Also, insertion sort performs better if a collection of data is already sorted. If for instance you were handed a sorted collection of size n, then only n comparisons would need to be performed to verify that it is sorted. It’s important to note that insertion sort (array based) performs a number of item moves – every time an item is “out of place” several items before it get shifted up. Shellsort – Diminishing Increment Sort So up to now we have covered Selection Sort & Insertion Sort. Selection Sort makes many comparisons and insertion sort (with an array) has the potential of making many item movements. Shellsort is an approach that takes the normal insertion sort and tries to reduce the number of item movements. In Shellsort, elements in a collection are viewed as sub-collections of a particular size. Each sub-collection is sorted so that the elements that are far apart move closer to their final position. Suppose we had a collection of 15 elements… 10 20 15 45 36 48 7 60 18 50 2 19 43 30 55 First we may view the collection as 7 sub-collections and sort each sublist, let’s say at intervals of 7 10 60 55 – 20 18 – 15 50 – 45 2 – 36 19 – 48 43 – 7 30 10 55 60 – 18 20 – 15 50 – 2 45 – 19 36 – 43 48 – 7 30 (Sorted) We then sort each sublist at a smaller interval – let’s say 4 10 55 60 18 – 20 15 50 2 – 45 19 36 43 – 48 7 30 10 18 55 60 – 2 15 20 50 – 19 36 43 45 – 7 30 48 (Sorted) We then sort elements at a distance of 1 (i.e. we apply a normal insertion sort) 10 18 55 60 2 15 20 50 19 36 43 45 7 30 48 2 7 10 15 18 19 20 30 36 43 45 48 50 55 60 (Sorted) The important thing with shellsort is deciding on the increment sequence of each sub-collection. From what I can tell, there isn’t any definitive method and depending on the order of your elements, different increment sequences may perform better than others. There are however certain increment sequences that you may want to avoid. An even-based increment sequence (e.g. 2 4 8 16 32 …) should typically be avoided because it does not allow for even elements to be compared with odd elements until the final sort phase – which in a way would negate many of the benefits of using sub-collections. The performance of Shellsort in terms of the number of comparisons and item movements is hard to determine, however it is considered to be considerably better than the normal insertion sort. Quicksort Quicksort uses a divide and conquer approach to sort a collection of items. The collection is divided into two sub-collections – and the two sub-collections are sorted and combined into one list in such a way that the combined list is sorted. The general algorithm in pseudo code is below… Divide the collection into two sub-collections Quicksort the lower sub-collection Quicksort the upper sub-collection Combine the lower & upper sub-collections together As hinted at above, quicksort uses recursion in its implementation.
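    A rough C# sketch of those quicksort steps might look something like this (using a naive last-element pivot purely for illustration; real implementations choose the pivot more carefully):
    static void QuickSort(int[] a, int low, int high)
    {
        if (low >= high) return;                 // base case: 0 or 1 element

        // Partition: values <= pivot end up in the lower sub-collection, the rest above it.
        int pivot = a[high];
        int i = low;
        for (int j = low; j < high; j++)
        {
            if (a[j] <= pivot)
            {
                int t = a[i]; a[i] = a[j]; a[j] = t;
                i++;
            }
        }
        int tmp = a[i]; a[i] = a[high]; a[high] = tmp;   // put the pivot between the two halves

        QuickSort(a, low, i - 1);                // quicksort the lower sub-collection
        QuickSort(a, i + 1, high);               // quicksort the upper sub-collection
    }
    // called as QuickSort(data, 0, data.Length - 1)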
    The real trick with quicksort is to get the lower and upper sub-collections to be of equal size. The size of a sub-collection is determined by the value of the pivot. Once a pivot is determined, one would partition into sub-collections and then repeat the process on each sub-collection until you reach the base case. With quicksort, the work is done when dividing the sub-collections into lower & upper collections. The actual combining of the lower & upper sub-collections at the end is relatively simple since every element in the lower sub-collection is smaller than the smallest element in the upper sub-collection. Mergesort With quicksort, the average-case complexity was O(n log2 n) however the worst case complexity was still O(n*n). Mergesort improves on quicksort by always having a complexity of O(n log2 n) regardless of the best or worst case. So how does it do this? Mergesort makes use of the divide and conquer approach to partition a collection into two sub-collections. It then sorts each sub-collection and combines the sorted sub-collections into one sorted collection. The general algorithm for mergesort is as follows… Divide the collection into two sub-collections Mergesort the first sub-collection Mergesort the second sub-collection Merge the first sub-collection and the second sub-collection As you can see… it still pretty much looks like quicksort – so let’s see where it differs… Firstly, mergesort differs from quicksort in how it partitions the sub-collections. Instead of having a pivot – mergesort partitions each sub-collection based on size so that the first and second sub-collections are of relatively the same size. This dividing keeps getting repeated until the sub-collections are the size of a single element. If a sub-collection is one element in size – it is now sorted! So the trick is: how do we put all these sub-collections together so that they maintain their sorted order? Sorted sub-collections are merged into a sorted collection by comparing the elements of the sub-collections and then adjusting the sorted collection. Let’s have a look at a few examples… Assume 2 sub-collections with 1 element each 10 & 20 Compare the first element of the first sub-collection with the first element of the second sub-collection. Take the smallest of the two and place it as the first element in the sorted collection. In this scenario 10 is smaller than 20 so 10 is taken from sub-collection 1 leaving that sub-collection empty, which means by default the next smallest element is in sub-collection 2 (20). So the sorted collection would be 10 20 Let’s assume 2 sub-collections with 2 elements each 10 20 & 15 19 So… again we would Compare 10 with 15 – 10 is the winner so we add it to our sorted collection (10) leaving us with 20 & 15 19 Compare 20 with 15 – 15 is the winner so we add it to our sorted collection (10 15) leaving us with 20 & 19 Compare 20 with 19 – 19 is the winner so we add it to our sorted collection (10 15 19) leaving us with 20 & _ 20 is by default the winner so our sorted collection is 10 15 19 20. Make sense? Heapsort (still needs to be completed) So by now I am tired of sorting algorithms and trying to remember why they were so important. I think every year I go through this stuff I wonder to myself why are we made to learn about selection sort and insertion sort if they are so bad – why didn’t we just skip to Mergesort & Quicksort.
I guess the only explanation I have for this is that sometimes you learn things so that you can implement them in future – and other times you learn things so that you know it isn’t the best way of implementing things and that you don’t need to implement it in future. Anyhow… luckily this is going to be the last one of my sorts for today. The first step in heapsort is to convert a collection of data into a heap. After the data is converted into a heap, sorting begins… So what is the definition of a heap? If we have to convert a collection of data into a heap, how do we know when it is a heap and when it is not? The definition of a heap is as follows: A heap is a list in which each element contains a key, such that the key in the element at position k in the list is at least as large as the key in the element at position 2k +1 (if it exists) and 2k + 2 (if it exists). Does that make sense? At first glance I’m thinking what the heck??? But then after re-reading my notes I see that we are doing something different – up to now we have really looked at data as an array or sequential collection of data that we need to sort – a heap represents data in a slightly different way – although the data is stored in a sequential collection, for a sequential collection of data to be in a valid heap – it is “semi sorted”. Let me try and explain a bit further with an example… Example 1 of Potential Heap Data Assume we had a collection of numbers as follows 1[1] 2[2] 3[3] 4[4] 5[5] 6[6] For this to be a valid heap element with value of 1 at position [1] needs to be greater or equal to the element at position [3] (2k +1) and position [4] (2k +2). So in the above example, the collection of numbers is not in a valid heap. Example 2 of Potential Heap Data Lets look at another collection of numbers as follows 6[1] 5[2] 4[3] 3[4] 2[5] 1[6] Is this a valid heap? Well… element with the value 6 at position 1 must be greater or equal to the element at position [3] and position [4]. Is 6 > 4 and 6 > 3? Yes it is. Lets look at element 5 as position 2. It must be greater than the values at [4] & [5]. Is 5 > 3 and 5 > 2? Yes it is. If you continued to examine this second collection of data you would find that it is in a valid heap based on the definition of a heap.
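    As a quick illustration of the heap definition above, here is a minimal C# check (a sketch assuming 0-based array indexing, so the children of position k sit at positions 2k + 1 and 2k + 2):
    // Returns true if 'a' satisfies the heap property described above.
    static bool IsHeap(int[] a)
    {
        for (int k = 0; k < a.Length; k++)
        {
            if (2 * k + 1 < a.Length && a[k] < a[2 * k + 1]) return false;
            if (2 * k + 2 < a.Length && a[k] < a[2 * k + 2]) return false;
        }
        return true;
    }
    // IsHeap(new[] { 1, 2, 3, 4, 5, 6 }) -> false (example 1 above)
    // IsHeap(new[] { 6, 5, 4, 3, 2, 1 }) -> true  (example 2 above)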

    Read the article

  • Very large database, very small portion most being retrieved in real time

    - by mingyeow
    Hi folks, I have an interesting database problem. I have a DB that is 150GB in size. My memory buffer is 8GB. Most of my data is rarely being retrieved, or mainly being retrieved by backend processes. I would very much prefer to keep them around because some features require them. Some of it (namely some tables, and some identifiable parts of certain tables) is used very often in a user-facing manner. How can I make sure that the latter is always being kept in memory? (there is more than enough space for these) More info: We are on Ruby on Rails. The database is MySQL, and our tables are stored using InnoDB. We are sharding the data across 2 partitions. Because we are sharding it, we store most of our data using JSON blobs, while indexing only the primary keys.

    Read the article

  • Mysqldump causes "Too many connections"

    - by vbachev
    A scheduled backup using mysqldump on one of our databases is causing "Too many connections" errors. The database has both InnoDB and MyISAM tables and is around 500 MB in size. The "Too many connections" error appears for about 2-3 minutes. We understand that mysqldump locks the tables and causes all other queries and connections to pile up and jam the MySQL server. We need frequent backups and we cannot afford server downtime or putting websites in maintenance mode while doing it. Our websites are global and traffic is high all the time so it's hard to find a moment for backups. How can we avoid downtime during backups? Is there maybe a way to use mysqldump in a way that it will not lock all tables at the same time? Is there an alternative to backing up with mysqldump?

    Read the article

  • DB2 users and groups

    - by Arun Srini
    Just want to know everyone's experience and take on managing users/authentication on a multi-node db2 cluster with user groups. I have 17 apps in production (project based company, only 2 online apps), and some 30 users with 7 groups:
    prodsel - group that has select privilege on all tables
    produpdt - update group on selective tables (as required by the apps)
    proddel - delete
    prodins - insert permissions for the group
    Now what my company does is, when an app uses a certain user (called app1user) and needs select and insert privilege on a table, they 1. grant select and insert for prodsel and prodins respectively, 2. add the user under those two groups... now this creates a one-to-many relationship between user and privileges, and this app1user also gets select on other tables granted for the prodsel group. I know this is wrong. Before I explain, I need to know how this is done elsewhere. Please share your experiences, even if you use other databases that use OS-level authentication.

    Read the article

  • JZ0SJ: Metadata accessor information was not found on this database.

    - by Anthony Kong
    The complete message is: "JZ0SJ: Metadata accessor information was not found on this database. Please install the required tables as mentioned in the jConnect documentation." The server is Adaptive Server Enterprise/15.5/EBF 18380 SMP ESD#3/P/x86_64/Enterprise Linux/asear155/2531/64-bit/FBO/Fri Jan 14 07:04:05 2011. A Google search does not turn up a definite answer to the problem, and a lot of the results are rather dated too. I checked the documentation but it does not mention any tables. What is the root cause of this problem? At the very least I would like to see a list of the required tables somewhere.

    Read the article

  • Bash script for mysql backup - error handling

    - by Jure1873
    I'm trying to backup a bunch of MyISAM tables in a way that would allow me to rsync/rdiff the backup directory to a remote location. I've come up with a script that dumps only the recently changed tables and sets the date of the file so that rsync can pick up only the changed ones, but now I don't know how to do the error handling - I would like the script to exit with a non-zero value if there are errors. How could I do that?
    #!/bin/bash
    BKPDIR="/var/backups/db-mysql"
    mkdir -p $BKPDIR
    ERRORS=0
    FIELDS="TABLE_SCHEMA, TABLE_NAME, UPDATE_TIME"
    W_COND="UPDATE_TIME >= DATE_ADD(CURDATE(), INTERVAL -2 DAY) AND TABLE_SCHEMA<>'information_schema'"
    mysql --skip-column-names -e "SELECT $FIELDS FROM information_schema.tables WHERE $W_COND;" | while read db table tstamp; do
      echo "DB: $db: TABLE: $table: ($tstamp)"
      mysqldump $db $table | gzip > $BKPDIR/$db-$table.sql.gz
      touch -d "$tstamp" $BKPDIR/$db-$table.sql.gz
    done
    exit $ERRORS

    Read the article

  • How to safely backup MySQL using VSS-based backup solutions

    - by Rhyven
    One of my clients is running MySQL on a Windows Server 2008 system. Their regular backups are performed using StorageCraft's ShadowCopy, which uses the VSS service to perform backups of open files. Some investigation indicates that MySQL is not entirely VSS-aware, and that the tables need to be locked prior to the shadow operation, then unlocked afterwards. There is a post at http://forum.storagecraft.com/Community/forums/p/548/2702.aspx which indicates the steps that need to be performed, however the user had some difficulty in performing them and no follow up solution was ever posted. Specifically, they succeeded in writing a batch file to lock the database, however once the batch file returns from MySQL it drops the connection and thus relinquishes the lock. I'm looking for a method that I can send the MySQL command FLUSH TABLES WITH READ LOCK, then perform the backup, then send UNLOCK TABLES when the backup is complete. Alternatively, I can exclude the MySQL data storage folder from backup, and schedule a mysqldump backup into a folder that will then be backed up by VSS. Can I have some recommendations please?

    Read the article

  • PostgreSQL 9: Does Vacuuming a table on the primary replicate on the mirror?

    - by Scott Herbert
    Running PostgreSQL 9.0.1, with streaming replication keeping one read-only mirror instance up to date. Auto-vacuum is on on the primary, except for a few tables which are not vacuumed by the auto-vacuum daemon, in an effort to reduce business-hour IO. These tables are "materialised views". Each night at midnight, we run a vacuum across the database in order to clean up those tables that are excluded from the auto-vacuum. I'm wondering if that process replicates across to the mirror, or if I need to set up vacuum on the mirror as well?

    Read the article

  • PHPMyAdmin - Can I view actual field values instead of the foreign keys?

    - by Callum
    I have a database with some MyISAM tables. The tables contain foreign key relationships, but they are not inherently "enforced" (I believe "referential integrity" is the term). Using PHPMyAdmin, what I'd like to do is be able to view records from the tables but have all foreign keys replaced by whatever value they are representing. Eg, instead of seeing: 23 | 14 | $6.50 I'd like to see.. Extra Large | Thin Crust | $6.50 ($6.50 would be an editable field, the first two fields would not be). Any helpers? Thanks.

    Read the article

  • Is it faster to create indexes before or after data loading in MySQL?

    - by Josh Glover
    I have a data replication process that drops and recreates a few tables in a target database, then loads them up with data from a source database (running on another host, but that is immaterial to the question at hand). The target database does need primary keys and a few other indexes on its tables, but not during the data loading. I'm currently loading all of the data, then creating the indexes. However, index creation takes a pretty long time--30 minutes of my data loader's 5 and a half hour running time. My intuition tells me that creating the indexes at the end should be faster than creating them first, since the index would need to be rewritten with each insert. Can anyone tell me for sure which way is faster? FWIW, I'm running MySQL 5.1 with InnoDB tables.

    Read the article

  • How do I start mysqld with options

    - by xiankai
    I need to start up mysqld with command line options as from here: http://dev.mysql.com/doc/refman/5.1/en/server-options.html#option_mysqld_skip-grant-tables I normally do sudo service mysqld start, but passing the option as sudo service mysqld start --skip-grant-tables does not seem to work. Alternatively I have tried starting as a daemon, sudo mysqld_safe --skip-grant-tables & But it seems to terminate too soon: 131101 04:59:57 mysqld_safe Logging to '/var/lib/mysql/vagrant.example.com.err'. 131101 04:59:57 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql 131101 05:00:03 mysqld_safe mysqld from pid file /var/lib/mysql/vagrant.example.com.pid ended My last option seems to specify the option in /etc/my.cnf instead, but is there any way to do it via the command line?

    Read the article

  • EXEC() syntax error using ODBC

    - by Mike Trader
    I have written a little ETL application that I wish to run a few lines of TSQL from. If I enter a simple query like "SELECT * FROM MyTable" everything is fine. All single line commands run as expected. A multiline query like this is also fine:
    DECLARE @TableName NVARCHAR(MAX)
    SET @TableName = 'MyTable'
    EXECUTE ( 'DROP TABLE ' + @TableName )
    However when I try and run:
    DECLARE @TableName NVARCHAR(MAX)
    OPEN Tables
    FETCH NEXT FROM Tables INTO @TableName
    WHILE @@FETCH_STATUS = 0
    BEGIN
        EXEC( 'DROP TABLE ' + @TableName )
        FETCH NEXT FROM Tables INTO @TableName
    END
    I get a syntax error after TABLE in the EXEC() call. I have spent 6 hours trying to figure this out thinking perhaps I need to escape the single quote or something. I just cannot see the problem. A set of fresh eyes would be appreciated.

    Read the article

  • Randomize table guests in Excel

    - by Jo Voud
    I have a list of people: Column A: person A, person A guest, person B, person C, person C guest, ... Column B: 1, 1, 2, 3, 3, ... So in column A there is the person's name, and column B gives a person a unique ID (the same ID for their guest so we know that they are together). Now pretend we have a list of 100 people (also note that not all persons have guests) and we have to seat them. We have a list of tables (for example 10 four-person tables and 10 six-person tables). We have to randomly assign each person to a table, with their guest seated at the same table. What is the best way to do this? (I also need to be able to generate this 4 times in a row without the same results, so that during the 4 courses of the dinner people switch tables but never lose their guest.)
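    One possible approach, sketched here in C# for illustration rather than as Excel formulas (the Person record, names and table counts below are made-up placeholders mirroring the question): shuffle whole parties, then deal each party to the first table that still has enough free seats. Re-running it gives a different seating each time.
    using System;
    using System.Collections.Generic;
    using System.Linq;

    class Seating
    {
        // One row per attendee; a guest shares their host's party id (column B in the question).
        record Person(string Name, int PartyId);

        static void Main()
        {
            var people = new List<Person>
            {
                new("person A", 1), new("person A guest", 1),
                new("person B", 2),
                new("person C", 3), new("person C guest", 3),
                // ... the rest of the guest list
            };

            var random = new Random();

            // Shuffle whole parties so a person and their guest stay together.
            var parties = people.GroupBy(p => p.PartyId)
                                .OrderBy(_ => random.Next())
                                .ToList();

            // Ten 4-seat tables followed by ten 6-seat tables, as in the question.
            var seatsLeft = Enumerable.Repeat(4, 10).Concat(Enumerable.Repeat(6, 10)).ToList();

            foreach (var party in parties)
            {
                // First table with enough free seats for the whole party.
                int t = seatsLeft.FindIndex(s => s >= party.Count());
                if (t < 0) { Console.WriteLine("Not enough seats"); return; }
                seatsLeft[t] -= party.Count();
                foreach (var p in party)
                    Console.WriteLine($"{p.Name} -> table {t + 1}");
            }
        }
    }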

    Read the article

  • Formula in table header cells

    - by Cylindric
    I have a table in Excel 2007 that I want to summarise, in a similar fashion to a Pivot Table, but for various reasons I can't use a pivot table. I like the "Format as table" features of sort and filter buttons, automatic formatting etc, so have used that to create a simple table:
             A            B            C          ...        N
       +-----------+------------+------------+-------+------------+
    1  |           | 01/01/2010 | 01/02/2010 |  ...  | 01/12/2010 |
       +-----------+------------+------------+-------+------------+
    2  | CategoryA |         15 |        545 |       |        634 |
    3  | CategoryB |         32 |        332 |       |        231 |
    4  | CategoryC |          5 |        234 |       |        644 |
       | ...       |            |            |       |            |
    27 | CategoryZ |          2 |        123 |       |         64 |
       +-----------+------------+------------+-------+------------+
    The numbers are retrieved from a "back-end" pivot table using GETPIVOTDATA(). All that works fine. Now, the problem is that I can't seem to use formulas for my column headings in these new "smart" tables - they are converted to text or just broken. For example if in B1 I put NOW(), I don't get the date, I get 00/01/1900. Is there any way of getting a formula to work in the auto tables? Or do I have to use standard tables and manually alternate-colour my rows etc?

    Read the article

  • MSWord table shading prints too dark

    - by Relaxed1
    My friend has a very light shading in his MSWord tables. However they still print too dark to read the text. When emailed to a colleague using the same printer, it prints light nicely. However they cannot find any setting that is different between them. Any ideas? Thanks! (P.s. for myself this would help for non-tables also, when 'highlighting' text. I do know that 'shading' gives more colour options for non-tables, but it would be nice to know anyway. Thanks)

    Read the article

  • db2 tablespace size and performance impact

    - by jrhickey
    Originally when we began moving to db2 LUW we ran into some issues where our tables were too big to fit into the default 4K table space. As a result of "pressure" to get it done we just went with a 32K default table space and put ALL of our tables there. What impact would that have, if any? I talked to one person who said that it would possibly make our database MUCH larger than it needed to be. Is that true? What about memory? Would there be any benefit to moving the smaller tables back to a 4K table space? I have looked around in forums and what not but cannot seem to find a good answer.

    Read the article

  • Scaling web application with SQL Server 2008 database

    - by John
    I have a database in which 90% of the tables are read-only. 10% of the tables have writable data. We need to scale the ASP.NET application. We need to add more users who will not be writing to the database. We are thinking of adding another server and routing the users who need read-only access to that server. Is there a way to replicate just some tables to another database server? Since that 90% of the data doesn't change, we don't want to set up full database replication. Please advise.

    Read the article

  • Unable to update the EntitySet because it has a DefiningQuery and no <UpdateFunction> element

    - by Harish Ranganathan
    When working with the ADO.NET Entity Data Model, it's common to generate the entity schema for more than a single table from our Database. With Entity Model generation automated with Visual Studio support, it becomes even more tempting to create and work with entity models to achieve an object mapping relationship. One of the errors that you might hit while trying to update an entity set, either programmatically using context.SaveChanges or while using the automatic insert/update code generated by GridView etc., is “Unable to update the EntitySet <EntityName> because it has a DefiningQuery and no <UpdateFunction> element exists in the <ModificationFunctionMapping> element to support the current operation”. While the description is pretty lengthy, the immediate thing that would come to our mind is to open the generated entity model code and see if we can update it accordingly. However, the first thing to check is whether, if the Entity Set is generated from a table, that table defines a primary key. Most of the time, we create tables with primary keys. But some reference tables and tables which don’t have a primary key cannot be updated using the Entity context and hence it would throw this error. Unless it is a View, in which case the default model is read-only, most of the time the above error occurs because there is no primary key defined in the table. There are other reasons why this error could pop up which I am not going into for the sake of simplicity of the post. If you find something new, please feel free to share it in comments. Hope this helps. Cheers !!!

    Read the article

  • How to: Check which table is the biggest, in SQL Server

    - by AngelEyes
    The company I work with had it's DB double its size lately, so I needed to find out which tables were the biggest. I found this on the web, and decided it's worth remembering! Taken from http://www.sqlteam.com/article/finding-the-biggest-tables-in-a-database, the code is from http://www.sqlteam.com/downloads/BigTables.sql   /************************************************************************************** * *  BigTables.sql *  Bill Graziano (SQLTeam.com) *  [email protected] *  v1.1 * **************************************************************************************/ DECLARE @id INT DECLARE @type CHARACTER(2) DECLARE @pages INT DECLARE @dbname SYSNAME DECLARE @dbsize DEC(15, 0) DECLARE @bytesperpage DEC(15, 0) DECLARE @pagesperMB DEC(15, 0) CREATE TABLE #spt_space   (      objid    INT NULL,      ROWS     INT NULL,      reserved DEC(15) NULL,      data     DEC(15) NULL,      indexp   DEC(15) NULL,      unused   DEC(15) NULL   ) SET nocount ON -- Create a cursor to loop through the user tables DECLARE c_tables CURSOR FOR   SELECT id   FROM   sysobjects   WHERE  xtype = 'U' OPEN c_tables FETCH NEXT FROM c_tables INTO @id WHILE @@FETCH_STATUS = 0   BEGIN       /* Code from sp_spaceused */       INSERT INTO #spt_space                   (objid,                    reserved)       SELECT objid = @id,              SUM(reserved)       FROM   sysindexes       WHERE  indid IN ( 0, 1, 255 )              AND id = @id       SELECT @pages = SUM(dpages)       FROM   sysindexes       WHERE  indid < 2              AND id = @id       SELECT @pages = @pages + Isnull(SUM(used), 0)       FROM   sysindexes       WHERE  indid = 255              AND id = @id       UPDATE #spt_space       SET    data = @pages       WHERE  objid = @id       /* index: sum(used) where indid in (0, 1, 255) - data */       UPDATE #spt_space       SET    indexp = (SELECT SUM(used)                        FROM   sysindexes                        WHERE  indid IN ( 0, 1, 255 )                               AND id = @id) - data       WHERE  objid = @id       /* unused: sum(reserved) - sum(used) where indid in (0, 1, 255) */       UPDATE #spt_space       SET    unused = reserved - (SELECT SUM(used)                                   FROM   sysindexes                                   WHERE  indid IN ( 0, 1, 255 )                                          AND id = @id)       WHERE  objid = @id       UPDATE #spt_space       SET    ROWS = i.ROWS       FROM   sysindexes i       WHERE  i.indid < 2              AND i.id = @id              AND objid = @id       FETCH NEXT FROM c_tables INTO @id   END SELECT TOP 25 table_name = (SELECT LEFT(name, 25)                             FROM   sysobjects                             WHERE  id = objid),               ROWS = CONVERT(CHAR(11), ROWS),               reserved_kb = Ltrim(Str(reserved * d.low / 1024., 15, 0) + ' ' + 'KB'),               data_kb = Ltrim(Str(data * d.low / 1024., 15, 0) + ' ' + 'KB'),               index_size_kb = Ltrim(Str(indexp * d.low / 1024., 15, 0) + ' ' + 'KB'),               unused_kb = Ltrim(Str(unused * d.low / 1024., 15, 0) + ' ' + 'KB') FROM   #spt_space,        MASTER.dbo.spt_values d WHERE  d.NUMBER = 1        AND d.TYPE = 'E' ORDER  BY reserved DESC DROP TABLE #spt_space CLOSE c_tables DEALLOCATE c_tables

    Read the article

  • Using transactions with LINQ-to-SQL

    - by Jalpesh P. Vadgama
    Today one of my colleagues asked how we can use transactions with LINQ-to-SQL classes when more than one entity is updated at the same time. It was a good question. Here is my answer for that. .NET 2.0 and higher versions have a class called TransactionScope which can be used to manage transactions with LINQ. Let’s take a simple scenario: we have a shopping cart application in which we are storing the details of a particular order placed into the database using LINQ-to-SQL. There are two tables, Order and OrderDetails, which will have all the information related to an order. Order will store general information about orders while the OrderDetails table will have the product and quantity of product for a particular order. We need to insert data into both tables at the same time, and if any error occurs then it should roll back the transaction. To use TransactionScope in the above scenario, first we have to add a reference to System.Transactions as below. After adding the reference we need to drag and drop the Order and OrderDetails tables into the LINQ-to-SQL classes; it will create entities for them. Below is the code that uses a transaction scope to manage the transaction with the LINQ context.
    MyContextDataContext objContext = new MyContextDataContext();
    using (System.Transactions.TransactionScope tScope = new System.Transactions.TransactionScope(TransactionScopeOption.Required))
    {
        objContext.Order.InsertOnSubmit(Order);
        objContext.OrderDetails.InsertOnSubmit(OrderDetails);
        objContext.SubmitChanges();
        tScope.Complete();
    }
    Here it will commit the transaction only if the using block runs successfully. Hope this will help you. Technorati Tags: Linq,Transaction,System.Transactions,ASP.NET

    Read the article

  • OWB 11gR2 - Early Arriving Facts

    - by Dawei Sun
    A common challenge when building ETL components for a data warehouse is how to handle early arriving facts. OWB 11gR2 introduced a new feature to address this for dimensional objects entitled Orphan Management. An orphan record is one that does not have a corresponding existing parent record. Orphan management automates the process of handling source rows that do not meet the requirements necessary to form a valid dimension or cube record. In this article, a simple example will be provided to show you how to use Orphan Management in OWB. We first import a sample MDL file that contains all the objects we need. Then we take some time to examine all the objects. After that, we prepare the source data, deploy the target table and dimension/cube loading map. Finally, we run the loading maps, and check the data in target dimension/cube tables. OK, let’s start… 1. Import MDL file and examine sample project First, download zip file from here, which includes a MDL file and three source data files. Then we open OWB design center, import orphan_management.mdl by using the menu File->Import->Warehouse Builder Metadata. Now we have several objects in BI_DEMO project as below: Mapping LOAD_CHANNELS_OM: The mapping for dimension loading. Mapping LOAD_SALES_OM: The mapping for cube loading. Dimension CHANNELS_OM: The dimension that contains channels data. Cube SALES_OM: The cube that contains sales data. Table CHANNELS_OM: The star implementation table of dimension CHANNELS_OM. Table SALES_OM: The star implementation table of cube SALES_OM. Table SRC_CHANNELS: The source table of channels data, that will be loaded into dimension CHANNELS_OM. Table SRC_ORDERS and SRC_ORDER_ITEMS: The source tables of sales data that will be loaded into cube SALES_OM. Sequence CLASS_OM_DIM_SEQ: The sequence used for loading dimension CHANNELS_OM. Dimension CHANNELS_OM This dimension has a hierarchy with three levels: TOTAL, CLASS and CHANNEL. Each level has three attributes: ID (surrogate key), NAME and SOURCE_ID (business key). It has a standard star implementation. The orphan management policy and the default parent setting are shown in the following screenshots: The orphan management policy options that you can set for loading are: Reject Orphan: The record is not inserted. Default Parent: You can specify a default parent record. This default record is used as the parent record for any record that does not have an existing parent record. If the default parent record does not exist, Warehouse Builder creates the default parent record. You specify the attribute values of the default parent record at the time of defining the dimensional object. If any ancestor of the default parent does not exist, Warehouse Builder also creates this record. No Maintenance: This is the default behavior. Warehouse Builder does not actively detect, reject, or fix orphan records. While removing data from a dimension, you can select one of the following orphan management policies: Reject Removal: Warehouse Builder does not allow you to delete the record if it has existing child records. No Maintenance: This is the default behavior. Warehouse Builder does not actively detect, reject, or fix orphan records. (More details are at http://download.oracle.com/docs/cd/E11882_01/owb.112/e10935/dim_objects.htm#insertedID1) Cube SALES_OM This cube is references to dimension CHANNELS_OM. It has three measures: AMOUNT, QUANTITY and COST. 
The orphan management policy setting are shown as following screenshot: The orphan management policy options that you can set for loading are: No Maintenance: Warehouse Builder does not actively detect, reject, or fix orphan rows. Default Dimension Record: Warehouse Builder assigns a default dimension record for any row that has an invalid or null dimension key value. Use the Settings button to define the default parent row. Reject Orphan: Warehouse Builder does not insert the row if it does not have an existing dimension record. (More details are at http://download.oracle.com/docs/cd/E11882_01/owb.112/e10935/dim_objects.htm#BABEACDG) Mapping LOAD_CHANNELS_OM This mapping loads source data from table SRC_CHANNELS to dimension CHANNELS_OM. The operator CHANNELS_IN is bound to table SRC_CHANNELS; CHANNELS_OUT is bound to dimension CHANNELS_OM. The TOTALS operator is used for generating a constant value for the top level in the dimension. The CLASS_FILTER operator is used to filter out the “invalid” class name, so then we can see what will happen when those channel records with an “invalid” parent are loading into dimension. Some properties of the dimension operator in this mapping are important to orphan management. See the screenshot below: Create Default Level Records: If YES, then default level records will be created. This property must be set to YES for dimensions and cubes if one of their orphan management policies is “Default Parent” or “Default Dimension Record”. This property is set to NO by default, so the user may need to set this to YES manually. LOAD policy for INVALID keys/ LOAD policy for NULL keys: These two properties have the same meaning as in the dimension editor. The values are set to the same as the dimension value when user drops the dimension into the mapping. The user does not need to modify these properties. Record Error Rows: If YES, error rows will be inserted into error table when loading the dimension. REMOVE Orphan Policy: This property is used when removing data from a dimension. Since the dimension loading type is set to LOAD in this example, this property is disabled. Mapping LOAD_SALES_OM This mapping loads source data from table SRC_ORDERS and SRC_ORDER_ITEMS to cube SALES_OM. This mapping seems a little bit complicated, but operators in the red rectangle are used to filter out and generate the records with “invalid” or “null” dimension keys. Some properties of the cube operator in a mapping are important to orphan management. See the screenshot below: Enable Source Aggregation: Should be checked in this example. If the default dimension record orphan policy is set for the cube operator, then it is recommended that source aggregation also be enabled. Otherwise, the orphan management processing may produce multiple fact rows with the same default dimension references, which will cause an “unstable rowset” execution error in the database, since the dimension refs are used as update match attributes for updating the fact table. LOAD policy for INVALID keys/ LOAD policy for NULL keys: These two properties have the same meaning as in the cube editor. The values are set to the same as in the cube editor when the user drops the cube into the mapping. The user does not need to modify these properties. Record Error Rows: If YES, error rows will be inserted into error table when loading the cube. 2. Deploy objects and mappings We now can deploy the objects. First, make sure location SALES_WH_LOCAL has been correctly configured. 
Then open Control Center Manager by using the menu Tools->Control Center Manager. Expand BI_DEMO->SALES_WH_LOCAL, click SALES_WH node on the project tree. We can see the following objects: Deploy all the objects in the following order: Sequence CLASS_OM_DIM_SEQ Table CHANNELS_OM, SALES_OM, SRC_CHANNELS, SRC_ORDERS, SRC_ORDER_ITEMS Dimension CHANNELS_OM Cube SALES_OM Mapping LOAD_CHANNELS_OM, LOAD_SALES_OM Note that we deployed source tables as well. Normally, we import source table from database instead of deploying them to target schema. However, in this example, we designed the source tables in OWB and deployed them to database for the purpose of this demonstration. 3. Prepare and examine source data Before running the mappings, we need to populate and examine the source data first. Run SRC_CHANNELS.sql, SRC_ORDERS.sql and SRC_ORDER_ITEMS.sql as target user. Then we check the data in these three tables. Table SRC_CHANNELS SQL> select rownum, id, class, name from src_channels; Records 1~5 are correct; they should be loaded into dimension without error. Records 6,7 and 8 have null parents; they should be loaded into dimension with a default parent value, and should be inserted into error table at the same time. Records 9, 10 and 11 have “invalid” parents; they should be rejected by dimension, and inserted into error table. Table SRC_ORDERS and SRC_ORDER_ITEMS SQL> select rownum, a.id, a.channel, b.amount, b.quantity, b.cost from src_orders a, src_order_items b where a.id = b.order_id; Record 178 has null dimension reference; it should be loaded into cube with a default dimension reference, and should be inserted into error table at the same time. Record 179 has “invalid” dimension reference; it should be rejected by cube, and inserted into error table. Other records should be aggregated and loaded into cube correctly. 4. Run the mappings and examine the target data In the Control Center Manager, expand BI_DEMO-> SALES_WH_LOCAL-> SALES_WH-> Mappings, right click on LOAD_CHANNELS_OM node, click Start. Use the same way to run mapping LOAD_SALES_OM. When they successfully finished, we can check the data in target tables. Table CHANNELS_OM SQL> select rownum, total_id, total_name, total_source_id, class_id,class_name, class_source_id, channel_id, channel_name,channel_source_id from channels_om order by abs(dimension_key); Records 1,2 and 3 are the default dimension records for the three levels. Records 8, 10 and 15 are the loaded records that originally have null parents. We see their parents name (class_name) is set to DEF_CLASS_NAME. Those records whose CHANNEL_NAME are Special_4, Special_5 and Special_6 are not loaded to this table because of the invalid parent. Error Table CHANNELS_OM_ERR SQL> select rownum, class_source_id, channel_id, channel_name,channel_source_id, err$$$_error_reason from channels_om_err order by channel_name; We can see all the record with null parent or invalid parent are inserted into this error table. Error reason is “Default parent used for record” for the first three records, and “No parent found for record” for the last three. Table SALES_OM SQL> select a.*, b.channel_name from sales_om a, channels_om b where a.channels=b.channel_id; We can see the order record with null channel_name has been loaded into target table with a default channel_name. The one with “invalid” channel_name are not loaded. 
Error Table SALES_OM_ERR SQL> select a.amount, a.cost, a.quantity, a.channels, b.channel_name, a.err$$$_error_reason from sales_om_err a, channels_om b where a.channels=b.channel_id(+); We can see the order records with null or invalid channel_name are inserted into error table. If the dimension reference column is null, the error reason is “Default dimension record used for fact”. If it is invalid, the error reason is “Dimension record not found for fact”. Summary In summary, this article illustrated the Orphan Management feature in OWB 11gR2. Automated orphan management policies improve ETL developer and administrator productivity by addressing an important cause of cube and dimension load failures, without requiring developers to explicitly build logic to handle these orphan rows.

    Read the article

  • PASS 13 Dispatches: Memory Optimized = On

    - by Tony Davis
    I'm at the PASS Summit in Charlotte for the Day 1 keynote by Quentin Clarke, Corporate VP of the data platform group at Microsoft. He's talking about how SQL Server 2014 is “pushing boundaries” and first up is SQL Server 2014's In-Memory OLTP technology (former codename “hekaton”). It is a feature that provokes a lot of interest and for good reason as, without any need for application rewrites or hardware updates, it can enable us to ensure that an application can find in memory most or all of the data it needs, and can lead to huge improvements in processing times. A good recent hekaton use cases article talks about applications that need a “Shock Absorber” when either spikes or just a high rate of incoming workload (including data in ETL scenarios) become a primary bottleneck. To get a really deep look at this technology, I would check out David DeWitt's summit keynote tomorrow (it will be live streamed). Other than that, to get started I'd recommend Kalen Delaney's whitepaper. She offers a lot of insight into how it works and how to start to define memory-optimized tables, and natively compiled stored procedures. These memory-optimized tables use completely optimistic multi-version concurrency control – no waiting on locks! After that, Tom LaRock has compiled a useful set of links to drill deeper, and includes one to Microsoft's AMR tool to help you gauge the tables that might benefit most. Tony.

    Read the article

  • Reading and conditionally updating N rows, where N > 100,000 for DNA Sequence processing

    - by makerofthings7
    I have a proof of concept application that uses Azure tables to associate DNA sequences to "something". Table 1 is the master table. It uniquely lists every DNA sequence. The PK is a load balanced hash of the RK. The RK is the unique encoded value of the DNA sequence. Additional tables are created per subject. Each subject has a list of N DNA sequences that have one reference in the Master table, where N is 100,000. It is possible for many tables to reference the same DNA sequence, but in this case only one entry will be present in the Master table. My Azure dilemma: I need to lock the reference in the Master table as I work with the data. I need to handle timeouts, and prevent other threads from overwriting my data as one C# thread is working with the information. Other threads need to realise that this is locked, and move onto other unlocked records and do the work. Ideally I'd like to get some progress report of how my computation is going, and have the option to cancel the process (and unwind the locks). Question What is the best approach for this? I'm looking at these code snippets for inspiration: http://blogs.msdn.com/b/jimoneil/archive/2010/10/05/azure-home-part-7-asynchronous-table-storage-pagination.aspx http://stackoverflow.com/q/4535740/328397

    Read the article
