Search Results

Search found 9926 results on 398 pages for 'lookup tables'.


  • Optimizing for speed - 4 dimensional array lookup in C

    - by Tiago
    I have a fitness function that scores the values in an int array based on data stored in a 4D array. The profiler says this function uses 80% of CPU time (it needs to be called several million times). I can't seem to optimize it further (if that's even possible). Here is the function:

        unsigned int lookup_array[26][26][26][26]; /* lookup_array is a global variable */

        unsigned int get_i_score(unsigned int *input)
        {
            unsigned int i, score = 0; /* len is a global holding the length of input */
            for (i = len - 3; i--; )
                score += lookup_array[input[i]][input[i + 1]][input[i + 2]][input[i + 3]];
            return score;
        }

    I've tried flattening the array to a single dimension but there was no improvement in performance. This is running on an IA32 CPU. Any CPU-specific optimizations are also helpful. Thanks

    Read the article

  • SSRS 2008 Need to lookup customer name with largest order

    - by Chris
    Hi, I'm creating an SSRS report which contains a table of orders, grouped by day. I can easily get the maximum order value for the day and put it in the group header using the SSRS MAX() function. However, I also want the name of the customer who placed that order, and to place it in the group header too. We can assume my result set simply contains date, name and order value. Is there any way to do this in SSRS 2008? Thanks
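    One way to approach this at the dataset level, offered as a sketch: a window aggregate can tag every row with its day's maximum, so the row whose value equals that maximum carries the customer name to display (table and column names here are assumptions based on the description above):

        SELECT order_date,
               customer_name,
               order_value,
               -- every row in a day carries that day's largest order value
               MAX(order_value) OVER (PARTITION BY order_date) AS day_max
        FROM   orders;

    With day_max in the result set, the group header can show the name from the row where order_value = day_max; alternatively, sorting the group by order value descending and using First(Fields!customer_name.Value) in the header should achieve the same effect without touching the query.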

    Read the article

  • Inequality joins, Asynchronous transformations and Lookups : SSIS

    - by jamiet
    It is pretty much accepted by SQL Server Integration Services (SSIS) developers that synchronous transformations are generally quicker than asynchronous transformations (for a description of synchronous and asynchronous transformations go read Asynchronous and synchronous data flow components). Notice I said "generally" and not "always"; there are circumstances where using asynchronous transformations can be beneficial, and in this blog post I'll demonstrate such a scenario, one that is pretty common when building data warehouses.

    Imagine I have a [Customer] dimension table that manages information about all of my customers as a slowly-changing dimension. If that is a type 2 slowly changing dimension then you will likely have multiple rows per customer in that table. Furthermore you might also have datetime fields that indicate the effective time period of each member record. The table in my example contains data for four dimension members {Terry, Max, Henry, Horace}; we have multiple records per customer, and the [SCDStartDate] of a record is equivalent to the [SCDEndDate] of the record that preceded it (if there was one). (Note that I am on record as saying I am not a fan of this technique of storing an [SCDEndDate], but for the purposes of clarity I have included it here.)

    Anyway, the idea here is that we will have some incoming data containing [CustomerName] & [EffectiveDate], and we need to use those values to look up [Customer].[CustomerId]. The logic will be: look up a [CustomerId] WHERE [CustomerName]=[CustomerName] AND [SCDStartDate] <= [EffectiveDate] AND [EffectiveDate] <= [SCDEndDate].

    The conventional approach to this would be to use a full cached lookup, but that isn't an option here because we are using inequality conditions. The obvious next step then is to use a non-cached lookup, which enables us to change the SQL statement to use inequality operators. In that dataflow all of the components are synchronous. This approach works just fine, however it has the limitation that it must issue a SQL statement against your lookup set for every row; thus we can expect the execution time of our dataflow to increase linearly in line with the number of rows in our dataflow. That's not good.

    OK, that's the obvious method. Let's now look at a different way of achieving this using an asynchronous Merge Join transform coupled with a Conditional Split. Viewing that dataflow post-execution (so that the row counts are visible), you would notice that there are more rows output from the Merge Join component than on its input. That is because we are joining on [CustomerName] and, as we know, we have multiple records per [CustomerName] in our lookup set. Notice also that there are two asynchronous components in here (the Sort and the Merge Join).

    I have embedded a video below that compares the execution times for each of these two methods. The video is just over 8 minutes long. View on Vimeo. For those that can't be bothered watching the video, I'll give the results here: the dataflow that used the Lookup transform took 36 seconds, whereas the dataflow that used the Merge Join took less than two seconds. Pretty conclusive proof that in some scenarios it may be quicker to use an asynchronous component than a synchronous one. Your mileage may of course vary.

    The scenario outlined here is analogous to performance tuning procedural SQL that uses cursors.
    It is common to eliminate cursors by converting them to set-based operations, and that is effectively what we have done here. Our non-cached lookup performs a discrete operation for every single row of data, exactly like a cursor does. By eliminating this cursor-in-disguise we have dramatically sped up our dataflow. I hope all of that proves useful. You can download the package that I demonstrated in the video from my SkyDrive at http://cid-550f681dad532637.skydrive.live.com/self.aspx/Public/BlogShare/20100514/20100514%20Lookups%20and%20Merge%20Joins.zip

    Comments are welcome as always. @Jamiet
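    For reference, the parameterised statement the non-cached lookup issues for each pipeline row would look roughly like this (column names are taken from the post; the table name is an assumption):

        SELECT CustomerId
        FROM   [Customer]
        WHERE  CustomerName = ?        -- parameter from the pipeline row
          AND  SCDStartDate <= ?       -- EffectiveDate parameter
          AND  ? <= SCDEndDate;        -- issued once per row, hence the linear cost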

    Read the article

  • Why not use tables for layout in HTML?

    - by Bno
    It seems to be the general opinion that tables should not be used for layout in HTML. Why? I have never (or rarely, to be honest) seen good arguments for this. The usual answers are:

    • It's good to separate content from layout. But this is a fallacious argument; Cliche Thinking. I guess it's true that using the table element for layout has little to do with tabular data. So what? Does my boss care? Do my users care? Perhaps me or my fellow developers who have to maintain a web page care... Is a table less maintainable? I think using a table is easier than using divs and CSS. By the way... why is using a div or a span good separation of content from layout, and a table not? Getting a good layout with only divs often requires a lot of nested divs.

    • Readability of the code. I think it's the other way around. Most people understand HTML, few understand CSS.

    • It's better for SEO not to use tables. Why? Can anybody show some evidence that it is? Or a statement from Google that tables are discouraged from an SEO perspective?

    • Tables are slower. An extra tbody element has to be inserted. This is peanuts for modern web browsers. Show me some benchmarks where the use of a table significantly slows down a page.

    • A layout overhaul is easier without tables, see css Zen Garden. Most web sites that need an upgrade need new content (HTML) as well. Scenarios where a new version of a web site only needs a new CSS file are not very likely. Zen Garden is a nice web site, but a bit theoretical. Not to mention its misuse of CSS.

    I am really interested in good arguments to use divs + CSS instead of tables.

    Read the article

  • In Oracle 10g, how do I see tables in schemas other than SYSTEM

    - by Yasir Arsanukaev
    In the hr schema there's a table employees. I can query data from it by specifying the schema explicitly, SELECT count(*) FROM hr.employees, while logged in as system, but when I navigate to Home > Object Browser in the browser I can't see the employees table or the other tables in other schemas. Can I manage tables in schemas other than system while logged in as system? Maybe there's some setting which enables me to group tables by schema. My Oracle Database version is 10.2.0.1 Express Edition. IIRC I could see all tables in Oracle Database 10.1.x.y. Thanks.
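    If the aim is simply to enumerate another schema's tables from the system session, the data dictionary views work regardless of what the XE web console shows; a minimal sketch:

        -- Every table visible to the current user, restricted to the HR schema
        SELECT owner, table_name
        FROM   all_tables
        WHERE  owner = 'HR'
        ORDER  BY table_name;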

    Read the article

  • Vertically Merge Multiple Tables in MySQL by Joint Primary Key

    - by world
    Hello, I'll attempt to make my question as clear as possible. I'm fairly inexperienced with SQL and only know the really basic queries. To get a better idea I'd been reading the MySQL manual for the past few days, but I couldn't really find a concrete solution, and my needs are quite specific. I've got 3 MySQL MyISAM tables: table1, table2 and table3. Each table has an ID column (ID, ID2, ID3 respectively) and different data columns. For example table1 has [ID, Name, Birthday, Status, ...] columns, table2 has [ID2, Country, Zip, ...], table3 has [ID3, Source, Phone, ...], you get the idea. The ID, ID2, ID3 columns are common to all three tables: if there's an ID value in table1 it will also appear in table2 and table3. The number of rows in these tables is identical, about 10m rows in each table. What I'd like to do is create a new table that contains (most of) the columns of all three tables and merge them into it. The dates, for instance, must be converted, because right now they're in VARCHAR YYYYMMDD format. Reading the MySQL manual I figured STR_TO_DATE() would do the job, but I don't know how to write the query itself in the first place, so I have no idea how to integrate the date conversion. So basically, after I create the new table (which I do know how to do), how can I merge the three tables into it, integrating the date conversion into the query?
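    A minimal sketch of the kind of statement this calls for, assuming the new table merged has already been created and that Birthday is the VARCHAR YYYYMMDD column to convert (all other names come from the question):

        INSERT INTO merged (ID, Name, Birthday, Status, Country, Zip, Source, Phone)
        SELECT t1.ID,
               t1.Name,
               STR_TO_DATE(t1.Birthday, '%Y%m%d'),  -- VARCHAR 'YYYYMMDD' -> DATE
               t1.Status,
               t2.Country, t2.Zip,
               t3.Source, t3.Phone
        FROM   table1 t1
        JOIN   table2 t2 ON t2.ID2 = t1.ID
        JOIN   table3 t3 ON t3.ID3 = t1.ID;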

    Read the article

  • Alternative to Windows Azure tables out of the cloud

    - by John Grey
    Hi :) I'm developing a .NET app, which needs to run both on Azure and on regular Windows Servers(2003). It needs to store a few GB of data and SQL Azure is too expensive for me, so I'll use Azure tables in the cloud version. Can you recommend a storage solution, which will run on standalone servers and have an API and behavior similar to Azure tables? From what I've seen Server AppFabric does not include Tables. TIA

    Read the article

  • Cut in Excel doesn't work, and copying tables from one program to another returns text

    - by Kristina
    My Excel 2007 on Windows 7 seems to have a problem with the regular cut function. When I highlight cells I want to cut and press cut (either the keyboard shortcut Ctrl+X, the Home menu cut command, or the right-click menu), the cells flash for a split second and then just return to normal. When I then paste them, they paste as if the copy function had been used. If I right-click to use the "Insert cut cells" function, it is not one of the offered options at all. On my home computer I have the same combination, Excel 2007 on Windows 7, and it works just fine. Could the problem be due to the 64-bit Windows 7 at my job versus the 32-bit version at home? Another problem: when I copy a table from Excel to Word, pasting results in unformatted text instead of the table as it was in Excel. Has anyone had these problems and can offer a solution? Thanks a lot.

    Read the article

  • Separate tables or single table with queries?

    - by Joe
    I'm making an employee information database. I need to handle separated employees. Should I (a) set up a query with a macro to move separated employees to a separate table, or (b) just add a flag to the single table denoting separation? I understand that it's best practice to take choice (b), and the one reason I can think of for this is that any structural changes I make to the table later will otherwise have to be made in both places. But it also seems like the flag forces me to filter it out in basically every useful query I'm going to write in the future.
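    If choice (b) wins out, a view can absorb that repeated filter so routine queries stay clean; a minimal sketch, with table and column names as assumptions:

        -- The base table keeps everyone; the flag marks separated employees
        CREATE VIEW active_employees AS
        SELECT *
        FROM   employees
        WHERE  separated = 0;

        -- Everyday queries read the view and never mention the flag
        SELECT * FROM active_employees WHERE department = 'Finance';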

    Read the article

  • DNS lookup when using a CDN

    - by Steven Wu
    Using a CDN can vastly improve the load time of a website, and I've been thinking of using one to host all my external files: CSS, JS, images, videos, etc. However, when linking to a CDN, wouldn't the browser have to perform an additional DNS lookup? Wouldn't this be counterproductive? Or does the benefit of hosting every external file on a CDN outweigh the cost of the additional DNS lookup? What are your thoughts?

    Read the article

  • T-SQL Dynamic SQL and Temp Tables

    - by George
    It looks like #temptables created using dynamic SQL via the EXECUTE string method have a different scope and can't be referenced by "fixed" SQL in the same stored procedure. However, I can reference a temp table created by one dynamic SQL statement in a subsequent dynamic SQL statement, but it seems that a stored procedure does not return a query result to a calling client unless the SQL is fixed.

    A simple 2-table scenario: I have 2 tables. Let's call them Orders and Items. Orders has a primary key of OrderId and Items has a primary key of ItemId. Items.OrderId is the foreign key identifying the parent Order. An Order can have 1 to n Items.

    I want to provide a very flexible "query builder" type interface to the user to allow the user to select which Items he wants to see. The filter criteria can be based on fields from the Items table and/or from the parent Orders table. If an Item meets the filter conditions, including any condition on the parent Order if one exists, the Item should be returned by the query, as well as the parent Order.

    Usually, I suppose, most people would construct a join between the Items table and the parent Orders table. I would like to perform 2 separate queries instead: one to return all of the qualifying Items and the other to return all of the distinct parent Orders. The reason is twofold, and you may or may not agree. The first reason is that I need to query all of the columns in the parent Orders table, and if I did a single query joining Orders to Items, I would be repeating the Order information multiple times. Since there are typically a large number of Items per Order, I'd like to avoid this because it would result in much more data being transferred to a fat client. Instead, as mentioned, I would like to return the two tables individually in a dataset and use the two tables within to populate custom Order and child Item client objects. (I don't know enough about LINQ or Entity Framework yet; I build my objects by hand.) The second reason I would like to return two tables instead of one is that I already have another procedure that returns all of the Items for a given OrderId along with the parent Order, and I would like to use the same 2-table approach so that I can reuse the client code that populates my custom Order and child Item objects from the 2 datatables returned.

    What I was hoping to do was construct a dynamic SQL string on the client which joins the Orders table to the Items table and filters appropriately on each table as specified by the custom filter created in the WinForms fat-client app. The SQL built on the client would have looked something like this:

        TempSQL = "
            SELECT Orders.OrderId, Items.ItemId
            INTO #ItemsToQuery
            FROM Orders, Items
            WHERE Orders.OrderId = Items.OrderId
              AND /* some unpredictable Order filters go here */
              AND /* some unpredictable Items filters go here */ "

    Then, I would call a stored procedure:

        CREATE PROCEDURE GetItemsAndOrders(@tempSql nvarchar(max))
        AS
        BEGIN
            EXECUTE (@tempSql) -- to create and populate the #ItemsToQuery table
            SELECT * FROM Items WHERE ItemId IN (SELECT ItemId FROM #ItemsToQuery)
            SELECT * FROM Orders WHERE OrderId IN (SELECT DISTINCT OrderId FROM #ItemsToQuery)
        END

    The problem with this approach is that the #ItemsToQuery table, since it was created by dynamic SQL, is inaccessible from the following 2 static SQL statements, and if I change the static SQL to dynamic, no results are passed back to the fat client.
    Three workarounds come to mind, but I'm looking for a better one:

    1) The first SQL could be performed by executing the dynamically constructed SQL from the client. The results could then be passed as a table to a modified version of the above stored procedure. I am familiar with passing table data as XML. If I did this, the stored proc could then insert the data into a temp table using static SQL and, because the table was created by static SQL, it could be queried without issue. (I could also investigate passing the new table-valued parameter type instead of XML.) However, I would like to avoid passing potentially large lists up to a stored procedure.

    2) I could perform all the queries from the client. The first would be something like this:

        SELECT Items.* FROM Orders, Items WHERE Orders.OrderId = Items.OrderId AND (dynamic filter)
        SELECT Orders.* FROM Orders, Items WHERE Orders.OrderId = Items.OrderId AND (dynamic filter)

    This still provides me with the ability to reuse my client-side object-population code, because the Orders and Items continue to be returned in two different tables.

    I also have a feeling that I might have some options using a table data type within my stored proc, but that is new to me and I would appreciate a little spoon-feeding on that one. If you even scanned this far into what I wrote, I am surprised, but if so, I would appreciate any of your thoughts on how best to accomplish this.
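    One more possibility worth noting, as a sketch: temp-table scope is asymmetric, so while static SQL cannot see a #table created inside a dynamic batch, a dynamic batch can see a #table created by the calling procedure's static SQL. That means the procedure can own #ItemsToQuery and let the dynamic string merely populate it (parameter and column names here are assumptions):

        CREATE PROCEDURE GetItemsAndOrders(@filterSql nvarchar(max))
        AS
        BEGIN
            -- Static DDL: the temp table belongs to this procedure's scope
            CREATE TABLE #ItemsToQuery (OrderId int, ItemId int);

            -- Dynamic batches inherit the caller's temp tables, so this INSERT works
            EXECUTE ('INSERT INTO #ItemsToQuery (OrderId, ItemId) ' + @filterSql);

            -- Static SELECTs can now reference the table and return results to the client
            SELECT * FROM Items  WHERE ItemId  IN (SELECT ItemId FROM #ItemsToQuery);
            SELECT * FROM Orders WHERE OrderId IN (SELECT DISTINCT OrderId FROM #ItemsToQuery);
        END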

    Read the article

  • Maximum number of workable tables in SQL Server And MySQL

    - by Kibbee
    I know that in SQL Server the maximum number of "objects" in a database is a little over 2 billion. Objects include tables, views, stored procedures and indexes, among other things. I'm not at all worried about going beyond 2 billion objects. What I would like to know is whether SQL Server suffers a performance hit from having a large number of tables. Does each table you add carry a performance cost, or is there basically no difference (assuming a constant amount of data)? Does anybody have any experience working with databases containing thousands of tables? I'm also wondering the same about MySQL.

    Read the article

  • Silverlight and Azure Tables

    - by Phil Wright
    Of the following two options...

    1. Silverlight app talks directly to Azure Tables
    2. Silverlight app talks to a Web Role using WCF, and that Web Role accesses Azure Tables

    Which are possible? Which is the recommended approach?

    Read the article

  • PHP CMS with ability to create custom tables

    - by Cracker
    I am building a website. I have created the database in MySQL. I need to build the web pages really fast! Is there a PHP CMS with which I can easily create web pages with forms that can modify my database tables? The point is that I don't want to code it using plain PHP or an MVC framework either. I looked at CMSes like Drupal and Joomla, but it looks like it's difficult to make them use custom tables.

    Read the article

  • Refactoring this code that produces a reverse-lookup hash from another hash

    - by Frank Joseph Mattia
    This code is based on the idea of a Form Object (see #3 at http://blog.codeclimate.com/blog/2012/10/17/7-ways-to-decompose-fat-activerecord-models/ if unfamiliar with the concept). My actual code in question may be found here: https://gist.github.com/frankjmattia/82a9945f30bde29eba88

    The code takes a hash of objects/attributes and creates a reverse-lookup hash to keep track of their delegations. To do this I use:

        delegate :first_name, :email, to: :user, prefix: true

    But I am manually creating the delegations from a hash like this:

        DELEGATIONS = { user: [ :first_name, :email ] }

    At runtime, when I want to look up the translated attribute names for the objects, all I have to go on are the delegated/prefixed attribute names like :user_first_name (I have to use a prefix to avoid naming collisions), which aren't in sync with the Rails i18n way of doing it:

        en:
          activerecord:
            attributes:
              user:
                email: 'Email Address'

    The code I have takes the above delegations hash and turns it into a lookup table, so that when I override human_attribute_name I can get back the original attribute name and its class. Then I send #human_attribute_name to the original class with the original attribute name as its argument. The code I've come up with works, but it is ugly to say the least. I've never really used #inject, so this was a crash course for me, and I am quite unsure whether this code is an effective way of solving my problem. Could someone recommend a simpler solution that does not require a reverse lookup table, or does that seem like the right way to go? Thanks, - FJM

    Read the article

  • DQL delete from multiple tables (doctrine)

    - by singer
    I need to perform a DQL delete from multiple related tables. In SQL it is something like this:

        DELETE r1, r2
        FROM ComRealty_objects r1, com_realty_objects_phones r2
        WHERE r1.id IN (10,20) AND r2.id_object IN (10,20)

    I need to perform this statement using DQL, but I'm stuck on this :(

        <?php
        $dql = Doctrine_Query::create()
            ->delete('phones, comrealtyobjects')
            ->from('ComRealtyObjects comrealtyobjects')
            ->from('ComRealtyObjectsPhones phones')
            ->whereIn("comrealtyobjects.id", $ids)
            ->whereIn("phones.id_object", $ids);
        echo($dql->getSqlQuery());
        ?>

    But the DQL parser gives me this result:

        DELETE FROM `com_realty_objects_phones`, `ComRealty_objects`
        WHERE (`id` IN (?) AND `id_object` IN (?))

    Searching Google and Stack Overflow I found this (useful) topic: http://stackoverflow.com/questions/2247905/what-is-the-syntax-for-a-multi-table-delete-on-a-mysql-database-using-doctrine But this is not exactly my case; there the delete was from a single table. Is there a way to override the DQL parser's behaviour? Or maybe some other way to delete records from multiple tables using Doctrine?

    Note: If you are using Doctrine behaviours (Doctrine_Record_Generator) you first need to initialize those tables using Doctrine_Core::initializeModels() to perform DQL operations on them.
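    Since DQL has no multi-table DELETE syntax (the Stack Overflow thread above reaches the same conclusion), the usual fallback is one single-table delete per table, which produces the same end state as the SQL at the top; a sketch:

        -- Child rows first, in case a foreign key is ever added between the tables
        DELETE FROM com_realty_objects_phones WHERE id_object IN (10, 20);
        DELETE FROM ComRealty_objects         WHERE id        IN (10, 20);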

    Read the article

  • ways to avoid global temp tables in oracle

    - by Omnipresent
    We just converted our SQL Server stored procedures to Oracle procedures. The SQL Server SPs were highly dependent on session tables (INSERT INTO #table1...); these got converted to global temporary tables in Oracle. We ended up with around 500 GTTs for our 400 SPs. Now we are finding out that working with GTTs in Oracle is considered a last resort because of performance and other issues. What other alternatives are there? Collections? Cursors?

    Our typical use of GTTs is like so:

    Insert into the GTT:

        INSERT INTO some_gtt_1 (column_a, column_b, column_c)
        (SELECT someA, someB, someC
           FROM TABLE_A
          WHERE condition_1 = 'YN756'
            AND type_cd = 'P'
            AND TO_NUMBER(TO_CHAR(m_date, 'MM')) = '12'
            AND (lname LIKE (v_LnameUpper || '%') OR lname LIKE (v_searchLnameLower || '%'))
            AND (e_flag = 'Y' OR it_flag = 'Y' OR fit_flag = 'Y'));

    Update the GTT:

        UPDATE some_gtt_1 a
           SET column_a = (SELECT b.data_a
                             FROM some_table_b b
                            WHERE a.column_b = b.data_b
                              AND a.column_c = 'C')
         WHERE column_a IS NULL OR column_a = ' ';

    and later on get the data out of the GTT. These are just sample queries; in actuality the queries are really complex, with lots of joins and subqueries.

    I have a three-part question:

    1. Can someone show how to transform the above sample queries to collections and/or cursors?
    2. Since with GTTs you can work natively with SQL... why move away from GTTs? Are they really that bad?
    3. What should the guidelines be on when to use and when to avoid GTTs?
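    To give a flavour of what question 1 can look like: the INSERT-then-UPDATE pair above can often collapse into a single set-based SELECT by folding the UPDATE's correlated subquery into an outer join, which removes the need for the GTT altogether. A hedged sketch (the trailing filters from the original INSERT are elided):

        SELECT CASE
                 WHEN t.someA IS NULL OR t.someA = ' '
                 THEN b.data_a              -- the value the UPDATE used to patch in
                 ELSE t.someA
               END AS column_a,
               t.someB AS column_b,
               t.someC AS column_c
          FROM table_a t
          LEFT JOIN some_table_b b
            ON b.data_b = t.someB
           AND t.someC = 'C'
         WHERE t.condition_1 = 'YN756'
           AND t.type_cd = 'P';
        -- remaining predicates from the original INSERT go here

    A cursor over this SELECT, or a BULK COLLECT into a PL/SQL collection, then replaces the "get the data out of the GTT" step.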

    Read the article

  • SQL: Join multiple tables and get a grouped sum

    - by Scienceprodigy
    I have a database with 3 tables that have related data. One table holds transactions, and the other two hold transaction categories. Basically it's financial data: each transaction has a category (e.g. "gasoline" for a gas-purchase transaction). A short version of my Transactions table looks like this:

        Transactions Table:
        _________________________________
        | ID | Type | Amount | Category |
        ---------------------------------

    I also have two more tables relating a category to a category's parent. Basically, every Category entry in the Transactions table belongs to a parent category (e.g. "gasoline" would belong to, say, "Automotive Expenses"). For categories and their parents, I have two tables:

        Category Children:
        ____________________________________________
        | ID | Parent Category ID | Child Category |
        --------------------------------------------

        Category Parent:
        ________________________
        | ID | Parent Category |
        ------------------------

    What I'm trying to do is query the database and have it return total spending by parent category. To count as "spending", the Type of a transaction must be "Debit". I tried the following statement:

        SELECT category_parents.parent_category, SUM(amount) AS totals
        FROM (transactions
        INNER JOIN category_children
           ON transactions.category = 'category_children.child_category')
        INNER JOIN category_parents
           ON category_children.parent_category_id = category_parents._id
        WHERE trans_type = 'Debit'
        GROUP BY parent_category
        ORDER BY totals DESC

    but it gives me the following exception:

        12-31 13:51:21.515: ERROR/Exception on query(4403): android.database.sqlite.SQLiteException:
        no such column: category_children.parent_category_id: , while compiling: SELECT ...

    Any help is appreciated. (EXTRA CREDIT: I also need another statement that totals spending by child category, given the parent category.)
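    A likely culprit, offered as a sketch: quoting 'category_children.child_category' makes SQLite compare the category column against that literal string rather than joining on the column, and the query then fails to resolve the second join. Dropping the quotes (and assuming the columns are really named child_category, parent_category_id and _id) gives:

        SELECT cp.parent_category, SUM(t.amount) AS totals
        FROM transactions t
        INNER JOIN category_children cc
           ON t.category = cc.child_category      -- column reference, not a string literal
        INNER JOIN category_parents cp
           ON cc.parent_category_id = cp._id
        WHERE t.trans_type = 'Debit'
        GROUP BY cp.parent_category
        ORDER BY totals DESC;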

    Read the article

  • SQL SERVER – SSIS Look Up Component – Cache Mode – Notes from the Field #028

    - by Pinal Dave
    [Notes from Pinal]: Lots of people think that SSIS is all about arranging various operations together in one logical flow. Well, the understanding is absolutely correct, but the implementation of the same is not as easy as it seems. Similarly, most people think the lookup component simply looks up additional information and do not pay much attention to it; for that same reason they eventually get very bad performance. Linchpin People are database coaches and wellness experts for a data driven world. In this 28th episode of the Notes from the Field series, database expert Tim Mitchell (partner at Linchpin People) shares a very interesting conversation about how to use the lookup component well via its Cache Mode.

    In SQL Server Integration Services, the lookup component is one of the most frequently used tools for data validation and completion. The lookup component is provided as a means to virtually join one set of data to another to validate and/or retrieve missing values. Properly configured, it is reliable and reasonably fast.

    Among the many settings available on the lookup component, one of the most critical is the cache mode. This selection will determine whether and how the distinct lookup values are cached during package execution. It is critical to know how cache modes affect the result of the lookup and the performance of the package, as choosing the wrong setting can lead to poorly performing packages and, in some cases, incorrect results.

    Full Cache

    The full cache mode setting is the default cache mode selection in the SSIS lookup transformation. Like the name implies, full cache mode will cause the lookup transformation to retrieve and store in SSIS cache the entire set of data from the specified lookup location. As a result, the data flow in which the lookup transformation resides will not start processing any data buffers until all of the rows from the lookup query have been cached in SSIS.

    The most commonly used cache mode is the full cache setting, and for good reason. The full cache setting has the most practical applications, and should be considered the go-to cache setting when dealing with an untested set of data. With a moderately sized set of reference data, a lookup transformation using full cache mode usually performs well. Full cache mode does not require multiple round trips to the database, since the entire reference result set is cached prior to data flow execution.

    There are a few potential gotchas to be aware of when using full cache mode. First, you can see some performance issues, memory pressure in particular, when using full cache mode against large sets of reference data. If the table you use for the lookup is very large (either deep or wide, or perhaps both), there's going to be a performance cost associated with retrieving and caching all of that data. Also, keep in mind that when doing a lookup on character data, full cache mode will always do a case-sensitive (and in some cases, space-sensitive) string comparison even if your database is set to a case-insensitive collation. This is because the in-memory lookup uses a .NET string comparison (which is case- and space-sensitive) as opposed to a database string comparison (which may be case-insensitive, depending on collation). There's a relatively easy workaround in which you can use the UPPER() or LOWER() function in the pipeline data and the reference data to ensure that case differences do not impact the success of your lookup operation.
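    A hedged illustration of that workaround (table and column names are invented): normalise the case on both sides, once in the lookup's reference query and once in the pipeline via a Derived Column.

        -- Lookup reference query: upper-case the join column on the database side
        SELECT UPPER(CustomerName) AS CustomerName, CustomerKey
        FROM   dbo.DimCustomer;

        -- In the data flow, a Derived Column transformation applies the SSIS
        -- expression UPPER(CustomerName) to the pipeline column before the lookup,
        -- so both sides compare in the same case.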
    Again, neither of these presents a reason to avoid full cache mode, but they should be used to determine whether full cache mode is right for a given situation. Full cache mode is ideally useful when one or all of the following conditions exist:

    • The size of the reference data set is small to moderately sized
    • The size of the pipeline data set (the data you are comparing to the lookup table) is large, is unknown at design time, or is unpredictable
    • Each distinct key value(s) in the pipeline data set is expected to be found multiple times in that set of data

    Partial Cache

    When using the partial cache setting, lookup values will still be cached, but only as each distinct value is encountered in the data flow. Initially, each distinct value will be retrieved individually from the specified source, and then cached. To be clear, this is a row-by-row lookup for each distinct key value(s).

    This is a less frequently used cache setting because it addresses a narrower set of scenarios. Because each distinct key value(s) combination requires a relational round trip to the lookup source, performance can be an issue, especially with a large pipeline data set to be compared to the lookup data set. If you have, for example, a million records from your pipeline data source, you have the potential for doing a million lookup queries against your lookup data source (depending on the number of distinct values in the key column(s)). Therefore, one has to be keenly aware of the expected row count and value distribution of the pipeline data to safely use partial cache mode.

    Using partial cache mode is ideally suited for the conditions below:

    • The size of the data in the pipeline (more specifically, the number of distinct key columns) is relatively small
    • The size of the lookup data is too large to effectively store in cache
    • The lookup source is well indexed to allow for fast retrieval of row-by-row values

    No Cache

    As you might guess, selecting no cache mode will not add any values to the lookup cache in SSIS. As a result, every single row in the pipeline data set will require a query against the lookup source. Since no data is cached, it is possible to save a small amount of overhead in SSIS memory in cases where key values are not reused. In the real world, I don't see a lot of use of the no cache setting, but I can imagine some edge cases where it might be useful.

    As such, it's critical to know your data before choosing this option. Obviously, performance will be an issue with anything other than small sets of data, as the no cache setting requires row-by-row processing of all of the data in the pipeline. I would recommend considering the no cache mode only when all of the below conditions are true:

    • The reference data set is too large to reasonably be loaded into SSIS memory
    • The pipeline data set is small and is not expected to grow
    • There are expected to be very few or no duplicates of the key value(s) in the pipeline data set (i.e., there would be no benefit from caching these values)

    Conclusion

    The cache mode, an often-overlooked setting on the SSIS lookup component, represents an important design decision in your SSIS data flow. Choosing the right lookup cache mode directly impacts the fidelity of your results and the performance of package execution.
    Know how this selection impacts your ETL loads, and you'll end up with more reliable, faster packages. If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • NHibernate / Fluent - Mapping multiple objects to single lookup table

    - by Al
    Hi all, I am struggling a little to get my mapping right. What I have is a single self-joined table of lookup values of certain types. Each lookup can have a parent, which can be of a different type. For simplicity's sake let's take the Country and State example, so the lookup table would look like this:

        Lookups
        Id
        Key
        Value
        LookupType
        ParentId - self-joining to Id

    Base class:

        public class Lookup : BaseEntity
        {
            public Lookup() {}
            public Lookup(string key, string value)
            {
                Key = key;
                Value = value;
            }

            public virtual Lookup Parent { get; set; }

            [DomainSignature]
            [NotNullNotEmpty]
            public virtual LookupType LookupType { get; set; }

            [NotNullNotEmpty]
            public virtual string Key { get; set; }

            [NotNullNotEmpty]
            public virtual string Value { get; set; }
        }

    The lookup map:

        public class LookupMap : IAutoMappingOverride<DBLookup>
        {
            public void Override(AutoMapping<Lookup> map)
            {
                map.Table("Lookups");
                map.References(x => x.Parent, "ParentId").ForeignKey("Id");
                map.DiscriminateSubClassesOnColumn<string>("LookupType").CustomType(typeof(LookupType));
            }
        }

    Base subclass map for subclasses:

        public class BaseLookupMap<T> : SubclassMap<T> where T : DBLookup
        {
            protected BaseLookupMap() { }
            protected BaseLookupMap(LookupType lookupType)
            {
                DiscriminatorValue(lookupType);
                Table("Lookups");
            }
        }

    Example subclass map:

        public class StateMap : BaseLookupMap<State>
        {
            protected StateMap() : base(LookupType.State) { }
        }

    Now I've almost got my mappings set; however, the mapping still expects a table-per-class setup, so it expects a 'State' table to exist with a reference to the state's Id in the Lookups table. I hope this makes sense. This doesn't seem like an uncommon approach when wanting to keep lookup-type values configurable. Thanks in advance. Al

    Read the article

  • Join 2 child tables with a parent table without duplicates

    - by user1847866
    Problem: I have 3 tables: People, Phones and Emails. Each person has a UNIQUE ID, and each person can have multiple numbers or multiple emails. Simplified, it looks like this:

        +---------+----------+
        | ID      | Name     |
        +---------+----------+
        | 5000003 | Amy      |
        | 5000004 | George   |
        | 5000005 | John     |
        | 5000008 | Steven   |
        | 8000009 | Ashley   |
        +---------+----------+

        +---------+-----------------+
        | ID      | Number          |
        +---------+-----------------+
        | 5000005 | 5551234         |
        | 5000005 | 5154324         |
        | 5000008 | 2487312         |
        | 8000009 | 7134584         |
        | 5000008 | 8451384         |
        +---------+-----------------+

        +---------+------------------------------+
        | ID      | Email                        |
        +---------+------------------------------+
        | 5000005 | [email protected]              |
        | 5000005 | [email protected]              |
        | 5000008 | [email protected]              |
        | 5000008 | [email protected]              |
        | 5000008 | [email protected]              |
        | 8000009 | [email protected]              |
        | 5000004 | [email protected]              |
        +---------+------------------------------+

    I am trying to join them together without duplicates. It works great when I join only Emails with People or only Phones with People:

        SELECT People.Name, People.ID, Phones.Number
        FROM People
        LEFT OUTER JOIN Phones ON People.ID = Phones.ID
        ORDER BY Name, ID, Number;

        +----------+---------+-----------------+
        | Name     | ID      | Number          |
        +----------+---------+-----------------+
        | Steven   | 5000008 | 8451384         |
        | Steven   | 5000008 | 24887312        |
        | John     | 5000005 | 5551234         |
        | John     | 5000005 | 5154324         |
        | George   | 5000004 | NULL            |
        | Ashley   | 8000009 | 7134584         |
        | Amy      | 5000003 | NULL            |
        +----------+---------+-----------------+

        SELECT People.Name, People.ID, Emails.Email
        FROM People
        LEFT OUTER JOIN Emails ON People.ID = Emails.ID
        ORDER BY Name, ID, Email;

        +----------+---------+------------------------------+
        | Name     | ID      | Email                        |
        +----------+---------+------------------------------+
        | Steven   | 5000008 | [email protected]              |
        | Steven   | 5000008 | [email protected]              |
        | Steven   | 5000008 | [email protected]              |
        | John     | 5000005 | [email protected]              |
        | John     | 5000005 | [email protected]              |
        | George   | 5000004 | [email protected]              |
        | Ashley   | 8000009 | [email protected]              |
        | Amy      | 5000003 | NULL                         |
        +----------+---------+------------------------------+

    However, when I try to join both Emails and Phones onto People, I get this:

        SELECT People.Name, People.ID, Phones.Number, Emails.Email
        FROM People
        LEFT OUTER JOIN Phones ON People.ID = Phones.ID
        LEFT OUTER JOIN Emails ON People.ID = Emails.ID
        ORDER BY Name, ID, Number, Email;

        +----------+---------+-----------------+------------------------------+
        | Name     | ID      | Number          | Email                        |
        +----------+---------+-----------------+------------------------------+
        | Steven   | 5000008 | 8451384         | [email protected]              |
        | Steven   | 5000008 | 8451384         | [email protected]              |
        | Steven   | 5000008 | 8451384         | [email protected]              |
        | Steven   | 5000008 | 24887312        | [email protected]              |
        | Steven   | 5000008 | 24887312        | [email protected]              |
        | Steven   | 5000008 | 24887312        | [email protected]              |
        | John     | 5000005 | 5551234         | [email protected]              |
        | John     | 5000005 | 5551234         | [email protected]              |
        | John     | 5000005 | 5154324         | [email protected]              |
        | John     | 5000005 | 5154324         | [email protected]              |
        | George   | 5000004 | NULL            | [email protected]              |
        | Ashley   | 8000009 | 7134584         | [email protected]              |
        | Amy      | 5000003 | NULL            | NULL                         |
        +----------+---------+-----------------+------------------------------+

    What happens is: if a person has 2 numbers, all his emails are shown twice, and they cannot be sorted, which means they cannot be removed with @last.

    What I want: bottom line, playing with @last, I want to end up with something like the table below, but @last won't work if I can't arrange the ORDER BY columns in the right way, and that seems like the big problem: ordering the Email column. As seen above, Steven has 2 phone numbers and 3 emails; the join of Emails with Numbers happens for each email, producing duplicated values that cannot be sorted (ORDER BY does not help).

        **THIS IS WHAT I WANT**
        +----------+---------+-----------------+------------------------------+
        | Name     | ID      | Number          | Email                        |
        +----------+---------+-----------------+------------------------------+
        | Steven   | 5000008 | 8451384         | [email protected]              |
        |          |         | 24887312        | [email protected]              |
        |          |         |                 | [email protected]              |
        | John     | 5000005 | 5551234         | [email protected]              |
        |          |         | 5154324         | [email protected]              |
        | George   | 5000004 | NULL            | [email protected]              |
        | Ashley   | 8000009 | 7134584         | [email protected]              |
        | Amy      | 5000003 | NULL            | NULL                         |
        +----------+---------+-----------------+------------------------------+

    Now I'm told that it's best to keep emails and numbers in separate tables, because one person can have many emails. So if it's such a common thing to do, why isn't there a simple solution? I'd be happy with a PHP solution as well.

    What I know how to do by now satisfies the need but is not as pretty: with GROUP_CONCAT I get a satisfactory result, but it doesn't look as nice, and I can't put an "Email type = work" label next to each address:

        SELECT People.Name,
               GROUP_CONCAT(DISTINCT Phones.Number),
               GROUP_CONCAT(DISTINCT Emails.Email)
        FROM People
        LEFT OUTER JOIN Phones ON People.ID = Phones.ID
        LEFT OUTER JOIN Emails ON People.ID = Emails.ID
        GROUP BY Name;

        +----------+----------------------+---------------------------------------------------------+
        | Name     | Numbers              | Emails                                                  |
        +----------+----------------------+---------------------------------------------------------+
        | Steven   | 8451384,24887312     | [email protected],[email protected],[email protected]       |
        | John     | 5551234,5154324      | [email protected],[email protected]                        |
        | George   | NULL                 | [email protected]                                        |
        | Ashley   | 7134584              | [email protected]                                        |
        | Amy      | NULL                 | NULL                                                    |
        +----------+----------------------+---------------------------------------------------------+
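    For what it's worth, GROUP_CONCAT also accepts ORDER BY and SEPARATOR clauses, which helps when the concatenated form is acceptable but needs to be tidier; a small sketch against the schema above:

        SELECT p.Name,
               GROUP_CONCAT(DISTINCT ph.Number ORDER BY ph.Number SEPARATOR ', ') AS Numbers,
               GROUP_CONCAT(DISTINCT e.Email   ORDER BY e.Email   SEPARATOR ', ') AS Emails
        FROM   People p
        LEFT OUTER JOIN Phones ph ON ph.ID = p.ID
        LEFT OUTER JOIN Emails e  ON e.ID  = p.ID
        GROUP  BY p.ID, p.Name;

    For per-row labels like "Email type = work", the cleaner route is usually two separate queries (one for phones, one for emails) stitched together in PHP, which also matches the desired layout above.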

    Read the article

  • How to Generate a Create Table DDL Script Along With Its Related Tables

    - by Compudicted
    Have you ever wondered, when creating table diagrams in SQL Server Management Studio (SSMS), how slickly you can add related tables to a diagram just by right-clicking on the interesting table name? Have you also ever needed to script those related tables, including the master one, and discovered you have dozens of related tables? Or maybe there's no SSMS at your disposal? That was me one day. Well, creativity to the rescue! I Binged and Googled around until I found more or less what I wanted, but it all involved long and convoluted T-SQL CROSS APPLYs. Then I saw a PowerShell solution that I quickly adapted to my needs (I am not referencing any particular author because it was a mashup):

        ###########################################################################################################
        # Created by: Arthur Zubarev on Oct 14, 2012                                                              #
        # Synopsis: Generate a file containing the root table CREATE (DDL) script along with its related tables  #
        ###########################################################################################################

        [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null

        $RootTableName = "TableName" # The table name, no schema name needed

        $srv = new-Object Microsoft.SqlServer.Management.Smo.Server("TargetSQLServerName")
        $conContext = $srv.ConnectionContext
        $conContext.LoginSecure = $True
        # In case integrated security is not used, uncomment below
        #$conContext.Login = "sa"
        #$conContext.Password = "sapassword"
        $db = New-Object Microsoft.SqlServer.Management.Smo.Database
        $db = $srv.Databases.Item("TargetDatabase")

        $scrp = New-Object Microsoft.SqlServer.Management.Smo.Scripter($srv)
        $scrp.Options.NoFileGroup = $True
        $scrp.Options.AppendToFile = $False
        $scrp.Options.ClusteredIndexes = $False
        $scrp.Options.DriAll = $False
        $scrp.Options.ScriptDrops = $False
        $scrp.Options.IncludeHeaders = $True
        $scrp.Options.ToFileOnly = $True
        $scrp.Options.Indexes = $False
        $scrp.Options.WithDependencies = $True
        $scrp.Options.FileName = 'C:\TEMP\TargetFileName.SQL'

        $smoObjects = New-Object Microsoft.SqlServer.Management.Smo.UrnCollection
        Foreach ($tb in $db.Tables)
        {
            Write-Host -foregroundcolor yellow "Table name being processed" $tb.Name

            If ($tb.IsSystemObject -eq $FALSE -and $tb.Name -eq $RootTableName) # feel free to customize the selection condition
            {
                Write-Host -foregroundcolor magenta $tb.Name "table and its related tables added to be scripted."
                $smoObjects.Add($tb.Urn)
            }
        }

        # The actual act of scripting
        $sc = $scrp.Script($smoObjects)

        Write-host -foregroundcolor green $RootTableName "and its related tables have been scripted to the target file."

    Enjoy!

    Read the article

  • javascript to print all tables and individual tables

    - by LiveEn
    I am retrieving values from a database and displaying them in tables using PHP. Each table is stored inside a div tag:

        <div id="print"> table content 1 </div>
        <div id="print"> table content 2 </div>
        ..................

    Can someone please suggest a JavaScript approach that gives me a separate link/button to print all the tables, plus a link on each table to print that table individually? I have used several scripts and jQuery plug-ins but couldn't get the job done. Any help will be appreciated.

    Read the article
