Search Results

Search found 1804 results on 73 pages for 'koenig lookup'.

Page 3/73 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • SQL SERVER – SSIS Look Up Component – Cache Mode – Notes from the Field #028

    - by Pinal Dave
    [Notes from Pinal]: Lots of people think that SSIS is all about arranging various operations together in one logical flow. That understanding is absolutely correct, but the implementation is not as easy as it seems. Similarly, most people assume the lookup component simply looks up additional information, pay little attention to it, and eventually get very bad performance as a result. Linchpin People are database coaches and wellness experts for a data driven world. In this 28th episode of the Notes from the Field series, database expert Tim Mitchell (partner at Linchpin People) shares a very interesting discussion of how to configure the lookup component's cache mode. In SQL Server Integration Services, the lookup component is one of the most frequently used tools for data validation and completion. The lookup component is provided as a means to virtually join one set of data to another to validate and/or retrieve missing values. Properly configured, it is reliable and reasonably fast. Among the many settings available on the lookup component, one of the most critical is the cache mode. This selection determines whether and how the distinct lookup values are cached during package execution. It is critical to know how cache modes affect the result of the lookup and the performance of the package, as choosing the wrong setting can lead to poorly performing packages and, in some cases, incorrect results. Full Cache The full cache mode setting is the default cache mode selection in the SSIS lookup transformation. As the name implies, full cache mode causes the lookup transformation to retrieve the entire set of data from the specified lookup location and store it in SSIS cache. As a result, the data flow in which the lookup transformation resides will not start processing any data buffers until all of the rows from the lookup query have been cached in SSIS. The full cache setting is the most commonly used cache mode, and for good reason. It has the most practical applications, and should be considered the go-to cache setting when dealing with an untested set of data. With a moderately sized set of reference data, a lookup transformation using full cache mode usually performs well. Full cache mode does not require multiple round trips to the database, since the entire reference result set is cached prior to data flow execution. There are a few potential gotchas to be aware of when using full cache mode. First, you can see some performance issues – memory pressure in particular – when using full cache mode against large sets of reference data. If the table you use for the lookup is very large (either deep or wide, or perhaps both), there's going to be a performance cost associated with retrieving and caching all of that data. Also, keep in mind that when doing a lookup on character data, full cache mode will always do a case-sensitive (and in some cases, space-sensitive) string comparison, even if your database is set to a case-insensitive collation. This is because the in-memory lookup uses a .NET string comparison (which is case- and space-sensitive) as opposed to a database string comparison (which may or may not be case-sensitive, depending on collation).
There’s a relatively easy workaround: use the UPPER() or LOWER() function on both the pipeline data and the reference data to ensure that case differences do not impact the success of your lookup operation. Again, neither of these gotchas is a reason to avoid full cache mode, but both should be weighed when deciding whether full cache mode is right for a given situation. Full cache mode is ideally useful when one or all of the following conditions exist: The reference data set is small to moderately sized The pipeline data set (the data you are comparing to the lookup table) is large, is unknown at design time, or is unpredictable Each distinct key value in the pipeline data set is expected to be found multiple times in that set of data Partial Cache When using the partial cache setting, lookup values will still be cached, but only as each distinct value is encountered in the data flow. Initially, each distinct value will be retrieved individually from the specified source, and then cached. To be clear, this is a row-by-row lookup for each distinct key value. This is a less frequently used cache setting because it addresses a narrower set of scenarios. Because each distinct key value combination requires a relational round trip to the lookup source, performance can be an issue, especially with a large pipeline data set to be compared to the lookup data set. If you have, for example, a million records from your pipeline data source, you have the potential for doing a million lookup queries against your lookup data source (depending on the number of distinct values in the key column(s)). Therefore, one has to be keenly aware of the expected row count and value distribution of the pipeline data to safely use partial cache mode. Partial cache mode is ideally suited to the conditions below: The size of the data in the pipeline (more specifically, the number of distinct key values) is relatively small The size of the lookup data is too large to effectively store in cache The lookup source is well indexed to allow for fast retrieval of row-by-row values No Cache As you might guess, selecting no cache mode will not add any values to the lookup cache in SSIS. As a result, every single row in the pipeline data set will require a query against the lookup source. Since no data is cached, it is possible to save a small amount of overhead in SSIS memory in cases where key values are not reused. In the real world, I don't see a lot of use of the no cache setting, but I can imagine some edge cases where it might be useful. As such, it's critical to know your data before choosing this option. Obviously, performance will be an issue with anything other than small sets of data, as the no cache setting requires row-by-row processing of all of the data in the pipeline. I would recommend considering the no cache mode only when all of the below conditions are true: The reference data set is too large to reasonably be loaded into SSIS memory The pipeline data set is small and is not expected to grow There are expected to be very few or no duplicates of the key values in the pipeline data set (i.e., there would be no benefit from caching these values) Conclusion The cache mode, an often-overlooked setting on the SSIS lookup component, represents an important design decision in your SSIS data flow. Choosing the right lookup cache mode directly impacts the fidelity of your results and the performance of package execution.
Know how this selection impacts your ETL loads, and you'll end up with more reliable, faster packages. If you want me to take a look at your server and its settings, or if your server is facing any issues, we can Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SSIS
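
    To see the case-sensitivity gotcha described above in isolation, here is a small C# sketch: the full-cache lookup's in-memory comparison behaves like an ordinal .NET string comparison, while the UPPER()/LOWER() workaround normalizes both sides before comparing. The values used are illustrative.

        using System;

        class FullCacheComparisonDemo
        {
            static void Main()
            {
                string pipelineValue = "Portland";
                string referenceValue = "PORTLAND";

                // Roughly what the full-cache lookup does in memory: an ordinal,
                // case-sensitive comparison, so this lookup misses even though a
                // case-insensitive database collation would have matched.
                Console.WriteLine(string.Equals(
                    pipelineValue, referenceValue, StringComparison.Ordinal)); // False

                // The workaround: upper-case BOTH the pipeline column (e.g. in a
                // Derived Column transform) and the reference query with UPPER().
                Console.WriteLine(string.Equals(
                    pipelineValue.ToUpperInvariant(),
                    referenceValue.ToUpperInvariant(),
                    StringComparison.Ordinal)); // True
            }
        }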

    Read the article

  • NHibernate / Fluent - Mapping multiple objects to single lookup table

    - by Al
    Hi all, I am struggling a little to get my mapping right. What I have is a single self-joined table of lookup values of certain types. Each lookup can have a parent, which can be of a different type. For simplicity's sake let's take the Country and State example. So the lookup table would look like this: Lookups Id Key Value LookupType ParentId - self joining to Id base class public class Lookup : BaseEntity { public Lookup() {} public Lookup(string key, string value) { Key = key; Value = value; } public virtual Lookup Parent { get; set; } [DomainSignature] [NotNullNotEmpty] public virtual LookupType LookupType { get; set; } [NotNullNotEmpty] public virtual string Key { get; set; } [NotNullNotEmpty] public virtual string Value { get; set; } } The lookup map public class LookupMap : IAutoMappingOverride<Lookup> { public void Override(AutoMapping<Lookup> map) { map.Table("Lookups"); map.References(x => x.Parent, "ParentId").ForeignKey("Id"); map.DiscriminateSubClassesOnColumn<string>("LookupType").CustomType(typeof(LookupType)); } } BASE subclass map for subclasses public class BaseLookupMap<T> : SubclassMap<T> where T : Lookup { protected BaseLookupMap() { } protected BaseLookupMap(LookupType lookupType) { DiscriminatorValue(lookupType); Table("Lookups"); } } Example subclass map public class StateMap : BaseLookupMap<State> { protected StateMap() : base(LookupType.State) { } } Now I've almost got my mappings set, however the mapping is still expecting a table-per-class setup, so it is expecting a 'State' table to exist with a reference to the state's Id in the Lookup table. I hope this makes sense. This doesn't seem like an uncommon approach when wanting to keep lookup-type values configurable. Thanks in advance. Al
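
    If the automapping override keeps inferring table-per-subclass, one fallback (a hedged sketch, not a verified fix for the automapping case) is to take this hierarchy out of automapping and map it explicitly with ClassMap/SubclassMap, so Fluent NHibernate treats it as a single-table, discriminated hierarchy. Class names follow the question, and it assumes BaseEntity exposes an Id property:

        using FluentNHibernate.Mapping;

        // Explicit mapping: everything lives in the single Lookups table,
        // so no per-subclass 'State' table is expected.
        public class LookupClassMap : ClassMap<Lookup>
        {
            public LookupClassMap()
            {
                Table("Lookups");
                Id(x => x.Id);                                // assumes Id on BaseEntity
                Map(x => x.Key);
                Map(x => x.Value);
                References(x => x.Parent, "ParentId");
                DiscriminateSubClassesOnColumn("LookupType"); // single-table hierarchy
            }
        }

        public class StateMap : SubclassMap<State>
        {
            public StateMap()
            {
                DiscriminatorValue("State");   // row value that marks a State lookup
            }
        }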

    Read the article

  • Storing a looong lookup table

    - by inquisitive
    Background The product I am working on has a very long lookup table. The table contains static data and cannot be auto-generated. There are about 500 rows and 10 columns; the columns hold mostly integers and strings. To complicate matters, there are actually two such tables, and every row in table-1 maps to zero-or-more rows in table-2. We use an SQLite database with the two tables, and the product installer places the SQLite file in the installation directory. The application is written in dot-net and we use ADO to load the data once on startup. Now, the lookup table grows: in each monthly release we add about 10 new entries and fine-tune existing ones. The problem A team of (10) developers works on the lookup table. Code goes in the SVN, but the little devil the SQLite file does not. This prevents multiple developers from working on it. We do take regular backups of the file, but proper versioning is not possible; we never know who made the breaking change. The worse thing is we don't know if there is any change at all. Diff'ing databases is tedious if not impossible. The tables are expected to grow quite large in years to come and we would need developers to work on them in parallel. The data is business critical, and we need to be able to audit changes made to it. Question What would be a solution for the problems outlined above? One idea was to transform the whole thing to XML and treat it like just another source file. That way SVN can do the versioning and we can work in parallel. But the data shows relational behavior: with XML we lose the unique and foreign-key constraints, and we can't query it with SQL-like ease. Any help here will be appreciated.
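
    One lightweight approach in the spirit of the XML idea: keep the SQLite file itself out of SVN and instead export each table to a deterministic, sorted text file that SVN can diff and blame, while the database stays the runtime artifact. A minimal C# sketch, assuming the System.Data.SQLite ADO.NET provider and illustrative table, column, and file names:

        using System.Data.SQLite;   // System.Data.SQLite ADO.NET provider
        using System.IO;

        class LookupTableExporter
        {
            // Dump a table in a stable order to a tab-separated file so SVN
            // can version, diff and blame it like any other source file.
            static void ExportTable(SQLiteConnection conn, string table, string path)
            {
                string sql = "SELECT * FROM " + table + " ORDER BY Id";  // stable row order
                using (var cmd = new SQLiteCommand(sql, conn))
                using (var reader = cmd.ExecuteReader())
                using (var writer = new StreamWriter(path))
                {
                    for (int i = 0; i < reader.FieldCount; i++)          // header row
                        writer.Write((i > 0 ? "\t" : "") + reader.GetName(i));
                    writer.WriteLine();

                    while (reader.Read())                                // data rows
                    {
                        for (int i = 0; i < reader.FieldCount; i++)
                            writer.Write((i > 0 ? "\t" : "") + reader.GetValue(i));
                        writer.WriteLine();
                    }
                }
            }

            static void Main()
            {
                using (var conn = new SQLiteConnection("Data Source=lookup.db"))
                {
                    conn.Open();
                    ExportTable(conn, "Table1", "Table1.tsv");   // illustrative names
                    ExportTable(conn, "Table2", "Table2.tsv");
                }
            }
        }

    The reverse direction (rebuilding the SQLite file from the committed text files at build time) makes the text files the single source of truth, which also answers the audit requirement.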

    Read the article

  • How to create a lookup column that targets a Doc Lib and uses the 'Name' of the document?

    - by stlawrence
    How do you create a lookup column to a Document Library that uses the 'Name' of the document as the lookup value? I found a blog post that recommends adding another custom field like "FileName" and then using an item receiver to populate the custom field with the value from the Name field, but that seems cheesy. Link to the blog in case people are interested: http://blogs.msdn.com/pranab/archive/2008/01/08/sharepoint-2007-moss-wss-issue-with-lookup-column-to-doc-lib-name-field.aspx I've got a bunch of custom document content types that I don't want to clutter with a workaround that should really work anyway.
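
    For reference, the workaround in the linked post boils down to an event receiver along these lines; this is a sketch against the SharePoint 2007 object model, with "FileName" being the illustrative extra field from the post:

        using Microsoft.SharePoint;

        // Mirrors the built-in Name into a custom "FileName" text field so a
        // lookup column can target it (lookups cannot point at Name directly).
        public class FileNameSyncReceiver : SPItemEventReceiver
        {
            public override void ItemAdded(SPItemEventProperties properties)
            {
                SPListItem item = properties.ListItem;
                if (item.File != null)
                {
                    item["FileName"] = item.File.Name;
                    item.SystemUpdate(false);   // update without creating a new version
                }
            }
        }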

    Read the article

  • Optimizing hash lookup & memory performance in Go

    - by Moishe
    As an exercise, I'm implementing HashLife in Go. In brief, HashLife works by memoizing nodes in a quadtree so that once a given node's value in the future has been calculated, it can just be looked up instead of being re-calculated. So e.g. if you have a node at the 8x8 level, you remember it by its four children (each at the 4x4 level). So next time you see an 8x8 node, when you calculate the next generation, you first check if you've already seen a node with those same four children. This is extended up through all levels of the quadtree, which gives you some pretty amazing optimizations if, e.g., you're 10 levels above the leaves. Unsurprisingly, it looks like the performance crux of this is the lookup of nodes by child-node values. Currently I have a hashmap of {&upper_left_node,&upper_right_node,&lower_left_node,&lower_right_node} -> node So my lookup function is this: func FindNode(ul, ur, ll, lr *Node) *Node { var node *Node var ok bool nc := NodeChildren{ul, ur, ll, lr} node, ok = NodeMap[nc] if ok { return node } node = &Node{ul, ur, ll, lr, 0, ul.Level + 1, nil} NodeMap[nc] = node return node } What I'm trying to figure out is if the "nc := NodeChildren..." line causes a memory allocation each time the function is called. If it does, can I/should I move the declaration to the global scope and just modify the values each time this function is called? Or is there a more efficient way to do this? Any advice/feedback would be welcome. (even coding style nits; this is literally the first thing I've written in Go so I'd love any feedback)

    Read the article

  • mongod fails to start with error: mongod: symbol lookup error: mongod: undefined symbol: _ZN7pcrecpp2RE4InitEPKcPKNS_10RE_OptionsE

    - by Francesco
    I am trying to start mongod but I get $ mongod mongod --help for help and startup options mongod: symbol lookup error: mongod: undefined symbol: _ZN7pcrecpp2RE4InitEPKcPKNS_10RE_OptionsE Searching on Google, it seems to be related to libpcre; I tried installing the latest versions of libpcre3 and libpcre++ but it doesn't work. The MongoDB shell's version (and mongodb-server's version) is 2.0.4. Ubuntu's version is 12.04. libpcre3's version is 8.12-4. libpcre++0's version is 0.9.5-5.1. Thanks

    Read the article

  • Showplan Operator of the Week – BookMark/Key Lookup

    Fabiano continues in his mission to describe the major Showplan Operators used by SQL Server's Query Optimiser. This week he meets a star, the Key Lookup, a stalwart performer, but most famous for its role in ill-performing queries where an index does not 'cover' the data required to execute the query. If you understand why, and in what circumstances, key lookups are slow, it helps greatly with optimising query performance.

    Read the article

  • TopComponent, Node, Lookup, Palette, and Visual Library

    - by Geertjan
    Here's a small example that puts together several pieces in the context of a NetBeans Platform application, i.e., TopComponent, Node, Lookup, Palette, and Visual Library: http://java.net/projects/nb-api-samples/sources/api-samples/show/versions/7.2/misc/CensusDesigner The result is a drag-and-drop user interface, i.e., drag items from the palette and drop them onto the window; that's all it does, nothing too fancy, just puts the basic NetBeans Platform pieces together in a pretty standard combination.

    Read the article

  • How to build up a lookup table in microcontroller?

    - by GiGi
    Hi, I am confused about how to build up a 3-dimensional lookup table. Is the template as follows: create a 3-dimensional array to store the data, then create a linked list, then create an 'insert' function to put all the data into the array? Some books say the lookup table should be static const, so is it necessary to create another function to expand the list? Because the lookup table will be used in a microcontroller, it only needs to support putting the data into the array, and whenever we want to find the data, the search should be fast and easy. Could you help me with that and give me some suggestions? Thank you.
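
    For fixed data, no insert function or linked list is needed at all: a lookup table is just a statically initialized array indexed directly. A sketch of the idea with made-up values (shown in C# for illustration; on a microcontroller this would typically be a const array in C so it lives in flash/ROM):

        class LookupTableDemo
        {
            // Illustrative 2 x 2 x 3 table, fully initialized at compile time.
            static readonly int[,,] Table =
            {
                { { 10, 11, 12 }, { 13, 14, 15 } },
                { { 16, 17, 18 }, { 19, 20, 21 } },
            };

            // Constant-time lookup: just index the array, no searching needed.
            static int Lookup(int x, int y, int z)
            {
                return Table[x, y, z];
            }

            static void Main()
            {
                System.Console.WriteLine(Lookup(1, 0, 2));   // prints 18
            }
        }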

    Read the article

  • How to JNDI lookup from cluster 1 : a queue that exists in cluster 2 in Websphere 6?

    - by user347394
    I have a Websphere topology wherein in Cluster1, I have an MDB that's trying to publish to another MDB that resides in Cluster2. Since they're both in the same container, I tried simply: Context ctx = new InitialContext(); ctx.lookup("jms/MyQueue"); The "jms/MyQueue" is configured in Cluster2. As you can see, this doesn't work!! 1) Do I have to provide an environment entry while creating the InitialContext? Even though both clusters are part of the same container? 2) If not, how then can I look up the said queue in Cluster 2?

    Read the article

  • SQL SERVER – Introduction to SQL Server 2014 In-Memory OLTP

    - by Pinal Dave
    In SQL Server 2014, Microsoft has introduced a new database engine component called In-Memory OLTP, aka project “Hekaton”, which is fully integrated into the SQL Server Database Engine. It is optimized for OLTP workloads accessing memory-resident data. In-Memory OLTP helps us create memory-optimized tables which in turn offer significant performance improvement for our typical OLTP workload. The main objective of a memory-optimized table is to ensure that highly transactional tables can live in memory and remain in memory forever without losing even a single record. The most significant part is that it still supports the majority of our Transact-SQL statements. Transact-SQL stored procedures can be compiled to machine code for further performance improvements on memory-optimized tables. This engine is designed to ensure higher concurrency and minimal blocking. In-Memory OLTP alleviates the issue of locking, using a new type of multi-version optimistic concurrency control. It also substantially reduces waiting for log writes by generating far less log data and needing fewer log writes. Points to remember Memory-optimized tables refer to tables using the new data structures and keywords added as part of In-Memory OLTP. Disk-based tables refer to the normal tables which we have created in SQL Server since its inception. These tables use fixed-size 8 KB pages that need to be read from and written to disk as a unit. Natively compiled stored procedures refer to a new object type, supported by the In-Memory OLTP engine, that is converted into machine code, which can further improve data access performance for memory-optimized tables. Natively compiled stored procedures can only reference memory-optimized tables; they can't be used to reference any disk-based table. Interpreted Transact-SQL stored procedures are what SQL Server has always used. Cross-container transactions refer to transactions that reference both memory-optimized tables and disk-based tables. Interop refers to interpreted Transact-SQL that references memory-optimized tables. Using In-Memory OLTP The In-Memory OLTP engine has been available as part of SQL Server 2014 since the June 2013 CTPs. Installation of In-Memory OLTP is part of the SQL Server setup application. The In-Memory OLTP components can only be installed with a 64-bit edition of SQL Server 2014; hence they are not available with 32-bit editions. Creating Databases Any database that will store memory-optimized tables must have a MEMORY_OPTIMIZED_DATA filegroup. This filegroup is specifically designed to store the checkpoint files needed by SQL Server to recover the memory-optimized tables, and although the syntax for creating the filegroup is almost the same as for creating a regular filestream filegroup, it must also specify the option CONTAINS MEMORY_OPTIMIZED_DATA.
Here is an example of a CREATE DATABASE statement for a database that can support memory-optimized tables: CREATE DATABASE InMemoryDB ON PRIMARY(NAME = [InMemoryDB_data], FILENAME = 'D:\data\InMemoryDB_data.mdf', size=500MB), FILEGROUP [SampleDB_mod_fg] CONTAINS MEMORY_OPTIMIZED_DATA (NAME = [InMemoryDB_mod_dir], FILENAME = 'S:\data\InMemoryDB_mod_dir'), (NAME = [InMemoryDB_mod_dir], FILENAME = 'R:\data\InMemoryDB_mod_dir') LOG ON (name = [SampleDB_log], Filename='L:\log\InMemoryDB_log.ldf', size=500MB) COLLATE Latin1_General_100_BIN2; The example code above creates files on three different drives (D:, S:, and R:) for the data files and in-memory storage, so if you would like to run this code kindly change the drive and folder locations as per your convenience. Also notice that a binary collation was specified as Windows (non-SQL). BIN2 collation is the only collation supported at this point for any indexes on memory-optimized tables. It is also possible to add a MEMORY_OPTIMIZED_DATA filegroup to an existing database; use the command below to achieve the same. ALTER DATABASE AdventureWorks2012 ADD FILEGROUP hekaton_mod CONTAINS MEMORY_OPTIMIZED_DATA; GO ALTER DATABASE AdventureWorks2012 ADD FILE (NAME='hekaton_mod', FILENAME='S:\data\hekaton_mod') TO FILEGROUP hekaton_mod; GO Creating Tables There is no major syntactical difference between creating a disk-based table and a memory-optimized table, but there are a few restrictions and a few new essential extensions. Essentially any memory-optimized table should use the MEMORY_OPTIMIZED = ON clause, as shown in the CREATE TABLE query example. DURABILITY clause (SCHEMA_AND_DATA or SCHEMA_ONLY) A memory-optimized table should always be defined with a DURABILITY value, which can be either SCHEMA_AND_DATA or SCHEMA_ONLY, the former being the default. A memory-optimized table defined with DURABILITY=SCHEMA_ONLY will not persist its data to disk, which means data durability is compromised, whereas DURABILITY=SCHEMA_AND_DATA ensures that data is persisted along with the schema. Indexing Memory Optimized Tables A memory-optimized table created with DURABILITY=SCHEMA_AND_DATA must always have an index, and this can be achieved by declaring a PRIMARY KEY constraint at the time of creating the table. The following example shows a PRIMARY KEY index created as a HASH index, for which a bucket count must also be specified. CREATE TABLE Mem_Table ( [Name] VARCHAR(32) NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000), [City] VARCHAR(32) NULL, [State_Province] VARCHAR(32) NULL, [LastModified] DATETIME NOT NULL, ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA); Now as you can see in the above query example, we have used the clause MEMORY_OPTIMIZED = ON to make sure that it is treated as a memory-optimized table and not just a normal table, and also used the DURABILITY clause SCHEMA_AND_DATA, which means it will persist data along with metadata; you can also notice this table has a PRIMARY KEY declared upfront, which is a mandatory clause for memory-optimized tables. We will talk more about HASH indexes and BUCKET_COUNT in later articles on this topic, which will focus more on row and index storage in memory-optimized tables. So stay tuned for that as well. Now that we have covered the basics of memory-optimized tables and understood the key things to remember while using them, let's explore more using examples to understand the performance gains of memory-optimized tables.
I will be using the database which I created earlier in this article, i.e. InMemoryDB, in the demo exercise below. USE InMemoryDB GO -- Creating a disk based table CREATE TABLE dbo.Disktable ( Id INT IDENTITY, Name CHAR(40) ) GO CREATE NONCLUSTERED INDEX IX_ID ON dbo.Disktable (Id) GO -- Creating a memory optimized table with similar structure and DURABILITY = SCHEMA_AND_DATA CREATE TABLE dbo.Memorytable_durable ( Id INT NOT NULL PRIMARY KEY NONCLUSTERED Hash WITH (bucket_count =1000000), Name CHAR(40) ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA) GO -- Creating another memory optimized table with similar structure but DURABILITY = SCHEMA_ONLY CREATE TABLE dbo.Memorytable_nondurable ( Id INT NOT NULL PRIMARY KEY NONCLUSTERED Hash WITH (bucket_count =1000000), Name CHAR(40) ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_only) GO -- Now insert 100000 records in dbo.Disktable and observe the time taken DECLARE @i_t bigint SET @i_t =1 WHILE @i_t<= 100000 BEGIN INSERT INTO dbo.Disktable(Name) VALUES('sachin' + CONVERT(VARCHAR,@i_t)) SET @i_t+=1 END -- Do the same inserts for memory table dbo.Memorytable_durable and observe the time taken DECLARE @i_t bigint SET @i_t =1 WHILE @i_t<= 100000 BEGIN INSERT INTO dbo.Memorytable_durable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR,@i_t)) SET @i_t+=1 END -- Now finally do the same inserts for memory table dbo.Memorytable_nondurable and observe the time taken DECLARE @i_t bigint SET @i_t =1 WHILE @i_t<= 100000 BEGIN INSERT INTO dbo.Memorytable_nondurable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR,@i_t)) SET @i_t+=1 END The above 3 inserts took 1.20 minutes, 54 seconds, and 2 seconds respectively to insert 100000 records on my machine with 8 GB RAM. This proves the point that memory-optimized tables can definitely help businesses achieve better performance for their highly transactional tables, and memory-optimized tables with durability SCHEMA_ONLY are even faster, as they do not bother persisting data to disk, which makes them supremely fast. Koenig Solutions is one of the few organizations which offer IT training on SQL Server 2014 and all its updates. Now, I leave the decision on using memory-optimized tables to you. I hope you like this article and that it helped you understand the fundamentals of In-Memory OLTP. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Koenig
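
    If you would rather time the three insert loops from client code than eyeball them in Management Studio, a minimal ADO.NET sketch like the following works; the connection string is illustrative, and it assumes the InMemoryDB database and tables created above:

        using System;
        using System.Data.SqlClient;
        using System.Diagnostics;

        class InsertTimingDemo
        {
            // The disk-based insert batch from the article, run as one T-SQL batch.
            const string DiskInsert = @"
                DECLARE @i_t bigint = 1;
                WHILE @i_t <= 100000
                BEGIN
                    INSERT INTO dbo.Disktable(Name) VALUES('sachin' + CONVERT(VARCHAR,@i_t));
                    SET @i_t += 1;
                END";

            const string MemDurableInsert = @"
                DECLARE @i_t bigint = 1;
                WHILE @i_t <= 100000
                BEGIN
                    INSERT INTO dbo.Memorytable_durable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR,@i_t));
                    SET @i_t += 1;
                END";

            static void TimeBatch(SqlConnection conn, string label, string sql)
            {
                var sw = Stopwatch.StartNew();
                using (var cmd = new SqlCommand(sql, conn))
                {
                    cmd.CommandTimeout = 600;   // the disk-based loop can run for minutes
                    cmd.ExecuteNonQuery();
                }
                Console.WriteLine("{0}: {1}", label, sw.Elapsed);
            }

            static void Main()
            {
                // Illustrative connection string; adjust server and credentials.
                using (var conn = new SqlConnection(
                    "Server=.;Database=InMemoryDB;Integrated Security=true"))
                {
                    conn.Open();
                    TimeBatch(conn, "Disk-based table", DiskInsert);
                    TimeBatch(conn, "Memory-optimized (durable)", MemDurableInsert);
                    // Repeat with dbo.Memorytable_nondurable for the third case.
                }
            }
        }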

    Read the article

  • Setup Convergence Address Book DisplayName Lookup

    - by user13332755
    At the Convergence Address Book, the default lookup for 'Display Name' is the LDAP attribute 'cn', which leads to confusion if you start to set up 'displayName' LDAP attributes for your users. LDAP user entry with displayName attribute: This behavior can be controlled by the configuration file xlate-inetorgperson.xml. Change the default value of XPATH abperson\entry\displayname from 'cn' to 'displayName'. <convergence_deploy_location>/config/templates/ab/corp-dir/xlate-inetorgperson.xml Note: In case the user has no 'displayName' attribute in LDAP, you might notice '...' at the user entry if it was found via the 'mail' attribute.

    Read the article

  • Fast lookup for organization hierarchy

    - by Élodie Petit
    I need a way to implement a fast lookup algorithm / system to find users very quickly in an organization structure with multi-level departments and multi-level employee/manager relations. Departments can contain departments to any depth, and users are connected directly to departments. Users are connected to departments and to other users at the same time. What would be the best approach to implement such a system? There will be approximately 2000 users and 30 departments. Is there a good way to hold all of this information in memory?
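
    At 2000 users and 30 departments, everything fits comfortably in memory, so plain dictionaries give O(1) lookups. A C# sketch of one way to index it; the type names and field shapes are illustrative, and it assumes the department graph has no cycles:

        using System.Collections.Generic;
        using System.Linq;

        class User
        {
            public int Id;
            public string Name;
            public int DepartmentId;
            public int? ManagerId;        // another user, if any
        }

        class Department
        {
            public int Id;
            public string Name;
            public int? ParentId;         // departments nest to any depth
        }

        class OrgIndex
        {
            readonly Dictionary<int, User> usersById;
            readonly ILookup<int, User> usersByDepartment;
            readonly ILookup<int?, Department> childDepartments;

            public OrgIndex(IEnumerable<User> users, IEnumerable<Department> depts)
            {
                usersById = users.ToDictionary(u => u.Id);
                usersByDepartment = usersById.Values.ToLookup(u => u.DepartmentId);
                childDepartments = depts.ToLookup(d => d.ParentId);
            }

            public User FindUser(int id) => usersById[id];

            // All users in a department and, recursively, its sub-departments.
            public IEnumerable<User> UsersInSubtree(int deptId)
            {
                foreach (var u in usersByDepartment[deptId]) yield return u;
                foreach (var child in childDepartments[(int?)deptId])
                    foreach (var u in UsersInSubtree(child.Id)) yield return u;
            }
        }

    Usage would be along the lines of new OrgIndex(users, depts).FindUser(42); rebuilding the index on data changes is cheap at this size.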

    Read the article

  • Lookup Field as a Site Column via CAML

    - by Rob Windsor
    I'm trying to create a Lookup Field as a Site Column via CAML. The list I want to use as the source of the lookup is created in the Feature Receiver, so I don't know its ID. I've read several blog posts that indicate that I can just put the path to the list in the List attribute. It seems from the comments on these posts that this solution works for some people but not for others. I'm in the latter group. When I try to associate a content type that uses the lookup site column I get: "Exception from HRESULT: 0x80040E07" <Field ID="{da94e56b-428f-4b95-b4c6-24aed0256475}" Name="Test_x0020_Lookup_x0020_Column" StaticName="Test_x0020_Lookup_x0020_Column" DisplayName="Test Lookup Column" Type="Lookup" Required="FALSE" List="Lists/Test" ShowField="Title" PrependId="TRUE" Group="Test Site Columns" /> <ContentType ID="0x0100B6D92594DDCE8E479D0EB0C414C463B0" Name="Test Lookup Content Type" Version="0" Group="Test Content Types"> <FieldRefs> <FieldRef ID="{da94e56b-428f-4b95-b4c6-24aed0256475}" Name="Test_x0020_Lookup_x0020_Column" Required="TRUE" /> </FieldRefs> </ContentType>
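
    One workaround that is often suggested (a hedged sketch, not verified against every SharePoint build) is to patch the provisioned field after the list exists, from the same feature receiver that creates the list, replacing the web-relative path in the List attribute with the real list ID. A SharePoint 2010-style receiver is shown; earlier versions also require overriding the remaining abstract methods:

        using System;
        using Microsoft.SharePoint;

        public class LookupColumnFeatureReceiver : SPFeatureReceiver
        {
            public override void FeatureActivated(SPFeatureReceiverProperties properties)
            {
                SPWeb web = (SPWeb)properties.Feature.Parent;
                SPList list = web.Lists["Test"];   // the list this feature created

                // Re-point the site column at the real list ID, since the
                // "Lists/Test" path in the CAML is unreliable.
                SPFieldLookup field = (SPFieldLookup)web.Fields[
                    new Guid("{da94e56b-428f-4b95-b4c6-24aed0256475}")];
                field.SchemaXml = field.SchemaXml.Replace(
                    "Lists/Test", list.ID.ToString("B"));
                field.Update(true);   // push the change to lists using the column
            }
        }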

    Read the article

  • OpenGL - Cascaded shadow mapping - Texture lookup

    - by Silverlan
    I'm trying to implement cascaded shadow mapping in my engine, but I'm somewhat stuck at the last step. For testing purposes I've made sure all cascades encompass my entire scene. The result is currently this: The different intensity of the cascades is not on purpose, it's actually the problem. This is how I do the texture lookup for the shadow maps inside the fragment shader: layout(std140) uniform CSM { vec4 csmFard; // far distances for each cascade mat4 csmVP[4]; // View-Projection Matrix int numCascades; // Number of cascades to use. In this example it's 4. }; uniform sampler2DArrayShadow csmTextureArray; // The 4 shadow maps in vec4 csmPos[4]; // Vertex position in shadow MVP space float GetShadowCoefficient() { int index = numCascades -1; vec4 shadowCoord; for(int i=0;i<numCascades;i++) { if(gl_FragCoord.z < csmFard[i]) { shadowCoord = csmPos[i]; index = i; break; } } shadowCoord.w = shadowCoord.z; shadowCoord.z = float(index); shadowCoord.x = shadowCoord.x *0.5f +0.5f; shadowCoord.y = shadowCoord.y *0.5f +0.5f; return shadow2DArray(csmTextureArray,shadowCoord).x; } I then use the return value and simply multiply it with the diffuse color. That explains the different intensity of the cascades, since I'm grabbing the depth value directly from the texture. I've tried to do a depth comparison instead, but with limited success: [...] // Same code as above shadowCoord.w = shadowCoord.z; shadowCoord.z = float(index); shadowCoord.x = shadowCoord.x *0.5f +0.5f; shadowCoord.y = shadowCoord.y *0.5f +0.5f; float z = shadow2DArray(csmTextureArray,shadowCoord).x; if(z < shadowCoord.w) return 0.25f; return 1.f; } While this does give me the same shadow value everywhere, it only works for the first cascade, all others are blank: (I colored the cascades because otherwise the transitions wouldn't be visible in this case) What am I missing here?

    Read the article

  • postfix: Temporary lookup failure for FQDN

    - by Thufir
    I'm using the FQDN of dur.bounceme.net which I want to resolve(?) to localhost. That is, I want mail to [email protected] to get delivered to user@localhost. I've tried following the Ubuntu guide on this and seem to be going in circles a bit. root@dur:~# root@dur:~# postfix stop postfix/postfix-script: stopping the Postfix mail system root@dur:~# postfix start postfix/postfix-script: starting the Postfix mail system root@dur:~# telnet dur.bounceme.net 25 Trying 127.0.1.1... telnet: Unable to connect to remote host: Connection refused root@dur:~# root@dur:~# telnet localhost 25 Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. 220 dur.bounceme.net ESMTP Postfix (Ubuntu) ehlo dur 250-dur.bounceme.net 250-PIPELINING 250-SIZE 10240000 250-VRFY 250-ETRN 250-STARTTLS 250-ENHANCEDSTATUSCODES 250-8BITMIME 250 DSN mail from:[email protected] 250 2.1.0 Ok rcpt to:[email protected] 451 4.3.0 <[email protected]>: Temporary lookup failure rcpt to:thufir@localhost 451 4.3.0 <thufir@localhost>: Temporary lookup failure quit 221 2.0.0 Bye Connection closed by foreign host. root@dur:~# root@dur:~# grep telnet /var/log/mail.log Aug 28 00:24:45 dur postfix/smtpd[18256]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 451 4.3.0 <thufir@localhost>: Temporary lookup failure; from=<[email protected]> to=<thufir@localhost> proto=ESMTP helo=<dur> Aug 28 00:24:58 dur postfix/smtpd[18256]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 451 4.3.0 <[email protected]>: Temporary lookup failure; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<dur> Aug 28 00:54:55 dur postfix/smtpd[18825]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 451 4.3.0 <[email protected]>: Temporary lookup failure; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<dur> Aug 28 00:55:08 dur postfix/smtpd[18825]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 451 4.3.0 <thufir@localhost>: Temporary lookup failure; from=<[email protected]> to=<thufir@localhost> proto=ESMTP helo=<dur> root@dur:~# root@dur:~# postconf -n alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases append_dot_mydomain = no biff = no broken_sasl_auth_clients = yes config_directory = /etc/postfix default_transport = smtp home_mailbox = Maildir/ inet_interfaces = loopback-only mailbox_command = /usr/lib/dovecot/deliver -c /etc/dovecot/conf.d/01-mail-stack-delivery.conf -m "${EXTENSION}" mailbox_size_limit = 0 mailman_destination_recipient_limit = 1 mydestination = dur, dur.bounceme.net, localhost.bounceme.net, localhost myhostname = dur.bounceme.net mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 readme_directory = no recipient_delimiter = + relay_domains = lists.dur.bounceme.net relay_transport = relay relayhost = smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtp_use_tls = yes smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) smtpd_recipient_restrictions = reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination smtpd_sasl_auth_enable = yes smtpd_sasl_authenticated_header = yes smtpd_sasl_local_domain = $myhostname smtpd_sasl_path = private/dovecot-auth smtpd_sasl_security_options = noanonymous smtpd_sasl_type = dovecot smtpd_tls_auth_only = yes smtpd_tls_cert_file = /etc/ssl/certs/ssl-mail.pem smtpd_tls_key_file = /etc/ssl/private/ssl-mail.key smtpd_tls_mandatory_ciphers = medium smtpd_tls_mandatory_protocols = SSLv3, TLSv1 
smtpd_tls_received_header = yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtpd_use_tls = yes tls_random_source = dev:/dev/urandom transport_maps = hash:/etc/postfix/transport root@dur:~#

    Read the article

  • SSIS - Range lookups

    - by Repieter
    When developing an ETL solution in SSIS we sometimes need to do range lookups. Several solutions for this can be found on the internet, but now we have built another solution which I would like to share, since it's pretty easy to implement and it performs well. You can download the sample package to see how it works. Make sure you have the AdventureWorks2008R2 and AdventureWorksDW2008R2 databases installed. (Apologies for the layout of this blog, I don't do this too often :)) To give a little bit more information about the example, this is basically what it does: we load a fact table and do an SCD type 2 lookup operation against the Product dimension. This is done with a script component. First we query the data warehouse to create the lookup dataset. The query that is used for that is: SELECT [ProductKey] ,[ProductAlternateKey] ,[StartDate] ,ISNULL([EndDate], '9999-01-01') AS EndDate FROM [DimProduct] The output of this query is stored in a DataTable: string lookupQuery = @" SELECT [ProductKey] ,[ProductAlternateKey] ,[StartDate] ,ISNULL([EndDate], '9999-01-01') AS EndDate FROM [DimProduct]"; OleDbCommand oleDbCommand = new OleDbCommand(lookupQuery, _oleDbConnection); OleDbDataAdapter adapter = new OleDbDataAdapter(oleDbCommand); _dataTable = new DataTable(); adapter.Fill(_dataTable); Now that the dimension data is stored in the DataTable we use the following method to do the actual lookup: public int RangeLookup(string businessKey, DateTime lookupDate) { // set default return value (Unknown) int result = -1; DataRow[] filteredRows; filteredRows = _dataTable.Select(string.Format("ProductAlternateKey = '{0}'", businessKey)); for (int i = 0; i < filteredRows.Length; i++) { // check if the lookupdate is found between the startdate and enddate of any of the records if (lookupDate >= (DateTime)filteredRows[i][2] && lookupDate < (DateTime)filteredRows[i][3]) { result = (filteredRows[i][0] == null) ? -1 : (int)filteredRows[i][0]; break; } } filteredRows = null; return result; } This method is executed for every row that passes the script component. This is implemented in the ProcessInputRow method: public override void Input0_ProcessInputRow(Input0Buffer Row) { // Perform the lookup operation on the current row and put the value in the Surrogate Key Attribute Row.ProductKey = RangeLookup(Row.ProductNumber, Row.OrderDate); } Now what actually happens?! 1. Every record passes the business key and the orderdate to the RangeLookup method. 2. The DataTable is then filtered on the business key of the current record. The output is stored in a DataRow[] object. 3. We loop over the DataRow[] object to see where the orderdate meets the following expression: (lookupDate >= (DateTime)filteredRows[i][2] && lookupDate < (DateTime)filteredRows[i][3]) 4. When the expression returns true (so where the date is between the StartDate and the EndDate), the surrogate key of the dimension record is returned. We have done some testing with this solution and it works great for us. Hope others can use this example to do their range lookups.
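
    Since DataTable.Select scans the table on every call, a variant of the same script-component approach that pre-groups the dimension rows by business key can cut the per-row cost to a dictionary probe. A sketch only; names follow the example above, and column access by name assumes the lookup query's column order:

        using System;
        using System.Collections.Generic;
        using System.Data;
        using System.Linq;   // DataTable.AsEnumerable needs a reference to System.Data.DataSetExtensions

        // Drop-in variant of the script component's lookup logic: group the
        // dimension rows by business key once, so each incoming row does a
        // dictionary probe instead of a DataTable.Select scan.
        public class IndexedRangeLookup
        {
            DataTable _dataTable;                       // filled as in the post
            Dictionary<string, DataRow[]> _index;

            // Call once in PreExecute, after _dataTable has been filled.
            public void BuildIndex()
            {
                _index = _dataTable.AsEnumerable()
                    .GroupBy(r => r.Field<string>("ProductAlternateKey"))
                    .ToDictionary(g => g.Key, g => g.ToArray());
            }

            public int RangeLookup(string businessKey, DateTime lookupDate)
            {
                DataRow[] rows;
                if (_index.TryGetValue(businessKey, out rows))
                    foreach (DataRow row in rows)
                        if (lookupDate >= row.Field<DateTime>("StartDate")
                            && lookupDate < row.Field<DateTime>("EndDate"))
                            return row.Field<int>("ProductKey");
                return -1;   // unknown member
            }
        }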

    Read the article

  • SQLAuthority News – Learning Trip – Traveling to Learn SQL Server

    - by pinaldave
    I am currently traveling to Delhi to learn SQL Server in person from my friend. You can read more details about why I am learning SQL Server. I have signed up for the course End to End SQL Server Business Intelligence at Koenig Solutions. Yesterday I blogged about my registration experience and today I am going to write about my experience once I arrived in Delhi. From Ahmedabad to Delhi I live with my wife and daughter in Bangalore (the IT hub of India); my hometown is Ahmedabad, and my parents live in a city near Ahmedabad. I decided to spend a few days with my folks before signing up for 3 days of solid learning. I had selected an early morning flight to Delhi and landed at 8:30 AM. As soon as I checked email on my mobile I was really glad to see that I had received the details of my pick-up vehicle from Koenig. I walked out of the airport and noticed a driver waiting with a placard with my name and photo on it. He was in a Koenig uniform so there was no chance of a mistake. Within minutes of landing in Delhi I was in my transport heading to the Koenig Training Center. After a quick introduction the driver handed me a bag (to be precise, an eco-friendly bag). The bag contained the following items: My registration form All the necessary documents in print which I had received earlier A printed book for the next day's course INR 1000 (What?) I was glad to receive the bag but I was very confused by the Rs 1000. I decided to figure this out once I reached the training center. Arriving at Koenig Inn Deluxe Koenig's registration fee includes all the stay and meals. I had opted for Koenig Inn Deluxe as my stay, as it was recommended by my friend and was the right economical choice for me. When I reached my accommodation, they were well aware of my arrival and I was immediately led to my spacious room. The room is well equipped with all the amenities (hot water, air conditioning, coffee table, snacks, and free internet) and the staff is very friendly. I immediately got ready as I had to go to the Koenig Training Center to meet the center head for a quick introduction. Koenig Inn Deluxe Koenig Training Center The training center is within five minutes of the accommodation. I was led to the center head right away and had a very meaningful conversation with Ms Hema regarding my learning goals. She gave me a quick tour of the training center. I was amazed at the number of lab rooms they have in the center. The labs are spacious and give users the much-needed hands-on experience. I was shown the lab where my class would be held the very next day, and I was given my trainer's profile. Mystery of Rs 1000 Well, after all this I had still not forgotten why I was given Rs 1000 on arriving at the airport. When I asked about it I was told that many students come from abroad and may not have Indian currency when they land at the airport. This was for their immediate expenses until they arrive at the training center. Later on they can get their currency converted to local currency at the Koenig Travel Desk. My curiosity was satisfied, but I had not expected this answer. I am amazed at the attention to detail. Koenig Travel Desk When I heard about the Koenig Travel Desk, I remembered that I have a few friends in Delhi and Gurgaon. I had completed all of the formalities, so I had the rest of the day on my hands. I asked the travel desk if they could arrange a day cab for me so I could visit my friends in Gurgaon.
Within 10 minutes I was on my way to Gurgaon. Telerik India Office Visit What did I do in Gurgaon? I met my friends Abhishek Kant, Dhananjay Kumar and Amit Chowdhary. I visited the Telerik India office and we had an excellent conversation on various aspects of technology and community. The Telerik India office is very spacious and Abhishek Kant (Telerik India Country Manager) gave us a quick tour of the office. We had an excellent lunch and dinner. One thing is for sure – the day was well spent. Pinal Dave, Dhananjay Kumar and Abhishek Kant Later that evening I returned to my accommodation and decided to read up on a few of the topics which I was going to learn the next day. In tomorrow's blog post I will discuss my learning experience. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, T SQL, Technology

    Read the article

  • Remote EJB lookup issue with WebSphere 6.1

    - by marc dauncey
    I've seen this question asked before, but I've tried various solutions proposed, to no avail. Essentially, I have two EJB enterprise applications, that need to communicate with one another. The first is a web application, the second is a search server - they are located on different development servers, not in the same node, cell, or JVM, although they are on the same physical box. I'm doing the JNDI lookup via IIOP, and the URL I am using is as follows: iiop://searchserver:2819 In my hosts file, I've set searchserver to 127.0.0.1. The ports for my search server are bound to this hostname too. However, when the web app (that uses Spring btw) attempts to lookup the search EJB, it fails with the following error. This is driving me nuts, surely this kind of comms between the servers should be fairly simple to get working. I've checked the ports and they are correct. I note that the exception says the initial context is H00723Node03Cell/nodes/H00723Node03/servers/server1, name: ejb/com/hmv/dataaccess/ejb/hmvsearch/HMVSearchHome. This is the web apps server NOT the search server. Is this correct? How can I get Spring to use the right context? [08/06/10 17:14:28:655 BST] 00000028 SystemErr R org.springframework.remoting.RemoteLookupFailureException: Failed to locate remote EJB [ejb/com/hmv/dataaccess/ejb/hmvsearch/HMVSearchHome]; nested exception is javax.naming.NameNotFoundException: Context: H00723Node03Cell/nodes/H00723Node03/servers/server1, name: ejb/com/hmv/dataaccess/ejb/hmvsearch/HMVSearchHome: First component in name hmvsearch/HMVSearchHome not found. [Root exception is org.omg.CosNaming.NamingContextPackage.NotFound: IDL:omg.org/CosNaming/NamingContext/NotFound:1.0] at org.springframework.ejb.access.SimpleRemoteSlsbInvokerInterceptor.doInvoke(SimpleRemoteSlsbInvokerInterceptor.java:101) at org.springframework.ejb.access.AbstractRemoteSlsbInvokerInterceptor.invoke(AbstractRemoteSlsbInvokerInterceptor.java:140) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204) at $Proxy7.doSearchByProductKeywordsForKiosk(Unknown Source) at com.hmv.web.usecases.search.SearchUC.execute(SearchUC.java:128) at com.hmv.web.actions.search.SearchAction.executeAction(SearchAction.java:129) at com.hmv.web.actions.search.KioskSearchAction.executeAction(KioskSearchAction.java:37) at com.hmv.web.actions.HMVAbstractAction.execute(HMVAbstractAction.java:123) at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:484) at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:274) at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1482) at com.hmv.web.controller.HMVActionServlet.process(HMVActionServlet.java:149) at org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:507) at javax.servlet.http.HttpServlet.service(HttpServlet.java:743) at javax.servlet.http.HttpServlet.service(HttpServlet.java:856) at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1282) at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1239) at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:136) at com.hmv.web.support.SessionFilter.doFilter(SessionFilter.java:137) at com.ibm.ws.webcontainer.filter.FilterInstanceWrapper.doFilter(FilterInstanceWrapper.java:142) at 
com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:121) at com.ibm.ws.webcontainer.filter.WebAppFilterChain._doFilter(WebAppFilterChain.java:82) at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:670) at com.ibm.ws.webcontainer.webapp.WebApp.handleRequest(WebApp.java:2933) at com.ibm.ws.webcontainer.webapp.WebGroup.handleRequest(WebGroup.java:221) at com.ibm.ws.webcontainer.VirtualHost.handleRequest(VirtualHost.java:210) at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:1912) at com.ibm.ws.webcontainer.channel.WCChannelLink.ready(WCChannelLink.java:84) at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleDiscrimination(HttpInboundLink.java:472) at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleNewInformation(HttpInboundLink.java:411) at com.ibm.ws.http.channel.inbound.impl.HttpICLReadCallback.complete(HttpICLReadCallback.java:101) at com.ibm.ws.tcp.channel.impl.WorkQueueManager.requestComplete(WorkQueueManager.java:566) at com.ibm.ws.tcp.channel.impl.WorkQueueManager.attemptIO(WorkQueueManager.java:619) at com.ibm.ws.tcp.channel.impl.WorkQueueManager.workerRun(WorkQueueManager.java:952) at com.ibm.ws.tcp.channel.impl.WorkQueueManager$Worker.run(WorkQueueManager.java:1039) at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:1462) Caused by: javax.naming.NameNotFoundException: Context: H00723Node03Cell/nodes/H00723Node03/servers/server1, name: ejb/com/hmv/dataaccess/ejb/hmvsearch/HMVSearchHome: First component in name hmvsearch/HMVSearchHome not found. [Root exception is org.omg.CosNaming.NamingContextPackage.NotFound: IDL:omg.org/CosNaming/NamingContext/NotFound:1.0] at com.ibm.ws.naming.jndicos.CNContextImpl.processNotFoundException(CNContextImpl.java:4392) at com.ibm.ws.naming.jndicos.CNContextImpl.doLookup(CNContextImpl.java:1752) at com.ibm.ws.naming.jndicos.CNContextImpl.doLookup(CNContextImpl.java:1707) at com.ibm.ws.naming.jndicos.CNContextImpl.lookupExt(CNContextImpl.java:1412) at com.ibm.ws.naming.jndicos.CNContextImpl.lookup(CNContextImpl.java:1290) at com.ibm.ws.naming.util.WsnInitCtx.lookup(WsnInitCtx.java:145) at javax.naming.InitialContext.lookup(InitialContext.java:361) at org.springframework.jndi.JndiTemplate$1.doInContext(JndiTemplate.java:132) at org.springframework.jndi.JndiTemplate.execute(JndiTemplate.java:88) at org.springframework.jndi.JndiTemplate.lookup(JndiTemplate.java:130) at org.springframework.jndi.JndiTemplate.lookup(JndiTemplate.java:155) at org.springframework.jndi.JndiLocatorSupport.lookup(JndiLocatorSupport.java:95) at org.springframework.jndi.JndiObjectLocator.lookup(JndiObjectLocator.java:105) at org.springframework.ejb.access.AbstractRemoteSlsbInvokerInterceptor.lookup(AbstractRemoteSlsbInvokerInterceptor.java:98) at org.springframework.ejb.access.AbstractSlsbInvokerInterceptor.getHome(AbstractSlsbInvokerInterceptor.java:143) at org.springframework.ejb.access.AbstractSlsbInvokerInterceptor.create(AbstractSlsbInvokerInterceptor.java:172) at org.springframework.ejb.access.AbstractRemoteSlsbInvokerInterceptor.newSessionBeanInstance(AbstractRemoteSlsbInvokerInterceptor.java:226) at org.springframework.ejb.access.SimpleRemoteSlsbInvokerInterceptor.getSessionBeanInstance(SimpleRemoteSlsbInvokerInterceptor.java:141) at org.springframework.ejb.access.SimpleRemoteSlsbInvokerInterceptor.doInvoke(SimpleRemoteSlsbInvokerInterceptor.java:97) ... 36 more Many thanks for any assistance! Marc

    Read the article

  • .net dictionary and lookup add / update

    - by freddy smith
    I am sick of doing blocks of code like this for various bits of code I have: if (dict.ContainsKey(key)) { dict[key] = value; } else { dict.Add(key,value); } and for lookups (i.e. key - list of values) if (lookup.ContainsKey(key)) { lookup[key].Add(value); } else { lookup.Add(key, new List<valuetype>()); lookup[key].Add(value); } Is there another collections lib or extension method I should use to do this in one line of code no matter what the key and value types are? e.g. dict.AddOrUpdate(key,value) lookup.AddOrUpdate(key,value)
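
    For the plain dictionary case, the indexer alone already does add-or-update in one statement: dict[key] = value inserts the key if it is missing and overwrites it if present, so no ContainsKey check is needed. The key-to-list case is a short extension method; a sketch with illustrative names:

        using System.Collections.Generic;

        static class CollectionExtensions
        {
            // Appends a value to the list for this key, creating the list on
            // first use, so call sites collapse to one line.
            public static void AddOrUpdate<TKey, TValue>(
                this Dictionary<TKey, List<TValue>> lookup, TKey key, TValue value)
            {
                List<TValue> list;
                if (!lookup.TryGetValue(key, out list))
                {
                    list = new List<TValue>();
                    lookup.Add(key, list);
                }
                list.Add(value);
            }
        }

    Usage is then lookup.AddOrUpdate(key, value); and if thread safety matters, ConcurrentDictionary in .NET 4 ships a built-in AddOrUpdate method for the single-value case.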

    Read the article

  • C# dictionary and lookup add / update

    - by freddy smith
    I am sick of doing blocks of code like this for various bits of code I have: if (dict.ContainsKey(key)) { dict[key] = value; } else { dict.Add(key,value); } and for lookups (i.e. key - list of values) if (lookup.ContainsKey(key)) { lookup[key].Add(value); } else { lookup.Add(key, new List<valuetype>()); lookup[key].Add(value); } Is there another collections lib or extension method I should use to do this in one line of code no matter what the key and value types are? e.g. dict.AddOrUpdate(key,value) lookup.AddOrUpdate(key,value)

    Read the article

  • Using Excel Lookup Function and Handling Case Where No Matches Exist

    - by Dave
    I'm using Excel to enter data for an event where people register. A high percentage of the registrants will have registered for previous events, so we already have their name and ID number. I'm trying to use the LOOKUP function in Excel to look up the name and then populate the ID field with their ID number. This works well unless the looked-up value belongs to a new user that we don't already have details for. In that case, because the LOOKUP function cannot find an exact match, it chooses the largest value in the lookup_range that is less than or equal to the value. This causes a problem since you can't tell whether the match was exact (and the data is correct) or inexact (and the match is incorrect). How do I catch non-matches and handle them separately?

    Read the article

  • How to lookup a value in a table with multiple criteria

    - by php-b-grader
    I have a data sheet with multiple values in multiple columns. I have a qty and a current price which, when multiplied out, give me the current revenue (CurRev). I want to use this lookup table to give me the new revenue (NewRev) from the new price, but I can't figure out how to do multiple IFs in a lookup. What I want is to build a new column that checks the "Product", "Tier" and "Location/State", gives me the new price from the lookup table (above), and then multiplies that by the qty. e.g. Data > Product, Tier, Location, Qty, CurRev, NewRev > Product1, Tier1, VIC, 2, $1000.00, $6000 (2 x $3000) > Product2, Tier3, NSW, 1, $100.00, $200 (1 x $200) > Product1, Tier3, SA, 5, $250.00, $750 (5 x $150) > Product3, Tier1, ACT, 5, $100.00, $500 (5 x $100) > Product2, Tier3, QLD, 2, $150.00, $240 (2 x $240) Worst case, if I just get the new rate I can create another column

    Read the article
