Search Results

Search found 2093 results on 84 pages for 'logical'.


  • So…is it a Seek or a Scan?

    - by Paul White
    You’re probably most familiar with the terms ‘Seek’ and ‘Scan’ from the graphical plans produced by SQL Server Management Studio (SSMS).  The image to the left shows the most common ones, with the three types of scan at the top, followed by four types of seek.  You might look to the SSMS tool-tip descriptions to explain the differences between them: Not hugely helpful, are they?  Both mention scans and ranges (nothing about seeks) and the Index Seek description implies that it will not scan the index entirely (which isn’t necessarily true).

Recall also yesterday’s post where we saw two Clustered Index Seek operations doing very different things.  The first Seek performed 63 single-row seeking operations; the second performed a ‘Range Scan’ (more on those later in this post).  I hope you agree that those were two very different operations, and perhaps you are wondering why there aren’t different graphical plan icons for Range Scans and Seeks?  I have often wondered about that, and the first person to mention it after yesterday’s post was Erin Stellato (twitter | blog).

Before we go on to make sense of all this, let’s look at another example of how SQL Server confusingly mixes the terms ‘Scan’ and ‘Seek’ in different contexts.  The diagram below shows a very simple heap table with two columns, one of which is the non-clustered Primary Key, and the other has a non-unique non-clustered index defined on it.  The right hand side of the diagram shows a simple query, its associated query plan, and a couple of extracts from the SSMS tool-tip and Properties windows.  Notice the ‘scan direction’ entry in the Properties window snippet.  Is this a seek or a scan?  The different references to Scans and Seeks are even more pronounced in the XML plan output that the graphical plan is based on.  This fragment is what lies behind the single Index Seek icon shown above.  You’ll find the same confusing references to Seeks and Scans throughout the product and its documentation.

Making Sense of Seeks

Let’s forget all about scans for a moment, and think purely about seeks.  Loosely speaking, a seek is the process of navigating an index B-tree to find a particular index record, most often at the leaf level.  A seek starts at the root and navigates down through the levels of the index to find the point of interest.

Singleton Lookups

The simplest sort of seek predicate performs this traversal to find (at most) a single record.  This is the case when we search for a single value using a unique index and an equality predicate.  It should be readily apparent that this type of search will either find one record, or none at all.  This operation is known as a singleton lookup.  Given the example table from before, the following query is an example of a singleton lookup seek.  Sadly, there’s nothing in the graphical plan or XML output to show that this is a singleton lookup – you have to infer it from the fact that this is a single-value equality seek on a unique index.  The other common examples of a singleton lookup are bookmark lookups – both the RID and Key Lookup forms are singleton lookups (an RID lookup finds a single record in a heap from the unique row locator, and a Key Lookup does much the same thing on a clustered table).  If you happen to run your query with STATISTICS IO ON, you will notice that ‘Scan Count’ is always zero for a singleton lookup.

Range Scans

The other type of seek predicate is a ‘seek plus range scan’, which I will refer to simply as a range scan.
The seek operation makes an initial descent into the index structure to find the first leaf row that qualifies, and then performs a range scan (either backwards or forwards in the index) until it reaches the end of the scan range.

The ability of a range scan to proceed in either direction comes about because index pages at the same level are connected by a doubly-linked list – each page has a pointer to the previous page (in logical key order) as well as a pointer to the following page.  The doubly-linked list is represented by the green and red dotted arrows in the index diagram presented earlier.  One subtle (but important) point is that the notion of a ‘forward’ or ‘backward’ scan applies to the logical key order defined when the index was built.  In the present case, the non-clustered primary key index was created as follows:

    CREATE TABLE dbo.Example
    (
        key_col INTEGER NOT NULL,
        data    INTEGER NOT NULL,
        CONSTRAINT [PK dbo.Example key_col]
            PRIMARY KEY NONCLUSTERED (key_col ASC)
    );

Notice that the primary key index specifies an ascending sort order for the single key column.  This means that a forward scan of the index will retrieve keys in ascending order, while a backward scan would retrieve keys in descending key order.  If the index had been created instead on key_col DESC, a forward scan would retrieve keys in descending order, and a backward scan would return keys in ascending order.

A range scan seek predicate may have a Start condition, an End condition, or both.  Where one is missing, the scan starts (or ends) at one extreme end of the index, depending on the scan direction.  Some examples might help clarify that: the following diagram shows four queries, each of which performs a single seek against a column holding every integer from 1 to 100 inclusive.  The results from each query are shown in the blue columns, and relevant attributes from the Properties window appear on the right.

Query 1 specifies that all key_col values less than 5 should be returned in ascending order.  The query plan achieves this by seeking to the start of the index leaf (there is no explicit starting value) and scanning forward until the End condition (key_col < 5) is no longer satisfied (SQL Server knows it can stop looking as soon as it finds a key_col value that isn’t less than 5 because all later index entries are guaranteed to sort higher).

Query 2 asks for key_col values greater than 95, in descending order.  SQL Server returns these results by seeking to the end of the index, and scanning backwards (in descending key order) until it comes across a row that isn’t greater than 95.  Sharp-eyed readers may notice that the end-of-scan condition is shown as a Start range value.  This is a bug in the XML show plan which bubbles up to the Properties window – when a backward scan is performed, the roles of the Start and End values are reversed, but the plan does not reflect that.  Oh well.

Query 3 looks for key_col values that are greater than or equal to 10, and less than 15, in ascending order.  This time, SQL Server seeks to the first index record that matches the Start condition (key_col >= 10) and then scans forward through the leaf pages until the End condition (key_col < 15) is no longer met.

Query 4 performs much the same sort of operation as Query 3, but requests the output in descending order.
Again, we have to mentally reverse the Start and End conditions because of the bug, but otherwise the process is the same as always: SQL Server finds the highest-sorting record that meets the condition ‘key_col < 25’ and scans backward until ‘key_col >= 20’ is no longer true.

One final point to note: seek operations always have the Ordered: True attribute.  This means that the operator always produces rows in a sorted order, either ascending or descending depending on how the index was defined, and whether the scan part of the operation is forward or backward.  You cannot rely on this sort order in your queries of course (you must always specify an ORDER BY clause if order is important) but SQL Server can make use of the sort order internally.  In the four queries above, the query optimizer was able to avoid an explicit Sort operator to honour the ORDER BY clause, for example.

Multiple Seek Predicates

As we saw yesterday, a single index seek plan operator can contain one or more seek predicates.  These seek predicates can either be all singleton seeks or all range scans – SQL Server does not mix them.  For example, you might expect the following query to contain two seek predicates, a singleton seek to find the single record in the unique index where key_col = 10, and a range scan to find the key_col values between 15 and 20:

    SELECT key_col
    FROM dbo.Example
    WHERE key_col = 10
    OR key_col BETWEEN 15 AND 20
    ORDER BY key_col ASC;

In fact, SQL Server transforms the singleton seek (key_col = 10) to the equivalent range scan, Start:[key_col >= 10], End:[key_col <= 10].  This allows both range scans to be evaluated by a single seek operator.  To be clear, this query results in two range scans: one from 10 to 10, and one from 15 to 20.

Final Thoughts

That’s it for today – tomorrow we’ll look at monitoring singleton lookups and range scans, and I’ll show you a seek on a heap table.  Yes, a seek.  On a heap.  Not an index!  If you would like to run the queries in this post for yourself, there’s a script below.  Thanks for reading!
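Before the script, here is a minimal sketch of the ‘Scan Count’ observation from the singleton lookup section.  It assumes the dbo.Example table created in the script below; the exact STATISTICS IO message text varies by SQL Server version:

    -- Compare Scan Count for a singleton lookup vs a range scan
    SET STATISTICS IO ON;

    -- Singleton lookup: equality test on a unique index
    -- Expect: Scan count 0
    SELECT E.key_col
    FROM dbo.Example AS E
    WHERE E.key_col = 32;

    -- Range scan: seek to a start point, then scan the range
    -- Expect: Scan count 1
    SELECT E.key_col
    FROM dbo.Example AS E
    WHERE E.key_col >= 10
    AND E.key_col < 15
    ORDER BY E.key_col ASC;

    SET STATISTICS IO OFF;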
    IF OBJECT_ID(N'dbo.Example', N'U') IS NOT NULL
    BEGIN
        DROP TABLE dbo.Example;
    END;

    -- Test table is a heap
    -- Non-clustered primary key on 'key_col'
    CREATE TABLE dbo.Example
    (
        key_col INTEGER NOT NULL,
        data    INTEGER NOT NULL,
        CONSTRAINT [PK dbo.Example key_col]
            PRIMARY KEY NONCLUSTERED (key_col)
    );

    -- Non-unique non-clustered index on the 'data' column
    CREATE NONCLUSTERED INDEX [IX dbo.Example data]
    ON dbo.Example (data);

    -- Add 100 rows
    INSERT dbo.Example WITH (TABLOCKX)
    (
        key_col,
        data
    )
    SELECT
        key_col = V.number,
        data = V.number
    FROM master.dbo.spt_values AS V
    WHERE V.[type] = N'P'
    AND V.number BETWEEN 1 AND 100;

    -- ================
    -- Singleton lookup
    -- ================

    -- Single value equality seek in a unique index
    -- Scan count = 0 when STATISTICS IO is ON
    -- Check the XML SHOWPLAN
    SELECT E.key_col
    FROM dbo.Example AS E
    WHERE E.key_col = 32;

    -- ===========
    -- Range Scans
    -- ===========

    -- Query 1
    SELECT E.key_col
    FROM dbo.Example AS E
    WHERE E.key_col <= 5
    ORDER BY E.key_col ASC;

    -- Query 2
    SELECT E.key_col
    FROM dbo.Example AS E
    WHERE E.key_col > 95
    ORDER BY E.key_col DESC;

    -- Query 3
    SELECT E.key_col
    FROM dbo.Example AS E
    WHERE E.key_col >= 10
    AND E.key_col < 15
    ORDER BY E.key_col ASC;

    -- Query 4
    SELECT E.key_col
    FROM dbo.Example AS E
    WHERE E.key_col >= 20
    AND E.key_col < 25
    ORDER BY E.key_col DESC;

    -- Final query (singleton + range = 2 range scans)
    SELECT E.key_col
    FROM dbo.Example AS E
    WHERE E.key_col = 10
    OR E.key_col BETWEEN 15 AND 20
    ORDER BY E.key_col ASC;

    -- === TIDY UP ===
    DROP TABLE dbo.Example;

© 2011 Paul White
email: [email protected]
twitter: @SQL_Kiwi

    Read the article

  • Twitter status id conundrum

    - by jamiet
    I have an interest, a slightly perverse one some might say, in using online services and trying to figure out what the underlying (logical) data model is, and in this day and age Twitter is one that lends itself very well to scrutiny. Consider this recent tweet of mine: The URL that enables you to see that tweet is http://twitter.com/jamiet/status/12154647354. We can interpret that URL to mean "a tweet by jamiet with an id of 12154647354" and hence we might further assume that the unique identifier for the tweet is {jamiet,12154647354}. However, it's well known that Twitter gives each status a unique ID regardless of who tweeted it, so we might expect to reach that tweet just by using a URL of http://twitter.com/status/12154647354; however (at the time of writing) that only redirects to Twitter's homepage. That seems strange to me, especially given that we can use Twitter's API to access information about that tweet using only the id of the status. Witness http://api.twitter.com/1/statuses/show/12154647354.xml. [We can also access a JSON version of that information using http://api.twitter.com/1/statuses/show/12154647354.json] I'm puzzled as to why a tweet can't be accessed on the main Twitter website using the id alone. Anyone have any suggestions? @jamiet
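If we sketch that inference in T-SQL (the table and column names here are hypothetical; the only grounded point is that the status id alone, not {user, id}, acts as the key):

    -- Inferred logical model: status_id is globally unique across all users,
    -- so it is the primary key on its own; the user is just an attribute.
    CREATE TABLE dbo.TwitterStatus
    (
        status_id   BIGINT        NOT NULL PRIMARY KEY,
        screen_name VARCHAR(15)   NOT NULL,   -- e.g. 'jamiet'
        status_text NVARCHAR(140) NOT NULL
    );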

    Read the article

  • Expanding the Oracle Enterprise Repository with functional documentation by Marc Kuijpers

    - by JuergenKress
    Introduction

Have you ever experienced the challenge of mapping both your functional and technical assets in one software package? Finding a software package that is able to describe the metadata about these assets and their mutual relationships? And if you found the correct software package, was it maintainable? The Oracle Enterprise Repository (OER) is a powerful SOA repository. Its core task is to map and visualize the interaction between technical assets generated by the SOA Suite and OSB. However, OER can be configured to not only contain these technical assets, but also to contain functional assets, i.e. functional designs, use cases and a logical data model. Now that's interesting! OER is able to show all the assets in your system and, if necessary, zoom in on one of the assets and their mutual relationships (Figure 1). This opens a set of doors to powerful features, e.g.:

    Impact analysis – If a functional design is adjusted, which other functional designs and use cases do I need to adjust?
    Traceability – If a web service generates an error, in which functional and technical designs is the web service described?

This sounds great, but how do we get all the functional and technical documents into OER, and how are we going to keep this repository up-to-date? Read the full article.

SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: OER,SOA Governance,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • SQL SERVER – Disk Space Monitoring – Detecting Low Disk Space on Server

    - by Pinal Dave
    A very common question I often receive is how to detect if the disk space is running low on SQL Server. There are two different ways to do the same. I personally prefer method 2 as it is very easy to use and I can use it creatively along with the database name.

Method 1:

    EXEC MASTER..xp_fixeddrives
    GO

The above query will return two columns: drive name and MB free. If we want to use this data in our query, we will have to create a temporary table, insert the data from this stored procedure into the temporary table, and use it.

Method 2:

    SELECT DISTINCT
        dovs.logical_volume_name AS LogicalName,
        dovs.volume_mount_point AS Drive,
        CONVERT(INT,dovs.available_bytes/1048576.0) AS FreeSpaceInMB
    FROM sys.master_files mf
    CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.FILE_ID) dovs
    ORDER BY FreeSpaceInMB ASC
    GO

The above query will give us three columns: drive logical name, drive letter and free space in MB. We can further modify the above query to also include the database name:

    SELECT DISTINCT
        DB_NAME(dovs.database_id) DBName,
        dovs.logical_volume_name AS LogicalName,
        dovs.volume_mount_point AS Drive,
        CONVERT(INT,dovs.available_bytes/1048576.0) AS FreeSpaceInMB
    FROM sys.master_files mf
    CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.FILE_ID) dovs
    ORDER BY FreeSpaceInMB ASC
    GO

This will give us additional data about which database is placed on which drive. If you see a database name multiple times, it is because your database has multiple files and they are on different drives. You can modify the above query one more time to include the actual file location as well:

    SELECT DISTINCT
        DB_NAME(dovs.database_id) DBName,
        mf.physical_name PhysicalFileLocation,
        dovs.logical_volume_name AS LogicalName,
        dovs.volume_mount_point AS Drive,
        CONVERT(INT,dovs.available_bytes/1048576.0) AS FreeSpaceInMB
    FROM sys.master_files mf
    CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.FILE_ID) dovs
    ORDER BY FreeSpaceInMB ASC
    GO

The above query will now additionally include the physical file location. As I mentioned earlier, I prefer method 2 as I can use it creatively as per the business need. Let me know which method you are using in your production server. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
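As a minimal sketch of the temporary-table step described under Method 1 (the temp table name and column types here are illustrative; xp_fixeddrives returns a drive letter and the free space in MB):

    -- Capture xp_fixeddrives output so it can be queried like a table
    CREATE TABLE #FixedDrives
    (
        Drive  CHAR(1),
        MBFree INT
    );

    INSERT INTO #FixedDrives (Drive, MBFree)
    EXEC MASTER..xp_fixeddrives;

    -- Example: flag drives with less than 1 GB free
    SELECT Drive, MBFree
    FROM #FixedDrives
    WHERE MBFree < 1024;

    DROP TABLE #FixedDrives;
    GO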

    Read the article

  • Keep basic game physics separate from basic game object? [on hold]

    - by metamorphosis
    If anybody has dealt with a similar situation I'd be interested in your experience/wisdom. I'm developing a 2D game library in C++. I have game objects which have very basic physics; they also have movement classes attached to differing states – for example, a different movement type based on whether the character is jumping, on ice, whatever. In terms of storing velocity and acceleration impulses, are they best held by the object, or by the associated movement class? The reason I ask is that I can see advantages to both approaches. If you store physics data in the movement class, you have to pass physics information between class instances when a state change occurs (i.e. impulses, gravity etc.) but the class has total control over whether those physics are updated or not. An obvious example of how this would be useful is if an object was affected by something which caused it to ignore gravity, or something like that. On the other hand, if you store the physics data in the object class, it feels more logical; you don't have to go around passing physics impulses and gravity etc. However, the control that the movement class has over the object's physics becomes more convoluted. Basically the difference is between:

    object -> physics stacks (acceleration impulses etc.)
           -> physics functions
           -> movement type
    <- movement type makes physics function calls through object

and

    object -> movement type -> physics stacks
                            -> physics functions
    -> object forwards external physics calls onto movement type
    -> object transfers physics stacks between movement types when a state change occurs

Are there best practices here?

    Read the article

  • SQL SERVER – Cleaning Up SQL Server Indexes – Defragmentation, Fillfactor – Video

    - by pinaldave
    Storing data non-contiguously on disk is known as fragmentation. Before learning to eliminate fragmentation, you should have a clear understanding of the types of fragmentation. When records are stored non-contiguously inside the page, it is called internal fragmentation. When the physical storage of pages and extents is not contiguous on disk, it is called external fragmentation. We can detect both types of fragmentation using the DMV sys.dm_db_index_physical_stats. Here is the generic advice for reducing fragmentation:

    If avg_fragmentation_in_percent > 5% and < 30%, then use ALTER INDEX REORGANIZE. This statement is a replacement for DBCC INDEXDEFRAG and reorders the leaf level pages of the index in a logical order. As this is an online operation, the index is available while the statement is running.

    If avg_fragmentation_in_percent > 30%, then use ALTER INDEX REBUILD. This is a replacement for DBCC DBREINDEX and rebuilds the index online or offline. In such a case, we can also use the drop and re-create index method. (Ref: MSDN)

Here is a quick video which covers many of the above mentioned topics. While Vinod and I were planning the Indexing course, we had plenty of fun and learning. We often recorded a few of our statements and just left them aside. Afterwards we thought they would be really funny. Here is a funny video shot by Vinod and myself on the same subject. Here is the link to the SQL Server Performance: Indexing Basics course. Here is additional reading material on the same subject: SQL SERVER – Fragmentation – Detect Fragmentation and Eliminate Fragmentation; SQL SERVER – 2005 – Display Fragmentation Information of Data and Indexes of Database Table; SQL SERVER – De-fragmentation of Database at Operating System to Improve Performance. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video
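Putting the percentages above into practice, here is a minimal sketch that inspects fragmentation via sys.dm_db_index_physical_stats and suggests an action based on the generic 5%/30% advice quoted above (the commented ALTER INDEX lines use a hypothetical table and index name):

    -- Fragmentation check for all indexes in the current database
    SELECT
        OBJECT_NAME(ips.[object_id]) AS TableName,
        i.name AS IndexName,
        ips.avg_fragmentation_in_percent,
        CASE
            WHEN ips.avg_fragmentation_in_percent > 30 THEN 'ALTER INDEX ... REBUILD'
            WHEN ips.avg_fragmentation_in_percent > 5  THEN 'ALTER INDEX ... REORGANIZE'
            ELSE 'No action needed'
        END AS SuggestedAction
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    INNER JOIN sys.indexes AS i
        ON i.[object_id] = ips.[object_id]
        AND i.index_id = ips.index_id
    WHERE ips.index_id > 0;  -- exclude heaps

    -- Acting on the advice for a hypothetical index:
    -- ALTER INDEX [IX_MyTable_MyColumn] ON dbo.MyTable REORGANIZE;
    -- ALTER INDEX [IX_MyTable_MyColumn] ON dbo.MyTable REBUILD;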

    Read the article

  • Investigation: Can different combinations of components affect Dataflow performance?

    - by jamiet
    Introduction

The Dataflow task is one of the core components (if not the core component) of SQL Server Integration Services (SSIS) and often the most misunderstood. This is not surprising; it's an incredibly complicated beast and we're abstracted away from that complexity via some boxes that go yellow, red or green and that have some lines drawn between them. [Figure: Example dataflow]

In this blog post I intend to look under that facade and get into some of the nuts and bolts of the Dataflow Task by investigating how the decisions we make when building our packages can affect performance. I will do this by comparing the performance of three dataflows that all have the same input, all produce the same output, but which all operate slightly differently by way of having different transformation components. I also want to use this blog post to challenge a commonly held opinion that I see perpetuated over and over again on the SSIS forum: that adding components to a dataflow will be detrimental to overall performance. It's not surprising that people think this – it is intuitive to think that more components means more work – however this is not a view that I share. I have always been of the opinion that there are many factors affecting dataflow duration and the number of components is actually one of the less important ones; having said that, I have never proven that assertion and that is one reason for this investigation. I have actually seen evidence that some people think dataflow duration is simply a function of number of rows and number of components. I'll happily call that one out as a myth even without any investigation!

The Setup

I have a 2GB datafile which is a list of 4731904 (~4.7million) customer records with various attributes against them, and it contains 2 columns that I am going to use for categorisation: [YearlyIncome] and [BirthDate]. The data file is a SSIS raw format file which I chose to use because it is the quickest way of getting data into a dataflow, and given that I am testing the transformations, not the source or destination adapters, I want to minimise external influences as much as possible. In the test I will split the customers according to month of birth (12 of those) and whether or not their yearly income is above or below 50000 (2 of those); in other words I will be splitting them into 24 discrete categories, and in order to do it I shall be using different combinations of SSIS' Conditional Split and Derived Column transformation components. The 24 datapaths that occur will each input to a rowcount component, again because this is the least resource intensive means of terminating a datapath. The test is being carried out on a Dell XPS Studio laptop with a quad core (8 logical procs) Intel Core i7 at 1.73GHz and a Samsung SSD hard drive, running SQL Server 2008 R2 on Windows 7.

The Variables

Here are the three combinations of components that I am going to test:

One Conditional Split – A single Conditional Split component, CSPL Split by Month of Birth and Income Category, that will use expressions on [YearlyIncome] & [BirthDate] to send each row to one of 24 outputs. This next screenshot displays the expression logic in use.

Derived Column & Conditional Split – A Derived Column component, DER Income Category, that adds a new column [IncomeCategory] which will contain one of two possible text values {"LessThan50000","GreaterThan50000"} and uses [YearlyIncome] to determine which value each row should get.
A Conditional Split component, CSPL Split by Month of Birth and Income Category, then uses that new column in conjunction with [BirthDate] to determine which of the same 24 outputs to send each row to. Put more simply, I am separating the Conditional Split of #1 into a Derived Column and a Conditional Split. The next screenshots display the expression logic in use: DER Income Category; CSPL Split by Month of Birth and Income Category.

Three Conditional Splits – A Conditional Split component that produces two outputs based on [YearlyIncome], one for each Income Category. Each of those outputs will go to a further Conditional Split that splits the input into 12 outputs, one for each month of birth (identical logic in each). In this case then I am separating the single Conditional Split of #1 into three Conditional Split components. The next screenshots display the expression logic in use: CSPL Split by Income Category; CSPL Split by Month of Birth 1 & 2.

Each of these combinations will provide an input to one of the 24 rowcount components, just the same as before. For illustration here is a screenshot of the dataflow containing three Conditional Split components. As you can see, these dataflows have a fair bit of work to do, and remember that they're doing that work for 4.7million rows. I will execute each dataflow 10 times and use the average for comparison. I foresee three possible outcomes:

    1. The dataflow containing just one Conditional Split (i.e. #1) will be quicker
    2. There is no significant difference between any of them
    3. One of the two dataflows containing multiple transformation components will be quicker

Regardless of which of those outcomes comes to pass, we will have learnt something, and that makes this an interesting test to carry out. Note that I will be executing the dataflows using dtexec.exe rather than hitting F5 within BIDS.

The Results and Analysis

The table below shows all of the executions, 10 for each dataflow. It also shows the average for each along with a standard deviation. All durations are in seconds. I'm pasting a screenshot because I frankly can't be bothered with the faffing about needed to make a presentable HTML table. It is plain to see from the average that the dataflow containing three Conditional Splits is significantly faster, the other two taking 43% and 52% longer respectively. This seems strange though, right? Why does the dataflow containing the most components outperform the other two by such a big margin? The answer is actually quite logical when you put some thought into it, and I'll explain that below. Before progressing, a side note: the standard deviation for the "Three Conditional Splits" dataflow is orders of magnitude smaller – indicating that performance for this dataflow can be predicted with much greater confidence too.

The Explanation

I refer you to the screenshot above that shows how CSPL Split by Month of Birth and Income Category in the first dataflow is set up. Observe that there is a case for each combination of Month of Birth and Income Category – 24 in total. These expressions get evaluated in the order that they appear, and hence if we assume that Month of Birth and Income Category are uniformly distributed in the dataset, we can deduce that the expected number of expression evaluations for each row is 12.5, i.e. (1 (the minimum) + 24 (the maximum)) divided by 2 = 12.5. Now take a look at the screenshots for the second dataflow. We are doing one expression evaluation in DER Income Category and we have the same 24 cases in CSPL Split by Month of Birth and Income Category as we had before, only the expression differs slightly. In this case then we have 1 + 12.5 = 13.5 expected evaluations for each row – that would account for the slightly longer average execution time for this dataflow.

Now onto the third dataflow, the quick one. CSPL Split by Income Category does a maximum of 2 expression evaluations, thus the expected number of evaluations per row is 1.5. CSPL Split by Month of Birth 1 & CSPL Split by Month of Birth 2 both have less work to do than the previous Conditional Split components because they only have 12 cases to test for, thus the expected number of expression evaluations is 6.5. There are two of them, so the total expected number of expression evaluations for this dataflow is 6.5 + 6.5 + 1.5 = 14.5. 14.5 is still more than 12.5 & 13.5 though, so why is the third dataflow so much quicker? Simple: the conditional expressions in the first two dataflows have two boolean predicates to evaluate – one for Income Category and one for Month of Birth; the expressions in the Conditional Split in the third dataflow however only have one predicate, thus they are doing a lot less work. To sum up, the difference in execution times can be attributed to the difference between:

    MONTH(BirthDate) == 1 && YearlyIncome <= 50000

and

    MONTH(BirthDate) == 1

In the first two dataflows YearlyIncome <= 50000 gets evaluated an average of 12.5 times for every row, whereas in the third dataflow it is evaluated once and once only. Multiply those 11.5 extra operations by 4.7million rows (11.5 x 4,731,904 ≈ 54 million extra predicate evaluations) and you get a significant amount of extra CPU cycles – that's where our duration difference comes from.

The Wrap-up

The obvious point here is that adding new components to a dataflow isn't necessarily going to make it go any slower; moreover you may be able to achieve significant improvements by splitting logic over multiple components rather than one. Performance tuning is all about reducing the amount of work that needs to be done, and that doesn't necessarily mean use fewer components; indeed sometimes you may be able to reduce workload in ways that aren't immediately obvious, as I think I have proven here. Of course there are many variables in play here and your mileage will most definitely vary. I encourage you to download the package and see if you get similar results – let me know in the comments. The package contains all three dataflows plus a fourth dataflow that will create the 2GB raw file for you (you will also need the [AdventureWorksDW2008] sample database from which to source the data); simply disable all dataflows except the one you want to test before executing the package and remember, execute using dtexec, not within BIDS. If you want to explore dataflow performance tuning in more detail then here are some links you might want to check out: Inequality joins, Asynchronous transformations and Lookups; Destination Adapter Comparison; Don't turn the dataflow into a cursor; SSIS Dataflow – Designing for performance (webinar). Any comments? Let me know! @Jamiet
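If it helps to see the categorisation logic outside SSIS, here is a minimal T-SQL sketch of the same 24-way split (the dbo.Customer table name is illustrative, and this is an analogy for the logic only; SSIS evaluates its Conditional Split expressions per row as described above, which is the whole point of the article):

    -- Count customers in each of the 24 categories:
    -- 12 birth months x 2 income bands
    SELECT
        CASE WHEN c.YearlyIncome <= 50000
             THEN 'LessThan50000'
             ELSE 'GreaterThan50000'
        END AS IncomeCategory,
        MONTH(c.BirthDate) AS BirthMonth,
        COUNT(*) AS CustomerCount
    FROM dbo.Customer AS c
    GROUP BY
        CASE WHEN c.YearlyIncome <= 50000
             THEN 'LessThan50000'
             ELSE 'GreaterThan50000'
        END,
        MONTH(c.BirthDate);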

    Read the article

  • What is the proper way to Windows 7/Ubuntu 10.10 Dual-Triple Boot Partitioning for Laptop OEM?

    - by Denja
    Hi Linux Community, I find myself struggling with the slowness of the Windows OS once again. It's time to change to Ubuntu 10.10 64-bit, as I'd like to use a faster operating system. My laptop's hard disk has RECOVERY and HP_TOOLS partitions; they are both primary. I have the System Recovery DVD for Windows 64-bit should anything bad happen.

Here's the layout I used with Windows before:

    * (C:) Windows 7 system partition NTFS - 284.89GB (Primary, Boot, Pagefile, Dump)
    * HP_TOOLS system partition FAT32 - 99MB (Primary)
    * (D:) RECOVERY partition NTFS - 12.90GB (Primary)
    * SYSTEM partition NTFS - 199MB (Primary)

Here's the layout I wanted to make:

    * (C:) Windows 7 system partition NTFS - 60GB (Primary) (sda1)
    * (D:) Windows DATA partition (user files) NTFS - 120GB (Primary) (sda2); I want to share this with Linux
    * Linux root Ext4 - 10GB (Extended) (sda3) (Ubuntu 10.10 64bit)
    * Linux home Ext3 - 90GB (Extended) (sda4) (Ubuntu 10.10 64bit)
    * Linux swap - RAM size, 3GB (sda5)
    * Linux root Ext3 - 18GB (Extended) (sda6) (OpenSuse or Puppy or Kubuntu)

Here is my new Ubuntu 10.10 64bit layout in use now:

    * SYSTEM partition NTFS - 199MB (Primary) (sda1)
    * (C:) Windows 7 system partition NTFS - 90GB (Primary) (sda2)
    * (D:) Windows 7 RECOVERY partition NTFS - 12.90GB (Primary) (sda3)
    * Linux system partition EXTENDED - 195.1GB (Logical)
    * Linux root Ext4 - 10GB (Extended) (sda4)
    * Linux swap - RAM x 2 size, 6.1GB (sda5)
    * Linux home Ext3 - 179GB (Extended) (sda6)

When I installed Ubuntu, I didn't know if I could wipe all previous partitions, because of the RECOVERY partition. So I just made the space for my extended partition with GParted by deleting HP_TOOLS (FAT32). By doing this I somehow managed to install Ubuntu 64 with success. I also made the partitions for the swap or a third Linux OS as Jordan suggested. But I couldn't actually make the partitions for the shared NTFS (no option!).

Question 1: What is the proper way to do Windows 7/Ubuntu 10.10 dual/triple boot partitioning for an OEM laptop?

Thank you in advance for your advice and suggestions, and Happy New Year to all!!

    Read the article

  • SQL SERVER – A Funny Cartoon on Index

    - by pinaldave
    Performance Tuning has been my favorite subject and I have done it for many years now. Today I will share one of the most common conversations about indexes I have heard in my life. Every single time I am at a consultation for performance tuning, I hear the following conversation among various team members. I want to ask you: does this kind of conversation happen in your organization? Anyway, if you think an index solves all of your performance problems, I think that is not true. There are many other factors one has to consider along with indexes. For example, I consider the following topics that one needs to understand for performance tuning:

    Logical Query Processing
    Efficient Join Techniques
    Query Tuning Considerations
    Avoiding Common Performance Tuning Issues
    Statistics and Best Practices
    TempDB Tuning
    Hardware Planning
    Understanding Query Processor
    Using SQL Server 2005 and 2008 Updated Feature Sets
    CPU, Memory, I/O Bottleneck
    Index Tuning (of course)
    Many more…

Well, I have written this blog post intending to keep it easy and light. I will discuss other performance tuning concepts in the future. Let me know what you think about the cartoon I made. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Humor, SQL Index, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • JDeveloper and ADF at UKOUG

    - by Grant Ronald
    This year, Oracle ADF and JDeveloper have a big showing at the UKOUG (about 22 hours' worth!!) – Europe's largest Oracle User Group. There are three days packed with awesome ADF content delivered by some of the leading lights in ADF development including Duncan Mills, Frank Nimphius, Shay Shmeltzer, Susan Duncan, Lucas Jellema, Steven Davelaar, Sten Vesterli (and I'll be there as well!). Please make sure you refer to the official agenda for timings, but an outline is here (if you think there are any sessions I have missed, let me know and I will add them).

Monday
    10:00 - 10:45 - Deepdive into logical and physical data modeling with JDeveloper
    10:00 - 12:15 - Debugging ADF Applications
    12:15 - 13:15 - Learn ADF Task Flows in 60 Minutes
    14:30 - 15:15 - ADF's Hidden Gem - the Groovy scripting language in Oracle ADF
    15:25 - 16:10 - ADF Patterns for Forms Conversions
    16:35 - 17:35 - Dummies Guide to Oracle ADF
    16:35 - 17:35 - ADF Security Overview - Strategies and Best Practices
    17:45 - 18:30 - A Methodology for Enterprise Applications with Oracle ADF

Tuesday
    09:00 - 10:00 - Real World Performance Tuning for Oracle ADF
    11:15 - 12:15 - Keynote: Modern Development, Mobility and Rich Internet Applications
    11:15 - 12:15 - Migration to Fusion Middleware 11g: Real world cases of Forms, ADF and Identity Management upgrades
    14:40 - 15:20 - What's new in JDeveloper 11gR2
    14:40 - 15:20 - Development Tools Roundtable
    15:35 - 16:20 - ALM in JDeveloper is exciting!
    16:40 - 17:40 - Moving Oracle Forms to Oracle ADF: Case Studies

Wednesday
    09:00 - 10:00 - Building a Multi-Tasking ADF Application with Dynamic Regions and Dynamic Tabs
    10:10 - 10:55 - Building Highly Reusable ADF Taskflows
    12:30 - 13:30 - Design Patterns, Customization and Extensibility of Fusion Applications
    14:25 - 15:10 - Continuous Integration with Hudson: What a year!
    14:00 - 17:00 - Wednesday Wizardry with Fusion Middleware - Live application development demonstration with ADF, SOA Suite
    15:20 - 16:05 - Adding Mobile and Web 2.0 UIs to Existing Applications - The Fusion Way
    16:15 - 17:00 - Leveraging ADF for Building Complex Custom Applications

    Read the article

  • Windows Azure Upgrade Domain

    - by kaleidoscope
    Windows Azure automatically divides your role instances into "logical" domains called upgrade domains. During an upgrade, Azure updates these domains one by one. This is by-design behavior intended to avoid nasty situations. Among the recent feature additions and enhancements on the platform was the ability to notify your role instances of "environment" changes, with instances being added or removed the most common cases. When such a change occurs, all your roles get a notification. Imagine if you had 50 or 60 role instances all getting notified at once and starting various actions in reaction to the change: it would be a complete disaster for your service. The way to address this problem is upgrade domains. During an upgrade, Windows Azure updates them one by one, and only the role instances associated with the specific domain being updated get notified of the changes taking place. Only a small number of your role instances get notified and react, while the rest remain intact, providing a seamless upgrade experience and no service disruption or downtime. http://www.kefalidis.me/archive/2009/11/27/windows-azure-ndash-what-is-an-upgrade-domain.aspx Lokesh, M
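As a rough worked example of why this helps (the instance and domain counts here are illustrative; five upgrade domains was the commonly cited default at the time): with 10 role instances spread evenly across 5 upgrade domains, each domain holds 2 instances. An in-place upgrade then touches one domain (2 instances) at a time, so 8 of the 10 instances keep serving traffic untouched, and only those 2 instances receive the change notification for that step.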

    Read the article

  • Enum.HasFlag

    - by Scott Dorman
    An enumerated type, also called an enumeration (or just an enum for short), is simply a way to create a numeric type restricted to a predetermined set of valid values with meaningful names for those values. While most enumerations represent discrete values, or well-known combinations of those values, sometimes you want to combine values in an arbitrary fashion. These enumerations are known as flags enumerations because the values represent flags which can be set or unset. To combine multiple enumeration values, you use the logical OR operator. For example, consider the following:

    using System;

    public enum FileAccess
    {
        None = 0,
        Read = 1,
        Write = 2,
    }

    class Program
    {
        static void Main(string[] args)
        {
            FileAccess access = FileAccess.Read | FileAccess.Write;
            Console.WriteLine(access);
        }
    }

The output of this simple console application is: 3. The value 3 is the numeric value associated with the combination of FileAccess.Read and FileAccess.Write. Clearly, this isn't the best representation. What you really want is for the output to look like: Read, Write. To achieve this result, you simply add the Flags attribute to the enumeration. The Flags attribute changes how the string representation of the enumeration value is displayed when using the ToString() method. Although the .NET Framework does not require it, enumerations that will be used to represent flags should be decorated with the Flags attribute since it provides a clear indication of intent.

One "problem" with Flags enumerations is determining when a particular flag is set. The code to do this isn't particularly difficult, but unless you use it regularly it can be easy to forget. To test if the access variable has the FileAccess.Read flag set, you would use the following code:

    (access & FileAccess.Read) == FileAccess.Read

Starting with .NET 4, a HasFlag method has been added to the Enum class which allows you to easily perform these tests:

    access.HasFlag(FileAccess.Read)

This method follows one of the "themes" for the .NET Framework 4, which is to simplify and reduce the amount of boilerplate code like this you must write. Technorati Tags: .NET,C# 4

    Read the article

  • How do you use blank lines in your code?

    - by Matthieu M.
    There have been a few remarks about white space already in discussions about curly brace placement. I myself tend to sprinkle my code with blank lines in an attempt to segregate things that go together in "logical" groups and hopefully make it easier for the next person who comes by to read the code I just produced. In fact, I would say I structure my code like I write: I make paragraphs, no longer than a few lines (definitely shorter than 10), and try to make each paragraph self-contained. For example:

    In a class, I will group methods that go together, while separating them by a blank line from the next group.
    If I need to write a comment I'll usually put a blank line before the comment.
    In a method, I make one paragraph per step of the process.

All in all, I rarely have more than 4/5 lines clustered together, meaning very sparse code. I don't consider all this white space a waste because I actually use it to structure the code (as I use indentation, in fact), and therefore I feel it is worth the screen real estate it takes. For example:

    for (int i = 0; i < 10; ++i)
    {
        if (i % 3 == 0) continue;

        array[i] += 2;
    }

I consider that the two statements have clear distinct purposes and thus deserve to be separated to make it obvious. So, how do you actually use (or not use) blank lines in code?

    Read the article

  • Wireless with WEP extremely slow on an Acer Timeline 4810T with a Centrino Wireless-N 1000

    - by noq38
    I've upgraded an Acer Timeline 4810T to Ubuntu 11.10. Everything works fine except for the darn wireless interface (network manager). I just tested the wireless interface over a non-encrypted signal and it works beautifully. The issue is definitely related to WEP. Unfortunately, some of the networks I need to connect to are WEP encrypted, therefore this is a serious issue for me that is preventing me from using Ubuntu on my laptop. This was no problem in 11.04 and prior. Is there a simple solution for this? Any suggestions? Here's more hardware information; hopefully this helps to debug the network issue:

    sudo lshw -class network
    *-network
        description: Wireless interface
        product: Centrino Wireless-N 1000
        vendor: Intel Corporation
        physical id: 0
        bus info: pci@0000:02:00.0
        logical name: wlan0
        version: 00
        serial: 00:1e:64:3c:5e:e0
        width: 64 bits
        clock: 33MHz
        capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
        configuration: broadcast=yes driver=iwlagn driverversion=3.0.0-13-generic-pae firmware=39.31.5.1 build 35138 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn
        resources: irq:43 memory:d2400000-d2401fff

    lspci
    02:00.0 Network controller: Intel Corporation Centrino Wireless-N 1000

    rfkill list
    0: phy0: Wireless LAN
        Soft blocked: no
        Hard blocked: no
    1: acer-wireless: Wireless LAN
        Soft blocked: no
        Hard blocked: no

Many thanks for your help!

    Read the article

  • BUILD 2013 Session – Testing Your C# Based Windows Store Apps

    - by Tim Murphy
    Originally posted on: http://geekswithblogs.net/tmurphy/archive/2013/06/27/build-2013-sessionndashtesting-your-c-base-windows-store-apps.aspx

Testing an application is not what most people consider fun, and the number of situations that need to be tested seems to grow exponentially when building mobile apps. That is why I found the topic of this session interesting. When I found out that the speaker, Francis Cheung, was from the Patterns and Practices group, I knew I was in the right place. I have admired that team since I first met Ron Jacobs around 2001. So what did Francis have to offer?

He started off in a rather confusing who's-on-first fashion. It seems that one of his testers was originally supposed to give the talk, but then it was decided that it would be better to have someone who does development present a testing topic. This didn't hinder the content of the talk in the least. He broke the process down in a logical manner that would be straightforward to understand, if not implement.

Francis hit the main areas we usually think of, such as tombstoning, network connectivity and asynchronous code, but he approached them with tools we may not have thought of until now. He relied heavily on Fiddler to intercept and change the behavior of network requests. Then there are the areas you might not normally think to check. This includes localization, accessibility and updating client code to a new version. These are important aspects of your app that can severely impact how customers feel about it. Take the time to view this session and get a new appreciation for testing and where it fits in your development lifecycle. del.icio.us Tags: BUILD 2013,Testing,C#,Windows Store Apps,Fiddler

    Read the article

  • "wrong fs type, bad option, bad superblock" error while mounting FAT Drives

    - by cshubhamrao
    I am unable to mount any FAT32 or FAT16 formatted USB disks under Ubuntu 13.10. The thing to note here is that it happens only with FAT-formatted disks; NTFS and ext formatted external USB disks work well (I tried formatting the same disk with ext4 and it worked).

While mounting via Nautilus: [screenshot of the mount error]

Error while mounting from the terminal:

    root@shubham-pc:~# mount -t vfat /dev/sdc1 /media/shubham/n
    mount: wrong fs type, bad option, bad superblock on /dev/sdc1,
           missing codepage or helper program, or other error
           In some cases useful info is found in syslog - try
           dmesg | tail or so

As suggested by the error, output from dmesg | tail:

    root@shubham-pc:~# dmesg | tail
    [ 3545.482598] scsi8 : usb-storage 1-1:1.0
    [ 3546.481530] scsi 8:0:0:0: Direct-Access SanDisk Cruzer 1.26 PQ: 0 ANSI: 5
    [ 3546.482373] sd 8:0:0:0: Attached scsi generic sg3 type 0
    [ 3546.483758] sd 8:0:0:0: [sdc] 15633408 512-byte logical blocks: (8.00 GB/7.45 GiB)
    [ 3546.485254] sd 8:0:0:0: [sdc] Write Protect is off
    [ 3546.485262] sd 8:0:0:0: [sdc] Mode Sense: 43 00 00 00
    [ 3546.488314] sd 8:0:0:0: [sdc] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
    [ 3546.499820] sdc: sdc1
    [ 3546.503388] sd 8:0:0:0: [sdc] Attached SCSI removable disk
    [ 3547.273396] FAT-fs (sdc1): IO charset iso8859-1 not found

Output from fsck.vfat:

    root@shubham-pc:~# fsck.vfat /dev/sdc1
    dosfsck 3.0.16, 01 Mar 2013, FAT32, LFN
    /dev/sdc1: 1 files, 1/1949978 clusters

All normal. I tried re-creating the whole partition table and then formatting as FAT32, but to no avail, so the possibility of a corrupted drive is ruled out. I tried the same with around 4 disks or so and all show the same behaviour.

    Read the article

  • Atheros wireless not working

    - by Chandru1
    I have been struggling hard since I installed Ubuntu 10.10; it has been difficult for me to get my wifi working. So here is what I tried. First I checked whether I have the driver using the ifconfig command, and it shows the wireless LAN driver as wlan0. Next, I tried the command iwlist wlan0 scanning as root, which gave me the output 'no scan results'. Next, I visited this link https://help.ubuntu.com/community/WifiDocs/Driver/Atheros to see what problem my laptop may have. I do have an ath5k chipset. As I followed the instructions in the above link, one of the files, blacklist-ath_pci.conf, had this written in it: "For some Atheros 5K RF MACs, the madwifi driver loads but fails to correctly initialize the hardware, leaving it in a state from which ath5k cannot recover. To prevent this condition, stop madwifi from loading by default. Use Jockey to select one driver or the other. (Ubuntu: #315056, #323830)" I am not that good at Linux but I have given it a try. I am desperate to have my wifi working and I would be glad if this community could help.

ADDED: If anyone would like to know what drivers I am using, this is the output:

    network
        description: Wireless interface
        product: AR2413 802.11bg NIC
        vendor: Atheros Communications Inc.
        physical id: 3
        bus info: pci@0000:0a:03.0
        logical name: wlan0
        version: 01
        serial: 00:19:7d:d3:0c:fd
        width: 32 bits
        clock: 33MHz
        capabilities: pm bus_master cap_list ethernet physical wireless
        configuration: broadcast=yes driver=ath5k driverversion=2.6.35-24-generic firmware=N/A latency=168 link=no maxlatency=28 mingnt=10 multicast=yes wireless=IEEE 802.11bg
        resources: irq:18 memory:d0000000-d000ffff

Some more information and output as to what I have done:

    lsmod | grep ath
    ath5k                 130083  0
    mac80211              231541  1 ath5k
    ath                     8153  1 ath5k
    cfg80211              144470  3 ath5k,mac80211,ath
    led_class               2633  1 ath5k

    Read the article

  • WPA2 authentication fails on Ubuntu 12.04 using Rosewill RNX-N1

    - by user94156
    Decided to reduce the clutter in the house and replace a wired connection with a wireless one on my wife's system using USB network device Rosewill RNX-X1. I can see and connect to unprotected network, but WPA2 authentication repeatedly fails. RNX-X1 works on other systems (including TV); also have 2 of 'em and tried each. Worth noting that I recently switched from Comcast to CenturyLink and so switched routers. The system connected successfully to previous router (Linksys EA4500) using WPA2. Would think it is the router (Actiontec C1000A) but all other devices (TV, iPad, Windows, Blackberry, and Squeezebox) connect ok. Would appreciate some diagnostic guidance and insight (phrased for a newbie!) Tests to date: sudo lshw -class network *-network description: Ethernet interface product: RTL8111/8168B PCI Express Gigabit Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:03:00.0 logical name: eth0 version: 01 serial: 00:e0:4d:30:40:a1 size: 10Mbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm vpd msi pciexpress bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half firmware=N/A latency=0 link=no multicast=yes port=MII speed=10Mbit/s resources: irq:47 ioport:ac00(size=256) memory:fdcff000-fdcfffff memory:fdb00000-fdb1ffff *-network description: Wireless interface physical id: 1 bus info: usb@1:2 logical name: wlan1 serial: 00:02:6f:bd:30:a0 capabilities: ethernet physical wireless configuration: broadcast=yes driver=rt2800usb driverversion=3.2.0-31-generic firmware=0.29 link=no multicast=yes wireless=IEEE 802.11bgn sudo lspci -v 00:00.0 RAM memory: NVIDIA Corporation MCP67 Memory Controller (rev a2) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: bus master, 66MHz, fast devsel, latency 0 Capabilities: [44] HyperTransport: Slave or Primary Interface Capabilities: [dc] HyperTransport: MSI Mapping Enable+ Fixed- 00:01.0 ISA bridge: NVIDIA Corporation MCP67 ISA Bridge (rev a2) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: bus master, 66MHz, fast devsel, latency 0 00:01.1 SMBus: NVIDIA Corporation MCP67 SMBus (rev a2) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: 66MHz, fast devsel, IRQ 11 I/O ports at fc00 [size=64] I/O ports at 1c00 [size=64] I/O ports at 1c40 [size=64] Capabilities: [44] Power Management version 2 Kernel driver in use: nForce2_smbus Kernel modules: i2c-nforce2 00:01.2 RAM memory: NVIDIA Corporation MCP67 Memory Controller (rev a2) Flags: 66MHz, fast devsel 00:02.0 USB controller: NVIDIA Corporation MCP67 OHCI USB 1.1 Controller (rev a2) (prog-if 10 [OHCI]) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 23 Memory at fe02f000 (32-bit, non-prefetchable) [size=4K] Capabilities: [44] Power Management version 2 Kernel driver in use: ohci_hcd 00:02.1 USB controller: NVIDIA Corporation MCP67 EHCI USB 2.0 Controller (rev a2) (prog-if 20 [EHCI]) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 22 Memory at fe02e000 (32-bit, non-prefetchable) [size=256] Capabilities: [44] Debug port: BAR=1 offset=0098 Capabilities: [80] Power Management version 2 Kernel driver in use: ehci_hcd 00:04.0 USB controller: NVIDIA Corporation MCP67 OHCI USB 1.1 Controller (rev a2) (prog-if 10 [OHCI]) Subsystem: Biostar Microtech Int'l Corp Device 
3409
    Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 21
    Memory at fe02d000 (32-bit, non-prefetchable) [size=4K]
    Capabilities: [44] Power Management version 2
    Kernel driver in use: ohci_hcd

00:04.1 USB controller: NVIDIA Corporation MCP67 EHCI USB 2.0 Controller (rev a2) (prog-if 20 [EHCI])
    Subsystem: Biostar Microtech Int'l Corp Device 3409
    Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 20
    Memory at fe02c000 (32-bit, non-prefetchable) [size=256]
    Capabilities: [44] Debug port: BAR=1 offset=0098
    Capabilities: [80] Power Management version 2
    Kernel driver in use: ehci_hcd

00:06.0 IDE interface: NVIDIA Corporation MCP67 IDE Controller (rev a1) (prog-if 8a [Master SecP PriP])
    Subsystem: Biostar Microtech Int'l Corp Device 3409
    Flags: bus master, 66MHz, fast devsel, latency 0
    [virtual] Memory at 000001f0 (32-bit, non-prefetchable) [size=8]
    [virtual] Memory at 000003f0 (type 3, non-prefetchable) [size=1]
    [virtual] Memory at 00000170 (32-bit, non-prefetchable) [size=8]
    [virtual] Memory at 00000370 (type 3, non-prefetchable) [size=1]
    I/O ports at f000 [size=16]
    Capabilities: [44] Power Management version 2
    Kernel driver in use: pata_amd
    Kernel modules: pata_amd

00:07.0 Audio device: NVIDIA Corporation MCP67 High Definition Audio (rev a1)
    Subsystem: Biostar Microtech Int'l Corp Device 820c
    Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 22
    Memory at fe024000 (32-bit, non-prefetchable) [size=16K]
    Capabilities: [44] Power Management version 2
    Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
    Capabilities: [6c] HyperTransport: MSI Mapping Enable- Fixed+
    Kernel driver in use: snd_hda_intel
    Kernel modules: snd-hda-intel

00:08.0 PCI bridge: NVIDIA Corporation MCP67 PCI Bridge (rev a2) (prog-if 01 [Subtractive decode])
    Flags: bus master, 66MHz, fast devsel, latency 0
    Bus: primary=00, secondary=01, subordinate=01, sec-latency=32
    I/O behind bridge: 0000c000-0000cfff
    Memory behind bridge: fdf00000-fdffffff
    Prefetchable memory behind bridge: fd000000-fd0fffff
    Capabilities: [b8] Subsystem: NVIDIA Corporation Device cb84
    Capabilities: [8c] HyperTransport: MSI Mapping Enable- Fixed-

00:09.0 IDE interface: NVIDIA Corporation MCP67 AHCI Controller (rev a2) (prog-if 85 [Master SecO PriO])
    Subsystem: Biostar Microtech Int'l Corp Device 5407
    Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 23
    I/O ports at 09f0 [size=8]
    I/O ports at 0bf0 [size=4]
    I/O ports at 0970 [size=8]
    I/O ports at 0b70 [size=4]
    I/O ports at dc00 [size=16]
    Memory at fe02a000 (32-bit, non-prefetchable) [size=8K]
    Capabilities: [44] Power Management version 2
    Capabilities: [8c] SATA HBA v1.0
    Capabilities: [b0] MSI: Enable- Count=1/8 Maskable- 64bit+
    Capabilities: [cc] HyperTransport: MSI Mapping Enable- Fixed+
    Kernel driver in use: ahci

00:0b.0 PCI bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) (prog-if 00 [Normal decode])
    Flags: bus master, fast devsel, latency 0
    Bus: primary=00, secondary=02, subordinate=02, sec-latency=0
    I/O behind bridge: 0000b000-0000bfff
    Memory behind bridge: fde00000-fdefffff
    Prefetchable memory behind bridge: 00000000fdd00000-00000000fddfffff
    Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000
    Capabilities: [48] Power Management version 2
    Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+
    Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed-
    Capabilities: [80] Express Root Port (Slot+), MSI 00
    Capabilities: [100] Virtual Channel
    Kernel driver in use: pcieport
    Kernel modules: shpchp

00:0c.0 PCI bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) (prog-if 00 [Normal decode])
    Flags: bus master, fast devsel, latency 0
    Bus: primary=00, secondary=03, subordinate=03, sec-latency=0
    I/O behind bridge: 0000a000-0000afff
    Memory behind bridge: fdc00000-fdcfffff
    Prefetchable memory behind bridge: 00000000fdb00000-00000000fdbfffff
    Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000
    Capabilities: [48] Power Management version 2
    Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+
    Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed-
    Capabilities: [80] Express Root Port (Slot+), MSI 00
    Capabilities: [100] Virtual Channel
    Kernel driver in use: pcieport
    Kernel modules: shpchp

00:0d.0 PCI bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) (prog-if 00 [Normal decode])
    Flags: bus master, fast devsel, latency 0
    Bus: primary=00, secondary=04, subordinate=04, sec-latency=0
    I/O behind bridge: 00009000-00009fff
    Memory behind bridge: fda00000-fdafffff
    Prefetchable memory behind bridge: 00000000fd900000-00000000fd9fffff
    Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000
    Capabilities: [48] Power Management version 2
    Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+
    Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed-
    Capabilities: [80] Express Root Port (Slot+), MSI 00
    Capabilities: [100] Virtual Channel
    Kernel driver in use: pcieport
    Kernel modules: shpchp

00:0e.0 PCI bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) (prog-if 00 [Normal decode])
    Flags: bus master, fast devsel, latency 0
    Bus: primary=00, secondary=05, subordinate=05, sec-latency=0
    I/O behind bridge: 00008000-00008fff
    Memory behind bridge: fd800000-fd8fffff
    Prefetchable memory behind bridge: 00000000fd700000-00000000fd7fffff
    Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000
    Capabilities: [48] Power Management version 2
    Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+
    Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed-
    Capabilities: [80] Express Root Port (Slot+), MSI 00
    Capabilities: [100] Virtual Channel
    Kernel driver in use: pcieport
    Kernel modules: shpchp

00:0f.0 PCI bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) (prog-if 00 [Normal decode])
    Flags: bus master, fast devsel, latency 0
    Bus: primary=00, secondary=06, subordinate=06, sec-latency=0
    I/O behind bridge: 00007000-00007fff
    Memory behind bridge: fd600000-fd6fffff
    Prefetchable memory behind bridge: 00000000fd500000-00000000fd5fffff
    Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000
    Capabilities: [48] Power Management version 2
    Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+
    Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed-
    Capabilities: [80] Express Root Port (Slot+), MSI 00
    Capabilities: [100] Virtual Channel
    Kernel driver in use: pcieport
    Kernel modules: shpchp

00:10.0 PCI bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) (prog-if 00 [Normal decode])
    Flags: bus master, fast devsel, latency 0
    Bus: primary=00, secondary=07, subordinate=07, sec-latency=0
    I/O behind bridge: 00006000-00006fff
    Memory behind bridge: fd400000-fd4fffff
    Prefetchable memory behind bridge: 00000000fd300000-00000000fd3fffff
    Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000
    Capabilities: [48] Power Management version 2
    Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+
    Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed-
    Capabilities: [80] Express Root Port (Slot+), MSI 00
    Capabilities: [100] Virtual Channel
    Kernel driver in use: pcieport
    Kernel modules: shpchp

00:11.0 PCI bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) (prog-if 00 [Normal decode])
    Flags: bus master, fast devsel, latency 0
    Bus: primary=00, secondary=08, subordinate=08, sec-latency=0
    I/O behind bridge: 00005000-00005fff
    Memory behind bridge: fd200000-fd2fffff
    Prefetchable memory behind bridge: 00000000fd100000-00000000fd1fffff
    Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000
    Capabilities: [48] Power Management version 2
    Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+
    Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed-
    Capabilities: [80] Express Root Port (Slot+), MSI 00
    Capabilities: [100] Virtual Channel
    Kernel driver in use: pcieport
    Kernel modules: shpchp

00:12.0 VGA compatible controller: NVIDIA Corporation C68 [GeForce 7050 PV / nForce 630a] (rev a2) (prog-if 00 [VGA controller])
    Subsystem: Biostar Microtech Int'l Corp Device 1406
    Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 21
    Memory at fb000000 (32-bit, non-prefetchable) [size=16M]
    Memory at e0000000 (64-bit, prefetchable) [size=256M]
    Memory at fc000000 (64-bit, non-prefetchable) [size=16M]
    [virtual] Expansion ROM at 80000000 [disabled] [size=128K]
    Capabilities: [48] Power Management version 2
    Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit+
    Kernel driver in use: nvidia
    Kernel modules: nvidia_current, nouveau, nvidiafb

00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration
    Flags: fast devsel
    Capabilities: [80] HyperTransport: Host or Secondary Interface

00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map
    Flags: fast devsel

00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller
    Flags: fast devsel

00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control
    Flags: fast devsel
    Capabilities: [f0] Secure device <?>
    Kernel driver in use: k8temp
    Kernel modules: k8temp

03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 01)
    Subsystem: Biostar Microtech Int'l Corp Device 2305
    Flags: bus master, fast devsel, latency 0, IRQ 47
    I/O ports at ac00 [size=256]
    Memory at fdcff000 (64-bit, non-prefetchable) [size=4K]
    [virtual] Expansion ROM at fdb00000 [disabled] [size=128K]
    Capabilities: [40] Power Management version 2
    Capabilities: [48] Vital Product Data
    Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+
    Capabilities: [60] Express Endpoint, MSI 00
    Capabilities: [84] Vendor Specific Information: Len=4c <?>
    Capabilities: [100] Advanced Error Reporting
    Capabilities: [12c] Virtual Channel
    Capabilities: [148] Device Serial Number 32-00-00-00-10-ec-81-68
    Capabilities: [154] Power Budgeting <?>
    Kernel driver in use: r8169
    Kernel modules: r8169

sudo rfkill list all

2: phy2: Wireless LAN
    Soft blocked: no
    Hard blocked: no

    Read the article

  • AceCypher is an Addictive Cypher Slide Puzzle Game

    - by Akemi Iwaya
    Are you ready for a game that will test your logical thinking skills while providing hours of fun? Then you may want to have a look at this awesome cypher slide puzzler! AceCypher is a great puzzle game for those times when you only have a few minutes to play or want a fun way to pass the time while relaxing. The overall premise and style of game play for AceCypher is simple: you move individual rows (left, right) or columns (up, down) one space at a time in order to shift the positions of numbers on the game board through 'round-a-bout' trading. The goal is to make the four numbers in the red square match the code shown in the upper right corner (including positions). Sounds simple so far, right? But the challenge comes from the random boards you will be given to work with... some will not be too hard to solve, while others will tax your brain (and patience!) quite well.
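    For the curious, a toy Python sketch of what the 'round-a-bout' move might look like in code (assumed semantics, not taken from AceCypher itself): shifting a row or column one space wraps the piece that falls off the edge back around to the other side.

        def shift_row(board, row, right=True):
            # Rotate one row of a square board a single cell, wrapping
            # the end piece around to the other side.
            if right:
                board[row] = [board[row][-1]] + board[row][:-1]
            else:
                board[row] = board[row][1:] + [board[row][0]]

        def shift_col(board, col, down=True):
            # Rotate one column a single cell, wrapping vertically.
            column = [r[col] for r in board]
            column = ([column[-1]] + column[:-1]) if down else (column[1:] + [column[0]])
            for r, v in zip(board, column):
                r[col] = v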

    Read the article

  • DopeWarz 2010

    - by Theo Moore
    A few (6?) weeks ago, I started a project that I've always wanted to do: a re-write of the old VB6 game DopeWars, with a partner in crime. I loved that game and spent many, many hours wasting time playing it years ago. I liked it so much, I even registered it (it was $5... even then, that wasn't much to spend). The VB6 version itself was a port of the old DOS game DrugWars (never got to play that one). I needed a game project to work on, as it's been far too long since I've done any game work. There is a surprising amount of logic built into the game (there is in the way I'm writing it, anyway) with what is really a minimal interface. My design goal was to have an object model that could be easily adapted to any interface, and so far I've managed to do that. I am even considering a web-based version that could run via Facebook (no clue how to do that... yet). I've enlisted the help of one of my DBA buddies to work on the interface while I work on the object model. So far, this arrangement has gone well; the logical separation of concerns allows us to collaborate easily. Once we get to the Facebook step, it will be great to have a DBA "on staff" to help get that part off the ground. The object model is probably about 60% complete, with quite a bit of testing still to go. More on this as we go...
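    A hedged illustration (in Python) of that interface-agnostic goal; nothing here is from the actual DopeWarz code, and Game and buy are made-up names. The point is that the model validates and applies actions itself, so a desktop, web, or Facebook front end would all drive the same object and differ only in how they present the results.

        class Game:
            # Pure game state and rules; knows nothing about any UI.
            def __init__(self, cash=2000):
                self.cash = cash
                self.inventory = {}
                self.prices = {}        # repriced each turn by the model, not the UI

            def buy(self, item, qty):
                # Validate and apply a purchase; the caller decides how to
                # present the ValueError (dialog box, web message, etc.).
                cost = self.prices[item] * qty
                if cost > self.cash:
                    raise ValueError("not enough cash")
                self.cash -= cost
                self.inventory[item] = self.inventory.get(item, 0) + qty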

    Read the article

  • What is the kd tree intersection logic?

    - by bobobobo
    I'm trying to figure out how to implement a kd-tree, starting from page 322 of "Real-Time Collision Detection" by Ericson. The relevant section is quoted below in case the Google Books preview doesn't show it when you click the link:

    "The basic idea behind intersecting a ray or directed line segment with a k-d tree is straightforward. The line is intersected against the node's splitting plane, and the t value of intersection is computed. If t is within the interval of the line, 0 <= t <= tmax, the line straddles the plane and both children of the tree are recursively descended. If not, only the side containing the segment origin is recursively visited."

    So here's what I have (a diagram of the logical tree, with the orange ray going through the 3D scene; the x's mark intersections with a plane). From the left, the ray hits:

    1. the front face of the scene's enclosing cube,
    2. the (1) splitting plane,
    3. the (2.2) splitting plane,
    4. the right side of the scene's enclosing cube.

    But here's what would happen, naively following Ericson's basic description above: test against splitting plane (1); the ray hits it, so both children of splitting plane (1) are included in the next test. Test against splitting plane (2.1); the ray actually hits that plane too (way off to the right), so both of its children are included in the next level of tests. This is counter-intuitive - shouldn't only the bottom node be included in subsequent tests? Can someone describe, correctly, what happens when the orange ray goes through the scene?
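    The step the quoted summary glosses over is that the plane-hit parameter t must lie inside the current segment interval [tmin, tmax], which shrinks as the traversal descends. Splitting plane (2.1) is hit "way off to the right", i.e. at a t beyond that node's interval, so only the near child is visited there. A minimal Python sketch of the traversal under that reading (KDNode and visit are illustrative names, not Ericson's code):

        class KDNode:
            def __init__(self, axis=None, split=None, left=None, right=None, objects=None):
                self.axis = axis        # 0 = x, 1 = y, 2 = z; None marks a leaf
                self.split = split      # coordinate of the splitting plane
                self.left = left        # child on the low side of the plane
                self.right = right      # child on the high side
                self.objects = objects  # leaf contents

        def visit(node, p, d, tmin, tmax, out):
            # Collect leaves overlapped by the segment p + t*d, t in [tmin, tmax].
            if node is None:
                return
            if node.axis is None:                     # leaf: gather its contents
                out.extend(node.objects or [])
                return
            a = node.axis
            near, far = ((node.left, node.right) if p[a] <= node.split
                         else (node.right, node.left))
            if d[a] == 0.0:                           # parallel: stays on origin's side
                visit(near, p, d, tmin, tmax, out)
                return
            t = (node.split - p[a]) / d[a]            # parameter of the plane crossing
            if 0.0 <= t <= tmax:
                if t >= tmin:
                    # Crossed inside the active interval: straddles, visit both,
                    # each with its clipped sub-interval.
                    visit(near, p, d, tmin, t, out)
                    visit(far, p, d, t, tmax, out)
                else:
                    # Crossed before the interval starts: only the far side remains.
                    visit(far, p, d, tmin, tmax, out)
            else:
                # Crossing behind the origin or beyond tmax: near side only.
                visit(near, p, d, tmin, tmax, out)

    With that clamping in place, the orange ray only descends the children whose sub-intervals it actually overlaps, and the (2.1) anomaly disappears.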

    Read the article

  • no disk space, cant use internet

    - by James
    After trying to install drivers using sudo apt-get update && sudo apt-get upgrade, I'm faced with a message saying "no space left on device". I ran Disk Usage Analyzer as root and there were three folders, namely: main volume, home folder, and my 116 GB hard drive (which is practically empty), yet both other folders are full, which is stopping me from installing drivers because of space. How do I get Ubuntu to use the space on my hard drive? It's causing problems because I can't get on the internet: I can't download drivers when I haven't got enough space, and this happens every time I try.

    sudo fdisk -l

    Disk /dev/sda: 120.0 GB, 120034123776 bytes
    255 heads, 63 sectors/track, 14593 cylinders, total 234441648 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x0003eeed

       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *        2048   231315455   115656704   83  Linux
    /dev/sda2       231317502   234440703     1561601    5  Extended
    /dev/sda5       231317504   234440703     1561600   82  Linux swap / Solaris

    Output of df -h:

    df: '/media/ubuntu/FB18-ED76': No such file or directory
    Filesystem   Size  Used  Avail  Use%  Mounted on
    /cow         751M  751M     0   100%  /
    udev         740M   12K  740M     1%  /dev
    tmpfs        151M  792K  150M     1%  /run
    /dev/sr0     794M  794M     0   100%  /cdrom
    /dev/loop0   758M  758M     0   100%  /rofs
    none         4.0K     0  4.0K     0%  /sys/fs/cgroup
    tmpfs        751M  1.4M  749M     1%  /tmp
    none         5.0M  4.0K  5.0M     1%  /run/lock
    none         751M  276K  751M     1%  /run/shm
    none         100M   40K  100M     1%  /run/user

    Read the article

  • Idea of an algorithm to detect a website's navigation structure?

    - by Uwe Keim
    Currently I am in the process of developing an importer of any existing, arbitrary (static) HTML website into the upcoming release of our CMS. While downloading the files is solved successfully, I'm pulling my hair out when it comes to detecting a site structure (pages and subpages) purely from the HTML files, without the user specifying additional hints. Basically I want to get a tree like:

    + Root page 1
      + Child page 1
      + Child page 2
        + Child child page 1
      + Child page 3
    + Root page 2
      + Child page 4
    + Root page 3
    + ...

    I.e. I want to be able to detect the menu structure from the links inside the pages. This does not have to be 100% accurate, but at least I want to achieve more than just a flat list. I thought of looking at multiple pages to find similar areas, identifying those as menu areas, and parsing the links there, but after all I'm not that satisfied with this idea. My question: can you imagine any algorithm for detecting such a structure? Update 1: What I'm looking for is not a web spider, but an algorithm to create a logical tree of the relationships between the pages, so that I can create pages and subpages inside my CMS when importing them. Update 2: Following Robert's suggestion, I'll solve this by starting at the root page and then simply parsing links as I go, treating every link inside a page as a child page. I'll probably recurse not in a depth-first manner but rather in a breadth-first manner, to get a more balanced navigation structure.
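    A minimal Python sketch of the Update 2 approach, under stated assumptions: extract_links(url) is a hypothetical helper wrapping whatever HTML parser the importer already uses, yielding the internal links found on a page, and Page and build_site_tree are illustrative names rather than any CMS API.

        from collections import deque

        class Page:
            def __init__(self, url):
                self.url = url
                self.children = []

        def build_site_tree(root_url, extract_links):
            # extract_links(url) -> iterable of internal link URLs on that page.
            root = Page(root_url)
            seen = {root_url: root}
            queue = deque([root])
            while queue:
                page = queue.popleft()
                for link in extract_links(page.url):
                    if link in seen:
                        continue            # first-found parent wins; also avoids cycles
                    child = Page(link)
                    seen[link] = child
                    page.children.append(child)
                    queue.append(child)     # breadth-first: siblings before grandchildren
            return root

    Because the queue is processed breadth-first, a page becomes a child of the shallowest page that links to it, which is what keeps the resulting navigation tree balanced rather than degenerating into one long chain.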

    Read the article

  • ArchBeat Link-o-Rama for November 30, 2012

    - by Bob Rhubart
    Oracle SOA Database Adapter Polling in a Cluster: A Handy Logical Delete Pattern | Carlo Arteaga
    "Using the SOA database adapter usually becomes easier when the adapter is simply viewed and treated as a gateway between the Oracle SOA composite world and the database world," says Carlo Arteaga. "When viewing the adapter in this light one should come to understand that the adapter is not the ultimate all-in-one solution for database access and database logic needs."

    OIM 11g : Multi-thread approach for writing custom scheduled job | Saravanan V S
    Saravanan shares insight and expertise relevant to "designing and developing an OIM schedule job that uses multi threaded approach for updating data in OIM using APIs."

    When Premature Optimization Isn't | Dustin Marx
    "Perhaps the most common situations in which I have seen developers make bad decisions under the pretense of 'avoiding premature optimization' is making bad architecture or design choices," says Dustin Marx.

    Protecting Intranet and Extranet Applications with a Single OAM 11g Deployment | Brian Eidelman
    Oracle Fusion Middleware A-Team member Brian Eidelman's post, part of the Oracle Access Manager Academy series, explores issues and solutions around setting up a single OAM deployment to protect both intranet and extranet apps.

    Thought for the Day
    "Never make a technical decision based upon the politics of the situation, and never make a political decision based upon technical issues." — Geoffrey James
    Source: SoftwareQuotes.com

    Read the article

  • Calculate the Intersection of Two Volumes

    - by igrad
    If you've ever played The Swapper, you'll have a good idea of what I'm asking about. I need to check for, and isolate, areas of a rectangle that may intersect with either a circle or another rectangle. These selected areas will receive special properties, and the areas will be non-static, since the intersecting shapes themselves will also be dynamic. My first thought was to use raycasting detection, though I've only seen that used with circles, or even ellipses. I'm curious whether there's a way to use raycasting with a more rectangular approach, or whether there's a totally different method already in use to accomplish this task. I would like something more exact than checking in large chunks, and since I'm using SDL2 with a logical renderer size of 1920x1080, checking whether each pixel is intersecting is out of the question, as it would slow things down past a playable speed. I already have a multi-shape collision function template in place, and I could use that, though it only checks whether sides or corners are intersecting; it does not compute the overlapping area, or even find the circle's secant line, though I can't imagine that would be overly complex to implement. TL;DR: I need to find and isolate areas of a rectangle that may intersect with a circle or another rectangle, without checking every single pixel on-screen.
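    A minimal Python sketch of the non-pixel approach for axis-aligned rectangles (Rect, Circle, and the function names are illustrative, not SDL2 API): the rect-rect overlap falls out in closed form, and clamping the circle's centre to the rectangle gives the closest point, which both detects overlap and bounds the intersection region.

        from dataclasses import dataclass

        @dataclass
        class Rect:
            x: float; y: float; w: float; h: float

        @dataclass
        class Circle:
            cx: float; cy: float; r: float

        def rect_rect_overlap(a, b):
            # Return the overlapping Rect of a and b, or None if disjoint.
            x1, y1 = max(a.x, b.x), max(a.y, b.y)
            x2 = min(a.x + a.w, b.x + b.w)
            y2 = min(a.y + a.h, b.y + b.h)
            if x1 >= x2 or y1 >= y2:
                return None
            return Rect(x1, y1, x2 - x1, y2 - y1)

        def circle_rect_overlap(c, r):
            # Return a Rect bounding the circle/rect intersection, or None.
            # Closest point on the rectangle to the circle's centre:
            px = min(max(c.cx, r.x), r.x + r.w)
            py = min(max(c.cy, r.y), r.y + r.h)
            if (px - c.cx) ** 2 + (py - c.cy) ** 2 > c.r ** 2:
                return None
            # Clip the circle's bounding box against the rectangle.
            bbox = Rect(c.cx - c.r, c.cy - c.r, 2 * c.r, 2 * c.r)
            return rect_rect_overlap(bbox, r)

    The circle case returns a bounding box of the lens-shaped region rather than its exact outline; if the exact boundary is needed, the secant line mentioned above can be recovered from the two points where the circle crosses the clipped rectangle's edges.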

    Read the article

< Previous Page | 38 39 40 41 42 43 44 45 46 47 48 49  | Next Page >