Search Results

Search found 8161 results on 327 pages for 'django queries'.

Page 148/327

  • Oracle BI Server Modeling, Part 1- Designing a Query Factory

    - by bob.ertl(at)oracle.com
      Welcome to Oracle BI Development's BI Foundation blog, focused on helping you get the most value from your Oracle Business Intelligence Enterprise Edition (BI EE) platform deployments.  In my first series of posts, I plan to show developers the concepts and best practices for modeling in the Common Enterprise Information Model (CEIM), the semantic layer of Oracle BI EE.  In this segment, I will lay the groundwork for the modeling concepts.  First, I will cover the big picture of how the BI Server fits into the system, and how the CEIM controls the query processing. Oracle BI EE Query Cycle The purpose of the Oracle BI Server is to bridge the gap between the presentation services and the data sources.  There are typically a variety of data sources in a variety of technologies: relational, normalized transaction systems; relational star-schema data warehouses and marts; multidimensional analytic cubes and financial applications; flat files, Excel files, XML files, and so on. Business datasets can reside in a single type of source, or, most of the time, are spread across various types of sources. Presentation services users are generally business people who need to be able to query that set of sources without any knowledge of technologies, schemas, or how sources are organized in their company. They think of business analysis in terms of measures with specific calculations, hierarchical dimensions for breaking those measures down, and detailed reports of the business transactions themselves.  Most of them create queries without knowing it, by picking a dashboard page and some filters.  Others create their own analysis by selecting metrics and dimensional attributes, and possibly creating additional calculations. The BI Server bridges that gap from simple business terms to technical physical queries by exposing just the business focused measures and dimensional attributes that business people can use in their analyses and dashboards.   After they make their selections and start the analysis, the BI Server plans the best way to query the data sources, writes the optimized sequence of physical queries to those sources, post-processes the results, and presents them to the client as a single result set suitable for tables, pivots and charts. The CEIM is a model that controls the processing of the BI Server.  It provides the subject areas that presentation services exposes for business users to select simplified metrics and dimensional attributes for their analysis.  It models the mappings to the physical data access, the calculations and logical transformations, and the data access security rules.  The CEIM consists of metadata stored in the repository, authored by developers using the Administration Tool client.     Presentation services and other query clients create their queries in BI EE's SQL-92 language, called Logical SQL or LSQL.  The API simply uses ODBC or JDBC to pass the query to the BI Server.  Presentation services writes the LSQL query in terms of the simplified objects presented to the users.  The BI Server creates a query plan, and rewrites the LSQL into fully-detailed SQL or other languages suitable for querying the physical sources.  For example, the LSQL on the left below was rewritten into the physical SQL for an Oracle 11g database on the right. 
Logical SQL:

    SELECT "D0 Time"."T02 Per Name Month" saw_0,
           "D4 Product"."P01  Product" saw_1,
           "F2 Units"."2-01  Billed Qty  (Sum All)" saw_2
    FROM "Sample Sales"
    ORDER BY saw_0, saw_1

Physical SQL:

    WITH SAWITH0 AS (
        select T986.Per_Name_Month as c1, T879.Prod_Dsc as c2,
               sum(T835.Units) as c3, T879.Prod_Key as c4
        from Product T879 /* A05 Product */ ,
             Time_Mth T986 /* A08 Time Mth */ ,
             FactsRev T835 /* A11 Revenue (Billed Time Join) */
        where ( T835.Prod_Key = T879.Prod_Key and T835.Bill_Mth = T986.Row_Wid)
        group by T879.Prod_Dsc, T879.Prod_Key, T986.Per_Name_Month
    )
    select SAWITH0.c1 as c1, SAWITH0.c2 as c2, SAWITH0.c3 as c3
    from SAWITH0
    order by c1, c2

Probably everybody reading this blog can write SQL or MDX.  However, the trick in designing the CEIM is that you are modeling a query-generation factory.  Rather than hand-crafting individual queries, you model behavior and relationships, thus configuring the BI Server machinery to manufacture millions of different queries in response to random user requests.  This mass production requires a different mindset and approach than when you are designing individual SQL statements in tools such as Oracle SQL Developer, Oracle Hyperion Interactive Reporting (formerly Brio), or Oracle BI Publisher.   The Structure of the Common Enterprise Information Model (CEIM) The CEIM has a unique structure specifically for modeling the relationships and behaviors that fill the gap from logical user requests to physical data source queries and back to the result.  The model divides the functionality into three specialized layers, called Presentation, Business Model and Mapping, and Physical, as shown below. Presentation services clients can generally only see the presentation layer, and the objects in the presentation layer are normally the only ones used in the LSQL request.  When a request comes into the BI Server from presentation services or another client, the relationships and objects in the model allow the BI Server to select the appropriate data sources, create a query plan, and generate the physical queries.  That's the left to right flow in the diagram below.  When the results come back from the data source queries, the right to left relationships in the model show how to transform the results and perform any final calculations and functions that could not be pushed down to the databases.   Business Model Think of the business model as the heart of the CEIM you are designing.  This is where you define the analytic behavior seen by the users, and the superset library of metric and dimension objects available to the user community as a whole.  It also provides the baseline business-friendly names and user-readable dictionary.  For these reasons, it is often called the "logical" model--it is a virtual database schema that persists no data, but can be queried as if it is a database. The business model always has a dimensional shape (more on this in future posts), and its simple shape and terminology hides the complexity of the source data models. Besides hiding complexity and normalizing terminology, this layer adds most of the analytic value, as well.  This is where you define the rich, dimensional behavior of the metrics and complex business calculations, as well as the conformed dimensions and hierarchies.  
It contributes to the ease of use for business users, since the dimensional metric definitions apply in any context of filters and drill-downs, and the conformed dimensions enable dashboard-wide filters and guided analysis links that bring context along from one page to the next.  The conformed dimensions also provide a key to hiding the complexity of many sources, including federation of different databases, behind the simple business model. Note that the expression language in this layer is LSQL, so that any expression can be rewritten into any data source's query language at run time.  This is important for federation, where a given logical object can map to several different physical objects in different databases.  It is also important to portability of the CEIM to different database brands, which is a key requirement for Oracle's BI Applications products. Your requirements process with your user community will mostly affect the business model.  This is where you will define most of the things they specifically ask for, such as metric definitions.  For this reason, many of the best-practice methodologies of our consulting partners start with the high-level definition of this layer. Physical Model The physical model connects the business model that meets your users' requirements to the reality of the data sources you have available. In the query factory analogy, think of the physical layer as the bill of materials for generating physical queries.  Every schema, table, column, join, cube, hierarchy, etc., that will appear in any physical query manufactured at run time must be modeled here at design time. Each physical data source will have its own physical model, or "database" object in the CEIM.  The shape of each physical model matches the shape of its physical source.  In other words, if the source is normalized relational, the physical model will mimic that normalized shape.  If it is a hypercube, the physical model will have a hypercube shape.  If it is a flat file, it will have a denormalized tabular shape. To aid in query optimization, the physical layer also tracks the specifics of the database brand and release.  This allows the BI Server to make the most of each physical source's distinct capabilities, writing queries in its syntax, and using its specific functions. This allows the BI Server to push processing work as deep as possible into the physical source, which minimizes data movement and takes full advantage of the database's own optimizer.  For most data sources, native APIs are used to further optimize performance and functionality. The value of having a distinct separation between the logical (business) and physical models is encapsulation of the physical characteristics.  This encapsulation is another enabler of packaged BI applications and federation.  It is also key to hiding the complex shapes and relationships in the physical sources from the end users.  Consider a routine drill-down in the business model: physically, it can require a drill-through where the first query is MDX to a multidimensional cube, followed by the drill-down query in SQL to a normalized relational database.  The only difference from the user's point of view is that the 2nd query added a more detailed dimension level column - everything else was the same. Mappings Within the Business Model and Mapping Layer, the mappings provide the binding from each logical column and join in the dimensional business model, to each of the objects that can provide its data in the physical layer.  
When there is more than one option for a physical source, rules in the mappings are applied to the query context to determine which of the data sources should be hit, and how to combine their results if more than one is used.  These rules specify aggregate navigation, vertical partitioning (fragmentation), and horizontal partitioning, any of which can be federated across multiple, heterogeneous sources.  These mappings are usually the most sophisticated part of the CEIM. Presentation You might think of the presentation layer as a set of very simple relational-like views into the business model.  Over ODBC/JDBC, they present a relational catalog consisting of databases, tables and columns.  For business users, presentation services interprets these as subject areas, folders and columns, respectively.  (Note that in 10g, subject areas were called presentation catalogs in the CEIM.  In this blog, I will stick to 11g terminology.)  Generally speaking, presentation services and other clients can query only these objects (there are exceptions for certain clients such as BI Publisher and Essbase Studio). The purpose of the presentation layer is to specialize the business model for different categories of users.  Based on a user's role, they will be restricted to specific subject areas, tables and columns for security.  The breakdown of the model into multiple subject areas organizes the content for users, and subjects superfluous to a particular business role can be hidden from that set of users.  Customized names and descriptions can be used to override the business model names for a specific audience.  Variables in the object names can be used for localization. For these reasons, you are better off thinking of the tables in the presentation layer as folders than as strict relational tables.  The real semantics of tables and how they function is in the business model, and any grouping of columns can be included in any table in the presentation layer.  In 11g, an LSQL query can also span multiple presentation subject areas, as long as they map to the same business model. Other Model Objects There are some objects that apply to multiple layers.  These include security-related objects, such as application roles, users, data filters, and query limits (governors).  There are also variables you can use in parameters and expressions, and initialization blocks for loading their initial values on a static or user session basis.  Finally, there are Multi-User Development (MUD) projects for developers to check out units of work, and objects for the marketing feature used by our packaged customer relationship management (CRM) software.   The Query Factory At this point, you should have a grasp on the query factory concept.  When developing the CEIM model, you are configuring the BI Server to automatically manufacture millions of queries in response to random user requests. You do this by defining the analytic behavior in the business model, mapping that to the physical data sources, and exposing it through the presentation layer's role-based subject areas. While configuring mass production requires a different mindset than when you hand-craft individual SQL or MDX statements, it builds on the modeling and query concepts you already understand. The following posts in this series will walk through the CEIM modeling concepts and best practices in detail.  
We will initially review dimensional concepts so you can understand the business model, and then present a pattern-based approach to learning the mappings from a variety of physical schema shapes and deployments to the dimensional model.  Along the way, we will also present the dimensional calculation template, and learn how to configure the many additivity patterns.

    Read the article

  • How to implement Template Inheritance (like Django?) in PHP5

    - by anonymous coward
    Is there an existing good example, or how should one approach creating a basic Template system (thinking MVC) that supports "Template Inheritance" in PHP5? For an example of what I define as Template Inheritance, refer to the Django (a Python framework for web development) Templates documentation: http://docs.djangoproject.com/en/dev/topics/templates/#id1 I especially like the idea of PHP itself being the "template language", though it's not necessarily a requirement. If listing existing solutions that implement "Template Inheritance", please try to form answers as individual systems, for the benefit of 'popular vote'.
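    For readers unfamiliar with the concept, here is a toy, language-agnostic illustration of what template inheritance means (written in Python purely for brevity, not as a PHP solution): a base template defines named blocks with default content, and a child template overrides only the blocks it needs.

        # Toy illustration (Python, not PHP) of "template inheritance":
        # a base template defines named blocks with default content, and a
        # child template overrides only the blocks it cares about.
        BASE_BLOCKS = {
            "title": "My site",
            "content": "(empty)",
            "footer": "(c) example.com",
        }

        def render(child_blocks):
            # Child blocks win over the base; untouched blocks fall through.
            merged = {**BASE_BLOCKS, **child_blocks}
            return "\n".join(f"[{name}]\n{body}" for name, body in merged.items())

        print(render({"title": "Blog", "content": "Latest posts go here"}))

    A PHP implementation would typically capture block output with output buffering, but the inheritance idea is the same: render the child, then fill the base template's blocks with whatever the child supplied.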

    Read the article

  • Grails benchmarks compared to other web MVC platform (Rails, Django, ASP MVC)?

    - by fabien7474
    I have been searching the web for recent benchmarks measuring Grails' overall performance compared to its competitors (Rails, Django, ASP.NET MVC...), but I didn't find anything more recent than a three-year-old article based on an obsolete Grails version (0.5). See here and here. So, starting from Grails 1.2, are there any more recent Grails benchmarks you are aware of? Or do you have your own performance tests for Grails (compared to others if possible)?

    Read the article

  • Are there any easy tools to download or upload data from GAE?

    - by zjm1126
    I found this: http://aralbalkan.com/1784 but it is: "Gaebar is an easy-to-use, standalone Django application that you can plug in to your existing Google App Engine Django or app-engine-patch-based Django applications on Google App Engine to give them datastore backup and restore functionality." My app is not based on Django, so do you know of any tools that make this easy? Thanks

    Read the article

  • Understanding and Controlling Parallel Query Processing in SQL Server

    Data warehousing and general reporting applications tend to be CPU intensive because they need to read and process a large number of rows. To facilitate quick data processing for queries that touch a large amount of data, Microsoft SQL Server exploits the power of multiple logical processors to provide parallel query processing operations such as parallel scans. Through extensive testing, we have learned that, for most large queries that are executed in a parallel fashion, SQL Server can deliver linear or nearly linear response time speedup as the number of logical processors increases. However, some queries in high parallelism scenarios perform suboptimally. There are also some parallelism issues that can occur in a multi-user parallel query workload. This white paper describes parallel performance problems you might encounter when you run such queries and workloads, and it explains why these issues occur. In addition, it presents how data warehouse developers can detect these issues, and how they can work around them or mitigate them.

    Read the article

  • SQLAuthority News – Download Whitepaper – Understanding and Controlling Parallel Query Processing in SQL Server

    - by pinaldave
    My recent article SQL SERVER – Reducing CXPACKET Wait Stats for High Transactional Database has received many good comments regarding MAXDOP 1 and MAXDOP 0. I really enjoyed reading the comments, as they came from industry leaders and gurus. While researching the subject further, I ended up on the following white paper written by Microsoft. Understanding and Controlling Parallel Query Processing in SQL Server Data warehousing and general reporting applications tend to be CPU intensive because they need to read and process a large number of rows. To facilitate quick data processing for queries that touch a large amount of data, Microsoft SQL Server exploits the power of multiple logical processors to provide parallel query processing operations such as parallel scans. Through extensive testing, we have learned that, for most large queries that are executed in a parallel fashion, SQL Server can deliver linear or nearly linear response time speedup as the number of logical processors increases. However, some queries in high parallelism scenarios perform suboptimally. There are also some parallelism issues that can occur in a multi-user parallel query workload. This white paper describes parallel performance problems you might encounter when you run such queries and workloads, and it explains why these issues occur. In addition, it presents how data warehouse developers can detect these issues, and how they can work around them or mitigate them. To review the document, please download the Understanding and Controlling Parallel Query Processing in SQL Server Word document. Note: The above abstract has been taken from here. The real question is: have parallel queries made the life of the DBA much simpler, or are they seen as a potential source of performance degradation? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL, Technology

    Read the article

  • Is this so bad when using MySQL queries in PHP?

    - by alex
    I need to update a lot of rows, per a user request. It is a site with products. I could...
    1. Delete all old rows for that product, then loop through, building a new INSERT query string. This, however, will lose all data if the INSERT fails.
    2. Perform an UPDATE through each loop. This loop currently iterates over 8 items, but in the future it may get up to 15. This many UPDATEs doesn't sound like too good an idea.
    3. Change the DB schema and add an auto_increment id to the rows. Then first do a SELECT, get all of the old rows' ids in a variable, perform one INSERT, and then a DELETE WHERE IN SET.
    What is the usual practice here? Thanks
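    A common way to address the first concern (losing the old rows if the INSERT fails) is to wrap the delete and the re-insert in a single transaction, so either both succeed or neither does. A minimal sketch of that pattern, using Python's standard sqlite3 module purely for illustration (the table and column names are made up; the same idea applies to MySQL from PHP by disabling autocommit and committing or rolling back explicitly):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE product_attrs (product_id INTEGER, name TEXT, value TEXT)")
        conn.execute("INSERT INTO product_attrs VALUES (1, 'color', 'red')")

        new_attrs = [(1, "color", "blue"), (1, "size", "large")]  # hypothetical user input

        try:
            with conn:  # opens a transaction; commits on success, rolls back on error
                conn.execute("DELETE FROM product_attrs WHERE product_id = ?", (1,))
                conn.executemany("INSERT INTO product_attrs VALUES (?, ?, ?)", new_attrs)
        except sqlite3.Error:
            # the rollback has already happened; the old rows are still intact
            raise

        print(conn.execute("SELECT * FROM product_attrs").fetchall())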

    Read the article

  • How to select the top n from a union of two queries where the resulting order needs to be ranked by individual query?

    - by Jedidja
    Let's say I have a table with usernames:

        Id  | Name
        -----------
        1   | Bobby
        20  | Bob
        90  | Bob
        100 | Joe-Bob
        630 | Bobberino
        820 | Bob Junior

    I want to return a list of n matches on name for 'Bob' where the resulting set first contains exact matches followed by similar matches. I thought something like this might work

        SELECT TOP 4 a.*
        FROM (
            SELECT * from Usernames WHERE Name = 'Bob'
            UNION
            SELECT * from Usernames WHERE Name LIKE '%Bob%'
        ) AS a

    but there are two problems:
    1. It's an inefficient query since the sub-select could return many rows (looking at the execution plan shows a join happening before top)
    2. (Almost) more importantly, the exact match(es) will not appear first in the results since the resulting set appears to be ordered by primary key.
    I am looking for a query that will return (for TOP 4)

        Id | Name
        ---------
        20 | Bob
        90 | Bob

    (and then 2 results from the LIKE query, e.g. 1 Bobby and 100 Joe-Bob) Is this possible in a single query?
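    One way to address the second problem (ordering) in a single query is to filter once with LIKE and sort by an exact-match flag before the key. A small self-contained sketch, using Python and SQLite purely to demonstrate the ranking idea (on SQL Server the equivalent would be SELECT TOP 4 ... ORDER BY CASE WHEN Name = 'Bob' THEN 0 ELSE 1 END, Id); table and data match the example above:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE Usernames (Id INTEGER PRIMARY KEY, Name TEXT)")
        conn.executemany(
            "INSERT INTO Usernames (Id, Name) VALUES (?, ?)",
            [(1, "Bobby"), (20, "Bob"), (90, "Bob"), (100, "Joe-Bob"),
             (630, "Bobberino"), (820, "Bob Junior")],
        )

        rows = conn.execute(
            """
            SELECT Id, Name
            FROM Usernames
            WHERE Name LIKE '%' || ? || '%'
            ORDER BY CASE WHEN Name = ? THEN 0 ELSE 1 END, Id
            LIMIT 4
            """,
            ("Bob", "Bob"),
        ).fetchall()

        print(rows)  # exact matches (20, 90) first, then partial matches (1, 100)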

    Read the article

  • Can I split an IEnumerable into two by a boolean criteria without two queries?

    - by SFun28
    Folks, Is it possible to split an IEnumerable into two IEnumerables using LINQ and only a single query/linq statement? In this way, I would avoid iterating through the IEnumerable twice. For example, is it possible to combine the last two statements below so allValues is only traversed once?

        IEnumerable<MyObj> allValues = ...
        List<MyObj> trues = allValues.Where( val => val.SomeProp ).ToList();
        List<MyObj> falses = allValues.Where( val => !val.SomeProp ).ToList();
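    For comparison, here is the single-pass idea sketched in Python (not C#): walk the sequence once and append each element to the bucket its flag selects. In LINQ, the analogous single-pass approaches would be grouping on the boolean (for example with ToLookup) or a plain foreach loop that fills two lists.

        def partition(items, predicate):
            """Split items into (matching, non_matching) in one pass."""
            trues, falses = [], []
            for item in items:
                (trues if predicate(item) else falses).append(item)
            return trues, falses

        all_values = [{"id": 1, "some_prop": True}, {"id": 2, "some_prop": False}]
        trues, falses = partition(all_values, lambda v: v["some_prop"])
        print(trues)
        print(falses)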

    Read the article

  • Get corresponding physical disk drives of mountpoints with WMI queries?

    - by Thomas
    Is there a way to retrieve a connection between a mountpoint (a volume which is mounted into the file system instead of mounted to a drive letter) and the physical disk drive(s) it belongs to with WMI? For example, I have got a volume mountpoint on a W2K8 server which is mounted to “C:\Data\”, and the mountpoint is spread across the physical disk drives 2, 4, and 5 of the server (the Data Management view of the Server Manager shows that), but I cannot find a way to discover this by using WMI. Volumes which have a drive letter can be linked via the WMI classes Win32_DiskDrive -- Win32_DiskDriveToDiskPartition -- Win32_DiskPartition -- Win32_LogicalDiskToPartition -- Win32_LogicalDisk – but the problem is that volume mountpoints aren’t listed in the class Win32_LogicalDisk; they are only listed in Win32_Volume. And I did not find a way to connect the class Win32_Volume with the class Win32_DiskDrive – some linking classes are missing. Does anyone know a solution?

    Read the article

  • MySQL create stored procedure fails but all internal queries succeed alone?

    - by Mark
    Hi all, I just created a simple database in MySQL, and I am learning how to write stored proc's. I'm familiar with M$SQL and as far as I can see the following should work: use mydb; -- -------------------------------------------------------------------------------- -- Routine DDL -- -------------------------------------------------------------------------------- DELIMITER // CREATE PROCEDURE mydb.doStats () BEGIN CREATE TABLE IF NOT EXISTS resultprobability ( ballNumber INT NOT NULL , probability FLOAT NULL, PRIMARY KEY (ballNumber) ); CREATE TABLE IF NOT EXISTS drawProbability ( drawDate DATE NOT NULL , ball1 INT NULL , ball2 INT NULL , ball3 INT NULL , ball4 INT NULL , ball5 INT NULL , ball6 INT NULL , ball7 INT NULL , score FLOAT NULL , PRIMARY KEY (drawDate) ); TRUNCATE TABLE resultprobability; TRUNCATE TABLE drawprobability; INSERT INTO resultprobability (ballNumber, probability) (select resultset.ballNumber ballNumber,(count(0)/(select count(0) from resultset)) probability from resultset group by resultset.ballNumber); INSERT INTO drawProbability (drawDate, ball1, ball2, ball3, ball4, ball5, ball6, ball7, score) (select distinct r.drawDate, a.ballnumber ball1, b.ballnumber ball2, c.ballnumber ball3, d.ballnumber ball4, e.ballnumber ball5, f.ballnumber ball6,g.ballnumber ball7, ((a.probability + b.probability + c.probability + d.probability + e.probability + f.probability + g.probability)/7) score from resultset r inner join (select r.drawDate, r.ballNumber, p.probability from resultset r inner join resultprobability p on p.ballNumber = r.ballNumber where r.appearence = 1) a on a.drawdate = r.drawDate inner join (select r.drawDate, r.ballNumber, p.probability from resultset r inner join resultprobability p on p.ballNumber = r.ballNumber where r.appearence = 2) b on b.drawdate = r.drawDate inner join (select r.drawDate, r.ballNumber, p.probability from resultset r inner join resultprobability p on p.ballNumber = r.ballNumber where r.appearence = 3) c on c.drawdate = r.drawDate inner join (select r.drawDate, r.ballNumber, p.probability from resultset r inner join resultprobability p on p.ballNumber = r.ballNumber where r.appearence = 4) d on d.drawdate = r.drawDate inner join (select r.drawDate, r.ballNumber, p.probability from resultset r inner join resultprobability p on p.ballNumber = r.ballNumber where r.appearence = 5) e on e.drawdate = r.drawDate inner join (select r.drawDate, r.ballNumber, p.probability from resultset r inner join resultprobability p on p.ballNumber = r.ballNumber where r.appearence = 6) f on f.drawdate = r.drawDate inner join (select r.drawDate, r.ballNumber, p.probability from resultset r inner join resultprobability p on p.ballNumber = r.ballNumber where r.appearence = 7) g on g.drawdate = r.drawDate order by score desc); END // DELIMITER ; instead i get the following Executed successfully in 0.002 s, 0 rows affected. 
Line 1, column 1 Error code 1064, SQL state 42000: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '' at line 26 Line 6, column 1 Error code 1064, SQL state 42000: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')) probability from resultset group by resultset.ballNumber); INSERT INTO d' at line 1 Line 31, column 51 Error code 1064, SQL state 42000: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ') score from resultset r inner join (select r.drawDate, r.ballNumber, p.probabi' at line 1 Line 39, column 114 Execution finished after 0.002 s, 3 error(s) occurred. What am I doing wrong? I seem to have exhausted my limited mental abilities!

    Read the article

  • Tuning Red Gate: #3 of Lots

    - by Grant Fritchey
    I'm drilling down into the metrics about SQL Server itself available to me in the Analysis tab of SQL Monitor to see what's up with our two problematic servers. In the previous post I'd noticed that rg-sql01 had quite a few CPU spikes. So one of the first things I want to check there is how much CPU is getting used by SQL Server itself. It's possible we're looking at some other process using up all the CPU. Nope, it's SQL Server. I compared this to the rg-sql02 server: You can see that there is a more consistently low set of CPU counters there. I clearly need to look at rg-sql01 and capture more specific data around the queries running on it to identify which ones are causing these CPU spikes. I always like to look at the Batch Requests/sec on a server, not because it's an indication of a problem, but because it gives you some idea of the load. Just how much is this server getting hit? Here are rg-sql01 and rg-sql02: Of the two, clearly rg-sql01 has a lot of activity. Remember though, that's all this is a measure of, activity. It doesn't suggest anything other than what it says, the number of requests coming in. But it's the kind of thing you want to know in order to understand how the system is used. Are you seeing a correlation between the number of requests and the CPU usage, or a reverse correlation, the number of requests drops as the CPU spikes? See, it's useful. Some of the details you can look at are Compilations/sec, Compilations/Batch and Recompilations/sec. These give you some idea of how the cache is getting used within the system. None of these showed anything interesting on either server. One metric that I like (even though I know it can be controversial) is the Page Life Expectancy. On the average server I expect to see a series of mountains as the PLE climbs then drops due to a data load or something along those lines. That's not the case here: Those spikes back in January suggest that the servers weren't really being used much. The PLE on rg-sql01 seems to be somewhat consistent, growing to 3 hours or so then dropping, but the rg-sql02 PLE looks like it might be all over the map. Instead of continuing to look at this high-level data gathering view, I'm going to drill down on rg-sql02 and see what it's done for the last week: And now we begin to see where we might have an issue. Memory on this system is getting flushed every 1/2 hour or so. I'm going to check another metric, scans: Whoa! I'm going back to the system real quick to look at some disk information again for rg-sql02. Here is the average disk queue length on the server, and the transfers. Right, I think I have a guess as to what's up here. We're seeing memory get flushed constantly and we're seeing lots of scans. The disks are queuing, especially that F drive, and there are lots of requests that correspond to the scans and the memory flushes. In short, we've got queries that are scanning the data, a lot, so we either have bad queries or bad indexes. I'm going back to the server overview for rg-sql02 and check the Top 10 expensive queries. I'm modifying it to show me the last 3 days and the totals, so I'm not looking at some maintenance routine that ran 10 minutes ago and is skewing the results: OK. I need to look into these queries that are getting executed this much. They're generating a lot of reads, but which queries are generating the most reads: Ow, all still going against the same database. This is where I'm going to temporarily leave SQL Monitor. 
What I want to do is connect up to the server, validate that the Warehouse database is using the F:\ drive (which I'll put money down it is) and then start seeing what's up with these queries. Part 1 of the Series Part 2 of the Series

    Read the article

  • NDepend 4 – First Steps

    - by Ricardo Peres
    Introduction Thanks to Patrick Smacchia I had the chance to test NDepend 4. I can only say: awesome! This will be the first of a series of posts on NDepend, where I will talk about my discoveries. Keep in mind that I am just starting to use it, so more experienced users may find these too basic; I just hope I don’t say anything foolish! I must say that I am in no way affiliated with NDepend and I never actually met Patrick. Installation No installation program – a curious decision, and I’m not against it – just unzip the files to a folder and run the executable. It will optionally register itself with Visual Studio 2008, 2010 and 11 as well as RedGate’s Reflector; also, it automatically looks for updates. NDepend can either be used as a stand-alone program (with or without a GUI) or from within Visual Studio or Reflector. Getting Started One thing that really pleases me is the Getting Started section of the stand-alone program, with links to pages on NDepend’s web site, featuring detailed explanations, which usually include screenshots and small videos (<5 minutes). There’s also a How do I section with hierarchical navigation that guides us through the major features so that we can easily find what we want. Usage There are two basic ways to use NDepend: analyze .NET solutions, projects or assemblies; or compare two versions of the same assembly. I have so far not used NDepend to compare assemblies, so I will first talk about the first option. After selecting a solution and some of its projects, it generates a single HTML page with a highly detailed report of the analysis it produced. This includes some metrics such as the number of lines of code, IL instructions, comments, types, methods and properties, the calculation of cyclomatic complexity, coupling and lots of other indicators, typically grouped by type, namespace and assembly. The HTML also includes some nice diagrams depicting assembly dependencies, type and method relative proportions (according to the number of IL instructions, I guess) and assembly analysis relating to abstractness and stability. Useful, I would say. Then there are the rules; NDepend tests the target assemblies against a set of more than 120 rules, grouped in the categories Code Quality, Object Oriented Design, Design, Architecture and Layering, Dead Code, Visibility, Naming Conventions, Source Files Organization and .NET Framework Usage. The full list can be configured in the application, and an explanation of each rule can be found on the web site. Rules can be validated, violated and violated in a critical manner, and the HTML will contain the violated rules, their queries – more on this later – and results. The HTML uses some nice JavaScript effects, which allow paging and sorting of tables, so it’s nice to use. Similar to the rules, there are some queries that display results for a number (about 200) of questions grouped as Object Oriented Design, API Breaking Changes (for assembly version comparison), Code Diff Summary (also for version comparison) and Dead Code. The difference between queries and rules is that queries are not classified as validated, violated or critically violated; they just present results. The queries and rules are expressed through CQLinq, which is a very powerful LINQ derivative specific to code analysis. All of the included rules and queries can be enabled or disabled and new ones can be added, with IntelliSense to help. 
Besides the HTML report file, the NDepend application can be used to explore all analysis results, compare different versions of analysis reports and to run custom queries. Comparison to Other Analysis Tools Unlike StyleCop, NDepend only works with assemblies, not source code, so you can’t expect it to be able to enforce bracket placement, for example. It is more similar to FxCop, but you don’t have the option to analyze at the IL level, that is, other than the number of IL instructions and the complexity. What’s Next In the coming days I’ll continue my exploration with a real-life test case. References The NDepend web site is http://www.ndepend.com/. Patrick keeps an updated blog on http://codebetter.com/patricksmacchia/ and he regularly monitors StackOverflow for questions tagged NDepend, which you can find on http://stackoverflow.com/questions/tagged/ndepend. The default list of CQLinq rules, queries and statistics can be found at http://www.ndepend.com/DefaultRules/webframe.html. The syntax itself is described at http://www.ndepend.com/Doc_CQLinq_Syntax.aspx and its features at http://www.ndepend.com/Doc_CQLinq_Features.aspx.

    Read the article

  • SQL SERVER – How to Force New Cardinality Estimation or Old Cardinality Estimation

    - by Pinal Dave
    After reading my initial two blog posts on New Cardinality Estimation, I received quite a few questions. Once I received these questions, I felt I should have clarified a few things earlier, when I started to write about cardinality. Before continuing this blog, if you have not read them before, I suggest you read the following two blog posts. SQL SERVER – Simple Demo of New Cardinality Estimation Features of SQL Server 2014 SQL SERVER – Cardinality Estimation and Performance – SQL in Sixty Seconds #072 Q: Will the new cardinality estimation improve the performance of all of my queries? A: Remember, there is no 0 or 1 logic when it is about estimation. The general assumption is that most queries will benefit from the new cardinality estimation introduced in SQL Server 2014. That is why the generic advice is to set the compatibility level of the database to 120, which is for SQL Server 2014. Q: Is it possible that after changing cardinality estimation to the new logic by setting the compatibility level to 120, I get degraded performance for a few queries? A: Yes, it is possible. However, the number of queries where this happens should be very small. Q: Can I still run my database in an older compatibility level and force a few queries to use the newer cardinality estimation logic? If yes, how? A: Yes, you can do that. You will need to force your query with trace flag 2312 to use the newer cardinality estimation logic.

        USE AdventureWorks2014
        GO
        -- Old Cardinality Estimation
        ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 110
        GO
        -- Using New Cardinality Estimation
        SELECT [AddressID],[AddressLine1],[City]
        FROM [Person].[Address]
        OPTION(QUERYTRACEON 2312);
        -- Using Old Cardinality Estimation
        SELECT [AddressID],[AddressLine1],[City]
        FROM [Person].[Address];
        GO

    Q: Can I run my database in the newer compatibility level and force a few queries to use the older cardinality estimation logic? If yes, how? A: Yes, you can do that. You will need to force your query with trace flag 9481 to use the older cardinality estimation logic.

        USE AdventureWorks2014
        GO
        -- New Cardinality Estimation
        ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 120
        GO
        -- Using New Cardinality Estimation
        SELECT [AddressID],[AddressLine1],[City]
        FROM [Person].[Address];
        -- Using Old Cardinality Estimation
        SELECT [AddressID],[AddressLine1],[City]
        FROM [Person].[Address]
        OPTION(QUERYTRACEON 9481);
        GO

    I guess I have covered most of the questions I have received so far. If I have missed any questions, please send them to me again and I will include them. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Recommend good shared hosting [closed]

    - by Django Reinhardt
    It seems that everyone has something bad to say about the "big" shared hosting sites like 1and1, HostGator, GoDaddy, etc., but what are the ones you've had GOOD experiences with? I'm going to focus this question on LAMP stacks, given that they're the most popular option for shared hosting, but if you have had an especially good experience with a different stack, feel free to mention it. Good shared hosting should be:
    Competitively priced - But not at the expense of...
    Fully featured - Email, PHP, MySQL, but what else?
    Highly customizable - Do you have access to advanced features like being able to deliver static content?
    Up to date - Do they run PHP4 as standard, or do they run the latest version?
    Customer service - When you have a problem are they rude and unhelpful? Do they take ages to reply?
    So how about it? Who have YOU had a good experience with?

    Read the article

  • 1and1: Unable to host an external domain

    - by Django Reinhardt
    I'm sorry if this isn't the right place for this question, but I'm presently having difficulties with my hosting provider (1and1). Two weeks ago, two of my clients bought hosting from them on my recommendation, but as it turned out, 1and1 are having severe technical difficulties. Right now none of their hosting packages are able to accept ANY external domains. So either you pay the cost of transferring your domain's registration, or you use the ugly 1and1 domain name. Not any good for a hosting company of 1and1's reputation! They have been promising me for two weeks that they're going to fix the problem, but as you have probably guessed by now, that hasn't been the case. I would like to know a) if anyone else is in the same boat as me, and b) if there are other comparably reputable hosting providers that I should consider moving to instead. Very disappointing! :( Note: This is for 1and1 in the UK. I imagine it isn't affecting users in other countries(?) Clarification: 1and1 are unable to accept ANY external domains. That means that even if you update your DNS details on your domain, their system cannot be updated to add your external domain to your account.

    Read the article

  • MySQL & PHP - select/option lists and showing data to users that still allows me to generate queries

    - by Andrew Heath
    Sorry for the unclear title, an example will clear things up:

        TABLE: Scenario_victories
        ID   scenid    timestamp            userid   side   playdate
        1    RtBr001   2010-03-15 17:13:36  7        1      2010-03-10
        2    RtBr001   2010-03-15 17:13:36  7        1      2010-03-10
        3    RtBr001   2010-03-15 17:13:51  7        2      2010-03-10

    ID and timestamp are auto-insertions by the database when the other 4 fields are added. The first thing to note is that a user can record multiple playings of the same scenario (scenid) on the same date (playdate) possibly with the same outcome (side = winner). Hence the need for the unique ID and timestamps for good measure. Now, on their user page, I'm displaying their recorded play history in a <select><option>... list form with 2 buttons at the end - Delete Record and Go to Scenario

    My script takes the scenid and after hitting a few other tables returns with something more user-friendly like:

        (playdate)   (from scenid)                (from side)
        #########################################################
        # 2010-03-10 Road to Berlin #1 -- Germany, Hungary won #
        # 2010-03-10 Road to Berlin #1 -- Germany, Hungary won #
        # 2010-03-10 Road to Berlin #1 -- Soviet Union won     #
        #########################################################
        [Delete Record] [Go To Scenario]

    in HTML:

        <select name="history" size=3>
          <option>2010-03-10 Road to Berlin #1 -- Germany, Hungary won</option>
          <option>2010-03-10 Road to Berlin #1 -- Germany, Hungary won</option>
          <option>2010-03-10 Road to Berlin #1 -- Soviet Union won</option>
        </select>

    Now, if you were to highlight the first record and click Go to Scenario there is enough information there for me to parse it and produce the exact scenario you want to see. However, if you were to select Delete Record there is not - I have the playdate and I can parse the scenid and side from what's listed, but in this example all three records would have the same result. I appear to have painted myself into a corner. Does anyone have a suggestion as to how I can get some unique identifying data (ID and/or timestamp) to ride along on this form without showing it to the user? PHP-only please, I must be NoScript compliant!
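    One common way to carry the unique identifier without displaying it is to put the row ID in each option's value attribute, so the form posts back the ID while the user only sees the label. A hypothetical sketch of the idea, shown here in Python generating the markup purely for illustration (the same pattern applies when the options are emitted from PHP, and it requires no JavaScript):

        # Carry the record's unique ID in the option's value attribute so it is
        # posted back with the form but never shown to the user.
        records = [
            (1, "2010-03-10 Road to Berlin #1 -- Germany, Hungary won"),
            (2, "2010-03-10 Road to Berlin #1 -- Germany, Hungary won"),
            (3, "2010-03-10 Road to Berlin #1 -- Soviet Union won"),
        ]
        options = "\n".join(
            f'  <option value="{rec_id}">{label}</option>' for rec_id, label in records
        )
        print(f'<select name="history" size="3">\n{options}\n</select>')
        # The server-side handler then deletes by the posted ID, not by the label.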

    Read the article

  • Complex Rails queries across multiple tables, unions, and will_paginate. Solved.

    - by uberllama
    Hi folks. I've been working on a complex "user feed" type of functionality for a while now, and after experimenting with various union plugins, hacking named scopes, and brute force, have arrived at a solution I'm happy with. S.O. has been hugely helpful for me, so I thought I'd post it here in hopes that it might help others and also to get feedback -- it's very possible that I worked on this so long that I walked down an unnecessarily complicated road. For the sake of my example, I'll use users, groups, and articles. A user can follow other users to get a feed of their articles. They can also join groups and get a feed of articles that have been added to those groups. What I needed was a combined, pageable feed of distinct articles from a user's contacts and groups. Let's begin.

    user.rb

        has_many :articles
        has_many :contacts
        has_many :contacted_users, :through => :contacts
        has_many :memberships
        has_many :groups, :through => :memberships

    contact.rb

        belongs_to :user
        belongs_to :contacted_user, :class_name => "User", :foreign_key => "contacted_user_id"

    article.rb

        belongs_to :user
        has_many :submissions
        has_many :groups, :through => :submissions

    group.rb

        has_many :memberships
        has_many :users, :through => :memberships
        has_many :submissions
        has_many :articles, :through => :submissions

    Those are the basic models that define my relationships. Now, I add two named scopes to the Article model so that I can get separate feeds of both contact articles and group articles should I desire.

    article.rb

        # Get all articles by user's contacts
        named_scope :by_contacts, lambda {|user|
          {:joins => "inner join contacts on articles.user_id = contacts.contacted_user_id",
           :conditions => ["articles.published = 1 and contacts.user_id = ?", user.id]}
        }

        # Get all articles in user's groups. This does an additional query to get the
        # user's group IDs, then uses those in an IN clause
        named_scope :by_groups, lambda {|user|
          {:select => "DISTINCT articles.*",
           :joins => :submissions,
           :conditions => {:submissions => {:group_id => user.group_ids}}}
        }

    Now I have to create a method that will provide a UNION of these two feeds into one. Since I'm using Rails 2.3.5, I have to use the construct_finder_sql method to render a scope into its base sql. In Rails 3.0, I could use the to_sql method.

    user.rb

        def feed
          "(#{Article.by_groups(self).send(:construct_finder_sql,{})}) UNION (#{Article.by_contacts(self).send(:construct_finder_sql,{})})"
        end

    And finally, I can now call this method and paginate it from my controller using will_paginate's paginate_by_sql method.

    HomeController.rb

        @articles = Article.paginate_by_sql(current_user.feed, :page => 1)

    And we're done! It may seem simple now, but it was a lot of work getting there. Feedback is always appreciated. In particular, it would be great to get away from some of the raw sql hacking. Cheers.

    Read the article

  • How can I speed up queries against tables I cannot add indexes to?

    - by RenderIn
    I access several tables remotely via DB Link. They are very normalized and the data in each is effective-dated. Of the millions of records in each table, only a subset of ~50k are current records. The tables are internally managed by a commercial product that will throw a huge fit if I add indexes or make alterations to its tables in any way. What are my options for speeding up access to these tables?
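    Since the remote tables themselves cannot be indexed or altered, one workaround people commonly reach for is to periodically copy just the small current subset into a local table (or materialized view) that you do control, index that copy, and point queries at it. A rough sketch of the idea in Python, with SQLite standing in for the local database and a placeholder for the fetch over the DB link (table and column names here are hypothetical):

        import sqlite3

        def fetch_current_rows_from_remote():
            # Placeholder: in practice this would select only the ~50k
            # effective-dated "current" rows over the DB link.
            return [(1, "2010-01-01", "example payload")]

        local = sqlite3.connect("local_cache.db")
        local.execute("DROP TABLE IF EXISTS current_rows")
        local.execute(
            "CREATE TABLE current_rows (id INTEGER, effective_date TEXT, payload TEXT)"
        )
        local.executemany(
            "INSERT INTO current_rows (id, effective_date, payload) VALUES (?, ?, ?)",
            fetch_current_rows_from_remote(),
        )
        local.execute("CREATE INDEX idx_current_rows_id ON current_rows (id)")
        local.commit()
        # Queries now hit the small, indexed local snapshot instead of the remote
        # tables; refresh the snapshot on whatever schedule the data can tolerate.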

    Read the article

  • How to search phrase queries in inverted index structure?

    - by Mehdi Amrollahi
    If we want to search a phrase query like "t1 t2 t3" (t1, t2, t3 must appear in sequence) in an inverted index structure, which approach should we use?
    1. First we search the "t1" term and find all documents that contain "t1", then do the same for "t2" and then "t3". Then we find the documents in which the positions of "t1", "t2" and "t3" are next to each other.
    2. First we search the "t1" term and find all documents that contain "t1"; then, within the documents we found, we search for "t2"; and then, in the result of this, we find the documents that contain "t3".
    Thanks.
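    For reference, the first approach corresponds to the classic positional-posting-list intersection: each term's postings store (document, positions), and a phrase match requires that t2 occur at position(t1) + 1, t3 at position(t1) + 2, and so on. A minimal illustrative sketch in Python (the index layout here is hypothetical, not tied to any particular search engine):

        # Toy positional inverted index: term -> {doc_id: [positions]}
        index = {
            "t1": {1: [0, 10], 2: [3]},
            "t2": {1: [1, 4], 2: [7]},
            "t3": {1: [2, 20], 2: [8]},
        }

        def phrase_search(terms, index):
            """Return doc ids where the terms appear consecutively, in order."""
            if not terms or any(t not in index for t in terms):
                return []
            # Candidate documents must contain every term at all.
            candidate_docs = set(index[terms[0]])
            for term in terms[1:]:
                candidate_docs &= set(index[term])
            matches = []
            for doc in candidate_docs:
                # A phrase starts at p if term i occurs at position p + i for every i.
                starts = set(index[terms[0]][doc])
                for i, term in enumerate(terms[1:], start=1):
                    starts &= {p - i for p in index[term][doc]}
                if starts:
                    matches.append(doc)
            return sorted(matches)

        print(phrase_search(["t1", "t2", "t3"], index))  # -> [1] (positions 0, 1, 2)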

    Read the article

  • fitnesse test framework, arbitrary properties for test and queries/test runs based on them?

    - by Marcel
    Hi, our testers have the requirement to store multiple properties for a test that are not present in the "properties" page, e.g. they want to store priority, a description (not in the wiki page itself) and so on. They don't want to use the tagging mechanism. Is there a way to store any kind of new XML node in the properties.xml for a test? These properties should then be used to:
    - query the fields via the search screen
    - run tests based on the "SuiteResponder" ?suite=xxx&TAGx=abc&TAGy=cde
    - be returned by the "?properties" responder
    - appear in the test history of the test run
    In essence, they want to store any kind of "meta" information in the properties.xml and work with it in all kinds of ways: search, run, etc. Does anybody here know if there is already something available in that direction? If not, I think we have to "pimp" these features into FitNesse to make our testers happy. Thanks a lot, any help appreciated. Marcel PS: I've also posted the question in the Yahoo FitNesse group.

    Read the article
