Search Results

Search found 77599 results on 3104 pages for 'test data'.


  • SQL SERVER – What is MDS? – Master Data Services in Microsoft SQL Server 2008 R2

    - by pinaldave
    What is MDS? Master Data Services helps enterprises standardize the data people rely on to make critical business decisions. With Master Data Services, IT organizations can centrally manage critical data assets company-wide and across diverse systems, enable more people to securely manage master data directly, and ensure the integrity of information over time. (Source: Microsoft) Today I will be talking about this same subject at Microsoft TechEd India. If you want to learn how to standardize your data and apply business rules to validate it, you must attend my session. MDS is a very interesting concept, and I will cover it in ten short but very interesting slides. I will make sure that in the very first 20 minutes you understand the following topics:
    Introduction to Master Data Management
    What is Master Data and Challenges
    MDM Challenges and Advantage
    Microsoft Master Data Services
    Benefits and Key Features
    Uses of MDS
    Capabilities
    Key Features of MDS
    The slide deck will be followed by a roughly 30-minute demo telling a story of entities, hierarchies, versions, security, consolidation, and collections. I will tell this story with business rules at its center: we will take one simple validation rule and make it much more complex, yet still very useful to the product. I will also demonstrate a few real-life scenarios involving MDS and its usage. Do not miss this session. At the end of the session a book will be awarded to the best participant. My session details: Session: Master Data Services in Microsoft SQL Server 2008 R2 Date: April 12, 2010 Time: 2:30pm-3:30pm SQL Server Master Data Services will ship with SQL Server 2008 R2 and will improve Microsoft’s platform appeal. This session provides an in-depth demonstration of MDS features and highlights important usage scenarios. Master Data Services enables consistent decision making by allowing you to create, manage, and propagate changes from a single master view of your business entities. Also, the Master Data hub, a vital component of MDS, helps ensure reporting consistency across systems and delivers faster, more accurate results across the enterprise. We will talk about establishing the basis for a centralized approach to defining, deploying, and managing master data in the enterprise. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, Data Warehousing, MVP, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL, Technology Tagged: TechEd, TechEdIn

  • SQLAuthority News – SQL Server Technical Article – The Data Loading Performance Guide

    - by pinaldave
    This white paper describes load strategies for achieving high-speed data modifications of a Microsoft SQL Server database. “Bulk Load Methods” and “Other Minimally Logged and Metadata Operations” provide an overview of two key and interrelated concepts for high-speed data loading: bulk loading and metadata operations. After this background, the white paper describes how these methods can be [...]

  • Data caching in ASP.Net applications

    - by nikolaosk
    In this post I will continue my series on caching. You can read my earlier post on output caching here; there you can read about how to cache a page depending on the user's browser language. Output caching has its place as a caching mechanism, but right now I will focus on data caching. The advantages of data caching are well known, but I will highlight the main points:
    We get improved response times
    We reduce database round trips
    We have different levels of caching, and it is up to us...(read more)
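
    (An aside, not from the post: the post discusses ASP.NET's Cache API, but the get-or-load pattern behind data caching is language-neutral. A minimal Python sketch, with the hypothetical load_products standing in for a real database query:)

        import time

        _cache = {}  # key -> (value, expires_at)

        def get_or_load(key, loader, ttl_seconds=60):
            # Return the cached value if present and fresh; otherwise reload it.
            entry = _cache.get(key)
            if entry is not None and entry[1] > time.time():
                return entry[0]                      # cache hit: no database round trip
            value = loader()                         # cache miss: one round trip to the source
            _cache[key] = (value, time.time() + ttl_seconds)
            return value

        def load_products():
            # Hypothetical stand-in for a real database query.
            return ["widget", "gadget"]

        products = get_or_load("products", load_products, ttl_seconds=300)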

  • SQL Developer Data Modeler: On Notes, Comments, and Comments in RDBMS

    - by thatjeffsmith
    Ah, the beautiful data model. They say a picture is worth a thousand words; so how many words are our diagrams worth? [Image: our friends from the Human Resources (HR) sample schema.] Our models describe how the data ‘works’ – whether that be at a logical-business level or a technical-physical level. Developers like to say that their code is self-documenting. These would be very lazy or very bad (or both) developers. Models are the same way: you should document your models with comments and notes! I have 3 basic options:
    Comments
    Comments in RDBMS
    Notes
    So what’s the difference?
    Comments: You’re describing the entity/table or attribute/column. This information will NOT be published in the database. It will only be available in the model, and hence to folks with access to the model. [Image: table comments, in the design only!]
    Comments in RDBMS: You’re doing the same thing as above, but your words will be stored IN the data dictionary of the database. Oracle allows you to store comments on the table and column definitions, so your awesome documentation is going to be viewable to anyone with access to the database. (RDBMS is an acronym for Relational Database Management System – of which Oracle is one of the first commercial examples.) If the DDL is produced and run against a database, these comments WILL be stored in the data dictionary.
    Notes: A place for you to add notes, maybe from a design meeting. Or maybe you’re using this as a to-do or requirements list. Basically it’s for anything that doesn’t literally describe the object at hand – that’s what the comments are for. [Image: example notes. “I totally made these up.”] Now, these are free-text fields and you can put whatever you want here. Just make sure you put stuff here that’s worth reading. And it will live on…forever.

  • Truly understand the threshold for document set in document library in SharePoint

    - by ybbest
    Recently, I worked on an issue with the list view threshold. The problem: when the user navigates to a view of the document library, it displays the error message “list view threshold is exceeded”, yet the view contains no data. The list view threshold limit is 5000 by default for non-admin users. This limit is not the number of items returned by your query; it is the total number of items the database needs to read to calculate the returned result set. So although the view returns no results, calculating that (empty) result requires reading more than 5000 items in the database. To fix the issue, you need to create an index for the column you use in the view’s filter. Let’s look at the problem in detail. You can download a solution to replicate this issue here.
    1. Go to Central Admin ==> Web Application Management ==> General Settings ==> click on Resource Throttling.
    2. Change the list view threshold in the web application from 5000 to 2000, so that I can show the problem without loading more than 5000 items into the list. [Before/after screenshots in the original post.]
    3. Go to the page that displays the Approved view of the Loan application document set. It displays the message shown below, although I do not have any data returned for this view. [Screenshot in the original post.]
    4. To get around this, you need to create an indexed column. Go to list settings and click on Indexed columns.
    5. Click on the “Create a new index” link.
    6. Select the LoanStatus field, as I use this field as the filter to create the view.
    7. After the index is created, I can now access the Approved view; as you can see, it does not return any data.
    Notes: List View Threshold: specifies the maximum number of items that a database operation can involve at one time. Operations that exceed this limit are prohibited.
    References:
    SharePoint lists V: Techniques for managing large lists
    Manage large SharePoint lists for better performance
    http://blogs.technet.com/b/speschka/archive/2009/10/27/working-with-large-lists-in-sharepoint-2010-list-throttling.aspx
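
    (An illustrative aside, not part of the original walkthrough: the threshold counts rows the database must read, not rows returned, which is why an indexed column fixes an “empty” view. A toy Python sketch of the difference:)

        # 10,000 fake list items; only a handful are Approved.
        rows = [{"id": i, "LoanStatus": "Approved" if i % 1000 == 0 else "Pending"}
                for i in range(10000)]

        # Without an index, the filter examines every row: 10,000 reads,
        # which is what trips the list view threshold even for an empty result.
        approved_scan = [r for r in rows if r["LoanStatus"] == "Approved"]

        # With an index, a lookup structure maps each status to its rows,
        # so the query touches only the matching rows (10 reads here).
        index = {}
        for r in rows:
            index.setdefault(r["LoanStatus"], []).append(r)
        approved_indexed = index.get("Approved", [])

        assert approved_scan == approved_indexed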

  • Big DataData Mining with Hive – What is Hive? – What is HiveQL (HQL)? – Day 15 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of the operational database in the Big Data story. In this article we will understand what Hive and HQL are in the Big Data story. Yahoo started working on Pig (we will cover that in the next blog post) for their application deployment on Hadoop; Yahoo’s goal was to manage their unstructured data. Similarly, Facebook started deploying their warehouse solutions on Hadoop, which resulted in Hive. The reason for going with Hive is that traditional warehousing solutions were getting very expensive.
    What is Hive? Hive is a data warehousing infrastructure for Hadoop. Its primary responsibility is to provide data summarization, query, and analysis. It supports analysis of large datasets stored in Hadoop’s HDFS as well as on the Amazon S3 filesystem. The best part of Hive is that it supports SQL-like access to structured data, known as HiveQL (or HQL), as well as big data analysis with the help of MapReduce. Hive is not built to give quick responses to queries; it is built for data mining applications, which can take from several minutes to several hours to analyze the data, and that is where Hive is primarily used.
    Hive Organization: The data is organized in three different formats in Hive.
    Tables: They are very similar to RDBMS tables and contain rows and columns. Hive is just layered over the Hadoop File System (HDFS), hence tables are directly mapped to directories of the filesystem. It also supports tables stored in other native file systems.
    Partitions: Hive tables can have more than one partition. Partitions are mapped to subdirectories of the file system as well.
    Buckets: In Hive, data may be divided into buckets. Buckets are stored as files in partitions in the underlying file system.
    Hive also has a metastore, which stores all the metadata. It is a relational database containing various information related to Hive schemas (column types, owners, key-value data, statistics, etc.). We can use a MySQL database here.
    What is HiveQL (HQL)? The Hive query language provides the basic SQL-like operations. Here are a few of the tasks which HQL can do easily:
    Create and manage tables and partitions
    Support various relational, arithmetic, and logical operators
    Evaluate functions
    Download the contents of a table to a local directory, or the result of queries to an HDFS directory
    Here is an example of HQL queries:
    SELECT upper(name), salesprice FROM sales;
    SELECT category, count(1) FROM products GROUP BY category;
    When you look at the above queries, you can see they are very similar to SQL queries. Tomorrow: In tomorrow’s blog post we will discuss a very important component of the Big Data ecosystem – Pig. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
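
    (A toy illustration, not from the original post: Hive decides a row’s bucket by hashing the bucketing column and taking it modulo the bucket count, so each bucket corresponds to one file under the table or partition directory. A rough Python sketch of that idea:)

        NUM_BUCKETS = 4

        def bucket_for(user_id):
            # Hive-style bucketing: hash of the bucketing column, modulo bucket count.
            return hash(user_id) % NUM_BUCKETS

        buckets = {}
        for user_id in ["alice", "bob", "carol", "dave", "erin"]:
            buckets.setdefault(bucket_for(user_id), []).append(user_id)

        # Each bucket would be stored as one file inside the partition directory.
        for b in sorted(buckets):
            print(b, buckets[b])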

  • Test interface implementation

    - by Michael
    I have an interface in our code base that I would like to be able to mock out for unit testing. I am writing a test implementation that allows individual tests to override only the specific methods they are concerned with, rather than implementing every method. I've run into a quandary over how the test implementation should behave if a test fails to override a method used by the method under test. Should I return a "non-value" (0, null) from the test implementation, or throw an UnsupportedOperationException to explicitly fail the test?
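
    (A sketch of the fail-loudly option, with invented names since the original interface isn't shown. The shared test double raises on anything a test hasn't overridden, so an accidental dependency on an unstubbed method fails immediately instead of passing on a misleading null or zero:)

        class StubPaymentService:
            # Shared test implementation of a hypothetical PaymentService interface.
            def charge(self, account, amount):
                raise NotImplementedError("charge() was not stubbed by this test")

            def refund(self, account, amount):
                raise NotImplementedError("refund() was not stubbed by this test")

        class ChargeOnlyStub(StubPaymentService):
            # An individual test overrides only the method it cares about.
            def charge(self, account, amount):
                return "ok"

    The trade-off: returning a non-value keeps tests running but can hide an unexpected call; raising surfaces it as an explicit, easy-to-diagnose failure.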

  • Data Masking for Oracle E-Business Suite

    - by Troy Kitch
    E-Business Suite customers can now use Oracle Data Masking to obscure sensitive information in non-production environments. Many organizations are inadvertently exposed when copying sensitive or regulated production data into non-production database environments for development, quality assurance or outsourcing purposes. Due to weak security controls and unmonitored access, these non-production environments have increasingly become the target of cyber criminals. Learn more about the announcement here.

  • Extracting Data from a Source System to History Tables

    - by Derek D.
    This is a topic I find very little information written about; however, it is very important that the method for extracting data be done in a way that does not hinder the performance of the source system. In this example, the goal is to extract data from a source system into another database (or server) all [...]

  • Importing Excel data into SSIS 2008 using Data Conversion Transformation

    Despite its benefits, the SQL Server Integration Services Import/Export Wizard has a number of limitations, resulting in part from a new set of rules that eliminate the implicit data type conversion mechanisms present in Data Transformation Services. This article discusses a method that addresses such limitations, focusing in particular on importing the content of Excel spreadsheets into SQL Server.

  • Design patterns to avoid breaking the SRP while performing heavy data logging

    - by Kazark
    A class that performs both computations and data logging seems to have at least two responsibilities. Given a system whose specifications require heavy data logging, what kinds of design patterns or architectural patterns can be used to avoid bloating all the classes with logging calls every time they compute something? The decorator pattern could be used (e.g., Interpolator decorated by LoggingInterpolator), but it seems that would result in a situation hardly more desirable, in which almost every major class would need to be decorated with logging.
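
    (A minimal Python sketch of the decorator idea the question mentions; Interpolator and its method are invented for illustration:)

        import logging

        class Interpolator:
            def interpolate(self, x0, x1, t):
                return x0 + (x1 - x0) * t    # pure computation, no logging concern

        class LoggingInterpolator:
            # Decorator: exposes the same interface, adds the logging responsibility.
            def __init__(self, inner, logger=None):
                self._inner = inner
                self._log = logger or logging.getLogger("datalog")

            def interpolate(self, x0, x1, t):
                result = self._inner.interpolate(x0, x1, t)
                self._log.info("interpolate(%s, %s, %s) -> %s", x0, x1, t, result)
                return result

        interp = LoggingInterpolator(Interpolator())
        interp.interpolate(0.0, 10.0, 0.25)

    As the question notes, this still means wrapping nearly every class; alternatives such as emitting events from a few well-chosen seams and letting a single listener persist them move the logging responsibility to one place.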

  • Tonight: Oracle Data Mining news webcast!

    - by Fekete Zoltán
    On Wednesday, May 12, 2010 at 6 pm, connecting with our browser, we can listen to the following very interesting presentation as part of Oracle BIWA: BIWA SIG TechCast Series - May 12 - Data Mining Made Easy, presented by Charlie Berger, Oracle's head of data mining. Data mining made easy! An introduction to the new "work flow" graphical interface of Oracle Data Miner 11g Release 2. Joining Oracle BIWA is free at this link. Here is where we can find out how to listen to this conference: www.oraclebiwa.org

  • Big data: An evening in the life of an actual buyer

    - by Jean-Pierre Dijcks
    Here I am, and this is an actual story of one of my evenings, trying to spend money with a company and ultimately failing. I just gave up and bought a service from another vendor, not the incumbent. Here is that story, and how I think big data could actually fix this (and potentially prevent some of it from happening). In the end, this story should illustrate how big data can benefit me (get me what I want without causing grief) and the company I am trying to buy something from. Note: lots of details left out; I have no intention of being the annoyed blogger moaning about a specific company.

    What did I want to get? We watch TV, we have internet, and we do have a land line. The land line is from a different vendor than the TV and the internet. I decided that this makes no sense and I was going to get a bundle (no need to infer who this is, I just picked the generic word "bundle" as this is what I want to get) of all three services, as this seems to save me money. I also want to not talk to people; I just want to click on a website when I feel like it and get it all sorted. I do think that is reality. I want to just do my shopping at 9.30pm while watching silly reruns on TV.

    Problem 1 - Bad links. So, I'm an existing customer of the company I want to buy my bundle from. I go to the website and click on offers. Turns out they are offers for new customers. After grumbling about how good they are, I click on offers for existing customers. Bummer, it goes to offers for new customers, so I click again on the link for offers for existing customers. No cigar... it just does not work.

    Big data solutions: 1) Do not show an existing customer the offers for new customers unless they are the same => This is only partially doable without login, but if a customer logs in, the application should always know that this is an existing customer. In general, imagine I do this from my home going through the internet service of this vendor to their domain... an instant filter should move me into the "existing customer" route. 2) Flag dead or incorrect links => I've clicked the link for "existing customer offers" at least 3 times in under 5 seconds... Identifying patterns like this is easy in Hadoop and can very quickly produce a list of potentially incorrect links. No need for realtime fixing; just the fact that this link can be pro-actively fixed across my entire web domain is a good thing. Preventative maintenance!

    Problem 2 - Purchase cannot be completed. Apart from the fact that the browsing path to actually get to what I want is poorly designed, my purchase never gets past a specific point. In other words, I put something into my shopping cart, and when I want to move on, the application either crashes (sending me to an error page), or hangs, or goes into something like chat. So I try again, and again, and again. I think I tried this entire path (while being logged in!!) at least 10 times over the course of 20 minutes. I also clicked on the feedback button and, frustrated as I was, tried to explain that this did not work...

    Big data solutions: 1) This web site does shopping cart analysis. I got an email the next day stating I have things in my shopping cart, just click here to complete my purchase. After the above experience, this just added insult to my pain... 2) What should have happened is a Hadoop job going over all logged-in customers on the buy flow. It should flag anyone who is trying (multiple attempts from the same user to do the same thing), analyze the shopping cart and the clicks to identify what the customer wants, along with the feedback provided (note: always own your own website feedback, never just farm this out!!), and in a short turnaround time (30 minutes to 2 hours or so) email me with a link to complete my purchase. Not a link to my shopping cart 12 hours later, but a link to actually achieve what I wanted...

    Why should this company go through the big data effort? I do believe this is relatively easy to do using our Oracle Event Processing and Big Data Appliance solutions combined. It is almost so simple (to my mind) that it makes no sense that this is not in place. But, now I am ranting...

    Why is this interesting? It is because of $$$$. I tried really hard: I did this all in the evening, and again in the morning before going to work. I kept on failing, but I really wanted this to work... An email that said "sorry, we noticed you tried to get a bundle (the log knows what I wanted and where I failed, so this is easy to generate); here is the link to click and complete your purchase, and here are 2 movies on us as an apology" would have kept me as a customer, and got the additional $$$$ per month for the next couple of years. It would also lead to upsell on my phone package etc. Instead, I went to a completely different company and bought service from them. Lost money for company A, negative sentiment for company A, and me telling this story at the water cooler, influencing more people to think negatively about company A. All in all, a loss of easy money and a ding in sentiment and image, where a relatively simple solution exists and can be put in place with the software I describe routinely in this blog...

    For those who are coming to OpenWorld and maybe see value in solving the above, or are thinking of how to solve this, come visit us in Moscone North - Oracle Red Lounge or in the Engineered Systems Showcase.
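
    (A toy sketch, not from the post, of the "flag dead links" idea: given click-log records of (user, link, timestamp), repeated clicks on the same link within a few seconds suggest the link is broken:)

        from collections import defaultdict

        # Hypothetical click log: (user, link, unix timestamp).
        clicks = [
            ("jp", "/offers/existing", 100.0),
            ("jp", "/offers/existing", 101.5),
            ("jp", "/offers/existing", 103.0),
            ("jp", "/offers/new", 200.0),
        ]

        by_user_link = defaultdict(list)
        for user, link, ts in clicks:
            by_user_link[(user, link)].append(ts)

        suspect_links = set()
        for (user, link), times in by_user_link.items():
            times.sort()
            # Three or more clicks on the same link within 5 seconds: probably broken.
            for i in range(len(times) - 2):
                if times[i + 2] - times[i] <= 5.0:
                    suspect_links.add(link)

        print(suspect_links)  # {'/offers/existing'}

    At scale, the same aggregation would run as a MapReduce job over the web logs rather than in memory.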

  • XMLHttpRequest not working, trying to test database connection [closed]

    - by Frederick Marcoux
    I'm currently creating my own CMS for personal use, but I'm stuck on some code. I'm writing an installation script, and the AJAX request that tests whether the database works doesn't work... Here's my JS code:

        function testDB() {
            "use strict";
            var host = document.getElementById('host').value;
            var username = document.getElementById('username').value;
            var password = document.getElementById('password').value;
            var db = document.getElementById('db_name').value;
            var xmlhttp = new XMLHttpRequest();
            var url = "test_db.php";
            var params = "host="+host+"&username="+username+"&password="+password+"&db="+db;
            xmlhttp.open("POST", url, true);
            xmlhttp.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
            xmlhttp.setRequestHeader("Content-length", params.length);
            xmlhttp.setRequestHeader("Connection", "close");
            xmlhttp.send(params);
            $('#loader').removeAttr('style');
            if (xmlhttp.responseText !== '') {
                if (xmlhttp.readyState===4 && xmlhttp.status===200) {
                    $('#next').removeAttr('disabled');
                    $('#test').attr('disabled', 'disabled');
                    $('#test').text('Connection Successful!');
                    $('#test').addClass('btn-success');
                    $('#login').addClass('success');
                    $('#login1').addClass('success');
                    $('#db').addClass('success');
                    $('#loader').attr('style', 'display: none;');
                } else {
                    $('#next').attr('disabled', 'disabled');
                    $('#test').removeClass('btn-success');
                    $('#test').removeAttr('disabled');
                    $('#test').text('Test Connection');
                    $('#login').removeClass('success');
                    $('#login1').removeClass('success');
                    $('#db').removeClass('success');
                    $('#loader').attr('style', 'display: none;');
                }
            } else {
                $('#next').attr('disabled', 'disabled');
                $('#next').attr('disabled', 'disabled');
                $('#test').removeClass('btn-success');
                $('#test').removeAttr('disabled');
                $('#test').text('Test Connection');
                $('#login').removeClass('success');
                $('#login1').removeClass('success');
                $('#db').removeClass('success');
                $('#loader').attr('style', 'display: none;');
            }
        }

    And here's my PHP code:

        <?php
        $link = mysql_connect($_POST['host'], $_POST['username'], $_POST['password']);
        if (!$link) {
            echo '';
        } else {
            if (mysql_select_db($_POST['db'])) {
                echo 'Connection Successful!';
            } else {
                echo '';
            }
        }
        mysql_close($link);
        ?>

    I don't know why it doesn't work; I also tried jQuery's $.ajax, $.get, and $.post, but nothing works...

  • Classic vs universal Google analytics and loss of historical data

    - by iss42
    I'm keen to use some of the new features in Google Universal Analytics, but I have an old site whose historical data I don't want to lose; comparisons with historical data are interesting, for example. However, Google doesn't appear to allow you to change a property from the classic code to the new code. Am I missing something? I'm surprised this isn't a bigger issue for many other users.


  • How do I code this relationship in SQLAlchemy?

    - by Martin Del Vecchio
    I am new to SQLAlchemy (and SQL, for that matter). I can't figure out how to code the idea I have in my head. I am creating a database of performance-test results:
    A test run consists of a test type and a number (this is class TestRun below).
    A test suite consists of the version string of the software being tested, and one or more TestRun objects (this is class TestSuite below).
    A test version consists of all test suites with the given version name.
    Here is my code, as simple as I can make it:

        from sqlalchemy import *
        from sqlalchemy.ext.declarative import declarative_base
        from sqlalchemy.orm import relationship, backref, sessionmaker

        Base = declarative_base()

        class TestVersion (Base):
            __tablename__ = 'versions'
            id = Column (Integer, primary_key=True)
            version_name = Column (String)

            def __init__ (self, version_name):
                self.version_name = version_name

        class TestRun (Base):
            __tablename__ = 'runs'
            id = Column (Integer, primary_key=True)
            suite_directory = Column (String, ForeignKey ('suites.directory'))
            suite = relationship ('TestSuite', backref=backref ('runs', order_by=id))
            test_type = Column (String)
            rate = Column (Integer)

            def __init__ (self, test_type, rate):
                self.test_type = test_type
                self.rate = rate

        class TestSuite (Base):
            __tablename__ = 'suites'
            directory = Column (String, primary_key=True)
            version_id = Column (Integer, ForeignKey ('versions.id'))
            version_ref = relationship ('TestVersion', backref=backref ('suites', order_by=directory))
            version_name = Column (String)

            def __init__ (self, directory, version_name):
                self.directory = directory
                self.version_name = version_name

        # Create a v1.0 suite
        suite1 = TestSuite ('dir1', 'v1.0')
        suite1.runs.append (TestRun ('test1', 100))
        suite1.runs.append (TestRun ('test2', 200))

        # Create another v1.0 suite
        suite2 = TestSuite ('dir2', 'v1.0')
        suite2.runs.append (TestRun ('test1', 101))
        suite2.runs.append (TestRun ('test2', 201))

        # Create another suite
        suite3 = TestSuite ('dir3', 'v2.0')
        suite3.runs.append (TestRun ('test1', 102))
        suite3.runs.append (TestRun ('test2', 202))

        # Create the in-memory database
        engine = create_engine ('sqlite://')
        Session = sessionmaker (bind=engine)
        session = Session()
        Base.metadata.create_all (engine)

        # Add the suites in
        version1 = TestVersion (suite1.version_name)
        version1.suites.append (suite1)
        session.add (suite1)

        version2 = TestVersion (suite2.version_name)
        version2.suites.append (suite2)
        session.add (suite2)

        version3 = TestVersion (suite3.version_name)
        version3.suites.append (suite3)
        session.add (suite3)

        session.commit()

        # Query the suites
        for suite in session.query (TestSuite).order_by (TestSuite.directory):
            print "\nSuite directory %s, version %s has %d test runs:" % (suite.directory, suite.version_name, len (suite.runs))
            for run in suite.runs:
                print "  Test '%s', result %d" % (run.test_type, run.rate)

        # Query the versions
        for version in session.query (TestVersion).order_by (TestVersion.version_name):
            print "\nVersion %s has %d test suites:" % (version.version_name, len (version.suites))
            for suite in version.suites:
                print "  Suite directory %s, version %s has %d test runs:" % (suite.directory, suite.version_name, len (suite.runs))
                for run in suite.runs:
                    print "    Test '%s', result %d" % (run.test_type, run.rate)

    The output of this program:

        Suite directory dir1, version v1.0 has 2 test runs:
          Test 'test1', result 100
          Test 'test2', result 200

        Suite directory dir2, version v1.0 has 2 test runs:
          Test 'test1', result 101
          Test 'test2', result 201

        Suite directory dir3, version v2.0 has 2 test runs:
          Test 'test1', result 102
          Test 'test2', result 202

        Version v1.0 has 1 test suites:
          Suite directory dir1, version v1.0 has 2 test runs:
            Test 'test1', result 100
            Test 'test2', result 200

        Version v1.0 has 1 test suites:
          Suite directory dir2, version v1.0 has 2 test runs:
            Test 'test1', result 101
            Test 'test2', result 201

        Version v2.0 has 1 test suites:
          Suite directory dir3, version v2.0 has 2 test runs:
            Test 'test1', result 102
            Test 'test2', result 202

    This is not correct, since there are two TestVersion objects with the name 'v1.0'. I hacked my way around this by adding a private list of TestVersion objects, and a function to find a matching one:

        versions = []

        def find_or_create_version (version_name):
            # Find existing
            for version in versions:
                if version.version_name == version_name:
                    return (version)
            # Create new
            version = TestVersion (version_name)
            versions.append (version)
            return (version)

    Then I modified my code that adds the records to use it:

        # Add the suites in
        version1 = find_or_create_version (suite1.version_name)
        version1.suites.append (suite1)
        session.add (suite1)

        version2 = find_or_create_version (suite2.version_name)
        version2.suites.append (suite2)
        session.add (suite2)

        version3 = find_or_create_version (suite3.version_name)
        version3.suites.append (suite3)
        session.add (suite3)

    Now the output is what I want:

        Suite directory dir1, version v1.0 has 2 test runs:
          Test 'test1', result 100
          Test 'test2', result 200

        Suite directory dir2, version v1.0 has 2 test runs:
          Test 'test1', result 101
          Test 'test2', result 201

        Suite directory dir3, version v2.0 has 2 test runs:
          Test 'test1', result 102
          Test 'test2', result 202

        Version v1.0 has 2 test suites:
          Suite directory dir1, version v1.0 has 2 test runs:
            Test 'test1', result 100
            Test 'test2', result 200
          Suite directory dir2, version v1.0 has 2 test runs:
            Test 'test1', result 101
            Test 'test2', result 201

        Version v2.0 has 1 test suites:
          Suite directory dir3, version v2.0 has 2 test runs:
            Test 'test1', result 102
            Test 'test2', result 202

    This feels wrong to me; it doesn't feel right that I am manually keeping track of the unique version names, and manually adding the suites to the appropriate TestVersion objects. Is this code even close to being correct? And what happens when I'm not building the entire database from scratch, as in this example? If the database already exists, do I have to query the database's TestVersion table to discover the unique version names? Thanks in advance. I know this is a lot of code to wade through, and I appreciate the help.
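
    (One common answer to the closing question, sketched here rather than taken from the post: let the session do the bookkeeping. Query for an existing TestVersion and create one only on a miss; this also works when the database already exists:)

        def find_or_create_version (session, version_name):
            # Ask the database (and, via autoflush, the session's pending objects) first.
            version = session.query (TestVersion).filter_by (version_name=version_name).first()
            if version is None:
                version = TestVersion (version_name)
                session.add (version)
            return version

    With the session's default autoflush, versions added earlier in the same session are found too; a unique constraint on version_name would guard against duplicates at the database level.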

  • rails test.log is always empty

    - by Raiden
    All the log entries generated when running tests with 'rake' are written to my development.log instead of the test.log file. Do I have to explicitly enable logging for the test environment in config/environments/test.rb? (I'm using the 'turn' gem to format test output; can that cause an issue?) I'm running Rails 2.3.5, Ruby 1.8.7. I have all these gems installed for RAILS_ENV=test. Any help is appreciated.
        - [I] less
        - [I] treetop = 1.4.2
        - [I] polyglot = 0.2.5
        - [I] mutter = 0.4.2
        - [I] mysql
        - [I] authlogic
        - [R] activesupport
        - [I] turn
        - [I] ansi = 1.1.0
        - [I] facets = 2.8.0
        - [I] rspec = 1.2.0
        - [I] rspec-rails = 1.2.0
        - [I] rspec = 1.3.0
        - [R] rack = 1.0.0
        - [I] webrat = 0.4.3
        - [I] nokogiri = 1.2.0
        - [R] rack = 1.0
        - [I] rack-test = 0.5.3
        - [R] rack = 1.0
        - [I] cucumber = 0.2.2
        - [I] term-ansicolor = 1.0.4
        - [I] treetop = 1.4.2
        - [I] polyglot = 0.2.5
        - [I] polyglot = 0.2.9
        - [R] builder = 2.1.2
        - [I] diff-lcs = 1.1.2
        - [R] json_pure = 1.2.0
        - [I] cucumber-rails
        - [I] cucumber = 0.6.2
        - [I] term-ansicolor = 1.0.4
        - [I] treetop = 1.4.2
        - [I] polyglot = 0.2.5
        - [I] polyglot = 0.2.9
        - [R] builder = 2.1.2
        - [I] diff-lcs = 1.1.2
        - [R] json_pure = 1.2.0
        - [I] database_cleaner = 0.2.3
        - [I] launchy
        - [R] rake = 0.8.1
        - [I] configuration = 0.0.5
        - [I] faker
        - [I] populator
        - [R] flog = 2.1.0
        - [R] flay
        - [I] rcov
        - [I] reek
        - [R] ruby_parser ~ 2.0
        - [I] ruby2ruby ~ 1.2
        - [R] sexp_processor ~ 3.0
        - [R] ruby_parser ~ 2.0
        - [R] sexp_processor ~ 3.0
        - [I] roodi
        - [R] ruby_parser
        - [I] gruff
        - [I] rmagick
        - [I] ruby-prof
        - [R] jscruggs-metric_fu = 1.1.5
        - [I] factory_girl
        - [I] notahat-machinist

  • No test coverage files generated for Unit Test bundle in Xcode

    - by John Gallagher
    The Problem: I've got a Cocoa project on the desktop and I'm using Xcode 3.2.1 on Snow Leopard 10.6.2. I want to generate code coverage files for my unit test target in Xcode.

    What I've Tried: As articles like this one suggest, I've adjusted the build settings:
    "Generate Test Coverage Files" checked
    "Instrument Program Flow" checked
    "-lgcov" added to "Other Linker Flags"
    I've also set the Run Script section of the test target to the following:

        # Run the unit tests in this test bundle.
        "${SYSTEM_DEVELOPER_DIR}/Tools/RunUnitTests"

        # Run gcov on the framework getting tested
        if [ "${CONFIGURATION}" = 'Coverage' ]; then
            FRAMEWORK_NAME=LapsusInterpretationEngine
            FRAMEWORK_OBJ_DIR=${OBJROOT}/${FRAMEWORK_NAME}.build/${CONFIGURATION}/EngineTests.build/Objects-normal/${NATIVE_ARCH}
            mkdir -p coverage
            pushd coverage
            find ${OBJROOT} -name *.gcda -exec gcov -o ${FRAMEWORK_OBJ_DIR} {} \;
            popd
        fi

    Since my framework is named LapsusInterpretationEngine but my target is named EngineTests, I put this directly into FRAMEWORK_OBJ_DIR, but this didn't seem to help. I've tried cleaning before building. I've made sure all the above build settings apply to both the unit test target and the application target.

    What I Get: No .gcda or .gcno files anywhere in the build directory I'm using. I point CoverStory at the Objects-normal directory in my builds folder and it complains that there's nothing there for it to read. I must be doing something really obvious wrong. Anyone have any ideas? I have also tried replacing "EngineTests.build" in the path with ${FRAMEWORK_NAME}.build, and this gives the same results.

  • Referencing Entity from external data model - Core Data

    - by Ben Reeves
    I have an external library which includes a Core Data model. I would like to add a new entity to this model which has a relationship with one of the entities from the library. I know I could modify the original, but is there a way to do this without polluting the library? I tried just creating a new model with an entity named the same, but that doesn't work:

        *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: 'Can't merge models with two different entities named 'Host''

  • Rails performance tests "rake test:benchmark" and "rake test:profile" give me errors

    - by go minimal
    I'm trying to run a blank default performance test with Ruby 1.9 and Rails 2.3.5 and I just can't get it to work! What am I missing here???

        rails testapp
        cd testapp
        script/generate scaffold User name:string
        rake db:migrate
        rake test:benchmark

    The output:

        /usr/local/bin/ruby19 -I"lib:test" "/usr/local/lib/ruby19/gems/1.9.1/gems/rake-0.8.7/lib/rake/rake_test_loader.rb" "test/performance/browsing_test.rb" -- --benchmark
        Loaded suite /usr/local/lib/ruby19/gems/1.9.1/gems/rake-0.8.7/lib/rake/rake_test_loader
        Started
        /usr/local/lib/ruby19/gems/1.9.1/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:105:in `rescue in const_missing': uninitialized constant BrowsingTest::STARTED (NameError)
            from /usr/local/lib/ruby19/gems/1.9.1/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:94:in `const_missing'
            from /usr/local/lib/ruby19/gems/1.9.1/gems/activesupport-2.3.5/lib/active_support/testing/performance.rb:38:in `run'
            from /usr/local/lib/ruby19/1.9.1/minitest/unit.rb:415:in `block (2 levels) in run_test_suites'
            from /usr/local/lib/ruby19/1.9.1/minitest/unit.rb:409:in `each'
            from /usr/local/lib/ruby19/1.9.1/minitest/unit.rb:409:in `block in run_test_suites'
            from /usr/local/lib/ruby19/1.9.1/minitest/unit.rb:408:in `each'
            from /usr/local/lib/ruby19/1.9.1/minitest/unit.rb:408:in `run_test_suites'
            from /usr/local/lib/ruby19/1.9.1/minitest/unit.rb:388:in `run'
            from /usr/local/lib/ruby19/1.9.1/minitest/unit.rb:329:in `block in autorun'
        rake aborted!
        Command failed with status (1): [/usr/local/bin/ruby19 -I"lib:test" "/usr/l...]

  • Boost.Test: Looking for a working non-Trivial Test Suite Example / Tutorial

    - by Robert S. Barnes
    The Boost.Test documentation and examples don't really seem to contain any non-trivial examples, and so far the two tutorials I've found here and here, while helpful, are both fairly basic. I would like to have a master test suite for the entire project, while maintaining per-module suites of unit tests and fixtures that can be run independently. I'll also be using a mock server to test various networking edge cases. I'm on Ubuntu 8.04, but I'll take any example, Linux or Windows, since I'm writing my own makefiles anyway.
