Search Results

Search found 38931 results on 1558 pages for 'database testing'.


  • Should Development / Testing / QA / Staging environments be similar?

    - by Walter White
    Hi all, After much time and effort, we're finally using Maven to manage our application lifecycle for development. Unfortunately, we still use Ant to build an EAR before deploying to Test / QA / Staging. While we've made that leap forward, developers are still free to do as they please when testing their code. One issue we have is that half our team tests on Tomcat and the other half on Jetty. I slightly prefer Jetty over Tomcat, but regardless, we use WAS for all the other environments. My question is: should we develop on the same application server we're deploying to? We've had numerous bugs come up from these differences in environments; Tomcat, Jetty, and WAS are different under the hood. My opinion is that we should all develop on what we deploy to production with, so we don't run into "well, it worked fine on my machine". While I prefer Jetty, I'd just as soon we all work on the same environment, even if it means deploying to WAS, which is slow and cumbersome. What are your team dynamics like? Our lead developers stepped down from the team and development has been a free-for-all since then. Walter


  • Testing Routes in ASP.NET MVC with MvcContrib

    - by Guilherme Cardoso
    I've decided to write about unit testing in the coming weeks. If we develop following Test-Driven Development, it's important not to forget the routes. This article shows how to test them. In SetUp, I import my routes by calling the RegisterRoutes method from the Global.asax of Project.Web (created by default). I'm using ShouldMapTo() from MvcContrib: http://mvccontrib.codeplex.com/ The controller is specified in the ShouldMapTo() signature, and we use lambda expressions for the action and the parameters that are passed to that controller.

        [SetUp]
        public void Setup()
        {
            Project.Web.MvcApplication.RegisterRoutes(RouteTable.Routes);
        }

        [Test]
        public void Should_Route_HomeController()
        {
            "~/Home".ShouldMapTo<HomeController>(action => action.Index());
        }

        [Test]
        public void Should_Route_EventsController()
        {
            "~/Events".ShouldMapTo<EventsController>(action => action.Index());

            // 44 is the Id of my Event and "Concert-DevaMatri-22-January" is the title of that Event
            "~/Events/View/44/Concert-DevaMatri-22-January-"
                .ShouldMapTo<EventsController>(action => action.Read(44, "Concert-DevaMatri-22-January"));
        }

        [TearDown]
        public void Teardown()
        {
            RouteTable.Routes.Clear();
        }


  • A Reusable Builder Class for Javascript Testing

    - by Liam McLennan
    Continuing on my series of builders for C# and Ruby, here is the solution in Javascript. This is probably the implementation with which I am least happy; there are several parts that did not seem to fit the language. This time around I didn't bother with a testing framework, I just append some values to the page with jQuery. Here is the test code:

        var initialiseBuilder = function() {
            var builder = builderConstructor();
            builder.configure({
                'Person': function() { return {name: 'Liam', age: 26}; },
                'Property': function() { return {street: '127 Creek St', manager: builder.a('Person')}; }
            });
            return builder;
        };

        var print = function(s) {
            $('body').append(s + '<br/>');
        };

        var build = initialiseBuilder();

        // get an object
        liam = build.a('Person');
        print(liam.name + ' is ' + liam.age);

        // get a modified object
        liam = build.a('Person', function(person) {
            person.age = 999;
        });
        print(liam.name + ' is ' + liam.age);

        home = build.a('Property');
        print(home.street + ' manager: ' + home.manager.name);

    and the implementation:

        var builderConstructor = function() {
            var that = {};
            var defaults = {};

            that.configure = function(d) {
                defaults = d;
            };

            that.a = function(type, modifier) {
                var o = defaults[type]();
                if (modifier) {
                    modifier(o);
                }
                return o;
            };

            return that;
        };

    I still like Javascript's syntax for anonymous methods: defaults[type]() is much clearer than the Ruby equivalent @defaults[klass].call(). You can see the striking similarity between Ruby hashes and Javascript objects. I also prefer modifier(o) to the equivalent Ruby, yield o.


  • Unit testing ASP.NET Web API controllers that rely on the UrlHelper

    - by cibrax
    UrlHelper is the class you can use in ASP.NET Web API to automatically infer links from the routing table without hardcoding anything. For example, the following code uses the helper to infer the location url for a new resource:

        public HttpResponseMessage Post(User model)
        {
            var response = Request.CreateResponse(HttpStatusCode.Created, model);
            var link = Url.Link("DefaultApi", new { id = model.Id, controller = "Users" });
            response.Headers.Location = new Uri(link);
            return response;
        }

    That code uses a previously defined route, "DefaultApi", which you might configure in the HttpConfiguration object (this is the route generated by default when you create a new Web API project). The problem with UrlHelper is that it requires some initialization code before you can invoke it from a unit test (for testing the Post method in this example). If you don't initialize the HttpConfiguration and Request instances associated to the controller from the unit test, it will fail miserably. After digging into the ASP.NET Web API source code a little bit, I could figure out what the requirements for using the UrlHelper are. It relies on the routing table configuration and a few properties you need to add to the HttpRequestMessage. The following code illustrates what's needed:

        var controller = new UserController();
        controller.Configuration = new HttpConfiguration();

        var route = controller.Configuration.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional }
        );

        var routeData = new HttpRouteData(route,
            new HttpRouteValueDictionary {
                { "id", "1" },
                { "controller", "Users" }
            }
        );

        controller.Request = new HttpRequestMessage(HttpMethod.Post, "http://localhost:9091/");
        controller.Request.Properties.Add(HttpPropertyKeys.HttpConfigurationKey, controller.Configuration);
        controller.Request.Properties.Add(HttpPropertyKeys.HttpRouteDataKey, routeData);

    The HttpRouteData instance should be initialized with the route values you will use in the controller method ("id" and "controller" in this example). Once you have correctly set up all those properties, you shouldn't have any problem using the UrlHelper. There is no need to mock anything else. Enjoy!


  • Service Testing made easy with SO-Aware Test Workbench

    - by cibrax
    I'm happy to announce today a new addition to our SO-Aware service repository toolset: SO-Aware Test Workbench, a WPF desktop application for doing functional and load testing against existing WCF services. This tool is completely integrated with the SO-Aware service repository, which makes configuring new load and functional tests for WCF SOAP and REST services a breeze. From now on, the service repository can play a very important role in an organization by facilitating collaboration between developers and testers. Developers can create and register new services in the repository with all the related artifacts, like configuration. Testers, on the other hand, can just pick one of the existing services in the repository and create functional or load tests from there, with no need to deal with the specific details of the service implementation, location or configuration settings. Developers and testers can later use the results of those tests to modify the services or adjust different settings on the tests or service configuration. Gustavo Machado, one of the developers behind this project, has written an excellent post describing all the functionality you can find today in the tool. You can also see the tool in action in this Endpoint TV episode with Jesus and Ron Jacobs.


  • BizTalk 2009 - Error when Testing Map with Flat File Source Schema

    - by StuartBrierley
    I have recently been creating some flat file schemas using the BizTalk Server 2009 Flat File Schema Wizard. I have then been mapping these flat file schemas to a "normal" XML schema format. I had not previously had any cause to map flat files, and I ran into some trouble when testing the first of these flat file maps; with an instance of the flat file as the source, it threw an XSL transform error:

        Test Map.btm: error btm1050: XSL transform error: Unable to write output instance to the following
        <file:///C:\Documents and Settings\sbrierley\Local Settings\Temp\_MapData\Test Mapping\Test Map_output.xml>.
        Data at the root level is invalid. Line 1, position 1.

    Due to the complexity of the map in question, I decided to create a small test map using the same source and destination schemas to see if I could pinpoint the problem. Although the source message instance validated correctly against the flat file schema, when I then tested this simplified map I got the same error. After a time of fruitless head scratching and some serious Google time, I figured out what the problem was: looking at the map properties, I noticed that I had the test map input set to "XML", whereas for a flat file instance it should be set to "Native".


  • Supporting and testing multiple versions of a software library in a Maven project

    - by Duncan Jones
    My company has several versions of its software in use by our customers at any one time. My job is to write bespoke Java software for the customers based on the version they happen to be running. I've created a Java library that performs many of the tasks I regularly require in a normal project. This is a Maven project that I deploy to our local Artifactory and pull down into other Maven projects when required. I can't decide the best way to support the range of software versions used by our customers. Typically, we have about three versions in use at any one time. They are normally backwards compatible with one another, but that cannot be guaranteed. I have considered the following options for managing this issue:

        1. Separate editions for each library version. I make a separate release of my library for each version of my company's software. Using some Maven cunning I could automatically produce a tested version linked to each of the then-current company software versions. This is feasible, but not without its technical challenges. The advantage is that this would be fairly automatic and my unit tests would definitely have executed against the correct software version. However, I would have to keep updating the versions supported and might end up maintaining a large collection of libraries.

        2. One supported version, but others tested. I support the oldest software version and make a release against that. I then perform tests with the newer software versions to ensure it still works. I could try to make this testing automatic by having some non-deployed Maven projects that import the software library and the associated test JAR, and override the company software version used. If those projects build, then the library is compatible. I could ensure these meta-projects are included in our CI server builds.

    I welcome comments on which approach is better, or a suggestion for a different approach entirely. I'm leaning towards the second option.


  • Oracle DB 11gR2: Database Smart Flash Cache

    - by Yuichi.Hayashi
    Oracle Database 11g R2 includes features aimed at getting more performance out of the server resources you already have, without adding CPUs or replacing all of your storage; one of these is Database Smart Flash Cache. Traditionally, Oracle Database keeps its working set in the in-memory buffer cache and everything else on Hard Disk Drives (HDD), and the large latency gap between memory and disk means that SQL performance is often bounded by HDD I/O rather than by CPU. Solid State Drives (SSD) are dramatically faster than HDD, especially for the random I/O patterns typical of OLTP workloads, but replacing all disk storage with SSD is expensive. Database Smart Flash Cache takes a middle path: Oracle automatically keeps frequently accessed "hot" data on an SSD-backed cache layer, effectively extending the buffer cache onto flash while colder data remains on HDD. The result is better throughput from the same CPUs and disks, with the placement of hot data managed by the database itself.
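
    A minimal sketch of how the feature is enabled, for orientation: the two initialization parameters below are the standard ones for Database Smart Flash Cache, but the device path and size are illustrative, and both changes require an instance restart.

        -- assumes a flash device mounted at /dev/flash01; size the cache to a multiple of the buffer cache
        ALTER SYSTEM SET db_flash_cache_file = '/dev/flash01' SCOPE = SPFILE;
        ALTER SYSTEM SET db_flash_cache_size = 32G SCOPE = SPFILE;
        -- after restart, hot blocks aged out of the buffer cache spill to the flash cache instead of being dropped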


  • Unable to use nMock GetProperty routine on a property of an inherited object...

    - by Chris
    I am getting this error when trying to set an expectation on an object I mocked that inherits from MembershipUser:

        ContactRepositoryTests.UpdateTest : Failed
        System.InvalidProgramException: JIT Compiler encountered an internal limitation.
        Server stack trace:
          at MockObjectType1.ToString()
        Exception rethrown at [0]:
          at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
          at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(ref MessageData msgData, Int32 type)
          at System.Object.ToString()
          at NMock2.Internal.ExpectationBuilder.On(Object receiver)

    Here are the tools I am using: VS2008 (SP1), Framework 3.5, nUnit 2.4.8, nMock 2.0.0.44, Resharper 4.1. I am at a loss as to why this would be happening. Any help would be appreciated.

    Test class:

        [TestFixture]
        public class AddressRepositoryTests
        {
            private Mockery m_Mockery;
            private Data.IAddress m_MockDataAddress;
            private IUser m_MockUser;

            [SetUp]
            public void Setup()
            {
                m_Mockery = new Mockery();
                m_MockDataAddress = m_Mockery.NewMock<Data.IAddress>();
                m_MockUser = m_Mockery.NewMock<IUser>();
            }

            [TearDown]
            public void TearDown()
            {
                m_Mockery.Dispose();
            }

            [Test]
            public void CreateTest()
            {
                string line1 = "unitTestLine1";
                string line2 = "unitTestLine2";
                string city = "unitTestCity";
                int stateId = 1893;
                string postalCode = "unitTestPostalCode";
                int countryId = 223;
                bool active = false;
                int createdById = 1;

                Expect.Once
                    .On(m_MockUser)
                    .GetProperty("Identity")
                    .Will(Return.Value(createdById));

                Expect.Once
                    .On(m_MockDataAddress)
                    .Method("Insert")
                    .With(line1, line2, city, stateId, postalCode, countryId, active, createdById, Is.Anything)
                    .Will(Return.Value(null));

                IAddressRepository addressRepository = new AddressRepository(m_MockDataAddress);
                IAddress address = addressRepository.Create(line1, line2, city, stateId, postalCode, countryId, active, m_MockUser);
                Assert.IsNull(address);
            }
        }

    User class:

        public interface IUser
        {
            int? Identity { get; set; }
            int? CreatedBy { get; set; }
            DateTime CreatedOn { get; set; }
            int? ModifiedBy { get; set; }
            DateTime? ModifiedOn { get; set; }
            string UserName { get; }
            object ProviderUserKey { get; }
            string Email { get; set; }
            string PasswordQuestion { get; }
            string Comment { get; set; }
            bool IsApproved { get; set; }
            bool IsLockedOut { get; }
            DateTime LastLockoutDate { get; }
            DateTime CreationDate { get; }
            DateTime LastLoginDate { get; set; }
            DateTime LastActivityDate { get; set; }
            DateTime LastPasswordChangedDate { get; }
            bool IsOnline { get; }
            string ProviderName { get; }
            string ToString();
            string GetPassword();
            string GetPassword(string passwordAnswer);
            bool ChangePassword(string oldPassword, string newPassword);
            bool ChangePasswordQuestionAndAnswer(string password, string newPasswordQuestion, string newPasswordAnswer);
            string ResetPassword(string passwordAnswer);
            string ResetPassword();
            bool UnlockUser();
        }

        public class User : MembershipUser, IUser
        {
            private int? m_Identity;
            public int? Identity
            {
                get { return m_Identity; }
                set
                {
                    if (value <= 0)
                        throw new Exception("Address.Identity must be greater than 0.");
                    m_Identity = value;
                }
            }

            public int? CreatedBy { get; set; }

            private DateTime m_CreatedOn = DateTime.Now;
            public DateTime CreatedOn
            {
                get { return m_CreatedOn; }
                set { m_CreatedOn = value; }
            }

            public int? ModifiedBy { get; set; }
            public DateTime? ModifiedOn { get; set; }

            public User() { }
        }

    Address class:

        public interface IAddress
        {
            int? Identity { get; set; }
            string Line1 { get; set; }
            string Line2 { get; set; }
            string City { get; set; }
            string PostalCode { get; set; }
            bool Active { get; set; }
            int? CreatedBy { get; set; }
            DateTime CreatedOn { get; set; }
            int? ModifiedBy { get; set; }
            DateTime? ModifiedOn { get; set; }
        }

        public class Address : IAddress
        {
            private int? m_Identity;
            public int? Identity
            {
                get { return m_Identity; }
                set
                {
                    if (value <= 0)
                        throw new Exception("Address.Identity must be greater than 0.");
                    m_Identity = value;
                }
            }

            public string Line1 { get; set; }
            public string Line2 { get; set; }
            public string City { get; set; }
            public string PostalCode { get; set; }
            public bool Active { get; set; }
            public int? CreatedBy { get; set; }

            private DateTime m_CreatedOn = DateTime.Now;
            public DateTime CreatedOn
            {
                get { return m_CreatedOn; }
                set { m_CreatedOn = value; }
            }

            public int? ModifiedBy { get; set; }
            public DateTime? ModifiedOn { get; set; }
        }

    AddressRepository class:

        public interface IAddressRepository
        {
            IAddress Create(string line1, string line2, string city, int stateId, string postalCode, int countryId, bool active, IUser createdBy);
        }

        public class AddressRepository : IAddressRepository
        {
            private Data.IAddress m_DataAddress;
            private Data.IAddress DataAddress
            {
                get
                {
                    if (m_DataAddress == null)
                        m_DataAddress = new Data.Address();
                    return m_DataAddress;
                }
                set { m_DataAddress = value; }
            }

            public AddressRepository() { }

            public AddressRepository(Data.IAddress dataAddress)
            {
                DataAddress = dataAddress;
            }

            public IAddress Create(string line1, string line2, string city, int stateId, string postalCode, int countryId, bool active, IUser createdBy)
            {
                if (String.IsNullOrEmpty(line1))
                    throw new Exception("You must enter a Address Line 1 to register.");
                if (String.IsNullOrEmpty(city))
                    throw new Exception("You must enter a City to register.");
                if (stateId <= 0)
                    throw new Exception("You must select a State to register.");
                if (String.IsNullOrEmpty(postalCode))
                    throw new Exception("You must enter a Postal Code to register.");
                if (countryId <= 0)
                    throw new Exception("You must select a Country to register.");

                DataSet dataSet = DataAddress.Insert(line1, line2, city, stateId, postalCode, countryId, active, createdBy.Identity, DateTime.Now);
                return null;
            }
        }

    DataAddress class:

        public interface IAddress
        {
            DataSet GetByAddressId(int? AddressId);
            DataSet Update(int? AddressId, string Address1, string Address2, string City, int? StateId, string PostalCode, int? CountryId, bool? IsActive, Guid? ModifiedBy);
            DataSet Insert(string Address1, string Address2, string City, int? StateId, string PostalCode, int? CountryId, bool? IsActive, int? CreatedBy, DateTime? CreatedOn);
        }

        public class Address : IAddress
        {
            public DataSet GetByAddressId(int? AddressId)
            {
                Database database = DatabaseFactory.CreateDatabase();
                DbCommand dbCommand = database.GetStoredProcCommand("prAddress_GetByAddressId");
                DataSet dataSet;
                try
                {
                    database.AddInParameter(dbCommand, "AddressId", DbType.Int32, AddressId);
                    dataSet = database.ExecuteDataSet(dbCommand);
                }
                catch (SqlException sqlException)
                {
                    string callMessage = "prAddress_GetByAddressId @AddressId = " + AddressId;
                    throw new Exception(callMessage, sqlException);
                }
                return dataSet;
            }

            public DataSet Update(int? AddressId, string Address1, string Address2, string City, int? StateId, string PostalCode, int? CountryId, bool? IsActive, Guid? ModifiedBy)
            {
                Database database = DatabaseFactory.CreateDatabase();
                DbCommand dbCommand = database.GetStoredProcCommand("prAddress_Update");
                DataSet dataSet;
                try
                {
                    database.AddInParameter(dbCommand, "AddressId", DbType.Int32, AddressId);
                    database.AddInParameter(dbCommand, "Address1", DbType.AnsiString, Address1);
                    database.AddInParameter(dbCommand, "Address2", DbType.AnsiString, Address2);
                    database.AddInParameter(dbCommand, "City", DbType.AnsiString, City);
                    database.AddInParameter(dbCommand, "StateId", DbType.Int32, StateId);
                    database.AddInParameter(dbCommand, "PostalCode", DbType.AnsiString, PostalCode);
                    database.AddInParameter(dbCommand, "CountryId", DbType.Int32, CountryId);
                    database.AddInParameter(dbCommand, "IsActive", DbType.Boolean, IsActive);
                    database.AddInParameter(dbCommand, "ModifiedBy", DbType.Guid, ModifiedBy);
                    dataSet = database.ExecuteDataSet(dbCommand);
                }
                catch (SqlException sqlException)
                {
                    string callMessage = "prAddress_Update @AddressId = " + AddressId + ", @Address1 = " + Address1 + ", @Address2 = " + Address2 + ", @City = " + City + ", @StateId = " + StateId + ", @PostalCode = " + PostalCode + ", @CountryId = " + CountryId + ", @IsActive = " + IsActive + ", @ModifiedBy = " + ModifiedBy;
                    throw new Exception(callMessage, sqlException);
                }
                return dataSet;
            }

            public DataSet Insert(string Address1, string Address2, string City, int? StateId, string PostalCode, int? CountryId, bool? IsActive, int? CreatedBy, DateTime? CreatedOn)
            {
                Database database = DatabaseFactory.CreateDatabase();
                DbCommand dbCommand = database.GetStoredProcCommand("prAddress_Insert");
                DataSet dataSet;
                try
                {
                    database.AddInParameter(dbCommand, "Address1", DbType.AnsiString, Address1);
                    database.AddInParameter(dbCommand, "Address2", DbType.AnsiString, Address2);
                    database.AddInParameter(dbCommand, "City", DbType.AnsiString, City);
                    database.AddInParameter(dbCommand, "StateId", DbType.Int32, StateId);
                    database.AddInParameter(dbCommand, "PostalCode", DbType.AnsiString, PostalCode);
                    database.AddInParameter(dbCommand, "CountryId", DbType.Int32, CountryId);
                    database.AddInParameter(dbCommand, "IsActive", DbType.Boolean, IsActive);
                    database.AddInParameter(dbCommand, "CreatedBy", DbType.Int32, CreatedBy);
                    database.AddInParameter(dbCommand, "CreatedOn", DbType.DateTime, CreatedOn);
                    dataSet = database.ExecuteDataSet(dbCommand);
                }
                catch (SqlException sqlException)
                {
                    string callMessage = "prAddress_Insert @Address1 = " + Address1 + ", @Address2 = " + Address2 + ", @City = " + City + ", @StateId = " + StateId + ", @PostalCode = " + PostalCode + ", @CountryId = " + CountryId + ", @IsActive = " + IsActive + ", @CreatedBy = " + CreatedBy + ", @CreatedOn = " + CreatedOn;
                    throw new Exception(callMessage, sqlException);
                }
                return dataSet;
            }
        }


  • SQL SERVER – Weekend Project – Experimenting with ACID Transactions, SQL Compliant, Elastically Scalable Database

    - by pinaldave
    Database technology is a huge world. I always like to explore beyond what I know and share what I learn. Weekends are the best time for me to sit around downloading random software onto my machine, which I like to call a lab machine (it is a pretty old laptop, hardly lab quality), and experimenting with it. There are so many free betas available for download that it's hard to keep track, and even harder to find the time to play with very many of them. This blog is about one you shouldn't miss if you are interested in learning about various relational databases. NuoDB just released their Beta 7. I had already downloaded their Beta 6 and yesterday did the same for 7. My impression is that they are onto something very interesting. In fact, it might be something really promising in terms of database elasticity, scale and operational cost reduction. The folks at NuoDB say they are working on the world's first "emergent" database, which they tout as a brand new transactional database that is intended to dramatically change what's possible with OLTP. It is SQL compliant, guarantees ACID transactions, yet scales elastically on heterogeneous and decentralized cloud-based resources. An interesting notion for sure, and one that made me explore more. Based on what I've seen so far, they are solving the architectural challenge that exists between elastic, cloud-based compute infrastructures designed to scale out in response to workload requirements and the traditional relational database management system's architecture of central control. Here's my experience with the NuoDB Beta 6 so far. First, they pretty much threw away all the features you'd associate with existing RDBMS architectures except SQL and ACID transactions, which they were smart to keep. It looks like they have incorporated a number of the big ideas from various algorithms, systems and techniques to achieve maximum DB scalability. From a user's perspective, the NuoDB Beta software behaves like any other traditional SQL database and seems to offer all the benefits users have come to expect from standards-based SQL solutions. One of the interesting features is that you can run a transactional node and a storage node on a Windows laptop, as well as on other platforms. It's quite amazing to see a database elastically scale across machine boundaries. So, one of the basic NuoDB concepts is that as you need to scale out, you can easily use more inexpensive hardware when and where you need it. This is unlike what we have traditionally done to scale a database for an application: replace the hardware with something more powerful (faster CPU and disks). This is where I started to feel like NuoDB is on to something that has the potential to elastically scale on commodity hardware while reducing operational expense for a big OLTP database to a degree we've never seen before. NuoDB is able to fully leverage the cloud in an asynchronous and highly decentralized manner, while providing both SQL compliance and ACID transactions. Basically, what NuoDB is doing is so new that it is all hard to believe until you've experienced it in action. I will keep you up to date as I test the NuoDB Beta 7, but if you are developing a web-scale application or have an on-premise app you are thinking of moving to the cloud, testing this beta is worth your time. If you do try it, let me know what you think.
    Before I say anything more, I am going to run more experiments and tests on this product and compare it with other existing similar products. For me it was a weekend well spent learning something new. I encourage you to download the Beta 7 version and share your opinions here. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • Using Live Data in Database Development Work

    - by Phil Factor
    Guest Editorial for Simple-Talk Newsletter... in which Phil Factor reacts with some exasperation on coming across a report that a majority of companies are still using financial and personal data for both developing and testing database applications. If you routinely test your development work using real production data that contains personal or financial information, you are probably being irresponsible and, at worst, risking a heavy financial penalty for your company. Surprisingly, over 80% of financial companies still do this. Plenty of data breaches and fraud have happened from the use of real data for testing, and a data breach is a nightmare for any organisation that suffers one. The cost of each data breach averages out at around $7.2 million in the US (£1.9 million in the UK) in notification, escalation, credit monitoring, fines, litigation, legal costs, and lost business due to customer churn. 70% of data breaches originate from within the organisation. Real data can be exploited in a number of ways for malicious or criminal purposes. It isn't just the obvious use of items such as name and address, date of birth, social security number, and credit card and bank account numbers: data can be exploited in many subtle ways, so there are excellent reasons to ensure that a high priority is given to the detection and prevention of any data breaches. You'll never successfully guess all the ways that real data can be exploited maliciously, or the ease with which it can be accessed. It would be silly to argue that developers never need access to a copy of the database containing live data. Developers sometimes need to track a bug that can only be replicated on the data from the live database. However, it has to be done in a very restrictive harness. The law makes no distinction between development and production databases when a data breach occurs, so the data has to be held with all appropriate security measures in place. In Europe, the use of personal data for testing requires the explicit consent of the people whose data is being held. There are federal standards such as GLBA, PCI DSS and HIPAA, and most US states have privacy legislation. The task of ensuring compliance and tight security in such circumstances is an expensive and time-consuming overhead. The developer is likely to suffer investigation if a data breach occurs, even if the company manages to stay in business. Ironically, the use of copies of live data isn't usually the most effective way to develop or test your data. Data is usually time-specific and is rarely current by the time it is used for testing; existing data doesn't help much for new functionality, and every time the data is refreshed from production, any test data is likely to be overwritten. Also, it is not always going to test all the 'edge' conditions that are likely to flush out bugs. You still have the task of simulating the dynamics of actual usage of the database, and here you have no alternative to creating 'spoofed' data. Because of the complexities of relational data, it used to be that there was no realistic alternative to developing and testing with live data. However, this is no longer the case. Real data can be obfuscated, or it can be created entirely from scratch. The latter process used to be impractical, but now there are plenty of third-party tools to choose from. The process of obfuscation isn't risk free: the process must access the live data, and the success of the obfuscation has to be carefully monitored.
    Database data security isn't an exciting topic to you or me, but to a hacker it can be an all-consuming obsession, especially if there is financial or political gain involved. This is not the sort of adversary one would wish for, and it is far better to accept, and work with, the security restrictions that exist for using live data in database development work, especially when the tools exist to create large, realistic database test data that can be better for several aspects of testing.
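
    As a concrete illustration of the obfuscation route, here is a minimal T-SQL sketch of one common technique (the Person table and its columns are invented for the example): each person's name is replaced by a random recombination of first and last names already in the table, so the data keeps its statistical shape without identifying anyone.

        -- hypothetical Person(PersonId, FirstName, LastName) table;
        -- the correlated subqueries force a fresh random pick per row
        UPDATE p
        SET p.FirstName = f.FirstName,
            p.LastName  = l.LastName
        FROM Person AS p
        CROSS APPLY (SELECT TOP 1 FirstName FROM Person x
                     WHERE x.PersonId <> p.PersonId
                     ORDER BY CHECKSUM(NEWID())) AS f
        CROSS APPLY (SELECT TOP 1 LastName FROM Person y
                     WHERE y.PersonId <> p.PersonId
                     ORDER BY CHECKSUM(NEWID())) AS l;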


  • Normalisation and 'Anima notitia copia' (Soul of the Database)

    - by Phil Factor
    (A Guest Editorial for Simple-Talk) The other day, I was staring at the sys.syslanguages table in SQL Server with slightly raised eyebrows. I'd just been reading Chris Date's interesting book 'SQL and Relational Theory'. He'd made the point that you're not necessarily doing relational database operations by using a SQL Database product. The same general point was recently made by Dino Esposito about ASP.NET MVC: the use of ASP.NET MVC doesn't guarantee you a good application design, it merely makes it possible to test it. The way I'd describe the sentiment in both cases is 'you can hit someone over the head with a frying-pan but you can't call it cooking'. SQL enables you to create relational databases. However, even if it smells bad, it is no crime to do hideously un-relational things with a SQL database, just so long as it's necessary, you can tell the difference, and you're aware of the risks and implications. Naturally, I've never knowingly created a database that Codd would have frowned at, but around the edges are interfaces and data feeds I've written that have caused hissy fits amongst the normalisation fundamentalists.

    Part of the problem for those who agonise about such things is the misinterpretation of atomicity. An atomic value is one for which, in the strange virtual universe you are creating in your database, you don't have any interest in any of its component parts. If you aren't interested in the electrons, neutrinos, muons, or taus, then an atom is ..er.. atomic. In the same way, if you are passed a JSON string or XML, and required to store it in a database, then all you need to do is to ask yourself, in your role as Anima notitia copia (Soul of the database), 'have I any interest in the contents of this item of information?'. If the answer is 'No!', or 'nequaquam!', then it is an atomic value, however complex it may be. After all, you would never have the urge to store the pixels of images individually, under the misguided idea that these are the atomic values, would you? I would, of course, ask the 'Anima notitia copia' rather than the application developers, since there may be more than one application, and the application developers may be designing the application in the absence of full domain knowledge ('by the seat of the pants', as the technical term used to be). If, on the other hand, the answer is 'sure, and we want to index the XML column', then we may be in for some heavy XML-shredding sessions to get to store the 'atomic' values and ensure future harmony as the application develops.

    I went back to looking at the sys.syslanguages table. It has a months column with the months in a delimited list: January,February,March,April,May,June,July,August,September,October,November,December. This is an ordered list. Wicked? I seem to remember that this value, like shortmonths and days, is treated as a 'thing'. It is merely passed off to an external C++ routine in order to format a date in a particular language, and never accessed directly within the database. As far as the database is concerned, it is an atomic value. There is more to normalisation than meets the eye.
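
    For the curious, seeing the 'thing' for yourself is a one-line query against the same system table:

        -- each language row carries its ordered month names as a single delimited string
        SELECT alias, months, shortmonths, days FROM sys.syslanguages;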


  • Backup SQL Database Federation

    - by Herve Roggero
    One of the amazing features of Windows Azure SQL Database is the ability to create federations in order to scale your cloud databases. However, until now there were very few options available to back up federated databases. In this post I will show you how Enzo Cloud Backup can help you back up and restore your federated databases easily. You can restore federated databases in SQL Database, or even on SQL Server (as regular databases). Generally speaking, you will need to perform the following steps to back up and restore the federations of a SQL Database:

        1. Back up the federation root
        2. Back up the federation members
        3. Restore the federation root
        4. Restore the federation members

    These actions can be automated using the built-in scheduler of Enzo Cloud Backup, the command-line utilities, or the .NET Cloud Backup API provided, giving you complete control over how you want to perform your backup and restore operations.

    Backing up federations. Let's look at the tool to back up federations. You can explore your existing federations by using the Enzo Cloud Backup application as shown below. As you can see, the federation root and the various federations available are shown in separate tabs for convenience. You would first need to back up the federation root (unless you intend to restore the federation member on a local SQL Server database and don't need what's in the federation root). The steps are similar to those for backing up a federation member, so let's proceed to backing up a federation member. You can click on a specific federation member to view the database details in the tab that contains your federation member. You can see the size currently consumed and a summary of its content at the bottom of the screen. If you right-click on a specific range, you can choose to back up the federation member. This brings up a window with the details of the federation member already filled out for you, including the value of the member that is used to select the federation member. Notice that the list of federations includes "Federation Root", which is what you need to select to back up the federation root (you can also do that directly from the root database tab). Once you provide at least one backup destination, you can begin the backup operation. From this window, you can also schedule this operation as a job and perform the operation entirely in the cloud. You can also "filter" the connection, so that only the specific member value is backed up (this will back up all the global tables, and only the records for which the distribution value is the one specified). You can repeat this operation for every federation member in your federation.

    Restoring federations. Once backed up, you can restore your federations easily. Select the backup device using the tool, then select Restore. From the window that appears you can create a new root database. You can also view the backup properties, showing you exactly which federations will be created. Under the Federations tab, you can select how the federations will be created. I chose to recreate the federations and let the tool perform all the SPLIT operations necessary to recreate the same number of federation members. Other options include creating the first federation member only, or not creating the federation members at all. Once the root database has been restored and the federation members have been created, you can restore the federation members you previously backed up. The screen below shows you how to restore a backup of a federation member into a specific federation member (the details of the federation member are provided to make it easier to identify).

    Conclusion. This post gave you an overview of how to back up and restore federation roots and federation members. The backup operations can be set up once, then scheduled daily.
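
    The post stays at the GUI level, but for reference, the member-selection and SPLIT operations it mentions map to SQL Database federation T-SQL along these lines (the federation name and key values here are purely illustrative):

        -- connect to a single federation member, filtered to one distribution value
        USE FEDERATION CustomerFederation (cid = 50) WITH RESET, FILTERING = ON;

        -- recreate a member boundary, as in the SPLIT operations performed during a restore
        ALTER FEDERATION CustomerFederation SPLIT AT (cid = 100);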


  • Olympics data available for all on Windows Azure SQL Database and Power View

    - by jamiet
    Are you looking around for some decent test data for your BI demos? Well, if so, Microsoft have provided some data about all medals won at the Olympic Games (1900 to 2008) at OlympicsData workbook - Excel, SSIS, Azure sample; it provides analysis over athletes, countries, medal type, sport, discipline and various other dimensions. The data has been provided in an Excel workbook along with instructions on how to load the data into a Windows Azure SQL Database using SQL Server Integration Services (SSIS). Frankly though, the rigmarole of standing up your own Windows Azure SQL Database (ok, SQL Azure database) is both costly (SQL Azure isn't free) and time consuming (the provided instructions aren't exactly an idiot's guide, and getting SSIS to work properly with Excel isn't a barrel of laughs either). To ease the pain for all you BI folks out there who simply want to party on the data, I have loaded it all into the SQL Azure database that I use for hosting AdventureWorks on Azure. You can read more about AdventureWorks on Azure below; I'll summarise here by saying it is a SQL Azure database provided for the use of the SQL Server community, supported by voluntary donations. To view the data, the credentials you need are:

        Server:   mhknbn2kdz.database.windows.net
        Database: AdventureWorks2012
        User:     sqlfamily
        Password: sqlf@m1ly

    Type those into SSMS and away you go; the data is provided in four tables: [olympics].[Sport], [olympics].[Discipline], [olympics].[Event] and [olympics].[Medalist]. I figured this would be a good candidate for a Power View report, so I fired up Excel 2013 and built such a report to slice'n'dice through the data. The screenshots I took should give you a flavour of what is available: a view of all the available data; where do all the gymnastics medals go?; which countries do the top ten all-time medal winners come from? You get the idea. There is masses of information here, and if you have Excel 2013 handy, Power View provides a quick and easy way of surfing through it. To save you the bother of setting up the Power View report yourself, you can have the one that I took these screenshots from; it is available on my SkyDrive at OlympicsAnalysis.xlsx, so just hit the link and download to play to your heart's content. Party on, people!

    As I said above, the data is hosted on a SQL Azure database that I use for hosting "AdventureWorks on Azure", which I first announced in March 2013 at AdventureWorks2012 now available for all on SQL Azure. I'll repeat the pertinent parts of that blog post here: I am pleased to announce that as of today [AdventureWorks2012] now resides on SQL Azure and is available for anyone, absolutely anyone, to connect to and use for their own means. This database is free for you to use, but SQL Azure is of course not free, so before I give you the credentials please lend me your ears, er, eyes for a short while longer. AdventureWorks on Azure is being provided for the SQL Server community to use, and so I am hoping that that same community will rally around to support this effort by making a voluntary donation to support the upkeep which, going on current pricing, is going to be $119.88 per year. If you would like to contribute to keep AdventureWorks on Azure up and running for that full year, please donate via PayPal to [email protected]. Any amount, no matter how small, will help. If those 50+ people that retweeted me beforehand all contributed $2, that would just about be enough to keep this up for a year. If the community contributes more than we need, there are a number of additional things that could be done: host additional databases (Northwind, anyone??), host in more datacentres (this first one is in Western Europe), or make a charitable donation. That last one, a charitable donation, is something I would really like to do. The SQL community have proved before that they can make a significant contribution to charitable organisations through purchasing the SQL Server MVP Deep Dives book, and I harbour hopes that AdventureWorks on Azure can continue in that vein. So please, if you think AdventureWorks on Azure is something that is worth supporting, please make a contribution. I'd like to emphasize that last point: if my hosting this Olympics data is useful to you, please support this initiative by donating. Thanks in advance. @Jamiet
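
    If you just want a smoke test before building any reports, a trivial query against the tables named above will do (the column lists aren't documented here, so the sketch simply peeks at the data):

        -- connect to the server/database listed above, then peek at the medal data
        SELECT TOP 10 * FROM [olympics].[Medalist];
        SELECT COUNT(*) AS medal_rows FROM [olympics].[Medalist];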


  • Read a text file and transfer contents to mysql database

    - by Jack Brown
    I need a PHP script to read a .txt file. The contents of the text file look like this:

        data.txt
        145|Joe Blogs|17/03/1954
        986|Jim Smith|12/01/1976
        234|Paul Jones|19/07/1923
        098|James Smith|12/09/1998
        234|Carl Jones|01/01/1925

    These would then get stored in a database like this:

        DataID | Name       | DOB
        234    | Carl Jones | 01/01/1925

    I would be so grateful if someone could give me a script to achieve this.
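
    The question asks for PHP, but it is worth noting that if the MySQL client is available, the whole job can be done in SQL alone. A minimal sketch, assuming a target table called people and dates arriving as dd/mm/yyyy:

        -- hypothetical target table matching the sample rows above
        CREATE TABLE people (
            DataID INT,
            Name   VARCHAR(100),
            DOB    DATE
        );

        -- load the pipe-delimited file, converting the date on the way in
        LOAD DATA LOCAL INFILE 'data.txt'
        INTO TABLE people
        FIELDS TERMINATED BY '|'
        LINES TERMINATED BY '\n'
        (DataID, Name, @dob)
        SET DOB = STR_TO_DATE(@dob, '%d/%m/%Y');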


  • Problem in importing database in MySQL

    - by Krt_Malta
    I have a .sql file with some database backups inside. Now I want to restore them back to MySQL. How can I do this using the MySQL command line, please? I found this:

        mysql -u username -p -h localhost database_name < dumpfile.sql

    but I don't know what username should be, what database_name should be, or how I can point to a .sql file in another folder.
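
    A hedged sketch of the usual sequence, for what it's worth: username is whatever MySQL account you normally log in with (root in this example), database_name is a database that must already exist before the import (unless the dump creates it itself), and the .sql file in another folder is simply referenced by its full path.

        -- step 1, inside the mysql client: create the database the dump will be restored into
        CREATE DATABASE mydb;

        -- step 2, from the operating system shell rather than SQL, using an illustrative path:
        --   mysql -u root -p mydb < /home/krt/backups/dumpfile.sql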


  • ASP.NET Routing - load routes from database?

    - by ropstah
    Is it possible to load routes from the database with ASP.NET?

        For Each r As SomeRouteObject In RouteDataTable
            routes.MapRoute( _
                r.Name, _
                r.RouteUri, _
                r.RouteValues, _ '??
                r.Constraints _ '??
            )
        Next

    How should I store the routevalues / constraints? I understand that there are several 'default' routevalues like .Controller and .Action, however I also need entirely custom ones like .Id or .Page...
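
    One possible storage shape, sketched as SQL (every name here is an assumption, not something from the question): keep the defaults and constraints as serialized name/value pairs, then parse them into a RouteValueDictionary while iterating the table at application start.

        -- hypothetical table backing database-driven routes
        CREATE TABLE Routes (
            RouteId     INT IDENTITY(1,1) PRIMARY KEY,
            Name        NVARCHAR(64)  NOT NULL,
            RouteUri    NVARCHAR(256) NOT NULL,  -- e.g. 'products/{action}/{id}'
            RouteValues NVARCHAR(512) NULL,      -- e.g. 'controller=Products;action=Index;id=' (or JSON)
            Constraints NVARCHAR(512) NULL       -- e.g. 'id=\d+'
        );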


  • Denormalization database

    - by Pedro Magalhaes
    I was taking a look at the SSB (Star Schema Benchmark: http://www.percona.com/docs/wiki/_media/benchmark:ssb:starschemab.pdf) and I was wondering: is it possible to denormalize all the tables in the SSB? The database size would increase a lot, but potentially the performance would go up. Is that right? Is it possible? Thanks, and sorry for my poor English.
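
    It is possible in principle: denormalizing the SSB means pre-joining the fact table to its dimensions into one wide table, trading storage (and update cost) for read speed. A sketch of the idea, SQL dialect permitting, using table and column names as given in the SSB paper:

        -- one row per lineorder, with the dimension attributes folded in
        CREATE TABLE lineorder_flat AS
        SELECT lo.*,
               c.c_name, c.c_city, c.c_region,
               s.s_name, s.s_city, s.s_region,
               p.p_name, p.p_category, p.p_brand1,
               d.d_year, d.d_month
        FROM lineorder lo
        JOIN customer c ON c.c_custkey = lo.lo_custkey
        JOIN supplier s ON s.s_suppkey = lo.lo_suppkey
        JOIN part     p ON p.p_partkey = lo.lo_partkey
        JOIN dwdate   d ON d.d_datekey = lo.lo_orderdate;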


  • Covering Index versus Clustered Index (Database Index)

    - by Mestika
    Hi, I'm working on a database system and its indexes, but I'm having a really hard time seeing the clear difference between a covering index and a clustered index. I've googled my way around but haven't got a clear-cut answer on: What are the differences between the two types of indexes? When do I use a covering index, and when do I use a clustered index? I hope someone can explain it to me in an almost child-like answer :-) Sincerely, Mestika. By the way, I'm using IBM DB2 version 9.7.
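
    For orientation, a sketch of the distinction (the table and columns are made up): a clustering index influences the physical placement of the rows themselves, so a table can have only one; a covering index is any index that happens to contain every column a particular query touches, so the query can be answered from the index alone without reading the table.

        -- hypothetical table
        CREATE TABLE orders (
            order_id    INT NOT NULL,
            customer_id INT NOT NULL,
            order_date  DATE,
            total       DECIMAL(10,2)
        );

        -- DB2 clustering index: rows are kept physically close in customer_id order
        CREATE INDEX ix_orders_cluster ON orders (customer_id) CLUSTER;

        -- covering index for the query below: every referenced column is in the index
        CREATE INDEX ix_orders_cover ON orders (customer_id, order_date, total);

        SELECT order_date, total
        FROM orders
        WHERE customer_id = 42;   -- satisfiable from ix_orders_cover alone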


  • Designing a game database

    - by Ronald
    I'm trying to design a database to record game histories for a game I'm working on. I have 3 different tables: users, gamehistories, and gamescores. Columns for the tables:

        users:         uid, displayname, email
        gamehistories: gid, pubID, start (datetime), end (datetime)
        gamescores:    gid, uid, score

    I am trying to produce the following result set given a userID (uid): opponent's displayname, my score, opponent's score, duration. Any ideas? Is my design ok? How can I query these tables to get game histories for a given uid?
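
    A hedged sketch of one way to get that result set from the schema above, assuming a MySQL flavour and two players per game (the start and end columns are quoted defensively, since END is a reserved word in some dialects):

        -- self-join gamescores: one row for my score, one for the opponent's
        SELECT u.displayname                             AS opponent,
               mine.score                                AS my_score,
               theirs.score                              AS opponent_score,
               TIMESTAMPDIFF(MINUTE, h.`start`, h.`end`) AS duration_minutes
        FROM gamescores AS mine
        JOIN gamescores AS theirs ON theirs.gid = mine.gid AND theirs.uid <> mine.uid
        JOIN gamehistories AS h   ON h.gid = mine.gid
        JOIN users AS u           ON u.uid = theirs.uid
        WHERE mine.uid = 42;   -- the given userID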

