Search Results

Search found 38931 results on 1558 pages for 'database testing'.

Page 37/1558 | < Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >

  • "Cannot perform a differential backup for database "myDb", because a current database backup does no

    - by krimerd
    Hi there, I have what seems to be a pretty common problem when trying to take a differential backup. We have SQL Server 2008 Standard (64-bit) and we use LiteSpeed v5.0.2.0 to take our backups. We take full backups once a week and a differential on a daily basis. The problem is, every time I try to take a diff backup I get the following error: "VDI open failed due to requested abort. BACKUP DATABASE is terminating abnormally. Cannot perform a differential backup for database "myDb", because a current database backup does not exist. Perform a full database backup by reissuing BACKUP DATABASE, omitting the WITH DIFFERENTIAL option." I know 100% that I have a full backup, because I just double-checked. Only once was I able to take a diff backup, and that was when I took it immediately after a full backup. I have searched around and noticed that this is pretty common (although mostly with SQL 2005), and a solution that a lot of people suggest, which I haven't tried yet, is to disable the SQL Server VSS Writer service. The problem with this is, first, I think I might need this service since I am using third-party backup software, and second, I am not sure exactly what the service does and don't want to disable it just like that. Have any of you ever experienced this problem, and how did you go about fixing it? Thank you,
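    A quick way to sanity-check the differential base is to ask SQL Server itself what backups it has recorded. This is only a hedged diagnostic sketch ("myDb" is the database name from the error): a full backup taken WITH COPY_ONLY, or one that LiteSpeed/VDI never registered in msdb, cannot serve as the base for a differential.

        -- List the most recent backups SQL Server knows about for this database
        -- ('D' = full, 'I' = differential, 'L' = log)
        SELECT TOP (10)
               database_name,
               type,
               is_copy_only,        -- a COPY_ONLY full cannot act as a differential base
               backup_start_date,
               backup_finish_date
        FROM   msdb.dbo.backupset
        WHERE  database_name = 'myDb'
        ORDER BY backup_finish_date DESC;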

    Read the article

  • database replication for new user signup

    - by Jeff Storey
    I have a database that stores the users of my application. When a new user signs up, a record is inserted into the database for that user. I have a replicated version (slave) of this database (using MySQL for now). What I'm concerned about is this scenario: step 1: a user signs up and the user record is inserted into the database; step 2: the user then tries to log in, and the login process queries the database for that user. However, this query hits the slave database, and since the user record has not yet been replicated to the slave, it returns an error that the user does not exist. This is a pretty trivial example, but I can see how it applies to a lot of cases. Is there a strategy for configuring replicated databases to help prevent this situation?
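    As a hedged sketch of the usual workarounds (not specific to any one framework): either route reads for a just-signed-up user to the master for a short window ("read your own writes"), or fall back to the master when the slave lookup returns no row; checking replication lag on the slave also tells you whether it is safe to read from it.

        -- MySQL: run on the slave; Seconds_Behind_Master shows how far behind it is
        SHOW SLAVE STATUS\G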

    Read the article

  • Copy Database Wizard fails on creation of view into another not-yet-copied database

    - by user22037
    Update - I found that doing a manual detach/reattach, following the MSDN article "How to: Move a Database Using Detach and Attach (Transact-SQL)", got around this issue. I'll just create a script to detach and reattach, but do the file copies manually. Any info on how to overcome the problems with the wizard would still be helpful in the future. I am in the process of moving around 20 databases from our current server to a new one. When performing the copies, however, I have found that some databases cannot be copied if they have views into other databases that have not yet been copied to the target system. The generated log file says it "failed with the following error: "Invalid object name"", referring to the database used in the view. If I first copy just the database referenced in the view, and then in a separate step copy the database containing the view, it succeeds. However, some of the databases have views into each other, so I can't simply adjust the order in which the copies occur. Is there any way to ignore this error and just allow everything to copy?
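    For reference, a minimal sketch of the detach/attach route the MSDN article describes (the database name and file paths are placeholders). Because attaching does not re-validate view definitions, the cross-database views only need their referenced databases to exist by the time they are queried, which sidesteps the wizard's ordering problem.

        USE master;
        EXEC sp_detach_db @dbname = N'MyDatabase';
        -- ...copy MyDatabase.mdf and MyDatabase_log.ldf to the new server by hand...
        CREATE DATABASE MyDatabase
            ON (FILENAME = N'D:\SQLData\MyDatabase.mdf'),
               (FILENAME = N'D:\SQLData\MyDatabase_log.ldf')
            FOR ATTACH;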

    Read the article

  • I cannot connect to database from Drupal

    - by Patrick
    Hi, I've uploaded my Drupal website (and its database) to my new server. The database info is: host: localhost, user: user, pass: pass, database name = database_name. I've set the following line in the settings.php file: $db_url = 'mysqli://user:password@localhost/database_name'; but what I get is this: "If you are the maintainer of this site, please check your database settings in the settings.php file and ensure that your hosting provider's database server is running. For more help, see the handbook, or contact your hosting provider." I'm sure the database is running (it always runs and I can access it with phpMyAdmin), so I think the problem is not there. The database and website file uploads were also successful, so I don't know what to do to fix this issue. It is MySQL on an IIS server. Thanks
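    A common cause of this message is that the MySQL account Drupal uses either lacks privileges on the schema or is not allowed to connect from the web server's host (the PHP mysqli extension also has to be enabled on IIS). A hedged sketch of the checks, using the placeholder names from settings.php above:

        -- Run in phpMyAdmin or the mysql client as an administrative user
        SELECT user, host FROM mysql.user WHERE user = 'user';
        SHOW GRANTS FOR 'user'@'localhost';
        -- If the grant is missing or scoped to the wrong host:
        GRANT ALL PRIVILEGES ON database_name.* TO 'user'@'localhost' IDENTIFIED BY 'pass';
        FLUSH PRIVILEGES;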

    Read the article

  • Oracle 10g Failover Database - How to fail back?

    - by rrkwells
    I want to know how the failover database concept works after recovery. We have defined our application to connect to a backup database in case the production database fails. If this happens, then all the transactions will be happening on that backup database. Once the production db server is running again, then how do we make sure the changes made in the backup database will be reflected on the production database? We want to make sure that any changes made while failed over are not lost. We are using Oracle 10g.
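    The answer depends on how the backup database is kept in sync. If it is an Oracle 10g Data Guard physical standby, failing back is a switchover once the old production server has been reinstated as a standby and has caught up; if it is just an independent copy, the changes made during the outage have to be merged back manually (for example by export/import of the changed data). A heavily hedged sketch of the Data Guard case only:

        -- On the database currently acting as primary (the former backup):
        ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
        -- On the original production database, once it is running as a synchronized standby:
        ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
        ALTER DATABASE OPEN;
        -- After a real failover (rather than a switchover), the old primary normally has to be
        -- flashed back or re-created before it can rejoin as a standby at all.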

    Read the article

  • codingBat separateThousands using regex (and unit testing how-to)

    - by polygenelubricants
    This question is a combination of regex practice and unit testing practice.

    Regex part

    I authored this problem, separateThousands, for personal practice: Given a number as a string, introduce commas to separate thousands. The number may contain an optional minus sign, and an optional decimal part. There will not be any superfluous leading zeroes. Here's my solution:

    String separateThousands(String s) {
        return s.replaceAll(
            String.format("(?:%s)|(?:%s)",
                "(?<=\\G\\d{3})(?=\\d)",
                "(?<=^-?\\d{1,3})(?=(?:\\d{3})+(?!\\d))"
            ),
            ","
        );
    }

    The way it works is that it classifies two types of commas: the first, and the rest. In the above regex, the rest subpattern actually appears before the first. A match will always be zero-length, and is replaced by replaceAll with ",". The rest basically looks behind to see if there was a match followed by 3 digits, and looks ahead to see if there's a digit. It's some sort of chain reaction mechanism triggered by the previous match. The first basically looks behind for the ^ anchor, followed by an optional minus sign, and between 1 and 3 digits. The rest of the string from that point must match triplets of digits, followed by a non-digit (which could be either $ or \.). My question for this part is: Can this regex be simplified? Can it be optimized further? Ordering rest before first is deliberate, since first is only needed once. No capturing group.

    Unit testing part

    As I've mentioned, I'm the author of this problem, so I'm also the one responsible for coming up with testcases for it. Here they are (input, output):

    "1000", "1,000"
    "-12345", "-12,345"
    "-1234567890.1234567890", "-1,234,567,890.1234567890"
    "123.456", "123.456"
    ".666666", ".666666"
    "0", "0"
    "123456789", "123,456,789"
    "1234.5678", "1,234.5678"
    "-55555.55555", "-55,555.55555"
    "0.123456789", "0.123456789"
    "123456.789", "123,456.789"

    I haven't had much experience with industrial-strength unit testing, so I'm wondering if others can comment on whether this is good coverage, whether I've missed anything important, etc. (I can always add more tests if there's a scenario I've missed.)

    Read the article

  • Testing system where App-level and Request-level IoC containers exist

    - by Bobby
    My team is in the process of developing a system where we're using Unity as our IoC container, and to provide NHibernate ISessions (units of work) over each HTTP request, we're using Unity's ChildContainer feature to create a child container for each request and sticking the ISession in there. We arrived at this approach after trying others (including defining per-request lifetimes in the container, but there are issues there) and are now trying to decide on a unit testing strategy. Right now, the application-level container itself lives in the HttpApplication, and the request container lives in HttpContext.Current. Obviously, neither exists during testing. The pain increased when we decided to use service location from our domain layer to "lazily" resolve dependencies from the container. So now we have more components wanting to talk to the container. We are also using MSTest, which presents some concurrency dilemmas during testing as well. So we're wondering, what do the bright folks out there in the SO community do to tackle this predicament? How does one set up an application that, during "real" runtime, relies on HTTP objects to hold the containers, but during test has the flexibility to build up and tear down the containers consistently, and have the service location bits get to those precise containers? I hope the question is clear, thanks!

    Read the article

  • Convincing why testing is good

    - by FireAphis
    Hello, in my team of real-time embedded C/C++ developers, most people don't have any culture of testing their code beyond casual manual sanity checks. I personally strongly believe in the advantages of autonomous automatic tests, but when I try to convince them I get some recurring arguments like: We will spend more time on writing the tests than writing the code. It takes a lot of effort to maintain the tests. Our code is spaghetti; there is no way we can unit-test it. Our requirements are not sealed; we'll have to rewrite all the tests every time the requirements change. Now, I'd gladly hear any convincing tips and advice, but what I am really looking for are references to research, articles, books or serious surveys that show (preferably in numbers) how testing is worth the effort. Something like "We at IBM/Microsoft/Google, surveying 3475 active projects, found that putting 50% more development time into testing decreased the time spent on fixing bugs by 75%" or "after half a year, the time needed to write code with tests was only marginally longer than what it used to take without tests". Any ideas? P.S.: I'm adding the C++ tag too in case someone has specific experience with convincing this, usually elitist, type of developer :-)

    Read the article

  • Unit testing a method with many possible outcomes

    - by Cthulhu
    I've built a simple-ish method that constructs a URL out of approximately 5 parts: base address, port, path, 'action', and a set of parameters. Out of these, only the address part is mandatory; the other parts are all optional. A valid URL has to come out of the method for each permutation of input parameters, such as:

    address
    address port
    address port path
    address path
    address action
    address path action
    address port action
    address port path action
    address action params
    address path action params
    address port action params
    address port path action params

    and so forth. The basic approach for this is to write one unit test for each of these possible outcomes, each unit test passing the address and any of the optional parameters to the method, and testing the outcome against the expected output. However, I wonder, is there a Better (tm) way to handle a case like this? Are there any (good) unit test patterns for this? (rant) I only now realize that I learned to write unit tests a few years ago, but never really feel like I've advanced in the area, and that every unit test is a repeat of building parameters, expected outcome, filling mock objects, calling a method and testing the outcome against the expected outcome. I'm pretty sure this is the way to go in unit testing, but it gets kinda tedious, y'know. Advice on that matter is always welcome. (/rant) (note) Christmas weekend approaching, probably won't reply to suggestions until next week. (/note)

    Read the article

  • Event on SQL Server 2008 Disk IO and the new Complex Event Processing (StreamInsight) feature in R2

    - by tonyrogerson
    Allan Mitchell and myself are doing a double act. Allan is becoming one of the leading guys in the UK on StreamInsight and will give an introduction to this new, exciting technology; on top of that I'll be talking about SQL Server disk IO - well, "disk" might not be relevant anymore because I'll be talking about SSD and Fusion IO - basically I'll be talking about the underpinnings: making sure you understand and get it right, how to monitor, etc. If you've any specific problems or questions just ping me an email [email protected]. To register for the event see: http://sqlserverfaq.com/events/217/SQL-Server-and-Disk-IO-File-GroupsFiles-SSDs-FusionIO-InRAM-DBs-Fragmentation-Tony-Rogerson-Complex-Event-Processing-Allan-Mitchell.aspx

    18:15 SQL Server and Disk IO. Tony Rogerson, SQL Server MVP (Tony's Blog; Tony on Twitter). In this session Tony will talk about RAID levels, how SQL Server writes to and reads from disk, the effect SSD has, and other options for throughput enhancement like Fusion IO. He will look at the effect fragmentation has and how to minimise the impact, he will look at the file structure of a database and talk about what benefits multiple files and file groups bring. We will also touch on database mirroring and the effect that has on throughput, and how to get a feeling for the throughput you should expect.

    19:15 Break

    19:45 Complex Event Processing (CEP). Allan Mitchell, SQL Server MVP (http://sqlis.com/sqlis). StreamInsight is Microsoft's first foray into the world of Complex Event Processing (CEP) and Event Stream Processing (ESP). In this session I want to give an introduction to this technology. I will show how and why it is useful. I will get us used to some new terminology, but best of all I will show just how easy it is to start building your first CEP/ESP application.

    Read the article

  • 10gR2 Transportable Tablespaces Certified for EBS 11i

    - by Steven Chan
    Database migration across platforms of different "endian" (byte ordering) formats using the Cross Platform Transportable Tablespaces (XTTS) process is now certified for Oracle E-Business Suite Release 11i (11.5.10.2) with Oracle Database 10g Release 2. This process is sometimes also referred to as transportable tablespaces (TTS).

    What is the Cross-Platform Transportable Tablespace Feature?

    The Cross-Platform Transportable Tablespace feature allows users to move a user tablespace across Oracle databases. It's an efficient way to move bulk data between databases. If the source platform and the target platform are of different endianness, then an additional conversion step must be done on either the source or target platform to convert the tablespace being transported to the target format. If they are of the same endianness, then no conversion is necessary and tablespaces can be transported as if they were on the same platform. Moving data using transportable tablespaces can be much faster than performing either an export/import or unload/load of the same data. This is because transporting a tablespace only requires the copying of datafiles from source to the destination and then integrating the tablespace structural information. You can also use transportable tablespaces to move both table and index data, thereby avoiding the index rebuilds you would have to perform when importing or loading table data.
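    For orientation, a hedged sketch of the generic 10gR2 transportable tablespace flow for a single tablespace (names, directories and the target platform are placeholders; the EBS 11i certification adds its own documented steps on top of this):

        -- 1. On the source: make the tablespace read-only and export its metadata
        ALTER TABLESPACE ts1 READ ONLY;
        --    expdp system DIRECTORY=dp_dir DUMPFILE=ts1.dmp TRANSPORT_TABLESPACES=ts1
        -- 2. Only if endianness differs, convert the datafiles with RMAN, e.g. on the source:
        --    RMAN> CONVERT TABLESPACE ts1 TO PLATFORM 'Linux x86 64-bit' FORMAT '/stage/%U';
        -- 3. Copy the datafiles and dump file to the target, then plug the tablespace in:
        --    impdp system DIRECTORY=dp_dir DUMPFILE=ts1.dmp TRANSPORT_DATAFILES='/u01/oradata/ts1.dbf'
        ALTER TABLESPACE ts1 READ WRITE;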

    Read the article

  • How Mature is Your Database Change Management Process?

    - by Ben Rees
    How do you get your database schema changes live, on to your production system? As your team of developers and DBAs are working on the changes to the database to support your business-critical applications, how do these updates wend their way through from dev environments, possibly to QA, hopefully through pre-production and eventually to production in a controlled, reliable and repeatable way? In this article, I describe a model we use to try and understand the different stages that customers go through as their database change management processes mature, from the very basic and manual, through to advanced continuous delivery practices. I also provide a simple chart that will help you determine “How mature is our database change management process?” This process of managing changes to the database – which all of us who have worked in application/database development have had to deal with in one form or another – is sometimes known as Database Change Management (even if we’ve never used the term ourselves). And it’s a difficult process, often painfully so. Some developers take the approach of “I’ve no idea how my changes get live – I just write the stored procedures and add columns to the tables. It’s someone else’s problem to get this stuff live. I think we’ve got a DBA somewhere who deals with it – I don’t know, I’ve never met him/her”. I know I used to work that way. I worked that way because I assumed that making the updates to production was a trivial task – how hard can it be? Pause the application for half an hour in the middle of the night, copy over the changes to the app and the database, and switch it back on again? Voila! But somehow it never seemed that easy. And it certainly was never that easy for database changes. Why? Because you can’t just overwrite the old database with the new version. Databases have a state – more specifically 4Tb of critical data built up over the last 12 years of running your business, and if your quick hotfix happened to accidentally delete that 4Tb of data, then you’re “Looking for a new role” pretty quickly after the failed release. There are a lot of other reasons why a managed database change management process is important for organisations, besides job security, not least: Frequency of releases. Many business managers are feeling the pressure to get functionality out to their users sooner, quicker and more reliably. The new book (which I highly recommend) Lean Enterprise by Jez Humble, Barry O’Reilly and Joanne Molesky provides a great discussion on how many enterprises are having to move towards a leaner, more frequent release cycle to maintain their competitive advantage. It’s no longer acceptable to release once per year, leaving your customers waiting all year for changes they desperately need (and expect) Auditing and compliance. SOX, HIPAA and other compliance frameworks have demanded that companies implement proper processes for managing changes to their databases, whether managing schema changes, making sure that the data itself is being looked after correctly or other mechanisms that provide an audit trail of changes.
We’ve found, at Red Gate, that we have a very wide range of customers using every possible form of database change management imaginable. Everything from “Nothing – I just fix the schema on production from my laptop when things go wrong, and write it down in my notebook” to “A full Continuous Delivery process – any change made by a dev gets checked in and recorded, fully tested (including performance tests) before a (tested) release is made available to our Release Management system, ready for live deployment!”. And everything in between of course. Because of the vast number of customers using so many different approaches we found ourselves struggling to keep on top of what everyone was doing – struggling to identify patterns in customers’ behavior. This is useful for us, because we want to try and fit the products we have to different needs – different products are relevant to different customers and we waste everyone’s time (most notably, our customers’) if we’re suggesting products that aren’t appropriate for them. If someone visited a sports store, looking to embark on a new fitness program, and the store assistant suggested the latest $10,000 multi-gym, complete with multiple weights mechanisms, dumb-bells, pull-up bars and so on, then he’s likely to lose that customer. All he needed was a pair of running shoes! To solve this issue – in an attempt to simplify how we understand our customers and our offerings – we built a model. This is an attempt to classify our customers into some sort of model or “Customer Maturity Framework”, as we rather grandly term it, which somehow simplifies our understanding of what our customers are doing. The great statistician, George Box (amongst other things, the “Box” in the Box-Jenkins time series model) gave us the famous quote: “Essentially all models are wrong, but some are useful”. We’ve taken this quote to heart – we know it’s a gross over-simplification of the real world of how users work with complex legacy and new database developments. Almost nobody precisely fits into one of our categories. But we hope it’s useful and interesting. There are actually a number of similar models that exist for more general application delivery. We’ve found these from ThoughtWorks/Forrester, from InfoQ and others, and initially we tried just taking these models and replacing the word “application” with “database”. However, we hit a problem. From talking to our customers we know that users are much less far down the road of mature database change management than they are for application development. As a simple example, no application developer who wants to keep his/her job would develop an application for an organisation without source controlling that code. Sure, he/she might not be using an advanced Gitflow branching methodology but they’ll certainly be making sure their code gets managed in a repo somewhere with all the benefits of history, auditing and so on. But this certainly isn’t the case (yet) for the database – a very large segment of the people we speak to have no source control set up for their databases whatsoever, even at the most basic level (for example, keeping change scripts in a source control system somewhere). By the way, if this is you, Red Gate has a great whitepaper here, on the barriers people face getting a source control process implemented at their organisations.
This difference in maturity is the same as you move into areas such as continuous integration (common amongst app developers, relatively rare for database developers) and automated release management (growing amongst app developers, very rare for the database). So, when we created the model we started from scratch and biased the levels of maturity towards what we actually see amongst our customers. But, what are these stages? And what level are you? The table below describes our definitions for four levels of maturity – Baseline, Beginner, Intermediate and Advanced. As I say, this is a model – you won’t fit any of these categories perfectly, but hopefully one will ring true more than others. We’ve also created a PDF with a flow chart to help you find which of these groups most closely matches your team: Download the Database Delivery Maturity Framework PDF here.

Level D1 – Baseline
- Work directly on live databases
- Sometimes work directly in production
- Generate manual scripts for releases. Sometimes use a product like SQL Compare or similar to do this
- Any tests that we might have are run manually

Level D2 – Beginner
- Have some ad-hoc DB version control such as manually adding upgrade scripts to a version control system
- Attempt is made to keep production in sync with development environments
- There is some documentation and planning of manual deployments
- Some basic automated DB testing in process

Level D3 – Intermediate
- The database is fully version-controlled with a product like Red Gate SQL Source Control or SSDT
- Database environments are managed
- Production environment schema is reproducible from the source control system
- There are some automated tests
- Have looked at using migration scripts for difficult database refactoring cases

Level D4 – Advanced
- Using continuous integration for database changes
- Build, testing and deployment of DB changes carried out through a proper database release process
- Fully automated tests
- Production system is monitored for fast feedback to developers

Does this model reflect your team at all? Where are you on this journey? We’d be very interested in knowing how you get on. We’re doing a lot of work at the moment, at Red Gate, trying to help people progress through these stages. For example, if you’re currently not source controlling your database, then this is a natural next step. If you are already source controlling your database, what about the next stage – continuous integration and automated release management? To help understand these issues, there’s a summary of the Red Gate Database Delivery learning program on our site, alongside a Patterns and Practices library here on Simple-Talk and a Training Academy section on our documentation site to help you get up and running with the tools you need to progress. All feedback is welcome and it would be great to hear where you find yourself on this journey! This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.

    Read the article

  • Best approach to accessing multiple data source in a web application

    - by ced
    I have a base web application developed with .NET technologies (ASP.NET), used on our LAN by 30 users simultaneously. From this web application I've developed two verticalizations used by online users. In the future I expect hundreds of users simultaneously. Our company has different locations, and each site uses its own database. The web application needs to retrieve information from all existing databases. Currently there are 3 databases, but future expansion to new offices is not excluded. My question then is: what is the best strategy for a web application to retrieve information from different databases (which have the same schema), where the main objectives are data-access performance and high fault tolerance? Are there case studies in the literature that I can take as an example? Do you know some good documents to study? Do you have any tips to implement this task efficiently? Intuitively I would say that two possible strategies are: perform queries against the different sources in real time and aggregate the data on the fly; or create a repository that contains the union of the entities of interest and perform queries directly on the repository.
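    As a hedged sketch of the first strategy (query the sources in real time and aggregate on the fly), assuming the sites run SQL Server and are reachable as linked servers; every name below is a placeholder. The second strategy, a centrally refreshed repository, trades data freshness for better fault tolerance and faster reporting.

        EXEC sp_addlinkedserver @server = N'SITE2', @srvproduct = N'', @provider = N'SQLNCLI', @datasrc = N'site2-sql-server';
        EXEC sp_addlinkedserver @server = N'SITE3', @srvproduct = N'', @provider = N'SQLNCLI', @datasrc = N'site3-sql-server';
        GO
        -- One view that unions the same-schema table from every site
        CREATE VIEW dbo.AllCustomers AS
            SELECT 'HQ' AS SourceSite, CustomerID, Name FROM dbo.Customers
            UNION ALL
            SELECT 'SITE2', CustomerID, Name FROM SITE2.AppDb.dbo.Customers
            UNION ALL
            SELECT 'SITE3', CustomerID, Name FROM SITE3.AppDb.dbo.Customers;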

    Read the article

  • Oracle Exadata?????????? ??1

    - by takashi.hitomi
    Oracle Exadata news items from July 2009 to May 2010 (the original Japanese text is garbled by a character-encoding problem; only the dates and a few product names are recoverable): April 15, 2010 – item mentioning Exadata V2; April 13, 2010; April 6, 2010; March 1, 2010; February 2, 2010 – item mentioning Intel and the Sun Oracle Database Machine; January 26, 2010 – item mentioning Oracle Exadata; July 14, 2009 – item mentioning the HP Oracle Database Machine.

    Read the article

  • Conflict resolution for two-way sync

    - by K.Steff
    How do you manage two-way synchronization between a 'main' database server and many 'secondary' servers, in particular conflict resolution, assuming a connection is not always available? For example, I have a mobile app that uses Core Data as the 'database' on iOS, and I'd like to allow users to edit the contents without an Internet connection. At the same time, this information is available on a website the devices will connect to. What do I do if/when the data on the two DB servers is in conflict? (I refer to Core Data as a DB server, though I am aware it is something slightly different.) Are there any general strategies for dealing with this sort of issue? These are the options I can think of: 1. Always treat the client-side data as higher priority. 2. The same, but for the server side. 3. Try to resolve conflicts by marking each field's edit timestamp and taking the latest edit. Though I'm fairly certain the 3rd option will open room for some devastating data corruption. I'm aware that the CAP theorem concerns this, but I only want eventual consistency, so it doesn't rule it out completely, right? Related question: Best practice patterns for two-way data synchronization. The second answer to that question says it probably can't be done.
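    A hedged sketch of option 3 on the server side, with made-up table and column names: each row carries a last-modified timestamp, and an incoming change is applied only if it is newer than what the server already has. The same guard runs in the other direction when pushing server rows to the device; rows the predicate rejects are the conflicts to log or surface.

        UPDATE Items
        SET    Title      = @incoming_title,
               Body       = @incoming_body,
               ModifiedAt = @incoming_modified_at
        WHERE  ItemId = @incoming_item_id
          AND  ModifiedAt < @incoming_modified_at;  -- "latest edit wins"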

    Read the article

  • Storing revisions of a document

    - by dev.e.loper
    This is a follow-up question to my original question. I'm thinking of going with generating diffs and storing those diffs in a 'History' table in the database. I'm using the diff-match-patch library to generate what is called a 'patch'. On every save, I compare the previous and new versions and generate this patch. The patches can be used to regenerate the document at a specific point in time. My dilemma is how to store this data. Should I: (a) insert a new database record for every patch, or (b) store the patches in a JavaScript array and store that array in the History table, so there is only one History record per document containing an array of all its patches? Concerns: with (a), too many database records are generated, which will be slow and CPU-intensive to query; with (b), there is only one record, and if that record is somehow corrupted or deleted, the entire revision history is gone. I'm looking for suggestions and concerns with either approach.
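    A hedged sketch of option (a), with illustrative names only: one History row per saved patch, plus an occasional full snapshot so that rebuilding an old revision never has to replay the entire chain (which also limits the damage if a single patch row is lost or corrupted).

        CREATE TABLE DocumentHistory (
            HistoryId   INT IDENTITY(1,1) PRIMARY KEY,
            DocumentId  INT           NOT NULL,
            RevisionNo  INT           NOT NULL,
            PatchText   NVARCHAR(MAX) NOT NULL,            -- serialized diff-match-patch patch
            IsSnapshot  BIT           NOT NULL DEFAULT 0,  -- full document text every N revisions
            CreatedAt   DATETIME      NOT NULL DEFAULT GETDATE(),
            UNIQUE (DocumentId, RevisionNo)
        );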

    Read the article

  • Is application-specific data required for good unit testing?

    - by stinkycheeseman
    I am writing unit tests for a fairly simple function that depends on a fairly complicated set of data. Essentially, the object I am manipulating represents a graph, and this function determines whether to chart a line, bar, or pie chart based on the data that came back from the server. This is a simplified version, using jQuery:

    setDefaultChartType: function (graphObject) {
        var prop1 = graphObject.properties.key;
        var numCols = 0;
        $.each(graphObject.columns, function (colIndex, column) {
            numCols++;
        });
        if ( numCols > 6 || ( prop1 > 1 && graphObject.data.length == 1) ) {
            graphObject.setChartType("line");
        } else if ( numCols <= 6 && prop1 == 1 ) {
            graphObject.setChartType("bar");
        } else if ( numCols <= 6 && prop1 > 1 ) {
            graphObject.setChartType("pie");
        }
    }

    My question is, should I use mock data that is procured from the actual database? Or can I just fabricate data that fits the different cases? I'm afraid that fabricating data will not expose bugs arising from changes in the database, but on the other hand, it would require a lot more effort to keep the test data up to date, and I'm not sure that effort is necessary.

    Read the article

  • Big Data – Buzz Words: Importance of Relational Database in Big Data World – Day 9 of 21

    - by Pinal Dave
    In yesterday's blog post we learned what HDFS is. In this article we will take a quick look at the importance of the relational database in the Big Data world.

    A Big Question?

    Here are a few questions I have often received since the beginning of the Big Data series: Does the relational database have no place in the Big Data story? Is the relational database no longer relevant as Big Data evolves? Is the relational database not capable of handling Big Data? Is it true that one no longer has to learn about relational data if Big Data is the final destination? Well, every single time I hear that a person wants to learn about Big Data and is no longer interested in learning about relational databases, I find it a bit far-fetched. I am not here to give the ambiguous answer of "It Depends". I am personally very clear that anyone who aspires to become a Big Data scientist or Big Data expert should learn about relational databases.

    NoSQL Movement

    The reason for the NoSQL movement in recent times was two important advantages of NoSQL databases: performance and a flexible schema. In my personal experience I have found both of the above advantages when using a NoSQL database. There are instances where I found the relational database too restrictive, because my data was unstructured or used a datatype my relational database did not support, and there are cases where I have found a NoSQL solution performing much better than relational databases. I must say that I am a big fan of NoSQL solutions in recent times, but I have also seen occasions and situations where the relational database is still the perfect fit, even though the database is growing steadily and has all the symptoms of Big Data.

    Situations Where the Relational Database Outperforms

    Ad-hoc reporting is one of the most common scenarios where NoSQL does not have an optimal solution. For example, reporting queries often need to aggregate on columns which are not indexed and which are computed while the report is running; in this kind of scenario NoSQL databases (document stores, distributed key-value stores) often do not perform well. In the case of ad-hoc reporting I have often found it much easier to work with relational databases.

    SQL is the most popular computer language of all time. I have been using it for over 10 years and I feel that I will be using it for a long time in the future. There is plenty of tooling, there are many connectors, and there is wide awareness of the SQL language in the industry. Pretty much every programming language has drivers written for SQL, and most developers learned this language during their school/college time. In many cases, writing a query in SQL is much easier than writing queries in the languages NoSQL stores support. I believe this is the current situation, but in the future this situation can reverse if NoSQL query languages become equally popular.

    ACID (Atomicity, Consistency, Isolation, Durability) – Not all NoSQL solutions offer an ACID-compliant language. There are always situations (for example banking transactions, eCommerce shopping carts, etc.) where, without ACID, operations can be invalid and database integrity can be at risk. Even though the data volume may indeed qualify as Big Data, there are always operations in the application which absolutely need a mature, ACID-compliant language.
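    A tiny illustration of the ACID point, with made-up table names: both legs of a transfer commit together or not at all, which a non-ACID store has to re-implement in application code.

        BEGIN TRANSACTION;
            UPDATE Accounts SET Balance = Balance - 100 WHERE AccountId = 1;
            UPDATE Accounts SET Balance = Balance + 100 WHERE AccountId = 2;
        COMMIT TRANSACTION;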
The Mixed Bag

I have often heard the argument that all the big social media sites nowadays have moved away from relational databases. Actually, this is not entirely true. While researching Big Data and relational databases, I have found that many of the popular social media sites use Big Data solutions along with a relational database. Many use relational databases to deliver results to the end user at run time, and many still use a relational database as their major backbone. Here are a few examples: Facebook uses MySQL to display the timeline (Reference Link); Twitter uses MySQL (Reference Link); Tumblr uses sharded MySQL (Reference Link); Wikipedia uses MySQL for data storage (Reference Link). There are many more prominent organizations running large-scale applications that use a relational database along with various Big Data frameworks to satisfy their various business needs.

Summary

I believe that the RDBMS is like vanilla ice cream. Everybody loves it and everybody has it. NoSQL and other solutions are like chocolate ice cream or custom ice cream: there is a huge base that loves them and wants them, but not every ice cream maker can make them just right for everyone's taste. No matter how fancy an ice cream store is, there is always plain vanilla ice cream available there. In the same way, there are always cases and situations in the Big Data story where the traditional relational database is part of the whole story. In real-world scenarios there will always be cases where the relational database's concepts and ideology are needed. It is extremely important to accept the relational database as one of the key components of Big Data instead of treating it as a substandard technology.

Ray of Hope – NewSQL

In this module we discussed that there are places where we need ACID compliance from our Big Data application and NoSQL will not support that out of the box. A new term has been coined for applications/tools which support most of the properties of the traditional RDBMS and also support Big Data infrastructure: NewSQL.

Tomorrow

In tomorrow's blog post we will discuss NewSQL.

Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • SQL University: Database testing and refactoring tools and examples

    - by Mladen Prajdic
    This is a post for a great idea called SQL University started by Jorge Segarra, also famously known as SqlChicken on Twitter. It's a collection of blog posts on different database-related topics contributed by several smart people all over the world. So this week is mine, and we'll be talking about database testing and refactoring. In 3 posts we'll cover: SQLU part 1 - What and why of database testing; SQLU part 2 - What and why of database refactoring; SQLU part 3 - Database testing and refactoring tools and examples. This is the third and last part of the series, and in it we'll take a look at tools we can test and refactor with, plus an example of both.

    Tools of the trade

    First, a few thoughts about how to go about testing a database. I'm firmly against any testing tools that go into the database itself or need an extra database. Unit tests for the database and for the applications using the database should all be in one place, using the same technology. By using database-specific frameworks we fragment our tests into many places and increase test system complexity. Let's take a look at some testing tools.

    1. NUnit, xUnit, MbUnit. All three are .NET testing frameworks meant to unit test .NET applications. But we can test databases with them just fine. I use NUnit because I've always used it for work and personal projects. One day this might change. So the thing to remember is to be flexible if something better comes along. All three are quite similar and you should be able to switch between them without much problem.

    2. TSQLUnit. As much as this framework is helpful for the non-C#-savvy folks, I don't like it, for the reason I stated above: it lives in the database and thus fragments the testing infrastructure. Also, it appears that it's not being actively developed anymore.

    3. DbFit. I haven't had the pleasure of trying this tool just yet, but it's on my to-do list. From what I've read and heard, Gojko Adzic (@gojkoadzic on Twitter) has done a remarkable job with it.

    4. Redgate SQL Refactor and Apex SQL Refactor. Neither of these refactoring tools is free; however, if you have hardcore refactoring planned they are worth looking into. I've only used Red Gate's Refactor and was quite impressed with it.

    5. Reverting the database state. I've talked before about ways to revert a database to its pre-test state after unit testing. This still holds and I haven't changed my mind. Also make sure to read the comments, as they are quite informative. I especially like the idea of setting up and tearing down the schema for each test group with NHibernate.

    Testing and refactoring example

    We'll take a look at a simple schema and data test for a view, and at refactoring the SELECT * in that view. We'll use a single table, PhoneNumbers, with ID and Phone columns. Then we'll refactor the Phone column into 3 columns: Prefix, Number and Suffix. Lastly we'll remove the original Phone column. Then we'll check how the view behaves with tests in NUnit. The comments in the code explain the problem, so be sure to read them. I'm assuming you know NUnit and C#.
T-SQL code:

    USE tempdb
    GO
    CREATE TABLE PhoneNumbers(
        ID INT IDENTITY(1,1),
        Phone VARCHAR(20)
    )
    GO
    INSERT INTO PhoneNumbers(Phone)
    SELECT '111 222333 444' UNION ALL
    SELECT '555 666777 888'
    GO
    -- notice we don't have WITH SCHEMABINDING
    CREATE VIEW vPhoneNumbers
    AS
        SELECT *
        FROM PhoneNumbers
    GO
    -- Let's take a look at what the view returns
    -- If we add new columns and rows both tests will fail
    SELECT *
    FROM vPhoneNumbers
    GO
    -- DoesViewReturnCorrectColumns test will SUCCEED
    -- DoesViewReturnCorrectData test will SUCCEED

    -- refactor to split Phone column into 3 parts
    ALTER TABLE PhoneNumbers ADD Prefix VARCHAR(3)
    ALTER TABLE PhoneNumbers ADD Number VARCHAR(6)
    ALTER TABLE PhoneNumbers ADD Suffix VARCHAR(3)
    GO
    -- update the new columns
    UPDATE PhoneNumbers
    SET Prefix = LEFT(Phone, 3),
        Number = SUBSTRING(Phone, 5, 6),
        Suffix = RIGHT(Phone, 3)
    GO
    -- remove the old column
    ALTER TABLE PhoneNumbers DROP COLUMN Phone
    GO
    -- This returns unexpected results!
    -- it returns 2 columns ID and Phone even though
    -- we don't have a Phone column anymore.
    -- Notice that the data is from the Prefix column
    -- This is a danger of SELECT *
    SELECT *
    FROM vPhoneNumbers
    -- DoesViewReturnCorrectColumns test will SUCCEED
    -- DoesViewReturnCorrectData test will FAIL

    -- for a fix we have to call sp_refreshview
    -- to refresh the view definition
    EXEC sp_refreshview 'vPhoneNumbers'
    -- after the refresh the view returns 4 columns
    -- this breaks the input/output behavior of the database
    -- which refactoring MUST NOT do
    SELECT *
    FROM vPhoneNumbers
    -- DoesViewReturnCorrectColumns test will FAIL
    -- DoesViewReturnCorrectData test will FAIL

    -- to fix the input/output behavior change problem
    -- we have to concat the 3 columns into one named Phone
    ALTER VIEW vPhoneNumbers
    AS
    SELECT ID,
           Prefix + ' ' + Number + ' ' + Suffix AS Phone
    FROM PhoneNumbers
    GO
    -- now it works as expected
    SELECT *
    FROM vPhoneNumbers
    -- DoesViewReturnCorrectColumns test will SUCCEED
    -- DoesViewReturnCorrectData test will SUCCEED

    -- clean up
    DROP VIEW vPhoneNumbers
    DROP TABLE PhoneNumbers

C# test code:

    [Test]
    public void DoesViewReturnCorrectColumns()
    {
        // conn is a valid SqlConnection to the server's tempdb
        // note the SET FMTONLY ON with which we return only schema and no data
        using (SqlCommand cmd = new SqlCommand("SET FMTONLY ON; SELECT * FROM vPhoneNumbers", conn))
        {
            DataTable dt = new DataTable();
            dt.Load(cmd.ExecuteReader(CommandBehavior.CloseConnection));
            // test returned schema: number of columns, column names and data types
            Assert.AreEqual(dt.Columns.Count, 2);
            Assert.AreEqual(dt.Columns[0].Caption, "ID");
            Assert.AreEqual(dt.Columns[0].DataType, typeof(int));
            Assert.AreEqual(dt.Columns[1].Caption, "Phone");
            Assert.AreEqual(dt.Columns[1].DataType, typeof(string));
        }
    }

    [Test]
    public void DoesViewReturnCorrectData()
    {
        // conn is a valid SqlConnection to the server's tempdb
        using (SqlCommand cmd = new SqlCommand("SELECT * FROM vPhoneNumbers", conn))
        {
            DataTable dt = new DataTable();
            dt.Load(cmd.ExecuteReader(CommandBehavior.CloseConnection));
            // test returned data: number of rows and their values
            Assert.AreEqual(dt.Rows.Count, 2);
            Assert.AreEqual(dt.Rows[0]["ID"], 1);
            Assert.AreEqual(dt.Rows[0]["Phone"], "111 222333 444");
            Assert.AreEqual(dt.Rows[1]["ID"], 2);
            Assert.AreEqual(dt.Rows[1]["Phone"], "555 666777 888");
        }
    }

With this simple example we've seen how a very simple schema can cause a lot of problems in the whole application/database system if it doesn't have tests. Imagine what would happen if some outside process depended on that view.
It would get wrong data and propagate it silently throughout the system. And that is not good. So have tests at least for the crucial parts of your systems. And with that we conclude the Database Testing and Refactoring week at SQL University. Hope you learned something new and enjoy the learning weeks to come. Have fun!

    Read the article

  • Are there any tools for testing drag & drop Windows desktop applications?

    - by Andrew
    I need to develop a Windows desktop application (Win32 API) which will use drag & drop extensively in many formats, including my own. I need to test it, for example, with CF_TEXT dragging, CF_RTF, CF_DIB, CF_METAFILEPICT, and many others. The tool needs to have the following features: display the content of a DataObject dragged into it with all available format viewers; allow preparation of a few samples of different clipboard formats together in a single DataObject, ready for dragging into my app; allow including my own format names in the formats list of the testing tool.

    Read the article

  • Google I/O 2010 - Testing techniques for Google App Engine

    App Engine 201. Max Ross. We typically write tests assuming that our development stack closely resembles our production stack. What if our target environment only lives in the cloud? We will highlight the key differences between typical testing techniques and Google App Engine testing techniques. We will also present concrete strategies for testing against local and cloud-based implementations of App Engine services. Finally, we will explain how to use App Engine as a highly parallel test harness that runs existing test suites without modification. For all I/O 2010 sessions, please go to code.google.com/events/io/2010/sessions.html

    Read the article

  • Has Microsoft stopped offering the free Internet Explorer Application Compatibility VPC Image for IE 6 testing?

    - by Paul D. Waite
    For some time now, Microsoft has made available free, stripped-down, time-limited Virtual PC images for testing web apps in older versions of IE. The most recent version is here: http://www.microsoft.com/download/en/details.aspx?id=11575 But the XP VPC image has now expired (14th Aug 2011), meaning one can no longer test IE 6 using this method. Have Microsoft made updated XP VPC images available? If not, have they commented on the situation? Do they provide any alternative method to test web apps in IE 6? Update: As noted by @PleaseStand, as of 16th Aug 2011, Microsoft has made updated images available that expire on 17th November 2011.

    Read the article

  • What is a good solution for UA testing multiple projects simultaneously?

    - by Eric Belair
    My client often has several projects/tasks going at once that sometimes need to be tested simultaneously on one website. They are often separate applications on the website, but sometimes share UDFs, etc. We currently have 3 public-facing environment websites, i.e. dev.website.com, test.website.com, and www.website.com. As the programmer, I'm trying to find a good solution to allow for UA testing of multiple projects/tasks at once. Currently, I find myself switching between code branches (using Subversion). What are some of my options?

    Read the article

  • SQL Server 2005 database lost, how to recover all records. MDF/LDF size is the same as it should be

    - by Shantanu Gupta
    A few months back, I installed SQL Server 2005 on one of my client's machines. I gave him a backup option to take backups regularly, but he never took any backup. Today he called me saying, "I am not able to see any of my records." I visited my client's system and saw that none of the records were present in the tables. There was not even a single row in any of the tables. Then I checked whether he had any backup file, which turned out to be absent. I asked him what the possible cause might be; he said it might be due to a virus. After this I checked the size of the MDF and LDF files and found them to be about what they should be. When I created his server, the MDF/LDF files held about 2 MB of data; now they are 83 MB and 193 MB respectively. This suggests the data is still present in the files but is not being displayed. What could be the possible cause, and how can I restore all the data back to my tables?
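    Before attempting any recovery it is worth confirming that the rows really are gone, rather than the application pointing at a different database or schema; a hedged diagnostic sketch (SQL Server 2005):

        -- Approximate row counts for every user table in the suspect database
        SELECT  OBJECT_NAME(p.object_id) AS TableName,
                SUM(p.rows)              AS ApproxRows
        FROM    sys.partitions AS p
        WHERE   p.index_id IN (0, 1)     -- heap or clustered index only
          AND   OBJECTPROPERTY(p.object_id, 'IsUserTable') = 1
        GROUP BY p.object_id
        ORDER BY TableName;

    Note that MDF/LDF file size alone does not prove the data survives, because SQL Server does not shrink files when rows are deleted; and if the rows were deleted and no full backup exists, a point-in-time restore is not possible, so the remaining options are largely limited to transaction-log reading tools.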

    Read the article
