Search Results

Search found 4704 results on 189 pages for 'refactoring databases'.

Page 25 of 189

  • Synchronizing MySQL databases

    - by Wasim
    Hi all, I have a maintenance problem synchronizing my MySQL databases. These are the databases I have: my development DB, where I make my current development changes; a staging DB, to which I need to apply all the changes made in development before use (currently I maintain migration scripts for structure and data); and a production DB, which has to end up exactly the same as staging. My problem is syncing the structure, and some of the data. This is really hard work to maintain. Are there any techniques or tools for doing this with MySQL? What is replication, is it a good fit for my situation, and how would I use it? Thanks in advance ...
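
    For the structure side, one scriptable option (as opposed to replication, which keeps a standby copy in sync automatically rather than pushing ad-hoc changes) is to replay a schema-only dump from development onto staging. A rough Python sketch of the idea follows; the hosts, credentials and database name are placeholders, not a tested setup:

        import subprocess

        # Placeholder connection arguments for the two environments.
        DEV = ['-h', 'dev-host', '-u', 'dev_user', '-pdev_pass', 'app_db']
        STAGING = ['-h', 'staging-host', '-u', 'staging_user', '-pstaging_pass', 'app_db']

        # --no-data dumps only the table structure; drop the flag to carry rows as well.
        dump = subprocess.run(['mysqldump', '--no-data', '--routines'] + DEV,
                              check=True, capture_output=True, text=True)

        # Replay the dump against the staging server.
        subprocess.run(['mysql'] + STAGING, input=dump.stdout, check=True, text=True)

    Replication itself is a different tool: a master server streams its changes to read-only replicas, which suits keeping a copy current rather than promoting development changes upstream.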

    Read the article

  • Fluent NHibernate + multiple databases

    - by Pablote
    My project needs to handle three databases, which means three session factories. The thing is, if I do something like this with Fluent NHibernate: .Mappings(m => m.FluentMappings.AddFromAssembly(Assembly.GetExecutingAssembly())) the factories would pick up all the mappings, even the ones that correspond to another database. I've seen that when using automapping you can do something like this and filter by namespace: .Mappings(m => m.AutoMappings.Add(AutoMap.AssemblyOf().Where(t => t.Namespace == "Storefront.Entities"))) I haven't found anything like this for fluent mappings; is it possible? The only solutions I can think of are: either create separate assemblies for each DB's mapping classes, or explicitly add each of the entities to the factory configuration. I would prefer to avoid both, if possible. Thanks.

    Read the article

  • SubSonic 3.0 two different Databases namespace Issue

    - by Roswell
    My question is simple: I have two databases with the same schema, one a live database, say DLive, and the other a testbed, say DTestBed. However, I want to use the same database namespace for both databases. How can I achieve that without changing the namespace all over my code? Sometimes you need to do builds for live and testbed in the same day! It's really hard to change a big project's namespaces every time you build. How can I just change the web.config connection string and be done with it? Thanks,

    Read the article

  • Embedded Java Databases for Large Data Sets

    - by ExAmerican
    I would like to port a PHP/MySQL-based client/server application to be a standalone desktop application written in Java. The database has grown to be fairly large, with several tables with hundreds of thousands of rows. I expect these could grow to over a million entries for certain tables. What embedded database would best handle this? HSQLDB and SQLite seem to be the obvious choices, though I'm guessing there are others out there as well. My main priorities are the ability to perform queries on large amounts of data efficiently (this thread seems to confirm SQLite can handle this) and the ease with which I can import old data from MySQL (I remember HSQLDB being kind of a pain for that). Note: I am aware that similar questions comparing embedded databases have been posted before (for example here and here), but as my priorities differ somewhat from most applications, considering the large data migration, I thought it justified a new question.
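
    On the import side, the one-off copy from MySQL into the embedded store does not have to be written in Java. A rough Python sketch of the idea, targeting SQLite and assuming the PyMySQL driver; the connection details, table and column names are placeholders rather than the real schema:

        import sqlite3
        import pymysql  # assumes the PyMySQL driver is installed

        src = pymysql.connect(host='localhost', user='app', password='secret', database='legacy')
        dst = sqlite3.connect('desktop.db')

        # Placeholder target schema; mirror the real MySQL table definitions here.
        dst.execute("CREATE TABLE IF NOT EXISTS entries (id INTEGER PRIMARY KEY, label TEXT)")

        with src.cursor() as cur:
            cur.execute("SELECT id, label FROM entries")
            while True:
                rows = cur.fetchmany(10000)   # stream in batches, since the tables are large
                if not rows:
                    break
                dst.executemany("INSERT INTO entries (id, label) VALUES (?, ?)", rows)

        dst.commit()
        dst.close()
        src.close()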

    Read the article

  • Dynamically changing databases in SQL Server 2000

    - by spuppett
    At work we have a number of databases that we need to run the same operations on. I would like to write one SP that would loop over the operations and set the database at the beginning of the loop (example below). I've tried sp_executesql('USE ' + @db_id), but that only sets the DB for the scope of that stored procedure. I don't really want to loop with hard-coded database names, because we need to do similar things in many different places and it's tough to remember where things need to change if we add another DB. Any thoughts? Example:

        DECLARE zdb_loop CURSOR FAST_FORWARD FOR
            SELECT DISTINCT db_id FROM DBS ORDER BY db_id
        OPEN zdb_loop
        FETCH NEXT FROM zdb_loop INTO @db_id
        WHILE @@FETCH_STATUS = 0
        BEGIN
            USE @db_id
            -- Do stuff against 3 or 4 different DBs
            FETCH NEXT FROM zdb_loop INTO @db_id
        END
        CLOSE zdb_loop
        DEALLOCATE zdb_loop

    Read the article

  • WebSQL Databases between two different pages?

    - by Srikanth Rayabhagi
    Is there any particular way in which we can access the Web SQL database of one page from another page? To make it clearer: suppose a.com creates the database DB and stores some info. Now b.com comes along and wants to access the same database DB. Is there a way, or are there any alternatives for doing this? I tried to implement it in HTML5 and JavaScript, but the databases, as well as localStorage, are confined to a particular page; I want this to work across pages.

    Read the article

  • .NET project: unified wrapper for object databases.

    - by Steve
    I am considering doing a project which would provide a unified API and tools (import/export, etc.) for object databases (e.g. Caché, Objectivity) for .NET. It would provide: schema generation from CLR classes, generation of C# classes from a given OODBMS schema, an API for deleting, creating and updating objects, a LINQ provider, an API for calling an object's methods on the DB server, an API for the SQL-like support that some OODBMSs offer, and providers for Caché and Objectivity in the first phase. Does any project which implements any of the above exist? Can this be achieved with NHibernate dialects? Or are OODBMSs so different from RDBMSs that it is worth building a separate framework for them?

    Read the article

  • Oracle: Difference in execution plans between databases

    - by Will
    Hello, I am comparing queries between my development and production databases. They are both Oracle 9i, but almost every single query has a completely different execution plan depending on the database. All tables/indexes are the same, but the dev database has about 1/10th the rows for each table. On production, the execution plan picked for most queries is different from development, and the cost is sometimes 1000x higher. Queries on production also seem not to be using the correct indexes in some cases (full table access). I have run dbms_utility.analyze_schema on both databases recently as well, in the hope that the CBO would figure something out. Is there some other underlying Oracle configuration that could be causing this? I am mostly a developer, so this kind of DBA analysis is fairly confusing at first...

    Read the article

  • Data in two databases, eager spool resulting in query

    - by Valkyrie
    I have two databases in SQL2k5: one that holds a large amount of static data (SQL Database 1) (never updated but frequently inserted into) and one that holds relational data (SQL Database 2) related to the static data. They're separated mainly because of corporate guidelines and business requirements: assume for the following problem that combining them is not practical. There are places in SQLDB2 where PKs in SQLDB1 are referenced; triggers control the referential integrity, since cross-database relationships are troublesome in SQL Server. BUT, because of the large amount of data in SQLDB1, I'm getting eager spools on queries that join from the Id in SQLDB2 that references the data in SQLDB1. (With me so far? Maybe an example will help:)

        SELECT t.Id, t.Name, t2.Company
        FROM SQLDB1.table t
        INNER JOIN SQLDB2.table t2 ON t.Id = t2.FKId

    This query results in an eager spool that's 84% of the load of the query; the table in SQLDB1 has 35M rows, so it's completely choking this query. I can't create a view on the table in SQLDB1 and use that as my FK/index; it doesn't want me to create a constraint based on a view. Anyone have any idea how I can fix this huge bottleneck? (Short of putting the static data in the first db: believe me, I've argued that one until I'm blue in the face to no avail.) Thanks! valkyrie Edit: I also can't create an indexed view, because you can't put schemabinding on a view that references a table outside the database where the view resides. Dang it.

    Read the article

  • VSDBCMD deployment for additions to third party databases

    - by Sam
    We have some custom objects (stored procedures etc.) in a SQL Server 2005 database belonging to an ERP system. The custom objects are in different schemas to the ERP objects. We're using Database Edition .dbproj projects and vsdbcmd deployment for all our custom application databases, and would like to similarly manage our custom objects in the ERP database. It's not clear how this can be done without either: (1) importing all ERP objects (~4000 tables) into the .dbproj and manually keeping them in sync with ERP development (Visual Studio fell over the only time I tried importing these, so I've no idea whether it can actually handle a project of this size); or (2) somehow excluding the ERP schemas (there are two) from the diff process to ensure they don't get dropped by vsdbcmd (I haven't found any documentation which suggests this is possible). I'm aware of the IgnoreDefaultSchema setting, but there are two schemas I need to ignore, and I'm not comfortable with the 'default schema' approach: deployment by different users could be disastrous. Has anyone managed to successfully use .dbproj & vsdbcmd for custom additions to a third-party database? If not, how do you manage SQL source control & deployment?

    Read the article

  • php - disconnecting and connecting to multiple databases

    - by Phil Jackson
    Hi, I want to be able to switch from the current DB to multiple DBs through a loop:

        $query = mysql_query("SELECT * FROM `linkedin` ORDER BY id", $CON) or die(mysql_error());
        if (mysql_num_rows($query) != 0) {
            $last_update = time() / 60;
            while ($rows = mysql_fetch_array($query)) {
                $contacts_db = "NNJN_" . $rows['email'];
                // switch to the contacts db
                mysql_select_db($contacts_db, $CON);
                $query = mysql_query("SELECT * FROM `linkedin` WHERE token = '" . TOKEN . "'", $CON) or die(mysql_error());
                if (mysql_num_rows($query) != 0) {
                    mysql_query("UPDATE `linkedin` SET last_update = '{$last_update}' WHERE token = '" . TOKEN . "'", $CON) or die(mysql_error());
                } else {
                    mysql_query("INSERT INTO `linkedin` (email, token, username, online, away, last_update) VALUES ('" . EMAIL . "', '" . TOKEN . "', '" . USERNAME . "', 'true', 'false', '$last_update')", $CON) or die(mysql_error());
                }
            }
            mysql_free_result($query);
        }
        // switch back to your own
        mysql_select_db(USER_DB, $CON);

    It does insert and update details in the other databases, but it also inserts and edits data in the current user's database, which I don't want. Any ideas?

    Read the article

  • Syncing data between devel/live databases in Django

    - by T. Stone
    With Django's new multi-db functionality in the development version, I've been trying to create a management command that lets me synchronize the data from the live site down to a developer machine for extended testing. (Having actual data, particularly user-entered data, allows me to test a broader range of inputs.) Right now I've got a "mostly" working command. It can sync "simple" model data, but the problem I'm having is that it ignores ManyToMany fields, and I don't see any reason for it to do so. Anyone have any ideas of either how to fix that or a better way to handle this? Should I be exporting that first query to a fixture and then re-importing it?

        from django.core.management.base import LabelCommand
        from django.db.utils import IntegrityError
        from django.db import models
        from django.conf import settings

        LIVE_DATABASE_KEY = 'live'

        class Command(LabelCommand):
            help = ("Synchronizes the data between the local machine and the live server")
            args = "APP_NAME"
            label = 'application name'
            requires_model_validation = False
            can_import_settings = True

            def handle_label(self, label, **options):
                # Make sure we're running the command on a developer machine and that we've got the right settings
                db_settings = getattr(settings, 'DATABASES', {})
                if not LIVE_DATABASE_KEY in db_settings:
                    print 'Could not find "%s" in database settings.' % LIVE_DATABASE_KEY
                    return
                if db_settings.get('default') == db_settings.get(LIVE_DATABASE_KEY):
                    print 'Data cannot synchronize with self. This command must be run on a non-production server.'
                    return

                # Fetch all models for the given app
                try:
                    app = models.get_app(label)
                    app_models = models.get_models(app)
                except:
                    print "The app '%s' could not be found or models could not be loaded for it." % label

                for model in app_models:
                    print 'Syncing %s.%s ...' % (model._meta.app_label, model._meta.object_name)
                    # Query each model from the live site
                    qs = model.objects.all().using(LIVE_DATABASE_KEY)
                    # ...and save it to the local database
                    for record in qs:
                        try:
                            record.save(using='default')
                        except IntegrityError:
                            # Skip as the record probably already exists
                            pass
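
    One way to cover the ManyToMany fields might be to capture the related primary keys from the live database before saving and re-attach them afterwards. A rough sketch, under the assumption of a reasonably recent Django (related_model and .set() are newer API than the code above) and that the related rows already exist locally or get synced in an earlier pass:

        def copy_record_with_m2m(record, target_db='default'):
            # Capture related primary keys while the instance still points at the live DB.
            m2m_fields = record._meta.many_to_many
            m2m_pks = {
                f.name: list(getattr(record, f.name).all().values_list('pk', flat=True))
                for f in m2m_fields
            }
            # Save the plain fields, as handle_label already does.
            record.save(using=target_db)
            # The instance is now bound to the target DB, so the related managers write there.
            for f in m2m_fields:
                related = f.related_model.objects.using(target_db).filter(pk__in=m2m_pks[f.name])
                getattr(record, f.name).set(related)

    The fixture route mentioned in the question should also carry the many-to-many links, since dumpdata serializes them and loaddata accepts a --database option, at the cost of a slower round trip.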

    Read the article

  • Why do my CouchDB databases grow so fast?

    - by konrad
    I was wondering why my CouchDB database was growing too fast, so I wrote a little test script. This script changes an attribute of a CouchDB document 1200 times and records the size of the database after each change. After performing these 1200 write steps the database runs a compaction step and the DB size is measured again. In the end the script plots the database size against the revision numbers. The benchmark is run twice: the first time the default number of document revisions (=1000) is used (_revs_limit); the second time the number of document revisions is set to 1. The first run produces the first plot and the second run produces the second plot. For me this is quite unexpected behavior. In the first run I would have expected linear growth, as every change produces a new revision. When the 1000 revisions are reached, the size should stay constant as the older revisions are discarded. After the compaction the size should fall significantly. In the second run the first revision should result in a certain database size that is then kept during the following write steps, as every new revision leads to the deletion of the previous one. I could understand if there were a little bit of overhead needed to manage the changes, but this growth behavior seems weird to me. Can anybody explain this phenomenon or correct my assumptions that lead to the wrong expectations?

    Read the article

  • Creating an SQL variable character column > 255 characters supporting multiple databases

    - by Piers
    I have an application that stores data through an ODBC data source of the user's choosing. So far it has worked well on a range of database systems (e.g. JET, Oracle, SQL Server), as the SQL syntax is fairly simple. Now I am running into a problem where I need to store more than 255 characters in my strings. Previously I created the table using column type VARCHAR (255). Now if I try to create a table using, e.g. VARCHAR (512) then it falls over on Access databases. I know that I can use the MEMO type for Access, but this is non-standard SQL and will thus likely fail on other database systems (e.g. Oracle). Is there any widely supported SQL standard for creating text columns wider than 255 characters, or do I need to find another solution? The alternatives seem to me to be: 1) Profile the database system and customise the SQL CREATE TABLE command based on the database system. I don't like this as it defeats the purpose of using ODBC. 2) Add extra columns of 255 chars as required (e.g. LONGSTRING1, LONGSTRING2, ...) and concatenate after reading. I don't like this because it means the number of columns can vary between tables and it complicates read/write. Are there any other viable alternatives to these two options? Or is it possible to have an SQL compliant CREATE TABLE command supported by the majority of database vendors, that supports strings longer than 255 chars?
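
    For a sense of what alternative 1 would look like in practice, here is a small Python sketch that picks the long-text column type from the detected backend when the CREATE TABLE statement is built. The backend keys and type names below are illustrative assumptions, not a tested mapping:

        # Illustrative mapping from backend to a type that can hold long text.
        LONG_TEXT_TYPE = {
            'access': 'MEMO',
            'oracle': 'CLOB',
            'sqlserver': 'VARCHAR(MAX)',
            'mysql': 'TEXT',
        }
        DEFAULT_TYPE = 'VARCHAR(255)'   # lowest-common-denominator fallback

        def create_notes_table_sql(backend):
            """Build a CREATE TABLE statement with a backend-appropriate text column."""
            column_type = LONG_TEXT_TYPE.get(backend, DEFAULT_TYPE)
            return "CREATE TABLE notes (id INTEGER PRIMARY KEY, body %s)" % column_type

    The backend sniffing could come from the ODBC driver itself (SQLGetInfo with SQL_DBMS_NAME), which keeps the rest of the generated SQL generic.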

    Read the article

  • PHP/MySQL - Working with two databases, one shared and one local to an instance of application

    - by Extrakun
    The situation: using an off-the-shelf PHP application, I have to add a new module for extra functionality. Today, it is made known that eventually four different instances of the application are to be deployed, but the data from the new functionality is to be shared among those 4 instances. Each instance should still have its own database for users, content, etc. So the data for the new functionality goes into a 'shared' database, and the data for the application (user login, content, uploads) goes into a 'local' database. To make things more complex, the new module I am writing will fetch data from the local DB and the shared DB at the same time. A rewrite of the base application would take too long; I only have control over the new module which I am writing. The ideal solution: is there a way to encapsulate 2 databases into one name using MySQL? I do not wish to switch DB connections or specifically name the DB to query inside my SQL statements. The application uses a DB wrapper, so I am able to change it somehow so I can invisibly attempt to read/write to two different DBs. What is the best way to handle this problem?

    Read the article

  • Moving information between databases

    - by williamjones
    I'm on Postgres and have two databases on the same machine, and I'd like to move some data from database Source to database Dest. In database Source: table User has a primary key; table Comments has a primary key; table UserComments is a join table with two foreign keys for User and Comments. Dest looks just like Source in structure, but already has information in its User and Comments tables that needs to be retained. I'm thinking I'll probably have to do this in a few steps. Step 1: dump Source using the Postgres COPY command. Step 2: in Dest, add a temporary second_key column to both User and Comments, and a new SecondUserComments join table. Step 3: import the dumped file into Dest using COPY again, with the keys loaded into the second_key columns. Step 4: add rows to UserComments in Dest based on the contents of SecondUserComments, only using the real primary keys this time. Could this be done with a SQL command or would I need a script? Step 5: delete the SecondUserComments table and remove the second_key columns. Does this sound like the best way to do this, or is there a better way I'm overlooking?

    Read the article

  • Fetch the most viewed data in databases

    - by Erik Edgren
    I want to get the most viewed photo from the database, but I don't know how to accomplish this. Here's my SQL at the moment:

        SELECT *
        FROM photos AS p, viewers AS v
        WHERE p.id = v.id_photo
        GROUP BY v.id_photo

    The tables:

        CREATE TABLE IF NOT EXISTS `photos` (
          `id` int(10) NOT NULL AUTO_INCREMENT,
          `photo_filename` varchar(50) NOT NULL,
          `photo_camera` varchar(150) NOT NULL,
          `photo_taken` datetime NOT NULL,
          `photo_resolution` varchar(10) NOT NULL,
          `photo_exposure` varchar(10) NOT NULL,
          `photo_iso` varchar(3) NOT NULL,
          `photo_fnumber` varchar(10) NOT NULL,
          `photo_focallength` varchar(10) NOT NULL,
          `post_coordinates` text NOT NULL,
          `post_description` text NOT NULL,
          `post_uploaded` datetime NOT NULL,
          `post_edited` datetime NOT NULL,
          `checkbox_approxcoor` enum('0','1') NOT NULL DEFAULT '0',
          PRIMARY KEY (`id`),
          UNIQUE KEY `id` (`id`)
        )

        CREATE TABLE IF NOT EXISTS `viewers` (
          `id` int(10) NOT NULL AUTO_INCREMENT,
          `id_photo` int(10) DEFAULT '0',
          `ipaddress` text NOT NULL,
          `date_viewed` datetime NOT NULL,
          PRIMARY KEY (`id`),
          UNIQUE KEY `id` (`id`)
        )

    The data in viewers looks like this:

        (1, 85, '3892a0ab97d6ff325f285b27b847070f', '2012-06-21 22:49:25'),
        (2, 84, '3892a0ab97d6ff325f285b27b847070f', '2012-06-21 22:49:25'),
        (3, 85, '3892a0ab97d6ff325f285b27b847070f', '2012-06-21 22:49:25');

    A single row from photos, to show what its rows look like:

        (85, 'P1170986.JPG', 'Panasonic DMC-LX3', '2012-06-19 18:00:40', '3968x2232', '10/8000', '80', '50/10', '51/10', '', '', '2012-06-19 18:45:17', '0000-00-00 00:00:00', '0')

    At the moment the SQL only returns the photo with ID 84. In this case that's wrong: it should return the photo with ID 85. How can I fix this problem? Thanks in advance.

    Read the article

  • Best Practices for Sanitizing SQL inputs Using JavaScript?

    - by Greg Bulmash
    So, with HTML5 giving us local SQL databases on the client side, if you want to write a select or insert, you no longer have the ability to sanitize third-party input by saying $buddski = mysql_real_escape_string($tuddski), because the PHP parser and MySQL bridge are far away. It's a whole new world of SQLite, where you compose your queries and parse your results with JavaScript. But while you may not have your whole site's database go down, the user who gets his/her database corrupted or wiped due to a malicious injection attack is going to be rather upset. So, what's the best way, in pure JavaScript, to escape/sanitize your inputs so they will not wreak havoc with your user's built-in database? Scriptlets? Specifications? Anyone?

    Read the article

  • Do Websites need Local Databases Anymore?

    - by viatropos
    If there's a better place to ask this, please let me know. Every time I build a new website/blog/shopping-cart/etc., I keep trying to do the following: extract common functionality into reusable code (Rubygems and jQuery plugins mostly), and, if possible, convert that gem into a small service so I never have to deal with a database for the objects involved (by service, I mean something lean and mean, usually built with the Sinatra web framework with a few core models). My assumption is that if I can remove dependencies on local databases, that will make things easier and more scalable in the long run (scalable in terms of reusability and manageability, not necessarily database/performance). I'm not sure if that's a good or bad assumption yet. What do you think? I've made this assumption because most serious database/model functionality has already been built on the internet somewhere. Just to name a few:

        Social Network API: Facebook
        Messaging API: Twitter
        Mailing API: Google
        Event API: Eventbrite
        Shopping API: Shopify
        Comment API: Disqus
        Form API: Wufoo
        Image API: Picasa
        Video API: Youtube
        ...

    Each of those things is fairly complicated to build from scratch and to make as optimized, simple, and easy to use as those companies have made them. So if I build an app that shows pictures (Picasa) on an event page (Eventbrite), where you can see who joined the event (Facebook events), send them emails (Google Apps API), have them fill out monthly surveys (Wufoo), and watch a video when they're done (YouTube), all integrated into a custom, easy to use website, and I can do that without ever creating a local database, is that a good thing? I ask because there are two things missing from the puzzle that keep forcing me to create that local database: a post API and a RESTful/pretty URL API. While there are plenty of blogging systems and APIs for them, there is no one place where you can just write content and have it be part of some massive thing. For every app, I have to write code for creating pretty/RESTful URLs and for saving posts. But it seems like that should be a service! The question is, is that what the website is? That place to integrate the world's services for my specific cause... and, sigh, to store posts that only my site has access to. Will everyone always need "their own blog"? Why not just have a profile and write lots of content on an established platform like StackOverflow or Facebook? That way I could write apps entirely without a database and know that I'm doing it right. Note: of course at some point you'd need a database, if you were doing something unique or new. But for the case where you're just rewiring information or creating things like videos, events, and products, is it really necessary anymore?

    Read the article

  • B-trees, databases, sequential inputs, and speed.

    - by IanC
    I know from experience that B-trees have awful performance when data is added to them sequentially (regardless of the direction). However, when data is added randomly, the best performance is obtained. This is easy to demonstrate with the likes of an RB-tree: sequential writes cause a maximum number of tree rebalances to be performed. I know very few databases use binary trees, but rather use n-order balanced trees. I logically assume they suffer a similar fate to binary trees when it comes to sequential inputs. This sparked my curiosity. If this is so, then one could deduce that writing sequential IDs (such as in IDENTITY(1,1)) would cause multiple rebalances of the tree to occur. I have seen many posts argue against GUIDs as "these will cause random writes". I never use GUIDs, but it struck me that this "bad" point was in fact a good point. So I decided to test it. Here is my code:

        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE TABLE [dbo].[T1](
            [ID] [int] NOT NULL CONSTRAINT [T1_1] PRIMARY KEY CLUSTERED ([ID] ASC)
        )
        GO
        CREATE TABLE [dbo].[T2](
            [ID] [uniqueidentifier] NOT NULL CONSTRAINT [T2_1] PRIMARY KEY CLUSTERED ([ID] ASC)
        )
        GO

        declare @i int, @t1 datetime, @t2 datetime, @t3 datetime, @c char(300)

        set @t1 = GETDATE()
        set @i = 1
        while @i < 2000
        begin
            insert into T2 values (NEWID(), @c)
            set @i = @i + 1
        end
        set @t2 = GETDATE()

        WAITFOR delay '0:0:10'

        set @t3 = GETDATE()
        set @i = 1
        while @i < 2000
        begin
            insert into T1 values (@i, @c)
            set @i = @i + 1
        end

        select DATEDIFF(ms, @t1, @t2) AS [Int], DATEDIFF(ms, @t3, getdate()) AS [GUID]

        drop table T1
        drop table T2

    Note that I am not subtracting any time for the creation of the GUID nor for the considerably larger size of the row. The results on my machine were as follows: Int: 17,340 ms; GUID: 6,746 ms. This means that in this test, random inserts of 16 bytes were almost 3 times faster than sequential inserts of 4 bytes. Would anyone like to comment on this? P.S. I get that this isn't a question; it's an invitation to discussion, and that is relevant to learning optimum programming.

    Read the article

  • Access Violation Using memcpy or Assignment to an Array in a Struct

    - by Synetech inc.
    Hi, I wrote a program last night that worked just fine but when I refactored it today to make it more extensible, I ended up with a problem. The original version had a hard-coded array of bytes. After some processing, some bytes were written into the array and then some more processing was done. To avoid hard-coding the pattern, I put the array in a structure so that I could add some related data and create an array of them. However now, I cannot write to the array in the structure. Here’s a pseudo-code example: main() { char pattern[]="\x32\x33\x12\x13\xba\xbb"; PrintData(pattern); pattern[2]='\x65'; PrintData(pattern); } That one works but this one does not: struct ENTRY { char* pattern; int somenum; }; main() { ENTRY Entries[] = { {"\x32\x33\x12\x13\xba\xbb\x9a\xbc", 44} , {"\x12\x34\x56\x78", 555} }; PrintData(Entries[0].pattern); Entries[0].pattern[2]='\x65'; //0xC0000005 exception!!! :( PrintData(Entries[0].pattern); } The second version causes an access violation exception on the assignment. I’m sure it’s because the second version allocates memory differently, but I’m starting to get a headache trying to figure out what’s what or how to get fix this. (I’m currently working around it by dynamically allocating a buffer of the same size as the pattern array, copying the pattern to the new buffer, making the changes to the buffer, using the buffer in the place of the pattern array, and then trying to remember to free the—temporary—buffer.) (Specifically, the original version cast the pattern array—+offset—to a DWORD* and assigned a DWORD constant to it to overwrite the four target bytes. The new version cannot do that since the length of the source is unknown—may not be four bytes—so it uses memcpy instead. I’ve checked and re-checked and have made sure that the pointers to memcpy are correct, but I still get an access violation. I use memcpy instead of str(n)cpy because I am using plain chars (as an array of bytes), not Unicode chars and ignoring the null-terminator. Using an assignment as above causes the same problem.) Any ideas? Thanks a lot.

    Read the article

  • How to correctly refactor this?

    - by kane77
    I have trouble finding a way to correctly refactor this code so that there is as little duplicate code as possible. I have a couple of methods like this (pseudocode):

        public List<Something> parseSomething(Node n) {
            List<Something> somethings = new ArrayList<Something>();
            initialize();
            sameCodeInBothClasses();
            List<Node> nodes = getChildrenByName(n, "somename");
            for (Node node : nodes) {
                method();
                actionA();
                somethings.add(new Something(actionB()));
            }
            return somethings;
        }

    The sameCodeInBothClasses() methods are the same in all classes; what differs is what happens inside the for loop (actionA()), and each class adds an element of a different type to the list. Should I use the Strategy pattern for the different parts inside the loop? What about the return value (the type of the list differs): should the method just return List<Object>, which I would then cast to the appropriate type? Should I pass the class I want to return as a parameter?

    Read the article

  • Is TDD broken in Python?

    - by Konstantin
    Hi! Assume we have a class UserService with an attribute current_user. Suppose it is used in an AppService class. We have AppService covered with tests. In the test setup we stub out current_user with some mock value: UserService.current_user = 'TestUser'. Assume we decide to rename current_user to active_user. We rename it in UserService but forget to change its usage in AppService. We run the tests and they pass! The test setup adds the attribute current_user, which is still (wrongly but successfully) used in AppService. Now our tests are useless: they pass, but the application will fail in production. We can't rely on our test suite, so TDD is not possible. Is TDD broken in Python?
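
    For reference, the stubbing step itself can be made rename-safe by patching instead of assigning: unittest.mock's patch.object refuses, by default, to create an attribute the target class does not already have. A minimal sketch of that behaviour, reusing the names from the question:

        from unittest import mock

        class UserService(object):
            active_user = None            # renamed from current_user

        # Patching the new name works just as the plain assignment did.
        with mock.patch.object(UserService, 'active_user', 'TestUser'):
            assert UserService.active_user == 'TestUser'

        # Patching the stale name now fails loudly instead of silently adding an attribute.
        try:
            with mock.patch.object(UserService, 'current_user', 'TestUser'):
                pass
        except AttributeError as exc:
            print('stale stub detected:', exc)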

    Read the article

  • DevExpress Refactor Pro vs JetBrains ReSharper

    - by ChrisV
    In my department, we are currently using ReSharper 4.0 and deciding whether to upgrade to 4.5 upon its release next week. I personally am a huge fan of ReSharper; however, a number of my colleagues have pointed out that they have been using a plug-in from DevExpress called Refactor Pro that provides similar functionality. http://www.devexpress.com/Refactor http://www.jetbrains.com/resharper/beta.html Has anyone previously compared these tools, and do you hold any strong views on which tool would give us the greatest increase in productivity, and why?

    Read the article

  • Cut and Paste Code Reuse - JavaScript and C#

    - by tyndall
    What is the best tool (or tools) for tracking down "cut and paste reuse" of code in JavaScript and C#? I've inherited a really big project, and the amount of code that is repeated throughout the app is 'epic'. I need some help getting a handle on all the stuff that can be refactored into base classes, reusable JS libs, etc. If it can plug into Visual Studio 2010, that would be an added bonus.

    Read the article
