Search Results

Search found 7957 results on 319 pages for 'production databases'.

Page 72/319 | < Previous Page | 68 69 70 71 72 73 74 75 76 77 78 79  | Next Page >

  • Storing an object to use in multiple classes

    - by Aaron Sanders
    I am wondering about the best way to store an object in memory that is used in a lot of classes throughout an application. Let me set up my problem for you: We have multiple databases, one per customer. We also have a master table in which each row holds detailed information about a database, such as the database name, the IP of the server it lives on, and a few config settings. I have an application that loops through those databases and runs some updates on them. The settings I mentioned above are loaded into memory on each loop iteration. The application then runs through a series of processes that involve multiple classes using this data. The data never changes during the processes, only between loop iterations. The variables relate to a customer, so I have them stored in a customer class. I suppose I could make all of the members shared, or should I use a singleton for the customer class? I've never actually used a singleton, only read that they are good in this type of situation. Are there better solutions for this type of scenario? Also, I may have plans to make this application multithreaded later. Sorry if this is confusing. If you have questions, let me know and I will answer them. Thanks for your help.
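
    A minimal sketch of the singleton idea in VB.NET, with a hypothetical CustomerContext class (the name and fields are placeholders, not from the question). Reset would be called once per loop iteration:

        ' Hedged sketch: a singleton holding the current customer's settings.
        Public NotInheritable Class CustomerContext
            Private Shared ReadOnly SyncRoot As New Object()
            Private Shared _current As CustomerContext

            Public ReadOnly DatabaseName As String
            Public ReadOnly ServerIp As String

            Private Sub New(ByVal dbName As String, ByVal serverIp As String)
                DatabaseName = dbName
                ServerIp = serverIp
            End Sub

            ' The one shared instance every class reads from.
            Public Shared ReadOnly Property Current() As CustomerContext
                Get
                    SyncLock SyncRoot
                        Return _current
                    End SyncLock
                End Get
            End Property

            ' Called once per loop iteration to swap in the next customer.
            Public Shared Sub Reset(ByVal dbName As String, ByVal serverIp As String)
                SyncLock SyncRoot
                    _current = New CustomerContext(dbName, serverIp)
                End SyncLock
            End Sub
        End Class

    If the loop ever runs customers in parallel, a process-wide singleton becomes the wrong shape; passing a CustomerContext instance into the classes that need it survives that change better.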

    Read the article

  • Bidirectional replication update record problem

    - by Mirek
    Hi, I would like to present a problem related to SQL Server 2005 bidirectional replication. What do I need? My team leader wants to solve one of our problems using bidirectional replication between two databases, each used by a different application. One application creates records in table A, and the changes should replicate to a copy of table A in the second database. When data on the second server is changed, those changes have to be propagated back to the first server. I am trying to achieve bidirectional transactional replication between two databases on one server, which is running SQL Server 2005. I have managed to set this up using scripts, establishing 2 publications and 2 read-only subscriptions with loopback detection. The distribution database is created and publishing is enabled on both databases. Distributor and publisher are up. We use some rules to control which records will be replicated, so we need to call our custom stored procedures during replication. So the articles are set to use custom update, insert and delete stored procedures. So far so good, but... Everything works fine and changes replicate, until updates are done on both tables simultaneously or before the changes are replicated (and that takes about 3-6 seconds). Both records then end up with different values:

        UPDATE db1.dbo.TestTable SET Col = 4 WHERE ID = 1
        UPDATE db2.dbo.TestTable SET Col = 5 WHERE ID = 1

    results in:

        db1.dbo.TestTable COL = 5
        db2.dbo.TestTable COL = 4

    But we want last-change-wins replication. Please, is there a way to solve my problem? How can I ensure the same values in both records? Or is there an easier solution than this kind of replication? I can provide the sample replication script I am using. I am looking forward to your ideas, Mirek
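
    One way to get last-change-wins behavior, sketched under an assumption the question does not state: that the table carries a last-modified timestamp column. The custom update procedure then applies an incoming change only when it is newer (the name follows the sp_MSupd_ convention for custom replication procs, but everything here is a placeholder):

        -- Hedged sketch of a last-change-wins guard in a custom replication update proc.
        CREATE PROCEDURE dbo.sp_MSupd_TestTable
            @Col int,
            @LastModified datetime,
            @ID int
        AS
        BEGIN
            UPDATE dbo.TestTable
            SET Col = @Col,
                LastModified = @LastModified
            WHERE ID = @ID
              AND LastModified < @LastModified;  -- older incoming changes are silently dropped
        END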

    Read the article

  • Please help me with a Git workflow

    - by aaron carlino
    I'm an SVN user hoping to move to Git. I've been reading documentation and tutorials all day, and I still have unanswered questions. I don't know if this workflow will make sense, but here's my situation and what I would like to get out of my workflow: Multiple developers, all developing locally on their workstations. 3 versions of the website: Dev, Staging, Production. Here's my dream: A developer works locally on his own branch, say "developer1", tests on his local machine, and commits his changes. Another developer can pull those changes down into his own branch, merging developer1 into developer2. When the work is ready to be seen by the public, I'd like to be able to "push" to Dev, Staging, or Production: git push origin staging or maybe... git merge developer1 staging I'm not sure. Like I said, I'm still new to it. Here are my main questions:
    - Do my websites (Dev, Staging, Production) have to be repositories? And do they have to be "bare" in order to be the recipients of new changes?
    - Do I want one repository or many, with several branches?
    - Does this even make sense, or am I on the wrong path?
    I've read a lot of tutorials, so I'm really hoping someone can just help me out with my specific situation. Thanks so much!
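
    A sketch of one common arrangement, not the only way to do it (the remote name and the hook are assumptions): each environment is a bare repository whose post-receive hook checks the pushed branch out into the web root, so promotion is a merge plus a push. The bare part matters, because git refuses by default to push into a checked-out branch:

        # work happens on a per-developer branch
        git checkout -b developer1
        git commit -am "feature work"
        git push origin developer1        # share with the team

        # promote to an environment
        git checkout staging
        git merge developer1
        git push staging-server staging   # the bare repo's post-receive hook deploys it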

    Read the article

  • Best practices for Magento Deployment

    - by Spongeboy
    I am looking at setting up a deployment process for a highly customised Magento site, and was wondering how other people do this. I will be setting up dev, UAT and prod environments. All the Magento files will be in source control (SVN). At this stage, I can't see any requirement for changing the DB, so the 3 databases will be maintained manually. Specifically: How do you apply Magento upgrades? (Individually in each env, or on dev then roll out, or just give up on upgrades?) What files/folders do you leave alone in each environment (e.g. magento/app/etc/local.xml)? Do you restrict developers to editing specific files/folders? Do you restrict theme designers to editing specific files/folders? How do you manage database changes?
    Theme Designer Files/Folders - designers can be restricted to editing the following folders:
        app/design/frontend/your_interface/your_theme/layout/
        app/design/frontend/your_interface/your_theme/template/
        app/design/frontend/your_interface/your_theme/locale/
        skin/frontend/your_interface/your_theme/
    Extension Developer Files/Folders - extension developers can edit the following folders/files:
        /app/code/local
        /app/etc/modules/<Namespace>_<Module>.xml
    Database environment management - as the store's base URL is stored in the database, you cannot just copy databases between environments. Options include:
    - Overriding the base URL in PHP (blog article on setting up dev and staging databases).
    - Changing the base URL in the database after copying. (Where is this stored?)
    - Doing a mysqldump or backup, then doing a replace on the URL in the SQL file.
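
    On the "(Where is this stored?)" question: in Magento 1 the base URLs live in the core_config_data table, so the post-copy fix-up can be a two-row update (the hostname here is an example value):

        -- Point the copied database at the new environment's URL:
        UPDATE core_config_data
        SET value = 'http://dev.example.com/'
        WHERE path IN ('web/unsecure/base_url', 'web/secure/base_url');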

    Read the article

  • Rails NoMethodError in loop when method exists

    - by Kevin Whitaker
    Good day all. I'm running into a bit of a problem getting a script running on my production environment, even though it works just fine on my dev box. I've verified that all the requisite gems and such are the same version. I should mention that the script is intended to be run with the script/runner command. Here is a super-condensed version of what I'm trying to do, centered around the part that's broken:

        def currentDeal
          marketTime = self.convertToTimeZone(Time.new)
          deal = Deal.find(:first, :conditions => ["start_time < ? AND end_time > ? AND market_id = ? AND published = ?", marketTime, marketTime, self.id, 1])
          return deal
        end

        markets = Market.find(:all)
        markets.each do |market|
          deal = market.currentDeal
          puts deal.subject
        end

    Now convertToTimeZone is a method attached to the model. This code works just fine on my dev machine, as stated. However, attempting to run it on my production machine results in:

        undefined method `subject' for nil:NilClass (NoMethodError)

    If, however, I go into the console on the production box and do this:

        def currentDeal
          marketTime = self.convertToTimeZone(Time.new)
          deal = Deal.find(:first, :conditions => ["start_time < ? AND end_time > ? AND market_id = ? AND published = ?", marketTime, marketTime, self.id, 1])
          return deal
        end

        market = Market.find(1)
        deal = market.currentDeal
        puts deal.subject

    it returns the correct value, no problem. So what is going on? This is on Rails v2.3.5, on both machines. Thanks for any help
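
    A note on the failure mode (an observation, not from the original post): Deal.find(:first, ...) returns nil when no row matches, and calling .subject on nil raises exactly this NoMethodError, so a guard makes the loop safe on a database with no deal in the current window:

        markets.each do |market|
          deal = market.currentDeal
          puts deal.subject if deal   # skip markets with no current deal
        end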

    Read the article

  • Rails with passenger only runs in development

    - by Ignace
    Hey, I have a problem on one of our webservers. I'll try to explain it as clearly as possible, but I'm not 100% aware of all the configuration of the server. There are 2 sites running next to each other (blcc_preprod and blcc_prod), so in Apache's 'sites-enabled' I have a file 'blcc' like this:

        <VirtualHost *:80>
            DocumentRoot /opt/dn/blcc/www
            RailsBaseURI /blcc_preprod
            RailsBaseURI /blcc_prod
            RailsEnv production
        </VirtualHost>

    My config/environments/production.rb (from both) looks like this (I removed all comments, because they mess with the layout):

        config.cache_classes = true
        config.action_controller.consider_all_requests_local = false
        config.action_controller.perform_caching = true
        config.action_view.cache_template_loading = true
        config.log_level = :debug

    The weird thing is that my production log dates from months ago, so something is really wrong. Could someone help? If you need more info, just ask. Thanks! Edit: error.log from Apache shows the normal output for an event to the server (the situation here is that the webserver plugs in to another business (Java) server via a framework). access.log is empty. The content of other_vhosts_access.log after we surf to ip/blcc_preprod is the following:

        blcc.localdomain:80 192.168.21.194 - - [25/May/2010:08:33:04 +0200] "GET /blcc_preprod HTTP/1.1" 500 594 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)"
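
    One thing worth checking, sketched from how Passenger documents RailsBaseURI (the application paths below are assumptions): each sub-URI needs a symlink inside the DocumentRoot pointing at that application's public/ directory, and a 500 combined with a stale application log often means Passenger dies before Rails ever starts logging:

        # assumed checkout locations; adjust to where the apps really live
        ln -s /opt/dn/blcc/blcc_preprod/public /opt/dn/blcc/www/blcc_preprod
        ln -s /opt/dn/blcc/blcc_prod/public    /opt/dn/blcc/www/blcc_prod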

    Read the article

  • SQL Server performance issue.

    - by Jit
    Hi Friends, I have been trying to analyze a performance issue with SQL Server 2005. We have 30 jobs, one for each database (30 databases, one per client). The jobs run in the early morning at an interval of 5 minutes. When I run a job individually for testing, for most of the databases it finishes in 7 to 9 minutes. But when these jobs run in the early morning, I see a few jobs taking 2 to 3 hours to finish, while the same job takes a few minutes, as mentioned above, if run independently. We don't have any other job scheduled during that time, other than these 30 jobs. If we restart the server, then for 2 or so days all the jobs finish in a few minutes, but over a period of time (from the 3rd day, suddenly), a few jobs start taking hours to finish. What could be the possible reason for performance degradation over the period of time? I verified all the SPs; we use temp tables, and I made sure none of the temp tables is left undropped at the end of an SP. Let me know the possible reasons for such behavior. Appreciate your time and help. Thanks
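
    Since 30 jobs launched at 5-minute intervals will pile up as soon as any one of them slows down, blocking is a prime suspect. A hedged sketch of a DMV query to run while the jobs are crawling, to see what each session is waiting on:

        SELECT r.session_id,
               r.status,
               r.wait_type,
               r.wait_time,
               r.blocking_session_id,
               t.text AS running_sql
        FROM sys.dm_exec_requests r
        CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
        WHERE r.session_id > 50;   -- skip system sessions

    If nothing is blocked, stale statistics or cached plans that age badly are the other usual suspects for the "fine after a restart, slow from day 3" pattern.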

    Read the article

  • Creating Emulated iSCSI Target in a Lab/Testing Environment using Windows Server 2008 R2

    - by Brian McCleary
    We have a single server running Windows Server 2008 with Hyper-V installed, running 5 virtual machines. I have purchased a second DELL R805 server so that we can create a fail-over cluster with our current R805, which is in production. Right now, our R805 connects via iSCSI to an MD3000i iSCSI SAN. Before we try to roll out the second server and clustering to our production environment, I want to be able to test and "play with" the clustering features in our lab before rolling them out. The problem is that I don't want to spend a couple thousand dollars on another iSCSI SAN server just for testing. I already have two servers in my lab that are installed with Windows Server 2008 R2 64-bit (one is the R805 and another a spare desktop that was lying around), with the Hyper-V role enabled, that should be ready to test with, but I don't have an iSCSI target to use as the Cluster Shared Volume. Is there any way to install some sort of emulated iSCSI target, either on a Hyper-V image or on an external spare computer that we have? In our lab we obviously don't need a real SAN, just something we can use to test how to set up the clustering properly outside of our production environment. Any advice is appreciated. FYI - I have read Jose Barret's blog on WUDSS at http://blogs.technet.com/josebda/archive/2008/01/07/installing-the-evaluation-version-of-wudss-2003-refresh-and-the-microsoft-iscsi-software-target-version-3-1-on-a-vm.aspx, but it seems awfully complex. I'm hoping for an easier solution.

    Read the article

  • Identify server that made call to web service

    - by sleepybobos
    I am working within an intranet environment. We have both a production and a development SharePoint server (WSS 3). We have a 3rd-party workflow product which runs on top of SharePoint and is installed on both the production and development SharePoint servers. The workflow product can call web services I have written, which are hosted on our web server. How would I have the web services determine which SharePoint server made the call, be it the production or development server? I would then use this information to serve server-specific information from web.config or a database etc. Currently the site hosting the web services is set up to allow anonymous access, so code such as System.Web.HttpContext.Current.User.Identity.Name returns an empty string. If Windows authentication is used it returns the identity of the currently logged-in user, which is of no use in identifying the server the call was made from. I need a push in the right direction to address what I believe is probably a common scenario, please.
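
    One common approach, sketched here as an assumption rather than the only answer: even with anonymous access, the caller's network address is known, so the service can map the requesting IP to an environment (the class name and the example IP are placeholders):

        using System.Web;

        public static class CallerInfo
        {
            // Example address; substitute the production SharePoint server's IP.
            private const string ProductionServerIp = "10.0.0.5";

            public static string EnvironmentOfCaller()
            {
                // Available even for anonymous requests: the TCP peer address.
                string callerIp = HttpContext.Current.Request.UserHostAddress;
                return callerIp == ProductionServerIp ? "production" : "development";
            }
        }

    The returned string can then pick the right web.config section or connection string.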

    Read the article

  • How do you make life easier for yourself when developing a really large database

    - by Hannes de Jager
    I am busy developing 2 web-based systems with MySQL databases, and the number of tables/views/stored routines is becoming large; handling the complexity is more and more challenging. Now, in programming languages we have namespacing, e.g. Java packages and C++ namespaces, to partition the software and group things together to make them more understandable. Databases, on the other hand, have more of a flat structure (MySQL at least): tables and stored procedures sit at the same level. So one has to be more creative, establishing naming conventions, perhaps using more than one database, or using tools to visualize things. What methods do you use to ease the pain? To be effective while developing your databases? To not get lost in a sea of tables, fields and stored procs? Feel free to mention tools you use as well, but try to restrict it to open source, and preferably Linux, solutions if that's OK. By the way, how many tables would a database need before you consider it large in terms of design?
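
    A sketch of the "more than one database" idea the question mentions (names are made up): in MySQL a database and a schema are the same thing, so separate databases can serve as namespaces while qualified names keep cross-schema queries working:

        -- One schema per subsystem, acting as a namespace:
        CREATE DATABASE app_billing;
        CREATE DATABASE app_reporting;

        CREATE TABLE app_billing.invoice (
            id INT AUTO_INCREMENT PRIMARY KEY,
            total DECIMAL(10,2)
        );

        -- Cross-schema access still works with qualified names:
        SELECT i.total FROM app_billing.invoice AS i;

    Stored routines also live inside a database, so they get partitioned by the same move.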

    Read the article

  • What is a good automated data import method for SQL Server?

    - by Joel Potter
    I'm in the process of porting some SQL Server 2005 databases to SQL Server 2008. One of these databases has an associated import application (a Windows task) which uses SSIS with a DTS package to import a large dataset from an MS Access database nightly. In upgrading to SQL Server 2008, I discovered that I can't run the same console application which has been performing the imports, due to the missing manageddts DLL in SQL Server 2008. It's several years old and in need of a rewrite for various reasons; plus, I've been fairly unhappy with DTS in general. The original reason DTS was chosen was speed (a 5-minute import compared to 30+ for ADO.NET). The format of the data to import is out of my control (the client likes Access). I would also like to be able to run the import from a machine completely separate from the server hosting SQL Server, preferably with minimal SQL features installed. Options I've considered:
    - Creating an Access application to connect to both databases (SQL Server and Access) and perform the import (Ugh!)
    - Revisiting ADO.NET to see if the original implementation was poorly written.
    - Updated SSIS packages.
    What other technologies should I be considering for this job?
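
    On the "revisit ADO.NET" option: row-by-row inserts are what usually makes ADO.NET imports take 30+ minutes, and SqlBulkCopy streams a reader instead. A hedged sketch with placeholder connection strings and table names, runnable from a client machine with no SQL Server features installed:

        using System.Data;
        using System.Data.OleDb;
        using System.Data.SqlClient;

        class AccessImport
        {
            static void Main()
            {
                // Jet provider (32-bit process) reads the .mdb; paths are examples.
                string accessCs = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\data\import.mdb";
                string sqlCs = "Server=myserver;Database=mydb;Integrated Security=true";

                using (var source = new OleDbConnection(accessCs))
                using (var cmd = new OleDbCommand("SELECT * FROM SourceTable", source))
                {
                    source.Open();
                    using (IDataReader reader = cmd.ExecuteReader())
                    using (var bulk = new SqlBulkCopy(sqlCs))
                    {
                        bulk.DestinationTableName = "dbo.TargetTable";
                        bulk.BatchSize = 10000;
                        bulk.WriteToServer(reader);  // streams rows; no per-row round trips
                    }
                }
            }
        }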

    Read the article

  • Should the code being tested compile to a DLL or an executable file?

    - by uriDium
    I have a solution with two projects: one for the production code and another for the unit tests. I did this as per the suggestions I got here on SO. I noticed that the Debug folder includes the production code in executable form. I used NUnit to run the tests after removing the executable, and they all fail trying to find the executable. So it definitely is trying to find it. I then did a quick read to find out which is better, a DLL or an executable. It seems that a DLL is much faster, as the caller shares its memory space, whereas communication between executables is slower. Unfortunately our production code needs to be an executable, so the unit tests will be slightly slower. I am not too worried about that. But the project does rely on code written in another library which is also in executable format at the moment. Should the projects that expose some sort of SDK rather be compiled to a DLL, and the projects that use the SDK be compiled to executables?

    Read the article

  • Can't diagnose my MySQL root user problem

    - by George Crawford
    Hi all, I have a problem with the MySQL root user in my MySQL setup, and I just can't for the life of me work out how to fix it. It seems that I have somehow messed up the root user, and my access to databases is now very erratic. For reference, I'm using MAMP on OS X to provide the MySQL server. I'm not sure how much that matters, though - I'd guess that whatever I've done will require a command-line fix. I can start MySQL using MAMP as usual, and access databases using the 'standard' users I have created for my PHP apps. However, the root user, which I use in my MySQL GUI client and also in phpMyAdmin, can only access the "information_schema" database, as well as two I have created manually and presumably (and mistakenly) left permissions wide open for. My 15 or so other databases cannot be accessed by the root user. When I load up phpMyAdmin, the home screen says: "Create new database: No Privileges". I certainly did at some stage change my root user's password using the MAMP dialog, but I don't remember doing anything else which might have caused this problem. I've tried changing the password again, and there seems to be no change in the issue. I've also tried resetting the root password using the command line, including starting mysql manually with --skip-grant-tables and then flushing privileges, but again, nothing seems to fix the issue. I've come to the end of my ideas, and would very much appreciate some step-by-step advice and diagnosis from one of the experts here! Many thanks for your help.
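
    For completeness, a sketch of the usual recovery sequence once mysqld is running with --skip-grant-tables (MySQL 5.x syntax; the password is a placeholder):

        FLUSH PRIVILEGES;   -- load the grant tables so GRANT statements work again
        GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost'
            IDENTIFIED BY 'new_password' WITH GRANT OPTION;
        FLUSH PRIVILEGES;

        -- Then verify what root can actually see:
        SHOW GRANTS FOR 'root'@'localhost';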

    Read the article

  • Using SMO to call Database.ExecuteNonQuery() concurrently?

    - by JimDaniel
    I have been banging my head against the wall trying to figure out how I can run update scripts concurrently against multiple databases in a single SQL Server instance using SMO. Our environments have an ever-increasing number of databases which need updating, and iterating through them one at a time is becoming a problem (too slow). From what I understand, SMO does not support concurrent operations, and my tests have borne that out. There seems to be shared state at the Server object level (things like DataReader context), which keeps throwing exceptions such as "reader is already open." I apologize for not having the exact exceptions I am getting; I will try to get them and update this post. I am no expert on SMO and, to be honest, am just feeling my way through. I am not really sure I am approaching it the right way, but it's something that has to be done, or our productivity will slow to a crawl. So how would you guys do something like this? Am I using the wrong technology with SMO? All I want to do is execute SQL scripts against databases in a single SQL Server instance in parallel. Thanks for any help you can give, Daniel
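
    A hedged sketch of one way around the shared-state problem (server and script are placeholders): give each thread its own ServerConnection and Server instead of sharing one, so nothing in SMO is touched from two threads at once:

        using System.Collections.Generic;
        using System.Threading;
        using Microsoft.SqlServer.Management.Common;
        using Microsoft.SqlServer.Management.Smo;

        class ParallelUpdater
        {
            static void RunScript(string database, string script)
            {
                var conn = new ServerConnection("myserver");   // one connection per thread
                var server = new Server(conn);                 // one SMO Server per thread
                server.Databases[database].ExecuteNonQuery(script);
                conn.Disconnect();
            }

            static void Main()
            {
                var databases = new List<string> { "Customer1", "Customer2", "Customer3" };
                var threads = new List<Thread>();
                foreach (string db in databases)
                {
                    string name = db;   // capture a copy for the closure
                    var t = new Thread(() => RunScript(name, "UPDATE dbo.Config SET Version = 42"));
                    t.Start();
                    threads.Add(t);
                }
                threads.ForEach(t => t.Join());
            }
        }

    The key point is that no Server, Database, or underlying connection object crosses a thread boundary.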

    Read the article

  • Subversion freaking out on me!

    - by Malfist
    I have two copies of a site: one is the production copy, and the other is the development copy. I recently added everything in production to a Subversion repository hosted on our Linux backup server. I created a tag of the current version and I was done. I then copied the development copy over the top of the production copy (on my local machine, where I have everything checked out). There are only 10-20 changed files; however, when I use TortoiseSVN to do a commit, it says every file has changed. The diff file generated shows Subversion removing everything and replacing it with the new version (which is the exact same). What is going on? How do I fix it? An example diff:

        Index: C:/Users/jhollon/Documents/Visual Studio 2008/Projects/saloon/trunk/components/index.html
        ===================================================================
        --- C:/Users/jhollon/Documents/Visual Studio 2008/Projects/saloon/trunk/components/index.html (revision 5)
        +++ C:/Users/jhollon/Documents/Visual Studio 2008/Projects/saloon/trunk/components/index.html (working copy)
        @@ -1,4 +1,4 @@
        -<html>
        -<body bgcolor="#FFFFFF">
        -</body>
        +<html>
        +<body bgcolor="#FFFFFF">
        +</body>
         </html>
        \ No newline at end of file

    Read the article

  • Identical files from different servers. Why might IE 8 display them differently?

    - by jasongetsdown
    I'm working on a site that will go on my company's intranet. I developed it locally on my computer, checking it in different browsers and on colleagues' computers, and when it was done I handed it off to IT. They put identical copies on a staging server and on the production server. This is a site built only with HTML, JavaScript, and CSS; no server-side scripting. It also uses a DWF viewer plugin from Autodesk. It is a single standalone page (not part of a CMS) that allows users to load drawings into the viewer and then click to see info from a database of space info saved in a series of JS arrays (the space DB software spits out a JS file with all the info listed in array literals, creating a crap ton of global variables - ugh, but I digress). When I followed their links (using IE 8), the version on the staging server looked as expected, but the layout is hosed on the version from the production server. Specifically, it seems that a div that is supposed to flow to the right of a float: left div is displaying below the floated div at full width, as though it were clear: left (which it is not). It also has the wrong height. I downloaded the files from each and they are identical to my local version. Frustrated, I cleared my browser's cache, restarted my computer, and checked it on a colleague's computer who also has IE 8. All the same issue: staging server good, production server bad. Finally I uninstalled IE 8 and looked at it in IE 6, and both versions looked fine. So, to recap: two different servers, no server-side scripting, identical files; one browser agrees they are identical, the other does not. What could cause this?

    Read the article

  • Best Method For High Data Availability for SQL Server

    - by omatase
    I have a web service that runs 24/7. Periodically it needs to refresh its database with data from another web service. There is a lot of data. It's tens of thousands of rows. (no, I don't mean this is a lot of data for SQL Server, just trying to point out that I expect it to take some time to come down the pipe from the other web service) The data refresh can take between 5 and 10 minutes. The actual data update portion of that is between 1 and 2 minutes. This means the service would be down for all intents and purposes when consumers would be requesting this type of data. I would like to implement a system where data is always available. The only thing that comes to mind is some type of system where I maintain two separate databases. I populate the inactive one, swapping it to active before populating the other one. I'm not sure I know the best way to do this. My current ideas just revolve around two sets of the schema in a single database (using views to access the active set) or two databases each with the same schema. The application would rotate between the two databases. Any suggestions from someone who has done something like this before?
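
    A sketch of one way to do the swap with near-zero downtime, assuming consumers can be pointed at a synonym rather than a concrete table (all object names here are placeholders): load the inactive copy, then repoint the synonym, which is a quick metadata-only change:

        -- While readers use StoreA, bulk-load StoreB.dbo.Products, then:
        BEGIN TRAN;
        DROP SYNONYM dbo.Products;
        CREATE SYNONYM dbo.Products FOR StoreB.dbo.Products;
        COMMIT;

    Views selecting from the active set can be swapped the same way with ALTER VIEW; the refresh then never blocks readers for more than the instant of the swap.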

    Read the article

  • Android CursorAdapters, ListViews and background threads

    - by MattC
    This application I've been working on has databases with multiple megabytes of data to sift through. A lot of the activities are just ListViews descending through various levels of data within the databases until we reach "documents", which are just HTML to be pulled from the DB(s) and displayed on the phone. The issue I am having is that some of these activities need the ability to search through the databases by capturing keystrokes and re-running the query with a "like %blah%" in it. This works reasonably quickly, except when the user first loads the data and when the user first enters a keystroke. I am using a ResourceCursorAdapter and generating the cursor in a background thread, but in order to call listAdapter.changeCursor() I have to use a Handler to post it to the main UI thread. This particular call is freezing the UI thread just long enough to bring up the dreaded ANR dialog. I'm curious how I can offload this entirely to a background thread so the user interface remains responsive and we don't have ANR dialogs popping up. Just for full disclosure, I was originally returning an ArrayList of custom model objects and using an ArrayAdapter, but (understandably) the customer pointed out that it was bad memory management, and I wasn't happy with the performance anyway. I'd really like to avoid a solution where I'm generating huge lists of objects and then calling listAdapter.notifyDataSetChanged()/Invalidated(). Here is the code in question:

        private Runnable filterDrugListRunnable = new Runnable() {
            public void run() {
                if (filterLock.tryLock() == false)
                    return;
                cur = ActivityUtils.getIndexItemCursor(DrugListActivity.this);
                if (cur == null || forceRefresh == true) {
                    cur = docDb.getItemCursor(selectedIndex.getIndexId(), filter);
                    ActivityUtils.setIndexItemCursor(DrugListActivity.this, cur);
                    forceRefresh = false;
                }
                updateHandler.post(new Runnable() {
                    public void run() {
                        listAdapter.changeCursor(cur);
                    }
                });
                filterLock.unlock();
                updateHandler.post(hideProgressRunnable);
                updateHandler.post(updateListRunnable);
            }
        };
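
    A hedged sketch of an alternative shape for the same work, reusing the fields from the code above (docDb, selectedIndex, listAdapter): the likely reason changeCursor() stalls is that a SQLite cursor fills its window lazily on first access, so forcing the fill in the background with getCount() leaves only a cheap handoff for the UI thread. AsyncTask then replaces the Handler dance:

        // Inner class of DrugListActivity; names come from the question's own code.
        private class FilterTask extends android.os.AsyncTask<String, Void, Cursor> {
            @Override
            protected Cursor doInBackground(String... params) {
                Cursor cur = docDb.getItemCursor(selectedIndex.getIndexId(), params[0]);
                if (cur != null) {
                    cur.getCount();   // force the cursor window to fill off the UI thread
                }
                return cur;
            }

            @Override
            protected void onPostExecute(Cursor cur) {
                listAdapter.changeCursor(cur);   // cheap now; the adapter closes the old cursor
            }
        }

        // usage, e.g. from the TextWatcher that captures keystrokes:
        // new FilterTask().execute(filter);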

    Read the article

  • how to programmatically compare permissions of login/user in sql server 2005

    - by titanium
    There's a login/user in SQL Server who is having a problem importing accounts on the production server. I don't know what method he is using to do this. According to the person doing the import, the same import works fine on the development server, but when he does it in production it gives him errors. Below are the errors he gets for each account:

        2009-06-05 18:01:05.8254 ERROR [engine-1038] Task [1038:00001 - Members]: Step 1.0 [<Insert step description>]: Task.RunStep(): StoreRow has failed
        2009-06-05 18:01:05.9035 ERROR [engine-1038] Task [1038:00001 - Members]: Step 1.0 [<Insert step description>]: Task.RunStep(): StoreRow exception: Exception caught while storing Data. [Microsoft][ODBC SQL Server Driver][SQL Server]'ACCOUNT1' is not a valid login or you do not have permission.

    Please note that 'ACCOUNT1' is not the real account name; I changed it for security reasons. Using SQL Server Management Studio (SSMS), I viewed the permissions of the user/login performing the import on the development server and on production, for comparison, and found no difference. My question is: is there a way to programmatically query the server- and database-level permissions of a particular login/user so I can compare and contrast for any differences?
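
    A sketch of such a query against the SQL Server 2005 catalog views, using 'ACCOUNT1' as in the question ("SomeDatabase" is a placeholder); running it on both servers and diffing the output shows any grant that differs:

        -- Server-level permissions for the login:
        SELECT p.name, sp.state_desc, sp.permission_name
        FROM sys.server_permissions sp
        JOIN sys.server_principals p ON p.principal_id = sp.grantee_principal_id
        WHERE p.name = 'ACCOUNT1';

        -- Database-level permissions for the mapped user:
        USE SomeDatabase;
        SELECT dp.name, dbp.state_desc, dbp.permission_name, dbp.class_desc
        FROM sys.database_permissions dbp
        JOIN sys.database_principals dp ON dp.principal_id = dbp.grantee_principal_id
        WHERE dp.name = 'ACCOUNT1';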

    Read the article

  • WCF hosting: Can access svc file but cannot go to wsdl link

    - by Impulse
    Hello, I have a WCF service that is hosted in IIS 7.5. I have two servers, one for test and one for production. The service works fine on the test server, but on the production server I have the following problem: when I access the address http://..../service.svc I can see the default page that says: "You have created a service. To test this service, you will need to create a client and use it to call the service. You can do this using the svcutil.exe tool from the command line with the following syntax: svcutil.exe http://..../service.svc?wsdl This will generate a configuration file and a code file that contains the client class. Add the two files to your client application and use the generated client class to call the Service." But when I click the wsdl link, I cannot get to the WSDL page; it returns me to this default web page without any errors. I suspect a network/firewall authorization error, but has anybody had an experience like this one? All IIS settings are the same for the test and production servers. Thank you, Best Regards.
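
    One thing worth diffing between the two servers' web.config files (a sketch of the relevant section; the behavior name is a placeholder): the ?wsdl link only works when metadata publishing is enabled for the service, and that is a config setting rather than an IIS one:

        <behaviors>
          <serviceBehaviors>
            <behavior name="MetadataBehavior">
              <!-- without this, ?wsdl falls back to the default help page -->
              <serviceMetadata httpGetEnabled="true" />
              <serviceDebug includeExceptionDetailInFaults="false" />
            </behavior>
          </serviceBehaviors>
        </behaviors>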

    Read the article

  • Best workflow with Git & Github

    - by Tom Schlick
    Hey guys, I'm looking for some advice on how to properly structure the workflow for my team with Git & GitHub. We are recent SVN converts and it's kind of confusing how we should best set up our day-to-day workflow. Here is a little background: I'm comfortable with the command line, and my team is pretty new to it but can follow and use commands. We are all working on the same project with 3 environments (development, staging, and production). We are a mix of developers & designers, so some use the Git GUI and some the command line. Our setup in SVN went something like this: we had a branch for development, staging and production. When people were confident with code they would commit it and then merge it into staging. The server would update itself, and on a release day (weekly) we would do a diff and push the changes to the production server. Now I have set up those branches and got the process running with the server (the situation here being that the webserver pulls in new changes), but it's the actual workflow that is confusing the hell out of me. It seems like overkill that every time someone makes a change to a file they would create a new branch, commit, merge, and delete that branch... from what I have read, they would be able to do it on a specific commit (using the hash). Do I have that right? Is this an acceptable way to go about things with Git? Any advice would be greatly appreciated.
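
    A sketch of the two moves the question circles around (branch names and the hash are examples): topic branches are cheap and short-lived rather than ceremonial, and individual commits can indeed be promoted by hash with cherry-pick:

        # a small, short-lived topic branch off development
        git checkout -b fix-header development
        git commit -am "Fix header layout"
        git checkout development
        git merge fix-header && git branch -d fix-header   # branch exists for minutes, not weeks

        # promoting a single commit to staging by hash
        git checkout staging
        git cherry-pick 3f2a91c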

    Read the article

  • What's the best practice for handling system-specific information under version control?

    - by Joe
    I'm new to version control, so I apologize if there is a well-known solution to this. For this problem in particular I'm using Git, but I'm curious how to deal with this in all version control systems. I'm developing a web application on a development server, and have defined the absolute path name to the web application (not the document root) in two places. On the production server this path is different. I'm confused about how to deal with this. I could either:
    - Reconfigure the development server to share the same path as production, or
    - Edit the two occurrences each time production is updated.
    I don't like #1 because I'd rather keep the application flexible for any future changes. I don't like #2 because if I start developing on a second development server with a third path, I would have to change this for every commit and update. What is the best way to handle this? I thought of:
    - Using custom keywords and variable expansion (such as setting the property $PATH$ in the version control properties and having it expanded in all the files). Git doesn't support this because it would be a huge performance hit.
    - Using post-update and pre-commit hooks. Possibly the likely solution for Git, but every time I looked at the status, it would report the two files as changed. Not really clean.
    - Pulling the path from a config file outside of version control. Then I would have to have the config file in the same location on all servers. Might as well just have the same path to begin with.
    Is there an easy way to deal with this? Am I overthinking it?
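
    A sketch of the common variant of the third option above (file names are placeholders): version a template, ignore the real file, and keep the real file next to the checkout on each server, so its location no longer needs to match across machines:

        # once, in the repository
        cp app_config.php app_config.php.example
        echo "app_config.php" >> .gitignore
        git rm --cached app_config.php
        git add app_config.php.example .gitignore

        # once per server, after cloning
        cp app_config.php.example app_config.php   # then edit in the local path

    This sidesteps the "same location on all servers" objection: the file lives inside the working copy, but git never tracks it.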

    Read the article

  • How best to organize projects folders for unit tests in .NET?

    - by Dan Bailiff
    So I'm trying to introduce unit testing to my group. I've successfully upgraded a VS '05 web site project to a VS '08 web application, and now have a solution with the web app project and a unit test project. The issue now is how to fit this back into the source repository such that we don't break the build system and the unit test projects are persisted as well. Right now we have something like this:

        c:\root
        c:\root\projectA
        c:\root\projectB
        c:\root\projectC

    where projectA contains the sln file and all other related files/folders for the project. Now I have this new solution that looks like this:

        c:\root\projectA (parent folder)
        c:\root\projectA\projectA (the production code project)
        c:\root\projectA\projectA_Test (the unit test project)
        c:\root\projectA\TestResults
        c:\root\projectA\projectA.sln

    How do I integrate this new structure back into the code repository? I'd really prefer to keep the production code folder where it was in the source repository for the sake of the build, but is this necessary? If I keep the production code project in its usual place, then where do I keep my unit test projects and how do I connect them with a sln file? Is it better to use this new structure and adjust the build process? I'd love to hear how other people are dealing with this issue of upgrading legacy projects to unit testing.

    Read the article

  • How to further improve error messages in Scala parser-combinator based parsers?

    - by rse
    I've coded a parser based on Scala parser combinators:

        class SxmlParser extends RegexParsers with ImplicitConversions with PackratParsers {
            [...]
            lazy val document: PackratParser[AstNodeDocument] =
                ((procinst | element | comment | cdata | whitespace | text)*) ^^ { AstNodeDocument(_) }
            [...]
        }

        object SxmlParser {
            def parse(text: String): AstNodeDocument = {
                var ast = AstNodeDocument()
                val parser = new SxmlParser()
                val result = parser.parseAll(parser.document, new CharArrayReader(text.toArray))
                result match {
                    case parser.Success(x, _) => ast = x
                    case parser.NoSuccess(err, next) => {
                        tool.die("failed to parse SXML input " +
                            "(line " + next.pos.line + ", column " + next.pos.column + "):\n" +
                            err + "\n" + next.pos.longString)
                    }
                }
                ast
            }
        }

    Usually the resulting parsing error messages are rather nice. But sometimes it becomes just:

        sxml: ERROR: failed to parse SXML input (line 32, column 1):
        `"' expected but `' found

        ^

    This happens if a quote character is not closed and the parser reaches the EOT. What I would like to see here is (1) which production the parser was in when it expected the '"' (I have multiple ones) and (2) where in the input this production started parsing (which would indicate where the opening quote is in the input). Does anybody know how I can improve the error messages and include more information about the actual internal parsing state when the error happens (perhaps something like a production-rule stack trace or whatever can reasonably be given here to better identify the error location)? BTW, the above "line 32, column 1" is actually the EOT position and hence of no use here, of course.
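
    One lever the library offers for point (1), sketched against a hypothetical production (attrValue is not from the original grammar): add an explicit failure(...) alternative at the spot where the closing quote is expected, so the message names the production instead of the bare token:

        // Hedged sketch: the failure branch fires only when the closing quote is missing.
        lazy val attrValue: PackratParser[String] =
            "\"" ~> """[^"]*""".r <~ ("\"" | failure("unclosed string literal in attrValue"))

    For point (2), wrapping productions in positioned(...) over AST nodes that extend scala.util.parsing.input.Positional records where each node began parsing, which can then be printed alongside the failure.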

    Read the article

  • Null reference but it's not?

    - by Clint Davis
    This is related to my previous question, but it is a different problem. I have two classes: Server and Database.

        Public Class Server
            Private _name As String
            Public Property Name() As String
                Get
                    Return _name
                End Get
                Set(ByVal value As String)
                    _name = value
                End Set
            End Property

            Private _databases As List(Of Database)
            Public Property Databases() As List(Of Database)
                Get
                    Return _databases
                End Get
                Set(ByVal value As List(Of Database))
                    _databases = value
                End Set
            End Property

            Public Sub LoadTables()
                Dim db As New Database(Me)
                db.Name = "test"
                Databases.Add(db)
            End Sub
        End Class

        Public Class Database
            Private _server As Server
            Private _name As String

            Public Property Name() As String
                Get
                    Return _name
                End Get
                Set(ByVal value As String)
                    _name = value
                End Set
            End Property

            Public Property Server() As Server
                Get
                    Return _server
                End Get
                Set(ByVal value As Server)
                    _server = value
                End Set
            End Property

            Public Sub New(ByVal ser As Server)
                Server = ser
            End Sub
        End Class

    Fairly simple. I use it like this:

        Dim s As New Server
        s.Name = "Test"
        s.LoadTables()

    The problem is in LoadTables in the Server class. When it hits Databases.Add(db) it gives me a NullReference error (Object reference not set). I don't understand how it is getting that; all the objects are set. Any ideas? Thanks.
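
    An observation on the code above (not from the original post): _databases is declared but never assigned, so the Databases property returns Nothing and calling .Add on it throws exactly this error. Initializing the field fixes it:

        Private _databases As New List(Of Database)   ' initialized, so Databases is never Nothing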

    Read the article
