Search Results

Search found 4561 results on 183 pages for 'production'.


  • MySQL select - improve performance

    - by realshadow
    Hey, I am working on an e-shop which sells products only via loans. I display 10 products per page in any category, and each product has 3 different price tags for 3 different loan types. Everything went well during testing: query execution time was perfect. But today, when I transferred the changes to the production server, the site "collapsed" within about 2 minutes. The query used to select loan types sometimes hangs for ~10 seconds, it happens frequently, and so the server can't keep up and everything is painfully slow. The table that stores the data has approximately 2 million records, and each select looks like this:

        SELECT * FROM products_loans
        WHERE KOD IN ("X17/Q30-10", "X17/12", "X17/5-24")
          AND 369.27 BETWEEN CENA_OD AND CENA_DO;

    That is, 3 loan types and a price that has to fall between CENA_OD and CENA_DO, so 3 rows are returned. But since I need to display 10 products per page, I have to run a modified select that combines the 10 per-product conditions with OR, since I didn't find any other solution. I have asked about it here before, but got no answer. As mentioned in the referencing post, this has to be done separately, since there is no column that could be used in a join (except, of course, price and code, but that ended very, very badly). Here is the SHOW CREATE TABLE output; KOD and CENA_OD/CENA_DO are each indexed:

        CREATE TABLE `products_loans` (
          `KOEF_ID` bigint(20) NOT NULL,
          `KOD` varchar(30) NOT NULL,
          `AKONTACIA` int(11) NOT NULL,
          `POCET_SPLATOK` int(11) NOT NULL,
          `koeficient` decimal(10,2) NOT NULL default '0.00',
          `CENA_OD` decimal(10,2) default NULL,
          `CENA_DO` decimal(10,2) default NULL,
          `PREDAJNA_CENA` decimal(10,2) default NULL,
          `AKONTACIA_SUMA` decimal(10,2) default NULL,
          `TYP_VYHODY` varchar(4) default NULL,
          `stage` smallint(6) NOT NULL default '1',
          PRIMARY KEY (`KOEF_ID`),
          KEY `CENA_OD` (`CENA_OD`),
          KEY `CENA_DO` (`CENA_DO`),
          KEY `KOD` (`KOD`),
          KEY `stage` (`stage`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8

    Selecting all loan types and filtering them in PHP afterwards doesn't work well either, since each type has over 50k records and that select also takes too much time. Any ideas about improving the speed are appreciated.
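
    A sketch of two things that may help, not a tested fix (the codes and prices in the second branch are placeholders for whatever the page actually shows). A composite index lets MySQL satisfy the KOD equality and narrow the CENA_OD range within the same index, and rewriting the ten OR-combined per-product conditions as UNION ALL branches lets each branch use that index independently:

        ALTER TABLE products_loans ADD INDEX idx_kod_cena (KOD, CENA_OD, CENA_DO);

        SELECT * FROM products_loans
        WHERE KOD IN ("X17/Q30-10", "X17/12", "X17/5-24")
          AND 369.27 BETWEEN CENA_OD AND CENA_DO
        UNION ALL
        SELECT * FROM products_loans
        WHERE KOD IN ("Y01/3", "Y01/12", "Y01/24")   -- hypothetical codes for the next product
          AND 1499.00 BETWEEN CENA_OD AND CENA_DO;

    Worth verifying with EXPLAIN before and after: with single-column indexes, MySQL can generally use only one of KOD, CENA_OD or CENA_DO per branch.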


  • Javascript library: to obfuscate or not to obfuscate - that is the question

    - by morpheous
    I need to write a GUI-related JavaScript library. It will give my website a bit of an edge (in terms of functionality I can offer), at least until my competitors play with it long enough to figure out how to write it themselves. I can accept the fact that it will be emulated over time - that's par for the course (it's part of business). What I cannot bear, however, is the idea of effectively handing all the hard work that went into the library over to my competitors, by shipping plain JavaScript that anyone can download and use. It is an established fact that no one in the industry I am "attacking" has this functionality, so the value of such a library is undeniable and is not up for discussion (i.e. that's not what I'm asking here). What I am seeking to find out are the pros and cons of obfuscating a JavaScript library, so that I can come to a final decision. Two of my biggest concerns are debugging, and subtle errors that may be introduced by the obfuscator. I would like to know:

    1. How can I manage those risks (being able to debug faulty code, ensuring/minimizing obfuscation errors)?
    2. Are there any good-quality, industry-standard obfuscators you can recommend (preferably something you use yourself)?
    3. What are your experiences of using obfuscated code in a production environment?


  • Why am I getting a ParseException when using SimpleDateFormat to format a date and then parse it?

    - by Greg
    I have been debugging some existing code for which unit tests are failing on my system, but not on colleagues' systems. The root cause is that SimpleDateFormat is throwing ParseExceptions when parsing dates that should be parseable. I created a unit test that demonstrates the code that is failing on my system:

        import java.text.DateFormat;
        import java.text.ParseException;
        import java.text.SimpleDateFormat;
        import java.util.Date;
        import java.util.TimeZone;
        import junit.framework.TestCase;

        public class FormatsTest extends TestCase {
            public void testParse() throws ParseException {
                DateFormat formatter = new SimpleDateFormat("yyyyMMddHHmmss.SSS Z");
                formatter.setTimeZone(TimeZone.getDefault());
                formatter.setLenient(false);
                formatter.parse(formatter.format(new Date()));
            }
        }

    This test throws a ParseException on my system, but runs successfully on other systems:

        java.text.ParseException: Unparseable date: "20100603100243.118 -0600"
            at java.text.DateFormat.parse(DateFormat.java:352)
            at FormatsTest.testParse(FormatsTest.java:16)

    I have found that I can setLenient(true) and the test will succeed. However, setLenient(false) is what is used in the production code that this test mimics, so I don't want to change it.
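
    One way to narrow down machine-specific behaviour like this (a diagnostic sketch, not a root-cause fix): pin the locale explicitly, since the single-argument SimpleDateFormat constructor silently uses the platform default, and parse with a ParsePosition so the exact failing character is reported instead of a bare exception:

        import java.text.ParsePosition;
        import java.text.SimpleDateFormat;
        import java.util.Date;
        import java.util.Locale;

        public class ParseDiagnostic {
            public static void main(String[] args) {
                SimpleDateFormat f = new SimpleDateFormat("yyyyMMddHHmmss.SSS Z", Locale.US);
                f.setLenient(false);
                String s = f.format(new Date());
                ParsePosition pos = new ParsePosition(0);
                Date d = f.parse(s, pos);  // returns null on failure instead of throwing
                if (d == null) {
                    // getErrorIndex() pinpoints where strict parsing gave up
                    System.out.println("Failed at index " + pos.getErrorIndex() + " in \"" + s + "\"");
                } else {
                    System.out.println("Round-trip OK: " + d);
                }
            }
        }

    If the error index lands on the timezone field, comparing the JDK version and default locale between the failing and passing machines would be the next step.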


  • Left/Right/Inner joins using C# and LINQ

    - by Keith Barrows
    I am trying to figure out how to do a series of queries to get the updates, deletes and inserts segregated into their own calls. I have 2 tables, one in each of 2 databases: one is a read-only feeds database and the other is the T-SQL read/write production source. There are a few key columns in common between the two. My setup looks like this:

        List<model.AutoWithImage> feedProductList =
            _dbFeed.AutoWithImage.Where(a => a.ClientID == ClientID).ToList();

        List<model.vwCompanyDetails> companyDetailList =
            _dbRiv.vwCompanyDetails.Where(a => a.ClientID == ClientID).ToList();

        foreach (model.vwCompanyDetails companyDetail in companyDetailList)
        {
            List<model.Product> productList = _dbRiv.Product
                .Include("Company")
                .Where(a => a.Company.CompanyId == companyDetail.CompanyId)
                .ToList();
        }

    Now that I have a (source) list of products from the feed, and an existing (target) list of products from my production DB, I'd like to do 3 things:

    1. Find all SKUs in the feed that are not in the target
    2. Find all SKUs that are in both and are active feed products, and update the target
    3. Find all SKUs that are in both but are inactive, and soft-delete them from the target

    What are the best practices for doing this without running a double loop? I would prefer a LINQ-to-Objects solution, as I already have my objects. EDIT: BTW, I will need to transfer info from feed rows to target rows in the first 2 cases, and just set a flag in the last one. TIA
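
    A sketch of the three queries with LINQ to Objects, assuming both row types expose a SKU property and the feed rows expose an IsActive flag (adjust the property names to the real models). A HashSet makes the "not in target" test O(1), and a join pairs matching rows so field values can be transferred:

        var targetSkus = new HashSet<string>(productList.Select(p => p.SKU));

        // 1) In the feed but not in the target -> inserts
        var toInsert = feedProductList.Where(f => !targetSkus.Contains(f.SKU)).ToList();

        // 2) In both, active in the feed -> updates (Target/Feed pairs so data can be copied across)
        var toUpdate = (from p in productList
                        join f in feedProductList on p.SKU equals f.SKU
                        where f.IsActive
                        select new { Target = p, Feed = f }).ToList();

        // 3) In both, inactive in the feed -> soft deletes (just set the flag on these)
        var toDelete = (from p in productList
                        join f in feedProductList on p.SKU equals f.SKU
                        where !f.IsActive
                        select p).ToList();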


  • CakePHP: Missing database table

    - by Justin
    I have a CakePHP application that runs fine locally. I uploaded it to a production server, and the first page that uses a database connection gives a "Missing Database Table" error. When I look at the controller dump, it's complaining about the first table. I've tried a variety of things to fix this problem, with no luck:

    - I've confirmed that at the command line I can log in with the MySQL credentials given in database.php
    - I've confirmed this table exists
    - I've tried using the MySQL root credentials (temporarily) to see if the problem lies with the user's permissions. The same error appeared.
    - My debug level is currently set to 3
    - I've deleted the entire contents of /app/tmp/cache
    - I've set 777 permissions on /app/tmp*
    - I've confirmed that I can run DESCRIBE commands at the command-line MySQL prompt when logged in with the MySQL credentials used by the application
    - I've verified that the CakePHP log file only contains the error I'm seeing in the browser window
    - I've tried all the suggestions I could find in similar postings on SO
    - I've Googled around and didn't find any other ideas

    I think I've eliminated the obvious problems, and my research isn't turning anything up. I feel like I'm missing something obvious. Any ideas?
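
    One more diagnostic worth trying - a sketch against the CakePHP 1.x ConnectionManager API, so treat the method names as an assumption to verify against your Cake version: ask Cake itself which tables it can see through the 'default' datasource. If the table shows up at the mysql prompt but not here, the app is connecting with a different host, database or user than you think:

        <?php
        // Drop into a controller action temporarily, then remove.
        App::import('Core', 'ConnectionManager');
        $db = ConnectionManager::getDataSource('default');
        var_dump($db->config);        // the host/db/user actually in use
        var_dump($db->listSources()); // the tables visible to this connection
        ?>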


  • Close TCP connection when owner process is already killed

    - by Otiel
    I have a Windows service that, when it starts, opens some WCF services listening on port 8000. This service sometimes crashes. When it does, the TCP listener is not released, which causes my service to throw an exception if I try to start it again:

        AddressAlreadyInUseException: There is already a listener on IP endpoint 0.0.0.0:8000

    Some observations:

    - When running CurrPorts or netstat -ano, I can see that port 8000 is still in use (in LISTENING state) and is owned by the process ID XXX that corresponded to my service's process. But my service has already crashed and no longer appears in Task Manager, so I can't kill the process to free the port! Of course, running taskkill /PID XXX returns: ERROR: The process "XXX" not found.
    - When running CurrPorts or netstat -b, I can see that the process name that created the listening port is System, not MyService.exe (whereas it is MyService.exe while my service is running).
    - I tried to use CurrPorts to close the connection, but I always get the following error message: "Failed to close one or more TCP connections. Be aware that you must run this tool as admin in order to close TCP connections." (Needless to say, I do run CurrPorts as Administrator...)
    - TCPView is not much help either: the process name associated with port 8000 is <non-existent>, and "End process" or "Close connection" has no effect.
    - I tried to see if there was a child process associated with PID XXX using Process Explorer, but no luck there.
    - If I close my service properly (before it crashes), the TCP listener is correctly released. This is expected, as I close the WCF service hosts in the OnStop() event of my service.

    The only way I have found to release the port is to restart the server, which is not convenient in a production environment, as you can guess. Waiting does not help; the port is never released. How can I close the connection without restarting the Windows server? PS: I have found some questions extremely similar to mine.
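
    This won't free an already-orphaned port, but as a defensive sketch for the service itself (serviceHost here stands in for however the WCF hosts are actually stored): closing or aborting the hosts from a last-chance handler means a crash tears the listener down the same way OnStop() does:

        AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
        {
            try
            {
                if (serviceHost != null && serviceHost.State == CommunicationState.Opened)
                    serviceHost.Close();   // graceful shutdown, releases the listener
            }
            catch
            {
                serviceHost.Abort();       // Abort() does not throw and tears down immediately
            }
        };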


  • Unity Framework constructor parameters in MVC

    - by ubersteve
    I have an ASP.NET MVC3 site that I want to be able to use different types of email service, depending on how busy the site is. Consider the following:

        public interface IEmailService
        {
            void SendEmail(MailMessage mailMessage);
        }

        public class LocalEmailService : IEmailService
        {
            public LocalEmailService()
            {
                // no setup required
            }

            public void SendEmail(MailMessage mailMessage)
            {
                // send email via local smtp server, write it to a text file, whatever
            }
        }

        public class BetterEmailService : IEmailService
        {
            public BetterEmailService(string smtpServer, string portNumber, string username, string password)
            {
                // initialize the object with the parameters
            }

            public void SendEmail(MailMessage mailMessage)
            {
                // actually send the email
            }
        }

    Whilst the site is in development, all of my controllers will send emails via the LocalEmailService; when the site is in production, they will use the BetterEmailService. My question is twofold:

    1. How exactly do I pass the BetterEmailService constructor parameters? Is it something like this (from ~/Bootstrapper.cs)?

        private static IUnityContainer BuildUnityContainer()
        {
            var container = new UnityContainer();
            container.RegisterType<IEmailService, BetterEmailService>("server name", "port", "username", "password");
            return container;
        }

    2. Is there a better way of doing that - i.e. putting those keys in the web.config or another configuration file, so that the site would not need to be recompiled to switch which email service it uses?

    Many thanks!
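
    A sketch of both halves, with the caveat that the appSettings key names are illustrative: Unity takes constructor arguments through an InjectionConstructor (bare strings passed to RegisterType would not compile as constructor parameters), and reading the values from web.config makes switching email services a configuration change rather than a recompile:

        using System.Configuration;
        using Microsoft.Practices.Unity;

        private static IUnityContainer BuildUnityContainer()
        {
            var container = new UnityContainer();

            // InjectionConstructor supplies the four constructor arguments;
            // the appSettings keys are hypothetical names.
            container.RegisterType<IEmailService, BetterEmailService>(
                new InjectionConstructor(
                    ConfigurationManager.AppSettings["SmtpServer"],
                    ConfigurationManager.AppSettings["SmtpPort"],
                    ConfigurationManager.AppSettings["SmtpUser"],
                    ConfigurationManager.AppSettings["SmtpPassword"]));

            return container;
        }

    The same idea extends to choosing the implementation itself from configuration, e.g. registering LocalEmailService when a development flag is set.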


  • Unable to turn off notice errors in PHP 5.3.2

    - by knkk
    Hi everyone, I recently migrated to PHP 5.3.2, and realized that I am unable to turn off notice errors on my site now. I went to php.ini, which has these lines:

        ; Common Values:
        ;   E_ALL & ~E_NOTICE              (Show all errors, except for notices and coding standards warnings.)
        ;   E_ALL & ~E_NOTICE | E_STRICT   (Show all errors, except for notices)
        ;   E_COMPILE_ERROR|E_RECOVERABLE_ERROR|E_ERROR|E_CORE_ERROR  (Show only errors)
        ;   E_ALL | E_STRICT               (Show all errors, warnings and notices including coding standards.)
        ; Default Value: E_ALL & ~E_NOTICE
        ; Development Value: E_ALL | E_STRICT
        ; Production Value: E_ALL & ~E_DEPRECATED
        ; http://php.net/error-reporting
        error_reporting = E_ALL & ~E_NOTICE

    I've tried setting everything (and I restart Apache each time), but I am unable to get rid of notices. The only way I can get rid of notice errors is by setting:

        display_errors = Off

    That is, of course, not something I can do, since I need to see errors to fix them, and I would like to see errors on the web page I am coding rather than log them somewhere. Can someone help? Is this a bug in PHP 5.3.2 or something I am doing wrong? Thank you very much for your time! P.S. Also, would anyone know how I can get PHP 5.3.2 to support the .php3 extension?
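
    A quick way to separate "wrong setting" from "wrong php.ini" - a sanity-check sketch, not a fix: override the value at runtime at the top of a script. If notices disappear now but not when editing php.ini, Apache is loading a different ini file; phpinfo() reports the actual path under "Loaded Configuration File", and a later duplicate error_reporting line, or a php_value in .htaccess or the vhost, can also override the one being edited:

        <?php
        error_reporting(E_ALL & ~E_NOTICE); // runtime override wins over php.ini
        ini_set('display_errors', '1');
        echo 'error_reporting is now: ' . error_reporting();
        ?>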


  • How to test method call order with Moq

    - by Finglas
    At the moment I have:

        [Test]
        public void DrawDrawsAllScreensInTheReverseOrderOfTheStack()
        {
            // Arrange.
            var screenMockOne = new Mock<IScreen>();
            var screenMockTwo = new Mock<IScreen>();

            var screens = new List<IScreen>();
            screens.Add(screenMockOne.Object);
            screens.Add(screenMockTwo.Object);

            var stackOfScreensMock = new Mock<IScreenStack>();
            stackOfScreensMock.Setup(s => s.ToArray()).Returns(screens.ToArray());

            var screenManager = new ScreenManager(stackOfScreensMock.Object);

            // Act.
            screenManager.Draw(new Mock<GameTime>().Object);

            // Assert.
            screenMockOne.Verify(smo => smo.Draw(It.IsAny<GameTime>()),
                Times.Once(), "Draw was not called on screen mock one");
            screenMockTwo.Verify(smo => smo.Draw(It.IsAny<GameTime>()),
                Times.Once(), "Draw was not called on screen mock two");
        }

    But the order in which I draw my objects in the production code does not matter to this test: I could draw one first, or two first, and it would still pass. However, it should matter, as the draw order is important. How do you (using Moq) ensure methods are called in a certain order?

    Edit: I got rid of that test. The Draw method has been removed from my unit tests; I'll just have to test manually that it works. The reversing of the order, though, was moved into a separate test class where it is tested, so it's not all bad. Thanks for the link about the feature they are looking into; I sure hope it gets added soon - very handy.
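
    For reference, a sketch of how the ordered assertion can be expressed without built-in framework support, using Moq's Callback to record the sequence in which the mocked Draw calls actually fire (CollectionAssert here assumes NUnit):

        var callOrder = new List<string>();

        screenMockOne.Setup(s => s.Draw(It.IsAny<GameTime>()))
                     .Callback(() => callOrder.Add("one"));
        screenMockTwo.Setup(s => s.Draw(It.IsAny<GameTime>()))
                     .Callback(() => callOrder.Add("two"));

        screenManager.Draw(new Mock<GameTime>().Object);

        // Reverse order of the stack: screen two must draw before screen one.
        CollectionAssert.AreEqual(new[] { "two", "one" }, callOrder);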


  • Max Daily Budget exceeded and Billing Status "Changing Daily Budget"

    - by draftpik
    We've exceeded the Max Daily Budget for our app, but we can't increase the budget due to a serious flaw in Google's billing system. Google App Engine and Google Wallet do not have very capable support for multiple sign-in. As a result, when I went to change the budget, it used the wrong Google Wallet account (a different Google Account I was signed in as). I had to go back and try again, but now our GAE app shows the following status:

        Billing Status: Changing Daily Budget
        Your account has been locked while we process your budget changes. If you were redirected
        to Google Checkout but did not complete the process, your settings will remain unchanged.
        (You will be able to make changes to your budget settings again once the outstanding
        payment is processed.)

    Now I'm completely prevented from making any billing changes, our app is shut off (over quota), and there is nothing I can do to fix it. This is a fundamental flaw in App Engine's billing system and its Google Wallet integration. Has anyone run into this before? Is there a workaround anyone is aware of? Right now, our production app is completely down thanks to this issue, so any help you can offer would be greatly appreciated. If you're from Google and you might be able to help on the backend, our app id is "nhldraftpik". Thanks! Brian


  • How are you using C++0x today? [closed]

    - by Roger Pate
    This is a question in two parts; the first is the most important and concerns the present:

    - Are you following the design and evolution of C++0x? What blogs, newsgroups, committee papers, and other resources do you follow?
    - Even where you're not using any new features, how have they affected your current choices?
    - What new features are you using now, either in production or otherwise?

    The second part is a follow-up, concerning the new standard once it is final:

    - Do you expect to use it immediately? What are you doing to prepare for C++0x, beyond what the previous questions cover? Obviously, compiler support must be there, but there are still co-workers, ancillary tools, and other factors to consider. What will most affect your adoption?

    Edit: The original really was too argumentative; however, I'm still interested in the underlying question, so I've tried to clean it up and hopefully make it acceptable. This seems a much better avenue than duplicating - even though some answers responded to the argumentative tone, they still apply to the extent that they addressed the questions, and all answers are community property to be cleaned up as appropriate, too.


  • Is Stream.Write thread-safe?

    - by Mike Spross
    I'm working on a client/server library for a legacy RPC implementation and was running into issues where the client would sometimes hang when waiting to receive a response to an RPC request message. It turns out the real problem was in my message-framing code (I wasn't handling message boundaries correctly when reading data off the underlying NetworkStream), but it also made me suspicious of the code I was using to send data across the network, specifically in the case where the RPC server sends a large amount of data to a client as the result of a client RPC request.

    My send code uses a BinaryWriter to write a complete "message" to the underlying NetworkStream. The RPC protocol also implements a heartbeat algorithm, where the RPC server sends out PING messages every 15 seconds. The pings are sent by a separate thread, so, at least in theory, a ping can be sent while the server is in the middle of streaming a large response back to a client. Suppose I have a Send method as follows, where stream is a NetworkStream:

        public void Send(Message message)
        {
            // Write the message to a temporary stream so we can send it all at once.
            MemoryStream tempStream = new MemoryStream();
            message.WriteToStream(tempStream);

            // Write the serialized message to the stream. The BinaryWriter is a
            // little redundant in this simplified example, but is here because
            // the production code uses it.
            byte[] data = tempStream.ToArray();
            BinaryWriter bw = new BinaryWriter(stream);
            bw.Write(data, 0, data.Length);
            bw.Flush();
        }

    So the question I have is: is the call to bw.Write (and by implication the call to the underlying Stream's Write method) atomic? That is, if a lengthy Write is still in progress on the sending thread, and the heartbeat thread kicks in and sends a PING message, will that thread block until the original Write call finishes, or do I have to add explicit synchronization to the Send method to prevent the two Send calls from clobbering the stream?
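
    For what it's worth, MSDN documents NetworkStream as safe only for one concurrent reader plus one concurrent writer, so two threads writing at once can interleave their bytes. A sketch of the explicit synchronization, assuming both the response path and the heartbeat thread send through this method:

        private readonly object _sendLock = new object();

        public void Send(Message message)
        {
            MemoryStream tempStream = new MemoryStream();
            message.WriteToStream(tempStream);
            byte[] data = tempStream.ToArray();

            // Serialize all senders over one lock so a PING can never be
            // spliced into the middle of a large response message.
            lock (_sendLock)
            {
                stream.Write(data, 0, data.Length);
                stream.Flush();
            }
        }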


  • Strange build issue using Flex Mojo. Looking for troubleshooting suggestions.

    - by WeeJavaDude
    I have run into a strange issue and I was hoping for some suggestions on how to attack the problem. Here is the environment:

    1. We develop locally using Flex Builder.
    2. We use QuickBuild with FlexMojo 3.4.2 for test builds and production.
    3. In both cases we don't believe optimization is enabled.

    What we are seeing is some strange behavior relating to the Ctrl-Enter key when testing on IE, only in our test environment and not locally. By copying some files over locally, I have narrowed the issue down to differences in the SWF files; we do see a difference in the size of the SWF files in our test environment vs. our local environments. A couple of things would help me in troubleshooting:

    1. Is there a way to know what exactly is in the SWF file - which SWCs are included?
    2. How does one compare compile settings between a Maven mojo configuration and the Flex IDE environment?

    Any thoughts or opinions would be very helpful.


  • Manually filling opcode cache for entire app using apc_compile_file, then switching to new release.

    - by Ben
    Does anyone have a great system, or any ideas, for doing as the title says? I want to switch the production version of a web app - written in PHP and served by Apache - from release 1234 to release 1235, but before that happens, have all files already in the opcode cache (APC). Then, after the switch, remove the old cache entries for files from release 1234.

    As far as I can think of, there are three easy ways of atomically switching from one version to the next:

    1. Have a symbolic link, for example /live, that is always the document root but is changed to point from one version to the next.
    2. Similarly, have a directory /live that is always the document root, but use mv live oldversion && mv newversion live to switch to the new version.
    3. Edit the Apache configuration to change the document root to newversion, then restart Apache.

    I think it is preferable not to have to do 3, but I can't think of any way to precompile all PHP files AND use 1 or 2 to switch releases. So can someone either convince me it's okay to rely on option 3, tell me how to work with 1 or 2, or reveal some other option I am not thinking of?
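
    A warm-up sketch for options 1 or 2 - the release path is hypothetical, and this assumes apc.stat is off so entries are keyed by path and never re-validated against file mtimes: walk the new release and feed every PHP file to apc_compile_file before the symlink flips, so the first real request already hits a warm cache:

        <?php
        $release = '/var/www/releases/1235'; // hypothetical path to the new release

        $it = new RecursiveIteratorIterator(new RecursiveDirectoryIterator($release));
        foreach ($it as $file) {
            if ($file->isFile() && substr($file->getFilename(), -4) === '.php') {
                apc_compile_file($file->getPathname());
            }
        }
        // Old entries can afterwards be dropped per file with apc_delete_file(),
        // or wholesale with apc_clear_cache().
        ?>

    One caveat to test: the cache key must match the path PHP resolves at request time, so whether includes resolve through the symlink or through the real release directory matters.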


  • Set "Start With" value for Oracle sequence dynamically

    - by Allan
    I'm trying to create a release script that can be deployed on multiple databases, but where the data can be merged back together at a later date. The obvious way to handle this is to set the sequence numbers for production data sufficiently high in subsequent deployments to prevent collisions. The problem is in coming up with a release script that will accept the environment number and set the "Start With" value of the sequences appropriately. Ideally, I'd like to use something like this:

        ACCEPT EnvironNum PROMPT 'Enter the Environment Number: '
        --[more scripting]
        CREATE SEQUENCE seq1 START WITH &EnvironNum*100000;
        --[more scripting]

    This doesn't work, because you can't evaluate a numeric expression in DDL. Another option is to create the sequences using dynamic SQL via PL/SQL:

        ACCEPT EnvironNum PROMPT 'Enter the Environment Number: '
        --[more scripting]
        EXEC execute immediate 'CREATE SEQUENCE seq1 START WITH ' || &EnvironNum*100000;
        --[more scripting]

    However, I'd prefer to avoid this solution, as I generally try to avoid issuing DDL from PL/SQL. Finally, the third option I've come up with is simply to accept the Start With value as a substitution variable instead of the environment number. Does anyone have a better thought on how to go about this?
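
    A fourth option that stays in plain SQL*Plus, sketched below: evaluate the arithmetic in a query and capture the result back into a substitution variable with COLUMN ... NEW_VALUE, so the DDL only ever sees a literal:

        ACCEPT EnvironNum PROMPT 'Enter the Environment Number: '

        COLUMN start_with NEW_VALUE SeqStart NOPRINT
        SELECT &EnvironNum * 100000 AS start_with FROM dual;

        CREATE SEQUENCE seq1 START WITH &SeqStart;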


  • "You have already activated" message even when using bundle exec

    - by juanpastas
    I am installing the gems in my Gemfile under the shared path, as Capistrano does by default, and when I run:

        bundle exec rake assets:precompile RAILS_ENV=production

    I get:

        You have already activated rake 0.9.2.2, but your Gemfile requires rake 10.0.4.
        Using bundle exec may solve this.

    Note that:

        cat Gemfile.lock | grep rake

    returns:

        rake (>= 0.8.7)
        rake (10.0.4)

    This is my gem environment output:

        - RUBYGEMS VERSION: 1.8.24
        - RUBY VERSION: 1.9.3 (2013-06-27 patchlevel 448) [x86_64-linux]
        - INSTALLATION DIRECTORY: /home/bitnami/my_app/shared/bundle/ruby/1.9.1/
        - RUBY EXECUTABLE: /opt/bitnami/ruby/bin/ruby
        - EXECUTABLE DIRECTORY: /home/bitnami/my_app/shared/bundle/ruby/1.9.1/bin
        - RUBYGEMS PLATFORMS:
          - ruby
          - x86_64-linux
        - GEM PATHS:
          - /home/bitnami/my_app/shared/bundle/ruby/1.9.1/
        - GEM CONFIGURATION:
          - :update_sources => true
          - :verbose => true
          - :benchmark => false
          - :backtrace => false
          - :bulk_threshold => 1000
          - "gemhome" => "/home/bitnami/my_app/shared/bundle/ruby/1.9.1/"
          - "gempath" => ["/home/bitnami/my_app/shared/bundle/ruby/1.9.1/"]
        - REMOTE SOURCES:
          - http://rubygems.org/

    Update: which -a rake gives:

        /opt/bitnami/rvm/bin/rake
        /opt/bitnami/ruby/bin/rake

    Update 2: I tried giving the full path to rake, but I get the same problem.
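
    The lock file pinning two different rake requirements (>= 0.8.7 and 10.0.4) is the usual trigger for this message, so one hedged sequence to try - not a guaranteed fix - is to regenerate the lock against a single rake and confirm which rake binary bundler actually resolves:

        bundle update rake
        bundle exec which rake     # should resolve inside shared/bundle, not /opt/bitnami/ruby/bin
        bundle exec rake assets:precompile RAILS_ENV=production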


  • PropertyPlaceholderConfigurer vs Filters -- Spring Beans

    - by John
    Hi there. I've got a question regarding the difference between PropertyPlaceholderConfigurer (org.springframework.beans.factory.config.PropertyPlaceholderConfigurer) and normal filters defined in my pom.xml. I've been looking at examples, and it seems that even though filters are defined and marked as active by default in the pom.xml, they still make use of PropertyPlaceholderConfigurer in Spring's applicationContext.xml. This means the pom.xml has a reference to a filter-LOCAL.properties while applicationContext.xml has a reference to application.properties, and they both contain the same settings. Why is that? Is that how it is supposed to be done? I'm able to run the goal mvn jetty:run without application.properties present, but if I add settings to application.properties that differ from filter-LOCAL.properties, they don't seem to override. Here's an example of what I mean.

    pom.xml:

        <profiles>
          <profile>
            <id>LOCAL</id>
            <activation>
              <activeByDefault>true</activeByDefault>
            </activation>
            <properties>
              <env>LOCAL</env>
            </properties>
          </profile>
        </profiles>

    applicationContext.xml:

        <bean id="propertyConfigurer"
              class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
          <property name="locations">
            <list>
              <value>classpath:application.properties</value>
            </list>
          </property>
          <property name="ignoreResourceNotFound" value="true"/>
        </bean>

        <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
          <property name="driverClassName" value="${jdbc.driver}"/>
          <property name="url" value="${jdbc.url}"/>
          <property name="username" value="${jdbc.username}"/>
          <property name="password" value="${jdbc.password}"/>
        </bean>

    An example of the content of application.properties and filters-LOCAL.properties:

        jdbc.driver=org.postgresql.Driver
        jdbc.url=jdbc:postgresql://localhost/shoutbox_dev
        jdbc.username=tester
        jdbc.password=tester

    Can I remove the propertyConfigurer from the applicationContext, create a PROD filter, and disregard the application.properties file, or will that give me issues when deploying to the production server?


  • Alternative or successor to GDBM

    - by Anon Guy
    We have a GDBM key-value database as the backend to a load-balanced, web-facing application implemented in C++. The data served by the application has grown very large, so our admins have moved the GDBM files from "local" storage (on the webservers, or very close by) to a large, shared, remote, NFS-mounted filesystem. This has affected performance. Our performance tests (in a test environment) show page load times jumping from hundreds of milliseconds (for local disk) to several seconds (over NFS, local network), sometimes getting as high as 30 seconds. I believe a large part of the problem is that the application makes lots of random reads from the GDBM files, that these are slow over NFS, and that this will be even worse in production (where the front-end and back-end have even more network hardware between them) and as our database grows even bigger. While this is not a critical application, I would like to improve performance, and I have some resources available, including application-developer time and Unix admins. My main constraint is time: I only have the resources for a few weeks. As I see it, my options are:

    1. Improve NFS performance by tuning parameters. My instinct is that we won't get much out of this, but I have been wrong before, and I don't really know very much about NFS tuning.
    2. Move to a different key-value database, such as memcachedb or Tokyo Cabinet.
    3. Replace NFS with some other protocol (iSCSI has been mentioned, but I am not familiar with it).

    How should I approach this problem?


  • Servlet send image from server and save in client

    - by sangi
    Hi, I'm new to, and just starting to develop on, J2EE. I am modifying an existing application (an open-source project). I need to have the server send an image that is then saved on the client, but I do not know how. This must be done transparently, without affecting the existing operation of the application. From the tests I have done, I get this error:

        java.lang.IllegalStateException: getWriter() has already been called for this response

    How should I carry out this task, in your opinion? How do I save the image locally on the client?

    Update: Thanks for the answers. My problem is that the image is generated on the server, but not in response to a direct client request (there is no link to click on the web page); the image is composed on the server using other services on the Internet. This image must then be sent to the client to be saved locally, so I'd like a window to appear in which the user chooses the destination for the image. Also, I'd like the rest of the application to be unaffected by this activity; the application is already in production. Thank you very much for your response.
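
    A sketch of the download side (class and method names hypothetical): the key points are that a servlet response is either byte-oriented or character-oriented - getOutputStream() and getWriter() are mutually exclusive, which is exactly what the IllegalStateException is saying - and that a Content-Disposition of "attachment" is what makes the browser show a save dialog instead of rendering the image inline:

        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class ImageDownloadServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                resp.setContentType("image/png");
                resp.setHeader("Content-Disposition", "attachment; filename=\"result.png\"");

                InputStream in = composeImage(); // hypothetical: the server-side composition step
                OutputStream out = resp.getOutputStream(); // never call getWriter() on this response
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
                in.close();
            }

            private InputStream composeImage() throws IOException {
                throw new UnsupportedOperationException("compose the image from the other services here");
            }
        }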


  • What happens to existing workspaces after upgrading to TFS 2010

    - by e-mre
    Hi, I was looking for some insight into what happens to existing workspaces, and to files that are already checked out, after an upgrade to TFS 2010. Surprisingly enough, I cannot find any satisfactory information on this. (I am talking about upgrading on new hardware, by the way: a fresh TFS instance with upgraded databases.) I've checked the TFS installation guide and searched the web, and all I can find are upgrade scenarios for the server side; nobody even mentions what happens to source-control clients. I created a virtual machine to test the upgrade process. The upgrade was successful, and all my files and workspaces exist on the new server too. The problem is that the new TFS installation has a new instance ID. When I redirected the clients to the new server, the client seemed unable to match files and file states in the workspace with the ones on the new server. This makes me wonder whether it will be possible to keep working after the production upgrade. As I mentioned above, I cannot find anything on this; it would be great if anyone could point me to a paper or blog post about it. Thanks in advance...


  • Is there a practical benefit to casting a NULL pointer to an object and calling one of its member functions?

    - by zdawg
    OK, so I know that technically this is undefined behavior, but nonetheless, I've seen it more than once in production code. And please correct me if I'm wrong, but I've also heard that some people use this "feature" as a somewhat legitimate substitute for a lacking aspect of the current C++ standard, namely the inability to obtain the address (well, offset really) of a member function. For example, this is out of a popular implementation of a PCRE (Perl-compatible Regular Expression) library:

        #ifndef offsetof
        #define offsetof(p_type,field) ((size_t)&(((p_type *)0)->field))
        #endif

    One can debate whether the exploitation of such a language subtlety in a case like this is valid or not, or even necessary, but I've also seen it used like this:

        struct Result
        {
            void stat()
            {
                if (this)
                {
                    // do something...
                }
                else
                {
                    // do something else...
                }
            }
        };

        // ...somewhere else in the code...
        ((Result*)0)->stat();

    This works just fine! It avoids a null-pointer dereference by testing for the existence of this, and it does not try to access class members in the else block. So long as these guards are in place, it's legitimate code, right? So the question remains: is there a practical use case where one would benefit from using such a construct? I'm especially concerned about the second case, since the first is more of a workaround for a language limitation. Or is it? PS. Sorry about the C-style casts; unfortunately, people still prefer to type less if they can.
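
    For the first case, the language does provide sanctioned tools; a sketch of both (a standard-layout type is assumed for offsetof):

        #include <cstddef>   // the implementation's own offsetof
        #include <cstdio>

        struct Packet {
            int  header;
            char payload[16];
        };

        // A pointer-to-member names "which member" without any object,
        // and involves no null dereference.
        void print_member(const Packet& p, char (Packet::*field)[16]) {
            std::printf("%s\n", p.*field);
        }

        int main() {
            // No hand-rolled (T*)0 cast needed for field offsets.
            std::printf("payload at offset %zu\n", offsetof(Packet, payload));

            Packet p = { 1, "hello" };
            print_member(p, &Packet::payload);
            return 0;
        }

    For the second case, the defined alternative to an if (this) guard is simply a free function or static member that takes a possibly-null pointer, so no call through a null this ever happens.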


  • How to (unit-)test data intensive PL/SQL application

    - by doom2.wad
    Our team is willing to unit-test new code written under a running project that extends an existing, huge Oracle system. The system is written solely in PL/SQL and consists of thousands of tables and hundreds of stored-procedure packages, mostly getting data from tables and/or inserting/updating other data. Our extension is no exception: most functions return data from a quite complex SELECT statement over many mutually bound tables (with a little added logic before returning it), or transform one complicated data structure into another (complicated in a different way). What is the best approach to unit-testing such code?

    There are no unit tests for the existing code base. To make things worse, only packages, triggers and views are source-controlled; table structures (including "alter table" stuff and the necessary data transformations) are deployed via a channel other than version control. There is no way to change this within our project's scope. Maintaining a test data set seems impossible, since new code is deployed to the production environment on a weekly basis, usually without prior notice, often changing the data structure (add a column here, remove one there).

    I'd be glad for any suggestion or reference to help us. Some team members tend to be worn down just by figuring out how to start, as our experience with unit testing does not cover data-intensive PL/SQL legacy systems (only those "from-the-book" greenfield Java projects).
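
    A minimal self-contained pattern that survives uncontrolled schema churn reasonably well (a sketch: the table, column and package names are hypothetical): stage fixture rows inside the test's own transaction, run the unit under test, assert, and roll back so the shared schema never sees the test data. Frameworks such as utPLSQL formalize exactly this insert-call-assert-rollback cycle:

        DECLARE
            v_count NUMBER;
        BEGIN
            SAVEPOINT before_test;

            INSERT INTO orders (order_id, status) VALUES (-1, 'NEW'); -- fixture row

            order_pkg.process_order(p_order_id => -1);                -- unit under test

            SELECT COUNT(*) INTO v_count
              FROM orders
             WHERE order_id = -1 AND status = 'PROCESSED';

            IF v_count = 1 THEN
                DBMS_OUTPUT.PUT_LINE('PASS');
            ELSE
                DBMS_OUTPUT.PUT_LINE('FAIL: expected 1 processed row, got ' || v_count);
            END IF;

            ROLLBACK TO before_test; -- leave no trace in the shared schema
        END;
        /

    One caveat: this only works for code that does not COMMIT internally.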


  • Managing multiple customer databases in ASP.NET MVC application

    - by Robert Harvey
    I am building an application that requires separate SQL Server databases for each customer. To achieve this, I need to be able to create a new customer folder, put a copy of a prototype database in the folder, change the name of the database, and attach it as a new database instance in SQL Server. The prototype database contains all of the required table, field and index definitions, but no data records. I will be using SMO to manage attaching, detaching and renaming the databases. In the process of creating the prototype database, I tried attaching a copy of the database (a companion .MDF/.LDF pair) to SQL Server using SQL Server Management Studio, and discovered that SSMS expects the database to reside in:

        C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\MyDatabaseName.MDF

    Is this a "feature" of SQL Server? Is there a way to manage individual databases in separate directories, or am I going to have to put all of the customer databases in the same directory? (I was hoping for a little better control than this.) NOTE: I am currently using SQL Server Express, but for testing purposes only. The production database will be SQL Server 2008, Enterprise edition, so "User Instances" are not an option.
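
    That path is the default location rather than a requirement: an attach can name any path the SQL Server service account can read (on full editions; Express "User Instances" are the restricted case). A T-SQL sketch with illustrative paths - essentially what SMO's Server.AttachDatabase issues under the covers:

        CREATE DATABASE CustomerAcme
        ON (FILENAME = 'D:\Customers\Acme\CustomerAcme.mdf'),
           (FILENAME = 'D:\Customers\Acme\CustomerAcme_log.ldf')
        FOR ATTACH;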


  • SQL Server INSERT, Scope_Identity() and physical writing to disc

    - by TheBlueSky
    Hello everyone, I have a stored procedure that does, among other things, some inserts into different tables inside a loop. See the example below for clearer understanding:

        INSERT INTO T1 VALUES ('something')
        SET @MyID = Scope_Identity()

        -- ... some stuff go here

        INSERT INTO T2 VALUES (@MyID, 'something else')

        -- ... the rest of the procedure

    These two tables (T1 and T2) each have an IDENTITY(1, 1) column; let's call them ID1 and ID2. Normally, for each record inserted into T1 there is a record inserted into T2, and the identity column in both is incremented consistently. However, after running the procedure in our production database (a very busy database) and having more than 6250 records in each table, I noticed one incident where ID1 does not match ID2! The "wrong" records looked something like this:

        ID1   Col1
        ----  ---------
        4709  data-4709
        4710  data-4710

        ID2   ID1   Col1
        ----  ----  ---------
        4709  4710  data-4709
        4710  4709  data-4710

    Note the "inverted" ID1 in the second table. Not knowing that much about SQL Server's underlying operations, I have put together the following "theory"; maybe someone can correct me on it. What I think is that because the loop is faster than the physical write to the table - and/or maybe some other thing delayed the writing process - the records were buffered, and when it came time to write them, they were written in no particular order.

    Is that even possible? If not, how can the scenario above be explained? If yes, then I have another question: what if the first insert (from the code above) got delayed? Doesn't that mean I won't get the correct IDENTITY to insert into the second table? If the answer to this is also yes, what can I do to ensure the insertion into the two tables happens in sequence with the correct IDENTITY? I appreciate any comments and information that help me understand this. Thanks in advance.
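
    One plausible explanation that involves no buffering at all: two executions overlapping. Session A inserts into T1 and gets 4709; session B inserts into T1 and gets 4710; B then reaches the T2 insert first and draws T2's identity 4709. Each session's SCOPE_IDENTITY() is still correct - what is never guaranteed is that two independent identity counters advance in lock-step. The safe pattern is therefore to relate the rows by the captured key, never by matching counters; a sketch:

        BEGIN TRANSACTION;

        DECLARE @MyID BIGINT;

        INSERT INTO T1 VALUES ('something');
        SET @MyID = SCOPE_IDENTITY();   -- correct for this session, regardless of concurrency

        -- T2 rows reference T1 through @MyID; whether ID2 happens to equal ID1
        -- is irrelevant and should not be relied upon.
        INSERT INTO T2 VALUES (@MyID, 'something else');

        COMMIT TRANSACTION;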


  • Coding Practices which enable the compiler/optimizer to make a faster program.

    - by EvilTeach
    Many years ago, C compilers were not particularly smart. As a workaround, K&R invented the register keyword to hint to the compiler that it might be a good idea to keep a given variable in an internal register. They also made the ternary operator to help generate better code.

    As time passed, the compilers matured. They became very smart, in that their flow analysis allows them to make better decisions about which values to hold in registers than you could possibly make yourself. The register keyword became unimportant.

    FORTRAN can be faster than C for some sorts of operations, due to aliasing issues. In theory, with careful coding, one can get around this restriction to enable the optimizer to generate faster code.

    What coding practices are available that may enable the compiler/optimizer to generate faster code? Identifying the platform and compiler you use would be appreciated. Why does the technique seem to work? Sample code is encouraged. Here is a related question. [Edit] This question is not about the overall process of profiling and optimizing. Assume that the program has been written correctly, compiled with full optimization, tested, and put into production. There may be constructs in your code that prohibit the optimizer from doing the best job it can. What can you do to refactor the code that will remove these prohibitions and allow the optimizer to generate even faster code? [Edit] Offset related link
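
    On the FORTRAN/aliasing point specifically, C99 offers a direct tool: the restrict qualifier. A small sketch (most C compilers accept it; in C++ the spelling is typically the non-standard __restrict):

        /* Without restrict, the compiler must assume out may overlap a or b
           and reload them on every iteration; with it, values can stay in
           registers and the loop is easier to vectorize. */
        void saxpy(int n, float k,
                   const float *restrict a,
                   const float *restrict b,
                   float *restrict out)
        {
            for (int i = 0; i < n; i++) {
                out[i] = k * a[i] + b[i];
            }
        }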

