Search Results

Search found 2636 results on 106 pages for 'transaction isolation'.

Page 42/106

  • How should flushing be handled in a doctrine EntityManager instance shared across different services in symfony2?

    - by Jbm
    I have defined several services in symfony 2 which persist changes to the database. These services have the doctrine instance as one of their dependencies: a.given.service: class: Acme\TestBundle\Service\AGivenService arguments: [@doctrine] If I have two different services and both of them persist objects through the EntityManager, which is obtained like this from the doctrine instance: $em = $doctrine->getEntityManager(); Would all services always share the same EntityManager? If so, how should I handle flushing if I wanted to handle all the changes in a single transaction? I have checked this: http://docs.doctrine-project.org/projects/doctrine-orm/en/2.0.x/reference/transactions-and-concurrency.html and it explains how to handle different transactions in a request, but I want to achieve the opposite, which is having different changes in different services handled as a single transaction. Is there a better approach to handle multiple changes in different services? For now my best bet is having a front-end service in charge of calling the other services and doing the flushing afterwards. Backend services would persist objects but would not do any flushing.

    Read the article

  • How should I deal with sqlite errors?

    - by Dustin
    I have a long running application written in a mix of C and C++ that stores data in sqlite. While I am confident that committed data will remain available (barring mechanical failure) and uncommitted data will not be, it's not clear to me what I can do with this sort of middle state. I do a large number of inserts in a transaction and then commit it. When an error occurs on a given statement, I can schedule it to be attempted at some point in the future. It sounds like some errors might implicitly rollback my transaction (which would be undesirable if true). A larger problem is what happens when my commit itself fails. Currently, I'm just going to continue to retry it until it works. I would expect that whatever would cause my commit to fail may very well also cause a rollback to fail. What is the recommended mechanism for error handling in such a situation?

    Read the article

  • calling hibernate callback

    - by vrkmurali
    HibernateCallback callback=new HibernateCallback() { @Override public Object doInHibernate(Session session) throws HibernateException, SQLException { Transaction transaction=session.beginTransaction(); Query query2 = session.createSQLQuery( "select user_id,user_name from usermasterdao where user_id not in('select usermaster_id from LoginHistoryDAO where logindate between :fromdate and :todate')"); ((SQLQuery) query2).addEntity(UserMasterDAO.class); //query.setParameter("stockCode", "7277"); query2.setParameter("todate", toDate); query2.setParameter("fromdate", fromDate); List result2 = query2.list(); listofNotUsing=result2; System.out.println(result2.size()+"sizeeee"); transaction.commit(); return result2;}} while executing the command getting error like as follows com.vaadin.event.ListenerMethod$MethodException: Invocation of method notUsingButton in com.iton.ioffice.admin.LoginHistory failed. at com.vaadin.event.ListenerMethod.receiveEvent(ListenerMethod.java:530) at com.vaadin.event.EventRouter.fireEvent(EventRouter.java:164) at com.vaadin.ui.AbstractComponent.fireEvent(AbstractComponent.java:1219) at com.vaadin.ui.Button.fireClick(Button.java:567) at com.vaadin.ui.Button.changeVariables(Button.java:223) at com.vaadin.terminal.gwt.server.AbstractCommunicationManager.changeVariables(AbstractCommunicationManager.java:1460) at com.vaadin.terminal.gwt.server.AbstractCommunicationManager.handleVariableBurst(AbstractCommunicationManager.java:1404) at com.vaadin.terminal.gwt.server.AbstractCommunicationManager.handleVariables(AbstractCommunicationManager.java:1329) at com.vaadin.terminal.gwt.server.AbstractCommunicationManager.doHandleUidlRequest(AbstractCommunicationManager.java:761) at com.vaadin.terminal.gwt.server.CommunicationManager.handleUidlRequest(CommunicationManager.java:318) at com.vaadin.terminal.gwt.server.AbstractApplicationServlet.service(AbstractApplicationServlet.java:501) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:602) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489) at java.lang.Thread.run(Thread.java:619) Caused by: org.springframework.orm.hibernate3.HibernateQueryException: could not locate named parameter [todate]; nested exception is org.hibernate.QueryPa rameterException: could not locate named parameter [todate] at org.springframework.orm.hibernate3.SessionFactoryUtils.convertHibernateAccessException(SessionFactoryUtils.java:656) at org.springframework.orm.hibernate3.HibernateAccessor.convertHibernateAccessException(HibernateAccessor.java:412) at org.springframework.orm.hibernate3.HibernateTemplate.doExecute(HibernateTemplate.java:411) at 
org.springframework.orm.hibernate3.HibernateTemplate.execute(HibernateTemplate.java:339) at com.iton.ioffice.admin.DAO.impl.UserServiceDAOImpl.outOfProcess(UserServiceDAOImpl.java:304) at com.iton.ioffice.admin.LoginHistory.notUsingButton(LoginHistory.java:298) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.vaadin.event.ListenerMethod.receiveEvent(ListenerMethod.java:520) ... 23 more Caused by: org.hibernate.QueryParameterException: could not locate named parameter [todate] at org.hibernate.engine.query.ParameterMetadata.getNamedParameterDescriptor(ParameterMetadata.java:99) at org.hibernate.engine.query.ParameterMetadata.getNamedParameterExpectedType(ParameterMetadata.java:105) at org.hibernate.impl.AbstractQueryImpl.determineType(AbstractQueryImpl.java:437) at org.hibernate.impl.AbstractQueryImpl.setParameter(AbstractQueryImpl.java:407) at com.iton.ioffice.admin.DAO.impl.UserServiceDAOImpl$4.doInHibernate(UserServiceDAOImpl.java:285) at org.springframework.orm.hibernate3.HibernateTemplate.doExecute(HibernateTemplate.java:406) ... 31 more please help me
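    One plausible reading of the stack trace: the sub-select is wrapped in single quotes, so Hibernate treats it as a string literal and never registers :fromdate and :todate as named parameters, which is why setParameter() fails with "could not locate named parameter". Below is a minimal sketch of the callback body with the quotes removed; the entity, table and column names are taken from the question, and the surrounding session and transaction handling is assumed to stay as in the original code.

        // Hedged sketch: same native query, but the sub-select is no longer quoted,
        // so Hibernate can see the :fromdate and :todate named parameters.
        SQLQuery query2 = session.createSQLQuery(
                "select user_id, user_name from usermasterdao "
                + "where user_id not in ("
                + "  select usermaster_id from LoginHistoryDAO "
                + "  where logindate between :fromdate and :todate)");
        query2.addEntity(UserMasterDAO.class);
        query2.setParameter("fromdate", fromDate);
        query2.setParameter("todate", toDate);
        List result2 = query2.list();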

    Read the article

  • How to research unmanaged memory leaks in .NET?

    - by Brandon
    I have a WCF service running over MSMQ. Memory gradually increases over time, indicating that there is some sort of memory leak. I ran the service locally and monitored some counters using PerfMon. Total CLR memory managed heap bytes remains relatively constant, while the process' private bytes increases over time. This leads me to believe that there is some sort of unmanaged memory leak. Assuming that unmanaged memory leak is the issue, how do I address the issue? Are there any tools I could use to give me hints as to what is causing the unmanaged memory leak? Also, all my service is doing is reading from the transactional queue and writing to a database, all as part of a DTC transaction (handled under the hood by requiring a transaction on the service contract). I am not doing anything explicitly with COM or DllImports. Thanks in advance!

    Read the article

  • Timeout Expire on INSERT Query

    - by Sachin Gaur
    I am trying to insert a row in a table using a simple INSERT query in a transaction. It works fine in SQL Server but I am not able to insert the data using my business object. I am calling a SELECT query using the Command as: Using cm As New SqlCommand With cm .Connection = tr.Connection .Transaction = tr .CommandType = CommandType.Text .CommandText = Some Select Query .ExecuteScalar() '' Do something .CommandText = Insert Query .ExecuteNonQuery() End With End Using I am getting the Timeout period expired error at the ".ExecuteNonQuery()" line. Any other DML query is running perfectly fine at this point. Can anyone tell me the reason?

    Read the article

  • MySQL count/sum fields

    - by Conor H
    Hi There, What I am trying to achieve is a report on daily financial transactions. With my SQL query I would like to count the total number of cash transactions, the total cash value and the same for checks. I only want to do this for a specified date. Here is a snippet of the query that I am having trouble with. These SUM and COUNT commands are processing all the data in the table and not just the selected date. (SELECT SUM(amount) FROM TRANSACTION WHERE payment_type.name = 'cash') AS total_cash, (SELECT COUNT(*) FROM TRANSACTION WHERE payment_type.name = 'cash') AS total_cash_transactions Sorry if I haven't posted enough detail, as I haven't had time. If you need more info just ask. Cheers.

    Read the article

  • How do I check that an entity is unreferenced in JPA?

    - by Martin
    I have the following model @Entity class Element { @Id int id; @Version int version; @ManyToOne Type type; } @Entity class Type { @Id int id; @Version int version; @OneToMany(mappedBy="type") Collection<Element> elements; @Basic(optional=false) boolean disabled; } and would like to allow Type.disabled = true only if Type.elements is empty. Is there a way to do it atomically? I would like to prevent an insertion of an Element in a transaction while the corresponding Type is being disabled by another transaction.
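    One common approach (a sketch only, not necessarily the only way) is to let JPA's optimistic locking arbitrate: both the transaction that disables a Type and the transaction that inserts an Element referencing it force a version increment on the same Type row, so if they run concurrently one of them fails with an OptimisticLockException and can be retried. The accessor methods used below are assumed to exist on the entities from the question.

        import javax.persistence.EntityManager;
        import javax.persistence.LockModeType;

        public class TypeGuard {

            // Transaction A: disable a Type only if it currently has no elements.
            static void disableIfUnreferenced(EntityManager em, int typeId) {
                Type type = em.find(Type.class, typeId);
                // Forces a version bump on commit, even if only 'disabled' changes.
                em.lock(type, LockModeType.OPTIMISTIC_FORCE_INCREMENT);
                if (type.getElements().isEmpty()) {
                    type.setDisabled(true);
                }
            }

            // Transaction B: insert an Element and bump the Type's version as well,
            // so it cannot commit concurrently with a disable of the same Type.
            static void addElement(EntityManager em, int typeId) {
                Type type = em.find(Type.class, typeId);
                em.lock(type, LockModeType.OPTIMISTIC_FORCE_INCREMENT);
                Element element = new Element();
                element.setType(type);
                em.persist(element);
            }
        }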

    Read the article

  • C++: is it safe to read an integer variable that's being concurrently modified without locking?

    - by Hongli
    Suppose that I have an integer variable in a class, and this variable may be concurrently modified by other threads. Writes are protected by a mutex. Do I need to protect reads too? I've heard that there are some hardware architectures on which, if one thread modifies a variable, and another thread reads it, then the read result will be garbage; in this case I do need to protect reads. I've never seen such architectures though. This question assumes that a single transaction only consists of updating a single integer variable so I'm not worried about the states of any other variables that might also be involved in a transaction.

    Read the article

  • What benefit would I get when using MessageQueueTransaction with ReceiveCompleted event in MessageQu

    - by Jeffrey
    I understand the benefit of using MessageQueueTransaction in the scenario below, where 5 messages will be wrapped in a single transaction and the individual ReceiveCompleted events will only be raised once the transaction has been committed. using(var t = new MessageQueueTransaction()) using(var q = new MessageQueue("queue path here")) { t.Begin(); q.Send(new Message(), t); q.Send(new Message(), t); q.Send(new Message(), t); q.Send(new Message(), t); t.Commit(); } I understand the usefulness of Peek() and Receive(), which has been mentioned in this question. However, I am wondering whether I would get any benefit from combining MessageQueueTransaction with the ReceiveCompleted event.

    Read the article

  • Can JPA do batch update | put | write | insert as pm.makePersistentAll() does in GAE/J

    - by Kenyth
    I searched through multiple discussions here. Can someone just give me a quick and direct answer? And if with JPA you can't do a batch update, what if I don't use a transaction, and just use the following flow: em = emf.getEntityManager // do some query // make some data modification em.persist(..) // do some query // make some data modification em.persist(..) // do some query // make some data modification em.persist(..) ... em.close() How does this compare to a batch update with regard to performance, and to a single transaction commit, measured by RPC calls to the datastore server, CPU cycles per request, and so on? Does every call to em.persist(..) before em.close() trigger an RPC call to the datastore server? Thanks very much for any response!
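    For comparison, here is what the single-transaction variant typically looks like in plain JPA; whether this translates into fewer datastore RPCs on GAE/J is exactly the open question above, so treat it as a generic sketch rather than a GAE-specific answer. MyEntity, the entities list and the batch size of 50 are illustrative placeholders.

        import java.util.List;
        import javax.persistence.EntityManager;
        import javax.persistence.EntityManagerFactory;
        import javax.persistence.EntityTransaction;

        public class BatchSave {
            // Generic JPA sketch: many persists inside one transaction,
            // flushed in batches to keep the persistence context small.
            static void saveAll(EntityManagerFactory emf, List<MyEntity> entities) {
                EntityManager em = emf.createEntityManager();
                EntityTransaction tx = em.getTransaction();
                tx.begin();
                try {
                    int count = 0;
                    for (MyEntity entity : entities) {
                        em.persist(entity);
                        if (++count % 50 == 0) {
                            em.flush();  // push the pending inserts
                            em.clear();  // detach managed entities
                        }
                    }
                    tx.commit();
                } catch (RuntimeException e) {
                    if (tx.isActive()) {
                        tx.rollback();
                    }
                    throw e;
                } finally {
                    em.close();
                }
            }
        }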

    Read the article

  • Does beginTransaction in Hibernate allocate a new DB connection?

    - by illscience
    Hi folks - Just wondering if beginning a new transaction in Hibernate actually allocates a connection to the DB? I'm concerned b/c our server begins a new transaction for each request received, even if that request doesn't interact with the DB. We're seeing DB connections as a major bottleneck, so I'm wondering if I should take the time to narrow the scope of my transactions. Searched everywhere and haven't been able to find a good answer. The very simple code is here: SessionFactory sessionFactory = (SessionFactory) Context.getContext().getBean("sessionFactory"); sessionFactory.getCurrentSession().beginTransaction(); sessionFactory.getCurrentSession().setFlushMode(FlushMode.AUTO); thanks very much! a
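    If narrowing the scope does turn out to be worthwhile, a sketch of what that might look like is below: the transaction (and with it any connection use) is only started around code that actually talks to the database, rather than once per request. This is illustrative only and reuses the sessionFactory from the question; whether beginTransaction() itself grabs a connection depends on the connection-release mode and connection provider configuration.

        // Sketch: begin/commit only around actual DB work instead of per request.
        Session session = sessionFactory.getCurrentSession();
        Transaction tx = session.beginTransaction();
        try {
            // ... queries and updates that really need the database ...
            tx.commit();
        } catch (RuntimeException e) {
            tx.rollback();
            throw e;
        }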

    Read the article

  • question about Little's Law

    - by davit-datuashvili
    I know that Little's Law states (paraphrased): the average number of things in a system is the product of the average rate at which things leave the system and the average time each one spends in the system, or: n = x * (r + z), where x is throughput, r is response time, z is think time, and (r + z) is the average response time. Now I have a question about a problem from Programming Pearls: suppose that a system makes 100 disk accesses to process a transaction (although some systems require fewer, some systems will require several hundred disk accesses per transaction). How many transactions per hour per disk can the system handle? Please help.
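    The usual back-of-envelope answer (assuming, purely for illustration, roughly 10 ms per random disk access; that figure is not given in the question) runs as follows:

        \[
        \frac{1\ \text{access}}{10\ \text{ms}} = 100\ \text{accesses/s per disk}
        \quad\Rightarrow\quad
        \frac{100\ \text{accesses/s}}{100\ \text{accesses/transaction}} = 1\ \text{transaction/s}
        \approx 3600\ \text{transactions/hour per disk.}
        \]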

    Read the article

  • Integrating JavaScript Unit Tests with Visual Studio

    - by Stephen Walther
    Modern ASP.NET web applications take full advantage of client-side JavaScript to provide better interactivity and responsiveness. If you are building an ASP.NET application in the right way, you quickly end up with lots and lots of JavaScript code. When writing server code, you should be writing unit tests. One big advantage of unit tests is that they provide you with a safety net that enable you to safely modify your existing code – for example, fix bugs, add new features, and make performance enhancements -- without breaking your existing code. Every time you modify your code, you can execute your unit tests to verify that you have not broken anything. For the same reason that you should write unit tests for your server code, you should write unit tests for your client code. JavaScript is just as susceptible to bugs as C#. There is no shortage of unit testing frameworks for JavaScript. Each of the major JavaScript libraries has its own unit testing framework. For example, jQuery has QUnit, Prototype has UnitTestJS, YUI has YUI Test, and Dojo has Dojo Objective Harness (DOH). The challenge is integrating a JavaScript unit testing framework with Visual Studio. Visual Studio and Visual Studio ALM provide fantastic support for server-side unit tests. You can easily view the results of running your unit tests in the Visual Studio Test Results window. You can set up a check-in policy which requires that all unit tests pass before your source code can be committed to the source code repository. In addition, you can set up Team Build to execute your unit tests automatically. Unfortunately, Visual Studio does not provide “out-of-the-box” support for JavaScript unit tests. MS Test, the unit testing framework included in Visual Studio, does not support JavaScript unit tests. As soon as you leave the server world, you are left on your own. The goal of this blog entry is to describe one approach to integrating JavaScript unit tests with MS Test so that you can execute your JavaScript unit tests side-by-side with your C# unit tests. The goal is to enable you to execute JavaScript unit tests in exactly the same way as server-side unit tests. You can download the source code described by this project by scrolling to the end of this blog entry. Rejected Approach: Browser Launchers One popular approach to executing JavaScript unit tests is to use a browser as a test-driver. When you use a browser as a test-driver, you open up a browser window to execute and view the results of executing your JavaScript unit tests. For example, QUnit – the unit testing framework for jQuery – takes this approach. The following HTML page illustrates how you can use QUnit to create a unit test for a function named addNumbers(). 
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <title>Using QUnit</title> <link rel="stylesheet" href="http://github.com/jquery/qunit/raw/master/qunit/qunit.css" type="text/css" /> </head> <body> <h1 id="qunit-header">QUnit example</h1> <h2 id="qunit-banner"></h2> <div id="qunit-testrunner-toolbar"></div> <h2 id="qunit-userAgent"></h2> <ol id="qunit-tests"></ol> <div id="qunit-fixture">test markup, will be hidden</div> <script type="text/javascript" src="http://code.jquery.com/jquery-latest.js"></script> <script type="text/javascript" src="http://github.com/jquery/qunit/raw/master/qunit/qunit.js"></script> <script type="text/javascript"> // The function to test function addNumbers(a, b) { return a+b; } // The unit test test("Test of addNumbers", function () { equals(4, addNumbers(1,3), "1+3 should be 4"); }); </script> </body> </html> This test verifies that calling addNumbers(1,3) returns the expected value 4. When you open this page in a browser, you can see that this test does, in fact, pass. The idea is that you can quickly refresh this QUnit HTML JavaScript test driver page in your browser whenever you modify your JavaScript code. In other words, you can keep a browser window open and keep refreshing it over and over while you are developing your application. That way, you can know very quickly whenever you have broken your JavaScript code. While easy to setup, there are several big disadvantages to this approach to executing JavaScript unit tests: You must view your JavaScript unit test results in a different location than your server unit test results. The JavaScript unit test results appear in the browser and the server unit test results appear in the Visual Studio Test Results window. Because all of your unit test results don’t appear in a single location, you are more likely to introduce bugs into your code without noticing it. Because your unit tests are not integrated with Visual Studio – in particular, MS Test -- you cannot easily include your JavaScript unit tests when setting up check-in policies or when performing automated builds with Team Build. A more sophisticated approach to using a browser as a test-driver is to automate the web browser. Instead of launching the browser and loading the test code yourself, you use a framework to automate this process. There are several different testing frameworks that support this approach: · Selenium – Selenium is a very powerful framework for automating browser tests. You can create your tests by recording a Firefox session or by writing the test driver code in server code such as C#. You can learn more about Selenium at http://seleniumhq.org/. LTAF – The ASP.NET team uses the Lightweight Test Automation Framework to test JavaScript code in the ASP.NET framework. You can learn more about LTAF by visiting the project home at CodePlex: http://aspnet.codeplex.com/releases/view/35501 jsTestDriver – This framework uses Java to automate the browser. jsTestDriver creates a server which can be used to automate multiple browsers simultaneously. This project is located at http://code.google.com/p/js-test-driver/ TestSwam – This framework, created by John Resig, uses PHP to automate the browser. Like jsTestDriver, the framework creates a test server. You can open multiple browsers that are automated by the test server. 
Learn more about TestSwarm by visiting the following address: https://github.com/jeresig/testswarm/wiki Yeti – This is the framework introduced by Yahoo for automating browser tests. Yeti uses server-side JavaScript and depends on Node.js. Learn more about Yeti at http://www.yuiblog.com/blog/2010/08/25/introducing-yeti-the-yui-easy-testing-interface/ All of these frameworks are great for integration tests – however, they are not the best frameworks to use for unit tests. In one way or another, all of these frameworks depend on executing tests within the context of a “living and breathing” browser. If you create an ASP.NET Unit Test then Visual Studio will launch a web server before executing the unit test. Why is launching a web server so bad? It is not the worst thing in the world. However, it does introduce dependencies that prevent your code from being tested in isolation. One of the defining features of a unit test -- versus an integration test – is that a unit test tests code in isolation. Another problem with launching a web server when performing unit tests is that launching a web server can be slow. If you cannot execute your unit tests quickly, you are less likely to execute your unit tests each and every time you make a code change. You are much more likely to fall into the pit of failure. Launching a browser when performing a JavaScript unit test has all of the same disadvantages as launching a web server when performing an ASP.NET unit test. Instead of testing a unit of JavaScript code in isolation, you are testing JavaScript code within the context of a particular browser. Using the frameworks listed above for integration tests makes perfect sense. However, I want to consider a different approach for creating unit tests for JavaScript code. Using Server-Side JavaScript for JavaScript Unit Tests A completely different approach to executing JavaScript unit tests is to perform the tests outside of any browser. If you really want to test JavaScript then you should test JavaScript and leave the browser out of the testing process. There are several ways that you can execute JavaScript on the server outside the context of any browser: Rhino – Rhino is an implementation of JavaScript written in Java. The Rhino project is maintained by the Mozilla project. Learn more about Rhino at http://www.mozilla.org/rhino/ V8 – V8 is the open-source Google JavaScript engine written in C++. This is the JavaScript engine used by the Chrome web browser. You can download V8 and embed it in your project by visiting http://code.google.com/p/v8/ JScript – JScript is the JavaScript Script Engine used by Internet Explorer (up to but not including Internet Explorer 9), Windows Script Host, and Active Server Pages. Internet Explorer is still the most popular web browser. Therefore, I decided to focus on using the JScript Script Engine to execute JavaScript unit tests. Using the Microsoft Script Control There are two basic ways that you can pass JavaScript to the JScript Script Engine and execute the code: use the Microsoft Windows Script Interfaces or use the Microsoft Script Control. The difficult and proper way to execute JavaScript using the JScript Script Engine is to use the Microsoft Windows Script Interfaces. You can learn more about the Script Interfaces by visiting http://msdn.microsoft.com/en-us/library/t9d4xf28(VS.85).aspx The main disadvantage of using the Script Interfaces is that they are difficult to use from .NET. 
There is a great series of articles on using the Script Interfaces from C# located at http://www.drdobbs.com/184406028. I picked the easier alternative and used the Microsoft Script Control. The Microsoft Script Control is an ActiveX control that provides a higher level abstraction over the Window Script Interfaces. You can download the Microsoft Script Control from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d7e31492-2595-49e6-8c02-1426fec693ac After you download the Microsoft Script Control, you need to add a reference to it to your project. Select the Visual Studio menu option Project, Add Reference to open the Add Reference dialog. Select the COM tab and add the Microsoft Script Control 1.0. Using the Script Control is easy. You call the Script Control AddCode() method to add JavaScript code to the Script Engine. Next, you call the Script Control Run() method to run a particular JavaScript function. The reference documentation for the Microsoft Script Control is located at the MSDN website: http://msdn.microsoft.com/en-us/library/aa227633%28v=vs.60%29.aspx Creating the JavaScript Code to Test To keep things simple, let’s imagine that you want to test the following JavaScript function named addNumbers() which simply adds two numbers together: MvcApplication1\Scripts\Math.js function addNumbers(a, b) { return 5; } Notice that the addNumbers() method always returns the value 5. Right-now, it will not pass a good unit test. Create this file and save it in your project with the name Math.js in your MVC project’s Scripts folder (Save the file in your actual MVC application and not your MVC test application). Creating the JavaScript Test Helper Class To make it easier to use the Microsoft Script Control in unit tests, we can create a helper class. This class contains two methods: LoadFile() – Loads a JavaScript file. Use this method to load the JavaScript file being tested or the JavaScript file containing the unit tests. ExecuteTest() – Executes the JavaScript code. Use this method to execute a JavaScript unit test. Here’s the code for the JavaScriptTestHelper class: JavaScriptTestHelper.cs   using System; using System.IO; using Microsoft.VisualStudio.TestTools.UnitTesting; using MSScriptControl; namespace MvcApplication1.Tests { public class JavaScriptTestHelper : IDisposable { private ScriptControl _sc; private TestContext _context; /// <summary> /// You need to use this helper with Unit Tests and not /// Basic Unit Tests because you need a Test Context /// </summary> /// <param name="testContext">Unit Test Test Context</param> public JavaScriptTestHelper(TestContext testContext) { if (testContext == null) { throw new ArgumentNullException("TestContext"); } _context = testContext; _sc = new ScriptControl(); _sc.Language = "JScript"; _sc.AllowUI = false; } /// <summary> /// Load the contents of a JavaScript file into the /// Script Engine. /// </summary> /// <param name="path">Path to JavaScript file</param> public void LoadFile(string path) { var fileContents = File.ReadAllText(path); _sc.AddCode(fileContents); } /// <summary> /// Pass the path of the test that you want to execute. 
/// </summary> /// <param name="testMethodName">JavaScript function name</param> public void ExecuteTest(string testMethodName) { dynamic result = null; try { result = _sc.Run(testMethodName, new object[] { }); } catch { var error = ((IScriptControl)_sc).Error; if (error != null) { var description = error.Description; var line = error.Line; var column = error.Column; var text = error.Text; var source = error.Source; if (_context != null) { var details = String.Format("{0} \r\nLine: {1} Column: {2}", source, line, column); _context.WriteLine(details); } } throw new AssertFailedException(error.Description); } } public void Dispose() { _sc = null; } } }     Notice that the JavaScriptTestHelper class requires a Test Context to be instantiated. For this reason, you can use the JavaScriptTestHelper only with a Visual Studio Unit Test and not a Basic Unit Test (These are two different types of Visual Studio project items). Add the JavaScriptTestHelper file to your MVC test application (for example, MvcApplication1.Tests). Creating the JavaScript Unit Test Next, we need to create the JavaScript unit test function that we will use to test the addNumbers() function. Create a folder in your MVC test project named JavaScriptTests and add the following JavaScript file to this folder: MvcApplication1.Tests\JavaScriptTests\MathTest.js /// <reference path="JavaScriptUnitTestFramework.js"/> function testAddNumbers() { // Act var result = addNumbers(1, 3); // Assert assert.areEqual(4, result, "addNumbers did not return right value!"); }   The testAddNumbers() function takes advantage of another JavaScript library named JavaScriptUnitTestFramework.js. This library contains all of the code necessary to make assertions. Add the following JavaScriptnitTestFramework.js to the same folder as the MathTest.js file: MvcApplication1.Tests\JavaScriptTests\JavaScriptUnitTestFramework.js var assert = { areEqual: function (expected, actual, message) { if (expected !== actual) { throw new Error("Expected value " + expected + " is not equal to " + actual + ". " + message); } } }; There is only one type of assertion supported by this file: the areEqual() assertion. Most likely, you would want to add additional types of assertions to this file to make it easier to write your JavaScript unit tests. Deploying the JavaScript Test Files This step is non-intuitive. When you use Visual Studio to run unit tests, Visual Studio creates a new folder and executes a copy of the files in your project. After you run your unit tests, your Visual Studio Solution will contain a new folder named TestResults that includes a subfolder for each test run. You need to configure Visual Studio to deploy your JavaScript files to the test run folder or Visual Studio won’t be able to find your JavaScript files when you execute your unit tests. You will get an error that looks something like this when you attempt to execute your unit tests: You can configure Visual Studio to deploy your JavaScript files by adding a Test Settings file to your Visual Studio Solution. It is important to understand that you need to add this file to your Visual Studio Solution and not a particular Visual Studio project. Right-click your Solution in the Solution Explorer window and select the menu option Add, New Item. Select the Test Settings item and click the Add button. After you create a Test Settings file for your solution, you can indicate that you want a particular folder to be deployed whenever you perform a test run. 
Select the menu option Test, Edit Test Settings to edit your test configuration file. Select the Deployment tab and select your MVC test project’s JavaScriptTest folder to deploy. Click the Apply button and the Close button to save the changes and close the dialog. Creating the Visual Studio Unit Test The very last step is to create the Visual Studio unit test (the MS Test unit test). Add a new unit test to your MVC test project by selecting the menu option Add New Item and selecting the Unit Test project item (Do not select the Basic Unit Test project item): The difference between a Basic Unit Test and a Unit Test is that a Unit Test includes a Test Context. We need this Test Context to use the JavaScriptTestHelper class that we created earlier. Enter the following test method for the new unit test: [TestMethod] public void TestAddNumbers() { var jsHelper = new JavaScriptTestHelper(this.TestContext); // Load JavaScript files jsHelper.LoadFile("JavaScriptUnitTestFramework.js"); jsHelper.LoadFile(@"..\..\..\MvcApplication1\Scripts\Math.js"); jsHelper.LoadFile("MathTest.js"); // Execute JavaScript Test jsHelper.ExecuteTest("testAddNumbers"); } This code uses the JavaScriptTestHelper to load three files: JavaScripUnitTestFramework.js – Contains the assert functions. Math.js – Contains the addNumbers() function from your MVC application which is being tested. MathTest.js – Contains the JavaScript unit test function. Next, the test method calls the JavaScriptTestHelper ExecuteTest() method to execute the testAddNumbers() JavaScript function. Running the Visual Studio JavaScript Unit Test After you complete all of the steps described above, you can execute the JavaScript unit test just like any other unit test. You can use the keyboard combination CTRL-R, CTRL-A to run all of the tests in the current Visual Studio Solution. Alternatively, you can use the buttons in the Visual Studio toolbar to run the tests: (Unfortunately, the Run All Impacted Tests button won’t work correctly because Visual Studio won’t detect that your JavaScript code has changed. Therefore, you should use either the Run Tests in Current Context or Run All Tests in Solution options instead.) The results of running the JavaScript tests appear side-by-side with the results of running the server tests in the Test Results window. For example, if you Run All Tests in Solution then you will get the following results: Notice that the TestAddNumbers() JavaScript test has failed. That is good because our addNumbers() function is hard-coded to always return the value 5. If you double-click the failing JavaScript test, you can view additional details such as the JavaScript error message and the line number of the JavaScript code that failed: Summary The goal of this blog entry was to explain an approach to creating JavaScript unit tests that can be easily integrated with Visual Studio and Visual Studio ALM. I described how you can use the Microsoft Script Control to execute JavaScript on the server. By taking advantage of the Microsoft Script Control, we were able to execute our JavaScript unit tests side-by-side with all of our other unit tests and view the results in the standard Visual Studio Test Results window. 
You can download the code discussed in this blog entry from here: http://StephenWalther.com/downloads/Blog/JavaScriptUnitTesting/JavaScriptUnitTests.zip Before running this code, you need to first install the Microsoft Script Control which you can download from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d7e31492-2595-49e6-8c02-1426fec693ac

    Read the article

  • Diving into OpenStack Network Architecture - Part 1

    - by Ronen Kofman
    Before we begin: OpenStack networking has very powerful capabilities but at the same time it is quite complicated. In this blog series we will review an existing OpenStack setup using the Oracle OpenStack Tech Preview and explain the different network components through use cases and examples. The goal is to show how the different pieces come together and provide a bigger-picture view of the network architecture in OpenStack. This can be very helpful to users making their first steps in OpenStack or to anyone who wishes to understand how networking works in this environment. We will go through the basics first and build the examples as we go. According to the recent Icehouse user survey and the one before it, Neutron with the Open vSwitch plug-in is the most widely used network setup both in production and in POCs (in terms of number of customers), and so in this blog series we will analyze this specific OpenStack networking setup. As we know there are many options to set up OpenStack networking, and while Neutron + Open vSwitch is the most popular setup there is no claim that it is either the best or the most efficient option. Neutron + Open vSwitch is an example, one which provides a good starting point for anyone interested in understanding OpenStack networking. Even if you are using a different kind of network setup, such as a different Neutron plug-in or not using Neutron at all, this will still be a good starting point to understand the network architecture in OpenStack. The setup we are using for the examples is the one used in the Oracle OpenStack Tech Preview. Installing it is simple and it would be helpful to have it as a reference. In this setup we use eth2 on all servers for the VM network; all VM traffic will be flowing through this interface. The Oracle OpenStack Tech Preview is using VLANs for L2 isolation to provide tenant and network isolation. The following diagram shows how we have configured our deployment: This first post is a bit long and will focus on some basic concepts in OpenStack networking. The components we will be discussing are Open vSwitch, network namespaces, Linux bridges and veth pairs. Note that this is not meant to be a comprehensive review of these components; it is meant to describe each component as much as needed to understand the OpenStack network architecture. All the components described here can be further explored using other resources.
Open vSwitch (OVS) In the Oracle OpenStack Tech Preview OVS is used to connect virtual machines to the physical port (in our case eth2) as shown in the deployment diagram. OVS contains bridges and ports, the OVS bridges are different from the Linux bridge (controlled by the brctl command) which are also used in this setup. To get started let’s view the OVS structure, use the following command: # ovs-vsctl show 7ec51567-ab42-49e8-906d-b854309c9edf     Bridge br-int         Port br-int             Interface br-int type: internal         Port "int-br-eth2"             Interface "int-br-eth2"     Bridge "br-eth2"         Port "br-eth2"             Interface "br-eth2" type: internal         Port "eth2"             Interface "eth2"         Port "phy-br-eth2"             Interface "phy-br-eth2" ovs_version: "1.11.0" We see a standard post deployment OVS on a compute node with two bridges and several ports hanging off of each of them. The example above is a compute node without any VMs, we can see that the physical port eth2 is connected to a bridge called “br-eth2”. We also see two ports "int-br-eth2" and "phy-br-eth2" which are actually a veth pair and form virtual wire between the two bridges, veth pairs are discussed later in this post. When a virtual machine is created a port is created on one the br-int bridge and this port is eventually connected to the virtual machine (we will discuss the exact connectivity later in the series). Here is how OVS looks after a VM was launched: # ovs-vsctl show efd98c87-dc62-422d-8f73-a68c2a14e73d     Bridge br-int         Port "int-br-eth2"             Interface "int-br-eth2"         Port br-int             Interface br-int type: internal         Port "qvocb64ea96-9f" tag: 1             Interface "qvocb64ea96-9f"     Bridge "br-eth2"         Port "phy-br-eth2"             Interface "phy-br-eth2"         Port "br-eth2"             Interface "br-eth2" type: internal         Port "eth2"             Interface "eth2" ovs_version: "1.11.0" Bridge "br-int" now has a new port "qvocb64ea96-9f" which connects to the VM and tagged with VLAN 1. Every VM which will be launched will add a port on the “br-int” bridge for every network interface the VM has. Another useful command on OVS is dump-flows for example: # ovs-ofctl dump-flows br-int NXST_FLOW reply (xid=0x4): cookie=0x0, duration=735.544s, table=0, n_packets=70, n_bytes=9976, idle_age=17, priority=3,in_port=1,dl_vlan=1000 actions=mod_vlan_vid:1,NORMAL cookie=0x0, duration=76679.786s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=2,in_port=1 actions=drop cookie=0x0, duration=76681.36s, table=0, n_packets=68, n_bytes=7950, idle_age=17, hard_age=65534, priority=1 actions=NORMAL As we see the port which is connected to the VM has the VLAN tag 1. However the port on the VM network (eth2) will be using tag 1000. OVS is modifying the vlan as the packet flow from the VM to the physical interface. In OpenStack the Open vSwitch agent takes care of programming the flows in Open vSwitch so the users do not have to deal with this at all. If you wish to learn more about how to program the Open vSwitch you can read more about it at http://openvswitch.org looking at the documentation describing the ovs-ofctl command. Network Namespaces (netns) Network namespaces is a very cool Linux feature can be used for many purposes and is heavily used in OpenStack networking. Network namespaces are isolated containers which can hold a network configuration and is not seen from outside of the namespace. 
A network namespace can be used to encapsulate specific network functionality or provide a network service in isolation as well as simply help to organize a complicated network setup. Using the Oracle OpenStack Tech Preview we are using the latest Unbreakable Enterprise Kernel R3 (UEK3), this kernel provides a complete support for netns. Let's see how namespaces work through couple of examples to control network namespaces we use the ip netns command: Defining a new namespace: # ip netns add my-ns # ip netns list my-ns As mentioned the namespace is an isolated container, we can perform all the normal actions in the namespace context using the exec command for example running the ifconfig command: # ip netns exec my-ns ifconfig -a lo        Link encap:Local Loopback           LOOPBACK  MTU:16436 Metric:1           RX packets:0 errors:0 dropped:0 overruns:0 frame:0           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0           RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b) We can run every command in the namespace context, this is especially useful for debug using tcpdump command, we can ping or ssh or define iptables all within the namespace. Connecting the namespace to the outside world: There are various ways to connect into a namespaces and between namespaces we will focus on how this is done in OpenStack. OpenStack uses a combination of Open vSwitch and network namespaces. OVS defines the interfaces and then we can add those interfaces to namespace. So first let's add a bridge to OVS: # ovs-vsctl add-br my-bridge Now let's add a port on the OVS and make it internal: # ovs-vsctl add-port my-bridge my-port # ovs-vsctl set Interface my-port type=internal And let's connect it into the namespace: # ip link set my-port netns my-ns Looking inside the namespace: # ip netns exec my-ns ifconfig -a lo        Link encap:Local Loopback           LOOPBACK  MTU:65536 Metric:1           RX packets:0 errors:0 dropped:0 overruns:0 frame:0           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0           RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b) my-port   Link encap:Ethernet HWaddr 22:04:45:E2:85:21           BROADCAST  MTU:1500 Metric:1           RX packets:0 errors:0 dropped:0 overruns:0 frame:0           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0           RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b) Now we can add more ports to the OVS bridge and connect it to other namespaces or other device like physical interfaces. Neutron is using network namespaces to implement network services such as DCHP, routing, gateway, firewall, load balance and more. In the next post we will go into this in further details. Linux Bridge and veth pairs Linux bridge is used to connect the port from OVS to the VM. Every port goes from the OVS bridge to a Linux bridge and from there to the VM. The reason for using regular Linux bridges is for security groups’ enforcement. Security groups are implemented using iptables and iptables can only be applied to Linux bridges and not to OVS bridges. Veth pairs are used extensively throughout the network setup in OpenStack and are also a good tool to debug a network problem. Veth pairs are simply a virtual wire and so veths always come in pairs. Typically one side of the veth pair will connect to a bridge and the other side to another bridge or simply left as a usable interface. In this example we will create some veth pairs, connect them to bridges and test connectivity. 
This example is using regular Linux server and not an OpenStack node: Creating a veth pair, note that we define names for both ends: # ip link add veth0 type veth peer name veth1 # ifconfig -a . . veth0     Link encap:Ethernet HWaddr 5E:2C:E6:03:D0:17           BROADCAST MULTICAST  MTU:1500 Metric:1           RX packets:0 errors:0 dropped:0 overruns:0 frame:0           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000           RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b) veth1     Link encap:Ethernet HWaddr E6:B6:E2:6D:42:B8           BROADCAST MULTICAST  MTU:1500 Metric:1           RX packets:0 errors:0 dropped:0 overruns:0 frame:0           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000           RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b) . . To make the example more meaningful this we will create the following setup: veth0 => veth1 => br-eth3 => eth3 ======> eth2 on another Linux server br-eth3 – a regular Linux bridge which will be connected to veth1 and eth3 eth3 – a physical interface with no IP on it, connected to a private network eth2 – a physical interface on the remote Linux box connected to the private network and configured with the IP of 50.50.50.1 Once we create the setup we will ping 50.50.50.1 (the remote IP) through veth0 to test that the connection is up: # brctl addbr br-eth3 # brctl addif br-eth3 eth3 # brctl addif br-eth3 veth1 # brctl show bridge name     bridge id               STP enabled     interfaces br-eth3         8000.00505682e7f6       no              eth3                                                         veth1 # ifconfig veth0 50.50.50.50 # ping -I veth0 50.50.50.51 PING 50.50.50.51 (50.50.50.51) from 50.50.50.50 veth0: 56(84) bytes of data. 64 bytes from 50.50.50.51: icmp_seq=1 ttl=64 time=0.454 ms 64 bytes from 50.50.50.51: icmp_seq=2 ttl=64 time=0.298 ms When the naming is not as obvious as the previous example and we don't know who are the paired veth interfaces we can use the ethtool command to figure this out. The ethtool command returns an index we can look up using ip link command, for example: # ethtool -S veth1 NIC statistics: peer_ifindex: 12 # ip link . . 12: veth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 Summary That’s all for now, we quickly reviewed OVS, network namespaces, Linux bridges and veth pairs. These components are heavily used in the OpenStack network architecture we are exploring and understanding them well will be very useful when reviewing the different use cases. In the next post we will look at how the OpenStack network is laid out connecting the virtual machines to each other and to the external world. @RonenKofman

    Read the article

  • Real World Java EE Patterns by Adam Bien

    - by JuergenKress
    Rethinking Best Practices: a book about rethinking patterns, best practices, idioms and Java EE. Real World Java EE Patterns - Rethinking Best Practices discusses patterns and best practices in a structured way, with code from real world projects. This book covers: an introduction into the core principles and APIs of Java EE 6, principles of transactions, isolation levels, CAP and BASE, remoting, pragmatic modularization and structure of Java EE applications, discussion of superfluous patterns and outdated best practices, patterns for domain driven and service oriented components, custom scopes, asynchronous processing and parallelization, real time HTTP events, schedulers, REST optimizations, plugins and monitoring tools, and a fully functional JCA 1.6 implementation. Real World Java EE Night Hacks - Dissecting the Business Tier will not only help experienced developers and architects to write concise code, but will especially help you to shrink the codebase to unbelievably small sizes :-). Order here. WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community; please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Technorati Tags: Adam Bien,Real World Java,Java,Java EE,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • MySQL 5.5 Has Been Released

    - by Lajos Sárecz
    The new MySQL 5.5 release was completed in record time and was announced by Oracle today. This is further proof that Oracle is seriously developing MySQL as well, and is striving to delight MySQL users with innovative solutions. If you have a feeling of déjà vu, that is no coincidence: MySQL 5.5 RC, the Release Candidate, was announced at the OpenWorld conference in September, as reported by hwsw.hu among others. In the new version Oracle has focused primarily on performance and scalability. For example, the InnoDB storage engine now ships as the default with MySQL, which means the database engine performs ACID (atomicity, consistency, isolation, durability) transactions (not exactly a minor detail...). In addition, new features include semi-synchronous replication, more advanced index and table partitioning, and, on the diagnostics side, a new PERFORMANCE_SCHEMA, which improves the manageability of MySQL. Tests run with the RC version showed significant speedups compared to MySQL 5.1, so upgrading is worth considering.

    Read the article

  • Big Data – Operational Databases Supporting Big Data – RDBMS and NoSQL – Day 12 of 21

    - by Pinal Dave
    In yesterday's blog post we learned the importance of the Cloud in the Big Data story. In this article we will understand the role of operational databases supporting the Big Data story. Even though we keep on talking about Big Data architecture, it is extremely crucial to understand that a Big Data system can't just exist in isolation. Many needs of the business can only be fulfilled with the help of operational databases. Just having a system which can analyze big data may not solve every single data problem. Real World Example Think about it this way: you are using Facebook and you have just updated your information about your current relationship status. In the next few seconds the same information is also reflected in the timeline of your partner as well as a few of your immediate friends. After a while you will notice that the same information is now also available to your remote friends. Later on, when someone searches for all the relationship changes with their friends, your change of relationship will also show up in the same list. Now here is the question – do you think the Big Data architecture is doing every single one of these changes? Do you think that the immediate reflection of your relationship change to your family members is also because of the technology used in Big Data? Actually, the answer is that Facebook uses MySQL to do various updates in the timeline as well as for various events we do on their homepage. It is really difficult to part from operational databases in any real world business. Now we will see a few examples of operational databases. Relational Databases (This blog post) NoSQL Databases (This blog post) Key-Value Pair Databases (Tomorrow's post) Document Databases (Tomorrow's post) Columnar Databases (The Day After's post) Graph Databases (The Day After's post) Spatial Databases (The Day After's post) Relational Databases We have earlier discussed the RDBMS role in the Big Data story in detail, so we will not cover it extensively over here. Relational databases are pretty much everywhere in most of the businesses which have been around for many years. The importance and existence of the relational database are always going to be there as long as there is meaningful structured data around. There are many different kinds of relational databases, for example Oracle, SQL Server, MySQL and many others. If you are looking for an Open Source and widely accepted database, I suggest trying MySQL, as that has been very popular in the last few years. I also suggest you try out PostgreSQL as well. Besides many other essential qualities, PostgreSQL has very interesting licensing policies. PostgreSQL licenses allow modifications and distribution of the application in open or closed (source) form. One can make any modifications and keep them private, as well as contribute them to the community. I believe this one quality makes it much more interesting to use, and it will play a very important role in the future. Nonrelational Databases (NoSQL) We have also covered nonrelational databases in earlier blog posts. NoSQL actually stands for Not Only SQL databases. There are plenty of NoSQL databases out in the market and selecting the right one is always very challenging. Here are a few of the properties which are essential to consider when selecting the right NoSQL database for operational purposes.
    Data and Query Model, Persistence of Data and Design, Eventual Consistency, and Scalability. Though all of the above properties are interesting to have in any NoSQL database, the one which most attracts me is Eventual Consistency. Eventual Consistency RDBMS uses ACID (Atomicity, Consistency, Isolation, Durability) as a key mechanism for ensuring data consistency, whereas NonRelational DBMS uses BASE for the same purpose. BASE stands for Basically Available, Soft state and Eventual consistency. Eventual consistency is widely deployed in distributed systems. It is a consistency model used in distributed computing which often expects the unexpected. In a large distributed system there are always various nodes joining and various nodes being removed, as they are often using commodity servers. This happens either intentionally or accidentally. Even though one or more nodes are down, it is expected that the entire system still functions normally. Applications should be able to do various updates as well as retrieval of the data successfully without any issue. Additionally, this also means that the system is expected to return the same updated data anytime from all the functioning nodes. Irrespective of when any node joins the system, if it is marked to hold some data it should contain the same updated data eventually. As per Wikipedia - Eventual consistency is a consistency model used in distributed computing that informally guarantees that, if no new updates are made to a given data item, eventually all accesses to that item will return the last updated value. In other words - informally, if no additional updates are made to a given data item, all reads to that item will eventually return the same value. Tomorrow In tomorrow's blog post we will discuss various other Operational Databases supporting Big Data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Best way for a technical manager to stay up to date on technology

    - by JoelFan
    My manager asked for a list of technical blogs he should follow to stay current on technology. His problem is he keeps hearing terms that he hasn't heard of (e.g. NoSQL, sharding, Azure, service bus, etc.) and he would prefer to at least have a fighting chance of knowing something about them without having to be reactive and look them up. Also I think he wants to have a big picture of all the emerging technologies and how they fit together instead of just learning about each thing in isolation. He asked about blogs but I'm thinking print magazines may also help.

    Read the article

  • Best way for a technical manager to stay up to date on technology

    - by JoelFan
    My manager asked for a list of technical blogs he should follow to stay current on technology. His problem is he keeps hearing terms that he hasn't heard of (e.g. NoSQL, sharding, Azure, service bus, etc.) and he would prefer to at least have a fighting chance of knowing something about them without having to be reactive and look them up. Also I think he wants to have a big picture of all the emerging technologies and how they fit together instead of just learning about each thing in isolation. He asked about blogs but I'm thinking print magazines may also help. What should I answer him?

    Read the article

  • Database Consolidation onto Private Clouds - updated for Oracle Database 12c

    - by B R Clouse
    One of our team's most popular white papers has been expanded and updated to discuss Oracle Database 12c. Now available on our OTN page, the new version of Database Consolidation onto Private Clouds covers best practices for consolidation with the pluggable databases that the new multitenant architecture provides, and expanded information on the database and schema consolidation options. These are the consolidation models the paper evaluates: server, database, schema, and pluggable databases. Key considerations for consolidating workloads which the paper explores: choosing a consolidation model, how PDBs solve the IT complexity problem, isolation in consolidated environments, cloud pool design, complementary workloads, and Enterprise Manager 12c for consolidation planning and operations. Many more white papers have been updated or are new for Oracle Database 12c. We'll continue to highlight those which tie directly to your journey to enterprise cloud.

    Read the article

  • SharePoint Content Database Sizing

    - by Sahil Malik
    SharePoint, WCF and Azure Trainings: more information SharePoint stores the majority of its content in SQL Server databases. Many of these databases are concerned with the overall configuration of the system, or managed services support. However, a majority of these databases are those that accept uploaded content, or collaborative content. These databases need to be sized with various factors in mind, such as the ability to back up/restore the content quickly, thereby allowing for quicker SLAs and isolation in the event of database failure. SharePoint as a system avoids SQL transactions in many instances. It does so to avoid locks, but at the cost of resultant orphan data or possible data corruption. Larger databases are known to have more orphan items than smaller ones. Also, smaller databases keep the problems isolated. As a result, it is very important for any project to perform content database sizing estimation. This is especially important in collaborative, document-centric projects. Not doing this upfront planning can Read full article ....

    Read the article

  • Interviewing a DBA

    - by kev
    Our company is in the process of recruiting a DBA. I have built a group test of questions ranging from basic questions such as PK and FK constraints and simple queries (fizzbuzz style) to more advanced things such as indexes, collation, isolation levels and how to trace deadlocks. However, that is the limit of my knowledge. So my question to all the DBAs is: what is the base level of knowledge that all DBAs should have? We are really looking for someone who will be able to manage our replication, analyze some of our slower running queries (someone the devs can go to for help) and trace some of the deadlock issues that we are having. Any help would be most appreciated!

    Read the article

  • Cloud availability of short-term "virgin" Windows instances?

    - by Thorbjørn Ravn Andersen
    I have a situation where, on a regular basis, we need a freshly installed "virgin" Windows installation to do various work in isolation on, and building one from scratch every time in a VMware instance is getting tedious. Perhaps there are cloud offerings providing a service that allows requesting one or more Windows instances which, after a very short while, would be available for logging in through Remote Desktop? After usage they would just be recycled, without having to pay for a full Windows license every time. Does this exist for a reasonable price? What are your personal experiences with this?

    Read the article
