Search Results

Search found 2602 results on 105 pages for '2phase commit'.


  • Issue with Autofac 2 and MVC2 using HttpRequestScoped

    - by Page Brooks
    I'm running into an issue with Autofac 2 and MVC2. The problem is that I am trying to resolve a series of dependencies where the root dependency is HttpRequestScoped. When I try to resolve my UnitOfWork (which is disposable), Autofac fails because the internal disposer is trying to add the UnitOfWork object to an internal disposal list which is null. Maybe I'm registering my dependencies with the wrong lifetimes, but I've tried many different combinations with no luck. The only requirement I have is that MyDataContext lasts for the entire HttpRequest. I've posted a demo version of the code for download here. Autofac modules are set up in web.config.

    Global.asax.cs:

        protected void Application_Start()
        {
            string connectionString = "something";
            var builder = new ContainerBuilder();
            builder.Register(c => new MyDataContext(connectionString)).As<IDatabase>().HttpRequestScoped();
            builder.RegisterType<UnitOfWork>().As<IUnitOfWork>().InstancePerDependency();
            builder.RegisterType<MyService>().As<IMyService>().InstancePerDependency();
            builder.RegisterControllers(Assembly.GetExecutingAssembly());
            _containerProvider = new ContainerProvider(builder.Build());
            IoCHelper.InitializeWith(new AutofacDependencyResolver(_containerProvider.RequestLifetime));
            ControllerBuilder.Current.SetControllerFactory(new AutofacControllerFactory(ContainerProvider));
            AreaRegistration.RegisterAllAreas();
            RegisterRoutes(RouteTable.Routes);
        }

    AutofacDependencyResolver.cs:

        public class AutofacDependencyResolver
        {
            private readonly ILifetimeScope _scope;
            public AutofacDependencyResolver(ILifetimeScope scope) { _scope = scope; }
            public T Resolve<T>() { return _scope.Resolve<T>(); }
        }

    IoCHelper.cs:

        public static class IoCHelper
        {
            private static AutofacDependencyResolver _resolver;
            public static void InitializeWith(AutofacDependencyResolver resolver) { _resolver = resolver; }
            public static T Resolve<T>() { return _resolver.Resolve<T>(); }
        }

    UnitOfWork.cs:

        public interface IUnitOfWork : IDisposable
        {
            void Commit();
        }

        public class UnitOfWork : IUnitOfWork
        {
            private readonly IDatabase _database;
            public UnitOfWork(IDatabase database) { _database = database; }
            public static IUnitOfWork Begin() { return IoCHelper.Resolve<IUnitOfWork>(); }

            public void Commit()
            {
                System.Diagnostics.Debug.WriteLine("Committing");
                _database.SubmitChanges();
            }

            public void Dispose()
            {
                System.Diagnostics.Debug.WriteLine("Disposing");
            }
        }

    MyDataContext.cs:

        public interface IDatabase
        {
            void SubmitChanges();
        }

        public class MyDataContext : IDatabase
        {
            private readonly string _connectionString;
            public MyDataContext(string connectionString) { _connectionString = connectionString; }

            public void SubmitChanges()
            {
                System.Diagnostics.Debug.WriteLine("Submitting Changes");
            }
        }

    MyService.cs:

        public interface IMyService
        {
            void Add();
        }

        public class MyService : IMyService
        {
            private readonly IDatabase _database;
            public MyService(IDatabase database) { _database = database; }

            public void Add()
            {
                // Use _database.
            }
        }

    HomeController.cs:

        public class HomeController : Controller
        {
            private readonly IMyService _myService;
            public HomeController(IMyService myService) { _myService = myService; }

            public ActionResult Index()
            {
                // NullReferenceException is thrown when trying to
                // resolve UnitOfWork here.
                // Doesn't always happen on the first attempt.
                using (var unitOfWork = UnitOfWork.Begin())
                {
                    _myService.Add();
                    unitOfWork.Commit();
                }
                return View();
            }

            public ActionResult About()
            {
                return View();
            }
        }
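
    One detail that stands out (a guess, not a confirmed diagnosis): AutofacDependencyResolver captures _containerProvider.RequestLifetime once, in Application_Start, so every later request resolves from that first lifetime scope, which may already be disposed, which would explain the null internal disposal list. A minimal sketch of deferring that lookup, keeping the names from the question:

        public class AutofacDependencyResolver
        {
            // Resolve the scope lazily so each call uses the *current*
            // request's lifetime scope, not one captured at app start.
            private readonly Func<ILifetimeScope> _scopeProvider;

            public AutofacDependencyResolver(Func<ILifetimeScope> scopeProvider)
            {
                _scopeProvider = scopeProvider;
            }

            public T Resolve<T>()
            {
                return _scopeProvider().Resolve<T>();
            }
        }

        // In Application_Start:
        IoCHelper.InitializeWith(
            new AutofacDependencyResolver(() => _containerProvider.RequestLifetime));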

    Read the article

  • Help with SVN+SSH permissions with CentOS/WHM setup

    - by Furiam
    Hi folks, I'll try my best to explain how I'm trying to set up this system. Imagine a production server running WHM with various sites. We'll call these sites site1, site2, site3. Now, with the WHM setup, each site has a user/group defined for it; we'll keep these users/groups called site1, site2 for simplicity. Updating these sites is accomplished using SVN, through a post-commit script that auto-updates them (with .svn blocked through the Apache configuration). There are two regular maintainers of these sites; we'll call them Joe and Bob. Joe and Bob both have command-line access to the server through their respective limited accounts.

    So I've done the easy bit: I've got SVN working with these maintainers so that when an SVN commit occurs, the changes are checked out and go live perfectly. Here's the caveat, and ultimately my problem: user permissions. Through my testing of this setup, I've only managed to get it working by giving whatever is being updated permissions of 777, so that Joe and Bob both have read and write access to the webfront directories for each of the sites.

    An example of how it's set up now: Joe and Bob both belong to a group called "dev". I have the master /svn folders set up with both read and write access for this group, and it works great. The post-commit trigger updates the site and then sets 777 on each file within the webfront. I then changed this to try and factor in group permission updates instead of straight 777: each file in /home/site1/public_html initially gets given a chmod of 664, and each folder 775, which looks a little something like this:

        drwxrwxr-x .
        drwxrwxr-x ..
        drwxrwxr-x site1 site1 my_test_folder
        -rw-rw-r-- site1 site1 my_test_file

    So site1 is the owner and group owner of those files and folders. I then added site1 to Joe's and Bob's secondary groups so that the SVN update will correctly allow access to these files. Herein lies the problem. When I add a file or folder to /home/site1, say Bob's file, it then looks like this:

        drwxrwxr-x .
        drwxrwxr-x ..
        drwxr-xr-x Bob   dev   bobs_folder
        drwxrwxr-x site1 site1 my_test_folder
        -rw-rw-r-- Bob   dev   bobs_file
        -rw-rw-r-- site1 site1 my_test_file

    How can I get it so that, with the set of user permissions Bob has available, the owner and group owner of that file change to reflect site1/site1? As Bob belongs to dev I can set the permissions correctly with chmod, but chgrp appears to throw back operation errors. Now, this was long-winded, but it gives an overview of exactly what I'm trying to accomplish, just in case I'm going about this arse-over-tit and there's a far easier solution. Here are my goals:

    - 2 people to update multiple user accounts, given the structure of WHM.
    - Maintain master user/group permissions of files and folders for the original user account, not the account of the updater.
    - I like the security of SVN+SSH over just SVN. I don't want to run all this over root.

    I hope this made sense, and thanks in advance :)
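
    A hedged suggestion rather than a confirmed fix: only root may chown a file to a different user, and chgrp only works for groups you belong to, which would explain the "operation not permitted" errors. A common workaround is to let each directory dictate group ownership of new files via the setgid bit, roughly like this (paths assumed from the question; run once as root):

        # Give the webfront back to site1 and open group write access
        chown -R site1:site1 /home/site1/public_html
        chmod -R g+w /home/site1/public_html

        # setgid on every directory: files created inside inherit the
        # directory's group (site1) instead of the creator's group (dev)
        find /home/site1/public_html -type d -exec chmod g+s {} \;

    With the setgid bit in place, files Bob creates arrive group-owned by site1 rather than dev, so only the user owner differs; whether that remaining difference matters depends on how your WHM/Apache setup checks ownership.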

    Read the article

  • What is "read operations inside transactions can't allow failover"?

    - by Kenyth
    From time to time I get the following exception message on GAE for my GAE/J app. I searched with Google; no relevant results were found. Does anyone know about this? Thanks in advance for any response! The exception message is as below:

        Nested in org.springframework.orm.jpa.JpaSystemException: Illegal argument;
        nested exception is javax.persistence.PersistenceException: Illegal argument:
        java.lang.IllegalArgumentException: read operations inside transactions can't allow failover
            at com.google.appengine.api.datastore.DatastoreApiHelper.translateError(DatastoreApiHelper.java:34)
            at com.google.appengine.api.datastore.DatastoreApiHelper.makeSyncCall(DatastoreApiHelper.java:67)
            at com.google.appengine.api.datastore.DatastoreServiceImpl$1.run(DatastoreServiceImpl.java:128)
            at com.google.appengine.api.datastore.TransactionRunner.runInTransaction(TransactionRunner.java:30)
            at com.google.appengine.api.datastore.DatastoreServiceImpl.get(DatastoreServiceImpl.java:111)
            at com.google.appengine.api.datastore.DatastoreServiceImpl.get(DatastoreServiceImpl.java:84)
            at com.google.appengine.api.datastore.DatastoreServiceImpl.get(DatastoreServiceImpl.java:77)
            at org.datanucleus.store.appengine.RuntimeExceptionWrappingDatastoreService.get(RuntimeExceptionWrappingDatastoreService.java:53)
            at org.datanucleus.store.appengine.DatastorePersistenceHandler.get(DatastorePersistenceHandler.java:94)
            at org.datanucleus.store.appengine.DatastorePersistenceHandler.get(DatastorePersistenceHandler.java:106)
            at org.datanucleus.store.appengine.DatastorePersistenceHandler.fetchObject(DatastorePersistenceHandler.java:464)
            at org.datanucleus.state.JDOStateManagerImpl.loadUnloadedFieldsInFetchPlan(JDOStateManagerImpl.java:1627)
            at org.datanucleus.state.JDOStateManagerImpl.loadFieldsInFetchPlan(JDOStateManagerImpl.java:1603)
            at org.datanucleus.ObjectManagerImpl.performDetachAllOnCommitPreparation(ObjectManagerImpl.java:3192)
            at org.datanucleus.ObjectManagerImpl.preCommit(ObjectManagerImpl.java:2931)
            at org.datanucleus.TransactionImpl.internalPreCommit(TransactionImpl.java:369)
            at org.datanucleus.TransactionImpl.commit(TransactionImpl.java:256)
            at org.datanucleus.jpa.EntityTransactionImpl.commit(EntityTransactionImpl.java:104)
            at org.datanucleus.store.appengine.jpa.DatastoreEntityTransactionImpl.commit(DatastoreEntityTransactionImpl.java:55)
            at name.kenyth.playtweets.service.Tx.run(Tx.java:39)
            at name.kenyth.playtweets.web.controller.TwitterApiController.persistStatus(TwitterApiController.java:309)
            at name.kenyth.playtweets.web.controller.TwitterApiController.processStatusesForWebCall(TwitterApiController.java:271)
            at name.kenyth.playtweets.web.controller.TwitterApiController.getHomeTimelineUpdates_aroundBody0(TwitterApiController.java:247)
            at name.kenyth.playtweets.web.controller.TwitterApiController$AjcClosure1.run(TwitterApiController.java:1)
            at name.kenyth.playtweets.web.refine.AuthenticationEnforcement.ajc$around$name_kenyth_playtweets_web_refine_AuthenticationEnforcement$2$439820b7proceed(AuthenticationEnforcement.aj:1)
            at name.kenyth.playtweets.web.refine.AuthenticationEnforcement.ajc$around$name_kenyth_playtweets_web_refine_AuthenticationEnforcement$2$439820b7(AuthenticationEnforcement.aj:168)
            at name.kenyth.playtweets.web.controller.TwitterApiController.getHomeTimelineUpdates(TwitterApiController.java:129)
            [... remaining frames: reflection, Spring MVC dispatch (HandlerMethodInvoker, AnnotationMethodHandlerAdapter, DispatcherServlet), UrlRewriteFilter and servlet filter chain, and the App Engine Jetty runtime ...]
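
    For what it's worth, an educated guess from the trace rather than a verified fix: the failing read happens inside DataNucleus's detach-on-commit phase, which lazily loads any unloaded fields after the commit has already started, and that late datastore get runs with a failover (eventually consistent) read policy, which transactions forbid. A hedged workaround is to touch the fields you need while the transaction is still plainly open (entity and field names below are placeholders):

        EntityManager em = entityManagerFactory.createEntityManager(); // factory wiring assumed
        EntityTransaction tx = em.getTransaction();
        tx.begin();
        try {
            // 'TweetStatus' stands in for the real entity being persisted
            TweetStatus status = em.find(TweetStatus.class, key);
            status.getText(); // force lazy fields to load inside the transaction
            tx.commit();      // detach-on-commit now has nothing left to fetch
        } finally {
            if (tx.isActive()) {
                tx.rollback();
            }
        }

    Alternatively, turning off DataNucleus's DetachAllOnCommit persistence property avoids the late fetch entirely, at the cost of managing detachment yourself.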

    Read the article

  • Is it possible to call JavaScript's onsubmit event programmatically on a form?

    - by hoyhoy
    In Ruby on Rails, I'm attempting to update the innerHTML of a div tag using the form_remote_tag helper. This update happens whenever an associated select tag receives an onchange event. The problem is, <select onchange="this.form.submit();"> doesn't work. Nor does document.forms[0].submit(). The only way to get the onsubmit code generated by form_remote_tag to execute is to create a hidden submit button and invoke the click method on the button from the select tag. Here's a working ERb partial example:

        <% form_remote_tag :url => product_path, :update => 'content', :method => 'get' do -%>
          <% content_tag :div, :id => 'content' do -%>
            <%= select_tag :update, options_for_select([["foo", 1], ["bar", 2]]), :onchange => "this.form.commit.click" %>
            <%= submit_tag 'submit_button', :style => "display: none" %>
          <% end %>
        <% end %>

    What I want to do is something like this, but it doesn't work:

        <% form_remote_tag :url => product_path, :update => 'content', :method => 'get' do -%>
          <% content_tag :div, :id => 'content' do -%>
            <%# the following line does not work %>
            <%= select_tag :update, options_for_select([["foo", 1], ["bar", 2]]), :onchange => "this.form.onsubmit()" %>
          <% end %>
        <% end %>

    So, is there any way to remove the invisible submit button for this use case? There seems to be some confusion, so let me explain. The basic problem is that submit() doesn't call the onsubmit() code rendered into the form. The actual HTML form that Rails renders from this ERb looks like this:

        <form action="/products/1" method="post" onsubmit="new Ajax.Updater('content', '/products/1', {asynchronous:true, evalScripts:true, method:'get', parameters:Form.serialize(this)}); return false;">
          <div style="margin:0;padding:0">
            <input name="authenticity_token" type="hidden" value="4eacf78eb87e9262a0b631a8a6e417e9a5957cab" />
          </div>
          <div id="content">
            <select id="update" name="update" onchange="this.form.commit.click">
              <option value="1">foo</option>
              <option value="2">bar</option>
            </select>
            <input name="commit" style="display: none" type="submit" value="submit_button" />
          </div>
        </form>

    I want to axe the invisible submit button, but using a straight form.submit appears not to work. So, I need some way to call the form's onsubmit event code.

    Update: Orion Edwards' solution would work if there wasn't a return(false); generated by Rails. I'm not sure which is worse though: sending a phantom click to an invisible submit button, or calling eval on the getAttribute('onsubmit') call after removing the return call with a JavaScript string replacement!
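
    A hedged, framework-agnostic sketch of the "call the handler yourself" approach (the helper name triggerSubmit is made up): invoke the form's onsubmit handler directly, and only fall back to a native submit when the handler doesn't cancel it. Because the Rails-generated handler fires the Ajax.Updater and then returns false, a false return here means the Ajax update has already been kicked off.

        function triggerSubmit(form) {
          if (typeof form.onsubmit === 'function') {
            // Run the inline handler; false means it handled (or cancelled)
            // the submission itself, so skip the native submit.
            if (form.onsubmit() !== false) {
              form.submit();
            }
          } else {
            form.submit(); // no handler rendered: plain submit
          }
        }

        // usage on the select tag: onchange="triggerSubmit(this.form)"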

    Read the article

  • Deleting unreferenced child records with nhibernate

    - by Chev
    Hi there. I am working on an MVC app using NHibernate as the ORM (NCommon framework). I have parent/child entities Product, Vendor and ProductVendors, with a one-to-many relationship between them: Product has a ProductVendors collection, Product.ProductVendors. I currently retrieve a Product object, eager load the children, and send these down the wire to my ASP.NET MVC client. A user will then modify the list of Vendors and post the updated Product back. I am using a custom model binder to generate the modified Product entity. I am able to update the Product fine and insert new ProductVendors. My problem is that dereferenced ProductVendors are not cascade deleted when specifying Product.ProductVendors.Clear() and calling _productRepository.Save(product). The problem seems to be with attaching the detached instance. Here are my mapping files:

    Product:

        <?xml version="1.0" encoding="utf-8" ?>
        <id name="Id">
          <generator class="guid.comb" />
        </id>
        <version name="LastModified" unsaved-value="0" column="LastModified" />
        <property name="Name" type="String" length="250" />

    ProductVendors:

        <?xml version="1.0" encoding="utf-8" ?>
        <id name="Id">
          <generator class="guid.comb" />
        </id>
        <version name="LastModified" unsaved-value="0" column="LastModified" />
        <property name="Price" />
        <many-to-one name="Product" class="Product" column="ProductId" lazy="false" not-null="true" />
        <many-to-one name="Vendor" class="Vendor" column="VendorId" lazy="false" not-null="true" />

    Custom model binder:

        using System;
        using Test.Web.Mvc;
        using Test.Domain;

        namespace Spoked.MVC
        {
            public class ProductUpdateModelBinder : DefaultModelBinder
            {
                private readonly ProductSystem ProductSystem;

                public ProductUpdateModelBinder(ProductSystem productSystem)
                {
                    ProductSystem = productSystem;
                }

                protected override void OnModelUpdated(ControllerContext controllerContext, ModelBindingContext bindingContext)
                {
                    var product = bindingContext.Model as Product;
                    if (product != null)
                    {
                        product.Category = ProductSystem.GetCategory(new Guid(bindingContext.ValueProvider["Category"].AttemptedValue));
                        product.Brand = ProductSystem.GetBrand(new Guid(bindingContext.ValueProvider["Brand"].AttemptedValue));
                        product.ProductVendors.Clear();
                        if (bindingContext.ValueProvider["ProductVendors"] != null)
                        {
                            string[] productVendorIds = bindingContext.ValueProvider["ProductVendors"].AttemptedValue.Split(',');
                            foreach (string id in productVendorIds)
                            {
                                product.AddProductVendor(ProductSystem.GetVendor(new Guid(id)), 90m);
                            }
                        }
                    }
                }
            }
        }

    Controller:

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Update(Product product)
        {
            using (var scope = new UnitOfWorkScope())
            {
                //product.ProductVendors.Clear();
                _productRepository.Save(product);
                scope.Commit();
            }
            using (new UnitOfWorkScope())
            {
                IList<Vendor> availableVendors = _productSystem.GetAvailableVendors(product);
                productDetailEditViewModel = new ProductDetailEditViewModel(product,
                    _categoryRepository.Select(x => x).ToList(),
                    _brandRepository.Select(x => x).ToList(),
                    availableVendors);
            }
            return RedirectToAction("Edit", "Products", new { id = product.Id.ToString() });
        }

    The following test does pass, though:

        [Test]
        [NUnit.Framework.Category("ProductTests")]
        public void Can_Delete_Product_Vendors_By_Dereferencing()
        {
            Product product;
            using (UnitOfWorkScope scope = new UnitOfWorkScope())
            {
                Console.Out.WriteLine("Selecting...");
                product = _productRepository.First();
                Console.Out.WriteLine("Adding Product Vendor...");
                product.AddProductVendor(_vendorRepository.First(), 0m);
                scope.Commit();
            }
            Console.Out.WriteLine("About to delete Product Vendors...");
            using (UnitOfWorkScope scope = new UnitOfWorkScope())
            {
                Console.Out.WriteLine("Clearing Product Vendor...");
                _productRepository.Save(product); // seems to be needed to attach entity to the persistence manager
                product.ProductVendors.Clear();
                scope.Commit();
            }
        }

    Going nuts here, as I almost have a very nice solution between MVC, custom model binders and NHibernate. I'm just not seeing my deletes cascaded. Any help greatly appreciated. Chev
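
    A hedged pointer rather than a confirmed answer: for children to be deleted when they are removed from the collection, the collection mapping on the Product side (which isn't shown in the post) normally needs cascade="all-delete-orphan". A sketch of what that fragment usually looks like, with element and column names assumed from the mappings above:

        <!-- Product.hbm.xml fragment (names assumed) -->
        <bag name="ProductVendors" cascade="all-delete-orphan" inverse="true" lazy="false">
          <key column="ProductId" />
          <one-to-many class="ProductVendors" />
        </bag>

    One frequently cited caveat with orphan deletion is that it can misbehave when the parent was detached and is later reattached, which would fit the symptom here: clearing the collection inside one session (the passing test) works, while the MVC round trip with a detached Product does not.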

    Read the article

  • Getting a Database into Source Control

    - by Grant Fritchey
    For any number of reasons, from simple auditing, to change tracking, to automated deployment, to integration with application development processes, you're going to want to place your database into source control. Using Red Gate SQL Source Control, this process is extremely simple.

    SQL Source Control works within your SQL Server Management Studio (SSMS) interface. This means you can work with your databases in any way that you're used to working with them. If you prefer scripts to using the GUI, not a problem. If you prefer using the GUI to having to learn T-SQL, again, that's fine. After installing SQL Source Control, it becomes a direct piece of the SSMS environment.

    The key point initially is that I currently don't have a database selected. You can even see that in the SQL Source Control window, where it shows, in red, "No database selected – select a database in Object Explorer." If I expand my Databases list in the Object Explorer, I can immediately see which databases have been integrated with source control and which have not; there are visible differences between the databases.

    To add a database to source control, I first have to select it. For this example, I'm going to add the AdventureWorks2012 database to an instance of the SVN source control software (I'm using uberSVN). When I click on the AdventureWorks2012 database, the SQL Source Control screen changes. I'm going to need to click on the "Link database to source control" text, which opens a window for connecting this database to the source control system of my choice. You can pick from the default source control systems on the left, or define one of your own.

    I also have to provide the connection string for the location within the source control system where I'll be storing my database code. I set these up in advance. You'll need two: one for the main set of scripts, and one for special scripts called Migrations that deal with different kinds of changes between versions of the code. Migrations help you solve problems like having to create or modify data in columns as part of a structural change. I'll talk more about them another day.

    Finally, I have to determine whether this is an isolated environment that I'm going to be the only one using, a dedicated database, or whether I'm sharing the database with other developers, a shared database. The main difference is that with a dedicated database I will need to regularly get any changes that other developers have made from source control and integrate them into my database, while with a shared database all changes for all developers are made at the same time, which means you could commit other people's work without proper testing. It all depends on the type of environment you work within.

    When it's all set, SQL Source Control will compare the results between the empty folders in source control and the database, AdventureWorks2012. You'll get a report showing exactly the list of differences, and you can choose which ones get checked into source control. Each of the database objects is scripted individually, and you'll be able to modify them later in the same way. If there was already a database in source control, you'd only see changes to database objects rather than every single object. You can select/deselect all the objects or each object individually, sort the database objects by name, by type, or other choices, and see a report showing the differences between what's in the database and what's in source control.

    I'm going to add a comment such as "Initial creation of database in source control" and then click on the Commit button, which will put all the objects in my database into the source control system. That's all it takes to get the objects into source control initially. Now is when things can get fun with breaking changes to code, automated deployments, unit testing and all the rest.

    Read the article

  • How to use crontab, .netrc, and git push?

    - by Jon
    Hi all, I am in the process of automating the backups from various servers to a central point, then pushing those config changes into a git repo so I can track any changes over time. The rest of the scripts are working well; I can copy/rsync the files across the network to a central point. The last script is to get the config files put into / updated in the repository. The script is as follows:

        #!/bin/bash
        clear
        SERVERNAME="betty"
        SCRIPTDIR="/home/jon"
        GITROOT="/tmp/git"
        TEMPROOT="/tmp/backups"
        BACKUPROOTDIR="/mnt/backups"

        echo " - running as user: $UID"
        echo "backingup git config on $SERVERNAME"
        echo ""

        # check to see if root backup folder exists, otherwise create it.
        if [ -d $GITROOT ]; then
            rm -rf $GITROOT
        fi
        mkdir $GITROOT
        cd $GITROOT

        echo " - testing if home is where I think it should be!"
        echo $HOME
        echo " - testing if it can see netrc"
        tail $HOME/.netrc

        git clone http://192.168.10.97:8000/repositories/HOH-config-backups.git
        cd HOH-config-backups

        echo " - copy Configuration Folders across"
        cp -r $BACKUPROOTDIR/Configuration/* $GITROOT/HOH-config-backups/
        cp -r $BACKUPROOTDIR/scripts $GITROOT/HOH-config-backups/

        git add .
        git commit -a -m "committing any new configuration changes!"
        git push origin master

        echo ""
        echo "Git repo updated"
        echo ""
        echo " - backing up this script"
        FIREWIGSCRIPTLOC="$BACKUPROOTDIR/scripts/$SERVERNAME"
        if [ ! -d $FIREWIGSCRIPTLOC ]; then
            mkdir $FIREWIGSCRIPTLOC
        fi
        cp /home/jon/gitConfig.sh $FIREWIGSCRIPTLOC

    The git repo is on a different machine in the network using Apache and HTTP-backend.exe (smart HTTP protocol). If I run this script as me, "jon", it works. If I run it in crontab it fails. git uses the /home/jon/.netrc file for authentication:

        machine 192.168.10.97
        login gitconfig
        password 1234579

    The log from crontab is:

        TERM environment variable not set.
         - running as user: 1000
        backingup git config on betty
         - testing if home is where I think it should be!
        /home/jon
         - testing if it can see netrc
        machine 192.168.10.97
        login gitconfig
        password 1234579
        got 08de5bc2b27b4940d9412256e76d5e3c3d9dbcdd
        walk 08de5bc2b27b4940d9412256e76d5e3c3d9dbcdd
        [... many more got/walk object lines from the clone ...]
        Getting alternates list for http://192.168.10.97:8000/repositories/HOH-config-backups.git
        Getting pack list for http://192.168.10.97:8000/repositories/HOH-config-backups.git
        Getting index for pack ae881957c0f0e8c22eb6cc889a22ef78eb4ce6ff
        Getting pack ae881957c0f0e8c22eb6cc889a22ef78eb4ce6ff which contains ff84d6d48e9326066438d167a10251218d612b3d
        [... more got/walk object lines ...]
        Initialized empty Git repository in /tmp/git/HOH-config-backups/.git/
         - copy Configuration Folders across
        Created commit 424df2f: committing any new configuration changes!
         3 files changed, 55 insertions(+), 1 deletions(-)
         create mode 100755 scripts/betty/gitConfig.sh
        error: Cannot access URL http://192.168.10.97:8000/repositories/HOH-config-backups.git/, return code 22
        error: failed to push some refs to 'http://192.168.10.97:8000/repositories/HOH-config-backups.git'
        Git repo updated
         - backing up this script
        cp: cannot create regular file `/mnt/backups/scripts/betty/gitConfig.sh': Permission denied

    my crontab is:

        # m h dom mon dow command
        04 * * * * /home/jon/gitConfig.sh > /tmp/gitconfig.log 2>&1

    I open it by doing crontab -e, i.e. not as root. I am a bit confused as to why it is not running as my user (or what user id 1000 is). Not sure what I need to do to get the push with git to work within crontab.

    edit: found out about the userid:

        jon@betty:~$ id
        uid=1000(jon) gid=1000(jon) groups=4(adm),20(dialout),24(cdrom),46(plugdev),109(sambashare),114(lpadmin),115(admin),1000(jon)

    here is my $HOME/.gitconfig file:

        [user]
            name = Jon Hawkins
            email = [email protected]

    Thanks
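
    A couple of hedged observations. First, uid 1000 is jon (the id output confirms it), so the script is running as the right user; the difference under cron is its minimal environment, which skips your login shell's profile. Setting the environment explicitly at the top of the crontab is a cheap way to rule that out:

        # m h dom mon dow command
        HOME=/home/jon
        PATH=/usr/local/bin:/usr/bin:/bin
        04 * * * * /home/jon/gitConfig.sh > /tmp/gitconfig.log 2>&1

    Second, git's old curl-based HTTP push reports return code 22 whenever the server answers with an HTTP error of 400 or above, so if the environment isn't the culprit, checking the Apache access log for the failing push request (plausibly a 401) would be the next step. The final cp: Permission denied looks separate: /mnt/backups/scripts/betty was presumably created earlier by a different user without group write access.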

    Read the article

  • MySQL Connect 8 Days Away - Replication Sessions

    - by Mat Keep
    Following on from my post about MySQL Cluster sessions at the forthcoming Connect conference, it's now the turn of MySQL Replication - another technology at the heart of scaling and high availability for MySQL. Unless you've only just returned from a 6-month alien abduction, you will know that MySQL 5.6 includes the largest set of replication enhancements ever packaged into a single new release:

    - Global Transaction IDs + HA utilities for self-healing clusters (yes, both automatic failover and manual switchover available!)
    - Crash-safe slaves and binlog
    - Binlog Group Commit and Multi-Threaded Slaves for high performance
    - Replication Event Checksums and Time-Delayed replication
    - and many more

    There are a number of sessions dedicated to learning more about these important new enhancements, delivered by the same engineers who developed them. Here is a summary.

    Saturday 29th, 13.00 - Replication Tips and Tricks, Mats Kindahl
    In this session, the developers of MySQL Replication present a bag of useful tips and tricks related to the MySQL 5.5 GA and MySQL 5.6 development milestone releases, including multisource replication, using logs for auditing, handling filtering, examining the binary log, using relay slaves, splitting the replication stream, and handling failover.

    Saturday 29th, 17.30 - Enabling the New Generation of Web and Cloud Services with MySQL 5.6 Replication, Lars Thalmann
    This session showcases the new replication features, including:
    - High performance (group commit, multithreaded slave)
    - High availability (crash-safe slaves, failover utilities)
    - Flexibility and usability (global transaction identifiers, annotated row-based replication [RBR])
    - Data integrity (event checksums)

    Saturday 29th, 19.00 - MySQL Replication Birds of a Feather
    In this session, the MySQL Replication engineers discuss all the goodies, including global transaction identifiers (GTIDs) with autofailover; multithreaded, crash-safe slaves; checksums; and more. The team discusses the design behind these enhancements and how to get started with them. You will get the opportunity to present your feedback on how these can be further enhanced and can share any additional replication requirements you have to further scale your critical MySQL-based workloads.

    Sunday 30th, 10.15 - Hands-On Lab, MySQL Replication, Luis Soares and Sven Sandberg
    But how do you get started, how does it work, and what are the best practices and tools? During this hands-on lab, you will learn how to get started with replication, how it works, architecture, replication prerequisites, setting up a simple topology, and advanced replication configurations. The session also covers some of the new features in the MySQL 5.6 development milestone releases.

    Sunday 30th, 13.15 - Hands-On Lab, MySQL Utilities, Chuck Bell
    Would you like to learn how to more effectively manage a host of MySQL servers and manage high-availability features such as replication? This hands-on lab addresses these areas and more. Participants will get familiar with all of the MySQL utilities, using each of them with a variety of options to configure and manage MySQL servers.

    Sunday 30th, 14.45 - Eliminating Downtime with MySQL Replication, Luis Soares
    The presentation takes a deep dive into new replication features such as global transaction identifiers and crash-safe slaves. It also showcases a range of Python utilities that, combined with the Release 5.6 feature set, result in a self-healing data infrastructure. By the end of the session, attendees will be familiar with the new high-availability features in the whole MySQL 5.6 release and how to make use of them to protect and grow their business.

    Sunday 30th, 17.45 - Scaling for the Web and the Cloud with MySQL Replication, Luis Soares
    In a replication topology, high performance directly translates into improving read consistency from slaves and reducing the risk of data loss if a master fails. MySQL 5.6 introduces several new replication features to enhance performance. In this session, you will learn about these new features, how they work, and how you can leverage them in your applications. In addition, you will learn about some other best practices that can be used to improve performance.

    So how can you make sure you don't miss out? The good news is that registration is still open ;-) And just to whet your appetite, listen to the On-Demand webinar that presents an overview of MySQL 5.6 Replication.
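
    If you want to experiment before the conference, here is a hedged my.cnf sketch of what turning on the headline 5.6 features looks like (option names as of the MySQL 5.6 release candidates; earlier milestones used different spellings for some of them, and values should be tuned to your own workload):

        [mysqld]
        # Global Transaction IDs (GTID mode requires binlogging and
        # log_slave_updates in 5.6)
        gtid_mode                 = ON
        enforce_gtid_consistency  = ON
        log_bin                   = binlog
        log_slave_updates         = ON

        # Crash-safe slave: keep replication state in transactional tables
        master_info_repository    = TABLE
        relay_log_info_repository = TABLE
        relay_log_recovery        = ON

        # Multi-threaded slave (5.6 parallelises per database/schema)
        slave_parallel_workers    = 4

        # Replication event checksums
        binlog_checksum           = CRC32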

    Read the article

  • SQL Server source control from Visual Studio

    - by David Atkinson
    Developers have long had to context-switch between two IDEs: Visual Studio for application code development, and SQL Server Management Studio for database development. While this is accepted, especially given the richness of the database development feature set in SSMS, loading a separate tool can seem a little overkill. This is where SQL Connect comes in. It is an add-in to Visual Studio that provides a connected development experience for the SQL Server developer.

    Connected database development involves modifying a development sandbox database, as opposed to offline development, where SQL text files are modified independently of the database. One of the main complaints about Data Dude (VS DBPro) is that it enforces the offline approach. This gripe is what SQL Connect addresses.

    If you don't already use SQL Source Control, you can get up and running with SQL Connect by adding a new project to your Visual Studio solution, then choosing your existing development database, and you're ready to go. If you already use SQL Source Control, you will need to link SQL Connect to your existing database scripts folder repository, so SQL Connect and SQL Source Control can be used collaboratively (note that SQL Source Control v3.0.9.18 or later is required). Locate the repository (this can be found in the Setup tab in SQL Source Control) and create a working folder for it (here I'm using TortoiseSVN).

    Back in Visual Studio, locate the SQL Connect panel (in the View menu if it hasn't auto-loaded) and select Import SQL Source Control project. Locate your working folder and click Import. This creates a Red Gate database project under your solution. From here you can modify your development database and manage your changes in source control. To associate your development database with the project, right-click on the project node, select Properties, set the database, and Save.

    Now you're ready to make some changes. Locate the object you'd like to modify in the Solution Explorer, and double-click it to invoke a query window or table designer. You also have the option to edit the creation SQL directly using Edit SQL File in Project. Keeping the development database and the Visual Studio project in sync is as easy as clicking on a button. Once you've made your change, you can use whichever mechanism you choose to commit to source control. Here I'm using the free open-source AnkhSVN to integrate Subversion with Visual Studio.

    Maintaining your database in a Visual Studio solution means that you can commit database changes and application code changes in the same changeset. This is desirable if you have continuous integration set up, as you want to ensure that all files related to a change are committed atomically, so you avoid an interim "broken build". More discussion on SQL Connect and its benefits can be found in the following article on Simple Talk: No More Disconnected SQL Development in Visual Studio.

    The SQL Connect project team is currently assessing the backlog for the next development effort, and they'd appreciate your feature suggestions, as well as your votes on their suggestions site: http://redgate.uservoice.com/forums/140800-sql-connect-for-visual-studio-

    A 28-day free trial of SQL Connect is available from the Red Gate website.

    Technorati Tags: SQL Server

    Read the article

  • HPCM 11.1.2.x - Outline Optimisation for Calculation Performance

    - by Jane Story
    When an HPCM application is first created, it is likely that you will want to carry out some optimisation on the HPCM application's Essbase outline in order to improve calculation execution times. There are several things that you may wish to consider.

    Because at least one dense dimension is required to deploy an application from HPCM to Essbase, "Measures" and "AllocationType", as the only required dimensions in an HPCM application, are created dense by default. However, for optimisation reasons, you may wish to consider changing this default dense/sparse configuration. In general, calculation scripts in HPCM execute best when they are targeting destinations with one or more dense dimensions. Therefore, consider your largest target stage, i.e. the stage with the most assignment destinations, and choose that as a dense dimension. When optimising an outline in this way, it is not possible to have a dense dimension in every target stage, and so testing the dense/sparse settings in every stage is the key to finding the best configuration for each individual application.

    It is not possible to change the dense/sparse setting of individual cloned dimensions from EPMA. When a dimension that is to be repeated in multiple stages, and therefore cloned, is defined in EPMA, every instance of that dimension has the same storage setting. However, once the application has been deployed from EPMA to HPCM and from HPCM to Essbase, it is possible to make the dense/sparse changes to a cloned dimension directly in Essbase. This can be done by editing the properties of the outline in Essbase Administration Services (EAS) and manually changing the dense/sparse settings of individual dimensions. Note that such manual changes may not be preserved in all cases, as explained below.

    There are two methods of deployment from HPCM to Essbase from 11.1.2.1: a "replace" deploy method and an "update" deploy method.

    - "Replace" will delete the Essbase application and replace it. If this method is chosen, then any changes made directly on the Essbase outline will be lost.
    - If you use the "update" deploy method (with or without archiving and reloading data), then the Essbase outline, including any manual changes you have made (i.e. changes to dense/sparse settings of the cloned dimensions), will be preserved.

    Notes

    If you are using the calculation optimisation technique mentioned in a previous blog to calculate multiple POVs (https://blogs.oracle.com/pa/entry/hpcm_11_1_2_optimising) and you are calculating all members of that POV dimension (e.g. all months in the Period dimension), then you could consider making that dimension dense.

    Always review block sizes after all changes! The maximum block size recommended in the Essbase Database Administrator's Guide is 100KB for 32-bit Essbase and 200KB for 64-bit Essbase. However, calculations may perform better with a larger than recommended block size, provided that sufficient memory is available on the Essbase server. Test different configurations to determine the most optimal solution for your HPCM application.

    Please note that this blog article covers HPCM outline optimisation only. Additional performance tuning can be achieved by methodically testing database settings, i.e. data cache, index cache and/or commit block settings. For more information on Essbase tuning best practices, please review these items in the Essbase Database Administrator's Guide. For additional information on the commit block setting, please see the previous PA blog article https://blogs.oracle.com/pa/entry/essbase_11_1_2_commit

    Read the article

  • Managing Scripts in Oracle SQL Developer

    - by thatjeffsmith
    You backup your databases, right? You backup your home computer - your media collection, tax documents, bank accounts, etc. - right? You backup your handy-dandy SQL scripts, right?

    Ok, now that I've got your head nodding, I want to answer a question I get every so often: how can I manage my scripts in SQL Developer? This is an interesting question. First, it assumes that one SHOULD manage their scripts in their IDE. Now, what I think the question generally gets around to is, how can we:

    - Navigate to our scripts
    - Open them
    - Execute them

    What a good IDE should have is an interface to your existing Version Control System (VCS). SQL Developer supports both Subversion and Git out of the box. You can also download an extension via check-for-updates to get support for CVS. Now, what I'm about to show you COULD be done without versioning and controlling your scripts, but I want to ask you: why wouldn't you want to do this? So, I'm going to proceed and assume that you do INDEED version your scripts already.

    Seeing what scripts you've already got in your repository

    This is very straightforward: just open the Team Versions panel, then connect to your repository. It shows you the files in your source control system. Now, I could 'preview' said file right away. If I open the file from here, we get a temp file copied down from the server to the local machine. This is a local temp copy of the controlled script; I can read/execute, but not write to it. And that might be all you need. But if your script calls other scripts, then you're going to want to check out the server copy of your stuff to your local SVN working copy directory. That way, when your script calls another script, you're executing the PRODUCTION APPROVED copies of said scripts. And if you do SPOOL or other file I/O stuff, it will work as expected. To get to those said client copies of your scripts…

    Enter the Files panel

    The Files panel is accessible from the View menu. You can get to your files one of two ways. If you've touched the file recently, you can see it under the Recent tree. Otherwise, you can navigate to your local 'checked out' copies of your script(s). Open your local copies, see what's changed, etc. I can also access the change history and see what's been touched… What changes am I going to 'push out' if I commit this back to the server?

    Most of us work on teams, yes? This panel also gives me a heads-up if someone else is making changes to the same file. I can see the 'incoming' changes as well.

    To Sum It Up…

    If I want to get a script to run:

    - do a full get to your local directory
    - open the script(s)

    The Files panel will tell you if your local copy is out of date from the server, and if you have made local changes you've forgotten to commit back up to the server and your fellow teammates. Now, if you're the selfish type and don't want to share, that's fine. But you should still be backing up your scripts, and you can still use the Files panel to manage them.

    Read the article

  • [EF + ORACLE] Updating and Deleting Entities

    - by JTorrecilla
    Prologue

    In previous chapters we have seen how to insert data through EF, with and without sequences. In this one, we are going to see how to update and delete data in the DB.

    Updating data

    Updating an Entity's data (properties) is a very common and easy action. Before changing any of the properties of the Entity, we can check the EntityState property and see that it is EntityState.Unchanged. To make an update, we need to get the Entity which will be modified. In the following example, I use GetEmployeeByNumber to get a valid Entity:

        EMPLEADOS emp = GetEmployeeByNumber(2);
        emp.Name = "a";
        emp.Phone = "2";
        emp.Mail = "aa";

    After modifying the desired properties of the Entity, we check the EntityState property again; it now has the EntityState.Modified value. To persist the changes to the DB it is necessary to invoke the SaveChanges function of our context:

        context.SaveChanges();

    If we check the EntityState property once more, we will see that the value is back to EntityState.Unchanged.

    Deleting Data

    Another easy action is to delete an Entity. The first step to delete an Entity from the DB is to select the entity:

        CLIENTES selectedClient = GetClientByNumber(15);
        context.CLIENTES.DeleteObject(selectedClient);

    Before invoking the DeleteObject function, we can check EntityState, whose value must be EntityState.Unchanged. After deleting the object, the state changes to EntityState.Deleted. To commit the action we have to invoke the SaveChanges function. After that, the EntityState property will be EntityState.Detached.

    Cascade

    Entity Framework supports cascade updates and deletes, although I have never seen cascade updates. What is a cascade delete? A cascade delete is an action that deletes all the objects related to the object we want to delete. This option can be established in the DB manager, or in the EF model designer.

    For example, take a (1-N) relation between clients and requests. The common requirement would be to only allow deleting clients who have no requests. If we select the relation between both entities and press the second mouse button, we can see the properties panel of the relation. The grid shows the relation, indicating the master table (Clients) and the end point (Cabecera, or Requests). The property "End 1 OnDelete" indicates the action to take when an Entity from the master is deleted. There are two options:

    - None: no action is taken; that is, if an Entity has detail entities it cannot be deleted.
    - Cascade: all entities related to the master Entity are deleted.

    If we enable cascade delete on a relation and invoke the DeleteObject function of the set, we can observe that all the related detail entities show an EntityState.Deleted state. As with an update, insert or plain delete, the data will not be committed until we commit the changes with the SaveChanges function.

    Finally

    In this chapter we have seen how to update an Entity, how to delete an Entity, and how to implement cascade deletes through EF. In the next chapters we will see how to query the DB data.
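
    Pulling the chapter's steps together, a minimal sketch (the entity sets and helper calls follow the article's examples; the ModelContainer context name and the Number properties are assumptions):

        using (var context = new ModelContainer())
        {
            // Update: change properties, state moves Unchanged -> Modified,
            // and SaveChanges returns it to Unchanged
            EMPLEADOS emp = context.EMPLEADOS.First(e => e.Number == 2);
            emp.Name = "a";
            context.SaveChanges();

            // Delete: DeleteObject marks the entity as Deleted;
            // SaveChanges removes the row and leaves the entity Detached
            CLIENTES client = context.CLIENTES.First(c => c.Number == 15);
            context.CLIENTES.DeleteObject(client);
            context.SaveChanges();
        }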

    Read the article

  • What are they buying – work or value?

    - by Jamie Kurtz
    When was the last time you ordered a pizza like this: "I want the high school kid in the back to do the following… make a big circle with some dough, curl up the edges, then put some sauce on it using a small ladle, then I want him to take a handful of shredded cheese from the metal container and spread it over the circle and sauce, then finally I want the kid to place 36 pieces of pepperoni over the top of the cheese"? Probably never. My typical pizza order usually goes more like this: "I want a large pepperoni pizza."

    In the world of software development, we try so hard to be all things agile. We:

    - Write lots of unit tests
    - Refactor our code, then refactor it some more
    - Avoid writing lengthy requirements documents
    - Try to keep processes to a minimum, and give developers freedom
    - Take pride in our constantly shifting focus (i.e. we're "responding to change")

    Yet, after all this, we fail to really lean on and capitalize on one of agile's main differentiators (from the twelve principles behind the Agile Manifesto): "Working software is the primary measure of progress." That is, we foolishly commit to delivering tasks instead of features and bug fixes. Like my pizza example above, we fall into the trap of signing contracts that bind us to doing tasks rather than delivering working software. And the biggest problem here, by far the most troubling outcome, is that we don't let working software be a major force in all the work we do.

    When teams manage to ruthlessly focus on the end product, it puts them on the path of true agile. It doesn't let them accidentally write too much documentation, or spend lots of time and money on processes and fancy tools. It forces early testing that reveals problems in the feature or bug fix. And it forces lots and lots of customer interaction.

    Without that focus on the end product as your deliverable, by committing to a list of tasks instead of a list of features and bug fixes, you are doomed to NOT be agile. You will end up just doing stuff, spending time on the keyboard, burning time on timesheets. Doing tasks doesn't force you to minimize documentation. It makes it much harder to respond to change. And it will eventually force you and the client into contract haggling. Because the customer isn't really paying you to do stuff. He's ultimately paying for features and bug fixes. And when the customer doesn't get what they want, responding with "well, look at the contract - we did all the tasks we committed to" doesn't typically generate referrals or callbacks.

    In short, if you're trying to deliver real value to the customer by going agile, you will most certainly fail if all you commit to is a list of things you're going to do. Give agile what it needs by committing to features and bug fixes, not a list of ToDo items. So the next time you are writing up a contract, remember what the customer is really buying: the finished pizza, not the steps it takes to make one.

    Read the article

  • Version control of software refactoring

    - by Muhammad Alkarouri
    What is the best way of doing version control of large-scale refactoring? My typical style of programming (actually of writing documents as well) is getting something out as quickly as possible and then refactoring it. Typically, refactoring takes place at the same time as adding other functionality. In addition to standard refactoring of classes and functions, functions may move from one file to another, and files get split, merged or just reordered.

    For the time being, I am using version control as a lone user, so there is no issue of interaction with other developers at this stage. Still, version control gives me two things:

    - Backup, and the ability to revert to a good version "in case".
    - Looking at the history tells me how the project progressed and the flow of ideas.

    I am using Mercurial on Windows with TortoiseHg, which enables selection of hunks to commit. The reason I mention this is that I would like advice on the granularity of a commit in refactoring. Should I always split refactoring from added functionality when committing?

    I have looked at the answers to http://stackoverflow.com/questions/68459/refactoring-and-source-control-how-to but it doesn't answer my question. That question focuses on collaboration with a team. This one concentrates on having a history that is understandable in the future (assuming I don't rewrite history, as some VCSs seem to allow).
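
    For what it's worth, TortoiseHg's hunk selection has a command-line equivalent in Mercurial's bundled record extension, which makes the "one working copy, two commits" split repeatable; a hedged sketch (the commit messages are made up):

        # enable the record extension once, in mercurial.ini / .hgrc:
        [extensions]
        record =

        # then pick hunks interactively, answering y/n per change, to split
        # the current working-copy diff into separate commits:
        hg record -m "refactor: move parsing helpers into parser module"
        hg record -m "feature: accept gzipped input files"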

    Read the article

  • pointer being freed was not allocated. Complex malloc history help

    - by Martin KS
    I've followed the guides helpfully linked here: http://stackoverflow.com/questions/295778/iphone-debugging-pointer-being-freed-was-not-allocated-errors, but the malloc_history is really throwing me for a loop. Can anyone shed any light on the following?

        ALLOC 0x185c600-0x18605ff [size=16384]: thread_a068a4e0 |
        start | main | UIApplicationMain | -[UIApplication _run] | CFRunLoopRunInMode | CFRunLoopRunSpecific | PurpleEventCallback | _UIApplicationHandleEvent | -[UIApplication sendEvent:] | -[UIApplication handleEvent:withNewEvent:] | -[UIApplication _reportAppLaunchFinished] | CA::Transaction::commit() | CA::Context::commit_transaction(CA::Transaction*) | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CA::Context::commit_layer(_CALayer*, unsigned int, unsigned int, void*) | CA::Render::encode_set_object(CA::Render::Encoder*, unsigned long, unsigned int, CA::Render::Object*, unsigned int) | CA::Render::Layer::encode(CA::Render::Encoder*) const | CA::Render::Image::encode(CA::Render::Encoder*) const | CA::Render::Encoder::encode_data_async(void const*, unsigned long, void (*)(void const*, void*), void*) | CA::Render::Encoder::encode_bytes(void const*, unsigned long) | CA::Render::Encoder::grow(unsigned long) | realloc | malloc_zone_realloc

        FREE 0x185c600-0x18605ff [size=16384]: thread_a068a4e0 |
        start | main | UIApplicationMain | -[UIApplication _run] | CFRunLoopRunInMode | CFRunLoopRunSpecific | PurpleEventCallback | _UIApplicationHandleEvent | -[UIApplication sendEvent:] | -[UIApplication handleEvent:withNewEvent:] | -[UIApplication _reportAppLaunchFinished] | CA::Transaction::commit() | CA::Context::commit_transaction(CA::Transaction*) | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CA::Context::commit_layer(_CALayer*, unsigned int, unsigned int, void*) | CA::Render::encode_set_object(CA::Render::Encoder*, unsigned long, unsigned int, CA::Render::Object*, unsigned int) | CA::Render::Layer::encode(CA::Render::Encoder*) const | CA::Render::Image::encode(CA::Render::Encoder*) const | CA::Render::Encoder::encode_data_async(void const*, unsigned long, void (*)(void const*, void*), void*) | CA::Render::Encoder::encode_bytes(void const*, unsigned long) | CA::Render::Encoder::grow(unsigned long) | realloc | malloc_zone_realloc

        ALLOC 0x185e000-0x185e62f [size=1584]: thread_a068a4e0 |
        start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[UITableView _userSelectRowAtIndexPath:] | -[UITableView _selectRowAtIndexPath:animated:scrollPosition:notifyDelegate:] | -[PLAlbumView tableView:didSelectRowAtIndexPath:] | -[PLUIAlbumViewController albumView:selectedPhoto:] | PLNotifyImagePickerOfImageAvailability | -[UIImagePickerController _imagePickerDidCompleteWithInfo:] | -[GalleryViewController imagePickerController:didFinishPickingMediaWithInfo:] | UIImageJPEGRepresentation | CGImageDestinationFinalize | _CGImagePluginWriteJPEG | writeOne | _cg_jpeg_start_compress | _cg_jinit_compress_master | _cg_jinit_c_prep_controller | alloc_sarray | alloc_large | malloc | malloc_zone_malloc

        FREE 0x185e000-0x185e62f [size=1584]: thread_a068a4e0 |
        start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[UITableView _userSelectRowAtIndexPath:] | -[UITableView _selectRowAtIndexPath:animated:scrollPosition:notifyDelegate:] | -[PLAlbumView tableView:didSelectRowAtIndexPath:] | -[PLUIAlbumViewController albumView:selectedPhoto:] | PLNotifyImagePickerOfImageAvailability | -[UIImagePickerController _imagePickerDidCompleteWithInfo:] | -[GalleryViewController imagePickerController:didFinishPickingMediaWithInfo:] | UIImageJPEGRepresentation | CGImageDestinationFinalize | _CGImagePluginWriteJPEG | writeOne | _cg_jpeg_abort | free_pool | free

        ALLOC 0x185c800-0x185ea1f [size=8736]: thread_a068a4e0 |
        start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[UITableView _userSelectRowAtIndexPath:] | -[UITableView _selectRowAtIndexPath:animated:scrollPosition:notifyDelegate:] | -[PLAlbumView tableView:didSelectRowAtIndexPath:] | -[PLUIAlbumViewController albumView:selectedPhoto:] | PLNotifyImagePickerOfImageAvailability | -[UIImagePickerController _imagePickerDidCompleteWithInfo:] | -[GalleryViewController imagePickerController:didFinishPickingMediaWithInfo:] | -[UIImage initWithData:] | _UIImageRefFromData | CGImageSourceCreateImageAtIndex | makeImagePlus | _CGImagePluginInitJPEG | initImageJPEG | calloc | malloc_zone_calloc

  • Hibernate - Batch update returned unexpected row count from update: 0 actual row count: 0 expected:

    - by Sujee
    Hi all, I get the following Hibernate error. I can identify the function that causes the issue, but unfortunately it makes several DB calls, and I am unable to find the exact line responsible because Hibernate only flushes the session at the end of the transaction. The error below looks generic; it doesn't even mention which bean caused the problem. Is anyone familiar with this Hibernate error? Looking forward to your help. Thanks in advance. Sujee.

        org.hibernate.StaleStateException: Batch update returned unexpected row count from update: 0 actual row count: 0 expected: 1
        at org.hibernate.jdbc.BatchingBatcher.checkRowCount(BatchingBatcher.java:93)
        at org.hibernate.jdbc.BatchingBatcher.checkRowCounts(BatchingBatcher.java:79)
        at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:58)
        at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:195)
        at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:235)
        at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:142)
        at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:297)
        at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:27)
        at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:985)
        at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:333)
        at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:106)
        at org.springframework.orm.hibernate3.HibernateTransactionManager.doCommit(HibernateTransactionManager.java:584)
        at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:500)
        at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:473)
        at org.springframework.transaction.interceptor.TransactionAspectSupport.doCommitTransactionAfterReturning(TransactionAspectSupport.java:267)
        at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:106)
        at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:170)
        at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:176)
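    One way to narrow this down, sketched here with illustrative names (the session and entities are not from the post): flush the session manually after each write so the StaleStateException surfaces at the statement that triggers it, instead of inside the batched flush at commit time.

        // Hedged debugging sketch: flushing after every write pinpoints
        // the failing UPDATE/DELETE instead of deferring it to commit.
        Session session = sessionFactory.getCurrentSession();
        session.update(someEntity);
        session.flush();   // the exception is raised here if this statement is the culprit
        session.update(otherEntity);
        session.flush();

    The exception itself means one of the batched statements matched zero rows, typically because the row was deleted or re-keyed by another session, or because of an identifier/version mismatch on a detached object.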

  • UniqueConstraint in EmbeddedConfiguration

    - by LantisGaius
    I just started using db4o with C#, and I'm having trouble setting a unique constraint on the DB. Here's the db4o configuration:

        static IObjectContainer db = Db4oEmbedded.OpenFile(dbase.Configuration(), "data.db4o");

        static IEmbeddedConfiguration Configuration()
        {
            IEmbeddedConfiguration dbConfig = Db4oEmbedded.NewConfiguration();
            // Initialize Replication
            dbConfig.File.GenerateUUIDs = ConfigScope.Globally;
            dbConfig.File.GenerateVersionNumbers = ConfigScope.Globally;
            // Initialize Indexes
            dbConfig.Common.ObjectClass(typeof(DAObs.Environment)).ObjectField("Key").Indexed(true);
            dbConfig.Common.Add(new Db4objects.Db4o.Constraints.UniqueFieldValueConstraint(typeof(DAObs.Environment), "Key"));
            return dbConfig;
        }

    and the object to serialize:

        class Environment
        {
            public string Key { get; set; }
            public string Value { get; set; }
        }

    Every time I get to committing some values, an "Object reference not set to an instance of an object." exception pops up, with a stack trace pointing to the UniqueFieldValueConstraint. When I comment out the two lines after the "Initialize Indexes" comment, everything runs fine (except that non-unique keys can then be saved, which is the problem). Here is the commit code, in case I'm doing something wrong in this part too:

        public static void Create(string key, string value)
        {
            try
            {
                db.Store(new DAObs.Environment() { Key = key, Value = value });
                db.Commit();
            }
            catch (Db4objects.Db4o.Events.EventException ex)
            {
                System.Console.WriteLine(DateTime.Now + " :: Environment.Create\n"
                    + ex.InnerException.Message + "\n" + ex.InnerException.StackTrace);
                db.Rollback();
            }
        }

    Help please? Thanks in advance~
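    A possible cause, offered as an assumption rather than a confirmed diagnosis: with C# auto-implemented properties, db4o stores the compiler-generated backing field rather than the property, so ObjectField("Key") may bind to a field that does not exist and leave the constraint holding a null index. A sketch of the workaround using the backing-field name:

        // Hypothetical fix: target the compiler-generated backing field.
        // "<Key>k__BackingField" is the name the C# compiler usually emits
        // for an auto-property called Key (an assumption; verify with a decompiler).
        const string keyField = "<Key>k__BackingField";
        dbConfig.Common.ObjectClass(typeof(DAObs.Environment)).ObjectField(keyField).Indexed(true);
        dbConfig.Common.Add(new Db4objects.Db4o.Constraints.UniqueFieldValueConstraint(
            typeof(DAObs.Environment), keyField));

    Alternatively, declaring Key as a plain public field sidesteps the naming question entirely.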

  • Nature of Lock is child table while deletion(sql server)

    - by Mubashar Ahmad
    Dear devs, for a couple of days I have been thinking about the following scenario. Consider two tables with a one-to-many parent-child relationship. On removal of a parent row, I have to delete the child rows that relate to it. Simple, right? I have to wrap the operation in a transaction, which I can do in either of the following ways (this is pseudocode, but I am doing it in C# over an ODBC connection, and the database is SQL Server):

        begin transaction (read committed)
            read all child where child.fk = p1
            foreach (child)
                delete child where child.pk = cx
            delete parent where parent.pk = p1
        commit transaction

    or:

        begin transaction (read committed)
            delete all child where child.fk = p1
            delete parent where parent.pk = p1
        commit transaction

    Now there are a couple of questions on my mind:

    1. Which of the above is better to use, especially in a real-time system where thousands of other operations (select/update/delete/insert) are being performed within a span of seconds?
    2. Does it ensure that no new child with child.fk = p1 will be added until the transaction completes? If yes, how does it ensure that? Does it take table-level locks, or something else?
    3. Is there any kind of index locking supported by SQL Server? If yes, what does it do and how can it be used?

    Regards, Mubashar
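    On question 2: under plain READ COMMITTED, neither form by itself blocks a concurrent insert of a new child with fk = p1 between the two deletes. Below is a sketch of one way to close that gap; table and column names follow the pseudocode above, and @p1 is an assumed parameter. The HOLDLOCK hint requests serializable, key-range locking for that statement, which stays narrow only if child.fk is indexed.

        BEGIN TRANSACTION;

        -- Key-range locks taken by HOLDLOCK block concurrent INSERTs of
        -- rows with fk = @p1 until the transaction commits.
        DELETE FROM child WITH (HOLDLOCK)
        WHERE fk = @p1;

        DELETE FROM parent
        WHERE pk = @p1;

        COMMIT TRANSACTION;

    If the foreign key is declared with ON DELETE CASCADE, the child delete can be dropped altogether and SQL Server removes the children atomically as part of the parent delete.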

  • Cannot login to Activeadmin after gem update

    - by user1883793
    After a bundle update I can no longer log in to my ActiveAdmin panel; here is the log. Is it because of the unpermitted params? Do I need to configure strong parameters to make the admin login work? I already have this code for Devise:

        def configure_permitted_parameters
          devise_parameter_sanitizer.for(:sign_in) { |u| u.permit(:email, :password, :remember_me) }
          devise_parameter_sanitizer.for(:sign_up) { |u| u.permit(:username, :email, :password) }
        end

        Started POST "/admin/login" for 127.0.0.1 at 2013-10-30 22:33:25 +1300
        Processing by ActiveAdmin::Devise::SessionsController#create as HTML
          Parameters: {"utf8"=>"?", "authenticity_token"=>"MhoM/R/oVfad/iiov2zpqfoJ5XOSLda6rTl/V2cMIZE=", "admin_user"=>{"email"=>"[email protected]", "password"=>"[FILTERED]", "remember_me"=>"0"}, "commit"=>"Login"}
        Completed 401 Unauthorized in 0.6ms
        Processing by ActiveAdmin::Devise::SessionsController#new as HTML
          Parameters: {"utf8"=>"?", "authenticity_token"=>"MhoM/R/oVfad/iiov2zpqfoJ5XOSLda6rTl/V2cMIZE=", "admin_user"=>{"email"=>"[email protected]", "password"=>"[FILTERED]", "remember_me"=>"0"}, "commit"=>"Login"}
        Unpermitted parameters: email, password, remember_me
          Rendered /home/jcui/.rvm/gems/ruby-1.9.3-p194/gems/activeadmin-0.6.2/app/views/active_admin/devise/shared/_links.erb (0.6ms)
          Rendered /home/jcui/.rvm/gems/ruby-1.9.3-p194/gems/activeadmin-0.6.2/app/views/active_admin/devise/sessions/new.html.erb within layouts/active_admin_logged_out (118.2ms)
        Completed 200 OK in 130.7ms (Views: 129.9ms | ActiveRecord: 0.0ms | Solr: 0.0ms)

  • db4o Replication System: NullReferenceException?

    - by virtualmic
    Hi, I am trying to do standard bi-directional replication as follows. However, I get a NullReferenceException. This is a separate replication project; I did import the classes involved in the original project (such as Item, Category etc.) into this replication project. What am I doing wrong? (If I debug using VS, I can see that changedObjects does have all the changed objects; there seems to be some problem inside the Replicate function.)

        IObjectContainer local = Db4oFactory.OpenFile(@"G:\Work\School\MIS\VINMIS\Inventory\bin\Debug\vin.db4o");
        IObjectContainer far = Db4oFactory.OpenFile(@"\\crs-lap\c$\vinmis\vin.db4o");

        IReplicationSession replication = Replication.Begin(local, far);

        IObjectSet changedObjects = replication.ProviderA().ObjectsChangedSinceLastReplication();
        while (changedObjects.HasNext())
            replication.Replicate(changedObjects.Next()); // Exception!!!
        replication.Commit();

        changedObjects = replication.ProviderB().ObjectsChangedSinceLastReplication();
        while (changedObjects.HasNext())
            replication.Replicate(changedObjects.Next());
        replication.Commit();

    Regards, Saurabh.
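    One hedged guess, since the post does not show how the database files were created: dRS can only match objects across containers when both files carry UUIDs and version numbers, and a file created without them tends to fail inside Replicate. A sketch of that precondition, using the older Db4oFactory configuration style (the exact API calls here are an assumption to verify against the db4o version in use):

        // Both containers must have been created/opened with these settings;
        // objects stored before UUIDs were enabled may need to be re-stored.
        IConfiguration cfg = Db4oFactory.NewConfiguration();
        cfg.GenerateUUIDs(Int32.MaxValue);          // assumed: enable globally
        cfg.GenerateVersionNumbers(Int32.MaxValue); // assumed: enable globally
        IObjectContainer local = Db4oFactory.OpenFile(cfg, @"G:\Work\School\MIS\VINMIS\Inventory\bin\Debug\vin.db4o");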

  • git branch naming best practices

    - by skiphoppy
    I've been using a local git repository interacting with my group's CVS repository for several months now. I've made an almost neurotic number of branches, most of which have thankfully merged back into my trunk. But naming is starting to become an issue. If I have a task easily named with a simple label, but I accomplish it in three stages which each include their own branch and merge situation, then I can repeat the branch name each time, but that makes the history a little confusing. If I get more specific in the names, with a separate description for each stage, then the branch names start to get long and unwieldy. I did learn looking through old threads here that I could start naming branches with a / in the name, e.g., topic/task, or something like that. I may start doing that and see if it helps keep things better organized. What are some best practices for naming git branches? Edit: Nobody has actually suggested any naming conventions. I do delete branches when I'm done with them. I just happen to have several around due to management constantly adjusting my priorities. :) As an example of why I might need more than one branch on a task, suppose I need to commit the first discrete milestone in the task to the group's CVS repository. At that point, due to my imperfect interaction with CVS, I would perform that commit and then kill that branch. (I've seen too much weirdness interacting with CVS if I try to continue to use the same branch at that point.)
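    For what it's worth, here is one slash-namespaced convention, purely as an illustration (the branch names are invented, not from the post):

        git checkout -b topic/cvs-import/stage-1   # task name plus a stage suffix
        git checkout -b topic/cvs-import/stage-2   # next milestone; history stays distinct
        git checkout -b bugfix/login-timeout       # a separate namespace for fixes
        git branch --list 'topic/*'                # namespaces make listing by type easy

    Short type prefixes (topic/, bugfix/, experiment/) keep names grep-able, while a stage suffix avoids reusing the same branch name across successive merges.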

  • Need help with animation on iPhone

    - by Arun Ravindran
    I'm working on an animated clock application for the iPhone, and I have to pivot all three needles in the view, which I have set up in the following code:

        [CATransaction begin];
        [CATransaction setValue:(id)kCFBooleanTrue forKey:kCATransactionDisableActions];
        clockarm.layer.anchorPoint = CGPointMake(0.0, 0.0);
        [CATransaction commit];

        [CATransaction begin];
        [CATransaction setValue:(id)kCFBooleanFalse forKey:kCATransactionDisableActions];
        [CATransaction setValue:[NSNumber numberWithFloat:50.0] forKey:kCATransactionAnimationDuration];
        CABasicAnimation *animation;
        animation = [CABasicAnimation animationWithKeyPath:@"transform.rotation.z"];
        animation.fromValue = [NSNumber numberWithFloat:-60.0];
        animation.toValue = [NSNumber numberWithFloat:2 * M_PI];
        animation.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionLinear];
        animation.delegate = self;
        [clockarm.layer addAnimation:animation forKey:@"rotationAnimation"];
        animation.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut];
        [CATransaction commit];

    The problem is that it rotates only once, i.e. a single 360 degrees, and then stops. I want to rotate the needles indefinitely. How would I do that?
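    A minimal sketch of one fix, assuming the goal is continuous rotation: make the animation repeat forever instead of running a single pass.

        CABasicAnimation *spin = [CABasicAnimation animationWithKeyPath:@"transform.rotation.z"];
        spin.fromValue = [NSNumber numberWithFloat:0.0f];
        spin.toValue = [NSNumber numberWithFloat:2 * M_PI];  // one full revolution
        spin.duration = 60.0;                                // e.g. a second hand: 60 s per turn
        spin.cumulative = YES;                               // keep accumulating turns across repeats
        spin.repeatCount = HUGE_VALF;                        // repeat indefinitely
        spin.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionLinear];
        [clockarm.layer addAnimation:spin forKey:@"rotationAnimation"];

    Two details worth noting: repeatCount = HUGE_VALF is what prevents the single-pass stop, and a linear timing function avoids a visible speed-up and slow-down at each repeat boundary. Setting the timing function again after addAnimation:, as the original code does, has no effect, because the layer copies the animation when it is added.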

  • How to permanently remove xcuserdata under the project.xcworkspace and resolve uncommitted changes

    - by JeffB6688
    I am struggling with a problem with a merge conflict (see Cannot Merge due to conflict with UserInterfaceState.xcuserstate). Based on feedback, I needed to remove the UserInterfaceState.xcuserstate using git rm. After considerable experimentation, I was able to remove the file with "git rm -rf project.xcworkspace/xcuserdata". So while I was on the branch I was working on, it almost immediately came back as a file that needed to be committed. So I did the git rm on the file again and just switched back to the master. Then I performed a git rm on the file again. The operation again removed the file. But I am still stuck. If I try to merge the branch into the master branch, it again says that I have uncommitted changes. So I go to commit the change. But this time, it shows UserInterfaceState.xcuserstate as the file to commit, but the box is unchecked and it can't be checked. So I can't move forward. Is there a way to use 'git rm' to permanently remove xcuserdata under the project.xcworkspace? Help!! Any ideas?
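    For the "permanently remove" part, the usual recipe is to stop tracking the directory and then ignore it, so it can never reappear as an uncommitted change. A sketch, using the paths from the question (exact paths may differ per project):

        # from the repository root
        echo "xcuserdata/" >> .gitignore
        git rm -r --cached project.xcworkspace/xcuserdata
        git commit -m "Stop tracking xcuserdata"

    git rm --cached removes the files from the index without deleting them from disk, and the .gitignore entry keeps Xcode's regenerated copies out of every future status, commit, and merge.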

  • SVN checkout browser

    - by phazei
    I've been looking all over for an SVN browser. I'm not talking about anything like WebSVN or Trac; I don't want to browse the repository, I want to browse the checkout. I'm looking for a program that lets me browse a checkout (working copy) and shows me the information I'd normally need SSH for, so I could mark specific files or folders for a commit button, see their status, or view a diff between the working copy and a previous version. Basically, a web GUI for an SVN checkout. A [Windows] program that lets you work on a remote checkout as if it were local would also work. Currently I have a checkout on my server running under dev.mysite.com. I log in via FTP to edit and upload the files, and I also keep SSH open so I can run svn st to see what files I've worked on and to commit changes. I want to work on the files in the same environment, so I can't simply use a local checkout, but I don't want to have to work via SSH. Are there any apps such as I described? Like a repo browser, but for checkouts and commits; a WebTortoiseSVN, so to speak. Thanks

  • Porting Oracle Procedure to PostgreSQL

    - by Grasper
    I am porting an Oracle procedure to PostgreSQL PL/pgSQL. I have been using this guide: http://www.postgresql.org/docs/8.1/static/plpgsql.html

        CREATE OR REPLACE PROCEDURE DATA_UPDATE (mission NUMBER, task NUMBER)
        AS
        BEGIN
            IF mission IS NOT NULL THEN
                UPDATE MISSION_OBJECTIVE MO
                   SET (MO.MO_TKR_TOTAL_OFF_SCHEDULED, MO.MO_TKR_TOTAL_RECEIVERS) =
                       (SELECT NVL(SUM(RR.TRQ_FUEL_OFFLOAD), 0),
                               NVL(SUM(RR.TRQ_NUMBER_RECEIVERS), 0)
                          FROM REFUELING_REQUEST RR, MISSION_REQUEST_PAIRING MRP
                         WHERE MO.MSN_INT_ID = MRP.MSN_INT_ID
                           AND MO.MO_INT_ID = MRP.MO_INT_ID
                           AND MRP.REQ_INT_ID = RR.REQ_INT_ID)
                 WHERE MO.MSN_INT_ID = mission
                   AND MO.MO_INT_ID = task;
            END IF;
            COMMIT;
        END;

    I've got it this far:

        CREATE OR REPLACE FUNCTION DATA_UPDATE (NUMERIC, NUMERIC) RETURNS integer AS '
        DECLARE
            mission ALIAS FOR $1;
            task    ALIAS FOR $2;
        BEGIN
            IF mission IS NOT NULL THEN
                UPDATE MISSION_OBJECTIVE MO
                   SET (MO.MO_TKR_TOTAL_OFF_SCHEDULED, MO.MO_TKR_TOTAL_RECEIVERS) =
                       (SELECT COALESCE(SUM(RR.TRQ_FUEL_OFFLOAD), 0),
                               COALESCE(SUM(RR.TRQ_NUMBER_RECEIVERS), 0)
                          FROM REFUELING_REQUEST RR, MISSION_REQUEST_PAIRING MRP
                         WHERE MO.MSN_INT_ID = MRP.MSN_INT_ID
                           AND MO.MO_INT_ID = MRP.MO_INT_ID
                           AND MRP.REQ_INT_ID = RR.REQ_INT_ID)
                 WHERE MO.MSN_INT_ID = mission
                   AND MO.MO_INT_ID = task;
            END IF;
            COMMIT;
        END;
        ' LANGUAGE plpgsql;

    This is the error I get:

        ERROR: syntax error at or near "SELECT"
        LINE 1: ...OTAL_OFF_SCHEDULED, MO.MO_TKR_TOTAL_RECEIVERS) = (SELECT COA...

    I do not know why this isn't working... any ideas?
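    Three things stand out, offered as general PostgreSQL behavior rather than a tested fix for this schema: the multi-column SET (col1, col2) = (SELECT ...) form is Oracle syntax that PostgreSQL of that era rejects (it was only added in 9.5), COMMIT is not allowed inside a plpgsql function because the function runs inside the caller's transaction, and the function must end with a RETURN matching its declared type. A sketch of a rewrite along those lines:

        CREATE OR REPLACE FUNCTION DATA_UPDATE(NUMERIC, NUMERIC) RETURNS integer AS '
        DECLARE
            mission ALIAS FOR $1;
            task    ALIAS FOR $2;
        BEGIN
            IF mission IS NOT NULL THEN
                -- One correlated subquery per column replaces the row-value form.
                UPDATE MISSION_OBJECTIVE MO
                   SET MO_TKR_TOTAL_OFF_SCHEDULED =
                           (SELECT COALESCE(SUM(RR.TRQ_FUEL_OFFLOAD), 0)
                              FROM REFUELING_REQUEST RR, MISSION_REQUEST_PAIRING MRP
                             WHERE MO.MSN_INT_ID = MRP.MSN_INT_ID
                               AND MO.MO_INT_ID = MRP.MO_INT_ID
                               AND MRP.REQ_INT_ID = RR.REQ_INT_ID),
                       MO_TKR_TOTAL_RECEIVERS =
                           (SELECT COALESCE(SUM(RR.TRQ_NUMBER_RECEIVERS), 0)
                              FROM REFUELING_REQUEST RR, MISSION_REQUEST_PAIRING MRP
                             WHERE MO.MSN_INT_ID = MRP.MSN_INT_ID
                               AND MO.MO_INT_ID = MRP.MO_INT_ID
                               AND MRP.REQ_INT_ID = RR.REQ_INT_ID)
                 WHERE MO.MSN_INT_ID = mission
                   AND MO.MO_INT_ID = task;
            END IF;
            RETURN 1;  -- no COMMIT here; the caller''s transaction handles it
        END;
        ' LANGUAGE plpgsql;

    Note that the target columns in SET are written without the MO. prefix, since PostgreSQL does not accept a table alias on the assignment side of an UPDATE.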
