Search Results

Search found 4808 results on 193 pages for 'estate master'.


  • SQL Server 2005: Improving performance for thousands of insert requests; logout-login time = 120ms

    - by Rad
    Can somebody shed some light on how SQL Server 2005 deals with many requests issued by a client using ADO.NET 2.0? Below is a shortened output of SQL Trace. I can see that connection pooling is working (I believe there is only one connection being pooled). What is not clear to me is why we have so many sp_reset_connection calls, i.e. a series of Audit Login, SQL:BatchStarting, RPC:Starting and Audit Logout for each iteration of the for loop below. I can see constant switching between the tempdb and master databases, which leads me to conclude that we lose the context when the next connection is created by fetching it from the pool based on the connection string argument. Every 15 ms I see 100-200 logins/logouts per second (reported at the same time by Profiler); then after another 15 ms there is again a series of 100-200 logins/logouts per second. I need clarification on how this might affect much more complex insert queries in a production environment.

    I use Enterprise Library 2006, and the code is compiled with VS 2005. It is a console application that parses a flat file with tens of thousands of rows, grouping parent-child rows. It runs on an application server and calls two stored procedures on a remote SQL Server 2005: it inserts a parent record, retrieves the identity value, and using it calls the second stored procedure one, two, or many times (sometimes several thousand), inserting child records. The child table has close to 10 million records and 5-10 indexes, some of them covering non-clustered ones. There is a pretty complex insert trigger that copies each inserted detail record to an archive table. All in all, I only get 7 inserts per second, which means it can take 2-4 hours for 50 thousand records.

    When I run Profiler on the test server (which is almost equivalent to the production server), I can see about 120 ms between the Audit Logout and Audit Login trace entries, which gives me the chance to insert only about 8 records per second. So my question is whether there is some way to improve the inserting of records, since the company loads hundreds of thousands of records, does daily planning, and has an SLA to fulfill client requests coming in as flat-file orders, and some big files of 10 thousand rows have to be imported quickly. Four hours to import 60 thousand records should be reduced to 30 minutes.

    I was thinking of using the BatchSize of a DataAdapter to send multiple stored procedure calls, SQL bulk inserts to batch multiple inserts from a DataReader or DataTable, or SSIS fast load. But I don't know how to properly analyze re-indexing and statistics population, and maybe those have to take some time to finish. What is worse, the company uses the biggest table for reporting and other online processing, so the indexes cannot be dropped. I manage transactions manually by setting a field to a value and doing a transactional update changing that value to a new value that other applications use to get committed rows.

    Please advise how to approach this problem. For now I am trying staging tables with minimal logging in a separate database and no indexes, and I will try to do batched (massive) parent-child inserts. I believe the production DB has the simple recovery model, but it could be full recovery. If the DB user used by my .NET console application has the bulkadmin role, does that mean its bulk inserts are minimally logged? I understand that when a table has a clustered and many non-clustered indexes, inserts are still logged for each row. Connection pooling is working, but with many logins/logouts. Why?
        for (int i = 1; i <= 10000; i++)
        {
            using (SqlConnection conn = new SqlConnection("server=(local);database=master;integrated security=sspi;"))
            {
                conn.Open();
                using (SqlCommand cmd = conn.CreateCommand())
                {
                    cmd.CommandText = "use tempdb";
                    cmd.ExecuteNonQuery();
                }
            }
        }

    SQL Server Profiler trace:

        Audit Login                               master  2010-01-13 23:18:45.337  1 - Nonpooled
        SQL:BatchStarting   use tempdb            master  2010-01-13 23:18:45.337
        RPC:Starting        exec sp_reset_conn    tempdb  2010-01-13 23:18:45.337
        Audit Logout                              tempdb  2010-01-13 23:18:45.337  2 - Pooled
        Audit Login         -- network protocol   master  2010-01-13 23:18:45.383  2 - Pooled
        SQL:BatchStarting   use tempdb            master  2010-01-13 23:18:45.383
        RPC:Starting        exec sp_reset_conn    tempdb  2010-01-13 23:18:45.383
        Audit Logout                              tempdb  2010-01-13 23:18:45.383  2 - Pooled
        Audit Login         -- network protocol   master  2010-01-13 23:18:45.383  2 - Pooled
        SQL:BatchStarting   use tempdb            master  2010-01-13 23:18:45.383
        RPC:Starting        exec sp_reset_conn    tempdb  2010-01-13 23:18:45.383
        Audit Logout                              tempdb  2010-01-13 23:18:45.383  2 - Pooled
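
    As a minimal sketch of the bulk-load direction mentioned above (the staging table and all names here are hypothetical, not from the question): SqlBulkCopy can stream an entire DataTable of child rows into an index-free staging table in one batched operation instead of one RPC per row.

        using System.Data;
        using System.Data.SqlClient;

        static class ChildLoader
        {
            // Sketch only: assumes "children" has columns matching dbo.ChildStaging.
            public static void BulkLoadChildren(string connectionString, DataTable children)
            {
                using (var bulk = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.TableLock))
                {
                    bulk.DestinationTableName = "dbo.ChildStaging"; // staging table, no indexes
                    bulk.BatchSize = 5000;                          // rows per round trip
                    bulk.BulkCopyTimeout = 0;                       // no command timeout for big loads
                    bulk.WriteToServer(children);                   // one streamed load, not one RPC per row
                }
            }
        }

    A set-based INSERT ... SELECT from the staging table into the real parent/child tables can then run server-side; with the simple recovery model plus a table lock, such a load may also qualify for minimal logging.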


  • iPad: SplitView does not rotate

    - by raj.tiwari
    I have the following setup:

    - A subclass of UISplitViewController that creates the master and detail view controllers in the constructor.
    - Master and detail view controllers that both override shouldAutorotateToInterfaceOrientation to return YES.
    - A detail view controller that implements the UISplitViewControllerDelegate protocol and deals with the popover.

    I am observing two weird issues that might be interrelated:

    - When the split view comes up (in portrait mode, the default on the simulator), the master view is visible. It should not be.
    - When I rotate the simulator, the view does not "right" itself.

    My UISplitViewController subclass does not override shouldAutorotateToInterfaceOrientation. However, both the master and detail view controllers do, and return YES. Any ideas what I might be doing wrong? Is this a simulator bug? Thanks. -Raj
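
    One thing worth checking (a sketch, not a confirmed fix): rotation callbacks consult the container controller too, so the UISplitViewController subclass itself can also answer the rotation question rather than leaving it to the default:

        // In the UISplitViewController subclass -- the container can also be asked,
        // not only the master and detail children.
        - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
            return YES;
        }

    Another common culprit in this era of the SDK: the split view controller generally needs to be the window's root view controller for rotation events to reach it at all.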


  • Treeview - Link Button Post Back Problem

    - by cagin
    Hi there. I'm working on a web application that has a master page and two pages under that master page. I am trying to navigate between those pages with a TreeView on the master page. When I click a TreeView node I can go to the page I want, but there is no postback; if I use a LinkButton instead, the postback event happens. I set a breakpoint in the master page's Page_Load event: when I use the TreeView, Visual Studio doesn't stop on the breakpoint line, but if I use a LinkButton it does. How can I get a postback when using the TreeView? Thanks for your help.
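
    A sketch of one likely explanation (an assumption based on how the ASP.NET TreeView behaves, with made-up node names): a TreeNode whose NavigateUrl is set renders as a plain hyperlink and navigates without any postback. Leaving NavigateUrl empty makes a click a select action, which posts back and raises SelectedNodeChanged, where you can redirect yourself:

        <asp:TreeView ID="NavTree" runat="server"
                      OnSelectedNodeChanged="NavTree_SelectedNodeChanged">
            <Nodes>
                <%-- Value carries the target URL; NavigateUrl is deliberately not set --%>
                <asp:TreeNode Text="Page 1" Value="~/Page1.aspx" />
                <asp:TreeNode Text="Page 2" Value="~/Page2.aspx" />
            </Nodes>
        </asp:TreeView>

    And in the master page's code-behind:

        protected void NavTree_SelectedNodeChanged(object sender, EventArgs e)
        {
            // Runs after the normal postback cycle, so Page_Load fires first.
            Response.Redirect(ResolveUrl(NavTree.SelectedNode.Value));
        }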


  • How to check if returning View is of type PartialViewResult or a ViewUserControl

    - by mare
    In the method FindView() I have to somehow check whether the view being requested is a ViewUserControl, because otherwise I get an error saying: "A master name cannot be specified when the view is a ViewUserControl." This is my custom view engine code right now:

        public class PendingViewEngine : VirtualPathProviderViewEngine
        {
            public PendingViewEngine()
            {
                // This is where we tell MVC where to look for our files.
                /* {0} = view name or master page name
                 * {1} = controller name */
                MasterLocationFormats = new[] { "~/Views/Shared/{0}.master", "~/Views/{0}.master" };
                ViewLocationFormats = new[] {
                    "~/Views/{1}/{0}.aspx", "~/Views/Shared/{0}.aspx",
                    "~/Views/Shared/{0}.ascx", "~/Views/{1}/{0}.ascx" };
                PartialViewLocationFormats = new[] { "~/Views/{1}/{0}.ascx", "~/Views/Shared/{0}.ascx" };
            }

            protected override IView CreatePartialView(ControllerContext controllerContext, string partialPath)
            {
                return new WebFormView(partialPath, "");
            }

            protected override IView CreateView(ControllerContext controllerContext, string viewPath, string masterPath)
            {
                // This line produces the error, because we cannot set a master page
                // on ASCXs, so I need an "if" here.
                return new WebFormView(viewPath, masterPath);
            }

            public override ViewEngineResult FindView(ControllerContext controllerContext, string viewName, string masterName, bool useCache)
            {
                if (controllerContext.HttpContext.Request.IsAjaxRequest())
                    return base.FindView(controllerContext, viewName, "Modal", useCache);
                return base.FindView(controllerContext, viewName, "Site", useCache);
            }
        }

    The above view engine fails on calls like this:

        <% Html.RenderAction("Display", "MyController", new { zoneslug = "some-zone-slug" }); %>

    The action I am rendering here is this:

        public ActionResult Display(string zoneslug)
        {
            WidgetZone zone;
            if (!_repository.IsUniqueSlug(zoneslug))
                zone = (WidgetZone)_repository.GetInstance(zoneslug);
            else
            {
                // do something here
            }
            // WidgetZone used here is WidgetZone.ascx, so a partial
            return View("WidgetZone", zone);
        }

    I cannot use RenderPartial because, to my knowledge, there is no way to pass route values or a RouteValueDictionary to RenderPartial() the way you can to RenderAction().
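
    One minimal way to write that "if" (a sketch, under the assumption that only user controls live in .ascx files) is to branch on the view path's extension inside CreateView and drop the master only for those:

        protected override IView CreateView(ControllerContext controllerContext, string viewPath, string masterPath)
        {
            // .ascx views are ViewUserControls and cannot take a master page,
            // so hand them an empty master path just like CreatePartialView does.
            if (viewPath.EndsWith(".ascx", StringComparison.OrdinalIgnoreCase))
                return new WebFormView(viewPath, "");
            return new WebFormView(viewPath, masterPath);
        }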


  • What does this svn2git error mean?

    - by Hisham
    I am trying to import my repository from svn to git using svn2git, but it seems like it's failing when it hits a branch. What's the problem?

        Found possible branch point: https://s.aaa.com/repo/trunk/project => https://s.aaa.com/repo/branches/project-beta1.0, 128
        Use of uninitialized value in substitution (s///) at /opt/local/libexec/git-core/git-svn line 1728.
        Use of uninitialized value in concatenation (.) or string at /opt/local/libexec/git-core/git-svn line 1728.
        refs/remotes/trunk: 'https://s.aaa.com/repo' not found in ''
        Running command: git branch -l --no-color
        * master
        Running command: git branch -r --no-color
          trunk
        Running command: git checkout trunk
        Note: checking out 'trunk'.

        You are in 'detached HEAD' state. You can look around, make experimental
        changes and commit them, and you can discard any commits you make in this
        state without impacting any branches by performing another checkout.

        If you want to create a new branch to retain commits you create, you may
        do so (now or later) by using -b with the checkout command again. Example:

          git checkout -b new_branch_name

        HEAD is now at f4e6268... Changing svn repository in cap files
        Running command: git branch -D master
        Deleted branch master (was f4e6268).
        Running command: git checkout -f -b master
        Switched to a new branch 'master'
        Running command: git gc
        Counting objects: 450, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (368/368), done.
        Writing objects: 100% (450/450), done.
        Total 450 (delta 63), reused 450 (delta 63)


  • Git for Websites / post-receive / Separation of Test and Production Sites

    - by Walt W
    Hi all,

    I'm using Git to manage my website's source code and deployment, and currently have the test and live sites running on the same box. Following http://toroid.org/ams/git-website-howto originally, I came up with the following post-receive hook script to differentiate between pushes to my live site and pushes to my test site:

        while read ref
        do
            #echo "Ref updated:"
            #echo $ref -- would print something like example at top of file
            result=`echo $ref | gawk -F' ' '{ print $3 }'`
            if [ $result != "" ]; then
                echo "Branch found: "
                echo $result
                case $result in
                    refs/heads/master )
                        git --work-tree=c:/temp/BLAH checkout -f master
                        echo "Updated master"
                        ;;
                    refs/heads/testbranch )
                        git --work-tree=c:/temp/BLAH2 checkout -f testbranch
                        echo "Updated testbranch"
                        ;;
                    * )
                        echo "No update known for $result"
                        ;;
                esac
            fi
        done
        echo "Post-receive updates complete"

    However, I have doubts that this is actually safe :) I'm by no means a Git expert, but I am guessing that Git probably keeps track of the currently checked-out branch head, and this approach probably has the potential to confuse it to no end. So a few questions:

    - IS this safe?
    - Would a better approach be to have my base repository be the test site repository (with a corresponding working directory), and then have that repository push changes to a new live site repository, which has a working directory corresponding to the live site base? This would also allow me to move production to a different server and keep the deployment chain intact.
    - Is there something I'm missing? Is there a different, clean way to differentiate between test and production deployments when using Git for managing websites?

    As an additional note in light of Vi's answer, is there a good way to do this that would handle deletions without mucking with the file system much?

    Thank you,
    -Walt

    PS - The script I came up with for the multiple repos (and am using unless I hear better) is as follows:

        sitename=`basename \`pwd\``
        while read ref
        do
            #echo "Ref updated:"
            #echo $ref -- would print something like example at top of file
            result=`echo $ref | gawk -F' ' '{ print $3 }'`
            if [ $result != "" ]; then
                echo "Branch found: "
                echo $result
                case $result in
                    refs/heads/master )
                        git checkout -q -f master
                        if [ $? -eq 0 ]; then
                            echo "Test Site checked out properly"
                        else
                            echo "Failed to checkout test site!"
                        fi
                        ;;
                    refs/heads/live-site )
                        git push -q ../Live/$sitename live-site:master
                        if [ $? -eq 0 ]; then
                            echo "Live Site received updates properly"
                        else
                            echo "Failed to push updates to Live Site"
                        fi
                        ;;
                    * )
                        echo "No update known for $result"
                        ;;
                esac
            fi
        done
        echo "Post-receive updates complete"

    And then the repo in ../Live/$sitename (these are "bare" repos with working trees added after init) has the basic post-receive:

        git checkout -f
        if [ $? -eq 0 ]; then
            echo "Live site `basename \`pwd\`` checked out successfully"
        else
            echo "Live site failed to checkout"
        fi


  • Preview a git push

    - by Saverio Miroddi
    How can I see which commits are actually going to be pushed to a remote repository? As far as I know, whenever I pull master from the remote repository, commits are likely to be generated, even if they're empty. This causes the local master to be 'forward' even if there is really nothing to push. Now, if I try (from master):

        git cherry origin master

    I get an idea of what's going to be pushed, though this also displays some commits that I've already pushed. Is there a way to display only the new content that's going to be pushed?
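
    A couple of standard ways to preview this (a sketch; origin/master is assumed to be the push target):

        git fetch origin                          # make sure origin/master is up to date
        git log --oneline origin/master..master   # commits on master that origin does not have
        git push --dry-run                        # let git report what it would push, without pushing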


  • git strategy to have a set of commits limited to a particular branch

    - by becomingGuru
    I need to merge between dev and master frequently. I also have a commit that I need to apply to dev only, for things to work locally. Earlier I only merged from dev to master, so I had a branch production_changes that contained the "undo commit" of the dev-special commit, and from master I merged this. That used to work fine. Now, each time I merge between dev and master in either direction, I am having to cherry-pick and apply the same commit again and again :( Which is UGLY. What strategy can I adopt so that I can seamlessly merge between the two branches, yet retain some of the changes on only one of them?
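
    One pattern worth sketching (an assumption about your setup, not a guaranteed fit): keep the dev-only commit permanently at the tip of dev, rebasing it back on top whenever new work lands; merges into master can then simply stop one commit short of the tip:

        # dev's newest commit is always the "local-only" one
        git checkout master
        git merge dev~1    # merge everything on dev except its tip commit

    The cost is keeping that one commit floating at the tip of dev, but it removes the repeated cherry-picking of the undo commit.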


  • Cross domain login - what to store in the database?

    - by Jenkz
    I'm working on a system which will allow me to log in to the same system via various domains (www.example.com, www.mydomain.com, sub.domain.com, etc.). The following threads form the basis of my research so far:

    - Single Sign On across multiple domains
    - Cross web domain login with .net membership

    What I want to happen is that if I am logged in on the master domain and I visit a page on a client domain, I am automatically logged in on the client. Obviously, if I am not logged in on the master, I will need to enter my username and password.

    Walkthrough:

    1. User logs in on master site.
    2. User navigates to client site.
    3. Client site redirects to master site to see if the user is logged in.
    4. If the user is logged in on the master, record an RFC 4122 token ID and send this back to the client site.
    5. Client site then looks up the token ID in the central database and logs this user in.

    This might eventually end up running on more than one instance of PHP and Apache, so I can't just store:

        token_id, php_session_id, created

    Is there any problem with me storing and using this:

        token_id, username, hashed_password, created

    which is deleted on use, or automatically after x seconds?
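
    For comparison, a minimal sketch of the token table the walkthrough implies (schema and names are hypothetical): since the row is deleted on use or after a short expiry, it only needs to identify the user, so a user reference can stand in for the username/hashed_password pair and keeps credentials out of a second table.

        CREATE TABLE login_tokens (
            token_id CHAR(36)  NOT NULL PRIMARY KEY,  -- RFC 4122 UUID issued by the master
            user_id  INT       NOT NULL,              -- reference into the central users table
            created  DATETIME  NOT NULL               -- for the "expire after x seconds" sweep
        );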


  • Hadoop streaming with Python and python subprocess

    - by Ganesh
    I have established a basic Hadoop master-slave cluster setup and am able to run MapReduce programs (including Python) on the cluster. Now I am trying to run Python code which accesses a C binary, so I am using the subprocess module. I am able to use Hadoop streaming for a normal Python code, but when I include the subprocess module to access a binary, the job fails. As you can see in the logs below, the hello executable is recognised for the packaging, but the code still won't run.

        .
        .
        packageJobJar: [/tmp/hello/hello, /app/hadoop/tmp/hadoop-unjar5030080067721998885/] [] /tmp/streamjob7446402517274720868.jar tmpDir=null
        JarBuilder.addNamedStream hello
        .
        .
        12/03/07 22:31:32 INFO mapred.FileInputFormat: Total input paths to process : 1
        12/03/07 22:31:32 INFO streaming.StreamJob: getLocalDirs(): [/app/hadoop/tmp/mapred/local]
        12/03/07 22:31:32 INFO streaming.StreamJob: Running job: job_201203062329_0057
        12/03/07 22:31:32 INFO streaming.StreamJob: To kill this job, run:
        12/03/07 22:31:32 INFO streaming.StreamJob: /usr/local/hadoop/bin/../bin/hadoop job -Dmapred.job.tracker=master:54311 -kill job_201203062329_0057
        12/03/07 22:31:32 INFO streaming.StreamJob: Tracking URL: http://master:50030/jobdetails.jsp?jobid=job_201203062329_0057
        12/03/07 22:31:33 INFO streaming.StreamJob: map 0% reduce 0%
        12/03/07 22:32:05 INFO streaming.StreamJob: map 100% reduce 100%
        12/03/07 22:32:05 INFO streaming.StreamJob: To kill this job, run:
        12/03/07 22:32:05 INFO streaming.StreamJob: /usr/local/hadoop/bin/../bin/hadoop job -Dmapred.job.tracker=master:54311 -kill job_201203062329_0057
        12/03/07 22:32:05 INFO streaming.StreamJob: Tracking URL: http://master:50030/jobdetails.jsp?jobid=job_201203062329_0057
        12/03/07 22:32:05 ERROR streaming.StreamJob: Job not Successful!
        12/03/07 22:32:05 INFO streaming.StreamJob: killJob...
        Streaming Job Failed!

    The command I am trying is:

        hadoop jar contrib/streaming/hadoop-*streaming*.jar -mapper /home/hduser/MARS.py -reducer /home/hduser/MARS_red.py -input /user/hduser/mars_inputt -output /user/hduser/mars-output -file /tmp/hello/hello -verbose

    where hello is the C executable. It is a simple hello-world program which I am using to check the basic functioning. My Python code is:

        #!/usr/bin/env python
        import subprocess
        subprocess.call(["./hello"])

    Any help with how to get the executable to run with Python in Hadoop streaming, or help with debugging this, will get me forward in this. Thanks, Ganesh
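
    One thing worth ruling out (an assumption, not a confirmed diagnosis): files shipped to the task nodes with -file can arrive without their execute bit set, in which case ./hello fails even though packaging succeeded. A sketch of a defensive mapper:

        #!/usr/bin/env python
        # Sketch: restore the execute bit on the shipped binary before calling it.
        import os
        import stat
        import subprocess

        os.chmod("hello", os.stat("hello").st_mode | stat.S_IEXEC)
        subprocess.call(["./hello"])

    The per-task stderr logs reachable from the Tracking URL above usually contain the actual Python traceback, which should confirm or rule this out.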


  • Review Board workflow for Mercurial repository

    - by pachanga
    At my company we are trying to add code review practices to our development process, and for that purpose we decided to use Review Board. While Review Board should work out of the box for Subversion, the workflow for Mercurial looks a little bit involved. Firstly, it seems that only post-commit reviewing (via the post-review script) is supported for this type of repo. Secondly, the documentation is unclear on whether post-review actually supports Mercurial (it only mentions git). Could you folks describe your workflow in detail please? Am I right in thinking it should be something like this?

    Developer:

    1. clone master repo
    2. clone feature repo from local master repo
    3. hack-hack in feature repo
    4. commit into feature repo
    5. somehow run post-review from feature repo against parent master repo

    Reviewer:

    1. review diff
    2. if OK, then pull to the master repo from the feature repo


  • dynamically created radiobuttonlist

    - by Janet
    I have a master page. The content page has a list with hyperlinks containing request variables. You click on one of the links to go to the page containing the RadioButtonList (maybe).

    First problem: when I get to the new page, I use one of the variables to determine whether to add a RadioButtonList to a placeholder on the page. I tried to do it in Page_Load, but then I couldn't get the selected values. When I played around doing it in PreInit, the first time the page loads I can't get to the page's controls ("Object reference not set to an instance of an object."). I think it has something to do with the master page and page content - the controls aren't instantiated until later? (Using VB, by the way.)

    Second problem: say I get that to work. Once I hit a button, can I still access the passed request variable to determine the selected item in the RadioButtonList?

        Protected Sub Page_PreInit(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.PreInit
            'get sessions for concurrent
            Dim Master As New MasterPage
            Master = Me.Master
            Dim myContent As ContentPlaceHolder = CType(Page.Master.FindControl("ContentPlaceHolder1"), ContentPlaceHolder)
            If Request("str") = "1" Then
                Dim myList As dsSql = New dsSql() ''''instantiate the function to get dataset
                Dim ds As New Data.DataSet
                ds = myList.dsConSessionTimes(Request("eid"))
                If ds.Tables("conSessionTimes").Rows.Count > 0 Then
                    Dim conY As Integer = 1
                    CType(myContent.FindControl("lblSidCount"), Label).Text = ds.Tables("conSessionTimes").Rows.Count.ToString

    Sorry to be so needy - but maybe someone could direct me to a page with examples? Maybe seeing it would help it make sense? Thanks....JB
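
    A sketch of the usual pattern (control names are hypothetical; this rests on the standard ASP.NET rule that dynamic controls must be recreated on every request, postbacks included, early enough for view state and posted values to reattach):

        ' PlaceHolder1 is an <asp:PlaceHolder> on the content page (made-up name).
        Protected Sub Page_Init(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Init
            ' Recreate the list on EVERY request, not just the first one.
            If Request("str") = "1" Then
                Dim rbl As New RadioButtonList()
                rbl.ID = "rblSessions"   ' a stable ID lets the posted selection reattach
                PlaceHolder1.Controls.Add(rbl)
            End If
        End Sub

    On the second problem: a postback posts to the same URL, query string included, so Request("str") and Request("eid") should still be readable after the button click, and rbl.SelectedValue is populated once the control has been recreated with the same ID.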


  • Git: Can I commit my working directory to a new branch without committing it to the current branch?

    - by Noli
    Somewhat new at Git... I am working on a project and had all of my tests passing on the master branch. I then made some changes, and when everything started failing, I realized that maybe I should have made those changes in a different branch. Is there a way I can commit the changes to a new branch without committing them to my master branch, so that master still has my passing tests?
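
    A sketch of the standard answer (the branch name is made up): uncommitted changes live in the working tree, not on any branch, so they come along when you create and switch to a new branch:

        git checkout -b experiment   # create + switch; your uncommitted edits travel with you
        git add -A
        git commit -m "WIP: changes that broke the tests"
        # master is untouched and still has the passing tests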


  • Simplify a batch of git commands

    - by Bogdan Gusiev
    When I want to merge one branch into another, I usually do the following (in this example, master into custom):

        git checkout master && git pull && git checkout custom && git merge master

    Can somebody suggest how to simplify this? Thanks, Bogdan.
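
    One way to sketch it (the alias name is arbitrary): a git alias prefixed with ! runs as a shell command, so the whole sequence becomes a single subcommand:

        git config alias.update-custom '!git checkout master && git pull && git checkout custom && git merge master'
        git update-custom   # runs the whole sequence

    Alternatively, the local master checkout can often be skipped entirely: from the custom branch, git fetch followed by git merge origin/master merges the same upstream commits without touching local master.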


  • Microcontroller to Microcontroller SPI communication

    - by onaclov2000
    Hello again. I was doing some reading and have even gotten a "master" SPI working on my microcontroller. Here is my question: basically, if the master wants to initiate a write to the slave, we write to the SSPBUF; how do we control what the slave responds with? The datasheet doesn't seem very clear to me on the order of events in that case. I.e., the master puts a char into the SSPBUF, this initiates the SPI module to send data to the slave, and during the shift the slave returns a byte. On the slave side, is there something that tells you you have incoming data, so that you can write to your SSPBUF first, THEN accept the data? Or do you have to write the first "return value" you want sent back into the SSPBUF before the master has an opportunity to initiate a transfer?
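
    A sketch of the usual slave-side pattern (PIC-style register names taken from the question; reply_for is a hypothetical lookup): because SPI is a simultaneous shift exchange, whatever the slave returns during a transfer must already be sitting in SSPBUF before the master starts clocking that transfer, so a slave typically answers a command in the following byte.

        /* Slave-side interrupt service routine, sketched for a PIC-style SSP module. */
        void interrupt isr(void)
        {
            if (SSPIF) {                      /* a transfer just completed             */
                unsigned char cmd = SSPBUF;   /* read what the master shifted in       */
                SSPBUF = reply_for(cmd);      /* preload the reply for the NEXT byte   */
                SSPIF = 0;                    /* clear the interrupt flag              */
            }
        }

    With this pattern the master sends a command byte and then clocks one more (dummy) byte to collect the slave's answer.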


  • Working with images in asp.net MVC ViewMasterPage in design mode

    - by amit.codename13
    While designing a master page, I am adding a number of images to it. I have an image tag inside the master page:

        <img src="../../Content/Images/img19.jpg" class="profileImage" />

    When I run my app, the image doesn't show up in the browser, because the src path in the page the browser receives is the same as in the master page, i.e. "../../Content/Images/img19.jpg", but it should have been "Content/Images/img19.jpg". If I correct the src path in the master page to

        <img src="Content/Images/img19.jpg" class="profileImage" />

    then I can see the image in the browser, but not in design mode. Any help is appreciated.
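
    A sketch of the usual fix (assuming the MVC UrlHelper available in a ViewMasterPage): emit an application-relative path so the URL resolves correctly regardless of which page's URL the browser requested:

        <img src="<%= Url.Content("~/Content/Images/img19.jpg") %>" class="profileImage" />

    The trade-off is that the Visual Studio designer may not evaluate the server-side expression, so the design-time preview can still be blank even though the image renders at runtime.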


  • Recover history from foolish git-svn merge

    - by Gregg Lind
    The players:

    - master: the svn branch (actual, not local tracking)
    - mybranch: a local branch

    My mistake:

        [master] git svn rebase
        [master] git merge mybranch
        [master] git svn dcommit

    I did this twice. Is there a way I can remedy all this? I was thinking something like:

        git checkout --hard [commit before the merging]
        git dcommit # that to the svn?
        git rebase mybranch
        git dcommit

    But this doesn't seem to work. (I know I should have a) been working from a local tracking branch and b) rebased rather than merged.) I'm in the frantic / willing to send beer to respondents stage :)
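
    A hedged sketch of one recovery direction (not a guaranteed fix, and the placeholder sha is made up): once the merges have been dcommitted they exist as svn revisions, so rather than rewriting local history it may be cleaner to add commits that undo them and dcommit those:

        git log --oneline            # find the sha of each accidental merge commit
        git revert -m 1 <merge-sha>  # undo the merge, keeping the svn line as mainline
        git svn dcommit              # publish the revert back to svn

    After that, mybranch can be rebased onto the repaired tip and merged (or dcommitted) the intended way.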


  • ASP.NET MVC: MetaTags; setting methodology, best practices

    - by MVCDummy09
    When I created a default MVC application in VS2K10, the master view (Site.Master) had a ContentPlaceHolder for the <title> tag. Is there a better way to set metatags like title and description than using a ContentPlaceHolder in the master and setting that ContentPlaceHolder's value in each view? How do you configure your views' metatags in a large-scale site with dozens and dozens of pages?
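
    One common sketch (the names and ViewData key are arbitrary, and the <%: %> encoding syntax assumes ASP.NET 4): keep a defaultable placeholder for the title, and pull the description from ViewData so a base controller or each action can set it without a per-view placeholder:

        <%-- In Site.Master --%>
        <title><asp:ContentPlaceHolder ID="TitleContent" runat="server">Default title</asp:ContentPlaceHolder></title>
        <meta name="description" content="<%: (ViewData["MetaDescription"] as string) ?? "Sitewide default description" %>" />

    Then any action (or a shared base controller) can set the value once:

        ViewData["MetaDescription"] = "Landing page for the widget catalog.";

    Views that need a custom title still override the placeholder; everything else inherits the defaults, which scales better across dozens of pages.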


  • Abstract Forms using WinForms

    - by pm_2
    I’m trying to achieve a similar effect in a WinForms application as a master form does in ASP.NET. My initial thoughts on this were to create a base form and declare it as abstract, but the compiler doesn’t seem to let me do this:

        public abstract partial class Master : Form
        {
            public Master()
            {
                InitializeComponent();
            }
        }

    So I have two questions:

    1. Why will the compiler not let me do this? Am I using the wrong syntax, or is this a genuine restriction?
    2. Can anyone suggest a workaround or better way to do this?

    EDIT: InitializeComponent code:

        private void InitializeComponent()
        {
            this.components = new System.ComponentModel.Container();
            this.mainMenu1 = new System.Windows.Forms.MainMenu();
            this.Menu = this.mainMenu1;
            this.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Dpi;
            this.Text = "Master";
            this.AutoScroll = true;
        }
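
    For the workaround half of the question, a commonly used sketch (member names here are hypothetical): C# itself compiles an abstract Form subclass, but the WinForms designer cannot instantiate an abstract base class, which is usually where this pattern breaks. An intermediate concrete "shim" class gives derived forms a designer-friendly base while keeping the abstract contract:

        // Abstract base with the shared contract.
        public abstract class Master : Form
        {
            protected abstract void LoadData();   // hypothetical shared hook
        }

        // Concrete shim the designer can instantiate.
        public class MasterShim : Master
        {
            protected override void LoadData() { }
        }

        // Real forms derive from the shim and still override the hook.
        public partial class CustomersForm : MasterShim
        {
            protected override void LoadData() { /* form-specific loading */ }
        }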


  • homebrew path issue

    - by Shaun Stanislaus
        Master:~ shaunstanislaus$ ruby <(curl -fsSkL raw.github.com/mxcl/homebrew/go)
        ==> This script will install:
        /usr/local/bin/brew
        /usr/local/Library/...
        /usr/local/share/man/man1/brew.1
        Press enter to continue
        ==> Downloading and Installing Homebrew...
        remote: Counting objects: 82368, done.
        remote: Compressing objects: 100% (39323/39323), done.
        remote: Total 82368 (delta 56782), reused 65301 (delta 42220)
        Receiving objects: 100% (82368/82368), 11.68 MiB | 1.59 MiB/s, done.
        Resolving deltas: 100% (56782/56782), done.
        From https://github.com/mxcl/homebrew
         * [new branch]      master     -> origin/master
        HEAD is now at 2ea1a0e smpeg: depends on gtk
        ==> Installation successful!
        You should run `brew doctor' *before* you install anything.
        Now type: brew help
        Master:~ shaunstanislaus$ brew doctor
        -bash: /usr/local/bin/brew: /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby: bad interpreter: No such file or directory
        Master:~ shaunstanislaus$ echo $PATH
        /usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/opt/X11/bin:/Users/shaunstanislaus/Library/Application Support/GoodSync:/opt/local/bin:/opt/local/sbin:/usr/local/sbin:/Users/shaunstanislaus/.ec2/bin:/Users/shaunstanislaus/.rvm/bin
        /usr/local/bin/brew: /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby: bad interpreter: No such file or directory

    How do I fix this path issue? I can't use the brew command, and I think I previously symlinked to the wrong location. Please advise, thank you.
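
    A hedged reading of the error: "bad interpreter" means the shebang line of /usr/local/bin/brew points at the Ruby 1.8 framework path, which doesn't exist on this machine, so the problem is the interpreter path rather than $PATH itself. One stopgap to sketch (an assumption; re-running the installer or pulling an updated Homebrew is the cleaner fix) is pointing the script at the system ruby:

        head -1 /usr/local/bin/brew                                   # confirm the stale Ruby 1.8 shebang
        sudo sed -i '' '1s|.*|#!/usr/bin/ruby|' /usr/local/bin/brew   # BSD sed in-place edit of line 1
        brew doctor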


  • What's the standard algorithm for syncing two lists of objects?

    - by Oliver Giesen
    I'm pretty sure this must be in some kind of textbook (or more likely in all of them), but I seem to be using the wrong keywords to search for it... :(

    A common task I'm facing while programming is that I am dealing with lists of objects from different sources which I need to keep in sync somehow. Typically there's some sort of "master list", e.g. returned by some external API, and then a list of objects I create myself, each of which corresponds to an object in the master list. Sometimes the nature of the external API will not allow me to do a live sync: for instance, the external list might not implement notifications about items being added or removed, or it might notify me but not give me a reference to the actual item that was added or removed. Furthermore, refreshing the external list might return a completely new set of instances even though they still represent the same information, so simply storing references to the external objects might also not always be feasible. Another characteristic of the problem is that neither list can be sorted in any meaningful way. You should also assume that initializing new objects in the "slave list" is expensive, i.e. simply clearing and rebuilding it from scratch is not an option.

    So how would I typically tackle this? What's the name of the algorithm I should google for? In the past I have implemented this in various ways (see below for an example), but it always felt like there should be a cleaner and more efficient way. Here's an example approach:

    1. Iterate over the master list.
    2. Look up each item in the "slave list".
    3. Add items that do not yet exist.
    4. Somehow keep track of items that already exist in both lists (e.g. by tagging them or keeping yet another list).
    5. When done, iterate once more over the slave list.
    6. Remove all objects that have not been tagged (see 4).

    Update: Thanks for all your responses so far! I will need some time to look at the links. Maybe one more thing worthy of note: in many of the situations where I needed this, the implementation of the "master list" is completely hidden from me. In the most extreme cases, the only access I might have to the master list is a COM interface that exposes nothing but GetFirst-/GetNext-style methods. I'm mentioning this because of the suggestions to either sort the list or to subclass it, both of which are unfortunately not practical in these cases unless I copy the elements into a list of my own, and I don't think that would be very efficient. I also might not have made it clear enough that the elements in the two lists are of different types, i.e. not assignment-compatible; especially, the elements in the master list might be available as interface references only.
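
    As a sketch of the tag-and-sweep approach from the list above, written generically (C# here; all names are made up, and it assumes each master item exposes some stable key even when the instance identity changes between refreshes):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class ListSync
        {
            // One pass over the master enumeration (works even for GetFirst/GetNext-style
            // sources) plus one sweep over the slave dictionary: O(master + slave).
            public static void Sync<TKey, TMaster, TSlave>(
                IEnumerable<TMaster> master,
                IDictionary<TKey, TSlave> slaveByKey,
                Func<TMaster, TKey> keyOf,
                Func<TMaster, TSlave> createSlave)
            {
                var seen = new HashSet<TKey>();
                foreach (var m in master)
                {
                    var key = keyOf(m);
                    seen.Add(key);                         // the "tag" step
                    if (!slaveByKey.ContainsKey(key))
                        slaveByKey[key] = createSlave(m);  // expensive creation happens once per item
                }
                foreach (var stale in slaveByKey.Keys.Where(k => !seen.Contains(k)).ToList())
                    slaveByKey.Remove(stale);              // sweep: drop slaves with no master item
            }
        }

    The dictionary replaces both the per-item lookup and the extra tagging list, and nothing requires either list to be sorted or the two element types to be compatible.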


  • Is there a way to optimize this update query?

    - by SchlaWiener
    I have a master table called "parent" and a related table called "childs". Now I run a query against the master table to update some values with sums from the child table, like this:

        UPDATE master m
        SET quantity1 = (SELECT SUM(quantity1) FROM childs c WHERE c.master_id = m.id),
            quantity2 = (SELECT SUM(quantity2) FROM childs c WHERE c.master_id = m.id),
            count = (SELECT COUNT(*) FROM childs c WHERE c.master_id = m.id)
        WHERE master_id = 666;

    This works as expected, but it is not good style, because I basically run multiple SELECT queries over the same rows. Is there a way to optimize that? (Making a query first and storing the values is not an option.) I tried this:

        UPDATE master m
        SET (quantity1, quantity2, count) = (
            SELECT SUM(quantity1), SUM(quantity2), COUNT(*)
            FROM childs c
            WHERE c.master_id = m.id
        )
        WHERE master_id = 666;

    but that doesn't work.
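
    One way to sketch the optimization (assuming MySQL, which the alias syntax above suggests): aggregate the child rows once in a derived table and join it into the UPDATE, so all three values come from a single scan:

        UPDATE master m
        JOIN (
            SELECT master_id,
                   SUM(quantity1) AS q1,
                   SUM(quantity2) AS q2,
                   COUNT(*)       AS cnt
            FROM childs
            WHERE master_id = 666
            GROUP BY master_id
        ) c ON c.master_id = m.id
        SET m.quantity1 = c.q1,
            m.quantity2 = c.q2,
            m.count     = c.cnt;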


  • Git subtree workflow

    - by Cedric
    In my current project I'm using an open source forum (https://github.com/vanillaforums/Garden.git). I was planning on doing something like this:

        git remote add vanilla_remote https://github.com/vanillaforums/Garden.git
        git checkout -b vanilla vanilla_remote/master
        git checkout master
        git read-tree --prefix=vanilla -u vanilla

    This way I can make changes inside the vanilla folder (like changing the config) and commit them to my master branch, and I can also switch to my vanilla branch to fetch updates. My problem is when I try to merge the branches together:

        git checkout vanilla
        git pull
        git checkout master
        git merge --squash -s subtree --no-commit vanilla

    The problem is that the "update commit" goes on top of my commits and "overwrites" my changes. I would rather have my commits replayed on top of the update. Is there a simple way to do that? I'm not very good with git, so maybe this is the wrong approach. Also, I really don't want to mix my history with the vanilla history.


  • Parsing getopts in bash

    - by ABach
    I've got a bash function that I'm trying to use getopts with, and am having some trouble. The function is designed to be called by itself (getch), with an optional -s flag (getch -s), or with an optional string argument afterward (so getch master and getch -s master are both valid). The snippet below is where my problem lies - it isn't the entire function, but it's what I'm focusing on:

        getch() {
            if [ "$#" -gt 2 ] || [ "$1" = "-h" ] || [ "$1" = "--help" ]; then
                echo "Usage: $0 [-s] [branch-name]" >&2
                return 1
            fi

            while getopts "s" opt; do
                echo $opt # This line is here to test how many times we go through the loop
                case $opt in
                    s)
                        squash=true
                        shift
                        ;;
                    *)
                        ;;
                esac
            done
        }

    The getch -s master case is where the strangeness happens. The above should spit out s once, but instead I get this:

        [user@host:git-repositories/temp]$ getch -s master
        s
        s
        [user@host:git-repositories/temp]$

    Why is it parsing the -s opt twice?
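
    A sketch of the usual repair (hedged: it targets the two classic getopts-in-a-function pitfalls rather than a confirmed diagnosis of this exact trace): OPTIND persists between function calls unless made local, and shifting inside the loop fights getopts' own position tracking - the shift belongs after the loop:

        getch() {
            local OPTIND opt squash   # local OPTIND restarts option parsing on every call
            while getopts "s" opt; do
                case $opt in
                    s) squash=true ;;
                    *) ;;
                esac
            done
            shift $((OPTIND - 1))     # drop the parsed options; $1 is now the branch name
            echo "squash=${squash:-false} branch=${1:-<none>}"
        }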


  • git filter-branch chmod

    - by Evan Purkhiser
    I accidentally had my umask set incorrectly for the past few months and somehow didn't notice. One of my git repositories has many files marked as executable that should just be 644. This repo has one main master branch and about 4 private feature branches (which I keep rebased on top of master).

    I corrected the files in my master branch by running

        find -type f -exec chmod 644 {} \;

    and committing the changes. I then rebased my feature branches onto master. The problem is that there are newly created files in the feature branches that exist only there, so they weren't corrected by my massive chmod commit.

    I didn't want to create a new commit for each feature branch that does the same thing as the commit I made on master, so I decided it would be best to go back through each commit where a file was created and set the permissions. This is what I tried:

        git filter-branch -f --tree-filter 'chmod 644 `git show --diff-filter=ACR --pretty="format:" --name-only $GIT_COMMIT`; git add .' master..

    It looked like this worked, but upon further inspection I noticed that every commit after a commit containing a new file with the proper 644 permissions would actually revert the change with something like:

        diff --git a b
        old mode 100644
        new mode 100755

    I can't for the life of me figure out why this is happening. I think I must be misunderstanding how git filter-branch works.

    My solution: I've managed to fix my problem using this command:

        git filter-branch -f --tree-filter 'FILES="$FILES "`git show --diff-filter=ACMR --pretty="format:" --name-only $GIT_COMMIT`; chmod 644 $FILES; true' development..

    I keep adding onto the FILES variable to ensure that in each commit, any file created at some point has the proper mode. However, I'm still not sure I really understand why git tracks the file mode for each commit. I had thought that since I fixed the mode of a file when it was first created, it would stay that mode unless one of my other commits explicitly changed it to something else. That did not appear to be the case. The reason I thought this would work is from my understanding of rebase: if I go back to HEAD~5 and change a line of code, that change is propagated through; it doesn't just get changed back in HEAD~4.

