Search Results

Search found 4995 results on 200 pages for 'svn merge reintegrate'.

Page 184/200

  • Using capistrano to deploy from different git branches

    - by Toms Mikoss
    I am using capistrano to deploy a RoR application. The codebase is in a git repository, and branching is widely used in development. Capistrano uses the deploy.rb file for its settings, one of them being the branch to deploy from. My problem is this: let's say I create a new branch A from master. The deploy file will reference the master branch. I edit that, so A can be deployed to the test environment. I finish working on the feature, and merge branch A into master. Since the deploy.rb file from A is fresher, it gets merged in and now the deploy.rb in the master branch references A. Time to edit again. That's a lot of seemingly unnecessary manual editing - the parameter should always match the current branch name. On top of that, it is easy to forget to edit the settings each and every time. What would be the best way to automate this process? Edit: Turns out someone had already done exactly what I needed.

    Read the article

  • Snapshot agent obliterates conflicts

    - by mwolfe02
    We are using merge replication in SQL Server 2000. We have a snapshot agent that runs every night that updates the publication snapshot. About six months ago we updated from SQL Server 7.0 to 2000 (that's not a typo). We noticed a sharp decline in conflicts at that time but could not track down the reason. We finally found that the daily snapshot agent is recreating the conflict tables every night. This seems to be a change in functionality from SQL Server 7.0. We were running the snapshot agent before and the conflicts would accumulate. Is there some way to prevent the data in the conflict tables from being lost when the snapshot runs? Can anyone confirm a change in behavior between 7.0 and 2000? Our current plan is to simply stop automatically updating the publication snapshot. Is that a reasonable workaround? Here is the line from the script that is adding the snapshot:

        exec sp_addpublication_snapshot
            @publication = N'MyPub',
            @frequency_type = 4,
            @frequency_interval = 1,
            @frequency_relative_interval = 1,
            @frequency_recurrence_factor = 0,
            @frequency_subday = 1,
            @frequency_subday_interval = 5,
            @active_start_date = 0,
            @active_end_date = 0,
            @active_start_time_of_day = 500,
            @active_end_time_of_day = 235959

    Here is the step that runs in the agent job:

        Step Name: Run agent.
        Type: Replication Snapshot
        Command: -Publisher [WCDBS02] -PublisherDB [TaxDB] -Distributor [WCDBS02] -Publication [TaxDB] -ReplicationType 2 -DistributorSecurityMode 1

    This appears to be running the Replication Snapshot Agent Utility. There is no mention on that link about dropping and recreating system conflict tables, nor is there any flag that can be set to alter this behavior.

    Read the article

  • Why is my quick sort so slow?

    - by user513075
    Hello, I am practicing writing sorting algorithms as part of some interview preparation, and I am wondering if anybody can help me spot why this quick sort is not very fast? It appears to have the correct runtime complexity, but it is slower than my merge sort by a constant factor of about 2. I would also appreciate any comments that would improve my code that don't necessarily answer the question. Thanks a lot for your help! Please don't hesitate to let me know if I have made any etiquette mistakes. This is my first question here.

        private class QuickSort implements Sort {
            @Override
            public int[] sortItems(int[] ts) {
                List<Integer> toSort = new ArrayList<Integer>();
                for (int i : ts) {
                    toSort.add(i);
                }
                toSort = partition(toSort);
                int[] ret = new int[ts.length];
                for (int i = 0; i < toSort.size(); i++) {
                    ret[i] = toSort.get(i);
                }
                return ret;
            }

            private List<Integer> partition(List<Integer> toSort) {
                if (toSort.size() <= 1)
                    return toSort;
                int pivotIndex = myRandom.nextInt(toSort.size());
                Integer pivot = toSort.get(pivotIndex);
                toSort.remove(pivotIndex);
                List<Integer> left = new ArrayList<Integer>();
                List<Integer> right = new ArrayList<Integer>();
                for (int i : toSort) {
                    if (i > pivot)
                        right.add(i);
                    else
                        left.add(i);
                }
                left = partition(left);
                right = partition(right);
                left.add(pivot);
                left.addAll(right);
                return left;
            }
        }
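
    For what it's worth, the constant-factor gap is probably dominated by boxing every int into an Integer, allocating fresh ArrayLists on each recursive call, and copying every element at every level, rather than by the algorithm itself. Below is a minimal in-place int[] sketch for comparison; it is not a drop-in for the Sort interface above and the helper names are made up:

        import java.util.Random;

        class InPlaceQuickSort {
            private static final Random RANDOM = new Random();

            static void sort(int[] a) {
                quicksort(a, 0, a.length - 1);
            }

            private static void quicksort(int[] a, int lo, int hi) {
                if (lo >= hi) return;
                // Move a randomly chosen pivot to the end of the range.
                swap(a, lo + RANDOM.nextInt(hi - lo + 1), hi);
                int pivot = a[hi];
                int store = lo;
                for (int i = lo; i < hi; i++) {
                    if (a[i] < pivot) swap(a, i, store++);
                }
                swap(a, store, hi); // pivot lands in its final position
                quicksort(a, lo, store - 1);
                quicksort(a, store + 1, hi);
            }

            private static void swap(int[] a, int i, int j) {
                int t = a[i];
                a[i] = a[j];
                a[j] = t;
            }
        }

    Partitioning in place removes the per-call allocations and the Integer boxing, which is usually where that factor of two goes.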

    Read the article

  • (Rails) Creating multi-dimensional hashes/arrays from a data set...?

    - by humble_coder
    Hi All, I'm having a bit of an issue wrapping my head around something. I'm currently using a hacked version of Gruff in order to accommodate "Scatter Plots". That said, the data is entered in the form of:

        g.data("Person1",[12,32,34,55,23],[323,43,23,43,22])

    ...where the first item is the ENTITY, the second item is X-COORDs, and the third item is Y-COORDs. I currently have a recordset of items from a table with the columns: POINT, VALUE, TIMESTAMP. Due to the "complex" calculations involved I must grab everything using a single query or risk way too much DB activity. That said, I have a list of items for which I need to dynamically collect all data from the recordset into a hash (or array of arrays) for the creation of the data items. I was thinking something like the following:

        @h = {}
        e = Events.find_by_sql(my_query)
        e.each do |event|
          @h["#{event.Point}"][x] = event.timestamp
          @h["#{event.Point}"][y] = event.value
        end

    Obviously that's not the correct syntax, but that's where my brain is going. Could someone clean this up for me or suggest a more appropriate mechanism by which to accomplish this? Basically the main goal is to keep data for each pointname grouped (but remember the recordset has them all). Much appreciated.

    EDIT 1

        g = Gruff::Scatter.new("600x350")
        g.title = self.name
        e = Event.find_by_sql(@sql)
        h = {}
        e.each do |event|
          h[event.Point.to_s] ||= {}
          h[event.Point.to_s].merge!({event.Timestamp.to_i,event.Value})
        end
        h.each do |p|
          logger.info p[1].values.inspect
          g.data(p[0],p[1].keys,p[1].values)
        end
        g.write(@chart_file)

    Read the article

  • Visual Studio 2012 won't start

    - by David Aleu
    I installed VS2012 Premium from our MSDN subscription and it was working fine for the first couple of days, but after I installed a few extensions I can no longer start VS2012 and it gives the error:

        Faulting application name: devenv.exe, version: 11.0.50727.1, time stamp: 0x5011ecaa
        Faulting module name: ntdll.dll, version: 6.1.7601.17725, time stamp: 0x4ec49b8f
        Exception code: 0xc0000374
        Fault offset: 0x000ce6c3
        Faulting process id: 0xee8
        Faulting application start time: 0x01cd89bb777fc1dd
        Faulting application path: C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\devenv.exe
        Faulting module path: C:\Windows\SysWOW64\ntdll.dll

    I'm running it on Windows 7 64 bit. I've tried to repair, uninstall and install again, and nothing. I tried to restore to a previous system restore point, but nothing. The extensions I installed, as far as I can remember: VS10x Code Map, VSCommands, Visual SVN, NuGet manager (all of the above my colleagues have too and it works fine for them), and: Web Essentials, Visual Studio Color Theme Editor, SlowCheetah, Mobile Ready HTML5. Questions are: Has anyone else had this problem? Is there a way I can uninstall extensions from a command line or software? (I removed the extensions folder but that doesn't do anything.) Can I repair "C:\Windows\SysWOW64\ntdll.dll"? Is it really a problem with this dll? I haven't been able to find any similar issue in other versions, and because VS2012 is new there doesn't seem to be much information either.

    Read the article

  • Insert or Update using Oracle and PL/SQL

    - by Shane
    I have a PL/SQL function that performs an update/insert on an Oracle database that maintains a target total and returns the difference between the existing value and the new value. Here is the code I have so far:

        FUNCTION calcTargetTotal(accountId varchar2, newTotal numeric) RETURN number is
            oldTotal numeric(20,6);
            difference numeric(20,6);
        begin
            difference := 0;
            begin
                select value into oldTotal from target_total
                    WHERE account_id = accountId for update of value;
                if (oldTotal != newTotal) then
                    update target_total set value = newTotal
                        WHERE account_id = accountId;
                    difference := newTotal - oldTotal;
                end if;
            exception when NO_DATA_FOUND then
                begin
                    difference := newTotal;
                    insert into target_total ( account_id, value )
                        values ( accountId, newTotal );
                -- sometimes a race condition occurs and this stmt fails
                -- in those cases try to update again
                exception when DUP_VAL_ON_INDEX then
                    begin
                        difference := 0;
                        select value into oldTotal from target_total
                            WHERE account_id = accountId for update of value;
                        if (oldTotal != newTotal) then
                            update target_total set value = newTotal
                                WHERE account_id = accountId;
                            difference := newTotal - oldTotal;
                        end if;
                    end;
                end;
            end;
            return difference;
        end calcTargetTotal;

    This works as expected in unit tests with multiple threads, never failing. However, when loaded on a live system we have seen this fail with a stack trace looking like this:

        ORA-01403: no data found
        ORA-00001: unique constraint () violated
        ORA-01403: no data found

    The line numbers (which I have removed since they are meaningless out of context) verify that the first update fails due to no data, the insert fails due to uniqueness, and the 2nd update is failing with no data, which should be impossible. From what I have read in other threads, a MERGE statement is also not atomic and could suffer similar problems. Does anyone have any ideas how to prevent this from occurring?

    Read the article

  • Subversion freaking out on me!

    - by Malfist
    I have two copies of a site: one is the production copy, and the other is the development copy. I recently added everything in production to a Subversion repository hosted on our Linux backup server. I created a tag of the current version and I was done. I then copied the development copy over top of the production copy (on my local machine where I have everything checked out). There are only 10-20 files changed; however, when I use TortoiseSVN to do a commit, it says every file has changed. The diff file generated shows Subversion removing everything and replacing it with the new version (which is the exact same). What is going on? How do I fix it? An example diff:

        Index: C:/Users/jhollon/Documents/Visual Studio 2008/Projects/saloon/trunk/components/index.html
        ===================================================================
        --- C:/Users/jhollon/Documents/Visual Studio 2008/Projects/saloon/trunk/components/index.html (revision 5)
        +++ C:/Users/jhollon/Documents/Visual Studio 2008/Projects/saloon/trunk/components/index.html (working copy)
        @@ -1,4 +1,4 @@
        -<html>
        -<body bgcolor="#FFFFFF">
        -</body>
        +<html>
        +<body bgcolor="#FFFFFF">
        +</body>
         </html>
        \ No newline at end of file

    Read the article

  • Starting a code library.

    - by Rob Stevenson-Leggett
    Hi, I've been meaning to start a library of reusable code snippets for a while and never seem to get round to it. I think my main problems are:

    Where to start. What structure should my library take? Should it be a compiled library (where appropriate) or just classes I can drop into any project? Or a library project that can be included? In my experience, a built library will quickly become out of date and the source will get lost. So I'm leaning towards source libraries that I can export from SVN and include in any project.

    Intellectual property. I am employed, so a lot of the code I write is not my IP. How can I ensure that I don't give my own IP away by using it on projects at work and at home? I'm thinking the best way would be to licence my library with an open source licence and make sure I only add to it in my own time using my own equipment, therefore making sure that if I use it in a work project the same rules apply as if I were using a third-party library.

    I write in many different languages and often would require two or more parts of this library. Should I look at implementing a few template projects and a core project for each of my chosen reusable components and languages? Has anyone else got this sort of library, and how do you organise and update it?

    Read the article

  • OpenMP: Get total number of running threads

    - by Konrad Rudolph
    I need to know the total number of threads that my application has spawned via OpenMP. Unfortunately, the omp_get_num_threads() function does not work here since it only yields the number of threads in the current team. However, my code runs recursively (divide and conquer, basically) and I want to spawn new threads as long as there are still idle processors, but no more. Is there a way to get around the limitations of omp_get_num_threads and get the total number of running threads? If more detail is required, consider the following pseudo-code that models my workflow quite closely:

        function divide_and_conquer(Job job, int total_num_threads):
            if job.is_leaf(): # Recurrence base case.
                job.process()
                return
            left, right = job.divide()
            current_num_threads = omp_get_num_threads()
            if current_num_threads < total_num_threads: # (1)
                #pragma omp parallel num_threads(2)
                #pragma omp section
                divide_and_conquer(left, total_num_threads)
                #pragma omp section
                divide_and_conquer(right, total_num_threads)
            else:
                divide_and_conquer(left, total_num_threads)
                divide_and_conquer(right, total_num_threads)
            job = merge(left, right)

    If I call this code with a total_num_threads value of 4, the conditional annotated with (1) will always evaluate to true (because each thread team will contain at most two threads) and thus the code will always spawn two new threads, no matter how many threads are already running at a higher level. I am searching for a platform-independent way of determining the total number of threads that are currently running in my application.

    Read the article

  • Confirm bug Magento 1.4 'show/hide editor' in CMS

    - by latvian
    Hi. When entering code in a CMS static block (possibly a page as well), and this code contains empty DIV tags such as:

        <a href="javascript:hide1(),show2(),hide3()"><div class="dropoff_button"></div></a>

    the DIV tags will be gone the next time you open the block to edit. It will look like this, without the div tags:

        <a href="javascript:hide1(),show2(),hide3()"> </a>

    ...and saving again modifies your code. I think it has something to do with the 'show/hide editor'. By default it goes into the WYSIWYG editor, so when updating a static block I don't see any other solution than: 1. 'hide the editor' by clicking 'show/hide editor', 2. delete the old code from the editor, 3. get code that doesn't miss the DIVs, 4. merge the new code with the code from step 3 in some editing software other than Magento, 5. paste the result into the Magento editor, 6. save. Is this a bug? What is your solution? Can I turn off the WYSIWYG editor?

    Read the article

  • How are developers using source control? I am trying to find the most efficient way to do source control

    - by RJ
    I work in a group of 4 .NET developers. We rarely work on the same project at the same time, but it does happen from time to time. We use TFS for source control. My most recent example is a project I just placed into production last night that included 2 WCF services and a web application front end. I worked out of a branch called "prod" because the application is brand new and has never seen the light of day. Now that the project is live, I need to branch off the prod branch for features, bugs, etc... So what is the best way to do this? Do I simply create a new branch and sort of archive the old branch and never use it again? Do I branch off and then merge my branch changes back into the prod branch when I want to deploy to production? And what about the file and assembly version? They are currently at 1.0.0.0. When do they change and why? If I fix a small bug, which number changes, if any? If I add a feature, which number changes, if any? What I am looking for is what you have found to be the best way to efficiently manage source control. Most places I have worked always seem to bang heads with the source control system in one way or another, and I would just like to find out what you have found that works the best.

    Read the article

  • Merging similar dictionaries in a list together

    - by WonderSteve
    New to python here. I've been pulling my hair out for hours and still can't figure this out. I have a list of dictionaries:

        [{'FX0XST001.MID5': '195', 'Name': 'Firmicutes', 'Taxonomy ID': '1239', 'Type': 'phylum'},
         {'FX0XST001.MID13': '4929', 'Name': 'Firmicutes', 'Taxonomy ID': '1239', 'Type': 'phylum'},
         {'FX0XST001.MID6': '826', 'Name': 'Firmicutes', 'Taxonomy ID': '1239', 'Type': 'phylum'},
         ...
         {'FX0XST001.MID6': '125', 'Name': 'Acidobacteria', 'Taxonomy ID': '57723', 'Type': 'phylum'},
         {'FX0XST001.MID25': '70', 'Name': 'Acidobacteria', 'Taxonomy ID': '57723', 'Type': 'phylum'},
         {'FX0XST001.MID40': '40', 'Name': 'Acidobacteria', 'Taxonomy ID': '57723', 'Type': 'phylum'}]

    I want to merge the dictionaries in the list based on their Type, Name, and Taxonomy ID:

        [{'FX0XST001.MID5': '195', 'FX0XST001.MID13': '4929', 'FX0XST001.MID6': '826',
          'Name': 'Firmicutes', 'Taxonomy ID': '1239', 'Type': 'phylum'},
         ...
         {'FX0XST001.MID6': '125', 'FX0XST001.MID25': '70', 'FX0XST001.MID40': '40',
          'Name': 'Acidobacteria', 'Taxonomy ID': '57723', 'Type': 'phylum'}]

    I have the data structure set up like this because I need to write the data to CSV using csv.DictWriter later. Would anyone kindly point me in the right direction?

    Read the article

  • JPA: persisting object, parent is ok but child not updated

    - by James.Elsey
    Hello, I have my domain object, Client. I've got a form on my JSP that is pre-populated with its data; I can take in amended values and persist the object. Client has an abstract entity called MarketResearch, which is then extended by one of three more concrete sub-classes. I have a form to pre-populate some MarketResearch data, but when I make changes and try to persist the Client, it doesn't get saved. Can someone give me some pointers on where I've gone wrong? My 3 domain classes are as follows (accessors etc. removed):

        public class Client extends NamedEntity {
            @OneToOne
            @JoinColumn(name = "MARKET_RESEARCH_ID")
            private MarketResearch marketResearch;
            ...
        }

        @Inheritance(strategy = InheritanceType.JOINED)
        public abstract class MarketResearch extends AbstractEntity {
            ...
        }

        @Entity(name="MARKETRESEARCHLG")
        public class MarketResearchLocalGovernment extends MarketResearch {
            @Column(name = "CURRENT_HR_SYSTEM")
            private String currentHRSystem;
            ...
        }

    This is how I'm persisting:

        public void persistClient(Client client) {
            if (client.getId() != null) {
                getJpaTemplate().merge(client);
                getJpaTemplate().flush();
            } else {
                getJpaTemplate().persist(client);
            }
        }

    To summarize, if I change something on the parent object, it persists, but if I change something on the child object it doesn't. Have I missed something blatantly obvious? Thanks
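
    One thing worth checking (a guess based on the mapping shown, not a confirmed diagnosis): with no cascade setting on the @OneToOne, merging the Client will not propagate changes to the MarketResearch child. A minimal sketch of the association with cascading enabled:

        import javax.persistence.CascadeType;
        import javax.persistence.JoinColumn;
        import javax.persistence.OneToOne;

        public class Client extends NamedEntity {

            // Cascade persist/merge so saving the Client also carries changes
            // made on its MarketResearch child down to the database.
            @OneToOne(cascade = { CascadeType.PERSIST, CascadeType.MERGE })
            @JoinColumn(name = "MARKET_RESEARCH_ID")
            private MarketResearch marketResearch;
        }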

    Read the article

  • Using Mercurial in a Large Organization

    - by Kristopher Johnson
    I've been using Mercurial for my own personal projects for a while, and I love it. My employer is considering a switch from CVS to SVN, but I'm wondering whether I should push for Mercurial (or some other DVCS) instead. One wrinkle with Mercurial is that it seems to be designed around the idea of having a single repository per "project". In this organization, there are dozens of different executables, DLLs, and other components in the current CVS repository, hierarchically organized. There are a lot of generic reusable components, but also some customer-specific components, and customer-specific configurations. The current build procedures generally get some set of subtrees out of the CVS repository. If we move from CVS to Mercurial, what is the best way to organize the repository/repositories? Should we have one huge Mercurial repository containing everything? If not, how fine-grained should the smaller repositories be? I think people will find it very annoying if they have to pull and push updates from a lot of different places, but they will also find it annoying if they have to pull/push the entire company codebase. Anybody have experience with this, or advice?

    Read the article

  • Tree-like queues

    - by Rehno Lindeque
    I'm implementing an interpreter-like project for which I need a strange little scheduling queue. Since I'd like to try and avoid wheel-reinvention, I was hoping someone could give me references to a similar structure or existing work. I know I can simply instantiate multiple queues as I go along; I'm just looking for some perspective from other people who might have better ideas than me ;) I envision that it might work something like this: the structure is a tree with a single root. You get a kind of "insert_iterator" to the root and then push elements onto it (e.g. a and b in the example below). However, at any point you can also split the iterator into multiple iterators, effectively creating branches. The branches cannot merge into a single queue again, but you can start popping elements from the front of the queue (again, using a kind of "visitor_iterator") until empty branches can be discarded (at your discretion).

        x -> y -> z a -> b -> { g -> h -> i -> j } f -> b

    Any ideas? Seems like a relatively simple structure to implement myself using a pool of circular buffers, but I'm following the "think first, code later" strategy :) Thanks
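
    For what it's worth, a rough sketch of the kind of structure described, assuming each branch owns its own FIFO and splitting simply creates child branches (the names are hypothetical, and Java is used here purely for illustration):

        import java.util.ArrayDeque;
        import java.util.ArrayList;
        import java.util.Deque;
        import java.util.List;

        // One branch of the scheduling tree: a FIFO of items plus any child
        // branches created by splitting. Branches never merge back; a consumer
        // drains a branch's queue and then moves on to its children.
        class BranchQueue<T> {
            private final Deque<T> items = new ArrayDeque<T>();
            private final List<BranchQueue<T>> children = new ArrayList<BranchQueue<T>>();

            void push(T item) { items.addLast(item); }      // insert at the tail
            T pop()           { return items.pollFirst(); } // consume from the head
            boolean isDrained() { return items.isEmpty(); }

            // Split this branch into n new child branches; further pushes go to a child.
            List<BranchQueue<T>> split(int n) {
                for (int i = 0; i < n; i++) children.add(new BranchQueue<T>());
                return children;
            }

            List<BranchQueue<T>> children() { return children; }
        }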

    Read the article

  • Using Java classes (whole module with Spring/Hibernate dependency) in Grails

    - by Sitaram
    I have a Java/Spring/Hibernate application with a payment module. The payment module has some domain classes for payment subscription and transactions etc. Corresponding Hibernate mapping files are there. This module uses applicationContext.xml for some of the configuration it needs. Also, this module has a PaymentService which uses a paymentDAO to do all database-related work. Now, I want to use this module as it is (without any or minimal re-writing) in my other application (a Grails application). I want to bring in the payment module as a jar or copy the source files to the src/java folder in Grails. With that background, I have the following queries: Will the existing applicationContext.xml for Spring configuration in the module work as it is in Grails? Does it merge with the rest of Grails's Spring config? Where do I put the applicationContext.xml? Classpath? Should src/java work? Can I bundle the applicationContext.xml in the jar (if I use the jar option) and overwrite it in Grails if anything needs to be changed? Are there multiple bean definition problems in that case? Is PaymentService recognized as a regular service? Will it be auto-injected in controllers and/or other services? Will PaymentDAO use the datasource configuration of Grails? Where do I put the hbm files of this module? Can I bundle the hbm files in the jar (if I use the jar option) and overwrite them in Grails if anything needs to be changed? Which hbms are picked, or will there be problems with that? Too many questions! :) All these concerns are actually before trying. I am going to try this in the next few days (busy currently). Any help is appreciated. Thanks. Sitaram Meena

    Read the article

  • Visual SourceSafe (VSS): "Access to file (filename) denied" error

    - by tk-421
    Hi, can anybody help with the above SourceSafe error? I've spent hours trying to find a fix. I've also Googled the heck out of it but couldn't find a scenario matching mine, because in my case only a few files (not all) are affected. Here's what I found: only a few files in my project generate this error; other files in the same directory (for example, App_Code has one of the problem files) work fine; I've tried checking out from both the VSS client and Visual Studio; another developer can check out the main problem file without any problems. This sounds like a permission issue for my user, right? However: I found the location of one of the problem files in VSS's data directory (using VSS's naming format, as in 'fddaaaaa.a') and checked its permissions; everything looks fine and its permissions match those of other files I can check out successfully; I can see no differences in the file properties between working and non-working files. What else can I check? Has anyone encountered this problem before and found a solution? Thanks. P.S.: SourceGear, svn or git are not options, unfortunately. P.P.S.: Tried unsuccessfully to add tag "sourcesafe." EDIT: Hey Paddy, I tried to click 'add comment' to respond to your comment, but I'm getting a javascript error when loading this page in IE8 ("jquery undefined," etc.) so this isn't working. This is when checking out files, and yes, I've obliterated my local copy more times than I can remember. ;) EDIT 2: Thanks for the responses, guys (again I can't 'add comment' due to jQuery not loading, maybe blocked as discussed in Meta). If the problem was caused by antivirus or a bad disk, would other users still be able to check out the file(s)? That's the case here, which makes me think it's a permission issue specific to my account. However, I've looked at the permissions and they match both other users' settings and settings on other files which I can check out.

    Read the article

  • how to switch views according to command

    - by Veer
    I have a main view and many usercontrols. The main view contains a two-column grid, with the first column filled with a listbox whose datatemplate consists of a usercontrol, and the second column filled with another usercontrol. These two usercontrols have the same datacontext.

    MainView:

        <Grid>
            //Column defs ...
            <ListView Grid.Column="0" ItemSource="{Binding Foo}">
                ...
                <DataTemplate>
                    <Views: FooView1 />
                </DataTemplate>
            </ListView>
            <TextBlock Text="{Binding Foo.Count}" />
            <StackPanel Grid.Column="1">
                <Views: FooView2 />
            </StackPanel>
        <Grid>

    FooView1:

        <UserControl>
            <TextBlock Text="{Binding Foo.Title}">
        </UserControl>

    FooView2:

        <UserControl>
            <TextBlock Text="{Binding Foo.Detail1}">
            <TextBlock Text="{Binding Foo.Detail2}">
            <TextBlock Text="{Binding Foo.Detail3}">
            <TextBlock Text="{Binding Foo.Detail4}">
        </UserControl>

    I've no IDE here; excuse me if there is any syntax error. When the user clicks on a button, these two usercontrols have to be replaced by another two usercontrols, so the datacontext changes, the main UI remaining the same. i.e., FooView1 by BarView1 and FooView2 by BarView2. In short, I want to bind this view change in the main view to my command (a command from a Button). How can I do this? Also tell me if I could merge the usercontrol pairs, so that only one view exists for each viewmodel, i.e., FooView1 and FooView2 into FooView and so on...

    Read the article

  • What's the proper approach for writing multi-path "story" flows?

    - by Basiclife
    Hi, I wonder if you can help me. I'm writing a game (2d) which allows players to take multiple routes, some of which branch/merge - perhaps even loop. Each section of the game will decide which section is loaded next. I'm calling each section an IStoryElement, and I'm wondering how best to link these elements up in a way that is easily changed/configured and, at the same time, graphable. I'm going to have an engine/factory assembly which will load the appropriate StoryElement(s) based on various config options. I initially planned to give each StoryElement a NextElement() As IStoryElement property and a Completed() event. When the event fires, the engine reads the NextElement property to find the next StoryElement. The downside to this is that if I ever wanted to graph all the routes through the game, I would be unable to - I couldn't determine all possible targets for each StoryElement. I considered a couple of other solutions but they all feel a little clunky - e.g. do I need an additional layer of abstraction? i.e. StoryElementPlayers or similar - each one would be responsible for stringing together multiple StoryElements, perhaps a Series and a ChoicePlayer, with each responsible for graphing its own StoryElement - but this will just move the problem up a layer. In short, I need some way of emulating a simple but dynamic workflow (but I'd rather not actually use WWF). Is there a pattern for something this simple? All the ones I've managed to find relate to more advanced control flow (parallel processing, etc.)
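
    One common way to keep the routes graphable is to separate "all the elements this one could lead to" from "the element actually chosen at runtime". A rough sketch of that idea (written in Java here for illustration; the member names are hypothetical, not the poster's API):

        import java.util.List;

        // Each element declares every possible successor up front, so a tool can
        // walk the whole story graph statically, while next() returns the single
        // successor actually taken when the section finishes at runtime.
        interface StoryElement {
            List<StoryElement> possibleNextElements(); // static view, used for graphing
            StoryElement next();                       // dynamic choice, used by the engine
        }

    The engine still follows next() exactly as before, but a graphing tool can now enumerate possibleNextElements() for every element without running the game.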

    Read the article

  • entity framework navigation property further filter without loading into memory

    - by cellik
    Hi, I have two entities with a 1-to-N relation in between. Let's say Books and Pages. Book has a navigation property named Pages. Book has BookId as an identifier, and Page has an auto-generated id and a scalar property named PageNo. LazyLoading is set to true. I've generated this using VS2010 & .NET 4.0 and created a database from that. In the partial class of Book, I need a GetPage function like the one below:

        public Page GetPage(int PageNumber)
        {
            // checking whether it exists etc. is not included for simplicity
            return Pages.Where(p => p.PageNo == PageNumber).First();
        }

    This works. However, since the Pages property in the Book is an EntityCollection, it has to load all Pages of a book into memory in order to get the one page (this slows down the app when this function is hit for the first time for a given book), i.e. the framework does not merge the queries and run them at once. It loads the Pages into memory and then uses LINQ to Objects to do the second part. To overcome this I've changed the code as follows:

        public Page GetPage(int PageNumber)
        {
            MyContainer container = new MyContainer();
            return container.Pages.Where(p => p.PageNo == PageNumber && p.Book.BookId == BookId).First();
        }

    This works considerably faster; however, it doesn't take into account the pages that have not been serialized to the db. So both options have their cons. Is there any trick in the framework to overcome this situation? This must be a common scenario where you don't want all of the objects of a navigation property loaded in memory when you don't need them.

    Read the article

  • Msysgit bash is horrendously slow in Windows 7

    - by Kevin L.
    I love git and use it on OS X pretty much constantly at home. At work, we use svn on Windows, but want to migrate to git as soon as the tools have fully matured (not just TortoiseGit, but also something akin to the really nice Visual Studio integration provided by VisualSVN). But I digress... I recently installed msysgit on my Windows 7 machine, and when using the included version of bash, it is horrendously slow. And not just the git operations; clear takes about five seconds. AAAAH! Has anyone experienced a similar issue? Edit: It appears that msysgit is not playing nicely with UAC and might just be a tiny design oversight resulting from developing on XP or running Vista or 7 with UAC disabled; starting Git Bash using Run as administrator results in the lightning speed I see with OS X (or on 7 after starting Git Bash w/o a network connection - see @Gauthier's answer). Edit 2: AH HA! See my answer.

    Read the article

  • Bazaar offline + branches

    - by cheez
    I have a Bazaar repository on Host A with multiple branches. This is my main repository. Until now, I have been doing checkouts on my other machines and committing directly to the main repository. However, now I am consolidating all my work to my laptop and multiple VMs. I need to be working offline regularly. In particular, I need to create/delete/merge branches, all while offline. I was thinking of continuing to have the master on Host A with a clone of the repository on the laptop, with each VM doing checkouts of the clone. Then, when I go offline, I could do bzr unbind on the clone and bzr bind when I am back online. This failed as soon as I tried to bzr clone, since bzr clone only clones a branch (!!!!). I need some serious help. If Hg would handle this better please let me know (I need Windows support). However, at this moment I cannot switch from Bazaar as it is too close to some important deadlines. Thanks in advance!

    Read the article

  • Handling Denormalized Schema with Eclipselink

    - by iamrohitbanga
    Hello All, I have a denormalized table containing employee information. The fields are employee id, name and department name. The primary key is a composite one consisting of all three fields. An employee can belong to multiple departments. I want to read/write the objects in the table using the EclipseLink Dynamic Persistence API (which is in fact a wrapper on top of JPA descriptors etc.). Example Data:

        1 e1 dep1
        2 e1 dep2
        3 e2 dep1
        4 e2 dep3
        5 e3 dep1
        5 e3 dep2
        5 e3 dep3

    A normal ReadAllQuery (select query) on the table returns a DynamicEntity corresponding to each row in the table. However, I want to club all entities based on the emp id and return all the departments he belongs to as a list. I can merge the entities after retrieving them, but if I can use some EclipseLink feature out of the box then it would be better. One way to do the read is the following: I create two dynamic types corresponding to employee, one having id, name as the primary key and one having id, department as the primary key, and I create a OneToManyMapping from the first type to the second one. Then when I query the first type it does return the departments to which the employee belongs as a list of DynamicEntity of the second type. This satisfies the read scenario. Is there a better way of doing this? Is this inherently supported by EclipseLink or JPA? I cannot get the same dynamic type configuration working for the write scenario. This is because when I write the changes using the writeObject method of UnitOfWork, it generates insert queries which enter the following entries in the table:

        id name department
        102 emp_102
        102 st
        102 dep_102
        102 dep_102
        102 dep_102

    instead of:

        id name department
        102 emp_102 st
        102 emp_102 dep_102
        102 emp_102 dep_102
        102 emp_102 dep_102

    Is there any way I can get write to work with this schema using EclipseLink? I want to avoid doing the heavy lifting of merging the rows for such a denormalized schema, or generating each row before doing a write. Is there no clean way of doing this using EclipseLink or JPA? Thanks in advance.

    Read the article

  • Persist data when the table was not mapped (JPA EclipseLink)

    - by enrique
    Hi everybody, I need some help persisting data into a table that has not been mapped... The issue is that the database we have has a table in which all of the columns are foreign keys, so by mapping the whole database all of the tables are correctly mapped. However, that table, called "category", is not mapped. The way in which we can browse the data is by going through the table I mentioned using the @JoinTable annotation, which was set by the system in the other tables with which "category" has a relation. So we can go ahead, use the collections, and perform a query. But the issue comes when I want to persist data into that table, because there's no entity for it. We tried to persist through the collections but had no luck. Then I tried creating the entity with its PK and Facade all by hand. However, when I try to persist using the Merge method, the system tries to perform an Insert when it is supposed to perform an Update, so obviously it returns an error. Does anybody have an idea on this situation? Thanks.
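
    If the goal is simply to control whether an INSERT or an UPDATE is issued, one option (a sketch only; the Category entity and its key accessor are hypothetical names for the hand-written entity described above) is to look the row up first and choose the operation explicitly:

        import javax.persistence.EntityManager;

        public Category save(EntityManager em, Category category) {
            // Look the row up by primary key and decide between INSERT and UPDATE.
            Category existing = em.find(Category.class, category.getId());
            if (existing == null) {
                em.persist(category);   // no row yet -> INSERT on flush
                return category;
            }
            return em.merge(category);  // row exists -> changes copied, UPDATE on flush
        }

    Whether this applies here depends on why merge is currently producing an INSERT, but separating the two cases at least makes the behaviour explicit and easier to debug.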

    Read the article

  • Why would using a Temp table be faster than a nested query?

    - by Mongus Pong
    We are trying to optimise some of our queries. One query is doing the following:

        SELECT t.TaskID, t.Name as Task, '' as Tracker, t.ClientID,
               (<complex subquery>) Date,
        INTO [#Gadget]
        FROM task t

        SELECT TOP 500 TaskID, Task, Tracker, ClientID,
               dbo.GetClientDisplayName(ClientID) as Client
        FROM [#Gadget]
        order by CASE WHEN Date IS NULL THEN 1 ELSE 0 END, Date ASC

        DROP TABLE [#Gadget]

    (I have removed the complex subquery, cos I don't think it's relevant other than to explain why this query has been done as a two-stage process.) Now I would have thought it would be far more efficient to merge this down into a single query using subqueries, as:

        SELECT TOP 500 TaskID, Task, Tracker, ClientID,
               dbo.GetClientDisplayName(ClientID)
        FROM
        (
            SELECT t.TaskID, t.Name as Task, '' as Tracker, t.ClientID,
                   (<complex subquery>) Date,
            FROM task t
        ) as sub
        order by CASE WHEN Date IS NULL THEN 1 ELSE 0 END, Date ASC

    This would give the optimiser better information to work out what was going on and avoid any temporary tables. It should be faster. But it turns out it is a lot slower: 8 seconds vs under 5 seconds. I can't work out why this would be the case, as all my knowledge of databases implies that subqueries would always be faster than using temporary tables. Can anyone explain what could be going on!?!?

    Read the article
