Search Results

Search found 7387 results on 296 pages for 'master detail'.

Page 44 of 296

  • JPA @Version behaviour

    - by Albert Kam
    Hello, I'm using JPA 2 with Hibernate 3.6.x and I've done some simple testing of @Version. Let's say we have two entities: a Team entity has a List of Player entities, a bidirectional relationship, lazy fetch type and cascade type ALL, and both entities have @Version. Here are the scenarios: (1) Whenever a modification is made to a team/player entity, that entity's version is increased when flushed/committed (the version on the modified record is increased). (2) When a new player entity is added to the team's collection using persist, the version is assigned after the persist (a newly persisted entity gets its version). (3) Whenever an addition/modification/removal is made to one of the player entities, the team's version is also increased when flushed/committed (add/modify/remove a child record and the parent's version is increased as well). I can understand numbers 1 and 2, but not number 3: why does the team's version get increased? And that makes me think of other questions: if I have a Parent <- Child <- Grandchild relationship, will an addition or modification on the grandchild increase the versions of the child and the parent? In scenario number 2, how can I get the version of the team before it is committed, perhaps by using flush? Is that a recommended way to get the parent's version after we do something to the child(ren)? Here's a code sample from my experiment, showing that when ReceivingGoodDetail is the owning side, the version is increased on the ReceivingGood after flushing. Sorry that this uses other entities, but ReceivingGood is like the Team and ReceivingGoodDetail is like the Player: one ReceivingGood/Team, many ReceivingGoodDetail/Player. /* Hibernate: select receivingg0_.id as id9_14_, receivingg0_.creationDate as creation2_9_14_, .. too long Hibernate: select product0_.id as id0_4_, product0_.creationDate as creation2_0_4_, .. too long before persisting the new detail, version of header is : 14 persisting the detail 1c9f81e1-8a49-4189-83f5-4484508e71a7 printing the size of the header : Hibernate: select details0_.receivinggood_id as receivi13_9_8_, details0_.id as id8_, details0_.id as id10_7_, .. too long 7 after persisting the new detail, version of header is : 14 Hibernate: insert into ReceivingGoodDetail (creationDate, modificationDate, usercreate_id, usermodify_id, version, buyQuantity, buyUnit, internalQuantity, internalUnit, product_id, receivinggood_id, supplierLotNumber, id) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?) Hibernate: update ReceivingGood set creationDate=?, modificationDate=?, usercreate_id=?, usermodify_id=?, version=?, purchaseorder_id=?, supplier_id=?, transactionDate=?, transactionNumber=?, transactionType=?, transactionYearMonth=?, warehouse_id=? where id=? and version=? 
after flushing, version of header is now : 15 */ public void addDetailWithoutTouchingCollection() { String headerId = "3b373f6a-9cd1-4c9c-9d46-240de37f6b0f"; ReceivingGood receivingGood = em.find(ReceivingGood.class, headerId); // create a new detail ReceivingGoodDetail receivingGoodDetailCumi = new ReceivingGoodDetail(); receivingGoodDetailCumi.setBuyUnit("Drum"); receivingGoodDetailCumi.setBuyQuantity(1L); receivingGoodDetailCumi.setInternalUnit("Liter"); receivingGoodDetailCumi.setInternalQuantity(10L); receivingGoodDetailCumi.setProduct(getProduct("b3e83b2c-d27b-4572-bf8d-ac32f6de5eaa")); receivingGoodDetailCumi.setSupplierLotNumber("Supplier Lot 1"); decorateEntity(receivingGoodDetailCumi, getUser("3978fee3-9690-4377-84bd-9fb05928a6fc")); receivingGoodDetailCumi.setReceivingGood(receivingGood); System.out.println("before persisting the new detail, version of header is : " + receivingGood.getVersion()); // persist it System.out.println("persisting the detail " + receivingGoodDetailCumi.getId()); em.persist(receivingGoodDetailCumi); System.out.println("printing the size of the header : "); System.out.println(receivingGood.getDetails().size()); System.out.println("after persisting the new detail, version of header is : " + receivingGood.getVersion()); em.flush(); System.out.println("after flushing, version of header is now : " + receivingGood.getVersion()); }
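
    For reference, a minimal sketch of the kind of mapping being described, using standard JPA 2 annotations (the class and field names follow the Team/Player example above; the ids and accessors are illustrative, and the real entities in the experiment are ReceivingGood/ReceivingGoodDetail):

        import java.util.ArrayList;
        import java.util.List;
        import javax.persistence.*;

        @Entity
        public class Team {
            @Id @GeneratedValue
            private Long id;

            @Version
            private long version;   // bumped by the provider whenever this row is updated

            // Team is the inverse side; the child holds the foreign key
            @OneToMany(mappedBy = "team", cascade = CascadeType.ALL, fetch = FetchType.LAZY)
            private List<Player> players = new ArrayList<Player>();

            public long getVersion() { return version; }
            public List<Player> getPlayers() { return players; }
        }

        @Entity
        class Player {
            @Id @GeneratedValue
            private Long id;

            @Version
            private long version;

            // Owning side of the relationship, like ReceivingGoodDetail in the experiment
            @ManyToOne(fetch = FetchType.LAZY)
            private Team team;

            public void setTeam(Team team) { this.team = team; }
        }

    As the log above shows, the parent's @Version value only reflects the new number after em.flush() (or the commit) has pushed the pending statements to the database: it is 14 before the flush and 15 afterwards.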

    Read the article

  • What is the best way to change the replication scheme of 2 currently replicated slaves?

    - by mmattax
    I have MySQL replication set up in production as follows: DB1 -> DB2 and DB1 -> BAK, where DB2 and BAK are slaves of DB1. All 3 servers are in sync (0 seconds behind the master) and have 30+ GB of data. I'd like to put the servers in a new master-slave configuration as follows: DB1 -> DB2 -> BAK. What is the best way to change the master host on BAK? Is there a way to avoid having to stop the slave thread on DB2 and take a mysqldump for BAK (a 5-6 hour process)?

    Read the article

  • A few tables are still out of sync after running mk-table-sync

    - by smusumeche
    I have 1 master and 2 slaves, all running MySQL 5.1.42. I am attempting to use mk-table-checksum to verify that their data is in sync, but I am getting unexpected results on one of the slaves. First, I generate the checksums on the master like this: mk-table-checksum h=localhost --databases MYDB --tables {$table_list} --replicate=MYDB.mk_checksum --chunk-size=10M My understanding is that this runs the checksum queries on the master, which then propagate via normal replication to the slaves, so no locking is needed because the slaves will be at the same logical point in time when they run the checksum queries on themselves. Is this correct? Next, to verify that the checksums match, I run this on the master: mk-table-checksum --databases MYDB --replicate=IRC.mk_checksum --replicate-check 1 h=localhost,u=maatkit,p=xxxx If there are any differences, I repair the slaves like this: mk-table-sync --execute --verbose --replicate IRC.mk_checksum h=localhost,u=maatkit,p=xxxx After doing all of this, I repaired both slaves with mk-table-sync. However, every time I run this sequence (after everything has already been repaired), one slave is perfectly in sync but the other always has a few tables out of sync. I am 99.999% sure that the data on the slaves matches, since I repaired everything and the tables were not even updated on the master between runs of the checksum script. What would cause a few tables to always show as out of sync on only one of the slaves? I am stuck. Here is the output:

        Differences on h=x.x.x.x,p=...,u=maatkit
        DB  TBL                CHUNK  CNT_DIFF  CRC_DIFF  BOUNDARIES
        IRC product            10     0         1         product_id = 147377 AND product_id < 162085
        IRC post_order_survey  0      0         1         1=1
        IRC mk_heartbeat       0      0         1         1=1
        IRC mailing_list       0      0         1         1=1
        IRC honey_pot_log      0      0         1         1=1
        IRC product            12     0         1         product_id = 176793 AND product_id < 191501
        IRC product            18     0         1         product_id = 265041
        IRC orders             26     0         1         order_id = 694472
        IRC orders_product     6      0         1         op_id = 935375

    Read the article

  • Rebasing a branch which is public

    - by Dror
    I'm failing to understand how to use git-rebase, so consider the following example. Let's start a repository in ~/tmp/repo: $ git init Then add a file foo: $ echo "hello world" > foo which is then added and committed: $ git add foo $ git commit -m "Added foo" Next, I started a remote repository. In ~/tmp/bare.git I ran $ git init --bare In order to link repo to bare.git I ran $ git remote add origin ../bare.git/ $ git push --set-upstream origin master Next, let's branch, add a file and set an upstream for the new branch b1: $ git checkout -b b1 $ echo "bar" > foo2 $ git add foo2 $ git commit -m "add foo2 in b1" $ git push --set-upstream origin b1 Now it is time to switch back to master and change something there: $ echo "change foo" > foo $ git commit -a -m "changed foo in master" $ git push At this point, in master the file foo contains "change foo", while in b1 it is still "hello world". Finally, I want to sync b1 with the progress made in master. $ git checkout b1 $ git fetch origin $ git rebase origin/master At this point git st returns: # On branch b1 # Your branch and 'origin/b1' have diverged, # and have 2 and 1 different commit each, respectively. # (use "git pull" to merge the remote branch into yours) # nothing to commit, working directory clean At this point the content of foo in the branch b1 is "change foo" as well. So what does this warning mean? I expected that I should do a git push, but git suggests doing a git pull... According to this answer, this is more or less it, and in his comment @FrerichRaabe explicitly says that I don't need to do a pull. What's going on here? What is the danger, and how should one proceed? How should the history be kept consistent? What is the interplay between the case described above and the following citation, taken from the Pro Git book: "Do not rebase commits that you have pushed to a public repository." I guess it is somehow related, and if not I would love to know why. What's the relation between the above scenario and the procedure I described in this post?

    Read the article

  • JSF, RichFaces, popup window

    - by Hubidubi
    Hi, I would like to make a list-detail view with RichFaces. There will be a link for every record in the list that should open a new window containing the record details. I tried to implement the link this way: <a4j:commandLink oncomplete="window.open('/pages/serviceDetail.jsf','popupWindow', 'dependent=yes, menubar=no, toolbar=no, height=500, width=400')" actionListener="#{monitoringBean.recordDetail}" value="details" /> I use <a4j:keepAlive beanName="monitoringBean" ajaxOnly="false" /> for both the list and the detail page. The recordDetail method fills a variable of the bean with the data of the selected record, which I would like to display on the detail page. The problem is that keepAlive doesn't work, so I get a new bean instance on the detail page every time, and the previously selected record from the other bean instance is therefore not accessible there. Is there a way to pass a parameter (the id) to the detail page to handle the record selection? Or is there any way to make keepAlive work? (I think this would be the easiest.) Thanks
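
    One common way around the keepAlive problem, not taken from the original post, is to pass the record id to the popup as a plain request parameter (for example by appending ?id=... to the URL given to window.open) and have the detail page's bean look the record up again itself. A minimal sketch of the bean side, using the standard JSF API; the bean name, parameter name and lookup step are hypothetical:

        import javax.faces.context.FacesContext;

        // Hypothetical request-scoped bean backing /pages/serviceDetail.jsf
        public class ServiceDetailBean {

            private String recordId;

            // Called when the detail page is rendered: read the id that was
            // appended to the popup URL, e.g. /pages/serviceDetail.jsf?id=42
            public void loadRecord() {
                recordId = FacesContext.getCurrentInstance()
                                       .getExternalContext()
                                       .getRequestParameterMap()
                                       .get("id");
                // ...reload the selected record by id here instead of relying on keepAlive...
            }

            public String getRecordId() {
                return recordId;
            }
        }

    This keeps the popup independent of the list page's bean instance, so it does not matter that a new bean is created for the detail view.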

    Read the article

  • Using Core Data Concurrently and Reliably

    - by John Topley
    I'm building my first iOS app, which in theory should be pretty straightforward, but I'm having difficulty making it sufficiently bulletproof for me to feel confident submitting it to the App Store. Briefly, the main screen has a table view; upon selecting a row it segues to another table view that displays information relevant to the selected row in a master-detail fashion. The underlying data is retrieved as JSON data from a web service once a day and then cached in a Core Data store. The data previous to that day is deleted to stop the SQLite database file from growing indefinitely. All data persistence operations are performed using Core Data, with an NSFetchedResultsController underpinning the detail table view. The problem I am seeing is that if you switch quickly between the master and detail screens several times whilst fresh data is being retrieved, parsed and saved, the app freezes or crashes completely. There seems to be some sort of race condition, maybe due to Core Data importing data in the background whilst the main thread is trying to perform a fetch, but I'm speculating. I've had trouble capturing any meaningful crash information; usually it's a SIGSEGV deep in the Core Data stack. The list below shows the actual order of events that happen when the detail table view controller is loaded, with the thread each step runs on:

        1.  viewDidLoad (main thread)
        2.  Get JSON data (using AFNetworking) (background thread)
        3.  Create child NSManagedObjectContext (MOC) (main thread)
        4.  Parse JSON data (background thread)
        5.  Insert managed objects in child MOC (background thread)
        6.  Save child MOC (background thread)
        7.  Post import completion notification (background thread)
        8.  Receive import completion notification (main thread)
        9.  Save parent MOC (main thread)
        10. Perform fetch and reload table view (main thread)
        11. Delete old managed objects in child MOC (background thread)
        12. Save child MOC (background thread)
        13. Post deletion completion notification (background thread)
        14. Receive deletion completion notification (main thread)
        15. Save parent MOC (main thread)

    Once the AFNetworking completion block is triggered when the JSON data has arrived, a nested NSManagedObjectContext is created and passed to an "importer" object that parses the JSON data and saves the objects to the Core Data store. The importer executes using the new performBlock method introduced in iOS 5:

        NSManagedObjectContext *child = [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
        [child setParentContext:self.managedObjectContext];
        [child performBlock:^{
            // Create importer instance, passing it the child MOC...
        }];

    The importer object observes its own MOC's NSManagedObjectContextDidSaveNotification and then posts its own notification, which is observed by the detail table view controller. When this notification is posted the table view controller performs a save on its own (parent) MOC. I use the same basic pattern with a "deleter" object for deleting the old data after the new data for the day has been imported. This occurs asynchronously after the new data has been fetched by the fetched results controller and the detail table view has been reloaded. One thing I am not doing is observing any merge notifications or locking any of the managed object contexts or the persistent store coordinator. Is this something I should be doing? I'm a bit unsure how to architect this all correctly, so I would appreciate any advice.

    Read the article

  • Do we need separate file paths for Windows and Linux in Java?

    - by Kishor Sharma
    I have a file hosted on a Linux (Ubuntu) server under the path /home/kishor/project/detail/. When I made a web app on Windows to upload and download files from a specified location, I used the path "c:\kishor\projects\detail\" for saving on Windows. To my surprise, when I used the Windows file path on my server I was still able to get and upload files, i.e. with "c:\kishor\projects\detail\". Can anyone explain why it is working (as Windows and Linux use different file path patterns)?
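
    For what it's worth, portable code usually avoids hard-coding either separator and lets the JDK build the path. A minimal sketch, not from the original question (the directory names are just the ones mentioned above):

        import java.io.File;
        import java.nio.file.Path;
        import java.nio.file.Paths;

        public class PathDemo {
            public static void main(String[] args) {
                // Build the path from segments; the JVM inserts the platform-specific separator.
                Path detailDir = Paths.get("home", "kishor", "project", "detail");
                System.out.println(detailDir);      // home/kishor/project/detail on Linux, home\kishor\project\detail on Windows
                System.out.println(File.separator); // "/" on Linux, "\" on Windows
            }
        }

    Using File.separator, Paths.get, or new File(parent, child) keeps the same code working on both platforms without maintaining a separate path string per operating system.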

    Read the article

  • How to take a MySQL replication backup

    - by user53864
    I have a MySQL master-master replication setup with a slave for each master (only one master is used for reads/writes at a time) on Ubuntu Server. I'm wondering what would be the best way to schedule backups of the replicated databases with mysqldump. I have the following questions, which I could not resolve on my own: Is scheduling a mysqldump backup on the masters safe for replication? Is it safe to connect to the masters with GUI applications (Workbench) for database manipulation (reads and writes by developers)? Any input is welcome.

    Read the article

  • typedef declaration syntax

    - by mt_serg
    A few days ago I looked at the Boost sources and found an interesting typedef. This code is from "boost\detail\none_t.hpp":

        namespace boost {
        namespace detail {
            struct none_helper{};
            typedef int none_helper::*none_t ;
        } // namespace detail
        } // namespace boost

    I had not seen syntax like that before and can't work out what it means. This typedef introduces the name "none_t" as a pointer to int in the boost::detail namespace. What is this syntax? And what is the difference between "typedef int none_helper::*none_t" and, for example, "typedef int *none_t"?

    Read the article

  • ASP.NET: Can I force every page to inherit from a base page? Also, should some of this logic be in my master page?

    - by Bex
    Hi! I have a web app that has a base page. Each page needs to inherit from this base page, as it contains properties they all need as well as dealing with the login rights. My base page has some properties, e.g. IsRole1, IsRole2, currentUserID, Role1Allowed, Role2Allowed. On the init of each page I set the properties "Role1Allowed" and "Role2Allowed": Private Sub Page_Init(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Init Role1Allowed = True Role2Allowed = False End Sub The base page then decides if the user needs redirecting. 'Sample code, so not exactly what it is going to be, but it gives the idea Protected Overridable Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) If Role1Allowed And Not Role1 Then 'Redirect somewhere End If End Sub Each page must then override this Page_Load if it needs anything else in it, making sure it calls the base Page_Load first. Protected Overrides Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load MyBase.Page_Load(sender, e) If Not IsPostBack Then BindGrid() End If End Sub The other properties (IsRole1, IsRole2, currentUserID) are also accessible by the page, so it can be decided whether certain things need doing based on the user. (I hope this makes sense.) OK, so I have 2 questions: Should this functionality be in the base page, or should it somehow be in the master page, and if so how would I get access to all the properties if it was? As there are multiple people working on this project and creating pages, some are forgetting to inherit from this base page, or to call the base Page_Load when overriding it. Is there any way to force them to do this? Thanks for any help. bex

    Read the article

  • Start daemonised GNU Screen from a script and allow the calling script to end

    - by tez
    I have a script on an embedded device that calls screen to start if a user logs in via an ssh session...

        #!/bin/sh
        SCREENRUNNING=`pgrep SCREEN`
        if [ -z "$SCREENRUNNING" ]; then
            echo "Screen not running so let's start the Master session"
            sleep 2
            screen -dmS Master
            sleep 2
            screen -x root/Master
        else
            echo "Screen is already running let's connect to existing session"
            sleep 2
            screen -x root/Master
        fi

    However this keeps the calling script active until the screen session exits, even if it's detached. What I want is to have the calling script finish and exit while the screen session stays active. I've tried daemonising the screen -x lines and adding an & to the end of the screen -x lines, neither of which works properly. Ideas?

    Read the article

  • BIND: forward 1st level zone

    - by raven
    First of all: sorry for the language, English is not my primary language. I have a star-like DNS structure with many filials (more than 2):

                                               ^
                                               |
                                               v
        filialNS_1.filial_1.city.local <---- ns.main.city.local <---- filialNS_2.filial_2.city.local
                                               ^
                                               |
                                               v

        ns.main.city.local is a slave of all filial zones
        filialNS_1 is master of filial_1.city.local
        filialNS_2 is master of filial_2.city.local
        filialNS_N is master of filial_N.city.local

    I want to:

    - serve DNS queries for xxx.filial_N.city.local with filialNS_N.filial_N.city.local
    - forward all queries for xxx.xxx.xxx.local from filialNS_N to ns.main.city.local
    - forward other queries to our provider's DNS on the filial (or google-public-dns or anything else)

    FILIAL CONFIG (named.conf):

        zone "filial_1.city.local" {
            type master;
            file "/etc/namedb/dynamic/filial_1.city.local";
            allow-update { key DHCP_UPDATER; };
            allow-transfer { <ns.main.city.local IP address> };
        };
        zone "2.76.10.in-addr.arpa" {
            type master;
            file "/etc/namedb/dynamic/2.76.10.in-addr.arpa";
            allow-update { key DHCP_UPDATER; };
            allow-transfer { <ns.main.city.local IP address> };
        };
        zone "local." {
            type forward;
            forward only;
            forwarders { <ns.main.city.local IP address> };
        };

    nslookup server.filial_1.city.local - works fine

        nslookup server.main.city.local
        Server:   127.0.0.1
        Address:  127.0.0.1#53
        ** server can't find server.main.city.local: NXDOMAIN

    Where am I going wrong?

    Read the article

  • Can you help me understand my SATA/RAID options?

    - by andrz_001
    I have a Gigabyte GA-M720-US3 motherboard. Recently, I noticed the following during boot:

        IDE channel 0 Master (none)
        IDE channel 0 Slave  (none)
        IDE channel 2 Master (my hdd)
        IDE channel 2 Slave  (my dvd drive)
        IDE channel 3 Master (none)
        IDE channel 3 Slave  (none)

    Of course, the same information is contained in the BIOS/CMOS. The HDD is connected to the mobo via a SATA(2?) cable at the port(?) labeled SATA2_0. The DVD drive is connected by a similar cable at SATA2_1. Why doesn't the information displayed during boot and in the BIOS reflect how I plugged the cables in? I mean, why "none" for channel 0 when there is something in SATA2_0? (Or is that serious naivete on my part!?) Where are channel 1 master and slave? Since these are SATA cables and not the IDE ribbons from a while ago, why the whole master/slave declaration during boot and in the BIOS? Should my BIOS reflect the fact that these are SATA cables? I mean, in the BIOS, should the "Onchip SATA mode IDE" be set to RAID or AHCI instead of IDE? Any replies, answers, suggestions, links, tips will be much appreciated. Thank you in advance!

    Read the article

  • What is the fastest method to restore MySQL replication?

    - by dwhere
    I have a MySQL (5.1) master-slave replication pair and replication to the slave has failed. It failed because the master ran out of disk space and the relay-logs became corrupt. The master is now back online and working properly. Since there is this error in the log the slave process can't simply be restarted. The server has a single 40GB InnoDB database and I would like to know what is the fastest method for getting the slave back in sync to minimize downtime.

    Read the article

  • Want to save a data field from a form into two columns of two models

    - by vette982
    I have a Profile model with a hasOne relationship to a Detail model. I have a registration form that saves data into both models' tables, but I want the username field from the Profile model to be copied over to the username field in the Detail model, so that each has the same username.

        function new_account() {
            if(!empty($this->data)) {
                $this->Profile->modified = date("Y-m-d H:i:s");
                if($this->Profile->save($this->data)) {
                    $this->data['Detail']['profile_id'] = $this->Profile->id;
                    $this->data['Detail']['username'] = $this->Profile->username;
                    $this->Profile->Detail->save($this->data);
                    $this->Session->setFlash('Your registration was successful.');
                    $this->redirect(array('action'=>'index'));
                }
            }
        }

    This code in my Profile controller gives me the error: Undefined property: Profile::$username. Any ideas?

    Read the article

  • Reporting Services Sum of Inner Group in Outer Group

    - by Spoonybard
    I have a report in Reporting Services 2008 using ASP.NET 3.5 and SQL Server 2008. The report has 2 groupings and a detail row. This is the current format:

        Outer Group
            Inner Group
                Detail Row

    The Detail Row represents an item on a receipt, and a receipt can have multiple items. Each receipt was paid with a certain payment method. So the Outer Group is grouped by payment type, the Inner Group is grouped by the receipt's ID, and the Detail Row is each item for the given receipt. My raw data result set has two important columns: the Amount Received and the Amount Applied. The Amount Received is how much money in total was collected for all the items on the receipt. The Amount Applied is how much money each item got from the total Amount Received.

    Sample Result Set:

        ReceiptID  Item      ItemID  AmountReceived  AmountApplied  Payment Method
        --------------------------------------------------------------------------
        1          Book      1       $200.00         $40.00         Cash
        1          CD        2       $200.00         $20.00         Cash
        1          Software  3       $200.00         $100.00        Cash
        1          Backpack  4       $200.00         $40.00         Cash

    The Inner Group displays the AmountReceived correctly as $200.00. However, the Outer Group displays the AmountReceived as $800.00, because I believe it is going off each detail row, which in this case is a count of 4 items. What I want is to see in the Outer Group that the Amount Received is $200.00. I tried restricting the scope in my SUM function to the Inner Group, but I get the error "The scope parameter must be set to a string constant that is equal to either the name of a containing group, the name of a containing data region, or the name of a dataset." Does anyone have any suggestions on how to solve this issue? Thanks.

    Read the article

  • Multiple clients on the same Oracle UCM

    - by [email protected]
    We are very active with the rollout of ECM platforms that serve multiple clients, achieving 2 very important objectives: the end client can pay the provider for an ECM platform as a service (SaaS), and, naturally, saves on complexity and on infrastructure, administration, training, storage costs, etc. These days we have been explaining the Master-Proxy model of Oracle UCM, with which we can deploy this kind of platform. It will not always be the most suitable solution, because sometimes we will want shared platforms but with completely isolated clients. In any case, the migration console lets us export and import components, metadata, content, workflows, etc., so that we can choose the most suitable model for each case. But how does it work? As you can see in the image, it is based on installing several UCM instances and configuring them so that some of them behave as a "Master" (several, in order to achieve high availability) and the rest behave as a "Proxy" (several UCM instances can also act as one and the same "Proxy", allowing the load to be balanced according to whether each client needs more or less performance). This configuration (shown in the attached image) allows us to:

    - Delegate the user management of each client. Users of the Master can access all the Proxies, but the users of each proxy only access their own repository.
    - Delegate functionality and components. It is possible to configure different functionality on each proxy, so that some servers are specialized in Web Content Management and others in Document Management (for example).
    - Use different metadata models. We can model some general document types for the whole platform and other, client-specific ones on each UCM "Proxy".
    - Centralize searches and access to document repositories with different character sets. A UCM Master can centralize searching across proxy UCMs that hold documentation in different character sets (for example, one UCM for documentation in "Western European" languages (English, Spanish, French, German, ...) and another proxy UCM under an "Asian" character set (Japanese, Korean, Chinese, ...)).

    Sources: All the detailed information can be found in the Oracle UCM documentation, here: http://download.oracle.com/docs/cd/E10316_01/ouc.htm And specifically, for what concerns platforms, in the document "Planning and Implementation Guide", here: plan_implement_guide_10en.pdf

    Read the article

  • Revisiting ANTS Performance Profiler 7.4

    - by James Michael Hare
    Last year, I did a small review on the ANTS Performance Profiler 6.3, now that it’s a year later and a major version number higher, I thought I’d revisit the review and revise my last post. This post will take the same examples as the original post and update them to show what’s new in version 7.4 of the profiler. Background A performance profiler’s main job is to keep track of how much time is typically spent in each unit of code. This helps when we have a program that is not running at the performance we expect, and we want to know where the program is experiencing issues. There are many profilers out there of varying capabilities. Red Gate’s typically seem to be the very easy to “jump in” and get started with very little training required. So let’s dig into the Performance Profiler. I’ve constructed a very crude program with some obvious inefficiencies. It’s a simple program that generates random order numbers (or really could be any unique identifier), adds it to a list, sorts the list, then finds the max and min number in the list. Ignore the fact it’s very contrived and obviously inefficient, we just want to use it as an example to show off the tool: 1: // our test program 2: public static class Program 3: { 4: // the number of iterations to perform 5: private static int _iterations = 1000000; 6: 7: // The main method that controls it all 8: public static void Main() 9: { 10: var list = new List<string>(); 11: 12: for (int i = 0; i < _iterations; i++) 13: { 14: var x = GetNextId(); 15: 16: AddToList(list, x); 17: 18: var highLow = GetHighLow(list); 19: 20: if ((i % 1000) == 0) 21: { 22: Console.WriteLine("{0} - High: {1}, Low: {2}", i, highLow.Item1, highLow.Item2); 23: Console.Out.Flush(); 24: } 25: } 26: } 27: 28: // gets the next order id to process (random for us) 29: public static string GetNextId() 30: { 31: var random = new Random(); 32: var num = random.Next(1000000, 9999999); 33: return num.ToString(); 34: } 35: 36: // add it to our list - very inefficiently! 37: public static void AddToList(List<string> list, string item) 38: { 39: list.Add(item); 40: list.Sort(); 41: } 42: 43: // get high and low of order id range - very inefficiently! 44: public static Tuple<int,int> GetHighLow(List<string> list) 45: { 46: return Tuple.Create(list.Max(s => Convert.ToInt32(s)), list.Min(s => Convert.ToInt32(s))); 47: } 48: } So let’s run it through the profiler and see what happens! Visual Studio Integration First, let’s look at how the ANTS profilers integrate with Visual Studio’s menu system. Once you install the ANTS profilers, you will get an ANTS menu item with several options: Notice that you can either Profile Performance or Launch ANTS Performance Profiler. These sound similar but achieve two slightly different actions: Profile Performance: this immediately launches the profiler with all defaults selected to profile the active project in Visual Studio. Launch ANTS Performance Profiler: this launches the profiler much the same way as starting it from the Start Menu. The profiler will pre-populate the application and path information, but allow you to change the settings before beginning the profile run. So really, the main difference is that Profile Performance immediately begins profiling with the default selections, where Launch ANTS Performance Profiler allows you to change the defaults and attach to an already-running application. Let’s Fire it Up! 
So when you fire up ANTS either via Start Menu or Launch ANTS Performance Profiler menu in Visual Studio, you are presented with a very simple dialog to get you started: Notice you can choose from many different options for application type. You can profile executables, services, web applications, or just attach to a running process. In fact, in version 7.4 we see two new options added: ASP.NET Web Application (IIS Express) SharePoint web application (IIS) So this gives us an additional way to profile ASP.NET applications and the ability to profile SharePoint applications as well. You can also choose your level of detail in the Profiling Mode drop down. If you choose Line-Level and method-level timings detail, you will get a lot more detail on the method durations, but this will also slow down profiling somewhat. If you really need the profiler to be as unintrusive as possible, you can change it to Sample method-level timings. This is performing very light profiling, where basically the profiler collects timings of a method by examining the call-stack at given intervals. Which method you choose depends a lot on how much detail you need to find the issue and how sensitive your program issues are to timing. So for our example, let’s just go with the line and method timing detail. So, we check that all the options are correct (if you launch from VS2010, the executable and path are filled in already), and fire it up by clicking the [Start Profiling] button. Profiling the Application Once you start profiling the application, you will see a real-time graph of CPU usage that will indicate how much your application is using the CPU(s) on your system. During this time, you can select segments of the graph and bookmark them, giving them mnemonic names. This can be useful if you want to compare performance in one part of the run to another part of the run. Notice that once you select a block, it will give you the call tree breakdown for that selection only, and the relative performance of those calls. Once you feel you have collected enough information, you can click [Stop Profiling] to stop the application run and information collection and begin a more thorough analysis. Analyzing Method Timings So now that we’ve halted the run, we can look around the GUI and see what we can see. By default, the times are shown in terms of percentage of time of the total run of the application, though you can change it in the View menu item to milliseconds, ticks, or seconds as well. This won’t affect the percentages of methods, it only affects what units the times are shown. Notice also that the major hotspot seems to be in a method without source, ANTS Profiler will filter these out by default, but you can right-click on the line and remove the filter to see more detail. This proves especially handy when a bottleneck is due to a method in the BCL. So now that we’ve removed the filter, we see a bit more detail: In addition, ANTS Performance Profiler gives you the ability to decompile the methods without source so that you can dive even deeper, though typically this isn’t necessary for our purposes. When looking at timings, there are generally two types of timings for each method call: Time: This is the time spent ONLY in this method, not including calls this method makes to other methods. Time With Children: This is the total of time spent in both this method AND including calls this method makes to other methods. 
In other words, the Time tells you how much work is being done exclusively in this method, and the Time With Children tells you how much work is being done inclusively in this method and everything it calls. You can also choose to display the methods in a tree or in a grid. The tree view is the default and it shows the method calls arranged in terms of the tree representing all method calls and the parent method that called them, etc. This is useful for when you find a hot-spot method, you can see who is calling it to determine if the problem is the method itself, or if it is being called too many times. The grid method represents each method only once with its totals and is useful for quickly seeing what method is the trouble spot. In addition, you can choose to display Methods with source which are generally the methods you wrote (as opposed to native or BCL code), or Any Method which shows not only your methods, but also native calls, JIT overhead, synchronization waits, etc. So these are just two ways of viewing the same data, and you’re free to choose the organization that best suits what information you are after. Analyzing Method Source If we look at the timings above, we see that our AddToList() method (and in particular, it’s call to the List<T>.Sort() method in the BCL) is the hot-spot in this analysis. If ANTS sees a method that is consuming the most time, it will flag it as a hot-spot to help call out potential areas of concern. This doesn’t mean the other statistics aren’t meaningful, but that the hot-spot is most likely going to be your biggest bang-for-the-buck to concentrate on. So let’s select the AddToList() method, and see what it shows in the source window below: Notice the source breakout in the bottom pane when you select a method (from either tree or grid view). This shows you the timings in this method per line of code. This gives you a major indicator of where the trouble-spot in this method is. So in this case, we see that performing a Sort() on the List<T> after every Add() is killing our performance! Of course, this was a very contrived, duh moment, but you’d be surprised how many performance issues become duh moments. Note that this one line is taking up 86% of the execution time of this application! If we eliminate this bottleneck, we should see drastic improvement in the performance. So to fix this, if we still wanted to maintain the List<T> we’d have many options, including: delay Sort() until after all Add() methods, using a SortedSet, SortedList, or SortedDictionary depending on which is most appropriate, or forgoing the sorting all together and using a Dictionary. Rinse, Repeat! So let’s just change all instances of List<string> to SortedSet<string> and run this again through the profiler: Now we see the AddToList() method is no longer our hot-spot, but now the Max() and Min() calls are! This is good because we’ve eliminated one hot-spot and now we can try to correct this one as well. As before, we can then optimize this part of the code (possibly by taking advantage of the fact the list is now sorted and returning the first and last elements). We can then rinse and repeat this process until we have eliminated as many bottlenecks as possible. Calls by Web Request Another feature that was added recently is the ability to view .NET methods grouped by the HTTP requests that caused them to run. This can be helpful in determining which pages, web services, etc. are causing hot spots in your web applications. 
Summary If you like the other ANTS tools, you'll like the ANTS Performance Profiler as well. It is extremely easy to use with very little product knowledge required to get up and running. There are profilers built into the higher product lines of Visual Studio, of course, which are also powerful and easy to use. But for quickly jumping in and finding hot spots rapidly, Red Gate's Performance Profiler 7.4 is an excellent choice.

    Read the article

  • Reading and conditionally updating N rows, where N > 100,000 for DNA Sequence processing

    - by makerofthings7
    I have a proof of concept application that uses Azure tables to associate DNA sequences to "something". Table 1 is the master table. It uniquely lists every DNA sequence. The PK is a load balanced hash of the RK. The RK is the unique encoded value of the DNA sequence. Additional tables are created per subject. Each subject has a list of N DNA sequences that have one reference in the Master table, where N is 100,000. It is possible for many tables to reference the same DNA sequence, but in this case only one entry will be present in the Master table. My Azure dilemma: I need to lock the reference in the Master table as I work with the data. I need to handle timeouts, and prevent other threads from overwriting my data as one C# thread is working with the information. Other threads need to realise that this is locked, and move onto other unlocked records and do the work. Ideally I'd like to get some progress report of how my computation is going, and have the option to cancel the process (and unwind the locks). Question What is the best approach for this? I'm looking at these code snippets for inspiration: http://blogs.msdn.com/b/jimoneil/archive/2010/10/05/azure-home-part-7-asynchronous-table-storage-pagination.aspx http://stackoverflow.com/q/4535740/328397

    Read the article

  • Backup those keys, citizen

    - by BuckWoody
    Periodically I back up the keys within my servers and databases, and when I do, I blog a reminder here. This should be part of your standard backup rotation – the keys should be backed up often enough to have at hand and again when they change. The first key you need to back up is the Service Master Key, which each Instance already has built-in. You do that with the BACKUP SERVICE MASTER KEY command, which you can read more about here. The second set of keys are the Database Master Keys, stored per database, if you've created one. You can back those up with the BACKUP MASTER KEY command, which you can read more about here. Finally, you can use the keys to create certificates and other keys – those should also be backed up. Read more about those here. Anyway, the important part here is the backup. Make sure you keep those keys safe!

    Read the article

  • MaaS minimum requirements with juju-jitsu?

    - by Christopher Shen Mu Long
    I've browsed through so many different sites and found so much contradictory information. I am getting tired of this and believe this question affects many other users, so I would like to collect the "once and for all" answer. Unfortunately, the documentation on MaaS and Juju is ... well, not the best, sorry to say that.

    1. What are the minimum system requirements for setting up a MaaS cluster which is going to be orchestrated with juju-jitsu? Do the machines need to have exactly the same specifications, or can I just combine different hardware?
    2. What are the minimum requirements for the master machine? E.g. "You need at least 8GB of RAM and a dual core CPU with at least 3.0 GHz."
    3. How many machines do I need to deploy MaaS on? I've read six machines, nine machines, and so on. I clearly want to know: "You need one for the master and e.g. five nodes."
    4. Do I need to attach as many NICs (network interface cards) to my master machine as there are nodes, or can I simply attach two NICs and a switch? One NIC for connecting to the internet, one for handling the MaaS tasks, connected to a switch which connects my nodes to the master?
    5. Is Juju now ready for local deployment? The last time I experimented with Juju and had to reboot my machine, the services orchestrated by Juju were gone. This was an issue I also found on the official Juju site. Unfortunately, as mentioned above, the documentation is not the best, so I could not find the necessary info on that again. So: can I use Juju in a local environment, or will a reboot break my setup?

    Read the article

  • Load balancing questions

    - by Van Holtz
    I have been learning networking for about 4 months. I wrote a single standalone multiplayer server and succeeded with an authoritative approach. Now I want to extend it by splitting the single server into clusters, to allow even more players to log in and to avoid latency issues. I have now prototyped the load-balancing server and it's running pretty well so far. This is my architecture: I have a master server which acts as a proxy, and every sub server (chat, login, game) connects to the master server, as do all the clients. When a client connects, the request flow is: client sends request -> MS (Master) -> decides which SS (SubServer) to forward to -> forwards request to SS -> SS analyzes the message -> sends response to MS -> MS decides which client to forward to -> forwards response to client. Well, it looks like it's going through lots of stages; it takes double the time to process a message compared with the single-server approach. I feel like my model isn't the best, or I may be wrong. Is there a better model, or what is the one they use in professional games? I still want a Master/SubServer approach. I just want to clarify that I'm going in the right direction before writing all my code. Thanks for any answer :)

    Read the article

  • Maintaining Two Separate Software Versions From the Same Codebase in Version Control

    - by Joseph
    Let's say that I am writing two different versions of the same software/program/app/script and storing them under version control. The first version is a free "Basic" version, while the second is a paid "Premium" version that takes the codebase of the free version and expands upon it with a few extra value-added features. Any new patches, fixes, or features need to find their way into both versions. I am currently considering using master and develop branches for the main codebase (free version) alongside master-premium and develop-premium branches for the paid version. When a change is made to the free version and merged to the master branch (after thorough testing on develop, of course), it gets copied over to the develop-premium branch via the cherry-pick command for more testing and then merged into master-premium. Is this the best workflow to handle this situation? Are there any potential problems, caveats, or pitfalls to be aware of? Is there a better branching strategy than what I have already come up with? Your feedback is highly appreciated! P.S. This is for a PHP script stored in Git, but the answers should apply to any language or VCS.

    Read the article
