
  • BizTalk: History of one project architecture

    - by Leonid Ganeline
    "In the beginning God made heaven and earth. Then he started to integrate." At the very start was the requirement: integrate two working systems. Small digging up: It was one system. It was good but IT guys want to change it to the new one, much better, chipper, more flexible, and more progressive in technologies, more suitable for the future, for the faster world and hungry competitors. One thing. One small, little thing. We cannot turn off the old system (call it A, because it was the first), turn on the new one (call it B, because it is second but not the last one). The A has a hundreds users all across a country, they must study B. A still has a lot nice custom features, home-made features that cannot disappear. These features have to be moved to the B and it is a long process, months and months of redevelopment. So, the decision was simple. Let’s move not jump, let’s both systems working side-by-side several months. In this time we could teach the users and move all custom A’s special functionality to B. That automatically means both systems should work side-by-side all these months and use the same data. Data in A and B must be in sync. That’s how the integration projects get birth. Moreover, the specific of the user tasks requires the both systems must be in sync in real-time. Nightly synchronization is not working, absolutely.   First draft The first draft seems simple. Both systems keep data in SQL databases. When data changes, the Create, Update, Delete operations performed on the data, and the sync process could be started. The obvious decision is to use triggers on tables. When we are talking about data, we are talking about several entities. For example, Orders and Items [in Orders]. We decided to use the BizTalk Server to synchronize systems. Why it was chosen is another story. Second draft   Let’s take an example how it works in more details. 1.       User creates a new entity in the A system. This fires an insert trigger on the entity table. Trigger has to pass the message “Entity created”. This message includes all attributes of the new entity, but I focused on the Id of this entity in the A system. Notation for this message is id.A. System A sends id.A to the BizTalk Server. 2.       BizTalk transforms id.A to the format of the system B. This is easiest part and I will not focus on this kind of transformations in the following text. The message on the picture is still id.A but it is in slightly different format, that’s why it is changing in color. BizTalk sends id.A to the system B. 3.       The system B creates the entity on its side. But it uses different id-s for entities, these id-s are id.B. System B saves id.A+id.B. System B sends the message id.A+id.B back to the BizTalk. 4.       BizTalk sends the message id.A+id.B to the system A. 5.       System A saves id.A+id.B. Why both id-s should be saved on both systems? It was one of the next requirements. Users of both systems have to know the systems are in sync or not in sync. Users working with the entity on the system A can see the id.B and use it to switch to the system B and work there with the copy of the same entity. The decision was to store the pairs of entity id-s on both sides. If there is only one id, the entities are not in sync yet (for the Create operation). Third draft Next problem was the reliability of the synchronization. The synchronizing process can be interrupted on each step, when message goes through the wires. 
    It can be a communication problem, a timeout, a temporary shutdown of one of the systems, or the second system may be unable to synchronize for some internal reason. There were several potential problems that prevented enclosing the whole synchronization process in one transaction. The decision was to restart the whole sync process if it did not finish (in case of an error). For this purpose an additional service was created; let's call it the Resync service. We still keep the id pairs in both systems, but only for fast access, not for the synchronization process itself. For synchronizing, these ids are now kept in one main place, the Resync service database. The Resync service keeps records as:
    - Id.A
    - Id.B
    - Entity.Type
    - Operation (Create, Update, Delete)
    - IsSyncStarted (true/false)
    - IsSyncFinished (true/false)

    The example now looks like this:
    1. System A creates id.A. id.A is saved on A. id.A is sent to BizTalk.
    2. BizTalk sends id.A to the Resync service and to B. id.A is saved on the Resync side.
    3. System B creates id.B. id.A+id.B are saved on B. id.A+id.B are sent to BizTalk.
    4. BizTalk sends id.A+id.B to the Resync service and to A. id.A+id.B are saved on the Resync side.
    5. id.A+id.B are saved on A. The Resync service changes the IsSyncStarted and IsSyncFinished flags accordingly.

    The Resync service implements three main methods:
    - Save(id.A, Entity.Type, Operation)
    - Save(id.A, id.B, Entity.Type, Operation)
    - Resync()

    The two Save() methods are used to save ids to the service storage; see steps 2 and 4 in the example above. What about Resync()? It is the method that finishes interrupted synchronization processes. While Save() is started by a trigger event, Resync() works as an independent process: it periodically scans the Resync storage to find "unfinished" records, then restarts their synchronization processes. It tries to synchronize them several times, then gives up.

    One more thing: both systems A and B must tolerate duplicates within one synchronizing process. Say at step 3 the system B was not able to send id.A+id.B back. The Resync service must restart the synchronization process, which will send id.A to B a second time. In this case system B must simply send back the already-created id.A+id.B pair again, without errors. That is what "tolerate duplicates" means.

    Fourth draft

    The next draft was created purely for aesthetics. As always happens, aesthetics gave a significant performance gain to the whole system. First came the stupid question: why do we need this additional service with its special database? Can we just get BizTalk to do what this Resync() does? So the Resync orchestration does the same thing as the Resync service. It is started by the id.A message and finished by the id.A+id.B message; the first works as a Start message, the second as a Finish message.

    Here is a diagram of the whole process without errors. It is pretty straightforward. The Resync orchestration waits for the Finish message for a specific period of time, then resubmits the id.A message. It resubmits the id.A message a specific number of times, then gives up and gets suspended. It can be resumed, in which case it starts the whole cycle again: waiting [, resubmitting [, getting suspended]], finishing.

    Tuning up

    The Resync orchestration resubmits the id.A message with a special "Resubmitted" flag. The subscription filter on the Resync orchestration includes a predicate such as (Resubmit_Flag != "Resubmitted").
    That means only the first Sync orchestration starts the Resync orchestration. The Sync orchestrations instantiated by resubmitting can finish this Resync orchestration, but they cannot start another instance of it.

    Here is a diagram where system B was inaccessible for some period of time. The Resync orchestration resubmitted the id.A message two times. Then system B returned the id.A+id.B response, and this finished the Resync execution. What is interesting here: several identical id.A messages were submitted, and only one id.A+id.B message came back. Because of this, system B and the Resync orchestration must tolerate duplicate messages. We already stated this requirement for system B; now the same requirement applies to the Resync orchestration.

    Let's assume the system B was very slow with the first response and the Resync orchestration had time to resubmit two id.A messages. System B responded not, as in the previous case, with one id.A+id.B message but with two. The first of them finished the Resync execution for that id.A. What about the second id.A+id.B? Where does it go? So we have to add one more internal requirement: the whole solution must tolerate many identical id.A+id.B messages. That is an easy task with BizTalk. I added a "SinkExtraMessages" subscriber (an orchestration with one Receive shape) that just consumes these messages and does nothing.

    Real design

    The real architecture is much more complex and interesting. In reality each system can submit several id.A messages almost simultaneously and completely unordered. There is not only the "Create entity" operation but also Update and Delete, and these operations relate to each other: an Update after a Delete does not mean the same as an Update after a Create. In reality entities relate to each other, say the Order and its Order Items, and a change to one of them can start a series of operations on the other. Moreover, the system internals are "black boxes" and we cannot predict the exact content and order of an operation series. It is worth saying that I also had to spend time managing zombie message problems; the zombies are still here, but they are not a problem now, and that is another story.

    What is interesting in the last design? One orchestration works to help another be more reliable. Why is the two-orchestration design more reliable; isn't that something strange? The Sync orchestration handles all the message exchange between the systems, which is the area where most of the errors can happen. The Resync orchestration sends and receives messages only within the BizTalk server. Is there another design? Sure: all the Resync functionality could be implemented inside the Sync orchestration. Hey guys, any other ideas?
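
    As an illustration of the Resync bookkeeping described above (my sketch in Python, not the project's actual BizTalk artifacts; the function names, the store and the retry limit are assumptions):

        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class ResyncRecord:
            id_a: str
            entity_type: str
            operation: str                 # Create, Update or Delete
            id_b: Optional[str] = None
            attempts: int = 0

            @property
            def finished(self) -> bool:    # plays the role of IsSyncFinished
                return self.id_b is not None

        store = {}                         # keyed by id.A: the "one main place"

        def save_start(id_a, entity_type, operation):
            # Save(id.A, Entity.Type, Operation) -- called at step 2
            store.setdefault(id_a, ResyncRecord(id_a, entity_type, operation))

        def save_finish(id_a, id_b):
            # Save(id.A, id.B, Entity.Type, Operation) -- called at step 4;
            # a duplicate id.A+id.B just rewrites the same value, so it is tolerated
            store[id_a].id_b = id_b

        def resync(send_id_a, max_attempts=3):
            # One periodic scan: resubmit the Start message for unfinished records,
            # giving up after max_attempts tries
            for rec in store.values():
                if not rec.finished and rec.attempts < max_attempts:
                    rec.attempts += 1
                    send_id_a(rec.id_a)    # system B must tolerate this duplicate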

  • Source Control and SQL Development – Part 3

    - by Ajarn Mark Caldwell
    In parts one and two of this series, I have been focusing specifically on the latest version of SQL Source Control by Red Gate Software. But I have been doing source-controlled SQL development for years, long before this product was available, and well before Microsoft came out with Database Projects for Visual Studio. "So, how does that work?" you may wonder. Well, let me share some of the details of how we do it where I work…

    The key to this approach is that everything is done via Transact-SQL script files: either natively written T-SQL, or generated. My preference is to write all my code by hand, which forces you to become better at your SQL syntax. But if you really prefer to use the Management Studio GUI to make database changes, you can still do that, and then use the Generate Scripts feature of the GUI to produce T-SQL scripts afterwards and store those in your source control system. You can generate scripts for things like stored procedures and views by right-clicking on the database in the Object Explorer and choosing Tasks, Generate Scripts (see figure 1). You can also do that for the CREATE scripts for tables, but that does not work when you have a table that is already in production and you need to make just a simple change, such as adding a new column or index. In this case, you can use the GUI to make the table changes and then, instead of clicking the Save button, click the Generate Change Script button. Then, once you have saved the change script, go ahead and execute it on your development database to actually make the change. I believe that it is important to actually execute the script rather than just click the Save button, because this is your first test that your change script is working and you didn't somehow lose a portion of the change.

    As you can imagine, all this generating of scripts can get tedious and tempting to skip entirely, so again, I would encourage you to just get in the habit of writing your own Transact-SQL code, and then it is just a matter of remembering to save your work, just like you are in the habit of saving changes to a Word or Excel document before you exit the program.

    So, now that you have all of these script files, what do you do with them? Well, we organize ours into folders labeled ChangeScripts, Functions, Views, and StoredProcedures, and those folders are loaded into our source control system. ChangeScripts contains all of the table and index changes, and anything else that is basically a one-time-only execution. Of course you want to write your scripts with qualifying logic so that if a script were accidentally run more than once in a database, it would not crash or corrupt anything; but these scripts are really intended to be run only once in a database.

    Once you have your initial set of scripts loaded into source control, making changes, such as altering a stored procedure, becomes a simple matter of checking out your CREATE PROCEDURE* script, editing it in SSMS, saving the change, executing the script in order to effect the change in your database, and then checking the script back in to source control. Of course, this is where the lack of integration for source control systems within SSMS becomes an irritation, because it means that in addition to SSMS, I also have my source control client application running to do the check-out and check-in. And when you have 800+ procedures like we do, it can be quite tedious to locate the procedure I want to change in source control, check it out, then locate the script file in my working folder, open it in SSMS, make the change, save it, and then go back to source control to check in. Granted, it is not nearly as burdensome as, say, losing your source code and having to rebuild it from memory, or losing the audit trail that good source control systems provide. It is worth the effort, and this is how I have been doing development for the last several years.

    Remember that everything that SQL Server Management Studio does in modifying your database can also be done in plain Transact-SQL code, and this is what you are storing. And now I have shown you how you can do it all without spending any extra money. You already have source control, or can get free, open-source source control systems (almost seems like an oxymoron, doesn't it), and of course Management Studio is free with your SQL Server database engine software. So, whether you spend the money on tools to make it easier or not, you now have no excuse for not using source control with your SQL development.

    * In our current model, the scripts for stored procedures and similar database objects are written with an IF EXISTS…DROP… at the top, followed by the CREATE PROCEDURE… section, and that followed by a section that assigns permissions. This allows me to run the same script regardless of whether the procedure previously existed in the database. If the script were only an ALTER PROCEDURE, it would fail the first time that procedure was deployed to a database, unless you wrote other code to stub it in if it did not exist. There are a few different ways you could organize your scripts for deployment, each with its own trade-offs, but I think it is absolutely critical that whichever way you organize things, you ensure that the same script is run throughout the deployment cycle, and do not allow customizations to creep in between TEST and PROD. If you do, then you have broken the integrity of your deployment process, because what you deployed to PROD was not exactly the same as what was tested in TEST, so you have effectively released untested code into PROD.
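
    As a concrete illustration of the footnote's pattern (a T-SQL sketch with made-up object names, not a script from the article): a guarded change script that an accidental second run cannot harm, and a re-runnable stored procedure script in the IF EXISTS…DROP…CREATE…GRANT shape:

        -- ChangeScripts style: guarded so an accidental second run does no harm
        IF NOT EXISTS (SELECT * FROM sys.columns
                       WHERE object_id = OBJECT_ID('dbo.Customer') AND name = 'LoyaltyCode')
        BEGIN
            ALTER TABLE dbo.Customer ADD LoyaltyCode varchar(20) NULL;
        END
        GO

        -- StoredProcedures style: drop, create, then assign permissions,
        -- so the identical script can run in DEV, TEST and PROD
        IF EXISTS (SELECT * FROM sys.objects
                   WHERE object_id = OBJECT_ID('dbo.GetCustomer') AND type = 'P')
            DROP PROCEDURE dbo.GetCustomer;
        GO
        CREATE PROCEDURE dbo.GetCustomer (@CustomerID int)
        AS
            SELECT CustomerID, Name, LoyaltyCode
            FROM dbo.Customer
            WHERE CustomerID = @CustomerID;
        GO
        GRANT EXECUTE ON dbo.GetCustomer TO MyAppRole;
        GO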

  • Evaluating mean and std as simulations are added

    - by Luca Cerone
    I have simulations that evaluate a certain value X. I run the simulations several times and save the value of X in a vector V. When all the runs have finished I evaluate the mean and standard deviation of the vector V. This approach works, but it implies saving all the values of X. As my computer is quite old and has limited RAM, I was wondering if there is a way to update the mean value M and the standard deviation S, knowing the value of X at the (n+1)-th run and the values of M and S after n runs. How can I update the mean value and the standard deviation as simulations are added to the set? Please note that this is just a conceptual example; I don't save only one number X but thousands at each simulation, so I really have problems running a large number of runs if I have to keep all the past values in memory.
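
    For reference, the standard technique here is Welford's online algorithm, which keeps only a count, a running mean, and a running sum of squared deviations per tracked value. A minimal sketch in Python (my illustration, not code from the question); with thousands of X values per simulation you would keep one RunningStats per tracked quantity:

        class RunningStats:
            """Welford's online algorithm: O(1) memory per tracked value."""
            def __init__(self):
                self.n = 0
                self.mean = 0.0
                self.m2 = 0.0                        # sum of squared deviations

            def add(self, x):
                self.n += 1
                delta = x - self.mean
                self.mean += delta / self.n          # M(n+1) = M(n) + (x - M(n))/(n+1)
                self.m2 += delta * (x - self.mean)   # uses both the old and new mean

            @property
            def std(self):
                # sample standard deviation; divide by self.n instead for population
                return (self.m2 / (self.n - 1)) ** 0.5 if self.n > 1 else 0.0

        stats = RunningStats()
        for x in (1.0, 2.0, 4.0):
            stats.add(x)
        print(stats.mean, stats.std)                 # 2.333..., 1.527...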

  • Why won't my Internet stay connected?

    - by Aubrey
    I recently updated to Ubuntu 12.04, from Ubuntu 10.04, and my Internet connection, a Novatel USB760 Verizon Wireless mobile broadband modem, will connect about a minute after I start the computer up. But after a while it will disconnect, and the only way to reconnect it is to restart the computer; even then, sometimes it won't work. I've also noticed that since I upgraded, the computer will randomly enter Power Save mode and then tell me to log back in. I've done nothing to provoke it other than using the Internet. I was wondering: could the entering into Power Save mode and the Internet disconnecting somehow be connected? I've updated the computer every time it asks me to do so, but it doesn't seem to be helping. If Ubuntu 10.04 were still going to be supported next year, I would downgrade, but I have no choice other than to stick with 12.04. Any help would be greatly appreciated! Thanks!

  • Code Analysis Rule Sets in Visual Studio 2010

    - by Anthony Trudeau
    Microsoft Visual Studio 2010 introduces the concept of rule sets when configuring code analysis. This is a valuable change from Visual Studio 2008 that I didn't even realize I wanted. Visual Studio 2008 by default selected all rules, and then you had to remove rules on an item-by-item basis.

    The rule sets fall into logical groups including "Microsoft All Rules", "Microsoft Basic Correctness Rules", "Microsoft Security Rules", et al. Within the project properties you can select one rule set, multiple rule sets, or you can define your own rule set based upon another. Selecting a single rule set is obviously the easiest option. The default rule set when you create a new project is "Microsoft Minimum Recommended Rules". However, in my opinion the recommended rules are just too permissive. For that reason you might want to change your rule set to "Microsoft All Rules" until you get around to creating your own rule set; alternately, you can select multiple rule sets, which is an option from the rule set combo box. The Visual Studio documentation has comprehensive help on what is contained within the rule sets.

    Creating your own rule set is easy, if not obvious. You need to start a rule set from an existing rule set. To get started, select a rule set in the combo box within the Code Analysis tab of the project properties. I selected "Microsoft All Rules" for my rule set, but you may find it easier to start with "Microsoft Minimum Recommended Rules" if your rules are on the more permissive side.

    Once your rule set is selected, click the Open button. This will display a dialog that is similar in composition to the rules selection from Visual Studio 2008. Browsing through the tree view you can select or deselect individual rules within their categories, and you can indicate that rules are flagged as errors instead of the default, which is a warning. A nice touch to the form is that you get a help pane when you select an individual rule; that helped me considerably when I first configured my rule set.

    Once you have finished selecting your rules, click the Save tool button, specify a location and name, and click the Save button on the Save As dialog. Once you're back on the Code Analysis tab, choose the Browse option within the combo box and open the file you just created.
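
    For reference, the saved rule set is just an XML file. A minimal sketch of roughly what the saved file looks like (the rule IDs and actions here are illustrative choices, not taken from the article):

        <?xml version="1.0" encoding="utf-8"?>
        <RuleSet Name="My Team Rules" Description="All Rules with two adjustments" ToolsVersion="10.0">
          <IncludeAll Action="Warning" />
          <Rules AnalyzerId="Microsoft.Analyzers.ManagedCodeAnalysis" RuleNamespace="Microsoft.Rules.Managed">
            <Rule Id="CA1062" Action="Error" />  <!-- flag this rule as an error -->
            <Rule Id="CA1709" Action="None" />   <!-- switch this rule off -->
          </Rules>
        </RuleSet>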

  • How can I view an R32G32B32 texture?

    - by bobobobo
    I have a texture with R32G32B32 floats. I create this texture in-program in D3D11, using DXGI_FORMAT_R32G32B32_FLOAT. Now I need to see the texture data for debug purposes, but it will not save to anything but DDS, showing the error in the debug output: "Can't find matching WIC format, please save this file to a DDS". So I write it to DDS, but I can't open it now! The DirectX Texture Tool says "An error occurred trying to open that file". I know the texture is working because I can read it on the GPU and the colors seem correct. How can I view an R32G32B32 texture in an image viewer?

  • Domain Models (PHP)

    - by Calum Bulmer
    I have been programming in PHP for several years and have, in the past, adopted methods of my own to handle data within my applications. I have built my own MVC and have a reasonable understanding of OOP within PHP, but I know my implementation needs some serious work.

    In the past I have used an is-a relationship between a model and a database table. I now know, after doing some research, that this is not really the best way forward. As far as I understand it, I should create models that don't really care about the underlying database (or whatever storage mechanism is to be used) but only care about their actions and their data. From this I have established that I can create a model of, let's say for example, a Person, and this Person object could have some children (human children) that are also Person objects held in an array (with addPerson and removePerson methods, accepting a Person object). I could then create a PersonMapper that I could use to get a Person with a specific 'id', or to save a Person. This mapper could look up the relationship data in a lookup table and create the associated child objects for the Person that has been requested (if there are any), and likewise save the data in the lookup table on the save command.

    This is now pushing the limits of my knowledge… What if I wanted to model a building with different levels and different rooms within those levels? What if I wanted to place some items in those rooms? Would I create a class for building, level, room and item with the following structure?

    - A building can have 1 or many Level objects held in an array
    - A level can have 1 or many Room objects held in an array
    - A room can have 1 or many Item objects held in an array

    ...and mappers for each class, with higher-level mappers using the child mappers to populate the arrays (either on request of the top-level object, or lazy-loaded on request)? This seems to tightly couple the different objects, albeit in one direction (i.e. a floor does not need to be in a building, but a building can have levels). Is this the correct way to go about things?

    Within the view I want to show a building with an option to select a level, then show the level with an option to select a room, etc., but I may also want to show a tree-like structure of items in the building and which level and room they are in. I hope this makes sense. I am just struggling with the concept of nesting objects within each other when the general concept of OOP seems to be to separate things. If someone can help it would be really useful. Many thanks.
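
    To sketch the building/level shape being described (my illustration, written in Python for brevity since the same structure translates directly to PHP; the db object and its query/get methods stand in for whatever storage gateway is used):

        class Level:
            def __init__(self, number):
                self.number = number
                self.rooms = []          # Room objects would hang here, same pattern

        class Building:
            def __init__(self, name):
                self.name = name
                self.levels = []

        class LevelMapper:
            def __init__(self, db):
                self.db = db             # the domain objects never see the storage

            def find_for_building(self, building_id):
                rows = self.db.query("level", building_id=building_id)
                return [Level(r["number"]) for r in rows]

        class BuildingMapper:
            def __init__(self, db, level_mapper):
                self.db = db
                self.level_mapper = level_mapper

            def find(self, building_id):
                row = self.db.get("building", building_id)
                building = Building(row["name"])
                # the higher-level mapper delegates to the child mapper
                building.levels = self.level_mapper.find_for_building(building_id)
                return building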

  • How To Use Bash History to Improve Your Command-Line Productivity

    - by YatriTrivedi
    Whether you’re new to the Linux command-line or you’re a seasoned veteran, these tricks will help turn your text-based meanderings into full-blown marathons. Save time, speed up your productivity, and enhance your Linux-Fu, all at once!

  • OWB 11gR2 – OMB and File Editing

    - by David Allan
    Here we will see how we can use the IDE for editing OMB scripts. The 11gR2 release is based on the common Oracle platform IDE, also used by JDeveloper, which comes with a bunch of standard behavior for editing and rendering code. One of the lesser known things is that if you drop a text file into OWB you can edit it. So you can drop your tcl scripts right into OWB and edit them in place, and you don't need another IDE like Eclipse just for this task.

    Cool, so you have the file here. There may be no line numbers; you can toggle line numbers on by right-clicking in the gutter. If we edit the file within the OWB IDE, the save is a little different from normal. OWB doesn't normally manipulate files, so things like Ctrl-S save the OWB objects, but if you edit a file, closing it will ask whether you want to save it – check it out.

    Now we enter the realm of 'he who dares'… Note that the IDE doesn't know about tcl files out of the box, so you see above there is no syntax highlighting. The code type is identified by the extension: .java is Java, .html is HTML, etc. With OWB, the OMB scripts are tcl, and we usually put the .tcl extension on these files. One trick to get syntax highlighting is simply to rename the file to have a .java suffix; then all of a sudden we get highlighting. See the illustration here where, side by side, we see the file with a .java extension and a .tcl extension. Pretending to be .java is not ideal, but it gets us something more useful than Notepad. We can then change the syntax highlighting from the Tools > Preferences option so that we get Eclipse-like highlighting within the IDE, albeit using a little tweak on the file names… This might be useful if you are doing any kind of heavy-duty OMB script development and just want a single IDE. The OMBPlus panel is then at hand for executing and testing it out.

  • Setting up Ubuntu Server as a Router with DHCPD and 3 Ethernet devices

    - by cengbrecht
    My configuration: Ubuntu 12.04, DHCP3-server, eth0, eth1, eth2. (Edit: removed br0 and br1.) eth0 is the external connection; eth1 and eth2 are the internal network, meant to be separate networks for students and teachers, respectively. What I would like is the internet from the external device bridged to devices 1 and 2, with the DHCP server controlling the two internal devices. It is already working with DHCP; the part I am stuck on is the bridging for internet. I have set up a script that I found here: Router, with the original script he linked here: Ubuntu Router Guide.

        echo -e "\n\nLoading simple rc.firewall-iptables version $FWVER..\n"
        IPTABLES=/sbin/iptables
        #IPTABLES=/usr/local/sbin/iptables
        DEPMOD=/sbin/depmod
        MODPROBE=/sbin/modprobe
        EXTIF="eth0"
        INTIF="eth1"
        INTIF2="eth2"
        echo "   External Interface:  $EXTIF"
        echo "   Internal Interface:  $INTIF"
        echo "   Internal Interface:  $INTIF2"
        EXTIP=`ifconfig $EXTIF | grep 'inet addr:' | sed 's#.*inet addr\:\([0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\).*#\1#g'`
        echo "   External IP: $EXTIP"
        #======================================================================
        #== No editing beyond this line is required for initial MASQ testing ==

    The rest of the script below this is as-is. I can get an IP from the eth1 and eth2 devices, and my computer can see them and they can see it; however, internet is not being passed through. If you need more information please just let me know.

    EDIT: So I had a 255.255.254.0 network; I believe that was causing the issue. Not sure if it will matter on the second card, I will test later. After changing the subnet to 255.255.255.0, pings will pass through; however, I cannot get DNS requests to pass. My new config for the firewall rules:

        # /etc/iptables.up.rules
        # Generated by iptables-save v1.4.12 on Wed Nov 28 19:43:28 2012
        *mangle
        :PREROUTING ACCEPT [39:4283]
        :INPUT ACCEPT [39:4283]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [12:4884]
        :POSTROUTING ACCEPT [13:5145]
        COMMIT
        # Completed on Wed Nov 28 19:43:28 2012
        # Generated by iptables-save v1.4.12 on Wed Nov 28 19:43:28 2012
        *filter
        :FORWARD ACCEPT [0:0]
        :INPUT ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        -A FORWARD -j LOG
        -A FORWARD -m state -i eth1 -o eth0 --state NEW,ESTABLISHED,RELATED -j ACCEPT
        -A FORWARD -m state -i eth2 -o eth0 --state NEW,ESTABLISHED,RELATED -j ACCEPT
        -A FORWARD -m state -i eth0 -o eth1 --state NEW,ESTABLISHED,RELATED -j ACCEPT
        -A FORWARD -m state -i eth0 -o eth2 --state NEW,ESTABLISHED,RELATED -j ACCEPT
        COMMIT
        # Completed on Wed Nov 28 19:43:28 2012
        # Generated by iptables-save v1.4.12 on Wed Nov 28 19:43:28 2012
        *nat
        :INPUT ACCEPT [0:0]
        :PREROUTING ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        :POSTROUTING ACCEPT [0:0]
        -A POSTROUTING -o eth0 -j MASQUERADE
        -A POSTROUTING -o eth0 -j SNAT --to-source 192.168.1.25
        COMMIT
        # Completed on Wed Nov 28 19:43:28 2012

    Not sure what else you may need, but I am using Webmin to control the server (needed so the operators on site know how to use it). If you could explain it as standard CLI commands, or as edits to this file directly, then we should be OK. :) And thanks again Erik, I do believe your edits did help.
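
    For reference (my sketch, not part of the original question), the canonical minimal iptables setup for sharing eth0 with one internal interface looks roughly like this; you would repeat the FORWARD pair for eth2. Also, when pings by IP work but name lookups fail, the usual suspect is the DNS server address that dhcpd hands to the clients rather than the firewall itself.

        # allow the kernel to route between interfaces
        sysctl -w net.ipv4.ip_forward=1

        # rewrite outgoing traffic to the external address
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

        # let replies back in, and let the internal network out
        iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT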

  • Redgate ANTS Performance Profiler

    - by Jon Canning
    Seemingly forever I've been working on a business idea: a REST API delivering content to mobiles, and I've never really had much idea about its performance. Yes, I have a suite of unit tests and integration tests, but these only tell me that it works, not how well it works. I was also about to embark on a major refactor, swapping the database from MongoDB to RavenDB, and was curious to see if that impacted performance at all. So I needed a profiler that supported IIS Express that I could run my integration tests against, and Google gave me: http://www.red-gate.com/supportcenter/content/ANTS_Performance_Profiler/help/7.4/app_iise

    Excellent. Following the above guide, an instance of IIS Express is launched, as is Internet Explorer. The latter eventually becomes annoying (I would like to decide whether I want a browser opened), but thankfully the guide is wrong in that it can be closed and profiling will continue. So I ran my tests, stopped profiling, and was presented with a call tree listing the endpoints called and allowing me to drill down to the source code beneath.

    Although useful and fascinating, this wasn't what I was expecting to see; I was after the method timings from the entire test suite. Switching Show to Methods Grid presented me with a list of my methods, with the slowest lit up in red at the top. Marvellous. I did find that if you switch to Methods Grid before the call tree has loaded, you do not get the red warnings.

    StructureMap was very busy, and next on the list was a request filter that I didn't expect to be so overworked. Highlighting it, the source code was presented to me in the bottom window with timings and a nice red indicator to show me where to look. Oh horror, that reflection hack I put in months ago; I'd forgotten all about it. It was calling Validate<T>(), which in turn was resolving a validator from StructureMap. Note to self: use //TODO: when leaving smelly code lying around.

    Before refactoring, remember to Save Profile Results from the File menu. Annoyingly you are not prompted to save your results when exiting, and using Save Project will only leave you thankful that you have version control and can go back in time to run your tests again. Having implemented StructureMap's ForGenericType, I ran my tests again and: win. Thank you, ANTS. (What does ANTS stand for, BTW?)

    There's definitely room in my toolbox for a profiler; what started out as idle curiosity actually solved a potential problem. When presented with a new codebase I can see enormous benefit in getting an overview of the pipeline from the call tree before drilling into the code, and as a sanity check before release it gives a little more reassurance that you've done your best, and shows you exactly where to look if you haven't. Next I'm going to profile a load test.

  • MSFT's new trick to promote IE9: kill IE6 first

    - by anirudha
    Every developer knows the issues with developing for IE6, and the more they know, the more frustrated they get at the time spent making applications cross-browser compatible with IE6. Not long ago MSFT ran a "save IE6" campaign; you can find the reference at http://blogs.msdn.com/b/anna/archive/2009/04/01/save-internet-explorer-6.aspx and the website here: http://www.saveie6.com/. They really made a joke; see what they wrote on the page. And why was the website made in PHP, when they could have made it in ASP.NET or any other technology that reflects Microsoft technology? See here: http://www.saveie6.com/compare.php

    "High security (many updates)": you can read Wikipedia to find out how secure IE6 really is; I would say IE6 is very easy to hack. Wikipedia covers it here: http://en.wikipedia.org/wiki/Internet_Explorer_6 and for more about the security issues look here: http://www.google.co.in/webhp?hl=en#sclient=psy&hl=en&site=webhp&q=ie6+security+issues

    "Lightweight (no support for silly PNG transparency, etc)": they call PNG silly, but tell me about a better format on the internet; there is no better option than PNG or SVG.

    "More screen space thanks to no tabs": they say this nonsense without thinking. If they really cared about screen space, why did they put tabs in IE7, IE8 and IE9?

    Conclusion: the IE team researched how to promote IE9 so it could beat Chrome and Firefox. IE9 does not have anything comparable to the customization, plug-ins, add-ons, personas, themes and many other things that Chrome and Firefox provide; perhaps IE is simply outdated, even though everyone there writes these days about what IE9 has and how its performance is better. The main problem for IE is IE6: many developers hate it because so much of their time goes into making sites cross-browser compatible. In 2009 they still had nothing like the IE9 they have today, so they made a campaign to save IE6, and the list they made is a joke; it claims everything in IE6 is perfect even though everyone knows the truth, listing IE6 as high security. In 2011 there is a problem for IE9's promotion, and it is called IE6. With developers hating IE6, how can IE9 be promoted well? Destroying IE6 is the only option to promote IE9 better. So you can see they made two different campaigns, each the opposite of the other. Well, how can we believe in IE9? Thanks for reading this post. What do you think about it? If you have an idea or feedback, report it.

  • The Best Websites for Creating and Sending Free eCards

    - by Lori Kaufman
    With the holiday season upon us, it’s time to pull out the holiday card list and get writing. However, how would you like to save some money this year and also help save the environment? We’ve assembled a list of websites that allow you to create electronic cards (eCards) you can send (using email, Facebook, or other electronic delivery methods) to friends and family for the holidays, or for any other occasion. Each site listed provides free eCards you can send or has a free option, as well as a paid option.

  • Walmart's Mobile Self-Checkout

    - by David Dorf
    Reuters recently reported that Walmart was testing an iPhone-based self-checkout at a store near its headquarters. Consumers scan items as they're placed in the physical basket, then the virtual basket is transferred to an existing self-checkout station where payment is tendered. A very solid solution, but not exactly original.

    Before we go further, let's look at the possible cost savings for Walmart. According to the article: "Pushing more shoppers to scan their own items and make payments without the help of a cashier could save Wal-Mart millions of dollars, Chief Financial Officer Charles Holley said on March 7. The company spends about $12 million in cashier wages every second at its Walmart U.S. stores."

    Um, yeah. Using back-of-the-napkin math, I calculated Walmart's cashiers are making $157k per hour. A more accurate statement would be saving $12M per year for each second saved on the average transaction time. So if this self-checkout approach saves 2 seconds per transaction on average, Walmart would save $24M per year on labor. Maybe. Sometimes that savings will be used to do other tasks in the store, so it may not directly translate to fewer employees.

    When I saw this approach demonstrated in Sweden, there were a few differences, which may or may not be in Walmart's plans. First, the consumers were identified by their loyalty card. In order to offset the inevitable shrink, retailers need to save on labor but also increase basket size, typically via in-aisle promotions. As they scan items, retailers should target promos, and that's easier to do if you know some shopping history. Last I checked, Walmart had no loyalty program.

    Second, at the self-checkout station consumers were randomly selected for an audit in which they must re-scan all the items, just like at a typical self-checkout. If you were found to be stealing, your ability to use the system could be revoked. That's a tough one in the US, especially when the system goes wrong, either by mistake or by lying. At least in my view, the Swedes are a bit more trustworthy than the people of Walmart.

    So while I think the idea of mobile self-checkout has merit, perhaps it's not right for Walmart.

  • Webcast Series: Accelerate Business-Critical Database Deployments with Oracle Optimized Solutions

    - by ferhat
    Join us for this two-part Webcast series and learn how to safely consolidate business-critical databases and deliver quantifiable benefits to the business:

    - Save up to 75% in operational and acquisition costs
    - Save millions of dollars consolidating legacy infrastructure
    - Leverage best practices from thousands of customer environments
    - Increase end user productivity with 75% faster time to operations and 4x faster throughput

    The Oracle Optimized Solution for Oracle Database provides extensive guidelines for architecting and deploying complete database solutions that deliver superior performance and availability while minimizing cost and risk. Oracle's world-class engineering teams work together to define these optimal architectures using Oracle's powerful SPARC M-Series and SPARC T-Series servers together with Oracle Solaris and Oracle's SAN, NAS, and flash-based storage to run the industry-leading Oracle Database. Quite simply, the Oracle Optimized Solution for Oracle Database makes it easier for you to deliver and manage business-critical database environments that are fast, secure and cost-effective.

    Available On-Demand:
    PART 1: Why Architecture Matters When Deploying Business-Critical Databases
    PART 2: How To Consolidate Databases Using Oracle Optimized Solutions

    Presented by: Lawrence McIntosh, Principal Enterprise Architect, Oracle Optimized Solutions; Ken Kutzer, Principal Product Manager, Infrastructure Solutions, Oracle

  • Class instance clustering in an object reference graph for multi-entry serialization

    - by Juh_
    My question is about the best way to cluster a graph of class instances (i.e. objects, the graph nodes) linked by object references (the directed edges of the graph) around specifically marked objects. To explain my question better, let me explain my motivation. I currently use a moderately complex system to serialize the data used in my projects:

    - "Marked" objects have a specific attribute which stores a "saving entry": the path to an associated file on disc (but it could be done for any storage type providing a suitable interface).
    - Those objects can then be serialized automatically (e.g. obj.save()).
    - The serialization of a marked object 'a' implicitly contains all objects 'b' that 'a' has a reference to, directly (a.b = b) or indirectly (a.c.b = b for some object 'c').

    This is very simple and basically assigns specific storage entries to specific objects. I then have "container" type objects that:

    - can be serialized similarly (in fact they are, or can be, "marked");
    - don't serialize in their storage entries the "marked" objects they reference directly: if a and a.b are both marked, a.save() calls b.save() and stores a.b = storage_entry(b).

    So if I serialize 'a', it automatically serializes all objects that can be reached from 'a' through the object reference graph, possibly in multiple entries. That is what I want, and it usually provides the functionality I need. However, it is very ad hoc and there are some structural limitations to this approach: the multi-entry saving only works through direct connections in "container" objects, and there are situations with undefined behavior, such as when two "marked" objects 'a' and 'b' both have a reference to an unmarked object 'c'. In this case my system stores 'c' in both 'a' and 'b', making an implicit copy which not only doubles the storage size but also changes the object reference graph after reloading.

    I am thinking of generalizing the process. Apart from the practical questions of implementation (I am coding in Python, and use Pickle to serialize my objects), there is a general question about the way to attach (cluster) unmarked objects to marked ones. So, my questions are:

    - What are the important issues that should be considered? Basically, why not just use any graph traversal algorithm with an "attach to last marked node" behavior?
    - Is there any work done on this problem, practical or theoretical, that I should be aware of?

    Note: I added the tag graph-database because I think the answer might come from that field, even if the question is not about databases.
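
    To make the "attach to a marked node" idea concrete, here is a small Python sketch (mine, not the asker's code): walk the reference graph from each marked object, stop at other marked objects, and report unmarked objects reachable from more than one marked entry, since those are exactly the ones that today end up implicitly copied. The refs callback standing in for attribute traversal is an assumption of the sketch:

        from collections import deque

        def cluster(marked, refs):
            """marked: list of marked objects; refs(obj) -> objects obj references.
            Returns {id(owner): set of objects stored in that owner's entry} and
            the set of object ids reachable from more than one marked owner."""
            marked_ids = {id(m) for m in marked}
            owner = {}                        # id(obj) -> marked object that claimed it
            shared = set()                    # ids that would be implicitly copied
            clusters = {id(m): {m} for m in marked}
            for m in marked:
                queue = deque([m])
                while queue:
                    for child in refs(queue.popleft()):
                        cid = id(child)
                        if cid in marked_ids:
                            continue          # stored as a reference to its own entry
                        if cid in owner:
                            if owner[cid] is not m:
                                shared.add(cid)
                            continue
                        owner[cid] = m
                        clusters[id(m)].add(child)
                        queue.append(child)
            return clusters, shared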

  • ClearTrace Shows Execution History

    - by Bill Graziano
    The latest release of ClearTrace (Build 38) now shows the execution history of a particular statement. You'll need to save the trace files to a trace group instead of just using the default. That's as easy as typing something into the trace group name when you upload the trace; I usually put the server name in this field. Build 38 also re-enables support for statement-level events. If your trace includes RPC:StmtCompleted or SQL:StmtCompleted events, those will be processed and saved. In the results tab you can choose to view statement-level or batch-level events. Please note that saving statement-level events in a trace can generate HUGE trace files very quickly.

  • I have a stacktrace and limit of 250 characters for a bug report

    - by George Duckett
    I'm developing an Xbox indie game, and as a last resort I have a try...catch encompassing everything. At this point, if an exception is raised I can get the user to send me a message through the Xbox; however, the limit is 250 characters. How can I get the most value out of my 250 characters? I don't want to do any encoding/compressing, at least initially; any solution to this problem could be compressed as a second step anyway. I'm thinking of doing things like turning this:

        at EasyStorage.SaveDevice.VerifyIsReady()
        at EasyStorage.SaveDevice.Save(String containerName, String fileName)

    into this (dropping the repeated namespace/class and the method parameter names):

        at EasyStorage.SaveDevice.VerifyIsReady()
        at ..Save(String, String)

    Or maybe even including only the innermost method, then only line numbers up the stack, etc. TL;DR: Given an exception with a stack trace, how would you get the most useful debugging information out of 250 characters? (It will be a .NET exception/stacktrace.)
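
    To illustrate the kind of trimming described, here is a rough sketch (written in Python for brevity rather than C#; the regex and the trimming rules are my assumptions, and only the 250-character budget and the sample frames come from the question):

        import re

        def shorten(trace: str, limit: int = 250) -> str:
            out = []
            last_prefix = None
            for frame in (f.strip() for f in trace.splitlines() if f.strip()):
                # "at Namespace.Class.Method(String a, Int32 b)"
                m = re.match(r"at (.+)\.([^.(]+)\((.*)\)", frame)
                if not m:
                    out.append(frame)
                    continue
                prefix, method, args = m.groups()
                arg_types = ", ".join(a.split()[0] for a in args.split(",") if a.strip())
                # collapse a repeated namespace.class into ".."
                prefix_out = ".." if prefix == last_prefix else prefix
                last_prefix = prefix
                out.append("at %s.%s(%s)" % (prefix_out, method, arg_types))
            return "\n".join(out)[:limit]

        trace = ("at EasyStorage.SaveDevice.VerifyIsReady()\n"
                 "at EasyStorage.SaveDevice.Save(String containerName, String fileName)")
        print(shorten(trace))
        # at EasyStorage.SaveDevice.VerifyIsReady()
        # at ..Save(String, String)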

  • Image Collector Rips Web Page Images to Your Dropbox Account

    - by Jason Fitzpatrick
    Chrome: Image Collector is a simple Chrome extension that rips the images on the page you’re visiting to your Dropbox (or Google Drive) account. Just click the icon, uncheck any images you don’t want it to download, and click save. You can, technically, modify the script to download the images directly to your hard drive, but modifying it was a bit of a hassle and the default save-to-Dropbox action is so smooth we saw little reason to do so. Hit up the link below to grab a free copy. Image Collector [via Freeware Genius]

  • Load Texture From Image Content In Runtime

    - by Austin Brunkhorst
    Basically I wrote a world editor for a game I'm working on. Looking ahead, I was brainstorming ways to save the created world, including the tile-sets (this game will rely on a tile engine). I was hoping to save the image data of each tile-set in the same file containing the tile positions, etc., and load the image data into a Texture with XNA. Is it possible? Something like this is what I'm going for:

        Texture2D tileset = Content.LoadFromString<Texture2D>("png tileset data");

  • Drag and drop feature for a website

    - by gpuguy
    I have to design a website that will have drag-and-drop features for creating an e-card: you select items from a toolbox and drag and drop them onto the card area. Once you have completed the design, you can publish the e-card on the web by clicking the "Save and publish" button. What are the possible technologies for implementing this feature? The requirement is that the application should not degrade the performance of the website, and should not take much time publishing once the user clicks "Save and publish".

  • Saving a list of points into a text file

    - by dylanisawesome1
    I recently posted a question about this, but was not really sure where to go. I've made some progress, and have generated some simple noise here: http://pastie.org/5408655. That works well enough for me, but I would really like to be able to save the points into an ASCII text file. Currently it's formatted so that something like this: http://pastie.org/5409311 would create a square. I need to save in this format, with the points (and the lines connecting them) generated by the method above. Essentially, I need to write the array of points created in the first example to a text file formatted like the second example.
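
    The pastie links define the exact layout, so the Python sketch below only illustrates the general shape of such a writer; the "count, then x y pairs, then index pairs for the connecting lines" layout is my assumption, not the actual format from the second pastie:

        def save_points(path, points, edges):
            """points: list of (x, y); edges: list of (i, j) index pairs into points."""
            with open(path, "w") as f:
                f.write("%d\n" % len(points))
                for x, y in points:
                    f.write("%g %g\n" % (x, y))
                for i, j in edges:
                    f.write("%d %d\n" % (i, j))

        # a unit square drawn with its four sides
        save_points("square.txt",
                    [(0, 0), (1, 0), (1, 1), (0, 1)],
                    [(0, 1), (1, 2), (2, 3), (3, 0)])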

  • DataSets and XML - The Simplistic Approach

    One of the first ways I learned how to read XML data from external data sources was by using a DataSet's ReadXml function. This function takes the file path (or URL) of an XML document and then converts it to a DataSet. This functionality is great when you need a simple way to process an XML document. In addition, the DataSet object also offers a simple way to save data in an XML format by using the WriteXml function, which saves the current data in the DataSet to an XML file to be used later.

        DataSet ds = new DataSet();
        string filePath = "http://www.yourdomain.com/someData.xml";
        string fileSavePath = @"C:\Temp\Test.xml";
        // Read the XML document from this location
        ds.ReadXml(filePath);
        // Save the data as XML to this location
        ds.WriteXml(fileSavePath);

    I have used the ReadXml function before when consuming data from external RSS feeds to display on one of my sites. It allows me to quickly pull in data from external sites with little to no processing. Example site: MyCreditTech.com

  • OpenWorld hands on labs: HOL9558, HOL9559 and HOL9870

    - by cpauliat
    In the upcoming Oracle OpenWorld event, starting in two days in San Francisco, Olivier Canonge, Simon Coter, Eric Bezille and I will run three hands-on labs about cloud computing using Oracle VM for x86 virtualization (details below). For each lab, a detailed document (in PDF format) explains all the steps, so if you don't have the opportunity to attend the OpenWorld lab sessions, you can still run the labs at home or in the office using those documents.

    Lab 9558: Deploying Infrastructure as a Service (IaaS) with Oracle VM
    Session ID: HOL9558. Tuesday, October 2nd, 2012, 10:15am – 11:15am. Location: Marriott Marquis - Salon 14/15.
    PDF document: part1, part2, part3 (right-click and save the link for each part, then use WinZip on file .001 to extract the PDF from the 3 zip files)

    Lab 9559: Virtualize and Deploy Oracle Applications Using Oracle VM Templates
    Session ID: HOL9559. Tuesday, October 2nd, 2012, 11:45am – 12:45pm. Location: Marriott Marquis - Salon 14/15.
    PDF document

    Lab 9870: x86 Enterprise Cloud Infrastructure with Oracle VM 3.x and Sun ZFS Storage Appliance
    Session ID: HOL9870. Wednesday, October 3rd, 2012, 5:00pm – 6:00pm. Location: Marriott Marquis - Salon 14/15.
    PDF document: part1, part2, part3 (right-click and save the link for each part, then use WinZip on file .001 to extract the PDF from the 3 zip files)
