Search Results

Search found 5511 results on 221 pages for 'maintenance operations'.


  • ARTS Reference Model for Retail

    - by Sanjeev Sharma
    Consider a hypothetical scenario where you have been tasked with setting up retail operations for an electronics, daily-consumables, or luxury brand. It is very likely you will be faced with the following questions:

    1. What are the essential business capabilities that you must have in place?
    2. What are the essential business activities underpinning each of the business capabilities identified in Step 1?
    3. What are the steps that you need to perform to execute each of the business activities identified in Step 2?

    Answers to the above will drive your investments in software and hardware to enable the core retail operations. More importantly, the choices you make in responding to the above questions will have several implications in the short run and in the long run. In the short term, you will incur the time and cost of defining your technology requirements, procuring the software/hardware components, and getting them up and running. In the long term, as you grow your operations organically or through M&A, partnerships and franchise business models, you will invariably need to make more technology investments to manage the greater complexity (scale and scope) of business operations.

    "As new software applications, such as time & attendance, labor scheduling, and POS transactions, just to mention a few, are introduced into the store environment, it takes a disproportionate amount of time and effort to integrate them with existing store applications. These integration projects can add up to 50 percent to the time needed to implement a new software application and contribute significantly to the cost of the overall project, particularly if a systems integrator is called in. This has been the reality that all retailers have had to live with over the last two decades. The effect of the environment has not only been to increase costs, but also to limit retailers' ability to implement change and the speed with which they can do so." (excerpt taken from here)

    Now, one would think a lot of retailers would have already gone through the pain of finding answers to these questions, so why re-invent the wheel? Precisely so: a major effort began almost 17 years ago in the retail industry to make it less expensive and less difficult to deploy new technology in stores and at the retail enterprise level. This effort is called the Association for Retail Technology Standards (ARTS). Without standards such as those defined by ARTS, you would very likely end up experiencing the following:

    - Increased time and cost, due to resource wastage arising from re-inventing the wheel, i.e. re-creating vanilla processes from scratch, and incurring otherwise avoidable mistakes and errors by ignoring the experience of others
    - Sub-optimal process efficiency, due to a narrow, isolated view of processes that ignores process inter-dependencies, i.e. optimizing the parts but not the whole, resulting in a lack of transparency and inter-departmental finger-pointing

    Embracing ARTS standards as a blueprint for establishing, managing, or streamlining your retail operations can benefit you in the following ways:

    - Improved time-to-market, from parity with industry best-practice processes such as ARTS, thus avoiding "reinventing the wheel" for common retail processes and focusing more on customizing processes for differentiation, and from lowering integration complexity and risk with a standardized vocabulary for exchange between internal and external (i.e. partner) systems
    - Lower operating costs, by embracing the ARTS enterprise-wide process reference model for developing and streamlining retail operations holistically instead of from a narrow, silo-ed view, and by procuring IT systems in compliance with ARTS, thus avoiding IT budget marginalization

    While parity with an industry standard such as the ARTS business process model does not by itself create differentiation, it does provide a higher starting point for bridging the strategy-execution gap in setting up and improving retail operations.

    Read the article

  • Flags with deferred use

    - by Trenton Maki
    Let's say I have a system. In this system I have a number of operations I can do but all of these operations have to happen as a batch at a certain time, while calls to activate and deactivate these operations can come in at any time. To implement this, I could use flags like doOperation1 and doOperation2 but this seems like it would become difficult to maintain. Is there a design pattern, or something similar, that addresses this situation?
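
    A pattern that fits this shape of problem is Command: represent each operation as an object, let activate/deactivate calls maintain a pending set, and have the batch run drain it at the scheduled time. A minimal sketch in C# (IOperation and BatchScheduler are illustrative names, not from the question):

        using System.Collections.Generic;

        // Each operation becomes a command object instead of a doOperationN flag.
        public interface IOperation
        {
            string Name { get; }
            void Execute();
        }

        // Collects activations/deactivations as they arrive; runs them as a batch later.
        public sealed class BatchScheduler
        {
            private readonly Dictionary<string, IOperation> _pending =
                new Dictionary<string, IOperation>();

            public void Activate(IOperation op) => _pending[op.Name] = op;

            public void Deactivate(string name) => _pending.Remove(name);

            // Called at the scheduled batch time.
            public void RunBatch()
            {
                foreach (var op in _pending.Values)
                    op.Execute();
                _pending.Clear();
            }
        }

    Adding a new operation then means adding a new IOperation implementation rather than a new flag plus a new branch in the batch routine.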

    Read the article

  • Multiple vulnerabilities in libexif

    - by Umang_D
    CVE            | Description                                                                            | CVSSv2 Base Score
    CVE-2012-2812  | Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability | 6.4
    CVE-2012-2813  | Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability | 6.4
    CVE-2012-2814  | Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability | 7.5
    CVE-2012-2836  | Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability | 6.4
    CVE-2012-2837  | Numeric Errors vulnerability                                                           | 5.0
    CVE-2012-2840  | Numeric Errors vulnerability                                                           | 7.5
    CVE-2012-2841  | Numeric Errors vulnerability                                                           | 7.5
    CVE-2012-2845  | Numeric Errors vulnerability                                                           | 6.4

    Component: libexif. Product and Resolution: Solaris 11 11/11 SRU 12.4 (applies to all CVEs above).

    This notification describes vulnerabilities fixed in third-party components that are included in Oracle's product distributions. Information about vulnerabilities affecting Oracle products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • Maximizing Throughput with TVPs

    TVPs offer several performance optimization possibilities that other bulk operations do not allow, and these operations may allow for TVP performance to exceed other bulk operations by an order of magnitude, especially for a pattern where subsets of the data are frequently updated.
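
    For readers unfamiliar with table-valued parameters, the basic shape of a TVP call from ADO.NET looks like the sketch below. It assumes a user-defined table type dbo.IntList and a stored procedure dbo.UpdatePrices already exist on the server; both names, and the connection string, are illustrative:

        using System.Data;
        using System.Data.SqlClient;

        string connectionString = "Server=.;Database=Sales;Integrated Security=true"; // illustrative

        // Build an in-memory table matching the server-side type dbo.IntList (assumed).
        var ids = new DataTable();
        ids.Columns.Add("Id", typeof(int));
        ids.Rows.Add(1);
        ids.Rows.Add(2);

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.UpdatePrices", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;

            // A single structured parameter carries the whole set in one round trip.
            var p = cmd.Parameters.AddWithValue("@Ids", ids);
            p.SqlDbType = SqlDbType.Structured;
            p.TypeName = "dbo.IntList";

            conn.Open();
            cmd.ExecuteNonQuery();
        }

    The point of the pattern is that the entire set arrives server-side as a queryable table in one call, which is what enables the update-a-subset scenarios the article describes.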

    Read the article

  • How to make write operation idempotent?

    - by Morgan Cheng
    I'm reading an article about the recently released Gizzard sharding framework by Twitter (http://engineering.twitter.com/2010/04/introducing-gizzard-framework-for.html). It mentions that all write operations must be idempotent to ensure high reliability. According to Wikipedia, "Idempotent operations are operations that can be applied multiple times without changing the result." But, IMHO, in Gizzard's case, idempotent write operations should really be operations whose sequence doesn't matter. Now, my question is: how do I make a write operation idempotent? The only thing I can imagine is to have a version number attached to each write. For example, in a blog system, each blog must have a $blog_id and $content. At the application level, we always write blog content like this: write($blog_id, $content, $version). The $version is guaranteed to be unique at the application level. So, if the application first tries to set one blog to "Hello world" and then wants it to be "Goodbye", the writes are idempotent. We have these two write operations: write($blog_id, "Hello world", 1); write($blog_id, "Goodbye", 2); These two operations are supposed to change two different records in the DB. So, no matter how many times and in what sequence these two operations are executed, the results are the same. This is just my understanding. Please correct me if I'm wrong.
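
    One way to picture the asker's versioning idea: make every write carry a unique version key and insert it as its own record, so replaying a write is a no-op and the current content is simply the highest version. A sketch under those assumptions; BlogStore is illustrative and an in-memory dictionary stands in for the DB table:

        using System;
        using System.Collections.Concurrent;
        using System.Linq;

        public sealed class BlogStore
        {
            // (blogId, version) -> content; stands in for a table keyed on both columns.
            private readonly ConcurrentDictionary<(int BlogId, long Version), string> _rows =
                new ConcurrentDictionary<(int BlogId, long Version), string>();

            // Idempotent: writing the same (blogId, version, content) twice changes nothing.
            public void Write(int blogId, string content, long version) =>
                _rows.TryAdd((blogId, version), content);

            // The current content is the row with the highest version for that blog.
            public string Read(int blogId) =>
                _rows.Where(kv => kv.Key.BlogId == blogId)
                     .OrderByDescending(kv => kv.Key.Version)
                     .Select(kv => kv.Value)
                     .FirstOrDefault();
        }

    Usage matching the question's example:

        var store = new BlogStore();
        store.Write(42, "Hello world", 1);
        store.Write(42, "Goodbye", 2);
        store.Write(42, "Hello world", 1);  // replay: no effect
        Console.WriteLine(store.Read(42));  // "Goodbye", regardless of replay or order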

    Read the article

  • "Optimal" game loop for 2D side-scroller

    - by MrDatabase
    Is it possible to describe an "optimal" (in terms of performance) layout for a 2D side-scroller's game loop? In this context the "game loop" takes user input, updates the states of game objects and draws the game objects. For example, having a GameObject base class with a deep inheritance hierarchy could be good for maintenance... you can do something like the following: foreach(GameObject g in gameObjects) g.update(); However, I think this approach can create performance issues. On the other hand, all game objects' data and functions could be global, which would be a maintenance headache but might be closer to an optimally performing game loop. Any thoughts? I'm interested in practical applications of near-optimal game loop structure... even if I get a maintenance headache in exchange for great performance.
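
    For contrast with the deep-hierarchy foreach above, here is a sketch (my illustration, not from the question) of the data-oriented middle ground often suggested: group objects by concrete type so the hot loop iterates tightly packed, homogeneous lists with no virtual dispatch, while the rest of the codebase keeps whatever structure is easiest to maintain:

        using System.Collections.Generic;

        public struct Enemy  { public float X, Y, VX, VY; }
        public struct Bullet { public float X, Y, VX, VY; }

        public sealed class World
        {
            // One tightly packed list per concrete type: no virtual calls in the hot loop.
            public readonly List<Enemy>  Enemies = new List<Enemy>();
            public readonly List<Bullet> Bullets = new List<Bullet>();

            public void Update(float dt)
            {
                for (int i = 0; i < Enemies.Count; i++)
                {
                    var e = Enemies[i];
                    e.X += e.VX * dt; e.Y += e.VY * dt;
                    Enemies[i] = e;            // structs are copied, so write back
                }
                for (int i = 0; i < Bullets.Count; i++)
                {
                    var b = Bullets[i];
                    b.X += b.VX * dt; b.Y += b.VY * dt;
                    Bullets[i] = b;
                }
            }
        }

    The trade-off is exactly the one the question raises: adding a new object type means touching World, but the per-frame update stays cache-friendly.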

    Read the article

  • JSR updates

    - by heathervc
    There were many JSR updates over the last week. See below.

    - JSR 258, Mobile User Interface Customization API, has published a Maintenance Release.
    - JSR 257, Contactless Communication API, has published a Maintenance Release 2.
    - JSR 180, SIP API for J2ME, has published a Final Release 5.
    - JSR 269, Pluggable Annotation Processing API, has published a Maintenance Release.
    - JSR 344, JavaServer™ Faces 2.2, has published an Early Draft Review. The review closes on 8 December.

    Read the article

  • How do I get to the maintenance shell?

    - by Narida
    I'm asking this because initially my problem was this: a power failure during installation. So I typed the following instructions in the maintenance shell:

        sudo mount -o remount,rw /
        sudo dpkg --configure -a
        sudo mount -o remount,ro /
        sudo sync
        sudo reboot

    The first three lines worked; afterwards, my computer (a Dell Inspiron 530) stalled for several hours, so I unplugged it. When I turned it on, the login screen appeared, and after I type my password, it leads me back to the login screen. I must note that when I typed the first three lines in maintenance-shell mode, it said that the errors encountered during processing were: initscripts, bluez, gnome-bluetooth. So, what do I have to do to get back to the maintenance shell so I can type commands again? And what do I have to type to restore my computer? Thank you for your attention.

    Read the article

  • Old operations master still thinks it is the "one"

    - by Doug
    Hi there, I have a domain with 3 AD servers; for now I'll just call them:

    AD01 (Win 2008 GC, operations master)
    AD02 (Win 2008 GC)
    AD03 (Win 2003 GC)

    A couple of months ago there were some hardware issues with AD01, so the operations master, PDC and Infrastructure Master roles were moved to AD02. All machines were on while this was happening.

    AD01 (Win 2008 GC)
    AD02 (Win 2008 GC, operations master)
    AD03 (Win 2003 GC)

    AD01 was then shut down for a month. Upon starting this machine up with replaced hardware (NIC and RAID card), I now have a weird problem:

    AD01 thinks it is still the operations master in AD on the local box.
    AD02 & AD03 think AD02 is the operations master in AD on both boxes.

    When running DCDIAG on AD01 I get a number of issues (listed below).

    When running "dcdiag /test:advertising" on AD01:

        Doing primary tests
        Testing server: Default-First-Site-Name\AD01
        Starting test: Advertising
        Warning: DsGetDcName returned information for \\ad02.domain.local, when we were trying to reach AD01.
        SERVER IS NOT RESPONDING or IS NOT CONSIDERED SUITABLE.
        ......................... AD01 failed test Advertising
        Running partition tests on : ForestDnsZones
        Running partition tests on : DomainDnsZones
        Running partition tests on : Schema
        Running partition tests on : Configuration
        Running partition tests on : domain
        Running enterprise tests on : domain.local

    When running "dcdiag" on AD01 I get the following errors (excerpt of the final output):

        Testing server: Default-First-Site-Name\AD01
        Starting test: Advertising
        Warning: DsGetDcName returned information for \\ad02.domain.local, when we were trying to reach AD01.
        SERVER IS NOT RESPONDING or IS NOT CONSIDERED SUITABLE.
        ......................... AD01 failed test Advertising
        Starting test: FrsEvent
        There are warning or error events within the last 24 hours after the SYSVOL has been shared. Failing SYSVOL replication problems may cause Group Policy problems.
        Starting test: NCSecDesc
        Error NT AUTHORITY\ENTERPRISE DOMAIN CONTROLLERS doesn't have Replicating Directory Changes In Filtered Set access rights for the naming context: DC=ForestDnsZones,DC=domain,DC=local
        Error NT AUTHORITY\ENTERPRISE DOMAIN CONTROLLERS doesn't have Replicating Directory Changes In Filtered Set access rights for the naming context: DC=DomainDnsZones,DC=domain,DC=local
        Starting test: Replications
        [Replications Check,Replications Check] Inbound replication is disabled. To correct, run "repadmin /options AD01 -DISABLE_INBOUND_REPL"
        [Replications Check,AD01] Outbound replication is disabled. To correct, run "repadmin /options AD01 -DISABLE_OUTBOUND_REPL"

    So the problem appears to be that when I moved the operations master, AD01 never got the memo, and now that it's started up, all the other AD servers don't think it's the boss anymore when it tries to replicate, etc. So I really need to manually update AD01 so that it knows who holds the operations master, Infrastructure Master and PDC roles, but I'm not having any luck. I've been googling for nearly a day and all solutions lead to "the cake is a lie". Your ninja skills will be greatly appreciated.

    Read the article

  • New Oracle BI Applications released

    - by THE
    Oracle has just released two new Applications for Oracle Business Intelligence Analytics with the 7.9.6.x Extension Pack:

    · Oracle Manufacturing Analytics, part of the Oracle BI Applications product family, helps discrete and process manufacturing organizations optimize their supply networks by integrating data from across the enterprise value chain, thereby enabling executives, operations managers, cost accountants and production supervisors to make informed and actionable decisions related to manufacturing execution.

    · Oracle Enterprise Asset Management Analytics, part of the Oracle BI Applications product family, offers complete and enhanced visibility to enterprise-wide maintenance information. Pre-built reports covering Maintenance History, Maintenance Cost Analysis and Maintenance Work Orders provide Maintenance Managers information to maximize performance, identify potential issues much in advance, and address them before they escalate into serious problems.

    More information about the existing Business Intelligence Analytics Applications can be found on this page: http://www.oracle.com/us/solutions/ent-performance-bi/bi-applications-066544.html

    If you are not familiar with Oracle Manufacturing or Oracle Enterprise Asset Management, these PDFs might get you started:
    http://www.oracle.com/us/products/applications/060289.pdf
    http://www.oracle.com/us/products/applications/057127.pdf

    Read the article

  • C#/.NET Little Wonders: The Useful But Overlooked Sets

    - by James Michael Hare
    Once again we consider some of the lesser known classes and keywords of C#. Today we will be looking at two set implementations in the System.Collections.Generic namespace: HashSet<T> and SortedSet<T>. Even though most people think of sets as mathematical constructs, they are actually very useful classes that can be used to help make your application more performant if used appropriately.

    A Background From Math

    In mathematical terms, a set is an unordered collection of unique items. In other words, the set {2,3,5} is identical to the set {3,5,2}. In addition, the set {2, 2, 4, 1} would be invalid because it would have a duplicate item (2). In addition, you can perform set arithmetic on sets such as:

    - Intersections: The intersection of two sets is the collection of elements common to both. Example: The intersection of {1,2,5} and {2,4,9} is the set {2}.
    - Unions: The union of two sets is the collection of unique items present in either or both sets. Example: The union of {1,2,5} and {2,4,9} is {1,2,4,5,9}.
    - Differences: The difference of two sets is the removal of all items from the first set that are common between the sets. Example: The difference of {1,2,5} and {2,4,9} is {1,5}.
    - Supersets: One set is a superset of a second set if it contains all elements that are in the second set. Example: The set {1,2,5} is a superset of {1,5}.
    - Subsets: One set is a subset of a second set if all the elements of that set are contained in the first set. Example: The set {1,5} is a subset of {1,2,5}.

    If We're Not Doing Math, Why Do We Care?

    Now, you may be thinking: why bother with the set classes in C# if you have no need for mathematical set manipulation? The answer is simple: they are extremely efficient ways to determine ownership in a collection.

    For example, let's say you are designing an order system that tracks the price of a particular equity, and once it reaches a certain point will trigger an order. Now, since there are tens of thousands of equities on the markets, you don't want to track market data for every ticker, as that would be a waste of time and processing power for symbols you don't have orders for. Thus, we just want to subscribe to the stock symbol for an equity order only if it is a symbol we are not already subscribed to.

    Every time a new order comes in, we will check the list of subscriptions to see if the new order's stock symbol is in that list. If it is, great, we already have that market data feed! If not, then and only then should we subscribe to the feed for that symbol.

    So far so good: we have a collection of symbols and we want to see if a symbol is present in that collection and, if not, add it. This really is the essence of set processing, but for the sake of comparison, let's say you use a list instead:

        // class that handles our order processing service
        public sealed class OrderProcessor
        {
            // contains list of all symbols we are currently subscribed to
            private readonly List<string> _subscriptions = new List<string>();

            ...
        }

    Now whenever you are adding a new order, it would look something like:

        public PlaceOrderResponse PlaceOrder(Order newOrder)
        {
            // do some validation, of course...

            // check to see if already subscribed, if not add a subscription
            if (!_subscriptions.Contains(newOrder.Symbol))
            {
                // add the symbol to the list
                _subscriptions.Add(newOrder.Symbol);

                // do whatever magic is needed to start a subscription for the symbol
            }

            // place the order logic!
        }

    What's wrong with this?
    In short: performance! Finding an item inside a List<T> is a linear, O(n), operation, which is not a very performant way to find if an item exists in a collection. (I used to teach algorithms and data structures in my spare time at a local university, and when you began talking about big-O notation you could immediately see eyes glossing over, as if it were pure, useless theory that would not apply in the real world, but I did and still do believe it is something worth understanding well to make the best choices in computer science.)

    Let's think about this: a linear operation means that as the number of items increases, the time that it takes to perform the operation tends to increase in a linear fashion. Put crudely, this means if you double the collection size, you might expect the operation to take something on the order of twice as long. Linear operations tend to be bad for performance because they mean that to perform some operation on a collection, you must potentially "visit" every item in the collection. Consider finding an item in a List<T>: if you want to see if the list has an item, you must potentially check every item in the list before you find it or determine it's not found.

    Now, we could of course sort our list and then perform a binary search on it, but sorting is typically of linear-logarithmic complexity, O(n * log n), and could involve temporary storage. So performing a sort after each add would probably add more time. As an alternative, we could use a SortedList<TKey, TValue>, which sorts the list on every Add(), but this has a similar level of complexity to move the items and also requires a key and a value, and in our case the key is the value.

    This is why sets tend to be the best choice for this type of processing: they don't rely on separate keys and values for ordering (so they save space), and they typically don't care about ordering (so they tend to be extremely performant). The .NET BCL (Base Class Library) has had the HashSet<T> since .NET 3.5, but at that time it did not implement the ISet<T> interface. As of .NET 4.0, HashSet<T> implements ISet<T>, and a new set, the SortedSet<T>, was added that gives you a set with ordering.

    HashSet<T> – For Unordered Storage of Sets

    When used right, HashSet<T> is a beautiful collection; you can think of it as a simplified Dictionary<T,T>, that is, a Dictionary where the TKey and TValue refer to the same object. This is really an oversimplification, but logically it makes sense. I've actually seen people code a Dictionary<T,T> where they store the same thing in the key and the value, and that's just inefficient because of the extra storage to hold both the key and the value.

    As its name implies, the HashSet<T> uses a hashing algorithm to find the items in the set, which means it does take up some additional space, but it has lightning-fast lookups! Compare the times below between HashSet<T> and List<T>:

        Operation    HashSet<T>  List<T>
        Add()        O(1)        O(1) at end; O(n) in middle
        Remove()     O(1)        O(n)
        Contains()   O(1)        O(n)

    Now, these times are amortized and represent the typical case. In the very worst case, the operations could be linear if they involve a resizing of the collection, but this is true for both the List and the HashSet, so it's less of an issue when comparing the two.

    The key thing to note is that in the general case, HashSet is constant time for adds, removes, and contains!
    This means that no matter how large the collection is, it takes roughly the same amount of time to find an item or determine if it's not in the collection. Compare this to the List, where almost any add or remove must rearrange potentially all the elements! And to find an item in the list (if unsorted), you must search every item in the List.

    So as you can see, if you want to create an unordered collection and have very fast lookup and manipulation, the HashSet is a great collection. And since HashSet<T> implements ICollection<T> and IEnumerable<T>, it supports nearly all the same basic operations as the List<T> and can use the System.Linq extension methods as well.

    All we have to do to switch from a List<T> to a HashSet<T> is change our declaration. Since List and HashSet support many of the same members, chances are we won't need to change much else.

        public sealed class OrderProcessor
        {
            private readonly HashSet<string> _subscriptions = new HashSet<string>();

            // ...

            public PlaceOrderResponse PlaceOrder(Order newOrder)
            {
                // do some validation, of course...

                // check to see if already subscribed, if not add a subscription
                if (!_subscriptions.Contains(newOrder.Symbol))
                {
                    // add the symbol to the list
                    _subscriptions.Add(newOrder.Symbol);

                    // do whatever magic is needed to start a subscription for the symbol
                }

                // place the order logic!
            }

            // ...
        }

    Notice, we didn't change any code other than the declaration for _subscriptions to be a HashSet<T>. Thus, we can pick up the performance improvements in this case with minimal code changes.

    SortedSet<T> – Ordered Storage of Sets

    Just like HashSet<T> is logically similar to Dictionary<T,T>, the SortedSet<T> is logically similar to the SortedDictionary<T,T>. The SortedSet can be used when you want to do set operations on a collection, but you want to maintain that collection in sorted order. Now, this is not necessarily mathematically relevant, but if your collection's needs do include order, this is the set to use.

    The SortedSet seems to be implemented as a binary tree (possibly a red-black tree) internally. Since binary trees are dynamic structures and non-contiguous (unlike List and SortedList), this means that inserts and deletes do not involve rearranging elements or changing the linking of the nodes. There is some overhead in keeping the nodes in order, but it is much smaller than in a contiguous-storage collection like a List<T>. Let's compare the three:

        Operation    HashSet<T>  SortedSet<T>  List<T>
        Add()        O(1)        O(log n)      O(1) at end; O(n) in middle
        Remove()     O(1)        O(log n)      O(n)
        Contains()   O(1)        O(log n)      O(n)

    The MSDN documentation seems to indicate that operations on SortedSet are O(1), but this seems to be inconsistent with its implementation and seems to be a documentation error. There's actually a separate MSDN document (here) on SortedSet that indicates that it is, in fact, logarithmic in complexity. Let's put it in layman's terms: logarithmic means you can double the collection size and typically you only add a single extra "visit" to an item in the collection. Take that in contrast to List<T>'s linear operation, where if you double the size of the collection you double the "visits" to items in the collection. This is very good performance! It's still not as performant as HashSet<T>, where it always just visits one item (amortized), but for the addition of sorting this is a good thing.
    Consider the following table; this is just illustrative data of the relative complexities, but it's enough to get the point:

        Collection Size  O(1) Visits  O(log n) Visits  O(n) Visits
        1                1            1                1
        10               1            4                10
        100              1            7                100
        1000             1            10               1000

    Notice that the logarithmic, O(log n), visit count goes up very slowly compared to the linear, O(n), visit count. This is because, since the list is sorted, it can do one check in the middle of the list, determine which half of the collection the data is in, and discard the other half (a binary search). So, if you need your set to be sorted, you can use the SortedSet<T> just like the HashSet<T> and gain sorting for a small performance hit, but it's still faster than a List<T>.

    Unique Set Operations

    Now, if you do want to perform more set-like operations, both implementations of ISet<T> support the following, which play back towards the mathematical set operations described before:

    - IntersectWith() – Performs the set intersection of two sets. Modifies the current set so that it only contains elements also in the second set.
    - UnionWith() – Performs a set union of two sets. Modifies the current set so it contains all elements present in either the current set or the second set.
    - ExceptWith() – Performs a set difference of two sets. Modifies the current set so that it removes all elements present in the second set.
    - IsSupersetOf() – Checks if the current set is a superset of the second set.
    - IsSubsetOf() – Checks if the current set is a subset of the second set.

    For more information on the set operations themselves, see the MSDN description of ISet<T> (here).

    What Sets Don't Do

    Don't get me wrong: sets are not silver bullets. You don't really want to use a set when you want separate key-to-value lookups; that's what the IDictionary implementations are best for.

    Also, sets don't store temporal add-order. That is, if you are adding items to the end of a list all the time, your list is ordered in terms of when items were added to it. This is something the sets don't do naturally (though you could use a SortedSet with an IComparer on a DateTime, but that's overkill), while List<T> can.

    Also, List<T> allows indexing, which is a blazingly fast way to iterate through items in the collection. Iterating over all the items in a List<T> is generally much, much faster than iterating over a set.

    Summary

    Sets are an excellent tool for maintaining a lookup table where the item is both the key and the value. In addition, if you have need for the mathematical set operations, the C# sets support those as well. The HashSet<T> is the set of choice if you want the fastest possible lookups but don't care about order. In contrast, the SortedSet<T> will give you a sorted collection at a slight reduction in performance.

    Technorati Tags: C#,.Net,Little Wonders,BlackRabbitCoder,ISet,HashSet,SortedSet
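
    As a quick, self-contained illustration of the ISet<T> operations listed in the article, reusing the example sets from its math section:

        using System;
        using System.Collections.Generic;

        var a = new HashSet<int> { 1, 2, 5 };
        var b = new HashSet<int> { 2, 4, 9 };

        // IntersectWith and friends modify the set in place, so work on copies.
        var intersection = new HashSet<int>(a);
        intersection.IntersectWith(b);               // { 2 }

        var union = new HashSet<int>(a);
        union.UnionWith(b);                          // { 1, 2, 4, 5, 9 }

        var difference = new HashSet<int>(a);
        difference.ExceptWith(b);                    // { 1, 5 }

        Console.WriteLine(string.Join(",", union));  // order not guaranteed for a HashSet
        Console.WriteLine(a.IsSupersetOf(new[] { 1, 5 }));           // True
        Console.WriteLine(new HashSet<int> { 1, 5 }.IsSubsetOf(a));  // True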

    Read the article

  • WaterMark using c#

    - by srk
    I am working on a kiosk application. There is a Maintenance mode in my application. When my application enters Maintenance mode, I want to show the user a watermark reading "Maintenance Mode Commenced". I want this watermark to be shown across my whole desktop, no matter which form is in focus. Is this possible? Any ideas? Note: this is a Windows application using C#.
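
    One common approach (a hedged sketch, not a verified answer from the thread): show a separate borderless, maximized, semi-transparent form that is TopMost, and make it click-through with the WS_EX_LAYERED | WS_EX_TRANSPARENT extended styles so the kiosk underneath stays usable. WatermarkForm and the styling choices below are illustrative:

        using System.Drawing;
        using System.Windows.Forms;

        public sealed class WatermarkForm : Form
        {
            private const int WS_EX_LAYERED = 0x80000;
            private const int WS_EX_TRANSPARENT = 0x20; // clicks pass through to windows below

            public WatermarkForm()
            {
                FormBorderStyle = FormBorderStyle.None;
                WindowState = FormWindowState.Maximized;
                TopMost = true;                    // stays above every other form
                ShowInTaskbar = false;
                Opacity = 0.6;                     // watermark effect
                BackColor = Color.Magenta;
                TransparencyKey = Color.Magenta;   // background invisible, text remains
            }

            protected override CreateParams CreateParams
            {
                get
                {
                    var cp = base.CreateParams;
                    cp.ExStyle |= WS_EX_LAYERED | WS_EX_TRANSPARENT;
                    return cp;
                }
            }

            protected override void OnPaint(PaintEventArgs e)
            {
                using (var font = new Font("Segoe UI", 48, FontStyle.Bold))
                using (var brush = new SolidBrush(Color.Red))
                using (var format = new StringFormat
                {
                    Alignment = StringAlignment.Center,
                    LineAlignment = StringAlignment.Center
                })
                {
                    e.Graphics.DrawString("Maintenance Mode Commenced",
                        font, brush, ClientRectangle, format);
                }
            }
        }

        // usage when maintenance mode begins: new WatermarkForm().Show();

    Note the form only covers the primary screen as written; multi-monitor kiosks would need one form per screen.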

    Read the article

  • How to make a maintenance document for a website?

    - by metal-gear-solid
    How do I write a maintenance document for a website? I've created a site using XHTML, CSS, jQuery, etc.; it's a big site. Now I have to write a maintenance document for the site, so that if any changes come up in the future related to design, content or functionality, those things can be handled by someone else. What should I keep in the maintenance document, and how should I structure it?

    Read the article

  • If not following correct procedure?

    - by ct2k7
    Hi, I'm working on a Content Management System, and so far so good. I'm trying to get the system to work with a maintenance-mode system. This is the code in the script that displays the maintenance stuff:

        if ($maintenance['value'] == "1") {?>
            <div id="content">
                <div id="errmsg">
                    <?php echo $maintenance['notes']; ?>
                </div>
            </div>
        <? } else {?>
            <h1><?php echo $title; ?></h1>
            <hr /><br />
            <div id="content">
                <?php echo $contents;
                if (!$content) { include ('includes/error/404.php');}?>
            </div>
        <? } ?>

    I can verify that the $maintenance['value'] variable is working as it should, but this portion of the script isn't working as it should. The value is currently set to 1, but it still displays the stuff in the else. Any ideas?

    Read the article

  • CRUD operations: do you notify whether the insert, update, etc. went well?

    - by danielovich
    Hi guys, I have a simple question for you (I hope). :) I have pretty much always used void as a "return" type when doing CRUD operations on data. E.g., consider this code:

        public void Insert(IAuctionItem item)
        {
            if (item == null)
            {
                AuctionLogger.LogException(new ArgumentNullException("item is null"));
            }

            _dataStore.DataContext.AuctionItems.InsertOnSubmit((AuctionItem)item);
            _dataStore.DataContext.SubmitChanges();
        }

    and then consider this code:

        public bool Insert(IAuctionItem item)
        {
            if (item == null)
            {
                AuctionLogger.LogException(new ArgumentNullException("item is null"));
            }

            _dataStore.DataContext.AuctionItems.InsertOnSubmit((AuctionItem)item);
            _dataStore.DataContext.SubmitChanges();

            return true;
        }

    It really just comes down to whether you should notify that something was inserted (and went well) or not?
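
    Worth noting (an observation about the snippets above, not from the question itself): in both versions the null check only logs and then falls through, and the second version can only ever return true, so the caller learns nothing from the bool. A common alternative is to throw on bad input and let exceptions signal failure, reserving return values for information the caller can actually act on. A sketch reusing the question's own types:

        public void Insert(IAuctionItem item)
        {
            // Fail fast: surface the error to the caller instead of logging and continuing.
            if (item == null)
                throw new ArgumentNullException(nameof(item));

            _dataStore.DataContext.AuctionItems.InsertOnSubmit((AuctionItem)item);
            _dataStore.DataContext.SubmitChanges();   // throws on failure, so a bool adds nothing
        }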

    Read the article

  • Transactional Messaging in the Windows Azure Service Bus

    - by Alan Smith
    Introduction

    I'm currently working on broadening the content in the Windows Azure Service Bus Developer Guide. One of the features I have been looking at over the past week is the support for transactional messaging. When using the direct programming model and the WCF interface, some, but not all, messaging operations can participate in transactions. This allows developers to improve the reliability of messaging systems. There are some limitations in the transactional model: transactions can only include one top-level messaging entity (such as a queue or topic; subscriptions are not top-level entities), and transactions cannot include other systems, such as databases.

    As the transaction model is currently not well documented, I have had to figure out how things work through experimentation, with some help from the development team to confirm any questions I had. Hopefully I've got the content mostly correct; I will update the content in the e-book if I find any errors or improvements that can be made (any feedback would be very welcome). I've not had a chance to look into the code for transactions and asynchronous operations; maybe that would make a nice challenge lab for my Windows Azure Service Bus course.

    Transactional Messaging

    Messaging entities in the Windows Azure Service Bus provide support for participation in transactions. This allows developers to perform several messaging operations within a transactional scope, and ensure that all the actions are committed or, if there is a failure, none of the actions are committed. There are a number of scenarios where the use of transactions can increase the reliability of messaging systems.

    Using TransactionScope

    In .NET the TransactionScope class can be used to perform a series of actions in a transaction. The using declaration is typically used to define the scope of the transaction. Any transactional operations that are contained within the scope can be committed by calling the Complete method. If the Complete method is not called, any transactional methods in the scope will not commit.

        // Create a transactional scope.
        using (TransactionScope scope = new TransactionScope())
        {
            // Do something.

            // Do something else.

            // Commit the transaction.
            scope.Complete();
        }

    In order for methods to participate in the transaction, they must provide support for transactional operations. Database and message queue operations typically provide support for transactions.

    Transactions in Brokered Messaging

    Transaction support in Service Bus Brokered Messaging allows message operations to be performed within a transactional scope; however, there are some limitations around what operations can be performed within the transaction. In the current release, only one top-level messaging entity, such as a queue or topic, can participate in a transaction, and the transaction cannot include any other transaction resource managers, making transactions that span a messaging entity and a database not possible.

    When sending messages, the send operations can participate in a transaction, allowing multiple messages to be sent within a transactional scope. This allows for "all or nothing" delivery of a series of messages to a single queue or topic. When receiving messages, messages that are received in the peek-lock receive mode can be completed, deadlettered or deferred within a transactional scope. In the current release the Abandon method will not participate in a transaction.
    The same restriction of only one top-level messaging entity applies here, so the Complete method can be called transactionally on messages received from the same queue, or on messages received from one or more subscriptions in the same topic.

    Sending Multiple Messages in a Transaction

    A transactional scope can be used to send multiple messages to a queue or topic. This will ensure that all the messages will be enqueued or, if the transaction fails to commit, no messages will be enqueued. An example of the code used to send 10 messages to a queue as a single transaction from a console application is shown below.

        QueueClient queueClient = messagingFactory.CreateQueueClient(Queue1);

        Console.Write("Sending");

        // Create a transaction scope.
        using (TransactionScope scope = new TransactionScope())
        {
            for (int i = 0; i < 10; i++)
            {
                // Send a message
                BrokeredMessage msg = new BrokeredMessage("Message: " + i);
                queueClient.Send(msg);
                Console.Write(".");
            }
            Console.WriteLine("Done!");
            Console.WriteLine();

            // Should we commit the transaction?
            Console.WriteLine("Commit send 10 messages? (yes or no)");
            string reply = Console.ReadLine();
            if (reply.ToLower().Equals("yes"))
            {
                // Commit the transaction.
                scope.Complete();
            }
        }
        Console.WriteLine();
        messagingFactory.Close();

    The transaction scope is used to wrap the sending of 10 messages. Once the messages have been sent, the user has the option to either commit the transaction or abandon it. If the user enters "yes", the Complete method is called on the scope, which will commit the transaction and result in the messages being enqueued. If the user enters anything other than "yes", the transaction will not commit, and the messages will not be enqueued.

    Receiving Multiple Messages in a Transaction

    The receiving of multiple messages is another scenario where the use of transactions can improve reliability. When receiving a group of messages that are related together, maybe in the same message session, it is possible to receive the messages in the peek-lock receive mode, and then complete, defer, or deadletter the messages in one transaction. (In the current version of Service Bus, Abandon is not transactional.) The following code shows how this can be achieved.

        using (TransactionScope scope = new TransactionScope())
        {
            while (true)
            {
                // Receive a message.
                BrokeredMessage msg = q1Client.Receive(TimeSpan.FromSeconds(1));
                if (msg != null)
                {
                    // Write message body and complete message.
                    string text = msg.GetBody<string>();
                    Console.WriteLine("Received: " + text);
                    msg.Complete();
                }
                else
                {
                    break;
                }
            }
            Console.WriteLine();

            // Should we commit?
            Console.WriteLine("Commit receive? (yes or no)");
            string reply = Console.ReadLine();
            if (reply.ToLower().Equals("yes"))
            {
                // Commit the transaction.
                scope.Complete();
            }
            Console.WriteLine();
        }

    Note that if there are a large number of messages to be received, there will be a chance that the transaction may time out before it can be committed. It is possible to specify a longer timeout when the transaction is created, but it may be better to receive and commit smaller batches of messages within the transaction.
    It is also possible to complete, defer, or deadletter messages received from more than one subscription, as long as all the subscriptions are contained in the same topic. As subscriptions are not top-level messaging entities, this scenario will work. The following code shows how this can be achieved.

        try
        {
            using (TransactionScope scope = new TransactionScope())
            {
                // Receive one message from each subscription.
                BrokeredMessage msg1 = subscriptionClient1.Receive();
                BrokeredMessage msg2 = subscriptionClient2.Receive();

                // Complete the message receives.
                msg1.Complete();
                msg2.Complete();

                Console.WriteLine("Msg1: " + msg1.GetBody<string>());
                Console.WriteLine("Msg2: " + msg2.GetBody<string>());

                // Commit the transaction.
                scope.Complete();
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }

    Unsupported Scenarios

    The restriction that only one top-level messaging entity can participate in a transaction makes some useful scenarios unsupported. As the Windows Azure Service Bus is under continuous development and new releases are expected to be frequent, it is possible that this restriction may not be present in future releases. The first is the scenario where messages are to be routed to two different systems. The following code attempts to do this.

        try
        {
            // Create a transaction scope.
            using (TransactionScope scope = new TransactionScope())
            {
                BrokeredMessage msg1 = new BrokeredMessage("Message1");
                BrokeredMessage msg2 = new BrokeredMessage("Message2");

                // Send a message to Queue1
                Console.WriteLine("Sending Message1");
                queue1Client.Send(msg1);

                // Send a message to Queue2
                Console.WriteLine("Sending Message2");
                queue2Client.Send(msg2);

                // Commit the transaction.
                Console.WriteLine("Committing transaction...");
                scope.Complete();
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }

    The results of running the code are shown below. When attempting to send a message to the second queue, the following exception is thrown:

        No active Transaction was found for ID '35ad2495-ee8a-4956-bbad-eb4fedf4a96e:1'. The Transaction may have timed out or attempted to span multiple top-level entities such as Queue or Topic. The server Transaction timeout is: 00:01:00..TrackingId:947b8c4b-7754-4044-b91b-4a959c3f9192_3_3,TimeStamp:3/29/2012 7:47:32 AM.

    Another scenario where transactional support could be useful is when forwarding messages from one queue to another queue. This would also involve more than one top-level messaging entity, and is therefore not supported. Another scenario that developers may wish to implement is performing transactions across messaging entities and other transactional systems, such as an on-premise database. In the current release this is not supported.

    Workarounds for Unsupported Scenarios

    There are some techniques that developers can use to work around the one-top-level-entity limitation of transactions. When sending two messages to two systems, topics and subscriptions can be used. If the same message is to be sent to two destinations, then the subscriptions would have the default subscriptions, and the client would only send one message. If two different messages are to be sent, then filters on the subscriptions can route the messages to the appropriate destination. The client can then send the two messages to the topic in the same transaction.
    In scenarios where a message needs to be received and then forwarded to another system within the same transaction, topics and subscriptions can also be used. A message can be received from a subscription, and then sent to a topic within the same transaction. As a topic is a top-level messaging entity, and a subscription is not, this scenario will work.

    Read the article

  • Agile Development Requires Agile Support

    - by Matt Watson
    Agile development

    Agile development has become the standard methodology for application development. The days of long-term planning with giant Gantt waterfall charts and detailed requirements are fading away. For years the product planning process frustrated product owners and businesses because, no matter the plan, nothing ever went to plan. Agile development throws the detailed planning out the window and instead focuses on giving developers some basic requirements and pointing them in the right direction. Constant collaboration via quick iterations with the end users, product owners, and the development team helps ensure the project is done correctly.

    The various agile development methodologies have helped greatly with creating products faster, but not without causing new problems. Complicated application deployments now occur weekly or monthly. Most of the products are web-based and deployed on a software-as-a-service model. System performance and availability of these apps become mission critical. This is all much different from the old process of mailing new releases of client-server apps on CD once per quarter or year.

    The steady stream of new products and product enhancements puts a lot of pressure on IT operations to keep up with the software deployments and with adding infrastructure capacity. The problem is most operations teams still move slowly thanks to change orders, documentation, procedures, testing and other processes. Operations can slow the process down and push back on the development team in some organizations. The DevOps movement is trying to solve some of these problems by integrating the development and operations teams more closely.

    Rapid change introduces new problems

    The rapid product change ultimately creates some application problems along the way. Higher rates of change increase the likelihood of new application defects. Delivering applications as a software service also means that scalability of applications is critical. Development teams struggle to keep up with application defects and scalability concerns in their applications. Fixing application problems is a never-ending job for agile development teams. Fixing problems before your customers find them, and fixing them quickly, is critical. Most companies really struggle with this due to the divide between the development and operations groups. Fixing application problems typically requires querying databases, looking at log files, reviewing config files, reviewing error logs and other similar tasks. It becomes difficult to work on new features when your lead developers are working on defects from the last product version.

    Developers need more visibility

    The problem is most developers are not given access to see server and application information in the production environments. The operations team doesn't trust giving all the developers the keys to the kingdom to log in to production and poke around the servers. The challenge is that they either get no access, or potentially too much access. Those with access can still waste time figuring out the location of the application and how to connect to it over VPN. In addition, reproducing problems in test environments takes too much time and isn't always possible. System administrators spend a lot of time helping developers track down server information. Most companies give key developers access to all of the production resources so they can help resolve application defects. The problem is only those key people have access, and they become a bottleneck.
    They end up spending 25-50% of their time on a daily basis trying to solve application issues because they are the only ones with access. These key employees' time is best spent on strategic new projects, not on addressing application defects. This job should fall to entry-level developers, provided they have access to all the information they need to troubleshoot the problems.

    The solution to agile application support is giving all the developers limited access to the production environment and all the server information they need to see. Some companies create their own solutions internally to collect log files, centralize errors, or do other things to address the problem. Some developers even have access to server monitoring or other tools. But the key is giving them access to everything they need so they can see the full picture, and giving access to the whole team. Giving access to everyone scales up the application support team and creates collaboration around providing improved application support.

    Stackify enables agile application support

    Stackify has created a solution that can give all developers a secure and read-only view of the entire production server environment without console or remote desktop access. They provide a web application that provides real-time visibility into the important information that developers need to see. An application-centric view enables them to see all of their apps across multiple datacenters and environments. They don't need to know where the application is deployed, just the name of the application, to find it and dig in to see more. All your developers can see server health, application health, log files, config files, Windows Event Viewer, deployment history, application notes, and much more. They can receive email and text alerts when problems arise and even safely query your production databases.

    Stackify enables companies that do agile development to scale up their application support team by getting more team members involved. The lead developers can spend more time on new projects. Application issues can be fixed quicker than ever. Operations can spend less time helping developers collect server information. Agile application support starts with Stackify. Visit Stackify.com to learn more.

    Read the article

  • Interesting interview question (.NET)

    - by rahul
    Coding Problem: NumTrans

    There is an integer K. You are allowed to add to K any of its divisors not equal to 1 and K. The same operation can be applied to the resulting number, and so on. Notice that starting from the number 4, we can reach any composite number by applying several such operations. For example, the number 24 can be reached starting from 4 using 5 operations: 4 → 6 → 8 → 12 → 18 → 24.

    You will solve a more general problem. Given integers n and m, return the minimal number of the described operations necessary to transform n into m. Return -1 if m can't be obtained from n.

    Definition
    Method signature: int GetLeastCount(int n, int m)

    Constraints
    N will be between 4 and 100000, inclusive.
    M will be between N and 100000, inclusive.

    Examples
    1) 4 576
    Returns: 14
    The shortest order of operations is: 4 → 6 → 8 → 12 → 18 → 27 → 36 → 54 → 81 → 108 → 162 → 243 → 324 → 432 → 576
    2) 8748 83462
    Returns: 10
    The shortest order of operations is: 8748 → 13122 → 19683 → 26244 → 39366 → 59049 → 78732 → 83106 → 83448 → 83460 → 83462
    3) 4 99991
    Returns: -1
    The number 99991 can't be obtained because it's prime!
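
    Not part of the original post, but the constraints (n, m ≤ 100000) make a breadth-first search over reachable values a natural fit: each number k has an edge to k + d for every divisor d of k with 1 < d < k, and BFS finds the minimal operation count. A sketch in C#:

        using System.Collections.Generic;

        static class NumTrans
        {
            public static int GetLeastCount(int n, int m)
            {
                if (n == m) return 0;

                // dist[v] = fewest operations to reach v from n; -1 = not reached yet.
                var dist = new int[m + 1];
                for (int i = 0; i <= m; i++) dist[i] = -1;
                dist[n] = 0;

                var queue = new Queue<int>();
                queue.Enqueue(n);

                while (queue.Count > 0)
                {
                    int k = queue.Dequeue();

                    // Enumerate divisors of k via divisor pairs (d, k/d); both exclude 1 and k.
                    for (int d = 2; d * d <= k; d++)
                    {
                        if (k % d != 0) continue;
                        foreach (int step in new[] { d, k / d })
                        {
                            int next = k + step;
                            if (next <= m && dist[next] == -1)
                            {
                                dist[next] = dist[k] + 1;
                                if (next == m) return dist[next];
                                queue.Enqueue(next);
                            }
                        }
                    }
                }
                return -1; // m is unreachable (e.g. prime, as in example 3)
            }
        }

    Because every operation adds at least 2 and values never exceed m, the state space is bounded by m, and BFS with unit edge weights guarantees the returned count is minimal.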

    Read the article

  • How to achieve this site structure?

    - by Sushant
    Hi, I need to develop a website that looks like this. In Central Administration, however, on the Operations tab, it shows Central Administration > Operations. But I checked, and Operations is not a subsite. Then what is it? In my application, I always get Home > Operations. To add to the trouble, it changes the name at the top to Operations. I need to keep it as Central Administration only. Please help me sort this out. Thanks.

    Read the article

  • Vodacom Call Center Management on the NetBeans Platform

    - by Geertjan
    If you live in South Africa, you know about Vodacom. Vodacom is one of the dominant mobile communication companies in South Africa, and beyond, providing voice, messaging, data, and similar mobile services. Inside Vodacom there's an application named Helios, which is a call centre application that had its inception in 2009 and consists of two parts. Firstly, a web-based front-end that allows a call centre agent to service subscribers using a Google-like search on a knowledge base structured as a collection of FAQs. The web-based front-end uses plain-old HTML + CSS + a good helping of jQuery and jQuery UI. This is delivered via JSR-168 portlets running on a cluster of IBM Portal 6 servers. In turn, the portlets communicate via RMI with several back-end EJBs containing the business logic. These EJBs are deployed on a cluster of WebLogic Application Servers, version 10.3.6. The second part is a NetBeans Platform application used for maintaining and constructing the knowledge base, i.e., the back-end of the web-based front-end. Helios is also used for a number of other maintenance functions, such as access permissions, user maintenance, and news bulletins. Below, in the web-based front-end, call centre agents can enter search terms and are presented with a number of FAQs from the knowledge base. Upon selecting a FAQ article, the agent is presented with the article text, the process to guide the subscriber, system checks that display information specific to the subscriber, and links to related applications and articles. Applications are also searchable and can be accessed using the same web-based front-end. Knowledge base FAQs are maintained using the Helios Maintenance Application, which is the Vodacom application built on the NetBeans Platform. Several thousand call centre agent user accounts are administered using the Helios Maintenance Application. Vodacom is happy with the back-end NetBeans Platform application. However, the front-end stack runs on quite old technology. Ideally Vodacom would like to migrate the portlets to Oracle WebLogic Portal or Oracle WebCenter, but this hasn't been accomplished yet. Migrating makes sense as the rest of the application server environment consists entirely of Oracle products.

    Read the article

  • T-SQL Tuesday #13: Clarifying Requirements

    - by Alexander Kuznetsov
    When we transform initial ideas into clear requirements for databases, we typically have to make the following choices: frequent maintenance vs. doing it once. As we are clarifying the requirements, we need to determine whether we want to continue spending considerable time maintaining the system, or whether we want to finish it up and move on to other tasks. Race car maintenance vs. installing electric wiring is my favorite analogy for this kind of choice. In some cases we need to squeeze every last bit...(read more)

    Read the article

  • Organizing Your Department with CMMS Software

    Computerized maintenance management (CMMS) software can help to organize any maintenance department, no matter what the skill level of the employees is. One does not have to be a genius on the comput... [Author: Ashley Combs - Computers and Internet - April 01, 2010]

    Read the article

  • CMMS Download Can Make Life Easier

    Are you looking for a way to organize your maintenance department? Does your maintenance department need to run more efficiently? Would you like to be able to schedule the downtime for your equipment... [Author: Ashley Combs - Computers and Internet - April 11, 2010]

    Read the article

  • Asynchrony in C# 5 (Part I)

    - by javarg
    I've been playing around with the new Async CTP preview available for download from Microsoft. It's amazing how language trends are influencing the evolution of Microsoft's development platform. Much more effort is being put in at the language level today than in previous versions of .NET. In this post series I'll review some major features contained in this release:

    - Asynchronous functions
    - TPL Dataflow
    - Task-based asynchronous pattern

    Part I: Asynchronous Functions

    This is a means of expressing asynchronous operations. This kind of function must return void or Task/Task<> (functions returning void let us implement fire-and-forget asynchronous operations). The two new keywords introduced are async and await.

    - async: marks a function as asynchronous, indicating that some part of its execution may take place some time later (after the method call has returned). Thus, all async functions must include some kind of asynchronous operation. This keyword on its own does not make a function asynchronous though; its nature depends on its implementation.
    - await: allows us to define operations inside a function that will be awaited for continuation (more on this later).

    Async function sample:

        async void ShowDateTimeAsync()
        {
            while (true)
            {
                var client = new ServiceReference1.Service1Client();
                var dt = await client.GetDateTimeTaskAsync();
                Console.WriteLine("Current DateTime is: {0}", dt);
                await TaskEx.Delay(1000);
            }
        }

    The previous sample is a typical usage scenario for these new features. Suppose we query some external web service to get data (in this case the current DateTime) and we do so at regular intervals in order to refresh the user's UI. Note the async and await keywords working together. The ShowDateTimeAsync method indicates its asynchronous nature to the caller using the keyword async (that is, it may complete after returning control to its caller). The await keyword indicates that the flow control of the method will continue executing asynchronously after client.GetDateTimeTaskAsync returns. The latter is the most important thing to understand about the behavior of this method and how this actually works. The flow control of the method will be reconstructed after any asynchronous operation completes (specified with the keyword await). This reconstruction of flow control is the real magic behind the scenes, and it is done by the C#/VB compilers. Note how we didn't use any of the regular existing async patterns, and we've defined the method very much like a synchronous one.

    Now, compare the following code snippet in contrast to the previous async/await one:

    Traditional UI async:

        void ComplicatedShowDateTime()
        {
            var client = new ServiceReference1.Service1Client();
            client.GetDateTimeCompleted += (s, e) =>
            {
                Console.WriteLine("Current DateTime is: {0}", e.Result);
                client.GetDateTimeAsync();
            };
            client.GetDateTimeAsync();
        }

    The previous implementation is somewhat similar to the first shown, but more complicated. Note how the while loop is implemented as a chained callback to the same method (client.GetDateTimeAsync) inside the event handler (please, do not do this in your own application; this is just an example). How does it work? Using a state workflow (or jump table, actually), the compiler expands our code and creates the necessary steps to execute it, resuming pending operations after any asynchronous one. The intention of the new async/await pattern is to let us think and code as we normally do when designing an algorithm.
    It also allows us to preserve the logical flow control of the program (without using any tricky coding patterns to accomplish this). The compiler will then create the necessary workflow to execute operations as they happen in time.

    Read the article
