Search Results

Search found 5638 results on 226 pages for 'scheduling algorithm'.

Page 112 of 226

  • Is there a way to force ContourPlot to re-check all the points at each stage of its recursion algorithm?

    - by Alexey Popkov
    Hello, thanks to this excellent analysis of the Plot algorithm by Yaroslav Bulatov, I now understand why Plot3D and ContourPlot fail to draw functions with breaks and discontinuities smoothly. For example, in the following case ContourPlot fails to draw the contour x^2 + y^2 = 1 at all: ContourPlot[Abs[x^2 + y^2 - 1], {x, -1, 1}, {y, -1, 1}, Contours -> {0}] This is because the algorithm does not go deeply into the region near x^2 + y^2 = 1. It "drops" this region at an initial stage and does not try to investigate it further. Increasing MaxRecursion does nothing in this sense, and even the undocumented option Method -> {Refinement -> {ControlValue -> .01 \[Degree]}} does not help (although it makes Plot3D a little smoother). The function above is just a simple example; in real life I'm working with very complicated implicit functions that cannot be solved analytically. Is there a way to get ContourPlot to go deeply into such regions near breaks and discontinuities?
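
    The behaviour described above can be reproduced with a tiny sketch (Python; a hypothetical sign-change refinement rule, not Mathematica's actual implementation): because Abs[x^2 + y^2 - 1] never drops below the contour level 0, a refinement criterion keyed to sign changes never fires, so the region around the unit circle is dropped early and never revisited.

      # A hypothetical refinement test (not Mathematica's actual code): a cell is
      # only subdivided further if f - level changes sign across its corners.
      def needs_refinement(f, x0, x1, y0, y1, level=0.0):
          corners = [f(x0, y0), f(x0, y1), f(x1, y0), f(x1, y1)]
          values = [c - level for c in corners]
          return min(values) < 0 < max(values)

      f = lambda x, y: abs(x**2 + y**2 - 1)

      # |x^2 + y^2 - 1| never goes below the contour level 0, so no cell ever
      # reports a sign change and the refinement never zooms in on the contour.
      cells = [(-1, 0, -1, 0), (0, 1, -1, 0), (-1, 0, 0, 1), (0, 1, 0, 1)]
      print([needs_refinement(f, *c) for c in cells])   # [False, False, False, False]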

    Read the article

  • How does the rsync algorithm correctly identify repeating blocks?

    - by Kai
    I'm on a personal quest to learn how the rsync algorithm works. After some reading and thinking, I've come up with a situation where I think the algorithm fails, and I'm trying to figure out how this is resolved in an actual implementation. Consider this example, where A is the receiver and B is the sender:

      A = abcde1234512345fghij
      B = abcde12345fghij

    As you can see, the only change is that 12345 has been removed. Now, to make this example interesting, let's choose a block size of 5 bytes (chars). Hashing the values on the sender's side using the weak checksum gives the following values list:

      abcde | 12345 | fghij
      abcde -> 495
      12345 -> 255
      fghij -> 520
      values = [495, 255, 520]

    Next we check to see if any hash values differ in A. If there's a matching block we can skip to the end of that block for the next check. If there's a non-matching block then we've found a difference. I'll step through this process:

      1. Hash the first block. Does this hash exist in the values list? abcde -> 495 (yes, so skip)
      2. Hash the second block. Does this hash exist in the values list? 12345 -> 255 (yes, so skip)
      3. Hash the third block. Does this hash exist in the values list? 12345 -> 255 (yes, so skip)
      4. Hash the fourth block. Does this hash exist in the values list? fghij -> 520 (yes, so skip)

    No more data, we're done. Since every hash was found in the values list, we conclude that A and B are the same. Which, in my humble opinion, isn't true. It seems to me this will happen whenever there is more than one block that shares the same hash. What am I missing?
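
    One detail that resolves the apparent failure: the matching phase never answers "same or different". In the actual protocol the roles are the reverse of the walkthrough above: the receiver (A) hashes the blocks of its old file, and the sender (B) scans its new file against that table, emitting block references and literal bytes from which the receiver rebuilds B. A toy sketch (Python, using the same byte-sum weak checksum as the example; a deliberate simplification, since real rsync also verifies each candidate with a strong MD4/MD5 checksum and a rolling search):

      # The output is a reconstruction recipe for B, not a yes/no verdict.
      def weak_hash(block):
          return sum(block)                      # byte sum, as in the example above

      def delta(old, new, size=5):
          # 'old' is the receiver's file (A); 'new' is the sender's file (B).
          # The receiver sends hashes of its blocks, and the sender walks its
          # own file emitting ("match", block_index) or ("literal", byte) tokens.
          table = {weak_hash(old[i:i + size]): i // size
                   for i in range(0, len(old), size)}
          tokens, i = [], 0
          while i < len(new):
              h = weak_hash(new[i:i + size])
              if h in table:
                  tokens.append(("match", table[h]))
                  i += size
              else:
                  tokens.append(("literal", new[i:i + 1]))
                  i += 1
          return tokens

      A = b"abcde1234512345fghij"
      B = b"abcde12345fghij"
      print(delta(A, B))
      # [('match', 0), ('match', 2), ('match', 3)] -- the receiver rebuilds B from
      # three of A's blocks; duplicate block hashes are harmless because identical
      # blocks reconstruct identical bytes.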

    Read the article

  • What algorithm should I use for encrypting and embedding a password for an application?

    - by vfclists
    What algorithm should I use for encrypting and embedding a password in an application? It obviously won't be bulletproof, but it should be good enough to thwart someone scanning the database with a hex editor, and make it hard for someone with debugger skills to work it out, either by scanning for the encrypted password or by stepping through the decryption code. Object Pascal would be nice. /vfclists

    Read the article

  • QuickGraph - is there an algorithm to find all parents (up to the root vertices) of a set of vertices?

    - by Greg
    Hi, in QuickGraph, is there an algorithm to find all parents (up to the root vertices) of a set of vertices? In other words, all vertices which have, somewhere under them (on the way to the leaf nodes), one or more of the input vertices. So if the vertices were nodes and the edges were a "depends on" relationship, find all nodes that would be impacted by a given set of nodes. If not, how hard is it to write one's own algorithm?
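
    In plain graph terms this is a reverse reachability search from the input set. A minimal sketch (Python, not the QuickGraph API; the edge direction parent -> child meaning "parent depends on child" is an assumption):

      from collections import deque

      # Collect every ancestor of a set of target vertices by walking the
      # reversed edges with a breadth-first search.
      def impacted(edges, targets):
          reverse = {}                          # child -> set of parents
          for parent, child in edges:
              reverse.setdefault(child, set()).add(parent)
          seen, queue = set(), deque(targets)
          while queue:
              node = queue.popleft()
              for parent in reverse.get(node, ()):
                  if parent not in seen:
                      seen.add(parent)
                      queue.append(parent)
          return seen                           # all vertices that reach the targets

      edges = [("A", "B"), ("B", "C"), ("D", "C")]
      print(impacted(edges, {"C"}))             # {'A', 'B', 'D'} (set order may vary)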

    Read the article

  • User-Defined Customer Events & their impact (FA Type Profile)

    - by Rajesh Sharma
    CC&B automatically creates field activities when a specific Customer Event takes place. This depends on the way you have set up your Field Activity Type Profiles, the templates within, and the associated SP Condition(s) on the template. CC&B uses the service point type, its state, and the referenced customer event to determine which field activity type to generate.

    Customer events available in the base product include:

      - Cut for Non-payment (CNP)
      - Disconnect Warning (DIWA)
      - Reconnect for Payment (REPY)
      - Reread (RERD)
      - Stop Service (STOP)
      - Start Service (STRT)
      - Start/Stop (STSP)

    Note the Field values/codes defined for each event. CC&B also provides the flexibility to define a new set of customer events. These can be defined in the Look Up CUST_EVT_FLG; values from the Look Up are used on the Field Activity Type Profile Template page.

    So what is the use of having user-defined Customer Events, and how will the system detect such events in order to create field activities? The system can only detect such events when you reference a user-defined customer event on a Severance Event Type for an event type of Create Field Activities. This way you can create additional field activities of a specific field activity type for user-defined customer events.

    One of our customers adopted this feature and created a user-defined customer event, CNPW - Cut for Non-payment for Water Services. This event was then linked on a Field Activity Type Profile and referenced on a Severance Event, CUT FOR NON PAY-W. The associated Severance Process was configured to trigger a reconnection process if it was cancelled (done by defining a Post Cancel Algorithm). Whenever this Severance Event was executed, a specific type of Field Activity was generated for disconnection purposes; the Field Activity type was determined by the system from the Field Activity Type Profile referenced for the SP Type, the SP's state, and the referenced user-defined customer event. All was working well until they realized that, in spite of the Severance Process getting cancelled (when a payment was made), the Post Cancel Algorithm was not executed to start a Reconnection Severance Process for the purpose of generating a reconnection field activity and reconnecting the service.

    Basically, the Post Cancel algorithm (if specified on a Severance Process Template) is triggered when a Severance Process gets cancelled because a credit transaction has affected/relieved a Service Agreement's debt. So what exactly was happening? This brings us to the actual question: what is the impact of having a user-defined customer event?

    System-defined/base customer events are hard-coded across the entire system; there is an impact even if you remove a customer event entry from the Look Up. User-defined customer events, by contrast, are not recognized by the system anywhere else except in the severance process, as described above. There are a few programs with routines that first validate the completion of disconnection field activities raised as a result of the customer event CNP - Cut for Non-payment, in order to perform other associated actions. One such program is the Post Cancel Algorithm referenced on a Severance Process Template, generally used to reconnect services which were disconnected from another Severance Event, specifically CNP - Cut for Non-Payment.

    The post cancel algorithm provided by the product, SEV POST CAN, does the following (from the algorithm's description): "This algorithm is called after a severance process has been cancelled (typically because the debt was paid and the SA is no longer eligible to be on the severance process). It checks to see if the process has a completed 'disconnect' event and, if so, starts a reconnect process using the Reconnect Severance Process Template defined in the parameter." Note the condition about a completed 'disconnect' event: this algorithm implicitly checks for Field Activities in a completed status that were generated from Severance Events as a result of the CNP - Cut for Non-payment customer event.

    Looking back at the customer's issue, we can see that the Post Cancel algorithm was triggered, but it was not able to find any 'Completed' field activity related to CNP - Cut for Non-payment, and hence was not able to start a reconnection severance process. This was because a field activity had been generated and completed for the customer event CNPW - Cut for Non-payment of Water Services instead.

    To conclude, if you introduce new customer events that extend or simulate base customer events (the ones included in the base product), ensure that there is no other impact, direct or indirect, on other business functions that the application offers.

    Read the article

  • Limitations of User-Defined Customer Events (FA Type Profile)

    - by Rajesh Sharma
    CC&B automatically creates field activities when a specific Customer Event takes place. This depends on the way you have set up your Field Activity Type Profiles, the templates within, and the associated SP Condition(s) on the template. CC&B uses the service point type, its state, and the referenced customer event to determine which field activity type to generate.

    Customer events available in the base product include:

      - Cut for Non-payment (CNP)
      - Disconnect Warning (DIWA)
      - Reconnect for Payment (REPY)
      - Reread (RERD)
      - Stop Service (STOP)
      - Start Service (STRT)
      - Start/Stop (STSP)

    Note the Field values/codes defined for each event. CC&B also provides the flexibility to define a new set of customer events. These can be defined in the Look Up CUST_EVT_FLG; values from the Look Up are used on the Field Activity Type Profile Template page.

    So what is the use of having user-defined Customer Events, and how will the system detect such events in order to create field activities? The system can only detect such events when you reference a user-defined customer event on a Severance Event Type for an event type of Create Field Activities. This way you can create additional field activities of a specific field activity type for user-defined customer events.

    One of our customers adopted this feature and created a user-defined customer event, CNPW - Cut for Non-payment for Water Services. This event was then linked on a Field Activity Type Profile and referenced on a Severance Event, CUT FOR NON PAY-W. The associated Severance Process was configured to trigger a reconnection process if it was cancelled (done by defining a Post Cancel Algorithm). Whenever this Severance Event was executed, a specific type of Field Activity was generated for disconnection purposes; the Field Activity type was determined by the system from the Field Activity Type Profile referenced for the SP Type, the SP's state, and the referenced user-defined customer event. All was working well until they realized that, in spite of the Severance Process getting cancelled (when a payment was made), the Post Cancel Algorithm was not executed to start a Reconnection Severance Process for the purpose of generating a reconnection field activity and reconnecting the service.

    Basically, the Post Cancel algorithm (if specified on a Severance Process Template) is triggered when a Severance Process gets cancelled because a credit transaction has affected/relieved a Service Agreement's debt. So what exactly was happening? This brings us to the actual question: what are the limitations of having a user-defined customer event?

    System-defined/base customer events are hard-coded across the entire system; there is an impact even if you remove a customer event entry from the Look Up. User-defined customer events, by contrast, are not recognized by the system anywhere else except in the severance process, as described above. There are a few programs with routines that first validate the completion of disconnection field activities raised as a result of the customer event CNP - Cut for Non-payment, in order to perform other associated actions. One such program is the Post Cancel Algorithm referenced on a Severance Process Template, generally used to reconnect services which were disconnected from another Severance Event, specifically CNP - Cut for Non-Payment.

    The post cancel algorithm provided by the product, SEV POST CAN, does the following (from the algorithm's description): "This algorithm is called after a severance process has been cancelled (typically because the debt was paid and the SA is no longer eligible to be on the severance process). It checks to see if the process has a completed 'disconnect' event and, if so, starts a reconnect process using the Reconnect Severance Process Template defined in the parameter." Note the condition about a completed 'disconnect' event: this algorithm implicitly checks for Field Activities in a completed status that were generated from Severance Events as a result of the CNP - Cut for Non-payment customer event.

    Looking back at the customer's issue, we can see that the Post Cancel algorithm was triggered, but it was not able to find any 'Completed' field activity related to CNP - Cut for Non-payment, and hence was not able to start a reconnection severance process. This was because a field activity had been generated and completed for the customer event CNPW - Cut for Non-payment of Water Services instead.

    To conclude, if you introduce new customer events, be aware that you should not extend or simulate base customer events (the ones included in the base product), as those are further used to provide and validate additional business functions.

    Read the article

  • Checksum Transformation

    The Checksum Transformation computes a hash value, the checksum, across one or more columns, returning the result in the Checksum output column. The transformation provides functionality similar to the T-SQL CHECKSUM function, but is encapsulated within SQL Server Integration Services, for use within the pipeline without code or a SQL Server connection. As featured in The Microsoft Data Warehouse Toolkit by Joy Mundy and Warren Thornthwaite from the Kimball Group; have a look at the book samples, especially the sample package for custom SCD handling.

    All input columns are passed through the transformation unaltered; those selected are used to generate the checksum, which is passed out through a single output column, Checksum. This does not restrict the number of columns available downstream from the transformation, as columns always flow through a transformation; the Checksum output column is in addition to all existing columns within the pipeline buffer. The Checksum Transformation uses an algorithm based on the .NET Framework GetHashCode method; it is not consistent with the T-SQL CHECKSUM() or BINARY_CHECKSUM() functions. The transformation does not support the following Integration Services data types: DT_NTEXT, DT_IMAGE and DT_BYTES.

    ChecksumAlgorithm Property
    The ChecksumAlgorithm property is defined with an enumeration. It was first added in v1.3.0, when the FrameworkChecksum was added. All previous algorithms are still supported for backward compatibility as ChecksumAlgorithm.Original (0).

      - Original - Original checksum function, with known issues around column separators and null columns. This was deprecated in the first SQL Server 2005 RTM release.
      - FrameworkChecksum - The hash function is based on the .NET Framework GetHash method for object types. This is based on the .NET Object.GetHashCode() method, which unfortunately differs between x86 and x64 systems. For that reason we now default to the CRC32 option.
      - CRC32 - Using a standard 32-bit cyclic redundancy check (CRC), this provides a more open implementation.

    The component is provided as an MSI file; however, to complete the installation, you will have to add the transformation to the Visual Studio toolbox by hand. This process has been described in detail in the related FAQ entry for How do I install a task or transform component? - just select Checksum from the SSIS Data Flow Items list in the Choose Toolbox Items window.

    Downloads
    The Checksum Transformation is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed.

      - Checksum Transformation for SQL Server 2005
      - Checksum Transformation for SQL Server 2008
      - Checksum Transformation for SQL Server 2012

    Version History

    SQL Server 2012
      - Version 3.0.0.27 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2010)

    SQL Server 2008
      - Version 2.0.0.27 - Fix for CRC-32 algorithm that inadvertently made it sort dependent. Fix for race condition which sometimes led to the error "Item has already been added. Key in dictionary: '79764919'". Fix for upgrade mappings between 2005 and 2008. (19 Oct 2010)
      - Version 2.0.0.24 - SQL Server 2008 release. Introduces the new CRC-32 algorithm, which is consistent across x86 and x64. The default algorithm is now CRC32. (29 Oct 2008)
      - Version 2.0.0.6 - SQL Server 2008 pre-release. This version was released by mistake as part of the site migration, and had known issues. (20 Oct 2008)

    SQL Server 2005
      - Version 1.5.0.43 - Fix for CRC-32 algorithm that inadvertently made it sort dependent. Fix for race condition which sometimes led to the error "Item has already been added. Key in dictionary: '79764919'". (19 Oct 2010)
      - Version 1.5.0.16 - Introduces the new CRC-32 algorithm, which is consistent across x86 and x64. The default algorithm is now CRC32. (20 Oct 2008)
      - Version 1.4.0.0 - Installer refresh only. (22 Dec 2007)
      - Version 1.4.0.0 - Refresh for minor UI enhancements. (5 Mar 2006)
      - Version 1.3.0.0 - SQL Server 2005 RTM. The checksum algorithm has changed to improve cardinality when calculating multiple column checksums. The original algorithm is still available for backward compatibility. Fixed custom UI bug with Output column name not persisting. (10 Nov 2005)
      - Version 1.2.0.1 - SQL Server 2005 IDW 15 June CTP. A user interface is provided, as well as the ability to change the checksum output column name. (29 Aug 2005)
      - Version 1.0.0 - Public Release (Beta). (30 Oct 2004)

    Screenshot
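
    For reference, the reason a CRC-32 gives the same value on x86 and x64 is that it is defined purely over the input bytes rather than over object hash codes. A rough Python sketch of the idea behind the CRC32 option (the UTF-8 encoding, the separator and the null handling here are assumptions, not the component's actual byte layout):

      import zlib

      # Chain a platform-stable CRC-32 over the selected columns' values.
      def row_checksum(values, sep=b"\x00"):
          crc = 0
          for value in values:
              data = b"" if value is None else str(value).encode("utf-8")
              crc = zlib.crc32(data + sep, crc)
          return crc & 0xFFFFFFFF          # identical result on x86 and x64

      print(row_checksum(["Jane", "Smith", 1975]))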

    Read the article

  • Implementing RSA-SHA1 signature algorithm in Java (creating a private key for use with OAuth RSA-SHA1)

    - by The Elite Gentleman
    Hi everyone. As you know, OAuth can support the RSA-SHA1 signature method. I have an OAuthSignature interface with the following method:

      public String sign(String data, String consumerSecret, String tokenSecret) throws GeneralSecurityException;

    I have successfully implemented and tested HMAC-SHA1 (which OAuth supports) as well as the PLAINTEXT "signature". From searching Google, it seems I have to create a private key if I want to use a SHA1withRSA signature. Sample code:

      /** Signs the data with the given key and the provided algorithm. */
      private static byte[] sign(PrivateKey key, String data) throws GeneralSecurityException {
          Signature signature = Signature.getInstance("SHA1withRSA");
          signature.initSign(key);
          signature.update(data.getBytes());
          return signature.sign();
      }

    Now, how can I take the OAuth key (which is key = consumerSecret&tokenSecret) and create a PrivateKey to use with the SHA1withRSA signature? Thanks
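
    A note on the underlying idea: for RSA-SHA1 the private key is not derived from consumerSecret&tokenSecret (that construction belongs to HMAC-SHA1 and PLAINTEXT); it is normally loaded from a key file or keystore for the key pair registered with the service provider. A rough sketch of that flow, shown here in Python with the third-party cryptography package rather than JCA (the PEM/PKCS#8 file name is an assumption about how the key was provisioned):

      from cryptography.hazmat.primitives import hashes, serialization
      from cryptography.hazmat.primitives.asymmetric import padding

      # Load the client's RSA private key from an unencrypted PEM file.
      with open("rsa_private_key.pem", "rb") as key_file:
          private_key = serialization.load_pem_private_key(key_file.read(), password=None)

      def sign_rsa_sha1(base_string: str) -> bytes:
          # RSA-SHA1 signs the signature base string with the client's RSA key;
          # consumerSecret and tokenSecret play no part in this computation.
          return private_key.sign(base_string.encode("utf-8"),
                                  padding.PKCS1v15(),
                                  hashes.SHA1())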

    Read the article

  • What do the ddx and ddy values do in this AABB ray intersect algorithm?

    - by Paz
    Does anyone know what the ddx and ddy values do in the AABB ray intersect algorithm? Taken from the following site http://www.blitzbasic.com/codearcs/codearcs.php?code=1029 (shown below).

      Local txmin#,txmax#,tymin#,tymax#
      // rox, rdx are the ray origin on the x axis, and ray delta on the x axis ... y-axis is roy and rdy
      Local ddx# = 1.0/(rox-rdx)
      Local ddy# = 1.0/(roy-rdy)
      If ddx >= 0
          txmin = (bminx - rox) * ddx
          txmax = (bmaxx - rox) * ddx
      Else
          txmin = (bmaxx - rox) * ddx
          txmax = (bminx - rox) * ddx
      EndIf
      If ddy >= 0
          tymin = (bminy - roy) * ddy
          tymax = (bmaxy - roy) * ddy
      Else
          tymin = (bmaxy - roy) * ddy
          tymax = (bminy - roy) * ddy
      EndIf
      If ( (txmin > tymax) Or (tymin > txmax) ) Return 0
      If (tymin > txmin) txmin = tymin
      If (tymax < txmax) txmax = tymax
      Local tzmin#,tzmax#
      Local ddz# = 1.0/(roz-rdz)
      If ddz >= 0
          tzmin = (bminz - roz) * ddz
          tzmax = (bmaxz - roz) * ddz
      Else
          tzmin = (bmaxz - roz) * ddz
          tzmax = (bminz - roz) * ddz
      EndIf
      If (txmin > tzmax) Or (tzmin > txmax) Return 0
      Return 1
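
    In slab-test terms, ddx and ddy are the reciprocals of the ray's per-axis extent (essentially 1/direction, up to sign, depending on how rdx and rdy are defined): multiplying a plane-to-origin distance such as (bminx - rox) by the reciprocal converts it into a parametric t along the ray, and the sign test picks which face of the box is the near one on that axis. A small 2D sketch of the same idea (Python; hypothetical variable names, and the zero-direction case is ignored):

      # inv_dx / inv_dy play the role of ddx / ddy: (plane - origin) * inv is a
      # parametric t along the ray, and min/max ordering handles the sign.
      def ray_hits_aabb(ox, oy, dx, dy, bminx, bminy, bmaxx, bmaxy):
          inv_dx, inv_dy = 1.0 / dx, 1.0 / dy
          tx1, tx2 = (bminx - ox) * inv_dx, (bmaxx - ox) * inv_dx
          ty1, ty2 = (bminy - oy) * inv_dy, (bmaxy - oy) * inv_dy
          txmin, txmax = min(tx1, tx2), max(tx1, tx2)
          tymin, tymax = min(ty1, ty2), max(ty1, ty2)
          return max(txmin, tymin) <= min(txmax, tymax)

      print(ray_hits_aabb(0.0, 0.0, 1.0, 1.0, 2.0, 1.0, 4.0, 3.0))   # True
      print(ray_hits_aabb(0.0, 0.0, 1.0, -1.0, 2.0, 1.0, 4.0, 3.0))  # False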

    Read the article

  • Please help: I have a matrix and I want to use a combination algorithm to generate arrays of length 6

    - by user313429
    First, thanks a lot for your help. The following is my matrix; I want to implement a combination algorithm between multiple arrays in LINQ for this matrix:

      int[,] cj = { { 10, 23, 16, 20 }, { 22, 13, 1, 33 }, { 7, 19, 31, 12 }, { 30, 14, 21, 4 }, { 2, 29, 32, 6 }, { 18, 26, 17, 8 }, { 25, 11, 5, 28 }, { 24, 3, 15, 27 } };

    And the extension method:

      public static IEnumerable<IEnumerable<T>> Combinations<T>(this IEnumerable<T> elements, int k)
      {
          return k == 0
              ? new[] { new T[0] }
              : elements.SelectMany((e, i) => elements.Skip(i + 1).Combinations(k - 1).Select(c => (new[] { e }).Concat(c)));
      }

    The above method gives an error in my project: "System.Collections.Generic.IEnumerable' does not contain a definition for 'Combinations' and no extension method 'Combinations' accepting a first argument of type 'System.Collections.Generic.IEnumerable' could be found (are you missing a using directive or an assembly reference?)". I use .NET Framework 3.5 -- what is the reason for it?
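
    That compiler error usually means the extension method is not visible at the call site: in C#, extension methods must be declared in a non-generic static class, and the namespace containing that class must be imported with a using directive where Combinations is called. For the combination part itself, here is a sketch of the same idea in Python with itertools (an illustration, not a fix for the C# code):

      from itertools import chain, combinations

      # Flatten the matrix and enumerate every 6-element combination lazily.
      cj = [[10, 23, 16, 20], [22, 13, 1, 33], [7, 19, 31, 12], [30, 14, 21, 4],
            [2, 29, 32, 6], [18, 26, 17, 8], [25, 11, 5, 28], [24, 3, 15, 27]]

      flat = list(chain.from_iterable(cj))       # 32 values
      six_long = combinations(flat, 6)           # C(32, 6) = 906,192 tuples in total
      print(next(six_long))                      # (10, 23, 16, 20, 22, 13)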

    Read the article

  • What programming language is used to design Google's algorithm?

    - by AKN
    It is known that Google has the best search and indexing algorithms. They also have good relevancy, and they are quick at getting the latest results in. All that's fine. What programming languages (C, C++, Java, etc.) and databases (Oracle, MySQL, etc.) have they used in achieving this, given that they have to manipulate a huge volume of data quickly and effectively? Though I'm not looking for their in-depth architecture (in case it violates their company policies), an overview of all such things could be useful. Anybody, please add your valuable suggestions and insight on this.

    Read the article

  • Is there an algorithm for determining how much daylight there is?

    - by Pharaun
    Is there a function/algorithm that allows me to input the latitude and the approximate orbital position of the Earth so that I can determine how long the sun is up? I.e., during the winter it would show that the sun is only up a few hours in the far northern hemisphere. I did some basic Google searching and didn't find much, so I was thinking that I might have to do some trigonometry to calculate how much the Earth is inclined toward or away from the sun, then use that information along with the latitude to figure out how much sunshine a site would be getting.
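
    The standard closed form for this is the sunrise equation, cos(w0) = -tan(latitude) * tan(declination), where the declination depends on the day of year; day length is then 2*w0 converted to hours. A rough Python sketch (the declination formula is a simple +/-23.44 degree cosine approximation, and refraction and orbital eccentricity are ignored):

      import math

      # Day length from the sunrise equation, cos(w0) = -tan(lat) * tan(decl).
      def day_length_hours(latitude_deg, day_of_year):
          decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
          x = -math.tan(math.radians(latitude_deg)) * math.tan(math.radians(decl))
          if x >= 1.0:
              return 0.0        # polar night
          if x <= -1.0:
              return 24.0       # midnight sun
          return 2.0 * math.degrees(math.acos(x)) / 15.0   # 15 deg of hour angle = 1 hour

      print(round(day_length_hours(65.0, 355), 1))   # ~2.9 hours at 65N in late December
      print(round(day_length_hours(65.0, 172), 1))   # ~21.1 hours near the June solstice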

    Read the article

  • Is there any algorithm for turning simple Haxe code into C/C++ code files?

    - by Ole Jak
    I have a simple Haxe app like:

      class Main {
          public static function main() {
              trace("hello world");
          }
      }

    I know how to compile such an app for Windows (not as SWF but as an app from pure C/C++) (and you can see how here, but be warned they use hxcpp 0.4). The problem is: I do not want to compile the app for Windows Vista, 7 or XP. I want to get PURE C/C++ code (ideally in one place, as one project) so that I can, for example, compile that code for Windows Mobile or wherever I want. So is there any algorithm for turning simple Haxe code into C/C++ code files?

    Read the article

  • How to change password hashing algorithm when using spring security?

    - by harry
    I'm working on a legacy Spring MVC based web application which is using a - by current standards - inappropriate hashing algorithm. Now I want to gradually migrate all hashes to bcrypt. My high-level strategy is:

      1. New hashes are generated with bcrypt by default.
      2. When a user successfully logs in and still has a legacy hash, the app replaces the old hash with a new bcrypt hash.

    What is the most idiomatic way of implementing this strategy with Spring Security? Should I use a custom Filter or my own AccessDecisionManager or ...?
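
    Independently of the Spring Security wiring, the core of the strategy is "verify with whatever hash is stored, then upgrade it while the plaintext is available". A language-agnostic sketch of that flow (Python with the third-party bcrypt package; the legacy MD5 format and the save callback are assumptions, and this is not Spring Security code):

      import hashlib
      import bcrypt   # third-party 'bcrypt' package

      # Verify against whichever hash format is stored; on a successful login
      # with a legacy hash, re-hash the plaintext with bcrypt and persist it.
      def check_and_upgrade(user, plaintext, save):
          stored = user["password_hash"]
          if stored.startswith("$2"):                                  # bcrypt hash
              return bcrypt.checkpw(plaintext.encode(), stored.encode())
          if hashlib.md5(plaintext.encode()).hexdigest() == stored:    # legacy hash
              user["password_hash"] = bcrypt.hashpw(plaintext.encode(),
                                                    bcrypt.gensalt()).decode()
              save(user)                                               # persist upgrade
              return True
          return False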

    Read the article

  • Recursive algorithm for coalescing / collapsing a list of dates into ranges

    - by Dycey
    Given a list of dates:

      12/07/2010
      13/07/2010
      14/07/2010
      15/07/2010
      12/08/2010
      13/08/2010
      14/08/2010
      15/08/2010
      19/08/2010
      20/08/2010
      21/08/2010

    I'm looking for pointers towards a recursive pseudocode algorithm (which I can translate into a FileMaker custom function) for producing a list of ranges, i.e. 12/07/2010 to 15/07/2010, 12/08/2010 to 15/08/2010, 19/08/2010 to 21/08/2010. The list is presorted and de-duplicated. I've tried starting from the first value and working forwards, and from the last value and working backwards, but I just can't seem to get it to work. Having one of those frustrating days... It would be nice if the signature was something like CollapseDateList( dateList, separator, ellipsis ) :-)
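
    For what it's worth, here is a recursive sketch of the collapse in Python (the function name and parameters mirror the CollapseDateList signature suggested above, but the implementation is only an illustration): consume the head of the sorted, de-duplicated list, extend the run while each next date is exactly one day later, emit "start to end", then recurse on the remainder.

      from datetime import date, timedelta

      def collapse_date_list(dates, separator=", ", ellipsis=" to "):
          if not dates:
              return ""
          start = end = dates[0]
          rest = dates[1:]
          # Extend the current run while the next date is consecutive.
          while rest and rest[0] == end + timedelta(days=1):
              end, rest = rest[0], rest[1:]
          if start == end:
              head = start.strftime("%d/%m/%Y")
          else:
              head = start.strftime("%d/%m/%Y") + ellipsis + end.strftime("%d/%m/%Y")
          tail = collapse_date_list(rest, separator, ellipsis)   # recurse on the rest
          return head if not tail else head + separator + tail

      dates = [date(2010, 7, d) for d in range(12, 16)] + \
              [date(2010, 8, d) for d in range(12, 16)] + \
              [date(2010, 8, d) for d in range(19, 22)]
      print(collapse_date_list(dates))
      # 12/07/2010 to 15/07/2010, 12/08/2010 to 15/08/2010, 19/08/2010 to 21/08/2010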

    Read the article

  • Don't Miss "Transform Field Service Delivery with Oracle Real-Time Scheduler"

    - by ruth.donohue
    Field resources are an expensive element in the service equation. Maximizing the scheduling and routing of these resources is critical in reducing costs, increasing profitability, and improving the customer experience. Oracle Real-Time Scheduler creates cost-optimized plans and schedules for service technicians that increase operational efficiencies and improve margins. It enhances Oracle's Siebel Field Service with real-time scheduling and dispatch capabilities that ensure service requests are allocated efficiently and service levels are honored. Join our live Webcast to learn how your organization can leverage Oracle Real-Time Scheduler to:

      - Increase operational efficiency with real-time scheduling that enables field service technicians to handle more calls per day and reduce travel mileage
      - Resolve issues faster with dynamic work flows that ensure you have the right technician with the right skill set for the right job
      - Improve the customer experience with real-time planning that optimizes field technician routing, reduces customer wait times, and minimizes missed SLAs

    Date: Thursday, March 10, 2011
    Time: 8:30 am PT / 11:30 am ET / 4:30 pm UK / 5:30 pm CET

    Click here to register now.

    Technorati Tags: Siebel Field Service, Oracle Real-Time Scheduler

    Read the article
