Search Results

Search found 5267 results on 211 pages for 'use cases'.

Page 9/211 | < Previous Page | 5 6 7 8 9 10 11 12 13 14 15 16  | Next Page >

  • Can this method to convert a name to proper case be improved?

    - by Kelsey
    I am writing a basic function to convert millions of names (one-time batch process) from their current form, which is all upper case, to a proper mixed case. I came up with the following so far:

        public string ConvertToProperNameCase(string input)
        {
            TextInfo textInfo = new CultureInfo("en-US", false).TextInfo;
            char[] chars = textInfo.ToTitleCase(input.ToLower()).ToCharArray();
            // Re-capitalize the letter that follows an apostrophe or a hyphen.
            for (int i = 0; i + 1 < chars.Length; i++)
            {
                if (chars[i].Equals('\'') || chars[i].Equals('-'))
                {
                    chars[i + 1] = Char.ToUpper(chars[i + 1]);
                }
            }
            return new string(chars);
        }

    It works in most cases, such as:

        JOHN SMITH      -> John Smith
        SMITH, JOHN T   -> Smith, John T
        JOHN O'BRIAN    -> John O'Brian
        JOHN DOE-SMITH  -> John Doe-Smith

    There are some edge cases that do not work, like:

        JASON MCDONALD   -> Jason Mcdonald   (correct: Jason McDonald)
        OSCAR DE LA HOYA -> Oscar De La Hoya (correct: Oscar de la Hoya)
        MARIE DIFRANCO   -> Marie Difranco   (correct: Marie DiFranco)

    These are not captured, and I am not sure I can handle all these odd edge cases. Can anyone think of anything I could change or add to capture more of them? I am sure there are tons of edge cases I am not even thinking of. All casing should follow North American conventions, meaning that if a country expects a capitalization format that differs from the North American one, the North American format takes precedence.
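
    One common approach for the remaining edge cases is an explicit exception list, since prefixes like "Mc" follow a rule but particles like "de la" really need a lookup. A minimal sketch along those lines (the particle list is illustrative, not exhaustive, and the apostrophe/hyphen handling from the original method is omitted for brevity):

        using System;
        using System.Globalization;

        static class NameCasing
        {
            // Illustrative only - a production list would be much longer.
            static readonly string[] Particles = { "de", "la", "da", "van", "von" };

            public static string ToProperCase(string input)
            {
                TextInfo ti = new CultureInfo("en-US", false).TextInfo;
                string[] words = ti.ToTitleCase(input.ToLower()).Split(' ');
                for (int i = 0; i < words.Length; i++)
                {
                    // Lower-case particles ("Oscar de la Hoya"), but never the first word.
                    if (i > 0 && Array.IndexOf(Particles, words[i].ToLower()) >= 0)
                        words[i] = words[i].ToLower();
                    // Re-capitalize the letter after "Mc" ("Jason McDonald").
                    else if (words[i].StartsWith("Mc") && words[i].Length > 2)
                        words[i] = "Mc" + Char.ToUpper(words[i][2]) + words[i].Substring(3);
                }
                return String.Join(" ", words);
            }
        }

    Names like "MARIE DIFRANCO" cannot be recovered by any rule, so a per-name override table is usually the only reliable fix for those.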

    Read the article

  • Handling close-to-impossible collisions on should-be-unique values

    - by balpha
    There are many systems that depend on the uniqueness of some particular value. Anything that uses GUIDs comes to mind (e.g. the Windows registry or other databases), but also things that create a hash from an object to identify it and thus need this hash to be unique. A hash table usually doesn't mind if two objects have the same hash, because the hashing is just used to break down the objects into categories, so that on lookup not all objects in the table, but only those objects in the same category (bucket), have to be compared for identity to the searched object. Other implementations, however, (seem to) depend on the uniqueness. My example (that's what led me to ask this) is Mercurial's revision IDs. An entry on the Mercurial mailing list correctly states: "The odds of the changeset hash colliding by accident in your first billion commits is basically zero. But we will notice if it happens. And you'll get to be famous as the guy who broke SHA1 by accident." But even the tiniest probability doesn't mean impossible. Now, I don't want an explanation of why it's totally okay to rely on the uniqueness (this has been discussed here, for example). This is very clear to me. Rather, I'd like to know (maybe by means of examples from your own work):

    - Are there any best practices as to covering these improbable cases anyway?
    - Should they be ignored, because it's more likely that particularly strong solar winds lead to faulty hard disk reads?
    - Should they at least be tested for, if only to fail with an "I give up, you have done the impossible" message to the user?
    - Or should even these cases get handled gracefully?

    For me, especially the following are interesting, although they are somewhat touchy-feely:

    - If you don't handle these cases, what do you do against gut feelings that don't listen to probabilities?
    - If you do handle them, how do you justify this work (to yourself and others), considering there are more probable cases you don't handle, like a supernova?
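
    If you do decide to test for the impossible case, the check itself is cheap: when a hash already exists in the store, compare the stored content with the new content and fail loudly if they differ. A minimal, hypothetical sketch of that idea (the ContentStore class and its in-memory dictionary are illustrations, not any particular system's API):

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Security.Cryptography;

        class ContentStore
        {
            private readonly Dictionary<string, byte[]> byHash = new Dictionary<string, byte[]>();

            public string Put(byte[] content)
            {
                string key;
                using (var sha1 = SHA1.Create())
                    key = BitConverter.ToString(sha1.ComputeHash(content));

                byte[] existing;
                if (byHash.TryGetValue(key, out existing))
                {
                    // Same hash, same content: a normal duplicate insert.
                    if (existing.SequenceEqual(content))
                        return key;
                    // Same hash, different content: the "impossible" collision.
                    // Refuse to continue rather than silently corrupt data.
                    throw new InvalidOperationException(
                        "Hash collision detected for " + key + "; refusing to overwrite.");
                }

                byHash[key] = content;
                return key;
            }
        }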

    Read the article

  • Random forests for short texts

    - by Jasie
    Hi all, I've been reading about Random Forests (1, 2) because I think it'd be really cool to be able to classify a set of 1,000 sentences into pre-defined categories. I'm wondering if someone can explain the algorithm to me better; I find the papers a bit dense. Here's the gist from 1:

    Overview: We assume that the user knows about the construction of single classification trees. Random Forests grows many classification trees. To classify a new object from an input vector, put the input vector down each of the trees in the forest. Each tree gives a classification, and we say the tree "votes" for that class. The forest chooses the classification having the most votes (over all the trees in the forest). Each tree is grown as follows:

    - If the number of cases in the training set is N, sample N cases at random, but with replacement, from the original data. This sample will be the training set for growing the tree.
    - If there are M input variables, a number m << M is specified such that at each node, m variables are selected at random out of the M and the best split on these m is used to split the node. The value of m is held constant during the forest growing.
    - Each tree is grown to the largest extent possible. There is no pruning.

    So, does this look right? I'd have N = 1,000 training cases (sentences) and M = 100 variables (let's say there are only 100 unique words across all sentences), so the input vector is a bit vector of length 100 corresponding to each word. I randomly sample N = 1,000 cases (with replacement) to build trees from. I pick some small number of input variables m << M, let's say 10, to build a tree off of. Do I build tree nodes randomly, using all m input variables? How many classification trees do I build? Thanks for the help!
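
    As a point of reference, the two sampling steps in the quoted gist are the bootstrap sample per tree and a fresh random subset of m features at each node (not one fixed set of 10 per tree). A minimal sketch of just those two steps, assuming sentences are represented as the bit vectors described above (class and method names are illustrative):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class RandomForestSampling
        {
            static readonly Random Rng = new Random();

            // Step 1: draw N cases with replacement from the original training set;
            // this becomes the training set for one tree.
            public static List<bool[]> BootstrapSample(IList<bool[]> trainingSet)
            {
                int n = trainingSet.Count;
                var sample = new List<bool[]>(n);
                for (int i = 0; i < n; i++)
                    sample.Add(trainingSet[Rng.Next(n)]);
                return sample;
            }

            // Step 2: at every node, pick m of the M feature indices at random;
            // the best split is then searched only among these m features.
            public static int[] PickFeatureSubset(int totalFeatures, int m)
            {
                return Enumerable.Range(0, totalFeatures)
                                 .OrderBy(_ => Rng.Next())
                                 .Take(m)
                                 .ToArray();
            }
        }

    The trees themselves are ordinary classification trees grown to full depth with no pruning, and the forest's prediction for a new sentence is the majority vote over all trees. A few hundred trees is a common starting point, since adding more trees does not overfit the way a single deeper tree can.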

    Read the article

  • When should we use private variables and when should we use properties?

    - by Shantanu Gupta
    In most cases we create a private variable and its corresponding public property and use them to implement our functionality. Everyone has a different approach: some people use properties everywhere, while others use the private variables inside the class (since they are private) and expose them to the outside world through properties. Suppose I take a scenario, say insertion into a database. I create some parameters that need to be initialized, so I create 10 private variables and their corresponding public properties, which look like:

        private string name;
        public string Name
        {
            get { return name; }
            set { name = value; }
        }

    and so on. In these cases, what should be used internally: the variables or the properties? And what about cases like the following?

        public string Name
        {
            get { return name; }
            set { name = value > 5 ? 5 : 0; } // or any action can be done; this is just an example
        }

    In such cases, what should be done?
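
    For what it's worth, the usual split comes down to whether the accessor contains logic. A minimal sketch of both situations (the Person/Score names and the 5/0 clamping are just illustrations, with the clamping moved to an int so the example compiles):

        // Inside the class, going through the property keeps any validation in one
        // place; touching the raw field bypasses it.
        class Person
        {
            // No extra logic: an auto-implemented property, no explicit backing field.
            public string Name { get; set; }

            // Logic in the setter: a backing field is required.
            private int score;
            public int Score
            {
                get { return score; }
                set { score = value > 5 ? 5 : 0; } // mirrors the clamping example above
            }

            public void Reset()
            {
                Score = 0;    // using the property: the rule in the setter always applies
                // score = 7; // writing the field directly would silently skip that rule
            }
        }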

    Read the article

  • Nested and complicated select statement

    - by Selase
    What I want to do here is simple: display an investigator's ID and the corresponding name. That can be easily done from the users table by selecting based on the user type. However, I want to select only some of the investigators. The idea is that investigators are assigned to an exhibit for them to investigate, and one investigator can be assigned to a maximum of 3 cases only. Now, when assigning investigators, I want to write a select statement that retrieves only investigator IDs that have been assigned to 2 or fewer cases. I have included the exhibit and users tables that show sample data below. I sort of have an idea that I will have to first pick out all the investigators by their ID from the users list, then filter them through the exhibit table by dropping those assigned to 3 cases and keeping just those with two or fewer, and afterwards use those IDs to select the investigators' names. The big question is: how do I write the statement?
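
    A common shape for this kind of filter is a sub-query that counts assignments per investigator and excludes anyone already at the limit. A hedged sketch against the tables described above, kept as a SQL string the way it might sit in application code (the column names userID, name, userType and investigatorID are assumptions about the schema, so adjust them to the real one):

        static class InvestigatorQueries
        {
            // Hypothetical column names - adjust to the actual users/exhibit schema.
            public const string AvailableInvestigators = @"
                SELECT u.userID, u.name
                FROM   users u
                WHERE  u.userType = 'Investigator'
                AND    u.userID NOT IN (
                           SELECT e.investigatorID
                           FROM   exhibit e
                           GROUP  BY e.investigatorID
                           HAVING COUNT(*) >= 3
                       );";
        }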

    Read the article

  • Oracle Flashback Technologies - Overview

    - by Sridhar_R-Oracle
    Oracle Flashback Technologies - Introduction

    In his May 29th 2014 blog, my colleague Joe Meeks introduced Oracle Maximum Availability Architecture (MAA) and discussed both planned and unplanned outages. Let's take a closer look at unplanned outages. These can be caused by physical failures (e.g., server, storage, network, file deletion, physical corruption, site failures) or by logical failures - cases where all components and files are physically available, but data is incorrect or corrupt. These logical failures are usually caused by human errors or application logic errors. This blog series focuses on these logical errors - what causes them and how to address and recover from them using Oracle Database Flashback. In this introductory blog post, I'll provide an overview of the Oracle Database Flashback technologies and will discuss the features in detail in future blog posts. Let's get started.

    We are all human beings (unless a machine is reading this), and making mistakes is a part of what we do... often what we do best! We "fat finger", we spill drinks on keyboards, we unplug the wrong cables, etc. In addition, many of us, in our lives as DBAs or developers, must have observed, caused, or corrected one or more of the following unpleasant events:

    - Accidentally updated a table with wrong values
    - Performed a batch update that went wrong, due to logical errors in the code
    - Dropped a table

    How do DBAs typically recover from these types of errors? First, data needs to be restored and recovered to the point in time when the error occurred (incomplete or point-in-time recovery). Moreover, depending on the type of fault, it's possible that some services - or even the entire database - would have to be taken down during the recovery process.

    Apart from error conditions, there are other questions that need to be addressed as part of the investigation. For example, what did the data look like in the morning, prior to the error? What were the various changes to the row(s) between two timestamps? Who performed the transaction and how can it be reversed? Oracle Database includes built-in Flashback technologies, with features that address these challenges and questions, and enable you to perform faster, easier, and more convenient recovery from logical corruptions.

    History

    Flashback Query, the first Flashback technology, was introduced in Oracle 9i. It provides a simple, powerful and completely non-disruptive mechanism for data verification and recovery from logical errors, and enables users to view the state of data at a previous point in time. Flashback technologies were further enhanced in Oracle 10g to provide fast, easy recovery at the database, table, row, and even transaction level. Oracle Database 11g introduced an innovative method to manage and query long-term historical data with Flashback Data Archive. The 11g release also introduced Flashback Transaction, which provides an easy, one-step operation to back out a transaction. Oracle Database versions 11.2.0.2 and beyond further enhanced the performance of these features. Note that all the features listed here work without requiring any kind of restore operation. In addition, Flashback features are fully supported with the new multi-tenant capabilities introduced with Oracle Database 12c.

    Flashback Features

    - Oracle Flashback Database enables point-in-time recovery of the entire database without requiring a traditional restore and recovery operation. It rewinds the entire database to a specified point in time in the past by undoing all the changes that were made since that time.
    - Oracle Flashback Table enables an entire table or a set of tables to be recovered to a point in time in the past.
    - Oracle Flashback Drop enables accidentally dropped tables and all dependent objects to be restored.
    - Oracle Flashback Query enables data to be viewed at a point in time in the past. This feature can be used to view and reconstruct data that was lost due to unintentional change(s) or deletion(s). It can also be used to build self-service error correction into applications, empowering end users to undo and correct their errors.
    - Oracle Flashback Version Query offers the ability to query the historical changes to data between two points in time or system change numbers (SCNs).
    - Oracle Flashback Transaction Query enables changes to be examined at the transaction level. This capability can be used to diagnose problems, perform analysis, audit transactions, and even revert the transaction by undoing its SQL.
    - Oracle Flashback Transaction is a procedure used to back out a transaction and its dependent transactions.

    Flashback technologies eliminate the need for a traditional restore and recovery process to fix logical corruptions or make enquiries. Using these technologies, you can recover from the error in the same amount of time it took to generate the error. All the Flashback features can be accessed either via the SQL command line or via Enterprise Manager. Most of the Flashback technologies depend on the available UNDO to retrieve older data.

    The following describes the various Flashback technologies - their purpose, example syntax, and the situations where each individual technology can be used.

    Error investigation related (the purpose is to investigate what went wrong and what the values were at certain points in time):

    - Flashback Query (select .. as of SCN | Timestamp): helps to see the value of a row or set of rows at a point in time
    - Flashback Version Query (select .. versions between SCN | Timestamp and SCN | Timestamp): helps determine how the value evolved between certain SCNs or timestamps
    - Flashback Transaction Query (select .. XID=): helps to understand how the transaction caused the changes

    Error correction related (the purpose is to fix the error and correct the problems):

    - Flashback Table (flashback table .. to SCN | Timestamp): rewinds the table to a particular timestamp or SCN to reverse unwanted updates
    - Flashback Drop (flashback table .. to before drop): undrops or undeletes a table
    - Flashback Database (flashback database to SCN | Restore Point): this is the rewind button for Oracle databases; you can revert the entire database to a particular point in time. It is a fast way to perform a PITR (point-in-time recovery).
    - Flashback Transaction (DBMS_FLASHBACK.TRANSACTION_BACKOUT(XID..)): reverses a transaction and its related transactions

    Advanced use cases

    Flashback technology is integrated into Oracle Recovery Manager (RMAN) and Oracle Data Guard. So, apart from the basic use cases mentioned above, the following use cases are addressed using Oracle Flashback:

    - Block media recovery by RMAN, to perform block-level recovery
    - Snapshot Standby, where the standby is temporarily converted to a read/write environment for testing, backup, or migration purposes
    - Re-instating an old primary in a Data Guard environment, which avoids the need to restore an old backup and perform a recovery to make it a new standby
    - Guaranteed Restore Points, to bring back the entire database to an older point in time in a guaranteed way

    and so on. I hope this introductory overview helps you understand how Flashback features can be used to investigate and recover from logical errors. As mentioned earlier, I will take a deeper dive into some of the critical Flashback features in my upcoming blogs and address common use cases.

    Read the article

  • Augmenting your Social Efforts via Data as a Service (DaaS)

    - by Mike Stiles
    The following is the 3rd in a series of posts on the value of leveraging social data across your enterprise by Oracle VP Product Development Don Springer and Oracle Cloud Data and Insight Service Sr. Director Product Management Niraj Deo. In this post, we will discuss the approach and value of integrating additional "public" data via a cloud-based Data-as-a-Service platform (or DaaS) to augment your Socially Enabled Big Data Analytics and CX Management.

    Let's assume you have a functional Social-CRM platform in place. You are now successfully and continuously listening and learning from your customers and key constituents in Social Media, you are identifying relevant posts and following up with direct engagement where warranted (1:1, 1:community, 1:all), and you are starting to integrate signals for communication into your appropriate Customer Experience (CX) Management systems, as well as insights for analysis in your business intelligence application. What is the next step?

    Augmenting Social Data with other Public Data for More Advanced Analytics

    When we say advanced analytics, we are talking about understanding causality and correlation from a wide variety, volume and velocity of data to Key Performance Indicators (KPIs) to achieve and optimize business value, and in some cases to predict future performance to make appropriate course corrections and change the outcome to your advantage while you can. The data to acquire, process and analyze for this is very nuanced:

    - It can vary across structured, semi-structured, and unstructured data
    - It can span content, profile, and communities-of-profiles data
    - It is increasingly public, curated and user generated

    The key is not just getting the data, but making it value-added data and using it to help discover the insights to connect to and improve your KPIs. As we spend time working with our larger customers on advanced analytics, we have seen a need arise for more business applications to have the ability to ingest and use "quality" curated, social, transactional reference data and corresponding insights. The challenge for the enterprise has been getting this data inline into an easily accessible system and providing the contextual integration of the underlying data, enriched with insights, to be exported into the enterprise's business applications. The following diagram shows the requirements for this next-generation data and insights service, or DaaS. Some quick points on these requirements:

    - Public Data, which in this context is about Common Business Entities, such as:
      - Customers, Suppliers, Partners, Competitors (all are organizations)
      - Contacts, Consumers, Employees (all are people)
      - Products, Brands
    - This data can be broadly categorized incrementally as:
      - Base Utility data (address, industry classification)
      - Public Master Reference data (trade style, hierarchy)
      - Social/Web data (News, Feeds, Graph)
      - Transactional Data generated by enterprise processes, workflows etc.
    - This data has traits of high volume, variety, velocity etc., and the technology needed to efficiently integrate it for your needs includes:
      - Change management of Public Reference Data across all categories
      - Applied Big Data to extract static as well as real-time insights
      - Knowledge Diagnostics and Data Mining

    As you consider how to deploy this solution, many of our customers will be using an online "cloud" service that provides quality data and insights uniformly to all their necessary applications. In addition, they are requesting a service that:

    - Is agile and easy to use: applications integrated with the service can obtain data on demand, quickly and simply
    - Is cost-effective: pre-integrated into applications so customers don't have to do the integration themselves
    - Has high data quality: single-point access to reference data for data quality and linkages to transactional, curated and social data
    - Supports data governance: becomes more manageable and cost-effective since control of data privacy and compliance can be enforced in a centralized place

    Data-as-a-Service (DaaS)

    Just as the cloud has transformed and now offers a better path for how an enterprise manages its IT, from infrastructure to platform to software (IaaS, PaaS, and SaaS), the next step is data (DaaS). Over the last 3 years, we have seen the market begin to offer a cloud-based data service and gain initial traction. On one side of the DaaS continuum, we see an "appliance" type of service that provides a single, reliable source of accurate business data plus social information about accounts, leads, contacts, etc. On the other side of the continuum we see more of an online market "exchange" approach, where ISVs and Data Publishers can publish and sell premium datasets within the exchange, with the exchange providing a rich set of web interfaces to improve the ease of data integration. Why the difference? It depends on the provider's philosophy on how fast the rate of commoditization of certain data types will occur. How do you decide the best approach? Our perspective, as shown in the diagram below, is that the enterprise should develop an elastic schema to support multi-domain applicability. This allows the enterprise to take the most flexible approach to harness the speed and breadth of public data to achieve value.

    The key tenet of the proposed approach is that an enterprise carefully federates common utility and master reference data end points, mobility considerations and content processing, so that they are pervasively available. One way you may already be familiar with this approach is in how you do Address Verification treatments for accounts, contacts etc. If you design and revise this service in such a way that it is also easily available to social analytic needs, you could extend this to launch geo-location based social use cases (marketing, sales etc.). Our fundamental belief is that value-added data, achieved through enrichment with specialized algorithms as well as applying business "know-how" to weight-factor KPIs based on innovative combinations across an ever-increasing variety, volume and velocity of data, will be where real value is achieved. Essentially, Data-as-a-Service becomes a single entry point for the ever-increasing richness and volume of public data, with enrichment and combined capabilities to extract and integrate the right data from the right sources with the right factoring at the right time for faster decision-making and action within your core business applications. As more data becomes available (and in many cases commoditized), this value-added data processing approach will provide you with ongoing competitive advantage.

    Let's look at a quick example of creating a master reference relationship that could be used as an input for a variety of your already existing business applications. In phase 1, a simple master relationship is achieved between a company (e.g. General Motors) and a variety of car brands' social insights. The reference data allows for easy sorting, export and integration into a set of CRM use cases for analytics, sales and marketing. In phase 2, you create more data relationships (e.g. competitors, contacts, other brands) to build broader and deeper references (social profiles, social meta-data) for more use cases across CRM, HCM, SRM, etc. This is just the tip of the iceberg, as the number of master reference relationships is constrained only by your imagination and the availability of quality curated data you have to work with.

    DaaS is just now emerging onto the marketplace as the next step in cloud transformation. For some of you, this may be the first you have heard about it. Let us know if you have questions or perspectives. In the meantime, we will continue to share insights as we can.

    Photo: Erik Araujo, stock.xchng

    Read the article

  • recommendation for good chassis (case) for first time PC builder

    - by studiohack
    I've been thinking about building my own machine for some time now, and whenever I look at the PC case market, it seems like cases are a dime a dozen. As a result, I'm wondering what cases Super Users would recommend in the areas of ease of use, cable management, cooling, etc.; in other words, an all-around case for a first-time PC builder. Thanks!

    Read the article

  • Can I do multi-computer and multi-monitor configuration with Synergy?

    - by BrianLy
    I've been asked to setup a demo room with multiple computers and monitors. We need to be able to use multiple monitors with a single PC in some cases. In other cases we need to switch between Mac and PC platforms. We would also like to be able to throw up slides or other information to screens which are not being used. Is it possible to do this with Synergy?

    Read the article

  • Can someone find me a reference to this quote?

    - by Robot
    I'm looking for a reasonable reference to a known software personality who said something along the lines of "make sure your software runs the most common cases fast/easy, but all cases are possible". I'm sure there are many 80/20 quotes, so I'm looking for the most famous one that gets that point across. -Robot

    Read the article

  • JMS Topic vs Queue - Intent

    - by Sandeep Jindal
    I am trying to understand the design requirements for using a Queue, and could not find this question (with an answer) already asked. My understanding: Queue means one-to-one. Thus it would be used in a special case (if not rare, then very few cases) when a designer is sure that the message is intended for only one consumer. But even in those cases, I may want to use a Topic (just to be future-safe). The only extra work I would have to do is to make (each) subscription durable. Or, in special situations, I would use a bridging/dispatcher mechanism. Given the above, I would always (or in most cases) want to publish to a topic, and subscribers could be either durable topic subscriptions or dispatched queues. Please let me know what I am missing here, or whether I am missing the original intent.

    Read the article

  • Software Requirement Specifications for Web Applications

    - by illuminatedtiger
    Hi guys, I'm looking for some guidance/books to read when it comes to creating a software requirement specification for a web application. For inspiration I have read some spec documents for desktop-based applications. The documents I have read capture a system's functional requirements in use cases, which tend to be rather data-oriented, with use cases centered around the various CRUD operations the application is intended to perform. I like this structure; however, I'm finding it rather difficult to marry it to what my web application needs to do, which is mostly reading data as opposed to manipulating it. I've had a go at writing some use cases, but they all tend to boil down to "Search for item", "Change view of search results" or "User selects facet to refine search results". This doesn't sound quite right to me and makes me wonder if I'm going about this the right way. Are there planning differences between web-based and desktop-based applications?

    Read the article

  • PHP MVC correct usage

    - by Ratt
    What is the correct (recommended) method for passing information to a view in an MVC environment? Currently we use Zend Framework, where we write classes to handle specific things, e.g. a Book class with save and load methods to retrieve info from the DB, which is called from a particular nameAction(). What I would like to know is the best way to pass this information to the view. In some cases we do

        $this->view->book_name = $book->getBookName();

    and in other cases we do the following:

        $this->view->book = $book;
        // OR
        $this->view->books = Book_Manager::getAllBooks();

    and then access the objects' properties in the view. Information online suggests we try to limit what access a view has to information, i.e. pass it only what it needs, and in some cases people say it's OK to pass objects through as long as nothing is done to that information. Regards

    Read the article

  • IronPython For Unit Testing over C#

    - by Krish
    We know that Python provides a lot of productivity over compiled languages. We program in C# and need to write the unit test cases in C# itself. The amount of code we write for unit tests is approximately ten times more than the original code. Is it an ideal choice to write the unit test cases in IronPython instead of C#? Has anybody done that? I wrote a few test cases and they seem to be good, but the pointy-haired managers won't accept them.

    Read the article

  • Control flow graph & cyclomatic complexity for the following procedure

    - by softyGuy
    insertion_procedure(int a[], int p[], int N)
    {
        int i, j, k;
        for (i = 0; i <= N; i++)
            p[i] = i;
        for (i = 2; i <= N; i++)
        {
            k = p[i];
            j = 1;
            while (a[p[j-1]] > a[k])
            {
                p[j] = p[j-1];
                j--;
            }
            p[j] = k;
        }
    }

    I have to find the cyclomatic complexity of this code and then suggest some white-box and black-box test cases, but I am having trouble drawing a CFG for the code. I would appreciate some help on the test cases as well. Thanks a bunch in advance!
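
    For the complexity itself, once the CFG is drawn the standard formula applies, where E is the number of edges, N the number of nodes and P the number of connected components; for a single-entry, single-exit graph it also equals the number of predicate (decision) nodes plus one:

        V(G) = E - N + 2P = (number of predicate nodes) + 1

    With the two for-loop tests and the while test above as the predicate nodes, that would give V(G) = 3 + 1 = 4.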

    Read the article

  • How to find largest common sub-tree in the given two binary search trees?

    - by Bhushan
    Two BSTs (Binary Search Trees) are given. How to find the largest common sub-tree in the given two binary trees?

    EDIT 1: Here is what I have thought. Let r1 = current node of 1st tree and r2 = current node of 2nd tree. These are the cases I think we need to consider:

    - Case 1: r1.data < r2.data. Two subproblems to solve: first, check r1 and r2.left; second, check r1.right and r2.
    - Case 2: r1.data > r2.data. Two subproblems to solve: first, check r1.left and r2; second, check r1 and r2.right.
    - Case 3: r1.data == r2.data. Again, two cases to consider here: (a) the current node is part of the largest common BST: compute the common subtree size rooted at r1 and r2; (b) the current node is NOT part of the largest common BST: two subproblems to solve: first, solve r1.left and r2.left; second, solve r1.right and r2.right.

    I can think of the cases we need to check, but I am not able to code it as of now. And it is NOT a homework problem. Does it look like one?
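
    Purely as a starting point, here is the case analysis above transcribed into a recursive skeleton (the Node type and method names are hypothetical, and whether the case analysis itself covers every situation is exactly what the question is asking, so treat it as something to test rather than an answer):

        using System;

        class Node { public int Data; public Node Left, Right; }

        static class CommonSubtree
        {
            // Size of the largest common subtree found under the question's case analysis.
            public static int Largest(Node r1, Node r2)
            {
                if (r1 == null || r2 == null) return 0;

                if (r1.Data < r2.Data)   // Case 1
                    return Math.Max(Largest(r1, r2.Left), Largest(r1.Right, r2));

                if (r1.Data > r2.Data)   // Case 2
                    return Math.Max(Largest(r1.Left, r2), Largest(r1, r2.Right));

                // Case 3: equal keys - either this node anchors the common subtree (3a)...
                int anchored = CommonSize(r1, r2);
                // ...or it does not, and the answer lies in the subtree pairs (3b).
                int skipped = Math.Max(Largest(r1.Left, r2.Left), Largest(r1.Right, r2.Right));
                return Math.Max(anchored, skipped);
            }

            // Case 3a helper: size of the common subtree rooted at two equal-keyed nodes.
            static int CommonSize(Node r1, Node r2)
            {
                if (r1 == null || r2 == null || r1.Data != r2.Data) return 0;
                return 1 + CommonSize(r1.Left, r2.Left) + CommonSize(r1.Right, r2.Right);
            }
        }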

    Read the article

  • Is it possible to use Sphinx search with dynamic conditions?

    - by Fedyashev Nikita
    In my web app I need to perform 3 types of searching on the items table, with the following conditions:

    - items.is_public = 1 (use the title field for indexing): a lot of results can be retrieved (cardinality is much higher than in the other cases)
    - items.category_id = {X} (use title + private_notes fields for indexing): usually less than 100 results
    - items.user_id = {X} (use title + private_notes fields for indexing): usually less than 100 results

    I can't find a way to make Sphinx work in all these cases, but it works well in the 1st case. Should I use Sphinx just for the 1st case and plain old "slow" FULLTEXT searching in MySQL for the others (at least because of the lower cardinality in cases 2 and 3)? Or is it just me, and Sphinx can do pretty much everything?

    Read the article

  • In .net, how do I choose between a Decimal and a Double

    - by Ian Ringrose
    We were discussing this the other day at work, and I wish there was a Stack Overflow question I could point people at, so here goes. What is the difference between a Double and a Decimal? When (in what cases) should you always use a Double? When (in what cases) should you always use a Decimal? What are the driving factors to consider in cases that don't fall into one of the two camps above? (There are a lot of questions that overlap this one, but they tend to ask what someone should do in a given case, not how to decide in the general case.)
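
    The classic way to see the practical difference is a base-10 fraction that binary floating point cannot represent exactly; a small self-contained example:

        using System;

        class DoubleVsDecimal
        {
            static void Main()
            {
                // double is binary floating point: 0.1 has no exact representation.
                double d = 0.1 + 0.2;
                Console.WriteLine(d == 0.3);          // False (d is 0.30000000000000004)

                // decimal is base-10 floating point: 0.1m, 0.2m and 0.3m are exact.
                decimal m = 0.1m + 0.2m;
                Console.WriteLine(m == 0.3m);         // True

                // The trade-off: double has a far larger range and is faster in hardware;
                // decimal gives 28-29 significant digits and exact decimal fractions,
                // which is why it is the usual choice for money.
                Console.WriteLine(double.MaxValue);   // approx. 1.8E+308
                Console.WriteLine(decimal.MaxValue);  // 79228162514264337593543950335
            }
        }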

    Read the article

  • Sending bulk notification emails without blocking

    - by FreshCode
    For my client's custom-built CRM, I want users (technicians) to be notified of changes to marked cases via email. This warrants a simple subscription mapping table between users and cases, and automated emails to be sent every time a change is made to a case from within the logging method. How do I send 10-100 emails to subscribed users without bogging down my logging method? My SMTP server is on a peer on my LAN, so sends should be quick, but ideally this should be handled by an external queuing process. I could have a cron job send any outstanding emails every 10 minutes, but for this specific client, cases are quite time-sensitive and instant notification (as instant as email can be) would be great. How can I send bulk notification emails from within ASP.NET MVC without bogging down my logging method?
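
    One way to keep the logging method fast is to have it only enqueue the messages and let a single background worker drain the queue and talk to the SMTP server. A minimal in-process sketch of that idea (host and sender addresses are placeholders; note that an in-process queue is lost on an app-pool recycle, so an outbox table plus a worker or cron job remains the more durable option):

        using System.Collections.Concurrent;
        using System.Net.Mail;
        using System.Threading.Tasks;

        public static class NotificationQueue
        {
            private static readonly BlockingCollection<MailMessage> Queue =
                new BlockingCollection<MailMessage>();

            static NotificationQueue()
            {
                // One long-running consumer drains the queue in the background.
                Task.Factory.StartNew(() =>
                {
                    using (var smtp = new SmtpClient("smtp.lan.local")) // placeholder host
                    {
                        foreach (var message in Queue.GetConsumingEnumerable())
                        {
                            try { smtp.Send(message); }
                            catch { /* log and continue; one bad address should not block the rest */ }
                        }
                    }
                }, TaskCreationOptions.LongRunning);
            }

            // Called from the case-logging method; returns immediately.
            public static void Notify(string to, string subject, string body)
            {
                Queue.Add(new MailMessage("crm@example.com", to, subject, body));
            }
        }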

    Read the article

  • Microsoft SQL Server 2005/2008 SSIS is oversized

    - by Ice
    In this case I'm old style and loved 'my father's DTS' from SQL 2000. In most of the cases I have to import a flat file into a table. In a second step I use some procedures (with the new MERGE statement) to process the imported content. For export, I define an export table and populate it with a stored proc (containing a MERGE statement), and in a second step the content is exported to a flat file. In some cases there is no flat file because there is another SQL Server, or in rare cases an ODBC connection to a Sybase or similar. What do you think? When it comes to complex ETL stuff, SSIS may be the right tool... but I haven't seen such a case yet.

    Read the article

  • Bitfield mask/operations with optional items

    - by user1560249
    I'm trying to find a way to handle several bitfield cases that include optional, required, and not allowed positions.

        yy?nnn?y    11000001
        ?yyy?nnn    01110000
        nn?yyy?n    00011100
        ?nnn?yyy    00000111

    In these four cases, the ? indicates that the bit can be either 1 or 0, while y indicates a 1 is required and n indicates that a 0 is required. The bits to the left/right of the required bits can be anything and the remaining bits must be 0. Is there a masking method I can use to test if an input bit set satisfies one of these cases?
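
    One way to encode each case is a pair of masks: a "care" mask marking the fixed (y and n) positions and an "expected" value giving what those positions must contain; the ? positions are simply left out of the care mask. A minimal sketch, with the four pairs transcribed from the patterns above (worth double-checking against your own bit ordering):

        static class BitPatterns
        {
            //                                  yy?nnn?y  ?yyy?nnn  nn?yyy?n  ?nnn?yyy
            static readonly byte[] Care     = { 0xDD,     0x77,     0xDD,     0x77 };
            static readonly byte[] Expected = { 0xC1,     0x70,     0x1C,     0x07 };

            public static bool MatchesAny(byte input)
            {
                for (int i = 0; i < Care.Length; i++)
                    if ((input & Care[i]) == Expected[i])   // fixed bits match, ? bits ignored
                        return true;
                return false;
            }
        }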

    Read the article

  • Perl launched from Java takes forever

    - by Wade Williams
    I know this is an absolute shot in the dark, but we're absolutely perplexed. A Perl (5.8.6) script run by Java (1.5) is taking more than an hour to complete. The same script, when run manually from the command line, takes 12 minutes to complete. This is on a Linux host. Logging is the same in both cases and the script is run with the same parameters in both cases. The script does some complex stuff like Oracle DB access, some scp's, etc., but again, it does the exact same actions in both cases. We're stumped. Has anyone ever run into a similar situation? If not, and if you were faced with the same situation, how would you consider debugging it?

    Read the article

  • Can I detect unused extra parameters passed to javascript methods?

    - by Pablojim
    In JavaScript I can call any method with more than the necessary number of parameters, and the extra parameters are silently ignored, e.g.:

        letters = ['a','b','c']

        // correct
        letters.indexOf('a')

        // this also works without error or warning
        letters.indexOf('a', "blah", "ignore me", 38)

    Are there ways to detect cases where this occurs? My motivation is that, in my experience, cases where this occurs are usually bugs. Identification of these by code analysis or at runtime would help track these errors down. These cases are especially prevalent where people are expecting alterations to base types which may not have occurred, so logging a warning where this happens would help, e.g.:

        Date.parse('02--12--2012', 'dd--MM--YYYY')

    Note: to be clear, I would like a solution that doesn't involve me sprinkling checks all over my code and other people's code.

    Read the article

  • When should one let an application crash because of an exception in Java (design issue)?

    - by JVerstry
    In most cases, it is possible to catch exceptions in Java, even unchecked ones. But it is not necessarily possible to do something about them (for example, out of memory). For other cases, the issue I am trying to solve is a design-principle one. I am trying to set up a design principle, or a set of rules, indicating when one should give up on an exceptional situation, even if it is detected in time. The objective is to avoid crashing the application as much as possible. Has someone already brainstormed and communicated about this? I am looking for specific generic cases and possible solutions, or rules of thumb.

    UPDATE: Suggestions so far:

    - Stop running if data coherency can be compromised
    - Stop running if data can be deleted
    - Stop running if you can't do anything about it (out of memory...)
    - Stop running if a key service is not available or becomes unavailable and cannot be restarted
    - If the application must be stopped, degrade as gracefully as possible
    - Use rollbacks in DB transactions
    - Log as much relevant information as you can
    - Notify the developers

    Read the article

< Previous Page | 5 6 7 8 9 10 11 12 13 14 15 16  | Next Page >