Search Results

Search found 4885 results on 196 pages for 'delivery failure'.


  • StreamInsight Now Available Through Microsoft Update

    - by Roman Schindlauer
    We are pleased to announce that StreamInsight v1.1 is now available for automatic download and install via Microsoft Update globally. In order to enable agile deployment of StreamInsight solutions, you have asked us for a steady cadence of releases with incremental but highly impactful features and product improvements. Following our StreamInsight 1.0 launch in Spring 2010, we offered StreamInsight 1.1 in Fall 2010 with implicit compatibility and an upgraded setup to support side-by-side installs. With this setup, your applications automatically point to the latest runtime, but you can still point an application back to the 1.0 runtime if you choose. As the next step, to enable timely delivery of our releases to you, we are pleased to announce support for automatic download and install of the StreamInsight 1.1 release via Microsoft Update starting this week. If you have a computer that: is subscribed to Microsoft Update (different from Windows Update); has StreamInsight 1.0 installed; and does not yet have StreamInsight 1.1 installed, then Microsoft Update will automatically download and install the corresponding StreamInsight 1.1 update side by side with your existing StreamInsight 1.0 installation – across all supported 32-bit and 64-bit Windows operating systems, across 11 supported languages, and across StreamInsight client and server SKUs. This is also supported in WSUS environments where all your updates are managed from a corporate server (please talk to the WSUS administrator in your enterprise). As an example, if you have SI Client 1.0 DEU and SI Server 1.0 ENU installed on the same computer, Microsoft Update will selectively download and side-by-side install just the SI Client 1.1 DEU and SI Server 1.1 ENU releases. Going forward, Microsoft Update will be our preferred mode of delivery – in addition to support for our download sites, and media-based distribution where appropriate. Regards, The StreamInsight Team

    Read the article

  • At the Java DEMOgrounds - ZeroTurnaround and its LiveRebel 2.5

    - by Janice J. Heiss
    At the ZeroTurnaround demo, I spoke with Krishnan Badrinarayanan, their Product Marketing Manager. ZeroTurnaround, the creator of JRebel and LiveRebel, describes itself on their site as a company “dedicated to changing the way the world develops, tests and runs Java applications.” “We just launched LiveRebel 2.5 today,” stated Badrinarayanan, “which enables companies to embrace the concept and practice of continuous delivery, which means having a pipeline that takes products right from the developers to an end-user, faster, more frequently -- all the while ensuring that it’s a quality product that does not break in production. So customers don’t feel the discontinuity that something has changed under them and that they can’t deal with the change. And all this happens while there is zero downtime.” He pointed out that Salesforce.com is not usable from 3 a.m. to 5 a.m. on Saturday because they are engaged in maintenance. “With LiveRebel 2.5, you can unify the whole delivery chain without having any downtime at all,” he said. “There are many products that tell customers to take their tools and change how they work as an organization, so that they have to conform to the way the tool prescribes an application team should work. We take a more pragmatic approach. A lot of companies might use Jenkins or Bamboo to do continuous integration. We extend that. We say, take our product, take LiveRebel, and integrate it with Jenkins – you can do that quickly, so that, in half a day, you will be up and running. And let LiveRebel automate your deployment processes and all the automated tasks that go with it. Right from tests to the staging environment to production -- all with zero downtime and with no impact on users currently using the system.” “So if you were to make the update right now and you had 100 users on your system, they would not even know this was happening. It would maintain their sessions and transfer them over to the new version, all in the background.”

    Read the article

  • MySQL Enterprise Monitor 2.3.11 Is Now Available!

    - by Andy Bang
    We are pleased to announce that MySQL Enterprise Monitor 2.3.11 is now available for download on the My Oracle Support (MOS) web site. It will also be available via the Oracle Software Delivery Cloud in approximately 1-2 weeks. This is a maintenance release that contains several new features and fixes a number of bugs. You can find more information on the contents of this release in the changelog: http://dev.mysql.com/doc/mysql-monitor/2.3/en/mem-news-2-3-11.html You will find binaries for the new release on My Oracle Support: https://support.oracle.com Choose the "Patches & Updates" tab, and then use the "Product or Family (Advanced Search)" feature. And from the Oracle Software Delivery Cloud (in about 1-2 weeks): http://edelivery.oracle.com/ Choose "MySQL Database" as the Product Pack and you will find the Enterprise Monitor along with other MySQL products. If you haven't looked at 2.3 recently, please do so now and let us know what you think. Thanks and Happy Monitoring! - The MySQL Enterprise Tools Development Team

    Read the article

  • How "commercially savvy" should software developers be? [closed]

    - by mattnz
    I have been watching answers to many questions on this site, and have come to the conclusion that commercial pragmatism does not factor into many software development discussions. As a result, I seriously wonder at the commercial skills within the industry, specifically the ability to deliver projects on time and to a budget. I see no indication from the site that commercially successful project delivery is a serious concern, yet the industry has a reputation for poor performance in this. Rarely, if ever, does the cost of time factor into discussions. I have never seen concepts such as opportunity cost, time to market, competitive advantage or cash flow mentioned, let alone discussed in technical answers to questions. How can you answer virtually any question without understanding the commercial background on which it is asked? Even open source projects have a need to operate efficiently and deploy their limited resources to providing the most value for effort. Small start-ups typically have cash flow issues that outweigh longevity concerns, yet they are still advised to build for a future they probably won’t have if they follow that advice. Is it fair to say that these problems are solely for managers and project managers to solve, or are we, as developers, also responsible for ensuring the successful, on-time, within-budget delivery of projects, even if those budgets do not allow us to achieve engineering excellence?

    Read the article

  • Three Steps to Becoming an Expert Oracle Linux System Administrator

    - by Antoinette O'Sullivan
    Oracle provides a complete system administration curriculum to take you from your initial experience of Unix to being an expert Oracle Linux system administrator. You can take these live instructor-led courses from your own desk through live-virtual events or by traveling to an education center for in-class events.
    Step 1: Unix and Linux Essentials
    This 3-day course is designed for users and administrators who are new to Oracle Linux. It will help you develop the basic UNIX skills needed to interact comfortably and confidently with the operating system. Below is a sample of the in-class events already on the schedule.
    Location | Date | Delivery Language
    Vilvoorde, Belgium | 28 October 2013 | English
    Berlin, Germany | 15 July 2013 | German
    Utrecht, Netherlands | 19 August 2013 | Dutch
    Bucharest, Romania | 12 August 2013 | Romanian
    Ankara, Turkey | 6 January 2013 | Turkish
    Nairobi, Kenya | 5 August 2013 | English
    Kaduna, Nigeria | 15 July 2013 | English
    Woodmead, South Africa | 15 July 2013 | English
    Jakarta, Indonesia | 23 September 2013 | English
    Petaling Jaya, Malaysia | 22 July 2013 | English
    Makati City, Philippines | 3 July 2013 | English
    Bangkok, Thailand | 20 November 2013 | English
    Auckland, New Zealand | 5 August 2013 | English
    Melbourne, Australia | 12 August 2013 | English
    Ottawa, Montreal, Toronto, Canada | 3 September 2013 | English
    San Francisco and San Jose, CA, United States | 15 July 2013 | English
    Reston, VA, United States | 7 August 2013 | English
    Edison, NJ, and King of Prussia, PA, United States | 3 September 2013 | English
    Denver, CO, United States | 25 September 2013 | English
    Cambridge, MA, and Roseville, MN, United States | 6 November 2013 | English
    Phoenix, AZ, and Sacramento, CA, United States | 25 November 2013 | English
    Step 2: Oracle Linux System Administration
    Through this 5-day course, become a knowledgeable Oracle Linux system administrator, learning how to install Oracle Linux and the benefits of Oracle's Unbreakable Enterprise Kernel and Ksplice. Below is a sample of in-class events already on the schedule.
    Location | Date | Delivery Language
    Vienna, Austria | 1 July 2013 | German
    Vilvoorde, Belgium | 18 November 2013 | English
    Zagreb, Croatia | 16 September 2013 | Croatian
    London, England | 3 September 2013 | English
    Manchester, England | 9 September 2013 | English
    Paris, France | 29 July 2013 | French
    Budapest, Hungary | 8 July 2013 | Hungarian
    Utrecht, Netherlands | 2 September 2013 | Dutch
    Warsaw, Poland | 15 July 2013 | Polish
    Bucharest, Romania | 2 December 2013 | Romanian
    Ankara, Turkey | 7 October 2013 | Turkish
    Istanbul, Turkey | 9 September 2013 | Turkish
    Nairobi, Kenya | 12 August 2013 | English
    Petaling Jaya, Malaysia | 29 July 2013 | English
    Kuala Lumpur, Malaysia | 21 October 2013 | English
    Makati City, Philippines | 8 July 2013 | English
    Singapore | 24 July 2013 | English
    Bangkok, Thailand | 26 July 2013 | English
    Canberra, Australia | 19 August 2013 | English
    Melbourne, Australia | 16 September 2013 | English
    Sydney, Australia | 19 August 2013 | English
    Mississauga, Canada | 26 August 2013 | English
    Ottawa, Canada | 4 November 2013 | English
    Phoenix, AZ, United States | 7 October 2013 | English
    Belmont, CA, United States | 23 September 2013 | English
    Irvine, CA, United States | 18 November 2013 | English
    Sacramento, CA, United States | 19 August 2013 | English
    San Francisco, CA, United States | 15 July 2013 | English
    Denver, CO, United States | 19 August 2013 | English
    Schaumburg, IL, United States | 26 August 2013 | English
    Indianapolis, IN, United States | 14 October 2013 | English
    Columbia, MD, United States | 30 September 2013 | English
    Roseville, MN, United States | 19 August 2013 | English
    St Louis, MO, United States | 7 October 2013 | English
    Edison, NJ, United States | 28 October 2013 | English
    Beaverton, OR, United States | 12 August 2013 | English
    Pittsburgh, PA, United States | 9 December 2013 | English
    Reston, VA, United States | 12 August 2013 | English
    Brookfield, WI, United States | 30 September 2013 | English
    Sao Paulo, Brazil | 15 July 2013 | Brazilian Portuguese
    Step 3: Oracle Linux Advanced System Administration
    This new 3-day course is ideal for administrators who want to learn about managing resources and file systems while developing troubleshooting and advanced storage administration skills. You will learn about Linux Containers, Cgroups, btrfs, DTrace and more. Below is a sample of in-class events already on the schedule.
    Location | Date | Delivery Language
    Melbourne, Australia | 9 October 2013 | English
    Roseville, MN, United States | 3 September 2013 | English
    To register for or learn more about these courses, go to http://oracle.com/education/linux. Watch this video to learn more about Oracle's operating system training.

    Read the article

  • Threading Overview

    - by ACShorten
    One of the major features of the batch framework is the ability to support multi-threading. The multi-threading support allows a site to increase throughput on an individual batch job by splitting the total workload across multiple individual threads. This means each thread has fine-grained control over a segment of the total data volume at any time. The idea behind the threading is based upon the notion that "many hands make light work". Each thread takes a segment of data in parallel and operates on that smaller set. The object identifier allocation algorithm built into the product randomly assigns keys to help ensure an even distribution of records across the threads and to minimize resource and lock contention. The best way to visualize the concept of threading is to use a "pie" analogy. Imagine the total workset for a batch job is a "pie". If you split that pie into equal-sized segments, each segment would represent an individual thread. The concept of threading has advantages and disadvantages:
    - Smaller elapsed runtimes – Jobs that are multi-threaded finish earlier than jobs that are single-threaded. With smaller amounts of work to do, jobs with threading will finish earlier. Note: The elapsed runtime of the threads is rarely proportional to the number of threads executed. Even though contention is minimized, some contention does exist for resources, which can adversely affect runtime.
    - Threads can be managed individually – Each thread can be started individually and can also be restarted individually in case of failure. If you need to rerun thread X then that is the only thread that needs to be resubmitted.
    - Threading can be somewhat dynamic – The number of threads that are run on any instance can be varied, as the thread number and thread limit are parameters passed to the job at runtime. They can also be configured using the configuration files outlined in this document and the relevant manuals. Note: Threading is not dynamic after the job has been submitted.
    - Failure risk due to data issues with threading is reduced – As mentioned earlier, individual threads can be restarted in case of failure. This limits the risk to the total job if there is a data issue with a particular thread or a group of threads.
    - Number of threads is not infinite – As with any resource there is a theoretical limit. While the thread limit can be up to 1000 threads, the number of threads you can physically execute will be limited by the CPU and IO resources available to the job at execution time.
    Theoretically, with the object identifiers evenly spread across the threads, the elapsed runtime for the threads should all be the same. In other words, when executing in multiple threads, theoretically all the threads should finish at the same time. Whilst this is possible, it is also possible that individual threads may take longer than other threads for the following reasons:
    - Workloads within the threads are not always the same – Whilst each thread is operating on roughly the same number of objects, the amount of processing for each object is not always the same. For example, an account may have a more complex rate which requires more processing, or a meter may have a complex configuration to process. If a thread has a higher proportion of objects with complex processing it will take longer than a thread with simple processing. The amount of processing is dependent on the configuration of the individual data for the job.
    - Data may be skewed – Even though the object identifier generation algorithm attempts to spread the object identifiers across threads, there are some jobs that use additional factors to select records for processing. If any of those factors exhibit any data skew then certain threads may finish later. For example, if more accounts are allocated to a particular part of a schedule then threads in that schedule may finish later than other threads.
    Threading is important to the success of individual jobs. For more guidelines and techniques for optimizing threading, refer to Multi-Threading Guidelines in the Batch Best Practices for Oracle Utilities Application Framework based products (Doc Id: 836362.1) whitepaper available from My Oracle Support.
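    As a rough illustration of the pie analogy, here is a minimal sketch in plain java.util.concurrent code. The class names, key range, and thread count are made up for illustration and this is not the batch framework's actual API; it simply splits a key range into equal segments, runs one worker per thread, and reports which segments failed so that only those need to be resubmitted.
        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.*;

        public class ThreadedBatchSketch {
            // Process object ids in [lo, hi); returns normally on success, throws on failure.
            static void processSegment(long lo, long hi) {
                for (long id = lo; id < hi; id++) {
                    // ... per-object work would go here ...
                }
            }

            public static void main(String[] args) throws InterruptedException {
                long firstId = 0, lastId = 1_000_000;   // assumed workload
                int threadCount = 8;                    // "thread limit" for this run
                long segment = (lastId - firstId + threadCount - 1) / threadCount;

                ExecutorService pool = Executors.newFixedThreadPool(threadCount);
                List<Future<?>> results = new ArrayList<>();
                for (int t = 0; t < threadCount; t++) {
                    final long lo = firstId + t * segment;
                    final long hi = Math.min(lastId, lo + segment);
                    results.add(pool.submit(() -> processSegment(lo, hi)));
                }
                pool.shutdown();
                pool.awaitTermination(1, TimeUnit.HOURS);

                // Only failed segments need to be rerun, mirroring per-thread restart.
                for (int t = 0; t < results.size(); t++) {
                    try {
                        results.get(t).get();
                    } catch (ExecutionException e) {
                        System.out.println("Thread " + (t + 1) + " can be resubmitted: " + e.getCause());
                    }
                }
            }
        }
    In the real framework the thread number and thread limit arrive as runtime parameters and the key ranges come from the product's identifier allocation, but the per-thread restart idea is the same.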

    Read the article

  • Easing the Journey to the Private Cloud with Oracle Consulting

    - by MichaelM-Oracle
    By Sanjai Marimadaiah, Senior Director, Strategy & Business Development – Cloud Solutions, Oracle Consulting Services Business leaders are now leading the charge on how their firms can profit from cloud solutions. Agility and innovation are becoming the primary drivers of the business case for the cloud, even more than the anticipated cost savings. Leaders need to find the right strategy and optimize the use of cloud-based applications across their enterprise-computing infrastructure. The Problem – Current State With prevalent IT practices, many organizations find that they run multiple IT solutions serving similar business needs. This has led to the proliferation of technology stacks, for example: Oracle 10g on Sun T4 running Solaris 9; Oracle 11g on Exadata running Linux; or Oracle 12c on commodity x86 servers. This variance has a huge impact on an organization’s agility and expenses, and requires IT professionals with varied skills as well as on-going training for different systems and tools. Fortunately there is a practical business strategy to overcome this unneeded redundancy. Thus begins a journey to the right cloud computing solution. The Solution – Cloud Services from Oracle Consulting Services (OCS) Oracle Consulting Services (OCS ) works closely with our clients as trusted advisors to proactively respond to business needs and IT concerns. OCS understands that making the transition to cloud solutions begins with a strategic conversation, based on its deep expertise for successfully completing private cloud service engagements with several companies. For a journey to the cloud, Oracle Consulting Services leads the client through four phases– standardization, consolidation, service delivery, and enterprise cloud – to achieve optimal returns. Phase 1 - Standardization Oracle Consulting Services (OCS) works with clients to evaluate their business requirements and propose a set of standard solutions stacks for various IT solutions. This is an opportune time to evaluate cloud ready solutions, such as Oracle 12c, Oracle Exadata, and the Oracle Database Appliance (ODA). The OCS consultants, together with the delivery team, then turn to upgrading and migrating existing solution stacks to standardized offerings. OCS has the expertise and tools to complete this stage in a fraction of the time required by other IT services companies. Clients quickly realize cost savings in tools, processes, and type/number of resources required. This standardization also improves agility of the IT organizations and their abilities to respond to the needs of various business units. Phase 2 - Consolidation During the consolidation phase, OCS consultants programmatically consolidate hundreds of databases into a smaller number of servers to improve utilization, reduce floor space, and optimize maintenance costs. Consolidation helps clients realize huge savings in CapEx investments and shrink OpEx costs. The use of engineered systems, such as Oracle Exadata, greatly reduces the client’s risk of moving to a new solution stack. OCS recommends clients to pursue Phase 1 (Standardization) and Phase 2 (Consolidation) simultaneously to reduce the overall time, effort, and expense of the cloud journey. Phase 3 - Service Delivery Once a client is on a path of standardization and consolidation, OCS consultants create Service Catalogues based on the SLAs requirements and the criticality of the solutions. The number and types of Service Catalogues (Platinum, Gold, Silver, Bronze, etc.) vary from client to client. 
OCS consultants also implement a variety of value-added cloud solutions, including monitoring, metering, and charge-back solutions. At this stage, clients are able to achieve a high level of understanding in their cloud journey. Their IT organizations are operating efficiently and are more agile in responding to the needs of business units. Phase 4 - Enterprise Cloud In the final phase of the cloud journey, the economics of the IT organizations change. Business units can request services on-demand; applications can be deployed and consumed on a pay-as-you-go model. OCS has the expertise and capabilities to establish processes, programs, and solutions required for IT organizations to transform how they interact with business units. The Promise of Cloud Solutions Depending the size and complexity of their business model, some clients are able to abbreviate some phases of their cloud journey. Cloud solutions are still evolving and there is rapid pace of innovation to transform how IT organizations operate. The lesson is clear. Cloud solutions hold a lot of promise for business agility. Business leaders can now leverage an additional set of capabilities and services. They can ramp up their pace of innovation. With cloud maturity, they can compete more effectively in their respective markets. But there are certainly challenges ahead. A skilled consulting services partner can play a pivotal role as a trusted advisor in the successful adoption of cloud solutions. Oracle Consulting Services has expertise and a portfolio of services to help clients succeed on their journey to the cloud.

    Read the article

  • Organization &amp; Architecture UNISA Studies &ndash; Chap 6

    - by MarkPearl
    Learning Outcomes
    - Discuss the physical characteristics of magnetic disks
    - Describe how data is organized and accessed on a magnetic disk
    - Discuss the parameters that play a role in the performance of magnetic disks
    - Describe different optical memory devices
    Magnetic Disk
    The way data is stored on and retrieved from magnetic disks: data is recorded on and later retrieved from the disk via a conducting coil named the head (in many systems there are two heads). The write mechanism exploits the fact that electricity flowing through a coil produces a magnetic field. Electric pulses are sent to the write head, and the resulting magnetic patterns are recorded on the surface below, with different patterns for positive and negative currents.
    The physical characteristics of a magnetic disk: (summarize from book)
    The factors that play a role in the performance of a disk:
    - Seek time – the time it takes to position the head at the track
    - Rotational delay / latency – the time it takes for the beginning of the sector to reach the head
    - Access time – the sum of the seek time and rotational delay
    - Transfer time – the time it takes to transfer data
    RAID
    The rate of improvement in secondary storage performance has been considerably less than the rate for processors and main memory. Thus secondary storage has become a bit of a bottleneck. RAID works on the concept that if one disk can be pushed only so far, additional gains in performance are to be had by using multiple parallel components. Points to note about RAID:
    - RAID is a set of physical disk drives viewed by the operating system as a single logical drive
    - Data is distributed across the physical drives of an array in a scheme known as striping
    - Redundant disk capacity is used to store parity information, which guarantees data recoverability in case of a disk failure (not supported by RAID 0 or RAID 1)
    It is interesting to note that increasing the number of drives increases the probability of failure. To compensate for this decreased reliability, RAID makes use of stored parity information that enables the recovery of data lost due to a disk failure.
    The RAID scheme consists of 7 levels:
    Category | Level | Description | Disks Required | Data Availability | Large I/O Data Transfer Capacity | Small I/O Request Rate
    Striping | 0 | Non-redundant | N | Lower than single disk | Very high | Very high for both read and write
    Mirroring | 1 | Mirrored | 2N | Higher than RAID 2–5 but lower than RAID 6 | Higher than single disk | Up to twice that of a single disk for read
    Parallel Access | 2 | Redundant via Hamming code | N + m | Much higher than single disk | Highest of all listed alternatives | Approximately twice that of a single disk
    Parallel Access | 3 | Bit-interleaved parity | N + 1 | Much higher than single disk | Highest of all listed alternatives | Approximately twice that of a single disk
    Independent Access | 4 | Block-interleaved parity | N + 1 | Much higher than single disk | Similar to RAID 0 for read, significantly lower than single disk for write | Similar to RAID 0 for read, significantly lower than single disk for write
    Independent Access | 5 | Block-interleaved distributed parity | N + 1 | Much higher than single disk | Similar to RAID 0 for read, lower than single disk for write | Similar to RAID 0 for read, generally lower than single disk for write
    Independent Access | 6 | Block-interleaved dual distributed parity | N + 2 | Highest of all listed alternatives | Similar to RAID 0 for read; lower than RAID 5 for write | Similar to RAID 0 for read, significantly lower than RAID 5 for write
    Read pages 215–221 for a detailed explanation of the RAID levels.
    Optical Memory
    There are a variety of optical-disk systems available. Read through the table on pages 222–223. Some of the devices include: CD, CD-ROM, CD-R, CD-RW, DVD, DVD-R, DVD-RW, Blu-ray DVD.
    Magnetic Tape
    Most modern systems use serial recording – data is laid out as a sequence of bits along each track. The typical technique used in serial recording is referred to as serpentine recording. In this technique, when data is being recorded, the first set of bits is recorded along the whole length of the tape. When the end of the tape is reached, the heads are repositioned to record a new track, and the tape is again recorded along its whole length, this time in the opposite direction. That process continues back and forth until the tape is full. To increase speed, the read-write head is capable of reading and writing a number of adjacent tracks simultaneously. Data is still recorded serially along individual tracks, but blocks in sequence are stored on adjacent tracks. A tape drive is a sequential access device. Magnetic tape was the first kind of secondary memory. It is still widely used as the lowest-cost, slowest-speed member of the memory hierarchy.
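    To tie the disk performance parameters above together, here is a small worked example; the drive figures (7200 rpm, 4 ms average seek, 0.1 ms transfer) are assumed values for illustration, not taken from the textbook. The average rotational delay is the time for half a revolution, and the access time is the sum of the parts:
        \text{one revolution} = \frac{60\ \text{s}}{7200} \approx 8.33\ \text{ms}, \qquad \bar{T}_{\text{rotational}} \approx \tfrac{1}{2}\times 8.33\ \text{ms} \approx 4.17\ \text{ms}
        T_{\text{access}} = T_{\text{seek}} + \bar{T}_{\text{rotational}} + T_{\text{transfer}} \approx 4 + 4.17 + 0.1 \approx 8.3\ \text{ms}
    The seek and rotational terms dominate, which is why access patterns that avoid repositioning the head matter so much on magnetic disks.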

    Read the article

  • Right-Time Retail Part 2

    - by David Dorf
    This is part two of the three-part series.
    Right-Time Integration
    Of course these real-time enabling technologies are only as good as the systems that utilize them, and it only takes one bottleneck to slow everyone else down. What good is an immediate stock-out notification if the supply chain can’t react until tomorrow? Since being formed in 2006, Oracle Retail has been not only adding more integrations between systems, but also modernizing integrations for appropriate speed. Notice I tossed in the word “appropriate.” Not everything needs to be real-time – again, we’re talking about Right-Time Retail. The speed of data capture, analysis, and execution must be synchronized or you’re wasting effort. Unfortunately, there isn’t an enterprise-wide dial that you can crank up for your estate. You’ll need to improve things piecemeal, with people and processes as limiting factors while choosing the appropriate types of integrations. There are three integration styles we see in the retail industry. First is batch. I know, the word “batch” just sounds slow, but this pattern is less about velocity and more about volume. When there are large amounts of data to be moved, you’ll want to use batch processes. Our technology of choice here is Oracle Data Integrator (ODI), which provides a fast version of Extract-Transform-Load (ETL). Instead of the three-step process, the load and transform steps are combined to save time. ODI is a key technology for moving data into Retail Analytics where we can apply science. Performing analytics on each sale as it occurs doesn’t make any sense, so we batch up a statistically significant amount and submit all at once. The second style is fire-and-forget. For some types of data, we want the data to arrive ASAP but immediacy is not necessary. Speed is less important than guaranteed delivery, so we use message-oriented middleware available in both WebLogic and the Oracle database. For example, Point-of-Service transactions are queued for delivery to Central Office at corporate. If the network is offline, those transactions remain in the queue and will be delivered when the network returns. Transactions cannot be lost and they must be delivered in order. (Ever tried processing a return before the sale?) To enhance the standard queues, we offer the Retail Integration Bus (RIB) to help with the management and monitoring of fire-and-forget messaging in the enterprise. The third style is request-response and is most commonly implemented as Web services. This is a synchronous message where the sender waits for a response. In this situation, the volume of data is small, guaranteed delivery is not necessary, but speed is very important.
Examples include the website checking inventory, a price lookup, or processing a credit card authorization. The Oracle Service Bus (OSB) typically handles the routing of such messages, and we’ve enhanced its abilities with the Retail Service Backbone (RSB). To better understand these integration patterns and where they apply within the retail enterprise, we’re providing the Retail Reference Library (RRL) at no charge to Oracle Retail customers. The library is composed of a large number of industry business processes, including those necessary to support Commerce Anywhere, as well as detailed architectural diagrams. These diagrams allow implementers to understand the systems involved in integrations and the specific data payloads. Furthermore, with our upcoming release we’ll be providing a new tool called the Retail Integration Console (RIC) that allows IT to monitor and manage integrations from a single point. Using RIC, retailers can quickly discern where integration activity is occurring, volume statistics, average response times, and errors. The dashboards provide the ability to dive down into the architecture documentation to gather information all the way down to the specific payload. Retailers that want real-time integrations will also need real-time monitoring of those integrations to ensure service-level agreements are maintained. Part 3 looks at marketing.
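    For readers who want to see the fire-and-forget style in code, here is a minimal sketch using the standard JMS API rather than the RIB itself; the JNDI names and queue are assumptions for illustration. The producer marks messages persistent so the broker holds them across outages, and the sender never waits for a reply.
        import javax.jms.*;
        import javax.naming.InitialContext;

        public class FireAndForgetSketch {
            public static void main(String[] args) throws Exception {
                // JNDI names are assumptions; in WebLogic they would point at the
                // configured connection factory and queue.
                InitialContext jndi = new InitialContext();
                ConnectionFactory factory = (ConnectionFactory) jndi.lookup("jms/ConnectionFactory");
                Queue queue = (Queue) jndi.lookup("jms/PosTransactions");

                Connection connection = factory.createConnection();
                try {
                    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    MessageProducer producer = session.createProducer(queue);
                    // Persistent delivery: the broker stores the message until it is consumed,
                    // so a network or consumer outage delays delivery rather than losing data.
                    producer.setDeliveryMode(DeliveryMode.PERSISTENT);

                    TextMessage message = session.createTextMessage("<posTransaction id=\"42\">...</posTransaction>");
                    producer.send(message);
                    // No reply is awaited -- fire and forget.
                } finally {
                    connection.close();
                }
            }
        }
    Request-response, by contrast, blocks on a reply (for example a synchronous web service call), which is why it is reserved for the small, speed-critical lookups described above.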

    Read the article

  • SSIS IsNumeric expression Error

    - by rmdussa
    Hi, I am using the following expression in an SSIS package: !ISNULL((DT_I4)Route) ? (DT_WSTR,50)("SB" + SUBSTRING(RIGHT(Route,2),1,1)) : (DT_WSTR,50)Route. When the Route value is numeric it succeeds; when it is non-numeric it fails with the following description. Any help on how to resolve this issue? [Derived Column [111]] Error: SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR. The "component "Derived Column" (111)" failed because error code 0xC0049067 occurred, and the error row disposition on "output column "column_New" (679)" specifies failure on error. An error occurred on the specified object of the specified component. There may be error messages posted before this with more information about the failure. [SSIS.Pipeline] Error: SSIS Error Code DTS_E_PROCESSINPUTFAILED. The ProcessInput method on component "Derived Column" (111) failed with error code 0xC0209029 while processing input "Derived Column Input" (112). The identified component returned an error from the ProcessInput method. The error is specific to the component, but the error is fatal and will cause the Data Flow task to stop running. There may be error messages posted before this with more information about the failure.

    Read the article

  • Finding most efficient transmission size in varying network latency scenarios

    - by rwmnau
    I'm building a .NET remoting client/server that will be transmitting thousands of files, of varying sizes (everything from a few bytes to hundreds of MB), and I'm curious about a general method for finding the appropriate transmission size. As I see it, there's the following tradeoff: Serialize the entire file into a transmission object and transmit it at once, regardless of size. This would be the fastest, but a failure during transmission requires that the whole file be re-transmitted. If the file size is larger than something small (like 4KB), break it into 4KB chunks and transmit those, re-assembling on the server. In addition to the complexity of this, it's slower because of continued round-trips and acknowledgements, though a failure of any one piece doesn't waste much time. The ideal transmission method (when taking into account negotiation latency vs. failure rate) is somewhere in between, and I'm wondering about how to find out the best size for that particular client. Do I have some dynamic tuning step in my transmission that looks at the current bytes/second average, and then raises the transmission size until the speed starts to drop (failures overwhelm negotiation cost)? Or is there some other method for determining ideal transmission size? The application will be multi-threaded, so the number of threads also factors in to the calculation. I'm not looking for a formula (though I'll take one if you've got it), but just what to consider as I create this process.
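    The dynamic tuning step described above can be sketched as an additive-increase/multiplicative-decrease loop. The sketch below is in Java rather than .NET and uses made-up thresholds, purely to illustrate the idea: grow the chunk size while measured throughput keeps improving, and shrink it when a chunk fails or throughput drops.
        public class ChunkSizeTuner {
            private static final int MIN_CHUNK = 4 * 1024;          // 4 KB floor
            private static final int MAX_CHUNK = 4 * 1024 * 1024;   // 4 MB ceiling (assumed)

            private int chunkSize = MIN_CHUNK;
            private double bestBytesPerSecond = 0;

            public int currentChunkSize() {
                return chunkSize;
            }

            // Call after each chunk is acknowledged.
            public void onChunkSent(int bytes, long elapsedMillis) {
                double throughput = bytes / (Math.max(1, elapsedMillis) / 1000.0);
                if (throughput >= bestBytesPerSecond) {
                    // Still improving: probe a larger size (additive increase).
                    bestBytesPerSecond = throughput;
                    chunkSize = Math.min(MAX_CHUNK, chunkSize + MIN_CHUNK);
                } else if (throughput < 0.9 * bestBytesPerSecond) {
                    // Throughput fell noticeably: back off a little.
                    chunkSize = Math.max(MIN_CHUNK, chunkSize - MIN_CHUNK);
                }
            }

            // Call when a chunk transfer fails; only that chunk is retried.
            public void onChunkFailed() {
                chunkSize = Math.max(MIN_CHUNK, chunkSize / 2);
                bestBytesPerSecond *= 0.5; // let the estimate recover
            }
        }
    With multiple sender threads, each thread would keep its own tuner, since the threads share the link and the per-thread sweet spot moves as concurrency changes.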

    Read the article

  • How can I Fail a WebTest?

    - by craigb
    I'm using Microsoft WebTest and want to be able to do something similar to NUnit's Assert.Fail(). The best I have come up with is to throw new WebTestException(), but this shows in the test results as an Error rather than a Failure. Other than reflecting on the WebTest to set a private member variable to indicate the failure, is there something I've missed? EDIT: I have also used the Assert.Fail() method, but this still shows up as an error rather than a failure when used from within WebTest, and the Outcome property is read-only (has no public setter). EDIT: well now I'm really stumped. I used reflection to set the Outcome property to Failed but the test still passes! Here's the code that sets the Outcome to failed:
    public static class WebTestExtensions
    {
        public static void Fail(this WebTest test)
        {
            var method = test.GetType().GetMethod("set_Outcome", BindingFlags.NonPublic | BindingFlags.Instance);
            method.Invoke(test, new object[] { Outcome.Fail });
        }
    }
    and here's the code that I'm trying to fail:
    public override IEnumerator<WebTestRequest> GetRequestEnumerator()
    {
        this.Fail();
        yield return new WebTestRequest("http://google.com");
    }
    Outcome is getting set to Outcome.Fail but apparently the WebTest framework doesn't really use this to determine test pass/fail results.

    Read the article

  • Extjs to call a RESTful webservice

    - by VSC
    Hello, I am trying to make a RESTful webservice call using Extjs. Below is the code i am using: Ext.Ajax.request({ url: incomingURL , method: 'POST', params: {param1:p1, param2:p2}, success: function(responseObject){ var obj = Ext.decode(responseObject.responseText); alert(obj); }, failure: function(responseObject){ var obj = Ext.decode(responseObject.responseText); alert(obj); } }); but it does not work, the request is sent using OPTIONS method instead of POST. I also tried to do the same thing using below code but result is the same: var conn = new Ext.data.Connection(); conn.request({ url: incomingURL, method: 'POST', params: {param1:p1, param2:p2}, success: function(responseObject) { Ext.Msg.alert('Status', 'success'); }, failure: function(responseObject) { Ext.Msg.alert('Status', 'Failure'); } }); But when i tried to do the same thing using basic ajax call ( using the browser objects directly i.e. XMLHttpRequest() or ActiveXObject("Microsoft.XMLHTTP")) it works fine and i get the response as expected. Can anyone please help me, as i am not able to understand what i am doing wrong with extjs ajax call?

    Read the article

  • Shell script to emulate warnings-as-errors?

    - by talkaboutquality
    Some compilers let you treat warnings as errors, so that you'll never leave any compiler warnings behind, because if you do, the code won't build. This is a Good Thing. Unfortunately, some compilers don't have a flag for warnings-as-errors. I need to write a shell script or wrapper that provides the feature. Presumably it parses the compilation console output and returns failure if there were any compiler warnings (or errors), and success otherwise. "Failure" also means (I think) that object code should not be produced. What's the shortest, simplest UNIX/Linux shell script you can write that meets the explicit requirements above, as well as the following implicit requirements of otherwise behaving just like the compiler:
    - accepts all flags, options, arguments
    - supports redirection of stdout and stderr
    - produces object code and links as directed
    Key words: elegant, meets all requirements. Extra credit: easy to incorporate into a GNU makefile. Thanks for your help. === Clues === This solution to a different problem, using shell functions (?), Append text to stderr redirects in bash, might figure in. Wonder how to invite litb's friend "who knows bash quite well" to address my question? === Answer status === Thanks to Charlie Martin for the short answer, but that, unfortunately, is what I started out with. A while back I used that, released it for office use, and, within a few hours, had its most severe drawback pointed out to me: it will PASS a compilation that produces no warnings but does produce errors. That's really bad because then we're delivering object code that the compiler is sure won't work. The simple solution also doesn't meet the other requirements listed. Thanks to Adam Rosenfield for the shorthand, and Chris Dodd for introducing pipefail to the solution. Chris' answer looks closest, because I think the pipefail should ensure that if compilation actually fails on error, we'll get failure as we should. Chris, does pipefail work in all shells? And have any ideas on the rest of the implicit requirements listed above?

    Read the article

  • Help with PHP mailer script

    - by Chris
    Hi there, I'm clueless when it comes to PHP and have a script that emails the contents of a form. The trouble is it only sends me the comment when I want it to also send the name and email address that is captured. Anyone know how I can adjust this script to do this? A million thanks in advance! <?php error_reporting(E_NOTICE); function valid_email($str) { return ( ! preg_match("/^([a-z0-9\+_\-]+)(\.[a-z0-9\+_\-]+)*@([a-z0-9\-]+\.)+[a-z]{2,6}$/ix", $str)) ? FALSE : TRUE; } if($_POST['name']!='' && $_POST['email']!='' && valid_email($_POST['email'])==TRUE && strlen($_POST['comment'])>1) { $to = "[email protected]"; $headers = 'From: '.$_POST['email'].''. "\r\n" . 'Reply-To: '.$_POST['email'].'' . "\r\n" . 'X-Mailer: PHP/' . phpversion(); $subject = "Contact Form"; $message = htmlspecialchars($_POST['comment']); if(mail($to, $subject, $message, $headers)) { echo 1; //SUCCESS } else { echo 2; //FAILURE - server failure } } else { echo 3; //FAILURE - not valid email } ?>

    Read the article

  • hudson.util.ProcessTreeTest test error

    - by senzacionale
    error: Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.011 sec Running hudson.util.ProcessTreeTest Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.181 sec <<< FAILURE! Running hudson.model.LoadStatisticsTest Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.089 sec Running hudson.util.ArgumentListBuilderTest Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.053 sec Running hudson.util.RobustReflectionConverterTest Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.029 sec Running hudson.util.VersionNumberTest Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.074 sec Running hudson.util.CyclicGraphDetectorTest Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.038 sec Results : Tests in error: testRemoting(hudson.util.ProcessTreeTest) Tests run: 102, Failures: 0, Errors: 1, Skipped: 0 [INFO] ------------------------------------------------------------------------ [ERROR] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] There are test failures. Please refer to D:\PROJEKTI\Maven\hudson\main\core\target\surefire-reports for the individual test results. [INFO] ------------------------------------------------------------------------ [INFO] For more information, run Maven with the -e switch [INFO] ------------------------------------------------------------------------ [INFO] Total time: 17 minutes 58 seconds [INFO] Finished at: Fri Jun 11 21:04:46 CEST 2010 [INFO] Final Memory: 85M/152M [INFO] ------------------------------------------------------------------------ error log: ------------------------------------------------------------------------------- Test set: hudson.util.ProcessTreeTest ------------------------------------------------------------------------------- Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.181 sec <<< FAILURE! testRemoting(hudson.util.ProcessTreeTest) Time elapsed: 0.169 sec <<< ERROR! 
org.jvnet.winp.WinpException: Failed to read environment variable table error=299 at .\envvar-cmdline.cpp:114 at org.jvnet.winp.Native.getCmdLineAndEnvVars(Native Method) at org.jvnet.winp.WinProcess.parseCmdLineAndEnvVars(WinProcess.java:114) at org.jvnet.winp.WinProcess.getEnvironmentVariables(WinProcess.java:109) at hudson.util.ProcessTree$Windows$1.getEnvironmentVariables(ProcessTree.java:419) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at hudson.remoting.RemoteInvocationHandler$RPCRequest.perform(RemoteInvocationHandler.java:274) at hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:255) at hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:215) at hudson.remoting.UserRequest.perform(UserRequest.java:114) at hudson.remoting.UserRequest.perform(UserRequest.java:48) at hudson.remoting.Request$2.run(Request.java:270) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:619) does anyone have any idea what can be wrong in test? Regards

    Read the article

  • EC2 SSH access from fedora

    - by Randika Rathugama
    I'm trying to connect to existing instance of EC2 with a new PEM. But I get this error when I try to connect. Here is what I did so far. I created the PEM on EC2 and saved it to ~/.ssh/my-fedora.pem and ran this command; is there anything else I should do? [randika@localhost ~]$ ssh -v -i ~/.ssh/my-fedora.pem [email protected] OpenSSH_5.3p1, OpenSSL 1.0.0-fips-beta4 10 Nov 2009 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug1: Connecting to ec2-xx-xxx-xxx-xx.compute-1.amazonaws.com [xx-xx-xx-xx] port 22. debug1: Connection established. debug1: identity file /home/randika/.ssh/saberion-fedora.pem type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_4.7 debug1: match: OpenSSH_4.7 pat OpenSSH_4* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.3 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host 'ec2-xx-xxx-xxx-xx.compute-1.amazonaws.com' is known and matches the RSA host key. debug1: Found key in /home/randika/.ssh/known_hosts:5 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,gssapi-with-mic debug1: Next authentication method: gssapi-with-mic debug1: Unspecified GSS failure. Minor code may provide more information Credentials cache file '/tmp/krb5cc_500' not found debug1: Unspecified GSS failure. Minor code may provide more information Credentials cache file '/tmp/krb5cc_500' not found debug1: Unspecified GSS failure. Minor code may provide more information debug1: Next authentication method: publickey debug1: Offering public key: [email protected] debug1: Authentications that can continue: publickey,gssapi-with-mic debug1: Offering public key: [email protected] debug1: Authentications that can continue: publickey,gssapi-with-mic debug1: Trying private key: /home/randika/.ssh/saberion-fedora.pem debug1: read PEM private key done: type RSA debug1: Authentications that can continue: publickey,gssapi-with-mic debug1: No more authentication methods to try. Permission denied (publickey,gssapi-with-mic).

    Read the article

  • Hudson fails to use unix user/group to do authentication

    - by Kane
    I'm trying to use unix user/group database as security realm of hudson. The linux server is using NIS for user management. My account could login the hudson server via ssh. And the hudson server is running by user 'hudson' that is also a member of group 'shadow', so hudson could read /etc/shadow. And I tested the configuration using 'test' button, hudson tells me it works well. But I can't use my unix account and password to login the hudson sever. And I found below java exception in the log of hudson, Jan 12, 2011 8:23:42 AM hudson.security.AuthenticationProcessingFilter2 onUnsuccessfulAuthentication INFO: Login attempt failed org.acegisecurity.BadCredentialsException: pam_authenticate failed : Authentication failure; nested exception is org.jvnet.libpam.PAMException: pam_authenticate failed : Authentication failure at hudson.security.PAMSecurityRealm$PAMAuthenticationProvider.authenticate(PAMSecurityRealm.java:100) at org.acegisecurity.providers.ProviderManager.doAuthentication(ProviderManager.java:195) at org.acegisecurity.AbstractAuthenticationManager.authenticate(AbstractAuthenticationManager.java:45) at org.acegisecurity.ui.webapp.AuthenticationProcessingFilter.attemptAuthentication(AuthenticationProcessingFilter.java:71) at org.acegisecurity.ui.AbstractProcessingFilter.doFilter(AbstractProcessingFilter.java:252) at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:87) at org.acegisecurity.ui.basicauth.BasicProcessingFilter.doFilter(BasicProcessingFilter.java:173) at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:87) at org.acegisecurity.context.HttpSessionContextIntegrationFilter.doFilter(HttpSessionContextIntegrationFilter.java:249) at hudson.security.HttpSessionContextIntegrationFilter2.doFilter(HttpSessionContextIntegrationFilter2.java:66) at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:87) at hudson.security.ChainedServletFilter.doFilter(ChainedServletFilter.java:76) at hudson.security.HudsonFilter.doFilter(HudsonFilter.java:164) at winstone.FilterConfiguration.execute(FilterConfiguration.java:195) at winstone.RequestDispatcher.doFilter(RequestDispatcher.java:368) at winstone.RequestDispatcher.forward(RequestDispatcher.java:333) at winstone.RequestHandlerThread.processRequest(RequestHandlerThread.java:244) at winstone.RequestHandlerThread.run(RequestHandlerThread.java:150) at java.lang.Thread.run(Thread.java:595) Caused by: org.jvnet.libpam.PAMException: pam_authenticate failed : Authentication failure at org.jvnet.libpam.PAM.check(PAM.java:105) at org.jvnet.libpam.PAM.authenticate(PAM.java:123) at hudson.security.PAMSecurityRealm$PAMAuthenticationProvider.authenticate(PAMSecurityRealm.java:90) ... 18 more

    Read the article

  • Sql Server Maintenance Plan Tasks & Completion

    - by Ben
    Hi All, I have a maintenance plan that looks like this... Client 1 Import Data (Success) -> Process Data (Success) -> Post Process (Completion) -> Next Client Client 2 Import Data (Success) -> Process Data (Success) -> Post Process (Completion) -> Next Client Client N ... Import Data and Process Data are calling jobs and Post Process is an Execute Sql task. If Import Data or Process Data Fail, it goes to the next client Import Data... Both Import Data and Process Data are jobs that contain SSIS packages that are using the built-in SQL logging provider. My expectation with the configuration as it stands is: Client 1 Import Data Runs: Failure - Client 2 Import Data | Success Process Data Process Data Runs: Failure - Client 2 Import Data | Success Post Process Post Process Runs: Completion - Success or Failure - Next Client Import Data This isn't what I'm seeing in my logs though... I see several Client Import Data SSIS log entries, then several Post Process log entries, then back to Client Import Data! Arg!! What am I doing wrong? I didn't think the "success" piece of Client 1 Import Data would kick off until it... well... succeeded aka finished! The logs seem to indicate otherwise though... I really need these tasks to be consecutive not concurrent. Is this possible? Thanks!

    Read the article

  • Using pam_python in a script running with mod_python

    - by markys
    Hi ! I would like to develop a web interface to allow users of a Linux system to do certain tasks related to their account. I decided to write the backend of the site using Python and mod_python on Apache. To authenticate the users, I thought I could use python_pam to query the PAM service. I adapted the example bundled with the module and got this: # out is the output stream used to print debug def auth(username, password, out): def pam_conv(aut, query_list, user_data): out.write("Query list: " + str(query_list) + "\n") # List to store the responses to the different queries resp = [] for item in query_list: query, qtype = item # If PAM asks for an input, give the password if qtype == PAM.PAM_PROMPT_ECHO_ON or qtype == PAM.PAM_PROMPT_ECHO_OFF: resp.append((str(password), 0)) elif qtype == PAM.PAM_PROMPT_ERROR_MSG or qtype == PAM.PAM_PROMPT_TEXT_INFO: resp.append(('', 0)) out.write("Our response: " + str(resp) + "\n") return resp # If username of password is undefined, fail if username is None or password is None: return False service = 'login' pam_ = PAM.pam() pam_.start(service) # Set the username pam_.set_item(PAM.PAM_USER, str(username)) # Set the conversation callback pam_.set_item(PAM.PAM_CONV, pam_conv) try: pam_.authenticate() pam_.acct_mgmt() except PAM.error, resp: out.write("Error: " + str(resp) + "\n") return False except: return False # If we get here, the authentication worked return True My problem is that this function does not behave the same wether I use it in a simple script or through mod_python. To illustrate this, I wrote these simple cases: my_username = "markys" my_good_password = "lalala" my_bad_password = "lololo" def handler(req): req.content_type = "text/plain" req.write("1- " + str(auth(my_username,my_good_password,req) + "\n")) req.write("2- " + str(auth(my_username,my_bad_password,req) + "\n")) return apache.OK if __name__ == "__main__": print "1- " + str(auth(my_username,my_good_password,sys.__stdout__)) print "2- " + str(auth(my_username,my_bad_password,sys.__stdout__)) The result from the script is : Query list: [('Password: ', 1)] Our response: [('lalala', 0)] 1- True Query list: [('Password: ', 1)] Our response: [('lololo', 0)] Error: ('Authentication failure', 7) 2- False but the result from mod_python is : Query list: [('Password: ', 1)] Our response: [('lalala', 0)] Error: ('Authentication failure', 7) 1- False Query list: [('Password: ', 1)] Our response: [('lololo', 0)] Error: ('Authentication failure', 7) 2- False I don't understand why the auth function does not return the same value given the same inputs. Any idea where I got this wrong ? Here is the original script, if that could help you. Thanks a lot !

    Read the article

  • Hudson Maven build fails using workspace POM, works when pointing to development copy

    - by Deejay
    I'm developing a series of web applications using Eclipse IDE, Maven, SVN, and Hudson for CI. When I specify the "Root POM" option in my Hudson job to be the copy of pom.xml in its workspace directory, the build fails citing compilation failure due to missing classpath entries. [ERROR] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] Compilation failure C:\Users\djones\.hudson\jobs\Store\workspace\trunk\src\main\java\com\app\store\model\User.java:[24,42] package org.hibernate.validator.constraints does not exist C:\Users\djones\.hudson\jobs\Store\workspace\trunk\src\main\java\com\app\store\dao\UserGroupHibernateSupportDao.java:[8,20] package org.hibernate does not exist C:\Users\djones\.hudson\jobs\Store\workspace\trunk\src\main\java\com\app\store\dao\UserGroupHibernateSupportDao.java:[10,49] package org.springframework.orm.hibernate3.support does not exist When I specify the "Root POM" to be the copy of pom.xml in my Eclipse workspace, it builds just fine. It builds fine from Eclipse too. I want to move Hudson over to a separate machine so several developers can use it, so I can't very well point to my own development workspace to give it a POM. If I try putting an SVN URL in the "root pom.xml" option, it says file not found. What should I be entering here for a project worked on by several developers, and hosted in an SVN repository?

    Read the article

  • Error handling in C++, constructors vs. regular methods

    - by Dennis Ritchie
    I have a cheesesales.txt CSV file with all of my recent cheese sales. I want to create a class CheeseSales that can do things like these: CheeseSales sales("cheesesales.txt"); //has no default constructor cout << sales.totalSales() << endl; sales.outputPieChart("piechart.pdf"); The above code assumes that no failures will happen. In reality, failures will take place. In this case, two kinds of failures could occur: Failure in the constructor: The file may not exist, may not have read-permissions, contain invalid/unparsable data, etc. Failure in the regular method: The file may already exist, there may not be write access, too little sales data available to create a pie chart, etc. My question is simply: How would you design this code to handle failures? One idea: Return a bool from the regular method indicating failure. Not sure how to deal with the constructor. How would seasoned C++ coders do these kinds of things?

    Read the article

  • Appropriate uses of Monad `fail` vs. MonadPlus `mzero`

    - by jberryman
    This is a question that has come up several times for me in the design code, especially libraries. There seems to be some interest in it so I thought it might make a good community wiki. The fail method in Monad is considered by some to be a wart; a somewhat arbitrary addition to the class that does not come from the original category theory. But of course in the current state of things, many Monad types have logical and useful fail instances. The MonadPlus class is a sub-class of Monad that provides an mzero method which logically encapsulates the idea of failure in a monad. So a library designer who wants to write some monadic code that does some sort of failure handling can choose to make his code use the fail method in Monad or restrict his code to the MonadPlus class, just so that he can feel good about using mzero, even though he doesn't care about the monoidal combining mplus operation at all. Some discussions on this subject are in this wiki page about proposals to reform the MonadPlus class. So I guess I have one specific question: What monad instances, if any, have a natural fail method, but cannot be instances of MonadPlus because they have no logical implementation for mplus? But I'm mostly interested in a discussion about this subject. Thanks! EDIT: One final thought occured to me. I recently learned (even though it's right there in the docs for fail) that monadic "do" notation is desugared in such a way that pattern match failures, as in (x:xs) <- return [] call the monad's fail. It seems like the language designers must have been strongly influenced by the prospect of some automatic failure handling built in to haskell's syntax in their inclusion of fail in Monad.

    Read the article

  • VisualSVN How to roll back the revision number ?

    - by Ita
    The company I work in has suffered a major server failure. During this failure the SVN repository was lost. But there is still hope! We have an old backup of the repository which I've managed to successfully restore using VisualSVN. The problem I'm facing now is that I can't update / commit pre-failure checked-out folders. The reason for this problem is that, for instance, a local folder has a revision number of 2361, while the repository itself holds a revision number of 2290, which is older. Is there a way to deal with this issue? Can I somehow change the revision numbers on either the local copy or the server copy? A few points: I'm using TortoiseSVN 1.6.6. I can checkout folders from the repo and the connection is active. I've picked one of my folders and used the Relocate option on it. This helped me see that there is something wrong with the revision number. I've experimented a bit with the merge option but this led me nowhere special. (I'm open to suggestions.) Thank you for your time, Ita

    Read the article

  • XSLT special characters

    - by Apurv
    In the following XSL transformation how do I output the '<' and '>' symbols?
    Input XML:
    <TestResult bugnumber="3214" testname="display.methods::close->test_ManyInvoke" errortype="Failure"><ErrorMessage><![CDATA[calling close() method failed - expected:<2>]]></ErrorMessage>
    XSLT:
    <xsl:template match="TestResult">
      <xsl:variable name="errorMessage">
        <xsl:value-of select="ErrorMessage" disable-output-escaping="yes"/>
      </xsl:variable>
      <Test name='{@testname}'>
        <TestResult>
          <Passed>false</Passed>
          <State>failure</State>
          <Metadata>
            <Entry name='bugnumber' value='{@bugnumber}' />
          </Metadata>
          <TestOutput>
            <Metadata>
              <Entry name='ErrorMessage' value='{$errorMessage}' />
            </Metadata>
          </TestOutput>
        </TestResult>
      </Test>
    </xsl:template>
    Output XML:
    <Test name="display.methods::close-&gttest_ManyInvoke">
      <TestResult>
        <Passed>false</Passed>
        <State>failure</State>
        <Metadata>
          <Entry name="bugnumber" value="3214"/>
        </Metadata>
        <TestOutput>
          <Metadata>
            <Entry name="ErrorMessage" value="calling close() method failed - expected:&lt;2&gt;"/>
          </Metadata>
        </TestOutput>
      </TestResult>
    </Test>

    Read the article
