Search Results

Search found 80 results on 4 pages for 'robustness'.

Page 1/4

  • android content provider robustness on provider crash

    - by user1298992
    On Android platforms (confirmed on ICS), if a content provider dies while a client is in the middle of a query (i.e. has an open cursor), the framework decides to kill the client processes holding an open cursor. Here is the logcat output from when I tried this with a download manager query that sleeps after doing the query. The "sleep" was there to reproduce the problem; you can imagine it happening in a regular use case when the provider dies at the right/wrong time. I then killed com.android.media (which hosts the DownloadProvider): "Killing com.example (pid 12234) because provider com.android.providers.downloads.DownloadProvider is in dying process android.process.media" I tracked the code for this to ActivityManagerService::removeDyingProviderLocked. Is this a policy decision, or is cursor access unsafe after the provider has died? It looks like the client cursor is holding an fd for an ashmem region populated by the CP. Is this the reason the clients are killed, instead of an exception being thrown as happens with Binders when the server (provider) dies?
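
    A minimal defensive sketch in Java (an assumed mitigation, not a framework guarantee; the uri and the "title" column are placeholders): drain the cursor into process-local memory and close it promptly, shrinking the window in which a dying provider can take the client process down with it.

        import android.content.ContentResolver;
        import android.database.Cursor;
        import android.net.Uri;
        import java.util.ArrayList;
        import java.util.List;

        final class CursorDrain {
            // Copy the wanted column out immediately rather than holding the
            // ashmem-backed cursor window open while doing unrelated work.
            static List<String> queryTitles(ContentResolver resolver, Uri uri) {
                List<String> titles = new ArrayList<>();
                Cursor c = resolver.query(uri, new String[] {"title"}, null, null, null);
                if (c == null) return titles;
                try {
                    int col = c.getColumnIndexOrThrow("title");
                    while (c.moveToNext()) {
                        titles.add(c.getString(col)); // process-local copy
                    }
                } finally {
                    c.close(); // release the shared-memory window as soon as possible
                }
                return titles;
            }
        }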

    Read the article

  • Improve Microsoft Visual C++ Application Security and Robustness with SafeInt

    In this age of cloud computing, massively parallel systems, and complex security threats like identity theft and decentralized botnets, devoting resources to combat the seemingly age-old issue of integer overflow appears distinctly passé. Despite the fact that integer overflow is such a well-known problem, particularly within C and C++ programming, it remains a real issue from both a defect and a security standpoint. That is why the introduction of the SafeInt template class in Visual C++ 2010 to address overflows is a great addition.
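
    A minimal sketch of the difference SafeInt makes, assuming the Visual C++ <safeint.h> header (namespace msl::utilities); exact exception details may vary by version:

        #include <safeint.h>
        #include <climits>
        #include <iostream>

        using msl::utilities::SafeInt;
        using msl::utilities::SafeIntException;

        int main() {
            // With plain int, INT_MAX + 1 silently wraps (formally undefined behavior):
            // int plain = INT_MAX; plain += 1;   // no error, corrupt value

            try {
                SafeInt<int> guarded(INT_MAX);
                guarded += 1;                    // overflow detected: throws SafeIntException
            } catch (const SafeIntException&) {
                std::cout << "integer overflow caught\n";
            }
        }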

    Read the article

  • Linux software raid robustness

    - by Waxhead
    I have a 4-disk 5TB raid5 setup where one disk is showing signs of going down the drain. It is reporting media errors, and from dmesg I can see that several read errors are corrected. smartctl does report "notifications" but no panic so far. Since new disks are rather expensive at the moment, I am starting to ponder exactly how robust the Linux md layer is. I would appreciate it if someone could shed some light on how md actually deals with disk errors. For example, how does md deal with write and read errors, and what does it (really) take for a disk to be rejected from an array? I also read that md recently got support for mapping out bad blocks. Does this mean that the read errors I've had would have been mapped out if I were running kernel 3.1, or would md still try to "work on them" to make them usable?
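
    A short shell sketch for inspecting what md currently thinks of the array and forcing a consistency scrub (the device names /dev/md0 and /dev/sdb are assumptions):

        cat /proc/mdstat                    # high-level array state
        mdadm --detail /dev/md0             # per-member state, failed/spare counts
        smartctl -a /dev/sdb                # SMART detail for the suspect disk

        # Read every sector; on a raid5 read error, md reconstructs the block
        # from parity and rewrites it, which often remaps a weak sector.
        echo check > /sys/block/md0/md/sync_action
        cat /sys/block/md0/md/mismatch_cnt  # mismatches found by the scrub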

    Read the article

  • Linux software raid robustness for raid1 vs other raid levels

    - by Waxhead
    I have a raid5 running and now also a raid1 that I set up yesterday. Since raid5 calculates parity, it should be able to catch silent data corruption on one disk. For raid1, however, the disks are just mirrors. The more I think about it, the more I figure that raid1 is actually quite risky. Sure, it will save me from a disk failure, but it might not be as good when it comes to protecting the data on the disk (which is actually more important to me). How does Linux software raid actually store raid1-type data on disk? How does it know which spindle is giving corrupt data (if the disk (or subsystem) is not reporting any errors)? If raid1 really gives me disk protection rather than data protection, are there some tricks I can do with mdadm to create a two-disk "raid5-like" setup, e.g. lose capacity but still keep redundancy for the data itself!?

    Read the article

  • NFS robustness or weaknesses

    - by Thomas
    I have 2 web servers with a load balancer in front of them. They both have mounted an NFS share so that they can share some common files, like images uploaded from the CMS and some runtime-generated files. Is NFS robust? Are there any specific weaknesses I should know about? I know it does not support file locking, but that does not matter to me; I use memcache to emulate file locking for the runtime-generated files. Thanks
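
    One classic robustness decision with NFS is hard versus soft mounting. A hypothetical /etc/fstab line (server name, export, and mount point assumed) might look like this:

        # "hard" makes the client retry indefinitely if the server goes away,
        # rather than handing applications I/O errors mid-write as "soft" would.
        nfs-server:/export/shared  /var/www/shared  nfs  hard,vers=4,timeo=600  0  0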

    Read the article

  • Reading data from a socket, considerations for robustness and security

    - by w.brian
    I am writing a socket server that will implement small portions of the HTTP and WebSocket protocols, and I'm wondering what I need to take into consideration in order to make it robust/secure. This is my first time writing a socket-based application, so please excuse me if any of my questions are particularly naive. Here goes: Is it wrong to assume that you've received an entire HTTP request (WebSocket request, etc.) if you've read all data available from the socket? Likewise, is it wrong to assume you've only received one request? Is TCP responsible for making sure I get the "message" all at once as sent by the client, or do I have to manually detect the beginning and end of each "message" for whatever protocol I'm implementing? Regarding security: what, in general, should I be aware of? Are there any common pitfalls when implementing something like this? As always, any feedback is greatly appreciated.
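
    On the framing question specifically: TCP hands you a byte stream, not messages, so a single read may return half a request or two pipelined requests back to back. A minimal Java sketch (not production HTTP parsing) that frames one header block by accumulating bytes until the \r\n\r\n terminator, with a size cap as a basic denial-of-service guard:

        import java.io.ByteArrayOutputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.nio.charset.StandardCharsets;

        final class HeaderFramer {
            // Reads until the CRLF CRLF that ends an HTTP header block.
            // Framing has to happen here; TCP does not preserve message boundaries.
            static String readHeaderBlock(InputStream in) throws IOException {
                ByteArrayOutputStream buf = new ByteArrayOutputStream();
                int b;
                int tail = 0;                          // rolling window of last 4 bytes
                while ((b = in.read()) != -1) {
                    buf.write(b);
                    tail = (tail << 8) | (b & 0xFF);
                    if (tail == 0x0D0A0D0A) {          // "\r\n\r\n" terminator seen
                        return new String(buf.toByteArray(), StandardCharsets.ISO_8859_1);
                    }
                    if (buf.size() > 64 * 1024) {
                        throw new IOException("header block too large"); // DoS guard
                    }
                }
                throw new IOException("connection closed mid-request");
            }
        }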

    Read the article

  • binary file formats: need for error correction?

    - by Jason S
    I need to serialize some data in a binary format for efficiency (a datalog where 10-100MB files are typical), and I'm working out the formatting details. I'm wondering whether I realistically need to worry about file corruption / error correction / etc. Under what circumstances can file corruption happen? Should I be building robustness to corruption into my binary format? Or should I wrap my not-robust-to-corruption stream of bytes with some kind of error-correcting code? (Any suggestions? I'm using Java.) Or should I just not worry about this?
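
    A middle ground between ignoring corruption and full error correction is cheap detection. A minimal Java sketch (this record framing is an illustration, not a standard format): wrap each record as [length][payload][CRC32] and verify on read. Actual correction, as opposed to detection, would need something like a Reed-Solomon code.

        import java.io.DataInputStream;
        import java.io.DataOutputStream;
        import java.io.IOException;
        import java.util.zip.CRC32;

        final class ChecksummedRecords {
            static void writeRecord(DataOutputStream out, byte[] payload) throws IOException {
                CRC32 crc = new CRC32();
                crc.update(payload);
                out.writeInt(payload.length);   // [length]
                out.write(payload);             // [payload]
                out.writeLong(crc.getValue());  // [CRC32]
            }

            static byte[] readRecord(DataInputStream in) throws IOException {
                byte[] payload = new byte[in.readInt()];
                in.readFully(payload);
                CRC32 crc = new CRC32();
                crc.update(payload);
                if (in.readLong() != crc.getValue()) {
                    throw new IOException("record failed CRC check: corrupt data");
                }
                return payload;
            }
        }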

    Read the article

  • Robust LINQ to XML query for sibling key-value pairs

    - by awshepard
    (First post, please be gentle!) I am just learning about LINQ to XML in all its glory and frailty, trying to hack it to do what I want to do. Given an XML file like this:

        <list>
          <!-- random data, keys, values, etc.-->
          <key>FIRST_WANTED_KEY</key>
          <value>FIRST_WANTED_VALUE</value>
          <key>SECOND_WANTED_KEY</key>
          <value>SECOND_WANTED_VALUE</value> <!-- wanted because it's first -->
          <key>SECOND_WANTED_KEY</key>
          <value>UNWANTED_VALUE</value> <!-- not wanted because it's second -->
          <!-- nonexistent <key>THIRD_WANTED_KEY</key> -->
          <!-- nonexistent <value>THIRD_WANTED_VALUE</value> -->
          <!-- more stuff-->
        </list>

    I want to extract the values of a set of known "wanted keys" in a robust fashion, i.e. if SECOND_WANTED_KEY appears twice, I only want SECOND_WANTED_VALUE, not UNWANTED_VALUE. Additionally, THIRD_WANTED_KEY may or may not appear, so the query should be able to handle that as well. I can assume that FIRST_WANTED_KEY will appear before other keys, but can't assume anything about the order of the other keys; if a key appears twice, its values aren't important, I only want the first one. An anonymous data type consisting of strings is fine. My attempt has centered around something along these lines:

        var z = from y in x.Descendants()
                where y.Value == "FIRST_WANTED_KEY"
                select new
                {
                    first_wanted_value = ((XElement)y.NextNode).Value,
                    //...
                };

    My question is what should that ... be? I've tried, for instance, (ugly, I know)

        second_wanted_value = ((XElement)y.ElementsAfterSelf()
            .Where(w => w.Value == "SECOND_WANTED_KEY")
            .FirstOrDefault().NextNode).Value

    which should hopefully allow the key to be anywhere, or non-existent, but that hasn't worked out, since .NextNode on a null XElement doesn't seem to work. I've also tried to add in a

        .Select(t => { if (t == null) return new XElement("SECOND_WANTED_KEY", ""); else return t; })

    clause after the Where, but that hasn't worked either. I'm open to suggestions, (constructive) criticism, links, references, or suggestions of phrases to Google for, etc. I've done a fair share of Googling and checking around S.O., so any help would be appreciated. Thanks!
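
    One robust approach, sketched in C# (the doc variable, file name, and element names follow the question's XML; this is an illustration, not the only answer): walk the flat key/value siblings once, keeping only the first value seen per key, so duplicates are ignored and absent keys simply stay missing.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Xml.Linq;

        class FirstValuePerKey
        {
            static void Main()
            {
                XDocument doc = XDocument.Load("list.xml");
                var firstValues = new Dictionary<string, string>();

                foreach (XElement key in doc.Root.Elements("key"))
                {
                    // The first <value> sibling after a <key> belongs to it.
                    XElement value = key.ElementsAfterSelf("value").FirstOrDefault();
                    if (value != null && !firstValues.ContainsKey(key.Value))
                        firstValues[key.Value] = value.Value;  // keep first occurrence only
                }

                string second;
                firstValues.TryGetValue("SECOND_WANTED_KEY", out second);
                Console.WriteLine(second ?? "(absent)");       // null-safe for missing keys
            }
        }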

    Read the article

  • Must web apps support back button?

    - by Patrick Peters
    I did a system test on a new ASP.NET app. I encountered several exceptions when using the BACK button in my browser (IE 7). I stated in a review record that the web app must support the use of the BACK button (or at least handle it gracefully, with, for example, session-timeout warnings). The team lead did not agree with me, as he stated that a web app should not support working with the back button by default. Do you agree?

    Read the article

  • How can I find out which version of emacs introduced a function?

    - by Chris R
    I want to write a .emacs that uses as much of the mainline emacs functionality as possible, falling back gracefully when run under previous versions. I've found through trial and error some functions that didn't exist in emacs 22 but now do in emacs 23, on the rare occasions that I've ended up running my dotfiles under emacs 22. However, I'd like to take a more proactive approach and have subsets of my dotfiles that only take effect when version >= <some-threshold> (for example). The function I'm focusing on right now is scroll-bar-mode, but I'd like a general solution. I have not seen a consistent source for this info; I've checked the gnu.org online docs, the function code itself, and so far nothing. How can I determine this without keeping every version of emacs I want to support kicking around?
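
    A common dodge (a sketch, not the only approach) is to sidestep the which-version-introduced-it question entirely and test for the feature itself; fboundp works in any Emacs version, and emacs-major-version is there when an explicit threshold really is needed:

        ;; Feature detection: run only if the function exists in this Emacs.
        (when (fboundp 'scroll-bar-mode)
          (scroll-bar-mode -1))

        ;; Explicit version gating, when a threshold is genuinely required.
        (when (>= emacs-major-version 23)
          ;; emacs 23+ only configuration goes here
          )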

    Read the article

  • How can I programmatically tell if a caught IOException is because the file is being used by another

    - by Paul K
    When I open a file, I want to know if it is being used by another process so I can perform special handling; any other IOException I will bubble up. An IOException's Message property contains "The process cannot access the file 'foo' because it is being used by another process.", but this is unsuitable for programmatic detection. What is the safest, most robust way to detect a file being used by another process?
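
    A minimal C# sketch of the usual answer: check the Win32 error code behind the exception rather than parsing the localized Message. Error codes 32 and 33 are ERROR_SHARING_VIOLATION and ERROR_LOCK_VIOLATION; Marshal.GetHRForException exposes the HRESULT on .NET versions where Exception.HResult is not public.

        using System;
        using System.IO;
        using System.Runtime.InteropServices;

        static class FileInUse
        {
            const int ErrorSharingViolation = 32;
            const int ErrorLockViolation = 33;

            static bool IsFileLocked(IOException ex)
            {
                int code = Marshal.GetHRForException(ex) & 0xFFFF; // low word = Win32 error
                return code == ErrorSharingViolation || code == ErrorLockViolation;
            }

            static void Main()
            {
                try
                {
                    using (var f = File.Open("foo", FileMode.Open, FileAccess.Read, FileShare.None))
                    {
                        // work with the file
                    }
                }
                catch (IOException ex)
                {
                    if (IsFileLocked(ex))
                        Console.WriteLine("file is in use by another process");
                    else
                        throw; // bubble up everything else
                }
            }
        }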

    Read the article

  • "set -e" in shell and command substitution

    - by ivant
    In shell scripts, set -e is often used to make them more robust, by stopping the script when one of the commands executed from the script exits with a non-zero exit code. It's usually easy to specify that you don't care about a particular command succeeding by adding || true at the end. The problem appears when you actually care about the return value but don't want the script to stop on a non-zero return code, for example:

        output=$(possibly-failing-command)
        if [ 0 == $? -a -n "$output" ]; then
            ...
        else
            ...
        fi

    Here we want to both check the exit code (thus we can't use || true inside the command substitution expression) and get the output. However, if the command in the command substitution fails, the whole script stops due to set -e. Is there a clean way to prevent the script from stopping here, without unsetting -e and setting it back afterwards?
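
    One clean way (a sketch using the question's hypothetical possibly-failing-command): commands tested as an if condition are exempt from errexit, and that exemption extends to a command substitution used in an assignment that is itself the condition.

        set -e

        if output=$(possibly-failing-command); then
            [ -n "$output" ] && printf '%s\n' "$output"   # success path
        else
            status=$?                                     # non-zero exit; script keeps running
            echo "command failed with status $status" >&2
        fi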

    Read the article

  • Check and avoid if a char is being entered in an int

    - by John
    Hi... This is an extremely stupid question, but I need help with it. I'm trying to make a small program I wrote more robust, and I need some help with that:

        int num1;
        int num2 = 0;
        System.out.print("Enter number 1: ");
        num1 = kb.nextInt();
        while (num2 < num1) {
            System.out.print("Enter number 2: ");
            num2 = kb.nextInt();
        }

    Number 2 has to be greater than number 1. Also, I want the program to automatically check and ignore it if the user enters a char instead of an int, because right now, when a user enters, let's say, "r" instead of a number, the program just exits.
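
    A minimal sketch using Scanner.hasNextInt(), which peeks at the next token so non-numeric input can be discarded instead of letting nextInt() throw InputMismatchException. Note the <= comparison, since number 2 must be strictly greater than number 1:

        import java.util.Scanner;

        public class ReadTwoNumbers {
            static int readInt(Scanner kb, String prompt) {
                while (true) {
                    System.out.print(prompt);
                    if (kb.hasNextInt()) return kb.nextInt();
                    kb.next(); // discard the offending token, e.g. "r"
                    System.out.println("Please enter a whole number.");
                }
            }

            public static void main(String[] args) {
                Scanner kb = new Scanner(System.in);
                int num1 = readInt(kb, "Enter number 1: ");
                int num2 = readInt(kb, "Enter number 2: ");
                while (num2 <= num1) { // number 2 must be greater than number 1
                    System.out.println("Number 2 must be greater than number 1.");
                    num2 = readInt(kb, "Enter number 2: ");
                }
                System.out.println(num1 + " < " + num2);
            }
        }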

    Read the article

  • E-commerce application: how is this robustness criterion implemented?

    - by Akshar Prabhu Desai
    Consider the following use case:

    1. A user selects a product to purchase on the seller's site.
    2. He clicks on the net-banking option and is redirected to his bank's website.
    3. He successfully makes the payment.
    4. But before the payment gateway redirects him back to the seller's site, the browser crashes.
    5. The seller's site reports that the payment was not received, but the bank reports that the payment has been made.

    What are the best practices to handle such cases?

    Read the article

  • Software Architecture Analysis Method (SAAM)

    Software Architecture Analysis Method (SAAM) is a methodology used to determine how specific application quality attributes were achieved, and how possible future changes will affect those quality attributes, based on hypothetical case studies. Common quality attributes that can be evaluated with this methodology include modifiability, robustness, portability, and extensibility.

    Quality Attribute: Application Modifiability

    The modifiability quality attribute refers to how easy changing the system in the future will be. This to me is a very open-ended attribute, because a business could decide to transform a Point of Sale (POS) system into a Lead Tracking system overnight. (Yes, this did actually happen to me.) In order for SAAM to be properly applied to this attribute, specific hypothetical case studies need to be created and reviewed for modifiability, because various scenarios would return various results based on the amount of change. In the case of the POS system, changing out a payment gateway or adding an additional payment method would have scored very high in comparison to changing the system over to a lead management system. I personally would evaluate this quality attribute based on the S.O.L.I.D. principles of software design. I have found from my experience that the use of S.O.L.I.D. in software design allows for the adoption of changes within a system.

    Quality Attribute: Application Robustness

    The robustness quality attribute refers to how an application handles the unexpected. The unexpected can be defined as, but is not limited to, anything not anticipated in the original design of the system: bad data, limited or no network connectivity, invalid permissions, or any unexpected application exceptions. I would personally evaluate this quality attribute based on how the system handles exceptions.

    Robustness considerations:
    - Did the system stop, or did it handle the unexpected error?
    - Did the system log the unexpected error for future debugging?
    - What message did the user receive about the error?

    Quality Attribute: Application Portability

    The portability quality attribute refers to the ease of porting an application to run on a new operating system or device. For example, it is much easier to alter an ASP.net website to be accessible from a PC, Mac, iPhone, Android phone, mini PC, or tablet than to do the same for a desktop application written in VB.net, because a lot more work would be involved to get the desktop app to the point where porting it over to the various environments and devices would be viable. I would personally evaluate this quality attribute against each new environment the hypothetical case study identifies, paying particular attention to the following items.

    Portability considerations:
    - Hardware dependencies
    - Operating system dependencies
    - Data source dependencies
    - Network dependencies and availability

    Quality Attribute: Application Extensibility

    The extensibility quality attribute refers to the ease of adding new features to an existing application without impacting existing functionality. I would personally evaluate this quality attribute for each new environment based on the following.

    Extensibility considerations:
    - Hard-coded variables versus configurable variables
    - Application documentation (external documents and codebase documentation)
    - The use of S.O.L.I.D. design principles

    Read the article

  • Be liberal in what you accept... or not?

    - by Matthieu M.
    [Disclaimer: this question is subjective, but I would prefer getting answers backed by facts and/or reflections.] I think everyone knows about the Robustness Principle, usually summed up by Postel's Law: Be conservative in what you send; be liberal in what you accept. I would agree that for the design of a widespread communication protocol this may make sense (with the goal of allowing easy extension); however, I have always thought that its application to HTML / CSS was a total failure, each browser implementing its own silent tweak detection / behavior, making it near impossible to obtain consistent rendering across multiple browsers. I do notice, though, that the RFC for the TCP protocol deems "Silent Failure" acceptable unless otherwise specified... which is an interesting behavior, to say the least. There are other examples of the application of this principle throughout the software trade that regularly pop up because they have bitten developers; off the top of my head: JavaScript semicolon insertion, and C's (silent) built-in conversions (which would not be so bad if they did not truncate...). And there are tools to help implement "smart" behavior: phonetic name-matching algorithms (Double Metaphone) and string distance algorithms (Levenshtein distance). However, I find that this approach, while it may be helpful when dealing with non-technical users or to help users in the process of error recovery, has some drawbacks when applied to the design of a library/class interface: it is somewhat subjective whether the algorithm guesses "right", and thus it may go against the Principle of Least Astonishment; it makes the implementation more difficult, and thus creates more chances to introduce bugs (violation of YAGNI?); and it makes the behavior more susceptible to change, as any modification of the "guess" routine may break old programs, nearly excluding refactoring possibilities... from the start! And this is what led me to the following question: when designing an interface (library, class, message), do you lean toward the robustness principle or not? I myself tend to be quite strict, using extensive input validation on my interfaces, and I was wondering if I was perhaps too strict.

    Read the article

  • Cisco 3560+ipservices -- IGMP snooping issue with TTL=1

    - by Jander
    I've got a C3560 with Enhanced (IPSERVICES) image, routing multicast between its VLANs with no external multicast router. It's serving a test environment where developers may generate multicast traffic on arbitrary addresses. Everything is working fine except when someone sends out multicast traffic with TTL=1, in which case the multicast packet suppression fails and the traffic is broadcast to all members of the VLAN. It looks to me like because the TTL is 1, the multicast routing subsystem doesn't see the packets, so it doesn't create an mroute table entry. If I send out packets with TTL=2 briefly, then switch to TTL=1 packets, they are filtered correctly until the mroute entry expires. My question: is there some trick to getting the switch to filter the TTL=1 packets, or am I out of luck? Below are the relevant parts of the config, with a representative VLAN interface. I can provide more info as needed.

        #show run
        ...
        ip routing
        ip multicast-routing distributed
        no ip igmp snooping report-suppression
        !
        interface Vlan44
         ip address 172.23.44.1 255.255.255.0
         no ip proxy-arp
         ip pim passive
        ...

        #show ip igmp snooping vlan 44
        Global IGMP Snooping configuration:
        -------------------------------------------
        IGMP snooping              : Enabled
        IGMPv3 snooping (minimal)  : Enabled
        Report suppression         : Disabled
        TCN solicit query          : Disabled
        TCN flood query count      : 2
        Robustness variable        : 2
        Last member query count    : 2
        Last member query interval : 1000

        Vlan 44:
        --------
        IGMP snooping                  : Enabled
        IGMPv2 immediate leave         : Disabled
        Multicast router learning mode : pim-dvmrp
        CGMP interoperability mode     : IGMP_ONLY
        Robustness variable            : 2
        Last member query count        : 2
        Last member query interval     : 1000

    Read the article

  • Redehost Transforms Cloud & Hosting Services with MySQL Enterprise Edition

    - by Mat Keep
    RedeHost is one of Brazil's largest cloud computing and web hosting providers, with more than 60,000 customers and 52,000 web sites running on its infrastructure. As the company grew, RedeHost needed to automate operations such as system monitoring, making the operations team more proactive in solving problems. RedeHost also sought to improve server uptime, robustness, and availability, especially during backup windows, when performance would often dip. To address the needs of the business, RedeHost migrated from the community edition of MySQL to MySQL Enterprise Edition, which has delivered a host of benefits:

    - Proactive database management and monitoring using MySQL Enterprise Monitor, enabling RedeHost to fulfil customer SLAs. Using the Query Analyzer, RedeHost was able to identify slow queries more rapidly, improving customer support.
    - Quadrupled backup speed with MySQL Enterprise Backup, leading to faster data recovery and improved system availability.
    - Reduced DBA overhead by 50% due to the improved support capabilities offered by MySQL Enterprise Edition.
    - Enabled infrastructure consolidation, avoiding unnecessary energy costs and premature hardware acquisition.

    You can learn more from the full RedeHost case study. Also, take a look at the recently updated MySQL in the Cloud whitepaper for the latest developments that are making it even simpler and more efficient to develop and deploy new services with MySQL in the cloud.

    Read the article

  • Unexpected advantage of Engineered Systems

    - by user12244672
    It's not surprising that Engineered Systems accelerate the debugging and resolution of customer issues. But what has surprised me is just how much faster issue resolution is with Engineered Systems such as SPARC SuperCluster. These are powerful, complex systems used by customers wanting extreme database performance, app performance, and cost-saving server consolidation.

    A SPARC SuperCluster consists of 2 or 4 powerful T4-4 compute nodes, 3 or 6 extreme-performance Exadata Storage Cells, a ZFS Storage Appliance 7320 for general-purpose storage, and ultra-fast Infiniband switches, each with its own firmware. It runs Solaris 11, Solaris 10, 11gR2, LDoms virtualization, and Zones virtualization on the T4-4 compute nodes, a modified version of Solaris 11 in the ZFS Storage Appliance, a modified and highly tuned version of Oracle Linux running Exadata software on the Storage Cells, another Linux derivative in the Infiniband switches, etc. It has an Infiniband data network between the components, a 10Gb data network to the outside world, and a 1Gb management network. And customers can run whatever middleware and apps they want on it, clustered in whatever way they want. In one word, powerful. In another, complex.

    The system is highly Engineered. But it's designed to run general-purpose applications. That is, the physical components, configuration, cabling, virtualization technologies, switches, firmware, operating system versions, network protocols, tunables, etc. are all preset for optimum performance and robustness. That improves the customer experience, as what the customer runs leverages our technical know-how and best practices and is what we've tested intensely within Oracle. It should also make debugging easier by fixing a large number of variables which would otherwise be in play if a customer or systems integrator had assembled such a complex system themselves from the constituent components. For example, there are myriad network protocols which could be used with Infiniband, myriad ways the components could be interconnected, myriad tunable settings, etc.

    But what has really surprised me - and I've been working in this area for 15 years now - is just how much easier and faster Engineered Systems have made debugging and issue resolution. All those error opportunities for sub-optimal cabling, unusual network protocols, sub-optimal deployment of virtualization technologies, issues with 3rd-party storage, issues with 3rd-party multi-pathing products, etc. are simply taken out of the equation. All those error opportunities for making an issue unique to a particular set-up, the "why aren't we seeing this on any other system?" type questions, the doubts, just go away when we or a customer discover an issue on an Engineered System. It enables a really honed response, getting to the root cause much, much faster than would otherwise be the case. Here are a couple of examples from the last month, one found in-house by my team, one found by a customer:

    Example 1: We found a node-eviction issue running 11gR2 with Solaris 11 SRU 12 under extreme load on what we call our ExaLego test system (which mimics an Exadata / SuperCluster 11gR2 Exadata Storage Cell set-up). We quickly established that an enhancement in SRU 12 enabled an 11gR2 process to query Infiniband's Subnet Manager, replacing a fallback mechanism it had used previously. Under abnormally heavy load, the query could return results which were misinterpreted, resulting in node eviction. In several daily joint debugging sessions between the Solaris, Infiniband, and 11gR2 teams, the issue was fully root-caused, evaluated, and a fix agreed upon. That fix went back into all Solaris releases the following Monday. From initial issue discovery to the fix being put back into all Solaris releases was just 10 days.

    Example 2: A customer reported sporadic performance degradation. The reasons were unclear and the information sparse. The SPARC SuperCluster Engineered Systems support team, which comprises both SPARC/Solaris and Database/Exadata experts, worked to root-cause the issue. A number of contributing factors were discovered, including tunable parameters. An intense collaborative investigation between the engineering teams identified the root cause: a CPU-bound networking thread which was being starved of CPU cycles under extreme load. Workarounds were identified. Modifications have been put back into 11gR2 to alleviate the issue, and a development project already underway within Solaris has been sped up to provide the final resolution on the Solaris side. The fixed SPARC SuperCluster configuration greatly aided issue reproduction and dramatically sped up root cause analysis, allowing the correct workarounds and fixes to be identified, prioritized, and implemented. The customer is now extremely happy with performance and robustness. Since the configuration is common to other customers, the lessons learned are being proactively rolled out to other customers and incorporated into the installation procedures for future customers. This effectively acts as a turbo-boost to performance and reliability for all SPARC SuperCluster customers.

    If this had occurred in a "home grown" system of this complexity, I expect it would have taken at least 6 months to get to the bottom of the issue. But because it was an Engineered System, known, understood, and qualified by both the Solaris and Database teams, we were able to collaborate closely to identify cause and effect and expedite a solution for the customer. That is a key advantage of Engineered Systems which should not be underestimated. Indeed, the initial issue mitigation on the Database side followed by the final fix on the Solaris side highlights the high degree of collaboration and excellent teamwork between the Oracle engineering teams. It's a compelling advantage of the integrated Oracle Red Stack in general and Engineered Systems in particular.

    Read the article

  • Usage of open source libraries in high governance and risk-averse large organizations (banks, financ

    - by bart
    Does anyone have any good stories of these kinds of organizations being open to using open source dependencies (and also tools)? Many staff I've encountered have little or no exposure to open source/systems, and open source is treated with great suspicion. Some of the reasons given for this are lack of support and robustness, which is ironic given the number of end-of-life, unsupported vendor products that are in production. I'm also interested in any success stories where you've seen open source go into orgs like this and deliver a real benefit!

    Read the article

  • Usage of Maven (and open source in general) in high governance and risk-averse large organizations (

    - by bart
    Does anyone have any good stories of these kinds of organizations being open to using open source (such as tools like Maven, etc.)? Many staff I've encountered have little or no exposure to open source/systems, and open source is treated with great suspicion. Some of the reasons given for this are lack of support and robustness, which is ironic given the number of end-of-life, unsupported vendor products that are in production. Bonus points for any success stories where you've seen open source go into orgs like this and deliver a real benefit!

    Read the article

  • How to store (and use) the current mouse position?

    - by Ben Packard
    What is the best way to store the current mouse position (system-wide) and then (later) put the mouse back at that stored point? [NSEvent mouseLocation] gets me the position, and I can move the mouse with a CGEventMouseMoved, but they each use a different coordinate system (I believe y=0 is the bottom for NSEvent and the top for a CGEvent). I'm worried about the robustness of capturing the screen height and using it to convert between the two - or is this the best approach?
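
    A minimal Swift sketch of the flip-by-screen-height conversion the question describes (the primary-screen assumption and names are illustrative; CGDisplayMoveCursorToPoint is an alternative to CGWarpMouseCursorPosition). Cocoa measures y upward from the bottom of the primary screen, Quartz measures it downward from the top, so flipping against the primary screen's height converts between the two:

        import AppKit

        // Capture the current position in Cocoa (bottom-left origin) coordinates.
        let saved = NSEvent.mouseLocation

        // Later: convert to Quartz (top-left origin) coordinates and restore it.
        if let primary = NSScreen.screens.first {
            let cgPoint = CGPoint(x: saved.x,
                                  y: primary.frame.maxY - saved.y)
            _ = CGWarpMouseCursorPosition(cgPoint)
        }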

    Read the article
