Search Results

Search found 22139 results on 886 pages for 'security testing'.


  • Breaking through the class sealing

    - by Jason Crease
    Do you understand 'sealing' in C#?  Somewhat?  Anyway, here's the lowdown. I've written this article from a C# perspective, but I've occasionally referenced .NET where appropriate.

    What is sealing a class?

    By sealing a class in C#, you ensure that no class can be derived from it.  You do this by simply adding the word 'sealed' to a class definition:

        public sealed class Dog {}

    Now, writing something like "public sealed class Hamster : Dog {}" will get you a compile error like this:

        'Hamster': cannot derive from sealed type 'Dog'

    If you look in an IL disassembler, you'll see a definition like this:

        .class public auto ansi sealed beforefieldinit Dog extends [mscorlib]System.Object

    Note the addition of the word 'sealed'.

    What about sealing methods?

    You can also seal overriding methods.  By adding the word 'sealed', you ensure that the method cannot be overridden in a derived class.  Consider the following code:

        public class Mammal { public virtual void Go() { } }

        public class Dog : Mammal { public sealed override void Go() { } }

    In this code, the method 'Go' in Dog is sealed.  It cannot be overridden in a subclass.  Writing this would cause a compile error:

        public class Dachshund : Dog { public override void Go() { } }

    However, we can 'new' a method with the same name.  This is essentially a new method, distinct from the 'Go' in the base class:

        public class Terrier : Dog { public new void Go() { } }

    Sealing properties?

    You can also seal properties.  You add 'sealed' to the property definition, like so:

        public sealed override string Name
        {
            get { return m_Name; }
            set { m_Name = value; }
        }

    In C#, you can only seal a property, not the underlying setters/getters.  This is because C# offers no override syntax for setters or getters.  However, in the underlying IL the setter and getter methods are sealed individually - a property is just metadata.

    Why bother sealing?

    There are a few traditional reasons to seal:

    - Invariance. Other people may want to derive from your class, even though your implementation may make successful derivation near-impossible.  There may be twisted, hacky logic that could never be second-guessed by another developer.  By sealing your class, you're protecting them from wasting their time.  The CLR team has sealed most of the framework classes, and I assume they did this for this reason.
    - Security.  By deriving from your type, an attacker may gain access to functionality that enables him to hack your system.  I consider this a very weak security precaution.
    - Speed.  If a class is sealed, then .NET doesn't need to consult the virtual-function-call table to find the actual type, since it knows that no derived type can exist.  Therefore, it could emit a 'call' instead of a 'callvirt', or at least optimise the machine code, thus producing a performance benefit.  But I've run trials and have been unable to demonstrate this.  If you have an example, please share!

    All in all, I'm not convinced that sealing is interesting or important.  Anyway, moving on...

    What is automatically sealed?

    - Value types and structs.  If they were not always sealed, all sorts of things would go wrong.  For instance, structs are laid out inline within a class.  But what if you assigned a substruct to a struct field of that class?  There might be too many fields to fit.
    - Static classes.  Static classes exist in C# but not in .NET.  The C# compiler compiles a static class into an 'abstract sealed' class.  So static classes are already sealed in C#.
    - Enumerations.  The CLR does not track the types of enumerations - it treats them as simple value types.  Hence, polymorphism would not work.

    What cannot be sealed?

    - Interfaces.  Interfaces exist to be implemented, so sealing to prevent implementation is dumb.  But what if you could prevent interfaces from being extended (i.e. ban declarations like "public interface IMyInterface : ISealedInterface")?  There is no good reason to seal an interface like this.  Sealing finalizes behaviour, but interfaces have no intrinsic behaviour to finalize.
    - Abstract classes.  In IL you can create an abstract sealed class.  But C# syntax for this already exists - declaring the class as 'static' - so C# forces you to declare it as such.
    - Non-override methods.  If a method isn't declared as override it cannot be overridden, so sealing would make no difference.  Note this is stated from a C# perspective - the terms are different in IL.  In IL you have four choices in total: no declaration (which actually seals the method), 'virtual' (called 'override' in C#), 'sealed virtual' ('sealed override' in C#) and 'newslot virtual' ('new virtual' or 'virtual' in C#, depending on whether the method already exists in a base class).
    - Methods that implement interface methods.  Methods that implement an interface method must be virtual, so they cannot be sealed.
    - Fields.  A field cannot be overridden, only hidden (using the 'new' keyword in C#), so sealing would make no sense.
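    To pull the article's fragments together, here is a minimal compilable sketch of the sealing variants discussed above (the Chihuahua class is added here purely for illustration; it is not from the original article):

        public class Mammal
        {
            public virtual void Go() { }
        }

        public class Dog : Mammal
        {
            // 'sealed override': Go() cannot be overridden further down the hierarchy
            public sealed override void Go() { }
        }

        public class Terrier : Dog
        {
            // legal: 'new' introduces a distinct method that hides Dog.Go
            public new void Go() { }
        }

        // sealed class: nothing can derive from Chihuahua
        public sealed class Chihuahua : Dog { }

        // public class Hamster : Chihuahua { }  // compile error: cannot derive from sealed type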

    Read the article

  • I can't do "sudo"

    - by Klevin92
    Let me describe it from the beginning: I was planning to re-enable the password requirement in LightDM for security reasons. But since my PC has been sluggish lately, it force-closed the password setup while I was entering the password, and now I can't log in no matter what combination I try. I have followed the tips in the Help page, but I ran into issues with all of them:

    - I tried to enter recovery mode (so that I could run passwd for my user name and change the password), but it shows only a black screen, just like my boot screen (because of an nVidia graphics card compatibility issue), so I can't do anything there.
    - I also tried editing the "shadow" file, but the guide talks about some commas that I just can't find where they are supposed to be.
    - I even tried deleting the keyring file as suggested, but nothing happens (except that I lose the other passwords).

    So is there anything I can do to get my password back? (A bonus would be stopping all this sluggishness, apps not responding, etc.)

    Read the article

  • Multiple vulnerabilities in Oracle Java Web Console

    - by RitwikGhoshal
    Component: Apache Tomcat
    Product and Resolution: Solaris 10 (SPARC: 147673-04, X86: 147674-04)

    CVE ID         Vulnerability                                 CVSSv2 Base Score
    CVE-2011-0534  Resource Management Errors                    5.0
    CVE-2011-1184  Permissions, Privileges, and Access Controls  5.0
    CVE-2011-2204  Information Exposure                          1.9
    CVE-2011-2526  Improper Input Validation                     4.4
    CVE-2011-2729  Permissions, Privileges, and Access Controls  5.0
    CVE-2011-3190  Permissions, Privileges, and Access Controls  7.5
    CVE-2011-3375  Information Exposure                          5.0
    CVE-2011-4858  Resource Management Errors                    5.0
    CVE-2011-5062  Permissions, Privileges, and Access Controls  5.0
    CVE-2011-5063  Improper Authentication                       4.3
    CVE-2011-5064  Cryptographic Issues                          4.3
    CVE-2012-0022  Numeric Errors                                5.0

    This notification describes vulnerabilities fixed in third-party components that are included in Oracle's product distributions. Information about vulnerabilities affecting Oracle products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • Lower Your Application Infrastructure Costs w/Oracle Database 11g

    - by john.brust
    Oracle Database 11g is designed to support enterprise applications, including Oracle E-Business Suite, Oracle PeopleSoft, and Oracle Siebel. And every Oracle customer can benefit from the performance, reliability, and security that Oracle Database 11g brings to these applications. Plus, Oracle Database 11g helps you drive down your IT infrastructure costs. Join us next Friday for a webcast conversation with database expert Mark Townsend, Vice President of Oracle's Server Technology Division, to learn how you can benefit from running your applications on Oracle Database 11g. At the end of the presentation, we'll open up for live Q&A for approximately 30 minutes. Register now for our live webcast on Friday, April 23rd, 2010, at 9:30am PT / 12:30pm ET.

    Read the article

  • Multiple Denial of Service vulnerabilities in libpng

    - by chandan
    Component: PNG reference library (libpng)
    Product and Resolution:
    Solaris 10 (SPARC: 137080-03, X86: 137081-03)
    Solaris 9 (SPARC: 139382-02, 114822-06; X86: 139383-02)
    Solaris 8 (SPARC: 114816-04, X86: 114817-04)

    CVE ID         Vulnerability            CVSSv2 Base Score
    CVE-2007-5266  Denial of Service (DoS)  4.3
    CVE-2007-5267  Denial of Service (DoS)  4.3
    CVE-2007-5268  Denial of Service (DoS)  4.3
    CVE-2007-5269  Denial of Service (DoS)  5.0
    CVE-2008-1382  Denial of Service (DoS)  7.5
    CVE-2008-3964  Denial of Service (DoS)  4.3
    CVE-2009-0040  Denial of Service (DoS)  6.8

    This notification describes vulnerabilities fixed in third-party components that are included in Sun's product distribution. Information about vulnerabilities affecting Oracle Sun products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • ING: Scaling Role Management and Access Certification to Thousands of Applications

    - by Tanu Sood
    Organizations deal with employee and user access certifications in different ways: collating multiple spreadsheets, putting managers through an intense two-week exercise, or using access certification tools across a handful of applications. But for most organizations, compliance means certifying user access for thousands of employees across hundreds of systems. Managing and auditing millions of entitlement combinations on a periodic basis poses a huge scale challenge. ING solved the compliance scale challenge using an identity platform approach. Join the live webcast featuring ING's enterprise architect, Mark Robison, as he discusses how a platform approach offers value that is greater than the sum of its parts and enables ING to successfully meet their security and compliance goals. Mark will also share his implementation experiences and discuss the key requirements for managing the complexity and scale of access certification efforts at ING. Mark will be joined by Neil Gandhi, Principal Product Manager for Oracle Identity Analytics.

    Live Webcast
    ING: Scaling Role Management and Access Certification to Thousands of Applications
    Wednesday, April 11th at 10 am Pacific / 1 pm Eastern
    Register Today

    Read the article

  • ADF Sessions at RMOUG this week

    - by shay.shmeltzer
    If you are attending the RMOUG conference this week, you might be interested in checking out some of the sessions we are doing about Oracle ADF:

    Lynn is delivering:
    - The Fusion Development Platform - Wed at 9:00 (404)
    - Put Your Good Taste Into Action: How to Skin ADF Faces Rich Client Applications - Wed at 5:00 (4 c/d)

    Shay is delivering:
    - From SQL to Rich Web Data Visualization - The Fast Route - Thu at 9:00 (404)
    - Adding Mobile and Web 2.0 UIs to Existing Applications - The Fusion Way - Thu at 10:15 (404)

    There are also lots of ADF-related sessions delivered by customers and partners, including:
    - Drinking the Kool-Aid - My Journey to Becoming an ADF Believer
    - Case Study: Performance Tuning New ADF Applications Using Oracle Application Testing Suite (ATS)
    - Oracle ADF & JDeveloper: Coming of Age
    - Hello Worldwide Web: Your First JSF in JDeveloper

    For more details, see the schedule here. If you are using ADF already, please drop by and let us know what you think. We are always looking for user feedback.

    Read the article

  • Informed TDD - Kata "To Roman Numerals"

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspx

    In a comment on my article on what I call Informed TDD (ITDD), reader gustav asked how this approach would apply to the kata "To Roman Numerals", and whether ITDD wasn't a violation of TDD's principle of leaving out "advanced topics like mocks". I'd like to respond to his questions with this article. There's more to say than fits into a comment.

    Mocks and TDD

    I don't see to what extent TDD avoids or is opposed to mocks. TDD and mocks are orthogonal. TDD is about process; mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps, less need arises for mocks. But then... if the functionality you need to implement requires "expensive" resource access, you can't avoid using mocks, because you don't want to constantly run all your tests against the real resource.

    True, in ITDD mocks seem to be in almost inflationary use. That's not what you usually see in TDD demonstrations. However, there's a reason for that, as I tried to explain. I don't use mocks as proxies for "expensive" resources. Rather, they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion.

    But if you think of mocks as "advanced", or if you don't want to use a tool like JustMock, then you don't need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by that by doing a kata.

    ITDD for "To Roman Numerals"

    gustav asked for the kata "To Roman Numerals". I won't explain the requirements again. You can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines. Now here is how I would do this kata differently.

    1. Analyse

    A demonstration of TDD should never skip the analysis phase. It should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. "Formalization" in this case to me means describing the API of the required functionality. "[D]esign a program to work with Roman numerals", as written in this "requirement document", is not enough to start software development. Coding should only begin once the interface between the "system under development" and its context is clear.

    If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer - even if that's your fellow developer. Designing the interface is a task of its own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations. TDD is used to explore the API and implement it at the same time. To me that's a violation of the Single Responsibility Principle (SRP), which should hold not only for software functional units but also for tasks or activities.

    In the case of this kata, the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions. Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but no specific test cases are given from the point of view of a customer. So I just "invent" some acceptance test cases by picking roman numerals from a wikipedia article.
    They are supposed to be just "typical examples" without special meaning. Given the acceptance test cases, I then try to develop an understanding of the problem domain. I'll spare you that. The domain is trivial and is explained in almost all kata descriptions. How roman numerals are built is not difficult to understand. What's more difficult, though, might be to find an efficient solution for converting into them automatically.

    2. Solve

    The usual TDD demonstration skips a solution finding phase. Like the interface exploration, it's mixed in with the implementation. But I don't think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD. They're simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody.

    Before you code, you'd better have a plan for what to code. This does not mean you have to do "Big Design Up-Front". It just means: have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem. If that's limited, your solution will be limited, too. Fortunately, in the case of this kata your understanding does not need to be limited. Thus the logical solution does not need to be limited or preliminary or tentative. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand, because it should mirror the process described by the logical or conceptual solution.

    Here's my solution approach: The arabic "encoding" of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The "encoding" 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}. And the number is the sum of the set members.

    The roman "encoding" is different. There is no base (like 10 for arabic numbers); there are just digits of different value, and they have to be written in descending order. The "encoding" XVI is short for [10, 5, 1]. And the number is still the sum of the members of this list. The roman "encoding" thus is simpler than the arabic. Each "digit" can be taken at face value. No multiplication with a base is required.

    But what about IV, which looks like a contradiction to the above rule? It is not - if you accept roman "digits" not to be limited to single characters only. Usually I, V, X, L, C, D, M are viewed as "digits", and IV, IX etc. are viewed as nuisances preventing a simple solution. All looks different, though, once IV, IX etc. are taken as "digits". Then MCMLIV is just a sum: M+CM+L+IV, which is 1000+900+50+4. Whereas before it would have been understood as M-C+M+L-I+V - which is more difficult, because here some "digits" get subtracted. Here's the list of roman "digits" with their values:

    {1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M}

    Since I take IV, IX etc. as "digits", translating an arabic number becomes trivial. I just need to find the values of the roman "digits" making up the number; e.g. 1954 is made up of 1000, 900, 50, and 4. I call those "digits" factors. If I move from the highest factor (M=1000) to the lowest (I=1), then translation is a three phase process:

    1. Find all the factors
    2. Translate the factors found
    3. Compile the roman representation

    Translation is just a look-up.
    Finding, though, needs some calculation:

    1. Find the highest remaining factor fitting in the value
    2. Remember it and subtract it from the value
    3. Repeat with the remaining value and remaining factors

    Please note: This is just an algorithm. It's not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction.

    With this solution in hand, I finally can do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are three aspects to test:

    - Test the translation
    - Test the compilation
    - Test finding the factors

    Testing the translation primarily means checking whether the map of factors and digits is comprehensive. That's simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps: First check if an arabic number equal to a factor is processed correctly (e.g. 1000=M). Then check if an arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly. Then check if a number consisting of the same factor twice is processed correctly (e.g. 2000=[M,M]). Finally check if an arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly. I feel I can start an implementation now. If something becomes more complicated than expected, I can slow down and repeat this process.

    3. Implement

    First I write a test for the acceptance test cases. It's red because there's no implementation, not even of the API. That's in conformance with "TDD lore", I'd say. Next I implement the API. The acceptance test now is formally correct, but still red of course. This will not change even now that I zoom in, because my goal is not to satisfy these tests as quickly as possible, but to implement my solution in a stepwise manner. That I do by "faking" it: I just "assume" three functions to represent the transformation process of my solution. My hypothesis is that those three functions in conjunction produce correct results on the API level. I just have to implement them correctly. That's what I'm trying now - one by one.

    I start with a simple "detail function": Translate(). And I start with all the test cases in the obvious equivalence partition. As you can see, I dare to test a private method. Yes. That's a white box test. But as you'll see, it won't make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. The implementation to satisfy the test is as simple as possible. Right how TDD wants me to do it: KISS.

    Now for the second equivalence partition: translating multiple factors. (It's a pattern: if you need to do something repeatedly, separate the tests for doing it once and doing it multiple times.) In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science. Usually I would have implemented the final code right away. Splitting it in two steps is just for "educational purposes" here. How small your implementation steps are is a matter of your programming competency. Some "see" the final code right away before their mental eye - others need to work their way towards it. Having two tests I find more important.

    Now for the next low hanging fruit: compilation. It's even simpler than translation. A single test is enough, I guess.
    And normally I would not even have bothered to write that one, because the implementation is so simple. I don't need to test .NET framework functionality. But again: if it serves the educational purpose...

    Finally the most complicated part of the solution: finding the factors. There are several equivalence partitions. But still I decide to write just a single test, since the structure of the test data is the same for all partitions. Again, I'm faking the implementation first: I focus on just the first test case. No looping yet. Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with details of how to actually accomplish the feat. That's left for a drill down with a test of the fake function. There are two main equivalence partitions, I guess: either the first factor is appropriate, or some later one. The implementation seems easy. Both test cases are green. (Of course this only works on the premise that there's always a matching factor. Which is the case, since the smallest factor is 1.) And the first of the equivalence partitions on the higher level also is satisfied. Great, I can move on.

    Now for more than a single factor: Interestingly, not just one test becomes green now, but all of them. Great! You might say, then I must not have done the simplest thing possible. And I would reply: I don't care. I did the most obvious thing. But I also find this loop very simple. Even simpler than a recursion, of which I had thought briefly during the problem solving phase. And by the way: the acceptance tests also went green. Mission accomplished. At least functionality-wise.

    Now I have to tidy up things a bit. TDD calls for refactoring. Not much refactoring is needed, because I wrote the code in top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren't perfected yet. But this way I saved myself from refactoring tediousness. At the end, though, some refactoring is required. But maybe in a different way than you would expect. That's why I rather call it "cleanup".

    First I remove duplication. There are two places where factors are defined: in Translate() and in Find_factors(). So I factor the map out into a class constant. Which leads to a small conversion in Find_factors(). And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me. They only have temporary value. They are brittle. Only acceptance tests need to remain. However, I carry over the single "digit" tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all roman "digits". This then is my final test class, and this is the final production code. Test coverage as reported by NCrunch is 100%.

    Reflexion

    Is this the smallest possible code base for this kata? Surely not. You'll find more concise solutions on the internet. But LOC are of relatively little concern - as long as I can understand the code quickly. So-called "elegant" code, however, often is not easy to understand. The same goes for KISS code - especially if left unrefactored, as is often the case. That's why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top-down according to my design. I also could have implemented it bottom-up, since I knew some bottom of the solution. That's the leaves of the functional decomposition tree.
    Where things became fuzzy, because the design did not cover any more details, as with Find_factors(), I repeated the process in the small, so to speak: fake some top level and endure red high-level tests while first solving a simpler problem. Using scaffolding tests (to be thrown away at the end) brought two advantages:

    - Encapsulation of the implementation details was not compromised. Naturally, private methods could stay private. I did not need to make them internal or public just to be able to test them.
    - I was able to write focused tests for small aspects of the solution. No need to test everything through the solution root, the API.

    The bottom line thus for me is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: the Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development - being a researcher, being an engineer, being a craftsman - are represented as different phases. First find out what there is. Then devise a solution. Then code the solution, manifest the solution in code. Writing tests first is a good practice. But it should not be taken dogmatically. And above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus is almost inevitable - and not left to a refactoring step at the end, which is often skipped for various reasons.

    PS: Yes, I have done this kata several times. But that only has an impact on the time needed for phases 1 and 2. I won't skip them because of that. And there are no shortcuts during implementation because of that.
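    (The original post showed the tests and production code as screenshots, which have not survived here. For reference, below is a minimal C# sketch that follows the solution structure described above - the factor map as a class constant, Find_factors(), Translate() and a compile step. The method names follow the article; the bodies are a reconstruction, not the author's verbatim code.)

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class ArabicRomanConversions
        {
            // the map of factors and roman "digits" from the solution approach
            private static readonly Tuple<int, string>[] Factors = {
                Tuple.Create(1000, "M"), Tuple.Create(900, "CM"), Tuple.Create(500, "D"),
                Tuple.Create(400, "CD"), Tuple.Create(100, "C"), Tuple.Create(90, "XC"),
                Tuple.Create(50, "L"),   Tuple.Create(40, "XL"), Tuple.Create(10, "X"),
                Tuple.Create(9, "IX"),   Tuple.Create(5, "V"),   Tuple.Create(4, "IV"),
                Tuple.Create(1, "I")
            };

            public static string ToRoman(int arabic)
            {
                return Compile(Translate(Find_factors(arabic)));
            }

            // find the highest remaining factor fitting in the value, subtract, repeat
            private static IEnumerable<int> Find_factors(int arabic)
            {
                var factors = new List<int>();
                foreach (var f in Factors)
                {
                    while (arabic >= f.Item1)
                    {
                        factors.Add(f.Item1);
                        arabic -= f.Item1;
                    }
                }
                return factors;
            }

            // translation is just a look-up of each factor's roman "digit"
            private static IEnumerable<string> Translate(IEnumerable<int> factors)
            {
                return factors.Select(f => Factors.First(x => x.Item1 == f).Item2);
            }

            // compilation concatenates the roman "digits"
            private static string Compile(IEnumerable<string> digits)
            {
                return string.Concat(digits);
            }
        }

    For example, ToRoman(1954) finds the factors 1000, 900, 50, 4 and yields "MCMLIV".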

    Read the article

  • Simple Scripting for your Exalogic Storage

    - by Trond Strømme
    As part of my job in Oracle ACS (Advanced Customer Services) I handle lots of different systems and customers. Among the recent systems I've worked with are Oracle's Exalogic engineered systems. One of the things I'd never had much exposure to as a system developer/architect/middleware guy/Java dude was storage, outside of consuming it for my photography needs. Well, I'm always ready for a new challenge... I'd downloaded the 7000 series storage simulator when it was released in the good old Sun days, and found it fun and instructive to play around with, but as I never touched storage in any way (besides consuming it) I forgot about it. A couple of years ago, when I started working with Exalogic engineered systems, it again came into light as an invaluable learning and testing tool for the embedded storage in an Exalogic: Oracle's Sun ZFS Storage 7320 Appliance.

    aaaanyway... I've been "booted" into a part-time role as the interim storage/system admin/middleware/Java guy for a client and found I needed to create the occasional report or summary of what's using the storage in the 7320 (as configured by default for an Exalogic: 40T of disk in a mirrored configuration, yielding 18T of actual space). Reading the nice documentation and some articles on the Oracle Technology Network, I saw great possibilities with the embedded ECMAScript3/JavaScript engine in the 7000 series. In my personal opinion, anyone who's dealing with Exalogic administration, or exposed to any of the 7000 series of storage appliances and servers that Oracle offers, should have a VirtualBox instance of it kicking around. For development and testing it's a fantastic tool. (It can save you from having to explain to your management (most of) the embarrassing FAILs you can commit when you test something in a production system...) So download, and install.

    A small sidestep: if, after firing up the 7000 series simulator in VirtualBox, you've forgotten what its IP address is, logging in directly via the running VirtualBox VM will sort you out. So in my case I can ssh to 192.168.56.101 or point a browser to https://192.168.56.101:215 to log into the storage appliance. One simple way of executing a script on the 7320 is to ssh to the device and redirect a file containing the script to ssh:

    ssh [email protected] < myscript.js

    One question I got from my client and the people who will take over the systems was: "How can we see the quotas and allocations for all projects/shares in one easy go, so we don't have to go navigating around in the BUI for all the hundreds of shares the 7320 is hosting just to check if anything is running dry?" Easy! JavaScript time, VirtualBox and emacs!

    //NOTE! This script is available 'as is'. It has been run on a couple of 7320s (running 2010.08.17.3.0,1-1.25 and
    // 2011.04.24.1.0,1-1.8), a 7420 and the VB image, but I personally
    //offer no guarantee whatsoever that it won't make your server topple, catch fire or in any way go pear-shaped...
    //run at your own risk or learn from my code and/or mistakes...
    script
    run('cd /');
    run('shares');

    //get all projects:
    proj = list();

    function spaceToGig(bytes){
        return bytes/1073741824; //convert bytes to GB
    }

    function fullInPercent(quota, space_data){
        tmp = (space_data/quota)*100;
        return tmp;
    }

    //print header, slightly good looking
    printf(" %s/%-15s %8s(GB) %7s(GB) %5s(GB) %7s(GB) %3s\n","Project", "Share","Quota","Ref", "Snap", "Total","%full");
    printf("-------------------------------------------------------------------------------\n");

    //for each project, get all shares; check for quota and calculate percentage and human-readable figures
    for (i=0;i<proj.length;i++){
        run('select ' + proj[i]);
        //get all shares for a project
        var pshares = list();
        //for each share get quota properties
        for (j=0;j<pshares.length;j++){
            run('select ' + pshares[j]);
            quota = get('quota'); //properties associated with a share or inherited from a project
            spaceData = get('space_data');
            spaceSnap = get('space_snapshots');
            spaceTotal = get('space_total');
            if(quota>0){ //has quota
                printf(" %s/%-15s \t%4.2fGB\t%.2fGB\t%.2fGB\t%.2fGB\t%5.2f%%\n",proj[i], pshares[j],spaceToGig(quota),spaceToGig(spaceData),spaceToGig(spaceSnap),spaceToGig(spaceTotal),fullInPercent(quota,spaceTotal));
            }else{ //no quota
                printf(" %s/%-15s \t%8s\t%.2fGB\t%.2fGB\t%.2fGB\t%s\n",proj[i],pshares[j], "N/A", spaceToGig(spaceData),spaceToGig(spaceSnap),spaceToGig(spaceTotal),"N/A");
            }
            run('cd ..');
        }
        run('done');
    }

    The resulting output should look something like this:

    Project/Share               Quota(GB)   Ref(GB)   Snap(GB)  Total(GB)   %full
    -------------------------------------------------------------------------------
    ACSExalogicSystem/domains        N/A     0.04GB    0.00GB     0.04GB     N/A
    ACSExalogicSystem/logs           N/A     0.01GB    0.00GB     0.01GB     N/A
    ACSExalogicSystem/nodemgrs       N/A     0.00GB    0.00GB     0.00GB     N/A
    ACSExalogicSystem/stores         N/A     0.04GB    0.00GB     0.04GB     N/A
    ***_dev/FMW_***_1              133GB     4.24GB    0.01GB     4.25GB    3.19%
    ***_dev/FMW_***_2                N/A     4.25GB    0.01GB     4.26GB     N/A
    ***_dev/applications            10GB     0.00GB    0.00GB     0.00GB    0.00%
    ***_dev/domains                 50GB    10.75GB    3.55GB    14.30GB   28.61%
    ***_dev/logs                    20GB     0.32GB    0.01GB     0.33GB    1.66%
    ***_dev/softwaredepot           20GB     4.15GB    0.00GB     4.15GB   20.73%
    ***_dev/stores                  20GB     0.01GB    0.00GB     0.01GB    0.05%
    ###_dev/FMW_###_1              400GB    17.63GB    0.12GB    17.75GB    4.44%
    ###_dev/applications             N/A     0.00GB    0.00GB     0.00GB     N/A
    ###_dev/domains                120GB    14.21GB    5.53GB    19.74GB   16.45%
    ###_dev/logs                    15GB     0.00GB    0.00GB     0.00GB    0.00%
    ###_dev/softwaredepot          250GB    73.55GB    0.02GB    73.57GB   29.43%
    ...snip

    My apologies if the output is a bit misaligned here and there; I only bothered making it look good, not perfect :/ I also redacted some of the project names (*, #).

    Read the article

  • How do I resolve "No JSON object could be decoded" on mythbuntu live CD?

    - by Neil
    I have been running a MythTV frontend on my laptop for some time against a MythTV backend installed in Linux Mint 12 on another computer, and everything works fine. Now, I'm trying out the Mythbuntu Live CD (12.04.1 32-bit) on the laptop, to turn it into a dedicated front end. It's connecting to the network just fine, and I can see my server. When I click on the frontend icon on the desktop, it asks me for the security code, which I've verified against mythtv-setup on the server. However, when I test that code, it shows the error message "No JSON object could be decoded". I've looked in the control center to see if there's something else I should be setting up. The message above implies to me that it can't find the server, but I can find no place in the control center to tell it where to find my myth backend, which I find a little odd. Does the live CD not work against a backend server on another machine?

    Read the article

  • Interfaces Reference Model available

    - by ACShorten
    When implementing an Oracle Utilities Application Framework based product, you can use other Oracle technologies to augment your solution. A whitepaper is now available that outlines all the technology integrations possible with the various versions of the Oracle Utilities Application Framework. The whitepaper describes the possible integrations and implementations of other Oracle technologies to address customer requirements in association with Oracle Utilities Application Framework based products. It covers a vast range of products, including:

    - Oracle Fusion Middleware
    - Oracle SOA Suite
    - Oracle Identity Management Suite
    - Oracle Exadata and Oracle Exalogic
    - Oracle VM
    - Data options including Real Application Clusters, Real Application Testing, Data Guard/Active Data Guard, Compression, Partitioning, Database Vault, Audit Vault, etc.

    The whitepaper contains a summary of the integration solution possibilities, with links to further information including product-specific interfaces. It is available from My Oracle Support as KB Id: 1506855.1.

    Read the article

  • Synaptic returns error

    - by donvoldy666
    I get the following error message when I run Synaptic, and I can't install any programs:

    SystemError: W:Ignoring file 'getdeb.list.bck' in directory '/etc/apt/sources.list.d/' as it has an invalid filename extension,
    W:Duplicate sources.list entry 'http://extras.ubuntu.com/ubuntu/ precise/main i386 Packages' (/var/lib/apt/lists/extras.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages),
    [the same duplicate-entry warning is repeated five more times]
    W:Duplicate sources.list entry 'http://ppa.launchpad.net/ubuntu-mozilla-security/ppa/ubuntu/ precise/main i386 Packages' (/var/lib/apt/lists/ppa.launchpad.net_ubuntu-mozilla-security_ppa_ubuntu_dists_precise_main_binary-i386_Packages),
    E:Encountered a section with no Package: header,
    E:Problem with MergeList /var/lib/apt/lists/packages.rssowl.org_ubuntu_dists_precise_main_i18n_Translation-en,
    E:The package lists or status file could not be parsed or opened.

    I'm new to Linux, so please help me out.

    Read the article

  • Generate Multiple Choice Questions [closed]

    - by Daniel
    I'm working on a quiz application that will have a number of multiple choice questions. I'm waiting on the content from the client, but I'm hoping to start testing with some placeholder data for the time being. In order for the tests to be worthwhile, I probably need at least 100 multiple choice questions. I wanted to see if anyone knows of a resource or tool that can generate questions/multiple choice answers or propose another creative way to fill my quiz application with placeholder content. Ultimately the data will be in a MySQL database, but I don't really care what format the sample data is in (Excel, Word, JSON, etc.).

    Read the article

  • How to create an Access database by using ADOX and Visual C# .NET

    - by SAMIR BHOGAYTA
    Build an Access Database

    1. Open a new Visual C# .NET console application.
    2. In Solution Explorer, right-click the References node and select Add Reference.
    3. On the COM tab, select Microsoft ADO Ext. 2.7 for DDL and Security, click Select to add it to the Selected Components, and then click OK.
    4. Delete all of the code from the code window for Class1.cs.
    5. Paste the following code into the code window (the original snippet used a button click handler and MessageBox, which don't compile in a console application; this version is adapted for the console project created in step 1):

        using System;
        using ADOX;

        class Class1
        {
            [STAThread]
            static void Main(string[] args)
            {
                // CatalogClass wraps the ADOX Catalog COM object
                ADOX.CatalogClass cat = new ADOX.CatalogClass();
                // Jet OLEDB:Engine Type=5 creates a Jet 4.x (.mdb) database
                cat.Create("Provider=Microsoft.Jet.OLEDB.4.0;" +
                           "Data Source=D:\\NewMDB.mdb;" +
                           "Jet OLEDB:Engine Type=5");
                Console.WriteLine("Database Created Successfully");
                cat = null;
            }
        }

    Read the article

  • Creating a secure SQL server login - CHECK_EXPIRATION & CHECK_POLICY

    - by cabhilash
    In SQL Server you can create users using T-SQL or using the options provided by SQL Server Management Studio.

    CREATE LOGIN sql_user
        WITH PASSWORD = 'sql_user_password' MUST_CHANGE,
        DEFAULT_DATABASE = defDB,
        CHECK_EXPIRATION = ON,
        CHECK_POLICY = ON

    As mentioned in the previous article (http://weblogs.asp.net/cabhilash/archive/2010/04/07/login-failed-for-user-sa-because-the-account-is-currently-locked-out-the-system-administrator-can-unlock-it.aspx), when CHECK_POLICY = ON the user account follows the password rules of the system on which SQL Server is installed. When the MUST_CHANGE keyword is used, the user is forced to change the password on first login. CHECK_EXPIRATION and CHECK_POLICY are only enforced on Windows Server 2003 and later.

    If you want to turn off the password expiration enforcement or security policy enforcement, you can do so with the following statements. (But these won't work if you created your login with MUST_CHANGE and the user hasn't changed the default password yet.)

    ALTER LOGIN sql_login WITH CHECK_EXPIRATION = OFF
    GO
    ALTER LOGIN sql_login WITH CHECK_POLICY = OFF

    Read the article

  • How to price code reviews to encourage good behavior?

    - by Chris Clark
    I work for a company that has a hosted .NET internet application with many clients. Those clients often want to write customizations for our application. We have APIs to hook into the app, but the customizations themselves are written in .NET. This is a shared, secure hosting environment, and we have to code review these customizations before we can deploy them in our datacenter, to ensure that they don't degrade performance, crash our servers, or open any security vulnerabilities. We charge for these code reviews. The current pricing model is simply a function of the number of lines of code. I think this is a bad idea for a variety of reasons, but primarily because, if we are interested in verifying that the code works as expected, we should be incentivizing good, readable code, not compaction. I would like to propose a pricing model that incorporates some or all of the following as inputs (a sketch of one possible combination follows below):

    - Lines of code
    - Cyclomatic complexity
    - Average function length
    - Number of functions

    Are there any other metrics I should incorporate, or other ideas for how we can reasonably create pricing for code reviews that encourages safe and understandable code?
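    For illustration only, a hypothetical C# sketch of how such inputs might combine into a price; the base rate, weights and thresholds are invented for the example, not a recommendation:

        static decimal ReviewPrice(int linesOfCode, int cyclomaticComplexity,
                                   double avgFunctionLength, int functionCount)
        {
            const decimal baseRate = 100m;                     // flat fee per review (assumed)
            decimal sizeCost  = 0.50m * linesOfCode;           // effort still scales with size
            decimal riskCost  = 2.00m * cyclomaticComplexity;  // penalize hard-to-verify branching
            decimal styleCost = avgFunctionLength > 40         // surcharge for long functions
                ? 1.50m * (decimal)(avgFunctionLength - 40) * functionCount
                : 0m;
            return baseRate + sizeCost + riskCost + styleCost;
        }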

    Read the article

  • What are the most important concepts to understand for "fluency in developer English"?

    - by Edward Tanguay
    In April, I'm going to be giving a talk called "English 2.0 - Understanding the Language of Developers" to a group of English teachers. The purpose is to give them, in two hours, a quick background in key concepts so that they can better understand developer blogs and podcasts and are able to ask better questions when talking to developers. What do you think are the most important concepts to understand - concepts that developers take for granted but the general public is not familiar with? Here are a few ideas:

    - version control
    - abstractions
    - pub/sub
    - push vs. pull
    - debugging
    - modularity
    - three-tier architecture
    - class/object
    - "spaghetti code" vs. OOP
    - exception throwing
    - crowd sourcing
    - refactoring
    - the cloud
    - DRY - don't repeat yourself
    - client/server
    - unit testing
    - designer/developer

    Read the article

  • Saving and Loading the Game (Automatically or Manually) via Internal Storage Only (Tablet PC Issues)

    - by David Dimalanta
    Here is my question. When making a game app for Android, I considered the device first. Saving all progress (from levels to records) on a smartphone is no problem because it has an SD card slot. The exception to this is the tablet PC, which has nothing but internal storage. For example, I'm using this tutorial for audio spectrum (see http://www.youtube.com/watch?v=5cN1VzZXcdo) that involves copying from internal to external storage in order to detect frequency. It works on the desktop but not on the Android device (tablets only, i.e. a Google Nexus tablet). Is there a way to work around save/load problems caused by internal/external storage differences? Also, what is the reason the audio spectrum code works on the desktop but not on tablets, and is it the same issue with saving/loading the game?

    Read the article

  • How to use Google Analytics to track a development and production versions of the same site on different servers?

    - by Abe
    I have a website with two versions, one for production and one for development (testing new features). All of the code is under version control and the websites are on separate servers. Currently, I have the same Google Analytics tracking code on both sites. Since the code is under version control, it would be ideal to have an "if on the production server, use this code; else if on the development server, use that code" clause. But I suspect that Google makes it easier to do something like this. I see that there are many ways to configure a GA tracking code, e.g. "a single domain" vs. "multiple top level domains", but it is not clear to me how to set this up. Also, if tracking code configured for a single domain has been on the development server, have I been picking up traffic to both sites, or does GA just ignore the second domain that I haven't registered?
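    For what it's worth, the conditional described above can be expressed directly in the classic asynchronous tracking snippet by switching the account ID on the hostname; this is only a sketch, and the UA- IDs and hostname are placeholders:

        // pick the GA property based on which server is serving the page
        var _gaq = _gaq || [];
        var account = (window.location.hostname === 'www.example.com')
            ? 'UA-XXXXXX-1'   // production property (placeholder ID)
            : 'UA-XXXXXX-2';  // development property (placeholder ID)
        _gaq.push(['_setAccount', account]);
        _gaq.push(['_trackPageview']);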

    Read the article

  • apt-get update bzip2 errors

    - by Tejas Kale
    I installed Ubuntu 11.10 today on my Lenovo W500. After that, when I tried running

    sudo apt-get update

    this is the error I am getting:

    Get:117 http://ftp.jaist.ac.jp oneiric-security/universe TranslationIndex [73 B]
    99% [48 Sources bzip2 0 B] [22 Sources bzip2 5,294 kB] 1,983 kB/s 0s
    bzip2: Compressed file ends unexpectedly;
    perhaps it is corrupted? *Possible* reason follows.
    bzip2: Inappropriate ioctl for device
    Input file = (stdin), output file = (stdout)
    It is possible that the compressed file(s) have become corrupted.
    You can use the -tvv option to test integrity of such files.
    You can use the `bzip2recover' program to attempt to recover data from undamaged sections of corrupted files.

    I found the following similar question: Errors while updating Ubuntu 11.10. But the solutions mentioned there (changing the download server, running apt-get clean, apt-get autoclean) did not help, and I have also tried removing the /var/cache/apt/archives/lists directory. As a result, I am unable to install any new packages.

    Read the article

  • AWS EC2 Oracle RDS connection to Oracle Database Instance

    - by llaszews
    Provisioning my Oracle database instance on AWS RDS was easy. Just a few clicks! However, getting a connection to my Oracle cloud database was not as easy. A couple of things that are not obvious (using Oracle SQL Developer):

    1. You need to set up a database security group.
    2. You need to use the endpoint for the host name.

    This video is the best one on the internet to explain both points: http://www.youtube.com/watch?v=ocFURuX0eEw
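    As an illustration of point 2, a JDBC connection string would use the instance's endpoint as the host; the endpoint below is made up, and 1521/ORCL are just the common Oracle defaults:

        // hypothetical endpoint; substitute the one shown on your RDS instance's detail page
        jdbc:oracle:thin:@mydb.c1abcdefghij.us-east-1.rds.amazonaws.com:1521:ORCL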

    Read the article

  • "AND Operator" in PAM

    - by d_inevitable
    I need to prevent users from authenticating through Kerberos when the encrypted /home/users has not yet been mounted. (This is to avoid corrupting the ecryptfs mountpoint.) Currently I have these lines in /etc/pam.d/common-auth:

    auth required pam_group.so use_first_pass
    auth [success=2 default=ignore] pam_krb5.so minimum_uid=1000 try_first_pass
    auth [success=1 default=ignore] pam_unix.so nullok_secure try_first_pass

    I am planning to use pam_exec.so to execute a script that will exit 1 if the ecryptfs mounts are not ready yet. Doing this:

    auth required pam_exec.so /etc/security/check_ecryptfs

    will lock me out for good if ecryptfs for some reason fails. In such a case I would like to at least be able to log in with a local (non-Kerberos) user to fix the issue. Is there some sort of AND operator with which I can say that login through kerberos+ldap is only sufficient if both the Kerberos authentication and the ecryptfs mount have succeeded?

    Read the article

  • Wireless not working, no driver showing in the Additional Drivers window

    - by edit lopez
    I am a new user of Ubuntu. I have an Asus Q501L laptop that came preinstalled with Windows 8, but I wanted to move away from Windows and try something new, so I just decided to install Ubuntu without thinking too much about it. The problem I have is that I can't install the additional drivers. When I go to Additional Drivers nothing appears; it just says: "No proprietary drivers are in use", and in small letters continues: "A proprietary driver has private code that Ubuntu developers can't review or improve. Security and other updates are dependent on the driver vendor." I can't even use the wireless connection. I really don't know what to do. I tried to download the drivers from Asus, but when I tried to install them it said: "An error occurred while loading the archive." Also, I don't know the model of the PC's wireless card. If there is something I can do to find that out, please tell me. Thanks!

    Read the article

  • Creating a remote management interface

    - by Johnny Mopp
    I'm looking for info on creating a remote management interface for our software. This is not anything illicit. Our software is for live TV production, and once a machine goes on-air we can't access it (we normally do so through LogMeIn). I would like to be able to upload/download files and issue commands to our software. The commands would be software-specific, like "load this file", "run this script" or "return this value", etc. A socket connection is preferred, but the problem is most of our PCs are behind firewalls and NAT servers. I'm not sure where to start. I think HTTP tunneling is the way to go, but am wondering if there are other options or recommendations. Also, assume our clients are not willing to open up ports for security reasons. Thanks.

    Read the article

  • How does a web server choose between unicode and UTF-8 for accented characters?

    - by jacques
    I have a web server with my ISP which replaces the accented characters in URLs with their unicode values: for instance é (eacute) is translated to %e9 (dec 233). For testing I use EasyPHP locally, which translates those characters to their UTF-8 equivalents: é is then replaced by the well known sequence %c3%a9... Browsers served by EasyPHP don't decode the unicode values, but they do when running locally (UTF-8 and non-converted accents also)... I have been unable to find where this behavior is configured in the server. This is a problem as some URLs are built by my application using PHP's rawurlencode(), which seems to always encode with unicode values on both servers. Any idea? Thanks in advance.
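    For what it's worth, the difference described above can be reproduced with rawurlencode() itself, since it is byte-based and simply encodes whatever bytes the string contains (a small illustration, not from the original question):

        <?php
        // rawurlencode() percent-encodes bytes; the output depends on the string's encoding
        echo rawurlencode("\xE9");     // 'é' as a single Latin-1 byte -> %E9
        echo rawurlencode("\xC3\xA9"); // 'é' as two UTF-8 bytes       -> %C3%A9
        ?>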

    Read the article
