Search Results

Search found 1367 results on 55 pages for 'cyclomatic complexity'.


  • VPN - What is the complexity involved setting one up across less than a dozen machines?

    - by lucius
    Hello, I have never set up and configured a VPN. I was wondering what it takes to set one up across Windows Server 2008 servers. What is the complexity involved? How complicated is it to configure? Do I need to set up a domain controller as a prerequisite? I am asking because it appears SQL Server 2008 merge replication can only be set up over the internet using a VPN, and I am trying to gauge what I am up against. Thanks a lot.

    Read the article

  • What's the best technology for a medium complexity web application?

    - by naveed
    I'm planning to work on a web application of reasonable complexity and am wondering what technology to go with. It will probably start with one person, but there will be 2 or 3 more eventually. My first requirement is to be able to do this as quickly as possible - preferably with as little code as possible. My second requirement is that it should be able to scale easily. I have worked with .NET and PHP, so I am thinking about ASP.NET MVC or CakePHP. It appears to me that CakePHP might be quicker. I did look at Ruby on Rails, but the learning curve is a little steep (which is not an issue if I can be convinced that this is the best tool for the task), I'm not too crazy about the huge number of files generated, and I have heard about scalability issues as well as concerns about its applicability to complex situations. I look forward to your opinions on your favorite technology and why.

    Read the article

  • Narrow-phase collision detection algorithms

    - by Marian Ivanov
    There are three phases of collision detection. Broadphase: it loops over all objects that can interact; false positives are allowed if they speed up the loop. Narrowphase: determines whether they collide, and sometimes how; no false positives. Resolution: resolves the collision. The question I'm asking is about the narrowphase. There are multiple algorithms, differing in complexity and accuracy. Hitbox intersection: an a-posteriori algorithm that has the lowest complexity, but also isn't very accurate. Color intersection: hitbox intersection for each pixel; a-posteriori, pixel-perfect, not accurate with regard to time, higher complexity. Separating axis theorem: this is used more often and is accurate for triangles; however it is a-posteriori, as it can't find the edge; taking the last frame into account makes it more stable. Linear raycasting: an a-priori algorithm, useful for semi-realistic-looking physics; finds the intersection point, even more accurate than SAT, but with more complexity. Spline interpolation: a-priori, even more accurate than linear rays, with even more complexity. There are probably many more that I've forgotten about. The question is: when is it better to use SAT, when rays, when splines, and is there anything better?
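
    As a rough illustration of the separating axis test mentioned in the question, here is a minimal Java sketch (the names and structure are mine, not taken from the post) that projects two convex polygons onto each candidate edge normal and reports an intersection only when no separating axis is found:

        // Minimal separating-axis test for two convex polygons (illustrative sketch).
        public class SatCheck {

            // Projects a polygon (parallel x/y vertex arrays) onto an axis; returns {min, max}.
            static double[] project(double[] xs, double[] ys, double axisX, double axisY) {
                double min = Double.POSITIVE_INFINITY, max = Double.NEGATIVE_INFINITY;
                for (int i = 0; i < xs.length; i++) {
                    double dot = xs[i] * axisX + ys[i] * axisY;   // scalar projection
                    min = Math.min(min, dot);
                    max = Math.max(max, dot);
                }
                return new double[] { min, max };
            }

            // True if a separating axis exists among the edge normals of the first polygon.
            static boolean hasSeparatingAxis(double[] ax, double[] ay, double[] bx, double[] by) {
                for (int i = 0; i < ax.length; i++) {
                    int j = (i + 1) % ax.length;
                    double nx = -(ay[j] - ay[i]);                 // edge normal (perpendicular to edge i -> j)
                    double ny = ax[j] - ax[i];
                    double[] pa = project(ax, ay, nx, ny);
                    double[] pb = project(bx, by, nx, ny);
                    if (pa[1] < pb[0] || pb[1] < pa[0]) {
                        return true;                              // gap on this axis: no intersection
                    }
                }
                return false;
            }

            // The polygons intersect only if neither shape's edge normals yield a separating axis.
            static boolean overlaps(double[] ax, double[] ay, double[] bx, double[] by) {
                return !hasSeparatingAxis(ax, ay, bx, by) && !hasSeparatingAxis(bx, by, ax, ay);
            }

            public static void main(String[] args) {
                double[] ax = {0, 1, 1, 0}, ay = {0, 0, 1, 1};         // unit square
                double[] bx = {0.5, 1.5, 1.5, 0.5}, by = {0, 0, 1, 1}; // square shifted by 0.5 on x
                System.out.println(overlaps(ax, ay, bx, by));          // prints true
            }
        }

    A production narrowphase would normalize and cache the axes, return contact data for the resolution phase, and only run a test like this after a cheap broadphase pass.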

    Read the article

  • HOWTO: Disable complex password policy on Hyper-V Server 2008?

    - by Ian Boyd
    How do you disable the password complexity requirements on a Microsoft Hyper-V Server 2008 R2? Keep in mind that when you log into the server, the only UI you have is a command prompt. And you cannot run gpedit.msc: C:\Users\Administrator>gpedit.msc 'gpedit.msc' is not recognized as an internal or external command, operable program or batch file. because there are no .msc snap-ins installed with Microsoft Hyper-V Server 2008 R2. The problem comes when you're trying to add an account to the server, so you can manage it, but it doesn't like most passwords. And, predictably, typing NET HELPMSG 2245 gives you The password does not meet the password policy requirements. Check the minimum password length, password complexity and password history requirements. I hoped it would have been a friendly user experience, and either: offered to disable the password policy, told me how to disable the password policy, or told me how to check the minimum password length, password complexity and password history requirements. Password Complexity Requirements Microsoft's default password complexity for Server Core is: Passwords cannot contain the user’s account name or parts of the user’s full name that exceed two consecutive characters. Passwords must be at least six characters in length. Passwords must contain characters from three of the following four categories: 1. English uppercase characters (A through Z). 2. English lowercase characters (a through z). 3. Base 10 digits (0 through 9). 4. Non-alphabetic characters (for example, !, $, #, %). External links: Technet Forums: Hyper-V Server disable complex passwords; Technet: Passwords must meet complexity requirements of the installed password filter. Update: 2k views? So many people keep coming to it: up-vote it!

    Read the article

  • WCF: WTF! Does WCF raise the bar or just the complexity level?

    - by rp
    I understand the value of the three-part service/host/client model offered by WCF. But is it just me or does it seem like WCF took something pretty direct and straightforward (the ASMX model) and made a mess out of it? Is there an alternative to using SvcUtil's command-line step back in time to generate the proxy? With ASMX services a test harness was automatically provided; is there a good alternative today with WCF? I appreciate that the WS* stuff is more tightly integrated with WCF and hope to find some payoff for WCF there, but geeze, otherwise I'm perplexed. Also, the state of books available for WCF is abysmal at best. Juval Lowy, a superb author, has written a good O'Reilly reference book "Programming WCF Services" but it doesn't do that much (for me anyway) for learning how to use WCF. That book's precursor (and a little better organized, but not much, as a tutorial) is Michele Leroux Bustamante's Learning WCF. It has good spots but is outdated in places and its corresponding Web site is gone. Do you have good WCF learning references besides just continuing to Google the bejebus out of things? Thanks, rp

    Read the article

  • Big O and Little o

    - by hyperdude
    If algorithm A has complexity O(n) and algorithm B has complexity o(n^2), what, if anything, can we say about the relationship between A and B? Note: the complexity of A is expressed using big-Oh, and the complexity of B is expressed using little-Oh.
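
    For reference, the standard asymptotic definitions behind the question (stated in the usual textbook form, not quoted from the post) are:

        f(n) = O(g(n)) \iff \exists\, c > 0,\ \exists\, n_0 : \forall n \ge n_0,\ |f(n)| \le c\,|g(n)|
        f(n) = o(g(n)) \iff \forall\, c > 0,\ \exists\, n_0 : \forall n \ge n_0,\ |f(n)| \le c\,|g(n)|

    In words, big-O gives an upper bound that need not be tight, while little-o asserts strictly slower growth than g; neither bound by itself determines which of A or B is faster.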

    Read the article

  • Refactoring. Your way to reduce code complexity of big class with big methods

    - by Andrew Florko
    I have a legacy class that is rather complex to maintain: class OldClass { method1(arg1, arg2) { ... 200 lines of code ... } method2(arg1) { ... 200 lines of code ... } ... method20(arg1, arg2, arg3) { ... 200 lines of code ... } } The methods are huge, unstructured and repetitive (the developer loved the copy/paste approach). I want to split each method into 3-5 small functions, with one public method and several helpers. What would you suggest? Several ideas come to my mind: Add several private helper methods to each method and group them in a #region (straightforward refactoring). Use the Command pattern (one command class per OldClass method in a separate file). Create a helper static class per method with one public method and several private helper methods; OldClass methods delegate implementation to the appropriate static class (very similar to commands). ? Thank you in advance!
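
    A minimal Java sketch of the second idea (one command class per OldClass method), with invented names and placeholder bodies since the real methods aren't shown:

        // Illustrative only: ProcessOrderCommand stands in for one 200-line legacy method.
        interface Command<R> {
            R execute();
        }

        class ProcessOrderCommand implements Command<Boolean> {   // was OldClass.method1(arg1, arg2)
            private final String arg1;
            private final int arg2;

            ProcessOrderCommand(String arg1, int arg2) {
                this.arg1 = arg1;
                this.arg2 = arg2;
            }

            @Override
            public Boolean execute() {
                validateInput();                  // each helper is a short, single-purpose step
                Object data = loadData();
                return applyBusinessRules(data);
            }

            private void validateInput() { /* ... */ }
            private Object loadData() { /* ... */ return null; }
            private Boolean applyBusinessRules(Object data) { /* ... */ return Boolean.TRUE; }
        }

        class OldClass {
            // The public API is unchanged; the body just delegates to the command.
            public boolean method1(String arg1, int arg2) {
                return new ProcessOrderCommand(arg1, arg2).execute();
            }
        }

    The same decomposition into small private helpers also works for the #region and static-helper options; the command variant mainly buys one file per legacy method and an easy seam for unit-testing each piece in isolation.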

    Read the article

  • What is the time complexity of LinkedList.getLast() in Java?

    - by i.
    I have a private LinkedList in a Java class & will frequently need to retrieve the last element in the list. The lists need to scale, so I'm trying to decide whether I need to keep a reference to the last element when I make changes (to achieve O(1)) or if the LinkedList class does that already with the getLast() call. What is the big-O cost of LinkedList.getLast() and is it documented? (i.e. can I rely on this answer or should I make no assumptions & cache it even if it's O(1)?)
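
    For context, java.util.LinkedList is implemented as a doubly linked list with direct access to both ends, so getLast() just dereferences the tail and runs in O(1); the Javadoc describes the class as a doubly-linked list rather than stating big-O costs explicitly. A quick illustrative check (not from the question):

        import java.util.Deque;
        import java.util.LinkedList;

        public class LastElementDemo {
            public static void main(String[] args) {
                LinkedList<String> list = new LinkedList<>();
                list.add("first");
                list.add("middle");
                list.add("last");

                // getLast() follows the list's tail reference directly: O(1), no traversal.
                System.out.println(list.getLast());    // prints "last"

                // Coding against the Deque interface gives peekLast(), which is also
                // constant time on LinkedList and returns null instead of throwing when empty.
                Deque<String> deque = list;
                System.out.println(deque.peekLast());  // prints "last"
            }
        }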

    Read the article

  • New Thinking for Supply Chain Analytics. PLM for Process. And Untangling Services Complexity.

    - by David Hope-Ross
    The first edition of the quarterly Oracle Information InDepth Value Chain and Procurement Transformation newsletter has just been published. It’s a solid round-up of news and analysis from the fast-moving world of global supply chains and supply management.  As the title of this post implies, the latest edition covers a wide array of great topics. But the story on supply chain analytics from Endeca is especially interesting. Without giving away the ending, it explores new ways of thinking about the value of information and how to exploit it for supply chain improvement. If you enjoy this edition, think about opting-in via the subscription link. It is an easy way to keep up with the latest and greatest.

    Read the article

  • ERP Customizations...Are your CEMLI’s Holding You Back?

    - by Di Seghposs
    Upgrading your Oracle applications can be an intimidating and nerve-racking experience depending on your organization’s level of customizations. Often times they have an on-going effect on your organization causing increased complexity, less flexibility, and additional maintenance cost. Organizations that reduce their dependency on customizations: Reduce complexity by up to 50% Reduce the cost of future maintenance and upgrades  Create a foundation for easier enablement of new product functionality and business value Oracle Consulting offers a complimentary service called Oracle CEMLI Benchmark and Analysis, which is an effective first step used to evaluate your E-Business Suite application CEMLI complexity.  The service will help your organization understand the number of customizations you have, how you rank against your peer groups and identifies target areas for customization reduction by providing a catalogue of customizations by object type, CEMLI ID or Project ID and Business Process. Whether you’re currently deployed on-premise, managed private cloud or considering a move to the cloud, understanding your customizations is critical as you begin an upgrade.  Learn how you can reduce complexity and overall TCO with this informative screencast.  For more information or to take advantage of this complimentary service today, contact Oracle Consulting directly at [email protected]

    Read the article

  • Linked lists in Java - Help with writing methods

    - by user368241
    Representation of a string in linked lists. In every intersection in the list there will be 3 fields: The letter itself. The number of times it appears consecutively. A pointer to the next intersection in the list. The following class CharNode represents an intersection in the list: public class CharNode { private char _data; private int _value; private CharNode _next; public CharNode (char c, int val, CharNode n) { _data = c; _value = val; _next = n; } public CharNode getNext() { return _next; } public void setNext (CharNode node) { _next = node; } public int getValue() { return _value; } public void setValue (int v) { _value = v; } public char getData() { return _data; } public void setData (char c) { _data = c; } } The class StringList represents the whole list: public class StringList { private CharNode _head; public StringList() { _head = null; } public StringList (CharNode node) { _head = node; } } Add methods to the class StringList according to the details: (I will add methods gradually according to my specific questions.) (Pay attention, these are methods from the class String and we want to implement them for the representation of a string by a list as explained above.) Pay attention to all the possible error cases. Write down the time complexity and space complexity of every method that you wrote. Make sure the methods you wrote are efficient. It is NOT allowed to use ready-made classes of Java. It is NOT allowed to convert to a string and use string operations. 1) public int indexOf (int ch) - returns the index in the string it is operated on of the first appearance of the char "ch". If the char "ch" doesn't appear in the string, returns -1. Here is my try: public int indexOf (int ch) { int count = 0; CharNode pos = _head; if (pos == null ) { return -1; } for (pos = _head; pos!=null && pos.getData()!=ch; pos = pos.getNext()) { count = count + pos.getValue(); } if (pos==null) return -1; return count; } Time complexity = O(N) Space complexity = O(1) EDIT: I have a problem. I tested it in BlueJ and if the char ch doesn't appear it returns -1, but if it does, it always returns 0 and I don't understand why... I am confused. How can the compiler know that the value is the number of times the letter appears consecutively? Can I assume this because it's given in the question, or what? If it's true and I can assume this, then my code should be correct, right? OK, I just spoke with my instructor and she said it isn't required in the exercise, but in order for me to test that it indeed works, I need to open a new class and write code for making a list so that the value of every node is the number of times the letter appears consecutively. Can someone please assist me? Then I can copy+paste into BlueJ and this way I will be able to test all the methods. Meanwhile I am moving on to the next methods. 2) public int indexOf (int ch, int fromIndex) - returns the index in the string it is operated on of the first appearance of the char "ch", where the search begins at the index "fromIndex". If the char "ch" doesn't appear in the string, returns -1. If the value of fromIndex isn't in the range, returns -1.
    Here is my try: public int indexOf (int ch, int fromIndex) { int count = 0, len=0, i; CharNode pos = _head; CharNode cur = _head; for (pos = _head; pos!=null; pos = pos.getNext()) { len = len+1; } if (fromIndex<0 || fromIndex>=len) return -1; for (i=0; i<fromIndex; i++) { cur = cur.getNext(); } if (cur == null ) { return -1; } for (cur = _head; cur!=null && cur.getData()!=ch; cur = cur.getNext()) { count = count + cur.getValue(); } if (cur==null) return -1; return count; } Time complexity = O(N)? Space complexity = O(1) 3) public StringList concat (String str) - returns a string that consists of the string it is operated on with the string "str" concatenated to its end. Here is my try: public StringList concat (String str) { String str = ""; CharNode pos = _head; if (str == null) return -1; for (pos = _head; pos!=null; pos = pos.getNext()) { str = str + pos.getData(); } str = str + "str"; return str; } Time complexity = O(N) Space complexity = O(1)
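
    Since the post asks for help building a test list where each node's value is the consecutive-run count, here is one possible helper (purely illustrative; it is meant only for building test data in BlueJ, so using a String here does not touch the exercise's restriction on the list methods themselves):

        // Hypothetical test helper: builds a CharNode chain from a plain string,
        // collapsing consecutive repeats into (letter, count) nodes,
        // e.g. "aaabcc" becomes ('a',3) -> ('b',1) -> ('c',2).
        public class StringListBuilder {
            public static StringList fromString(String s) {
                if (s == null || s.isEmpty()) {
                    return new StringList();              // empty list
                }
                CharNode head = null, tail = null;
                int i = 0;
                while (i < s.length()) {
                    char c = s.charAt(i);
                    int run = 0;
                    while (i < s.length() && s.charAt(i) == c) {   // count the consecutive run
                        run++;
                        i++;
                    }
                    CharNode node = new CharNode(c, run, null);
                    if (head == null) {
                        head = node;                      // first node becomes the head
                    } else {
                        tail.setNext(node);               // append at the tail
                    }
                    tail = node;
                }
                return new StringList(head);
            }
        }

    With a builder like this you can construct lists such as fromString("aabccc") and compare each StringList method against the behaviour of the corresponding String method.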

    Read the article

  • “Cloud Integration in Minutes” – True or False?

    - by Bruce Tierney
    The short answer is “yes”. Connecting on-premise and cloud applications “in minutes” is true…provided you only consider the connectivity subset of integration and have a small number of cloud integration touch points. At the recent Gartner AADI conference, 230 attendees filled up the Oracle session to get a more comprehensive answer to this question. During the session, titled “Simplifying Integration – The Cloud & Mobile Pre-requisite”, Oracle’s Tim Hall described cloud connectivity and then, equally importantly, the other essential and sometimes overlooked aspects of integration required to ensure a long-term application and service integration strategy. To understand the challenges and opportunities faced by cloud integration, the session started off with a slide that describes how connectivity can quickly transition from simplicity to complexity as the number of applications and service vendor instances grows: Increased complexity puts increased demand on the integration platform. As companies expand from on-premise applications into a hybrid on-premise/cloud infrastructure with support for mobile, cloud, and social, there is a new sense of urgency to implement a unified and comprehensive service integration platform. Without getting this unified platform in place, companies face increased complexity and cost managing a growing patchwork of niche integration toolsets as well as the disparate standards mandated by each SaaS vendor, as shown in the image below: Incomplete and overlapping offerings from a patchwork of niche vendors. Also at Gartner AADI, Oracle SOA Suite customer Geeta Pyne, Director of Middleware at BMC, presented their successful strategy on how BMC efficiently manages their cloud integration despite disparate requirements from each vendor. From one of Geeta’s slides: Interfaces are dictated by SaaS vendors; wide variety (SOAP, REST, Socket, HTTP/POX, SFTP); flexibility of Oracle Service Bus/SOA Suite helps to support. Every vendor has their own way to handle security: WS-Security, Custom Header; support in Oracle Service Bus helps to adhere to disparate requirements. At BMC, the flexibility of Oracle Service Bus and Oracle SOA Suite allowed them to support the wide variation in the functional requirements mandated by their SaaS vendors. In contrast to the patchwork platform approach of escalating complexity from overlapping SaaS toolkits, Oracle’s strategy is to provide a unified platform to support disparate requirements from your SaaS vendors, on-premise apps, legacy apps, and more. Furthermore, Oracle SOA Suite includes the many aspects of comprehensive integration beyond basic connectivity, including orchestration, analytics (BAM, events…), service virtualization and more, in a single unified interface. Oracle SOA Suite – Unified and comprehensive. To summarize, yes you can achieve “cloud integration in minutes” when considering the connectivity subset of integration, but be sure to look for ways to simplify as you consider a more comprehensive view of integration beyond basic connectivity, such as service virtualization, management, event processing and more. And finally, be sure your integration platform has the deep flexibility to handle the requirements of all your future SaaS applications…many of which are unknown to you now.

    Read the article

  • Gawker Passwords

    - by Nick Harrison
    There has been much news about the hack of the Gawker web sites. There has even been an analysis of the common passwords found. This list is embarrassing in many ways. The most common password was "123456". The second most common password was "password". Much has also been written providing advice on how to create good passwords. This article provides some interesting advice, none of which should be taken. Anyone reading my blog probably already knows the importance of strong passwords, so I am not going to reiterate the reasons here. My target audience is more the folks defining password complexity requirements. A user cannot come up with a strong password if we have complexity requirements that don't make sense. With that in mind, here are a few guidelines: Long Passwords Insist on long passwords. In some cases, you may need to change to allow a long password. I have seen many places that cap passwords at 8 characters. Passwords need to be at least 8 characters at a minimum. Consider how much stronger the passwords would be if you doubled the length. Passwords that are 15-20 characters will be that much harder to crack. There is no need to limit passwords to 8 characters. Don't Require Special Characters Many complexity rules will require that your password include a capital letter, a lower case letter, a number, and one of the "special" characters, the shifted characters above the number keys. The problem with such rules is that the resulting passwords are harder to remember. It also means that you will have a smaller set of characters in the resulting passwords. If you must include one of the 10 digits and one of the 10 "special" characters, then you have dramatically reduced the character set that will make up the final password. Two characters will be one of 10 possible values instead of one of 70. Two additional characters will be one of 26 possible characters instead of a 70-character potential character set. If you limit passwords to 8 characters, you are left with only 4 characters having the full set of 70 potential values. With these character restrictions in place, there are 1.6 x 10^12 possible passwords. Without these special character restrictions, but allowing numbers and special characters, you get a total of 5.76 x 10^14 possible passwords. Even if you only allowed upper and lower case characters and digits, you would still have 2.18 x 10^14 passwords. You can do the math any number of ways; requiring special characters will always weaken passwords. Now imagine the number of passwords when you require more than 8 characters. If you are responsible for defining complexity rules, I urge you to take these guidelines into account. What other guidelines do you follow?
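
    As a back-of-the-envelope check of the figures above (my own arithmetic, using the post's 70-character full set and an 8-character length):

        70^8 \approx 5.76 \times 10^{14} \quad \text{(no composition rules)}
        62^8 \approx 2.18 \times 10^{14} \quad \text{(letters and digits only)}
        10 \times 10 \times 26 \times 26 \times 70^4 \approx 1.62 \times 10^{12} \quad \text{(one digit, one special, one upper and one lower position forced)}

    The exact count under an "at least one from each category" rule is somewhat larger (it needs inclusion-exclusion rather than fixing positions), but the order-of-magnitude gap is the point being made.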

    Read the article

  • Making Room for Innovation — Oracle Interactive eBook

    - by Javier Puerta
    Innovation and complexity are two critical topics on the minds of business leaders. Innovation is what gives them a competitive edge; increased complexity is their greatest challenge. Learn how Oracle is helping customers change the game and make room for innovation by simplifying IT. Access the new Oracle interactive e-book, “Simplify IT and Unleash Innovation”. You can download it here.

    Read the article

  • Making Room for Innovation — Oracle Interactive eBOOK

    - by Cinzia Mascanzoni
    Innovation and complexity are two critical topics on the minds of business leaders. Innovation is what gives them a competitive edge; increased complexity is their greatest challenge. Learn how Oracle is helping customers change the game and make room for innovation by simplifying IT. Access the new Oracle interactive e-book, “Simplify IT and Unleash Innovation” by inviting partners to download it here.

    Read the article

  • Integrating Code Metrics in TFS 2010 Build

    - by Jakob Ehn
    The build process template and custom activity described in this post are available here: http://cid-ee034c9f620cd58d.office.live.com/self.aspx/BlogSamples/CodeMetricsSample.zip Running code metrics has been available since VS 2008, but only from inside the IDE. Yesterday Microsoft finally released the Visual Studio Code Metrics Power Tool 10.0, a command line tool that lets you run code metrics on your applications. This means that it is now possible to perform code metrics analysis on the build server as part of your nightly/QA builds (for example). In this post I will show how you can run the metrics command line tool, and also a custom activity that reads the output, appends the results to the build log, and fails the build if the metric values exceed certain (configurable) threshold values. The code metrics tool analyzes all the methods in the assemblies, measuring cyclomatic complexity, class coupling, depth of inheritance and lines of code. Then it calculates a Maintainability Index from these values that is a measure of how maintainable each method is, between 0 (worst) and 100 (best). For information on how this value is calculated, see http://blogs.msdn.com/b/codeanalysis/archive/2007/11/20/maintainability-index-range-and-meaning.aspx. After this it aggregates the information and presents it at the class, namespace and module level as well. Running Metrics.exe in a build definition Running the actual tool is easy: just use an InvokeProcess activity last in the Compile the Project sequence, reference the metrics.exe file and pass the correct arguments, and you will end up with a result XML file in the drop directory. Here is how it is done in the attached build process template: In the above sequence I first assign the path to the code metrics result file ([BinariesDirectory]\result.xml) to a variable called MetricsResultFile, which is then sent to the InvokeProcess activity in the Arguments property. Here are the arguments for the InvokeProcess activity: Note that we tell metrics.exe to analyze all assemblies located in the Binaries folder. You might want to do some more intelligent filtering here; you probably don’t want to analyze all 3rd party assemblies, for example. Note also the path to metrics.exe; this is the default location when you install the Code Metrics power tool. You must of course install the power tool on all build servers. Using the standard output logging (in the Handle Standard Output/Handle Error Output sections), we get the following output when running the build: Integrating Code Metrics into the build Having the results available next to the build result is nice, but we want the results integrated in the build result itself, and also to affect the outcome of the build. The point of having QA builds that measure, for example, code metrics is to make it very clear how the code being built measures up to the standards of the project/company. Just having an XML file available in the drop location will not cause the developers to improve their code, but a (partially) failing build will! To do this, we need to write a custom activity that parses the metrics result file, logs it to the build log and fails the build if the values from the metrics are below/above some predefined threshold values. The custom activity performs the following steps: Parses the XML. I’m using Linq 2 XSD for this; since the XML schema for the result file is available, it is very easy to generate code that lets you query the structure using standard Linq operators.
    Runs through the metric result hierarchy, logs the metrics for each level and also verifies the maintainability index and the cyclomatic complexity against the threshold values. The threshold values are defined in the build process template and are sent in as arguments to the custom activity. If the threshold values are exceeded, the activity either fails or partially fails the current build. For more information about the structure of the code metrics result file, read Cameron Skinner's post about it. It is very simple and easy to understand. I won’t go through the code of the custom activity here, since there is nothing special about it and it is available for download so you can look at it and play with it yourself. The threshold values for Maintainability Index and Cyclomatic Complexity are defined in the build process template, and can be modified per build definition: I have taken the default values for these settings from my colleague Terje Sandström's post on Code Metrics - suggestions for appropriate limits. You’ll notice that this is quite an improvement compared to using code metrics inside the IDE, where the Red/Yellow/Green limits are fixed (and the default values are somewhat strange; see Terje's post for a discussion on this). This is the first version of the code metrics integration with TFS 2010 Build; I will probably enhance the functionality and the logging (the “tree view” structure in the log becomes quite hard to read) soon. I will also consider adding it to the Community TFS Build Extensions site when it becomes a bit more mature. Another obvious improvement is to extend the data warehouse of TFS, push the metric results back to the warehouse and make them visible in the reports.

    Read the article

  • links for 2010-12-20

    - by Bob Rhubart
    Oracle BI Applications - Security "I recently had to dig into the standard Oracle BI Applications Security Oracle delivers out of the box. The clients had two security requirements..." - Daan Bakboord (tags: oracle security businessintelligence) Changing DataSource Details Using WLST (Multiple Domains) Jay Sensharma shares a script that will make it "easy for WebLogic Administrator to change all the DataSource UserName and Passwords." (tags: weblogic oracle wlst) Richard Veryard on Architecture: Complexity and Power 2 "Power and complexity are higher-order examples of so-called non-functional requirements. Architects need to be able to reason about the composition and decomposition of non-functional requirements." - Richard Veryard (tags: entarch complexity enterprisearchitecture) Anti-Search patterns - SQL to look for what is NOT there - Part One Oracle ACE Director Lucas Jellema discusses a number of situations in which "you are looking for records that do not exist" and demonstrates several "anti-queries." (tags: oracle otn oracleace sql) SOA & Middleware: Canceling a running composite in SOA Suite 11g Niall Commiskey offers a simple scenario. (tags: oracle soa) SOA Design Patterns in the Cloud | SOA World Magazine Srinivasan Sundara Raja attempts to clear up the "confusion in the air about the applicability of SOA in a Cloud managed environment and whether Cloud is the next generation of SOA." (tags: oracle soa cloud) Mark Nelson: Using WebLogic as a Load Balancer "There are a number of good options available to set up a software load balancer in the test environment," says Mark Nelson. "In this post, we will explore one such option – using the HTTP Cluster Servlet that is included with WebLogic Server." (tags: weblogic oracle otn)

    Read the article

  • links for 2010-05-06

    - by Bob Rhubart
    Podcast: Collaborate 10 Wrap-Up - Conclusion #c10 More Collaborate 2010 Las Vegas highlights and hijinks from this ten-member panel, including OAUG and ODTUG board members, members of the Oracle ACE program, and OAUG President Dave Ferguson. (tags: otn oracle collaborate2010) Peter Scott: Realtime Data Warehouse Loading Rittman-Mead's Peter Scott looks at putting data in to a data warehouse in real time. (tags: oracle datawarehousing businessintelligence) Live Webcast: Social BPM - Integrating Enterprise 2.0 with Business Applications - May 12, 2010 at 11:00 a.m. PT Business Process Management with integrated Enterprise 2.0 collaboration can improve business responsiveness and enhance overall enterprise productivity. Learn how to take your business to the next level with a unified solution that fosters process-based collaboration between employees, partners, and customers. (tags: oracle otn bpm enterprise2.0 webcast) Management Pack for Identity Management Viewlet A screencast produced by the Grid Control team showing the features of the Identity Management Pack for Grid Control 11g. Grid Control 11g now works with Oracle Virtual Directory 11g. (tags: oracle otn security identitymanagement) @pevansgreenwood: Having too much SOA is a bad thing (and what we might do about it) "The problem is usually too much flexibility, as flexibility creates complexity, and complexity exponentially increases the effort required to manage and deliver the software." -- Peter Evans-Greenwood (tags: soa complexity flexibility) @vampbenepe: Integration patterns for social data: the Open Social Data Bus "The main point is about defining the right integration pattern for social data: is it a 'message bus' pattern or a 'shared database' pattern?" -- William Vampbenepe (tags: oracle otn enterprise2.0 enterprisearchitecture)

    Read the article

  • Euler Problem 1 : Code Optimization / Alternatives [on hold]

    - by Sudhakar
    I am a newbie to the world of data structures and algorithms, learning from the ground up. This is my attempt to learn. If the question is very plain/simple, please bear with me. Problem: Find the sum of all the multiples of 3 or 5 below 1000. Code I wrote: package problem1; public class Problem1 { public static void main(String[] args) { //******************Approach 1**************** long start = System.currentTimeMillis(); int total = 0; int toSubtract = 0; //Complexity N/3 int limit = 10000; for(int i=3 ; i<limit ;i=i+3){ total = total +i; } //Complexity N/5 for(int i=5 ; i<limit ;i=i+5){ total = total +i; } //Complexity N/15 for(int i=15 ; i<limit ;i=i+15){ toSubtract = toSubtract +i; } //9N/15 = 0.6 N System.out.println(total-toSubtract); System.out.println("Completed in "+(System.currentTimeMillis() - start)); //******************Approach 2**************** for(int i=3 ; i<limit ;i=i+3){ total = total +i; } for(int i=5 ; i<limit ;i=i+5){ if ( 0 != (i%3)) total = total +i; } } } Questions: 1 - Which is the best approach from the above code, and why? 2 - Are there any better alternatives?
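
    One commonly suggested alternative (not taken from the post) is to drop the loops entirely and use the arithmetic-series formula with inclusion-exclusion, which makes the computation O(1) regardless of the limit; a Java sketch:

        public class Problem1ClosedForm {
            // Sum of all multiples of k strictly below limit:
            // k + 2k + ... + mk = k * m * (m + 1) / 2, where m = (limit - 1) / k.
            static long sumOfMultiplesBelow(int k, int limit) {
                long m = (limit - 1) / k;
                return k * m * (m + 1) / 2;
            }

            public static void main(String[] args) {
                int limit = 1000;
                // Multiples of 15 are counted by both the 3s and the 5s, so subtract them once.
                long total = sumOfMultiplesBelow(3, limit)
                           + sumOfMultiplesBelow(5, limit)
                           - sumOfMultiplesBelow(15, limit);
                System.out.println(total);   // 233168 for limit = 1000
            }
        }

    Both posted approaches are linear in the limit (roughly 0.6*N additions for Approach 1, and about 0.53*N iterations plus a modulo test per iteration in the second loop of Approach 2), whereas the closed form above does a constant amount of work.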

    Read the article

  • Requesting quality analysis test cases up front of implementation/change

    - by arin
    Recently I have been assigned to work on a major requirement that falls between a change request and an improvement. The previous implementation was done (badly) by a senior developer who left the company, and did so without leaving a trace of documentation. Here were my initial steps to approach this problem: Considering that the release date was fast approaching and there was no time for slip-ups, I initially asked if the requirement was a "must have". Since the requirement helped the product significantly in terms of usability, the answer was "If possible, yes". Knowing the widespread use and effects of this requirement, I asked whether, if it came to a point where the requirement could not be finished prior to release, it would be a viable option to scrap the current state and revert to the state prior to the ex-senior developer's implementation. The answer was "Most likely: no". Understanding that the requirement was coming from higher management, and due to the complexity of it, I asked for all usability test cases to be written prior to the implementation (by QA) and given to me, to aid me in the comprehension of this task. This was a big no-no for the folks in management, as they failed to understand this approach. Knowing that I had to stand by my request, given my responsibility for this requirement, I insisted and have fallen out of favor with some of the folks, leaving me in a state of "baffledness". Basically, I was trying a test-driven approach to a high-risk, high-complexity, must-have requirement and trying to be safe rather than sorry. Is this approach wrong, or have I approached it incorrectly? P.S.: The change request/improvement was cancelled and the implementation was reverted to the prior state due to the complexity of the problem and lack of time. This only happened after a two-hour-long meeting with other seniors in order to convince the aforementioned folks.

    Read the article

  • Is there really anything to gain with complex design? [duplicate]

    - by SB2055
    I've been working for a consulting firm for some time, with clients of various sizes, and I've seen web applications ranging in complexity from really simple (MVC, service layer, EF, DB) to really complex (MVC, UoW, DI/IoC, repository, service, UI tests, unit tests, integration tests). But on both ends of the spectrum, the quality requirements are about the same. In simple projects, new devs / consultants can hop on, make changes, and contribute immediately, without having to wade through 6 layers of abstraction to understand what's going on, or risking misunderstanding some complex abstraction at a cost further down the line. In all cases, there was never a need to actually make code swappable or reusable - and the tests were never actually maintained past the first iteration because requirements changed, it was too time-consuming, deadlines, business pressure, etc. So if, in the end, testing and interfaces aren't used, rapid development (read: cost savings) is a priority, and the project's requirements will be changing a lot while in development, would it be wrong to recommend a super-simple architecture, even to solve a complex problem, for an enterprise client? Is it complexity that defines enterprise solutions, or is it the reliability, number of concurrent users, ease of maintenance, or all of the above? I know this is a very vague question, and any answer wouldn't apply to all cases, but I'm interested in hearing from devs / consultants that have been in the business for a while and that have worked with these varying degrees of complexity, to hear if the cool-but-expensive abstractions are worth the overall cost, at least while the project is in development.

    Read the article

  • Quick guide to Oracle IRM 11g: Classification design

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g indexThis is the final article in the quick guide to Oracle IRM. If you've followed everything prior you will now have a fully functional and tested Information Rights Management service. It doesn't matter if you've been following the 10g or 11g guide as this next article is common to both. ContentsWhy this is the most important part... Understanding the classification and standard rights model Identifying business use cases Creating an effective IRM classification modelOne single classification across the entire businessA context for each and every possible granular use caseWhat makes a good context? Deciding on the use of roles in the context Reviewing the features and security for context roles Summary Why this is the most important part...Now the real work begins, installing and getting an IRM system running is as simple as following instructions. However to actually have an IRM technology easily protecting your most sensitive information without interfering with your users existing daily work flows and be able to scale IRM across the entire business, requires thought into how confidential documents are created, used and distributed. This article is going to give you the information you need to ask the business the right questions so that you can deploy your IRM service successfully. The IRM team here at Oracle have over 10 years of experience in helping customers and it is important you understand the following to be successful in securing access to your most confidential information. Whatever you are trying to secure, be it mergers and acquisitions information, engineering intellectual property, health care documentation or financial reports. No matter what type of user is going to access the information, be they employees, contractors or customers, there are common goals you are always trying to achieve.Securing the content at the earliest point possible and do it automatically. Removing the dependency on the user to decide to secure the content reduces the risk of mistakes significantly and therefore results a more secure deployment. K.I.S.S. (Keep It Simple Stupid) Reduce complexity in the rights/classification model. Oracle IRM lets you make changes to access to documents even after they are secured which allows you to start with a simple model and then introduce complexity once you've understood how the technology is going to be used in the business. After an initial learning period you can review your implementation and start to make informed decisions based on user feedback and administration experience. Clearly communicate to the user, when appropriate, any changes to their existing work practice. You must make every effort to make the transition to sealed content as simple as possible. For external users you must help them understand why you are securing the documents and inform them the value of the technology to both your business and them. Before getting into the detail, I must pay homage to Martin White, Vice President of client services in SealedMedia, the company Oracle acquired and who created Oracle IRM. In the SealedMedia years Martin was involved with every single customer and was key to the design of certain aspects of the IRM technology, specifically the context model we will be discussing here. Listening carefully to customers and understanding the flexibility of the IRM technology, Martin taught me all the skills of helping customers build scalable, effective and simple to use IRM deployments. 
No matter how well the engineering department designed the software, badly designed and poorly executed projects can result in difficult to use and manage, and ultimately insecure solutions. The advice and information that follows was born with Martin and he's still delivering IRM consulting with customers and can be found at www.thinkers.co.uk. It is from Martin and others that Oracle not only has the most advanced, scalable and usable document security solution on the market, but Oracle and their partners have the most experience in delivering successful document security solutions. Understanding the classification and standard rights model The goal of any successful IRM deployment is to balance the increase in security the technology brings without over complicating the way people use secured content and avoid a significant increase in administration and maintenance. With Oracle it is possible to automate the protection of content, deploy the desktop software transparently and use authentication methods such that users can open newly secured content initially unaware the document is any different to an insecure one. That is until of course they attempt to do something for which they don't have any rights, such as copy and paste to an insecure application or try and print. Central to achieving this objective is creating a classification model that is simple to understand and use but also provides the right level of complexity to meet the business needs. In Oracle IRM the term used for each classification is a "context". A context defines the relationship between.A group of related documents The people that use the documents The roles that these people perform The rights that these people need to perform their role The context is the key to the success of Oracle IRM. It provides the separation of the role and rights of a user from the content itself. Documents are sealed to contexts but none of the rights, user or group information is stored within the content itself. Sealing only places information about the location of the IRM server that sealed it, the context applied to the document and a few other pieces of metadata that pertain only to the document. This important separation of rights from content means that millions of documents can be secured against a single classification and a user needs only one right assigned to be able to access all documents. If you have followed all the previous articles in this guide, you will be ready to start defining contexts to which your sensitive information will be protected. But before you even start with IRM, you need to understand how your own business uses and creates sensitive documents and emails. Identifying business use cases Oracle is able to support multiple classification systems, but usually there is one single initial need for the technology which drives a deployment. This need might be to protect sensitive mergers and acquisitions information, engineering intellectual property, financial documents. For this and every subsequent use case you must understand how users create and work with documents, to who they are distributed and how the recipients should interact with them. 
A successful IRM deployment should start with one well identified use case (we go through some examples towards the end of this article) and then after letting this use case play out in the business, you learn how your users work with content, how well your communication to the business worked and if the classification system you deployed delivered the right balance. It is at this point you can start rolling the technology out further. Creating an effective IRM classification model Once you have selected the initial use case you will address with IRM, you need to design a classification model that defines the access to secured documents within the use case. In Oracle IRM there is an inbuilt classification system called the "context" model. In Oracle IRM 11g it is possible to extend the server to support any rights classification model, but the majority of users who are not using an application integration (such as Oracle IRM within Oracle Beehive) are likely to be starting out with the built in context model. Before looking at creating a classification system with IRM, it is worth reviewing some recognized standards and methods for creating and implementing security policy. A very useful set of documents are the ISO 17799 guidelines and the SANS security policy templates. First task is to create a context against which documents are to be secured. A context consists of a group of related documents (all top secret engineering research), a list of roles (contributors and readers) which define how users can access documents and a list of users (research engineers) who have been given a role allowing them to interact with sealed content. Before even creating the first context it is wise to decide on a philosophy which will dictate the level of granularity, the question is, where do you start? At a department level? By project? By technology? First consider the two ends of the spectrum... One single classification across the entire business Imagine that instead of having separate contexts, one for engineering intellectual property, one for your financial data, one for human resources personally identifiable information, you create one context for all documents across the entire business. Whilst you may have immediate objections, there are some significant benefits in thinking about considering this. Document security classification decisions are simple. You only have one context to chose from! User provisioning is simple, just make sure everyone has a role in the only context in the business. Administration is very low, if you assign rights to groups from the business user repository you probably never have to touch IRM administration again. There are however some obvious downsides to this model.All users in have access to all IRM secured content. So potentially a sales person could access sensitive mergers and acquisition documents, if they can get their hands on a copy that is. You cannot delegate control of different documents to different parts of the business, this may not satisfy your regulatory requirements for the separation and delegation of duties. Changing a users role affects every single document ever secured. Even though it is very unlikely a business would ever use one single context to secure all their sensitive information, thinking about this scenario raises one very important point. Just having one single context and securing all confidential documents to it, whilst incurring some of the problems detailed above, has one huge value. 
Once secured, IRM protected content can ONLY be accessed by authorized users. Just think of all the sensitive documents in your business today, imagine if you could ensure that only everyone you trust could open them. Even if an employee lost a laptop or someone accidentally sent an email to the wrong recipient, only the right people could open that file. A context for each and every possible granular use case Now let's think about the total opposite of a single context design. What if you created a context for each and every single defined business need and created multiple contexts within this for each level of granularity? Let's take a use case where we need to protect engineering intellectual property. Imagine we have 6 different engineering groups, and in each we have a research department, a design department and manufacturing. The company information security policy defines 3 levels of information sensitivity... restricted, confidential and top secret. Then let's say that each group and department needs to define access to information from both internal and external users. Finally add into the mix that they want to review the rights model for each context every financial quarter. This would result in a huge amount of contexts. For example, lets just look at the resulting contexts for one engineering group. Q1FY2010 Restricted Internal - Engineering Group 1 - Research Q1FY2010 Restricted Internal - Engineering Group 1 - Design Q1FY2010 Restricted Internal - Engineering Group 1 - Manufacturing Q1FY2010 Restricted External- Engineering Group 1 - Research Q1FY2010 Restricted External - Engineering Group 1 - Design Q1FY2010 Restricted External - Engineering Group 1 - Manufacturing Q1FY2010 Confidential Internal - Engineering Group 1 - Research Q1FY2010 Confidential Internal - Engineering Group 1 - Design Q1FY2010 Confidential Internal - Engineering Group 1 - Manufacturing Q1FY2010 Confidential External - Engineering Group 1 - Research Q1FY2010 Confidential External - Engineering Group 1 - Design Q1FY2010 Confidential External - Engineering Group 1 - Manufacturing Q1FY2010 Top Secret Internal - Engineering Group 1 - Research Q1FY2010 Top Secret Internal - Engineering Group 1 - Design Q1FY2010 Top Secret Internal - Engineering Group 1 - Manufacturing Q1FY2010 Top Secret External - Engineering Group 1 - Research Q1FY2010 Top Secret External - Engineering Group 1 - Design Q1FY2010 Top Secret External - Engineering Group 1 - Manufacturing Now multiply the above by 6 for each engineering group, 18 contexts. You are then creating/reviewing another 18 every 3 months. After a year you've got 72 contexts. What would be the advantages of such a complex classification model? You can satisfy very granular rights requirements, for example only an authorized engineering group 1 researcher can create a top secret report for access internally, and his role will be reviewed on a very frequent basis. Your business may have very complex rights requirements and mapping this directly to IRM may be an obvious exercise. The disadvantages of such a classification model are significant...Huge administrative overhead. Someone in the business must manage, review and administrate each of these contexts. If the engineering group had a single administrator, they would have 72 classifications to reside over each year. From an end users perspective life will be very confusing. Imagine if a user has rights in just 6 of these contexts. 
They may be able to print content from one but not another, be able to edit content in 2 contexts but not the other 4. Such confusion at the end user level causes frustration and resistance to the use of the technology. Increased synchronization complexity. Imagine a user who after 3 years in the company ends up with over 300 rights in many different contexts across the business. This would result in long synchronization times as the client software updates all your offline rights. Hard to understand who can do what with what. Imagine being the VP of engineering and as part of an internal security audit you are asked the question, "What rights to researchers have to our top secret information?". In this complex model the answer is not simple, it would depend on many roles in many contexts. Of course this example is extreme, but it highlights that trying to build many barriers in your business can result in a nightmare of administration and confusion amongst users. In the real world what we need is a balance of the two. We need to seek an optimum number of contexts. Too many contexts are unmanageable and too few contexts does not give fine enough granularity. What makes a good context? Good context design derives mainly from how well you understand your business requirements to secure access to confidential information. Some customers I have worked with can tell me exactly the documents they wish to secure and know exactly who should be opening them. However there are some customers who know only of the government regulation that requires them to control access to certain types of information, they don't actually know where the documents are, how they are created or understand exactly who should have access. Therefore you need to know how to ask the business the right questions that lead to information which help you define a context. First ask these questions about a set of documentsWhat is the topic? Who are legitimate contributors on this topic? Who are the authorized readership? If the answer to any one of these is significantly different, then it probably merits a separate context. Remember that sealed documents are inherently secure and as such they cannot leak to your competitors, therefore it is better sealed to a broad context than not sealed at all. Simplicity is key here. Always revert to the first extreme example of a single classification, then work towards essential complexity. If there is any doubt, always prefer fewer contexts. Remember, Oracle IRM allows you to change your mind later on. You can implement a design now and continue to change and refine as you learn how the technology is used. It is easy to go from a simple model to a more complex one, it is much harder to take a complex model that is already embedded in the work practice of users and try to simplify it. It is also wise to take a single use case and address this first with the business. Don't try and tackle many different problems from the outset. Do one, learn from the process, refine it and then take what you have learned into the next use case, refine and continue. Once you have a good grasp of the technology and understand how your business will use it, you can then start rolling out the technology wider across the business. Deciding on the use of roles in the context Once you have decided on that first initial use case and a context to create let's look at the details you need to decide upon. 
For each context, identify; Administrative rolesBusiness owner, the person who makes decisions about who may or may not see content in this context. This is often the person who wanted to use IRM and drove the business purchase. They are the usually the person with the most at risk when sensitive information is lost. Point of contact, the person who will handle requests for access to content. Sometimes the same as the business owner, sometimes a trusted secretary or administrator. Context administrator, the person who will enact the decisions of the Business Owner. Sometimes the point of contact, sometimes a trusted IT person. Document related rolesContributors, the people who create and edit documents in this context. Reviewers, the people who are involved in reviewing documents but are not trusted to secure information to this classification. This role is not always necessary. (See later discussion on Published-work and Work-in-Progress) Readers, the people who read documents from this context. Some people may have several of the roles above, which is fine. What you are trying to do is understand and define how the business interacts with your sensitive information. These roles obviously map directly to roles available in Oracle IRM. Reviewing the features and security for context roles At this point we have decided on a classification of information, understand what roles people in the business will play when administrating this classification and how they will interact with content. The final piece of the puzzle in getting the information for our first context is to look at the permissions people will have to sealed documents. First think why are you protecting the documents in the first place? It is to prevent the loss of leaking of information to the wrong people. To control the information, making sure that people only access the latest versions of documents. You are not using Oracle IRM to prevent unauthorized people from doing legitimate work. This is an important point, with IRM you can erect many barriers to prevent access to content yet too many restrictions and authorized users will often find ways to circumvent using the technology and end up distributing unprotected originals. Because IRM is a security technology, it is easy to get carried away restricting different groups. However I would highly recommend starting with a simple solution with few restrictions. Ensure that everyone who reasonably needs to read documents can do so from the outset. Remember that with Oracle IRM you can change rights to content whenever you wish and tighten security. Always return to the fact that the greatest value IRM brings is that ONLY authorized users can access secured content, remember that simple "one context for the entire business" model. At the start of the deployment you really need to aim for user acceptance and therefore a simple model is more likely to succeed. As time passes and users understand how IRM works you can start to introduce more restrictions and complexity. Another key aspect to focus on is handling exceptions. If you decide on a context model where engineering can only access engineering information, and sales can only access sales data. Act quickly when a sales manager needs legitimate access to a set of engineering documents. Having a quick and effective process for permitting other people with legitimate needs to obtain appropriate access will be rewarded with acceptance from the user community. 
The big print issue...

Printing is often an issue of contention: users love to print, but the business wants to ensure sensitive information remains in the controlled digital world. There are many cases of physical document loss causing a business pain, and it is often overlooked that IRM can help with this issue by limiting the ability to generate physical copies of digital content. However, it can be hard to maintain a balance between security and usability when it comes to printing. Consider the following points when deciding whether to give print rights.

Oracle IRM sealed documents can contain watermarks that expose information about the user, the time and location of access, and the classification of the document. This information resides in the printed copy, making it easier to trace who printed it.
Printed documents are slower to distribute than their digital counterparts, so time sensitive information in printed format may present a lower risk.
Print activity is audited, so you can monitor and react to users abusing print rights.

Summary

In summary, it is important to think carefully about the way you create your context model. As you ask the business these questions you may get a variety of different requirements. There may be special projects that require a context just for sensitive information created during the lifetime of the project. There may be a department that requires all information in the group to be secured, and you might have a few senior executives who wish to use IRM to exchange a small number of highly sensitive documents with a very small number of people. Oracle IRM, with its very flexible context classification system, can support all of these use cases. The trick is introducing the complexity needed to deliver them at the right level. In another article I'm working on I will go through some examples of how Oracle IRM might map to existing business use cases. But for now, this article covers the important questions you need to answer to get your IRM service deployed and successfully protecting your most sensitive information.

    Read the article

  • Video on Architecture and Code Quality using Visual Studio 2012 – interview with Marcel de Vries and Terje Sandstrom by Adam Cogan

    - by terje
    Find the video HERE. Adam Cogan did a great Web TV interview with Marcel de Vries and myself on the topics of architecture and code quality. It was real fun participating in this session. Although we know each other from the MVP ALM community, Marcel, Adam and I haven't worked together before. It was very interesting to see how much we agreed on, and how alike we were thinking. The basics of ensuring you have a good architecture and how you could document it is one thing. We also agreed on the importance of having a high quality code base, and on how we use the Visual Studio 2012 tools, and some others (NDepend for example), to measure and ensure that the code quality is where it should be. As the tools, methods and thinking popped up during the interview there was a lot of "Hey! I do that too!". The tools are not only for "after the fact" work; we use them during coding. That way the tools become an integrated part of our coding work and help us find issues we may have overlooked. The video has a bunch of call outs pinpointing important things to remember, and these are also listed on the corresponding web page. I haven't seen that touch before, but really liked this way of doing it – it makes it much easier to spot the highlights. Titus Maclaren and Raj Dhatt from SSW have done a terrific job producing this video. And thanks to Lei Xu for doing the camera and recording job. Thanks guys! Also, if you are at TechEd Amsterdam 2012, go and listen to Adam Cogan's session "A modern architecture review: Using the new code review tools" (Friday 29th, 10.15-11.30) and Marcel de Vries' session "Intellitrace, what is it and how can I use it to my benefit" (Wednesday 27th, 5-6.15).

    The highlights point out some important practices. I'll elaborate on a few of them here:

    Add instructions on how to compile the solution. You do this by adding a text file with instructions to the solution, and keeping it under source control. These instructions should contain what is needed on top of a standard install of Visual Studio. I do a lot of code reviews, and more often than not I am not even able to compile the program, because some tool or library needs to be installed first. The same applies to any new developer who joins the team, so do this to increase your productivity when the team changes or a team member switches computer. Don't forget to document what you have to configure on the machine, IIS being a common one. The more automatic you can make this, the better. Use NuGet to pull down libraries. When the text document gets to more than, say, half a page with a bunch of different things to do, convert it into a PowerShell script instead.

    The metrics warning levels. These are very conservatively set by Microsoft. You rarely see anything but green, and besides, you should have color scales for each of the metrics. I have a blog post describing a more appropriate set of levels, based on both research work and industry "best practices". The essential limits are:

    Cyclomatic complexity and coupling (higher numbers are worse), on method level:
    Green: 0 to 10.
    Yellow: 10 to 20 (some say 15). Acceptable, but have a look to see if there is something unneeded here.
    Red: 20 to 40. Action required, get these down.
    Bleeding red: above 40. This is the real red alert, immediate action! (My invention, as people have asked what to do when they have a cyclomatic complexity of 150. The only answer I could think of was: RUN!)

    Maintainability index (lower numbers are worse, scale from 0 to 100), on method level:
    Green: 60 to 100.
    Yellow: 40 to 60. You will always have methods here too; accept the higher ones, and take a look at those that are down near the lower limit. Check them against the other metrics.
    Red: 20 to 40. Action required, fix these.
    Bleeding red: below 20. Immediate action required.

    When doing metrics analysis you should leave the generated code out. You do this by adding attributes; unfortunately Microsoft has "forgotten" to add these to all their stuff, so you might have to add them to some of the code. In most cases it can be done so that it is not overwritten by a new round of code generation. Take a look at my blog post here for details on how to do that. Class level metrics might also be useful, at least for coupling and maintainability, but it is much more difficult to set fixed limits on those. Metric aggregations at higher levels tend to be pretty useless, as the number of methods varies a lot and there is little science on what number of methods can be regarded as good or bad. NDepend has a recommendation, but they say it may vary too. And in these days of data binding the number might be pretty high, as properties count as methods. However, if you take the worst case situation, classes with more than 20 methods are suspicious, and coupling and cyclomatic complexity go red above 20, so any class above 20x20 = 400 for these measures should be checked over.
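    If it helps, the essential limits above can be written down as two small functions. This is only an illustration of the bands listed here, in Python for brevity; it is not part of Visual Studio or NDepend.

        def complexity_band(value: int) -> str:
            # Band a method-level cyclomatic complexity or coupling value (higher is worse).
            if value <= 10:
                return "green"
            if value <= 20:
                return "yellow"        # acceptable, but look for anything unneeded
            if value <= 40:
                return "red"           # action required, get these down
            return "bleeding red"      # the real red alert, immediate action

        def maintainability_band(index: int) -> str:
            # Band a method-level maintainability index (0 to 100, lower is worse).
            if index >= 60:
                return "green"
            if index >= 40:
                return "yellow"        # acceptable, check against the other metrics
            if index >= 20:
                return "red"           # action required, fix these
            return "bleeding red"      # immediate action required

        # Example: the infamous method with cyclomatic complexity 150.
        print(complexity_band(150))       # bleeding red -> RUN!
        print(maintainability_band(55))   # yellow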
    In the video we also mention the SOLID principles, coined by "Uncle Bob" (Robert C. Martin). One of them, the Dependency Inversion principle, we discuss in the video. It is important to note that this principle is NOT about whether you should use a Dependency Inversion Container or not; it is about how you design the interfaces and interactions between your classes. The Dependency Inversion Container is just one technique based on this principle, whose main purpose is to isolate the things you would like to change at runtime, for example if you implement a plug-in architecture. Overuse of a Dependency Inversion Container is, however, NOT a good thing. It should be used for a purpose, not as a general DI solution. The general DI thinking, however, is useful far beyond the DIC: you should always "program to an abstraction", not to the concrete implementation. We also talk a bit about the GRASP patterns, a term coined by Craig Larman in his book Applying UML and Patterns. GRASP stands for General Responsibility Assignment Software Patterns, and these patterns describe fundamental principles of object design and responsibility assignment. What I find great about these patterns is that they are another way to focus on the responsibility of a class. One of the things I most often find broken in software designs is that classes lack responsibility, and as a result there are a lot of classes mucking around in the internals of other classes. We also discuss the term "Code Smells". This term was invented by Kent Beck and Martin Fowler when they worked on Fowler's "Refactoring" book. A code smell is one of a set of "bad" coding practices, which are the drivers behind a corresponding set of refactorings. Here is a good list of the smells and their corresponding refactoring patterns. See also this.
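    Going back to the Dependency Inversion principle for a moment, here is a minimal sketch of what "program to an abstraction" means in practice. It is in Python for brevity even though the discussion is about .NET, and all class names are invented; the point is only that the high-level class depends on an abstraction, while the concrete detail is supplied from the outside, with or without a container.

        from abc import ABC, abstractmethod

        class MessageSender(ABC):
            # The abstraction the business logic depends on.
            @abstractmethod
            def send(self, recipient: str, body: str) -> None: ...

        class SmtpSender(MessageSender):
            # One concrete detail; could be swapped for a test double or another transport.
            def send(self, recipient: str, body: str) -> None:
                print(f"SMTP -> {recipient}: {body}")

        class OrderService:
            # High-level policy: depends on the MessageSender abstraction, not on SmtpSender.
            def __init__(self, sender: MessageSender) -> None:
                self._sender = sender

            def confirm(self, customer: str) -> None:
                self._sender.send(customer, "Your order is confirmed")

        # The wiring (done here by hand) is the only place that knows about the concrete class;
        # a container can do this for you, but the principle holds either way.
        service = OrderService(SmtpSender())
        service.confirm("alice@example.com")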

    Read the article
