Search Results

Search found 11953 results on 479 pages for 'functional testing'.

Page 142/479 | < Previous Page | 138 139 140 141 142 143 144 145 146 147 148 149  | Next Page >

  • Backup of whole harddrive during full operation with Acronis True Image Home 2010

    - by testing
    Currently I'm creating a backup of one of my hard drives. It's my main hard drive, where the operating system is running. Because the backup is done during full operation, I wonder whether the backup really includes all files (registry, ...). Can I restore the backup on another hard drive and then run the operating system again without problems? Normally I would say that you have to boot from a CD (without a running OS) to make a backup. I did a Google search but didn't find my case covered so far.

    Read the article

  • In VirtualBox, how can I access host localhost from guest (Visual Studio Dev Server from IE7 testing VM)?

    - by Seth
    Host OS is Win7 running MyApp in the Visual Studio Development Server, bound to localhost:51227. The VM is VirtualBox configured with NAT. Guest OS is Win XP with IE7 installed. My goal is to debug MyApp (running on the host) from within IE7 (running on the guest). The Visual Studio Development Server only binds to the loopback network device (i.e. localhost); it does not bind to the external IP address of my host. I've tried accessing 10.0.2.2:51227 from IE7 on the guest (and confirmed that 10.0.2.2 is the gateway address using ipconfig), but it appears that 10.0.2.2 maps to the external IP of the host, NOT the loopback IP (localhost), so this does not work. Any suggestions?
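    One workaround that comes up for exactly this setup is worth a sketch (hedged: it assumes VirtualBox's NAT engine carries traffic for 10.0.2.2 through to the host, and that the dev server rejects any Host header other than "localhost", which the VS Development Server does). The trick is to point the guest's own "localhost" at the NAT gateway:

        # In the guest (Win XP), edit C:\WINDOWS\system32\drivers\etc\hosts and
        # replace the default "127.0.0.1 localhost" entry with the line below.
        # IE7 then sends "Host: localhost:51227" while the TCP connection
        # actually travels out through the NAT gateway to the host machine.
        10.0.2.2    localhost

    After that, browsing to http://localhost:51227/ from IE7 in the guest should reach the dev server running on the host.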

    Read the article

  • Reduce size of MP4

    - by testing
    I have an MP4 file with a length of 22:44. Here are the details: Video: width: 720 px, height: 404 px, data bitrate: 1022 kBit/s, overall bitrate: 1182 kBit/s, fps: 24, codec: H264 - MPEG4 AVC (part 10) (avc1). Audio: bitrate: 159 kBit/s, stereo, sample rate: 48 kHz, codec: MPEG AAC Audio (mp4a). I thought I could reduce the current file size (about 200 MB) by reducing the width and the height (420 x 236). I tried different programs: Handbrake, Format Factory, Next Video Converter and Super. The first three didn't work as expected: Handbrake has a bug when setting the width and the height, and the next two don't allow fine-grained setting of the video size (only presets of width and height). Super seems to be the best, but I couldn't find a setting which reduces the file size. I reduced the width and the height but only got 20 MB less. Now I've tried umpteen settings and I still get too large a file. I want to reduce the file size to 100 MB or less. The output format should be FLV or MP4, because I need this for Flowplayer. Which settings of Super, or which program, should I use to reduce the file size? Of course the video should still be viewable.
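    For what it's worth, a hedged sketch with ffmpeg (an assumption: ffmpeg wasn't among the tools tried, and a reasonably current build with a native AAC encoder is assumed). At 22:44 (1364 s), a 100 MB budget allows roughly 819,200 kBit / 1364 s ≈ 600 kBit/s overall, so capping video around 440 kBit/s plus 128 kBit/s audio leaves headroom for container overhead:

        # scale to 420x236 and cap the bitrates; output stays MP4 for Flowplayer
        ffmpeg -i input.mp4 -vf scale=420:236 -b:v 440k -c:a aac -b:a 128k output.mp4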

    Read the article

  • Can I use Cygwin as a replacement for Ubuntu, for bash script testing?

    - by Jeroen De Meerleer
    Next Wednesday I'm having an exam on Operating Systems. In this exam there will also be a bash-scripting part. The teacher himself will test the scripts in a Virtual Machine running Ubuntu. I, however, am having serious trouble running the latest Ubuntu (14.04 LTS) in a Virtual Machine (GNOME runs very slowly). So I'm thinking about using Cygwin, which is doing the job great for another course. The teacher has already confirmed I can use it, but I suspect he doesn't know it at all. I've already tested the scripts we made in class and they're all running without errors. But I'm quite sure there are some things I need to keep in mind. My question: would you use Cygwin as a replacement for the Ubuntu VM? Or should I stick with the VM (maybe using a different config/platform)?

    Read the article

  • Keep Your Eye on the Ball

    - by [email protected]
    With the FIFA World Cup 2010 in South Africa almost a week underway, soccer fans all around the world are talking about at least two things: that typical vuvuzela sound, and the new Jabulani ball, which is said to move unpredictably and be difficult to handle, with the altitude of the World Cup stadiums somehow a contributing factor as well. (Picture taken from http://www.flickr.com/photos/warrenski/4143923059/ under a Creative Commons license.) Although FIFA states that it hasn't received any official complaints, the end users don't seem to be very happy with this new ball. This brings me to a comparison with IT management and testing. When you're in a situation where you're introducing a new product (in IT terms, introducing a new application), you would like to test all possible scenarios that your end users could be using and experiencing. However, that's a very time and resource intensive process to repeat for every application change or update. It's like getting ready for the big game with no game plan. That's why a new approach has been developed, one based on the 80/20 rule: testing 80% of the application will cost about 20% of the effort. The remaining 20% of your application will not be tested before deployment, but monitored with a real user monitoring solution immediately after deployment. These tools track all user experiences, including error messages and performance and availability metrics from an end user perspective. Should any anomaly occur, you would be able to repair it quickly so you and your end users can get back into the game. These real user sessions can easily be converted into testing scripts, so the 80% of application testing can be complemented by the remaining 20%. The Oracle Enterprise Manager 11g group of products offers both the real user monitoring solution, with Oracle Real User Experience Insight, and the required testing solution, with Oracle Application Testing Suite. Visit our Oracle Enterprise Manager 11g resource center and find out how its Business-Driven IT Management approach will help you keep your eye on your business ball. Happy World Cup.

    Read the article

  • CodeStock 2012 Review: Eric Landes (@ericlandes) - Automated Tests into Automated Builds! How to put the right type of automated tests into the right automated builds.

    Automated Tests into Automated Builds! How to put the right type of automated tests into the right automated builds. Speaker: Eric Landes. Twitter: @ericlandes. Blog: http://ericlandes.com/ This was one of the first sessions I attended during CodeStock 2012. Eric's talk focused mostly on unit testing, and on the idea that the lack of proper unit testing can be compared to stealing from an employer. His point was that if you're not doing proper unit testing, then all of the time wasted on fixing issues that could have been detected with unit tests is like stealing money from the employer. He makes the assumption that the time spent on fixing these issues could have been better spent developing new features that drive the business. To a point I can agree with Eric's argument regarding unit testing and stealing from a company's perspective. I can see how he relates resources being shifted from new development to bug fixes as stealing, based on the fact that the resources used to fix bugs are directly taken from other projects. He also states that boring/redundant and build/test tasks should be automated because it reduces the chances of errors and frees up developers to do what they do best: DEVELOP! When he refers to testing, he breaks testing down into four distinct types: Unit Tests, Acceptance Tests (which also include Integration Tests), Performance Tests, and UI Tests. With this he also recommends that developers should not go buck wild striving for 100% code coverage, because some tests may not provide a great return on investment. In his experience, 70% test coverage was a very acceptable rate.

    Read the article

  • What is the preferred or accepted method for testing proxy settings?

    - by Mike Webb
    I have a lot of trouble with the internet connectivity in the program I am working on, and it all seems to spawn from some issue with the proxy settings. Most of the issues at this point are fixed, but the issue I am having now is that my method of testing the proxy settings makes some users wait for long periods of time. Here is what I do:

        System.Net.WebClient webClnt = new System.Net.WebClient();
        webClnt.Proxy = proxy;
        webClnt.Credentials = proxy.Credentials;
        byte[] tempBytes;
        try
        {
            tempBytes = webClnt.DownloadData(url.Address);
        }
        catch
        {
            // Invalid proxy settings
            // Code to handle the exception goes here
        }

    This is the only way that I've found to test if the proxy settings are correct. I tried making a call to our web service, but no proxy settings are needed when making the call; it will work even if I have bogus proxy settings. The above method, though, has no timeout member that I can find and set, and I use DownloadData as opposed to DownloadDataAsync because I need to wait until the method is done so that I can know if the settings are correct before continuing on in the program. Any suggestions on a better method, or a workaround for this method, are appreciated. Mike
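    One possible workaround (a sketch, not from the original thread): WebClient itself exposes no timeout, but the WebRequest it creates does, so a small subclass can make the proxy check fail fast:

        // Sketch: a WebClient whose underlying request times out quickly
        // (the 5000 ms default is an assumption; tune to taste).
        public class TimeoutWebClient : System.Net.WebClient
        {
            private readonly int _timeoutMs;

            public TimeoutWebClient(int timeoutMs) { _timeoutMs = timeoutMs; }

            protected override System.Net.WebRequest GetWebRequest(System.Uri address)
            {
                System.Net.WebRequest request = base.GetWebRequest(address);
                if (request != null)
                    request.Timeout = _timeoutMs; // fail fast instead of blocking the user
                return request;
            }
        }

    Using this in place of WebClient keeps the synchronous DownloadData flow, but bad proxy settings surface as a timeout exception after a bounded wait.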

    Read the article

  • Unit Testing (xUnit) an ASP.NET Mvc Controller with a custom input model?

    - by Danny Douglass
    I'm having a hard time finding information on what I expect to be a pretty straightforward scenario. I'm trying to unit test an Action on my ASP.NET Mvc 2 Controller that utilizes a custom input model with DataAnnotations. My testing framework is xUnit, as mentioned in the title. Here is my custom input model:

        public class EnterPasswordInputModel
        {
            [Required(ErrorMessage = "")]
            public string Username { get; set; }

            [Required(ErrorMessage = "Password is a required field.")]
            public string Password { get; set; }
        }

    And here is my Controller (took out some logic to simplify for this example):

        [HttpPost]
        public ActionResult EnterPassword(EnterPasswordInputModel enterPasswordInput)
        {
            if (!ModelState.IsValid)
                return View();

            // do some logic to validate input
            // if valid - next View on successful validation
            return View("NextViewName");
            // else - add and display error on current view
            return View();
        }

    And here is my xUnit Fact (also simplified):

        [Fact]
        public void EnterPassword_WithValidInput_ReturnsNextView()
        {
            // Arrange
            var controller = CreateLoginController(userService.Object);

            // Act
            var result = controller.EnterPassword(
                new EnterPasswordInputModel { Username = username, Password = password }) as ViewResult;

            // Assert
            Assert.Equal("NextViewName", result.ViewName);
        }

    When I run my test I get the following error on my test fact when trying to retrieve the controller result (Act section): System.NullReferenceException: Object reference not set to an instance of an object. Thanks in advance for any help you can offer!
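    A side note that may explain part of the puzzle (an aside, not from the original question): DataAnnotations are evaluated by the MVC model binder, which never runs when the action method is invoked directly, so ModelState.IsValid is always true inside such a test. A hedged sketch of simulating the binder before the Act step (assumes using System.ComponentModel.DataAnnotations, System.Collections.Generic and System.Linq, plus a 'model' variable holding the input):

        // Run DataAnnotations validation by hand and copy any failures into
        // ModelState, mimicking what the model binder does on a real request.
        var validationResults = new List<ValidationResult>();
        Validator.TryValidateObject(model, new ValidationContext(model, null, null), validationResults, true);
        foreach (var validationResult in validationResults)
            controller.ModelState.AddModelError(
                validationResult.MemberNames.FirstOrDefault() ?? string.Empty,
                validationResult.ErrorMessage);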

    Read the article

  • How do I make this nested for loop, testing sums of cubes, more efficient?

    - by Brian J. Fink
    I'm trying to iterate through all the combinations of pairs of positive long integers in Java, testing the sum of their cubes to discover if it's a Fibonacci number. I'm currently doing this by using the value of the outer loop variable as the inner loop's upper limit, with the effect that the outer loop runs a little slower each time. Initially it appeared to run very quickly; I was up to 10 digits within minutes. But now, after 2 full days of continuous execution, I'm only somewhere in the middle range of 15 digits. At this rate it may end up taking a whole year just to finish running this program. The code for the program is below:

        import java.lang.*;
        import java.math.*;

        public class FindFib {
            public static void main(String args[]) {
                long uLimit = 9223372036854775807L; // long maximum value
                // Golden Ratio: note the parentheses around (1 + sqrt 5);
                // without them this computes 1 + sqrt(5)/2 instead
                BigDecimal PHI = new BigDecimal((1D + Math.sqrt(5D)) / 2D);
                for (long a = 1; a <= uLimit; a++)   // Outer loop, 1 to maximum
                    for (long b = 1; b <= a; b++)    // Inner loop, 1 to current outer
                    {
                        // Cube the numbers and add
                        BigDecimal c = BigDecimal.valueOf(a).pow(3).add(BigDecimal.valueOf(b).pow(3));
                        System.out.print(c + " ");   // Output result
                        // Upper and lower limits of interval for Mobius test: [c*PHI-1/c, c*PHI+1/c]
                        BigDecimal d = c.multiply(PHI).subtract(BigDecimal.ONE.divide(c, BigDecimal.ROUND_HALF_UP)),
                                   e = c.multiply(PHI).add(BigDecimal.ONE.divide(c, BigDecimal.ROUND_HALF_UP));
                        // Mobius test: if an integer lies in the interval (floor values unequal), Fibonacci number!
                        if (d.toBigInteger().compareTo(e.toBigInteger()) != 0)
                            System.out.println();    // Line feed
                        else
                            System.out.print("\r");  // Carriage return instead
                    }
                // Display final message
                System.out.println("\rDone. ");
            }
        }

    Now, the use of BigDecimal and BigInteger was deliberate; I need them to get the necessary precision. Is there anything other than my variable types that I could change to gain better efficiency?
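    One direction worth sketching (not from the original post): the floating-point Golden Ratio limits precision anyway, and there is a purely integer test: n is a Fibonacci number if and only if 5*n^2 + 4 or 5*n^2 - 4 is a perfect square. A hedged sketch (BigInteger.sqrt() needs Java 9+; on older JDKs an integer Newton iteration would replace it):

        import java.math.BigInteger;

        static boolean isFibonacci(BigInteger n) {
            // n is Fibonacci iff 5n^2 + 4 or 5n^2 - 4 is a perfect square
            BigInteger fiveNSquared = n.multiply(n).multiply(BigInteger.valueOf(5));
            return isPerfectSquare(fiveNSquared.add(BigInteger.valueOf(4)))
                || isPerfectSquare(fiveNSquared.subtract(BigInteger.valueOf(4)));
        }

        static boolean isPerfectSquare(BigInteger x) {
            BigInteger root = x.sqrt(); // Java 9+
            return root.multiply(root).equals(x);
        }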

    Read the article

  • Guidance and Pricing for MSDN 2010

    - by John Alexander
    Sorry for the rather lengthy post here. I get asked this all the time so I decided to post it… Visual Studio 2010 editions will be available on April 12, 2010. The comparison covers five editions, in this order: Professional with MSDN Essentials, Professional with MSDN, Premium with MSDN, Ultimate with MSDN, and Test Professional with MSDN.

    Product Features:

    Debugging and Diagnostics: IntelliTrace (Historical Debugger); Static Code Analysis; Code Metrics; Profiling; Debugger

    Testing Tools: Unit Testing; Code Coverage; Test Impact Analysis; Coded UI Test; Web Performance Testing; Load Testing (1); Microsoft Test Manager 2010; Test Case Management (2); Manual Test Execution; Fast-Forward for Manual Testing; Lab Management Configuration (3)

    Integrated Development Environment: Multiple Monitor Support; Multi-Targeting; One Click Web Deployment; JavaScript and jQuery Support; Extensible WPF-Based Environment

    Database Development: Database Deployment; Database Change Management (2); Database Unit Testing; Database Test Data Generation; Data Access

    Development Platform Support: Windows Development; Web Development; Office and SharePoint Development; Cloud Development; Customizable Development Experience

    Architecture and Modeling: Architecture Explorer; UML® 2.0 Compliant Diagrams (Activity, Use Case, Sequence, Class, Component); Layer Diagram and Dependency Validation; Read-only diagrams (UML, Layer, DGML Graphs)

    Lab Management: Virtual environment setup & tear down (3); Provision environment from template (3); Checkpoint environment (3)

    Team Foundation Server: Version Control (2); Work Item Tracking (2); Build Automation (2); Team Portal (2); Reporting & Business Intelligence (2); Agile Planning Workbook (2); Microsoft Visual Studio Team Explorer 2010; Test Case Management (2)

    MSDN Subscription – Software and Services for Production Use (values per edition, in the order listed above):
    Windows Azure Platform: 20 hrs/mo † / 50 hrs/mo † / 100 hrs/mo † / 250 hrs/mo † / n/a
    Microsoft Visual Studio Team Foundation Server 2010
    Microsoft Visual Studio Team Foundation Server 2010 CAL: 1 / 1 / 1 / 1 / 1
    Microsoft Expression Studio 3
    Microsoft Office Professional Plus 2010, Project Professional 2010, Visio Premium 2010 (following Office 2010 launch)

    MSDN Subscription – Software for Development and Testing (4): Windows 7, Windows Server 2008 R2 and SQL Server 2008; Toolkits, Software Development Kits, Driver Development Kits; Previous versions of Windows (client and server operating systems); Previous versions of Microsoft SQL Server; Microsoft Office; Microsoft Dynamics; All other Servers; Windows Embedded operating systems; Teamprise

    MSDN Subscription – Other Benefits (values per edition, in the order listed above):
    Technical support incidents: 0 / 2 / 4 / 4 / 2
    Priority support in MSDN Forums
    Microsoft e-learning collections (typically 10 courses or 20 hours): 0 / 1 / 2 / 2 / 1
    MSDN Flash newsletter
    MSDN Online Concierge
    MSDN Magazine

    Pricing (per edition, in the order listed above):
    Buy from (MSRP): $799 / $1,199 / $5,469 / $11,899 / $2,169
    Renew from (MSRP): $549 (upgrade) / $799 / $2,299 / $3,799 / $899

    † Availability varies by country and subscription level. Details available on the MSDN site.
    (1) May require one or more Microsoft Visual Studio Load Test Virtual User Pack 2010.
    (2) Requires Team Foundation Server and a Team Foundation Server CAL.
    (3) Requires Microsoft Visual Studio Lab Management 2010.
    (4) Per-user license allows unlimited installations and use for designing, developing, testing, and demonstrating applications.
UML is a registered trademark of Object Management Group, Inc. Windows is either a registered trademark or trademark of Microsoft Corporation in the United States and/or other countries.

    Read the article

  • How do you create a MANIFEST.MF that's available when you're testing and running from a jar in production?

    - by warvair
    I've spent far too much time trying to figure this out. This should be the simplest thing, and everyone who distributes Java applications in jars must deal with it. I just want to know the proper way to add versioning to my Java app so that I can access the version information both when I'm testing (e.g. debugging in Eclipse) and when running from a jar. Here's what I have in my build.xml:

        <target name="jar" depends="compile">
            <property name="version.num" value="1.0.0"/>
            <buildnumber file="build.num"/>
            <tstamp>
                <format property="TODAY" pattern="yyyy-MM-dd HH:mm:ss" />
            </tstamp>
            <manifest file="${build}/META-INF/MANIFEST.MF">
                <attribute name="Built-By" value="${user.name}" />
                <attribute name="Built-Date" value="${TODAY}" />
                <attribute name="Implementation-Title" value="MyApp" />
                <attribute name="Implementation-Vendor" value="MyCompany" />
                <attribute name="Implementation-Version" value="${version.num}-b${build.number}"/>
            </manifest>
            <jar destfile="${build}/myapp.jar" basedir="${build}" excludes="*.jar" />
        </target>

    This creates /META-INF/MANIFEST.MF and I can read the values when I'm debugging in Eclipse thusly:

        public MyClass() {
            try {
                InputStream stream = getClass().getResourceAsStream("/META-INF/MANIFEST.MF");
                Manifest manifest = new Manifest(stream);
                Attributes attributes = manifest.getMainAttributes();
                String implementationTitle = attributes.getValue("Implementation-Title");
                String implementationVersion = attributes.getValue("Implementation-Version");
                String builtDate = attributes.getValue("Built-Date");
                String builtBy = attributes.getValue("Built-By");
            } catch (IOException e) {
                logger.error("Couldn't read manifest.");
            }
        }

    But when I create the jar file, it loads the manifest of another jar (presumably the first jar loaded by the application - in my case, activation.jar). Also, the following code doesn't work either, although all the proper values are in the manifest file:

        Package thisPackage = getClass().getPackage();
        String implementationVersion = thisPackage.getImplementationVersion();

    Any ideas?
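    One approach that's commonly suggested for the jar case (a sketch, under the assumption that the class loader sees every jar's manifest, so you have to pick out your own by a known attribute such as the Implementation-Title set above):

        // Assumes: import java.io.*; java.net.URL; java.util.Enumeration;
        //          java.util.jar.Attributes; java.util.jar.Manifest;
        private String findImplementationVersion() throws IOException {
            Enumeration<URL> manifests = getClass().getClassLoader().getResources("META-INF/MANIFEST.MF");
            while (manifests.hasMoreElements()) {
                InputStream stream = manifests.nextElement().openStream();
                try {
                    Attributes attributes = new Manifest(stream).getMainAttributes();
                    // Pick out the manifest that belongs to this application
                    if ("MyApp".equals(attributes.getValue("Implementation-Title"))) {
                        return attributes.getValue("Implementation-Version");
                    }
                } finally {
                    stream.close();
                }
            }
            return null; // our manifest was not found
        }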

    Read the article

  • BizTalk 2009 - Architecture Decisions

    - by StuartBrierley
    In the first step towards implementing a BizTalk 2009 environment, from development through to live, I put forward a proposal that detailed the options available, as well as the costs and benefits associated with these options, to allow an informed discussion to take place with the business drivers and budget holders of the project. This ultimately led to a decision being made to implement an initial BizTalk Server 2009 environment using the Standard Edition of the product. It is my hope that in the long term, as projects require and allow, we will be looking to implement my ideal recommendation of a multi-server enterprise level environment, but given the differences in cost and the likely initial workload for the environment, this was not something that I could fully recommend at this time. However, it must be noted that this decision was made in full awareness of the limits of the standard edition, and the business drivers of this project were made fully aware of the risks associated with running without the failover capabilities of the enterprise edition. When considering the creation of this new BizTalk Server 2009 environment, I have also recommended the creation of the following pre-production environments:

    Development - Usage: development of solutions; unit testing against technical specifications; initial load testing; testing of deployment packages. Environment: Visual Studio; BizTalk; SQL; client PCs/laptops; server environment similar to the live implementation.

    Test - Usage: testing of solutions against business and technical requirements. Environment: BizTalk; SQL; server environment similar to the live implementation.

    Pseudo-Live - Usage: "as live" environment to allow testing against the live implementation; acts as back-up hardware in case of failure of the live environment. Environment: BizTalk; SQL; server environment identical to the live implementation.

    The creation of these differing environments allows for the separation of the various stages of the development cycle. The development environment is for use when actively developing a solution; it is a potentially volatile environment whose state at any given time cannot be guaranteed. It allows developers to carry out initial tests in an environment that is similar to the live environment and also provides an area for the testing of deployment packages prior to any release to the test environment. The test environment is intended to be a semi-volatile environment that is similar to the live environment. It will change periodically through the development of a solution (or solutions) but should be otherwise stable. It allows for the continued testing of a solution against requirements without the worry that the environment is being actively changed by any ongoing development. This separation of development and test is crucial in ensuring the quality and control of the tested solution. The pseudo-live environment should be considered to be an almost static environment. It should mimic the live environment and can act as back-up hardware in the case of live failure. This environment acts as an area to allow for "as live" testing, where the performance and behaviour of the live solutions can be replicated. There should be relatively few changes to this environment, with software releases limited to "release candidate" level releases prior to going live. Whereas the pseudo-live environment should always mimic the live environment, to save on costs the development and test servers could be implemented on lower specification hardware.
Consideration can also be given to the use of a virtual server environment to further reduce hardware costs in the development and test environments, indeed this virtual approach can also be extended to pseudo-live and live assuming the underlying technology is in place. Although there is no requirement for the development and test server environments to be identical to live, the overriding architecture implemented should be the same as in live and an understanding must be gained of the performance differences to be expected across the different environments.

    Read the article

  • Git: Create a branch from unstaged/uncommitted changes on master

    - by knoopx
    Context: I'm working on master adding a simple feature. After a few minutes I realize it was not so simple and it would have been better to work in a new branch. This always happens to me and I have no idea how to switch to another branch and take all these uncommitted changes with me, leaving the master branch clean. I supposed git stash && git stash branch new_branch would simply accomplish that, but this is what I get:

        ~/test $ git status
        # On branch master
        nothing to commit (working directory clean)
        ~/test $ echo "hello!" > testing
        ~/test $ git status
        # On branch master
        # Changed but not updated:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       modified:   testing
        #
        no changes added to commit (use "git add" and/or "git commit -a")
        ~/test $ git stash
        Saved working directory and index state WIP on master: 4402b8c testing
        HEAD is now at 4402b8c testing
        ~/test $ git status
        # On branch master
        nothing to commit (working directory clean)
        ~/test $ git stash branch new_branch
        Switched to a new branch 'new_branch'
        # On branch new_branch
        # Changed but not updated:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       modified:   testing
        #
        no changes added to commit (use "git add" and/or "git commit -a")
        Dropped refs/stash@{0} (db1b9a3391a82d86c9fdd26dab095ba9b820e35b)
        ~/test $ git s
        # On branch new_branch
        # Changed but not updated:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       modified:   testing
        #
        no changes added to commit (use "git add" and/or "git commit -a")
        ~/test $ git checkout master
        M	testing
        Switched to branch 'master'
        ~/test $ git status
        # On branch master
        # Changed but not updated:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       modified:   testing
        #
        no changes added to commit (use "git add" and/or "git commit -a")

    Do you know if there is any way of accomplishing this?
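    The usual answer to this one (a sketch, not from the original post) is that no stash is needed at all: uncommitted changes follow you across a plain branch switch, so you can carry them to a new branch and commit them there, leaving master clean:

        git checkout -b new_branch   # the dirty working tree comes along
        git add testing
        git commit -m "WIP: move work in progress off master"
        git checkout master          # master no longer shows the modification

    The stash-based transcript above actually did the same job; the modification reappeared on master only because it was never committed on new_branch before switching back.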

    Read the article

  • Enterprise Process Maps: A Process Picture worth a Million Words

    - by raul.goycoolea
    Getting Started with Business Transformations

    A well-known proverb states that "A picture is worth a thousand words." In relation to Business Process Management (BPM), a credible analyst might have a few questions. What if the picture was taken from some particular angle, like directly overhead? What if it was taken from only an inch away or a mile away? What if the photographer did not focus the camera correctly? Does the value of the picture depend on who is looking at it? Enterprise Process Maps are analogous in this sense of relative value. Every BPM project (holistic BPM kick-off, enterprise system implementation, Service-oriented Architecture, business process transformation, corporate performance management, etc.) should begin with a clear understanding of the business environment, from the biggest picture representations down to the lowest level required or desired for the particular project type, scope and objectives. The Enterprise Process Map serves as an entry point for the process architecture and is defined as the single highest level of process mapping for an organization. It is constructed and evaluated during the Strategy Phase of the Business Process Management Lifecycle. (see Figure 1)

    Fig. 1: Business Process Management Lifecycle

    Many organizations view such maps as visual abstractions, constructed for the single purpose of process categorization. This, in turn, results in a lesser focus on the inherent intricacies of the Enterprise Process view, which are explored in the course of this paper. With the main focus of a large scale process documentation effort usually underlying an ERP or other system implementation, it is common for the work to be driven by the desire to "get to the details," and to the type of modeling that will derive near-term tangible results. For instance, a project in American Pharmaceutical Company X is driven by the Director of IT. With 120+ systems in place, and a lack of standardized processes across the United States, he and the VP of IT have decided to embark on a long-term ERP implementation. At the forethought of both are questions such as: How does my application architecture map to the business? What are each application's functionalities, and where do the business processes utilize them? Where can we retire legacy systems? Well-developed BPM methodologies prescribe numerous model types to capture such information and allow for thorough analysis in these areas. Process to application maps, Event Driven Process Chains, etc. provide this level of detail and facilitate the completion of such project-specific questions. These models and such analysis are appropriately carried out at a relatively low level of process detail. (see Figure 2)

    Fig. 2: The Level Concept, Generic Process Hierarchy

    Some of the questions remaining are ones of documentation longevity, the continuation of BPM practice in the organization, process governance and ownership, process transparency and clarity in business process objectives and strategy.

    The Level Concept in Brief

    Figure 2 shows a generic, four-level process hierarchy depicting the breakdown of a "Process Area" into progressively more detailed process classifications.
    The number of levels and the names of these levels are flexible, and can be fit to the standards of the organization's chosen terminology or any other chosen reference model that makes logical sense for both short and long term process description. It is at Level 1 (in this case the Process Area level) that the Enterprise Process Map is created. This map and its contained objects become the foundation for a top-down approach to subsequent mapping, object relationship development, and analysis of the organization's processes and its supporting infrastructure. Additionally, this picture serves as a communication device, at an executive level, describing the design of the business in its service to a customer. It seems, then, imperative that the process development effort, and this map, start off on the right foot. Figuring out just what that right foot is, however, is critical and trend-setting in an evolving organization.

    Key Considerations

    Enterprise Process Maps are usually not as living and breathing as other process maps. Just as it would be an extremely difficult task to change the foundation of the Sears Tower or a city plan for the entire city of Chicago, the Enterprise Process view of an organization usually remains unchanged once developed (unless, of course, an organization is at a stage where it is capable of true, high-level process innovation). Regardless, the Enterprise Process Map is a key first step, and one that must be taken in a precise way. What makes this groundwork solid depends on not only the materials used to construct it (process areas), but also the layout plan and knowledge base of what will be built (the entire process architecture). It seems reasonable that care and consideration are required to create this critical high level map... but what are the important factors? Does the process modeler need to worry about how many process areas there are? About who is looking at it? Should he only use the color pink because it's his boss' favorite color? Interestingly, and perhaps surprisingly, these are all valid considerations that may just require a bit of structure. Below are three Key Factors to consider when building an Enterprise Process Map:

    1. Company Strategic Focus
    2. Process Categorization: Customer is Core
    3. End-to-end versus Functional Processes

    Company Strategic Focus

    As mentioned above, the Enterprise Process Map is created during the Strategy Phase of the Business Process Management Lifecycle. From Oracle Business Process Management methodology for business transformation, it is apparent that business processes exist for the purpose of achieving the strategic objectives of an organization. In a prescribed, top-down approach to process development, it must be ensured that each process fulfills its objectives and, in an aggregated manner, drives fulfillment of the strategic objectives of the company, whether for particular business segments or in a broader sense. This is a crucial point, as the strategic messages of the company must therefore resound in its process maps, in particular one that spans the processes of the complete business: the Enterprise Process Map. One simple example from Company X is shown below (see Figure 3).

    Fig. 3: Company X Enterprise Process Map

    In reviewing Company X's Enterprise Process Map, one can immediately begin to understand the general strategic mindset of the organization. It shows that Company X is focused on its customers, defining 10 of its process areas belonging to customer-focused categories.
    Additionally, the organization views these end-customer-oriented process areas as part of customer-fulfilling value chains, while support process areas do not provide as much contiguous value. However, by including both support and strategic process categorizations, it becomes apparent that all processes are considered vital to the success of the customer-oriented focus processes. Below is an example from Company Y (see Figure 4).

    Fig. 4: Company Y Enterprise Process Map

    Company Y, although also a customer-oriented company, sends a differently focused message with its depiction of the Enterprise Process Map. Along the top of the map is the company's product tree, overarching the process areas, which when executed deliver the products themselves. This indicates one strategic objective of excellence in product quality. Additionally, the view represents a less linear value chain, with strong overlaps of the various process areas. Marketing and quality management are seen as key support processes, as they span the process lifecycle. Often, companies may incorporate graphics, logos and symbols representing customers and suppliers, and other objects to truly send the strategic message to the business. Other times, Enterprise Process Maps may show high-level responsibility to organizational units, or the application types that support the process areas. It is possible that hundreds of formats and focuses can be applied to an Enterprise Process Map. What is of vital importance, however, is which formats and focuses are chosen to truly represent the direction of the company, and serve as a driver for focusing the business on the strategic objectives set forth in that right.

    Process Categorization: Customer is Core

    In the previous two examples, processes were grouped using differing categories and techniques. Company X showed one support and three customer process categorizations using encompassing chevron objects; Company Y achieved a less distinct categorization using a gradual color scheme. Either way, and in general, modeling of the process areas becomes even more valuable and easily understood within the context of business categorization, be it strategic or otherwise. But how one categorizes their processes is typically more complex than simply choosing object shapes and colors. Previously, it was stated that the ideal is a prescribed top-down approach to developing processes, to make certain linkages all the way back up to corporate strategy. But what about external influences? What forces push and pull corporate strategy? Industry maturity, product lifecycle, market profitability, competition, etc. can all drive the critical success factors of a particular business segment, or the company as a whole, in addition to previous corporate strategy. This may seem to be turning into a discussion of theory, but that is far from the case. In fact, in years of recent study and evolution of the way businesses operate, cross-industry and across the globe, one invariable has surfaced with such strength as to make it undeniable in the game plan of any strategy fit for survival. That constant is the customer. Many of a company's critical success factors, in any business segment, relate to the customer: customer retention, satisfaction, loyalty, etc. Businesses serve customers, and so do a business's processes, mapped or unmapped. The most effective way to categorize processes is in a manner that visualizes convergence to what is core for a company. It is the value chain, beginning with the customer in mind, and ending with the fulfillment of that customer, that becomes the core or the centerpiece of the Enterprise Process Map. (see Figure 5)

    Fig. 5: Company Z Enterprise Process Map

    Company Z has what may be viewed as several different perspectives or "cuts" baked into their Enterprise Process Map. It has divided its processes into three main categories (top, middle, and bottom) of Management Processes, the Core Value Chain and Supporting Processes. The Core category begins with Corporate Marketing (which contains the activities of beginning to engage customers) and ends with Customer Service Management. Within the value chain, this company has divided into the focus areas of their two primary business lines, Foods and Beverages. Does this mean that areas such as Strategy, Information Management or Project Management are not as important as those in the Core category? No! In some cases, though, depending on the organization's understanding of high-level BPM concepts, use of category names such as "Core," "Management" or "Support" can be a touchy subject. What is important to understand is that no matter the nomenclature chosen, the Core processes are those that drive directly to customer value, Support processes are those which make the Core processes possible to execute, and Management Processes are those which steer and influence the Core. Some common terms for these three basic categorizations are Core, Customer Fulfillment, Customer Relationship Management, Governing, Controlling, Enabling, Support, etc.

    End-to-end versus Functional Processes

    Every high and low level of process (function, task, activity, process/work step, whatever an organization calls it) should add value to the flow of business in an organization. Suppose that within the process "Deliver package," there is a documented task titled "Stop for ice cream." It doesn't take a process expert to deduce the room for improvement. Though stopping for ice cream may create gain for the one person performing it, it likely benefits neither the organization nor, more importantly, the customer. In most cases, "Stop for ice cream" wouldn't make it past the first pass of To-Be process development. What would make the cut, however, would be a flow of tasks that, each having their own value add, build up to greater and greater levels of process objective. In this case, those tasks would combine to achieve a status of "package delivered." Figure 6 shows a simple example: just as the package cannot be delivered (outcome of the process) without first being retrieved, loaded, and the travel destination reached (outcomes of the process steps), some higher level of process "Play Practical Joke" (e.g., main process or process area) cannot be completed until a package is delivered. It seems that isolated or functionally separated processes, such as "Deliver Package" (shown in Figure 6), are necessary, but are always part of a bigger value chain. Each of these individual processes must be analyzed within the context of that value chain in order to ensure successful end-to-end process performance. For example, this company's "Create Joke Package" process could be operating flawlessly and efficiently, but if a joke is never developed, it cannot be created, so the end-to-end process breaks.

    Fig. 6: End to End Process Construction

    That being recognized, it is clear that processes must be viewed as end-to-end, customer-to-customer, and in the context of company strategy. But as can also be seen from the previous example, these vital end-to-end processes cannot be built without the functionally oriented building blocks. Without one, the other cannot be had, or at least not in a complete and organized fashion. As it turns out, but not discussed in depth here, the process modeling effort, BPM organizational development, and comprehensive coverage cannot be fully realized without a semi-functional, process-oriented approach. Then, an Enterprise Process Map should be concerned with both views: the building blocks, and access points to the business-critical end-to-end processes which they construct. Without the functional building blocks, all streams of work needed for any business transformation would be a lost mess of process disorganization. End-to-end views are essential for optimization in context, understanding customer impacts, base-lining all project phases and aligning objectives. Including both views on an Enterprise Process Map allows management to understand the functional orientation of the company's processes, while still providing access to end-to-end processes, which are most valuable to them. (see Figures 7 and 8)

    Fig. 7: Simplified Enterprise Process Map with end-to-end Access Point

    The above examples show two unique ways to achieve a successful Enterprise Process Map. The first example is a simple map that shows a high level set of process areas and a separate section with the end-to-end processes of concern for the organization. This particular map is filtered to show just one vital end-to-end process for a project-specific focus.

    Fig. 8: Detailed Enterprise Process Map showing connected Functional Processes

    The second example shows a more complex arrangement and categorization of functional processes (the names of each process area have been removed). The end-to-end perspective is achieved at this level through the connections (interfaces at lower levels) between these functional process areas. An important point to note is that the organization of these two views of the Enterprise Process Map is dependent, in large part, on the orientation of its audience, and the complexity of the landscape at the highest level. If both are not apparent, the Enterprise Process Map is missing an opportunity to serve as a holistic, high-level view.

    Conclusion

    In the world of BPM, and specifically regarding Enterprise Process Maps, a picture can be worth as many words as the thought and effort that is put into it. Enterprise Process Maps alone cannot change an organization, but they serve more purposes than initially meet the eye, and therefore must be designed in a way that enables a BPM mindset, business process understanding and business transformation efforts. Every Enterprise Process Map will and should be different when looking across organizations. Its design will be driven by company strategy, a level of customer focus, and functional versus end-to-end orientations. This high-level description of the considerations of Enterprise Process Maps is not a prescriptive "how to" guide. However, a company attempting to create one may not have the practical BPM experience to truly explore its options or impacts on the coming work of business process transformation. The biggest takeaway is that process modeling, at all levels, is a science and an art, and art is open to interpretation. It is critical that the modeler of the highest level of process mapping be a cognoscente of the message he is delivering and the factors at hand.
Without sufficient focus on the design of the Enterprise Process Map, an entire BPM effort may suffer. For additional information please check: Oracle Business Process Management.

    Read the article

  • Maven2 multi-module ejb 3.1 project - deployment error

    - by gerry
    The problem is that I get the following error while deploying my project to GlassFish:

    java.lang.RuntimeException: Unable to load EJB module. DeploymentContext does not contain any EJB
    Check archive to ensure correct packaging

    But let us start with what the project structure looks like in Maven2... I've built the following scenario:

    MultiModuleJavaEEProject - parent module
    - model --- packaged as jar
    - ejb1 ---- packaged as ejb
    - ejb2 ---- packaged as ejb
    - web ---- packaged as war

    So model, ejb1, ejb2 and web are children/modules of the parent MultiModuleJavaEEProject. ejb1 depends on model. ejb2 depends on ejb1. web depends on ejb2. The POMs look like:

    parent: <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>org.dyndns.geraldhuber.testing</groupId> <artifactId>MultiModuleJavaEEProject</artifactId> <packaging>pom</packaging> <version>1.0</version> <name>MultiModuleJavaEEProject</name> <url>http://maven.apache.org</url> <modules> <module>model</module> <module>ejb1</module> <module>ejb2</module> <module>web</module> </modules> <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.7</version> <scope>test</scope> </dependency> </dependencies> <build> <pluginManagement> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <source>1.6</source> <target>1.6</target> </configuration> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-ejb-plugin</artifactId> <version>2.2</version> <configuration> <ejbVersion>3.1</ejbVersion> <jarName>${project.groupId}.${project.artifactId}-${project.version}</jarName> </configuration> </plugin> </plugins> </pluginManagement> </build> </project>

    model: <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <parent> <groupId>testing</groupId> <artifactId>MultiModuleJavaEEProject</artifactId> <version>1.0</version> </parent> <artifactId>model</artifactId> <packaging>jar</packaging> <version>1.0</version> <name>model</name> <url>http://maven.apache.org</url> </project>

    ejb1: <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <parent> <groupId>testing</groupId> <artifactId>MultiModuleJavaEEProject</artifactId> <version>1.0</version> </parent> <artifactId>ejb1</artifactId> <packaging>ejb</packaging> <version>1.0</version> <name>ejb1</name> <url>http://maven.apache.org</url> <dependencies> <dependency> <groupId>org.glassfish</groupId> <artifactId>javax.ejb</artifactId> <version>3.0</version> <scope>provided</scope> </dependency> <dependency> <groupId>testing</groupId> <artifactId>model</artifactId> <version>1.0</version> </dependency> </dependencies> </project>

    ejb2: <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <parent>
    <groupId>testing</groupId> <artifactId>MultiModuleJavaEEProject</artifactId> <version>1.0</version> </parent> <artifactId>ejb2</artifactId> <packaging>ejb</packaging> <version>1.0</version> <name>ejb2</name> <url>http://maven.apache.org</url> <dependencies> <dependency> <groupId>org.glassfish</groupId> <artifactId>javax.ejb</artifactId> <version>3.0</version> <scope>provided</scope> </dependency> <dependency> <groupId>testing</groupId> <artifactId>ejb1</artifactId> <version>1.0</version> </dependency> </dependencies> </project>

    web: <?xml version="1.0" encoding="UTF-8"?> <project xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd" xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <modelVersion>4.0.0</modelVersion> <parent> <artifactId>MultiModuleJavaEEProject</artifactId> <groupId>testing</groupId> <version>1.0</version> </parent> <groupId>testing</groupId> <artifactId>web</artifactId> <version>1.0</version> <packaging>war</packaging> <name>web Maven Webapp</name> <url>http://maven.apache.org</url> <dependencies> <dependency> <groupId>javax.servlet</groupId> <artifactId>servlet-api</artifactId> <version>2.4</version> <scope>provided</scope> </dependency> <dependency> <groupId>org.glassfish</groupId> <artifactId>javax.ejb</artifactId> <version>3.0</version> <scope>provided</scope> </dependency> <dependency> <groupId>testing</groupId> <artifactId>ejb2</artifactId> <version>1.0</version> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-war-plugin</artifactId> <version>2.0</version> <configuration> <archive> <manifest> <addClasspath>true</addClasspath> </manifest> </archive> </configuration> </plugin> </plugins> <finalName>web</finalName> </build> </project>

    And the model is just a simple POJO:

        package testing.model;

        public class Data {
            private String data;

            public String getData() {
                return data;
            }

            public void setData(String data) {
                this.data = data;
            }
        }

    And ejb1 contains only one STATELESS ejb:
        package testing.ejb1;

        import javax.ejb.Stateless;
        import testing.model.Data;

        @Stateless
        public class DataService {
            private Data data;

            public DataService() {
                data = new Data();
                data.setData("Hello World!");
            }

            public String getDataText() {
                return data.getData();
            }
        }

    As well, ejb2 is only a stateless EJB:

        package testing.ejb2;

        import javax.ejb.EJB;
        import javax.ejb.Stateless;
        import testing.ejb1.DataService;

        @Stateless
        public class Service {
            @EJB
            DataService service;

            public Service() {
            }

            public String getText() {
                return service.getDataText();
            }
        }

    And the web module contains only a Servlet:

        package testing.web;

        import java.io.IOException;
        import java.io.PrintWriter;
        import javax.ejb.EJB;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import testing.ejb2.Service;

        public class SimpleServlet extends HttpServlet {
            @EJB
            Service service;

            public void doGet(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                PrintWriter out = response.getWriter();
                out.println("SimpleServlet Executed");
                out.println("Text: " + service.getText());
                out.flush();
                out.close();
            }
        }

    And the web.xml file in the web module looks like:

        <!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" "http://java.sun.com/dtd/web-app_2_3.dtd" >
        <web-app>
            <display-name>Archetype Created Web Application</display-name>
            <servlet>
                <servlet-name>simple</servlet-name>
                <servlet-class>testing.web.SimpleServlet</servlet-class>
            </servlet>
            <servlet-mapping>
                <servlet-name>simple</servlet-name>
                <url-pattern>/simple</url-pattern>
            </servlet-mapping>
        </web-app>

    So no further files are set up by me. There is no ejb-jar.xml in any of the ejb modules, because I'm using EJB 3.1, so I think ejb-jar.xml descriptors are optional. Is this right? But the problem is the already mentioned error: java.lang.RuntimeException: Unable to load EJB module. DeploymentContext does not contain any EJB. Check archive to ensure correct packaging. Can anybody help?

    Read the article

  • Syntax error on running a batch file to replace files

    - by Ralph
    I have a batch file intended to replace all instances of tracking.js within a folder/subfolders:

        FOR /R "D:\Virtual Servers (Testing)\CourseWare Master\Shared\Jenison\Version1.2\" %%I IN (tracking.js*) DO COPY /Y "D:\Virtual Servers (Testing)\CourseWare Master\Shared\Jenison\tracking.js" %%~fI

    When this is run I get the following syntax error:

        C:COPY /Y "D:\Virtual Servers (Testing)\CourseWare Master\Shared\Jenison\tracking.js" D:\Virtual Servers (Testing)\CourseWare Master\Shared\Jenison\Version1.2\SHAPERS_COMBINED\Smarter Communications\WhatisInfluencing\script\Tracking.js
        The syntax of the command is incorrect.

    Ideas please?
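    A hedged reading of the error (visible in the echoed command: the destination path is expanded unquoted, so cmd splits it at the spaces): quoting the %%~fI expansion should fix it.

        FOR /R "D:\Virtual Servers (Testing)\CourseWare Master\Shared\Jenison\Version1.2\" %%I IN (tracking.js*) DO COPY /Y "D:\Virtual Servers (Testing)\CourseWare Master\Shared\Jenison\tracking.js" "%%~fI"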

    Read the article

  • How to make a general profile for PHPUnit testing in WebIDE?

    - by Ondrej Slinták
    I'm playing a bit with the beta version of PhpStorm (the PHP version of WebIDE) and its integration of PHPUnit. I know how to set up a profile to run tests in a particular file, directory or class. The problem is, I'd like to create some profile where the Run button would run the tests in the currently opened file. Any idea if there's a way to do it? Or perhaps it isn't implemented in the beta version yet?

    Read the article

  • Taking "do the simplest thing that could possible work" too far in TDD: testing for a file-name kno

    - by Support - multilanguage SO
    For TDD you have to:

    1. Create a test that fails.
    2. Do the simplest thing that could possibly work to pass the test.
    3. Add more variants of the test and repeat.
    4. Refactor when a pattern emerges.

    With this approach you're supposed to cover all the cases (that come to my mind at least), but I wonder if I'm being too strict here, and whether it is possible to "think ahead" to some scenarios instead of simply discovering them. For instance, I'm processing a file, and if it doesn't conform to a certain format I am to throw an InvalidFormatException. So my first test was:

        @Test
        void testFormat() {
            // empty doesn't do anything nor throw anything
            processor.validate("empty.txt");
            try {
                processor.validate("invalid.txt");
                assert false : "Should have thrown InvalidFormatException";
            } catch (InvalidFormatException ife) {
                assert "Invalid format".equals(ife.getMessage());
            }
        }

    I run it and it fails because it doesn't throw an exception. So the next thing that comes to my mind is "do the simplest thing that could possibly work", so I:

        public void validate(String fileName) throws InvalidFormatException {
            if (fileName.equals("invalid.txt")) {
                throw new InvalidFormatException("Invalid format");
            }
        }

    Doh!! (Although the real code is a bit more complicated, I found myself doing something like this several times.) I know that I have to eventually add another file name and another test that would make this approach impractical, and that would force me to refactor to something that makes sense (which, if I understood correctly, is the point of TDD: to discover the patterns the usage unveils), but:

    Q: am I taking the "Do the simplest thing..." stuff too literally?
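    To sketch where the triangulation would lead (a hypothetical next iteration; the first-line header check below is an invented stand-in for the asker's real format rules): adding a second invalid file to the test forces the hardcoded file name out and real validation in.

        // Hypothetical: assumes the format requires a known first-line header.
        // Needs: import java.io.BufferedReader; java.io.FileReader; java.io.IOException;
        public void validate(String fileName) throws InvalidFormatException {
            try {
                BufferedReader reader = new BufferedReader(new FileReader(fileName));
                try {
                    String firstLine = reader.readLine();
                    // empty files pass; a present but malformed header fails
                    if (firstLine != null && !firstLine.startsWith("HEADER")) {
                        throw new InvalidFormatException("Invalid format");
                    }
                } finally {
                    reader.close();
                }
            } catch (IOException e) {
                throw new InvalidFormatException("Invalid format");
            }
        }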

    Read the article

  • Is (StringComparison.)Ordinal the same as InvariantCulture for testing equality?

    - by Tim Lovell-Smith
    From what I've read so far, it sounds like these StringComparison types are meant to differ in how they sort strings. If so, does that mean it doesn't matter which StringComparison you use when doing an equality comparison? string.Equals(a, b, StringComparison....) Extra credit: does it make a difference to the answer if we compare OrdinalIgnoreCase and InvariantCultureIgnoreCase? What is the answer then? Please provide a supporting argument and/or references.
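    For the record, a sketch suggesting the two are not interchangeable for equality (an assumption worth verifying on your runtime: culture-sensitive comparison applies Unicode canonical equivalence, while ordinal compares raw code units):

        string precomposed = "\u00E5";   // "å" as a single code point
        string combining   = "a\u030A";  // "a" followed by a combining ring above

        // False: the underlying char sequences differ
        Console.WriteLine(string.Equals(precomposed, combining, StringComparison.Ordinal));
        // True: the invariant culture treats the two forms as equivalent
        Console.WriteLine(string.Equals(precomposed, combining, StringComparison.InvariantCulture));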

    Read the article

  • iPhone app memory leak with UIImage animation? Problem testing on device

    - by user157733
    I have an animation which works fine in the simulator but crashes on the device. I am getting the following error:

        Program received signal: "0".
        The Debugger has exited due to signal 10 (SIGBUS)

    A bit of investigating suggests that the UIImages are not getting released and I have a memory leak. I am new to this, so can someone tell me if this is the likely cause? If you could also tell me how to solve it then that would be amazing. The images are 480px x 480px and about 25kb each. My code is below...

        NSArray *rainImages = [NSArray arrayWithObjects:
            [UIImage imageNamed:@"rain-loop0001.png"],
            [UIImage imageNamed:@"rain-loop0002.png"],
            [UIImage imageNamed:@"rain-loop0003.png"],
            [UIImage imageNamed:@"rain-loop0004.png"],
            [UIImage imageNamed:@"rain-loop0005.png"],
            [UIImage imageNamed:@"rain-loop0006.png"],
            //more looping images
            [UIImage imageNamed:@"rain-loop0045.png"],
            [UIImage imageNamed:@"rain-loop0046.png"],
            [UIImage imageNamed:@"rain-loop0047.png"],
            [UIImage imageNamed:@"rain-loop0048.png"],
            [UIImage imageNamed:@"rain-loop0049.png"],
            [UIImage imageNamed:@"rain-loop0050.png"],
            nil];
        rainImage.animationImages = rainImages;
        rainImage.animationDuration = 4.15/2;
        rainImage.animationRepeatCount = 0;
        [rainImage startAnimating];
        [rainImage release];

    Thanks
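    A commonly suggested direction here (a sketch, assuming the crash is memory pressure rather than a leak in the strict sense): [UIImage imageNamed:] caches every frame for the lifetime of the app, so all 50 decoded frames stay resident. Loading via file paths skips that cache:

        // Build the frame array without the imageNamed: cache.
        NSMutableArray *rainImages = [NSMutableArray array];
        for (int i = 1; i <= 50; i++) {
            NSString *name = [NSString stringWithFormat:@"rain-loop%04d", i];
            NSString *path = [[NSBundle mainBundle] pathForResource:name ofType:@"png"];
            [rainImages addObject:[UIImage imageWithContentsOfFile:path]];
        }
        rainImage.animationImages = rainImages;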

    Read the article

  • Duck type testing with C# 4 for dynamic objects.

    - by Tracker1
    I'm wanting to have a simple duck typing example in C# using dynamic objects. It would seem to me that a dynamic object should have HasValue/HasProperty/HasMethod methods with a single string parameter for the name of the value, property, or method you are looking for before trying to run against it. I'm trying to avoid try/catch blocks, and deeper reflection if possible. It just seems to be common practice for duck typing in dynamic languages (JS, Ruby, Python, etc.) to test for a property/method before trying to use it, then fall back to a default or throw a controlled exception. The example below is basically what I want to accomplish. If the methods described above don't exist, does anyone have premade extension methods for dynamic that will do this?

    Example: In JavaScript I can test for a method on an object fairly easily.

        //JavaScript
        function quack(duck) {
            if (duck && typeof duck.quack === "function") {
                return duck.quack();
            }
            return null; //nothing to return, not a duck
        }

    How would I do the same in C#?

        //C# 4
        dynamic Quack(dynamic duck)
        {
            //how do I test that the duck is not null,
            //and has a quack method?

            //if it doesn't quack, return null
        }
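    Absent built-in Has* members, one possible stand-in (a sketch: plain reflection, which covers ordinary CLR objects behind dynamic but not true DynamicObject implementations, and GetMethod can throw on overloaded names):

        public static class DuckTyping
        {
            // True if the runtime type exposes a public method with this name.
            public static bool HasMethod(object candidate, string methodName)
            {
                return candidate != null && candidate.GetType().GetMethod(methodName) != null;
            }
        }

        dynamic Quack(dynamic duck)
        {
            if (DuckTyping.HasMethod((object)duck, "Quack"))
                return duck.Quack();
            return null; // doesn't quack, not a duck
        }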

    Read the article

  • What's the best way to do cross browser testing?

    - by Doug
    What's the best way for me to check if my website is compatible with IE7, IE8, Safari, FF, and Chrome without having to install each and every one? I mainly want to check the CSS, HTML, and JavaScript. Update: I put a bounty in hopes there is a more practical solution for someone like myself. I am using Windows 7 Home Premium x64. Update 2: I don't mind installing these browsers now, but I can't even if I wanted to; Windows 7 doesn't allow me to install IE7.

    Read the article

  • What is a convenient base for a bignum library & primality testing algorithm?

    - by nn
    Hi, I am to program the Solovay-Strassen primality test presented in the original paper on RSA. Additionally I will need to write a small bignum library, and so when searching for a convenient representation for bignums I came across this specification:

        struct {
            int sign;
            int size;
            int *tab;
        } bignum;

    I will also be writing a multiplication routine using the Karatsuba method. So, for my question: what base would be convenient for storing integer data in the bignum struct? Note: I am not allowed to use third-party or built-in implementations for bignums such as GMP. Thank you.
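    For illustration, one conventional choice (a sketch of a common convention under stated assumptions, not the only valid one): store digits little-endian in tab, using a power-of-two base no larger than half the word size, so a digit-by-digit product still fits in the wider type during the schoolbook base case of Karatsuba.

        /* Base 2^16: the product of two digits (each < 2^16) fits in a 32-bit
         * unsigned value, which keeps carry propagation simple. */
        #define BASE 65536

        struct bignum {
            int sign;  /* +1, -1, or 0 for zero */
            int size;  /* number of digits used in tab */
            int *tab;  /* tab[0] = least significant digit, each in [0, BASE) */
        };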

    Read the article

< Previous Page | 138 139 140 141 142 143 144 145 146 147 148 149  | Next Page >