Search Results

Search found 4050 results on 162 pages for 'requirements'.

Page 16 of 162

  • How important is knowing functionality before coding?

    - by minusSeven
    I work for a software development company where the development work has been offshored to us. The onshore team handles support and talks directly to the clients. We never talk to the clients ourselves; we only talk to people on the onshore team, who talk directly to the clients. When requirements come in, the onshore team talks to the clients, writes requirement documents, and informs us. We produce design documents after studying the requirements (we follow the traditional waterfall model). But there is one problem in the whole process: nobody, either offshore or onshore, understands the functionality of the application completely. We just know it's a big, complex web app handling complex order processing, catalog management, campaign management, and other activities. We struggle with the design document because the requirements are not clear. It then turns into a series of questions and answers going back and forth between the onshore team, the offshore team, and the clients. We are often told to understand the functionality from the code, but that's usually not feasible, as the code base is huge and understanding even a simple menu item takes days if not weeks. We have asked the clients to give us a knowledge transfer about the application, but to no avail. Our manager often tells us to start coding even if the design document is not complete or the requirements are not clear. We start by coding the part of the requirements that seems clear and wait for the rest. This usually delays deployment by a month. In extreme cases we have very few errors in development and production, but the clients say that's not what they asked for. That starts a blame game and a series of change requests, and we end up developing something very different. My question is: how would you do development work if you don't know the functionality of the app fully? UPDATE: The development methodology isn't really my choice, and I am not my team's lead; it is simply the way things began. I have tried to tell people about the advantages of agile, but to no avail. Besides, I don't think my team has the necessary mindset to work in an agile environment.

    Read the article

  • Geek City: Preparing for the SQL Server Master Exam

    - by Kalen Delaney
    I was amazed at the results when I just did a search of SQLBlog and realized no one had really blogged here about the changes to the Microsoft Certified Master (MCM) program. Greg Low described the MCM program when he decided to pursue the MCM at the end of 2008, but two years later, at the end of 2010, Microsoft completely changed the requirements. Microsoft published the new requirements here. The three-week intensive course is no longer required, but that doesn't mean you can just buy an exam...(read more)

    Read the article

  • Data Auditor by Example

    - by Jinjin.Wang
    OWB has a Data Auditors node under the Oracle Module in the Projects Navigator. What is a data auditor and how do you use it? I will give an introduction to data auditors and show their usage by example. A data auditor is an important tool for ensuring that data quality levels meet business requirements. It validates data against a set of data rules to determine which records comply and which do not, and it gathers statistical metrics on how well the data in a system complies with a rule by auditing and marking how many errors occur against the audited table. Data auditors are typically scheduled for regular execution as part of a process flow, to monitor the quality of the data in an operational environment such as a data warehouse or ERP system, either immediately after updates like data loads or at regular intervals. How do you use a data auditor to monitor data quality? Only objects with data rules can be monitored, so the first step is to define data rules according to business requirements and apply them to the objects you want to monitor. The objects can be tables, views, materialized views, and external tables. Second, create a data auditor containing those objects. You can optionally configure the data auditor and set physical deployment parameters for it, which are used while running it. Then deploy and run the data auditor, either manually or as part of a process flow. After execution, the data auditor sets several output values, and records identified as not complying with the defined data rules are written to error tables. Here is an example. We have two tables, DEPARTMENTS and EMPLOYEES (see pic-1 and pic-2; click here for DDL and data), imported into OWB. We want to gather statistical metrics on how well the data in these two tables satisfies the following requirements: a. Values of the EMPLOYEES.EMPLOYEE_ID attribute are three-digit numbers. b. Valid values for EMPLOYEES.JOB_ID are IT_PROG, SA_REP, SH_CLERK, PU_CLERK, and ST_CLERK. c. EMPLOYEES.EMPLOYEE_ID is related to DEPARTMENTS.MANAGER_ID. (Pic-1: EMPLOYEES; Pic-2: DEPARTMENTS.) 1. To check for legal data within EMPLOYEES, and legal relationships between data in different columns of the two tables, we first define data rules based on the three requirements and apply them to the tables. a. The first requirement is about the pattern an attribute is allowed to conform to. We create a Domain Pattern List data rule, EMPLOYEE_PATTERN_RULE. The pattern is defined in Oracle Database regular expression syntax as ^([0-9]{3})$. Apply the data rule EMPLOYEE_PATTERN_RULE to the table EMPLOYEES.
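
    A minimal sketch in plain Python of what these three rules check (this is only an illustration, not how OWB evaluates data rules; the dict-based rows, and the reading of rule c as "every DEPARTMENTS.MANAGER_ID must match an existing EMPLOYEES.EMPLOYEE_ID", are assumptions):

        import re

        EMPLOYEE_ID_PATTERN = re.compile(r"^[0-9]{3}$")  # rule a: three-digit IDs
        VALID_JOB_IDS = {"IT_PROG", "SA_REP", "SH_CLERK", "PU_CLERK", "ST_CLERK"}  # rule b

        def audit(employees, departments):
            """Return (record, rule) pairs for every record violating a rule."""
            errors = []
            for emp in employees:
                if not EMPLOYEE_ID_PATTERN.match(str(emp["EMPLOYEE_ID"])):
                    errors.append((emp, "EMPLOYEE_PATTERN_RULE"))
                if emp["JOB_ID"] not in VALID_JOB_IDS:
                    errors.append((emp, "JOB_DOMAIN_RULE"))
            employee_ids = {emp["EMPLOYEE_ID"] for emp in employees}
            for dept in departments:
                # rule c: a department's manager must be a known employee
                if dept["MANAGER_ID"] is not None and dept["MANAGER_ID"] not in employee_ids:
                    errors.append((dept, "MANAGER_RELATION_RULE"))
            return errors

    Like the real data auditor, the useful output is twofold: the non-complying records themselves (OWB writes these to error tables) and the error count as a statistical metric of compliance.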

    Read the article

  • Error installing RVM

    - by Dbugger
    I am following this guide, but this is the output I receive. What is the problem?
    dbugger@mercury:~$ \curl -sSL https://get.rvm.io | bash -s stable --rails
    Downloading https://github.com/wayneeseguin/rvm/archive/stable.tar.gz
    Upgrading the RVM installation in /home/dbugger/.rvm/
    RVM PATH line found in /home/dbugger/.profile /home/dbugger/.bashrc /home/dbugger/.zshrc.
    RVM sourcing line found in /home/dbugger/.bash_profile /home/dbugger/.zlogin.
    Upgrade of RVM in /home/dbugger/.rvm/ is complete.
    # Enrique,
    #
    # Thank you for using RVM!
    # We sincerely hope that RVM helps to make your life easier and more enjoyable!!!
    #
    # ~Wayne, Michal & team.
    In case of problems: http://rvm.io/help and https://twitter.com/rvm_io
    Upgrade Notes:
    * No new notes to display.
    rvm 1.25.27 (stable) by Wayne E. Seguin <[email protected]>, Michal Papis <[email protected]> [https://rvm.io/]
    Searching for binary rubies, this might take some time.
    No binary rubies available for: ubuntu/14.04/x86_64/ruby-2.1.2.
    Continuing with compilation. Please read 'rvm help mount' to get more information on binary rubies.
    Checking requirements for ubuntu.
    Installing requirements for ubuntu.
    Updating system..........
    Installing required packages: gawk, libreadline6-dev, libssl-dev, libyaml-dev, libsqlite3-dev, sqlite3....
    Error running 'requirements_debian_libs_install gawk libreadline6-dev libssl-dev libyaml-dev libsqlite3-dev sqlite3',
    showing last 15 lines of /home/dbugger/.rvm/log/1401804140_ruby-2.1.2/package_install_gawk_libreadline6-dev_libssl-dev_libyaml-dev_libsqlite3-dev_sqlite3.log
    ++ /scripts/functions/utility : __rvm_try_sudo() 405 > sudo -p '%p password required for '\''apt-get --no-install-recommends --yes install gawk libreadline6-dev libssl-dev libyaml-dev libsqlite3-dev sqlite3'\'': ' apt-get --no-install-recommends --yes install gawk libreadline6-dev libssl-dev libyaml-dev libsqlite3-dev sqlite3
    Reading package lists...
    Building dependency tree...
    Reading state information...
    Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming.
    The following information may help to resolve the situation:
    The following packages have unmet dependencies:
      libssl-dev : Depends: libssl1.0.0 (= 1.0.1f-1ubuntu2) but 1.0.1f-1ubuntu2.1 is to be installed
    E: Unable to correct problems, you have held broken packages.
    ++ /scripts/functions/utility : __rvm_try_sudo() 405 > return 100
    ++ /scripts/functions/requirements/ubuntu : requirements_debian_libs_install() 36 > return 100
    Requirements installation failed with status: 100.

    Read the article

  • Simple CMS for clients?

    - by jae
    I'm looking for something fairly simple for clients to use and very easy for me to skin (somewhat complicated themes); something like Perch or CouchCMS, but it has to be open source to satisfy some license requirements. I can't really seem to find any CMS like this (WordPress, Drupal, etc. are way too heavy for this use). Any help is really appreciated. There aren't many requirements; almost anything will work - each site gets its own jailed instance and such, just PHP/MySQL.

    Read the article

  • Will the SRS be sufficient for the programmers to do their work, without the additional overhead of an FS?

    - by SixSickSix
    We always produce two documents for the coders, aka programmers: the SRS (Software Requirements Specification) and the FS (Functional Specification). As far as I can tell, the SRS contains both functional and non-functional requirements, whereas the FS deals only with the functional requirements. To cut it short: will the SRS be sufficient for the programmers to do their work, so that we no longer need to produce an FS?

    Read the article

  • How to charge in an agile iterative approach?

    - by user1620696
    I have a question about billing when working with an agile iterative approach. If I understand correctly, in agile we have a usable product at the end of each iteration, so some of the requirements are met and part of the software is already working. How do we charge for our work in this methodology? Do we charge per iteration, i.e. per major requirement met, or do we bill the customer only when the software is finished and receive payment for everything at once?

    Read the article

  • Free Hosting With these specs [closed]

    - by blunders
    Possible Duplicate: How to find web hosting that meets my requirements?
    I'm looking to make a quick site and have the following requirements:
    - It's free
    - No ads, frames, etc.
    - Support for directory/site passwords
    - Allows web-based or FTP file transfer
    - Allows 100 MB to 1 GB of storage
    - Custom short URL or sub-domain
    I realize this might be very narrow, but I figured I'd ask just in case there is a good solution.

    Read the article

  • Linking application build number to svn revision

    - by ahenderson
    I am looking for a strategy to version an application that meets the following requirements. Given an exe with version number major.minor.build-number: 1) I want to map the version to the svn source revision that produced the exe. 2) With the source and the exe I should be able to attach and debug in VS2010 with no issues. 3) Once I check out the source code for the exe, I should be able to build the exe again with the same version number without having to make any changes to a file.
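
    One common way to satisfy requirements 1 and 3 together is a pre-build script that stamps the working-copy revision into a generated header, so the build number is the svn revision by construction. A hedged sketch, not from the question: it assumes Subversion's svnversion tool is on the PATH, and the file name and macro are invented.

        import subprocess

        def working_copy_revision():
            # svnversion reports the working-copy revision; -n omits the newline.
            # An "M" in the output means local modifications, so the build would
            # not be reproducible from any single committed revision.
            out = subprocess.check_output(["svnversion", "-n"], text=True)
            if "M" in out:
                raise RuntimeError("uncommitted changes; commit before building")
            # Mixed-revision checkouts print a range like "4123:4168"; take the top.
            return int(out.rstrip("SP").split(":")[-1])

        def write_version_header(major, minor, path="generated_version.h"):
            build = working_copy_revision()
            with open(path, "w") as f:
                f.write(f'#define APP_VERSION "{major}.{minor}.{build}"\n')

        if __name__ == "__main__":
            write_version_header(1, 0)

    Checking out a revision and re-running the build then reproduces the same version number with no file edits, and the build number printed by the exe maps straight back to the revision to check out for debugging.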

    Read the article

  • How to find a good photo gallery for my website?

    - by Roflcoptr
    For my website I'm searching for a really simple gallery module that looks like the one used by Dropbox, but I'd like two additional features: allow visitors to make comments, and display the number of hits for a photo. I have googled a lot for such galleries but couldn't find any that really matched my requirements. Could someone recommend a simple, good-looking gallery that fulfills these requirements?

    Read the article

  • Is it possible to get a free web host for my registered domain? [closed]

    - by Ahmed Alsayadi
    Possible Duplicate: How to find web hosting that meets my requirements? I have searched online through many free web hosting services like NetFirms; most of them ask you to register for their sub-domain or to buy a new domain, but I already have one, bought through GoDaddy. Now I hope to find a free web host for my website (the site's size is less than 20 MB). Any idea which web host can meet such requirements?

    Read the article

  • Windows Azure from a Data Perspective

    Before creating a data application in Windows Azure, it is important to make choices based on the type of data you have, as well as the security and business requirements. There is a wide range of options, because Windows Azure has intrinsic data storage, completely separate from SQL Azure, that is highly available and replicated. Your data requirements are likely to dictate the type of data storage options you choose.

    Read the article

  • Redesigning an Information System - Part 1

    - by dbradley
    Over the next few weeks or months I'd like to run a small series of articles sharing my experiences from the largest of the projects I've worked on, and explore some of the real-world problems I've come across and how we went about solving them. I'm afraid I can't give too many specifics on the project right now as it's not yet complete, so you'll have to forgive me for being a little abstract in places! To start with I'm going to run through a little of the background of the problem and the motivations to re-design from scratch. Then I'll work through the approaches taken to understanding the requirements, designing, implementing, testing and migrating to the new system.
    Motivations for Re-designing a Large Information System
    The system is one that's been in place for a number of years and was originally designed to do a significantly different job from what it's now being used for. This is mainly due to the product maturing as well as client requirements changing. As with most information systems, this one can be defined in four main areas of functionality:
    - Input - adding information to the system
    - Storage - persisting information in an efficient, searchable structure
    - Output - delivering the information to the client
    - Control - management of the process
    There can be a variety of reasons to re-design an existing system; a few of our own turned out to be factors such as:
    - Overall system reliability
    - System response time
    - Failure isolation and recovery
    - Maintainability of code and information
    - General extensibility to solve future problems
    - Separation of business and product concerns
    - New or improved features
    The factor that started the thought process was the desire to improve the way in which information was entered into the system. However, this alone was not the entire reason for deciding to redesign.
    Business Drivers
    Most software engineers would prefer to do a project from scratch themselves. It generally means you don't have to deal with problems created by predecessors and you can create your own absolutely perfect solution. However, the reality of working within a business is that the bottom line comes down to return on investment. For a medium-sized business such as mine there must be real value delivered within a reasonable timeframe for any work to be started. As a result, any long-term project will generally take a lot of effort and consideration to be approved by those in charge, and therefore it might be better to break the project down into more manageable chunks which allow more frequent deliverables and value within a shorter timeframe. As the only thing of concern was the methods for inputting information, this is where we started with requirements gathering and design. However, knowing that there might be more to the problem, and not limiting your design decisions before the requirements are known, is key to finding the best solutions.

    Read the article

  • Take Care When Choosing an SEO Company

    There are many different kinds of SEO companies, and since every business has individual requirements, you will need to find a company that works best for you and can accommodate those requirements. Unfortunately, many people discover the hard way that they have been hoodwinked by SEO firms who do not deliver what they promise. This often leaves them mistrustful of any company offering SEO services.

    Read the article

  • Strategy to isolate multiple nginx SSL apps with a single domain via sub-URIs?

    - by icpu
    Warning: so far I have only learnt how to use nginx to serve apps with their own domain and server block, but I think it's time to dive a little deeper. To mitigate the need for multiple SSL certificates or expensive wildcard certificates, I would like to serve multiple apps (e.g. Rails apps, PHP apps, Node.js apps) from one nginx server_name, e.g.:
    - rooturl/railsapp
    - rooturl/nodejsapp
    - rooturl/phpshop
    - rooturl/phpblog
    I am unsure of the ideal strategy. Some examples I have seen and/or thought about:
    - Multiple location rules. This seems to cause conflicts between the individual apps' config requirements, e.g. differing rewrite and access requirements.
    - Isolating apps by backend internal port. Is this possible? Each port routing to its own config, so config is isolated and can be bespoke to the app's requirements.
    - Reverse proxy. I am a little ignorant of how this works; is this what I need to research, and is it actually the same as the option above? Help online always seems to proxy to another server, e.g. Apache.
    What is an effective way to isolate config requirements for apps served from a single domain via sub-URIs?
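
    A minimal sketch of the reverse-proxy approach (which essentially combines the second and third options above): one server block owns the single certificate, and each sub-URI proxies to an app listening on its own internal port, so each app's configuration stays isolated with its own backend. The domain, ports, and certificate paths here are illustrative assumptions:

        server {
            listen 443 ssl;
            server_name example.com;
            ssl_certificate     /etc/nginx/ssl/example.com.crt;
            ssl_certificate_key /etc/nginx/ssl/example.com.key;

            # nginx only terminates SSL and routes by sub-URI; rewrite and
            # access rules live with each backend, not in competing locations.
            location /railsapp/ {
                proxy_pass http://127.0.0.1:3000/;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-Proto $scheme;
            }
            location /nodejsapp/ {
                proxy_pass http://127.0.0.1:8080/;
                proxy_set_header Host $host;
            }
        }

    The trailing slash on proxy_pass strips the sub-URI prefix before the request reaches the backend, so each app can be configured as if it were served from the root; the apps still need to generate their links relative to their sub-URI.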

    Read the article

  • Business Case for investing time developing Stubs and BizUnit Tests

    - by charlie.mott
    I was recently in a position where I had to justify why effort should be spent developing Stubbed Integration Tests for BizTalk solutions. These tests are usually developed using the BizUnit framework. I assumed that most seasoned BizTalk developers would consider this best practice. Even though Microsoft suggests the use of BizUnit on MSDN, I've not found a single site listing the justifications for investing time writing stubs and BizUnit tests.
    Stubs
    Stubs should be developed to isolate your development team from external dependencies. This is described by Michael Stephenson here. Failing to do this can result in the following problems:
    - In contract-first scenarios, the external system interface will have been defined, but the interface may not have been set up, or even developed yet, for the BizTalk developers to work with.
    - By the time you open the target location to see the data BizTalk has sent, it may have been swept away.
    - If you are relying on the UI of the target system to see the data BizTalk has sent, what do you do if it fails to arrive? It may take time for the data to be processed, or it may be scheduled to be processed later.
    - Learning how to use the source and target systems, and investigating where things go wrong in those systems, will slow down the BizTalk development effort.
    - By the time the data is visible in a UI it may have undergone further transformations.
    - In larger development teams working together, do you all use the same source and target instances? How do you know which data was created by whose tests? How do you know which event log error messages are whose? Another developer may have "cleaned up" your data.
    - It is harder to write BizUnit tests that clean up the data and logs after each test run.
    - What if your B2B partners' source or target systems cannot support the sort of testing you want to do? They may not even have a development or test instance that you can work with. Their single test instance may be used by the SIT/UAT teams.
    - There may be licensing costs to setting up instances of the external system.
    The stubs I like to use are generic stubs that can accept or return any message type; usually I need to create one per protocol. They should be driven by BizUnit steps to validate the data received and select a response message (or error response). Once built, they can be re-used for many integration tests and from project to project. I'm not saying that developers should never test against a real instance. Every so often, you still need to connect to real developer or test instances of the source and target endpoints/services. The interface developers may ask you to send them some data to see if everything still works, or you might want some messages sent to BizTalk to get confidence that everything still works beyond BizTalk.
    Tests
    Automated "Stubbed Integration Tests" are usually built using the BizUnit framework. These facilitate testing of the entire integration process from source stub to target stub, ensuring that all of the BizTalk components are configured together correctly to meet the requirements. More fine-grained unit testing of individual BizTalk components is still encouraged, but BizUnit provides by far the easiest way to test some component types (e.g. orchestrations). Using BizUnit with the Behaviour Driven Development approach described by Mike Stephenson delivers the following benefits (source: http://biztalkbddsample.codeplex.com – Video 1):
    - Requirements can be easily defined using Given/When/Then
    - Requirements are close to the code, so they are easier to manage as features and scenarios
    - Requirements are defined in domain language
    - The feature files can be used as part of the documentation
    - The documentation is accurate to the build of the code and can be published with a release
    - The scenarios are effective at documenting behaviour without being excessive
    - The scenarios are maintained with the code
    - There's an abstraction between the intention and implementation of tests, making them easier to understand
    - The requirements drive the testing
    These same tests can also be used to drive load testing, as described here. (An illustrative feature file in this style appears at the end of this section.)
    If you don't do this...
    If you don't follow the above "Stubbed Integration Tests" approach, the developer will need to trigger the tests manually. This has the following risks:
    - Developers are unlikely to check all the scenarios, and all the expected conditions, each time.
    - After the developer leaves, these manual test steps may be lost. What test scenarios are there? What test messages did they use for each scenario?
    - There is no mechanism to prove adequate test coverage.
    A test team may attempt to automate integration test scenarios in a test environment by triggering tests from a source system UI. If this is a replacement for BizUnit tests, then it carries the following risks:
    - It moves the tests downstream, so problems will be found later in the process.
    - Testers may not check all the expected conditions within the BizTalk infrastructure, such as event logs, suspended messages, etc.
    - These automated tests may also get in the way of manual tests run in those environments.
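
    As an illustration of the Given/When/Then style referred to above, a hypothetical feature file might look like this (the feature, scenario, and message names are invented, not from the article):

        Feature: Purchase order routing
          Scenario: A valid purchase order is acknowledged
            Given the target system stub is ready to receive messages
            When a valid purchase order message is submitted to BizTalk
            Then the target stub receives exactly one purchase order message
            And an acknowledgement is returned to the source stub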

    Read the article

  • Coherence Data Guarantees for Data Reads - Basic Terminology

    - by jpurdy
    When integrating Coherence into applications, each application has its own set of requirements with respect to data integrity guarantees. Developers often describe these requirements using expressions like "avoiding dirty reads" or "making sure that updates are transactional", but we often find that even in a small group of people, there may be a wide range of opinions as to what these terms mean. This may simply be due to a lack of familiarity, but given that Coherence sits at an intersection of several (mostly) unrelated fields, it may be a matter of conflicting vocabularies (e.g. "consistency" is similar but different in transaction processing versus multi-threaded programming). Since almost all data read consistency issues are related to the concept of concurrency, it is helpful to start with a definition of that, or rather of what it means for two operations to be concurrent. Rather than implying that they occur "at the same time", concurrency is a slightly weaker statement: it simply means that it can't be proven that one event precedes (or follows) the other. As an example, in a Coherence application, if two client members mutate two different cache entries sitting on two different cache servers at roughly the same time, it is likely that one update will precede the other by a significant amount of time (say 0.1 ms). However, since there is no guarantee that all four members have their clocks perfectly synchronized, and there is no way to precisely measure the time it takes to send a given message between any two members (that have differing clocks), we consider these to be concurrent operations since we cannot (easily) prove otherwise. So this leads to a question that we hear quite frequently: "Are the contents of the near cache always synchronized with the underlying distributed cache?". It's easy to see that if an update on a cache server results in a message being sent to each near cache, and then that near cache being updated, there is a window during which the contents are different. However, this is irrelevant, since even if the application reads directly from the distributed cache, another thread may update the cache before the read is returned to the application. Even if no other member modifies a cache entry prior to the local near cache entry being updated (and subsequently read), the purpose of reading a cache entry is to do something with the result, usually either displaying it for consumption by a human, or updating the entry based on its current state. In the former case, it's clear that if the data is updated faster than a human can perceive, then there is no problem (and in many cases this can be relaxed even further). In the latter case, the application must assume that the value might potentially be updated before it has a chance to update it. This is almost always the case with read-only caches, and the solution is the traditional optimistic transaction pattern, which requires the application to explicitly state what assumptions it made about the old value of the cache entry. If the application doesn't want to bother stating those assumptions, it is free to lock the cache entry prior to reading it, ensuring that no other threads will mutate the entry: a pessimistic approach. The optimistic approach relies on what is sometimes called a "fuzzy read". In other words, the application assumes that the read should be correct, but it also acknowledges that it might not be.
    (I use the qualifier "sometimes" because in some writings, "fuzzy read" indicates the situation where the application actually sees an original value and then later sees an updated value within the same transaction; however, both definitions are roughly equivalent from an application design perspective.) If the read is not correct, it is called a "stale read". Going back to the definition of concurrency, it may seem difficult to precisely define a stale read, but the practical way of detecting one is that it will cause the encompassing transaction to roll back if it tries to update that value. The pessimistic approach relies on a "coherent read": a guarantee that the value returned is not only the same as the primary copy of that value, but also that it will remain that way. In most cases this can be used interchangeably with "repeatable read" (though that term has additional implications when used in the context of a database system). In none of the cases above is it possible for the application to perform a "dirty read". A dirty read occurs when the application reads a piece of data that was never committed. In practice the only way this can occur is with multi-phase updates such as transactions, where a value may be temporarily updated but then withdrawn when a transaction is rolled back. If another thread sees that value prior to the rollback, it is a dirty read. If an application uses optimistic transactions, dirty reads will merely result in a lack of forward progress (this is actually one of the main risks of dirty reads: they can be chained and potentially cause cascading rollbacks). The concepts of dirty reads, fuzzy reads, stale reads and coherent reads are able to describe the vast majority of requirements that we see in the field. However, the important thing is to define the terms used to state requirements. A quick web search for each of the terms in this article will show multiple meanings, so I've selected what are generally the most common variations, but it never hurts to state each definition explicitly if it is critical to the success of a project (many applications have sufficiently loose requirements that precise terminology can be avoided).
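
    As a sketch of the optimistic pattern described above, in plain Python rather than the Coherence API (the toy cache and its names are illustrative):

        import threading

        class ToyCache:
            """Map supporting the optimistic 'state your assumption' update."""
            def __init__(self):
                self._lock = threading.Lock()
                self._data = {}

            def get(self, key):
                with self._lock:
                    return self._data.get(key)

            def replace(self, key, assumed_old, new):
                # The caller states what it assumed the old value was. If another
                # thread changed the entry in the meantime, the read was stale:
                # refuse the update so the caller can re-read and retry.
                with self._lock:
                    if self._data.get(key) != assumed_old:
                        return False
                    self._data[key] = new
                    return True

        cache = ToyCache()
        cache.replace("counter", None, 0)   # seed the entry
        while True:                         # optimistic read-modify-write loop
            old = cache.get("counter")
            if cache.replace("counter", old, old + 1):
                break                       # assumption held; update applied

    The pessimistic alternative is to hold a lock across the read and the update, trading concurrency for the guarantee of a coherent read.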

    Read the article

  • Will Dolphin emulator run (smoothly) on this machine?

    - by Mark
    Product specs: Intel® NM10 Express chipset with Intel® Atom® D525 (dual-core, 1.8 GHz). I think the most I can put in there for RAM is 4 GB. (They don't make 8 GB SODIMMs, do they?) The Dolphin site is pretty vague about requirements:
    - Windows XP or higher, or Linux, or MacOSX (Intel)
    - Fast CPU with SSE2
    - GPU with Pixel Shader 2.0 or greater
    - Some integrated graphics chips work, but it depends on the model (and only with DirectX 9)
    I'm trying to make a lightweight, quiet little machine for streaming video and playing emulators, and I'm trying to figure out the minimum requirements I will need to do what I want. I know I'm not supposed to ask for product recommendations here, so if you could just advise on the minimum requirements in terms of CPU, graphics card, and RAM, that'd be helpful.

    Read the article

  • Distributed, Parallel, Fault-tolerant File System

    - by Eddified
    There are so many choices that it's hard to know where to start. My requirements are these:
    - Runs on Linux
    - Most of the files will be between 5 and 9 MB in size. There will also be a significant number of small-ish JPGs (100px x 100px).
    - All of the files need to be available over HTTP.
    - Redundancy: ideally it would provide space efficiency similar to RAID 5's 75% (in RAID 5 this is calculated thus: with 4 identical disks, 25% of the space is used for parity = 75% efficient)
    - Must support several petabytes of data
    - Scalable
    - Runs on commodity hardware
    In addition, I look for these qualities, though they are not "requirements":
    - A stable, mature file system
    - Lots of momentum and support, etc.
    I would like some input as to which file system works best for the given requirements. Some people at my organization are leaning towards MogileFS, but I'm not convinced of the stability and momentum of that project. GlusterFS and Lustre, based on my limited research, appear to be better supported... Thoughts?

    Read the article

  • Cannot add Windows 7 client to SBS 2008

    - by Sandokan
    I have just installed SBS 2008 R2 Standard on VMware Workstation 9, along with Windows 7 Pro N. Both are activated and running fine. I have followed the steps to configure SBS 2008 and am now at the point of adding a computer to the domain. Here is where the problem begins. I have gone through the steps using the web interface, and on the client I downloaded Launcher.exe. When I run it I get the error "Check computer requirements - Failed" (translated from Swedish): "This computer doesn't meet the requirements for connecting to the network." "The computer doesn't meet the maximum requirements for the operating system with regards to connecting to the network." The provided link for more information only leads to a general support page and doesn't cover this specific error. I have also checked the time settings and they are correct. Any clue as to what this problem could be?

    Read the article

  • VMWare hypervisor with only 1 network card?

    - by Rafiq Maniar
    The VMware hypervisor minimum requirements state that the minimum network requirement is one NIC, plus one for the management interface (source: http://www.vmware.com/products/datacenter-virtualization/vsphere-hypervisor/requirements.html). It used to be possible to use only 1 NIC. Is anybody using the new versions of VMware in this configuration? I ask because my colo provider will only provide me with 1 uplink (my server does have 2 NICs). I need to be able to run the VMs and also have remote management using only 1 NIC. Is this possible?

    Read the article
