Search Results

Search found 13930 results on 558 pages for 'dev center'.

Page 213/558 | < Previous Page | 209 210 211 212 213 214 215 216 217 218 219 220  | Next Page >

  • Communications: BNSL Unifies The Customer Experience

    - by Michael Seback
    Hear how BNSL achieved a unified customer experience across channels. BNSL is India's number one telecommunications operator, with 70M mobile customers and 20M wired customers. They consolidated 330 districts, each with its own customer experience, into a single experience across the contact center, web, email, and SMS. Click here to listen to their journey. Read more about Oracle Communications.

    Read the article

  • No gnome shell after install on ubuntu 12.04 inside VMWare workstation

    - by Wouter
    I created a virtual machine for Ubuntu 12.04 and then installed GNOME 3 from the Software Center, following "How do I install and use the latest version of GNOME Shell?". However, after installing I don't get the new GNOME Shell, even though gnome-shell --version in the terminal tells me I'm running the latest version, 3.4.1. My interface now looks like this (screenshot in the original post). Does anybody have an idea why I do not get the latest GNOME Shell? @Histo: yes, but why don't I have this interface?
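
    GNOME Shell 3.4 silently falls back to the classic-looking panel when it cannot get 3D acceleration, which is the usual situation in a fresh VMware guest without VMware Tools / 3D enabled. A hedged diagnostic sketch (glxinfo comes from the mesa-utils package; this checks the symptom rather than guaranteeing a fix):

        # Install the OpenGL diagnostic tool if it is not present
        sudo apt-get install mesa-utils
        # If this reports "Software Rasterizer" or "llvmpipe", the guest has no usable 3D acceleration
        glxinfo | grep -i "opengl renderer"
        # See which desktop is actually running: gnome-shell vs. the gnome-panel fallback
        ps -e | grep -E "gnome-shell|gnome-panel"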

    Read the article

  • Render an image with layers for shadows/reflections, object and ground in 3D Studio Max?

    - by Bernd Plontsch
    I have a scene with a simple object standing on the ground in the center. This object has shadows and reflections on the ground. How can I render an image containing 3 separate layers: the object, the ground, and the reflection/shadow on the ground? Which format do I use for this? (It should include all 3 layers, and I should be able to enable/disable them in Photoshop.) How do I define or prepare those layers for rendering as image layers?

    Read the article

  • Three Master Data Management Deployment Tips

    - by david.butler(at)oracle.com
    MDM is all about data quality and data governance. We now know that improved data quality raises all operational and analytical boats. But it's not just about deploying data quality tools. It's about deploying data quality tools within and across the IT landscape - from a thousand points of data entry to a single version of the truth. Here are three tips for deploying MDM across your applications and enterprise.

    #1: Identify a tactical, high-value business problem where MDM can materially help.
    - Support a customer acquisition and retention program with a 'customer' master data solution.
    - Accelerate new products and services to market with a 'product' master data solution.
    - Reduce supplier exceptions or support spend control initiatives with a 'supplier' master data solution.
    - Support new store (branch, campus, restaurant, hospital, office, well head) location analysis with a 'site' master data solution.
    - Fix long-standing Chart of Accounts and Cost Center problems with a 'financial' master data solution.
    - Support M&A activity, application upgrades, an SOA initiative, a cloud computing program, or a new business intelligence deployment by implementing a mix of master data solutions.

    #2: Incrementally expand to a full information architecture. Quite often, the measurable return on investment from tactical MDM initiatives will fund future deployments. Over time, the MDM solution expands into its full architecture to cover the entire IT landscape. Operations and analytics are united, IT flexibility is restored, and sustainable competitive advantage is achieved.

    #3: Bring business into every MDM deployment. To be successful, MDM must work hand in hand with data governance. In fact, Oracle MDM incorporates data governance tools for business users. IT can ensure data quality, but only after the business side has defined what quality means. The business establishes the rules for governing the master data, and then IT enforces the rules via the MDM applications. Without this business/IT collaboration, MDM initiatives seldom achieve their full potential.

    It is not very often that a technology comes along that can measurably assist organizations across a wide variety of top IT initiatives. Reducing costs, increasing flexibility, getting more out of existing assets, and aligning business and IT are not easy tasks for any CIO. But with MDM, success is achievable. IT can regain its place as a center for innovation. For more information on this topic, take a look at my article Master Data Management Deployment Tips in the Opinion section of Oracle's Profit Online magazine.

    Read the article

  • Broken package after update: linux-headers, error brokencount >0

    - by escozul
    Ubuntu 12.04. After an update, I get a red warning icon in the system tray, warning about an error: broken count > 0. Opening Update Manager, I see that the broken package is linux-headers-3.2.0-33-generic-pae (new install). Specifically, I have Ubuntu on an AspireOne with 8 GB of internal storage. I tried apt-get clean as suggested in another question on this site, and tried reinstalling the package in Synaptic. I have tried to reboot, but to no avail. I have also tried apt-get install --fix-broken and I get the following:

        sudo apt-get install --fix-broken
        [sudo] password for elina:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following extra packages will be installed: linux-headers-3.2.0-33-generic-pae
        The following NEW packages will be installed: linux-headers-3.2.0-33-generic-pae
        0 upgraded, 1 newly installed, 0 to remove and 38 not upgraded. 1 not fully installed or removed.
        Need to get 0 B/977 kB of archives. After this operation 11,3 MB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        (Reading database ... 437051 files and directories currently installed.)
        Unpacking linux-headers-3.2.0-33-generic-pae (from .../linux-headers-3.2.0-33-generic-pae_3.2.0-33.52_i386.deb) ...
        dpkg: error processing /var/cache/apt/archives/linux-headers-3.2.0-33-generic-pae_3.2.0-33.52_i386.deb (--unpack):
        unable to create `/usr/src/linux-headers-3.2.0-33-generic-pae/include/config/usb/gspca/sonixb.h.dpkg-new' (while processing `./usr/src/linux-headers-3.2.0-33-generic-pae/include/config/usb/gspca/sonixb.h'): No space left on device
        No apport report written because the error message indicates a disk full error
        dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
        Errors were encountered while processing: /var/cache/apt/archives/linux-headers-3.2.0-33-generic-pae_3.2.0-33.52_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I've tried all the suggestions I could find:

        sudo apt-get clean
        sudo apt-get autoclean
        sudo apt-get autoremove
        sudo apt-get update
        sudo apt-get upgrade
        sudo apt-get -f install
        sudo apt-get install --fix-broken

    Then I saw that the error mentions free space, so I ran df -h and the result was:

        Filesystem   Size  Used  Avail  Use%  Mounted on
        /dev/sda1    7,0G  5,5G  1,1G   84%   /
        udev         235M  4,0K  235M   1%    /dev
        tmpfs        97M   816K  96M    1%    /run
        none         5,0M  0     5,0M   0%    /run/lock
        none         242M  352K  242M   1%    /run/shm

    I see that on my root filesystem I have 1.1 GB free, and the broken package linux-headers-3.2.0-33-generic-pae_3.2.0-33.52_i386.deb only takes up 11.3 MB on my hard drive. I'm so lost. I really hope there is something I'm missing here. I don't want to go about reformatting this bucket; it's really not worth the time. Any help fixing this would be great.
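
    Given that dpkg reports "No space left on device" while df -h still shows 1.1 GB free, a plausible cause is that the root filesystem has run out of inodes (kernel header packages unpack tens of thousands of tiny files), or that older header packages are eating the remaining space. A hedged cleanup sketch; the 3.2.0-29 version below is only an example of an older kernel to purge, so list what is actually installed first:

        # Check inode usage; 100% IUse% means no new files can be created even with free bytes
        df -i /
        # List installed kernel and header packages to spot old versions
        dpkg -l 'linux-headers-*' 'linux-image-*' | grep ^ii
        # Purge an older version you no longer boot (example version -- adjust to your list)
        sudo apt-get purge linux-headers-3.2.0-29-generic-pae linux-headers-3.2.0-29 linux-image-3.2.0-29-generic-pae
        # Then retry the broken install
        sudo apt-get install --fix-broken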

    Read the article

  • How to enable multiple displays with Catalyst drivers in Ubuntu 13.04?

    - by Lokitez
    First, I installed Ubuntu 13.04. I have an ATI Radeon HD 7850. The open source drivers allowed multiple displays, but were horrendously laggy (even opening a browser window took several seconds). When I installed the Catalyst proprietary drivers, performance was perfect. The only problem is that the dual-monitor option in the Catalyst Control Center is grayed out, and trying to enable it from the Ubuntu display settings results in a resolution error. Is there any way around this?
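
    With the fglrx/Catalyst driver, the multi-display controls often stay grayed out until the driver has written its own xorg.conf. A hedged sketch of the usual first steps (aticonfig --initial is the standard Catalyst command; the DFP1/DFP2 output names below are only examples, check xrandr for yours):

        # Keep a copy of the current X config, then let Catalyst generate one it manages
        sudo cp /etc/X11/xorg.conf /etc/X11/xorg.conf.backup 2>/dev/null
        sudo aticonfig --initial -f
        # After restarting X, list the outputs the driver exposes
        xrandr --query
        # Example: extend the desktop onto a second monitor to the right of the first
        xrandr --output DFP2 --auto --right-of DFP1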

    Read the article

  • You're invited: Oracle Solaris Forum, June 19th, Petah Tikva

    - by Frederic Pariente
    The local ISV Engineering team will be attending and speaking at the Oracle and ilOUG Solaris Forum next week in Israel. Come meet us there! This free event requires registration, thanks.

    YOU'RE INVITED: Oracle Solaris Forum
    Date: Tuesday, June 19th, 2012
    Time: 14:00
    Location: Dan Academic Center, Petach Tikva, Israel
    Agenda:
    - Enterprise Manager Ops Center and Oracle Exalogic Elastic Cloud
    - Solaris 11
    - Networking
    - Customer Case Study: BMC
    - Open Systems Curriculum
    See you there!

    Read the article

  • Oracle OpenWorld 2012 - What's New

    - by Cinzia Mascanzoni
    Oracle OpenWorld 2012 is now over. Here is a summary of the major announcements on hardware and technology:
    - Oracle Unveils Expanded Oracle Cloud Services Portfolio
    - Oracle Unveils New Partner Cloud Programs
    - Oracle Announces Latest Release of Oracle Exalogic Elastic Cloud
    - Oracle Announces Oracle Exadata X3 Database In-Memory Machine
    - Oracle Outlines Opportunity to Transform Industries from Device to Data Center with Embedded Java
    - Oracle Announces Oracle Solaris 11.1
    - Latest Release of Oracle Exalytics In-Memory Machine Software Enables Customers to View and Analyze Data at the Speed of Business
    - New Release of Oracle Business Intelligence Enables Users to Quickly Access and Analyze Key Business Information, Anytime, Anywhere

    Read the article

  • What's the best way to install the GD graphics library for Nagios?

    - by user1196
    While trying to install Nagios 3.2.3, I ran their ./configure script and got these errors:

        checking for main in -liconv... no
        checking for gdImagePng in -lgd (order 1)... no
        checking for gdImagePng in -lgd (order 2)... no
        checking for gdImagePng in -lgd (order 3)... no
        checking for gdImagePng in -lgd (order 4)... no
        *** GD, PNG, and/or JPEG libraries could not be located... *********
        Boutell's GD library is required to compile the statusmap, trends and histogram CGIs. Get it from http://www.boutell.com/gd/, compile it, and use the --with-gd-lib and --with-gd-inc arguments to specify the locations of the GD library and include files.
        NOTE: In addition to the gd-devel library, you'll also need to make sure you have the png-devel and jpeg-devel libraries installed on your system.
        NOTE: After you install the necessary libraries on your system:
        1. Make sure /etc/ld.so.conf has an entry for the directory in which the GD, PNG, and JPEG libraries are installed.
        2. Run 'ldconfig' to update the run-time linker options.
        3. Run 'make clean' in the Nagios distribution to clean out any old references to your previous compile.
        4. Rerun the configure script.
        NOTE: If you can't get the configure script to recognize the GD libs on your system, get over it and move on to other things. The CGIs that use the GD libs are just a small part of the entire Nagios package. Get everything else working first and then revisit the problem. Make sure to check the nagios-users mailing list archives for possible solutions to GD library problems when you resume your troubleshooting.
        ********************************************************************

    Which package do I want? libgd2-xpm-dev? libgd2-noxpm-dev? php5-gd? I'm not looking to do any image processing myself - I just want to get Nagios working.
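
    On an apt-based system, either libgd2-xpm-dev or libgd2-noxpm-dev satisfies the GD requirement (they differ only in XPM support; php5-gd is just the PHP binding and will not help ./configure). A hedged sketch of the usual fix, with the include/lib paths given as typical examples rather than guaranteed locations:

        # GD plus the PNG and JPEG development headers the configure script looks for
        sudo apt-get install libgd2-xpm-dev libpng12-dev libjpeg-dev
        # Clear the previous configure results, then point configure at the GD files
        make clean
        ./configure --with-gd-lib=/usr/lib --with-gd-inc=/usr/include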

    Read the article

  • Important Features Of The Brother MFC 4300 Printer

    The Brother MFC 4300 printer is a 4-in-1 media center that can handle all an office will ever need. This unit was designed with the small business in mind in the respect of one piece of equipment tha... [Author: Ben Pate - Computers and Internet - March 25, 2010]

    Read the article

  • Getting paid through Ltd or Umbrella company?

    - by Guy
    I am working for a company as a web dev consultant at the moment, and they asked me whether I want to get paid through an umbrella company or through my Ltd. Which is better for me, and why? @David Thornley made a good point in the comments: don't forget that we are talking about web development here. I am not sure how it is in the UK, but in the country I am from, you get taxed differently depending on the kind of work you do.

    Read the article

  • Oracle Enterprise Manager 12c webcasts

    - by lsarecz
    Unfortunately, I have somewhat neglected blogging over the past few weeks. I intend to restart, just like my colleague. As a first post, I would like to draw the attention of those interested in operations to several Oracle Enterprise Manager 12c webcasts this week, which require advance registration at the links below:
    - Oracle Enterprise Manager 12c Automated Agent Deployment (November 29, 17:00)
    - Perform a Zero Downtime Upgrade to Oracle Enterprise Manager 12c (November 30, 17:00)
    - Oracle Enterprise Manager Ops Center: Global Systems Management Made Easy (December 1, 19:00)

    Read the article

  • Agilist, Heal Thyself!

    - by Dylan Smith
    I've been meaning to blog about a great experience I had earlier in the year at Prairie Dev Con Calgary. Steve Rogalsky and I did a session that we called "Agilist, Heal Thyself!". We used a format that was new to me, but that Steve had seen used at another conference. We started by asking the audience to give us a list of challenges they had had when adopting agile. We wrote them all down, then had everybody vote on the most interesting ones. Then we split into two groups, and each group was assigned one of the agile challenges. We had 20 minutes to discuss the challenge and suggest solutions or approaches to improve things. At the end of the 20 minutes, each of the groups gave a brief summary of their discussion and learnings, then we mixed up the groups and repeated with another 2 challenges. The 2 groups I was part of had some really interesting discussions and suggestions.

    Unfinished Stories at the End of Sprints

    The first agile challenge we tackled was something that every single Scrum team I have worked with has struggled with: what happens when you get to the end of a Sprint and there are some stories that are only partially completed? The team in question was getting very demoralized, as they felt that every Sprint was a failure because they never had a set of fully completed stories. How do you avoid this, and/or what do you do when it happens? There were 2 pieces of advice that were well received:

    1. Try to bring stories to completion before starting new ones. This is advice I give all my Scrum teams. If you have a 3-week sprint, what happens all too often is you get to the end of week 2 and a lot of stories are almost done, but almost none are completely done. This is a Bad Thing. I encourage the teams I work with to only start a new story as a very last resort. If you finish your task, look at the stories in progress and see if there's anything you can do to help before moving onto a new story. In the daily standup, put a focus on seeing what stories got completed yesterday; if a few days go by with none getting completed, be sure this fact is visible to the team and do something about it. Something I've been doing recently is introducing WIP (Work In Progress) limits while using Scrum. My current team has 2-week sprints, and we usually have about a dozen or so stories in a sprint. We instituted a WIP limit of 4 stories: if 4 stories have been started but not finished, then nobody is allowed to start new stories. This made it obvious very quickly that our QA tasks were our bottleneck (we have 4 devs, but only 1.5 testers). The WIP limit forced the developers to start to pick up QA tasks before moving onto the next dev tasks, and we ended our sprints with many more stories completely finished than we did before introducing WIP limits.

    2. Rather than using time-boxed sprints, why not just do away with them altogether and go to a continuous flow type approach like Kanban? Limit WIP to keep things under control, but don't have a fixed time box at the end of which all tasks are supposed to be done. This eliminates the problem almost entirely. At some points in the project (releases) you need to be able to burn down all the half-finished stories to get a stable release build, but this probably occurs less often than every sprint, and there are alternative approaches to achieve it using branching strategies rather than forcing your team to try to get to zero WIP every 2 weeks (e.g. when you are ready for a release, create a new branch for any new stories, but finish all existing stories in the current branch and release it).

    Trying to Introduce Agile into a Team with Previous Bad Agile Experiences

    One of the agile adoption challenges somebody described was that he was in a leadership role on a team he had recently joined; let's call him Dave. This team was currently very waterfall in their ALM process, but they were about to start on a new green-field project. Dave wanted to use this new project as an opportunity to do things the "right way", using an Agile methodology like Scrum, adopting TDD, automated builds, proper branching strategies, etc. The problem he was facing is that everybody else on the team had previously gone through an "Agile Adoption" that was a horrible failure. Dave blamed this failure on the consultant brought in previously to lead the agile transition, but regardless of the reason, the team had very negative feelings towards agile and was very resistant to trying it out again. Dave possibly had the authority to try to force the team to adopt Agile practices, but we all know that doesn't work very well. What was Dave to do?

    Ultimately, the best advice was to question *why* Dave wanted to adopt all these various practices. Rather than trying to convince his team that these were the "right way" to run a dev project, and trying to do a Big Bang approach to introducing change, he would be better served by identifying problems the team currently faces, having a discussion with the team to get everybody to agree that specific problems exist, and then having an open discussion about ways to address those problems. This way Dave could incrementally introduce agile practices, and he doesn't even need to identify them as "agile" practices if he doesn't want to. For example, when we discussed with Dave, he said probably the team's biggest problem was long periods without feedback from users, then finding out too late that the software is not going to meet their needs. Rather than Dave jumping right to introducing Scrum and all it entails, it would be easier to get buy-in from the team if he framed it as a discussion of existing problems and a brainstorm of possible solutions. And possibly most importantly, don't try to do massive changes all at once with a team that has not bought into those changes. Taking an incremental approach has a greater chance of success.

    I see something similar in my day job all the time too: clients who for one reason or another claim to not be fans of agile (or not ready for agile yet), but who then go on to ask me to help them get shorter feedback cycles, quicker delivery cycles, iterative development processes, etc. It's kind of funny at times; sometimes you just need to phrase the suggestions in terms they are using and avoid the word "agile".

    PS - I haven't blogged all that much over the past couple of years, but in an attempt to motivate myself, a few of us have accepted a blogger challenge. There are 6 of us who have all put some money into a pool, and the agreement is that we each need to blog at least once every 2 weeks. The first 2-week period that we miss, we're eliminated. Last person standing gets the money. So expect at least one blog post every couple of weeks for the near future (I hope!). And check out the blogs of the other 5 people in this blogger challenge:
    - Steve Rogalsky: http://winnipegagilist.blogspot.ca
    - Aaron Kowall: http://www.geekswithblogs.net/caffeinatedgeek
    - Tyler Doerkson: http://blog.tylerdoerksen.com
    - David Alpert: http://www.spinthemoose.com
    - Dave White: http://www.agileramblings.com (note: site not available yet. Should be shortly, or he owes me some money!)

    Read the article

  • [News] The 60 .NET tools you should know

    The blog webdistortion.com has published a list of 60 tools that every .NET developer should know: "Every good developer knows never to re-invent the wheel, especially if there is software out there that has been tested by others, and has an established track record. As a developer using the .NET framework I've found some of these libraries invaluable, so I'm sharing them for some of the other devs out there with a brief outline of how to use."

    Read the article

  • Unable to install Hadoop in Ubuntu 12.04

    - by Anitha
    I am trying to install Hadoop on Ubuntu 12.04, following the instructions from michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/. I installed java-6-openjdk from the Ubuntu Software Center. I have set JAVA_HOME in .bashrc and also set JAVA_HOME in Hadoop's conf/env.sh. While formatting the namenode, I get the error: usr/lib/jvm/java-6-openjdk/bin/java no such file or directory.
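
    The quoted error path "usr/lib/jvm/java-6-openjdk/bin/java" has no leading slash, which usually means JAVA_HOME is set to a relative or mistyped path; on Ubuntu 12.04 the OpenJDK 6 directory also carries an architecture suffix. A minimal sketch of the settings that usually make the namenode format work (the -i386 path below is an assumption; check what ls actually shows on your machine):

        # Confirm where the JDK actually lives
        ls /usr/lib/jvm/
        # In ~/.bashrc (note the leading slash and the architecture suffix):
        export JAVA_HOME=/usr/lib/jvm/java-6-openjdk-i386
        export PATH=$JAVA_HOME/bin:$PATH
        # Set the same JAVA_HOME value in Hadoop's conf env file, then re-run:
        bin/hadoop namenode -format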

    Read the article

  • GLP for Pillar Axiom 600 Storage System Implementation Specialist

    - by uwes
    Now available at the OPN Competency Center. The guided learning path provides you with an overview of the Pillar Axiom 600 storage system, and the technical details that you need to become a Pillar Axiom 600 Storage System Certified Implementation Specialist. Learn more, go to: Pillar Axiom 600 Storage System Implementation Specialist.

    Read the article

  • Why do I get Unity instead of Classic when using NX?

    - by Mathew
    Recently I installed FreeNX on my PC, and when I log in with my 'dev' account I get the Unity interface rather than classic GNOME. This is odd, as my last login before FreeNX was with the classic interface. I would like to have the classic desktop over FreeNX by default. I do log in with a 'watch iplayer' account where the Unity interface works a treat, so for this reason I would prefer not to uninstall Unity. Any ideas?
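
    Remote NX sessions generally cannot use 3D-accelerated Unity, so pointing the session at the 2D/classic desktop explicitly is the common workaround. A sketch under stated assumptions: gnome-session-fallback is the 12.04 package name for the classic session, and the node.conf variable below is an assumption to verify against your FreeNX install:

        # Install the classic (fallback) GNOME session
        sudo apt-get install gnome-session-fallback
        # Option 1: in the NX client, use a custom session command such as:
        #   gnome-session --session=gnome-fallback
        # Option 2: set it server-wide (variable name and path assumed -- check your install):
        #   In /etc/nxserver/node.conf:
        #   COMMAND_START_GNOME="gnome-session --session=gnome-fallback"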

    Read the article

  • Windows Azure Virtual Machine Readiness and Capacity Assessment for SQL Server

    - by SQLOS Team
    Windows Azure Virtual Machine Readiness and Capacity Assessment for Windows Server Machines Running SQL Server

    With the release of MAP Toolkit 8.0 Beta, we have added a new scenario to assess your Windows Azure Virtual Machine readiness. The MAP 8.0 Beta performs a comprehensive assessment of Windows Servers running SQL Server to determine your level of readiness to migrate an on-premise physical or virtual machine to Windows Azure Virtual Machines. The MAP Toolkit then offers suggested changes to prepare the machines for migration, such as upgrading the operating system or SQL Server.

    MAP Toolkit 8.0 Beta is available for download here. Your participation and feedback is very important to make the MAP Toolkit work better for you. We encourage you to participate in the beta program and provide your feedback at [email protected] or through one of our surveys.

    Now, let's walk through the MAP Toolkit tasks for completing the Windows Azure Virtual Machine assessment and capacity planning. The tasks include the following:
    - Perform an inventory
    - View the Windows Azure VM Readiness results and report
    - Collect performance data to determine VM sizing
    - View the Windows Azure Capacity results and report

    Perform an inventory

    1. To perform an inventory against a single machine or across a complete environment, choose Perform an Inventory to launch the Inventory and Assessment Wizard.

    2. After the Inventory and Assessment Wizard launches, select either the Windows computers or SQL Server scenario to inventory Windows machines. HINT: If you don't care about completely inventorying a machine, just select the SQL Server scenario. Click Next to continue.

    3. On the Discovery Methods page, select how you want to discover computers and then click Next to continue. Description of discovery methods:
    - Use Active Directory Domain Services -- This method allows you to query a domain controller via the Lightweight Directory Access Protocol (LDAP) and select computers in all or specific domains, containers, or OUs. Use this method if all computers and devices are in AD DS.
    - Windows networking protocols -- This method uses the WIN32 LAN Manager application programming interfaces to query the Computer Browser service for computers in workgroups and Windows NT 4.0-based domains. If the computers on the network are not joined to an Active Directory domain, use only the Windows networking protocols option to find computers.
    - System Center Configuration Manager (SCCM) -- This method enables you to inventory computers managed by System Center Configuration Manager (SCCM). You need to provide credentials to the System Center Configuration Manager server in order to inventory the managed computers. When you select this option, the MAP Toolkit will query SCCM for a list of computers and then MAP will connect to these computers.
    - Scan an IP address range -- This method allows you to specify the starting address and ending address of an IP address range. The wizard will then scan all IP addresses in the range and inventory only those computers. Note: This option can perform poorly if many IP addresses aren't being used within the range.
    - Manually enter computer names and credentials -- Use this method if you want to inventory a small number of specific computers.
    - Import computer names from a file -- Using this method, you can create a text file with a list of computer names that will be inventoried.

    4. On the All Computers Credentials page, enter the accounts that have administrator rights to connect to the discovered machines. This does not need to be a domain account, but it needs to be a local administrator. I have entered my domain account that is an administrator on my local machine. Click Next after one or more accounts have been added. NOTE: The MAP Toolkit primarily uses Windows Management Instrumentation (WMI) to collect hardware, device, and software information from the remote computers. In order for the MAP Toolkit to successfully connect and inventory computers in your environment, you have to configure your machines to allow inventory through WMI and also allow your firewall to enable remote access through WMI. The MAP Toolkit also requires remote registry access for certain assessments. In addition to enabling WMI, you need accounts with administrative privileges to access desktops and servers in your environment.

    5. On the Credentials Order page, select the order in which you want the MAP Toolkit to connect to the machine and SQL Server. Generally just accept the defaults and click Next.

    6. On the Enter Computers Manually page, click Create to pull up a dialog to enter one or more computer names.

    7. On the Summary page, confirm your settings and then click Finish. After clicking Finish, the inventory process will start.

    Windows Azure Readiness results and report

    After the inventory has completed, you can review the results under the Database scenario. On the tile, you will see the number of Windows Server machines with SQL Server that were analyzed, the number of machines that are ready to move without changes, and the number of machines that require further changes. If you click this Azure VM Readiness tile, you will see additional details and can generate the Windows Azure VM Readiness Report. After the report is generated, select View | Saved Reports and Proposals to view the location of the report. Open the WindowsAzureVMReadiness* report in Excel. On the Windows tab, you can see the results of the assessment. This report has a column for the operating system and SQL Server assessment and provides a recommendation on how to resolve any component that is not supported.

    Collect performance data

    Launch the Performance Wizard to collect performance information for the Windows Server machines that you would like the MAP Toolkit to suggest a Windows Azure VM size for.

    Windows Azure Capacity results and report

    After the performance metrics are collected, the Azure VM Capacity tile will display the number of virtual machine sizes that are suggested for the Windows Server and Linux machines that were analyzed. You can then click on the Azure VM Capacity tile to see the capacity details and generate the Windows Azure VM Capacity Report. Within this report, you can view the performance data that was collected and the suggested virtual machine sizes.

    MAP Toolkit 8.0 Beta is available for download here. Your participation and feedback is very important to make the MAP Toolkit work better for you. We encourage you to participate in the beta program and provide your feedback at [email protected] or through one of our surveys.

    Useful references:
    - Windows Azure Homepage
    - How-to guides for Windows Azure Virtual Machines
    - Provisioning a SQL Server Virtual Machine on Windows Azure
    - Windows Azure Pricing

    Peter Saddow, Senior Program Manager, MAP Toolkit Team

    Read the article

  • Are there any drawbacks to the Major.Minor.YMDD.Build version strategy?

    - by Chu
    I'm trying to come up with a good version strategy to fit our specific needs. We've proposed settling on this, and I wanted to ask the question to see if anyone's experience would suggest avoiding this or altering it in any way. Here's our proposal: versions are released in the format MAJOR.MINOR.YMDD.BN. Here it is broken out:
    - MAJOR and MINOR are typical; we'll increase MINOR when we feel code and new feature sets warrant it, once every few months most likely. MAJOR will increase roughly yearly.
    - YMDD: Y will be the last digit of the current year, so "1" for 2011, "2" for 2012, etc. A non-padded month will be used to keep the number smaller (9 instead of 09, for example). DD of course is the day, padded with a zero for days under 10.
    - BN: BN is the build number and increases by one any time we make a change to a branch of the code represented by the build.

    For example: if we were to make a build today, our release would be version 5.0.1707.1. I release to QA today, and 3 days from now QA finds that a change broke the save functionality on a page. Instead of changing our current development code, I'd go back to the code that I used to create version 5.0.1707.1, make the fix there, then increase the BN portion of the version and re-release 5.0.1707.2 back to QA. In short, any time a change is made to a branched version that isn't the active dev branch, we'd use the original version number and increase only the BN portion (even if the change happened 3 days, 3 weeks or 3 months from the initial release of that version). Any time we make a new release from our active dev branch, we'd come up with a new version based on the month/day of the release using the outlined strategy. We do this once every 2-3 weeks. A sketch of how the YMDD number is assembled follows this question.

    Are there holes or pitfalls with this? If so, what are they? Thanks.

    EDIT: To clarify one point that I didn't get out very well - Oct/Nov/Dec will be two digits; it's only the year that won't be. So 9 for Sept, 10 for Oct, 11 for Nov, etc.
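
    To make the YMDD piece concrete, here is a small illustrative bash sketch (MAJOR, MINOR and BN are placeholders) that assembles a version string exactly as described: last digit of the year, unpadded month, zero-padded day:

        #!/bin/bash
        MAJOR=5; MINOR=0; BN=1              # placeholders for this example
        Y=$(( $(date +%Y) % 10 ))           # last digit of the year, e.g. 2011 -> 1
        M=$(date +%-m)                      # month without leading zero (GNU date), e.g. 7
        DD=$(date +%d)                      # day padded to two digits, e.g. 07
        echo "${MAJOR}.${MINOR}.${Y}${M}${DD}.${BN}"   # e.g. 5.0.1707.1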

    Read the article

  • Hard Drive Formatting and Unmounting Errors

    - by Lucas Carther
    I'm trying to format a hard drive using Ubuntu 11.10. Every time I attempt to format it from the Disk Utility, I get an error saying the device is busy; under details it says:

        /dev/sda1 is mounted

    Then when I try to unmount it from the Disk Utility, the operation fails with:

        Error unmounting: umount exited with exit code 1: helper failed with: umount: only root can unmount UUID=70D6-1701 from /boot/efi
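
    The "only root can unmount" message means the Disk Utility helper is not running the unmount with root privileges, and a partition mounted at /boot/efi is normally the EFI system partition of the running install rather than a spare data disk. A hedged sketch for checking what the drive really is and unmounting it as root before formatting (device names are examples; verify with lsblk first, since formatting the wrong partition destroys data):

        # Show the partition layout and current mount points
        lsblk
        mount | grep /dev/sda
        # If it really is the disk you want to wipe, unmount it with root privileges
        sudo umount /dev/sda1
        # Then format it, e.g. as ext4 (irreversibly erases that partition)
        sudo mkfs.ext4 /dev/sda1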

    Read the article

  • Latency Matters

    - by Frederic P
    A lot of interest in low latencies has been expressed within the financial services segment, most especially in stock trading applications where every millisecond directly influences the profitability of the trader. These days, much of the trading is executed by software applications which are trained to respond to each other almost instantaneously. In fact, you could say that we are in an arms race where traders are using any and all options to cut down on the delay in executing transactions, even by moving physically closer to the trading venue. The Solaris OS network stack has traditionally been engineered for high throughput, at the expense of higher latencies. Knowledge of the tuning parameters that redress the imbalance is critical for applications that are latency sensitive. In this blog we present how to further configure a default Oracle Solaris 10 installation to reduce network latency.

    There are many parameters that can be altered, but the most effective ones are intr_blank_time and intr_blank_packets. These parameters affect on-board network throughput and latency on Solaris systems. If interrupt blanking is disabled, packets are processed by the driver as soon as they arrive, resulting in higher network throughput and lower latency, but with higher CPU utilization. With interrupt blanking disabled, processor utilization can be as high as 80-90% in some high-load web server environments. If interrupt blanking is enabled, packets are processed when the interrupt is issued. Enabling interrupt blanking can result in reduced processor utilization and network throughput, but higher network latency. Both parameters should be set at the same time. You can set these parameters by using the ndd command as follows:

        # ndd -set /dev/eri intr_blank_time 0
        # ndd -set /dev/eri intr_blank_packets 0

    You can add them to the /etc/system file as follows:

        set eri:intr_blank_time 0
        set eri:intr_blank_packets 0

    The value of the interrupt blanking parameter is a trade-off between network throughput and processor utilization. If higher processor utilization is acceptable for achieving higher network throughput, then disable interrupt blanking. If lower processor utilization is preferred and higher network latency is the penalty, then enable interrupt blanking. Our experience at ISV Engineering is that under controlled experiments the above settings result in a reduction of network latency of at least 50%; on a two-socket 3 GHz Sun Fire X4170 M2 running Solaris 10 Update 9, the above settings improved ping-pong latency from 60µs to 25-30µs with the on-board NIC.
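
    A quick way to confirm the change took effect is to read the parameters back with ndd; the eri driver name is simply the one used in the article, so substitute the driver for your NIC:

        # Read the current values back (run as root)
        ndd /dev/eri intr_blank_time
        ndd /dev/eri intr_blank_packets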

    Read the article
