Search Results

Search found 18761 results on 751 pages for 'lot'.

  • What is an effective git process for managing our central code library?

    - by Mathew Byrne
    Quick background: we're a small web agency (3-6 developers at any one time) developing small to medium-sized Symfony 1.4 sites. We've used git for a year now, but most of our developers preferred Subversion and aren't used to a distributed model. For the past 6 months we've put a lot of development time into a central Symfony plugin that powers our custom CMS. This plugin includes a number of features, helpers, base classes etc. that we use to build custom functionality. The plugin is stored in git, but branches wildly as it is used in various products and is pulled from/pushed to constantly. The repository is usually used as a submodule within a major project. The problems we're starting to see now are a large number of merge conflicts and backwards-incompatible changes brought into the repository by developers adding custom functionality in the context of their own project. I've read Vincent Driessen's excellent git branching model and successfully used it for projects in the past, but it doesn't seem to quite apply to our particular situation: we have a number of projects concurrently using the same core plugin while developing new features for it. What we need is a strategy that provides the following:
    - A methodology for developing major features within the code repository.
    - A way of migrating those features into other projects.
    - A way of versioning the core repository, and of tracking which version each major project uses.
    - A plan for migrating bug fixes back to older versions.
    - A cleaner history that makes it easier to see where changes have come from.
    Any suggestions or discussion would be greatly appreciated.
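
    The versioning/tracking requirement is the easiest piece to sketch: tag releases of the core plugin and pin each project's submodule to a tag. A minimal sketch, assuming the plugin lives in its own repository (the URL, path and version number are illustrative):

        # in the core plugin repository: cut a release
        git tag -a v1.2.0 -m "core plugin release 1.2.0"
        git push origin v1.2.0

        # in a client project: pin the submodule to that release
        cd plugins/core
        git checkout v1.2.0
        cd ../..
        git commit -am "Pin core plugin to v1.2.0"

    Bug fixes for older versions can then live on support branches cut from the relevant tag (e.g. a v1.2.x branch), which is essentially Driessen's model applied to the plugin on its own rather than to each site.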

  • Server 2012 GPO: PowerShell Script on Computer Startup not running

    - by Alex
    I've got a couple of Server 2012 instances on Amazon EC2 and I'm in the process of setting up the GPOs. All of the settings in the GPOs are being applied fine, except that none of the PowerShell scripts specified to run on computer startup are actually being executed. The scripts sit on a UNC share which has Authenticated Users applied to it with full permissions. I'm assuming it probably has something to do with the Execution Policy, but I'm not sure how to bypass it automatically. I could go into each instance and bypass the Execution Policy by hand, but that's obviously not a good idea, plus I'm eventually going to connect Windows 7 computers that will be running the same scripts. How can I get the scripts to actually run? Google searches haven't yielded a whole lot...
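
    One low-friction approach, as a sketch (the share path is a placeholder): instead of loosening the machine-wide policy, have the startup entry launch PowerShell with an explicit bypass for just that script:

        powershell.exe -NoProfile -ExecutionPolicy Bypass -File "\\server\share\Startup.ps1"

    Alternatively, the "Turn on Script Execution" policy (Computer Configuration > Administrative Templates > Windows Components > Windows PowerShell) can set the execution policy centrally, which would also cover the Windows 7 clients later.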

  • Windows 7 Loading speed

    - by Sergiy Byelozyorov
    I installed Windows 7 64-bit about two weeks ago. Everything was working fine until yesterday, when Windows started to load much more slowly. The main delay comes while the Windows logo is glowing -- it used to take around 10-15 seconds and now it's around 2 minutes. As far as I remember, the only recent change is that I installed the Microsoft IntelliMouse driver, which I removed today. Unfortunately I switched off System Restore a long time ago, so I can't even test whether restoring an older state fixes the problem. How can I troubleshoot this? Can I somehow find out what's causing the delay? P.S. Please don't suggest reinstalling Windows. I know that would help, but reinstalling all software and settings is a pain and requires a lot of time.
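
    One way to find out where the time goes, as a sketch (this assumes the Windows Performance Toolkit is installed; the output path is illustrative):

        xbootmgr -trace boot -resultPath C:\boottrace

    The machine reboots and records a boot trace, which can then be opened in the toolkit's analyzer (xperf) to see which driver or service is eating the two minutes.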

  • Windows Server 2008 vs Ubuntu 11 [closed]

    - by user472875
    I am working on a custom server application that should be capable of handling a very large volume of traffic. I am aware that this type of question has been asked a lot, but I haven't been able to find a good answer. What I'm really asking is: for a server with given specs, which OS will be able to handle more traffic, faster and more reliably? I do not care about rights management or any other features. I am fairly good with both platforms, so I would like to pick the OS with better performance on a clean install, with nothing else running. Thanks in advance.

  • Short Look at Frends Helium 2.0 Beta

    - by mipsen
    Pekka from Frends gave me the opportunity to have a look at the beta version of their Helium 2.0. For those of you who don't know the tool: Helium is a web application that collects management data from BizTalk which you usually have to tediously collect yourself, like performance data (throttling, throughput (e.g. completed Orchestrations/hour), other performance counters) and data about the state of BTS applications, and presents the data in clearly structured diagrams and overviews which (often) even allow drill-down. Installing Helium 2 was quite easy. It comes as an msi file which creates the web application on IIS. Additionally, a Windows service is deployed which acts as an agent for sending alert e-mails and collecting data. What I missed during installation was a link to the created web app at the end, but the link can be found under Program Files/Frends... On the start page Helium shows two sections: an overview of the BTS apps (Running? Suspended messages?) and basic performance data. You can drill down into the BTS apps further, to see ReceiveLocations, Orchestrations and SendPorts. And then a very nice feature can be activated: you can set a monitor on each of the ports and/or orchestrations and have an e-mail sent when a threshold of executions per day or hour is not met. I think this is a great idea. The following screenshot shows the configuration of this option. Conclusion: Helium is a useful monitoring tool for BTS operations that might save a lot of time collecting data, writing a tool yourself, or documenting for the operations staff where to find the data.
    Pros:
    - Simple installation
    - Most important data for BTS operations in one place
    - Monitor with alerts if throughput is not met
    - Nice web UI
    - Reasonable price
    Cons:
    - Additional performance counters cannot be added
    I am not sure when the final version is to be shipped, but you should see that on Frends' homepage soon, I guess... A trial version is available here

  • Can't reinstall VLC

    - by David matthews
    I use VLC a lot, and when 2.0 came out Ubuntu did not update to that version; the repo still had the older version even months later. So I added the daily repo http://ppa.launchpad.net/videolan/stable-daily/ubuntu and that worked for a while. A few months later I received a 'Distribution upgrade' and when I installed it, it removed VLC. When I tried to reinstall, it gave me a bunch of unmet dependencies, so I disabled the source, ran apt-get update, and tried to install the older VLC; that did not work either. I eventually found a web page that helped me get it working, and I was also able to get the 'stable daily' version working too. But last night I got another 'distro upgrade' and it uninstalled VLC again. When I try to reinstall from the daily repo I get:

        The following packages have unmet dependencies:
         vlc : Depends: fonts-freefont-ttf but it is not installable
               Depends: vlc-nox (= 2.0.3+git20121005+r392-0~r42~precise1) but it is not going to be installed
               Depends: libvlccore5 (>= 2.0.0) but it is not going to be installed
               Recommends: vlc-plugin-notify (= 2.0.3+git20121005+r392-0~r42~precise1) but it is not going to be installed
               Recommends: vlc-plugin-pulse (= 2.0.3+git20121005+r392-0~r42~precise1) but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.

    and from the default source:

         vlc : Depends: vlc-nox (= 2.0.3-0ubuntu0.12.04.1) but it is not going to be installed
               Depends: libvlccore5 (>= 2.0.0) but it is not going to be installed
               Recommends: vlc-plugin-notify (= 2.0.3-0ubuntu0.12.04.1) but it is not going to be installed
         vlc-plugin-pulse : Depends: vlc-nox (= 2.0.3-0ubuntu0.12.04.1) but it is not going to be installed
               Depends: libvlccore5 (>= 2.0.0) but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.

    Any ideas? I am using Ubuntu 12.04 64-bit.
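
    A common way out of this state, as a sketch: purge the PPA with ppa-purge, which downgrades its packages back to the official Ubuntu versions, then reinstall VLC cleanly:

        sudo apt-get install ppa-purge
        sudo ppa-purge ppa:videolan/stable-daily
        sudo apt-get update
        sudo apt-get install vlc

    If apt still reports held broken packages afterwards, running "sudo apt-get install -f" may let it resolve the leftovers.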

  • Database Schema Usage

    - by CrazyHorse
    I have a question regarding the appropriate use of SQL Server database schemas and was hoping that some database gurus might be able to offer some guidance around best practice. Just to give a bit of background, my team has recently shrunk to 2 people and we have just been merged with another 6-person team. My team had set up a SQL Server environment running off a desktop backing up to another desktop (and nightly to the network), whilst the new team has a formal SQL Server environment, running on a dedicated server, with backups and maintenance all handled by a dedicated team. So far it's good news for my team. Now to the query. My team designed all our tables to belong to a 3-letter schema name (e.g. User = USR, General = GEN, Account = ACC) which broadly speaking relates to specific applications, although there is a lot of overlap. My new team has come from an Access background and has implemented their tables within dbo with a 3-letter prefix followed by "_tbl", so the examples above would be dbo.USR_tblTableName, dbo.GEN_tblTableName and dbo.ACC_tblTableName. Further to this, neither my old team nor my new team has gone live with their SQL Servers yet (we're both coincidentally migrating away from Access environments) and the new team have said they're willing to consider adopting our approach if we can explain how it would be beneficial. We are not anticipating handling table updates at schema level, as we will be using application-level logins. Also, with regards to the unwieldiness of the 7-character prefix, I'm not overly concerned myself as we're using LINQ almost exclusively, so the tables can simply be renamed in the DBML (although I know that presents some challenges when we update the DBML). So, given that both teams need to be aligned with one another, can anyone offer any convincing arguments either way?
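
    For reference, here is what the schema-per-application approach looks like in T-SQL (a minimal sketch; the names are illustrative). One practical argument for it is that ownership and permissions can later be managed at schema scope rather than table by table:

        CREATE SCHEMA USR AUTHORIZATION dbo;
        GO
        CREATE TABLE USR.Account (Id INT PRIMARY KEY, Name NVARCHAR(100));
        GO
        -- permissions can be granted at schema scope if ever needed
        GRANT SELECT ON SCHEMA::USR TO SomeReportingRole;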

  • CSS help positioning divs inline

    - by JaPerk14
    I need help with a recurring problem that happens a lot. I want to create a header that consists of 3 sections positioned inline. I display them inline using display: inline and float: left. The problem is that when I resize my browser window, the last div is pushed down and isn't displayed inline. I know it sounds like I'm being picky, but I don't want the design to distort as visitors change their monitor size. I have provided the HTML and CSS I am working with below. Hopefully I have explained this well enough. Thanks in advance.

    HTML:

        <div class="masthead-wrapper">&nbsp;</div>
        <div class="searchbar-wrapper">&nbsp;</div>
        <div class="profile-menu-wrapper">&nbsp;</div>

    CSS:

        #Header {
          display: block;
          width: 100%;
          height: 80px;
          background: #C0C0C0;
        }
        .masthead-wrapper {
          display: inline;
          float: left;
          width: 200px;
          height: 80px;
          background: #3b5998;
        }
        .searchbar-wrapper {
          display: inline;
          float: left;
          width: 560px;
          height: 80px;
          background: #FF0000;
        }
        .profile-menu-wrapper {
          display: inline;
          float: left;
          width: 200px;
          height: 80px;
          background: #00FF00;
        }
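
    One common fix, as a sketch: floats wrap when their container gets narrower than their combined width (200 + 560 + 200 = 960px here), so give the container a matching minimum width. This assumes the three wrappers actually sit inside the #Header element, which the markup above doesn't show, so you may need to add that wrapper div:

        #Header {
          min-width: 960px; /* stops the third column dropping when the window narrows */
        }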

  • Business Analyst role in development process

    - by Ryan
    I work as a business analyst and I currently oversee much of the development effort on an internal project. I'm responsible for the requirements, specs, and overall testing. I work closely with the developers (onshore and offshore). The offshore team produces all of the reports. Version 1.0 had a 9-month development cycle and I had about 4-5 months to test all the reports. There was the usual back and forth to get the implementation right. Version 2.0 had a much shorter development cycle (3 months). I received the first version of the reports about 3 weeks ago and noticed a lot of things wrong with them. Many of the requirements were wrong, and the performance of the queries was horrendous, taking 5-6x longer than it should have. The onshore lead developer was out and did not supervise the offshore development team in generating the reports. Without consulting management, I took a look at the SQL in the reports and was able to improve performance greatly (by a factor of six), which is acceptable for this version. I sent the updated queries as guidelines to the offshore team and told them they should look at doing X instead of Y to improve performance, and also to fix some specific logic issues. I then spoke to my managers about this, because it doesn't feel right that I was developing SQL queries, but given our time crunch I saw no other way. We were able to fix the issue quite fast, which I'm happy with. Current situation: the onshore managers aren't too pleased that the offshore team did not code for performance. I know there are some things I could have done better throughout this process, and I do not in any way consider myself a programmer. My question is: if an offshore team that works apart from the onshore project resources fails to deliver an acceptable release, is it appropriate to clean up their work to meet a deadline? What kind of problems could this create in the future?

  • Development environment to manage multiple Oracle databases

    - by jkohlhepp
    I am in an enterprise environment where we have applications that need to run against multiple Oracle databases. Developers may need to manage multiple vintages of these databases to support different test data or diagnose bugs against different versions of the code. Right now, we have a limited set of test environments set up on "real" Oracle servers within the data center. We juggle these among development and QA groups, and a lot of conflicts and inefficiencies arise because of it. I am taking a look at Oracle Express Edition, which would allow me to spin up a local Oracle database. This is similar to the workflow I most often see with SQL Server: devs work on their local machine until they are ready to integrate, and then push their DB changes to integration/QA environments. However, from what I read it seems that Oracle XE only supports one database instance at a time. So if I have an application that utilizes two different databases, I can't have both of them running on my local machine. Is that correct? Do the Oracle Standard or Personal editions get around this limitation? If I had one of those installed locally, how difficult would it be to get multiple databases working on the same development machine? How do dev shops handle developing against Oracle when they need several different Oracle instances for their applications?
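
    A workaround often used with XE, as a sketch: since what an application treats as a "database" in Oracle usually maps to a schema (user), a single XE instance can host one schema per application side by side (the names and passwords below are illustrative):

        CREATE USER app_one IDENTIFIED BY pw1;
        CREATE USER app_two IDENTIFIED BY pw2;
        GRANT CONNECT, RESOURCE TO app_one;
        GRANT CONNECT, RESOURCE TO app_two;

    Each application then connects with its own credentials, which approximates having two local databases without running a second instance.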

  • Almost 2013 - Any decent options for mp3 to text? (Speech Recognition)

    - by ajacian81
    I know there are some questions here on s/u regarding converting spoken-word MP3 to text; however, most are pretty old (2010 and earlier). I'm just wondering if there are any new legitimate options for this task - if Google has shown us anything, it's that speech recognition has come a long way. Personally, I'd prefer a Linux-based solution, but I'm not picky. I've heard a lot about something called Sphinx, but I tried to set it up and get it going and couldn't. I know there are a number of different components to Sphinx, so maybe I was doing it wrong? Either way, are there any new applications for speech recognition, especially from MP3 files? Thanks!
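
    For the Sphinx route, the usual stumbling block is input format: PocketSphinx wants 16 kHz mono WAV, not MP3. A minimal sketch, assuming ffmpeg and CMU PocketSphinx are installed (file names are illustrative):

        ffmpeg -i recording.mp3 -ar 16000 -ac 1 recording.wav
        pocketsphinx_continuous -infile recording.wav > transcript.txt

    Accuracy on long-form recorded speech will still be well below dictation grade, but this at least gets the pipeline running.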

  • How to extend Wi-Fi signal across rooms?

    - by Sriram Krishnan
    I moved into a new place and my old Wi-Fi router just doesn't have the range to reach all the rooms. I've been investigating a lot of options and I'm wondering what other folks have done here:
    - Moving my primary Wi-Fi router (not an option, thanks to our cable provider and our landlord).
    - Buying a bigger, beefier router (seems expensive). If so, should I go for one of those draft 802.11n ones to avoid microwave/other Wi-Fi router interference?
    - Setting up a router with DD-WRT as a repeater.
    - Leeching the neighbors' open Wi-Fi access point.
    Alright, I was kidding about the last one, but I'm genuinely curious as to what my best option is.

  • basic beginning emacs questions - install latest version and pick appropriate UI

    - by MountainX
    I'm running the latest Kubuntu (12.04 beta 2) and I would like to run the latest Emacs (currently v24). The repos are one version behind. What's the best way to install v24 or later (and avoid future version conflicts)? Also, is there any reason not to always use the GUI version of Emacs if X is running? For example, could I set the GUI Emacs as the default text editor and use it to edit cron jobs (crontab -e)? I'm assuming the answer is yes, but since I haven't done that yet (my default editor is nano), I want to check if there are reasons I should leave nano as the default editor. Usually when I'm working on the command line I end up using nano. Now that I think about it, I have no idea why I keep doing that. Is there any downside to calling a GUI editor when working in an X terminal? EDIT: I briefly tested these two versions: GNU Emacs 24.0.94.1 (x86_64-pc-linux-gnu, GTK+ Version 3.3.20), and GNU Emacs 23.3.1 (x86_64-pc-linux-gnu) as installed by default in Kubuntu. This post explains some of the differences between versions. Unfortunately (for me) the default installed version (23.3.1, 23.3+1-1ubuntu9) is the nox version:

        Package: emacs23-nox
        Status: install ok installed
        Version: 23.3+1-1ubuntu9
        Replaces: emacs23, emacs23-gtk, emacs23-lucid

    The package with version 24 opens in GUI mode by default, which is what I prefer. Some of the version 24 changes that interest me are listed in the references below. But there appear to be a multitude of different packages and versions I could install.
    References:
    What's New In Emacs 24 (part 1) | Mastering Emacs http://www.masteringemacs.org/articles/2011/12/06/what-is-new-in-emacs-24-part-1/ "shell-mode uses pcomplete rules, with the standard completion UI. Yowzah! There's a lot of cool, new functionality hidden away in this gem of a change."
    EmacsWiki: Recent Changes http://www.emacswiki.org/emacs/?action=rc;showedit=0
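
    The usual route for a newer Emacs than the archive carries is a PPA. As a sketch (the PPA name below is an assumption - it was a commonly cited Emacs 24 PPA around 12.04, but check Launchpad before adding it):

        sudo add-apt-repository ppa:cassou/emacs
        sudo apt-get update
        sudo apt-get install emacs24

    Since this installs through apt, later upgrades arrive with normal package updates instead of creating version conflicts.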

  • Is it bad practice for services to share a database in SOA?

    - by Paul T Davies
    I have recently been reading Hohpe and Woolf's Enterprise Integration Patterns, some of Thomas Erl's books on SOA, and watching various videos and podcasts by Udi Dahan et al. on CQRS and event-driven systems. Systems in my place of work suffer from high coupling. Although each system theoretically has its own database, there is a lot of joining between them. In practice this means there is one huge database that all systems use. For example, there is one table of customer data. Much of what I've read seems to suggest denormalising data so that each system uses only its own database, and any updates to one system are propagated to all the others using messaging. I thought this was one of the ways of enforcing the boundaries in SOA - each service should have its own database - but then I read this: http://stackoverflow.com/questions/4019902/soa-joining-data-across-multiple-services and it suggests this is the wrong thing to do. Segregating the databases does seem like a good way of decoupling systems, but now I'm a bit confused. Is this a good route to take? Is it ever recommended that you should segregate a database on, say, an SOA service, a DDD bounded context, an application, etc.?

  • How to generate a round-numbers graph in Excel?

    - by tcheregati
    Hi folks! I have an Excel file with measurements I made of some color patches (I work at a press company) with a device called a spectrophotometer. Here it is: https://docs.google.com/open?id=0B0i8fdSf2ihzRlFYNWd4anItenM Density and hue are two characteristics of each color patch. The thing is: the 25 color density measurements I took don't increase linearly, but I need to know exactly how the color's hue changes as its density increases. For that, I need Excel to give me round numbers for the X axis (for example 0,70 to 1,50 in 0,05 increments). And for that, obviously, I need Excel to calculate the probable hue values corresponding to those round density values that I didn't actually measure (like a kind of advanced rule of three). So, can anyone help me with that? Thanks a lot!
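
    What's being asked for is piecewise linear interpolation, which Excel can do with FORECAST over a two-point window. A sketch, assuming the 25 measured densities are sorted ascending in B2:B26, the hues in C2:C26, and the round density target in E2 (all ranges are illustrative):

        =FORECAST(E2, OFFSET($C$2, MATCH(E2,$B$2:$B$26,1)-1, 0, 2), OFFSET($B$2, MATCH(E2,$B$2:$B$26,1)-1, 0, 2))

    MATCH finds the measured density just below the target, OFFSET grabs that point and the next one, and FORECAST interpolates linearly between the two, which follows the non-linear shape of the curve far better than a single trend line over all 25 points would.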

  • Need game development sandbox like Etoys to do 2D games prototyping

    - by Dimitry Tato
    I am new to game development, and currently working on a 2D mobile game (for Android). As part of the development process, I need to build a prototype and playtest it, to see if the game mechanics and user interaction are OK. For example, if I have a starship shooting at enemies, I need to see:
    - What's the best size for my starship?
    - What trajectories should the enemy ships fly, and at what velocity?
    - Should the enemy ships come only from left to right, or also from the top?
    - Should the enemy ships form a 'flock' or just fly by themselves?
    - What's the best 'powerup' pickup mechanic: to shoot it, or to pick it up with the ship? etc.
    Implementing these details directly in Java (Android) is time consuming, and since many of the hypotheses will be rejected, I don't want to invest a lot of time coding things, the majority of which are going to be thrown away. I found the tool Etoys http://www.youtube.com/watch?v=34cWCnLC5nM&feature=related (official website http://www.squeakland.org/), which helps to build a prototype quickly, but Etoys is meant for children learning programming and is too basic. SO MY QUESTION IS: is there any prototyping tool as simple as Etoys but with better prototype quality?

  • C4C - 2012

    - by Timothy Wright
    C4C, in Kansas City, is always a fun event. At points it gets to be a pressure cooker as you zone in trying to crank out some fantastic code in just a few hours, but it is always fun: a great challenge of your skill as a software developer, and for a good cause. This year my team helped the United Cerebral Palsy of Greater Kansas City organization add online job applications and a database for tracking internal training. I keep finding that there is one key rule to pulling off a successful C4C weekend project, and that is "Keep It Simple". Each time you want to add that one cool little feature you have to ask yourself: Is it really necessary? And do I have time for that? And if you are going to learn something new, you should ask yourself whether you can really learn it AND finish the project in the given time. Sometimes the less elegant code is the better code, if it works. That said, you get a great amount of freedom to build the solution the way you want. Typically, the software we build for the charities saves them a lot of money and time and makes their jobs easier. You are able to build the software you know you are capable of creating from your own ideas. I highly recommend developers in the area sign up next year and show off their skills. I know I will!

  • Configure custom SSL certificate for RDP on Windows Server 2012 in Remote Administration mode?

    - by Ryan Bolger
    So the release of Windows Server 2012 has removed a lot of the old Remote Desktop related configuration utilities. In particular, there is no more Remote Desktop Session Host Configuration utility that gave you access to the RDP-Tcp properties dialog that let you configure a custom certificate for the RDSH to use. In its place is a nice new consolidated GUI that is part of the overall "edit deployment properties" workflow in the new Server Manager. The catch is that you only get access to that workflow if you have the Remote Desktop Services role installed (as far as I can tell). This seems like a bit of an oversight on Microsoft's part. How can we configure a custom SSL certificate for RDP on Windows Server 2012 when it's running in the default Remote Administration mode without needlessly installing the Remote Desktop Services role?
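
    Until or unless the role is installed, one documented fallback is to point RDP at the certificate directly via WMI. A sketch (the thumbprint is a placeholder; the certificate must already be in the computer's Personal store with its private key):

        wmic /namespace:\\root\cimv2\TerminalServices PATH Win32_TSGeneralSetting Set SSLCertificateSHA1Hash="THUMBPRINTHERE"

    The same Win32_TSGeneralSetting property can also be set from PowerShell with Set-WmiInstance if wmic feels too archaic.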

  • Shared SQL Server 2008

    - by nazaf
    Hi, I have a Windows Hyper-V VPS plan with 1024 MB of RAM. After installing SQL Server 2008 Express, my memory usage went up to 75% without my site even running yet. I know that SQL Server consumes a lot of memory, so I am considering hosting my DB on a shared server instead. Which is more scalable: keeping my DB on my VPS, or moving it to a shared server? If the latter, can you recommend a good shared host? Thanks.
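
    Before moving the DB, it may be worth capping SQL Server's appetite, since by default it grabs as much memory as it can. A sketch (the 512 MB cap is an illustrative value for a 1 GB VPS):

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 512;
        RECONFIGURE;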

  • How to avoid tilde ~ in Bash prompt?

    - by Jirka
    Hello! I have set my prompt in bash in such a way that I can use it directly in an scp command. My current PS1 string:

        PS1="\h:\w\n$"

    And the prompt looks like this:

        lnx-hladky:/tmp/plugtmp
        $

    What I don't like at all is the fact that the $HOME directory is displayed as a tilde. Can this be avoided? It's causing problems when switching between different users. Example:

        lnx-hladky:~/DOC
        $

    The documentation says:
    \w : the current working directory, with $HOME abbreviated with a tilde
    \W : the basename of the current working directory, with $HOME abbreviated with a tilde
    Is there any possibility to avoid $HOME being abbreviated with a tilde? I have found one way around it, but I feel like it's overcomplicated:

        PROMPT_COMMAND='echo -ne "\e[4;35m$(date +%T)\e[24m$(whoami)@$(hostname):$(pwd)\e[m\n"'
        PS1=$

    Can anyone propose a better solution? I have a feeling it's not quite OK to run so many commands (date, whoami, hostname, pwd) just to get a prompt. Thanks a lot! Jirka
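
    A lighter-weight alternative, as a sketch: with single quotes, ${PWD} is expanded each time the prompt is displayed, and unlike \w it is never abbreviated to a tilde (this relies on bash's promptvars option, which is on by default):

        PS1='\h:${PWD}\n\$ '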

  • Friday Fun: Doom Triple Pack

    - by Mysticgeek
    Thankfully it was only a 4-day work week, but that is enough to get sick of the TPS reports. Today we go retro and experience three classic first-person PC shooters with the Doom Triple Pack. The Doom Triple Pack brings you your favorite classic first-person PC shooters in Flash format. The games include Doom, Heretic, and Hexen... just select which one you want to play. Click on Controls to learn how to navigate your character through each game. Each one has in-game options you can use to control the style of play, and each game runs smoothly for what it is, provided you have a decent internet connection. If you're tired of spreadsheets and meetings and want to relive your favorite retro PC gaming days, the Doom Triple Pack can be a lot of fun. If you're looking for other fun ways to waste time at the office, check out the games in the How-To Geek Arcade. Play the Doom Triple Pack

  • How can a non-technical person learn to write a spec for small projects?

    - by Joseph Turian
    How can a non-technical person learn to write specs for small projects? A friend of mine is trying to outsource some development on a statistics project. In particular, he does a lot of work in Excel, and wants to outsource the creation of scripts to do what he now does by hand. However, my friend is extremely non-technical. He is poor at writing technical specs. When he does write a spec, it is written the way you would describe doing something in Excel (go to this cell and then copy the value to that cell). It is also overly verbose, and repeats examples several times. I'm not sure if he properly describes corner cases. The first project he outsourced was a failure. I think he overdescribed some details, but underdescribed corner cases. That, and/or the coder he hired didn't think through the corner cases and ask appropriate questions. I'm not sure. I got on IM with him and it took me half an hour to dig out a description that should have taken five minutes or less to give. I wrote the scripts for him in the end, but didn't examine why his process with the coder failed. He has asked me for help. However, I refuse to get involved, because taking his spec and translating it into clear requirements is 10x more work than executing on a clearly written spec. What is the right way for him to learn? Are there resources he could use? Are there ways he can learn from small, low-pressure practice projects with coders? [edit: Most of his scripts are statistical and data-processing oriented, e.g. take this column and run an average over it; remove these rows under these conditions. So the challenge is different than spec'ing a web app.]

  • DB2 on SPARC T3 Tuning Tips

    - by cherry.shu(at)oracle.com
    With the new self-tuning feature in DB2 V9.x, a lot of database parameters are set to automatic in DB2 V9.7 by default so that DB2 can adjust their values as needed. Most work fine without manual tweaks, but for transaction workloads on SPARC T3 systems, two parameters need to be adjusted manually to achieve optimal performance.
    DATABASE_MEMORY: When this parameter is set to AUTOMATIC and SELF_TUNING_MEM is set to ON, DB2 will use small pages (64KB) for all memory allocation, expanding and shrinking the memory as needed. In order to take advantage of the large page sizes (up to 256MB) supported by the SPARC T3, we need to manually set the size of DATABASE_MEMORY so that DB2 can use 256MB pages for its buffer pools, which are implemented as ISM segments. I know this sounds strange, as it seems that you turn one switch and it ends up controlling another function. pmap(1M) output can verify the page sizes used by the DB2 db2sysc process.
    NUM_IOCLEANERS: This parameter defines the number of page cleaners. The default value is AUTOMATIC, which is calculated from the number of available CPUs and the number of logical partitions. On a SPARC T3 system, where there are over a hundred virtual CPUs and a single DB2 partition, DB2 would set it to #CPUs - 1. This leads to too many page cleaners competing to flush to disk and causes aio mutex lock contention, so we need to decrease the value. Good practice is to set it to the number of physical devices used by the database table space containers.
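
    For reference, both parameters are set with the db2 command line processor. A sketch (the database name and values are illustrative; DATABASE_MEMORY is specified in 4KB pages):

        db2 update db cfg for MYDB using DATABASE_MEMORY 1048576
        db2 update db cfg for MYDB using NUM_IOCLEANERS 8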

  • Top 10 Posts in 2010

    - by dwahlin
    Blogging's a lot of fun and a great way to share what you've learned. It's also a great way to learn, based upon comments people leave that help you see things in an entirely new way in some cases. Since we've now moved on to 2011 (Happy New Year!), I wanted to list the Top 10 posts from my blog during 2010, based on individual views. Thanks to everyone who follows my blog and adds comments from time to time. Here's wishing everyone a great 2011!
    1. Reducing Code by Using jQuery Templates
    2. Integrating HTML into Silverlight Applications
    3. Silverlight is Dead, the Moon is Made of Cheese, and HTML 5 is Ready for Prime Time
    4. Understanding the Role of Commanding in Silverlight 4 Applications
    5. New Article - Getting Started with WCF RIA Services
    6. Simplify Your Code with LINQ
    7. My Favorite iPad Apps....So Far
    8. Final Release of Silverlight Tools for Visual Studio 2010 Released
    9. Handling WCF Service Paths in Silverlight 4 - Relative Path Support
    10. Tales from the Trenches - Building a Real-World Silverlight Line of Business Application
    Honorable mention: Getting Started with the MVVM Pattern in Silverlight Applications - posted in late 2009, but still one of the most popular posts.

  • When are Getters and Setters Justified

    - by Winston Ewert
    Getters and setters are often criticized as being not proper OO. On the other hand, most OO code I've seen has extensive getters and setters. When are getters and setters justified? Do you try to avoid using them? Are they overused in general? If your favorite language has properties (mine does) then such things are also considered getters and setters for this question: they are the same thing from an OO methodology perspective, just with nicer syntax.
    Sources for getter/setter criticism (some taken from comments to give them better visibility):
    - http://www.javaworld.com/javaworld/jw-09-2003/jw-0905-toolbox.html
    - http://typicalprogrammer.com/?p=23
    - http://c2.com/cgi/wiki?AccessorsAreEvil
    - http://www.darronschall.com/weblog/2005/03/no-brain-getter-and-setters.cfm
    - http://www.adam-bien.com/roller/abien/entry/encapsulation_violation_with_getters_and
    To state the criticism simply: getters and setters allow you to manipulate the internal state of objects from outside the object. This violates encapsulation. Only the object itself should care about its internal state. And an example.
    Procedural version of code:

        struct Fridge {
            int cheese;
        }

        void go_shopping(Fridge fridge) {
            fridge.cheese += 5;
        }

    Mutator version of code:

        class Fridge {
            int cheese;

            void set_cheese(int _cheese) { cheese = _cheese; }
            int get_cheese() { return cheese; }
        }

        void go_shopping(Fridge fridge) {
            fridge.set_cheese(fridge.get_cheese() + 5);
        }

    The getters and setters made the code much more complicated without affording proper encapsulation. Because the internal state is accessible to other objects, we don't gain a whole lot by adding these getters and setters.
    The question has been previously discussed on Stack Overflow:
    - http://stackoverflow.com/questions/565095/java-are-getters-and-setters-evil
    - http://stackoverflow.com/questions/996179
