Search Results

  • Oracle Enterprise Manager Ops Center 12c : Enterprise Controller High Availability (EC HA)

    - by Anand Akela
    Contributed by Mahesh Sharma, Oracle Enterprise Manager Ops Center team.

    In Oracle Enterprise Manager Ops Center 12c we introduced a new feature to make the Enterprise Controller highly available. With EC HA, if the hardware crashes, or if the Enterprise Controller services and/or the remote database stop responding, the enterprise services are immediately restarted on the standby Enterprise Controller without administrative intervention. In today's post, I'll briefly describe EC HA, look at some of the prerequisites, and then show some screenshots of how the Enterprise Controller is represented in the BUI. In my next post, I'll show you how to install the EC in an HA environment and some of the new commands.

    What is EC HA?

    Enterprise Controller High Availability (EC HA) provides an active/standby fail-over solution for two or more Ops Center Enterprise Controllers, all within an Oracle Clusterware framework. This allows EC resources to relocate to a standby if the hardware crashes, or if certain services fail. It is also possible to manually relocate the services if maintenance on the active EC is required. When the EC services are relocated to the standby, EC services are interrupted only for the period it takes for the EC services to stop on the active node and start back up on a standby node.

    What are the prerequisites?

    To install EC in an HA framework, an understanding of the prerequisites is required. There are many ways these prerequisites can be installed and configured, which we will not discuss in this post. However, best practices should be applied when installing and configuring them, and I would suggest that you get expert help if you are not familiar with them. Let's briefly look at each of these prerequisites in turn:

    Hardware: Servers are required to host the active and standby node(s). As the nodes will be in a clustered environment, they need to be the same model and configured identically. The nodes should have the same processor class, number of cores, memory, and network cards, for example.

    Operating System: Solaris 10 9/10 or higher, Solaris 11, or OEL 5.5 or higher, on x86 or SPARC.

    Network: There are a number of requirements for network cards in Clusterware, and cables should be networked identically on all the nodes. We must also consider IP allocation for public, private, and virtual IPs (VIPs).

    Storage: Shared storage will be required for the cluster voting disks, the Oracle Cluster Registry (OCR), and the EC's libraries.

    Clusterware: Oracle Clusterware version 11.2.0.3 or later is required. This can be downloaded from: http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html

    Remote Database: Oracle RDBMS 11.1.0.x or later is required. This can be downloaded from: http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html

    For detailed information on how to install EC HA, please read: http://docs.oracle.com/cd/E27363_01/doc.121/e25140/install_config-shared.htm#OPCSO242

    For detailed instructions on installing Oracle Clusterware, please read: http://docs.oracle.com/cd/E11882_01/install.112/e17214/chklist.htm#BHACBGII

    For detailed instructions on installing the remote Oracle database, have a read of: http://www.oracle.com/technetwork/database/enterprise-edition/documentation/index.html

    The schematic diagram below gives a visual view of how the prerequisites are connected. When a fail-over occurs, the Enterprise Controller resources and the VIP are relocated to one of the standby nodes.
    The standby node then becomes active and all Ops Center services are resumed.

    Connecting to the Enterprise Controller from your favourite browser

    Let's presume we have installed and configured all the prerequisites, and installed Ops Center on the active and standby nodes. We can now connect to the active node from a browser, i.e. http://<active_node1>/, which will redirect us to the virtual IP address (VIP). The VIP is the IP address that moves with the Enterprise Controller resource. Once you log on and view the assets, you will see some new symbols; these indicate that the nodes are cluster members, with one being an active member and the other a standby member in this case. If you connect to the standby node, the browser will redirect you to a splash page indicating that you have connected to the standby node.

    Hope you find this topic interesting. Next time I will post about how to install the Enterprise Controller in the HA framework.

    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter
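    As a quick sanity check after setup or after a fail-over, Oracle Clusterware can show you which node currently hosts the cluster resources. This is an illustrative sketch, not part of the original post; the EC resource names you see will depend on your installation:

        # List all cluster resources and the node each is currently running on
        # (Oracle Clusterware 11.2+); run as root or the grid infrastructure owner
        crsctl stat res -t

    Re-running the command after a fail-over should show the EC resources and the VIP online on the standby node.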

  • FairScheduling Conventions in Hadoop

    - by dan.mcclary
    While scheduling and resource allocation control has been present in Hadoop since 0.20, a lot of people haven't discovered or utilized it in their initial investigations of the Hadoop ecosystem. We could chalk this up to many things:

    - Organizations are still determining what their dataflow and analysis workloads will comprise.
    - Small deployments under test aren't likely to show the signs of strain that would send someone looking for resource allocation options.
    - The default scheduling options -- the FairScheduler and the CapacityScheduler -- are not placed in the most prominent position within the Hadoop documentation.

    However, for production deployments, it's wise to start with at least the foundations of scheduling in place so that you can tune the cluster as workloads emerge. To do that, we have to ask ourselves something about what the off-the-rack scheduling options are. We have some choices:

    - The FairScheduler, which will work to ensure resource allocations are enforced on a per-job basis.
    - The CapacityScheduler, which will ensure resource allocations are enforced on a per-queue basis.
    - Writing your own implementation of the abstract class org.apache.hadoop.mapred.job.TaskScheduler is an option, but usually overkill.

    If you're going to have several concurrent users and leverage the more interactive aspects of the Hadoop environment (e.g. Pig and Hive scripting), the FairScheduler is definitely the way to go. In particular, we can do user-specific pools so that default users get their fair share, and specific users are given the resources their workloads require.

    To enable fair scheduling, we're going to need to do a couple of things. First, we need to tell the JobTracker that we want to use scheduling and where we're going to be defining our allocations. We do this by adding the following to the mapred-site.xml file in HADOOP_HOME/conf:

        <property>
          <name>mapred.jobtracker.taskScheduler</name>
          <value>org.apache.hadoop.mapred.FairScheduler</value>
        </property>
        <property>
          <name>mapred.fairscheduler.allocation.file</name>
          <value>/path/to/allocations.xml</value>
        </property>
        <property>
          <name>mapred.fairscheduler.poolnameproperty</name>
          <value>pool.name</value>
        </property>
        <property>
          <name>pool.name</name>
          <value>${user.name}</value>
        </property>

    What we've done here is simply tell the JobTracker that we'd like task scheduling to use the FairScheduler class rather than a single FIFO queue. Moreover, we're going to be defining our resource pools and allocations in a file called allocations.xml. For reference, the allocation file is read every 15s or so, which allows for tuning allocations without having to take down the JobTracker. Our allocation file is now going to look a little like this:

        <?xml version="1.0"?>
        <allocations>
          <pool name="dan">
            <minMaps>5</minMaps>
            <minReduces>5</minReduces>
            <maxMaps>25</maxMaps>
            <maxReduces>25</maxReduces>
            <minSharePreemptionTimeout>300</minSharePreemptionTimeout>
          </pool>
          <user name="dan">
            <maxRunningJobs>6</maxRunningJobs>
          </user>
          <userMaxJobsDefault>3</userMaxJobsDefault>
          <fairSharePreemptionTimeout>600</fairSharePreemptionTimeout>
        </allocations>

    In this case, I've explicitly set my username to have upper and lower bounds on the maps and reduces, and allotted myself double the number of running jobs. Now, if I run Hive or Pig jobs from either the console or via the Hue web interface, I'll be treated "fairly" by the JobTracker. There's a lot more tweaking that can be done to the allocations file, so it's best to dig down into the description and start trying out allocations that might fit your workload.
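    Because the pool name comes from the pool.name job property, a user can also route an individual job into a different pool at submission time. A hedged sketch, assuming the job's driver uses ToolRunner/GenericOptionsParser so that -D properties are honored; the jar, class, and pool name "etl" are all illustrative:

        # Submit a job into the "etl" pool instead of the ${user.name} default
        hadoop jar my-job.jar com.example.MyJob -D pool.name=etl input/ output/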

  • Explaining Explain Plan Notes for Auto DOP

    - by jean-pierre.dijcks
    I've recently gotten some questions around "why do I not see a parallel plan" while Auto DOP is on (I think)...? It is probably worthwhile to quickly go over some of the ways to find out what Auto DOP was thinking. In general, there is no need to go tracing sessions and look under the hood. The thing to start with is to do an explain plan on your statement and to look at the parameter settings on the system.

    Parameter Settings to Look At

    First and foremost, make sure that parallel_degree_policy = AUTO. If you have that parameter set to LIMITED you will not have queuing, and we will only do the auto magic if your objects are set to default parallel (so no degree specified).

    Next you want to look at the value of parallel_degree_limit. It is typically set to CPU, which in default settings equates to the default DOP of the system. If you are testing Auto DOP itself and the impact it has on performance, you may want to leave it at this CPU setting. If you are running concurrent statements you may want to give this some more thought. See here for more information. In general, do stick with either CPU or with a specific number. For now, avoid the IO setting, as I've seen some mixed results with that...

    In 11.2.0.2 you should also check that IO calibration has been run (a sketch of running the calibration appears at the end of this post). Best to simply do a:

        SQL> select * from V$IO_CALIBRATION_STATUS;

        STATUS        CALIBRATION_TIME
        ------------- ----------------------------------------------------------------
        READY         04-JAN-11 10.04.13.104 AM

    You should see that your IO calibration is READY, and therefore Auto DOP is ready. In any case, if you did not run the IO calibration step you will get the following note in the explain plan:

        Note
        -----
           - automatic DOP: skipped because of IO calibrate statistics are missing

    One more note on calibrate_io: if you do not have asynchronous IO enabled you will see:

        ERROR at line 1:
        ORA-56708: Could not find any datafiles with asynchronous i/o capability
        ORA-06512: at "SYS.DBMS_RMIN", line 463
        ORA-06512: at "SYS.DBMS_RESOURCE_MANAGER", line 1296
        ORA-06512: at line 7

    While this is changed in some fixes to the calibrate procedure, you should really consider switching asynchronous IO on for your data warehouse.

    Explain Plan Explanation

    To see the notes that are shown and explained here (and the above little snippet) you can use a simple explain plan mechanism. There should be no need to add +parallel etc.:

        explain plan for <statement>;
        SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());

    Auto DOP

    The note structure displaying why Auto DOP did not work (with the exception noted above on IO calibration) is like this:

        Automatic degree of parallelism is disabled: <reason>

    These are the reason codes:

    - Parameter - parallel_degree_policy = MANUAL, which will not allow Auto DOP to kick in.
    - Hint - One of the following hints is used: NOPARALLEL, PARALLEL(1), PARALLEL(MANUAL).
    - Outline - A SQL outline of an older version (before 11.2) is used.
    - SQL property restriction - The statement type does not allow for parallel processing.
    - Rule-based mode - Instead of the cost-based optimizer, the system is using the RBO.
    - Recursive SQL statement - The statement type does not allow for parallel processing.
    - pq disabled/pdml disabled/pddl disabled - For some reason (alter session?) parallelism is disabled.
    - Limited mode but no parallel objects referenced - Your parallel_degree_policy = LIMITED and no objects in the statement are decorated with the default PARALLEL degree. In most cases all objects have a specific degree, in which case Auto DOP will honor that degree.

    Parallel Degree Limited

    When Auto DOP does its work, you may see the cap you imposed with parallel_degree_limit showing up in the note section of the explain plan:

        Note
        -----
           - automatic DOP: Computed Degree of Parallelism is 16 because of degree limit

    This is an obvious indication that you are being capped for this statement. There is one quite interesting case that happens when you are being capped at DOP = 1. First off, you get a serial plan, and the note changes slightly in that it does not indicate it is being capped (we hope to update the note at some point in time to be more specific). It right now looks like this:

        Note
        -----
           - automatic DOP: Computed Degree of Parallelism is 1

    Dynamic Sampling

    With 11.2.0.2 you will start seeing another interesting change in parallel plans, and since we are talking about the note section here, I figured we'd throw this in for good measure. If we deem the parallel (!) statement complex enough, we will enact dynamic sampling on your query. This happens as long as you did not change the default for dynamic sampling on the system. The note looks like this:

        Note
        -----
        - dynamic sampling used for this statement (level=5)
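    Returning to the IO calibration prerequisite above: if calibration has never been run on your system, here is a minimal sketch of kicking it off with DBMS_RESOURCE_MANAGER.CALIBRATE_IO. The disk count and latency values below are illustrative only; size them for your own storage, and run this at a quiet time, as the procedure generates real IO against your datafiles:

        DECLARE
          l_max_iops PLS_INTEGER;
          l_max_mbps PLS_INTEGER;
          l_latency  PLS_INTEGER;
        BEGIN
          DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
            num_physical_disks => 4,    -- example value
            max_latency        => 10,   -- example value, in milliseconds
            max_iops           => l_max_iops,
            max_mbps           => l_max_mbps,
            actual_latency     => l_latency);
        END;
        /

    Once it completes, V$IO_CALIBRATION_STATUS should report READY as shown earlier.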

  • Avoiding coupling

    - by Seralize
    It is also true that a system may become so coupled, where each class is dependent on other classes that depend on other classes, that it is no longer possible to make a change in one place without having a ripple effect and having to make subsequent changes in many places. [1] This is why using an interface or an abstract class can be valuable in any object-oriented software project. (Quote from Wikipedia)

    Starting from scratch

    I'm starting from scratch with a project that I recently finished, because I found the code to be too tightly coupled and hard to refactor, even when using MVC. I will be using MVC on my new project as well, but want to try to avoid the pitfalls this time, hopefully with your help.

    Project summary

    My issue is that I really wish to keep the controller as clean as possible, but it seems like I can't. The basic idea of the program is that the user picks wordlists, which are sent to the game engine. It will pick random words from the lists until there are none left.

    Problem at hand

    My main problem is that the game will have 'modes', and needs to check the input in different ways through a method called checkWord(), but exactly where to put this and how to abstract it properly is a challenge to me. I'm new to design patterns, so I'm not sure whether any exist that might fit my problem.

    My own attempt at abstraction

    Here is what I've got so far after hours of 'refactoring' the design plans. I know it's long, but it's the best I could do to try and give you an overview. (Note: as this is a sketch, anything is subject to change; all help and advice is very welcome. Also note the marked coupling points.)

    Wordlist:

        class Wordlist {
            // Basic CRUD etc. here!

            // Other sample methods:
            public function wordlistCount($user_id) {} // Returns count of how many wordlists a user has
            public function getAll($user_id) {} // Returns all wordlists of a user
        }

    Word:

        class Word {
            // Basic CRUD etc. here!

            // Other sample methods:
            public function wordCount($wordlist_id) {} // Returns count of words in a wordlist
            public function getAll($wordlist_id) {} // Returns all words from a wordlist
            public function getWordInfo($word_id) {} // Returns information about a word
        }

    Wordpicker:

        class Wordpicker {
            // The class needs to know which words and wordlists to exclude
            protected $_used_words = array();
            protected $_used_wordlists = array();

            // Wordlists to pick words from
            protected $_wordlists = array();

            /* Public Methods */
            public function setWordlists($wordlists = array()) {}
            public function setUsedWords($used_words = array()) {}
            public function setUsedWordlists($used_wordlists = array()) {}
            public function getRandomWord() {} // COUPLING POINT! Will most likely need to communicate with both the Wordlist and Word classes

            /* Protected Methods */
            protected function _checkAvailableWordlists() {} // COUPLING POINT! Might need to check if wordlists are deleted etc.
            protected function _checkAvailableWords() {} // COUPLING POINT! Method needs to get all words in a wordlist from the Word class
        }

    Game:

        class Game {
            protected $_session_id; // The ID of a game session which gets stored in the database along with game details
            protected $_game_info = array();

            // Game instantiation
            public function __construct($user_id) {
                if (! $this->_session_id = $this->_gameExists($user_id)) {
                    // New game
                } else {
                    // Resume game
                }
            }

            // This is the method I tried to make flexible by using abstract classes etc.
            // Does it even belong in this class at all?
            public function checkWord($answer, $native_word, $translation) {} // Checks the answer against the native word / translation word, depending on game mode

            public function getGameInfo() {} // Returns information about a game session, or creates it if it does not exist
            public function deleteSession($session_id) {} // Deletes a game session from the database

            // Methods dealing with game session information
            protected function _gameExists($user_id) {}
            protected function _getProgress($session_id) {}
            protected function _updateProgress($game_info = array()) {}
        }

    The game:

        /* CONTROLLER */
        /* "Guess the word" page */

        // User input
        $game_type = $_POST['game_type']; // Chosen with radio buttons etc.
        $wordlists = $_POST['wordlists']; // Chosen with checkboxes etc.

        // Starts a new game or resumes one from the database
        $game = new Game($_SESSION['user_id']);
        $game_info = $game->getGameInfo();

        // Instantiates a new Wordpicker
        $wordpicker = new Wordpicker();
        $wordpicker->setWordlists((isset($game_info['wordlists'])) ? $game_info['wordlists'] : $wordlists);
        $wordpicker->setUsedWordlists((isset($game_info['used_wordlists'])) ? $game_info['used_wordlists'] : NULL);
        $wordpicker->setUsedWords((isset($game_info['used_words'])) ? $game_info['used_words'] : NULL);

        // Fetches an available word
        if (! $word_id = $wordpicker->getRandomWord()) {
            // No more words left - game over!
            $game->deleteSession($game_info['id']);
            redirect();
        } else {
            // Presents word details to the user
            $word = new Word();
            $word_info = $word->getWordInfo($word_id);
        }

    The bit to finish:

        /* CONTROLLER */
        /* "Check the answer" page */

        // ?????????????????? ( http://pastebin.com/cc6MtLTR )

    Thanks in advance.

    Questions

    - To what extent should objects be loosely coupled? If object A needs info from object B, how is it supposed to get this without losing too much cohesion?
    - As suggested in the comments, models should hold all business logic. However, as objects should be independent, where do we glue them together? Should the model contain some sort of "index" or "client" area which connects the dots?

    Edit: So basically what I should do for a start is to make a new model which I can more easily call with one-liners such as:

        $model->doAction(); // Lots of code in here which uses classes!

    How about the method for checking words? Should it be its own object? I'm not sure where I should put it, as it's pretty much part of the 'game'. But on the other hand, I could just leave out the 'abstraction and OOPness' and make it a method of the 'client model', which will be encapsulated from the controller anyway. Very unsure about this.
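    One way to act on the interface/abstract-class advice from the opening quote is to pull checkWord() out of Game into a per-mode strategy object. This is only an illustrative sketch of that idea, not code from the question; the class names and comparison logic are made up:

        interface WordChecker {
            public function checkWord($answer, $native_word, $translation);
        }

        // One concrete strategy per game mode
        class TranslationChecker implements WordChecker {
            public function checkWord($answer, $native_word, $translation) {
                return strcasecmp(trim($answer), trim($translation)) === 0;
            }
        }

        class NativeChecker implements WordChecker {
            public function checkWord($answer, $native_word, $translation) {
                return strcasecmp(trim($answer), trim($native_word)) === 0;
            }
        }

        class Game {
            protected $_checker;

            // The mode-specific checker is injected, so Game never branches on game mode
            public function __construct(WordChecker $checker) {
                $this->_checker = $checker;
            }

            public function checkWord($answer, $native_word, $translation) {
                return $this->_checker->checkWord($answer, $native_word, $translation);
            }
        }

    The controller (or a small factory) would pick the right checker based on $_POST['game_type'], so adding a new mode becomes a new class rather than a new conditional inside Game.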

  • Domain Controller DNS Best Practice/Practical Considerations for Domain Controllers in Child Domains

    - by joeqwerty
    I'm setting up several child domains in an existing Active Directory forest, and I'm looking for some conventional wisdom/best practice guidance for configuring both the DNS client settings on the child domain controllers and the DNS zone replication scope.

    Assuming a single domain controller in each domain, and assuming that each DC is also the DNS server for its domain (for simplicity's sake): should the child domain controller point to itself for DNS only, or should it point to some combination (primary vs. secondary) of itself and the DNS server in the parent or root domain? If a parent/child/grandchild domain hierarchy exists (with a contiguous DNS namespace), how should DNS be configured on the grandchild DC?

    Regarding the DNS zone replication scope: if storing each domain's DNS zone on all DNS servers in the domain, then I'm assuming a DNS delegation from the parent to the child needs to exist, and that a forwarder from the child to the parent needs to exist. With a parent/child/grandchild domain hierarchy, does each child forward to the direct parent for the direct parent's zone, or to the root zone? Does the delegation occur at the direct parent zone or from the root zone?

    If storing all DNS zones on all DNS servers in the forest, does that make the above questions regarding the replication scope moot? Does the replication scope have some bearing on the DNS client settings on each DC?

  • Best practice ACLs to prepare for auditors?

    - by Nic
    An auditor will be visiting our office soon, and they will require read-only access to our data. I have already created a domain user account and placed it into a group called "Auditors".

    We have a single file server (Windows Server 2008) with about ten shared folders. All of the shares are set up to allow full access to authenticated users, and access restrictions are implemented with NTFS ACLs. Most folders allow full access to the "Domain Users" group, but the auditor won't need to make any changes. It takes several hours to update NTFS ACLs, since we have about one million files.

    Here are the options that I am currently considering:

    1. Create a "staff" group to assign read/write instead of "Domain Users" at the share level.
    2. Create a "staff" group to assign read/write instead of "Domain Users" at the NTFS level.
    3. Deny access to the "Auditors" group at the share level.
    4. Deny access to the "Auditors" group at the NTFS level.
    5. Accept the status quo and trust the auditor.

    I will probably need to configure similar users in the future, as some of our contractors require a domain account but shouldn't be able to modify our client data. Is there a best practice for this?
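    For the NTFS-level options, icacls is the usual tool. A hedged sketch with a made-up path and domain name; note that recursing over a million files is exactly the multi-hour operation described above, so test on a small subtree first:

        :: Grant the Auditors group read/execute on the share root and everything below it
        :: (OI)=object inherit, (CI)=container inherit, RX=read and execute, /T=recurse
        icacls "D:\Shares\ClientData" /grant "MYDOMAIN\Auditors":(OI)(CI)RX /T

    Pairing a read-only grant like this with a "staff" read/write group at the NTFS level keeps the share-level permissions simple: authenticated users keep Full Control at the share, and NTFS decides the rest.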

  • SQLAuthority News – Pluralsight Course Review – Practices for Software Startups – Part 1 of 2

    - by pinaldave
    This is the first part of a two-part series on the Practices for Software Startups Pluralsight course. The course is written by Stephen Forte (Blog | Twitter). Stephen Forte is the Chief Strategy Officer of the venture-backed company Telerik, a leading vendor of developer and team productivity tools. Stephen is also a Certified Scrum Master, Certified Scrum Professional, and PMP, and speaks regularly at industry conferences around the world. He has written several books on application and database development. Stephen is also a board member of the Scrum Alliance.

    Startups – Everybody's Dream

    Start-up companies are an important topic right now – everyone wants to start their own business. It is also important to remember that all companies were a start-up at one point – from your corner store to giants like Microsoft and Apple. Research proves that not every start-up succeeds; in fact, most will fail before their first year. There are many reasons for this, and one is that there are many stages to a start-up company, and stumbling at any of these stages can lead to failure. It is important to understand what makes a start-up company succeed at all its hurdles to become successful. It is even important to define success. For most start-ups this would mean becoming their own independently functioning company, or being bought out for a hefty profit by a larger company. The idea of making a hefty profit by living your dream is extremely important, and you can even think of start-ups as the new craze. That's why studying them is so important – they are very popular, but things have changed a lot since their inception.

    Starting the Startups

    Beginning a start-up company used to be difficult, but now facilities and information are widely available, and it is much easier. But that means it is much easier to fail, too. Previously, to start your own company, everything was planned and organized, and resources were ensured and backed up before beginning; even the idea of starting your own business was a big thing. Now anybody can do it, and the steps are outlined everywhere – you can get online software and easily outsource, cloud-source, or crowdsource a lot of your material. But without the type of planning previously required, things can often go badly.

    New Products – New Ideas – New World

    There are so many fantastic new products, but they don't reach success all the time. I find start-up companies very interesting, and whenever I meet someone who is interested in the subject or already starting their own company, I always ask what they are doing, their plans, goals, market, etc. I am sorry to say that in most cases they cannot answer my questions. It is true that many fantastic ideas fail because of bad decisions. These bad decisions were not made intentionally; people were simply unaware of what they should be doing. This will always lead to failure. But I am happy to say that all these issues can be addressed, because Pluralsight is now offering a course all about start-ups by Stephen Forte. Stephen is a start-up leader. He has successfully started many companies, and most are still going strong or have gone on to even bigger and better things.

    Beginning the Course on Startups

    I have always thought start-ups are a fascinating subject, and decided to take his course, but it is three hours long. This would be hard to fit into my busy work day all at once, so I decided to do half of his course before my daughter wakes up, and the other half after she goes to sleep. The course is divided into six modules, so this was easy to do. I began the first chapter early in the morning, at 5 am.

    Stephen jumped right into the middle of the subject in the very first module – designing your business plan. The first question you will have to answer to yourself, to others, and to investors is: what is your product, and when will we be able to see it? So a very important concept is a "minimum viable product." This means setting goals for yourself and your product. We all have large dreams, but your minimum viable product doesn't have to be your final vision at the very first. For example: Apple is a giant company, but it is still evolving. Steve Jobs didn't envision the iPhone 6 at the very beginning. He had to start at the first iPhone and do his market research, and the idea evolved into the technology you see now. So for yourself, you should decide on a beginning and a stopping point. Do your market research. Determine who you want to reach, what audience you want for your product. You can have a great idea that simply will not work in the market for lack of need, bottlenecks, lack of resources, or competition. There is a lot of research that needs to be done before you even write a business plan, and Stephen covers it in the very first chapter.

    The Team – Unique Key to Success

    After jumping right into the subject in the very first module, I wondered what Stephen could have in store for me for the rest of the course. Chapter number two is building a team. Having a team is important regardless of what your start-up is. You can be a true visionary with endless ideas and energy, but one person still cannot do everything. It is important to decide from the very beginning if you will have cofounders, team leaders, and how many employees you'll need. Even more important, you'll need to decide what kind of team you want – what personalities, skills, and type of energy you want each of your employees to bring. Do you want to have an A+ team with a B- idea, or do you have a B- idea that needs an A+ team to sell it? Stephen asks all the hard questions! I was especially impressed by his insight on developers. You have to decide if you need developers, how many, and what their skills should be. I found this insight extremely useful for everyday usage, not just for start-up companies. I would apply this kind of information in management at any position. An amazing team will build an amazing product – and it doesn't matter if you're a start-up company or a small team working for a much larger business.

    Customer Development – The Ultimate Objective

    Chapter three was about customer development. According to Stephen, there are four different steps to developing a customer base. The first question to ask yourself is whether you are envisioning a large customer base buying a few products each, or a small, dedicated base that buys a lot of your product – quantity vs. quality. He also discusses how to earn, retain, and get more customers. He says that each customer should be placed in a different role – some will be like investors, who regularly spend with you and invest their money in your business. It is then your job to take that investment and turn it into a better product in the future. You need to deal with their money properly – think of it as theirs as investors, not yours as profit. At the end of this module I felt that only Stephen could provide this kind of insight, and then he listed all the resources he took his information from. I have never seen a group of people so passionate about their customers.

    It was indeed a long day for me. In tomorrow's part 2 we will discuss the remaining three modules and also see a quick video of the Practices for Software Startups Pluralsight course.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Best Practices, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

  • SharePoint 2010 Design & Deployment Best Practices

    - by Michael Van Cleave
    Well, now that SharePoint 2010 has successfully launched and everyone is scratching for every piece of best practices information they can get their hands on, I would like to invite anyone and everyone to come and take part in ShareSquared's next webinar. The webinar will cover some key information such as:

    - Pros and cons of the different approaches to installing and configuring SharePoint 2010
    - Configuration best practices for SharePoint 2010 farms
    - Services architecture; dependencies, licensing, and topologies
    - Information architecture guidance for sizing, multilingual support, multi-tenancy, and more
    - Using tools such as SharePoint Composer and SharePoint Maestro to configure and deploy SharePoint 2010
    - And most of all, avoiding common pitfalls for installation and deployment

    What is better than all of that? Well, the even more exciting thing is that the presenters will be our very own SharePoint MVPs Gary Lapointe and Paul Stork. If you don't know who these guys are, then you should definitely check out their blogs and their contributions to the SharePoint community.

    To get more information and register, click here: REGISTER

    Other great links to information in this post:

    - ShareSquared, Inc
    - Gary Lapointe's Blog
    - Paul Stork's Blog
    - SharePoint Composer

    Check it out and get up to speed from some of the best in the industry.

    Michael

  • SQL SERVER – Guest Post – Architecting Data Warehouse – Niraj Bhatt

    - by pinaldave
    Niraj Bhatt works as an Enterprise Architect for a Fortune 500 company and has an innate passion for building and studying software systems. He is a top-rated speaker at various technical forums including Tech·Ed, MCT Summit, Developer Summit, and Virtual Tech Days, among others. Having run a successful startup for four years, Niraj enjoys working on IT innovations that can impact an enterprise's bottom line, streamlining IT budgets through IT consolidation, architecture and integration of systems, performance tuning, and review of enterprise applications. He has received the Microsoft MVP award for ASP.NET, Connected Systems, and most recently Windows Azure. When he is away from his laptop, you will find him taking deep dives in automobiles, pottery, rafting, photography, cooking, and financial statements, though not necessarily in that order. He is also a manager/speaker at BDOTNET, Asia's largest .NET user group. Here is the guest post by Niraj Bhatt.

    As the data in your applications grows, it's the database that usually becomes the bottleneck. It's hard to scale a relational DB, and the preferred approach for large-scale applications is to create separate databases for writes and reads. These databases are referred to as the transactional database and the reporting database. Though there are tools and techniques that can allow you to create a snapshot of your transactional database for reporting purposes, sometimes they don't quite fit the reporting requirements of an enterprise. These requirements typically are data analytics, an effective schema (for an information worker to self-service herself), historical data, better performance (flat data, no joins), etc. This is where the need for a data warehouse or an OLAP system arises.

    A key point to remember is that a data warehouse is mostly a relational database. It's built on top of the same concepts: tables, rows, columns, primary keys, foreign keys, etc. Before we talk about how data warehouses are typically structured, let's understand the key components that create a data flow between OLTP systems and OLAP systems. There are three major areas:

    a) The OLTP system should be capable of tracking its changes, as all these changes should go back to the data warehouse for historical recording. For example, if an OLTP transaction moves a customer from the silver to the gold category, the OLTP system needs to ensure that this change is tracked and sent to the data warehouse for reporting purposes. A report in this context could be how many customers, divided by geography, moved from the silver to the gold category. In data warehouse terminology this process is called Change Data Capture. There are quite a few systems that leverage database triggers to move these changes to corresponding tracking tables. There are also out-of-the-box features provided by some databases; e.g., SQL Server 2008 offers Change Data Capture and Change Tracking for addressing such requirements.

    b) After we make the OLTP system capable of tracking its changes, we need to provision a batch process that runs periodically, takes these changes from the OLTP system, and loads them into the data warehouse. There are many tools out there that can help you fill this gap – SQL Server Integration Services happens to be one of them.

    c) So we have an OLTP system that knows how to track its changes, and we have jobs that run periodically to move these changes to the warehouse. The question that remains is: how will the warehouse record these changes? This structural change in the data warehouse arena is often covered under something called Slowly Changing Dimensions (SCD). While we will talk about dimensions in a while, SCD can be applied to pure relational tables too. SCD enables a database structure to capture historical data. This creates multiple records for a given entity in a relational database, and data warehouses prefer having their own primary key, often known as a surrogate key.

    As I mentioned, a data warehouse is just a relational database, but industry often attributes a specific schema style to data warehouses. These styles are the Star Schema and the Snowflake Schema. The motivation behind these styles is to create a flat database structure (as opposed to a normalized one), which is easy to understand and use, easy to query, and easy to slice and dice. A star schema is a database structure made up of dimensions and facts. Facts are generally the numbers (sales, quantity, etc.) that you want to slice and dice. Fact tables have these numbers and have references (foreign keys) to a set of tables that provide context around those facts. For example, if you have recorded 10,000 USD in sales, that number would go in a sales fact table and could have foreign keys attached to it that refer to the sales agent responsible for the sale and to a time table which contains the dates between which that sale was made. These agent and time tables are called dimensions, which provide context to the numbers stored in fact tables. This schema structure, with the fact at the center surrounded by dimensions, is called a star schema. A similar structure, with the difference that the dimension tables are normalized, is called a snowflake schema. (A concrete sketch of these tables appears at the end of this post.)

    This relational structure of facts and dimensions serves as an input for another analysis structure called a cube. Though physically a cube is a special structure supported by commercial databases like SQL Server Analysis Services, logically it's a multidimensional structure where dimensions define the sides of the cube and facts define the content. Facts are often called measures inside a cube. Dimensions often tend to form a hierarchy; e.g., Product may be broken into categories, and categories in turn into individual items. Category and Items are often referred to as levels, and their constituents as members, with the overall structure called a hierarchy. Measures are rolled up as per the dimensional hierarchy. These rolled-up measures are called aggregates. Now this may seem like an overwhelming vocabulary to deal with, but don't worry, it will sink in as you start working with cubes.

    Let's see a few other terms that we run into while talking about data warehouses. ODS, or an Operational Data Store, is a frequently misused term. There will be a few users in your organization who want to report on the most current data and can't afford to miss a single transaction for their report. Then there is another set of users who typically don't care how current the data is. Mostly senior-level executives, who are interested in trending, mining, forecasting, strategizing, etc., don't care about one specific transaction. This is where an ODS can come in handy. An ODS can use the same star schema and the OLAP cubes we saw earlier. The only difference is that the data inside an ODS is short-lived, i.e. kept for a few months, and the ODS syncs with the OLTP system every few minutes. The data warehouse can periodically sync with the ODS, either daily or weekly, depending on business drivers.

    Data marts are another frequently talked-about topic in data warehousing. They are subject-specific data warehouses. Data warehouses that try to span an entire enterprise are normally too big to scope, build, manage, track, etc. Hence they are often scaled down to something called a data mart that supports a specific segment of business like sales, marketing, or support. Data marts, too, are often designed using the star schema model discussed earlier. Industry is divided when it comes to the use of data marts. Some experts prefer having data marts along with a central data warehouse. The data warehouse here acts as an information staging and distribution hub, with the spokes being data marts connected via data feeds serving summarized data. Others eliminate the need for a centralized data warehouse, citing that most users want to report on detailed data.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Best Practices, Business Intelligence, Data Warehousing, Database, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
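    To make the star-schema idea from the post concrete, here is a minimal T-SQL sketch of the sales example above. Table and column names are illustrative, not from the original post:

        -- Dimension tables carry the context (who, when)
        CREATE TABLE DimAgent (
            AgentKey     INT PRIMARY KEY,   -- surrogate key
            AgentName    VARCHAR(100),
            Region       VARCHAR(50)
        );

        CREATE TABLE DimTime (
            TimeKey      INT PRIMARY KEY,   -- surrogate key
            CalendarDate DATE,
            MonthName    VARCHAR(20),
            YearNumber   INT
        );

        -- The fact table carries the measures, pointing at each dimension
        CREATE TABLE FactSales (
            AgentKey     INT REFERENCES DimAgent(AgentKey),
            TimeKey      INT REFERENCES DimTime(TimeKey),
            SalesAmount  DECIMAL(12, 2),    -- e.g. the 10,000 USD sale
            Quantity     INT
        );

    A snowflake schema would differ only in normalizing the dimensions, e.g. splitting Region out of DimAgent into its own table.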

  • Frank Buytendijk on Prahalad, Business Best Practices

    - by Bob Rhubart
    In his video on the questionable value of some business best practices, Frank Buytendijk mentions a recent HBR article by business guru C.K. Prahalad. I just learned that Prahalad passed away this past weekend at the age of 68. (Information Week obit)

    A couple of years ago I had the good fortune to attend Mr. Prahalad's keynote address at a Gartner event. He had an audience of software architects absolutely mesmerized as he discussed technology's role in the changing nature of business competition. The often dysfunctional relationship between IT and business has been, and will probably always be, a hot-button issue. But during Prahalad's keynote there was a palpable sense that the largely technical audience was having some kind of breakthrough, that they had achieved a new level of understanding about the importance of the relationship between the two camps. Fortunately, Prahalad leaves behind a significant body of work that will remain a valuable resource as business, and the technology that supports it, continues to evolve.

    Technorati Tags: business best practices, enterprise architecture, prahalad, oracle
    del.icio.us Tags: business best practices, enterprise architecture, prahalad, oracle

  • Best Practices for MVC Architecture

    - by Mystere Man
    There are a number of questions on Stack Overflow regarding MVC best practices, but most of those seem to revolve around things like using Dependency Injection, or creating helper functions, or dos and don'ts of what to do in views and controllers. My question is more about how to architect an MVC application.

    For example, we are encouraged to use DI with the Repository pattern to decouple data access from the controller, yet very little is said on HOW to do that specifically for MVC. Where would we place the Repository classes, for instance? They don't seem to be model-related specifically, since the model should likewise be relatively decoupled from the actual data access technologies.

    A second question involves how to structure the layers or tiers. Most example applications (Nerd Dinner, Music Store, etc.) seem to use a single-tier, two-layer approach (not counting tests) that typically has controllers directly calling L2S or EF code. If I want to create a multi-tier/layer application, what are some of the best practices there in regards to MVC?

    This question is one part standard Q&A, but another part best practices, so it could go either here or Programmers.SE; I am marking it CW. If you feel it would be better suited to Programmers.SE then it can be migrated.

    Edit: What happened to the Community Wiki option? It seems to be gone.
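    By way of illustration only (this is one common layering, not the questioner's code, and all project and type names here are made up): repository interfaces can live in a core/domain assembly, implementations in an infrastructure assembly, and controllers can depend only on the interfaces via DI.

        // MyApp.Core: domain model and repository abstraction (no data-access references)
        public class Dinner
        {
            public int Id { get; set; }
            public string Title { get; set; }
        }

        public interface IDinnerRepository
        {
            Dinner Find(int id);
        }

        // MyApp.Infrastructure: the EF/L2S-backed implementation lives here
        public class EfDinnerRepository : IDinnerRepository
        {
            public Dinner Find(int id)
            {
                // EF/L2S query code would go here
                throw new System.NotImplementedException();
            }
        }

        // MyApp.Web: the controller sees only the interface, injected by the DI container
        public class DinnersController
        {
            private readonly IDinnerRepository _repository;

            public DinnersController(IDinnerRepository repository)
            {
                _repository = repository;
            }
        }

    With a split like this, adding a service tier later means inserting it between Web and Core without touching the controllers' dependency on the interface.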

  • Linux Journal Best Virtualization Solution Readers' Choice 2012

    - by Chris Kawalek
    I'm proud to report that in the latest issue of Linux Journal, their readers named Oracle VM VirtualBox the "Best Virtualization Solution" for 2012. We're excited to receive this honor and want to thank Linux Journal and their readers for recognizing us!

    This is the latest award won by Oracle VM VirtualBox, following a 2011 Bossie Award (best open source software) from InfoWorld, a 2012 Readers' Choice award from Virtualization Review, and several others. These awards help us know that people are using Oracle VM VirtualBox in their day-to-day work and that it's really useful to them. We truly appreciate their (your!) support.

    If you already use Oracle VM VirtualBox, you will know all this. But just in case you haven't tried it yet, here are a few reasons you should download it:

    - It's free for personal use and open source. You can download it in minutes and start running multiple operating systems on your Windows PC, Mac, Oracle Solaris system, or Linux PC.
    - It's fast and powerful, and easy to install and use.
    - It has in-depth support for client technologies like USB, virtual CD/DVD, virtual display adapters with various flavors of 2D and 3D acceleration, and much more.

    If you've ever found yourself in a situation where you were concerned about installing a piece of software because it might be too buggy, or wanted to have a dedicated system to test things on, or wanted to run Windows on a Mac or Oracle Solaris on a PC (or hundreds of other combinations!), or didn't want to install your company's VPN software directly on your home system, then you should definitely give Oracle VM VirtualBox a try. Once you install it, you'll find a myriad of other uses, too.

    Thanks again to the readers of Linux Journal for selecting Oracle VM VirtualBox as the Best Virtualization Solution for 2012. If you'd like to read the whole article, you can purchase this month's issue over at the Linux Journal website.

    -Chris

  • RHN Satellite / Spacewalk custom channels, best practice?

    - by tore-
    Hi, I'm currently setting up RHN Satellite, and all works well. I'm in the process of creating custom channels, since we have certain software that should be available to all nodes registered to the Satellite, e.g. puppet, facter, subversion, and php (a newer version than is present in base).

    I've tried to find documentation on best practices for this: how the channels should be set up, how to handle different arches, how to handle noarch packages, and how to sync updates to dependencies when updating a custom package in a custom channel (e.g. php is updated; how to fetch all of its updated dependencies).

    The channel management documentation from RHEL (http://www.redhat.com/docs/en-US/Red_Hat_Network_Satellite/5.3/Channel_Management_Guide/html/Channel_Management_Guide-Custom_Channel_and_Package_Management.html) doesn't provide me with enough information on how to solve any of these issues.

    All tips, tricks and information regarding this would be great!
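    For what it's worth, the usual mechanics of getting packages into a custom channel go through rhnpush. A hedged sketch; the channel label and server URL below are made up, and flags can differ slightly between Satellite/Spacewalk versions, so check rhnpush --help on your install:

        # Push locally built RPMs into a custom channel on the Satellite
        rhnpush --channel=custom-tools-el5-x86_64 \
                --server=http://satellite.example.com/APP \
                php-*.rpm

    This only covers uploading; the dependency-sync question typically means pushing the updated dependencies into the same (or a cloned) channel as well.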

  • Notifications for Expiring DBSNMP Passwords

    - by Courtney Llamas
    Most user accounts these days have a password profile on them that automatically expires the password after a set number of days. Depending on your company's security requirements, this may be as little as 30 days or as long as 365 days, although typically it falls between 60 and 90 days. For a normal user this can cause a small interruption in your day, as you have to go get your password reset by an admin. When this happens to privileged accounts, such as the DBSNMP account that is responsible for monitoring database availability, it can cause bigger problems. In Oracle Enterprise Manager 12c you may notice the error message "ORA-28002: the password will expire within 5 days" when you connect to a target, or worse, you may get "ORA-28001: the password has expired". If you wait too long, your monitoring will fail because the password is locked out.

    Wouldn't it be nice if we could get an alert 10 days before our DBSNMP password expired? Thanks to Oracle Enterprise Manager 12c Metric Extensions (ME), you can! See the Oracle Enterprise Manager Cloud Control Administrator's Guide for more information on Metric Extensions.

    To create a metric extension, select Enterprise / Monitoring / Metric Extensions, and then click on Create. On the General Properties screen select either Cluster Database or Database Instance, depending on which target you need to monitor. If you have both RAC and single-instance targets you may need to create one for each. In this example we will create a Cluster Database metric. Enter a Name for the ME and a Display Name. Then select SQL for the Adapter. Adjust the Collection Schedule as desired; for this example we will collect this metric every 1 day. Notice that for a metric collected every day, we can determine the exact time we want to collect.

    On the Adapter page, enter the query that you wish to execute. In this example we will use the query below, which specifically checks for the DBSNMP user expiring within 10 days. Of course, you can adjust this query to alert for any user that can cause an outage, such as an application account or a service account such as RMAN:

        select username, account_status, trunc(expiry_date - sysdate) days_to_expire
          from dba_users
         where username = 'DBSNMP'
           and expiry_date is not null;

    The next step is to create the columns to store the data returned from the query. Click Add and add a column for each of the fields in the same order that the data is returned. The table below will help you complete the column additions:

        Name           Display Name            Column Type   Value Type   Metric Category   Unit
        Username       User Name               Key           String       Security
        AccountStatus  Account Status          Data          String       Security
        DaysToExpire   Days Until Expiration   Data          Number       Security          Days

    When creating the DaysToExpire column, you can add a default threshold here for Warning and Critical (say < 10 and 5). When all columns have been added, click Next.

    On the Credentials page, you can choose to use the default monitoring credentials or specify new credentials. We will use the default credentials established for our target (dbsnmp).

    The next step is to test your Metric Extension. Click on Add to select a target for testing, then click Select. Now click the button Run Test to execute the test against the selected target(s). We can see in the example below that the Metric Extension has executed and returned a value of 68 days to expire. Click Next to proceed. Review the metric extension in the final screen and click Finish.

    The metric will be created in Editable status. Select the metric, click Actions, and select Deployable Draft. You can do this once more to move it to Published.

    Finally, we want to apply this metric to a target. When managing many targets, it's best to add your metric to a template; for details on adding a Metric Extension to a template, see the Administrator's Guide. For this example, we will deploy this to a target directly. Select Actions / Deploy to Targets. Click Add, select the target you wish to deploy to, and click Submit. Once deployment is complete, we can go to the target and view the Metric & Collection Settings to see the new metric and its thresholds.

    After some time, you will find the metric has collected, and the days to expiration for the DBSNMP user can be seen in the All Metrics view. For metrics collected once per day, you may have to wait up to 24 hours to see the metric and current severity. In the example below, the current severity is Clear (green check) as it is not scheduled to expire within 10 days.

    To test the notification, we can edit the thresholds for the new metric so they trigger an alert. Our password expires in 139 days, so we'll change our Warning to 140 and leave Critical at 5; in our example we also changed the collection time to every 5 minutes. At the next collection, you'll find that the current severity changes to a Warning, and any related Incident Rules would be triggered to create an Incident or Notification as desired.

    Now that you get a notification that your DBSNMP password is about to expire, you can use the OEM Command Line Interface (EM CLI) verb update_db_password to change it at both the database target and the OEM target in one step. The caveat is that you must know the existing password to use the update_db_password command. To learn more about EM CLI, see the Oracle Enterprise Manager Command Line Interface Guide. Below is an example of changing the password with the update_db_password verb:

        $ ./emcli update_db_password -target_name=emrep -target_type=oracle_database -user_name=dbsnmp -change_at_target=yes -change_all_references=yes
        Enter value for old_password :
        Enter value for new_password :
        Enter value for retype_new_password :
        Successfully submitted a job to change the password in Enterprise Manager and on the target database: "emrep"
        Execute "emcli get_jobs -job_id=FA66C1C4D663297FE0437656F20ACC84" to check the status of the job.
        Search for job name "CHANGE_PWD_JOB_FA66C1C4D662297FE0437656F20ACC84" on the Jobs home page to check job execution details.

    The subsequent job created will typically run quickly enough that a blackout is not needed; however, if you submit a script with many targets to change, your job may run slower, so adding a blackout to the script is recommended.

        $ ./emcli get_jobs -job_id=FA66C1C4D663297FE0437656F20ACC84
        Name                                             Type            Job ID                            Execution ID
        CHANGE_PWD_JOB_FA66C1C4D662297FE0437656F20ACC84  ChangePassword  FA66C1C4D663297FE0437656F20ACC84  FA66C1C4D665297FE0437656F20ACC84
        Scheduled            Completed            TZ Offset  Status     Status ID  Owner   Target Type      Target Name
        2014-05-28 09:39:12  2014-05-28 09:39:18  GMT-07:00  Succeeded  5          SYSMAN  oracle_database  emrep

    After implementing the above Metric Extension and using the EM CLI update_db_password verb, you will be able to stay on top of your DBSNMP password changes without experiencing an unplanned monitoring outage.

  • Snake Game Help

    - by MuhammadA
    I am making a snake game while learning XNA. I have three classes: Game.cs, Snake.cs and Apple.cs. My problem is more conceptual: which class should really be responsible for detecting the collision of the snake head with the apple/itself/the wall? Which class should increase the snake's speed and size? However much I try to put the snake logic into Snake.cs, it seems Game.cs still has to know a lot about the snake. For example:

    -- I want to increase the score depending on the size of the snake, but the score variable lives inside Game.cs, which means I now have to ask the snake for its size on every hit of the apple... all this highly coupled code seems a bit unclean.

    -- I do NOT want to place the apple under the snake... but now the apple suddenly has to know about all the snake parts, and my head hurts when I think of that. Maybe there should be some sort of AppleLayer.cs class that knows about the snake...

    What's the best approach in such a simple scenario? Any tips welcome.

    Game.cs:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Audio;
    using Microsoft.Xna.Framework.Content;
    using Microsoft.Xna.Framework.GamerServices;
    using Microsoft.Xna.Framework.Graphics;
    using Microsoft.Xna.Framework.Input;
    using Microsoft.Xna.Framework.Media;
    using Microsoft.Xna.Framework.Design;

    namespace Snakez
    {
        public enum CurrentGameState { Playing, Paused, NotPlaying }

        public class Game1 : Microsoft.Xna.Framework.Game
        {
            private GraphicsDeviceManager _graphics;
            private SpriteBatch _spriteBatch;
            private readonly Color _niceGreenColour = new Color(167, 255, 124);
            private KeyboardState _oldKeyboardState;
            private SpriteFont _scoreFont;
            private SoundEffect _biteSound, _crashSound;
            private Vector2 _scoreLocation = new Vector2(10, 10);
            private Apple _apple;
            private Snake _snake;
            private int _score = 0;
            private int _speed = 1;

            public Game1()
            {
                _graphics = new GraphicsDeviceManager(this);
                Content.RootDirectory = "Content";
            }

            /// <summary>
            /// Allows the game to perform any initialization it needs to before starting to run.
            /// This is where it can query for any required services and load any non-graphic
            /// related content. Calling base.Initialize will enumerate through any components
            /// and initialize them as well.
            /// </summary>
            protected override void Initialize()
            {
                base.Initialize();
            }

            /// <summary>
            /// LoadContent will be called once per game and is the place to load
            /// all of your content.
            /// </summary>
            protected override void LoadContent()
            {
                _spriteBatch = new SpriteBatch(GraphicsDevice);
                _scoreFont = Content.Load<SpriteFont>("Score");
                _apple = new Apple(800, 480, Content.Load<Texture2D>("Apple"));
                _snake = new Snake(Content.Load<Texture2D>("BodyBlock"));
                _biteSound = Content.Load<SoundEffect>("Bite");
                _crashSound = Content.Load<SoundEffect>("Crash");
            }

            /// <summary>
            /// UnloadContent will be called once per game and is the place to unload
            /// all content.
            /// </summary>
            protected override void UnloadContent()
            {
                Content.Unload();
            }

            /// <summary>
            /// Allows the game to run logic such as updating the world,
            /// checking for collisions, gathering input, and playing audio.
            /// </summary>
            /// <param name="gameTime">Provides a snapshot of timing values.</param>
            protected override void Update(GameTime gameTime)
            {
                KeyboardState newKeyboardState = Keyboard.GetState();
                if (newKeyboardState.IsKeyDown(Keys.Escape))
                {
                    this.Exit(); // Allows the game to exit
                }
                else if (newKeyboardState.IsKeyDown(Keys.Up) && !_oldKeyboardState.IsKeyDown(Keys.Up))
                {
                    _snake.SetDirection(Direction.Up);
                }
                else if (newKeyboardState.IsKeyDown(Keys.Down) && !_oldKeyboardState.IsKeyDown(Keys.Down))
                {
                    _snake.SetDirection(Direction.Down);
                }
                else if (newKeyboardState.IsKeyDown(Keys.Left) && !_oldKeyboardState.IsKeyDown(Keys.Left))
                {
                    _snake.SetDirection(Direction.Left);
                }
                else if (newKeyboardState.IsKeyDown(Keys.Right) && !_oldKeyboardState.IsKeyDown(Keys.Right))
                {
                    _snake.SetDirection(Direction.Right);
                }
                _oldKeyboardState = newKeyboardState;

                _snake.Update();
                if (_snake.IsEating(_apple))
                {
                    _biteSound.Play();
                    _score += 10;
                    _apple.Place();
                }
                base.Update(gameTime);
            }

            /// <summary>
            /// This is called when the game should draw itself.
            /// </summary>
            /// <param name="gameTime">Provides a snapshot of timing values.</param>
            protected override void Draw(GameTime gameTime)
            {
                GraphicsDevice.Clear(_niceGreenColour);
                float frameRate = 1 / (float)gameTime.ElapsedGameTime.TotalSeconds;
                _spriteBatch.Begin();
                _spriteBatch.DrawString(_scoreFont, "Score : " + _score, _scoreLocation, Color.Red);
                _apple.Draw(_spriteBatch);
                _snake.Draw(_spriteBatch);
                _spriteBatch.End();
                base.Draw(gameTime);
            }
        }
    }

    Snake.cs:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using Microsoft.Xna.Framework.Graphics;
    using Microsoft.Xna.Framework;

    namespace Snakez
    {
        public enum Direction { Up, Down, Left, Right }

        public class Snake
        {
            private List<Rectangle> _parts;
            private readonly Texture2D _bodyBlock;
            private readonly int _startX = 160;
            private readonly int _startY = 120;
            private int _moveDelay = 100;
            private DateTime _lastUpdatedAt;
            private Direction _direction;
            private Rectangle _lastTail;

            public Snake(Texture2D bodyBlock)
            {
                _bodyBlock = bodyBlock;
                _parts = new List<Rectangle>();
                _parts.Add(new Rectangle(_startX, _startY, _bodyBlock.Width, _bodyBlock.Height));
                _parts.Add(new Rectangle(_startX + bodyBlock.Width, _startY, _bodyBlock.Width, _bodyBlock.Height));
                _parts.Add(new Rectangle(_startX + (bodyBlock.Width) * 2, _startY, _bodyBlock.Width, _bodyBlock.Height));
                _parts.Add(new Rectangle(_startX + (bodyBlock.Width) * 3, _startY, _bodyBlock.Width, _bodyBlock.Height));
                _direction = Direction.Right;
                _lastUpdatedAt = DateTime.Now;
            }

            public void Draw(SpriteBatch spriteBatch)
            {
                foreach (var p in _parts)
                {
                    spriteBatch.Draw(_bodyBlock, new Vector2(p.X, p.Y), Color.White);
                }
            }

            public void Update()
            {
                if (DateTime.Now.Subtract(_lastUpdatedAt).TotalMilliseconds > _moveDelay)
                {
                    _lastTail = _parts.First();
                    _parts.Remove(_lastTail);

                    /* add new head in right direction */
                    var lastHead = _parts.Last();
                    var newHead = new Rectangle(0, 0, _bodyBlock.Width, _bodyBlock.Height);
                    switch (_direction)
                    {
                        case Direction.Up:
                            newHead.X = lastHead.X;
                            newHead.Y = lastHead.Y - _bodyBlock.Width;
                            break;
                        case Direction.Down:
                            newHead.X = lastHead.X;
                            newHead.Y = lastHead.Y + _bodyBlock.Width;
                            break;
                        case Direction.Left:
                            newHead.X = lastHead.X - _bodyBlock.Width;
                            newHead.Y = lastHead.Y;
                            break;
                        case Direction.Right:
                            newHead.X = lastHead.X + _bodyBlock.Width;
                            newHead.Y = lastHead.Y;
                            break;
                    }
                    _parts.Add(newHead);
                    _lastUpdatedAt = DateTime.Now;
                }
            }

            public void SetDirection(Direction newDirection)
            {
                if (_direction == Direction.Up && newDirection == Direction.Down) { return; }
                else if (_direction == Direction.Down && newDirection == Direction.Up) { return; }
                else if (_direction == Direction.Left && newDirection == Direction.Right) { return; }
                else if (_direction == Direction.Right && newDirection == Direction.Left) { return; }
                _direction = newDirection;
            }

            public bool IsEating(Apple apple)
            {
                if (_parts.Last().Intersects(apple.Location))
                {
                    GrowBiggerAndFaster();
                    return true;
                }
                return false;
            }

            private void GrowBiggerAndFaster()
            {
                _parts.Insert(0, _lastTail);
                _moveDelay -= (_moveDelay / 100) * 2;
            }
        }
    }

    Apple.cs:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using Microsoft.Xna.Framework.Graphics;
    using Microsoft.Xna.Framework;

    namespace Snakez
    {
        public class Apple
        {
            private readonly int _maxWidth, _maxHeight;
            private readonly Texture2D _texture;
            private readonly Random random = new Random();

            public Rectangle Location { get; private set; }

            public Apple(int screenWidth, int screenHeight, Texture2D texture)
            {
                _maxWidth = (screenWidth + 1) - texture.Width;
                _maxHeight = (screenHeight + 1) - texture.Height;
                _texture = texture;
                Place();
            }

            public void Place()
            {
                Location = GetRandomLocation(_maxWidth, _maxHeight);
            }

            private Rectangle GetRandomLocation(int maxWidth, int maxHeight)
            {
                // x and y -- multiple of 20
                int x = random.Next(1, maxWidth);
                var leftOver = x % 20;
                x = x - leftOver;
                int y = random.Next(1, maxHeight);
                leftOver = y % 20;
                y = y - leftOver;
                return new Rectangle(x, y, _texture.Width, _texture.Height);
            }

            public void Draw(SpriteBatch spriteBatch)
            {
                spriteBatch.Draw(_texture, Location, Color.White);
            }
        }
    }
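    A minimal sketch of one way to reduce that coupling (plain C#, XNA-independent; the names SnakeModel, AppleSpawner and AppleEaten are illustrative, not from the code above): the snake raises an event when it eats, so the score stays in the game class, and the spawner only sees the cells the snake occupies, so apple placement never needs the snake's internals.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public struct Cell
    {
        public int X, Y;
        public Cell(int x, int y) { X = x; Y = y; }
    }

    public class SnakeModel
    {
        private readonly List<Cell> _parts = new List<Cell>();

        // Raised when the head lands on an apple; the game class subscribes
        // and owns the score, so the snake never needs to know about scoring.
        public event Action<int> AppleEaten; // passes the new length

        // Read-only view of the body, so collaborators can check occupancy
        // without being able to mutate the snake.
        public IReadOnlyList<Cell> OccupiedCells => _parts;

        public SnakeModel() { _parts.Add(new Cell(8, 6)); } // single seed segment for the sketch

        public void NotifyAte()
        {
            _parts.Add(_parts.Last());        // grow (movement elided in this sketch)
            AppleEaten?.Invoke(_parts.Count); // score logic stays outside
        }
    }

    public class AppleSpawner
    {
        private readonly Random _random = new Random();

        // Picks a random grid cell that the snake does not currently cover.
        public Cell Place(int gridWidth, int gridHeight, IReadOnlyList<Cell> occupied)
        {
            Cell candidate;
            do
            {
                candidate = new Cell(_random.Next(gridWidth), _random.Next(gridHeight));
            } while (occupied.Contains(candidate));
            return candidate;
        }
    }

    With this shape, the game class would wire things up roughly as snake.AppleEaten += length => _score += 10 * length; and call spawner.Place(40, 24, snake.OccupiedCells) after each bite; the game knows about both objects, but the snake and the apple never know about each other.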

    Read the article

  • How to practice typing of programmer keys such as tilde, pipe and programmer quote?

    - by user7893
    It is nice that there are services such as TypeRacer where you can practice casual writing, but I want to practice the programmer keys: something that covers more of the numbers and the symbols a regular typist doesn't use. There was a tutor with which I practiced some programmer keys, and I noticed that my speed dropped dramatically, from 70-80 wpm to about 15-30 wpm; it also trains different muscles. So how can I practice just the programming keys, with programming texts or random pieces of code?

    Read the article

  • Basic is Best

    - by Eric A. Stephens
    Fellow foodies will recognize the recent movement towards "farm-to-table" restaurants. These venues attempt to simplify their menus and source ingredients as close to the source as possible. I had the opportunity to dine at such a restaurant the other evening. I was gushing about the appetizer to my server when she described the preparation for the item and then punctuated her comments with "basic is best". I reminded my fellow enterprise architect diners that there was an architecture lesson in that statement. They rolled their eyes and chuckled. But they also knew I was right. I'm reminded of Frederick Brooks' book The Mythical Man-Month and his latest, The Design of Design. The former, a must-read book, talks about complexity. But he refrains from damning all complexity. The world we live in, and the enterprises we strive to transform with enterprise architecture, are complicated organisms, much like the human body. But sometimes a simple solution is the best approach. Fewer applications (think: portfolio rationalization). Fewer components. Fewer lines of code. Whatever level of abstraction you are working at, less is more. I'm reminded of the enterprise architecture principle "Control Technical Diversity". At one firm I created pithy catch phrases for each principle. I named this one "Less is More". But perhaps another variation is what my server said the other night: "Basic is Best".

    Read the article

  • Best practice to create an ftp administrator account on vsftpd

    - by jtd
    Background: My manager would like me to create an administration account for our FTP server. When logged in via FTP, it should instantly display all of the users' home directories, and it should be able to modify any directory or file in any way possible. What would be the best way to go about this? I planned on chrooting this FTP admin to /home, but I don't know how to properly go about the permissions. Maybe make a group called ftp_admins, and chgrp the /home folder? But then wouldn't that affect the users accessing their own folders? Any help is appreciated.

    Read the article

  • Is using built-in sorting considered cheating in practice tests?

    - by user10326
    I am using one of the online practice judges, where a practice problem is posed, you submit your answer, and it tells you whether the answer is accepted or not based on test inputs. My question is the following: in one of the practice tests, I needed to sort an array as part of the solution algorithm. If it matters, the problem was: find two numbers in an array that add up to a specific target. As part of my algorithm I sorted the array, but to do that I used Java's built-in quicksort rather than implementing the sort as part of the same method. To do that I had to write: java.util.Arrays.sort(array); Since I had to use the fully qualified name, I am wondering if this is a kind of "cheating" (I mean, perhaps an online judge does not expect this). Is it? In a formal interview (since these tests are practice for interviews, as I understand), would this be acceptable?
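    For reference, a minimal sketch of the sort-then-scan approach the question describes, written here in C# with the framework's Array.Sort standing in for java.util.Arrays.sort; the method name and shape are illustrative, not taken from the judge:

    using System;

    public static class TwoSum
    {
        // Returns a pair of values from 'numbers' adding up to 'target',
        // or null if no such pair exists. Sorting costs O(n log n); the
        // two-pointer scan afterwards is O(n).
        public static (int, int)? FindPair(int[] numbers, int target)
        {
            Array.Sort(numbers); // the library sort the question asks about
            int lo = 0, hi = numbers.Length - 1;
            while (lo < hi)
            {
                int sum = numbers[lo] + numbers[hi];
                if (sum == target) return (numbers[lo], numbers[hi]);
                if (sum < target) lo++; // need a larger sum
                else hi--;              // need a smaller sum
            }
            return null; // no pair adds up to target
        }
    }

    Whether a judge or interviewer counts the library sort as "cheating" depends on what they are testing; the sketch at least makes clear that the interesting part of the solution, the linear two-pointer scan, is independent of who implements the sort.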

    Read the article

  • WSUS Updates - Best Practice

    - by What'sTheStoryWishBone
    We have an isolated environment of a few hundred servers to which we push updates using WSUS. We have thousands of updates to manage and push to devices, testing along the way to ensure an update will not break anything. What best practices do you follow in your enterprise networks to ensure an update does not go out to all the machines and break something? We currently have ours broken into customized groups for each type of machine. There is one "Test Group", with one PC of each type, which we apply updates to for error checking. Is this a similar procedure to what others follow, or is there an easier, safer way to manage the thousands of WSUS updates?

    Read the article

  • Best approach for tracking dependent state

    - by Pace
    Let's pretend I work on a project tracking application. The application is a database-backed, server-hosted web application. In this application there are Projects, which have many Activities, which have many Tasks. A Task has two date fields: an originalDueDate and a projectedDueDate. In addition, there are dynamic fields on the Activities and the Projects which indicate whether the Activity or Project is behind schedule, based on the projected due dates of the child tasks and various other variables such as remaining buffer time. There are a number of things that can cause the projectedDueDate to change. For example, an employee working on the project may (via a server request) enter in a shipping delay. Alternatively, a site may (via a server request) enter in an unexpected closure. When any of these things occurs I need to not only update the projectedDueDate of the Task but also trigger the corresponding Project and Activity to update as well. What is the best way to do this? I've thought of the observer pattern, but I don't keep a single copy of all these objects in memory. When a request comes in, I query the Task from the database; at that point there is no associated Activity in memory that would be a listener. I could remove the ability to query for Tasks directly and force the application to query first by Project, then by Activity (in the context of the Project), then by Task (in the context of the Activity), adding the observer relationships at each step, but I'm not sure that is the best way. I could set up a database event listening system, so that when a Task-modified event is dispatched I have a handler which queries for the Activity at that point. Or I could simply set up a two-way relationship between Task and Activity, so that the Task knows about its parent Activity, and when the Task updates its state it grabs its parent and updates that state too. Right now I'm stuck considering all the options and am wondering if any single approach (it doesn't have to be a listed one) jumps out at others as the best.
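    A minimal sketch (hypothetical types, not from the original post) of the last option described: a two-way relationship in which the Task knows its parent Activity and pushes a recalculation up the chain whenever its own projected date changes. Because everything runs inside the request that loaded the Task, no long-lived in-memory observers are required.

    using System;

    public class Project
    {
        public DateTime ProjectedDueDate { get; private set; }

        // A real implementation would re-aggregate over all child activities
        // and factor in remaining buffer time; this sketch just takes the max.
        public void OnChildChanged(DateTime childDate)
        {
            if (childDate > ProjectedDueDate) ProjectedDueDate = childDate;
        }
    }

    public class Activity
    {
        public Project Parent { get; set; }
        public DateTime ProjectedDueDate { get; private set; }

        public void OnChildChanged(DateTime childDate)
        {
            if (childDate > ProjectedDueDate) ProjectedDueDate = childDate;
            Parent?.OnChildChanged(ProjectedDueDate); // propagate upward
        }
    }

    public class ProjectTask // named to avoid clashing with System.Threading.Tasks.Task
    {
        public Activity Parent { get; set; }
        public DateTime ProjectedDueDate { get; private set; }

        public void ApplyDelay(TimeSpan delay)
        {
            ProjectedDueDate = ProjectedDueDate.Add(delay);
            Parent?.OnChildChanged(ProjectedDueDate); // the Task triggers the parent update
        }
    }

    The ORM would need to load (or lazily fetch) the parent objects when the Task is loaded, and the whole chain should be saved in the same transaction; the trade-off versus the observer pattern is that the dependency is baked into the child rather than registered at runtime.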

    Read the article

  • SQL SERVER – Securing TRUNCATE Permissions in SQL Server

    - by pinaldave
    Download the script for this article from here. On December 11, 2010, Vinod Kumar, a Databases & BI technology evangelist from Microsoft Corporation, graced Ahmedabad by spending some time with the community during the Community Tech Days (CTD) event. As he was running through a few demos, Vinod asked the audience one of the most fundamental and common interview questions – “What is the difference between a DELETE and TRUNCATE?“ Ahmedabad SQL Server User Group expert Nakul Vachhrajani has come up with an excellent solution to the same. I must congratulate Nakul for this excellent solution, and as an encouragement to User Group members, I am publishing his article here. Nakul Vachhrajani is a Software Specialist and systems development professional with Patni Computer Systems Limited. He has functional experience spanning legacy code deprecation, system design, documentation, development, implementation, testing, maintenance and support of complex systems, providing business intelligence solutions, database administration, performance tuning, optimization, product management, release engineering, process definition and implementation. He has a comprehensive grasp of database administration, development and implementation with MS SQL Server and C, C++, Visual C++/C#. He has about six years of total experience in information technology. Nakul is a member of the Ahmedabad and Gandhinagar SQL Server User Groups, and contributes to the community by actively participating in multiple forums and websites like SQLAuthority.com, BeyondRelational.com, SQLServerCentral.com and many others. Please note: the opinions expressed herein are Nakul's own personal opinions and do not represent his employer’s view in any way.

    All data everywhere on Earth goes through a series of four distinct operations, identified by the words CREATE, READ, UPDATE and DELETE – or simply, CRUD. Put in Microsoft SQL Server terms, the process goes like this: INSERT, SELECT, UPDATE and DELETE/TRUNCATE. Quite a few interesting responses were received and evaluated live during the session. To summarize them, the most important similarity that came out was that both DELETE and TRUNCATE participate in transactions. The major differences (not all) that came out of the exercise were:

    DELETE:
    - DELETE supports a WHERE clause
    - DELETE removes rows from a table, row-by-row
    - Because DELETE moves row-by-row, it acquires a row-level lock
    - Depending upon the recovery model of the database, DELETE is a fully-logged operation
    - Because DELETE moves row-by-row, it can fire off triggers

    TRUNCATE:
    - TRUNCATE does not support a WHERE clause
    - TRUNCATE works by directly removing the individual data pages of a table
    - TRUNCATE directly acquires a table-level lock. (Because a lock is acquired, and because TRUNCATE can also participate in a transaction, it has to be a logged operation)
    - TRUNCATE is, therefore, a minimally-logged operation; again, this depends upon the recovery model of the database
    - Triggers are not fired when TRUNCATE is used (because individual row deletions are not logged)

    Finally, Vinod popped the big homework question that must be critically analyzed: “We know that we can restrict a DELETE operation to a particular user, but how can we restrict the TRUNCATE operation to a particular user?” After returning home and having a nice cup of coffee, I noticed that my gray cells immediately started to work. Below is the result of my research. As is always said, the devil is in the details.
    Upon looking at the Permissions section for the TRUNCATE statement in Books Online, the following jumps right out: “The minimum permission required is ALTER on table_name. TRUNCATE TABLE permissions default to the table owner, members of the sysadmin fixed server role, and the db_owner and db_ddladmin fixed database roles, and are not transferable. However, you can incorporate the TRUNCATE TABLE statement within a module, such as a stored procedure, and grant appropriate permissions to the module using the EXECUTE AS clause.“

    Now, what does this mean? Unlike DELETE, one cannot directly assign permissions to a user (or set of users) allowing or revoking TRUNCATE rights. However, there is a way to circumvent this. It is important to recall that in Microsoft SQL Server, database engine security surrounds the concept of a “securable”, which is any object like a table, stored procedure, trigger, etc. Rights are assigned to a principal on a securable. Refer to the image below (taken from the SQL Server Books Online).

    SETTING UP THE ENVIRONMENT – (01A_Truncate Table Permissions.sql; script provided at the end of the article.) By the end of this demo, one user will be able to do all the CRUD operations except the TRUNCATE, and the other will only be able to execute the TRUNCATE. All you will need for this test is any edition of SQL Server 2008. (With minor changes, these scripts can be made to work with SQL 2005.) We begin by creating the following:

    1. A test database
    2. Two database roles, with associated logins and users
    3. A test table in the test database, with some data added to it. I am using row constructors, which are new to SQL 2008.

    Creating the modules that will be used to enforce permissions:

    1. We have already created one of the modules that we will be assigning permissions to. That module is the table: TruncatePermissionsTest
    2. We will now create two stored procedures; one is for the DELETE operation and the other for the TRUNCATE operation. Please note that for all practical purposes, the end result is the same – all data from the table TruncatePermissionsTest is removed

    Assigning the permissions: Now comes the most important part of the demonstration – assigning permissions. A permissions matrix can be worked out as under. To apply the security rights, we use the GRANT and DENY clauses, as under. That’s it! We are now ready for our big test!

    THE TEST – (01B_Truncate Table Test Queries.sql; script provided at the end of the article.) I will now need two separate SSMS connections, one with the login AllowedTruncate and the other with the login RestrictedTruncate. Running the test is simple; all that’s required is to run through the script – 01B_Truncate Table Test Queries.sql. What I will demonstrate here via screenshots is the behavior of SQL Server when logged in as the AllowedTruncate user. There are a few other combinations than what are highlighted here; I will leave the reader to explore the behavior of the RestrictedTruncate user and these additional scenarios, as a form of self-study.

    1. Testing SELECT permissions
    2. Testing TRUNCATE permissions (Remember, “deny by default”?)
    3. Trying to circumvent security by trying to TRUNCATE the table using the stored procedure

    Hence, we have now proved that a user can indeed be granted permission to specifically perform a TRUNCATE. I also hope that the above has sparked curiosity towards putting some security around the probably “destructive” operations of DELETE and TRUNCATE. I would like to wish each and every one of the readers a very happy and secure time with Microsoft SQL Server. (Please find below the scripts – 01A_Truncate Table Permissions.sql and 01B_Truncate Table Test Queries.sql – that have been used in this demonstration. Please note that these scripts contain purely test-level code only. These scripts must not, at any cost, be used in the reader’s production environments.)

    01A_Truncate Table Permissions.sql

    /* *****************************************************************************************************************
    Developed By  : Nakul Vachhrajani
    Functionality : This demo is focused on how to allow only TRUNCATE permissions to a particular user
    How to Use    : 1. Run step-by-step through the sequence till Step 08 to create a test database
                    2. Switch over to "Truncate Table Test Queries.sql" and execute it step-by-step in two
                       different SSMS windows, one where you have logged in as 'RestrictedTruncate', and
                       the other as 'AllowedTruncate'
                    3. Come back to "Truncate Table Permissions.sql"
                    4. Execute Step 10 to cleanup!
    Modifications : December 13, 2010 - NAV - Updated to add a security matrix and improve code readability
                                              when applying security
                    December 12, 2010 - NAV - Created
    ***************************************************************************************************************** */

    -- Step 01: Create a new test database
    CREATE DATABASE TruncateTestDB
    GO
    USE TruncateTestDB
    GO

    -- Step 02: Add roles and users to demonstrate the security of the Truncate operation
    -- 2a. Create the new roles
    CREATE ROLE AllowedTruncateRole;
    GO
    CREATE ROLE RestrictedTruncateRole;
    GO

    -- 2b. Create new logins
    CREATE LOGIN AllowedTruncate WITH PASSWORD = 'truncate@2010', CHECK_POLICY = ON
    GO
    CREATE LOGIN RestrictedTruncate WITH PASSWORD = 'truncate@2010', CHECK_POLICY = ON
    GO

    -- 2c. Create new users using the roles and logins created above
    CREATE USER TruncateUser FOR LOGIN AllowedTruncate WITH DEFAULT_SCHEMA = dbo
    GO
    CREATE USER NoTruncateUser FOR LOGIN RestrictedTruncate WITH DEFAULT_SCHEMA = dbo
    GO
    -- 2d. Add the newly created logins to the newly created roles
    sp_addrolemember 'AllowedTruncateRole','TruncateUser'
    GO
    sp_addrolemember 'RestrictedTruncateRole','NoTruncateUser'
    GO

    -- Step 03: Change over to the test database
    USE TruncateTestDB
    GO

    -- Step 04: Create a test table within the test database
    CREATE TABLE TruncatePermissionsTest (Id INT IDENTITY(1,1), Name NVARCHAR(50))
    GO

    -- Step 05: Populate the required data
    INSERT INTO TruncatePermissionsTest
    VALUES (N'Delhi'), (N'Mumbai'), (N'Ahmedabad')
    GO

    -- Step 06: Encapsulate the DELETE within another module
    CREATE PROCEDURE proc_DeleteMyTable
    WITH EXECUTE AS SELF
    AS
    DELETE FROM TruncateTestDB..TruncatePermissionsTest
    GO

    -- Step 07: Encapsulate the TRUNCATE within another module
    CREATE PROCEDURE proc_TruncateMyTable
    WITH EXECUTE AS SELF
    AS
    TRUNCATE TABLE TruncateTestDB..TruncatePermissionsTest
    GO

    -- Step 08: Apply security
    /* *****************************SECURITY MATRIX***************************************
       Object                  | Permissions                    | Login: RestrictedTruncate | Login: AllowedTruncate
                               |                                | User:  NoTruncateUser     | User:  TruncateUser
       ------------------------+--------------------------------+---------------------------+-----------------------
       TruncatePermissionsTest | SELECT, INSERT, UPDATE, DELETE | GRANT                     | (Default)
       TruncatePermissionsTest | ALTER                          | DENY                      | (Default)
       proc_DeleteMyTable      | EXECUTE                        | GRANT                     | DENY
       proc_TruncateMyTable    | EXECUTE                        | DENY                      | GRANT
       *****************************SECURITY MATRIX*************************************** */

    /* Table: TruncatePermissionsTest */
    GRANT SELECT, INSERT, UPDATE, DELETE ON TruncateTestDB..TruncatePermissionsTest TO NoTruncateUser
    GO
    DENY ALTER ON TruncateTestDB..TruncatePermissionsTest TO NoTruncateUser
    GO

    /* Procedure: proc_DeleteMyTable */
    GRANT EXECUTE ON TruncateTestDB..proc_DeleteMyTable TO NoTruncateUser
    GO
    DENY EXECUTE ON TruncateTestDB..proc_DeleteMyTable TO TruncateUser
    GO

    /* Procedure: proc_TruncateMyTable */
    DENY EXECUTE ON TruncateTestDB..proc_TruncateMyTable TO NoTruncateUser
    GO
    GRANT EXECUTE ON TruncateTestDB..proc_TruncateMyTable TO TruncateUser
    GO

    -- Step 09: Test
    -- Switch over to "Truncate Table Test Queries.sql" and execute it step-by-step in two different SSMS windows:
    --    1. one where you have logged in as 'RestrictedTruncate', and
    --    2. the other as 'AllowedTruncate'

    -- Step 10: Cleanup
    sp_droprolemember 'AllowedTruncateRole','TruncateUser'
    GO
    sp_droprolemember 'RestrictedTruncateRole','NoTruncateUser'
    GO
    DROP USER TruncateUser
    GO
    DROP USER NoTruncateUser
    GO
    DROP LOGIN AllowedTruncate
    GO
    DROP LOGIN RestrictedTruncate
    GO
    DROP ROLE AllowedTruncateRole
    GO
    DROP ROLE RestrictedTruncateRole
    GO
    USE MASTER
    GO
    DROP DATABASE TruncateTestDB
    GO

    01B_Truncate Table Test Queries.sql

    /* *****************************************************************************************************************
    Developed By  : Nakul Vachhrajani
    Functionality : This demo is focused on how to allow only TRUNCATE permissions to a particular user
    How to Use    : 1. Switch over to this from "Truncate Table Permissions.sql", Step #09
                    2. Execute this step-by-step in two different SSMS windows
                       a. One where you have logged in as 'RestrictedTruncate', and
                       b. The other as 'AllowedTruncate'
                    3. Return back to "Truncate Table Permissions.sql"
                    4. Execute Step 10 to cleanup!
    Modifications : December 12, 2010 - NAV - Created
    ***************************************************************************************************************** */

    -- Step 09A: Switch to the test database
    USE TruncateTestDB
    GO

    -- Step 09B: Ensure that we have valid data
    SELECT * FROM TruncatePermissionsTest
    GO
    -- (Expected: Following error will occur if logged in as "AllowedTruncate")
    -- Msg 229, Level 14, State 5, Line 1
    -- The SELECT permission was denied on the object 'TruncatePermissionsTest', database 'TruncateTestDB', schema 'dbo'.

    -- Step 09C: Attempt to Truncate Data from the table without using the stored procedure
    TRUNCATE TABLE TruncatePermissionsTest
    GO
    -- (Expected: Following error will occur)
    -- Msg 1088, Level 16, State 7, Line 2
    -- Cannot find the object "TruncatePermissionsTest" because it does not exist or you do not have permissions.

    -- Step 09D: Regenerate Test Data
    INSERT INTO TruncatePermissionsTest
    VALUES (N'London'), (N'Paris'), (N'Berlin')
    GO
    -- (Expected: Following error will occur if logged in as "AllowedTruncate")
    -- Msg 229, Level 14, State 5, Line 1
    -- The INSERT permission was denied on the object 'TruncatePermissionsTest', database 'TruncateTestDB', schema 'dbo'.

    -- Step 09E: Attempt to Truncate Data from the table using the stored procedure
    EXEC proc_TruncateMyTable
    GO
    -- (Expected: Will execute successfully with 'AllowedTruncate' user, will error out as under with 'RestrictedTruncate')
    -- Msg 229, Level 14, State 5, Procedure proc_TruncateMyTable, Line 1
    -- The EXECUTE permission was denied on the object 'proc_TruncateMyTable', database 'TruncateTestDB', schema 'dbo'.

    -- Step 09F: Regenerate Test Data
    INSERT INTO TruncatePermissionsTest
    VALUES (N'Madrid'), (N'Rome'), (N'Athens')
    GO

    -- Step 09G: Attempt to Delete Data from the table without using the stored procedure
    DELETE FROM TruncatePermissionsTest
    GO
    -- (Expected: Following error will occur if logged in as "AllowedTruncate")
    -- Msg 229, Level 14, State 5, Line 2
    -- The DELETE permission was denied on the object 'TruncatePermissionsTest', database 'TruncateTestDB', schema 'dbo'.
    -- Step 09H: Regenerate Test Data
    INSERT INTO TruncatePermissionsTest
    VALUES (N'Spain'), (N'Italy'), (N'Greece')
    GO

    -- Step 09I: Attempt to Delete Data from the table using the stored procedure
    EXEC proc_DeleteMyTable
    GO
    -- (Expected: Following error will occur if logged in as "AllowedTruncate")
    -- Msg 229, Level 14, State 5, Procedure proc_DeleteMyTable, Line 1
    -- The EXECUTE permission was denied on the object 'proc_DeleteMyTable', database 'TruncateTestDB', schema 'dbo'.

    -- Step 09J: Close this SSMS window and return back to "Truncate Table Permissions.sql"

    Thank you, Nakul, for taking up the challenge and proving that the Ahmedabad and Gandhinagar SQL Server User Group has the talent to solve difficult problems. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Pinal Dave, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Security, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Ext JS 4.2.1 loading controller - best practice

    - by Hown_
    I am currently developing an Ext JS application with many views/controllers/... I am wondering what the best practice is for loading the controllers/views/and so on. Currently I have my application defined like this:

    // enable javascript cache for debugging, otherwise Chrome breakpoints are lost
    Ext.Loader.setConfig({ disableCaching: false });

    Ext.require('Ext.util.History');
    Ext.require('app.Sitemap');
    Ext.require('app.Error');

    Ext.define('app.Application', {
        name: 'app',
        extend: 'Ext.app.Application',
        views: [
            // TODO: add views here
            'app.view.Viewport',
            'app.view.BaseMain',
            'app.view.Main',
            'app.view.ApplicationHeader',
            // administration
            'app.view.administration.User'
            ...
        ],
        controllers: [
            'app.controller.Viewport',
            'app.controller.Main',
            'app.controller.ApplicationHeader',
            // administration
            'app.controller.administration.User',
            ...
        ],
        stores: [
            // stores in there..
        ]
    });

    Somehow this forces the client to load all my views and controllers at startup, and of course it calls the init methods of all controllers. I need to load data every time I change my view, and now I can't load it in my controller's init function. I would have to do something like this, I assume:

    init: function () {
        this.control({
            '#administration_User': {
                afterrender: this.onAfterRender
            }
        });
    },

    Is there a better way to do this, or just another event? The main thing I am questioning, though, is whether it is best practice to load all the JavaScript at startup. Wouldn't it be better to load only the controllers/views the client actually needs right now? Or should I load all the JS at startup? If I do want to load the controllers dynamically, how could I do this? I assume I would have to remove them from my application arrays (views, controllers, stores), create an instance when I need one, and maybe set the view in the controller's init?! What's best practice??

    Read the article

  • Have Your Cake and Eat it Too: Industry Best Practices + Flexibility

    - by Oracle Accelerate for Midsize Companies
    By Richard Garraputa, VP of Sales & Marketing, brij. Richard joined brij in 1996 after graduating from the University of North Carolina at Greensboro with degrees in Information Systems and Accounting. He directs brij’s overall strategies for both the business development and marketing departments. Companies looking for new ERP systems spend so much time comparing features and functions of software products but too often short-change the value of their own processes.  Company managers I meet often claim that they are implementing a new ERP system so they can perform better and faster.  When asked how, the answer is often “by implementing best practices”.  But the term ‘best practices’ is frequently used to mean ‘doing things the way everyone else does them’ rather than a starting point or benchmark to build upon by adding your own value. Of course, implementing standardized processes across an enterprise is an important step in improving operational efficiencies.  But not all companies are alike.  Do you ever tell your customers “We are just like our competition and have no competitive differentiation”?  Probably not.  So why should the implementation of your business processes be just like your competitor’s?  Even within the same industry, companies differentiate themselves by leveraging their unique expertise and approach to business.  These unique aspects—the competitive differentiators that companies use to thrive in a crowded marketplace—can and should be supported by the implementation of business systems like ERP. Modern ERP systems like Oracle’s JD Edwards EnterpriseOne have a broad and deep functional footprint designed to integrate a company’s core operations.  But how can a company take advantage of this footprint without blowing up their implementation budget?  Some ERP vendors claim to solve this challenge by stating that their systems come pre-configured with ‘best practices’.  Too often what they are really saying is that you will have to abandon your key operational differentiators to fit a vendor’s template for your business—or extend your implementation and postpone the realization of any benefits. Thankfully for midsize companies, there is an alternative to the undesirable options of extended implementation projects or abandoning their competitive differentiators.  Oracle Accelerate Solutions shorten the time it takes to implement a JD Edwards EnterpriseOne solution based on your unique business characteristics, getting your new ERP system up and running faster without forcing your business to fit a cookie-cutter solution. We’ve been a JD Edwards implementation partner since 1986 and we now leverage Oracle Business Accelerators—cloud-based rapid implementation tools built and maintained by Oracle. Oracle Business Accelerators deliver the benefits of embedded industry best practices without forcing every customer into one set of processes like many template or “clone and go” approaches do. You retain the ability to reconfigure your applications—without customization—as your business changes. Wielded by Oracle partners with industry-specific domain expertise, Oracle Accelerate Solution implementations powered by Oracle Business Accelerators help automate the application configuration to fit your business better, faster. For example, on a recent project at a manufacturing company, the project manager told me that Oracle Business Accelerators helped get them to Conference Room Pilot 20% faster than with a traditional approach. Time savings equal cost savings. 
    And if ‘better and faster’ is your goal for your business performance, shouldn’t it be the goal for your ERP implementation as well? Established in 1986, brij has been dedicated solely to helping its customers implement Oracle’s JD Edwards solutions and to maximize the value of those customers’ IT investments. They are a Gold-level member of Oracle PartnerNetwork and an Oracle Accelerate Solution provider.

    Read the article

  • Django/Python best practice template_dict

    - by fredrik
    Hi, after coding for only about 6-9 months, I have probably changed my coding style a number of times after reading some code or reading best practices. But one thing I haven't yet come across is a good way to populate the template_dict. As of now I pass the template_dict across a number of methods (which change/modify it) and return it. The result is that every method takes template_dict as its first argument and then returns it, and this doesn't seem to me to be the best solution. An idea is to have a single method that handles all the changes. But I'm curious whether there's a best practice for this, or is it a "do what you feel like" type of thing? The two things I think are pretty ugly are sending it as an argument and returning it in every method. And then the var name is written xxx number of times in the code :) ..fredrik

    Read the article

< Previous Page | 28 29 30 31 32 33 34 35 36 37 38 39  | Next Page >