Search Results

Search found 22900 results on 916 pages for 'pascal case'.


  • My History with Agile

    - by Robert May
    I’m going to write my history with Agile here. That way, in future posts, I can refer back to it, instead of typing it out in the post that contains information you may actually want to read. Note that I’m actually a pretty senior developer, and do lots of technical interviews. I’m an Agile fan because of the difference it makes in people’s lives and the improvement in quality it brings, and I’ll sacrifice my own technological advancement to help teams.

    Management History

    I started management pretty early in my career, starting with the first job that I ever had. I actually do NOT have a CS or similar degree. I have a Bachelor’s of Business Administration with an emphasis in Computer Information Systems. My first management gigs were around call center work and were very schedule oriented. I didn’t understand the true value of teams, and I’m ashamed to admit, I actually installed a fingerprint scanner as a time clock in this job. I shudder to think of the impact that I had on the team spirit. I didn’t even trust them enough to fill out their time cards correctly. How sad. I was managing nearly 100 people in this position, with the help of a great set of subordinates. I did try to come up with reward programs for the team, but again, I didn’t understand the concept of a team, so instead of letting the team determine how the rewards should work, I mandated from on high, which isn’t a good thing. I was told by people whom I respected a lot that I wasn’t the type who would be a good manager. They said it because I was a computer geek and they didn’t understand good management either, but in retrospect, they were right about me then. I was too green. After my first job, I went on to other jobs, and with the exception of one job, I’ve managed people at them all. The rest of the management story is important for understanding Agile, so I’ll save it for my next post.

    Technical History

    I’ve been in software development for many, many years. I technically started programming on a Commodore 64 in BASIC. I didn’t know that I was programming, but I was sure having fun. That was followed by batch files, Gorilla hacking (I always had to win), WordPerfect macro programming and other things that taught me the basics. My first “real” job was with a telephone company, and that’s where I made my first database application in DataEase, wrote my first VBA app and started using real programming tools, like Turbo Pascal and VB3-VB5, and semi-real tools like RPG and VisualRPG. I wrote my first web page in 1994, and built my first data-driven web page in 1995 using perlDB. You really can do anything with Perl. At this time, I also started a Linux-based internet service provider that is still in operation today. One of the people I worked with is now a Microsoft employee building and designing frameworks you probably know well. Smart guy. I also built my first ASP applications connecting to SQL Server 6.5, set up Exchange 5.5 for the company, and did plenty of other system administration work. I’m a programmer by choice, mostly because I don’t really like PC support.

    From there, I went on to a large state agency. I got to see and maintain true waterfall projects. Five years of maintaining the 200 VB COM+ (MTS, actually) DLLs that were used to calculate a single number is a long time. That was all Microsoft DNA technologies. SQL Server and VB6 were the tools of choice, although .NET started to be a factor near the end of my employment. I did some heavy XML work at this job and even wrote an XSD parser and validator in VB6 that served as a shim until MSXML 3.0 came out. Prior to 3.0, XSDs weren’t supported, and I didn’t want to write DTDs. Ironically, jobs after this were more generic. I pretty much settled in on the .NET Framework and its revisions. Lots of WPF, some Silverlight, lots of ASP.NET, some SQL Azure, lots of SQL Server, some Oracle, but I don’t think that I was as passionate about development and technologies. I was more into the management of development. I like people.


  • Your Job Search Should be More Than Just a New Year's Resolution

    - by david.talamelli
    I love the beginning of a new year; it is a great chance to refocus and either re-evaluate the goals you are working toward or set new ones. I don't have any statistics to measure this, but I am sure that one of the more popular new year's resolutions in the general workforce is to either get a new job or work to further develop one's career. I think this is a good idea: in today's competitive workforce, people should have a plan of what they want to do, what role they are after and how to get there. One common mistake many people make, though, is treating a career plan as a once-a-year thought. When people finish the holiday season with their new year's resolution to find a new job fresh in their mind, you can see the enthusiasm and motivation a person has to make something happen. Emails are sent, calls are made, applications are submitted, networking is happening. Finding the right role can be difficult, however; while it would be great if that dream role was available just at the time you happened to be looking for it, in reality this is not always the case. Job seekers need to keep reminding themselves that while the dream job is sometimes available right when they are looking, a job search can also be a difficult and long process. Many people who set out with the best of intentions in January to find a new job can soon lose interest in the search if they do not immediately find a role. Just like the Christmas decorations are put away and the photos from New Year's are stored away, a job seeker's motivation may slowly decrease until that person finds themselves 12 months later in the same situation, in the same role, looking for that new opportunity again. Rather than just "going for it" and looking for a role in the month of January, a person's job search or career plan should be an ongoing activity and thought process that is constantly updated and evaluated over the course of the year. It can be hard to stay motivated over an extended period of time, especially when you are newly motivated and ready for that new role and the results are not immediate. Rather than letting your job search fall down the priority list and into the "too hard basket", here are a few ideas that may keep your enthusiasm fresh:

    - Update your resume every 6 months, even if you are not looking for a job. It is easy to forget what you have accomplished if you don't keep your details updated, and it is good to be prepared with a resume ready to go in case you do get an unexpected phone call for that 'dream job' you have been hoping for.
    - Work out what you want out of your next role before you begin your job search. Rather than aimlessly searching job ads or talking to people, think of the organisations or the type of role you would like before you search. If you know what you are looking for, it will be much easier to work out how to get there than if you do not know what you want.
    - Don't expect immediate results once you decide to look for another job; things don't always fall into place. Timing and delivery can be important parts of being selected for a role, and companies don't hire every role in January.
    - Have an open mind. People you meet or talk to may not produce immediate results for your job search, but every connection may help you get a bit closer to what you are after.

    These actions will not guarantee a positive result, but in today's competitive workforce every bit of extra preparation and planning helps. All the best for 2011, and I hope your career plan, whatever it may be, is a success.


  • Instant Rename and Rename Refactoring

    - by Petr
    During the last weeks I have got a few questions about rename refactoring, and some users have also complained to me that refactoring in NetBeans 6.x was much faster. So I would like to explain the situation. For some people who don't know, the Instant Rename action and Rename Refactoring can look like one action, but that's not true, even though both actions use the same shortcut (Ctrl+R).

    NetBeans 6.x contained only the Instant Rename action (speaking about PHP support), which we can describe as a very simple rename refactoring within one file. From NetBeans 7.0 the Instant Rename action works only in a "non-public" context. This means the action is used for fast renaming of variables with a local context, such as inside a method, or for renaming private methods and fields that cannot be used outside the scope where they are declared.

    From the user's point of view, the two actions are easy to tell apart. When Ctrl+R invokes Instant Rename, the identifier is surrounded with a rectangle and you can rename it directly in the file. It's fast and simple, and the usages of the identifier are renamed as you type. The picture below shows the Instant Rename action for the $message identifier, which is visible only in the print_test method; because of this, Ctrl+R invokes Instant Rename.

    NetBeans 7.0 added Rename Refactoring, which is called for public identifiers, meaning identifiers that could be used in other files. If you press Ctrl+R when the caret is inside the $hello identifier from the picture above, NetBeans recognizes that $hello is declared and used in a global context and calls Rename Refactoring, which brings up a dialog to change the name of the identifier. From this dialog you have to preview the suggested changes by pressing the Preview button, and then execute the refactoring with the Do Refactoring button. Yes, it's more complicated from the user's point of view than Instant Rename, but Rename Refactoring lets NetBeans change more files at once. It should be the developer's responsibility to decide whether the suggested changes are right and the refactoring can be executed, or whether the original name should be kept in some files.

    Someone could argue that they don't use the $hello variable in any other file, so Instant Rename could be used in such a case. True, but then NetBeans would have to know all usages of all identifiers and keep this information up to date while a file is edited. I'm sure this is not feasible, due to performance problems, mainly for big projects. So the usages are computed after pressing the Preview button.

    And why is the Refactor button always disabled in the Rename dialog, so that the user always has to go through the preview phase? NetBeans has an API and SPI for implementing refactoring actions, and this dialog is part of that infrastructure. If you rename an identifier in Java, for example, the Refactor button is enabled, but Java is a strongly typed language and you can be almost 99% sure that the IDE will suggest the right results. PHP is a dynamic language, so we cannot be sure; what NetBeans finds is only a "guess". This is why NetBeans pushes developers to preview the changes for a PHP rename.

    I hope I have explained it clearly; I'm open to any discussion. What I have described above is the situation in NetBeans 7.0 and 7.0.1, and it will probably stay the same in NetBeans 7.1, because there is no plan to change it. Please write your opinion here.
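    To make the two behaviors concrete, here is a small PHP sketch reconstructed from the description above (the screenshots are not reproduced here, and the exact code in them is my assumption):

        <?php
        // $hello is declared in the global scope, so it is a "public" identifier:
        // pressing Ctrl+R on it opens the Rename Refactoring dialog.
        $hello = "Hello, world";

        function print_test() {
            // $message is visible only inside print_test, so pressing Ctrl+R on it
            // triggers Instant Rename (in-place editing of all local usages).
            $message = "test message";
            print($message);
        }

        print_test();
        print($hello);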


  • Best Practices for Handing over Legacy Code

    - by PersonalNexus
    In a couple of months a colleague will be moving on to a new project and I will be inheriting one of his projects. To prepare, I have already ordered Michael Feathers' Working Effectively with Legacy Code. But this book, as well as most questions on legacy code I have found so far, is concerned with the case of inheriting code as-is. In this case I actually have access to the original developer and we do have some time for an orderly hand-over. Some background on the piece of code I will be inheriting:

    - It's functioning: There are no known bugs, but as performance requirements keep going up, some optimizations will become necessary in the not too distant future.
    - Undocumented: There is pretty much zero documentation at the method and class level. What the code is supposed to do at a higher level, though, is well understood, because I have been writing against its API (as a black box) for years.
    - Only higher-level integration tests: There are only integration tests testing proper interaction with other components via the API (again, black box).
    - Very low-level, optimized for speed: Because this code is central to an entire system of applications, a lot of it has been optimized several times over the years and is extremely low-level (one part has its own memory manager for certain structs/records).
    - Concurrent and lock-free: While I am very familiar with concurrent and lock-free programming and have actually contributed a few pieces to this code, this adds another layer of complexity.
    - Large codebase: This particular project is more than ten thousand lines of code, so there is no way I will be able to have everything explained to me.
    - Written in Delphi: I'm just going to put this out there, although I don't believe the language to be germane to the question, as I believe this type of problem to be language-agnostic.

    I was wondering how the time until his departure would best be spent. Here are a couple of ideas:

    - Get everything to build on my machine: Even though everything should be checked into source code control, who hasn't forgotten to check in a file once in a while? This should probably be the first order of business.
    - More tests: While I would like more class-level unit tests, so that any bugs I introduce when making changes can be caught early on, the code as it is now is not testable (huge classes, long methods, too many mutual dependencies).
    - What to document: I think for starters it would be best to focus documentation on those areas of the code that would otherwise be difficult to understand, e.g. because of their low-level/highly optimized nature. I am afraid there are a couple of things in there that might look ugly and in need of refactoring/rewriting, but are actually optimizations that were put in there for a good reason that I might miss (cf. Joel Spolsky, Things You Should Never Do, Part I).
    - How to document: I think some class diagrams of the architecture and sequence diagrams of critical functions, accompanied by some prose, would be best.
    - Who documents: I was wondering what would be better: to have him write the documentation, or to have him explain it to me so I can write the documentation. I am afraid that things that are obvious to him but not to me would otherwise not be covered properly.
    - Refactoring using pair programming: This might not be possible due to time constraints, but maybe I could refactor some of his code to make it more maintainable while he is still around to provide input on why things are the way they are.
Please comment on and add to this. Since there isn't enough time to do all of this, I am particularly interested in how you would prioritize.


  • How to prepare for an interview presentation!

    - by Tim Koekkoek
    During an interview process you might be asked to prepare a presentation as one of the steps in the recruitment process. Below are some tips to help you prepare for what might be considered a daunting aspect of a recruitment process.

    Main purpose of the presentation

    Always keep in mind what the presentation is meant to convey. Generally speaking, an interview presentation is for the company to check whether you have the ability to represent and sell the organization (and yourself) to the internal and external stakeholders in the position you are applying for. A presentation is often also part of the recruitment process to check whether you can structure and explain your experience and thoughts in a convincing manner. If you are unsure about the purpose of the presentation, feel free to ask your recruiter for more information.

    Preparation

    As with every task you do, preparation is key, and that is the case with an interview presentation as well. It is important to know who your audience is. You have to adapt your presentation to your audience, ensuring that you are presenting the facts which they would want to hear. Furthermore, make sure you practice your presentation beforehand; this will make you more confident in your presenting skills. Also, estimate the length of your presentation, as presentations or pitches during the recruitment process are often capped to a certain time limit.

    Structure

    Every presentation should have a beginning, a middle and an end. Make sure you give an overview of your presentation and tell the audience what they can expect. Your presentation should have a logical order and a clear message. Always build up to your key message with strong arguments and evidence. When speaking about the topic, make sure you convey your points with conviction. Also be sure you believe the message you bring forward; if you don’t believe it yourself, then the audience definitely won’t!

    Delivery

    When you think back on successful presentations you have seen, the presenter was most likely standing up. So if asked to do a presentation, follow this example and make sure you stand up as well. Standing up when you are doing your presentation shows confidence and control. Another important aspect of delivery is to relax and speak slowly, with enough volume for the audience to hear you. Speaking slowly allows the audience time to absorb the information you are providing.

    Visual

    Using PowerPoint, or an equivalent, makes it very easy to have a visually attractive presentation. Make sure, however, that the visual aids you use are there to support you, not for you to read word for word what is on the slides!

    Questions

    When starting your presentation, it is a good idea to tell your audience that you will deal with any questions they might have at the end of the presentation. This way it doesn’t interrupt your train of thought and the flow of your presentation. Answering questions at the end may give you additional opportunities to further expand on your facts about the topic. Some people might play the “devil’s advocate” role and confront you with opposing opinions. If this is the case, take your time to reiterate your points and remain professional in your response. A good way to deal with this is to start interacting with other members of the audience and ask for their opinions, so it becomes a group discussion. This also shows strong leadership skills, as you are open to discussion and to other opinions.

    If you are interested in more tips and tricks for your interview process, have a look at Competency Based Interview Tips and How to prepare for a Telephone Interview. For more information about Oracle and our vacancies, please visit our Facebook community and our careers website.


  • How To Delete Built-in Windows 7 Power Plans (and Why You Probably Shouldn’t)

    - by The Geek
    Do you actually use the Windows 7 power management features? If so, have you ever wanted to just delete one of the built-in power plans? Here’s how you can do so, and why you probably should leave it alone.

    Just in case you’re new to the party, we’re talking about the power plans that you see when you click on the battery/plug icon in the system tray. The problem is that one of the built-in plans always shows up there, even if you only use custom plans. When you go to “More power options” on the menu there, you’ll be taken to a list of them, but you’ll be unable to get rid of any of the built-in ones, even if you have your own. You can actually delete the power plans, but it will probably cause problems, so we highly recommend against it. If you still want to proceed, keep reading.

    Delete Built-in Power Plans in Windows 7

    Open up an Administrator mode command prompt by right-clicking on the command prompt and choosing “Run as Administrator”, then type in the following command, which will show you the whole list of plans:

        powercfg list

    Do you see that really long GUID code in the middle of each listing? That’s what we’re going to need for the next step. To make it easier, we’ll provide the codes here, just in case you don’t know how to copy to the clipboard from the command prompt:

        Power Scheme GUID: 381b4222-f694-41f0-9685-ff5bb260df2e  (Balanced)
        Power Scheme GUID: 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c  (High performance)
        Power Scheme GUID: a1841308-3541-4fab-bc81-f71556f20b4a  (Power saver)

    Before you do any deleting, what you’re going to want to do is export the plan to a file using the -export parameter. For some unknown reason, I used the .xml extension when I did this, though the file isn’t in XML format. Moving on… here’s the syntax of the command:

        powercfg -export balanced.xml 381b4222-f694-41f0-9685-ff5bb260df2e

    This will export the Balanced plan to the file balanced.xml. And now, we can delete the plan by using the -delete parameter and the same GUID:

        powercfg -delete 381b4222-f694-41f0-9685-ff5bb260df2e

    If you want to import the plan again, you can use the -import parameter, though it has one weirdness: you have to specify the full path to the file, like this:

        powercfg -import c:\balanced.xml

    Using what you’ve learned, you can export each of the plans to a file, and then delete the ones you want to delete.

    Why Shouldn’t You Do This?

    Very simple: stuff will break. On my test machine, for example, I removed all of the built-in plans, and then imported them all back in, but I’m still getting an error anytime I try to access the panel to choose what the power buttons do. There are a lot more error messages, but I’m not going to waste your time with all of them. So if you want to delete the plans, do so at your own peril. At least you’ve been warned!
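    If, despite the warnings, you want to script the whole procedure, here is a minimal batch-file sketch that backs up all three stock plans before deleting them. It uses only the powercfg parameters and GUIDs shown above; the file names and backup location are arbitrary, and it must be run from an elevated command prompt:

        :: Back up each built-in plan, then delete it.
        :: GUIDs are the stock Windows 7 values listed earlier.
        powercfg -export "%USERPROFILE%\balanced.xml" 381b4222-f694-41f0-9685-ff5bb260df2e
        powercfg -export "%USERPROFILE%\highperf.xml" 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c
        powercfg -export "%USERPROFILE%\powersaver.xml" a1841308-3541-4fab-bc81-f71556f20b4a
        powercfg -delete 381b4222-f694-41f0-9685-ff5bb260df2e
        powercfg -delete 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c
        powercfg -delete a1841308-3541-4fab-bc81-f71556f20b4a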


  • New Release Overview Part 2

    - by brian.harrison
    To continue our discussion of the next release of WCI, let's take a look at a few other new features that have been developed and tested.

    Password Management

    With customer implementations increasingly going external, we were finding that these customers wanted to use the native users within the portal, because the customer did not want to provide an externally facing LDAP server. However, the portal did not provide anything close to the same level of password policy that a standard LDAP environment would. With that being the case, we made the decision to provide the same kinds of password policies directly within WCI that a standard LDAP environment would have (a small sketch at the end of this post illustrates the rule types):

    - Password Expiration: In how many days will a password expire, forcing the user to change it? Also, how many days prior to expiration will the user be notified that their password is about to expire?
    - Password Rotation: How many of your previous passwords will you be unable to reuse when changing your password?
    - Password Policies: What are the requirements for the password being created by the user? Number of characters, numbers required, symbols required, capitalization required.
    - Easily Configurable: Configuration is handled through the Portal Settings utility within Administration. All options are available on the main page of the utility.

    In addition to the configuration options mentioned above, there has also been a complete rewrite of the Change Password screen to provide better information to the user when they are changing their password. The Change Password screen now provides a red light/green light listing of all the policies the user must meet for the password change to be successful. As the user types the password, the red lights change to green lights as the policies are met. In addition, text shows next to the password text box stating which policy has not been met yet. NOTE: The password policy functionality does not apply within the User Editor page in Administration. We did not want to remove the option for administrators to change a user's password on the fly in a password-reset situation.

    Miscellaneous Features

    In addition to the Password Management feature, there are a few other features related to WCI that should be mentioned.

    - Consolidated Installer: Instead of having up to 12 or 13 different installers, one for each of the main products and separate services, we are going to provide only two installers. One will be used for Collaboration and its respective images. The second will contain WCI and all of the relevant services required for a WCI architecture, as well as the IDK, .NET App Accelerator, SharePoint Console, and all Content Web Services and Identity Services.
    - Updated Documentation: Most of us are aware that the documentation hasn't been properly kept up to date over the last couple of releases. We are doing everything that we can to remedy this with the next release by consolidating and reviewing everything that is available. We are making sure to fill in the gaps that are already there, add documentation for the new functionality, and clear out anything that is no longer valid in the newly released version.

    I hope that you enjoyed reading through this new release information. Next time we will start to talk about the new functionality that will be available within the next release of Collaboration.
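    As promised, the policy list above maps naturally to a simple validation routine. A hedged C# sketch of the rule types described (my own illustration with arbitrary thresholds, not WCI's actual implementation; in WCI the thresholds come from the Portal Settings utility):

        using System.Linq;

        public static class PasswordPolicySketch
        {
            // Returns true if the candidate password satisfies all four rule types
            // described above, using illustrative threshold values.
            public static bool IsValid(string password)
            {
                return password.Length >= 8                         // number of characters
                    && password.Any(char.IsDigit)                   // numbers required
                    && password.Any(char.IsUpper)                   // capitalization required
                    && password.Any(c => !char.IsLetterOrDigit(c)); // symbols required
            }
        }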
If there is anything in particular that you would like to get more detail about, then please don't hesitate to send me a comment.


  • Clone a Hard Drive Using an Ubuntu Live CD

    - by Trevor Bekolay
    Whether you’re setting up multiple computers or doing a full backup, cloning hard drives is a common maintenance task. Don’t bother burning a new boot CD or paying for new software; you can do it easily with your Ubuntu Live CD. Not only can you do this with your Ubuntu Live CD, you can do it right out of the box, with no additional software needed!

    The program we’ll use is called dd, and it’s included with pretty much all Linux distributions. dd is a utility used to do low-level copying: rather than working with files, it works directly on the raw data on a storage device.

    Note: dd gets a bad rap, because like many other Linux utilities, if misused it can be very destructive. If you’re not sure what you’re doing, you can easily wipe out an entire hard drive, in an unrecoverable way. Of course, the flip side is that dd is extremely powerful, and can do very complex tasks with little user effort. If you’re careful, and follow these instructions closely, you can clone your hard drive with one command.

    We’re going to take a small hard drive that we’ve been using and copy it to a new hard drive, which hasn’t been formatted yet. To make sure that we’re working with the right drives, we’ll open up a terminal (Applications > Accessories > Terminal) and enter the following command:

        sudo fdisk -l

    We have two small drives: /dev/sda, which has two partitions, and /dev/sdc, which is completely unformatted. We want to copy the data from /dev/sda to /dev/sdc. Note: while you can copy a smaller drive to a larger one, you can’t copy a larger drive to a smaller one with the method described below.

    Now the fun part: using dd. The invocation we’ll use is:

        sudo dd if=/dev/sda of=/dev/sdc

    In this case, we’re telling dd that the input file (“if”) is /dev/sda, and the output file (“of”) is /dev/sdc. If your drives are quite large, this can take some time, but in our case it took just less than a minute. If we run sudo fdisk -l again, we can see that, despite not formatting /dev/sdc at all, it now has the same partitions as /dev/sda. Additionally, if we mount all of the partitions, we can see that all of the data on /dev/sdc is now the same as on /dev/sda. Note: you may have to restart your computer to be able to mount the newly cloned drive.

    And that’s it. If you exercise caution and make sure that you’re using the right drives as the input file and output file, dd isn’t anything to be scared of. Unlike other utilities, dd copies absolutely everything from one drive to another; that means you can even recover files deleted from the original drive in the clone!
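    One optional refinement: dd’s default copies in small 512-byte blocks, which can be slow on large drives. A variation of the same command (assuming the same device names as above; bs= is a standard dd option, and cmp, part of the standard diffutils package, is one way to verify the copy afterwards):

        # Copy in 4 MB blocks instead of the 512-byte default for better throughput.
        sudo dd if=/dev/sda of=/dev/sdc bs=4M

        # Optionally verify the clone by comparing the raw devices.
        # If /dev/sdc is larger, cmp reports EOF on /dev/sda once the copied
        # region has been fully compared - that is expected.
        sudo cmp /dev/sda /dev/sdc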


  • JavaScript Class Patterns

    - by Liam McLennan
    To write object-oriented programs we need objects, and likely lots of them. JavaScript makes it easy to create objects:

        var liam = { name: "Liam", age: Number.MAX_VALUE };

    But JavaScript does not provide an easy way to create similar objects. Most object-oriented languages include the idea of a class, which is a template for creating objects of the same type. From one class many similar objects can be instantiated. Many patterns have been proposed to address the absence of a class concept in JavaScript. This post will compare and contrast the most significant of them.

    Simple Constructor Functions

    Classes may be missing, but JavaScript does support special constructor functions. By prefixing a call to a constructor function with the ‘new’ keyword, we can tell the JavaScript runtime that we want the function to behave like a constructor and instantiate a new object containing the members defined by that function. Within a constructor function the ‘this’ keyword references the new object being created, so a basic constructor function might be:

        function Person(name, age) {
            this.name = name;
            this.age = age;
            this.toString = function() {
                return this.name + " is " + age + " years old.";
            };
        }

        var john = new Person("John Galt", 50);
        console.log(john.toString());

    Note that by convention the name of a constructor function is always written in Pascal Case (the first letter of each word is capitalized). This is to distinguish between constructor functions and other functions. It is important that constructor functions are called with the ‘new’ keyword and that non-constructor functions are not. There are two problems with the constructor function pattern shown above:

    - It makes inheritance difficult.
    - The toString() function is redefined for each new object created by the Person constructor. This is sub-optimal because the function should be shared between all of the instances of the Person type.

    Constructor Functions with a Prototype

    JavaScript functions have a special property called prototype. When an object is created by calling a JavaScript constructor, all of the properties of the constructor’s prototype become available to the new object. In this way many Person objects can be created that access the same prototype. An improved version of the above example can be written:

        function Person(name, age) {
            this.name = name;
            this.age = age;
        }

        Person.prototype = {
            toString: function() {
                return this.name + " is " + this.age + " years old.";
            }
        };

        var john = new Person("John Galt", 50);
        console.log(john.toString());

    In this version a single instance of the toString() function is now shared between all Person objects.

    Private Members

    The short version is: there aren’t any. If a variable is defined with the var keyword within the constructor function, then its scope is that function. Other functions defined within the constructor function will be able to access the private variable, but anything defined outside the constructor (such as functions on the prototype property) won’t have access to it. Any variables defined on the constructor are automatically public. Some people work around this by prefixing properties with an underscore and then, by convention, not calling those properties:

        function Person(name, age) {
            this.name = name;
            this.age = age;
        }

        Person.prototype = {
            _getName: function() {
                return this.name;
            },
            toString: function() {
                return this._getName() + " is " + this.age + " years old.";
            }
        };

        var john = new Person("John Galt", 50);
        console.log(john.toString());

    Note that the _getName() function is only private by convention; it is in fact a public function.

    Functional Object Construction

    Because of the weirdness involved in using constructor functions, some JavaScript developers prefer to eschew them completely. They theorize that it is better to work with JavaScript’s functional nature than to try and force it to behave like a traditional class-oriented language. When using the functional approach, objects are created by returning them from a factory function. An excellent side effect of this pattern is that variables defined within the factory function are accessible to the new object (due to closure) but are inaccessible from anywhere else. The Person example implemented using the functional object construction pattern is:

        var personFactory = function(name, age) {
            var privateVar = 7;
            return {
                toString: function() {
                    return name + " is " + age * privateVar / privateVar + " years old.";
                }
            };
        };

        var john2 = personFactory("John Lennon", 40);
        console.log(john2.toString());

    Note that the ‘new’ keyword is not used for this pattern, and that the toString() function has access to the name, age and privateVar variables because of closure. This pattern can be extended to provide inheritance and, unlike the constructor function pattern, it supports private variables. However, when working with JavaScript code bases you will find that the constructor function is more common, probably because it is a better approximation of mainstream class-oriented languages like C# and Java.

    Inheritance

    Both of the above patterns can support inheritance, but for now, favour composition over inheritance. (A sketch of one common approach appears at the end of this post.)

    Summary

    When JavaScript code exceeds simple browser automation, object orientation can provide a powerful paradigm for controlling complexity. Both of the patterns presented in this article work; the choice is a matter of style. Only one question still remains: who is John Galt?
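    As promised in the Inheritance section, here is a minimal sketch of prototype-based inheritance for the constructor function pattern, using ES5’s Object.create (one common approach, not the only one; Employee and its fields are my own illustrative names):

        // Employee inherits from the Person constructor defined above.
        function Employee(name, age, company) {
            Person.call(this, name, age);   // run the Person constructor on `this`
            this.company = company;
        }

        // Point Employee's prototype chain at Person's prototype.
        Employee.prototype = Object.create(Person.prototype);
        Employee.prototype.constructor = Employee;

        // Override toString() for Employee instances.
        Employee.prototype.toString = function() {
            return this.name + " (" + this.company + ") is " + this.age + " years old.";
        };

        var dagny = new Employee("Dagny Taggart", 35, "Taggart Transcontinental");
        console.log(dagny.toString());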


  • A debugging experience with "highly compatible" ASP.NET 4.5

    - by Jeff
    I have to admit that I will pretty much upgrade software for no reason other than being on the latest version. I won't do it if it's super expensive (Adobe gets money from me about once every three or four years at best), but particularly with frameworks and stuff generally available as part of my MSDN subscription, I'll be bleeding edge. CoasterBuzz was running on the MVC 4 framework pretty much as soon as they did a "go live" license for it. I didn't really jump in head-first with Windows 8 and Visual Studio 2012, in part because I just wasn't interested in doing the reinstalls for each new version. Turns out there weren't that many revisions anyway. But when the final versions were released a week and a half ago, I jumped in.

    I saw on one of the Microsoft sites that .NET 4.5 was a "highly compatible in-place update" to the framework. Good enough for me. I was obviously running it by default in Windows 8, and I installed it on my production server. I suppose it's "highly compatible," except when it isn't. Three of my sites are running with various flavors of the MVC version of POP Forums. All of them stopped working under ASP.NET 4.5. It was not immediately obvious what the problem might be, beyond an exception indicating that there were no repository classes registered with Ninject, which I use for dependency injection in the forums. This was made all the more weird by the fact that it ran fine locally in the dev Web host.

    My first instinct was to spin up a Windows Server VM on my local box and put the remote debugger on it. (Side note: running multiple VMs on a Retina MacBook Pro with 16 gigs of RAM is pretty much the most awesome thing ever. I can't believe this computer is for real, and not a 50-pound tower under my desk.) What might be going on in IIS that doesn't happen in Visual Studio? In the debugging process, I realized that I might be looking in the wrong place.

    POP Forums creates a Ninject container using a method called from a PreApplicationStartMethod attribute, and at that time registers a module (what Ninject uses to map interfaces to implementations) that maps all of the core dependencies. It also creates an instance of an HttpModule that originally hosted the "services" (search indexing, mailer, etc.) but now just records errors. That's all well and good, but the actual repository mapping, where data is actually read or persisted, happens in Application_Start() in global.asax. The idea there is that you can swap out the SqlSingleWebServer repos for something tuned for multiple servers, Oracle, or something else. Of course, if I used something like StructureMap, which does convention-based mapping for dependency injection (a class implementing ISettingsRepository called SettingsRepository is automagically mapped), I wouldn't have to worry about it. In any case, the HttpModule, being instantiated before Application_Start() gets to run, would throw because there was no repo mapped from which it could get settings. This makes total sense.

    The fix is sort of a hack, where I don't set up the innards of the HttpModule until a call to its BeginRequest is made. I say it's a hack because its primary function, logging exceptions, won't work until the app has warmed up. Still, this brings up an interesting question about the race condition, and what changed in 4.5 when it's running in IIS. In ASP.NET 4, it would appear that the code called via the PreApplicationStartMethod attribute was either failing silently and running again later, or it was getting to that code after Application_Start was called. In any case, it's a weird thing. The real pain point I'm experiencing now is a bug in MVC 4 that is extremely serious, because it renders the mobile/alternate view functionality very much broken.
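    For readers who want to picture the BeginRequest hack described above, here is a minimal C# sketch of deferring module initialization until the first request, so Application_Start has a chance to register the repository mappings first. The type and member names are my own placeholders, not the actual POP Forums source:

        using System.Web;

        public class PopForumsErrorModule : IHttpModule
        {
            private static readonly object InitLock = new object();
            private static bool _initialized;

            public void Init(HttpApplication context)
            {
                // Keep Init() light: when the module is instantiated via
                // PreApplicationStartMethod, Application_Start has not run yet,
                // so the repository mappings don't exist.
                context.BeginRequest += (sender, e) => EnsureInitialized();
            }

            private static void EnsureInitialized()
            {
                if (_initialized) return;
                lock (InitLock)
                {
                    if (_initialized) return;
                    // By the first request, Application_Start has registered the
                    // repositories, so resolving ISettingsRepository from the
                    // Ninject container is now safe (resolution omitted here).
                    _initialized = true;
                }
            }

            public void Dispose() { }
        }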


  • Asynchrony in C# 5: Dataflow Async Logger Sample

    - by javarg
    Check out these (very simple) code examples for TPL Dataflow. Suppose you are developing an async logger to register application events to different sinks or log writers. The logger architecture composes a BufferBlock<T>, the pool of log entries to be processed, with linked ActionBlock<TInput> instances that represent the log writers or sinks (the original post shows this as a diagram). Note how blocks can be composed to achieve the desired behavior: this composition allows only one ActionBlock to consume each entry at a time. The implementation would be something similar to the following (add a reference to System.Threading.Tasks.Dataflow.dll, found in %User Documents%\Microsoft Visual Studio Async CTP\Documentation):

    TPL Dataflow Logger

        var bufferBlock = new BufferBlock<Tuple<LogLevel, string>>();

        ActionBlock<Tuple<LogLevel, string>> infoLogger =
            new ActionBlock<Tuple<LogLevel, string>>(
                e => Console.WriteLine("Info: {0}", e.Item2));

        ActionBlock<Tuple<LogLevel, string>> errorLogger =
            new ActionBlock<Tuple<LogLevel, string>>(
                e => Console.WriteLine("Error: {0}", e.Item2));

        bufferBlock.LinkTo(infoLogger, e => (e.Item1 & LogLevel.Info) != LogLevel.None);
        bufferBlock.LinkTo(errorLogger, e => (e.Item1 & LogLevel.Error) != LogLevel.None);

        bufferBlock.Post(new Tuple<LogLevel, string>(LogLevel.Info, "info message"));
        bufferBlock.Post(new Tuple<LogLevel, string>(LogLevel.Error, "error message"));

    Note the filter applied to each link (in this case, the logging level selects the writer used). We can specify message filters using Predicate functions on each link. Now, the previous sample is useless for a logger, since logging levels are not exclusive (thus, several writers could be needed to process a single message). Let's use a BroadcastBlock<T> instead of a BufferBlock<T>:

    Broadcast Logger

        var bufferBlock = new BroadcastBlock<Tuple<LogLevel, string>>(
            e => new Tuple<LogLevel, string>(e.Item1, e.Item2));

        ActionBlock<Tuple<LogLevel, string>> infoLogger =
            new ActionBlock<Tuple<LogLevel, string>>(
                e => Console.WriteLine("Info: {0}", e.Item2));

        ActionBlock<Tuple<LogLevel, string>> errorLogger =
            new ActionBlock<Tuple<LogLevel, string>>(
                e => Console.WriteLine("Error: {0}", e.Item2));

        ActionBlock<Tuple<LogLevel, string>> allLogger =
            new ActionBlock<Tuple<LogLevel, string>>(
                e => Console.WriteLine("All: {0}", e.Item2));

        bufferBlock.LinkTo(infoLogger, e => (e.Item1 & LogLevel.Info) != LogLevel.None);
        bufferBlock.LinkTo(errorLogger, e => (e.Item1 & LogLevel.Error) != LogLevel.None);
        bufferBlock.LinkTo(allLogger, e => (e.Item1 & LogLevel.All) != LogLevel.None);

        bufferBlock.Post(new Tuple<LogLevel, string>(LogLevel.Info, "info message"));
        bufferBlock.Post(new Tuple<LogLevel, string>(LogLevel.Error, "error message"));

    As this block copies the message to all its outputs, we need to define the copy function in the block constructor. In this case we create a new Tuple, but you can always use the identity function if passing the same reference to every output is acceptable. Try both scenarios and compare the results.
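    One note if you try this: both samples assume a LogLevel flags enum that the post never defines. A plausible definition, inferred from the bitwise filter expressions above (my assumption, not the original author's code):

        using System;

        // Inferred from the (e.Item1 & LogLevel.X) != LogLevel.None filters above.
        [Flags]
        public enum LogLevel
        {
            None  = 0,
            Info  = 1,
            Error = 2,
            All   = Info | Error
        }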


  • SQL SERVER – Integrate Your Data with Skyvia – Cloud ETL Solution

    - by Pinal Dave
    These days data integration often becomes a key aspect of business success. For business analysts it’s very important to get integrated data from various sources, such as relational databases, cloud CRMs, etc., to make correct and successful decisions. There are various data integration solutions on the market, and today I will tell you about one of them: Skyvia.

    Skyvia is a cloud data integration service which allows integrating data in cloud CRMs and different relational databases. It is a completely online solution and does not require anything except a browser. Skyvia provides powerful ETL tools for data import, export, replication, and synchronization for SQL Server and other databases and cloud CRMs.

    You can use Skyvia data import tools to load data from various sources to SQL Server (and SQL Azure). Skyvia supports cloud CRMs such as Salesforce and Microsoft Dynamics CRM, and databases such as MySQL and PostgreSQL. You can even migrate data from SQL Server to SQL Server, or from SQL Server to other databases and cloud CRMs. Additionally, Skyvia supports import of CSV files, either uploaded manually or stored on cloud file storage services, such as Dropbox, Box, Google Drive, or FTP servers.

    When data import is not enough, Skyvia offers bidirectional data synchronization. With this tool, you can synchronize SQL Server data with other databases and cloud CRMs. After performing the first synchronization, Skyvia tracks data changes in the synchronized data storages. In SQL Server databases (and other relational databases) it creates additional tracking tables and triggers. This allows synchronizing only the changed data. Skyvia also maps records to each other by their primary key values, so it does not require different sources to have the same primary key structure. It can still match the corresponding records without adding any additional columns or changing the data structure. The only requirement for synchronization is that primary keys must be autogenerated.

    With Skyvia it’s not necessary for data to have the same structure in the integrated data storages. Skyvia supports powerful mapping mechanisms that allow synchronizing data with completely different structures. It provides support for complex mathematical and string expressions when mapping data, using lookups, etc. You may use data splitting: loading data from a single CSV file or source table to multiple related target tables. Or you may load data from several source CSV files or tables to several related target tables. In each case Skyvia preserves data relations; it builds the corresponding relations between the target data automatically.

    When you often work with cloud CRM data, native CRM data reporting and analysis tools may not be enough for you, and there is a vast set of professional data analysis and reporting tools available for SQL Server. With Skyvia you can quickly copy your cloud CRM data to a SQL Server database and apply the corresponding SQL Server tools to the data. In this case you can use Skyvia data replication tools, which allow you to quickly copy cloud CRM data to SQL Server or other databases without customizing any mapping. You need only specify the columns to copy data from; target database tables will be created automatically. Skyvia offers powerful filtering settings to replicate only the records you need.

    Skyvia also provides the capability to export data from SQL Server (including SQL Azure) and other databases and cloud CRMs to CSV files. These files can be either downloaded manually or loaded to cloud file storages or an FTP server. You can use export, for example, to back up SQL Azure data to Dropbox. Any data integration operation can be scheduled for automatic execution. Thus, you can automate your SQL Azure data backup or data synchronization: just configure it once, then schedule it, and benefit from automatic data integration with Skyvia.

    Currently registration and use of Skyvia are completely free, so you can try it yourself and find out whether its data migration and integration tools suit you. Visit this link to register on Skyvia: https://app.skyvia.com/register

    Reference: Pinal Dave (http://blog.sqlauthority.com)


  • SOA Community Newsletter June 2013

    - by JuergenKress
    Dear SOA partner community member,

    Thanks for showing your interest in a rerun of the Fusion Middleware Summer Camps! Based on your suggestions, we are happy to announce the 3rd edition of our advanced Fusion Middleware training. The camps will take place from August 26th to 30th, 2013, in Lisbon, Portugal. Topics will include Adaptive Case Management (ACM) as part of BPM Suite, B2B, Advanced SOA and SOA Governance. Please make sure you plan and book your seat in advance (booking is on a first come, first served basis!).

    Thanks for all your efforts to become certified and Specialized. All the experts who achieved the SOA Suite 11g Essentials or BPM Suite 11g Certified Implementation Specialist can download a logo for their blog or business card at the Competence Center. All the companies that achieved a SOA or BPM Specialization can request a nice plaque for their office.

    As part of our Industrial SOA article series we published “Canonizing a Language for Architecture” in the Service Technology Magazine and on Oracle Technology Network. If you write books or a blog, make sure you share them with us!

    Cloud computing is the hottest topic in IT, and especially as an architect you should be aware of the concepts and the technology; therefore I highly recommend Thomas Erl’s latest book, “Cloud Computing”. In the BPM space, Adaptive Case Management (ACM) is the hottest topic; with BPM PS6 the backend ACM functionality and an ACM sample application are available. You can even combine this hype with Customer Experience. The BPM section in this newsletter reflects the high importance of the topic and includes a BPM PS6 video showing the process lifecycle, the BPM Resource Kit, Functional Testing, Introduction to Web Forms, Customized Workspace Application and Instance Patching Demo.

    B2B is also becoming more and more popular in the Oracle SOA Suite. If you could not attend the training organized in May, we offer you an additional B2B training as part of the Summer Camps, or you can download the B2B training material from our SOA Community Workspace (SOA Community membership required).

    Thanks to all for sharing the valuable SOA content with our community! Special thanks to ec4u for the new reference of SOA Suite and AIA Foundation Pack at a Swiss insurance company. It is time to submit a SOA and BPM reference request today! In this edition of the newsletter you will see the second part of Guido and Ronald’s OSB article series, and Kathiravan Udayakumar’s exclusive article on SOA Suite best practice. If you want to submit your content for the next edition of the newsletter, then please feel free to send it to me.

    The A-Team is an excellent contributor to best practice; make sure you visit the new A-Team page and read their articles, such as Getting to know Maven. Also on the SOA side, we have published many new articles from the community: Oracle SOA Suite for the Busy IT Professional by Frank Munz, SOA Suite Knowledge - Polyglot Service Implementation with Groovy by Alexander Suchier, QA82 Analyzer - Automated Quality Assurance for Oracle SOA Suite Projects, Verifying the Target by Anthony Reynolds, and a new book, Oracle SOA Governance 11g Implementation, by Luis Augusto Weir.

    Two new SOA on-demand training courses, the Oracle Business Rules Self-Study Course and the Introduction to Human Workflow online course, are available now! Make use of the summer time and get trained. Hope to see you in Lisbon for the Summer Camps!

    Jürgen Kress
    Oracle SOA & BPM Partner Adoption EMEA

    To read the newsletter please visit http://tinyurl.com/soanewsJune2013 (OPN account required). To become a member of the SOA Partner Community please register at http://www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Creating my first F# program in my new "Expert F# Book"

    - by MarkPearl
    So I have a brief hour or so that I can dedicate today to reading my F# book. It's a public holiday and my wife's birthday, and I have a ton of assignments for UNISA that I need to complete, but I just had to try something in F#. So I read chapter 1, which is pretty much an introduction to the rest of the book; it looks good so far. Then I get to chapter 2, called "Getting Started with F# and .NET". Great, there is a code sample on the first page of the chapter. So I open up VS2010, create a new F# console project and type in the code, which is meant to analyze a string for duplicate words:

        #light
        let wordCount text =
            let words = Split [' '] text
            let wordset = Set.ofList words
            let nWords = words.Length
            let nDups = words.Length - wordSet.Count
            (nWords, nDups)

        let showWordCount text =
            let nWords, nDups = wordCount text
            printfn "--> %d words in text" nWords
            printfn "--> %d duplicate words" nDups

    So... bad start: VS does not like the "Split" method. It gives me the error message "The value constructor 'Split' is not defined". It also doesn't like wordSet.Count, telling me that the "namespace or module 'wordSet' is not defined". ??? A bit of googling, and it turns out that there was a bit of shuffling of libraries between the CTP of F# and the Beta 2 of F#. To have access to the Split function you need to download the F# PowerPack and then reference it in your code. I download and install the PowerPack and then add references to FSharp.Core and FSharp.PowerPack in my project. Still no luck! Some more googling, and the suggestions I got were something like this:

        #r "FSharp.PowerPack.dll";;
        #r "FSharp.PowerPack.Compatibility.dll";;

    So I add the code above to the top of my Program.fs file and still no joy. I now get an error message saying:

        Error 1: #r directives may only occur in F# script files (extensions .fsx or .fsscript). Either move this code to a script file, add a '-r' compiler option for this reference or delimit the directive with '#if INTERACTIVE'/'#endif'.

    So what does that mean? If I put the code straight into F# Interactive it works, but I want to be able to use it in a project. The C# equivalent would, I think, be the "using" keyword. The #r doesn't seem like it should be in the F# code. So I try what the compiler suggests:

        #if INTERACTIVE
        #r "FSharp.PowerPack.dll";;
        #r "FSharp.PowerPack.Compatibility.dll";;
        #endif

    No luck, the Split method is still not recognized. So wait a second, it mentioned something about FSharp.PowerPack.Compatibility.dll; I haven't added this as a reference to my project, so I add it and remove the two lines of #r code. Partial success: the Split method is now recognized and not underlined, but wordSet.Count is still not working. I look at my code again and it was a case error; the original wordset was mistyped compared to wordSet. Some case correction and the compiler is no longer complaining. So the code now seems to work, listed below:

        #light
        let wordCount text =
            let words = String.split [' '] text
            let wordSet = Set.ofList words
            let nWords = words.Length
            let nDups = words.Length - wordSet.Count
            (nWords, nDups)

        let showWordCount text =
            let nWords, nDups = wordCount text
            printfn "--> %d words in text" nWords
            printfn "--> %d duplicate words" nDups

    So, to recap: if I want to use the interactive compiler, then I need to include the #r directives. In my mind this is the equivalent of adding the references to my project.
    If, however, I want to use the PowerPack in a project, I just need to make sure that the correct references are there. I feel like a noob once again!

    Read the article

  • wcf http 504: Working on a mystery

    - by James Fleming
    Ok, so you're here because you've been trying to solve the mystery of why you're getting a 504 error. If you've made it to this lonely corner of the Internet, then the advice you're getting from other bloggers isn't the answer you are after. It wasn't the answer I needed either, so once I did solve my problem, I thought I'd share the answer with you. For starters, if by some miracle you landed here first, you may not already know that the 504 error is NOT coming from IIS or Cassini; that response code is coming from Fiddler.

        HTTP/1.1 504 Fiddler - Receive Failure
        Content-Type: text/html
        Connection: close
        Timestamp: 09:43:05.193
        ReadResponse() failed: The server did not return a response for this request.

    The takeaway here is that Fiddler won't help you with the diagnosis, and any further digging in that direction is a red herring. Assuming you've dug around a bit, you may have arrived at posts which suggest you're getting the error because you're trying to hump too much data over the wire and have an urgent need to employ an anti-pattern due to a special case: http://delphimike.blogspot.com/2010/01/error-504-in-wcfnet-35.html. Or perhaps you're experiencing wonky behavior using the WCF-CustomIsolated adapter in a Windows Server 2008 64-bit environment, in which case the rather fly MVP Dwight Goins' advice is what you need: http://dgoins.wordpress.com/2009/12/18/64bit-wcf-custom-isolated-%E2%80%93-rest-%E2%80%93-%E2%80%9C504%E2%80%9D-response/

    For me, none of that was helpful. I could perform a GET on a single record (http://localhost:8783/Criterion/Skip(0)/Take(1)), but I couldn't get more than one record in my collection, as in: http://localhost:8783/Criterion/Skip(0)/Take(2). I didn't have a big payload or a large number of objects; the single record returned was small (the original post shows the record here: a short XML payload with a couple of GUIDs and a one-sentence description). So while I was able to retrieve one record without a hitch, I wasn't able to return multiple records. I confirmed I could get each record individually (Skip(1)/Take(1)), so it stood to reason the problem wasn't with the data at all, and I suspected a serialization error.

    The first step to resolving this was to enable WCF tracing. Instructions on how to set it up are here: http://msdn.microsoft.com/en-us/library/ms733025.aspx. The tracing log led me to the solution:

        The use of type 'Application.Survey.Model.Criterion' as a get-only collection is not supported with NetDataContractSerializer. Consider marking the type with the CollectionDataContractAttribute attribute or the SerializableAttribute attribute or adding a setter to the property.

    So I was wrong (but close!). The problem was a deserialization issue in trying to recreate my read-only collection: http://msdn.microsoft.com/en-us/library/aa347850.aspx#Y1455. Looking at my underlying model, I saw I did have a read-only collection. Adding a setter was all it took.
        public virtual ICollection<string> GoverningResponses
        {
            get
            {
                if (!string.IsNullOrEmpty(GoverningResponse))
                {
                    return GoverningResponse.Split(';');
                }
                else
                    return null;
            }
        }

    A sketch of the property with the setter added follows below. Hope this helps. If it does, post a comment.
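    (Editor's sketch: the original post does not show the corrected code, so the setter body below is an assumption, rejoining the values into the semicolon-delimited backing property.)

        public virtual ICollection<string> GoverningResponses
        {
            get
            {
                // Unchanged logic: split the delimited backing value into a collection.
                return string.IsNullOrEmpty(GoverningResponse)
                    ? null
                    : GoverningResponse.Split(';');
            }
            set
            {
                // Hypothetical setter: satisfies NetDataContractSerializer by
                // persisting the collection back to the backing property.
                GoverningResponse = value == null ? null : string.Join(";", value);
            }
        }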

    Read the article

  • Does my use of the strategy pattern violate the fundamental MVC pattern in iOS?

    - by Goodsquirrel
    I'm about to use the strategy pattern in my iOS app, but I feel like my approach violates the fundamental MVC pattern. My app displays visual "stories", and a Story consists of (i.e. has @properties for) one Photo and one or more VisualEvent objects that represent e.g. animated circles or moving arrows on the photo. Each VisualEvent object therefore has an eventType @property that might be e.g. kEventTypeCircle or kEventTypeArrow. All events have things in common, like a startTime @property, but differ in the way they are drawn on the StoryPlayerView.

    Currently I'm trying to follow the MVC pattern: I have a StoryPlayer object (my controller) that knows about both the model objects (like Story and all kinds of visual events) and the view object StoryPlayerView. To choose the right drawing code for each of the different visual event types, my StoryPlayer uses a switch statement.

        @implementation StoryPlayer
        // (...)
        - (void)showVisualEvent:(VisualEvent *)event onStoryPlayerView:storyPlayerView
        {
            switch (event.eventType) {
                case kEventTypeCircle:
                    [self showCircleEvent:event onStoryPlayerView:storyPlayerView];
                    break;
                case kEventTypeArrow:
                    [self showArrowDrawingEvent:event onStoryPlayerView:storyPlayerView];
                    break;
                // (...)
            }
        }

    But switch statements for type checking are bad design, aren't they? According to Uncle Bob they lead to tight coupling and can and should almost always be replaced by polymorphism. Having read about the strategy pattern in Head First Design Patterns, I felt this was a great way to get rid of my switch statement. So I changed the design like this: all specialized visual event types are now subclasses of an abstract VisualEvent class that has a showOnStoryPlayerView: method.

        @interface VisualEvent : NSObject
        - (void)showOnStoryPlayerView:(StoryPlayerView *)storyPlayerView; // abstract

    Each concrete subclass implements a specialized version of this drawing behavior method.

        @implementation CircleVisualEvent
        - (void)showOnStoryPlayerView:(StoryPlayerView *)storyPlayerView
        {
            [storyPlayerView drawCircleAtPoint:self.position
                                         color:self.color
                                     lineWidth:self.lineWidth
                                        radius:self.radius];
        }

    The StoryPlayer now simply calls the same method on all types of events.

        @implementation StoryPlayer
        - (void)showVisualEvent:(VisualEvent *)event onStoryPlayerView:storyPlayerView
        {
            [event showOnStoryPlayerView:storyPlayerView];
        }

    The result seems great: I got rid of the switch statement, and if I ever have to add new types of VisualEvent in the future, I simply create new subclasses of VisualEvent, and I won't have to change anything in StoryPlayer. But of course this approach violates the MVC pattern, since now my model has to know about and depend on my view! Now my controller talks to my model, and my model talks to the view, calling methods on StoryPlayerView like drawCircleAtPoint:color:lineWidth:radius:. Calls like this should be controller code, not model code, right? It seems like I made things worse. I'm confused! Am I completely missing the point of the strategy pattern? Is there a better way to get rid of the switch statement without breaking model-view separation?

    Read the article

  • BizTalk 2009 - Architecture Decisions

    - by StuartBrierley
    In the first step towards implementing a BizTalk 2009 environment, from development through to live, I put forward a proposal that detailed the options available, as well as the costs and benefits associated with those options, to allow an informed discussion to take place with the business drivers and budget holders of the project. This ultimately led to a decision to implement an initial BizTalk Server 2009 environment using the Standard Edition of the product. It is my hope that in the long term, as projects require and allow, we will implement my ideal recommendation of a multi-server, enterprise-level environment; but given the differences in cost and the likely initial workload for the environment, this was not something I could fully recommend at this time. However, it must be noted that this decision was made in full awareness of the limits of the Standard Edition, and the business drivers of this project were made fully aware of the risks associated with running without the failover capabilities of the Enterprise Edition.

    When considering the creation of this new BizTalk Server 2009 environment, I also recommended the creation of the following pre-production environments:

    Development
    - Usage: development of solutions; unit testing against technical specifications; initial load testing; testing of deployment packages.
    - Environment: Visual Studio; BizTalk; SQL; client PCs/laptops; server environment similar to the live implementation.

    Test
    - Usage: testing of solutions against business and technical requirements.
    - Environment: BizTalk; SQL; server environment similar to the live implementation.

    Pseudo-Live
    - Usage: "as live" environment to allow testing against the live implementation; acts as back-up hardware in case of failure of the live environment.
    - Environment: BizTalk; SQL; server environment identical to the live implementation.

    The creation of these differing environments allows for the separation of the various stages of the development cycle. The development environment is for use when actively developing a solution; it is a potentially volatile environment whose state at any given time cannot be guaranteed. It allows developers to carry out initial tests in an environment that is similar to the live environment and also provides an area for testing deployment packages prior to any release to the test environment. The test environment is intended to be a semi-volatile environment that is similar to the live environment. It will change periodically through the development of a solution (or solutions) but should otherwise be stable. It allows for the continued testing of a solution against requirements without the worry that the environment is being actively changed by ongoing development. This separation of development and test is crucial in ensuring the quality and control of the tested solution. The pseudo-live environment should be considered an almost static environment. It should mimic the live environment and can act as backup hardware in the case of live failure. This environment provides an area for "as live" testing, where the performance and behaviour of the live solutions can be replicated. There should be relatively few changes to this environment, with software releases limited to release-candidate-level releases prior to going live. Whereas the pseudo-live environment should always mimic the live environment, to save on costs the development and test servers could be implemented on lower-specification hardware.
Consideration can also be given to the use of a virtual server environment to further reduce hardware costs in the development and test environments; indeed, this virtual approach can also be extended to pseudo-live and live, assuming the underlying technology is in place. Although there is no requirement for the development and test server environments to be identical to live, the overriding architecture implemented should be the same as in live, and an understanding must be gained of the performance differences to be expected across the different environments.

    Read the article

  • Analysis Services (SSAS) - Unexpected Internal Error when processing (ProcessUpdate). Workaround/Resolution

    - by James Rogers
    Many implementations require the use of ProcessUpdate to support Type 1 slowly changing dimensions. ProcessUpdate drops all of the affected indexes and aggregations in partitions affected by data that changes in the dimension on which the ProcessUpdate is being performed. Twice now I have had situations where the processing fails with "Internal error: An unexpected exception occurred." Any subsequent ProcessUpdate processing will also fail with the same error. In talking with Microsoft, the issue is corrupt indexes for the dimension(s) being processed in the partitions of the affected measure group. I cannot guarantee that the following will correct your problem, but it did in my case and saved us quite a bit of downtime.

    Workaround: run ProcessIndexes on the entire cube that is being processed and throwing the error. This corrected the problem on both 2008 and 2008 R2.

    Pros: does not require a complete rebuild of the data (ProcessFull) for either the dimension or the cube. User access can continue while the ProcessIndexes is underway.

    Cons: can take a long time, especially on large cubes with many partitions, dimensions and/or aggregations. Query performance is usually severely impacted due to the memory and CPU requirements for aggregation and index building.

        <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
          <Parallel>
            <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                     xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2"
                     xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2"
                     xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100"
                     xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200"
                     xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200">
              <Object>
                <DatabaseID>MyDatabase</DatabaseID>
                <CubeID>MyCube</CubeID>
              </Object>
              <Type>ProcessIndexes</Type>
              <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
            </Process>
          </Parallel>
        </Batch>

    The cube where the corruption exists can be found by having Profiler running while the ProcessUpdate is executing. The first partition that displays the "The Job has ended in failure." message in the TextData column will be part of the cube/measure group that has the corruption. You can try to run ProcessIndexes on just that measure group (see the sketch at the end of this post). This may correct the problem and save additional time if you have other large measure groups in the cube that are not affected by the corruption.

    Remember to execute your normal ProcessUpdate batch after the successful completion of the ProcessIndexes. The ProcessIndexes does not pick up data changes.

    Things that did not work: ProcessClearIndexes (why this doesn't work while ProcessIndexes does is unclear at this point); ProcessFull on the partition in question. In my latest case, ProcessFull would clear up the problem for that partition, but the next partition the ProcessUpdate touched that had data in it would generate an error. This leads me to believe the corruption problem exists in all partitions in the affected measure group that have data in them.

    NOTE: I experienced this problem in both a SQL 2008 and a SQL 2008 R2 Analysis Services environment, on separate builds from the same relational database.
This leads me to believe that some data condition in the tables used for the Dimension processing caused the corruption since the two environments were on physically separate hardware. I am waiting on Microsoft to analyze the dumps to give us more insight into what actually caused the corruption and will update this post accordingly.
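    (Editor's sketch, as referenced above: a measure-group-scoped ProcessIndexes command might look like the following. MyMeasureGroup is a placeholder ID, and the extra namespace declarations from the batch above are omitted for brevity.)

        <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
          <Parallel>
            <Process>
              <!-- Hypothetical object reference: scopes the index rebuild
                   to the one measure group showing the corruption. -->
              <Object>
                <DatabaseID>MyDatabase</DatabaseID>
                <CubeID>MyCube</CubeID>
                <MeasureGroupID>MyMeasureGroup</MeasureGroupID>
              </Object>
              <Type>ProcessIndexes</Type>
              <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
            </Process>
          </Parallel>
        </Batch>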

    Read the article

  • Using 3G/UMTS in Mauritius

    After some conversations, threads in online forums and mailing lists, I thought about writing this article on how to set up, configure and use 3G/UMTS connections on Linux here in Mauritius. Personally, I can only share my experience with Emtel Ltd., but I'll try to give some clues about how to configure Orange as well.

    Emtel 3G/UMTS surf stick

    Emtel provides different surf sticks from Huawei. Back in 2007, I started with an E220 that wouldn't run on Windows Vista either. Nowadays, you just plug in the surf stick (e.g. an E169) and usually the Network Manager will detect the new broadband modem. Nothing to worry about. The Linux Network Manager even provides a connection profile for Emtel here in Mauritius, and establishing the Internet connection is done in less than two minutes... even quicker.

    Using wvdial

    Old-fashioned Linux users might not take Network Manager into consideration but feel comfortable with wvdial. Although wvdial is primarily used with modems attached to serial ports, it can operate on USB ports as well. Following is my configuration from /etc/wvdial.conf:

        [Dialer Defaults]
        Phone = *99#
        Username = emtel
        Password = emtel
        New PPPD = yes
        Stupid Mode = 1
        Dial Command = ATDT

        [Dialer emtel]
        Modem = /dev/ttyUSB0
        Baud = 3774000
        Init2 = ATZ
        Init3 = ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0
        Init4 = AT+cgdcont=1,"ip","web"
        ISDN = 0
        Modem Type = Analog Modem

    The values of user name and password are optional and can be configured as you like. In case your SIM card is protected by a PIN (which is highly advised), you might add another dialer section to your configuration file like so:

        [Dialer pin]
        Modem = /dev/ttyUSB0
        Init1 = AT+CPIN=0000

    This way you can "daisy-chain" your commands to establish your Internet connection like so:

        wvdial pin emtel

    And it works auto-magically. Depending on your group assignments (dialout), you might have to sudo the wvdial statement like so:

        sudo wvdial pin emtel

    Orange parameters

    As far as I could figure out without really testing it myself, it is also necessary to set the Access Point (AP) manually with Orange. Well, although it is pretty obvious, a lot of people seem to struggle. The AP value is "orange".

        [Dialer orange]
        Modem = /dev/ttyUSB0
        Baud = 3774000
        Init2 = ATZ
        Init3 = ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0
        Init4 = AT+cgdcont=1,"ip","orange"
        ISDN = 0
        Modem Type = Analog Modem

    And you are done.

    Official Linux support from providers

    It's quite simple: forget it! The people at the Emtel call center are completely focused on the hardware and the Mobile Connect software application provided by Huawei, and are totally lost if you confront them with other constellations. For example, my wife's netbook has an integrated 3G/UMTS modem from Ericsson. Therefore, there is no need to use the Huawei surf stick at all, and of course we use the existing software, named Wireless Manager, instead. Now, imagine mentioning at the help desk: "Ehm, sorry, but what's Mobile Connect?" And Linux, after all, might give the call operator sleepless nights... Who knows? Anyway, I hope that my article and configuration can give you a helping hand and that you will be able to connect your Linux box with 3G/UMTS surf sticks here in Mauritius.

    Read the article

  • Solving Euler Project Problem Number 1 with Microsoft Axum

    - by Jeff Ferguson
    Note: The code below applies to version 0.3 of Microsoft Axum. If you are not using this version of Axum, then your code may differ from that shown here.

    I have just solved Problem 1 of Project Euler using Microsoft Axum. The problem statement is as follows: if we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 or 5 below 1000.

    My Axum-based solution is as follows:

        namespace EulerProjectProblem1
        {
            // http://projecteuler.net/index.php?section=problems&id=1
            //
            // If we list all the natural numbers below 10 that are multiples of 3 or 5,
            // we get 3, 5, 6 and 9. The sum of these multiples is 23.
            // Find the sum of all the multiples of 3 or 5 below 1000.

            channel SumOfMultiples
            {
                input int Multiple1;
                input int Multiple2;
                input int UpperBound;
                output int Sum;
            }

            agent SumOfMultiplesAgent : channel SumOfMultiples
            {
                public SumOfMultiplesAgent()
                {
                    int Multiple1 = receive(PrimaryChannel::Multiple1);
                    int Multiple2 = receive(PrimaryChannel::Multiple2);
                    int UpperBound = receive(PrimaryChannel::UpperBound);
                    int Sum = 0;

                    for(int Index = 1; Index < UpperBound; Index++)
                    {
                        if((Index % Multiple1 == 0) || (Index % Multiple2 == 0))
                            Sum += Index;
                    }

                    PrimaryChannel::Sum <-- Sum;
                }
            }

            agent MainAgent : channel Microsoft.Axum.Application
            {
                public MainAgent()
                {
                    var SumOfMultiples = SumOfMultiplesAgent.CreateInNewDomain();

                    SumOfMultiples::Multiple1 <-- 3;
                    SumOfMultiples::Multiple2 <-- 5;
                    SumOfMultiples::UpperBound <-- 1000;

                    var Sum = receive(SumOfMultiples::Sum);

                    System.Console.WriteLine(Sum);
                    System.Console.ReadLine();

                    PrimaryChannel::ExitCode <-- 0;
                }
            }
        }

    Let's take a look at the various parts of the code. I begin by setting up a channel called SumOfMultiples that accepts three inputs and one output. The first two of the three inputs represent the two possible multiples, which are three and five in this case. The third input represents the upper bound of the problem scope, which is 1000 in this case. The lone output of the channel represents the sum of all of the matching multiples:

        channel SumOfMultiples
        {
            input int Multiple1;
            input int Multiple2;
            input int UpperBound;
            output int Sum;
        }

    I then set up an agent that uses the channel. The agent, called SumOfMultiplesAgent, receives the three inputs from the channel sent to the agent, stores the results in local variables, and performs the for loop that iterates from 1 to the received upper bound.
The agent keeps track of the sum in a local variable and stores the sum in the output portion of the channel:

        agent SumOfMultiplesAgent : channel SumOfMultiples
        {
            public SumOfMultiplesAgent()
            {
                int Multiple1 = receive(PrimaryChannel::Multiple1);
                int Multiple2 = receive(PrimaryChannel::Multiple2);
                int UpperBound = receive(PrimaryChannel::UpperBound);
                int Sum = 0;

                for(int Index = 1; Index < UpperBound; Index++)
                {
                    if((Index % Multiple1 == 0) || (Index % Multiple2 == 0))
                        Sum += Index;
                }

                PrimaryChannel::Sum <-- Sum;
            }
        }

    The application's main agent, therefore, simply creates a new SumOfMultiplesAgent in a new domain, prepares the channel with the inputs that we need, and then receives the Sum from the output portion of the channel:

        agent MainAgent : channel Microsoft.Axum.Application
        {
            public MainAgent()
            {
                var SumOfMultiples = SumOfMultiplesAgent.CreateInNewDomain();

                SumOfMultiples::Multiple1 <-- 3;
                SumOfMultiples::Multiple2 <-- 5;
                SumOfMultiples::UpperBound <-- 1000;

                var Sum = receive(SumOfMultiples::Sum);

                System.Console.WriteLine(Sum);
                System.Console.ReadLine();

                PrimaryChannel::ExitCode <-- 0;
            }
        }

    The result of the calculation (which, by the way, is 233,168) is sent to the console using good ol' Console.WriteLine().

    Read the article

  • SQL SERVER – Simple Demo of New Cardinality Estimation Features of SQL Server 2014

    - by Pinal Dave
    SQL Server 2014 has a new cardinality estimation algorithm. The cardinality estimation logic is responsible for the quality of query plans and is a major factor in query performance. This logic had not been updated for quite a while, but in SQL Server 2014 it has been redesigned. The new logic incorporates various assumptions and algorithms for OLTP and warehousing workloads.

        Cardinality estimates are a prediction of the number of rows in the query result. The query optimizer uses these estimates to choose a plan for executing the query. The quality of the query plan has a direct impact on improving query performance. ~ Source: MSDN

    Let us see a quick example of how the new cardinality estimation improves performance for a query. I will be using the AdventureWorks database for my example. Before we start with this demonstration, remember that even though you have SQL Server 2014, to see the effect of the new cardinality estimates you will need your database compatibility level set to 120, which is the level for SQL Server 2014. If your server instance is SQL Server 2014 but your database compatibility level is set to 110 or any earlier version, your query will perform as it would on the older version of SQL Server. Now we will execute the following query in two different compatibility modes and see its performance. (Note that my SQL Server instance is of version 2014.)

        USE AdventureWorks2014
        GO
        -- -------------------------------
        -- NEW Cardinality Estimation
        ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 120
        GO
        EXEC [dbo].[uspGetManagerEmployees] 44
        GO
        -- -------------------------------
        -- Old Cardinality Estimation
        ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 110
        GO
        EXEC [dbo].[uspGetManagerEmployees] 44
        GO

    Result of Statistics IO

    Compatibility level 120:

        Table 'Person'. Scan count 0, logical reads 6, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Employee'. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Worktable'. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    Compatibility level 110:

        Table 'Worktable'. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Person'. Scan count 0, logical reads 137, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Employee'. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    You will notice that in the case of compatibility level 110 there are 137 logical reads from table Person, whereas in the case of compatibility level 120 there are only 6 logical reads from table Person. This drastically improves the performance of the query. If we enable the execution plan, we can see the same there as well. I hope you will find this quick example helpful. You can read more about this in my latest Pluralsight course.
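    (Editor's aside, not from the original post: you can confirm a database's current compatibility level with a catalog query like the following.)

        -- Check the current compatibility level before running the demo.
        SELECT name, compatibility_level
        FROM sys.databases
        WHERE name = 'AdventureWorks2014';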
Reference: Pinal Dave (http://blog.SQLAuthority.com). Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • SQLUG Events - London/Edinburgh/Cardiff/Reading - Masterclass, NoSQL, TSQL Gotcha's, Replication, BI

    - by tonyrogerson
    We have acquired two additional tickets to attend the SQL Server Master Class with Paul Randal and Kimberly Tripp next Thursday (17th June). For a chance to win these coveted tickets, email us ([email protected]) before 9pm this Sunday with the subject "MasterClass" - people who previously entered need not worry, you're still in with a chance. The winners will be announced Monday morning.

    As ever, plenty is going on physically; we've got dates for a stack of events in Manchester and Leeds, and I'm looking at Birmingham if anybody has ideas. We are growing our online community with the Cuppa Corner section; to participate online, remember to use the #sqlfaq twitter tag. For those wanting to get more involved in presenting and fancying a try, we are always after people to do 1-5 minute SQL nuggets or Cuppa Corners (short presentations) at any of these User Group events - just email us at [email protected]. Want removing from this email list? Then just reply with "remove please" on the subject line.

    Kimberly Tripp and Paul Randal Master Class - Thurs, 17th June - London

    REGISTER NOW AND GET A SECOND REGISTRATION FREE*

    The top things YOU need to know about managing SQL Server - in one place, on one day - presented by two of the best SQL Server industry trainers! This one-day MasterClass will focus on many of the top issues companies face when implementing and maintaining a SQL Server-based solution. In the case where a company has no dedicated DBA, IT managers sometimes struggle to keep the data tier performing well and the data available. This can be especially troublesome when the development team is unfamiliar with the effect application design choices have on database performance.

    The Microsoft SQL Server MasterClass 2010 is presented by Paul S. Randal and Kimberly L. Tripp, two of the most experienced and respected people in the SQL Server world. Together they have over 30 years' combined experience working with SQL Server in the field, and on the SQL Server product team itself. This is a unique opportunity to hear them present at a UK event which will:

    >> Debunk many of the ingrained misconceptions around SQL Server's behaviour
    >> Show you disaster recovery techniques critical to preserving your company's life-blood - the data
    >> Explain how a common application design pattern can wreak havoc in the database
    >> Walk through the top-10 points to follow around operations and maintenance for a well-performing and available data tier!

    Where: Radisson Edwardian Heathrow Hotel, London
    When: Thursday 17th June 2010

    *REGISTER TODAY AT www.regonline.co.uk/kimtrippsql - on the registration form simply quote discount code BOGOF for both yourself and your colleague, and you will save 50% off each registration. That's a 249 GBP saving!
This offer is limited, book early to avoid disappointment.

    Wed, 23 Jun - READING - Evening meeting. More info and register.
    Introduction to NoSQL (Not Only SQL) - Gavin Payne; T-SQL Gotchas and how to avoid them - Ashwani Roy; Introduction to Recency Frequency - Tony Rogerson; Reporting Services - Tim Leung.

    Thu, 24 Jun - CARDIFF - Evening meeting. More info and register.
    Alex Whittles of Purple Frog Systems talks about data warehouse design case studies; other BI-related session TBC.

    Mon, 28 Jun - EDINBURGH - Evening meeting. More info and register.
    Replication (Components, Administration, Performance and Troubleshooting) - Neil Hambly; Server Upgrades (Notes and best practice from the field) - Satya Jayanty.

    Wed, 14 Jul - LONDON - Evening meeting. More info and register.
    Meeting is being sponsored by DBSophic (http://www.dbsophic.com/download), database optimisation software. Physical Join Operators in SQL Server - Ami Levin; Workload Tuning - Ami Levin; SQL Server and Disk IO (File Groups/Files, SSDs, Fusion-IO, In-RAM DBs, Fragmentation) - Tony Rogerson; Complex Event Processing - Allan Mitchell.

    Many thanks,
    Tony Rogerson, SQL Server MVP
    UK SQL Server User Group
    http://sqlserverfaq.com

    Read the article

  • A quick look at: sys.dm_os_buffer_descriptors

    - by Jonathan Allen
    SQL Server places data into cache as it reads it from disk, so as to speed up future queries. This DMV lets you see how much data is cached at any given time, and knowing how this changes over time can help you ensure your servers run smoothly and are adequately resourced to run your systems. The DMV gives the number of cached pages in the buffer pool along with the database id that they relate to:

        USE [tempdb]
        GO
        SELECT COUNT(*) AS cached_pages_count ,
               CASE database_id
                   WHEN 32767 THEN 'ResourceDb'
                   ELSE DB_NAME(database_id)
               END AS Database_name
        FROM sys.dm_os_buffer_descriptors
        GROUP BY DB_NAME(database_id) , database_id
        ORDER BY cached_pages_count DESC;

    This gives you results which are quite useful, but if you add a new column with the expression ( COUNT(*) * 8.0 ) / 1024 AS MB (the same expression used in the logging query below; the original post shows the code as an image here) to convert the pages value to a MB value, they become more relevant and meaningful. To see how your server reacts to queries, start up SSMS and connect to a test server and database - mine is called AdventureWorks2008. Make sure you start from a known position by running:

        -- Only run this on a test server otherwise your production server's
        -- performance may drop off a cliff and your phone will start ringing.
        DBCC DROPCLEANBUFFERS
        GO
        DBCC FREEPROCCACHE
        GO

    (The FREEPROCCACHE statement is referenced later in the post and has been restored here for consistency.) Now we can run a query that would normally turn a DBA's hair white:

        USE [AdventureWorks2008]
        GO
        SELECT *
        FROM [Sales].[SalesOrderDetail] AS sod
        INNER JOIN [Sales].[SalesOrderHeader] AS soh
            ON [sod].[SalesOrderID] = [soh].[SalesOrderID]

    ...and then check our cache situation: a nice low figure - not! Almost 2000 pages of data in cache, equating to approximately 15MB. Luckily these tables are quite narrow; if this had been on a table with more columns then the effect could be even more dramatic. So, let's make our query more efficient. After resetting the cache with the DROPCLEANBUFFERS and FREEPROCCACHE code above, we'll only select the columns we want and implement a WHERE predicate to limit the rows to a specific customer.

        SELECT [sod].[OrderQty] ,
               [sod].[ProductID] ,
               [soh].[OrderDate] ,
               [soh].[CustomerID]
        FROM [Sales].[SalesOrderDetail] AS sod
        INNER JOIN [Sales].[SalesOrderHeader] AS soh
            ON [sod].[SalesOrderID] = [soh].[SalesOrderID]
        WHERE [soh].[CustomerID] = 29722

    ...and check our effect on the cache: now that is more sympathetic to our server and the other systems sharing its resources. I can hear you asking: "What has this got to do with logging, Jonathan?" Well, a smart DBA will keep an eye on this metric on their servers so they know how their hardware is coping, and be ready to investigate anomalies so that no 'disruptive' code starts to unsettle things. Capturing this information over a period of time can lead you to build a picture of how a database relies on the cache and how it interacts with other databases. This might allow you to decide on appropriate schedules for overnight jobs or otherwise balance the work of your server.
You could schedule this collection to run with a SQL Agent job and store the data in your DBA's database by creating a table with:

        IF OBJECT_ID('CachedPages') IS NOT NULL
            DROP TABLE CachedPages

        CREATE TABLE CachedPages
            (
              cached_pages_count INT ,
              MB INT ,
              Database_Name VARCHAR(256) ,
              CollectedOn DATETIME DEFAULT GETDATE()
            )

    ...and then filling it with:

        INSERT INTO [dbo].[CachedPages]
                ( [cached_pages_count] ,
                  [MB] ,
                  [Database_Name]
                )
        SELECT COUNT(*) AS cached_pages_count ,
               ( COUNT(*) * 8.0 ) / 1024 AS MB ,
               CASE database_id
                   WHEN 32767 THEN 'ResourceDb'
                   ELSE DB_NAME(database_id)
               END AS Database_name
        FROM sys.dm_os_buffer_descriptors
        GROUP BY database_id

    After this has been left logging your system metrics for a while, you can easily see how your databases use the cache over time and may see some spikes that warrant your attention. This sort of logging can be applied to all sorts of server statistics so that you can gather information that will give you baseline data on how your servers are performing. This means that when you get a problem, you can see which statistics are out of their normal range and target your efforts to resolve the issue more rapidly.
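    (Editor's sketch, not from the original post: a simple follow-up query to review the captured history.)

        -- Review cache usage per database over time from the logging table.
        SELECT Database_Name ,
               CollectedOn ,
               MB
        FROM CachedPages
        ORDER BY Database_Name , CollectedOn;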

    Read the article

  • Merge sort versus quick sort performance

    - by Giorgio
    I have implemented merge sort and quick sort using C (GCC 4.4.3 on Ubuntu 10.04, running on a laptop with 4 GB of RAM and an Intel Duo CPU at 2 GHz) and I wanted to compare the performance of the two algorithms. The prototypes of the sorting functions are:

        void merge_sort(const char **lines, int start, int end);
        void quick_sort(const char **lines, int start, int end);

    i.e. both take an array of pointers to strings and sort the elements with index i, where start <= i <= end. I have produced some files containing random strings with a length of 4.5 characters on average. The test files range from 100 lines to 10000000 lines. I was a bit surprised by the results because, even though I know that merge sort has complexity O(n log(n)) while quick sort is O(n^2) in the worst case, I have often read that on average quick sort should be as fast as merge sort. However, my results are the following:

    - Up to 10000 strings, both algorithms perform equally well; for 10000 strings, both require about 0.007 seconds.
    - For 100000 strings, merge sort is slightly faster with 0.095 s against 0.121 s.
    - For 1000000 strings, merge sort takes 1.287 s against 5.233 s for quick sort.
    - For 5000000 strings, merge sort takes 7.582 s against 118.240 s for quick sort.
    - For 10000000 strings, merge sort takes 16.305 s against 1202.918 s for quick sort.

    So my question is: are my results as expected, meaning that quick sort is comparable in speed to merge sort for small inputs but, as the size of the input data grows, the fact that its worst-case complexity is quadratic becomes evident?

    Here is a sketch of what I did. In the merge sort implementation, the partitioning consists of calling merge sort recursively, i.e.:

        merge_sort(lines, start, (start + end) / 2);
        merge_sort(lines, 1 + (start + end) / 2, end);

    Merging of the two sorted sub-arrays is performed by reading the data from the array lines and writing it to a global temporary array of pointers (this global array is allocated only once). After each merge the pointers are copied back to the original array. So the strings are stored once, but I need twice as much memory for the pointers. For quick sort, the partition function chooses the last element of the array to sort as the pivot and scans the previous elements in one loop. After it has produced a partition of the type

        start ... {elements <= pivot} ... pivotIndex ... {elements > pivot} ... end

    it calls itself recursively:

        quick_sort(lines, start, pivotIndex - 1);
        quick_sort(lines, pivotIndex + 1, end);

    Note that this quick sort implementation sorts the array in place and does not require additional memory; therefore it is more memory-efficient than the merge sort implementation. So my question is: is there a better way to implement quick sort that is worthwhile trying out? If I improve the quick sort implementation and perform more tests on different data sets (computing the average of the running times on different data sets), can I expect better performance of quick sort with respect to merge sort?

    EDIT: Thank you for your answers. My implementation is in place and is based on the pseudo-code I have found on Wikipedia in the section "In-place version": function partition(array, 'left', 'right', 'pivotIndex'), where I choose the last element in the range to be sorted as the pivot, i.e. pivotIndex := right. I have checked the code over and over again, and it seems correct to me. In order to rule out the possibility that I am using the wrong implementation, I have uploaded the source code on github (in case you would like to take a look at it).
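    (Editor's sketch: a minimal C version of the partition scheme described above, reconstructed from the description; this is not the author's actual code.)

        #include <string.h>

        /* Swap two string pointers. */
        static void swap_ptrs(const char **a, const char **b)
        {
            const char *tmp = *a;
            *a = *b;
            *b = tmp;
        }

        /* Lomuto-style partition: last element as pivot, one scan over
           the range; returns the final pivot index. Reconstructed for
           illustration only. */
        static int partition(const char **lines, int start, int end)
        {
            const char *pivot = lines[end];
            int i = start - 1;
            int j;
            for (j = start; j < end; j++) {
                if (strcmp(lines[j], pivot) <= 0) {
                    i++;
                    swap_ptrs(&lines[i], &lines[j]);
                }
            }
            swap_ptrs(&lines[i + 1], &lines[end]); /* pivot into place */
            return i + 1;                          /* pivotIndex */
        }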
Your answers seem to suggest that I am using the wrong test data. I will look into it and try out different test data sets. I will report as soon as I have some results.

    Read the article

< Previous Page | 201 202 203 204 205 206 207 208 209 210 211 212  | Next Page >