Search Results

Search found 17972 results on 719 pages for 'always on'.


  • Instant Rename and Rename Refactoring

    - by Petr
    Over the last few weeks I have received a few questions about rename refactoring, and some users have also complained to me that the refactoring in NetBeans 6.x was much faster. So I would like to explain the situation. For people who don't know, the Instant Rename action and Rename Refactoring can look like one action, but that's not true, even though both actions use the same shortcut (CTRL + R). NetBeans 6.x contained only the Instant Rename action (speaking about PHP support), which can be described as a very simple rename refactoring within one file. Since NetBeans 7.0 the Instant Rename action works only in a "non-public" context. This means the action is used for quickly renaming variables that have local scope, such as inside a method, or for renaming private methods and fields that cannot be used outside the scope where they are declared. From the user's point of view, the two actions are easy to tell apart. When CTRL+R invokes the Instant Rename action, the identifier is surrounded with a rectangle and you can rename it directly in the file. It's fast and simple, and the usages of the identifier are renamed at the same time as you type. The picture below shows the Instant Rename action for the $message identifier, which is visible only in the print_test method, and that is why CTRL+R invokes Instant Rename. NetBeans 7.0 added Rename Refactoring, which is invoked for public identifiers, meaning identifiers that could be used in other files. If you press the CTRL+R shortcut when the caret is inside the $hello identifier from the picture above, NetBeans recognizes that $hello is declared / used in a global context and calls Rename Refactoring, which opens a dialog to change the name of the identifier. From this dialog you have to preview the suggested changes by pressing the Preview button and then execute the refactoring with the Do Refactoring button. Yes, it's more complicated from the user's point of view than Instant Rename, but with Rename Refactoring NetBeans can change more files at once. It should be the developer's responsibility to decide whether the suggested changes are right and the refactoring can be executed, or whether the original name should be kept in some files. Someone may argue that they don't use the $hello variable in any other file, so Instant Rename could be used in that case. True, but then NetBeans would have to know all usages of all identifiers and keep this information up to date while a file is being edited. I'm sure this is not feasible, due to performance problems, mainly for big projects. So the usages are computed after pressing the Preview button. And why is the Refactor button always disabled in the Rename dialog, forcing the user to always go through the preview phase? NetBeans has an API and SPI for implementing refactoring actions, and this dialog is part of that infrastructure. If you rename an identifier in Java, for example, the Refactor button is enabled, but Java is a strongly typed language and you can be almost 99% sure that the IDE will suggest the right results. In PHP, a dynamic language, we cannot be sure; what NetBeans finds is only a "guess". This is why NetBeans pushes developers to preview the changes for a PHP rename. I hope that I have explained it clearly. I'm open to any discussion. What I have described above is the situation in NetBeans 7.0 and 7.0.1, and it will probably also apply to NetBeans 7.1, because there is no plan to change it. Please write your opinion here.
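    To illustrate the distinction, here is a small hypothetical PHP sketch (the function and variable names are made up for this example, not taken from the screenshots above): a variable confined to one function is handled by Instant Rename, while a variable in the global scope triggers the Rename Refactoring dialog.

        <?php
        // $message has local scope: it exists only inside print_test(),
        // so CTRL+R on it starts Instant Rename (in-place, single file).
        function print_test() {
            $message = "Hello from print_test";
            echo $message;
        }

        // $hello lives in the global scope and could be referenced from
        // other files, so CTRL+R on it opens the Rename Refactoring dialog.
        $hello = "Hello world";
        print_test();
        echo $hello;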

    Read the article

  • Assign highest priority to my local repository

    - by Anwar Shah
    Original question was: "How to assign highest priority to local repository without using sources.list file". I have set up a local repository with packages I downloaded. I use it to avoid downloading the same packages over the Internet when I need to reinstall my Ubuntu. It is a basic repository, created with apt-ftparchive packages . > Packages. I made this a trusted repository to avoid the "unauthenticated repository" warning. (When a repository is untrusted, apt or synaptic will try to download the same packages over the Internet, because that source is trusted.) I have been using this local repository for at least a year, but I always have to put my local repository line at the top of the sources.list file for it to be used. This is annoying, since I must open a terminal and do some typing in it every time I reinstall Ubuntu, even though there is a better tool, software-properties-gtk. I cannot use that tool because it places the source line at the end of sources.list. The real problem is that apt and synaptic always download a package from whichever source is mentioned earliest, without checking whether the package is already available in the local repository. So I have no choice but to place the local source at the top of sources.list from the terminal (I actually don't hate the terminal, but I need a solution). I have tried this method, but it does not help me. My preferences file, /etc/apt/preferences.d/local-pin-900, is:

        Package: *
        Pin: release o=Local,n=ubuntu-local
        Pin-Priority: 900

    My Release file is:

        Origin: Local
        Label: Local-Ubuntu
        Description: Local Ubuntu Repository
        Codename: ubuntu-local
        MD5Sum:
         ed43222856d18f389c637ac3d7dd6f85 1043412 Packages
         d41d8cd98f00b204e9800998ecf8427e 0 Sources

    When I enable the apt preference, apt-cache policy correctly shows the preference, i.e. it shows that the local repository has the highest priority. But when I run sudo apt-get install <package-name>, apt tries to download it from the Internet. But when I place my local repo at the top, it installs from the local repository. So, my question is: is it possible to force apt to use the local repository when the package is available there, without explicitly placing "the local source" at the top of my repository list (i.e. the sources.list file)? Edit: the output of apt-cache policy $package_name is as follows:

        nautilus-wipe:
          Installed: (none)
          Candidate: 0.1.1-2
          Version table:
             0.1.1-2 0
                500 http://archive.ubuntu.com/ubuntu/ precise/universe i386 Packages
                900 file:/media/Main/Linux-Software/Ubuntu/Precise/ Packages

    It shows that my local repository has the higher preference, even though it is not the one that comes first in the sources.list file. Here is the output of apt-get install nautilus-wipe:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following NEW packages will be installed:
          nautilus-wipe
        0 upgraded, 1 newly installed, 0 to remove and 131 not upgraded.
        Need to get 30.7 kB of archives.
        After this operation, 150 kB of additional disk space will be used.
        'http://archive.ubuntu.com/ubuntu/pool/universe/n/nautilus-wipe/nautilus-wipe_0.1.1-2_i386.deb' nautilus-wipe_0.1.1-2_i386.deb 30730 MD5Sum:7d497b8dfcefe1c0b51a45f3b0466994

    It is still trying to get the file from the Internet, though I think it should be happy with the local one.
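    Not an answer, but a diagnostic sketch that may help narrow this down (the package name is simply the one from the question above): apt-cache policy shows the priorities apt has assigned, and apt-get's --print-uris option shows which source apt would actually download from, without installing anything.

        # Which versions and pin priorities does apt see for this package?
        apt-cache policy nautilus-wipe

        # Which URI would apt actually fetch the package from?
        # (--print-uris lists the download URIs instead of installing,
        # so you can check whether the file: source or the http: mirror wins.)
        sudo apt-get install --print-uris nautilus-wipe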

    Read the article

  • Review: ComponentOne Studio for Entity Framework

    - by Tim Murphy
    While I have always been a fan of libraries that improve coding efficiency and reduce code redundancy, I have mostly been using ones that were in the public domain.  As part of the Geeks With Blogs Influencers program I got my hands on ComponentOne’s Studio for Entity Framework.  Below are my thoughts after working with the product for several weeks. My coding preference has always been maintainable code that is reusable across an enterprise's portfolio.  Because of this, my focus in reviewing this product is less on the RAD components and more on its benefits for layered applications using code-first Entity Framework. Before we get into the pros and cons, here is a summary of the main features listed for SEF: Unified Data Context, Virtual Data Access, and More Powerful Data Binding. Pros The first thing that I found to my liking is the C1DataSource. It basically manages a cache for your Entity Model context.  Under RAD conditions this is set up automatically when you drop the object on your design surface.  If you are like me and want to abstract your data management into a library, it takes a little more work, but it is still acceptable and gains the same benefits. The second feature that I found beneficial is the definition of views with improved sorting and filtering.  Again, the ease of use of these features is greater on the RAD side, but no capabilities are missing when manipulating objects in code. Linq has become my friend over the last couple of years and it was great to see that ComponentOne had ensured that it remained a first-class citizen in their design.  When you look into this product yourself I would suggest taking a dive into LiveLinq, which allows the joining of different data source types. As I went through discovering the features of this framework I appreciated the number of examples that they supplied for different uses.  Besides showing how to use SEF with WinForms, WPF and Silverlight, they also showed how to accomplish tasks with RAD, code-only and MVVM approaches. Cons The only area where I would really like to see improvement is the level of detail in their documentation.  Specifically, I would like to have seen some of the supporting code explained in the examples, such as what some supporting objects did, instead of having to go to the programmer’s reference. I did find some cases where existing projects had some trouble determining the scope that the RAD controls were allowed, but I expect this is something that is in part end-user related. Summary Overall I found the Studio for Entity Framework capable and well thought out.  If you are already using the Entity Framework, this product will fit into your environment with little effort in return for greater flexibility and greater robustness in your solutions. Whether the $895 list price for a standard version works for you will depend on your return on investment. Smaller companies with only a small number of projects may not be able to stomach it, but you get a full-featured product that is supported by a well-established company.  The more projects and the more code you have, the greater your return on investment will be. Personally I intend to apply this product to some production systems and will probably have some tips and tricks in the future. del.icio.us Tags: ComponentOne,Studio for Entity Framework,Geeks With Blogs,Influencers,Product Reviews

    Read the article

  • Reconciling the Boy Scout Rule and Opportunistic Refactoring with code reviews

    - by t0x1n
    I am a great believer in the Boy Scout Rule: "Always check a module in cleaner than when you checked it out. No matter who the original author was, what if we always made some effort, no matter how small, to improve the module? What would be the result? I think if we all followed that simple rule, we'd see the end of the relentless deterioration of our software systems. Instead, our systems would gradually get better and better as they evolved. We'd also see teams caring for the system as a whole, rather than just individuals caring for their own small little part." I am also a great believer in the related idea of Opportunistic Refactoring: "Although there are places for some scheduled refactoring efforts, I prefer to encourage refactoring as an opportunistic activity, done whenever and wherever code needs to be cleaned up - by whoever. What this means is that at any time someone sees some code that isn't as clear as it should be, they should take the opportunity to fix it right there and then - or at least within a few minutes." Particularly note the following excerpt from the refactoring article: "I'm wary of any development practices that cause friction for opportunistic refactoring ... My sense is that most teams don't do enough refactoring, so it's important to pay attention to anything that is discouraging people from doing it. To help flush this out be aware of any time you feel discouraged from doing a small refactoring, one that you're sure will only take a minute or two. Any such barrier is a smell that should prompt a conversation. So make a note of the discouragement and bring it up with the team. At the very least it should be discussed during your next retrospective." Where I work, there is one development practice that causes heavy friction - Code Review (CR). Whenever I change anything that's not in the scope of my "assignment" I am rebuked by my reviewers for making the change harder to review. This is especially true when refactoring is involved, since it makes "line by line" diff comparison difficult. This approach is the standard here, which means opportunistic refactoring is seldom done, and only "planned" refactoring (which is usually too little, too late) takes place, if at all. I claim that the benefits are worth it, and that 3 reviewers will work a little harder (to actually understand the code before and after, rather than look at the narrow scope of which lines changed - the review itself would be better due to that alone) so that the next 100 developers reading and maintaining the code will benefit. When I present this argument to my reviewers, they say they have no problem with my refactoring, as long as it's not in the same CR. However, I claim this is a myth: (1) Most of the time you only realize what and how you want to refactor when you're in the midst of your assignment. As Martin Fowler puts it: "As you add the functionality, you realize that some code you're adding contains some duplication with some existing code, so you need to refactor the existing code to clean things up... You may get something working, but realize that it would be better if the interaction with existing classes was changed. Take that opportunity to do that before you consider yourself done." (2) Nobody is going to look favorably on you releasing "refactoring" CRs you were not supposed to do. A CR has a certain overhead and your manager doesn't want you to "waste your time" on refactoring. When it's bundled with the change you're supposed to make, this issue is minimized.
    The issue is exacerbated by Resharper, as each new file I add to the change (and I can't know in advance exactly which files will end up changed) is usually littered with errors and suggestions - most of which are spot on and totally deserve fixing. The end result is that I see horrible code, and I just leave it there. Ironically, I feel that fixing such code will not only fail to improve my standing, but actually lower it and paint me as the "unfocused" guy who wastes time fixing things nobody cares about instead of doing his job. I feel bad about it because I truly despise bad code and can't stand watching it, let alone call it from my methods! Any thoughts on how I can remedy this situation?

    Read the article

  • SQL SERVER – Guest Post – Glenn Berry – Wait Type – Day 26 of 28

    - by pinaldave
    Glenn Berry works as a Database Architect at NewsGator Technologies in Denver, CO. He is a SQL Server MVP, and has a whole collection of Microsoft certifications, including MCITP, MCDBA, MCSE, MCSD, MCAD, and MCTS. He is also an Adjunct Faculty member at University College – University of Denver, where he has been teaching since 2000. He is a wonderful blogger and blogs often here. I am a big fan of Glenn's Dynamic Management Views (DMV) scripts. His scripts are extremely popular, and the reality is that he inspired me to start this series with his famous DMV, which I mentioned in the very first wait stats blog post (I had forgotten to request his permission to re-use the script, but when asked later he wholeheartedly approved it). Here is his excellent blog post on the subject of wait stats: Analyzing cumulative wait stats in SQL Server 2005 and above has become a popular and effective technique for diagnosing performance issues and further focusing your troubleshooting and diagnostic efforts.  Rather than just guessing about what resource(s) SQL Server is waiting on, you can actually find out by running a relatively simple DMV query. Once you know what resources SQL Server is spending the most time waiting on, you can run more specific queries that focus on that resource to get a better idea of what is causing the problem. I do want to throw out a few caveats about using wait stats as a diagnostic tool. First, they are most useful when your SQL Server instance is experiencing performance problems. If your instance is running well, with no indication of any resource pressure from other sources, then you should not worry that much about what the top wait types are. SQL Server will always be waiting on some resource, but many wait types are quite benign, and can be safely ignored. In spite of this, I quite often see experienced DBAs obsessing over the top wait type, even when their SQL Server instance is running extremely well. Second, I often see DBAs jump to the wrong conclusion based on seeing a particular well-known wait type. A good example is CXPACKET waits. People typically jump to the conclusion that high CXPACKET waits mean that they should immediately change their instance-level MAXDOP setting to 1. This is not always the best solution. You need to consider your workload type, and look carefully for any important "missing" indexes that might be causing the query optimizer to use a parallel plan to compensate for the missing index. In this case, correcting the index problem is usually a better solution than changing MAXDOP, since you are curing the disease rather than just treating the symptom. Finally, you should get in the habit of clearing out your cumulative wait stats with the DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR); command. This is especially important if you have made any configuration or index changes, or if your workload has changed recently. Otherwise, your cumulative wait stats will be polluted with the old stats from weeks or months ago (since the last time SQL Server was started or the stats were cleared).  If you make a change to your SQL Server instance, or add an index, you should clear out your wait stats, and then wait a while to see what your new top wait stats are. At any rate, enjoy Pinal Dave's series on Wait Stats. This blog post has been written by Glenn Berry (Twitter | Blog) Read all the posts in the Wait Types and Queue series.
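    As a rough sketch of the kind of "relatively simple DMV query" described above (a simplified example, not Glenn's full diagnostic script, which also filters out many benign wait types), the cumulative wait stats can be inspected and then cleared like this:

        -- Top waits by total wait time since the stats were last cleared
        SELECT TOP (10)
               wait_type,
               waiting_tasks_count,
               wait_time_ms,
               signal_wait_time_ms
        FROM sys.dm_os_wait_stats
        ORDER BY wait_time_ms DESC;

        -- Reset the cumulative wait stats after configuration or index changes
        DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR);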
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • The Oracle Retail Week Awards - most exciting awards yet?

    - by sarah.taylor(at)oracle.com
    Last night's annual Oracle Retail Week Awards saw the UK's top retailers come together to celebrate the very best of our industry over the last year.  The Grosvenor House Hotel on Park Lane in London was the setting for an exciting ceremony which this year marked several significant milestones in British - and global - retail.  Check out our videos about the event at our Oracle Retail YouTube channel, and see if you were snapped by our photographer on our Oracle Retail Facebook page. There were some extremely hot contests for many of this year's awards - and all very deserving winners.  The entries have demonstrated beyond doubt that retailers have striven to push their standards up yet again in all areas over the past year.  The judging panel includes some of the most prestigious names in the retail industry - to impress the panel enough to win an award is a substantial achievement.  This year the panel included the likes of Andy Clarke - Chief Executive of ASDA Group; Mark Newton Jones - CEO of Shop Direct Group; Richard Pennycook - the finance director at Morrisons; Rob Templeman - Chief Executive of Debenhams; and Stephen Sunnucks - the president of Gap Europe.  These are retail veterans who have each helped to shape the British High Street over the last decade.  It was great to chat with many of them in the Oracle VIP area last night.  For me, last night's highlight was honouring both Sir Stuart Rose and Sir Terry Leahy for their contributions to the retail industry.  Both have set the standards in retailing over the last twenty years and taken their respective businesses from strength to strength, demonstrating that there is always a need for innovation even in larger businesses, and that a business has to adapt quickly to new technology in order to stay competitive.  Sir Terry Leahy's retirement this year marks the end of an era of global expansion for the Tesco group and a milestone in the progression of British retail.  Sir Terry has helped steer Tesco through nearly 20 years of change, with 14 years as Chief Executive.  During this time he led the drive for international expansion and an aggressive campaign to increase market share.  He has led the way for High Street retailers in adapting to the rise of internet retailing and nurtured a very successful home delivery service.  More recently he has pioneered the notion of cross-channel retailing with the introduction of Tesco apps for the iPhone and Android mobile phones, allowing customers to scan barcodes of items to add to a shopping list which they can then either refer to in store or order for delivery.  John Lewis Partnership was a very deserving winner of The Oracle Retailer of the Year award for their overall dedication to excellent retailing practices.  The business also picked up the American Express Marketing/Advertising Campaign of the Year award for their memorable 'Never Knowingly Undersold' advert series, which included a very successful viral video and radio campaign with Fyfe Dangerfield's cover of Billy Joel's 'She's Always a Woman' used for the adverts.  Store Design of the Year was another exciting category with Topshop taking the accolade for its flagship Oxford Street store in London, which combines boutique concession-style stalls with high fashion displays and exclusive collections from leading designers.  The store even has its own hairdressers and food hall, making it a truly all-inclusive fashion retail experience and a global landmark for any self-respecting international fashion shopper.
Over the next few weeks we'll be exploring some of the winning entries in more detail here on the blog, so keep an eye out for some unique insights into how the winning retailers have made such remarkable achievements. 

    Read the article

  • Doing your first mock with JustMock

    - by mehfuzh
    In this post, I will start with a more traditional mocking example that involves a fund transfer scenario between two accounts in different currencies, using JustMock. Our target interface, the one we will be mocking, looks similar to:

        public interface ICurrencyService
        {
            float GetConversionRate(string fromCurrency, string toCurrency);
        }

    Moving forward, the SUT (the class that will consume the service and be invoked by the user, with ICurrencyService passed in a DI style) looks like:

        public class AccountService : IAccountService
        {
            private readonly ICurrencyService currencyService;

            public AccountService(ICurrencyService currencyService)
            {
                this.currencyService = currencyService;
            }

            #region IAccountService Members

            public void TransferFunds(Account from, Account to, float amount)
            {
                from.Withdraw(amount);
                float conversionRate = currencyService.GetConversionRate(from.Currency, to.Currency);
                float convertedAmount = amount * conversionRate;
                to.Deposit(convertedAmount);
            }

            #endregion
        }

    As we can see, the TransferFunds action implemented from IAccountService takes a source account, from which it withdraws some money, and a target account, to which the converted amount is deposited using the provided conversion rate. Our first step is to create the mock. The syntax for creating instance mocks is pretty much the same everywhere and is valid for all interfaces and for non-sealed/sealed concrete classes. You can pass in additional options, such as whether it is a strict mock or not; by default all mocks in JustMock are loose, and you can use them as default-valued objects or stubs as well.

        ICurrencyService currencyService = Mock.Create<ICurrencyService>();

    With JustMock, setting up your expectations and asserting them always goes through Mock.Arrange|Assert, and the syntax is pretty much the same no matter what type of mocking you are doing. In the above scenario we want to make sure that the conversion rate always returns 2.20f when converting from GBP to CAD. To do so we need to arrange it in the following way:

        Mock.Arrange(() => currencyService.GetConversionRate("GBP", "CAD")).Returns(2.20f).MustBeCalled();

    Here, I have additionally marked the mock call as a must. That means it should be invoked somewhere in the code before we call Mock.Assert. We can also assert mocks directly through lambda expressions, but the more general Mock.Assert(mocked) will assert only the setups that are marked as "MustBeCalled()". Now, coming back to the main topic: as we have set up the mock, it is time to act on it. First we create our account service class and our from and to accounts respectively.

        var accountService = new AccountService(currencyService);

        var canadianAccount = new Account(0, "CAD");
        var britishAccount = new Account(0, "GBP");

    Next, we add some money to the GBP account:

        britishAccount.Deposit(100);

    Finally, we do our transfer with the following:

        accountService.TransferFunds(britishAccount, canadianAccount, 100);

    Once everything is completed, we need to make sure that things turned out as we expected, so it's time for assertions. Here, we first do the general assertions:

        Assert.Equal(0, britishAccount.Balance);
        Assert.Equal(220, canadianAccount.Balance);

    Then we do our mock assertion; as we have marked the call as "MustBeCalled", it will make sure that our mock was actually invoked. Moreover, we can add filters, such as how many times our expected mock call has occurred; that will be covered in coming posts.

        Mock.Assert(currencyService);

    That concludes our first mock with JustMock. Do stay tuned for more. Enjoy!!
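    For reference, here is the whole flow stitched together as a single xUnit-style test method (a sketch: it assumes the Account class used above exposes the Deposit, Withdraw, Balance and Currency members, and that the test framework supplies the [Fact] attribute and the Assert class):

        [Fact]
        public void TransferFunds_ConvertsAmount_UsingMockedConversionRate()
        {
            // Arrange: mock the currency service and fix the GBP -> CAD rate
            ICurrencyService currencyService = Mock.Create<ICurrencyService>();
            Mock.Arrange(() => currencyService.GetConversionRate("GBP", "CAD"))
                .Returns(2.20f)
                .MustBeCalled();

            var accountService = new AccountService(currencyService);
            var britishAccount = new Account(0, "GBP");
            var canadianAccount = new Account(0, "CAD");
            britishAccount.Deposit(100);

            // Act: transfer 100 GBP into the CAD account
            accountService.TransferFunds(britishAccount, canadianAccount, 100);

            // Assert: balances reflect the conversion and the arranged call was made
            Assert.Equal(0, britishAccount.Balance);
            Assert.Equal(220, canadianAccount.Balance);
            Mock.Assert(currencyService);
        }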

    Read the article

  • Brainless Backups

    - by Jesse
    I’m a software developer by trade, which means to my friends and family I’m just a “computer guy”. It’s assumed that I know everything about every facet of computing, from removing spyware to replacing hardware. I also can do all of this blindly over the phone or after hearing a five to ten word description of the problem over dinner ;-) In my position as CIO for my friends and family I’ve been in the unfortunate position of trying to recover music, pictures, or documents off of failed hard drives on more than one occasion. It’s not a great situation for anyone, and it’s always at these times that the importance of backups becomes so clear. Several months back a friend of mine found himself in this situation. The hard drive on his 8 year old laptop failed and took a good number of his digital photos with it. I think most folks can deal with losing some of their music and even some of their documents, but it really stings to lose pictures of past events and loved ones. After ordering a new laptop, my friend went out and bought an external hard drive so that he could start keeping a backup of his data. As fate would have it, several months later the drive in his new laptop failed and he learned the hard way that simply buying the external hard drive isn’t enough… you actually have to copy your stuff over every once in a while! The importance of backup and recovery plans is (hopefully) well known in IT organizations. Well-executed backup plans are in place, and hopefully the backup and recovery process is tested regularly. When you’re talking about users at home, however, the need for these backups is often understood far too late. Most typical users can’t be expected to remember to back up their data regularly, and they don’t always have the know-how to set up automated backups. For my friends and family members in this situation I recommend tools like Dropbox, Carbonite, and Mozy. Here’s why I like them: They’re affordable: Dropbox and Mozy both have free offerings, though most people with lots of music and/or photos to back up will probably exceed the storage limitations of those free plans pretty quickly. Still, all three offer pretty affordable monthly or yearly plans. In my opinion, Carbonite’s unlimited storage plan for $50-$60 per year is the best value around. They’re easy to set up: Both Dropbox and Carbonite are very easy to get set up and start using. I’ve never used Mozy, but I imagine it’s similarly painless to get up and running. Backups are automatically “off-site”: A backup that is sitting on an external hard drive right next to your computer is great, but might not protect against flood damage, a power surge, or other disasters in that single location. These services exist “in the cloud” so to speak, helping mitigate those concerns. Granted, this kind of backup scheme requires some trust in the 3rd party to protect your data from both malicious people and disastrous events. This truly is a bit of a double-edged sword, but I sleep well at night knowing that my data is being backed up and secured by a company made up of engineers that focus on the business of doing backups right. Backups are “brainless”: What I like most about services like these is that they work “automagically” in the background, watching for files to be updated and automatically backing up those changes. There’s no need to remember to plug in that external drive and copy your data over.
Since starting to recommend these services to my friends and family I find myself wearing my “data recovery” hat far less often. The only way backups are effective for your standard computer user is if they’re completely automatic. Backups need to be brainless, or they just won’t work.

    Read the article

  • SQLAuthority News – Learning Never Ends – Becoming Student Again

    - by pinaldave
    From my past few blog posts you may see a pattern – learning.  I finished my own college education a few years ago, but I firmly believe that learning should never stop.  We can learn on the job, or from outside reading, but we should always try to be learning new things.  It keeps the brain sharp!  In fact, I often find myself learning new things from reviewing old material.  If you have been reading my blog lately, you will recognize the name Koenig Solutions. You might also be rolling your eyes at me and my enthusiasm for learning and training.  College was hard work, so why continue it?  Didn't we all get educations so that we could get jobs and go on vacation? Of course, having a job means that you cannot take vacations all the time.  I have often jokingly asked my friend who owns Koenig when he is going to open a Koenig center in Bangalore. I relocated to Bangalore 1.5 years ago, so I wanted a center I could walk to anytime.  Last week I was very happy to hear that they have opened a center in Bangalore. Pinal Dave at Friend's Company I could not let a new center open without visiting it and congratulating my friend, so I recently stopped by.  I was immediately taken by the desire to go back to "school" and learn something new.  I have signed up to take a continuing education course through the new Koenig center and here is the exciting part: I will be blogging about it so that you all can be inspired to learn, too!  Keep checking back here for further updates and blog posts about my learning experience. However, where is the fun in attending a session in the town where you live? I did indeed visit their center in Bangalore, but I have opted to take the course in another city. Well, more information about that in the near future. Pinal Dave is going to be a student again Honestly, why not learn new things and become more confident?  When we have more education we become better at our jobs, which can lead to more confidence and efficiency, and may also bring more tangible rewards – like a raise or promotion.  We don't always have to focus on shallow rewards like money and recognition, so think about how much more you will enjoy your work when you know more about it.  Koenig is offering training for new certificates in SQL Server 2012, and I am planning on investigating these for sure. I feel good that I am going to be a student again and will be learning new stuff. As I said, I will blog about my experience as I go. I hope that my continuing education blog posts will inspire you, my readers, to go out and learn more.  I am serious about my education and my goal is to prove how serious I am here, on my blog. I am a big fan of learning and sharing, and I hope this series will inspire you to learn new technology which can help you progress in your career and help balance your life with work. Note: This blog post is about what inspired me to sign up for a learning course. Being a student should be an attitude for a lifetime. This post is not about a career change. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – OVER clause with FIRST _VALUE and LAST_VALUE – Analytic Functions Introduced in SQL Server 2012 – ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING

    - by pinaldave
    Yesterday I discussed two analytical functions, FIRST_VALUE and LAST_VALUE. After the blog post was published I received a very interesting question: "Don't you think there is a bug in your first example, where FIRST_VALUE remains the same but the LAST_VALUE changes on every line? I think the LAST_VALUE should be the highest value in the window or set of results." I find this question very interesting because it reflects a very commonly made mistake. No, there is no bug in the code; I think what we need is a bit more explanation. Let me attempt that. Before you continue, I suggest you read yesterday's blog post, as this question relates to it. Now let's have fun with the following query:

        USE AdventureWorks
        GO
        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
        FIRST_VALUE(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID) FstValue,
        LAST_VALUE(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID) LstValue
        FROM Sales.SalesOrderDetail s
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
        GO

    The above query will give us the following result: As per the reader's question, the value of the LAST_VALUE function should always be 114 and should not increase as the rows increase. Let me re-write the above code once again with a bit of extra T-SQL syntax. Please pay special attention to the ROWS clause which I have added.

        USE AdventureWorks
        GO
        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
        FIRST_VALUE(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) FstValue,
        LAST_VALUE(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) LstValue
        FROM Sales.SalesOrderDetail s
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
        GO

    Now once again check the result of the above query. The result of both queries is the same, because in the OVER clause the default frame is UNBOUNDED PRECEDING AND CURRENT ROW. If you want the maximum value of the window with the OVER clause, you need to change the frame to UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING in the ROWS clause. Now run the following query and pay special attention to the ROWS clause again.

        USE AdventureWorks
        GO
        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
        FIRST_VALUE(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID ORDER BY SalesOrderDetailID ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) FstValue,
        LAST_VALUE(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID ORDER BY SalesOrderDetailID ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) LstValue
        FROM Sales.SalesOrderDetail s
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
        GO

    Here is the resultset of the above query, which is what the questioner was asking for. So, in simple words, there is no bug; there is just additional syntax that needs to be added to get the desired answer. The same logic also applies to the PARTITION BY clause when it is used. Here is a quick example of how we can further partition the query by SalesOrderID with these new functions.

        USE AdventureWorks
        GO
        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
        FIRST_VALUE(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID ORDER BY SalesOrderDetailID ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) FstValue,
        LAST_VALUE(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID ORDER BY SalesOrderDetailID ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) LstValue
        FROM Sales.SalesOrderDetail s
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
        GO

    The above query gives us a windowed resultset on SalesOrderDetailID as well as the FIRST and LAST value for the windowed resultset. There is a lot to discuss about these two functions and we have just explored the tip of the iceberg. In future posts I will explore them in more depth. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Knowledge Management Feedback

    - by Robert Schweighardt
    Did you know that you can provide feedback on Knowledge Management (KM) articles? It's nice to read a technical article that is well written, where the grammar and spelling are correct, the information is up to date, concise, to the point, easy to understand, and it flows from one paragraph to another.  And though we always strive for a well-written article, it doesn't always come out that way. Knowledge Management articles are written by Oracle Support Engineers and we welcome your feedback.  Providing feedback helps to improve Oracle's Knowledge Base.  If you're reading a KM article and you have a comment, please let us know about it.  Maybe it's just to fix a spelling or grammatical error.  Maybe there's a broken link that needs to be fixed.  Maybe it's a suggestion to provide additional information.  Maybe the article contains incorrect information.  Maybe some information in the article is outdated.  Maybe something is not clear in the article.  Whatever it is, we want to hear about it.  We value your input! When you provide feedback it goes directly to the owner of the article.  The owner carefully reviews the comment and decides whether or not to implement it.  Most comments are implemented and we strive to implement them within a week!  For those comments that are not implemented, there is normally a good reason.  It may not be feasible to implement the suggestion or the suggestion may not be correct.  We don't take the decision lightly! So how do you provide feedback? Providing feedback on a KM article depends on whether you're a customer or an Oracle employee.

    Customer
    1. In the upper right hand corner of the article, click on the little +/- Rate this document icon. Note: the grayed out Comments (0) link will only show a number when there are open comments that are still being evaluated.
    2. In the Article Rating window, complete as many of the following optional fields as you like and then click the Send Rating button:
       - Rate the article as Excellent, Good or Poor
       - Specify whether the article helped you or not
       - Specify the ease of finding the article
       - Provide whatever comments you have

    Employee
    The interface for Oracle employees is a little bit different; there are more options.
    1. The +/- Rate this document icon is also available to employees and is identical to what customers have.  Please see the Customer section above.
    2. The Show document comments link shows all comments that have ever been submitted for the article.
    3. Employees have an additional way to submit a comment.  Click on the little + Add Comment icon.
    4. Fill out the Add Comment fields and click the Add Comment button.

    We look forward to your feedback!

    Read the article

  • Java EE 6 Pocket Guide from O'Reilly - Now Available in Paperback and Kindle Edition

    - by arungupta
    Hot off the press ... the Java EE 6 Pocket Guide from O'Reilly Media is now available in Paperback and Kindle Edition. Here are the book details: Release Date: Sep 21, 2012 Language: English Pages: 208 Print ISBN: 978-1-4493-3668-4 | ISBN 10: 1-4493-3668-X Ebook ISBN: 978-1-4493-3667-7 | ISBN 10: 1-4493-3667-1 The book provides a comprehensive summary of the Java EE 6 platform. The main features of the different technologies from the platform are explained and accompanied by tons of samples. A chapter is dedicated to each of Managed Beans, Servlets, Java Persistence API, Enterprise JavaBeans, Contexts and Dependency Injection, JavaServer Faces, SOAP-Based Web Services, RESTful Web Services, Java Message Service, and Bean Validation. Many thanks to Markus Eisele, John Yeary, and Bert Ertman for reviewing and providing valuable comments. This book would not have been possible without their extensive feedback! This book was mostly written by compiling my blogs, material from 2-day workshops, and several hands-on workshops around the world. The interactions with users of different technologies and whiteboard discussions with different specification leads helped me understand the technology better. Many thanks to them for helping me be a better user! The long international flights during my travel around the world proved extremely useful for authoring the content. No phone, no email, no IM, food served at the table, a power outlet = a perfect recipe for authoring ;-) Markus wrote a detailed review of the book. He was one of the manuscript reviewers of the book as well and provided valuable guidance. Some excerpts from his blog: It covers the basics you need to know of Java EE 6 and gives good examples of all relevant parts. ... This is a pocket guide which is comprehensively written. I could follow all examples and it was a good read overall. No complicated constructs and clear writing. ... GO GET IT! It is the only book you probably will need about Java EE 6! It is comprehensive, wonderfully written and covers everything you need in your daily work. It is not a complete reference but provides a great shortcut to the things you need to know. To me it is a good beginners guide and also works as a companion for advanced users. Here is the first tweet feedback ... Jeff West was super prompt to place the first pre-order of my book, pretty much within the hour it was announced. Thank you Jeff! @mike_neck posted the very first tweet about the book, thanks for that! The book is now available in Paperback and Kindle Edition from the following websites: O'Reilly Media (Ebook, Print & Ebook, Print) Amazon.com (Kindle Edition and Paperback) Barnes and Noble Overstock (1% off Amazon) Buy.com Booktopia.com Tower Books Angus & Robertson Shopping.com Here is how I can use your help: Help spread the word about the book. If you bought a Paperback or downloaded the Kindle Edition, then post your review here. If you have not bought it yet, you can buy it at amazon.com and the multiple other websites mentioned above. If you are coming to JavaOne, you'll have an opportunity to get a free copy at O'Reilly's booth on Monday (October 1) from 2-3pm. And you can always buy it from the JavaOne Bookstore. I hope you enjoy reading it and learn something new from it or hone your existing skills. As always, looking forward to your feedback!

    Read the article

  • Rules and advice for logging?

    - by Nick Rosencrantz
    In my organization we've put together some rules / guidelines about logging that I would like you to add to or comment on. We use Java, but you may comment on logging rules and advice in general.

    Use the correct logging level:
    - ERROR: Something has gone very wrong and needs fixing immediately.
    - WARNING: The process can continue without fixing. The application should tolerate this level, but the warning should always be investigated.
    - INFO: Information that an important process has finished.
    - DEBUG: Only used during development.

    Make sure that you know what you're logging. Avoid letting the logging influence the behavior of the application. The function of the logging should be to write messages in the log. Log messages should be descriptive, clear, short and concise. There is not much use for a nonsense message when troubleshooting.

    Put the right properties in log4j, so that the right method and class are written automatically. Example (dated file appender for a web application):

        log4j.rootLogger=ERROR, DATEDFILE
        log4j.logger.org.springframework=INFO
        log4j.logger.waffle=ERROR
        log4j.logger.se.prv=INFO
        log4j.logger.se.prv.common.mvc=INFO
        log4j.logger.se.prv.omklassning=DEBUG
        log4j.appender.DATEDFILE=biz.minaret.log4j.DatedFileAppender
        log4j.appender.DATEDFILE.layout=org.apache.log4j.PatternLayout
        log4j.appender.DATEDFILE.layout.ConversionPattern=%d{HH:mm:ss,SSS} %-5p [%C{1}.%M] - %m%n
        log4j.appender.DATEDFILE.Prefix=omklassning.
        log4j.appender.DATEDFILE.Suffix=.log
        log4j.appender.DATEDFILE.Directory=//localhost/WebSphereLog/omklassning/

    Log values. Please log values from the application.

    Log prefix. State which part of the application the log message is written from, preferably with a prefix agreed on for the project, e.g. PANDORA_DB.

    Amount of text. Be careful that there is not too much logging text; it can affect the performance of the app.

    Logging format. There are several variants and methods to use with log4j, but we would like uniform use of the following format when we log exceptions:

        logger.error("PANDORA_DB2: Error when fetching the time limit in TP210_RAPPORTFRIST", e);

    In the example above it is assumed that we have set the log4j properties so that the class and the method are written automatically. Always use the logger and not the following: System.out.println(), System.err.println(), e.printStackTrace(). If the web app uses our framework you can get very detailed error information from EJB, if you use try-catch in the handler and log according to the model above. In our project we use conversion patterns with which method and class names are written out automatically. Here we use two different patterns, one for the console and one for the dated file appender:

        log4j.appender.CONSOLE.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n
        log4j.appender.DATEDFILE.layout.ConversionPattern=%d [%t] %-5p %c - %m%n

    In both examples above, method and class will be written out. In the console the line number will also be written out.

    toString(). Please have a toString() for every object. For example:

        @Override
        public String toString() {
            StringBuilder sb = new StringBuilder();
            sb.append(" DwfInformation [ ");
            sb.append("cc: ").append(cc);
            sb.append("pn: ").append(pn);
            sb.append("kc: ").append(kc);
            sb.append("numberOfPages: ").append(numberOfPages);
            sb.append("publicationDate: ").append(publicationDate);
            sb.append("version: ").append(version);
            sb.append(" ]");
            return sb.toString();
        }

    instead of a special method that makes outputs like these:

        public void printAll() {
            logger.info("inbet: " + getInbetInput());
            logger.info("betdat: " + betdat);
            logger.info("betid: " + betid);
            logger.info("send: " + send);
            logger.info("appr: " + appr);
            logger.info("rereg: " + rereg);
            logger.info("NY: " + ny);
            logger.info("CNT: " + cnt);
        }

    So, is there anything you can add, comment on, or find questionable about these ways of using logging? Feel free to answer or comment even if it is not related to Java; Java and log4j are just one implementation of how this is reasoned.
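    As a small companion sketch (the class name and messages below are hypothetical, not taken from the guidelines above), this is how a per-class log4j logger and the levels described above are typically used:

        import org.apache.log4j.Logger;

        public class OmklassningService {

            // One logger per class; the conversion pattern adds class/method details.
            private static final Logger logger = Logger.getLogger(OmklassningService.class);

            public void reclassify(long id) {
                // DEBUG: development-time detail only
                logger.debug("PANDORA_DB: starting reclassification of id " + id);
                try {
                    // ... business logic ...

                    // INFO: an important process has finished
                    logger.info("PANDORA_DB: reclassification of id " + id + " finished");
                } catch (Exception e) {
                    // ERROR: something has gone very wrong and needs fixing immediately
                    logger.error("PANDORA_DB: reclassification of id " + id + " failed", e);
                }
            }
        }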

    Read the article

  • Welcome to the Oracle Retail International Blog

    - by sarah.taylor(at)oracle.com
    Welcome to the first post of the new Oracle Retail International Blog. Retail is an international business and today's successful retailers view themselves in the context of a global market. A niche fashion business in Tokyo will learn marketing strategies from the luxury brands of Milan, an independent grocer in Oslo will source the same global brands as a supermarket in Oklahoma, and every retailer in the world will measure their multi-channel operation against the international e-commerce giant Amazon.  Why? Because today's customer is a global customer with unparalleled expectations on choice, price and service. Today's consumers have access to more information on retail than ever before. Technology allows people to shop from their home, their office or from the phone in their pocket, wherever they are and at whatever time suits them. Customers are using the web to search for products and promotions. They are also using the web to develop their voice in commenting on products and services that have delighted or disappointed. In an information rich industry, this customer element creates a new world of data. The best retailers are developing eagle eyes for reading customer activity and turning it into profitable decisions. Ultimately, whether you choose to compete or shop on price, service, product innovation, excellent operations or all of the above - the international world of retail has become an inspiration for all - retailer and consumer alike.  Retail as an industry is growing and diversifying at a faster rate than ever before. Yet it is still the customer who picks the winners and the losers on the retail field. Economic circumstances transform the rules, but it is still the customer who dictates the game, the pace, the price, and the perception of the brand. Wise retailers never rest on their laurels. They are always shopping for ideas on how to improve and differentiate the offer at every touch point to meet the customer's needs better than anyone else and to gain each customer's loyalty at a time when loyalty can be cheap. With this blog, I hope that we might provide a hub for discussion around what unifies retail and how technology supports both the retailer and customer experience. Despite the competitive nature of this market, we hope that this will provide an opportunity to share experiences and lessons learnt with a view that knowledge can only help this industry to grow and develop. At Oracle we've been supporting retailers for many years. Many of us have worked within retail organisations all over the world, myself included. With this in mind, I don't feel it is too bold a statement to say that Oracle understands retail. We wouldn't be so heavily integrated in some of the biggest and most well-known names in retail if we didn't. With this blog, we intend to create a community of international retailers that can exchange ideas and experiences, debate collective challenges and drive a better understanding of this continually evolving industry. Events such as the World Retail Congress and NRF's Big Show bring enormous value to the retail industry providing platforms for discussion and learning but they happen once a year. We wanted to create a platform for discussion on a different level and that like retail, is always on. 
    We hope to be not only the infrastructure that brings all of the systems together within a retail business, but also an infrastructure that supports the industry internationally, helping it grow and flourish by creating a platform for networking, discussion, creativity, vision and strategy. Please feel free to ask questions or comment using the comments functionality.  You might also want to visit our other Oracle Retail social media sites: Facebook - http://www.facebook.com/oracleretail YouTube - http://www.youtube.com/user/oracleretail Twitter - http://twitter.com/#!/oracleretail Insight-Driven Retailing Blog - http://blogs.oracle.com/retail/

    Read the article

  • Top 10 Linked Blogs of 2010

    - by Bill Graziano
    Each week I send out a SQL Server newsletter and include links to interesting blog posts.  I’ve linked to over 500 blog posts so far in 2010.  Late last year I started storing those links in a database so I could do a little reporting.  I tend to link to posts related to the OLTP engine.  I also try to link to the individual blogger in the group blogs.  Unfortunately that wasn’t possible for the SQLCAT and CSS blogs.  I also have a real weakness for posts related to PASS. These are the top 10 blogs that I linked to during the year, ordered by the number of posts I linked to.
    Paul Randal – Paul writes extensively on the internals of the relational engine.  Lots of great posts around transactions, the transaction log, disaster recovery, corruption, indexes and DBCC.  I also linked to many of his SQL Server myths posts.
    Glenn Berry – Glenn writes very interesting posts on how hardware affects SQL Server.  I especially like his posts on the various CPU platforms.  These aren’t necessarily topics that I’m searching for but I really enjoy reading them.
    The SQLCAT Team – This Microsoft team focuses on the largest and most interesting SQL Server installations.  They regularly publish white papers and best practices.
    SQL Server CSS Team – These are the top engineers from the Microsoft Customer Service and Support group.  These are the folks you finally talk to after your case has been escalated about 20 times.  They write about the interesting problems they find.
    Brent Ozar – The posts I linked to mostly focused on the relational engine: CPU, NUMA, SSD drives, performance monitoring, etc.  But Brent writes about a real variety of topics including blogging, social networking, speaking, the MCM, SQL Azure and anything else that seems to strike his fancy.  His posts are always well written and thought provoking.
    Jeremiah Peschka – A number of Jeremiah’s posts weren’t about SQL Server.  He’s very active in the “NoSQL” area and I linked to a number of those posts.  I think it’s important for people to know what other technologies are out there.
    Brad McGehee – Brad writes about being a DBA, including maintenance plans, DBA checklists, compression and audit.
    Thomas LaRock – I linked to a variety of posts from PBM to networking to 24 Hours of PASS to TDE.  Just a real variety of topics.  Tom always writes with an interesting style, usually mixing in a movie theme and/or bacon.
    Aaron Bertrand – Many of my links this year were Denali features.  He also had a great series on bad habits to kick.
    Michael J. Swart – This last one surprised me.  There are some well known SQL Server bloggers below Michael on this list.  I linked to posts on indexes, hierarchies, transactions and I/O performance and a variety of other engine related posts.  All are interesting and well thought out.  Many of his non-SQL posts are also very good.  He seems to have an interest in puzzles and other brain teasers.  Michael, I won’t be surprised again!

    Read the article

  • Countdown of Top 10 Reasons to Never Ever Use a Pie Chart

    - by Tony Wolfram
    Pie charts are evil. They represent much of what is wrong with the poor design of many websites and software applications. They're also ineffective, misleading, and inaccurate. Using a pie chart as your graph of choice to visually display important statistics and information demonstrates either a lack of knowledge, laziness, or poor design skills. Figure 1: A floating, tilted, 3D pie chart with shadow trying (poorly) to show usage statistics within a graphics application.   Of course, pie charts in and of themselves are not evil. This blog is really about designers making poor decisions for all the wrong reasons. In order for a pie chart to appear on a web page, somebody chose it over the other alternatives, and probably thought they were doing the right thing. They weren't. Using a pie chart is almost always a bad design decision. Figure 2: Pie chart from an Oracle Reports User Guide.   A pie chart does not do the job of effectively displaying information in an elegant visual form.  Being circular, pie charts use up too much space while not allowing their labels to line up. Bar charts, line charts, and tables do a much better job. Expert designers, statisticians, and business analysts have documented their many failings, and strongly urge software and report designers not to use them. It's obvious to them that the pie chart has too many inherent defects to ever be used effectively. Figure 3: Demonstration of how comparing data between multiple pie charts is difficult.   Yet pie charts are still used frequently in today's software applications, financial reports, and websites, often on the opening page as a symbol of how the data inside is represented. In an attempt to get a flashy colorful graphic to break up boring text, designers will often settle for a pie chart that looks like Pac-Man, a colored spinning wheel, or a 3D floating alien space ship.     Figure 4: Best use of a pie chart I've found yet.   Why is the pie chart so popular? Through its constant use and iconic representation as the classic chart, the idea persists that it must be a good choice, since everyone else is still using it. Like a virus or an urban legend, no amount of vaccine or debunking will slow down the use of pie charts, which seem to be resistant to logic and common sense. Even the new iPad from Apple showcases the pie chart as one of its options.     Figure 5: Screen shot of the new iPad showcasing pie charts. Regardless of the futility of trying to rid the planet of this frequently used poor design choice, I now present to you my top 10 reasons why you should never, ever use a pie chart again.
    Number 10 - Pie Charts Just Don't Work When Comparing Data
    Number 9 - You Have A Better Option: The Sorted Horizontal Bar Chart
    Number 8 - The Pie Chart is Always Round
    Number 7 - Some Genius Will Make It 3D
    Number 6 - Legends and Labels are Hard to Align and Read
    Number 5 - Nobody Has Ever Made a Critical Decision Using a Pie Chart
    Number 4 - It Doesn't Scale Well to More Than 2 Items
    Number 3 - A Pie Chart Causes Distortions and Errors
    Number 2 - Everyone Else Uses Them: Debunking the "Urban Legend" of Pie Charts
    Number 1 - Pie Charts Make You Look Stupid and Lazy

    Read the article

  • Use Case Actors - Primary versus Secondary

    - by Dave Burke
    The Unified Modeling Language (UML) (1) defines an Actor (from Use Cases) as: An actor specifies a role played by a user or any other system that interacts with the subject. In Alistair Cockburn’s book “Writing Effective Use Cases” (2) Actors are further defined as follows: Primary Actor: The primary actor of a use case is the stakeholder that calls on the system to deliver one of its services. It has a goal with respect to the system – one that can be satisfied by its operation. The primary actor is often, but not always, the actor who triggers the use case. Supporting Actors: A supporting actor in a use case is an external actor that provides a service to the system under design. It might be a high-speed printer, a web service, or humans that have to do some research and get back to us. In a 2006 article (3) Cockburn refined the definitions slightly to read: Primary Actors: The Actor(s) using the system to achieve a goal. The Use Case documents the interactions between the system and the actors to achieve the goal of the primary actor. Secondary Actors: Actors that the system needs assistance from to achieve the primary actor’s goal. Finally, the Oracle Unified Method (OUM) concurs with the UML definition of Actors, along with Cockburn’s refinement, but OUM also includes the following: Secondary actors may or may not have goals that they expect to be satisfied by the use case; the primary actor always has a goal, and the use case exists to satisfy the primary actor. Now that we are on the same “page”, let’s consider two examples:
    A bank loan officer wants to review a loan application from a customer, and part of the process involves a real-time credit rating check.
    Use Case Name: Review Loan Application
    Primary Actor: Loan Officer
    Secondary Actors: Credit Rating System
    A Human Resources manager wants to change the job code of an employee, and as part of the process, automatically notify several other departments within the company of the change.
    Use Case Name: Maintain Job Code
    Primary Actor: Human Resources Manager
    Secondary Actors: None
    The first example is quite straightforward; we need to define the Secondary Actor because without the “Credit Rating System” we cannot successfully complete the Use Case. In other words, the goal of the Primary Actor is to successfully complete the Loan Application, but they need the explicit “help” of the Secondary Actor (Credit Rating System) to achieve this goal. The second example is where people sometimes get confused. Within OUM we would not include the “other departments” as Secondary Actors and therefore not include them on the Use Case diagram, for the following reasons: The other departments are not required for the successful completion of the Use Case. We are not expecting any response from the other departments (at least within the bounds of the Use Case under discussion). Having said that, within the detail of the Use Case Specification Main Success Scenario, we would include something like: “The system sends a notification to the related department heads (ref. Business Rule BR101)”. Now let’s consider one final example. A Procurement Manager wants to place a “bid” for some goods using an On-Line Trading Community (a B2B version of eBay).
    Use Case Name: Create Bid
    Primary Actor: Procurement Manager
    Secondary Actors: On-Line Trading Community
    You might wonder why the Trading Community is listed as a Secondary Actor, i.e. 
    if all we are going to do is place a bid for a specific quantity of goods at a given price and send that off to the Trading Community, then why would the Trading Community need to “assist” in that Use Case? Well, once again, it comes back to the “User Experience” and how we want to optimize that when we think about our Use Case, and ultimately, when the developer comes to assembling some code. In this final example, the Procurement Manager cannot successfully complete the “Create Bid” Use Case until they receive an affirmative confirmation back from the Trading Community that the Bid has been accepted. Therefore, the Trading Community must become a Secondary Actor and be referenced both on the Use Case diagram and in the Use Case Specification. Any astute readers who are wondering about the “single sitting” rule will have to wait for a follow-up Blog entry to find out how that consideration can be factored in!!! Happy Use Case writing!
    (1) OMG Unified Modeling Language™ (OMG UML), Superstructure Version 2.4.1
    (2) Cockburn, A, 2000, Writing Effective Use Cases, Addison-Wesley Professional; Edition 1
    (3) Cockburn, A, 2006, “Use Case fundamentals”, viewed 20th March 2012, http://alistair.cockburn.us/Use+case+fundamentals

    Read the article

  • Taking Your Business Scorecard Golfing

    - by tobyehatch
    Our workplace world is definitely changing. Not only are we taking work home, but we are working during odd hours in some very strange places. I had the pleasure of interviewing Jacques Vigeant, Product Strategy Manager for Oracle Business Intelligence and Enterprise Performance Management, on a Podcast, and he enlightened me about how our mobile devices and business scorecards are enabling us to be more accountable and keep a watchful eye on business – even while on the golf course.
    Business scorecards have been around for many years, so I asked Jacques if he felt they had changed significantly due to technology. His answer was, “Yes, and no.” Jacques agreed that scorecard enthusiasts are still passionate about executing the company strategy and monitoring Key Performance Indicators (KPIs), but scorecards and Business Intelligence (BI) as a whole have changed. He explained that five to six years ago, people did BI work at the office and, for the most part, disconnected from their computer and workplace when they went home – with the exception of checking email and making a phone call or two. But now, that is no longer the case. People are virtually always connected with work and, more importantly, expect their BI and scorecards to be ‘always on,’ regardless of whether they are at their desk or somewhere else.
    Basically, the BI paradigm has changed from a 'pull' model, where employees are at their desks querying or pulling information from the system, to a 'push' model where employees expect their BI and scorecard systems to reach out (or push information) to them when there is something of note to learn or something on which they need to take action. I found this very interesting. However, mobile devices do have their limitations with respect to screen size – does it really make sense to look at your strategy/scorecard on tiny devices? What kind of scorecard activities can you really expect to be able to do? Jacques’ answer was very logical. “When you think of a scorecard, it is really comprised of an organization of KPIs that are aligned with the strategic objectives of your company. KPIs are the heart of how you will execute your strategy. So, if you decompose that a little more, each KPI is well defined with the thresholds that you should keep an eye on and who is responsible for them. When we talk about scorecarding on a phone, we aren’t talking about surfing the strategy and exploring the strategy map like we do on the desktop. In a scorecarding context, we use the phone more as an alerting mechanism or simple monitoring device for your KPIs.”
    Jacques gave a great example of an inventory manager who took part of an afternoon off to go golfing before winter finally hit, and while on the front nine holes, his phone vibrated. His scorecard was alerting him that the inventory level for one of the products was below some threshold that he had set. From his phone, he had set up three options within Oracle Scorecard and Strategy Management (OSSM) for this type of situation:
    1. Contact the warehouse manager directly by phone and work it out (standard phone function)
    2. Tap/hold the KPI and add an annotation to the KPI in OSSM using the dictation capabilities of the phone, and deal with it more fully when he gets back to the office
    3. Tap/hold the KPI and invoke a business process from OSSM to transfer product from another warehouse with higher stock levels to the one that needs it
    Being on a phone should still give you options to quickly deal with situations as needed, but mobile phones are not designed to, nor should they try to, replicate the full desktop experience. We covered other interesting subjects in the interview, including how Oracle is keeping pace with mobile innovation and new devices such as Google Glasses, Galaxy Gear, Pebble Watches and more, and how Oracle is handling mobile security, which is great news for our mobile workforce. To listen to the entire Podcast, click here. To learn more about Oracle Scorecard and Strategy Management, click here.

    Read the article

  • SQLAuthority News – Who I Am And How I Got Here – True Story as Blog Post

    - by pinaldave
    Here are a few of the sample questions I get every day: “Give me a shortcut to become a superstar.” “How do I become like you?” “Which book should I read so I know everything?” “Can you share your secret to being successful? I want to know it, but do not share it with others.” The generic answer I always give is to work hard and read good educational material or watch good online videos. One of the emails really caught my attention. It was from a friend and SQL Server expert, John Sansom (Blog | Twitter). He wrote to ask if I would like to share my story with the world about “Who I am and How I got Here”. I was very much intrigued by his suggestion. John is one guy I respect a lot. I read every single topic he writes with dedication. I eagerly wait for his Weekly Summary of Best SQL Links. If you have not read them, you are missing out. Writing a guest post for him was like walking down memory lane. I remembered the time when I was beginning my career and was a bit overconfident and a bit naive. I had my share of mistakes when I started my career. As time passed by, I realized the truth. Well, we all make mistakes. Though, I am proud that as soon as I recognized my mistakes I corrected them. I never acted on impulse or when I was angry. I think that alone has helped me analyze situations better and become a better human being. Along the way, I lost my ego and it was replaced by passion. I am much happier and more successful in my work. Quite often people ask me if I am always online and whether I have a family or not. Honestly, I am able to work hard because of my family. They support me and they encourage me to enjoy what I do. They support everything I do and, personally, I do not miss a single occasion to join them in the daily chores of fun. If there were a shortcut to success, I would want to know it. I learnt SQL Server the hard way and I am still learning. There are so many things I have to learn. There is not enough time to learn everything which we want to learn. I am constantly working on it every day. I welcome you to join my journey as well. Please join me on my journey to learn SQL Server – the more the merrier. I have written the story of my life as a guest post. Read Here: A Journey to SQL Authority Special thanks to John Sansom (Blog | Twitter) for giving me space to tell my story. Indeed I am honored. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: About Me, Best Practices, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Create and Track Your Own License Keys with PowerShell

    - by BuckWoody
    SQL Server used to have a cool little tool that would let you track your licenses. Microsoft didn’t use it to limit your system or anything; it was just a place on the server where you could record that this system used this license key. I miss those days – we don’t track that any more, and I want to make sure I’m up to date on my licensing, so I made my own. Now, there are a LOT of ways you could do this. You could add an extended property in SQL Server, add a table to a tracking database, use a text file, track it somewhere else, whatever. This is just the route I chose; if you want to use some other method, feel free. Just sharing here.
    Warning: Serious problems might occur if you modify the registry incorrectly by using Registry Editor or by using another method. These problems might require that you reinstall the operating system. Microsoft cannot guarantee that these problems can be solved. Modify the registry at your own risk.
    And this is REALLY important. I include a disclaimer at the end of my scripts, but in this case you’re modifying your registry, and that could be EXTREMELY dangerous – only do this on a test server – and I’m just showing you how I did mine. It isn’t an endorsement or anything like that, and this is a “Buck Woody” thing, NOT a Microsoft thing. See this link first, and then you can read on. OK, here’s my script:

      # Track your own licenses
      # Write a new key to be the license location
      mkdir HKCU:\SOFTWARE\Buck

      # Write the variables - one sets the type, the other sets the number, and the last one holds the key
      New-ItemProperty HKCU:\SOFTWARE\Buck -name "SQLServerLicenseType" -value "Processor"
      # Notice the DWord value here - this one is a number so it needs that. Keep this on one line!
      New-ItemProperty HKCU:\SOFTWARE\Buck -name "SQLServerLicenseNumber" -propertytype DWord -value 4
      New-ItemProperty HKCU:\SOFTWARE\Buck -name "SQLServerLicenseKey" -value "ABCD1234"

      # Read them all back
      $LicenseKey = Get-Item HKCU:\Software\Buck
      $Licenses = Get-ItemProperty $LicenseKey.PSPath
      foreach ($License in $LicenseKey.Property) { $License + "=" + $Licenses.$License }

    Script Disclaimer, for people who need to be told this sort of thing: Never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or Virtual Machine, not a production system. Yes, there are always multiple ways to do things, and this script may not work in every situation, for everything. It’s just a script, people. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.

    Read the article

  • Threading Overview

    - by ACShorten
    One of the major features of the batch framework is the ability to support multi-threading. The multi-threading support allows a site to increase throughput on an individual batch job by splitting the total workload across multiple individual threads. This means each thread has fine-grained control over a segment of the total data volume at any time. The idea behind the threading is based upon the notion that "many hands make light work". Each thread takes a segment of data in parallel and operates on that smaller set. The object identifier allocation algorithm built into the product randomly assigns keys to help ensure an even distribution of records across the threads and to minimize resource and lock contention. The best way to visualize the concept of threading is to use a "pie" analogy. Imagine the total workset for a batch job is a "pie". If you split that pie into equal-sized segments, each segment would represent an individual thread. The concept of threading has advantages and disadvantages:
    Smaller elapsed runtimes – Jobs that are multi-threaded finish earlier than jobs that are single-threaded. With smaller amounts of work to do, jobs with threading will finish earlier. Note: The elapsed runtime of the threads is rarely proportional to the number of threads executed. Even though contention is minimized, some contention does exist for resources, which can adversely affect runtime.
    Threads can be managed individually – Each thread can be started individually and can also be restarted individually in case of failure. If you need to rerun thread X then that is the only thread that needs to be resubmitted.
    Threading can be somewhat dynamic – The number of threads that are run on any instance can be varied, as the thread number and thread limit are parameters passed to the job at runtime. They can also be configured using the configuration files outlined in this document and the relevant manuals. Note: Threading is not dynamic after the job has been submitted.
    Failure risk due to data issues with threading is reduced – As mentioned earlier, individual threads can be restarted in case of failure. This limits the risk to the total job if there is a data issue with a particular thread or a group of threads.
    Number of threads is not infinite – As with any resource there is a theoretical limit. While the thread limit can be up to 1000 threads, the number of threads you can physically execute will be limited by the CPU and IO resources available to the job at execution time.
    Theoretically, with the object identifiers evenly spread across the threads, the elapsed runtime for the threads should all be the same. In other words, when executing in multiple threads, theoretically all the threads should finish at the same time. Whilst this is possible, it is also possible that individual threads may take longer than other threads for the following reasons:
    Workloads within the threads are not always the same – Whilst each thread is operating on roughly the same number of objects, the amount of processing for each object is not always the same. For example, an account may have a more complex rate which requires more processing, or a meter may have a complex amount of configuration to process. If a thread has a higher proportion of objects with complex processing it will take longer than a thread with simple processing. The amount of processing is dependent on the configuration of the individual data for the job.
    Data may be skewed – Even though the object identifier generation algorithm attempts to spread the object identifiers across threads, there are some jobs that use additional factors to select records for processing. If any of those factors exhibit any data skew then certain threads may finish later. For example, if more accounts are allocated to a particular part of a schedule then threads in that schedule may finish later than other threads executed.
    Threading is important to the success of individual jobs. For more guidelines and techniques for optimizing threading, refer to Multi-Threading Guidelines in the Batch Best Practices for Oracle Utilities Application Framework based products (Doc Id: 836362.1) whitepaper, available from My Oracle Support.
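    To make the "pie" idea concrete, here is a minimal, generic Python sketch. It is my own illustration, not the Oracle Utilities batch framework or its APIs: record keys are hashed into one bucket per thread, so each worker owns a slice of the workload and a single slice could be rerun on its own after a failure.

        # Illustrative sketch only - not the product's threading implementation.
        from concurrent.futures import ThreadPoolExecutor

        def process(record_id):
            # placeholder for the per-record work (e.g. rating an account, reading a meter)
            return record_id

        def run_threaded(record_ids, thread_count=4):
            # allocate each key to a bucket, roughly evenly, echoing the key-allocation idea above
            buckets = [[] for _ in range(thread_count)]
            for rid in record_ids:
                buckets[hash(rid) % thread_count].append(rid)

            # one worker per bucket; any individual bucket could be resubmitted independently
            with ThreadPoolExecutor(max_workers=thread_count) as pool:
                return list(pool.map(lambda bucket: [process(r) for r in bucket], buckets))

        if __name__ == "__main__":
            run_threaded(range(1000), thread_count=8)

    As the article notes, equal-sized buckets do not guarantee equal elapsed times: per-record cost and data skew still vary between slices.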

    Read the article

  • Blueprints for Oracle NoSQL Database

    - by dan.mcclary
    I think that some of the most interesting analytic problems are graph problems. I'm always interested in new ways to store and access graphs. As such, I really like the work being done by TinkerPop to create open source software to make property graphs more accessible over a wide variety of datastores. Since key-value stores like Oracle NoSQL Database are well-suited to storing property graphs, I decided to extend the Blueprints API to work with it. Below I'll discuss some of the implementation details, but you can check out the finished product here: http://github.com/dwmclary/blueprints-oracle-nosqldb.
    What's in a Property Graph? In the most general sense, a graph is just a collection of vertices and edges. Vertices and edges can have properties: weights, names, or any number of other traits. In an undirected graph, edges connect vertices without direction. A directed graph specifies that all edges have a head and a tail --- a direction. A multi-graph allows multiple edges to connect two vertices. A "property graph" encompasses all of these traits.
    Key-Value Stores for Property Graphs. Key-value stores like Oracle NoSQL Database tend to be ideal for implementing property graphs. First, if any vertex or edge can have any number of traits, we can treat it as a hash map. For example: Vertex["name"] = "Mary", Vertex["age"] = 28, Vertex["ID"] = 12345, and so on. This is a natural key-value relationship: the key "name" maps to the value "Mary." Moreover, if we maintain two hash maps, one for vertex objects and one for edge objects, we've essentially captured the graph. As such, any scalable key-value store is fertile ground for planting graphs.
    Oracle NoSQL Database as a Scalable Graph Database. While Oracle NoSQL Database offers useful features like tunable consistency, what lends it to storing property graphs is the storage guarantees around its key structure. Keys in Oracle NoSQL Database are divided into two parts: a major key and a minor key. The storage guarantee is simple. Major keys will be distributed across storage nodes, which could encompass a large number of servers. However, all minor keys which are children of a given major key are guaranteed to be stored on the same storage node. For example, the vertices /Personnel/Vertex/1 and /Personnel/Vertex/2 may be stored on different servers, but /Personnel/Vertex/1-/name and /Personnel/Vertex/1-/age will always be on the same server. This means that we can structure our graph database such that retrieving all the properties for a vertex or edge requires I/O from only a single storage node. Moreover, Oracle NoSQL Database provides a storeIterator which allows us to iterate over a huge number of vertices and edges in a scalable fashion. By storing the vertices and edges as major keys, we guarantee that they are distributed evenly across all storage nodes. At the same time we can use a partial major key to iterate over all the vertices or edges (e.g. we search over /Personnel/Vertex to iterate over all vertices).
    Fork It! The Blueprints API and Oracle NoSQL Database present a great way to get started using a scalable key-value database to store and access graph data. However, a graph store isn't useful without a good graph to work on. I encourage you to fork or pull the repository, store some data, and try using Gremlin or any other language to explore.
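    To illustrate the major/minor key split without depending on the real Oracle NoSQL Database client API, here is a toy Python model (my own sketch; the class, the hash-based node placement, and the sample data are invented for illustration) showing why all of a vertex's properties land on one "storage node":

        # Conceptual model only - plain Python dicts, not the Oracle NoSQL Database API.
        class ToyKVStore:
            def __init__(self, nodes=3):
                # one dict per pretend "storage node"
                self.nodes = [dict() for _ in range(nodes)]

            def _node_for(self, major):
                # major keys are spread across storage nodes
                return self.nodes[hash(major) % len(self.nodes)]

            def put(self, major, minor, value):
                # all minor keys under a major key live on that major key's node
                self._node_for(major).setdefault(major, {})[minor] = value

            def get_all(self, major):
                # fetching every property of a vertex is a single-node read
                return self._node_for(major).get(major, {})

        store = ToyKVStore()
        store.put("/Personnel/Vertex/1", "name", "Mary")
        store.put("/Personnel/Vertex/1", "age", 28)
        print(store.get_all("/Personnel/Vertex/1"))   # {'name': 'Mary', 'age': 28}

    The real store adds persistence, replication and the storeIterator scan, but the key-placement idea is the same.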

    Read the article

  • My Experience at Oracle !!! By Ayush Gupta

    - by Nadiya
    Hi! My name is Ayush, a graduate from BITS Pilani, now working and living in Bangalore. I joined Oracle in August 2013 as a Senior Consultant (SC) and would like to share my experiences over the first couple of months with you. It has been a wonderful journey so far. The last two months have been very exciting for me. First of all I would like to mention that the training program at Oracle that we went through really prepared us well. It matured us and allowed us to go from developing small applications in college to big enterprise products. Two months of initial training has had a lasting impact on me. I am also really enjoying the knowledge I have gained so far and am learning new things in the form of product training. It's really fun to work here. We are treated like adults and we are responsible for our own workloads. With that, I can't keep from mentioning the fun times we have as a team, such as the Young Leadership programme at Hotel Fortune Trinity, which included a luxurious buffet lunch too. I wish it could happen more frequently. Oracle provides one of the best opportunities to learn various technologies across different platforms. What I like best about working at Oracle is the work-life balance. With the option of flexible timings, one can easily enjoy planned evenings with friends or maybe working out at the fitness centre in your building. Be it the birthday celebrations at the office or the day-long team outing at a resort, it's altogether a different experience. Overall, you get to take full ownership of your project and they give you a free leash on how you design your enhancements/changes. As one of the largest international companies, Oracle is obviously an expert on exploring the potential and possibility of inexperienced new hires. We were taught how to make an outstanding team work in a group training session in the first few weeks. From this experience I realized that perfect cooperation is not about where you come from or what your study background is; everyone can find his or her own role to support the team. Even though I am not that skilled in technology, my background has significantly helped me in learning new technologies at Oracle. My idea and suggestion is: for new joiners, the will to learn is more important than what you have learnt before. Colleagues here at Oracle are professionals in their field, always friendly and glad to help. So don't worry, all you need to do is just be confident and have a nice attitude; Oracle will let you fully display your talent. Come and join us, here you can always find a tailor-made role for you!

    Read the article

  • Surviving MATLAB and R as a Hardcore Programmer

    - by dsimcha
    I love programming in languages that seem geared towards hardcore programmers. (My favorites are Python and D.) MATLAB is geared towards engineers and R is geared towards statisticians, and it seems like these languages were designed by people who aren't hardcore programmers and don't think like hardcore programmers. I always find them somewhat awkward to use, and to some extent I can't put my finger on why. Here are some issues I have managed to identify:
    (Both): The extreme emphasis on vectors and matrices to the extent that there are no true primitives.
    (Both): The difficulty of basic string manipulation.
    (Both): Lack of or awkwardness in support for basic data structures like hash tables and "real", i.e. type-parametric and nestable, arrays.
    (Both): They're really, really slow even by interpreted language standards, unless you bend over backwards to vectorize your code.
    (Both): They seem to not be designed to interact with the outside world. For example, both are fairly bulky programs that take a while to launch and seem not to be designed to make simple text filter programs easy to write. Furthermore, the lack of good string processing makes file I/O in anything but very standard forms near impossible.
    (Both): Object orientation seems to have a very bolted-on feel. Yes, you can do it, but it doesn't feel much more idiomatic than OO in C.
    (Both): No obvious, simple way to get a reference type. No pointers or class references. For example, I have no idea how you roll your own linked list in either of these languages.
    (MATLAB): You can't put multiple top-level functions in a single file, encouraging very long functions and cut-and-paste coding.
    (MATLAB): Integers apparently don't exist as a first-class type.
    (R): The basic built-in data structures seem way too high level and poorly documented, and never seem to do quite what I expect given my experience with similar but lower-level data structures.
    (R): The documentation is spread all over the place and virtually impossible to browse or search. Even D, which is often knocked for bad documentation and is still fairly alpha-ish, is substantially better as far as I can tell.
    (R): At least as far as I'm aware, there's no good IDE for it. Again, even D, a fairly alpha-ish language with a small community, does better.
    In general, I also feel like MATLAB and R could be easily replaced by plain old libraries in more general-purpose languages, if sufficiently comprehensive libraries existed. This is especially true in newer general-purpose languages that include lots of features for library writers. Why do R and MATLAB seem so weird to me? Are there any other major issues that you've noticed that may make these languages come off as strange to hardcore programmers? When their use is necessary, what are some good survival tips?
    Edit: I'm seeing one issue from some of the answers I've gotten. I have a strong personal preference, when I analyze data, to have one script that incorporates the whole pipeline. This implies that a general-purpose language needs to be used. I hate having to write a script to "clean up" the data and spit it out, then another to read it back in a completely different environment, etc. I find the friction of using MATLAB/R for some of my work and a completely different language with a completely different address space and way of thinking for the rest to be a huge source of friction. Furthermore, I know there are glue layers that exist, but they always seem to be horribly complicated and a source of friction.
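    For contrast with the reference-type complaint above, here is a minimal sketch in Python (the author's stated favorite; the example is mine, not from the post) of how trivially a linked list falls out of a language with ordinary reference semantics:

        # Illustrative only: a hand-rolled linked list is a few lines when nodes can hold references.
        class Node:
            def __init__(self, value, next_node=None):
                self.value = value
                self.next_node = next_node   # a reference to another Node, no copying involved

        head = Node(1, Node(2, Node(3)))
        node = head
        while node is not None:
            print(node.value)
            node = node.next_node

    The point of the contrast is not that linked lists are useful in numerical work, but that MATLAB's and R's copy-on-assign value semantics make this kind of structure awkward to express at all.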

    Read the article

  • Free RAM disappears - Memory leak?

    - by Izzy
    On a freshly started system, free reports about 1.5G used RAM (8G RAM altogether, Ubuntu 12.04 with lightdm and plasma desktop, one konsole window started). With the apps I use running, it still consumes no more than 2G. However, after the system has been running for a couple of days, more and more of my free RAM disappears -- without showing up in the list of used apps: while smem --pie=name reports less than 20% used (and 80% being available), everything else says differently. free -m for example reports at about day 7:
                 total       used       free     shared    buffers     cached
    Mem:          7459       7013        446          0        178        997
    -/+ buffers/cache:       5836       1623
    Swap:         9536        296       9240
    (so you can see, it's not the buffers or the cache). Today this finally ended with the system crashing completely: the window manager being gone, apps "hanging in the air" (frameless) -- and a popup notifying me about "too many open files". Syslog reports:
    kernel: [856738.020829] VFS: file-max limit 752838 reached
    So I closed those applications I was able to close, and killed X using Ctrl-Alt-Backspace. X tried to come up again after that with failsafeX, but was unable to do so as it could no longer detect its configuration. So I switched to a console using Ctrl-Alt-F2, captured all information I could think of (vmstat, free, smem, /proc/meminfo, lsof, ps aux), and finally rebooted. X again came up with failsafeX; this time I told it to "recover from my backed-up configuration", then switched to a console and successfully used startx to bring up the graphical environment. I have no real clue as to what is causing this issue -- though it must have to do either with X itself, or with some user processes running on X -- as after killing X, free -m output looked like this:
                 total       used       free     shared    buffers     cached
    Mem:          7459       2677       4781          0         62        419
    -/+ buffers/cache:       2195       5263
    Swap:         9536         59       9477
    (~3.5GB being freed) -- to compare with the output after a fresh start:
                 total       used       free     shared    buffers     cached
    Mem:          7459       1483       5975          0         63        730
    -/+ buffers/cache:        689       6769
    Swap:         9536          0       9536
    Two more helpful outputs are provided by memstat -u. Shortly before the crash:
    User       Count    Swap      USS      PSS      RSS
    mail           1       0      200      207      616
    whoopsie       1     764      740      817     2300
    colord         1    3200      836      894     2156
    root          62   70404   352996   382260   569920
    izzy          80  177508  1465416  1519266  1851840
    After having X killed:
    User       Count    Swap      USS      PSS      RSS
    mail           1       0      184      188      356
    izzy           1    1400      708      739     1080
    whoopsie       1     848      668      826     1772
    colord         1    3204      804      888     1728
    root          62   54876   131708   149950   267860
    And after a restart, back in X:
    User       Count    Swap      USS      PSS      RSS
    mail           1       0      212      217      628
    whoopsie       1       0     1536     1880     5096
    colord         1       0     3740     4217     7936
    root          54       0   148668   180911   345132
    izzy          47       0   370928   437562   915056
    Edit: Just added two graphs from my monitoring system. Interesting to see: every time there's a "jump" in memory consumption, CPU peaks as well. I just found this right now -- and it reminds me of another indicator pointing to X itself: often when returning to my machine and unlocking the screen, I found something doing heavy work on my CPU. Checking with top, it always turned out to be /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch -background none. So after this long explanation, finally my questions: What could be the possible causes? How can I better identify the involved processes/applications? What steps could be taken to avoid this behaviour -- short of rebooting the machine every X days?
    I was running 8.04 (Hardy) for about 5 years on my old machine, never having experienced the like (always more than 100 days uptime before rebooting for e.g. kernel updates). This now is a completely new machine with a fresh install of 12.04. In case it matters, some specs: AMD A4-3400 APU with Radeon(tm) HD Graphics, using the open-source ati/radeon driver (so no fglrx installed), 8GB RAM, WDC WD1002FAEX-0 hdd (1TB), Asus F1A75-V Evo mainboard. Ubuntu 12.04 64-bit with KDE4/Plasma. Apps usually open more or less permanently include Evolution, Firefox, konsole (with Midnight Commander running inside, about 4 tabs), and LibreOffice -- plus occasionally Calibre, Gimp and Moneyplex (banking software I'm already using for almost 20 years now, in a version which did fine on Hardy).
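    Not part of the original question, but as a sketch of one way to narrow the hunt down: a small Python loop (the log directory name and hourly interval are my own assumptions) that records the same diagnostics the author captured by hand, so a later memory jump can be matched against the process list at that time.

        # A minimal snapshot logger - an illustration, not an answer from the thread.
        import subprocess, time, datetime, pathlib

        CMDS = ["free -m", "vmstat", "cat /proc/meminfo", "ps aux --sort=-rss"]
        LOGDIR = pathlib.Path.home() / "mem-snapshots"   # hypothetical location
        LOGDIR.mkdir(exist_ok=True)

        while True:
            stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M")
            with open(LOGDIR / f"{stamp}.log", "w") as fh:
                for cmd in CMDS:
                    fh.write(f"===== {cmd} =====\n")
                    fh.write(subprocess.run(cmd.split(), capture_output=True, text=True).stdout)
            time.sleep(3600)   # one snapshot per hour

    Comparing consecutive snapshots around a "jump" should show whether the growth sits in a user process (RSS climbing for one PID) or, as suspected here, in X itself.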

    Read the article
