Search Results

Search found 6068 results on 243 pages for 'goal tracking'.

  • Wordpress Multisite and Google Analytics in subfolders with mapped domains

    - by David
    I have a WordPress multisite using subfolders. The subfolders are mapped to domains, which are set as primary. I'm using the 'Google Analytics Multisite Async' code to track things. From what I can see it's tracking the sites fine (getting page hits for each site in Google Analytics), except for the original site in the multisite: its content overview lists the other domains and the traffic they're getting alongside the original domain's traffic. I don't want to track any traffic for my original site other than what actually goes to it, i.e. I don't want it tracking my other sites in the multisite. E.g. domain1.com is my original site and I have lots of other sites in the multisite, let's say domain2.com and domain3.com. In the content overview in Analytics it's listing, say, domain2.com as content. Can I tell it to filter these out somehow, either in Analytics or within WordPress? Hopefully I've explained that clearly!
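    One commonly suggested approach is to give each mapped domain its own Analytics web property, so traffic never mixes across sites in the first place. Below is a minimal sketch of the standard asynchronous ga.js snippet with a per-site account ID; the 'UA-XXXXXX-1' ID and the per-blog domain are placeholders here, not something taken from the plugin mentioned above.

    ```javascript
    // Sketch: per-site tracker so each mapped domain reports to its own property.
    // The property ID would come from per-blog settings; 'UA-XXXXXX-1' is a placeholder.
    var _gaq = _gaq || [];
    _gaq.push(['_setAccount', 'UA-XXXXXX-1']);      // unique property for this blog
    _gaq.push(['_setDomainName', 'domain1.com']);   // the mapped domain for this blog
    _gaq.push(['_trackPageview']);

    (function () {
      // Standard asynchronous loader for ga.js.
      var ga = document.createElement('script');
      ga.type = 'text/javascript';
      ga.async = true;
      ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
      var s = document.getElementsByTagName('script')[0];
      s.parentNode.insertBefore(ga, s);
    })();
    ```

    With a separate property per site, the original domain's reports only ever contain its own pages; the alternative is a view filter in the Analytics admin, which needs no code change.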

    Read the article

  • More than one way to skin an Audit

    - by BuckWoody
    I get asked quite a bit about auditing in SQL Server. By "audit", people mean everything from tracking logins to finding out exactly who ran a particular SELECT statement. In the really early versions of SQL Server, we didn't have a great story for very granular audits, so lots of workarounds were suggested. As time progressed, more and more audit capabilities were added to the product, and in typical database platform fashion, as we added a feature we didn't often take the others away. So now, instead of not having an option to audit actions by users, you might face the opposite problem - too many ways to audit! You can read more about the options you have for tracking users here: http://msdn.microsoft.com/en-us/library/cc280526(v=SQL.100).aspx  In SQL Server 2008, we introduced SQL Server Audit, which uses Extended Events to provide a simple way to implement high-level or granular auditing. You can read more about that here: http://msdn.microsoft.com/en-us/library/dd392015.aspx  As with any feature, you should understand what your needs are first. Auditing isn't "free" in the performance sense, so you need to make sure you're only auditing what you need to.

    Read the article

  • How can I discourage the use of Access?

    - by Greg Buehler
    Let's pretend that a very large company (revenue numbers with more than 8 figures) is looking to do a refresh on a software system, particularly the dashboard used by employees. This system was originally put together in the early 1990s to handle inventory tracking and storage across a variety of facilities (10+). Since this large company is now in the process of implementing some of these inventory processes with SAP, they are in need of a major refresh. The existing system:
    - A Microsoft Access project performs dashboard duties
    - Unique shipping/receiving configurations at different facilities require unique forms and queries within the Access project
    - Uses 3rd party libraries referenced by Access to directly interface with a control system (read: motors, conveyors, and counters)
    - Individual SQL Server 2000 instances (some traces of pre-update SQL Server 6.0 documents) at each facility
    The issue: this system started as a home-brewed inventory tracking scheme with a single internal sponsor who is still in charge of the technical direction. The original sponsor prescribed the desired deliverables that are being called for in the current RFP. The RFP describes a system based around a single Access project. Any suggestion that Access is ill-suited for a project of this scope is shot down under the reasoning that "it works for the scope now". Are there any case studies, notices, or statements that can be used to dissuade this potential customer from repeating their mistake? Does Microsoft make any statements directly about when it is highly recommended to ditch Access?

    Read the article

  • Prevent Click Fraud in Advertisement system with PHP and Javascript

    - by CodeDevelopr
    I would like to build an advertising project with PHP, MySQL, and JavaScript. I am talking about something like Google Adsense, BuySellAds.com, or any other advertising platform. My question is mainly: what do I need to look out for to prevent people cheating the system, and what other issues may I encounter? My design concept: an Advertisement is a record in the database; when a page is loaded, JavaScript calls my server, which in turn uses a PHP script to query the database and get a random Advertisement. (It may do more, like get an ad based on demographics or other criteria as well.) The PHP script then returns the Advertisement to the server/website that is calling it, and it is shown on the page as an image with a special tracking link. I will need to:
    - Count all impressions (when the Advertisement is shown on the page)
    - Count all clicks on the Advertisement link
    - Count all unique clicks on the Advertisement link
    My question is purely about querying and displaying the Advertisement, nothing to do with the administration side. If there is ever money involved in buying/selling ad space, then the stats need to be accurate so people can't easily cheat the system. Is tracking the IP address really the only way to try to prevent click fraud? I am hoping someone with experience can confirm I am on the right track, and give me any advice, tips, or anything else I should know about doing something like this.
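    As a rough illustration of the serving flow described above (not a fraud-proof design), here is a minimal client-side sketch: it asks the server for a random ad and renders the image wrapped in a click-tracking link, with the server assumed to record the impression when it hands out the ad. The serve.php and click.php endpoints, the JSON field names, and the 'ad-slot' element are all made-up names for the example.

    ```javascript
    // Sketch only: fetch a random ad and render it with a click-tracking link.
    // 'serve.php', the JSON fields, and the 'ad-slot' element are hypothetical names.
    function loadAd(zoneId) {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', '/serve.php?zone=' + encodeURIComponent(zoneId), true);
      xhr.onreadystatechange = function () {
        if (xhr.readyState !== 4 || xhr.status !== 200) return;
        var ad = JSON.parse(xhr.responseText);   // e.g. { id: 42, image: '...', click: '/click.php?ad=42' }

        var link = document.createElement('a');
        link.href = ad.click;                    // click.php would record the click, then redirect
        var img = document.createElement('img');
        img.src = ad.image;
        link.appendChild(img);
        document.getElementById('ad-slot').appendChild(link);
      };
      xhr.send();
    }

    loadAd('sidebar');
    ```

    Counting the impression server-side when serve.php picks the ad keeps the impression count tied to what was actually delivered; unique clicks would still need server-side de-duplication (IP, user agent, or a cookie), which is exactly where the click-fraud question above comes in.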

    Read the article

  • Install Base Transaction Error Troubleshooting

    - by LuciaC
    Oracle Installed Base is an item instance life cycle tracking application that facilitates enterprise-wide life cycle item management and tracking capability. In a typical process flow a sales order is created and shipped; this updates Inventory and creates a new item instance in Install Base (IB).  The Inventory update results in a record being placed in the SFM Event Queue.  If the record is successfully processed the IB tables are updated; if there is an error the record is placed in the csi_txn_errors table and the error needs to be resolved so that the IB instance can be created. It's extremely important to be proactive and monitor IB Transaction Errors regularly.  Errors cascade and can build up exponentially if not resolved. Due to this cascade effect, error records need to be considered as a whole and not individually; the root cause of any error needs to be resolved first, and this may result in the subsequent errors resolving themselves. Install Base Transaction Error Diagnostic Program: in the past the IBtxnerr.sql script was used to diagnose transaction errors; this has now been replaced by an enhanced concurrent program version of the script. See the following note for details of how to download, install and run the concurrent program as well as details of how to interpret the results: Doc ID 1501025.1 - Install Base Transaction Error Diagnostic Program.  The program provides comprehensive information about the errors found as well as links to known knowledge articles which can help to resolve the specific error. Troubleshooting: watch the replay of the 'EBS CRM: 11i and R12 Transaction Error Troubleshooting - an Overview' webcast or download the presentation PDF (go to Doc ID 1455786.1 and click on the 'Archived 2011' tab).  The webcast and PDF include more information, including SQL statements that you can use to identify errors and their sources as well as recommended setup and troubleshooting tips. Refer to these notes for comprehensive information: Doc ID 1275326.1: E-Business Oracle Install Base Product Information Center; Doc ID 1289858.1: Install Base Transaction Errors Master Repository; Doc ID 577978.1: Troubleshooting Install Base Errors in the Transaction Errors Processing Form.  Don't forget your Install Base Community where you can ask questions to help you resolve your IB transaction errors.

    Read the article

  • 724% Return on an SFA project with Oracle Sales Cloud and Marketing Cloud combined!

    - by Richard Lefebvre
    Oracle Sales Cloud and Marketing Cloud customer Apex IT gained just that: a 724% return on investment (ROI) when it implemented these Oracle Cloud solutions in its fast-moving, rapidly growing business. Apex IT was just announced as a winner of the Nucleus Research 11th annual Technology ROI Awards. The award, given by the analyst firm, highlights organizations that have successfully leveraged IT deployments to maximize value per dollar spent. Fast facts:
    - Return on Investment – 724%
    - Payback – 2 months
    - Average annual benefit – $91,534
    - Cost:Benefit Ratio – 1:48
    Business benefits: in addition to the ROI and cost metrics, the award calls out improvements in Apex IT's business operations across both Sales and Marketing teams:
    - Improved ability to identify new opportunities and focus sales resources on higher-probability deals
    - Reduced administration and manual lead tracking, resulting in more time selling and a net new client increase of 46%
    - Increased campaign productivity for both Marketing and Sales, including Oracle Marketing Cloud's automation of campaign tracking and nurture programs
    - Improved margins with more structured and disciplined sales processes, resulting in more effective deal negotiations
    Read the full Apex IT ROI Case Study. You can also learn more about Apex IT's business, including the company's work with Oracle Sales and Marketing Cloud on behalf of its clients. You can point your prospects and customers to the CX blog for a similar recap of the Apex IT award and a link to the Case Study.

    Read the article

  • PASS: Bylaw Changes

    - by Bill Graziano
    While you’re reading this, a post should be going up on the PASS blog on the plans to change our bylaws.  You should be able to find our old bylaws, our proposed bylaws and a red-lined version of the changes.  We plan to listen to feedback until March 31st.  At that point we’ll decide whether to vote on these changes or take other action. The executive summary is that we’re adding a restriction to prevent more than two people from the same company on the Board and eliminating the Board’s Officer Appointment Committee to have Officers directly elected by the Board.  This second change better matches how officer elections have been conducted in the past. The Gritty Details Our scope was to change bylaws to match how PASS actually works and tackle a limited set of issues.  Changing the bylaws is hard.  We’ve been working on these changes since the March board meeting last year.  At that meeting we met and talked through the issues we wanted to address.  In years past the Board has tried to come up with language and then we’ve discussed and negotiated to get to the result.  In March, we gave HQ guidance on what we wanted and asked them to come up with a starting point.  Hannes worked on building us an initial set of changes that we could work our way through.  Discussing changes like this over email is difficult wasn’t very productive.  We do a much better job on this at the in-person Board meetings.  Unfortunately there are only 2 or 3 of those a year. In August we met in Nashville and spent time discussing the changes.  That was also the day after we released the slate for the 2010 election. The discussion around that colored what we talked about in terms of these changes.  We talked very briefly at the Summit and again reviewed and revised the changes at the Board meeting in January.  This is the result of those changes and discussions. We made numerous small changes to clean up language and make wording more clear.  We also made two big changes. Director Employment Restrictions The first is that only two people from the same company can serve on the Board at the same time.  The actual language in section VI.3 reads: A maximum of two (2) Directors who are employed by, or who are joint owners or partners in, the same for-profit venture, company, organization, or other legal entity, may concurrently serve on the PASS Board of Directors at any time. The definition of “employed” is at the sole discretion of the Board. And what a mess this turns out to be in practice.  Our membership is a hodgepodge of interlocking relationships.  Let’s say three Board members get together and start a blog service for SQL Server bloggers.  It’s technically for-profit.  Let’s assume it makes $8 in the first year.  Does that trigger this clause?  (Technically yes.)  We had a horrible time trying to write language that covered everything.  All the sample bylaws that we found were just as vague as this. That led to the third clause in this section.  The first sentence reads: The Board of Directors reserves the right, strictly on a case-by-case basis, to overrule the requirements of Section VI.3 by majority decision for any single Director’s conflict of employment. We needed some way to handle the trivial issues and exercise some judgment.  It seems like a public vote is the best way.  This discloses the relationship and gets each Board member on record on the issue.   In practice I think this clause will rarely be used.  
I think this entire section will only be invoked for actual employment issues and not for small side projects.  In either case we have the mechanisms in place to handle it in a public, transparent way. That’s the first and third clauses.  The second clause says that if your situation changes and you fall afoul of this restriction you need to notify the Board.  The clause further states that if this new job means a Board members violates the “two-per-company” rule the Board may request their resignation.  The Board can also  allow the person to continue serving with a majority vote.  I think this will also take some judgment.  Consider a person switching jobs that leads to three people from the same company.  I’m very likely to ask for someone to resign if all three are two weeks into a two year term.  I’m unlikely to ask anyone to resign if one is two weeks away from ending their term.  In either case, the decision will be a public vote that we can be held accountable for. One concern that was raised was whether this would affect someone choosing to accept a job.  I think that’s a choice for them to make.  PASS is clearly stating its intent that only two directors from any one organization should serve at any time.  Once these bylaws are approved, this policy should not come as a surprise to any potential or current Board members considering a job change.  This clause isn’t perfect.  The biggest hole is business relationships that aren’t defined above.  Let’s say that two employees from company “X” serve on the Board.  What happens if I accept a full-time consulting contract with that company?  Let’s assume I’m working directly for one of the two existing Board members.  That doesn’t violate section VI.3.  But I think it’s clearly the kind of relationship we’d like to prevent.  Unfortunately that was even harder to write than what we have now.  I fully expect that in the next revision of the bylaws we’ll address this.  It just didn’t make it into this one. Officer Elections The officer election process received a slightly different rewrite.  Our goal was to codify in the bylaws the actual process we used to elect the officers.  The officers are the President, Executive Vice-President (EVP) and Vice-President of Marketing.  The Immediate Past President (IPP) is also an officer but isn’t elected.  The IPP serves in that role for two years after completing their term as President.  We do that for continuity’s sake.  Some organizations have a President-elect that serves for one or two years.  The group that founded PASS chose to have an IPP. When I started on the Board, the Nominating Committee (NomCom) selected the slate for the at-large directors and the slate for the officers.  There was always one candidate for each officer position.  It wasn’t really an election so much as the NomCom decided who the next person would be for each officer position.  Behind the scenes the Board worked to select the best people for the role. In June 2009 that process was changed to bring it line with what actually happens.  An Officer Appointment Committee was created that was a subset of the Board.  That committee would take time to interview the candidates and present a slate to the Board for approval.  The majority vote of the Board would determine the officers for the next two years.  In practice the Board itself interviewed the candidates and conducted the elections.  That means it was time to change the bylaws again. Section VII.2 and VII.3 spell out the process used to select the officers.  
We use the phrase “Officer Appointment” to separate it from the Director election but the end result is that the Board elects the officers.  Section VII.3 starts: Officers shall be appointed bi-annually by a majority of all the voting members of the Board of Directors. Everything else revolves around that sentence.  We use the word appoint but they truly are elected.  There are details in the bylaws for term limits, minimum requirements for President (1 prior term as an officer), tie breakers and filling vacancies. In practice we will have an election for President, then an election for EVP and then an election for VP Marketing.  That means that losing candidates will be able to fall down the ladder and run for the next open position.  Another point to note is that officers aren’t at-large directors.  That means if a current sitting officer loses all three elections they are off the Board.  Having Board member votes public will help with the transparency of this approach. This process has a number of positive and negatives.  The biggest concern I expect to hear is that our members don’t directly choose the officers.  I’m going to try and list all the positives and negatives of this approach. Many non-profits value continuity and are slower to change than a business.  On the plus side this promotes that.  On the negative side this promotes that.  If we change too slowly the members complain that we aren’t responsive.  If we change too quickly we make mistakes and fail at various things.  We’ve been criticized for both of those lately so I’m not entirely sure where to draw the line.  My rough assumption to this point is that we’re going too slow on governance and too quickly on becoming “more than a Summit.”  This approach creates competition in the officer elections.  If you are an at-large director there is no consequence to losing an election.  If you are an officer the only way to stay on the Board is to win an officer election or an at-large election.  If you are an officer and lose an election you can always run for the next office down.  This makes it very easy for multiple people to contest an election. There is value in a person moving through the officer positions up to the Presidency.  Having the Board select the officers promotes this.  The down side is that it takes a LOT of time to get to the Presidency.  We’ve had good people struggle with burnout.  We’ve had lots of discussion around this.  The process as we’ve described it here makes it possible for someone to move quickly through the ranks but doesn’t prevent people from working their way up through each role. We talked long and hard about having the officers elected by the members.  We had a self-imposed deadline to complete these changes prior to elections this summer. The other challenge was that our original goal was to make the bylaws reflect our actual process rather than create a new one.  I believe we accomplished this goal. We ran out of time to consider this option in the detail it needs.  Having member elections for officers needs a number of problems solved.  We would need a way for candidates to fall through the election.  This is what promotes competition.  Without this few people would risk an election and we’ll be back to one candidate per slot.  We need to do this without having multiple elections.  We may be able to copy what other organizations are doing but I was surprised at how little I could find on other organizations.  
We also need a way for people that lose an officer election to win an at-large election.  Otherwise we’ll have very little competition for officers. This brings me to an area that I think we as a Board haven’t done a good job.  We haven’t built a strong process to tell you who is doing a good job and who isn’t.  This is a double-edged sword.  I don’t want to highlight Board members that are failing.  That’s not a good way to get people to volunteer and run for the Board.  But I also need a way let the members make an informed choice about who is doing a good job and would make a good officer.  Encouraging Board members to blog, publishing minutes and making votes public helps in that regard but isn’t the final answer.  I don’t know what the final answer is yet.  I do know that the Board members themselves are uniquely positioned to know which other Board members are doing good work.  They know who speaks up in meetings, who works to build consensus, who has good ideas and who works with the members.  What I Could Do Better I’ve learned a lot writing this about how we communicated with our members.  The next time we revise the bylaws I’d do a few things differently.  The biggest change would be to provide better documentation.  The March 2009 minutes provide a very detailed look into what changes we wanted to make to the bylaws.  Looking back, I’m a little surprised at how closely they matched our final changes and covered the various arguments.  If you just read those you’d get 90% of what we eventually changed.  Nearly everything else was just details around implementation.  I’d also consider publishing a scope document defining exactly what we were doing any why.  I think it really helped that we had a limited, defined goal in mind.  I don’t think we did a good job communicating that goal outside the meeting minutes though. That said, I wish I’d blogged more after the August and January meeting.  I think it would have helped more people to know that this change was coming and to be ready for it. Conclusion These changes address two big concerns that the Board had.  First, it prevents a single organization from dominating the Board.  Second, it codifies and clearly spells out how officers are elected.  This is the process that was previously followed but it was somewhat murky.  These changes bring clarity to this and clearly explain the process the Board will follow. We’re going to listen to feedback until March 31st.  At that time we’ll decide whether to approve these changes.  I’m also assuming that we’ll start another round of changes in the next year or two.  Are there other issues in the bylaws that we should tackle in the future?

    Read the article

  • Forensic Analysis of the OOM-Killer

    - by Oddthinking
    Ubuntu's Out-Of-Memory Killer wreaked havoc on my server, quietly assassinating my applications, sendmail, apache and others. I've managed to learn what the OOM Killer is, and about its "badness" rules. While my machine is small, my applications are even smaller, and typically only half of my physical memory is in use, let alone swap-space, so I was surprised. I am trying to work out the culprit, but I don't know how to read the OOM-Killer logs. Can anyone please point me to a tutorial on how to read the data in the logs (what are ve, free and gen?), or help me parse these logs? Apr 20 20:03:27 EL135 kernel: kill_signal(13516.0): selecting to kill, queued 0, seq 1, exc 2326 0 goal 2326 0... Apr 20 20:03:27 EL135 kernel: kill_signal(13516.0): task ebb0c6f0, thg d33a1b00, sig 1 Apr 20 20:03:27 EL135 kernel: kill_signal(13516.0): selected 1, signalled 1, queued 1, seq 1, exc 2326 0 red 61795 745 Apr 20 20:03:27 EL135 kernel: kill_signal(13516.0): selecting to kill, queued 0, seq 2, exc 122 0 goal 383 0... Apr 20 20:03:27 EL135 kernel: kill_signal(13516.0): task ebb0c6f0, thg d33a1b00, sig 1 Apr 20 20:03:27 EL135 kernel: kill_signal(13516.0): selected 1, signalled 1, queued 1, seq 2, exc 383 0 red 61795 745 Apr 20 20:03:27 EL135 kernel: kill_signal(13516.0): task ebb0c6f0, thg d33a1b00, sig 2 Apr 20 20:03:27 EL135 kernel: OOM killed process watchdog (pid=14490, ve=13516) exited, free=43104 gen=24501. Apr 20 20:03:27 EL135 kernel: OOM killed process tail (pid=4457, ve=13516) exited, free=43104 gen=24502. Apr 20 20:03:27 EL135 kernel: OOM killed process ntpd (pid=10816, ve=13516) exited, free=43104 gen=24503. Apr 20 20:03:27 EL135 kernel: OOM killed process tail (pid=27401, ve=13516) exited, free=43104 gen=24504. Apr 20 20:03:27 EL135 kernel: OOM killed process tail (pid=29009, ve=13516) exited, free=43104 gen=24505. Apr 20 20:03:27 EL135 kernel: OOM killed process apache2 (pid=10557, ve=13516) exited, free=49552 gen=24506. Apr 20 20:03:27 EL135 kernel: OOM killed process apache2 (pid=24983, ve=13516) exited, free=53117 gen=24507. Apr 20 20:03:27 EL135 kernel: OOM killed process apache2 (pid=29129, ve=13516) exited, free=68493 gen=24508. Apr 20 20:03:27 EL135 kernel: OOM killed process sendmail-mta (pid=941, ve=13516) exited, free=68803 gen=24509. Apr 20 20:03:27 EL135 kernel: OOM killed process tail (pid=12418, ve=13516) exited, free=69330 gen=24510. Apr 20 20:03:27 EL135 kernel: OOM killed process python (pid=22953, ve=13516) exited, free=72275 gen=24511. Apr 20 20:03:27 EL135 kernel: OOM killed process apache2 (pid=6624, ve=13516) exited, free=76398 gen=24512. Apr 20 20:03:27 EL135 kernel: OOM killed process python (pid=23317, ve=13516) exited, free=94285 gen=24513. Apr 20 20:03:27 EL135 kernel: OOM killed process tail (pid=29030, ve=13516) exited, free=95339 gen=24514. Apr 20 20:03:28 EL135 kernel: OOM killed process apache2 (pid=20583, ve=13516) exited, free=101663 gen=24515. Apr 20 20:03:28 EL135 kernel: OOM killed process logger (pid=12894, ve=13516) exited, free=101694 gen=24516. Apr 20 20:03:28 EL135 kernel: OOM killed process bash (pid=21119, ve=13516) exited, free=101849 gen=24517. Apr 20 20:03:28 EL135 kernel: OOM killed process atd (pid=991, ve=13516) exited, free=101880 gen=24518. Apr 20 20:03:28 EL135 kernel: OOM killed process apache2 (pid=14649, ve=13516) exited, free=102748 gen=24519. Apr 20 20:03:28 EL135 kernel: OOM killed process grep (pid=21375, ve=13516) exited, free=132167 gen=24520. 
Apr 20 20:03:57 EL135 kernel: kill_signal(13516.0): selecting to kill, queued 0, seq 4, exc 4215 0 goal 4826 0... Apr 20 20:03:57 EL135 kernel: kill_signal(13516.0): task ede29370, thg df98b880, sig 1 Apr 20 20:03:57 EL135 kernel: kill_signal(13516.0): selected 1, signalled 1, queued 1, seq 4, exc 4826 0 red 189481 331 Apr 20 20:03:57 EL135 kernel: kill_signal(13516.0): task ede29370, thg df98b880, sig 2 Apr 20 20:04:53 EL135 kernel: kill_signal(13516.0): selecting to kill, queued 0, seq 5, exc 3564 0 goal 3564 0... Apr 20 20:04:53 EL135 kernel: kill_signal(13516.0): task c6c90110, thg cdb1a100, sig 1 Apr 20 20:04:53 EL135 kernel: kill_signal(13516.0): selected 1, signalled 1, queued 1, seq 5, exc 3564 0 red 189481 331 Apr 20 20:04:53 EL135 kernel: kill_signal(13516.0): task c6c90110, thg cdb1a100, sig 2 Apr 20 20:07:14 EL135 kernel: kill_signal(13516.0): selecting to kill, queued 0, seq 6, exc 8071 0 goal 8071 0... Apr 20 20:07:14 EL135 kernel: kill_signal(13516.0): task d7294050, thg c03f42c0, sig 1 Apr 20 20:07:14 EL135 kernel: kill_signal(13516.0): selected 1, signalled 1, queued 1, seq 6, exc 8071 0 red 189481 331 Apr 20 20:07:14 EL135 kernel: kill_signal(13516.0): task d7294050, thg c03f42c0, sig 2 Watchdog is a watchdog task, that was idle; nothing in the logs to suggest it had done anything for days. Its job is to restart one of the applications if it dies, so a bit ironic that it is the first to get killed. Tail was monitoring a few logs files. Unlikely to be consuming memory madly. The apache web-server only serves pages to a little old lady who only uses it to get to church on Sundays a couple of developers who were in bed asleep, and hadn't visited a page on the site for a few weeks. The only traffic it might have had is from the port-scanners; all the content is password-protected and not linked from anywhere, so no spiders are interested. Python is running two separate custom applications. Nothing in the logs to suggest they weren't humming along as normal. One of them was a relatively recent implementation, which makes suspect #1. It doesn't have any data-structures of any significance, and normally uses only about 8% of the total physical RAW. It hasn't misbehaved since. The grep is suspect #2, and the one I want to be guilty, because it was a once-off command. The command (which piped the output of a grep -r to another grep) had been started at least 30 minutes earlier, and the fact it was still running is suspicious. However, I wouldn't have thought grep would ever use a significant amount of memory. It took a while for the OOM killer to get to it, which suggests it wasn't going mad, but the OOM killer stopped once it was killed, suggesting it may have been a memory-hog that finally satisfied the OOM killer's blood-lust.
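    If it helps to get a quick summary out of logs like these, here is a small Node.js sketch that tallies which processes were OOM-killed and echoes the free value reported at each kill. The regex is based only on the "OOM killed process ..." lines quoted above, and my reading of the fields (free climbing as memory is reclaimed) is an assumption, not documented fact.

    ```javascript
    // Rough sketch: summarise "OOM killed process ..." lines from a log file.
    // Usage: node oom-summary.js /var/log/kern.log   (the path is just an example)
    var fs = require('fs');

    var file = process.argv[2];
    var lines = fs.readFileSync(file, 'utf8').split('\n');

    // Matches e.g. "OOM killed process apache2 (pid=10557, ve=13516) exited, free=49552 gen=24506."
    var killRe = /OOM killed process (\S+) \(pid=(\d+), ve=(\d+)\) exited, free=(\d+) gen=(\d+)/;

    var countsByName = {};
    lines.forEach(function (line) {
      var m = killRe.exec(line);
      if (!m) return;
      var name = m[1];
      countsByName[name] = (countsByName[name] || 0) + 1;
      // Print free per kill to see whether memory is actually being reclaimed over time.
      console.log(name + ' (pid ' + m[2] + ') killed, free=' + m[4]);
    });

    console.log('Kill counts per process:', countsByName);
    ```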

    Read the article

  • SQL SERVER – Video – Beginning Performance Tuning with SQL Server Execution Plan

    - by pinaldave
    Traveling can be the most interesting or the most exhausting experience. However, traveling is always the most enlightening experience one can have. When going on a long journey one has to prepare a lot of things: pack the necessary travel gear, clothes and medicines. However, the most essential part of travel is the journey to the destination. There are many variations one may prefer, but the ultimate goal is to have a delightful experience during the journey. Here is a video which explains how to get started with SQL Server execution plans.
    Performance Tuning is a Journey. Performance tuning is just like a long journey. The goal of performance tuning is efficient, least-resource-consuming query execution with accurate results. Just as maps are essential to a journey, execution plans are essentially maps for SQL Server to reach the result set. The goal of the execution plan is to find the most efficient path, which translates to the least usage of resources (CPU, memory, IO etc).
    Execution Plans are like Maps. When online maps were first introduced (e.g. Bing, Google, MapQuest etc) it was not possible to customize them; they gave a single route to reach the destination. As time evolved it became possible to give various hints to the maps, for example 'via public transport', 'walking', 'fastest route', 'shortest route', 'avoid highway'. There are places where we manually drag the route and make it fit our needs. The same situation exists with SQL Server execution plans: if we want to tune queries, we need to understand execution plans and execution plan internals. We need to understand the smallest details of the execution plan when our destination is optimal queries.
    Understanding Execution Plans. The biggest challenge with maps is figuring out the optimal path. In the same way, the most common challenge with execution plans is where to start and which precise route to take. Here is a quick list of frequently asked questions related to execution plans:
    - Should I read execution plans bottom up or top down?
    - Are execution plans read left to right or right to left?
    - What is the relationship between the actual execution plan and the estimated execution plan?
    - When I mouse over an operator I see CPU and IO but not memory. Why?
    - Sometimes I run the query multiple times and get a different execution plan. Why?
    - How do I cache the query execution plan and data?
    - I created an optimal index but the query is not using it. What should I change: the query, the index, or provide hints?
    - What tools are available to help quickly debug performance problems?
    - Etc.
    Honestly the list is quite big, and it is humanly impossible to cover everything in words.
    SQL Server Performance: Introduction to Query Tuning. My friend Vinod Kumar and I have created a video learning course on beginning performance tuning. We cover a plethora of subjects in the course. Here is a quick list:
    - Execution Plan Basics
    - Essential Indexing Techniques
    - Query Design for Performance
    - Performance Tuning Tools
    - Tips and Tricks
    - Checklist: Performance Tuning
    We believe we have covered a lot in this four hour course and we encourage you to go over the video course if you are interested in beginning SQL Server performance tuning and query tuning.
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video Tagged: Execution Plan

    Read the article

  • The Winds of Change are a Blowin'

    - by Ajarn Mark Caldwell
    For six years I have been an avid and outspoken fan and paying customer of SourceGear products…from Vault to Dragnet to Fortress and on to Vault Professional, but that is all changing now.  Not the fan part, but the paying customer part.  I’m still a huge fan.  I think that SourceGear does a great job with their product and support has been fantastic when needed (which is not very often).  I think that Eric Sink has done a fine job building a quality company and products, and I appreciate his contributions to the tech community through this blogging and books.  I still think their products are high quality and do a fantastic job of what they do.  But there’s the rub…what they do is no longer enough for me. As I have rebuilt our development team over the last couple of years, and we have begun to investigate Scrum and Kanban, I realize that I need more visibility into the progress of the team.  I need better project management tools, and this is where Vault Professional lags behind several other tools.  Granted, in the latest release (Vault 6.0) they added a nice time tracking feature, but I want more.  (Note, I did contact SourceGear about my quest for more, but apparently, the rest of their customer base has not been clamoring for this and so they have not built it.  Granted, I wasn’t clamoring for it either until just recently, but unfortunately for SourceGear, I want it now and don’t want to wait for them to build it into their system.) Ironically, it was SourceGear themselves who started to turn me on to the possibilities of other tools.  They built a limited integration with Axosoft OnTime which I read about several times on their support site (I used to regularly read and occasionally comment on their Support Forum).  I decided to check out OnTime and was very impressed with the tool for work item tracking and project management (not to mention their great Scrum Master in 10 Minutes video).  I fell in love with the capabilities of OnTime.  Unfortunately, the integration with Vault for source control management was, as I mentioned, limited.  I could have forfeited the integration between work items and source code, but there is too much benefit to linking check-ins to work items for me to give that up.  So then I did what was previously unthinkable for me, I considered switching not just the work tracking tool, but also the source code management tool.  This was really stepping outside my comfort zone because source code is Gold, and not to be trifled with.  When you find a good weapon to protect your gold, stick with it. I looked at Git and Tortoise SVN, but the integration methods for those was pretty rough compared to what I was used to.  The recommended tool from Axosoft’s point of view appeared to be RocketSVN, but I really wasn’t sure I wanted to go the “flavor of Subversion” route.  Then I started thinking about that other tool I liked back when I first chose to go with Vault, but couldn’t afford:  Team Foundation Server.  And what do you know…Microsoft has not only radically improved it over that version from back in 2006, but they also came to their senses about how it should be licensed, and it is much more affordable now.  So I started looking into the latest capabilities in the 2012 version, and I fell in love all over again. I really went deep on checking out the tools.  I watched numerous webcasts from Microsoft partners, went to a beta preview on Microsoft’s campus, and watched a lot of Channel 9 videos on the new ALM features (oooh…shiny).  
Frankly, I was very impressed with the capabilities of the newest version, and figured this was probably our direction.  As an interesting twist of fate, one of my employees crossed paths with an ALM Consultant from Northwest Cadence, a local Microsoft Partner, and one of the companies that produced several of the webcasts that I had been watching.  So I gave Bryon a call and started grilling him to see if he really knew anything or was just another guy who couldn’t find a job so he called himself a consultant.  It turns out Bryon actually knows a lot, especially in an area that was becoming a frustration point for us: Branching strategies and automated builds (that’s probably a whole separate blog entry).  As we talked, Bryon suggested we look into doing a DTDPS (Developer Tools Deployment Planning Services) session with his company.  This is a service that can be paid for by Microsoft Enterprise Agreement planning services credits or SA training benefits, and, again, coincidentally, we had several that were just about to expire, so I put them to good use. The DTDPS sessions were great; and Bryon, Rick, and the rest of the folks at Northwest Cadence have been a pleasure to work with.  We have just purchased a new server for our TFS rollout and are planning the steps and options right now.  This is still a big project ahead of us to not only install and configure TFS, but also to load all of our source code (many different systems, not just one program) and transition to the new way of life with TFS, but I am convinced that it is the right move for my team at this point in time.  We need the new capabilities that are in alignment with Scrum and Kanban methodologies in order to more efficiently manage all the different projects that we have going on at one time. I would still wholeheartedly endorse SourceGear’s products and Axosoft’s OnTime for those whose needs are met by those tools, but for me and my team, I think that TFS is the right fit, and I am looking forward to the change.

    Read the article

  • Adventures in Lab Management Configuration: Part 2 of 3

    - by Enrique Lima
    The first post was the high level overview. Now it is time for the details on what was done to the existing CMMI project based on CMMI v4.2. The first step was to go into Visual Studio, then to the Team Project Collection Settings and then to the Process Template Manager.  Once there, it was a matter of selecting the appropriate template (MSF for CMMI Process Improvement v5.0) and downloading it to a location I could reference later (for example C:\Templates). Then on to using the steps from the guidance post. Since I was using an x64 deployment, I will make reference to the path as <toolpath>; however, the actual path to reference in a 64-bit environment is "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE". As I mentioned in the previous post, make sure to first perform a backup of the Configuration, Collection and Warehouse DBs.  If you did not apply any changes to the names and such, then you will find those as tfs_Configuration, tfs_DefaultCollection and tfs_Warehouse. Now, on to the work needed with the witadmin tool: that includes uploading the structures that differ from v4.2 to v5.0. There is likely going to be an issue with the naming of some fields. For example, TFS 2010 likes something along the lines of "Area ID", whereas TFS 2008 would have had it as "AreaID".  So, this will need to be corrected.  Some posts will have you go through this after the errors pop up.  I would recommend doing this process prior to executing the importwitd process.
    witadmin listfields /collection:<path to collection> > c:\ListFields.txt
    Review the following fields: for AreaID, review the Name property and validate whether it states "AreaID"; if so, you will need to rename the Name field to reflect "Area ID". ExternalLinkCount, RelatedLinkCount, HyperLinkCount, AttachedFileCount and IterationID would be the other fields to check. To correct the issue, execute the following:
    witadmin changefield /collection:<path to collection> /n:"System.ExternalLinkCount" /name:"External Link Count"
    Repeat for Area ID, Related Link Count, Hyperlink Count, Attached File Count and Iteration ID.  Once this is done, proceed with the commands below.
    witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\TypeDefinitions\TestCase.xml"
    witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\TypeDefinitions\SharedStep.xml"
    witadmin importcategories /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\categories.xml"
    Modifications to the Bug definition: the first step is to export the existing definition.
    witadmin exportwitd /collection:<path to collection> /p:<project> /n:bug /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyBug.xml"
    Make modifications to the recently exported MyBug.xml file.
    Details for the modification are here: http://msdn.microsoft.com/en-us/library/ff452591.aspx#ModifyTask  Once the changes are done, proceed with the import command:
    witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyBug.xml"
    Repeat the process for the Scenario or Requirement type definition:
    witadmin exportwitd /collection:<path to collection> /p:<project> /n:requirement /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyRequirement.xml"
    Make modifications to the recently exported MyRequirement.xml file.  Details for the modification are here: http://msdn.microsoft.com/en-us/library/ff452591.aspx#ModifyTask  Once the changes are done, proceed with the import command:
    witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyRequirement.xml"
    Provide the Bug Field Mapping definition, after creating the file as specified here: http://msdn.microsoft.com/en-us/library/ff452591.aspx#TCMBugFieldMapping
    tcm bugfieldmapping /import /mappingfile:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\bugfieldmappings.xml" /collection:<path to collection> /teamproject:<project name>

    Read the article

  • The Power of Goals

    - by BuckWoody
    Every year we read blogs, articles, magazines, hear news stories and blurbs on making New Year’s Resolutions. Well, I for one don’t do that. I do something else. Each year, on January 1, my wife, daughter and I get up early - like before 6:00 A.M. - and find a breakfast place that’s open. When I used to live in Safety Harbor, Florida, that was the “Paradise Café”, which has some of the best waffles around…but I digress. We find that restaurant and have a great breakfast while everyone else is recuperating from the night before. And we bring along a worn leather book that we’ve been writing in since my daughter wasn’t even old enough to read. It’s our book of Goals. A resolution, as it is purely defined, is a decision to change, stop or start an action. It has a sense of continuance, and that’s the issue. Some people decide things like “I’m going to lose weight” or “I’m going to spend more time with my family or hobby”. But a goal is different. A goal tends to have a defined start and end point. It’s something that can be measured. So each year on January 1 we sit down with the little leather book and we make a few - and only a few - individual and family goals. Sometimes it’s to exercise three times a week at the gym, sometimes it’s to save a certain percentage of income, and sometimes it’s to give away some of our possessions or to help someone we know in a specific way. Each person is responsible for their own goals - coming up with them, and coming up with a plan to meet them. Then we write it down in the little leather book. But it doesn’t end there. Each month, we grab the little leather book and read out the goals from that year to each person with a question or two: How are you doing on your goal? And what are you doing about reaching it? Can I help? Am I helping? At the end of the year, we put a checkmark by the goals we reached, and an X by the ones we didn’t. There’s no judgment, there’s no statements, each person is just expected to handle the success or failure in their own way. We also have family goals, and those we work on together. This might seem a little “corny” to some people. “I don’t need to write goals down” they say, “I keep track in my head of the things I do all the time. That’s silly.” But let me give you a little challenge: find a book, get with your family, and write down the things you want to do by the next January 1. Each month, look at the book. You can make goals for your career, your education, your spiritual side, your family, whatever. But if you make your goals realistic, think them through, and think about how you will achieve them, you will be surprised by the power of written goals.

    Read the article

  • jqGrid - dynamically load different drop down values for different rows depending on another column value

    - by Renso
    Goal: As we all know, the jqGrid examples in the demo and the Wiki always refer to static values for drop-down boxes. This is of course a personal preference, but in a dynamic design these values should be populated from the database/XML file, etc., ideally JSON formatted. Can you do this in jqGrid? Yes, but with some custom coding, which we will briefly show below (refer to some of my other blog entries for a more detailed discussion on this topic). What you CANNOT do in jqGrid (referring here to versions up to and including 3.8.x) is load different drop-down values for different rows in the grid. Well, not without some trickery, which is what this discussion is about. Issue: Of course the issue is that jqGrid has been designed for high performance, and thus I have no issue with it loading a reference to a single drop-down values list for every column. This way, whether you have 500 rows or one, each row only refers to a single list for that particular column. Nice! So how easy would it be to simply traverse the grid once loaded, on gridComplete or loadComplete, and load the select tag's options from scratch via ajax, from a memory variable, hard-coded, etc.? Impossible! Since there is no embedded SELECT tag within each cell containing the drop-down values (remember, it only has a reference to that list in memory), all you will see when you inspect the cell prior to clicking on it, or even before and on beforeEditCell, is an empty <TD></TD>. Trying to load that list via a click event on that cell will temporarily load the list, but jqGrid's last internal callback event will remove it and replace it with the old one, and you are back to square one. Solution: Yes, after spending a few hours on this I found a solution to the problem that does not require any updates to jqGrid source code, thank GOD! Before we get into the coding details: the solution here can of course be customized to suit your specific needs; this one loads the entire drop-down list that would be needed across all rows once into a global variable. I then parse this object, which contains all the properties I need, to filter the rows depending on which ones I want the user to see based on another cell value in that row. This only happens when clicking the cell, so there is no performance penalty. You may of course load it via ajax when the user clicks the cell, but I found it more efficient to load the entire list as part of jqGrid's normal editoptions: { multiple: false, value: listingStatus } colModel options, which again keeps only a reference to the single list, no duplication. Let's get into the meat and potatoes of it.
var acctId = $('#Id').val();         var data = $.ajax({ url: $('#ajaxGetAllMaterialsTrackingLookupDataUrl').val(), data: { accountId: acctId }, dataType: 'json', async: false, success: function(data, result) { if (!result) alert('Failure to retrieve the Alert related lookup data.'); } }).responseText;         var lookupData = eval('(' + data + ')');         var listingCategory = lookupData.ListingCategory;         var listingStatus = lookupData.ListingStatus;         var catList = '{';         $(lookupData.ListingCategory).each(function() {             catList += this.Id + ':"' + this.Name + '",';         });         catList += '}';         var lastsel;         var ignoreAlert = true;         $(item)         .jqGrid({             url: listURL,             postData: '',             datatype: "local",             colNames: ['Id', 'Name', 'Commission<br />Rep', 'Business<br />Group', 'Order<br />Date', 'Edit', 'TBD', 'Month', 'Year', 'Week', 'Product', 'Product<br />Type', 'Online/<br />Magazine', 'Materials', 'Special<br />Placement', 'Logo', 'Image', 'Text', 'Contact<br />Info', 'Everthing<br />In', 'Category', 'Status'],             colModel: [                 { name: 'Id', index: 'Id', hidden: true, hidedlg: true },                 { name: 'AccountName', index: 'AccountName', align: "left", resizable: true, search: true, width: 100 },                 { name: 'OnlineName', index: 'OnlineName', align: 'left', sortable: false, width: 80 },                 { name: 'ListingCategoryName', index: 'ListingCategoryName', width: 85, editable: true, hidden: false, edittype: "select", editoptions: { multiple: false, value: eval('(' + catList + ')') }, editrules: { required: false }, formatoptions: { disabled: false} }             ],             jsonReader: {                 root: "List",                 page: "CurrentPage",                 total: "TotalPages",                 records: "TotalRecords",                 userdata: "Errors",                 repeatitems: false,                 id: "0"             },             rowNum: $rows,             rowList: [10, 20, 50, 200, 500, 1000, 2000],             imgpath: jQueryImageRoot,             pager: $(item + 'Pager'),             shrinkToFit: true,             width: 1455,             recordtext: 'Traffic lines',             sortname: 'OrderDate',             viewrecords: true,             sortorder: "asc",             altRows: true,             cellEdit: true,             cellsubmit: "remote",             cellurl: editURL + '?rows=' + $rows + '&page=1',             loadComplete: function() {               },             gridComplete: function() {             },             loadError: function(xhr, st, err) {             },             afterEditCell: function(rowid, cellname, value, iRow, iCol) {                 var select = $(item).find('td.edit-cell select');                 $(item).find('td.edit-cell select option').each(function() {                     var option = $(this);                     var optionId = $(this).val();                     $(lookupData.ListingCategory).each(function() {                         if (this.Id == optionId) {                                                       if (this.OnlineName != $(item).getCell(rowid, 'OnlineName')) {                                 option.remove();                                 return false;                             }                         }                     });                 });             },             search: true,             searchdata: {},             caption: "List of all 
Traffic lines",             editurl: editURL + '?rows=' + $rows + '&page=1',             hiddengrid: hideGrid   Here is the JSON data returned via the ajax call during the jqGrid function call above (NOTE it must be { async: false}: {"ListingCategory":[{"Id":29,"Name":"Document Imaging & Management","OnlineName":"RF Globalnet"} ,{"Id":1,"Name":"Ancillary Department Hardware","OnlineName":"Healthcare Technology Online"} ,{"Id":2,"Name":"Asset Tracking","OnlineName":"Healthcare Technology Online"} ,{"Id":3,"Name":"Asset Tracking","OnlineName":"Healthcare Technology Online"} ,{"Id":4,"Name":"Asset Tracking","OnlineName":"Healthcare Technology Online"} ,{"Id":5,"Name":"Document Imaging & Management","OnlineName":"Healthcare Technology Online"} ,{"Id":6,"Name":"Document Imaging & Management","OnlineName":"Healthcare Technology Online"} ,{"Id":7,"Name":"EMR/EHR Software","OnlineName":"Healthcare Technology Online"}]} I only need the Id and Name for the drop down list, but the third column in the JSON object is important, it is the only that I match up with the OnlineName in the jqGrid column, and then in the loop during afterEditCell simply remove the ones I don't want the user to see. That's it!

    Read the article

  • What are good design practices when working with Entity Framework

    - by AD
    This will apply mostly for an asp.net application where the data is not accessed via soa. Meaning that you get access to the objects loaded from the framework, not Transfer Objects, although some recommendation still apply. This is a community post, so please add to it as you see fit. Applies to: Entity Framework 1.0 shipped with Visual Studio 2008 sp1. Why pick EF in the first place? Considering it is a young technology with plenty of problems (see below), it may be a hard sell to get on the EF bandwagon for your project. However, it is the technology Microsoft is pushing (at the expense of Linq2Sql, which is a subset of EF). In addition, you may not be satisfied with NHibernate or other solutions out there. Whatever the reasons, there are people out there (including me) working with EF and life is not bad.make you think. EF and inheritance The first big subject is inheritance. EF does support mapping for inherited classes that are persisted in 2 ways: table per class and table the hierarchy. The modeling is easy and there are no programming issues with that part. (The following applies to table per class model as I don't have experience with table per hierarchy, which is, anyway, limited.) The real problem comes when you are trying to run queries that include one or many objects that are part of an inheritance tree: the generated sql is incredibly awful, takes a long time to get parsed by the EF and takes a long time to execute as well. This is a real show stopper. Enough that EF should probably not be used with inheritance or as little as possible. Here is an example of how bad it was. My EF model had ~30 classes, ~10 of which were part of an inheritance tree. On running a query to get one item from the Base class, something as simple as Base.Get(id), the generated SQL was over 50,000 characters. Then when you are trying to return some Associations, it degenerates even more, going as far as throwing SQL exceptions about not being able to query more than 256 tables at once. Ok, this is bad, EF concept is to allow you to create your object structure without (or with as little as possible) consideration on the actual database implementation of your table. It completely fails at this. So, recommendations? Avoid inheritance if you can, the performance will be so much better. Use it sparingly where you have to. In my opinion, this makes EF a glorified sql-generation tool for querying, but there are still advantages to using it. And ways to implement mechanism that are similar to inheritance. Bypassing inheritance with Interfaces First thing to know with trying to get some kind of inheritance going with EF is that you cannot assign a non-EF-modeled class a base class. Don't even try it, it will get overwritten by the modeler. So what to do? You can use interfaces to enforce that classes implement some functionality. For example here is a IEntity interface that allow you to define Associations between EF entities where you don't know at design time what the type of the entity would be. public enum EntityTypes{ Unknown = -1, Dog = 0, Cat } public interface IEntity { int EntityID { get; } string Name { get; } Type EntityType { get; } } public partial class Dog : IEntity { // implement EntityID and Name which could actually be fields // from your EF model Type EntityType{ get{ return EntityTypes.Dog; } } } Using this IEntity, you can then work with undefined associations in other classes // lets take a class that you defined in your model. 
// that class has a mapping to the columns: PetID, PetType public partial class Person { public IEntity GetPet() { return IEntityController.Get(PetID,PetType); } } which makes use of some extension functions: public class IEntityController { static public IEntity Get(int id, EntityTypes type) { switch (type) { case EntityTypes.Dog: return Dog.Get(id); case EntityTypes.Cat: return Cat.Get(id); default: throw new Exception("Invalid EntityType"); } } } Not as neat as having plain inheritance, particularly considering you have to store the PetType in an extra database field, but considering the performance gains, I would not look back. It also cannot model one-to-many, many-to-many relationship, but with creative uses of 'Union' it could be made to work. Finally, it creates the side effet of loading data in a property/function of the object, which you need to be careful about. Using a clear naming convention like GetXYZ() helps in that regards. Compiled Queries Entity Framework performance is not as good as direct database access with ADO (obviously) or Linq2SQL. There are ways to improve it however, one of which is compiling your queries. The performance of a compiled query is similar to Linq2Sql. What is a compiled query? It is simply a query for which you tell the framework to keep the parsed tree in memory so it doesn't need to be regenerated the next time you run it. So the next run, you will save the time it takes to parse the tree. Do not discount that as it is a very costly operation that gets even worse with more complex queries. There are 2 ways to compile a query: creating an ObjectQuery with EntitySQL and using CompiledQuery.Compile() function. (Note that by using an EntityDataSource in your page, you will in fact be using ObjectQuery with EntitySQL, so that gets compiled and cached). An aside here in case you don't know what EntitySQL is. It is a string-based way of writing queries against the EF. Here is an example: "select value dog from Entities.DogSet as dog where dog.ID = @ID". The syntax is pretty similar to SQL syntax. You can also do pretty complex object manipulation, which is well explained [here][1]. Ok, so here is how to do it using ObjectQuery< string query = "select value dog " + "from Entities.DogSet as dog " + "where dog.ID = @ID"; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance)); oQuery.Parameters.Add(new ObjectParameter("ID", id)); oQuery.EnablePlanCaching = true; return oQuery.FirstOrDefault(); The first time you run this query, the framework will generate the expression tree and keep it in memory. So the next time it gets executed, you will save on that costly step. In that example EnablePlanCaching = true, which is unnecessary since that is the default option. The other way to compile a query for later use is the CompiledQuery.Compile method. This uses a delegate: static readonly Func<Entities, int, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, Dog>((ctx, id) => ctx.DogSet.FirstOrDefault(it => it.ID == id)); or using linq static readonly Func<Entities, int, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, Dog>((ctx, id) => (from dog in ctx.DogSet where dog.ID == id select dog).FirstOrDefault()); to call the query: query_GetDog.Invoke( YourContext, id ); The advantage of CompiledQuery is that the syntax of your query is checked at compile time, where as EntitySQL is not. However, there are other consideration... 
Includes Lets say you want to have the data for the dog owner to be returned by the query to avoid making 2 calls to the database. Easy to do, right? EntitySQL string query = "select value dog " + "from Entities.DogSet as dog " + "where dog.ID = @ID"; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance)).Include("Owner"); oQuery.Parameters.Add(new ObjectParameter("ID", id)); oQuery.EnablePlanCaching = true; return oQuery.FirstOrDefault(); CompiledQuery static readonly Func<Entities, int, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, Dog>((ctx, id) => (from dog in ctx.DogSet.Include("Owner") where dog.ID == id select dog).FirstOrDefault()); Now, what if you want to have the Include parametrized? What I mean is that you want to have a single Get() function that is called from different pages that care about different relationships for the dog. One cares about the Owner, another about his FavoriteFood, another about his FavotireToy and so on. Basicly, you want to tell the query which associations to load. It is easy to do with EntitySQL public Dog Get(int id, string include) { string query = "select value dog " + "from Entities.DogSet as dog " + "where dog.ID = @ID"; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance)) .IncludeMany(include); oQuery.Parameters.Add(new ObjectParameter("ID", id)); oQuery.EnablePlanCaching = true; return oQuery.FirstOrDefault(); } The include simply uses the passed string. Easy enough. Note that it is possible to improve on the Include(string) function (that accepts only a single path) with an IncludeMany(string) that will let you pass a string of comma-separated associations to load. Look further in the extension section for this function. If we try to do it with CompiledQuery however, we run into numerous problems: The obvious static readonly Func<Entities, int, string, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, string, Dog>((ctx, id, include) => (from dog in ctx.DogSet.Include(include) where dog.ID == id select dog).FirstOrDefault()); will choke when called with: query_GetDog.Invoke( YourContext, id, "Owner,FavoriteFood" ); Because, as mentionned above, Include() only wants to see a single path in the string and here we are giving it 2: "Owner" and "FavoriteFood" (which is not to be confused with "Owner.FavoriteFood"!). Then, let's use IncludeMany(), which is an extension function static readonly Func<Entities, int, string, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, string, Dog>((ctx, id, include) => (from dog in ctx.DogSet.IncludeMany(include) where dog.ID == id select dog).FirstOrDefault()); Wrong again, this time it is because the EF cannot parse IncludeMany because it is not part of the functions that is recognizes: it is an extension. Ok, so you want to pass an arbitrary number of paths to your function and Includes() only takes a single one. What to do? You could decide that you will never ever need more than, say 20 Includes, and pass each separated strings in a struct to CompiledQuery. But now the query looks like this: from dog in ctx.DogSet.Include(include1).Include(include2).Include(include3) .Include(include4).Include(include5).Include(include6) .[...].Include(include19).Include(include20) where dog.ID == id select dog which is awful as well. Ok, then, but wait a minute. Can't we return an ObjectQuery< with CompiledQuery? Then set the includes on that? 
Well, that's what I would have thought as well: static readonly Func<Entities, int, ObjectQuery<Dog>> query_GetDog = CompiledQuery.Compile<Entities, int, ObjectQuery<Dog>>((ctx, id) => (ObjectQuery<Dog>)(from dog in ctx.DogSet where dog.ID == id select dog)); public Dog GetDog( int id, string include ) { ObjectQuery<Dog> oQuery = query_GetDog(YourContext, id); oQuery = oQuery.IncludeMany(include); return oQuery.FirstOrDefault(); } That should have worked, except that when you call IncludeMany (or Include, Where, OrderBy...) you invalidate the cached compiled query because it is an entirely new one now! So, the expression tree needs to be reparsed and you get that performance hit again. So what is the solution? You simply cannot use CompiledQueries with parametrized Includes. Use EntitySQL instead. This doesn't mean that there aren't uses for CompiledQueries. It is great for localized queries that will always be called in the same context. Ideally CompiledQuery should always be used because the syntax is checked at compile time, but due to limitations, that's not possible. An example of use would be: you may want to have a page that queries which two dogs have the same favorite food, which is a bit narrow for a BusinessLayer function, so you put it in your page and know exactly what type of includes are required. Passing more than 3 parameters to a CompiledQuery Func is limited to 5 type parameters, of which the last one is the return type and the first one is your Entities object from the model. So that leaves you with 3 parameters. A pittance, but it can be improved on very easily. public struct MyParams { public string param1; public int param2; public DateTime param3; } static readonly Func<Entities, MyParams, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, MyParams, IEnumerable<Dog>>((ctx, myParams) => from dog in ctx.DogSet where dog.Age == myParams.param2 && dog.Name == myParams.param1 && dog.BirthDate > myParams.param3 select dog); public List<Dog> GetSomeDogs( int age, string name, DateTime birthDate ) { MyParams myParams = new MyParams(); myParams.param1 = name; myParams.param2 = age; myParams.param3 = birthDate; return query_GetDog(YourContext,myParams).ToList(); } Return Types (this does not apply to EntitySQL queries as they aren't compiled at the same time during execution as the CompiledQuery method) Working with Linq, you usually don't force the execution of the query until the very last moment, in case some other functions downstream want to change the query in some way: static readonly Func<Entities, int, string, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, int, string, IEnumerable<Dog>>((ctx, age, name) => from dog in ctx.DogSet where dog.Age == age && dog.Name == name select dog); public IEnumerable<Dog> GetSomeDogs( int age, string name ) { return query_GetDog(YourContext,age,name); } public void DataBindStuff() { IEnumerable<Dog> dogs = GetSomeDogs(4,"Bud"); // but I want the dogs ordered by BirthDate gridView.DataSource = dogs.OrderBy( it => it.BirthDate ); } What is going to happen here? By still playing with the original ObjectQuery (that is the actual return type of the Linq statement, which implements IEnumerable), it will invalidate the compiled query and be forced to re-parse. So, the rule of thumb is to return a List<T> of objects instead.
static readonly Func<Entities, int, string, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, int, string, IEnumerable<Dog>>((ctx, age, name) => from dog in ctx.DogSet where dog.Age == age && dog.Name == name select dog); public List<Dog> GetSomeDogs( int age, string name ) { return query_GetDog(YourContext,age,name).ToList(); //<== change here } public void DataBindStuff() { List<Dog> dogs = GetSomeDogs(4,"Bud"); // but I want the dogs ordered by BirthDate gridView.DataSource = dogs.OrderBy( it => it.BirthDate ); } When you call ToList(), the query gets executed as per the compiled query and then, later, the OrderBy is executed against the objects in memory. It may be a little bit slower, but I'm not even sure. One sure thing is that you have no worries about mis-handling the ObjectQuery and invalidating the compiled query plan. Once again, that is not a blanket statement. ToList() is a defensive programming trick, but if you have a valid reason not to use ToList(), go ahead. There are many cases in which you would want to refine the query before executing it. Performance What is the performance impact of compiling a query? It can actually be fairly large. A rule of thumb is that compiling and caching the query for reuse takes at least double the time of simply executing it without caching. For complex queries (read inheritance), I have seen upwards of 10 seconds. So, the first time a pre-compiled query gets called, you get a performance hit. After that first hit, performance is noticeably better than the same non-pre-compiled query. Practically the same as Linq2Sql When you load a page with pre-compiled queries the first time you will get a hit. It will load in maybe 5-15 seconds (obviously more than one pre-compiled query will end up being called), while subsequent loads will take less than 300ms. Dramatic difference, and it is up to you to decide if it is ok for your first user to take a hit or you want a script to call your pages to force a compilation of the queries. Can this query be cached? { Dog dog = (from dog in YourContext.DogSet where dog.ID == id select dog).FirstOrDefault(); } No, ad-hoc Linq queries are not cached and you will incur the cost of generating the tree every single time you call it. Parametrized Queries Most search capabilities involve heavily parametrized queries. There are even libraries available that will let you build a parametrized query out of lambda expressions. The problem is that you cannot use pre-compiled queries with those. One way around that is to map out all the possible criteria in the query and flag which ones you want to use: public struct MyParams { public string name; public bool checkName; public int age; public bool checkAge; } static readonly Func<Entities, MyParams, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, MyParams, IEnumerable<Dog>>((ctx, myParams) => from dog in ctx.DogSet where (myParams.checkAge == false || dog.Age == myParams.age) && (myParams.checkName == false || dog.Name == myParams.name ) select dog); protected List<Dog> GetSomeDogs() { MyParams myParams = new MyParams(); myParams.name = "Bud"; myParams.checkName = true; myParams.age = 0; myParams.checkAge = false; return query_GetDog(YourContext,myParams).ToList(); } The advantage here is that you get all the benefits of a pre-compiled query.
The disadvantages are that you most likely will end up with a where clause that is pretty difficult to maintain, that you will incur a bigger penalty for pre-compiling the query, and that each query you run is not as efficient as it could be (particularly with joins thrown in). Another way is to build an EntitySQL query piece by piece, like we all did with SQL. protected List<Dog> GetSomeDogs( string name, int age) { string query = "select value dog from Entities.DogSet as dog where 1 = 1 "; if( !String.IsNullOrEmpty(name) ) query = query + " and dog.Name = @Name "; if( age > 0 ) query = query + " and dog.Age = @Age "; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>( query, YourContext ); if( !String.IsNullOrEmpty(name) ) oQuery.Parameters.Add( new ObjectParameter( "Name", name ) ); if( age > 0 ) oQuery.Parameters.Add( new ObjectParameter( "Age", age ) ); return oQuery.ToList(); } Here the problems are: - there is no syntax checking during compilation - each different combination of parameters generates a different query which will need to be pre-compiled when it is first run. In this case, there are only 4 different possible queries (no params, age-only, name-only and both params), but you can see that there can be way more with a real-world search. - No one likes to concatenate strings! Another option is to query a large subset of the data and then narrow it down in memory. This is particularly useful if you are working with a definite subset of the data, like all the dogs in a city. You know there are a lot but you also know there aren't that many... so your CityDog search page can load all the dogs for the city in memory, which is a single pre-compiled query, and then refine the results. protected List<Dog> GetSomeDogs( string name, int age, string city) { string query = "select value dog from Entities.DogSet as dog where dog.Owner.Address.City = @City "; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>( query, YourContext ); oQuery.Parameters.Add( new ObjectParameter( "City", city ) ); List<Dog> dogs = oQuery.ToList(); if( !String.IsNullOrEmpty(name) ) dogs = dogs.Where( it => it.Name == name ).ToList(); if( age > 0 ) dogs = dogs.Where( it => it.Age == age ).ToList(); return dogs; } It is particularly useful when you start by displaying all the data and then allow for filtering. Problems: - Could lead to serious data transfer if you are not careful about your subset. - You can only filter on the data that you returned. It means that if you don't return the Dog.Owner association, you will not be able to filter on Dog.Owner.Name. So what is the best solution? There isn't any. You need to pick the solution that works best for you and your problem: - Use lambda-based query building when you don't care about pre-compiling your queries. - Use fully-defined pre-compiled Linq queries when your object structure is not too complex. - Use EntitySQL/string concatenation when the structure could be complex and when the possible number of different resulting queries is small (which means fewer pre-compilation hits). - Use in-memory filtering when you are working with a smallish subset of the data or when you had to fetch all of the data at first anyway (if the performance is fine with all the data, then filtering in memory will not cause any time to be spent in the db).
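The IncludeMany(string) extension used in the examples above can be written as a simple loop over Include(); here is a minimal sketch of what such an extension could look like (the splitting and trimming logic are assumptions, not necessarily the original extension):

    public static class ObjectQueryExtensions
    {
        // Applies a comma-separated list of association paths as successive Include() calls,
        // e.g. query.IncludeMany("Owner,FavoriteFood") behaves like query.Include("Owner").Include("FavoriteFood").
        public static ObjectQuery<T> IncludeMany<T>(this ObjectQuery<T> query, string includes)
        {
            if (String.IsNullOrEmpty(includes))
                return query;
            foreach (string path in includes.Split(','))
            {
                string trimmed = path.Trim();
                if (trimmed.Length > 0)
                    query = query.Include(trimmed); // each Include() returns a new ObjectQuery<T>
            }
            return query;
        }
    }

As noted earlier, calling it on the result of a CompiledQuery still invalidates the cached plan; it is only a convenience for the EntitySQL/ObjectQuery path.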
Singleton access The best way to deal with your context and entities across all your pages is to use the singleton pattern: public sealed class YourContext { private const string instanceKey = "On3GoModelKey"; YourContext(){} public static YourEntities Instance { get { HttpContext context = HttpContext.Current; if( context == null ) return Nested.instance; if (context.Items[instanceKey] == null) { YourEntities entity = new YourEntities(); context.Items[instanceKey] = entity; } return (YourEntities)context.Items[instanceKey]; } } class Nested { // Explicit static constructor to tell C# compiler // not to mark type as beforefieldinit static Nested() { } internal static readonly YourEntities instance = new YourEntities(); } } NoTracking, is it worth it? When executing a query, you can tell the framework to track the objects it will return or not. What does it mean? With tracking enabled (the default option), the framework will track what is going on with the object (has it been modified? Created? Deleted?) and will also link objects together when further queries are made from the database, which is what is of interest here. For example, let's assume that the Dog with ID == 2 has an owner whose ID == 10. Dog dog = (from dog in YourContext.DogSet where dog.ID == 2 select dog).FirstOrDefault(); //dog.OwnerReference.IsLoaded == false; Person owner = (from o in YourContext.PersonSet where o.ID == 10 select o).FirstOrDefault(); //dog.OwnerReference.IsLoaded == true; If we were to do the same with no tracking, the result would be different. ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>) (from dog in YourContext.DogSet where dog.ID == 2 select dog); oDogQuery.MergeOption = MergeOption.NoTracking; Dog dog = oDogQuery.FirstOrDefault(); //dog.OwnerReference.IsLoaded == false; ObjectQuery<Person> oPersonQuery = (ObjectQuery<Person>) (from o in YourContext.PersonSet where o.ID == 10 select o); oPersonQuery.MergeOption = MergeOption.NoTracking; Person owner = oPersonQuery.FirstOrDefault(); //dog.OwnerReference.IsLoaded == false; Tracking is very useful, and in a perfect world without performance issues it would always be on. But in this world there is a price for it, in terms of performance. So, should you use NoTracking to speed things up? It depends on what you are planning to use the data for. Is there any chance that the data you query with NoTracking can be used to make updates/inserts/deletes in the database? If so, don't use NoTracking, because associations are not tracked and that will cause exceptions to be thrown. In a page where there are absolutely no updates to the database, you can use NoTracking. Mixing tracking and NoTracking is possible, but it requires you to be extra careful with updates/inserts/deletes. The problem is that if you mix them you risk having the framework try to Attach() a NoTracking object to the context where another copy of the same object exists with tracking on. Basically, what I am saying is that Dog dog1 = (from dog in YourContext.DogSet where dog.ID == 2 select dog).FirstOrDefault(); ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>) (from dog in YourContext.DogSet where dog.ID == 2 select dog); oDogQuery.MergeOption = MergeOption.NoTracking; Dog dog2 = oDogQuery.FirstOrDefault(); dog1 and dog2 are 2 different objects, one tracked and one not. Using the detached object in an update/insert will force an Attach() that will say "Wait a minute, I already have an object here with the same database key. Fail".
And when you Attach() one object, all of its hierarchy gets attached as well, causing problems everywhere. Be extra careful. How much faster is it with NoTracking? It depends on the queries. Some are much more susceptible to tracking than others. I don't have a fast and easy rule for it, but it helps. So I should use NoTracking everywhere then? Not exactly. There are some advantages to tracking objects. The first one is that the object is cached, so subsequent calls for that object will not hit the database. That cache is only valid for the lifetime of the YourEntities object, which, if you use the singleton code above, is the same as the page lifetime. One page request == one YourEntities object. So for multiple calls for the same object, it will load only once per page request. (Other caching mechanisms could extend that.) What happens when you are using NoTracking and try to load the same object multiple times? The database will be queried each time, so there is an impact there. How often do/should you call for the same object during a single page request? As little as possible of course, but it does happen. Also remember the piece above about having the associations connected automatically for you? You don't have that with NoTracking, so if you load your data in multiple batches, you will not have a link between them: ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>)(from dog in YourContext.DogSet select dog); oDogQuery.MergeOption = MergeOption.NoTracking; List<Dog> dogs = oDogQuery.ToList(); ObjectQuery<Person> oPersonQuery = (ObjectQuery<Person>)(from o in YourContext.PersonSet select o); oPersonQuery.MergeOption = MergeOption.NoTracking; List<Person> owners = oPersonQuery.ToList(); In this case, no dog will have its .Owner property set. Some things to keep in mind when you are trying to optimize the performance. No lazy loading, what am I to do? This can be seen as a blessing in disguise. Of course it is annoying to load everything manually. However, it decreases the number of calls to the db and forces you to think about when you should load data. The more you can load in one database call the better. That was always true, but it is enforced now with this 'feature' of EF. Of course, you can call if( !ObjectReference.IsLoaded ) ObjectReference.Load(); if you want to, but a better practice is to force the framework to load the objects you know you will need in one shot. This is where the discussion about parametrized Includes begins to make sense. Let's say you have your Dog object public partial class Dog { public static Dog Get(int id) { return YourContext.DogSet.FirstOrDefault(it => it.ID == id ); } } This is the type of function you work with all the time. It gets called from all over the place, and once you have that Dog object, you will do very different things to it in different functions. First, it should be pre-compiled, because you will call it very often. Second, each different page will want to have access to a different subset of the Dog data. Some will want the Owner, some the FavoriteToy, etc. Of course, you could call Load() for each reference you need anytime you need one. But that will generate a call to the database each time. Bad idea. So instead, each page will ask for the data it wants to see when it first requests the Dog object: static public Dog Get(int id) { return Get(id, ""); } static public Dog Get(int id, string includePath) { string query = "select value o " + " from YourEntities.DogSet as o " +

    Read the article

  • Maven WTP webapp with maven-eclipse-plugin but without m2eclipse

    - by chrsk
I use the eclipse:eclipse goal to generate an Eclipse project environment. The generation works fine. The goal creates the var classpath entries for all needed dependencies. With m2eclipse there was the Maven Container, which defines an export folder, which was WEB-INF/lib for me. But I don't want to rely on m2eclipse, so I don't want to use it anymore. The classpath entries which are generated by the eclipse:eclipse goal don't have such an export folder. While booting the servlet container with WTP, it publishes all resources and classes except the libraries to the context. What's missing to publish the needed libs, or isn't that possible without m2eclipse integration? IDE Configuration Eclipse 3.5 JEE Galileo Maven 2 m2eclipse The maven-eclipse-plugin configuration <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-eclipse-plugin</artifactId> <version>2.8</version> <configuration> <projectNameTemplate>opensaga-[artifactId]</projectNameTemplate> <useProjectReferences>false</useProjectReferences> <downloadSources>false</downloadSources> <downloadJavadocs>false</downloadJavadocs> <wtpmanifest>true</wtpmanifest> <wtpversion>2.0</wtpversion> <wtpapplicationxml>true</wtpapplicationxml> <wtpContextName>opensaga-[artifactId]</wtpContextName> <additionalProjectFacets> <jst.java>5.0</jst.java> <jst.web>2.3</jst.web> </additionalProjectFacets> </configuration> </plugin>
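WTP decides which jars get published to WEB-INF/lib from the deployment attribute on each .classpath entry, so the generated .classpath is the place to check. A sketch of the kind of entry WTP expects (the M2_REPO path is only illustrative):

    <classpathentry kind="var" path="M2_REPO/commons-lang/commons-lang/2.4/commons-lang-2.4.jar">
      <attributes>
        <attribute name="org.eclipse.jst.component.dependency" value="/WEB-INF/lib"/>
      </attributes>
    </classpathentry>

If the entries generated by eclipse:eclipse lack the org.eclipse.jst.component.dependency attribute, WTP will compile against the jars but not publish them to the deployed context.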

    Read the article

  • testing Clojure in Maven

    - by Ralph
    I am new at Maven and even newer at Clojure. As an exercise to learn the language, I am writing a spider solitaire player program. I also plan on writing a similar program in Scala to compare the implementations (see my post http://stackoverflow.com/questions/2571267/modern-java-alternatives-closed). I have configured a Maven directory structure containing the usual src/main/clojure and src/test/clojure directories. My pom.xml file includes the clojure-maven-plugin. When I run "mvn test", it displays "No tests to run", despite my having test code in the src/test/clojure directory. As I misnaming something? Here is my pom.xml file: <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>SpiderPlayer</groupId> <artifactId>SpiderPlayer</artifactId> <version>1.0.0-SNAPSHOT</version> <inceptionYear>2010</inceptionYear> <packaging>jar</packaging> <properties> <maven.build.timestamp.format>yyMMdd.HHmm</maven.build.timestamp.format> <main.dir>org/dogdaze/spider_player</main.dir> <main.package>org.dogdaze.spider_player</main.package> <main.class>${main.package}.Main</main.class> </properties> <build> <sourceDirectory>src/main/clojure</sourceDirectory> <testSourceDirectory>src/main/clojure</testSourceDirectory> <plugins> <plugin> <groupId>com.theoryinpractise</groupId> <artifactId>clojure-maven-plugin</artifactId> <version>1.3.1</version> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-antrun-plugin</artifactId> <version>1.3</version> <executions> <execution> <goals> <goal>run</goal> </goals> <phase>generate-sources</phase> <configuration> <tasks> <echo file="${project.build.sourceDirectory}/${main.dir}/Version.clj" message="(ns ${main.package})${line.separator}"/> <echo file="${project.build.sourceDirectory}/${main.dir}/Version.clj" append="true" message="(def version &quot;${maven.build.timestamp}&quot;)${line.separator}"/> </tasks> </configuration> </execution> </executions> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-assembly-plugin</artifactId> <version>2.1</version> <executions> <execution> <goals> <goal>single</goal> </goals> <phase>package</phase> <configuration> <descriptorRefs> <descriptorRef>jar-with-dependencies</descriptorRef> </descriptorRefs> <archive> <manifest> <mainClass>${main.class}</mainClass> </manifest> </archive> </configuration> </execution> </executions> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <configuration> <redirectTestOutputToFile>true</redirectTestOutputToFile> <skipTests>false</skipTests> <skip>false</skip> </configuration> <executions> <execution> <id>surefire-it</id> <phase>integration-test</phase> <goals> <goal>test</goal> </goals> <configuration> <skip>false</skip> </configuration> </execution> </executions> </plugin> </plugins> </build> <dependencies> <dependency> <groupId>commons-cli</groupId> <artifactId>commons-cli</artifactId> <version>1.2</version> <scope>compile</scope> </dependency> </dependencies> </project> Here is my Clojure source file (src/main/clojure/org/dogdaze/spider_player/Deck.clj): ; Copyright 2010 Dogdaze (ns org.dogdaze.spider_player.Deck (:use [clojure.contrib.seq-utils :only (shuffle)])) (def suits [:clubs :diamonds :hearts :spades]) (def ranks [:ace :two :three :four 
:five :six :seven :eight :nine :ten :jack :queen :king]) (defn suit-seq "Return 4 suits: if number-of-suits == 1: :clubs :clubs :clubs :clubs if number-of-suits == 2: :clubs :diamonds :clubs :diamonds if number-of-suits == 4: :clubs :diamonds :hearts :spades." [number-of-suits] (take 4 (cycle (take number-of-suits suits)))) (defstruct card :rank :suit) (defn unshuffled-deck "Create an unshuffled deck containing all cards from the number of suits specified." [number-of-suits] (for [rank ranks suit (suit-seq number-of-suits)] (struct card rank suit))) (defn deck "Create a shuffled deck containing all cards from the number of suits specified." [number-of-suits] (shuffle (unshuffled-deck number-of-suits))) Here is my test case (src/test/clojure/org/dogdaze/spider_player/TestDeck.clj): ; Copyright 2010 Dogdaze (ns org.dogdaze.spider_player (:use clojure.set clojure.test org.dogdaze.spider_player.Deck)) (deftest test-suit-seq (is (= (suit-seq 1) [:clubs :clubs :clubs :clubs])) (is (= (suit-seq 2) [:clubs :diamonds :clubs :diamonds])) (is (= (suit-seq 4) [:clubs :diamonds :hearts :spades]))) (def one-suit-deck [{:rank :ace, :suit :clubs} {:rank :ace, :suit :clubs} {:rank :ace, :suit :clubs} {:rank :ace, :suit :clubs} {:rank :two, :suit :clubs} {:rank :two, :suit :clubs} {:rank :two, :suit :clubs} {:rank :two, :suit :clubs} {:rank :three, :suit :clubs} {:rank :three, :suit :clubs} {:rank :three, :suit :clubs} {:rank :three, :suit :clubs} {:rank :four, :suit :clubs} {:rank :four, :suit :clubs} {:rank :four, :suit :clubs} {:rank :four, :suit :clubs} {:rank :five, :suit :clubs} {:rank :five, :suit :clubs} {:rank :five, :suit :clubs} {:rank :five, :suit :clubs} {:rank :six, :suit :clubs} {:rank :six, :suit :clubs} {:rank :six, :suit :clubs} {:rank :six, :suit :clubs} {:rank :seven, :suit :clubs} {:rank :seven, :suit :clubs} {:rank :seven, :suit :clubs} {:rank :seven, :suit :clubs} {:rank :eight, :suit :clubs} {:rank :eight, :suit :clubs} {:rank :eight, :suit :clubs} {:rank :eight, :suit :clubs} {:rank :nine, :suit :clubs} {:rank :nine, :suit :clubs} {:rank :nine, :suit :clubs} {:rank :nine, :suit :clubs} {:rank :ten, :suit :clubs} {:rank :ten, :suit :clubs} {:rank :ten, :suit :clubs} {:rank :ten, :suit :clubs} {:rank :jack, :suit :clubs} {:rank :jack, :suit :clubs} {:rank :jack, :suit :clubs} {:rank :jack, :suit :clubs} {:rank :queen, :suit :clubs} {:rank :queen, :suit :clubs} {:rank :queen, :suit :clubs} {:rank :queen, :suit :clubs} {:rank :king, :suit :clubs} {:rank :king, :suit :clubs} {:rank :king, :suit :clubs} {:rank :king, :suit :clubs}]) (def two-suits-deck [{:rank :ace, :suit :clubs} {:rank :ace, :suit :diamonds} {:rank :ace, :suit :clubs} {:rank :ace, :suit :diamonds} {:rank :two, :suit :clubs} {:rank :two, :suit :diamonds} {:rank :two, :suit :clubs} {:rank :two, :suit :diamonds} {:rank :three, :suit :clubs} {:rank :three, :suit :diamonds} {:rank :three, :suit :clubs} {:rank :three, :suit :diamonds} {:rank :four, :suit :clubs} {:rank :four, :suit :diamonds} {:rank :four, :suit :clubs} {:rank :four, :suit :diamonds} {:rank :five, :suit :clubs} {:rank :five, :suit :diamonds} {:rank :five, :suit :clubs} {:rank :five, :suit :diamonds} {:rank :six, :suit :clubs} {:rank :six, :suit :diamonds} {:rank :six, :suit :clubs} {:rank :six, :suit :diamonds} {:rank :seven, :suit :clubs} {:rank :seven, :suit :diamonds} {:rank :seven, :suit :clubs} {:rank :seven, :suit :diamonds} {:rank :eight, :suit :clubs} {:rank :eight, :suit :diamonds} {:rank :eight, :suit :clubs} {:rank :eight, :suit :diamonds} {:rank 
:nine, :suit :clubs} {:rank :nine, :suit :diamonds} {:rank :nine, :suit :clubs} {:rank :nine, :suit :diamonds} {:rank :ten, :suit :clubs} {:rank :ten, :suit :diamonds} {:rank :ten, :suit :clubs} {:rank :ten, :suit :diamonds} {:rank :jack, :suit :clubs} {:rank :jack, :suit :diamonds} {:rank :jack, :suit :clubs} {:rank :jack, :suit :diamonds} {:rank :queen, :suit :clubs} {:rank :queen, :suit :diamonds} {:rank :queen, :suit :clubs} {:rank :queen, :suit :diamonds} {:rank :king, :suit :clubs} {:rank :king, :suit :diamonds} {:rank :king, :suit :clubs} {:rank :king, :suit :diamonds}]) (def four-suits-deck [{:rank :ace, :suit :clubs} {:rank :ace, :suit :diamonds} {:rank :ace, :suit :hearts} {:rank :ace, :suit :spades} {:rank :two, :suit :clubs} {:rank :two, :suit :diamonds} {:rank :two, :suit :hearts} {:rank :two, :suit :spades} {:rank :three, :suit :clubs} {:rank :three, :suit :diamonds} {:rank :three, :suit :hearts} {:rank :three, :suit :spades} {:rank :four, :suit :clubs} {:rank :four, :suit :diamonds} {:rank :four, :suit :hearts} {:rank :four, :suit :spades} {:rank :five, :suit :clubs} {:rank :five, :suit :diamonds} {:rank :five, :suit :hearts} {:rank :five, :suit :spades} {:rank :six, :suit :clubs} {:rank :six, :suit :diamonds} {:rank :six, :suit :hearts} {:rank :six, :suit :spades} {:rank :seven, :suit :clubs} {:rank :seven, :suit :diamonds} {:rank :seven, :suit :hearts} {:rank :seven, :suit :spades} {:rank :eight, :suit :clubs} {:rank :eight, :suit :diamonds} {:rank :eight, :suit :hearts} {:rank :eight, :suit :spades} {:rank :nine, :suit :clubs} {:rank :nine, :suit :diamonds} {:rank :nine, :suit :hearts} {:rank :nine, :suit :spades} {:rank :ten, :suit :clubs} {:rank :ten, :suit :diamonds} {:rank :ten, :suit :hearts} {:rank :ten, :suit :spades} {:rank :jack, :suit :clubs} {:rank :jack, :suit :diamonds} {:rank :jack, :suit :hearts} {:rank :jack, :suit :spades} {:rank :queen, :suit :clubs} {:rank :queen, :suit :diamonds} {:rank :queen, :suit :hearts} {:rank :queen, :suit :spades} {:rank :king, :suit :clubs} {:rank :king, :suit :diamonds} {:rank :king, :suit :hearts} {:rank :king, :suit :spades}]) (deftest test-unshuffled-deck (is (= (unshuffled-deck 1) one-suit-deck)) (is (= (unshuffled-deck 2) two-suits-deck)) (is (= (unshuffled-deck 4) four-suits-deck))) (deftest test-shuffled-deck (is (= (set (deck 1)) (set one-suit-deck))) (is (= (set (deck 2)) (set two-suits-deck))) (is (= (set (deck 4)) (set four-suits-deck)))) (run-tests) Any idea why the test is not running? BTW, feel free to suggest improvements to the Clojure code. Thanks, Ralph
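One thing to check, as a sketch rather than a confirmed fix: the pom above points testSourceDirectory at src/main/clojure instead of src/test/clojure, and with jar packaging the clojure-maven-plugin's goals only run when they are explicitly bound to a lifecycle phase (goal name assumed from the plugin's documented conventions):

    <testSourceDirectory>src/test/clojure</testSourceDirectory>
    ...
    <plugin>
      <groupId>com.theoryinpractise</groupId>
      <artifactId>clojure-maven-plugin</artifactId>
      <version>1.3.1</version>
      <executions>
        <execution>
          <id>clojure-test</id>
          <phase>test</phase>
          <goals>
            <goal>test</goal>
          </goals>
        </execution>
      </executions>
    </plugin>

With the plugin declared but given no executions, mvn test only runs Surefire, which finds no Java tests, so the Clojure deftest forms are never invoked.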

    Read the article

  • When is LINQ (to objects) Overused?

    - by Mystagogue
My career started as a hard-core functional-paradigm developer (LISP), and now I'm a hard-core .NET/C# developer. Of course I'm enamored with LINQ. However, I also believe in (1) using the right tool for the job and (2) preserving the KISS principle: of the 60+ engineers I work with, perhaps only 20% have hours of LINQ / functional paradigm experience, and 5% have 6 to 12 months of such experience. In short, I feel compelled to stay away from LINQ unless I'm hampered in achieving a goal without it (wherein replacing 3 lines of O-O code with one line of LINQ is not a "goal"). But now one of the engineers, having 12 months of LINQ / functional-paradigm experience, is using LINQ to objects, or at least lambda expressions anyway, in every conceivable location in production code. My various appeals to the KISS principle have not yielded any results. Therefore... What published studies can I next appeal to? What "coding standard" guidelines have others concocted with some success? Are there published LINQ performance issues I could point out? In short, I'm trying to achieve my first goal - KISS - by indirect persuasion. Of course this problem could be extended to countless other areas (such as overuse of extension methods). Perhaps there is an "uber" guide, highly regarded (e.g. published studies, etc.), that takes a broader swing at this. Anything?

    Read the article

  • Running custom JAXB2 plugins using Maven JAXB 2.x Plugin

    - by nadouani
    I would like to generate JAXB Java classes using the Maven JAXB 2.x plugin http://static.highsource.org/mjiip/maven-jaxb2-plugin/generate-mojo.html To declare the custom JAXB plugins I would execute during the generate process, I used the "args" element like below: <plugin> <groupId>org.jvnet.jaxb2.maven2</groupId> <artifactId>maven-jaxb2-plugin</artifactId> <version>0.7.4</version> <executions> <execution> <goals> <goal>generate</goal> </goals> </execution> </executions> <configuration> <extension>true</extension> <args> <arg>-Xinheritance</arg> <arg>-XtoString</arg> </args> ... </configuration> ... </plugin> The issue is that the maven generate process is failing with the following error: Failed to execute goal org.jvnet.jaxb2.maven2:maven-jaxb2-plugin:0.7.4:generate (default) on project was: Error parsing the command line [[Ljava.lang.String;@1ad4a1ae] Any idea on how to specify the args values? Thanks
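One avenue worth checking, sketched here with an assumed version number: -Xinheritance and -XtoString are provided by the JAXB2 Basics extensions, and XJC will reject switches it has no registered handler for, which can surface as a command-line parsing error. The extension artifact goes on the XJC plugin classpath via the plugins element of the configuration:

    <configuration>
      <extension>true</extension>
      <args>
        <arg>-Xinheritance</arg>
        <arg>-XtoString</arg>
      </args>
      <plugins>
        <plugin>
          <groupId>org.jvnet.jaxb2_commons</groupId>
          <artifactId>jaxb2-basics</artifactId>
          <version>0.5.0</version>
        </plugin>
      </plugins>
    </configuration>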

    Read the article

  • Is it possible to search locally in jqGrid with treeGrid installed

    - by Nehu
I am using jqGrid with treeGrid. I have added a filterToolbar. I would like to search locally instead of having a server call. The treegrid docs say that, "When we initialize the grid and the data is read, the datatype is automatically set to local." So, is it possible to implement local search with treeGrid? I tried the configuration below, but it results in server calls. My configuration is var grid = $("#grid").jqGrid({ treeGrid: true, treeGridModel: 'adjacency', ExpandColumn: 'businessAreaName', ExpandColClick : true, url:'agileProgramme/records.do', datatype: 'json', mtype: 'GET', colNames:['Id' , 'Business Area' , 'Investment' , 'Org' , 'Goal' ], colModel:[ /*00*/ {name:'agileProgrammeId',index:'agileProgrammeId', width:0, editable:false,hidden:true}, /*01*/ {name:'businessAreaName',index:'businessAreaName', width:160, editable:false}, /*02*/ {name:'programmeName',index:'programmeName', width:150, editable:false, classes:'link'}, /*03*/ {name:'org',index:'org', width:50, editable:false, classes:'orgHierarchy', sortable : false}, /*04*/ {name:'goal',index:'goal', width:70, editable:false} ], treeReader : { level_field: "level", parent_id_field: "parent", leaf_field: "leaf", expanded_field: "expanded" }, autowidth: true, height: 240, pager: '#pager', sortname: 'id', sortorder: "asc", toolbar:[true,"top"], caption:"TableGridDemo", emptyrecords: "Empty records", jsonReader : { root: "rows", page: "page", total: "total", records: "records", repeatitems: false, cell: "cell", id: "agileProgrammeId" } }); And to implement the search toolbar: $('#grid').jqGrid('filterToolbar', {stringResult: true,searchOnEnter : true}); I would appreciate any help, or any pointer on whether it is even possible.

    Read the article

  • UPDATE query that fixes orphaned records

    - by Jed
    I have an Access database that has two tables that are related by PK/FK. Unfortunately, the database tables have allowed for duplicate/redundant records and has made the database a bit screwy. I am trying to figure out a SQL statement that will fix the problem. To better explain the problem and goal, I have created example tables to use as reference: You'll notice there are two tables, a Student table and a TestScore table where StudentID is the PK/FK. The Student table contains duplicate records for students John, Sally, Tommy, and Suzy. In other words the John's with StudentID's 1 and 5 are the same person, Sally 2 and 6 are the same person, and so on. The TestScore table relates test scores with a student. Ignoring how/why the Student table allowed duplicates, etc - The goal I'm trying to accomplish is to update the TestScore table so that it replaces the StudentID's that have been disabled with the corresponding enabled StudentID. So, all StudentID's = 1 (John) will be updated to 5; all StudentID's = 2 (Sally) will be updated to 6, and so on. Here's the resultant TestScore table that I'm shooting for (Notice there is no longer any reference to the disabled StudentID's 1-4): Can you think of a query (compatible with MS Access's JET Engine) that can accomplish this goal? Or, maybe, you can offer some tips/perspectives that will point me in the right direction. Thanks.
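Without the actual column names from the example tables, here is a sketch of the kind of JET-compatible UPDATE that can remap the orphaned keys in one pass; it assumes the Student table has a Name column to match duplicates on and a yes/no Enabled flag separating the old rows from the new ones (swap in whatever really identifies two rows as the same student):

    UPDATE (Student AS stale
        INNER JOIN Student AS live ON stale.[Name] = live.[Name])
        INNER JOIN TestScore ON TestScore.StudentID = stale.StudentID
    SET TestScore.StudentID = live.StudentID
    WHERE stale.Enabled = False
      AND live.Enabled = True;

Running it against a copy of the database first is cheap insurance, since an overly loose join condition would silently remap scores to the wrong student.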

    Read the article

  • Scheduling of tasks to a single resource using Prolog

    - by Reed Debaets
I searched through here as best I could and though I found some relevant questions, I don't think they covered the question at hand: Assume a single resource and a known list of requests to schedule a task. Each request includes a start_after, start_by, expected_duration, and action. The goal is to schedule the tasks for execution as soon as possible while keeping each task scheduled between start_after and start_by. I coded up a simple prolog example that I "thought" should work, but unfortunately I've been getting errors at run time: "=/2: Arguments are not sufficiently instantiated". Any help or advice would be greatly appreciated. startAfter(1,0). startAfter(2,0). startAfter(3,0). startBy(1,100). startBy(2,500). startBy(3,300). duration(1,199). duration(2,199). duration(3,199). action(1,'noop1'). action(2,'noop2'). action(3,'noop3'). can_run(R,T) :- startAfter(R,TA),startBy(R,TB),T>=TA,T=<TB. conflicts(T,R1,T1) :- duration(R1,D1),T=<D1+T1,T>T1. schedule(R1,T1,R2,T2,R3,T3) :- can_run(R1,T1),\+conflicts(T1,R2,T2),\+conflicts(T1,R3,T3), can_run(R2,T2),\+conflicts(T2,R1,T1),\+conflicts(T2,R3,T3), can_run(R3,T3),\+conflicts(T3,R1,T1),\+conflicts(T3,R2,T2). % when traced I *should* see T1=0, T2=400, T3=200 Edit: conflicts goal wasn't quite right: needed an extra T > T1 clause. Edit: Apparently my schedule goal works if I supply valid Request,Time pairs ... but I'm stuck trying to force prolog to find valid values for T1..3 when given R1..3?
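The instantiation error comes from comparing an unbound T with >=/2 and =</2 before it has a value; one common workaround is to make can_run generate candidate start times instead of merely testing them. A sketch using between/3 (stepping one time unit at a time, which may be slow for wide windows; CLP(FD) would be the heavier-duty alternative):

    can_run(R, T) :-
        startAfter(R, TA),
        startBy(R, TB),
        between(TA, TB, T).   % enumerates T = TA, TA+1, ..., TB

With can_run/2 generating values, schedule/6 can search for T1..T3 on its own rather than requiring them to be supplied.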

    Read the article

  • SSH with Perl using file handles, not Net::SSH

    - by jorge
    Before I ask the question: I can not use cpan module Net::SSH, I want to but can not, no amount of begging will change this fact I need to be able to open an SSH connection, keep it open, and read from it's stdout and write to its stdin. My approach thus far has been to open it in a pipe, but I have not been able to advance past this, it dies straight away. That's what I have in mind, I understand this causes a fork to occur. I've written code accordingly for this fork (or so I think). Below is a skeleton of what I want, I just need the system to work. #!/usr/bin/perl use warnings; $| = 1; $pid = open (SSH,"| ssh user\@host"); if(defined($pid)){ if(!$pid){ #child while(<>){ print; } }else{ select SSH; $| = 1; select STDIN; #parent while(<>){ print SSH $_; while(<SSH>){ print; } } close(SSH); } } I know, from what it looks like, I'm trying to recreate "system('ssh user@host')," that is not my end goal, but knowing how to do that would bring me much closer to the end goal. Basically, I need a file handle to an open ssh connection where I can read from it the output and write to it input (not necessarily straight from my program's STDIN, anything I want, variables, yada yada) This includes password input. I know about key pairs, part of the end goal involves making key pairs, but the connection needs to happen regardless of their existence, and if they do not exist it's part of my plan to make them exist.
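As a sketch of the plumbing only (it assumes key-based or otherwise non-interactive authentication, because ssh reads passwords from the controlling tty rather than from stdin), the core IPC::Open3 module gives separate handles for the child's stdin and stdout without anything from CPAN:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use IPC::Open3;
    use Symbol qw(gensym);

    my $ssh_err = gensym;                        # open3 cannot autovivify the stderr handle
    my $pid = open3(my $ssh_in, my $ssh_out, $ssh_err,
                    'ssh', '-T', 'user@host');

    print {$ssh_in} "echo hello from remote\n";  # write to the remote shell's stdin
    print {$ssh_in} "exit\n";
    close $ssh_in;                               # EOF lets the remote shell finish

    print while <$ssh_out>;                      # read whatever the remote side produced
    waitpid($pid, 0);

Interactive password entry would need something tty-aware such as Expect, or key pairs set up beforehand, which matches the end goal described above.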

    Read the article

  • With Maven2, how would I zip a directory in my resources?

    - by Benny
    I'm trying to upgrade my project from using Maven 1 to Maven 2, but I'm having a problem with one of the goals. I have the following Maven 1 goal in my maven.xml file: <goal name="jar:init"> <ant:delete file="${basedir}/src/installpack.zip"/> <ant:zip destfile="${basedir}/src/installpack.zip" basedir="${basedir}/src/installpack" /> <copy todir="${destination.location}"> <fileset dir="${source.location}"> <exclude name="installpack/**"/> </fileset> </copy> </goal> I'm unable to find a way to do this in Maven2, however. Right now, the installpack directory is in the resources directory of my standard Maven2 directory structure, which works well since it just gets copied over. I need it to be zipped though. I found this page on creating ant plugins: http://maven.apache.org/guides/plugin/guide-ant-plugin-development.html. It looks like I could create my own ant plugin to do what I need. I was just wondering if there was a way to do it using only Maven2. Any suggestions would be much appreciated. Thanks, B.J.
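One approach that stays close to the old maven.xml is to keep using the Ant zip task through the maven-antrun-plugin, bound to a phase that runs before packaging; the phase, paths, and the idea of zipping straight into the build output are assumptions to adapt:

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-antrun-plugin</artifactId>
      <executions>
        <execution>
          <id>zip-installpack</id>
          <phase>process-resources</phase>
          <goals>
            <goal>run</goal>
          </goals>
          <configuration>
            <tasks>
              <zip destfile="${project.build.outputDirectory}/installpack.zip"
                   basedir="${basedir}/src/main/resources/installpack"/>
            </tasks>
          </configuration>
        </execution>
      </executions>
    </plugin>

Pairing this with a resources exclude for installpack/** reproduces the old behaviour of shipping only the zip; the maven-assembly-plugin is the more Maven-native alternative if the end product should be a standalone zip artifact rather than a file inside the jar.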

    Read the article

  • How do I execute a program using Maven?

    - by Will
    I would like to have a Maven goal trigger the execution of a java class. I'm trying to migrate over a Makefile with the lines: neotest: mvn exec:java -Dexec.mainClass="org.dhappy.test.NeoTraverse" And I would like mvn neotest to produce what make neotest does currently. Neither the exec plugin documentation nor the Maven Ant tasks pages had any sort of straightforward example. Currently, I'm at: <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>exec-maven-plugin</artifactId> <version>1.1</version> <executions><execution> <goals><goal>java</goal></goals> </execution></executions> <configuration> <mainClass>org.dhappy.test.NeoTraverse</mainClass> </configuration> </plugin> I don't know how to trigger the plugin from the command line, though.
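Maven will not accept a brand-new lifecycle word like neotest, but the configuration above can be made callable in two ways. Because the <configuration> sits outside <executions>, running the goal directly already works: mvn exec:java, which picks up mainClass from the pom. To get something closer to mvn neotest, a profile that binds the execution to a phase is one option; the profile id and phase here are arbitrary choices:

    <profiles>
      <profile>
        <id>neotest</id>
        <build>
          <plugins>
            <plugin>
              <groupId>org.codehaus.mojo</groupId>
              <artifactId>exec-maven-plugin</artifactId>
              <version>1.1</version>
              <executions>
                <execution>
                  <id>run-neotraverse</id>
                  <phase>test</phase>
                  <goals>
                    <goal>java</goal>
                  </goals>
                  <configuration>
                    <mainClass>org.dhappy.test.NeoTraverse</mainClass>
                  </configuration>
                </execution>
              </executions>
            </plugin>
          </plugins>
        </build>
      </profile>
    </profiles>

With that in place, mvn test -Pneotest plays the role the Makefile's neotest target used to.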

    Read the article

< Previous Page | 35 36 37 38 39 40 41 42 43 44 45 46  | Next Page >