Search Results

Search found 366 results on 15 pages for 'collecting'.

Page 11 of 15

  • How To Clear An Alert - Part 2

    - by werner.de.gruyter
    There were some interesting comments and remarks on the original posting, so I decided to do a follow-up and address some of the issues that got raised...

    Handling Metric Errors

    First of all, there is a significant difference between an 'error' and an 'alert'. An 'alert' is the violation of a condition (a threshold) specified for a given metric. That means that the Agent is collecting and gathering the data for the metric, but there is a situation that requires the attention of an administrator. An 'error', on the other hand, is a failure to collect metric data: the Agent throws the error because it cannot determine the value for the metric. Whereas the 'alert' guarantees continuity of the metric data, an 'error' signals a big unknown. And the unknown aspect of all this is what makes an error a lot more serious than a regular alert: if you don't know what the current state of affairs is, there could be some serious issues brewing that nobody is aware of...

    The life-cycle of a Metric Error

    Clearing a metric error is pretty much the same workflow as clearing a metric 'alert':

    - The Agent signals the error after it fails to execute the metric.
    - The error is uploaded to the OMS/repository, where it becomes visible in the Console.
    - The error remains active until the Agent is able to execute the metric successfully. Even though the metric is still scheduled and executed on a regular basis, the error will remain outstanding as long as the Agent is not capable of executing the metric correctly.

    Knowing this, the way to fix a metric error should be obvious: take the 'problem' away, and as soon as the metric is executed again (based on the frequency of the metric), the error will go away. The same tricks used to clear alerts can be used here too:

    - Wait for the next scheduled execution. For those metrics that are executed regularly (like every 15 minutes or so), it's just a matter of waiting those minutes to see the update.
    - The 'Reevaluate Alert' button can be used to force a re-execution of the metric. In case a metric is executed only once a day, this is a better way to make sure that the underlying problem has been solved. If it has been, the metric error will be removed, and the regular data points will be uploaded to the repository.
    - And just in case you have to 'force' the issue a little: if you disable and re-enable a metric, it will get re-scheduled. That means a new metric execution, and an update of the (hopefully) fixed problem.

    Database server-generated alerts and problem checkers

    There are various ways the Agent can collect metric data: via a script or a SQL statement, reading a log file, getting a value from an SNMP OID, listening for SNMP traps, or via the DBMS_SERVER_ALERT mechanism of an Oracle database. For those alerts that are generated by the database (like the tablespace metrics for 10g and above databases), the Agent just 'waits' for the database to report any new findings. If the Agent has lost the current state of the server-side metrics (due to an incomplete recovery after a disaster, or after improper use of the 'emctl clearstate' command), the Agent might still be aware of an alert that the database no longer has (or vice versa). The same goes for 'problem checker' alerts: those metrics that only report data if there is a problem (like the 'invalid objects' metric) will also have a problem if the Agent state has been tampered with (again, an incomplete recovery and improper use of 'emctl clearstate' are the two main causes of this).

    The best way to deal with these kinds of mismatches is to simply disable and re-enable the metric: the disabling will clear the state of the metric, and the re-enabling will force a re-execution of the metric, so the new and updated results can get uploaded to the repository. Starting with 10gR5, the Agent performs additional checks and verifications after each restart of the Agent and/or each state change of the database (shutdown/startup, or failover in the case of Data Guard) to catch these kinds of mismatches.
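
    For completeness, here is roughly what the agent-side commands referred to above look like when run from the Agent home on the monitored host. This is a hedged sketch only: exact output and options vary by Agent release, and 'emctl clearstate' is precisely the command whose improper use causes the mismatches described above, so use it only when directed by Support.

        # Check that the Agent is up and when it last uploaded
        emctl status agent

        # Force an immediate upload of pending metric data and errors to the OMS
        emctl upload agent

        # Clear the Agent's severity state - last resort, and only under guidance
        emctl clearstate agent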

    Read the article

  • Adding Fake Build Information in TFS 2010

    - by Jakob Ehn
    We have been using TFS 2010 Build for distributing a build in parallel on several agents, but where the actual compilation is done by a bunch of external tools and compilers, i.e. no MSBuild involved. We are using the ParallelTemplate.xaml template that Jim Lamb blogged about previously, which distributes each configuration to a different agent. We developed custom activities for running these external compilers and collecting the information and errors by reading standard out/error and pushing it back to the build log. But since we aren't using MSBuild, we don't get the nice configuration summary section on the build summary page that we are used to. We would like to show the result of each configuration with any errors/warnings as usual, together with a link to the log file. TFS 2010 API to the rescue!

    What we need to do is add information to the InformationNode structure that is associated with every TFS build. The log that you normally see in the Log view is built up as a tree structure of IBuildInformationNode objects. This structure can be accessed by using the InformationNodeConverters class. This class also contains helper methods for creating BuildProjectNodes, which contain the information about each project that was built, for example the configuration, the number of errors and warnings, and a link to the log file. Here is a code snippet that first creates a "fake" build from scratch and then adds two BuildProjectNodes, one for Debug|x86 and one for Release|x86, with some result information:

        TfsTeamProjectCollection collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri("http://lt-jakob2010:8080/tfs"));
        IBuildServer buildServer = collection.GetService<IBuildServer>();
        var buildDef = buildServer.GetBuildDefinition("TeamProject", "BuildDefinition");

        // Create fake build with random build number
        var detail = buildDef.CreateManualBuild(new Random().Next().ToString());

        // Create Debug|x86 project summary
        IBuildProjectNode buildProjectNode = detail.Information.AddBuildProjectNode(DateTime.Now, "Debug", "MySolution.sln", "x86", "$/project/MySolution.sln", DateTime.Now, "Default");
        buildProjectNode.CompilationErrors = 1;
        buildProjectNode.CompilationWarnings = 1;
        buildProjectNode.Node.Children.AddBuildError("Compilation", "File1.cs", 12, 5, "", "Syntax error", DateTime.Now);
        buildProjectNode.Node.Children.AddBuildWarning("File2.cs", 3, 1, "", "Some warning", DateTime.Now, "Compilation");
        buildProjectNode.Node.Children.AddExternalLink("Log File", new Uri(@"\\server\share\logfiledebug.txt"));
        buildProjectNode.Save();

        // Create Release|x86 project summary
        buildProjectNode = detail.Information.AddBuildProjectNode(DateTime.Now, "Release", "MySolution.sln", "x86", "$/project/MySolution.sln", DateTime.Now, "Default");
        buildProjectNode.CompilationErrors = 0;
        buildProjectNode.CompilationWarnings = 0;
        buildProjectNode.Node.Children.AddExternalLink("Log File", new Uri(@"\\server\share\logfilerelease.txt"));
        buildProjectNode.Save();

        detail.Information.Save();
        detail.FinalizeStatus(BuildStatus.Failed);

    When running this code, it will create a build that looks like this: As you can see, it created two configurations with error and warning information and a link to a log file, just like a regular MSBuild would have done. This is very useful when using TFS 2010 Build in heterogeneous environments. It would also be possible to do this when running compilations completely outside TFS Build, and then push the results of the build into TFS for easy access. You can push all the information, including the compilation summary, drop location, test results, etc., using the API.

    Read the article

  • Application Performance: The Best of the Web

    - by Michaela Murray
    Wisdom A deep understanding and realization […] resulting in the ability to apply perceptions, judgements and actions. It is also the comprehension of what is true coupled with optimum judgment as to action. - Wikipedia We’re writing a book for ASP.NET developers, and we want you to be a part of it. We know that there’s a huge amount of web developer wisdom that never gets shared, and we want to find those golden nuggets of knowledge and experience, and make sure everyone can learn from them. Right now, we want to find out about your top tips, hard-won lessons, and sage advice for avoiding, finding, and fixing application performance problems. If you work with .NET and SQL, even better – a lot of application performance relies on the interaction with the database, so we want to hear from you! “How Do You Want Me To Be Involved?” Right! Details! We want you, our most excellent readers, to email us with the Best Advice you would give to other developers for getting the best performance out of their applications. It doesn’t matter if your advice is for newbies or veterans, .NET or SQL – so long as it’s about application performance, we want to hear from you. (And if you think that there’s developer wisdom out there that “everyone knows”, a) I’m willing to bet you could find someone who doesn’t know about it, and b) it probably bears repeating anyway!) “I’m Interested. What Can You Do For Me?” Excellent question. For starters, there’s a chance to win a Microsoft Surface (the tablet, not the table-top). Once all the ASP.NET Wisdom has been collected, tallied, and labelled, it will then be weighed and measured by a team of expert judges (whose identities are still a closely-guarded secret).  The top tip in both SQL & .NET categories will each win their author their very own MS Surface. But that’s not all! We can also give you… immortality! More details? Ok. We’ll be collecting all of the tips sent in by our readers (and we can’t wait to learn from you all,) and with the help of our Simple-Talk editors, we will publish and distribute your combined and documented knowledge as a free, community-created, professionally typeset eBook. You will naturally be credited by name / pseudonym / twitter handle / GitHub username / StackOverflow profile / Whatever, as the clearly ingenious author of hot performance tips. The Not-Very-Fine Print Here’s the breakdown: We want to bring together the best application performance knowledge from ASP.NET developers. Closing date for submissions will be 9am GMT, December 4th. Submissions should be made by email – [email protected] Submissions will be judged by a panel of expert judges (who will be revealed soon). The top submission in both the SQL & .NET categories will each win a Microsoft Surface. ALL the tips which make it through the judging process will be polished by Simple-Talk editors, and turned into a professionally typeset eBook, which will be freely available, and promoted alongside the ANTS Performance Profiler tool. Anyone whose entry makes it into the book will be clearly and profusely credited in the method of their choice (or can remain anonymous.) The really REALLY short version Share what you know about ASP.NET application performance for a chance to win a Microsoft Surface, and then get your name credited in a slick eBook with top-notch production values. For more details, see above. We can’t wait to learn from you!

    Read the article

  • T-SQL Tuesday - the swag

    - by Rob Farley
    This month’s T-SQL Tuesday is hosted by Kendal van Dyke (@SQLDBA), and is on the topic of swag. He asks about the best SQL Server swag that we’ve ever received from a conference. I can’t say I ever focus on getting the swag at conferences, as I see some people doing. I know there are plenty of people that get around all the sponsors as soon as they’ve arrived, collecting whatever goodies they can, sometimes as token gifts for those at home, sometimes as giveaways for the user groups they attend. I remember a few years ago at my first PASS Summit, the SQLCAT team gave me a large pile of leftover SQL Server swag to give away to my user group – piles of branded things to stop your phone sliding off your car dashboard, and other things. The user group members thought it was great, and over the course of a few months, happily cleared me out of it all.

    I tend to consider swag to be something that you haven’t earned except by being at a conference, and there was no winning associated with it – it was simply a giveaway item at a sponsor booth. That means I don’t include the HP Mini laptop that was given away at TechEd Australia a few years ago to every attendee, or the SQL Server bag and Camelbak bottle that I was given as a thank-you for writing a guest blog post (which I use as my regular laptop bag and water bottle for work). I don’t even include the copy of Midtown Madness that I got as a door prize at my very first TechEd event in 1999 (that was a really good game, and even meant that when I went to Chicago last year, I felt a strange familiarity about the place). I don’t want to include shirts in the mix either. I was given a nice SQL Server shirt about five years ago at TechEd Australia. It’s a business shirt (buttons, cuffs, pocket on the chest), black with the SQL Server logo on it. It was such a nice shirt that I commented about it to the Product Marketing Manager for Australia (Christine, at the time), who unexpectedly arranged for me to get another one. That was certainly an improvement on the tent I was given at one of the MVP conferences I attended.

    So when I consider these ‘rules’, two pieces of swag come to mind, and I think both were at PASS Summits (although I can’t be sure). One was a hand-warmer from HP, one of the “crystallisation-type” ones, which proved extremely popular when I got home, until one day when it didn’t survive being recharged – not overly SQL related, but still it was good swag. The other was an umbrella, from expressor, which was from the PASS Summit in 2010, my first PASS Summit. I remember it well – Blythe Morrow (now Gietz) (@blythemorrow) was working the booth, having stopped working for PASS some time before, but she’d been on my list of people to meet, as I’d had plenty of contact with her while she’d worked at PASS, my being a chapter leader and general volunteer. There had been an expressor dinner on one of the first evenings, which I’d been asked to be at, which is when I’d met lots of SQL people in person for the first time, including Ted Krueger (@onpnt), Jessica Moss (@jessicamoss) and Blythe. Anyway, at some point the next day I swung by their booth to say hello and thank them for the dinner, and Blythe said “Oh, we have the best swag – here!” and handed me an umbrella. And she was right. It’s excellent. @rob_farley

    Read the article

  • Oredev 2011 Trip Report

    - by arungupta
    Oredev held its seventh annual conference in the city of Malmo, Sweden last week. The name "Oredev" is a nod to the Oresund bridge that connects Malmo with Copenhagen. There were about 1000 attendees, with speakers from all over the world. The first two days were hands-on workshops and the next three days were sessions. There were different tracks such as Java, Windows 8, .NET, Smart Phones, Architecture, Collaboration, and Entrepreneurship. And then there was Xtra(ck), which had interesting sessions not directly related to technology.

    I gave two slide-free talks in the Java track. The first one showed how to build an end-to-end Java EE 6 application using NetBeans and GlassFish. The complete instructions to build the application are explained in detail here. This 3-tier application used Java Persistence API, Enterprise JavaBeans, Servlet, Contexts and Dependency Injection, JavaServer Faces, and Java API for RESTful Web Services. The source code built during the application can be downloaded here (LINK TBD). The second session, slide-free again, showed how to take a Java EE 6 application into production using a GlassFish cluster. It explained how to: create a 2-instance GlassFish cluster; front-end it with a Web server and a load balancer; demonstrate session replication and failover; and monitor the application using JavaScript. The complete instructions for this session are available here.

    Oredev has an interesting way of collecting attendee feedback. The attendees drop a green, yellow, or red card in a bucket as they walk out of the session. Not everybody votes, but most do. Other than the instantaneous feedback provided on Twitter, this mechanism provides a more coarse-grained feedback loop as well. The first talk had about 67 attendees (with 23 green and 7 yellow) and the second one had 22 (11 green and 11 yellow).

    The speakers' dinner is a highlight of the conference. It is arranged in the historic city hall, and the mayor welcomed all the speakers. As you can see in the pictures, it is a very royal building with lots of history behind it. Fortunately the dinner was a buffet with a much better variety, unlike last year when only black soup and geese were served, which was quite cultural BTW ;-) The sauna at 85F, skinny dipping in the 35F ocean, and alternating between them at Kallbadhus is always very Swedish. I also spent a short evening at a friend's house socializing with other speakers/attendees, drinking Glogg, and eating Pepperkakor. The welcome packet at the hotel also included cinnamon rolls, recommended to be enjoyed with cold milk, for a little more taste of Swedish culture.

    Something different at this conference was how artists from Image Think were visually capturing all the keynote speakers using images on whiteboards. Here are the images captured for Alexis Ohanian (Reddit co-founder and now running Hipmunk): Unfortunately I could not spend much time engaging with other speakers or attendees because I was busy preparing new hands-on lab material. But I was able to spend some time with Matthew McCullough, Michael Tiberg, Magnus Martensson, Mattias Karlsson, Corey Haines, Patrick Kua, Charles Nutter, Tushara, Pradeep, Shmuel, and several other folks. Here are a few pictures captured from the event: And the complete album here: Thank you Matthias, Emily, and Kathy for putting up a great show and giving me an opportunity to speak at Oredev. I hope to be back next year with a more vibrant representation of Java - the language and the ecosystem!

    Read the article

  • Resolve SRs Faster Using RDA - Find the Right Profile

    - by Daniel Mortimer
    Introduction

    Remote Diagnostic Agent (RDA) is an excellent command-line data collection tool that can aid troubleshooting / problem solving. The tool covers the majority of Oracle's vast product range, and its data collection capability is comprehensive. RDA collects data about:

    - the operating system and environment, including environment variables, kernel settings, network, o/s performance, o/s patches, and much more
    - the Oracle products installed, including patches, logs and debug metrics, configuration, and much more

    In effect, RDA can obtain a snapshot of an Oracle product and its environment. Oracle Support encourages the use of RDA because it greatly reduces service request resolution time by minimizing the number of requests from Oracle Support for more information. RDA is designed to be as unobtrusive as possible; it does not modify systems in any way. It collects useful data for Oracle Support only, and a security filter is provided if required.

    Find and Use the Right RDA Profile

    One problem with any tool or utility which covers a large range of products is knowing how to target it against only the products you wish to troubleshoot. RDA does not have a GUI, nor does it have an intelligent mechanism for detecting and automatically collecting data only for those Oracle products installed. Instead, you have to tell RDA what to do. There is a mind-bogglingly large number of RDA data collection modules which you can configure RDA to use. It is easier, however, to set up RDA to use a "Profile". A profile consists of a list of data collection modules and predefined settings. As such, profiles can be used to diagnose a problem with a particular product or combination of products.

    How to run RDA with a profile? (<rda> represents the command you selected to run RDA, for example rda.pl, rda.cmd, rda.sh, or perl rda.pl.)

    1. Use the embedded spreadsheet to find the RDA profile which is appropriate for your problem / chosen Oracle Fusion Middleware products.
    2. Use the following command to perform the setup: <rda> -S -p <profile_name>
    3. Run the data collection: <rda>

    If you want to perform setup and run in one go, then use a command such as the following: <rda> -vnSCRP -p <profile name>

    For more information, refer to: Remote Diagnostic Agent (RDA) 4 - Profile Manual Pages [ID 391983.1]

    Additional Hints / Tips:

    1. Be careful! Profile names are case sensitive.
    2. When profiles are not used, RDA considers all existing modules by default. For example, if you have downloaded RDA for the first time and run the command <rda> -S you will see prompts for every RDA collection module, many of which will be of no interest to you. Also, you may, in your haste to work through all the questions, forget to say "Yes" to the collection of data that is pertinent to your particular problem or product. Profiles avoid such tedium and help ensure the right data is collected at the first time of asking.

    Read the article

  • Guest blog: A Closer Look at Oracle Price Analytics by Will Hutchinson

    - by Takin Babaei
    Overview: Price Analytics helps companies understand how much of each sale goes into discounts, special terms, and allowances. This visibility lets sales management see the panoply of discounts and start seeing whether each discount drives desired behavior. Price Analytics monitors parts of the quote-to-order process, tracking quotes - including the whole price waterfall - and seeing which result in orders. The "price waterfall" shows all discounts between list price and "pocket price". Pocket price is the final price the vendor puts in its pocket after all discounts are taken.

    The value proposition: Based on benchmarks from leading consultancies, and from companies I have talked to that have studied the effects of discounting and started enforcing what many of them call "discount discipline", companies find they can increase the pocket price by 0.8-3%. Yes, in today's zero or negative inflation environment, one can, through better monitoring of discounts, collect what amounts to a price rise of a few percent. We are not talking about selling more product, merely about collecting a higher pocket price without decreasing the quantities sold. Higher prices fall straight to the bottom line. The best reference I have ever found for understanding this phenomenon is an article from the September-October 1992 issue of Harvard Business Review called "Managing Price, Gaining Profit" by Michael Marn and Robert Rosiello of McKinsey & Co. They describe the outsized impact price management has on bottom-line performance compared to selling more product or cutting variable or fixed costs. Price Analytics manages what Marn and Rosiello call "transaction pricing", namely the prices of a given transaction, as opposed to what is on the price list or pricing according to the value received. They make the point that if the vendor does not manage the price waterfall, customers will, to the vendor's detriment. The article also discusses the finding that, in the companies studied, there was no correlation between discount levels and any indication of customer value. I urge you to read this article.

    What Price Analytics does: Price Analytics looks at the quotes the company issues and tracks them until either the quote is accepted or rejected, or it expires. There are prebuilt adapters for EBS and Siebel as well as a universal adapter. The target audience includes pricing analysts, product managers, sales managers, and VPs of sales, marketing, finance, and sales operations. It tracks how effective discounts have been, the win rate on quotes, how well pricing policies have been followed, customer and product profitability, and customer performance against commitments. It has the concepts of the price waterfall, the deal lifecycle, and price segmentation built into the product. These help product and sales managers understand their pricing and its effectiveness in driving revenue and profit. They also help people understand how terms are adhered to during negotiations, what segments exist, and how well those segments are adhered to. To help your company increase its profits and revenues, I urge you to look at this product. If you have questions, please contact me.

    Will Hutchinson
    Master Principal Sales Consultant – Analytics, Oracle Corp.

    Will Hutchinson has worked in business intelligence and data warehousing for over 25 years. He started building data warehouses in 1986 at Metaphor, advancing to running Metaphor UK's sales consulting area. He also worked in A.T. Kearney's business intelligence practice for over four years, running projects and providing training to new consultants in the IT practice. He also worked at Informatica and then Siebel, before coming to Oracle with the Siebel acquisition. He became Master Principal Sales Consultant in 2009. He has worked on developing ROI and TCO models for business intelligence for over ten years. Mr. Hutchinson has a BS degree in Chemical Engineering from Princeton University and an MBA in Finance from the University of Chicago.

    Read the article

  • Updated SOA Documents now available in ITSO Reference Library

    - by Bob Rhubart
    Nine documents within the IT Strategies from Oracle (ITSO) reference library have recently been updated. (Access to the ITSO collection is free to registered Oracle.com members -- and that membership is free.) All nine documents fall within the Service Oriented Architecture section of the ITSO collection, and cover the following topics:

    SOA Practitioner Guides

    Creating an SOA Roadmap (PDF, 54 pages, published: February 2012) - The secret to successful SOA is to build a roadmap that can be successfully executed. SOA offers an opportunity to adopt an iterative technique to deliver solutions incrementally. This document offers a structured, iterative methodology to help you stay focused on business results, mitigate technology and organizational risk, and deliver successful SOA projects.

    A Framework for SOA Governance (PDF, 58 pages, published: February 2012) - Successful SOA requires a strong governance strategy that designs in measurement, management, and enforcement procedures. Enterprise SOA adoption introduces new assets, processes, technologies, standards, roles, etc., which require application of appropriate governance policies and procedures. This document offers a framework for defining and building a proper SOA governance model.

    Determining ROI of SOA through Reuse (PDF, 28 pages, published: February 2012) - SOA offers the opportunity to save millions of dollars annually through reuse. Sharing common services intuitively reduces workload, increases developer productivity, and decreases maintenance costs. This document provides an approach for estimating the reuse value of the various software assets contained in a typical portfolio.

    Identifying and Discovering Services (PDF, 64 pages, published: March 2012) - What services should we build? How can we promote the reuse of existing services? A sound approach to answering these questions is a primary measure of the success of an SOA initiative. This document describes a pragmatic approach for collecting the necessary information for identifying proper services and facilitating service reuse.

    Software Engineering in an SOA Environment (PDF, 66 pages, published: March 2012) - Traditional software delivery methods are too narrowly focused and need to be adjusted to enable SOA. This document describes an engineering approach for delivering projects within an SOA environment. It identifies the unique software engineering challenges faced by enterprises adopting SOA and provides a framework to remove the hurdles and improve the efficiency of the SOA initiative.

    SOA Reference Architectures

    SOA Foundation (PDF, 70 pages, published: February 2012) - This document describes the key tenets for SOA design, development, and execution environments. Topics include: service definition, service layering, service types, the service model, composite applications, invocation patterns, and standards.

    SOA Infrastructure (PDF, 86 pages, published: February 2012) - Properly architected, SOA provides a robust and manageable infrastructure that enables faster solution delivery. This document describes the role of infrastructure and its capabilities. Topics include: logical architecture, deployment views, and Oracle product mapping.

    SOA White Papers and Data Sheets

    Oracle's Approach to SOA (white paper) (PDF, 14 pages, published: February 2012) - Oracle has developed a pragmatic, holistic approach, based on years of experience with numerous companies, to help customers successfully adopt SOA and realize measurable business benefits. This executive datasheet and whitepaper describe Oracle's proven approach to SOA.

    Oracle's Approach to SOA (data sheet) (PDF, 3 pages, published: March 2012) - SOA adoption is complex and success is far from assured. This is why Oracle has developed a pragmatic, holistic approach, based on years of experience with numerous companies, to help customers successfully adopt SOA and realize measurable business benefits. This data sheet provides an executive overview of Oracle's proven approach to SOA.

    Read the article

  • New Analytic settings for the new code

    - by Steve Tunstall
    If you have upgraded to the new 2011.1.3.0 code, you may find some very useful settings for the Analytics. If you didn't already know, the analytic datasets have the potential to fill up your OS hard drives. The more datasets you use and create, the faster this can happen. Since they take a measurement every second, forever, some of these metrics can grow to multiple GB in size in a matter of weeks. The traditional 'fix' was that you had to go into Analytics -> Datasets about once a month and clean up the largest datasets. You did this by deleting them. Ouch. Now you lost all of that historical data that you might have wanted to check out many months from now. Or, you had to export each metric individually to a CSV file first. Not very easy or fun. You could also suspend a dataset, and have it not collect data at all. Well, that fixed the problem, didn't it? Of course, you now had no data to go look at. Hmmmm....

    All of this is no longer a concern. Check out the new Settings tab under Analytics... Now, I can tell the ZFSSA to keep every second of data for, say, 2 weeks, and then average those 60 seconds of each minute into a single 'minute' value. I can go even further and ask it to average those 60 minutes of data into a single 'hour' value. This allows me to effectively shrink my older datasets by a factor of up to 3,600. Very cool. I can now allow my datasets to go on forever, and really never have to worry about them filling up my OS drives.

    That's great going forward, but what about those huge datasets you already have? No problem. Another new feature in 2011.1.3.0 is the ability to shrink the older datasets in the same way. Check this out. I have here a dataset called "Disk: I/O opps per second" that is about 6.32M on disk. (You need not worry so much about the "In Core" value, as that is in RAM, and it fluctuates all the time. Once you stop viewing a particular metric, you will see that shrink over time; just relax.) When one clicks on the trash can icon to the right of the dataset, it used to delete the whole thing, and you would have to re-create it from scratch to get the data collecting again. Now, however, it gives you this prompt: As you can see, this allows you to once again shrink the dataset by averaging the second data into minutes or hours. Here is my new dataset size after I do this. So it shrank from 6.32MB down to 2.87MB, but I can still see my metrics going back to the time I began the dataset.

    Now, you do understand that once you do this, as you look back in time to the minute or hour data metrics, you are going to see much larger time values, right? You will need to decide what granularity you can live with, and for how long. Check this out. Here is my Disk: Percent utilized from 5-21-2012 2:42 pm to 4:22 pm: After I went through the delete process to change everything older than 1 week to "Minutes", the same date and time looks like this: Just understand what this will do and how you want to use it. Right now, I'm thinking of keeping the last 6 weeks of data as "seconds", then the last 3 months as "Minutes", and then "Hours" forever after that. I'll check back in six months and see how the sizes look. Steve

    Read the article

  • Conflict Minerals - Design to Compliance

    - by C. Chadwick
    Dr. Christina Schröder - Principal PLM Consultant, Enterprise PLM Solutions EMEA

    What does the Conflict Minerals regulation mean?

    Conflict Minerals has recently become a new buzzword in the manufacturing industry, particularly in electronics and medical devices. Known as "Dodd-Frank Section 1502", this regulation requires SEC-listed companies to declare the origin of certain minerals by 2014. The intention is to reduce the use of tantalum, tungsten, tin, and gold which originate from mines in the Democratic Republic of Congo (DRC) and adjoining countries that are controlled by violent armed militias abusing human rights. Manufacturers now request information from their suppliers to see if their raw materials are sourced from this region and which smelters are used to extract the metals from the minerals. A standardized questionnaire has been developed for this purpose (download and further information). Soon, even companies which are not directly affected by the Conflict Minerals legislation will have to collect and maintain this information, since their customers will request the data from their suppliers. Furthermore, it is expected that public opinion and consumer interests will force manufacturers to avoid the use of metals with questionable origin.

    Impact for existing products

    Several departments are involved in the process of collecting data and providing Conflict Minerals compliance information. For already marketed products, purchasing typically requests Conflict Minerals declarations from the suppliers. In order to address requests from customers, technical operations or product management are usually responsible for keeping track of all parts, raw materials and their suppliers so that the required information can be provided. For complex BOMs, it is very tedious to maintain complete, accurate, up-to-date, and traceable data. Any product change or new supplier can, in addition to all other implications, have an effect on the Conflict Minerals compliance status.

    Influence on product development

    It makes sense to consider compliance early in the planning and design of new products. Companies should evaluate which metals are needed or contained in supplier parts and whether these could originate from problematic sources. The answer influences the cost and risk analysis during development. If it is known early on that a part could be non-compliant with respect to Conflict Minerals, alternatives can be evaluated and costly changes at a later stage can be avoided.

    Integrated compliance management

    Ideally, compliance data for Conflict Minerals, but also for other regulations like REACH and RoHS, should be managed in an integrated supply chain system. The compliance status is directly visible across the entire BOM at any part level and for the finished product. If data is missing, a request to the supplier can be triggered right away without having to switch to another system. The entire process, from identification of the relevant parts, requesting information, handling responses, and data entry, to compliance calculation is fully covered end-to-end while being transparent for all stakeholders.

    Agile PLM Product Governance and Compliance (PG&C)

    The PG&C module extends Agile PLM with exactly this integrated functionality. As with the entire Agile product suite, PG&C can be configured according to customer requirements: data fields, attributes, workflows, routing, notifications, permissions, etc. can be quickly and easily tailored to a customer's needs. Optionally, external databases can be interfaced to query commercially available sources of Conflict Minerals declarations, which obviates the need for a separate supplier request in many cases. Suppliers can access the system directly for data entry through a special portal. The responses to the standard EICC-GeSI questionnaire can be imported by the supplier or internally. Manual data entry is also supported. A set of compliance-specific dashboards and reports complements the functionality.

    Conclusion

    The increasing number of product compliance regulations, of which Conflict Minerals is just one example, requires companies to implement efficient data and process management in this area. Consumer awareness of these matters is increasing as well, so an integrated system from development to production also provides a competitive advantage. Follow this link to learn more about Agile's PG&C solution.

    Read the article

  • Good DBAs Do Baselines

    - by Louis Davidson
    One morning, you wake up and feel funny. You can’t quite put your finger on it, but something isn’t quite right. What now? Unless you happen to be a hypochondriac, you likely drag yourself out of bed, get on with the day and gather more “evidence”. You check your symptoms over the next few days; do you feel the same, better, worse? If better, then great: it was some temporary issue, perhaps caused by an allergic reaction to some suspiciously spicy chicken. If the same or worse, then you go to the doctor for some health advice, but armed with some data to share, and having ruled out certain possible causes that are fixed with a bit of rest and perhaps an antacid. Whether you realize it or not, in comparing how you feel one day to the next, you have taken baseline measurements.

    In much the same way, a DBA uses baselines to gauge the health of their database servers. Of course, while SQL Server is very willing to share data regarding its health and activities, it has almost no idea of the difference between good and bad. Over time, experienced DBAs develop “mental” baselines with which they can gauge the health of their servers almost as easily as their own body. They accumulate knowledge of the daily, natural state of each part of their database system, and so know instinctively when one of their databases “feels funny”. Equally, they know when an “issue” is just a passing tremor. They see their SQL Server with all of its four CPU cores running close to 100% and don’t panic anymore. Why? It’s 5PM, and every day the same thing occurs when the end-of-day reports, which are very CPU intensive, are running. Equally, they know to respond in earnest when it is the first time they have heard about an issue, even if it has been happening every day.

    Nevertheless, no DBA can retain mental baselines for every characteristic of their systems, so we need to collect physical baselines too. In my experience, surprisingly few DBAs do this very well. Part of the problem is that SQL Server provides a lot of instrumentation. If you look, you will find an almost overwhelming amount of data regarding user activity on your SQL Server instances, and use and abuse of the available CPU, I/O and memory. It seems like a huge task even to work out which data you need to collect, let alone start collecting it on a regular basis, managing its storage over time, and performing detailed comparative analysis.

    Without baselines, however, it is very difficult to pinpoint what ails a server just by looking at a single snapshot of the data, or to spot retrospectively what caused the problem by examining aggregated data for the server, collected over many months. It isn’t as hard as you think to get started. You’ve probably already established some troubleshooting queries of the type SELECT Value FROM SomeSystemTableOrView. Capturing a set of baseline values for such a query can be as easy as changing it as follows:

        INSERT INTO BaseLine.SomeSystemTable (value, captureTime)
        SELECT Value, SYSDATETIME()
        FROM SomeSystemTableOrView;

    Of course, there are monitoring tools that will collect and manage this baseline data for you, automatically, and allow you to perform comparisons of metrics over different periods. However, to get yourself started and to prove to yourself (or perhaps the person who writes the checks for tools) the value of baselines, stick something similar to the above query into an agent job, running every hour or so, and you are on your way with no excuses!

    Then, the next time you investigate a slow server, and see x open transactions, y users logged in, and z rows added per hour in the Orders table, compare to your baselines and see immediately what, if anything, has changed!
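
    As a concrete starting point, here is a minimal sketch of that pattern applied to wait statistics; the BaseLine schema and table are made up for illustration, so adjust names to taste. Put the CREATE statements in your utility database once, and the INSERT in an hourly agent job.

        -- One-time setup: a place to keep the baseline (names are illustrative only)
        CREATE SCHEMA BaseLine;
        GO
        CREATE TABLE BaseLine.WaitStats
        (
            wait_type           nvarchar(60) NOT NULL,
            waiting_tasks_count bigint       NOT NULL,
            wait_time_ms        bigint       NOT NULL,
            signal_wait_time_ms bigint       NOT NULL,
            capture_time        datetime2    NOT NULL
        );
        GO
        -- The hourly capture: the same SELECT you already use for troubleshooting, plus a timestamp
        INSERT INTO BaseLine.WaitStats
            (wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms, capture_time)
        SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms, SYSDATETIME()
        FROM sys.dm_os_wait_stats;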

    Read the article

  • Validation and authorization in layered architecture

    - by SonOfPirate
    I know you are thinking (or maybe yelling), "not another question asking where validation belongs in a layered architecture?!?" Well, yes, but hopefully this will be a little bit of a different take on the subject. I am a firm believer that validation takes many forms, is context-based and varies at each level of the architecture. That is the basis for the post - helping to identify what type of validation should be performed in each layer. In addition, a question that often comes up is where authorization checks belong.

    The example scenario comes from an application for a catering business. Periodically during the day, a driver may turn in to the office any excess cash they've accumulated while taking the truck from site to site. The application allows a user to record the 'cash drop' by collecting the driver's ID, and the amount. Here's some skeleton code to illustrate the layers involved:

        public class CashDropApi   // This is in the Service Facade Layer
        {
            [WebInvoke(Method = "POST")]
            public void AddCashDrop(NewCashDropContract contract)
            {
                // 1
                Service.AddCashDrop(contract.Amount, contract.DriverId);
            }
        }

        public class CashDropService   // This is the Application Service in the Domain Layer
        {
            public void AddCashDrop(Decimal amount, Int32 driverId)
            {
                // 2
                CommandBus.Send(new AddCashDropCommand(amount, driverId));
            }
        }

        internal class AddCashDropCommand   // This is a command object in the Domain Layer
        {
            public AddCashDropCommand(Decimal amount, Int32 driverId)
            {
                // 3
                Amount = amount;
                DriverId = driverId;
            }

            public Decimal Amount { get; private set; }
            public Int32 DriverId { get; private set; }
        }

        internal class AddCashDropCommandHandler : IHandle<AddCashDropCommand>
        {
            internal ICashDropFactory Factory { get; set; }       // Set by IoC container
            internal ICashDropRepository CashDrops { get; set; }  // Set by IoC container
            internal IEmployeeRepository Employees { get; set; }  // Set by IoC container

            public void Handle(AddCashDropCommand command)
            {
                // 4
                var driver = Employees.GetById(command.DriverId);
                // 5
                var authorizedBy = CurrentUser as Employee;
                // 6
                var cashDrop = Factory.CreateCashDrop(command.Amount, driver, authorizedBy);
                // 7
                CashDrops.Add(cashDrop);
            }
        }

        public class CashDropFactory
        {
            public CashDrop CreateCashDrop(Decimal amount, Employee driver, Employee authorizedBy)
            {
                // 8
                return new CashDrop(amount, driver, authorizedBy, DateTime.Now);
            }
        }

        public class CashDrop   // The domain object (entity)
        {
            public CashDrop(Decimal amount, Employee driver, Employee authorizedBy, DateTime at)
            {
                // 9
                ...
            }
        }

        public class CashDropRepository   // The implementation is in the Data Access Layer
        {
            public void Add(CashDrop item)
            {
                // 10
                ...
            }
        }

    I've indicated 10 locations where I've seen validation checks placed in code. My question is what checks you would, if any, be performing at each, given the following business rules (along with standard checks for length, range, format, type, etc.):

    1. The amount of the cash drop must be greater than zero.
    2. The cash drop must have a valid Driver.
    3. The current user must be authorized to add cash drops (current user is not the driver).

    Please share your thoughts, how you have or would approach this scenario and the reasons for your choices.
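
    Purely by way of illustration (not an endorsement of any particular layer), here is what the simplest of these checks - rule 1 plus basic argument validation - might look like if it were enforced at location 3, the command constructor; where rules 2 and 3 belong is exactly what is being asked above:

        // Hypothetical variant of the command with rule 1 enforced in the constructor (location 3)
        internal class AddCashDropCommand
        {
            public AddCashDropCommand(Decimal amount, Int32 driverId)
            {
                if (amount <= 0)
                    throw new ArgumentOutOfRangeException("amount",
                        "The amount of the cash drop must be greater than zero.");
                if (driverId <= 0)
                    throw new ArgumentOutOfRangeException("driverId",
                        "A valid driver id is required.");

                Amount = amount;
                DriverId = driverId;
            }

            public Decimal Amount { get; private set; }
            public Int32 DriverId { get; private set; }
        }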

    Read the article

  • I thought the new AUTO_SAMPLE_SIZE in Oracle Database 11g looked at all the rows in a table so why do I see a very small sample size on some tables?

    - by Maria Colgan
    I recently got asked this question and thought it was worth a quick blog post to explain in a little more detail what is going on with the new AUTO_SAMPLE_SIZE in Oracle Database 11g and what you should expect to see in the dictionary views. Let’s take the SH.CUSTOMERS table as an example. There are 55,500 rows in the SH.CUSTOMERS table. If we gather statistics on SH.CUSTOMERS using the new AUTO_SAMPLE_SIZE, but without collecting histograms, we can check what sample size was used by looking in the USER_TABLES and USER_TAB_COL_STATISTICS dictionary views.

    The sample size shown in USER_TABLES is 55,500 rows, or the entire table, as expected. In USER_TAB_COL_STATISTICS most columns show 55,500 rows as the sample size, except for four columns (CUST_SRC_ID, CUST_EFF_TO, CUST_MARITAL_STATUS, CUST_INCOME_LEVEL). The CUST_SRC_ID and CUST_EFF_TO columns have no sample size listed because there are only NULL values in these columns and the statistics gathering procedure skips NULL values. The CUST_MARITAL_STATUS (38,072) and CUST_INCOME_LEVEL (55,459) columns show less than 55,500 rows as their sample size because of the presence of NULL values in these columns. In the SH.CUSTOMERS table, 17,428 rows have NULL as the value for the CUST_MARITAL_STATUS column (17,428 + 38,072 = 55,500), while 41 rows have a NULL value for the CUST_INCOME_LEVEL column (41 + 55,459 = 55,500). So we can confirm that the new AUTO_SAMPLE_SIZE algorithm will use all non-NULL values when gathering basic table and column level statistics.

    Now that we have a clear understanding of what sample size to expect, let’s include histogram creation as part of the statistics gathering. Again we can look in the USER_TABLES and USER_TAB_COL_STATISTICS dictionary views to find the sample size used. The sample size seen in USER_TABLES is 55,500 rows, but if we look at the column statistics we see that it is the same as in the previous case except for the columns CUST_POSTAL_CODE and CUST_CITY_ID. You will also notice that these columns now have histograms created on them. The sample size shown for these columns is not the sample size used to gather the basic column statistics. AUTO_SAMPLE_SIZE still uses all the rows in the table - minus the NULL values - to gather the basic column statistics (55,500 rows in this case). The size shown is the sample size used to create the histogram on the column. When we create a histogram we try to build it on a sample that has approximately 5,500 non-null values for the column. Typically all of the histograms required for a table are built from the same sample. In our example the histograms created on CUST_POSTAL_CODE and CUST_CITY_ID were built on a single sample of ~5,500 (5,450 rows), as these columns contained only non-null values. However, if one or more of the columns that require a histogram has null values, then the sample size may be increased in order to achieve a sample of 5,500 non-null values for those columns. In addition, if the difference between the number of nulls in the columns varies greatly, we may create multiple samples: one for the columns that have a low number of null values and one for the columns with a high number of null values. This scheme enables us to get close to 5,500 non-null values for each column. +Maria Colgan
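
    For reference, a minimal sketch of the commands behind these observations; it assumes you are connected as SH with the sample schema installed, so adjust names as needed.

        -- Gather stats with the default AUTO_SAMPLE_SIZE, no histograms
        BEGIN
          DBMS_STATS.GATHER_TABLE_STATS(
            ownname          => 'SH',
            tabname          => 'CUSTOMERS',
            estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
            method_opt       => 'FOR ALL COLUMNS SIZE 1');
        END;
        /

        -- Check the sample sizes recorded in the dictionary
        SELECT table_name, num_rows, sample_size
        FROM   user_tables
        WHERE  table_name = 'CUSTOMERS';

        SELECT column_name, num_nulls, sample_size, histogram
        FROM   user_tab_col_statistics
        WHERE  table_name = 'CUSTOMERS'
        ORDER  BY column_name;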

    Read the article

  • Ideas for OpenSource CMS in ASP.NET MVC

    - by rajesh pillai
    I am in the process of collecting ideas for building an open-source CMS based on the ASP.NET framework. I have chosen ASP.NET MVC with jQuery as the tools to develop this. I have made this a community wiki.

    Background: Most of the good CMSs available are built on PHP, though of late CMSs built on the ASP.NET framework seem to be cropping up. I would like to collect ideas/suggestions/expectations for an open-source CMS system on the ASP.NET platform. I am looking for expectations of the technology, features that you wish you could find in a modern CMS, and any other thoughts/ideas that come to your mind. Your input would be of great help in this direction. Meanwhile, I am also reviewing many open-source CMS systems built on ASP.NET, as well as MS Office SharePoint, to get ideas, and I would update my findings here for your reference. The following are some of the open-source CMSs/blog engines that I am in the process of reviewing:

    - Oxite (ASP.NET MVC): the new kid on the block
    - WordPress
    - BlogEngine.NET
    - Umbraco

    Some of the features that I can think of are noted below:

    - Simplified content creation
    - Support for multiple content authors
    - Metadata feature
    - Workflow engine
    - Simplified deployment
    - List-based contents (SharePoint-like)
    - Customizable URLs
    - Support for content caching
    - Roles (content author, content publisher, etc.)
    - Support for different types of content (like html, txt, documents, images, videos)
    - Skinnable (support for extensible themes)
    - Localization & globalization
    - Unlimited nesting of categories
    - Ready-made templates for blogs, forums, surveys
    - Good documentation

    You can add your points or add some depth to any of the above features.

    Read the article

  • C# Wholesale Order form - textboxes in Gridviews in Repeater

    - by tnriverfish
    I'm building a wholesale order form on a website. The current plan is to:

    - get an ArrayList of DepartmentUnits
    - a DepartmentUnit has various attributes like "deptId" and "description" and its own ArrayList of StoreItems
    - the StoreItems have an attached ArrayList of various SizeOptions
    - the SizeOptions have an inventory count integer along with their description
    - put an asp:Repeater on the page that has an asp:GridView in it
    - each DepartmentUnit will have its own GridView
    - each StoreItem will have a row in the GridView
    - each SizeOption will have a TextBox in the row (approximately 10 options)
    - each inventory count will be watermarked over the size option textbox

    The question becomes: how will I then collect all this information correctly once the form has been filled out? I don't like the idea of putting all this information in an update panel and then posting back each time a GridView row or, worse, one of the row's textboxes changes. Any advice on putting a single save button on the page and looping through each Repeater item - and each GridViewRow - and each textbox - to get all the values entered? Would it be better to try collecting all the items added in a single table at the bottom of the page and updating the string with jQuery each time a text box is modified, then just looping through the new table when saved? Not sure I know how to loop through that table though - updating if a quantity is changed might be a bear too. If it considerably simplifies the process I could just remove the Repeater aspect and put separate GridViews on separate pages. Thanks!
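
    For the single-save-button approach, the nested loop would look roughly like the sketch below. This is a hedged example only: the control IDs rptDepartments and gvItems, the per-size textbox naming, and the SaveOrderLine helper are all made up for illustration.

        // In the page's code-behind: walk every GridView inside the Repeater and read the textboxes.
        protected void btnSave_Click(object sender, EventArgs e)
        {
            foreach (RepeaterItem deptItem in rptDepartments.Items)
            {
                var grid = (GridView)deptItem.FindControl("gvItems");
                if (grid == null) continue;

                foreach (GridViewRow row in grid.Rows)
                {
                    if (row.RowType != DataControlRowType.DataRow) continue;

                    // One TextBox per SizeOption; ignore blanks and non-numeric entries.
                    foreach (TableCell cell in row.Cells)
                    {
                        foreach (Control control in cell.Controls)
                        {
                            var box = control as TextBox;
                            int quantity;
                            if (box != null && int.TryParse(box.Text, out quantity) && quantity > 0)
                            {
                                // The row's DataKey identifies the StoreItem; box.ID identifies the SizeOption.
                                SaveOrderLine(grid.DataKeys[row.RowIndex].Value, box.ID, quantity);
                            }
                        }
                    }
                }
            }
        }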

    Read the article

  • Storing PLSQL stored-procedure values in Oracle memory caches for extended periods

    - by Ira Baxter
    I am collecting runtime profiling data from PL/SQL stored procedures. The data is collected as certain stored procedures execute, but it needs to accumulate across multiple executions of those procedures. To minimize overhead, I'd like to store that profiling data in some PL/SQL-accessible, Oracle memory-resident storage for the duration of the data collection interval, and then dump out the accumulated values. The data collection interval might be seconds or hours; it's OK not to store this data across system boots. Something like session state in web servers would do. What are my choices for storing such data? The only method I know about is contexts in DBMS_SESSION:

        procedure set_ctx (value in varchar2) as
        begin
          dbms_session.set_context('Test_Ctx', 'AccumulatedValue', value, NULL, 'ProfilerSessionId');
        end set_ctx;

    This works, but takes some 50 milliseconds(!) per update to the accumulated value. What I'm hoping for is a way to access/store an array of values in some Oracle memory using vanilla PL/SQL statements, with access times typical of array accesses made to package-local arrays.
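
    Since the question explicitly mentions package-local arrays, here is a minimal sketch of that alternative: package-level state lives in the session's memory for the life of the session (so, like web session state, it is private to that session and lost at disconnect), and reads/writes are ordinary collection accesses. All names below are made up for illustration.

        CREATE OR REPLACE PACKAGE profiler_cache AS
          PROCEDURE add_sample(p_slot IN PLS_INTEGER, p_value IN NUMBER);
          FUNCTION  get_sample(p_slot IN PLS_INTEGER) RETURN NUMBER;
        END profiler_cache;
        /

        CREATE OR REPLACE PACKAGE BODY profiler_cache AS
          -- Associative array held in package state: persists across calls within one session
          TYPE t_samples IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
          g_samples t_samples;

          PROCEDURE add_sample(p_slot IN PLS_INTEGER, p_value IN NUMBER) IS
          BEGIN
            IF g_samples.EXISTS(p_slot) THEN
              g_samples(p_slot) := g_samples(p_slot) + p_value;  -- accumulate
            ELSE
              g_samples(p_slot) := p_value;                      -- first sample for this slot
            END IF;
          END add_sample;

          FUNCTION get_sample(p_slot IN PLS_INTEGER) RETURN NUMBER IS
          BEGIN
            IF g_samples.EXISTS(p_slot) THEN
              RETURN g_samples(p_slot);
            END IF;
            RETURN NULL;
          END get_sample;
        END profiler_cache;
        /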

    Read the article

  • How do you jump to a particular row in a DataGridView by typing (a la Windows Explorer details view)

    - by russcollier
    I have a .NET WinForms app in C# with a DataGridView that's read-only and populated with some number of rows. I'd like to have functionality similar to Windows Explorer's details view (and that of many other applications). I'd like the DataGridView to behave such that when it has focus and you start typing, the current row selection will jump to the row where the (string) value of cell 0 (i.e. the first column in the row) starts with the characters you typed. For example, say I have a DataGridView with 1 column and the following rows: Bob, Jane, Jason, John, Leroy, Sam. If the DataGridView has focus and I hit the 'b' key on my keyboard, the selected row is now "Bob". If I quickly type in the keys 'ja', the selected row is Jane. If I quickly type in the letters 'jas', the selected row is Jason. If I hit the 'z' key, nothing is selected (since nothing starts with Z). Likewise, if Jane is currently selected and I keep typing the letter 'j', the selection will cycle through to Jason, then John, then back to Jane, each time I hit the 'j' key.

    I've been doing some Googling (and "stackoverflowing" :-)) for a while and cannot find any examples of this type of functionality. I have a rough idea in my head to do this via some sort of short-lived timer thread, collecting the keystrokes on KeyPress events for the DataGridView, and selecting rows based on those collected keystrokes matching a Cells[0].Value.StartsWith() type of condition. But it seems like there has to be an easier way that I'm just not seeing. Any ideas would be much appreciated. Thanks!
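
    One rough way to sketch the "collect keystrokes, then jump" idea, without the wrap-around cycling and with a simple time-based reset instead of a timer thread - treat it as an illustration, not a finished solution:

        // Hypothetical handler wired to dataGridView1.KeyPress in the designer.
        private string searchBuffer = "";
        private DateTime lastKeyPress = DateTime.MinValue;

        private void dataGridView1_KeyPress(object sender, KeyPressEventArgs e)
        {
            // Start a new search if the user paused for more than a second.
            if ((DateTime.Now - lastKeyPress).TotalSeconds > 1.0)
                searchBuffer = "";
            lastKeyPress = DateTime.Now;
            searchBuffer += e.KeyChar;

            foreach (DataGridViewRow row in dataGridView1.Rows)
            {
                if (row.IsNewRow) continue;
                string text = Convert.ToString(row.Cells[0].Value);
                if (!string.IsNullOrEmpty(text) &&
                    text.StartsWith(searchBuffer, StringComparison.OrdinalIgnoreCase))
                {
                    dataGridView1.CurrentCell = row.Cells[0];   // moves the selection to that row
                    break;
                }
            }
            e.Handled = true;
        }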

    Read the article

  • In git, how can I get the diff between two dates?

    - by Weidenrinde
    Basically, I am looking for something equivalent to cvs diff -D"1 day ago" -D"2010-02-29 11:11". While collecting more and more information, I found a solution. I paste it here for others who might have similar problems. Things I have tried:
    - "In git, how can I get the diff between all the commits that occurred between two dates?" was answered here with: git whatchanged --since="1 day ago" -p But this gives a diff for each commit, even if there are multiple commits to one file.
    - I know that "date" is a bit of a loose concept in git, but I thought there must be some way to do this. git diff 'master@{1 day ago}'..master gives the warning "warning: Log for 'master' only goes back to Tue, 16 Mar 2010 14:17:32 +0100." and does not show old diffs.
    - git format-patch --since=yesterday --stdout does not give anything.
    - revs=$(git log --pretty="format:%H" --since="1 day ago"); git diff $(echo "$revs"|tail -n1) $(echo "$revs"|head -n1) works somehow, but seems complicated and does not restrict itself to the current branch.
    - git diff $(git rev-list -n1 --before="1 day ago" master) seems to work and appears to be the standard way to do similar things, although it is more complicated than I expected.
    Funnily, git-cvsserver does not support "cvs diff -D" (not that this is documented anywhere).
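
    To diff between two arbitrary dates (rather than one date and the current HEAD), the last approach generalizes by resolving each date to the last commit on the branch before that time. A small sketch; the dates are just examples:

        # Resolve each date to the newest commit on master before that time,
        # then diff those two commits.
        old=$(git rev-list -n1 --before="2010-02-28 11:11" master)
        new=$(git rev-list -n1 --before="2010-03-01 11:11" master)
        git diff "$old" "$new"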

    Read the article

  • Where are the real risks in network security?

    - by Barry Brown
    Anytime username/password authentication is used, the common wisdom is to protect the transport of that data using encryption (SSL, HTTPS, etc). But that leaves the end points potentially vulnerable. Realistically, which is at greater risk of intrusion?
    - Transport layer: compromised via wireless packet sniffing, malicious wiretapping, etc.
    - Transport devices: risks include ISPs and Internet backbone operators sniffing data.
    - End-user device: vulnerable to spyware, key loggers, shoulder surfing, and so forth.
    - Remote server: many uncontrollable vulnerabilities, including malicious operators, break-ins resulting in stolen data, physical theft of servers, backups kept in insecure places, and much more.
    My gut reaction is that although the transport layer is relatively easy to protect via SSL, the risks in the other areas are much, much greater, especially at the end points. For example, at home my computer connects directly to my router; from there it goes straight to my ISP's routers and onto the Internet. I would estimate the risks at the transport level (both software and hardware) as low to non-existent. But what security does the server I'm connected to have? Has it been hacked into? Is the operator collecting usernames and passwords, knowing that most people use the same information at other websites? Likewise, has my computer been compromised by malware? Those seem like much greater risks. What do you think?

    Read the article

  • AS3 Memory Conservation (Loaders/BitmapDatas/Bitmaps/Sprites)

    - by rinogo
    I'm working on reducing the memory requirements of my AS3 app. I understand that once there are no remaining references to an object, it is flagged as being a candidate for garbage collection. Is it even worth it to try to remove references to Loaders that are no longer actively in use? My first thought is that it is not worth it. Here's why: My Sprites need perpetual references to the Bitmaps they display (since the Sprites are always visible in my app). So, the Bitmaps cannot be garbage collected. The Bitmaps rely upon BitmapData objects for their data, so we can't get rid of them. (Up until this point it's all pretty straightforward). Here's where I'm unsure of what's going on: Does a BitmapData have a reference to the data loaded by the Loader? In other words, is BitmapData essentially just a wrapper that has a reference to loader.content, or is the data copied from loader.content to BitmapData? If a reference is maintained, then I don't get anything by garbage collecting my loaders... Thoughts?

    Read the article

  • Optimize GROUP BY & ORDER BY query

    - by Jan Hancic
    I have a web page where users upload & watch videos. Last week I asked what the best way to track video views is, so that I could display the most viewed videos this week (videos from all dates). Now I need some help optimizing the query with which I get the videos from the database. The relevant tables are these:
    - video (~239371 rows): VID(int), UID(int), title(varchar), status(enum), type(varchar), is_duplicate(enum), is_adult(enum), channel_id(tinyint)
    - signup (~115440 rows): UID(int), username(varchar)
    - videos_views (~359202 rows after 6 days of collecting data, so this table will grow rapidly): videos_id(int), views_date(date), num_of_views(int)
    The table video holds the videos, signup holds users, and videos_views holds data about video views (each video can have one row per day in that table). I have this query that does the trick, but it takes ~10s to execute, and I imagine this will only get worse over time as the videos_views table grows in size.
        SELECT v.VID, v.title, v.vkey, v.duration, v.addtime, v.UID, v.viewnumber,
               v.com_num, v.rate, v.THB, s.username, SUM(vvt.num_of_views) AS tmp_num
        FROM video v
        LEFT JOIN videos_views vvt ON v.VID = vvt.videos_id
        LEFT JOIN signup s ON v.UID = s.UID
        WHERE v.status = 'Converted' AND v.type = 'public'
          AND v.is_duplicate = '0' AND v.is_adult = '0'
          AND v.channel_id <> 10
          AND vvt.views_date >= '2001-05-11'
        GROUP BY vvt.videos_id
        ORDER BY tmp_num DESC
        LIMIT 8
    Here is a screenshot of the EXPLAIN result (image not reproduced in this excerpt). So, how can I optimize this?
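
    Two things are usually worth trying first here (a sketch assuming MySQL; the index names are made up): give videos_views a composite index covering the join column, the date filter and the summed column, and give video an index on its filter columns. Note also that the WHERE condition on vvt.views_date discards the NULL rows a LEFT JOIN would produce, so that join is effectively an INNER JOIN and can be written as one.

        -- Illustrative index suggestions, not taken from the post.
        -- Lets the join + date range on videos_views be resolved from the index,
        -- with num_of_views available without touching the table rows.
        CREATE INDEX idx_vv_video_date
            ON videos_views (videos_id, views_date, num_of_views);

        -- Narrows the video table by its equality filters before the join.
        CREATE INDEX idx_video_filters
            ON video (status, type, is_duplicate, is_adult, channel_id);

    If it is still too slow as the table grows, pre-aggregating views per video per week into a small summary table is the usual next step.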

    Read the article

  • PayPal Express Checkout integration in ASP classic

    - by Noam Smadja
    I have been trying to figure this out for almost a week now, with no success. I am coding in classic ASP and would love to receive some help. I am following the steps from the PayPal wizard here: https://www.paypal-labs.com/integrationwizard/ecpaypal/code.php I am collecting all the information on my website; I just want to pass it to PayPal when the buyer clicks checkout. So I pointed the checkout form to expresscheckout.asp as directed by the wizard, but when I click on the PayPal button I get a white page with nothing on it. No errors, nothing. It just hangs there on expresscheckout.asp.
    Shopping cart:
        ..code showing a review of the shopping cart..
        ..I saved the total amount into SESSION("Payment_Amount")..
        <form action='cart/expresscheckout.asp' METHOD='POST'>
          <input type='image' name='submit' src='https://www.paypal.com/en_US/i/btn/btn_xpressCheckout.gif' border='0' align='top' alt='Check out with PayPal'/>
        </form>

    Read the article

  • Media recommendation engine - Single user system - How to start

    - by Microkernel
    Hi guys, I want to implement a media recommendation engine. I saw similar posts on this, but I think my requirements are a bit different from those, so I'm posting here. Here is the deal: I want to implement a recommendation engine for media players like VLC, i.e. an engine that only has to care about a single user. It would be embedded in a media player on a PC that is typically used by a single user, and it would gradually learn that user's likes and dislikes. It will not be able to find similar users and use their data for recommendations, as it is a single-user system. So how do I go about this? You can also think of it as a recommendation engine to put in, say, an iPod, which has to learn about a single user and recommend music/movies from the collection it has. I thought of collecting the genre of the music/movies the user watches (maybe even the artist name) and recommending titles from the most-watched genre, but that looks very crude, doesn't it? So are there any algorithms I can use or any resources I can refer to? Regards, MicroKernel :)
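
    For reference, the genre-count idea is a simple form of content-based filtering; it gets noticeably less crude once several attributes are weighted together and the ranking happens per item rather than per genre. A rough sketch (C#; all class and method names are made up for illustration):

        // Minimal sketch: single-user, content-based scoring.
        // Each play increases the weight of that item's attributes (genre, artist, ...);
        // candidate items are then ranked by the summed weight of their attributes.
        using System;
        using System.Collections.Generic;
        using System.Linq;

        class MediaItem
        {
            public string Title;
            public string[] Attributes;   // e.g. { "genre:rock", "artist:Queen" }
        }

        class SingleUserRecommender
        {
            private readonly Dictionary<string, double> weights = new Dictionary<string, double>();

            public void RecordPlay(MediaItem item)
            {
                foreach (var attr in item.Attributes)
                    weights[attr] = (weights.ContainsKey(attr) ? weights[attr] : 0) + 1.0;
            }

            public IEnumerable<MediaItem> Recommend(IEnumerable<MediaItem> library, int count)
            {
                return library
                    .OrderByDescending(i => i.Attributes.Sum(
                        a => weights.ContainsKey(a) ? weights[a] : 0))
                    .Take(count);
            }
        }

    The terms to search for are "content-based filtering" (what the sketch does) and "collaborative filtering" (the multi-user approach a single-user system can't do on its own).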

    Read the article

  • Convert a nested array into a flat array with PHP

    - by Ben Fransen
    Hello all, I'm trying to create a generic database mapping class with PHP. Collecting the data through my functions is going well, but as expected I'm retrieving a nested set. A print_r of the array I receive looks like:
        Array
        (
            [table] => Session
            [columns] => Array
                (
                    [0] => `Session`.`ID` AS `Session_ID`
                    [1] => `Session`.`User` AS `Session_User`
                    [2] => `Session`.`SessionID` AS `Session_SessionID`
                    [3] => `Session`.`ExpiresAt` AS `Session_ExpiresAt`
                    [4] => `Session`.`CreatedAt` AS `Session_CreatedAt`
                    [5] => `Session`.`LastActivity` AS `Session_LastActivity`
                    [6] => `Session`.`ClientIP` AS `Session_ClientIP`
                )
            [0] => Array
                (
                    [table] => User
                    [columns] => Array
                        (
                            [0] => `User`.`ID` AS `User_ID`
                            [1] => `User`.`UserName` AS `User_UserName`
                            [2] => `User`.`Password` AS `User_Password`
                            [3] => `User`.`FullName` AS `User_FullName`
                            [4] => `User`.`Address` AS `User_Address`
                        )
                    [0] => Array
                        (
                            [table] => Address
                            [columns] => Array
                                (
                                    [0] => `Address`.`ID` AS `Address_ID`
                                    [1] => `Address`.`UserID` AS `Address_UserID`
                                    [2] => `Address`.`Street` AS `Address_Street`
                                    [3] => `Address`.`City` AS `Address_City`
                                )
                        )
                )
        )
    To simplify things I want to turn this nested array into a flat array, so I can easily loop through it and use the 'columns' key to create my SELECT query. I've been struggling with this for a while now and figured maybe some users at SO can help me out here. I've tried multiple things with recursion, all without luck so far... Any help is much appreciated! Thanks in advance, Ben Fransen
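
    A recursive walk that keeps each node's 'table'/'columns' pair and then descends into the numerically keyed children is one way to flatten it. A sketch (PHP); flattenMapping and the result layout are just illustrative choices:

        <?php
        // Collect one ['table' => ..., 'columns' => ...] entry per table, in order,
        // so the SELECT list can be built from a single flat loop.
        function flattenMapping(array $node)
        {
            $flat = array();
            if (isset($node['table'], $node['columns'])) {
                $flat[] = array(
                    'table'   => $node['table'],
                    'columns' => $node['columns'],
                );
            }
            // Numeric keys hold the nested child tables; recurse into them.
            foreach ($node as $key => $value) {
                if (is_int($key) && is_array($value)) {
                    $flat = array_merge($flat, flattenMapping($value));
                }
            }
            return $flat;
        }

        // Usage, assuming $mapping is the nested array from the print_r above:
        // $columns = array();
        // foreach (flattenMapping($mapping) as $entry) {
        //     $columns = array_merge($columns, $entry['columns']);
        // }
        // $sql = 'SELECT ' . implode(', ', $columns) . ' ...';
        ?>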

    Read the article
