Search Results

Search found 658 results on 27 pages for 'extreme'.


  • What are the alternatives to "overriding a method" when using composition instead of inheritance?

    - by Sebastien Diot
    If we should favor composition over inheritance, the data part of it is clear, at least for me. What I don't have a clear solution for is how overriding methods, or simply implementing them if they are defined in a pure virtual form, should be handled. An obvious way is to wrap the instance representing the base class inside the instance representing the subclass. But this has two major downsides. First, if you have, say, 10 methods and you want to override a single one, you still have to delegate every other method anyway; and if there were several layers of inheritance, you now have several layers of wrapping, which becomes less and less efficient. Second, this only solves the problem for the object's clients: when another object calls the top wrapper, things happen as in inheritance, but when a method of the deepest instance, the base class, calls its own methods that have been wrapped and modified, the wrapping has no effect; the call is handled by its own method instead of by the highest wrapper. One extreme alternative that would solve those problems would be to have one instance per method: you only wrap the methods you want to override, so there is no pointless delegation. But now you end up with an enormous number of classes and object instances, which will have a negative effect on memory usage, and it will require a lot more coding too. So, are there alternatives (preferably alternatives that can be used in Java) that:

    - do not result in many levels of pointless delegation (methods that just forward without any changes);
    - make sure that not only the client of an object, but also all the code of the object itself, knows which implementation of a method should be called;
    - do not result in an explosion of classes and instances;
    - ideally put the extra memory overhead at the "class"/"particular composition" level (static, if you will), rather than making every object pay the memory overhead of composition?

    My feeling tells me that the instance representing the base class should be at the "top" of the stack/layers, so it receives calls directly and can process them directly too if they are not overridden. But I don't know how to do it that way.
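
    A minimal Java sketch of one way out (my own illustration, not from the question): combine delegation with an explicit "self" back-reference, so that even the base object's internal calls dispatch through the outermost wrapper, the way virtual calls would:

        interface Greeter {
            String name();
            String greet();
        }

        // Base implementation: internal calls go through 'self', the outermost
        // object in the composition, so overrides are visible even from inside
        // the base class.
        class BaseGreeter implements Greeter {
            Greeter self = this; // rebound when wrapped
            public String name() { return "base"; }
            public String greet() { return "hello from " + self.name(); }
        }

        // Overrides only name(); everything else is a single delegation.
        class LoudGreeter implements Greeter {
            private final BaseGreeter inner;
            LoudGreeter(BaseGreeter inner) { this.inner = inner; inner.self = this; }
            public String name() { return "LOUD"; }
            public String greet() { return inner.greet(); }
        }

        public class Demo {
            public static void main(String[] args) {
                Greeter g = new LoudGreeter(new BaseGreeter());
                System.out.println(g.greet()); // "hello from LOUD": the base sees the override
            }
        }

    Only the overridden method is re-implemented; the per-object cost of this open recursion is the single self field, rather than one wrapper instance per method.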

    Read the article

  • How to refactor a Python “god class”?

    - by Zearin
    Problem: I’m working on a Python project whose main class is a bit of a “God Object”. There are so friggin’ many attributes and methods! I want to refactor the class.

    So far: For the first step, I want to do something relatively simple; but when I tried the most straightforward approach, it broke some tests and existing examples. Basically, the class has a loooong list of attributes, but I can clearly look over them and think, “These 5 attributes are related… these 8 are also related… and then there’s the rest.”

    __getattr__: I basically just wanted to group the related attributes into a dict-like helper class. I had a feeling __getattr__ would be ideal for the job. So I moved the attributes to a separate class, and, sure enough, __getattr__ worked its magic perfectly well… at first. But then I tried running one of the examples. The example subclass tries to set one of these attributes directly (at the class level). But since the attribute was no longer “physically located” in the parent class, I got an error saying that the attribute did not exist.

    @property: I then read up about the @property decorator. But then I also read that it creates problems for subclasses that want to do self.x = blah when x is a property of the parent class.

    Desired:
    - Have all client code continue to work using self.whatever, even if the parent’s whatever attribute is not “physically located” in the class (or instance) itself.
    - Group related attributes into dict-like containers.
    - Reduce the extreme noisiness of the code in the main class. For example, I don’t simply want to change this:

          larry = 2
          curly = 'abcd'
          moe = self.doh()

      into this:

          larry = something_else('larry')
          curly = something_else('curly')
          moe = yet_another_thing.moe()

      …because that’s still noisy. Although that successfully turns a simple attribute into something that can manage the data, the original had 3 variables and the tweaked version still has 3 variables. However, I would be fine with something like this:

          stooges = Stooges()

      And if a lookup for self.larry fails, something would check stooges and see if larry is there. (But it must also work if a subclass tries to do larry = 'blah' at the class level.)

    Summary: I want to replace related groups of attributes in a parent class with a single attribute that stores all the data elsewhere; keep working with existing client code that uses (e.g.) larry = 'blah' at the class level; and continue to allow subclasses to extend, override, and modify these refactored attributes without knowing anything has changed. Is this possible? Or am I barking up the wrong tree?
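
    For illustration, a minimal sketch of the __getattr__ fallback being discussed (Stooges and the attribute names come from the question; the rest is my own assumption). Because __getattr__ only fires when normal lookup fails, a subclass that sets larry = 'blah' at the class level simply shadows the fallback:

        class Stooges:
            """Dict-like helper grouping related attributes."""
            def __init__(self):
                self.larry = 2
                self.curly = 'abcd'

        class Main:
            def __init__(self):
                self.stooges = Stooges()

            def __getattr__(self, name):
                # Called only when normal lookup fails, so class-level
                # overrides in subclasses take precedence automatically.
                stooges = self.__dict__.get('stooges')
                if stooges is not None and hasattr(stooges, name):
                    return getattr(stooges, name)
                raise AttributeError(name)

        class Sub(Main):
            larry = 'blah'   # found by normal lookup; __getattr__ never runs

        m, s = Main(), Sub()
        print(m.larry)   # 2, via the Stooges fallback
        print(s.larry)   # 'blah', the class attribute shadows the fallback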

    Read the article

  • Accessing Repositories from Domain

    - by Paul T Davies
    Say we have a task logging system. When a task is logged, the user specifies a category, and the task defaults to a status of 'Outstanding'. Assume in this instance that Category and Status have to be implemented as entities. Normally I would do this in the application layer:

        public class TaskService
        {
            //...
            public void Add(Guid categoryId, string description)
            {
                var category = _categoryRepository.GetById(categoryId);
                var status = _statusRepository.GetById(Constants.Status.OutstandingId);
                var task = Task.Create(category, status, description);
                _taskRepository.Save(task);
            }
        }

    Entity:

        public class Task
        {
            //...
            public static Task Create(Category category, Status status, string description)
            {
                return new Task { Category = category, Status = status, Description = description };
            }
        }

    I do it like this because I am consistently told that entities should not access the repositories, but it would make much more sense to me if I did this:

        public class Task
        {
            //...
            public static Task Create(Category category, string description)
            {
                return new Task
                {
                    Category = category,
                    Status = _statusRepository.GetById(Constants.Status.OutstandingId),
                    Description = description
                };
            }
        }

    The status repository is dependency-injected anyway, so there is no real dependency, and this feels more to me like it is the domain that is making the decision that a task defaults to outstanding. The previous version feels like it is the application layer making that decision. And why are repository contracts often in the domain if this should not be a possibility? Here is a more extreme example, where the domain decides urgency:

        public class Task
        {
            //...
            public static Task Create(Category category, string description)
            {
                var task = new Task
                {
                    Category = category,
                    Status = _statusRepository.GetById(Constants.Status.OutstandingId),
                    Description = description
                };
                if (someCondition)
                {
                    if (someValue > anotherValue)
                    {
                        task.Urgency = _urgencyRepository.GetById(Constants.Urgency.UrgentId);
                    }
                    else
                    {
                        task.Urgency = _urgencyRepository.GetById(Constants.Urgency.SemiUrgentId);
                    }
                }
                else
                {
                    task.Urgency = _urgencyRepository.GetById(Constants.Urgency.NotId);
                }
                return task;
            }
        }

    There is no way you would want to pass in all possible versions of Urgency, and no way you would want to calculate this business logic in the application layer, so surely this would be the most appropriate way? So is this a valid reason to access repositories from the domain?
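
    For what it's worth, a common middle ground (my own sketch, not from the question) is to move creation into a domain factory whose repository dependency is constructor-injected, so the "defaults to Outstanding" rule lives in the domain without the entity itself reaching into a repository:

        // Hypothetical domain factory: the domain owns the defaulting rule,
        // but the repository arrives through the constructor instead of being
        // reached statically from inside the Task entity.
        public class TaskFactory
        {
            private readonly IStatusRepository _statusRepository;

            public TaskFactory(IStatusRepository statusRepository)
            {
                _statusRepository = statusRepository;
            }

            public Task Create(Category category, string description)
            {
                var status = _statusRepository.GetById(Constants.Status.OutstandingId);
                return new Task { Category = category, Status = status, Description = description };
            }
        }

    The application layer then just calls the factory, and the entity stays persistence-ignorant.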

    Read the article

  • How important is knowing functionality before coding?

    - by minusSeven
    I work for a software development company where the development work has been offshored to us. The onshore team handles support and talks directly to the clients; we never talk to the clients directly, we just talk to people on the onshore team who do. When requirements come in, the onshore team talks to the clients, writes requirement documents, and informs us. We write design documents after studying the requirements (we follow a traditional waterfall model). But there is one problem in the whole process: nobody, on either the offshore or the onshore team, understands the functionality of the application completely. We just know it's a big, complex web app handling complex order processing, catalog management, campaign management, and other activities. We struggle with the design document because the requirements are not clear. It then goes into a series of questions and answers back and forth between the onshore team, the offshore team, and the clients. We are often told to understand the functionality from the code, but that's usually not feasible, as the code base is huge and even understanding a simple menu item takes days if not weeks. We tried asking the clients for a knowledge transfer about the application, but to no avail. Our manager often tells us to start coding even if the design document is not complete or the requirements are not clear. We start by coding the part of the requirement that seems clear and wait for the rest. This usually delays deployment by a month. In extreme cases we would have very few errors in development and production, but the clients would say that's not what they asked for. That starts a blame game and a series of change requests, and we end up developing something very different. My question is: how would you do development work if you don't know the functionality of the app fully? UPDATE: The development methodology isn't really my choice, and I am not my team's lead; it is the way it began. I tried to tell people about the advantages of agile, but to no avail. Besides, I don't think my team has the necessary mindset to work in an agile environment.

    Read the article

  • June IOUG events

    - by Mandy Ho
    Independent Oracle User Group (IOUG) Regional Events:

    June 11-12, 2012 – Broomfield, CO: 2-day seminar, "High Performance PL/SQL & Oracle Database 11g New Features". Steven Feuerstein, generally considered the world's leading PL/SQL expert, will be presenting his all-new, 2-day "Higher Performance PL/SQL and Oracle 11g PL/SQL New Features" seminar on June 11 & 12 at Level 3 Communications in Broomfield, Colorado. This will be Steven's first Denver seminar in almost 4 years. Who knows when he will offer another? http://www.rmoug.org/

    June 14, 2012 – Ottawa, Ontario: Pythian's Gwen Shapira puts on 3 great presentations focused on NoSQL, making OLTP run fast, and Big Data. http://www.oug-ottawa.org/pls/htmldb/f?p=327:27:1317735724699447::NO

    June 21, 2012 – Calgary, Alberta: Big Data and Extreme Analytics Summit. http://coug.ab.ca/

    June 22, 2012 – Westborough, MA: "10 Things You Probably Did Not Know", with Tom Kyte. PL/SQL turns 23 years old this year; it was first introduced in 1988 with Oracle6 Database. This session looks at five technical things about PL/SQL you probably did not know: under-the-covers features that make PL/SQL quite simply the most efficient language with which to process data in the database. http://noug.com/

    June 28-29, 2012 – Plano, Texas: Jonathan Lewis Oracle Performance Seminars. The DOUG (Dallas Oracle Users Group) has invited SpeakTech to return to Dallas, and they're bringing Jonathan Lewis! Topics are "Beating the Oracle Optimizer" (June 28, 2012) and "Trouble Shooting & Tuning" (June 29, 2012). http://www.eventbrite.com/event/3082448687

    Read the article

  • Business Strategy - Google Case Study

    Business strategy, as defined by SMBTN.com, is a term used in business planning that implies a careful selection and application of resources to obtain a competitive advantage in anticipation of future events or trends. In more general terms, business strategy is positioning a company so that it has the greatest competitive advantage over others in the markets and industries in which it participates. This process involves making corporate decisions regarding which markets to provide goods and services to, pricing, acceptable quality levels, and how to interact with others in the marketplace. The primary objective of business strategy is to create and increase value for all of its shareholders and stakeholders through the creation of customer value. According to InformationWeek.com, Google has a distinctive technology advantage over competitors like Microsoft, eBay, Amazon, and Yahoo. Google utilizes custom high-performance systems which are cost-efficient because they can scale to extreme workloads. This hardware allows for a huge cost advantage over its competitors. In addition, InformationWeek.com interviewed Stephen Arnold, who stated that Google's programmers are 50%-100% more productive than programmers working for their competitors. He based this theory on Google's competitors having to spend up to four times as much just to keep up. In addition to its technological advantage, Google has also developed a decentralized management scheme in which employees report directly to multiple managers and team project leaders. This allows the responsibility for the technology department to be shared among multiple senior-level engineers and removes the need for a single department head to oversee the activities of the department. This is a unique departure from the standard management style, in which a department head like a CIO or CTO would oversee the department's global initiatives and business functionality, with direction passed down through middle management and implemented by programmers, business analysts, network administrators, and database administrators. It goes without saying that an IT professional's responsibilities would be shaped by Google's technological advantage and management strategy, simply because they work within the department and would have to design, develop, and support the high-performance systems, and report to multiple managers and project leaders on a regular basis. Since Google was established on and driven by new and emerging technology, all other departments are directly impacted by the technology department. In fact, they have to cater to the technology department, since it is a huge driving force in the success of Google.

    Reference:
    http://www.smbtn.com/smallbusinessdictionary/#b
    http://www.informationweek.com/news/software/linux/showArticle.jhtml?articleID=192300292&pgno=1&queryText=&isPrev=

    Read the article

  • BI&EPM in Focus April 2012

    - by Mike.Hallett(at)Oracle-BI&EPM
    General News
    - Oracle OpenWorld call for papers now open through April 9 (link)
    - Oracle Announces Availability of Oracle Exalytics In-Memory Machine (link)
    - Oracle EPM and BI Support Newsletter, Current Edition - Volume 3: March 2012 (link)

    Customers
    - Asiana Airlines Improves Passenger Management with Near-Real-Time Reservation and Ticketing Information
    - Centraal Boekhuis Delivers Faster with Oracle BI 11g
    - Essatto Software Speeds Data Aggregation Tenfold; Integrates BI, Performance Management, and Data Warehousing for Midsize Businesses
    - Grupo WTorre Supports Management's Decision-Making with OBIEE, Ensuring Uniform, Reliable, and Consistent Data
    - Indian Overseas Bank Cuts Planning Schedule by 45 Worker Days per Year, Assesses Market Risk Instantly with Business Intelligence System
    - Kentucky Community and Technical College System Enables Data-Driven Decision-Making Using Integrated System with Management Dashboards
    - National Australia Bank Achieves 200% ROI, Improves Data Quality and Reporting Integrity with Oracle Hyperion DRM
    - R.L. Polk & Co. Enhances Business Intelligence Capabilities, Optimizes System Performance with Extreme Analytics Machine Test
    - ResCare, Inc. Transforms Reporting to Improve Healthcare Service Performance with Oracle Business Analytics
    - Rochester City School District Uses OBIEE to Track Student Achievement, Identify Areas for Improvement, Accelerate Reporting
    - Société Générale Standardizes, Accelerates, and Improves Budget Planning Accuracy across Global Enterprise
    - The State Accounting Office of Georgia Integrates Financial Information, Shortens Financial Closings and Streamlines Reporting across 175 Organizations

    Events
    - 4-day Oracle Real-Time Decisions Hands-on Technical Workshop for Partners (PTS, free), May 14-17, 2012: Colombes, Paris, France
    - Nordic events: "Latest Release of Oracle Hyperion EPM and BI Suites Helps Organizations Plan through Uncertainty, Improve Decision-Making and Meet Regulatory Requirements" (April 17, Sweden | April 18, Norway | April 19, Denmark | April 24, Finland)
    - Webcast replay from Balaji Yelamanchili and Paul Rodwick: "Analytics Without Limits - The Latest on Oracle Exalytics In-Memory Machine and Oracle Business Intelligence" (link)
    - Wednesday, April 04, 2012: Business Analytics launch webcast: invite your customers to register (link)
    - Big Data Online Forum now available on demand (link)

    Enterprise Performance Management
    - Webcast replay: Accurate Forecasting within the Business Planning Cycle (link)
    - Oracle Hyperion Profitability and Cost Management (HPCM) Master Support Note (link)

    Business Intelligence
    - Whitepaper: Driving Innovation Through Analytics (link)
    - Gartner: CIOs Identify BI as the No. 1 Technology Priority for 2012 (link)
    - Webcast replay: Exalytics in Action: Airlines, US Census and Federal Spending Demo Applications (link)
    - Newly released walk-in video for Exalytics; use this to start customer/partner meetings! (link)
    - IDC Insight Paper: "Oracle's All-Out Assault on the Big Data Market: Offering Hadoop, R, Cubes, and Scalable IMDB in Familiar Packages" (link)
    - System Requirements and Supported Platforms for Oracle Business Intelligence Suite Enterprise Edition 11gR1 Certification Matrix now published to include OBIEE 11.1.1.6.0 (link)
    - Maintenance Release Guide (list of bugs fixed) for Oracle Business Intelligence Enterprise Edition (OBIEE) 11.1.1.6.0 (link)
    - OBIEE 11.1.1.6: Is OBIEE 11.1.1.6 Certified with OBI Apps 7.9.6.3? (link)
    - Information Center: Troubleshooting Oracle Business Intelligence Applications (support login req'd) (link)

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-14

    - by Bob Rhubart
    Duke's Choice Award Nominations Close Friday! | The Java Source
    The Duke's Choice Awards celebrate extreme innovation in the world of Java technology. Nominate an individual, a group, or a company who shows the best in Java innovation. Nominate at Java.net/dukeschoice. Nominations are open until this Friday, June 15.

    Whole Lotta Virtualization Goin' On | Rick Ramsey
    The OTN Garage's Rick Ramsey shares a list of recent virtualization articles available on OTN, along with a link to a video by The Killer, Mr Jerry Lee Lewis.

    A Pragmatic Path to Navigating your Infrastructure to the Cloud | The WebLogic Server Blog
    Ruma Sanyal offers an overview of a recent Oracle webcast featuring Gartner VP and Distinguished Analyst Andy Butler and Vice President and Gartner Fellow Massimo Pezzini.

    Migrating C/C++ embedded SQL code | Tom Laszewski
    Cloud migration expert Tom Laszewski explains the how-to in 5 easy steps.

    Aetna Dumps Its Siloed Enterprise Architecture for SOA | CIO.com
    CIO writer Stephanie Overby tells the story of how one major health insurance provider put the "Enterprise" back in Enterprise Architecture. (H/T to Joe McKendrick for this story.)

    Downloading specific video renditions in WebCenter Content | Kyle Hatlestad
    How-to from Oracle WebCenter & ADF A-Team blogger Kyle Hatlestad.

    Eclipse DemoCamp - June 2012 - Redwood Shores, CA
    Location: Oracle HQ, 10 Twin Dolphin Drive, Redwood Shores, CA (Map). Date and time: Wednesday, June 13, 2012, 6pm-9pm. Agenda: The Evolution of Java Persistence (Doug Clarke, EclipseLink Project Lead, Oracle); Integrating BIRT into Applications (Ashwini Verma, Actuate Corporation); Leveraging OSGi in the Enterprise (Kamal Muralidharan, Lead Engineer, eBay); Developing Rich ADF Applications with Java EE (Greg Stachnick, Oracle); NVIDIA Nsight Eclipse Edition (Goodwin, tech lead, visual tools; Eugene Ostroukhov, senior engineer, visual tools).

    2012 Oracle Fusion Middleware Innovation Awards - Win a FREE Pass to Oracle OpenWorld 2012 in San Francisco
    Share your use of Oracle Fusion Middleware solutions and how they help your organization drive business innovation. You just might win a free pass to Oracle OpenWorld 2012 in San Francisco. Deadline for submissions is July 17, 2012.

    BI Architecture Master Class for Partners - Oracle Architecture Unplugged
    Date: June 21, 2012. No slides, no fluff. This workshop will be highly interactive and is aimed at Oracle OPN member partners who are IT Architects and BI+W specialists. The focus will be on architectural issues and considerations.

    DevOps: Evolving to Handle Disruption | JP Morgenthal
    The subject of DevOps came up this week during an OTN ArchBeat podcast interview with Ron Batra and James Baty on the role of the cloud architect (that program will be available in a few weeks). Morgenthal's article for InfoQ offers a good overview of what DevOps is and how it works.

    Thought for the Day
    "Elegance is not a dispensable luxury but a factor that decides between success and failure." - Edsger Dijkstra. Source: softwarequotes.com

    Read the article

  • Kicking off the ODI12c Blog Series

    - by Madhu Nair
    It is always exciting to talk about a new release, especially one as significant as the newly released Oracle Data Integrator 12c (ODI12c). Why? Because it is packed with features that address many requirements from the user community. If you missed the sneak previews at this year's Oracle OpenWorld sessions, do not despair: over the coming weeks the ODI12c team of developers and consultants will be sharing their perspectives on key features, experiences, and best practices for ODI12c right here through a series of blogs. Before diving into feature details in subsequent blogs, it helps to understand the overall themes that went into developing ODI12c.

    Let the productivity flow: Let us face it, designing for developer user experience is always top of mind for any enterprise software. ODI12c addresses this through the introduction of declarative flow-based mappings (the topic of our next ODI blog, by the way!!). Reusability has been addressed through the introduction of reusable mappings, cutting down development time for repeated logic. An enhanced debugger makes life easy for complex, granular debugging scenarios. Unique repository IDs now allow you to manage multiple repositories.

    Performance is paramount: Another major area of focus for ODI12c is performance. Increased parallelism (like the multiple target table load feature), reduced session overheads, and the ability to customize load plans through physical views all empower the user to tune run times for extreme performance. [Screenshots: a mapping showing a multiple target table load, and the physical representation allowing users to choose execution options.]

    Integrating it all: This release is not just about ODI12c as a standalone product. Closer integration with Oracle GoldenGate now brings Change Data Capture (CDC) capabilities into ODI12c. Oracle Warehouse Builder (OWB) jobs can now be executed and monitored from within ODI12c. And ODI12c is fast becoming the de facto standard for Oracle Applications that need data integration in their solutions, the best example being the latest release of the Oracle BI Applications technology.

    Even as we bring you in-depth write-ups about the features, there are some great previews and resources already out there, like this super entry by beta partner Rittman Mead Consulting and this ODI12c Key Features White Paper. You can download ODI12c here (this post helps). The best, though, is the upcoming executive webcast featuring customers and executives who have seen and conceived the product. Don't miss it!

    Read the article

  • Organizing Git repositories with common nested sub-modules

    - by André Caron
    I'm a big fan of Git sub-modules. I like to be able to track a dependency along with its version, so that you can roll back to a previous version of your project and have the corresponding version of the dependency to build safely and cleanly. Moreover, it's easier to release our libraries as open source projects, as the history for libraries is separate from that of the applications that depend on them (and which are not going to be open sourced). I'm setting up a workflow for multiple projects at work, and I was wondering how it would be if we took this approach to a bit of an extreme instead of having a single monolithic project. I quickly realized there is a potential can of worms in really using sub-modules. Suppose a pair of applications, studio and player, and dependent libraries core, graph and network, where the dependencies are as follows:

    - core is standalone
    - graph depends on core (sub-module at ./libs/core)
    - network depends on core (sub-module at ./libs/core)
    - studio depends on graph and network (sub-modules at ./libs/graph and ./libs/network)
    - player depends on graph and network (sub-modules at ./libs/graph and ./libs/network)

    Suppose that we're using CMake and that each of these projects has unit tests and all the works. Each project (including studio and player) must be able to be compiled standalone to perform code metrics, unit testing, etc. The thing is, after a recursive git submodule fetch you get the following directory structure:

        studio/
        studio/libs/                    (sub-module depth: 1)
        studio/libs/graph/
        studio/libs/graph/libs/         (sub-module depth: 2)
        studio/libs/graph/libs/core/
        studio/libs/network/
        studio/libs/network/libs/       (sub-module depth: 2)
        studio/libs/network/libs/core/

    Notice that core is cloned twice in the studio project. Aside from wasting disk space, I have a build system problem because I'm building core twice and I potentially get two different versions of core.

    Question: How do I organize sub-modules so that I get the versioned dependency and standalone build without getting multiple copies of common nested sub-modules?

    Possible solution: If the library dependency is somewhat of a suggestion (i.e. in a "known to work with version X" or "only version X is officially supported" fashion) and potential dependent applications or libraries are responsible for building with whatever version they like, then I could imagine the following scenario:

    - Have the build system for graph and network tell them where to find core (e.g. via a compiler include path).
    - Define two build targets, "standalone" and "dependency", where "standalone" is based on "dependency" and adds the include path to point to the local core sub-module.
    - Introduce an extra dependency: studio on core. Then, studio builds core, sets the include path to its own copy of the core sub-module, then builds graph and network in "dependency" mode.

    The resulting folder structure looks like:

        studio/
        studio/libs/                    (sub-module depth: 1)
        studio/libs/core/
        studio/libs/graph/
        studio/libs/graph/libs/         (empty folder, sub-modules not fetched)
        studio/libs/network/
        studio/libs/network/libs/       (empty folder, sub-modules not fetched)

    However, this requires some build system magic (I'm pretty confident this can be done with CMake) and a bit of manual work on the part of version updates (updating graph might also require updating core and network to get a compatible version of core in all projects). Any thoughts on this?
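
    For illustration, a minimal CMake sketch of the "standalone vs. dependency" idea (the CORE_INCLUDE_DIR cache variable is my own hypothetical name): each library defaults to its nested sub-module, and a parent such as studio pre-sets the cache variable to its own copy before adding the sub-projects:

        # In graph/CMakeLists.txt (and similarly in network/):
        # default to the nested libs/core sub-module ("standalone" mode); a
        # value already in the cache, set by a parent project, wins instead.
        set(CORE_INCLUDE_DIR "${CMAKE_CURRENT_SOURCE_DIR}/libs/core/include"
            CACHE PATH "Where to find the core library headers")
        include_directories("${CORE_INCLUDE_DIR}")

        # In studio/CMakeLists.txt ("dependency" mode for the sub-projects):
        # set(CORE_INCLUDE_DIR "${CMAKE_CURRENT_SOURCE_DIR}/libs/core/include"
        #     CACHE PATH "" FORCE)
        # add_subdirectory(libs/core)
        # add_subdirectory(libs/graph)
        # add_subdirectory(libs/network)

    This works because a set(... CACHE ...) in a sub-project does not overwrite a cache entry the parent has already created, which is exactly the override behavior the possible solution describes.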

    Read the article

  • SQL SERVER – SSAS – Multidimensional Space Terms and Explanation

    - by pinaldave
    I was presenting a SQL Server session at one of the Tech Ed On Road events in India, and I was asked a very interesting question during the 'Stump the Speaker' session. I am sharing it with all of you over here.

    Question: Can you tell me, in simple words, what dimension, member, and the other terms of multidimensional space are? There is no simple example for it.

    This is an extremely fundamental question if you know Analysis Services. Those who have no exposure to it and have not yet started on this subject may find it a bit difficult. I really liked his question, so I decided to answer him there as well as blog about it over here.

    Answer: Here are the most important terms of multidimensional space: dimension, member, value, attribute and size.

    - Dimension: describes the point of interest for analysis.
    - Member: one of the points of interest in the dimension.
    - Value: uniquely describes the member.
    - Attribute: the collection of multiple members.
    - Size: the total number for any dimension.

    Let us understand this in further detail by taking the example of a space. I am going to use distance as the space in our example.

    - Dimension: Distance is our dimension.
    - Member: Kilometer. We can measure distance in kilometers.
    - Value: 4. We can measure distance in kilometer units, and the value of the unit can be 4.
    - Attribute: Kilometer, Miles, Meter. The complete set of members is called an attribute.
    - Size: 100 KM. The maximum size decided for the dimension is called the size.

    The same example can also be defined using a time space:

    - Dimension: Time
    - Member: Date
    - Value: 25
    - Attribute: 1, 2, 3…31
    - Size: 31

    I hope it is now clear what the various multidimensional space terms are.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • ArchBeat Link-o-Rama for 2012-04-05

    - by Bob Rhubart
    Webcast: Oracle Maximum Availability Architecture Best Practices (event.on24.com)
    Date: Thursday, April 12, 2012. Time: 10:00 AM PDT. Oracle expert Tom Kyte discusses how Oracle's Maximum Availability Architecture can help to minimize the costs and risk of downtime.

    Oracle Enterprise Manager Ops Center 12c Launch - Interactive Webcast and Live Chat (www.oracle.com)
    Thursday, April 12, 2012. 9 a.m. PT / 12 p.m. ET / 4 p.m. GMT. Speakers: Steve Wilson (VP Systems Management, Oracle), John Fowler (Exec VP Systems, Oracle), Brad Cameron (VP Development, Oracle Fusion Middleware), Bill Nesheim (VP Oracle Solaris), Dennis Reno (VP Customer Portal Experience, Oracle), Mike Wookey (Chief Architect, Oracle Enterprise Manager Ops Center), Prasad Pai (Sr Director, Oracle Enterprise Manager Ops Center).

    2012 Real World Performance Tour Dates | Performance Tuning | Performance Engineering (www.ioug.org)
    Coming to your town: a full day of real-world database performance with Tom Kyte, Andrew Holdsworth, and Graham Wood. Rochester, NY - March 8; Los Angeles, CA - April 30; Orange County, CA - May 1; Redwood Shores, CA - May 3.

    Oracle Technology Network Developer Day: MySQL - New York (www.oracle.com)
    Wednesday, May 02, 2012, 8:00 AM - 4:30 PM. Grand Hyatt New York, 109 East 42nd Street, Grand Central Terminal, New York, NY 10017.

    Webcast Series: Data Warehousing Best Practices (event.on24.com)
    April 19, 2012: Best Practices for Workload Management of a Data Warehouse on Oracle Exadata. May 10, 2012: Best Practices for Extreme Data Warehouse Performance on Oracle Exadata.

    How to create a Global Rule that stores a document's folder path in a custom metadata field | Nicolas Montoya (blogs.oracle.com)
    An illustrated how-to from Oracle Fusion Middleware A-Team blogger Nicolas Montoya.

    Get Proactive with Fusion Middleware | Daniel Mortimer (blogs.oracle.com)
    Daniel Mortimer shows how to access "a one stop shop for navigating to proactive support material, tools, and communication channels related to Oracle Fusion Middleware."

    Build an enterprise on 'other peoples' work', via SOA and cloud | Joe McKendrick (www.zdnet.com)
    Are you down with OPW? Joe McKendrick's synopsis of a recent presentation by David Linthicum focuses on reuse.

    Oracle Fusion Middleware Security: Unsolicited login with OAM 11g | Chris Johnson (fusionsecurity.blogspot.com)
    Chris Johnson shows how to create a shopping cart login model using "plain old HTML."

    How to use the Human WorkFlow Web Services | Edwin Biemond (biemond.blogspot.com)
    Oracle ACE Edwin Biemond shows how to invoke two WorkFlow web services to query the Human task in Oracle SOA Suite with your own ordering and restrictions.

    Bad Practice Use Case for LOV Performance Implementation in ADF BC | Andrejus Baranovskis (andrejusb.blogspot.com)
    "If you want to learn something well, there is nothing better [than] to learn bad practices first," says Oracle ACE Director Andrejus Baranovskis.

    Thought for the Day
    "The best meetings get real work done. When your people learn that your meetings actually accomplish something, they will stop making excuses to be elsewhere." - Larry Constantine

    Read the article

  • Oracle Manageability Presentations at Collaborate 2012

    - by Get_Specialized!
    Attending the Collaborate 2012 event, April 22-26th in Las Vegas, and interested in learning more about becoming specialized on Oracle Manageability? Be sure to check out the sessions below, presented by subject matter experts, while you're onsite. Set up a meeting, or be one of the first Oracle partners onsite to ask me, and we'll request one of the limited FREE Oracle Enterprise Manager 12c partner certification exam vouchers for you. Can't travel this year? COLLABORATE 12 Plug Into Vegas may be another option, letting you attend presentations like session #489, "Oracle Enterprise Manager 12c: What's Changed? What's New?", presented by Oracle Specialized Partners like ROLTA, from your own desk.

    Session ID | Title | Presented by | Day/Time
    920 | Enterprise Manager 12c Cloud Control: New Features and Best Practices | Dell | Sun
    9536 | Release 12 Apps DBA 101 | Justadba, LLC | Mon
    932 | Monitoring Exadata with Cloud Control | Oracle | Mon
    397 | OEM Cloud Control Hands On | Performance Tuning | Mon
    118 | Oracle BI Sys Mgmt Best Practices & New Features | Rittman Mead Consulting | Mon
    548 | High Availability Boot Camp: RAC Design, Install, Manage | Database Administration, Inc | Mon
    926 | The Only Complete Cloud Management Solution -- Oracle Enterprise Manager | Oracle | Mon
    328 | Virtualization Boot Camp | Dell | Mon
    292 | Upgrading to Oracle Enterprise Manager 12c - Best Practices | Southern Utah University | Mon
    793 | Exadata 101 - What You Need to Know | Rolta | Tue
    431 & 1431 | Extreme Database Administration: New Features for Expert DBAs | Oracle | Tue, Wed
    521 | What's New for Oracle WebLogic Management: Capabilities that Scripting Cannot Provide | Oracle | Thu
    338 | Oracle Real Application Testing: A look under the hood | PayPal | Tue
    9398 | Reduce TCO Using Oracle Application Management Suite for Oracle E-Business Suite | Oracle | Tue
    312 | Configuring and Managing a Private Cloud with Oracle Enterprise Manager 12c | Dell | Tue
    866 | Making OEM Sing and Dance with EMCLI | Portland General Electric | Tue
    533 | Oracle Exadata Monitoring: Engineered Systems Management with Oracle Enterprise Manager | Oracle | Wed
    100600 | Optimizing EnterpriseOne System Administration | Oracle | Wed
    9565 | Optimizing EBS on Exadata | Centroid Systems | Wed
    550 | Database-as-a-Service: Enterprise Cloud in Three Simple Steps | Oracle | Wed
    434 | Managing Oracle: Expert Panel on Techniques and Best Practices | Oracle Partners: Dell, Keste, ROLTA, Pythian | Wed
    9760 | Cloud Computing Directions: Understanding Oracle's Cloud | AT&T | Wed
    817 | Right Cloud: Use Oracle Technologies to Avoid False Cloud | Visual Integrator Consulting | Wed
    163 | Forgetting something? Standardize your database monitoring environment with Enterprise Manager 11g | Johnson Controls | Wed
    489 | Oracle Enterprise Manager 12c: What's Changed? What's New? | ROLTA | Thu

    Read the article

  • Thoughts on schemas and schema proliferation

    - by jamiet
    In SQL Server 2005 Microsoft introduced user-schema separation, and since then I have seen the use of schemas increase; whereas before I would typically see databases where all objects were in the [dbo] schema, I now see databases that have multiple schemas. A database I saw recently had 31 (thirty-one) of them. I can't help but wonder whether this is a good thing or not. Clearly 31 is an extreme case, but I question whether multiple schemas create more problems than they solve. I have been involved in many discussions that go something like this:

    Developer #1> "I have a new function to add to the database and I'm not sure which schema to put it in"
    Developer #2> "What does it do?"
    Developer #1> "It provides data to a report in Reporting Services"
    Developer #2> "Ok, so put it in the [reports] schema"
    Developer #1> "Well I could, but the data will only be used by our financial reporting folks, so shouldn't I put it in the [financial] schema?"
    Developer #2> "Maybe, yes"
    Developer #1> "Mind you, the data is supposed to be used for regulatory reporting to the FSA. Should I put it in [regulatory]?"
    Developer #2> "Err…."

    You get the idea!!! The more schemas that exist in your database, the more chance that their supposed usages will overlap. I'm left wondering whether the use of schemas is actually necessary. I don't really see them as an aid to security, because I generally believe that principals should be assigned permissions on objects as needed, on a case-by-case basis (and I have a stock SQL query that deciphers them all for me), so why bother using them at all? I can envisage a use where a database is used to house objects pertaining to many different business functions (which, in itself, is an ambiguous term), and in that circumstance perhaps a schema per business function would be appropriate; hence of late I have been loosely following this edict: if some objects in a database could be moved en masse to another database without the need to remove any foreign key constraints, then those objects could legitimately exist in a dedicated schema. I am interested to know what other people's thoughts are on this. If you would like to share then please do so in the comments. @Jamiet
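
    For illustration, a permissions query in the spirit of the "stock SQL query" mentioned above (my own sketch, not Jamie's actual query), listing explicit object-level grants per principal along with each object's schema:

        -- Explicit object-level permissions, grouped by principal,
        -- using the standard catalog views.
        SELECT pr.name AS principal_name,
               p.state_desc,
               p.permission_name,
               s.name  AS schema_name,
               o.name  AS object_name
        FROM sys.database_permissions AS p
        JOIN sys.database_principals  AS pr ON p.grantee_principal_id = pr.principal_id
        JOIN sys.objects              AS o  ON p.major_id = o.object_id
        JOIN sys.schemas              AS s  ON o.schema_id = s.schema_id
        WHERE p.class = 1   -- object- or column-level grants only
        ORDER BY pr.name, s.name, o.name;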

    Read the article

  • Windows 8 and the future of Silverlight

    - by Laila
    After Steve Ballmer's indiscreet 'misspeak' about Windows 8, there has been a lot of speculation about the new operating system. We've now had a few glimpses, such as the demonstration of 'Mosh' at the D9 2011 conference and the YouTube video, which showed a touch-centric new interface for apps built using HTML5 and JavaScript. This has caused acute anxiety to the programmers who have followed the recommended route of WPF, Silverlight and .NET, but it need not have caused quite so much panic, since it was, in fact, just a thin layer to make Windows into an apparently mobile-friendly OS. More worryingly, the press release from Microsoft was at pains to say that 'Windows 8 apps use the power of HTML5, tapping into the native capabilities of Windows using standard JavaScript and HTML', as if all thought of Silverlight, dominant in WP7, had been jettisoned. Ironically, this brave new 'happening' platform can all be done now in Windows 7 and an iPad, using Adobe AIR, so it is hardly cutting-edge; in fact the tile interface had a sort of retro Zune/Metro UI feel, first seen in Media Centre and followed by Windows Phone 7, with any originality leached out of it by the corporate decision-making process. It was kinda weird seeing old Excel stodgily running away alongside all the extreme paragliding videos. The ability to snap and resize concurrent apps might be a novelty on a tablet, but it is hardly so on a PC. It was at that moment that it struck me that here was a spreadsheet application that hadn't even made the leap to the .NET platform. Windows was once again trying to be all things to all men, whereas Apple had carefully separated Mac OS X development from iOS. The acrobatic feat of straddling all mobile and desktop devices with one OS is looking increasingly implausible. There is a world of difference between an operating system that facilitates business procedures and one that drives a device for playing pop videos and your holiday photos. So where does this leave Silverlight? Pretty much where it was. Windows 8 will support it, and it will continue to be developed, but if these press releases reflect the thinking within Microsoft, it is no longer seen as the strategic direction. However, Silverlight is still there, and there will be a whole new set of developer APIs for building touch-centric apps. Jupiter, for example, is rumoured to involve an App store that provides new, Silverlight-based "immersive" applications that are deployed as AppX packages. When the smoke clears, one suspects that JavaScript/HTML5 is merely an alternative development environment for Windows 8, there to attract the legions of independent developers outside the .NET culture who are unlikely ever to take a shine to a more serious development environment such as WPF or Silverlight. Cheers, Laila

    Read the article

  • C# 5: At last, async without the pain

    - by Alex.Davies
    For me, the best feature in Visual Studio 11 is the async and await keywords that come with C# 5. I am a big fan of asynchronous programming: it frees up resources, in particular the thread that a piece of code needs to run in. That lets the thread run something else while waiting for your long-running operation to complete. That's really important if that thread is the UI thread, or if it's holding a lock because it accesses some data structure. Before C# 5, I think I was about the only person in the world who really cared about asynchronous programming. The trouble was that you had to go to extreme lengths to make code asynchronous. I would forever be writing methods that, instead of returning a value, accepted an extra argument that is a "continuation". Then, when calling the method, I'd have to pass a lambda in to it, which contained all the stuff that needed to happen after the method finished. Here is a real snippet of code that is in .NET Demon:

        m_BuildControl.FilterEnabledForBuilding(
            projects,
            enabledProjects => m_OutOfDateProjectFinder.FilterNeedsBuilding(
                enabledProjects,
                newDirtyProjects =>
                {
                    // Mark any currently broken projects as dirty
                    newDirtyProjects.UnionWith(m_BrokenProjects);
                    // Copy what we found into the set of dirty things
                    m_DirtyProjects = newDirtyProjects;
                    RunSomeBuilds();
                }));

    It's just obtuse. Who puts a lambda inside a lambda like that? Well, me, obviously. But surely enabledProjects should just be the return value of FilterEnabledForBuilding? And newDirtyProjects should just be the return value of FilterNeedsBuilding? C# 5 async/await lets you write asynchronous code without it looking so stupid. Here's what I plan to change that code to, once we upgrade to VS 11:

        var enabledProjects = await m_BuildControl.FilterEnabledForBuilding(projects);
        var newDirtyProjects = await m_OutOfDateProjectFinder.FilterNeedsBuilding(enabledProjects);

        // Mark any currently broken projects as dirty
        newDirtyProjects.UnionWith(m_BrokenProjects);

        // Copy what we found into the set of dirty things
        m_DirtyProjects = newDirtyProjects;
        RunSomeBuilds();

    Much easier to read! But how is this the same code? If we were on the UI thread, doesn't the UI thread have to block while FilterEnabledForBuilding runs? No, it doesn't, and that's the magic of the await keyword! It cuts your method up into its constituent pieces, much like I did manually with lambdas before. When you run it, only the piece up to the first await actually runs. The rest is passed to FilterEnabledForBuilding as a continuation, which will get called back whenever that method is finished. In the meantime, our thread returns and can go back to making the UI responsive, or whatever else threads do in their spare time. This is actually a massive simplification, and if you're interested in all the gory details, and the speed hacks that the await keyword actually does for you, I recommend Jon Skeet's blog posts about it.

    Read the article

  • Innovation for Retailers

    - by David Dorf
    One of my main objectives for this blog is to point out emerging technologies and how they might apply to the retail industry. But ideas are just the beginning; retailers either have to rely on vendors or have their own lab to explore these ideas and see which ones work. (A healthy dose of both is probably the best solution.) The Nordstrom Innovation Lab is a fine example of dedicating resources to cultivate ideas and test prototypes. The video below, from 2011, is a case study in which the team builds an iPad app that helps customers purchase sunglasses in the store. Customers take pictures of themselves wearing different sunglasses, then can do side-by-side comparisons. There are a few interesting take-aways from their process. First, they work in the store alongside employees and customers. There's no concept of documenting all the requirements and then building the product. Instead, they work closely with those that will be using the app in order to fully understand what's needed. When they find an issue, they change the software onsite and try again. This iterative prototyping ensures their product hits the mark. Feels like Extreme Programming, if you recall that movement. Second, they have time-boxed the project to one week. Either it works or it doesn't, and either way they've only expended a week's worth of resources. Innovation always entails failure, and those that succeed are often good at detecting failure quickly and then adjusting. Fail fast and fail often. Third, it's not always about technology. I was impressed that they used paper designs to walk through user stories and help understand the needs of the customer. Pen and paper is the innovator's most powerful tool. Our Retail Applied Research (RAR) team uses some of these concepts in our development process. (Calling it a process is probably overkill.) We try to give life to concepts quickly so the rest of the organization can help us decide if we're heading in the right direction. It takes many failures before finding a successful product.

    Read the article

  • SMTP POP3 & PST. Acronyms from Hades.

    - by mikef
    A busy SysAdmin will occasionally have reason to curse SMTP. It is, certainly, one of the strangest events in the history of IT that such a deeply flawed system, designed originally purely for campus use, should have reached its current dominant position. The explanation was that it was the first open-standard email system, so SMTP/POP3 became the internet standard. We are, in consequence, dogged with a system with security weaknesses so extreme that messages are sent in plain text and you have no real assurance as to who the message came from anyway (SMTP-AUTH hasn't really caught on). Even without the security issues, the use of SMTP in an office environment provides a management nightmare to all commercial users responsible for complying with all regulations that control the conduct of business: such as tracking, retaining, and recording company documents. SMTP mail developed from various Unix-based systems designed for campus use that took the mail analogy so literally that mail messages were actually delivered to the users, using a 'store and forward' mechanism. This meant that, from the start, the end user had to store, manage and delete messages. This is a problem that has passed through all the releases of MS Outlook: It has to be able to manage mail locally in the dreaded PST file. As a stand-alone system, Outlook is flawed by its neglect of any means of automatic backup. Previous Outlook PST files actually blew up without warning when they reached the 2 Gig limit and became corrupted and inaccessible, leading to a thriving industry of 3rd party tools to clear up the mess. Microsoft Exchange is, of course, a server-based system. Emails are less likely to be lost in such a system if it is properly run. However, there is nothing to stop users from using local PSTs as well. There is the additional temptation to load emails into mobile devices, or USB keys for off-line working. The result is that the System Administrator is faced by a complex hybrid system where backups have to be taken from Servers, and PCs scattered around the network, where duplication of emails causes storage issues, and document retention policies become impossible to manage. If one adds to that the complexity of mobile phone email readers and mail synchronization, the problem is daunting. It is hardly surprising that the mood darkens when SysAdmins meet and discuss PST Hell. If you were promoted to the task of tormenting the souls of the damned in Hades, what aspects of the management of Outlook would you find most useful for your task? I'd love to hear from you. Cheers, Michael

    Read the article

  • What's Happening in Business Analytics at OpenWorld 2012?

    - by jmorourke
    Oracle OpenWorld 2012 is rapidly approaching: on September 30th we take over the city of San Francisco for five days. The Business Analytics program this year is our strongest ever, with over 150 EPM, BI, Analytics, and Data Warehousing sessions delivered by Oracle, our customers, and partners. We'll also have hands-on labs, 20 demo pods dedicated to Business Analytics products, and over 30 partners exhibiting their solutions. So what's hot in the Business Analytics program at OpenWorld? Here are some of the "can't miss" sessions at this year's conference:

    - The EPM and BI general sessions, led by SVP of Product Development Balaji Yelamanchili, will highlight what's new and provide a view into Oracle's EPM, BI, and Analytics strategies. Both sessions are scheduled on Monday, October 1st.
    - Thursday keynote: "See More, Act Faster: Oracle Business Analytics", led by Oracle President Mark Hurd, will provide a view into Oracle's strategy for Business Analytics, especially engineered systems designed to provide extreme performance for the most rigorous analytic tasks.
    - Superfast Business Intelligence with Oracle Exalytics: hear about various business intelligence scenarios in which Oracle Exalytics provides exemplary value, from operational reporting and prepackaged applications to analytics on unstructured data.
    - Turn Insights into Real-Time Actions with Oracle Business Intelligence Mobile: learn how Oracle Business Intelligence Mobile enables organizations to deliver relevant information and turn insight into real-time action, no matter where employees are located.
    - Empowering the Business User: Introduction to Oracle Endeca Information Discovery: find out how you can find fast answers to the new questions that confront your business every day, while avoiding the confusion and inconsistencies brought about by spreadsheets and desktop tools.
    - Big Data: The Big Story: learn how to harness big data, your existing data, and predictive analytics to make better decisions in an environment of rapid shifts in behavior and instant feedback. Learn about the technologies that constitute a big data architecture, how to leverage and implement advanced analytics for real-time decisions, and the tools needed to know the unknown.
    - Planning at the Speed of Business with Oracle Exalytics: learn how Oracle Hyperion Planning leverages the power of Oracle Exalytics to do planning faster, with more detail and more users than ever.

    For more details on these and other Business Analytics sessions at OpenWorld, download the Focus On Business Analytics program guide at http://www.oracle.com/openworld/focus-on/index.html. We look forward to seeing you in San Francisco!

    Read the article

  • How do I prevent Ubuntu gradually slowing down to a stop?

    - by user29165
    I really do not know what is causing my desktop environment (D.E.) to gradually slow down. It seems to happen in GNOME 3.2, LXDE, XFCE, and KDE. I am running Ubuntu 11.10, and this problem has been happening from a fresh install. I have noticed that if I restart GNOME, or otherwise re-log-in with any of the D.E.s, the speed resets to normal. I don't seem to have any memory leaks that account for it, and there are no processes using excessive CPU cycles. Given the symptoms, I am guessing that there is an underlying package that interacts with all D.E.s that is the source of the problem. Just to clarify: in extreme situations, if I let the slowdown continue without a restart of the D.E., my system will finally get to a point where everything, even my clock, freezes. The time over which GNOME slows down (noticeably after 17 hrs) is much shorter than for LXDE or XFCE, but eventually they freeze as well. I haven't mentioned Unity since I haven't used it enough to verify the problem, but I don't see why it should be different. Just in case this is relevant: I suspend my computer rather than turn it off. I do this because of the greatly reduced load time, and the 17 hrs I stated above is actual up-time, not time where the D.E. is actually active. However, the slowdown does not seem to be affected by how long the computer has been in suspend mode. I know it is possible that this problem is due to an interaction between two or more of the applications I use, and as such it may not be reproducible by others. In the end I am just wondering (a) whether other people are experiencing this issue and (b) whether anyone has advice on where to start looking for a solution, if one is not already known. I am even open to the idea of switching to the beta release of 12.04 if anyone thinks it will solve the problem. Edit: I took a video of my latest freeze, which you can watch at http://youtu.be/flKUqUzCmdE; sorry about the audio. Edit: Since that video, I checked my memory for errors and tried installing the proprietary video drivers. No memory errors were found, and the proprietary video drivers made the display unusable, so I had to uninstall them. Any thoughts?

    Read the article

  • ReadyBoost in Windows 7

    - by Robert Koritnik
    I've bought an SD card today for my photo frame, but when I inserted it into my notebook I saw I could use it for ReadyBoost.

    Some background: I'm a .NET developer, using VMs and developing web applications (and SharePoint). I use an HP notebook with a Core 2 Duo at 2GHz, 4GB RAM, and a 320GB 7200rpm HD. I simultaneously run Visual Studio 2010 with some plugins, SQL Server, Firefox with at least 10 tabs, Chrome with about 5 tabs, IIS, and a VM with a Server 2008 machine and SharePoint, and occasionally also Photoshop and some InDesign as well. So I don't give my machine a break. :D

    Question: If I buy myself a really fast SDHC card (like the SanDisk 16GB Extreme 30MB/s; is there anything faster?) and use it with Windows 7 ReadyBoost, will I see any performance gain? Is it going to work something like Seagate's Momentus hybrid drive with its 4GB of solid-state storage? What could I actually expect if I do put this card into my machine? And what would be the recommended size?

    Observations: I guess redirecting the page file to it would speed up the system. Some VMs on it would probably run faster as well, because they could run in parallel with the host system's HD, I guess. Am I right or wrong?

    Read the article

  • Mac Mini server (10.6) behind router with FQDN hostname

    - by thechriskelley
    I have a Mac Mini running Mac OS 10.6.6 Server that will be part of a local network, and a static IP from my ISP. I'd like to set up DNS for the Mini properly, with an FQDN as the hostname (example.com). The Mini is behind a router (Apple AirPort Extreme) and is given a private, static IP address. I can't assign it the public static IP directly, because it's behind a router doing DHCP/NAT for other machines on the local net. My end goal is for services to resolve to the server properly, from both outside and inside the local network, via example.com (and subdomains like mail.example.com and www.example.com), which will point to the public static IP assigned to the router. Will DNS work and resolve properly (for mail services and other subdomains) if the server has a private IP address, but the necessary services are forwarded properly through NAT? I'm open to any (hopefully better) suggestions, as my current setup doesn't seem like the best way. Currently, more hardware or another public static IP is not possible. With the current setup, it seems as though one static IP is not necessary anyway. Thanks in advance for any insight.
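
    For what it's worth, the usual pattern here matches what is described: public DNS points every name at the router's public IP, and the AirPort Extreme port-forwards each service to the Mini's private address. A minimal zone-file sketch (the 203.0.113.10 address is a documentation placeholder, not a real assignment):

        ; example.com zone fragment: all public names resolve to the router's
        ; public IP; the router then forwards ports (25/587 for mail, 80 for
        ; web, etc.) to the Mini's private, static address.
        example.com.        IN A      203.0.113.10
        www.example.com.    IN CNAME  example.com.
        mail.example.com.   IN A      203.0.113.10

    Inside the LAN, clients will only resolve cleanly if the router supports NAT loopback or if the Mini's local DNS serves an internal view of the zone pointing at the private address.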

    Read the article

  • New SSD freezing on older motherboard (intel G31)

    - by DJM
    I have an ECS G31T-M motherboard running a Core 2 Quad processor, 4GB RAM, Windows 7 32-bit, and a GeForce 9600GT. I bought a SanDisk Extreme III 120GB SSD (SDSSDX-120G-G25) and installed it today. I'm outputting video from the 9600GT through its included TV-out adapter to component video (to my HDTV). This motherboard is SATA 2, and from what I can tell the SSD runs on the IDE controller; there is nothing fancy to set up for advanced SSD features. I've noticed on other forums (but have not verified with ECS) that this board does not support AHCI. I have two versions of Windows 7 installed on two drives, the SSD and an old 500GB disk drive. When booted from my older 500GB HDD, video plays fine on the HDTV. When I boot the SSD's Windows 7 install, it freezes constantly: video plays OK for a minute, then the picture freezes for 1-3 minutes (sometimes while audio continues playing) and returns for 20-30 seconds before doing the same thing again. Other tasks, such as basic maneuvering through file folders, seem to be no problem. Please help!! Do I need a new system for this thing to work, or could there be other fixes? I updated the firmware to R201 to no avail. DJM

    Read the article

  • AMD FX8350 CPU - CoolerMaster Silencio 650 Case - New Water Cooling System

    - by fat_mike
    After 6 months of use, my AMD FX8350 CPU has been running at high temperatures, with loud noise coming from the CPU fan (I set it that way to keep the CPU cooler). I decided to replace the stock fan with a water cooling system in order to keep my CPU quiet and cool, and to add one or two more case fans too. Here is my case's airflow diagram: http://www.coolermaster.com/microsite/silencio_650/Airflow.html

    My configuration now is: 2x 120mm intake, front (stock with case); 1x 120mm exhaust, rear (stock with case); and the stock CPU fan. I'm planning to buy a Corsair Hydro Series H100i (www.corsair.com/en-us/hydro-series-h100i-extreme-performance-liquid-cpu-cooler), place the radiator in the front of my case (intake), and add a 120mm bottom intake and/or a 140mm top exhaust fan. My CPU socket lies near the top of the motherboard. Is it good practice to have a water-cooling radiator that takes air in? As you can see, the front of the case is made of aluminum. Can fresh air get in? Does it even fit? If not, is it wiser to get the Corsair Hydro Series H80i (www.corsair.com/en-us/hydro-series-h80i-high-performance-liquid-cpu-cooler), place the radiator on top of my case (exhaust), keep the front 2x 120mm stock fans, and add one more as an intake on the bottom? If you have any other ideas, let me know. Thank you.

    EDIT: The CPU fan runs at ~3000 RPM, and the temperature is around 40-43°C at idle on the power-saving profile. The temperature goes over 55°C when running multiple programs and local servers (Tomcat, WAMP), and the fan then runs at around 5500 RPM and is loud! I'm running Windows 8.1; the CPU is not overclocked.

    PS: Due to my reputation I couldn't post all the necessary links. I will edit ASAP.

    Read the article

  • How can I tell which page is creating a high-CPU-load httpd process?

    - by Greg
    I have a LAMP server (a CentOS-based MediaTemple (DV) Extreme with 2GB RAM) running a customized WordPress+bbPress combination. At about 30k pageviews per day the server is starting to groan. It stumbled for about 5 minutes earlier today when there was an influx of traffic. Even under normal conditions I can see that the virtual server is sometimes at 90%+ CPU load. Using top, I can often see 5-7 httpd processes that are each using 15-30% (and sometimes even 50%) CPU. Before we do a big optimization pass (our use of MySQL is probably the culprit) I would love to find the pages that are the main offenders and deal with them first. Is there a way to find out which specific requests were responsible for the most CPU-hungry httpd processes? I have found a lot of info on optimization in general, but nothing on this specific question. Secondly, I know there are a million variables, but if you have any insight on whether we should be at the boundaries of performance with a single dedicated virtual server and a site of this size, I would love to hear your opinion. Should we be thinking about moving to a more powerful server, or should we be focused on optimization on the current server?
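
    One common approach to the first question (a sketch, assuming a stock Apache build with mod_status available, as on most CentOS LAMP setups): enable the server-status page with ExtendedStatus On, which shows the URL each worker is currently serving, then match the PIDs of the CPU-hungry processes from top against that page:

        # httpd.conf fragment: expose per-worker request details, restricted
        # to localhost so it isn't visible to the public.
        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>

    After a graceful restart, viewing http://localhost/server-status (e.g. via curl from an SSH session) lists each worker's PID, CPU usage, and the request it is handling, which is usually enough to spot the offending pages.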

    Read the article
