Search Results

Search found 4109 results on 165 pages for 'plan'.

Page 23/165 | < Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >

  • Unexpected SQL Server 2008 Performance Tip: Avoid local variables in WHERE clause

    - by Jim Duffy
    Sometimes an application needs every last drop of performance it can get; other times, not so much. We’re in the process of converting some legacy Visual FoxPro data into SQL Server 2008 for an application and ran into a situation that required some performance tweaking. I figured the Making Microsoft SQL Server 2008 Fly session that Yavor Angelov (SQL Server Program Manager – Query Processing) presented at PDC 2009 last November would be a good place to start. I was right. One tip among the list of incredibly useful tips Yavor presented was “local variables are bad news for the Query Optimizer and they cause the Query Optimizer to guess”. What that means is you should avoid code like this in your stored procs, even though it seems such an intuitively good idea:

        DECLARE @StartDate datetime
        SET @StartDate = '20091125'
        SELECT * FROM Orders WHERE OrderDate = @StartDate

    Instead you should reference the value directly in the WHERE clause so the Query Optimizer can create a better execution plan:

        SELECT * FROM Orders WHERE OrderDate = '20091125'

    My first thought about this one was that we reference variables in the form of passed-in parameters in WHERE clauses in many of our stored procs. Not to worry though, because parameters ARE available to the Query Optimizer as it compiles the execution plan. I highly recommend checking out Yavor’s session for additional tips to help you squeeze every last drop of performance out of your queries. Have a day. :-|
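    To make the distinction concrete, here is a minimal sketch of the parameter case the last paragraph calls safe (the procedure name is illustrative, not from the post): the optimizer can sniff a parameter's value when the plan is compiled, unlike a local variable assigned inside the body.

        -- Parameters are visible to the Query Optimizer at plan-compile time
        CREATE PROCEDURE dbo.GetOrdersByDate
            @StartDate datetime
        AS
        BEGIN
            -- Fine: @StartDate is a parameter, so its value can be sniffed,
            -- unlike a local variable declared and SET inside the procedure
            SELECT * FROM Orders WHERE OrderDate = @StartDate;
        END;
        GO

        EXEC dbo.GetOrdersByDate @StartDate = '20091125';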

    Read the article

  • DIVIDE vs division operator in #dax

    - by Marco Russo (SQLBI)
    Alberto Ferrari wrote an interesting article about DIVIDE performance in DAX. This new function was introduced in SQL Server Analysis Services 2012 SP1, so it is also available in Excel 2013 (which still doesn’t have other features/fixes introduced by subsequent Cumulative Updates…). The idea is that instead of writing:

        IF ( Sales[Quantity] <> 0, Sales[Amount] / Sales[Quantity], BLANK () )

    you can write:

        DIVIDE ( Sales[Amount], Sales[Quantity] )

    There is a third optional argument in DIVIDE that defines the result in case the denominator (second argument) is zero; by default its value is BLANK, so I omitted the third argument in my example. Using DIVIDE is very important, especially when you use a measure in MDX (for example in an Excel PivotTable), because it raises the chance that the non-empty evaluation for the result is performed in bulk mode instead of cell-by-cell. However, from a DAX point of view, you might find it’s better to use the standard division operator, removing the IF statement. I suggest you read Alberto’s article, because you will find that an expression applying a filter using FILTER is faster than using CALCULATE, which goes against any rule of thumb you might have read until now! Again, this is not always true and depends on many conditions – trying to simplify, we might say that for a simple calculation the query plan generated by FILTER could be more efficient – but, as usual, it depends, and 90% of the time using FILTER instead of CALCULATE produces slower performance. Do not take anything for granted, and always check the query plan when performance is your first concern!
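    As a minimal sketch of the optional third argument mentioned above (column references reused from the example; the alternate result value of 0 is just an illustration):

        -- Returns 0 instead of BLANK when Sales[Quantity] is zero
        DIVIDE ( Sales[Amount], Sales[Quantity], 0 )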

    Read the article

  • Securing Back End API for Mobile Applications

    - by El Guapo
    I have an application that I am writing for both iOS and Android; this application will be served by a RESTful API running on a cluster of servers on "the internets". I am curious how the rest of the world is going about securing their APIs so that only specific applications running on iOS or Android can use them. I could go the same route as other OAuth providers by providing a key/secret combination (2-legged OAuth); however, what do I do if I ever have to change these keys? Do I create a new key/secret for every person that downloads the app? The application is a social-based game that will allow the user to interact with other "participants" in the game based on location, achievements, etc. The API will provide the following functions:
    - Questions, Quests, etc.
    - Profile Management
    - User Interaction
    - Possible Social Interaction
    Once the app gains traction I plan on opening up the API à la Facebook, Twitter, etc., which is easy enough; I plan on implementing an OAuth server and whatnot. However, I want to make sure, during this phase, that only people who are using the application can access and use the API.
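    One common pattern consistent with the key/secret idea described above is HMAC request signing: the client signs each request with the embedded secret and the server recomputes the signature. A minimal sketch in Python (the header names and key values are illustrative assumptions, not from the question; note this only raises the bar, since a secret shipped inside a mobile app can ultimately be extracted):

        import hashlib, hmac, time

        APP_KEY = "my-app-key"          # hypothetical embedded app identifier
        APP_SECRET = b"my-app-secret"   # hypothetical shared secret

        def sign_request(method, path, body=""):
            # Timestamp limits replay; the server rejects stale signatures
            timestamp = str(int(time.time()))
            message = "\n".join([method, path, body, timestamp]).encode("utf-8")
            signature = hmac.new(APP_SECRET, message, hashlib.sha256).hexdigest()
            # Sent as headers; the server recomputes the HMAC and compares
            return {"X-App-Key": APP_KEY,
                    "X-Timestamp": timestamp,
                    "X-Signature": signature}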

    Read the article

  • How much detail is in a good UI regression test?

    - by GlenPeterson
    We use a detailed step-by-step user-interface regression test for our commercial web application. It has a "backbone" test for the most used / most important parts of the system, with optional tests for specific areas of functionality. Using this plan has definitely helped us ensure high-quality software. But having very specific tests can be counter-productive. The tester concentrates on following the test and will completely miss usability issues, or fail to notice fairly obvious problems, such as the bottom part of a page being missing. By contrast, some of the best UI testing happens when building a demo of a new feature. I often do my own best testing by pretending to demonstrate the system to an imaginary prospect. Yet when I tell the testers, "Just demonstrate the system to yourself," they don't cover nearly as much functionality as they do with a detailed point-by-point test. I'm repeatedly asked to provide more and more detail in the test plan so that a new, untrained tester can test with it without asking any questions. Yet detail seems to be counter-productive. How much detail do you put in a regression test to make it effective? What techniques make the tester focus more on the system than on checking off items on the test?

    Read the article

  • ArchBeat Link-o-Rama for November 15, 2012

    - by Bob Rhubart
    WLST Starting and Stopping a WebLogic Environment | Rene van Wijk
    Oracle ACE Rene van Wijk explores how to start a server with as little input as possible.

    Developing and Enforcing a BYOD Policy | Darin Pendergraft
    Darin Pendergraft's post includes links to a recent Mobile Access Policy Survey by SANS as well as registration information for a Nov 15 webcast featuring security expert Tony DeLaGrange from Secure Ideas; SANS instructor, attorney and technology law expert Ben Wright; and Oracle IDM product manager Lee Howarth.

    Cloud Integration White Paper Now Available | Bruce Tierney
    Bruce Tierney shares an overview of Cloud Integration - A Comprehensive Solution, a new white paper he co-authored with David Baum, Rajesh Raheja, and Vijay Pawar.

    My iPad & This Cloud Thing | Floyd Teter
    Oracle ACE Director Floyd Teter explains why the Cloud is making it possible for him to use his iPad for tasks previously relegated to his laptop, and why this same scenario is likely to play out for a great many people.

    3 steps to a cloud database strategy that works | InfoWorld
    "Every day, cloud-based databases add more features, decrease in cost, and become better at handling prime-time business," says InfoWorld blogger David Linthicum. "However, enterprise IT is reluctant to move data to public clouds, citing the tried-and-true excuses of security, privacy, and compliance. Although some have valid points, their reasons often boil down to 'I don't wanna.'"

    Oracle VM Templates for EBS 12.1.3 for Exalogic Now Available | Elke Phelps
    "The templates contain all the required elements to create an Oracle E-Business Suite R12 demonstration system on an Exalogic server," says Elke Phelps. "You can use these templates to quickly build an EBS 12.1.3 demonstration environment, bypassing the operating system and the software install (via the EBS Rapid Install)."

    Thought for the Day
    "A good plan executed today always beats a perfect plan executed tomorrow." — George S. Patton (November 11, 1885 - December 21, 1945)
    Source: SoftwareQuotes.com

    Read the article

  • Multi-user web game with scheduled processing?

    - by Rooq
    I have an idea for a game which I am in the process of designing, but I am struggling to establish whether the way I plan to implement it is possible. The game is a text-based sports management simulation. It will require players to take certain actions through a web browser which will interact with a database - adding, updating and selecting. Most of the code to be executed at this point will be fairly straightforward. The main processing will take place in applications which are scheduled to run on the server at certain times. These apps will process transactions added by the players and also perform some automatic processing based on the game date. My plan was to use a SQL Server database (at last count I require about 20 tables) and VB.NET for all the coding (coming from a mainframe programming background, this language is the simplest for me to get to grips with). I will also need a scheduling tool on the server. Can anyone tell me if what I am planning is feasible before I dive into the actual coding stage of my project?
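    For the scheduling piece described above, the built-in Windows Task Scheduler is usually enough for this pattern: a small console app run at fixed times. A minimal sketch, assuming a hypothetical task name and executable path (SQL Server Agent jobs are another common option if the processing lives in stored procedures):

        REM Run the nightly turn processor at 02:00 every day (name and path are illustrative)
        schtasks /create /tn "GameTurnProcessor" /tr "C:\game\TurnProcessor.exe" /sc daily /st 02:00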

    Read the article

  • Advantages of Hudson and Sonar over manual process or homegrown scripts.

    - by Tom G
    My coworker and I recently got into a debate over a proposed plan at our workplace. We've more or less finished transitioning our Java codebase into one managed and built with Maven. Now, I'd like us to integrate with Hudson and Sonar or something similar. My reasons are that it'll provide a 'zero-click' build step to provide testers with new experimental builds, that it will let us deploy applications to a server more easily, and that tools such as Sonar will provide us with much-needed metrics on code coverage, Javadoc, package dependencies and the like. He thinks that the overhead of getting up to speed with two new frameworks is unacceptable, and that we should simply double down on documentation and create our own scripts for deployment. Since we plan on some aggressive rewrites to pay down the technical debt previous developers incurred (gratuitous use of Java's Serializable interface as a file storage mechanism that has predictably bitten us in the ass), he argues that we can document as we go, and that we'll end up changing a large swath of code in the process anyway. I contend that having the accurate metrics that Sonar (or fill in your favorite similar tool) provides gives us a good place to start for any refactoring efforts, not to mention general maintenance -- after all, knowing which classes are the most poorly documented, even if it's just a starting point, is better than seat-of-the-pants guessing. Am I wrong, and trying to introduce more overhead than we really need? Some more background: an alumnus of our company is working at a Navy research lab now and suggested these two tools in particular as ones they've had great success with. My coworker and I have also had our share of friendly disagreements before -- he's more of the "CLI for all, compiles Gentoo in his spare time and uses Git" type and I'm more of the "give me an intuitive GUI, plays with XNA and is fine with SVN" type -- so there's definitely some element of culture clash here.

    Read the article

  • What are the best practices for phasing out obsolete code?

    - by P.Brian.Mackey
    I have the need to phase out an obsolete method. I am aware of the [Obsolete] attribute. Does Microsoft have a recommended best-practice guide for doing this? Here's my current plan:

    A. I do not want to create a new assembly, because developers would have to add a new reference to their projects and I expect to get a lot of grief from my boss and co-workers if they must do this. We also do not maintain multiple assembly versions; we only use the latest version. Changing this practice would require changing our deployment process, which is a big issue (we would have to teach people how to do things with TFS instead of FinalBuilder, and get them to give up FinalBuilder).

    B. Mark the old method obsolete.

    C. Because the implementation is changing (not the method signature), I need to rename the method rather than create an overload. So, to make users aware of the proper method, I plan to add a message to the [Obsolete] attribute. This part bothers me, because the only change I'm making is decoupling the method from the connection string. But, because I'm not adding a new assembly, I see no way around this.

    Result:

        /// <summary>
        /// Reads settings using the shared connection string (legacy behavior).
        /// </summary>
        /// <param name="settingName"></param>
        /// <returns></returns>
        [Obsolete("Please don't use this anymore because it does not implement IMyDbProvider. Use XXX instead.")]
        public static Dictionary<string, Setting> ReadSettings(string settingName)
        {
            return ReadSettings(settingName, SomeGeneralClass.ConnectionString);
        }

        public Dictionary<string, Setting> ReadSettings2(string settingName)
        {
            // IMyDbProvider.ConnectionString private member added to the class.
            // Probably have to make this an instance method.
            return ReadSettings(settingName);
        }

    Read the article

  • Oracle and Cavium to work together on Java SE 8 on 64-bit ARMv8

    - by Henrik Stahl
    We have been working for some time on a port of the standard Oracle JDK 8 to the upcoming generation of 64-bit servers based on the new ARMv8 microarchitecture. At ARM TechCon 2013 in Santa Clara, California, we announced a roadmap with an expected GA in 2015. This project is going very well and is ahead of schedule. We will soon be at the point where we will make binaries available outside of Oracle - first in a managed beta program with select customers/partners, and sometime during the fall of 2014 as a public early access program. Unless something changes, we are looking at an early 2015 GA. We should be able to share a detailed ramp-down and GA plan by JavaOne 2014. One of the things we (obviously) need to produce a high-quality port is hardware for development and QA. We are therefore happy to announce that we will be collaborating with Cavium on this project. Cavium has been a supporter of the Java ecosystem for a long time, and we have numerous joint customers running various Java versions on Cavium MIPS and ARM-based hardware. Cavium has now agreed to provide us with development hardware and engineering resources so that we can certify and optimize the initial Oracle JDK 8 release on Cavium's ThunderX hardware. This is expected to improve the quality and performance of JDK 8 on ARMv8 in general, as well as on Cavium's hardware. For more information: Cavium announcement on the ThunderX product family; Cavium announcement on Oracle collaboration. As a reminder, we plan to release the Oracle JDK 8 port to 64-bit ARMv8 under the royalty-free (for general purpose servers, etc.) Binary Code License, but we have no current plans to open source it.

    Read the article

  • SQLAuthority News – #SQLPASS 2012 Schedule – Where can You Find Me

    - by pinaldave
    Yesterday I wrote about my trip down memory lane with SQLPASS. It has been a fantastic experience, and I am very confident that this year the same excellent experience is going to be repeated. Before I leave for #SQLPASS every year, I plan where I want to be and what I will be doing. As I travel from India to attend this event (22+ hours flying time and door-to-door travel time around 36 hours), it is very crucial that I plan things in advance. Here is my quick note on where I will be during the SQLPASS event. If you can stop by, I would like to meet you, shake your hand, and archive the memory as a photograph.

    Tuesday, November 6, 2012
    6:30pm-8:00pm - PASS Summit 2012 Welcome Reception

    Wednesday, November 7, 2012
    12pm-1pm - Book signing at Exhibit Hall, Joes 2 Pros booth #117 (FREE BOOK)
    5:30pm-6:30pm - Idera reception at Fox Sports Grill
    7pm-8pm - Embarcadero booth book signing (FREE BOOK)
    8pm onwards - Exhibitor Reception

    Thursday, November 8, 2012
    12pm-1pm - Embarcadero booth book signing (FREE BOOK)
    7pm-10pm - Community Appreciation Party

    Friday, November 9, 2012
    12pm-1pm - Joes 2 Pros book signing at Exhibit Hall booth #117
    11:30am-1pm - Birds of a Feather Luncheon

    Rest of the time: Exhibition Hall, Joes 2 Pros booth #117. Stop by for the goodies!

    Lots of people have already sent me email asking if we can meet for a cup of coffee to discuss SQL. Absolutely! I like cafe mocha with skim milk and whipped cream, and I do not get tired of it.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL PASS, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • How to indicate reliability when reporting availability of competencies

    - by Jan Doggen
    We have employees with competencies:

    Pete: Welder, Carpenter
    Melissa: Carpenter

    Assume they both work 40 hours/week and have not yet been assigned work. We need to report the availability of these competencies, expressed in hours. As far as I can see, we can report this in two ways:

    Method A. When someone has multiple competencies, count them both:
    Welder 40 hours
    Carpenter 80 hours

    Method B. When someone has multiple competencies, count an equal division of hours for each:
    Welder 20 hours
    Carpenter 60 hours

    Method A has our preference:
    - A good planner will know to plan the least available competency first. If 30 hours of welding is planned, we will be left with 10 hours of welder and 50 hours of carpenter availability.
    - Method B has the disadvantage that the planner thinks he cannot plan the job when 30 hours of welding is required.

    However, if we report this way, we would like to give an estimate of the reliability of the numbers for each competency, i.e. how much are these over-reported? In my example A, would I say that carpenter is 100% over-reported, or 50%, or maybe another number? How would I calculate this for large numbers of competencies? I'm sure we are not the first ones dealing with this; is there a 'usual' way of doing this in planning?

    Additionally:
    - Would there be an even better method than A or B?
    - Optionally, we also have a preference order of competencies (like: use him/her in this order); Pete could be 1. welder, 2. carpenter. Does this introduce new options?
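    For larger numbers of employees and competencies, the two counting methods described above are easy to script. A minimal sketch in Python (the data layout is just an illustration of the example's numbers):

        # Hours of availability per competency under Methods A and B
        people = {"Pete": ["Welder", "Carpenter"], "Melissa": ["Carpenter"]}
        hours_per_week = 40

        method_a, method_b = {}, {}
        for person, skills in people.items():
            for skill in skills:
                # Method A: count the full 40 hours once per competency
                method_a[skill] = method_a.get(skill, 0) + hours_per_week
                # Method B: split the 40 hours equally across the person's competencies
                method_b[skill] = method_b.get(skill, 0) + hours_per_week / len(skills)

        print(method_a)  # {'Welder': 40, 'Carpenter': 80}
        print(method_b)  # {'Welder': 20.0, 'Carpenter': 60.0}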

    Read the article

  • How can I calculate a vertex normal for a hard edge?

    - by K.G.
    Here is a picture of a lovely polygon: circled is a vertex, and numbered are its adjacent faces. I have calculated the normals of those faces as such (not yet normalized, 0-indexed):

    Vertex 1 normal 0: 0.000000 0.000000 -0.250000
    Vertex 1 normal 1: 0.000000 0.000000 -0.250000
    Vertex 1 normal 2: -0.250000 0.000000 0.000000
    Vertex 1 normal 3: -0.250000 0.000000 0.000000
    Vertex 1 normal 4: 0.250000 0.000000 0.000000

    What I'm wondering is, how can I determine, taken as given that I want this vertex to represent a hard edge, whether its normal should be the normal of 1/2 or 3/4? My plan after I glanced at the sketch I used to put this together was "Ha! I'll just use whichever two faces have the same normal!" Now I see that there are two sets of two faces for which this is true. Is there a rule I can apply, based on the face winding, angle of the adjacent edges, moon phase, or coin flip, to consistently choose a normal direction for this box? For the record, all of the other polygons I plan to use will have their normals dictated in Maya, but after encountering this problem, it made me really curious.

    Read the article

  • Announcing Oracle Database Mobile Server 11gR2

    - by Eric Jensen
    I'm pleased to announce that Oracle Database Mobile Server 11gR2 has been released. It's available now for download by existing customers, or anyone who wants to try it out. New features include:
    - Support for J2ME platforms, specifically CDC platforms including OJEC (this is in addition to our existing support for Java SE and SE Embedded)
    - Per-application integration with Berkeley DB on Android
    - Server-side support for the Apache TomEE platform
    Adding support for Oracle Java Micro Edition Embedded Client (OJEC for short) is an important milestone for us; it enables Database Mobile Server to work with any of the incredibly wide array of devices that run J2ME. In particular, it enables management of networks of embedded devices, a.k.a. machine-to-machine (M2M) networks. As these types of networks become more common in areas like healthcare, automotive, and manufacturing, we're seeing demand for Database Mobile Server from new and different areas. This is in addition to our existing array of mobile device use cases. The Android integration feature with Berkeley DB represents the completion of phase I of our Android support plan; we now offer a full set of sync, device and app management features for that platform. Going forward, we plan to continue the dual-focus approach, supporting mobile platforms such as Android and iOS (hint) on the one hand, and networks of embedded M2M devices on the other. In either case, Database Mobile Server continues to be the best way to connect data-driven applications to an Oracle backend.

    Read the article

  • Doing a passable 4X game AI

    - by Extrakun
    I am coding a rather "simple" 4X game (if a 4X game can be simple). It's indie in scope, and I am wondering if there's any way to come up with a passable AI without spending months coding it. The game has three major decision-making portions: spending production points, spending movement points, and spending tech points (basically there are 3 different 'currencies'; currency unspent at the end of a turn is not saved).

    Spend Production Points:
    - Upgrade a planet (increase its tech and production)
    - Build ships (3 types)

    Move ships from planet to planet (costing Movement Points):
    - Move to attack
    - Move to fortify

    Research Tech (can partially research a tech, i.e. as in Master of Orion)

    The plan for me right now is a brute-force approach. There are basically 4 broad options for the player:
    - Upgrade planet(s) to increase production and tech output
    - Conquer as many planets as possible
    - Secure as many planets as possible
    - Get to a certain tech as soon as possible

    For each decision, I will iterate through the possible options and come up with a score; the AI will then choose the decision with the highest score (see the sketch below). Right now I have no idea how to 'mix decisions', that is, for example, when the AI wishes to upgrade and conquer planets at the same time. I suppose I can have another piece of logic which does a brute-force optimization on a combination of those 4 decisions.... At least, that's my plan if I can't think of anything better. Is there any faster way to make a passable AI? I don't need a very good one, to rival Deep Blue or such, just something that has the illusion of intelligence. This is my first time doing an AI on this scale, so I dare not try something too grand either. So far I have experience with FSM, DFS, BFS and A*.
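    A minimal sketch of the brute-force scoring loop described above, in Python for brevity (the strategy names mirror the four broad options; the scoring functions and state attributes are placeholder heuristics you would replace with real game-state evaluation):

        # Utility-based AI: score each broad strategy, pick the best one this turn
        def score_upgrade(state):  return state.idle_production * 2     # placeholder heuristic
        def score_conquer(state):  return state.enemy_weakness * 3      # placeholder heuristic
        def score_secure(state):   return state.threat_level            # placeholder heuristic
        def score_tech(state):     return -state.turns_to_key_tech      # placeholder heuristic

        STRATEGIES = {
            "upgrade": score_upgrade,
            "conquer": score_conquer,
            "secure":  score_secure,
            "tech":    score_tech,
        }

        def choose_strategy(state):
            # Brute force: evaluate every option, keep the highest scorer
            return max(STRATEGIES, key=lambda name: STRATEGIES[name](state))

    One way to get the 'mixing' the question asks about, consistent with its closing idea, is to score individual actions per currency rather than one global strategy, so each currency is spent on its own highest-scoring action.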

    Read the article

  • Technical Integration Roadmap for OBI11g and Oracle Hyperion EPM System

    - by Mike.Hallett(at)Oracle-BI&EPM
    There is an excellent technical whitepaper on the integration roadmap for Oracle Business Intelligence Enterprise Edition and the Oracle Hyperion Enterprise Performance Management System (download at this link). This document lists the integration points between all current releases of Oracle BI EE and EPM System releases, with live links to other relevant documentation also provided. You may also be interested in the overall Hyperion EPM System Documentation Resources, which can be found from the Doc Portal. And there are two new tools for EPM at My Oracle Support (this requires your Oracle logon):

    Cumulative Feature Overview Tool
    This new tool offers a simple way to determine the features developed between releases, to assist you in your upgrade implementations. The tool helps you plan your upgrades by providing concise descriptions of new and enhanced solutions and functionality added between your current and target releases. With the Cumulative Feature Overview Tool, you can quickly and easily find information about new features for each EPM System product.

    Defects Fixed Finder Tool
    This new tool provides an efficient way to review the defects fixed in patch set updates, patch set exceptions, and patch sets for major releases, starting with Release 11.1.1. The tool helps you plan patch implementations by providing concise descriptions of defects fixed after your current release. The Defects Fixed Finder enables you to easily find information about defects fixed for each EPM System product.

    Read the article

  • What benefits can I get upgrading my ASP.NET (Webform) + DAL(EF) + Repository + BLL structure to MVC?

    - by Etienne
    I'm in the process of defining an approach that may best fit our needs for a big web application development project. For now, I'm thinking of going with an ASP.NET architecture with a DAL using Entity Framework, a Repository concept so the BLL does not access the DAL directly, and a BLL that calls the repository and performs all the manipulation necessary to prepare data to push into a presentation layer (.aspx files). I don't plan to use ASP.NET controls and prefer to keep things simple and lightweight, using plain HTML and jQuery UI controls, and doing most of the server calls with jQuery Ajax. Sometimes, when needed, I plan to use handlers (.ashx) to call BLL methods that will return JSON or HTML to the client for dynamic stuff. My solution also has a test project that mocks the Repository with in-memory data, so that testing BLL methods does not depend on the database... It may be useful to add that we will build a big application over this architecture, with hundreds of tables and stored procedures and a lot of reading and writing to the database. My question is: having this architecture in mind, is there any evident advantage I could obtain by using an MVC3 project instead of the described architecture based on WebForms? Do you see any problem in this architecture that may cause us trouble during the next steps of development? I know the MVC pattern from using it in other projects with Django... but the Microsoft MVC implementation looks so much more complex and verbose than Django's MVC, and that's why I'm hesitating (or waiting for a little push?) right now before jumping into it... We are in a real project with deadlines and don't want to slow the development process without any real benefits.

    Read the article

  • Non use of persisted data – Part deux

    - by Dave Ballantyne
    In my last blog I showed how persisted data may not be used if you have used the base data in an include on an index. That wasn't the only problem I've had that showed the same symptom. Using the same code as before, I was executing something similar to the below:

        select BillToAddressID, SOD.SalesOrderDetailID, SOH.CleanedGuid
        from sales.salesorderheader SOH
        join Sales.SalesOrderDetail SOD on SOH.SalesOrderID = SOD.SalesOrderID

    But, due to a distribution error in statistics, I found it necessary to use a table hint. In this case, I wanted to force a loop join:

        select BillToAddressID, SOD.SalesOrderDetailID, SOH.CleanedGuid
        from sales.salesorderheader SOH
        inner loop join Sales.SalesOrderDetail SOD on SOH.SalesOrderID = SOD.SalesOrderID

    But, being the diligent T-SQL developer that I am, looking at the execution plan I noticed that the 'compute scalar' operator was again calling the function. Again, profiler is a more graphic way to view this…

    All very odd: just because I've forced a join, which has NOTHING to do with my persisted data, something is causing the data to be re-evaluated. I'm not sure there is any easy fix you can apply in the T-SQL here, but again it's a lesson learned (or rather reinforced): examine the execution plan of every query you write to ensure that it is operating as you thought it would.

    Read the article

  • Changing website Url - Am I making an SEO mistake

    - by Denis
    I have a website with a .com domain that is a year old. The business is a shop based in Ireland, and I have purchased a .ie domain. I plan to move the website over to the new domain. SEO-wise, good or bad idea?

    Old URL - SmythsOfTerenure.com | New URL - SmythsComputerRepair.ie (I am using fake names and a fictional business in the example URLs)

    The new domain has my main keyword in it. The old domain has my family name and business location (city district). It currently ranks high for lots of relevant keywords in Google, with low traffic and low competition. Current website traffic is about 80 sessions per week; 80% of that traffic is organic from Google. I am changing domain in an attempt to help SEO long term, by having a ccTLD (.ie rather than .com) and having my main keyword in the domain. I plan to do 301 redirects from old to new and update Google Webmaster Tools and Google Analytics, but am I making a mistake changing it at all, as I know rankings may fall in the short term? Homepage PR=0 and very few inbound links. Should I just leave it on the old domain? Or after a few months should I be back up, ranking as well as I am now?
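    For the 301 redirects mentioned above, a minimal sketch assuming an Apache host (the domains are the question's placeholder names; on IIS the equivalent would be an httpRedirect or URL Rewrite rule):

        # .htaccess on the old domain: permanently redirect every URL to the new domain
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?smythsofterenure\.com$ [NC]
        RewriteRule ^(.*)$ http://smythscomputerrepair.ie/$1 [R=301,L]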

    Read the article

  • FREE goodies if you are a UK based software house already live on the Windows Azure Platform

    - by Eric Nelson
    In the UK we have seen some fantastic take-up of the Windows Azure Platform, and we have lined up some great stuff in 2011 to help companies fully exploit the Cloud – but we need you to tell us what you are up to! Once you tell us about your plans around Windows Azure, you will get access to FREE benefits, including email-based developer support and a free monthly allowance of Windows Azure, SQL Azure and AppFabric from January 2011 – and more! (This offer is referred to as Cloud Essentials and is explained here.) And… we will be able to plan the right amount of activity to continue to help early adopters through 2011.

    Step 1: Sign up your company to Microsoft Platform Ready (you will need a Windows Live ID to do this).
    Step 2: Add your applications. For each application, state your intention around Windows Azure (and SQL Azure etc. if you so wish).
    Step 3: Verify your application works on the Windows Azure Platform.
    Step 4 (optional): Test your application on the Windows Azure Platform. Download the FREE test tool, test your application with it, and upload the successful results.
    Step 5: Revisit the MPR site in early January to get details of Cloud Essentials and other benefits.

    P.S. You might want some background on the "fantastic take-up" bit:
    - We helped over 3000 UK companies deploy test applications during the beta phase of Windows Azure.
    - We directly trained over 1000 UK developers during 2010.
    - We already have over 100 UK applications profiled on the Microsoft Platform Ready site.
    - And in a recent survey of UK ISVs you all look pretty excited about the Cloud – 42% already offer their solution on the Cloud or plan to.

    Read the article

  • Upcoming events : OBUG Connect Conference 2012

    - by Maria Colgan
    The Oracle Benelux User Group (OBUG) have given me an amazing opportunity to present a one-day Optimizer workshop at their annual Connect Conference in Maastricht on April 24th. The workshop will run as one of the parallel tracks at the conference and consists of three 45-minute sessions. Each session can be attended stand-alone, but they build on each other, allowing someone new to the Oracle Optimizer or SQL tuning to come away from the conference with a better understanding of how the Optimizer works and what techniques they should deploy to tune their SQL. Below is a brief description of each of the sessions.

    Session 7 - 11:30am - Oracle Optimizer: Understanding Optimizer Statistics
    The workshop opens with a discussion on Optimizer statistics and the features introduced in Oracle Database 11g to improve the quality and efficiency of statistics-gathering. The session will also provide strategies for managing statistics in various database environments.

    Session 27 - 14:30 - Oracle Optimizer: Explain the Explain Plan
    The workshop will continue with a detailed examination of the different aspects of an execution plan, from selectivity to parallel execution, and explains what information you should be gleaning from the plan.

    Session 47 - 15:45 - Top Tips to Get Optimal Execution Plans
    Finally, I will show you how to identify and resolve the most common SQL execution performance problems, such as poor cardinality estimates, bind peeking issues, and selecting the wrong access method.

    Hopefully I will see you there! +Maria Colgan

    Read the article

  • Is this an acceptable approach to undo/redo in Python?

    - by Codemonkey
    I'm making an application (wxPython) to process some data from Excel documents. I want the user to be able to undo and redo actions, even gigantic actions like processing the contents of 10,000 cells simultaneously. I Googled the topic, and all the solutions I could find involve a lot of black magic or are overly complicated. Here is how I imagine my simple undo/redo scheme. I write two classes: one called ActionStack, and an abstract one called Action. Every "undoable" operation must be a subclass of Action and define the methods do and undo. The Action subclass is passed the instance of the "document", or data model, and is responsible for committing the operation and remembering how to undo the change. Now, every document is associated with an instance of the ActionStack. The ActionStack maintains a stack of actions (surprise!). Every time actions are undone and new actions are performed, all undone actions are removed forever. The ActionStack will also automatically remove the oldest Action when the stack reaches the configurable maximum size. I imagine the workflow would produce code looking something like this:

        class TableDocument(object):
            def __init__(self, table):
                self.table = table
                self.action_stack = ActionStack(history_limit=50)

            # ...

            def delete_cells(self, cells):
                self.action_stack.push(DeleteAction(self, cells))

            def add_column(self, index, name=''):
                self.action_stack.push(AddColumnAction(self, index, name))

            # ...

            def undo(self, count=1):
                self.action_stack.undo(count)

            def redo(self, count=1):
                self.action_stack.redo(count)

    Given that none of the methods I've found are this simple, I thought I'd get the experts' opinion before I go ahead with this plan. More specifically, what I'm wondering is: are there any glaring holes in this plan that I'm not seeing?
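    To make the scheme concrete, here is a minimal sketch of the two classes described above. This is one reading of the described design, not code from the question; the push/undo/redo semantics follow the paragraphs about discarding undone actions and capping history:

        from collections import deque

        class Action(object):
            """Abstract base: subclasses implement do() and undo()."""
            def do(self):
                raise NotImplementedError
            def undo(self):
                raise NotImplementedError

        class ActionStack(object):
            def __init__(self, history_limit=50):
                # deque with maxlen silently drops the oldest action when full
                self._done = deque(maxlen=history_limit)
                self._undone = []

            def push(self, action):
                action.do()
                self._done.append(action)
                self._undone = []   # a new action discards the redo history

            def undo(self, count=1):
                for _ in range(count):
                    if not self._done:
                        break
                    action = self._done.pop()
                    action.undo()
                    self._undone.append(action)

            def redo(self, count=1):
                for _ in range(count):
                    if not self._undone:
                        break
                    action = self._undone.pop()
                    action.do()
                    self._done.append(action)

    One design note: deque(maxlen=...) gives the "remove the oldest Action when the stack is full" behavior for free, at the cost of those oldest actions becoming permanently un-undoable, which matches the scheme as described.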

    Read the article

  • Ubuntu 13.04 alongside Windows 8 - How to partition from Windows

    - by mengelkoch
    I plan to install Ubuntu 13.04 alongside Windows 8, and I'm looking for a CLEAR answer on how to conduct the partitioning appropriately. I'm very new to all of this, so a thorough explanation with minimal jargon would be great. I have an Acer Aspire M5 x64 with 6 GB RAM. I think I have already figured out how to deal with the Fast Startup, UEFI and Secure Boot issues (I disabled Fast Startup and disabled Secure Boot). I am able to boot into Ubuntu from a live USB, and I think I am ready to install Ubuntu. Note: despite some advice found here, I do have to disable Secure Boot to boot 13.04 from my live USB. From what I have read here, it seems that I should (at least at first) create the partitions from WITHIN Windows 8, not from the live USB, to avoid reported problems. I have run compmgmt.msc and I see the existing partitions:

    Disk 0: 400 MB Recovery; 300 MB EFI System; Acer (C:) 444.95 GB (Boot, Page File, Crash Dump, Primary Partition); 20 GB Recovery
    Disk 1: 3.74 GB Primary Partition; 14.90 GB Primary Partition

    I gather I need to create a root ('/') partition, a swap partition, and a home partition. Please explain what these are, how big they should be, how I create them from Windows Disk Management, and anything else I need to know. Eventually, I plan to fully replace Windows 8 with Ubuntu, but for now I want to run alongside Windows 8 and not screw things up. I don't have any critical files saved on this computer yet. Thanks.

    Read the article

  • JEOS

    - by john.graves(at)oracle.com
    JEOS stands for Just Enough Operating System. It is a great environment for building virtual machines without all the clutter of a windowing system, games, office products, etc. It comes from Ubuntu: you install it using the Ubuntu server installer, but rather than picking a standard install, press F4 and choose "Install a minimal system."

    Note: the "Install a minimal virtual machine" option is specific to VMware, and I plan to use VirtualBox. Be sure to include OpenSSH in the install so that it installs sshd. *** Also, if you plan to install XE, you'll need to modify the partitions to have a larger swap space (at least 1.5 GB). ***

    Once the install is done, I find it useful to install a few other items.

    Update Ubuntu:

        apt-get update

    Install some other tools:

        apt-get install openjdk-6-jre

    Yes, Java will be included in any of the WebLogic installs, but I need this one if I want to do remote display (for config wizards, etc.).

        apt-get install gcc

    Some apps require you to rebuild kernel modules, so you'll need a basic compiler.

    Install guest additions: choose the VirtualBox Devices -> "Install Guest Additions…" option. This sets up a /dev/cdrom or /dev/cdrom1. You'll need to mount it manually:

        sudo mount /dev/cdrom /mnt

    Then run the Linux .bin file.

    Update nofile limits. Most Java apps fail with the standard Ubuntu settings. Edit /etc/security/limits.conf and add these lines at the end:

        *     soft nofile 65535
        *     hard nofile 65535
        root  soft nofile 65535
        root  hard nofile 65535

    These numbers are very high and I wouldn't use them on a production system, but for this environment they are fine.

    To get rid of the annoying piix error on boot, add the following line to the /etc/modprobe.d/blacklist.conf file:

        blacklist i2c_piix4

    Read the article

  • My new favourite traceflag

    - by Dave Ballantyne
    As we are all aware, there are a number of traceflags: some documented, some semi-documented and some completely undocumented. Here is an undocumented one that Paul White (b|t) mentioned almost as an aside in one of his excellent blog posts.

    Much has been written about residual predicates and how a predicate can be pushed into a seek/scan operation. This is a good thing to happen; it saves a lot of processing from having to be done. For the uninitiated, though: if we have a simple SELECT statement, say one that finds people named Smith who have a non-null middle name (a sketch of such a query appears below), the process that SQL Server goes through to resolve it is:

    1. The index IX_Person_LastName_FirstName_MiddleName is navigated to find the first "Smith".
    2. For each "Smith", the middle name is checked for being null.

    Two operations! And the execution plan doesn't fully represent all the work that is being undertaken. There is only a single seek operation; the work undertaken to resolve the condition "MiddleName is not null" has been pushed into it. This can be seen in the properties: "Seek predicate" is how the index has been navigated, and "Predicate" is the condition run over every row. A scan inside a seek!

    So the question is: how many rows were resolved by the seek and how many by the scan? How many rows did the filter remove? Wouldn't it be nice if this operation could be split? That is exactly what traceflag 9130 does. Executing the query with the traceflag enabled changes the plan rather dramatically, and should change how we think about the index seek itself. The Filter operator has been added and, unsurprisingly, the condition in it is "MiddleName is not null". So it is now evident that the seek operation found 103 Smiths, and 60 of those Smiths had a non-null MiddleName.

    This traceflag has no place on a production system; don't even think about it.
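    The queries from the original post are not preserved in this excerpt. A reconstruction consistent with the description, using the AdventureWorks Person.Person table that carries the index named above (enabling the flag per-query via QUERYTRACEON is my assumption; a session-level DBCC TRACEON(9130) would also work):

        -- The seek predicate is LastName = 'Smith'; the residual predicate
        -- (MiddleName IS NOT NULL) is pushed into the same seek operator
        SELECT LastName, FirstName, MiddleName
        FROM Person.Person
        WHERE LastName = 'Smith'
          AND MiddleName IS NOT NULL;

        -- With traceflag 9130 the residual predicate surfaces as a separate Filter
        SELECT LastName, FirstName, MiddleName
        FROM Person.Person
        WHERE LastName = 'Smith'
          AND MiddleName IS NOT NULL
        OPTION (QUERYTRACEON 9130);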

    Read the article

  • Getting started on Large Projects

    - by Mercfh
    So I just graduated from my College with a B.S. in Comp. Science (although it was a good school, we're the only accredited CS department in our state.....for w/e that means lol) I feel like im a decent programmer, not amazing....but not terrible. Anyways I got my first job about 2 weeks ago, it's a pretty entry level job: firmware development/tester (I know I know people look down on testers...but I gotta start somewhere). Anyways there isn't a whole lot of coding to be had right now (mostly simple stuff) but here soon I have the option of helping out with development (which is what I want to do) Thing is....I have NEVER worked on a huge project. I mean in school sure we had "group" projects but nothing really big. So I'm not super familiar with HUGE classes and such (main language was C++)....Is this something I'll just get used to with time? Some fellow students were used to that with internships and such...but I never got that chance. My job was mostly a "one man job" kinda thing. Mostly little things. Plus in class we never did huge projects anyways. So how do you guys I guess "plan" out these things? Do you use a whiteboard and plan out classes and such....or what. Also...another worry of mine is that I have to use google......ALOT for examples of code, because sometimes I just don't get how something works. Is this normal? It makes me feel sorta.....stupid I guess. I mean "technically" i've had 4-5 years coding experience......but it really only feels like I had 2 years of REAL experience. If that makes any sense? Thanks

    Read the article
