Search Results

Search found 93612 results on 3745 pages for 'inquisitive one'.

Page 350/3745 | < Previous Page | 346 347 348 349 350 351 352 353 354 355 356 357  | Next Page >

  • OWB – How to update OWB after Database Cloning

    - by David Allan
    One of the most commonly asked questions leads to one of the most commonly accessed support documents (strange, that) for OWB: the document describing how to update the OWB repository details after cloning the Oracle database. The document on the Oracle support site has id 434272.1, and is titled 'How To Update Warehouse Builder After A Database Cloning (Doc ID 434272.1)'. This post is really for me to remember the document id ;-)

    Read the article

  • C#/.NET Little Wonders: The Generic Func Delegates

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here.

    Back in one of my three original "Little Wonders" trilogy of posts, I listed generic delegates as one of the Little Wonders of .NET. Later, someone posted a comment saying that they would love more detail on the generic delegates and their uses, since my original entry just scratched the surface of them. Last week, I began our look at some of the handy generic delegates built into .NET with a description of delegates in general, and the Action family of delegates. This week, I'll launch into a look at the Func family of generic delegates and how they can be used to support generic, reusable algorithms and classes.

    Quick Delegate Recap

    Delegates are similar to function pointers in C++ in that they allow you to store a reference to a method. They can store references to either static or instance methods, and can actually be used to chain several methods together in one delegate. Delegates are very type-safe and can be satisfied with any standard method, anonymous method, or lambda expression. They can also be null (referring to no method), so care should be taken to make sure that a delegate is not null before you invoke it.

    Delegates are defined using the keyword delegate, where the delegate's type name is placed where you would typically place the method name:

        // This delegate matches any method that takes a string and returns nothing
        public delegate void Log(string message);

    This defines a delegate type named Log that can be used to store references to any method that satisfies its signature (whether instance, static, lambda expression, etc.). Delegate instances can then be assigned zero (null) or more methods using the = operator, which replaces the existing delegate chain, or the += operator, which adds a method to the end of the delegate chain:

        // creates a delegate instance named currentLogger, initially set to Console.Out.WriteLine
        Log currentLogger = Console.Out.WriteLine;

        // invokes the delegate, which writes to the console out
        currentLogger("Hi Standard Out!");

        // append a delegate to Console.Error.WriteLine to go to std error
        currentLogger += Console.Error.WriteLine;

        // invokes the delegate chain and writes the message to std out and std err
        currentLogger("Hi Standard Out and Error!");

    While delegates give us a lot of power, it can be cumbersome to re-create fairly standard delegate definitions repeatedly; for this purpose, the generic delegates were introduced in various stages in .NET. These support various method types with particular signatures.

    Note: a caveat with generic delegates is that while they can support multiple parameters, they do not match methods that contain ref or out parameters. If you want a delegate to represent methods that take ref or out parameters, you will need to create a custom delegate.

    We've got the Func… delegates

    Just like its cousin, the Action delegate family, the Func delegate family gives us a lot of power to use generic delegates to make classes and algorithms more generic. Using them keeps us from having to define a new delegate type when we need to make a class or algorithm generic.

    Remember that the point of the Action delegate family was to be able to perform an "action" on an item, with no return result.
    Thus, Action delegates can be used to represent most methods that take 0 to 16 arguments but return void. The Func delegate family was introduced in .NET 3.5 with the advent of LINQ, and gives us the power to define a function that can be called on 0 to 16 arguments and returns a result. Thus, the main difference between Action and Func, from a delegate perspective, is that Actions return nothing, while Funcs return a result.

    The Func family of delegates has signatures as follows:

    Func<TResult> – matches a method that takes no arguments, and returns a value of type TResult.
    Func<T, TResult> – matches a method that takes an argument of type T, and returns a value of type TResult.
    Func<T1, T2, TResult> – matches a method that takes arguments of type T1 and T2, and returns a value of type TResult.
    Func<T1, T2, …, TResult> – and so on up to 16 arguments, returning a value of type TResult.

    These are handy because they quickly allow you to specify that a method or class you design will perform a function to produce a result, as long as the method you specify meets the signature.

    For example, let's say you were designing a generic aggregator, and you wanted to allow the user to define how the values will be aggregated into the result (i.e. Sum, Min, Max, etc.). To do this, we would ask the user of our class to pass in a method that takes the current total and the next value, and produces a new total. A class like this could look like:

        public sealed class Aggregator<TValue, TResult>
        {
            // holds the method that takes the previous result, combines it with the next value, and creates a new result
            private Func<TResult, TValue, TResult> _aggregationMethod;

            // gets the current result of the aggregation
            public TResult Result { get; private set; }

            // construct the aggregator given the method to use to aggregate values
            public Aggregator(Func<TResult, TValue, TResult> aggregationMethod)
            {
                if (aggregationMethod == null) throw new ArgumentNullException("aggregationMethod");

                _aggregationMethod = aggregationMethod;
            }

            // method to add the next value
            public void Aggregate(TValue nextValue)
            {
                // performs the aggregation method on the current result and the next value, and stores it
                Result = _aggregationMethod(Result, nextValue);
            }
        }

    Of course, LINQ already has an Aggregate extension method, but that works on a sequence of IEnumerable<T>, whereas this is designed to work more with aggregating single results over time (such as keeping track of the max response time for a service).
    We could then use this generic aggregator to find the sum of a series of values over time, or the max of a series of values over time (among other things):

        // creates an aggregator that adds the next value to the total, to sum the values
        var sumAggregator = new Aggregator<int, int>((total, next) => total + next);

        // creates an aggregator (using a static method) that returns the max of the previous result and the next value
        var maxAggregator = new Aggregator<int, int>(Math.Max);

    So, if we were timing the response time of a web method every time it was called, we could pass that response time to both of these aggregators to get an idea of the total time spent in that web method, and the max time spent in any one call to the web method:

        // total will be 13 and max 13
        int responseTime = 13;
        sumAggregator.Aggregate(responseTime);
        maxAggregator.Aggregate(responseTime);

        // total will be 20 and max still 13
        responseTime = 7;
        sumAggregator.Aggregate(responseTime);
        maxAggregator.Aggregate(responseTime);

        // total will be 40 and max now 20
        responseTime = 20;
        sumAggregator.Aggregate(responseTime);
        maxAggregator.Aggregate(responseTime);

    The Func delegate family is useful for making generic algorithms and classes, and in particular allows the caller of a method or user of a class to specify a function to be performed in order to generate a result.

    What is the result of a Func delegate chain?

    If you remember, we said earlier that you can assign multiple methods to a delegate by using the += operator to chain them. So how does this affect delegates such as Func that return a value, when applied to something like the code below?

        Func<int, int, int> combo = null;

        // What if we wanted to aggregate the sum and max together?
        combo += (total, next) => total + next;
        combo += Math.Max;

        // what is the result?
        var comboAggregator = new Aggregator<int, int>(combo);

    Well, in .NET if you chain multiple methods in a delegate, they will all get invoked, but the result of the delegate is the result of the last method invoked in the chain. Thus, this aggregator would always produce the Math.Max() result. The other chained method (the sum) gets executed first, but its result is thrown away:

        // result is 13
        int responseTime = 13;
        comboAggregator.Aggregate(responseTime);

        // result is still 13
        responseTime = 7;
        comboAggregator.Aggregate(responseTime);

        // result is now 20
        responseTime = 20;
        comboAggregator.Aggregate(responseTime);

    So remember, you can chain multiple Funcs (or other delegates that return values) together, but if you do so you will only get the last executed result.

    Func delegates and co-variance/contra-variance in .NET 4.0

    Just like the Action delegate, as of .NET 4.0, the Func delegate family is contra-variant on its arguments. In addition, it is co-variant on its return type. To support this, in .NET 4.0 the signatures of the Func delegates changed to:

    Func<out TResult> – matches a method that takes no arguments, and returns a value of type TResult (or a more derived type).
    Func<in T, out TResult> – matches a method that takes an argument of type T (or a less derived type), and returns a value of type TResult (or a more derived type).
    Func<in T1, in T2, out TResult> – matches a method that takes arguments of type T1 and T2 (or less derived types), and returns a value of type TResult (or a more derived type).
    Func<in T1, in T2, …, out TResult> – and so on up to 16 arguments, returning a value of type TResult (or a more derived type).

    Notice the addition of the in and out keywords before each of the generic type placeholders. As we saw last week, the in keyword is used to specify that a generic type can be contra-variant: it can match the given type or a type that is less derived. The out keyword is used to specify that a generic type can be co-variant: it can match the given type or a type that is more derived.

    On contra-variance, if you say you need a function that will accept a string, you can just as easily give it a function that accepts an object. In other words, if you say "give me a function that will process dogs", I could pass you a method that will process any animal, because all dogs are animals. On the co-variance side, if you say you need a function that returns an object, you can just as easily pass it a function that returns a string, because any string returned from the given method can be accepted by a delegate expecting an object result, since string is more derived. Once again, in other words, if you say "give me a method that creates an animal", I can pass you a method that will create a dog, because all dogs are animals.

    It really all makes sense: you can pass a more specific thing to a less specific parameter, and you can return a more specific thing as a less specific result. In other words, pay attention to the direction the item travels (parameters go in, results come out). Keeping that in mind, you can always pass more specific things in and return more specific things out.

    For example, in the code below, we have a method that takes a Func<object> to generate an object, but we can pass it a Func<string> because the return type of object can obviously accept a return value of string as well:

        // since Func<object> is co-variant, this will accept a Func<string>, etc...
        public static string Sequence(int count, Func<object> generator)
        {
            var builder = new StringBuilder();

            for (int i = 0; i < count; i++)
            {
                object value = generator();
                builder.Append(value);
            }

            return builder.ToString();
        }

    Even though the method above takes a Func<object>, we can pass a Func<string> because the TResult type placeholder is co-variant and accepts types that are more derived as well:

        // delegate that's typed to return string
        Func<string> stringGenerator = () => DateTime.Now.ToString();

        // This will work in .NET 4.0, but not in previous versions
        Sequence(100, stringGenerator);

    Previous versions of .NET implemented some forms of co-variance and contra-variance before, but .NET 4.0 goes one step further and allows you to pass or assign a Func<A, BResult> to a Func<Y, ZResult> as long as A is less derived than (or the same as) Y, and BResult is more derived than (or the same as) ZResult.
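    The code example above demonstrates the co-variant side; to make the contra-variant side concrete as well, here is a small sketch of our own (the delegate names are invented for illustration):

        // measures any object; note the parameter type is object, not string
        Func<object, int> measureAnything = o => o.ToString().Length;

        // allowed in .NET 4.0, because T in Func<in T, out TResult> is contra-variant:
        // a delegate that can handle any object can stand in where only strings will be passed
        Func<string, int> measureString = measureAnything;

        // the call still just passes a string, which measureAnything happily accepts
        int length = measureString("dachshund"); // 9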
    Sidebar: The Func and the Predicate

    A method that takes one argument and returns a bool is generally thought of as a predicate. Predicates are used to examine an item and determine whether that item satisfies a particular condition. Predicates are typically unary, but you may also have binary and other predicates as well.

    Predicates are often used to filter results, such as in the LINQ Where() extension method:

        var numbers = new[] { 1, 2, 4, 13, 8, 10, 27 };

        // call Where() using a predicate which determines whether the number is even
        var evens = numbers.Where(num => num % 2 == 0);

    As of .NET 3.5, predicates are typically represented as Func<T, bool>, where T is the type of the item to examine. Prior to .NET 3.5, there was a Predicate<T> type that tended to be used instead (which we'll discuss next week) and is still supported, but most developers recommend using Func<T, bool> now, as it prevents confusion between overloads that accept unary predicates, binary predicates, etc.:

        // this seems more confusing as an overload set, because of Predicate vs Func
        public static void SomeMethod(Predicate<int> unaryPredicate) { }
        public static void SomeMethod(Func<int, int, bool> binaryPredicate) { }

        // this seems more consistent as an overload set, since it just uses Func
        public static void SomeMethod(Func<int, bool> unaryPredicate) { }
        public static void SomeMethod(Func<int, int, bool> binaryPredicate) { }

    Also, even though Predicate<T> and Func<T, bool> match the same signatures, they are separate types! Thus you cannot assign a Predicate<T> instance to a Func<T, bool> instance and vice versa:

        // the same method, lambda expression, etc. can be assigned to both
        Predicate<int> isEven = i => (i % 2) == 0;
        Func<int, bool> alsoIsEven = i => (i % 2) == 0;

        // but the delegate instances cannot be directly assigned, they are strongly typed!
        // ERROR: cannot convert type...
        isEven = alsoIsEven;

        // however, you can assign by wrapping in a new instance:
        isEven = new Predicate<int>(alsoIsEven);
        alsoIsEven = new Func<int, bool>(isEven);

    So, the general advice that seems to come from most developers is that Predicate<T> is still supported, but we should use Func<T, bool> for consistency in .NET 3.5 and above.

    Sidebar: Func as a Generator for Unit Testing

    One area of difficulty in unit testing can be code that depends on the time of day. We still want to unit test our code to make sure the logic is accurate, but we don't want the results of our unit tests to depend on the time they are run.

    One way (of many) around this is to create an internal generator that will produce the "current" time of day. This would default to returning the result of DateTime.Now (or some other method), but we could inject specific times for our unit testing. Generators are typically methods that return (generate) a value for use in a class/method.

    For example, say we are creating a CacheItem<T> class that represents an item in the cache, and we want to make sure the item shows as expired if its age is more than 30 seconds.
    Such a class could look like:

        // responsible for maintaining an item of type T in the cache
        public sealed class CacheItem<T>
        {
            // helper method that returns the current time
            private static Func<DateTime> _timeGenerator = () => DateTime.Now;

            // allows internal access to the time generator
            internal static Func<DateTime> TimeGenerator
            {
                get { return _timeGenerator; }
                set { _timeGenerator = value; }
            }

            // time the item was cached
            public DateTime CachedTime { get; private set; }

            // the item cached
            public T Value { get; private set; }

            // item is expired if older than 30 seconds
            public bool IsExpired
            {
                get { return _timeGenerator() - CachedTime > TimeSpan.FromSeconds(30.0); }
            }

            // creates the new cached item, setting the cached time to the "current" time
            public CacheItem(T value)
            {
                Value = value;
                CachedTime = _timeGenerator();
            }
        }

    Then we can use this construct to unit test our CacheItem<T> without any time dependencies:

        var baseTime = DateTime.Now;

        // start with the current time stored above (so it doesn't drift)
        CacheItem<int>.TimeGenerator = () => baseTime;

        var target = new CacheItem<int>(13);

        // now add 15 seconds, should still be non-expired
        CacheItem<int>.TimeGenerator = () => baseTime.AddSeconds(15);

        Assert.IsFalse(target.IsExpired);

        // now add 31 seconds, should now be expired
        CacheItem<int>.TimeGenerator = () => baseTime.AddSeconds(31);

        Assert.IsTrue(target.IsExpired);

    Now we can unit test for 1 second before, 1 second after, 1 millisecond before, 1 day after, etc. Func delegates can be a handy tool for this type of value generation to support more testable code.

    Summary

    Generic delegates give us a lot of power to make truly generic algorithms and classes. The Func family of delegates is a great way to specify functions that calculate a result based on 0 to 16 arguments. Stay tuned in the weeks that follow for other generic delegates in the .NET Framework!

    Technorati Tags: .NET, C#, CSharp, Little Wonders, Generics, Func, Delegates

    Read the article

  • planning the same app for both OSX and iOS

    - by P5music
    I would like to ask which is the best strategy for creating an application that will be developed for both the Mac and the iPad, so as to need minimum effort to port it from one platform to the other (starting from the iPad, for example, but ideally working on both at the same time). The application would, in fact, be an iPad-style one on the Mac too. How should I plan the project? What are the main tricks for reaching this goal easily?

    Read the article

  • 2D game editor with SDK or open format (Windows)

    - by Edward83
    I need a 2D map editor (Windows) for an RPG-like game. The most important features for me: loading tiles as classes with attributes, for example "tile1 at coordinates [25,30] is an object of class FlyingMonster with speed=1.0f"; and exporting the map to my own format (via an SDK) or to an open format which I can convert to my own. A nice extra would be a multi-tile brush: I want to combine one or many tiles into one brush and paint it onto the canvas.
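    To illustrate the kind of tile-to-class mapping being asked for, here is a tiny C# sketch (the class shapes are just one reading of the example above, not something from an actual editor SDK):

        // a tile placement the editor would export
        public abstract class Tile
        {
            public int X { get; set; }
            public int Y { get; set; }
        }

        // "tile1 at [25,30] is a FlyingMonster with speed=1.0f" becomes:
        public class FlyingMonster : Tile
        {
            public float Speed { get; set; }
        }

        // loading code would then materialize:
        var tile1 = new FlyingMonster { X = 25, Y = 30, Speed = 1.0f };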

    Read the article

  • Big Visible Charts

    - by Robert May
    An important part of Agile is the concept of transparency and visibility. In properly functioning teams, stakeholders can look at any team at any time in the iteration or release and see how that team is doing by simply looking at what we call Big Visible Charts. If you've done Scrum, you've seen these charts. However, interpreting these charts can often be an art form. There are several different charts that can be useful. In this newsletter, I'll focus on the Iteration Burndown and Cumulative Flow charts. I've included a copy of the spreadsheet that I used to create the charts, and if you don't have a tool that creates them for you, you can use this spreadsheet to do so. Our preferred tool for managing Scrum projects is Rally. Rally creates all of these charts for you, saving you quite a bit of time.

    The Iteration Burndown and Cumulative Flow Charts

    These are the main charts that teams use. Although less useful to stakeholders, these charts are critical to the team and provide quite a bit of information about how the iteration is going. Most real charts are a combination of the patterns below, so you may need to combine aspects of each section to understand what is happening in your iterations.

    Ideal

    Ah, isn't that a pretty picture? Unfortunately, it's also very unrealistic. I've seen iterations that come close to ideal, but never one that matches perfectly. If your iteration matches perfectly, chances are someone is playing with the numbers. Reality is just too messy to have a burndown chart that matches this exactly.

    Late Planning

    The iteration started, but the team didn't. You can tell this by the fact that the real number of estimated hours didn't appear until day two. In the cumulative flow, you can also see that nothing was defined on days one and two. You want to avoid situations like this. You'll note that the team had to burn faster than ideal to complete the iteration because of the late planning. This often results in long weeks and days.

    Testing Starved

    Determining whether or not testing is starved is difficult without the cumulative flow. The pattern in the burndown could be nothing more than developers not completing stories early enough, or could be caused by stories being too big. With the cumulative flow, however, you see that only small bites are in progress and stories were completed early, but testing didn't start until the end of the iteration and didn't finish testing all the stories in the iteration. When this happens, question whether your testing resources are sufficient for your team and whether acceptance is adequately defined.

    No Testing

    With this one, both graphs show the same thing: the team needs testers and testing! Without testing, what was completed cannot be verified as acceptable to the business. If you find yourself in this situation, review your testing practices and acceptance testing process and make changes today.

    Late Development

    With this situation, both graphs tell a story. In the top graph, you can see that the hours failed to burn down as quickly as the team expected. This could be caused by the team not correctly estimating their hours, or the team could have had illness or some other issue that affected them. Often, when teams are tackling something more unknown, they'll run into technical barriers that cause the burndown to happen slower than expected. In the cumulative flow graph, you can see that not much was completed in the first few days. This could be because of illness or technical barriers, or simply poor estimation. Testing was able to keep up with everything that was completed, however.

    No Tool Updating

    When you see graphs that look like this, you can be assured that it's because the team is not updating the tool that generates the graphs. Review your policy for when they are to update. On the teams that I run, I require that each team member updates the tool at least once daily. You should also check how well the team is breaking down stories into tasks. If they're creating a few large tasks, graphs can look similar to this. As a general rule, I never allow tasks, other than Unit Testing and Uncertainty, to be greater than eight hours in duration.

    Scope Increase

    I always encourage team members to enter however much time they think they have left on a task, even if that means increasing the total amount of time left to do. You get a much better and more realistic picture this way. Increasing time remaining could explain the burndown graph, but by looking at the cumulative flow graph, we can see that stories were added to the iteration and scope was increased. Since planning should consume all of the hours in the iteration, this is almost always a bad thing. If the scope change happened late in the iteration and the hours remaining were well below the ideal burn, then increasing scope is probably OK, but estimation needs to get better. However, with the charts above, that's clearly not what happened, and the team was required to do extra work to make the iteration. If you find this happening, your product owner and ScrumMasters need training. The team also needs to learn to say no.

    Scope Decrease

    Scope decreases are just as bad as scope increases. Usually, graphs like the ones above show that the team did a poor job of estimating their stories and partway through had to reduce scope to save the iteration. This will happen once in a while, but if you find it's a pattern on your team, you need to re-evaluate planning. Some teams are hopelessly optimistic. In those cases, I'll introduce a task I call "Uncertainty." With Uncertainty, the team estimates how many hours they might need if things don't go well with the tasks they've defined. They try to estimate what could go poorly and increase the time appropriately. Having an Uncertainty task allows them to have a low and a high estimate. Uncertainty should not just be an arbitrary buffer; it must correlate to real uncertainty in the tasks that have been defined.

    Stories Are Too Big

    Often, we see graphs like the ones above. Note that the burndown looks fairly good, other than the chunky acceptance of stories. However, when you look at cumulative flow, you can see that at one point everything is in progress. This is a bad thing. When you see graphs like this, you're in one of two states. You may just have a very small team that can only handle one or two stories in an iteration. If you have more than one or two people, then the most likely problem is that your stories are far too big. To combat this, break large, high-hour stories into smaller pieces that can be completed and accepted independently. If you don't, you'll likely be requiring your testers to do heroic things to complete testing on the last day of the iteration, and you're much more likely to have the entire iteration fail because of the limited number of things that can be completed.

    Summary

    There are other charts that can be useful when doing Scrum. If you don't have any big visible charts, you really need to evaluate your process and change. These charts can provide the team a wealth of information and help you write better software. If you have any questions about charts that you're seeing on your team, contact me with a screen capture of the charts and I'll tell you what I'm seeing in those charts. I always want this information to be useful, so please let me know if you have other questions.

    Technorati Tags: Agile

    Read the article

  • Part 1: What are EBS Customizations?

    - by volker.eckardt(at)oracle.com
    Everything that is not shipped as Oracle standard may be called a customization. Very often we differentiate between setup and customization, although setup can also be required when working with customizations. This highlights one of the first challenges, because someone needs to track setup brought over with customizations, and this needs to be synchronized with the (standard) setup done manually. This is not only a tracking issue, but also a documentation issue. I will cover this in one of the following blogs in more detail.

    But back to the topic itself. Mainly our code pieces (Java, PL/SQL, SQL, shell scripts), custom objects (tables, views, packages, etc.) and application objects (concurrent programs, lookups, forms, reports, OAF pages, etc.) are treated as customizations. In general we define two types: customization by extension and customization by modification. For sure we like to minimize standard code modifications, but sometimes it is just not possible to provide a certain functionality without doing so. Keep in mind that the EBS provides a number of alternatives to modifications, just to mention some:

    Files in the file system: add your custom top before the standard top in the path.
    BI Publisher report: add a custom layout and disable the standard layout; yours will be taken automatically.
    Form/OAF change: use personalization or substitution.

    Using such techniques you are on the safe side regarding standard patches, but of course a retest is always required!

    Many customizations grow over time: initially it was just one file, but by now we have 5, 10 or 15 files in our customization pack. The more files you have, the more important the installation order becomes.

    Last but not least, personalizations are also treated as customizations, although you may not use any deployment pack to transfer such personalizations (but you can). For OAF personalizations you can use iSetup; I have also enabled iSetup to allow Forms personalizations to be transported.

    Interfaces and conversion objects are quite often also categorized as customizations, and I promote this decision. Your development standards relate to all these kinds of custom code, whether we are exchanging data with users (via a form or report) or with other systems (via an inbound or outbound interface).

    To cover all these types of customizations, two acronyms have been defined: RICE and CEMLI.

    RICE = Reports, Interfaces, Conversions, and Extensions
    CEMLI = Customization, Extension, Modification, Localization, Integration

    The word CEMLI was introduced by Oracle On Demand and is used within Oracle projects quite often, but RICE is also well known as an acronym. It doesn't matter which acronym you are using; the main task here is to classify and categorize your customizations to allow everyone to understand when you talk about RICE-211, CEMLI XXFI_BAST or XXOM_RPT_030.

    Side note: such references are not automatically object prefixes, but they are often used as such. I plan to address this point in another blog as well.

    Thank you!
    Volker

    Read the article

  • TechEd 2010 Day Three: The Database Designer (Isn't)

    - by BuckWoody
    Yesterday at TechEd 2010 here in New Orleans I worked the front booth, answering general SQL Server questions for the masses. I was actually a little surprised to find most of the questions I got were from folks that wanted to know more about StreamInsight and Master Data Services. In past conferences I've been asked a lot of "free consulting" questions about problems folks have had with older products. I don't mind that a bit - in fact, I'm always happy to help in any way I can. But this time people are really interested in the new features in the product, and I like that they are thinking ahead, not just having to solve problems in production.

    My presentation was on "Database Design in an Hour". We had the usual fun, and Sideshow Bob made an appearance - I kid you not. The guy in the back of the room looked just like Sideshow Bob, so I quickly held a "best hair" contest, and he won.

    During the presentation, I explained the tools you can use to design databases. I also explained that the "Database Designer" tool in SQL Server Management Studio (SSMS) isn't truly a designer - it uses non-standard notation, doesn't have a metadata dictionary, and worst of all, it works at the physical level. In other words, whatever you do in SSMS will immediately change the field/table/relationship structures in the database. We fixed this in SSMS 2008 and higher by adding an option to block that, but the tool is not a good design function nonetheless. To be fair, no one I know of at Microsoft recommends that it is - but I was shocked to hear so many developers in the room defending it as a good tool.

    I think the main issue for someone who doesn't work with relational systems a great deal is that it can be difficult to figure out foreign keys. The syntax makes them look "backwards", so it's just easier to grab a field and place it on the table you want to point to.

    There are options. You can download a couple of free tools (CA has a community edition of ERwin, Quest has one, and Embarcadero also has one), and if you design more than one or two databases a year, it may be worth buying a true design tool. For years I used Visio, but we changed it so that it doesn't forward-engineer (create the DDL) any more, so it isn't a true design tool either. So investigate those free and not-so-free tools. You'll find they help you in your job - but stay away from the Database Designer in SSMS. Or I'll send Sideshow Bob over there to straighten you out.
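    To make the "backwards" foreign key syntax concrete, here is a plain T-SQL sketch (table and constraint names are ours, purely for illustration): the constraint is declared on the referencing (child) table and points back at the parent, which is the reverse of the direction many people instinctively drag in a diagram.

        -- declared on the child table, "pointing back" at the parent
        ALTER TABLE dbo.OrderLine
            ADD CONSTRAINT FK_OrderLine_Orders
            FOREIGN KEY (OrderID) REFERENCES dbo.Orders (OrderID);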

    Read the article

  • SQL SERVER – How to easily work with Database Diagrams

    - by Pinal Dave
    Databases are very widely used in the modern world. Regardless of its complexity, each database requires in-depth design. To practice along, please download dbForge Studio now. The right methodology for designing a database is based on the foundations of data normalization, according to which we should first define the database's key elements: entities. Afterwards, the attributes of entities and the relations between them are determined. There is a strong opinion that the process of database design should start with a pencil and a blank sheet of paper. This might look old-fashioned nowadays, because SQL Server provides much wider functionality for designing databases: Database Diagrams. When using SSMS for working with Database Diagrams I realized two things: on the one hand, visualization of a schema allows designing a database more efficiently; on the other, when it comes to creating a big schema, some difficulties occur when designing with SSMS. The alternatives didn't take long to appear, and dbForge Studio for SQL Server is one of them. Its functions offer more advantages for working with Database Diagrams. For example, unlike SSMS, dbForge Studio supports dragging and dropping several tables at once from the Database Explorer. This is my opinion, but personally I find this option very useful. Another great thing is that a diagram can be saved as both a graphic file and a special XML file, which, given an identical environment, can easily be opened on another server for continuing the work. While working with dbForge Studio it turned out that it offers a wide set of elements to operate with on the diagram. Noteworthy among such elements are containers, which allow aggregating diagram objects into thematic groups. Moreover, you can even place an image directly on the diagram if the schema design is based on a standard template. Each of the development environments has a different approach to storing a diagram (for example, SSMS stores them on the server side, whereas dbForge Studio stores them in a local file). I haven't yet found an ability to convert existing diagrams from SSMS to dbForge Studio. However, I hope the Devart developers will implement this feature in one of the following releases. All in all, editing Database Diagrams through dbForge Studio was a nice experience and sped up common database design tasks. Download dbForge Studio now. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL

    Read the article

  • Undocumented Query Plans: Equality Comparisons

    - by Paul White
    The diagram below shows two data sets, with differences highlighted: To find changed rows using TSQL, we might write a query like this: The logic is clear: join rows from the two sets together on the primary key column, and return rows where a change has occurred in one or more data columns.  Unfortunately, this query only finds one of the expected four rows: The problem, of course, is that our query does not correctly handle NULLs.  The ‘not equal to’ operators <> and != do not evaluate...(read more)
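    The queries and result grids in the original post are images and do not survive in this excerpt; a representative T-SQL sketch of the comparison being described (table and column names are assumed) would be:

        SELECT s1.pk, s1.col1, s1.col2
        FROM   Set1 AS s1
        JOIN   Set2 AS s2
               ON s2.pk = s1.pk
        WHERE  s1.col1 <> s2.col1
           OR  s1.col2 <> s2.col2;

    With <>, any comparison where either side is NULL evaluates to UNKNOWN rather than TRUE, which is why a query of this shape misses changed rows involving NULLs.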

    Read the article

  • Google Analytics - Showing multiple site stats at once

    - by John
    Is there a way in Google Analytics to add multiple sites and show all the stats together? So, the graphs and total visits/unique hits all combined for all the sites added to the Google Analytics account? For example, if I have: site1.com, site2.com, site3.com under one Google Analytics account, is there a way in the Google Analytics tool to merge them together so I can see a sum of all traffic in one report?

    Read the article

  • What emulation mode for an Epson printer is best?

    - by deamon
    I'm looking for a color duplex printer for Ubuntu. My favourites are the Epson B-310N and the Brother HL-4050CDN. The latter should work with Linux, but I'd prefer the first one, because its printing costs are cheaper. The Epson printer has "Epson ESC/P Raster", "PCL3", and "Epson ESC/P2" emulation. Can I use the printer with one of those emulations under Linux? Are there any restrictions (like unusable duplex)?

    Read the article

  • Warp GameObject Size When Entering/Leaving Area

    - by Julian
    Below I have an image describing the desired functionality I am going for. Let's say you control a square; when you move this square into a given area, any part of your rigidbody/model inside the area will be magnified upon entering and shrunk upon leaving. So now you are, more or less, made up of two rectangles, one small and one large. What would be an elegant approach to achieving this effect?
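    One possible direction, as a minimal Unity-style C# sketch (this assumes the area has a trigger collider, and it simplifies by scaling the whole object on enter/leave rather than warping only the overlapping part):

        using UnityEngine;

        // Attach to the magnification area (requires a collider marked "Is Trigger").
        public class WarpZone : MonoBehaviour
        {
            public float scaleFactor = 2.0f; // magnification applied while inside the zone

            void OnTriggerEnter(Collider other)
            {
                other.transform.localScale *= scaleFactor;
            }

            void OnTriggerExit(Collider other)
            {
                other.transform.localScale /= scaleFactor;
            }
        }

    A true partial warp, where only the portion inside the area is magnified, would require splitting the mesh at the boundary or distorting it in a shader, which is considerably more involved.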

    Read the article

  • 3rd Party Tools: dbForge Studio for SQL Server

    - by Greg Low
    I've been taking a look at some of the 3rd party tools for SQL Server. Today, I looked at dbForge Studio for SQL Server from the team at Devart. Installation was smooth. I did find it odd that it defaults to SQL authentication, not to Windows, but either works fine. I like the way they have followed the SQL Server Management Studio visual layout. That will make the product familiar to existing SQL Server Management Studio users. I was keen to see what the database diagram tools are like. I found that the layouts generated were quite good, and certainly superior to the built-in SQL Server ones in SSMS. I didn't find any easy way to just add all tables to the diagram though. (That might just be me.) One thing I did like was that it doesn't get confused when you have role-playing dimensions. Multiple foreign key relationships between two tables display sensibly, unlike with the standard SQL Server version. It was pleasing to see a printing option in the diagramming tool. I found the database comparison tool worked quite well. There are a few UI things that surprised me (like when you add a new connection to a database, it doesn't select the one you just added by default), but generally it just worked as advertised, and the code that was generated looked OK. I used the SQL query editor and found the code formatting to be quite fast, and while I didn't mind the style that it used by default, it wasn't obvious to me how to change the format. In Tools/Options I found things that talked about Profiles but I wasn't sure if that's what I needed. The help file pointed me in the right direction and I created a new profile. It's a bit odd that when you create a new profile, it doesn't put you straight into editing the profile. At first I didn't know what I'd done. But as soon as I chose to edit it, I found that a very good range of options was available. When entering SQL code, the code completion options are quick, but even though they are quite complete, one of the real challenges is in making them useful; in one case I tried, the options shown were all correct, yet none was actually helpful. The Query Profiler seemed to work quite well. I keep wondering when the version supplied with SQL Server will ever have options like finding the most expensive operators, etc. Now that it's deprecated, perhaps never; but it's great to see third-party options like this one and like SQL Sentry's Plan Explorer having this functionality. I didn't do much with the reporting options as I use SQL Server Reporting Services. Overall, I was quite impressed with this product, and given they have a free trial available, I think it's worth your time taking a look at it.

    Read the article

  • Apple's alternative to Flash is called Gianduia; written in JavaScript, it reportedly builds on Cocoa

    Update from 10/05/10: Apple's alternative to Flash is called Gianduia, and it is written in JavaScript. Criticizing is good; proposing an alternative is better. That is what Apple appears to be about to do with its own solution for replacing Flash (and, by the same token, Silverlight, the competitor from Microsoft). Named Gianduia, this RIA technology has reportedly already been tested by Apple in several of its distribution services, such as the One-to-One program (individual training in the brand's stores), the iPhone reservation system, and the Concierge applications (for its specialized sales staff).

    Read the article

  • Windows Desktop Virtualization Gets Easier

    - by andrewbrust
    This past Thursday, Microsoft announced that Windows (7) Virtual PC (WVPC) and its XP Mode feature would no longer require hardware-assisted virtualization (HAV). That means any PC running Windows 7 Pro, or higher, can now run this software. And that's a great thing because, as I noted in a post almost five months ago, determining whether a given PC you might be planning to buy actually offers HAV can be extremely difficult. That meant even dedicated, sophisticated PC users, with a budget for new hardware, might be blocked from using this technology. And that was just plain silly.

    One of the features offered by WVPC, and utilized heavily by XP Mode, is the concept of virtual applications: apps within a guest VM that can actually run within the host's desktop environment. I find this feature so powerful that my February Redmond Review column entertained the notion of a future version of Windows that runs all applications in this manner.

    The elimination of the HAV requirement for XP Mode and WVPC was just one of many virtualization-related announcements Microsoft made on Thursday. And, interestingly, most of the others were also desktop-related, rather than server-related. This is a welcome change from the multi-year period in which Microsoft enhanced its server virtualization lineup (in Hyper-V) and let the desktop platform fester. Microsoft now seems to understand that desktop virtualization is in high demand and strengthens the Windows franchise. As I explained in the column, even cloud computing can have a desktop spin if desktop virtualization is part of the equation.

    One company that knows this well is Citrix, and a closer alliance between Microsoft and Citrix was one of the many announcements from Thursday. In fact, there's a whole Web site dedicated to the alliance at http://www.citrixandmicrosoft.com/.

    I'd love to see virtual applications and entire virtual desktops offered as Azure-branded services. This could allow me to run, for example, the full Office client on a variety of desktops I might use, and for large organizations it could easily reduce the expense, burden and duration of the deployment cycle for new versions of Office. Business Intelligence providers, including my own firm, twentysix New York, would find great relief in enabling their customers to run the newest version of Excel, with the latest BI capabilities, instead of having to wait the requisite two to three years it takes for many Fortune 500 customers to upgrade.

    Microsoft should do more, and faster. WVPC still does not support 64-bit guest images, even on 64-bit hosts. That needs to be fixed. File access from the guest to the host needs to be improved (right now, it's done through Terminal Services/Remote Desktop file sharing, and it's slow), and VM load times need to be significantly reduced before virtualized apps can become the norm. (I suppose the advance of solid state drive technology will help there.) I do think these improvements will come, because Microsoft is focused on the virtual desktop now. And that's a smart focus to have.

    Read the article

  • The Art of SEO Writing

    Website owners around the world all want one thing when it comes to their respective websites: web traffic. And how do these website owners improve this? By using SEO techniques. And to do well in SEO, one must understand how to do quality SEO writing.

    Read the article

  • Is a coding standard even needed any more?

    - by SomeKittens
    I know that it's been proven that a coding standard helps enormously. However, there are many different tools and IDEs that will format to whatever standard the programmer prefers. So long as the code's neat/commented (and not a spaghetti mess), I don't see the need for a coding standard. Are there any arguments for the development of a coding standard (we don't have one, but I was looking into creating one)?

    Read the article

  • What's new in Solaris 11.1?

    - by Karoly Vegh
    Solaris 11.1 is released. This is the first release update since Solaris 11 11/11; the versioning has been changed from the MM/YY style to 11.1, highlighting that this is Solaris 11 Update 1. Solaris 11 itself has been great. What's new in Solaris 11.1? Allow me to pick some new features from the What's New PDF that can be found in the official Oracle Solaris 11.1 Documentation. The updates are so numerous that I really can't include them all.

    I. New AI (Automated Installer) RBAC profiles have been introduced to enable delegation of installation tasks.

    II. The interactive installer now supports installing the OS to iSCSI targets.

    III. ASR (Auto Service Request) and OCM (Oracle Configuration Manager) are now enabled by default to proactively provide support information and create service requests, speeding up support processes. This is optional and can be disabled, but it helps a lot in support cases. For further information, see: http://oracle.com/goto/solarisautoreg

    IV. The new command svcbundle helps you create SMF manifests without having to struggle with XML editing. (By the way, do you know the interactive editprop subcommand in svccfg? The listprop/setprop subcommands are great for scripting and automating, but for an interactive property-editing session try, for example: svccfg -s svc:/application/pkg/system-repository:default editprop)

    V. pfedit: Ever wondered how to delegate editing permissions for certain files? It is well known that "sudo /usr/bin/vi /etc/hosts" is not the right way, for sudo elevates the complete vi process to admin levels, and the user can "break out" of the session as root by simply starting a shell from that vi. The new pfedit command provides a solution to exactly this challenge: an auditable, secure, per-user-configurable editing facility. See the pfedit man page for examples.

    VI. rsyslog, the popular logging daemon (filters, SSL, formattable output, SQL collection...), has been included in Solaris 11.1 as an alternative to syslog.

    VII. Zones: Solaris Zones, as a major Solaris differentiator, got lots of love in terms of new features:
    - ZOSS (Zones on Shared Storage): placing your zones on shared storage (FC, iSCSI) has never been this easy - via zonecfg.
    - Parallel updates: with S11's boot environments, updating zones was no problem and meant no downtime anyway, but now you can update them in parallel - a much faster update if you are running a large number of zones. This is like parallel patching in Solaris 10, but with all the IPS/ZFS/S11 goodness.
    - Per-zone fstype statistics: running zones on a shared filesystem complicates I/O debugging, since ZFS collects all the random writes and delivers them sequentially to boost performance. Now, via kstat, you can find out which zone's I/O has an impact on the others; see the examples in the documentation: http://docs.oracle.com/cd/E26502_01/html/E29024/gmheh.html#scrolltoc
    - Zones got RDSv3 protocol support for InfiniBand, and IPoIB support with Crossbow's anet (automatic vnic creation) feature.
    - NUMA I/O support for Zones: customers can now determine the NUMA I/O topology of the system from within zones.

    VIII. Security got a lot of attention too:
    - Automated security/audit reporting, with built-in reporting templates, e.g. for PCI (payment card industry) audits.
    - PAM is now configurable on a per-user basis instead of system-wide, allowing different authentication requirements for different users.
    - SSH in Solaris 11.1 now supports running in FIPS 140-2 mode, that is, in a U.S. government security accredited fashion.
    - SHA512/224 and SHA512/256 cryptographic hash functions are implemented in a FIPS-compliant way - and on a T4, implemented in silicon! That is, government-approved cryptography at hardware speed.
    - Generally, Solaris is currently under evaluation to be both FIPS and Common Criteria certified.

    IX. Networking, as one of the core strengths of Solaris 11, has been extended with:
    - Data Center Bridging (DCB): not only setups where network and storage share the same fabric (FCoE, anyone?) can have quality-of-service requirements. DCB enables peers to distinguish traffic based on priorities. Your NICs have to support DCB; see the documentation, and additional information on Wikipedia.
    - DataLink MultiPathing (DLMP) enables link aggregation to span multiple switches, even between those of different vendors. But there are essential differences from the good old bandwidth-aggregating LACP; see the documentation: http://docs.oracle.com/cd/E26502_01/html/E28993/gmdlu.html#scrolltoc
    - VNIC live migration is now supported from one physical NIC to another on the fly.

    X. Data management:
    - FedFS (Federated FileSystem) is new; it relies on Solaris 11's NFS referring mechanism to join separate shares of different NFS servers into a single filesystem namespace. The referring system has been there since S11 11/11; in Solaris 11.1, FedFS uses LDAP as the one global nameservice to bind them all.
    - The iSCSI initiator now uses the T4 CPU's hardware-implemented CRC32 algorithm, improving iSCSI throughput while reducing CPU utilization on a T4.
    - Storage locking improvements are now RAC-aware, speeding up throughput with better locking communication between nodes by up to 20%!

    XI. Kernel performance optimizations:
    - The new virtual memory subsystem ("VM2") now scales to 100+ TB memory ranges.
    - The memory predictor monitors large memory page usage and adjusts memory page sizes to applications' needs.
    - OSM, the Optimized Shared Memory, allows Oracle DBs' SGA to be resized online.

    XII. The Power Aware Dispatcher is now enabled by default, reducing the power consumption of idle CPUs. Also, the LDoms' power management policies and the poweradm settings in the Solaris 11 OS will cooperate.

    XIII. x86 boot: upgrade to GRUB2 (the GRand Unified Bootloader). Because GRUB2 differs syntactically in its configuration from GRUB1, one shall not edit the new grub configuration (grub.cfg) directly, but use the new bootadm features to update it. GRUB2 adds UEFI support and also support for disks over 2 TB.

    XIV. Improved viewing of per-CPU statistics in mpstat. This one might seem of less importance at first, but nowadays having better sorting/filtering possibilities on a periodically updated mpstat output of 256+ vCPUs can be a blessing.

    XV. Support for Solaris Cluster 4.1: the What's New document doesn't actually mention this one, since OSC 4.1 had not been released at the time 11.1 was. But since then it has become available, and it requires Solaris 11.1. And it's only a "pkg update" away.

    ...and I seriously need to stop here. There's a lot I missed: Edge Virtual Bridging, lofi tuning, ZFS sharing and crypto enhancements, USB 3.0, PulseAudio, Trusted Extensions updates, etc. But if I mentioned all those, I would effectively copy the What's New document, which I recommend reading now anyway; it is a great extract of the 300+ new projects and RFE follow-ups in S11.1. And this blog post is a summary of that extract.

    For closing words, allow me to come back to Requests For Enhancement, RFEs. Any customer can request features. Open up a Support Request, explain that it is an RFE, and describe the feature you/your company desire to have implemented in S11. The more SRs are collected for an RFE, the more chance it has to get implemented. Feel free to provide feedback about the product, as well as about the Solaris 11.1 Documentation, using the "Feedback" button there. Both the Solaris engineers and the documentation writers are eager to hear your input. Feel free to comment about this post too. Except that it's too long ;)

    wbr, charlie

    Read the article

  • View Link inConsistency

    - by Abhishek Dwivedi
    What is View Link Consistency? When multiple instances (say VO1, VO2, VO3, etc.) of an EO-based VO are based on the same underlying EO, a new row created in one of these VO instances (say VO1) can be automatically added (without re-query) to the row sets of the others (VO2, VO3, etc.). This capability is known as view link consistency. The feature works for any VO for which it is enabled, regardless of whether it is involved in a view link or not.

    What causes View Link inConsistency? Unless jbo.viewlink.consistent is disabled for the VO (or globally), or setAssociationConsistent(false) is applied, any of the following can cause view link inconsistency:
    1. setWhereClause
    2. An unreferenced secondary EO
    3. findByViewCriteria()
    4. Using a view link accessor row set

    Why does this happen? Well, it can be one of the following reasons.
    a. In cases 1 and 2, the view link consistency flag is disabled on that view object.
    b. As far as 3 is concerned, findByViewCriteria is used to retrieve a new row set to process programmatically without changing the contents of the default row set. In this case, unlike the previous cases, the view link consistency flag is not disabled, meaning that changes in the default row set are reflected in the new row set. However, the opposite doesn't hold true. For instance, if a row is deleted from this new row set, the corresponding row in the default row set does not get deleted. In one of my features, which involved deletion of rows, I resolved the view link inconsistency issue by replacing findByViewCriteria with applyViewCriteria.
    c. Case 4 is similar to 3: whenever a view link accessor row set is retrieved, a new row set is created. Now, creating a new row set does not mean re-executing the query each time, only creating a new instance of a RowSet object with its default iterator reset to the "slot" before the first row. Also, please note that this new row set always originates from an internally created view object instance, not one that you added to the data model. This internal view object instance is created as needed and added with a system-defined name to the root application module. The very reason a distinct, internally created view object instance is used is to guarantee that it remains unaffected by developer changes to their own view object instances in the data model.
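    A rough ADF Business Components sketch of that swap (the VO instance name and filter attribute are hypothetical): applyViewCriteria filters the view object's default row set in place, so later row deletions stay view-link consistent, whereas findByViewCriteria hands back a separate row set.

        // inside an application module method; uses oracle.jbo.* types
        ViewObject vo = findViewObject("EmployeesVO1");    // hypothetical VO instance name
        ViewCriteria vc = vo.createViewCriteria();
        ViewCriteriaRow vcr = vc.createViewCriteriaRow();
        vcr.setAttribute("DepartmentId", "= 10");          // hypothetical filter
        vc.add(vcr);

        // instead of:
        // RowSet rs = vo.findByViewCriteria(vc, -1, ViewObject.QUERY_MODE_SCAN_VIEW_ROWS);
        vo.applyViewCriteria(vc);
        vo.executeQuery();   // the default row set itself is now filtered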

    Read the article

  • openfeint or gamecenter?

    - by Gajet
    Which one has more potential customers, an easier API, and a wider feature list? I'm going to implement one of those two for high-score recording in my game; which one gives more advantages? And by the way, I might port my game to Android, so if you know anything that can help me avoid rewriting my code (for example, a C++ wrapper for both of them), that would be a great plus for OpenFeint in my point of view.

    Read the article

  • Hosting multiple low traffic websites on ec2

    - by Niko Sams
    We have around 30 websites with almost no traffic (<~10 visits/day) which are currently hosted on a dedicated server. We are evaluating hosting on Amazon EC2, however I'm not sure how to do that properly. One (micro) instance per website is too expensive; ~10 websites on one instance (using Apache virtual hosts) makes auto scaling impossible (or at least difficult). Or is cloud computing not suitable for such a use case?

    Read the article

  • Javascript SDK on Facebook

    - by Eamonn Fox
    I am trying to use the JavaScript SDK for Facebook, but I keep getting the message: "Given URL is not permitted by the application configuration: One or more of the given URLs is not allowed by the App's settings. It must match the Website URL or Canvas URL, or the domain must be a subdomain of one of the App's domains." But I have copied and pasted my canvas URL from the settings section. Does anyone have any ideas what's up?

    Read the article

  • Build Dependencies and Silverlight 4

    - by Kyle Burns
    At my current position, I've been doing quite a bit of Silverlight development and have also been working with TFS 2010 build services to enable continuous integration. One of the critical pieces of a successful continuous build setup (and also one of the benefits of having one) is that the build system should be able to "get latest" against the source repository and immediately build with no errors. This can break down both in an automated build scenario and in a "new guy" scenario when the solution has external dependencies that may not be present in the build environment.

    The method that I use to address the dependency issue is to store all of the binaries upon which my solution depends in a folder under the solution root called "Reference Items". I keep this folder as part of the solution and check all of the binaries into source control, so when I get the latest version of the solution from source control, all of the binaries are downloaded to my machine as well. This gets me closer to the ideal where a new developer installs the development IDE, gets latest, and can immediately build and run unit tests before jumping into coding the feature of the day.

    This all sounds pretty good (and it is), but a little while back I ran into one of those little hiccups that requires a little manual intervention. The issue that I ran into is that with Silverlight (at least version 4), the behavior of the "Add Reference" command when adding a reference to a DLL that is present in the GAC is to omit the HintPath element that it includes with regular .NET projects, so even if the DLL is sitting in the Reference Items folder and downloaded to the build machine, it cannot be found at compile time and the build will fail.

    To work around this behavior, you need to be comfortable editing the XML project files generated by Visual Studio (in my case this is typically a .csproj file). Simply open the project file in your favorite text editor, find the Reference element that refers to the component, and modify the XML to include the HintPath. Here's a before and after example of the component that ultimately led me to the investigation behind this post:

    Before:

        <Reference Include="Telerik.Windows.Controls, Version=2011.2.920.1040, Culture=neutral, PublicKeyToken=5803cfa389c90ce7, processorArchitecture=MSIL" />

    After:

        <Reference Include="Telerik.Windows.Controls, Version=2011.2.920.1040, Culture=neutral, PublicKeyToken=5803cfa389c90ce7, processorArchitecture=MSIL">
            <HintPath>..\Reference Items\Telerik.Windows.Controls.dll</HintPath>
        </Reference>

    Read the article

  • Agile and different facet of software development

    - by arjun
    It is said that the Kanban methodology is suited to software maintenance and support areas, whereas Scrum is more suited to new product development. No process or method is complete. Using the right one will help you succeed, but none will guarantee success. Which agile approach is best suited to a project which is basically a re-platforming from one technology to another (say from Java to .NET)?

    Read the article

  • Restore Gene : Automating SQL Server Database Restores

    Restore Gene is a simple two-script framework, one PowerShell script and one SQL stored procedure, which will speed up the production of restore scripts for manual disaster recovery, as well as help automate log shipping.

    Read the article

< Previous Page | 346 347 348 349 350 351 352 353 354 355 356 357  | Next Page >