Search Results

Search found 12404 results on 497 pages for 'native types'.


  • Introduction to LinqPad Driver for StreamInsight 2.1

    - by Roman Schindlauer
    We are announcing the availability of the LinqPad driver for StreamInsight 2.1. The purpose of this blog post is to offer a quick introduction to the new features that we added to the StreamInsight LinqPad driver. We'll show you how to connect to a remote server, how to inspect the entities present on that server, how to compose on top of them, and how to manage their lifetime.

    Installing the driver
    Info on how to install the driver can be found in an earlier blog post here.

    Establishing connections
    As you click on the "Add Connection" link in the left pane you will notice that it's now possible to build the data context automatically. The new driver appears as an option in the upper list, and if you pick it you will open a connection dialog that lets you connect to a remote StreamInsight server. The connection dialog lets you specify the address of the remote server. You will notice that it's possible to pick up the binding information from the configuration file of the LinqPad application (which is normally in the same folder as LinqPad.exe and is called LinqPad.exe.config). In order for the context to be generated you need to pick an application from the server. The control is editable, so you can create a new application if you don't want to make changes to an existing one. If you choose a new application name you will be prompted for confirmation before it gets created. Once you click OK the connection is created and you can start issuing queries against the remote server. If there's any connectivity error the connection is marked with a red X and you can see the error message informing you what went wrong (i.e., the remote server could not be reached, etc.).

    The context for remote servers
    Let's take a look at what happens after we are connected successfully. Every LinqPad query runs inside a context – think of it as a class that wraps all the code that you're writing. If you're connecting to a live server the context will contain the following:

        - The application object itself.
        - All entities present in this application (sources, sinks, subjects and processes).

    The picture below shows a snapshot of the left pane of LinqPad after a successful connection. Every entity on the server has a different icon which will allow users to figure out its purpose. You will also notice that some entities have a string in parentheses following the name. It should be interpreted as follows: the first name is the name of the property of the context class, and the second name is the name of the entity as it exists on the server. Not all valid entity names are valid identifier names, so in cases where we had to make a transformation you see both. Note also that as you hover over the entities you get IntelliSense with their types – more on that later.

    Remoting is not supported
    As you play with the entities exposed by the context you will notice that you can't read and write directly to/from them. If, for instance, you're trying to dump the content of an entity you will get an error message telling you that in the current version remoting is not supported. This is because the entity lives on the remote server, and dumping its content means reading the events produced by this entity into the local process.

        ObservableSource.Dump();

    will yield the following error:

        Reading from a remote 'System.Reactive.Linq.IQbservable`1[System.Int32]' is not supported. Use the 'Microsoft.ComplexEventProcessing.Linq.RemoteProvider.Bind' method to read from the source using a remote observer.

    This basically tells you that you can call the Bind() method to direct the output of this source to a sink that has to be defined on the remote machine as well. You can't bring the results to the LinqPad window unless you write code specifically for that.

    Compose queries
    You may ask – what's the purpose of all that? After all, the same information is present in the EventFlowDebugger, so why bother with showing it in LinqPad? First of all, what gets exposed in LinqPad is not what you see in the debugger. In LinqPad we have a property on the context class for every entity that lives on the server. Because LinqPad offers IntelliSense, we in fact have much more information about the entity, and more importantly we can compose with that entity very easily. For example, let's say that this code creates an entity:

        using (var server = Server.Connect(...))
        {
            var a = server.CreateApplication("WhiteFish");
            var src = a
                .DefineObservable<int>(() => Observable.Range(0, 3))
                .Deploy("ObservableSource");

    If later we want to compose with the source we have to fetch it, and then we can bind something to it:

        a.GetObservable<int>("ObservableSource").Bind(...

    This means that we had to know a bunch of things about the entity: that it's a source, that it's an observable, that it produces a result with payload Int32, and that it's named "ObservableSource". Only the second and last bits of information are present in the debugger, by the way. As you type in the query window you see that all the entities are present, you get IntelliSense support for them, and it's much easier to make sense of what's available.

    Let's look at a scenario where composition is plausible. With the new programming model it's possible to create "cold" sources that are parameterized. There was a way to accomplish that even in the previous version by passing parameters to the adapters, but this time it's much more elegant because the expression declares what parameters are required. Say that we hover the mouse over the ThrottledSource source – we will see that its type is Func<int, int, IQbservable<int>>. This in effect means that we need to pass two int parameters before we can get a source that produces events, and the type for those events is int. In the particular case of my example I had the source produce a range of integers, and the two parameters were the start and end of the range. So we see how a developer can create a source that is not running yet. Then someone else (e.g. an administrator) can pass whatever parameters are appropriate and run the process.

    Proxy Types
    Here's an interesting scenario – what if someone created a source on a server but they forgot to tell you what type they used? Worse yet, they might have used an anonymous type, and even though they can refer to it by name, you can't figure out how to use that type. Let's walk through an example that shows how you can compose against types you don't have the definition of. This is how we can create a source that returns an anonymous type:

        Application.DefineObservable(() => Observable.Range(1, 10).Select(i => new { I = i })).Deploy("O1");

    Now if we refresh the connection we can see the new source named O1 appear in the list. But what's more important is that we now have a type to work with. So we can compose a query that refers to the anonymous type:

        var threshold = new StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0<int>(5);
        var filter = from i in O1
                     where i > threshold
                     select i;
        filter.Deploy("O2");

    You will notice that the anonymous type defined with this statement:

        new { I = i }

    can now be manipulated by a client that does not have access to it, because the LinqPad driver has generated another type in its stead, named StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0. This type has all the properties and fields of the type defined on the server, except in this case we can instantiate values and use it to compose more queries. It is worth noting that the same thing works for types that are not anonymous – the test is whether the LinqPad driver can resolve the type or not. If it's not possible, then a new type will be generated that approximates the type that exists on the server.

    Control metadata
    In addition to composing processes on top of the existing entities, we can do other useful things. We can delete them – nothing new here, as we simply access the entities through the Entities collection of the application class. Here is where having their real name in parentheses comes in handy. There's another way to find out what's behind a property – dump its expression. The first line in the output tells us the name of the entity used to build this property in the context.

    Runtime information
    So let's create a process to see what happens. We can bind a source to a sink and run the resulting process. If you right click on the connection you can refresh it and see the process present in the list of entities. Then you can drag the process to the query window and see that you have access to the process object in the Processes collection of the application. You can then manipulate the process (delete it, read its diagnostic view, etc.).
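
    To round the example off, here is a minimal sketch of that last step, based only on the API surface shown in this post (the endpoint address, the sink name "ConsoleSink" and the process name "FilterProcess" are invented for illustration):

        using (var server = Server.Connect(new System.ServiceModel.EndpointAddress("http://localhost/StreamInsight/MyInstance")))
        {
            var a = server.Applications["WhiteFish"];

            // Define and deploy a sink on the remote server (Observer.Create is the Rx factory).
            var sink = a.DefineObserver(() => Observer.Create<int>(x => Console.WriteLine(x)))
                        .Deploy("ConsoleSink");

            // Fetch the deployed source by its server-side name and bind it to the sink.
            var source = a.GetObservable<int>("ObservableSource");

            // Run the binding as a named process on the remote server.
            using (source.Bind(sink).Run("FilterProcess"))
            {
                Console.ReadLine(); // keep the process alive while events flow
            }
        }

    Regards,
    The StreamInsight Team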

    Read the article

  • Warnings When Undo Isn't Possible

    - by ultan o'broin
    Enjoyed this post, Never Use a Warning When You Mean Undo, by Aza Raskin. It makes sense never to warn users if an undo option is possible. The examples given are from the web space. Here's the conclusion: warnings cause us to lose our work, to mistrust our computers, and to blame ourselves. A simple but foolproof design methodology solves the problem: "Never use a warning when you mean undo." And when a user is deleting their work, you always mean undo.

    However, in enterprise apps you may find that an undo option isn't technically possible or desirable. Objects may be shared or part of a flow elsewhere, or undoing something committed to the database (a rollback, I guess) may not be feasible if it becomes locked by another process. Plus, what constitutes user ownership of objects isn't always clear to users. The implications of delete (and other) actions need to be clearly communicated in advance. Really, warnings are important in the enterprise space. Data has a very high value, and users can perform a wide variety of actions that may risk that data, not always within the application itself (at browser level, for example).

    That said, throwing warnings all over the place when an undo option is possible is annoying. Instead, treat warnings with respect. When no undo option is possible, use warning messages to communicate potentially dangerous or irrecoverable actions or the downstream consequences of user actions on the process or task flow. Force the user to respond to a warning message by using a modal dialog with clearly labeled action buttons. Here are a couple of examples.

    A great article that got me thinking. Let's see more like that. And let's not forget there are more types of messages than just error messages. User assistance and user experience professionals need to understand when best to use confirmation, information, and warning types too!

    Read the article

  • SQL Server Interview Questions

    - by Rodney Vinyard
    User-Defined Functions

    Scalar User-Defined Function
    A scalar user-defined function returns one of the scalar data types. Text, ntext, image and timestamp data types are not supported. These are the type of user-defined functions that most developers are used to in other programming languages.

    Inline Table-Valued User-Defined Function
    An inline table-valued user-defined function returns a table data type and is an exceptional alternative to a view, as the user-defined function can pass parameters into a T-SQL select command and in essence provide us with a parameterized, non-updateable view of the underlying tables.

    Multi-Statement Table-Valued User-Defined Function
    A multi-statement table-valued user-defined function returns a table and is also an exceptional alternative to a view, as the function can support multiple T-SQL statements to build the final result, where the view is limited to a single SELECT statement. The ability to pass parameters into one or more T-SQL select commands in essence creates a parameterized, non-updateable view of the data in the underlying tables. Within the CREATE FUNCTION command you must define the table structure that is being returned. After creating this type of user-defined function, you can use it in the FROM clause of a T-SQL command, unlike the behavior found when using a stored procedure, which can also return record sets.

    Read the article

  • The Return Of __FILE__ And __LINE__ In .NET 4.5

    - by Alois Kraus
    Good things are hard to kill. Two of the most useful predefined compiler macros in C/C++ are __FILE__ and __LINE__, which expand to the compilation unit's file name and the line number where the value is encountered by the compiler. After 4.5 versions of .NET we are on par with C/C++ again. It is of course not a simple compiler-expandable macro, it is an attribute, but it does serve exactly the same purpose. Now we do get:

        CallerLineNumberAttribute == __LINE__
        CallerFilePathAttribute   == __FILE__
        CallerMemberNameAttribute == __FUNCTION__ (MSVC extension)

    The most important one is CallerMemberNameAttribute, which is very useful to implement the INotifyPropertyChanged interface without the need to hard-code the name of the property anymore. Now you can simply decorate your change method with the new CallerMemberName attribute and you get the property name as a string directly inserted by the C# compiler at compile time.

        private string _userName;
        public event PropertyChangedEventHandler PropertyChanged;

        public string UserName
        {
            get { return _userName; }
            set
            {
                _userName = value;
                RaisePropertyChanged(); // no more RaisePropertyChanged("UserName")!
            }
        }

        protected void RaisePropertyChanged([CallerMemberName] string member = "")
        {
            var copy = PropertyChanged;
            if (copy != null)
            {
                copy(this, new PropertyChangedEventArgs(member));
            }
        }

    Nice and handy. This was obviously the prime reason to implement this feature in the C# 5.0 compiler. You can repurpose this feature for tracing, to get your hands on the method name of your caller (along with other stuff) very quickly now. All infos are added during compile time, which is much faster than other approaches like walking the stack. The example on MSDN shows the usage of this attribute:

        public static void TraceMessage(string message,
                                        [CallerMemberName] string memberName = "",
                                        [CallerFilePath] string sourceFilePath = "",
                                        [CallerLineNumber] int sourceLineNumber = 0)
        {
            Console.WriteLine("Hi {0} {1} {2}({3})", message, memberName, sourceFilePath, sourceLineNumber);
        }

    When I think of tracing I usually want to have an API which allows me to:

        - Trace method enter and leave
        - Trace messages with a severity like Info, Warning, Error

    When I print a trace message it is very useful to print out the method and type name as well. So your API must either be able to pass the method and type name as strings, or extract them automatically by walking back one stack frame and fetching the infos from there. The first glaring deficiency is that there is no CallerTypeAttribute yet, because the C# compiler team was not satisfied with its performance.

    A usable trace API might therefore look like this:

        enum TraceTypes
        {
            None = 0,
            EnterLeave = 1 << 0,
            Info = 1 << 1,
            Warn = 1 << 2,
            Error = 1 << 3
        }

        class Tracer : IDisposable
        {
            string Type;
            string Method;

            public Tracer(string type, string method)
            {
                Type = type;
                Method = method;
                if (IsEnabled(TraceTypes.EnterLeave, Type, Method))
                {
                }
            }

            private bool IsEnabled(TraceTypes traceTypes, string type, string method)
            {
                // Do checking here if tracing is enabled
                return false;
            }

            public void Info(string fmt, params object[] args) { }
            public void Warn(string fmt, params object[] args) { }
            public void Error(string fmt, params object[] args) { }

            public static void Info(string type, string method, string fmt, params object[] args) { }
            public static void Warn(string type, string method, string fmt, params object[] args) { }
            public static void Error(string type, string method, string fmt, params object[] args) { }

            public void Dispose()
            {
                // trace method leave
            }
        }

    This minimal trace API is very fast but hard to maintain, since you need to pass in the type and method name as hard-coded strings which can change from time to time. But now we have at least CallerMemberName to get rid of the explicit method parameter, right? Not really. Since any acceptably usable trace API should have a method signature like Tracexxx(... string fmt, params object[] args), we are not able to add additional optional parameters after the args array. If we put them before the format string we would need to make them optional as well, which would mean the compiler would need to figure out what our trace message and arguments are (not likely), or we would need to specify everything explicitly just like before. There are ways around this by providing a myriad of overloads which in the end are routed to the very same method, but that is ugly. I am not sure if nobody inside MS agrees that the above API is reasonable to have, or (more likely) the whole talk about using this feature for diagnostic purposes was not a core feature at all but a simple byproduct of making the life of INotifyPropertyChanged implementers easier. A way around this would be to allow, after the params keyword, another set of optional arguments which are always filled by the compiler, but I do not know if this is an easy one.

    The thing I am missing much more is the not provided CallerType attribute. But not in the way you would think of. In the API above I did add some filtering based on method and type to stay as fast as possible for types where tracing is not enabled at all. It should be no more expensive than an additional method call and a bool variable check if tracing for this type is enabled at all. The data is tightly bound to the calling type and method and should therefore become part of the static type instance. Since extending the CLR type system for tracing is not something I expect to happen, I have come up with an alternative approach which allows me basically to attach run-time data to any existing type object in a super fast way. The key to success is the usage of generics:

        class Tracer<T> : IDisposable
        {
            string Method;

            public Tracer(string method)
            {
                if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave))
                {
                }
            }

            public void Dispose()
            {
                if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave))
                {
                }
            }

            public static void Info(string fmt, params object[] args) { }

            /// <summary>
            /// Every type gets its own instance with a fresh set of variables to describe the
            /// current filter status.
            /// </summary>
            /// <typeparam name="UsingType"></typeparam>
            internal class TraceData<UsingType>
            {
                internal static TraceData<UsingType> Instance = new TraceData<UsingType>();

                public bool IsInitialized = false; // flag if we need to reinit the trace data in case of reconfigured trace settings at runtime

                public TraceTypes Enabled = TraceTypes.None; // Enabled trace levels for this type
            }
        }

    We do not need to pass the type as a string or Type object to the trace API. Instead we define a generic API that accepts the using type as a generic parameter. Then we can create a TraceData static instance which is, due to the nature of generics, a fresh instance for every new type parameter. My tests on my home machine have shown that this approach is as fast as a simple bool flag check. If you have an application with many types using tracing, you do not want to bring the app down by simply enabling tracing for one special, rarely used type. The trace filter performance for the types which are not enabled must therefore be the fastest code path. This approach has the nice side effect that if you store the TraceData instances in one global list, you can reconfigure tracing at runtime safely by simply setting the IsInitialized flag to false. A similar effect can be achieved with a global static Dictionary<Type, TraceData> object, but big hash tables have random memory access semantics, which is bad for cache locality, and you always need to pay for the lookup, which involves hash code generation, an equality check and an indexed array access. The generic version is wicked fast and allows you to add more features to your tracing API with minimal perf overhead.

    But it is cumbersome to write the generic type argument explicitly every time, and worse, if you refactor code and move parts of it to other classes, it might be that you cannot configure tracing correctly. I would therefore like to decorate my type with an attribute

        [CallerType]
        class Tracer<T> : IDisposable

    to tell the compiler to fill in the generic type argument automatically:

        class Program
        {
            static void Main(string[] args)
            {
                using (var t = new Tracer()) // equivalent to new Tracer<Program>()
                {

    That would be really useful and super fast, since you do not need to pass any type object around but you do have full type infos at hand. This change would be breaking if another non-generic type exists in the same namespace where now the generic counterpart would be preferred. But this is an acceptable risk in my opinion, since you can today already get conflicts if two generic types of the same name are defined in different namespaces. This would be only a variation of that issue.

    When you think about this further you can add more features, like tracing the exception in your Dispose method if the method is left with an exception, with that little trick I did write about some time ago. You can think of tracing as a super fast and configurable switch to write data to an output destination or to execute alternative actions. With such an infrastructure you can e.g.:

        - Reconfigure tracing at run time.
        - Take a memory dump when a specific method is left with a specific exception.
        - Throw an exception when a specific trace statement is hit (useful for testing error conditions).
        - Execute a passed delegate which e.g. dumps additional state when enabled.
        - Write data to an in-memory ring buffer and dump it when specific events occur (e.g. a method is left with an exception, triggered from outside).
        - Write data to an output device.
        - ...

    This stuff is really useful to have when your code is in production on a mission critical server and you need to find the root cause of sporadic crashes of your application. It could be a buggy graphics card driver which throws access violations into your application (ok, with .NET 4 not anymore, except if you enable a compatibility flag) where you would like to have a minidump, or you have reached, after two weeks of operation, a state where you need a full memory dump at a specific point in time in the middle of a transaction.

    On my older machine I get, with this super fast approach, 50 million traces/s when tracing is disabled. When I know that tracing is enabled for this type, I can walk the stack by using StackFrameHelper.GetStackFramesInternal to check further if a specific action or output device is configured for this method, which is about 2-3 times faster than the regular StackTrace class. Even with one String.Format I am down to 3 million traces/s, so performance is not so important anymore since I do want to do something now.

    The CallerMemberName feature of the C# 5 compiler is nice, but I would have preferred to get direct access to the MethodHandle and not the stringified version of it. But I really would like to see a CallerType attribute implemented to fill in the generic type argument of the call site, to augment the static CLR type data with run-time data.

    Read the article

  • Help to organize game cycle in Java

    - by ASIO22
    I'm pretty new here (as well as to game development). So here's my question. I'm trying to organize a really simple game cycle in my public static void main() as follows:

        Scanner sc = new Scanner(System.in);
        // Running the game cycle
        boolean flag = true;
        while (flag) {
            int action;
            System.out.println("Type your action please:");
            System.out.println("0: Exit app");
            try {
                action = sc.nextInt();
                switch (action) {
                    case 0:
                        flag = false;
                        break;
                    case 1:
                        break;
                }
            } catch (InputMismatchException ex) {
                System.out.println(ex.getClass() + "\n" + "Please type a correct input\n");
                //action = sc.nextInt();
                continue;
            }
        }

    What's wrong with this cycle: I want to catch an exception when the user types text instead of a number, show a message warning the user, and then continue the game cycle, read user input, etc. But instead of that, when the user types wrong data, it goes into an eternal cycle without even prompting the user. What did I do wrong?

    Read the article

  • Purpose of "new" keyword

    - by Channel72
    The new keyword in languages like Java, Javascript, and C# creates a new instance of a class. This syntax seems to have been inherited from C++, where new is used specifically to allocate a new instance of a class on the heap, and return a pointer to the new instance. In C++, this is not the only way to construct an object. You can also construct an object on the stack, without using new - and in fact, this way of constructing objects is much more common in C++. So, coming from a C++ background, the new keyword in languages like Java, Javascript, and C# seemed natural and obvious to me. Then I started to learn Python, which doesn't have the new keyword. In Python, an instance is constructed simply by calling the constructor, like: f = Foo() At first, this seemed a bit off to me, until it occurred to me that there's no reason for Python to have new, because everything is an object so there's no need to disambiguate between various constructor syntaxes. But then I thought - what's really the point of new in Java? Why should we say Object o = new Object();? Why not just Object o = Object();? In C++ there's definitely a need for new, since we need to distinguish between allocating on the heap and allocating on the stack, but in Java all objects are constructed on the heap, so why even have the new keyword? The same question could be asked for Javascript. In C#, which I'm much less familiar with, I think new may have some purpose in terms of distinguishing between object types and value types, but I'm not sure. Regardless, it seems to me that many languages which came after C++ simply "inherited" the new keyword - without really needing it. It's almost like a vestigial keyword. We don't seem to need it for any reason, and yet it's there. Question: Am I correct about this? Or is there some compelling reason that new needs to be in C++-inspired memory-managed languages like Java, Javascript and C#?

    Read the article

  • Changes to the LINQ-to-StreamInsight Dialect

    - by Roman Schindlauer
    In previous versions of StreamInsight (1.0 through 2.0), CepStream<> represents temporal streams of many varieties:

        - Streams with 'open' inputs (e.g., those defined and composed over CepStream<T>.Create(string streamName))
        - Streams with 'partially bound' inputs (e.g., those defined and composed over CepStream<T>.Create(Type adapterFactory, ...))
        - Streams with fully bound inputs (e.g., those defined and composed over To*Stream – sequences or DQC)
        - The stream may be embedded (where Server.Create is used)
        - The stream may be remote (where Server.Connect is used)

    When adding support for new programming primitives in StreamInsight 2.1, we faced a choice: add a fourth variety (use CepStream<> to represent streams that are bound to the new programming model constructs), or introduce a separate type that represents temporal streams in the new user model. We opted for the latter. Introducing a new type has the effect of reducing the number of (confusing) runtime failures due to inappropriate uses of CepStream<> instances in the incorrect context. The new types are:

        - IStreamable<>, which logically represents a temporal stream.
        - IQStreamable<> : IStreamable<>, which represents a queryable temporal stream. Its relationship to IStreamable<> is analogous to the relationship of IQueryable<> to IEnumerable<>. The developer can compose temporal queries over remote stream sources using this type.

    The syntax of temporal queries composed over IQStreamable<> is mostly consistent with the syntax of our existing CepStream<>-based LINQ provider. However, we have taken the opportunity to refine certain aspects of the language surface. Differences are outlined below. Because 2.1 introduces new types to represent temporal queries, the changes outlined in this post do not impact existing StreamInsight applications using the existing types!

    SelectMany
    StreamInsight does not support the SelectMany operator in its usual form (which is analogous to SQL's "CROSS APPLY" operator):

        static IEnumerable<R> SelectMany<T, R>(this IEnumerable<T> source, Func<T, IEnumerable<R>> collectionSelector)

    It instead uses SelectMany as a convenient syntactic representation of an inner join. The parameter to the selector function is thus unavailable. Because the parameter isn't supported, its type in StreamInsight 1.0 – 2.0 wasn't carefully scrutinized. Unfortunately, the type chosen for the parameter is nonsensical to LINQ programmers:

        static CepStream<R> SelectMany<T, R>(this CepStream<T> source, Expression<Func<CepStream<T>, CepStream<R>>> streamSelector)

    Using Unit as the type for the parameter accurately reflects StreamInsight's capabilities:

        static IQStreamable<R> SelectMany<T, R>(this IQStreamable<T> source, Expression<Func<Unit, IQStreamable<R>>> streamSelector)

    For queries that succeed – that is, queries that do not reference the stream selector parameter – there is no difference between the code written for the two overloads:

        from x in xs
        from y in ys
        select f(x, y)

    Top-K
    The Take operator used in StreamInsight causes confusion for LINQ programmers because it is applied to the (unbounded) stream rather than the (bounded) window, suggesting that the query as a whole will return k rows:

        (from win in xs.SnapshotWindow()
         from x in win
         orderby x.A
         select x.B).Take(k)

    The use of SelectMany is also unfortunate in this context because it implies the availability of the window parameter within the remainder of the comprehension. The following compiles but fails at runtime:

        (from win in xs.SnapshotWindow()
         from x in win
         orderby x.A
         select win).Take(k)

    The Take operator in 2.1 is applied to the window rather than the stream:

        Before:

            (from win in xs.SnapshotWindow()
             from x in win
             orderby x.A
             select x.B).Take(k)

        After:

            from win in xs.SnapshotWindow()
            from b in
                (from x in win
                 orderby x.A
                 select x.B).Take(k)
            select b

    Multicast
    We are introducing an explicit multicast operator in order to preserve expression identity, which is important given the semantics of moving code to and from StreamInsight. This also better matches existing LINQ dialects, such as Reactive. This pattern enables expressing multicasting in two ways:

        Implicit:

            var ys = from x in xs
                     where x.A > 1
                     select x;
            var zs = from y1 in ys
                     from y2 in ys.ShiftEventTime(_ => TimeSpan.FromSeconds(1))
                     select y1 + y2;

        Explicit:

            var ys = from x in xs
                     where x.A > 1
                     select x;
            var zs = ys.Multicast(ys1 =>
                from y1 in ys1
                from y2 in ys1.ShiftEventTime(_ => TimeSpan.FromSeconds(1))
                select y1 + y2);

    Notice the product translates an expression using implicit multicast into an expression using the explicit multicast operator. The user does not see this translation.

    Default window policies
    Only default window policies are supported in the new surface. Other policies can be simulated by using AlterEventLifetime.

        Before:

            xs.SnapshotWindow(
                WindowInputPolicy.ClipToWindow,
                SnapshotWindowInputPolicy.Clip)

        After:

            xs.SnapshotWindow()

        Before:

            xs.TumblingWindow(
                TimeSpan.FromSeconds(1),
                HoppingWindowOutputPolicy.PointAlignToWindowEnd)

        After:

            xs.TumblingWindow(
                TimeSpan.FromSeconds(1))

        Before:

            xs.TumblingWindow(
                TimeSpan.FromSeconds(1),
                HoppingWindowOutputPolicy.ClipToWindowEnd)

        After: not supported

    LeftAntiJoin
    Representation of LASJ as a correlated sub-query in the LINQ surface is problematic, as the StreamInsight engine does not support correlated sub-queries (see the discussion of SelectMany). The current syntax requires the introduction of an otherwise unsupported 'IsEmpty()' operator. As a result, the pattern is not discoverable and implies capabilities not present in the server. The direct representation of LASJ is used instead:

        Before:

            from x in xs
            where
                (from y in ys
                 where x.A > y.B
                 select y).IsEmpty()
            select x

        After:

            xs.LeftAntiJoin(ys, (x, y) => x.A > y.B)

        Before:

            from x in xs
            where
                (from y in ys
                 where x.A == y.B
                 select y).IsEmpty()
            select x

        After:

            xs.LeftAntiJoin(ys, x => x.A, y => y.B)

    ApplyWithUnion
    The ApplyWithUnion methods have been deprecated, since their signatures are redundant given the standard SelectMany overloads:

        Before:

            xs.GroupBy(x => x.A).ApplyWithUnion(gs =>
                from win in gs.SnapshotWindow()
                select win.Count())

        After:

            xs.GroupBy(x => x.A).SelectMany(gs =>
                from win in gs.SnapshotWindow()
                select win.Count())

        Before:

            xs.GroupBy(x => x.A).ApplyWithUnion(gs =>
                from win in gs.SnapshotWindow()
                select win.Count(),
                r => new { r.Key, Count = r.Payload })

        After:

            from x in xs
            group x by x.A into gs
            from win in gs.SnapshotWindow()
            select new { gs.Key, Count = win.Count() }

    Alternate UDO syntax
    The representation of UDOs in the StreamInsight LINQ dialect confuses cardinalities. Based on the semantics of user-defined operators in StreamInsight, one would expect to construct queries in the following form:

        from win in xs.SnapshotWindow()
        from y in MyUdo(win)
        select y

    Instead, the UDO proxy method is referenced within a projection, and the (many) results returned by the user code are automatically flattened into a stream:

        from win in xs.SnapshotWindow()
        select MyUdo(win)

    The "many-or-one" confusion is exemplified by the following example, which compiles but fails at runtime:

        from win in xs.SnapshotWindow()
        select MyUdo(win) + win.Count()

    The above query must fail because the UDO is in fact returning many values per window while the count aggregate is returning one.

        Original syntax:

            from win in xs.SnapshotWindow()
            select win.UdoProxy(1)

        New alternate syntax:

            from win in xs.SnapshotWindow()
            from y in win.UserDefinedOperator(() => new Udo(1))
            select y

        -or-

            from win in xs.SnapshotWindow()
            from y in win.UdoMacro(1)
            select y

    Notice that this formulation also sidesteps the dynamic type pitfalls of the existing "proxy method" approach to UDOs, in which the type of the UDO implementation (TInput, TOutput) and the type of its constructor arguments (TConfig) need to align in a precise and non-obvious way with the argument and return types of the corresponding proxy method.

    UDSO syntax
    UDSO currently leverages the DataContractSerializer to clone initial state for logical instances of the user operator. Initial state will instead be described by an expression in the new LINQ surface.

        Before:

            xs.Scan(new Udso())

        After:

            xs.Scan(() => new Udso())

    Name changes

        - ShiftEventTime => AlterEventStartTime: the alter event lifetime overload taking a new start time value has been renamed.
        - CountByStartTimeWindow => CountWindow
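
    To see the new surface in one piece, here is a short sketch that combines the window and Top-K changes described above (xs and the payload fields A and B are carried over from the examples in this post):

        // Assumes xs is an IQStreamable<T> with payload fields A and B,
        // composed against a remote source as described in this post.
        var topFivePerWindow =
            from win in xs.TumblingWindow(TimeSpan.FromSeconds(1))
            from b in
                (from x in win
                 orderby x.A descending
                 select x.B).Take(5)
            select b;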

    Read the article

  • What kinds of demos are good to make for a software engineer job

    - by user23012
    I have created my CV site and sent out my demos for a while now, but most of my demos are either from my course or games related, since my course was a games programming course. I was wondering what kind of demos are good to show off my skills in programming in general. These are what I already have:

        Pennies: just a simple game, the first coursework I did.
        Compiler: coursework for a compiler writing module.
        Pongout: a basic pong game in 68k using colour detection.
        Snake: snake in 68k, same thing as the pong.
        Game Cube Maze: GameCube work.
        BeatmyBot: basic AI.
        Basic platformer game: 2D game with different types of collision.
        Turing Lambda Simulation: my dissertation. A Turing machine simulated in Miranda; alpha and beta reduction and SKI calculus simulated in the Turing machine.

    What I am asking here is what kind of demos are good to add or have? I have been looking and have hit a tough spot: I can't think of anything to make other than games. So for a general graduate software engineer, what types would be good examples?

    EDIT: In response to the comments below: my main language would be C++, followed by Java, Erlang and a bit of Haskell.

    Read the article

  • Review: Data Modeling 101

    I just recently read "Data Modeling 101" by Scott W. Ambler, in which he gives an overview of fundamental data modeling skills. I think this article is excellent for anyone who is just starting to learn, or refreshing their skills in, the modeling of data. Scott defines data modeling as the act of exploring data-oriented structures. He goes on to explain how data models are actually used by defining three different types of models:

        Conceptual Data Models
        Logical Data Models (LDMs)
        Physical Data Models (PDMs)

    He further expands on modeling by exploring common data modeling notations, because there are no industry standards for the practice of data modeling. Scott then defines how to actually model data by expanding on entities, attributes, identities, and relationships, which are the basic building blocks of data models. In addition he discusses the value of normalization for reducing redundancy and denormalization for performance. Finally, he discusses ways in which developers and DBAs can become better data modelers through practice and by seeking guidance from more experienced data modelers.

    Read the article

  • Generating Deep Arrays: Shallow to Deep, Deep to Shallow or Bad idea?

    - by MobyD
    I'm working on an array structure that will be used as the data source for a report template in a web app. The data comes from relatively complex SQL queries that return one or many rows as one-dimensional associative arrays. In the case of many, they are turned into two-dimensional indexed arrays. The data is complex and in some cases there is a lot of it. To save trips to the database (which are extremely expensive in this scenario) I'm attempting to get all of the basic arrays (one- and two-dimensional raw database data) and put them, conditionally, into a single, five-level-deep array. Organizing the data in PHP seems like a better idea than using WHERE statements in the SQL.

    Array structure:

        Array of years(
            year => array of types(
                types => array of information(
                    total => value,
                    table => array of data(
                        index => db array
                    )
                )
            )
        )

    My first question is: is this a bad idea? Are arrays like this appropriate for this situation? If this would work, how should I go about populating it? My initial thought was shallow to deep, but the more I work on this, the more I realize that it'd be very difficult to abstract out the conditionals that determine where each item goes in the array. So it seems that starting from the most deeply nested data may be the approach I should take. If this is array abuse, what alternatives exist?

    Read the article

  • Prevent Looping and Inefficient Rule Executions by C2B2

    - by JuergenKress
    This recipe, taken from the recently published Oracle SOA Suite 11g Performance Cookbook, gives guidance on how to avoid rule executions that will loop, potentially indefinitely! We'll use an inbound XML fact and a local RL fact as an example.

    Getting ready
    You'll need access to a SOA composite containing an Oracle Business Rules component in JDeveloper to apply this recipe. We'll assume you have an XSD schema with an input type RequestInput containing input and bonus String types, and an output String value called output in a type ResponseOutput. These aren't efficient, but they serve as an example. We'll step through adding a rule to a composite and creating an RL fact.

    How to do it...
        1. Open a SOA composite.
        2. Right click on the Project and select Business Rules (Service Components); use the search box if it is not immediately available.
        3. Give the rule a name and click the green plus icon to add the RequestInput to the input and ResponseOutput to the output types.

    Read the complete article here.

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • SSIS Send Mail Task and ForceExecutionValue Error

    - by Kevin Shyr
    I tried to use "ForcedExecutionValue" on several Send Mail Tasks and log the execution into an ExecValueVariable, so that at the end of the package I can log to a table whether the data check was successful or not (determined by whether an email was sent out). I set up a Boolean variable that is accessible at the package level, then set up my Send Mail Task as in the screenshot below, with Boolean as my ForcedExecutionValueType. When I ran the package, I got the error described below.

    Just to make sure this is not another issue SSIS has with the Boolean type (you also can't set a variable value of type Boolean from xp_cmdshell), I used variables of types String, Int32 and DateTime with the corresponding ForcedExecutionValueType. The only way to get around this error was to set my variable to type Object, but then when you try to get the value out later, the Object is null.

    I didn't spend enough time on this to see whether it's really a bug in SSIS or just how the Send Mail Task works. I just want to log the error and will circle back on this later to narrow down the issue some more. In the meantime, please share if you have run into the same problem. The current workaround is to attach a script task at the end.

    Also, I need to note two existing limitations:

        - Data checks need to be done serially, because every check needs an inner join to a master table. The master table has all the data in a single XML column and hence needs to be retrieved with XQuery (a fundamental design flaw that needs to be changed).
        - The next iteration will be to change this design into a FOR loop and pull the checking query from a table somewhere, with all the info needed for the email task, but that is being put to the back of the priority list.

    Error Message:

        Error: 0xC001F009 at CountCheckBetweenODSAndCleanSchema: The type of the value being assigned to variable "User::WasErrorEmailEverSent" differs from the current variable type. Variables may not change type during execution. Variable types are strict, except for variables of type Object.

        Error: 0xC0019001 at Send Mail Task on count mismatch: The wrapper was unable to set the value of the variable specified in the ExecutionValueVariable property.

    Screenshot of my Send Mail Task setup:
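
    For reference, the script task workaround can be as small as setting the package variable directly. This is a sketch of the Main() method inside an SSIS Script Task (the variable must be listed in the task's ReadWriteVariables; the variable name comes from the error message above, and ScriptResults is the enum the script template generates):

        public void Main()
        {
            // Record that the error email was sent, sidestepping the
            // ForcedExecutionValue type restriction described above.
            Dts.Variables["User::WasErrorEmailEverSent"].Value = true;

            Dts.TaskResult = (int)ScriptResults.Success;
        }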

    Read the article

  • SharePoint 2010 Video Training

    - by Sahil Malik
    Yes, the DVD is finally available. This is an exhaustive 14 hour video course that Carl and I recorded back in April. It is an end-to-end overview of SharePoint 2010. You can view more details, including ordering information for the DVD, here. And if you're interested, a SharePoint 2007 video training version is also available. Carl and I worked quite hard on putting these together, so we hope you enjoy them.

    Detailed Table of Contents:

        Introduction (13:49)
        30,000 Foot Overview (42:07)
        Application Management (43:35)
        User Experience (16:00)
        Writing Code Part 1 (1:07:49)
        Writing Code Part 2 (34:41)
        Simple Web Parts (14:01)
        Visual Web Parts (6:35)
        Pages (35:02)
        Putting it All Together (29:13)
        Client Side Technology (49:19)
        ADO.NET Data Services (51:29)
        Custom Data Services (43:30)
        Managing Data (29:02)
        Managing Data: Content Types (17:11)
        Managing Data: Events (19:22)
        Managing Data: List Scalability (35:51)
        Managing Data: Querying (20:07)
        Enterprise Content Management: DocumentIDs and Document Sets (16:44)
        Enterprise Content Management: Metadata Infrastructure (22:13)
        Enterprise Content Management: Record Management (26:27)
        Enterprise Content Management: Content Organizer (7:21)
        Enterprise Content Management: Enterprise Content Types (11:21)
        Business Connectivity Services (BCS) in the SharePoint Designer (26:09)
        BCS in Visual Studio (9:57)
        Workflows in the SharePoint Designer (22:07)
        Workflows in Visual Studio (19:01)
        Business Intelligence (21:14)
        Excel (15:25)
        Performance Point (24:37)
        Security: Claims-Based Authentication (27:13)
        Security: Secure Store Service (11:04)
        Security: The SharePoint Object Model (11:16)

    Read the article

  • When do domain concepts become application constructs?

    - by Noren
    I recently posted a question regarding recovering a DDD architecture that became an anemic domain model into a multitier architecture, and this question is a follow-on of sorts. My question is: when do domain concepts become application constructs?

    My application is a local client C# 4/WPF app with the following architecture:

        Presentation Layer
            Views
            ViewModels
        Business Layer
            ???
        Domain Layer
            Classes that take the POCOs with primitive types and create domain concepts (e.g. image, layer, etc.)
            Sanity-checks values (e.g. image width < 0)
            Interfaces for DTOs
            Interface for a repository that abstracts the filesystem
        Data Access Layer
            Classes that parse the proprietary binary files into POCOs with primitive types by explicit knowledge of the file format
            Implementation of domain DTOs
            Implementation of domain repository class
        Local Filesystem
            Proprietary binary files

    When does the MyImageType domain class with Int32 width, Int32 height, and Int32[] pixels become a System.Windows.Media.ImageDrawing? If I put it in the domain layer, it seems like implementation details are being leaked (what if I didn't want to use WPF?). If I put it in the presentation layer, it seems like it's doing too much. If I create a business layer, it seems like it would be doing too little, since there are few "rules" given the CRUD nature of the application. I think all of my reading has led to analysis paralysis, so I thought fresh eyes might lend some perspective.
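
    For illustration, one arrangement consistent with the layering above keeps the domain type free of WPF and puts the conversion in a small presentation-layer mapper; this is only a sketch with invented member names, not a recommendation:

        // Domain layer: primitives only, no WPF references.
        public class MyImageType
        {
            public int Width { get; private set; }
            public int Height { get; private set; }
            public int[] Pixels { get; private set; }

            public MyImageType(int width, int height, int[] pixels)
            {
                if (width <= 0 || height <= 0)
                    throw new ArgumentOutOfRangeException("width", "dimensions must be positive");
                Width = width; Height = height; Pixels = pixels;
            }
        }

        // Presentation layer: the only place that knows both the domain and WPF.
        public static class ImagePresenter
        {
            public static System.Windows.Media.Imaging.BitmapSource ToBitmap(MyImageType image)
            {
                // Bgra32 (4 bytes per pixel) is assumed purely for illustration.
                return System.Windows.Media.Imaging.BitmapSource.Create(
                    image.Width, image.Height, 96, 96,
                    System.Windows.Media.PixelFormats.Bgra32, null,
                    image.Pixels, image.Width * 4);
            }
        }

    With this split, swapping WPF for another renderer only replaces the presenter, which is one common answer to the "what if I didn't want to use WPF?" concern.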

    Read the article

  • How to organize music files?

    - by newbie
    I'm trying to re-organize my music files. I copied them from my iPod before it died, so they have funky names, like DGEDH.mp3. I tried using a Windows utility to rename them from their ID3 tags, but it didn't work very well: it created a bunch of folders (one for each album in theory, but more like 7 or 8 in reality) and renamed the files with non-English characters. Most of the files are MP3s, but there are at least one or two other file types as well. I'd like to copy them to a new folder and try an Ubuntu utility to rename and re-organize them. In Windows, this would be straightforward: I'd use the Search function (with no file name specified) to list all of the files in the main folder and its subfolders, then drag and drop them to the new folder. What's the easiest way to accomplish the same thing in Ubuntu? The GUI search doesn't seem to accept wildcard characters, and I don't remember all of the file types I have, so it's not as simple as searching for "mp3". Many thanks!

    Read the article

  • What is the role of traditional issue tracker when Scrum / Kanban board is used?

    - by Borek
    From a very high-level view, it seems to me there are generally two types of project management tools:

        1. Traditional issue trackers like FogBugz, JIRA, Bugzilla, Trac, Redmine, etc.
        2. Virtual card boards / agile project management tools like Pivotal Tracker, GreenHopper, AgileZen, Trello, etc.

    Sure, they overlap in one way or another; e.g. Pivotal Tracker tasks can be imported to JIRA, GreenHopper itself is implemented on top of the JIRA issue base, etc., but I think one can still see the difference in orientation between those two types of tools. Traditional issue trackers seem to be used even in companies otherwise doing agile project management. My question is: why do they do that? I also feel that we should use an issue tracker in my company, but when I think about it, I'm not actually sure why we would need it. For example, Trello development seems to be managed by using Trello itself (see this virtual wall) even though they have access to FogBugz, one of the best issue trackers around. So maybe we don't need a traditional issue tracker when we'll be doing 100% of our work in an agile manner using one of the agile PM tools?

    Read the article

  • Why is an anemic domain model considered bad in C#/OOP, but very important in F#/FP?

    - by Danny Tuppeny
    In a blog post on F# for fun and profit, it says:

        In a functional design, it is very important to separate behavior from data. The data types are simple and "dumb". And then separately, you have a number of functions that act on those data types. This is the exact opposite of an object-oriented design, where behavior and data are meant to be combined. After all, that's exactly what a class is. In a truly object-oriented design in fact, you should have nothing but behavior -- the data is private and can only be accessed via methods. In fact, in OOD, not having enough behavior around a data type is considered a Bad Thing, and even has a name: the "anemic domain model".

    Given that in C# we seem to keep borrowing from F#, and trying to write more functional-style code, how come we're not borrowing the idea of separating data and behavior, and even consider it bad? Is it simply that the definition doesn't fit with OOP, or is there a concrete reason that it's bad in C# that for some reason doesn't apply in F# (and in fact, is reversed)?

    (Note: I'm specifically interested in the differences in C#/F# that could change the opinion of what is good/bad, rather than in individuals who may disagree with either opinion in the blog post.)
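
    To make the contrast concrete, here is a minimal C# sketch of the two styles (the Invoice type is invented for illustration). The functional style keeps the data "dumb" and puts behavior in separate functions; the OO style fuses them:

        // Functional style: dumb, immutable data...
        public sealed class Invoice
        {
            public readonly decimal Amount;
            public readonly bool IsPaid;
            public Invoice(decimal amount, bool isPaid) { Amount = amount; IsPaid = isPaid; }
        }

        // ...with behavior in a separate module of pure functions.
        public static class InvoiceLogic
        {
            public static Invoice Pay(Invoice invoice)
            {
                return new Invoice(invoice.Amount, true); // returns a new value, no mutation
            }
        }

        // OO style: data is private and reachable only through behavior.
        public sealed class OoInvoice
        {
            private readonly decimal _amount;
            private bool _isPaid;
            public OoInvoice(decimal amount) { _amount = amount; }
            public void Pay() { _isPaid = true; }
        }

    The first shape is exactly what OO critics would call an anemic domain model, which is what makes the opposing advice interesting.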

    Read the article

  • Designing spawning system

    - by Vlad
    I played this game recently: http://www.kongregate.com/games/JuicyBeast/knightmare-tower, and I am amazed by the way different monsters are being spawned. I personally developed my own shooter game, and I added a time-based but also a count-based spawning system. By count-based I mean: when there are 5 enemies on stage, stop spawning. But this is one example. My question is: how are these spawning mechanisms built? Is there some pattern or some theory for how they are built? Are there some online materials/pages where I can improve my knowledge?

    To summarize, let's just say we have 6 types of monsters. I start the game and kill monsters of types 1, 2 and 3 all the time. Once I pass the first ceiling, like in the game above, monster type 4 appears. And so on. As I progress through the game, the same system of 6 types of monsters stays, but they become more and more resilient and dangerous. So I must also improve to be able to destroy the same monsters, now stronger.

    My question is simple: are there some theories built or written for developing this type of intelligent system?

    Note: This is a general question, not tied to some specific game or how exactly the game should work. I am capable of programming my own mechanisms, but I think I need some help. Thanks.
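
    There is no single named theory, but a common building block in practice is a data-driven spawn table: each entry says which monster type can appear, from which progression point, and with what weight, and the spawner filters and scales by the player's progress. A rough C# sketch (all names and numbers invented):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public sealed class SpawnEntry
        {
            public int MonsterType;   // 1..6 in the example above
            public float MinProgress; // ceiling/height at which this type unlocks
            public float Weight;      // relative spawn probability
        }

        public sealed class Spawner
        {
            private readonly List<SpawnEntry> _table; // assumed to hold at least one entry with MinProgress = 0
            private readonly Random _rng = new Random();

            public Spawner(List<SpawnEntry> table) { _table = table; }

            // Weighted random pick among the monster types unlocked so far.
            public int PickMonster(float progress)
            {
                var unlocked = _table.Where(e => e.MinProgress <= progress).ToList();
                float roll = (float)_rng.NextDouble() * unlocked.Sum(e => e.Weight);
                foreach (var e in unlocked)
                {
                    roll -= e.Weight;
                    if (roll <= 0) return e.MonsterType;
                }
                return unlocked.Last().MonsterType;
            }

            // Old types stay in play but get tougher as the player progresses.
            public float HealthMultiplier(float progress)
            {
                return 1f + progress * 0.1f;
            }
        }

    Count-based and time-based rules like the "stop at 5 enemies" example then sit on top of this, gating when PickMonster is called rather than what it returns.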

    Read the article

  • Program instantly closing [migrated]

    - by Ben Clayton
    I made this program, and when I compiled it there were no errors, but the program just instantly closed. Any answers would be appreciated.

        #include <iostream>  // Main commands
        #include <string>    // String commands
        #include <windows.h> // Sleep

        using namespace std;

        int main ()
        {
            // Declaring variables
            float a;
            bool end;
            std::string input;
            end = false; // Making sure program doesn't end instantly
            cout << "Enter start then the number you want to count down from." << ".\n";
            while (end = false){
                cin >> input;
                cout << ".\n";
                if (input.find("end") != std::string::npos) // Ends the program if user types end
                    end = true;
                else if (input.find("start" || /* || is or operator*/ "restart") != std::string::npos) // Sets up the countdown timer if the user types start
                {
                    cin >> a;
                    cout << ".\n";
                    while (a>0){
                        Sleep(100);
                        a = a - 0.1;
                        cout << a << ".\n";
                    }
                    cout << "Finished! Enter restart and then another number, or enter end to close the program" << ".\n";
                }
                else // Tells user to start program
                    cout << "Enter start";
            }
            return 0; // Ends program when (end = true)
        }

    Read the article

  • Designing generic render/graphics component in C++?

    - by s73v3r
    I'm trying to learn more about component entity systems, so I decided to write a Tetris clone. I'm using the "style" of component-entity system where the Entity is just a bag of Components, the Components are just data, a Node is a set of Components needed to accomplish something, and a System is a set of methods that operates on a Node. All of my components inherit from a basic IComponent interface.

    I'm trying to figure out how to design the Render/Graphics/Drawable components. Originally I was going to use SFML, and everything was going to be good. However, as this is an experimental system, I got the idea of being able to change out the render library at will. I thought that since the rendering would be fairly componentized, this should be doable. However, I'm having problems figuring out how I would design a common interface for the different types of render components. Should I be using C++ template types? It seems that having the RenderComponent somehow return its own mesh/sprite/whatever to the RenderSystem would be the simplest, but that would be difficult to generalize. However, letting the RenderComponent just hold on to data about what it would render would make it hard to re-use this component for different renderable objects (background, falling piece, field of already-fallen blocks, etc).

    I realize this is fairly over-engineered for a regular Tetris clone, but I'm trying to learn about component entity systems and making interchangeable components. It's just that rendering seems to be the hardest to split out for me.

    Read the article

  • Create device receive SMS parse to text ( SMS Gateway )

    - by Chris Okyen
    I want to use a server as a device to run a script to parse an SMS text in the following way:

        I. The person types in a specific and special cell phone number (similar to Facebook's 32556 number used to post on your wall).
        II. The user types a text message.
        III. The user sends the text message.
        IV. The message is sent to some kind of device (the server) or SMS gateway, which receives it.
        V. The device or gateway described above then parses the text message.

    I understand that these three questions will mix programming and server stuff and could reside here or at DBA.SE:

        1. How would I make such a cell phone number (described in step I) that would be sent to the device?
        2. How do I create the device that would then receive it?
        3. Finally, how do I parse the text message?

    I don't want to pay for cloud space, server scripting stuff or server space; I want to just use a free webserver to do this totally free - meaning I will have to do more on my own... My question can be seen in more depth in this visual flowchart.

    Read the article

  • How to test the tests?

    - by Ryszard Szopa
    We test our code to make it more correct (actually, less likely to be incorrect). However, the tests are also code -- they can also contain errors. And if your tests are buggy, they hardly make your code better. I can think of three possible types of errors in tests:

        1. Logical errors, when the programmer misunderstood the task at hand, and the tests do what he thought they should do, which is wrong;
        2. Errors in the underlying testing framework (e.g. a leaky mocking abstraction);
        3. Bugs in the tests: the test is doing something slightly different from what the programmer thinks it is.

    Type (1) errors seem to be impossible to prevent (unless the programmer just... gets smarter). However, (2) and (3) may be tractable. How do you deal with these types of errors? Do you have any special strategies to avoid them? For example, do you write some special "empty" tests that only check the test author's presuppositions? Also, how do you approach debugging a broken test case?
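
    As one concrete tactic against error types (2) and (3), such "presupposition" tests can pin down how the test infrastructure itself behaves before anything else relies on it. A small sketch (NUnit and Moq are just example choices here):

        using Moq;
        using NUnit.Framework;

        [TestFixture]
        public class TestInfrastructureChecks
        {
            public interface IClock { System.DateTime Now { get; } }

            // Asserts nothing about production logic; it only verifies that the
            // mocking library returns what it was configured to return.
            [Test]
            public void Presupposition_MockReturnsConfiguredValue()
            {
                var expected = new System.DateTime(2012, 1, 1);
                var clock = new Mock<IClock>();
                clock.Setup(c => c.Now).Returns(expected);

                Assert.That(clock.Object.Now, Is.EqualTo(expected));
            }
        }

    For error type (3), a cheap habit is to watch each new test fail against the unmodified code before making it pass, so a test that can never fail is caught immediately.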

    Read the article

  • Parallel Class/Interface Hierarchy with the Facade Design Pattern?

    - by Mike G
    About a third of my code is wrapped inside a Facade class. Note that this isn't a "God" class, but actually represents a single thing (called a Line). Naturally, it delegates responsibilities to the subsystem behind it. What ends up happening is that two of the subsystem classes (Output and Timeline) have all of their methods duplicated in the Line class, which effectively makes Line both an Output and a Timeline. It seems to make sense to make Output and Timeline interfaces, so that the Line class can implement them both. At the same time, I'm worried about creating parallel class and interface structures.

    You see, there are different types of lines, AudioLine and VideoLine, which all use the same type of Timeline but different types of Output (AudioOutput and VideoOutput, respectively). So that would mean that I'd have to create an AudioOutputInterface and VideoOutputInterface as well. So not only would I have a parallel class hierarchy, but there would be a parallel interface hierarchy as well. Is there any solution to this design flaw? Here's an image of the basic structure (minus the Timeline class, though know that each Line has-a Timeline).

    NOTE: I just realized that the word 'line' in Timeline might make it sound like it serves a similar function to the Line class. It doesn't, just to clarify.
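
    One way to avoid the parallel interface hierarchy is to make the output type a generic parameter of the facade, so only the Output leaf types vary. A rough C# sketch of the idea (member names invented):

        public interface ITimeline { void Seek(double seconds); }
        public interface IOutput { void Render(); }

        public class Timeline : ITimeline
        {
            public void Seek(double seconds) { /* move the playhead */ }
        }

        // A single generic facade instead of AudioLine/VideoLine with
        // matching AudioOutputInterface/VideoOutputInterface pairs.
        public class Line<TOutput> : ITimeline where TOutput : IOutput
        {
            private readonly Timeline _timeline = new Timeline();

            public TOutput Output { get; private set; }

            public Line(TOutput output) { Output = output; }

            // Delegation instead of duplication: forward to the subsystem.
            public void Seek(double seconds) { _timeline.Seek(seconds); }
        }

    Callers then write new Line<AudioOutput>(...) or new Line<VideoOutput>(...); whether this beats the two-interface design depends on how different the audio and video output APIs really are.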

    Read the article

  • Thoughts on type aliases/synonyms?

    - by Rei Miyasaka
    I'm going to try my best to frame this question in a way that doesn't result in a language war or list, because I think there could be a good, technical answer to this question.

    Different languages support type aliases to varying degrees. C# allows type aliases to be declared at the beginning of each code file, and they're valid only throughout that file. Languages like ML/Haskell use type aliases probably as much as they use type definitions. C/C++ are sort of a Wild West, with typedef and #define often being used seemingly interchangeably to alias types.

    The upsides of type aliasing don't invite too much dispute:

        - It makes it convenient to define composite types that are described naturally by the language, e.g. type Coordinate = float * float or type String = [Char].
        - Long names can be shortened: using DSBA = System.Diagnostics.DebuggerStepBoundaryAttribute.
        - In languages like ML or Haskell, where function parameters often don't have names, type aliases provide a semblance of self-documentation.

    The downside is a bit more iffy: aliases can proliferate, making it difficult to read and understand code or to learn a platform. The Win32 API is a good example, with its DWORD = int and its HINSTANCE = HANDLE = void* and its LPHANDLE = HANDLE FAR* and such. In all of these cases it hardly makes any sense to distinguish between a HANDLE and a void pointer, or a DWORD and an integer, etc.

    Setting aside the philosophical debate of whether a king should give complete freedom to their subjects and let them be responsible for themselves, or whether they should have all of their questionable actions intervened upon, could there be a happy medium that would allow the benefits of type aliasing while mitigating the risk of its abuse? As an example, the issue of long names can be solved by good autocomplete features. Visual Studio 2010, for instance, will allow you to type DSBA to direct IntelliSense to System.Diagnostics.DebuggerStepBoundaryAttribute. Could there be other features that would provide the other benefits of type aliasing more safely?

    Read the article
