Search Results

Search found 25585 results on 1024 pages for 'multiple variables'.

  • Do you know of some performance tests of the different ways to get thread local storage in C++?

    - by Vicente Botet Escriba
    I'm writing a library that makes extensive use of a thread-local variable. Can you point to some benchmarks that test the performance of the different ways to get thread-local variables in C++: C++0x thread_local variables, compiler extensions (GCC __thread, ...), boost::thread_specific_ptr, pthread, Windows, ...? Does C++0x thread_local perform much better on the compilers providing it?

    Read the article

  • Main point of Application Delegate class

    - by ahmet732
    What's the main point behind putting variables and method signatures inside ApplicationDelegate.h in Objective-C? By doing this, all those methods and variables become visible to other view controller classes; is that the point? And also: is there only one application delegate class inside each project?

    Read the article

  • PHP - getting content of the POST request

    - by user364622
    I have a problem with getting the content. I don't know the names of the POST variables, so I can't do this using $_POST['name'] because I don't know the "name". I want to catch all of the variables sent by the POST method. How can I get the keys of the $_POST array and the values associated with them?

    Read the article

  • Using 'this': where it is good and where it is not [closed]

    - by abatishchev
    I like to use the 'this' keyword for all non-local variables: for properties, for class variables, etc. I do this to make the code easier to read and to make it clear where each variable comes from. object someVar; object SomeProperty { get; set; } void SomeMethod(object arg1, object arg2) { this.SomeProperty = arg1; this.someVar = arg2; } What do you think is the proper way to use 'this'?
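    For illustration, a minimal sketch (not from the question) of the one case where the qualifier is required versus the cases where it is optional:

        public class Example
        {
            private object someVar;
            public object SomeProperty { get; set; }

            public Example(object someVar)
            {
                // 'this.' is required here: the constructor parameter shadows the field.
                this.someVar = someVar;
            }

            public void SomeMethod(object arg1, object arg2)
            {
                // No shadowing, so 'this.' is optional; both assignments compile either way.
                SomeProperty = arg1;
                this.someVar = arg2;
            }
        }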

    Read the article

  • Why would one write global code inside a function definition-call pair?

    - by ssg
    I see examples where JavaScript code including jQuery and JSLint uses the notation below: (function(){ // do something })(); instead of: // do something I first thought this was just for local scoping, i.e. creating local variables for the code block without polluting the global namespace. But I've seen instances without any local variables at all too. What am I missing here?

    Read the article

  • Comparing the values of two generic Numbers

    - by PartlyCloudy
    I want to compare two variables, both of type T extends Number. Now I want to know which of the two variables is greater than the other, or whether they are equal. Unfortunately I don't know the exact type yet; I only know that it will be a subtype of java.lang.Number. How can I do that? Thanks!

    Read the article

  • How to reload python module from itself?

    - by mivulf
    I have a Python module with a lot of variables. These variables are used and changed by many other modules, and I want to reload this module from itself when that happens (to refresh it). How can I do that? # ================================== # foo.py spam = 100 def set_spam(value): spam = value foo = reload(foo) # reload module from itself # ================================== # bar.py import foo print foo.spam # I expect 100 # assume that the value changes here from some other modules (for example, from eggs.py, apple.py and fruit.py) foo.set_spam(200) print foo.spam # I expect 200

    Read the article

  • Optimizing division/exponential calculation

    - by Saltheart
    I've inherited a Visual Studio/VB.Net numerical simulation project that has a likely inefficient calculation. Profiling indicates that the function is called a lot (1 million times plus) and that about 50% of the overall calculation time is spent within this function. Here is the problematic portion: Result = (A * (E ^ C)) / (D ^ C * B) (where A-C are local double variables and D & E are global double variables). Result is then compared to a threshold, which might allow additional improvements as well, but I'll leave that for another day. Any thoughts or help would be appreciated. Steve
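    A possible algebraic rewrite (an editorial note, not part of the original post; it assumes the usual floating-point rounding differences are acceptable):

        $$\text{Result} = \frac{A \cdot E^{C}}{D^{C} \cdot B} = \frac{A}{B}\left(\frac{E}{D}\right)^{C}$$

    Written this way, each call needs only a single exponentiation, and because D and E are global, the ratio E/D can be cached and recomputed only when either value changes.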

    Read the article

  • What runs before main()?

    - by MikimotoH
    After testing on msvc8, I found that the following are called before entering main(): parsing GetCommandLine() into argc and argv, Standard C Library initialization, and the C++ constructors of global variables. My questions are: Will this execution order be different when I port my program to a different compiler (gcc or armcc) or a different platform? What does Standard C Library initialization do? So far I know setlocale() is a must. Is it safe to call standard C functions inside the C++ constructors of global variables?

    Read the article

  • Margin totals in xtabs

    - by James
    If you have 2 cross-classifying variables, you can use rowSums and colSums to produce margin totals on an xtabs output. But how can it be done if you have 3 classifying variables (i.e. margin totals in each sub-table)?

    Read the article

  • HTML actual page link

    - by lore3d
    Hi all, I'm building a website, and I need to know the address of the page the user is currently on, in order to take users back to the same page after login. The problem is that every page is generated from variables passed via the URL query string, so I don't know how to recover every variable and assign it the correct value. How can I recover the variable names and assign them the correct values? Thanks, lore (sorry for my English)

    Read the article

  • Splunk is fantastically expensive: What are the alternatives?

    - by samsmith
    This has been discussed, but it has been several months, so it may be time to revisit it: Earlier discussion RE Splunk alternatives For the record, Splunk rocks. But the pricing is simply beyond what we can consider (When I spoke with Splunk today, the cost for a system to index 5 GB/day of data is over $30,000.) That is more than we spend on SQL Server (by a large multiple), more than we spend on a rack of servers (by a multiple), etc. etc. The Splunk sales team is correct (that for $30K we get more value and functionality than if we spend the same building our own system), but it doesn't matter. The Splunk cost is simply too high (by a multiple). Soooooo, we are looking around! Is anyone out there building a Splunk-like system? Our basic needs: Able to listen for syslog messages on multiple UDP ports Able to index the incoming data in an async way Some kind of search engine Some kind of UI An API to the search engine (to embed in our console) We currently need to index 3-5 GB/day, but need to be able to scale to 10 GB/day or more. We do not need a lot of history (30 days is fine). We use Windows 2008 and 2003 servers. Thanks for your thoughts!

    Read the article

  • User Productivity Kit - Powerful Packages (Part 2)

    - by [email protected]
    In my first post on packages I described what a package is and how it can be used. I also started explaining some of the considerations that should be taken into account when determining how to arrange your packages. The first is when the files are interrelated and depend on one another, such as an HTML file and its graphics. A second consideration is how the files are used in your outlines. Let's say you're using a dozen Word doc files. You could place them all in a single package or put each Word doc file in a separate package, but what's the right thing to do? There are several factors that will influence your decision. To understand the first, let me explain a function of UPK publishing. Take an outline in UPK that has an attachment (concept, frame link, or hyperlink) that points to a file in a package. When you publish this outline, the publishing engine will determine that there is a link to a file in the package and copy the contents of the package to the publishing destination directory. This is done to ensure that any interrelated files are kept together. For the situation where you have an HTML file with links to a number of graphics files, this is a good thing. If, however, the package has a dozen unrelated Word doc files and you link to only one of them, all dozen Word documents will be copied to the publishing destination directory.  Whether or not this is a good thing depends on two factors. First, are all of the files in the package used in the outline that you're publishing? Take an outline that includes links to all of the Word documents in that dozen-document package I described earlier. For this situation, you may choose to keep all the files in a single package for convenience. A second consideration is how your organization leverages reuse in UPK. In this context, I'm referring to the link style of reuse, such as when you link to the same topic from multiple UPK outlines and changes to the topic appear in both places. Take an example where you have the earlier-mentioned dozen Word document package and an outline with a dozen topics in it. Each topic has an attachment pointing to one of the Word documents in the package (frame link, concept, etc.). If you're only publishing this outline, the single package probably works fine, but what if you're reusing one of these topics in another outline? As I explained earlier, linking to one file in the package will result in all files in the package being copied to your published output. In this example, linking to one topic in the first outline will result in all dozen Word documents being copied to the published output. This may result in files in the output that you don't want there for business or size reasons. This is a situation in which you should consider placing each of the Word documents in its own separate package. With each document in its own package, that link to a single document will result in only that single package and single Word document being copied to the published output. In my last post I described that packages are documents in the UPK library. When using the multi-user version of the UPK Developer you can leverage standard library capabilities for managing the files in these packages during the development process - capabilities such as check in / check out, history, etc. When structuring your packages, take into consideration how the authors are going to be adding, modifying and deleting files from the packages. A single package is a single document in the UPK library.
    Like any other document in the library, only one user at a time can check out and edit the package. If you have a large number of files in a single package and these must be modified by many users, you need to consider whether this will cause problems as multiple users compete to update the same package. If the files don't depend on each other, consider placing the files in separate packages to reduce contention. I hope you've enjoyed these two posts on how you can leverage the power of packages in your content. In summary, consider the following when structuring your packages: Is the asset a single, standalone file or a set of files that depend on each other? Will all the files always be used together in a single outline, or may only some of the files be needed based on how the content is reused across multiple outlines? Will multiple developers need to update the files in a single package, or should you break it into multiple packages to reduce contention when checking out the document? We'd like to hear from you on how you're using packages in your content. Please add your comments below! Thank you, and I hope these two posts have given you additional insights into how to use packages in your content and structure them for efficient use. John Zaums, Senior Director, Product Development, Oracle User Productivity Kit

    Read the article

  • Availability Best Practices on Oracle VM Server for SPARC

    - by jsavit
    This is the first of a series of blog posts on configuring Oracle VM Server for SPARC (also called Logical Domains) for availability. This series will show how to plan for availability, improve serviceability, avoid single points of failure, and provide resiliency against hardware and software failures. Availability is a broad topic that has filled entire books, so these posts will focus on aspects specifically related to Oracle VM Server for SPARC. The goal is to improve Reliability, Availability and Serviceability (RAS): An article defining RAS can be found here. Oracle VM Server for SPARC Principles for Availability Let's state some guiding principles for availability that apply to Oracle VM Server for SPARC: Avoid Single Points Of Failure (SPOFs). Systems should be configured so a component failure does not result in a loss of application service. The general method to avoid SPOFs is to provide redundancy so service can continue without interruption if a component fails. For a critical application there may be multiple levels of redundancy so multiple failures can be tolerated. Oracle VM Server for SPARC makes it possible to configure systems that avoid SPOFs. Configure for availability at a level of resource and effort consistent with business needs. Effort and resource should be consistent with business requirements. Production has different availability requirements than test/development, so it's worth expending resources to provide higher availability. Even within the category of production there may be different levels of criticality, outage tolerances, recovery and repair time requirements. Keep in mind that a simple design may be more understandable and effective than a complex design that attempts to "do everything". Design for availability at the appropriate tier or level of the platform stack. Availability can be provided in the application, in the database, or in the virtualization, hardware and network layers they depend on - or using a combination of all of them. It may not be necessary to engineer resilient virtualization for stateless web applications where availability is provided by a network load balancer, or for enterprise applications like Oracle Real Application Clusters (RAC) and WebLogic that provide their own resiliency. It's (often) the same architecture whether virtual or not: For example, providing resiliency against a lost device path or failing disk media is done for the same reasons and may use the same design whether in a domain or not. It's (often) the same technique whether using domains or not: Many configuration steps are the same. For example, configuring IPMP or creating a redundant ZFS pool is pretty much the same within the guest whether you're in a guest domain or not. There are configuration steps and choices for provisioning the guest with the virtual network and disk devices, which we will discuss. Sometimes it is different using domains: There are new resources to configure. Most notable is the use of alternate service domains, which provides resiliency in case of a domain failure, and also permits improved serviceability via "rolling upgrades". This is an important differentiator between Oracle VM Server for SPARC and traditional virtual machine environments where all virtual I/O is provided by a monolithic infrastructure that itself is a SPOF. Alternate service domains are widely used to provide resiliency in production logical domains environments.
Some things are done via logical domains commands, and some are done in the guest: For example, with Oracle VM Server for SPARC we provide multiple network connections to the guest, and then configure network resiliency in the guest via IP Multi Pathing (IPMP) - essentially the same as for non-virtual systems. On the other hand, we configure virtual disk availability in the virtualization layer, and the guest sees an already-resilient disk without being aware of the details. These blogs will discuss configuration details like this. Live migration is not "high availability" in the sense of "continuous availability": If the server is down, then you don't live migrate from it! (A cluster or VM restart elsewhere would be used). However, live migration can be part of the RAS (Reliability, Availability, Serviceability) picture by improving Serviceability - you can move running domains off of a box before planned service or maintenance. The blog Best Practices - Live Migration on Oracle VM Server for SPARC discusses this. Topics Here are some of the topics that will be covered: Network availability using IP Multipathing and aggregates Disk path availability using virtual disks defined with multipath groups ("mpgroup") Disk media resiliency configuring for redundant disks that can tolerate media loss Multiple service domains - this is probably the most significant item and the one most specific to Oracle VM Server for SPARC. It is very widely deployed in production environments as the means to provide network and disk availability, but it can be confusing. Subsequent articles will describe why and how to configure multiple service domains. Note, for the sake of precision: an I/O domain is any domain that has a physical I/O resource (such as a PCIe bus root complex). A service domain is a domain providing virtual device services to other domains; it is almost always an I/O domain too (so it can have something to serve). Resources Here are some important links; we'll be drawing on their content in the next several articles: Oracle VM Server for SPARC Documentation Maximizing Application Reliability and Availability with SPARC T5 Servers whitepaper by Gary Combs Maximizing Application Reliability and Availability with the SPARC M5-32 Server whitepaper by Gary Combs Summary Oracle VM Server for SPARC offers features that can be used to provide highly-available environments. This and the following blog entries will describe how to plan and deploy them.

    Read the article

  • "Imprinting" as a language feature?

    - by MKO
    Idea I had this idea for a language feature that I think would be useful; does anyone know of a language that implements something like this? The idea is that besides inheritance a class can also use something called "imprinting" (for lack of a better term). A class can imprint one or several (non-abstract) classes. When a class imprints another class it gets all its properties and all its methods. It's like the class storing an instance of the imprinted class and redirecting its methods/properties to it. A class that imprints another class therefore by definition also implements all its interfaces and its abstract class. So what's the point? Well, inheritance and polymorphism are hard to get right. Often composition gives far more flexibility. Multiple inheritance offers a slew of different problems without much benefit (IMO). I often write adapter classes (in C#) by implementing some interface and passing along the actual methods/properties to an encapsulated object. The downside to that approach is that if the interface changes, the class breaks. You also have to put in a lot of code that does nothing but pass things along to the encapsulated object. A classic example is that you have some class that implements IEnumerable or IList and contains an internal class it uses. With this technique things would be much easier. Example (C#): [imprint List<Person> as peopleList] public class People : PersonBase { public void SomeMethod() { DoSomething(this.Count); //Count is from List } } //Now People can be treated as a List<Person> People people = new People(); foreach(Person person in people) { ... } peopleList is an alias/variable name (of your choice) used internally to alias the instance, but it can be skipped if not needed. One thing that's useful is to override an imprinted method; that could be achieved with the ordinary override syntax: public override void Add(Person person) { DoSomething(); personList.Add(person); } Note that the above is functionally equivalent (and could be rewritten by the compiler) to: public class People : PersonBase , IList<Person> { private List<Person> personList = new List<Person>(); public override void Add(object obj) { this.personList.Add(obj) } public override int IndexOf(object obj) { return personList.IndexOf(obj) } //etc etc for each signature in the interface } Now your class will break only if IList changes. IList won't change, but an interface that you, someone in your team, or a third party has designed might just change. Also this saves you writing a whole lot of code for some interfaces/abstract classes. Caveats There are a couple of gotchas. First, syntax must be added to call the imprinted classes' constructors from the imprinting class's constructor. Also, what happens if a class imprints two classes which have the same method? In that case the compiler would detect it and force the class to define an override of that method (where you could choose whether to call either imprinted class or both). So what do you think, would it be useful, any caveats? It seems it would be pretty straightforward to implement something like that in the C# language but I might be missing something :) Sidenote - Why is this different from multiple inheritance Ok, so some people have asked about this. Why is this different from multiple inheritance and why not multiple inheritance. In C# methods are either virtual or not. Say that we have ClassB, which inherits from ClassA. ClassA has the methods MethodA and MethodB. ClassB overrides MethodA but not MethodB.
    Now say that MethodB has a call to MethodA. If MethodA is virtual, it will call the implementation that ClassB has; if not, it will use the base class ClassA's MethodA, and you'll end up wondering why your class doesn't work as it should. By the terminology so far you might already be confused. So what happens if ClassB inherits both from ClassA and another ClassC? I bet both programmers and compilers will be scratching their heads. The benefit of this approach IMO is that the imprinting classes are totally encapsulated and need not be designed with multiple inheritance in mind. You can basically imprint anything.
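    For reference, a minimal sketch of the hand-written delegation the post says it is trying to avoid; the names Person and PersonCollection are illustrative, not from the original:

        using System.Collections;
        using System.Collections.Generic;

        public class Person { public string Name { get; set; } }

        // Composition plus manual forwarding: the boilerplate that "imprinting"
        // would generate automatically from something like [imprint List<Person> as peopleList].
        public class PersonCollection : IEnumerable<Person>
        {
            private readonly List<Person> peopleList = new List<Person>();

            public int Count { get { return peopleList.Count; } }
            public void Add(Person person) { peopleList.Add(person); }

            public IEnumerator<Person> GetEnumerator() { return peopleList.GetEnumerator(); }
            IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
        }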

    Read the article

  • Parallelism in .NET – Part 14, The Different Forms of Task

    - by Reed
    Before discussing Task creation and actual usage in concurrent environments, I will briefly expand upon my introduction of the Task class and provide a short explanation of the distinct forms of Task.  The Task Parallel Library includes four distinct, though related, variations on the Task class. In my introduction to the Task class, I focused on the most basic version of Task.  This version of Task, the standard Task class, is most often used with an Action delegate.  This allows you to implement each task within the task decomposition as a single delegate. Typically, when using the new threading constructs in .NET 4 and the Task Parallel Library, we use lambda expressions to define anonymous methods.  The advantage of using a lambda expression is that it allows the Action delegate to directly use variables in the calling scope.  This eliminates the need to make separate Task classes for Action<T>, Action<T1,T2>, and all of the other Action<…> delegate types.  As an example, suppose we wanted to make a Task to handle the "Show Splash" task from our earlier decomposition.  Even if this task required parameters, such as a message to display, we could still use an Action delegate specified via a lambda: // Store this as a local variable string messageForSplashScreen = GetSplashScreenMessage(); // Create our task Task showSplashTask = new Task( () => { // We can use variables in our outer scope, // as well as methods scoped to our class! this.DisplaySplashScreen(messageForSplashScreen); }); This provides a huge amount of flexibility.  We can use this single form of task for any task which performs an operation, provided the only information we need to track is whether the task has completed successfully or not.  This leads to my first observation: Use a Task with a System.Action delegate for any task for which no result is generated. This observation leads to an obvious corollary: we also need a way to define a task which generates a result.  The Task Parallel Library provides this via the Task<TResult> class. Task<TResult> subclasses the standard Task class, providing one additional feature – the ability to return a value back to the user of the task.  This is done by switching from providing an Action delegate to providing a Func<TResult> delegate.  If we decompose our problem, and we realize we have one task where its result is required by a future operation, this can be handled via Task<TResult>.  For example, suppose we want to make a task for our "Check for Update" task, we could do: Task<bool> checkForUpdateTask = new Task<bool>( () => { return this.CheckWebsiteForUpdate(); }); Later, we would start this task, and perform some other work.
At any point in the future, we could get the value from the Task<TResult>.Result property, which will cause our thread to block until the task has finished processing: // This uses Task<bool> checkForUpdateTask generated above... // Start the task, typically on a background thread checkForUpdateTask.Start(); // Do some other work on our current thread this.DoSomeWork(); // Discover, from our background task, whether an update is available // This will block until our task completes bool updateAvailable = checkForUpdateTask.Result; This leads me to my second observation: Use a Task<TResult> with a System.Func<TResult> delegate for any task which generates a result. Task and Task<TResult> provide a much cleaner alternative to the previous Asynchronous Programming design patterns in the .NET framework.  Instead of trying to implement IAsyncResult, and providing BeginXXX() and EndXXX() methods, implementing an asynchronous programming API can be as simple as creating a method that returns a Task or Task<TResult>.  The client side of the pattern also is dramatically simplified – the client can call a method, then either choose to call task.Wait() or use task.Result when it needs to wait for the operation’s completion. While this provides a much cleaner model for future APIs, there is quite a bit of infrastructure built around the current Asynchronous Programming design patterns.  In order to provide a model to work with existing APIs, two other forms of Task exist.  There is a constructor for Task which takes an Action<Object> and a state parameter.  In addition, there is a constructor for creating a Task<TResult> which takes a Func<Object, TResult> as well as a state parameter.  When using these constructors, the state parameter is stored in the Task.AsyncState property. While these two overloads exist, and are usable directly, I strongly recommend avoiding this for new development.  The two forms of Task which take an object state parameter exist primarily for interoperability with traditional .NET Asynchronous Programming methodologies.  Using lambda expressions to capture variables from the scope of the creator is a much cleaner approach than using the untyped state parameters, since lambda expressions provide full type safety without introducing new variables.
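    As a small illustration of the state-parameter overloads mentioned above (a sketch, not code from the article; the message text is made up):

        using System;
        using System.Threading.Tasks;

        class StateOverloadDemo
        {
            static void Main()
            {
                string message = "Checking for updates...";

                // Interop-style overload: an Action<object> plus a state argument,
                // later visible through the Task.AsyncState property.
                Task legacyStyleTask = new Task(state => Console.WriteLine((string)state), message);
                legacyStyleTask.Start();
                legacyStyleTask.Wait();
                Console.WriteLine(legacyStyleTask.AsyncState);

                // Preferred style: a lambda that simply captures the variable, no untyped state needed.
                Task lambdaStyleTask = new Task(() => Console.WriteLine(message));
                lambdaStyleTask.Start();
                lambdaStyleTask.Wait();
            }
        }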

    Read the article

  • Customize Team Build 2010 – Part 16: Specify the relative reference path

    In the series the following parts have been published Part 1: Introduction Part 2: Add arguments and variables Part 3: Use more complex arguments Part 4: Create your own activity Part 5: Increase AssemblyVersion Part 6: Use custom type for an argument Part 7: How is the custom assembly found Part 8: Send information to the build log Part 9: Impersonate activities (run under other credentials) Part 10: Include Version Number in the Build Number Part 11: Speed up opening my build process template Part 12: How to debug my custom activities Part 13: Get control over the Build Output Part 14: Execute a PowerShell script Part 15: Fail a build based on the exit code of a console application Part 16: Specify the relative reference path As I have already blogged about, it is not intuitive how to specify the paths where the build server has to look for references that are stored in Source Control. It is a common practice to store 3rd party libraries in Source Control, so they are available to everyone, everyone uses the same version of the libraries and updating a library can be done centrally. In Team Build 2010 these paths are specified as a parameter for MSBuild. What we will do in this post is building the values for this parameter based on the values in an argument. You are now pretty aware how to customize the build template, so let’s do the modifications in another way. Instead of opening the xaml file in the workflow designer, we open it in the XML editor. You can open it in the XML Editor by either selecting the Open with menu (see the context menu), or by choosing the View code option. To add this functionality we need to: Specify a new argument Add the argument to the metadata Build the absolute paths for the references and add these paths to the MSBuild arguments 1. Specify a new argument Locate at the top of the document the Members (which are the arguments) of the XAML and add the following line <x:Property Name="ReferencePaths" Type="InArgument(s:String[])" /> 2. Add the argument to the metadata Then locate the line <mtbw:ProcessParameterMetadataCollection> and paste the following line <mtbw:ProcessParameterMetadata Category="Misc" Description="The list of reference paths, relative to the root path in the Workspace mapping." DisplayName="Reference paths" ParameterName="ReferencePaths" /> 3. Build the absolute paths for the references and add these paths to the MSBuild arguments Now locate the place where the assignments are done to the variables used in the agent. 
And add the following lines after the last Assign activity         <Sequence DisplayName="Initialize ReferencePath" sap:VirtualizedContainerService.HintSize="464,428">           <Sequence.Variables>             <Variable x:TypeArguments="x:String" Name="ReferencePathsArgument">               <Variable.Default>                 <Literal x:TypeArguments="x:String" Value="" />               </Variable.Default>             </Variable>           </Sequence.Variables>           <sap:WorkflowViewStateService.ViewState>             <scg:Dictionary x:TypeArguments="x:String, x:Object">               <x:Boolean x:Key="IsExpanded">True</x:Boolean>             </scg:Dictionary>           </sap:WorkflowViewStateService.ViewState>           <ForEach x:TypeArguments="x:String" DisplayName="Iterate through the paths" sap:VirtualizedContainerService.HintSize="287,206" mtbwt:BuildTrackingParticipant.Importance="Low" Values="[ReferencePaths]">             <ActivityAction x:TypeArguments="x:String">               <ActivityAction.Argument>                 <DelegateInArgument x:TypeArguments="x:String" Name="path" />               </ActivityAction.Argument>               <Assign x:TypeArguments="x:String" DisplayName="Build ReferencePath argument" sap:VirtualizedContainerService.HintSize="257,100" mtbwt:BuildTrackingParticipant.Importance="Low"  To="[ReferencePathsArgument]" Value="[If(String.IsNullOrEmpty(ReferencePathsArgument), &quot;&quot;, ReferencePathsArgument + &quot;;&quot;) + IO.Path.Combine(SourcesDirectory, path)]" />             </ActivityAction>           </ForEach>           <Assign DisplayName="Append the reference paths to the MSBuild Arguments" sap:VirtualizedContainerService.HintSize="287,58">             <Assign.To>               <OutArgument x:TypeArguments="x:String">[MSBuildArguments]</OutArgument>             </Assign.To>             <Assign.Value>               <InArgument x:TypeArguments="x:String">[String.Format("{0} /p:ReferencePath=""{1}""", MSBuildArguments, ReferencePathsArgument)]</InArgument>             </Assign.Value>           </Assign>         </Sequence> Now you can use the template to specify the paths relative to SourcesDirectory. You can download the full solution at BuildProcess.zip. It will include the sources of every part and will continue to evolve.

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #052

    - by Pinal Dave
    Let us continue with the final episode of the Memory Lane Series. Here is the list of selected articles from SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and have listed them here with additional notes below each. Let me know which one of the following is your favorite article from memory lane. 2007 Set Server Level FILLFACTOR Using T-SQL Script Specifies a percentage that indicates how full the Database Engine should make the leaf level of each index page during index creation or alteration. fillfactor must be an integer value from 1 to 100. The default is 0. Limitation of Online Index Rebuild Operation An online operation means that while it is running the database remains in its normal operational condition, and the processes participating in the online operation do not require exclusive access to the database. Get Permissions of My Username / Userlogin on Server / Database A few days ago, I was invited to one of the largest database companies. I was asked to review the database schema and propose changes to it. A special username / user login was created for me so I could review their database. I was very much interested to know what kind of permissions I was assigned at the server level and the database level. I did not feel like asking the Sr. DBA about permissions. Simple Example of WHILE Loop With CONTINUE and BREAK Keywords This question is one of those questions which is very simple and most of the users get it correct; however, a few users find it confusing the first time. I have tried to explain the usage of a simple WHILE loop in the first example. The BREAK keyword stops the WHILE loop and control is moved to the next statement after the loop. The CONTINUE keyword skips all the statements after it and control is sent back to the first statement of the WHILE loop. Forced Parameterization and Simple Parameterization – T-SQL and SSMS When the PARAMETERIZATION option is set to FORCED, any literal value that appears in a SELECT, INSERT, UPDATE or DELETE statement is converted to a parameter during query compilation. When the PARAMETERIZATION database option is SET to SIMPLE, the SQL Server query optimizer may choose to parameterize the queries. 2008 Transaction and Local Variables – Swap Variables – Update All At Once Concept Summary: Transactions have no effect on memory variables. When an UPDATE statement is applied to any table (physical or memory), all the updates are applied together at one time when the statement is committed. First of all I suggest that you read the article listed above about the effect of transactions on local variables. As seen there, local variables are independent of any transaction effect. Simulate INNER JOIN using LEFT JOIN statement – Performance Analysis Just a day ago, while I was working with JOINs I found one interesting observation, which prompted me to create the following example. Before we continue further let me make it very clear that INNER JOIN should be used wherever it can be used, and simulating INNER JOIN using any other JOINs will degrade the performance. If there is scope to convert any OUTER JOIN to an INNER JOIN it should be done with priority.
    2009 Introduction to Business Intelligence – Important Terms & Definitions Business intelligence (BI) is a broad category of application programs and technologies for gathering, storing, analyzing, and providing access to data from various data sources, thus providing enterprise users with reliable and timely information and analysis for improved decision making. Difference Between Candidate Keys and Primary Key Candidate Key – A Candidate Key can be any column or a combination of columns that can qualify as a unique key in the database. There can be multiple Candidate Keys in one table. Each Candidate Key can qualify as the Primary Key. Primary Key – A Primary Key is a column or a combination of columns that uniquely identifies a record. Only one Candidate Key can be the Primary Key. 2010 Taking Multiple Backup of Database in Single Command – Mirrored Database Backup I recently had a very interesting experience. In one of my recent consultancy engagements, I was told by our client that they were going to take a backup of the database and would also make a copy of it at the same time. I expressed that it was surely possible if they were going to use a mirror command. In addition, they told me that whenever they take two copies of the database, the size of the database is always reduced. Now this was something not clear to me; I said it was not possible, and so I asked them to show me the script. Corrupted Backup File and Unsuccessful Restore The CTO, who was also present at the location, got very upset with this situation. He then asked when the last successful restore test was done. As expected, the answer was NEVER. There were no successful restore tests done before. During that time, I was present and I could clearly see the stress, confusion, carelessness and anger around me. I did not appreciate the feeling, and I was pretty sure that no one in there wanted that atmosphere any more than I did. 2011 TRACEWRITE – Wait Type – Wait Related to Buffer and Resolution SQL Trace is a SQL Server database engine technology which monitors specific events generated when various actions occur in the database engine. When any event is fired it goes through various stages as well as various routes. One of the routes is the Trace I/O Provider, which sends data to its final destination either as a file or a rowset. DATEDIFF – Accuracy of Various Dateparts If you want to have accuracy in seconds, you need to use a different approach. In the first example, the accurate method is to find the number of seconds first and then divide it by 60 to convert it to minutes. Dedicated Access Control for SQL Server Express Edition http://www.youtube.com/watch?v=1k00z82u4OI Book Signing at SQLPASS 2012 Who I Am And How I Got Here – True Story as Blog Post If there was a shortcut to success – I would want to know. I learnt SQL Server the hard way and I am still learning. There are so many things I have to learn. There is not enough time to learn everything which we want to learn. I am constantly working on it every day. I welcome you to join my journey as well. Please join me in my journey to learn SQL Server – more the merrier. Vacation, Travel and Study – A New Concept Even those who have advanced degrees and went to college for years, or even decades, find studying hard.  There is a difference between studying for a career and studying for a certification.  At least to get a degree there is a variety of subjects, with labs, exams, and practice problems to make things more interesting.
    Order By Numeric Values Formatted as String We have a table which has a column containing alphanumeric data. The data always has an integer as the first part and a string as the later part. The business need is to order the data based on the first part of the alphanumeric data, which is an integer. Now the problem is that no matter how we use ORDER BY, the result is not produced as expected. Let us understand this with an example. Resolving SQL Server Connection Errors – SQL in Sixty Seconds #030 – Video One of the most famous errors related to SQL Server is about connecting to SQL Server itself. Here is how it goes: most of the time developers have worked with SQL Server and know pretty much every error which they face during development. However, they hardly ever install a fresh SQL Server. As the installation of SQL Server is a rare occasion unless you are a DBA who is responsible for such an instance – the errors faced during installation are pretty rare as well. http://www.youtube.com/watch?v=1k00z82u4OI Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Accessing PerSession service simultaneously in WCF using C#

    - by krishna555
    1.) I have a main method Processing, which takes string as an arguments and that string contains some x number of tasks. 2.) I have another method Status, which keeps track of first method by using two variables TotalTests and CurrentTest. which will be modified every time with in a loop in first method(Processing). 3.) When more than one client makes a call parallely to my web service to call the Processing method by passing a string, which has different tasks will take more time to process. so in the mean while clients will be using a second thread to call the Status method in the webservice to get the status of the first method. 4.) when point number 3 is being done all the clients are supposed to get the variables(TotalTests,CurrentTest) parallely with out being mixed up with other client requests. 5.) The code that i have provided below is getting mixed up variables results for all the clients when i make them as static. If i remove static for the variables then clients are just getting all 0's for these 2 variables and i am unable to fix it. Please take a look at the below code. [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)] public class Service1 : IService1 { public int TotalTests = 0; public int CurrentTest = 0; public string Processing(string OriginalXmlString) { XmlDocument XmlDoc = new XmlDocument(); XmlDoc.LoadXml(OriginalXmlString); this.TotalTests = XmlDoc.GetElementsByTagName("TestScenario").Count; //finding the count of total test scenarios in the given xml string this.CurrentTest = 0; while(i<10) { ++this.CurrentTest; i++; } } public string Status() { return (this.TotalTests + ";" + this.CurrentTest); } } server configuration <wsHttpBinding> <binding name="WSHttpBinding_IService1" closeTimeout="00:10:00" openTimeout="00:10:00" receiveTimeout="00:10:00" sendTimeout="00:10:00" bypassProxyOnLocal="false" transactionFlow="false" hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="524288" maxReceivedMessageSize="2147483647" messageEncoding="Text" textEncoding="utf-8" useDefaultWebProxy="true" allowCookies="false"> <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647" maxArrayLength="2147483647" maxBytesPerRead="2147483647" maxNameTableCharCount="2147483647" /> <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="true" /> <security mode="Message"> <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" /> <message clientCredentialType="Windows" negotiateServiceCredential="true" algorithmSuite="Default" establishSecurityContext="true" /> </security> </binding> </wsHttpBinding> client configuration <wsHttpBinding> <binding name="WSHttpBinding_IService1" closeTimeout="00:10:00" openTimeout="00:10:00" receiveTimeout="00:10:00" sendTimeout="00:10:00" bypassProxyOnLocal="false" transactionFlow="false" hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="524288" maxReceivedMessageSize="2147483647" messageEncoding="Text" textEncoding="utf-8" useDefaultWebProxy="true" allowCookies="false"> <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647" maxArrayLength="2147483647" maxBytesPerRead="2147483647" maxNameTableCharCount="2147483647" /> <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="true" /> <security mode="Message"> <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" /> <message clientCredentialType="Windows" negotiateServiceCredential="true" algorithmSuite="Default" establishSecurityContext="true" /> </security> </binding> 
</wsHttpBinding> Below mentioned is my client code class Program { static void Main(string[] args) { Program prog = new Program(); Thread JavaClientCallThread = new Thread(new ThreadStart(prog.ClientCallThreadRun)); Thread JavaStatusCallThread = new Thread(new ThreadStart(prog.StatusCallThreadRun)); JavaClientCallThread.Start(); JavaStatusCallThread.Start(); } public void ClientCallThreadRun() { XmlDocument doc = new XmlDocument(); doc.Load(@"D:\t72CalculateReasonableWithdrawal_Input.xml"); bool error = false; Service1Client Client = new Service1Client(); string temp = Client.Processing(doc.OuterXml, ref error); } public void StatusCallThreadRun() { int i = 0; Service1Client Client = new Service1Client(); string temp; while (i < 10) { temp = Client.Status(); Thread.Sleep(1500); Console.WriteLine("TotalTestScenarios;CurrentTestCase = {0}", temp); i++; } } } Can any one please help.
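    One detail worth checking (a hedged sketch, since the contract interface isn't shown in the question): for PerSession instancing over wsHttpBinding, the contract can require a session so that both methods are guaranteed to reach the same service instance; note also that a session is tied to a specific client channel, so two separate Service1Client proxies would each get their own PerSession instance. The SessionMode setting below is the assumed addition; IService1 and the method names come from the question, and the exact signatures are simplified.

        using System.ServiceModel;

        [ServiceContract(SessionMode = SessionMode.Required)]
        public interface IService1
        {
            [OperationContract]
            string Processing(string originalXmlString);

            [OperationContract]
            string Status();
        }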

    Read the article

  • We've completed the first iteration

    - by CliveT
    There are a lot of features in C# that are implemented by the compiler and not by the underlying platform. One such feature is a lambda expression. Since local variables cannot be accessed once the current method activation finishes, the compiler has to go out of its way to generate a new class which acts as a home for any variable whose lifetime needs to be extended past the activation of the procedure. Take the following example:     Random generator = new Random();     Func func = () = generator.Next(10); In this case, the compiler generates a new class called c_DisplayClass1 which is marked with the CompilerGenerated attribute. [CompilerGenerated] private sealed class c__DisplayClass1 {     // Fields     public Random generator;     // Methods     public int b__0()     {         return this.generator.Next(10);     } } Two quick comments on this: (i)    A display was the means that compilers for languages like Algol recorded the various lexical contours of the nested procedure activations on the stack. I imagine that this is what has led to the name. (ii)    It is a shame that the same attribute is used to mark all compiler generated classes as it makes it hard to figure out what they are being used for. Indeed, you could imagine optimisations that the runtime could perform if it knew that classes corresponded to certain high level concepts. We can see that the local variable generator has been turned into a field in the class, and the body of the lambda expression has been turned into a method of the new class. The code that builds the Func object simply constructs an instance of this class and initialises the fields to their initial values.     c__DisplayClass1 class2 = new c__DisplayClass1();     class2.generator = new Random();     Func func = new Func(class2.b__0); Reflector already contains code to spot this pattern of code and reproduce the form containing the lambda expression, so this is example is correctly decompiled. The use of compiler generated code is even more spectacular in the case of iterators. C# introduced the idea of a method that could automatically store its state between calls, so that it can pick up where it left off. The code can express the logical flow with yield return and yield break denoting places where the method should return a particular value and be prepared to resume.         {             yield return 1;             yield return 2;             yield return 3;         } Of course, there was already a .NET pattern for expressing the idea of returning a sequence of values with the computation proceeding lazily (in the sense that the work for the next value is executed on demand). This is expressed by the IEnumerable interface with its Current property for fetching the current value and the MoveNext method for forcing the computation of the next value. The sequence is terminated when this method returns false. The C# compiler links these two ideas together so that an IEnumerator returning method using the yield keyword causes the compiler to produce the implementation of an Iterator. Take the following piece of code.         IEnumerable GetItems()         {             yield return 1;             yield return 2;             yield return 3;         } The compiler implements this by defining a new class that implements a state machine. This has an integer state that records which yield point we should go to if we are resumed. It also has a field that records the Current value of the enumerator and a field for recording the thread. 
This latter value is used for optimising the creation of iterator instances. [CompilerGenerated] private sealed class d__0 : IEnumerable, IEnumerable, IEnumerator, IEnumerator, IDisposable {     // Fields     private int 1__state;     private int 2__current;     public Program 4__this;     private int l__initialThreadId; The body gets converted into the code to construct and initialize this new class. private IEnumerable GetItems() {     d__0 d__ = new d__0(-2);     d__.4__this = this;     return d__; } When the class is constructed we set the state, which was passed through as -2 and the current thread. public d__0(int 1__state) {     this.1__state = 1__state;     this.l__initialThreadId = Thread.CurrentThread.ManagedThreadId; } The state needs to be set to 0 to represent a valid enumerator and this is done in the GetEnumerator method which optimises for the usual case where the returned enumerator is only used once. IEnumerator IEnumerable.GetEnumerator() {     if ((Thread.CurrentThread.ManagedThreadId == this.l__initialThreadId)               && (this.1__state == -2))     {         this.1__state = 0;         return this;     } The state machine itself is implemented inside the MoveNext method. private bool MoveNext() {     switch (this.1__state)     {         case 0:             this.1__state = -1;             this.2__current = 1;             this.1__state = 1;             return true;         case 1:             this.1__state = -1;             this.2__current = 2;             this.1__state = 2;             return true;         case 2:             this.1__state = -1;             this.2__current = 3;             this.1__state = 3;             return true;         case 3:             this.1__state = -1;             break;     }     return false; } At each stage, the current value of the state is used to determine how far we got, and then we generate the next value which we return after recording the next state. Finally we return false from the MoveNext to signify the end of the sequence. Of course, that example was really simple. The original method body didn't have any local variables. Any local variables need to live between the calls to MoveNext and so they need to be transformed into fields in much the same way that we did in the case of the lambda expression. More complicated MoveNext methods are required to deal with resources that need to be disposed when the iterator finishes, and sometimes the compiler uses a temporary variable to hold the return value. Why all of this explanation? We've implemented the de-compilation of iterators in the current EAP version of Reflector (7). This contrasts with previous version where all you could do was look at the MoveNext method and try to figure out the control flow. There's a fair amount of things we have to do. We have to spot the use of a CompilerGenerated class which implements the Enumerator pattern. We need to go to the class and figure out the fields corresponding to the local variables. We then need to go to the MoveNext method and try to break it into the various possible states and spot the state transitions. We can then take these pieces and put them back together into an object model that uses yield return to show the transition points. After that Reflector can carry on optimising using its usual optimisations. The pattern matching is currently a little too sensitive to changes in the code generation, and we only do a limited analysis of the MoveNext method to determine use of the compiler generated fields. 
Why all of this explanation? We've implemented the decompilation of iterators in the current EAP version of Reflector (7). This contrasts with previous versions, where all you could do was look at the MoveNext method and try to figure out the control flow. There's a fair amount we have to do: we have to spot the use of a CompilerGenerated class which implements the enumerator pattern, go to the class and figure out the fields corresponding to the local variables, and then go to the MoveNext method, break it into the various possible states and spot the state transitions. We can then take these pieces and put them back together into an object model that uses yield return to show the transition points. After that, Reflector can carry on optimising using its usual optimisations. The pattern matching is currently a little too sensitive to changes in the code generation, and we only do a limited analysis of the MoveNext method to determine use of the compiler-generated fields.

In some ways, it is a pity that iterators are compiled away and there is no metadata that reflects the original intent. Without it, we are always going to be dependent on our knowledge of the compiler's implementation. For example, we have noticed that the Async CTP changes the way that iterators are code-generated, so we'll have to do some more work to support that. However, with that warning in place, we seem to do a reasonable job of decompiling the iterators that are built into the framework. Hopefully, the EAP will give us a chance to find examples where we don't spot the pattern correctly or regenerate the wrong code, and we can improve things. Please give it a go, and report any problems.

    Read the article

  • Keypress detection won't work after seemingly unrelated code change

    - by LukeZaz
    I'm trying to have the Enter key cause a new 'map' to generate for my game, but for whatever reason, after implementing full-screen, the input check no longer works. I tried removing the new code and only pressing one key at a time, but it still won't work. Here's the check code and the method it uses, along with the newMap method:

    public class Game1 : Microsoft.Xna.Framework.Game
    {
        // ...

        protected override void Update(GameTime gameTime)
        {
            // ...

            // Check if Enter was pressed - if so, generate a new map
            if (CheckInput(Keys.Enter, 1))
            {
                blocks = newMap(map, blocks, console);
            }

            // ...
        }

        // Method: Checks if a key is/was pressed
        public bool CheckInput(Keys key, int checkType)
        {
            // Get current keyboard state
            KeyboardState newState = Keyboard.GetState();
            bool retType = false; // Return type

            if (checkType == 0)
            {
                // Check Type: Is key currently down?
                if (newState.IsKeyDown(key))
                {
                    retType = true;
                }
                else
                {
                    retType = false;
                }
            }
            else if (checkType == 1)
            {
                // Check Type: Was the key pressed?
                if (newState.IsKeyDown(key))
                {
                    if (!oldState.IsKeyDown(key))
                    {
                        // Key was just pressed
                        retType = true;
                    }
                    else
                    {
                        // Key was already pressed, return false
                        retType = false;
                    }
                }
            }

            // Save keyboard state
            oldState = newState;

            // Return result
            if (retType == true)
            {
                return true;
            }
            else
            {
                return false;
            }
        }

        // Method: Generate a new map
        public List<Block> newMap(Map map, List<Block> blockList, Console console)
        {
            // Create new map block coordinates
            List<Vector2> positions = new List<Vector2>();
            positions = map.generateMap(console);

            // Clear list and reallocate memory previously used up by it
            blockList.Clear();
            blockList.TrimExcess();

            // Add new blocks to the list using positions created by generateMap()
            foreach (Vector2 pos in positions)
            {
                blockList.Add(new Block() { Position = pos, Texture = dirtTex });
            }

            // Return modified list
            return blockList;
        }

        // ...
    }

and the generateMap code:

    // Generate a list of Vector2 positions for blocks
    public List<Vector2> generateMap(Console console, int method = 0)
    {
        ScreenTileWidth = gDevice.Viewport.Width / 16;
        ScreenTileHeight = gDevice.Viewport.Height / 16;
        maxHeight = gDevice.Viewport.Height;

        List<Vector2> blockLocations = new List<Vector2>();

        if (useScreenSize == true)
        {
            Width = ScreenTileWidth;
            Height = ScreenTileHeight;
        }
        else
        {
            maxHeight = Height;
        }

        int startHeight = -500; // For debugging purposes, startHeight is set to a hopefully-unreachable
                                // value - if it returns this, something is wrong

        // Methods of land generation

        /// <summary>
        /// Third version land generation
        /// Generates a base land height as the second version does
        /// but also generates a 'max change' value which determines how much
        /// the land can raise or lower by, which it now does by a random amount
        /// during generation
        /// </summary>
        if (method == 0)
        {
            // Get the land height
            startHeight = rnd.Next(1, maxHeight);
            int maxChange = rnd.Next(1, 5); // Amount ground will raise/lower by
            int curHeight = startHeight;

            for (int w = 0; w < Width; w++)
            {
                // Run a chance to lower/raise ground level
                int changeBy = rnd.Next(1, maxChange);
                int doChange = rnd.Next(0, 3);

                if (doChange == 1 && !(curHeight <= (1 + maxChange)))
                {
                    curHeight = curHeight - changeBy;
                }
                else if (doChange == 2 && !(curHeight >= (29 - maxChange)))
                {
                    curHeight = curHeight + changeBy;
                }

                for (int h = curHeight; h < Height; h++)
                {
                    // Location variables
                    float x = w * 16;
                    float y = h * 16;

                    blockLocations.Add(new Vector2(x, y));
                }
            }

            console.newMsg("[INFO] Cur, height change maximum: " + maxChange.ToString());
        }
        /// <summary>
        /// Second version land generator
        /// Generates a solid mass of land starting at a random height
        /// derived from either screen height or provided height value
        /// </summary>
        else if (method == 1)
        {
            // Get the land height
            startHeight = rnd.Next(0, 30);

            for (int w = 0; w < Width; w++)
            {
                for (int h = startHeight; h < ScreenTileHeight; h++)
                {
                    // Location variables
                    float x = w * 16;
                    float y = h * 16;

                    // Add a tile at set location
                    blockLocations.Add(new Vector2(x, y));
                }
            }
        }
        /// <summary>
        /// First version land generator
        /// Generates land completely randomly either across screen or
        /// in a box set by Width and Height values
        /// </summary>
        else
        {
            // For each tile in the map...
            for (int w = 0; w < Width; w++)
            {
                for (int h = 0; h < Height; h++)
                {
                    // Location variables
                    float x = w * 16;
                    float y = h * 16;

                    // ...decide whether or not to place a tile...
                    if (rnd.Next(0, 2) == 1)
                    {
                        // ...and if so, add a tile at that location.
                        blockLocations.Add(new Vector2(x, y));
                    }
                }
            }
        }

        console.newMsg("[INFO] Cur, base height: " + startHeight.ToString());

        return blockLocations;
    }

I never touched any of the above code when it broke, and changing keys doesn't seem to fix it. Despite this, I have camera movement set up inside another Game1 method that uses WASD, and it works perfectly. All I did was add a few lines of code here:

    private int BackBufferWidth = 1280; // Added these variables
    private int BackBufferHeight = 800;

    public Game1()
    {
        graphics = new GraphicsDeviceManager(this);
        graphics.PreferredBackBufferWidth = BackBufferWidth;   // and this
        graphics.PreferredBackBufferHeight = BackBufferHeight; // this
        Content.RootDirectory = "Content";
        this.graphics.IsFullScreen = true;                     // and this
    }

When I try adding a console line to be printed in the event the key is pressed, it seems that the if statement is never even triggered, despite the correct key being pressed.
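For reference, a minimal, self-contained version of the new/old KeyboardState edge-detection pattern used above might look like the following sketch (class and member names are illustrative, not taken from the project in question):

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Input;

    public class InputDemoGame : Game
    {
        private KeyboardState oldState;

        protected override void Update(GameTime gameTime)
        {
            // Read the keyboard once per frame...
            KeyboardState newState = Keyboard.GetState();

            // ...so every "just pressed" test this frame compares against the same old snapshot.
            bool enterJustPressed = newState.IsKeyDown(Keys.Enter) && oldState.IsKeyUp(Keys.Enter);
            if (enterJustPressed)
            {
                // regenerate the map, play a sound, etc.
            }

            // Save the snapshot for next frame's comparison.
            oldState = newState;

            base.Update(gameTime);
        }
    }

Because the keyboard is sampled exactly once per Update and the snapshot is saved at the very end, every edge check in a given frame compares against the same previous-frame state.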

    Read the article

  • Oracle HRMS API – Update Employee Assignment

    - by PRajkumar
    To update Supervisor, Manager Flag, Bargaining Unit, Labour Union Member Flag, GRE, Time Card, Work Schedule, Normal Hours, Frequency, Time Normal Finish, Time Normal Start, Default Code Combination, Set of Books Id:
    API -- hr_assignment_api.update_emp_asg

    To update Grade, Location, Job, Payroll, Organization, Employee Category, People Group:
    API -- hr_assignment_api.update_emp_asg_criteria

    Example --

    DECLARE
       -- Local Variables
       -- ---------------
       lc_dt_ud_mode        VARCHAR2(100) := NULL;
       ln_assignment_id     NUMBER        := 33561;
       ln_supervisor_id     NUMBER        := 2;
       ln_object_number     NUMBER        := 1;
       ln_people_group_id   NUMBER        := 1;

       -- Out Variables for Find Date Track Mode API
       -- ------------------------------------------
       lb_correction             BOOLEAN;
       lb_update                 BOOLEAN;
       lb_update_override        BOOLEAN;
       lb_update_change_insert   BOOLEAN;

       -- Out Variables for Update Employee Assignment API
       -- ------------------------------------------------
       ln_soft_coding_keyflex_id   HR_SOFT_CODING_KEYFLEX.SOFT_CODING_KEYFLEX_ID%TYPE;
       lc_concatenated_segments    VARCHAR2(2000);
       ln_comment_id               PER_ALL_ASSIGNMENTS_F.COMMENT_ID%TYPE;
       lb_no_managers_warning      BOOLEAN;

       -- Out Variables for Update Employee Assignment Criteria API
       -- ----------------------------------------------------------
       ln_special_ceiling_step_id      PER_ALL_ASSIGNMENTS_F.SPECIAL_CEILING_STEP_ID%TYPE;
       lc_group_name                   VARCHAR2(30);
       ld_effective_start_date         PER_ALL_ASSIGNMENTS_F.EFFECTIVE_START_DATE%TYPE;
       ld_effective_end_date           PER_ALL_ASSIGNMENTS_F.EFFECTIVE_END_DATE%TYPE;
       lb_org_now_no_manager_warning   BOOLEAN;
       lb_other_manager_warning        BOOLEAN;
       lb_spp_delete_warning           BOOLEAN;
       lc_entries_changed_warning      VARCHAR2(30);
       lb_tax_district_changed_warn    BOOLEAN;
    BEGIN
       -- Find Date Track Mode
       -- --------------------
       dt_api.find_dt_upd_modes
       (  p_effective_date         => TO_DATE('12-JUN-2011'),
          p_base_table_name        => 'PER_ALL_ASSIGNMENTS_F',
          p_base_key_column        => 'ASSIGNMENT_ID',
          p_base_key_value         => ln_assignment_id,
          -- Output data elements
          -- --------------------
          p_correction             => lb_correction,
          p_update                 => lb_update,
          p_update_override        => lb_update_override,
          p_update_change_insert   => lb_update_change_insert
       );

       IF ( lb_update_override = TRUE OR lb_update_change_insert = TRUE )
       THEN
          -- UPDATE_OVERRIDE
          -- ---------------
          lc_dt_ud_mode := 'UPDATE_OVERRIDE';
       END IF;

       IF ( lb_correction = TRUE )
       THEN
          -- CORRECTION
          -- ----------
          lc_dt_ud_mode := 'CORRECTION';
       END IF;

       IF ( lb_update = TRUE )
       THEN
          -- UPDATE
          -- ------
          lc_dt_ud_mode := 'UPDATE';
       END IF;

       -- Update Employee Assignment
       -- --------------------------
       hr_assignment_api.update_emp_asg
       (  -- Input data elements
          -- -------------------
          p_effective_date             => TO_DATE('12-JUN-2011'),
          p_datetrack_update_mode      => lc_dt_ud_mode,
          p_assignment_id              => ln_assignment_id,
          p_supervisor_id              => NULL,
          p_change_reason              => NULL,
          p_manager_flag               => 'N',
          p_bargaining_unit_code       => NULL,
          p_labour_union_member_flag   => NULL,
          p_segment1                   => 204,
          p_segment3                   => 'N',
          p_normal_hours               => 10,
          p_frequency                  => 'W',
          -- Output data elements
          -- --------------------
          p_object_version_number      => ln_object_number,
          p_soft_coding_keyflex_id     => ln_soft_coding_keyflex_id,
          p_concatenated_segments      => lc_concatenated_segments,
          p_comment_id                 => ln_comment_id,
          p_effective_start_date       => ld_effective_start_date,
          p_effective_end_date         => ld_effective_end_date,
          p_no_managers_warning        => lb_no_managers_warning,
          p_other_manager_warning      => lb_other_manager_warning
       );

       -- Find Date Track Mode for Second API
       -- -----------------------------------
       dt_api.find_dt_upd_modes
       (  p_effective_date         => TO_DATE('12-JUN-2011'),
          p_base_table_name        => 'PER_ALL_ASSIGNMENTS_F',
          p_base_key_column        => 'ASSIGNMENT_ID',
          p_base_key_value         => ln_assignment_id,
          -- Output data elements
          -- --------------------
          p_correction             => lb_correction,
          p_update                 => lb_update,
          p_update_override        => lb_update_override,
          p_update_change_insert   => lb_update_change_insert
       );

       IF ( lb_update_override = TRUE OR lb_update_change_insert = TRUE )
       THEN
          -- UPDATE_OVERRIDE
          -- ---------------
          lc_dt_ud_mode := 'UPDATE_OVERRIDE';
       END IF;

       IF ( lb_correction = TRUE )
       THEN
          -- CORRECTION
          -- ----------
          lc_dt_ud_mode := 'CORRECTION';
       END IF;

       IF ( lb_update = TRUE )
       THEN
          -- UPDATE
          -- ------
          lc_dt_ud_mode := 'UPDATE';
       END IF;

       -- Update Employee Assignment Criteria
       -- -----------------------------------
       hr_assignment_api.update_emp_asg_criteria
       (  -- Input data elements
          -- -------------------
          p_effective_date                 => TO_DATE('12-JUN-2011'),
          p_datetrack_update_mode          => lc_dt_ud_mode,
          p_assignment_id                  => ln_assignment_id,
          p_location_id                    => 204,
          p_grade_id                       => 29,
          p_job_id                         => 16,
          p_payroll_id                     => 52,
          p_organization_id                => 239,
          p_employment_category            => 'FR',
          -- Output data elements
          -- --------------------
          p_people_group_id                => ln_people_group_id,
          p_object_version_number          => ln_object_number,
          p_special_ceiling_step_id        => ln_special_ceiling_step_id,
          p_group_name                     => lc_group_name,
          p_effective_start_date           => ld_effective_start_date,
          p_effective_end_date             => ld_effective_end_date,
          p_org_now_no_manager_warning     => lb_org_now_no_manager_warning,
          p_other_manager_warning          => lb_other_manager_warning,
          p_spp_delete_warning             => lb_spp_delete_warning,
          p_entries_changed_warning        => lc_entries_changed_warning,
          p_tax_district_changed_warning   => lb_tax_district_changed_warn
       );

       COMMIT;
    EXCEPTION
       WHEN OTHERS THEN
          ROLLBACK;
          dbms_output.put_line(SQLERRM);
    END;
    /
    SHOW ERR;

    Read the article
