Search Results

Search found 16208 results on 649 pages for 'pass community'.


  • Custom Profile Provider with Web Deployment Project

    - by Ben Griswold
    I wrote about implementing a custom profile provider inside of your ASP.NET MVC application yesterday. If you haven’t read the article, don’t sweat it. Most of the stuff I write is rubbish anyway. Since you have joined me today, though, I might as well offer up a little tip: you can run into trouble, like I did, if you enable your custom profile provider inside of an application which is deployed using a Web Deployment Project. Everything will run great on your local machine and you’ll probably take an early lunch because you got the code running in no time flat and the build server is happy and all tests pass and, gosh, maybe you’ll just cut out early because it is Friday after all. But then the first user hits the integration machine and, that’s right, yellow screen of death. Lucky you, just as you’re walking out the door, the user kindly sends the exception message and stack trace:

        Value cannot be null.
        Parameter name: type

        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

        Stack Trace:
        [ArgumentNullException: Value cannot be null. Parameter name: type]
           System.Activator.CreateInstance(Type type, Boolean nonPublic) +2796915
           System.Web.Profile.ProfileBase.CreateMyInstance(String username, Boolean isAuthenticated) +76
           System.Web.Profile.ProfileBase.Create(String username, Boolean isAuthenticated) +312

    User error? Not this time. Damn! One hour later… you notice the harmless “Treat as library component (remove the App_Code.compiled file)” setting on the Output Assemblies Tab of your Web Deployment Project. You have no idea why, but you uncheck it. You test and everything works great both locally and on the integration machine. Application users think you’re the best and you’re still going to catch the last half hour of happy hour. Happy Friday.

    Read the article

  • What's the best license for my website?

    - by John Maxim
    I have developed a unique website but do not have a lot of funds to protect it with trademarks or patents. I'm looking for suggestions so that when my supervisor gets my code, some legal protection stops anyone from copying it and claiming it as their own work. I'm still deciding between making the application a commercial one or never allowing it to be copied at all. What steps do I need to take to make sure my application is fully protected? I've come across a few licenses; the one under consideration is the MIT license. Some say you can mix a commercial license with MIT. I would also like to be able to distribute some functions so they can be modified by others while I still retain ownership. Last but not least, it's confusing when I think about protecting the whole website versus protecting individual pieces of the code separately. How should I go about this? Thanks. N.B. I have to pass it over to the supervisor as this is a university project, and I need this done within two weeks, so the time to get my app protected is a factor here.

    Read the article

  • Keep basic game physics separate from basic game object? [on hold]

    - by metamorphosis
    If anybody has dealt with a similar situation I'd be interested in your experience/wisdom. I'm developing a 2D game library in C++. My game objects have very basic physics, and they also have movement classes attached to differing states; for example, a different movement type based on whether the character is jumping, on ice, whatever. In terms of storing velocity and acceleration impulses, are they best held by the object, or by the associated movement class? The reason I ask is that I can see advantages to both approaches. If you store the physics data in the movement class, you have to pass physics information (impulses, gravity, etc.) between class instances when a state change occurs, but the class has total control over whether those physics are updated or not. An obvious example of how this would be useful is an object affected by something which causes it to ignore gravity, or something like that. On the other hand, if you store the physics data in the object class, it feels more logical and you don't have to go around passing physics impulses and gravity, but the control that the movement class has over the object's physics becomes more convoluted. Basically the difference is between:

        object -> physics stacks (acceleration impulses etc.) -> physics functions -> movement type
        (the movement type makes physics function calls through the object)

    and

        object -> movement type -> physics stacks -> physics functions
        (the object forwards external physics calls onto the movement type, and transfers the physics stacks between movement types when a state change occurs)

    Are there best practices here?
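    For illustration only, here is a rough sketch of the first ownership option (written in C# shorthand rather than the C++ of the actual library; all names are invented):

        using System.Collections.Generic;
        using System.Numerics;

        // Option A: the object owns the physics state (velocity, pending impulses),
        // and the active movement type decides how, and whether, to integrate it.
        interface IMovement
        {
            void Update(GameObject obj, float dt);   // e.g. JumpingMovement, IceMovement
        }

        class GameObject
        {
            public Vector2 Velocity;
            public readonly List<Vector2> Impulses = new List<Vector2>();
            public IMovement Movement;               // swapped on state changes

            public void Update(float dt) => Movement.Update(this, dt);
        }

        // Option B would instead keep Velocity/Impulses inside each IMovement
        // implementation, with GameObject copying them across when the movement
        // type changes -- the hand-off described in the question.

    Either layout expresses the same trade-off described above: who owns the state versus who controls when it is integrated.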

    Read the article

  • Passing multiple Vertex Attributes in GLSL 130

    - by Roy T.
    (Note: this question is closely related to this one; however, I didn't fully understand the accepted answer.) To support video cards in laptops I have to rewrite my GLSL 330 shaders to GLSL 130. I'm trying to do this but somehow I can't get the vertex attributes to work properly. My 330 shaders look like this:

        #version 330
        layout(location = 0) in vec4 position;
        layout(location = 3) in vec4 color;
        smooth out vec4 theColor;
        void main()
        {
            gl_Position = position;
            theColor = color;
        }

    Now this explicit layout is not allowed in GLSL 130, so I referenced this page to see what the default layouts for some values would be. As you can see, position should be the 0th vertex attribute and color should be the 3rd vertex attribute. Because this is a test case I had already configured my explicit layouts in the same way, which worked, so I simply rewrote my shader to this and expected it to work:

        #version 130
        smooth out vec4 theColor;
        void main()
        {
            gl_Position = gl_Vertex;
            theColor = gl_Color;
        }

    However, this doesn't work: the value of gl_Color is always (1,1,1,1). So how should I pass multiple vertex attributes to my GLSL 130 shaders? For reference, this is how I set up my vertex buffer object and attributes (I've just adapted this tutorial to Java + JOGL):

        gl.glBindBuffer(GL3.GL_ARRAY_BUFFER, vertex_buffer_id);
        gl.glEnableVertexAttribArray(0);
        gl.glEnableVertexAttribArray(3);
        gl.glVertexAttribPointer(0, 4, GL3.GL_FLOAT, false, 0, 0);
        gl.glVertexAttribPointer(3, 4, GL3.GL_FLOAT, false, 0, 4*4*4);
        gl.glDrawArrays(GL3.GL_TRIANGLE_STRIP, 0, 4);
        gl.glDisableVertexAttribArray(0);
        gl.glDisableVertexAttribArray(3);

    EDIT: I solved the problem by querying for the attribute locations of position and color using glGetAttribLocation. However, I still don't understand why the 'hardcoded' built-ins like gl_Color didn't work. Can't I upload data into them as normal? Shouldn't they be used?

    Read the article

  • Gimme Gimme Gimme!

    - by steve.diamond
    Today is my birthday. And you know, there used to be a time when I dreaded birthdays. But now, as I reach my 37th year (that's my Polar Body Test age), I'm re-learning to really really appreciate being here. Now, what the heck does any of this have to do with CRM or this blog? Easy! Here is the present I would like from you. 1) Please tell us how we're doing on this blog. Do you like what you're seeing? Do you NOT like what you're seeing? Why? What types of topics would you like to see more or less of from us? Do you think we're running too much of an Oracle infomercial here? Conversely, would you like us to spend more time focusing on Oracle solutions? If so, which ones are of most interest to you? 2) Let's assume you DO like what you're seeing and reading here. Please tell a friend. Pass it on. You can write a comment below or submit a comment on our Facebook Fan page (http://facebook.com/oraclecrm). If you're an Oracle employee, please simply send me an email. And if you work here at HQ, bring me some key lime pie. And last but not least, thank you!

    Read the article

  • Crashplan Is Offering Free Yearly Plans [Black Friday]

    - by Jason Fitzpatrick
    Even if you’re eschewing Black Friday and all the shopping that goes with it, we’ve got a deal for you that’s too good to pass up: a free year of remote backup from CrashPlan. CrashPlan is running a fantastic Black Friday promotion. Starting at 6AM EST they’re offering all their plans for 100% off–we just picked up a family plan, normally $119 a year, for $0. Every two hours they’ll be incrementally decreasing the discount until Monday evening when the sale ends (even if you miss the early part of the sale the discount will still be 42% off come Monday). This is an absolutely fantastic deal on a service everyone can use. If you followed along with our guide to using CrashPlan to back up your data at a friend’s house for free, this is a perfect time to add a year of backup service and another player to your backup routine.

    Read the article

  • The battery indicator & Power setting panel show the wrong battery state

    - by Eastsun
    My laptop is a Thinkpad E420 with Ubuntu 12.04 64-bit installed; the kernel version is 3.2.0-33-generic. I set the battery charge threshold to 60% via Windows 7, and the threshold seems to carry over automatically in Ubuntu. However, there are some problems with the battery indicator's state. Here is the battery state information. (Note that in a terminal Ubuntu says the battery charging state is "charged", while the Power Settings panel, like the battery indicator, shows the battery as "charging".)

        $ cat /proc/acpi/battery/BAT0/state
        present:                 yes
        capacity state:          ok
        charging state:          charged
        present rate:            0 mW
        remaining capacity:      18200 mWh
        present voltage:         16103 mV

    (Screenshots: battery indicator state, Power Setting Panel.)

    Is there any way to fix the problem?

    Edit: Here are some results from sudo fwts battery - battery.log:

        3 passed, 4 failed, 0 warnings, 0 aborted, 0 skipped, 0 info only.

        Test Failure Summary
        ===============================
        Critical failures: NONE
        High failures: 2
          battery: Did not detect any ACPI battery events.
          battery: Could not detect ACPI events for battery BAT0.
        Medium failures: 1
          battery: Battery BAT0 claims it's charging but no charge is added
        Low failures: 1
          battery: System firmware may not support cycle count interface or it reports it incorrectly for battery BAT0.
        Other failures: NONE

        Test            |Pass |Fail |Abort|Warn |Skip |Info |
        ----------------+-----+-----+-----+-----+-----+-----+
        battery         |    3|    4|     |     |     |     |
        ----------------+-----+-----+-----+-----+-----+-----+
        Total:          |    3|    4|    0|    0|    0|    0|
        ----------------+-----+-----+-----+-----+-----+-----+

    Any help would be appreciated!

    Read the article

  • How do I get information about the level to the player object?

    - by pangaea
    I have a design problem with my Player and Level classes in my game. So below is a picture of the game. The problem is that I only want the player to move on the white space, not on the black space. I know how to do the check itself: all I need to do is test for sf::Color::Black, and I have methods to do this in the Level class. The problem is this piece of code:

        void Game::input()  { player.input(); }
        void Game::update() { (*level).update(); player.update(); }
        void Game::render() { (*level).render(); player.render(); }

    So, as you can see, there is a problem: how do I get the map information from the Level class to the Player class? Now, I was thinking that if I made the Player position static and passed it into the Level as a parameter in update I could do it. The problem is interaction. I don't know what to do. I could maybe make the player go into the Level class. However, what if I want multiple levels? So I have big design problems that I'm trying to solve.
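    One common shape for this (illustrative only, sketched in C# rather than the C++/SFML of the question, with invented names) is to give the Player a narrow, read-only view of the level instead of a reference to the whole Level class:

        // The player only needs to ask "can I stand here?", so expose just that.
        interface IWalkableMap
        {
            bool IsWalkable(int x, int y);   // e.g. false where the level image is black
        }

        class Player
        {
            private readonly IWalkableMap _map;
            public Player(IWalkableMap map) { _map = map; }

            public void TryMove(int newX, int newY)
            {
                if (_map.IsWalkable(newX, newY))
                {
                    // commit the move
                }
            }
        }

        // Level implements IWalkableMap, and Game wires the two together:
        //     var player = new Player(level);
        // Supporting multiple levels then just means handing the player a different map.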

    Read the article

  • Rolling your own Hackathon

    - by Terrance
    Background info: Hey, I pitched the idea of a company hackathon that would donate our time to a charity, working on a project (for free) to improve morale in my company and increase developer cohesion. As it turns out, most people like the idea but, guess who's gonna be the one to put it together. lol Yeah, me. I should add that we are a fairly small shop with about 10-12 programmers (some pull double duty as programmers, interns, etc.), so that might make things a bit easier. Base question: While I am by no means a project manager or at any level of authority (entry-level guy), I was wondering if anyone knew the best approach for someone in my position to put together such an event with possibly (some) company backing. Or, for that matter, has any helpful advice to pass along to a young padawan. So far... as of right now it is just an idea, so to start with I presumably would have to put together some sort of proposal and do some of that office stuff that I became a programmer to steer clear of, to some extent.

    Read the article

  • WPF Login Verification Using Active Directory

    - by psheriff
    Back in October of 2009 I created a WPF login screen (Figure 1) that just showed how to create the layout for a login screen. That one sample is probably the most downloaded sample we have. So in this blog post, I thought I would update that screen and also hook it up to show how to authenticate your user against Active Directory. Figure 1: Original WPF Login Screen I have updated not only the code behind for this login screen, but also the look and feel as shown in Figure 2. Figure 2: An Updated WPF Login Screen The UI To create the UI for this login screen you can refer to my October of 2009 blog post to see how to create the borderless window. You can then look at the sample code to see how I created the linear gradient brush for the background. There are just a few differences in this screen compared to the old version. First, I changed the key image and instead of using words for the Cancel and Login buttons, I used some icons. Secondly I added a text box to hold the Domain name that you wish to authenticate against. This text box is automatically filled in if you are connected to a network. In the Window_Loaded event procedure of the winLogin window you can retrieve the user’s domain name from the Environment.UserDomainName property. For example: txtDomain.Text = Environment.UserDomainName The ADHelper Class Instead of coding the call to authenticate the user directly in the login screen I created an ADHelper class. This will make it easier if you want to add additional AD calls in the future. The ADHelper class contains just one method at this time called AuthenticateUser. This method authenticates a user name and password against the specified domain. The login screen will gather the credentials from the user such as their user name and password, and also the domain name to authenticate against. To use this ADHelper class you will need to add a reference to the System.DirectoryServices.dll in .NET. The AuthenticateUser Method In order to authenticate a user against your Active Directory you will need to supply a valid LDAP path string to the constructor of the DirectoryEntry class. The LDAP path string will be in the format LDAP://DomainName. You will also pass in the user name and password to the constructor of the DirectoryEntry class as well. With a DirectoryEntry object populated with this LDAP path string, the user name and password you will now pass this object to the constructor of a DirectorySearcher object. You then perform the FindOne method on the DirectorySearcher object. If the DirectorySearcher object returns a SearchResult then the credentials supplied are valid. If the credentials are not valid on the Active Directory then an exception is thrown. 
C#

    public bool AuthenticateUser(string domainName, string userName, string password)
    {
        bool ret = false;

        try
        {
            DirectoryEntry de = new DirectoryEntry("LDAP://" + domainName, userName, password);
            DirectorySearcher dsearch = new DirectorySearcher(de);
            SearchResult results = null;

            results = dsearch.FindOne();

            ret = true;
        }
        catch
        {
            ret = false;
        }

        return ret;
    }

Visual Basic

    Public Function AuthenticateUser(ByVal domainName As String, _
      ByVal userName As String, ByVal password As String) As Boolean
        Dim ret As Boolean = False

        Try
            Dim de As New DirectoryEntry("LDAP://" & domainName, _
                                         userName, password)
            Dim dsearch As New DirectorySearcher(de)
            Dim results As SearchResult = Nothing

            results = dsearch.FindOne()

            ret = True
        Catch
            ret = False
        End Try

        Return ret
    End Function

In the Click event procedure under the Login button you will find the following code that will validate the credentials that the user types into the login window.

C#

    private void btnLogin_Click(object sender, RoutedEventArgs e)
    {
        ADHelper ad = new ADHelper();

        if (ad.AuthenticateUser(txtDomain.Text, txtUserName.Text, txtPassword.Password))
            DialogResult = true;
        else
            MessageBox.Show("Unable to Authenticate Using the Supplied Credentials");
    }

Visual Basic

    Private Sub btnLogin_Click(ByVal sender As Object, _
      ByVal e As RoutedEventArgs)
        Dim ad As New ADHelper()

        If ad.AuthenticateUser(txtDomain.Text, txtUserName.Text, _
                               txtPassword.Password) Then
            DialogResult = True
        Else
            MessageBox.Show("Unable to Authenticate Using the Supplied Credentials")
        End If
    End Sub

Displaying the Login Screen

At some point when your application launches, you will need to display your login screen modally. Below is the code that you would call to display the login form (named winLogin in my sample application). This code is called from the main application form, and thus the owner of the login screen is set to “this”. You then call the ShowDialog method on the login screen to have this form displayed modally. After the user clicks on one of the two buttons you need to check to see what the DialogResult property was set to. The DialogResult property is a nullable type and thus you first need to check to see if the value has been set.

C#

    private void DisplayLoginScreen()
    {
        winLogin win = new winLogin();

        win.Owner = this;
        win.ShowDialog();
        if (win.DialogResult.HasValue && win.DialogResult.Value)
            MessageBox.Show("User Logged In");
        else
            this.Close();
    }

Visual Basic

    Private Sub DisplayLoginScreen()
        Dim win As New winLogin()

        win.Owner = Me
        win.ShowDialog()
        If win.DialogResult.HasValue And win.DialogResult.Value Then
            MessageBox.Show("User Logged In")
        Else
            Me.Close()
        End If
    End Sub

Summary

Creating a nice looking login screen is fairly simple to do in WPF. Using the Active Directory services from a WPF application should make your desktop programming task easier as you do not need to create your own user authentication system. I hope this article gave you some ideas on how to create a login screen in WPF.

NOTE: You can download the complete sample code for this blog entry at my website: http://www.pdsa.com/downloads. Click on Tips & Tricks, then select 'WPF Login Verification Using Active Directory' from the drop down list.
Good Luck with your Coding, Paul Sheriff

    Read the article

  • Book Review: Getting Started With Windows 8 Apps By Ben Dewey

    - by Tim Murphy
    When O’Reilly gave me an opportunity to review this book I was excited.  It gave me a reason to finally put some time into this new platform and what developers will need to learn in order to be successful. This book by Ben Dewey is only 92 pages long, so if you were looking for an in-depth treatment of Windows 8 development you will need supplemental materials.  It is also due for an update given the changes Microsoft made prior to the final release of the OS and tools.  This causes a few issues if you try to run the code samples because of namespace changes. I was encouraged by the fact that the author didn’t do the typical “hello world” app.  He uses a lot of pattern-based development techniques and hits many of the main topics, including: application lifecycle, Charms integration, tiles, and sensors. The lifecycle is critical for anyone who hasn’t done mobile development before.  Limited resources on these devices mean that the OS can suspend or kill your app altogether if it decides it needs to.  He covers tombstoning, which is the key to Windows 8 and Windows Phone lifecycle management. He also dedicates a chapter to marketing and distributing the application you build.  From my experience with Windows Phone development this is crucial information.  You need to know how to test your application so that it is going to pass certification, and how to present your app so that it gets noticed amongst thousands of other apps. The main things that I wish had been in the book are explanations of more of the common controls and a more complete explanation of the patterns that were implemented. In the end this book is a good foundation for getting exposure to the concepts that underlie this new version of the Windows platform and how it affects developers.  It isn’t a book that I would suggest for someone just getting into development with no understanding of pattern-based development.

    Read the article

  • Ban HTML comments from your pages and views

    - by Bertrand Le Roy
    Too many people don’t realize that there are other options than <!-- --> comments to annotate HTML. These comments are harmful because they are sent to the client and thus make your page heavier than it needs to be. When doing ASP.NET, a simple drop-in replacement is server comments, which are delimited by <%-- --%> instead of <!-- -->. Those server comments are visible in your source code, but will never be rendered to the client. Here’s a simple way to sanitize a web site. From Visual Studio, hit CTRL+H to bring up the search and replace dialog. Choose “Replace in Files” from the second menu at the top of the dialog. Open the find options, check “Use” and make sure “Regular expressions” is selected. Use “*.aspx;*.ascx;” as the file types to examine. Choose “Entire Solution” under “Look in”. Here’s the expression to search for comments: \<!--{[^-]*}--\> And here’s the replacement string: <%--\1--%> I usually use the “Find Next” and “Replace” buttons rather than the more brutal “Replace All” in order to not apply the fix blindly. Once this is done, I do a second manual pass of finds with the same expression to make sure I didn’t miss anything.
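    If you would rather script the same clean-up outside the IDE, a rough C# equivalent of the search-and-replace above might look like this (a sketch, not part of the original post; note that the .NET regex dialect differs slightly from the Visual Studio one shown):

        using System.IO;
        using System.Text.RegularExpressions;

        class CommentSweeper
        {
            static void Main()
            {
                // Replace <!-- ... --> with <%-- ... --%> in every .aspx/.ascx file
                // under the current directory. This is the scripted version of
                // "Replace All", so review the diff before committing. Be careful:
                // unlike the VS pattern above, this will also rewrite things like
                // IE conditional comments.
                var htmlComment = new Regex(@"<!--(.*?)-->", RegexOptions.Singleline);

                foreach (var pattern in new[] { "*.aspx", "*.ascx" })
                {
                    foreach (var file in Directory.EnumerateFiles(".", pattern, SearchOption.AllDirectories))
                    {
                        var source = File.ReadAllText(file);
                        File.WriteAllText(file, htmlComment.Replace(source, "<%--$1--%>"));
                    }
                }
            }
        }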

    Read the article

  • Should interfaces inherit interfaces

    - by dreza
    Although this is a general question it is also specific to a problem I am currently experiencing. I currently have an interface specified in my solution:

        public interface IContextProvider
        {
            IDataContext DataContext { get; set; }
            IAreaContext AreaContext { get; set; }
        }

    This interface is used often throughout the program, and hence I have easy access to the objects I need. However, at a fairly low level of one part of my program I need access to another class that will use IAreaContext and perform some operations off it. So I have created another factory interface to do this creation:

        public interface IEventContextFactory
        {
            IEventContext CreateEventContext(int eventId);
        }

    I have a class that implements IContextProvider and is injected using Ninject. The problem I have is that the area where I need to use this IEventContextFactory has access to the IContextProvider only, and itself uses another class which will need this new interface. I don't want to have to instantiate the implementation of IEventContextFactory at the low level, and would rather work with the IEventContextFactory interface throughout. However, I also don't want to have to inject another parameter through the constructors just to have it passed through to the class that needs it, i.e.:

        // example of the problem
        public class MyClass
        {
            public MyClass(IContextProvider context, IEventContextFactory eventContextFactory)
            {
                _context = context;
                _eventContextFactory = eventContextFactory;
            }

            public void DoSomething()
            {
                // the only place _eventContextFactory is used in this class is to pass it through
                var myClass = new MyChildClass(_eventContextFactory);
                myClass.PerformCalculation();
            }
        }

    So my main question is: would it be acceptable, or is it even common or good practice, to have an interface inherit another interface, like this:

        public interface IContextProvider : IEventContextFactory

    or should I consider better alternatives to achieve what I need? If I have not provided enough information to give suggestions, let me know and I can provide more.
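    For comparison, here is a minimal sketch of a composition alternative to the interface inheritance shown above. It is illustrative only: the member types are the ones declared in the question, and the names of the alternative are invented.

        // Instead of IContextProvider *being* an IEventContextFactory, it *exposes* one.
        public interface IContextProviderWithFactory
        {
            IDataContext DataContext { get; set; }
            IAreaContext AreaContext { get; set; }
            IEventContextFactory EventContextFactory { get; }
        }

        // The low-level class then pulls the factory from the provider it already
        // receives, so no extra constructor parameter has to be threaded through.
        public class MyClass
        {
            private readonly IContextProviderWithFactory _context;

            public MyClass(IContextProviderWithFactory context)
            {
                _context = context;
            }

            public void DoSomething()
            {
                var child = new MyChildClass(_context.EventContextFactory);
                child.PerformCalculation();
            }
        }

    Both shapes remove the pass-through parameter; the difference is whether the provider "is a" factory (interface inheritance) or "has a" factory (composition).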

    Read the article

  • Pivotal Announces JSR-352 Compliance for Spring Batch

    - by reza_rahman
    Pivotal, the company currently funding development of the popular Spring Framework, recently announced JSR 352 (aka Batch Applications for the Java Platform) compliance for the Spring Batch project. More specifically, Spring Batch targets JSR-352 Java SE runtime compatibility rather than Java EE runtime compatibility. If you are surprised that APIs included in Java EE can pass TCKs targeted for Java SE, you should not be. Many other Java EE APIs target compatibility in Java SE environments such as JMS and JPA. You can read about Spring Batch's support for JSR-352 here as well as the Spring configuration to get JSR-352 working in Spring (typically a very low level implementation concern intended to be completely transparent to most JSR-352 users). JSR 352 is one of the few very encouraging cases of major active contribution to the Java EE standard from the Spring development team (the other major effort being Rod Johnson's co-leadership of JSR 330 along with Bob Lee). While IBM's Christopher Vignola led the spec and contributed IBM's years of highly mission critical batch processing experience from products like WebSphere Compute Grid and z/OS batch, the Spring team provided major influences to the API in particular for the chunk processing, listeners, splits and operational interfaces. The GlassFish team's own Mahesh Kannan also contributed, in particular by implementing much of the Java EE integration work for the reference implementation. This was an excellent example of multilateral engineering collaboration through the standards process. For many complex reasons it is not too hard to find evidence of less than amicable interaction between the Spring ecosystem and the Java EE standard over the years if one cares to dig deep enough. In reality most developers see Spring and Java EE as two sides of the same server-side Java coin. At the core Spring and Java EE ecosystems have always shared deep undercurrents of common user bases, bi-directional flows of ideas and perhaps genuine if not begrudging mutual respect. We can all hope for continued strength for both ecosystems and graceful high notes of collaboration via efforts like JSR 352.

    Read the article

  • Accessing and Updating Data in ASP.NET: Filtering Data Using a CheckBoxList

    Filtering Database Data with Parameters, an earlier installment in this article series, showed how to filter the data returned by ASP.NET's data source controls. In a nutshell, the data source controls can include parameterized queries whose parameter values are defined via parameter controls. For example, the SqlDataSource can include a parameterized SelectCommand, such as: SELECT * FROM Books WHERE Price > @Price. Here, @Price is a parameter; the value for a parameter can be defined declaratively using a parameter control. ASP.NET offers a variety of parameter controls, including ones that use hard-coded values, ones that retrieve values from the querystring, ones that retrieve values from session state, and others. Perhaps the most useful parameter control is the ControlParameter, which retrieves its value from a Web control on the page. Using the ControlParameter we can filter the data returned by the data source control based on the end user's input. While the ControlParameter works well with most types of Web controls, it does not work as expected with the CheckBoxList control. The ControlParameter is designed to retrieve a single property value from the specified Web control, but the CheckBoxList control does not have a property that returns all of the values of its selected items in a form that the ControlParameter can use. Moreover, if you are using the selected CheckBoxList items to query a database you'll quickly find that SQL does not offer out-of-the-box functionality for filtering results based on a user-supplied list of filter criteria. The good news is that with a little bit of effort it is possible to filter data based on the end user's selections in a CheckBoxList control. This article starts with a look at how to get SQL to filter data based on a user-supplied, comma-delimited list of values. Next, it shows how to programmatically construct a comma-delimited list that represents the selected CheckBoxList values and pass that list into the SQL query. Finally, we'll explore creating a custom parameter control to handle this logic declaratively. Read on to learn more!
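    For illustration of the middle step (building the comma-delimited list from the selected CheckBoxList items), here is a minimal code-behind sketch; it is not from the article, and the control and parameter names (cblGenres, dsBooks, GenreList) are hypothetical:

        // Assumes: using System; using System.Linq; using System.Web.UI.WebControls;
        // cblGenres is a CheckBoxList and dsBooks is a SqlDataSource whose
        // SelectCommand references a @GenreList parameter.
        protected void cblGenres_SelectedIndexChanged(object sender, EventArgs e)
        {
            var selectedValues = cblGenres.Items.Cast<ListItem>()
                                          .Where(item => item.Selected)
                                          .Select(item => item.Value);

            // Hand the comma-delimited list to the data source as a single parameter value.
            dsBooks.SelectParameters["GenreList"].DefaultValue = string.Join(",", selectedValues);
        }

    How the SQL side consumes that list is what the first part of the article covers.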

    Read the article

  • DirectX9 / HLSL Shader Model 3 - Passing Doubles between Shaders

    - by P. Avery
    I need higher precision on a few values within my vertex and pixel shaders...I'm currently using floats, so I would like to use doubles...I've read that HLSL Model 4 has two functions to convert a double into two unsigned integers and back again( asuint() and asdouble() ). These functions are only supported on HLSL 4 and I am using DirectX 9 which will only compile HLSL Model 3 and below... How can I pass a double between shaders? here is implementation for HLSL 4: struct VS_INPUT { float2 v; }; struct PS_INPUT { uint a; uint b; uint c; uint d; }; PS_INPUT VertexShader( VS_INPUT Input ) { PS_INPUT Output = ( PS_INPUT )0; double2 vPos = mul( Input.v, mWorld ).xy; asuint( vPos.x, Output.a, Output.b ); asuint( vPos.y, Output.c, Output.d ); return Output; } float4 PixelShader( PS_INPUT Input ) { double2 vPos; vPos.x = asdouble( Input.a, Input.b ); vPos.y = asdouble( Input.c, Input.d ); ... return 1; }
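    For reference (this is not part of the original post), the asuint()/asdouble() pair mentioned above is just a bit-level reinterpretation: the 64 bits of a double are viewed as two 32-bit unsigned integers and back. The same idea expressed host-side in C#:

        using System;

        static class DoubleBits
        {
            // Reinterpret a double as two 32-bit unsigned integers (low word, high word).
            public static (uint lo, uint hi) Split(double value)
            {
                ulong bits = (ulong)BitConverter.DoubleToInt64Bits(value);
                return ((uint)(bits & 0xFFFFFFFFu), (uint)(bits >> 32));
            }

            // Reassemble the original double from the two halves.
            public static double Join(uint lo, uint hi)
            {
                ulong bits = ((ulong)hi << 32) | lo;
                return BitConverter.Int64BitsToDouble((long)bits);
            }
        }

    Shader Model 3 has no double type (and integers are emulated with floats), which is why the question is looking for another way to carry the extra precision between shader stages.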

    Read the article

  • Reflections based on distance from plane

    - by Andrea Benedetti
    Let's consider, for example, a surface like a volleyball court: the legs and shoes of the players are reflected, with a blur effect, but their bodies and the stadium are not (nor is any object that is not near the court). I've already made a reflection effect, but it works as a specular reflection, and I need to achieve an effect like the photo above. So, I would like to make a reflection that is based on the distance between the object and the plane; in this manner a close object would reflect more than an object that is positioned far away from the plane. What is the best way to achieve this effect? My first idea was to use the depth value (taken from the reflected camera), and use that value to blend between the reflection and the court. But I don't know if it's a correct way. Edit: as rendering engine I use Ogre, which already provides a reflection system: reflecting the camera through a plane (obviously I can select the models to draw from the reflected camera). After a render-to-texture pass I can blend the reflected texture with the original plane. So, if possible, I'm looking for a way that best suits my system.

    Read the article

  • Iterative Conversion

    - by stuart ramage
    Question Received: I am toying with the idea of migrating the current information first and the remainder of the history at a later date. I have heard that the conversion tool copes with this, but haven't found any information on how it does. Answer: The Toolkit will support iterative conversions as long as the original master data key tables (the CK_* tables) are not cleared down from Staging (the already converted Transactional Data would need to be cleared down) and the Production instance being migrated into is actually Production (we have migrated into a pre-prod instance in the past and then unloaded this and loaded it into the real PROD instance, but this will not work for your situation. You need to be migrating directly into your intended environment). In this case the migration tool will still know all about the original keys and the generated keys for the primary objects (Account, SA, etc.) and as such it will be able to link the data converted as part of a second pass onto these entities. It should be noted that this may result in the original opening balances potentially being displayed with an incorrect value (if we are talking about Financial Transactions) and also that care will have to be taken to ensure that all related objects are aligned (e.g. a Bill must have a set of bill segments, meter reads and financial transactions, and these entities cannot exist independently). It should also be noted that subsequent runs of the conversion tool would need to be 'trimmed' to ensure that they are only doing work on the objects affected. You would not want to revalidate and migrate all Person, Account, SA, SA/SP, SP and Premise details since this information has already been processed, but you would definitely want to run the affected transactional record validation and keygen processes. There is no real "hard-and-fast" rule around this processing since it is specific to each implementation's needs, but the majority of the effort required should be detailed in the Conversion Tool section of the online help (under Administration / The Conversion Tool). The major rule is to ensure that you only run the steps and validation/keygen processes that you need and do not do a complete rerun for your subsequent conversion.

    Read the article

  • C#/.NET Little Wonders: Fun With Enum Methods

    - by James Michael Hare
    Once again lets dive into the Little Wonders of .NET, those small things in the .NET languages and BCL classes that make development easier by increasing readability, maintainability, and/or performance. So probably every one of us has used an enumerated type at one time or another in a C# program.  The enumerated types we create are a great way to represent that a value can be one of a set of discrete values (or a combination of those values in the case of bit flags). But the power of enum types go far beyond simple assignment and comparison, there are many methods in the Enum class (that all enum types “inherit” from) that can give you even more power when dealing with them. IsDefined() – check if a given value exists in the enum Are you reading a value for an enum from a data source, but are unsure if it is actually a valid value or not?  Casting won’t tell you this, and Parse() isn’t guaranteed to balk either if you give it an int or a combination of flags.  So what can we do? Let’s assume we have a small enum like this for result codes we want to return back from our business logic layer: 1: public enum ResultCode 2: { 3: Success, 4: Warning, 5: Error 6: } In this enum, Success will be zero (unless given another value explicitly), Warning will be one, and Error will be two. So what happens if we have code like this where perhaps we’re getting the result code from another data source (could be database, could be web service, etc)? 1: public ResultCode PerformAction() 2: { 3: // set up and call some method that returns an int. 4: int result = ResultCodeFromDataSource(); 5:  6: // this will suceed even if result is < 0 or > 2. 7: return (ResultCode) result; 8: } So what happens if result is –1 or 4?  Well, the cast does not fail, so what we end up with would be an instance of a ResultCode that would have a value that’s outside of the bounds of the enum constants we defined. This means if you had a block of code like: 1: switch (result) 2: { 3: case ResultType.Success: 4: // do success stuff 5: break; 6:  7: case ResultType.Warning: 8: // do warning stuff 9: break; 10:  11: case ResultType.Error: 12: // do error stuff 13: break; 14: } That you would hit none of these blocks (which is a good argument for always having a default in a switch by the way). So what can you do?  Well, there is a handy static method called IsDefined() on the Enum class which will tell you if an enum value is defined.  1: public ResultCode PerformAction() 2: { 3: int result = ResultCodeFromDataSource(); 4:  5: if (!Enum.IsDefined(typeof(ResultCode), result)) 6: { 7: throw new InvalidOperationException("Enum out of range."); 8: } 9:  10: return (ResultCode) result; 11: } In fact, this is often recommended after you Parse() or cast a value to an enum as there are ways for values to get past these methods that may not be defined. If you don’t like the syntax of passing in the type of the enum, you could clean it up a bit by creating an extension method instead that would allow you to call IsDefined() off any isntance of the enum: 1: public static class EnumExtensions 2: { 3: // helper method that tells you if an enum value is defined for it's enumeration 4: public static bool IsDefined(this Enum value) 5: { 6: return Enum.IsDefined(value.GetType(), value); 7: } 8: }   HasFlag() – an easier way to see if a bit (or bits) are set Most of us who came from the land of C programming have had to deal extensively with bit flags many times in our lives.  
As such, using bit flags may be almost second nature (for a quick refresher on bit flags in enum types see one of my old posts here). However, in higher-level languages like C#, the need to manipulate individual bit flags is somewhat diminished, and the code to check for bit flag enum values may be obvious to an advanced developer but cryptic to a novice developer. For example, let’s say you have an enum for a messaging platform that contains bit flags: 1: // usually, we pluralize flags enum type names 2: [Flags] 3: public enum MessagingOptions 4: { 5: None = 0, 6: Buffered = 0x01, 7: Persistent = 0x02, 8: Durable = 0x04, 9: Broadcast = 0x08 10: } We can combine these bit flags using the bitwise OR operator (the ‘|’ pipe character): 1: // combine bit flags using 2: var myMessenger = new Messenger(MessagingOptions.Buffered | MessagingOptions.Broadcast); Now, if we wanted to check the flags, we’d have to test then using the bit-wise AND operator (the ‘&’ character): 1: if ((options & MessagingOptions.Buffered) == MessagingOptions.Buffered) 2: { 3: // do code to set up buffering... 4: // ... 5: } While the ‘|’ for combining flags is easy enough to read for advanced developers, the ‘&’ test tends to be easy for novice developers to get wrong.  First of all you have to AND the flag combination with the value, and then typically you should test against the flag combination itself (and not just for a non-zero)!  This is because the flag combination you are testing with may combine multiple bits, in which case if only one bit is set, the result will be non-zero but not necessarily all desired bits! Thanks goodness in .NET 4.0 they gave us the HasFlag() method.  This method can be called from an enum instance to test to see if a flag is set, and best of all you can avoid writing the bit wise logic yourself.  Not to mention it will be more readable to a novice developer as well: 1: if (options.HasFlag(MessagingOptions.Buffered)) 2: { 3: // do code to set up buffering... 4: // ... 5: } It is much more concise and unambiguous, thus increasing your maintainability and readability. It would be nice to have a corresponding SetFlag() method, but unfortunately generic types don’t allow you to specialize on Enum, which makes it a bit more difficult.  It can be done but you have to do some conversions to numeric and then back to the enum which makes it less of a payoff than having the HasFlag() method.  But if you want to create it for symmetry, it would look something like this: 1: public static T SetFlag<T>(this Enum value, T flags) 2: { 3: if (!value.GetType().IsEquivalentTo(typeof(T))) 4: { 5: throw new ArgumentException("Enum value and flags types don't match."); 6: } 7:  8: // yes this is ugly, but unfortunately we need to use an intermediate boxing cast 9: return (T)Enum.ToObject(typeof (T), Convert.ToUInt64(value) | Convert.ToUInt64(flags)); 10: } Note that since the enum types are value types, we need to assign the result to something (much like string.Trim()).  Also, you could chain several SetFlag() operations together or create one that takes a variable arg list if desired. Parse() and ToString() – transitioning from string to enum and back Sometimes, you may want to be able to parse an enum from a string or convert it to a string - Enum has methods built in to let you do this.  Now, many may already know this, but may not appreciate how much power are in these two methods. 
For example, if you want to parse a string as an enum, it’s easy and works just like you’d expect from the numeric types: 1: string optionsString = "Persistent"; 2:  3: // can use Enum.Parse, which throws if finds something it doesn't like... 4: var result = (MessagingOptions)Enum.Parse(typeof (MessagingOptions), optionsString); 5:  6: if (result == MessagingOptions.Persistent) 7: { 8: Console.WriteLine("It worked!"); 9: } Note that Enum.Parse() will throw if it finds a value it doesn’t like.  But the values it likes are fairly flexible!  You can pass in a single value, or a comma separated list of values for flags and it will parse them all and set all bits: 1: // for string values, can have one, or comma separated. 2: string optionsString = "Persistent, Buffered"; 3:  4: var result = (MessagingOptions)Enum.Parse(typeof (MessagingOptions), optionsString); 5:  6: if (result.HasFlag(MessagingOptions.Persistent) && result.HasFlag(MessagingOptions.Buffered)) 7: { 8: Console.WriteLine("It worked!"); 9: } Or you can parse in a string containing a number that represents a single value or combination of values to set: 1: // 3 is the combination of Buffered (0x01) and Persistent (0x02) 2: var optionsString = "3"; 3:  4: var result = (MessagingOptions) Enum.Parse(typeof (MessagingOptions), optionsString); 5:  6: if (result.HasFlag(MessagingOptions.Persistent) && result.HasFlag(MessagingOptions.Buffered)) 7: { 8: Console.WriteLine("It worked again!"); 9: } And, if you really aren’t sure if the parse will work, and don’t want to handle an exception, you can use TryParse() instead: 1: string optionsString = "Persistent, Buffered"; 2: MessagingOptions result; 3:  4: // try parse returns true if successful, and takes an out parm for the result 5: if (Enum.TryParse(optionsString, out result)) 6: { 7: if (result.HasFlag(MessagingOptions.Persistent) && result.HasFlag(MessagingOptions.Buffered)) 8: { 9: Console.WriteLine("It worked!"); 10: } 11: } So we covered parsing a string to an enum, what about reversing that and converting an enum to a string?  The ToString() method is the obvious and most basic choice for most of us, but did you know you can pass a format string for enum types that dictate how they are written as a string?: 1: MessagingOptions value = MessagingOptions.Buffered | MessagingOptions.Persistent; 2:  3: // general format, which is the default, 4: Console.WriteLine("Default : " + value); 5: Console.WriteLine("G (default): " + value.ToString("G")); 6:  7: // Flags format, even if type does not have Flags attribute. 8: Console.WriteLine("F (flags) : " + value.ToString("F")); 9:  10: // integer format, value as number. 11: Console.WriteLine("D (num) : " + value.ToString("D")); 12:  13: // hex format, value as hex 14: Console.WriteLine("X (hex) : " + value.ToString("X")); Which displays: 1: Default : Buffered, Persistent 2: G (default): Buffered, Persistent 3: F (flags) : Buffered, Persistent 4: D (num) : 3 5: X (hex) : 00000003 Now, you may not really see a difference here between G and F because I used a [Flags] enum, the difference is that the “F” option treats the enum as if it were flags even if the [Flags] attribute is not present.  Let’s take a non-flags enum like the ResultCode used earlier: 1: // yes, we can do this even if it is not [Flags] enum. 
2: ResultCode value = ResultCode.Warning | ResultCode.Error; And if we run that through the same formats again we get: 1: Default : 3 2: G (default): 3 3: F (flags) : Warning, Error 4: D (num) : 3 5: X (hex) : 00000003 Notice that since we had multiple values combined, but it was not a [Flags] marked enum, the G and default format gave us a number instead of a value name.  This is because the value was not a valid single-value constant of the enum.  However, using the F flags format string, it broke out the value into its component flags even though it wasn’t marked [Flags]. So, if you want to get an enum to display appropriately for whether or not it has the [Flags] attribute, use G which is the default.  If you always want it to attempt to break down the flags, use F.  For numeric output, obviously D or  X are the best choice depending on whether you want decimal or hex. Summary Hopefully, you learned a couple of new tricks with using the Enum class today!  I’ll add more little wonders as I think of them and thanks for all the invaluable input!   Technorati Tags: C#,.NET,Little Wonders,Enum,BlackRabbitCoder

    Read the article

  • My View on ASP.NET Web Forms versus MVC

    - by Ricardo Peres
    Introduction A lot has been said on Web Forms and MVC, but since I was recently asked about my opinion on the subject, here it is. First, I have to say that I really like both technologies and I don’t think any is going away – just remember SharePoint, which is built on top of Web Forms. I see them as complementary, targeting different needs and leveraging different skills. Let’s go through some of their differences. Rapid Application Development Rapid Application Development (RAD) is the development process by which you have an Integrated Development Environment (IDE), a visual design surface and a toolbox, and you drag components from the toolbox to the design surface and set their properties through a property inspector. It was introduced with some of the earliest Windows graphical IDEs such as Visual Basic and Delphi. With Web Forms you have RAD out of the box. Visual Studio offers a generally good (and extensible) designer for the layout of pages and web user controls. Designing a page may simply be about dragging controls from the toolbox, setting their properties and wiring up some events to event handlers, which are implemented in code behind .NET classes. Most people will be familiar with this kind of development and enjoy it. You can see what you are doing from the beginning. MVC also has designable pages – called views in MVC terminology – the problem is that they can be built using different technologies, some of which, at the moment (MVC 4) do not support RAD – Razor, for example. I believe it is just a matter of time for that to be implemented in Visual Studio, but it will mostly consist on HTML editing, and until that day comes, you have to live with source editing. Development Model Web Forms features the same development model that you are used to from Windows Forms and other similar technologies: events fired by controls and automatic persistence of their properties between postbacks. For that, it uses concepts such as view state, which some may love and others may hate, because it may be misused quite easily, but otherwise does its job well. Another fundamental concept is data binding, by which a collection of data can be fed to a control and have it render that data somehow – just thing of the GridView control. The focus is on the page, that’s where it all starts, and you can place everything in the same code behind class: data access, business logic, layout, etc. The controls take care of generating a great part of the HTML and JavaScript for you. With MVC there is no free lunch when it comes to data persistence between requests, you have to implement it yourself. As for event handling, that is at the core of MVC, in the form of controllers and action methods, you just don’t think of them as event handlers. In MVC you need to think more in HTTP terms, so action methods such as POST and GET are relevant to you, and may write actions to handle one or the other. Also of crucial importance is model binding: the way by which MVC converts your posted data into a .NET class. This is something that ASP.NET 4.5 Web Forms has introduced as well, but it is a cornerstone in MVC. MVC also has built-in validation of these .NET classes, which out of the box uses the Data Annotations API. You have full control of the generated HTML - except for that coming from the helper methods, usually small fragments - which requires a greater familiarity with the specifications. 
You normally rely much more on JavaScript APIs, they are even included in the Visual Studio template, that is because much less is done for you. Reuse It is difficult to accept a professional company/project that does not employ reuse. It can save a lot of time thus cutting costs significantly. Code reused in several projects matures as time goes by and helps developers learn from past experiences. ASP.NET Web Forms was built with reuse in mind, in the form of controls. Controls encapsulate functionality and are generally portable from project to project (with the notable exception of web user controls, those with an associated .ASCX markup file). ASP.NET has dozens of controls and it is very easy to develop new ones, so I believe this is a great advantage. A control can inject JavaScript code and external references as well as generate HTML an CSS. MVC on the other hand does not use controls – it is possible to use them, with some view engines like ASPX, but it is just not advisable because it breaks the flow – where do Init, Load, PreRender, etc, fit? The most similar to controls is extension methods, or helpers. They serve the same purpose – generating HTML, CSS or JavaScript – and can be reused between different projects. What differentiates them from controls is that there is no inheritance and no context – an extension method is just a static method which doesn’t know where it is being called. You also have partial views, which you can reuse in the same project, but there is no inheritance as well. This, in my view, is a weakness of MVC. Architecture Both technologies are highly extensible. I have writtenstarted writing a series of posts on ASP.NET Web Forms extensibility and will probably write another series on MVC extensibility as well. A number of scenarios are covered in any of these models, and some extensibility points apply to both, because, of course both stand upon ASP.NET. With Web Forms, if you’re like me, you start by defining you master pages, pages and controls, with some helper classes to glue everything. You may as well throw in some JavaScript, but probably you’re main work will be with plain old .NET code. The controls you define have the chance to inject JavaScript code and references, through either the ScriptManager or the page’s ClientScript object, as well as generating HTML and CSS code. The master page and page model with code behind classes offer a number of “hooks” by which you can change the normal way of things, for example, in a page you can access any control on the master page, add script or stylesheet references to its head and even change the page’s title. Also, with Web Forms, you typically have URLs in the form “/SomePath/SomePage.aspx?SomeParameter=SomeValue”, which isn’t really SEO friendly, no to mention the HTML that some controls produce, far from standards, optimization and best practices. In MVC, you also normally start by defining the master page (or layout) and views, which are the visible parts, and then define controllers on separate files. These controllers do not know anything about the views, except the names and types of the parameters that will be passed to and from them. The controller will be responsible for the data access and business logic, eventually relying on additional classes for this purpose. On a controller you only receive parameters and return a result, which may be a request for the rendering of a view, a redirection to another URL or a JSON object, to name just a few. 
The controller class does not know anything about the web, so you can effectively reuse it in a non-web project. This separation and the lack of programmatic access to the UI elements, makes it very difficult to implement, for example, something like SharePoint with MVC. OK, I know about Orchard, but it isn’t really a general purpose development framework, but instead, a CMS that happens to use MVC. Not having controls render HTML for you gives you in turn much more control over it – it is your responsibility to create it, which you can either consider a blessing or a curse, in the later case, you probably shouldn’t be using MVC at all. Also MVC URLs tend to be much more SEO-oriented, if you design your controllers and actions properly. Testing In a well defined architecture, you should separate business logic, data access logic and presentation logic, because these are all different things and it might even be the need to switch one implementation for another: for example, you might design a system which includes a data access layer, a business logic layer and two presentation layers, one on top of ASP.NET and the other with WPF; and the data access layer might be implemented first using NHibernate and later on switched for Entity Framework Code First. These changes are not that rare, so care should be taken in designing the system to make them possible. Web Forms are difficult to test, because it relies on event handlers which are only fired in web contexts, when a form is submitted or a page is requested. You can call them with reflection, but you have to set up a number of mocking objects first, HttpContext.Current first coming to my mind. MVC, on the other hand, makes testing controllers a breeze, so much that it even includes a template option for generating boilerplate unit test classes up from start. A well designed – from the unit test point of view - controller will receive everything it needs to work as parameters to its action methods, so you can pass whatever values you need very easily. That doesn’t mean, of course, that everything can be tested: views, for instance, are difficult to test without actually accessing the site, but MVC offers the possibility to compile views at build time, so that, at least, you know you don’t have syntax errors beforehand. 
Myths Some popular but unfounded myths around MVC include: You cannot use controls in MVC: not true, actually, you can, at least with the Web Forms (ASPX) view engine; the declaration and usage is exactly the same as with Web Forms; You cannot specify a base class for a view: with the ASPX view engine you can use the Inherits Page directive, with this and all the others you can use the pageBaseType and userControlBaseType attributes of the <page> element; MVC shields you from doing “bad things” on your views: well, you can place any code on a code block, at least with the ASPX view engine (you may be starting to see a pattern here), even data access code; The model is the entity model, tied to an O/RM: the model is actually any class that you use to pass values to a view, including (but generally not recommended) an entity model; Unit tests come with no cost: unit tests generally don’t cover the UI, although there are frameworks just for that (see WatiN, for example); also, for some tests, you will have to mock or replace either the HttpContext.Current property or the HttpContextBase class yourself; Everything is testable: views aren’t, without accessing the site; MVC relies on HTML5/some_cool_new_javascript_framework: there is no relation whatsoever, MVC renders whatever you want it to render and does not require any framework to be present. The thing is, the subsequent releases of MVC happened in a time when Microsoft has become much more involved in standards, so the files and technologies included in the Visual Studio templates reflect this, and it just happens to work well with jQuery, for example. Conclusion Well, this is how I see it. Some folks may think that I am being too rude on MVC, probably because I don’t like it, but that’s not true: like I said, I do like MVC and I am starting my new projects with it. I just don’t want to go along with that those that say that MVC is much superior to Web Forms, in fact, some things you can do much more easily with Web Forms than with MVC. I will be more than happy to hear what you think on this!
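As a small, self-contained illustration of the controller/action and model-binding concepts discussed above (this is generic sample code, not taken from the article; all names are invented):

    // Assumes: using System.Web.Mvc; using System.ComponentModel.DataAnnotations;
    public class BookInputModel
    {
        [Required, StringLength(200)]
        public string Title { get; set; }

        [Range(0, 999)]
        public decimal Price { get; set; }
    }

    public class BooksController : Controller
    {
        [HttpGet]
        public ActionResult Create()
        {
            return View();                      // render the empty form
        }

        [HttpPost]
        public ActionResult Create(BookInputModel input)
        {
            // Model binding has already populated 'input' from the posted form,
            // and data-annotation validation results are in ModelState.
            if (!ModelState.IsValid)
                return View(input);             // redisplay the form with errors

            // ... call into business/data-access logic here ...
            return RedirectToAction("Index");
        }
    }

The controller knows nothing about how the view renders the form, which is the separation credited above for making controllers easy to unit test.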

    Read the article

  • Read only file system

    - by Jack Moon
    I'm running Ubuntu 12.10. Upon opening any shell I get the following error:

        /home/jack/.rbenv/libexec/rbenv-init: line 87: cannot create temp file for here-document: Read-only file system

    I realised this wasn't simply an rbenv issue, as any file I try to write to returns an error saying the system is read-only. I don't know how else to describe my problem; each time I boot up, the system goes through a disk check, where it supposedly fixes several errors on my disk. Here is my /etc/fstab:

        # <file system> <mount point> <type> <options> <dump> <pass>
        proc /proc proc nodev,noexec,nosuid 0 0
        # / was on /dev/sda1 during installation
        UUID=1cc4b2ab-a984-4516-ac25-6d64f5050244 / ext4 errors=remount-ro 0 1
        # swap was on /dev/sda5 during installation
        UUID=4e0dfeae-701a-43ce-b5c6-65f15ab3d8e3 none swap sw 0 0

    The entire file system is read-only. I've tried the following:

        sudo fsck.ext4 -f /dev/sda1

    which gave the following (shortened) output:

        /dev/sda1: ***** FILE SYSTEM WAS MODIFIED *****
        /dev/sda1: ***** REBOOT LINUX *****
        /dev/sda1: 1257080/45268992 files (1.0% non-contiguous), 50696803/181051904 blocks

    Read the article

  • May I know what python is great at [on hold]

    - by user108437
    I am amazed by Python and how tidy the code is, so I decided to learn it. Two days have passed and I am completely in love with Python, but I only code as a hobby: things like a chat robot, file-hosting upload scripts, and other small tools for my own daily internet life, not for work. I can't find a real-life use for Python here. I live in Singapore, and when I look at the skills companies ask for when hiring, only one asks for Python, so I am beginning to doubt whether this skill is really worth the time I am investing in it. I have also heard about Django, but I don't know how popular it is compared to ASP.NET. So I ask for your help: tell me your country, how popular Python is there, and whether or not you like Python. I really like Python because it is an easy scripting language (not complicated like C++), yet its usefulness comes close to C++, with many open-source libraries out there that run on both Windows and Linux, so the portability is great! I just want to justify my time spent learning Python, because my job does not require it and I don't have much time at home to learn something new.

    Read the article

  • Get the Information You Need. Delivered.

    - by Get Proactive Customer Adoption Team
    Don't Take Chances with Alerts - Get Hot Topics. When Oracle Support publishes an alert, how do you find out about it? I can see any number of ways you might stumble onto an alert that you need. For example, if you are visiting My Oracle Support in search of answers under the Knowledge tab and happen to notice, and click on, the Alert tab under the Knowledge Article region, you might see an alert listed for one of the products you use. There are other ways, like subscribing to one of the Oracle Blogs and finding the alert in your RSS feed because the blogger decided to write up that topic for the latest post. I'm sure your colleagues sometimes pass on critical alerts for your products, hopefully giving you the information before you need it. Well, no matter how you learn about an alert, the important point is that you get the correct information in a timely way. Right? I must admit, the 'magic' required to find out via these methods makes me nervous. Rather than leave it to chance, I think you need a more reliable way to stay informed and receive alerts for your products when Oracle publishes them. You may not be aware of it, but there is a better way. Oracle Premier Support customers can leverage the "Hot Topics E-Mail." You select the products and topics that interest you. Based on your choices, the system sends you the support-related information when Oracle Support publishes it. This way you and I can both relax, knowing you'll have ready access to the alerts you need, and enjoy the breadth of support-related information you choose to subscribe to. This can include recently updated Knowledge Base articles, new bugs, and product news. If I've convinced you, you will want to know how to set up and subscribe to the Hot Topics E-Mail. The complete guide, Doc ID 793436.1, is waiting for you. Follow the instructions in the document, and you will always stay on top of the latest information from Oracle Support.

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 18 (sys.dm_io_virtual_file_stats)

    - by Tamarick Hill
    The sys.dm_io_virtual_file_stats Dynamic Management Function is used to return IO statistic information about each of the database files on your server. As input parameters, this function takes a database_id and a file_id. If you want to return IO statistic information for all files, you can simply pass in NULL values for both of these. Let's have a look at this function and examine its results:

        SELECT db_name(database_id) DatabaseName, *
        FROM sys.dm_io_virtual_file_stats(NULL, NULL)

    The first column in the result set is DatabaseName, which is just a column I created using the db_name() system function and the database_id column from this function. Next we have file_id, which represents the ID of the file, whether it is a data file or a transaction log file. The 'sample_ms' column represents the total time in milliseconds that the instance has been up and running. Next we have 'num_of_reads', 'num_of_bytes_read', and later 'num_of_writes' and 'num_of_bytes_written'. These columns represent the number of reads or writes and the number of bytes read or written against a particular file, and they are beneficial when determining how often a particular file is being accessed. The 'io_stall_read_ms' and 'io_stall_write_ms' columns each represent the total time in milliseconds that users have had to wait for reads or writes against a file, respectively. The 'io_stall' column is the sum of both read and write IO stalls. The 'size_on_disk_bytes' column represents the size of the respective file on your disk subsystem. Lastly, the 'file_handle' column is simply the Windows file handle. This Dynamic Management Function is useful when you need to analyze your database files for the purpose of segregating high-IO databases. This DMF gives you a good view of which of your database files are being accessed the most and which ones may be generating the largest IO stalls; these could be your best candidates for moving into separate IO channels. For more information about this DMF, please see the Books Online link below:
    http://msdn.microsoft.com/en-us/library/ms190326.aspx
    Follow me on Twitter @PrimeTimeDBA
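
    As a small illustration of how these columns can be consumed from application code, here is a minimal C# sketch (not from the original post; the connection string is a placeholder and the derived "average stall" columns are my own names) that reads the DMF and computes an average stall per read and per write for each file, which is one way to spot candidates for separate IO channels:

        using System;
        using System.Data.SqlClient;

        class FileStatsReport
        {
            static void Main()
            {
                // Placeholder connection string; point it at the instance you want to inspect.
                const string connectionString = "Server=.;Database=master;Integrated Security=true";

                const string query = @"
                    SELECT DB_NAME(vfs.database_id) AS database_name,
                           vfs.file_id,
                           CASE WHEN vfs.num_of_reads = 0 THEN 0
                                ELSE vfs.io_stall_read_ms / vfs.num_of_reads END AS avg_read_stall_ms,
                           CASE WHEN vfs.num_of_writes = 0 THEN 0
                                ELSE vfs.io_stall_write_ms / vfs.num_of_writes END AS avg_write_stall_ms
                    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
                    ORDER BY vfs.io_stall DESC;";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(query, connection))
                {
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // Files with high average stalls (in whole milliseconds, due to integer
                            // division) are the first candidates for separate IO channels.
                            Console.WriteLine("{0} (file {1}): avg {2} ms/read, avg {3} ms/write",
                                reader["database_name"], reader["file_id"],
                                reader["avg_read_stall_ms"], reader["avg_write_stall_ms"]);
                        }
                    }
                }
            }
        }
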

    Read the article

  • Passing elapsed time to the update function from the game loop

    - by Sri Harsha Chilakapati
    I want to pass the elapsed time to the update() method, as this would make it easy to implement animations and other time-related concepts. Here's my game loop:

        public void gameLoop(){
            boolean running = true;
            long gameTime = getCurrentTime();
            long elapsedTime = 0;
            long lastUpdateTime = 0;
            int loops;
            while (running){
                loops = 0;
                while (getCurrentTime() > gameTime && loops < Global.MAX_FRAMESKIP){
                    elapsedTime = getCurrentTime() - lastUpdateTime;
                    lastUpdateTime = getCurrentTime();
                    update(elapsedTime);
                    gameTime += SKIP_STEPS;
                    loops++;
                }
                displayGame();
            }
        }

    The getCurrentTime() method:

        public long getCurrentTime(){
            return (System.nanoTime()/1000000);
        }

    The update() method:

        long time = 0;
        public void update(long elapsedTime){
            time += elapsedTime;
            if (time >= 1000){
                System.out.println("A second elapsed");
                time -= 1000;
            }
        }

    But this is printing the message for 3 seconds. Thanks.

    Read the article
