Search Results

Search found 30549 results on 1222 pages for 'object orientation'.


  • Session and Pop Up Window

    - by imran_ku07
     Introduction : Session is ASP.NET's built-in server-side state management. It allows the user to store information on one page and access it on another, and it can hold any type of object. Every user's session is identified by a cookie, which the client presents to the server. Unfortunately, when you open a new pop-up window this cookie is not always posted to the server with the request, so the server cannot identify the session data for the current user. In this article I will show you how to handle this situation.

     Description : While working in an application, I was getting an exception saying that Session was null whenever a pop-up window opened. Looking at the problem more closely, I found that the parent page's ASP.NET_SessionId cookie was not posted in the cookie header of the child (popup) window. Therefore, to make the session available in both the parent and the child (popup) window, you have to present the same cookie. To share the cookie, I passed the parent SessionID in the query string,

     window.open('http://abc.com/s.aspx?SASID=<%= Session.SessionID %>', 'V');

     and in the Application_PostMapRequestHandler application event I check whether the current request has no ASP.NET_SessionId cookie and the SASID query string is not null; if so, I add the cookie to the Request before the session is acquired, so that the session data is the same for both the parent and the popup window.

     Private Sub Application_PostMapRequestHandler(ByVal sender As Object, ByVal e As EventArgs)
         If (Request.Cookies("ASP.NET_SessionId") Is Nothing) AndAlso (Request.QueryString("SASID") IsNot Nothing) Then
             Request.Cookies.Add(New HttpCookie("ASP.NET_SessionId", Request.QueryString("SASID")))
         End If
     End Sub

     Now you can access Session in your parent and child windows without any problem.

     How this works : ASP.NET (both Web Forms and MVC) uses a cookie (ASP.NET_SessionId) to identify the user making the request. Cookies may be persistent (saved permanently in the user's cookie store) or non-persistent (kept temporarily in browser memory). The ASP.NET_SessionId cookie is non-persistent, which means that if the user closes the browser the cookie is immediately removed, a sensible step that helps security. That is why ASP.NET cannot tell that a request from a new browser instance is coming from the same user, and every browser instance gets its own ASP.NET_SessionId. To resolve this, you need to present the same parent ASP.NET_SessionId cookie to the server when opening a popup window. You can confirm this behaviour with tools such as Firebug or Fiddler.

     Summary : Hopefully this article has shown how to work around the problem of sharing Session between different browser instances by sharing their session identifier cookie.
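
     For C# projects, a minimal equivalent of the same Global.asax handler might look like the sketch below; the cookie and query string names mirror the article, and the rest should be adapted to your own application:

     protected void Application_PostMapRequestHandler(object sender, EventArgs e)
     {
         // Re-attach the parent window's session cookie before the session is acquired,
         // so the popup shares the same session state as its opener.
         if (Request.Cookies["ASP.NET_SessionId"] == null && Request.QueryString["SASID"] != null)
         {
             Request.Cookies.Add(new HttpCookie("ASP.NET_SessionId", Request.QueryString["SASID"]));
         }
     }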

    Read the article

  • Responding to the page unload in a managed bean

    - by frank.nimphius
    Though ADF Faces provides an uncommitted data warning functionality, developers may have a requirement to respond to the page unload event within custom application code, programmed in a managed bean. The af:clientListener tag that is used in ADF Faces to listen for JavaScript and ADF Faces client component events does not provide the option to listen for the unload event, so this often recommended way of implementing JavaScript in ADF Faces does not work for this use case. To send an event from JavaScript to the server, ADF Faces provides the af:serverListener tag that you use to queue a CustomEvent that invokes a method in a managed bean. While this is part of the solution, during testing it turned out that the browser's native JavaScript unload event itself is not very helpful for sending an event to the server using the af:serverListener tag. The reason for this is that when the unload event fires, the page has already been unloaded and the ADF Faces AdfPage object needed to queue the custom event already returns null. So the solution to the unload page event handling is the beforeunload event, which I am not sure all browsers support; I tested IE and FF and they obviously do. To register the beforeunload event, you use an advanced JavaScript programming technique that dynamically adds listeners to page events.

    <af:document id="d1" onunload="performUnloadEvent" clientComponent="true">
    <af:resource type="javascript">
      window.addEventListener('beforeunload',
                              function (){performUnloadEvent()},false)

      function performUnloadEvent(){
        //note that af:document must have clientComponent="true" set
        //for JavaScript to access the component object
        var eventSource = AdfPage.PAGE.findComponentByAbsoluteId('d1');
        //var x and y are dummy variables obviously needed to keep the page
        //alive for as long as it takes to send the custom event to the server
        var x = AdfCustomEvent.queue(eventSource,
                                     "handleOnUnload",
                                     {args:'noargs'},false);
        //replace args:'noargs' with key:value pairs if your event needs to
        //pass arguments and values to the server side managed bean.
        var y = 0;
      }
    </af:resource>
    <af:serverListener type="handleOnUnload"
                       method="#{UnloadHandler.onUnloadHandler}"/>
    // rest of the page goes here …
    </af:document>

    The managed bean method called by the custom event has the following signature:

    public void onUnloadHandler(ClientEvent clientEvent) {
    }

    I don't really have a good explanation for why the JavaScript variables "x" and "y" are needed, but this is how I got it working. To me it once again shows how fragile custom JavaScript development is and why you should stay away from using it whenever possible. Note: If the unload event is produced through navigation in JavaServer Faces, then there is no need to use JavaScript for this. If you know that navigation is performed from one page to the next, then the action you want to perform can be handled in JSF directly in the context of the lifecycle.

    Read the article

  • WPF: Running code when Window rendering is completed

    - by Ilya Verbitskiy
    WPF is full of surprises. It makes complicated tasks easier, but at the same time it overcomplicates easy tasks as well. A good example of such an overcomplicated thing is how to run code when you are sure that window rendering is completed. The Window Loaded event does not always work, because controls might still be rendering. I had this issue working with the Infragistics XamDockManager: it continued rendering widgets even after the Window Loaded event had been raised. Unfortunately there is no "official" solution for this problem, but there is a trick. You can execute your code asynchronously using the Dispatcher class.

    Dispatcher.BeginInvoke(new Action(() => Trace.WriteLine("DONE!", "Rendering")), DispatcherPriority.ContextIdle, null);

    This code should be added to your Window Loaded event handler. It is executed when all controls inside your window are rendered. I created a small application to prove this idea. The application has one window with a few buttons. Each button logs when it has changed its actual size. It also logs when the Window Loaded event is raised and, finally, when rendering is completed. The window's layout is straightforward.

    <Window x:Class="OnRendered.MainWindow"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            Title="Run the code when window rendering is completed." Height="350" Width="525"
            Loaded="OnWindowLoaded">
        <Window.Resources>
            <Style TargetType="{x:Type Button}">
                <Setter Property="Padding" Value="7" />
                <Setter Property="Margin" Value="5" />
                <Setter Property="HorizontalAlignment" Value="Center" />
                <Setter Property="VerticalAlignment" Value="Center" />
            </Style>
        </Window.Resources>
        <StackPanel>
            <Button x:Name="Button1" Content="Button 1" SizeChanged="OnSizeChanged" />
            <Button x:Name="Button2" Content="Button 2" SizeChanged="OnSizeChanged" />
            <Button x:Name="Button3" Content="Button 3" SizeChanged="OnSizeChanged" />
            <Button x:Name="Button4" Content="Button 4" SizeChanged="OnSizeChanged" />
            <Button x:Name="Button5" Content="Button 5" SizeChanged="OnSizeChanged" />
        </StackPanel>
    </Window>

    The SizeChanged event handler simply traces that the event has happened.

    private void OnSizeChanged(object sender, SizeChangedEventArgs e)
    {
        Button button = (Button)sender;
        Trace.WriteLine("Size has been changed", button.Name);
    }

    The Window Loaded event handler is slightly more interesting. First it schedules the code to be executed using the Dispatcher class, and then it logs the event.

    private void OnWindowLoaded(object sender, RoutedEventArgs e)
    {
        Dispatcher.BeginInvoke(new Action(() => Trace.WriteLine("DONE!", "Rendering")), DispatcherPriority.ContextIdle, null);
        Trace.WriteLine("Loaded", "Window");
    }

    As a result I saw these trace messages:

    Button5: Size has been changed
    Button4: Size has been changed
    Button3: Size has been changed
    Button2: Size has been changed
    Button1: Size has been changed
    Window: Loaded
    Rendering: DONE!

    You can find the solution on GitHub.
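
    If several windows need the same behaviour, the Dispatcher trick can be wrapped in a small helper. The following extension method is only a sketch; the name OnRendered and the idea of packaging it this way are mine, not part of the original article:

    using System;
    using System.Windows;
    using System.Windows.Threading;

    public static class WindowExtensions
    {
        // Runs the given action once the window's Loaded event has fired and the
        // dispatcher has drained down to ContextIdle priority, i.e. after the
        // controls inside the window have been rendered.
        public static void OnRendered(this Window window, Action action)
        {
            window.Loaded += (s, e) =>
                window.Dispatcher.BeginInvoke(action, DispatcherPriority.ContextIdle, null);
        }
    }

    Usage would then be a one-liner in the window's constructor: this.OnRendered(() => Trace.WriteLine("DONE!", "Rendering"));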

    Read the article

  • How views are changing in future versions of SQL

    - by Rob Farley
    April is here, and this weekend, SQL v11.0 (previously known as Denali, now known as SQL Server 2012) reaches general availability. And so I thought I'd share some news about what's coming next. I didn't hear this at the MVP Summit earlier this year (where there was lots of NDA information given, but I didn't go), so I think I'm free to share it. I've written before about CTEs being query-scoped views. Well, the actual story goes a bit further, and will continue to develop in future versions. A CTE is like a "temporary temporary view", scoped to a single query. Due to globally-scoped temporary objects using a two-hashes naming style, and session-scoped (or 'local') temporary objects using a one-hash naming style, this query-scoped temporary object uses a cunning zero-hash naming style. We see this implied in Books Online in the CREATE TABLE page, but as we know, temporary views are not yet supported in SQL Server. However, in a breakaway from ANSI-SQL, Microsoft is moving towards consistency with their naming. We know that a CTE is a "common table expression" – this is proving to be more strategic than you may have appreciated. Within the Microsoft product group, the term "Table Expression" is far more widely used than just CTEs. Anything that can be used in a FROM clause is referred to as a Table Expression, so long as it doesn't actually store data (which would make it a Table, rather than a Table Expression). You can see this is not just restricted to the product group by doing an internet search for how the term is used without 'common'. In the past, Books Online has referred to a view as a "virtual table" (but notice that there is no SQL 2012 version of this page). However, it was generally decided that "virtual table" was a poor name because it wasn't completely accurate, and it's typically accepted that mixing virtualisation and SQL is frowned upon. That page I linked to says "or stored query", which is slightly better, but when the SQL 2012 version of that page is actually published, the line will be changed to read: "A view is a stored table expression (STE)". This change will be the first of many. During the SQL 2012 R2 release, the keyword VIEW will become deprecated (this will be SQL v11 SP1.5). Three versions later, in SQL 14.5, you will need to be in compatibility mode 140 to allow "CREATE VIEW" to work. Also consistent with Microsoft's deprecation policy, the execution of any query that refers to an object created as a view (rather than the new "CREATE STE") will cause a Deprecation Event to fire. This will all be in preparation for the introduction of Single-Column Table Expressions (to be introduced in SQL 17.3 SP6) which will finally shut up those people waiting for a decent implementation of Inline Scalar Functions. And of course, CTEs are "Common" because the Table Expression definition needs to be repeated over and over throughout a stored procedure. ...or so I think I heard at some point. Oh, and congratulations to all the new MVPs on this April 1st. @rob_farley

    Read the article

  • How to display Sharepoint Data in a Windows Forms Application

    - by Michael M. Bangoy
    In this post I'm going to demonstrate how to retrieve SharePoint data and display it on a Windows Forms application. 1. Open Visual Studio 2010 and create a new Project. 2. In the project template select Windows Forms Application. 3. In order to communicate with SharePoint from a Windows Forms application we need to add the two SharePoint client DLLs located in c:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\ISAPI. 4. Select Microsoft.Sharepoint.Client.dll and Microsoft.Sharepoint.Client.Runtime.dll. (Your solution should look like the one below.) 5. Open Form1 in design view and, from the Toolbox, add a Button, TextBox, Label and DataGridView to the form. 6. Next, double click the Load button; this opens the code view of the form. Add a using statement to reference the SharePoint client library, then create two methods, one to load the site title and one to load the lists. See below:

    using System;
    using System.Collections.Generic;
    using System.ComponentModel;
    using System.Data;
    using System.Drawing;
    using System.Linq;
    using System.Text;
    using System.Security;
    using System.Windows.Forms;
    using SP = Microsoft.SharePoint.Client;

    namespace ClientObjectModel
    {
        public partial class Form1 : Form
        {
            // url of the Sharepoint site
            const string _context = "theurlofthesharepointsite";

            public Form1()
            {
                InitializeComponent();
            }

            private void Form1_Load(object sender, EventArgs e)
            {
            }

            private void getsitetitle()
            {
                SP.ClientContext context = new SP.ClientContext(_context);
                SP.Web _site = context.Web;
                context.Load(_site);
                context.ExecuteQuery();
                txttitle.Text = _site.Title;
                context.Dispose();
            }

            private void loadlist()
            {
                using (SP.ClientContext _clientcontext = new SP.ClientContext(_context))
                {
                    SP.Web _web = _clientcontext.Web;
                    SP.ListCollection _lists = _clientcontext.Web.Lists;
                    _clientcontext.Load(_lists);
                    _clientcontext.ExecuteQuery();

                    DataTable dt = new DataTable();
                    DataColumn column;
                    DataRow row;

                    column = new DataColumn();
                    column.DataType = Type.GetType("System.String");
                    column.ColumnName = "List Title";
                    dt.Columns.Add(column);

                    foreach (SP.List listitem in _lists)
                    {
                        row = dt.NewRow();
                        row["List Title"] = listitem.Title;
                        dt.Rows.Add(row);
                    }

                    dataGridView1.DataSource = dt;
                }
            }

            private void cmdload_Click(object sender, EventArgs e)
            {
                getsitetitle();
                loadlist();
            }
        }
    }

    7. That's it. Hit F5 to run the application, then click the Load button. Your screen should look like the one below. Hope this helps.
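
    As a possible next step beyond listing list titles, the same client object model can also read the items of a specific list. The sketch below is my own illustration and not part of the original post; the list name "Announcements" and the method name loaditems are assumptions:

    private void loaditems()
    {
        using (SP.ClientContext _clientcontext = new SP.ClientContext(_context))
        {
            // Fetch every item in the "Announcements" list (assumed to exist on the site)
            SP.List _list = _clientcontext.Web.Lists.GetByTitle("Announcements");
            SP.CamlQuery _query = SP.CamlQuery.CreateAllItemsQuery();
            SP.ListItemCollection _items = _list.GetItems(_query);
            _clientcontext.Load(_items);
            _clientcontext.ExecuteQuery();

            foreach (SP.ListItem _item in _items)
            {
                // "Title" is a standard field on most lists
                System.Diagnostics.Debug.WriteLine(_item["Title"]);
            }
        }
    }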

    Read the article

  • How would I handle input with a Game Component?

    - by Aufziehvogel
    I am currently having problems finding my way into the component-oriented XNA design. I read an overview of the general design pattern and googled a lot of XNA examples. However, they seem to sit on opposite sides. In the general design pattern, an object (my current player) is passed to InputComponent::update(Player). This means the class will know what to do and how this will affect the game (e.g. move a person vs. scroll text in a menu). Yet, in XNA, GameComponent::Update(GameTime) is called automatically without a reference to the current player. The only XNA examples I found built some sort of higher-level keyboard engine into the game component, like this: class InputComponent : GameComponent { public void keyReleased(Keys); public void keyPressed(Keys); public bool keyDown(Keys); public override void Update(GameTime gameTime) { // compare previous state with current state and // determine if released, pressed, down or nothing } } Some others went a bit further, making it possible to use a Service Locator with a design like this: interface IInputComponent { public void downwardsMovement(Keys); public void upwardsMovement(Keys); public bool pausedGame(Keys); // determine which keys pressed and what that means // can be done for different inputs in different implementations public override void Update(GameTime); } Yet, then I am wondering if it is possible to design an input class to resolve all possible situations. Like, in a menu a mouse click can mean "click that button", but in game play it can mean "shoot that weapon". So if I am using such a modular design with game components for input, how much logic is to be put into the InputComponent / KeyboardComponent / GamepadComponent, and where is the rest handled? What I had in mind when I heard about Game Components and the Service Locator in XNA was something like this: use Game Components to run the InputHandler automatically in the loop, and use the Service Locator to be able to switch input at runtime (i.e. let the player choose whether he wants to use a gamepad or a keyboard, or which shall be player 1 and which player 2). However, now I cannot see how this can be done. The first code example does not seem flexible enough, as on a gamepad you could require some combination of buttons for something that is possible on the keyboard with only one button or with the mouse. The second code example seems really hard to implement, because the InputComponent has to know which context we are currently in. Moreover, you could imagine your application to be multi-layered and let the key-stroke go through all layers to the bottom layer, which requires a different behaviour than the InputComponent would have guessed from the top layer. The general design pattern with passing the Player to update() does not have a representation in XNA, and I also cannot see how and where to decide which class should be passed to update(). Most of the time it is of course the player, but sometimes there could be menu items you have to, or can, click. I see that the question in general is already dealt with here, but probably from a more elaborate point of view. At least, I am not smart enough in game development to understand it. I am searching for a rather code-based example directly for XNA. And the answer there leaves (a noob like) me still alone in how the object that should receive the detected event is chosen. Like, if I have a key-up event, should it go to the text box or to the player?
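
    One way to reconcile the two ideas from the question, and it is only a sketch under my own assumptions rather than an authoritative answer, is to hide the device behind a small interface and register the chosen implementation with XNA's built-in service container. The interface name IInputService and its members are invented for the example:

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Input;

    // Gameplay code asks questions in its own vocabulary...
    public interface IInputService
    {
        Vector2 Movement { get; }   // -1..1 on each axis
        bool PausePressed { get; }
    }

    // ...and a keyboard-specific component answers them.
    public class KeyboardInputService : GameComponent, IInputService
    {
        private KeyboardState _current, _previous;

        public KeyboardInputService(Game game) : base(game) { }

        public Vector2 Movement
        {
            get
            {
                float x = (_current.IsKeyDown(Keys.Right) ? 1f : 0f) - (_current.IsKeyDown(Keys.Left) ? 1f : 0f);
                float y = (_current.IsKeyDown(Keys.Up) ? 1f : 0f) - (_current.IsKeyDown(Keys.Down) ? 1f : 0f);
                return new Vector2(x, y);
            }
        }

        public bool PausePressed
        {
            get { return _current.IsKeyDown(Keys.Escape) && _previous.IsKeyUp(Keys.Escape); }
        }

        public override void Update(GameTime gameTime)
        {
            _previous = _current;
            _current = Keyboard.GetState();
            base.Update(gameTime);
        }
    }

    // In Game.Initialize(): register whichever implementation the player picked.
    //   var input = new KeyboardInputService(this);
    //   Components.Add(input);                                // runs Update automatically
    //   Services.AddService(typeof(IInputService), input);    // service locator
    //
    // In the player's update code:
    //   var input = (IInputService)Game.Services.GetService(typeof(IInputService));
    //   position += input.Movement * speed * (float)gameTime.ElapsedGameTime.TotalSeconds;

    Because the rest of the game only talks to IInputService, a GamepadInputService with different button combinations could be registered instead at runtime, which is the Service Locator part; the component registration is what keeps Update running automatically in the loop.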

    Read the article

  • Modularity through HTTP

    - by Michael Williamson
    As programmers, we strive for modularity in the code we write. We hope that splitting the problem up makes it easier to solve, and allows us to reuse parts of our code in other applications. Object-orientation is the most obvious of many attempts to get us closer to this ideal, and yet one of the most successful approaches is almost accidental: the web. Programming languages provide us with functions and classes, and plenty of other ways to modularize our code. This allows us to take our large problem, split it into small parts, and solve those small parts without having to worry about the whole. It also makes it easier to reason about our code. So far, so good, but now that we've written our small, independent module, for example to send out e-mails to our customers, we'd like to reuse it in another application. By creating DLLs, JARs or our platform's package container of choice, we can do just that – provided our new application is on the same platform. Want to use a Java library from C#? Well, good luck – it might be possible, but it's not going to be smooth sailing. Even if a library exists, it doesn't mean that using it is going to be a pleasant experience. Say I want to use Java to write out an XML document to an output stream. You'd imagine this would be a simple one-liner. You'd be wrong:

    import org.w3c.dom.*;
    import java.io.*;
    import javax.xml.transform.*;
    import javax.xml.transform.dom.*;
    import javax.xml.transform.stream.*;

    private static final void writeDoc(Document doc, OutputStream out) throws IOException {
        try {
            Transformer t = TransformerFactory.newInstance().newTransformer();
            t.setOutputProperty(OutputKeys.DOCTYPE_SYSTEM, doc.getDoctype().getSystemId());
            t.transform(new DOMSource(doc), new StreamResult(out));
        } catch (TransformerException e) {
            throw new AssertionError(e); // Can't happen!
        }
    }

    Most of the time, there is a good chance somebody else has written the code before, but if nobody can understand the interface to that code, nobody's going to use it. The result is that most of the code we write is just a variation on a theme. Despite our best efforts, we've fallen a little short of our ideal, but the web brings us closer. If we want to send e-mails to our customers, we could write an e-mail-sending library. More likely, we'd use an existing one for our language. Even then, we probably wouldn't have niceties like A/B testing or DKIM signing. Alternatively, we could just fire some HTTP requests at MailChimp, and get a whole slew of features without getting anywhere near the code that implements them. The web is inherently language agnostic. So long as your language can send and receive text over HTTP, and probably parse some JSON, you're about as well equipped as anybody. Instead of building libraries for a specific language, we can build a service that almost every language can reuse. The text-based nature of HTTP also helps to limit the complexity of the API. As SOAP will attest, you can still make a horrible mess using HTTP, but at least it is an obvious horrible mess. Complex data structures are tedious to marshal to and from text, providing a strong incentive to keep things simple. By contrast, spotting the complexities in a class hierarchy is often not as easy. HTTP doesn't solve every problem. It probably isn't such a good idea to use it inside an inner loop that's executed thousands of times per second. What's more, the HTTP approach might introduce some new problems. We often need to add a thin shim to each application that we wish to communicate with over HTTP.
For instance, we might need to write a small plugin in PHP if we want to integrate WordPress into our system. Suddenly, instead of a system written in one language, we’re maintaining a system with several distinct languages and platforms. Even then, we should strive to avoid re-implementing the same old thing. As programmers, we consistently underestimate both the cost of building a system and the ongoing maintenance. If we allow ourselves to integrate existing applications, even if they’re in unfamiliar languages, we save ourselves those development and maintenance costs, as well as being able to pick the best solution for our problem. Thanks to the web, HTTP is often the easiest way to get there.
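
    To make the language-agnostic point concrete, here is a rough sketch in C# of the kind of integration described above: posting a JSON payload to a hypothetical mail service over HTTP. The endpoint URL and payload fields are invented for illustration, and a real provider such as MailChimp defines its own API shape:

    using System;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    class MailClient
    {
        private static readonly HttpClient Http = new HttpClient();

        // Posts a small JSON document to a (hypothetical) e-mail service.
        // Any language that can speak HTTP and build a string can do the same.
        public static async Task SendAsync(string to, string subject, string body)
        {
            // Naive JSON construction keeps the sketch dependency-free;
            // a real client would use a proper JSON serializer.
            string json = "{\"to\":\"" + to + "\",\"subject\":\"" + subject + "\",\"body\":\"" + body + "\"}";
            var content = new StringContent(json, Encoding.UTF8, "application/json");
            HttpResponseMessage response = await Http.PostAsync("https://mail.example.com/v1/messages", content);
            response.EnsureSuccessStatusCode();
        }
    }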

    Read the article

  • C++ OpenGL wireframe cube rendering blank

    - by caleb.breckon
    I'm just trying to draw a bunch of lines that make up a "cube". I can't for the life of me figure out why this is producing a black screen. The debugger does not break at any point. I'm sure it's a problem with my pointers, as I'm only decent at them in regular c++ and in OpenGL it gets even worse. const char* vertexSource = "#version 150\n" "in vec3 position;" "void main() {" " gl_Position = vec4(position, 1.0);" "}"; const char* fragmentSource = "#version 150\n" "out vec4 outColor;" "void main() {" " outColor = vec4(1.0, 1.0, 1.0, 1.0);" "}"; int main() { initializeGLFW(); // Initialize GLEW glewExperimental = GL_TRUE; glewInit(); // Create Vertex Array Object GLuint vao; glGenVertexArrays(1, &vao); glBindVertexArray(vao); // Create a Vertex Buffer Object and copy the vertex data to it GLuint vbo; glGenBuffers( 1, &vbo ); float vertices[] = { 1.0f, 1.0f, 1.0f, // Vertex 0 (X, Y, Z) -1.0f, 1.0f, 1.0f, // Vertex 1 (X, Y, Z) -1.0f, -1.0f, 1.0f, // Vertex 2 (X, Y, Z) 1.0f, -1.0f, 1.0f, // Vertex 3 (X, Y, Z) 1.0f, 1.0f, -1.0f, // Vertex 4 (X, Y, Z) -1.0f, 1.0f, -1.0f, // Vertex 5 (X, Y, Z) -1.0f, -1.0f, -1.0f, // Vertex 6 (X, Y, Z) 1.0f, -1.0f, -1.0f // Vertex 7 (X, Y, Z) }; GLuint indices[] = { 0, 1, 1, 2, 2, 3, 3, 0, 4, 5, 5, 6, 6, 7, 7, 4, 0, 4, 1, 5, 2, 6, 3, 7 }; glBindBuffer( GL_ARRAY_BUFFER, vbo ); glBufferData( GL_ARRAY_BUFFER, sizeof( vertices ), vertices, GL_STATIC_DRAW ); //glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, vbo); //glBufferData( GL_ELEMENT_ARRAY_BUFFER, sizeof( indices ), indices, GL_STATIC_DRAW ); // Create and compile the vertex shader GLuint vertexShader = glCreateShader( GL_VERTEX_SHADER ); glShaderSource( vertexShader, 1, &vertexSource, NULL ); glCompileShader( vertexShader ); // Create and compile the fragment shader GLuint fragmentShader = glCreateShader( GL_FRAGMENT_SHADER ); glShaderSource( fragmentShader, 1, &fragmentSource, NULL ); glCompileShader( fragmentShader ); // Link the vertex and fragment shader into a shader program GLuint shaderProgram = glCreateProgram(); glAttachShader( shaderProgram, vertexShader ); glAttachShader( shaderProgram, fragmentShader ); glBindFragDataLocation( shaderProgram, 0, "outColor" ); glLinkProgram (shaderProgram); glUseProgram( shaderProgram); // Specify the layout of the vertex data GLint posAttrib = glGetAttribLocation( shaderProgram, "position" ); glEnableVertexAttribArray( posAttrib ); glVertexAttribPointer( posAttrib, 3, GL_FLOAT, GL_FALSE, 0, 0 ); // Main loop while(glfwGetWindowParam(GLFW_OPENED)) { // Clear the screen to black glClearColor( 0.0f, 0.0f, 0.0f, 1.0f ); glClear( GL_COLOR_BUFFER_BIT ); // Draw lines from 2 vertices glDrawElements(GL_LINES, sizeof(indices), GL_UNSIGNED_INT, indices ); // Swap buffers glfwSwapBuffers(); } // Clean up glDeleteProgram( shaderProgram ); glDeleteShader( fragmentShader ); glDeleteShader( vertexShader ); //glDeleteBuffers( 1, &ebo ); glDeleteBuffers( 1, &vbo ); glDeleteVertexArrays( 1, &vao ); glfwTerminate(); exit( EXIT_SUCCESS ); }

    Read the article

  • A more elegant way of embedding a SOAP security header in Silverlight 4

    - by Your DisplayName here!
    The current situation with Silverlight is, that there is no support for the WCF federation binding. This means that all security token related interactions have to be done manually. Requesting the token from an STS is not really the bad part, sending it along with outgoing SOAP messages is what’s a little annoying. So far you had to wrap all calls on the channel in an OperationContextScope wrapping an IContextChannel. This “programming model” was a little disruptive (in addition to all the async stuff that you are forced to do). It seems that starting with SL4 there is more support for traditional WCF extensibility points – especially IEndpointBehavior, IClientMessageInspector. I never read somewhere that these are new features in SL4 – but I am pretty sure they did not exist in SL3. With the above mentioned interfaces at my disposal, I thought I have another go at embedding a security header – and yeah – I managed to make the code much prettier (and much less bizarre). Here’s the code for the behavior/inspector: public class IssuedTokenHeaderInspector : IClientMessageInspector {     RequestSecurityTokenResponse _rstr;       public IssuedTokenHeaderInspector(RequestSecurityTokenResponse rstr)     {         _rstr = rstr;     }       public void AfterReceiveReply(ref Message reply, object correlationState)     { }       public object BeforeSendRequest(ref Message request, IClientChannel channel)     {         request.Headers.Add(new IssuedTokenHeader(_rstr));                  return null;     } }   public class IssuedTokenHeaderBehavior : IEndpointBehavior {     RequestSecurityTokenResponse _rstr;       public IssuedTokenHeaderBehavior(RequestSecurityTokenResponse rstr)     {         if (rstr == null)         {             throw new ArgumentNullException();         }           _rstr = rstr;     }       public void ApplyClientBehavior(       ServiceEndpoint endpoint, ClientRuntime clientRuntime)     {         clientRuntime.MessageInspectors.Add(new IssuedTokenHeaderInspector(_rstr));     }       // rest omitted } This allows to set up a proxy with an issued token header and you don’t have to worry anymore with embedding the header manually with every call: var client = GetWSTrustClient();   var rst = new RequestSecurityToken(WSTrust13Constants.KeyTypes.Symmetric) {     AppliesTo = new EndpointAddress("https://rp/") };   client.IssueCompleted += (s, args) => {     _proxy = new StarterServiceContractClient();     _proxy.Endpoint.Behaviors.Add(new IssuedTokenHeaderBehavior(args.Result));   };   client.IssueAsync(rst); Since SL4 also support the IExtension<T> interface, you can also combine this with Nicholas Allen’s AutoHeaderExtension.

    Read the article

  • Handling Configuration Changes in Windows Azure Applications

    - by Your DisplayName here!
    While finalizing StarterSTS 1.5, I had a closer look at lifetime and configuration management in Windows Azure. (this is no new information – just some bits and pieces compiled at one single place – plus a bit of reality check) When dealing with lifetime management (and especially configuration changes), there are two mechanisms in Windows Azure – a RoleEntryPoint derived class and a couple of events on the RoleEnvironment class. You can find good documentation about RoleEntryPoint here. The RoleEnvironment class features two events that deal with configuration changes – Changing and Changed. Whenever a configuration change gets pushed out by the fabric controller (either changes in the settings section or the instance count of a role) the Changing event gets fired. The event handler receives an instance of the RoleEnvironmentChangingEventArgs type. This contains a collection of type RoleEnvironmentChange. This in turn is a base class for two other classes that detail the two types of possible configuration changes I mentioned above: RoleEnvironmentConfigurationSettingsChange (configuration settings) and RoleEnvironmentTopologyChange (instance count). The two respective classes contain information about which configuration setting and which role has been changed. Furthermore the Changing event can trigger a role recycle (aka reboot) by setting EventArgs.Cancel to true. So your typical job in the Changing event handler is to figure if your application can handle these configuration changes at runtime, or if you rather want a clean restart. Prior to the SDK 1.3 VS Templates – the following code was generated to reboot if any configuration settings have changed: private void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e) {     // If a configuration setting is changing     if (e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange))     {         // Set e.Cancel to true to restart this role instance         e.Cancel = true;     } } This is a little drastic as a default since most applications will work just fine with changed configuration – maybe that’s the reason this code has gone away in the 1.3 SDK templates (more). The Changed event gets fired after the configuration changes have been applied. Again the changes will get passed in just like in the Changing event. But from this point on RoleEnvironment.GetConfigurationSettingValue() will return the new values. You can still decide to recycle if some change was so drastic that you need a restart. You can use RoleEnvironment.RequestRecycle() for that (more). As a rule of thumb: When you always use GetConfigurationSettingValue to read from configuration (and there is no bigger state involved) – you typically don’t need to recycle. In the case of StarterSTS, I had to abstract away the physical configuration system and read the actual configuration (either from web.config or the Azure service configuration) at startup. I then cache the configuration settings in memory. This means I indeed need to take action when configuration changes – so in my case I simply clear the cache, and the new config values get read on the next access to my internal configuration object. No downtime – nice! Gotcha A very natural place to hook up the RoleEnvironment lifetime events is the RoleEntryPoint derived class. But with the move to the full IIS model in 1.3 – the RoleEntryPoint methods get executed in a different AppDomain (even in a different process) – see here.. 
You might not be able to call into your application code to e.g. clear a cache. Keep that in mind! In this case you need to handle these events from e.g. global.asax.
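
    As a rough illustration of the approach described above (recycle only when you have to, otherwise just invalidate the cached settings), the handlers might look like the sketch below. Wire Register() up wherever is appropriate for your role, for example Application_Start in global.asax for a full-IIS web role. The ConfigurationCache class is an assumed application-specific wrapper, not part of the Azure SDK:

    using System.Linq;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public static class ConfigChangeHandlers
    {
        // Call once at startup.
        public static void Register()
        {
            RoleEnvironment.Changing += OnChanging;
            RoleEnvironment.Changed += OnChanged;
        }

        private static void OnChanging(object sender, RoleEnvironmentChangingEventArgs e)
        {
            // Recycle only for topology changes; plain setting changes are handled at runtime.
            e.Cancel = e.Changes.Any(change => change is RoleEnvironmentTopologyChange);
        }

        private static void OnChanged(object sender, RoleEnvironmentChangedEventArgs e)
        {
            if (e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange))
            {
                // RoleEnvironment.GetConfigurationSettingValue now returns the new values,
                // so dropping the in-memory copy is enough - no downtime.
                ConfigurationCache.Clear();   // assumed application-specific cache
            }
        }
    }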

    Read the article

  • Relative cam movement and momentum on arbitrary surface

    - by user29244
    I have been working on a game for quite long, think sonic classic physics in 3D or tony hawk psx, with unity3D. However I'm stuck at the most fundamental aspect of movement. The requirement is that I need to move the character in mario 64 fashion (or sonic adventure) aka relative cam input: the camera's forward direction always point input forward the screen, left or right input point toward left or right of the screen. when input are resting, the camera direction is independent from the character direction and the camera can orbit the character when input are pressed the character rotate itself until his direction align with the direction the input is pointing at. It's super easy to do as long your movement are parallel to the global horizontal (or any world axis). However when you try to do this on arbitrary surface (think moving along complex curved surface) with the character sticking to the surface normal (basically moving on wall and ceiling freely), it seems harder. What I want is to achieve the same finesse of movement than in mario but on arbitrary angled surfaces. There is more problem (jumping and transitioning back to the real world alignment and then back on a surface while keeping momentum) but so far I didn't even take off the basics. So far I have accomplish moving along the curved surface and the relative cam input, but for some reason direction fail all the time (point number 3, the character align slowly to the input direction). Do you have an idea how to achieve that? Here is the code and some demo so far: The demo: https://dl.dropbox.com/u/24530447/flash%20build/litesonicengine/LiteSonicEngine5.html Camera code: using UnityEngine; using System.Collections; public class CameraDrive : MonoBehaviour { public GameObject targetObject; public Transform camPivot, camTarget, camRoot, relcamdirDebug; float rot = 0; //---------------------------------------------------------------------------------------------------------- void Start() { this.transform.position = targetObject.transform.position; this.transform.rotation = targetObject.transform.rotation; } void FixedUpdate() { //the pivot system camRoot.position = targetObject.transform.position; //input on pivot orientation rot = 0; float mouse_x = Input.GetAxisRaw( "camera_analog_X" ); // rot = rot + ( 0.1f * Time.deltaTime * mouse_x ); // wrapAngle( rot ); // //when the target object rotate, it rotate too, this should not happen UpdateOrientation(this.transform.forward,targetObject.transform.up); camRoot.transform.RotateAround(camRoot.transform.up,rot); //debug the relcam dir RelativeCamDirection() ; //this camera this.transform.position = camPivot.position; //set the camera to the pivot this.transform.LookAt( camTarget.position ); // } //---------------------------------------------------------------------------------------------------------- public float wrapAngle ( float Degree ) { while (Degree < 0.0f) { Degree = Degree + 360.0f; } while (Degree >= 360.0f) { Degree = Degree - 360.0f; } return Degree; } private void UpdateOrientation( Vector3 forward_vector, Vector3 ground_normal ) { Vector3 projected_forward_to_normal_surface = forward_vector - ( Vector3.Dot( forward_vector, ground_normal ) ) * ground_normal; camRoot.transform.rotation = Quaternion.LookRotation( projected_forward_to_normal_surface, ground_normal ); } float GetOffsetAngle( float targetAngle, float DestAngle ) { return ((targetAngle - DestAngle + 180)% 360) - 180; } 
//---------------------------------------------------------------------------------------------------------- void OnDrawGizmos() { Gizmos.DrawCube( camPivot.transform.position, new Vector3(1,1,1) ); Gizmos.DrawCube( camTarget.transform.position, new Vector3(1,5,1) ); Gizmos.DrawCube( camRoot.transform.position, new Vector3(1,1,1) ); } void OnGUI() { GUI.Label(new Rect(0,80,1000,20*10), "targetObject.transform.up : " + targetObject.transform.up.ToString()); GUI.Label(new Rect(0,100,1000,20*10), "target euler : " + targetObject.transform.eulerAngles.y.ToString()); GUI.Label(new Rect(0,100,1000,20*10), "rot : " + rot.ToString()); } //---------------------------------------------------------------------------------------------------------- void RelativeCamDirection() { float input_vertical_movement = Input.GetAxisRaw( "Vertical" ), input_horizontal_movement = Input.GetAxisRaw( "Horizontal" ); Vector3 relative_forward = Vector3.forward, relative_right = Vector3.right, relative_direction = ( relative_forward * input_vertical_movement ) + ( relative_right * input_horizontal_movement ) ; MovementController MC = targetObject.GetComponent<MovementController>(); MC.motion = relative_direction.normalized * MC.acceleration * Time.fixedDeltaTime; MC.motion = this.transform.TransformDirection( MC.motion ); //MC.transform.Rotate(Vector3.up, input_horizontal_movement * 10f * Time.fixedDeltaTime); } } Mouvement code: using UnityEngine; using System.Collections; public class MovementController : MonoBehaviour { public float deadZoneValue = 0.1f, angle, acceleration = 50.0f; public Vector3 motion ; //-------------------------------------------------------------------------------------------- void OnGUI() { GUILayout.Label( "transform.rotation : " + transform.rotation ); GUILayout.Label( "transform.position : " + transform.position ); GUILayout.Label( "angle : " + angle ); } void FixedUpdate () { Ray ground_check_ray = new Ray( gameObject.transform.position, -gameObject.transform.up ); RaycastHit raycast_result; Rigidbody rigid_body = gameObject.rigidbody; if ( Physics.Raycast( ground_check_ray, out raycast_result ) ) { Vector3 next_position; //UpdateOrientation( gameObject.transform.forward, raycast_result.normal ); UpdateOrientation( gameObject.transform.forward, raycast_result.normal ); next_position = GetNextPosition( raycast_result.point ); rigid_body.MovePosition( next_position ); } } //-------------------------------------------------------------------------------------------- private void UpdateOrientation( Vector3 forward_vector, Vector3 ground_normal ) { Vector3 projected_forward_to_normal_surface = forward_vector - ( Vector3.Dot( forward_vector, ground_normal ) ) * ground_normal; transform.rotation = Quaternion.LookRotation( projected_forward_to_normal_surface, ground_normal ); } private Vector3 GetNextPosition( Vector3 current_ground_position ) { Vector3 next_position; // //-------------------------------------------------------------------- // angle = 0; // Vector3 dir = this.transform.InverseTransformDirection(motion); // angle = Vector3.Angle(Vector3.forward, dir);// * 1f * Time.fixedDeltaTime; // // if(angle > 0) this.transform.Rotate(0,angle,0); // //-------------------------------------------------------------------- next_position = current_ground_position + gameObject.transform.up * 0.5f + motion ; return next_position; } } Some observation: I have the correct input, I have the correct translation in the camera direction ... 
but whenever I attempt to slowly lerp the direction of the character towards the direction of the input, all I get is a wild spin. I also discovered that strafing to the right (immediately at the beginning, without moving forward) hits a major singularity at the equator. I'm totally lost and crushed (I have already built a much more featured version which fails on the same aspect).

    Read the article

  • Doubts about several best practices for rest api + service layer

    - by TheBeefMightBeTough
    I'm going to be starting a project soon that exposes a restful api for business intelligence. It may not be limited to a restful api, so I plan to delegate requests to a service layer that then coordinates multiple domain objects (each of which have business logic local to the object). The api will likely have many calls as it is a long-term project. While thinking about the design, I recalled a few best practices. 1) Use command objects at the controller layer (I'm using Spring MVC). 2) Use DTOs at the service layer. 3) Validate in both the controller and service layer, though for different reasons. I have my doubts about these recommendations. 1) Using command objects adds a lot of extra single-purpose classes (potentially one per request). What exactly is the benefit? Annotation based validation can be done using this approach, sure. What if I have two requests that take the same parameters, but have different validation requirements? I would have to have two different classes with exactly the same members but different annotations? Bleh. 2) I have heard that using DTOs is preferable to parameters because it makes for more maintainable code down the road (say, e.g., requirements change and the service parameters need to be altered). I don't quite understand this. Shouldn't an api be more-or-less set in stone? I would understand that in the early phases of a project (or, especially, an entire company) the domain itself will not be well understood, and thus core domain objects may change along with the apis that manipulate these objects. At this point however the number of api methods should be small and their dependents few, so changes to the methods could easily be tolerated from a maintainability standpoint. In a large api with many methods and a substantial domain model, I would think having a DTO for potentially each domain object would become unwieldy. Am I misunderstanding something here? 3) I see validation in the controller and service layer as redundant in most cases. Why would I validate that parameters are not null and are in general well formed in the controller if the service is going to do exactly the same (and more). Couldn't I just do all the validation in the service and throw a runtime exception with a list of bad parameters then catch that in the controller to make the error messages more presentable? Better yet, couldn't I just make the error messages user-friendly in the service and let the exception trickle up to a global handler (ControllerAdvice in spring, for example)? Is there something wrong with either of these approaches? (I do see a use case for controller validation if the input does not map one-to-one with the service input, but since the controllers are for a rest api and not forms, the api parameters will probably map directly to service parameters.) I do also have a question about unchecked vs checked exceptions. Namely, I'm not really sure why I'd ever want to use a checked exception. Every time I have seen them used they just get wrapped into general exceptions (DomainException, SystemException, ApplicationException, w/e) to reduce the signature length of methods, or devs catch Exception rather than dealing with the App1Exception, App2Exception, Sys1Exception, Sys2Exception. I don't see how either of these practices is very useful. Why not just use unchecked exceptions always and catch the ones you actually do care about? You could just document what unchecked exceptions the method throws.

    Read the article

  • Utility to Script SQL Server Configuration

    - by Bill Graziano
    I wrote a small utility to script some key SQL Server configuration information. I had two goals for this utility: assist with disaster recovery preparation, and identify configuration changes. I've released the application as open source through CodePlex. You can download it from CodePlex at the Script SQL Server Configuration project page. The application is a .NET 2.0 console application that uses SMO. It writes its output to a directory that you specify.

    Disaster Planning: ScriptSqlConfig generates scripts for logins, jobs and linked servers. It writes the properties and configuration from the instance to text files. The scripts are designed so they can be run against a DR server in the case of a disaster. The properties and configuration will need to be manually compared. Each job is scripted to its own file. Each linked server is scripted to its own file. The linked servers don't include the password if you use a SQL Server account to connect to the linked server. You'll need to store those somewhere secure. All the logins are scripted to a single file. This file includes Windows logins, SQL Server logins and any server role membership. The SQL Server logins are scripted with the correct SID and hashed passwords. This means that when you create the login it will automatically match up to the users in the database and have the correct password. This is the only script that I programmatically generate rather than using SMO. The SQL Server configuration and properties are scripted to text files. These will need to be manually reviewed in the event of a disaster. Or you could DIFF them with the configuration on the new server.

    Configuration Changes: These scripts and files are all designed to be checked into a version control system. The scripts themselves don't include any date specific information. In my environments I run this every night and check in the changes. I call the application once for each server and script each server to its own directory. The process will delete any existing files before writing new ones. This solved the problem I had where the scripts for deleted jobs and linked servers would continue to show up. To see any changes I just need to query the version control system to show me any changes to the files.

    Database Scripting: Utilities that script database objects are plentiful. CodePlex has at least a dozen of them, including one I wrote years ago. The code is so easy to write it's hard not to include that functionality. This functionality wasn't high on my list because it's included in a database backup. Unless you specify the /nodb option, the utility will script out many user database objects. It will script one object per file. It will script tables, stored procedures, user-defined data types, views, triggers, table types and user-defined functions. I know there are more I need to add but haven't gotten around to it yet. If there's something you need, please log an issue and get it added. Since it scripts one object per file, these really aren't appropriate to recreate an empty database. They are really good for checking into source control every night and then seeing what changed. I know everyone tells me all their database objects are in source control, but a little extra insurance never hurts.

    Conclusion: I hope this utility will help a few of you out there. My goal is to have it script all server objects that aren't contained in user databases. This should help with configuration changes and especially disaster recovery.
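
    For readers who want a feel for what such a tool does internally, a minimal SMO sketch that scripts each SQL Agent job to its own file might look like the following. This is my own illustration of the technique, not code from the project, and the output path handling is deliberately naive:

    using System.IO;
    using System.Collections.Specialized;
    using Microsoft.SqlServer.Management.Smo;
    using Microsoft.SqlServer.Management.Smo.Agent;

    class JobScripter
    {
        // Scripts every SQL Agent job on the instance to its own .sql file.
        static void ScriptJobs(string instanceName, string outputDirectory)
        {
            Server server = new Server(instanceName);
            Directory.CreateDirectory(outputDirectory);

            foreach (Job job in server.JobServer.Jobs)
            {
                // Job.Script() returns the T-SQL batches that recreate the job.
                StringCollection statements = job.Script();
                using (StreamWriter writer = new StreamWriter(Path.Combine(outputDirectory, job.Name + ".sql")))
                {
                    foreach (string statement in statements)
                    {
                        writer.WriteLine(statement);
                        writer.WriteLine("GO");
                    }
                }
            }
        }
    }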

    Read the article

  • How to retrieve Sharepoint data from a Windows Forms Application.

    - by Michael M. Bangoy
    In this demo I'm going to show how to retrieve SharePoint data and display it on a Windows Forms application. 1. Open Visual Studio 2010 and create a new Project. 2. In the project template select Windows Forms Application. 3. In order to communicate with SharePoint from a Windows Forms application we need to add the two SharePoint client DLLs located in c:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\ISAPI. 4. Select Microsoft.Sharepoint.Client.dll and Microsoft.Sharepoint.Client.Runtime.dll. That's it, we're ready to write our code. Note: In this example I've added a few controls to the form: a Button, a TextBox, a Label and a DataGridView.

    using System;
    using System.Collections.Generic;
    using System.ComponentModel;
    using System.Data;
    using System.Data.Objects;
    using System.Drawing;
    using System.Linq;
    using System.Text;
    using System.Security;
    using System.Windows.Forms;
    using SP = Microsoft.SharePoint.Client;

    namespace ClientObjectModel
    {
        public partial class Form1 : Form
        {
            // declare string url of the Sharepoint site
            string _context = "theurlofyoursharepointsite";

            public Form1()
            {
                InitializeComponent();
            }

            private void Form1_Load(object sender, EventArgs e)
            {
            }

            private void getsitetitle()
            {
                SP.ClientContext context = new SP.ClientContext(_context);
                SP.Web _site = context.Web;
                context.Load(_site);
                context.ExecuteQuery();
                txttitle.Text = _site.Title;
                context.Dispose();
            }

            private void loadlist()
            {
                using (SP.ClientContext _clientcontext = new SP.ClientContext(_context))
                {
                    SP.Web _web = _clientcontext.Web;
                    SP.ListCollection _lists = _clientcontext.Web.Lists;
                    _clientcontext.Load(_lists);
                    _clientcontext.ExecuteQuery();

                    DataTable dt = new DataTable();
                    DataColumn column;
                    DataRow row;

                    column = new DataColumn();
                    column.DataType = Type.GetType("System.String");
                    column.ColumnName = "List Title";
                    dt.Columns.Add(column);

                    foreach (SP.List listitem in _lists)
                    {
                        row = dt.NewRow();
                        row["List Title"] = listitem.Title;
                        dt.Rows.Add(row);
                    }

                    dataGridView1.DataSource = dt;
                }
            }

            private void cmdload_Click(object sender, EventArgs e)
            {
                getsitetitle();
                loadlist();
            }
        }
    }

    That's it. Running the application and clicking the Load button will retrieve the title of the SharePoint site and display it in the TextBox, and it will also retrieve all of the SharePoint lists on that site and populate the DataGridView with their titles. Hope this helps. Thank you.

    Read the article

  • RemoveHandler Issues with Custom Events

    - by Jeff Certain
    This is a case of things being more complicated than I thought they should be. Since it took a while to figure this one out, I thought it was worth explaining and putting all of the pieces to the answer in one spot. Let me set the stage. Architecturally, I have the notion of generic producers and consumers. These put items onto, and remove items from, a queue. This provides a generic, thread-safe mechanism to load balance the creation and processing of work items in our application. Part of the IProducer(Of T) interface is:

    Public Interface IProducer(Of T)
        Event ItemProduced(ByVal sender As IProducer(Of T), ByVal item As T)
        Event ProductionComplete(ByVal sender As IProducer(Of T))
    End Interface

    Nothing sinister there, is there? In order to simplify our developers' lives, I wrapped the queue with some functionality to manage the producers and consumers. Since the developer can specify the number of producers and consumers that are spun up, the queue code manages adding event handlers as the producers and consumers are instantiated. Now, we've been having some memory leaks and, in order to eliminate the possibility that this was caused by references to event handlers, I wanted to remove them. This is where it got dicey. My first attempt looked like this:

    For Each producer As P In Producers
        RemoveHandler producer.ItemProduced, AddressOf ItemProducedHandler
        RemoveHandler producer.ProductionComplete, AddressOf ProductionCompleteHandler
        producer.Dispose()
    Next

    What you can't see in my posted code are the warnings this caused. The 'AddressOf' expression has no effect in this context because the method argument to 'AddressOf' requires a relaxed conversion to the delegate type of the event. Assign the 'AddressOf' expression to a variable, and use the variable to add or remove the method as the handler. Now, what on earth does that mean? Well, a quick Bing search uncovered a whole bunch of talk about delegates. The first solution I found just changed all parameters in the event handler to Object. Sorry, but no. I used generics precisely because I wanted type safety, not because I wanted to use Object. More searching. Eventually, I found this forum post, where Jeff Shan revealed a missing piece of the puzzle. The other revelation came from Lian_ZA in this post. However, these two only hinted at the solution. Trying some of what they suggested led to finally getting an invalid cast exception that revealed the existence of ItemProducedEventHandler. Hold on a minute! I didn't create that delegate. There's nothing even close to that name in my code… except the ItemProduced event in the interface. Could it be? Naaaaah. Hmmm…. Well, as it turns out, there is a delegate created by the compiler for each event. By explicitly creating a delegate that refers to the method in question, implicitly cast to the generated delegate type, I was able to remove the handlers:

    For Each producer As P In Producers
        Dim _itemProducedHandler As IProducer(Of T).ItemProducedEventHandler = AddressOf ItemProducedHandler
        RemoveHandler producer.ItemProduced, _itemProducedHandler

        Dim _productionCompleteHandler As IProducer(Of T).ProductionCompleteEventHandler = AddressOf ProductionCompleteHandler
        RemoveHandler producer.ProductionComplete, _productionCompleteHandler
        producer.Dispose()
    Next

    That's "all" it took to finally be able to remove the event handlers and maintain type-safe code.
Hopefully, this will save you the same challenges I had in trying to figure out how to fix this issue!

    Read the article

  • Caching factory design

    - by max
    I have a factory class XFactory that creates objects of class X. Instances of X are very large, so the main purpose of the factory is to cache them, as transparently to the client code as possible. Objects of class X are immutable, so the following code seems reasonable:

    # module xfactory.py
    import x

    class XFactory:
        _registry = {}

        def get_x(self, arg1, arg2, use_cache=True):
            if use_cache:
                hash_id = hash((arg1, arg2))
                if hash_id in self._registry:
                    return self._registry[hash_id]
            obj = x.X(arg1, arg2)
            if use_cache:
                self._registry[hash_id] = obj
            return obj

    # module x.py
    class X:
        # ...

    Is it a good pattern? (I know it's not the actual Factory Pattern.) Is there anything I should change? Now, I find that sometimes I want to cache X objects to disk. I'll use pickle for that purpose, and store as values in the _registry the filenames of the pickled objects instead of references to the objects. Of course, _registry itself would have to be stored persistently (perhaps in a pickle file of its own, in a text file, in a database, or simply by giving pickle files the filenames that contain hash_id). Except now the validity of the cached object depends not only on the parameters passed to get_x(), but also on the version of the code that created these objects. Strictly speaking, even a memory-cached object could become invalid if someone modifies x.py or any of its dependencies, and reloads it while the program is running. So far I have ignored this danger, since it seems unlikely for my application. But I certainly cannot ignore it when my objects are cached to persistent storage. What can I do? I suppose I could make the hash_id more robust by calculating the hash of a tuple that contains arguments arg1 and arg2, as well as the filename and last modified date for x.py and every module and data file that it (recursively) depends on. To help delete cache files that won't ever be useful again, I'd add to the _registry the unhashed representation of the modified dates for each record. But even this solution isn't 100% safe, since theoretically someone might load a module dynamically, and I wouldn't know about it from statically analyzing the source code. If I go all out and assume every file in the project is a dependency, the mechanism will still break if some module grabs data from an external website, etc. In addition, the frequency of changes in x.py and its dependencies is quite high, leading to heavy cache invalidation. Thus, I figured I might as well give up some safety, and only invalidate the cache when there is an obvious mismatch. This means that class X would have a class-level cache validation identifier that should be changed whenever the developer believes a change happened that should invalidate the cache. (With multiple developers, a separate invalidation identifier is required for each.) This identifier is hashed along with arg1 and arg2 and becomes part of the hash keys stored in _registry. Since developers may forget to update the validation identifier or not realize that they invalidated existing cache, it would seem better to add another validation mechanism: class X can have a method that returns all the known "traits" of X. For instance, if X is a table, I might add the names of all the columns. The hash calculation will include the traits as well. I can write this code, but I am afraid that I'm missing something important; and I'm also wondering if perhaps there's a framework or package that can do all of this stuff already. Ideally, I'd like to combine in-memory and disk-based caching.

    Read the article

  • Would this be a good web application architecture?

    - by Gustav Bertram
    My problem

    Our MVC-based framework does not allow us to cache only part of our output. Ideally we want to cache static and semi-static bits, and run dynamic bits. In addition, we need to consider data caching that reacts to database changes.

    My idea

    The concept I came up with was to represent a page as a tree of XML fragment objects. (I say XML, but I mean XHTML.) Some of the fragments are dynamic, and can pull their data directly from models or other sources, but most of the fragments are static scaffolding. If a subtree of fragments is completely static, then I imagine that they could unfold into pure XML that would then be cached as the text representation of their parent element. This process would ideally continue until we are left with a root element that contains all of the static XML, plus a couple of dynamic XML fragments that are resolved and attached to the relevant nodes of the XML tree just before the page is displayed.

    In addition to separating content into dynamic and static fragments, some fragments could be dynamic and cached. A simple expiry time which propagates up through the XML fragment tree would indicate that a specific fragment should periodically be refreshed. A newspaper section or front page does not need to be updated each second; minutes or sometimes even longer is sufficient. Other fragments would be dynamic and uncached. Typically too many articles are viewed for them all to be cached - the cache would overflow. Some individual articles may be cached if they are extremely popular.

    Functional notes

    The folding mechanism could be made smart enough to judge when it would be more profitable to fold a dynamic cached fragment and propagate the expiry date to the parent fragment, or to keep it separate and simply attach it to the XML tree when resolving the page. If some dynamic cached fragments are associated with database objects through mechanisms like a globally unique content id, then changes to the database could trigger changes to the output cache. If fragments store the identifiers of parent fragments, then they could trigger a refolding process that would then include the updated data.

    A set of pure XML with an ordered array of fragment objects (each storing the identifying information of the node to which it should be attached) can be resolved in a fairly simple way by walking the XML tree and merging in the data from the fragments. Because it is not necessary to parse and construct the entire tree in memory before attaching nodes, processing should be fairly fast. The identifiers of each fragment would be a combination of relevant identity data and the type of fragment object. Cached parent fragments would contain references to these identifiers, in order to either pull them from the fragment cache or run their code. The controller's responsibility is reduced to making changes to the database and telling the root XML fragment object to render itself.

    The Question

    My question has two parts: Is this a good design? Are there any obvious flaws I'm missing? Has somebody else thought of this before? References? Is there an existing alternative that I should consider? A cool templating engine maybe?
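
    A rough Python sketch of the fragment-tree idea, just to make the fold-and-cache mechanics concrete. The class and method names are invented for illustration, and expiry propagation to parent fragments is left out for brevity: a fragment keeps its rendered text if it is purely static or carries its own ttl, and anything else is re-rendered on every request.

    import time

    class Fragment:
        """A node in the page tree: static XHTML scaffolding or a dynamic piece."""

        def __init__(self, render_fn=None, static_xml="", children=None, ttl=None):
            self.render_fn = render_fn      # callable returning XHTML, or None if static
            self.static_xml = static_xml    # may contain a {children} placeholder
            self.children = children or []
            self.ttl = ttl                  # seconds the rendered output stays valid
            self._cached = None
            self._expires = 0.0

        @property
        def is_static(self):
            return self.render_fn is None and all(c.is_static for c in self.children)

        def render(self):
            now = time.time()
            if self._cached is not None and now < self._expires:
                return self._cached
            body = self.render_fn() if self.render_fn else self.static_xml
            xml = body.format(children="".join(c.render() for c in self.children))
            if self.is_static or self.ttl is not None:
                # "Fold": keep the resolved text, either forever or until the ttl runs out.
                self._cached = xml
                self._expires = now + (self.ttl if self.ttl is not None else float("inf"))
            return xml

    # Usage sketch: a static page shell around an article list that is
    # dynamic but may be cached for 60 seconds.
    def article_list():
        return "<ul><li>latest articles pulled from the model</li></ul>"

    page = Fragment(
        static_xml="<html><body><h1>Front page</h1>{children}</body></html>",
        children=[Fragment(render_fn=article_list, ttl=60)],
    )
    print(page.render())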

    Read the article

  • Documentation and Test Assertions in Databases

    - by Phil Factor
    When I first worked with Sybase/SQL Server, we thought our databases were impressively large, but they were, by today's standards, pathetically small. We had one script to build the whole database. Every script I ever read was richly annotated; it was more like reading a document. Every table had a comment block, and every line would be commented too. At the end of each routine (e.g. procedure) was a quick integration test, or series of test assertions, to check that nothing in the build was broken. We simply ran the build script, stored in the Version Control System, and it pulled everything together in a logical sequence that not only created the database objects but pulled in the static data. This worked fine at the scale we had. The advantage was that one could, by reading the source code, reach a rapid understanding of how the database worked and how one could interface with it. The problem was that it was a system that meant only one developer at a time could work on the database. It was very easy for a developer to accidentally execute the entire build script rather than the selected section on which he or she was working, thereby cleansing the database of everyone else's work-in-progress and data.

    It soon became the fashion to work at the object level, so that programmers could check out individual views, tables, functions, constraints and rules and work on them independently. It was then that I noticed the trend to generate the source for the VCS retrospectively from the development server. Tables were worst affected. You can, of course, add or delete a table's columns and constraints retrospectively, which means that the existing source no longer represents the current object. If, after your development work, you generate the source from the live table, then you get no block or line comments, and the source script is sprinkled with silly square brackets and other confetti, thereby rendering it visually indigestible. Routines, too, were affected. In our system, every routine had a directly attached string of unit tests. A retro-generated routine has no unit tests or test assertions. Yes, one can still commit the test code to the VCS, but it's a separate module, and teams end up running the whole suite of tests for every individual change, rather than just the tests for that routine, which doesn't scale for database testing.

    With extended properties, one can get the best of both worlds, and even use them to put blame, praise or annotations into your VCS. It requires a lot of work, though, particularly the script to generate the table. The problem is that there are no conventional names beyond 'MS_Description' for the special use of extended properties. This makes it difficult to do splendid things such as ensuring the integrity of the build by running a suite of tests that are actually stored in extended properties within the database, and therefore in the VCS. We have lost the readability of database source code over the years, and largely jettisoned the use of test assertions as part of the database build. This is not unexpected, in view of the increasing complexity of the structure of databases and the number of programmers working on them. There must, surely, be a way of getting them back, but I sometimes wonder if I'm one of very few who miss them.

    Read the article

  • Access Master Page Controls II

    - by Bunch
    Here is another way to access master page controls. This way has a bit less coding than my previous post on the subject. The scenario would be that you have a master page with a few navigation buttons at the top for users to navigate the app. After a button is clicked the corresponding aspx page would load in the ContentPlaceHolder. To make it easier for the users to see what page they are on, I wanted the clicked navigation button to change color. This is a quick visual cue for the user and is useful when inevitably they are interrupted with something else and cannot get back to what they were doing for a little while. Anyway, the code is something like this.

    Master page:

    <body>
        <form id="form1" runat="server">
        <div id="header">
        <asp:Panel ID="Panel1" runat="server" CssClass="panelHeader" Width="100%">
           <center>
               <label style="font-size: large; color: White;">Test Application</label>
           </center>
          <asp:Button ID="btnPage1" runat="server" Text="Page1" PostBackUrl="~/Page1.aspx" CssClass="navButton"/>
          <asp:Button ID="btnPage2" runat="server" Text="Page2" PostBackUrl="~/Page2.aspx" CssClass="navButton"/>
          <br />
        </asp:Panel>
        <br />
        </div>
        <div>
            <asp:scriptmanager ID="Scriptmanager1" runat="server"></asp:scriptmanager>
            <asp:ContentPlaceHolder id="ContentPlaceHolder1" runat="server">
            </asp:ContentPlaceHolder>
        </div>
        </form>
    </body>

    Page 1:

    VB

    Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
        Dim clickedButton As Button = Master.FindControl("btnPage1")
        clickedButton.CssClass = "navButtonClicked"
    End Sub

    CSharp

    protected void Page_Load(object sender, EventArgs e)
    {
        Button clickedButton;
        clickedButton = (Button)Master.FindControl("btnPage1");
        clickedButton.CssClass = "navButtonClicked";
    }

    CSS:

    .navButton {
        background-color: White;
        border: 1px #4e667d solid;
        color: #2275a7;
        display: inline;
        line-height: 1.35em;
        text-decoration: none;
        white-space: nowrap;
        width: 100px;
        text-align: center;
        margin-bottom: 10px;
        margin-left: 5px;
        height: 30px;
    }
    .navButtonClicked {
        background-color:#FFFF86;
        border: 1px #4e667d solid;
        color: #2275a7;
        display: inline;
        line-height: 1.35em;
        text-decoration: none;
        white-space: nowrap;
        width: 100px;
        text-align: center;
        margin-bottom: 10px;
        margin-left: 5px;
        height: 30px;
    }

    The idea is pretty simple: use FindControl for the master page in the page load of your aspx page. In the example I changed the CssClass for the aspx page's corresponding button to navButtonClicked, which has a different background-color and makes the clicked button stand out. Technorati Tags: ASP.Net,CSS,CSharp,VB.Net

    Read the article

  • Why do my pyramids fade black and then back to colour again

    - by geminiCoder
    I have the following vertices and norms:

    GLfloat verts[36] = {
        -0.5, 0, 0.5,    0, 0, -0.5,    0.5, 0, 0.5,    // base
         0, 0, -0.5,     0.5, 0, 0.5,   0, 1, 0,        // side 1
        -0.5, 0, 0.5,    0, 0, -0.5,    0, 1, 0,        // side 2
         0.5, 0, 0.5,   -0.5, 0, 0.5,   0, 1, 0         // side 3
    };

    GLfloat norms[36] = {
         0, -1, 0,       0, -1, 0,       0, -1, 0,      // base
        -1, 0.25, 0.5,  -1, 0.25, 0.5,  -1, 0.25, 0.5,  // side 1
         1, 0.25, -0.5,  1, 0.25, -0.5,  1, 0.25, -0.5, // side 2
         0, -0.5, -1,    0, -0.5, -1,    0, -0.5, -1    // side 3
    };

    I am writing my first OpenGL game, but I need to know for sure whether my normals are correct, as the colours aren't rendering correctly: my pyramids are coloured, then fade to black every half rotation, then back again. My app so far is based on the boilerplate code provided by Apple. Here's my modified setUp method:

    [EAGLContext setCurrentContext:self.context];
    [self loadShaders];

    self.effect = [[GLKBaseEffect alloc] init];
    self.effect.light0.enabled = GL_TRUE;
    self.effect.light0.diffuseColor = GLKVector4Make(1.0f, 0.4f, 0.4f, 1.0f);

    glEnable(GL_DEPTH_TEST);

    glGenVertexArraysOES(1, &_vertexArray); //create vertex array
    glBindVertexArrayOES(_vertexArray);

    glGenBuffers(1, &_vertexBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts) + sizeof(norms), NULL, GL_STATIC_DRAW); //create vertex buffer big enough for both verts and norms and pass NULL as data..

    uint8_t *ptr = (uint8_t *)glMapBufferOES(GL_ARRAY_BUFFER, GL_WRITE_ONLY_OES); //map buffer to pass data to it
    memcpy(ptr, verts, sizeof(verts)); //copy verts
    memcpy(ptr+sizeof(verts), norms, sizeof(norms)); //copy norms to position after verts
    glUnmapBufferOES(GL_ARRAY_BUFFER);

    glEnableVertexAttribArray(GLKVertexAttribPosition);
    glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0)); //tell GL where verts are in buffer
    glEnableVertexAttribArray(GLKVertexAttribNormal);
    glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(sizeof(verts))); //tell GL where norms are in buffer

    glBindVertexArrayOES(0);

    And the update method:

    - (void)update
    {
        float aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height);
        GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 0.1f, 100.0f);
        self.effect.transform.projectionMatrix = projectionMatrix;

        GLKMatrix4 baseModelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -4.0f);
        baseModelViewMatrix = GLKMatrix4Rotate(baseModelViewMatrix, _rotation, 0.0f, 1.0f, 0.0f);

        // Compute the model view matrix for the object rendered with GLKit
        GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -1.5f);
        modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, _rotation, 1.0f, 1.0f, 1.0f);
        modelViewMatrix = GLKMatrix4Multiply(baseModelViewMatrix, modelViewMatrix);
        self.effect.transform.modelviewMatrix = modelViewMatrix;

        // Compute the model view matrix for the object rendered with ES2
        modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, 1.5f);
        modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, _rotation, 1.0f, 1.0f, 1.0f);
        modelViewMatrix = GLKMatrix4Multiply(baseModelViewMatrix, modelViewMatrix);

        _normalMatrix = GLKMatrix3InvertAndTranspose(GLKMatrix4GetMatrix3(modelViewMatrix), NULL);
        _modelViewProjectionMatrix = GLKMatrix4Multiply(projectionMatrix, modelViewMatrix);

        _rotation += self.timeSinceLastUpdate * 0.5f;
    }

    But provided I understand this correctly, one pyramid is using the GLKit base effect shaders and the other the shaders included in the project. So for both of them to show the same error, I thought it would be the norms?
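
    One quick way to sanity-check the norms array is to regenerate it from the vertex data: for flat shading, each face should use the normalized cross product of two of its edge vectors, repeated once per vertex, and several of the values in norms above are not unit length. The following is a small offline helper (a hypothetical Python script, not part of the GLKit project) that bakes outward-facing unit normals for this pyramid so the printed values can be pasted back into norms[]:

    # Hypothetical helper script: recompute per-face normals for the pyramid and
    # print them in a form that can be pasted back into the norms[] array.

    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    def normalize(v):
        length = (v[0] ** 2 + v[1] ** 2 + v[2] ** 2) ** 0.5
        return tuple(c / length for c in v)

    verts = [
        (-0.5, 0, 0.5), (0, 0, -0.5), (0.5, 0, 0.5),   # base
        (0, 0, -0.5), (0.5, 0, 0.5), (0, 1, 0),        # side 1
        (-0.5, 0, 0.5), (0, 0, -0.5), (0, 1, 0),       # side 2
        (0.5, 0, 0.5), (-0.5, 0, 0.5), (0, 1, 0),      # side 3
    ]

    # Centroid of the solid, used to force every face normal to point outward,
    # which sidesteps any inconsistency in triangle winding.
    centroid = tuple(sum(v[i] for v in verts) / len(verts) for i in range(3))

    norms = []
    for i in range(0, len(verts), 3):
        v0, v1, v2 = verts[i], verts[i + 1], verts[i + 2]
        n = normalize(cross(sub(v1, v0), sub(v2, v0)))
        face_center = tuple((v0[k] + v1[k] + v2[k]) / 3.0 for k in range(3))
        outward = sub(face_center, centroid)
        if n[0] * outward[0] + n[1] * outward[1] + n[2] * outward[2] < 0:
            n = tuple(-c for c in n)       # flip normals that point into the solid
        norms.extend([n, n, n])            # flat shading: same normal for all 3 vertices

    for n in norms:
        print("%.4f, %.4f, %.4f," % n)

    If the pyramids still swing to black with regenerated normals, the next thing to check is where the light ends up relative to the modelview transforms, since a face that genuinely turns away from the light will legitimately go dark.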

    Read the article

  • Introduction to WebCenter Personalization: “The Conductor”

    - by Steve Pepper
    There are some new faces in the town of WebCenter with the latest 11g PS3 release.  A new component has introduced itself as "Oracle WebCenter Personalization", a.k.a. WCP, to simplify delivery of a personalized experience and content to end users.  This posting reviews one of the primary components within WCP: "The Conductor".

    The Conductor: This ain't just an ordinary cloud...

    One of the founding principles behind WebCenter Personalization was to provide an open client-side API that remains independent of the technology invoking it, in addition to independence from the architecture running it.  The Conductor delivers this, and much, much more. The Conductor is the engine behind WebCenter Personalization that allows flow-based documents, called "Scenarios", to be managed and executed on the server side through a well-published, RESTful API. The Conductor also supports an extensible model for custom provider integration that can be easily invoked within a Scenario to promote seamless integration with existing business assets.

    Introducing the Scenario Conductor

    Scenarios are declarative documents, authored offline using the custom Personalization JDeveloper bundle included with WebCenter.  A Scenario contains one (or more) statements that can:
    - Create variables that are scoped to the current execution context
    - Iterate over collections, or loop until a specific condition is met
    - Execute one or more statements when a condition is met
    - Invoke other scenarios that exist within the same namespace
    - Invoke a data provider that integrates with custom applications

    Once a variable is assigned within the Scenario's execution context, it can be referenced anywhere within the same Scenario using the common Expression Language syntax used in J2EE web containers. Scenarios are then published and tested to the Integrated WebLogic Server domain, or published remotely to other domains running WebCenter Personalization.

    Various Client-side Models

    The Conductor server API is built upon RESTful services that support a wide variety of clients able to communicate over HTTP.  The Conductor supports the following client-side models:
    - REST: Popular browser-based languages can be used to manage and execute Conductor Scenarios.  There are other public methods to retrieve configured provider metadata that can be used by custom applications. The Conductor currently supports XML and JSON for its API syntax.
    - Java: WebCenter Personalization delivers a robust and lightweight Java client with the popular Jersey framework as its foundation.  It has never been easier to write a remote Java client to manage remote RESTful services.
    - Expression Language (EL): Allows the results of Scenario execution to control your user interface or embed personalized content, using the session-scoped managed bean.  The EL client can also be used in straight JSP pages with minimal configuration.

    Extensible Provider Framework

    The Conductor supports a pluggable provider framework for integrating custom code with Scenario execution.  There are two types of providers supported by the Conductor:
    - Function Provider: Function Providers are simple annotated Java classes with static methods that are meant to be served as utilities.  Some common uses would include object creation or instantiation, data transformation, and the like.  Function Providers can be invoked using the common EL syntax from variable assignments, conditions, and loops. For example: ${myUtilityClass:doStuff(arg1,arg2)}. If you are familiar with EL Functions, Function Providers are based on the same concept.
    - Data Provider: Like Function Providers, Data Providers are annotated Java classes, but they must adhere to a much stricter object model.  Data Providers have access to a wealth of Conductor services, such as a namespace-scoped configuration API that can be managed by Oracle Enterprise Manager, the Scenario execution context for expression resolution, and more.  Oracle ships with three out-of-the-box data providers that support integration with standardized content servers (CMIS), federated profile properties through the Properties Service, and WebCenter Activity Graph.

    Useful References

    If you are looking to immediately get started writing your own application using WebCenter Personalization Services, you will find the following references helpful in getting you on your way:
    - Personalizing WebCenter Applications
    - Authoring Personalized Scenarios in JDeveloper
    - Using Personalization APIs Externally
    - Implementing and Calling Function Providers
    - Implementing and Calling Data Providers

    Read the article

  • Parameterized StreamInsight Queries

    - by Roman Schindlauer
    The changes in our APIs enable a set of scenarios that were either not possible before or could only be achieved through workarounds. One such use case that people ask about frequently is the ability to parameterize a query and instantiate it with different values instead of re-deploying the entire statement. I’ll demonstrate how to do this in StreamInsight 2.1 and combine it with a method of using subjects for dynamic query composition in a mini-series of (at least) two blog articles. Let’s start with something really simple: I want to deploy a windowed aggregate to a StreamInsight server, and later use it with different window sizes. The LINQ statement for such an aggregate is very straightforward and familiar: var result = from win in stream.TumblingWindow(TimeSpan.FromSeconds(5))               select win.Avg(e => e.Value); Obviously, we had to use an existing input stream object as well as a concrete TimeSpan value. If we want to be able to re-use this construct, we can define it as a IQStreamable: var avg = myApp     .DefineStreamable((IQStreamable<SourcePayload> s, TimeSpan w) =>         from win in s.TumblingWindow(w)         select win.Avg(e => e.Value)); The DefineStreamable API lets us define a function, in our case from a IQStreamable (the input stream) and a TimeSpan (the window length) to an IQStreamable (the result). We can then use it like a function, with the input stream and the window length as parameters: var result = avg(stream, TimeSpan.FromSeconds(5)); Nice, but you might ask: what does this save me, except from writing my own extension method? Well, in addition to defining the IQStreamable function, you can actually deploy it to the server, to make it re-usable by another process! When we deploy an artifact in V2.1, we give it a name: var avg = myApp     .DefineStreamable((IQStreamable<SourcePayload> s, TimeSpan w) =>         from win in s.TumblingWindow(w)         select win.Avg(e => e.Value))     .Deploy("AverageQuery"); When connected to the same server, we can now use that name to retrieve the IQStreamable and use it with our own parameters: var averageQuery = myApp     .GetStreamable<IQStreamable<SourcePayload>, TimeSpan, double>("AverageQuery"); var result = averageQuery(stream, TimeSpan.FromSeconds(5)); Convenient, isn’t it? Keep in mind that, even though the function “AverageQuery” is deployed to the server, its logic will still be instantiated into each process when the process is created. The advantage here is being able to deploy that function, so another client who wants to use it doesn’t need to ask the author for the code or assembly, but just needs to know the name of deployed entity. A few words on the function signature of GetStreamable: the last type parameter (here: double) is the payload type of the result, not the actual result stream’s type itself. The returned object is a function from IQStreamable<SourcePayload> and TimeSpan to IQStreamable<double>. In the next article we will integrate this usage of IQStreamables with Subjects in StreamInsight, so stay tuned! Regards, The StreamInsight Team

    Read the article

  • Windows Phone 7 and WS-Trust

    - by Your DisplayName here!
    A question that I often hear these days is: “Can I connect a Windows Phone 7 device to my existing enterprise services?”. Well – since most of my services are typically issued token based, this requires support for WS-Trust and WS-Security on the client. Let’s see what’s necessary to write a WP7 client for this scenario. First I converted the Silverlight library that comes with the Identity Training Kit to WP7. Some things are not supported in WP7 WCF (like message inspectors and some client runtime hooks) – but besides that this was a simple copy+paste job. Very nice! Next I used the WSTrustClient to request tokens from my STS: private WSTrustClient GetWSTrustClient() {     var client = new WSTrustClient(         new WSTrustBindingUsernameMixed(),         new EndpointAddress("https://identity.thinktecture.com/…/issue.svc/mixed/username"),         new UsernameCredentials(_txtUserName.Text, _txtPassword.Password));     return client; } private void _btnLogin_Click(object sender, RoutedEventArgs e) {     _client = GetWSTrustClient();       var rst = new RequestSecurityToken(WSTrust13Constants.KeyTypes.Bearer)     {         AppliesTo = new EndpointAddress("https://identity.thinktecture.com/rp/")     };       _client.IssueCompleted += client_IssueCompleted;     _client.IssueAsync(rst); } I then used the returned RSTR to talk to the WCF service. Due to a bug in the combination of the Silverlight library and the WP7 runtime – symmetric key tokens seem to have issues currently. Bearer tokens work fine. So I created the following binding for the WCF endpoint specifically for WP7. <customBinding>   <binding name="mixedNoSessionBearerBinary">     <security authenticationMode="IssuedTokenOverTransport"               messageSecurityVersion="WSSecurity11 WSTrust13 WSSecureConversation13 WSSecurityPolicy12 BasicSecurityProfile10">       <issuedTokenParameters keyType="BearerKey" />     </security>     <binaryMessageEncoding />     <httpsTransport/>   </binding> </customBinding> The binary encoding is not necessary, but will speed things up a little for mobile devices. I then call the service with the following code: private void _btnCallService_Click(object sender, RoutedEventArgs e) {     var binding = new CustomBinding(         new BinaryMessageEncodingBindingElement(),         new HttpsTransportBindingElement());       _proxy = new StarterServiceContractClient(         binding,         new EndpointAddress("…"));     using (var scope = new OperationContextScope(_proxy.InnerChannel))     {         OperationContext.Current.OutgoingMessageHeaders.Add(new IssuedTokenHeader(Globals.RSTR));         _proxy.GetClaimsAsync();     } } works. download

    Read the article

  • Using HTML5 Today part 3 – Using Polyfills

    - by Steve Albers
    Shims help when adding semantic tags to older IE browsers, but there is a huge range of other new HTML5 features that have varying support across browsers.  Polyfills are JavaScript code and/or browser plug-ins that can provide older or less featured browsers with API support.  The best polyfills will detect whether the current browser has native support, and only add the functionality if necessary.  The Douglas Crockford JSON2.js library is an example of this approach: if the browser already supports the JSON object, nothing changes.  If JSON is not available, the library adds a JSON property to the global object.

    This approach provides some big benefits:
    - It lets you add great new HTML5 features to your web sites sooner.
    - It lets the developer focus on writing to the up-and-coming standard rather than proprietary APIs.
    - Where most one-off legacy code fixes tend to break down over time, well done polyfills will stop executing over time (as customer browsers natively support the feature), meaning polyfill code may not need to be tested against new browsers, since they will execute the native methods instead.

    You should also remember that polyfills represent an entirely separate code path (and sometimes plug-in) that requires testing for support.  Also, polyfills tend to run on older browsers, which often have slower JavaScript performance.  As a result you might find that performance on older browsers is not comparable.

    When looking for polyfills you can start by checking the Modernizr GitHub wiki or the HTML5 Please site.

    For an example of a polyfill, consider a page that writes a few geometric shapes on a <canvas>:

    <script src="jquery-1.7.1.min.js"></script>
    <script>
        $(document).ready(function () {
            drawCanvas();
        });

        function drawCanvas() {
            var context = $("canvas")[0].getContext('2d');

            // background
            context.fillStyle = "#8B0000";
            context.fillRect(5, 5, 300, 100);

            // empty box
            context.strokeStyle = "#B0C4DE";
            context.lineWidth = 4;
            context.strokeRect(20, 15, 80, 80);

            // circle
            context.arc(160, 55, 40, 0, Math.PI * 2, false);
            context.fillStyle = "#4B0082";
            context.fill();
        }
    </script>

    The result is a simple static canvas with a box & a circle.

    …to enable this functionality on a pre-canvas browser we can find a polyfill.  A check on html5please.com references FlashCanvas.  Pull down the zip and extract the files (flashcanvas.js, flash10canvas.swf, etc.) to a directory on your site.  Then, based on the documentation, you need to add a single line to your original HTML file:

    <!--[if lt IE 9]><script src="flashcanvas.js"></script><![endif]-->

    …and you have canvas functionality!  The IE conditional comments ensure that the library is only loaded in browsers where it is useful, improving page load & processing time.

    Like all polyfills, you should test to verify the functionality matches your expectations across the browsers you need to support.  For instance, the FlashCanvas home page advertises 70% support of HTML5 Canvas spec tests.

    Read the article

  • Data breakpoints to find points where data gets broken

    - by raccoon_tim
    When working with a large code base, finding the reason for a bizarre bug can often be like finding a needle in a haystack. Working out why an object gets corrupted for no apparent reason can be quite daunting, especially when it seems to happen randomly and totally out of context.

    Scenario

    Take the following scenario as an example. You have defined a class that contains an array of characters 256 characters long. You now implement a method for filling this buffer with a string passed as an argument; at this point the method can safely assume the buffer is 256 characters long. At some point you notice that you require another character buffer, and you add it after the previous one in the class definition. You now figure that you don't need the 256 characters that the first member can hold, and you shorten it to 128 to conserve space. At this point you should start thinking that you also have to modify the method defined above to safeguard against buffer overflow. It so happens, however, that in this not so perfect world this does not cross your mind.

    Buffer overflow is one of the most frequent sources of errors in a piece of software, and often one of the most difficult to detect, especially when data is read from an outside source. Many mass copy functions provided by the C run-time have versions with boundary checking (defined with the _s suffix), but they cannot guard against hard-coded buffer lengths that at some point get changed.

    Finding the bug

    Getting back to the scenario, you're now wondering why the second string gets modified with data that makes no sense at all. Luckily, Visual Studio provides you with a tool to help you find just these kinds of errors. It's called data breakpoints. To add a data breakpoint, you first run your application in debug mode or attach to it in the usual way, and then go to Debug, select New Breakpoint and New Data Breakpoint. In the popup that opens, you can type in the memory address and the number of bytes you wish to monitor. You can also use an expression here, but it's often difficult to come up with an expression for data in an object allocated on the heap when not in the context of a certain stack frame.

    There are a couple of things to note about data breakpoints, however. First of all, Visual Studio supports a maximum of four data breakpoints at any given time. Another important thing to notice is that some C run-time functions modify memory in kernel space, which does not trigger the data breakpoint. For instance, calling ReadFile on a buffer that is monitored by a data breakpoint will not trigger the breakpoint.

    The application will now break as soon as the memory at the address you specified is modified. Often you might immediately spot the issue, but at the very least this feature can point you in the right direction in the search for the real reason why the memory gets inadvertently modified.

    Conclusions

    Data breakpoints are a great feature, especially when doing a lot of low-level operations where multiple locations modify the same data. With the exception of some special cases, like kernel memory modification, you can use them whenever you need to check when memory at a certain location gets changed, on purpose or inadvertently.
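
    To make the scenario concrete, here is a minimal C++ sketch of the kind of silent corruption a data breakpoint is good at catching; the class and member names are invented for illustration. SetName still assumes the old 256-byte buffer, so a long argument tramples the member declared after it, and a data breakpoint set on the address of m_title fires on the exact memcpy that does the damage:

    #include <cstring>
    #include <iostream>
    #include <string>

    // Illustrative class, not from the original post: m_name used to be 256 bytes,
    // was later shortened to 128, and m_title was added directly after it.
    class Person
    {
    public:
        Person()
        {
            m_name[0] = '\0';
            std::strcpy(m_title, "Developer");
        }

        void SetName(const char* name)
        {
            // Bug: still written against the old 256-byte layout, no bounds check.
            std::memcpy(m_name, name, std::strlen(name) + 1);
        }

        const char* Title() const { return m_title; }

    private:
        char m_name[128];
        char m_title[64];   // silently overwritten when SetName overflows m_name
    };

    int main()
    {
        Person person;

        // In the debugger: take the address of person.m_title from the Watch window
        // and use it in Debug > New Breakpoint > New Data Breakpoint.
        std::string longName(200, 'x');
        person.SetName(longName.c_str());    // overflow: bytes past index 127 land in m_title

        std::cout << person.Title() << '\n'; // prints a run of 'x' characters, not "Developer"
        return 0;
    }

    With the breakpoint set, the debugger halts inside SetName at the offending memcpy, instead of leaving you to discover the damage much later and far away from its cause.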

    Read the article
