Search Results

Search found 1611 results on 65 pages for 'technique'.


  • How to safely copy an object?

    - by Prog
    This question is going to be a little long. Please bear with me. Something that happened in a project of mine made me think about how to safely copy objects. I'll present the situation I had and then ask a question. There was a class SomeClass: class SomeClass{ Thing[] things; public SomeClass(Thing[] things){ this.things = things; } // irrelevant stuff omitted public SomeClass copy(){ return new SomeClass(things); } } There was another class Processor that takes SomeClass objects, copies them (via someClassInstance.copy()), manipulates the copy's state, and returns the copy. Here it is: class Processor{ public SomeClass processObject(SomeClass object){ SomeClass copy = object.copy(); manipulateTheCopy(copy); return copy; } // irrelevant stuff omitted } I ran this, and it had bugs. I looked into these bugs, and it turned out that the manipulations Processor does on copy actually affect not only the copy, but also the original SomeClass object that was passed into processObject. I found out that it was because the original and the copy shared state - because the original passed its field things into the copy when creating it. This made me realize that copying objects is harder than simply instantiating them with the same fields as the original. For the two objects to be completely disconnected, without any shared state, each of the fields passed to the copy also has to be copied. And if that object contains other objects - they have to be copied too. And so on. So basically, in order to be able to actually copy an object, each class in the system must have a copy() method that also invokes copy() on all of its fields, and so on. So for example, for copy() in SomeClass to work, it needs to look like this: public SomeClass copy(){ Thing[] copyThings = new Thing[things.length]; for(int i=0; i<things.length; i++) copyThings[i] = things[i].copy(); return new SomeClass(copyThings); } And if Thing has object fields of its own, then its own copy() method must be appropriate: class Thing{ Apple apple; Pencil pencil; int number; public Thing(Apple apple, Pencil pencil, int number){ this.apple = apple; this.pencil = pencil; this.number = number; } public Thing copy(){ // 'number' is a primitive. return new Thing(apple.getCopy(), pencil.getCopy(), number); } } And so on. Of course, instead of all classes having a copy() method, the copying mechanism can happen in all of the getters and the constructors of classes (except in places where it isn't suitable, for example when the field points to an external object, not to an object that 'is part' of this object). Still, that means that in order to be able to safely copy an object - most classes would have to have copying mechanisms in their getters. My question is divided into two parts: How frequently do you need to get a copy of an object? Is this a regular issue? Is the technique described common and/or reasonable? Or is there a better way to make safe copies of objects? Or is there an easier way to safely copy objects, without them sharing any state?
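
    One alternative that is sometimes used instead of writing copy() for every class is serialization-based deep copying. The sketch below is not from the question, and it assumes that SomeClass, Thing, Apple and Pencil all implement Serializable (an assumption the question never makes); it simply round-trips the object graph through a byte stream so the copy shares no state with the original:

        import java.io.*;

        public class DeepCopier {
            // Writes the whole object graph out and reads it back in, producing
            // a copy that shares no references with the original instance.
            @SuppressWarnings("unchecked")
            public static <T extends Serializable> T deepCopy(T original) throws IOException, ClassNotFoundException {
                ByteArrayOutputStream bytes = new ByteArrayOutputStream();
                try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                    out.writeObject(original);
                }
                try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
                    return (T) in.readObject();
                }
            }
        }

    Usage would then be SomeClass copy = DeepCopier.deepCopy(original); the trade-offs are the Serializable requirement and the runtime cost of serialization, so this is a convenience rather than a replacement for a deliberate copy() design.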

    Read the article

  • C# 5 Async, Part 3: Preparing Existing code For Await

    - by Reed
    While the Visual Studio Async CTP provides a fantastic model for asynchronous programming, it requires code to be implemented in terms of Task and Task<T>.  The CTP adds support for Task-based asynchrony to the .NET Framework methods, and promises to have these implemented directly in the framework in the future.  However, existing code outside the framework will need to be converted to using the Task class prior to being usable via the CTP. Wrapping existing asynchronous code into a Task or Task<T> is, thankfully, fairly straightforward.  There are two main approaches to this. Code written using the Asynchronous Programming Model (APM) is very easy to convert to using Task<T>.  The TaskFactory class provides the tools to directly convert APM code into a method returning a Task<T>.  This is done via the FromAsync method.  This method takes the BeginOperation and EndOperation methods, as well as any parameters and state objects as arguments, and returns a Task<T> directly. For example, we could easily convert the WebRequest BeginGetResponse and EndGetResponse methods into a method which returns a Task<WebResponse> via: Task<WebResponse> task = Task.Factory .FromAsync<WebResponse>( request.BeginGetResponse, request.EndGetResponse, null); Event-based Asynchronous Pattern (EAP) code can also be wrapped into a Task<T>, though this requires a bit more effort than the one line of code above.  This is handled via the TaskCompletionSource<T> class.  MSDN provides a detailed example of using this to wrap an EAP operation into a method returning Task<T>.  It demonstrates handling cancellation and exception handling as well as the basic operation of the asynchronous method itself. The basic form of this operation is typically: Task<YourResult> GetResultAsync() { var tcs = new TaskCompletionSource<YourResult>(); // Handle the event, and setup the task results... this.GetResultCompleted += (o,e) => { if (e.Error != null) tcs.TrySetException(e.Error); else if (e.Cancelled) tcs.TrySetCanceled(); else tcs.TrySetResult(e.Result); }; // Call the asynchronous method this.GetResult(); // Return the task from the TaskCompletionSource return tcs.Task; } We can easily use these methods to wrap our own code into a method that returns a Task<T>.  Existing libraries which cannot be edited can be extended via Extension methods.  The CTP uses this technique to add appropriate methods throughout the framework. The suggested naming for these methods is to define these methods as “Task<YourResult> YourClass.YourOperationAsync(…)”.  However, this naming often conflicts with the default naming of the EAP.  If this is the case, the CTP has standardized on using “Task<YourResult> YourClass.YourOperationTaskAsync(…)”. Once we’ve wrapped all of our existing code into operations that return Task<T>, we can begin investigating how the Async CTP can be used with our own code.

    Read the article

  • Best approach to depth streaming via existing codec

    - by Kevin
    I'm working on a development system (and game) intended for games set mostly in static third-person views. We produce our scenery by CG and photographic techniques. Our background art is rendered off-line by a production-grade renderer. To allow the runtime imagery to properly interact with the background art, I wrote a program to convert the depth output by Mental Ray into a texture, and a pixel shader to draw a quad such that the Z data comes from the texture. This technique is working out very well, but now we've decided that some of the camera angle changes between scenes should be animated. The animation itself is straightforward to produce from our CG models. We intend to encode it to some HD video codec such as H.264. The problem is that in order to maintain our runtime imagery on the screen, the depth buffer will need to be loaded for each video frame. Due to the bandwidth, the video's depth data will need to be compressed efficiently. I've looked into methods for performing temporal compression of depth info and found an interesting research paper here: http://web4.cs.ucl.ac.uk/staff/j.kautz/publications/depth-streaming.pdf The method establishes a mapping between 16-bit depth values and YCbCr values. The mapping is tuned to the properties of existing video codecs in order to maximize precision of the decoded depths after the YCbCr has undergone video compression. It allows an existing, unmodified video codec to be used on the backend. I'm looking at how to pull this off with the least possible work. (This design change was unplanned.) Our game engine itself is native C++, presently for Win32 and DirectX, although we've worked hard to keep platform dependence segregated because we intend other ports. We don't have motion video facilities in the engine yet but will ultimately need that anyway for cinematics. I was planning on using some off-the-shelf motion video solution we can plug into our engine, and haven't chosen one yet. This newly added requirement makes selecting one harder since, among other things, we'll now need to bypass colourspace conversion on one of the streams, and will also need to be playing two streams simultaneously in lockstep, with, in some cases, audio on one of them on top of that (for the cinematics). I'm also wondering if it's possible (or even useful) to do the conversion from YCbCr to depth in a pixel shader, or if it's better to just do it on the CPU and separately load the resulting depth values into a locked texture. The conversion unfortunately does involve branching logic per-pixel. (There are more naive mappings that don't need branching, but they produce inferior results.) It could be reduced to a table lookup but the table would be 32MB. Programming is second nature to me but I'm not that experienced with pixel shaders and have zero knowledge of off-the-shelf video solutions. I'd therefore be interested in advice from others who may have dealt more with depth streaming, pixel shaders, and/or off-the-shelf codecs, regarding how feasible the proposed application is and what off-the-shelf video systems out there would best get along with this use case.

    Read the article

  • Projective texture and deferred lighting

    - by Vodácek
    In my previous question, I asked whether it is possible to do projective texturing with deferred lighting. Now (more than half a year later) I have a problem with my implementation of the same thing. I am trying to apply this technique in the light pass (my projector doesn't affect albedo). I have this projector View and Projection matrix: Matrix projection = Matrix.CreateOrthographicOffCenter(-halfWidth * Scale, halfWidth * Scale, -halfHeight * Scale, halfHeight * Scale, 1, 100000); Matrix view = Matrix.CreateLookAt(Position, Target, Vector3.Up); Where halfWidth and halfHeight are half of the texture's width and height, Position is the projector's position and Target is the projector's target. This seems to be ok. I am drawing a full-screen quad with this shader: float4x4 InvViewProjection; texture2D DepthTexture; texture2D NormalTexture; texture2D ProjectorTexture; float4x4 ProjectorViewProjection; sampler2D depthSampler = sampler_state { texture = <DepthTexture>; minfilter = point; magfilter = point; mipfilter = point; }; sampler2D normalSampler = sampler_state { texture = <NormalTexture>; minfilter = point; magfilter = point; mipfilter = point; }; sampler2D projectorSampler = sampler_state { texture = <ProjectorTexture>; AddressU = Clamp; AddressV = Clamp; }; float viewportWidth; float viewportHeight; // Calculate the 2D screen position of a 3D position float2 postProjToScreen(float4 position) { float2 screenPos = position.xy / position.w; return 0.5f * (float2(screenPos.x, -screenPos.y) + 1); } // Calculate the size of one half of a pixel, to convert // between texels and pixels float2 halfPixel() { return 0.5f / float2(viewportWidth, viewportHeight); } struct VertexShaderInput { float4 Position : POSITION0; }; struct VertexShaderOutput { float4 Position :POSITION0; float4 PositionCopy : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; output.Position = input.Position; output.PositionCopy=output.Position; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { float2 texCoord =postProjToScreen(input.PositionCopy) + halfPixel(); // Extract the depth for this pixel from the depth map float4 depth = tex2D(depthSampler, texCoord); //return float4(depth.r,0,0,1); // Recreate the position with the UV coordinates and depth value float4 position; position.x = texCoord.x * 2 - 1; position.y = (1 - texCoord.y) * 2 - 1; position.z = depth.r; position.w = 1.0f; // Transform position from screen space to world space position = mul(position, InvViewProjection); position.xyz /= position.w; //compute projection float3 projection=tex2D(projectorSampler,postProjToScreen(mul(position,ProjectorViewProjection)) + halfPixel()); return float4(projection,1); } In the first part of the pixel shader the position is recovered from the G-buffer (I am using this code in other shaders without any problem) and then transformed into the projector's view-projection space. The problem is that the projection doesn't appear. Here is an image of my situation: The green lines are the rendered projector frustum. Where is my mistake hidden? I am using XNA 4. Thanks for advice and sorry for my English. EDIT: The shader above is working but the projection was too small. When I changed the Scale property to a large value (e.g. 100), the projection appears. But when the camera moves toward the projection, the projection expands, as can be seen in this YouTube video.

    Read the article

  • I am trying to figure out the best way to understand how to cache domain objects

    - by Brett Ryan
    I've always done this wrong (I'm sure a lot of others have too): hold a reference via a map and write through to the DB, etc. I need to do this right, and I just don't know how to go about it. I know how I want my objects to be cached but I'm not sure how to achieve it. What complicates things is that I need to do this for a legacy system where the DB can change without notice to my application. So in the context of a web application, let's say I have a WidgetService which has several methods: Widget getWidget(); Collection<Widget> getAllWidgets(); Collection<Widget> getWidgetsByCategory(String categoryCode); Collection<Widget> getWidgetsByContainer(Integer parentContainer); Collection<Widget> getWidgetsByStatus(String status); Given this, I could decide to cache by method signature, i.e. getWidgetsByCategory("AA") would have a single cache entry, or I could cache widgets individually, which would be difficult I believe; OR, a call to any method would then first cache ALL widgets with a call to getAllWidgets(), but getAllWidgets() would produce caches that match all the keys for the other method invocations. For example, take the following untested theoretical code. Collection<Widget> getAllWidgets() { Entity entity = cache.get("ALL_WIDGETS"); Collection<Widget> res; if (entity == null) { res = loadCache(); } else { res = (Collection<Widget>) entity.getValue(); } return res; } Collection<Widget> loadCache() { // Get widgets from underlying DB Collection<Widget> res = db.getAllWidgets(); cache.put("ALL_WIDGETS", res); Map<String, List<Widget>> byCat = new HashMap<>(); for (Widget w : res) { // cache by different types of method calls, i.e. by category if (!byCat.containsKey(w.getCategory())) { byCat.put(w.getCategory(), new ArrayList<Widget>()); } byCat.get(w.getCategory()).add(w); } cacheCategories(byCat); return res; } Collection<Widget> getWidgetsByCategory(String categoryCode) { CategoryCacheKey key = new CategoryCacheKey(categoryCode); Entity ent = cache.get(key); if (ent == null) { loadCache(); } ent = cache.get(key); return ent == null ? Collections.emptyList() : (Collection<Widget>)ent.getValue(); } NOTE: I have not worked with a cache manager; the above code illustrates cache as some object that may hold caches by key/value pairs, though it's not modelled on any specific implementation. Using this I have the benefit of being able to cache all objects in the different ways they will be called with only single objects on the heap, whereas if I were to cache the method call invocation via, say, Spring, it would (I believe) cache multiple copies of the objects. I really wish to try and understand the best ways to cache domain objects before I go down the wrong path and make it harder for myself later. I have read the documentation on the Ehcache website and found various articles of interest, but nothing to give a good solid technique. Since I'm working with an ERP system, some DB calls are very complicated. It's not that the DB is slow, but the business representation of the domain objects makes it very clumsy, and coupled with the fact that there are actually 11 different DBs whose information this application consolidates into a single view, this makes caching quite important.
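
    As a rough sketch of the "cache widgets individually" option mentioned above (this is not from the question; the WidgetCache class, the getId() accessor and the single-threaded Map usage are assumptions made purely for illustration), each Widget could be held once, keyed by id, with category views that store only ids:

        import java.util.*;

        // Hypothetical sketch: one copy of each Widget on the heap, keyed by id,
        // plus a category index that stores ids rather than Widget references.
        // Eviction, expiry and thread-safety are deliberately left out.
        public class WidgetCache {
            private final Map<Integer, Widget> byId = new HashMap<>();
            private final Map<String, Set<Integer>> idsByCategory = new HashMap<>();

            public void putAll(Collection<Widget> widgets) {
                for (Widget w : widgets) {
                    byId.put(w.getId(), w);                  // assumes Widget exposes getId()
                    Set<Integer> ids = idsByCategory.get(w.getCategory());
                    if (ids == null) {
                        ids = new HashSet<>();
                        idsByCategory.put(w.getCategory(), ids);
                    }
                    ids.add(w.getId());
                }
            }

            public Collection<Widget> getByCategory(String categoryCode) {
                Set<Integer> ids = idsByCategory.get(categoryCode);
                if (ids == null) {
                    return Collections.emptyList();
                }
                List<Widget> result = new ArrayList<>();
                for (Integer id : ids) {
                    result.add(byId.get(id));
                }
                return result;
            }
        }

    A real implementation would put the same idea behind Ehcache (or another cache manager) regions rather than raw maps, but the point is that the per-method "views" only index into a single cached copy of each domain object.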

    Read the article

  • Investigating Strategies For Functional Decomposition

    - by Liam McLennan
    Introducing Functional Decomposition Before I begin I must apologise. I think I am using the term ‘functional decomposition’ loosely, and probably incorrectly. For the purpose of this article I use functional decomposition to mean the recursive splitting of a large problem into increasingly smaller ones, so that the one large problem may be solved by solving a set of smaller problems. The justification for functional decomposition is that the decomposed problem is more easily solved. As software developers we recognise that the smaller pieces are more easily tested, since they do less and are more cohesive. Functional decomposition is important to all scientific pursuits. Once we understand natural selection we can start to look for humanity's ancestral species, once we understand the big bang we can trace our expanding universe back to its origin. Isaac Newton acknowledged the compositional nature of his scientific achievements: If I have seen further than others, it is by standing upon the shoulders of giants   The Two Strategies For Functional Decomposition of Computer Programs Private Methods When I was working on my undergraduate degree I was taught to functionally decompose problems by using private methods. Consider the problem of painting a house. The obvious solution is to solve the problem as a single unit: public void PaintAHouse() { // all the things required to paint a house ... } We decompose the problem by breaking it into parts: public void PaintAHouse() { PaintUndercoat(); PaintTopcoat(); } private void PaintUndercoat() { // everything required to paint the undercoat } private void PaintTopcoat() { // everything required to paint the topcoat } The problem can be recursively decomposed until a sufficiently granular level of detail is reached: public void PaintAHouse() { PaintUndercoat(); PaintTopcoat(); } private void PaintUndercoat() { prepareSurface(); fetchUndercoat(); paintUndercoat(); } private void PaintTopcoat() { fetchPaint(); paintTopcoat(); } According to Wikipedia, at least one computer programmer has referred to this process as “the art of subroutining”. The practical issues that I have encountered when using private methods for decomposition are: To preserve the top level API all of the steps must be private. This means that they can’t easily be tested. The private methods often have little cohesion except that they form part of the same solution. Decomposing to Classes The alternative is to decompose large problems into multiple classes, effectively using a class instead of each private method. The API delegates to related classes, so the API is not polluted by the sub-steps of the problem, and the steps can be easily tested because they are each in their own highly cohesive class. Additionally, I think that this technique facilitates better adherence to the Single Responsibility Principle, since each class can be decomposed until it has precisely one responsibility. Revisiting my previous example using class composition: public class HousePainter { private UndercoatPainter undercoatPainter = new UndercoatPainter(); private TopcoatPainter topcoatPainter = new TopcoatPainter(); public void PaintAHouse() { undercoatPainter.Paint(); topcoatPainter.Paint(); } } Summary When decomposing a problem there is more than one way to represent the sub-problems. 
Using private methods keeps the logic in one place and prevents a proliferation of classes (thereby following the four rules of simple design) but the class decomposition is more easily testable and more compatible with the Single Responsibility Principle.

    Read the article

  • Improving the efficiency of frustum culling

    - by DeadMG
    I've got some code which performs frustum culling. However, this defines the "frustum" way too broadly- when I have ~10 objects on screen, the code returns 42 objects to be rendered. I've tried taking "slices" through the frustum to attempt to increase the accuracy of the technique, but it doesn't seem to have made much impact. I also significantly reduced the far plane, so that the objects are barely at the edge. Here's my code (where size is the size in screen space- the resolution of the client area of the window I'm rendering into). Any suggestions? auto&& size = GetDimensions(); D3DVIEWPORT9 vp = { 0, 0, size.x, size.y, 0, 1 }; D3DCALL(device->SetViewport(&vp)); static const int slices = 10; std::vector<Object*> result; for(int i = 0; i < slices; i++) { D3DXVECTOR3 WorldSpaceFrustrumPoints[8] = { D3DXVECTOR3(0, size.y, static_cast<float>(i) / slices), D3DXVECTOR3(size.x, 0, static_cast<float>(i) / slices), D3DXVECTOR3(size.x, size.y, static_cast<float>(i) / slices), D3DXVECTOR3(0, 0, static_cast<float>(i) / slices), D3DXVECTOR3(0, 0, static_cast<float>(i + 1) / slices), D3DXVECTOR3(size.x, 0, static_cast<float>(i + 1) / slices), D3DXVECTOR3(size.x, size.y, static_cast<float>(i + 1) / slices), D3DXVECTOR3(0, size.y, static_cast<float>(i + 1) / slices) }; D3DXMATRIXA16 Identity; D3DXMatrixIdentity(&Identity); D3DXVec3UnprojectArray( WorldSpaceFrustrumPoints, sizeof(D3DXVECTOR3), WorldSpaceFrustrumPoints, sizeof(D3DXVECTOR3), &vp, &Projection, &View, &Identity, 8 ); Math::AABB Frustrum; auto world_begin = std::begin(WorldSpaceFrustrumPoints); auto world_end = std::end(WorldSpaceFrustrumPoints); auto world_initial = WorldSpaceFrustrumPoints[0]; Frustrum.BottomLeftClosest.x = std::accumulate(world_begin, world_end, world_initial, [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.x < rhs.x ? lhs : rhs; }).x; Frustrum.BottomLeftClosest.y = std::accumulate(world_begin, world_end, world_initial, [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.y < rhs.y ? lhs : rhs; }).y; Frustrum.BottomLeftClosest.z = std::accumulate(world_begin, world_end, world_initial, [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.z < rhs.z ? lhs : rhs; }).z; Frustrum.TopRightFurthest.x = std::accumulate(world_begin, world_end, world_initial, [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.x > rhs.x ? lhs : rhs; }).x; Frustrum.TopRightFurthest.y = std::accumulate(world_begin, world_end, world_initial, [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.y > rhs.y ? lhs : rhs; }).y; Frustrum.TopRightFurthest.z = std::accumulate(world_begin, world_end, world_initial, [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.z > rhs.z ? lhs : rhs; }).z; auto slices_result = ObjectTree.collision(Frustrum); result.insert(result.end(), slices_result.begin(), slices_result.end()); } return result;

    Read the article

  • HLSL Shader not working right?

    - by dvds414
    Okay so I have this shader for ambient occlusion. It loads to world correctly, but it just shows all the models as being white. I do not know why. I am just running the shader while the model is rendering, is that correct? or do I need to make a render target or something? if so then how? I'm using C++. Here is my shader. float sampleRadius; float distanceScale; float4x4 xProjection; float4x4 xView; float4x4 xWorld; float3 cornerFustrum; struct VS_OUTPUT { float4 pos : POSITION; float2 TexCoord : TEXCOORD0; float3 viewDirection : TEXCOORD1; }; VS_OUTPUT VertexShaderFunction( float4 Position : POSITION, float2 TexCoord : TEXCOORD0) { VS_OUTPUT Out = (VS_OUTPUT)0; float4 WorldPosition = mul(Position, xWorld); float4 ViewPosition = mul(WorldPosition, xView); Out.pos = mul(ViewPosition, xProjection); Position.xy = sign(Position.xy); Out.TexCoord = (float2(Position.x, -Position.y) + float2( 1.0f, 1.0f ) ) * 0.5f; float3 corner = float3(-cornerFustrum.x * Position.x, cornerFustrum.y * Position.y, cornerFustrum.z); Out.viewDirection = corner; return Out; } texture depthTexture; texture randomTexture; sampler2D depthSampler = sampler_state { Texture = <depthTexture>; ADDRESSU = CLAMP; ADDRESSV = CLAMP; MAGFILTER = LINEAR; MINFILTER = LINEAR; }; sampler2D RandNormal = sampler_state { Texture = <randomTexture>; ADDRESSU = WRAP; ADDRESSV = WRAP; MAGFILTER = LINEAR; MINFILTER = LINEAR; }; float4 PixelShaderFunction(VS_OUTPUT IN) : COLOR0 { float4 samples[16] = { float4(0.355512, -0.709318, -0.102371, 0.0 ), float4(0.534186, 0.71511, -0.115167, 0.0 ), float4(-0.87866, 0.157139, -0.115167, 0.0 ), float4(0.140679, -0.475516, -0.0639818, 0.0 ), float4(-0.0796121, 0.158842, -0.677075, 0.0 ), float4(-0.0759516, -0.101676, -0.483625, 0.0 ), float4(0.12493, -0.0223423, -0.483625, 0.0 ), float4(-0.0720074, 0.243395, -0.967251, 0.0 ), float4(-0.207641, 0.414286, 0.187755, 0.0 ), float4(-0.277332, -0.371262, 0.187755, 0.0 ), float4(0.63864, -0.114214, 0.262857, 0.0 ), float4(-0.184051, 0.622119, 0.262857, 0.0 ), float4(0.110007, -0.219486, 0.435574, 0.0 ), float4(0.235085, 0.314707, 0.696918, 0.0 ), float4(-0.290012, 0.0518654, 0.522688, 0.0 ), float4(0.0975089, -0.329594, 0.609803, 0.0 ) }; IN.TexCoord.x += 1.0/1600.0; IN.TexCoord.y += 1.0/1200.0; normalize (IN.viewDirection); float depth = tex2D(depthSampler, IN.TexCoord).a; float3 se = depth * IN.viewDirection; float3 randNormal = tex2D( RandNormal, IN.TexCoord * 200.0 ).rgb; float3 normal = tex2D(depthSampler, IN.TexCoord).rgb; float finalColor = 0.0f; for (int i = 0; i < 16; i++) { float3 ray = reflect(samples[i].xyz,randNormal) * sampleRadius; //if (dot(ray, normal) < 0) // ray += normal * sampleRadius; float4 sample = float4(se + ray, 1.0f); float4 ss = mul(sample, xProjection); float2 sampleTexCoord = 0.5f * ss.xy/ss.w + float2(0.5f, 0.5f); sampleTexCoord.x += 1.0/1600.0; sampleTexCoord.y += 1.0/1200.0; float sampleDepth = tex2D(depthSampler, sampleTexCoord).a; if (sampleDepth == 1.0) { finalColor ++; } else { float occlusion = distanceScale* max(sampleDepth - depth, 0.0f); finalColor += 1.0f / (1.0f + occlusion * occlusion * 0.1); } } return float4(finalColor/16, finalColor/16, finalColor/16, 1.0f); } technique SSAO { pass P0 { VertexShader = compile vs_3_0 VertexShaderFunction(); PixelShader = compile ps_3_0 PixelShaderFunction(); } }

    Read the article

  • Web Service Example - Part 3: Asynchronous

    - by Denis T
    In this edition of the ADF Mobile blog we'll tackle part 3 of our Web Service examples.  In this posting we'll take a look at firing the web service asynchronously and then filling in the UI when it completes.  This can be useful when you have data on the device in a local store and want to show that to the user while the application uses lazy loading from a web service to load more data. Getting the sample code: Just click here to download a zip of the entire project.  You can unzip it and load it into JDeveloper and deploy it either to iOS or Android.  Please follow the previous blog posts if you need help getting JDeveloper or ADF Mobile installed.  Note: This is a different workspace than WS-Part2 What's different? In this example, when you click the Search button on the Forecast By Zip option, now it takes you directly to the results page, which is initially blank.  When the web service returns a second or two later the data pops into the UI.  If you go back to the search page and hit Search it will again clear the results and invoke the web service asynchronously.  This isn't really that useful for this particular example but it shows an important technique that can be used for other use cases. How it was done 1)  First we created a new class, ForecastWorker, that implements the Runnable interface.  This is used as our worker class that we create an instance of and pass to a new thread that we create when the Search button is pressed inside the retrieveForecast actionListener handler.  Once the thread is started, the retrieveForecast returns immediately.  2)  The rest of the code that we had previously in the retrieveForecast method has now been moved to the retrieveForecastAsync.  Note that we've also added synchronized specifiers on both these methods so they are protected from re-entrancy. 3)  The run method of the ForecastWorker class then calls the retrieveForecastAsync method.  This executes the web service code that we had previously, but now on a separate thread so the UI is not locked.  If we had already shown data on the screen it would have appeared before this was invoked.  Note that you do not see a loading indicator either because this is on a separate thread and nothing is blocked. 4)  The last but very important aspect of this method is that once we update data in the collections from the data we retrieve from the web service, we call AdfmfJavaUtilities.flushDataChangeEvents().   We need this because as data is updated in the background thread, those data change events are not propagated to the main thread until you explicitly flush them.  As soon as you do this, the UI will get updated if any changes have been queued. Summary of Fundamental Changes In This Application The most fundamental change is that we are invoking and handling our web services in a background thread and updating the UI when the data returns.  This allows an application to provide a better user experience in many cases because data that is already available locally is displayed while lengthy queries or web service calls can be done in the background and the UI updated when they return.  There are many different use cases for background threads and this is just one example of optimizing the user experience and generating a better mobile application. 
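
    A minimal sketch of the pattern the four steps above describe is shown below. The class and method names follow the post, but the import path, the bean wiring and the collapsing of ForecastWorker into an inner class are assumptions made here for brevity, and all error handling is omitted:

        import oracle.adfmf.framework.api.AdfmfJavaUtilities;

        // Hypothetical managed bean showing the threading pattern from the post.
        public class ForecastBean {

            // Worker that runs the web service call off the UI thread (step 1).
            public class ForecastWorker implements Runnable {
                public void run() {
                    retrieveForecastAsync();                 // step 3
                }
            }

            // Bound to the Search button; starts the thread and returns immediately.
            public synchronized void retrieveForecast() {
                new Thread(new ForecastWorker()).start();
            }

            // Runs on the background thread (step 2).
            public synchronized void retrieveForecastAsync() {
                // ...invoke the web service and update the cached collections here...

                // Step 4: data change events raised on a background thread are queued,
                // so flush them to push the updates to the main thread and the UI.
                AdfmfJavaUtilities.flushDataChangeEvents();
            }
        }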

    Read the article

  • Simple Navigation In Windows Phone 7

    - by PeterTweed
    Take the Slalom Challenge at www.slalomchallenge.com! When moving to the mobile platform all applications need to be able to provide different views.  Navigating around views in Windows Phone 7 is a very easy thing to do.  This post will introduce you to the simplest technique for navigation in Windows Phone 7 apps. Steps: 1.     Create a new Windows Phone Application project. 2.     In the MainPage.xaml file copy the following xaml into the ContentGrid Grid:             <StackPanel Orientation="Vertical" VerticalAlignment="Center"  >                 <TextBox Name="ValueTextBox" Width="200" ></TextBox>                 <Button Width="200" Height="30" Content="Next Page" Click="Button_Click"></Button>             </StackPanel> This gives a text box for the user to enter text and a button to navigate to the next page. 3.     Copy the following event handler code to the MainPage.xaml.cs file:         private void Button_Click(object sender, RoutedEventArgs e)         {             NavigationService.Navigate(new Uri( string.Format("/SecondPage.xaml?val={0}", ValueTextBox.Text), UriKind.Relative));         }   The event handler uses the NavigationService.Navigate() function.  This is what makes the navigation to another page happen.  The function takes a Uri parameter with the name of the page to navigate to and the indication that it is a relative Uri to the current page.  Note also the querystring is formatted with the value entered in the ValueTextBox control – in a similar manner to a standard web querystring. 4.     Add a new Windows Phone Portrait Page to the project named SecondPage.xaml. 5.     Paste the following XAML in the ContentGrid Grid in SecondPage.xaml:             <Button Name="GoBackButton" Width="200" Height="30" Content="Go Back" Click="Button_Click"></Button>   This provides a button to navigate back to the first page. 6.     Copy the following event handler code to the SecondPage.xaml.cs file:         private void Button_Click(object sender, RoutedEventArgs e)         {             NavigationService.GoBack();         } This tells the application to go back to the previously displayed page. 7.     Add the following code to the constructor in SecondPage.xaml.cs:             this.Loaded += new RoutedEventHandler(SecondPage_Loaded); 8.     Add the following loaded event handler to the SecondPage.xaml.cs file:         void SecondPage_Loaded(object sender, RoutedEventArgs e)         {             if (NavigationContext.QueryString["val"].Length > 0)                 MessageBox.Show(NavigationContext.QueryString["val"], "Data Passed", MessageBoxButton.OK);             else                 MessageBox.Show("{Empty}!", "Data Passed", MessageBoxButton.OK);         }   This code pops up a message box displaying either the text entered on the first page or the message “{Empty}!” if no text was entered. 9.     Run the application, enter some text in the text box and click on the next page button to see the application in action:   Congratulations!  You have created a new Windows Phone 7 application with page navigation.

    Read the article

  • Data Mining Email with Thunderbird

    - by user554629
    Oracle has many formal, searchable locations: Service Requests, BugIDs, Technical Documents. These contain the results of an investigation for a customer crash situation; they're created after the intense work of resolution is over, and typically contain the "root cause" of the failure ... but not the methods for identifying that cause. Email is still the standby for interacting with quickly formed groups of specialists focusing on a particular incident: customer BI, network and system specialists; Oracle Tech Support, Development, Consultants; OEM database, OS technical support. It is a chaotic, time-oriented set of configuration, call stacks, changes, and techniques to discover and repair the failure. I needed to organize that information into something cohesive to prepare the blog entry on Teradata. My corporate email client of choice is Thunderbird. My original (flawed) search technique: R-click on Inbox in the Thunderbird left pane, and choose Search Messages, Subject: [ teradata ]. Results: a new window titled "Search Messages"; a single pane of selected messages; column headings: Subject, From, Date, Location; no preview window for messages. There are 673 email entries in the result (too many). R-click the icon just above the vertical scroll bar on the right, check [x] Tags, and click on the Tags header to sort by "Important". View the contents of a message by double-clicking; it opens in the Thunderbird main window in a new tab. Not what I was looking for; close the tab and try again. There has to be a better way (and there is). I need to be more productive, eliminating duplicate-chained messages, for example. Even the Tag "Important" that was added during the investigation phase is "not so much" for my current task. In the "Search Messages" window, click [ Save as Search Folder ]. [ teradata ] appears as a new folder in my Inbox. Focus on that folder and the results appear with a list of messages like every other folder in the Inbox. Only the results of the search are shown, a preview window is now available for each message, and Sort, Select message, Cursor Down ... navigates quickly through the messages. But wait, there's more ... Click Find (Ctrl-F) and enter a search term for the message body, like [ LIBPATH ]. The search is "sticky" ... each message you cycle through will focus (and highlight) the LIBPATH search term. And still more ... Reset the Tag "Important" message: press "1" and the tag is removed; press "4" and a new Tag "ToDo" is applied. After applying all of the tags, sort by Tag for a new message order. Adjust the search criteria ... R-click on the [ teradata ] search folder, choose Properties, and add additional criteria to narrow the search. Some of the information I'm looking for did not contain "teradata" in the subject line: + Body [ contains ] [ Best Practices ]. That's it. Much more efficient search. Thank you Thunderbird.

    Read the article

  • Selective Suppression of Log Messages

    - by Duncan Mills
    Those of you who regularly read this blog will probably have noticed that I have a strange predilection for logging-related topics, so why break this habit I ask?  Anyway here's an issue which came up recently that I thought was a good one to mention in a brief post.  The scenario really applies to production applications where you are seeing entries in the log files which are harmless, you know why they are there and are happy to ignore them, but at the same time you either can't or don't want to risk changing the deployed code to "fix" it to remove the underlying cause. (I'm not judging here). The good news is that the logging mechanism provides a filtering capability which can be applied to a particular logger to selectively "let a message through" or suppress it. This is the technique outlined below. First Create Your Filter  You create a logging filter by implementing the java.util.logging.Filter interface. This is a very simple interface and basically defines one method isLoggable() which simply has to return a boolean value. A return of false will suppress that particular log message and not pass it on to the handler. The method is passed the log record of type java.util.logging.LogRecord which provides you with access to everything you need to decide if you want to let this log message pass through or not, for example getLoggerName(), getMessage() and so on. So an example implementation might look like this if we wanted to filter out all the log messages that start with the string "DEBUG" when the logging level is not set to FINEST:  public class MyLoggingFilter implements Filter {     public boolean isLoggable(LogRecord record) {         if ( !record.getLevel().equals(Level.FINEST) && record.getMessage().startsWith("DEBUG")){          return false;            }         return true;     } } Deploying   This code needs to be put into a JAR and added to your WebLogic classpath.  It's too late to load it as part of an application, so instead you need to put the JAR file into the WebLogic classpath using a mechanism such as the PRE_CLASSPATH setting in your domain setDomainEnv script. Then restart WLS of course. Using The final piece is to actually assign the filter.  The simplest way to do this is to add the filter attribute to the logger definition in the logging.xml file. For example, you may choose to define a logger for a specific class that is raising these messages and only apply the filter in that case.  <logger name="some.vendor.adf.ClassICantChange"         filter="oracle.demo.MyLoggingFilter"/> You can also apply the filter using WLST if you want a more script-y solution.

    Read the article

  • Reducing WPF binding boilerplate with styles - updating the bindings themselves via styling?

    - by Eamon Nerbonne
    I'm still learning the WPF ropes, so if the following question is trivial or my approach wrong-headed, please do speak up... I'm trying to reduce boilerplate and it sounds like styles are a common way to do so. In particular: I've got a bunch of fairly mundane data-entry fields. The controls for these fields have various properties I'd like to set based on the target of the binding - pretty normal stuff. However, I'd also like to set properties of the binding itself in the style to avoid repetitiveness. For example: <TextBox Style="{StaticResource myStyle}"> <TextBox.Text> <Binding Path="..." Source="..." ValidatesOnDataErrors="True" ValidatesOnExceptions="True" UpdateSourceTrigger="PropertyChanged"> </Binding> </TextBox.Text> </TextBox> Now, is there any way to use styling - or some other technique to write the previous example somewhat like this: <TextBox Style="{StaticResource myStyle}" Text="{Binding Source=..., Path=...}"/> That is, is there any way to set all bindings that match a particular selection (here, on controls with the myStyle style) to validate data and to use a particular update trigger? Is it possible to template or style bindings themselves? Clearly, the second syntax is much, much shorter and more readable, and I'd love to be able to get rid of other similar boilerplate to keep my UI code comprehensible to myself :-).

    Read the article

  • WPF: Is it possible to add or modify bindings via styles or something similar?

    - by Eamon Nerbonne
    I'm still learning the WPF ropes, so if the following question is trivial or my approach wrong-headed, please do speak up... I'm trying to reduce boilerplate and it sounds like styles are a common way to do so. In particular: I've got a bunch of fairly mundane data-entry fields. The controls for these fields have various properties I'd like to set based on the target of the binding - pretty normal stuff. However, I'd also like to set properties of the binding itself in the style to avoid repetitiveness. For example: <TextBox Style="{StaticResource myStyle}"> <TextBox.Text> <Binding Path="..." Source="..." ValidatesOnDataErrors="True" ValidatesOnExceptions="True" UpdateSourceTrigger="PropertyChanged"> </Binding> </TextBox.Text> </TextBox> Now, is there any way to use styling - or some other technique to write the previous example somewhat like this: <TextBox Style="{StaticResource myStyle}" Text="{Binding Source=..., Path=...}"/> That is, is there any way to set all bindings that match a particular selection (here, on controls with the myStyle style) to validate data and to use a particular update trigger? Is it possible to template or style bindings themselves? Alternatively, is it possible to add the binding in the style itself? Clearly, the second syntax is much, much shorter and more readable, and I'd love to be able to get rid of other similar boilerplate to keep my UI code comprehensible to myself :-).

    Read the article

  • codingBat repeatEnd using regex

    - by polygenelubricants
    I'm trying to understand regex as much as I can, so I came up with this regex-based solution to codingbat.com repeatEnd: Given a string and an int N, return a string made of N repetitions of the last N characters of the string. You may assume that N is between 0 and the length of the string, inclusive. public String repeatEnd(String str, int N) { return str.replaceAll( ".(?!.{N})(?=.*(?<=(.{N})))|." .replace("N", Integer.toString(N)), "$1" ); } Explanation of its parts: .(?!.{N}): asserts that the matched character is one of the last N characters, by making sure that there aren't N characters following it. (?=.*(?<=(.{N}))): in which case, use lookahead to first go all the way to the end of the string, then a nested lookbehind to capture the last N characters into \1. Note that this assertion will always be true. |.: if the first assertion failed (i.e. there are at least N characters ahead) then match the character anyway; \1 would be empty. In either case, a character is always matched; replace it with \1. My questions are: Is this technique of nested assertions valid? (i.e. looking behind during a lookahead?) Is there a simpler regex-based solution?
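
    For reference, a quick self-contained check of what the expanded pattern produces (the class wrapper and main method are just scaffolding added here for running it; the expected outputs follow the standard codingbat examples):

        public class RepeatEndDemo {
            public String repeatEnd(String str, int N) {
                return str.replaceAll(
                        ".(?!.{N})(?=.*(?<=(.{N})))|.".replace("N", Integer.toString(N)),
                        "$1");
            }

            public static void main(String[] args) {
                RepeatEndDemo demo = new RepeatEndDemo();
                // For N = 3 the template expands to ".(?!.{3})(?=.*(?<=(.{3})))|."
                System.out.println(demo.repeatEnd("Hello", 3)); // llollollo
                System.out.println(demo.repeatEnd("Hello", 2)); // lolo
            }
        }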

    Read the article

  • C# Design How to Elegantly wrap a DAL class

    - by guazz
    I have an application which uses MyGeneration's dOODads ORM to generate its Data Access Layer. dOODads works by generating a persistence class for each table in the database. It works like so: // Load and Save Employees emps = new Employees(); if(emps.LoadByPrimaryKey(42)) { emps.LastName = "Just Got Married"; emps.Save(); } // Add a new record Employees emps = new Employees(); emps.AddNew(); emps.FirstName = "Mr."; emps.LastName = "dOOdad"; emps.Save(); // After save the identity column is already here for me. int i = emps.EmployeeID; // Dynamic Query - All Employees with 'A' in their last name Employees emps = new Employees(); emps.Where.LastName.Value = "%A%"; emps.Where.LastName.Operator = WhereParameter.Operand.Like; emps.Query.Load(); For the above example (i.e. the Employees DAL object) I would like to know the best method/technique to abstract some of the implementation details of my classes. I don't believe that an Employee class should have Employees (the DAL) specifics in its methods - or perhaps this is acceptable? Is it possible to implement some form of repository pattern? Bear in mind that this is a high-volume, performance-critical application. Thanks, j

    Read the article

  • How to insert inline content from one FlowDocument into another?

    - by Robert Rossney
    I'm building an application that needs to allow a user to insert text from one RichTextBox at the current caret position in another one. I spent a lot of time screwing around with the FlowDocument's object model before running across this technique - source and target are both FlowDocuments: using (MemoryStream ms = new MemoryStream()) { TextRange tr = new TextRange(source.ContentStart, source.ContentEnd); tr.Save(ms, DataFormats.Xaml); ms.Seek(0, SeekOrigin.Begin); tr = new TextRange(target.CaretPosition, target.CaretPosition); tr.Load(ms, DataFormats.Xaml); } This works remarkably well. The only problem I'm having with it now is that it always inserts the source as a new paragraph. It breaks the current run (or whatever) at the caret, inserts the source, and ends the paragraph. That's appropriate if the source actually is a paragraph (or more than one paragraph), but not if it's just (say) a line of text. I think it's likely that the answer to this is going to end up being checking the target to see if it consists entirely of a single block, and if it does, setting the TextRange to point at the beginning and end of the block's content before saving it to the stream. The entire world of the FlowDocument is a roiling sea of dark mysteries to me. I can become an expert at it if I have to (per Dostoevsky: "Man is the animal who can get used to anything."), but if someone has already figured this out and can tell me how to do this it would make my life far easier.

    Read the article

  • Looking for a good world map generation algorithm

    - by FalconNL
    I'm working on a Civilization-like game and I'm looking for a good algorithm for generating Earth-like world maps. I've experimented with a few alternatives, but haven't hit on a real winner yet. One option is to generate a heightmap using Perlin noise and add water at a level so that about 30% of the world is land. While Perlin noise (or similar fractal-based techniques) is frequently used for terrain and is reasonably realistic, it doesn't offer much in the way of control over the number, size and position of the resulting continents, which I'd like to have from a gameplay perspective. See http://farm3.static.flickr.com/2792/4462870263_ff26c40365_o.jpg for an example (sorry, can't post pictures yet). A second option is to start with a randomly positioned one-tile seed (I'm working on a grid of tiles), determine the desired size for the continent and each turn add a tile that is horizontally or vertically adjacent to the existing continent until you've reached the desired size. Repeat for the other continents. This technique is part of the algorithm used in Civilization 4. The problem is that after placing the first few continents, it's possible to pick a starting location that's surrounded by other continents, and thus won't fit the new one. Also, it has a tendency to spawn continents too close together, resulting in something that looks more like a river than continents. See http://farm5.static.flickr.com/4069/4462870383_46e86b155c_o.jpg for an example. Does anyone happen to know a good algorithm for generating realistic continents on a grid-based map while keeping control over their number and relative sizes?
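
    For illustration, here is a rough sketch of that seed-growth idea (the grid representation, the method names and the random tile selection are assumptions made up for this sketch; it is not the actual Civilization 4 algorithm):

        import java.util.*;

        // Grow one continent from a random seed tile by repeatedly claiming a
        // random free tile orthogonally adjacent to the existing landmass.
        public class ContinentGrower {
            private final boolean[][] land;
            private final Random rng = new Random();

            public ContinentGrower(int width, int height) {
                land = new boolean[width][height];
            }

            public void growContinent(int desiredSize) {
                int w = land.length, h = land[0].length;
                List<int[]> frontier = new ArrayList<>();
                // Pick a random free seed tile; on a crowded map the seed (and the
                // growth below) can get boxed in, which is exactly the problem described above.
                int x, y;
                do { x = rng.nextInt(w); y = rng.nextInt(h); } while (land[x][y]);
                land[x][y] = true;
                frontier.add(new int[] { x, y });
                int size = 1;
                while (size < desiredSize && !frontier.isEmpty()) {
                    int[] tile = frontier.get(rng.nextInt(frontier.size()));
                    List<int[]> free = new ArrayList<>();
                    int[][] steps = { { 1, 0 }, { -1, 0 }, { 0, 1 }, { 0, -1 } };
                    for (int[] d : steps) {
                        int nx = tile[0] + d[0], ny = tile[1] + d[1];
                        if (nx >= 0 && ny >= 0 && nx < w && ny < h && !land[nx][ny]) {
                            free.add(new int[] { nx, ny });
                        }
                    }
                    if (free.isEmpty()) { frontier.remove(tile); continue; } // landlocked tile
                    int[] next = free.get(rng.nextInt(free.size()));
                    land[next[0]][next[1]] = true;
                    frontier.add(next);
                    size++;
                }
            }
        }

    Calling growContinent once per desired continent reproduces the behaviour (and the clustering problem) described above; enforcing a minimum distance between seed tiles is one obvious tweak to spread the continents out.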

    Read the article

  • Parse MIME messages

    - by Abhimanyu
    Hi, For my new project which has email module.i need to show all the email information on web.when i m making a call to server i m getting the base64 encoded mime data. after applying base64 decoding technique i m getting the mime data as follows: /*****************Mime data start *******************************/ From [email protected] Tue Jun 23 12:01:02 2009 Date: Tue, 23 Jun 2009 12:01:02 +0530 From: Prashant R Naik <[email protected]> To: [email protected] Subject: This is a test mail Message-ID: <[email protected]> Reply-To: Prashant R Naik <[email protected]> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="ReaqsoxgOBHFXBhH" Content-Disposition: inline User-Agent: Mutt/1.5.18 (2008-05-17) Status: RO Content-Length: 1912 Lines: 52 --ReaqsoxgOBHFXBhH Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Test mail. Initiated by prashant Regards, -- Prashant R Naik Principal Technologist | Symbian & Web2.0 Geodesic Limited | www.geodesic.com Tel: +91-80-66551000 --ReaqsoxgOBHFXBhH Content-Type: image/gif Content-Disposition: attachment; filename="trash.gif" Content-Transfer-Encoding: base64 R0lGODlhEAAQANUoADJ8wTqU2DmR1TqV2DN9wTSBxTWFyTaGyTJ9wTWGyTaKzjmS1TOAxTuV 2DaFyTN8wDiN0jiO0jSAxTeKzjqS1DN8wTqR1TWFyjB4vTOBxTmO0TmS1DaKzTeJzTqV1zSA xDJ8wDqS1TeKzTF4vDF4vTiO0f///zuX2gAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACH5BAEAACgALAAA AAAQABAAAAaDQNRpSCwWhcakcsk8mZ5Qpik5pUKvT2W1uDVWp+BiYNAImAZmz/lcDoQEFoFp QTFtTPKFQLCAREolJiURJhCCJhqAJRMiIhwmjSYdJgqUjQoODgkJJgecBp0mBgYXBx8ZBQxY UAUSDAUACLEPDwgEAAAEIBUEtygkIyMkwMMYw8EjKEEAOw== --ReaqsoxgOBHFXBhH Content-Type: image/jpeg Content-Disposition: attachment; filename="bx.jpg" Content-Transfer-Encoding: base64 /9j/4AAQSkZJRgABAQEASABIAAD/2wBDAAEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEB AQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/2wBDAQEBAQEBAQEBAQEBAQEB AQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/wAAR CAAUAAoDAREAAhEBAxEB/8QAFQABAQAAAAAAAAAAAAAAAAAAAAn/xAAYEAEAAwEAAAAAAAAA AAAAAAAAGWen5//EABQBAQAAAAAAAAAAAAAAAAAAAAD/xAAUEQEAAAAAAAAAAAAAAAAAAAAA /9oADAMBAAIRAxEAPwCb4AJHym0Vp3PQJTaK07noJHgA/9k= --ReaqsoxgOBHFXBhH Content-Type: image/png Content-Disposition: attachment; filename="day_bg.png" Content-Transfer-Encoding: base64 iVBORw0KGgoAAAANSUhEUgAAAGQAAAApCAYAAADDJIzmAAAABmJLR0QA/wD/AP+gvaeTAAAA CXBIWXMAAAsTAAALEwEAmpwYAAAAB3RJTUUH2AwCCS0kTriU2QAAAB10RVh0Q29tbWVudABD cmVhdGVkIHdpdGggVGhlIEdJTVDvZCVuAAAAXElEQVR42u3bQQEAMAgDMZiqiZtP5AwbfeQk NO/WvPtLMR0TABEQIAICRECACAgQAREQIAICRECACAgQAREQIAICRECACAgQAREQIAICRECA CAgQARGQ7NpPPasFT+0FZPjBRwYAAAAASUVORK5CYII= --ReaqsoxgOBHFXBhH-- /*****************Mime data end *******************************/ now the problem is i have to parse this data and use it in my application.since this data is not a xml so it difficult to parse it (because parsing with some tag is easy).so any one who knows how to parse mime data help be.i m using erlang to parse this data. Thank you in advance

    Read the article

  • Cannot get assembly version for footer

    - by Jaxidian
    I'm using the automatic build versioning mentioned in this question (not the selected answer but the answer that uses the [assembly: AssemblyVersion("1.0.*")] technique). I'm doing this in the footer of my Site.Master file in MVC 2. My code for doing this is as follows: <div id="footer"> <a href="mailto:[email protected]">[email protected]</a> - Copyright © 2005-<%= DateTime.Today.Year.ToString() %>, foo LLC. All Rights Reserved. - Version: <%= Assembly.GetEntryAssembly().GetName().Version.ToString() %> </div> The exception I get is an Object reference not set to an instance of an object because GetEntryAssembly() returns NULL. My other options don't work either. GetCallingAssembly() always returns "4.0.0.0" and GetExecutingAssembly() always returns "0.0.0.0". When I go look at my DLLs, everything is versioning as I would expect. But I cannot figure out how to access it to display in my footer!!

    Read the article

  • How can I debug why this click handler never fires?

    - by tixrus
    I am going to be excruciatingly detailed here. I am using Firefox 3.6.3 on Mac OS X with Firebug 1.5.3. I have two versions of a project, one which works and one with a bug. One I downloaded and one I typed by hand. Take a guess which one doesn't work. They should be the same except that mine uses a newer version of jQuery and the files are named differently. jQuery version is not the issue. I made mine use the older jQuery and I made the working one use the newer jQuery. Either way, mine still broke and the downloaded one still works. I've busted my eyes trying to see how these projects are different. The only thing I don't want to do is copy the working code to the busted code because I need to be able to figure this stuff out when it is my own unique code causing similar issues. There are no errors that I can see in Firebug in my code; in fact, 2/3 of it works just fine, just the second button does nothing. So I wanted to step through. These are always eyeball errors and I really suck at seeing them. I put it on a public server. http://colleenweb.com/jqtests/ex71.html And I want to debug ex71.js If you firebug the working one and set a break point at line 13 in ex71.js the variable json has the expected values when you click on the second button. But if you firebug this one, it never gets there. I've been over the HTML and all the names of everything seem to match up. I also wonder why the buttons aren't right-justified but that's a CSS thing. Please tell me what I'm missing, and more importantly, what tool/technique I could use to find these types of bugs.

    Read the article

  • Creating custom IP-STS for sharepoint foundation 2010 without ADFS

    - by user252229
    I plan to create a very simple custom IP-STS for SharePoint Foundation 2010 without an ADFS server, so anyone can integrate Windows Live ID into SharePoint Foundation 2010 simply, without ADFS. I can't use an ADFS server because it cannot be installed on Windows Web Server 2008 (Web Edition). I also found many articles that use an LDAP provider, but that does not exist in SharePoint Foundation either (it requires the SharePoint Server edition). After much searching I found the following articles and have the whole technique working except for one problem. 1) Creating Custom Claim Provider: blogs.technet.com/b/speschka/archive/2010/03/13/writing-a-custom-claims-provider-for-sharepoint-2010-part-1.aspx 2) Creating Custom STS Provider: http://blogs.msdn.com/b/chunliu/archive/2010/04/02/how-to-make-use-of-a-custom-ip-sts-with-sharepoint-2010-part-1.aspx Only one step remains: I get the following error after entering a username on the STS site and being redirected to localhost/_trust/default.aspx (I leave EncryptingCertificateName empty). Operation is not valid due to the current state of the object I expect to get an access denied error instead of that error. 1. Is it possible anyway? 2. Can anyone help me find a working article on creating a custom IP-STS without an ADFS server? Any idea will help me. Thanks

    Read the article

  • GWT JsonpRequestBuilder Timeout issue

    - by Anthony Kong
    I am getting a timeout when using JsonpRequestBuilder. The entry point code goes like this: // private static final String SERVER_URL = "http://localhost:8094/data/view/"; private static final String SERVER_URL = "http://www.google.com/calendar/feeds/[email protected]/public/full?alt=json-in-script&callback=insertAgenda&orderby=starttime&max-results=15&singleevents=true&sortorder=ascending&futureevents=true"; private static final String SERVER_ERROR = "An error occurred while " + "attempting to contact the server. Please check your network " + "connection and try again."; /** * This is the entry point method. */ public void onModuleLoad() { JsonpRequestBuilder requestBuilder = new JsonpRequestBuilder(); // requestBuilder.setTimeout(10000); requestBuilder.requestObject(SERVER_URL, new Jazz10RequestCallback()); } class Jazz10RequestCallback implements AsyncCallback<Article> { @Override public void onFailure(Throwable caught) { Window.alert("Failed to send the message: " + caught.getMessage()); } @Override public void onSuccess(Article result) { // TODO Auto-generated method stub Window.alert(result.toString()); } } The article class is simply: import com.google.gwt.core.client.JavaScriptObject; public class Article extends JavaScriptObject { protected Article() {}; } The GWT page, however, always hits the onFailure() callback and shows this alert: Failed to send the message. Timeout while calling <url>. I don't see anything in the Eclipse plugin console. I tried the URL and it works perfectly. I would appreciate any tips on debugging techniques, or other suggestions.

    Read the article

  • Javascript/CSS rollover menus are patented and subject to licensing?

    - by Scott B
    Very interesting finding that a client brought to my attention today regarding JavaScript-style rollover menus. They got a call from their legal dept that they need to change the manner in which their rollover menu is activated (at the risk of having to pay a license to continue using the navigation technique). It's no April Fools' joke; apparently this is really happening. Apparently a company named Webvention LLC has obtained enforcement rights to a patent, U.S. Patent No. 5,251,294 - "Accessing, assembling, and using bodies of Information." A menu link that, when rolled over, expands to show a list of categorized, related links. Dropdown menus and slide-out menus are examples of this patented navigational methodology. A key component of this patent is that the dropdown/slide-out action must be initiated by a rollover or mouseover event. If the dropdown/slide-out action is initiated by any other event, such as a mouse-click event, then this behavior is not in violation of the patent. Anyone ever heard of this or know of the validity of its claims? Website is here: http://www.webventionllc.com/

    Read the article

  • Producing an view of a text's revision history in Python

    - by hekevintran
    I have two versions of a piece of text and I want to produce an HTML view of its revision similar to what Google Docs or Stack Overflow displays. I need to do this in Python. I don't know what this technique is called but I assume that it has a name and hopefully there is a Python library that can do it. Version 1: William Henry "Bill" Gates III (born October 28, 1955)[2] is an American business magnate, philanthropist, and chairman[3] of Microsoft, the software company he founded with Paul Allen. Version 2: William Henry "Bill" Gates III (born October 28, 1955)[2] is a business magnate, philanthropist, and chairman[3] of Microsoft, the software company he founded with Paul Allen. He is American. The desired output: William Henry "Bill" Gates III (born October 28, 1955)[2] is an American business magnate, philanthropist, and chairman[3] of Microsoft, the software company he founded with Paul Allen. He is American. Using the diff command doesn't work because it tells me which lines are different but not which columns/words are different. $ echo 'William Henry "Bill" Gates III (born October 28, 1955)[2] is an American business magnate, philanthropist, and chairman[3] of Microsoft, the software company he founded with Paul Allen.' > oldfile $ echo 'William Henry "Bill" Gates III (born October 28, 1955)[2] is a business magnate, philanthropist, and chairman[3] of Microsoft, the software company he founded with Paul Allen. He is American.' > newfile $ diff -u oldfile newfile --- oldfile 2010-04-30 13:32:43.000000000 -0700 +++ newfile 2010-04-30 13:33:09.000000000 -0700 @@ -1 +1 @@ -William Henry "Bill" Gates III (born October 28, 1955)[2] is an American business magnate, philanthropist, and chairman[3] of Microsoft, the software company he founded with Paul Allen. +William Henry "Bill" Gates III (born October 28, 1955)[2] is a business magnate, philanthropist, and chairman[3] of Microsoft, the software company he founded with Paul Allen. He is American.

    Read the article
