Search Results

Search found 7854 results on 315 pages for 'resource dictionary'.


  • Modifying the SL/WIF Integration Bits to support Issued Token Credentials

    - by Your DisplayName here!
    The SL/WIF integration code that ships with the Identity Training Kit only supports Windows and UserName credentials to request tokens from an STS. This is fine for simple single STS scenarios (like a single IdP). But the more common pattern for claims/token based systems is to split the STS roles into an IdP and a Resource STS (or whatever you wanna call it). In this case, the 2nd leg requires presenting the issued token from the 1st leg – this is not directly supported by the bits. But they can be easily modified to accomplish this. The Credential First we need a class that represents an issued token credential. Here we store the RSTR that was returned from the client-to-IdP request: public class IssuedTokenCredentials : IRequestCredentials { public string IssuedToken { get; set; } public RequestSecurityTokenResponse RSTR { get; set; } public IssuedTokenCredentials(RequestSecurityTokenResponse rstr) { RSTR = rstr; IssuedToken = rstr.RequestedSecurityToken.RawToken; } } The Binding Next we need a binding to be used with issued token credential requests. This assumes you have an STS endpoint for mixed mode security with SecureConversation turned off. public class WSTrustBindingIssuedTokenMixed : WSTrustBinding { public WSTrustBindingIssuedTokenMixed() { this.Elements.Add( new HttpsTransportBindingElement() ); } } WSTrustClient The last step is to make some modifications to WSTrustClient to make it issued token aware. In the constructor you have to check for the credential type, and if it is an issued token, store it away. private RequestSecurityTokenResponse _rstr; public WSTrustClient( Binding binding, EndpointAddress remoteAddress, IRequestCredentials credentials ) : base( binding, remoteAddress ) { if ( null == credentials ) { throw new ArgumentNullException( "credentials" ); } if (credentials is UsernameCredentials) { UsernameCredentials username = credentials as UsernameCredentials; base.ChannelFactory.Credentials.UserName.UserName = username.Username; base.ChannelFactory.Credentials.UserName.Password = username.Password; } else if (credentials is IssuedTokenCredentials) { var issuedToken = credentials as IssuedTokenCredentials; _rstr = issuedToken.RSTR; } else if (credentials is WindowsCredentials) { } else { throw new ArgumentOutOfRangeException("credentials", "type was not expected"); } } Next – when WSTrustClient constructs the RST message to the STS, the issued token header must be embedded when needed: private Message BuildRequestAsMessage( RequestSecurityToken request ) { var message = Message.CreateMessage( base.Endpoint.Binding.MessageVersion ?? MessageVersion.Default, IssueAction, (BodyWriter) new WSTrustRequestBodyWriter( request ) ); if (_rstr != null) { message.Headers.Add(new IssuedTokenHeader(_rstr)); } return message; } HTH
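
    As a rough usage sketch (not from the original post) of how the two legs chain together: the first leg uses the existing UserName support against the IdP, and the RSTR it returns is wrapped in the new IssuedTokenCredentials for the second leg against the Resource STS. The endpoint URLs, the WSTrustBindingUsernameMixed binding name and the UsernameCredentials constructor shape are assumptions about the training kit bits, and the actual (async) token request call is omitted because its exact signature depends on your copy of the bits.

    // Leg 1: authenticate against the IdP with username/password (existing support).
    // Binding/credential names and URLs below are placeholders, not taken from the post.
    var idpClient = new WSTrustClient(
        new WSTrustBindingUsernameMixed(),
        new EndpointAddress("https://idp.example.com/sts/username/mixed"),
        new UsernameCredentials("bob", "secret"));

    // ... request the token asynchronously with idpClient; once the RSTR arrives:
    RequestSecurityTokenResponse rstrFromIdp = null; // result of leg 1

    // Leg 2: present the issued token to the Resource STS using the credential type
    // and binding introduced in this post.
    var rstsClient = new WSTrustClient(
        new WSTrustBindingIssuedTokenMixed(),
        new EndpointAddress("https://rsts.example.com/sts/issuedtoken/mixed"),
        new IssuedTokenCredentials(rstrFromIdp));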

    Read the article

  • Clouds, Clouds, Clouds Everywhere, Not a Drop of Rain!

    - by sxkumar
    At the recently concluded Oracle OpenWorld 2012, the center of discussion was clearly Cloud. Over the five action-packed days, I got to meet a large number of customers and most of them had serious interest in all things cloud. Public Cloud - particularly the Oracle Cloud - clearly got a lot of attention and interest. I think the use cases and the value proposition for public cloud are pretty straightforward. However, when it comes to private cloud, there were some interesting revelations. Well, I shouldn’t really call them revelations since they are pretty consistent with what I have heard from customers at other conferences as well as during 1:1 interactions. While the interest in enterprise private cloud remains very high, only a handful of enterprises have truly embarked on a journey to create what the purists would call true private cloud - with capabilities such as self-service and chargeback/showback. For a large majority, today's reality is simply consolidation and virtualization - and they are quite far off from creating an agile, self-service and transparent IT infrastructure, which is what the enterprise cloud is all about. Even the handful who have actually implemented a close-to-real enterprise private cloud have taken an infrastructure-centric approach and are seeing only limited business upside. Quite a few were frank enough to admit that chargeback and self-service aren’t something they see an immediate need for. This is in stark contrast to the picture being painted by all those surveys out there that show a large number of enterprises having already implemented an enterprise private cloud. On the face of it, this seems quite contrary to the observations outlined above. So what exactly is the reality? Well, the reality is that there is undoubtedly a huge amount of interest among enterprises in transforming their legacy IT environment - which is often seen as too rigid, too fragmented, and ultimately too expensive - into something more agile, transparent and business-focused. At the same time, however, there is a great deal of confusion among CIOs and architects about how to get there. This isn't very surprising given all the buzz and hype surrounding cloud computing. Every IT vendor claims to have the most unique solution and there isn't a single IT product out there that does not have a cloud angle to it. Add to this the chatter in the blogosphere, and it will set even a sane mind spinning. Consequently, most enterprises are still struggling to fully understand the concept and value of enterprise private cloud. Even among those who have chosen to move forward relatively early, quite a few have made their decisions based more on vendor influence/preferences than on what their businesses actually need. Clearly, there is a disconnect between the promise of the enterprise private cloud and the current adoption trends. So what is the way forward? I certainly do not claim to have all the answers. But here is a perspective that many cloud practitioners have found useful and thus worth sharing. To take a step back, the fundamental premise of the enterprise private cloud is IT transformation. It is the quest to create a more agile, transparent and efficient IT infrastructure that is driven by business needs rather than constrained by operational and procedural inefficiencies.
It is the new way of delivering and consuming IT services - where the IT organizations operate more like enablers of strategic services rather than just being the gatekeepers of IT resources. In an enterprise private cloud environment, IT organizations are expected to empower the end users via self-service access/control and provide the business stakeholders a transparent view of how the resources are being used, what’s the cost of delivering a given service, how well the customers are being served, etc. But the most important thing to note here is that the enterprise private cloud is not just an IT project; rather it is a business initiative to create an IT setup that is more aligned with the needs of today's dynamic and highly competitive business environment. Surprised? You shouldn’t be. Just remember how the business users have been at the forefront of public cloud adoption within enterprises, and private cloud is no exception. Such a broad-based transformation makes cloud more than a technology initiative. It requires people (organizational) and process changes as well, and these changes are as critical as the choice of the right tools and technology. In my next blog, I will share how essential it is for enterprise cloud technology to go hand-in-hand with process re-engineering and organizational changes to unlock the true value of the enterprise cloud. I am sharing a short video from my session "Managing your private Cloud" at Oracle OpenWorld 2012. More videos from this session will be posted at the recently introduced Zero to Cloud resource page. Many other experts on the Oracle enterprise private cloud solution will join me on the "Zero to Cloud" blog and share best practices, deployment tips and information on how to plan, build, deploy, monitor, manage, meter and optimize the enterprise private cloud. We look forward to your feedback, suggestions and having an engaging conversation with you on this blog.

    Read the article

  • Implementing a Custom Coherence PartitionAssignmentStrategy

    - by jpurdy
    A recent A-Team engagement required the development of a custom PartitionAssignmentStrategy (PAS). By way of background, a PAS is an implementation of a Java interface that controls how a Coherence partitioned cache service assigns partitions (primary and backup copies) across the available set of storage-enabled members. While seemingly straightforward, this is actually a very difficult problem to solve. Traditionally, Coherence used a distributed algorithm spread across the cache servers (and as of Coherence 3.7, this is still the default implementation). With the introduction of the PAS interface, the model of operation was changed so that the logic would run solely in the cache service senior member. Obviously, this makes the development of a custom PAS vastly less complex, and in practice does not introduce a significant single point of failure/bottleneck. Note that Coherence ships with a default PAS implementation but it is not used by default. Further, custom PAS implementations are uncommon (this engagement was the first custom implementation that we know of). The particular implementation mentioned above also faced challenges related to managing multiple backup copies but that won't be discussed here. There were a few challenges that arose during design and implementation: Naive algorithms had an unreasonable upper bound of computational cost. There was significant complexity associated with configurations where the member count varied significantly between physical machines. Most of the complexity of a PAS is related to rebalancing, not initial assignment (which is usually fairly simple). A custom PAS may need to solve several problems simultaneously, such as: Ensuring that each member has a similar number of primary and backup partitions (e.g. each member has the same number of primary and backup partitions) Ensuring that each member carries similar responsibility (e.g. the most heavily loaded member has no more than one partition more than the least loaded). Ensuring that each partition is on the same member as a corresponding local resource (e.g. for applications that use partitioning across message queues, to ensure that each partition is collocated with its corresponding message queue). Ensuring that a given member holds no more than a given number of partitions (e.g. no member has more than 10 partitions) Ensuring that backups are placed far enough away from the primaries (e.g. on a different physical machine or a different blade enclosure) Achieving the above goals while ensuring that partition movement is minimized. These objectives can be even more complicated when the topology of the cluster is irregular. For example, if multiple cluster members may exist on each physical machine, then clearly the possibility exists that at certain points (e.g. following a member failure), the number of members on each machine may vary, in certain cases significantly so. Consider the case where there are three physical machines, with 3, 3 and 9 members respectively. This introduces complexity since the backups for the 9 members on the largest machine must be spread across the other 6 members (to ensure placement on different physical machines), preventing an even distribution. For any given problem like this, there are usually reasonable compromises available, but the key point is that objectives may conflict under extreme (but not at all unlikely) circumstances.
The most obvious general purpose partition assignment algorithm (possibly the only general purpose one) is to define a scoring function for a given mapping of partitions to members, and then apply that function to each possible permutation, selecting the most optimal permutation. This would result in N! (factorial) evaluations of the scoring function. This is clearly impractical for all but the smallest values of N (e.g. a partition count in the single digits). It's difficult to prove that more efficient general purpose algorithms don't exist, but the key take away from this is that algorithms will tend to either have exorbitant worst case performance or may fail to find optimal solutions (or both) -- it is very important to be able to show that worst case performance is acceptable. This quickly leads to the conclusion that the problem must be further constrained, perhaps by limiting functionality or by using domain-specific optimizations. Unfortunately, it can be very difficult to design these more focused algorithms. In the specific case mentioned, we constrained the solution space to very small clusters (in terms of machine count) with small partition counts and supported exactly two backup copies, and accepted the fact that partition movement could potentially be significant (preferring to solve that issue through brute force). We then used the out-of-the-box PAS implementation as a fallback, delegating to it for configurations that were not supported by our algorithm. Our experience was that the PAS interface is quite usable, but there are intrinsic challenges to designing PAS implementations that should be very carefully evaluated before committing to that approach.
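
    To make the scoring-function idea above concrete, here is a small, framework-independent sketch; it deliberately does not use the actual Coherence PartitionAssignmentStrategy interface, and the class name and score definition are purely illustrative. Rather than evaluating all N! permutations, it greedily places each partition on the member that currently yields the lowest score, where the score is the spread between the most and least loaded member:

    import java.util.*;

    public class GreedyAssignmentSketch {

        // Score a candidate distribution: the spread between the most and least
        // loaded member (0 means perfectly balanced).
        static int score(Map<Integer, List<Integer>> assignment) {
            int min = Integer.MAX_VALUE, max = 0;
            for (List<Integer> parts : assignment.values()) {
                min = Math.min(min, parts.size());
                max = Math.max(max, parts.size());
            }
            return max - min;
        }

        public static void main(String[] args) {
            int members = 4, partitions = 257;
            Map<Integer, List<Integer>> assignment = new HashMap<>();
            for (int m = 0; m < members; m++) assignment.put(m, new ArrayList<>());

            for (int p = 0; p < partitions; p++) {
                int bestMember = -1, bestScore = Integer.MAX_VALUE;
                for (int m = 0; m < members; m++) {
                    assignment.get(m).add(p);              // try the partition here
                    int s = score(assignment);
                    assignment.get(m).remove((Integer) p); // undo the trial placement
                    if (s < bestScore) { bestScore = s; bestMember = m; }
                }
                assignment.get(bestMember).add(p);
            }
            assignment.forEach((m, parts) ->
                System.out.println("member " + m + " owns " + parts.size() + " partitions"));
        }
    }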

    Read the article

  • Texture will not apply to my 3d Cube directX

    - by numerical25
    I am trying to apply a texture onto my 3d cube but it is not showing up correctly. I believe that it might some what be working because the cube is all brown which is almost the same complexion as the texture. And I did not originally make the cube brown. These are the steps I've done to add the texture I first declared 2 new varibles ID3D10EffectShaderResourceVariable* pTextureSR; ID3D10ShaderResourceView* textureSRV; I also added a variable and a struct to my shader .fx file Texture2D tex2D; SamplerState linearSampler { Filter = MIN_MAG_MIP_LINEAR; AddressU = Wrap; AddressV = Wrap; }; I then grabbed the image from my local hard drive from within the .cpp file. I believe this was successful, I checked all varibles for errors, everything has a memory address. Plus I pulled resources before and never had a problem. D3DX10CreateShaderResourceViewFromFile(mpD3DDevice,L"crate.jpg",NULL,NULL,&textureSRV,NULL); I grabbed the tex2d varible from my fx file and placed into my resource varible pTextureSR = modelObject.pEffect->GetVariableByName("tex2D")->AsShaderResource(); And added the resource to the varible pTextureSR->SetResource(textureSRV); I also added the extra property to my vertex layout D3D10_INPUT_ELEMENT_DESC layout[] = { {"POSITION",0,DXGI_FORMAT_R32G32B32_FLOAT, 0 , 0, D3D10_INPUT_PER_VERTEX_DATA, 0}, {"COLOR",0,DXGI_FORMAT_R32G32B32A32_FLOAT, 0 , 12, D3D10_INPUT_PER_VERTEX_DATA, 0}, {"NORMAL",0,DXGI_FORMAT_R32G32B32A32_FLOAT, 0 , 24, D3D10_INPUT_PER_VERTEX_DATA, 0}, {"TEXCOORD",0, DXGI_FORMAT_R32G32_FLOAT, 0 , 36, D3D10_INPUT_PER_VERTEX_DATA, 0} }; as well as my struct struct VertexPos { D3DXVECTOR3 pos; D3DXVECTOR4 color; D3DXVECTOR3 normal; D3DXVECTOR2 texCoord; }; Then I created a new pixel shader that adds the texture to it. Below is the code in its entirety matrix Projection; matrix WorldMatrix; Texture2D tex2D; float3 lightSource; float4 lightColor = {0.5, 0.5, 0.5, 0.5}; // PS_INPUT - input variables to the pixel shader // This struct is created and fill in by the // vertex shader struct PS_INPUT { float4 Pos : SV_POSITION; float4 Color : COLOR0; float4 Normal : NORMAL; float2 Tex : TEXCOORD; }; SamplerState linearSampler { Filter = MIN_MAG_MIP_LINEAR; AddressU = Wrap; AddressV = Wrap; }; //////////////////////////////////////////////// // Vertex Shader - Main Function /////////////////////////////////////////////// PS_INPUT VS(float4 Pos : POSITION, float4 Color : COLOR, float4 Normal : NORMAL, float2 Tex : TEXCOORD) { PS_INPUT psInput; // Pass through both the position and the color psInput.Pos = mul( Pos, Projection ); psInput.Normal = Normal; psInput.Tex = Tex; return psInput; } /////////////////////////////////////////////// // Pixel Shader /////////////////////////////////////////////// float4 PS(PS_INPUT psInput) : SV_Target { float4 finalColor = 0; finalColor = saturate(dot(lightSource, psInput.Normal) * lightColor); return finalColor; } float4 textured( PS_INPUT psInput ) : SV_Target { return tex2D.Sample( linearSampler, psInput.Tex ); } // Define the technique technique10 Render { pass P0 { SetVertexShader( CompileShader( vs_4_0, VS() ) ); SetGeometryShader( NULL ); SetPixelShader( CompileShader( ps_4_0, textured() ) ); } } Below is my CPU code. It maybe a little sloppy. But I am just adding code anywhere cause I am just experimenting and playing around. You should find most of the texture code at the bottom createObject #include "MyGame.h" #include "OneColorCube.h" /* This code sets a projection and shows a turning cube. 
What has been added is the project, rotation and a rasterizer to change the rasterization of the cube. The issue that was going on was something with the effect file which was causing the vertices not to be rendered correctly.*/ typedef struct { ID3D10Effect* pEffect; ID3D10EffectTechnique* pTechnique; //vertex information ID3D10Buffer* pVertexBuffer; ID3D10Buffer* pIndicesBuffer; ID3D10InputLayout* pVertexLayout; UINT numVertices; UINT numIndices; }ModelObject; ModelObject modelObject; // World Matrix D3DXMATRIX WorldMatrix; // View Matrix D3DXMATRIX ViewMatrix; // Projection Matrix D3DXMATRIX ProjectionMatrix; ID3D10EffectMatrixVariable* pProjectionMatrixVariable = NULL; ID3D10EffectMatrixVariable* pWorldMatrixVarible = NULL; ID3D10EffectVectorVariable* pLightVarible = NULL; ID3D10EffectShaderResourceVariable* pTextureSR; bool MyGame::InitDirect3D() { if(!DX3dApp::InitDirect3D()) { return false; } D3D10_RASTERIZER_DESC rastDesc; rastDesc.FillMode = D3D10_FILL_WIREFRAME; rastDesc.CullMode = D3D10_CULL_FRONT; rastDesc.FrontCounterClockwise = true; rastDesc.DepthBias = false; rastDesc.DepthBiasClamp = 0; rastDesc.SlopeScaledDepthBias = 0; rastDesc.DepthClipEnable = false; rastDesc.ScissorEnable = false; rastDesc.MultisampleEnable = false; rastDesc.AntialiasedLineEnable = false; ID3D10RasterizerState *g_pRasterizerState; mpD3DDevice->CreateRasterizerState(&rastDesc, &g_pRasterizerState); //mpD3DDevice->RSSetState(g_pRasterizerState); // Set up the World Matrix D3DXMatrixIdentity(&WorldMatrix); D3DXMatrixLookAtLH(&ViewMatrix, new D3DXVECTOR3(0.0f, 10.0f, -20.0f), new D3DXVECTOR3(0.0f, 0.0f, 0.0f), new D3DXVECTOR3(0.0f, 1.0f, 0.0f)); // Set up the projection matrix D3DXMatrixPerspectiveFovLH(&ProjectionMatrix, (float)D3DX_PI * 0.5f, (float)mWidth/(float)mHeight, 0.1f, 100.0f); if(!CreateObject()) { return false; } return true; } //These are actions that take place after the clearing of the buffer and before the present void MyGame::GameDraw() { static float rotationAngleY = 15.0f; static float rotationAngleX = 0.0f; static D3DXMATRIX rotationXMatrix; static D3DXMATRIX rotationYMatrix; D3DXMatrixIdentity(&rotationXMatrix); D3DXMatrixIdentity(&rotationYMatrix); // create the rotation matrix using the rotation angle D3DXMatrixRotationY(&rotationYMatrix, rotationAngleY); D3DXMatrixRotationX(&rotationXMatrix, rotationAngleX); rotationAngleY += (float)D3DX_PI * 0.0008f; rotationAngleX += (float)D3DX_PI * 0.0005f; WorldMatrix = rotationYMatrix * rotationXMatrix; // Set the input layout mpD3DDevice->IASetInputLayout(modelObject.pVertexLayout); pWorldMatrixVarible->SetMatrix((float*)&WorldMatrix); // Set vertex buffer UINT stride = sizeof(VertexPos); UINT offset = 0; mpD3DDevice->IASetVertexBuffers(0, 1, &modelObject.pVertexBuffer, &stride, &offset); // Set primitive topology mpD3DDevice->IASetPrimitiveTopology(D3D10_PRIMITIVE_TOPOLOGY_TRIANGLELIST); //ViewMatrix._43 += 0.005f; // Combine and send the final matrix to the shader D3DXMATRIX finalMatrix = (WorldMatrix * ViewMatrix * ProjectionMatrix); pProjectionMatrixVariable->SetMatrix((float*)&finalMatrix); // make sure modelObject is valid // Render a model object D3D10_TECHNIQUE_DESC techniqueDescription; modelObject.pTechnique->GetDesc(&techniqueDescription); // Loop through the technique passes for(UINT p=0; p < techniqueDescription.Passes; ++p) { modelObject.pTechnique->GetPassByIndex(p)->Apply(0); // draw the cube using all 36 vertices and 12 triangles mpD3DDevice->Draw(36,0); } } //Render actually incapsulates Gamedraw, so you can call data 
before you actually clear the buffer or after you //present data void MyGame::Render() { DX3dApp::Render(); } bool MyGame::CreateObject() { //Create Layout D3D10_INPUT_ELEMENT_DESC layout[] = { {"POSITION",0,DXGI_FORMAT_R32G32B32_FLOAT, 0 , 0, D3D10_INPUT_PER_VERTEX_DATA, 0}, {"COLOR",0,DXGI_FORMAT_R32G32B32A32_FLOAT, 0 , 12, D3D10_INPUT_PER_VERTEX_DATA, 0}, {"NORMAL",0,DXGI_FORMAT_R32G32B32A32_FLOAT, 0 , 24, D3D10_INPUT_PER_VERTEX_DATA, 0}, {"TEXCOORD",0, DXGI_FORMAT_R32G32_FLOAT, 0 , 36, D3D10_INPUT_PER_VERTEX_DATA, 0} }; UINT numElements = (sizeof(layout)/sizeof(layout[0])); modelObject.numVertices = sizeof(vertices)/sizeof(VertexPos); for(int i = 0; i < modelObject.numVertices; i += 3) { D3DXVECTOR3 out; D3DXVECTOR3 v1 = vertices[0 + i].pos; D3DXVECTOR3 v2 = vertices[1 + i].pos; D3DXVECTOR3 v3 = vertices[2 + i].pos; D3DXVECTOR3 u = v2 - v1; D3DXVECTOR3 v = v3 - v1; D3DXVec3Cross(&out, &u, &v); D3DXVec3Normalize(&out, &out); vertices[0 + i].normal = out; vertices[1 + i].normal = out; vertices[2 + i].normal = out; } //Create buffer desc D3D10_BUFFER_DESC bufferDesc; bufferDesc.Usage = D3D10_USAGE_DEFAULT; bufferDesc.ByteWidth = sizeof(VertexPos) * modelObject.numVertices; bufferDesc.BindFlags = D3D10_BIND_VERTEX_BUFFER; bufferDesc.CPUAccessFlags = 0; bufferDesc.MiscFlags = 0; D3D10_SUBRESOURCE_DATA initData; initData.pSysMem = vertices; //Create the buffer HRESULT hr = mpD3DDevice->CreateBuffer(&bufferDesc, &initData, &modelObject.pVertexBuffer); if(FAILED(hr)) return false; /* //Create indices DWORD indices[] = { 0,1,3, 1,2,3 }; ModelObject.numIndices = sizeof(indices)/sizeof(DWORD); bufferDesc.ByteWidth = sizeof(DWORD) * ModelObject.numIndices; bufferDesc.BindFlags = D3D10_BIND_INDEX_BUFFER; initData.pSysMem = indices; hr = mpD3DDevice->CreateBuffer(&bufferDesc, &initData, &ModelObject.pIndicesBuffer); if(FAILED(hr)) return false;*/ ///////////////////////////////////////////////////////////////////////////// //Set up fx files LPCWSTR effectFilename = L"effect.fx"; modelObject.pEffect = NULL; hr = D3DX10CreateEffectFromFile(effectFilename, NULL, NULL, "fx_4_0", D3D10_SHADER_ENABLE_STRICTNESS, 0, mpD3DDevice, NULL, NULL, &modelObject.pEffect, NULL, NULL); if(FAILED(hr)) return false; pProjectionMatrixVariable = modelObject.pEffect->GetVariableByName("Projection")->AsMatrix(); pWorldMatrixVarible = modelObject.pEffect->GetVariableByName("WorldMatrix")->AsMatrix(); pTextureSR = modelObject.pEffect->GetVariableByName("tex2D")->AsShaderResource(); ID3D10ShaderResourceView* textureSRV; D3DX10CreateShaderResourceViewFromFile(mpD3DDevice,L"crate.jpg",NULL,NULL,&textureSRV,NULL); pLightVarible = modelObject.pEffect->GetVariableByName("lightSource")->AsVector(); //Dont sweat the technique. Get it! LPCSTR effectTechniqueName = "Render"; D3DXVECTOR3 vLight(1.0f, 1.0f, 1.0f); pLightVarible->SetFloatVector(vLight); modelObject.pTechnique = modelObject.pEffect->GetTechniqueByName(effectTechniqueName); if(modelObject.pTechnique == NULL) return false; pTextureSR->SetResource(textureSRV); //Create Vertex layout D3D10_PASS_DESC passDesc; modelObject.pTechnique->GetPassByIndex(0)->GetDesc(&passDesc); hr = mpD3DDevice->CreateInputLayout(layout, numElements, passDesc.pIAInputSignature, passDesc.IAInputSignatureSize, &modelObject.pVertexLayout); if(FAILED(hr)) return false; return true; } And here is my cube coordinates. I actually only added coordinates to one side. And that is the front side. 
To double check I flipped the cube in all directions just to make sure i didnt accidentally place the text on the incorrect side //Create vectors and put in vertices // Create vertex buffer VertexPos vertices[] = { // BACK SIDES { D3DXVECTOR3(-5.0f, 5.0f, 5.0f), D3DXVECTOR4(1.0f,0.0f,0.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(-5.0f, -5.0f, 5.0f), D3DXVECTOR4(1.0f,0.0f,0.0f,0.0f), D3DXVECTOR2(1.0,0.0)}, { D3DXVECTOR3(5.0f, 5.0f, 5.0f), D3DXVECTOR4(1.0f,0.0f,0.0f,0.0f), D3DXVECTOR2(0.0,1.0)}, { D3DXVECTOR3(5.0f, 5.0f, 5.0f), D3DXVECTOR4(1.0f,0.0f,0.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(-5.0f, -5.0f, 5.0f), D3DXVECTOR4(1.0f,0.0f,0.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(5.0f, -5.0f, 5.0f), D3DXVECTOR4(1.0f,0.0f,0.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, // 2 FRONT SIDE { D3DXVECTOR3(-5.0f, 5.0f, -5.0f), D3DXVECTOR4(0.0f,1.0f,0.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(5.0f, 5.0f, -5.0f), D3DXVECTOR4(0.0f,1.0f,0.0f,0.0f), D3DXVECTOR2(2.0,0.0)}, { D3DXVECTOR3(-5.0f, -5.0f, -5.0f), D3DXVECTOR4(0.0f,1.0f,0.0f,0.0f), D3DXVECTOR2(0.0,2.0)}, { D3DXVECTOR3(-5.0f, -5.0f, -5.0f), D3DXVECTOR4(0.0f,1.0f,0.0f,0.0f), D3DXVECTOR2(0.0,2.0)}, { D3DXVECTOR3(5.0f, 5.0f, -5.0f), D3DXVECTOR4(0.0f,1.0f,0.0f,0.0f) , D3DXVECTOR2(2.0,0.0)}, { D3DXVECTOR3(5.0f, -5.0f, -5.0f), D3DXVECTOR4(0.0f,1.0f,0.0f,0.0f), D3DXVECTOR2(2.0,2.0)}, // 3 { D3DXVECTOR3(-5.0f, 5.0f, 5.0f), D3DXVECTOR4(0.0f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(5.0f, 5.0f, 5.0f), D3DXVECTOR4(0.0f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(-5.0f, 5.0f, -5.0f), D3DXVECTOR4(0.0f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(-5.0f, 5.0f, -5.0f), D3DXVECTOR4(0.0f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(5.0f, 5.0f, 5.0f), D3DXVECTOR4(0.0f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(5.0f, 5.0f, -5.0f), D3DXVECTOR4(0.0f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, // 4 { D3DXVECTOR3(-5.0f, -5.0f, 5.0f), D3DXVECTOR4(1.0f,0.5f,0.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(-5.0f, -5.0f, -5.0f), D3DXVECTOR4(1.0f,0.5f,0.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(5.0f, -5.0f, 5.0f), D3DXVECTOR4(1.0f,0.5f,0.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(5.0f, -5.0f, 5.0f), D3DXVECTOR4(1.0f,0.5f,0.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(-5.0f, -5.0f, -5.0f), D3DXVECTOR4(1.0f,0.5f,0.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(5.0f, -5.0f, -5.0f), D3DXVECTOR4(1.0f,0.5f,0.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, // 5 { D3DXVECTOR3(5.0f, 5.0f, -5.0f), D3DXVECTOR4(0.0f,1.0f,0.5f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(5.0f, 5.0f, 5.0f), D3DXVECTOR4(0.0f,1.0f,0.5f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(5.0f, -5.0f, -5.0f), D3DXVECTOR4(0.0f,1.0f,0.5f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(5.0f, -5.0f, -5.0f), D3DXVECTOR4(0.0f,1.0f,0.5f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(5.0f, 5.0f, 5.0f), D3DXVECTOR4(0.0f,1.0f,0.5f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(5.0f, -5.0f, 5.0f), D3DXVECTOR4(0.0f,1.0f,0.5f,0.0f), D3DXVECTOR2(0.0,0.0)}, // 6 {D3DXVECTOR3(-5.0f, 5.0f, -5.0f), D3DXVECTOR4(0.5f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, {D3DXVECTOR3(-5.0f, -5.0f, -5.0f), D3DXVECTOR4(0.5f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, {D3DXVECTOR3(-5.0f, 5.0f, 5.0f), D3DXVECTOR4(0.5f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, {D3DXVECTOR3(-5.0f, 5.0f, 5.0f), D3DXVECTOR4(0.5f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, {D3DXVECTOR3(-5.0f, -5.0f, -5.0f), D3DXVECTOR4(0.5f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, {D3DXVECTOR3(-5.0f, -5.0f, 5.0f), D3DXVECTOR4(0.5f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, 
};

    Read the article

  • Data Source Security Part 4

    - by Steve Felts
    So far, I have covered Client Identity and Oracle Proxy Session features, with WLS or database credentials.  This article will cover one more feature, Identity-based pooling.  Then, there is one more topic to cover - how these options play with transactions. Identity-based Connection Pooling: An identity-based pool creates a heterogeneous pool of connections.  This allows applications to use a JDBC connection with a specific DBMS credential by pooling physical connections with different DBMS credentials.  The DBMS credential is based on either the WebLogic user mapped to a database user or the database user directly, based on the “use database credentials” setting as described earlier. Using this feature with “use database credentials” enabled seems to be what is proposed in the JDBC standard, basically a heterogeneous pool with users specified by getConnection(user, password). The allocation of connections is more complex if the Enable Identity Based Connection Pooling attribute is enabled on the data source.  When an application requests a database connection, the WebLogic Server instance selects an existing physical connection or creates a new physical connection with the requested DBMS identity. The following section provides information on how heterogeneous connections are created: 1. At connection pool initialization, the physical JDBC connections based on the configured or default “initial capacity” are created with the configured default DBMS credential of the data source. 2. An application tries to get a connection from a data source. 3a. If “use database credentials” is not enabled, the user specified in getConnection is mapped to a DBMS credential, as described earlier.  If the credential map doesn’t have a matching user, the default DBMS credential is used from the datasource descriptor. 3b. If “use database credentials” is enabled, the user and password specified in getConnection are used directly. 4. The connection pool is searched for a connection with a matching DBMS credential. 5. If a match is found, the connection is reserved and returned to the application. 6. If no match is found, a connection is created or reused based on the maximum capacity of the pool: - If the maximum capacity has not been reached, a new connection is created with the DBMS credential, reserved, and returned to the application. - If the pool has reached maximum capacity, based on the least recently used (LRU) algorithm, a physical connection is selected from the pool and destroyed. A new connection is created with the DBMS credential, reserved, and returned to the application. It should be clear that finding a matching connection is more expensive than in a homogeneous pool.  Destroying a connection and getting a new one is very expensive.  If you can use a normal homogeneous pool or one of the light-weight options (client identity or an Oracle proxy connection), those should be used instead of identity-based pooling. Regardless of how physical connections are created, each physical connection in the pool has its own DBMS credential information maintained by the pool. Once a physical connection is reserved by the pool, it does not change its DBMS credential even if the current thread changes its WebLogic user credential and continues to use the same connection. To configure this feature, select Enable Identity Based Connection Pooling.
See http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/EnableIdentityBasedConnectionPooling.html  "Enable identity-based connection pooling for a JDBC data source" in Oracle WebLogic Server Administration Console Help. You must make the following changes to use Logging Last Resource (LLR) transaction optimization with Identity-based Pooling to get around the problem that multiple users will be accessing the associated transaction table.- You must configure a custom schema for LLR using a fully qualified LLR table name. All LLR connections will then use the named schema rather than the default schema when accessing the LLR transaction table.  - Use database specific administration tools to grant permission to access the named LLR table to all users that could access this table via a global transaction. By default, the LLR table is created during boot by the user configured for the connection in the data source. In most cases, the database will only allow access to this user and not allow access to mapped users. Connections within Transactions Now that we have covered the behavior of all of these various options, it’s time to discuss the exception to all of the rules.  When you get a connection within a transaction, it is associated with the transaction context on a particular WLS instance. When getting a connection with a data source configured with non-XA LLR or 1PC (using the JTS driver) with global transactions, the first connection obtained within the transaction is returned on subsequent connection requests regardless of the values of username/password specified and independent of the associated proxy user session, if any. The connection must be shared among all users of the connection when using LLR or 1PC. For XA data sources, the first connection obtained within the global transaction is returned on subsequent connection requests within the application server, regardless of the values of username/password specified and independent of the associated proxy user session, if any.  The connection must be shared among all users of the connection within a global transaction within the application server/JVM.
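
    As a minimal sketch of what the application side of identity-based pooling looks like when “use database credentials” is enabled (the JNDI name and the scott/tiger credentials are placeholders, and transaction context and error handling are ignored), the user and password are simply passed to getConnection and the pool reserves or creates a physical connection with that DBMS identity as described in the steps above:

    import javax.naming.InitialContext;
    import javax.sql.DataSource;
    import java.sql.Connection;

    public class IdentityPoolSketch {
        public static void main(String[] args) throws Exception {
            InitialContext ctx = new InitialContext();
            // Hypothetical JNDI name of a data source with
            // "Enable Identity Based Connection Pooling" turned on.
            DataSource ds = (DataSource) ctx.lookup("jdbc/myIdentityPoolDS");

            // Each distinct user/password pair is pooled as its own set of
            // physical connections in the heterogeneous pool.
            try (Connection conn = ds.getConnection("scott", "tiger")) {
                System.out.println("connected as: "
                        + conn.getMetaData().getUserName());
            }
        }
    }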

    Read the article

  • Adding multiple data importers support to web applications

    - by DigiMortal
    I’m building web application for customer and there is requirement that users must be able to import data in different formats. Today we will support XLSX and ODF as import formats and some other formats are waiting. I wanted to be able to add new importers on the fly so I don’t have to deploy web application again when I add new importer or change some existing one. In this posting I will show you how to build generic importers support to your web application. Importer interface All importers we use must have something in common so we can easily detect them. To keep things simple I will use interface here. public interface IMyImporter {     string[] SupportedFileExtensions { get; }     ImportResult Import(Stream fileStream, string fileExtension); } Our interface has the following members: SupportedFileExtensions – string array of file extensions that importer supports. This property helps us find out what import formats are available and which importer to use with given format. Import – method that does the actual importing work. Besides file we give in as stream we also give file extension so importer can decide how to handle the file. It is enough to get started. When building real importers I am sure you will switch over to abstract base class. Importer class Here is sample importer that imports data from Excel and Word documents. Importer class with no implementation details looks like this: public class MyOpenXmlImporter : IMyImporter {     public string[] SupportedFileExtensions     {         get { return new[] { "xlsx", "docx" }; }     }     public ImportResult Import(Stream fileStream, string extension)     {         // ...     } } Finding supported import formats in web application Now we have importers created and it’s time to add them to web application. Usually we have one page or ASP.NET MVC controller where we need importers. To this page or controller we add the following method that uses reflection to find all classes that implement our IMyImporter interface. private static string[] GetImporterFileExtensions() {     var types = from a in AppDomain.CurrentDomain.GetAssemblies()                 from t in a.GetTypes()                 where t.GetInterfaces().Contains(typeof(IMyImporter))                 select t;       var extensions = new Collection<string>();     foreach (var type in types)     {         var instance = (IMyImporter)type.InvokeMember(null,                        BindingFlags.CreateInstance, null, null, null);           foreach (var extension in instance.SupportedFileExtensions)         {             if (extensions.Contains(extension))                 continue;               extensions.Add(extension);         }     }       return extensions.ToArray(); } This code doesn’t look nice and is far from optimal but it works for us now. It is possible to improve performance of web application if we cache extensions and their corresponding types to some static dictionary. We have to fill it only once because our application is restarted when something changes in bin folder. Finding importer by extension When user uploads file we need to detect the extension of file and find the importer that supports given extension. We add another method to our page or controller that uses reflection to return us importer instance or null if extension is not supported. 
private static IMyImporter GetImporterForExtension(string extensionToFind) {     var types = from a in AppDomain.CurrentDomain.GetAssemblies()                 from t in a.GetTypes()                 where t.GetInterfaces().Contains(typeof(IMyImporter))                 select t;     foreach (var type in types)     {         var instance = (IMyImporter)type.InvokeMember(null,                        BindingFlags.CreateInstance, null, null, null);           if (instance.SupportedFileExtensions.Contains(extensionToFind))         {             return instance;         }     }       return null; } Here is example ASP.NET MVC controller action that accepts uploaded file, finds importer that can handle file and imports data. Again, this is sample code I kept minimal to better illustrate how things work. public ActionResult Import(MyImporterModel model) {     var file = Request.Files[0];     var extension = Path.GetExtension(file.FileName).ToLower();     var importer = GetImporterForExtension(extension.Substring(1));     var result = importer.Import(file.InputStream, extension);     if (result.Errors.Count > 0)     {         foreach (var error in result.Errors)             ModelState.AddModelError("file", error);           return Import();     }     return RedirectToAction("Index"); } Conclusion That’s it. Using couple of ugly methods and one simple interface we were able to add importers support to our web application. Example code here is not perfect but it works. It is possible to cache mappings between file extensions and importer types to some static variable because changing of these mappings means that something is changed in bin folder of web application and web application is restarted in this case anyway.
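
    As hinted at in the conclusion, the reflection scan can be done once and its result cached. The sketch below shows one way to do that (the member names are mine, the same usings as the snippets above are assumed, and thread-safety is left out for brevity): it keeps an extension-to-type map in a static dictionary and creates importer instances from it.

    private static readonly Dictionary<string, Type> _importerTypes =
        new Dictionary<string, Type>(StringComparer.OrdinalIgnoreCase);

    private static void EnsureImporterCache()
    {
        if (_importerTypes.Count > 0)
            return;

        var types = from a in AppDomain.CurrentDomain.GetAssemblies()
                    from t in a.GetTypes()
                    where t.IsClass && !t.IsAbstract &&
                          t.GetInterfaces().Contains(typeof(IMyImporter))
                    select t;

        foreach (var type in types)
        {
            var instance = (IMyImporter)Activator.CreateInstance(type);

            foreach (var extension in instance.SupportedFileExtensions)
            {
                if (!_importerTypes.ContainsKey(extension))
                    _importerTypes.Add(extension, type);
            }
        }
    }

    private static IMyImporter GetCachedImporterForExtension(string extension)
    {
        EnsureImporterCache();

        Type type;
        return _importerTypes.TryGetValue(extension, out type)
            ? (IMyImporter)Activator.CreateInstance(type)
            : null;
    }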

    Read the article

  • Accessing Oracle DB via JDBC from Nashorn

    - by Homma
    This post shows how to access an Oracle database through the JDBC API from JavaScript code run on Nashorn, the JavaScript engine that ships with JDK 8. Only a small part of the JDBC API is used here; the same approach applies to the rest of it. The original article is at https://blogs.oracle.com/nashorn_ja/entry/nashorn_jdbc_1 Environment: the database is Oracle 11.2.0.3.0 on Oracle Linux 6.5, and the JDBC thin driver is used to connect to the local database instance. The examples follow the Oracle Database JDBC Developer's Guide, with the Java sample code replaced by JavaScript executed on Nashorn. Preparing a database user: create a test user and grant it the basic roles. SQL> create user test identified by "test"; SQL> grant connect, resource to test; Installing Java 8: Nashorn requires JDK 8 (version 8u5 at the time of writing). Install the JDK RPM with yum and check the version: # yum install ./jdk-8u5-linux-x64.rpm # java -version java version "1.8.0_05" Java(TM) SE Runtime Environment (build 1.8.0_05-b13) Java HotSpot(TM) 64-Bit Server VM (build 25.5-b02, mixed mode) Setting up Nashorn: add the JDK bin directory to PATH so that the jjs command can be used, and verify it: $ vi ~/.bash_profile PATH=${PATH}:/usr/java/latest/bin export PATH $ . ~/.bash_profile $ jjs -fv nashorn full version 1.8.0_05-b13 Checking the JDBC driver version from JavaScript: the script below creates an OracleDataSource, opens a connection and prints the driver version from the connection metadata. Because the JDBC driver is not on the default classpath, its JAR file is passed to jjs with the -cp option. $ vi version.js var OracleDataSource = Java.type("oracle.jdbc.pool.OracleDataSource"); var ods = new OracleDataSource(); ods.setURL("jdbc:oracle:thin:test/test@localhost:1521:orcl"); var conn = ods.getConnection(); var meta = conn.getMetaData(); print("JDBC driver version is " + meta.getDriverVersion()); $ jjs -cp ${ORACLE_HOME}/jdbc/lib/ojdbc6.jar version.js JDBC driver version is 11.2.0.3.0 The JavaScript code successfully obtained the JDBC driver version (11.2.0.3.0). Java.type() returns a JavaClass object that can be instantiated with new; from there, Java methods can be called from JavaScript just as they would be from Java code, and Java return values can be concatenated with JavaScript strings as shown in the print() call. Interactive use: the same steps can be run interactively by starting jjs without a script file: $ jjs -cp ${ORACLE_HOME}/jdbc/lib/ojdbc6.jar jjs> var OracleDataSource = Java.type("oracle.jdbc.pool.OracleDataSource"); jjs> var ods = new OracleDataSource(); jjs> ods.setURL("jdbc:oracle:thin:test/test@localhost:1521:orcl"); null jjs> var conn = ods.getConnection(); jjs> var meta = conn.getMetaData(); jjs> print("JDBC driver version is " + meta.getDriverVersion()); JDBC driver version is 11.2.0.3.0 Again the JDBC driver version (11.2.0.3.0) is printed; the interactive shell is convenient for trying JDBC calls step by step. References: Oracle Database JDBC Developer's Guide 11g Release 2 (11.2), Nashorn User's Guide, Java Scripting Programmer's Guide, Oracle Nashorn: A Next-Generation JavaScript Engine for the JVM
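
    As a small follow-up sketch that is not part of the original post, the same pattern can be used to run a query and walk a ResultSet from JavaScript; the user_tables query and the script name are just illustrative examples:

    $ vi query.js
    // Sketch: list the tables owned by the test user via plain JDBC calls.
    var OracleDataSource = Java.type("oracle.jdbc.pool.OracleDataSource");
    var ods = new OracleDataSource();
    ods.setURL("jdbc:oracle:thin:test/test@localhost:1521:orcl");
    var conn = ods.getConnection();
    var stmt = conn.createStatement();
    var rs = stmt.executeQuery("SELECT table_name FROM user_tables");
    while (rs.next()) {
        print(rs.getString(1));
    }
    rs.close();
    stmt.close();
    conn.close();

    $ jjs -cp ${ORACLE_HOME}/jdbc/lib/ojdbc6.jar query.js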

    Read the article

  • Cannot turn on "Network Discovery and File Sharing" when Windows Firewall is enabled

    - by Cheeso
    I have a problem similar to this one. Windows Firewall prevents File and Printer sharing from working and Why does File and Printer Sharing keep turning off in Windows 7? I cannot turn on Network Discovery. This is Windows 7 Home Premium, x64. It's a Dell XPS 1340 and Windows came installed from the OEM. This used to work. Now it doesn't. I don't know what has changed. In windows Explorer, the UI looks like this: When I click the yellow panel that says "Click to change...", the panel disappears, then immediately reappears, with exactly the same text. If I go through the control panel "Network and Sharing Center" thing, the UI looks like this: If I tick the box to "turn on network discovery", the "Save Changes" button becomes enabled. If I then click that button, the dialog box just closes, with no message or confirmation. Re-opening the same dialog box shows that Network Discovery has not been turned on. If I turn off Windows Firewall, I can then turn on Network Discovery via either method. The machine is connected to a wireless home network, via a router. The network is marked as "Home Network" in the Network and Sharing Center, which I think corresponds to the "Private" profile in Windows Firewall Advanced Settings app. (Confirm?) The PC is not part of a domain, and has never been part of a domain. The machine is not bridging any networks. There is a regular 100baseT connector but I have the network adapter for that disabled in Windows. Something else that seems odd. Within Windows Firewall Advanced Settings, there are no predefined rules available. If I click the "New Rule...." Action on the action pane, the "Predefined" option is greyed out. like this: In order to attempt to allow the network discovery protocols through on the private network, I hand-coded a bunch of rules, intending to allow the necessary UPnP and WDP protocols supporting network discovery. I copied them from a working Windows 7 Ultimate PC, running on the same network. This did not work. Even with the hand-coded rules, I still cannot turn on Network Discovery. I looked on the interwebs, and the only solution that appears to work is a re-install of Windows. Seriously? If I try netsh advfirewall firewall set rule group="Network Discovery" new enable=Yes ...it says "No rules match the specified criteria" EDIT: by the way, these services are running. DNS Client Function Discovery Resource Publication SSDP Discovery UPnP Device Host in any case, since it works with no firewall, I would assume all necessary services are present and running. The issue is a firewall thing, but I don't know how to diagnose further, or fix it. Q1: Is there a way to definitively insure the correct holes are punched through the Windows Firewall to allow Network Discovery to function? Q2: Should I expect the "predefined" firewall rules to be greyed out? Q3: Why did this change?

    Read the article

  • Hosting a subversion working copy in an remote WebDAV folder

    - by Daniel Baulig
    This might be a bit awkward, but I'll try to explain what I am trying to achieve and what problems I encountered. First of all: what's this about? I am currently trying to set up a distributed working environment for developing a web page. My plan was to set up an SVN repository for version control, a live server where the actual live page is hosted and a development server where I can work on the page. To ease things I intended to not have a local copy of the project on my disk, but to actually work directly on the files that the development server hosts. For that I set up a WebDAV directory, under devserver.com/workspace, that actually mapped to files served under devserver.com/. So I could connect to devserver.com/workspace, change something and view the results live at devserver.com/. So far this worked perfectly. The next step was to create an SVN repository that would take care of my version control. I intended to be able to check in to the repository from my development server and, at any time, with a small shell script, deploy any revision from the svn to the live server by checking out a copy of the revision into the live server directories. The second part, checking out into the live server, also worked perfectly. The first part, though, is where problems arose: My workstation is a Windows 7 machine. I connected to the WebDAV share using Windows built-in WebDAV support, which worked quite well. I can create, move, delete, edit, whatever files on my WebDAV share from my Windows machine perfectly. The next step was to check out a working copy from the SVN (actually hosted at devserver.com/subversion/) into the WebDAV share. In the first try I used the Eclipse plugin Subversive. The actual checkout worked fine and I can update and commit stuff to the repository; however, I cannot add any files to the ignore list. It always brings me an error. So I tried the same thing with a completely fresh repository using TortoiseSVN - and again it failed with the same errors. Here is what it says when trying to add files to svnignore: Some of selected resources were not added to ignore. svn: Cannot rename file '\\devserver.com@SSL\DavWWWRoot\workspace\.svn\tmp\dir-props.66fd8936-2701-0010-bb76-472f0b56a5d1.tmp' to '\\devserver.com@SSL\DavWWWRoot\workspace\.svn\tmp\dir-props' This is what apache2 tells me, when I try to add a file to svnignore: [Sun Mar 07 03:54:19 2010] [error] [client xxx.xxx.xxx.xxx] Negotiation: discovered file(s) matching request: /var/www/devserver.com/.svn/tmp/dir-props (None could be negotiated). [Sun Mar 07 03:54:31 2010] [error] [client xxx.xxx.xxx.xxx] (20)Not a directory: The URL contains extraneous path components. The resource could not be identified. [400, #0] Actually both messages are repeated several times. The first one occurs first and is repeated about 5 times, and the second comes thereafter and is repeated probably more than 20 times. If I create a regular file, delete, rename or modify it, none of those messages appear in my error.log. While writing this question now I was able to add files to svnignore using TortoiseSVN. However, after that, Eclipse would not let me commit anymore. The error that used to pop up when adding files to svnignore now also shows up while committing. While searching the web I found some people having this same message appearing because they had files differing only in upper-/lower-case naming. I checked my repository and did not find such files.
I also read somewhere about people having trouble with WebDAV and file locking, because WebDAV's file locking capabilities seem to be very limited. At some stage I got errors telling me my repository was locked and thus the operations could not be completed. This error, though, did not appear anymore since I set up a completely fresh repository and working copy. I would really appreciate any help anyone can provide me in fixing this problem! If there are any more questions feel free to ask. I know this is a somewhat unusual setup. Best regards, Daniel

    Read the article

  • When searching in Outlook, encountered this error "Instant Search encountered a problem while trying

    - by Imagineer
    Sometime when searching for certain keyword, I get this error "Instant Search encountered a problem while trying to display search results. Modifying your query may resolve this problem." I have enabled Outlook logging to determine what is the error as suggested by someone in other forum. but I haven't had a clue how to decipher it. 2010.05.11 09:38:10 <<<< Logging Started (level is LTF_TRACE) >>>> 2010.05.11 09:38:10 HELPER::Initialize called 2010.05.11 09:38:10 Initializing: Finding a Transport 2010.05.11 09:38:10 MAPI XP Call: XPProviderInit in EMSMDB.DLL, hr = 0x00000000 2010.05.11 09:38:10 MAPI XP Call: TransportLogon, hr = 0x8004011d 2010.05.11 09:38:10 MAPI XP Call: Shutdown, hr = 0x00000000 2010.05.11 09:38:10 MAPI XP Call: XPProviderInit in EMSMDB.DLL, hr = 0x00000000 2010.05.11 09:38:10 MAPI Status: (-- -- ---/--- -- ---) 2010.05.11 09:38:10 MAPI XP Call: TransportLogon, hr = 0x00000000 2010.05.11 09:38:10 Initializing: Found a transport, Error code = 0x00000000 2010.05.11 09:38:10 MAPI XP Call: AddressTypes, hr = 0x00000000, cAddrs = 4, cUids = 1 2010.05.11 09:38:10 MAPI XP Call: RegisterOptions, hr = 0x00000000, cOptions = 2 2010.05.11 09:38:10 MAPI Status: (IN -- ---/OUT -- ---) 2010.05.11 09:38:10 MAPI XP Call: TransportNotify(BEGIN_IN|BEGIN_OUT), hr = 0x00000000 2010.05.11 09:38:10 HELPER::Initialize done, Error code = 0x00000000 2010.05.11 09:38:10 HELPER::GetCapabilities called, Error code = 0x00000000 2010.05.11 09:38:10 Microsoft Exchange: Synch operation started (flags = 00000031) 2010.05.11 09:38:10 Microsoft Exchange: StartImport(flags = 00000000, max msg = ffffffff): full items 2010.05.11 09:38:10 Microsoft Exchange: UploadItems: 0 messages to send 2010.05.11 09:38:11 Starting the Spooling Cycle 2010.05.11 09:38:11 MAPI Status: (IN fl ---/OUT -- ---) 2010.05.11 09:38:11 MAPI XP Call: FlushQueues, hr = 0x00000000, ulFlushFlags = 0x0000001c 2010.05.11 09:38:11 MAPI XP Call: Poll, hr = 0x00000000, cPollCount = 855 2010.05.11 09:38:11 Progress: Receiving message (message 1 out of 856, size unknown) 2010.05.11 09:38:11 Downloading one message 2010.05.11 09:38:11 Transport tightly coupled with store, download is NOOP 2010.05.11 09:38:11 Downloading done, Error code = 0x8004010f 2010.05.11 09:38:11 MAPI Status: (IN -- ---/OUT -- ---) 2010.05.11 09:38:11 FINISHED MAPI TASK 2010.05.11 09:38:11 Microsoft Exchange: ReportStatus: RSF_COMPLETED, hr = 0x00000000 2010.05.11 09:38:11 Finishing the Spooling Cycle, Error code = 0x00000000 2010.05.11 09:38:11 EXECUTING EndSession MAPI TASK 2010.05.11 09:38:11 Starting the Simplified Transfer Cycle 2010.05.11 09:38:11 MAPI XP Call: Poll, hr = 0x00000000, iMsgsReceived = 0, cPollCount = 855 2010.05.11 09:38:11 Progress: Receiving message (message 1 out of 856, size unknown) 2010.05.11 09:38:11 Downloading one message 2010.05.11 09:38:11 MAPI Status: (IN -- act/OUT -- ---) 2010.05.11 09:38:11 MAPI Status: (IN -- ---/OUT -- ---) 2010.05.11 09:38:11 Downloading done, Error code = 0x8004010f 2010.05.11 09:38:11 Finishing the Spooling Cycle, Error code = 0x00000000 2010.05.11 09:38:11 FINISHED MAPI TASK 2010.05.11 09:38:11 Microsoft Exchange: ReportStatus: RSF_COMPLETED, hr = 0x00000000 2010.05.11 09:38:11 Microsoft Exchange: Synch operation completed 2010.05.11 10:08:15 Microsoft Exchange: Synch operation started (flags = 00000031) 2010.05.11 10:08:15 Microsoft Exchange: StartImport(flags = 00000000, max msg = ffffffff): full items 2010.05.11 10:08:15 Microsoft Exchange: UploadItems: 0 messages to send 2010.05.11 10:08:16 Starting the 
Spooling Cycle 2010.05.11 10:08:16 MAPI Status: (IN fl ---/OUT -- ---) 2010.05.11 10:08:16 MAPI XP Call: FlushQueues, hr = 0x00000000, ulFlushFlags = 0x0000001c 2010.05.11 10:08:16 MAPI XP Call: Poll, hr = 0x00000000, cPollCount = 858 2010.05.11 10:08:16 Progress: Receiving message (message 1 out of 859, size unknown) 2010.05.11 10:08:16 Downloading one message 2010.05.11 10:08:16 Transport tightly coupled with store, download is NOOP 2010.05.11 10:08:16 Downloading done, Error code = 0x8004010f 2010.05.11 10:08:16 MAPI Status: (IN -- ---/OUT -- ---) 2010.05.11 10:08:16 FINISHED MAPI TASK 2010.05.11 10:08:16 Microsoft Exchange: ReportStatus: RSF_COMPLETED, hr = 0x00000000 2010.05.11 10:08:16 Finishing the Spooling Cycle, Error code = 0x00000000 2010.05.11 10:08:16 EXECUTING EndSession MAPI TASK 2010.05.11 10:08:16 Starting the Simplified Transfer Cycle 2010.05.11 10:08:16 MAPI XP Call: Poll, hr = 0x00000000, iMsgsReceived = 0, cPollCount = 858 2010.05.11 10:08:16 Progress: Receiving message (message 1 out of 859, size unknown) 2010.05.11 10:08:16 Downloading one message 2010.05.11 10:08:16 MAPI Status: (IN -- act/OUT -- ---) 2010.05.11 10:08:16 MAPI Status: (IN -- ---/OUT -- ---) 2010.05.11 10:08:16 Downloading done, Error code = 0x8004010f 2010.05.11 10:08:16 Finishing the Spooling Cycle, Error code = 0x00000000 2010.05.11 10:08:16 FINISHED MAPI TASK 2010.05.11 10:08:16 Microsoft Exchange: ReportStatus: RSF_COMPLETED, hr = 0x00000000 2010.05.11 10:08:16 Microsoft Exchange: Synch operation completed 2010.05.11 10:09:48 HELPER::Uninitialize called 2010.05.11 10:09:48 MAPI Status: (-- -- ---/--- -- ---) 2010.05.11 10:09:48 MAPI XP Call: TransportNotify(END_IN|END_OUT), hr = 0x00000000 2010.05.11 10:09:48 MAPI XP Call: TransportLogoff in EMSMDB.DLL, hr = 0x00000000 2010.05.11 10:09:48 MAPI XP Call: Shutdown, hr = 0x00000000 2010.05.11 10:09:48 Resource manager terminated I'm running Outlook 2007 SP1 in Citrix environment and should be running in Cache Mode. In my Outlook Tools-Options-Search Option, there is nothing under indexing. Any help is greatly appreciated! Thank you.

    Read the article

  • Useful Command-line Commands on Windows

    - by Sung Meister
    The aim for this Wiki is to promote using a command to open up commonly used applications without having to go through many mouse clicks - thus saving time on monitoring and troubleshooting Windows machines. Answer entries need to specify: Application name, Commands, Screenshot (Optional), Shortcut to commands.
    && - Command Chaining
    %SYSTEMROOT%\System32\rcimlby.exe -LaunchRA - Remote Assistance (Windows XP)
    appwiz.cpl - Programs and Features (Formerly Known as "Add or Remove Programs")
    appwiz.cpl @,2 - Turn Windows Features On and Off (Add/Remove Windows Components pane)
    arp - Displays and modifies the IP-to-Physical address translation tables used by address resolution protocol (ARP)
    at - Schedule tasks either locally or remotely without using Scheduled Tasks
    bootsect.exe - Updates the master boot code for hard disk partitions to switch between BOOTMGR and NTLDR
    cacls - Change Access Control List (ACL) permissions on a directory, its subcontents, or files
    calc - Calculator
    chkdsk - Check/Fix the disk surface for physical errors or bad sectors
    cipher - Displays or alters the encryption of directories [files] on NTFS partitions
    cleanmgr.exe - Disk Cleanup
    clip - Redirects output of command line tools to the Windows clipboard
    cls - clear the command line screen
    cmd /k - Run command with command extensions enabled
    color - Sets the default console foreground and background colors in console
    command.com - Default Operating System Shell
    compmgmt.msc - Computer Management
    control.exe /name Microsoft.NetworkAndSharingCenter - Network and Sharing Center
    control keyboard - Keyboard Properties
    control mouse (or main.cpl) - Mouse Properties
    control sysdm.cpl,@0,3 - Advanced Tab of the System Properties dialog
    control userpasswords2 - Opens the classic User Accounts dialog
    desk.cpl - opens the display properties
    devmgmt.msc - Device Manager
    diskmgmt.msc - Disk Management
    diskpart - Disk management from the command line
    dsa.msc - Opens active directory users and computers
    dsquery - Finds any objects in the directory according to criteria
    dxdiag - DirectX Diagnostic Tool
    eventvwr - Windows Event Log (Event Viewer)
    explorer . - Open explorer with the current folder selected.
    explorer /e, . - Open explorer, with folder tree, with current folder selected.
    F7 - View command history
    find - Searches for a text string in a file or files
    findstr - Find a string in a file
    firewall.cpl - Opens the Windows Firewall settings
    fsmgmt.msc - Shared Folders
    fsutil - Perform tasks related to FAT and NTFS file systems
    ftp - Transfers files to and from a computer running an FTP server service
    getmac - Shows the mac address(es) of your network adapter(s)
    gpedit.msc - Group Policy Editor
    gpresult - Displays the Resultant Set of Policy (RSoP) information for a target user and computer
    httpcfg.exe - HTTP Configuration Utility
    iisreset - To restart IIS
    InetMgr.exe - Internet Information Services (IIS) Manager 7
    InetMgr6.exe - Internet Information Services (IIS) Manager 6
    intl.cpl - Regional and Language Options
    ipconfig - Internet protocol configuration
    lusrmgr.msc - Local Users and Groups Administrator
    msconfig - System Configuration
    notepad - Notepad? ;)
    mmsys.cpl - Sound/Recording/Playback properties
    mode - Configure system devices
    more - Displays one screen of output at a time
    mrt - Microsoft Windows Malicious Software Removal Tool
    mstsc.exe - Remote Desktop Connection
    nbtstat - displays protocol statistics and current TCP/IP connections using NBT
    ncpa.cpl - Network Connections
    netsh - Display or modify the network configuration of a computer that is currently running
    netstat - Network Statistics
    net statistics - Check computer up time
    net stop - Stops a running service.
    net use - Connects a computer to or disconnects a computer from a shared resource, or displays information about computer connections
    odbcad32.exe - ODBC Data Source Administrator
    pathping - A traceroute that collects detailed packet loss stats
    perfmon - Opens Reliability and Performance Monitor
    ping - Determine whether a remote computer is accessible over the network
    powercfg.cpl - Power management control panel applet
    quser - Display information about user sessions on a terminal server
    qwinsta - See disconnected remote desktop sessions
    reg.exe - Console Registry Tool for Windows
    regedit - Registry Editor
    rasdial - Connects to a VPN or a dialup network
    robocopy - Backup/Restore/Copy large amounts of files reliably
    rsop.msc - Resultant Set of Policy (shows the combined effect of all group policies active on the current system/login)
    runas - Run specific tools and programs with different permissions than the user's current logon provides
    sc - Manage anything you want to do with services.
    schtasks - Enables an administrator to create, delete, query, change, run and end scheduled tasks on a local or remote system.
    secpol.msc - Local Security Settings
    services.msc - Services control panel
    set - Displays, sets, or removes cmd.exe environment variables.
    set DIRCMD - Preset dir parameter in cmd.exe
    start - Starts a separate window to run a specified program or command
    start . - opens the current directory in the Windows Explorer.
    shutdown.exe - Shutdown or Reboot a local/remote machine
    subst.exe - Associates a path with a drive letter, including local drives
    systeminfo - Displays comprehensive information about the system
    taskkill - terminate tasks by process id (PID) or image name
    tasklist.exe - List Processes on local or a remote machine
    taskmgr.exe - Task Manager
    telephon.cpl - Telephone and Modem properties
    timedate.cpl - Date and Time
    title - Change the title of the CMD window you have open
    tracert - Trace route
    wmic - Windows Management Instrumentation Command-line
    winver.exe - Find Windows Version
    wscui.cpl - Windows Security Center
    wuauclt.exe - Windows Update AutoUpdate Client
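    As a quick illustration of how these combine, several of the entries above can be chained with && straight from a command prompt; the host name, drive letter and output path below are only examples:

        REM Copy the full network configuration to the clipboard
        ipconfig /all | clip
        REM Dump system details to a file, then open it - && runs the second command only if the first succeeded
        systeminfo > "%USERPROFILE%\sysinfo.txt" && notepad "%USERPROFILE%\sysinfo.txt"
        REM Map a drive only if the server answers a ping first
        ping -n 1 fileserver01 && net use Z: \\fileserver01\share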

    Read the article

  • Sharepoint AD imported users are becoming sporadically corrupted, causing us to have to create a new account

    - by TrevJen
    Sharepoint 2007 MOSS with AD imported users. All servers are 2008. ***UPDATE More details in testing. This Sharepoint is in an AD Child domain (clients.mycompany.local), which is sub to the root of the AD tree (mycompany.local). The user is in the parent tree (as are half of the other functional users. I have elevated the user rights to Domain. In looking at the logs, it seems that the Sharepoint server is trying to authenticate them by querying the DC for the clients domain (which is the way it normally works and still works for all existing identically configured users). I think if I could force it to authenticate up to the top domain DC then it would be ok?? I have around 50 users, over the past 2 months, I have had a handful of the users suddenly unable to login to Sharepoint. When they login, they either get a blank screen or they are repropmted. These users are using accounts that have been used for many months, sometimes the problem originates with a password change. In all cases, the users account works on every other Active Directory authenticated resource (domain, exchange, LDAP). In the most recent case, last night I was forced deleted a user ("John smith") because of corruption. The orifinal account name was jsmith. I deleted him from active directory, then deleted him from the profile list in Sharepoint Shared Services. I could not find a way to delete him from the Sharepoint user list, but I reran the import after recreating his account (renamed it too just to be sure to "smithj"). At first, this did not wor, the user could still access all other resources but Sharepoint. then, some 30 minutes later it inexplicably started working. This morning, the user changed passwords, which immediatly broke the login on Sharepoint again. Logs by request from matt b Office SharePoint Server Date: 4/13/2010 2:00:00 PM Event ID: 7888 Task Category: Office Server General Level: Error Keywords: Classic User: N/A Computer: nb-portal-01.clients.netboundary.local Description: A runtime exception was detected. Details follow. Message: Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED)) – TrevJen 19 hours ago Techinal Details: System.UnauthorizedAccessException: Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED)) at Microsoft.SharePoint.SPGlobal.HandleUnauthorizedAccessException(UnauthorizedAccessException ex) at Microsoft.SharePoint.Library.SPRequest.UpdateField(String bstrUrl, String bstrListName, String bstrXML) at Microsoft.SharePoint.SPField.UpdateCore(Boolean bToggleSealed) – TrevJen 19 hours ago at Microsoft.SharePoint.SPField.Update() at Microsoft.Office.Server.UserProfiles.SiteSynchronizer.UserSynchronizer.PushSchemaToList(Boolean& bAddedColumn) at Microsoft.Office.Server.UserProfiles.SiteSynchronizer.UserSynchronizer.SynchFull() at Microsoft.Office.Server.UserProfiles.SiteSynchronizer.Synch() at Microsoft.Office.Server.Diagnostics.FirstChanceHandler.ExceptionFilter(Boolean fRethrowException, TryBlock tryBlock, FilterBlock filter, CatchBlock catchBlock, FinallyBlock finallyBlock) – TrevJen 19 hours ago Log Name: Application Source: Office SharePoint Server Date: 4/13/2010 2:00:00 PM Event ID: 5553 Task Category: User Profiles Level: Error Keywords: Classic User: N/A Computer: nb-portal-01.clients.netboundary.local Description: failure trying to synch site 6fea15e2-0899-4c19-9016-44d77834c018 for ContentDB b2002b0b-3d4c-411a-8c4f-3d047ca9322c WebApp 3aff7051-455d-4a70-a377-5b1c36df618e. Exception message was Access is denied. 
(Exception from HRESULT: 0x80070005 (E_ACCESSDENIED)). – TrevJen 18 hours ago
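Not an answer, but since the affected accounts live in the parent domain while the farm sits in the child domain, it can help to confirm which domain controller the SharePoint server actually locates for each. A small check to run from the web front end (the domain names are the ones in the question):

    REM Which DC does the SharePoint box find for the parent domain?
    nltest /dsgetdc:mycompany.local
    REM ...and for the child domain the farm is joined to?
    nltest /dsgetdc:clients.mycompany.local

If the first call fails or returns an unexpected DC, the corruption looks more like a lookup/trust problem than anything SharePoint-specific.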

    Read the article

  • Cannot join Win7 workstations to Win2k8 domain

    - by wfaulk
    I am trying to connect a Windows 7 Ultimate machine to a Windows 2k8 domain and it's not working. I get this error: Note: This information is intended for a network administrator. If you are not your network's administrator, notify the administrator that you received this information, which has been recorded in the file C:\Windows\debug\dcdiag.txt. DNS was successfully queried for the service location (SRV) resource record used to locate a domain controller for domain "example.local": The query was for the SRV record for _ldap._tcp.dc._msdcs.example.local The following domain controllers were identified by the query: dc1.example.local dc2.example.local However no domain controllers could be contacted. Common causes of this error include: Host (A) or (AAAA) records that map the names of the domain controllers to their IP addresses are missing or contain incorrect addresses. Domain controllers registered in DNS are not connected to the network or are not running. The client is in an office connected remotely via MPLS to the data center where our domain controllers exist. I don't seem to have anything blocking connectivity to the DCs, but I don't have total control over the MPLS circuit, so it's possible that there's something blocking connectivity. I have tried multiple clients (Win7 Ultimate and WinXP SP3) in the one office and get the same symptoms on all of them. I have no trouble connecting to either of the domain controllers, though I have, admittedly, not tried every possible port. ICMP, LDAP, DNS, and SMB connections all work fine. Client DNS is pointing to the DCs, and "example.local" resolves to the two IP addresses of the DCs. I get this output from the NetLogon Test command line utility: C:\Windows\System32>nltest /dsgetdc:example.local Getting DC name failed: Status = 1355 0x54b ERROR_NO_SUCH_DOMAIN I have also created a separate network to emulate that office's configuration that's connected to the DC network via LAN-to-LAN VPN instead of MPLS. Joining Windows 7 computers from that remote network works fine. The only difference I can find between the two environments is the intermediate connectivity, but I'm out of ideas as to what to test or how to do it. What further steps should I take? (Note that this isn't actually my client workstation and I have no direct access to it; I'm forced to do remote hands access to it, which makes some of the obvious troubleshooting methods, like packet sniffing, more difficult. If I could just set up a system there that I could remote into, I would, but requests to that effect have gone unanswered.) 2011-08-25 update: I had DCDIAG.EXE run on a client attempting to join the domain: C:\Windows\System32>dcdiag /u:example\adminuser /p:********* /s:dc2.example.local Directory Server Diagnosis Performing initial setup: Ldap search capabality attribute search failed on server dc2.example.local, return value = 81 This sounds like it was able to connect via LDAP, but the thing that it was trying to do failed. But I don't quite follow what it was trying to do, much less how to reproduce it or resolve it. 2011-08-26 update: Using LDP.EXE to try and make an LDAP connection directly to the DCs results in these errors: ld = ldap_open("10.0.0.1", 389); Error <0x51: Fail to connect to 10.0.0.1. ld = ldap_open("10.0.0.2", 389); Error <0x51: Fail to connect to 10.0.0.2. ld = ldap_open("10.0.0.1", 3268); Error <0x51: Fail to connect to 10.0.0.1. ld = ldap_open("10.0.0.2", 3268); Error <0x51: Fail to connect to 10.0.0.2. 
This would seem to point fingers at LDAP connections being blocked somewhere. (And 0x51 == 81, which was the error from DCDIAG.EXE from yesterday's update.) I could swear I tested this using TELNET.EXE weeks ago, but now I'm thinking that I may have assumed that its clearing of the screen was telling me that it was waiting and not that it had connected. I'm tracking down LDAP connectivity problems now. This update may become an answer.
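Since both DCDIAG and LDP point at LDAP being unreachable, a port-by-port probe from the remote client over the MPLS link can confirm a block without guessing. A rough sketch using PortQry (a separate Microsoft download; host name from the question) - the ports are, in order, LDAP, Global Catalog, Kerberos, SMB and the RPC endpoint mapper:

    portqry -n dc1.example.local -p tcp -e 389
    portqry -n dc1.example.local -p tcp -e 3268
    portqry -n dc1.example.local -p tcp -e 88
    portqry -n dc1.example.local -p tcp -e 445
    portqry -n dc1.example.local -p tcp -e 135

Anything reported as FILTERED is a strong hint that the MPLS provider (or a firewall along the path) is dropping that port.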

    Read the article

  • Cloudformation with Ubuntu throwing errors

    - by Sammaye
    I have been doing some reading and have come to the understanding that if you wish to use a launchConfig with Ubuntu you will need to install the cfn-init file yourself which I have done: "Properties" : { "KeyName" : { "Ref" : "KeyName" }, "SpotPrice" : "0.05", "ImageId" : { "Fn::FindInMap" : [ "AWSRegionArch2AMI", { "Ref" : "AWS::Region" }, { "Fn::FindInMap" : [ "AWSInstanceType2Arch", { "Ref" : "InstanceType" }, "Arch" ] } ] }, "SecurityGroups" : [ { "Ref" : "InstanceSecurityGroup" } ], "InstanceType" : { "Ref" : "InstanceType" }, "UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [ "#!/bin/bash\n", "apt-get -y install python-setuptools\n", "easy_install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-1.0-6.tar.gz\n", "cfn-init ", " --stack ", { "Ref" : "AWS::StackName" }, " --resource LaunchConfig ", " --configset ALL", " --access-key ", { "Ref" : "WorkerKeys" }, " --secret-key ", {"Fn::GetAtt": ["WorkerKeys", "SecretAccessKey"]}, " --region ", { "Ref" : "AWS::Region" }, " || error_exit 'Failed to run cfn-init'\n" ]]}} But I have a problem with this setup that I cannot seem to get a decent answer to. I keep getting this error in the logs: Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: config-scripts-per-once already ran once Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling scripts-per-boot with freq=None and args=[] Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling scripts-per-instance with freq=None and args=[] Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling scripts-user with freq=None and args=[] Jun 15 12:02:34 ip-0 [CLOUDINIT] cc_scripts_user.py[WARNING]: failed to run-parts in /var/lib/cloud/instance/scripts Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[WARNING]: Traceback (most recent call last):#012 File "/usr/lib/python2.7/dist-packages/cloudinit/CloudConfig/__init__.py", line 117, in run_cc_modules#012 cc.handle(name, run_args, freq=freq)#012 File "/usr/lib/python2.7/dist-packages/cloudinit/CloudConfig/__init__.py", line 78, in handle#012 [name, self.cfg, self.cloud, cloudinit.log, args])#012 File "/usr/lib/python2.7/dist-packages/cloudinit/__init__.py", line 326, in sem_and_run#012 func(*args)#012 File "/usr/lib/python2.7/dist-packages/cloudinit/CloudConfig/cc_scripts_user.py", line 31, in handle#012 util.runparts(runparts_path)#012 File "/usr/lib/python2.7/dist-packages/cloudinit/util.py", line 223, in runparts#012 raise RuntimeError('runparts: %i failures' % failed)#012RuntimeError: runparts: 1 failures Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[ERROR]: config handling of scripts-user, None, [] failed Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling keys-to-console with freq=None and args=[] Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling phone-home with freq=None and args=[] Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling final-message with freq=None and args=[] Jun 15 12:02:34 ip-0 [CLOUDINIT] cloud-init-cfg[ERROR]: errors running cloud_config [final]: ['scripts-user'] I have absolutely no idea what scripts-user means and Google is not helping much here either. I can, when I ssh into the server, see that it runs the userdata script since I can access cfn-init as a command whereas I cannot in the original AMI the instance is made from. 
However I have a launchConfig: "Comment" : "Install a simple PHP application", "AWS::CloudFormation::Init" : { "configSets" : { "ALL" : ["WorkerRole"] }, "WorkerRole" : { "files" : { "/etc/cron.d/worker.cron" : { "content" : "*/1 * * * * ubuntu /home/ubuntu/worker_cron.php &> /home/ubuntu/worker.log\n", "mode" : "000644", "owner" : "root", "group" : "root" }, "/home/ubuntu/worker_cron.php" : { "content" : { "Fn::Join" : ["", [ "#!/usr/bin/env php", "<?php", "define('ROOT', dirname(__FILE__));", "const AWS_KEY = \"", { "Ref" : "WorkerKeys" }, "\";", "const AWS_SECRET = \"", { "Fn::GetAtt": ["WorkerKeys", "SecretAccessKey"]}, "\";", "const QUEUE = \"", { "Ref" : "InputQueue" }, "\";", "exec('git clone x '.ROOT.'/worker');", "if(!file_exists(ROOT.'/worker/worker_despatcher.php')){", "echo 'git not downloaded right';", "exit();", "}", "echo 'git downloaded';", "include_once ROOT.'/worker/worker_despatcher.php';" ]]}, "mode" : "000755", "owner" : "ubuntu", "group" : "ubuntu" } } } } Which does not seem to run at all. I have checked for the files existance in my home directory and it's not there. I have checked for the cronjob entry and it's not there either. I cannot, after reading through the documentation, seem to see what's potentially wrong with my code. Any thoughts on why this is not working? Am I missing something blatant?
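For what it's worth, the "scripts-user ... runparts: 1 failures" message from cloud-init usually just means the user-data script itself exited non-zero - and the UserData above calls error_exit without ever defining it, so any cfn-init failure turns into a second, confusing error. A hedged sketch of the shell part of the UserData with the helper defined and output logged (the log path and the capitalised variables are illustrative placeholders; the stack/key references would still come from Fn::Join as in the template):

    #!/bin/bash
    # Keep a transcript of everything the boot script does
    exec > /var/log/user-data.log 2>&1
    set -x

    # Define the helper before using it
    error_exit () {
      echo "$1" >&2
      exit 1
    }

    apt-get -y install python-setuptools || error_exit 'Failed to install setuptools'
    easy_install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-1.0-6.tar.gz \
      || error_exit 'Failed to install aws-cfn-bootstrap'
    cfn-init --stack "$STACK_NAME" --resource LaunchConfig --configset ALL \
      --access-key "$ACCESS_KEY" --secret-key "$SECRET_KEY" --region "$REGION" \
      || error_exit 'Failed to run cfn-init'

With the transcript in place, /var/log/user-data.log should show which of the three steps is actually failing.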

    Read the article

  • Custom SNMP Cacti Data Source fails to update

    - by Andrew Wilkinson
    I'm trying to create a custom SNMP datasource for Cacti but despite everything I can check being correct, it is not creating the rrd file, or updating it even when I create it. Other, standard SNMP sources are working correctly so it's not SNMP or permissions that are the problem. I've created a new Data Query, which when I click on "Verbose Query" on the device screen returns the following: + Running data query [10]. + Found type = '3' [SNMP Query]. + Found data query XML file at '/volume1/web/cacti/resource/snmp_queries/syno_volume_stats.xml' + XML file parsed ok. + missing in XML file, 'Index Count Changed' emulated by counting oid_index entries + Executing SNMP walk for list of indexes @ '.1.3.6.1.2.1.25.2.3.1.3' Index Count: 8 + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.1' value: 'Physical memory' + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.3' value: 'Virtual memory' + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.6' value: 'Memory buffers' + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.7' value: 'Cached memory' + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.10' value: 'Swap space' + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.31' value: '/' + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.32' value: '/volume1' + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.33' value: '/opt' + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.1' results: '1' + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.3' results: '3' + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.6' results: '6' + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.7' results: '7' + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.10' results: '10' + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.31' results: '31' + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.32' results: '32' + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.33' results: '33' + Located input field 'index' [walk] + Executing SNMP walk for data @ '.1.3.6.1.2.1.25.2.3.1.3' + Found item [index='Physical memory'] index: 1 [from value] + Found item [index='Virtual memory'] index: 3 [from value] + Found item [index='Memory buffers'] index: 6 [from value] + Found item [index='Cached memory'] index: 7 [from value] + Found item [index='Swap space'] index: 10 [from value] + Found item [index='/'] index: 31 [from value] + Found item [index='/volume1'] index: 32 [from value] + Found item [index='/opt'] index: 33 [from value] + Located input field 'volsizeunit' [walk] + Executing SNMP walk for data @ '.1.3.6.1.2.1.25.2.3.1.4' + Found item [volsizeunit='1024 Bytes'] index: 1 [from value] + Found item [volsizeunit='1024 Bytes'] index: 3 [from value] + Found item [volsizeunit='1024 Bytes'] index: 6 [from value] + Found item [volsizeunit='1024 Bytes'] index: 7 [from value] + Found item [volsizeunit='1024 Bytes'] index: 10 [from value] + Found item [volsizeunit='4096 Bytes'] index: 31 [from value] + Found item [volsizeunit='4096 Bytes'] index: 32 [from value] + Found item [volsizeunit='4096 Bytes'] index: 33 [from value] + Located input field 'volsize' [walk] + Executing SNMP walk for data @ '.1.3.6.1.2.1.25.2.3.1.5' + Found item [volsize='1034712'] index: 1 [from value] + Found item [volsize='3131792'] index: 3 [from value] + Found item [volsize='1034712'] index: 6 [from value] + Found item [volsize='775904'] index: 7 [from value] + Found item [volsize='2097080'] index: 10 [from value] + Found item [volsize='612766'] index: 31 [from value] + Found item [volsize='1439812394'] index: 32 [from value] + Found item [volsize='1439812394'] index: 33 [from value] + Located input field 'volused' [walk] 
+ Executing SNMP walk for data @ '.1.3.6.1.2.1.25.2.3.1.6' + Found item [volused='1022520'] index: 1 [from value] + Found item [volused='1024096'] index: 3 [from value] + Found item [volused='32408'] index: 6 [from value] + Found item [volused='775904'] index: 7 [from value] + Found item [volused='1576'] index: 10 [from value] + Found item [volused='148070'] index: 31 [from value] + Found item [volused='682377865'] index: 32 [from value] + Found item [volused='682377865'] index: 33 [from value] AS you can see it appears to be returning the correct data. I've also set up data templates and graph templates to display the data. The create graphs for a device screen shows the correct data, and when selecting one row can clicking create a new data source and graph are created. Unfortunately the data source is never updated. Increasing the poller log level shows that it appears to not even be querying the data source, despite it being used? What should my next steps to debug this issue be?
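One way to separate "Cacti can't read the data" from "Cacti never polls the data source" is to repeat the same walk by hand and then force a poller run with debug output; the community string and host are placeholders, the Cacti path is the one from the data query XML:

    # Does the device answer for the usage OIDs the XML maps?
    snmpwalk -v 2c -c public nas.example.com .1.3.6.1.2.1.25.2.3.1.6

    # Force a poller run and watch what it does with the new data source
    php /volume1/web/cacti/poller.php --force --debug

    # Then check whether the RRD file was actually created or updated
    ls -l /volume1/web/cacti/rra/

If the walk works but the poller never mentions the data source, the problem is on the Cacti side (data input method / poller cache) rather than SNMP.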

    Read the article

  • Have I pushed the limits of my current VPS or is there room for optimization?

    - by JRameau
    I am currently on a mediatemple DV server (basic) 512mb dedicated ram, this is a CentOS based VPS with Plesk and Virtuozzo. My experience with it from day 1 has been bad and I only could sooth my server issues with several caching "Band-aids," but my sites are not as small as they were a year ago either so the issues have worsen. I have 3 Drupal installs running on separate (plesk) domains, 1 of those drupal installs is a multisite, that consists of 5-6 sites 2 of those sites are bringing in actual traffic. Those caching "Band-aids" I mentioned are APC, which seemed to help alot initially, and Drupal's Boost, which is considered a poorman's Varnish, it makes all my pages static for anonymous users. Last 30day combined estimate on Google Ananlytics: 90k visitors 260k pageviews. Issue: alot of downtime, I am continually checking if my sites are up, and lately I have been finding it down more than 3 times daily. Restarting Apache will bring it back up, for some time. I have google search every error message and looked up ways to optimize my DV server, and I am beyond stump what is my next move. Is this server bad, have I hit a impossibly low restriction such as the 12mb kernel memory barrier (kmemsize), is it on my end, do I need to optimize some more? *I have provided as much information as I can below, any help or suggestions given will be appreciated Common Error messages I see in the log: [error] (12)Cannot allocate memory: fork: Unable to fork new process [error] make_obcallback: could not import mod_python.apache.\n Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/mod_python/apache.py", line 21, in ? import traceback File "/usr/lib/python2.4/traceback.py", line 3, in ? import linecache ImportError: No module named linecache [error] python_handler: no interpreter callback found. [warn-phpd] mmap cache can't open /var/www/vhosts/***/httpdocs/*** - Too many open files in system (pid ***) [alert] Child 8125 returned a Fatal error... Apache is exiting! 
[emerg] (43)Identifier removed: couldn't grab the accept mutex [emerg] (22)Invalid argument: couldn't release the accept mutex cat /proc/user_beancounters: Version: 2.5 uid resource held maxheld barrier limit failcnt 41548: kmemsize 4582652 5306699 12288832 13517715 21105036 lockedpages 0 0 600 600 0 privvmpages 38151 42676 229036 249036 0 shmpages 16274 16274 17237 17237 2 dummy 0 0 0 0 0 numproc 43 46 300 300 0 physpages 27260 29528 0 2147483647 0 vmguarpages 0 0 131072 2147483647 0 oomguarpages 27270 29538 131072 2147483647 0 numtcpsock 21 29 300 300 0 numflock 8 8 480 528 0 numpty 1 1 30 30 0 numsiginfo 0 1 1024 1024 0 tcpsndbuf 648440 675272 2867477 4096277 1711499 tcprcvbuf 301620 359716 2867477 4096277 0 othersockbuf 4472 4472 1433738 2662538 0 dgramrcvbuf 0 0 1433738 1433738 0 numothersock 12 12 300 300 0 dcachesize 0 0 2684271 2764800 0 numfile 3447 3496 6300 6300 3872 dummy 0 0 0 0 0 dummy 0 0 0 0 0 dummy 0 0 0 0 0 numiptent 14 14 200 200 0 TOP: (In January the load avg was really high 3-10, I was able to bring it down where it is currently is by giving APC more memory play around with) top - 16:46:07 up 2:13, 1 user, load average: 0.34, 0.20, 0.20 Tasks: 40 total, 2 running, 37 sleeping, 0 stopped, 1 zombie Cpu(s): 0.3% us, 0.1% sy, 0.0% ni, 99.7% id, 0.0% wa, 0.0% hi, 0.0% si Mem: 916144k total, 156668k used, 759476k free, 0k buffers Swap: 0k total, 0k used, 0k free, 0k cached MySQLTuner: (after optimizing every table and repairing any table with overage I got the fragmented count down to 86) [--] Data in MyISAM tables: 285M (Tables: 1105) [!!] Total fragmented tables: 86 [--] Up for: 2h 44m 38s (409K q [41.421 qps], 6K conn, TX: 1B, RX: 174M) [--] Reads / Writes: 79% / 21% [--] Total buffers: 58.0M global + 2.7M per thread (100 max threads) [!!] Query cache prunes per day: 675307 [!!] Temporary tables created on disk: 35% (7K on disk / 20K total)
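One detail worth calling out from the /proc/user_beancounters dump above: kmemsize, tcpsndbuf and numfile all show a non-zero failcnt, which lines up with the fork "Cannot allocate memory" and "Too many open files" errors - in other words the container limits, not CPU or swap, are what is being hit. A small sketch to keep an eye on them (standard Virtuozzo/OpenVZ paths; the interval is arbitrary):

    # Print only the resources whose fail counter is non-zero
    awk 'NR > 2 && $NF + 0 > 0 { print $(NF-5), "failcnt=" $NF }' /proc/user_beancounters

    # Watch the raw counters while reproducing the outage
    watch -n 5 cat /proc/user_beancounters

If those counters keep climbing, the realistic options are shrinking the Apache/MySQL footprint (MaxClients, buffers, APC size) or moving to a plan with higher barriers.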

    Read the article

  • Postfix on Snow Leopard unable to send MIME emails, including header contents in message body

    - by devvy
    I configured postfix on snow leopard by adding the following line to /etc/hostconfig: MAILSERVER=-YES- I then configured postfix to relay through my ISP's SMTP server. I added the following two lines in their respective places within /etc/postfix/main.cf: myhostname = 1and1.com relayhost = shawmail.vc.shawcable.net I then have a simple PHP mail function wrapper as follows: send_email("[email protected]", "[email protected]", "Test Email", "<p>This is a simple HTML email</p>"); echo "Done"; function send_email($from,$to,$subject,$message){ $header="From: <".$from."> "; $header.= 'MIME-Version: 1.0' . " "; $header.= 'Content-type: text/html; charset=iso-8859-1' . " "; $send_mail=mail($to,$subject,$message,$header); if(!$send_mail){ echo "ERROR"; } } With this, I am receiving an e-mail that appears to be improperly formatted. The message header is showing up in the body of the e-mail. The raw message content is as follows: Return-Path: <[email protected]> Delivery-Date: Tue, 27 Apr 2010 18:12:48 -0400 Received: from idcmail-mo2no.shaw.ca (idcmail-mo2no.shaw.ca [64.59.134.9]) by mx.perfora.net (node=mxus2) with ESMTP (Nemesis) id 0M4XlU-1NCtC81GVY-00z5UN for [email protected]; Tue, 27 Apr 2010 18:12:48 -0400 Message-Id: <[email protected]> Received: from pd6ml3no-ssvc.prod.shaw.ca ([10.0.153.149]) by pd6mo1no-svcs.prod.shaw.ca with ESMTP; 27 Apr 2010 16:12:47 -0600 X-Cloudmark-SP-Filtered: true X-Cloudmark-SP-Result: v=1.0 c=1 a=VphdPIyG4kEA:10 a=hATtCjKilyj9ZF5m5A62ag==:17 a=mC_jT1gcAAAA:8 a=QLyc3QejAAAA:8 a=DGW4GvdtALggLTu6w9AA:9 a=KbDtEDGyCi7QHcNhDYYwsF92SU8A:4 a=uch7kV7NfGgA:10 a=5ZEL1eDBWGAA:10 Received: from unknown (HELO 1and1.com) ([24.84.196.104]) by pd6ml3no-dmz.prod.shaw.ca with ESMTP; 27 Apr 2010 16:12:48 -0600 Received: by 1and1.com (Postfix, from userid 70) id BB08D14ECFC; Tue, 27 Apr 2010 15:12:47 -0700 (PDT) To: [email protected] Subject: Test Email X-PHP-Originating-Script: 501:test.php Date: Tue, 27 Apr 2010 18:12:48 -0400 X-UI-Junk: AutoMaybeJunk +30 (SPA); V01:LYI2BGRt:7TwGx5jxe8cylj5nOTae9JQXYqoWvG2w4ZSfwYCXmHCH/5vVNCE fRD7wNNM86txwLDTO522ZNxyNHhvJUK9d2buMQuAUCMoea2jJHaDdtRgkGxNSkO2 v6svm0LsZikLMqRErHtBCYEWIgxp2bl0W3oA3nIbtfp3li0kta27g/ZjoXcgz5Sw B8lEqWBqKWMSta1mCM+XD/RbWVsjr+LqTKg== Envelope-To: [email protected] From: <[email protected]> MIME-Version: 1.0 Content-type: text/html; charset=iso-8859-1 Message-Id: <[email protected]> Date: Tue, 27 Apr 2010 15:12:47 -0700 (PDT) <p>This is a simple HTML email</p> And here are the contents of my /var/log/mail.log file after sending the email: Apr 27 15:29:01 User-iMac postfix/qmgr[705]: 74B1514EDDF: removed Apr 27 15:29:30 User-iMac postfix/pickup[704]: 25FBC14EDF0: uid=70 from=<_www> Apr 27 15:29:30 User-iMac postfix/master[758]: fatal: open lock file pid/master.pid: unable to set exclusive lock: Resource temporarily unavailable Apr 27 15:29:30 User-iMac postfix/cleanup[745]: 25FBC14EDF0: message-id=<[email protected]> Apr 27 15:29:30 User-iMac postfix/qmgr[705]: 25FBC14EDF0: from=<[email protected]>, size=423, nrcpt=1 (queue active) Apr 27 15:29:30 User-iMac postfix/smtp[747]: 25FBC14EDF0: to=<[email protected]>, relay=shawmail.vc.shawcable.net[64.59.128.135]:25, delay=0.21, delays=0.01/0/0.1/0.1, dsn=2.0.0, status=sent (250 ok: Message 25784419 accepted) Apr 27 15:29:30 User-iMac postfix/qmgr[705]: 25FBC14EDF0: removed Two other people in the office have followed the exact same process and are running the exact same script, version of snow leopard, php, etc. and everything is working fine for them. 
I've even copied their config files to my machine, restarted postfix, restarted apache, all to no avail. Does anyone know what steps I could take to resolve the issue? This is boggling my mind... Thanks
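One way to establish whether Postfix or the PHP side is mangling the headers is to hand a correctly formed message to the sendmail interface directly, bypassing mail(); if this copy arrives intact, the problem is likely in how the PHP header string is terminated (with a local MTA the additional headers generally want plain "\n" rather than "\r\n") rather than in Postfix itself. The addresses are the ones from the example:

    printf '%s\n' \
      'To: jim@destination.com' \
      'From: test@1and1.com' \
      'Subject: Direct sendmail test' \
      'MIME-Version: 1.0' \
      'Content-Type: text/html; charset=iso-8859-1' \
      '' \
      '<p>This is a simple HTML email</p>' | /usr/sbin/sendmail -t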

    Read the article

  • Intermittent 404 on select assets, LAMP stack

    - by Tom Lagier
    We have a LAMP stack WordPress server that is serving most assets correctly. However, one plugin's CSS file and several images are returning soft 404s roughly 20% of the time. I can't find any reference to the 404 in the access logs, but the browser is definitely receiving a 404 response from somewhere (WordPress, I would assume). When I use an alias URL that does not match the site URL but does resolve to the asset path, the resource loads correctly 100% of the time. However, using the site url only resolves for the select, problematic assets 20% of the time. You can test one of the problematic assets here: http://www.mreco.org/wp-content/uploads/2014/05/zero-cost.jpg However the alias link always resolves correctly: http://mr-eco.wordpress.promocampaigns.com/wp-content/uploads/2014/05/zero-cost.jpg Stranger, if I attempt to access outdated content that definitely does not exist on the server, at the live URL it returns the content roughly 50% of the time. Using the alias link, it 404s 100% of the time - the correct behavior. Error log and PHP error log are clean. A sample access log (pulled from grep 'zero-cost.jpg' /var/log/httpd/mr-eco-access_log) from several refreshes of the live direct link (where I am not seeing any 404's): 10.166.202.202 - - [28/May/2014:20:27:41 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 304 - 10.166.202.202 - - [28/May/2014:20:27:42 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 304 - 10.166.202.202 - - [28/May/2014:20:27:43 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 304 - 10.166.202.202 - - [28/May/2014:20:27:43 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 304 - 10.176.201.37 - - [28/May/2014:20:27:56 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 200 57027 Chrome's dev tools list the following network activity before displaying 404 page content: zero-cost.jpg /wp-content/uploads/2014/05 GET 404 Not Found text/html Other 15.9?KB 73.2?KB 953?ms 947?ms My Apache configuration is standard, I've listed the virtual host entry and .htaccess file below. I can provide other parts of Apache config if necessary. Virtual host: <VirtualHost *:80> DocumentRoot /var/www/public_html/mr-eco.wordpress.promocampaigns.com ServerName www.mreco.org ServerAlias mreco.org mr-eco.wordpress.promocampaigns.com ErrorLog logs/mr-eco-error_log CustomLog logs/mr-eco-access_log common <Directory /var/www/public_html/mr-eco.wordpress.promocampaigns.com> AllowOverride All SetOutputFilter DEFLATE </Directory> </VirtualHost> .htaccess: # BEGIN WordPress <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteRule ^index\.php$ - [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /index.php [L] </IfModule> # END WordPress I have checked for multiple A records and can confirm that there is a single A record pointing at the domain: ;; ANSWER SECTION: mreco.org. 60 IN A 50.18.58.174 I'm fairly new to systems administration, and at a complete loss as to what could cause this. In the past, inconsistently 404ing assets have been because of out-of-sync instances behind a load balancer. In this case, it is a single instance behind the load balancer. Because of the inconsistency, it feels like a caching issue. We don't make use of Apache caching, and as far as I know WordPress should not be caching either. 
What I've done so far: Reset WordPress permalinks Disabled WordPress plugins Re-generated WordPress .htaccess file Swapped ServerName and ServerAlias directives Cleared browser cache Confirmed disk location of resources Checked PHP, access, and error logs Confirmed correct DNS setup (can post if necessary) I'm at a total loss. Thanks for helping me out!
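Since there is exactly one instance behind the load balancer, a blunt but effective next step is to hammer the origin directly with the live Host header and compare failure rates against the public URL; if the soft 404s only appear via one route, that narrows down which layer is generating them. The URLs are from the question, the instance address is a placeholder:

    # 50 requests straight at the origin, forcing the live Host header
    for i in $(seq 1 50); do
      curl -s -o /dev/null -w '%{http_code}\n' \
        -H 'Host: www.mreco.org' \
        http://10.0.0.5/wp-content/uploads/2014/05/zero-cost.jpg
    done | sort | uniq -c

    # The same 50 requests through the public, load-balanced URL
    for i in $(seq 1 50); do
      curl -s -o /dev/null -w '%{http_code}\n' \
        http://www.mreco.org/wp-content/uploads/2014/05/zero-cost.jpg
    done | sort | uniq -c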

    Read the article

  • Moved DNS and Email Hosting, Now Can't Send/Receive To/From Domains Hosted on Previous Host

    - by maxfinis
    Our company had 4 domains whose emails and DNS were hosted by one company, and then we moved the email and DNS hosting for 3 of the 4 domains to a new company. Now, the 3 domains that were moved can't send or receive emails to and from the one domain still left on the old server. All other email functions work fine for all 4 domains. There are no bouncebacks, error messages, or emails stuck in queue, and no evidence of these missing emails hitting the new servers. The new hosting company confirms that everything is fine on their end, and assures me that it's most likely an old zone file still remaining on the old nameserver, and so the emails sent from the old host is routed to what it believes is still the authoritative nameserver. Because the old zone file's MX records still contain the old resource, the requests never leave the old nameserver to go online to do a fresh search for the real (new) authoritative nameserver. The compounding problem is that the old company is rather inept and doesn't seem to have the technical expertise to identify the problem, much less fix it. (I know, I know.) Is the problem truly that this old zone file just needs to be deleted from the old company's nameserver? If so, what's the best way for me to describe this to them? If not, what do you think could be the issue? Any help is much appreciated. I'm not in IT, so all this is new to me. I know it seems weird for me (the client) to have to do this legwork, but I just want to get this resolved. Here's what I've done: Ran dig to verify that the old server's MX records still point to the old authoritative server, instead of going online to do a fresh search: ~$ dig @old.nameserver.com domainthatwasmoved.com mx ; << DiG 9.6.0-APPLE-P2 << @old.nameserver.com domainThatWasMoved.com mx ; (1 server found) ;; global options: +cmd ;; Got answer: ;; -HEADER<<- opcode: QUERY, status: NOERROR, id: 61227 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1 ;; QUESTION SECTION: ;domainthatwasmoved.com. IN MX ;; ANSWER SECTION: domainthatwasmoved.com. 3600 IN MX 10 mail.oldmailserver.com. ;; ADDITIONAL SECTION: mail.oldmailserver.com. 3600 IN A 65.198.191.5 ;; Query time: 29 msec ;; SERVER: 65.198.191.5#53(65.198.191.5) ;; WHEN: Sun Dec 26 16:59:22 2010 ;; MSG SIZE rcvd: 88 Ran dig to try to see where the new hosting company's servers look when emails are sent from the 3 domains that were moved, and got refused: ~$ dig @new.nameserver.net domainStillAtOldHost.com mx ; << DiG 9.6.0-APPLE-P2 << @new.nameserver.net domainStillAtOldHost.com mx ; (1 server found) ;; global options: +cmd ;; Got answer: ;; -HEADER<<- opcode: QUERY, status: REFUSED, id: 31599 ;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0 ;; WARNING: recursion requested but not available ;; QUESTION SECTION: ;domainStillAtOldHost.com. IN MX ;; Query time: 31 msec ;; SERVER: 216.201.128.10#53(216.201.128.10) ;; WHEN: Sun Dec 26 17:00:14 2010 ;; MSG SIZE rcvd: 34
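A side-by-side comparison of what each nameserver hands out for one of the moved domains makes the stale-zone theory easy to prove or disprove (the names below are the placeholders used above):

    # What the OLD host's nameserver still believes about a moved domain
    dig +short MX domainthatwasmoved.com @old.nameserver.com

    # What the NEW, now-authoritative nameserver answers
    dig +short MX domainthatwasmoved.com @new.nameserver.net

    # What the rest of the internet sees via a public resolver
    dig +short MX domainthatwasmoved.com @8.8.8.8

If the first query still returns the old mail server while the other two return the new one, the old company simply needs to delete (or update) its leftover zone for that domain so its own mail server stops treating itself as authoritative.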

    Read the article

  • SQLExpress service unable to start Error code 17053

    - by Chris Sobolewski
    A user was instructed by their software support to upgrade a program and install SQLExpress as part of the installation process. Since that time, the service has been able to start, citing error 17053, which appears to be an authentication issue. Here is the error log: 2011-01-11 13:17:45.50 Server Microsoft SQL Server 2005 - 9.00.3042.00 (Intel X86) Feb 9 2007 22:47:07 Copyright (c) 1988-2005 Microsoft Corporation Express Edition on Windows NT 5.1 (Build 2600: Service Pack 2) 2011-01-11 13:17:45.50 Server (c) 2005 Microsoft Corporation. 2011-01-11 13:17:45.50 Server All rights reserved. 2011-01-11 13:17:45.50 Server Server process ID is 3332. 2011-01-11 13:17:45.50 Server Authentication mode is WINDOWS-ONLY. 2011-01-11 13:17:45.50 Server Logging SQL Server messages in file 'c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\ERRORLOG'. 2011-01-11 13:17:45.52 Server This instance of SQL Server last reported using a process ID of 2332 at 11/10/2010 2:15:24 PM (local) 11/10/2010 7:15:24 PM (UTC). This is an informational message only; no user action is required. 2011-01-11 13:17:45.52 Server Error: 17053, Severity: 16, State: 1. 2011-01-11 13:17:45.52 Server UpdateUptimeRegKey: Operating system error 5(Access is denied.) encountered. 2011-01-11 13:17:45.52 Server Registry startup parameters: 2011-01-11 13:17:45.52 Server -d c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\master.mdf 2011-01-11 13:17:45.52 Server -e c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\ERRORLOG 2011-01-11 13:17:45.52 Server -l c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\mastlog.ldf 2011-01-11 13:17:45.52 Server Error: 17113, Severity: 16, State: 1. 2011-01-11 13:17:45.52 Server Error 3(The system cannot find the path specified.) occurred while opening file 'c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\master.mdf' to obtain configuration information at startup. An invalid startup option might have caused the error. Verify your startup options, and correct or remove them if necessary. 2011-01-11 13:17:45.52 Server Error: 17053, Severity: 16, State: 1. 2011-01-11 13:17:45.52 Server UpdateUptimeRegKey: Operating system error 5(Access is denied.) encountered. 4 Server Error: 17053, Severity: 16, State: 1. 2011-01-11 13:08:21.34 Server UpdateUptimeRegKey: Operating system error 5(Access is denied.) encountered. 12:47:20.85 spid5s SQL Trace ID 1 was started by login "sa". 2011-01-11 12:47:20.90 spid5s Starting up database 'mssqlsystemresource'. 2011-01-11 12:47:20.93 spid5s The resource database build version is 9.00.3042. This is an informational message only. No user action is required. 2011-01-11 12:47:21.21 spid5s Error: 15466, Severity: 16, State: 1. 2011-01-11 12:47:21.21 spid5s An error occurred during decryption. 2011-01-11 12:47:21.38 spid8s Starting up database 'model'. 2011-01-11 12:47:21.38 Server Error: 17182, Severity: 16, State: 1. 2011-01-11 12:47:21.38 Server TDSSNIClient initialization failed with error 0x5, status code 0x90. 2011-01-11 12:47:21.38 Server Error: 17182, Severity: 16, State: 1. 2011-01-11 12:47:21.38 Server TDSSNIClient initialization failed with error 0x5, status code 0x1. 2011-01-11 12:47:21.38 Server Error: 17826, Severity: 18, State: 3. 2011-01-11 12:47:21.38 Server Could not start the network library because of an internal error in the network library. To determine the cause, review the errors immediately preceding this one in the error log. 2011-01-11 12:47:21.38 Server Error: 17120, Severity: 16, State: 1. 
2011-01-11 12:47:21.38 Server SQL Server could not spawn FRunCM thread. Check the SQL Server error log and the Windows event logs for information about possible related problems. One lead I had was to change the SQL logon account from "Network Service" to "Local System". Unfortunately, that is resulting in the error message The Security ID Structure is Invalid [0x80070539] Any help either uninstalling or getting SQLExpress running would be fantastic.
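Not a fix, but both log excerpts ultimately report "access denied" for whatever account the service runs as (first on the uptime registry key, then when the network library initialises), so before reinstalling it is worth confirming which account that is and what it can see in the data directory the startup parameters point at. The service/instance name and path below are assumptions based on the log:

    REM Which account is the SQL Express service configured to run as?
    sc qc "MSSQL$SQLEXPRESS"

    REM What permissions exist on the data directory named in the startup parameters?
    cacls "C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA"

Changing the service account itself is best done through SQL Server Configuration Manager rather than services.msc, since Configuration Manager takes care of the permission and SID changes that editing the service directly does not.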

    Read the article

  • BluRay audio/video stuttering with PowerDVD 11, WinDVD 11 Pro, etc? Xonar/Auzen HD audio option?

    - by jrista
    I recently upgraded my Windows 7 MediaCenter HTPC due to a motherboard failure (really old motherboard and cpu, it was on its last legs.) I chose to upgrade to an i5 system with everything built into the motherboard. I did my due diligence, researched, and found some hardware that was within my budget. I ended up with: Core i5 2500K (3.3Ghz) Corsair XMS3 2x2Gb DDR3 (4Gb) ASUS P8H 61-M LE/CSM MicroCenter 64Gb SSD (Previous BluRay player, forget the brand) The system is pretty awesome, and plays everything I have perfectly. I almost went with an Atom solution, however there have been numerous notes that they do not play NetFlix Instant Watch well...and I am a heavy Netflix IW user. High definition BluRay rips work well, although they usually contain lower audio quality than the BluRay's they were ripped from. The real problem I am encountering is playing back BluRay video from discs. For some reason, I am encountering rather terrible stuttering problems with both the audio and video. The stuttering is synchronous in both, and occurs at seemingly random intervals. I've used PowerDVD 9, PowerDVD 11 trial, and WinDVD 11 Pro trial. All three have stuttering problems, although PowerDVD 11 seems to have the least. Watching system resource usage, CPU load is never above 20%, and memory usage tends to be a constant 1/3rd the total available system memory. When playback is fine, its superb...the video is crystal clear. The audio quality is ok, certainly not what I would expect from a BluRay disc. I did some research, and it seems that playing BluRay from a PC causes a downsampling of the audio? I am curious if the audio is my primary problem here, the cause of the stuttering I am encountering? When stuttering occurs, the audio gets REALLY bad, while the video just pauses momentarily every second until for whatever reason everything picks up and runs fine (usually after a few seconds to a couple minutes.) The audio chipset is a Realtek HD ALC887 8-channel, supposedly designed to support BluRay playback. Has anyone encountered any issues like this playing back bluray discs on a PC (namely with PowerDVD...WinDVD was FAR worse, and seemed to have real trouble even reading the discs, and I have no interest in fiddling with it further.) Is there any reason to suspect the video decoding as the problem?(Given how bad the audio gets during a stutter, and how clean the video remains, I am inclined to think the issue boils down to audio.) Is it even remotely possible that the motherboard, cpu, or ram are causing the stuttering (all three are pretty blazing fast...faster than the hardware that I replaced, which seemed to play BluRay fine with PowerDVD 9.) I've read a bit about the Asus Xonar HDAV 1.3 and the Auzen X-Fi HomeTheater HD home theater hi-fi audio cards. Seems they are the only way to get true full-quality, uncompressed BluRay audio bitstreaming over HDMI on a PC. None of the usual suspects seem to have these cards in stock, however. Are these cards worth getting? Are they even still available, or have they been discontinued (if so, that would indeed be sad...they sound simply fantastic.)

    Read the article

  • synchronization of file locations between two machines

    - by intuited
    Although similar threads have been asked on this site and its siblings before, I've not managed to glean the answer to this persistent question. Any help is much appreciated. The situation: I've got two laptops; both contain a ton of music. Sometimes I move these music files to different locations, or change the metadata in them, or convert them to a different format. I might do any of these things on either machine. I rarely do all of them at once — ie it's unlikely that I'll convert a file's format and move it to a different location all in one go. I'd like to be able to synchronize these changes without having to sift through everything that was renamed or moved. I'm familiar with rsync but I find it inadequate, because although it can compute checksums, it doesn't have any way to store them. So if a file differs, it can't figure out which side it changed on. This also means that it can't attempt to match a missing file to a new one with the same checksum (ie a move) if the filesize and date are the same, it , so it takes an epoch to do a sync on a large repository. I would like to only check the checksum if the files even if you turn on checksumming, it still doesn't use it intelligently: ie it checksums files even if the sizes differ. IIRC. it's not able to use file metadata as a means of file comparison. this is sort of a wishlist item but it seems doable. I've also looked into rsnapshot, but its requirement to create a full backup is impractical in this situation. I don't need a backup, I just need a record of what file with each hash was where when. Unison seems like it might be able to do something vaguely along these lines, but I'm loathe to spend hours wading through its details only to discover that it's sadly lacking. Plus, it's fun asking questions on here. What I'd like is a tool that does something along these lines: keeps track of file checksums or of actual renames, possibly using inotify to greatly reduce resource consumption/latency stores a database containing this info, along with other pertinencies like the file format and metadata, the actual inode, the filename history, etc. uses this info to provide more-intelligent synchronization with a counterpart on the other side. So for example: if a file has been converted from flac to ogg, but kept the same base filename, or the same metadata, it should be able to send the new version over, and the other side should delete the original. Probably it should actually sequester it somewhere in case they or you screwed up, but that's a detail. And then when the transaction is done, the state is logged so that the next time the two interact they can work out their differences. Maybe all this metadata stuff is a fancy pipe dream. I would actually be pretty happy if there was something out there that could just use checksums in an intelligent way. This would be sort of like having the intelligence of something like git, minus the need to duplicate data in an index/backup/etc (and branching, and checkouts, and all the other great stuff that RCSs do. basically just fast forward commit pushes are all I want, with maybe the option to roll back.) So is there something out there that can do this? If not, can someone suggest a good way to start making it?
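As a crude starting point for the "store the checksums somewhere" idea, a content index per machine can be built with standard tools and diffed; this does no syncing at all, it just shows which hashes moved, appeared or disappeared (paths are examples):

    # On each laptop: hash every music file and sort by checksum
    find ~/music -type f -print0 | xargs -0 md5sum | sort > ~/music-index.txt

    # Copy one index across (scp, rsync, USB stick...) and compare:
    # '<' lines exist only on this machine, '>' lines only on the other -
    # the same hash appearing on both sides with different paths is a move/rename
    diff ~/music-index.txt ~/music-index-from-other-laptop.txt | less

Anything fancier (inotify, a real database, format-aware matching) could be layered on top of an index like this.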

    Read the article

  • All client browsers repeatedly asking for NTLM authentication when running through local proxy server

    - by Marko
    All client browsers repeatedly asking for NTLM authentication when running through local proxy server. When pointing browsers through the local proxy to the internet, some but not all clients are being repeatedley prompted to authenticate to the proxy server. I have inspected the headers using firefox live headers as well as fiddler, and in all cases the authentication prompts happen when requesting SSL resources. an example of this would be as follows: GET http://gmail.google.com/mail/ HTTP/1.1 Accept: image/gif, image/jpeg, image/pjpeg, image/pjpeg, application/x-shockwave- flash, application/x-ms-application, application/x-ms-xbap, application/vnd.ms- xpsdocument, application/xaml+xml, */* Accept-Language: en-gb User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729) Accept-Encoding: gzip, deflate Proxy-Connection: Keep-Alive Host: gmail.google.com GET http://gmail.google.com/mail/ HTTP/1.1 Accept: image/gif, image/jpeg, image/pjpeg, image/pjpeg, application/x-shockwave- flash, application/x-ms-application, application/x-ms-xbap, application/vnd.ms- xpsdocument, application/xaml+xml, */* Accept-Language: en-gb User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729) Accept-Encoding: gzip, deflate Proxy-Connection: Keep-Alive Host: gmail.google.com Proxy-Authorization: NTLM TlRMTVNTUAABAAAAB7IIogkACQAvAAAABwAHACgAAAAFASgKAAAAD1dJTlhQMUdGTEFHU0hJUDc= GET http://gmail.google.com/mail/ HTTP/1.1 Accept: image/gif, image/jpeg, image/pjpeg, image/pjpeg, application/x-shockwave- flash, application/x-ms-application, application/x-ms-xbap, application/vnd.ms- xpsdocument, application/xaml+xml, */* Accept-Language: en-gb User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729) Accept-Encoding: gzip, deflate Proxy-Connection: Keep-Alive Proxy-Authorization: NTLM TlRMTVNTUAADA (more stuff goes here I cut it short) Host: gmail.google.com At this point the username and password prompt has appeared in the browser, it does not matter what is typed into this box, correct credentials, random nonsense the browser does not accept anything in this box it will continue to popup. If I press cancel, I sometimes get a http 407 error, but on other occasions I click cancel the website proceeds to download and show normally. This is repeatable with some clients running through my proxy server, but in other cases it does not happen at all. In the cases where a client computer works normally, the only difference I can see is that the 3rd request for SSL resource comes back with a 200 response, see below: CONNECT gmail.google.com:443 HTTP/1.0 User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0; MALC) Proxy-Connection: Keep-Alive Content-Length: 0 Host: gmail.google.com Pragma: no-cache Proxy-Authorization: NTLM TlRMTVNTUAADAAAAGAAYAIAAAA A SSLv3-compatible ClientHello handshake was found. I have tried resetting user accounts as well as computer accounts in Active Directory. User accounts and passwords that are being used are correct and the passwords have been reset so they are not out of sync. I have removed the clients and even the proxy server from the domain, and rejoined them. 
I have installed a completely separate proxy server and get exactly the same problem when I point clients to a different proxy server on a different IP address.
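One way to take the browsers out of the equation is to drive the same NTLM proxy authentication from the command line on an affected client; curl can negotiate NTLM with a proxy, so something like the following (proxy address, port and account are placeholders, the URL is the one from the traces) shows whether the proxy/DC combination accepts the credentials at all:

    # Plain HTTP resource through the proxy
    curl -v --proxy http://proxy.example.local:8080 --proxy-ntlm \
         --proxy-user 'DOMAIN\someuser:Passw0rd' http://gmail.google.com/mail/ -o /dev/null

    # The HTTPS leg (CONNECT), which is where the failing clients get stuck
    curl -v --proxy http://proxy.example.local:8080 --proxy-ntlm \
         --proxy-user 'DOMAIN\someuser:Passw0rd' https://gmail.google.com/ -o /dev/null

If curl authenticates cleanly from a "bad" client, the problem is browser/SSO configuration; if it fails too, the proxy-to-DC authentication path is the suspect.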

    Read the article

  • Rewriting Apache URLs to use only paths and set response headers

    - by jabley
    I have apache httpd in front of an application running in Tomcat. The application exposes URLs of the form: /path/to/images?id={an-image-id} The entities returned by such URLs are images (even though URIs are opaque, I find human-friendly ones are easier to work with!). The application does not set caching directives on the image response, so I've added that via Apache. # LocationMatch to set caching directives on image responses <LocationMatch "^/path/to/images$"> # Can't have Set-Cookie on response, otherwise the downstream caching proxy # won't cache! Header unset Set-Cookie # Mark the response as cacheable. Header append Cache-Control "max-age=8640000" </LocationMatch> Note that I can't use ExpiresByType since not all images served by the app have versioned URIs. I know that ones served by the /path/to/images resource handler are versioned URIs though, which don't perform any sort of content negotiation, and thus are ripe for Far Future Expires management. This is working well for us. Now a requirement has come up to put something else in front of the app (in this case, Amazon CloudFront) to further distribute and cache some of the content. Amazon CloudFront will not pass query string parameters through to my origin server. I thought I would be able to work around this, by changing my apache config appropriately: # Rewrite to map new Amazon CloudFront friendly URIs to the application resources RewriteRule ^/new/path/to/images/([0-9]+) /path/to/images?id=$1 [PT] # LocationMatch to set caching directives on image responses <LocationMatch "^/path/to/images$"> # Can't have Set-Cookie on response, otherwise the downstream caching proxy # won't cache! Header unset Set-Cookie # Mark the response as cacheable. Header append Cache-Control "max-age=8640000" </LocationMatch> This works fine in terms of serving the content, but there are no longer caching directives with the response. I've tried playing around with [PT], [P] for the RewriteRule, and adding a new LocationMatch directive: # Rewrite to map new Amazon CloudFront friendly URIs to the application resources # /new/path/to/images/12345 -> /path/to/images?id=12345 RewriteRule ^/new/path/to/images/([0-9]+) /path/to/images?id=$1 [PT] # LocationMatch to set caching directives on image responses <LocationMatch "^/path/to/images$"> # Can't have Set-Cookie on response, otherwise the downstream caching proxy # won't cache! Header unset Set-Cookie # Mark the response as cacheable. Header append Cache-Control "max-age=8640000" </LocationMatch> <LocationMatch "^/new/path/to/images/"> # Can't have Set-Cookie on response, otherwise the downstream caching proxy # won't cache! Header unset Set-Cookie # Mark the response as cacheable. Header append Cache-Control "max-age=8640000" </LocationMatch> Unfortunately, I'm still unable to get the Cache-Control header added to the response with the new URL format. Please point out what I'm missing to get /new/path/to/images/12345 returning a 200 response with a Cache-Control: max-age=8640000 header. Pointers as to how to debug apache like this would be appreciated as well!
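One pattern that sidesteps the question of which Location section applies after the rewrite is to tag the request in the RewriteRule itself and make the caching header conditional on that tag. This is only a sketch - the variable name is arbitrary, and depending on how the pass-through is processed the variable may surface as REDIRECT_CACHE_IMG, hence both Header lines:

    RewriteRule ^/new/path/to/images/([0-9]+) /path/to/images?id=$1 [PT,E=CACHE_IMG:1]

    # Outside any Location block; fires only for requests tagged by the rule above
    Header append Cache-Control "max-age=8640000" env=CACHE_IMG
    Header append Cache-Control "max-age=8640000" env=REDIRECT_CACHE_IMG

The existing LocationMatch block can stay as-is for the old-style URLs, including the Set-Cookie unset. For debugging, checking the response headers with curl -sI against both URL forms usually shows which branch is being taken.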

    Read the article

  • XenServer Converting HVM to Paravirtualised

    - by Karl Kloppenborg
    Recently I have been tasked with the daunting process of converting a setup of HVM enabled VMs (running on Citrix XenServer 5.6.0) into PV (paravirtualised) containers. The constraints of the project was that: The operating system must be functionally identical after the migration. minimal modification to the operating system (with exception of kernel / drive mapping) I also was allowed to change the bootloader(ie, grub) in what ever way I see fit. However, I have attempted this, I will firstly like to show you my steps I took. This at the moment is CentOS5.5 specific: Steps: yum install kernel-xen This installed: 2.6.18-194.32.1.el5xen edited: /boot/grub/menu.lst changed my specs to match: title CentOS (2.6.18-194.32.1.el5xen) root (hd0,0) kernel /vmlinuz-2.6.18-194.32.1.el5xen ro root=/dev/VolGroup00/LogVol00 console=xvc0 initrd /initrd-2.6.18-194.32.1.el5xen.img Then I changed my xenserver parameters to match: xe vm-param-set uuid=[vm uuid] PV-bootloader-args="--kernel /vmlinuz-2.6.18-194.32.1.el5xen --ramdisk /initrd-2.6.18-194.32.1.el5xen.img" xe vm-param-set uuid=[vm uuid] HVM-boot-policy="" xe vm-param-set uuid=[vm uuid] PV-bootloader=pygrub xe vbd-param-set uuid==[Virtual Block Device/VBD uuid] bootable=true Some things to note, I am running a VolGroup LVM ;) Anyways, after all these steps (which aren't much!) I boot the VM and it boots initial kernel just fine, however I am presented with this error: Boot Screen: device-mapper: dm-raid45: initialized v0.2594l Waiting for driver initialization. Scanning and configuring dmraid supported devices Scanning logical volumes Reading all physical volumes. This may take a while... Activating logical volumes Volume group "VolGroup00" not found Creating root device. Mounting root filesystem. mount: could not find filesystem '/dev/root' Setting up other filesystems. Setting up new root fs setuproot: moving /dev failed: No such file or directory no fstab.sys, mounting internal defaults setuproot: error mounting /proc: No such file or directory setuproot: error mounting /sys: No such file or directory Switching to new root and running init. unmounting old /dev unmounting old /proc unmounting old /sys switchroot: mount failed: No such file or directory Now my hints are that it cannot detect / because of the fact that when you change from HVM mode to PV it does something (not that obvious) When you make a SR (storage) on a HVM, you get it mounted to the guest os as /dev/hda. However in PV mode, this presents itself as /dev/xvda... Could this be the answer? and if so, how the heck to I implement it?? Update: So I have gotten a bit further in my quest, as it now detects the LVM's... To do this, I required to recompile the xen-kernel initrd image. Command: mkinitrd -v --builtin=xen_vbd --preload=xenblk initrd-2.6.18-194.32.1.el5xen.img 2.6.18-194.32.1.el5xen Now when I boot I get this: Boot Screen: Loading dm-raid45.ko module device-mapper: dm-raid45: initialized v0.2594l Scanning and configuring dmraid supported devices Scanning logical volumes Reading all physical volumes. This may take a while... Found volume group "VolGroup00" using metadata type lvm2 Activating logical volumes 3 logical volume(s) in volume group "VolGroup00" now active Creating root device. Mounting root filesystem. mount: error mounting /dev/root on /sysroot as ext3: Device or resource busy Setting up other filesystems. 
Setting up new root fs setuproot: moving /dev failed: No such file or directory no fstab.sys, mounting internal defaults setuproot: error mounting /proc: No such file or directory setuproot: error mounting /sys: No such file or directory Switching to new root and running init. unmounting old /dev unmounting old /proc unmounting old /sys switchroot: mount failed: No such file or directory Kernel panic - not syncing: Attempted to kill init!
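Purely as an experiment: "Device or resource busy" at this stage can mean that more than one block driver inside the initrd is claiming the same disk, so it may be worth rebuilding the PV initrd with only the Xen block driver and no SCSI/IDE/RAID modules, then re-checking that grub and fstab agree on the root device. The flags are standard RHEL5/CentOS 5 mkinitrd options, but treat this as a thing to try rather than a known fix:

    # Rebuild the PV initrd with just the Xen block driver
    mkinitrd -v -f --builtin=xen_vbd --preload=xenblk \
        --omit-scsi-modules --omit-raid-modules \
        /boot/initrd-2.6.18-194.32.1.el5xen.img 2.6.18-194.32.1.el5xen

    # Sanity-check the device names grub and fstab expect
    grep -E 'root=|xvd|hd[a-d]|sd[a-d]' /boot/grub/menu.lst /etc/fstab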

    Read the article

< Previous Page | 283 284 285 286 287 288 289 290 291 292 293 294  | Next Page >