Search Results

Search found 16061 results on 643 pages for 'array indexing'.


  • RAID 0 Volatile Volume Cache Mode configuration

    - by SnippetSpace
    I discovered that Intel Rapid Storage Technology (IRST) has an option to set a cache mode for my three-SSD RAID 0 array. I've read Intel's documentation and have some questions:

    - Are there any overall benefits or risks to enabling a cache mode?
    - As I'm on a laptop, would write-back be recommended? I've read that it increases the chance of data loss on power interruption.
    - What is the difference between how Windows handles data integrity and how the Intel driver does?
    - Read-only mode seems to have the benefit of faster reads; does it have any downsides?

    Thanks for your help, guys!


  • PERC H710 mini raid controller advanced settings (BIOS)

    - by gregg
    I upgraded from a PERC H310 to an H710 controller on my Dell R620 but didn't get any increase in performance. This is an ESXi host with a five-disk RAID 5. I noticed when going into the RAID BIOS that the advanced settings section was not activated/checked. That section contains Stripe Element Size: 64 KB (default), Read Policy: No Read Ahead, and Write Policy: Write-Through. Will checking that section do any harm to the existing RAID array, or will it simply enable those policies and hopefully boost performance? Or, lastly, is it already using those policies, and is the checkmark simply there to activate them for changes?


  • Random Windows application crashes on Windows Server Hyper-V Core 2012

    - by Marlamin
    We're having some issues with our Hyper-V Core 2012 R2 installation on an HP DL360 G8. We have an identical server with Hyper-V Core 2012 (not R2) that does not have these issues. When logging off from the physical server or via Remote Desktop, we sometimes get this error:

    Configure-SMRemoting.exe - Application Error: The application was unable to start correctly (0xc0000142). Click OK to close the application.

    We've also once or twice seen a "memory could not be read" error mentioning LoginUI.exe (another Windows app in System32) but have been unable to get an exact description. It's rather worrying to get such errors on a fresh install of Hyper-V 2012 R2. Is this even anything to worry about? Things we've done:

    - Ran Memtest86+: no memory errors
    - Checksummed the crashing file against the one in the verified-correct ISO: files match
    - Upgraded the server firmware to the latest for all present hardware: no visible change
    - Remade the RAID 5 array: no change
    - Reinstalled a few times: no change
    - Reinstalled without applying Windows updates afterwards: no change


  • NSClient++: external script with optional arguments

    - by syneticon-dj
    I am trying to define an external script that takes optional arguments in NSClient++ 0.4.1 on Windows. Following the nsclient-full.ini example code, I defined

        mycheck=cmd /C echo C:\mydir\myscript.ps1 %ARGS% | powershell.exe -command -

    which simply yields the literal string %ARGS% as the only argument to myscript.ps1, no matter what I specify in my call through NRPE (using Nagios' check_nrpe, if that matters). I then tried rewriting the definition as

        mycheck=cmd /C echo C:\mydir\myscript.ps1 $ARG1$ $ARG2$ | powershell.exe -command -

    (myscript.ps1 would take up to two arguments), which does help a bit: if two arguments are provided, I can fetch them via the args[] array. The trouble starts when the call has fewer than two arguments - in this case the literal strings $ARG1$ and $ARG2$ are passed through as arguments. Handling this case in the code of myscript.ps1 makes the whole argument-processing routine ugly at best. Is there a sane way of defining optional parameters for an external script that would not pass NSClient's variable names through when no parameter has been specified?


  • Access violation in DirectX OMSetRenderTargets

    - by IDWMaster
    I receive the following error when running the Triangle sample application for DirectX 11 at D3D_FEATURE_LEVEL_9_1:

    Unhandled exception at 0x527DAE81 (d3d11_1sdklayers.dll) in Lesson2.Triangles.exe: 0xC0000005: Access violation reading location 0x00000000

    The error occurs at the OMSetRenderTargets call, as shown below, and does not happen if I remove that call from the program (but then the screen is blue and the triangle is not rendered).

        //// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
        //// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
        //// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
        //// PARTICULAR PURPOSE.
        ////
        //// Copyright (c) Microsoft Corporation. All rights reserved

        #include // (the angle-bracket header names were lost in the paste)
        #include
        #include "DirectXSample.h"
        #include "BasicMath.h"
        #include "BasicReaderWriter.h"

        using namespace Microsoft::WRL;
        using namespace Windows::UI::Core;
        using namespace Windows::Foundation;
        using namespace Windows::ApplicationModel::Core;
        using namespace Windows::ApplicationModel::Infrastructure;

        // This class defines the application as a whole.
        ref class Direct3DTutorialViewProvider : public IViewProvider
        {
        private:
            CoreWindow^ m_window;
            ComPtr<IDXGISwapChain1> m_swapChain;
            ComPtr<ID3D11Device1> m_d3dDevice;
            ComPtr<ID3D11DeviceContext1> m_d3dDeviceContext;
            ComPtr<ID3D11RenderTargetView> m_renderTargetView;

        public:
            // This method is called on application launch.
            void Initialize(_In_ CoreWindow^ window, _In_ CoreApplicationView^ applicationView)
            {
                m_window = window;
            }

            // This method is called after Initialize.
            void Load(_In_ Platform::String^ entryPoint) { }

            // This method is called after Load.
            void Run()
            {
                // First, create the Direct3D device.

                // This flag is required in order to enable compatibility with Direct2D.
                UINT creationFlags = D3D11_CREATE_DEVICE_BGRA_SUPPORT;

        #if defined(_DEBUG)
                // If the project is in a debug build, enable debugging via SDK Layers with this flag.
                creationFlags |= D3D11_CREATE_DEVICE_DEBUG;
        #endif

                // This array defines the ordering of feature levels that D3D should attempt to create.
                D3D_FEATURE_LEVEL featureLevels[] =
                {
                    D3D_FEATURE_LEVEL_11_1,
                    D3D_FEATURE_LEVEL_11_0,
                    D3D_FEATURE_LEVEL_10_1,
                    D3D_FEATURE_LEVEL_10_0,
                    D3D_FEATURE_LEVEL_9_3,
                    D3D_FEATURE_LEVEL_9_1
                };

                ComPtr<ID3D11Device> d3dDevice;
                ComPtr<ID3D11DeviceContext> d3dDeviceContext;
                DX::ThrowIfFailed(
                    D3D11CreateDevice(
                        nullptr,                    // specify nullptr to use the default adapter
                        D3D_DRIVER_TYPE_HARDWARE,
                        nullptr,                    // leave as nullptr if hardware is used
                        creationFlags,              // optionally set debug and Direct2D compatibility flags
                        featureLevels,
                        ARRAYSIZE(featureLevels),
                        D3D11_SDK_VERSION,          // always set this to D3D11_SDK_VERSION
                        &d3dDevice,
                        nullptr,
                        &d3dDeviceContext
                        )
                    );

                // Retrieve the Direct3D 11.1 interfaces.
                DX::ThrowIfFailed(d3dDevice.As(&m_d3dDevice));
                DX::ThrowIfFailed(d3dDeviceContext.As(&m_d3dDeviceContext));

                // After the D3D device is created, create additional application resources.
                CreateWindowSizeDependentResources();

                // Create a Basic Reader-Writer class to load data from disk. This class is
                // examined in the Resource Loading sample.
                BasicReaderWriter^ reader = ref new BasicReaderWriter();

                // Load the raw vertex shader bytecode from disk and create a vertex shader with it.
                auto vertexShaderBytecode = reader->ReadData("SimpleVertexShader.cso");
                ComPtr<ID3D11VertexShader> vertexShader;
                DX::ThrowIfFailed(
                    m_d3dDevice->CreateVertexShader(
                        vertexShaderBytecode->Data,
                        vertexShaderBytecode->Length,
                        nullptr,
                        &vertexShader
                        )
                    );

                // Create an input layout that matches the layout defined in the vertex shader code.
                // For this lesson, this is simply a float2 vector defining the vertex position.
                const D3D11_INPUT_ELEMENT_DESC basicVertexLayoutDesc[] =
                {
                    { "POSITION", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
                };

                ComPtr<ID3D11InputLayout> inputLayout;
                DX::ThrowIfFailed(
                    m_d3dDevice->CreateInputLayout(
                        basicVertexLayoutDesc,
                        ARRAYSIZE(basicVertexLayoutDesc),
                        vertexShaderBytecode->Data,
                        vertexShaderBytecode->Length,
                        &inputLayout
                        )
                    );

                // Load the raw pixel shader bytecode from disk and create a pixel shader with it.
                auto pixelShaderBytecode = reader->ReadData("SimplePixelShader.cso");
                ComPtr<ID3D11PixelShader> pixelShader;
                DX::ThrowIfFailed(
                    m_d3dDevice->CreatePixelShader(
                        pixelShaderBytecode->Data,
                        pixelShaderBytecode->Length,
                        nullptr,
                        &pixelShader
                        )
                    );

                // Create vertex and index buffers that define a simple triangle.
                float3 triangleVertices[] =
                {
                    float3(-0.5f, -0.5f, 13.5f),
                    float3( 0.0f,  0.5f,  0),
                    float3( 0.5f, -0.5f,  0),
                };

                D3D11_BUFFER_DESC vertexBufferDesc = {0};
                vertexBufferDesc.ByteWidth = sizeof(float3) * ARRAYSIZE(triangleVertices);
                vertexBufferDesc.Usage = D3D11_USAGE_DEFAULT;
                vertexBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
                vertexBufferDesc.CPUAccessFlags = 0;
                vertexBufferDesc.MiscFlags = 0;
                vertexBufferDesc.StructureByteStride = 0;

                D3D11_SUBRESOURCE_DATA vertexBufferData;
                vertexBufferData.pSysMem = triangleVertices;
                vertexBufferData.SysMemPitch = 0;
                vertexBufferData.SysMemSlicePitch = 0;

                ComPtr<ID3D11Buffer> vertexBuffer;
                DX::ThrowIfFailed(
                    m_d3dDevice->CreateBuffer(
                        &vertexBufferDesc,
                        &vertexBufferData,
                        &vertexBuffer
                        )
                    );

                // Once all D3D resources are created, configure the application window.

                // Allow the application to respond when the window size changes.
                m_window->SizeChanged +=
                    ref new TypedEventHandler<CoreWindow^, WindowSizeChangedEventArgs^>(
                        this,
                        &Direct3DTutorialViewProvider::OnWindowSizeChanged
                        );

                // Specify the cursor type as the standard arrow cursor.
                m_window->PointerCursor = ref new CoreCursor(CoreCursorType::Arrow, 0);

                // Activate the application window, making it visible and enabling it to receive events.
                m_window->Activate();

                // Enter the render loop. Note that tailored applications should never exit.
                while (true)
                {
                    // Process events incoming to the window.
                    m_window->Dispatcher->ProcessEvents(CoreProcessEventsOption::ProcessAllIfPresent);

                    // Specify the render target we created as the output target.
                    ID3D11RenderTargetView* targets[1] = { m_renderTargetView.Get() };
                    m_d3dDeviceContext->OMSetRenderTargets(
                        1,
                        targets,
                        NULL // use no depth stencil
                        );

                    // Clear the render target to a solid color.
                    const float clearColor[4] = { 0.071f, 0.04f, 0.561f, 1.0f };
                    // Code fails here
                    m_d3dDeviceContext->ClearRenderTargetView(m_renderTargetView.Get(), clearColor);

                    m_d3dDeviceContext->IASetInputLayout(inputLayout.Get());

                    // Set the vertex and index buffers, and specify the way they define geometry.
                    UINT stride = sizeof(float3);
                    UINT offset = 0;
                    m_d3dDeviceContext->IASetVertexBuffers(0, 1, vertexBuffer.GetAddressOf(), &stride, &offset);
                    m_d3dDeviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

                    // Set the vertex and pixel shader stage state.
                    m_d3dDeviceContext->VSSetShader(vertexShader.Get(), nullptr, 0);
                    m_d3dDeviceContext->PSSetShader(pixelShader.Get(), nullptr, 0);

                    // Draw the triangle.
                    m_d3dDeviceContext->Draw(3, 0);

                    // Present the rendered image to the window. Because the maximum frame latency
                    // is set to 1, the render loop will generally be throttled to the screen
                    // refresh rate, typically around 60 Hz, by sleeping the application on
                    // Present until the screen is refreshed.
                    DX::ThrowIfFailed(m_swapChain->Present(1, 0));
                }
            }

            // This method is called before the application exits.
            void Uninitialize() { }

        private:
            // This method is called whenever the application window size changes.
            void OnWindowSizeChanged(_In_ CoreWindow^ sender, _In_ WindowSizeChangedEventArgs^ args)
            {
                m_renderTargetView = nullptr;
                CreateWindowSizeDependentResources();
            }

            // This method creates all application resources that depend on the application
            // window size. It is called at app initialization, and whenever the window
            // size changes.
            void CreateWindowSizeDependentResources()
            {
                if (m_swapChain != nullptr)
                {
                    // If the swap chain already exists, resize it.
                    DX::ThrowIfFailed(m_swapChain->ResizeBuffers(2, 0, 0, DXGI_FORMAT_R8G8B8A8_UNORM, 0));
                }
                else
                {
                    // If the swap chain does not exist, create it.
                    DXGI_SWAP_CHAIN_DESC1 swapChainDesc = {0};
                    swapChainDesc.Stereo = false;
                    swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
                    swapChainDesc.Scaling = DXGI_SCALING_NONE;
                    swapChainDesc.Flags = 0;

                    // Use automatic sizing.
                    swapChainDesc.Width = 0;
                    swapChainDesc.Height = 0;

                    // This is the most common swap chain format.
                    swapChainDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;

                    // Don't use multi-sampling.
                    swapChainDesc.SampleDesc.Count = 1;
                    swapChainDesc.SampleDesc.Quality = 0;

                    // Use two buffers to enable the flip effect.
                    swapChainDesc.BufferCount = 2;

                    // We recommend using this swap effect for all applications.
                    swapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL;

                    // Once the swap chain description is configured, it must be created on the
                    // same adapter as the existing D3D Device. First, retrieve the underlying
                    // DXGI Device from the D3D Device.
                    ComPtr<IDXGIDevice1> dxgiDevice;
                    DX::ThrowIfFailed(m_d3dDevice.As(&dxgiDevice));

                    // Ensure that DXGI does not queue more than one frame at a time. This both
                    // reduces latency and ensures that the application will only render after
                    // each VSync, minimizing power consumption.
                    DX::ThrowIfFailed(dxgiDevice->SetMaximumFrameLatency(1));

                    // Next, get the parent factory from the DXGI Device.
                    ComPtr<IDXGIAdapter> dxgiAdapter;
                    DX::ThrowIfFailed(dxgiDevice->GetAdapter(&dxgiAdapter));

                    ComPtr<IDXGIFactory2> dxgiFactory;
                    DX::ThrowIfFailed(dxgiAdapter->GetParent(__uuidof(IDXGIFactory2), &dxgiFactory));

                    // Finally, create the swap chain.
                    DX::ThrowIfFailed(
                        dxgiFactory->CreateSwapChainForImmersiveWindow(
                            m_d3dDevice.Get(),
                            DX::GetIUnknown(m_window),
                            &swapChainDesc,
                            nullptr, // allow on all displays
                            &m_swapChain
                            )
                        );
                }

                // Once the swap chain is created, create a render target view. This will
                // allow Direct3D to render graphics to the window.
                ComPtr<ID3D11Texture2D> backBuffer;
                DX::ThrowIfFailed(m_swapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), &backBuffer));
                DX::ThrowIfFailed(m_d3dDevice->CreateRenderTargetView(backBuffer.Get(), nullptr, &m_renderTargetView));

                // After the render target view is created, specify that the viewport, which
                // describes what portion of the window to draw to, should cover the entire window.
                D3D11_TEXTURE2D_DESC backBufferDesc = {0};
                backBuffer->GetDesc(&backBufferDesc);

                D3D11_VIEWPORT viewport;
                viewport.TopLeftX = 0.0f;
                viewport.TopLeftY = 0.0f;
                viewport.Width = static_cast<float>(backBufferDesc.Width);
                viewport.Height = static_cast<float>(backBufferDesc.Height);
                viewport.MinDepth = D3D11_MIN_DEPTH;
                viewport.MaxDepth = D3D11_MAX_DEPTH;
                m_d3dDeviceContext->RSSetViewports(1, &viewport);
            }
        };

        // This class defines how to create the custom View Provider defined above.
        ref class Direct3DTutorialViewProviderFactory : IViewProviderFactory
        {
        public:
            IViewProvider^ CreateViewProvider()
            {
                return ref new Direct3DTutorialViewProvider();
            }
        };

        [Platform::MTAThread]
        int main(array<Platform::String^>^)
        {
            auto viewProviderFactory = ref new Direct3DTutorialViewProviderFactory();
            Windows::ApplicationModel::Core::CoreApplication::Run(viewProviderFactory);
            return 0;
        }


  • Parallelism in .NET – Part 2, Simple Imperative Data Parallelism

    - by Reed
    In my discussion of Decomposition of the problem space, I mentioned that Data Decomposition is often the simplest abstraction to use when trying to parallelize a routine. If a problem can be decomposed based off the data, we will often want to use what MSDN refers to as Data Parallelism as our strategy for implementing our routine. The Task Parallel Library in .NET 4 makes implementing Data Parallelism, for most cases, very simple.

    Data Parallelism is the main technique we use to parallelize a routine which can be decomposed based off data. Data Parallelism refers to taking a single collection of data, and having a single operation be performed concurrently on elements in the collection. One side note here: Data Parallelism is also sometimes referred to as the Loop Parallelism Pattern or Loop-level Parallelism. In general, for this series, I will try to use the terminology used in the MSDN Documentation for the Task Parallel Library. This should make it easier to investigate these topics in more detail.

    Once we've determined we have a problem that, potentially, can be decomposed based on data, implementation using Data Parallelism in the TPL is quite simple. Let's take our example from the Data Decomposition discussion – a simple contrast stretching filter. Here, we have a collection of data (pixels), and we need to run a simple operation on each pixel. Once we know the minimum and maximum values, we most likely would have some simple code like the following:

        for (int row = 0; row < pixelData.GetUpperBound(0); ++row)
        {
            for (int col = 0; col < pixelData.GetUpperBound(1); ++col)
            {
                pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel);
            }
        }

    This simple routine loops through a two-dimensional array of pixelData, and calls the AdjustContrast routine on each pixel.

    As I mentioned, when you're decomposing a problem space, most iteration statements are potentially candidates for data decomposition. Here, we're using two for loops – one looping through rows in the image, and a second nested loop iterating through the columns. We then perform one, independent operation on each element based on those loop positions.

    This is a prime candidate – we have no shared data, no dependencies on anything but the pixel which we want to change. Since we're using a for loop, we can easily parallelize this using the Parallel.For method in the TPL:

        Parallel.For(0, pixelData.GetUpperBound(0), row =>
        {
            for (int col = 0; col < pixelData.GetUpperBound(1); ++col)
            {
                pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel);
            }
        });

    Here, by simply changing our first for loop to a call to Parallel.For, we can parallelize this portion of our routine. Parallel.For works, as do many methods in the TPL, by creating a delegate and using it as an argument to a method. In this case, our for loop iteration block becomes a delegate created via a lambda expression. This lets you write code that, superficially, looks similar to the familiar for loop, but functions quite differently at runtime.

    We could easily do this to our second for loop as well, but that may not be a good idea. There is a balance to be struck when writing parallel code. We want to have enough work items to keep all of our processors busy, but the more we partition our data, the more overhead we introduce. In this case, we have an image of data – most likely hundreds of pixels in both dimensions. By just parallelizing our first loop, each row of pixels can be run as a single task. With hundreds of rows of data, we are providing fine enough granularity to keep all of our processors busy.

    If we parallelize both loops, we're potentially creating millions of independent tasks. This introduces extra overhead with no extra gain, and will actually reduce our overall performance. This leads to my first guideline when writing parallel code:

    Partition your problem into enough tasks to keep each processor busy throughout the operation, but not more than necessary to keep each processor busy.

    Also note that I parallelized the outer loop. I could have just as easily partitioned the inner loop. However, partitioning the inner loop would have led to many more discrete work items, each with a smaller amount of work (operate on one pixel instead of one row of pixels). My second guideline when writing parallel code reflects this:

    Partition your problem in a way to place the most work possible into each task.

    This typically means, in practice, that you will want to parallelize the routine at the "highest" point possible in the routine, typically the outermost loop. If you're looking at parallelizing methods which call other methods, you'll want to try to partition your work high up in the stack – as you get into lower-level methods, the performance impact of parallelizing your routines may not overcome the overhead introduced.

    Parallel.For works great for situations where we know the number of elements we're going to process in advance. If we're iterating through an IList<T> or an array, this is a typical approach. However, there are other iteration statements common in C#. In many situations, we'll use foreach instead of a for loop. This can be more understandable and easier to read, but also has the advantage of working with collections which only implement IEnumerable<T>, where we do not know the number of elements involved in advance.

    As an example, let's take the following situation. Say we have a collection of Customers, and we want to iterate through each customer, check some information about the customer, and if a certain case is met, send an email to the customer and update our instance to reflect this change. Normally, this might look something like:

        foreach (var customer in customers)
        {
            // Run some process that takes some time...
            DateTime lastContact = theStore.GetLastContact(customer);
            TimeSpan timeSinceContact = DateTime.Now - lastContact;

            // If it's been more than two weeks, send an email, and update...
            if (timeSinceContact.Days > 14)
            {
                theStore.EmailCustomer(customer);
                customer.LastEmailContact = DateTime.Now;
            }
        }

    Here, we're doing a fair amount of work for each customer in our collection, but we don't know how many customers exist. If we assume that theStore.GetLastContact(customer) and theStore.EmailCustomer(customer) are both side-effect-free, thread-safe operations, we could parallelize this using Parallel.ForEach:

        Parallel.ForEach(customers, customer =>
        {
            // Run some process that takes some time...
            DateTime lastContact = theStore.GetLastContact(customer);
            TimeSpan timeSinceContact = DateTime.Now - lastContact;

            // If it's been more than two weeks, send an email, and update...
            if (timeSinceContact.Days > 14)
            {
                theStore.EmailCustomer(customer);
                customer.LastEmailContact = DateTime.Now;
            }
        });

    Just like Parallel.For, we rework our loop into a method call accepting a delegate created via a lambda expression. This keeps our new code very similar to our original iteration statement; however, it will now execute in parallel. The same guidelines apply with Parallel.ForEach as with Parallel.For.

    The other iteration statements, do and while, do not have direct equivalents in the Task Parallel Library. These, however, are very easy to implement using Parallel.ForEach and the yield keyword (see the sketch below).

    Most applications can benefit from implementing some form of Data Parallelism. Iterating through collections and performing "work" is a very common pattern in nearly every application. When the problem can be decomposed by data, we often can parallelize the workload by merely changing foreach statements to Parallel.ForEach method calls, and for loops to Parallel.For method calls. Any time your program operates on a collection, and does a set of work on each item in the collection where that work is not dependent on other information, you very likely have an opportunity to parallelize your routine.
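
    The post alludes to, but does not show, the yield-based approach for do/while loops. Here is a minimal sketch of that idea; the WorkItems iterator and its loop bound are made up for illustration:

        using System;
        using System.Collections.Generic;
        using System.Threading.Tasks;

        class WhileToParallelForEach
        {
            // Wrap a while-style iteration in an iterator method; yield turns the
            // loop's successive states into an IEnumerable<int> that
            // Parallel.ForEach can consume.
            static IEnumerable<int> WorkItems()
            {
                int current = 0;
                while (current < 100)   // stand-in for the original while condition
                {
                    yield return current;
                    ++current;
                }
            }

            static void Main()
            {
                // The sequence is produced sequentially, but each yielded item
                // becomes an independent unit of parallel work.
                Parallel.ForEach(WorkItems(), item =>
                {
                    Console.WriteLine("Processing item {0}", item);
                });
            }
        }

    As with the earlier examples, this only pays off when the per-item work is independent and substantial enough to outweigh the partitioning overhead.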


  • How to remap "Dashboard" key to show the Desktop on OSX [Snow] Leopard?

    - by Mike
    I use my Desktop far more often than I use my Dashboard. However, my MacBook Pro comes with a dedicated key for Dashboard but not one for Desktop. Using this article, I was able to remap my Dashboard key to show the Desktop by changing the values for keys 62 and 63 ("Dashboard") to the same values used by keys 36 and 37 ("Show Desktop"). Specifically, I changed the value of array index #1 for both keys to 111. This worked great for my external (Kinesis Freestyle) keyboard. But when I went back to my internal MacBook keyboard, I discovered that the Dashboard key still mapped to the Dashboard rather than the Desktop. How can I complete this mapping for all of my keyboards: the Kinesis Freestyle, my internal MacBook Pro keyboard, and my external Apple aluminum Bluetooth keyboard?

    Update: I'm definitely not looking for a solution that involves using the function keys instead of the special keys. I wish to keep using my function keys as function keys, as they're indispensable for other applications.


  • HP SmartArray P400 has slow read and write speed

    - by Tadas D.
    I have a desktop in which I installed an HP Smart Array P400 controller with two HP DF0146B8052 hard drives. I made a RAID 0 logical volume from them, but I am getting 20 MB/s write speed and roughly 120-140 MB/s read speed. There is also quite little scatter in the benchmark results (the plot is nearly a flat line), so it looks like the controller is "capping" my speeds. I tried resetting the controller's configuration, and I haven't found any settings in the HP ACU (Array Configuration Utility) that help. I am using Windows 7 Ultimate on an M4A78 board. Does anyone have ideas what could be wrong? I am also attaching diagnostic results.


  • AutoVue Integrates with Primavera P6

    - by celine.beck
    Oracle's Primavera P6 Enterprise Project Portfolio Management is an integrated project portfolio management (PPM) application that helps select the right strategic mix of projects, balance resource capacity, manage project risk, and complete projects on time and within budget. AutoVue 19.3 and later versions (release 20.0) now integrate out of the box with the Web version of Oracle Primavera P6 release 7. The integration between the two products, which was announced during Oracle OpenWorld 2009, provides project teams with ready access to any project documents directly from within the context of P6, in support of project scope definition and project planning and execution. You can learn more about the integration between AutoVue and Primavera P6 by:

    - Listening to the Oracle Appcast entitled Enhance Primavera Project Document Collaboration with AutoVue Enterprise Visualization
    - Watching an Oracle Webcast about how to improve project success with document visualization and collaboration
    - Watching a recorded demo of the integrated solution

    Teams involved in complex projects like construction or plant-shutdown activities are highly interdependent: the decisions of one affect the actions of many others. This, coupled with increasing project complexity, a vast array of players, and heavy engineering- and document-intensive workflows, makes it more challenging to complete jobs on time and within budget. Organizations need complete visibility into project information, as well as robust project planning, risk analysis, and resource balancing capabilities similar to those featured in Primavera P6; they also need to make sure that all project stakeholders, even those who neither understand engineering drawings nor are interested in engineering details that go beyond their specific needs, have ready access to technically advanced project information. This is exactly what the integration between AutoVue and Primavera delivers: ready access to any project information attached to Primavera projects, tasks, or activities via AutoVue. There is no need for users to waste time searching for project-related documents or disrupting engineers for printouts; users have all the context they need to make sound decisions right from within Primavera P6 with a single click of a button. We are very excited about this new integration. If you are using Primavera, or Primavera together with AutoVue, we would be interested in getting your feedback on this integration! Please do not hesitate to post your comments / reactions on the blog!


  • SQL SERVER – Guest Post – Jonathan Kehayias – Wait Type – Day 16 of 28

    - by pinaldave
    Jonathan Kehayias (Blog | Twitter) is a MCITP Database Administrator and Developer, who got started in SQL Server in 2004 as a database developer and report writer in the natural gas industry. After spending two and a half years working in TSQL, in late 2006 he transitioned to the role of SQL Database Administrator. His primary passion is performance tuning, where he frequently rewrites queries for better performance and performs in-depth analysis of index implementation and usage. Jonathan blogs regularly on SQLBlog, and was a coauthor of Professional SQL Server 2008 Internals and Troubleshooting.

    On a personal note, I think Jonathan is an extremely positive person. In every conversation with him I have found that he is always eager to help and encourage. Every time he finds something that needs to be improved, he has contacted me without hesitation and guided me to improve, change, and learn. During all this time, he has not lost his focus on helping the larger community. I am honored that he has accepted to provide his views on the complex subject of Wait Types and Queues. Currently I am reading his series on Extended Events. Here is the guest blog post by Jonathan:

    SQL Server troubleshooting is all about correlating related pieces of information together to identify where exactly the root cause of a problem lies. In my daily work as a DBA, I generally get phone calls like, "So-and-so application is slow, what's wrong with the SQL Server?" One of the funny things about the letters DBA is that they go so well with Default Blame Acceptor, and I really wish that I knew exactly who the first person was that pointed that out to me, because it really fits at times. A lot of times when I get this call, the problem isn't related to SQL Server at all, but every now and then in my initial quick checks, something pops up that makes me start looking at things further.

    The SQL Server is slow, and we see a number of tasks waiting on ASYNC_IO_COMPLETION, IO_COMPLETION, or PAGEIOLATCH_* waits in sys.dm_exec_requests and sys.dm_os_waiting_tasks. These are also some of the highest wait types in sys.dm_os_wait_stats for the server, so it would appear that we have a disk I/O bottleneck on the machine. A quick check of sys.dm_io_virtual_file_stats() shows a high write stall rate for tempdb, while our user databases show high read stall rates on the data files. A quick check of some performance counters: Page Life Expectancy on the server is bouncing up and down in the 50-150 range, the Free Pages counter consistently hits zero, and the Free List Stalls/sec counter keeps jumping over 10, yet Buffer Cache Hit Ratio is 98-99%. Where exactly is the problem?

    In this case, which happens to be based on a real scenario I faced a few years back, the problem may not be a disk bottleneck at all; it may very well be a memory pressure issue on the server. A quick check of the system specs: it is a dual dual-core server with 8GB RAM running SQL Server 2005 SP1 x64 on Windows Server 2003 R2 x64. Max server memory is configured at 6GB, and we think that this should be enough to handle the workload; or is it? This is a unique scenario, because there are a couple of things happening inside of this system, and they all relate to what the root cause of the performance problem is.

    If we were to query sys.dm_exec_query_stats for the TOP 10 queries by max_physical_reads, max_logical_reads, and max_worker_time, we may be able to find some queries that were using excessive I/O, and possibly CPU, against the system in their worst single execution. We can also CROSS APPLY to sys.dm_exec_sql_text() and see the statement text, and CROSS APPLY sys.dm_exec_query_plan() to get the execution plan stored in cache. Ok, quick check: the plans are pretty big, I see some large index seeks that estimate 2.8GB of data movement between operators, but everything looks like it is optimized the best it can be. Nothing really stands out in the code, the indexing looks correct, and I should have enough memory to handle this in cache, so it must be a disk I/O problem, right? Not exactly!

    If we were to look at how much memory the plan cache is taking, by querying sys.dm_os_memory_clerks for the CACHESTORE_SQLCP and CACHESTORE_OBJCP clerks, we might be surprised at what we find. In SQL Server 2005 RTM and SP1, the plan cache was allowed to take up to 75% of the memory under 8GB. I'll give you a second to go back and read that again. Yes, you read it correctly, it says 75% of the memory under 8GB, but you don't have to take my word for it; you can validate this by reading Changes in Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2.

    In this scenario the application uses an entirely adhoc workload against SQL Server, and this leads to plan cache bloat: up to 4.5GB of our 6GB of memory for SQL can be consumed by the plan cache in SQL Server 2005 SP1. This in turn reduces the size of the buffer cache to just 1.5GB, causing our 2.8GB of data movement in this expensive plan to completely flush the buffer cache, not just once initially, but another time during the query's execution, resulting in excessive physical I/O from disk. Keep in mind that this is not the only query executing at the time this occurs. Remember that the output of sys.dm_io_virtual_file_stats() showed high read stalls on the data files for our user databases versus higher write stalls for tempdb? The memory pressure is also forcing heavier use of tempdb to handle sorting and hashing in the environment as well.

    The real clue here is the memory counters for the instance: Page Life Expectancy, Free List Pages, and Free List Stalls/sec. The fact that Page Life Expectancy is constantly fluctuating between 50 and 150 is a sign that the buffer cache is experiencing constant churn of data, once every minute to two and a half minutes. If you add to the Page Life Expectancy counter the consistent bottoming out of Free List Pages, along with Free List Stalls/sec consistently spiking over 10, you have the perfect memory pressure scenario. All of a sudden it may not be that our disk subsystem is the problem; it is instead an innocent bystander and victim.

    Side note: the Page Life Expectancy counter dropping briefly and then returning to normal operating values intermittently is not necessarily a sign that the server is under memory pressure. Books Online and a number of other references will tell you that this counter should remain on average above 300, which is the time in seconds a page will remain in cache before being flushed or aged out. This number, which equates to just five minutes, is incredibly low for modern systems, and most published documents pre-date the predominance of 64-bit computing and the easy availability of larger amounts of memory in SQL Servers. As food for thought, consider that my personal laptop has more memory in it than most SQL Servers did at the time those numbers were posted. I would argue that today, a system churning the buffer cache every five minutes is in need of some serious tuning or a hardware upgrade.

    Back to our problem and its investigation: there are two things really wrong with this server. First, the plan cache is excessively consuming memory and bloated in size, and we need to look at that; second, we need to evaluate upgrading the memory to accommodate the workload being performed.

    In the case of the server I was working on, there were a lot of single-use plans found in sys.dm_exec_cached_plans (where usecounts = 1). Single-use plans waste space in the plan cache, especially when they are adhoc plans for statements with concatenated filter criteria that are not likely to reoccur with any frequency. SQL Server 2005 doesn't natively have a way to evict a single plan from cache like SQL Server 2008 does, but MVP Kalen Delaney showed a hack to evict a single plan by creating a plan guide for the statement and then dropping that plan guide, in her blog post Geek City: Clearing a Single Plan from Cache. We could put that hack in place in a job to automate cleaning out all the single-use plans periodically, minimizing the size of the plan cache, but a better solution would be to fix the application so that it uses properly parameterized calls to the database. You didn't write the app, and you can't change its design? Ok, well, you could try to force parameterization to occur by creating and keeping plan guides in place, or we can try forcing parameterization at the database level by using ALTER DATABASE <dbname> SET PARAMETERIZATION FORCED, and that might help. If neither of these helps, we could periodically dump the plan cache for that database, which is discussed as being a problem in Kalen's blog post referenced above; not an ideal scenario.

    The other option is to increase the memory on the server to 16GB or 32GB, if the hardware allows it, which will increase the size of the plan cache as well as the buffer cache. In SQL Server 2005 SP1, on a system with 16GB of memory, if we set max server memory to 14GB, the plan cache could use at most 9GB [(8GB*.75)+(6GB*.5) = (6+3) = 9GB], leaving 5GB for the buffer cache. If we went to 32GB of memory and set max server memory to 28GB, the plan cache could use at most 16GB [(8GB*.75)+(20GB*.5) = (6+10) = 16GB], leaving 12GB for the buffer cache. Thankfully we have SQL Server 2005 Service Packs 2, 3, and 4 these days, which include the changes in plan cache sizing discussed in the Changes to Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2 blog post.

    In real life, when I was troubleshooting this problem, I spent a week trying to chase down the cause of the disk I/O bottleneck with our server admin and SAN admin, and there wasn't much that could be done immediately there, so I finally asked if we could increase the memory on the server to 16GB, which did fix the problem. It wasn't until I had this same problem occur on another system that I actually figured out how to really troubleshoot this down to the root cause. I couldn't believe the size of the plan cache on the server with 16GB of memory when I actually learned about this and went back to look at it. SQL Server is constantly telling a story to anyone that will listen. As the DBA, you have to sit back and listen to all that it's telling you, and then evaluate the big picture: how all the data you can gather from SQL Server about performance relates together.

    One of the greatest tools out there is actually free, in the form of the Diagnostic Scripts for SQL Server 2005 and 2008 created by MVP Glenn Alan Berry. Glenn's scripts collect a majority of the information that SQL Server has to offer for rapid troubleshooting of problems, and he includes a lot of notes about what the outputs of each individual query might be telling you.

    When I read Pinal's blog post SQL SERVER – ASYNC_IO_COMPLETION – Wait Type – Day 11 of 28, I noticed that he referenced Checking Memory Related Performance Counters in his post, but there was no real explanation about why checking memory counters is so important when looking at an I/O-related wait type. I thought I'd chat with him briefly on Google Talk/Twitter DM and point this out, and offer a couple of other points I noted, so that he could add the information to his blog post if he found it useful. Instead he asked that I write a guest blog for this. I am honored to be a guest blogger, and to be able to share this kind of information with the community. The information contained in this blog post is a glimpse at how I do troubleshooting almost every day of the week in my own environment. SQL Server provides us with a lot of information about how it is running, and where it may be having problems; it is up to us to play detective and find out how all that information comes together to tell us what's really the problem.

    This blog post is written by Jonathan Kehayias (Blog | Twitter).

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: MVP, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
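
    Jonathan's walkthrough leans on the wait-stats DMVs; as a companion (not from the post itself), here is a minimal C# sketch that snapshots the I/O-related wait types he names from sys.dm_os_wait_stats. The connection string is a placeholder, and the PAGEIOLATCH list is abbreviated to the two most common latch waits:

        using System;
        using System.Data.SqlClient;

        class WaitStatsSnapshot
        {
            static void Main()
            {
                // Placeholder connection string; point it at the instance under investigation.
                const string connectionString =
                    "Data Source=.;Initial Catalog=master;Integrated Security=True";

                // sys.dm_os_wait_stats is cumulative since the last service restart (or
                // manual clear), so compare two snapshots taken over an interval.
                const string query =
                    "SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms " +
                    "FROM sys.dm_os_wait_stats " +
                    "WHERE wait_type IN ('ASYNC_IO_COMPLETION', 'IO_COMPLETION', " +
                    "                    'PAGEIOLATCH_SH', 'PAGEIOLATCH_EX') " +
                    "ORDER BY wait_time_ms DESC;";

                using (SqlConnection connection = new SqlConnection(connectionString))
                using (SqlCommand command = new SqlCommand(query, connection))
                {
                    connection.Open();
                    using (SqlDataReader reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            Console.WriteLine("{0,-22} tasks={1,10} wait_ms={2,12} signal_ms={3,10}",
                                reader.GetString(0),   // wait_type
                                reader.GetInt64(1),    // waiting_tasks_count
                                reader.GetInt64(2),    // wait_time_ms
                                reader.GetInt64(3));   // signal_wait_time_ms
                        }
                    }
                }
            }
        }

    As the post stresses, high values here alone don't prove a disk bottleneck; they are one input to correlate with the memory counters discussed above.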


  • Flixel - Animated Tilemaps

    - by nospoone
    I am using Flixel 2.55 and I am trying to animate a tilemap. I found a piece of code that apparently enables the use of sprites as tiles; from what I understand, it loops over the tilemap's graphic and replaces each tile's pixels with the sprite's pixels whenever they change. I have implemented the class and it's working, but not completely: the tiles get replaced, but do not animate unless the camera moves.

    Here are the relevant parts of LevelLoader.as, which only instantiates the AnimatedTilemap (the class from the forum) and pushes sprites to the array:

        // AnimatedTile is just an extended FlxSprite
        private var _waterTop1:AnimatedTile;

        // Create ground tilemap
        _groundTilemap = new AnimatedTilemap();
        _groundTilemap.loadMap(_rawXML.Ground, Assets.OverworldGround, 8, 8);
        FlxG.state.add(_groundTilemap);
        _waterTop1 = new AnimatedTile(8, 8, Assets.WaterTop, 100);
        // .Animate only adds and plays an animation, with a startAtFrame param.
        _waterTop1.Animate('run', [0...47], 10, true, 0);

    Now, it seems as though the sprites are updating. I tried tracing the update()s, and they are running for both the sprites and the tilemap; the sprites are even changing frames. Using only AnimatedTiles and hard-placing them (giving an x and y) works and animates. What troubles me is that the tilemap only updates when the camera moves. I've been on this for a week now and can't seem to put my finger on what's wrong. I am also open to other solutions for having animated tiles in a tilemap. If other details are needed, just ask.

    PS: Sorry for my English, I am not a native speaker...


  • SQLAuthority News – Happy Deepavali and Happy New Year

    - by pinaldave
    Diwali, or Deepavali, is popularly known as the festival of lights. It literally means "array of light" or "row of lamps". Today we make small clay lamps, fill them with oil, and light them; the significance of lighting the lamp is the triumph of good over evil. I work every single day of the year, but today I am spending my time with family and my little one. I make sure that my daughter is aware of our culture and learns to celebrate the festival with the same passion and values that I have. Every year on this day, I do not write a long blog post; rather, I write a small post with various SQL tips and tricks. After reading them you should quickly get back to your friends and family - it is the most important festival day. Here are a few tips and tricks:

    - Take regular full backups of your database
    - Avoid cursors if they can be replaced by a set-based process
    - Keep your index maintenance scripts handy and execute them at intervals
    - Consider Solid State Drives (SSD) for crucial database and tempdb placement
    - Update statistics for OLTP transactions at intervals

    I guess that's it for today. If you still have more time to learn, here are a few things you should consider:

    - Get FREE books by signing up for tomorrow's webcast by Rick Morelan
    - Watch the SQL in Sixty Seconds series - FREE SQL learning
    - Read my earlier 2300+ articles

    Well, I am sure that will keep you busy for the rest of the day! Happy Diwali to All of You!

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: About Me, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology


  • Stream Media and Live TV Across the Internet with Orb

    - by DigitalGeekery
    Looking for a way to stream your media collection across the Internet? Or perhaps watch and record TV remotely? Today we are going to look at how to do all that and more with Orb.

    Requirements

    - Windows XP / Vista / 7, or an Intel-based Mac with OS X 10.5 or later
    - 1 GB RAM or more
    - Pentium 4 2.4 GHz or higher / AMD Athlon 3200+
    - Broadband connection
    - TV tuner for streaming and recording live TV (optional)

    Note: Slower Internet connections may result in stuttering during playback.

    Installation and Setup

    Download and install Orb on your home computer (download link below). You'll want to take the defaults for the initial portion of the install; the Orb account setup portion is where we will have to enter information and make some decisions. Choose your language and click Next. We'll need to create a user account and password. A valid email address is required, as we'll need to confirm the account later. Click Next.

    Now you'll want to choose your media sources. Orb will automatically look for folders that may contain media files. You can add or remove folders by clicking the (+) or (-) buttons. To remove a folder, click it once to select it from the list and then click the minus (-) button. To add a folder, click the plus (+) button and browse for it. You can add local folders as well as shared folders from networked computers and USB-attached storage. Note: Both the host computer running Orb and the networked computer will need to be running to access shared network folders remotely. When you've selected all your media files, click Next. Orb will proceed to index your media files; when the indexing is complete, click Next.

    Orb TV Setup

    Note: Streaming live TV to Macs is not currently supported. If you have a TV tuner card connected to your PC, you can opt to configure Orb to stream live or recorded TV. Click Next to configure TV, or choose Skip if you don't wish to configure Orb for TV.

    If you have a digital tuner card, type in your ZIP code and click Get List to pull your channel listings, then select a TV provider from the list and click Next. If not, click Skip. You can select or deselect any channels by checking or un-checking the box next to each channel. Select Auto Scan to let Orb find more channels or disable the ones with no reception. Click Next when finished.

    Next choose an analog provider, if necessary, and click Next. Select "Yes" or "No" for a set-top box and click Next. Just as we did with the digital tuner, select or deselect any channels by checking or un-checking the box next to each channel, using Auto Scan as needed, and click Next when finished. Now we're finished with the setup; click Close.

    Accessing your Media Remotely

    Media files are accessed through a web-based interface. Before we go any further, however, we'll need to confirm our username and password. Check your inbox for an email from Orb Networks and click the enclosed confirmation link. You'll be prompted to enter the username and password you selected in your browser, then click Next. Your account will be confirmed.

    Now we're ready to enjoy our media remotely. To get started, point your browser to the MyCast website from your remote computer (see link below). Enter your credentials and click Log In. Once logged in, you'll be presented with the MyCast Home screen. By default you'll see a handful of "channels" such as a TV program guide, random audio and photos, video favorites, and weather. You can add, remove, or customize channels. To add additional channels, click Add Channels at the top right and select from the dropdown list. To access your full media libraries, click Open Application at the top left and select one of the options.

    Live and Recorded TV

    If you have a TV tuner card configured for Orb, you'll see your program guide on the TV / Webcams screen. To watch or record a show, click on the program listing to bring up a detail box, then click the red button to record or the green button to play. When recording a show, you'll see a pulsating red icon at the top right of the listing in the program guide. If you want to watch live TV, you may be prompted to choose your media player, depending on your browser and settings. Playback should begin shortly.

    Note for Windows Media Center users: if you try to stream live TV in Orb while Windows Media Center is running on your PC, you'll get an error message. Click the Stop MediaCenter button and then try again.

    Audio

    On the Audio screen, you'll find your music files indexed by genre, artist, and album. You can play a selection by clicking it once and then clicking the green play button, or by simply double-clicking. Playback will begin in the default media player for the streaming format.

    Video

    Video works essentially the same as audio. Click on a selection and press the green play button, or double-click on the video title. Video playback will begin in the default media player for the streaming format.

    Streaming Formats

    You can change the default streaming format in the Control Panel settings. To access the Control Panel, click Open Applications and select Control Panel; you can also click Settings at the top right. Select General from the dropdown list and then click on the Streaming Formats tab. You are provided four options: Flash, Windows Media, .SDP, and .PLS.

    Creating Playlists

    To create playlists, drag and drop your media titles into the playlist work area on the right, or click Add to playlist in the top menu. Click Save when finished.

    Sharing your Media

    Orb allows you to share media playlists across the Internet with friends and family. There are a few ways to accomplish this. We'll start by clicking the Share button at the bottom of the playlist work area after you've compiled your playlist. You'll be prompted to choose a method by which to share your playlist: publicly or privately. You can share publicly through links, blogs, or on your Orb public profile. By choosing the Public Profile option, Orb will automatically create a profile page for you, with a URL like http://public.orb.com/username, that anyone can easily access on the Internet. The private sharing option allows you to invite friends by email and requires recipients to register with Orb. You can also give your playlist a custom name, or accept the auto-generated title. Click OK when finished. Users who visit your public profile will be able to view and stream any of your shared playlists to their computer or supported device.

    Portable Media Devices and Smartphones

    Orb can stream media to many portable devices and 3G phones. Streaming audio is supported on the iPhone and iPod Touch through the Safari browser. However, video and live TV streaming require the Orb Live iPhone app, available in the App Store for $9.99. To stream media to your portable device, go to the MyCast website in your mobile browser and log in. Browse for your media or playlist, make a selection, and play the media; playback will begin. We found streaming music to both the Droid and the iPhone to work quite nicely. Video playback on the Droid, however, left a bit to be desired: the video looked good, but the audio tended to be out of sync.

    System Tray Control Panel

    By default, Orb runs in the system tray on startup. To access the system tray control panel, right-click the Orb icon in the system tray and select Control Panel. Log in with your Orb username and password and click OK. From here you can add or remove media sources, manage accounts, change your password, and more. If you'd rather not run Orb on startup, click the General icon and unselect the checkbox next to "Start Orb when the system starts".

    Conclusion

    It may seem like a lot of steps, but getting Orb up and running isn't terribly difficult. Orb is available for both Windows and Intel-based Macs. It also supports streaming to many game consoles such as the Wii, PS3, and Xbox 360. If you are running Windows 7 on multiple computers, you may want to check out our write-up on how to stream music and video over the Internet with Windows Media Player 12.

    Downloads: Download Orb | Logon to MyCast


  • CodePlex Daily Summary for Wednesday, January 12, 2011

    Popular Releases

    - Google URL Shortener API for .NET: Google URL Shortener API v1: Follows the specification at http://code.google.com/apis/urlshortener/v1/reference.html
    - jGestures, a jQuery plugin for gesture events: 0.81: Added event substitution for IE; updated index.html.
    - StyleCop for ReSharper: StyleCop for ReSharper 5.1.14986.000: A considerable amount of work has gone into this release. Features: a huge focus on performance around the violation-scanning subsystem, with caching added to reduce I/O operations around reading and merging of settings files and to reduce creation of expensive objects; users should notice a considerable performance boost and a decrease in memory usage. Bug fixes: StyleCop's new ObjectBasedEnvironment object does not resolve the StyleCop installation path, thus it does not return the ...
    - SQL Monitor - tracking sql server activities: SQL Monitor 3.1 beta 1: 1. support for alert message templates; 2. dynamic toolbar commands depending on functionality; 3. fixed some bugs; 4. refactored part of the code, now more stable and cleaner.
    - Facebook C# SDK: 4.2.1: Authentication bug fixes; updated Json.Net to version 4.0.0; BREAKING CHANGE: removed the cookieSupport config setting, now automatic. This download is also available on NuGet: Facebook, FacebookWeb, FacebookWebMvc.
    - Umbraco CMS: Umbraco 4.6: The Umbraco 4.6 (codename JUNO) release contains many new features focusing on an improved installation experience, a number of robust developer features, and nearly 200 bug fixes since the 4.5.2 release: improved installer experience; updated starter kits (Simple, Blog, Personal, Business); beautiful, free, customizable skins included; skinning engine and skin customization (see the Skinning Documentation Kit); default dashboards on install with a hide option; updated login timeout ...
    - ArcGIS Editor for OpenStreetMap: ArcGIS Editor for OpenStreetMap 1.1 beta2: This is the beta2 release for the ArcGIS Editor for OpenStreetMap version 1.1. Changes from version 1.0: multi-part geometries are now supported; homogeneous relations (consisting of only lines or only polygons) are converted into the appropriate multi-part geometry; mixed relations and super relations are maintained and tracked in a stand-alone relation table; the underlying editing logic has changed - as opposed to tracking the editing changes upon "Save edit" or "Stop edit", the changes a...
    - Hawkeye - The .Net Runtime Object Editor: Hawkeye 1.2.5: In case you are running x86 Windows and installed Release 1.2.4, you should consider upgrading to this release (1.2.5), as it appears Hawkeye is broken on x86 OSes. I apologize for the inconvenience, but it appears Hawkeye 1.2.4 (and probably previous versions) doesn't run on x86 Windows (see issue http://hawkeye.codeplex.com/workitem/7791). This maintenance release fixes this broken behavior. This release comes in two flavors: Hawkeye.125.N2 is the standard .NET 2 build, was compile...
    - Phalanger - The PHP Language Compiler for the .NET Framework: 2.0 (January 2011): Another release build for daily use; it contains many new features, enhanced compatibility with the latest PHP open-source applications, and several issue fixes. To improve the performance of your application using MySQL, please use the Managed MySQL Extension for Phalanger. Changes made within this release include the following: new features available only in Phalanger; full support for multi-script assemblies was implemented, so you can build your application into several DLLs now and deploy them separately t...
    - EnhSim: EnhSim 2.3.0: This release supports WoW patch 4.03a at level 85. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed (http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84). To use the GUI you must have the .NET 4.0 Framework installed (http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992). Changed how flame shoc...
    - AutoLoL: AutoLoL v1.5.3: A message will be displayed when there's an update available; shows a list of recent mastery files in the Editor tab (requested by quite a few people). Updater: update information is now scrollable; added a button to launch AutoLoL after updating is finished; updated the UI to match that of AutoLoL. Fix: detects and resolves the 'Read Only' state on Version.xml.
    - TweetSharp: TweetSharp v2.0.0.0 - Preview 7: Documentation for this release may be found at http://tweetsharp.codeplex.com/wikipage?title=UserGuide&referringTitle=Documentation. Note: this code is currently preview quality. Preview 7 changes: fixes the regression issue in OAuth from Preview 6. Previews 6 and 5 were maintenance releases with user-reported fixes. Third-party library versions: Hammock v1.0.6 (http://hammock.codeplex.com), Json.NET 3.5 Release 8 (http://json.codeplex.com).
    - Extended WPF Toolkit: Extended WPF Toolkit - 1.3.0: What's in the 1.3.0 release: BusyIndicator, ButtonSpinner, ChildWindow, ColorPicker (updated - breaking changes), DateTimeUpDown (new control), Magnifier (new control), MaskedTextBox (new control), MessageBox, NumericUpDown, RichTextBox, RichTextBoxFormatBar (updated), and .NET 3.5 binaries and source. Please note: the Extended WPF Toolkit 3.5 is dependent on .NET Framework 3.5 and the WPFToolkit; you must install both in order to use any features in the To...
    - sNPCedit: sNPCedit v0.9d: Added an elementclient coordinate catcher: select a target (in game), i.e. your char, NPC, or monster, then click the button and the coordinates and direction will be transferred to the selected row in the table. Corrected labels from Rot to Direction (because it is a vector).
    - Ionics Isapi Rewrite Filter: 2.1 latest stable: V2.1 is stable and in maintenance mode. This is v2.1.1.25, a bug-fix release; there are no new features. Fixes work items 28629, 29172, 28722, 27626, 28074, 29164, 27659, and 27900; many documentation updates and fixes; proper x64 build environment. This release includes x64 binaries in zip form, but no x64 MSI file; you'll have to manually install x64 servers, following the instructions in the documentation.
    - VivoSocial: VivoSocial 7.4.1: New release with bug fixes and updates for performance.
    - UltimateJB: Ultimate JB 2.03 PL3 KAKAROTO + HERMES + Spoof 3.5: (translated from French) Here is a release many have been waiting for: the PL3 KAKAROTO version integrates his latest modifications and now supports firmware 2.43. Conclusion: UltimateJB203PSXXXDEFAULTKAKAROTO => no spoof, but available for the following PS3 firmwares: 3.41_kiosk, 3.41, 3.40, 3.30, 3.21, 3.15, 3.10, 3.01, 2.76, 2.70, 2.60, 2.53, 2.43; UltimateJB203PS341_HERMES => no spoof, but Hermes v4b; UltimateJB203PS341HERMESSPOOF35X => Hermes v4b + spoof of firmwares 3.50 and 3.55 at li...
    - .NET Extensions - Extension Methods Library for C# and VB.NET: Release 2011.03: Added lots of new extensions and new projects for MVC and Entity Framework: object.FindTypeByRecursion, Int32.InRange, String.RemoveAllSpecialCharacters, String.IsEmptyOrWhiteSpace, String.IsNotEmptyOrWhiteSpace, String.IfEmptyOrWhiteSpace, String.ToUpperFirstLetter, String.GetBytes, String.ToTitleCase, String.ToPlural, DateTime.GetDaysInYear, DateTime.GetPeriodOfDay, IEnumberable.RemoveAll, IEnumberable.Distinct, ICollection.RemoveAll, IList.Join, IList.Match, IList.Cast, Array.IsNullOrEmpty, Array.W...
    - EFMVC - ASP.NET MVC 3 and EF Code First: EFMVC 0.5 - ASP.NET MVC 3 and EF Code First: Demo web app using ASP.NET MVC 3, Razor, and EF Code First.
    - VidCoder: 0.8.0: Added an x64 version. Made the audio output preview more detailed and accurate: if the chosen encoder or mixdown is incompatible with the source, the fallback that will be used is displayed. Added "Auto" to the audio mixdown choices. Reworked non-anamorphic size calculation to work better with non-standard pixel aspect ratios and cropping. Reworked Custom anamorphic to be more intuitive and to allow display width to be set automatically (thanks, Statick). Allowing higher bitrates for 6-ch...

    New Projects

    - ASP.NET MVC Scaffolding: Scaffolding package for ASP.NET
    - Astor: OData Explorer: OData Explorer
    - Basic Users Community: A simple user community with threads and posts.
    - Bukkit Server Manager: BSM makes server managing easy; it supports multiple server types (VPS, dedicated, home PC) and multiple databases, including MySql and SQLite.
    - Ch4CP: Chamber 4 control program
    - DotNetNuke Telerik Library: A set of Telerik wrappers for DotNetNuke module developers to utilize which aren't yet included as of 5.6.1. Eventually this will be offloaded to the core.
    - Enjoy Life: our FYP
    - FolderSizeChecker: Checks the size of big folders in a specific partition and helps the user find what uses the most disk space. (It's a simple project, so please don't expect big and complex algorithms.)
    - HomeTeamOnline: This is the project of HomeTeamOnline
    - ICSWorld: This is the project of ICSWorld
    - IMAP Client for .NET 4.0 using LumiSoft: Develop an IMAP client using this sample project based on the LumiSoft .NET open-source project. This project compiles in .NET 4.0 and demonstrates how to pull email using IMAP. The purpose of the project is email auto-processing.
    - MUIExt (Multilingual User Interface Extender): MUIExt makes it easier for SharePoint 2010 users to create multilingual sites. You'll no longer have to live with the MUI limitations or have to manage variations. It's developed in C#.
    - Phoenix Service Bus: The goal of pServiceBus is to provide an API and service components that make implementing an ESB infrastructure in your environment easier. It's developed in C#, and also has an API written for JavaScript clients.
    - PhotoSnapper: Home project just to rename photos or .mov files in a folder, starting from a user-defined number.
    - redditfier: A Windows application to notify redditors of new posts.
    - SharePoint Field Updater: Automatically updates sub-fields according to a lookup field. For example: updating the field "Contact" will automatically put "Contact Email" and "Address" in the appropriate text fields.
    - TXLCMS: empty
    - Umbraco Spark engine: Spark macro engine for Umbraco
    - Urdu Translation: Urdu Translation Project
    - WFTestDesign: BizUnit WF is based on the BizUnit solution and allows the user to define a test using a workflow UI, custom activities designed in this extension, and general Workflow activities. It also enables the use of breakpoints in tests. It's developed in C#.
    - WPF Date Range Slider: A WPF date range slider user control written in C# to allow your users to choose a range of dates using a double-thumbed slider control.
    - WPMind Framework for WP7: This project provides some Windows Phone 7 controls for Windows Phone 7 Silverlight developers. Please join us if you are interested in this project.

    Read the article

  • 2D grid with multiple types of objects

    - by Alexandre P. Levasseur
    This is my first post here on programmers.stackexchange (I'm a regular on SO), so I hope this isn't too general. I'm trying a simple project to learn Java, based on something I've seen done in the past. Basically, it's an AI simulation where there are herbivorous and carnivorous creatures, and both must try to survive. The part I'm trying to work out is the board itself. Let's assume very simple rules: the board must be of size X by Y, and only one element can occupy a given tile at a time. For example, a critter cannot be in the same tile as a food block. There can be obstacles (rocks, trees...), there can be food, and there can be critters of any type. Assuming these rules, what would be a good way to represent this situation? This is what I came up with, and I'd like suggestions if possible: use multiple levels of inheritance to represent all the different possible objects (AbstractObject -> (NonMovingObject -> (Food, Obstacle), MovingObject -> Critter -> (Carnivorous, Herbivorous))) and use polymorphism in a 2D array to store the instances while still having access to lower-level methods. Many thanks. Edit: Here is the graphic representation of the structure I have in mind.
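    In code, the idea would look roughly like the sketch below (C# syntax, which is close to the Java the question has in mind; all member names beyond those in the question are illustrative). The 2D array holds the abstract base type, so one container can store every kind of object while the tile-occupancy rule is enforced in one place:

    using System;

    public abstract class AbstractObject { }

    public abstract class NonMovingObject : AbstractObject { }
    public class Food : NonMovingObject { }
    public class Obstacle : NonMovingObject { }

    public abstract class MovingObject : AbstractObject { }
    public abstract class Critter : MovingObject
    {
        // Each critter decides what to do on its turn; subclasses override this.
        public abstract void Act(Board board);
    }
    public class Carnivorous : Critter
    {
        public override void Act(Board board) { /* hunt herbivores */ }
    }
    public class Herbivorous : Critter
    {
        public override void Act(Board board) { /* graze, avoid carnivores */ }
    }

    public class Board
    {
        // One slot per tile; null means the tile is empty, so the
        // "one element per tile" rule holds by construction.
        private readonly AbstractObject[,] _tiles;

        public Board(int width, int height)
        {
            _tiles = new AbstractObject[width, height];
        }

        public AbstractObject this[int x, int y]
        {
            get { return _tiles[x, y]; }
            set
            {
                if (value != null && _tiles[x, y] != null)
                    throw new InvalidOperationException("Tile is already occupied.");
                _tiles[x, y] = value;
            }
        }
    }

    Each simulation tick can then walk the array and, for any element that is a Critter, call Act polymorphically - the dispatch to Carnivorous or Herbivorous behavior happens without the board ever needing to know the concrete type.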

    Read the article

  • Hardware firewall vs VMWare firewall appliance

    - by Luke
    We have a debate going on in our office about whether it's necessary to get a hardware firewall or to set up a virtual one on our VMWare cluster. Our environment consists of 3 server nodes (16 cores with 64 GB of RAM each) connected over two 1 Gbit switches, with an iSCSI shared storage array. Assuming that we would dedicate resources to the VMWare appliances, would we gain any benefit from choosing a hardware firewall over a virtual one? If we choose a hardware firewall, how would a dedicated server firewall running something like ClearOS compare to a Cisco firewall?

    Read the article

  • Reconfigure RAID on Dell PowerEdge T710

    - by Stefano Borini
    I have a Dell PowerEdge T710 under my feet at this very moment, with RedHat Enterprise Server 5.3. I have six 1 TB disks and two 500 GB disks. parted reports two devices, one 500 GB and the other 4 TB. So I assume the RAID has been set up as a mirror for the two 500 GB disks, and I assume the remaining six are in RAID 5. I say "I assume" because the numbers don't add up: with 6 disks in RAID 5, I should get a total of 5 TB, not 4 TB. It's not even RAID 10 - that would give me a 3 TB unit. How can I check, and eventually modify, the RAID array definition? On the Fujitsu Siemens box I played with some time ago I could enter the controller BIOS at boot, but here I don't see a clear way to perform this operation.
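    For reference, the arithmetic behind those assumptions - and one plausible (unverified) explanation for the observed 4 TB, namely five disks in RAID 5 with the sixth configured as a hot spare:

    RAID 5 usable capacity = (n - 1) x disk size
    6 disks in RAID 5:               (6 - 1) x 1 TB = 5 TB
    6 disks in RAID 10:              (6 / 2) x 1 TB = 3 TB
    5 disks in RAID 5 + 1 hot spare: (5 - 1) x 1 TB = 4 TB   <- matches parted's report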

    Read the article

  • Verbosity Isn’t Always a Bad Thing

    - by PSteele
    There was a message posted to the Rhino.Mocks forums yesterday about verifying a single parameter of a method that accepted 5 parameters.  The code looked like this:

    [TestMethod]
    public void ShouldCallTheAvanceServiceWithTheAValidGuid()
    {
        _sut.Send(_sampleInput);
        _avanceInterface.AssertWasCalled(x => x.SendData(
            Arg<Guid>.Is.Equal(Guid.Empty),
            Arg<string>.Is.Anything,
            Arg<string>.Is.Anything,
            Arg<string>.Is.Anything,
            Arg<string>.Is.Anything));
    }

    Not the prettiest code, but it does work. I was going to reply that he could use the "GetArgumentsForCallsMadeOn" method to pull out an array that would contain all of the arguments.  A quick check of "args[0]" would be all that he needed.  But then Tim Barcz replied with the following:

    Just to help allay your fears a bit... this verbosity isn't always a bad thing.  When I read the code, based on the syntax you have used, I know that for this particular test no parameters matter except the first... extremely useful in my opinion.

    An excellent point!  We need to make sure our unit tests are as clear as our code.

    Technorati Tags: Rhino.Mocks, Unit Testing
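    For completeness, a rough sketch of what that GetArgumentsForCallsMadeOn alternative would have looked like. This assumes the same _sut and _avanceInterface fields as the original test, and the exact Rhino.Mocks overloads may vary by version; GetArgumentsForCallsMadeOn returns one object[] per recorded call:

    // Assumes: using System; using System.Collections.Generic;
    //          using Microsoft.VisualStudio.TestTools.UnitTesting; using Rhino.Mocks;
    // and that this method lives in the same test class as the original example.
    [TestMethod]
    public void ShouldCallTheAvanceServiceWithAValidGuid_ViaArguments()
    {
        _sut.Send(_sampleInput);

        // One object[] per call to SendData; each array holds that call's five arguments.
        IList<object[]> calls = _avanceInterface.GetArgumentsForCallsMadeOn(
            x => x.SendData(Guid.Empty, null, null, null, null),
            options => options.IgnoreArguments());

        // Only the first parameter matters for this test.
        Assert.AreEqual(Guid.Empty, (Guid)calls[0][0]);
    }

    Note how the intent ("only the first argument matters") is buried in the assertion here, whereas the verbose AssertWasCalled version states it in the constraint syntax itself - which is exactly Tim's point below.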

    Read the article

  • Hard drive device names are different from one reboot to another in Ubuntu.

    - by Mike
    I have an Ubuntu machine (10.04, but I had the same problem on 8.04) with a bunch of drives that I use as a file server: one SATA drive that I boot off of, two IDE drives in RAID 1, and two SATA drives in RAID 1. The problem is that the drives I have in RAID 1 change device names on reboot. This is a problem because a reference to /dev/sda1 in my mdadm.conf, for example, might not work the next time I reboot, since /dev/sda1 could then be a disk from another array. Any help getting around this would be appreciated. -Mike
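    A sketch of the usual workaround, assuming a standard Ubuntu mdadm layout: identify arrays by UUID rather than by /dev/sdX name, since the UUID doesn't change when the kernel enumerates the disks in a different order (the UUIDs below are illustrative):

    # List the arrays with their UUIDs:
    sudo mdadm --detail --scan
    # ARRAY /dev/md0 metadata=0.90 UUID=3aaa0122:29827cfa:5331ad66:ca767371
    # ARRAY /dev/md1 metadata=0.90 UUID=7d45be12:0c3a8b11:9f21cd03:aa12f077

    # Replace the /dev/sdXN-based ARRAY lines in /etc/mdadm/mdadm.conf with the
    # UUID-based lines printed above, then rebuild the initramfs so boot-time
    # assembly uses them:
    sudo update-initramfs -u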

    Read the article

  • Creating a JSONP Formatter for ASP.NET Web API

    - by Rick Strahl
    Out of the box, ASP.NET Web API does not include a JSONP formatter, but it's actually very easy to create a custom formatter that implements this functionality. JSONP is one way to allow browser-based JavaScript client applications to bypass cross-site scripting limitations and serve data from a server other than the current one. AJAX in web applications uses the XmlHttp object, which by default doesn't allow access to remote domains. There are a number of ways around this limitation; <script> tag loading is one, and JSONP is one of the easiest and semi-official ways to do it. JSONP works by taking JSON data and wrapping it into a function call that is executed when the JSONP data is returned. If you use a tool like jQuery, it's extremely easy to access JSONP content. Imagine that you have a URL like this: http://RemoteDomain/aspnetWebApi/albums which on an HTTP GET serves some data - in this case an array of record albums. This URL is always directly accessible from an AJAX request if the URL is on the same domain as the parent request. However, if that URL lives on a separate server it won't be easily accessible to an AJAX request. Now, if the server can serve up JSONP, this data can be accessed cross-domain from a browser client. Using jQuery it's really easy to retrieve the same data with JSONP:

    function getAlbums() {
        $.getJSON("http://remotedomain/aspnetWebApi/albums?callback=?", null,
            function (albums) {
                alert(albums.length);
            });
    }

    The resulting callback is the same as if the call had been made to a local server. jQuery deserializes the data and feeds it into the method. Here the array is received and I simply echo back the number of items returned. From here your app is ready to use the data as needed. This all works fine - as long as the server can serve the data with JSONP. What does JSONP look like? JSONP is a pretty simple 'protocol'. All it does is wrap a JSON response with a JavaScript function call. The above result from the JSONP call looks like this:

    jQuery17103401925975181569_1333408916499(
        [{"Id":"34043957","AlbumName":"Dirty Deeds Done Dirt Cheap",…},{…}]
    )

    The way JSONP works is that the client (jQuery in this case) sends off the request, receives the response and evals it. The eval basically executes the function and deserializes the JSON inside of the function. It's actually a little more complex for the framework that does this, but that's the gist of what happens. JSONP works by executing the code that gets returned from the JSONP call. JSONP and ASP.NET Web API As mentioned previously, JSONP support is not natively in the box with ASP.NET Web API. But it's pretty easy to create and plug in a custom formatter that provides this functionality. The following code is based on Christian Weyer's example but has been updated to the latest Web API CodePlex bits, which changes the implementation a bit due to the way dependent objects are exposed differently in the latest builds.
    Here's the code:

    using System;
    using System.IO;
    using System.Net;
    using System.Net.Http.Formatting;
    using System.Net.Http.Headers;
    using System.Threading.Tasks;
    using System.Web;
    using System.Net.Http;

    namespace Westwind.Web.WebApi
    {
        /// <summary>
        /// Handles JsonP requests when requests are fired with
        /// text/javascript or application/json and contain
        /// a callback= (configurable) query string parameter.
        ///
        /// Based on Christian Weyer's implementation:
        /// https://github.com/thinktecture/Thinktecture.Web.Http/blob/master/Thinktecture.Web.Http/Formatters/JsonpFormatter.cs
        /// </summary>
        public class JsonpFormatter : JsonMediaTypeFormatter
        {
            public JsonpFormatter()
            {
                SupportedMediaTypes.Add(new MediaTypeHeaderValue("application/json"));
                SupportedMediaTypes.Add(new MediaTypeHeaderValue("text/javascript"));
                //MediaTypeMappings.Add(new UriPathExtensionMapping("jsonp", "application/json"));

                JsonpParameterName = "callback";
            }

            /// <summary>
            /// Name of the query string parameter to look for
            /// the jsonp function name
            /// </summary>
            public string JsonpParameterName { get; set; }

            /// <summary>
            /// Captured name of the Jsonp function that the JSON call
            /// is wrapped in. Set in GetPerRequestFormatterInstance.
            /// </summary>
            private string JsonpCallbackFunction;

            public override bool CanWriteType(Type type)
            {
                return true;
            }

            /// <summary>
            /// Override this method to capture the Request object,
            /// look for the query string parameter and
            /// create a new instance of this formatter.
            ///
            /// This is the only place in a formatter where the
            /// Request object is available.
            /// </summary>
            public override MediaTypeFormatter GetPerRequestFormatterInstance(Type type,
                HttpRequestMessage request, MediaTypeHeaderValue mediaType)
            {
                var formatter = new JsonpFormatter()
                {
                    JsonpCallbackFunction = GetJsonCallbackFunction(request)
                };
                return formatter;
            }

            /// <summary>
            /// Override to wrap the existing JSON result with the
            /// JSONP function call
            /// </summary>
            public override Task WriteToStreamAsync(Type type, object value, Stream stream,
                HttpContentHeaders contentHeaders, TransportContext transportContext)
            {
                if (!string.IsNullOrEmpty(JsonpCallbackFunction))
                {
                    return Task.Factory.StartNew(() =>
                    {
                        var writer = new StreamWriter(stream);
                        writer.Write(JsonpCallbackFunction + "(");
                        writer.Flush();

                        base.WriteToStreamAsync(type, value, stream,
                            contentHeaders, transportContext).Wait();

                        writer.Write(")");
                        writer.Flush();
                    });
                }
                else
                {
                    return base.WriteToStreamAsync(type, value, stream,
                        contentHeaders, transportContext);
                }
            }

            /// <summary>
            /// Retrieves the Jsonp callback function
            /// from the query string
            /// </summary>
            private string GetJsonCallbackFunction(HttpRequestMessage request)
            {
                if (request.Method != HttpMethod.Get)
                    return null;

                var query = HttpUtility.ParseQueryString(request.RequestUri.Query);
                var queryVal = query[this.JsonpParameterName];

                if (string.IsNullOrEmpty(queryVal))
                    return null;

                return queryVal;
            }
        }
    }

    Note again that this code will not work with the Beta bits of Web API - it works only with post-beta bits from CodePlex, and hopefully this will continue to work until RTM :-) This code is a bit different from Christian's original code as the API has changed.
    The biggest change is that the Read/Write functions no longer receive a global context object that gives access to the Request and Response objects, as the older bits did. Instead you now have to override the GetPerRequestFormatterInstance() method, which receives the Request as a parameter. You can capture the Request there, or use it to pick up the values you need and store them on the formatter. Note that I also have to create a new instance of the formatter, since I'm storing request-specific state on the instance (whether the callback= query string is present), so I return a new instance of this formatter. Other than that the code should be straightforward: it writes out the function pre- and post-amble, and defers to the base formatter to produce the JSON that the function call wraps. The code uses the async APIs to write this data out (seeing those all over the place will take some getting used to for me). Hooking up the JsonpFormatter Once you've created a formatter, it has to be added to the request processing sequence by adding it to the formatter collection. Web API is configured via the static GlobalConfiguration object.

    protected void Application_Start(object sender, EventArgs e)
    {
        // Verb Routing
        RouteTable.Routes.MapHttpRoute(
            name: "AlbumsVerbs",
            routeTemplate: "albums/{title}",
            defaults: new
            {
                title = RouteParameter.Optional,
                controller = "AlbumApi"
            }
        );

        GlobalConfiguration
            .Configuration
            .Formatters
            .Insert(0, new Westwind.Web.WebApi.JsonpFormatter());
    }

    That's all it takes. Note that I added the formatter at the top of the list of formatters rather than at the end - this is required. The JSONP formatter needs to fire before any other JSON formatter, since it relies on the JSON formatter to encode the actual JSON data. If you reverse the order, the JSONP output never shows up. So, in general, when adding new formatters be aware of the order in which they are added. Resources: JsonpFormatter Code on GitHub © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web Api

    Read the article

  • My Favorite New Features in Visual Studio 2010

    On Tuesday, April 13th, Microsoft released Visual Studio 2010 and the .NET Framework 4.0 (which includes ASP.NET 4.0). To get started with Visual Studio 2010 you can either download a trial version of one of the commercial editions or grab the free Visual Web Developer 2010 Express Edition. The Visual Studio 2010 user experience is noticeably different from previous versions. Some of the changes are cosmetic - gone is the decades-old red and orange color scheme, replaced with blues and purples - while others are more substantial. For instance, the Visual Studio 2010 shell was rewritten from the ground up to use Microsoft's Windows Presentation Foundation (WPF). In addition to an updated user experience, Visual Studio introduces an array of new features designed to improve developer productivity. There are new tools for searching for files, types, and class members; it's now easier than ever to use IntelliSense; the Toolbox can be searched using the keyboard; and you can use a single editor - Visual Studio 2010 - to work on projects targeting multiple versions of the .NET Framework. This article explores some of the new features in Visual Studio 2010. It is not meant to be an exhaustive list, but rather highlights those features that I, as an ASP.NET developer, find most useful in my line of work. Read on to learn more! Read More >

    Read the article

  • Windows 7 file sharing password protecting or making stuff available to just me

    - by Carbonara
    Even with the new Homegroup feature I'm still finding the way Windows deals with folder sharing utterly baffling. Here's what I want to do. I have two computers, a PC Desktop and a laptop. I also live in a shared flat with other computer users. I have set up a Homegroup and a Workgroup on the desktop and joined them on the laptop and in the home group I have shared video, music and pictures. This is so that anyone on the network can view pictures and listen to music etc. But I want my Documents folder from my desktop to only be available to me on my laptop and not to anyone else that may be on the network. The Homegroup only allows (from what I can gather from the baffling array of options) sharing with everyone or no one. Is it possible to only allow the laptop to access the documents folder on the desktop? The user name and password are the same on both computers.

    Read the article

  • ASP.NET MVC localization DisplayNameAttribute alternatives: a good way

    - by Brian Schroer
    The ASP.NET MVC HTML helper methods like .LabelFor and .EditorFor use model metadata to autogenerate labels for model properties. By default the property name is used for the label text, but if that's not appropriate, you can use a DisplayName attribute to specify the desired label text:

    [DisplayName("Remember me?")]
    public bool RememberMe { get; set; }

    I'm working on a multi-language web site, so the labels need to be localized. I tried pointing the DisplayName attribute to a resource string:

    [DisplayName(MyResource.RememberMe)]
    public bool RememberMe { get; set; }

    ...but that results in the compiler error "An attribute argument must be a constant expression, typeof expression or array creation expression of an attribute parameter type". I got around this by creating a custom LocalizedDisplayNameAttribute class that inherits from DisplayNameAttribute:

     1: public class LocalizedDisplayNameAttribute : DisplayNameAttribute
     2: {
     3:     public LocalizedDisplayNameAttribute(string resourceKey)
     4:     {
     5:         ResourceKey = resourceKey;
     6:     }
     7:
     8:     public override string DisplayName
     9:     {
    10:         get
    11:         {
    12:             string displayName = MyResource.ResourceManager.GetString(ResourceKey);
    13:
    14:             return string.IsNullOrEmpty(displayName)
    15:                 ? string.Format("[[{0}]]", ResourceKey)
    16:                 : displayName;
    17:         }
    18:     }
    19:
    20:     private string ResourceKey { get; set; }
    21: }

    Instead of a display string, it takes a constructor argument of a resource key. The DisplayName property is overridden to get the display string from the resource file (line 12). If the key is not found, I return a formatted string containing the key (e.g. "[[RememberMe]]") so I can tell by looking at my web pages which resource keys I haven't defined yet (line 15). The usage of my custom attribute in the model looks like this:

    [LocalizedDisplayName("RememberMe")]
    public bool RememberMe { get; set; }

    That was my first attempt at localized display names, and it's a technique I still use in some cases, but in my next post I'll talk about the method I now prefer: a custom DataAnnotationsModelMetadataProvider class...
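    As a quick sanity check of the fallback behavior, here's a minimal sketch (assuming the MyResource class referenced above exists in the project):

    var attribute = new LocalizedDisplayNameAttribute("RememberMe");

    // "Remember me?" when the key exists in MyResource;
    // "[[RememberMe]]" when the key has not been defined yet.
    string labelText = attribute.DisplayName;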

    Read the article

< Previous Page | 439 440 441 442 443 444 445 446 447 448 449 450  | Next Page >