Search Results

Search found 2280 results on 92 pages for 'ressource pool'.


  • SQL SERVER – Weekly Series – Memory Lane – #034

    - by Pinal Dave
    Here is the list of selected articles from SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorite articles and listed them here with additional notes below each. Let me know which of the following is your favorite article from memory lane. 2007 UDF – User Defined Function to Strip HTML – Parse HTML – No Regular Expression The UDF used in the blog does a fantastic job – it scans the entire HTML text and removes all the HTML tags, keeping only valid text data without HTML tags. This is one of the most commonly requested tasks developers face every day. De-fragmentation of Database at Operating System to Improve Performance The operating system skips the MDF file while defragmenting the filesystem. That is absolutely fine and has no impact on performance. Read the entire blog post for my conversation with our network engineers. Delay Function – WAITFOR clause – Delay Execution of Commands How do you delay execution of commands in SQL Server – of course, by using the WAITFOR keyword. In this blog post, I explain the same with the help of a T-SQL script. Find Length of Text Field To measure the length of TEXT fields, the function is DATALENGTH(textfield); LEN will not work for a TEXT field. As of SQL Server 2005, developers should migrate all text fields to VARCHAR(MAX), as that is the way forward. Retrieve Current Date Time in SQL Server CURRENT_TIMESTAMP, GETDATE(), {fn NOW()} There are three ways to retrieve the current datetime in SQL Server: CURRENT_TIMESTAMP, GETDATE(), and {fn NOW()}. Explanation and Comparison of NULLIF and ISNULL An interesting observation is that NULLIF returns NULL if its comparison is successful, whereas ISNULL returns a non-NULL value if its comparison is successful. In one way they are opposites of each other. Here is my question to you: how do you create an infinite loop using NULLIF and ISNULL – if that is even possible? 2008 Introduction to SERVERPROPERTY and example SERVERPROPERTY is a very interesting system function. It returns many of the system values. I use it very frequently to get different server values like Server Collation, Server Name, etc. SQL Server Start Time We can use a DMV to find out the start time of SQL Server in 2008 and later versions. In this blog you can see how to do the same. Find Current Identity of Table Many times we need to know the current identity value of a column. I have found one of my developers using the aggregate function MAX() to find the current identity. However, I prefer the following DBCC command to figure out the current identity. Create Check Constraint on Column Sometimes we just need to create a simple constraint on a table, but I have noticed that developers do many different things to make a table column follow rules other than simply creating a constraint. Constraints are a very useful concept, and every SQL developer should pay good attention to this subject. 2009 List Schema Name and Table Name for Database This is one of those blog posts where I simply display a script – the kind of blog post I still love to read and write. Clustered Index on Separate Drive From Table Location A table without a clustered index is called a heap; its data is not arranged in any particular order, which gives rise to issues that adversely affect performance. Data must be stored in some kind of order.
If we put a clustered index on it, then the order will be forced by that index and the data will be stored in that particular order. Understanding Table Hints with Examples Hints are options and strong suggestions specified for enforcement by the SQL Server query processor on DML statements. The hints override any execution plan the query optimizer might select for a query. 2010 Data Pages in Buffer Pool – Data Stored in Memory Cache One of my earlier articles, which I still read many times and point developers to read again. It is clear from the resultset that when more than one index is used, data pages related to both or all of the indexes are stored in the memory cache separately. TRANSACTION, DML and Schema Locks Can you create a situation where you can see a schema lock? Well, this is a very simple question; however, during interviews I noticed over 50 candidates failed to come up with the scenario. In this blog post, I have demonstrated a situation where we can see the schema lock in the database. 2011 Solution – Puzzle – Statistics are not updated but are Created Once In this example I have created the following situation: create a table, insert 1,000 records, check the statistics, now insert 10 times more records (10,000), check the statistics – they will NOT be updated, even though Auto Update Statistics and Auto Create Statistics are TRUE for the database. I then asked two things in the example: 1) Why is this happening? 2) How do we fix this issue? Selecting Domain from Email Address This is a straight-to-script blog post where I explain how to select only the domain name from an entire email address. Solution – Generating Zero Without using Any Numbers in T-SQL How do you get the digit zero without using any digit? This is indeed a very interesting question, and the answer is even more interesting. Try to come up with the answer in the next 10 minutes, and if you can't, read this post for the solution. 2012 Simple Explanation and Puzzle with SOUNDEX Function and DIFFERENCE Function In simple words – SOUNDEX converts an alphanumeric string to a four-character code to find similar-sounding words or names. The DIFFERENCE function returns an integer value; the integer returned is the number of characters in the SOUNDEX values that are the same. Read Only Files and SQL Server Management Studio (SSMS) I have come across a very interesting feature in SSMS related to “Read Only” files. I believe it is a little-known feature as well, so I decided to write a blog post about it. Identifying Column Data Type of uniqueidentifier without Querying System Tables How do I know if a table has a uniqueidentifier column, and what is its value, without using any DMV or system catalogs? The only information you know is the table name, and you are allowed to return any kind of error if the table does not have a uniqueidentifier column. Read the blog post to find the answer. Solution – User Not Able to See Any User Created Object in Tables – Security and Permissions Issue Interesting question – “When I try to connect to SQL Server, it lets me connect just fine as well as open and explore the database. I noticed that I do not see any user created objects, but when my colleague attempts to connect to the server, he is able to explore the database as well as see all the user created tables and other objects. Can you help me fix it?” Importing CSV File Into Database – SQL in Sixty Seconds #018 – Video Here is an interesting, small 60-second video on how to import a CSV file into a database.
ColumnStore Index – Batch Mode vs Row Mode Here is the logic behind when Columnstore Index uses Batch Mode and when it uses Row Mode. A batch typically represents about 1000 rows of data. Batch mode processing also uses algorithms that are optimized for the multicore CPUs and increased memory throughput. Follow up – Usage of $rowguid and $IDENTITY This is an excellent follow up blog post of my earlier blog post where I explain where to use $rowguid and $identity.  If you do not know the difference between them, this is a blog with a script example. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • PASS: Election Changes for 2011

    - by Bill Graziano
    Last year after the election, the PASS Board created an Election Review Committee.  This group was charged with reviewing our election procedures and making suggestions to improve the process.  You can read about the formation of the group and review some of the intermediate work on the site – especially in the forums. I was one of the members of the group along with Joe Webb (Chair), Lori Edwards, Brian Kelley, Wendy Pastrick, Andy Warren and Allen White.  This group worked from October to April on our election process.  Along the way we: Interviewed interested parties including former NomCom members, Board candidates and anyone else that came forward. Held a session at the Summit to allow interested parties to discuss the issues Had numerous conference calls and worked through the various topics I can’t thank these people enough for the work they did.  They invested a tremendous number of hours thinking, talking and writing about our elections.  I’m proud to say I was a member of this group and thoroughly enjoyed working with everyone (even if I did finally get tired of all the calls.) The ERC delivered their recommendations to the PASS Board prior to our May Board meeting.  We reviewed those and made a few modifications.  I took their recommendations and rewrote them as procedures while incorporating those changes.  Their original recommendations as well as our final document are posted at the ERC documents page.  Please take a second and read them BEFORE we start the elections.  If you have any questions please post them in the forums on the ERC site. (My final document includes a change log at the end that I decided to leave in.  If you want to know which areas to pay special attention to that’s a good start.) Many of those recommendations were already posted in the forums or in the blogs of individual ERC members.  Hopefully nothing in the ERC document is too surprising. In this post I’m going to walk through some of the key changes and talk about what I remember from both ERC and Board discussions.  I’ll pay a little extra attention to things the Board changed from the ERC.  I’d also encourage any of the Board or ERC members to blog their thoughts on this. The Nominating Committee will continue to exist.  Personally, I was curious to see what the non-Board ERC members would think about the NomCom.  There was broad agreement that a group to vet candidates had value to the organization. The NomCom will be composed of five members.  Two will be Board members and three will be from the membership at large.  The only requirement for the three community members is that you’ve volunteered in some way (and volunteering is defined very broadly).  We expect potential at-large NomCom members to participate in a forum on the PASS site to answer questions from the other PASS members. We’re going to hold an election to determine the three community members.  It will be closer to voting for Summit sessions than voting for Board members.  That means there won’t be multiple dedicated emails.  If you’re at all paying attention it will be easy to participate.  Personally I wanted it easy for those that cared to participate but not overwhelm those that didn’t care.  I think this strikes a good balance. There’s also a clause that in order to be considered a winner in this NomCom election, you must receive 10 votes.  This is something I suggested.  I have no idea how popular the NomCom election is going to be.  
I just wanted a fallback in case no one participated and some random person got in with one or two votes. Any open slots will be filled by the NomCom chair (usually the PASS Immediate Past President). My assumption is that they would probably take the next highest vote getters unless they were throwing flames in the forums or clearly unqualified. As a final check, the Board still approves the final NomCom. The NomCom is going to rank candidates instead of rating them. This has interesting implications. This was championed by another ERC member and I'm hoping they write something about it. This will really force the NomCom to make decisions between candidates. You can't just rate everyone a 3 and be done with it. It may also make candidates appear further apart than they actually are. I'm looking forward to talking with the NomCom after this election and getting their feedback on this. The PASS Board added an option to remove a candidate with a unanimous vote of the NomCom. This was primarily put in place to handle people that lied on their application, had a criminal background, or some other unusual situation that we found out about. We list an explicit goal of three candidates per open slot. We also wanted an easy way to find the NomCom candidate rankings from the ballot. Hopefully this will satisfy those that want a broad candidate pool and those that want the NomCom to identify the most qualified candidates. The primary spokesperson for the NomCom is the committee chair. After the issues around the election last year we didn't have a good communication plan in place. We should have, and that was a failure on the part of the Board. If there is criticism of the election this year I hope it falls squarely on the Board. The community members of the NomCom shouldn't be fielding complaints over the election process. That said, the NomCom is ranking candidates and we are forcing them to rank some lower than others. I'm sure you'll each find someone that you think should have been ranked differently. I also want to highlight one other change to the process that we started last year and that isn't included in these documents. I think the candidate forums on the PASS site were tremendously helpful last year in helping people find out more about candidates. They give our members a way to ask hard questions of the candidates and publicly see their answers. This year we have two important groups to fill. The first is the NomCom. We need three people from our membership to step up and fill this role. It won't be easy. You will have to make subjective rankings of your fellow community members. Your actions will be important in deciding who the future leaders of PASS will be. There's a 50/50 chance that one of the people you interview will be the President of PASS someday. This is not a responsibility to be taken lightly. The second is the slate of candidates. If you've ever thought about running for the Board, this is the year. We've never had nine candidates on the ballot before. Your chances of making it through the NomCom are higher than in any previous year. Unfortunately, the more of you that run, the more of you that will lose in the election. And hopefully that competition will mean more community involvement and better Board members for PASS. Is this the end of changes to the election process? It isn't. Every year that I've been on the Board the election process has changed. Some years there have been small changes and some years there have been large changes.
After this election we’ll look at how the process worked and decide what steps to take – just like we do every year.

    Read the article

  • Azure Diagnostics: The Bad, The Ugly, and a Better Way

    - by jasont
    If you’re a .Net web developer today, no doubt you’ve enjoyed watching Windows Azure grow up over the past couple of years. The platform has scaled, stabilized (mostly), and added on a slew of great (and sometimes overdue) features. What was once just an endpoint to host a solution, developers today have tremendous flexibility and options in the platform. Organizations are building new solutions and offerings on the platform, and others have, or are in the process of, migrating existing applications out of their own data centers into the Azure cloud. Whether new application development or migrating legacy, every development shop and IT organization needs to monitor their applications in the cloud, the same as they do on premises. Azure Diagnostics has some capabilities, but what I constantly hear from users is that it’s either (a) not enough, or (b) too cumbersome to set up. Today, Stackify is happy to announce that we fully support Azure deployments, just the same as your on-premises deployments. Let’s take a look below and compare and contrast the options. Azure Diagnostics Let’s crack open the Windows Azure documentation on Azure Diagnostics and see just how easy it is to use. The high level steps are:   Step 1: Import the Diagnostics Oh, I’ve already deployed my app without the diagnostics module. Guess I can’t do anything until I do this and re-deploy. Step 2: Configure the Diagnostics (and multiple sub-steps) Do I want it all? Or just pieces of it? Whoops, forgot to include a specific performance counter, I guess I’ll have to deploy again. Wait a minute… I have to specifically code these performance counters into my role’s OnStart() method, compile and deploy again? And query and consume it myself? Step 3: (Optional) Permanently store diagnostic data Lucky for me, Azure storage has gotten pretty cheap. But how often should I move the data into storage? I want to see real-time data, so I guess that’s out now as well. Step 4: (Optional) View stored diagnostic data Optional? Of course I want to see it. Conveniently, Microsoft recommends 3 tools to do this with. Un-conveniently, none of these are web based and they all just give you access to raw data, and very little charting or real-time intelligence. Just….. data. Nevermind that one product seems to have gotten stale since a recent acquisition, and doesn’t even have screenshots!   So, let’s summarize: lots of diagnostics data is available, but think realistically. Think Dev Ops. What happens when you are in the middle of a major production performance issue and you don’t have the diagnostics you need? You are redeploying an application (and thankfully you have a great branching strategy, so you feel perfectly safe just willy-nilly launching code into prod, don’t you?) to get data, then shipping it to storage, and then digging through that data to find a needle in a haystack. Would you like to be able to troubleshoot a performance issue in the middle of the night, or on a weekend, from your iPad or home computer’s web browser? Forget it: the best you get is this spark line in the Azure portal. If it’s real pointy, you probably have an issue; but since there is no alert based on a threshold your customers have likely already let you know. And high CPU, Memory, I/O, or Network doesn’t tell you anything about where the problem is. The Better Way – Stackify Stackify supports application and server monitoring in real time, all through a great web interface. 
All of the things that Azure Diagnostics provides, Stackify provides for your on-premises deployments, and you don’t need to know ahead of time that you’ll need it. It’s always there, it’s always on. Azure deployments are essentially no different than on-premises. It’s a Windows Server (or Linux) in the cloud. It’s behind a different firewall than your corporate servers. That’s it. Stackify can provide the same powerful tools to your Azure deployments in two simple steps. Step 1 Add a startup task to your web or worker role and deploy. If you can’t deploy and need it right now, no worries! Remote Desktop to the Azure instance and you can execute a Powershell script to download / install Stackify.   Step 2 Log in to your account at www.stackify.com and begin monitoring as much as you want, as often as you want and see the results instantly. WMI? It’s there Event Viewer? You’ve got it. File System Access? Yes, please! Would love to make sure my web.config is correct.   IIS / App Pool Info? Yep. You can even restart it. Running Services? All of them. Start and Stop them to your heart’s content. SQL Database access? You bet’cha. Alerts and Notification? Of course! You should know before your customers let you know. … and so much more.   Conclusion Microsoft has shown, consistently, that they love developers, developers, developers. What every developer needs to realize from this is that they’ve given you a canvas, which is exactly what Azure is. It’s great infrastructure that is readily available, easy to manage, and fairly cost effective. However, the tooling is your responsibility. What you get, at best, is bare bones. App and server diagnostics should be available when you need them. While we, as developers, try to plan for and think of everything ahead of time, there will come times where we need to get data that just isn’t available. And having to go through a lot of cumbersome steps to get that data, and then have to find a friendlier way to consume it…. well, that just doesn’t make a lot of sense to me. I’d rather spend my time writing and developing features and completing bug fixes for my applications, than to be writing code to monitor and diagnose.
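
    For readers who have not seen it, the "code these performance counters into my role's OnStart() method" step the post complains about looks roughly like the sketch below. This follows the standard Azure SDK pattern of that era as I understand it; it is not code from the post itself, and the counter name, sample rate, and transfer period are illustrative choices.

        using System;
        using Microsoft.WindowsAzure.Diagnostics;
        using Microsoft.WindowsAzure.ServiceRuntime;

        public class WebRole : RoleEntryPoint
        {
            public override bool OnStart()
            {
                // Start from the default diagnostics configuration and add
                // every counter you think you might need, ahead of time.
                DiagnosticMonitorConfiguration config =
                    DiagnosticMonitor.GetDefaultInitialConfiguration();

                config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
                {
                    CounterSpecifier = @"\Processor(_Total)\% Processor Time",
                    SampleRate = TimeSpan.FromSeconds(30)
                });

                // Collected samples are only shipped to Azure storage on a schedule,
                // so "real time" really means "up to a minute old" here.
                config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

                DiagnosticMonitor.Start(
                    "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);
                return base.OnStart();
            }
        }

    Forget a counter and you are back to Step 1: recompile, redeploy, wait. That is the friction the rest of the post is describing.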

    Read the article

  • Pixel Shader Giving Black output

    - by Yashwinder
    I am coding in C# using Windows Forms and the SlimDX API to show the effect of a pixel shader. When I am setting the pixel shader, I am getting a black output screen but if I am not using the pixel shader then I am getting my image rendered on the screen. I have the following C# code using System; using System.Collections.Generic; using System.Linq; using System.Windows.Forms; using System.Runtime.InteropServices; using SlimDX.Direct3D9; using SlimDX; using SlimDX.Windows; using System.Drawing; using System.Threading; namespace WindowsFormsApplication1 { // Vertex structure. [StructLayout(LayoutKind.Sequential)] struct Vertex { public Vector3 Position; public float Tu; public float Tv; public static int SizeBytes { get { return Marshal.SizeOf(typeof(Vertex)); } } public static VertexFormat Format { get { return VertexFormat.Position | VertexFormat.Texture1; } } } static class Program { public static Device D3DDevice; // Direct3D device. public static VertexBuffer Vertices; // Vertex buffer object used to hold vertices. public static Texture Image; // Texture object to hold the image loaded from a file. public static int time; // Used for rotation caculations. public static float angle; // Angle of rottaion. public static Form1 Window =new Form1(); public static string filepath; static VertexShader vertexShader = null; static ConstantTable constantTable = null; static ImageInformation info; [STAThread] static void Main() { filepath = "C:\\Users\\Public\\Pictures\\Sample Pictures\\Garden.jpg"; info = new ImageInformation(); info = ImageInformation.FromFile(filepath); PresentParameters presentParams = new PresentParameters(); // Below are the required bare mininum, needed to initialize the D3D device. presentParams.BackBufferHeight = info.Height; // BackBufferHeight, set to the Window's height. presentParams.BackBufferWidth = info.Width+200; // BackBufferWidth, set to the Window's width. presentParams.Windowed =true; presentParams.DeviceWindowHandle = Window.panel2 .Handle; // DeviceWindowHandle, set to the Window's handle. // Create the device. D3DDevice = new Device(new Direct3D (), 0, DeviceType.Hardware, Window.Handle, CreateFlags.HardwareVertexProcessing, presentParams); // Create the vertex buffer and fill with the triangle vertices. (Non-indexed) // Remember 3 vetices for a triangle, 2 tris per quad = 6. Vertices = new VertexBuffer(D3DDevice, 6 * Vertex.SizeBytes, Usage.WriteOnly, VertexFormat.None, Pool.Managed); DataStream stream = Vertices.Lock(0, 0, LockFlags.None); stream.WriteRange(BuildVertexData()); Vertices.Unlock(); // Create the texture. 
Image = Texture.FromFile(D3DDevice,filepath ); // Turn off culling, so we see the front and back of the triangle D3DDevice.SetRenderState(RenderState.CullMode, Cull.None); // Turn off lighting D3DDevice.SetRenderState(RenderState.Lighting, false); ShaderBytecode sbcv = ShaderBytecode.CompileFromFile("C:\\Users\\yashwinder singh\\Desktop\\vertexShader.vs", "vs_main", "vs_1_1", ShaderFlags.None); constantTable = sbcv.ConstantTable; vertexShader = new VertexShader(D3DDevice, sbcv); ShaderBytecode sbc = ShaderBytecode.CompileFromFile("C:\\Users\\yashwinder singh\\Desktop\\pixelShader.txt", "ps_main", "ps_3_0", ShaderFlags.None); PixelShader ps = new PixelShader(D3DDevice, sbc); VertexDeclaration vertexDecl = new VertexDeclaration(D3DDevice, new[] { new VertexElement(0, 0, DeclarationType.Float3, DeclarationMethod.Default, DeclarationUsage.PositionTransformed, 0), new VertexElement(0, 12, DeclarationType.Float2 , DeclarationMethod.Default, DeclarationUsage.TextureCoordinate , 0), VertexElement.VertexDeclarationEnd }); Application.EnableVisualStyles(); MessagePump.Run(Window, () => { // Clear the backbuffer to a black color. D3DDevice.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.Black, 1.0f, 0); // Begin the scene. D3DDevice.BeginScene(); // Setup the world, view and projection matrices. //D3DDevice.VertexShader = vertexShader; //D3DDevice.PixelShader = ps; // Render the vertex buffer. D3DDevice.SetStreamSource(0, Vertices, 0, Vertex.SizeBytes); D3DDevice.VertexFormat = Vertex.Format; // Setup our texture. Using Textures introduces the texture stage states, // which govern how Textures get blended together (in the case of multiple // Textures) and lighting information. D3DDevice.SetTexture(0, Image); // Now drawing 2 triangles, for a quad. D3DDevice.DrawPrimitives(PrimitiveType.TriangleList , 0, 2); // End the scene. D3DDevice.EndScene(); // Present the backbuffer contents to the screen. 
D3DDevice.Present(); }); if (Image != null) Image.Dispose(); if (Vertices != null) Vertices.Dispose(); if (D3DDevice != null) D3DDevice.Dispose(); } private static Vertex[] BuildVertexData() { Vertex[] vertexData = new Vertex[6]; vertexData[0].Position = new Vector3(-1.0f, 1.0f, 0.0f); vertexData[0].Tu = 0.0f; vertexData[0].Tv = 0.0f; vertexData[1].Position = new Vector3(-1.0f, -1.0f, 0.0f); vertexData[1].Tu = 0.0f; vertexData[1].Tv = 1.0f; vertexData[2].Position = new Vector3(1.0f, 1.0f, 0.0f); vertexData[2].Tu = 1.0f; vertexData[2].Tv = 0.0f; vertexData[3].Position = new Vector3(-1.0f, -1.0f, 0.0f); vertexData[3].Tu = 0.0f; vertexData[3].Tv = 1.0f; vertexData[4].Position = new Vector3(1.0f, -1.0f, 0.0f); vertexData[4].Tu = 1.0f; vertexData[4].Tv = 1.0f; vertexData[5].Position = new Vector3(1.0f, 1.0f, 0.0f); vertexData[5].Tu = 1.0f; vertexData[5].Tv = 0.0f; return vertexData; } } } And my pixel shader and vertex shader code are as following // Pixel shader input structure struct PS_INPUT { float4 Position : POSITION; float2 Texture : TEXCOORD0; }; // Pixel shader output structure struct PS_OUTPUT { float4 Color : COLOR0; }; // Global variables sampler2D Tex0; // Name: Simple Pixel Shader // Type: Pixel shader // Desc: Fetch texture and blend with constant color // PS_OUTPUT ps_main( in PS_INPUT In ) { PS_OUTPUT Out; //create an output pixel Out.Color = tex2D(Tex0, In.Texture); //do a texture lookup Out.Color *= float4(0.9f, 0.8f, 0.0f, 1); //do a simple effect return Out; //return output pixel } // Vertex shader input structure struct VS_INPUT { float4 Position : POSITION; float2 Texture : TEXCOORD0; }; // Vertex shader output structure struct VS_OUTPUT { float4 Position : POSITION; float2 Texture : TEXCOORD0; }; // Global variables float4x4 WorldViewProj; // Name: Simple Vertex Shader // Type: Vertex shader // Desc: Vertex transformation and texture coord pass-through // VS_OUTPUT vs_main( in VS_INPUT In ) { VS_OUTPUT Out; //create an output vertex Out.Position = mul(In.Position, WorldViewProj); //apply vertex transformation Out.Texture = In.Texture; //copy original texcoords return Out; //return output vertex }
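
    Not an authoritative fix, but for anyone hitting the same wall: in Direct3D 9 a ps_3_0 pixel shader cannot be paired with the fixed-function vertex pipeline or a vs_1_1 vertex shader, which is exactly the combination the code above produces when only D3DDevice.PixelShader is assigned. Below is a sketch of one way out, assuming the usual SlimDX API: compile both shaders to matching models, enable both, and fill in the WorldViewProj constant the vertex shader expects (identity is enough for a quad already in clip space).

        // Hypothetical adjustment (not verified against the original project):
        // compile the two shaders with matching shader models so they can run together.
        ShaderBytecode vsBytecode = ShaderBytecode.CompileFromFile(
            "vertexShader.vs", "vs_main", "vs_2_0", ShaderFlags.None);
        ShaderBytecode psBytecode = ShaderBytecode.CompileFromFile(
            "pixelShader.txt", "ps_main", "ps_2_0", ShaderFlags.None);
        VertexShader vs = new VertexShader(D3DDevice, vsBytecode);
        PixelShader ps = new PixelShader(D3DDevice, psBytecode);
        ConstantTable vsConstants = vsBytecode.ConstantTable;

        // Inside the render loop, before DrawPrimitives:
        D3DDevice.VertexDeclaration = vertexDecl; // declaration should use Position, not
                                                  // PositionTransformed, once a vertex shader transforms it
        D3DDevice.VertexShader = vs;
        D3DDevice.PixelShader = ps;
        // The vertex shader multiplies by WorldViewProj, so it must be set;
        // identity keeps the -1..1 quad filling the viewport.
        vsConstants.SetMatrix(D3DDevice, "WorldViewProj", Matrix.Identity);

    Alternatively, keeping the fixed-function pipeline and simply compiling the pixel shader as ps_2_0 may be the smaller change; the shader itself uses nothing that requires shader model 3.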

    Read the article

  • Expectations + Rewards = Innovation

    - by D'Arcy Lussier
    “Innovation” is a heavy word. We regard those that embrace it as “Innovators”. We describe organizations as being “Innovative”. We hold those associated with the word in high regard, even though its dictionary definition is very simple: Introducing something new. What our culture has done is wrapped Innovation in white robes and a gold crown. Innovation is rarely just introducing something new. Innovations and innovators are typically associated with other terms: groundbreaking, genius, industry-changing, creative, leading. Being a true innovator and creating innovations are a big deal, and something companies try to strive for…or at least say they strive for. There’s huge value in being recognized as an innovator in an industry, since the idea is that innovation equates to increased profitability. IBM ran an ad a few years back that showed what their view of innovation is: “The point of innovation is to make actual money.” If the money aspect makes you feel uneasy, consider it another way: the point of innovation is to <insert payoff here>. Companies that innovate will be more successful. Non-profits that innovate can better serve their target clients. Governments that innovate can better provide services to their citizens. True innovation is not easy to come by though. As with anything in business, how well an organization will innovate is reliant on the employees it retains, the expectations placed on those employees, and the rewards available to them. In a previous blog post I talked about one formula: Right Employees + Happy Employees = Productive Employees I want to introduce a new one, that builds upon the previous one: Expectations + Rewards = Innovation  The level of innovation your organization will realize is directly associated with the expectations you place on your staff and the rewards you make available to them. Expectations We may feel uncomfortable with the idea of placing expectations on our staff, mainly because expectation has somewhat of a negative or cold connotation to it: “I expect you to act this way or else!” The problem is in the or-else part…we focus on the negative aspects of failing to meet expectations instead of looking at the positive side. “I expect you to act this way because it will produce <insert benefit here>”. Expectations should not be set to punish but instead be set to ensure quality. At a recent conference I spoke with some Microsoft employees who told me that you have five years from starting with the company to reach a “Senior” level. If you don’t, then you’re let go. The expectation Microsoft placed on their staff is that they should be working towards improving themselves, taking more responsibility, and thus ensure that there is a constant level of quality in the workforce. Rewards Let me be clear: a paycheck is not a reward. A paycheck is simply the employer’s responsibility in the employee/employer relationship. A paycheck will never be the key motivator to drive innovation. Offering employees something over and above their required compensation can spur them to greater performance and achievement. Working in the food service industry, this tactic was used again and again: whoever has the highest sales over lunch will receive a free lunch/gift certificate/entry into a draw/etc. There was something to strive for, to try beyond the baseline of what our serving jobs were. It was through this that innovative sales techniques would be tried and honed, with key servers being top sellers time and time again. 
At a code camp I spoke at, I was amazed to see that all the employees from one company receive $100 Visa gift cards as a thank you for taking time to speak. Again, offering something over and above that can give that extra push for employees. Rewards work. But what about the fairness angle? In the restaurant example I gave, there were servers that would never win the competition. They just weren’t good enough at selling and never seemed to get better. So should those that did work at performing better and produce more sales for the restaurant not get rewarded because those who weren’t working at performing better might get upset? Of course not! Organizations succeed because of their top performers and those that strive to join their ranks. The Expectation/Reward Graph While the Expectations + Rewards = Innovation formula may seem like a simple mathematics formula, there’s much more going under the hood. In fact there are three different outcomes that could occur based on what you put in as values for Expectations and Rewards. Consider the graph below and the descriptions that follow: Disgruntled – High Expectation, Low Reward I worked at a company where the mantra was “Company First, Because We Pay You”. Even today I still hear stories of how this sentiment continues to be perpetuated: They provide you a paycheck and a means to live, therefore you should always put them as your top priority. Of course, this is a huge imbalance in the expectation/reward equation. Why would anyone willingly meet high expectations of availability, workload, deadlines, etc. when there is no reward other than a paycheck to show for it? Remember: paychecks are not rewards! Instead, you see employees be disgruntled which not only affects the level of production but also the level of quality within an organization. It also means that you see higher turnover. Complacent – Low Expectation, Low Reward Complacency is a systemic problem that typically exists throughout all levels of an organization. With no real expectations or rewards, nobody needs to excel. In fact, those that do try to innovate, improve, or introduce new things into the organization might be shunned or pushed out by the rest of the staff who are just doing things the same way they’ve always done it. The bigger issue for the organization with low/low values is that at best they’ll never grow beyond their current size (and may shrink actually), and at worst will cease to exist. Entitled – Low Expectation, High Reward It’s one thing to say you have the best people and reward them as such, but its another thing to actually have the best people and reward them as such. Organizations with Entitled employees are the former: their organization provides them with all types of comforts, benefits, and perks. But there’s no requirement before the rewards are dolled out, and there’s no short-list of who receives the rewards. Everyone in the company is treated the same and is given equal share of the spoils. Entitlement is actually almost identical with Complacency with one notable difference: just try to introduce higher expectations into an entitled organization! Entitled employees have been spoiled for so long that they can’t fathom having rewards taken from them, or having to achieve specific levels of performance before attaining them. Those running the organization also buy in to the Entitled sentiment, feeling that they must persist the same level of comforts to appease their staff…even though the quality of the employee pool may be suspect. 
Innovative – High Expectation, High Reward Finally we have the Innovative organization which places high expectations but also provides high rewards. This organization gets it: if you truly want the best employees you need to apply equal doses of pressure and praise. Realize that I’m not suggesting crazy overtime or un-realistic working conditions. I do not agree with the “Glengary-Glenross” method of encouragement. But as anyone who follows sports can tell you, the teams that win are the ones where the coaches push their players to be their best; to achieve new levels of performance that they didn’t know they could receive. And the result for the players is more money, fame, and opportunity. It’s in this environment that organizations can focus on innovation – true innovation that builds the business and allows everyone involved to truly benefit. In Closing Organizations love to use the word “Innovation” and its derivatives, but very few actually do innovate. For many, the term has just become another marketing buzzword to lump in with all the other business terms that get overused. But for those organizations that truly get the value of innovation, they will be the ones surging forward while other companies simply fade into the background. And they will be the organizations that expect more from their employees, and give them their just rewards.

    Read the article

  • Unlocking Productivity

    - by Michael Snow
    Unlocking Productivity in Life Sciences with Consolidated Content Management by Joe Golemba, Vice President, Product Management, Oracle WebCenter As life sciences organizations look to become more operationally efficient, the ability to effectively leverage information is a competitive advantage. Whether data mining at the drug discovery phase or prepping the sales team before a product launch, content management can play a key role in developing, organizing, and disseminating vital information. The goal of content management is relatively straightforward: put the information that people need where they can find it. A number of issues can complicate this; information sits in many different systems, each of those systems has its own security, and the information in those systems exists in many different formats. Identifying and extracting pertinent information from mountains of farflung data is no simple job, but the alternative—wasted effort or even regulatory compliance issues—is worse. An integrated information architecture can enable health sciences organizations to make better decisions, accelerate clinical operations, and be more competitive. Unstructured data matters Often when we think of drug development data, we think of structured data that fits neatly into one or more research databases. But structured data is often directly supported by unstructured data such as experimental protocols, reaction conditions, lot numbers, run times, analyses, and research notes. As life sciences companies seek integrated views of data, they are typically finding diverse islands of data that seemingly have no relationship to other data in the organization. Information like sales reports or call center reports can be locked into siloed systems, and unavailable to the discovery process. Additionally, in the increasingly networked clinical environment, Web pages, instant messages, videos, scientific imaging, sales and marketing data, collaborative workspaces, and predictive modeling data are likely to be present within an organization, and each source potentially possesses information that can help to better inform specific efforts. Historically, content management solutions that had 21CFR Part 11 capabilities—electronic records and signatures—were focused mainly on content-enabling manufacturing-related processes. Today, life sciences companies have many standalone repositories, requiring different skills, service level agreements, and vendor support costs to manage them. With the amount of content doubling every three to six months, companies have recognized the need to manage unstructured content from the beginning, in order to increase employee productivity and operational efficiency. Using scalable and secure enterprise content management (ECM) solutions, organizations can better manage their unstructured content. These solutions can also be integrated with enterprise resource planning (ERP) systems or research systems, making content available immediately, in the context of the application and within the flow of the employee’s typical business activity. Administrative safeguards—such as content de-duplication—can also be applied within ECM systems, so documents are never recreated, eliminating redundant efforts, ensuring one source of truth, and maintaining content standards in the organization. Putting it in context Consolidating structured and unstructured information in a single system can greatly simplify access to relevant information when it is needed through contextual search. 
Using contextual filters, results can include therapeutic area, position in the value chain, semantic commonalities, technology-specific factors, specific researchers involved, or potential business impact. The use of taxonomies is essential to organizing information and enabling contextual searches. Taxonomy solutions are composed of a hierarchical tree that defines the relationship between different life science terms. When overlaid with additional indexing related to research and/or business processes, it becomes possible to effectively narrow down the amount of data that is returned during searches, as well as prioritize results based on specific criteria and/or prior search history. Thus, search results are more accurate and relevant to an employee’s day-to-day work. For example, a search for the word "tissue" by a lab researcher would return significantly different results than a search for the same word performed by someone in procurement. Of course, diverse data repositories, combined with the immense amounts of data present in an organization, necessitate that the data elements be regularly indexed and cached beforehand to enable reasonable search response times. In its simplest form, indexing of a single, consolidated data warehouse can be expected to be a relatively straightforward effort. However, organizations require the ability to index multiple data repositories, enabling a single search to reference multiple data sources and provide an integrated results listing. Security and compliance Beyond yielding efficiencies and supporting new insight, an enterprise search environment can support important security considerations as well as compliance initiatives. For example, the systems enable organizations to retain the relevance and the security of the indexed systems, so users can only see the results to which they are granted access. This is especially important as life sciences companies are working in an increasingly networked environment and need to provide secure, role-based access to information across multiple partners. Although not officially required by the 21 CFR Part 11 regulation, the U.S. Food and Drug Administraiton has begun to extend the type of content considered when performing relevant audits and discoveries. Having an ECM infrastructure that provides centralized management of all content enterprise-wide—with the ability to consistently apply records and retention policies along with the appropriate controls, validations, audit trails, and electronic signatures—is becoming increasingly critical for life sciences companies. Making the move Creating an enterprise-wide ECM environment requires moving large amounts of content into a single enterprise repository, a daunting and risk-laden initiative. The first key is to focus on data taxonomy, allowing content to be mapped across systems. The second is to take advantage new tools which can dramatically speed and reduce the cost of the data migration process through automation. Additional content need not be frozen while it is migrated, enabling productivity throughout the process. The ability to effectively leverage information into success has been gaining importance in the life sciences industry for years. 
The rapid adoption of enterprise content management, both in operational processes and in scientific management, is a clear indicator that companies are looking to use all available data to be better informed, improve decision making, minimize risk, and reduce time to market, in order to maintain profitability and be more competitive. As more and more varieties and sources of information are brought under the strategic management umbrella, the ability to divine knowledge from the vast pool of information becomes increasingly difficult. Simple search engines and basic content management are increasingly unable to effectively extract the right information from the mountains of data available. By bringing these tools into context and integrating them with business processes and applications, we can effectively focus on the right decisions that make our organizations more profitable. More Information Oracle will be exhibiting at DIA 2012 in Philadelphia on June 25-27. Stop by our booth (#2825) to learn more about the advantages of a centralized ECM strategy and see the Oracle WebCenter Content solution, our 21 CFR Part 11 compliant content management platform.
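
    The contextual search idea described earlier in this piece (a taxonomy tree overlaid on the index so the same keyword returns different results for a lab researcher than for procurement) is easier to picture with a small sketch. The classes and branch names below are purely illustrative and are not part of Oracle WebCenter's API.

        using System.Collections.Generic;
        using System.Linq;

        // Illustrative taxonomy node: a term and its narrower terms.
        public class TaxonomyNode
        {
            public string Term;
            public List<TaxonomyNode> Children = new List<TaxonomyNode>();

            // Every term in this subtree, used to expand a keyword into a contextual filter.
            public IEnumerable<string> SubtreeTerms()
            {
                yield return Term;
                foreach (string t in Children.SelectMany(c => c.SubtreeTerms()))
                    yield return t;
            }
        }

        public static class ContextualSearch
        {
            // Expand the user's keyword with the taxonomy branch that matches their role,
            // so "tissue" from a researcher pulls histology terms while "tissue" from
            // procurement pulls lab-supply terms.
            public static IEnumerable<string> BuildFilter(string keyword, TaxonomyNode roleContext)
            {
                return roleContext.SubtreeTerms().Concat(new[] { keyword }).Distinct();
            }
        }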

    Read the article

  • Chrome and Firefox not able to find some sites after Ubuntu's recent update?

    - by gkr
    Loading of revision3.com in Chrome stops and status saying "waiting for static.inplay.tubemogul.com" grooveshark.com in Chrome never loads But wikipedia.org, google.com works just normal same behavior in Firefox too. I use wired DSL connection in Ubuntu 12.04. I guess this started happening after I upgraded the Chrome browser or Flash plugin few days ago from Ubuntu updates through update-manager. Thanks EDIT: Everything works normal in my WinXP. Problem is only in Ubuntu 12.04. This is what I get when I use wget gkr@gkr-desktop:~$ wget revision3.com --2012-06-12 08:58:01-- http://revision3.com/ Resolving revision3.com (revision3.com)... 173.192.117.198 Connecting to revision3.com (revision3.com)|173.192.117.198|:80... connected. HTTP request sent, awaiting response... 200 OK Length: unspecified [text/html] Saving to: `index.html' [ ] 81,046 32.0K/s in 2.5s 2012-06-12 08:58:04 (32.0 KB/s) - `index.html' saved [81046] gkr@gkr-desktop:~$ wget static.inplay.tubemogul.com --2012-06-12 08:51:25-- http://static.inplay.tubemogul.com/ Resolving static.inplay.tubemogul.com (static.inplay.tubemogul.com)... 72.21.81.253 Connecting to static.inplay.tubemogul.com (static.inplay.tubemogul.com)|72.21.81.253|:80... connected. HTTP request sent, awaiting response... 404 Not Found 2012-06-12 08:51:25 ERROR 404: Not Found. grooveshark.com nevers responses so I have to Ctrl+C to terminate wget. gkr@gkr-desktop:~$ wget grooveshark.com --2012-06-12 08:51:33-- http://grooveshark.com/ Resolving grooveshark.com (grooveshark.com)... 8.20.213.76 Connecting to grooveshark.com (grooveshark.com)|8.20.213.76|:80... connected. HTTP request sent, awaiting response... ^C gkr@gkr-desktop:~$ This is the apt term.log of updates I mentioned Log started: 2012-06-09 04:51:45 (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 161392 files and directories currently installed.) Preparing to replace google-chrome-stable 19.0.1084.52-r138391 (using .../google-chrome-stable_19.0.1084.56-r140965_i386.deb) ... Unpacking replacement google-chrome-stable ... Preparing to replace libpulse0 1:1.1-0ubuntu15 (using .../libpulse0_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement libpulse0 ... Preparing to replace libpulse-mainloop-glib0 1:1.1-0ubuntu15 (using .../libpulse-mainloop-glib0_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement libpulse-mainloop-glib0 ... Preparing to replace libpulsedsp 1:1.1-0ubuntu15 (using .../libpulsedsp_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement libpulsedsp ... Preparing to replace sudo 1.8.3p1-1ubuntu3.2 (using .../sudo_1.8.3p1-1ubuntu3.3_i386.deb) ... Unpacking replacement sudo ... Preparing to replace flashplugin-installer 11.2.202.235ubuntu0.12.04.1 (using .../flashplugin-installer_11.2.202.236ubuntu0.12.04.1_i386.deb) ... Unpacking replacement flashplugin-installer ... Preparing to replace pulseaudio-utils 1:1.1-0ubuntu15 (using .../pulseaudio-utils_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement pulseaudio-utils ... 
Preparing to replace pulseaudio 1:1.1-0ubuntu15 (using .../pulseaudio_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement pulseaudio ... Preparing to replace pulseaudio-module-gconf 1:1.1-0ubuntu15 (using .../pulseaudio-module-gconf_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement pulseaudio-module-gconf ... Preparing to replace pulseaudio-module-x11 1:1.1-0ubuntu15 (using .../pulseaudio-module-x11_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement pulseaudio-module-x11 ... Preparing to replace shared-mime-info 1.0-0ubuntu4 (using .../shared-mime-info_1.0-0ubuntu4.1_i386.deb) ... Unpacking replacement shared-mime-info ... Preparing to replace pulseaudio-module-bluetooth 1:1.1-0ubuntu15 (using .../pulseaudio-module-bluetooth_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement pulseaudio-module-bluetooth ... Processing triggers for menu ... /usr/share/menu/downverter.menu: 1: /usr/share/menu/downverter.menu: Syntax error: word unexpected (expecting ")") Processing triggers for man-db ... locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Processing triggers for desktop-file-utils ... Processing triggers for gnome-menus ... Processing triggers for ureadahead ... ureadahead will be reprofiled on next reboot Processing triggers for update-notifier-common ... flashplugin-installer: downloading http://archive.canonical.com/pool/partner/a/adobe-flashplugin/adobe-flashplugin_11.2.202.236.orig.tar.gz Installing from local file /tmp/tmpNrBt4g.gz Flash Plugin installed. Processing triggers for doc-base ... Processing 1 changed doc-base file... Registering documents with scrollkeeper... Setting up google-chrome-stable (19.0.1084.56-r140965) ... Setting up libpulse0 (1:1.1-0ubuntu15.1) ... Setting up libpulse-mainloop-glib0 (1:1.1-0ubuntu15.1) ... Setting up libpulsedsp (1:1.1-0ubuntu15.1) ... Setting up sudo (1.8.3p1-1ubuntu3.3) ... Installing new version of config file /etc/pam.d/sudo ... Setting up flashplugin-installer (11.2.202.236ubuntu0.12.04.1) ... locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Setting up pulseaudio-utils (1:1.1-0ubuntu15.1) ... Setting up pulseaudio (1:1.1-0ubuntu15.1) ... Setting up pulseaudio-module-gconf (1:1.1-0ubuntu15.1) ... Setting up pulseaudio-module-x11 (1:1.1-0ubuntu15.1) ... Setting up shared-mime-info (1.0-0ubuntu4.1) ... Setting up pulseaudio-module-bluetooth (1:1.1-0ubuntu15.1) ... Processing triggers for menu ... /usr/share/menu/downverter.menu: 1: /usr/share/menu/downverter.menu: Syntax error: word unexpected (expecting ")") Processing triggers for libc-bin ... 
ldconfig deferred processing now taking place Log ended: 2012-06-09 04:53:32 This is from history.log Start-Date: 2012-06-09 04:51:45 Commandline: aptdaemon role='role-commit-packages' sender=':1.56' Upgrade: shared-mime-info:i386 (1.0-0ubuntu4, 1.0-0ubuntu4.1), pulseaudio:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), libpulse-mainloop-glib0:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), sudo:i386 (1.8.3p1-1ubuntu3.2, 1.8.3p1-1ubuntu3.3), pulseaudio-module-bluetooth:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), pulseaudio-module-x11:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), flashplugin-installer:i386 (11.2.202.235ubuntu0.12.04.1, 11.2.202.236ubuntu0.12.04.1), pulseaudio-utils:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), pulseaudio-module-gconf:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), libpulse0:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), libpulsedsp:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), google-chrome-stable:i386 (19.0.1084.52-r138391, 19.0.1084.56-r140965) End-Date: 2012-06-09 04:53:32

    Read the article

  • Data-driven animation states

    - by user8363
    I'm trying to handle animations in a 2D game engine hobby project, without hard-coding them. Hard coding animation states seems like a common but very strange phenomenon, to me. A little background: I'm working with an entity system where components are bags of data and subsystems act upon them. I chose to use a polling system to update animation states. With animation states I mean: "walking_left", "running_left", "walking_right", "shooting", ... My idea to handle animations was to design it as a data driven model. Data could be stored in an xml file, a rdbms, ... And could be loaded at the start of a game / level/ ... This way you can easily edit animations and transitions without having to go change the code everywhere in your game. As an example I made an xml draft of the data definitions I had in mind. One very important piece of data would simply be the description of an animation. An animation would have a unique id (a descriptive name). It would hold a reference id to an image (the sprite sheet it uses, because different animations may use different sprite sheets). The frames per second to run the animation on. The "replay" here defines if an animation should be run once or infinitely. Then I defined a list of rectangles as frames. <animation id='WIZARD_WALK_LEFT'> <image id='WIZARD_WALKING' /> <fps>50</fps> <replay>true</replay> <frames> <rectangle> <x>0</x> <y>0</y> <width>45</width> <height>45</height> </rectangle> <rectangle> <x>45</x> <y>0</y> <width>45</width> <height>45</height> </rectangle> </frames> </animation> Animation data would be loaded and held in an animation resource pool and referenced by game entities that are using it. It would be treated as a resource like an image, a sound, a texture, ... The second piece of data to define would be a state machine to handle animation states and transitions. This defines each state a game entity can be in, which states it can transition to and what triggers that state change. This state machine would differ from entity to entity. Because a bird might have states "walking" and "flying" while a human would only have the state "walking". However it could be shared by different entities because multiple humans will probably have the same states (especially when you define some common NPCs like monsters, etc). Additionally an orc might have the same states as a human. Just to demonstrate that this state definition might be shared but only by a select group of game entities. <state id='IDLE'> <event trigger='LEFT_DOWN' goto='MOVING_LEFT' /> <event trigger='RIGHT_DOWN' goto='MOVING_RIGHT' /> </state> <state id='MOVING_LEFT'> <event trigger='LEFT_UP' goto='IDLE' /> <event trigger='RIGHT_DOWN' goto='MOVING_RIGHT' /> </state> <state id='MOVING_RIGHT'> <event trigger='RIGHT_UP' goto='IDLE' /> <event trigger='LEFT_DOWN' goto='MOVING_LEFT' /> </state> These states can be handled by a polling system. Each game tick it grabs the current state of a game entity and checks all triggers. If a condition is met it changes the entity's state to the "goto" state. The last part I was struggling with was how to bind animation data and animation states to an entity. The most logical approach seemed to me to add a pointer to the state machine data an entity uses and to define for each state in that machine what animation it uses. Here is an xml example how I would define the animation behavior and graphical representation of some common entities in a game, by addressing animation state and animation data id. 
Note that both "wizard" and "orc" have the same animation states but a different animation. Also, a different animation could mean a different sprite sheet, or even a different sequence of animations (an animation could be longer or shorter). <entity name="wizard"> <state id="IDLE" animation="WIZARD_IDLE" /> <state id="MOVING_LEFT" animation="WIZARD_WALK_LEFT" /> </entity> <entity name="orc"> <state id="IDLE" animation="ORC_IDLE" /> <state id="MOVING_LEFT" animation="ORC_WALK_LEFT" /> </entity> When the entity is being created it would add a list of states with state machine data and an animation data reference. In the future I would use the entity system to build whole entities by defining components in a similar xml format. -- This is what I have come up with after some research. However I had some trouble getting my head around it, so I was hoping op some feedback. Is there something here what doesn't make sense, or is there a better way to handle these things? I grasped the idea of iterating through frames but I'm having trouble to take it a step further and this is my attempt to do that.
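
    Since the question asks for feedback, here is one way (of many) the XML above could map onto runtime structures and a polling update in C#. The names are illustrative, and Rectangle stands in for whatever rectangle type the engine already has.

        using System.Collections.Generic;

        // Loaded once from XML and shared through the animation resource pool.
        public class AnimationData
        {
            public string Id;                 // e.g. "WIZARD_WALK_LEFT"
            public string ImageId;            // sprite sheet reference
            public int Fps;
            public bool Replay;
            public List<Rectangle> Frames = new List<Rectangle>();
        }

        public struct Rectangle { public int X, Y, Width, Height; }

        // One state in a state machine definition, shareable by many entities.
        public class AnimationState
        {
            public string Id;   // e.g. "MOVING_LEFT"
            // trigger -> next state id, e.g. "LEFT_UP" -> "IDLE"
            public Dictionary<string, string> Transitions = new Dictionary<string, string>();
        }

        // Per-entity component: current state plus the state -> animation bindings.
        public class AnimationComponent
        {
            public string CurrentState = "IDLE";
            public Dictionary<string, AnimationState> States;    // shared machine definition
            public Dictionary<string, string> StateToAnimation;  // e.g. "IDLE" -> "WIZARD_IDLE"
        }

        public static class AnimationSystem
        {
            // Called each tick by the polling system with the triggers raised this frame.
            public static void Update(AnimationComponent anim, IEnumerable<string> triggers)
            {
                foreach (string trigger in triggers)
                {
                    string next;
                    if (anim.States[anim.CurrentState].Transitions.TryGetValue(trigger, out next))
                    {
                        // The render system then looks up StateToAnimation[anim.CurrentState]
                        // in the resource pool to pick the frames to draw.
                        anim.CurrentState = next;
                        break;
                    }
                }
            }
        }

    Whether this beats hard-coded states is debatable, but it keeps the entity definitions, the state machines, and the animation clips as three separate, reusable pieces of data, which is what the XML layout above is aiming at.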

    Read the article

  • CodePlex Daily Summary for Thursday, September 06, 2012

    CodePlex Daily Summary for Thursday, September 06, 2012Popular Releasesmenu4web: menu4web 0.4.1 - javascript menu for web sites: This release is for those who believe that global variables are evil. menu4web has been wrapped into m4w singleton object. Added "Vertical Tabs" example which illustrates object notation.WinRT XAML Toolkit: WinRT XAML Toolkit - 1.2.1: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features AsyncUI extensions Controls and control extensions Converters Debugging helpers Imaging IO helpers VisualTree helpers Samples Recent changes NOTE: Namespace changes DebugConsol...iPDC - Free Phasor Data Concentrator: iPDC-v1.3.1: iPDC suite version-1.3.1, Modifications and Bug Fixed (from v 1.3.0) New User Manual for iPDC-v1.3.1 available on websites. Bug resolved : PMU Simulator TCP connection error and hang connection for client (PDC). Now PMU Simulator (server) can communicate more than one PDCs (clients) over TCP and UDP parallely. PMU Simulator is now sending the exact data frames as mentioned in data rate by user. PMU Simulator data rate has been verified by iPDC database entries and PMU Connection Tes...Microsoft SQL Server Product Samples: Database: AdventureWorks OData Feed: The AdventureWorks OData service exposes resources based on specific SQL views. The SQL views are a limited subset of the AdventureWorks database that results in several consuming scenarios: CompanySales Documents ManufacturingInstructions ProductCatalog TerritorySalesDrilldown WorkOrderRouting How to install the sample You can consume the AdventureWorks OData feed from http://services.odata.org/AdventureWorksV3/AdventureWorks.svc. You can also consume the AdventureWorks OData fe...Desktop Google Reader: 1.4.6: Sorting feeds alphabetical is now optional (see preferences window)DotNetNuke® Community Edition CMS: 06.02.03: Major Highlights Fixed issue where mailto: links were not working when sending bulk email Fixed issue where uses did not see friendship relationships Problem is in 6.2, which does not show in the Versions Affected list above. Fixed the issue with cascade deletes in comments in CoreMessaging_Notification Fixed UI issue when using a date fields as a required profile property during user registration Fixed error when running the product in debug mode Fixed visibility issue when...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.65: Fixed null-reference error in the build task constructor.Active Forums for DotNetNuke CMS: Active Forums 5.0.0 RC: RC release of Active Forums 5.0.Droid Explorer: Droid Explorer 0.8.8.7 Beta: Bug in the display icon for apk's, will fix with next release Added fallback icon if unable to get the image/icon from the Cloud Service Removed some stale plugins that were either out dated or incomplete. 
Added handler for *.ab files for restoring backups Added plugin to create device backups Backups stored in %USERPROFILE%\Android Backups\%DEVICE_ID%\ Added custom folder icon for the android backups directory better error handling for installing an apk bug fixes for the Runn...BI System Monitor: v2.1: Data Audits report and supporting SQL, and SSIS package Environment Overview report enhancements, improving the appearance, addition of data audit finding indicators Note: SQL 2012 version coming soon.Hidden Capture (HC): Hidden Capture 1.1: Hidden Capture 1.1 by Mohsen E.Dawatgar http://Hidden-Capture.blogfa.comExt Spec: Ext Spec 0.2.1: Refined examples and improved distribution options.The Visual Guide for Building Team Foundation Server 2012 Environments: Version 1: --Nearforums - ASP.NET MVC forum engine: Nearforums v8.5: Version 8.5 of Nearforums, the ASP.NET MVC Forum Engine. New features include: Built-in search engine using Lucene.NET Flood control improvements Notifications improvements: sync option and mail body View Roadmap for more details webdeploy package sha1 checksum: 961aff884a9187b6e8a86d68913cdd31f8deaf83WiX Toolset: WiX Toolset v3.6: WiX Toolset v3.6 introduces the Burn bootstrapper/chaining engine and support for Visual Studio 2012 and .NET Framework 4.5. Other minor functionality includes: WixDependencyExtension supports dependency checking among MSI packages. WixFirewallExtension supports more features of Windows Firewall. WixTagExtension supports Software Id Tagging. WixUtilExtension now supports recursive directory deletion. Melt simplifies pure-WiX patching by extracting .msi package content and updating .w...Iveely Search Engine: Iveely Search Engine (0.2.0): ????ISE?0.1.0??,?????,ISE?0.2.0?????????,???????,????????20???follow?ISE,????,??ISE??????????,??????????,?????????,?????????0.2.0??????,??????????。 Iveely Search Engine ?0.2.0?????????“??????????”,??????,?????????,???????,???????????????????,????、????????????。???0.1.0????????????: 1. ??“????” ??。??????????,?????????,???????????????????。??:????????,????????????,??????????????????。??????。 2. ??“????”??。?0.1.0??????,???????,???????????????,?????????????,????????,?0.2.0?,???????...GmailDefaultMaker: GmailDefaultMaker 3.0.0.2: Add QQ Mail BugfixSmart Data Access layer: Smart Data access Layer Ver 3: In this version support executing inline query is added. Check Documentation section for detail.DotNetNuke® Form and List: 06.00.04: DotNetNuke Form and List 06.00.04 Don't forget to backup your installation before upgrade. Changes in 06.00.04 Fix: Sql Scripts for 6.003 missed object qualifiers within stored procedures Fix: added missing resource "cmdCancel.Text" in form.ascx.resx Changes in 06.00.03 Fix: MakeThumbnail was broken if the application pool was configured to .Net 4 Change: Data is now stored in nvarchar(max) instead of ntext Changes in 06.00.02 The scripts are now compatible with SQL Azure, tested in a ne...Coevery - Free CRM: Coevery 1.0.0.24: Add a sample database, and installation instructions.New ProjectsAny-Service: AnyService is a .net 4.0 Windows service shell. It hosts any windows application in non-gui mode to run as a service.BabyCloudDrives - the multi cloud drive desktop's application: wpf ????BLACK ORANGE: Download The HPAD TEXT EDITOR and use it Wisely.. CodePlex New Release Checker: CodePlex New Release Checker is a small library that makes it easy to add, "New Version Available!" 
functionality to your CodePlex project.Collect: ????????!CSVManager: CSV??CSV?????,????CSV??,??????Exam Project: My Exam Project. Computer Vision, C and OpenCV-FTP: Hey guys thanks for checking out my ftp!Haushaltsbuch: 1ModMaker.Lua: ModMaker.Lua is an open source .NET library that parses and executes Lua code.MyJabbr: MyJabbr netduinoscope: Design shield and software to use netduino as oscilloscopeNetSurveillance Web Application: Net Surveillance Web ApplicationNiconicoApiHelper: ????API?????????OStega: A simple library for encrypt text into an bmp or png image.OURORM: ormTFS Cloud Deployment Toolkit: The TFS Cloud Deployment Toolkit is a set of tools that integrate with TFS 2010 to help manage configuration and deployment to various remote environments.The Visual Guide for Building Team Foundation Server 2012 Environments: A step-by-step guide for building Team Foundation Server 2012 environments that include SharePoint Server 2010, SQL Server 2012, Windows Server 2012 and more!WinRT LineChart: An attempt at creating an usable LineChart for everyone to use in his/her own Windows 8 Apps

    Read the article

  • Agilist, Heal Thyself!

    - by Dylan Smith
    I’ve been meaning to blog about a great experience I had earlier in the year at Prairie Dev Con Calgary. Steve Rogalsky and I did a session that we called “Agilist, Heal Thyself!”. We used a format that was new to me, but that Steve had seen used at another conference. We started by asking the audience to give us a list of challenges they had had when adopting agile. We wrote them all down, then had everybody vote on the most interesting ones. Then we split into two groups, and each group was assigned one of the agile challenges. We had 20 minutes to discuss the challenge and suggest solutions or approaches to improve things. At the end of the 20 minutes, each of the groups gave a brief summary of their discussion and learnings, then we mixed up the groups and repeated with another 2 challenges.

The 2 groups I was part of had some really interesting discussions and suggestions:

Unfinished Stories at the end of Sprints

The first agile challenge we tackled was something that every single Scrum team I have worked with has struggled with: what happens when you get to the end of a Sprint and some stories are only partially completed? The team in question was getting very demoralized, as they felt that every Sprint was a failure because they never had a set of fully completed stories. How do you avoid this, and/or what do you do when it happens? There were 2 pieces of advice that were well received:

1. Try to bring stories to completion before starting new ones. This is advice I give all my Scrum teams. If you have a 3-week sprint, what happens all too often is you get to the end of week 2 and a lot of stories are almost done, but almost none are completely done. This is a Bad Thing. I encourage the teams I work with to only start a new story as a very last resort. If you finish your task, look at the stories in progress and see if there’s anything you can do to help before moving onto a new story. In the daily standup, put a focus on seeing what stories got completed yesterday; if a few days go by with none getting completed, be sure this fact is visible to the team and do something about it. Something I’ve been doing recently is introducing WIP (Work In Progress) limits while using Scrum. My current team has 2-week sprints, and we usually have about a dozen or so stories in a sprint. We instituted a WIP limit of 4 stories. If 4 stories have been started but not finished, then nobody is allowed to start new stories. This made it obvious very quickly that our QA tasks were our bottleneck (we have 4 devs, but only 1.5 testers). The WIP limit forced the developers to start to pick up QA tasks before moving onto the next dev tasks, and we ended our sprints with many more stories completely finished than we did before introducing WIP limits.

2. Rather than using time-boxed sprints, why not just do away with them altogether and go to a continuous-flow approach like Kanban? Limit WIP to keep things under control, but don’t have a fixed time box at the end of which all tasks are supposed to be done. This eliminates the problem almost entirely. At some points in the project (releases) you need to be able to burn down all the half-finished stories to get a stable release build, but this probably occurs less often than every sprint, and there are alternative approaches to achieve it using branching strategies rather than forcing your team to try to get to zero WIP every 2 weeks (e.g.
when you are ready for a release, create a new branch for any new stories, but finish all existing stories in the current branch and release it).

Trying to Introduce Agile into a team with previous Bad Agile Experiences

One of the agile adoption challenges somebody described was that he was in a leadership role on a team he had recently joined – let’s call him Dave. This team was currently very waterfall in their ALM process, but they were about to start on a new green-field project. Dave wanted to use this new project as an opportunity to do things the “right way”, using an Agile methodology like Scrum, adopting TDD, automated builds, proper branching strategies, etc. The problem he was facing was that everybody else on the team had previously gone through an “Agile Adoption” that was a horrible failure. Dave blamed this failure on the consultant brought in previously to lead the agile transition, but regardless of the reason, the team had very negative feelings towards agile and was very resistant to trying it out again. Dave possibly had the authority to try to force the team to adopt Agile practices, but we all know that doesn’t work very well. What was Dave to do?

Ultimately, the best advice was to question *why* Dave wanted to adopt all these various practices. Rather than trying to convince his team that these were the “right way” to run a dev project, and trying to do a Big Bang approach to introducing change, he would be better served by identifying problems the team currently faces, having a discussion with the team to get everybody to agree that specific problems exist, and then having an open discussion about ways to address those problems. This way Dave could incrementally introduce agile practices, and he doesn’t even need to identify them as “agile” practices if he doesn’t want to. For example, when we discussed this with Dave, he said the team’s biggest problem was probably long periods without feedback from users, then finding out too late that the software is not going to meet their needs. Rather than Dave jumping right to introducing Scrum and all it entails, it would be easier to get buy-in from the team if he framed it as a discussion of existing problems and a brainstorm of possible solutions. And possibly most importantly, don’t try to do massive changes all at once with a team that has not bought into those changes. Taking an incremental approach has a greater chance of success.

I see something similar in my day job all the time too. Clients will, for one reason or another, claim not to be fans of agile (or not to be ready for agile yet), but then they go on to ask me to help them get shorter feedback cycles, quicker delivery cycles, iterative development processes, etc. It’s kind of funny at times; sometimes you just need to phrase the suggestions in the terms they are using and avoid the word “agile”.

PS – I haven’t blogged all that much over the past couple of years, but in an attempt to motivate myself, a few of us have accepted a blogger challenge. There are 6 of us who have each put some money into a pool, and the agreement is that we each need to blog at least once every 2 weeks. The first 2-week period that we miss, we’re eliminated. Last person standing gets the money. So expect at least one blog post every couple of weeks for the near future (I hope!).
And check out the blogs of the other 5 people in this blogger challenge:
Steve Rogalsky: http://winnipegagilist.blogspot.ca
Aaron Kowall: http://www.geekswithblogs.net/caffeinatedgeek
Tyler Doerkson: http://blog.tylerdoerksen.com
David Alpert: http://www.spinthemoose.com
Dave White: http://www.agileramblings.com (note: site not available yet. Should be shortly or he owes me some money!)

    Read the article

  • Online ALTER TABLE in MySQL 5.6

    - by Marko Mäkelä
    This is the low-level view of data definition language (DDL) operations in the InnoDB storage engine in MySQL 5.6. John Russell gave a more high-level view in his blog post April 2012 Labs Release – Online DDL Improvements.

MySQL before the InnoDB Plugin

Traditionally, the MySQL storage engine interface has taken a minimalistic approach to data definition language. The only natively supported operations were CREATE TABLE, DROP TABLE and RENAME TABLE. Consider the following example: CREATE TABLE t(a INT); INSERT INTO t VALUES (1),(2),(3); CREATE INDEX a ON t(a); DROP TABLE t; The CREATE INDEX statement would be executed roughly as follows: CREATE TABLE temp(a INT, INDEX(a)); INSERT INTO temp SELECT * FROM t; RENAME TABLE t TO temp2; RENAME TABLE temp TO t; DROP TABLE temp2; You could imagine that the database could crash when copying all rows from the original table to the new one. For example, it could run out of file space. Then, on restart, InnoDB would roll back the huge INSERT transaction. To fix things a little, a hack was added to ha_innobase::write_row for committing the transaction every 10,000 rows. Still, it was frustrating that even a simple DROP INDEX would make the table unavailable for modifications for a long time.

Fast Index Creation in the InnoDB Plugin of MySQL 5.1

MySQL 5.1 introduced a new interface for CREATE INDEX and DROP INDEX. The old table-copying approach can still be forced by SET old_alter_table=1. This interface is used in MySQL 5.5 and in the InnoDB Plugin for MySQL 5.1. Apart from the ability to do a quick DROP INDEX, the main advantage is that InnoDB will execute a merge-sort algorithm before inserting the index records into each index that is being created. This should speed up the insert into the secondary index B-trees and potentially result in a better B-tree fill factor. The 5.1 ALTER TABLE interface was not perfect. For example, DROP FOREIGN KEY still invoked the table copy. Renaming columns could conflict with InnoDB foreign key constraints. Combining ADD KEY and DROP KEY in ALTER TABLE was problematic and not atomic inside the storage engine.

The ALTER TABLE interface in MySQL 5.6

The ALTER TABLE storage engine interface was completely rewritten in MySQL 5.6. Instead of introducing a method call for every conceivable operation, MySQL 5.6 introduced a handful of methods, and data structures that keep track of the requested changes. In MySQL 5.6, an online ALTER TABLE operation can be requested by specifying LOCK=NONE. LOCK=SHARED and LOCK=EXCLUSIVE are also available. The old-style table copying can be requested by ALGORITHM=COPY; that one will require at least LOCK=SHARED. From the InnoDB point of view, anything that is possible with LOCK=EXCLUSIVE is also possible with LOCK=SHARED. Most ALGORITHM=INPLACE operations inside InnoDB can be executed online (LOCK=NONE). InnoDB will always require an exclusive table lock in two phases of the operation. The execution phases are tied to a number of methods:

handler::check_if_supported_inplace_alter Checks if the storage engine can perform all requested operations, and if so, what kind of locking is needed.

handler::prepare_inplace_alter_table InnoDB uses this method to set up the data dictionary cache for the upcoming CREATE INDEX operation. We need stubs for the new indexes, so that we can keep track of changes to the table during online index creation. Also, crash recovery would drop any indexes that were incomplete at the time of the crash.
handler::inplace_alter_table In InnoDB, this method is used for creating secondary indexes or for rebuilding the table. This is the ‘main’ phase that can be executed online (with concurrent writes to the table). handler::commit_inplace_alter_table This is where the operation is committed or rolled back. Here, InnoDB would drop any indexes, rename any columns, drop or add foreign keys, and finalize a table rebuild or index creation. It would also discard any logs that were set up for online index creation or table rebuild. The prepare and commit phases require an exclusive lock, blocking all access to the table. If MySQL times out while upgrading the table meta-data lock for the commit phase, it will roll back the ALTER TABLE operation. In MySQL 5.6, data definition language operations are still not fully atomic, because the data dictionary is split. Part of it is inside InnoDB data dictionary tables. Part of the information is only available in the *.frm file, which is not covered by any crash recovery log. But, there is a single commit phase inside the storage engine. Online Secondary Index Creation It may occur that an index needs to be created on a new column to speed up queries. But, it may be unacceptable to block modifications on the table while creating the index. It turns out that it is conceptually not so hard to support online index creation. All we need is some more execution phases: Set up a stub for the index, for logging changes. Scan the table for index records. Sort the index records. Bulk load the index records. Apply the logged changes. Replace the stub with the actual index. Threads that modify the table will log the operations to the logs of each index that is being created. Errors, such as log overflow or uniqueness violations, will only be flagged by the ALTER TABLE thread. The log is conceptually similar to the InnoDB change buffer. The bulk load of index records will bypass record locking. We still generate redo log for writing the index pages. It would suffice to log page allocations only, and to flush the index pages from the buffer pool to the file system upon completion. Native ALTER TABLE Starting with MySQL 5.6, InnoDB supports most ALTER TABLE operations natively. The notable exceptions are changes to the column type, ADD FOREIGN KEY except when foreign_key_checks=0, and changes to tables that contain FULLTEXT indexes. The keyword ALGORITHM=INPLACE is somewhat misleading, because certain operations cannot be performed in-place. For example, changing the ROW_FORMAT of a table requires a rebuild. Online operation (LOCK=NONE) is not allowed in the following cases: when adding an AUTO_INCREMENT column, when the table contains FULLTEXT indexes or a hidden FTS_DOC_ID column, or when there are FOREIGN KEY constraints referring to the table, with ON…CASCADE or ON…SET NULL option. The FOREIGN KEY limitations are needed, because MySQL does not acquire meta-data locks on the child or parent tables when executing SQL statements. Theoretically, InnoDB could support operations like ADD COLUMN and DROP COLUMN in-place, by lazily converting the table to a newer format. This would require that the data dictionary keep multiple versions of the table definition. For simplicity, we will copy the entire table, even for DROP COLUMN. The bulk copying of the table will bypass record locking and undo logging. For facilitating online operation, a temporary log will be associated with the clustered index of table. Threads that modify the table will also write the changes to the log. 
When altering the table, we skip all records that have been marked for deletion. In this way, we can simply discard any undo log records that were not yet purged from the original table. Off-page columns, or BLOBs, are an important consideration. We suspend the purge of delete-marked records if it would free any off-page columns from the old table. This is because the BLOBs can be needed when applying changes from the log. We have special logging for handling the ROLLBACK of an INSERT that inserted new off-page columns. This is because the columns will be freed at rollback.

    Read the article

  • Entity Framework 6: Alpha2 Now Available

    - by ScottGu
    The Entity Framework team recently announced the 2nd alpha release of EF6.   The alpha 2 package is available for download from NuGet. Since this is a pre-release package make sure to select “Include Prereleases” in the NuGet package manager, or execute the following from the package manager console to install it: PM> Install-Package EntityFramework -Pre This week’s alpha release includes a bunch of great improvements in the following areas: Async language support is now available for queries and updates when running on .NET 4.5. Custom conventions now provide the ability to override the default conventions that Code First uses for mapping types, properties, etc. to your database. Multi-tenant migrations allow the same database to be used by multiple contexts with full Code First Migrations support for independently evolving the model backing each context. Using Enumerable.Contains in a LINQ query is now handled much more efficiently by EF and the SQL Server provider resulting greatly improved performance. All features of EF6 (except async) are available on both .NET 4 and .NET 4.5. This includes support for enums and spatial types and the performance improvements that were previously only available when using .NET 4.5. Start-up time for many large models has been dramatically improved thanks to improved view generation performance. Below are some additional details about a few of the improvements above: Async Support .NET 4.5 introduced the Task-Based Asynchronous Pattern that uses the async and await keywords to help make writing asynchronous code easier. EF 6 now supports this pattern. This is great for ASP.NET applications as database calls made through EF can now be processed asynchronously – avoiding any blocking of worker threads. This can increase scalability on the server by allowing more requests to be processed while waiting for the database to respond. The following code shows an MVC controller that is querying a database for a list of location entities:     public class HomeController : Controller     {         LocationContext db = new LocationContext();           public async Task<ActionResult> Index()         {             var locations = await db.Locations.ToListAsync();               return View(locations);         }     } Notice above the call to the new ToListAsync method with the await keyword. When the web server reaches this code it initiates the database request, but rather than blocking while waiting for the results to come back, the thread that is processing the request returns to the thread pool, allowing ASP.NET to process another incoming request with the same thread. In other words, a thread is only consumed when there is actual processing work to do, allowing the web server to handle more concurrent requests with the same resources. A more detailed walkthrough covering async in EF is available with additional information and examples. Also a walkthrough is available showing how to use async in an ASP.NET MVC application. Custom Conventions When working with EF Code First, the default behavior is to map .NET classes to tables using a set of conventions baked into EF. For example, Code First will detect properties that end with “ID” and configure them automatically as primary keys. However, sometimes you cannot or do not want to follow those conventions and would rather provide your own. For example, maybe your primary key properties all end in “Key” instead of “Id”. 
Custom conventions allow the default conventions to be overridden or new conventions to be added so that Code First can map by convention using whatever rules make sense for your project. The following code demonstrates using custom conventions to set the precision of all decimals to 5. As with other Code First configuration, this code is placed in the OnModelCreating method which is overridden on your derived DbContext class:         protected override void OnModelCreating(DbModelBuilder modelBuilder)         {             modelBuilder.Properties<decimal>()                 .Configure(x => x.HasPrecision(5));           } But what if there are a couple of places where a decimal property should have a different precision? Just as with all the existing Code First conventions, this new convention can be overridden for a particular property simply by explicitly configuring that property using either the fluent API or a data annotation. A more detailed description of custom code first conventions is available here. Community Involvement I blogged a while ago about EF being released under an open source license.  Since then a number of community members have made contributions and these are included in EF6 alpha 2. Two examples of community contributions are: AlirezaHaghshenas contributed a change that increases the startup performance of EF for larger models by improving the performance of view generation. The change means that it is less often necessary to use of pre-generated views. UnaiZorrilla contributed the first community feature to EF: the ability to load all Code First configuration classes in an assembly with a single method call like the following: protected override void OnModelCreating(DbModelBuilder modelBuilder) {        modelBuilder.Configurations            .AddFromAssembly(typeof(LocationContext).Assembly); } This code will find and load all the classes that inherit from EntityTypeConfiguration<T> or ComplexTypeConfiguration<T> in the assembly where LocationContext is defined. This reduces the amount of coupling between the context and Code First configuration classes, and is also a very convenient shortcut for large models. Other upcoming features coming in EF 6 Lots of information about the development of EF6 can be found on the EF CodePlex site, including a roadmap showing the other features that are planned for EF6. One of of the nice upcoming features is connection resiliency, which will automate the process of retying database operations on transient failures common in cloud environments and with databases such as the Windows Azure SQL Database. Another often requested feature that will be included in EF6 is the ability to map stored procedures to query and update operations on entities when using Code First. Summary EF6 is the first open source release of Entity Framework being developed in CodePlex. The alpha 2 preview release of EF6 is now available on NuGet, and contains some really great features for you to try. The EF team are always looking for feedback from developers - especially on the new features such as custom Code First conventions and async support. To provide feedback you can post a comment on the EF6 alpha 2 announcement post, start a discussion or file a bug on the CodePlex site. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
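As a small illustration of the per-property override mentioned in the custom conventions section above, here is a minimal sketch. The Order entity and its Total property are invented for the example, and it assumes the conventions API shape where HasPrecision takes a precision and a scale:

using System.Data.Entity;

public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public class StoreContext : DbContext
{
    public DbSet<Order> Orders { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Convention: every decimal property defaults to precision 5, scale 2.
        modelBuilder.Properties<decimal>()
                    .Configure(p => p.HasPrecision(5, 2));

        // Explicit configuration wins over the convention for this one property.
        modelBuilder.Entity<Order>()
                    .Property(o => o.Total)
                    .HasPrecision(18, 4);
    }
}

Explicit per-property configuration always takes precedence over a model-wide convention, which is the behavior described above.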

    Read the article

  • CodePlex Daily Summary for Wednesday, September 05, 2012

    CodePlex Daily Summary for Wednesday, September 05, 2012Popular ReleasesDesktop Google Reader: 1.4.6: Sorting feeds alphabetical is now optional (see preferences window)DotNetNuke® Community Edition CMS: 06.02.03: Major Highlights Fixed issue where mailto: links were not working when sending bulk email Fixed issue where uses did not see friendship relationships Problem is in 6.2, which does not show in the Versions Affected list above. Fixed the issue with cascade deletes in comments in CoreMessaging_Notification Fixed UI issue when using a date fields as a required profile property during user registration Fixed error when running the product in debug mode Fixed visibility issue when...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.65: Fixed null-reference error in the build task constructor.BLACK ORANGE: HPAD TEXT EDITOR 0.9 Beta: HOW TO RUN THE TEXT EDITOR Download the HPAD ARCHIVED FILES which is in .rar format Extract using Winrar Make sure that extracted files are in the same folder Double-Click on HPAD.exe application fileTelerikMvcGridCustomBindingHelper: Version 1.0.15.247-RC2: TelerikMvcGridCustomBindingHelper 1.0.15.247 RC2 Release notes: This is a RC version (hopefully the last one), please test and report any error or problem you encounter. This release is all about performance and fixes Support: "Or" and "Does Not contain" filter options Improved BooleanSubstitutes, Custom Aggregates and expressions-to-queryover Add EntityFramework examples in ExampleWebApplication Many other improvements and fixes Fix invalid cast on CustomAggregates Support for ...ServiceMon - Extensible Real-time, Service Monitoring Utility: ServiceMon Release 0.9.0.44: Auto-uploaded from build serverJavaScript Grid: Release 09-05-2012: Release 09-05-2012xUnit.net Contrib: xunitcontrib-dotCover 0.6.1 (dotCover 2.1 beta): xunitcontrib release 0.6.1 for dotCover 2.1 beta This release provides a test runner plugin for dotCover 2.1 beta, targetting all versions of xUnit.net. (See the xUnit.net project to download xUnit.net itself.) This release adds support for running xUnit.net tests to dotCover 2.1 beta's Visual Studio plugin. PLEASE NOTE: You do NOT need this if you also have ReSharper and the existing 0.6.1 release installed. DotCover will use ReSharper's test runners if available. This release includes th...B INI Sharp Library: B INI Sharp Library v1.0.0.0 Realsed: The frist realsedActive Forums for DotNetNuke CMS: Active Forums 5.0.0 RC: RC release of Active Forums 5.0.Droid Explorer: Droid Explorer 0.8.8.7 Beta: Bug in the display icon for apk's, will fix with next release Added fallback icon if unable to get the image/icon from the Cloud Service Removed some stale plugins that were either out dated or incomplete. Added handler for *.ab files for restoring backups Added plugin to create device backups Backups stored in %USERPROFILE%\Android Backups\%DEVICE_ID%\ Added custom folder icon for the android backups directory better error handling for installing an apk bug fixes for the Runn...BI System Monitor: v2.1: Data Audits report and supporting SQL, and SSIS package Environment Overview report enhancements, improving the appearance, addition of data audit finding indicators Note: SQL 2012 version coming soon.The Visual Guide for Building Team Foundation Server 2012 Environments: Version 1: --Nearforums - ASP.NET MVC forum engine: Nearforums v8.5: Version 8.5 of Nearforums, the ASP.NET MVC Forum Engine. 
New features include: Built-in search engine using Lucene.NET Flood control improvements Notifications improvements: sync option and mail body View Roadmap for more details webdeploy package sha1 checksum: 961aff884a9187b6e8a86d68913cdd31f8deaf83WiX Toolset: WiX Toolset v3.6: WiX Toolset v3.6 introduces the Burn bootstrapper/chaining engine and support for Visual Studio 2012 and .NET Framework 4.5. Other minor functionality includes: WixDependencyExtension supports dependency checking among MSI packages. WixFirewallExtension supports more features of Windows Firewall. WixTagExtension supports Software Id Tagging. WixUtilExtension now supports recursive directory deletion. Melt simplifies pure-WiX patching by extracting .msi package content and updating .w...Iveely Search Engine: Iveely Search Engine (0.2.0): ????ISE?0.1.0??,?????,ISE?0.2.0?????????,???????,????????20???follow?ISE,????,??ISE??????????,??????????,?????????,?????????0.2.0??????,??????????。 Iveely Search Engine ?0.2.0?????????“??????????”,??????,?????????,???????,???????????????????,????、????????????。???0.1.0????????????: 1. ??“????” ??。??????????,?????????,???????????????????。??:????????,????????????,??????????????????。??????。 2. ??“????”??。?0.1.0??????,???????,???????????????,?????????????,????????,?0.2.0?,???????...GmailDefaultMaker: GmailDefaultMaker 3.0.0.2: Add QQ Mail BugfixSmart Data Access layer: Smart Data access Layer Ver 3: In this version support executing inline query is added. Check Documentation section for detail.DotNetNuke® Form and List: 06.00.04: DotNetNuke Form and List 06.00.04 Don't forget to backup your installation before upgrade. Changes in 06.00.04 Fix: Sql Scripts for 6.003 missed object qualifiers within stored procedures Fix: added missing resource "cmdCancel.Text" in form.ascx.resx Changes in 06.00.03 Fix: MakeThumbnail was broken if the application pool was configured to .Net 4 Change: Data is now stored in nvarchar(max) instead of ntext Changes in 06.00.02 The scripts are now compatible with SQL Azure, tested in a ne...Coevery - Free CRM: Coevery 1.0.0.24: Add a sample database, and installation instructions.New ProjectsA Simple Eng-Hindi CMS: A simple English- Hindi dual language content management system for small business/personal websites.Active Social Migrator: This project for managing the Active Social migration tool.ANSI Console User Control: Custom console control for .NET WinformsAutoSPInstallerGUI: GUI Configuration Tool for SPAutoInstaller Codeplex ProjectCode Documentation Checkin Policy: This checkin policy for Visual Studio 2012 checks if c# code is documented the way it's configured in the config of the policy. Code Dojo/Kata - Free Time Coding: Doing some katas of the Coding Dojo page. http://codingdojo.org/cgi-bin/wiki.pl?KataCataloguefjycUnifyShow: fjycUnifyShowHidden Capture (HC): HC is simple and easy utility to hidden and auto capture desktop or active windowHRC Integration Services: Fake SQL Server Integration Services. LOLKooboo CMS Sites Switcher: Kooboo CMS Sites SwitcherMod.CookieDetector: Orchard module for detecting whether cookies are enabledMyCodes: Created!MySQL Statement Monitor: MySQL Statement Monitor is a monitoring tool that monitors SQL statements transferred over the network.NeoModulusPIRandom: The idea with PI Random is to use easy string manipulation and simple math to generate a pseudo random number. 
Net Core Tech - Medical Record System: This is a Medical Record System ProjectOraPowerShell: PowerShell library for backup and maintenance of a Oracle Database environment under Microsoft Windows 2008PinDNN: PinDNN is a module that imparts Pinterest-like functionality to DotNetNuke sites. This module works with a MongoDB database and uses the built-in social relatioPyrogen Code Generator: PyroGen is a simple code generator accepting C# as the markup language.restMs: wil be deleted soonScript.NET: Script.NET is a script management utility for web forms and MVC, using ScriptJS-like features to link dependencies between scripts.SpringExample-Pagination: Simple Spring example with PaginationXNA and Component Based Design: This project includes code for XNA and Component Based Design

    Read the article

  • Does the Flashback Log record changes the same way as the Redo Log?

    - by Liu Maclean (刘相兵)
    Does the flashback log record data the same way the redo log does?

RVWR (Recovery Writer) writes block before-images from the flashback generation buffer to the flashback logs roughly every 3 seconds. It is not the case that every block change causes RVWR to write a before-image into the flashback log; if Oracle had to record a before-image for every single change, the flashback database logs would grow enormous.

Before a block is modified, its before-image can be placed in the flashback log buffer in the shared pool, and RVWR later writes the contents of that buffer out to the flashback logs. When DBWR writes such a dirty block to disk, it records the FBA (Flashback Byte Address) of the corresponding flashback log buffer entry in the buffer header.

At regular intervals, RVWR also writes flashback markers into the flashback database logs. Flashback markers are the reference points Oracle uses for a FLASHBACK DATABASE operation: Oracle restores the block images recorded in the flashback database logs back past the target point, and then uses forward recovery (applying redo) to roll the database forward to the requested SCN. Flashback markers for example:

**** Record at fba: (lno 1 thr 1 seq 1 bno 4 bof 8184) ****
RECORD HEADER:
Type: 3 (Skip) Size: 8132
RECORD DATA (Skip):
**** Record at fba: (lno 1 thr 1 seq 1 bno 4 bof 52) ****
RECORD HEADER:
Type: 7 (Begin Crash Recovery Record) Size: 36
RECORD DATA (Begin Crash Recovery Record):
Previous logical record fba: (lno 1 thr 1 seq 1 bno 3 bof 316)
Record scn: 0x0000.00000000 [0.0]
**** Record at fba: (lno 1 thr 1 seq 1 bno 3 bof 8184) ****
RECORD HEADER:
Type: 3 (Skip) Size: 7868
RECORD DATA (Skip):
**** Record at fba: (lno 1 thr 1 seq 1 bno 3 bof 316) ****
RECORD HEADER:
Type: 2 (Marker) Size: 300
RECORD DATA (Marker):
Previous logical record fba: (lno 0 thr 0 seq 0 bno 0 bof 0)
Record scn: 0x0000.00000000 [0.0]
Marker scn: 0x0000.0060e024 [0.6348836] 06/13/2012 15:56:35
Flag 0x0
Flashback threads: 1, Enabled redo threads 1
Recovery Start Checkpoint: scn: 0x0000.0060e024 [0.6348836] 06/13/2012 15:56:12 thread:1 rba:(0x80.180.10)
Flashback thread Markers:
Thread:1 status:0 fba: (lno 1 thr 1 seq 1 bno 2 bof 8184)
Redo Thread Checkpoint Info:
Thread:1 rba:(0x80.180.10)
**** Record at fba: (lno 1 thr 1 seq 1 bno 2 bof 8184) ****
RECORD HEADER:
Type: 3 (Skip) Size: 8168
RECORD DATA (Skip):
End-Of-Thread reached

As noted above, not every block change produces a before-image in the flashback log. If a flashback log record were generated for every block change, the flashback logs would be just as large as the redo logs! Remember that the flashback log records block before-images, while the redo log records change vectors.

To keep the I/O overhead of flashback logging down, Oracle does not log a before-image on every modification: for hot blocks, Oracle does not record a block image for each change. Instead, Oracle maintains flashback barriers at regular intervals (roughly every 15 minutes), and the flashback markers described above are tied to these barriers. Within such an interval, only the first change to a given block needs to generate a before-image; later changes to the same block within the same interval do not. When flashing back, Oracle restores the block image captured before the relevant barrier and then applies redo forward to reach the target point. This is why the amount of flashback log generated is normally much smaller than the amount of redo!

Below is a simple demonstration to verify this.
SQL> select * from v$version; BANNER -------------------------------------------------------------------------------- Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production PL/SQL Release 11.2.0.3.0 - Production CORE 11.2.0.3.0 Production TNS for Linux: Version 11.2.0.3.0 - Production NLSRTL Version 11.2.0.3.0 - Production SQL> select * from global_name; GLOBAL_NAME -------------------------------------------------------------------------------- www.oracledatabase12g.com SQL> create table flash_maclean (t1 varchar2(200)) tablespace users; Table created. SQL> insert into flash_maclean values('MACLEAN LOVE HANNA'); 1 row created. SQL> commit; Commit complete. SQL> startup force; ORACLE instance started. Total System Global Area 939495424 bytes Fixed Size 2233960 bytes Variable Size 713034136 bytes Database Buffers 218103808 bytes Redo Buffers 6123520 bytes Database mounted. Database opened. SQL> update flash_maclean set t1='HANNA LOVE MACLEAN'; 1 row updated. commit; Commit complete. SQL> alter system checkpoint; System altered. SQL> select dbms_rowid.rowid_block_number(rowid),dbms_rowid.rowid_relative_fno(rowid) from flash_maclean; DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID) ------------------------------------ ------------------------------------ 140431 4 datafile 4 block 140431 ??RDBA rdba: 0x0102248f (4/140431) SQL> ! ps -ef|grep rvwr|grep -v grep oracle 26695 1 0 15:56 ? 00:00:00 ora_rvwr_G11R23 SQL> oradebug setospid 26695 Oracle pid: 20, Unix process pid: 26695, image: [email protected] (RVWR) SQL> ORADEBUG DUMP FBTAIL 1; Statement processed. To dump the last 2000 flashback records , ??ORADEBUG DUMP FBTAIL 1????????2000?????? SQL> oradebug tracefile_name /s01/orabase/diag/rdbms/g11r23/G11R23/trace/G11R23_rvwr_26695.trc ? TRACE?????????block? before image **** Record at fba: (lno 1 thr 1 seq 1 bno 55 bof 2564) **** RECORD HEADER: Type: 1 (Block Image) Size: 28 RECORD DATA (Block Image): file#: 4 rdba: 0x0102248f Next scn: 0x0000.00000000 [0.0] Flag: 0x0 Block Size: 8192 BLOCK IMAGE: buffer rdba: 0x0102248f scn: 0x0000.00609044 seq: 0x01 flg: 0x06 tail: 0x90440601 frmt: 0x02 chkval: 0xc626 type: 0x06=trans data Hex dump of block: st=0, typ_found=1 Dump of memory from 0x00002B1D94183C00 to 0x00002B1D94185C00 2B1D94183C00 0000A206 0102248F 00609044 06010000 [.....$..D.`.....] 2B1D94183C10 0000C626 00000001 00014AD4 0060903A [&........J..:.`.] 2B1D94183C20 00000000 00320002 01022488 00090006 [......2..$......] 2B1D94183C30 00000CC8 00C00340 000D0542 00008000 [[email protected].......] 2B1D94183C40 006040BC 000F000A 00000920 00C002E4 [.@`..... .......] 2B1D94183C50 0017048F 00002001 00609044 00000000 [..... ..D.`.....] 2B1D94183C60 00000000 00010100 0014FFFF 1F6E1F77 [............w.n.] 2B1D94183C70 00001F6E 1F770001 00000000 00000000 [n.....w.........] 2B1D94183C80 00000000 00000000 00000000 00000000 [................] Repeat 500 times 2B1D94185BD0 00000000 00000000 2C000000 4D120102 [...........,...M] 2B1D94185BE0 454C4341 4C204E41 2045564F 4E4E4148 [ACLEAN LOVE HANN] 2B1D94185BF0 01002C41 43414D07 4E41454C 90440601 [A,...MACLEAN..D.] Block header dump: 0x0102248f Object id on Block? 
Y seg/obj: 0x14ad4 csc: 0x00.60903a itc: 2 flg: E typ: 1 - DATA brn: 0 bdba: 0x1022488 ver: 0x01 opc: 0 inc: 0 exflg: 0 Itl Xid Uba Flag Lck Scn/Fsc 0x01 0x0006.009.00000cc8 0x00c00340.0542.0d C--- 0 scn 0x0000.006040bc 0x02 0x000a.00f.00000920 0x00c002e4.048f.17 --U- 1 fsc 0x0000.00609044 bdba: 0x0102248f data_block_dump,data header at 0x2b1d94183c64 =============== tsiz: 0x1f98 hsiz: 0x14 pbl: 0x2b1d94183c64 76543210 flag=-------- ntab=1 nrow=1 frre=-1 fsbo=0x14 fseo=0x1f77 avsp=0x1f6e tosp=0x1f6e 0xe:pti[0] nrow=1 offs=0 0x12:pri[0] offs=0x1f77 block_row_dump: tab 0, row 0, @0x1f77 tl: 22 fb: --H-FL-- lb: 0x2 cc: 1 col 0: [18] 4d 41 43 4c 45 41 4e 20 4c 4f 56 45 20 48 41 4e 4e 41 end_of_block_dump SQL> select dump('MACLEAN LOVE HANNA',16) from dual; DUMP('MACLEANLOVEHANNA',16) -------------------------------------------------------------------- Typ=96 Len=18: 4d,41,43,4c,45,41,4e,20,4c,4f,56,45,20,48,41,4e,4e,41 ???????????????????????,??flashback log??before image????????? create table flash_maclean1 (t1 int) tablespace users; SQL> select vs.name, ms.value 2 from v$mystat ms, v$sysstat vs 3 where vs.statistic# = ms.statistic# 4 and vs.name in ('redo size','db block changes'); NAME VALUE ---------------------------------------------------------------- ---------- db block changes 0 redo size 0 SQL> select name,value from v$sysstat where name like 'flashback log%'; NAME VALUE ---------------------------------------------------------------- ---------- flashback log writes 49 flashback log write bytes 9306112 SQL> begin 2 for i in 1..5000 loop 3 update flash_maclean1 set t1=t1+1; 4 commit; 5 end loop; 6 end; 7 / PL/SQL procedure successfully completed. SQL> select vs.name, ms.value 2 from v$mystat ms, v$sysstat vs 3 where vs.statistic# = ms.statistic# 4 and vs.name in ('redo size','db block changes'); NAME VALUE ---------------------------------------------------------------- ---------- db block changes 20006 redo size 3071288 SQL> select name,value from v$sysstat where name like 'flashback log%'; NAME VALUE ---------------------------------------------------------------- ---------- flashback log writes 52 flashback log write bytes 10338304 ??????????? ??hot block,???20006 ?block changes???? ??? 3000k ?redo log ? ??1000k? flashback log ?

    Read the article

  • Failed to resolve artifact. Missing: ---------- 1) org.codehaus.mojo:gwt-maven-plugin:jar:1.3-SNAPSHOT

    - by karim
    i want to use the addon vaadin Timeline, so i have to make "gwt-maven-plugin 3.1" as i know ,my pom.xml is the following : <?xml version="1.0"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>life</groupId> <artifactId>life</artifactId> <packaging>war</packaging> <name>life Portlet</name> <version>0.0.1-SNAPSHOT</version> <url>http://maven.apache.org</url> <properties> <vaadin-widgets-dir>src/main/webapp/VAADIN/widgetsets</vaadin-widgets-dir> </properties> <build> <plugins> <plugin> <groupId>com.liferay.maven.plugins</groupId> <artifactId>liferay-maven-plugin</artifactId> <version>6.1.0</version> <configuration> <autoDeployDir>${liferay.auto.deploy.dir}</autoDeployDir> <liferayVersion>6.1.0</liferayVersion> <pluginType>portlet</pluginType> </configuration> </plugin> <plugin> <artifactId>maven-compiler-plugin</artifactId> <configuration> <encoding>UTF-8</encoding> <source>1.5</source> <target>1.5</target> </configuration> </plugin> <plugin> <groupId>com.vaadin</groupId> <artifactId>vaadin-maven-plugin</artifactId> <version>1.0.1</version> </plugin> <!-- Compiles your custom GWT components with the GWT compiler --> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>gwt-maven-plugin</artifactId> <version>2.1.0-1</version> <configuration> <!-- if you don't specify any modules, the plugin will find them --> <!--modules> .. </modules --> <webappDirectory>${project.build.directory}/${project.build.finalName}/VAADIN/widgetsets</webappDirectory> <extraJvmArgs>-Xmx512M -Xss1024k</extraJvmArgs> <runTarget>clean</runTarget> <hostedWebapp>${project.build.directory}/${project.build.finalName}</hostedWebapp> <noServer>true</noServer> <port>8080</port> <soyc>false</soyc> </configuration> <executions> <execution> <goals> <goal>resources</goal> <goal>compile</goal> </goals> </execution> </executions> </plugin> <!-- Updates Vaadin 6.2+ widgetset definitions based on project dependencies --> <plugin> <groupId>com.vaadin</groupId> <artifactId>vaadin-maven-plugin</artifactId> <version>1.0.1</version> <executions> <execution> <configuration> <!-- if you don't specify any modules, the plugin will find them --> <!-- <modules> <module>${package}.gwt.MyWidgetSet</module> </modules> --> </configuration> <goals> <goal>update-widgetset</goal> </goals> </execution> </executions> </plugin> </plugins> <pluginManagement> <plugins> <!--This plugin's configuration is used to store Eclipse m2e settings only. It has no influence on the Maven build itself. 
--> <plugin> <groupId>org.eclipse.m2e</groupId> <artifactId>lifecycle-mapping</artifactId> <version>1.0.0</version> <configuration> <lifecycleMappingMetadata> <pluginExecutions> <pluginExecution> <pluginExecutionFilter> <groupId> org.codehaus.mojo </groupId> <artifactId> gwt-maven-plugin </artifactId> <versionRange> [2.1.0-1,) </versionRange> <goals> <goal>resources</goal> </goals> </pluginExecutionFilter> <action> <ignore></ignore> </action> </pluginExecution> <pluginExecution> <pluginExecutionFilter> <groupId>com.vaadin</groupId> <artifactId> vaadin-maven-plugin </artifactId> <versionRange> [1.0.1,) </versionRange> <goals> <goal> update-widgetset </goal> </goals> </pluginExecutionFilter> <action> <ignore></ignore> </action> </pluginExecution> </pluginExecutions> </lifecycleMappingMetadata> </configuration> </plugin> </plugins> </pluginManagement> </build> <dependencies> <dependency> <groupId>com.liferay.portal</groupId> <artifactId>portal-service</artifactId> <version>6.1.0</version> <scope>provided</scope> </dependency> <dependency> <groupId>com.liferay.portal</groupId> <artifactId>util-bridges</artifactId> <version>6.1.0</version> <scope>provided</scope> </dependency> <dependency> <groupId>org.vaadin.addons</groupId> <artifactId>vaadin-timeline-agpl-3.0</artifactId> <version>1.2.4</version> </dependency> <dependency> <groupId>com.liferay.portal</groupId> <artifactId>util-taglib</artifactId> <version>6.1.0</version> <scope>provided</scope> </dependency> <dependency> <groupId>com.liferay.portal</groupId> <artifactId>util-java</artifactId> <version>6.1.0</version> <scope>provided</scope> </dependency> <dependency> <groupId>javax.portlet</groupId> <artifactId>portlet-api</artifactId> <version>2.0</version> <scope>provided</scope> </dependency> <dependency> <groupId>javax.servlet</groupId> <artifactId>servlet-api</artifactId> <version>2.4</version> <scope>provided</scope> </dependency> <dependency> <groupId>javax.servlet.jsp</groupId> <artifactId>jsp-api</artifactId> <version>2.0</version> <scope>provided</scope> </dependency> <!-- sqx --> <dependency> <groupId>javax.activation</groupId> <artifactId>activation</artifactId> <version>1.1.1</version> <scope>provided</scope> </dependency> <dependency> <groupId>antlr</groupId> <artifactId>antlr</artifactId> <version>2.7.6</version> <scope>provided</scope> </dependency> <dependency> <groupId>aopalliance</groupId> <artifactId>aopalliance</artifactId> <version>1.0</version> <scope>provided</scope> </dependency> <dependency> <groupId>asm</groupId> <artifactId>asm</artifactId> <version>1.5.3</version> <scope>provided</scope> </dependency> <dependency> <groupId>asm</groupId> <artifactId>asm-attrs</artifactId> <version>1.5.3</version> <scope>provided</scope> </dependency> <dependency> <groupId>org.aspectj</groupId> <artifactId>aspectjrt</artifactId> <version>1.6.8</version> <scope>provided</scope> </dependency> <dependency> <groupId>org.aspectj</groupId> <artifactId>aspectjweaver</artifactId> <version>1.6.8</version> <scope>provided</scope> </dependency> <dependency> <groupId>bsh</groupId> <artifactId>bsh</artifactId> <version>1.3.0</version> <scope>provided</scope> </dependency> <dependency> <groupId>cglib</groupId> <artifactId>cglib</artifactId> <version>2.1_3</version> <scope>provided</scope> </dependency> <dependency> <groupId>commons-collections</groupId> <artifactId>commons-collections</artifactId> <version>3.1</version> <scope>provided</scope> </dependency> <dependency> <groupId>commons-dbcp</groupId> <artifactId>commons-dbcp</artifactId> 
<version>1.3</version> </dependency> <dependency> <groupId>commons-logging</groupId> <artifactId>commons-logging</artifactId> <version>1.1</version> </dependency> <dependency> <groupId>commons-pool</groupId> <artifactId>commons-pool</artifactId> <version>1.5.3</version> </dependency> <dependency> <groupId>dom4j</groupId> <artifactId>dom4j</artifactId> <version>1.6.1</version> </dependency> <dependency> <groupId>net.sf.ehcache</groupId> <artifactId>ehcache</artifactId> <version>1.2.3</version> </dependency> <dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-core</artifactId> <version>3.3.1.GA</version> </dependency> <dependency> <groupId>hsqldb</groupId> <artifactId>hsqldb</artifactId> <version>1.8.0.10</version> </dependency> <!-- <dependency> <groupId>jboss</groupId> <artifactId>jboss-backport-concurrent</artifactId> <version>2.1.0.GA</version> </dependency> --> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-parent</artifactId> <version>1.5.0</version> </dependency> <dependency> <groupId>javax.jcr</groupId> <artifactId>jcr</artifactId> <version>1.0</version> </dependency> <!-- <dependency> <groupId>javax.sql</groupId> <artifactId>jdbc-stdext</artifactId> <version>2.0</version> </dependency> --> <dependency> <groupId>jdom</groupId> <artifactId>jdom</artifactId> <version>1.0</version> </dependency> <dependency> <groupId>javax.transaction</groupId> <artifactId>jta</artifactId> <version>1.1</version> </dependency> <dependency> <groupId>log4j</groupId> <artifactId>log4j</artifactId> <version>1.2.14</version> </dependency> <dependency> <groupId>javax.mail</groupId> <artifactId>mail</artifactId> <version>1.4.3</version> </dependency> <dependency> <groupId>com.sun.portal.portletcontainer</groupId> <artifactId>container</artifactId> <version>1.1-m4</version> </dependency> <dependency> <groupId>postgresql</groupId> <artifactId>postgresql</artifactId> <version>8.4-702.jdbc3</version> </dependency> <!-- sl4j-api-1.5.0 manquante --> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-parent</artifactId> <version>1.5.0</version> </dependency> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-log4j12</artifactId> <version>1.5.0</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-aop</artifactId> <version>2.5.6</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-aspects</artifactId> <version>3.0.3.RELEASE</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-beans</artifactId> <version>2.5.6</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-context</artifactId> <version>2.5.6</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-context-support</artifactId> <version>2.5.6</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-core</artifactId> <version>2.5.6</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-jdbc</artifactId> <version>2.5.6</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-jms</artifactId> <version>2.5.6</version> </dependency> <dependency> <groupId>javax.jms</groupId> <artifactId>jms</artifactId> <version>1.1</version> <scope>compile</scope> </dependency> <!-- <dependency> <groupId>org.springmodules</groupId> <artifactId>spring-modules-jbpm31</artifactId> <version>0.9</version> <scope>provided</scope> 
</dependency> --> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-orm</artifactId> <version>2.5.6</version> </dependency> <dependency> <groupId>org.springframework.ws</groupId> <artifactId>spring-oxm</artifactId> <version>1.5.0</version> <scope>provided</scope> </dependency> <dependency> <groupId>org.springframework.security</groupId> <artifactId>spring-security-core</artifactId> <version>2.0.4</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-tx</artifactId> <version>2.5.6</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-web</artifactId> <version>2.5.6</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-webmvc-portlet</artifactId> <version>2.5</version> </dependency> <dependency> <groupId>com.atomikos</groupId> <artifactId>transactions-hibernate3</artifactId> <version>3.6.4</version> </dependency> <dependency> <groupId>com.atomikos</groupId> <artifactId>transactions-osgi</artifactId> <version>3.7.0</version> </dependency> <dependency> <groupId>com.vaadin</groupId> <artifactId>vaadin</artifactId> <version>6.7.0</version> </dependency> <dependency> <groupId>com.thoughtworks.xstream</groupId> <artifactId>xstream</artifactId> <version>1.3.1</version> </dependency> <!-- this is the dependency to the "jar"-subproject --> <dependency> <groupId>org.codehaus.plexus</groupId> <artifactId>plexus-utils</artifactId> <version>1.5.9</version> </dependency> <dependency> <groupId>com.google.gwt</groupId> <artifactId>gwt-user</artifactId> <version>2.1.1</version> <scope>provided</scope> </dependency> </dependencies> <!-- Define our plugin repositories --> <pluginRepositories> <pluginRepository> <id>Codehaus</id> <name>Codehaus Maven Plugin Repository</name> <url>http://repository.codehaus.org/org/codehaus/mojo</url> <snapshots> <enabled>true</enabled> </snapshots> </pluginRepository> <pluginRepository> <id>codehaus-snapshots</id> <url>[http://nexus.codehaus.org/snapshots]</url> <snapshots> <enabled>true</enabled> </snapshots> <releases> <enabled>false</enabled> </releases> </pluginRepository> </pluginRepositories> <repositories> <repository> <id>vaadin-addons</id> <url>http://maven.vaadin.com/vaadin-addons</url> </repository> <repository> <id>demoiselle.sourceforge.net</id> <name>Demoiselle Maven Repository</name> <url>http://demoiselle.sourceforge.net/repository/release</url> </repository> </repositories> AND when i do "clean install" to build my mvn , the console show me this taken : [INFO] Unable to find resource 'org.codehaus.mojo:gwt-maven-plugin:jar:1.3-SNAPSHOT' in repository demoiselle.sourceforge.net (http://demoiselle.sourceforge.net/repository/release) [INFO] ------------------------------------------------------------------------ [ERROR] BUILD ERROR [INFO] ------------------------------------------------------------------------ [INFO] Failed to resolve artifact. Missing: ---------- 1) org.codehaus.mojo:gwt-maven-plugin:jar:1.3-SNAPSHOT Try downloading the file manually from the project website. 
Then, install it using the command: mvn install:install-file -DgroupId=org.codehaus.mojo -DartifactId=gwt-maven-plugin - Dversion=1.3-SNAPSHOT -Dpackaging=jar -Dfile=/path/to/file Alternatively, if you host your own repository you can deploy the file there: mvn deploy:deploy-file -DgroupId=org.codehaus.mojo -DartifactId=gwt-maven-plugin -Dversion=1.3-SNAPSHOT -Dpackaging=jar -Dfile=/path/to/file -Durl=[url] -DrepositoryId=[id] Path to dependency: 1) com.vaadin:vaadin-maven-plugin:maven-plugin:1.0.1 2) org.codehaus.mojo:gwt-maven-plugin:jar:1.3-SNAPSHOT ---------- 1 required artifact is missing. for artifact: com.vaadin:vaadin-maven-plugin:maven-plugin:1.0.1 from the specified remote repositories: demoiselle.sourceforge.net (http://demoiselle.sourceforge.net/repository/release), central (http://repo1.maven.org/maven2), Codehaus (http://repository.codehaus.org/org/codehaus/mojo), codehaus-snapshots ([http://nexus.codehaus.org/snapshots]), vaadin-snapshots (http://oss.sonatype.org/content/repositories/vaadin-snapshots/), vaadin-releases (http://oss.sonatype.org/content/repositories/vaadin-releases/), vaadin-addons (http://maven.vaadin.com/vaadin-addons) your help will be welcome thank you a lot !!! :)))
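One detail worth checking in the POM above: the codehaus-snapshots <url> is written as [http://nexus.codehaus.org/snapshots], and the square brackets become part of the address Maven tries to use (the error output prints the repository the same way), so the snapshot repository that should serve gwt-maven-plugin 1.3-SNAPSHOT is almost certainly unreachable as configured. A minimal sketch of that entry without the brackets, plus the manual install Maven itself suggests (the jar filename here is hypothetical):

<pluginRepository> <id>codehaus-snapshots</id> <url>http://nexus.codehaus.org/snapshots</url> <snapshots> <enabled>true</enabled> </snapshots> <releases> <enabled>false</enabled> </releases> </pluginRepository>

mvn install:install-file -DgroupId=org.codehaus.mojo -DartifactId=gwt-maven-plugin -Dversion=1.3-SNAPSHOT -Dpackaging=jar -Dfile=gwt-maven-plugin-1.3-SNAPSHOT.jar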

    Read the article

  • Hyper-V File Server Clustering - at my wit’s end

    - by René Kåbis
    I am at my wit’s end with File Server clustering under Hyper-V. I am hoping that someone might be able to help me figure out this Gordian Knot of a technology that seems to have dead ends (like forcing cluster VMs to use iSCSI drives where normally-attached VHDX drives could suffice) where logic and reason would normally provide a logical solution. My hardware: I will be running three servers (in the end), but right now everything is taking place on one server. One of the secondary servers will exist purely as a witness/quorum, and another slightly more powerful one will be acting as an emergency backup (with additional storage, just not redundant) to hold the secondary AD VM and the other halves of a set of clustered VMs: the SQL VM and the file system VM. Please note, these each are the depreciated nodes of a cluster, the main nodes will be on the most powerful first machine. My heavy lifter is a machine that also contains all of the truly redundant storage on the network. If this gives anyone the heebie-geebies, too bad. It has a 6TB (usable) RAID-10 array, and will (in the end) hold the primary nodes of both aforementioned clusters, but is right now holding all VMs. This is, right now: DC01, DC02, SQL01, SQL02, FS01 & FS02. Eventually, I will be adding additional VMs to handle Exchange, Sharepoint and Lync, but only to this main server (the secondary server won't be able to handle more than three or four VMs, so why burden it? The AD, SQL & FS VMs are the most critical for the business). If anyone is now saying, “wait, what about a SAN or a NAS for the file servers?”, well too bad. What exists on the main machine is what I have to deal with. I followed these instructions, but I seem to be unable to get things to work. In order to make the file server truly redundant, I cannot trust any one machine to hold the only data store on the network. Therefore, I have created a set of iSCSI drives on the VM-host of the main machine, and attached one to each file server VM. The end result is that I want my FS01 to sit on the heavy lifter, along with its iSCSI “drive”, and FS02 will sit on the secondary machine with its own iSCSI “drive” there as well. That is, neither iSCSI drive will end up sitting on the same machine as the other. As such, the clustered FS will utterly duplicate the contents of the iSCSI drives between each other, so that if one physical machine (or the FS VM) goes toes-up, the other has got a full copy of the data on its own iSCSI drive. My problem occurs when I try to apply the file server role within the failover cluster manager. Actually, it is even before that -- it occurs when adding the disks. Since I have added each disk preferentially to a specific VM (by limiting the initiator by DNS hostname, and by adding two-way CHAP authentication), this forces each VM to be in control of its own iSCSI disk. However, when I try to add the disks to the Disks section of Storage within Failover Cluster Manager, the entire process fails for a random disk of the pair. That is, one will get online, but the other will remain offline because it does not have the correct “owner node”. I mean, really -- WTF? Of course it doesn’t have the right owner node, both drives are showing the same node name!! I cannot seem to have one drive show up with one node name as owner, and the other drive show up with the other node name as owner. And because both drives are not “online”, I cannot create a pool to apply to a cluster role. Talk about getting stuck between a rock and a hard place! 
I’ve got more to add, but my workplace is closing for the day and I have to wrap things up; I will try to add more tomorrow morning when I get in. My main objective is to have a file server VM on each machine, with its storage on that same machine, and transparent failover in case one physical machine fails. Essentially, a failover FS that doesn’t care which machine fails -- the storage contents are replicated equally on each machine. Am I even heading in the right direction?
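For the owner-node puzzle, a small PowerShell sketch (assuming the FailoverClusters module is loaded on one of the nodes) that lists which node currently owns each clustered disk and moves the Available Storage group to the other file server VM, reusing the FS02 name from the description:

Get-ClusterResource | Where-Object { $_.ResourceType -eq "Physical Disk" } | Format-Table Name, OwnerNode, State
Move-ClusterGroup "Available Storage" -Node FS02

This only inspects and moves ownership; a clustered disk is still expected to be reachable from every node, which may be why two iSCSI targets that are each visible to only one VM refuse to come online together.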

    Read the article

  • Apache2 Segfault - need help interpreting this coredump (suspect cause is memcache / php session related)

    - by WayneDV
    Three Apache2 web servers running a PHP 5.2.3 web site. We're using Memcache to cache rendered pages but also as the storage engine of the PHP Sessions. At peak traffic times we're getting Apache segmentation faults on all three web servers and all HTTPD child processes segfault. My gut tells me that the increased Memcache traffic is stopping PHP sessions from being created or cleaned up and thus the processes die. Is it possible for someone to confirm that from the following? : #0 _zend_mm_free_int (heap=0x7fb67a075820, p=0x7fb67a011538) at /usr/src/debug/php-5.3.3/Zend/zend_alloc.c:2018 #1 0x00007fb665d02e82 in mmc_buffer_free (request=0x7fb67a011548) at /usr/src/debug/php-pecl-memcache-3.0.4/memcache-3.0.4/memcache_pool.c:50 #2 mmc_request_free (request=0x7fb67a011548) at /usr/src/debug/php-pecl-memcache-3.0.4/memcache-3.0.4/memcache_pool.c:169 #3 0x00007fb665d031ea in mmc_pool_free (pool=0x7fb67a00e458) at /usr/src/debug/php-pecl-memcache-3.0.4/memcache-3.0.4/memcache_pool.c:917 #4 0x00007fb665d0a2f1 in ps_close_memcache (mod_data=0x7fb66d625440) at /usr/src/debug/php-pecl-memcache-3.0.4/memcache-3.0.4/memcache_session.c:185 #5 0x00007fb66d1b0935 in php_session_save_current_state () at /usr/src/debug/php-5.3.3/ext/session/session.c:625 #6 php_session_flush () at /usr/src/debug/php-5.3.3/ext/session/session.c:1517 #7 0x00007fb66d1b0c1b in zm_deactivate_session (type=<value optimized out>, module_number=<value optimized out>) at /usr/src/debug/php-5.3.3/ext/session/session.c:2171 #8 0x00007fb66d2a719c in module_registry_cleanup (module=<value optimized out>) at /usr/src/debug/php-5.3.3/Zend/zend_API.c:2150 #9 0x00007fb66d2b1994 in zend_hash_reverse_apply (ht=0x7fb66d629d60, apply_func=0x7fb66d2a7180 <module_registry_cleanup>) at /usr/src/debug/php-5.3.3/Zend/zend_hash.c:755 #10 0x00007fb66d2a5c0d in zend_deactivate_modules () at /usr/src/debug/php-5.3.3/Zend/zend.c:866 #11 0x00007fb66d2541b5 in php_request_shutdown (dummy=<value optimized out>) at /usr/src/debug/php-5.3.3/main/main.c:1607 #12 0x00007fb66d32e037 in php_apache_request_dtor (r=0x7fb67a229658) at /usr/src/debug/php-5.3.3/sapi/apache2handler/sapi_apache2.c:509 #13 php_handler (r=0x7fb67a229658) at /usr/src/debug/php-5.3.3/sapi/apache2handler/sapi_apache2.c:681 #14 0x00007fb6784166f0 in ap_run_handler (r=0x7fb67a229658) at /usr/src/debug/httpd-2.2.15/server/config.c:158 #15 0x00007fb678419f58 in ap_invoke_handler (r=0x7fb67a229658) at /usr/src/debug/httpd-2.2.15/server/config.c:372 #16 0x00007fb6784254f0 in ap_process_request (r=0x7fb67a229658) at /usr/src/debug/httpd-2.2.15/modules/http/http_request.c:282 #17 0x00007fb678422418 in ap_process_http_connection (c=0x7fb67a2193a8) at /usr/src/debug/httpd-2.2.15/modules/http/http_core.c:190 #18 0x00007fb67841e1b8 in ap_run_process_connection (c=0x7fb67a2193a8) at /usr/src/debug/httpd-2.2.15/server/connection.c:43 #19 0x00007fb678429f4b in child_main (child_num_arg=<value optimized out>) at /usr/src/debug/httpd-2.2.15/server/mpm/prefork/prefork.c:662 #20 0x00007fb67842a21a in make_child (s=0x7fb679cd7860, slot=153) at /usr/src/debug/httpd-2.2.15/server/mpm/prefork/prefork.c:758 #21 0x00007fb67842aea4 in perform_idle_server_maintenance (_pconf=<value optimized out>, plog=<value optimized out>, s=<value optimized out>) at /usr/src/debug/httpd-2.2.15/server/mpm/prefork/prefork.c:893 #22 ap_mpm_run (_pconf=<value optimized out>, plog=<value optimized out>, s=<value optimized out>) at /usr/src/debug/httpd-2.2.15/server/mpm/prefork/prefork.c:1097 #23 0x00007fb678402890 in main 
(argc=1, argv=0x7fff6fecacb8) at /usr/src/debug/httpd-2.2.15/server/main.c:740 PHP.INI Follows: [PHP] engine = On short_open_tag = On asp_tags = Off precision = 14 y2k_compliance = On output_buffering = 4096 zlib.output_compression = Off implicit_flush = Off unserialize_callback_func = serialize_precision = 100 allow_call_time_pass_reference = Off safe_mode = Off safe_mode_gid = Off safe_mode_include_dir = safe_mode_exec_dir = safe_mode_allowed_env_vars = PHP_ safe_mode_protected_env_vars = LD_LIBRARY_PATH disable_functions = disable_classes = expose_php = On max_execution_time = 30 max_input_time = 60 memory_limit = 128M error_reporting = E_ALL & ~E_DEPRECATED display_errors = Off display_startup_errors = Off log_errors = Off log_errors_max_len = 1024 ignore_repeated_errors = Off ignore_repeated_source = Off report_memleaks = On track_errors = Off html_errors = Off variables_order = "GPCS" request_order = "GP" register_globals = Off register_long_arrays = Off register_argc_argv = Off auto_globals_jit = On post_max_size = 8M magic_quotes_gpc = Off magic_quotes_runtime = Off magic_quotes_sybase = Off auto_prepend_file = auto_append_file = default_mimetype = "text/html" doc_root = user_dir = enable_dl = Off file_uploads = On upload_max_filesize = 2M allow_url_fopen = On allow_url_include = Off default_socket_timeout = 60 [Date] [filter] [iconv] [intl] [sqlite] [sqlite3] [Pcre] [Pdo] [Phar] [Syslog] define_syslog_variables = Off [mail function] SMTP = localhost smtp_port = 25 sendmail_path = /usr/sbin/sendmail -t -i mail.add_x_header = On [SQL] sql.safe_mode = Off [ODBC] odbc.allow_persistent = On odbc.check_persistent = On odbc.max_persistent = -1 odbc.max_links = -1 odbc.defaultlrl = 4096 odbc.defaultbinmode = 1 [MySQL] mysql.allow_persistent = On mysql.max_persistent = -1 mysql.max_links = -1 mysql.default_port = mysql.default_socket = mysql.default_host = mysql.default_user = mysql.default_password = mysql.connect_timeout = 60 mysql.trace_mode = Off [MySQLi] mysqli.max_links = -1 mysqli.default_port = 3306 mysqli.default_socket = mysqli.default_host = mysqli.default_user = mysqli.default_pw = mysqli.reconnect = Off [PostgresSQL] pgsql.allow_persistent = On pgsql.auto_reset_persistent = Off pgsql.max_persistent = -1 pgsql.max_links = -1 pgsql.ignore_notice = 0 pgsql.log_notice = 0 [Sybase-CT] sybct.allow_persistent = On sybct.max_persistent = -1 sybct.max_links = -1 sybct.min_server_severity = 10 sybct.min_client_severity = 10 [bcmath] bcmath.scale = 0 [browscap] [Session] session.save_handler = files session.save_path = "/var/lib/php/session" session.use_cookies = 1 session.use_only_cookies = 1 session.name = PHPSESSID session.auto_start = 1 session.cookie_lifetime = 0 session.cookie_path = / session.cookie_domain = session.cookie_httponly = session.serialize_handler = php session.gc_probability = 1 session.gc_divisor = 1000 session.gc_maxlifetime = 1440 session.bug_compat_42 = Off session.bug_compat_warn = Off session.referer_check = session.entropy_length = 0 session.entropy_file = session.cache_limiter = nocache session.cache_expire = 180 session.use_trans_sid = 0 session.hash_function = 0 session.hash_bits_per_character = 5 url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=fakeentry" [MSSQL] mssql.allow_persistent = On mssql.max_persistent = -1 mssql.max_links = -1 mssql.min_error_severity = 10 mssql.min_message_severity = 10 mssql.compatability_mode = Off mssql.secure_connection = Off [Assertion] [COM] [mbstring] [gd] [exif] [Tidy] tidy.clean_output = Off [soap] 
soap.wsdl_cache_enabled=1 soap.wsdl_cache_dir="/tmp" soap.wsdl_cache_ttl=86400 /etc/php.d/memcached.ini : session.save_path="tcp://memcache1:11211?persistent=1&weight=1&timeout=3&retry_interval=15"
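For reference, when the pecl memcache extension (3.0.4, as in the backtrace) owns the session store, the handler and the path normally travel together; a sketch of what /etc/php.d/memcached.ini would typically contain, given that the main php.ini above still says session.save_handler = files:

session.save_handler = memcache
session.save_path = "tcp://memcache1:11211?persistent=1&weight=1&timeout=3&retry_interval=15"

Note also that session.auto_start = 1 means every request opens a session and writes it back at shutdown, which is exactly the php_session_flush / ps_close_memcache path in frames #4 to #6 of the dump.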

    Read the article

  • Optimizing MySQL for small VPS

    - by Chris M
    I'm trying to optimize my MySQL config for a verrry small VPS. The VPS is also running NGINX/PHP-FPM and Magento; all with a limit of 250MB of RAM. This is an output of MySQL Tuner... -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.1.41-3ubuntu12.8 [OK] Operating on 64-bit architecture -------- Storage Engine Statistics ------------------------------------------- [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster [--] Data in MyISAM tables: 1M (Tables: 14) [--] Data in InnoDB tables: 29M (Tables: 301) [--] Data in MEMORY tables: 1M (Tables: 17) [!!] Total fragmented tables: 301 -------- Security Recommendations ------------------------------------------- [OK] All database users have passwords assigned -------- Performance Metrics ------------------------------------------------- [--] Up for: 2d 11h 14m 58s (1M q [8.038 qps], 33K conn, TX: 2B, RX: 618M) [--] Reads / Writes: 83% / 17% [--] Total buffers: 122.0M global + 8.6M per thread (100 max threads) [!!] Maximum possible memory usage: 978.2M (404% of installed RAM) [OK] Slow queries: 0% (37/1M) [OK] Highest usage of available connections: 6% (6/100) [OK] Key buffer size / total MyISAM indexes: 32.0M/282.0K [OK] Key buffer hit rate: 99.7% (358K cached / 1K reads) [OK] Query cache efficiency: 83.4% (1M cached / 1M selects) [!!] Query cache prunes per day: 48301 [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 144K sorts) [OK] Temporary tables created on disk: 13% (27K on disk / 203K total) [OK] Thread cache hit rate: 99% (6 created / 33K connections) [!!] Table cache hit rate: 0% (32 open / 51K opened) [OK] Open file limit used: 1% (20/1K) [OK] Table locks acquired immediately: 99% (1M immediate / 1M locks) [!!] InnoDB data size / buffer pool: 29.2M/8.0M -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance Reduce your overall MySQL memory footprint for system stability Enable the slow query log to troubleshoot bad queries Increase table_cache gradually to avoid file descriptor limits Variables to adjust: *** MySQL's maximum memory usage is dangerously high *** *** Add RAM before increasing MySQL buffer variables *** query_cache_size (> 64M) table_cache (> 32) innodb_buffer_pool_size (>= 29M) and this is the config. # # The MySQL database server configuration file. # # You can copy this to one of: # - "/etc/mysql/my.cnf" to set global options, # - "~/.my.cnf" to set user-specific options. # # One can use all long options that the program supports. # Run program with --help to get a list of available options and with # --print-defaults to see which it would actually understand and use. # # For explanations see # http://dev.mysql.com/doc/mysql/en/server-system-variables.html # This will be passed to all mysql clients # It has been reported that passwords should be enclosed with ticks/quotes # escpecially if they contain "#" chars... # Remember to edit /etc/mysql/debian.cnf when changing the socket location. [client] port = 3306 socket = /var/run/mysqld/mysqld.sock # Here is entries for some specific programs # The following values assume you have at least 32M ram # This was formally known as [safe_mysqld]. Both versions are currently parsed. 
[mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] # # * Basic Settings # # # * IMPORTANT # If you make changes to these settings and your system uses apparmor, you may # also need to also adjust /etc/apparmor.d/usr.sbin.mysqld. # user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp skip-external-locking # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. bind-address = 127.0.0.1 # # * Fine Tuning # key_buffer = 32M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 8 sort_buffer_size = 4M read_buffer_size = 4M myisam_sort_buffer_size = 16M # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP max_connections = 100 table_cache = 32 tmp_table_size = 128M #thread_concurrency = 10 # # * Query Cache Configuration # #query_cache_limit = 1M query_cache_type = 1 query_cache_size = 64M # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. # As of 5.1 you can enable the log at runtime! #general_log_file = /var/log/mysql/mysql.log #general_log = 1 log_error = /var/log/mysql/error.log # Here you can see queries with especially long duration #log_slow_queries = /var/log/mysql/mysql-slow.log #long_query_time = 2 #log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. # note: if you are setting up a replication slave, see README.Debian about # other settings you may need to change. #server-id = 1 #log_bin = /var/log/mysql/mysql-bin.log expire_logs_days = 10 max_binlog_size = 100M #binlog_do_db = include_database_name #binlog_ignore_db = include_database_name # # * InnoDB # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! # # * Security Features # # Read the manual, too, if you want chroot! # chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 16M # # * IMPORTANT: Additional settings that can override those from this file! # The files must end with '.cnf', otherwise they'll be ignored. # !includedir /etc/mysql/conf.d/ The site contains 1 wordpress site,so lots of MYISAM but mostly static content as its not changing all that often (A wordpress cache plugin deals with this). And the Magento Site which consists of a lot of InnoDB tables, some MyISAM and some INMEMORY. The "read" side seems to be running pretty well with a mass of optimizations I've used on Magento, the NGINX setup and PHP-FPM + XCACHE. I'd love to have a kick in the right direction with the MySQL config so I'm not blindly altering it based on the MySQLTuner without understanding what I'm changing. Thanks
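For orientation, MySQLTuner's 978.2M "maximum possible memory usage" is roughly global buffers plus per-thread buffers times max_connections (122M + 8.6M x 100 here). A sketch of one way to pull that under the 250MB limit; the values are illustrative assumptions rather than tuned settings, and they deliberately trade the tuner's query-cache suggestion for a smaller footprint:

max_connections = 30
key_buffer = 16M
sort_buffer_size = 1M
read_buffer_size = 1M
tmp_table_size = 32M
query_cache_size = 16M
innodb_buffer_pool_size = 32M

With roughly 70M of global buffers and about 3M per connection, the worst case stays well under 250MB while still giving InnoDB a pool slightly larger than its 29M of data, which is what the tuner asks for.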

    Read the article

  • Server Memory with Magento

    - by Mohamed Elgharabawy
    I have a cloud server with the following specifications: 2vCPUs 4G RAM 160GB Disk Space Network 400Mb/s System Image: Ubuntu 12.04 LTS I am only running Magento CE 1.7.0.2 on this server. Nothing else. Usually, the server has a loading time of 4-5 seconds. Recently, this has dropped to over 30 seconds and sometimes the server just goes away and I get HTTP error reports to my email stating that HTTP requests took more than 20000ms. Running top command and sorting them returns the following: top - 15:29:07 up 3:40, 1 user, load average: 28.59, 25.95, 22.91 Tasks: 112 total, 30 running, 82 sleeping, 0 stopped, 0 zombie Cpu(s): 90.2%us, 9.3%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.3%si, 0.2%st PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 31901 www-data 20 0 360m 71m 5840 R 7 1.8 1:39.51 apache2 32084 www-data 20 0 362m 72m 5548 R 7 1.8 1:31.56 apache2 32089 www-data 20 0 348m 59m 5660 R 7 1.5 1:41.74 apache2 32295 www-data 20 0 343m 54m 5532 R 7 1.4 2:00.78 apache2 32303 www-data 20 0 354m 65m 5260 R 7 1.6 1:38.76 apache2 32304 www-data 20 0 346m 56m 5544 R 7 1.4 1:41.26 apache2 32305 www-data 20 0 348m 59m 5640 R 7 1.5 1:50.11 apache2 32291 www-data 20 0 358m 69m 5256 R 6 1.7 1:44.26 apache2 32517 www-data 20 0 345m 56m 5532 R 6 1.4 1:45.56 apache2 30473 www-data 20 0 355m 66m 5680 R 6 1.7 2:00.05 apache2 32093 www-data 20 0 352m 63m 5848 R 6 1.6 1:53.23 apache2 32302 www-data 20 0 345m 56m 5512 R 6 1.4 1:55.87 apache2 32433 www-data 20 0 346m 57m 5500 S 6 1.4 1:31.58 apache2 32638 www-data 20 0 354m 65m 5508 R 6 1.6 1:36.59 apache2 32230 www-data 20 0 347m 57m 5524 R 6 1.4 1:33.96 apache2 32231 www-data 20 0 355m 66m 5512 R 6 1.7 1:37.47 apache2 32233 www-data 20 0 354m 64m 6032 R 6 1.6 1:59.74 apache2 32300 www-data 20 0 355m 66m 5672 R 6 1.7 1:43.76 apache2 32510 www-data 20 0 347m 58m 5512 R 6 1.5 1:42.54 apache2 32521 www-data 20 0 348m 59m 5508 R 6 1.5 1:47.99 apache2 32639 www-data 20 0 344m 55m 5512 R 6 1.4 1:34.25 apache2 32083 www-data 20 0 345m 56m 5696 R 5 1.4 1:59.42 apache2 32085 www-data 20 0 347m 58m 5692 R 5 1.5 1:42.29 apache2 32293 www-data 20 0 353m 64m 5676 R 5 1.6 1:52.73 apache2 32301 www-data 20 0 348m 59m 5564 R 5 1.5 1:49.63 apache2 32528 www-data 20 0 351m 62m 5520 R 5 1.6 1:36.11 apache2 31523 mysql 20 0 3460m 576m 8288 S 5 14.4 2:06.91 mysqld 32002 www-data 20 0 345m 55m 5512 R 5 1.4 2:01.88 apache2 32080 www-data 20 0 357m 68m 5512 S 5 1.7 1:31.30 apache2 32163 www-data 20 0 347m 58m 5512 S 5 1.5 1:58.68 apache2 32509 www-data 20 0 345m 56m 5504 R 5 1.4 1:49.54 apache2 32306 www-data 20 0 358m 68m 5504 S 4 1.7 1:53.29 apache2 32165 www-data 20 0 344m 55m 5524 S 4 1.4 1:40.71 apache2 32640 www-data 20 0 345m 56m 5528 R 4 1.4 1:36.49 apache2 31888 www-data 20 0 359m 70m 5664 R 4 1.8 1:57.07 apache2 32511 www-data 20 0 357m 67m 5512 S 3 1.7 1:47.00 apache2 32054 www-data 20 0 357m 68m 5660 S 2 1.7 1:53.10 apache2 1 root 20 0 24452 2276 1232 S 0 0.1 0:01.58 init Moreover, running free -m returns the following: total used free shared buffers cached Mem: 4003 3919 83 0 118 901 -/+ buffers/cache: 2899 1103 Swap: 0 0 0 To investigate this further, I have installed apache buddy, it recommeneded that I need to reduce the maxclient connections. Which I did. I also installed MysqlTuner and it suggests that I need to set my innodb_buffer_pool_size to = 3.0G. However, I cannot do that, since the whole memory is 4G. 
Here is the output from apache buddy: ### GENERAL REPORT ### Settings considered for this report: Your server's physical RAM: 4003MB Apache's MaxClients directive: 40 Apache MPM Model: prefork Largest Apache process (by memory): 73.77MB [ OK ] Your MaxClients setting is within an acceptable range. Max potential memory usage: 2950.8 MB Percentage of RAM allocated to Apache 73.72 % And this is the output of MySQLTuner: -------- Performance Metrics ------------------------------------------------- [--] Up for: 47m 22s (675K q [237.552 qps], 12K conn, TX: 1B, RX: 300M) [--] Reads / Writes: 45% / 55% [--] Total buffers: 2.1G global + 2.7M per thread (151 max threads) [OK] Maximum possible memory usage: 2.5G (64% of installed RAM) [OK] Slow queries: 0% (0/675K) [OK] Highest usage of available connections: 26% (40/151) [OK] Key buffer size / total MyISAM indexes: 36.0M/18.7M [OK] Key buffer hit rate: 100.0% (245K cached / 105 reads) [OK] Query cache efficiency: 92.5% (500K cached / 541K selects) [!!] Query cache prunes per day: 302886 [OK] Sorts requiring temporary tables: 0% (1 temp sorts / 15K sorts) [!!] Joins performed without indexes: 12135 [OK] Temporary tables created on disk: 25% (8K on disk / 32K total) [OK] Thread cache hit rate: 90% (1K created / 12K connections) [!!] Table cache hit rate: 17% (400 open / 2K opened) [OK] Open file limit used: 12% (123/1K) [OK] Table locks acquired immediately: 100% (196K immediate / 196K locks) [!!] InnoDB buffer pool / data size: 2.0G/3.5G [OK] InnoDB log waits: 0 -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance MySQL started within last 24 hours - recommendations may be inaccurate Enable the slow query log to troubleshoot bad queries Adjust your join queries to always utilize indexes Increase table_cache gradually to avoid file descriptor limits Read this before increasing table_cache over 64: http://bit.ly/1mi7c4C Variables to adjust: query_cache_size ( 64M) join_buffer_size ( 128.0K, or always use indexes with joins) table_cache ( 400) innodb_buffer_pool_size (= 3G) Last but not least, the server still has more than 60% of free disk space. Now, based on the above, I have few questions: Are these numbers normal? Do they make sense? Do I need to upgrade the server? If I don't need to upgrade and my configuration is not correct, how do I optimize it?
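A quick budget from the numbers above: 40 prefork workers at ~74MB each is about 2.9GB, and MySQLTuner's own ceiling is another 2.5GB, so the two can in theory claim around 5.4GB on a 4GB box with no swap to absorb the overshoot. A sketch of a more conservative split (the values are assumptions for illustration, not measurements from this server):

MaxClients 25 (in the prefork section of the Apache config)
innodb_buffer_pool_size = 1G (in my.cnf)

That caps Apache near 25 x 74MB, about 1.8GB, and leaves roughly 2GB for MySQL, PHP and the OS, instead of the 3G buffer pool the tuner suggests but the machine cannot hold alongside Apache.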

    Read the article

  • How do I configure OpenVPN for accessing the internet with one NIC?

    - by Lekensteyn
    I've been trying to get OpenVPN to work for three days. After reading many questions, the HOWTO, the FAQ and even parts of a guide to Linux networking, I cannot get my an Internet connection to the Internet. I'm trying to set up a OpenVPN server on a VPS, which will be used for: secure access to the Internet bypassing port restrictions (directadmin/2222 for example) an IPv6 connection (my client does only have IPv4 connectivity, while the VPS has both IPv4 and native IPv6 connectivity) (if possible) I can connect to my server and access the machine (HTTP), but Internet connectivity fails completely. I'm using ping 8.8.8.8 for testing whether my connection works or not. Using tcpdump and iptables -t nat -A POSTROUTING -j LOG, I can confirm that the packets reach my server. If I ping to 8.8.8.8 on the VPS, I get an echo-reply from 8.8.8.8 as expected. When pinging from the client, I do not get an echo-reply. The VPS has only one NIC: etho. It runs on Xen. Summary: I want to have a secure connection between my laptop and the Internet using OpenVPN. If that works, I want to have IPv6 connectivity as well. Network setup and software: Home laptop (eth0: 192.168.2.10) (tap0: 10.8.0.2) | | (running Kubuntu 10.10; OpenVPN 2.1.0-3ubuntu1) | wifi | router/gateway (gateway 192.168.2.1) | INTERNET | VPS (eth0:1.2.3.4) (gateway, tap0: 10.8.0.1) (running Debian 6; OpenVPN 2.1.3-2) wifi and my home router should not cause problems since all traffic goes encrypted over UDP port 1194. I've turned IP forwarding on: # echo 1 > /proc/sys/net/ipv4/ip_forward iptables has been configured to allow forwarding traffic as well: iptables -F FORWARD iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT iptables -A FORWARD -s 10.8.0.0/24 -j ACCEPT iptables -A FORWARD -j DROP I've tried each of these rules separately without luck (flushing the chains before executing): iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j SNAT --to 1.2.3.4 iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE route -n before (server): 1.2.3.4 0.0.0.0 255.255.255.0 U 0 0 0 eth0 0.0.0.0 1.2.3.4 0.0.0.0 UG 0 0 0 eth0 route -n after (server): 1.2.3.4 0.0.0.0 255.255.255.0 U 0 0 0 eth0 10.8.0.0 0.0.0.0 255.255.255.0 U 0 0 0 tap0 0.0.0.0 1.2.3.4 0.0.0.0 UG 0 0 0 eth0 route -n before (client): 192.168.2.0 0.0.0.0 255.255.255.0 U 2 0 0 wlan0 169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 wlan0 0.0.0.0 192.168.2.1 0.0.0.0 UG 0 0 0 wlan0 route -n after (client): 1.2.3.4 192.168.2.1 255.255.255.255 UGH 0 0 0 wlan0 10.8.0.0 0.0.0.0 255.255.255.0 U 0 0 0 tap0 192.168.2.0 0.0.0.0 255.255.255.0 U 2 0 0 wlan0 169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 wlan0 0.0.0.0 10.8.0.1 128.0.0.0 UG 0 0 0 tap0 128.0.0.0 10.8.0.1 128.0.0.0 UG 0 0 0 tap0 0.0.0.0 192.168.2.1 0.0.0.0 UG 0 0 0 wlan0 SERVER config proto udp dev tap ca ca.crt cert server.crt key server.key dh dh1024.pem server 10.8.0.0 255.255.255.0 push "redirect-gateway def1" ifconfig-pool-persist ipp.txt keepalive 10 120 tls-auth ta.key 0 comp-lzo user nobody group nobody persist-key persist-tun log-append openvpn-log verb 3 mute 10 CLIENT config dev tap proto udp remote 1.2.3.4 1194 resolv-retry infinite nobind persist-key persist-tun ca ca.crt cert client.crt key client.key ns-cert-type server tls-auth ta.key 1 comp-lzo verb 3 mute 20 traceroute 8.8.8.8 works as expected (similar output without OpenVPN activated): 1 10.8.0.1 (10.8.0.1) 24.276 ms 26.891 ms 29.454 ms 2 gw03.sbp.directvps.nl (178.21.112.1) 31.161 ms 31.890 ms 34.458 ms 3 ge0-v0652.cr0.nik-ams.nl.as8312.net 
(195.210.57.105) 35.353 ms 36.874 ms 38.403 ms 4 ge0-v3900.cr0.nik-ams.nl.as8312.net (195.210.57.53) 41.311 ms 41.561 ms 43.006 ms 5 * * * 6 209.85.248.88 (209.85.248.88) 147.061 ms 36.931 ms 28.063 ms 7 216.239.49.36 (216.239.49.36) 31.109 ms 33.292 ms 216.239.49.28 (216.239.49.28) 64.723 ms 8 209.85.255.130 (209.85.255.130) 49.350 ms 209.85.255.126 (209.85.255.126) 49.619 ms 209.85.255.122 (209.85.255.122) 52.416 ms 9 google-public-dns-a.google.com (8.8.8.8) 41.266 ms 44.054 ms 44.730 ms If you have any suggestions, please comment or answer. Thanks in advance.
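Two small checks that might help narrow this down (standard iptables/sysctl usage, not commands taken from the setup above): make the forwarding flag persistent, and watch the NAT rule's packet counters while pinging from the client.

echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf && sysctl -p
iptables -t nat -v -n -L POSTROUTING

If the MASQUERADE or SNAT counter stays at zero during the ping, the packets reach the VPS but never match the NAT rule; if it climbs and there is still no echo reply, the problem is more likely on the return path or in the provider's filtering.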

    Read the article

  • Single-port 2600 router with 2900XL switch

    - by Slava Maslennikov
    I have a setup, where the single port 2600 router is in port 0/2 in the switch, outside network is on port 0/1, and the rest (0/3-0/24) should be clients for the second network that would be managed by the 2600 router. I configured everything with two VLANs: 100 for outside (0/2-0/24), 200 for inside (0/1-0/2). 0/2 is a trunk port for the two VLANs. The issue that came about is that I can't have two VLANs on at once: software doesn't allow it. Now, I can ping the outside network devices (172.16.7.1, 172.16.7.103), and even google (8.8.8.8) from the router, but not the switch. Devices on connected get a DHCP lease properly but can't ping outside the network, just the router - 172.17.7.1 and the switch itself, 172.17.7.7. The configuration for both the router and the switch are here, as well as below. Router: rt.throom#sho run Building configuration... Current configuration : 1015 bytes ! version 12.1 no service single-slot-reload-enable service timestamps debug uptime service timestamps log uptime no service password-encryption ! hostname rt.throom ! enable password To053cret ! ! ! ! ! no ip subnet-zero ip dhcp excluded-address 172.17.7.1 172.17.7.2 ip dhcp excluded-address 172.17.7.3 172.17.7.4 ip dhcp excluded-address 172.17.7.5 ! ip dhcp pool VLAN200 network 172.17.7.0 255.255.255.0 default-router 172.17.7.1 dns-server 8.8.8.8 ! ip audit notify log ip audit po max-events 100 ! ! ! ! ! ! ! interface Ethernet0/0 no ip address ! interface Ethernet0/0.100 encapsulation dot1Q 100 ip address 172.16.7.15 255.255.255.0 ip nat outside ! interface Ethernet0/0.200 encapsulation dot1Q 200 ip address 172.17.7.1 255.255.255.0 ip nat inside ! router eigrp 20 network 172.16.0.0 network 172.17.0.0 no auto-summary no eigrp log-neighbor-changes ! no ip classless no ip http server ! access-list 1 permit 172.17.7.0 0.0.0.255 ! ! line con 0 line aux 0 line vty 0 4 login ! end Switch: sw.throom#sho run Building configuration... Current configuration: ! version 11.2 no service pad no service udp-small-servers no service tcp-small-servers ! hostname sw.throom ! enable password Oh5053cret ! ! no spanning-tree vlan 100 no spanning-tree vlan 200 ip subnet-zero ! ! interface VLAN1 no ip address no ip route-cache ! interface FastEthernet0/1 switchport access vlan 100 spanning-tree portfast ! interface FastEthernet0/2 switchport trunk encapsulation dot1q switchport mode trunk ! interface FastEthernet0/3 switchport access vlan 200 spanning-tree portfast ! interface FastEthernet0/4 switchport access vlan 200 spanning-tree portfast ! interface FastEthernet0/5 switchport access vlan 200 spanning-tree portfast ! interface FastEthernet0/6 switchport access vlan 200 spanning-tree portfast ! interface FastEthernet0/7 switchport access vlan 200 spanning-tree portfast ! interface FastEthernet0/8 switchport access vlan 200 spanning-tree portfast ! interface FastEthernet0/9 switchport access vlan 200 spanning-tree portfast ! interface FastEthernet0/10 switchport access vlan 200 spanning-tree portfast ! interface FastEthernet0/11 switchport access vlan 200 spanning-tree portfast ! interface FastEthernet0/12 switchport access vlan 200 spanning-tree portfast ! interface FastEthernet0/13 switchport access vlan 200 spanning-tree portfast ! interface FastEthernet0/14 switchport access vlan 200 spanning-tree portfast ! interface FastEthernet0/15 switchport access vlan 200 spanning-tree portfast ! interface FastEthernet0/16 switchport access vlan 200 spanning-tree portfast ! 
interface FastEthernet0/17 switchport access vlan 200 spanning-tree portfast ! interface FastEthernet0/18 switchport access vlan 200 spanning-tree portfast ! interface FastEthernet0/19 switchport access vlan 200 spanning-tree portfast ! interface FastEthernet0/20 switchport access vlan 200 spanning-tree portfast ! interface FastEthernet0/21 switchport access vlan 200 spanning-tree portfast ! interface FastEthernet0/22 switchport access vlan 200 spanning-tree portfast ! interface FastEthernet0/23 switchport access vlan 200 spanning-tree portfast ! interface FastEthernet0/24 switchport access vlan 200 spanning-tree portfast ! ! line con 0 stopbits 1 line vty 0 4 login line vty 5 9 login ! end sho ip route gives: Gateway of last resort is 172.16.7.1 to network 0.0.0.0 172.17.0.0/24 is subnetted, 1 subnets C 172.17.7.0 is directly connected, Ethernet0/0.200 172.16.0.0/24 is subnetted, 1 subnets C 172.16.7.0 is directly connected, Ethernet0/0.100 S* 0.0.0.0/0 [1/0] via 172.16.7.1
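One thing that stands out in the router config: Ethernet0/0.100 and 0/0.200 are marked ip nat outside/inside and access-list 1 permits 172.17.7.0/24, but there is no ip nat inside source statement tying them together, so if the upstream gateway has no route back to 172.17.7.0/24, client traffic leaves and never returns. A sketch of the missing overload statement, plus a management address and default gateway so the 2900XL itself can reach beyond its own subnet (the addresses reuse 172.17.7.7 and 172.17.7.1 from the description):

rt.throom(config)# ip nat inside source list 1 interface Ethernet0/0.100 overload

sw.throom(config)# interface VLAN200
sw.throom(config-if)# ip address 172.17.7.7 255.255.255.0
sw.throom(config-if)# no shutdown
sw.throom(config-if)# exit
sw.throom(config)# ip default-gateway 172.17.7.1

A 2900XL can only keep one management VLAN interface up at a time, which matches the "software doesn't allow it" behaviour; the switch does not need an address in both VLANs to carry them at layer 2.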

    Read the article

  • Need help configurating my Tomcat server without any WAR files

    - by gablin
    I just reinstalled my entire server, and now I can't seem to get my JSP-based website to work on Tomcat anymore. I use the same server.xml file, which worked perfectly before the reinstallation, but no longer. Here's the content of the server.xml file which worked before: <!--APR library loader. Documentation at /docs/apr.html --> <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" /> <!--Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html --> <Listener className="org.apache.catalina.core.JasperListener" /> <!-- JMX Support for the Tomcat server. Documentation at /docs/non-existent.html --> <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" /> <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" /> <!-- Global JNDI resources Documentation at /docs/jndi-resources-howto.html --> <GlobalNamingResources> <!-- Editable user database that can also be used by UserDatabaseRealm to authenticate users --> <Resource name="UserDatabase" auth="Container" type="org.apache.catalina.UserDatabase" description="User database that can be updated and saved" factory="org.apache.catalina.users.MemoryUserDatabaseFactory" pathname="conf/tomcat-users.xml" /> </GlobalNamingResources> <!-- A "Service" is a collection of one or more "Connectors" that share a single "Container" Note: A "Service" is not itself a "Container", so you may not define subcomponents such as "Valves" at this level. Documentation at /docs/config/service.html --> <Service name="Catalina"> <!--The connectors can use a shared executor, you can define one or more named thread pools--> <!-- <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="150" minSpareThreads="4"/> --> <!-- A "Connector" represents an endpoint by which requests are received and responses are returned. Documentation at : Java HTTP Connector: /docs/config/http.html (blocking & non-blocking) Java AJP Connector: /docs/config/ajp.html APR (HTTP/AJP) Connector: /docs/apr.html Define a non-SSL HTTP/1.1 Connector on port 8080 --> <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> <!-- A "Connector" using the shared thread pool--> <!-- <Connector executor="tomcatThreadPool" port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> --> <!-- Define a SSL HTTP/1.1 Connector on port 8443 This connector uses the JSSE configuration, when using APR, the connector should be using the OpenSSL style configuration described in the APR documentation --> <!-- <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" /> --> <!-- Define an AJP 1.3 Connector on port 8009 --> <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" /> <!-- An Engine represents the entry point (within Catalina) that processes every request. The Engine implementation for Tomcat stand alone analyzes the HTTP headers included with the request, and passes them on to the appropriate Host (virtual host). 
Documentation at /docs/config/engine.html --> <!-- You should set jvmRoute to support load-balancing via AJP ie : <Engine name="Standalone" defaultHost="localhost" jvmRoute="jvm1"> --> <Engine name="Catalina" defaultHost="localhost"> <!--For clustering, please take a look at documentation at: /docs/cluster-howto.html (simple how to) /docs/config/cluster.html (reference documentation) --> <!-- <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> --> <!-- The request dumper valve dumps useful debugging information about the request and response data received and sent by Tomcat. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.RequestDumperValve"/> --> <!-- This Realm uses the UserDatabase configured in the global JNDI resources under the key "UserDatabase". Any edits that are performed against this UserDatabase are immediately available for use by the Realm. --> <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/> <!-- Define the default virtual host Note: XML Schema validation will not work with Xerces 2.2. --> <!-- <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false"> --> <!-- SingleSignOn valve, share authentication between web applications Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> --> <!-- Access log processes all example. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log." suffix=".txt" pattern="common" resolveHosts="false"/> --> <!-- </Host> --> <Host name="www.rebootradio.nu"> <Alias>rebootradio.nu</Alias> <Context path="" docBase="D:/services/http/rebootradio.nu" debug="1" reloadable="true"/> </Host> </Engine> </Service> </Server> The JSP site doesn't use any WAR files or anything like that; there's just a default.jsp in the specified folder D:/services/http/rebootradio.nu which loads the site. As I said, this configuration worked before, but now with the latest verion of XAMPP and Tomcat it doesn't work anymore. All I get is a 404 message saying The requested resource () is not available.
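A minimal sketch of the per-host context descriptor that recent Tomcat versions prefer over a <Context> nested inside server.xml, assuming the Host entry for www.rebootradio.nu stays in place (the file path follows the standard conf/Catalina/<hostname>/ layout):

conf/Catalina/www.rebootradio.nu/ROOT.xml: <Context docBase="D:/services/http/rebootradio.nu" reloadable="true" />

Also worth checking: Tomcat's stock welcome-file list is index.html, index.htm and index.jsp, so a site whose entry page is default.jsp will answer / with a 404 like this unless default.jsp is added to a welcome-file-list in web.xml or requested by its full name.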

    Read the article

  • apache tomcat loadbalancing clustering on ubuntu

    - by user740010
    i am facing a problem in clustering the tomcat with apache as a loadbalancer using mod_jk on ubuntu. i have install apache2 on my ubuntu 11.04 and i have downloaded tomcat7 created two copies and kept them at two different location. 1st one is at /home/net4u/vishal/test/tomcatA 2nd one is at /home/net4u/vishal/test1/tomcatB i have made following changes to server.xml file in /conf folder 1. <Server port="8205" shutdown="SHUTDOWN"> 2. <Connector port="8280" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> 3.<Connector port="8209" protocol="AJP/1.3" redirectPort="8443" /> <Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcatB"> 4. <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> similarly i have modified other tomcat i.e tomcatA server.xml content of the server.xml is as follow: -- <!--The connectors can use a shared executor, you can define one or more named thread pools--> <!-- <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="150" minSpareThreads="4"/> --> <!-- A "Connector" represents an endpoint by which requests are received and responses are returned. Documentation at : Java HTTP Connector: /docs/config/http.html (blocking & non-blocking) Java AJP Connector: /docs/config/ajp.html APR (HTTP/AJP) Connector: /docs/apr.html Define a non-SSL HTTP/1.1 Connector on port 8080 --> <Connector port="8280" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> <!-- A "Connector" using the shared thread pool--> <!-- <Connector executor="tomcatThreadPool" port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> --> <!-- Define a SSL HTTP/1.1 Connector on port 8443 This connector uses the JSSE configuration, when using APR, the connector should be using the OpenSSL style configuration described in the APR documentation --> <!-- <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" /> --> <!-- Define an AJP 1.3 Connector on port 8009 --> <Connector port="8109" protocol="AJP/1.3" redirectPort="8443" /> <!-- An Engine represents the entry point (within Catalina) that processes every request. The Engine implementation for Tomcat stand alone analyzes the HTTP headers included with the request, and passes them on to the appropriate Host (virtual host). Documentation at /docs/config/engine.html --> <!-- You should set jvmRoute to support load-balancing via AJP ie : <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1"> --> <Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcatB"> <!--For clustering, please take a look at documentation at: /docs/cluster-howto.html (simple how to) /docs/config/cluster.html (reference documentation) --> <!-- uncomment for clustering--> <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> <!-- Use the LockOutRealm to prevent attempts to guess user passwords via a brute-force attack --> <Realm className="org.apache.catalina.realm.LockOutRealm"> <!-- This Realm uses the UserDatabase configured in the global JNDI resources under the key "UserDatabase". Any edits that are performed against this UserDatabase are immediately available for use by the Realm. 
--> <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/> </Realm> <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true"> <!-- SingleSignOn valve, share authentication between web applications Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> --> <!-- Access log processes all example. Documentation at: /docs/config/valve.html Note: The pattern used is equivalent to using pattern="common" --> <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log." suffix=".txt" pattern="%h %l %u %t &quot;%r&quot; %s %b" resolveHosts="false"/> </Host> </Engine> i have install libapache2-mod-jk step 1. i have Created jk.load file in /etc/apache2/mods-enabled/jk.load content is as follows: LoadModule jk_module /usr/lib/apache2/modules/mod_jk.so Create /etc/apache2/mods-enabled/jk.conf: JkWorkersFile /etc/apache2/workers.properties JkLogFile /var/log/apache2/jk.log JkMount /ecommerce/* worker1 JkMount /images/* worker1 JkMount /content/* worker1 step 2. Created workers.properties file in /etc/apache2/workers.properties content is as follows: workers.tomcat_home=/home/vishal/Desktop/test/tomcatA workers.java_home=/usr/lib/jvm/default-java ps=/ worker.list=tomcatA,tomcatB,loadbalancer   worker.tomcatA.port=8109 worker.tomcatA.host=localhost worker.tomcatA.type=ajp13 worker.tomcatA.lbfactor=1   worker.tomcatB.port=8209 worker.tomcatB.host=localhost worker.tomcatB.type=ajp13 worker.tomcatB.lbfactor=1 worker.loadbalancer.type=lb worker.loadbalancer.balanced_workers=tomcatA,tomcatB worker.loadbalancer.sticky_session=1 i tried the same thing on the windows machine it is working.
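One mismatch stands out on the Apache side: the JkMount lines in jk.conf route /ecommerce/*, /images/* and /content/* to a worker called worker1, but workers.properties only defines tomcatA, tomcatB and loadbalancer. A sketch of the same mounts pointed at the balancer instead (only the worker name changes):

JkMount /ecommerce/* loadbalancer
JkMount /images/* loadbalancer
JkMount /content/* loadbalancer

For sticky sessions with worker.loadbalancer.sticky_session=1, the jvmRoute in each Tomcat's server.xml must match its worker name, which tomcatA and tomcatB already do here.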

    Read the article

  • OpenVPN stopped working, what could have happened?

    - by jaja
    I have Openvpn, and it worked great when I used it on PC (Windows 8), then I copied all files (Certificates and config) to an Android 4 phone to use them. Now, Openvpn works on the phone, but not the PC. Specifically, when I open Google I get: The server at www.google.com can't be found, because the DNS lookup failed, but the VPN seems to be connected. I have a simple question, could the problem be because I copied the same files? Routing table before connecting:- IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 192.168.1.254 192.168.1.101 25 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 192.168.1.0 255.255.255.0 On-link 192.168.1.101 281 192.168.1.101 255.255.255.255 On-link 192.168.1.101 281 192.168.1.255 255.255.255.255 On-link 192.168.1.101 281 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 192.168.1.101 281 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 192.168.1.101 281 =========================================================================== Routing table after connecting:- IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 192.168.1.254 192.168.1.101 25 0.0.0.0 128.0.0.0 10.8.0.5 10.8.0.6 30 10.8.0.1 255.255.255.255 10.8.0.5 10.8.0.6 30 10.8.0.4 255.255.255.252 On-link 10.8.0.6 286 10.8.0.6 255.255.255.255 On-link 10.8.0.6 286 10.8.0.7 255.255.255.255 On-link 10.8.0.6 286 **.**.***.** 255.255.255.255 192.168.1.254 192.168.1.101 25 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 128.0.0.0 128.0.0.0 10.8.0.5 10.8.0.6 30 192.168.1.0 255.255.255.0 On-link 192.168.1.101 281 192.168.1.101 255.255.255.255 On-link 192.168.1.101 281 192.168.1.255 255.255.255.255 On-link 192.168.1.101 281 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 192.168.1.101 281 224.0.0.0 240.0.0.0 On-link 10.8.0.6 286 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 192.168.1.101 281 255.255.255.255 255.255.255.255 On-link 10.8.0.6 286 =========================================================================== Server conf:- port 1194 proto udp dev tun ca ca.crt cert myservername.crt key myservername.key dh dh1024.pem server 10.8.0.0 255.255.255.0 ifconfig-pool-persist ipp.txt duplicate-cn keepalive 10 120 comp-lzo persist-key persist-tun status openvpn-status.log verb 3 push "redirect-gateway def1" Client conf:- client dev tun proto udp remote 89.32.148.35 1194 resolv-retry infinite nobind persist-key persist-tun mute-replay-warnings ca ca.crt cert client1.crt key client1.key verb 3 comp-lzo redirect-gateway def1 Here is the log file:- Tue Dec 18 16:34:27 2012 OpenVPN 2.2.2 Win32-MSVC++ [SSL] [LZO2] [PKCS11] built on Dec 15 2011 Tue Dec 18 16:34:27 2012 WARNING: No server certificate verification method has been enabled. See http://openvpn.net/howto.html#mitm for more info. 
Tue Dec 18 16:34:27 2012 NOTE: OpenVPN 2.1 requires '--script-security 2' or higher to call user-defined scripts or executables Tue Dec 18 16:34:27 2012 LZO compression initialized Tue Dec 18 16:34:27 2012 Control Channel MTU parms [ L:1542 D:138 EF:38 EB:0 ET:0 EL:0 ] Tue Dec 18 16:34:27 2012 Socket Buffers: R=[65536-65536] S=[65536-65536] Tue Dec 18 16:34:27 2012 Data Channel MTU parms [ L:1542 D:1450 EF:42 EB:135 ET:0 EL:0 AF:3/1 ] Tue Dec 18 16:34:27 2012 Local Options hash (VER=V4): '41690919' Tue Dec 18 16:34:27 2012 Expected Remote Options hash (VER=V4): '530fdded' Tue Dec 18 16:34:27 2012 UDPv4 link local: [undef] Tue Dec 18 16:34:27 2012 UDPv4 link remote: ..*.:1194 Tue Dec 18 16:34:27 2012 TLS: Initial packet from ..*.:1194, sid=4d1496ad 2079a5fa Tue Dec 18 16:34:28 2012 VERIFY OK: depth=1, /C=/ST=/L=/O=/OU=/CN=/name=/emailAddress= Tue Dec 18 16:34:28 2012 VERIFY OK: depth=0, /C=/ST=/L=/O=/OU=/CN=/name=/emailAddress= Tue Dec 18 16:34:29 2012 Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key Tue Dec 18 16:34:29 2012 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication Tue Dec 18 16:34:29 2012 Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key Tue Dec 18 16:34:29 2012 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication Tue Dec 18 16:34:29 2012 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA Tue Dec 18 16:34:29 2012 [myservername] Peer Connection Initiated with ..*.:1194 Tue Dec 18 16:34:32 2012 SENT CONTROL [myservername]: 'PUSH_REQUEST' (status=1) Tue Dec 18 16:34:32 2012 PUSH: Received control message: 'PUSH_REPLY,redirect-gateway def1,route 10.8.0.1,topology net30,ping 10,ping-restart 120,ifconfig 10.8.0.6 10.8.0.5' Tue Dec 18 16:34:32 2012 OPTIONS IMPORT: timers and/or timeouts modified Tue Dec 18 16:34:32 2012 OPTIONS IMPORT: --ifconfig/up options modified Tue Dec 18 16:34:32 2012 OPTIONS IMPORT: route options modified Tue Dec 18 16:34:32 2012 ROUTE default_gateway=192.168.1.254 Tue Dec 18 16:34:32 2012 TAP-WIN32 device [Local Area Connection] opened: \.\Global{F0CFEBBF-9B1B-4CFB-8A82-027330974C30}.tap Tue Dec 18 16:34:32 2012 TAP-Win32 Driver Version 9.9 Tue Dec 18 16:34:32 2012 TAP-Win32 MTU=1500 Tue Dec 18 16:34:32 2012 Notified TAP-Win32 driver to set a DHCP IP/netmask of 10.8.0.6/255.255.255.252 on interface {F0CFEBBF-9B1B-4CFB-8A82-027330974C30} [DHCP-serv: 10.8.0.5, lease-time: 31536000] Tue Dec 18 16:34:32 2012 Successful ARP Flush on interface [26] {F0CFEBBF-9B1B-4CFB-8A82-027330974C30} Tue Dec 18 16:34:37 2012 TEST ROUTES: 2/2 succeeded len=1 ret=1 a=0 u/d=up Tue Dec 18 16:34:37 2012 C:\WINDOWS\system32\route.exe ADD ..*. 
MASK 255.255.255.255 192.168.1.254 Tue Dec 18 16:34:37 2012 ROUTE: CreateIpForwardEntry succeeded with dwForwardMetric1=25 and dwForwardType=4 Tue Dec 18 16:34:37 2012 Route addition via IPAPI succeeded [adaptive] Tue Dec 18 16:34:37 2012 C:\WINDOWS\system32\route.exe ADD 0.0.0.0 MASK 128.0.0.0 10.8.0.5 Tue Dec 18 16:34:37 2012 ROUTE: CreateIpForwardEntry succeeded with dwForwardMetric1=30 and dwForwardType=4 Tue Dec 18 16:34:37 2012 Route addition via IPAPI succeeded [adaptive] Tue Dec 18 16:34:37 2012 C:\WINDOWS\system32\route.exe ADD 128.0.0.0 MASK 128.0.0.0 10.8.0.5 Tue Dec 18 16:34:37 2012 ROUTE: CreateIpForwardEntry succeeded with dwForwardMetric1=30 and dwForwardType=4 Tue Dec 18 16:34:37 2012 Route addition via IPAPI succeeded [adaptive] Tue Dec 18 16:34:37 2012 C:\WINDOWS\system32\route.exe ADD 10.8.0.1 MASK 255.255.255.255 10.8.0.5 Tue Dec 18 16:34:37 2012 ROUTE: CreateIpForwardEntry succeeded with dwForwardMetric1=30 and dwForwardType=4 Tue Dec 18 16:34:37 2012 Route addition via IPAPI succeeded [adaptive] Tue Dec 18 16:34:37 2012 Initialization Sequence Completed
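The server config above pushes redirect-gateway def1 but no name server, so once the default route moves into the tunnel the client keeps whatever DNS it was using before, and a failing DNS lookup over an otherwise connected tunnel is the classic symptom of that. A sketch of pushing a resolver from server.conf (8.8.8.8 is only an example resolver):

push "dhcp-option DNS 8.8.8.8"

The Windows client applies this through the TAP adapter's DHCP negotiation already visible in the log; comparing whether the Android profile carries its own DNS setting would also be worth doing.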

    Read the article

< Previous Page | 82 83 84 85 86 87 88 89 90 91 92  | Next Page >