Search Results

Search found 53148 results on 2126 pages for 'ubuntu 10 4'.


  • Exim4 delivery to parent domain.

    - by bruor
    I've set up an Ubuntu Server 9.10 system with exim4 as a relay for e-mail on my network. I've told exim it is part of a subdomain, sub.domain.com. I can test and deliver messages fine to my gmail accounts, but I cannot get exim4 to send messages to [email protected]. The error in the logs shows that exim thinks it should deliver messages for domain.com to localhost instead of to the actual MX for domain.com. Is there an easy way to modify the debconf-managed update-exim4.conf.conf so that it has the relay_to_domains capability?
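    (A minimal sketch of the Debconf-managed settings that usually govern this, assuming the stock Ubuntu layout; the domain names are placeholders from the question. If domain.com ended up in dc_other_hostnames, exim would treat it as local and deliver to localhost, which matches the symptom.)

        # /etc/exim4/update-exim4.conf.conf (Debconf-managed)
        dc_eximconfig_configtype='internet'   # resolve remote domains via MX lookup
        dc_other_hostnames='sub.domain.com'   # ONLY domains this host is the final destination for;
                                              # listing domain.com here would cause local delivery
        dc_relay_domains=''                   # domains relayed onward without local delivery

        # apply and restart
        sudo update-exim4.conf && sudo /etc/init.d/exim4 restart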

    Read the article

  • Profit's COLLABORATE 10 Session Selections

    - by Aaron Lazenby
    COLLABORATE 2010 is a mere 11 days away (thanks for the reminder @ocp_advisor). Every year I publish a list of the sessions I think reflect some of the more interesting people/trends in enterprise IT. I should be at all of these sessions, so drop by for a chat--I'll be the guy tapping out emails on my iPad...

Monday, April 19

9:15 a.m. - Keynote: Transforming Customer Value, Delivering Highest Customer Service. Location: Keynote Hall
I never miss Charles Phillips when he speaks--it's one of the best opportunities to get an update on Oracle product developments and strategy. And there's certainly occasion for an update: this will be Phillips' first big presentation since the Oracle + Sun Strategy Update in late January. Phillips is appearing with Oracle Executive Vice President of Development Thomas Kurian, which means there should be some excellent information about how customers are using Oracle's complete software and hardware stack to address enterprise IT challenges. The session should provide some excellent context for the rest of the week's sessions...don't miss it.

10:45 a.m. - Oracle Fusion Applications: Functional Overview. Location: South Seas F
I met Basheer Khan at COLLABORATE 08 in Denver and have followed his work ever since. He's a former member of the OAUG Board of Directors, an Oracle ACE, and a charismatic enterprise IT expert. Having worked with the Oracle Usability Advisory Board, Basheer should have some fascinating insights to share about the features and interface of Oracle's Fusion Applications. This session, along with Nadia Bendjedou's "10 Things You Can Do Today to Prepare for the Next Generation Applications" (Tuesday, April 20, 8:00 a.m., room 3662), should give attendees the update they need about Oracle's next-generation applications.

1:15 p.m. - E-Business Suite in the Amazon Cloud. Location: South Seas H
I did my first full-fledged cloud computing coverage at last year's COLLABORATE show (check out my interview with Oracle's Bill Hodak), where I first learned about Amazon's EC2 offering. I've since talked with several people who have provisioned server space on Amazon's cloud with great results. So I'm looking forward to watching the audience configure an instance of the Oracle E-Business Suite release 12 on the cloud while Chuck Edwards from Blue Gecko drives. This session should take some of the mist and vapor out of the cloud conversation.

2:30 p.m. - "Zero Sign-on" to EBS - Enabling 96000 Users to Login to EBS Without User Maintenance. Location: South Seas H
I'll be sitting tight in South Seas H for the next session on Monday, where Doug Pepka, a ten-year veteran of communications giant Comcast, will walk attendees through a massive single sign-on (SSO) project across the enterprise. I'm working on a story about SSO for the August issue of Profit, so this session has real practical value to me. Plus the proliferation of user account logins--both personal and professional--makes this a critical usability/change management issue for IT leaders planning for successful long-term IT implementations.

Tuesday

8:00 a.m. - Information Architecture for Men in Kilts. Location: SURF A
Getting to an 8:00 a.m. presentation is a tall order in Las Vegas, but presenter Billy Cripe will make it worth your effort. Not only is the title of this session great, but the content should appeal to any IT strategist looking to push the limits of Web 2.0 technologies in the enterprise. Cripe is a product management director of Enterprise 2.0 and Enterprise Content Management at Oracle, author of Reshaping Your Business with Web 2.0, and a prolific blogger--he knows how information architecture is critical to an Enterprise 2.0 implementation.

10:30 a.m. - Oracle Virtualization: From Desktop to Data Center. Location: REEF F
Data center virtualization is still one of the best ways to reduce the cost of running enterprise IT. With the addition of Sun products, Oracle has the industry's most comprehensive virtualization portfolio. I must admit, I'm no expert in this subject, so I'm looking forward to Monica Kumar's presentation so I can get up to speed.

Wednesday

8:00 a.m. - The Art of the Steal. Location: Mandalay Bay Ballroom J
Many will know Frank Abagnale from Steven Spielberg's 2002 film "Catch Me if You Can." The one-time con man and international fugitive who swindled $2.5 million in forged checks went on to help U.S. federal officials investigate fraud cases. Now the CEO of Abagnale and Associates, he has become an invaluable source to the business world on the subject of fraud and fraud protection. With identity theft and digital fraud still on the rise, this session should be an entertaining, and sobering, education on the threats facing businesses and customers around the world. A great way to start Wednesday.

1:00 p.m. - Google Wave: Will it replace e-mail as we know it today? Location: SURF E
By many assessments (my own included), Google Wave is a bit of an open collaboration failure. It may seem like an odd reason for me to be excited about this session, but I'm looking forward to the chance to revisit the technology. Also, this is a great case study in connecting free, available Internet tools to existing enterprise computing environments--an issue that IT strategists must contend with as workers spread out and choose their own productivity tools.

    Read the article

  • Change Gnome-Panel profile at startup according to number of displays

    - by ifischer
    I'm running Ubuntu 10.04 on a laptop. I have a startup script that enables external displays if they are connected; it runs at GDM startup, configured in /etc/gdm/Init/Default. When I'm running without external displays, Gnome should use 2 panels. When I'm using 2 external displays, Gnome should add an additional panel to the second display. But this should of course be removed again if I detach the external displays (and restart). Can I configure this use case using Gnome panel profiles? I read that there is a startup option "--profile" for gnome-panel, but I don't know how and where I could switch the profile, especially because this has to be done after detecting the number of displays. Or can I add a general Gnome profile and switch between those profiles somehow to achieve this behavior?
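    (One hedged way to pick a profile at GDM startup. The xrandr counting is standard; the "--profile" switch and the profile names "single"/"dual" are assumptions taken from the question, not tested behavior.)

        # sketch for /etc/gdm/Init/Default, assuming gnome-panel accepts --profile
        # and that profiles named "single" and "dual" were created beforehand
        CONNECTED=$(xrandr -q | grep -cw connected)
        if [ "$CONNECTED" -ge 2 ]; then
            PROFILE=dual      # hypothetical profile with the extra panel
        else
            PROFILE=single    # hypothetical two-panel default
        fi
        gnome-panel --profile "$PROFILE" &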

    Read the article

  • APress Deal of the Day 10/August/2014 - Pro ASP.NET Web API Security

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2014/08/10/apress-deal-of-the-day-10august2014---pro-asp.net-web.aspx
Today’s $10 Deal of the Day from APress at http://www.apress.com/9781430257820 is Pro ASP.NET Web API Security. “ASP.NET Web API is a key part of ASP.NET MVC 4. It has become the platform of choice for building RESTful services. Securing ASP.NET Web API applications requires a move away from traditional WCF-based techniques in favor of new SOAP-less methods. The evaluation, selection and analysis of these new techniques is the focus of this book.”

    Read the article

  • Is the field BusID necessary in XF86Config?

    - by Greg
    Hello, I am using a cluster of machines running Ubuntu 10.04 LTS which are supposed to be homogeneous, but apparently they are not. In particular, I am configuring the X server on these machines, and I pushed a /etc/X11/XF86Config that includes the following section:

        Section "Device"
            Identifier "Device0"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            BusID "PCI:5:0:0"
        EndSection

The problem is that the BusID of the graphics card is PCI:5:0:0 on some machines and PCI:3:0:0 on others. Is there a way to have the X server detect the appropriate device automatically (based on the name, for instance)? Thanks,
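    (A hedged workaround: derive the BusID per host from lspci when pushing the config. The snippet below is illustrative; it assumes gawk's strtonum, since lspci prints the slot in hex while X expects decimal.)

        # lspci prints e.g. "05:00.0 VGA compatible controller: NVIDIA Corporation ..."
        BUSID=$(lspci | gawk '/VGA compatible controller: NVIDIA/ {
            split($1, a, "[:.]")
            printf "PCI:%d:%d:%d", strtonum("0x" a[1]), strtonum("0x" a[2]), strtonum("0x" a[3])
            exit }')
        # rewrite the pushed template with this host's value
        sed -i "s|BusID .*|BusID \"$BUSID\"|" /etc/X11/XF86Config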

    Read the article

  • problem with loading in .FBX meshes in DirectX 10

    - by N0xus
    I'm trying to load meshes into DirectX 10. I've created a bunch of classes that handle it and allow me to call in a mesh with only a single line of code in my main game class. However, when I run the program the mesh does not render correctly, and the following errors keep appearing in the debug output window:

D3D10: ERROR: ID3D10Device::DrawIndexed: Input Assembler - Vertex Shader linkage error: Signatures between stages are incompatible. The reason is that Semantic 'TEXCOORD' is defined for mismatched hardware registers between the output stage and input stage. [ EXECUTION ERROR #343: DEVICE_SHADER_LINKAGE_REGISTERINDEX ]

D3D10: ERROR: ID3D10Device::DrawIndexed: Input Assembler - Vertex Shader linkage error: Signatures between stages are incompatible. The reason is that the input stage requires Semantic/Index (POSITION,0) as input, but it is not provided by the output stage. [ EXECUTION ERROR #342: DEVICE_SHADER_LINKAGE_SEMANTICNAME_NOT_FOUND ]

The thing is, I've no idea how to fix this. The code I'm using does work; I've simply brought all of it into a new project of mine. There are no build errors, and this only appears while the game is running. The .fx file is as follows:

    float4x4 matWorld;
    float4x4 matView;
    float4x4 matProjection;

    struct VS_INPUT
    {
        float4 Pos      : POSITION;
        float2 TexCoord : TEXCOORD;
    };

    struct PS_INPUT
    {
        float4 Pos      : SV_POSITION;
        float2 TexCoord : TEXCOORD;
    };

    Texture2D diffuseTexture;
    SamplerState diffuseSampler
    {
        Filter = MIN_MAG_MIP_POINT;
        AddressU = WRAP;
        AddressV = WRAP;
    };

    //
    // Vertex Shader
    //
    PS_INPUT VS( VS_INPUT input )
    {
        PS_INPUT output = (PS_INPUT)0;
        float4x4 viewProjection = mul( matView, matProjection );
        float4x4 worldViewProjection = mul( matWorld, viewProjection );
        output.Pos = mul( input.Pos, worldViewProjection );
        output.TexCoord = input.TexCoord;
        return output;
    }

    //
    // Pixel Shader
    //
    float4 PS( PS_INPUT input ) : SV_Target
    {
        return diffuseTexture.Sample( diffuseSampler, input.TexCoord );
        //return float4(1.0f,1.0f,1.0f,1.0f);
    }

    RasterizerState NoCulling
    {
        FILLMODE = SOLID;
        CULLMODE = NONE;
    };

    technique10 Render
    {
        pass P0
        {
            SetVertexShader( CompileShader( vs_4_0, VS() ) );
            SetGeometryShader( NULL );
            SetPixelShader( CompileShader( ps_4_0, PS() ) );
            SetRasterizerState( NoCulling );
        }
    }

In my game, the .fx file and model are called and set as follows.

Loading the shader file:

    //Set the shader flags - BMD
    DWORD dwShaderFlags = D3D10_SHADER_ENABLE_STRICTNESS;
    #if defined( DEBUG ) || defined( _DEBUG )
        dwShaderFlags |= D3D10_SHADER_DEBUG;
    #endif

    ID3D10Blob* pErrorBuffer = NULL;
    if( FAILED( D3DX10CreateEffectFromFile( TEXT("TransformedTexture.fx"),
        NULL, NULL, "fx_4_0", dwShaderFlags, 0, md3dDevice, NULL, NULL,
        &m_pEffect, &pErrorBuffer, NULL ) ) )
    {
        char* pErrorStr = (char*)pErrorBuffer->GetBufferPointer();
        //If the creation of the Effect fails then a message box will be shown
        MessageBoxA( NULL, pErrorStr, "Error", MB_OK );
        return false;
    }

    //Get the technique called Render from the effect, we need this for rendering later on
    m_pTechnique = m_pEffect->GetTechniqueByName( "Render" );

    //Number of elements in the layout
    UINT numElements = TexturedLitVertex::layoutSize;

    //Get the pass description, we need this to bind the vertex to the pipeline
    D3D10_PASS_DESC PassDesc;
    m_pTechnique->GetPassByIndex( 0 )->GetDesc( &PassDesc );

    //Create input layout to describe the incoming buffer to the input assembler
    if( FAILED( md3dDevice->CreateInputLayout( TexturedLitVertex::layout, numElements,
        PassDesc.pIAInputSignature, PassDesc.IAInputSignatureSize, &m_pVertexLayout ) ) )
    {
        return false;
    }

Model loading:

    m_pTestRenderable = new CRenderable();
    //m_pTestRenderable->create<TexturedVertex>(md3dDevice,8,6,vertices,indices);
    m_pModelLoader = new CModelLoader();
    m_pTestRenderable = m_pModelLoader->loadModelFromFile( md3dDevice, "armoredrecon.fbx" );
    m_pGameObjectTest = new CGameObject();
    m_pGameObjectTest->setRenderable( m_pTestRenderable );

    //Set primitive topology, how we are going to interpret the vertices in the vertex buffer
    md3dDevice->IASetPrimitiveTopology( D3D10_PRIMITIVE_TOPOLOGY_TRIANGLELIST );

    if( FAILED( D3DX10CreateShaderResourceViewFromFile( md3dDevice,
        TEXT("armoredrecon_diff.png"), NULL, NULL, &m_pTextureShaderResource, NULL ) ) )
    {
        MessageBox( NULL, TEXT("Can't load Texture"), TEXT("Error"), MB_OK );
        return false;
    }

    m_pDiffuseTextureVariable = m_pEffect->GetVariableByName( "diffuseTexture" )->AsShaderResource();
    m_pDiffuseTextureVariable->SetResource( m_pTextureShaderResource );

Finally, the draw function code:

    //All drawing will occur between the clear and present
    m_pViewMatrixVariable->SetMatrix( (float*)m_matView );
    m_pWorldMatrixVariable->SetMatrix( (float*)m_pGameObjectTest->getWorld() );

    //Get the stride (size) of a vertex, we need this to tell the pipeline the size of one vertex
    UINT stride = m_pTestRenderable->getStride();
    //The offset from the start of the buffer to where our vertices are located
    UINT offset = m_pTestRenderable->getOffset();
    ID3D10Buffer* pVB = m_pTestRenderable->getVB();

    //Bind the vertex buffer to the input assembler stage
    md3dDevice->IASetVertexBuffers( 0, 1, &pVB, &stride, &offset );
    md3dDevice->IASetIndexBuffer( m_pTestRenderable->getIB(), DXGI_FORMAT_R32_UINT, 0 );

    //Get the description of the technique, we need this in order to loop through each pass
    D3D10_TECHNIQUE_DESC techDesc;
    m_pTechnique->GetDesc( &techDesc );

    //Loop through the passes in the technique
    for( UINT p = 0; p < techDesc.Passes; ++p )
    {
        //Get a pass at the current index and apply it
        m_pTechnique->GetPassByIndex( p )->Apply( 0 );
        //Draw call
        md3dDevice->DrawIndexed( m_pTestRenderable->getNumOfIndices(), 0, 0 );
        //m_pD3D10Device->Draw(m_pTestRenderable->getNumOfVerts(),0);
    }

Is there anything I've clearly done wrong or am missing? I've spent 2 weeks trying to work out what on earth I've done wrong, to no avail. Any insight a fresh pair of eyes could give on this would be great.
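    (Both linkage errors say the input layout bound at draw time does not declare the POSITION/TEXCOORD semantics the compiled VS() expects. Two observations, hedged: first, the draw path shown above never calls IASetInputLayout, which alone would produce exactly these errors; second, TexturedLitVertex's real field layout isn't shown in the question, so the formats and byte offsets below are assumptions, not the asker's actual definitions.)

    // hypothetical element descriptors matching VS_INPUT (Pos : POSITION, TexCoord : TEXCOORD)
    D3D10_INPUT_ELEMENT_DESC layout[] =
    {
        // semantic, index, format, input slot, byte offset, classification, step rate
        { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,  D3D10_INPUT_PER_VERTEX_DATA, 0 },
        { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    0, 12, D3D10_INPUT_PER_VERTEX_DATA, 0 },
    };
    // and make sure the layout is actually bound before DrawIndexed:
    md3dDevice->IASetInputLayout( m_pVertexLayout );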

    Read the article

  • Two timer applets in notification area

    - by 1passenger
    Hi, after installing Ubuntu 10.04 Remix I can see two timer applets in the notification area. When I click on the first one, I can see the current date and the menus "Open Calendar" and "Set Time and Date". When I click on the second one, I can see a small calendar of the current month and a small world map with my defined locations. I just want to have only one timer, to save some space in the notification area. How can I disable one of the two applets? Clicking "Remove from Panel" isn't possible in the Remix edition!

    Read the article

  • Configuring Mail Relay

    - by ServerChecker
    I'm running Ubuntu Server 9.10 with Postfix and Webmin. I have created virtual hosts for 3 domains following this serverfault.com answer, but the mail isn't relaying out to the world. I have 3 domains tied into my DNS in Webmin; inside DNS I also clicked Mail Server and followed the instructions from an article on the web. The domains and the web servers work just fine, and FTP works too. So the remaining problem is mail: it can't be forwarded out to a Gmail account for some reason. Note that I'm just trying to do the "easy version" of the Postfix config, so if your answer is in Webmin terms, that would help me. However, I can edit a text file if you suggest one.
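    (A hedged sketch of the main.cf settings that govern outbound delivery, assuming a stock Postfix install; mail.example.com is a placeholder. Also worth checking: many ISPs block outbound port 25, which produces exactly this "works locally, never reaches Gmail" symptom.)

        # /etc/postfix/main.cf -- minimal outbound-delivery settings (sketch)
        myhostname = mail.example.com            # hypothetical FQDN
        mydestination = $myhostname, localhost   # domains delivered locally
        relayhost =                              # empty = deliver straight to the recipient's MX;
                                                 # set to your ISP's smarthost if port 25 is blocked

        # then restart and watch the log for the actual delivery error:
        sudo /etc/init.d/postfix restart
        tail -f /var/log/mail.log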

    Read the article

  • Gnome keyboard shortcuts dysfunctional after waking up from sleep

    - by dennis2008
    Hello, I am using Ubuntu 9.10 on my Thinkpad T61. Very often after the system wakes up from sleep, all the Gnome shortcuts go dead (nothing happens when the key combinations are pressed). Things like Alt+F4 and Alt+Tab are dead; things like Ctrl+C/Ctrl+V are OK; buttons such as volume up/down are also OK. I tried to Google for answers but wasn't lucky enough. Any idea how to solve this? Thanks! If it's not preventable, can I at least restore the shortcuts without restarting the session?
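    (A hedged workaround for restoring the shortcuts without logging out, based on which component owns which binding in GNOME 2: Alt+Tab and Alt+F4 belong to the window manager, while the still-working volume keys belong to the settings daemon. Untested on this exact setup.)

        # restart the window manager in place; it re-registers Alt+Tab, Alt+F4, etc.
        metacity --replace &
        # if other desktop-wide bindings are also dead, restart the settings daemon
        # (path varies by release; try: which gnome-settings-daemon)
        gnome-settings-daemon &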

    Read the article

  • Verizon Fivespot firewall

    - by Patrick
    I have a Verizon Fivespot Wi-Fi router and am having issues connecting to the computer that uses it to get on the internet. I am able to connect to the Fivespot admin pages remotely, and I am able to connect to the internet from the computer behind the Fivespot. There are two sections pertinent to this issue: Port Filtering and Port Forwarding. I've tried each individually, and both together, but cannot access anything through the router except for the admin page. I am trying to connect through SSH to an Ubuntu 10.04 box over wifi. I have called Verizon tech support, but they were unhelpful; the person essentially read what it says on each screen without any elaboration. Any help is greatly appreciated!
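    (A hedged way to narrow down which side is failing before touching the router again; the WAN address 203.0.113.7 is a placeholder for whatever the Fivespot shows on its status page.)

        # on the Ubuntu box behind the Fivespot: confirm sshd is actually listening
        sudo netstat -tlnp | grep :22
        # from a machine outside, test against the Fivespot's WAN address, verbosely;
        # "connection refused" vs. a silent timeout points at forwarding vs. filtering
        ssh -v [email protected]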

    Read the article

  • Ubuntu 10.10 Ad-Hoc Setup (from Wireless Router, to Ubuntu Server/Desktop to Wireless Router)

    - by user60375
    Okay, so I know there are different approaches for this, but I will explain my story briefly before getting to the technical stuff. My fiancée and I are going through some financial issues (as I assume a lot of us are). We ended up having to move from our house and stay with some friends/family for 6 months, just to get ourselves caught up (medical bills, among other issues, etc.). So this is where it gets fun.

At our friends' house, we are staying in the loft, which is not near the cable modem and wireless router. I have a "hand-crafted" media center running XBMC, an Ubuntu 10.10 Server/Desktop (multi-purpose, very powerful, with tons of drive space), two working laptops, and, between the two of us, multiple wireless devices/phones. Our friends' wireless router doesn't have any options for assigning IP addresses, but my router does.

My current setup is: friends' cable modem -- friends' wireless router -- Ubuntu 10.10 Server -- my wireless router. The Ubuntu server takes the incoming wireless connection from the friends' router and shares it out over ETH0 to my wireless router, which serves all of our devices. I set up my router to use default settings from my friends' router, using Google's DNS on my router (DNS setup disabled on the Ubuntu server); everything is assigned nicely and runs smoothly. My Ubuntu server was given the address 10.42.43.1 (assuming the standard from Network-Manager). On the Ubuntu machine that shares its connection to my wireless router, I have some server apps installed, but mainly I just use Samba/NFS/Tangerine.

My problem: every device can access the internet through my router, the media center has an assigned IP address, and all services from all devices (ZeroConf, Avahi, Bonjour, GIT, SSH, FTP, Apache2, etc.) work correctly -- except from my Ubuntu server (the one that shares the wireless connection over ETH0 to my wireless router). The Ubuntu 10.10 Server/Desktop is not broadcasting anything (the Zeroconf Service Discovery 0.4 Gnome applet shows the services from the Ubuntu server, but no other computers can see them). I can access it from my media center (running Xubuntu 10.04) if I point it at 10.42.43.1, no problem. But I cannot access Tangerine (daapd), and the Samba shares do not show up on any computers for 10.42.43.1 -- they're not in the WORKGROUP (Samba is set up simple and default), though I can point computers at that address and the shares will mount, except on a damn Windows 7 partition.

Is this an issue with how I have my router set up, possibly the gateway? An issue with Network-Manager? An issue with my Ubuntu Server/Desktop? I know there is a lot to that, but it's simpler than I have probably explained. Any help would be appreciated. If you need more details, I can provide them. If there is a better way of attempting this home network, please let me know. Thanks in advance for the help.
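    (The symptom -- shares reachable by IP but invisible by browsing -- is typical when NetBIOS broadcasts don't cross from the server into the 10.42.43.x subnet. A hedged sketch for /etc/samba/smb.conf on the Ubuntu server; the broadcast address is inferred from the 10.42.43.1 address above and may need adjusting.)

        [global]
           workgroup = WORKGROUP
           wins support = yes                 # let this box act as the WINS server for name resolution
           remote announce = 10.42.43.255     # announce shares to the inner subnet's broadcast address

        # then restart Samba (init script name varies by release):
        # sudo service smbd restart    or    sudo /etc/init.d/samba restart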

    Read the article

  • Detecting source of memory usage on a Linux box

    - by apeace
    I have a toy Linux box with 256mb RAM running Ubuntu 10.04.1 LTS. Here is the output of free -m:

                 total       used       free     shared    buffers     cached
    Mem:           245        122        122          0         19         64
    -/+ buffers/cache:         38        206
    Swap:          511          0        511

Unless I'm reading this wrong, 122mb is being used and only 84mb of that is disk cache. Here are all processes I'm running sorted by memory usage (ps -eo pmem,pcpu,rss,vsize,args | sort -k 1 -r):

    %MEM %CPU   RSS    VSZ COMMAND
     5.0  0.0 12648 633140 node /home/node/main/sites.js
     1.5  0.0  3884 251736 /usr/sbin/console-kit-daemon --no-daemon
     1.3  0.0  3328  77108 sshd: apeace [priv]
     0.9  0.0  2344  19624 -bash
     0.7  0.0  1776  23620 /sbin/init
     0.6  0.0  1624  77108 sshd: apeace@pts/0
     0.6  0.0  1544   9940 redis-server /etc/redis/redis.conf
     0.6  0.0  1524  25848 /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 103:105
     0.5  0.0  1324 119880 rsyslogd -c4
     0.4  0.0  1084  49308 /usr/sbin/sshd
     0.4  0.0  1028  44376 /usr/sbin/exim4 -bd -q30m
     0.3  0.0   904   6876 ps -eo pmem,pcpu,rss,vsize,args
     0.3  0.0   888  21124 cron
     0.3  0.0   868  23472 dbus-daemon --system --fork
     0.2  0.0   732  19624 -bash
     0.2  0.0   628   6128 /sbin/getty -8 38400 tty1
     0.2  0.0   628  16952 upstart-udev-bridge --daemon
     0.2  0.0   564  16800 udevd --daemon
     0.2  0.0   552  16796 udevd --daemon
     0.2  0.0   548  16796 udevd --daemon
     0.0  0.0     0      0 [xenwatch]
     0.0  0.0     0      0 [xenbus]
     0.0  0.0     0      0 [sync_supers]
     0.0  0.0     0      0 [netns]
     0.0  0.0     0      0 [migration/3]
     0.0  0.0     0      0 [migration/2]
     0.0  0.0     0      0 [migration/1]
     0.0  0.0     0      0 [migration/0]
     0.0  0.0     0      0 [kthreadd]
     0.0  0.0     0      0 [kswapd0]
     0.0  0.0     0      0 [kstriped]
     0.0  0.0     0      0 [ksoftirqd/3]
     0.0  0.0     0      0 [ksoftirqd/2]
     0.0  0.0     0      0 [ksoftirqd/1]
     0.0  0.0     0      0 [ksoftirqd/0]
     0.0  0.0     0      0 [ksnapd]
     0.0  0.0     0      0 [kseriod]
     0.0  0.0     0      0 [kjournald]
     0.0  0.0     0      0 [khvcd]
     0.0  0.0     0      0 [khelper]
     0.0  0.0     0      0 [kblockd/3]
     0.0  0.0     0      0 [kblockd/2]
     0.0  0.0     0      0 [kblockd/1]
     0.0  0.0     0      0 [kblockd/0]
     0.0  0.0     0      0 [flush-202:1]
     0.0  0.0     0      0 [events/3]
     0.0  0.0     0      0 [events/2]
     0.0  0.0     0      0 [events/1]
     0.0  0.0     0      0 [events/0]
     0.0  0.0     0      0 [crypto/3]
     0.0  0.0     0      0 [crypto/2]
     0.0  0.0     0      0 [crypto/1]
     0.0  0.0     0      0 [crypto/0]
     0.0  0.0     0      0 [cpuset]
     0.0  0.0     0      0 [bdi-default]
     0.0  0.0     0      0 [async/mgr]
     0.0  0.0     0      0 [aio/3]
     0.0  0.0     0      0 [aio/2]
     0.0  0.0     0      0 [aio/1]
     0.0  0.0     0      0 [aio/0]

Now, I know that ps is not the best for viewing process memory usage, but that's because it tends to report more memory than is actually being used... meaning no matter how you look at it, all my processes combined shouldn't be using near 122mb, even if you account for the disk cache. What's more, memory usage is growing all the time. I've had to restart my server once a week, because once my 256mb fills up it starts swapping, which it wouldn't do just for disk cache. Shouldn't there be some way for me to see the culprit?! I'm new to server admin, so please, if there's something obvious I'm overlooking, point it out to me.

Just for good measure, the output of cat /proc/meminfo:

    MemTotal:         251140 kB
    MemFree:          124604 kB
    Buffers:           20536 kB
    Cached:            66136 kB
    SwapCached:            0 kB
    Active:            65004 kB
    Inactive:          37576 kB
    Active(anon):      15932 kB
    Inactive(anon):      164 kB
    Active(file):      49072 kB
    Inactive(file):    37412 kB
    Unevictable:           0 kB
    Mlocked:               0 kB
    SwapTotal:        524284 kB
    SwapFree:         524284 kB
    Dirty:                 8 kB
    Writeback:             0 kB
    AnonPages:         15916 kB
    Mapped:            10668 kB
    Shmem:               188 kB
    Slab:              18604 kB
    SReclaimable:      10088 kB
    SUnreclaim:         8516 kB
    KernelStack:         536 kB
    PageTables:         1444 kB
    NFS_Unstable:          0 kB
    Bounce:                0 kB
    WritebackTmp:          0 kB
    CommitLimit:      649852 kB
    Committed_AS:      64224 kB
    VmallocTotal:   34359738367 kB
    VmallocUsed:         752 kB
    VmallocChunk:   34359737600 kB
    DirectMap4k:      262144 kB
    DirectMap2M:           0 kB

EDIT: I had misinterpreted the meaning of free -m at first. But even so: the important thing is that my OS eventually begins to use swap memory if I don't restart my server, which disk caching wouldn't do. So where do I look to see what is using all this memory?
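    (A few hedged starting points using standard tools -- these don't identify a culprit by themselves, but they separate kernel memory from process memory, which the ps output above can't do:)

        # total RSS of all userland processes, to compare against "used"
        ps -eo rss= | awk '{s+=$1} END {print s " kB total RSS"}'
        # kernel slab consumers (compare with the ~18 MB Slab line in /proc/meminfo)
        sudo slabtop -o | head -20
        # watch the big categories over time to see which one grows
        watch -n 60 'grep -E "MemFree|Buffers|Cached|Slab" /proc/meminfo'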

    Read the article

  • Oracle Tutor: Top 10 to Implement Sustainable Policies and Procedures

    - by emily.chorba(at)oracle.com
    Overview

Your organization (executives, managers, and employees) understands the value of having written business process documents (process maps, procedures, instructions, reference documents, and form abstracts). Policies and procedures should be documented because they help to reduce the range of individual decisions and encourage management by exception: the manager only needs to give special attention to unusual problems not covered by a specific policy or procedure. As more and more procedures are written to cover recurring situations, managers will begin to make decisions which will be consistent from one functional area to the next.

Companies should take a project management approach when implementing an environment for a sustainable documentation program and do the following:

1. Identify an Executive Champion
2. Put together a winning team
3. Assign ownership
4. Centralize publishing
5. Establish the Document Maintenance Process up front
6. Document critical activities only
7. Document actual practice
8. Minimize documentation
9. Support continuous improvement
10. Keep it simple

1. Identify an Executive Champion

Appoint a top-down driver. Select one key individual to be a mentor for the procedure planning team. The individual should be a senior manager, such as your company president, CIO, CFO, or the vice-president of quality, manufacturing, or engineering. Written policies and procedures can be important supportive aids when they are known to express the thinking of the chief executive officer and/or the president and to have his or her full support.

2. Put Together a Winning Team

Choose a strong project management leader and staff the procedure planning team with management members from cross-functional groups. Make sure team members have the responsibility -- and the authority -- to make things happen. The winning team should consist of the Documentation Project Manager, Document Owners (one for each functional area), a Document Controller, and Document Specialists (as needed). The Tutor Implementation Guide has complete job descriptions for these roles.

3. Assign Ownership

It is virtually impossible to keep process documentation simple and meaningful if employees who are far removed from the activity itself create it. It is impossible to keep documentation up-to-date when responsibility for the document is not clearly understood. Key to the Tutor methodology, therefore, is the concept of ownership. Each document has a single owner, who is responsible for ensuring that the document is necessary and that it reflects actual practice. The owner must be a person who is knowledgeable about the activity and who has the authority to build consensus among the persons who participate in the activity, as well as the authority to define or change the way an activity is performed. The owner must be an advocate of the performers and negotiate, not dictate, practices. In the Tutor environment, a document's owner is the only person with the authority to approve an update to that document.

4. Centralize Publishing

Although it is tempting (especially in a networked environment and with document management software solutions) to decentralize the control of all documents -- with each owner updating and distributing his own -- Tutor promotes centralized publishing by assigning the Document Administrator (gatekeeper) to manage the updates and distribution of the procedures library.

5. Establish a Document Maintenance Process Up Front (and stick to it)

Everyone in your organization should know they are invited to suggest changes to procedures and should understand exactly what steps to take to do so. Tutor provides a set of procedures to help your company set up a healthy document control system. There are many document management products available to automate some of the document change and maintenance steps. Depending on the size of your organization, a simple document management system can reduce the effort it takes to track and distribute document changes and updates. Whether your company decides to store the written policies and procedures on a file server or in a database, the essential tasks for maintaining documents are the same, though some tasks are automated.

6. Document Critical Activities Only

The best way to keep your documentation simple is to reduce the number of process documents to a bare minimum and to include in those documents only as much detail as is absolutely necessary. The first step to reducing process documentation is to document only those activities that are deemed critical. Not all activities require documentation. In fact, some critical activities cannot and should not be standardized. Others may be sufficiently documented with an instruction or a checklist and may not require a procedure. A document should only be created when it enhances the performance of the employee performing the activity. If it does not help the employee, then there is no reason to maintain the document. Activities that represent little risk (such as project status), activities that cannot be defined in terms of specific tasks (such as product research), and activities that can be performed in a variety of ways (such as advertising) often do not require documentation. Sometimes, an activity will evolve to the point where documentation is necessary. For example, an activity performed by a single employee may be straightforward and uncomplicated -- that is, until the activity is performed by multiple employees. Sometimes it is the interaction between co-workers that necessitates documentation; sometimes it is the complexity or the diversity of the activity.

7. Document Actual Practice

The only reason to maintain process documentation is to enhance the performance of the employee performing the activity. And documentation can only enhance performance if it reflects reality -- that is, current best practice. Documentation that reflects an unattainable ideal or outdated practices will end up on the shelf, unused and forgotten. Documenting actual practice means (1) auditing the activity to understand how the work is really performed, (2) identifying best practices with employees who are involved in the activity, (3) building consensus so that everyone agrees on a common method, and (4) recording that consensus.

8. Minimize Documentation

One way to keep it simple is to document at the highest level possible. That is, include in your documents only as much detail as is absolutely necessary. When writing a document, you should ask yourself: what is the purpose of this document? That is, what problem will it solve? By focusing on this question, you can target the critical information.
• What questions are the end users likely to have?
• What level of detail is required?
• Is any of this information extraneous to the document's purpose?
Short, concise documents are user friendly and easier to keep up to date.

9. Support Continuous Improvement

Employees who perform an activity are often in the best position to identify improvements to the process. In other words, continuous improvement is a natural byproduct of the work itself -- but only if the improvements are communicated to all employees who are involved in the process, and only if there is consensus among those employees. Traditionally, process documentation has been used to dictate performance, to limit employees' actions. In the Tutor environment, process documents are used to communicate improvements identified by employees. How does this work? The Tutor methodology requires a process document to reflect actual practice, so the owner of a document must routinely audit its content -- does the document match what the employees are doing? If it doesn't, the owner has the responsibility to evaluate the process, to build consensus among the employees, to identify "best practices," and to communicate these improvements via a document update. Continuous improvement can also be an outgrowth of corrective action -- but only if the solutions to problems are communicated effectively. The goal should be to solve a problem once and only once, which means not only identifying the solution, but ensuring that the solution becomes part of the process. The Tutor system provides the method through which improvements and solutions are documented and communicated to all affected employees in a cost-effective, timely manner; it ensures that improvements are not lost or confined to a single employee.

10. Keep It Simple

Process documents don't have to be complex and unfriendly. In fact, the simpler the format and organization, the more likely the documents will be used. And the simpler the method of maintenance, the more likely the documents will be kept up-to-date. Keep it simple by:
• Minimizing the skills and training required
• Following the established Tutor document format and layout
• Avoiding technology just for technology's sake
No other rule has as major an impact on the success of your internal documentation as this one: keep it simple.

Learn More

For more information about Tutor, visit Oracle.com or the Tutor Blog. Post your questions at the Tutor Forum.

Emily Chorba
Principal Product Manager, Oracle Tutor & BPM

    Read the article

  • Diving into OpenStack Network Architecture - Part 2 - Basic Use Cases

    - by Ronen Kofman
    In the previous post we reviewed several network components including Open vSwitch, network namespaces, Linux bridges and veth pairs. In this post we will take three simple use cases and see how those basic components come together to create a complete SDN solution in OpenStack. With those three use cases we will review almost the entire network setup and see how all the pieces work together. The use cases are:

1. Create network -- what happens when we create a network, and how we can create multiple isolated networks
2. Launch a VM -- once we have networks, we can launch VMs and connect them to networks
3. DHCP request from a VM -- OpenStack can automatically assign IP addresses to VMs through a local DHCP service controlled by OpenStack Neutron; we will see how this service runs and what a DHCP request and response look like

In this post we will show connectivity: we will see how packets get from point A to point B. We first focus on what a configured deployment looks like, and only later discuss how and when the configuration is created. Personally I found it very valuable to see the actual interfaces and how they connect to each other through examples and hands-on experiments. After the end game is clear and we know how the connectivity works, in a later post we will take a step back and explain how Neutron configures the components to provide such connectivity. We are going to get pretty technical shortly, and I recommend trying these examples on your own deployment or using the Oracle OpenStack Tech Preview. Understanding these three use cases thoroughly, and how to look at them, will be very helpful when trying to debug a deployment in case something does not work.

Use case #1: Create Network

Creating a network is a simple operation; it can be performed from the GUI or the command line. When we create a network in OpenStack, the network is only available to the tenant who created it, or it can be defined as "shared" and then used by all tenants. A network can have multiple subnets, but for demonstration purposes and for simplicity we will assume that each network has exactly one subnet. Creating a network from the command line looks like this:

    # neutron net-create net1
    Created a new network:
    +---------------------------+--------------------------------------+
    | Field                     | Value                                |
    +---------------------------+--------------------------------------+
    | admin_state_up            | True                                 |
    | id                        | 5f833617-6179-4797-b7c0-7d420d84040c |
    | name                      | net1                                 |
    | provider:network_type     | vlan                                 |
    | provider:physical_network | default                              |
    | provider:segmentation_id  | 1000                                 |
    | shared                    | False                                |
    | status                    | ACTIVE                               |
    | subnets                   |                                      |
    | tenant_id                 | 9796e5145ee546508939cd49ad59d51f     |
    +---------------------------+--------------------------------------+

Creating a subnet for this network looks like this:

    # neutron subnet-create net1 10.10.10.0/24
    Created a new subnet:
    +------------------+------------------------------------------------+
    | Field            | Value                                          |
    +------------------+------------------------------------------------+
    | allocation_pools | {"start": "10.10.10.2", "end": "10.10.10.254"} |
    | cidr             | 10.10.10.0/24                                  |
    | dns_nameservers  |                                                |
    | enable_dhcp      | True                                           |
    | gateway_ip       | 10.10.10.1                                     |
    | host_routes      |                                                |
    | id               | 2d7a0a58-0674-439a-ad23-d6471aaae9bc           |
    | ip_version       | 4                                              |
    | name             |                                                |
    | network_id       | 5f833617-6179-4797-b7c0-7d420d84040c           |
    | tenant_id        | 9796e5145ee546508939cd49ad59d51f               |
    +------------------+------------------------------------------------+

We now have a network and a subnet (also visible on the network topology view). Now let's dive in and see what happened under the hood. Looking at the control node, we discover that a new namespace was created:

    # ip netns list
    qdhcp-5f833617-6179-4797-b7c0-7d420d84040c

The name of the namespace is qdhcp-<network id>. Let's look into the namespace and see what's in it:

    # ip netns exec qdhcp-5f833617-6179-4797-b7c0-7d420d84040c ip addr
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    12: tap26c9b807-7c: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
        link/ether fa:16:3e:1d:5c:81 brd ff:ff:ff:ff:ff:ff
        inet 10.10.10.3/24 brd 10.10.10.255 scope global tap26c9b807-7c
        inet6 fe80::f816:3eff:fe1d:5c81/64 scope link
           valid_lft forever preferred_lft forever

We see two interfaces in the namespace: one is the loopback, and the other is an interface called "tap26c9b807-7c". This interface has the IP address 10.10.10.3, and it will also serve DHCP requests, in a way we will see later. Let's trace the connectivity of the "tap26c9b807-7c" interface from the namespace. First stop is OVS; we see that the interface connects to the bridge "br-int" on OVS:

    # ovs-vsctl show
    8a069c7c-ea05-4375-93e2-b9fc9e4b3ca1
        Bridge "br-eth2"
            Port "br-eth2"
                Interface "br-eth2"
                    type: internal
            Port "eth2"
                Interface "eth2"
            Port "phy-br-eth2"
                Interface "phy-br-eth2"
        Bridge br-ex
            Port br-ex
                Interface br-ex
                    type: internal
        Bridge br-int
            Port "int-br-eth2"
                Interface "int-br-eth2"
            Port "tap26c9b807-7c"
                tag: 1
                Interface "tap26c9b807-7c"
                    type: internal
            Port br-int
                Interface br-int
                    type: internal
        ovs_version: "1.11.0"

We also have a veth pair whose two ends are called "int-br-eth2" and "phy-br-eth2"; this veth pair is used to connect the two OVS bridges "br-eth2" and "br-int". In the previous post we explained how to check veth connectivity using the ethtool command; it shows that the two are indeed a pair:

    # ethtool -S int-br-eth2
    NIC statistics:
         peer_ifindex: 10
    ...
    # ip link
    ...
    10: phy-br-eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    ...

Note that "phy-br-eth2" is connected to the bridge "br-eth2", and one of that bridge's interfaces is the physical link eth2. This means that the network we have just created produced a namespace which is connected to the physical interface eth2. eth2 is the "VM network", the physical interface to which all the virtual machines connect.

About network isolation: OpenStack supports the creation of multiple isolated networks and can use several mechanisms to isolate the networks from one another. The isolation mechanism can be VLANs, VxLANs or GRE tunnels; this is configured as part of the initial setup, and in our deployment we use VLANs. When using VLAN tagging as an isolation mechanism, a VLAN tag is allocated by Neutron from a pre-defined VLAN tag pool and assigned to the newly created network. By provisioning VLAN tags to the networks, Neutron allows the creation of multiple isolated networks on the same physical link. The big difference between this and other platforms is that the user does not have to deal with allocating and managing VLANs; the allocation and provisioning is handled by Neutron, which keeps track of the VLAN tags and is responsible for allocating and reclaiming them. In the example above, net1 has the VLAN tag 1000; this means that whenever a VM is created and connected to this network, packets from that VM have to be tagged with VLAN tag 1000 to travel on this particular network. This is true for the namespace as well: if we want to connect a namespace to a particular network, we have to make sure that packets to and from the namespace are correctly tagged when they reach the VM network. In the example above we see that the namespace interface "tap26c9b807-7c" has VLAN tag 1 assigned to it; if we examine OVS, we see that it has flows which modify VLAN tag 1 to VLAN tag 1000 when a packet goes out to the VM network on eth2, and vice versa. We can see this using the dump-flows command on OVS. For packets going to the VM network, we see the modification done on br-eth2:

    # ovs-ofctl dump-flows br-eth2
    NXST_FLOW reply (xid=0x4):
     cookie=0x0, duration=18669.401s, table=0, n_packets=857, n_bytes=163350, idle_age=25, priority=4,in_port=2,dl_vlan=1 actions=mod_vlan_vid:1000,NORMAL
     cookie=0x0, duration=165108.226s, table=0, n_packets=14, n_bytes=1000, idle_age=5343, hard_age=65534, priority=2,in_port=2 actions=drop
     cookie=0x0, duration=165109.813s, table=0, n_packets=1671, n_bytes=213304, idle_age=25, hard_age=65534, priority=1 actions=NORMAL

For packets coming from the interface to the namespace, we see the following modification:

    # ovs-ofctl dump-flows br-int
    NXST_FLOW reply (xid=0x4):
     cookie=0x0, duration=18690.876s, table=0, n_packets=1610, n_bytes=210752, idle_age=1, priority=3,in_port=1,dl_vlan=1000 actions=mod_vlan_vid:1,NORMAL
     cookie=0x0, duration=165130.01s, table=0, n_packets=75, n_bytes=3686, idle_age=4212, hard_age=65534, priority=2,in_port=1 actions=drop
     cookie=0x0, duration=165131.96s, table=0, n_packets=863, n_bytes=160727, idle_age=1, hard_age=65534, priority=1 actions=NORMAL

To summarize: when a user creates a network, Neutron creates a namespace, and this namespace is connected through OVS to the "VM network". OVS also takes care of tagging packets from the namespace to the VM network with the correct VLAN tag, and knows to modify the VLAN for packets coming from the VM network to the namespace. Now let's see what happens when a VM is launched and how it is connected to the "VM network".

Use case #2: Launch a VM

Launching a VM can be done from Horizon or from the command line; from Horizon, we attach the network and launch the instance. Once the virtual machine is up and running, we can see the associated IP using the nova list command:

    # nova list
    +--------------------------------------+--------------+--------+------------+-------------+-----------------+
    | ID                                   | Name         | Status | Task State | Power State | Networks        |
    +--------------------------------------+--------------+--------+------------+-------------+-----------------+
    | 3707ac87-4f5d-4349-b7ed-3a673f55e5e1 | Oracle Linux | ACTIVE | None       | Running     | net1=10.10.10.2 |
    +--------------------------------------+--------------+--------+------------+-------------+-----------------+

The nova list command shows that the VM is running and that the IP 10.10.10.2 is assigned to it. Let's trace the connectivity from the VM to the VM network on eth2, starting with the VM definition file. The configuration files of the VM, including the virtual disk(s) in the case of ephemeral storage, are stored on the compute node at /var/lib/nova/instances/<instance-id>/. Looking into the VM definition file, libvirt.xml, we see that the VM is connected to an interface called "tap53903a95-82", which is connected to a Linux bridge called "qbr53903a95-82":

    <interface type="bridge">
      <mac address="fa:16:3e:fe:c7:87"/>
      <source bridge="qbr53903a95-82"/>
      <target dev="tap53903a95-82"/>
    </interface>

Looking at the bridge using the brctl show command, we see this:

    # brctl show
    bridge name             bridge id               STP enabled     interfaces
    qbr53903a95-82          8000.7e7f3282b836       no              qvb53903a95-82
                                                                    tap53903a95-82

The bridge has two interfaces: one connected to the VM ("tap53903a95-82") and another one ("qvb53903a95-82") connected to the "br-int" bridge on OVS:

    # ovs-vsctl show
    83c42f80-77e9-46c8-8560-7697d76de51c
        Bridge "br-eth2"
            Port "br-eth2"
                Interface "br-eth2"
                    type: internal
            Port "eth2"
                Interface "eth2"
            Port "phy-br-eth2"
                Interface "phy-br-eth2"
        Bridge br-int
            Port br-int
                Interface br-int
                    type: internal
            Port "int-br-eth2"
                Interface "int-br-eth2"
            Port "qvo53903a95-82"
                tag: 3
                Interface "qvo53903a95-82"
        ovs_version: "1.11.0"

As we showed earlier, "br-int" is connected to "br-eth2" on OVS using the veth pair int-br-eth2/phy-br-eth2, and br-eth2 is connected to the physical interface eth2. The whole flow end to end looks like this:

VM -> tap53903a95-82 (virtual interface) -> qbr53903a95-82 (Linux bridge) -> qvb53903a95-82 (interface connecting the Linux bridge to the OVS bridge br-int) -> int-br-eth2 (one end of the veth pair) -> phy-br-eth2 (the other end) -> eth2 (physical interface)

The purpose of the Linux bridge next to the VM is to allow security group enforcement with iptables. Security groups are enforced at the edge point, which is the interface of the VM; since iptables rules cannot be applied to OVS bridges, we use a Linux bridge to apply them. In the future we hope to see this Linux bridge go away.

VLAN tags: as we discussed in the first use case, net1 is using VLAN tag 1000, and looking at OVS above we see that qvo53903a95-82 is tagged with VLAN tag 3. The modification from VLAN tag 3 to 1000 as packets go to the physical network is done by OVS as part of the packet flow of br-eth2, in the same way we showed before. To summarize, when a VM is launched it is connected to the VM network through the chain of elements described here, and during a packet's trip from the VM to the network and back, the VLAN tag is modified.

Use case #3: Serving a DHCP request coming from the virtual machine

In the previous use cases we saw that both the namespace called qdhcp-<some id> and the VM end up connected to the physical interface eth2 on their respective nodes, and both tag their packets with VLAN tag 1000. We also saw that the namespace has an interface with the IP 10.10.10.3. Since the VM and the namespace are connected to each other and have interfaces on the same subnet, they can ping each other -- for example, a ping from the VM (which was assigned 10.10.10.2) to the namespace succeeds. The fact that they are connected and can ping each other can become very handy when something doesn't work right and we need to isolate the problem: knowing that we should be able to ping from the VM to the namespace and back lets us trace the disconnect using tcpdump or other monitoring tools.

To serve DHCP requests coming from VMs on the network, Neutron uses a Linux tool called "dnsmasq", a lightweight DNS and DHCP service. If we look for dnsmasq on the control node with the ps command, we see this:

    dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces --interface=tap26c9b807-7c --except-interface=lo --pid-file=/var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/pid --dhcp-hostsfile=/var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/host --dhcp-optsfile=/var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/opts --leasefile-ro --dhcp-range=tag0,10.10.10.0,static,120s --dhcp-lease-max=256 --conf-file= --domain=openstacklocal

The service binds to the tap interface in the namespace ("--interface=tap26c9b807-7c"). If we look at the hosts file, we see this:

    # cat /var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/host
    fa:16:3e:fe:c7:87,host-10-10-10-2.openstacklocal,10.10.10.2

Note the MAC address fa:16:3e:fe:c7:87, which is the VM's MAC. This MAC address is mapped to IP 10.10.10.2, so when a DHCP request comes in with this MAC, dnsmasq will return 10.10.10.2. If we look into the namespace at the time we initiate a DHCP request from the VM (this can be done by simply restarting the network service in the VM), we see the following:

    # ip netns exec qdhcp-5f833617-6179-4797-b7c0-7d420d84040c tcpdump -n
    19:27:12.191280 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:fe:c7:87, length 310
    19:27:12.191666 IP 10.10.10.3.bootps > 10.10.10.2.bootpc: BOOTP/DHCP, Reply, length 325

To summarize, the DHCP service is handled by dnsmasq, which is configured by Neutron to listen to the interface in the DHCP namespace. Neutron also configures dnsmasq with the MAC-to-IP mapping, so when a DHCP request comes along, the requester receives its assigned IP.

Summary

In this post we relied on the components described in the previous post and saw how network connectivity is achieved using three simple use cases. These use cases gave a good view of the entire network stack and helped us understand how an end-to-end connection is made between a VM on a compute node and the DHCP namespace on the control node. One conclusion we can draw is that if we launch a VM and it is able to perform a DHCP request and receive a correct IP, there is reason to believe the network is working as expected: a packet has to travel through a long list of components before reaching its destination, and if it does so successfully, many components must be functioning properly. In the next post we will look at some more sophisticated services Neutron supports and see how they work. We will see that while there are some more components involved, for the most part the concepts are the same. @RonenKofman

    Read the article

  • HAProxy: Display a "BADREQ" | BADREQ's by the thousands

    - by GruffTech
    My HAProxy configuration:

    #HA-Proxy version 1.3.22 2009/10/14 Copyright 2000-2009 Willy Tarreau <[email protected]>
    global
        maxconn 10000
        spread-checks 50
        user haproxy
        group haproxy
        daemon
        stats socket /tmp/haproxy
        log localhost local0
        log localhost local1 notice

    defaults
        mode http
        maxconn 50000
        timeout client 10000
        option forwardfor except 127.0.0.1
        option httpclose
        option httplog

    listen dcaustin 0.0.0.0:80
        mode http
        timeout connect 12000
        timeout server 60000
        timeout queue 120000
        balance roundrobin
        option httpchk GET /index.html
        log global
        option httplog
        option dontlog-normal
        server web1 10.10.10.101:80 maxconn 300 check fall 1
        server web2 10.10.10.102:80 maxconn 300 check fall 1
        server web3 10.10.10.103:80 maxconn 300 check fall 1
        server web4 10.10.10.104:80 maxconn 300 check fall 1

    listen stats 0.0.0.0:9000
        mode http
        balance
        log global
        timeout client 5000
        timeout connect 4000
        timeout server 30000
        stats uri /haproxy

HAProxy is running, and the socket is working:

    adam@dcaustin:/etc/haproxy# echo "show info" | socat stdio /tmp/haproxy
    Name: HAProxy
    Version: 1.3.22
    Release_date: 2009/10/14
    Nbproc: 1
    Process_num: 1
    Pid: 6320
    Uptime: 0d 0h14m58s
    Uptime_sec: 898
    Memmax_MB: 0
    Ulimit-n: 20017
    Maxsock: 20017
    Maxconn: 10000
    Maxpipes: 0
    CurrConns: 47
    PipesUsed: 0
    PipesFree: 0
    Tasks: 51
    Run_queue: 1
    node: dcaustin
    description:

"show errors" returns nothing from the socket:

    adam@dcaustin:/etc/haproxy# echo "show errors" | socat stdio /tmp/haproxy
    adam@dcaustin:/etc/haproxy#

However, my error log is exploding with "BADREQ" entries carrying the error code cR. According to the 1.3 documentation, cR means:

    The "timeout http-request" stroke before the client sent a full HTTP request. This is sometimes caused by too large TCP MSS values on the client side for PPPoE networks which cannot transport full-sized packets, or by clients sending requests by hand and not typing fast enough, or forgetting to enter the empty line at the end of the request. The HTTP status code is likely a 408 here.

Correct on the 408, but we're getting literally thousands of these requests every hour. (This log snippet is a clip of about 10 seconds of time...)

    Jun 30 11:08:52 localhost haproxy[6320]: 92.22.213.32:26448 [30/Jun/2011:11:08:42.384] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10002 408 212 - - cR-- 35/35/18/0/0 0/0 "<BADREQ>"
    Jun 30 11:08:54 localhost haproxy[6320]: 71.62.130.24:62818 [30/Jun/2011:11:08:44.457] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10001 408 212 - - cR-- 39/39/16/0/0 0/0 "<BADREQ>"
    Jun 30 11:08:55 localhost haproxy[6320]: 84.73.75.236:3589 [30/Jun/2011:11:08:45.021] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10008 408 212 - - cR-- 35/35/15/0/0 0/0 "<BADREQ>"
    Jun 30 11:08:55 localhost haproxy[6320]: 69.39.20.190:49969 [30/Jun/2011:11:08:45.709] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10000 408 212 - - cR-- 37/37/16/0/0 0/0 "<BADREQ>"
    Jun 30 11:08:56 localhost haproxy[6320]: 2.29.0.9:58772 [30/Jun/2011:11:08:46.846] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10001 408 212 - - cR-- 43/43/22/0/0 0/0 "<BADREQ>"
    Jun 30 11:08:57 localhost haproxy[6320]: 212.139.250.242:57537 [30/Jun/2011:11:08:47.568] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10000 408 212 - - cR-- 42/42/21/0/0 0/0 "<BADREQ>"
    Jun 30 11:08:58 localhost haproxy[6320]: 74.79.195.75:55046 [30/Jun/2011:11:08:48.559] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10000 408 212 - - cR-- 46/46/24/0/0 0/0 "<BADREQ>"
    Jun 30 11:08:58 localhost haproxy[6320]: 74.79.195.75:55044 [30/Jun/2011:11:08:48.554] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10004 408 212 - - cR-- 45/45/24/0/0 0/0 "<BADREQ>"
    Jun 30 11:08:58 localhost haproxy[6320]: 74.79.195.75:55045 [30/Jun/2011:11:08:48.554] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10005 408 212 - - cR-- 44/44/24/0/0 0/0 "<BADREQ>"
    Jun 30 11:09:00 localhost haproxy[6320]: 68.197.56.2:52781 [30/Jun/2011:11:08:50.975] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10000 408 212 - - cR-- 49/49/28/0/0 0/0 "<BADREQ>"

From what I read on Google, if I wanted to see what the bad requests are, I could send "show errors" to the socket and it would spit them out. We do run a pretty heavily trafficked website, and the percentage of BADREQs to normal requests is quite low, but I'd like to get hold of what those requests WERE so I can debug them.

Stats:

    # pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq,dresp,ereq,econ,eresp,wretr,wredis,status,weight,act,bck,chkfail,chkdown,lastchg,downtime,qlimit,pid,iid,sid,throttle,lbtot,tracked,type,rate,rate_lim,rate_max,
    dcaustin,FRONTEND,,,64,120,50000,88433,105889100,2553809875,0,0,4641,,,,,OPEN,,,,,,,,,1,1,0,,,,0,45,0,128,
    dcaustin,web1,0,0,10,28,300,20941,25402112,633143416,,0,,0,3,0,0,UP,1,1,0,0,0,2208,0,,1,1,1,,20941,,2,11,,30,
    dcaustin,web2,0,0,9,30,300,20941,25026691,641475169,,0,,0,3,0,0,UP,1,1,0,0,0,2208,0,,1,1,2,,20941,,2,11,,30,
    dcaustin,web3,0,0,10,27,300,20940,30116527,635015040,,0,,0,9,0,0,UP,1,1,0,0,0,2208,0,,1,1,3,,20940,,2,10,,31,
    dcaustin,web4,0,0,5,28,300,20940,25343770,643209546,,0,,0,8,0,0,UP,1,1,0,0,0,2208,0,,1,1,4,,20940,,2,11,,31,
    dcaustin,BACKEND,0,0,34,95,50000,83762,105889100,2553809875,0,0,,0,34,0,0,UP,4,4,0,,0,2208,0,,1,1,0,,83762,,1,43,,122,

88500 "sessions" and 4500 errors in the last 20 minutes.
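    (One hedged way to see the raw requests: "show errors" only records protocol violations, not timeouts, so a cR entry never lands there. Capturing the traffic itself works regardless; the interface name is a placeholder, and the client IP is taken from the log above.)

        # capture full packets from one of the offending clients for later inspection in wireshark
        tcpdump -i eth0 -s 0 -w badreq.pcap 'tcp port 80 and host 92.22.213.32'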

    Read the article

  • Running gdb on Ubuntu 9.10 Apache2 Install

    - by AJ
    Hi all, I am trying to run gdb to debug my Ubuntu 9.10 Apache2 install and am having a couple of problems: It seems like the package installed by Ubuntu for Apache2 does not include debugging symbols; is there a different version of the package I should be using for developing/debugging? When I try to run gdb, I get an error that looks to be caused by a missing environment variable. Are there additional options I should pass to "run" to get this to work? Here is the output of the debugger session: root@aj-ubuntu:/usr/sbin# gdb apache2 GNU gdb (GDB) 7.0-ubuntu Copyright (C) 2009 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "x86_64-linux-gnu". For bug reporting instructions, please see: <http://www.gnu.org/software/gdb/bugs/>... Reading symbols from /usr/sbin/apache2...(no debugging symbols found)...done. (gdb) run -X Starting program: /usr/sbin/apache2 -X [Thread debugging using libthread_db enabled] apache2: bad user name ${APACHE_RUN_USER} Program exited with code 01. (gdb) Thanks in advance, -aj
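    Two hedged fixes seem likely here (the package name is assumed from the Ubuntu repositories of that era): the stripped binary wants the separate apache2-dbg symbol package, and the ${APACHE_RUN_USER} error means gdb launched apache2 without the variables normally exported by /etc/apache2/envvars:
    sudo apt-get install apache2-dbg   # debug symbols for /usr/sbin/apache2
    . /etc/apache2/envvars             # exports APACHE_RUN_USER, APACHE_RUN_GROUP, etc.
    gdb /usr/sbin/apache2
    (gdb) run -X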

    Read the article

  • Ubuntu server failing daily

    - by deanvz
    Symptoms: Server becomes unresponsive - increase in load, all services stop. Loss of connectivity - no ping/SSH. MySQL hosts must be flushed after reboot, as MySQL refuses new connections. Intermittent Apache crashes. Generally happens in the early morning hours - 2 days of the week are, however, excluded.
    Changes made: Updated the OS to Ubuntu 10.04.4 LTS (not sure if the MySQL server was also updated in the process; current MySQL version - mysql Ver 14.14 Distrib 5.1.63, for debian-linux-gnu (x86_64) using readline 6.1). Updated Plesk from 10.4.4 Update #47 to 11.0.9 Update #23. Rebooted on an almost daily basis. Stopped all crons for the times corresponding to the server crashes. Created a MySQL log to monitor the lock times on queries.
    Possible causes: Failing hardware. Incorrect software configuration (MySQL, Apache, etc.).
    Responsibilities: Small web server. Runs our billing system - WHMCS. Responsible for crons. Bulk-email solution - no delivery times coincide with server crashes.
    Proposed solutions: Move the machine over to a VM. Format and restore the Plesk server backup and take it from there?
    Side notes: Seems to be a general Apache failure across all our Linux servers - an intermittent problem. Are we doing something fundamentally wrong in the Apache config? (I understand that this is a secondary question; I'm just making sure it isn't possibly holding any relevance.)
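    A few hedged triage steps that often narrow down nightly lockups like these (log paths assume a stock Ubuntu/Plesk layout):
    grep -iE 'oom|out of memory|segfault' /var/log/syslog /var/log/kern.log   # was the OOM killer involved?
    tail -n 200 /var/log/apache2/error.log /var/log/mysql/error.log           # what did the services log at crash time?
    free -m && uptime                                                         # current memory pressure and load
    Running memtest86+ from the GRUB menu during a maintenance window would also help rule the failing-hardware theory in or out.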

    Read the article

  • Can't get .htaccess to work

    - by orokusaki
    I'm using Apache2 on Ubuntu Lucid Lynx. I have the config set to use .htaccess as normal. This is my default site: <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride All </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride All Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> I've tried lower case "all" (AllowOverride all) as well. My .htaccess file looks like this: //Rewrite all requests to www Options +FollowSymLinks RewriteEngine on RewriteCond %{HTTP_HOST} ^mydomain.com [nc] RewriteRule ^(.*)$ http://www.mydomain.com/$1 [r=301,nc] //301 Redirect "old_junk.html" File to "new_junk.html" Redirect 301 /old_junk.html /new_junk.html //301 Redirect Entire Directory "old_junk/" to "new_junk/" RedirectMatch 301 /old_junk/(.*) /new_junk//$1 // Copy and paste redirect examples from above: (with mydomain replaced with my actual domain... and my computer is plugged in)
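    One likely culprit, hedged since the post doesn't include the Apache error log: Apache config files treat only lines starting with # as comments, so the // lines above are parsed as directives and will break the whole .htaccess with a 500 error. A corrected sketch, keeping mydomain.com as the question's placeholder:
    # Rewrite all requests to www
    Options +FollowSymLinks
    RewriteEngine on
    RewriteCond %{HTTP_HOST} ^mydomain\.com [NC]
    RewriteRule ^(.*)$ http://www.mydomain.com/$1 [R=301,L]
    # 301 redirect old_junk.html to new_junk.html
    Redirect 301 /old_junk.html /new_junk.html
    # 301 redirect the entire directory old_junk/ to new_junk/
    RedirectMatch 301 ^/old_junk/(.*)$ /new_junk/$1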

    Read the article

  • Unable to get prosody running on Ubuntu 10.04 (lua issues)

    - by user90374
    All this is performed on Ubuntu 10.04.4 LTS Server. I installed Lua 5.1.4 following this procedure - http://ubuntuforums.org/showthread.php?t=1874860 - and installed prosody with the following command (after downloading the package) - sudo dpkg -i prosody_0.8.2-1_i386.deb. After installation, I get the error below. I have tried using luarocks and sudo apt-get install, as suggested, to fix these, but it still keeps showing me these errors. Selecting previously deselected package prosody. (Reading database ... 59416 files and directories currently installed.) Unpacking prosody (from prosody_0.8.2-1_i386.deb) ... Setting up prosody (0.8.2-1) ... * Starting Prosody XMPP Server prosody ************** Prosody was unable to find luaexpat This package can be obtained in the following ways: Source: www[dot]keplerproject[dot]org/luaexpat/ Debian/Ubuntu: sudo apt-get install liblua5.1-expat0 luarocks: luarocks install luaexpat luaexpat is required for Prosody to run, so we will now exit. More help can be found on our website, at prosody[dot]im/doc/depends ************ Prosody was unable to find luasocket This package can be obtained in the following ways: Source: www[dot]tecgraf[dot]puc-rio[dot]br/~diego/professional/luasocket/ Debian/Ubuntu: sudo apt-get install liblua5.1-socket2 luarocks: luarocks install luasocket luasocket is required for Prosody to run, so we will now exit. More help can be found on our website, at prosody[dot]im/doc/depends ************ Prosody was unable to find LuaSec This package can be obtained in the following ways: Source: www[dot]inf[dot]puc-rio[dot]br/~brunoos/luasec/ Debian/Ubuntu: prosody[dot]im/download/start#debian_and_ubuntu luarocks: luarocks install luasec SSL/TLS support will not be available More help can be found on our website, at prosody[dot]im/doc/depends [fail] invoke-rc.d: initscript prosody, action "start" failed. dpkg: error processing prosody (--install): subprocess installed post-installation script returned error exit status 1 Processing triggers for man-db ... Processing triggers for ureadahead ... Errors were encountered while processing: prosody Thanks a lot for your patience and answers.
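    The installer output names the missing packages itself, so a hedged fix is to install them and then re-run the failed post-installation step (LuaSec is only needed if you want TLS support):
    sudo apt-get install liblua5.1-expat0 liblua5.1-socket2   # the two hard dependencies from the error text
    sudo dpkg --configure prosody                             # re-run the postinst script that failed
    sudo /etc/init.d/prosody start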

    Read the article

  • Are there any problems migrating users from OSX 10.4.11 server to a 10.6.6 server using WGM?

    - by Geoff Hardman
    I have just set up a new Mac server running 10.6.6. Installation went smoothly and all appears OK. I can create a new user through WGM and log on to the server from one of our local client systems. I imported the old users from the 10.4 system using WGM, but the new system will not create home directories for these users and will not let them log on. I have read that there are issues using this method but cannot find any detail as to why. Not wishing to recreate over 350 users from scratch, I am looking for an easier solution. Can anyone help? The new server is 10.6.6. We use Open Directory and AFP. The paths are the same as on the old server.
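    A hedged note, since the post doesn't show how the export was done: a commonly cited limitation is that WGM's user export does not carry password hashes and does not create home folders, so imported accounts typically need a password reset plus a home-directory rebuild. One sketch for the home-folder part (the username is a placeholder; confirm the flags in man createhomedir on 10.6):
    sudo createhomedir -c -u someuser   # create the local home folder for one imported user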

    Read the article

  • Ubuntu 13.04 to 13.10: Filesystem check or mount failed [migrated]

    - by SamHuckaby
    I attempted to upgrade from Ubuntu 13.04 to 13.10 today, and mid-upgrade the system started flaking out and eventually locked up entirely. I was forced to restart the computer, and am now unable to get the computer to boot up at all. When I boot currently, it takes me to the GRUB menu, and I can choose to boot normally or boot into an older version. I have tried several things, which I list below, but no matter what, when I try to finish booting into Ubuntu, I receive the following error: Filesystem check or mount failed. A maintenance shell will now be started. CONTROL-D will terminate this shell and continue booting after re-trying filesystems. Any further errors will be ignored root@ubuntu-computername:~# I have run fsck -f and everything appears correct: no errors are reported and it passes all 5 checks. If I run fdisk -l, I get the following information: Disk /dev/sda: 320.1 GB, 320072933376 bytes 255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 4096 bytes / 4096 bytes Disk identifier: 0x00010824 Device Boot Start End Blocks Id System /dev/sda1 * 2048 608456703 304227328 83 Linux /dev/sda2 608458750 625141759 8341505 5 Extended Partition 2 does not start on physical sector boundary. /dev/sda5 608458752 625141759 8341504 82 Linux swap / Solaris Disk /dev/sdb: 320.1 GB, 320072933376 bytes 255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0x0fb4b7e8 Device Boot Start End Blocks Id System /dev/sdb1 8192 625139711 312565760 7 HPFS/NTFS/exFAT I am considering just installing a new OS on the other disk, which currently has nothing on it, and then just attempting to scrape my data off the old disk (thankfully I didn't encrypt the files). Really my question is this: Can I salvage this Ubuntu install, or should I give up and just reinstall?
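    From that maintenance shell, a hedged first step is to compare what /etc/fstab expects with what the disks actually report, since a stale UUID after a broken upgrade produces exactly this stop:
    blkid                # the real UUIDs of sda1/sda5/sdb1
    cat /etc/fstab       # what the system is trying to mount
    fsck -f /dev/sda1    # force-check root by device node, per the fdisk output above
    If an fstab entry (the swap partition is a frequent offender) carries a wrong UUID, correcting it and pressing CONTROL-D should let the boot continue.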

    Read the article

  • Change the order of IP addresses returned by ifconfig?

    - by erikcw
    I have an Ubuntu server with several IP addresses attached to it. 127.0.0.1 is listed as venet0 by ifconfig. I'm using Chef to configure the server. The problem is that Chef is listing 127.0.0.1 as the IP address for the server instead of one of the server's "real" IPs. (Apparently "ohai ipaddress" uses the first IP listed by ifconfig to determine the server's IP.) How can I change the order so the server's main IP is listed first instead of 127.0.0.1? Can venet0 be deleted and venet0:0 be "promoted" to take its place, since 127.0.0.1 is already listed in the "lo" interface? lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:334 errors:0 dropped:0 overruns:0 frame:0 TX packets:334 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:16700 (16.7 KB) TX bytes:16700 (16.7 KB) venet0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:127.0.0.1 P-t-P:127.0.0.1 Bcast:0.0.0.0 Mask:255.255.255.255 UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1 RX packets:7622207 errors:0 dropped:0 overruns:0 frame:0 TX packets:8183436 errors:0 dropped:1 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:2102750761 (2.1 GB) TX bytes:2795213667 (2.7 GB) venet0:0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:XXX.XXX.XXX.XX1 P-t-P:XXX.XXX.XXX.XX1 Bcast:0.0.0.0 Mask:255.255.255.255 UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1 venet0:1 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:XXX.XXX.XXX.XX2 P-t-P:XXX.XXX.XXX.XX2 Bcast:0.0.0.0 Mask:255.255.255.255 UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1 route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 192.0.2.1 0.0.0.0 255.255.255.255 UH 0 0 0 venet0 0.0.0.0 192.0.2.1 0.0.0.0 UG 0 0 0 venet0
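    A hedged way to see exactly what Chef will pick up, before touching the interfaces, is to query ohai directly; a recipe can also read the venet0:0 address out of the network attributes instead of relying on the top-level ipaddress:
    ohai ipaddress    # what ohai currently reports as the node IP
    ohai network      # the full interface tree, including venet0:0
    In a recipe, the same data is available as node['network']['interfaces']['venet0:0']['addresses'], which sidesteps the ordering problem entirely.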

    Read the article

  • Cisco PIX 515 doesn't seem to be passing traffic through according to static route

    - by Liquidkristal
    Ok, so I am having a spot of bother with a Cisco PIX 515; I have posted the current running config below. Now, I am no Cisco expert by any means, although I can do basic stuff with them. I am having trouble with traffic sent from the outside to address 10.75.32.25; it just doesn't appear to be going anywhere. Now this firewall is deep inside a private network, with an upstream firewall that we don't manage. I have spoken to the people that look after that firewall and they say they have traffic routing to 10.75.32.21 and 10.75.32.25, and that's it (although there is a website that runs from the server 172.16.102.5 which (if my understanding is correct) gets traffic via 10.75.32.23). Any ideas would be greatly appreciated, as to me it should all just work, but it's not (obviously if the config is all correct then there could be a problem with the web server that we are trying to access on 10.75.32.25, although the users say that they can get to it internally (172.16.102.8) which is even more confusing) PIX Version 6.3(3) interface ethernet0 auto interface ethernet1 auto interface ethernet2 auto nameif ethernet0 outside security0 nameif ethernet1 inside security100 nameif ethernet2 academic security50 fixup protocol dns maximum-length 512 fixup protocol ftp 21 fixup protocol h323 h225 1720 fixup protocol h323 ras 1718-1719 fixup protocol http 80 fixup protocol rsh 514 fixup protocol rtsp 554 fixup protocol sip 5060 fixup protocol sip udp 5060 fixup protocol skinny 2000 fixup protocol smtp 25 fixup protocol sqlnet 1521 fixup protocol tftp 69 names name 195.157.180.168 outsideNET name 195.157.180.170 globalNAT name 195.157.180.174 gateway name 195.157.180.173 Mail-Global name 172.30.31.240 Mail-Local name 10.75.32.20 outsideIF name 82.219.210.17 frogman1 name 212.69.230.79 frogman2 name 78.105.118.9 frogman3 name 172.16.0.0 acadNET name 172.16.100.254 acadIF access-list acl_outside permit icmp any any echo-reply access-list acl_outside permit icmp any any unreachable access-list acl_outside permit icmp any any time-exceeded access-list acl_outside permit tcp any host 10.75.32.22 eq smtp access-list acl_outside permit tcp any host 10.75.32.22 eq 8383 access-list acl_outside permit tcp any host 10.75.32.22 eq 8385 access-list acl_outside permit tcp any host 10.75.32.22 eq 8484 access-list acl_outside permit tcp any host 10.75.32.22 eq 8485 access-list acl_outside permit ip any host 10.75.32.30 access-list acl_outside permit tcp any host 10.75.32.25 eq https access-list acl_outside permit tcp any host 10.75.32.25 eq www access-list acl_outside permit tcp any host 10.75.32.23 eq www access-list acl_outside permit tcp any host 10.75.32.23 eq https access-list acl_outside permit tcp host frogman1 host 10.75.32.23 eq ssh access-list acl_outside permit tcp host frogman2 host 10.75.32.23 eq ssh access-list acl_outside permit tcp host frogman3 host 10.75.32.23 eq ssh access-list acl_outside permit tcp any host 10.75.32.23 eq 2001 access-list acl_outside permit tcp host frogman1 host 10.75.32.24 eq 8441 access-list acl_outside permit tcp host frogman2 host 10.75.32.24 eq 8441 access-list acl_outside permit tcp host frogman3 host 10.75.32.24 eq 8441 access-list acl_outside permit tcp host frogman1 host 10.75.32.24 eq 8442 access-list acl_outside permit tcp host frogman2 host 10.75.32.24 eq 8442 access-list acl_outside permit tcp host frogman3 host 10.75.32.24 eq 8442 access-list acl_outside permit tcp host frogman1 host 10.75.32.24 eq 8443 access-list acl_outside permit tcp host frogman2 host
10.75.32.24 eq 8443 access-list acl_outside permit tcp host frogman3 host 10.75.32.24 eq 8443 access-list acl_outside permit tcp any host 10.75.32.23 eq smtp access-list acl_outside permit tcp any host 10.75.32.23 eq ssh access-list acl_outside permit tcp any host 10.75.32.24 eq ssh access-list acl_acad permit icmp any any echo-reply access-list acl_acad permit icmp any any unreachable access-list acl_acad permit icmp any any time-exceeded access-list acl_acad permit tcp any 10.0.0.0 255.0.0.0 eq www access-list acl_acad deny tcp any any eq www access-list acl_acad permit tcp any 10.0.0.0 255.0.0.0 eq https access-list acl_acad permit tcp any 10.0.0.0 255.0.0.0 eq 8080 access-list acl_acad permit tcp host 172.16.102.5 host 10.64.1.115 eq smtp pager lines 24 logging console debugging mtu outside 1500 mtu inside 1500 mtu academic 1500 ip address outside outsideIF 255.255.252.0 no ip address inside ip address academic acadIF 255.255.0.0 ip audit info action alarm ip audit attack action alarm pdm history enable arp timeout 14400 global (outside) 1 10.75.32.21 nat (academic) 1 acadNET 255.255.0.0 0 0 static (academic,outside) 10.75.32.22 Mail-Local netmask 255.255.255.255 0 0 static (academic,outside) 10.75.32.30 172.30.30.36 netmask 255.255.255.255 0 0 static (academic,outside) 10.75.32.23 172.16.102.5 netmask 255.255.255.255 0 0 static (academic,outside) 10.75.32.24 172.16.102.6 netmask 255.255.255.255 0 0 static (academic,outside) 10.75.32.25 172.16.102.8 netmask 255.255.255.255 0 0 access-group acl_outside in interface outside access-group acl_acad in interface academic route outside 0.0.0.0 0.0.0.0 10.75.32.1 1 timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 rpc 0:10:00 h225 1:00:00 timeout h323 0:05:00 mgcp 0:05:00 sip 0:30:00 sip_media 0:02:00 timeout uauth 0:05:00 absolute aaa-server TACACS+ protocol tacacs+ aaa-server RADIUS protocol radius aaa-server LOCAL protocol local snmp-server host outside 172.31.10.153 snmp-server host outside 172.31.10.154 snmp-server host outside 172.31.10.155 no snmp-server location no snmp-server contact snmp-server community CPQ_HHS no snmp-server enable traps floodguard enable telnet 172.30.31.0 255.255.255.0 academic telnet timeout 5 ssh timeout 5 console timeout 0 terminal width 120 Cryptochecksum:hi2u : end PIX515#
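    A hedged way to isolate where the traffic dies, using the packet-capture feature in the PIX 6.3 feature set (the capture and ACL names below are placeholders): confirm that packets for 10.75.32.25 actually arrive on the outside interface, then check whether the static translation to 172.16.102.8 is being built:
    access-list cap_web permit tcp any host 10.75.32.25
    capture webcap access-list cap_web interface outside
    show capture webcap   ! any packets arriving at all?
    show xlate            ! look for a 10.75.32.25 <-> 172.16.102.8 entry
    show conn             ! embryonic vs. established connections to 172.16.102.8
    If the capture stays empty, the upstream firewall is not actually forwarding 10.75.32.25 despite what its operators say; if packets arrive but no connection forms, the problem is on the web server side.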

    Read the article

  • erlyvideo server doesn't start automatically after reboot

    - by electroid
    I have installed erlyvideo server on Ubuntu 9.10 Karmic Koala. Everything works fine, but after a server reboot I have to start the erlyvideo server manually with /etc/init.d/erlyvideo start. I have already tried update-rc.d, and I think erlyvideo should start automatically by default. Any help will be appreciated. Here is the erlyvideo startup script, located in /etc/init.d/erlyvideo: #!/bin/sh ### BEGIN INIT INFO # Provides: erlyvideo # Required-Start: $local_fs $network # Required-Stop: $local_fs $network # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: starts the erlyvideo streaming server # Description: starts the erlyvideo using erlang system ### END INIT INFO case "$1" in start) cd /opt/erlyvideo && ./bin/erlyvideo "$1" ;; stop) cd /opt/erlyvideo && ./bin/erlyvideo "$1" ;; restart) cd /opt/erlyvideo && ./bin/erlyvideo "$1" ;; soft-restart) cd /opt/erlyvideo && ./bin/erlyvideo "$1" ;; upgrade) cd /opt/erlyvideo && ./bin/erlyvideo "$1" ;; reconfigure) cd /opt/erlyvideo && ./bin/erlyvideo "$1" ;; reboot) cd /opt/erlyvideo && ./bin/erlyvideo "$1" ;; ping) cd /opt/erlyvideo && ./bin/erlyvideo "$1" ;; console) cd /opt/erlyvideo && ./bin/erlyvideo "$1" ;; attach) cd /opt/erlyvideo && ./bin/erlyvideo "$1" ;; attach-erl) cd /opt/erlyvideo && ./erts-5.8.4/bin/erl -name [email protected] -remsh [email protected] ;; *) echo $"Usage: $0 {start|stop|restart|soft-restart|upgrade|reboot|ping|console|attach}" exit 1 esac exit 0 And I have found S91erlyvideo in /etc/rc2.d next to S91apache2, which starts just fine on every reboot.
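    Since S91erlyvideo already exists in /etc/rc2.d, a hedged next check is whether the script runs cleanly in a boot-like environment and whether the symlinks are current; re-registering them rules out a stale setup:
    sudo chmod +x /etc/init.d/erlyvideo          # init silently skips non-executable scripts
    sudo update-rc.d -f erlyvideo remove         # drop the old rc symlinks
    sudo update-rc.d erlyvideo defaults 91       # recreate them from the LSB header
    sudo /etc/init.d/erlyvideo start; echo $?    # simulate boot-time startup and check the exit code
    A nonzero exit code here (for example, if /opt/erlyvideo depends on environment only present in a login shell) would explain why the same command works by hand but not at boot.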

    Read the article

< Previous Page | 38 39 40 41 42 43 44 45 46 47 48 49  | Next Page >