Search Results

Search found 27958 results on 1119 pages for 'failed to load viewstate'.


  • How do I load every UserForm without having to call .Show individually?

    - by Daniel Cook
    I wanted to figure out how you could load every UserForm without having to call UserForm1.Show, UserForm2.Show, etc. This was inspired by comments on this answer: Excel VBA UserForm_Initialize() in Module. I found this method suggested in a few places:

        Sub OpenAllUserForms()
            Dim uf As UserForm
            For Each uf In UserForms
                uf.Show
            Next
        End Sub

    However, no UserForms display, regardless of how many are attached to the workbook. When I stepped through the code I determined that the UserForms collection is empty! How can I load each UserForm without having to explicitly show each one?

    Read the article

  • How does the WP7 Pivot control dynamically load pivot items?

    - by mztan
    IIRC, the Pivot control only loads a child PivotItem if it is the currently shown child. I would then guess that the previously seen child is also somehow unloaded, presumably still stored in memory but hidden from the UI. What I'm wondering is: how does the Pivot control dynamically load/unload a child control, and can that behavior be imitated within a custom UserControl? As for unloading, is it as simple as collapsing the previous child's visibility, or is something trickier going on? That is to say, suppose I use my own UserControl like this:

        <my:CustomUserControl>
            <TextBlock x:Name="_textBlock" Text="wait for it ..." />
        </my:CustomUserControl>

    Normally, the child TextBlock is instantiated when the surrounding PhoneApplicationPage is instantiated, via InitializeComponent and all that. Is there any way to postpone this behavior and load the child programmatically?

    Read the article

  • Silverlight client load operation failed for query "Login". [GenericParameterNotValid]

    - by cvoeller
    I have one user who gets the following error when trying to log in: "Silverlight client load operation failed for query 'Login'. [GenericParameterNotValid]". The odd thing is that other users are able to log in without issue, and I can log in using the "problem" account from other machines. At this point I think it's got to be a client-side configuration issue. My next step is to confirm the client-side Silverlight version, but I don't know where to go after that. Do you have any suggestions?

    Read the article

  • How to handle this "session failed to write file" error in PHP?

    - by alex
    I am using the Kohana 3 framework with the native session driver. For some reason, the sessions occasionally fail to write to their file:

        Warning: session_start() [function.session-start]: open(/tmp/sess_*****, O_RDWR) failed: Permission denied (13) in /home/site/public_html/system/classes/kohana/session/native.php on line 27

    I am pretty sure Kohana has its own built-in error handler, but it is not triggered by this error (i.e. it shows up like a normal PHP error, not the Kohana error). Anyone who has ever used Kohana will notice this seems to have bypassed Kohana's error handling (perhaps set with set_error_handler()). Is there any way to stop this error from appearing without switching from the native (i.e. file-based) session driver? Should I just give good practice the boot and prepend an @ error suppressor to session_start() in the core code of Kohana? Should I relax error_reporting()? Thanks
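    Before suppressing the error, it may be worth confirming who actually owns the session files: "Permission denied" on an existing /tmp/sess_* file usually means it was created by a different user (for example, under an earlier PHP or web-server configuration). A minimal diagnostic sketch in Python, assuming the default /tmp session path from the warning above:

        import glob
        import os
        import pwd
        import stat

        # List owner and mode of every PHP session file in /tmp. Files owned
        # by a user other than the web server account are the ones PHP will
        # fail to reopen with O_RDWR.
        for path in sorted(glob.glob("/tmp/sess_*")):
            st = os.stat(path)
            try:
                owner = pwd.getpwuid(st.st_uid).pw_name
            except KeyError:
                owner = str(st.st_uid)  # orphaned uid
            print(stat.filemode(st.st_mode), owner, path)

    If stale files owned by another account show up, removing them (or pointing session.save_path at a directory owned by the web server) addresses the cause rather than hiding the warning.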

    Read the article

  • Varnish waits for the complete page load before sending response to browser.

    - by Track
    I've set up Varnish to sit in front of a Tomcat server. What I've noticed is that Varnish seems to wait for the complete page to load (all css, js, etc.) before it sends any response to the browser. This causes a huge lag before the user sees anything. If I bypass Varnish and go directly to the site, it responds immediately. While the total page load time might be similar, the perception is that the site is slow. Has anyone faced this?
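    One way to quantify this is to compare time-to-first-byte through Varnish against the backend directly. A rough sketch, where the hostnames are placeholders and the Python requests library is assumed to be available:

        import time
        import requests

        def time_to_first_byte(url):
            # stream=True prevents requests from pre-reading the body;
            # reading a single byte blocks until the server starts sending.
            start = time.time()
            r = requests.get(url, stream=True)
            r.raw.read(1)
            elapsed = time.time() - start
            r.close()
            return elapsed

        # Placeholder hostnames: the proxied path vs. the Tomcat backend.
        print("via varnish:", time_to_first_byte("http://www.example.com/"))
        print("direct     :", time_to_first_byte("http://tomcat.example.com:8080/"))

    A large gap on the proxied path would confirm that Varnish is buffering the whole response before forwarding it, rather than streaming it to the client as it arrives.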

    Read the article

  • Load the main page if user visits a page meant for an ajax request?

    - by QAZ
    Hello, I am using jQuery for a simple website and have a main page 'index.html' which can load some content (e.g. 'info1.html' or 'info2.html') via jQuery ajax requests and display the result of these requests inside an element in the 'index.html' page. If a user somehow visits, say, 'info1.html' directly, is there a way to redirect or load the main 'index.html' page (or what's best practice for this type of thing)? Google is indexing all the small HTML files used for the ajax requests, and sometimes a user can click into the site via these pages. Thanks.
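    One common server-side convention is to distinguish ajax requests by the X-Requested-With header, which jQuery adds to same-origin ajax calls. A minimal sketch using Flask, purely as an assumption (the site described above may be static HTML, so treat this as illustrative, with the "fragments" folder as a hypothetical layout):

        from flask import Flask, redirect, request, send_from_directory

        app = Flask(__name__)

        # Serve fragment pages to ajax callers, but bounce direct browser
        # visits (no X-Requested-With header) to the main page.
        @app.route("/<name>.html")
        def fragment(name):
            if request.headers.get("X-Requested-With") != "XMLHttpRequest":
                return redirect("/index.html")
            return send_from_directory("fragments", name + ".html")

    The same check can be written in any server-side language; an alternative for a purely static site is a small script in each fragment that redirects to index.html when the fragment is loaded as a top-level page.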

    Read the article

  • UV Atlas Generation and Seam Removal

    - by P. Avery
    I'm generating light maps for scene mesh objects using DirectX's UV Atlas tool (D3DXUVAtlasCreate()). I've succeeded in generating an atlas; however, when I try to render the mesh object using the atlas, the seams are visible on the mesh. Below are images of a lightmap generated for a cube. Here is the code I use to generate a uv atlas for a cube:

        struct sVertexPosNormTex
        {
            D3DXVECTOR3 vPos, vNorm;
            D3DXVECTOR2 vUV;
            sVertexPosNormTex() {}
            sVertexPosNormTex( D3DXVECTOR3 v, D3DXVECTOR3 n, D3DXVECTOR2 uv )
            {
                vPos = v; vNorm = n; vUV = uv;
            }
            ~sVertexPosNormTex() {}
        };

        // create a light map texture to fill programmatically
        hr = D3DXCreateTexture( pd3dDevice, 128, 128, 1, 0, D3DFMT_A8R8G8B8,
                                D3DPOOL_MANAGED, &pLightmap );
        if( FAILED( hr ) )
        {
            DebugStringDX( "Main", "Failed to D3DXCreateTexture( lightmap )", __LINE__, hr );
            return hr;
        }

        // get the zero level surface from the texture
        IDirect3DSurface9 *pS = NULL;
        pLightmap->GetSurfaceLevel( 0, &pS );

        // clear surface
        pd3dDevice->ColorFill( pS, NULL, D3DCOLOR_XRGB( 0, 0, 0 ) );

        // load a sample mesh
        DWORD dwcMaterials = 0;
        LPD3DXBUFFER pMaterialBuffer = NULL;
        V_RETURN( D3DXLoadMeshFromX( L"cube3.x", D3DXMESH_MANAGED, pd3dDevice,
                                     &pAdjacency, &pMaterialBuffer, NULL,
                                     &dwcMaterials, &g_pMesh ) );

        // generate adjacency
        DWORD *pdwAdjacency = new DWORD[ 3 * g_pMesh->GetNumFaces() ];
        g_pMesh->GenerateAdjacency( 1e-6f, pdwAdjacency );

        // create light map coordinates
        LPD3DXMESH pMesh = NULL;
        LPD3DXBUFFER pFacePartitioning = NULL, pVertexRemapArray = NULL;
        FLOAT resultStretch = 0;
        UINT numCharts = 0;
        hr = D3DXUVAtlasCreate( g_pMesh, 0, 0, 128, 128, 3.5f, 0, pdwAdjacency,
                                NULL, NULL, NULL, NULL, NULL, 0,
                                &pMesh, &pFacePartitioning, &pVertexRemapArray,
                                &resultStretch, &numCharts );
        if( SUCCEEDED( hr ) )
        {
            // release and set mesh
            SAFE_RELEASE( g_pMesh );
            g_pMesh = pMesh;

            // write mesh to file
            hr = D3DXSaveMeshToX( L"cube4.x", g_pMesh, 0,
                                  ( const D3DXMATERIAL* )pMaterialBuffer->GetBufferPointer(),
                                  NULL, dwcMaterials, D3DXF_FILEFORMAT_TEXT );
            if( FAILED( hr ) )
            {
                DebugStringDX( "Main", "Failed to D3DXSaveMeshToX() at OnD3D9CreateDevice()", __LINE__, hr );
            }

            // fill the light map
            hr = BuildLightmap( pS, g_pMesh );
            if( FAILED( hr ) )
            {
                DebugStringDX( "Main", "Failed to BuildLightmap()", __LINE__, hr );
            }
        }
        else
        {
            DebugStringDX( "Main", "Failed to D3DXUVAtlasCreate() at OnD3D9CreateDevice()", __LINE__, hr );
        }
        SAFE_RELEASE( pS );
        SAFE_DELETE_ARRAY( pdwAdjacency );
        SAFE_RELEASE( pFacePartitioning );
        SAFE_RELEASE( pVertexRemapArray );
        SAFE_RELEASE( pMaterialBuffer );

    Here is the code to fill the lightmap texture:

        HRESULT BuildLightmap( IDirect3DSurface9 *pS, LPD3DXMESH pMesh )
        {
            HRESULT hr = S_OK;

            // validate lightmap texture surface and mesh
            if( !pS || !pMesh )
                return E_POINTER;

            // lock the mesh vertex buffer
            sVertexPosNormTex *pV = NULL;
            pMesh->LockVertexBuffer( D3DLOCK_READONLY, ( void** )&pV );

            // lock the mesh index buffer
            WORD *pI = NULL;
            pMesh->LockIndexBuffer( D3DLOCK_READONLY, ( void** )&pI );

            // get the lightmap texture surface description
            D3DSURFACE_DESC desc;
            pS->GetDesc( &desc );

            // lock the surface rect to fill with color data
            D3DLOCKED_RECT rct;
            hr = pS->LockRect( &rct, NULL, 0 );
            if( FAILED( hr ) )
            {
                DebugStringDX( "main.cpp:", "Failed to IDirect3DTexture9::LockRect()", __LINE__, hr );
                return hr;
            }

            // iterate the pixels of the lightmap texture
            // check each pixel to see if it lies between the uv coordinates of a cube face
            BYTE *pBuffer = ( BYTE* )rct.pBits;
            for( UINT y = 0; y < desc.Height; ++y )
            {
                BYTE* pBufferRow = ( BYTE* )pBuffer;
                for( UINT x = 0; x < desc.Width * 4; x += 4 )
                {
                    // determine the pixel's uv coordinate
                    D3DXVECTOR2 p( ( ( float )x / 4.0f ) / ( float )desc.Width + 0.5f / 128.0f,
                                   y / ( float )desc.Height + 0.5f / 128.0f );

                    // for each face of the mesh,
                    // check to see if the pixel lies within the face's uv coordinates
                    for( UINT i = 0; i < 3 * pMesh->GetNumFaces(); i += 3 )
                    {
                        sVertexPosNormTex v[ 3 ];
                        v[ 0 ] = pV[ pI[ i + 0 ] ];
                        v[ 1 ] = pV[ pI[ i + 1 ] ];
                        v[ 2 ] = pV[ pI[ i + 2 ] ];
                        if( TexcoordIsWithinBounds( v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p ) )
                        {
                            // the pixel lies b/t the uv coordinates of a cube face
                            // light contribution functions aren't needed yet
                            //D3DXVECTOR3 vPos = TexcoordToPos( v[ 0 ].vPos, v[ 1 ].vPos, v[ 2 ].vPos,
                            //                                  v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p );
                            //D3DXVECTOR3 vNormal = v[ 0 ].vNorm;

                            // set the color of this pixel red ( for demo )
                            BYTE ba[] = { 0, 0, 255, 255, };
                            //ComputeContribution( vPos, vNormal, g_sLight, ba );

                            // copy the byte array into the light map texture
                            memcpy( ( void* )&pBufferRow[ x ], ( void* )ba, 4 * sizeof( BYTE ) );
                        }
                    }
                }
                // go to next line of the texture
                pBuffer += rct.Pitch;
            }

            // unlock the surface rect
            pS->UnlockRect();

            // unlock mesh vertex and index buffers
            pMesh->UnlockIndexBuffer();
            pMesh->UnlockVertexBuffer();

            // write the surface to file
            hr = D3DXSaveSurfaceToFile( L"LightMap.jpg", D3DXIFF_JPG, pS, NULL, NULL );
            if( FAILED( hr ) )
                DebugStringDX( "Main.cpp", "Failed to D3DXSaveSurfaceToFile()", __LINE__, hr );

            return hr;
        }

        bool TexcoordIsWithinBounds( const D3DXVECTOR2 &t0, const D3DXVECTOR2 &t1,
                                     const D3DXVECTOR2 &t2, const D3DXVECTOR2 &p )
        {
            // compute vectors
            D3DXVECTOR2 v0 = t1 - t0, v1 = t2 - t0, v2 = p - t0;
            float f00 = D3DXVec2Dot( &v0, &v0 );
            float f01 = D3DXVec2Dot( &v0, &v1 );
            float f02 = D3DXVec2Dot( &v0, &v2 );
            float f11 = D3DXVec2Dot( &v1, &v1 );
            float f12 = D3DXVec2Dot( &v1, &v2 );

            // compute barycentric coordinates
            float invDenom = 1 / ( f00 * f11 - f01 * f01 );
            float fU = ( f11 * f02 - f01 * f12 ) * invDenom;
            float fV = ( f00 * f12 - f01 * f02 ) * invDenom;

            // check if point is in triangle
            if( ( fU >= 0 ) && ( fV >= 0 ) && ( fU + fV < 1 ) )
                return true;

            return false;
        }

    (Screenshot and lightmap images omitted.)

    I believe the problem comes from the difference between the lightmap uv coordinates and the pixel center coordinates. For example, here are the lightmap uv coordinates (generated by D3DXUVAtlasCreate()) for a specific face (tri) within the mesh; keep in mind that I'm using the mesh uv coordinates to write the pixels for the texture:

        v[ 0 ].uv = D3DXVECTOR2( 0.003581, 0.295631 );
        v[ 1 ].uv = D3DXVECTOR2( 0.003581, 0.003581 );
        v[ 2 ].uv = D3DXVECTOR2( 0.295631, 0.003581 );

    The lightmap texture size is 128 x 128 pixels. The upper-left pixel center coordinates are:

        float halfPixel = 0.5 / 128; // = 0.00390625
        D3DXVECTOR2 pixelCenter = D3DXVECTOR2( halfPixel, halfPixel );

    Will the mapping and sampling of the lightmap texture require that an offset be taken into account, or that the uv coordinates be snapped to the pixel centers? Any ideas on the best way to approach this situation would be appreciated. What are the common practices?
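    For reference, here is the same barycentric point-in-triangle test transcribed into a short Python sketch, with the texel-center mapping written out explicitly (the 128x128 texture size is taken from the question). It makes it easy to probe individual texels against a chart's uv coordinates when reasoning about where the seams fall:

        SIZE = 128

        def texel_center_uv(x, y, size=SIZE):
            # The +0.5 places the sample at the center of texel (x, y),
            # i.e. the half-pixel offset discussed above.
            return ((x + 0.5) / size, (y + 0.5) / size)

        def inside_triangle(t0, t1, t2, p):
            # Same dot-product/barycentric formulation as TexcoordIsWithinBounds.
            v0 = (t1[0] - t0[0], t1[1] - t0[1])
            v1 = (t2[0] - t0[0], t2[1] - t0[1])
            v2 = (p[0] - t0[0], p[1] - t0[1])
            dot = lambda a, b: a[0] * b[0] + a[1] * b[1]
            f00, f01, f02 = dot(v0, v0), dot(v0, v1), dot(v0, v2)
            f11, f12 = dot(v1, v1), dot(v1, v2)
            inv = 1.0 / (f00 * f11 - f01 * f01)
            u = (f11 * f02 - f01 * f12) * inv
            v = (f00 * f12 - f01 * f02) * inv
            return u >= 0 and v >= 0 and u + v < 1

        # The chart from the question, probed at the center of texel (0, 37):
        t0, t1, t2 = (0.003581, 0.295631), (0.003581, 0.003581), (0.295631, 0.003581)
        print(inside_triangle(t0, t1, t2, texel_center_uv(0, 37)))  # True

    Probing texels along a chart edge this way shows which border texels fail the strict u + v < 1 test and are therefore never written, which is exactly where bilinear sampling will pull in the background color and produce a visible seam.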

    Read the article

  • VS2010 compiles solution without errors, msbuild fails: "fatal error CS0002: Unable to load message string from resources"

    - by Nathan Ridley
    I'm having a lot of trouble trying to track down the cause of this error message. I have a large Visual Studio 2010 solution which compiles without error on my local machine, but on the build server msbuild fails on one of the projects with the error:

        fatal error CS0002: Unable to load message string from resources

    Here's the red error section at the end:

        Build FAILED.
        "C:\TeamCity\buildAgent\work\85eff164854b9e67\Libraries\Domainface.Proxy.Common\Domainface.Proxy.Common.csproj" (default target) (9) ->
        (CoreCompile target) ->
          CSC : fatal error CS0002: Unable to load message string from resources. [C:\TeamCity\buildAgent\work\85eff164854b9e67\Libraries\Domainface.Proxy.Common\Domainface.Proxy.Common.csproj]

            0 Warning(s)
            1 Error(s)

    The entire msbuild output from the build server is here: http://pastie.org/3660842. What does the error generally refer to that would cause it to build locally but not on the build server?

    UPDATE: I have just run msbuild /version on both machines and it turns out the .NET Framework versions are very slightly different. The local machine is 4.0.30319.488 and the build server is 4.0.30319.1. I'm about to run Windows Update on the server to allow it to install some updates, as several seem to be .NET Framework-related, so I'll see if that makes a difference.

    UPDATE: Installing the updates didn't help. I just remembered I copied up csc.exe from the async preview a little while ago in order to facilitate async compilation (the actual async preview had failed to install on the server due to Visual Studio not being there, but installing the Visual Studio team viewer seems to have fixed that), so I've just run the proper async CTP3 installer to see if that makes a difference.

    Read the article

  • Update packages on very old ubuntu

    - by meewoK
    I want to add MySQLi support to a machine running:

        Server Version: Apache/2.2.4 (Ubuntu) PHP/5.2.3-1ubuntu6.3

    I would rather not update more things than I need to. I run the following:

        sudo apt-get install php5-mysql

    However, as the Ubuntu version is old, I get the following:

        WARNING: The following packages cannot be authenticated!
          php5-cli php5-mysql php5-mhash php5-xsl php5-pspell php5-snmp php5-curl
          php5-xmlrpc php5-sqlite php5-gd libapache2-mod-php5 php5-common
        Install these packages without verification [y/N]? Y
        Err http://gr.archive.ubuntu.com gutsy-updates/main php5-cli 5.2.3-1ubuntu6.4  404 Not Found
        Err http://security.ubuntu.com gutsy-security/main php5-cli 5.2.3-1ubuntu6.4  404 Not Found
        Err http://security.ubuntu.com gutsy-security/main php5-mysql 5.2.3-1ubuntu6.4  404 Not Found
        Err http://security.ubuntu.com gutsy-security/main php5-mhash 5.2.3-1ubuntu6.4  404 Not Found
        Err http://security.ubuntu.com gutsy-security/main php5-xsl 5.2.3-1ubuntu6.4  404 Not Found
        Err http://security.ubuntu.com gutsy-security/main php5-pspell 5.2.3-1ubuntu6.4  404 Not Found
        Err http://security.ubuntu.com gutsy-security/main php5-snmp 5.2.3-1ubuntu6.4  404 Not Found
        Err http://security.ubuntu.com gutsy-security/main php5-curl 5.2.3-1ubuntu6.4  404 Not Found
        Err http://security.ubuntu.com gutsy-security/main php5-xmlrpc 5.2.3-1ubuntu6.4  404 Not Found
        Err http://security.ubuntu.com gutsy-security/main php5-sqlite 5.2.3-1ubuntu6.4  404 Not Found
        Err http://security.ubuntu.com gutsy-security/main php5-gd 5.2.3-1ubuntu6.4  404 Not Found
        Err http://security.ubuntu.com gutsy-security/main libapache2-mod-php5 5.2.3-1ubuntu6.4  404 Not Found
        Err http://security.ubuntu.com gutsy-security/main php5-common 5.2.3-1ubuntu6.4  404 Not Found
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/p/php5/php5-cli_5.2.3-1ubuntu6.4_i386.deb  404 Not Found
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/p/php5/php5-mysql_5.2.3-1ubuntu6.4_i386.deb  404 Not Found
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/p/php5/php5-mhash_5.2.3-1ubuntu6.4_i386.deb  404 Not Found
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/p/php5/php5-xsl_5.2.3-1ubuntu6.4_i386.deb  404 Not Found
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/p/php5/php5-pspell_5.2.3-1ubuntu6.4_i386.deb  404 Not Found
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/p/php5/php5-snmp_5.2.3-1ubuntu6.4_i386.deb  404 Not Found
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/p/php5/php5-curl_5.2.3-1ubuntu6.4_i386.deb  404 Not Found
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/p/php5/php5-xmlrpc_5.2.3-1ubuntu6.4_i386.deb  404 Not Found
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/p/php5/php5-sqlite_5.2.3-1ubuntu6.4_i386.deb  404 Not Found
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/p/php5/php5-gd_5.2.3-1ubuntu6.4_i386.deb  404 Not Found
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/p/php5/libapache2-mod-php5_5.2.3-1ubuntu6.4_i386.deb  404 Not Found
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/p/php5/php5-common_5.2.3-1ubuntu6.4_i386.deb  404 Not Found
        E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

    Questions: Can I add the MySQLi feature using another method instead of sudo apt-get? Even if successful, can this break something on the system?
    Update: I have tried to add additional sources using the instructions from http://superuser.com/questions/339537/where-can-i-get-therepositories-for-old-ubuntu-versions. I have the following in the /etc/apt/sources.list file:

        # deb cdrom:[Ubuntu-Server 7.10 _Gutsy Gibbon_ - Release i386 (20071016)]/ gutsy main restricted
        #deb cdrom:[Ubuntu-Server 7.10 _Gutsy Gibbon_ - Release i386 (20071016)]/ gutsy main restricted
        # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
        # newer versions of the distribution.
        deb http://gr.archive.ubuntu.com/ubuntu/ gutsy main restricted universe multiverse
        deb http://gr.archive.ubuntu.com/ubuntu/ gutsy-backports main restricted universe multiverse
        deb-src http://gr.archive.ubuntu.com/ubuntu/ gutsy main restricted
        ## Major bug fix updates produced after the final release of the
        ## distribution.
        deb http://gr.archive.ubuntu.com/ubuntu/ gutsy-updates main restricted
        deb-src http://gr.archive.ubuntu.com/ubuntu/ gutsy-updates main restricted
        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team, and may not be under a free licence. Please satisfy yourself as to
        ## your rights to use the software. Also, please note that software in
        ## universe WILL NOT receive any review or updates from the Ubuntu security
        ## team.
        deb http://gr.archive.ubuntu.com/ubuntu/ gutsy-updates universe
        deb-src http://gr.archive.ubuntu.com/ubuntu/ gutsy-updates universe
        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team, and may not be under a free licence. Please satisfy yourself as to
        ## your rights to use the software. Also, please note that software in
        ## multiverse WILL NOT receive any review or updates from the Ubuntu
        ## security team.
        #deb http://gr.archive.ubuntu.com/ubuntu/ gutsy multiverse
        deb-src http://gr.archive.ubuntu.com/ubuntu/ gutsy multiverse
        deb http://gr.archive.ubuntu.com/ubuntu/ gutsy-updates multiverse
        deb-src http://gr.archive.ubuntu.com/ubuntu/ gutsy-updates multiverse
        ## Uncomment the following two lines to add software from the 'backports'
        ## repository.
        ## N.B. software from this repository may not have been tested as
        ## extensively as that contained in the main release, although it includes
        ## newer versions of some applications which may provide useful features.
        ## Also, please note that software in backports WILL NOT receive any review
        ## or updates from the Ubuntu security team.
        # deb http://gr.archive.ubuntu.com/ubuntu/ gutsy-backports main restricted universe multiverse
        # deb-src http://gr.archive.ubuntu.com/ubuntu/ gutsy-backports main restricted universe multiverse
        ## Uncomment the following two lines to add software from Canonical's
        ## 'partner' repository. This software is not part of Ubuntu, but is
        ## offered by Canonical and the respective vendors as a service to Ubuntu
        ## users.
        # deb http://archive.canonical.com/ubuntu gutsy partner
        # deb-src http://archive.canonical.com/ubuntu gutsy partner
        deb http://security.ubuntu.com/ubuntu gutsy-security main restricted
        deb-src http://security.ubuntu.com/ubuntu gutsy-security main restricted
        deb http://security.ubuntu.com/ubuntu gutsy-security universe
        deb-src http://security.ubuntu.com/ubuntu gutsy-security universe
        deb http://security.ubuntu.com/ubuntu gutsy-security multiverse
        deb-src http://security.ubuntu.com/ubuntu gutsy-security multiverse
        # Required
        deb http://old-releases.ubuntu.com/ubuntu/gutsy main restricted universe multiverse
        deb http://old-releases.ubuntu.com/ubuntu/gutsy-updates main restricted universe multiverse
        deb http://old-releases.ubuntu.com/ubuntu/gutsy-security main restricted universe multiverse

    Read the article

  • Windows Backup failed as another backup or recovery is in progress.

    - by remunda
    Hi, I have set up a backup schedule on our server: SQL Server 2008 backs up at 01:00am, and the Windows Server 2008 R2 system backup runs at 4:00am. The SQL backup runs well, but the system backup sometimes ends with an error. The error is: http://technet.microsoft.com/en-us/library/dd364768%28WS.10%29.aspx I believe it is caused by SQL Server, because SQL Server unexpectedly runs a backup at the same time. This is from the SQL Server log (it appears for every database):

        Date    4.2.2010 4:00:21
        Log     SQL Server (Current - 29.1.2010 23:25:00)
        Source  spid72
        Message I/O is frozen on database master. No user action is required. However, if I/O is not resumed promptly, you could cancel the backup.

    and

        Date    4.2.2010 4:00:24
        Log     SQL Server (Current - 29.1.2010 23:25:00)
        Source  Backup
        Message Database backed up. Database: master, creation date(time): 2010/01/29(23:25:32), pages dumped: 370, first LSN: 859:56:37, last LSN: 859:80:1, number of dump devices: 1, device information: (FILE=1, TYPE=VIRTUAL_DEVICE: {'{17B91D5C-9968-4D11-A7F1-1A31523D32F0}25'}). This is an informational message only. No user action is required.

    My questions: why does the SQL backup run together with the Windows backup? How can I resolve these errors? Thank you a lot.

    Read the article

  • Failed upgrade of PHP on Ubuntu 12.04, error: Sub-process /usr/bin/dpkg returned an error code (1)

    - by DanielAttard
    I just tried to upgrade my version of PHP on Ubuntu 12.04 and now I have messed it up. First I did this:

        sudo add-apt-repository ppa:ondrej/php5-oldstable

    Then I did this:

        sudo apt-get update

    Then finally I did this:

        sudo apt-get install php5

    And now I am getting an error message about "Sub-process /usr/bin/dpkg returned an error code (1)". What have I done wrong? How can I fix this problem? Thanks. Here are the errors received:

        Do you want to continue [Y/n]? Y
        debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
        Setting up libapache2-mod-php5 (5.4.28-1+deb.sury.org~precise+1) ...
        debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
        dpkg: error processing libapache2-mod-php5 (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Setting up php5-cli (5.4.28-1+deb.sury.org~precise+1) ...
        debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
        dpkg: error processing php5-cli (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        dpkg: dependency problems prevent configuration of php5-curl:
         php5-curl depends on phpapi-20100525+lfs; however:
          Package phpapi-20100525+lfs is not installed.
          Package libapache2-mod-php5 which provides phpapi-20100525+lfs is not configured yet.
          Package php5-cli which provides phpapi-20100525+lfs is not configured yet.
        dpkg: error processing php5-curl (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        dpkg: dependency problems prevent configuration of php5-gd:
         php5-gd depends on phpapi-20100525+lfs; however:
          Package phpapi-20100525+lfs is not installed.
          Package libapache2-mod-php5 which provides phpapi-20100525+lfs is not configured yet.
          Package php5-cli which provides phpapi-20100525+lfs is not configured yet.
        dpkg: error processing php5-gd (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        dpkg: dependency problems prevent configuration of php5-mcrypt:
         php5-mcrypt depends on phpapi-20100525+lfs; however:
          Package phpapi-20100525+lfs is not installed.
          Package libapache2-mod-php5 which provides phpapi-20100525+lfs is not configured yet.
          Package php5-cli which provides phpapi-20100525+lfs is not configured yet.
        dpkg: error processing php5-mcrypt (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        dpkg: dependency problems prevent configuration of php5-mysql:
         php5-mysql depends on phpapi-20100525+lfs; however:
          Package phpapi-20100525+lfs is not installed.
          Package libapache2-mod-php5 which provides phpapi-20100525+lfs is not configured yet.
          Package php5-cli which provides phpapi-20100525+lfs is not configured yet.
        dpkg: error processing php5-mysql (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        dpkg: dependency problems prevent configuration of php5:
         php5 depends on libapache2-mod-php5 (>= 5.4.28-1+deb.sury.org~precise+1) | libapache2-mod-php5filter (>= 5.4.28-1+deb.sury.org~precise+1) | php5-cgi (>= 5.4.28-1+deb.sury.org~precise+1) | php5-fpm (>= 5.4.28-1+deb.sury.org~precise+1); however:
          Package libapache2-mod-php5 is not configured yet.
          Package libapache2-mod-php5filter is not installed.
          Package php5-cgi is not installed.
          Package php5-fpm is not installed.
        dpkg: error processing php5 (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        Errors were encountered while processing:
         libapache2-mod-php5 php5-cli php5-curl php5-gd php5-mcrypt php5-mysql php5
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • repeated failing passwords in linux security log (/var/log/secure)

    - by wallyk
    Recently, I opened up the SSH port through my firewalls (and redirecting to my server) so I could check on the (http) server while on the road. The first week or two there was nothing different. But now, three or four weeks later, I see lots of this:

        Mar 20 08:38:28 localhost sshd[21895]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=mail.queued.net user=root
        Mar 20 08:38:31 localhost sshd[21895]: Failed password for root from 207.210.101.209 port 2854 ssh2
        Mar 20 15:38:31 localhost sshd[21896]: Received disconnect from 207.210.101.209: 11: Bye Bye
        Mar 20 08:38:32 localhost unix_chkpwd[21900]: password check failed for user (root)
        Mar 20 08:38:32 localhost sshd[21898]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=mail.queued.net user=root
        Mar 20 08:38:34 localhost sshd[21898]: Failed password for root from 207.210.101.209 port 3729 ssh2
        Mar 20 15:38:35 localhost sshd[21899]: Received disconnect from 207.210.101.209: 11: Bye Bye
        Mar 20 08:38:36 localhost unix_chkpwd[21903]: password check failed for user (root)
        Mar 20 08:38:36 localhost sshd[21901]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=mail.queued.net user=root
        Mar 20 08:38:38 localhost sshd[21901]: Failed password for root from 207.210.101.209 port 4313 ssh2
        Mar 20 15:38:38 localhost sshd[21902]: Received disconnect from 207.210.101.209: 11: Bye Bye
        Mar 20 08:38:40 localhost unix_chkpwd[21906]: password check failed for user (root)
        Mar 20 08:38:40 localhost sshd[21904]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=mail.queued.net user=root
        Mar 20 08:38:42 localhost sshd[21904]: Failed password for root from 207.210.101.209 port 4869 ssh2
        Mar 20 15:38:43 localhost sshd[21905]: Received disconnect from 207.210.101.209: 11: Bye Bye
        Mar 20 08:38:44 localhost unix_chkpwd[21909]: password check failed for user (root)
        Mar 20 08:38:44 localhost sshd[21907]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=mail.queued.net user=root
        Mar 20 08:38:46 localhost sshd[21907]: Failed password for root from 207.210.101.209 port 2512 ssh2
        Mar 20 15:38:47 localhost sshd[21908]: Received disconnect from 207.210.101.209: 11: Bye Bye
        Mar 20 15:38:57 localhost sshd[21912]: Connection closed by 207.210.101.209

    There are about 1100 lines of these for March 20th, zero for the 19th, and 800 or so for the 18th, all related to the same IP. What does it mean? What should I do? Why isn't it chronological?
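    A burst of rapid-fire failed root passwords from one host is the classic signature of an SSH brute-force attempt; key-only authentication, fail2ban/denyhosts, or a non-standard port are the usual mitigations. As for the ordering, note that the interleaved 08:38 and 15:38 stamps differ by exactly seven hours, which points at a timezone mismatch between the logging processes rather than out-of-order writes. A quick sketch for tallying failures per source IP, assuming the log path shown above:

        import re
        from collections import Counter

        # Count "Failed password" lines per source IP in /var/log/secure.
        pattern = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")
        hits = Counter()
        with open("/var/log/secure") as log:
            for line in log:
                m = pattern.search(line)
                if m:
                    hits[m.group(1)] += 1

        for ip, count in hits.most_common(10):
            print(f"{count:6d}  {ip}")

    The top offenders from a tally like this are good candidates for a firewall drop rule while a longer-term mitigation is put in place.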

    Read the article

  • Auto load balancing two node Cluster Hyper-v 2008 R2 enterprise?

    - by Kristofer O
    My setup is a two-node cluster with 72GB of RAM each and a ~10TB MD3000i iSCSI SAN. I have about 30 VMs running and keep about 15 on either server. I do a live migration to the other server if I need to run updates or whatever. Either one of the servers is able to run all the VMs if needed, but the CPU load gets pretty high. Here are my issues. I know Hyper-V has a limit of a single live migration at a time, but why doesn't it queue them up to move one at a time? If I multi-select, I don't get the option to live migrate them one at a time, or if I'm in the process of migrating one it will give me an error that it's currently migrating a VM. Is there a button I missed that will tell a node that it needs to migrate all its VMs elsewhere? Another question: does anyone know the best way to load balance VMs based on CPU and/or network utilization? I have some VMs that don't do much and some that thrash the CPU or network, and I'd like to balance them out across both servers if at all possible. Is there any way to automate it? Last question: if I overcommit my cluster, is there a way to tell the cluster which VMs I want to keep running and to save-state other VMs based on available system resources? Say my one node blue-screens and the other node begins starting the VMs up; I want the unimportant ones to shut down or save state so the important ones can stay running or come back online. Thanks just for reading all that. Any help would be great.

    Read the article

  • PHP Startup: Unable to load dynamic library 'C:"\php\php_mysql.dll' - The specified module could not be loaded

    - by Tiny
    I'm trying to upgrade to PHP 5.4.14 from PHP 5.4.3 in WampServer 2.2e. I downloaded php-5.4.14-Win32-VC9-x86 (thread safe), extracted it under C:\wamp\bin\php, copied wampserver.conf from C:\wamp\bin\php\php5.4.3 to C:\wamp\bin\php\php5.4.14, and renamed php.ini-development to phpForApache.ini. The port number of the wamp server had already been changed in the httpd.conf file to 8087 from its default 80. This is mentioned here, though that is about upgrading from PHP 5.3.5 to PHP 5.4.0. After this, the wamp server and its services were restarted all over again, and both versions appeared in the php-versions menu (which opens when the server's tray icon is clicked). But when I attempt to enable a library like php_mysql or php_mysqli, a warning message box appears:

        PHP Startup: Unable to load dynamic library 'C:"\php\php_mysql.dll' - The specified module could not be loaded.

    I have also tried removing the semicolon before them in the php.ini file, but to no avail. I'm running Microsoft Windows XP Professional Version 2002, Service Pack 3. Where might the problem be?

    EDIT: I have changed extension_dir from C:\php to c:\wamp\bin\php\php5.4.14\ext\ in php.ini, as the answer below indicates, and the library is now loaded correctly, but it says:

        1045 - Access denied for user 'root'@'localhost' (using password: YES)

    though the user name and the password are the same as they are in MySQL in the config.inc.php file under phpmyadmin. I have also tried restarting the MySQL56 service from Control Panel - Services (Local), but it keeps giving the same error. Does someone know why this happens?

    Read the article

  • error: cannot fork() for status: Resource temporarily unavailable (git)

    - by Elnaz Shahmehr
    When I try to do anything (add, remove, pull, push) in my GitHub repository, I get this error in my terminal. Thanks in advance!

        selnaz:iOS-Tidinfo Lnaz$ git add .
        error: cannot fork() for status: Resource temporarily unavailable
        fatal: Could not run git status --porcelain
        fatal: git status --porcelain failed
        fatal: git status --porcelain failed
        fatal: git status --porcelain failed
        fatal: git status --porcelain failed
        fatal: git status --porcelain failed
        fatal: git status --porcelain failed

    Edit:

        selnaz:iOS-Tidinfo Lnaz$ ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        file size               (blocks, -f) unlimited
        max locked memory       (kbytes, -l) unlimited
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 256
        pipe size            (512 bytes, -p) 1
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 709
        virtual memory          (kbytes, -v) unlimited

    Edit 2:

        selnaz:iOS-Tidinfo Lnaz$ ps xfu | wc -l
        ps: illegal option -- f
        usage: ps [-AaCcEefhjlMmrSTvwXx] [-O fmt | -o fmt] [-G gid[,gid...]]
                  [-u]
                  [-p pid[,pid...]] [-t tty[,tty...]] [-U user[,user...]]
               ps [-L]
               0
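    "cannot fork(): Resource temporarily unavailable" usually means the per-user process limit (max user processes, 709 in the ulimit output above) is exhausted, often by runaway child processes. A rough sketch to compare the live process count against that cap, assuming a Unix-like system with Python 3.7+ (note ps on macOS wants BSD-style flags, hence the failure of ps xfu above):

        import getpass
        import resource
        import subprocess

        # Report the RLIMIT_NPROC cap alongside the number of processes the
        # current user is actually running.
        soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
        out = subprocess.run(["ps", "-u", getpass.getuser()],
                             capture_output=True, text=True).stdout
        count = len(out.strip().splitlines()) - 1  # subtract the header row
        print(f"processes: {count}, soft limit: {soft}, hard limit: {hard}")

    If the count is at or near the soft limit, finding and killing the runaway processes (or raising the limit with ulimit -u) should let git fork again.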

    Read the article

  • How To Set Up A Loadbalanced High-Availability Apache Cluster On Windows

    - by bReAd
    Setting up a two-node Apache web server cluster that provides high availability. In front of the Apache cluster we create a load balancer that splits up incoming requests between the two Apache nodes. Because we do not want the load balancer to become another "Single Point Of Failure", we must provide high availability for the load balancer, too. Therefore our load balancer will in fact consist of two load balancer nodes that monitor each other using heartbeat; if one load balancer fails, the other takes over silently. The following setup is proposed:

        Apache node 1: webserver1.example.com (webserver1) – IP address: 192.168.0.101; Apache document root: /var/www
        Apache node 2: webserver2.example.com (webserver2) – IP address: 192.168.0.102; Apache document root: /var/www
        Load balancer node 1: loadb1.example.com (loadb1) – IP address: 192.168.0.103
        Load balancer node 2: loadb2.example.com (loadb2) – IP address: 192.168.0.104
        Virtual IP address: 192.168.0.105 (used for incoming requests)

    Currently there are many solutions for Linux machines, but I haven't found any for Windows despite searching for a long time. How do I create the virtual IP in Windows, perform the monitoring, and make the load balancer listen on the virtual IP address?

    Read the article

  • Failed reverse DNS and SPF only when using Thunderbird!

    - by TruMan1
    I have reverse DNS and SPF records correctly set up for my mail server, and sending webmail from it works perfectly. The problem is that when Thunderbird sends out emails, the receiving side sees the client's IP address for the hostname. I have SMTP authentication enabled and have specified my mail server as the outgoing SMTP server. Mail is being sent, but it is not being "signed" with the mail server's IP address; it is using the client's. Is there any way to fix this? This is the spam error I get when sending from Thunderbird:

        Spam: Reverse DNS Lookup, SPF_SoftFail
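    When diagnosing this, it helps to look at the published SPF policy itself, since a SoftFail means the receiving side evaluated some IP (often one taken from a Received header or the HELO name, not the authenticated relay) against it. A quick sketch for printing the policy, assuming the dnspython package (pip install dnspython, 2.x API) and with the domain as a placeholder:

        import dns.resolver

        # Print the SPF TXT record so it can be checked against the IP the
        # receiving server actually evaluated.
        domain = "example.com"  # placeholder: your sending domain
        for record in dns.resolver.resolve(domain, "TXT"):
            text = b"".join(record.strings).decode()
            if text.startswith("v=spf1"):
                print(text)

    Comparing that policy against the full headers of a rejected message (which show exactly which IP the receiver tested) usually pinpoints whether the mail server is adding the client IP somewhere the spam filter reads.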

    Read the article

  • How can I tell which page is creating a high-CPU-load httpd process?

    - by Greg
    I have a LAMP server (CentOS-based MediaTemple (DV) Extreme with 2GB RAM) running a customized WordPress+bbPress combination. At about 30k pageviews per day the server is starting to groan. It stumbled for about 5 minutes earlier today when there was an influx of traffic. Even under normal conditions I can see that the virtual server is sometimes at 90%+ CPU load. Using top I can often see 5-7 httpd processes that are each using 15-30% (and sometimes even 50%) CPU. Before we do a big optimization pass (our use of MySQL is probably the culprit) I would love to find the pages that are the main offenders and deal with them first. Is there a way to find out which specific requests are responsible for the most CPU-hungry httpd processes? I have found a lot of info on optimization in general, but nothing on this specific question. Secondly, I know there are a million variables, but if you have any insight on whether we should be at the boundaries of performance with a single dedicated virtual server and a site of this size, I would love to hear your opinion. Should we be thinking about moving to a more powerful server, or should we be focused on optimizing the current one?
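    Apache's mod_status with ExtendedStatus On is the standard way to see which request each worker is serving at a given moment, which can then be matched against the busy PIDs from top. A lower-tech complement is to tally the most-requested URLs from the access log around a spike; a minimal sketch, assuming Apache's combined log format and the default CentOS log path:

        import re
        from collections import Counter

        # Tally requested URLs; the busiest dynamic pages during a CPU spike
        # are the first candidates for caching or query optimization.
        pattern = re.compile(r'"(?:GET|POST) (\S+)')
        hits = Counter()
        with open("/var/log/httpd/access_log") as log:
            for line in log:
                m = pattern.search(line)
                if m:
                    hits[m.group(1)] += 1

        for url, count in hits.most_common(15):
            print(f"{count:6d}  {url}")

    Request counts are only a proxy for CPU cost, so pairing this with per-request timings (e.g. adding %D to the LogFormat) gives a much better picture of which pages are genuinely expensive.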

    Read the article

  • Plugin 'InnoDB' registration as a STORAGE ENGINE failed. On win 7

    - by NimChimpsky
    I have had to reinstall MySQL, but the service is failing to start, with the above listed as the cause in Event Viewer. One suggested solution is apparently to delete a couple of files prefixed with 'ib_logfile', which belong to any old databases. However, I do not have these files, and my service is still failing to start. When I say I don't have these files: I did a search using Windows search with zero results, and they are definitely not present in my MySQL install directory. I also don't have the 'Documents and Settings/Application Data' folder referenced in the link. In fact I only have one MySQL install directory, and I know where that is. What do I need to delete or change? The instance is configured OK (I ran that as administrator) and it is listed in services, but the service itself fails to start. Any tips, other than going over to PostgreSQL?
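    One thing worth checking: on Windows 7, MySQL's data directory typically lives under C:\ProgramData, which is a hidden folder that Windows search skips by default, so the ib_logfile files can exist even though a search comes back empty. A small sketch to look for leftover InnoDB files, where the datadir path is an assumption and should be checked against the "datadir" line in my.ini:

        import os

        # Walk the (assumed) MySQL data directory looking for leftover
        # InnoDB redo logs and tablespace files from the old install.
        datadir = r"C:\ProgramData\MySQL\MySQL Server 5.5\data"  # assumption
        for root, dirs, files in os.walk(datadir):
            for name in files:
                if name.startswith("ib_logfile") or name.startswith("ibdata"):
                    print(os.path.join(root, name))

    If stale files turn up there, moving them aside before starting the service is the usual fix for the InnoDB registration error.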

    Read the article

  • Why doesn't apache2 consistently load template fragments from memcached?

    - by Hobhouse
    I run a webserver on an Ubuntu box in the Rackspace Cloud with Django 1.0x, Apache2/WSGI and memcached 1.2.2. Some of my templates make use of template fragment caching:

        {% load cache %}
        {% cache 604800 keyname %}
        <!-- cache: {% now "H:i, j. b" %} -->
        {{ my_content }}
        {% endcache %}

    When I reload apache2, everything is fine. If keyname is not set, my_content is generated and keyname is set in memcached. After that, my_content is served from memcached. My problem is that after some hours (notably less time than 604800 seconds), apache2 seems to stop talking to memcached, and my_content is generated from scratch every time. When this happens I can still set and get keys from memcached from my Python shell. Memcached also has more than enough memory to store the keys. But to get apache2 to start talking to memcached again I have to restart apache2, and then it will once again get the now several-hours-old keys from memcached. What can be the reason for this behaviour, and how do I fix it?
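    A useful first step is to separate "memcached lost the key" from "the Apache-side client stopped talking to memcached" (for example, a client that marked the server dead after a timeout will treat every get as a miss). A minimal probe, assuming the python-memcached client and the default local port; the key name is a stand-in, since Django derives fragment cache keys from the fragment name and vary arguments:

        import memcache

        # Set and read back a key directly, and dump server stats so that
        # connection counts and evictions can be watched while Apache is
        # in its "ignoring memcached" state.
        mc = memcache.Client(["127.0.0.1:11211"], debug=1)
        mc.set("probe-key", "probe-value", time=604800)
        print(mc.get("probe-key"))
        print(mc.get_stats())  # watch curr_connections and evictions

    If curr_connections keeps climbing while Apache misbehaves, the worker processes may be holding (or leaking) connections until memcached stops accepting new ones, which would match the symptom of everything recovering on an Apache restart.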

    Read the article

  • Why are FMS logs filled with 'play' event status code 408 for a failed webcast?

    - by Stu Thompson
    Recently we had a live webcast event go horribly wrong. I'm doing the technical post-mortem, with limited information. We know that the hardware encoder (a Digital Rapid Touch Stream Web HDI) was unable to send upstream at a sustained, reliable high rate. What we don't know is whether the encoder's connection was problematic (Zürich), or that of the streaming server (in Frankfurt). Unfortunately, I've got three different vendors all blaming each other (the CDN who runs the server, the on-site ISP and the on-site encoding team). In the FMS log files I see a couple of interesting things:

        Zillions of status code 408 entries on play events for clients. Adobe's documentation states that this means "Stream stopped because client disconnected". ("Zillions" would be a ratio of 10 events for every individual IP address.)
        Several unpublish / (re)publish events per hour for the encoder.

    I'd like to know whether all those 408s tell me with authority that the FMS server was starved for bandwidth, or that the encoding signal was starved (and hence the server was disconnecting clients). Any clues?

    Read the article

  • Removing grub and getting a dual boot of Linux Mint and Win 8.1 working after failed attempt

    - by ThroatOfWinter57
    I gave the details of my problem at reddit: http://www.reddit.com/r/linuxquestions/comments/27qrun/more_specific_questions_about_failed_win_81mint/ tl;dr: I deleted the /, /home, and swap partitions I made for Mint after realizing my installation couldn't be booted into, and gave the space back to my Windows partition. Running boot-repair from my Mint live session messed things up. Now I can't even boot to my live session USB because GRUB is left over. Windows 8.1 does work, though.

    Read the article
