Search Results

Search found 1644 results on 66 pages for 'cpp fanatic'.

  • curl problems in c++ class

    - by Danilo
    I read a few articles on C++ and curl here on Stack Overflow and assembled the following. The main goal is to handle the whole request in an instance of a class -- and maybe later in a secondary thread. My problem is: "content_" seems to stay empty even though it's the same address.

    HttpFetch.h: class HttpFetch { private: CURL *curl; static size_t handle(char * data, size_t size, size_t nmemb, void * p); size_t handle_impl(char * data, size_t size, size_t nmemb); public: std::string content_; static std::string url_; HttpFetch(std::string url); void start(); std::string data(); };

    HttpFetch.cpp: HttpFetch::HttpFetch(std::string url) { curl_global_init(CURL_GLOBAL_ALL); //pretty obvious curl = curl_easy_init(); content_.append("Test"); std::cout << &content_ << "\n"; curl_easy_setopt(curl, CURLOPT_URL, &url); curl_easy_setopt(curl, CURLOPT_WRITEDATA, &content_); curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, &HttpFetch::handle); //curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L); //tell curl to output its progress curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L); //std::cout << &content_ << "\n"; } void HttpFetch::start() { curl_easy_perform(curl); curl_easy_cleanup(curl); } size_t HttpFetch::handle(char * data, size_t size, size_t nmemb, void * p) { std::string *stuff = reinterpret_cast<std::string*>(p); stuff->append(data, size * nmemb); std::cout << stuff << "\n"; // has content from data in it! return size * nmemb; }

    main.cpp: #include "HttpFetch.h" int main(int argc, const char * argv[]) { HttpFetch call = *new HttpFetch("http://www.example.com"); call.start(); ::std::cout << call.content_ << "\n" } Thanks in advance.
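
    Two things in the posted code likely explain the empty string: `curl_easy_setopt(curl, CURLOPT_URL, &url)` passes the address of a `std::string` where libcurl expects a `const char*`, and `HttpFetch call = *new HttpFetch(...)` copies the object, so `CURLOPT_WRITEDATA` still points at the leaked original's `content_` rather than `call.content_`. Below is a minimal corrected sketch (not the poster's final code) showing those fixes with plain libcurl:

    ```cpp
    // Minimal sketch: pass a C string to CURLOPT_URL and avoid copying the object,
    // so WRITEDATA keeps pointing at the string we actually read afterwards.
    #include <curl/curl.h>
    #include <iostream>
    #include <string>

    class HttpFetch {
        CURL *curl_;
        std::string url_;   // kept as a member: libcurl before 7.17.0 did not copy string options
    public:
        std::string content_;

        explicit HttpFetch(std::string url) : curl_(curl_easy_init()), url_(std::move(url)) {
            curl_easy_setopt(curl_, CURLOPT_URL, url_.c_str());   // C string, not &std::string
            curl_easy_setopt(curl_, CURLOPT_WRITEDATA, &content_);
            curl_easy_setopt(curl_, CURLOPT_WRITEFUNCTION, &HttpFetch::handle);
            curl_easy_setopt(curl_, CURLOPT_FOLLOWLOCATION, 1L);
        }
        ~HttpFetch() { curl_easy_cleanup(curl_); }

        void start() { curl_easy_perform(curl_); }

        static size_t handle(char *data, size_t size, size_t nmemb, void *p) {
            std::string *out = static_cast<std::string *>(p);
            out->append(data, size * nmemb);   // append received bytes to the caller's string
            return size * nmemb;
        }
    };

    int main() {
        curl_global_init(CURL_GLOBAL_ALL);               // once per program, not per object
        HttpFetch call("http://www.example.com");        // no 'new' needed; no copy, no leak
        call.start();
        std::cout << call.content_ << "\n";
        curl_global_cleanup();
        return 0;
    }
    ```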

    Read the article

  • Problem installing build-essential and upgrading g++ on Ubuntu 8.04

    - by ehsanul
    I'm having some trouble with dependencies it seems, but myself don't really know how to resolve the issue. Here's the output: ~:) sudo apt-get install build-essential Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. Since you only requested a single operation it is extremely likely that the package is simply not installable and a bug report against that package should be filed. The following information may help to resolve the situation: The following packages have unmet dependencies: build-essential: Depends: g++ (>= 4:4.3.1) but 4:4.2.3-1ubuntu6 is to be installed E: Broken packages ~:) sudo apt-get install g++ Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. Since you only requested a single operation it is extremely likely that the package is simply not installable and a bug report against that package should be filed. The following information may help to resolve the situation: The following packages have unmet dependencies: g++: Depends: cpp (>= 4:4.3.1-1ubuntu2) but 4:4.2.3-1ubuntu6 is to be installed Depends: gcc (>= 4:4.3.1-1ubuntu2) but 4:4.2.3-1ubuntu6 is to be installed Depends: g++-4.3 (>= 4.3.1-1) but it is not going to be installed Depends: gcc-4.3 (>= 4.3.1-1) but it is not installable E: Broken packages ~:) Edit: I just tried aptitude instead of apt-get, as suggested. Doesn't work, had other problems: ~:) sudo aptitude install build-essential [sudo] password for ehsanul: Reading package lists... Done Building dependency tree Reading state information... Done Reading extended state information Initializing package states... Done Building tag database... Done The following packages are BROKEN: g++ g++-4.3 libstdc++6-4.3-dev The following packages have been automatically kept back: dpkg-dev fakeroot libdns35 libisc35 linux-libc-dev patch The following NEW packages will be automatically installed: libgmp3c2 libmpfr1ldbl The following packages have been kept back: adobe-flashplugin bind9-host dnsutils gvfs gvfs-backends gvfs-fuse libatm1 libbind9-30 libgvfscommon0 libisccc30 libisccfg30 liblwres30 libnautilus-extension1 linux-headers-2.6.24-24 linux-headers-2.6.24-24-generic linux-image-2.6.24-24-generic nautilus nautilus-data The following NEW packages will be installed: libgmp3c2 libmpfr1ldbl The following packages will be upgraded: build-essential The following partially installed packages will be configured: timidity 2 packages upgraded, 4 newly installed, 0 to remove and 24 not upgraded. Need to get 775kB/6265kB of archives. After unpacking 20.3MB will be used. The following packages have unmet dependencies: libstdc++6-4.3-dev: Depends: gcc-4.3-base (= 4.3.2-1ubuntu11) which is a virtual package. Depends: libstdc++6 (>= 4.3.2-1ubuntu11) but 4.2.4-1ubuntu4 is installed. g++-4.3: Depends: gcc-4.3-base (= 4.3.2-1ubuntu11) which is a virtual package. Depends: gcc-4.3 (= 4.3.2-1ubuntu11) which is a virtual package. Depends: libc6 (>= 2.8~20080505) but 2.7-10ubuntu4 is installed. 
g++: Depends: cpp (>= 4:4.3.1-1ubuntu2) but 4:4.2.3-1ubuntu6 is installed. Depends: gcc (>= 4:4.3.1-1ubuntu2) but 4:4.2.3-1ubuntu6 is installed. Depends: gcc-4.3 (>= 4.3.1-1) which is a virtual package. Resolving dependencies... The following actions will resolve these dependencies: Keep the following packages at their current version: build-essential [11.3ubuntu1 (hardy, now)] g++ [4:4.2.3-1ubuntu6 (hardy-updates, now)] g++-4.3 [Not Installed] libstdc++6-4.3-dev [Not Installed] Score is -9852 Accept this solution? [Y/n/q/?]

    Read the article

  • EU Digital Agenda scores 85/100

    - by trond-arne.undheim
    If the Digital Agenda was a bottle of wine and I were wine critic Robert Parker, I would say the Digital Agenda has "a great bouquet, many good elements, with astringent, dry and puckering mouth feel that will not please everyone, but still displaying some finesse. A somewhat controlled effort with no surprises and a few noticeable flaws in the delivery. Noticeably shorter aftertaste than advertised by the producers. Score: 85/100. Enjoy now". The EU Digital Agenda states that "standards are vital for interoperability" and has a whole chapter on interoperability and standards. With this strong emphasis, there is hope the EU's outdated standardization system finally is headed for reform. It has been 23 years since the legal framework of standardisation was completed by Council Decision 87/95/EEC8 in the Information and Communications Technology (ICT) sector. Standardization is market driven. For several decades the IT industry has been developing standards and specifications in global open standards development organisations (fora/consortia), many of which have transparency procedures and practices far superior to the European Standards Organizations. The Digital Agenda rightly states: "reflecting the rise and growing importance of ICT standards developed by certain global fora and consortia". Some fora/consortia, of course, are distorted, influenced by single vendors, have poor track record, and need constant vigilance, but they are the minority. Therefore, the recognition needs to be accompanied by eligibility criteria focused on openness. Will the EU reform its ICT standardization by the end of 2010? Possibly, and only if DG Enterprise takes on board that Information and Communications Technologies (ICTs) have driven half of the productivity growth in Europe over the past 15 years, a prominent fact in the EU's excellent Digital Competitiveness report 2010 published on Monday 17 May. It is ok to single out the ICT sector. It simply is the most important sector right now as it fuels growth in all other sectors. Let's not wait for the entire standardization package which may take another few years. Europe does not have time. The Digital Agenda is an umbrella strategy with deliveries from a host of actors across the Commission. For instance, the EU promises to issue "guidance on transparent ex-ante disclosure rules for essential intellectual property rights and licensing terms and conditions in the context of standard setting", by 2011 in the Horisontal Guidelines now out for public consultation by DG COMP and to some extent by DG ENTR's standardization policy reform. This is important. The EU will issue procurement guidance as interoperability frameworks are put into practice. This is a joint responsibility of several DGs, and is likely to suffer coordination problems, controversy and delays. We have seen plenty of the latter already and I have commented on the Commission's own interoperability elsewhere, with mixed luck. :( Yesterday, I watched the cartoonesque Korean western film The Good, the Bad and the Weird. In the movie (and I meant in the movie only), a bandit, a thief, and a bounty hunter, all excellent at whatever they do, fight for a treasure map. Whether that is a good analogy for the situation within the Commission, others are better judges of than I. However, as a movie fanatic, I still await the final shoot-out, and, as in the film, the only certainty is that "life is about chasing and being chased". 
The missed opportunity (in this case not following up the push from Member States to better define open standards based interoperability) is a casualty of the chaos ensued in the European Wild West (and I mean that in the most endearing sense, and my excuses beforehand to actors who possibly justifiably cannot bear being compared to fictional movie characters). Instead of exposing the ongoing fight, the EU opted for the legalistic use of the term "standards" throughout the document. This is a term that--to the EU-- excludes most standards used by the IT industry world wide. So, while it, for a moment, meant "weapon down", it will not lead to lasting peace. The Digital Agenda calls for the Member States to "Implement commitments on interoperability and standards in the Malmö and Granada Declarations by 2013". This is a far cry from the actual Ministerial Declarations which called upon the Commission to help them with this implementation by recognizing and further defining open standards based interoperability. Unless there is more forthcoming from the Commission, the market's judgement will be: you simply fall short. Generally, I think the EU focus now should be "from policy to practice" and the Digital Agenda does indeed stop short of tackling some highly practical issues. There is need for progress beyond the Digital Agenda. Here are some suggestions that would help Europe re-take global leadership on openness, public sector reform, and economic growth: A strong European software strategy centred around open standards based interoperability by 2011. An ambitious new eCommission strategy for 2011-15 focused on migration to open standards by 2015. Aligning the IT portfolio across the Commission into one Digital Agenda DG by 2012. Focusing all best practice exchange in eGovernment on one social networking site, epractice.eu (full disclosure: I had a role in getting that site up and running) Prioritizing public sector needs in global standardization over European standardization by 2014.

    Read the article

  • UV Atlas Generation and Seam Removal

    - by P. Avery
    I'm generating light maps for scene mesh objects using DirectX's UV Atlas Tool( D3DXUVAtlasCreate() ). I've succeeded in generating an atlas, however, when I try to render the mesh object using the atlas the seams are visible on the mesh. Below are images of a lightmap generated for a cube. Here is the code I use to generate a uv atlas for a cube: struct sVertexPosNormTex { D3DXVECTOR3 vPos, vNorm; D3DXVECTOR2 vUV; sVertexPosNormTex(){} sVertexPosNormTex( D3DXVECTOR3 v, D3DXVECTOR3 n, D3DXVECTOR2 uv ) { vPos = v; vNorm = n; vUV = uv; } ~sVertexPosNormTex() { } }; // create a light map texture to fill programatically hr = D3DXCreateTexture( pd3dDevice, 128, 128, 1, 0, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, &pLightmap ); if( FAILED( hr ) ) { DebugStringDX( "Main", "Failed to D3DXCreateTexture( lightmap )", __LINE__, hr ); return hr; } // get the zero level surface from the texture IDirect3DSurface9 *pS = NULL; pLightmap->GetSurfaceLevel( 0, &pS ); // clear surface pd3dDevice->ColorFill( pS, NULL, D3DCOLOR_XRGB( 0, 0, 0 ) ); // load a sample mesh DWORD dwcMaterials = 0; LPD3DXBUFFER pMaterialBuffer = NULL; V_RETURN( D3DXLoadMeshFromX( L"cube3.x", D3DXMESH_MANAGED, pd3dDevice, &pAdjacency, &pMaterialBuffer, NULL, &dwcMaterials, &g_pMesh ) ); // generate adjacency DWORD *pdwAdjacency = new DWORD[ 3 * g_pMesh->GetNumFaces() ]; g_pMesh->GenerateAdjacency( 1e-6f, pdwAdjacency ); // create light map coordinates LPD3DXMESH pMesh = NULL; LPD3DXBUFFER pFacePartitioning = NULL, pVertexRemapArray = NULL; FLOAT resultStretch = 0; UINT numCharts = 0; hr = D3DXUVAtlasCreate( g_pMesh, 0, 0, 128, 128, 3.5f, 0, pdwAdjacency, NULL, NULL, NULL, NULL, NULL, 0, &pMesh, &pFacePartitioning, &pVertexRemapArray, &resultStretch, &numCharts ); if( SUCCEEDED( hr ) ) { // release and set mesh SAFE_RELEASE( g_pMesh ); g_pMesh = pMesh; // write mesh to file hr = D3DXSaveMeshToX( L"cube4.x", g_pMesh, 0, ( const D3DXMATERIAL* )pMaterialBuffer->GetBufferPointer(), NULL, dwcMaterials, D3DXF_FILEFORMAT_TEXT ); if( FAILED( hr ) ) { DebugStringDX( "Main", "Failed to D3DXSaveMeshToX() at OnD3D9CreateDevice()", __LINE__, hr ); } // fill the the light map hr = BuildLightmap( pS, g_pMesh ); if( FAILED( hr ) ) { DebugStringDX( "Main", "Failed to BuildLightmap()", __LINE__, hr ); } } else { DebugStringDX( "Main", "Failed to D3DXUVAtlasCreate() at OnD3D9CreateDevice()", __LINE__, hr ); } SAFE_RELEASE( pS ); SAFE_DELETE_ARRAY( pdwAdjacency ); SAFE_RELEASE( pFacePartitioning ); SAFE_RELEASE( pVertexRemapArray ); SAFE_RELEASE( pMaterialBuffer ); Here is code to fill lightmap texture: HRESULT BuildLightmap( IDirect3DSurface9 *pS, LPD3DXMESH pMesh ) { HRESULT hr = S_OK; // validate lightmap texture surface and mesh if( !pS || !pMesh ) return E_POINTER; // lock the mesh vertex buffer sVertexPosNormTex *pV = NULL; pMesh->LockVertexBuffer( D3DLOCK_READONLY, ( void** )&pV ); // lock the mesh index buffer WORD *pI = NULL; pMesh->LockIndexBuffer( D3DLOCK_READONLY, ( void** )&pI ); // get the lightmap texture surface description D3DSURFACE_DESC desc; pS->GetDesc( &desc ); // lock the surface rect to fill with color data D3DLOCKED_RECT rct; hr = pS->LockRect( &rct, NULL, 0 ); if( FAILED( hr ) ) { DebugStringDX( "main.cpp:", "Failed to IDirect3DTexture9::LockRect()", __LINE__, hr ); return hr; } // iterate the pixels of the lightmap texture // check each pixel to see if it lies between the uv coordinates of a cube face BYTE *pBuffer = ( BYTE* )rct.pBits; for( UINT y = 0; y < desc.Height; ++y ) { BYTE* pBufferRow = ( BYTE* )pBuffer; for( UINT x = 0; x 
< desc.Width * 4; x+=4 ) { // determine the pixel's uv coordinate D3DXVECTOR2 p( ( ( float )x / 4.0f ) / ( float )desc.Width + 0.5f / 128.0f, y / ( float )desc.Height + 0.5f / 128.0f ); // for each face of the mesh // check to see if the pixel lies within the face's uv coordinates for( UINT i = 0; i < 3 * pMesh->GetNumFaces(); i +=3 ) { sVertexPosNormTex v[ 3 ]; v[ 0 ] = pV[ pI[ i + 0 ] ]; v[ 1 ] = pV[ pI[ i + 1 ] ]; v[ 2 ] = pV[ pI[ i + 2 ] ]; if( TexcoordIsWithinBounds( v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p ) ) { // the pixel lies b/t the uv coordinates of a cube face // light contribution functions aren't needed yet //D3DXVECTOR3 vPos = TexcoordToPos( v[ 0 ].vPos, v[ 1 ].vPos, v[ 2 ].vPos, v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p ); //D3DXVECTOR3 vNormal = v[ 0 ].vNorm; // set the color of this pixel red( for demo ) BYTE ba[] = { 0, 0, 255, 255, }; //ComputeContribution( vPos, vNormal, g_sLight, ba ); // copy the byte array into the light map texture memcpy( ( void* )&pBufferRow[ x ], ( void* )ba, 4 * sizeof( BYTE ) ); } } } // go to next line of the texture pBuffer += rct.Pitch; } // unlock the surface rect pS->UnlockRect(); // unlock mesh vertex and index buffers pMesh->UnlockIndexBuffer(); pMesh->UnlockVertexBuffer(); // write the surface to file hr = D3DXSaveSurfaceToFile( L"LightMap.jpg", D3DXIFF_JPG, pS, NULL, NULL ); if( FAILED( hr ) ) DebugStringDX( "Main.cpp", "Failed to D3DXSaveSurfaceToFile()", __LINE__, hr ); return hr; } bool TexcoordIsWithinBounds( const D3DXVECTOR2 &t0, const D3DXVECTOR2 &t1, const D3DXVECTOR2 &t2, const D3DXVECTOR2 &p ) { // compute vectors D3DXVECTOR2 v0 = t1 - t0, v1 = t2 - t0, v2 = p - t0; float f00 = D3DXVec2Dot( &v0, &v0 ); float f01 = D3DXVec2Dot( &v0, &v1 ); float f02 = D3DXVec2Dot( &v0, &v2 ); float f11 = D3DXVec2Dot( &v1, &v1 ); float f12 = D3DXVec2Dot( &v1, &v2 ); // Compute barycentric coordinates float invDenom = 1 / ( f00 * f11 - f01 * f01 ); float fU = ( f11 * f02 - f01 * f12 ) * invDenom; float fV = ( f00 * f12 - f01 * f02 ) * invDenom; // Check if point is in triangle if( ( fU >= 0 ) && ( fV >= 0 ) && ( fU + fV < 1 ) ) return true; return false; } Screenshot Lightmap I believe the problem comes from the difference between the lightmap uv coordinates and the pixel center coordinates...for example, here are the lightmap uv coordinates( generated by D3DXUVAtlasCreate() ) for a specific face( tri ) within the mesh, keep in mind that I'm using the mesh uv coordinates to write the pixels for the texture: v[ 0 ].uv = D3DXVECTOR2( 0.003581, 0.295631 ); v[ 1 ].uv = D3DXVECTOR2( 0.003581, 0.003581 ); v[ 2 ].uv = D3DXVECTOR2( 0.295631, 0.003581 ); the lightmap texture size is 128 x 128 pixels. The upper-left pixel center coordinates are: float halfPixel = 0.5 / 128 = 0.00390625; D3DXVECTOR2 pixelCenter = D3DXVECTOR2( halfPixel, halfPixel ); will the mapping and sampling of the lightmap texture will require that an offset be taken into account or that the uv coordinates are snapped to the pixel centers..? ...Any ideas on the best way to approach this situation would be appreciated...What are the common practices?
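
    One common approach (conventions vary by pipeline, so treat this as a sketch rather than the D3DX-prescribed answer) is to do the texel-center bookkeeping symbolically instead of hard-coding 0.5f / 128.0f, and to dilate each chart's border texels outward by a few texels into the gutter that D3DXUVAtlasCreate reserves between charts (the 3.5f argument), so bilinear filtering near chart edges never samples unwritten background texels. A small sketch of the texel/UV conversion:

    ```cpp
    // Hedged sketch of texel <-> UV bookkeeping: sample each texel at its center,
    // independent of the hard-coded lightmap size used in the original loop.
    #include <cmath>

    struct Texel { int x, y; };

    // UV of the center of texel (x, y) on a w x h lightmap.
    inline void TexelCenterToUV(int x, int y, int w, int h, float &u, float &v)
    {
        u = (static_cast<float>(x) + 0.5f) / static_cast<float>(w);
        v = (static_cast<float>(y) + 0.5f) / static_cast<float>(h);
    }

    // Texel containing uv (u, v); the inverse of the mapping above.
    inline Texel UVToTexel(float u, float v, int w, int h)
    {
        Texel t;
        t.x = static_cast<int>(std::floor(u * static_cast<float>(w)));
        t.y = static_cast<int>(std::floor(v * static_cast<float>(h)));
        return t;
    }
    ```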

    Read the article

  • fatal: git-http-push-failed (return code 22)

    - by Mariusz
    Hello, that's me again. After having problems establishing a connection to github.com, I now have a problem with the next step - pushing. I should mention that I am a novice with Git and the whole distributed version control world. I have done git init, then git add *.h and git add *.cpp, but currently git status does not print anything in the "# On branch master" section. Previously it was correctly printing the whole list of added files; now this list is gone. Next, I executed: git remote add origin https://github.com/mgeeky/disasm.git and an error occurred after: git push origin master Username: Password: error: Cannot access URL https://github.com/mgeeky/disasm.git/, return code 22 fatal: git-http-push failed What should I do now? I've tried: git push origin Username: Password: No refs in common and none specified; doing nothing. Perhaps you should specify a branch such as 'master'. Everything up-to-date But that seems to be okay.

    Read the article

  • Cisco Anyconnect Issue on HTC HD2

    - by Myles
    Hello, We've just got an HTC HD2 handset (UK, T-Mobile) and installed the Cisco AnyConnect client. It connects OK but then disconnects after a few seconds, then reconnects. It keeps cycling this way, and at no point can we even attempt to sync Exchange! Our ASA 5510 reports: Group User IP <149.254.217.2 SVC Message: 17/ERROR: Reconnecting to recover from error.. And from the phone log: 10:56:03 Debug Function: CSocketTransport::getTransportMTU File: ..\IPC\SocketTransport.cpp Line: 1058 Invoked Function: CNetInterface::GetTcpIpMTU Return Code: -32571377 (0xFE0F000F) Description: NETINTERFACE_ERROR_INTERFACE_NOT_AVAILABLE Does anyone have any advice on why it's constantly disconnecting? The phone log does suggest a lack of service, but the phone can browse the net, make calls, etc., and appears to have good signal throughout. We did try the AnyConnect software on a Windows 7 PC, which worked fine with no dropouts. Any help would be greatly appreciated! Thank you, Myles

    Read the article

  • SCCM Client Push FAIL - Win2000 box

    - by ajp
    Hello, When trying to install the SCCM client onto a Windows 2000 box, the install fails. The install script is run through a batch file (CONTENTS: \mdop\SCCM_client\ccmsetup.exe /mp:MDOP /logon smssitecode=MID smsslp=MDOP) hosted on a public area of the network. This script has worked for all machines (mostly Win2003 Server). I've tried enabling all the common services it requires (BITS, IIS Admin, Windows Installer), but it still only runs for a second or two then quits. Here's the piece of the log file where it errors out: [LOG[Couldn't get directory list for directory 'http://MDOP/CCM_Client/ClientPatch'. This directory may not exist.]LOG]! time="13:55:53.618+300" date="06-30-2009" component="ccmsetup" context="" type="0" thread="1676" file="ccmsetup.cpp:6054" Full Log: http://paste-it.net/public/gb11732/

    Read the article

  • Apache 2 with Weblogic Plug-in Redirection, original location still requested to backend

    - by Edo
    We're trying to set up an SSL server in front of a WebLogic server using Apache as the SSL provider. Here's what's inside our httpd.conf: <Location /original> SetHandler weblogic-handler WebLogicHost 10.11.1.1 WebLogicPort 8700 PathTrim /original PathPrepend /destination ConnectTimeoutSecs 60 </Location> <Location /destination> SetHandler weblogic-handler WebLogicHost 10.11.1.1 WebLogicPort 8700 ConnectTimeoutSecs 60 </Location> This setup mostly works, but the ssl_error_log file contains these entries: [Wed Aug 11 14:59:00 2010] [error] [client xxx.xxx.xxx.xxx] ap_proxy: trying GET /original at backend host '10.11.1.1/8700; got exception 'CONNECTION_REFUSED [os error=0, line 1739 of ../nsapi/URL.cpp]: Error connecting to host 10.11.1.1:8700' The weird thing is, the redirection still works, but these annoying entries still show up. Can anyone point out where we went wrong? Thanks.

    Read the article

  • How to install matplotlib on OS X?

    - by Paperflyer
    I want to install matplotlib on OS X. If possible, using homebrew. I installed Python 2.7.1 using brew install python, I modified my path to use it I installed pip using brew install pip I installed numpy 1.5.1 using pip install numpy I installed scipy 0.8.0 using pip install scipy This is where it gets hairy. pip install matplotlib will fetch the wrong version of matplotlib, which is incompatible with the recent version of numpy. The solution is to fetch the correct version of matplotlib manually: pip install -f http://sourceforge.net/projects/matplotlib/files/matplotlib/matplotlib-1.0.1/matplotlib-1.0.1.tar.gz matplotlib But, that version fails to compile since it can't find the freetype headers: In file included from src/ft2font.cpp:1: src/ft2font.h:14:22: error: ft2build.h: No such file or directory These headers are actually installed in /usr/X11/include as part of the X11 developer tools. So, how can I make matplotlib use these headers?

    Read the article

  • Vim snippets pop-up but don't complete

    - by raoulcousins
    I'm using spf13-vim on Fedora. When I try to use a snippet, options pop up, but I don't know how to actually insert the snippet. For example, if I type for in a .cpp file, I get four options. I'm assuming I'm supposed to either hit Tab or Enter to insert them, but neither works. I'm not even sure which Vim plugin the snippets come from (snipmate? neocomplcache?). Screenshot. The same problem happens in vim and gvim.

    Read the article

  • nsclient++ intermittent connection refused on same subnet

    - by jshin47
    I have set up Nagios and NSClient++ on a number of my Windows servers. They are all in the same subnet, so no routing or firewall handling takes place between the endpoints, and I have verified that the firewalls on the servers are not causing trouble. The problem is that the scheduled checks sometimes fail with "connection refused" and sometimes work! It is a frustrating problem to resolve because I do not know what to look for. One place I did look is the NSClient++ logs, where I am seeing this recurring error: ...\trunk\modules\CheckSystem\PDHCollector.cpp:148: Failed to query performance counters: \238... This sounds promising, but I couldn't find much on Google about this error as it pertains to NSClient++.

    Read the article

  • Issues running java in Solaris 9 container

    - by Matthew Watson
    Hi, I have a Solaris 9 container built from a physical server using flarcreate. Everything seems fine, except that trying to run any "java -server" process fails with the following error. This is on a SunFire T1000 machine running Solaris 10 10/09 s10s_u8wos_08a SPARC, running jdk1.5.0_15. Exception java.lang.OutOfMemoryError: requested -4 bytes for size_t in /BUILD_AREA/jdk1.5.0_15/hotspot/src/os/solaris/vm/os_solaris.cpp. Out of swap space? As far as I can tell I'm not actually out of swap space. Running java in client mode works without a problem. Google's only suggestion is related to x86. Any suggestions? Thanks.
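
    The "requested -4 bytes for size_t" wording usually points at a size computation inside the VM going negative before being handed to an allocator that takes an unsigned size_t, i.e. an underflow triggered by unexpected values reported inside the zone, rather than genuine swap exhaustion. A tiny illustration (not HotSpot code) of why a negative request reads so oddly:

    ```cpp
    // Illustration only: a negative result stored in a size_t wraps around,
    // which is why a "-4 bytes" request is nonsensical rather than a real allocation.
    #include <cstddef>
    #include <cstdio>

    int main()
    {
        long requested = 8;
        long available = 12;
        long diff = requested - available;                 // -4
        size_t as_unsigned = static_cast<size_t>(diff);    // huge wrapped value
        std::printf("signed: %ld  as size_t: %zu\n", diff, as_unsigned);
        return 0;
    }
    ```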

    Read the article

  • Skydrive does not synchronize with one device but synchronizes successfully with another one.

    - by Hobbes
    I have an Outlook id. I also have Skydrive installed on my personal laptop (Windows 7 x64). The folders and files synchronize successfully. Today, I installed Skydrive on my office PC and logged in with the same id as above but it does not synchronize any of my folders or files, other than the default ones. When I view the logs (filename: SkyDrive.exe.reg.2012-08-09-150239.654.log), I see the following entry. 09-17-12,13:31:35.075,45a,146c,0,PAL,systeminformationhelper.cpp(661),0,0018E4F8,CRIT,The registry key to block Remote Access is not found.,System Error Code=0x2 Any idea as to what could be the problem?

    Read the article

  • WMIC returns error when querying product

    - by Stu
    I'm trying to automate the installation of an MSI on my server; however, before the installation can go ahead I need to uninstall the previous version from the server. Searching on the internet I've found that WMIC is the tool required, but there seems to be a problem with the setup of WMI on the server. Running the following commands gives errors: at the command prompt, wmic, then inside the tool /trace:on and product get name. This returns a long string of successes and one failure: FAIL: IEnumWbemClassObject->Next(WBEM_INFINITE, 1, -, -) Line: 396 File: d:\nt\admin\wmi\wbem\tools\wmic\execengine.cpp Node - ENTECHORELDEV ERROR: Code = 0x80041010 Description = The specified class is not valid. Facility = WMI I'm trying to run this on a standard install of Windows Server 2003 R2 with administrator privileges. Thanks Stu

    Read the article

  • How can I do an "insert" statement from MS SQL -> MYSQL using linked servers

    - by bvandrunen
    Is it possible to do an "insert" statement through linked servers? I know it is possible using MSDTC...but does this work between MS SQL and MySQL? Any help would be greatly appreciated. As of right now...I can update and select between the 2 databases, but it gives me an error when I try to run an insert statement. OLE DB provider "MSDASQL" for linked server "******" returned message "Query-based insertion or updating of BLOB values is not supported.". Msg 7343, Level 16, State 2, Line 1 The OLE DB provider "MSDASQL" for linked server "******" could not INSERT INTO table "[*********]...[******_options]". Location: memilb.cpp:1493 Expression: (*ppilb)->m_cRef == 0 SPID: 76 Process ID: 1644

    Read the article

  • System Center Configuration Manager 2007 - Debugging Client Installs

    - by Dayton Brown
    Hi All: Having an issue installing the CCMsetup client on desktops. The CCMSetup makes it to the PC, the files are there, it gets added to the services for automatic start, it starts, but quits almost instantly. Logs on the desktop show an entry like this. <![LOG[Failed to successfully complete HTTP request. (StatusCode at WinHttpQueryHeaders: 404)]LOG]!><time="14:28:51.183+240" date="06-11-2009" component="ccmsetup" context="" type="3" thread="2388" file="ccmsetup.cpp:5808"> What am I missing? EDIT: Firewall is off on both client and server.

    Read the article

  • Microsoft Visual C++ Runtime error when viewing JPEG files in Explorer

    - by Prince Kitts
    I am getting the following error each time I view JPG files in Windows Explorer in Details view. Microsoft Visual C++ Runtime Library Assertion failed! Program: C:\Windows\Explorer.EXE File: multimedia\photos\metadatahandler\util.cpp Line: 4706 Expression: MinutesFraction < 1.0 For information on how your program can cause an assertion failure, see the Visual C++ documentation on asserts (Press Retry to debug the application - JIT must be enabled) These pictures were taken with a Nikon Coolpix AW110 camera. I think it is related to some EXIF date/time data. I have tried reinstalling the Visual C++ 2013 and 2008 runtime libraries and restarting, and the problem is still there. I uploaded a sample file here: https://anonfiles.com/file/e346e174708714a88d372e295265a03f (click the topmost download button, not the ad below it, or just save the opened image)

    Read the article

  • gcc sandboxing tool - AppArmor / CHROOT jail on Ubuntu 12.04

    - by StuR
    We have a Node application as the front end to a C++ sandboxing tool, which compiles code using gcc and outputs the result to the browser. e.g. exec("gcc -o /tmp/test /tmp/test.cpp", function (error, stdout, stderr) { if(!stderr) { execFile('/tmp/test', function(error, stdout, stderr) {}); } }); This works fine. However, as you can imagine this is a security nightmare if it were to be made public - so I was thinking of two options to protect my stack: 1) A CHROOT jail - but this in itself wouldn't be enough to prevent directory traversal / file access. 2) AppArmor ? So my question is really, how could I protect my stack from any nasties that could come from: A) Compiling unknown code using gcc B) Executing the compiled code
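
    Whichever confinement is chosen (chroot or AppArmor), neither by itself caps resource abuse such as fork bombs, runaway CPU, or huge output files, so it helps to launch the compiled binary through a small wrapper that jails it, drops privileges, and applies rlimits before exec. A rough C++ sketch of such a wrapper; the jail path, UID, limits, and binary path are placeholders, and it assumes the wrapper starts with root privileges:

    ```cpp
    // Rough sketch of a launcher that confines an untrusted binary before exec:
    // chroot into a throwaway directory, cap resources, drop to an unprivileged UID.
    #include <sys/resource.h>
    #include <unistd.h>
    #include <cstdio>

    static void cap(int what, rlim_t value)
    {
        rlimit r{value, value};        // soft and hard limit set to the same value
        setrlimit(what, &r);
    }

    int main()
    {
        if (chroot("/var/sandbox") != 0 || chdir("/") != 0) {   // placeholder jail directory
            std::perror("chroot");
            return 1;
        }
        cap(RLIMIT_CPU,   5);                   // seconds of CPU time
        cap(RLIMIT_AS,    64UL * 1024 * 1024);  // 64 MB address space
        cap(RLIMIT_NPROC, 16);                  // limit process count (no fork bombs)
        cap(RLIMIT_FSIZE, 1UL * 1024 * 1024);   // 1 MB of output files
        if (setgid(65534) != 0 || setuid(65534) != 0) {         // placeholder unprivileged IDs
            std::perror("setuid");
            return 1;
        }
        execl("/test", "test", static_cast<char*>(nullptr));   // the compiled program inside the jail
        std::perror("execl");
        return 1;
    }
    ```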

    Read the article

  • Error while loading shared libraries - libwebsock

    - by kittyPL
    I'm trying to set up libwebsock, a simple C websocket library. I followed the installation procedure from the INSTALL file and everything went fine. I'm able to compile the test program given in the examples. But when I want to run my executable, a wild error appears: ./echo: error while loading shared libraries: libwebsock.so.1: cannot open shared object file: No such file or directory I checked /usr/local/lib twice; libwebsock.so.1 exists and is doing very well. I also tried copying the lib into the echo folder (so it's placed next to the binary), still the same error. It's quite funny for me: shadowz@Ubu:~/WebSocket$ ls echo echo.c echo.cpp libwebsock.so.1 shadowz@Ubu:~/WebSocket$ ./echo ./echo: error while loading shared libraries: libwebsock.so.1: cannot open shared object file: No such file or directory Any suggestions? I'm running out of ideas...

    Read the article

  • Windows 7 inbuilt and 3rd party (de)fragmentation related queries

    - by Karan
    I have a pretty good idea of how files end up getting fragmented. That said, I just copied ~3,200 files of varying sizes (from a few KB to ~20GB) from an external USB HDD to an internal, freshly formatted (under Windows 7 x64), NTFS, 2TB, 5400RPM, WD, SATA, non-system (i.e. secondary) drive, filling it up 57%. Since it should have been very much possible for each file to have been stored in one contiguous block, I expected the drive to be fragmented not more than 1-2% at most after this rather lengthy exercise (unfortunately this older machine doesn't support USB 3.0). Windows 7's inbuilt defrag utility told me after a quick analysis that the drive was fragmented only 1% or so, which dovetailed neatly with my expectations. However, just out of curiosity I downloaded and ran the latest portable x64 version of Piriform's Defraggler, and was shocked to see the drive being reported as being ~85% fragmented! The portable version of Auslogics Disk Defrag also agreed with Defraggler, and both clearly expected to grind away for ~10 hours to completely defragment the drive. 1) How in blazes could the inbuilt and 3rd party defrag utils disagree so badly? I mean, 10-20% variance is probably understandable, but 1% and 85% are miles apart! This Engineering Windows 7 blog post states: In Windows XP, any file that is split into more than one piece is considered fragmented. Not so in Windows Vista if the fragments are large enough – the defragmentation algorithm was changed (from Windows XP) to ignore pieces of a file that are larger than 64MB. As a result, defrag in XP and defrag in Vista will report different amounts of fragmentation on a volume. ... [Please read the entire post so the quote is not taken out of context.] Could it simply be that the 3rd party defrag utils ignore this post-XP change and continue to use analysis algos similar to those XP used? 2) Assuming that the 3rd party utils aren't lying about the real extent of fragmentation (which Windows is downplaying post-XP), how could the files have even got fragmented so badly given they were just copied over afresh to an empty drive? 3) If vastly differing analysis algos explain the yawning gap, which do I believe? I'm no defrag fanatic for sure, but 85% is enough to make me seriously consider spending 10 hours defragging this drive. On the other hand, 1% reported by Windows' own defragger clearly implies that there is no cause for concern and defragging would actually have negative consequences (as per the post). Is Windows' assumption valid and should I just let it be, or will there be any noticeable performance gains after running one of the 3rd party utils for 10 hours straight? 4) I see that out of the box Windows 7 defrag is scheduled to run weekly. Does anyone know whether it defrags every single time, or only if its analysis reveals a fragmentation percentage over a set threshold? If the latter, what is this threshold and can it be changed, maybe via a Registry edit? Thanks for reading through (my first query on this wonderful site!) and for any helpful replies. Also, if you're answering question #3, please keep in mind that any speed increases post defragging with 3rd party utils vis-à-vis Windows' inbuilt program should not include pre-Vista (preferably pre-Win7) examples. Further, examples of programs that made your system boot faster won't help in this case, since this is a non-system drive (although one that'll still be used daily).

    Read the article

  • How can I enforce directory space limits in an OpenVZ system?

    - by George
    The title says it all. I have some programs on a server (CentOS 4, OpenVZ) that use a directory as a temp directory - but pay no attention to the size it grows to. I want to enforce a limit, like "this folder cannot exceed 300 MB". I would use quota, but OpenVZ does not support loop devices to mount a file as such. Any other solutions (apart from scripting a periodic delete of files in the directory)? Editing the application's code to implement such functionality is not entirely out of the question - if it can be done easily and no other way exists. It's written in C++, but I don't know how to implement it.
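
    If editing the application is on the table, one option is to walk the temp directory, sum the file sizes, and delete the oldest files until the total drops below the cap. A rough sketch follows; it assumes a C++17 compiler with <filesystem> (on an old CentOS 4 toolchain the same walk would need opendir()/stat() instead), and the path and limit are placeholders:

    ```cpp
    // Rough sketch: delete the oldest files in a temp directory until its total
    // size drops below maxBytes. Assumes C++17 <filesystem>.
    #include <filesystem>
    #include <algorithm>
    #include <cstdint>
    #include <vector>

    namespace fs = std::filesystem;

    void EnforceDirLimit(const fs::path &dir, std::uintmax_t maxBytes)
    {
        std::vector<fs::directory_entry> files;
        std::uintmax_t total = 0;
        for (const auto &e : fs::recursive_directory_iterator(dir)) {
            if (e.is_regular_file()) {
                total += e.file_size();
                files.push_back(e);
            }
        }
        if (total <= maxBytes)
            return;

        // Oldest first, by last write time.
        std::sort(files.begin(), files.end(), [](const auto &a, const auto &b) {
            return a.last_write_time() < b.last_write_time();
        });

        for (const auto &e : files) {
            std::uintmax_t sz = e.file_size();
            fs::remove(e.path());
            total -= sz;
            if (total <= maxBytes)
                break;
        }
    }

    // Example (placeholder path): cap the temp directory at 300 MB.
    // EnforceDirLimit("/srv/app/tmp", 300ull * 1024 * 1024);
    ```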

    Read the article

  • Error on error log

    - by Ryan Murphy
    I am trying to use Zend Framework 2; I followed these instructions on CentOS 6 via SSH: http://framework.zend.com/manual/2.0/en/user-guide/skeleton-application.html When trying to start my website, it gives an error; I go to the error log and get this. [Sun Jun 30 16:02:17 2013] [error] [client 109.217.190.75] SoftException in Application.cpp:357: UID of script "/home/mydomain/public_html/public/index.php" is smaller than min_uid [Sun Jun 30 16:02:17 2013] [error] [client 109.217.190.75] Premature end of script headers: index.php What do these errors mean, and how do I fix them?

    Read the article

  • How to set file associations and explorer options in one go?

    - by jokoon
    This has been itching me for quite a long time. When I come to a new computer with any version of Windows, I need to: un-hide file extensions, set Explorer's default view to list instead of icons, and set file extensions like h, c, and cpp so they don't open Visual C++ (which can take up to 20 seconds!) each time I double-click on them, opening them with something like Notepad++ instead. Isn't there some program to quickly set those options to something I want, like a standalone exe or a generic registry file I can run when I come to a new machine? I'm a developer and I can't believe I'm wasting so much time on those &*$%@# things.
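
    For the "un-hide file extensions" part at least, the setting is a per-user registry value, so a tiny program (or an exported .reg file that sets the same value) can flip it on a new machine. A hedged Win32 sketch, assuming the usual Explorer\Advanced location; Explorer generally needs to be restarted (or the user logged off and on) before the change shows up:

    ```cpp
    // Hedged sketch: set the "hide extensions for known file types" flag to 0
    // (show extensions) in the current user's hive. Link against advapi32 if
    // your toolchain does not do so by default. Error handling is minimal.
    #include <windows.h>
    #include <cstdio>

    int main()
    {
        HKEY key = nullptr;
        const char *path = "Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\Advanced";
        if (RegOpenKeyExA(HKEY_CURRENT_USER, path, 0, KEY_SET_VALUE, &key) != ERROR_SUCCESS) {
            std::fprintf(stderr, "could not open %s\n", path);
            return 1;
        }
        DWORD show = 0;   // 0 = show extensions, 1 = hide them
        RegSetValueExA(key, "HideFileExt", 0, REG_DWORD,
                       reinterpret_cast<const BYTE*>(&show), sizeof(show));
        RegCloseKey(key);
        return 0;
    }
    ```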

    Read the article

  • How can I compile this C++ code that fails with "'::main' must return 'int'"? [migrated]

    - by krisu
    I have tried to find a solution that makes a batch file start flashing on the taskbar, and the only good solution was this post on Stack Overflow. But I can't compile the code to an EXE with MinGW or anything else; I only get this error: hello.cpp:6:32: error: '::main' must return 'int' Right now, I'm using TDM-GCC to compile the code, because it's a bit better... Can somebody give me code that actually works or, even better, compile it to an EXE already? P.S. Even better if somebody could compile this Delphi code, because I can't find any software that's free.
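
    The error itself just means the source declares main with a non-int return type (typically void main()), which g++ rejects; declaring int main() and returning a value fixes the compile. A hedged sketch of a minimal program with a valid main that flashes the console window's taskbar button (the flag and count values are arbitrary examples, not taken from the original post):

    ```cpp
    // Hedged sketch: int main() is the fix for the g++ error; the rest flashes
    // the console window's taskbar button via FlashWindowEx.
    #define _WIN32_WINNT 0x0501   // for GetConsoleWindow on older toolchains
    #include <windows.h>

    int main()                    // 'void main()' is what triggers the '::main' error
    {
        FLASHWINFO fi = {};
        fi.cbSize    = sizeof(fi);
        fi.hwnd      = GetConsoleWindow();
        fi.dwFlags   = FLASHW_ALL | FLASHW_TIMERNOFG;  // flash until the window regains focus
        fi.uCount    = 0;
        fi.dwTimeout = 0;
        FlashWindowEx(&fi);
        Sleep(5000);              // keep the process alive long enough to see the flashing
        return 0;
    }
    ```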

    Read the article
