Search Results

Search found 21 results on 1 page for 'paulh'.

  • Debian 5.0.8 won't detect my ethernet adapter

    - by PaulH
    I'm installing Debian 5.0.8 on a Dell Latitude E6410 using the network install x86 CD. Unfortunately, it won't detect my Ethernet adapter. Specifically, I get the error:

        No Ethernet card was detected. If you know the name of the driver needed by your Ethernet card, you can select it from the list.

    lspci shows:

        00:19.0 Ethernet controller: Intel Corporation Device 10ea (rev 05)

    That should correspond to the e1000e driver. But, if I select it from the list, the install CD still fails to detect it. Ubuntu 10.10 and Windows 7 are able to use this device with no problem. Does anybody know what I can do to make Debian use my Ethernet adapter? Thanks, PaulH

  • Getting the PC value in ARM assembly

    - by PaulH
    I have a Windows Mobile 6 ARMV4I project where I would like to get the value of the program counter. The function is declared like this:

        extern "C" unsigned __int32 GetPC();

    My assembly code looks like this:

        GetPC FUNCTION
            EXPORT GetPC
            ldr r0, [r15]   ; load the PC value in to r0
            mov pc, lr      ; return the value of r0
        ENDFUNC

    But, when I call the GetPC() function, I get the same number every time. So, I'm assuming my assembly isn't doing what I think it's doing. Can anybody point out what I may be doing wrong? Thanks, PaulH
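
    In ARM state the PC reads as the address of the current instruction plus 8, and ldr r0, [r15] dereferences that address, so it loads whatever instruction word happens to sit there: a constant, which is why every call returns the same number. Moving the register itself (mov r0, pc) would return an address instead. From C++, similar information is available without assembly through the compiler's _ReturnAddress() intrinsic, which yields the address the caller resumes at after the call. A minimal sketch, assuming the Windows Mobile ARM toolchain supports that intrinsic:

        #include <intrin.h>
        #pragma intrinsic(_ReturnAddress)

        // Returns the address execution resumes at in the caller, i.e. the PC
        // at the call site.  Unlike the assembly above, the value differs for
        // each distinct call site rather than being a fixed opcode.
        unsigned __int32 GetPC()
        {
            return reinterpret_cast< unsigned __int32 >( _ReturnAddress() );
        }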

  • Stopping Windows Mobile 6.5 tab reordering

    - by PaulH
    I have a C++ Visual Studio 2008 Windows Mobile 6.5 application that uses a tab control. I've noticed that depending on how careful you are with the stylus, when using the tab control you can accidentally re-order the tabs. It's difficult to do deliberately, but it's very easy to do when you're not trying. I assume this is a new "feature" of Windows Mobile 6.5 as it doesn't happen in Windows Mobile 6.1 with the same code. Is there a window style or something I can set that will lock the tab order such that people don't accidentally re-arrange them? Also, is there an MSDN page that describes this behavior and how it is supposed to work? I've looked, but have come up empty. Thanks, PaulH

  • inserting std::strings into a std::map

    - by PaulH
    I have a program that reads data from a file line-by-line. I would like to copy some substring of that line into a map as below:

        std::map< DWORD, std::string > my_map;
        DWORD index;         // populated with some data
        char buffer[ 1024 ]; // populated with some data
        char* element_begin; // points to some location in buffer
        char* element_end;   // points to some location in buffer > element_begin

        my_map.insert( std::make_pair( index, std::string( element_begin, element_end ) ) );

    This std::map<>::insert() operation takes a long time (it doubles the file parsing time). Is there a way to make this a less expensive operation? Thanks, PaulH
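
    If the indices arrive in ascending order (which a line-by-line reader often guarantees), the hint form of insert() can make each insertion amortized constant time instead of O(log n). A sketch under that sorted-input assumption; the helper name is illustrative:

        #include <map>
        #include <string>
        #include <windows.h> // for DWORD, as in the question

        // Hinted insert: when each new key belongs at the end of the map (keys
        // arriving in ascending order), passing end() as the hint avoids the
        // tree search on every insertion.
        void insert_ascending( std::map< DWORD, std::string >& my_map,
                               DWORD index,
                               const char* element_begin, const char* element_end )
        {
            my_map.insert( my_map.end(),
                           std::make_pair( index, std::string( element_begin, element_end ) ) );
        }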

  • copying program arguments to a whitespace-separated std::string

    - by PaulH
    I have a Visual Studio 2008 C++ application where I would like to copy all of the program arguments into a string separated by a whitespace " ". I.e., if my program is called as foo.exe \Program Files, then my folder string below would contain \Program Files. Below is an example of what I'm doing now. I was wondering if there was a shorter or easier method of doing this. Is there an easy way to eliminate the std::wstringstream variable?

        int _tmain( int argc, _TCHAR* argv[] )
        {
            std::wstringstream f;
            std::copy( argv + 1, argv + argc,
                       std::ostream_iterator< std::wstring, wchar_t >( f, L" " ) );
            std::wstring folder = f.str();

            // ensure the folder begins with a backslash
            if( folder[ 0 ] != L'\\' )
                folder.insert( 0, 1, L'\\' );

            // remove the trailing " " character added by the std::copy() above
            if( *folder.rbegin() == L' ' )
                folder.erase( folder.size() - 1 );
            // ...
        }

    Thanks, PaulH
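
    A plain loop does the same join without the std::wstringstream and without the trailing-space trim. A sketch of that variant (just an alternative spelling, with the same backslash fixup):

        #include <string>

        std::wstring JoinArguments( int argc, wchar_t* argv[] )
        {
            std::wstring folder;
            for( int i = 1; i < argc; ++i )
            {
                if( i > 1 )
                    folder += L' ';   // separator only *between* arguments
                folder += argv[ i ];
            }

            // ensure the folder begins with a backslash
            if( folder.empty() || folder[ 0 ] != L'\\' )
                folder.insert( 0, 1, L'\\' );

            return folder;
        }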

  • TAPI lineGetAddressID() fails with LINEERR_INVALADDRESS

    - by PaulH
    I have a Windows Mobile 6 application using TAPI 2.0. lineGetAddressID() is needed to get the address identifier used by several calls in the telephony API, but I can't get it to work. I have tried the following to no avail:

        HLINE line; // valid handle from lineOpen()
        DWORD addr_id = 0;
        result = ::lineGetAddressID( line, &addr_id, LINEADDRESSMODE_DIALABLEADDR, L"1234", 5 );
        result = ::lineGetAddressID( line, &addr_id, LINEADDRESSMODE_DIALABLEADDR, L"5551234", 8 );
        result = ::lineGetAddressID( line, &addr_id, LINEADDRESSMODE_DIALABLEADDR, L"1115551234", 11 );
        result = ::lineGetAddressID( line, &addr_id, LINEADDRESSMODE_DIALABLEADDR, L"11115551234", 12 );

    All of them return LINEERR_INVALADDRESS. Can anybody point out what I may be doing wrong? As a side question, how can I programmatically get the address? It appears in the LINEADDRESSCAPS structure returned by lineGetAddressCaps(), but that requires an address identifier (which would need to come from lineGetAddressID(), which requires an address...). Note: I realize I could use 0 as the address ID and it will probably work, but I have no guarantee it will work for every platform. I would like to get this solved 'right'. Thanks, PaulH
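
    One thing worth checking against the TAPI documentation: if dwSize is a byte count rather than a character count, every call above passes half the real size of its wide string. A sketch under that byte-count assumption (GetAddressId is an illustrative helper, not a TAPI function):

        // Pass the size of the dialable address in bytes, including the
        // terminating NULL: for L"1234" that is (4 + 1) * sizeof(WCHAR) = 10,
        // not 5.
        LONG GetAddressId( HLINE line, LPCWSTR address, DWORD* addr_id )
        {
            return ::lineGetAddressID( line, addr_id, LINEADDRESSMODE_DIALABLEADDR,
                                       address,
                                       ( wcslen( address ) + 1 ) * sizeof( WCHAR ) );
        }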

  • boost::binding that which is already bound

    - by PaulH
    I have a Visual Studio 2008 C++ application that does something like this:

        template< typename Fcn >
        inline void Bar( Fcn fcn ) // line 84
        {
            fcn();
        };

        template< typename Fcn >
        inline void Foo( Fcn fcn )
        {
            // this works fine
            Bar( fcn );

            // this fails to compile
            boost::bind( Bar, fcn )();
        };

        int main()
        {
            SYSTEM_POWER_STATUS_EX status = { 0 };
            Foo( boost::bind( ::GetSystemPowerStatusEx, &status, true ) ); // line 160
        }

    (The call to GetSystemPowerStatusEx() is just for demonstration. Insert your favorite call there and the behavior is the same.) When I go to compile this, I get 84 errors. I won't post them all unless asked, but they start with this:

        1>.\MyApp.cpp(99) : error C2896: 'boost::_bi::bind_t<_bi::dm_result<MT::* ,A1>::type,boost::_mfi::dm<M,T>,_bi::list_av_1<A1>::type> boost::bind(M T::* ,A1)' : cannot use function template 'void Bar(Fcn)' as a function argument
        1>        .\MyApp.cpp(84) : see declaration of 'Bar'
        1>        .\MyApp.cpp(160) : see reference to function template instantiation 'void Foo<boost::_bi::bind_t<R,F,L>>(Fcn)' being compiled
        1>        with
        1>        [
        1>            R=BOOL,
        1>            F=BOOL (__cdecl *)(PSYSTEM_POWER_STATUS_EX,BOOL),
        1>            L=boost::_bi::list2<boost::_bi::value<_SYSTEM_POWER_STATUS_EX *>,boost::_bi::value<bool>>,
        1>            Fcn=boost::_bi::bind_t<BOOL,BOOL (__cdecl *)(PSYSTEM_POWER_STATUS_EX,BOOL),boost::_bi::list2<boost::_bi::value<_SYSTEM_POWER_STATUS_EX *>,boost::_bi::value<bool>>>
        1>        ]

    If anybody can point out what I may be doing wrong, I would appreciate it. Thanks, PaulH
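
    The failing line passes Bar, a bare function template name, and boost::bind has no way to deduce which instantiation is meant; naming one explicitly turns it into an ordinary function pointer. A sketch of that change:

        #include <boost/bind.hpp>

        template< typename Fcn >
        inline void Bar( Fcn fcn )
        {
            fcn();
        }

        template< typename Fcn >
        inline void Foo( Fcn fcn )
        {
            // &Bar< Fcn > names a concrete instantiation, so boost::bind sees
            // a plain function pointer instead of an unresolved template name.
            boost::bind( &Bar< Fcn >, fcn )();
        }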

  • LocalAlloc and LocalRealloc usage

    - by PaulH
    I have a Visual Studio 2008 C++ Windows Mobile 6 application where I'm using a FindFirst()/FindNext() style API to get a collection of items. I do not know how many items will be in the list ahead of time. So, I would like to dynamically allocate an array for these items. Normally, I would use a std::vector<>, but, for other reasons, that's not an option for this application. So, I'm using LocalAlloc() and LocalReAlloc(). What I'm not clear on is if this memory should be marked fixed or moveable. The application runs fine either way. I'm just wondering what's 'correct'.

        int count = 0;
        INFO_STRUCT* info = ( INFO_STRUCT* )LocalAlloc( LHND, sizeof( INFO_STRUCT ) );
        while( S_OK == GetInfo( &info[ count ] ) )
        {
            ++count;
            info = ( INFO_STRUCT* )LocalReAlloc( info, sizeof( INFO_STRUCT ) * ( count + 1 ), LHND );
        }
        if( count > 0 )
        {
            // use the data in some interesting way...
        }
        LocalFree( info );

    Thanks, PaulH
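
    Given that the HLOCAL is cast straight to a pointer and dereferenced without LocalLock()/LocalUnlock(), fixed memory is what the code actually relies on; LHND asks for a moveable, handle-based block. A sketch of the fixed-memory spelling (same loop, flags changed; INFO_STRUCT and GetInfo() as in the question):

        int count = 0;
        // LPTR = LMEM_FIXED | LMEM_ZEROINIT: the return value is the pointer
        // itself, safe to cast and index directly.
        INFO_STRUCT* info = ( INFO_STRUCT* )LocalAlloc( LPTR, sizeof( INFO_STRUCT ) );
        while( S_OK == GetInfo( &info[ count ] ) )
        {
            ++count;
            // For a fixed block, LMEM_MOVEABLE here does not make it moveable;
            // it permits LocalReAlloc() to relocate the data and return a new
            // pointer when the block cannot grow in place.
            info = ( INFO_STRUCT* )LocalReAlloc( info,
                                                 sizeof( INFO_STRUCT ) * ( count + 1 ),
                                                 LMEM_MOVEABLE | LMEM_ZEROINIT );
        }
        LocalFree( info );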

  • Locate modules by stack address

    - by PaulH
    I have a Windows Mobile 6.1 application running on an ARMV4I processor. Given a stack address (from unwinding an exception), I'd like to determine what module owns that address. Using the ToolHelp API, I'm able to determine most modules using the following method:

        HANDLE snapshot = ::CreateToolhelp32Snapshot( TH32CS_SNAPMODULE | TH32CS_GETALLMODS, 0 );
        if( INVALID_HANDLE_VALUE != snapshot )
        {
            MODULEENTRY32 mod = { 0 };
            mod.dwSize = sizeof( mod );
            if( ::Module32First( snapshot, &mod ) )
            {
                do
                {
                    if( stack_address > (DWORD)mod.modBaseAddr &&
                        stack_address < (DWORD)( mod.modBaseAddr + mod.modBaseSize ) )
                    {
                        // Found the module!
                        // offset = stack_address - mod.modBaseAddr
                        break;
                    }
                } while( ::Module32Next( snapshot, &mod ) );
            }
            ::CloseToolhelp32Snapshot( snapshot );
        }

    But, I don't always seem to be able to find a module that matches an address. For example:

        stack address   module        offset
        0x03f65bd8      coredll.dll + 0x0001bbd8
        0x785cab1c      mylib.dll   + 0x0002ab1c
        0x785ca9e8      mylib.dll   + 0x0002a9e8
        0x785ca0a0      mylib.dll   + 0x0002a0a0
        0x785c8144      mylib.dll   + 0x00028144
        0x3001d95c      ???
        0x3001dd44      ???
        0x3001db90      ???
        0x03f88030      coredll.dll + 0x0003e030
        0x03f8e46c      coredll.dll + 0x0004446c
        0x801087c4      ???
        0x801367b4      ???
        0x8010ce78      ???
        0x801086dc      ???
        0x03f8e588      coredll.dll + 0x00044588
        0x785a56a4      mylib.dll   + 0x000056a4
        0x785bdd60      mylib.dll   + 0x0001dd60
        0x785bbd0c      mylib.dll   + 0x0001bd0c
        0x785bdb38      mylib.dll   + 0x0001db38
        0x3001db20      ???
        0x3001dc40      ???
        0x3001a8a4      ???
        0x3001a79c      ???
        0x03f67348      coredll.dll + 0x0001d348

    Where do I find the modules for those missing stack addresses? Any suggestions? Thanks, PaulH

  • Get an array of structures from a native DLL to a C# application

    - by PaulH
    I have a C# .NET 2.0 CF project where I need to invoke a method in a native C++ DLL. This native method returns an array of type TableEntry. At the time the native method is called, I do not know how large the array will be. How can I get the table from the native DLL to the C# project? Below is effectively what I have now.

        // in C# .NET 2.0 CF project
        [StructLayout(LayoutKind.Sequential)]
        public struct TableEntry
        {
            [MarshalAs(UnmanagedType.LPWStr)]
            public string description;
            public int item;
            public int another_item;
            public IntPtr some_data;
        }

        [DllImport("MyDll.dll", CallingConvention = CallingConvention.Winapi, CharSet = CharSet.Auto)]
        public static extern bool GetTable(ref TableEntry[] table);

        SomeFunction()
        {
            TableEntry[] table = null;
            bool success = GetTable( ref table );
            // at this point, the table is empty
        }

        // In Native C++ DLL
        std::vector< TABLE_ENTRY > global_dll_table;

        extern "C" __declspec(dllexport) bool GetTable( TABLE_ENTRY* table )
        {
            table = &global_dll_table.front();
            return true;
        }

    Thanks, PaulH
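
    Separately from marshaling, the native GetTable() assigns the vector's address to a by-value pointer parameter, so the caller's variable is never touched. A common shape for this problem is a count-then-fill pair of exports; the sketch below assumes that protocol (it is not the existing API) and a TABLE_ENTRY layout mirroring the managed struct:

        #include <algorithm>
        #include <vector>

        // Assumed native mirror of the managed TableEntry.
        struct TABLE_ENTRY
        {
            const wchar_t* description;
            int item;
            int another_item;
            void* some_data;
        };

        std::vector< TABLE_ENTRY > global_dll_table;

        // First call: let the managed side size its array.
        extern "C" __declspec(dllexport) int GetTableCount()
        {
            return static_cast< int >( global_dll_table.size() );
        }

        // Second call: fill a caller-supplied buffer.  Writing *through* the
        // pointer reaches the caller; assigning *to* the pointer, as the
        // original GetTable() does, only changes a local copy.
        extern "C" __declspec(dllexport) bool GetTable( TABLE_ENTRY* buffer, int count )
        {
            if( buffer == NULL || count < static_cast< int >( global_dll_table.size() ) )
                return false;
            std::copy( global_dll_table.begin(), global_dll_table.end(), buffer );
            return true;
        }

    On the managed side, the array would then be allocated with the length from GetTableCount() and passed as [In, Out] TableEntry[] table; whether the Compact Framework marshaler round-trips the LPWStr field in an array of structures is worth verifying separately.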

  • implementing a read/write field for an interface that only defines read

    - by PaulH
    I have a C# 2.0 application where a base interface allows read-only access to a value in a concrete class. But, within the concrete class, I'd like to have read/write access to that value. So, I have an implementation like this:

        public abstract class Base
        {
            public abstract DateTime StartTime { get; }
        }

        public class Foo : Base
        {
            DateTime start_time_;

            public override DateTime StartTime
            {
                get { return start_time_; }
                internal set { start_time_ = value; }
            }
        }

    But, this gives me the error:

        Foo.cs(200,22): error CS0546: 'Foo.StartTime.set': cannot override because 'Base.StartTime' does not have an overridable set accessor

    I don't want the base class to have write access. But, I do want the concrete class to provide read/write access. Is there a way to make this work? Thanks, PaulH

    Unfortunately, Base can't be changed to an interface as it contains non-abstract functionality also. Something I should have thought to put in the original problem description.

        public abstract class Base
        {
            public abstract DateTime StartTime { get; }

            public void Buzz()
            {
                // do something interesting...
            }
        }

    My solution is to do this:

        public class Foo : Base
        {
            DateTime start_time_;

            public override DateTime StartTime
            {
                get { return start_time_; }
            }

            internal void SetStartTime( DateTime value )
            {
                start_time_ = value;
            }
        }

    It's not as nice as I'd like, but it works.

  • Ping broadcast on Win XP SP3

    - by PaulH
    I'm trying to ping the broadcast address 255.255.255.255 on WinXP SP3. If I use the command line, I get a host error:

        C:\>ping 255.255.255.255
        Ping request could not find host 255.255.255.255. Please check the name and try again.

    If I try a C++ program using the iphlpapi, IcmpSendEcho() fails and GetLastError() returns 11010 IP_REQ_TIMED_OUT.

        HANDLE h = ::IcmpCreateFile();
        IPAddr broadcast = inet_addr( "255.255.255.255" );
        BYTE payload[ 32 ] = { 0 };
        IP_OPTION_INFORMATION option = { 255, 0, 0, 0, 0 };

        // a buffer with room for 32 replies each containing the full payload
        std::vector< BYTE > replies( 32 * ( sizeof( ICMP_ECHO_REPLY ) + 32 ) );

        DWORD res = ::IcmpSendEcho( h, broadcast, payload, sizeof( payload ),
                                    &option, &replies[ 0 ], replies.size(), 1000 );
        ::IcmpCloseHandle( h );

    I can ping the local broadcast 192.168.0.255 with no problem. What do I need to do to ping the global broadcast? Thanks, PaulH
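
    The ICMP helper API may simply refuse the limited broadcast; a raw socket with SO_BROADCAST set is the usual way around that. A sketch under the assumptions that raw ICMP sockets are permitted for the process and that WSAStartup() has already been called:

        #include <winsock2.h>

        // Minimal ICMP echo request header (type 8, code 0).
        struct IcmpEcho
        {
            BYTE   type;
            BYTE   code;
            USHORT checksum;
            USHORT id;
            USHORT sequence;
        };

        // Standard one's-complement Internet checksum.
        static USHORT Checksum( const USHORT* data, int bytes )
        {
            DWORD sum = 0;
            for( ; bytes > 1; bytes -= 2 )
                sum += *data++;
            if( bytes )
                sum += *reinterpret_cast< const BYTE* >( data );
            sum = ( sum >> 16 ) + ( sum & 0xFFFF );
            sum += ( sum >> 16 );
            return static_cast< USHORT >( ~sum );
        }

        bool PingBroadcast()
        {
            SOCKET s = ::socket( AF_INET, SOCK_RAW, IPPROTO_ICMP );
            if( INVALID_SOCKET == s )
                return false;

            // Without SO_BROADCAST, sendto() to 255.255.255.255 fails.
            BOOL yes = TRUE;
            ::setsockopt( s, SOL_SOCKET, SO_BROADCAST,
                          reinterpret_cast< const char* >( &yes ), sizeof( yes ) );

            IcmpEcho echo = { 8, 0, 0, 0, 0 };
            echo.checksum = Checksum( reinterpret_cast< const USHORT* >( &echo ),
                                      sizeof( echo ) );

            sockaddr_in to = { 0 };
            to.sin_family = AF_INET;
            to.sin_addr.s_addr = INADDR_BROADCAST; // 255.255.255.255

            int sent = ::sendto( s, reinterpret_cast< const char* >( &echo ),
                                 sizeof( echo ), 0,
                                 reinterpret_cast< sockaddr* >( &to ), sizeof( to ) );
            ::closesocket( s );
            return sent == sizeof( echo );
        }

    Replies, if any hosts answer, would then be collected with recvfrom() on the same socket before it is closed.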

  • which is better: a lying copy constructor or a non-standard one?

    - by PaulH
    I have a C++ class that contains a non-copyable handle. The class, however, must have a copy constructor. So, I've implemented one that transfers ownership of the handle to the new object (as below).

        class Foo
        {
        public:
            Foo() : h_( INVALID_HANDLE_VALUE ) { };

            // transfer the handle to the new instance
            Foo( const Foo& other ) : h_( other.Detach() ) { };

            ~Foo()
            {
                if( INVALID_HANDLE_VALUE != h_ )
                    CloseHandle( h_ );
            };

            // other interesting functions...

        private:
            /// disallow assignment
            const Foo& operator=( const Foo& );

            HANDLE Detach() const
            {
                HANDLE h = h_;
                h_ = INVALID_HANDLE_VALUE;
                return h;
            };

            /// a non-copyable handle
            mutable HANDLE h_;
        }; // class Foo

    My problem is that the standard copy constructor takes a const reference and I'm modifying that reference. So, I'd like to know which is better (and why):

        1. a non-standard copy constructor: Foo( Foo& other );
        2. a copy constructor that 'lies':  Foo( const Foo& other );

    Thanks, PaulH
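
    For comparison, the non-const form is the one std::auto_ptr chose: the transfer of ownership is then visible in the signature, at the cost that copies cannot be made from const objects or from temporaries. A sketch of that variant, with the mutable member and the const Detach() gone:

        #include <windows.h>

        class Foo
        {
        public:
            Foo() : h_( INVALID_HANDLE_VALUE ) { }

            // non-standard, but honest: copying visibly mutates the source
            Foo( Foo& other ) : h_( other.h_ )
            {
                other.h_ = INVALID_HANDLE_VALUE;
            }

            ~Foo()
            {
                if( INVALID_HANDLE_VALUE != h_ )
                    CloseHandle( h_ );
            }

        private:
            /// disallow assignment
            const Foo& operator=( const Foo& );

            /// no longer needs to be mutable
            HANDLE h_;
        };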

  • std::ostream interface to an OLE IStream

    - by PaulH
    I have a Visual Studio 2008 C++ application using IStreams. I would like to use the IStream connection in a std::ostream. Something like this:

        IStream* stream = /* create valid IStream instance... */;
        IStreamBuf< WIN32_FIND_DATA > sb( stream );
        std::ostream os( &sb );

        WIN32_FIND_DATA d = { 0 };
        // send the structure along the IStream
        os << d;

    To accomplish this, I've implemented the following code:

        template< class _CharT, class _Traits >
        inline std::basic_ostream< _CharT, _Traits >&
            operator<<( std::basic_ostream< _CharT, _Traits >& os, const WIN32_FIND_DATA& i )
        {
            const _CharT* c = reinterpret_cast< const _CharT* >( &i );
            const _CharT* const end = c + sizeof( WIN32_FIND_DATA ) / sizeof( _CharT );
            for( ; c < end; ++c )
                os << *c;
            return os;
        }

        template< typename T >
        class IStreamBuf : public std::streambuf
        {
        public:
            IStreamBuf( IStream* stream ) : stream_( stream )
            {
                setp( reinterpret_cast< char* >( &buffer_ ),
                      reinterpret_cast< char* >( &buffer_ ) + sizeof( buffer_ ) );
            };

            virtual ~IStreamBuf()
            {
                sync();
            };

        protected:
            traits_type::int_type FlushBuffer()
            {
                int bytes = std::min< int >( pptr() - pbase(), sizeof( buffer_ ) );
                DWORD written = 0;
                HRESULT hr = stream_->Write( &buffer_, bytes, &written );
                if( FAILED( hr ) )
                {
                    return traits_type::eof();
                }
                pbump( -bytes );
                return bytes;
            };

            virtual int sync()
            {
                if( FlushBuffer() == traits_type::eof() )
                    return -1;
                return 0;
            };

            traits_type::int_type overflow( traits_type::int_type ch )
            {
                if( FlushBuffer() == traits_type::eof() )
                    return traits_type::eof();

                if( ch != traits_type::eof() )
                {
                    *pptr() = ch;
                    pbump( 1 );
                }

                return ch;
            };

        private:
            /// data queued up to be sent
            T buffer_;
            /// output stream
            IStream* stream_;
        }; // class IStreamBuf

    Yes, the code compiles and seems to work, but I've not had the pleasure of implementing a std::streambuf before. So, I'd just like to know if it's correct and complete. Thanks, PaulH
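
    One gap worth calling out: without an xsputn() override, a large write is copied into the small internal buffer one character at a time. A sketch of a bulk-write member that could sit alongside overflow() in IStreamBuf, handing whole runs to the IStream directly (same stream_ member, same flushing convention as above):

        // Flush whatever is buffered, then write the run in one IStream call.
        virtual std::streamsize xsputn( const char* s, std::streamsize n )
        {
            if( FlushBuffer() == traits_type::eof() )
                return 0;

            DWORD written = 0;
            HRESULT hr = stream_->Write( s, static_cast< ULONG >( n ), &written );
            if( FAILED( hr ) )
                return 0;

            return static_cast< std::streamsize >( written );
        }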

  • Deriving from a component and implementing IDisposable properly

    - by PaulH
    I have a Visual Studio 2008 C# .NET 2.0 CF project with an abstract class derived from Component. From that class, I derive several concrete classes (as in my example below). But, when I go to exit my Form, though the Form's Dispose() member is called and components.Dispose() is called, my components are never disposed. Can anybody suggest how I can fix this design?

        public abstract class SomeDisposableComponentBase : Component
        {
            private System.ComponentModel.IContainer components;

            protected SomeDisposableComponentBase()
            {
                InitializeComponent();
            }

            protected SomeDisposableComponentBase(IContainer container)
            {
                container.Add(this);
                InitializeComponent();
            }

            private void InitializeComponent()
            {
                components = new System.ComponentModel.Container();
            }

            protected abstract void Foo();

            #region IDisposable Members
            bool disposed_;

            // Warning 60 CA1063 : Microsoft.Design : Ensure that
            // 'SomeDisposableComponentBase.Dispose()' is declared as public and sealed.*
            public void Dispose()
            {
                // never called
                Dispose(true);
                GC.SuppressFinalize(this);
            }

            protected virtual void Dispose(bool disposing)
            {
                // never called
                if (!disposed_)
                {
                    if (disposing && (components != null))
                    {
                        components.Dispose();
                    }
                    disposed_ = true;
                }
                base.Dispose(disposing);
            }
            #endregion
        }

        public class SomeDisposableComponent : SomeDisposableComponentBase
        {
            public SomeDisposableComponent() : base() { }

            public SomeDisposableComponent(IContainer container) : base(container) { }

            protected override void Foo()
            {
                // Do something...
            }

            protected override void Dispose(bool disposing)
            {
                // never called
                base.Dispose(disposing);
            }
        }

        public partial class my_form : Form
        {
            private SomeDisposableComponentBase d_;

            public my_form()
            {
                InitializeComponent();
                if (null == components)
                    components = new System.ComponentModel.Container();
                d_ = new SomeDisposableComponent(components);
            }

            /// exit button clicked
            private void Exit_Click(object sender, EventArgs e)
            {
                this.Close();
            }

            /// from the my_form.designer.cs
            protected override void Dispose(bool disposing)
            {
                if (disposing && (components != null))
                {
                    // this function is executed as expected when the form is closed
                    components.Dispose();
                }
                base.Dispose(disposing);
            }
        }

    *I note that FX-Cop is giving me a hint here. But, if I try to declare that function as sealed, I get the error:

        error CS0238: 'SomeDisposableComponentBase.Dispose()' cannot be sealed because it is not an override

    Declaring that function an override leads to:

        'SomeDisposableComponentBase.Dispose()': cannot override inherited member 'System.ComponentModel.Component.Dispose()' because it is not marked virtual, abstract, or override

    Thanks, PaulH

  • Longitudinal Redundancy Check fails

    - by PaulH
    I have an application that decodes data from a magnetic stripe reader. But, I'm having difficulty getting my calculated LRC check byte to match the one on the cards. If I were to grab 3 cards each with 3 tracks, I would guess the algorithm below would work on 4 of the 9 tracks in those cards. The algorithm I'm using looks like this (C#):

        private static char GetLRC(string s, int start, int end)
        {
            int result = 0;
            for (int i = start; i <= end; i++)
            {
                result ^= Convert.ToByte(s[i]);
            }
            return Convert.ToChar(result);
        }

    This is an example of track 3 data that fails the check. On this card, track 2 matched, but track 1 also failed.

            0 1 2 3 4 5 6 7 8 9 A B C D E F
        00  3 4 4 4 4 4 4 4 4 4 4 5 5 5 5 5
        10  5 5 5 5 5 6 6 6 6 6 6 6 6 6 6 7
        20  7 7 7 7 7 7 7 7 7 8 8 8 8 8 8 8
        30  8 8 8 9 9 9 9 9 9 9 9 9 9 0 0 0
        40  0 0 0 0 0 0 0 1 2 3 4 1 1 1 1 1
        50  1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 3
        60  3 3 3 3 3 3 3 3

    The sector delimiter is ';' and it ends with a '?'. The LRC byte from this track is 0x30. Unfortunately, the algorithm above computes an LRC of 0x00 per the following calculation (apologies for its length; I want to be thorough):

        00 ^ 3b = 3b   ';'
        3b ^ 33 = 08   08 ^ 34 = 3c   3c ^ 34 = 08   08 ^ 34 = 3c
        3c ^ 34 = 08   08 ^ 34 = 3c   3c ^ 34 = 08   08 ^ 34 = 3c
        3c ^ 34 = 08   08 ^ 34 = 3c   3c ^ 34 = 08   08 ^ 35 = 3d
        3d ^ 35 = 08   08 ^ 35 = 3d   3d ^ 35 = 08   08 ^ 35 = 3d
        3d ^ 35 = 08   08 ^ 35 = 3d   3d ^ 35 = 08   08 ^ 35 = 3d
        3d ^ 35 = 08   08 ^ 36 = 3e   3e ^ 36 = 08   08 ^ 36 = 3e
        3e ^ 36 = 08   08 ^ 36 = 3e   3e ^ 36 = 08   08 ^ 36 = 3e
        3e ^ 36 = 08   08 ^ 36 = 3e   3e ^ 36 = 08   08 ^ 37 = 3f
        3f ^ 37 = 08   08 ^ 37 = 3f   3f ^ 37 = 08   08 ^ 37 = 3f
        3f ^ 37 = 08   08 ^ 37 = 3f   3f ^ 37 = 08   08 ^ 37 = 3f
        3f ^ 37 = 08   08 ^ 38 = 30   30 ^ 38 = 08   08 ^ 38 = 30
        30 ^ 38 = 08   08 ^ 38 = 30   30 ^ 38 = 08   08 ^ 38 = 30
        30 ^ 38 = 08   08 ^ 38 = 30   30 ^ 38 = 08   08 ^ 39 = 31
        31 ^ 39 = 08   08 ^ 39 = 31   31 ^ 39 = 08   08 ^ 39 = 31
        31 ^ 39 = 08   08 ^ 39 = 31   31 ^ 39 = 08   08 ^ 39 = 31
        31 ^ 39 = 08   08 ^ 30 = 38   38 ^ 30 = 08   08 ^ 30 = 38
        38 ^ 30 = 08   08 ^ 30 = 38   38 ^ 30 = 08   08 ^ 30 = 38
        38 ^ 30 = 08   08 ^ 30 = 38   38 ^ 30 = 08   08 ^ 31 = 39
        39 ^ 32 = 0b   0b ^ 33 = 38   38 ^ 34 = 0c   0c ^ 31 = 3d
        3d ^ 31 = 0c   0c ^ 31 = 3d   3d ^ 31 = 0c   0c ^ 31 = 3d
        3d ^ 31 = 0c   0c ^ 31 = 3d   3d ^ 31 = 0c   0c ^ 31 = 3d
        3d ^ 31 = 0c   0c ^ 32 = 3e   3e ^ 32 = 0c   0c ^ 32 = 3e
        3e ^ 32 = 0c   0c ^ 32 = 3e   3e ^ 32 = 0c   0c ^ 32 = 3e
        3e ^ 32 = 0c   0c ^ 32 = 3e   3e ^ 32 = 0c   0c ^ 33 = 3f
        3f ^ 33 = 0c   0c ^ 33 = 3f   3f ^ 33 = 0c   0c ^ 33 = 3f
        3f ^ 33 = 0c   0c ^ 33 = 3f   3f ^ 33 = 0c   0c ^ 33 = 3f
        3f ^ 3f = 00   '?'

    If anybody can point out how to fix my algorithm, I would appreciate it. Thanks, PaulH
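
    One possibility the byte-wise XOR glosses over: for the BCD-style tracks (2 and 3 under ISO 7811), the LRC is defined over the 4 data bits of each character, with the LRC character itself then re-encoded into the same '0'..'?' range the reader delivers. That would explain the observed pair: the nibble XOR of this track is 0x0, and 0x30 | 0x0 is exactly the 0x30 on the card. A sketch of that variant (in C++; the BCD assumption is mine, and track 1's 6-bit alphabet would need different masking):

        #include <string>

        // XOR the low 4 bits of every character from the start sentinel through
        // the end sentinel inclusive, then put the result back into the '0'-'?'
        // range so it is directly comparable to the LRC character the reader
        // returns.
        char GetTrackLrc( const std::string& s, size_t start, size_t end )
        {
            unsigned char lrc = 0;
            for( size_t i = start; i <= end; ++i )
                lrc ^= static_cast< unsigned char >( s[ i ] ) & 0x0F;
            return static_cast< char >( 0x30 | lrc );
        }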

  • Access violation using LocalAlloc()

    - by PaulH
    I have a Visual Studio 2008 Windows Mobile 6 C++ application that is using an API that requires the use of LocalAlloc(). To make my life easier, I created an implementation of a standard allocator that uses LocalAlloc() internally:

        /// Standard library allocator implementation using LocalAlloc and
        /// LocalReAlloc to create a dynamically-sized array.
        /// Memory allocated by this allocator is never deallocated. That is up
        /// to the user.
        template< class T, int max_allocations >
        class LocalAllocator
        {
        public:
            typedef T value_type;
            typedef size_t size_type;
            typedef ptrdiff_t difference_type;
            typedef T* pointer;
            typedef const T* const_pointer;
            typedef T& reference;
            typedef const T& const_reference;

            pointer address( reference r ) const { return &r; };
            const_pointer address( const_reference r ) const { return &r; };

            LocalAllocator() throw() : c_( NULL )
            {
            };

            /// Attempt to allocate a block of storage with enough space for n
            /// elements of type T. n>=1 && n<=max_allocations.
            /// If memory cannot be allocated, a std::bad_alloc() exception is
            /// thrown.
            pointer allocate( size_type n, const void* /*hint*/ = 0 )
            {
                if( NULL == c_ )
                {
                    c_ = LocalAlloc( LPTR, sizeof( T ) * n );
                }
                else
                {
                    HLOCAL c = LocalReAlloc( c_, sizeof( T ) * n, LHND );
                    if( NULL == c )
                        LocalFree( c_ );
                    c_ = c;
                }
                if( NULL == c_ )
                    throw std::bad_alloc();
                return reinterpret_cast< T* >( c_ );
            };

            /// Normally, this would release a block of previously allocated
            /// storage. Since that's not what we want, this function does
            /// nothing.
            void deallocate( pointer /*p*/, size_type /*n*/ )
            {
                // no deallocation is performed. that is up to the user.
            };

            /// maximum number of elements that can be allocated
            size_type max_size() const throw()
            {
                return max_allocations;
            };

        private:
            /// current allocation point
            HLOCAL c_;
        }; // class LocalAllocator

    My application is using that allocator implementation in a std::vector<>:

        #define MAX_DIRECTORY_LISTING 512

        std::vector< WIN32_FIND_DATA,
                     LocalAllocator< WIN32_FIND_DATA, MAX_DIRECTORY_LISTING > > file_list;

        WIN32_FIND_DATA find_data = { 0 };
        HANDLE find_file = ::FindFirstFile( folder.c_str(), &find_data );
        if( NULL != find_file )
        {
            do
            {
                // access violation here on the 257th item.
                file_list.push_back( find_data );
            } while ( ::FindNextFile( find_file, &find_data ) );
            ::FindClose( find_file );
        }

        // data submitted to the API that requires a LocalAlloc()'d array of
        // WIN32_FIND_DATA structures
        SubmitData( &file_list.front() );

    On the 257th item added to the vector, the application crashes with an access violation:

        Data Abort: Thread=8e1b0400 Proc=8031c1b0 'rapiclnt'
        AKY=00008001 PC=03f9e3c8(coredll.dll+0x000543c8) RA=03f9ff04(coredll.dll+0x00055f04)
        BVA=21ae0020 FSR=00000007
        First-chance exception at 0x03f9e3c8 in rapiclnt.exe: 0xC0000005: Access violation reading location 0x01ae0020.

    LocalAllocator::allocate is called with n=512 and LocalReAlloc() succeeds. The actual access violation occurs within the std::vector code after the LocalAllocator::allocate call:

        0x03f9e3c8
        0x03f9ff04
        > MyLib.dll!stlp_std::priv::__copy_trivial(const void* __first = 0x01ae0020, const void* __last = 0x01b03020, void* __result = 0x01b10020) Line: 224, Byte Offsets: 0x3c C++
          MyLib.dll!stlp_std::vector<_WIN32_FIND_DATAW,LocalAllocator<_WIN32_FIND_DATAW,512> >::_M_insert_overflow(_WIN32_FIND_DATAW* __pos = 0x01b03020, _WIN32_FIND_DATAW& __x = {...}, stlp_std::__true_type& __formal = {...}, unsigned int __fill_len = 1, bool __atend = true) Line: 112, Byte Offsets: 0x5c C++
          MyLib.dll!stlp_std::vector<_WIN32_FIND_DATAW,LocalAllocator<_WIN32_FIND_DATAW,512> >::push_back(_WIN32_FIND_DATAW& __x = {...}) Line: 388, Byte Offsets: 0xa0 C++
          MyLib.dll!Foo(unsigned long int cbInput = 16, unsigned char* pInput = 0x01a45620, unsigned long int* pcbOutput = 0x1dabfbbc, unsigned char** ppOutput = 0x1dabfbc0, IRAPIStream* __formal = 0x00000000) Line: 66, Byte Offsets: 0x1e4 C++

    If anybody can point out what I may be doing wrong, I would appreciate it. Thanks, PaulH
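
    The stack itself points at the problem: __copy_trivial is reading from 0x01ae0020, the block that allocate() just handed to LocalReAlloc() with LHND, a moveable flag. std::vector's contract with an allocator is that allocate() returns fresh storage and the vector then copies the old elements across itself; an allocate() that reallocates, moves, or frees the old block first leaves the vector copying out of dead memory, which fits a crash at exactly the 257th push_back (the 256-to-512 growth step). A sketch of the member pair reworked to honor that contract (my rework, not the original design):

        pointer allocate( size_type n, const void* /*hint*/ = 0 )
        {
            // Always hand back a *new* fixed block; never disturb the block the
            // vector is still holding, because it has not copied out of it yet.
            HLOCAL c = LocalAlloc( LPTR, sizeof( T ) * n );
            if( NULL == c )
                throw std::bad_alloc();
            return reinterpret_cast< T* >( c );
        }

        void deallocate( pointer p, size_type /*n*/ )
        {
            // By the time the vector calls this, the elements have already been
            // copied into the newer block, so the old one can be released.
            LocalFree( p );
        }

    The final buffer handed to SubmitData( &file_list.front() ) stays valid for as long as the vector is alive; if the API keeps the pointer after the vector is destroyed, the block would have to be detached from the vector rather than freed here.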

  • Disk performance below expectations

    - by paulH
    This is a follow-up to a previous question that I asked (Two servers with inconsistent disk speed). I have a PowerEdge R510 server with a PERC H700 integrated RAID controller (call this Server B) that was built using eight disks with 3Gb/s bandwidth, which I was comparing with an almost identical server (call this Server A) that was built using four disks with 6Gb/s bandwidth. Server A had much better I/O rates than Server B. Once I discovered the difference with the disks, I had Server A rebuilt with faster 6Gbps disks. Unfortunately this resulted in no increase in the performance of the disks. Expecting that there must be some other configuration difference between the servers, we took the 6Gbps disks out of Server A and put them in Server B. This also resulted in no increase in the performance of the disks. We now have two identical servers built, with the exception that one is built with six 6Gbps disks and the other with eight 3Gbps disks, and the I/O rates of the disks are pretty much identical. This suggests that there is some bottleneck other than the disks, but I cannot understand how Server B originally had better I/O that has subsequently been 'lost'. Comparative I/O information below, as measured by SQLIO. The same parameters were used for each test. It's not the actual numbers that are significant but rather the variations between systems. In each case D: is a 2-disk RAID 1 volume, and E: is a 4-disk RAID 10 volume (apart from the original Server A, where E: was a 2-disk RAID 0 volume).

        Server A (original setup with 6Gbps disks)
        D: Read  (MB/s)  63
        D: Write (MB/s) 170
        E: Read  (MB/s)  68
        E: Write (MB/s) 320

        Server B (original setup with 3Gbps disks)
        D: Read  (MB/s)  52
        D: Write (MB/s)  88
        E: Read  (MB/s) 112
        E: Write (MB/s) 130

        Server A (new setup with 3Gbps disks)
        D: Read  (MB/s)  55
        D: Write (MB/s)  85
        E: Read  (MB/s)  67
        E: Write (MB/s) 180

        Server B (new setup with 6Gbps disks)
        D: Read  (MB/s)  61
        D: Write (MB/s)  95
        E: Read  (MB/s)  69
        E: Write (MB/s) 180

    Can anybody suggest any ideas what is going on here? The drives in use are as follows:

        Dell Seagate F617N ST3300657SS 300GB 15K RPM SAS
        Dell Hitachi HUS156030VLS600 300GB 3.5 inch 15000rpm 6GB SAS
        Hitachi Hus153030vls300 300GB Server SAS
        Dell ST3146855SS Seagate 3.5 inch 146GB 15K SAS

  • What does Embedded SATA Controller : ATA mean?

    - by paulH
    I have a PowerEdge R510 server with a PERC H700 Integrated RAID controller that is exhibiting slower than expected disk speeds (RAID 1 and RAID 10 arrays), and I'm looking at the configuration of the server. Running the command omreport chassis biossetup on the server shows me the following configuration setting:

        Embedded SATA Controller : ATA

    I can also see that the possible options for this setting are: off | ata | qdma | raid. I've been looking online to find out what this setting means and what the various options refer to, but I've been unable to find anything particularly helpful, so I was hoping that somebody here could help to enlighten me. Thanks, Paul.

  • Android - How do I load a contact Photo?

    - by PaulH
    I'm having trouble loading a photo for a contact in Android. I've googled for an answer, but so far have come up empty. Does anyone have an example of querying for a contact, then loading the photo?

    The contactUri comes from an Activity result obtained using startActivityForResult(new Intent(Intent.ACTION_PICK, ContactsContract.CommonDataKinds.Phone.CONTENT_URI), PICK_CONTACT_REQUEST) and looks like this:

        content://com.android.contacts/data/1557

    The loadContact(...) works fine. However, when I call the getPhoto(...) method, I get a null value for the photo InputStream. It is also confusing because the URI values are different. The contactPhotoUri evaluates to:

        content://com.android.contacts/contacts/1557

    See the comments inline in the code below.

        class ContactAccessor {

            /**
             * Retrieves the contact information.
             */
            public ContactInfo loadContact(ContentResolver contentResolver, Uri contactUri) {
                // contactUri --> content://com.android.contacts/data/1557
                ContactInfo contactInfo = new ContactInfo();

                // Load the display name for the specified person
                Cursor cursor = contentResolver.query(contactUri,
                        new String[]{Contacts._ID, Contacts.DISPLAY_NAME, Phone.NUMBER, Contacts.PHOTO_ID},
                        null, null, null);
                try {
                    if (cursor.moveToFirst()) {
                        contactInfo.setId(cursor.getLong(0));
                        contactInfo.setDisplayName(cursor.getString(1));
                        contactInfo.setPhoneNumber(cursor.getString(2));
                    }
                } finally {
                    cursor.close();
                }
                return contactInfo; // <-- returns info for contact
            }

            public Bitmap getPhoto(ContentResolver contentResolver, Long contactId) {
                Uri contactPhotoUri = ContentUris.withAppendedId(Contacts.CONTENT_URI, contactId);
                // contactPhotoUri --> content://com.android.contacts/contacts/1557
                InputStream photoDataStream =
                        Contacts.openContactPhotoInputStream(contentResolver, contactPhotoUri); // <-- always null
                Bitmap photo = BitmapFactory.decodeStream(photoDataStream);
                return photo;
            }

            public class ContactInfo {
                private long id;
                private String displayName;
                private String phoneNumber;
                private Uri photoUri;

                public void setDisplayName(String displayName) { this.displayName = displayName; }
                public String getDisplayName() { return displayName; }
                public void setPhoneNumber(String phoneNumber) { this.phoneNumber = phoneNumber; }
                public String getPhoneNumber() { return phoneNumber; }
                public Uri getPhotoUri() { return this.photoUri; }
                public void setPhotoUri(Uri photoUri) { this.photoUri = photoUri; }
                public long getId() { return this.id; }
                public void setId(long id) { this.id = id; }
            }
        }

    Clearly, I'm doing something wrong here, but I can't seem to figure out what the problem is. Thanks.

  • How can I create photo effects in Android?

    - by PaulH
    I'd like to make an Android app that lets a user apply cool effects to photos taken with the camera. There are already a few out there, I know, but I'd like to try my own hand at one. I'm trying to figure out the best way to implement these effects. Here are some examples from the excellent Vignette app (which I own): http://www.flickr.com/groups/vignetteforandroid/pool/ I have been googling and stack-overflowing, but so far I've mostly found some references to published papers or books. I am ordering this one from Amazon presently: Digital Image Processing: An Algorithmic Introduction using Java. After some reading, I think I have a basic understanding of manipulating the RGB values for all the pixels in the image. My main question is how do I come up with a transformation that produces cool effects? By cool effects I mean some like those in the Vignette app or the iPhone apps ToyCamera and Polarize. I already have quite a bit of experience with Java, and I've made my first app for Android already. Any ideas? Thanks in advance.
