Search Results

Search found 1733 results on 70 pages for 'boost asio'.

Page 36 of 70

  • Intel TurboBoost not working under 12.04 LTS

    - by Panák Tibor
    Please help me, someone. :) My notebook has an Intel Core i5-3337U CPU with Intel Turbo Boost. Under 12.04, Turbo Boost is not working properly: the CPU tops out at 1.8 GHz even though it can run at 2.7 GHz. How can I fix it? Repeated runs of grep show the cores peaking at about 1.8 GHz rather than the turbo frequency:

        pano@dell-inspiron:~$ grep MHz /proc/cpuinfo
        cpu MHz : 774.000
        cpu MHz : 1801.000
        cpu MHz : 774.000
        cpu MHz : 1801.000
        pano@dell-inspiron:~$ grep MHz /proc/cpuinfo
        cpu MHz : 774.000
        cpu MHz : 1801.000
        cpu MHz : 774.000
        cpu MHz : 1801.000
        pano@dell-inspiron:~$ grep MHz /proc/cpuinfo
        cpu MHz : 774.000
        cpu MHz : 1801.000
        cpu MHz : 1800.000
        cpu MHz : 1801.000
        pano@dell-inspiron:~$ grep MHz /proc/cpuinfo
        cpu MHz : 774.000
        cpu MHz : 1801.000
        cpu MHz : 774.000
        cpu MHz : 1700.000

    Read the article

  • MVC 2 in 2 Minutes!

    - by Steve Michelotti
    In a couple of recent Code Camps, I've given my presentation "Top 10 Ways MVC 2 Will Boost Your Productivity." In the presentation, I cover all the major new features introduced in MVC 2, with a focus on productivity enhancements. To drive the point home, I conclude with a final demo where I build a couple of screens from scratch, highlighting many (but not all) of the features previously covered in the talk. A couple of weeks ago, I was asked to make it available online, so here it is. In 2 minutes, the demo builds a couple of screens from scratch that provide a goal-setting tracker for a user. MVC 2 features included in the video are:

      - Template Helpers / Editor Templates
      - Server-side/Client-side Validation
      - Model Metadata for View Model
      - HTML Encoding Syntax
      - Dependency Injection
      - Abstract Controllers
      - Custom T4 Templates
      - Custom MVC Visual Studio 2010 Code Snippets

    The complete code samples and slide deck can be downloaded here: Top 10 Ways MVC 2 Will Boost Your Productivity. Enjoy! (Right-click and Zoom to view in full screen.) Click here for a direct link to the video.

    Read the article

  • Tuesday at Oracle OpenWorld 2012 - Must See Session: “Jump-starting Integration Projects with Oracle AIA Foundation Pack”

    - by Lionel Dubreuil
    Don't miss this session, "CON8769 - Jump-starting Integration Projects with Oracle AIA Foundation Pack":

      Date: Tuesday, Oct 2
      Time: 1:15 PM - 2:15 PM
      Location: Marriott Marquis - Salon 7
      Speakers: Robert Wunderlich - Principal Product Manager, Oracle; Munazza Bukhari - Group Manager, AIA FP Product Management, Oracle

    The Oracle Application Integration Architecture Foundation Pack development lifecycle prescribes the best-practice methodology for developing integrations between applications. The lifecycle is supported by a toolset that focuses on architects and developers. Attend this session to understand how Oracle AIA Foundation Pack can jump-start integration project development and boost developer productivity. It demonstrates what the product does today and showcases new features such as support for building direct integrations. Objectives for this session are to:

      - Understand how to boost developer productivity
      - Hear about support for direct integrations
      - Learn what's new in Oracle AIA Foundation Pack

    Read the article

  • Are Sony Vaio S series or T series laptops compatible with both Windows 7 and Ubuntu 12.04 in a dual-boot installation?

    - by rini
    I am planning to buy a new laptop which is suitable for both Ubuntu 12.04 and Windows 7 in a dual-boot configuration. I am looking at two Sony Vaio models with these configurations:

      13.3" S series customizable laptop (SVS131190S)
      - 3rd gen Intel® Core™ i5-3210M processor (2.50GHz / 3.10GHz with Turbo Boost)
      - Intel® HD Graphics 4000
      - 4GB (4GB fixed onboard + 1 open slot) DDR3-1333MHz
      - 500GB (5400rpm) hard drive

      13.3" T series customizable ultrabook
      - 3rd gen Intel® Core™ i7-3517U processor (1.90GHz / 3.00GHz with Turbo Boost)
      - Intel® HD Graphics 4000
      - 500GB (5400rpm) + 32GB MLC hybrid hard drive with RAID 0
      - 6GB (4GB fixed onboard + 2GB removable) DDR3-1333MHz

    Can anyone please tell me which laptop will work better for dual booting? Any help/comments are really appreciated.

    Read the article

  • Generating CMakeLists.txt

    - by vanna
    I have a bunch of C++ source files and headers. They may use external libraries, such as Boost. I am interested in the process of building binaries for Windows and *nix. Makefiles (*nix) and .vcproj files (Windows) invoke the compilers with certain specifications, such as the order of compilation, compilation options, and so on. CMakeLists.txt can be used by CMake to generate either makefiles or .vcproj files, and it provides very helpful commands such as recursive search for files, automatic linkage with known libraries, installers, and variables that can be used in source files. Is there any existing tool that would generate a CMakeLists.txt from specified options? Options could be something like: scan this folder and make a library out of it, then scan this other folder and make an executable, and automatically link both with Boost, along with a user-friendly installer and generated INSTALL.txt and README.txt. Something very powerful like that.

    Read the article

  • Architecture for Qt SIGNAL with subclass-specific, templated argument type

    - by Barry Wark
    I am developing a scientific data acquisition application using Qt. Since I'm not a deep expert in Qt, I'd like some architecture advice from the community on the following problem.

    The application supports several hardware acquisition interfaces, but I would like to provide a common API on top of those interfaces. Each interface has a sample data type and units for its data, so I'm representing a vector of samples from each device as a std::vector of Boost.Units quantities (i.e. std::vector<boost::units::quantity<unit,sample_type> >). I'd like to use a multi-cast style architecture, where each data source broadcasts newly received data to one or more interested parties. Qt's Signal/Slot mechanism is an obvious fit for this style. So, I'd like each data source to emit a signal like

        typedef std::vector<boost::units::quantity<unit,sample_type> > SampleVector;

        signals:
            void samplesAcquired(SampleVector sampleVector);

    for the unit and sample_type appropriate for that device. Since templated QObject subclasses aren't supported by the meta-object compiler, there doesn't seem to be a way to have a (templated) base class for all data sources which defines the samplesAcquired signal. In other words, the following won't work:

        template<typename T, typename U> //sample type and units
        class DataSource : public QObject
        {
            Q_OBJECT
            ...
        public:
            typedef std::vector<boost::units::quantity<U,T> > SampleVector;

        signals:
            void samplesAcquired(SampleVector sampleVector);
        };

    The best option I've been able to come up with is a two-layered approach:

        template<typename T, typename U> //sample type and units
        class IAcquiredSamples
        {
        public:
            typedef std::vector<boost::units::quantity<U,T> > SampleVector;

            virtual shared_ptr<SampleVector> acquiredData(TimeStamp ts, unsigned long nsamples);
        };

        class DataSource : public QObject
        {
            ...
        signals:
            void samplesAcquired(TimeStamp ts, unsigned long nsamples);
        };

    The samplesAcquired signal now gives a timestamp and the number of samples for the acquisition, and clients must use the IAcquiredSamples API to retrieve those samples. Obviously data sources must subclass both DataSource and IAcquiredSamples.

    The disadvantage of this approach appears to be a loss of simplicity in the API... it would be much nicer if clients could get the acquired samples in the connected slot. Being able to use Qt's queued connections would also make threading issues easier, instead of having to manage them in the acquiredData method within each subclass.

    One other possibility is to use a QVariant argument. This necessarily puts the onus on subclasses to register their particular sample vector type with Q_DECLARE_METATYPE/qRegisterMetaType. Not really a big deal. Clients of the base class, however, will have no way of knowing what the QVariant's value type is unless a tag struct is also passed with the signal. I consider this solution at least as convoluted as the one above, as it forces clients of the abstract base class API to deal with some of the gnarlier aspects of the type system.

    So, is there a way to achieve the templated signal parameter? Is there a better architecture than the one I've proposed?
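
    For the QVariant option mentioned above, here is a minimal sketch of the metatype registration and round trip (illustration only, not code from the post; VoltSampleVector is a made-up stand-in for one concrete sample vector type):

        #include <QMetaType>
        #include <QVariant>
        #include <iostream>
        #include <vector>

        // Stand-in for one concrete std::vector<boost::units::quantity<...> > type.
        typedef std::vector<double> VoltSampleVector;
        Q_DECLARE_METATYPE(VoltSampleVector)

        int main()
        {
            // Needed once per concrete type before it can travel through queued connections.
            qRegisterMetaType<VoltSampleVector>("VoltSampleVector");

            VoltSampleVector samples(3, 1.5);
            QVariant payload = QVariant::fromValue(samples);        // what the signal would carry
            VoltSampleVector back = payload.value<VoltSampleVector>();
            std::cout << back.size() << std::endl;                  // prints 3
            return 0;
        }

    A receiving slot would extract the vector the same way, at the cost of the type-tagging problem described above.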

    Read the article

  • Why does my finite state machine take so long to execute?

    - by BillyONeal
    Hello all :) I'm working on a state machine which is supposed to extract function calls of the form

        /* I am a comment */
        //I am a comment
        perf("this.is.a.string.which\"can have QUOTES\"", 123456);

    where the extracted data would be

        perf("this.is.a.string.which\"can have QUOTES\"", 123456);

    from a file. Currently, processing a 41 KB file takes close to a minute and a half. Is there something I'm seriously misunderstanding here about this finite state machine?

        #include <boost/algorithm/string.hpp>

        std::vector<std::string> Foo()
        {
            std::string fileData;
            //Fill fileData with the contents of a file

            std::vector<std::string> results;
            std::string::iterator begin = fileData.begin();
            std::string::iterator end = fileData.end();
            std::string::iterator stateZeroFoundLocation = fileData.begin();
            std::size_t state = 0;
            for(; begin < end; begin++)
            {
                switch (state)
                {
                case 0:
                    if (boost::starts_with(boost::make_iterator_range(begin, end), "pref("))
                    {
                        stateZeroFoundLocation = begin;
                        begin += 4;
                        state = 2;
                    }
                    else if (*begin == '/')
                        state = 1;
                    break;
                case 1:
                    state = 0;
                    switch (*begin)
                    {
                    case '*':
                        begin = boost::find_first(boost::make_iterator_range(begin, end), "*/").end();
                        break;
                    case '/':
                        begin = std::find(begin, end, L'\n');
                    }
                    break;
                case 2:
                    if (*begin == '"') state = 3;
                    break;
                case 3:
                    switch(*begin)
                    {
                    case '\\': state = 4; break;
                    case '"': state = 5;
                    }
                    break;
                case 4:
                    state = 3;
                    break;
                case 5:
                    if (*begin == ',') state = 6;
                    break;
                case 6:
                    if (*begin != ' ') state = 7;
                    break;
                case 7:
                    switch(*begin)
                    {
                    case '"': state = 8; break;
                    default: state = 10; break;
                    }
                    break;
                case 8:
                    switch(*begin)
                    {
                    case '\\': state = 9; break;
                    case '"': state = 10;
                    }
                    break;
                case 9:
                    state = 8;
                    break;
                case 10:
                    if (*begin == ')') state = 11;
                    break;
                case 11:
                    if (*begin == ';') state = 12;
                    break;
                case 12:
                    state = 0;
                    results.push_back(std::string(stateZeroFoundLocation, begin));
                };
            }
            return results;
        }

    Billy3

    Read the article

  • Complex error handling

    - by Caspin
    I've got a particularly ornery piece of network code. I'm using asio, but that really doesn't matter for this question. I assume there is no way to unbind a socket other than closing it. The problem is that open(), bind(), and listen() can all throw a system_error, so I handled the code with a simple try/catch. The code as written is broken:

        using namespace boost::asio;

        class Thing
        {
        public:
            ip::tcp::endpoint m_address;
            ip::tcp::acceptor m_acceptor;

            /// connect should handle all of its exceptions internally.
            bool connect()
            {
                try
                {
                    m_acceptor.open( m_address.protocol() );
                    m_acceptor.set_option( tcp::acceptor::reuse_address(true) );
                    m_acceptor.bind( m_address );
                    m_acceptor.listen();
                    m_acceptor.async_accept( /*stuff*/ );
                }
                catch( const boost::system::system_error& error )
                {
                    assert(m_acceptor.is_open());
                    m_acceptor.close();
                    return false;
                }
                return true;
            }

            /// don't call disconnect unless connect previously succeeded.
            void disconnect()
            {
                // other stuff needed to disconnect is omitted
                m_acceptor.close();
            }
        };

    The error is that if the socket fails to connect, it will try to close the acceptor in the catch block and throw another system_error about closing an acceptor that has never been opened. One solution is to add an if( acceptor.is_open() ) check in the catch block, but that tastes wrong: kind of like mixing C-style error checking with C++ exceptions. If I were to go that route, I may as well use the non-throwing version of open():

        boost::system::error_code error;
        acceptor.open( address.protocol(), error );
        if( ! error )
        {
            try
            {
                acceptor.set_option( tcp::acceptor::reuse_address(true) );
                acceptor.bind( address );
                acceptor.listen();
                acceptor.async_accept( /*stuff*/ );
            }
            catch( const boost::system::system_error& error )
            {
                assert(acceptor.is_open());
                acceptor.close();
                return false;
            }
        }
        return !error;

    Is there an elegant way to handle these possible exceptions using RAII and try/catch blocks? Am I just wrong-headed in trying to avoid if( error condition )-style error handling when using exceptions?
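
    For illustration, here is one possible shape for this logic if every step uses the non-throwing error_code overloads, so the close() decision never has to guess whether open() succeeded (sketch only, not a drop-in replacement; the function name and port number are made up, and the async_accept call is left out just as in the question):

        #include <boost/asio.hpp>
        #include <iostream>

        // Sketch: every call reports through an error_code instead of throwing.
        bool open_and_listen(boost::asio::ip::tcp::acceptor& acceptor,
                             const boost::asio::ip::tcp::endpoint& address)
        {
            boost::system::error_code ec;

            acceptor.open(address.protocol(), ec);
            if (ec)
                return false;                              // nothing was opened, nothing to close

            acceptor.set_option(boost::asio::ip::tcp::acceptor::reuse_address(true), ec);
            if (!ec) acceptor.bind(address, ec);
            if (!ec) acceptor.listen(boost::asio::socket_base::max_connections, ec);

            if (ec)                                        // opened, but a later step failed
            {
                boost::system::error_code ignored;
                acceptor.close(ignored);                   // non-throwing close
                return false;
            }
            return true;                                   // async_accept(...) would be started here
        }

        int main()
        {
            boost::asio::io_service service;
            boost::asio::ip::tcp::acceptor acceptor(service);
            boost::asio::ip::tcp::endpoint address(boost::asio::ip::tcp::v4(), 5555);
            std::cout << (open_and_listen(acceptor, address) ? "listening" : "failed") << std::endl;
            return 0;
        }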

    Read the article

  • Portability of pthreads-win32 across various compilers

    - by Artyom
    Hello, I'm using pthreads-win32 for portable threading support on Windows. At least according to the documentation, pthreads-win32 should work with MSVC, and MSVC builds are even provided. But I don't know whether the library is tested with the latest MSVC compilers, like MSVC 2008, or whether it is supported under 64-bit Windows. Is anybody aware of any issues with this library? Note: do not even try to recommend Boost.Thread; I'm not interested in it, and I'm already familiar with the Boost.Thread library.

    Read the article

  • Is this an idiomatic way to pass mocks into objects?

    - by Billy ONeal
    I'm a bit confused about passing this mock class into an implementation class. It feels wrong to have all this explicitly managed memory flying around. I'd just pass the class by value, but that runs into the slicing problem. Am I missing something here?

    Implementation:

        namespace detail
        {
            struct FileApi
            {
                virtual HANDLE CreateFileW(
                    __in     LPCWSTR lpFileName,
                    __in     DWORD dwDesiredAccess,
                    __in     DWORD dwShareMode,
                    __in_opt LPSECURITY_ATTRIBUTES lpSecurityAttributes,
                    __in     DWORD dwCreationDisposition,
                    __in     DWORD dwFlagsAndAttributes,
                    __in_opt HANDLE hTemplateFile
                )
                {
                    return ::CreateFileW(lpFileName, dwDesiredAccess, dwShareMode,
                        lpSecurityAttributes, dwCreationDisposition,
                        dwFlagsAndAttributes, hTemplateFile);
                }

                virtual void CloseHandle(HANDLE handleToClose)
                {
                    ::CloseHandle(handleToClose);
                }
            };
        }

        class File : boost::noncopyable
        {
            HANDLE hWin32;
            boost::scoped_ptr<detail::FileApi> fileApi;
        public:
            File(
                __in     LPCWSTR lpFileName,
                __in     DWORD dwDesiredAccess,
                __in     DWORD dwShareMode,
                __in_opt LPSECURITY_ATTRIBUTES lpSecurityAttributes,
                __in     DWORD dwCreationDisposition,
                __in     DWORD dwFlagsAndAttributes,
                __in_opt HANDLE hTemplateFile,
                __in     detail::FileApi * method = new detail::FileApi()
            )
            {
                fileApi.reset(method);
                hWin32 = fileApi->CreateFileW(lpFileName, dwDesiredAccess, dwShareMode,
                    lpSecurityAttributes, dwCreationDisposition,
                    dwFlagsAndAttributes, hTemplateFile);
            }

            ~File()
            {
                fileApi->CloseHandle(hWin32);
            }
        };

    Tests:

        namespace detail
        {
            struct MockFileApi : public FileApi
            {
                MOCK_METHOD7(CreateFileW, HANDLE(LPCWSTR, DWORD, DWORD, LPSECURITY_ATTRIBUTES, DWORD, DWORD, HANDLE));
                MOCK_METHOD1(CloseHandle, void(HANDLE));
            };
        }

        using namespace detail;
        using namespace testing;

        TEST(Test_File, OpenPassesArguments)
        {
            MockFileApi * api = new MockFileApi;
            EXPECT_CALL(*api, CreateFileW(Eq(L"BozoFile"), Eq(56), Eq(72),
                Eq(reinterpret_cast<LPSECURITY_ATTRIBUTES>(67)), Eq(98), Eq(102),
                Eq(reinterpret_cast<HANDLE>(98))))
                .Times(1).WillOnce(Return(reinterpret_cast<HANDLE>(42)));

            File test(L"BozoFile", 56, 72, reinterpret_cast<LPSECURITY_ATTRIBUTES>(67),
                98, 102, reinterpret_cast<HANDLE>(98), api);
        }
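
    For comparison, a minimal sketch of one common alternative: take the dependency by reference and default it to a shared, stateless instance, so no raw new shows up at call sites and ownership never changes hands (all names here are illustrative, not from the post):

        #include <iostream>

        struct FileApi
        {
            virtual ~FileApi() {}
            virtual int Open(const char* name) { std::cout << "real open: " << name << "\n"; return 1; }
        };

        // The production implementation is stateless, so one shared instance is enough.
        inline FileApi& DefaultFileApi()
        {
            static FileApi instance;
            return instance;
        }

        class File
        {
            FileApi& api;      // not owned; a test passes a mock that outlives the File
            int handle;
        public:
            explicit File(const char* name, FileApi& fileApi = DefaultFileApi())
                : api(fileApi), handle(api.Open(name)) {}
        };

        struct MockFileApi : FileApi
        {
            virtual int Open(const char* name) { std::cout << "mock open: " << name << "\n"; return 42; }
        };

        int main()
        {
            File production("a.txt");          // uses the default, real implementation
            MockFileApi mock;
            File tested("b.txt", mock);        // test injects the mock by reference
            return 0;
        }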

    Read the article

  • Why is overloading operator&() prohibited for classes stored in STL containers?

    - by sharptooth
    In this article ("problem 2") I suddenly see a statement that the C++ Standard prohibits using STL containers for storing elements of a class if that class has an overloaded operator&(). Having an overloaded operator&() can indeed be problematic, but it looks like the default "address-of" operation can be performed easily through the set of dirty-looking casts that boost::addressof() uses, which are believed to be portable and standard-compliant. Why is having an overloaded operator&() prohibited for classes stored in STL containers while the boost::addressof() workaround exists?
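
    For reference, a minimal sketch of the cast chain that boost::addressof() is built around (the function name here is made up; illustration only):

        #include <iostream>

        // Returns the real address of obj even if its class overloads operator&().
        template <typename T>
        T* real_addressof(T& obj)
        {
            return reinterpret_cast<T*>(
                &const_cast<char&>(
                    reinterpret_cast<const volatile char&>(obj)));
        }

        struct Evil
        {
            int value;
            Evil* operator&() { return 0; }   // hijacked address-of operator
        };

        int main()
        {
            Evil e;
            std::cout << &e << "\n";                  // null pointer: the overload wins
            std::cout << real_addressof(e) << "\n";   // the object's actual address
            return 0;
        }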

    Read the article

  • Conditionally enabling a constructor

    - by MK
    Here is how I can conditionally enable a constructor of a class:

        struct Foo
        {
            template<class T>
            Foo( T* ptr, typename boost::enable_if<is_arithmetic<T> >::type* = NULL )
            {}
        };

    I would like to know why I need to do the enabling via a dummy parameter. Why can I not just write:

        struct Foo
        {
            template<class T>
            Foo( typename boost::enable_if<is_arithmetic<T>, T>::type* = NULL )
            {}
        };
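
    For reference, constructor templates can never be given explicit template arguments, so T has to be deducible from a function parameter; in the second form T appears only inside enable_if, which is a non-deduced context, so no call could ever instantiate that constructor. Below is a minimal C++11 sketch of the variant that moves the check into a default template argument instead of a dummy parameter (using std::enable_if/std::is_arithmetic rather than the Boost versions; illustration only):

        #include <type_traits>

        struct Foo
        {
            // T is deduced from ptr; the constraint lives in a default template argument.
            template<class T,
                     class = typename std::enable_if<std::is_arithmetic<T>::value>::type>
            Foo( T* ptr )
            {}
        };

        int main()
        {
            int x = 0;
            Foo a(&x);                        // OK: int is arithmetic
            // std::string s; Foo b(&s);      // would fail: no viable constructor
            return 0;
        }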

    Read the article

  • C++, what does this syntax mean?

    - by aaa
    I found this in this file: http://www.boost.org/doc/libs/1_43_0/boost/spirit/home/phoenix/core/actor.hpp

    What does this syntax mean?

        struct actor ...
        {
            ...
            template <typename T0, typename T1>
            typename result<actor(T0&,T1&)>::type // this line

    Thank you
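
    That line follows the Boost result_of protocol: the template argument actor(T0&,T1&) is a function type used purely as a tag that encodes a call signature, and the nested result class template, specialized on such signatures, reports the corresponding return type; boost::result_of looks for exactly this. Here is a minimal sketch of the same idiom in a made-up functor (illustration only, not code from actor.hpp):

        #include <iostream>

        struct twice
        {
            // Nested "result" template: given a call signature spelled as a function
            // type, e.g. twice(int&), it names the return type of that call.
            template <typename Signature> struct result;

            template <typename T>
            struct result<twice(T&)> { typedef T type; };

            template <typename T>
            typename result<twice(T&)>::type operator()(T& x) const
            {
                return x + x;
            }
        };

        int main()
        {
            int n = 21;
            twice f;
            std::cout << f(n) << std::endl;   // prints 42
            return 0;
        }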

    Read the article

  • Use of for_each on map elements

    - by Antonio
    I have a map where I'd like to call a member function on every mapped data object. I know how to do this on any sequence, but is it possible to do it on an associative container? The closest answer I could find was this: Boost.Bind to access std::map elements in std::for_each. But I cannot use Boost in my project, so is there an STL alternative to boost::bind that I'm missing?

    If that is not possible, I thought of creating a temporary sequence of pointers to the data objects and then calling for_each on it, something like this:

        class MyClass
        {
        public:
            void Method() const;
        };

        std::map<int, MyClass> Map;
        //...

        std::vector<MyClass*> Vector;
        std::transform(Map.begin(), Map.end(), std::back_inserter(Vector),
                       std::mem_fun_ref(&std::map<int, MyClass>::value_type::second));
        std::for_each(Vector.begin(), Vector.end(), std::mem_fun(&MyClass::Method));

    It looks too obfuscated and I don't really like it. Any suggestions?
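
    For illustration, a plain C++03, STL-only alternative is a small hand-written function object that adapts the map's value_type and goes straight into std::for_each, with no temporary vector (names here are illustrative):

        #include <algorithm>
        #include <iostream>
        #include <map>
        #include <utility>

        class MyClass
        {
        public:
            void Method() const { std::cout << "called\n"; }
        };

        // Adapts one map element (a key/value pair) to a call on the mapped object.
        struct CallMethod
        {
            void operator()(const std::pair<const int, MyClass>& element) const
            {
                element.second.Method();
            }
        };

        int main()
        {
            std::map<int, MyClass> Map;
            Map[1] = MyClass();
            Map[2] = MyClass();
            std::for_each(Map.begin(), Map.end(), CallMethod());   // prints "called" twice
            return 0;
        }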

    Read the article

  • Find max integer size that a floating point type can handle without loss of precision

    - by Checkers
    A double has a greater range than a 64-bit integer, but its precision is less due to its representation (since a double is 64 bits as well, it can't fit more actual values). So, when representing larger integers, you start to lose precision in the integer part.

        #include <boost/cstdint.hpp>
        #include <iostream>
        #include <limits>

        template<typename T, typename TFloat>
        void maxint_to_double()
        {
            T i = std::numeric_limits<T>::max();
            TFloat d = i;
            std::cout << std::fixed << i << std::endl << d << std::endl;
        }

        int main()
        {
            maxint_to_double<int, double>();
            maxint_to_double<boost::intmax_t, double>();
            maxint_to_double<int, float>();
            return 0;
        }

    This prints:

        2147483647
        2147483647.000000
        9223372036854775807
        9223372036854775800.000000
        2147483647
        2147483648.000000

    Note how the maximum int can fit into a double without loss of precision, while boost::intmax_t (64-bit in this case) cannot; a float can't even hold an int.

    Now, the question: is there a way in C++ to check whether the entire range of a given integer type can fit into a floating-point type without loss of precision? Preferably, it would be a compile-time check that can be used in a static assertion, and it would not involve enumerating constants the compiler should know or can compute.
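
    For illustration, one possible compile-time check compares the integer type's value bits with the floating-point type's mantissa bits through std::numeric_limits<>::digits. The sketch below assumes radix-2 (IEEE 754) floating point, and the sample asserts additionally assume a 32-bit int; it is a suggestion, not something taken from the question:

        #include <boost/cstdint.hpp>
        #include <boost/static_assert.hpp>
        #include <limits>

        template <typename T, typename TFloat>
        struct fits_exactly
        {
            // Every value of the integer type T is exactly representable in TFloat
            // if the mantissa has at least as many bits as T has value bits.
            static const bool value =
                std::numeric_limits<TFloat>::is_iec559 &&
                std::numeric_limits<TFloat>::digits >= std::numeric_limits<T>::digits;
        };

        BOOST_STATIC_ASSERT(( fits_exactly<int, double>::value ));              // 31 value bits vs 53 mantissa bits
        BOOST_STATIC_ASSERT(( !fits_exactly<boost::intmax_t, double>::value )); // 63 vs 53: does not fit
        BOOST_STATIC_ASSERT(( !fits_exactly<int, float>::value ));              // 31 vs 24: does not fit

        int main() { return 0; }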

    Read the article
