Search Results

Search found 1194 results on 48 pages for 'portable'.

Page 34/48

  • What's the coolest hack you've seen or done?

    - by Robert S.
    As programmers, we've all put together a really cool program or pieced together some hardware in an interesting way to solve a problem. Today I was thinking about those hacks and how some of them have been made obsolete by modern technology (for example, you no longer need to hack your TiVo to add a network port). In the software world, we take things like drag-and-drop on a web page for granted now, but not too long ago that was a pretty exciting hack as well. One of the neatest hardware hacks I've seen was done by a former coworker at a telecom company years ago. He had a small portable television in his office and would watch it all day long while working. To get away with it, he wired the on/off switch to a pedal he operated with his foot under his desk. What's the coolest hardware or software hack you've personally seen or done? What hack are you working on right now?

  • When and why will an OS initialise memory to 0xCD, 0xDD, etc. on malloc/free/new/delete?

    - by LeopardSkinPillBoxHat
    I know that the OS will sometimes initialise memory with certain patterns such as 0xCD and 0xDD. What I want to know is when and why this happens.

    When: Is this specific to the compiler used? Do malloc/new and free/delete behave the same way in this regard? Is it platform-specific? Will it occur on other operating systems, such as Linux or VxWorks?

    Why: My understanding is that this only occurs in the Win32 debug configuration, and that it is used to detect memory overruns and to help catch errors in the debugger. Can you give any practical examples of how this initialisation is useful? I remember reading somewhere (maybe in Code Complete 2) that it is good to initialise memory to a known pattern when allocating it, and that certain patterns will trigger interrupts in Win32, resulting in exceptions showing up in the debugger. How portable is this?
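
    A minimal sketch of how to observe the behaviour, assuming a build under the Microsoft debug CRT -- the fill values are documented MSVC debug-heap behaviour, not something any OS or the C++ standard guarantees:

        #include <cstdio>
        #include <cstdlib>

        int main() {
            // Under a Win32 Debug build, the MSVC debug heap fills freshly
            // allocated memory with 0xCD ("clean"), freed memory with 0xDD
            // ("dead"), and guard bytes around allocations with 0xFD ("fence").
            // In a Release build, or on other platforms, the bytes printed
            // here are simply indeterminate.
            unsigned char* p = static_cast<unsigned char*>(std::malloc(4));
            std::printf("%02X %02X %02X %02X\n", p[0], p[1], p[2], p[3]);
            std::free(p);
        }

    In a Release build the same program prints whatever happens to be in the heap, which is why code must never rely on these fill values.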

  • XML serialization and MS/Mono portability

    - by Luca
    I'm trying to have classes serialized under both the MS runtime and the Mono runtime. With the MS runtime everything works fine, but under Mono I get exceptions at program startup. The following exceptions are thrown:

        There was an error reflecting a type: System.TypeInitializationException (a class)
        There was an error reflecting a type: System.InvalidOperationException (a class)
        There was an error reflecting a field: System.ArgumentOutOfRangeException < 0 (an array of classes)

    The binary was compiled using the MS SDK, but I don't think that is the problem. What's going on? Shouldn't .NET be portable? How do I resolve these exceptions?

  • Locking a file to verify a single execution of a service. How reliable?

    - by Camilo Díaz
    Hello, I am deploying a little service to a UNIX (AIX) system. When starting it, I want to check that no other instance of the service is already running. How reliable is it to implement that check like this?

        1. Try to acquire a lock on a file (with FileChannel)
        2. If it succeeds, keep the lock and continue execution
        3. If it fails, exit and refuse to run the main body

    I am aware of software like the Tanuki wrapper; however, I'm looking for a simpler (maybe not portable) solution. Regarding PID files: I want to avoid them if possible, as I have neither administrative rights on the machine nor any knowledge of AIX shell programming.
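
    For comparison, here is a sketch of the same pattern at the C/C++ level using a POSIX fcntl() record lock (the lock-file path is purely illustrative). The property that makes the approach reliable is that the kernel drops the lock automatically when the process dies, so even a crash cannot leave a stale lock behind, unlike a PID file:

        #include <fcntl.h>
        #include <unistd.h>
        #include <cstdio>

        // Returns the lock fd on success, -1 if another instance holds the lock.
        int acquire_single_instance_lock(const char* path) {
            int fd = open(path, O_CREAT | O_RDWR, 0644);
            if (fd == -1)
                return -1;
            struct flock fl = {};
            fl.l_type   = F_WRLCK;   // exclusive lock
            fl.l_whence = SEEK_SET;
            fl.l_start  = 0;
            fl.l_len    = 0;         // 0 means "lock the whole file"
            if (fcntl(fd, F_SETLK, &fl) == -1) {
                close(fd);           // somebody else got there first
                return -1;
            }
            return fd;               // keep it open for the process lifetime
        }

        int main() {
            if (acquire_single_instance_lock("/tmp/myservice.lock") == -1) {
                std::fprintf(stderr, "another instance is already running\n");
                return 1;
            }
            // ... run the service's main body ...
            return 0;
        }

    On typical UNIX JDKs, FileChannel.tryLock() maps to this same fcntl lock, so the reliability characteristics carry over to the Java version.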

  • Empty or "flush" a file descriptor without read()?

    - by Teddy
    (Note: this is not a question about how to flush a write(). This is the other end of it, so to speak.) Is it possible to empty a file descriptor that has unread data in it without having to read() it? You might not be interested in the data, and reading it all would therefore waste space and cycles you might have better uses for. If it is not possible in POSIX, do any operating systems have non-portable ways to do this? UPDATE: Please note that I'm talking about file descriptors, not streams.
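
    One narrow data point worth noting: for terminal descriptors specifically, POSIX termios already provides a discard primitive, so no read() is needed there. A sketch (valid only for ttys; for pipes, sockets and regular files there is no equivalent primitive, and a read-and-discard loop remains the usual fallback):

        #include <termios.h>

        // Throw away any received-but-unread input on a terminal descriptor.
        bool drain_tty_input(int fd) {
            return tcflush(fd, TCIFLUSH) == 0;
        }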

  • Dynamic WSDL Location in .NET

    - by wadetandy
    I am building a C# application that consumes a WSDL hosted by a server on our network. When I use the "Add Web Reference" functionality of Visual Studio, it works just fine, saving the IP address of the machine and so on, and the SOAP calls work without any issue. We are now making this entire application portable so that it can be installed in any environment. We would like to place all of our settings in one configuration file, so my question is this: is it possible to somehow specify the IP address of the machine hosting the SOAP service in my configuration file and link everything dynamically at runtime?

  • When to choose C over C++?

    - by aaa
    Hi. I have become fond of C++ thanks to this website. Before, I programmed exclusively in C/Fortran, thinking that C++ was too slow (not anymore). Is there a reason to write a new project purely in C, besides obvious things like low-level kernel/system components? What about intermediate things, like communication libraries, for example MPI? Is C still more portable than C++? I have messed with pretty exotic systems, like Cray, but I have yet to see a non-embedded system without C++. Thanks

  • Appropriate high level language to deal with binary data

    - by fortran
    Hi, I need to write a small tool that parses textual input and generates some binary-encoded data. I would prefer to stay away from C and the like, in favour of a higher-level, (optionally) safer, more expressive language that is faster to develop in. My language of choice for this kind of task is usually Python, but in this case dealing with raw binary data can be problematic if one isn't very careful about numbers being promoted to bignums, sign extension and such. Ideally I would like to have records with named bitfields that can be serialised portably and consistently. (I know there's a strong argument for doing it in a language I already master, even if it isn't optimal, but I think this could be a good opportunity to learn something new.) Thanks.

  • Using typedefs (or #defines) on built in types - any sensible reason?

    - by jb
    Well, I'm doing some Java/C integration, and throughout the C library weird type mappings are used (there are more of them):

        #define CHAR  char             /* 8 bit signed int */
        #define SHORT short            /* 16 bit signed int */
        #define INT   int              /* "natural" length signed int */
        #define LONG  long             /* 32 bit signed int */

        typedef unsigned char  BYTE;   /* 8 bit unsigned int */
        typedef unsigned char  UCHAR;  /* 8 bit unsigned int */
        typedef unsigned short USHORT; /* 16 bit unsigned int */
        typedef unsigned int   UINT;   /* "natural" length unsigned int */

    Is there any legitimate reason to use them? It's not like char is going to be redefined anytime soon. I can think of:

        - Writing platform/compiler-portable code (the size of each type is underspecified in C/C++)
        - Saving space and time on embedded systems: if you loop over an array shorter than 255 elements on an 8-bit microprocessor, writing

              for (uint8_t ii = 0; ii < len; ii++)

          will give a measurable speedup.
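
    For what it's worth, the portability motivation is exactly what the standard fixed-width typedefs (C99 <stdint.h>, C++11 <cstdint>) were introduced for, so on a modern toolchain homegrown mappings like the above buy little:

        #include <cstdint>

        std::int8_t       i8;       // exactly 8 bits, signed
        std::uint16_t     u16;      // exactly 16 bits, unsigned
        std::uint32_t     u32;      // exactly 32 bits, unsigned
        std::uint_fast8_t counter;  // at least 8 bits, fastest width available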

  • Lazy evaluation with ostream C++ operators

    - by SavinG
    I am looking for a portable way to implement lazy evaluation in C++ for a logging class. Let's say that I have a simple logging function like

        void syslog(int priority, const char *format, ...);

    Inside syslog() we can do

        if (priority < current_priority)
            return;

    so we never actually call the formatting function (sprintf). On the other hand, if we use a logging stream like

        log << LOG_NOTICE << "test " << 123;

    all the formatting is always executed, which may take a lot of time. Is there any possibility to actually use all the goodies of ostream (custom << operators for classes, type safety, elegant syntax...) in such a way that the formatting is executed AFTER the logging level is checked?
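
    One well-known way to get this behaviour (a sketch of one approach, not the only one): hide the level check in a macro so the whole << chain sits in the untaken branch of an if; when the message is filtered out, its operands are never even evaluated:

        #include <iostream>
        #include <sstream>

        int current_priority = 4;   // messages below this level are dropped

        class LogLine {
        public:
            explicit LogLine(std::ostream& out) : out_(out) {}
            ~LogLine() { out_ << buf_.str() << '\n'; }  // emit at end of statement
            template <typename T>
            LogLine& operator<<(const T& v) { buf_ << v; return *this; }
        private:
            std::ostream& out_;
            std::ostringstream buf_;
        };

        // When the check fails, the empty then-branch runs and the entire
        // << chain -- including any expensive formatting -- is skipped.
        #define LOG(priority) \
            if ((priority) < current_priority) ; else LogLine(std::cerr)

        int main() {
            LOG(5) << "formatted, since priority 5 passes the check: " << 123;
            LOG(1) << "never formatted; operands are not evaluated at all";
        }

    Type safety and custom operator<< overloads are retained because the buffering still goes through a real ostringstream; the usual caveat is the dangling else, which means the macro needs care inside unbraced if statements.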

  • AIO network sockets and zero-copy under Linux

    - by remyhorton
    I have been experimenting with async Linux network sockets (aio_read et al. in aio.h/librt), and one thing I have been trying to find out is whether these are zero-copy or not. Pretty much everything I have read so far discusses file I/O, whereas it is network I/O I am interested in. AIO is a bit of a pain to use and I suspect it is non-portable, so I am wondering whether it is worth persevering with. Zero-copy is just about the only advantage (albeit a major one for my purposes) it would have over (non-blocking) select/epoll.
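
    For reference, a minimal POSIX AIO read on a socket looks like the sketch below (link with -lrt). One commonly cited point is relevant here: glibc's librt implements aio_read with user-space helper threads that perform ordinary read() calls into aio_buf, so with that implementation the kernel still copies data into the user buffer -- i.e. it is not zero-copy:

        #include <aio.h>
        #include <cerrno>
        #include <cstring>
        #include <unistd.h>

        // Start an async read on a connected socket and (crudely) poll.
        // A real program would use aio_suspend() or completion notification.
        ssize_t async_read(int sockfd, char* buf, size_t len) {
            struct aiocb cb;
            std::memset(&cb, 0, sizeof cb);
            cb.aio_fildes = sockfd;
            cb.aio_buf    = buf;
            cb.aio_nbytes = len;
            if (aio_read(&cb) == -1)
                return -1;
            while (aio_error(&cb) == EINPROGRESS)
                usleep(1000);            // do useful work here instead
            return aio_return(&cb);      // bytes read, or -1
        }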

  • Getting meaningful error messages from fstream's in C++

    - by Hassan Syed
    What is the best way to get meaningful file access error messages, in a portable way, from std::fstream? The primitiveness of badbit and failbit is getting to be a bit annoying. I have written my own exception hierarchies against Win32 and POSIX before, and that was far more flexible than the way the STL does it. I am getting "basic::ios_clear" as the error message from the what() method of a downcast catch (std::exception) of an fstream which has exceptions enabled. This doesn't mean much to me. Although I do know what the problem is, I'd like my program to be a tad more informative, so that when I start deployment a few months from now my life will be easier. Is there anything in Boost for extracting meaningful messages out of an fstream, across platforms and STL implementations?
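
    A common workaround, sketched on the assumption that the implementation sets errno via the underlying open(2)/fopen(3) call (true in practice on mainstream platforms, though not guaranteed by the C++ standard): consult errno/strerror immediately after the stream operation fails, rather than relying on what():

        #include <cerrno>
        #include <cstring>
        #include <fstream>
        #include <iostream>

        int main() {
            std::ifstream in("/no/such/file");
            if (!in) {
                // badbit/failbit only say *that* it failed; errno usually
                // still says *why* (e.g. "No such file or directory").
                std::cerr << "open failed: " << std::strerror(errno) << '\n';
            }
        }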

  • Constrain a table to have only one row

    - by finnw
    What's the cleanest way to constrain a SQL table to allow it to have no more than one row? This related question discusses why such a table might exist, but not how the constraint should be implemented. So far I have only found hacks involving a unique key column that is constrained to have a specific value, e.g.

        ALWAYS_0 TINYINT NOT NULL PRIMARY KEY DEFAULT (0)
            CONSTRAINT CHECK_ALWAYS_0 CHECK (ALWAYS_0 = 0)

    I am guessing there is probably a cleaner way to do it. The ideal solution would be portable SQL, but a solution specific to MS SQL Server or Postgres would also be useful.

  • How can a running application in Linux/*nix determine its own absolute path?

    - by Dave Wade-Stein
    Suppose you run the application 'app' by typing 'app' rather than its absolute path. Due to your $PATH variable, what actually runs is /foo/bar/app. From inside app I'd like to determine /foo/bar/app. argv[0] is just 'app', so that doesn't help. I know that on Linux I can call pid = getpid(); and then look at the /proc/<pid>/exe symlink, but that doesn't work on other *nix systems. Is there a more portable way to determine the directory in which the app lives?
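
    For the Linux case mentioned, the /proc link can be resolved directly; a minimal sketch (Linux-only, so it covers the mechanism rather than the portability request):

        #include <unistd.h>
        #include <limits.h>
        #include <iostream>

        int main() {
            char buf[PATH_MAX];
            ssize_t n = readlink("/proc/self/exe", buf, sizeof buf - 1);
            if (n != -1) {
                buf[n] = '\0';   // readlink() does not NUL-terminate
                std::cout << buf << '\n';
            }
        }

    Other systems expose the same information differently (e.g. _NSGetExecutablePath() on Mac OS X, sysctl with KERN_PROC_PATHNAME on FreeBSD), which is why no single portable call exists.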

  • DNS Lookup in ASP.Net

    - by Dave Forgac
    I have a Windows server that is intermittently losing the ability to look up DNS information. I'm trying to get to the root cause of the problem, but in the meantime I'd like to be able to monitor whether the server can perform lookups. Basically, it should attempt to look up some common hostnames and then display 'Success' if the lookups succeed. I see lots of examples of doing this with third-party components in ASP.NET, but I would prefer to do it with a single ASP.NET script that is portable and does not require anything additional to be installed.

  • Embedded MongoDB when running integration tests

    - by seanhodges
    My question is a variation of this one. Since my Java web-app project requires a lot of read filters/queries and interfaces with tools like GridFS, I'm struggling to think of a sensible way to simulate MongoDB in the way the above solution suggests. Therefore, I'm considering running an embedded instance of MongoDB alongside my integration tests. I'd like it to start up automatically (either for each test or for the whole suite), flush the database for every test, and shut down at the end. These tests might be run on development machines as well as the CI server, so my solution will also need to be portable. Can anyone with more knowledge of MongoDB help me get an idea of the feasibility of this approach, and/or suggest any reading material that might help me get started? I'm also open to other suggestions on how I could approach this problem.

  • Modifying an image with OpenGL ?

    - by chmike
    I have a device for acquiring X-ray images. Due to some technical constraints, the detector is made of heterogeneous pixel sizes and multiple tilted, partially overlapping tiles, so the image is distorted. The detector geometry is known precisely. I need a function that converts these distorted images into a flat image with a homogeneous pixel size. I have already done this on the CPU, but I would like to give OpenGL a try so I can use the GPU in a portable way. I have no experience with OpenGL programming, and most of the information I could find on the web was useless for this use case. How should I proceed? Images are 560x860 pixels, and we have batches of 720 images to process. I'm on Ubuntu.

  • Tips on how to deploy C++ code to work every where

    - by User1
    I'm not talking about making portable code. This is more a question of distribution. I have a medium-sized project. It has several dependencies on common libraries (e.g. openssl, zlib, etc.). It compiles fine on my machine, and now it's time to give it to the world. Essentially, build engineering at its finest. I want to make installers for Windows, Linux, Mac OS X, etc. I want to make a downloadable tarball that will build with a ./configure and a make (probably via autoconf). It would be icing on the cake to have a make target that builds the installers, maybe even cross-compiling so that a Windows installer could be built on Linux. What is the best strategy? Where can I expect to spend the most time? Should the prime focus be autoconf, or are there other tools that can help?

  • Is select() Ok to implement single socket read/write timeout ?

    - by chmike
    I have an application that processes network communication with blocking calls. Each thread manages a single connection. I've added a timeout to the read and write operations by calling select prior to reading or writing on the socket. select is known to be inefficient when dealing with a large number of sockets. But is it OK, in terms of performance, to use it with a single socket, or are there more efficient methods of adding timeout support to calls on a single socket? The benefit of select is that it is portable.
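
    For what it's worth, the single-socket case is exactly where select's usual criticism (linear scans over large descriptor sets) does not apply. A sketch of the pattern described, assuming POSIX headers:

        #include <sys/select.h>
        #include <sys/time.h>

        // Wait until fd is readable or timeout_sec elapses.
        // Returns 1 if readable, 0 on timeout, -1 on error (check errno).
        int wait_readable(int fd, int timeout_sec) {
            fd_set readfds;
            FD_ZERO(&readfds);
            FD_SET(fd, &readfds);
            struct timeval tv;
            tv.tv_sec  = timeout_sec;
            tv.tv_usec = 0;
            return select(fd + 1, &readfds, nullptr, nullptr, &tv);
        }

    An alternative that avoids the extra system call per operation is the SO_RCVTIMEO/SO_SNDTIMEO socket options, though their exact semantics vary a little across platforms, which is the usual argument for staying with select.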

  • C++: Binding to a base class

    - by Helltone
    The following code works, but I'm not sure it is correct/portable.

        #include <iostream>
        #include <tr1/functional>

        class base {
        public:
            base(int v) : x(v) {}
        protected:
            int x;
        };

        class derived : public base {
        public:
            bool test() { return (x == 42); }
        };

        int main(int argc, char* argv[]) {
            base b(42);
            if (std::tr1::bind((bool (base::*)()) &derived::test, b)()) {
                std::cout << "ok\n";
            }
            return 0;
        }

  • Make C# source run as a script?

    - by acidzombie24
    I am doing a little scripting and I find some more power would be nice, like the ability to keep trying to delete a file with a 1-second delay, AND portability, since I spent some time today translating a bat script to bash. I know I can use PHP or Python, but I VERY MUCH PREFER static/compile-time checking. Is there a way to run C# code as a script? I am hoping I don't have to create a custom extension and write an app to dynamically compile and execute the script (I know I have source to compile .js somewhere...). Does anyone know of a solution?

  • Where to intercept resolution of controller/view in ASP.Net MVC for customizations?

    - by Jason Jackson
    I am trying to figure out where the appropriate place is to intercept the resolution of which view + controller is being called in ASP.NET MVC 2. I have a situation where I have a controller and a corresponding set of views. I also have the possibility of a customized version of the controller and N of the views sitting in the project (or we may use something like Portable Views from the MvcContrib project). If a customized version of the controller or view(s) exists at run time, and the user satisfies certain criteria, I need to call the customized controller and use the appropriate customized views. At design/compile time we don't know what customizations may be in place. My first attempt used a custom controller factory that returns a custom controller if it exists. However, this controller is "wired up" to the standard view, and I cannot figure out how to return the customized view if it also exists. To complicate matters, there may be no customized controller but customized views, and vice versa.

  • check whether fgets would block

    - by lv
    Hi, I was just wondering whether in C it is possible to peek at the input buffer, or perform similar trickery, to know whether a later call to fgets would block. Java allows something like this via BufferedReader.ready(), with which I can implement console input something like:

        while (on && in.ready()) {
            line = in.readLine();
            /* do something with line */
            if (!in.ready())
                Thread.sleep(100);
        }

    This allows an external thread to gracefully shut down the input loop by setting on to false. I'd like a similar implementation in C without resorting to non-portable tricks. I already know I can implement a "timed-out fgets" under UNIX by resorting to signals, or (better, though it requires taking care of the buffering) by reimplementing it on top of recv/select, but I'd prefer something that would work on Windows too. TIA
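
    A sketch of the closest C analogue to BufferedReader.ready() on POSIX systems, using select() with a zero timeout as a pure poll. Two caveats: data already sitting in stdio's own buffer is invisible to this fd-level check, and on Windows select() only works on sockets, so this does not cover the console case asked about:

        #include <cstdio>
        #include <sys/select.h>
        #include <sys/time.h>

        // Returns true if the descriptor behind `stream` has data right now.
        // Even then, fgets() may still block waiting for a complete line.
        bool input_ready(std::FILE* stream) {
            int fd = fileno(stream);
            fd_set readfds;
            FD_ZERO(&readfds);
            FD_SET(fd, &readfds);
            struct timeval tv = {0, 0};   // zero timeout: return immediately
            return select(fd + 1, &readfds, nullptr, nullptr, &tv) == 1;
        }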

  • QLocale, what is the scope of the global QLocale::setDefault()?

    - by ALoopingIcon
    Problem: I have a Qt-based multiplatform (Windows, Mac, *nix) application that parses ASCII files containing decimal numbers. Parsing is done by a variety of different pieces of code that use anything from Qt string functions to C++ stdin to old-style scanf. The ASCII files always use the '.' (dot) as the decimal separator (e.g. in the file to be parsed, 1/10 is written 0.1, as is standard in many countries). People using the application on an OS localized to use a comma as the decimal separator encounter a lot of problems (e.g. for French users scanf expects to find 0,1 as a valid textual representation of 1/10, and if it finds 0.1 it will parse it as 0). How can I be sure that the OS locale's indication of how the decimal point is written is always ignored? Is it safe to assume that adding

        QLocale::setDefault(QLocale(QLocale::English, QLocale::UnitedStates));

    is enough to get rid of all these problems? Any suggestions for portable ways of setting the locale globally?
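
    One point worth separating out (a sketch covering the common parsing paths, not a guarantee for every library): QLocale::setDefault() only affects Qt's own classes. scanf and friends follow the C runtime's locale, and iostreams follow the global C++ locale, so each typically needs to be pinned independently:

        #include <QLocale>
        #include <clocale>
        #include <locale>

        void pin_decimal_point_to_dot() {
            // 1. Qt parsing/formatting (QString::toDouble, QTextStream, ...).
            QLocale::setDefault(QLocale::c());

            // 2. The C runtime (scanf, strtod, printf) keeps its own locale
            //    state, untouched by QLocale -- force '.' there as well.
            std::setlocale(LC_NUMERIC, "C");

            // 3. C++ iostreams constructed after this call.
            std::locale::global(std::locale::classic());
        }

    Order can matter: on some platforms constructing QApplication itself calls setlocale(), so this is usually run after the application object is created.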

  • How to change the app name in OSX menubar in a pure-Python application bundle?

    - by gyim
    I am trying to create a pure-Python application bundle for a wxPython app. I created the .app directory with the files described in the Apple docs, with an Info.plist file etc. The only difference between a "normal" app and this bundle is that the entry point (CFBundleExecutable) is a script which starts with the following line:

        #!/usr/bin/env python2.5

    Everything works fine, except that the application name in the OS X menubar is still "Python", although I have set CFBundleName in Info.plist (I copied the result of py2app, actually). The full Info.plist can be viewed here: http://tinyurl.com/32qgpjt How can I change this? I have read everywhere that the menubar name is determined solely by CFBundleName. How is it possible that the Python interpreter can change this at runtime? Note: I was using py2app before, but the result was too large (50 MB instead of the current 100 KB) and it was not even portable between Leopard and Snow Leopard... so it seems much easier to create a pure-Python app bundle "by hand" than to transform the output of py2app.
