Search Results

Search found 1194 results on 48 pages for 'portable'.

  • How should I implement reverse AJAX in a Django application?

    - by Carson Myers
    How should I implement reverse AJAX when building a chat application in Django? I've looked at Django-Orbited, and from my understanding, this puts a comet server in front of the HTTP server. This seems fine if I'm just running the Django development server, but how does this work when I start running the application from mod_wsgi? How does having the Orbited server handle every request scale? Is this the correct approach? I've looked at another approach (long polling) that seems like it would work, although I'm not sure what would be involved. Would the client request a page that would live in its own thread, so as not to block the rest of the application? Would it even block? Wouldn't the script requested by the client have to continuously poll for information? Which approach is more correct? Which is more portable, scalable, sane, etc.? Are there other good approaches to this (aside from the client polling for messages) that I have overlooked?
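
    For context, a minimal long-polling view might look like the sketch below; get_messages and the timings are hypothetical placeholders, not from Django or any particular library:

        import json
        import time

        from django.http import HttpResponse

        def get_messages(since):
            # Placeholder: a real app would query new chat messages here.
            return []

        def poll_messages(request, last_seen):
            # Long poll: hold the request open for up to ~30 s; the client
            # re-issues the request as soon as a response arrives.
            deadline = time.time() + 30
            while time.time() < deadline:
                messages = get_messages(since=last_seen)
                if messages:
                    return HttpResponse(json.dumps(messages),
                                        content_type="application/json")
                time.sleep(1)
            return HttpResponse(json.dumps([]), content_type="application/json")

    The catch, and the reason comet servers such as Orbited exist, is that every pending poll ties up a whole worker for its lifetime, which is what limits how far this scales under mod_wsgi.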

  • rails + sheevaplug = rails home development server and more

    - by microspino
    Hello, I'd like to build a "Rails brick" using a SheevaPlug from Marvell (the OS is Ubuntu out of the box, but you can install other distributions on it). It will be a home server and a silent, low-cost ($99), low-energy development machine. I'd like to add Rails, RVM, lots of gems, git-based Heroku-like deployment, and Passenger + nginx. This way I could have a portable server with a complete development environment, and maybe I could find a hosting company willing to co-locate a grid of these devices, or I could sell it as a simple little server for offices of 10 or fewer users, with some centralized Rails services (I'm thinking of a CMS, a blog, a wiki, a calendar, or whatever else this little jewel could handle). The USB port could also make it a print server, or a UMTS link to the web via Huawei-style USB UMTS keys. Can you give me some hints: Is this project a crazy, close-to-failure idea? Why? Which gems would you include? Which open source Rails apps would you suggest? I already have an Excito Bubba server at home, and I saw the TonidoPlug, so the idea came up to build something similar but Rails-based (Bubba is PHP-based; TonidoPlug I don't know, but it doesn't seem to be a Rails thing).

  • Best approach to storing image pixels in bottom-up order in Java

    - by finnw
    I have an array of bytes representing an image in Windows BMP format, and I would like my library to present it to the Java application as a BufferedImage, without copying the pixel data. The main problem is that all implementations of Raster in the JDK store image pixels in top-down, left-to-right order, whereas BMP pixel data is stored bottom-up, left-to-right. If this is not compensated for, the resulting image will be flipped vertically.

    The most obvious "solution" is to set the SampleModel's scanlineStride property to a negative value and change the band offsets (or the DataBuffer's array offset) to point to the top-left pixel, i.e. the first pixel of the last line in the array. Unfortunately this does not work, because all of the SampleModel constructors throw an exception if given a negative scanlineStride argument. I am currently working around it by forcing the scanlineStride field to a negative value using reflection, but I would like to do it in a cleaner and more portable way if possible. E.g. is there another way to fool the Raster or SampleModel into arranging the pixels in bottom-up order, without breaking encapsulation? Or is there a library somewhere that will wrap the Raster and SampleModel, presenting the pixel rows in reverse order?

    I would prefer to avoid the following approaches:

    - Copying the whole image (for performance reasons: the code must process hundreds of large (>= 1 Mpixel) images per second, and although the whole image must be available to the application, it will normally access only a tiny but hard-to-predict portion of it).
    - Modifying the DataBuffer to perform coordinate transformation (this actually works, but is another "dirty" solution, because the buffer should not need to know about the scanline/pixel layout).
    - Re-implementing the Raster and/or SampleModel interfaces from scratch (though I have a hunch I will be unable to avoid this).
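
    A hedged sketch of the wrapper idea, as a SampleModel subclass that flips the y coordinate before delegating (untested; only two accessors are overridden here, and a real wrapper would have to override every get/set variant, getDataElements included):

        import java.awt.image.DataBuffer;
        import java.awt.image.PixelInterleavedSampleModel;

        // Presents bottom-up BMP rows as top-down by flipping y on the way in.
        class FlippedSampleModel extends PixelInterleavedSampleModel {
            FlippedSampleModel(int dataType, int w, int h, int pixelStride,
                               int scanlineStride, int[] bandOffsets) {
                super(dataType, w, h, pixelStride, scanlineStride, bandOffsets);
            }

            @Override
            public int getSample(int x, int y, int b, DataBuffer data) {
                return super.getSample(x, height - 1 - y, b, data);
            }

            @Override
            public void setSample(int x, int y, int b, int s, DataBuffer data) {
                super.setSample(x, height - 1 - y, b, s, data);
            }
        }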

  • Where to find algorithms for standard math functions?

    - by dsimcha
    I'm looking to submit a patch to the D programming language standard library that will allow much of std.math to be evaluated at compile time, using the compile-time function evaluation (CTFE) facilities of the language. CTFE has several limitations, the most important ones being:

    - You can't use assembly language.
    - You can't call C code, or code for which the source is otherwise unavailable.

    Several std.math functions violate these, and compile-time versions need to be written. Where can I get information on good algorithms for computing things such as logarithms, exponents, powers, and trig functions? I prefer high-level descriptions of algorithms to actual code, for two reasons:

    - To avoid legal ambiguity and the need to make my code look "different enough" from the source to make sure I own the copyright.
    - I want simple, portable algorithms. I don't care about micro-optimization as long as they're at least asymptotically efficient.

    Edit: D's compile-time function evaluation model allows floating point results computed at compile time to differ from those computed at runtime anyhow, so I don't care if my compile-time algorithms don't give exactly the same result as the runtime version, as long as they aren't less accurate to a practically significant extent.
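
    As an aside, the flavor of algorithm being asked about can be sketched in CTFE-compatible D: exp via range reduction plus a Taylor series. The constant, the term count, and the scaling loop are illustrative choices, not code from std.math:

        // Sketch: e^x = 2^k * e^r with r reduced into [-ln2/2, ln2/2],
        // using only constructs CTFE can evaluate (no asm, no C calls).
        real ctfeExp(real x)
        {
            enum real LN2 = 0x1.62e42fefa39efp-1;
            int k = cast(int)(x / LN2 + (x >= 0 ? 0.5 : -0.5));
            real r = x - k * LN2;
            real term = 1.0, sum = 1.0;
            foreach (i; 1 .. 20)  // Taylor series for e^r
            {
                term *= r / i;
                sum += term;
            }
            real scale = 1.0;     // 2^k by repeated doubling
            foreach (i; 0 .. (k < 0 ? -k : k))
                scale *= 2.0;
            return k < 0 ? sum / scale : sum * scale;
        }

        // evaluated entirely at compile time
        static assert(ctfeExp(1.0L) > 2.7182 && ctfeExp(1.0L) < 2.7183);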

  • Perl - How to get the number of elements in an anonymous array, for concisely trimming pathnames

    - by NXT
    Hi Everyone, I'm trying to get a block of code down to one line. I need a way to get the number of items in a list. My code currently looks like this:

        # Include the lib directory several levels up from this directory
        my @ary = split('/', $Bin);
        @ary = @ary[0 .. $#ary-4];
        my $res = join '/', @ary;
        lib->import($res.'/lib');

    That's great, but I'd like to make it one line, something like this:

        lib->import( join('/', ((split('/', $Bin)) [0 .. $#ary-4])) );

    But of course the syntax $#ary is meaningless in the above line. Is there an equivalent way to get the number of elements in an anonymous list? Thanks!

    PS: The reason for consolidating this is that it will be in the header of a bunch of Perl scripts that are ancillary to the main application, and I want this little incantation to be more cut-and-paste proof.

    Thanks everyone. There doesn't seem to be a shorthand for the number of elements in an anonymous list; that seems like an oversight. However, the suggested alternatives were all good. I'm going with:

        lib->import(join('/', splice( @{[split('/', $Bin)]}, 0, -4)).'/lib');

    But Ether suggested the following, which is much more correct and portable:

        my $lib = File::Spec->catfile(
            realpath(File::Spec->catfile($FindBin::Bin, ('..') x 4)), 'lib');
        lib->import($lib);

  • Glassfish: Defining Custom JNDI Names for Session Beans

    - by Adeel Ansari
    Background: I want to use GF3 in development, whereas the actual SIT, UAT, and production environments use WAS.

    Problem: With remote session beans everything is good to go, as GF3 gives a non-standard JNDI name which is the same as what WAS suggests, i.e. the absolute class name. For local session beans, WAS uses the same absolute class name but with a prefix, i.e. ejblocal:. GF3, however, doesn't give any non-standard JNDI name to local session beans; it only comes up with the portable name, java:global/... I need to find a way to use the same names on both.

    I am using EJB 3.0, WAS 7.9, and Glassfish 3. I don't have any XML configuration for the EJBs, and I'm using Spring to inject the beans into Struts2 actions. With remote interfaces both servers are okay and agree on a single convention, but for locals they differ. Is there any solution for this? Or will just a sun-ejb-jar.xml solve it? Thanks.
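
    For what it's worth, a sketch of the sun-ejb-jar.xml direction (the bean and interface names are made up, and whether GF3 will honor an ejblocal:-prefixed jndi-name for a local business interface is exactly the thing to verify):

        <sun-ejb-jar>
            <enterprise-beans>
                <ejb>
                    <ejb-name>AccountServiceBean</ejb-name>
                    <jndi-name>ejblocal:com.example.AccountServiceLocal</jndi-name>
                </ejb>
            </enterprise-beans>
        </sun-ejb-jar>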

  • exception message getting lost in IIOP between glassfish domains

    - by Michael Borgwardt
    I'm running two GlassFish v2 domains containing stateless session EJBs. In a few cases, an EJB in one domain has to call one in the other. My problem is that when the called EJB aborts with an exception, the caller does not receive the message of the exception, and instead reports an internal error that is not helpful at all in diagnosing the problem.

    What happens seems to be this: at the transport layer, an org.omg.CORBA.portable.ApplicationException is created, which already loses all detail information about the exception except its class. Inside com.sun.jts.CosTransactions.TopCoordinator.get_txcontext(), the status of the transaction as rolled back causes an org.omg.CosTransactions.Unavailable to be thrown, which gets wrapped and passed around a few times and eventually results in this error being displayed to the user:

        org.omg.CORBA.INVALID_TRANSACTION: vmcid: 0x0 minor code: 0 completed: No
            at com.sun.jts.CosTransactions.CurrentTransaction.sendingRequest(CurrentTransaction.java:807)
            at com.sun.jts.CosTransactions.SenderReceiver.sending_request(SenderReceiver.java:139)
            at com.sun.jts.pi.InterceptorImpl.send_request(InterceptorImpl.java:344)
            at com.sun.corba.ee.impl.interceptors.InterceptorInvoker.invokeClientInterceptorStartingPoint(InterceptorInvoker.java:271)
            at com.sun.corba.ee.impl.interceptors.PIHandlerImpl.invokeClientPIStartingPoint(PIHandlerImpl.java:348)
            at com.sun.corba.ee.impl.protocol.CorbaClientRequestDispatcherImpl.beginRequest(CorbaClientRequestDispatcherImpl.java:284)
            at com.sun.corba.ee.impl.protocol.CorbaClientDelegateImpl.request(CorbaClientDelegateImpl.java:184)
            at com.sun.corba.ee.impl.presentation.rmi.StubInvocationHandlerImpl.privateInvoke(StubInvocationHandlerImpl.java:186)
            at com.sun.corba.ee.impl.presentation.rmi.StubInvocationHandlerImpl.invoke(StubInvocationHandlerImpl.java:152)
            at com.sun.corba.ee.impl.presentation.rmi.bcel.BCELStubBase.invoke(BCELStubBase.java:225)

    Is there anything I can do here to preserve information about the actual cause of the problem?
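
    One avenue worth checking, sketched under the assumption that the rollback is what destroys the context (javax.ejb.ApplicationException is real; the exception class here is made up): mark the business exception as an application exception with rollback = false, so the container marshals it back instead of mapping it to a CORBA system exception after the transaction is marked rollback-only.

        import javax.ejb.ApplicationException;

        // Application exceptions are marshalled back to the caller as-is,
        // rather than being mapped to a CORBA system exception.
        @ApplicationException(rollback = false)
        public class ProvisioningFailedException extends Exception {
            public ProvisioningFailedException(String message) {
                super(message);
            }
        }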

  • Get directory path by fd

    - by tylerl
    I've run into the need to be able to refer to a directory by path, given its file descriptor in Linux. The path doesn't have to be canonical, it just has to be functional so that I can pass it to other functions. So, taking the same parameters as passed to a function like fstatat(), I need to be able to call a function like getxattr(), which doesn't have an f-XYZ-at() variant. So far I've come up with these solutions, though none are particularly elegant.

    The simplest solution is to avoid the problem by calling openat() and then using a function like fgetxattr(). This works, but not in every situation, so another method is needed to fill the gaps. The next solution involves looking up the information in proc:

        if (!access("/proc/self/fd", X_OK)) {
            sprintf(path, "/proc/self/fd/%i/", fd);
        }

    This, of course, totally breaks on systems without proc, including some chroot environments. The last option, a more portable but potentially race-condition-prone solution, looks like this:

        DIR* save = opendir(".");
        fchdir(fd);
        getcwd(path, PATH_MAX);
        fchdir(dirfd(save));
        closedir(save);

    The obvious problem here is that in a multithreaded app, changing the working directory around could have side effects. However, the fact that it works is compelling: if I can get the path of a directory by calling fchdir() followed by getcwd(), why shouldn't I be able to just get the information directly: fgetcwd() or something. Clearly the kernel is tracking the necessary information. So how do I get to it?

  • What is the right approach to checksumming UDP packets

    - by mr.b
    I'm building a UDP server application in C#, and I've come across a packet checksum problem. As you probably know, each packet should carry some simple way of telling the receiver whether the packet data is intact. Now, UDP already has a 2-byte checksum as part of its header, which is optional, at least in the IPv4 world. The alternative method is to have a custom checksum as part of the data section of each packet, and to verify it on the receiver. My question boils down to: is it better to rely on the (optional) checksum in the UDP packet header, or to make a custom checksum implementation part of the packet's data section? Perhaps the right answer depends on circumstances (as usual), so one circumstance here is that, even though the code is written and developed in .NET on Windows, it might have to run under the platform-independent Mono, so the eventual solution should be compatible with other platforms. I believe that a custom checksum algorithm would be easily portable, but I'm not so sure about the first one. Any thoughts? Also, shouts about packet checksumming in general are welcome.
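
    If the custom route is chosen, a minimal sketch might look like this (Fletcher-16 picked purely for illustration, any CRC would do; the class and method names are made up):

        using System;

        static class PacketChecksum
        {
            // Appends a Fletcher-16 checksum to the payload; the receiver
            // recomputes it over the first (length - 2) bytes and compares.
            public static byte[] WithChecksum(byte[] payload)
            {
                ushort sum1 = 0, sum2 = 0;
                foreach (byte b in payload)
                {
                    sum1 = (ushort)((sum1 + b) % 255);
                    sum2 = (ushort)((sum2 + sum1) % 255);
                }
                byte[] packet = new byte[payload.Length + 2];
                Buffer.BlockCopy(payload, 0, packet, 0, payload.Length);
                packet[payload.Length] = (byte)sum1;
                packet[payload.Length + 1] = (byte)sum2;
                return packet;
            }
        }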

  • Problems with OpenGL on Eclipse

    - by lego69
    I'm working on Windows XP. I have a portable version of Eclipse Galileo, but it didn't include GLUT, so I decided to add it using this link. I followed all the steps, and now I'm trying to compile this code:

        #include "GL/glut.h"
        #include "GL/gl.h"
        #include "GL/glu.h"

        ///////////////////////////////////////////////////////////
        // Called to draw scene
        void RenderScene(void)
        {
            // Clear the window with current clearing color
            glClear(GL_COLOR_BUFFER_BIT);
            // Flush drawing commands
            glFlush();
        }

        ///////////////////////////////////////////////////////////
        // Setup the rendering state
        void SetupRC(void)
        {
            glClearColor(0.0f, 0.0f, 1.0f, 1.0f);
        }

        ///////////////////////////////////////////////////////////
        // Main program entry point
        void main(int argc, char* argv[])
        {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
            glutInitWindowSize(800,600);
            glutCreateWindow("Simple");
            glutDisplayFunc(RenderScene);
            SetupRC();
            glutMainLoop();
        }

    and I get these errors:

        Simple.o: In function `RenderScene':
        C:/Documents and Settings/Administrator/Desktop/workspace/open/Debug/../Simple.c:16: undefined reference to `_imp__glClear'
        C:/Documents and Settings/Administrator/Desktop/workspace/open/Debug/../Simple.c:20: undefined reference to `_imp__glFlush'
        Simple.o: In function `SetupRC':
        C:/Documents and Settings/Administrator/Desktop/workspace/open/Debug/../Simple.c:27: undefined reference to `_imp__glClearColor'
        Simple.o: In function `main':
        C:/Documents and Settings/Administrator/Desktop/workspace/open/Debug/../Simple.c:34: undefined reference to `glutInit'
        C:/Documents and Settings/Administrator/Desktop/workspace/open/Debug/../Simple.c:35: undefined reference to `glutInitDisplayMode'
        C:/Documents and Settings/Administrator/Desktop/workspace/open/Debug/../Simple.c:36: undefined reference to `glutInitWindowSize'
        C:/Documents and Settings/Administrator/Desktop/workspace/open/Debug/../Simple.c:37: undefined reference to `glutCreateWindow'
        C:/Documents and Settings/Administrator/Desktop/workspace/open/Debug/../Simple.c:38: undefined reference to `glutDisplayFunc'
        C:/Documents and Settings/Administrator/Desktop/workspace/open/Debug/../Simple.c:42: undefined reference to `glutMainLoop'
        collect2: ld returned 1 exit status

    Please, can somebody help me? Thanks in advance.
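
    Undefined references like these come from the link step rather than from the headers: the GL and GLUT import libraries aren't being passed to the linker. A likely fix, assuming the classic win32 GLUT package (library names differ for e.g. freeglut), is along the lines of:

        g++ Simple.c -o Simple.exe -lglut32 -lopengl32 -lglu32

    In Eclipse this means adding glut32, opengl32 and glu32 to the libraries list in the project's linker settings.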

  • What is the general feeling about reflection extensions in std::type_info?

    - by Evan Teran
    I've noticed that reflection is one feature that developers coming from other languages find very lacking in C++. For certain applications I can really see why: it is so much easier to write things like an IDE's auto-complete if you have reflection, and serialization APIs would certainly be a world easier if we had it. On the other hand, one of the main tenets of C++ is that you don't pay for what you don't use, which makes complete sense. That's something I love about C++.

    But it occurred to me there could be a compromise: why don't compilers add extensions to the std::type_info structure? There would be no runtime overhead. The binary could end up being larger, but this could be a simple compiler switch to enable/disable, and to be honest, if you are really concerned about the space savings, you'll likely disable exceptions and RTTI anyway. Some people cite issues with templates, but the compiler happily generates std::type_info structures for template types already.

    I can imagine a g++ switch like -fenable-typeinfo-reflection which could become very popular (and mainstream libs like Boost/Qt/etc. could easily check for it and generate code which uses it if present, in which case the end user would benefit at no more cost than flipping a switch). I don't find this unreasonable, since large portable libraries like these already depend on compiler extensions. So why isn't this more common? I imagine I'm missing something: what are the technical issues with this?

  • event.clientX is readonly?

    - by Duracell
    Working in IE 8, mostly, but trying to write a portable solution for modern browsers, using Telerik controls. I'm catching the 'Showing' client-side event of the RadContextMenu and trying to adjust its coordinates. The clientX, clientY and x, y members of the DOM event cannot be assigned a new value; Visual Studio breaks with an "htmlfile: Member not found" error.

    My goal is to get a RadContextMenu to show inside a RadEditor when the user clicks in it (under certain conditions; this is a requirement from management). So I capture the onclick event for the RadEditor's content area (radEditor.get_document().body;). I then call show(evt) on the context menu, where 'evt' is the event object corresponding to the click event. Because the RadEditor's content is in an IFRAME, you have to adjust the position of the click event before the context menu displays. This is done in the "Showing" event. However, I cannot assign a value to the member .clientX and friends. It's as if JavaScript has temporarily forgotten about the integer + and += operators. Is it possible that these members have become readonly/const at some point?

        var evt = args.get_domEvent();
        while (node) {
            evt.clientX += node.offsetLeft; // 'Member not found' here.
            evt.clientY += node.offsetTop;
            node = node.offsetParent;
        }
        evt.clientY += sender.get_element().clientHeight;
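
    A hedged workaround sketch: accumulate the offsets in ordinary local variables instead of mutating the (apparently read-only) event members, then position the menu explicitly. This assumes RadContextMenu's client-side showAt(x, y) method, which should be verified against your Telerik version; editorFrame is a made-up name for the RadEditor's IFRAME element:

        var evt = args.get_domEvent();
        var x = evt.clientX, y = evt.clientY;
        var node = editorFrame; // hypothetical reference to the editor's IFRAME
        while (node) {
            x += node.offsetLeft;
            y += node.offsetTop;
            node = node.offsetParent;
        }
        sender.showAt(x, y); // position the menu ourselves instead of patching evt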

  • WPD on XP, Vista, and 7 (need to transfer photo and video files)

    - by Bradley Dean
    I need to transfer files (still photos and videos) from any portable device that a user may connect (still camera, video camera, mobile phone, etc.). I don't need to worry about plain storage devices, as these have drive letters, and I only care about existing files; I don't care about live video, preview video, taking new pictures, etc. I originally tried WIA, which works great except that it cannot transfer video files. So then I tried WPD, following along with dimeby8's tutorial: http://blogs.msdn.com/b/dimeby8/archive/2006/09/27/774259.aspx I haven't gotten the transfer working yet (I'm converting it over to C#), but I can at least see the device and enumerate the files in Win7. In XP I get nothing. It's pointed out in this thread that WPD won't enumerate devices on XP (see Lisa O [MSFT]'s post): http://social.msdn.microsoft.com/Forums/en/windowssdk/thread/56459945-b757-45df-8c9f-4ebdbbb18a2c So WIA is out because it won't do video, and WPD is out because it won't do XP. Has anyone gotten this to work? Am I missing something simple here? Thanks.

  • Can I ignore a SIGFPE resulting from division by zero?

    - by Mikeage
    I have a program which deliberately performs a divide by zero (and stores the result in a volatile variable) in order to halt in certain circumstances. However, I'd like to be able to disable this halting, without changing the macro that performs the division by zero. Is there any way to ignore it? I've tried using:

        #include <signal.h>
        ...
        int main(void)
        {
            signal(SIGFPE, SIG_IGN);
            ...
        }

    but it still dies with the message "Floating point exception (core dumped)". I don't actually use the value, so I don't really care what's assigned to the variable: 0, random, undefined...

    EDIT: I know this is not the most portable, but it's intended for an embedded device which runs on many different OSes. The default halt action is to divide by zero; other platforms require different tricks to force a watchdog-induced reboot (such as an infinite loop with interrupts disabled). For a PC (Linux) test environment, I wanted to disable the halt on division by zero without relying on things like assert.
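
    For what it's worth, SIG_IGN can't work here: when the kernel resumes an ignored SIGFPE, the faulting divide simply re-executes. A common POSIX workaround is to jump past the division from a handler; a sketch (resuming after SIGFPE is formally undefined, but this idiom is widely used on Linux):

        #include <setjmp.h>
        #include <signal.h>
        #include <stdio.h>

        static sigjmp_buf fpe_env;

        static void fpe_handler(int sig)
        {
            /* Returning normally would re-execute the faulting divide,
               so jump past it instead. */
            siglongjmp(fpe_env, 1);
        }

        int main(void)
        {
            volatile int zero = 0, result = 0;
            signal(SIGFPE, fpe_handler);
            if (sigsetjmp(fpe_env, 1) == 0)
                result = 1 / zero;  /* the deliberate divide by zero */
            printf("still alive, result = %d\n", result);
            return 0;
        }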

  • Python OOP and lists

    - by Mikk
    Hi, I'm new to Python and its OOP features and can't get this to work. Here's my code:

        class Tree:
            root = None
            data = []

            def __init__(self, equation):
                self.root = equation

            def appendLeft(self, data):
                self.data.insert(0, data)

            def appendRight(self, data):
                self.data.append(data)

            def calculateLeft(self):
                result = []
                for item in self.getLeft():
                    if type(item) == type(self):
                        data = item.calculateLeft()
                    else:
                        data = item
                    result.append(item)
                return result

            def getLeft(self):
                return self.data

            def getRight(self):
                data = self.data
                data.reverse()
                return data

        tree2 = Tree("*")
        tree2.appendRight(44)
        tree2.appendLeft(20)

        tree = Tree("+")
        tree.appendRight(4)
        tree.appendLeft(10)
        tree.appendLeft(tree2)

        print(tree.calculateLeft())

    It looks like tree2 and tree are sharing the list "data"? At the moment I'd like it to output something like [[20, 44], 10, 4], but when I tree.appendLeft(tree2) I get "RuntimeError: maximum recursion depth exceeded", and when I don't even appendLeft(tree2) it outputs [10, 20, 44, 4] (!!!). What am I missing here? I'm using Portable Python 3.0.1. Thank you.
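
    For what it's worth, the symptoms match the classic shared-class-attribute pitfall: data = [] at class level creates one list shared by every Tree, so tree.appendLeft(tree2) puts tree2 inside the very list it later recurses into, hence the infinite recursion. A minimal sketch of the usual fix:

        class Tree:
            def __init__(self, equation):
                self.root = equation
                self.data = []  # created per instance, not shared between Trees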

  • How can I get bitfields to arrange my bits in the right order?

    - by Jim Hunziker
    To begin with, the application in question is always going to be on the same processor, and the compiler is always gcc, so I'm not concerned about bitfields not being portable. gcc lays out bitfields such that the first listed field corresponds to the least significant bit of a byte. So for the following structure, with a=0, b=1, c=1, d=1, you get a byte of value e0:

        struct Bits {
            unsigned int a:5;
            unsigned int b:1;
            unsigned int c:1;
            unsigned int d:1;
        } __attribute__((__packed__));

    (Actually, this is C++, so I'm talking about g++.) Now let's say I'd like a to be a six-bit integer. Now, I can see why this won't work, but I coded the following structure:

        struct Bits2 {
            unsigned int a:6;
            unsigned int b:1;
            unsigned int c:1;
            unsigned int d:1;
        } __attribute__((__packed__));

    Setting b, c, and d to 1, and a to 0, results in the following two bytes:

        c0 01

    This isn't what I wanted. I was hoping to see this:

        e0 00

    Is there any way to specify a structure that has three bits in the most significant bits of the first byte and six bits spanning the five least significant bits of the first byte and the most significant bit of the second? Please be aware that I have no control over where these bits are supposed to be laid out: it's a layout of bits that is defined by someone else's interface.
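
    A hedged sketch of the usual escape hatch: drop bitfields for this layout and pack the bits with explicit shifts, so the wire format no longer depends on the compiler's allocation rules (the function and its constants are illustrative):

        #include <cstdint>
        #include <cstdio>

        // Pack d,c,b into the top three bits of byte 0, and the six bits of a
        // across byte 0's low five bits and byte 1's most significant bit.
        static void pack_bits(unsigned a, unsigned b, unsigned c, unsigned d,
                              uint8_t out[2])
        {
            uint16_t v = 0;
            v |= static_cast<uint16_t>((d & 1u)    << 15);
            v |= static_cast<uint16_t>((c & 1u)    << 14);
            v |= static_cast<uint16_t>((b & 1u)    << 13);
            v |= static_cast<uint16_t>((a & 0x3Fu) << 7);
            out[0] = static_cast<uint8_t>(v >> 8);   // first byte on the wire
            out[1] = static_cast<uint8_t>(v & 0xFF); // second byte
        }

        int main()
        {
            uint8_t bytes[2];
            pack_bits(0, 1, 1, 1, bytes);
            std::printf("%02x %02x\n", bytes[0], bytes[1]); // prints: e0 00
        }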

  • How do I process the configure file when cross-compiling with mingw?

    - by vy32
    I have a small open source program that builds with an autoconf configure script. I ran configure, then tried to compile with:

        make CC="/opt/local/bin/i386-mingw32-g++"

    That didn't work, because the configure script had found include files that were not available to the mingw system. So then I tried:

        ./configure CC="/opt/local/bin/i386-mingw32-g++"

    But that didn't work either; the configure script gives me this error:

        ./configure: line 5209: syntax error near unexpected token `newline'
        ./configure: line 5209: ` *_cv_*'

    because of this code:

        # The following way of writing the cache mishandles newlines in values,
        # but we know of no workaround that is simple, portable, and efficient.
        # So, we kill variables containing newlines.
        # Ultrix sh set writes to stderr and can't be redirected directly,
        # and sets the high bit in the cache file unless we assign to the vars.
        (
          for ac_var in `(set) 2>&1 | sed -n 's/^\([a-zA-Z_][a-zA-Z0-9_]*\)=.*/\1/p'`; do
            eval ac_val=\$$ac_var
            case $ac_val in #(
            *${as_nl}*)
              case $ac_var in #(
              *_cv_*
        fi

    which is generated when AC_OUTPUT is called. Any thoughts? Is there a correct way to do this?
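
    For reference, the conventional way to cross-compile an autoconf project is to declare the toolchain via --host instead of overriding CC, and to point CC at the C compiler (configure feeds it C test programs, so handing it a C++ driver is itself a common source of odd failures). A sketch, with the triplet depending on how the MinGW toolchain was installed:

        ./configure --host=i386-mingw32 \
            CC=/opt/local/bin/i386-mingw32-gcc \
            CXX=/opt/local/bin/i386-mingw32-g++
        make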

  • Cross-platform and language (de)serialization

    - by fwgx
    I'm looking for a way to serialize a bunch of C++ structs in the most convenient way, so that the serialization is portable across C++ and Java (at a minimum) and across 32-bit/64-bit, big-/little-endian platforms. The structures to be serialized just contain data, i.e. they're pure data objects with no state or behavior. The idea is that we serialize the structs into an octet blob that we can store in a database "generically" and read out later on. This avoids changing the database whenever a struct changes, and also avoids assigning each data member to a field; i.e. we only want one table to hold everything "generically" as a binary blob. This should mean less work for developers and fewer changes when structures change. I've looked at Boost.Serialization, but I don't think there's a way to make it compatible with Java, and likewise for implementing Serializable in Java. If there is a way to do it starting from an IDL file, that would be best, as we already have IDL files that describe the structures. Cheers in advance!
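
    As one illustration of the IDL-driven approach (named plainly: this is Protocol Buffers, not necessarily the right tool here), a single .proto definition generates matching C++ and Java classes, and the wire format is already endianness- and word-size-neutral. The message below is a made-up example:

        // example.proto -- hypothetical message mirroring one of the structs
        message SensorReading {
            required int64  timestamp = 1;
            required double value     = 2;
            optional string source    = 3;
        }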

  • int considered harmful?

    - by Chris Becke
    Working on code meant to be portable between Win32, Win64 and Cocoa, I am really struggling to get to grips with what the @#$% the various standards committees involved over the past decades were thinking when they first came up with, and then perpetuated, the crime against humanity that is the C native type set: char, short, int and long. On the one hand, as an old-school C++ programmer, there are few statements as elegant and/or as simple as:

        for (int i = 0; i < some_max; i++)

    but now it seems that, in the general case, this code can never be correct. Oh sure, given a particular version of MSVC or GCC, with specific targets, the size of 'int' can be safely assumed. But in the case of writing very generic C/C++ code that might one day be used on 16-bit hardware, or 128-bit, or just be exposed to a particularly weirdly set up 32/64-bit compiler, how does one use int in C++ code in a way that the resulting program has predictable behavior in any and all possible C++ compilers that implement C++ according to spec?

    To resolve these unpredictabilities, C99 and, later, C++ introduced size_t, uintptr_t, ptrdiff_t, int8_t, int16_t, int32_t, int64_t and so on. Which leaves me thinking that a raw int, anywhere in pure C++ code, should really be considered harmful, as there is some (completely conforming) compiler that is going to produce an unexpected or incorrect result with it (and probably be an attack vector as well).
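
    For illustration, a sketch of the same idiom with exact-width types, which behaves identically on any conforming implementation (<cstdint> is the C++11 spelling; <stdint.h> serves for older toolchains):

        #include <cstdint>
        #include <cstdio>

        int main()
        {
            const std::int32_t some_max = 10;
            std::int32_t total = 0;
            // i is exactly 32 bits wide regardless of compiler or target.
            for (std::int32_t i = 0; i < some_max; i++)
                total += i;
            std::printf("%d\n", static_cast<int>(total));
        }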

  • Suggestions for a Cron like scheduler in Python?

    - by jamesh
    I'm looking for a library in Python which will provide at- and cron-like functionality. I'd quite like to have a pure Python solution, rather than relying on tools installed on the box; this way I can run on machines with no cron. For those unfamiliar with cron, you can schedule tasks based upon an expression like:

        0 2 * * 7        /usr/bin/run-backup    # run the backups at 0200 every Sunday
        0 9-17/2 * * 1-5 /usr/bin/purge-temps   # run purge-temps every 2 hours between 9am and 5pm, Monday to Friday

    The cron time expression syntax is less important, but I would like to have something with this sort of flexibility. If there isn't something that does this for me out of the box, any suggestions for the building blocks to make something like this would be gratefully received.

    Edit: I'm not interested in launching processes, just "jobs" also written in Python: Python functions. By necessity I think this would be a different thread, but not a different process. To this end, I'm looking for the expressivity of the cron time expression, but in Python. Cron has been around for years, but I'm trying to be as portable as possible; I cannot rely on its presence.
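
    As a sketch of the building blocks (all names made up): one background thread that wakes every minute and runs any Python function whose time predicate matches, the predicate standing in for a parsed cron expression:

        import threading
        import time
        from datetime import datetime

        class Scheduler:
            """Runs Python functions in one background thread, cron-style."""

            def __init__(self):
                self.jobs = []  # list of (predicate, function) pairs

            def add(self, predicate, func):
                # predicate: datetime -> bool, standing in for a cron expression
                self.jobs.append((predicate, func))

            def start(self):
                t = threading.Thread(target=self._loop)
                t.daemon = True
                t.start()

            def _loop(self):
                while True:
                    now = datetime.now()
                    for predicate, func in self.jobs:
                        if predicate(now):
                            func()
                    time.sleep(60 - datetime.now().second)  # wake each minute

        # e.g. the backup line "0 2 * * 7" becomes:
        # sched = Scheduler()
        # sched.add(lambda t: t.weekday() == 6 and t.hour == 2 and t.minute == 0,
        #           run_backup)
        # sched.start()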

  • Python/Sqlite program, write as browser app or desktop app?

    - by ChrisC
    I am in the planning stages of rewriting an Access db I wrote several years ago as a full-fledged program. I have very slight experience coding, but not enough to call myself a programmer by far. I'll definitely be learning as I go, so I'd like to keep everything as simple as possible. I've decided on Python and SQLite for my program, but I need help on my next decision. Here is my situation:

    1) It'll be a desktop program, run locally on each machine, all Windows.
    2) I would really like a nice-looking GUI with colors, nice screens, menus, lists, etc.
    3) I'm thinking about using a browser interface because (a) from what I've read, browser apps can look really great, and (b) I understand there are lots of free tools to assist in setting up the GUI/GUI code with drag-and-drop tools, which helps my "keep it simple" goal.
    4) I want the program to be totally portable, so it runs completely from one single folder on a user's PC, with no installation(s) needed for it to run. (If I did it as a browser app, isn't there the possibility that a user's browser settings could affect or break the app? How likely is this?)

    For my situation, should I make it a desktop app or a browser app?

  • Cross-Platform Camera API

    - by Karim
    Hi, I'm building a video transforming filter that has to transform video frames in real time. One of the key requirements of the filter is high performance, to minimize the number of dropped frames during the transform. Another requirement, of lower priority but nice to have, is to make it cross-platform (both PCs and mobile devices). The application is built in C++. Now my question is: is there any API that is more portable, and has similar or better performance characteristics, than DirectShow? DirectShow's portability is limited to Windows-based devices (PCs and Windows Mobile/CE platforms). I've also noticed that, for example, using HTC's custom camera API gives far better performance than what DirectShow offers. If you want to check this, try to build a filter in DirectShow that multiplies each color by 2 and renders the result in real time from the camera to the screen, then do the same with HTC's API: there is almost a 4-5x performance boost with the vendor-specific API. So it'd be very nice if the library used the device-specific implementation of the driver, as performance is critical when doing these transforms on a mobile device (which runs at about 500 MHz).

  • Signable, streamable, "readable" archive format?

    - by alexvoda
    Is there any archive format that offers the following:

    - Is digitally signable with a certificate from a trusted source like VeriSign, to prevent changes to the file (I am not referring to read-only; if the file is changed it should no longer appear signed, telling the user this is not the original file).
    - Is streamable: able to be opened even if not all of the content has been transferred (and not strictly linearly).
    - Is "readable": able to have its data read without extracting to a temporary folder (AFAIK if you open a file in a zip archive it is extracted first, and this stays true even for zip-based formats like OOXML; this is not what I want).
    - Is portable: support on at least Windows, Linux and Mac OS X is a must, or at least future support.
    - Is free of patents.
    - Is open source, preferably under a license that allows commercial use (as far as I know the GPL is a share-alike license, so it doesn't allow commercial use; BSD, on the other hand, allows it).

    Note: though it may come in handy eventually, I cannot think right now of a scenario that would require both point 1 and point 2 simultaneously. Or let's leave it at: be able to check the signature only once the whole file has been downloaded.

    I am not interested in:
    - being able to be compressed
    - being supported on legacy systems

    Does any existing archive format fit this description (tar evolutions like DAR and pax come to mind)? If there is, are there programming libraries available for the above-mentioned OSs? If not, would it be hard to create such a thing?

    EDIT: clarified point 5
    EDIT 2: added a note to clarify points 1 and 2

    P.S.: This is my first question on StackOverflow.

  • Real time embeddable http server library required

    - by Howard May
    Having looked at several available HTTP server libraries, I have not yet found what I am looking for, and I'm sure I can't be the first to have this set of requirements. I need a library which presents an API that is 'pipelined'. Pipelining is used to describe the HTTP feature where multiple HTTP requests can be sent across a TCP link at a time, without waiting for a response. I want a similar feature in the library API, where my application can receive all of those requests without having to send a response first (I will respond, but I want the ability to process multiple requests at a time to reduce the impact of internal latency). So the web server library will need to support the following flow:

    1) HTTP client transmits HTTP request 1
    2) HTTP client transmits HTTP request 2
    ...
    3) Web server library receives request 1 and passes it to My Web Server App
    4) My Web Server App receives request 1 and dispatches it to My System
    5) Web server library receives request 2 and passes it to My Web Server App
    6) My Web Server App receives request 2 and dispatches it to My System
    7) My Web Server App receives the response to request 1 from My System and passes it to the web server
    8) Web server transmits HTTP response 1 to the HTTP client
    9) My Web Server App receives the response to request 2 from My System and passes it to the web server
    10) Web server transmits HTTP response 2 to the HTTP client

    Hopefully this illustrates my requirement. There are two key points to recognise: responses to the web server library are asynchronous, and there may be several HTTP requests passed to My Web Server App with responses outstanding.

    Additional requirements:
    - Embeddable into an existing 'C' application
    - Small footprint; I don't need all the functionality available in Apache etc.
    - Efficient; will need to support thousands of requests a second
    - Allows asynchronous responses to requests; there is a small latency to responses, and given the required request throughput a synchronous architecture is not going to work for me
    - Support for persistent TCP connections
    - Support for use with server-push Comet connections
    - Open source / GPL
    - Support for HTTPS
    - Portable across Linux and Windows; preferably more

    I will be very grateful for any recommendation. Best regards.
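
    To make the wanted API shape concrete, here is a hedged sketch in C. Every type and function in it is hypothetical rather than quoted from a real library; the point is only that the handler returns without answering, and the response is completed later through a handle:

        #include <stdio.h>

        typedef struct { int id; } http_request;
        typedef struct { int id; } http_response_handle;

        /* would be provided by the library; callable later, from any thread */
        static void http_respond(http_response_handle *h, int status,
                                 const char *body)
        {
            printf("response %d: %d %s\n", h->id, status, body);
        }

        /* the library calls this once per pipelined request, immediately */
        static void on_request(http_request *req, http_response_handle *h)
        {
            printf("request %d dispatched to My System\n", req->id);
            /* hand req and h to My System; do NOT respond here */
        }

        int main(void)
        {
            http_request r1 = {1}, r2 = {2};
            http_response_handle h1 = {1}, h2 = {2};
            on_request(&r1, &h1);   /* request 1 arrives              */
            on_request(&r2, &h2);   /* request 2 arrives, 1 pending   */
            http_respond(&h1, 200, "answer 1");  /* responses arrive later */
            http_respond(&h2, 200, "answer 2");
            return 0;
        }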

  • Agile language for 2d game prototypes?

    - by instanceofTom
    Occasionally (read: when my fiancé allows) I like to prototype different game or game-like ideas I have. Usually I use Java or C# (not XNA yet) because they are the languages I have the most practice with. However, I would like to learn something more suited to agile development: a language in which it would be easier to knock out quick prototypes. At my job I have recently been working with looser (weakly/dynamically typed) languages, specifically Python and Groovy, and I think something similar would fit what I am looking for. So, my question is: what languages (and frameworks/engines) would be good for rapidly developing prototypes of 2D game concepts?

    A few notes:
    - I don't need blazing-fast bit-crunching performance. In this case I would strongly prefer ease of development over performance.
    - I'd like to use a language with a healthy community, which to me means a fair amount of maintained third-party libraries.
    - I'd like the language to be cross-platform friendly; I work on a variety of different operating systems and would like something that is portable with minimum effort.
    - I can't imagine myself using a language without decent options for debugging and editor syntax highlighting.

    Note: if you are aware of a Java or C# library/framework that you think streamlines producing game prototypes, I am open to learning something new for those languages too.
