Search Results

Search found 7872 results on 315 pages for 'unsupported hardware'.


  • Header file-name as argument

    - by Alphaneo
    Objective: I have a list of header files (about 50 of them), and each header file has a few arrays with constant elements. I need to write a program that counts the elements of each array and produces some other form of output (which will be used by the hardware group). My solution: I included all 50-odd files and wrote an application, then dumped all the array elements in the specified format. My environment: Visual Studio 6, Windows XP. My problem: each time there is a new set of header files, I have to change the VC++ project settings to point to the new set of headers and then rebuild. My question: it may sound a bit insane, but is there any way to specify the headers via command-line arguments or something? I just want to avoid recompiling the source every time...
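
    A minimal sketch of one possible workaround (not from the original post): let the C preprocessor pick the header from a macro passed on the compiler command line, so only the build command changes rather than the VC++ project settings. TARGET_HEADER and the coeffs array are illustrative names, and a rebuild is still needed for each new header set.

        /* count_elements.c
         * Build with the header chosen on the command line, e.g.
         *   cl /DTARGET_HEADER=\"new_tables.h\" count_elements.c       (Visual C++)
         *   gcc -DTARGET_HEADER='"new_tables.h"' count_elements.c      (gcc)
         */
        #include <stdio.h>

        #ifndef TARGET_HEADER
        #error TARGET_HEADER not defined - pass it on the compiler command line
        #endif

        #include TARGET_HEADER   /* the preprocessor expands the macro to a file name */

        int main(void)
        {
            /* Illustrative: if the header defines `const int coeffs[] = { ... };`,
             * the element count falls out of sizeof. */
            printf("coeffs has %u elements\n",
                   (unsigned)(sizeof(coeffs) / sizeof(coeffs[0])));
            return 0;
        }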

    Read the article

  • Doing 64-bit manipulation with 32-bit data in fixed-point arithmetic in C

    - by Viks
    Hi, I am stuck with a problem. I am working on hardware that only supports 32-bit operations: sizeof(int64_t) is 4 and sizeof(int) is 4, and I am porting an application that assumes int64_t is 8 bytes. The problem is that it has this macro: BIG_MULL(a,b) ( ((int64_t)(a) * (int64_t)(b)) >> 23 ). The result is always a 32-bit integer, but since my system doesn't support 64-bit operations, the multiply only ever gives me the low 32 bits of the product, truncating the results and making my system crash. Can someone help me out? Regards, Vikas Gupta
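
    A minimal sketch of one way out, assuming the macro really is a Q23 fixed-point multiply (((int64_t)a * (int64_t)b) >> 23) and that only 32-bit arithmetic is available: split the operands into 16-bit halves, accumulate the 64-bit product in two 32-bit words, then shift. The function name is illustrative.

        #include <stdint.h>

        /* Hypothetical replacement for BIG_MULL on 32-bit-only hardware.
         * Computes ((int64)a * (int64)b) >> 23 using 32-bit operations only.
         * Assumes two's complement and that the shifted result fits in 32 bits;
         * negative products round toward zero rather than toward minus infinity. */
        static int32_t big_mull_32(int32_t a, int32_t b)
        {
            uint32_t ua = (a < 0) ? 0u - (uint32_t)a : (uint32_t)a;
            uint32_t ub = (b < 0) ? 0u - (uint32_t)b : (uint32_t)b;
            int negative = (a < 0) != (b < 0);

            /* 16x16 partial products of the magnitudes. */
            uint32_t a_lo = ua & 0xFFFFu, a_hi = ua >> 16;
            uint32_t b_lo = ub & 0xFFFFu, b_hi = ub >> 16;
            uint32_t lo   = a_lo * b_lo;
            uint32_t mid1 = a_lo * b_hi;
            uint32_t mid2 = a_hi * b_lo;
            uint32_t hi   = a_hi * b_hi;

            /* Fold the middle terms into the 64-bit result {hi, lo}, tracking carries. */
            uint32_t mid       = mid1 + mid2;
            uint32_t carry_mid = (mid < mid1) ? 0x10000u : 0u;  /* mid1 + mid2 overflowed */
            uint32_t lo_sum    = lo + (mid << 16);
            uint32_t carry_lo  = (lo_sum < lo) ? 1u : 0u;
            hi += (mid >> 16) + carry_mid + carry_lo;

            /* Shift the 64-bit value {hi, lo_sum} right by 23 and keep the low word. */
            uint32_t result = (lo_sum >> 23) | (hi << 9);
            return negative ? -(int32_t)result : (int32_t)result;
        }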

    Read the article

  • High-Performance In-Browser Networking

    - by Jon Purdy
    (Similar in spirit to but different in practice from this question.) Is there any cross-browser-compatible, in-browser technology that allows a high-performance persistent network connection between a server application and a client written in, say, JavaScript? Think XmlHttpRequest on caffeine. I am working on a visualisation system that's restricted to at most a few users at once, and the server is pretty robust, so it can handle as much as it needs to. I would like the client to have access to video streamed from the server at a minimum of about 20 frames per second, regardless of what its graphics hardware capabilities are. Simply put: is this doable without resorting to Flash or Java?

    Read the article

  • How to profile the execution of an OSGi deployment?

    - by Jaime Soriano
    I'm starting the development of an OSGi bundle for an application that will be deployed on a device with some hardware limitations. I'd like to know how I could profile the execution of that bundle so I can always be sure it will fit, along with its dependencies, on the final device. It would be nice to have a profiler to see how much memory each bundle is using, to localize bottlenecks and to compare different implementations of the same service. Is there any profiler for OSGi deployments, or should I use a general Java profiler? For development I'm using Pax Runner with Apache Felix to run the bundle, and Maven to manage project dependencies and the build.

    Read the article

  • Automatically find compiler options for fastest exe on given machine?

    - by dehmann
    Is there a method to automatically find the best compiler options (on a given machine), which result in the fastest possible executable? Naturally, I use g++ -O3, but there are additional flags that may make the code run faster, e.g. -ffast-math and others, some of which are hardware-dependent. Does anyone know some code I can put in my configure.ac file (GNU autotools), so that the flags will be added to the Makefile automatically by the ./configure command? In addition to automatically determining the best flags, I would be interested in some useful compiler flags that are good to use as a default for most optimized executables.

    Read the article

  • fast algorithm implementation to sort very small set

    - by aaa
    Hello. This is a problem I ran into a long time ago, and I thought I might ask for your ideas. Assume I have a very small set of numbers (integers), 4 or 8 elements, that need to be sorted, fast. What would be the best approach/algorithm? My approach was to use the max/min functions. I guess my question pertains more to implementation rather than the type of algorithm. At this point it becomes somewhat hardware dependent, so let us assume an Intel 64-bit processor with SSE3. Thanks
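
    A minimal sketch of the min/max idea as a sorting network for four elements (plain C shown; with SSE3 the same compare-exchange pattern maps onto packed min/max instructions):

        /* Branchless-friendly sorting network for 4 ints: 5 compare-exchanges. */
        static inline void cswap(int *a, int *b)
        {
            int lo = (*a < *b) ? *a : *b;   /* min */
            int hi = (*a < *b) ? *b : *a;   /* max */
            *a = lo;
            *b = hi;
        }

        static void sort4(int v[4])
        {
            cswap(&v[0], &v[1]);
            cswap(&v[2], &v[3]);
            cswap(&v[0], &v[2]);
            cswap(&v[1], &v[3]);
            cswap(&v[1], &v[2]);
        }

    An 8-element set can be handled the same way with a larger network (the optimal size-8 network uses 19 compare-exchanges).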

    Read the article

  • What one-time-password devices are compatible with mod_authn_otp?

    - by netvope
    mod_authn_otp is an Apache web server module for two-factor authentication using one-time passwords (OTP) generated via the HOTP/OATH algorithm defined in RFC 4226. The developers have listed only one compatible device (Authenex's A-Key 3600) on their website. If a device is fully compliant with the standard and it allows you to recover the token ID, it should work. However, without testing, it's hard to tell whether a device is fully compliant. Have you ever tried other devices (software or hardware) with mod_authn_otp (or another open source server-side OTP program)? If yes, please share your experience :)

    Read the article

  • How to simulate different CPU frequency and limit RAM

    - by user351103
    Hi, I have to build a simulator in C#. This simulator should be able to run a second thread with configurable CPU speed and limited RAM size, e.g. 144 MHz and 50 MB. Of course I know that a simulator can never be as accurate as the real hardware, but I am trying to get reasonably similar performance. At the moment I'm thinking about creating a thread which I will stop/sleep from time to time; depending on the desired CPU speed, the simulator should adjust the sleep time of this thread and thereby simulate different CPU frequencies. To measure the achieved speed I thought about using PerformanceCounters. But with this approach I have the problem that I don't know how to limit the RAM size the thread could use. Do you have any ideas how to realize such a simulator? Thanks in advance!!
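
    The question targets C#, but the duty-cycle idea it describes is language-independent; here is a hedged sketch in C (Thread.Sleep would play the role of nanosleep in C#). The numbers and the function name are illustrative, and limiting RAM would need a separate mechanism, such as counting the simulated program's own allocations.

        #include <time.h>

        /* After roughly `run_ns` nanoseconds of simulated work, sleep long enough
         * that the worker's average CPU share approximates simulated_hz / host_hz. */
        static void throttle_slice(double simulated_hz, double host_hz)
        {
            const double ratio    = simulated_hz / host_hz;   /* e.g. 144 MHz / 2.4 GHz ~ 0.06 */
            const long   run_ns   = 2L * 1000 * 1000;         /* 2 ms of real work per slice */
            const long   sleep_ns = (long)(run_ns * (1.0 - ratio) / ratio);

            struct timespec ts = { sleep_ns / 1000000000L, sleep_ns % 1000000000L };
            nanosleep(&ts, NULL);   /* caller does ~2 ms of work, then calls this */
        }

    Measuring the achieved speed with performance counters, as the question suggests, and feeding the error back into the sleep time would make the approximation self-correcting.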

    Read the article

  • [Ubuntu] How can I log in to Ubuntu using a USB-serial console (RS-232)?

    - by marc
    Welcome. How can I enable remote terminal login on Ubuntu 9.10 using a USB-serial terminal? The device ttyUSB0 gets created, and I want to allow logging in through HyperTerminal. I found some resources, but they relate to real hardware RS-232 ports; I can't find any information about USB converters. Right now I have an established connection between that USB-serial port and my laptop (I can send text by writing to the port with cp sometext.txt /dev/ttyUSB0 and read it in HyperTerminal). Any idea? Regards

    Read the article

  • Exceptions & Interrupts

    - by Betamoo
    When I was searching for the distinction between exceptions and interrupts, I found this question (Interrupts and exceptions) on SO... Some answers there were not suitable (at least at the assembly level): "Exceptions are the software version of an interrupt" - but software interrupts exist too!! "Interrupts are asynchronous but exceptions are synchronous" - is that right? "Interrupts occur regularly" "Interrupts are hardware-implemented traps, exceptions are software-implemented" - same objection as above! I need to find out whether some of these answers are right, and I would be grateful if anyone could provide a better answer... Thanks!

    Read the article

  • Is there a Novatel Wireless Modem Emulator or something similar?

    - by David Brown
    Novatel Wireless provides their NovaCore SDK for developers wishing to interface with their line of modems. I'm currently developing an open source managed wrapper for it, but I'm having difficulties with testing. I own a Novatel MiFi and have mobile broadband service through Sprint, but that can only get me so far. The device is already activated, thus I can't test the network activation features of the NovaCore SDK. There are also certain features only available for HSPA modems, which I am not able to get in my area. Is there an emulator capable of emulating a Novatel Wireless modem so that I can test my library without physical hardware and an actual data connection? If not, do you have any other suggestions that might help in this situation? I've contacted Novatel Wireless via email and their developer forum, but have not received a response. Thanks!

    Read the article

  • IOKit header assert.h gone? [answered]

    - by Julian Kessel
    I want to get the hardware address of my Mac's Ethernet card. All the samples I saw include IOKit/assert.h, which doesn't seem to exist on my system, and GCC throws an error saying it doesn't know the type IOEthernetAddress. Is assert.h necessary for my task? It would be great if someone could give me a working sample. [edit] Here's my code; I think this will help in understanding the problem: #include <IOKit/assert.h> #include <IOKit/network/IOEthernetController.h> #include <IOKit/network/IOEthernetInterface.h> int main(){ IOEthernetAddress addr; getHardwareAddress(&addr); printf("%x", addr); return 0; }
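
    The IOKit headers in that snippet are kernel-side, which is why IOKit/assert.h and getHardwareAddress() are not available to a user-space program. A hedged user-space sketch that reads the link-layer address via getifaddrs() instead (the interface name en0 is the usual built-in adapter, but that is an assumption):

        #include <stdio.h>
        #include <string.h>
        #include <sys/types.h>
        #include <sys/socket.h>
        #include <ifaddrs.h>
        #include <net/if_dl.h>

        int main(void)
        {
            struct ifaddrs *ifap = NULL;
            if (getifaddrs(&ifap) != 0)
                return 1;

            for (struct ifaddrs *ifa = ifap; ifa != NULL; ifa = ifa->ifa_next) {
                /* AF_LINK entries carry the hardware (MAC) address. */
                if (ifa->ifa_addr == NULL || ifa->ifa_addr->sa_family != AF_LINK)
                    continue;
                if (strcmp(ifa->ifa_name, "en0") != 0)
                    continue;

                struct sockaddr_dl *sdl = (struct sockaddr_dl *)ifa->ifa_addr;
                unsigned char *mac = (unsigned char *)LLADDR(sdl);
                printf("%02x:%02x:%02x:%02x:%02x:%02x\n",
                       mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
                break;
            }

            freeifaddrs(ifap);
            return 0;
        }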

    Read the article

  • How to use the Mobile Browser Definition File for a Phone vs SmartPhone separation

    - by Denis Hoctor
    Hi all, I'm looking to revamp our mobile site with something simple for phones below the ambiguous smartphone category and something a little more interesting for the phones above it. I'm not interested in WAP/WML for this project. I'm building an ASP.NET 4 MVC 2 app and using the MBDF. What I'd like to know is how best to define this differentiation when using the MBDF? Screen size, JavaScript, SupportsTouchScreen etc. are all in the MBDF along with others, but I'm not sure where to draw the line and where the data is most accurate across the broad range of devices. What do those of you developing for this spread of hardware & software split on? Thanks, Denis P.S. I've done my research on xHTML MP 1.0 - 1.2 and the best practices for implementation to ensure broad coverage, but I don't want to restrict the newer phones out there to what the baseline can see.

    Read the article

  • Touch screens for kiosk applications

    - by Micah
    I'm developing a kiosk-style touchscreen application in Qt. Currently I'm using an Elo Touch surface acoustic wave touchmonitor which works well except for one thing: drag performance is way too poor to provide a good user experience. As this is the case for the cursor in X as well as in my application, it seems to be either the fault of X (probably not) or the touchmonitor. Since mobile platforms are able to achieve very high performance in this regard, it seems like it should be possible for vastly more powerful desktop systems. Does anybody have experience with getting good drag performance out of desktop touchmonitors? What hardware have you used? Is X to blame?

    Read the article

  • Using SQLite transactions: I/O error

    - by james.ingham
    I currently have a client/server setup where the client sends data to the server and the server saves the data to a SQLite database file. To do this I am using transactions, which works fine on Windows 7 when I run around 30 clients (each client sending data back every 5 - 30 seconds). When using the same software on Windows XP, I can get/set data multiple times with no problems, but once I run around 20 clients I start to get Windows "Delayed Write Failed" errors, which fire an exception on the server. I'm assuming this is either something to do with XP or a hardware issue on the machine I'm running XP on. Does anyone have any advice to avoid this? Or should I just catch the exception and retry saving the data? Thanks

    Read the article

  • What are the Worst Software Project Failures Ever?

    - by Warren P
    Is there a good list of "worst software project failures ever" in the history of software development? For example, in Canada a "gun registry" project spent around two billion dollars (http://en.wikipedia.org/wiki/Gun_registry). This is, of course, insane, even if the final product "sort of worked". I have heard of an FBI case-file system for which there have been several attempts at a rewrite, all of them failures so far. There is a book on the subject (Software Runaways). There doesn't seem to be a software "boondoggle" list or "fiasco" list on Wikipedia that I can see. (Update: Therac-25 would be the 'winner' of this question, except that I was internally thinking more of software projects whose deliverable was mainly software, as opposed to firmware projects like Therac-25, where the hardware and firmware together are capable of killing people. In terms of pure software monetary debacles, which was my intended question, there are several contenders.)

    Read the article

  • Corporate Wiki Organization - Technical Documentation

    - by Dave Jarvis
    Corporations have documents describing various aspects of their technical systems, including:

    - Custom Applications
    - Custom Development Frameworks
    - Third Party Applications
      - Accounting
      - Bug Tracking
      - Network Management
    - How To Guides
    - User Manuals
    - Software Tools
      - Web Browsers
      - Development (IDEs)
      - Graphics (GIMP, xv)
      - Text Editing
      - File Transfer (ncFTP, WinSCP)
    - Hardware
      - Servers (Web, Database, Exchange, File)
      - Network Devices
      - Printers

    What other items are missing from the list, and how would you organize it? (For example, would Software Tools make more sense under Third Party Applications?) Try to think about where you, a software developer, would expect to find the information by browsing (not searching). A few constraints: the structure should not go beyond three levels deep; avoid the word "and" in favour of two different categories; keep the structure general - it should apply as broadly as possible. The target audience is primarily technical.

    Read the article

  • Operating system for visualization app in 6 monitors

    - by Federico
    Hi. I have to plan the development for an application with these major requirements: Show different graphical data and animations in 6 monitors, in fullscreen mode. The hardware to be used is a PC with 3 NVIDIA GeForce 9800 GX2 cards. I have some expertise working with OpenGL, but never with more than one monitor. I have the (some limited) freedom to choose an operating system for the application. My options are: Windows XP, Windows Vista, Windows 7, Ubuntu 8.04/10.04. I would like to know, if you have some expertise or knowledge in the multi-monitor application development field, what is the recommended operating system for this kind of application? And, do I need any software other than the operating system and the NVIDIA drivers to be able to use the 6 monitors in fullscreen, showing different things in each one of them? Any comment/answer will be really appreciated. Thanks in advance! Federico

    Read the article

  • begin...end VS braces {...} VS indentation grouping

    - by Halst
    Hi, everyone. I don't want to fuel any holy wars here, but I need to ask your opinion. Right now I'm in the process of designing a hardware description language as a project at my university. I decided to take the VHDL language and just add some syntactic sugar, because VHDL is rather obese in its syntax. I decided to use indentation to group blocks of code (like in Python), and I'm being strongly criticized for that. VHDL originally uses begin...end; grouping. I have no clue what the pros and cons of these 3 types of grouping are; the only thing I know is that I like the Python style, and I don't understand whether its use could be error-prone or something. What do you think? What do you like? (I hope I can get some feedback from people who have extensively used different languages with different code-grouping syntax, like Pascal, Ada, Delphi, C, C++, C#, Java, Python.)

    Read the article

  • How to write to the OpenGL Depth Buffer

    - by Mikepote
    I'm trying to implement an old-school technique where a rendered background image AND preset depth information are used to occlude other objects in the scene. So, for instance, if you have a picture of a room with some wires hanging from the ceiling in the foreground, these are given a shallow depth value in the depth map, and when rendered correctly this allows the character to walk "behind" the wires but in front of other objects in the room. So far I've tried creating a depth texture using: glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, Image.GetWidth(), Image.GetHeight(), 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, pixels); and then just binding it to a quad and rendering that over the screen, but it doesn't write the depth values from the texture. I've also tried: glDrawPixels(Image.GetWidth(), Image.GetHeight(), GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, pixels); but this slows my framerate down to about 0.25 fps... I know that you can do this in a pixel shader by setting gl_FragDepth to a value from the texture, but I wanted to know if I could achieve this on hardware without pixel-shader support.
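
    For hardware that does have shader support, a minimal sketch of the fragment-shader route the question mentions (shown as a GLSL string in C; the uniform names are illustrative): the background colour and the pre-baked depth map are sampled together, and the depth value is written to gl_FragDepth so later geometry is occluded by it.

        /* Fragment shader for drawing the background quad; bind the colour image
         * to u_color and the depth map to u_depth before rendering the quad. */
        static const char *background_fs =
            "uniform sampler2D u_color;\n"
            "uniform sampler2D u_depth;\n"
            "void main()\n"
            "{\n"
            "    vec2 uv = gl_TexCoord[0].st;\n"
            "    gl_FragColor = texture2D(u_color, uv);\n"
            "    /* depth stored in the red channel of the depth map */\n"
            "    gl_FragDepth = texture2D(u_depth, uv).r;\n"
            "}\n";

    Without shaders, glDrawPixels with GL_DEPTH_COMPONENT is one of the few fixed-function ways to fill the depth buffer from client memory, so the remaining options are mostly about making that call cheaper (uploading once and reusing, or restricting it to the region that actually changes).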

    Read the article

  • How would I best address this object type hierarchy? Some kind of enum hierarchy?

    - by FerretallicA
    I'm curious as to any solutions out there for addressing object hierarchies in an ORM approach (in this instance, using Entity Framework 4). I'm working through some docs on EF4 and trying to apply it to a simple inventory tracking program. The possible types for inventory to fall into are as follows:

    Inventory item types:
    - Hardware
      - PC (Desktop, Server, Laptop)
      - Accessory
        - Input (keyboards, scanners etc)
        - Output (monitors, printers etc)
        - Storage (USB sticks, tape drives etc)
        - Communication (network cards, routers etc)
    - Software

    What recommendations are there for handling enums in a situation like this? Are enums even the solution? I don't really want a ridiculously normalised database for such a relatively simple experiment (e.g. tables for InventoryType, InventorySubtype, InventoryTypeToSubtype etc). I don't really want to over-complicate my data model by having each subtype inherited even though no additional properties or methods are included (except PC types, which would ideally have associated accessories and software, but that's probably out of scope here). It feels like there should be a really simple, elegant solution to this, but I can't put my finger on it. Any assistance or input appreciated!

    Read the article

  • Defend zero-based arrays

    - by DrJokepu
    A question asked here recently reminded me of a debate I had not long ago with a fellow programmer. Basically he argued that zero-based arrays should be replaced by one-based arrays, since arrays being zero-based is an implementation detail that originates from the way arrays, pointers and computer hardware work, and this sort of thing should not be reflected in higher-level languages. Now, I am not really good at debating, so I couldn't really offer any good reasons to stick with zero-based arrays other than that they sort of feel more appropriate. I am really interested in the opinions of other developers, so I sort of challenge you to come up with reasons to stick with zero-based arrays!

    Read the article

  • How do I program an AVR Raven with Linux or a Mac?

    - by Andrew McGregor
    This tutorial for programming these starts with programming the Ravens and Jackdaw with a Windows box. Can I do those initial steps with avrdude on a Linux or OS X machine instead? If so, how? Is there any risk of bricking the hardware if I just try? I have a USB JTAG ICE MKii clone, which is supposed to work for this. I'm totally new to AVR, but very experienced with C/C++ programming on Linux or OS X, up to and including kernel programming... so any hint at all would be appreciated, I can read man pages, but only if I know what I'm looking for.

    Read the article

  • Occasional InterruptedException when quitting a Swing application

    - by Joonas Pulakka
    I recently updated my computer to a more powerful one, with a quad-core hyperthreading processor (i7), thus plenty of real concurrency available. Now I'm occasionally getting the following error when quitting (System.exit(0)) an application (with a Swing GUI) that I'm developing: Exception while removing reference: java.lang.InterruptedException java.lang.InterruptedException at java.lang.Object.wait(Native Method) at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:118) at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:134) at sun.java2d.Disposer.run(Disposer.java:125) at java.lang.Thread.run(Thread.java:619) Well, given that it started to happen with a more concurrency-capable hardware, and it has to do with threads, and it happens occasionally, it's obviously some kind of timing thing. But the problem is that the stack trace is so short. All I have is the listing above. It doesn't include my own code at all, so it's somewhat hard to guess where the bug is. Has anyone experienced something like this before? Any ideas how to start solving it?

    Read the article

  • Windows Disk I/O Analysis

    - by Jonathon
    It appears that we are having a problem with the disk I/O speed on our Windows 2003 Enterprise Edition server (64-bit). As we were initializing a database that created two 1 GB tablespaces on 3 different machines, it became obvious that the two smaller machines (each 32-bit Windows 2003 Standard Edition with less RAM) easily outperformed the larger machine when creating the files: the larger machine took 10x as long to create the tablespaces as the other machines did. Now I am left wondering how that could be. What programs or scripts would you recommend for tracking down the I/O problem? I think the issue may be with the controller card (all boxes are hardware RAID 10 but have different controller cards), but I would like to check the actual disk I/O speed as well, so I have some hard numbers to work with. Any help would be appreciated.

    Read the article
