Search Results

Search found 7580 results on 304 pages for 'coordinate systems'.

Page 216/304 | < Previous Page | 212 213 214 215 216 217 218 219 220 221 222 223  | Next Page >

  • How many layers are between my program and the hardware?

    - by sub
    I somehow have the feeling that modern systems, including runtime libraries, this exception handler and that built-in debugger, build up more and more layers between my (C++) programs and the CPU/rest of the hardware. I'm thinking of something like this: 1 + 2 → OS top layer → runtime library/helper/error handler → a huge pile of DLL modules → OS kernel layer → "Do you really want to run 1 + 2?" Windows popup (don't take this seriously) → OS kernel layer → hardware abstraction → hardware → travel through at least 100 miles of circuits → eventually arrive at the CPU → ADD 1, 2 → go all the way back to my program. Nearly all the technical details there are wrong and in some random order, but you get my point, right? How much longer/shorter is this chain when I run a C++ program that calculates 1 + 2 at runtime on Windows? How about when I do this in an interpreter (Python, Ruby, PHP)? Is this chain really as dramatic in reality? Does Windows really try "not to stand in the way", e.g. give my binary a direct connection to the hardware?
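
    For the arithmetic itself, the chain can actually collapse to nothing: a minimal sketch (numbers hard-coded, as in the question) that you can compile and inspect, e.g. with g++ -O2 -S. The 1 + 2 is typically folded to the constant 3 at compile time, so the long chain of layers really applies to the I/O call, not the addition:

        // add.cpp: the "work" is computing 1 + 2
        #include <cstdio>

        int main() {
            int sum = 1 + 2;            // usually constant-folded by the compiler; no runtime ADD needed
            std::printf("%d\n", sum);   // this call is where the layering starts:
                                        // C runtime -> OS write call -> kernel -> driver -> hardware
            return 0;
        }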

    Read the article

  • SQL Server, Remote Stored Procedure, and DTC Transactions

    - by marc
    Our organization has a lot of its essential data in a mainframe Adabas database. We have ODBC access to this data and have queried/updated it successfully from C# using ODBC/Natural "stored procedures". What we'd like to be able to do now is query a mainframe table from within SQL Server 2005 stored procs, dump the results into a table variable, massage them, and join the result with native SQL data as a result set. Executing the Natural proc from SQL works fine when we're just selecting from it; however, when we insert the result into a table variable, SQL seems to start a distributed transaction, which in turn wreaks havoc with our connections. Given that we're not performing updates, is it possible to turn off this DTC-escalation behavior? Any tips on getting DTC set up properly to talk to DataDirect's (formerly Neon Systems) Shadow ODBC driver?
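
    For experimenting with the escalation itself, a hedged T-SQL sketch (the linked-server name MAINFRAME, the pass-through query and the table-variable shape are placeholders, not taken from the actual system). SET REMOTE_PROC_TRANSACTIONS controls whether a remote procedure executed inside a local transaction is promoted to an MS DTC transaction, and routing the read through OPENQUERY keeps it a plain pass-through SELECT, which in many setups avoids the escalation path:

        -- Session-level switch: do not promote remote proc calls to distributed transactions.
        SET REMOTE_PROC_TRANSACTIONS OFF;

        DECLARE @mainframe_rows TABLE (id INT, descr VARCHAR(100));   -- placeholder shape

        -- OPENQUERY sends the statement to the linked server as-is and returns a rowset,
        -- which can then be inserted locally without an INSERT ... EXEC over the wire.
        INSERT INTO @mainframe_rows (id, descr)
        SELECT id, descr
        FROM OPENQUERY(MAINFRAME, 'SELECT ID, DESCR FROM SOME_NATURAL_VIEW');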

    Read the article

  • Localized tables and Entity Framework

    - by Pyttroll
    Hi all, I have a scenario where I need to localize the values of objects in my database. Let's say you have an application that can create animals: if the user is English, the value of the animal's "Name" property would be entered as "Cat" in the UI, whereas a French user would enter "Chat". The animal culture table would then contain two records pointing to the same animal in the parent table. When reading values back, if no value of "Name" exists for the user's culture, the default value (the value the object was originally created with) would be used. Diagrams in the original article show how the data is stored in SQL. I'm trying to map this schema to an object model using the Entity Framework, and I'm a bit confused about the best way to approach the problem. Is EF appropriate for this? Should I use EF4? This EF model will be used by .NET RIA Services. Thanks, Pierre-Yves Troel, Ayuda Media Systems, http://www.ayudasystems.com
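
    For concreteness, one possible shape for the object model and the culture fallback (class and property names are illustrative guesses, not taken from the actual schema):

        // Hypothetical POCOs for the Animal / AnimalCulture tables.
        using System.Collections.Generic;
        using System.Linq;

        public class Animal
        {
            public int Id { get; set; }
            public string DefaultName { get; set; }                    // value the object was created with
            public ICollection<AnimalCulture> Cultures { get; set; }

            // Fall back to the default value when no row exists for the requested culture.
            public string GetName(string cultureCode)
            {
                AnimalCulture match = Cultures == null
                    ? null
                    : Cultures.FirstOrDefault(c => c.CultureCode == cultureCode);
                return match != null ? match.Name : DefaultName;
            }
        }

        public class AnimalCulture
        {
            public int AnimalId { get; set; }
            public string CultureCode { get; set; }   // e.g. "en", "fr"
            public string Name { get; set; }          // "Cat", "Chat"
        }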

    Read the article

  • Can I use the newer versions of Visual Studio to do "old" things?

    - by Ptah- Opener of the Mouth
    I have several ATL/COM-based DLLs that I've been using Visual C++ 6.0 for. I require a couple of "old" things from the generated DLLs: (1) They must be compatible with projects developed in Visual Basic 6.0 (the old VB6, emphatically not VB.NET). (2) They must be compatible with old operating systems, minimum Windows 98 SE. To be clear, I mean they must run on such OSes, not that I would have to be able to develop them on a machine running such an OS. I am sick of Visual Studio 6.0. Converting to .NET (or any other major change like that) is out of the question at the current time, so I must continue to use VB6. But can I switch to the C++ in a newer Visual Studio with a minimum of effort (i.e. little if any required recoding)? If so, are there any "gotchas" I should watch out for? Thanks.

    Read the article

  • What makes COBOL such a hated language?

    - by cable
    I'm rather young, so I haven't experienced the times when COBOL was going mainstream; maybe you can help me out. Everywhere I go and surf I get told how horrible COBOL is, see sites making fun of COBOL, hear from older programmers that I should be happy I don't have to use it, and see others make anti-COBOL jokes. I don't know much about COBOL, except that it is still used everywhere because many old systems have never been refactored or replaced, and that it is even still being developed and maintained. I have also looked for COBOL code examples, but they seem to be as rare and expensive as unicorn blood. If you close this question or give me sarcastic, irrelevant answers, I'll have to learn COBOL to find out the hard way. I'm serious.
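
    Since examples really are hard to come by, here is a tiny illustrative COBOL program (not from any real system, layout slightly compressed; real fixed-format COBOL cares about starting columns). It just adds two numbers, and it also hints at part of the grumbling: a one-line expression in most languages takes a whole DATA DIVISION of ceremony here:

        IDENTIFICATION DIVISION.
        PROGRAM-ID. ADDTWO.
        DATA DIVISION.
        WORKING-STORAGE SECTION.
        01  WS-A    PIC 9(4)  VALUE 1.
        01  WS-B    PIC 9(4)  VALUE 2.
        01  WS-SUM  PIC 9(5)  VALUE 0.
        PROCEDURE DIVISION.
            ADD WS-A TO WS-B GIVING WS-SUM.
            DISPLAY "SUM = " WS-SUM.
            STOP RUN.

    COBOL's defenders would argue that this same verbosity is what kept decades-old business code readable to generations of maintenance programmers.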

    Read the article

  • Capistrano put() and upload() both failing

    - by Kyle
    With Capistrano, I am deploying a Rails application from Mac OS X 10.5 to CentOS 5.2. Note that deploy.rb and the server environment have not changed in over a year. There is a task within our deploy.rb file called upload:

        put(File.read(file), "#{shared_path}/#{filename}", :via => :scp)

    This fails each and every time with the following exception:

        No such file or directory - /srv/ourapp/releases/20100104194410/config/database.yml

    My local copy of config/database.yml is failing to upload properly. I've verified it's not our internet connection, as this happens on three different connections and from two different systems. I've also tried swapping put() for upload() but get the same result; dropping :via => :scp, and/or trying to force :sftp instead, fails in the same way. Relevant info: cap -V reports Capistrano v2.5.10; ruby -v reports ruby 1.8.7 (2008-08-11 patchlevel 72) [i686-darwin9.6.0].
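
    One thing worth noticing (an observation, not a confirmed diagnosis): the path in the exception is the remote release path, and File.read(file) runs on the local machine before anything is sent, so if file resolves to something like "#{release_path}/config/database.yml" the local read will fail with exactly this error regardless of the connection. A sketch of a task that reads a genuinely local file instead (the local config/ path is an assumption about the project layout):

        # Capistrano 2.x-style task; the local path and target filename are assumptions.
        namespace :deploy do
          desc "Upload the local config/database.yml to shared_path"
          task :upload_database_yml, :roles => :app do
            local_file = File.join(File.dirname(__FILE__), "..", "config", "database.yml")
            put File.read(local_file), "#{shared_path}/database.yml", :via => :scp
          end
        end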

    Read the article

  • What's the coolest machine you've ever worked on?

    - by mxg
    What's the most exotic, coolest, most unique, or most interesting machine you've worked on? Most of us work on machines with x86 architectures running some Windows or Linux variant. I'm sure there are those of you out there who are working on, or have worked on, machines with experimental architectures or operating systems. Maybe you worked on a machine that has some significance in the history of computing. I'd be interested to hear about it, and I'm sure others reading SO would be as well. EDIT: I appreciate all of you who took some time to talk about your experiences with interesting or unusual machines. I enjoyed reading your answers. Although it wasn't my intent to get nostalgic, I see that theme among the responses.

    Read the article

  • Recommendations for an in-memory database vs thread-safe data structures

    - by yx
    TL;DR: What are the pros/cons of using an in-memory database versus locks and concurrent data structures? I am currently working on an application that has many (possibly remote) displays that collect live data from multiple data sources and render it on screen in real time. One of the other developers has suggested using an in-memory database instead of doing it the standard way our other systems do, which is to use concurrent hashmaps, queues, arrays, and other objects to store the graphical objects, handling them safely with locks where necessary. His argument is that the DB will lessen the need to worry about concurrency, since it handles read/write locks automatically, and that it also offers an easier way to structure the data into as many tables as we need, instead of creating hashmaps of hashmaps of lists, etc., and keeping track of it all. I do not have much DB experience myself, so I am asking fellow SO users what experience they have had and what the pros and cons of introducing the DB into the system are.
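
    For reference, a minimal sketch of the "standard way" described above (names are illustrative); the nested-map bookkeeping is exactly the part an embedded in-memory database would replace with tables and queries:

        import java.util.Collections;
        import java.util.List;
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.CopyOnWriteArrayList;

        // One concurrent map per data source, each holding the latest samples per on-screen object.
        public class LiveDataCache {
            // source id -> (object id -> recent samples)
            private final Map<String, Map<String, List<Double>>> bySource = new ConcurrentHashMap<>();

            public void record(String source, String objectId, double value) {
                bySource.computeIfAbsent(source, s -> new ConcurrentHashMap<>())
                        .computeIfAbsent(objectId, o -> new CopyOnWriteArrayList<>())
                        .add(value);
            }

            public List<Double> samples(String source, String objectId) {
                Map<String, List<Double>> perObject = bySource.get(source);
                if (perObject == null) {
                    return Collections.emptyList();
                }
                List<Double> list = perObject.get(objectId);
                return list == null ? Collections.<Double>emptyList() : list;
            }
        }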

    Read the article

  • malloc()/free() behavior differs between Debian and Red Hat

    - by StasM
    I have a Linux app (written in C) that allocates a large amount of memory (~60M) in small chunks through malloc() and then frees it (the app continues to run afterwards). This memory is not returned to the OS but stays allocated to the process. Now, the interesting thing is that this behavior happens only on Red Hat Linux and its clones (Fedora, CentOS, etc.), while on Debian systems the memory is returned to the OS after all the freeing is done. Any ideas why there could be a difference between the two, or which setting might control it?
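
    A hedged experiment rather than an explanation: glibc's allocator generally only gives heap pages back to the kernel when the top of the heap is free (or when a chunk was large enough to be served by mmap), the two distros ship different glibc releases, and the trim/mmap thresholds can be tuned via mallopt() or environment variables. One way to probe the difference is to run the same allocation pattern on both systems with the knobs set explicitly:

        #include <malloc.h>   /* glibc-specific: mallopt(), malloc_trim() */
        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            /* Ask glibc to trim the heap sooner and to mmap smaller "large" chunks;
               the values below are only for experimenting, not recommendations. */
            mallopt(M_TRIM_THRESHOLD, 128 * 1024);
            mallopt(M_MMAP_THRESHOLD, 64 * 1024);

            enum { CHUNKS = 15000, CHUNK_SIZE = 4096 };   /* roughly 60 MB in small pieces */
            void **p = malloc(CHUNKS * sizeof *p);
            for (int i = 0; i < CHUNKS; i++)
                p[i] = malloc(CHUNK_SIZE);
            for (int i = 0; i < CHUNKS; i++)
                free(p[i]);
            free(p);

            malloc_trim(0);   /* explicitly release free pages at the top of the heap */
            getchar();        /* pause: compare VmRSS in /proc/<pid>/status on both systems */
            return 0;
        }

    If the resident size drops on both distros once malloc_trim() runs, the difference is in the defaults rather than in the application.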

    Read the article

  • Do I assign different or the same class id to 32-bit and 64-bit versions of the same IFilter?

    - by sharptooth
    I've implemented my own Microsoft Search IFilter. I need two versions of it, 32-bit and 64-bit, for deployment on the corresponding systems. For any given file extension I can only register one IFilter class id, which means I can only use one version on any given system. So having two class ids seems useless; it only makes the automatic installer more complex. Do I reuse the same COM class id for both, or do I use different class ids?

    Read the article

  • Is "non-breaking change" a common term in revision control?

    - by mafutrct
    "Non-breaking change" is a term used to describe a minor contribution that is supposed not to break anything; it is abbreviated NBC. Typical examples include formatting a source file or adding a comment: it really, really should not break the build (of course there are always exceptional cases). Is this a common term in revision control talk? I'm especially asking those familiar with RC systems. I use "NBC" on occasion, but I have never heard anyone else use it, so I wondered... (btw: don't trust Wikipedia as a source on this)

    Read the article

  • Strategies for learning complex software packages

    - by Tom
    I am a fairly novice Java programmer, and I am currently working on a project to extend a piece of software that has been developed over a few years. It has a pretty big code base, and the previous developers knew it well, so extending it is not going to be easy without a thorough understanding of its structure and function. 1) I have begun by trying to tackle small parts of the system and document them with a mind map (in particular, I am trying to document the interactions with external systems). 2) I have the book "Code Complete", which I am working through. 3) I have pointed tools like Tattletale at the code to get some diagrams of dependency relationships. What other strategies should I employ? Should I focus on one particular aspect?

    Read the article

  • Can we represent bit fields in JSON/BSON?

    - by zubair
    We have a dozen simulators talking to each other over UDP. The interface definition is managed in a database. The simulators are written in different languages: mostly C++, some in Java and C#. Currently, when a systems engineer makes changes in the interface definition database, simulator developers manually update the communication data structures in their code. The data is mostly 2-5 bytes, with bit fields for each signal. What I want to do is generate one file from the interface definition database describing the byte and bit field definitions, and let each developer add it to his simulator code with minimal fuss. I looked at JSON/BSON but couldn't find a way to represent bit fields in them. Thanks, Zubair
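
    JSON and BSON have no bit-field type, so the usual workaround is to describe each signal numerically (byte offset, bit offset, width, and any scaling) and let a small per-language generator or reader turn that description into the packed native structure. A sketch of what such a generated file could look like; every name and number below is invented for illustration:

        {
          "message": "TRACK_UPDATE",
          "length_bytes": 3,
          "signals": [
            { "name": "valid",    "byte": 0, "bit": 0, "width": 1 },
            { "name": "quality",  "byte": 0, "bit": 1, "width": 3 },
            { "name": "track_id", "byte": 0, "bit": 4, "width": 12 },
            { "name": "speed",    "byte": 2, "bit": 0, "width": 8, "scale": 0.5 }
          ]
        }

    The C++, Java and C# sides then each generate (or hand-write once) their own struct or accessor class from this one file, which keeps the bit-level layout in a single place.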

    Read the article

  • A web framework where AJAX was not an afterthought

    - by Pirate for Profit
    AJAX is a pain in the ass because it essentially means you'll have to write two sets of similar code: one for browsers with JavaScript enabled and one for those without. Not only that, but you have to connect JavaScript events to hook into your models and display the results. And if all that weren't bad enough, you need to send an address change with the request, otherwise the user won't be able to "click back" correctly (if confused, look at what happens to the address bar when you click links in GMail). We're searching for something that had the foresight and design goals to address all these concerns from the start. Performance and security are also obvious major concerns. We love config-based systems as well, where instead of writing a lot of code you just drop settings into an easily read config format. It's like asking for the holy grail, right?

    Read the article

  • Which sector of the IT industry best suits my career needs?

    - by Shailesh Tainwala
    I am a student of software engineering and will be graduating in a year's time. I want to get a few years of work experience before considering further studies. I like the idea of working on projects developing end-to-end systems for medium/large enterprises in different domains. My areas of special interest are AI and data mining. ERP and MIS are the terms that most closely resemble what I am driving at. What type of companies should I ideally be looking at?

    Read the article

  • How to write a shell in Python

    - by panzi
    I've written a small console application that can perform certain tasks. The user interface is similar to things like version control systems or yum, so basically you can think of it as a domain-specific language. Now I'd like to write a (bash-like) shell that can execute and auto-complete this language and has a command history (so I do not have to load and save the quite large XML files on each command). In a nutshell, I want something like IPython, but for executing my own DSL rather than Python code. Are there any libraries that help with this? I see that there are readline and rlcompleter modules in Python, but their documentation seems to indicate that they are only for use with the Python shell itself; or did I miss something there?
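
    The standard library's cmd module is built for exactly this kind of interactive shell: do_* methods become commands, complete_* methods provide tab completion, and when readline is available it also supplies line editing and in-session history. A minimal sketch, with placeholder command names standing in for the real DSL:

        import cmd

        class MyDslShell(cmd.Cmd):
            intro = "my-dsl shell. Type help or ? to list commands."
            prompt = "(mydsl) "

            def do_load(self, arg):
                """load <file>: load an XML file once and keep it in memory."""
                print("loading %s ..." % arg)

            def complete_load(self, text, line, begidx, endidx):
                # Tab-completion hook; candidates here are hard-coded placeholders.
                candidates = ["project.xml", "settings.xml"]
                return [c for c in candidates if c.startswith(text)]

            def do_status(self, arg):
                """status: show the current state (placeholder)."""
                print("everything is fine")

            def do_quit(self, arg):
                """quit: exit the shell."""
                return True

        if __name__ == "__main__":
            MyDslShell().cmdloop()

    Persistent history across sessions is not automatic, but the readline module's read_history_file()/write_history_file() can be hooked in around cmdloop().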

    Read the article

  • How to efficiently manage files on a filesystem in Java?

    - by Tuukka Mustonen
    I am creating a few JAX-WS endpoints, for which I want to save the received and sent messages for later inspection. To do this, I am planning to save the messages (XML files) to the filesystem, in some sensible hierarchy. There will be hundreds, even thousands of files per day. I also need to store metadata for each file. I am considering putting the metadata (just a couple of fields) into a database table, but keeping the XML content itself in files on the filesystem, in order not to bloat the database with content data (which is seldom read). Is there some simple library that helps with saving, loading, deleting, etc. of the files? It's not that tricky to implement myself, but I wonder if there are existing solutions: just a simple library that already provides easy access to the filesystem (preferably across different operating systems). Or do I even need that; should I just go with raw/custom Java?
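
    For a sense of how little raw Java is actually needed, a minimal sketch of the "custom" route (directory layout and class names are invented): each message goes under a date-based directory, and the returned path is what the metadata row would reference:

        import java.io.File;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.io.OutputStream;
        import java.text.SimpleDateFormat;
        import java.util.Date;
        import java.util.UUID;

        // Stores each message under <root>/yyyy/MM/dd/<uuid>.xml.
        public class MessageStore {
            private final File root;

            public MessageStore(File root) { this.root = root; }

            public File save(byte[] xmlBytes) throws IOException {
                String day = new SimpleDateFormat("yyyy/MM/dd").format(new Date());
                File dir = new File(root, day);
                if (!dir.exists() && !dir.mkdirs()) {
                    throw new IOException("could not create " + dir);
                }
                File target = new File(dir, UUID.randomUUID().toString() + ".xml");
                OutputStream out = new FileOutputStream(target);
                try {
                    out.write(xmlBytes);
                } finally {
                    out.close();
                }
                return target;
            }
        }

    Apache Commons IO (FileUtils) covers the same ground with less boilerplate if a library is preferred.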

    Read the article

  • Portable way of finding total disk size in Java (pre Java 6)

    - by Wouter Lievens
    I need to find the total size of a drive in Java 5 (or 1.5, whatever). I know that Java 6 has a new method for this in java.io.File, but I need it to work in Java 5. Apache Commons IO has org.apache.commons.io.FileSystemUtils to provide the free disk space, but not the total disk space. I realize this is OS-dependent and will need to rely on messy command line invocation. I'm fine with it working on "most" systems, i.e. Windows/Linux/Mac OS X. Preferably I'd like to use an existing library rather than write my own variants. Any thoughts? Thanks.
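
    In the absence of a library, the command-line route might look roughly like the sketch below for the Unix-like half of "most systems" (Linux/Mac OS X), parsing POSIX df -P -k output; a separate Windows branch (for example parsing the output of fsutil volume diskfree) would still be needed and is left out here:

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;

        public class DiskSize {
            /** Total size of the filesystem containing 'path', in kilobytes (Unix-like only). */
            public static long totalKilobytes(String path) throws IOException, InterruptedException {
                Process p = Runtime.getRuntime().exec(new String[] { "df", "-P", "-k", path });
                BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()));
                try {
                    r.readLine();                     // skip the header line
                    String line = r.readLine();       // e.g. "/dev/sda1  488281250 ..."
                    p.waitFor();
                    String[] cols = line.trim().split("\\s+");
                    return Long.parseLong(cols[1]);   // second column: total size in 1K blocks
                } finally {
                    r.close();
                }
            }

            public static void main(String[] args) throws Exception {
                System.out.println(totalKilobytes("/") + " KB total on /");
            }
        }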

    Read the article

  • Making use of C++ to speed up PHP

    - by Ygam
    I saw this post on SitePoint quoting a statement by Rasmus Lerdorf, which goes (according to SitePoint) as follows: "How can you make PHP fast? Well, you can't" was his quick answer. PHP is simply not fast enough to scale to Yahoo levels; PHP was never meant for those sorts of tasks. "Any script based language is simply not fast enough." To get the speed necessary for truly massive web systems you have to use compiled C++ extensions to get a true, scalable architecture. That is what Yahoo does, and so do many other PHP heavyweights. Intrigued by the statement (not to mention the fact that up to now all I have done in PHP is small database-backed apps), I was wondering how I could "use compiled C++ extensions" with PHP. Any ideas or resources?
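
    The usual route is a Zend extension: C (or C++ wrapped in extern "C" around the PHP headers) compiled into a shared module whose functions are callable from PHP scripts. A rough sketch of the classic PHP 5-era skeleton; the extension name "speedup" and its function are placeholders, a real project also needs a config.m4 and a phpize/configure/make build step, and the details shift between PHP versions:

        #include "php.h"

        /* Callable from PHP as speedup_add(1, 2). */
        PHP_FUNCTION(speedup_add)
        {
            long a, b;
            if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, "ll", &a, &b) == FAILURE) {
                RETURN_NULL();
            }
            RETURN_LONG(a + b);   /* the heavy lifting would happen here, in native code */
        }

        static zend_function_entry speedup_functions[] = {
            PHP_FE(speedup_add, NULL)
            {NULL, NULL, NULL}
        };

        zend_module_entry speedup_module_entry = {
            STANDARD_MODULE_HEADER,
            "speedup",
            speedup_functions,
            NULL, NULL, NULL, NULL, NULL,   /* MINIT, MSHUTDOWN, RINIT, RSHUTDOWN, MINFO */
            "0.1",
            STANDARD_MODULE_PROPERTIES
        };

        ZEND_GET_MODULE(speedup)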

    Read the article

  • Embedded Linux: Memory Fragmentation

    - by waffleman
    In many embedded systems, memory fragmentation is a concern, particularly for software that runs for long periods of time (months, years, etc.). For many projects, the solution is simply not to use dynamic memory allocation such as malloc/free and new/delete. Global memory is used whenever possible, and memory pools for types that are frequently allocated and deallocated are a good strategy for avoiding dynamic memory management. In embedded Linux, how is this addressed? I see many libraries use dynamic memory. Is there a mechanism the OS uses to prevent memory fragmentation? Does it clean up the heap periodically? Or should one avoid using these libraries in an embedded environment?
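
    For reference, the memory-pool idea mentioned above can be as small as the sketch below (block size and count are arbitrary examples): because every block has the same size and comes from one static array, the allocator cannot fragment no matter how long the system runs:

        #include <stddef.h>

        #define BLOCK_SIZE   64
        #define BLOCK_COUNT  128

        typedef union block {
            union block *next;              /* valid while the block sits on the free list */
            unsigned char data[BLOCK_SIZE]; /* payload when the block is handed out */
        } block_t;

        static block_t pool[BLOCK_COUNT];
        static block_t *free_list;

        void pool_init(void)
        {
            for (size_t i = 0; i + 1 < BLOCK_COUNT; i++)
                pool[i].next = &pool[i + 1];
            pool[BLOCK_COUNT - 1].next = NULL;
            free_list = &pool[0];
        }

        void *pool_alloc(void)
        {
            block_t *b = free_list;
            if (b != NULL)
                free_list = b->next;
            return b;                       /* NULL when the pool is exhausted */
        }

        void pool_free(void *p)
        {
            block_t *b = (block_t *)p;
            b->next = free_list;
            free_list = b;
        }

    As written it is not thread-safe; on a real system the list operations would be wrapped in a mutex or done with interrupts masked. It also does nothing for third-party libraries, which keep using the ordinary heap.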

    Read the article

  • Service Layer Pattern - Could we avoid the service layer in a specific case?

    - by lidermin
    Hi, we are trying to implement an application using the Service Layer pattern, because our application needs to connect to multiple other applications as well, and googling on the web we found this link to an illustrative diagram of the "right" way to apply the pattern: martinfowler.com - Service Layer Pattern. But now we have a question: what if our system needs to implement some business logic only for our own application (like some maintenance data for the system itself) that we don't need to share with other systems? Based on that diagram, it seems unnecessary to implement a service layer just for that; it would be more practical to skip the service layer and go straight from the user interface to the business layer (for example). What would be the right way to apply the Service Layer pattern in this case? What do you suggest for a scenario like the one described? Thanks in advance.

    Read the article

  • Are there programming languages that rely on non-Latin alphabets?

    - by Jaxsun
    Every programming language I have ever seen has been based on the Latin alphabet, which is not surprising considering I live in Canada... But it only really makes sense that there would be programming languages based on other alphabets; otherwise bright computer scientists across the world would have to learn a new alphabet to get on in the field. I know for a fact that people in countries dominated by other alphabets develop languages based on the Latin alphabet (e.g. Ruby from Japan), but just how common is it for programming languages to be based on other alphabets like Arabic or Cyrillic, or even on writing systems which are not alphabetic but logographic in nature, such as Japanese kanji? Also, are any of these languages in active widespread use, or are they mainly used as teaching tools? This is something that has bugged me since I started programming, and I have never run across someone who could give a real answer.

    Read the article

  • Synchronising scripts / db / files from dev system to web server

    - by Spoonface
    I work as a freelance web dev, and up until now I have been FTPing my scripts / databases / static files to my web server manually, but I'm finding that too error-prone. So I'm looking for an app to automate uploading new and updated scripts / files / databases / etc. I know a lot of independent devs use WinSCP or Unison, but I don't think those apps can sync databases. Does anyone have any other suggestions? It doesn't need to be anything overly feature-rich, as I'm not working within a team or across multiple operating systems or anything like that. I can purchase any reasonably priced license if necessary. My work is primarily PHP / MySQL / Apache developed on a Windows system and then uploaded to a Linux / Apache server. Thanks for your time!

    Read the article

  • What is the best way to interoperably serialize a message?

    - by iwein
    I'm considering message serialization support for spring-integration. This would be useful for various wire-level transports to implement guaranteed delivery, but also to allow interoperability with other messaging systems (e.g. through AMQP). The fundamental problem is that a message containing Java objects in its payload and headers has to be converted to a byte[] and/or written to a stream. Java's own serialization is clearly not going to cut it, because it is not interoperable. My preference would be to create an interface that allows the user to implement the needed logic for all objects that take part in serialization. Is this a sensible idea, and what would the interface look like? Is there a standard, interoperable way to serialize objects that would make sense in this context?
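
    As a starting point for discussion, one possible shape for such a user-implemented contract (the type and method names below are invented for illustration and are not an existing spring-integration API):

        import java.io.IOException;

        // A pluggable serializer the framework could call for payloads and header values.
        public interface PayloadSerializer<T> {

            /** Content-type identifier to put on the wire, e.g. "application/json". */
            String contentType();

            /** Turn an object into an interoperable byte representation. */
            byte[] serialize(T object) throws IOException;

            /** Rebuild the object on the receiving side, which may not even be a JVM. */
            T deserialize(byte[] bytes, Class<T> targetType) throws IOException;
        }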

    Read the article

< Previous Page | 212 213 214 215 216 217 218 219 220 221 222 223  | Next Page >