Search Results

Search found 7580 results on 304 pages for 'coordinate systems'.


  • Why is Android VM-based? [closed]

    - by adib
    By about 2004, it was clear that ARM had won the mobile CPU space, beating out MIPS, SH3, and DragonBall. PocketPC (Windows Mobile) applications were natively compiled (at least most of them, .NET Compact and its competitors aside). Likewise, Apple's iOS (named iPhone OS at the time) preferred natively compiled applications. So why did Android choose a virtual-machine-based system stack (the Dalvik VM)? Wouldn't it have been simpler to compile applications down to ARM code using GCJ or something? Was the decision influenced by the J2ME way of doing things, or was it just because it's "cool"? Perhaps, like most things Java, with its culture that prefers multiple levels of indirection and abstraction, another layer of abstraction was added "just in case"?

    Read the article

  • Which approach is the most maintainable?

    - by 2rs2ts
    When creating a product that will inherently suffer regressions from OS updates, which approach is preferable for reducing maintenance cost and the likelihood of refactoring, given the task of interpreting system state and settings for a lay user? (1) Delegate the responsibility of interpreting the results of inspecting the system to the modules that perform the inspection, or (2) separate the concerns of interpretation and inspection into two modules? The first obviously creates a blob in which a lot of code would be verbose, redundant, and hard to grok; the second creates a strong coupling in which the interpretation module has to know exactly what to expect from the inspection routines and must adapt to OS changes just as much as the inspection code does. I would normally choose the second option for the separation of concerns, foreseeing the possibility that inspection routines could be reused. But a developer updating the product for a new OS feature would have to write not only an inspection routine but also an interpretation routine, and link the two correctly; it gets worse for a developer who has to change which inspection routines are used to read a certain system setting, or worse yet, fix an inspection routine that broke after an OS patch. I wonder: is it better to have to patch one package a lot, or two packages, each somewhat less?
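
    A minimal sketch of option (2), in C++ with hypothetical names (a firewall check stands in for any system setting). The only coupling between the two modules is the raw-fact vocabulary they share, which is exactly the coupling the question worries about:

        // Inspection knows how to read raw OS state, nothing about wording.
        // Interpretation knows how to explain raw facts to a lay user,
        // nothing about how they were obtained. All names are invented.
        #include <iostream>
        #include <map>
        #include <string>

        struct FirewallInspector {
            // A real version would query the OS; this stub returns a raw code.
            std::string inspect() const { return "fw_state=off"; }
        };

        struct FirewallInterpreter {
            std::string interpret(const std::string& rawFact) const {
                static const std::map<std::string, std::string> wording = {
                    {"fw_state=on",  "Your firewall is protecting this computer."},
                    {"fw_state=off", "Your firewall is turned off; you may be at risk."}
                };
                auto it = wording.find(rawFact);
                return it != wording.end() ? it->second
                                           : "Unable to read the firewall state.";
            }
        };

        int main() {
            FirewallInspector inspector;
            FirewallInterpreter interpreter;
            std::cout << interpreter.interpret(inspector.inspect()) << "\n";
        }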

    Read the article

  • Is there a difference between multi-tasking and time-sharing?

    - by Dummy Derp
    Just going over my school notes: my teacher identifies multi-tasking OSes and time-sharing OSes as two different things, but I really don't see a difference between the two. MULTI-TASKING: you load a number of programs into memory and execute them; you switch to another program if the time quantum allocated to the current program expires, OR if it goes off to do I/O and leaves the CPU, OR if it finishes execution. TIME-SHARING: the same, again. The same applies to serial processing versus batch processing. Although they are the same, I guess the only difference would be the way control information is passed to the CPU. Maybe, and again MAYBE, in serial processing you need to provide the punch cards with control information for every process, while in batch processing the entire batch uses the same set of control information, like all the print jobs sharing the same control information.
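
    Both terms usually boil down to the same mechanism the notes describe. A toy round-robin simulation of that rule in C++ (a task leaves the CPU when its quantum expires or when it finishes; I/O blocking is omitted, and the task names and quantum are made up):

        #include <algorithm>
        #include <deque>
        #include <iostream>
        #include <queue>
        #include <string>

        struct Task { std::string name; int remaining; };

        int main() {
            const int quantum = 3;                       // ticks per turn
            std::deque<Task> initial{{"A", 5}, {"B", 2}, {"C", 7}};
            std::queue<Task> ready(initial);
            while (!ready.empty()) {
                Task t = ready.front(); ready.pop();
                int ran = std::min(quantum, t.remaining);
                t.remaining -= ran;
                std::cout << t.name << " ran " << ran << " ticks\n";
                if (t.remaining > 0) ready.push(t);      // quantum expired
                else std::cout << t.name << " finished\n";
            }
        }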

    Read the article

  • How did we get saddled with the (hierarchical) filesystem as the basic data structure?

    - by user1936
    I'm self-taught and I don't have a CS degree. The more I learn about data structures, the more I wonder how, in this day and age, we are still saddled with the filesystem, with directories and files, as the basic data storage structure on the OS. I understand its simplicity, but it seems nowadays that there could be more options available natively. As far as I'm aware, the only project to improve the basic functionality of the filesystem was ReiserFS, where you could tell what line of a file was changed by whom, and when. For instance, if I had native tagging for files, I could tag images, diagrams, word-processing documents, and an entire code repository as all belonging to a single project; that would really be helpful to me. Since I'm stuck in the filesystem paradigm, I know I could put all of those into a single folder/directory, but what if they already exist in disparate directories and need to stay there? I know there are programs that can do this, but why isn't it in the filesystem? Something else that would be nice is some kind of relational feature, like you get with RDBMSes. I understand that was supposed to be part of Vista/7, but it fell off the feature list too. Sure, any program can store a binary file with any data structure it wants in it, but why couldn't the OS offer more complex ways of storing data, beyond the simple hierarchy of the filesystem?
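
    For what it's worth, the tagging idea can be sketched as a small index layered on top of an ordinary filesystem, which is roughly what the desktop programs mentioned above do. A minimal C++ illustration (paths and tags invented) grouping files from disparate directories under one project tag without moving them:

        #include <filesystem>
        #include <iostream>
        #include <map>
        #include <set>
        #include <string>

        namespace fs = std::filesystem;

        class TagIndex {
            std::map<std::string, std::set<fs::path>> byTag_;
        public:
            void tag(const fs::path& file, const std::string& tag) {
                byTag_[tag].insert(file);
            }
            std::set<fs::path> filesWith(const std::string& tag) const {
                auto it = byTag_.find(tag);
                return it != byTag_.end() ? it->second : std::set<fs::path>{};
            }
        };

        int main() {
            TagIndex index;
            index.tag("/home/me/images/diagram.png", "project-x");
            index.tag("/home/me/docs/spec.odt", "project-x");
            index.tag("/home/me/src/repo", "project-x");
            for (const auto& p : index.filesWith("project-x"))
                std::cout << p << "\n";   // grouped without moving anything
        }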

    Read the article

  • File system with chained clusters

    - by Maki Maki
    I'm trying to create a school file system with partitions on disks; every partition is represented by its clusters. typedef unsigned long ClusterNo; const unsigned long ClusterSize = 2048; int x, y; // x, y are entries for two chained lists of clusters ... if (endOfFile < maxsize) { ... pointer = KernelFS::searchFreeCluster(partitionPointer->letter); } How can I initialize the beginnings of the two cluster chains so that their pointers are 32 bits?
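
    One hedged way to get 32-bit cluster pointers, assuming a FAT-style design (the constants and sentinel values below are assumptions, not the assignment's API): store each chain as a table of uint32_t entries, where an entry holds the number of the next cluster or a sentinel.

        #include <cstdint>
        #include <iostream>
        #include <vector>

        using ClusterNo = std::uint32_t;             // 32-bit cluster pointer
        const ClusterNo FREE_CLUSTER = 0;            // sentinel: unused entry
        const ClusterNo END_OF_CHAIN = 0xFFFFFFFFu;  // sentinel: last cluster

        int main() {
            std::vector<ClusterNo> chain(16, FREE_CLUSTER);  // toy cluster table
            // Initialize the beginnings of two independent chains:
            chain[1] = 2;  chain[2] = END_OF_CHAIN;          // chain 1: 1 -> 2
            chain[5] = END_OF_CHAIN;                         // chain 2: 5
            for (ClusterNo c = 1; c != END_OF_CHAIN; c = chain[c])
                std::cout << "cluster " << c << "\n";        // walk chain 1
        }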

    Read the article

  • Understanding computers [closed]

    - by Ashwin
    Possible Duplicate: Good resources to understand how a program interacts with machine hardware. I don't know if this is the correct StackExchange site to ask this question, but I could not find any other. I want to understand how a computer works, from the software level down to the internal structure. For example, what happens when I press a key on the keyboard? The OS interprets it, and then what changes happen in the flip-flops? How is an operating system written? If it is written in some programming language, then how is that language's compiler or interpreter written? At some point it has to come down to the hardware, right? I know how to program in C, C++ and Java, but after all these years I am still not sure what is happening inside. I would be grateful to anyone who points me to a link or a video that explains this in depth.

    Read the article

  • New: ZFS Storage Appliance Videos

    - by Roxana Babiciu
    Check out part one of a new video series for the ZFS Storage Appliance. In video #1, you’ll learn about the advantages built into Oracle’s ZS3 Storage Appliance that come from the unique position Oracle holds in the market. In video #2, you’ll learn how best to monitor large ZS3 installations, as well as the use of Enterprise Manager as a complement to DTrace analytics at the ZFS Storage Appliance device level.

    Read the article

  • How to start embedded development for developing a handheld game console?

    - by Quakeboy
    I work as an iPhone app developer now, so I know a bit of C, C++ and Objective-C, and I have fiddled with Java and many others. All of that has been high-level application/game development. My end goal is to make a handheld game console, more like a home-made NES/SNES handheld console, or even an Atari. I have found out about the Raspberry Pi and Arduino, but I need more information about how to approach this. 1) How do I learn to pick the best board/CPU/controller/GPU/LCD screen/LCD controller, etc.? 2) Will learning to write an NES emulator first help me understand this field? If so, are there any tutorials?

    Read the article

  • Matlab: Why is '1' + 1 == 50? [migrated]

    - by phi
    Matlab has weak dynamic typing, which is what causes this weird behaviour. What I do not understand is what exactly happens, as this result really surprises me. Edit: to clarify, what I'm describing is clearly a result of Matlab storing chars in ASCII format, which was also mentioned in the comments. I'm more interested in the way Matlab handles its variables, and specifically, how and when it assigns a type/tag to the values. '1' is a 1-by-1 matrix of chars in Matlab and '123' is a 1-by-3 matrix of chars. As expected, 1 returns a 1-by-1 double. Now if I enter '1' + 1 I get 50 as a 1-by-1 double, and if I enter '123' + 1 I get a 1-by-3 double [50 51 52]. Furthermore, if I type 'a' + 1 the result is 98 in a 1-by-1 double. I assume this has to do with how Matlab stores char variables in ASCII form, but how exactly is it handling them? Are the data actually unityped and tagged, or how does it work? Thanks.
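
    The same char-to-integer promotion is easy to reproduce in C or C++, where '1' likewise carries its ASCII code (49) and arithmetic promotes it to an integer; a minimal analogue of the Matlab expressions above:

        #include <iostream>
        #include <string>

        int main() {
            std::cout << '1' + 1 << "\n";    // 50, because '1' is ASCII 49
            std::cout << 'a' + 1 << "\n";    // 98, because 'a' is ASCII 97
            for (char c : std::string("123"))
                std::cout << c + 1 << " ";   // 50 51 52, element-wise like '123' + 1
            std::cout << "\n";
        }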

    Read the article

  • Exadata - Following up on customer deployments

    - by Carlos M. Orozco -Oracle
    Over the last year or so I've been visiting customers who have Exadata deployed and have been enjoying the benefits the platform provides. Benefits include greater performance, consolidation of multiple databases, data compression, and faster time to value. Most often I hear "my reports run faster." One hospitality company reported that reports which used to take 3 hours now run in 12 seconds. A services company reported that batch reports which took 11 hours now run in 38 minutes, that their transactions post faster, and that batch updates run faster. So what does that mean? For most of them it means they now have a platform that can handle growth. Most are growing 15% organically, but I've also seen 40% growth through acquisition. Exadata has been keeping up with the additional data demand as customers leverage compression and the smart storage features.

    Read the article

  • Why was Tanenbaum wrong in the Tanenbaum-Torvalds debates?

    - by Robz
    I was recently assigned reading from the Tanenbaum-Torvalds debates in my OS class. In the debates, Tanenbaum makes some predictions: microkernels are the future; x86 will die out and RISC architectures will dominate the market (within 5 years); everyone will be running a free GNU OS. I was one year old when the debates happened, so I lack historical intuition. Why have these predictions not panned out? It seems to me that, from Tanenbaum's perspective, they're pretty reasonable predictions of the future. What happened so that they didn't come to pass?

    Read the article

  • Any pre-rolled System.IO abstraction libraries out there for Unit Testing?

    - by Binary Worrier
    To test methods that use the file system, we basically need to put System.IO behind a set of interfaces that we can then mock; I do this with a DiskIO class and interface. As my DiskIO code got larger (and the grumblings from the "we're unconvinced about this TDD thing" crowd here at work got louder), I went looking for a comprehensive open-source library that already does this and found... nothing. I may be looking in the wrong place, or I may have approached this problem in completely the wrong way, but I can't be the only idiot in this position. Do these libraries exist, and if so, where? Any you've used and would recommend? Thanks. P.S. I'm happy with my current approach, i.e. starting with what we need and adding only when the need arises. Unfortunately the "we're unconvinced about this TDD thing" crowd remain unconvinced, and think that I can't be right.
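
    The pattern itself is language-agnostic; a minimal sketch of it in C++ (the question is about .NET's System.IO, and all names here are hypothetical): production code depends on a small interface, the real implementation forwards to the filesystem, and tests substitute an in-memory fake.

        #include <fstream>
        #include <map>
        #include <sstream>
        #include <string>

        struct IDiskIO {                        // the seam that enables mocking
            virtual ~IDiskIO() = default;
            virtual std::string readAll(const std::string& path) = 0;
            virtual void writeAll(const std::string& path,
                                  const std::string& text) = 0;
        };

        struct RealDiskIO : IDiskIO {           // thin wrapper over the real FS
            std::string readAll(const std::string& path) override {
                std::ifstream in(path);
                std::ostringstream buf;
                buf << in.rdbuf();
                return buf.str();
            }
            void writeAll(const std::string& path,
                          const std::string& text) override {
                std::ofstream(path) << text;
            }
        };

        struct FakeDiskIO : IDiskIO {           // in-memory double for tests
            std::map<std::string, std::string> files;
            std::string readAll(const std::string& path) override {
                return files[path];
            }
            void writeAll(const std::string& path,
                          const std::string& text) override {
                files[path] = text;
            }
        };

        int main() {                            // a unit test never touches disk
            FakeDiskIO fake;
            fake.writeAll("config.txt", "mode=test");
            return fake.readAll("config.txt") == "mode=test" ? 0 : 1;
        }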

    Read the article

  • What tales of horror have you regarding "whitespace" errors?

    - by reechard
    I'm looking for tales of woe: companies, websites, and products failing; religious flame wars; data loss. Examples: text-editor settings conflicts (indent at 4 with tabs at 8 vs. indent at 2 with tabs at 4); Windows line endings vs. Unix line endings; text vs. binary files in source code control. Related terms: "line feed", "carriage return", "horizontal tab", "mono spacing", "unix line endings", "version control", "diff", "merge", "ftp".

    Read the article

  • Interface hierarchy design for separate domains

    - by jerzi
    There are businesses and people. People can be liked and businesses can be commented on: class Like; class Comment; class Person implements iLikeTarget; class Business implements iCommentTarget. Likes and comments are performed by a user (a person), so they are authored: class Like implements iAuthored; class Comment implements iAuthored. A person's likes can also appear in their history: class history; class Like implements iAuthored, iHistoryTarget. Now a smart developer comes along and says that each history entry is attached to a user, so history targets should be authored: interface iHistoryTarget extends iAuthored. iAuthored can then be removed from class Like: class Person implements iLikeTarget; class Business implements iCommentTarget; class Like implements iHistoryTarget; class Comment implements iAuthored; class history; interface iHistoryTarget extends iAuthored. Here another smart guy comes with a question: how can I capture the "authored" fact in the Like and Comment classes? He may know nothing about the history concept in the project. As this kind of functionality scales, interfaces tend to get folded into the types that encapsulate them, which strengthens the type hierarchy; on the other hand, explicitness suffers, and users of the code have to dig through the hierarchy to see what a type actually supports. So here is the question: should I fold these dependent interfaces into their parent types (interface hierarchies), or explicitly repeat each interface at every level of my type system, or something else?
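
    The trade-off can be made concrete with a small C++ sketch (abstract classes standing in for the interfaces; names mirror the post's pseudo-code). Folding iAuthored into iHistoryTarget keeps Like's declaration short, but the "authored" fact is now visible only to someone who knows the hierarchy:

        #include <string>

        struct IAuthored {
            virtual std::string author() const = 0;
            virtual ~IAuthored() = default;
        };

        // Compact layout: every history target is, by definition, authored.
        struct IHistoryTarget : IAuthored {};

        struct Like : IHistoryTarget {          // "authored" is now implicit
            std::string author() const override { return "alice"; }
        };

        int main() {
            Like like;
            const IAuthored& a = like;          // still usable as IAuthored
            return a.author() == "alice" ? 0 : 1;
        }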

    Read the article

  • Issues with time slicing

    - by user12331
    I was trying to see the effect of time slicing, and how it can consume a significant amount of time. I divided a certain amount of work among a number of threads and observed the effect. I have a two-core processor, so two threads can run in parallel. I wanted to compare work w done by 2 threads against the same work done by t threads, each thread doing w/t of the work: how big a role does time slicing play? Since time slicing is a time-consuming process, I was expecting the t-thread version to take longer. Any suggestions?
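
    A minimal version of that experiment in C++ (the work size is arbitrary; each thread gets w/t of a fixed busy loop). Timing t = 2, 4, 8, 16 on a two-core machine shows what the extra context switching costs:

        #include <atomic>
        #include <chrono>
        #include <iostream>
        #include <thread>
        #include <vector>

        std::atomic<unsigned long long> sink{0};  // keeps the loop from being optimized away

        void burn(unsigned long long iterations) {
            unsigned long long local = 0;
            for (unsigned long long i = 0; i < iterations; ++i) local += i;
            sink += local;
        }

        int main() {
            const unsigned long long work = 400000000ULL;  // total work w
            for (unsigned t : {2u, 4u, 8u, 16u}) {
                auto start = std::chrono::steady_clock::now();
                std::vector<std::thread> threads;
                for (unsigned i = 0; i < t; ++i)
                    threads.emplace_back(burn, work / t);  // each does w/t
                for (auto& th : threads) th.join();
                auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                              std::chrono::steady_clock::now() - start).count();
                std::cout << t << " threads: " << ms << " ms\n";
            }
        }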

    Read the article

  • Does Turing-completeness imply the possibility of malware? [closed]

    - by Mathematician82
    Is it possible to build an operating system that contains a Turing-complete compiler (language?) but is unable to run any malware? Or is there even a formal definition of malware? This question popped into my mind as I was wondering why Windows has more malware than Linux. If Linux includes a C compiler, I think it is possible to write a Linux program that works similarly to a Windows virus. But there is less malware for Linux than for Windows, even though Wine exists on Linux to run Windows programs.

    Read the article

  • Is there an imperative language with a Haskell-like type system?

    - by Graham Kaemmer
    I've tried to learn Haskell a few times over the last few years and, maybe because I mainly know scripting languages, its functional nature has always bothered me (monads seem like a huge mess for doing lots of I/O). However, I think its type system is perfect. Reading through a guide to Haskell's types and typeclasses (like this), I don't really see a reason why they would require a functional language; furthermore, they seem like they would be perfect for an industry-grade object-oriented language (like Java). This all raises the question: has anyone ever taken Haskell's type system and built an imperative, OOP language around it? If so, I want to use it.

    Read the article

  • Automatically delete files after they expire

    - by Auxiliary
    I've had this idea for some time, and I was wondering if anyone has seen such a feature/app in any operating system; if you haven't, what do you think about it, and where should I begin? The idea is simple: I think we all have files that are created, used for a few days, and then left on our disk, and we never delete them or even check whether we need them again. It'd be cool if you could right-click on a file and choose "Expire in... 3 days", for example, and the file would be deleted after 3 days. I have a great need for this, and maybe some people will find it useful. I was thinking of writing a script and using the Nautilus Actions project in GNOME for a start.
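
    As a starting point, the sweep half of the idea fits in a few lines; a hedged C++17 sketch (it approximates "expired" as last-modified more than N days ago, where a real version would store a per-file expiry date, e.g. recorded by the Nautilus action):

        #include <chrono>
        #include <filesystem>
        #include <iostream>
        #include <string>

        namespace fs = std::filesystem;

        int main(int argc, char** argv) {
            if (argc < 3) { std::cerr << "usage: sweep <dir> <days>\n"; return 1; }
            const fs::path dir = argv[1];
            const auto maxAge = std::chrono::hours(24 * std::stoi(argv[2]));
            const auto now = fs::file_time_type::clock::now();
            for (const auto& entry : fs::recursive_directory_iterator(dir)) {
                if (!entry.is_regular_file()) continue;
                if (now - entry.last_write_time() > maxAge) {
                    std::cout << "expired: " << entry.path() << "\n";
                    fs::remove(entry.path());
                }
            }
        }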

    Read the article

  • Top Exastack Partner Headlines - 7 Nov

    - by Roxana Babiciu
    R-Style Softlab recommends RS-solutions with Exadata and SuperCluster to improve business performance for their customers. Read more. WingArc, a subsidiary of 1st Holdings, Inc., finds Oracle Exadata’s substantial computing power an enabler for their enterprise output management solution (SVF/RD): it helps solve the problem of providing an integrated output platform for high-volume businesses. Read more.

    Read the article

  • What's your main development operating system? Why? [closed]

    - by Anto
    What do you use as your main operating system for developing software (you might use another for testing, gaming, entertainment, etc.), and most importantly, why? To speak for myself: I use Ubuntu and Kubuntu (it varies between those two Linux distributions) because they are easy to get things done with, have all the development tools I need, and are fast, stable and safe. And I don't think I could manage without the UNIX utilities anymore.

    Read the article

  • Oracle Virtual Compute Appliance Launch Channel Update Webcast - May 28

    - by Roxana Babiciu
    Join us for an Oracle Virtual Compute Appliance (OVCA) launch update for the channel. This training webcast is a follow-up to the OVCA launch on April 16. We will provide a brief product overview of OVCA, followed by some great OPN program content: resell criteria, OPN Incentive Program and Demo Equipment Program details. There will be two sessions to accommodate each region. Additionally, don’t miss the latest Oracle Virtual Compute Appliance article, packed with great information!

    Read the article

  • Atomic operations on several transactionless external systems

    - by simendsjo
    Say you have an application connected to three different external systems, and you need to update something in all three. In case of a failure, you need to roll back the operations. That is not hard to implement, but say operation 3 fails, and then, while rolling back, the rollback for operation 1 fails! Now the first external system is in an invalid state. I'm thinking a possible solution is to shut down the application and force a manual fix of the external system, but then again... it might already have used this information (perhaps that's why it failed), or we might not have sufficient access, or rollback might not even be the right way to undo the action! Are there good ways of handling such cases? EDIT: some application details. It's a multi-user web application. Most of the work is done by scheduled jobs (through Quartz.Net), so most operations run in their own threads, though some user actions trigger jobs that update several systems. The external systems are somewhat unstable. I was thinking of changing the application to use the Command and Unit of Work patterns.
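
    A sketch of the Command + Unit of Work idea mentioned in the edit, in C++ (all names hypothetical; real steps would call the three external systems): each step carries a compensating action, and a compensation that itself fails is recorded for manual repair instead of being silently swallowed.

        #include <functional>
        #include <iostream>
        #include <string>
        #include <vector>

        struct Step {
            std::string name;
            std::function<bool()> execute;     // returns false on failure
            std::function<bool()> compensate;  // best-effort undo
        };

        bool runAll(std::vector<Step>& steps) {
            std::vector<Step*> done;
            for (auto& s : steps) {
                if (s.execute()) { done.push_back(&s); continue; }
                std::cerr << s.name << " failed; rolling back\n";
                for (auto it = done.rbegin(); it != done.rend(); ++it)
                    if (!(*it)->compensate())  // undo failed: flag for a human
                        std::cerr << "MANUAL FIX NEEDED: could not undo "
                                  << (*it)->name << "\n";
                return false;
            }
            return true;
        }

        int main() {
            std::vector<Step> steps = {
                {"system-1 update", [] { return true;  }, [] { return false; }},
                {"system-2 update", [] { return true;  }, [] { return true;  }},
                {"system-3 update", [] { return false; }, [] { return true;  }},
            };
            return runAll(steps) ? 0 : 1;  // demonstrates the double failure
        }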

    Read the article

  • Tips for Using Multiple Development Systems

    - by Tim Lytle
    When I travel, I don't pack up the desktop I use in the office and take it with me. Maybe I should, but I don't. However, since I'm a contract programmer, I like to be able to work wherever I am; I'm mostly thinking of web development here. Version control goes a long way toward staying sane while working on multiple projects across multiple systems (two or three computers); however, there are still the issues of: IDE settings (different display sizes mean the IDE settings can't be completely synced, if at all); the database (if the database is 'external', it's not in version control even when it runs on the same system, so how do you keep its structure in sync?); and the development stack (some projects need non-standard extensions, libraries, etc. installed). That's an overview of some of the hassle involved with developing on multiple systems. I'll probably end up asking some specific questions, but I thought CW-style tips might reveal things I wouldn't even think to ask about. Update: I guess this would also cover tips to make upgrading or replacing your development system easier (something I've just done). So, one tip per answer please, so the 'top' tips are easy to find. How do you make it easier to develop on multiple systems, or to transfer work after upgrading or replacing a development system?

    Read the article
