Search Results

Search found 15241 results on 610 pages for 'solaris operating environment'.

Page 42/610 | < Previous Page | 38 39 40 41 42 43 44 45 46 47 48 49  | Next Page >

  • Physics Loop in a NodeJS/Socket.IO Environment

    - by Thomas Mosey
    I'm developing a 2D HTML5 Canvas game, and I am trying to think of the most efficient way to implement a physics loop on the server end, running Node.js and Socket.IO. The only method I've thought of is using setTimeout/setInterval; is there any better way? Any examples would be appreciated. EDIT: The game is a top-down game, like Zelda and the older Pokemon games. Most of the physics done in the loop will be simple intersection tests.

    Read the article

  • Why is Android VM-based? [closed]

    - by adib
    By about 2004, it was clear that ARM was the winner for mobile CPUs, beating out MIPS, SH3, and DragonBall. PocketPC (Windows Mobile) applications were natively compiled (at least most of them, except for .NET Compact and its competitors). Likewise, Apple's iOS (named iPhone OS at the time) prefers natively compiled applications. So why did Android choose a virtual-machine-based system stack (the Dalvik VM)? Wouldn't it be simpler to just compile applications down to ARM code using GCJ or something? Was the decision influenced by the J2ME way of doing things, or was it just because it's "cool"? Or perhaps, like most things Java, with a culture that prefers multiple levels of indirection and abstraction, they just added another layer of abstraction "just in case"?

    Read the article

  • Which approach is the most maintainable?

    - by 2rs2ts
    When creating a product which will inherently suffer from regression due to OS updates, which of these is the preferable approach for reducing maintenance cost and the likelihood of needing refactoring, when the task is interpreting system state and settings for a lay user? (1) Delegate the responsibility of interpreting the results of inspecting the system to the modules which perform those inspections, or (2) separate the concerns of interpretation and inspection into two modules? The first obviously creates a blob in which a lot of code would be verbose, redundant, and hard to grok; the second creates a strong coupling in which the interpretation module essentially has to know what it expects from the inspection routines and will have to adapt to changes in the OS just as much as the inspection code will. I would normally choose the second option for the separation of concerns, foreseeing the possibility that inspection routines could be re-used, but a developer updating the product to deal with a new OS feature would have to not only write an inspection routine but also write an interpretation routine and link the two correctly; and it gets worse for a developer who has to change which inspection routines are used to get a certain system setting, or worse yet, has to fix an inspection routine that broke after an OS patch. I wonder, is it better to have to patch one package a lot, or two packages, each somewhat less so?

    Read the article

  • How to determine the right amount of up front design?

    - by Gian
    Software developers occasionally are called upon to write fairly complex bits of software under tight deadlines. Often, it seems like the quickest thing to do is to simply start coding, and solve the problems as they arise. However, this approach can come back to bite you—often costing time or money in the long run! How do we determine the right amount of up front design work? If your work environment actively discourages you from thinking about things up front, how do you handle that? How can we manage risk if we eschew up-front thinking (by choice or under duress) and figure out the problems as they arise? Does the amount of up front design depend entirely on the size or complexity of the task, or is it based on something else?

    Read the article

  • Update to Alert on Java Runtime Environment (JRE) for EBS end-users on Windows

    - by user793553
    To ensure that Java users remain on a secure version, Windows systems that rely on auto-update will be auto-updated from JRE 6 to JRE 7. Until E-Business Suite is certified with JRE 7, EBS users should not rely on the Windows auto-update mechanism for their client machines and should manually keep the JRE up to date with the latest version of JRE 6 until further notice.   Click here for more details and for instructions on how to get the latest version of JRE 6  

    Read the article

  • Is there a difference between multi-tasking and time-sharing?

    - by Dummy Derp
    Just going over my school notes: my teacher identifies a multi-tasking OS and a time-sharing OS as two different things, but I really don't see a difference between the two. MULTI-TASKING: you load a number of programs into memory and execute them; you switch to another program if the time quantum allocated to the current program expires, OR if it goes off to do I/O and leaves the CPU, OR if it finishes execution. TIME-SHARING: the same, again. The same applies to serial processing and batch processing. Although they seem the same, I guess the only difference would be the way in which control information is passed to the CPU. Maybe, and again MAYBE, in serial processing you need to provide the punch cards with all the processes, while in batch processing the entire batch uses the same set of control information, like all the print jobs having the same control information.

    Read the article

  • How to change the default desktop environment?

    - by Kshitiz Sharma
    I wanted to try out different desktop environments, so I installed XFCE, KDE, GNOME, etc. on top of Unity in Ubuntu 12.04. After a while I decided that I didn't like those other DEs and would stick with Unity, so I changed my default DE back to Unity by reconfiguring gdm: sudo dpkg-reconfigure gdm. Now I am able to choose my DE at login time and all of the DEs are working properly. But the strange thing is that my boot-up screen says 'lubuntu', my login screen is KDE, and my desktop is Unity. How and why is this happening? Why didn't my gdm configuration have any effect? Do the login and boot-up screens need to be configured separately from the DE? There are other similar questions here, but they are not the same as this one. I do not want to remove the other environments; I'm quite happy having a list of DEs to select from. I want to know how to set proper defaults.
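
    A sketch of where each piece lives may help, assuming a stock Ubuntu 12.04 install: the boot splash is a Plymouth theme, the login screen belongs to the display manager, and the session is whatever you pick at login, so each is configured separately.

        # Which display manager is currently the default (gdm, kdm, lightdm, ...)
        cat /etc/X11/default-display-manager

        # Re-pick the default display manager (this controls the login screen)
        sudo dpkg-reconfigure gdm

        # The boot splash is a separate Plymouth theme; list the installed themes and switch
        sudo update-alternatives --config default.plymouth
        sudo update-initramfs -u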

    Read the article

  • Unity environment way too slow in Ubuntu 13.10

    - by Santiago
    Unity and its apps are very slow to open; it takes a while for a window to appear completely, although everything works properly once the window is open. The biggest problem is with the Dash: it's SO SLOW when I'm looking for an app, even though I have removed some lenses. What should I do, or what can I do? These issues only occur with Ubuntu 13.04 and 13.10, whereas 12.04 works AMAZINGLY, but on 12.04 I have issues when updating a package or installing a new one, which is why I don't opt for that one. Specifications: RAM: 2GB, Processor: Intel® Atom™ CPU N2600 @ 1.60GHz × 4, Graphics card: Gallium 0.4 on llvmpipe (LLVM 3.3, 128 bits)
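
    The "Gallium 0.4 on llvmpipe" line suggests Unity is being rendered in software on the CPU, which by itself would explain the sluggishness. A quick check, assuming the mesa-utils package is installed:

        # If this reports llvmpipe instead of an Intel driver, there is no
        # hardware acceleration and Unity falls back to software rendering
        glxinfo | grep "OpenGL renderer"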

    Read the article

  • How did we get saddled with the (hierarchical) filesystem as the basic data structure?

    - by user1936
    I'm self-taught and I don't have a CS degree. The more I learn about data structures, the more I wonder how, in this day and age, we are still saddled with the filesystem, with directories and files, as the basic data storage structure on the OS. I understand the simplicity of it, but it seems nowadays that there could be more options available natively. As far as I'm aware, the only project to improve the basic functionality of the filesystem was ReiserFS, where you could tell what line of a file was changed by whom, and when. For instance, if I could have native tagging for files, where I could tag images, diagrams, word-processing documents, and an entire code repository as all belonging to a single project, that would really be helpful to me. Since I'm stuck in the filesystem paradigm, I know that I could put all of those into a single folder/directory, but what if they already exist in disparate directories and need to stay there? I know there are programs out there that can do this, but why aren't they on the filesystem? Something else that would be nice to have is some kind of relational feature in the filesystem, like you get with RDBMSes. I understand that that was supposed to be part of Vista/7, but it fell off the feature list. Sure, any program can store a binary file with any data structure it wants inside it, but why couldn't the OS offer more complex ways of storing data, beyond the simple hierarchy of the filesystem?
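
    For what it's worth, part of the tagging described here can be approximated with extended attributes, which most Linux filesystems already support. A rough sketch, assuming the attr package is installed and the filesystem exposes user xattrs (the file paths and the "user.project" name are just placeholders):

        # Tag two files in different directories as belonging to the same project
        setfattr -n user.project -v myproject ~/docs/spec.odt
        setfattr -n user.project -v myproject ~/code/main.c

        # Read a tag back
        getfattr -n user.project ~/docs/spec.odt

        # Searching by tag still means scanning; nothing indexes this natively
        find ~ -type f -exec sh -c 'getfattr -n user.project --only-values "$1" 2>/dev/null | grep -q myproject' _ {} \; -print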

    Read the article

  • add platform to ubuntu

    - by Med
    I am new to Ubuntu (coming from Windows 7) and I want to know how to add an environment variable, because the platform I use requires it: "To compile and run SCA composites with OW2 FraSCAti, you also have to set the FRASCATI_HOME system environment variable. FRASCATI_HOME has to point to the directory where the OW2 FraSCAti runtime distribution was extracted." And how can I add it to my PATH? "For convenience, you can add FRASCATI_HOME/bin to your PATH variable to get the frascati command available in the PATH." Please, I'm new; could you explain what to do step by step?
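
    A minimal sketch of what the FraSCAti documentation is asking for, assuming the runtime was extracted to ~/frascati (adjust the path to wherever you actually unpacked it):

        # Append the two variables to ~/.bashrc so every new terminal picks them up
        echo 'export FRASCATI_HOME=$HOME/frascati' >> ~/.bashrc
        echo 'export PATH=$PATH:$FRASCATI_HOME/bin' >> ~/.bashrc

        # Reload the file in the current terminal and verify
        source ~/.bashrc
        echo $FRASCATI_HOME
        which frascati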

    Read the article

  • Would you hire a computer scientist who refuses to use computers? [closed]

    - by blueberryfields
    Imagine that you're interviewing a brilliant CS grad. He's just finished school, has very high grades, and has been performing very well in the interview so far. You reach a point near the end where you're starting to speak about terms of employment, salary, etc., and you're trying to show off the environment he'll be working in. When you mention that programmers at your company have systems with two monitors, the interviewee stops you and informs you that he won't need a computer. He only ever writes code by hand, in a notebook, and relies on his phone for sending/receiving email. This is not something he's willing to budge on. Would you still hire him? How good would he have to be for you to hire him? What would you hire him to do, if you do hire him? (The student is modelling himself on E. W. Dijkstra.)

    Read the article

  • File system with chained clusters

    - by Maki Maki
    I'm trying to create a school file system with partitions on disks; every partition has its own clusters for its representation.

        typedef unsigned long ClusterNo;
        const unsigned long ClusterSize = 2048;
        int x, y; // x, y are entries for the two chained lists of clusters

        if (endOfFile < maxsize) {
            ...
            pointer = KernelFS::searchFreeCluster(partitionPointer->letter);
        }

    How can I initialize the beginning of the two cluster chains so that their pointers are 32 bits?

    Read the article

  • Environment font size is too small

    - by Adobe
    So I've chosen a font via System Settings - Application Appearance - Fonts, and there I adjusted all fonts to size 14. I also checked "Use my KDE fonts..." under GTK+ appearance, and did the same using kdesudo systemsettings. But still some fonts are tiny! They are not size 14! Edit 2: I thought it might be one of the GNOME font settings, so I increased all fonts in gnome-tweak-tool, sudo gnome-tweak-tool, gconf-editor, and sudo gconf-editor. No help! Edit: Ubuntu Tweak also gives no help (note the tiny fonts!). Edit: It looks like the problem is with GTK 3: when I compile Emacs 24.0.92 with GTK 3, I get small menu fonts. When I do the same with the default GTK 2, everything is all right.
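
    If the stubborn applications are GTK 3 based, their font comes from the GNOME/GTK settings rather than from KDE's. A hedged sketch for forcing it from the command line, assuming gsettings is available and using 'Ubuntu 14' only as an example font string:

        # Current GTK 3 interface font
        gsettings get org.gnome.desktop.interface font-name

        # Set family and size explicitly (use whichever family you chose in KDE)
        gsettings set org.gnome.desktop.interface font-name 'Ubuntu 14'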

    Read the article

  • Can't log in to my XFCE environment from lxdm

    - by Noob
    I installed lxdm as my display manager because I heard it's very lightweight. Now, after installing it, when I select Xfce as my desktop session I am not able to log in: basically, it waits for some time and then reloads the same login page. On the other hand, if I select LXDE as my desktop I am able to log in. So what am I doing wrong here? The other problem I am facing with lxdm is that it doesn't show my default user account; I have to select "more" and enter both my username and password to log in to any session.

    Read the article

  • How to read the password from a variable?

    - by Viswa
    I am trying to move my file to another system located somewhere else, with this command: rsync -avrz src destination. It works fine. But what I need is to put this command in a shell script and run it like: #!/bin/sh rsync -avrz srcfilelocation destination. When it runs, it asks for the destination system's password. I know that password and enter it manually. Now I have decided to assign the password to an environment variable, like pswd="destination system password". I need my shell script to read the password from this variable. How can I write a script to do this?
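
    Two common ways around the prompt, sketched here on the assumption that the transfer goes over ssh: feed the password non-interactively with sshpass, or (usually better) switch to key-based authentication so no password is needed at all.

        #!/bin/sh
        # Option 1: supply the password from a variable via sshpass (must be installed)
        pswd='destination system password'
        sshpass -p "$pswd" rsync -avrz srcfilelocation destination

        # Option 2 (preferred): set up an ssh key once, then rsync never prompts
        #   ssh-keygen
        #   ssh-copy-id user@destination-host
        #   rsync -avrz srcfilelocation destination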

    Read the article

  • Why is ~/.bash_profile not getting sourced when opening a terminal in Ubuntu 11.04?

    - by Viriato
    Problem: I have an Ubuntu 11.04 virtual machine and I wanted to set up my Java development environment. I did as follows: sudo apt-get install openjdk-6-jdk; added the following entries to ~/.bash_profile: export JAVA_HOME=/usr/lib/jvm/java-6-openjdk and export PATH=$PATH:$JAVA_HOME/bin; saved the changes and exited; opened up a terminal again and typed the following: echo $JAVA_HOME (blank), echo $PATH (displayed, but without the JAVA_HOME value). Nothing happened, as if the export of JAVA_HOME and its addition to the PATH had never been done. Solution: I had to go to ~/.bashrc and add the following entry towards the end of the file: # Source bash_profile to set JAVA_HOME and add it to the PATH, because for some reason it is not being picked up: . ~/.bash_profile. Questions: Why did I have to do that? I thought .bash_profile, .bash_login, or .profile (in the absence of those two) gets executed before .bashrc. Was my terminal in this case a non-login shell? If so, why, when doing su in the terminal and entering the password, did it not execute .profile, where I had also set the exports mentioned above?
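
    A small sketch that makes the login/non-login distinction visible, assuming bash; the terminal emulator in Ubuntu opens interactive non-login shells, which read ~/.bashrc and skip ~/.bash_profile entirely.

        # Run this inside the terminal: prints "login" only for a login shell
        shopt -q login_shell && echo login || echo non-login

        # Ubuntu's convention is to put exports in ~/.profile (read by login shells
        # and by the graphical session); note that bash ignores ~/.profile whenever
        # a ~/.bash_profile exists
        echo 'export JAVA_HOME=/usr/lib/jvm/java-6-openjdk' >> ~/.profile
        echo 'export PATH=$PATH:$JAVA_HOME/bin' >> ~/.profile

        # Plain `su` starts a non-login shell; `su -` (or `su -l`) gives a login shell
        su - someuser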

    Read the article

  • Why was Tanenbaum wrong in the Tanenbaum-Torvalds debates?

    - by Robz
    I was recently assigned reading from the Tanenbaum-Torvalds debates in my OS class. In the debates, Tanenbaum makes some predictions: microkernels are the future; x86 will die out and RISC architectures will dominate the market (within 5 years from then); everyone will be running a free GNU OS. I was one year old when the debates happened, so I lack historical intuition. Why have these predictions not panned out? It seems to me that, from Tanenbaum's perspective, they're pretty reasonable predictions of the future. What happened so that they didn't come to pass?

    Read the article

  • What are some advantages / disadvantages to working on a remote development machine?

    - by robertpateii
    At home I have a fast rig with my dev environment running in VirtualBox. That works great, but at work I have a so-so laptop that can barely push Visual Studio Express, Outlook, and a dozen Chrome windows at the same time. So I can either ask for a dedicated desktop to do development on, or I can ask IT for a slice on an existing server and remote into it. Setup-wise, the remote option is faster and cheaper, but I don't know its effect on my productivity in the long term. I've done small amounts of work through a remote connection, but never extended development. Do you have experience with this? What are some of the advantages/disadvantages? Did it make you less productive?

    Read the article

  • Handling (many) multiple projects in Git in an enterprise environment

    - by Michael K
    One of the advantages of older version control systems such as CVS and SVN in enterprise development is that anyone can connect to source control and see all the projects that the company has. This can make it easier to get a high-level view of what kind of development is happening outside your sprint, and it also keeps everything in one place and easy to find. However, distributed version control systems (Git, specifically) use the repository as their base unit. They work best with one project (or several closely related projects) per repository. This makes repository management more difficult in most enterprise environments, where it is not unusual to have more than 25-50 projects to support. As far as I have been able to determine, you have to keep a list somewhere else of all the repos you have. There is software available, like GitHub, that helps, but that is still an extra step beyond a single connection string and listing the contents of the repository. What is the best way to deal with the complexity of multiple repositories?
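
    In the absence of a hosting layer, the usual low-tech answer is exactly that external list, kept under version control itself. A rough sketch, assuming a plain text file of clone URLs (repos.txt is a made-up name):

        # repos.txt: one clone URL per line, itself stored in a small "meta" repository
        while read -r url; do
            git clone "$url"
        done < repos.txt

        # Later, refresh every checkout under the current directory
        for d in */; do
            (cd "$d" && git pull)
        done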

    Read the article

  • Is it correct to refer to performing programming assignments as "computer labs"?

    - by Nick Rosencrantz
    Can we say that developing an algorithm is a "laboration"? Are these "labs"? In engineering, performing an exercise or an assignment is referred to as a "lab", but are these really "labs" when in fact the work is mainly software problem solving, much like numerical methods in math, which are not called "labs"? When studying engineering such as electrical engineering or physics you might do a "laser lab", or a "chemistry lab" if you study chemical engineering, for instance. How do you define "lab"? Is it just performing something experimental? There is also a sort of double meaning here: the physical rooms at universities that we call "computer labs" are in fact just rooms with computers.

    Read the article

  • Most common Apache and PHP configuration for portable Web Applications

    - by Mahan
    I always create web applications using PHP, but I distribute and deploy my work to many different server platforms and web server configurations, so I always encounter problems in deployment because some features are enabled and others are disabled. My question: is there a standard web server configuration that is commonly used by most web servers worldwide, covering the aspects of reliability, security, and maintainability?
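
    There is no single worldwide standard, so the practical move is to interrogate each target server and fail early if something you depend on is missing. A sketch of the usual checks, assuming shell access to the host:

        # PHP: version, loaded extensions, and the ini limits that vary most between hosts
        php -v
        php -m | grep -i -E 'mysqli|pdo|mbstring|gd'
        php -i | grep -i -E 'memory_limit|upload_max_filesize'

        # Apache: loaded modules (mod_rewrite is the classic portability trap)
        apache2ctl -M 2>/dev/null || apachectl -M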

    Read the article

  • System Requirement Checking

    - by gl3829
    I am working on a game and want to strengthen its requirement checking to ensure that it can run successfully, so I am looking for information on what is useful to check before starting the game. As a simple example: why check for a specific amount of memory? Should I, as a game developer, ensure a minimum amount of memory? I feel this information is usually skipped in many books and resources but is critical for delivering a game that will run on many machines. I would appreciate it if you answered with what you check in the system and why you check it, and if you have a good resource about it, please include it. Just to be a bit more specific, I'm developing on Windows.

    Read the article

  • What tales of horror do you have regarding "whitespace" errors?

    - by reechard
    I'm looking for tales of woe such as companies, websites, and products failing; religious flamewars; data loss. Examples: text editor settings conflicts (indent at 4 with tabs at 8 vs. indent at 2 with tabs at 4); Windows line endings vs. Unix line endings; text vs. binary files in source code control. Related terms: "line feed", "carriage return", "horizontal tab", "mono spacing", "unix line endings", "version control", "diff", "merge", "ftp".

    Read the article
