Search Results

Search found 61615 results on 2465 pages for 'execution time'.


  • How to sync client and server at the first frame

    - by wheelinlight
    I'm making a game where an authoritative server sends information to all clients about states and positions for objects in a 3D world. The player can control his character by clicking on the screen to set a destination for the character, much like in the Diablo series. I've read most of the information I can find online about interpolation, reconciliation, and general networking architecture (Valve's, for instance). I think I understand everything, but one thing seems to be missing in every article I read. Let's say we have an interpolation delay of 100ms, a server tick rate of 50ms, and a latency of 200ms; how do I know when 100ms has passed on the client? If the server sends the first update at t=0, can I assume it arrives at t=200, therefore assuming that all packets take the same amount of time to reach the client? What if the first packet arrives a little early, for instance at t=150? I would then be starting the client at t=150, and at t=250 it will think 100ms has passed since it connected to the server when in fact only 50ms has passed. Hopefully the above paragraph is understandable. The summarized question would be: how do I know at what tick to start simulating the client? EDIT: This is how I ended up doing it: the client keeps a clock (approximately) in sync with the server. The client then simulates the world at simulationTime = syncedTime - avg(RTT)/2 - interpolationTime. The round-trip time can fluctuate, so I average it out over time. By keeping only the most recent values when calculating the average, I hope to adapt to more permanent changes in latency. It's still too early to draw any conclusions. I'm currently simulating bad network connections, but it's looking good so far. Anyone see any possible problems?
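
    A minimal TypeScript sketch of the approach described in the edit (the class name, the ping-based RTT sampling, and the sample-window size are assumptions for illustration, not from the original post):

      // Derive the client's simulation time from a server-synced clock,
      // an averaged round-trip time, and a fixed interpolation delay.
      class SimulationClock {
        private rttSamples: number[] = [];

        constructor(
          private readonly interpolationDelayMs: number, // e.g. 100
          private readonly maxSamples: number = 20       // keep only recent RTTs
        ) {}

        // Call whenever a ping/ack round trip completes.
        recordRtt(rttMs: number): void {
          this.rttSamples.push(rttMs);
          if (this.rttSamples.length > this.maxSamples) {
            this.rttSamples.shift(); // drop the oldest sample
          }
        }

        private averageRtt(): number {
          if (this.rttSamples.length === 0) return 0;
          const sum = this.rttSamples.reduce((a, b) => a + b, 0);
          return sum / this.rttSamples.length;
        }

        // syncedTimeMs is the client's estimate of the current server time.
        simulationTime(syncedTimeMs: number): number {
          return syncedTimeMs - this.averageRtt() / 2 - this.interpolationDelayMs;
        }
      }

      // Example: with a 100ms interpolation delay and ~200ms RTT, the client
      // simulates roughly 200ms behind its estimate of server time.
      const clock = new SimulationClock(100);
      clock.recordRtt(200);
      console.log(clock.simulationTime(5000)); // 4800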

    Read the article

  • PHP 5.2 and 5.3: a strange bug makes Denial of Service attacks child's play on Windows and Linux

    PHP: a strange bug reportedly makes Denial of Service attacks child's play. It is said to affect versions 5.2 and 5.3 of the language on Windows and Linux. A critical bug has just been discovered in the 5.2 and 5.3 branches of PHP, one of the most popular web programming languages. The bug is triggered by certain floating-point values with a very large number of decimal places. Computing or evaluating them in PHP reportedly sends the interpreter into an infinite loop that occupies 100% of the CPU. Executing the following line of code, or even its equivalent without scientific notation (with 324 decimal places), would therefore crash the machine, and this...

    Read the article

  • JavaOne Countdown, Are you ready?

    - by Angela Caicedo
    This is a great time of the year!  Not only does the weather start cooling down a bit, but it's time to get ready for JavaOne 2012.  It feels so long since my last JavaOne (last year I missed it because I was on mom duty), so this year I couldn't be happier to be this close to the action again.  Have you ever been to JavaOne?  There are a million great reasons to love JavaOne, and the most important for me is the atmosphere of the conference: the Java community is there, and Java is in the air! This year we have more than 450 sessions, and there are HOLs (Hands-on Labs) to get your hands dirty with code.  In addition, there will be very cool demos, an exhibition hall, and a DEMOground.  During the whole time, you will have the opportunity to interact with the speakers, discuss topics and concerns, and even have a drink! Oh yes, I almost forgot: there will be lots of fun, even apart from the technology!  For example, there will be a Geek Bike Ride, a Thirsty Bear party, and the Appreciation Party with Pearl Jam and Kings of Leon.  How can this get any better! So, are you ready yet?  Have you registered?  If not, just follow this "Register for JavaOne" link and we'll see you there! P.S.  Little-known fact: if you are a student you can get your pass for free!!!

    Read the article

  • DTLoggedExec 1.0.0.2 Released

    - by Davide Mauri
    These last days have been full of work, and the next days, up until the end of July, will follow the same ultra-busy scheme. This makes the improvement of DTLoggedExec a little slower than I'd like, but nonetheless on Friday I was able to release an updated version of the tool that fixes a bug and adds a very convenient option to make the creation of execution logs even more straightforward:
    [bugfix] Fixed a bug that prevented loading packages from the SSIS Package Store
    [new] Added support for the {filename} placeholder in both the Data Flow Profiling and CSV Log Providers
    The added feature allows you to generate Data Flow profile logs and CSV logs that have the same name as the package that generated them, e.g.:
      DTLoggedExec.exec /FILE:"MyPackage.dtsx" /LPA:"FILE=C:\Log\{filename}_{date}_{time}.dtsCSVLog"

    Read the article

  • TXPAUSE : polite waiting for hardware transactional memory

    - by Dave
    Classic locks are an appropriate tool to prevent potentially conflicting operations A and B, invoked by different threads, from running at the same time. In a sense the locks cause either A to run before B or vice-versa. Similarly, we can replace the locks with hardware transactional memory, or use transactional lock elision to leverage potential disjoint access parallelism between A and B. But often we want A to wait until B has run. In a Pthreads environment we'd usually use locks in conjunction with condition variables to implement our "wait until" constraint. MONITOR-MWAIT is another way to wait for a memory location to change, but it only allows us to track one cache line and it's only available on x86. There's no similar "wait until" construct for hardware transactions. At the instruction-set level a simple way to express "wait until" in transactions would be to add a new TXPAUSE instruction that could be used within an active hardware transaction. TXPAUSE would politely stall the invoking thread, possibly surrendering or yielding compute resources, while at the same time continuing to track the transaction's address-set. Once a transaction has executed TXPAUSE it can only abort. Ideally that'd happen when some other thread modifies a variable that's in the transaction's read-set or write-set. And since we're aborting, all writes would be discarded. In a sense this gives us multi-location MWAIT but with much more flexibility. We could also augment TXPAUSE with a cycle-count bound to cap the time spent stalled. I should note that we can already enter a tight spin loop in a transaction to wait for updates to the address-set to cause an abort. Assuming that the implementation monitors the address-set via cache-coherence probes, by waiting in this fashion we actually communicate via the probes, and not via memory values. That is, the updating thread signals the waiter via probes instead of by traditional memory values. But TXPAUSE gives us a polite way to spin.

    Read the article

  • Extreme Portability: OpenJDK 7 and GlassFish 3.1.1 on Power Mac G5!

    - by MarkH
    Occasionally you hear someone grumble about platform support for some portion or combination of the Java product "stack". As you're about to see, this really is not as much of a problem as you might think. Our friend John Yeary was able to pull off a pretty slick feat with his vintage Power Mac G5. In his words: "Using a build script sent to me by Kurt Miller, build recommendations from Kelly O'Hair, and the great work of the BSD Port team... I created a new build of OpenJDK 7 for my PPC-based system using the Zero VM. The results are fantastic. I can run GlassFish 3.1.1 along with all my enterprise applications." I recently had the opportunity to pick up an old G5 for little money and passed on it. What would I do with it? At the time, I didn't think it would be more than a space-consuming novelty. Turns out... I could have had some fun and a useful piece of hardware at the same time. Maybe it's time to go bargain-hunting again. For more information about repurposing classic Apple hardware and learning a few JDK-related tricks in the process, visit John's site for the full article, available here. All the best, Mark

    Read the article

  • When can I publish a software tool written at work?

    - by AlexMA
    I'm working on a software problem at work that is fairly generic, but I can't find a library I like to solve it, so I'm considering writing one myself (at least a bare-bones version). I'll be writing some if not all of the 1.0 version at work, since I need it for the project. If it turns out well I might want to bring the work home and polish it up just for fun, and maybe release it as an open-source project. However, I'm concerned that if I wrote the 1.0 version at work I may not be allowed to do this in a legal sense. Obviously I could ask my boss (who probably won't care), but I'm curious how other programmers have dealt with this issue and where the law stands here. My one-sentence question is: when is it okay (legally/ethically) to open-source a software tool originally written by you for work, at work? What if you have expanded the original source significantly during off-hours? Follow-up: suppose I write the whole thing at home on my own time and then simply use it at work; does that change things drastically? Follow-up 2: note that I'm not trying to rip off my employer (I understand that they're paying me to build products that they own) -- I'm just wondering if there's a fair way of doing this for all involved... It would be nice if some nonprofit down the road could use my code and save them some time. Also, there's another issue at stake. If I write the library for a very simple, generic thing (like HTML tables in JavaScript), does that mean I can never again do so on my own time without putting myself at legal risk (even if it was a whole new fresh rewrite or a segment of a larger project)? Am I surrendering my right to write code for this sort of project for the rest of my life (without this company's permission), since the code at work might still be somewhere in my brain influencing me? This seems related to software patents, as a side note.

    Read the article

  • Macbook Pro late 2011 screen brightness issue

    - by buchzumlesen
    The keys to set the screen brightness work properly, but every time I reboot the screen brightness reverts to 100%, which is very annoying. I've already tried adding the following lines to /etc/rc.local, but with no success (only the keyboard backlight stays off):
      #!/bin/sh -e
      #
      # rc.local
      #
      # This script is executed at the end of each multiuser runlevel.
      # Make sure that the script will "exit 0" on success or any other
      # value on error.
      #
      # In order to enable or disable this script just change the execution
      # bits.
      #
      # By default this script does nothing.
      echo '1' > /sys/devices/platform/applesmc.768/leds/smc::kbd_backlight/brightness
      echo '6' > /sys/class/backlight/acpi_video0/brightness
      rfkill block bluetooth
      exit 0
    This worked for me when I was using Ubuntu 12.04 and also after the upgrade to 12.10, but then after rebooting the screen brightness always reverted to 100%. Would be nice if anyone knows how to fix this. My device: Macbook Pro 13" Late 2011. Thanks in advance!

    Read the article

  • Should I avoid or embrace asking questions of other developers on the job?

    - by T.K.
    As a CS undergraduate, the people around me are either learning or are paid to teach me, but as a software developer, the people around me have tasks of their own. They aren't paid to teach me, and conversely, I am paid to contribute. When I first started working as a software developer co-op, I was introduced to a huge code base written in a language I had never used before. I had plenty of questions, but didn't want to bother my co-workers with all of them - it wasted their time and hurt my pride. Instead, I spent a lot of time bouncing between IDE and browser, trying to make sense of what had already been written and differentiate between expected behavior and symptoms of bugs. I'd ask my co-workers when I felt that the root of my lack of understanding was an in-house concept that I wouldn't find on the internet, but aside from that, I tried to confine my questions to lunch hours. Naturally, there were occasions where I wasted time searching the internet to understand something in the code that had, at its heart, an in-house concept, but overall, I felt I was productive enough during my first semester, contributing about as much as one could expect and gaining a pretty decent understanding of large parts of the product. I was wondering what senior developers felt about that mindset. Should new developers ask more questions to get up to speed faster, or should they do their own research? I see benefits to both mindsets, and anticipate a large variety of responses, but I figure new developers might appreciate your answers without thinking to ask this question.

    Read the article

  • Developing Functional Specifications based on the UML Model

    A few days ago I found this white paper I did around 2004, way before I started really blogging: The Process Overview. Use-case to Specifications is a process that uses UML use-cases to identify user requirements and model systems in order to properly define functionality. This document is intended to serve as an execution-based walk-through of this process. As background: the Unified Modeling Language (UML) is a language for specifying, visualizing, constructing, and documenting the artifacts of software...

    Read the article

  • MySQL - Calculating fields on the fly vs storing calculated data

    - by Christian Varga
    Hi everyone, I apologise if this has been asked before, but I can't seem to find an answer to a question I have about calculating on the fly vs storing fields in a database. I read a few articles that suggested it was preferable to calculate when you can, but I would just like to know if that still applies to the following 2 examples. Example 1: say you are storing data relating to a car. You store the fuel tank size in litres, and how many litres it uses per 100km. You also want to know how many km it can travel, which can be calculated from the tank size and economy. I see 2 ways of doing this: 1. When a car is added or updated, calculate the number of km and store it as a static field in the database. 2. Every time a car is accessed, calculate the number of km on the fly. Because the car's economy/tank size doesn't change (although it could be edited), the km figure is a pretty static value. I don't see why we would calculate it every single time the car is accessed. Wouldn't this waste CPU time as opposed to simply storing it in a separate field in the database and calculating only when a car is added or updated? My next example, which is almost an entirely different question (but on the same topic), relates to counting children. Let's say we have an app which has categories and items. We have a view where we display all the categories, and a count of all the items inside each category. Again, I'm wondering what's better: to perform a MySQL query to count all the items in each category every single time the page is accessed, or to store the count in a field in the categories table and update it when an item is added/deleted? I know it is redundant to store anything that can be calculated, but I worry that calculating fields or counting records might be slow as opposed to storing the data in a field. If it's not then please let me know, I just want to learn about when to use either method. On a small scale I guess it wouldn't matter either way, but would apps like Facebook really count the number of friends you have every time someone views your profile, or would they just store it as a field? I'd appreciate any responses to both of these scenarios, and any resource that might explain the benefits of calculating vs storing. Thanks in advance, Christian
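
    A small TypeScript sketch of the two approaches (the types, field names, and in-memory map are hypothetical, purely to illustrate the trade-off; in a database the stored values would simply be extra columns kept in sync on write):

      // Option 1 computes the derived value on every read; option 2 computes
      // it once at write time and stores it alongside the source fields.
      interface Car {
        tankLitres: number;
        litresPer100Km: number;
        rangeKm?: number; // only present if we choose to store the derived value
      }

      // Option 1: derive on the fly every time the car is read.
      function rangeOnTheFly(car: Car): number {
        return (car.tankLitres / car.litresPer100Km) * 100;
      }

      // Option 2: compute once when the car is added or updated, then store it.
      function withStoredRange(car: Car): Car {
        return { ...car, rangeKm: (car.tankLitres / car.litresPer100Km) * 100 };
      }

      // The counter case is the same trade-off: either count items per category
      // with a query on every page view, or keep a denormalized count and adjust
      // it whenever an item is added or removed.
      const categoryItemCount = new Map<string, number>();

      function onItemAdded(categoryId: string): void {
        categoryItemCount.set(categoryId, (categoryItemCount.get(categoryId) ?? 0) + 1);
      }

      function onItemRemoved(categoryId: string): void {
        categoryItemCount.set(categoryId, Math.max(0, (categoryItemCount.get(categoryId) ?? 0) - 1));
      }

      // Example: a 60-litre tank at 6 litres per 100 km gives a 1000 km range.
      console.log(rangeOnTheFly({ tankLitres: 60, litresPer100Km: 6 })); // 1000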

    Read the article

  • Adventures in Lab Management Configuration: CMMI Edition Part 1 of 3

    - by Enrique Lima
    I remember at one point someone telling me how close Migrate was to Migraine. This was a process in which an environment migrated from TFS 2008 to TFS 2010 also needed its process template migrated. Here we are talking about CMMI v4.2 to CMMI v5.0. Now, the process to migrate the TFS infrastructure is one thing; migrating the process template is a different deal, not hard … just involved. I followed a combination of steps that came from a blog post as the main guidance and then MSDN (as suggested on the guidance post) to complement some tasks and steps. Again, the focus I have here is CMMI. The high-level steps taken to bring the migrated TFS 2008 CMMI v4.2 project up to the TFS 2010 process template were:
    1) Back up the Collection, Configuration and Warehouse databases.
    2) Download the process template using Visual Studio 2010.
    3) Export, modify and import the Bug type definition.
    4) Export, modify and import the Scenario or Requirement type definition.
    5) Create and import the bug field mappings.
    Now we can attempt to connect using Test Manager, and you should be able to get this going. After that was done, it was time to enroll VMs that already existed in the environment. This was a bit more challenging, but in the end it was a matter of analyzing the changes that had been made as a temporary workaround between the time we migrated and the time we converted the Work Items and such, and then adding fields to enable communication between the project and the Test and Lab Manager component. There are 2 more parts to this post: the second will describe the detailed steps taken to complete the process template update, and the third will talk about the gotchas and fixes for the Lab Management portion.

    Read the article

  • Why can't non-admin users install software?

    - by fiftyeight
    This is probably something I don't understand since I am used to Windows and am only starting out with Ubuntu. I know that software in Linux comes in packages; what I don't understand is why non-admin users can't install software. I mean, every application is run by a specific user, and that user will only be able to run that application with his privileges, so if he has no admin privileges, the application also won't be able to access unauthorized directories, etc. Most of the time I want to work on my PC as a non-admin user, since that seems safer to me; most of the time I have no need for admin privileges. And even though I know viruses in Linux are uncommon, I still think the best practice is to work on the computer in a state where you yourself can't make any changes to important files; that way viruses also can't harm any important files. But I need to install software for programming and web design etc., and first of all I don't want to switch users all the time. It also sounds safer to me that everything being done on the PC is done through the non-admin user. I'll be glad to know what misunderstanding I have here, because something here doesn't sound right.

    Read the article

  • implementing dynamic query handler on historical data

    - by user2390183
    EDIT: Refined the question to focus on the core issue.
    Context: I have historical data about property (house) sales collected from various sources into a centralized/cloud data source (assume information collection is handled by a third party). I am planning to develop an application to query and retrieve data from this centralized data source.
    Example queries:
    Simple: for a given XYZ post code, what is the average house price for a 3-bedroom house?
    Complex: what is the estimated price for a house at "DD,Some Street,XYZ Post Code" (worked out from average values of historic data filtered by various characteristics of the house: post code, number of bedrooms, total area, and deeper insights like building type, year built, and features)?
    In addition to average price, the application should support other property information: maximum or minimum price, etc., and a trend (graph) of a selected property attribute over a period of time. Hence, the queries should not force the search to be based on a primary key or a few fixed fields. In other words, queries can be: what is the change in 3-bedroom house prices (irrespective of location) over the last 30 days? What kind of properties can we get for X price (irrespective of location or house type)?
    The challenge I have is identifying the domain (BI/data analytics, DB design, DB query interface, DW-related, or something else) this problem (dynamic queries on historic data) belongs to, so that I can explore further.
    My findings so far (I could be wrong on the following, so please correct me if you think so):
    I briefly read about BI/data analytics - I think it is a heavyweight solution for my problem and has scalability issues.
    DB design - as I understand it, an RDBMS works well if you know the data model at design time. I expect the attributes about a property, or about other entities (users) that I am going to bring in, to evolve quickly, so maintenance would be an issue. As I am going to have multiple users executing queries at the same time, performance would be a bottleneck.
    Other options like graph DBs (http://www.tinkerpop.com/) seem a bit complex (they are good, but using general-purpose tools for this makes me feel like I'm doing assembly programming to solve my problem).
    Big-data-related solutions are for analysing data from multiple unrelated domains.
    So, any suggestion on the space this problem fits in? (Especially if you have design/implementation experience of a back-end for property listings or similar portals.)
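
    A minimal TypeScript sketch of the kind of attribute-driven filtering the question describes (the record shape, field names, and sample data are hypothetical, purely to illustrate why a fixed-key lookup is not enough):

      // Hypothetical sale record: known core fields plus attributes that may
      // evolve over time, which is why the criteria are not a fixed set of keys.
      interface Sale {
        postCode: string;
        bedrooms: number;
        price: number;
        [attribute: string]: unknown;
      }

      type Criteria = Record<string, unknown>;

      // Build a predicate from an arbitrary set of attribute criteria.
      function matching(criteria: Criteria): (sale: Sale) => boolean {
        return (sale) =>
          Object.entries(criteria).every(([key, value]) => sale[key] === value);
      }

      function averagePrice(sales: Sale[], filter: (sale: Sale) => boolean): number | undefined {
        const selected = sales.filter(filter);
        if (selected.length === 0) return undefined;
        return selected.reduce((sum, s) => sum + s.price, 0) / selected.length;
      }

      // Sample data, purely illustrative.
      const allSales: Sale[] = [
        { postCode: "XYZ", bedrooms: 3, price: 250_000 },
        { postCode: "XYZ", bedrooms: 3, price: 270_000 },
        { postCode: "ABC", bedrooms: 2, price: 180_000 },
      ];

      // "For a given post code, what is the average price of a 3-bedroom house?"
      console.log(averagePrice(allSales, matching({ postCode: "XYZ", bedrooms: 3 }))); // 260000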

    Read the article

  • Do ALL your variables need to be declared private? [closed]

    - by skizeey
    Possible Duplicate: Why do we need private variables? I know that it's best practice to stay safe, and that we should always prevent others from directly accessing a class' properties. I hear this all the time from university professors, and I also see this all the time in a lot of source code released on the App Hub. In fact, professors say that they will actually take marks off for every variable that gets declared public. Now, this leaves me always declaring variables as private. No matter what. Even if each of these variables were to have both a getter and a setter. But here's the problem: it's tedious work. I tend to quickly lose interest in a project every time I need to have a variable in a class that could have simply been declared public instead of private with a getter and a setter. So my question is, do I really need to declare all my variables private? Or could I declare some variables public whenever they require both a getter and a setter?
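
    The trade-off in a short TypeScript-flavoured sketch (the class and field names are made up): a public field exposes state directly, while a private field behind an accessor leaves a single place to add validation later without changing callers.

      // Made-up example: both shapes expose "health", but only the second one
      // can later enforce an invariant without breaking calling code.
      class ExposedPlayer {
        health = 100; // public: callers may set any value, even -50
      }

      class GuardedPlayer {
        private _health = 100;

        get health(): number {
          return this._health;
        }

        set health(value: number) {
          // The accessor is the single place where a rule can be added later.
          this._health = Math.max(0, Math.min(100, value));
        }
      }

      const p = new GuardedPlayer();
      p.health = -50;        // clamped by the setter
      console.log(p.health); // 0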

    Read the article

  • How should I start refactoring my mostly-procedural C++ application?

    - by oob
    We have a program written in C++ that is mostly procedural, but we do use some C++ containers from the standard library (vector, map, list, etc.). We are constantly making changes to this code, so I wouldn't call it a stagnant piece of legacy code that we can just wrap up. There are a lot of issues with this code making it harder and harder for us to make changes, but I see the three biggest issues being:
    1. Many of the functions do more (way more) than one thing.
    2. We violate the DRY principle left and right.
    3. We have global variables and global state up the wazoo.
    I was thinking we should attack areas 1 and 2 first. Along the way, we can "de-globalize" our smaller functions from the bottom up by passing in information that is currently global as parameters to the lower-level functions from the higher-level functions, and then concentrate on figuring out how to remove the need for global variables as much as possible. I just finished reading Code Complete 2 and The Pragmatic Programmer, and I learned a lot, but I am feeling overwhelmed. I would like to implement unit testing, change from a procedural to an OO approach, automate testing, use a better logging system, fully validate all input, implement better error handling and many other things, but I know if we start all this at once, we would screw ourselves. I am thinking the three I listed are the most important to start with. Any suggestions are welcome. We are a team of two programmers, mostly with experience in in-house scripting. It is going to be hard to justify taking the time to refactor, especially if we can't bill the time to a client. Believe it or not, this project has been successful enough to keep us busy full time and also keep several consultants busy using it for client work.
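
    The "de-globalize from the bottom up" step described above, sketched in TypeScript rather than C++ just to keep the example short (all names are invented); the point is only that lower-level functions receive the formerly global state as parameters passed down from their callers.

      // Before (sketch): a lower-level function silently reads and writes globals.
      let currentScale = 1.0; // global state
      let lastResult = 0;     // more global state

      function scaleMeasurementGlobal(value: number): void {
        lastResult = value * currentScale; // hidden inputs and outputs
      }

      // After (sketch): the same work with explicit inputs and an explicit result.
      // The caller, which used to own the globals, now passes the state down.
      function scaleMeasurement(value: number, scale: number): number {
        return value * scale;
      }

      function processReadings(readings: number[], scale: number): number[] {
        // The higher-level function owns the state and hands it to the lower level.
        return readings.map((r) => scaleMeasurement(r, scale));
      }

      console.log(processReadings([2, 4, 8], 0.5)); // [1, 2, 4]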

    Read the article

  • Silverlight, JavaScript and HTML 5 - Who wins?

    - by Sahil Malik
    Disclaimer: These are just opinions. In the past I have expressed opinions about the future of technology, and have been ridiculously accurate. I have no idea if this will be accurate or not, but that is what it's all about: opinions, predicting the future. This topic has been boiling inside me for a while, and I have discussed it in private get-togethers with like-minded techies. But I thought it would be a good idea to put this together as a blog post. There is some debate about the future of Silverlight, especially in light of technologies such as newer, faster browsers and HTML 5. As a .NET developer, where do I invest my time and skills - remember, you have limited time and skills, and not everything that comes out of Microsoft is a smashing success. So it is very, very wise for you to consider the facts and macro trends, and allocate what you have limited amounts of - "time". Read full article ....

    Read the article

  • Grub2 attempting to boot hd1 when it should boot hd0

    - by JoBu1324
    I'm attempting to perform a "normal" install on a USB3 SSD (I don't know if it is noteworthy, but I don't have a swap partition). The installation proceeds normally (I'm installing from a USB2 device I created using LiLi Boot, with a copy of Ubuntu 12.10 64-bit that I downloaded directly from the source). The system I'm running Ubuntu on has had a more traditional installation of Ubuntu running on it without issue (also 12.10), so I know that everything works A-OK when booting from a 7200RPM internal disk. There are a number of oddities that I've noticed so far, including graphics corruption, but the first and most pressing issue is that Grub2 refuses to recognize the correct hd. From /boot/grub/grub.cfg:
      if [ x$feature_default_font_path = xy ] ; then
         font=unicode
      else
        insmod part_msdos
        insmod ext2
        set root='hd1,msdos1'
        if [ x$feature_platform_search_hint = xy ]; then
          search --no-floppy --fs-uuid --set=root --hint-bios=hd1,msdos1 --hint-efi=hd1,msdos1 --hint-baremetal=ahci1,msdos1 b58ee4f7-d41d-400a-b7b8-18bd1f0ae9d3
        else
          search --no-floppy --fs-uuid --set=root b58ee4f7-d41d-400a-b7b8-18bd1f0ae9d3
        fi
        font="/usr/share/grub/unicode.pf2"
      fi
    This is from a 100% fresh install of Linux (first boot), which was installed while no hard drives were connected to the system other than the USB2 LiLi drive. The system refuses to boot unless I change hd1,msdos1 to hd0,msdos1 in the grub menu at boot, even when it is the only disk device connected to the PC. What options are left for me to troubleshoot this issue? I've been racking my brains and taxing the internet trying to dig up something on this problem, but now I'd like to see if the Ubuntu community can rise to the challenge and help me fix this boot problem. This is the second time I've attempted this particular setup. The first time, after days of wasted time, I managed to get it to boot every other boot - i.e. every even boot it would boot into Ubuntu like it was happy; every odd boot it would boot into the BusyBox or Grub prompt. At one point it complained that it couldn't find /dev/disk/by-uuid/[the disk], which I found most perplexing, since the disk was there and booted before and after the occurrence (with intervention).

    Read the article

  • Critical vulnerability in Office: Microsoft recommends applying the security update as soon as possible

    Microsoft warns against the exploitation of security flaws in Office and recommends applying the security update as soon as possible. Microsoft is alerting users to a new vulnerability rated critical in the Microsoft Office Word word processor. The flaw allows remote code execution if a user opens or previews an e-mail containing RTF data. Exploiting it allows an attacker to gain the same user rights as the local user. The vulnerability had already been fixed in a security bulletin (Patch Tuesday) issued by Microsoft last November, but a new exploitation of it on the internet...

    Read the article

  • Dynamic (C# 4.0) & Var in a nutshell.

    - by mbcrump
    A var is statically typed - the compiler and runtime know the type. This can be used to save some keystrokes. The following are identical:
      var mike = "var demo";
      Console.WriteLine(mike.GetType());  //Returns System.String

      string mike2 = "string Demo";
      Console.WriteLine(mike2.GetType()); //Returns System.String
    A dynamic behaves like an object, but with dynamic dispatch. The compiler doesn't know anything about it at compile time:
      dynamic duo = "dynamic duo";
      Console.WriteLine(duo.GetType()); //System.String
      //duo.BlowUp(); //A dynamic type does not know if this exists until run-time.
      Console.ReadLine();
    To further illustrate this point, the dynamic type called "duo" calls a method that does not exist called BlowUp(). As you can see from the screenshot below, the compiler reports no errors even though BlowUp() does not exist. The program will compile fine. It will, however, throw a RuntimeBinderException after it hits that line of code at runtime. Let's try the same thing with a var. This time, we get a compiler error saying that BlowUp() does not exist. This program will not compile until we add a BlowUp() method. I hope this helps with your understanding of the two. If not, then drop me a line and I'll be glad to answer it.

    Read the article

  • Errors with linux-image-3.8.0-36-generic and GRUB

    - by user285239
    OS: Ubuntu 12.04 LTS
    Problem 1: When installing anything (or updating), it always ends with this error:
      Errors were encountered while processing:
       linux-image-3.8.0-36-generic
       linux-image-3.8.0-38-generic
       linux-image-generic-lts-raring
       linux-generic-lts-raring
    Problem 2: When installing or updating stuff, it opens some grub files and halts execution until I close these grub files. See the screenshot below. Grub files + terminal window (screenshot). Pastebin with terminal output while trying to install something.
    I should mention that I don't know if these two errors are related, but it is a fact that every time I try to install something or run an update, both of the above errors pop up. I couldn't find anything of interest in the logs I looked at (probably because I don't know what/where to look), but tell me if you need me to upload something.
    Edit 02 June: this is the output from lsb_release -a:
      kasper@ubuntuRW:~$ lsb_release -a
      No LSB modules are available.
      Distributor ID: Ubuntu
      Description:    Ubuntu 12.04.4 LTS
      Release:        12.04
      Codename:       precise

    Read the article

  • How to prevent a script from stopping after apt-get?

    - by Eonil
    I keep some bash snippets and copy & paste them when needed for management, but I discovered that apt-get cancels script execution. Here's the problematic part of my script:
      apt-get -y install gcc g++ make cmake perl
      cd ~/
      mkdir t1
      cd t1
    I copy & paste this script from OS X Terminal to an Ubuntu 12.04 LTS server (fresh install on a VM). The script always stops after apt-get finishes. I run these commands with the root account like this:
      ssh user1@server
      <password…>
      sudo su
      <password…>
      apt-get -y install gcc g++ make cmake perl
      cd ~/
      mkdir t1
      cd t1
    Can this be a problem? Why does my script stop after apt-get finishes, and how can I make it continue?

    Read the article

  • C++ wins a benchmark against Java, Scala and Go presented at Scala Days; the study covered the implementation of an algorithm

    C++ wins a benchmark against Java, Scala and Go. Presented at Scala Days, the study covered the implementation of an algorithm. Good news for all C++ fans! The language remains, without question, the most performant! Presented at Scala Days earlier this month, a benchmark pits C++, Java, Scala and Go against each other on the implementation of the same algorithm, relying only on the languages' built-in features (so no Boost here). And C++ wins hands down in execution time as well as in memory footprint. Better still, contrary to some received ideas, its compilation times and line counts stay at values that have nothing to be ashamed of compared to Java, for example....

    Read the article

  • BicaVM: an implementation of the Java virtual machine in JavaScript

    BicaVM: an implementation of the Java virtual machine in JavaScript. In the near future, browsers could embed a kind of virtual machine that allows code written in a language other than JavaScript to be executed. That is the vision of a developer who has just put together a Java virtual machine in JavaScript. Arthur Ventura, a Portuguese open-source developer, has just presented BicaVM, an implementation of the Java virtual machine (JVM) in JavaScript, capable of running in any modern browser. The main difficulty in porting the JVM to JavaScript is bytecode execution time. However, with the significant increases in the speed of ex...

    Read the article

  • How to write loosely coupled classes in node.js

    - by lortabac
    I am trying to understand how to design node.js applications, but it seems there is something I can't grasp about asynchronous programming. Let's say my application needs to access a database. In a synchronous environment I would implement a data access class with a read() method, returning an associative array. In node.js, because code is executed asynchronously, this method can't return a value, so, after execution, it will have to "do" something as a side effect. It will then contain at least 1 line of extraneous code which has nothing to do with data access. Multiply this for all methods and all classes and you will very soon have an unmanageable "code soup". What is the proper way to handle this problem? Am I approaching it the wrong way?
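
    One common way to keep the data-access interface clean is to hand back a value-to-come rather than threading an unrelated callback through every method - a minimal TypeScript/node sketch (the class and query names are invented, and the database call is stubbed with an in-memory lookup so the example stays self-contained):

      type Row = Record<string, string | number>;

      class UserRepository {
        // The method still "returns" its result - just wrapped in a Promise -
        // so callers stay decoupled from how the read is performed.
        async read(userId: number): Promise<Row | undefined> {
          const fakeDb: Row[] = [
            { id: 1, name: "Ada" },
            { id: 2, name: "Grace" },
          ];
          // Simulate an asynchronous driver call.
          return new Promise<Row | undefined>((resolve) =>
            setTimeout(() => resolve(fakeDb.find((row) => row.id === userId)), 10)
          );
        }
      }

      async function main(): Promise<void> {
        const repo = new UserRepository();
        const user = await repo.read(1); // reads like synchronous code
        console.log(user);               // { id: 1, name: 'Ada' }
      }

      main();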

    Read the article
