Search Results

Search found 18845 results on 754 pages for 'wayback machine'.


  • Why is multithreading often preferred for improving performance?

    - by user1849534
    I have a question about why programmers seem to love concurrency and multi-threaded programs in general. I'm considering two main approaches here: an async approach, basically based on signals (or simply "async" as it is called in many papers and languages, for example the new C# 5.0), with a "companion thread" that manages the policy of your pipeline; and a concurrent or multi-threading approach. I will just say that I'm thinking about the hardware here and the worst-case scenario, and I have tested these two paradigms myself. The async paradigm is such a clear winner that I don't get why people talk about multi-threading 90% of the time when they want to speed things up or make good use of their resources. I have tested multi-threaded programs and async programs on an old machine with an Intel quad-core that doesn't have a memory controller inside the CPU; the memory is managed entirely by the motherboard. In this case performance is horrible with a multi-threaded application: even a relatively low number of threads, like 3-5, can be a problem; the application is unresponsive and is just slow and unpleasant. A good async approach is, on the other hand, probably not faster, but it's not worse either: my application just waits for the result and doesn't hang, it's responsive, and it scales much better. I have also discovered that a context switch in the threading world is not that cheap in real-world scenarios; it's in fact quite expensive, especially when you have more than 2 threads that need to cycle and swap among each other to be computed. On modern CPUs the situation isn't really that different: the memory controller is integrated, but my point is that an x86 CPU is basically a serial machine, and the memory controller works the same way as on the old machine with an external memory controller on the motherboard. The context switch is still a relevant cost in my application, and the fact that the memory controller is integrated or that newer CPUs have more than 2 cores is no bargain for me. From what I have experienced, the concurrent approach is good in theory but not that good in practice; with the memory model imposed by the hardware it's hard to make good use of this paradigm, and it also introduces a lot of issues, ranging from the use of my data structures to the joining of multiple threads. Also, neither paradigm offers any guarantee about when the task or the job will be done at a certain point in time, making them really similar from a functional point of view. Given the x86 memory model, why do the majority of people suggest using concurrency with C++ rather than just an async approach? And why not consider the worst-case scenario of a computer where the context switch is probably more expensive than the computation itself?

    Read the article

  • Boot-Repair after messing up NTFS partition

    - by QuietThud
    I posted this question explaining what happened after I tried to create a new swap partition on a Windows/Ubuntu dual-boot machine. I have since created a live-boot USB and installed Boot-Repair. I had it "recommend repairs", after which it tried repairing the Wubi filesystems, which as far as I'm aware was not necessary. I'm not sure where to go from here; I don't care very much about backing up my files, I just want to be able to boot the machine. In the Advanced Options, the "Repair Windows boot files" box is uncheckable and both GRUB tabs are unclickable (I do have GRUB installed). Here is my Pastebinit output with the details from Boot-Repair. Please be as explicit as possible, as I am proving to be disproportionately bad at this type of task. Thank you! P.S. I keep seeing: cryptsetup: WARNING: failed to detect canonical device of overlayfs cryptsetup: WARNING: could not determine root device from /etc/fstab TestDisk: GPARTED:

    Read the article

  • Does 64-bit Ubuntu work on the Acer Aspire One D255?

    - by hippietrail
    The Acer Aspire One D255 is the cheapest dual-core netbook on the market right now. It has an Intel Atom N550, which should be able to run a 64-bit OS. But when I try to boot the Ubuntu 64-bit live CD, I only get one line of diagnostic output saying it "found something" on the USB CD drive before locking up. I haven't been able to find anything by Googling yet. Could it just be driver issues for this machine, or could the platform be inherently frail for running 64-bit? (My machine is two days old and on trial; Windows 7 and Ubuntu 32-bit both run, but it has locked up under casual use on both OSes.)

    Read the article

  • Minimum to install for a visual web browser in Ubuntu Server

    - by Svish
    I have set up a machine with Ubuntu Server. Some of the server software I want to run on it has web-based user interfaces for setting it up, et cetera. I know I could connect to it from a different machine which has a graphical user interface, but in this case I would rather do it on the box. So, from a fresh Ubuntu Server installation, what is the minimum I need to install to be able to launch a web browser I can use for this? For example Chromium, Firefox or Arora.
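
    A minimal sketch of one common approach (the package names here are assumptions that may differ between releases): install a bare X server plus a lightweight window manager and a single browser, skipping the recommended extras, then start the session by hand.

    [code]
    # Assumed package set; adjust names to whatever your release actually ships.
    sudo apt-get update
    sudo apt-get install --no-install-recommends xorg openbox chromium-browser
    # Create a minimal X session that just runs the window manager,
    # then start it from the console and launch the browser from a terminal
    # or the openbox root menu.
    echo "exec openbox-session" > ~/.xinitrc
    startx
    [/code]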

    Read the article

  • Installing VMware Workstation 9.0 on Ubuntu 13.10

    - by user212290
    After I installed VMware Workstation 9.0, when I try to open a VM a dialogue appears saying "Before you can run VMware, several modules must be compiled and loaded into the running kernel", with CANCEL and INSTALL buttons; when I click the INSTALL button, nothing happens. When I run:
    sudo apt-get install linux-headers-3.11.0-12-generic
    sudo /usr/bin/vmware-modconfig --icon=vmware-workstation --appname=VMware
    I get:
    cc1: some warnings being treated as errors
    make[2]: *** [/tmp/modconfig-T9k19t/vmci-only/linux/driver.o] Error 1
    make[2]: *** Waiting for unfinished jobs....
    make[1]: *** [_module_/tmp/modconfig-T9k19t/vmci-only] Error 2
    make[1]: Leaving directory `/usr/src/linux-headers-3.11.0-12-generic'
    make: *** [vmci.ko] Error 2
    make: Leaving directory `/tmp/modconfig-T9k19t/vmci-only'
    Failed to build vmci. Failed to execute the build command.
    Starting VMware services:
    Virtual machine monitor done
    Virtual machine communication interface failed
    VM communication interface socket family done
    Blocking file system done
    Virtual ethernet failed
    VMware Authentication Daemon done

    Read the article

  • JPRT: A Build & Test System

    - by kto
    DRAFT A while back I did a little blogging on a system called JPRT, the hardware used and a summary on my java.net weblog. This is an update on the JPRT system. JPRT ("JDK Putback Reliability Testing", but ignore what the letters stand for, I change what they mean every day, just to annoy people :\^) is a build and test system for the JDK, or any source base that has been configured for JPRT. As I mentioned in the above blog, JPRT is a major modification to a system called PRT that the HotSpot VM development team has been using for many years, very successfully I might add. Keeping the source base always buildable and reliable is the first step in the 12 steps of dealing with your product quality... or were those the 12 steps from Alcoholics Anonymous... oh well, anyway, it's the first of many steps. ;\^)

    Internally, when we make changes to any part of the JDK, there are certain procedures we are required to perform prior to any putback or commit of the changes. The procedures often vary from team to team, depending on many factors, such as whether native code is changed, or whether the change could impact other areas of the JDK. But a common requirement is a verification that the source base with the changes (and merged with the very latest source base) will build on many if not all of the 8 platforms, with a full 'from scratch' build rather than an incremental build, which can hide full build problems. The testing needed varies, depending on what has been changed. Anyone that has worked on a project where multiple engineers or groups are submitting changes to a shared source base knows how disruptive a 'bad commit' can be for everyone. How many times have you heard: "So And So made a bunch of changes and now I can't build!". But multiply that across 8 platforms, make all the platforms old and antiquated OS versions with bizarre system setup requirements, and you have a pretty complicated situation (see http://download.java.net/jdk6/docs/build/README-builds.html).

    We don't tolerate bad commits, but our enforcement is somewhat lacking; usually it's an 'after the fact' correction. Luckily the Source Code Management system we use (another antique called TeamWare) allows for a tree of repositories, and 'bad commits' are usually isolated to a small team. Punishment to date has been pretty drastic: the Queen of Hearts in 'Alice in Wonderland' said 'Off With Their Heads', and trust me, you don't want to be the engineer doing a 'bad commit' to the JDK. With JPRT, hopefully this will become a thing of the past; not that we have had many 'bad commits' to the master source base, in general the teams doing the integrations know how important their jobs are and they rarely make 'bad commits'. So for these JDK integrators, maybe what JPRT does is keep them from chewing their fingernails at night. ;\^)

    Over the years each of the teams has accumulated sets of machines they use for building, or they use some of the shared machines available to all of us. But the hunt for build machines is just part of the job, or has been. And although consistency of the build machines hasn't been a horrible problem, you often never know whether the Solaris build machine you are using has all the right patches, whether the Linux machine has the right service pack, or whether the Windows machine has its latest updates. Hopefully the JPRT system can solve this problem. When we ship the binary JDK bits, it is SO very important that the build machines are correct, and we know how difficult it is to get them set up. Sure, if you need to debug a JDK problem that only shows up on Windows XP or Solaris 9, you'll still need to hunt down a machine, but not as a regular everyday occurrence.

    I'm a big fan of a regular nightly build and test system, constantly verifying that a source base builds and tests out. There are many examples of automated build/test setups, some that trigger on any change to the source base, some that just run every night. Some provide a protection gateway to the 'golden' source base, which only gets changes that the nightly process has verified are good. The JPRT (and PRT) system is meant to guard the source base before anything is sent to it, guarding all source bases from the evil developer; well, maybe 'evil' isn't the right word, I haven't met many 'evil' developers, more like 'error prone' developers. ;\^) Humm, come to think about it, I may be one from time to time. :\^{ But the point is that by spreading the build up over a set of machines, and getting the turnaround down to under an hour, it becomes realistic to completely build on all platforms, and test it, on every putback. We have the technology, we can build and rebuild and rebuild, and it will be better than it was before, ha ha... Anybody remember the Six Million Dollar Man? Man, I gotta get out more often.. Anyway, now the nightly build and test can become a 'fetch the latest JPRT build bits' step followed by extensive testing (the testing not done by JPRT, or on the platforms not tested by JPRT).

    Is it Open Source? No, not yet. Would you like it to be? Let me know. Or is it more important that you have the ability to use such a system for JDK changes? So enough blabbering on about this JPRT system, tell me what you think. And let me know if you want to hear more about it or not. Stay tuned for the next episode, same Bloody Bat time, same Bloody Bat channel. ;\^) -kto

    Read the article

  • System Slow After Upgrading Ubuntu

    - by Aragon N
    I have an Ubuntu networked machine running release 10.04.1 LTS (Lucid). On this system I have Apache, PostgreSQL and Django. For some app development I had to install php and php-curl. Because it sits on the network, I exported the VMware machine to a location with internet access; first I upgraded the system and then installed the php5 packages on it. After putting it back in its old place, I noticed that queries on the new system are much slower than before. Old system query time: 140 ms. New system query time: 9.11 s. I have checked /etc/network/interfaces and it seems there is no problem. I have checked /etc/resolv.conf and it seems OK. I have checked /etc/nsswitch.conf and only the hosts section is different from the old one; the old system has: hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4. Then I ran time host -t A services.myapp.com and got: real 0m0.355s user 0m0.010s sys 0m0.020s. What else can I check to get my system performing as it did before?

    Read the article

  • Parallel Port Problem in 12.04

    - by Frank Oberle
    I have a "dumb" printer attached to a parallel port in my machine which works fine under the "other" resident operating system (from Redmond) on the same machine. I recently added Ubuntu 12.04 as a dual boot on the machine, but Ubuntu doesn't seem to recognize the parallel port at all. All I need to set up a printer is a really plain-vanilla fixed-pitch text-only generic driver, which is present, but no parallel ports show up. (The other printers, all on USB ports, seem to work just fine). Following what appeared to me to be the most reasonable of the many conflicting pieces of advice on the web, here's what I did: I added the following lines to /etc/modules:
    parport_pc
    ppdev
    parport
    Then, after rebooting, I checked to see that the lines were still present, and they were. I ran dmesg | grep par and got the following references in the output that seemed like they might have to do with the parallel port:
    [ 14.169511] parport_pc 0000:03:07.0: PCI INT A -> GSI 21 (level, low) -> IRQ 21
    [ 14.169516] PCI parallel port detected: 9710:9805, I/O at 0xce00(0xcd00), IRQ 21
    [ 14.169577] parport0: PC-style at 0xce00 (0xcd00), irq 21, using FIFO [PCSPP,TRISTATE,COMPAT,ECP]
    [ 14.354254] lp0: using parport0 (interrupt-driven).
    [ 14.571358] ppdev: user-space parallel port driver
    [ 16.588304] type=1400 audit(1347226670.386:5): apparmor="STATUS" operation="profile_load" name="/usr/lib/cups/backend/cups-pdf" pid=964 comm="apparmor_parser"
    [ 16.588756] type=1400 audit(1347226670.386:6): apparmor="STATUS" operation="profile_load" name="/usr/sbin/cupsd" pid=964 comm="apparmor_parser"
    [ 16.673679] type=1400 audit(1347226670.470:7): apparmor="STATUS" operation="profile_load" name="/usr/lib/lightdm/lightdm/lightdm-guest-session-wrapper" pid=1010 comm="apparmor_parser"
    [ 16.675252] type=1400 audit(1347226670.470:8): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/mission-control-5" pid=1014 comm="apparmor_parser"
    [ 16.675716] type=1400 audit(1347226670.470:9): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/telepathy-*" pid=1014 comm="apparmor_parser"
    [ 16.676636] type=1400 audit(1347226670.474:10): apparmor="STATUS" operation="profile_replace" name="/usr/lib/cups/backend/cups-pdf" pid=1015 comm="apparmor_parser"
    [ 16.677124] type=1400 audit(1347226670.474:11): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/cupsd" pid=1015 comm="apparmor_parser"
    [ 1545.725328] parport0: ppdev0 forgot to release port
    I have no idea what any of that means, but the line "parport0: ppdev0 forgot to release port" seems unusual. I was still unable to add a printer for my old clunker, so I tried the direct approach, typing echo "Hello" > /dev/lp0 and received a Permission denied message. I then tried echo "Hello" > /dev/parport0 which didn't give me any message at all, but still didn't print anything. Running the command sudo /usr/lib/cups/backend/parallel gives the following:
    direct parallel:/dev/lp0 "unknown" "LPT #1" "" ""
    Checking the permissions for /dev/parport0, Owner, Group, and Other are all set to read and write:
    crw-rw---- 1 root lp 6, 0 Sep 9 16:37 /dev/lp0
    crw-rw-rw- 1 root lp 99, 0 Sep 9 16:37 /dev/parport0
    The output of the command lpinfo -v includes the following line:
    direct parallel:/dev/lp0
    I've read several web postings that seem to suggest this has been a problem for several years, but the bug reports were closed because there wasn't enough information to address the issue (shades of Microsoft!). Any suggestions as to what I might be missing here?
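
    One thing the Permission denied message suggests checking (an assumption, not a confirmed fix): /dev/lp0 is owned by root:lp with no world write access, so direct writes from a regular user only work if that user is in the lp group, or if the test is run as root. A minimal sketch:

    [code]
    # See whether the current user is in the lp group.
    groups
    # If not, add the user to it (takes effect after logging out and back in).
    sudo usermod -a -G lp $USER
    # Or test the port as root, bypassing the group check entirely.
    echo "Hello" | sudo tee /dev/lp0
    [/code]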

    Read the article

  • Sound notification over SSH

    - by Lekensteyn
    I just switched from the Konversation IRC client to the terminal based IRSSI. I'm starting IRSSI on a remote machine using GNU screen + SSH. I do not get any sound notification on new messages, which means that I've to check out IRSSI once in a while for new messages. That's not really productive, so I'm looking for an application / script that plays a sound (preferably /usr/share/sounds/KDE-Im-Irc-Event.ogg and not the annoying beep) on my machine if there is any activity. It would be great if I can disable the notification for certain channels. Or, if that's not possible, some sort of notification via libnotify, thus making it available to GNOME and KDE.
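
    A minimal sketch of one way to do this (it assumes irssi on the remote machine is appending activity lines to a file, for example via the commonly used fnotify.pl script; the host name and file path are placeholders):

    [code]
    #!/bin/sh
    # Run on the local desktop: follow the remote notification file over SSH
    # and play a sound for every new line that arrives.
    ssh remotehost tail -f .irssi/fnotify | while read -r line; do
        paplay /usr/share/sounds/KDE-Im-Irc-Event.ogg
        # Optionally also raise a desktop notification (GNOME/KDE via libnotify).
        notify-send "IRC activity" "$line"
    done
    [/code]

    Filtering out certain channels would then just be a matter of inserting a grep between the tail and the loop.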

    Read the article

  • How to setup passwordless SSH access for root user

    - by Cerin
    I need to configure a machine so software installation can be automated remotely via SSH. Following the wiki, I was able to set up SSH keys so my user can access the machine without a password, but I still need to manually enter my password when I use sudo, which obviously an automated process shouldn't have to do. Although my /etc/ssh/sshd_config has PermitRootLogin yes, I can't seem to log in as root, presumably because it's not a "real" account with a separate password. How do I configure SSH keys so a process can remotely log in as root on Ubuntu?
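
    A minimal sketch of one common way to do this (it assumes your public key is ~/.ssh/id_rsa.pub and the standard OpenSSH paths): copy the key into root's authorized_keys and restrict root to key-based logins only.

    [code]
    # On the target machine, as your sudo-capable user:
    sudo mkdir -p /root/.ssh
    sudo chmod 700 /root/.ssh
    # Append your existing public key to root's authorized_keys.
    cat ~/.ssh/id_rsa.pub | sudo tee -a /root/.ssh/authorized_keys
    sudo chmod 600 /root/.ssh/authorized_keys
    # In /etc/ssh/sshd_config, "PermitRootLogin without-password" allows root
    # logins with keys but never with a password; reload sshd afterwards.
    sudo service ssh reload
    [/code]

    An alternative that avoids enabling root logins at all is a NOPASSWD sudoers entry for the automation user.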

    Read the article

  • Is it safe to convert Windows file paths to Unix file paths with a simple replace?

    - by MxyL
    So for example say I had it so that all of my files will be transferred from a Windows machine to a Unix machine as such: C:\test\myFile.txt to {somewhere}/test/myFile.txt (the drive letter is irrelevant at this point). Currently, our utility library that we wrote ourselves provides a method that does a simple replace of all backslashes with forward slashes: public String normalizePath(String path) { return path.replace('\\', '/'); } Slashes are reserved and cannot be part of a file name, so the directory structure should be preserved. However, I'm not sure if there are other complications between Windows and Unix paths that I may need to worry about (e.g. non-ASCII names, etc.)

    Read the article

  • Ubuntu 12.10 with kernel 3.5.0-18 failing to boot

    - by Chaitanya
    I had 12.04 with kernel 3.0.2. Today I updated my system and got 12.10 with kernel 3.5.0-18. Now when I boot my machine with the 3.5 kernel, it starts up until the page where I enter my password. Within seconds, I get a page with a looooong list of some commands or messages. I can't take a screenshot of that. It looks something like: [1.2234978942837]kjsahfa;lsfksld;fkjsf;owieurwirejw/rnw;erkjwelrjw2309480432 [1.3294823498230948]as;lfjsf;iuwrijrwjlkerjw;rekwer;lkwjre;lkjRIJWEORIWE'JJA; Luckily, my boot menu also still has the 3.0.2 kernel. When I boot with the 3.0.2 kernel there is no problem. But when I boot with 3.5.0, it throws that weird error and I can't do anything at that point. None of the keys work. I have to forcibly shut down the machine and restart with the 3.0.2 kernel. Please help.

    Read the article

  • Why does my RAID configuration with mdadm fail on reboots?

    - by Andy B
    I have been running Ubuntu Server on my machine for 2 years and it has worked OK. I would like to speed it up by RAIDing a few drives. The machine is used to host my MySQL databases internally, using mdadm RAID. I have tried 2 schemes so far with the 3 drives: (1) 2 partitions on each drive, one for swap and one for the OS, both of them turned into RAID level 5 arrays; (2) 3 partitions on each drive, one for boot, one for swap and one for root, with the boot array set to RAID level 1 and the swap and root arrays set to level 5. Both setups worked fine for about a week, then on a reboot things fall apart. By fall apart I mean I end up with a bunch of hard drive errors on the screen and then get a GRUB prompt. Why do they fail on reboots? I am eager to understand what I am doing wrong. Thanks!
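
    A sketch of one common cause worth ruling out (this is an assumption about what might be going wrong, not a diagnosis): if the arrays were created after installation, /etc/mdadm/mdadm.conf and the initramfs may not list them, so they are not assembled early enough for GRUB and the root filesystem.

    [code]
    # Record the current array definitions so they get assembled at boot.
    sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
    # Rebuild the initramfs so the early boot environment picks up the change.
    sudo update-initramfs -u
    # After the next reboot, confirm the arrays came up cleanly.
    cat /proc/mdstat
    [/code]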

    Read the article

  • Is the slower performance of programming languages really a bad thing?

    - by Emanuil
    Here's how I see it. There's machine code, and it's all that computers need in order to run something. Computers don't care about programming languages. It doesn't matter to them whether the machine code comes from Perl, Python or PHP. Programming languages don't serve computers. They serve programmers. Some programming languages run slower than others, but that's not necessarily because there is something wrong with them. In many cases, it's because they do more things that programmers would otherwise have to do (i.e. memory management), and by doing these things they are better at what they are supposed to do: serve programmers. So, is the slower performance of programming languages really a bad thing?

    Read the article

  • Keyboard settings for a Mac+PC world

    - by Sahil Malik
    SharePoint, WCF and Azure Trainings: more information. I'm one of those weirdos who lives in a Mac+PC world. I write code for both the iOS and Windows platforms. I also travel quite a bit, and airlines and airport security are starting to weigh your carry-ons and beginning to frown on the power plant of batteries you need to carry to power SharePoint on an airplane. This means my main work machine has to be a MacBook Pro, since it is the only machine that can do both Xcode and Visual Studio virtualized and SharePoint virtualized nicely. The problem this causes, of course, is that you will literally pull your hair out when dealing with keyboard/shortcut differences. So here is my work setup: running the Mac for all my normal work; virtualizing using VMware Fusion (and sometimes I move these VMs to my Windows server so I can run them on VMware Workstation); frequently RDP'ing into VMs in the cloud or running on my home server. So, Read full article ....

    Read the article

  • Criteria for selecting timeout value?

    - by stijn
    Situation: a piece of software reads frames of data from a file in a separate thread and puts them on a queue, which is emptied by another thread. That second thread periodically checks the queue and fails rather gracefully, by showing an error message stating the read timed out, if no data is available within a certain amount of time. Initially this timeout was set to 200 msec. There was no real reasoning behind that constant, but it worked fine. We measured on a couple of machines, and for large data frames, larger than what would be used by customers, a read took about 20 msec with no other load on the machine. However, one customer now gets timeout errors now and then (on the second try all is fine; probably the file is in cache or the virus scanner leaves it alone). The programmers are like 'well, yeah, but that customer's machine is full of cruft, virus scanners, tons of unneeded background processes, etc'. Of course the customer is like 'hey, this should just work, shouldn't it?'. While the programmers have a point, since the software is heavy enough to justify the need for a dedicated machine, that does not make the customer happy. Increasing the timeout to 2 seconds, for example, solves the problem. But I'd like to make a proper decision now instead of just randomly picking some magic constant that is probably OK in 99% of cases. What criteria should be used for that? We could just pick a large number, but that feels wrong (and then we end up with a program that has the horrible behaviour of hanging when trying to read from a disconnected drive, for instance, whereas we'd rather make it show an error right away). Or we could make the timeout value a user setting, but then we need to document it clearly, and even then not all customers are tech-savvy enough to really understand what it does. Or we could try and wait until another customer reports timeouts and increase the value again. And again. Until we find something OK for 99.99% of the cases. Any good practice for this type of situation?

    Read the article

  • How do we install Unity-2D and dependencies offline?

    - by Takkat
    We have installed 11.04 32-bit on an old machine that has no internet connection, with a graphics card that is not suitable for running Compiz or Unity. Still, we would like to run Unity-2D on this machine. We are aware of answers to this question. Sadly, Keryx will not run on 11.04 32-bit because of unmet dependencies. Building an offline repository is not an option because of limited storage capacity. Is there any other convenient way to find, download, install, and eventually update unity-2d and all its dependencies (preferably via an OS-independent download path)?
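
    A sketch of one possible workaround (it assumes temporary access to another machine, or a live session, running the same release and architecture with internet access; unity-2d is the package name as it appears in the 11.04 archives): let apt compute the dependency closure there and carry over only the resulting .deb files.

    [code]
    # On the online 11.04 32-bit machine: list the download URLs for unity-2d
    # and every dependency apt would install, without installing anything.
    apt-get --print-uris --yes install unity-2d > uris.txt
    # Fetch the listed .deb files into the current directory.
    grep -o "'http[^']*'" uris.txt | tr -d "'" | wget -i -
    # Copy the .deb files to the offline machine (USB stick etc.), then there:
    sudo dpkg -i *.deb
    [/code]

    The closure apt prints depends on what is already installed on the online machine, so the two systems should match as closely as possible.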

    Read the article

  • Does the Ubuntu mini.iso work with EFI?

    - by jean388
    I have to install Ubuntu 11.10 from the AMD64 mini.iso on a system with a UEFI BIOS motherboard. I have configured a virtual machine in VirtualBox to do a test install before I set up the real system. In VirtualBox I have enabled EFI. When the virtual machine is powered on and boots the mini.iso, only the GRUB command line is shown. If I try to boot the normal Ubuntu CD it works fine and I get the normal boot options ("Install Ubuntu" etc.). Does the Ubuntu mini.iso not work with EFI?

    Read the article

  • E3 2012: Everything about the Wii U: the specifications of Nintendo's console, the games and other details

    Everything about the Wii U: the specifications, the games, the details and the rest. At E3 we were able to learn a lot about the Wii U, Nintendo's next console. Let's start with the specifications of the machine: Machine: [IMG]http://jeux.developpez.com/news/images/WiiU.jpg[/IMG] 4.57 cm high by 26.67 cm long by 17.2 cm wide; 1.5 kg; the processor is based on a multi-core IBM Power design; the graphics card is based on an AMD Radeon HD; flash memory storage, with SD card support and four USB ports (two at the front and two at the back); ...

    Read the article

  • Ubuntu 14.04 network discovery not showing up on Windows 8 but working on Windows 7

    - by Schwabber
    I have an old PC that is now my new Ubuntu machine. Currently I am working on sharing a drive so that backups and streaming can take place. I have it set up perfectly on my Windows 7 laptop (able to read and write to it). For some reason, however, my wife's Windows 8 laptop is not showing up on the Ubuntu machine and vice versa. I turned on network discovery on the Win8 machine, but that didn't help. Thanks in advance. Edit: I have my Win7 and Win8 machines in the same homegroup and both can see each other on the network. Also the workgroup is the same.

    Read the article

  • Synergy - easy sharing of keyboard and mouse between multiple computers

    Did you ever have the urge to share one set of keyboard and mouse between multiple machines? If so, please read on...

    Using multiple machines
    Honestly, as a software craftsman it is my daily business to run multiple machines - either physical or virtual - to be able to solve my customers' requirements. Recent hardware makes this very easy. For laptops it's a no-brainer to attach a second or even a third screen in order to extend your native display. This works quite handily, and in my case I used to attach two additional screens - one via the HD15 connector, the other via HDMI. But... as it's a laptop and therefore a mobile unit, there are slight restrictions. Detaching and re-attaching all the cables when changing locations is one of them, but there are hardware limitations, too. After all, it's a laptop and not a workstation. I guess that anyone working in IT (or ICT) has more than one machine at their workplace or home office, and I for one find it quite annoying to have multiple sets of keyboard and mouse conquering the remaining space on my desk. Despite the ugly looks of all those cables and the whole 'chaos of distraction', I prefer a cleaner solution and working environment. This allows me to actually focus on my work and tasks rather than worry about choosing the right combination of keyboard/mouse. My current workplace is a patchwork of various pieces of hardware (approx. 2-3 years old):
    DIY desktop on Ubuntu 12.04 64-bit, Core2 Duo (E7400, 2.8GHz), 4GB RAM, 2x 250GB HDD, nVidia GPU 512MB
    Dell Inspiron 1525 on Windows 8 64-bit, 4GB RAM, 200GB HDD
    HP Compaq 6720s on Windows Vista 32-bit, Core2 Duo (T5670, 1.8GHz), 2GB RAM, 160GB HDD
    Mac mini on Mac OS X 10.7, Core i5 (2.3 GHz), 2GB RAM, 500GB HDD
    I know... Not the latest and greatest, but a decent combination to work with. New systems are already on the shopping list, but I live in the 'wrong' country to buy computer hardware. So, the next trip abroad will provide me with some new stuff.

    Using multiple operating systems
    The list of hardware above already names different operating systems, and actually I have only one preference: Linux. But my job as a software craftsman for Visual FoxPro and .NET development requires other OSes, too. Not a big deal, it's just like this. In addition to those physical machines, there are a bunch of virtual machines around, most of them running either Windows XP or Windows 7. For years I have had the practice that development for each customer is isolated into its own virtual machine and environment. This keeps things clean and version-safe. But as you can easily imagine, with that setup there are a couple of constraints regarding keyboard and mouse. Usually those systems require their own pieces of hardware attached. As stated, I don't like clutter on my desk's surface, so a cross-platform solution has to come in here. In the past I tried various applications, hardware and network protocols like X11, RDP, NX, TeamViewer, RAdmin, KVM switches, etc., but the problem is that they either allow you to remotely connect to the other system or exclusively 'bind' your peripherals to the active system. Not optimal after all.

    Synergy to the rescue
    Quote from their website: "Synergy lets you easily share your mouse and keyboard between multiple computers on your desk, and it's Free and Open Source. Just move your mouse off the edge of one computer's screen on to another. You can even share all of your clipboards. All you need is a network connection. Synergy is cross-platform (works on Windows, Mac OS X and Linux)." Yep, that's it! All I need for my setup here... Actually, I couldn't believe it myself that I hadn't stumbled over Synergy earlier, but 'get over it' and there we go. And not only is it Open Source, it's also free. Donations for the developers are very welcome, and recently they introduced Synergy Premium: a possibility to buy so-called premium votes that can be used to put more weight / importance on specific issues or bugs that you would like the developers to look into.

    Installation and configuration
    Simply download the installation packages for your systems of choice, run the installer and enter some minor information about your network setup. I chose my desktop machine for the role of the Synergy server and configured my screen setup as follows: the screen setup currently allows you to build or connect up to 15 machines. The number of screens can be higher, as those machines might have multiple screens physically attached. Synergy takes this into the overall calculation and simply works as expected. I tried it for fun with a second monitor connected to each of the two laptops, for a total of 6 active screens. No flaws at all - stunning! All the other machines are configured as clients like so: Side note: the screenshot was taken on Windows 8 and pasted via the clipboard into Gimp running on Ubuntu.

    Resume
    Synergy is now definitely in my box of tools for my daily work, and amongst the first pieces of software I install after the operating system. It just simplifies my life and cleans my desk. Never again without Synergy! Now, only waiting for an Android version to integrate my Galaxy Tab 10.1, too. ;-) Please check out that superb product and enjoy sharing one keyboard, one mouse and one clipboard between your various machines and operating systems.
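
    For reference, a minimal sketch of what the equivalent text configuration looks like when Synergy is run without the GUI (the host names and layout are made-up examples; synergys and synergyc are the server and client binaries the packages install):

    [code]
    # Made-up two-machine layout: client "laptop" sits to the right of the
    # server "desktop". The server reads a plain-text config like this:
    #
    #   section: screens
    #       desktop:
    #       laptop:
    #   end
    #   section: links
    #       desktop:
    #           right = laptop
    #       laptop:
    #           left = desktop
    #   end
    #
    # Start the server with that config, then point each client at it:
    synergys -c ~/.synergy.conf
    synergyc desktop    # on each client; "desktop" = the server's hostname or IP
    [/code]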

    Read the article

  • After some weird flash-out, I can't log in to wmii any more; how do I fix it?

    - by Zen
    I've been using wmii on Ubuntu 14.04 (a virtual machine on Windows 7) for months. During that time, I got popped out to the login screen several times due to some weird mouse-click action. But today, after such a weird pop-out, I can't log in to wmii any more; I'm stuck at the login interface. The bottom yellow bar is the command area for wmii, but it has no response when I press Mod + p. I restarted my machine and even reinstalled wmii, but every time I try to log in to wmii I get stuck at that interface. By the way, I log in to wmii from the login screen, where I can choose between GNOME and wmii. How do I fix this? I'm crying for help! PS: I can log in to GNOME normally.

    Read the article

  • Why can't I get to a shared folder from Windows

    - by Ron
    I want to access a folder on my new Ubuntu 12.10 box from any machine on my network without the need to provide credentials. My machine name is Ubuntu1. I have a 2TB disk, formatted NTFS, that has media on it; the mount point is mount1. I have numerous folders on the disk and I want to share each of them individually. I have enabled folder1 and folder2 for sharing: I have enabled shared access, allowed others to create and delete files in the folder, and guest access is allowed. The folder icon now has arrows, so I assume all is good. From Windows, under Network, I can see Ubuntu1\folder1 and Ubuntu1\folder2. When I click to open a folder from Windows I get the error "You cannot access \\Ubuntu1\folder1. You do not have permission to access \\Ubuntu1\folder1." I have both machines in the same workgroup. Your assistance would be appreciated.
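
    A sketch of one thing to compare against (these smb.conf lines are an assumption about the missing piece, not a confirmed fix): for access without credentials, Samba generally needs unknown users mapped to the guest account globally, in addition to the per-share guest flag the folder-sharing dialog sets.

    [code]
    # Settings to check in /etc/samba/smb.conf on Ubuntu1 (excerpt, not a full file):
    #
    #   [global]
    #       workgroup = WORKGROUP
    #       map to guest = Bad User    # map unknown users to the guest account
    #
    #   [folder1]
    #       path = /media/mount1/folder1
    #       guest ok = yes
    #       read only = no
    #
    # Validate the config and restart Samba after editing:
    sudo testparm
    sudo service smbd restart
    [/code]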

    Read the article

  • Juju instances in agent-state: down after turning them off (and back on) on EC2

    - by Tyler McAdams
    I turned my Juju instances off on EC2 for a while and after bringing them back online they seem to be in an odd state:
    [code]
    claude-vm@claude-vm-fusion:~/Documents/Shell Scripts$ juju status
    2012-11-17 17:06:44,094 INFO Connecting to environment...
    2012-11-17 17:06:45,590 INFO Connected to environment.
    machines:
      0:
        agent-state: not-started
        dns-name: ec2-54-242-142-196.compute-1.amazonaws.com
        instance-id: i-b0996fcf
        instance-state: running
      1:
        agent-state: down
        dns-name: ec2-50-19-186-245.compute-1.amazonaws.com
        instance-id: i-8c8375f3
        instance-state: running
      2:
        agent-state: down
        dns-name: ec2-54-242-255-238.compute-1.amazonaws.com
        instance-id: i-56807629
        instance-state: running
    services:
      wordpress:
        charm: cs:precise/wordpress-9
        exposed: true
        relations:
          db:
          - wordpress-db
          loadbalancer:
          - wordpress
        units:
          wordpress/0:
            agent-state: down
            machine: 2
            open-ports:
            - 80/tcp
            public-address: ec2-54-242-227-57.compute-1.amazonaws.com
      wordpress-db:
        charm: cs:precise/mysql-10
        relations:
          db:
          - wordpress
        units:
          wordpress-db/0:
            agent-state: down
            machine: 1
            public-address: ec2-54-242-212-177.compute-1.amazonaws.com
    2012-11-17 17:06:47,274 INFO 'status' command finished successfully
    [/code]
    Can I not take my instances down for a while? Or is this something else?

    Read the article
