Search Results

Search found 18842 results on 754 pages for 'the machine'.


  • Sync csv file using nodejs

    - by Amit Dugar
    There is a remote csv file that gets updated every second or so. I need to download it (on a Windows machine) ONCE and then always keep the local file in sync with the remote one. Obviously, downloading the whole file every time is not an option; I need to download only the changes (something like rsync or rdiff-backup). I searched quite a bit but could not find how to do this. I am sort of new to nodejs and am using this app as an opportunity to expand my nodejs skills. Also, I am planning to use nodejs and to package it using node-webkit (https://github.com/rogerwang/node-webkit).
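    A minimal sketch of one approach (both assumptions are mine, not from the question: the server honours HTTP Range requests and the CSV only grows by appending rows): poll, request only the bytes past what is already on disk, and append them. It uses Node's built-in fetch (Node 18+); the URL is a placeholder.

        import { appendFileSync, writeFileSync, statSync, existsSync } from "node:fs";

        const REMOTE_URL = "https://example.com/data.csv"; // placeholder
        const LOCAL_PATH = "data.csv";

        async function syncOnce(): Promise<void> {
          // How many bytes do we already have locally?
          const offset = existsSync(LOCAL_PATH) ? statSync(LOCAL_PATH).size : 0;

          // Ask the server only for the bytes we are missing.
          const res = await fetch(REMOTE_URL, { headers: { Range: `bytes=${offset}-` } });

          if (res.status === 206) {
            // Partial content: append just the newly added rows.
            appendFileSync(LOCAL_PATH, Buffer.from(await res.arrayBuffer()));
          } else if (res.status === 416) {
            // Range not satisfiable: nothing new since the last poll.
          } else if (res.ok) {
            // Server ignored the Range header and sent the whole file; overwrite.
            writeFileSync(LOCAL_PATH, Buffer.from(await res.arrayBuffer()));
          }
        }

        setInterval(() => syncOnce().catch(console.error), 1000); // poll about once a second

    If the file is rewritten in place rather than appended to, byte ranges won't help; comparing ETag/Last-Modified headers, or a library that computes rolling-checksum deltas, would be the closer equivalent of rsync.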

    Read the article

  • HD Video Performance Unacceptable

    - by Mike Hasselbeck
    Was wondering if anyone could help me boost HD 1080p video performance on my machine? I've got an AMD Athlon X2 dual-core processor, 2 GB RAM and an ATI Radeon 5450 video card. I've installed the latest ATI Catalyst drivers, and I installed the hardware acceleration things and linked them (I believe) to VLC. Even so, it's still not running as well as I would like. Any thoughts or suggestions? Any help would be much appreciated. Thanks!

    Read the article

  • Interpolation gives the appearance of collisions

    - by Akroy
    I'm implementing a simple 2D platformer with a constant-rate update of the game logic, but with rendering done as fast as the machine can handle. I interpolate positions between actual game updates by just using the position and velocity of objects at the last update. This makes things look really smooth in general, but when something hits a wall/floor, it appears to go through the wall for a moment before being positioned correctly. This is because the interpolator does not take walls into account, so it extrapolates positions into walls until the actual game update fixes it. Are there any particularly elegant solutions for this? Simply increasing the update rate seems like a band-aid solution, and I'm trying to avoid increasing the system requirements. I could also check for collisions in the actual interpolator, but that seems like heavy overhead, and then I'm no longer separating the drawing from the game updating.
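    One lightweight fix, sketched below (the type and field names are invented, not from the question): let the fixed game update record the contact coordinate when it resolves a collision, and have the renderer clamp its extrapolated position against that value instead of re-running collision detection.

        interface Body {
          // Snapshot taken at the last fixed game update.
          x: number;
          y: number;
          vx: number;
          vy: number;
          // Set by the fixed update when it resolves a collision on an axis,
          // cleared once the body is no longer in contact. The renderer must
          // not draw the body past this coordinate until the next update.
          wallX?: number;
          wallY?: number;
        }

        // dt: seconds elapsed since the last fixed update (what the renderer knows).
        function renderPosition(b: Body, dt: number): { x: number; y: number } {
          // Same extrapolation as before: last known position plus velocity.
          let x = b.x + b.vx * dt;
          let y = b.y + b.vy * dt;

          // Clamp toward the recorded contact instead of carrying the sprite into the wall.
          if (b.wallX !== undefined && b.vx !== 0) {
            x = b.vx > 0 ? Math.min(x, b.wallX) : Math.max(x, b.wallX);
          }
          if (b.wallY !== undefined && b.vy !== 0) {
            y = b.vy > 0 ? Math.min(y, b.wallY) : Math.max(y, b.wallY);
          }
          return { x, y };
        }

        // Example: a body that hit a floor at y = 300 during the last fixed update.
        const player: Body = { x: 100, y: 296, vx: 40, vy: 120, wallY: 300 };
        console.log(renderPosition(player, 0.05)); // y is clamped to 300, not 302

    The render path then costs two comparisons per axis rather than a collision query, so drawing and game updates stay separate.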

    Read the article

  • AppFabric – where are all the monitoring events?

    - by Shawn Cicoria
    When you’ve just gone through a setup of AppFabric and you’ve got some WF/WCF things happening, if you start looking at the Dashboard and you see nothing, it might be as simple as restarting SQL Agent. I generally don’t reboot my system for several days, and after installing AppFabric the SQL Agent jobs didn’t start firing right away. Yes, even running a boot to VHD, you can still put the machine to sleep (just logoff and click on Sleep)… So, after spending time looking through the SQL monitoring DB that AppFabric was configured to use, I saw a bunch of records in the [AppFabric_Monitoring].[dbo].[ASStagingTable] table. This table is the stopping point before the SQL Agent job (or Service Broker in SQL Express) pushes the items to their final resting place. This post goes through a few things to check on AppFabric monitoring: http://social.technet.microsoft.com/wiki/contents/articles/appfabric-items-to-check-when-configuring-appfabric-monitoring.aspx Of course, during development you might want to clean up regularly. For that, there's the PowerShell command Clear-AsMonitoringSqlDatabase -Database AppFabric_Monitoring

    Read the article

  • node-xmpp-bosh error on Ubuntu 11.10

    - by megueloby
    I am a newbie in the Linux world. I want to implement a bosh server. Because it is hard on the Windows platform, I decided to deploy it on an Ubuntu virtual machine via VMware. I made the installation without problems; I followed the process on this page. Now I want to test my bosh server with the command sudo bosh or sudo /etc/init.d/bosh start. After typing those, the terminal shows Starting bosh server and then nothing. I looked in the bosh.err file and I see

        exec: 2: /usr/local/lib/bosh/run-server.js: Permission denied

    I don't know why I get this error with sudo. When I try ls -l /usr/local/lib/bosh/run-server.js it shows

        -rw-r--r-- 1 root root 4889 2012-04-01 18:50 /usr/local/lib/bosh/run-server.js

    How can I make bosh start?

    Read the article

  • Gnome shell crashing in 11.10 with 'purple haze' effect

    - by Andy
    I've just got Gnome with 11.10 up and running on my netbook and liked it so much I thought I'd get my parents' old machine sorted with Ubuntu too. Unity works fine, but when I try to switch to Gnome shell, I get problems. On login, the wallpaper appears as normal but then the colour bleeds out from the centre, leaving traces of it around the edge in a purple haze (yes, I do like Jimi Hendrix but I'm not making this up). When I go to applications, typing then pressing Enter in the search field dumps me out so there's nothing but File / Edit / View etc. in the top left; starting a program seems to work but then there's only a white screen and no program window. Gnome classic works fine, from the limited use I've given it. I'm using an Asus desktop with an ABIT motherboard, 2.6 GHz with 1 GB RAM; I've checked drivers and it says I'm up to date, Nvidia graphics. Anyone any ideas?

    Read the article

  • How can I optimize my development machine's files/dirs?

    - by LuxuryMode
    Like any programmer, I've got a lot of stuff on my machine. Some of that stuff is projects of my own, some are projects I'm working on for my employer, others are open-source tools and projects, etc. Currently, I have my files organized as follows:

        /Code
          /development        (things I'm sort of hacking on plus maybe libraries used in other projects)
            /scala            (organized by language... why? I don't know!)
            /android
            /ruby
          /employer_name
            /mobile
              /android
              /ios
          /open-source        (basically my forks that I'm pushing commits back upstream from)
            /some-awesome-oss-project
            /another-awesome-one
          /tools
        ...plus random IDE settings sprinkled in here and some other apps.

    As you can see, things are kind of a mess here. How can I keep things organized in some sort of coherent fashion?

    Read the article

  • Assign subdomains to separate ports on web server

    - by Michael Frank
    I have set up an Abyss web server as a little experiment, and I want to know if it is possible to assign subdomains to different ports on the machine the web server is running on. I have a couple of webUIs that I'd like to assign subdomains to: 192.168.1.1:8000 becomes example.com/webui1/ and 192.168.1.1:8001 becomes example.com/webui2/ The webUIs are currently available by accessing their ports directly, e.g. example.com:8000. I have tried using a reverse proxy, but it seems that this is only usable for one internal IP at a time. What other options do I have?
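    If the goal really is subdomains (webui1.example.com rather than example.com/webui1/), then one Host header can map to one backend port. Abyss's virtual hosts plus its reverse-proxy feature may already cover this; failing that, a minimal front proxy is easy to sketch in Node (the host names and ports below are assumptions from the question, and listening on port 80 needs elevated privileges):

        import http from "node:http";

        // Assumed mapping: which backend port serves which host name.
        const backends: Record<string, number> = {
          "webui1.example.com": 8000,
          "webui2.example.com": 8001,
        };

        http.createServer((req, res) => {
          const host = (req.headers.host ?? "").split(":")[0];
          const port = backends[host];
          if (!port) {
            res.writeHead(502).end("unknown host");
            return;
          }
          // Forward the request unchanged to the chosen backend on this machine.
          const upstream = http.request(
            { host: "127.0.0.1", port, path: req.url, method: req.method, headers: req.headers },
            (upRes) => {
              res.writeHead(upRes.statusCode ?? 502, upRes.headers);
              upRes.pipe(res);
            }
          );
          upstream.on("error", () => {
            if (!res.headersSent) res.writeHead(502);
            res.end("backend unavailable");
          });
          req.pipe(upstream);
        }).listen(80);

    You also need DNS (or hosts-file) entries pointing each subdomain at this box; the proxy only decides where to forward once the request arrives.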

    Read the article

  • How do I disable changing proxy settings?

    - by gap
    I've got several machines, running 14.04.1, for kids to access. Each machine has accounts for each kid, as well as accounts for several adults. Although I've set a system-wide proxy policy, it doesn't get used by Chrome. Instead, I had to log into each kid's account and set a per-user proxy policy, pointing to tinyproxy/dansguardian, for safe internet access. The problem is that, should the kids gain an ounce of computer savvy, they'll figure out that they can launch the proxy settings config panel from Unity, then change the proxy settings to None, and completely bypass the "safe" internet scheme I've set up. Can anyone tell me how I can prevent those user accounts from being able to modify their proxy settings (not the "system-wide" ones... those aren't used... these are the per-user settings)? Thanks

    Read the article

  • How can I reconfigure the nvidia proprietary drivers from the command line (ssh)?

    - by Mathieu Pagé
    I have a Linux HTPC (running XBMC) in my living room. This morning I ssh'ed into the machine and upgraded it to 10.10. When it finally restarted, it said something about running in low-graphics mode and eventually returned to a command-line login prompt. I ssh'ed in again and did a sudo reboot now. When it came back up this time, the image was rapidly scrolling from the top to the bottom of the screen. I guess the installed driver doesn't quite work with the S-Video port the TV is connected to. Previously it was working correctly with the nvidia proprietary drivers. How can I install those without using the GUI tool that comes with Ubuntu?

    Read the article

  • Getting to math applications gradually

    - by den-javamaniac
    I'm currently getting a formal degree related to computation; in particular, my current focus is numerical programming, scientific computing and machine learning. I'd love to apply that knowledge in game dev and expand it with statistics, probability theory, and graph theory (probably even linear algebra). The question is: which areas of gamedev rely on this kind of math, is it possible to advance in them without being part of a team, and how do I get into them gradually? P.S.: I've got experience with commercial Java dev and am getting my hands on C/C++ at the moment; however, I'm open to going ahead and trying Unity3D etc.

    Read the article

  • Installing wireless drivers without internet access [closed]

    - by Lucas Jones
    Possible Duplicate: How can I install and download drivers without internet? (This is related to my other question; my approach there didn't work.) My friend has (I'm quite sure) a Broadcom wireless chipset. However, he doesn't have any wired internet access on the machine, so his only option is to boot into Windows (he is using Wubi) and download packages there. This means we can't use the Hardware Drivers dialog to install the drivers. He can't fetch the repository information, so the Broadcom driver packages aren't showing up in Synaptic. Is there any way to get Wi-Fi working?

    Read the article

  • What Are the Windows A: and B: Drives Used For?

    - by Jason Fitzpatrick
    The C: drive is the default installation location for Windows, if you have a CD/DVD drive on your machine it’s likely the D: drive, and any additional drives fall in line after that. What about the A: and B: drives? Image by Michael Holley. Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    Read the article

  • A Patent for Workload Management Based on Service Level Objectives

    - by jsavit
    I'm very pleased to announce that after a tiny :-) wait of about 5 years, my patent application for a workload manager was finally approved.

    Background

    Many operating systems have a resource manager which lets you control machine resources. For example, Solaris provides controls for CPU with several options:
    - shares, for proportional CPU allocation (if you have twice as many shares as me, and we are competing for CPU, you'll get about twice as many CPU cycles),
    - dedicated CPU allocation, in which a number of CPUs are exclusively dedicated to an application's use. You can say that a zone or project "owns" 8 CPUs on a 32-CPU machine, for example. And,
    - capped CPU, in which you specify the upper bound, or cap, of how much CPU an application gets. For example, you can throttle an application to 0.125 of a CPU.
    (This isn't meant to be an exhaustive list of Solaris RM controls.)

    Workload management

    Useful as that is (and tragic that some other operating systems have little resource management and isolation, and frighten people into running only 1 app per OS instance - and wastefully size every server for the peak workload it might experience), that's not really workload management. With resource management one controls the resources, and hopes that's enough to meet application service objectives. In fact, we hold resource distribution constant, see if that was good enough, and adjust resource distribution if that didn't meet service level objectives. Here's an example of what happens today: Let's try 30% dedicated CPU. Not enough? Let's try 80%. Oh, that's too much, and we're achieving much better response time than the objective, but other workloads are starving. Let's back that off and try again. It's not the process I object to - it's that we too often do this manually. Worse, we sometimes identify and adjust the wrong resource and fiddle with that to no useful result. Back in my days as a customer managing large systems, one of my users would call me up to beg for a "CPU boost": Me: "It won't make any difference - there's plenty of spare CPU to be had, and your application is completely I/O bound." User: "Please do it anyway." Me: "Oh, all right, but it won't do you any good." (I did, because he was a friend, but it didn't help.)

    Prior art

    There are some operating environments that take a stab at workload management (rather than resource management) but I find them lacking. I know of one that uses synthetic "service units" composed of the sum of CPU, I/O and memory allocations multiplied by weighting factors. A workload is set to make a target rate of service units consumed per second. But this seems to be missing a key point: what is the relationship between artificial 'service units' and actually meeting a throughput or response time objective? What if I get plenty of one of the components (so am getting enough service units), but not enough of the resource that's needed to remove the bottleneck?

    Actual workload management

    That's not really the answer either. What is needed is to specify a workload's service levels in terms of externally visible metrics that are meaningful to a business, such as response times or transactions per second, and have the workload manager figure out which resources are not being adequately provided, and then adjust them as needed. If an application is not meeting its service level objectives and the reason is that it's not getting enough CPU cycles, adjust its CPU resource accordingly. If the reason is that the application isn't getting enough RAM to keep its working set in memory, then adjust its RAM assignment appropriately so it stops swapping. Simple idea, but that's a task we keep dumping on system administrators. In other words - don't hold the number of CPU shares constant and watch the achievement of the service level vary. Instead, hold the service level constant, and dynamically adjust the number of CPU shares (or the amount of other resources like RAM or I/O bandwidth) in order to meet the objective.

    Instrumenting non-instrumented applications

    There's one little problem here: how do I measure application performance in a way that relates to a service level? I don't want to do it based on internal resources like the number of CPU seconds it received per minute - we need to make resource decisions based on externally visible and meaningful measures of performance, not synthetic items or internal resource counters. If I have a way of marking the beginning and end of a transaction, I can then measure whether or not the application is meeting an objective based on it. If I can observe the delay factors for an application, I can see which resource shortages are slowing an application enough to keep it from meeting its objectives. I can then adjust resource allocations to relieve those shortages. Fortunately, Solaris provides facilities for both marking application progress and determining what factors cause application latency. The Solaris DTrace facility lets me introspect on application behavior: in particular I can see events like "receive a web hit" and "respond to that web hit", so I can get transaction rate and response time. DTrace (and tools like prstat) let me see where latency is being added to an application, so I know which resource to adjust.

    Summary

    After a delay of a mere few years, I am the proud creator of a patent (advice to anyone interested in going through the process: don't hold your breath!). The fundamental idea is fairly simple: instead of holding resources constant and suffering variable levels of success meeting service level objectives, properly characterise the service level objective in meaningful terms, instrument the application to see if it's meeting the objective, and then have a workload manager change resource allocations to remove delays preventing service level attainment. I've done it by hand for a long time - I think that's what a computer should do for me.
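    As an illustration only (this is not the patented implementation, and every name below is invented), the core of the idea is a small feedback loop: measure an externally visible metric against its objective and adjust a resource knob, instead of holding the knob constant.

        // Illustrative only: a toy controller in the spirit of the post above.
        // measureResponseTime() and setCpuShares() are invented stand-ins for real
        // instrumentation (e.g. DTrace probes) and a real resource manager.

        interface Workload {
          name: string;
          objectiveMs: number; // the service level objective: target response time
          cpuShares: number;   // the resource knob we are allowed to turn
        }

        function measureResponseTime(w: Workload): number {
          return 120; // stub: pretend we observed a 120 ms average this interval
        }

        function setCpuShares(w: Workload, shares: number): void {
          console.log(`${w.name}: cpu shares -> ${Math.round(shares)}`); // stub
        }

        function adjust(w: Workload): void {
          const observed = measureResponseTime(w);
          const ratio = observed / w.objectiveMs; // >1 means we are missing the objective

          if (ratio > 1.1) {
            // Missing the SLO: grant more shares, within a ceiling.
            w.cpuShares = Math.min(w.cpuShares * ratio, 1000);
          } else if (ratio < 0.8) {
            // Comfortably beating the SLO: give shares back so others aren't starved.
            w.cpuShares = Math.max(w.cpuShares * 0.9, 10);
          }
          setCpuShares(w, w.cpuShares);
        }

        const workloads: Workload[] = [{ name: "web", objectiveMs: 100, cpuShares: 100 }];

        // Hold the service level constant; let the allocation vary.
        setInterval(() => workloads.forEach(adjust), 10_000);

    A real controller would first determine, from the delay factors the post describes (DTrace, prstat), which resource is actually adding the latency, and only then pick the knob to turn; the sketch hard-codes CPU shares to stay short.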

    Read the article

  • How do I apt-get -y dist-upgrade without a grub config prompt?

    - by fratrik
    Per "Make apt-get (or aptitude) run with -y but not prompt for replacement of configuration files?", I did the following:

        ec2run ami-3c994355 --region us-east-1 -n 1 -t m1.large -z us-east-1d

    On the machine:

        sudo apt-get update
        sudo apt-get -y -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" dist-upgrade

    I still get a prompt asking me which config file I want to use. These are the lines that come before the prompt:

        Setting up grub-pc (1.99-21ubuntu3.1) ...

    then a "Configuring grub-pc" dialog appears: "A new version of configuration file /etc/default/grub is available, but the version installed currently has been locally modified. What do you want to do about modified configuration file grub?" with options such as "install the package maintainer's version". (Unrelated, was this too much information?)

    Read the article

  • Windows 7 Virtual PC - “RPC server unavailable”

    - by Kelly Jones
    I use Windows 7 Virtual PC on my current project and I often bring home the files, so I can work some in the evenings.  Since my VHDs are large, I’ll only copy the undo disks, saved state, and virtual machine config files from my external drive.  I copy them to a small portable drive and once I get home, I’ll copy them to a large external drive. I’ve done this for over a year, but recently I started getting an error when I tried to start the VPC after the copying was finished.  It would open the initial window with the progress bar, but eventually the bar would stop, turn red, and then the error “RPC server unavailable” would appear.  When I first started seeing these, I’d try again, but no luck. After some testing, it turns out that my small portable drive is apparently going bad, so it was corrupting the files.  Lucky for me, I never overwrote my good copies with corrupted ones, at least not both at the office and at home.

    Read the article

  • How to enable ping in Windows Firewall in Windows Server 2008 R2

    - by ybbest
    If you are unable to ping your Windows Server 2008 R2 machine, or if you have a "one-way ping" problem, you need to check whether the rule is enabled in Windows Firewall. To enable it, do the following:
    1. Go to Control Panel >> Windows Firewall >> Advanced settings.
    2. Go to Inbound Rules and enable File and Printer Sharing (Echo Request – ICMPv4-In). After you have done this, your computer will become pingable.

    Read the article

  • Arabic disappeared after 12.04 upgrade!

    - by Aboubakr
    Well, I was amongst the 12.04 Beta upgraders, and since then I've lost the ability to write in Arabic. I've been using Ubuntu since 2008 as my only OS without any issues, and have been upgrading since then as well, except on this machine, which received one upgrade from 11.10 to 12.04 and got messed up. I've added Arabic like usual, but it doesn't change with the keyboard shortcut, and when I do it manually with the mouse, it just doesn't work and keeps writing in English instead. I've tried to install some iBus things, and added Arabic-kbd (m17n), but it still remains messy, let alone not having the same layout, and all I want is to get back to NORMAL. So, please, is there any way to reset or initialize these keyboard-related settings, so I can get back to normal and stop using the Mac just to type in Arabic, or so often using XP over Vbox? And please, no re-install option! I just can't back up all my work right now, and there are a lot of tasks waiting for me to get them done. Thanks for any kind of support :)

    Read the article

  • Network does not connect at boot

    - by Daniel Svozil
    I am on Ubuntu 12.04, fresh install. When I boot the machine, the boot screen says Connecting to network; later it changes to something like Did not connect, trying for another 60s. However, the network does not connect at boot. I can then log in without a network connection, and if I start the network-manager service manually from the terminal (sudo service network-manager start), the network connects without any problems. Please, does anybody know where the problem could be? I don't want to wait more than two minutes on every restart :-). I am new to Ubuntu (and also to upstart) so I am a bit lost. There is no /var/log/messages; in dmesg I found this record, though it may not be related:

        init: network-interface (eth1) pre-start process (492) terminated with status 1
        init: network-interface (eth1) post-stop process (548) terminated with status 1

    Thanks, Daniel

    Read the article

  • Take part in the Oracle certification workshops in Paris on 30 October & 9 November 2012

    - by mseika
    Win the preference of your clients and prospects thanks to your Oracle specializations! As the next step on your path toward Oracle certification, we are offering two half-days dedicated to certification workshops in Paris. Reserve your morning of 30 October or 9 November to take the certifications your company needs in order to become Specialized. The workshops will take place at Paris Saint-Lazare from 9:00 to 12:30 at: Centre M2i, 20 rue d'Athènes, 75009 PARIS. Don't miss this opportunity: a wide choice of workshops is on offer. Please note that the number of places is limited. Certification workshop programme:
    - Oracle Software: Oracle Database 11g, Database Security, Data Integration, Data Warehousing, Oracle Business Intelligence Foundation, Exadata Database Machine, Exalogic Elastic Cloud, SOA...
    - Oracle Hardware: Oracle Linux, Oracle Solaris, SPARC Entry & Midrange, SPARC T-Series Servers, Unified Storage, Virtualization
    The workshops will be followed by lunch. Prerequisites are necessary to take these exams online. Check them.

    Read the article

  • RTL8192SU-based Wi-Fi adapter disconnects permanently

    - by leventov
    I've already tried all possible solutions (http://ubuntuforums.org/showpost.php?p=10129571&postcount=43), no progress. I'm in despair. In Windows (on the same machine) this adapter works stably. Device: Trendnet TEW-649UB. System details: Ubuntu 11.10;

        leventov@leventov-ubuntu:~$ uname -a
        Linux leventov-ubuntu 3.0.0-12-generic #20-Ubuntu SMP Fri Oct 7 14:56:25 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux

        leventov@leventov-ubuntu:~$ lsusb
        ...
        Bus 002 Device 002: ID 0bda:8172 Realtek Semiconductor Corp. RTL8191S WLAN Adapter
        ...

        leventov@leventov-ubuntu:~$ dmesg | grep 8712   #current driver
        [    8.146510] r8712u: module is from the staging directory, the quality is unknown, you have been warned.
        [    8.147113] r8712u: DriverVersion: v7_0.20100831
        [    8.147124] r8712u: register rtl8712_netdev_ops to netdev_ops
        [    8.147127] r8712u: USB_SPEED_HIGH with 4 endpoints
        [    8.147478] r8712u: Boot from EFUSE: Autoload OK
        [    8.551272] r8712u: CustomerID = 0x0000
        [    8.551275] r8712u: MAC Address from efuse = 00:14:d1:6c:52:19
        [    8.551625] usbcore: registered new interface driver r8712u
        [    9.501351] r8712u: Loading firmware from "rtlwifi/rtl8712u.bin"
        [   10.160471] r8712u: 1 RCR=0x153f00e
        [   10.161241] r8712u: 2 RCR=0x553f00e

        leventov@leventov-ubuntu:~$ lsmod | grep 8712
        r8712u                189049  0

    Read the article

  • Torrent clients suddenly stopped downloading

    - by Vasilis Baltikas
    A few days ago I noticed that Transmission on my Ubuntu 10.04 machine suddenly couldn't download anything anymore. To overcome this I have uninstalled and reinstalled Transmission and tried downloading well-seeded torrents (Ubuntu ISO images, for example) with other clients (Deluge, Vuze), without success. On the same computer I also have Ubuntu 10.10 and Windows 7 installed, which I rarely use. What makes the problem weirder is the fact that downloading via torrents works fine in my Ubuntu 10.10 and Windows partitions but not in Lucid Lynx. Browsing the web for similar problems didn't give me answers. Any help would be greatly appreciated.

    Read the article

  • Code review process when using GIT as a repository?

    - by Sid
    What is the best process for code review when using Git?

    Current process:
    - We have a Git server with a master branch to which everyone commits.
    - Devs work off the local master mirror or a local feature branch.
    - Devs commit to the server's master branch.
    - Devs request code review on the last commit.

    Problem: any bug found in code review is already in master by the time it's caught. Worse, usually someone has burnt a few hours trying to figure out what happened... So, we would like:
    - to do code review BEFORE delivery into 'master';
    - a process that works with a global team (no over-the-shoulder reviews!);
    - something that doesn't require an individual dev's desk/machine to be powered up so someone else can remote in (remove the human dependency; devs go home in different timezones).

    We use TortoiseGit for a visual representation of the list of files changed, diffing files, etc. Some of us drop into a Git shell when the GUI isn't enough, but ideally we'd like the workflow to be simple and GUI-based (I want the tool to lift any burden, not my devs).

    Read the article

  • Skype no sound on Kubuntu 13.10

    - by Michael Aquilina
    I just performed a fresh install of Kubuntu 13.10 on my machine. Everything is working great except for Skype. I cannot get any form of audio playback in Skype. In the sound settings panel I get a bunch of different sound sources, none of which work! At the moment I have set it to "sysdefault (unknown)". I installed it using the deb package found on the official website. My Phonon backend is phonon-gstreamer. When running skype from the terminal I get the following error messages:

        ALSA lib control.c:953:(snd_ctl_open_noupdate) Invalid CTL plughw:CARD=PCH
        ALSA lib pcm_dmix.c:1022:(snd_pcm_dmix_open) unable to open slave
        ALSA lib pcm_dmix.c:1022:(snd_pcm_dmix_open) unable to open slave
        ALSA lib pcm_dmix.c:1022:(snd_pcm_dmix_open) unable to open slave
        ALSA lib pcm_dmix.c:1022:(snd_pcm_dmix_open) unable to open slave
        ALSA lib pcm_dmix.c:1022:(snd_pcm_dmix_open) unable to open slave
        ALSA lib pcm_dmix.c:1022:(snd_pcm_dmix_open) unable to open slave

    Is this a known problem, or has anyone experienced the problem and managed to solve it?

    Read the article

  • SQL Auto Close Options

    - by Dave Noderer
    Found an interesting thing that others have run across, but it is the first time I've seen it. A customer emailed to say that the SQL 2008 database I had helped him with seemed to be going into recovery mode on a regular basis while he watched the SQL Management Studio screen. Needless to say, he was a bit nervous and about to take some drastic steps. Eventually he found that the Auto Close option was set to true. When this is set to true, the database automatically closes all connections and unlocks the mdf file after 300 milliseconds. When a new connection is made it spins back up… Great for xcopy deployment on a client machine, but not for a multi-user, server-based application. So the warning: if you have started a database with SQL Express and then move it to a production SQL Server, make sure you check that the Auto Close option is set to false. See the options screen below:
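    For a quick audit, the flag is visible in sys.databases and a single ALTER DATABASE turns it off. Below is a sketch using the Node 'mssql' package (connection details and the database name are placeholders; the same two statements can just as well be run from Management Studio):

        import sql from "mssql";

        // Placeholder connection details for the server being checked.
        const config: sql.config = {
          server: "localhost",
          user: "sa",
          password: "********",
          options: { trustServerCertificate: true },
        };

        async function fixAutoClose(dbName: string): Promise<void> {
          await sql.connect(config);

          // Is AUTO_CLOSE currently on for this database?
          const check = await sql.query`
            SELECT name, is_auto_close_on FROM sys.databases WHERE name = ${dbName}`;

          if (check.recordset[0]?.is_auto_close_on) {
            // dbName is interpolated directly here, so only pass trusted names.
            await new sql.Request().query(`ALTER DATABASE [${dbName}] SET AUTO_CLOSE OFF`);
            console.log(`${dbName}: AUTO_CLOSE disabled`);
          } else {
            console.log(`${dbName}: AUTO_CLOSE already off (or database not found)`);
          }
        }

        fixAutoClose("MyProductionDb").catch(console.error).finally(() => sql.close());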

    Read the article
