Search Results

Search found 5287 results on 212 pages for 'physical computing'.


  • Any reason not to disable the Windows pagefile given enough physical RAM?

    - by Evgeny
    The question of disabling the Windows pagefile has already been discussed quite a bit, for example here and here and here. People continue to upvote answers that say "you should not disable your pagefile even if you have plenty of RAM", but I have yet to see any concrete, verifiable reasons given for this advice. As far as I can see, if you never need to read from the pagefile (because you have enough RAM), then performance could only be worse with it enabled, due to Windows pre-emptively writing to it. At best, performance would be the same. I can't see how it could possibly be improved by writing data you never need to read.

    So my question is: assuming that I have enough physical RAM for everything I do, is there any reason I should not disable the pagefile? Let's say the version of Windows is Windows XP x64 SP2 or Windows Server 2003 x64 SP2 (same thing). If it's different for Windows Server 2008 x64, I'd be interested to hear an answer for that as well. I'm looking for specific, objective reasons from good sources, not just opinions. Something like "here are benchmarks done with and without a pagefile and the results were better with a pagefile, even with enough RAM" or "according to this MS KB article, problem X occurs if you disable the pagefile". So far the only reasons I've seen mentioned are:

    - Even if you think you have enough RAM, you might run out. OK, but for the purposes of this question, let's just take it as a given that I have enough. Maybe I only ever read my email and I have 16GB RAM. Or 128GB. Or 1TB. Or whatever - but it's enough for 100% of what I do, 100% of the time. Another way to think of it is: if I have x MB of physical RAM and y MB of pagefile and I never run out of RAM in that configuration, would I not be better off, performance-wise, with x+y MB of physical RAM and no pagefile?

    - Windows is "used to" having a paging file and it might not function as reliably (from Understanding the Impact of RAM on Overall System Performance). That's rather vague and I find it hard to believe, given that MS has provided the option to disable the pagefile.

    - Windows knows what it's doing better than you. No - it doesn't know that I won't run more programs or load more data, but I do.
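
    Edit: to head off "but do you really have enough RAM?" replies, this is roughly how I watch commit charge against physical RAM - a minimal ctypes sketch of my own (GlobalMemoryStatusEx is the documented Win32 call; the script itself is not from any of the linked threads):

        import ctypes

        class MEMORYSTATUSEX(ctypes.Structure):
            _fields_ = [("dwLength", ctypes.c_ulong),
                        ("dwMemoryLoad", ctypes.c_ulong),
                        ("ullTotalPhys", ctypes.c_ulonglong),
                        ("ullAvailPhys", ctypes.c_ulonglong),
                        ("ullTotalPageFile", ctypes.c_ulonglong),
                        ("ullAvailPageFile", ctypes.c_ulonglong),
                        ("ullTotalVirtual", ctypes.c_ulonglong),
                        ("ullAvailVirtual", ctypes.c_ulonglong),
                        ("ullAvailExtendedVirtual", ctypes.c_ulonglong)]

        s = MEMORYSTATUSEX()
        s.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
        ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(s))

        GB = 1024.0 ** 3
        # ullTotalPageFile is the commit limit (RAM plus pagefile), so the
        # gap to ullAvailPageFile is the current commit charge.
        print("physical: %.1f GB total, %.1f GB free"
              % (s.ullTotalPhys / GB, s.ullAvailPhys / GB))
        print("commit:   %.1f GB used of %.1f GB limit"
              % ((s.ullTotalPageFile - s.ullAvailPageFile) / GB,
                 s.ullTotalPageFile / GB))

    If peak commit charge stays well under physical RAM, that at least confirms the premise of the question.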

    Read the article

  • Can KVM CPU assignment count differ from the physical host's CPU count?

    - by javano
    I have read this question. I already knew that I could, for example, have a quad-core machine with four guests, each having two vCPUs. As they won't all require 100% CPU usage all the time, the scheduler handles this for me. My question is about how this relates to a fail-over or migration situation: if host1 has two dual-core CPUs and I assign guest1 four vCPUs (so it accesses all four physical cores), what will happen if I try to migrate it to host2, which only has one dual-core CPU? Can qemu-kvm emulate more vCPUs than there are physical cores? Or would I have to shut down the virtual machine, change the CPU assignment, migrate it, and then boot it back up (so no live migration)? Many thanks.
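
    Edit: to make the scenario concrete, this is roughly the pre-migration check I have in mind, sketched with the libvirt Python bindings (the "guest1"/"host2" names are made up):

        import libvirt

        src = libvirt.open("qemu:///system")
        dst = libvirt.open("qemu+ssh://host2/system")   # hypothetical second host

        dom = src.lookupByName("guest1")                # hypothetical guest
        guest_vcpus = dom.maxVcpus()                    # vCPUs defined for the guest
        dst_cpus = dst.getInfo()[2]                     # active logical CPUs on host2

        if guest_vcpus > dst_cpus:
            # KVM schedules vCPUs as ordinary host threads, so overcommit can
            # still run - the question is what migration itself will accept.
            print("guest1 wants %d vCPUs; host2 has %d logical CPUs"
                  % (guest_vcpus, dst_cpus))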

    Read the article

  • How to make a DHCP server on a virtual machine serve other virtual machines (on different physical machines)?

    - by Tony
    I'm building a virtual cluster with VirtualBox and openSUSE. I have 10 physical machines and need several VMs on each. The virtual machines are supposed to be in a "private" network, but still have internet access. I was asked to set up a virtual head node working as a DHCP server. I installed a DHCP server on the virtual head node and it seems to work. In VirtualBox I gave the head node two network adapters: one bridged adapter and one internal network. One VM on the same physical machine has its NIC set as an internal network adapter. That VM can get an IP address (so DHCP works) but can't access the internet. What should I do? Specifically, which network adapters should I choose for the head node and the work nodes in VirtualBox, and what do I need to configure inside the virtual machines?

    Read the article

  • Is it possible to define a virtual directory in IIS and make the files relative to the physical directory?

    - by Mikey John
    Is it possible to define a virtual directory in IIS and somehow make the files in that directory resolve relative to the physical directory and not to the virtual directory? For instance, on my server I have the following folders: D:\WebSite\Css\myTheme.css and D:\WebSite\Images\image1.jpg. I created a virtual directory in IIS called resources.mysite. Inside my website I reference the style sheet like this: resources.mysite/myTheme.css. But inside myTheme.css I reference pictures as ../Images/image1.jpg. So the problem is that image1.jpg is not found, because the relative path is resolved against the virtual directory rather than the physical folder it is valid in. Can I solve this problem without modifying the style sheet?

    Read the article

  • What hardware is at physical address 0x80000000 on a PowerPC New World Macintosh?

    - by tinkerer
    The Open Firmware device tree gives no clue what device might decode at physical addresses 0x80000000 to 0x8008200 on a G4 New World Macintosh. The MMU has three adjacent Virtual=Real translations for that block. They are the only address translations reported between the top of physical DRAM at 0x20000000 and the start of the PCI bridges at 0xf0000000. (A possible clue is that frame-buffer-addr is reported as 0x9c008000 by Open Firmware, and that is not in the reported translation table either.) I believe the architecture has been around since about 1999.

    Read the article

  • Does Hyper-V allow binding a physical NIC to a virtual machine in promiscuous mode?

    - by MadBoy
    I have Hyper-V installed on Windows Server 2008 R2 Enterprise, with some virtual servers running, and I wanted ISA or ntop to monitor traffic. I added an additional physical NIC to the server and wanted to use this NIC as a traffic monitor (I've enabled port mirroring on the switch). On the physical machine that runs Hyper-V I can see a lot of traffic coming to that NIC, so port mirroring works fine. However, in the virtual machine - even though I've assigned that NIC to this one and only server - it sees 0 packets. In VMware Workstation this worked without problems and I could see the mirrored traffic in the VM. Should this be possible, or is Hyper-V crippled here?

    Read the article

  • Can IPv4 and IPv6 share a single physical Ethernet?

    - by sleske
    I keep reading about the transition from IPv4 to IPv6, and the possible advantages and problems. One thing that keeps popping up is "dual-stack" networking, meaning (I believe) that a host can speak both IPv4 and IPv6. I don't quite understand how this works, however. Can a host actually transmit using IPv4 and IPv6 at the same time over the same physical Ethernet (like, e.g., HTTP and FTP can be used simultaneously)? Or is the physical network strictly IPv4 or IPv6, with the "other" protocol sent via tunneling?
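
    Edit: here's how I imagine the first scenario working - if dual-stack really does share the wire, a sketch like this should listen over both protocols on one NIC at once (my understanding is that IPv4 and IPv6 frames coexist on the same Ethernet because the EtherType field, 0x0800 vs 0x86DD, tells the receiver which stack gets each frame):

        import socket

        v4 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        v4.bind(("0.0.0.0", 8080))
        v4.listen(1)

        v6 = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
        # keep this socket v6-only so it can share the port with the v4 one
        v6.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 1)
        v6.bind(("::", 8080))
        v6.listen(1)

        print("listening on port 8080 over IPv4 and IPv6 simultaneously")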

    Read the article

  • How to know if a server is physical or virtual? [duplicate]

    - by tachomi
    This question already has an answer here: is there any way to know if your supposedly fully dedicated server is really a virtually resource-shared machine? [duplicate] 5 answers How can I know whether my provider is giving me a dedicated physical server or a virtual server? The provider is in another country, so I don't have the chance to go and see the servers myself; all of them run Linux and are managed over SSH. Everything was arranged over the internet or by phone. Are there any commands or directories to examine, or some other way to get this information? They tell me, of course, that the servers are physical, but when they do support work or hardware/software upgrades it never seems to take long. For example, when I ask for a new server with specific requirements, in the end they give me more infrastructure than I asked for - and of course no one gives extra things without an extra price. I hope that gives enough hints.
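
    Edit: the best I've pieced together so far is a heuristic - on a Linux guest, the CPUID hypervisor flag and the DMI vendor strings usually give virtualization away, though absence of clues proves nothing and a provider could presumably mask them:

        def virtualization_clues():
            """Hints that this Linux box is a VM; an empty result is
            NOT proof that the machine is physical."""
            clues = []
            # Most hypervisors (KVM, VMware, Xen HVM, Hyper-V) expose a
            # "hypervisor" CPUID flag, visible in /proc/cpuinfo.
            with open("/proc/cpuinfo") as f:
                if "hypervisor" in f.read():
                    clues.append("hypervisor CPU flag set")
            # DMI strings often carry the virtualization vendor's name
            # ("innotek" is VirtualBox).
            try:
                with open("/sys/class/dmi/id/sys_vendor") as f:
                    vendor = f.read().strip()
                if any(v in vendor for v in ("QEMU", "VMware", "Xen",
                                             "innotek", "Microsoft")):
                    clues.append("DMI vendor: " + vendor)
            except IOError:
                pass
            return clues

        print(virtualization_clues() or "no obvious virtualization clues")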

    Read the article

  • General purpose physics engine

    - by Lucas
    Is there any general-purpose physics engine that allows huge simulations of rigid bodies? I'm using PhysX from Nvidia, but the focus of that engine is game development and soft bodies. I want to know whether there exists a physics engine that runs on top of PS3 Cell processors or CUDA cores, allowing massive scientific physics simulations.

    Read the article

  • "Unable to associated Elastic IP with cluster" in Eclipse Plugin Tutorial

    - by Jeffrey Chee
    Hi all, I am currently trying to evaluate AWS for my company and was trying to follow the tutorials on the web. http://developer.amazonwebservices.com/connect/entry.jspa?externalID=2241 However, I get the error below during startup of the server instance:

    Unable to associated Elastic IP with cluster: Unable to detect that the Elastic IP was correctly associated. java.lang.Exception: Unable to detect that the Elastic IP was correctly associated
        at com.amazonaws.ec2.cluster.Cluster.associateElasticIp(Cluster.java:802)
        at com.amazonaws.ec2.cluster.Cluster.start(Cluster.java:311)
        at com.amazonaws.eclipse.wtp.ElasticClusterBehavior.launch(ElasticClusterBehavior.java:611)
        at com.amazonaws.eclipse.wtp.Ec2LaunchConfigurationDelegate.launch(Ec2LaunchConfigurationDelegate.java:47)
        at org.eclipse.debug.internal.core.LaunchConfiguration.launch(LaunchConfiguration.java:853)
        at org.eclipse.debug.internal.core.LaunchConfiguration.launch(LaunchConfiguration.java:703)
        at org.eclipse.debug.internal.core.LaunchConfiguration.launch(LaunchConfiguration.java:696)
        at org.eclipse.wst.server.core.internal.Server.startImpl2(Server.java:3051)
        at org.eclipse.wst.server.core.internal.Server.startImpl(Server.java:3001)
        at org.eclipse.wst.server.core.internal.Server$StartJob.run(Server.java:300)
        at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)

    Then after a while, another error occurs:

    Unable to publish server configuration files: Unable to copy remote file after trying 4 times. Local file: 'XXXXXXXX/XXX.zip' Results from first attempt: Unexpected exception: java.net.ConnectException: Connection timed out: connect root cause: java.net.ConnectException: Connection timed out: connect
        at com.amazonaws.eclipse.ec2.RemoteCommandUtils.copyRemoteFile(RemoteCommandUtils.java:128)
        at com.amazonaws.eclipse.wtp.tomcat.Ec2TomcatServer.publishServerConfiguration(Ec2TomcatServer.java:172)
        at com.amazonaws.ec2.cluster.Cluster.publishServerConfiguration(Cluster.java:369)
        at com.amazonaws.eclipse.wtp.ElasticClusterBehavior.publishServer(ElasticClusterBehavior.java:538)
        at org.eclipse.wst.server.core.model.ServerBehaviourDelegate.publish(ServerBehaviourDelegate.java:866)
        at org.eclipse.wst.server.core.model.ServerBehaviourDelegate.publish(ServerBehaviourDelegate.java:708)
        at org.eclipse.wst.server.core.internal.Server.publishImpl(Server.java:2731)
        at org.eclipse.wst.server.core.internal.Server$PublishJob.run(Server.java:278)
        at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)

    Can anyone point me to what I'm doing wrong? I followed the tutorials and the video tutorials on YouTube exactly. Best Regards ~Jeffrey

    Read the article

  • Hadoop safemode recovery - taking a lot of time

    - by Algorist
    Hi, we are running our cluster on Amazon EC2 and use Cloudera scripts to set up Hadoop. On the master node, we start the services below:

        $AS_HADOOP '"$HADOOP_HOME"/bin/hadoop-daemon.sh start namenode'
        $AS_HADOOP '"$HADOOP_HOME"/bin/hadoop-daemon.sh start secondarynamenode'
        $AS_HADOOP '"$HADOOP_HOME"/bin/hadoop-daemon.sh start jobtracker'
        $AS_HADOOP '"$HADOOP_HOME"/bin/hadoop dfsadmin -safemode wait'

    On the slave machines, we run these services:

        $AS_HADOOP '"$HADOOP_HOME"/bin/hadoop-daemon.sh start datanode'
        $AS_HADOOP '"$HADOOP_HOME"/bin/hadoop-daemon.sh start tasktracker'

    The main problem we are facing is that HDFS safemode recovery takes more than an hour, and this delays our job completion. The main log messages are:

    1. domU-12-31-39-0A-34-61.compute-1.internal 10/05/05 20:44:19 INFO ipc.Client: Retrying connect to server: ec2-184-73-64-64.compute-1.amazonaws.com/10.192.11.240:8020. Already tried 21 time(s).
    2. The reported blocks 283634 needs additional 322258 blocks to reach the threshold 0.9990 of total blocks 606499. Safe mode will be turned off automatically.

    The first message is thrown in the task trackers' logs because the job tracker is not started, and the job tracker didn't start because of HDFS safemode recovery. The second message is thrown during the recovery process. Is there something I am doing wrong? How much time does normal HDFS safemode recovery take? Would there be any speedup from not starting the task trackers until the job tracker is started? Are there any known Hadoop problems on Amazon clusters? Thanks for your help. Regards, Bala Mudiam
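
    Update: one knob directly tied to the second message - the namenode leaves safemode once the fraction of reported blocks reaches dfs.safemode.threshold.pct, whose 0.999 default matches the 0.9990 in the log. A sketch of the override (property name as in 0.20-era Hadoop; lowering it trades safety for startup time):

        <!-- hadoop-site.xml / hdfs-site.xml on the namenode -->
        <property>
          <name>dfs.safemode.threshold.pct</name>
          <!-- default 0.999; a lower value (or 0 to skip the wait) ends
               safemode before every last block report has arrived -->
          <value>0.95</value>
        </property>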

    Read the article

  • Jini: single server with multiple clients

    - by user200340
    Hi all, I have a question about how to let multiple clients access a single file located on the server side while keeping the file consistent. I have a simple PhoneBook server-client Jini program running at the moment, and the server only provides some getter functions to clients, such as getName(String number) and getNumber(String name), from a PhoneBook class (serializable); the phonebook data are stored in a text file (phonebook.txt) at the moment. I have tried to implement some functions that allow writing new records into the phonebook.txt file. If the name of the record being written already exists, an integer is appended to it. For example, if the existing phonebook.txt contains .... John 01-01010101 .... and the record being written is "John 01-12345678", then "John_1 01-12345678" will be written into phonebook.txt. However, if I start two clients A and B (on the same machine, using localhost), and A tries to write "John 01-11111111" while B tries to write "John 01-22222222", the earlier record is overwritten by the later one. So there must be something I did completely wrong. My client and server code are just like the Jini HelloWorld example.

    My server-side code:
    1. LookupDiscovery with parameter new String[]{""};
    2. a DiscoveryListener for the LookupDiscovery;
    3. registrations are saved into a Hashtable;
    4. for every discovered lookup service, I use the registrar to register the ServiceItem; the ServiceItem contains a null attributeSets, a null serviceId, and a service.

    The client code has:
    1. LookupDiscovery with parameter new String[]{""};
    2. a DiscoveryListener for the LookupDiscovery;
    3. a ServiceTemplate with a null attributeSets, a null serviceId and a type, where the type is the interface class;
    4. for each found ServiceRegistrar, if the ServiceTemplate it is looking for can be found, the returned Object is cast to the type of the interface class.

    I have tried to Google for more details, and I found that JavaSpaces could be the piece I missed. But I am still not sure about it (I have only been using Jini for a very short time). So any help would be greatly appreciated.
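
    Edit: from what I've read, my symptom looks like a classic lost update - both clients read phonebook.txt, then both write it back, and the second write clobbers the first. So the server has to serialize the whole read-modify-write (in Java I suppose a synchronized method on the server-side implementation would do it). The idea as a Python-flavored sketch, with hypothetical names, just to show what I mean:

        import os
        import threading

        class PhoneBook:
            """Server-side store; one lock serializes all writers."""
            def __init__(self, path="phonebook.txt"):
                self.path = path
                self.lock = threading.Lock()

            def add_record(self, name, number):
                # Without the lock, clients A and B can both read the same
                # file contents, and the later write clobbers the earlier one.
                with self.lock:
                    names = set()
                    if os.path.exists(self.path):
                        with open(self.path) as f:
                            names = set(line.split()[0] for line in f if line.strip())
                    unique, suffix = name, 1
                    while unique in names:        # John -> John_1 -> John_2 ...
                        unique = "%s_%d" % (name, suffix)
                        suffix += 1
                    with open(self.path, "a") as f:
                        f.write("%s %s\n" % (unique, number))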

    Read the article

  • Open source alternative to MATLAB's fmincon function?

    - by dF
    Is there an open-source alternative to MATLAB's fmincon function for constrained linear optimization? I'm rewriting a MATLAB program to use Python / NumPy / SciPy, and this is the only function for which I haven't found an equivalent. A NumPy-based solution would be ideal, but any language will do.
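
    Edit: for anyone landing here, scipy.optimize turned out to cover this ground. A minimal sketch of the fmincon-style pieces (quadratic objective, bounds, one linear inequality constraint) using SLSQP - in current SciPy this is scipy.optimize.minimize; older releases expose the same solver as fmin_slsqp:

        import numpy as np
        from scipy.optimize import minimize

        # minimize (x0 - 1)^2 + (x1 - 2.5)^2
        # subject to x0 + 2*x1 <= 6 and x >= 0
        def f(x):
            return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

        # SciPy expects inequality constraints in g(x) >= 0 form
        cons = [{"type": "ineq", "fun": lambda x: 6.0 - x[0] - 2.0 * x[1]}]
        bnds = [(0.0, None), (0.0, None)]

        res = minimize(f, x0=np.array([2.0, 0.0]), method="SLSQP",
                       bounds=bnds, constraints=cons)
        print(res.x, res.fun)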

    Read the article

  • Dashboard for collaborative science / data processing projects

    - by rescdsk
    Hi, Continuous Integration servers like Hudson are a pretty amazing addition to software development. I work in an academic research lab, and I'd love to apply similar principles to scientific data analysis. I want a dashboard-like view of which collections of data are fine, which ones are failing their tests (simple shell scripts, mostly), and so on. A lot like the Chromium dashboard (WARNING: page takes a long time to load). It takes work from at least 4 people, and maybe 10 or 12 hours of computer time, to bring our data (from behavioral studies) from its raw form to its final, easily-analyzed form. I've tried Hudson and buildbot, but neither is really appropriate to our workflow. We just want to run a bunch of tests on maybe fifty independent collections of subject data, and display the results nicely. SO! Does anyone have a recommendation of a way to generate this kind of report easily? Or, can you think of a good way to shoehorn this kind of workflow into a continuous integration server? Or, can you recommend a unit testing dashboard that could deal with tests that are little shell scripts rather than little functions? Thank you!

    Read the article

  • Software tools for allowing end users to reprogram interfaces

    - by iceman
    What are examples of commercial software products (especially web services) that allow the end user to reprogram their user interface? I mean end users who do not know programming but are allowed to add more functionality. One way of doing it is allowing XML gadgets, like iGoogle does. What are the other approaches, and what technologies enable them? This would be a futuristic application, like collaborative software development for end users.

    Read the article

  • Hadoop MapReduce

    - by Aina Ari
    I'm very new to MapReduce and I completed the Hadoop wordcount example. That example produces an unsorted file (of key-value pairs) of word counts. Is it possible to sort it by the number of word occurrences by chaining another MapReduce job onto the earlier one? Thanks in advance.
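
    Edit: from what I've gathered, yes - the usual pattern is a second job whose mapper swaps the pair, so the count becomes the key and the shuffle does the sorting. A sketch of that mapper as a Hadoop Streaming script (the shuffle compares keys as text, hence the zero-padding; use a single reducer, or the identity reducer, for one totally ordered file):

        #!/usr/bin/env python
        # Mapper for a "sort by count" pass over wordcount output.
        # Input lines look like:  word<TAB>count
        import sys

        for line in sys.stdin:
            word, count = line.rstrip("\n").split("\t")
            # zero-pad so that text ordering matches numeric ordering
            print("%010d\t%s" % (int(count), word))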

    Read the article

  • Drawing on the screen border on the Commodore 64

    - by Stefano Borini
    OK, I hope this doesn't get closed, because I've been curious about this for 25 years and I would love to understand the trick. On the Commodore 64, the border was not addressable by the 6569 VIC. All you could do was draw pixels in the central area, the one where the cursor moved. The border was always uniform, although you could change its color with POKE 53280,color, if I remember correctly. Nevertheless, I clearly remember game intros where the border carried graphics, as if it were fully addressable. I tried to understand how it worked but never got to the point. Legend says it was a clever use of sprites, which could, under some circumstances, be drawn on the border, but I don't know if it's an urban legend. Edit: I just read this in one of the provided links: "Sprites were multiplexed across vertical raster lines (over 8 sprites, sometimes up to 120 sprites). Until the Group Crest released Krestage 3 in May 2007 there was the common perception that no more than 8 sprites could appear at one raster line, but assigning new Y coordinates made it reappear further down the screen." This is evil... you beat the raster and reposition the sprite before it gets there...

    Read the article

  • 404 redirect with cloud storage

    - by Jeremy DeGroot
    I'm hoping to reach someone with experience using a service like Amazon's S3 with this question. On my site we have a dedicated image server, and on this server we have an automatic 404 redirect through Apache, so that if a user tries to access an image that doesn't exist, they'll see a snazzy "Image Not Available" image. We're looking to move the hosting of these images to a cloud storage solution (S3 or Rackspace's CloudFiles), and I'm wondering if anyone has had any success replicating this behavior on a cloud storage service and, if so, how they did it.
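
    Edit: the closest match I've found so far is S3's static-website hosting, which can serve a designated error object on misses - note it applies to the website endpoint, not the plain REST endpoint, and it answers with a 404 status plus your image rather than a redirect. A sketch with today's boto3 (bucket and key names invented):

        import boto3

        s3 = boto3.client("s3")
        s3.put_bucket_website(
            Bucket="images.example.com",            # hypothetical bucket
            WebsiteConfiguration={
                "IndexDocument": {"Suffix": "index.html"},
                # served (with a 404 code) whenever the requested key is missing
                "ErrorDocument": {"Key": "image-not-available.png"},
            },
        )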

    Read the article

  • Any Open Source Pregel like framework for distributed processing of large Graphs?

    - by Akshay Bhat
    Google has described a novel framework for distributed processing on massive graphs. http://portal.acm.org/citation.cfm?id=1582716.1582723 I wanted to know whether, similar to Hadoop (MapReduce), there are any open-source implementations of this framework. I am actually in the process of writing a pseudo-distributed one using Python and the multiprocessing module, and thus wanted to know whether someone else has also tried implementing it, since public information about this framework is extremely scarce (the link above and a blog post at Google Research).
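
    Edit: for flavor, this is roughly the superstep loop I mean - a toy, single-process sketch of the vertex-centric model, with no partitioning, combiners, or fault tolerance:

        def run_supersteps(graph, values, compute, max_steps=30):
            """graph: {vertex: [neighbors]}, values: {vertex: state}.
            compute(step, value, inbox) -> (new_value, outgoing, vote_to_halt);
            outgoing messages are broadcast to the vertex's neighbors."""
            inbox = dict((v, []) for v in graph)
            active = set(graph)
            for step in range(max_steps):
                if not active:
                    break
                outbox = dict((v, []) for v in graph)
                still_active = set()
                for v in active:
                    values[v], outgoing, halt = compute(step, values[v], inbox[v])
                    for nbr in graph[v]:
                        outbox[nbr].extend(outgoing)
                    if not halt:
                        still_active.add(v)
                # a delivered message reactivates a halted vertex
                active = still_active | set(v for v in graph if outbox[v])
                inbox = outbox
            return values

        # example: flood the maximum vertex value through a connected graph
        def max_compute(step, value, inbox):
            new = max([value] + inbox)
            send = step == 0 or new != value
            return new, ([new] if send else []), not send

        graph = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
        print(run_supersteps(graph, {"a": 3, "b": 6, "c": 2}, max_compute))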

    Read the article

  • Retro video games programming

    - by SebKom
    I just watched the Super Mario Bros. World -1 glitch on YouTube and I really began wondering about the code behind those games. Which languages were used? What about the OS on those video game consoles? Are there any websites with resources on this subject? (I am a 90s video gamer, so I am particularly interested in the programming behind those games, but feel free to make this a wiki and include links to resources about video game programming in general, if you want.)

    Read the article

  • Problem with copying local data onto HDFS on a Hadoop cluster using Amazon EC2/S3

    - by Deepak Konidena
    Hi, I have set up a Hadoop cluster containing 5 nodes on Amazon EC2. Now, when I log in to the master node and submit the following command:

        bin/hadoop jar <program>.jar <arg1> <arg2> <path/to/input/file/on/S3>

    it throws the following errors (not at the same time). The first error is thrown when I don't replace the slashes with '%2F', and the second is thrown when I do replace them with '%2F':

    1) Java.lang.IllegalArgumentException: Invalid hostname in URI S3://<ID>:<SECRETKEY>@<BUCKET>/<path-to-inputfile>
    2) org.apache.hadoop.fs.S3.S3Exception: org.jets3t.service.S3ServiceException: S3 PUT failed for '/' XML Error Message: The request signature we calculated does not match the signature you provided. Check your key and signing method.

    Note:
    1) When I submitted jps to see what tasks were running on the master, it showed just 1116 NameNode, 1699 Jps and 1180 JobTracker - no DataNode and no TaskTracker.
    2) My secret key contains two '/' (forward slashes), and I replace them with '%2F' in the S3 URI.

    PS: The program runs fine on EC2 when run on a single node. It's only when I launch a cluster that I run into issues with copying data to/from S3 from/to HDFS. Also, what does distcp do? Do I need to distribute the data even after I copy it from S3 to HDFS? (I thought HDFS took care of that internally.) If you could direct me to a link that explains running MapReduce programs on a Hadoop cluster using Amazon EC2/S3, that would be great. Regards, Deepak.
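
    Update: on the signature error specifically, the advice I keep running into is to keep the secret key out of the URI entirely (the '/' characters are exactly what breaks the signing) and put the credentials in the Hadoop config instead - property names as I understand them from 0.18/0.20-era Hadoop, placeholders invented:

        <!-- hadoop-site.xml (or core-site.xml) on every node -->
        <property>
          <name>fs.s3.awsAccessKeyId</name>
          <value>YOUR_ACCESS_KEY_ID</value>
        </property>
        <property>
          <name>fs.s3.awsSecretAccessKey</name>
          <value>YOUR_SECRET_ACCESS_KEY</value>
        </property>
        <!-- the input path then becomes plain s3://<BUCKET>/<path>; use the
             fs.s3n.* variants instead if the bucket holds ordinary files
             read via s3n:// -->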

    Read the article

  • Where can I get resources for developing for Mac OS Classic?

    - by Benjamin Pollack
    I recently got bored and fired up my old Mac OS Classic emulator, and then got nostalgic for writing old-school applications for the system. So, my question: Where can I get dev tools that can still target Classic? (Ideally free, since this is just for fun, but if grabbing a used version of CodeWarrior on eBay is the best way to go, so be it.) Where can I get at least reference materials so I don't have to guess-and-check my way around Carbon/the System Toolbox? Are there any forums still running that would be open to answering old-school Mac questions for when I get stuck? This is purely for fun, so don't worry about how impractical this is. I know.

    Read the article
