Search Results

Search found 17990 results on 720 pages for 'virtualization option'.

Page 37/720 | < Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >

  • Rails fields_for :child_index option explanation

    - by Timothy
    I have been trying to create a complex form with many nested models and make it dynamic. Making a nested model isn't difficult with accepts_nested_attributes_for, but making it both nested and dynamic seemed impossible with multiple nested models. I came across http://github.com/ryanb/complex-form-examples/blob/master/app/helpers/application_helper.rb which does it very elegantly. Could anyone shed some light on lines 13 and 16 of that helper?

        Line 13: form_builder.object.class.reflect_on_association(method).klass.new
        Line 16: form_builder.fields_for(method, options[:object], :child_index => "new_#{method}") do |f|

    Intuitively, line 13 instantiates a new object, but why does it need so many method calls? I also couldn't find any documentation for the :child_index option used on line 16. When the form is created, a very large number is used as the index for new models, whereas existing models are indexed by their id. How does this work?
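
    For context, here is a minimal sketch (not the linked helper verbatim; the partial name and the add_fields JavaScript hook are assumptions) of how such a helper typically combines the reflection on line 13 with the :child_index option on line 16: build a blank associated object, render its fields with a placeholder index, and let client-side script swap the placeholder for a unique value when the fields are inserted into the page.

        # Sketch of an "add fields" link helper for dynamic nested forms.
        def link_to_add_fields(name, form_builder, method)
          # As on line 13: find the associated class for `method` (e.g. :tasks -> Task)
          # and instantiate a blank record to render fields for.
          new_object = form_builder.object.class.reflect_on_association(method).klass.new

          # As on line 16: :child_index forces the index used in the generated input
          # names to the literal placeholder "new_<method>" instead of a number, so the
          # markup can be cloned and the placeholder replaced (e.g. with a timestamp)
          # by JavaScript before submission.
          fields = form_builder.fields_for(method, new_object,
                                           :child_index => "new_#{method}") do |builder|
            render("#{method.to_s.singularize}_fields", :f => builder)
          end

          link_to_function(name, "add_fields(this, '#{method}', '#{escape_javascript(fields)}')")
        end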

    Read the article

  • PHP pecl/memcached extension slow when setting option for consistent hashing

    - by HarryF
    We're using the newer PHP pecl/memcached extension. Calls to Memcached::setOption() like

        $m = new Memcached();
        $m->setOption(Memcached::OPT_DISTRIBUTION, Memcached::DISTRIBUTION_CONSISTENT);

    are costing between 150 and 500 ms, just in making the call to setOption(). Since we're not using persistent connections but doing this on every request, it hurts. Delving deeper, setting Memcached::OPT_DISTRIBUTION to Memcached::DISTRIBUTION_CONSISTENT ends up calling update_continuum() in libmemcached, which appears to be fairly intensive. We're only passing in a list of 15 memcached servers, though, so it's surprising to see it take 150 to 500 ms to rebuild the continuum data structure. Could it be that this option is only suitable for persistent connections, where it's set once while making the initial connection? Or is this a bug in libmemcached? We're on pecl/memcached 1.0.1 with libmemcached 0.38. Thanks.
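
    One mitigation worth sketching (under the assumption that the extension's persistent-connection support behaves as documented; the pool id and server list below are invented): give the Memcached instance a persistent id and only set options and add servers when the pool is first created, so the continuum is rebuilt once rather than on every request.

        <?php
        // Configure a persistent memcached pool only on first creation.
        $servers = array(array('10.0.0.1', 11211), array('10.0.0.2', 11211)); // invented hosts
        $m = new Memcached('app_pool');          // persistent instance, keyed by pool id
        if (count($m->getServerList()) === 0) {  // empty only the first time this pool is built
            $m->setOption(Memcached::OPT_DISTRIBUTION, Memcached::DISTRIBUTION_CONSISTENT);
            $m->setOption(Memcached::OPT_LIBKETAMA_COMPATIBLE, true);
            $m->addServers($servers);
        }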

    Read the article

  • GCC (ld) option to strip unreferenced data/functions

    - by legends2k
    I've written a program which uses a library that has numerous functions, but I only use a limited number of them. GCC is the compiler I use. Once I've created a binary, running nm on it shows all the unwanted (unreferenced) functions that are never used. How do I remove those unreferenced functions and data from the executable? Is the -s option right? I'm told it strips all symbol table and relocation data from the binary, but does that remove the functions and data too? I'm also not sure how to verify this, since after using -s nm no longer works - the symbol table data has been stripped as well.
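
    For reference, a possible approach (not verified against the asker's library or toolchain; file and library names are made up): compile with per-function and per-data sections, then let the linker garbage-collect whatever is never referenced. The -s flag only removes symbol and relocation information; it does not remove the code itself.

        # Put every function and data object in its own section...
        gcc -ffunction-sections -fdata-sections -c main.c -o main.o
        # ...then ask the linker to drop sections that nothing references.
        gcc main.o -L. -lmylib -Wl,--gc-sections -o app
        # -Wl,--print-gc-sections lists what was discarded; `strip app` (or -s)
        # afterwards removes only the symbol table, not functions or data.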

    Read the article

  • rdoc and the "--accessor" option

    - by Brian Ploetz
    rdoc --help says:

        --accessor, -A accessorname[,..]
            comma separated list of additional class methods that should be
            treated like 'attr_reader' and friends. Option may be repeated.
            Each accessorname may have '=text' appended, in which case that
            text appears where the r/w/rw appears for normal accessors.

    Does anyone have any working examples of doing this (both the accessor method definition and the rdoc command invocation)? No matter what combination I try, my accessors will not show up in the RDoc output. Thanks.

    Read the article

  • Don't see Clean solution option in Visual Studio

    - by user102533
    For one of my solutions, I don't see the Clean Solution option either in the context menu when I right-click the solution name in Solution Explorer or in the Build menu. When I make changes to the project and debug, VS never hits the breakpoint and I get the message "The breakpoint will not currently be hit. The source code is different from the original version." My understanding is that I need to clean the solution. For other solutions, I do see Clean Solution and I don't have this issue.

    Read the article

  • Eclipse 3.5 Missing New Web Application option after installing Google App Engine plugin

    - by stevebot
    Hey all, I just used Eclipse 3.5 to install the Google App Engine plugin. The plugin shows as installed in the update manager. However, I am not seeing the "New Web Application Project" option (http://code.google.com/appengine/docs/java/tools/eclipse.html). I also don't see anything Google-related when I type "Google" into the search bar under Window > Preferences. There were no errors at installation time; I was asked if I wanted to restart Eclipse, clicked yes, and it restarted accordingly. Am I missing something?

    Read the article

  • Why don't Android applications provide an "Exit" option?

    - by Howiecamp
    Is there something in the Android developer guidelines that dissuades developers from providing an option to "exit" (stop running) an application from within the application itself? I love multitasking and all, but it's not clear to me why the vast majority of apps:

    - don't have their own Exit function, and hence just keep running forever
    - don't give you a choice about running when you turn on the phone - they just start by default

    Both of these things lead to memory usage constantly increasing and your device running with this performance burden all of the time, even though you may only want certain apps to run some of the time. Am I missing something?

    Read the article

  • Error when using --static option with macrubyc

    - by Jakub Hampl
    I want to create a binary executable for a relatively simple script that would not require people to install MacRuby or HotCocoa. The script is here. I understand that I want to use the --static option for the compiler, and I'm using the following command:

        macrubyc -o postprocessor --static postprocessor.rb

    I get the following error:

        ld: library not found for -lLLVMBitWriter
        collect2: ld returned 1 exit status
        Error when executing `/usr/bin/g++ -o "postprocessor" -arch x86_64 -L/Library/Frameworks/MacRuby.framework/Versions/0.6/usr/lib -lmacruby-static -L/usr/local/lib -lpthread -lffi -lm -lLLVMBitWriter -lLLVMX86CodeGen -lLLVMX86Info -lLLVMSelectionDAG -lLLVMAsmPrinter -lLLVMJIT -lLLVMExecutionEngine -lLLVMCodeGen -lLLVMScalarOpts -lLLVMTransformUtils -lLLVMipa -lLLVMAnalysis -lLLVMTarget -lLLVMMC -lLLVMCore -lLLVMSupport -lLLVMSystem -lpthread -ldl -lxml2 -lobjc -lauto -licucore -framework Foundation "/var/folders/wU/wUGgoG1JGeKBgwalWLPMAU+++TI/-Tmp-/main-72203.o" "./postprocessor.o"'

    What should I do to get this running?

    Read the article

  • Html.DropDownListFor() in Mozilla Firefox

    - by Andrey
    I'm rendering a drop-down list using the Html.DropDownListFor() extension. The markup I get is as follows:

        <select id="NationalityId" name="NationalityId">
            <option value=""></option>
            <option selected="selected" value="1">Estonian</option>
            <option value="2">Russian</option>
            <option value="3">Ukranian</option>
            <option value="4">Belorussian</option>
            <option value="5">Swedish</option>
            <option value="6">Dutch</option>
        </select>

    As you can see, the option with value == 1 is selected. But in Firefox 3.6.3 it doesn't display as selected; the empty string (first option, value == "") is shown instead. IE7 and Chrome render the page correctly - the option is selected. Does anybody know what is going on? How do I work around this?

    Read the article

  • /clr option in c++

    - by muhammad-aslam
    Hello friends, please give me a solution for this error: "fatal error C1190: managed targeted code requires a '/clr' option". How can I resolve this problem? My configuration is Visual Studio 2008 on Windows 7. Here is the code (I got it from resources on the net):

        #using <mscorlib.dll>
        using namespace System;
        using namespace System::IO;

        int main()
        {
            // Create a reference to the current directory.
            DirectoryInfo* di = new DirectoryInfo(Environment::CurrentDirectory);
            // Create an array representing the files in the current directory.
            FileInfo* fi[] = di->GetFiles();
            Console::WriteLine(S"The following files exist in the current directory:");
            // Print out the names of the files in the current directory.
            Collections::IEnumerator* myEnum = fi->GetEnumerator();
            while (myEnum->MoveNext())
            {
                FileInfo* fiTemp = __try_cast<FileInfo*>(myEnum->Current);
                Console::WriteLine(fiTemp->Name);
            }
        }

    Read the article

  • Tortoise SVN does not give option to "Add to SVN"

    - by Clay Nichols
    I've created an SVN repository, added folders and contents, and committed. No problem. But when I go to add a new folder (the others were on the P:\ drive; now I want to add our website, which is on the C:\ drive), Tortoise doesn't give me the option of adding a folder. I have no idea why. The help file shows the instructions I'd expect ("right click on the folder you want to add and choose + Add...") but Add... isn't in the menu. This is TortoiseSVN v1.6.7.18415 (I'm about to update it, but I was able to add folders before, so I don't think this is just a bug - I think maybe I'm missing something obvious).

    Read the article

  • Robocopy for Windows 2003 doesn't support /DST option

    - by Jon
    Does anyone know if it is possible to download the latest Robocopy for Windows 2003? The latest version provides the /DST option, which ignores timestamp differences caused by BST (British Summer Time). Every time we do a build and sync our servers after a +1/-1 hour shift, it takes hours instead of minutes because Robocopy sees everything as changed. I noticed the newer version is included automatically with Vista/Windows 7, but the Resource Kit that I downloaded doesn't include a new version of Robocopy for Windows Server 2003. Is there a place to download it from, and will it also work on Windows Server 2003? Thanks.

    Read the article

  • Visual Studio 2010 - Export (Project) Template menu option grayed out

    - by Jakobud
    In Visual Studio, I want to make a simple C++ project and export it as a template, so I can use the template to start new projects and save time. But the Export Template menu option is always grayed out; I've not once been able to click it. Does anyone know why, or how to accomplish what I need (besides the obvious "make a copy of an existing project in Explorer")? It seems like project templates should be a no-brainer feature for VS. This seems to be the case for Visual Studio 2005 and 2010 (and probably 2008 as well, though I haven't checked).

    Read the article

  • clearcase option for view movement from one host path to another

    - by wrapperm
    Hi all, I have created a ClearCase dynamic view for my development named "view1". I mistakenly selected as the view storage location a local PC on my network that was made sharable by its owner; I was supposed to select a server as the view storage location. The issue now is that I have done a lot of development with this view and have plenty of view DOs (derived objects) and view-private files in it, so I'm ruling out the option of deleting the view from the PC's local storage (host path) and creating another view on the server with the same config spec. Please let me know if there is any way of editing the view properties (or doing something else) that would let me move the view to the server with all the DOs and view-private files retained. Thanks in advance, Rahamath

    Read the article

  • Need help with jQuery UI Accordion navigationFilter option

    - by theJBRU
    I'm building an accordion for navigation. Each section of the accordion has a set of links. The firing code looks like this:

        $(document).ready(function() {
            $(".selector").accordion({
                collapsible: true,
                active: false,
                navigation: true
            });
        });

    This all worked fine and dandy until one of the links in each set was edited to point to a single file, call it foo.html. Now if you navigate to foo.html, location.href matches every section of the accordion (since each section has a link to it), which opens all the sections and defeats the purpose of the accordion. I'm pretty sure I need to use the navigationFilter option, but I've googled the living hell out of it and haven't found any examples of how to build the function associated with it. Help me, Stack Overflow!
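
    For reference, a minimal sketch of a navigationFilter callback (the ".selector" element is taken from the question; the matching rule is an assumption to be tightened to whatever uniquely identifies the current page). jQuery UI calls the function once per navigation link with the anchor as `this`, and opens only the section whose link returns true.

        $(".selector").accordion({
            collapsible: true,
            active: false,
            navigation: true,
            navigationFilter: function() {
                // `this` is the anchor being tested; compare full hrefs so a link
                // to foo.html only matches when we are actually on foo.html.
                return this.href.toLowerCase() === location.href.toLowerCase();
            }
        });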

    Read the article

  • Trying to use the "Use Specific Printer" option in Access 2007

    - by garynei
    I am trying to set a report to use a specific printer. I go into design mode, click the Page Setup ribbon, click the Page Setup button, go to the Page tab, select the option to use a specific printer, and then click the Printer button to choose the printer I want. I save the changes and exit the report, but it still prints to the default printer. Why? I had no problems with this feature in Access 2003, so why am I having problems in 2007? Any suggestions on how to fix this would be greatly appreciated, thanks.

    Read the article

  • Dropdown list with a "Select All" option

    - by shawnboy
    I have three dropdown boxes: Region, District, and City. I want my District dropdown to have a "Select All" option so the user can get all cities in the Region; otherwise it should just display the cities for the selected District. My query looks like this:

        IF @district = -2 THEN
            (SELECT DISTINCT city FROM myTable WHERE RIGHT(Region, 3) = ? ORDER BY city)
        ELSE
            (SELECT DISTINCT city FROM myTable WHERE District = ? ORDER BY city)

    I'm using VB.NET/SQL. I couldn't find any complex case scenarios in my search either. Any suggestions would be appreciated!
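
    One possible way to fold both cases into a single parameterized statement, sketched below (the -2 sentinel, the RIGHT(Region, 3) comparison, and the table name come from the question; the parameter names are assumptions):

        SELECT DISTINCT city
        FROM myTable
        WHERE (@district = -2 AND RIGHT(Region, 3) = @region)   -- "Select All": every city in the region
           OR (@district <> -2 AND District = @district)        -- otherwise: cities for the chosen district
        ORDER BY city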

    Read the article

  • No exit option if something is running

    - by max
    How can I catch what is going on when the user chooses the Exit option from a menu? I'd like to handle the case where a user is about to close the application while some activity is being performed, so that he/she shouldn't be able to exit. Here's the code I wrote. In a nutshell:

        recording is being performed -- user clicks Exit -- WARNING "Process is running, you can't do it" (the process goes on)
        nothing is running -- user clicks Exit -- application closes

    Is it possible to solve the problem by just adding a few lines of code, without having to rewrite the entire program? Thanks very much in advance. MAX

        exitAction = new AbstractAction("Exit") {
            public void actionPerformed(ActionEvent e) {
                System.exit(0);
            }
        };
        exitAction.putValue(Action.NAME, "Exit"); // description
        exitAction.putValue(Action.SHORT_DESCRIPTION, "Exit application");
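
    A minimal sketch of one way to do it with only a few extra lines (the isRecording() check is an assumption standing in for however the program tracks its recording state): test the busy flag inside actionPerformed and show a warning instead of exiting.

        import java.awt.event.ActionEvent;
        import javax.swing.AbstractAction;
        import javax.swing.Action;
        import javax.swing.JOptionPane;

        // ...
        exitAction = new AbstractAction("Exit") {
            public void actionPerformed(ActionEvent e) {
                if (isRecording()) {  // assumed helper: true while recording is in progress
                    JOptionPane.showMessageDialog(null,
                            "Process is running, you can't exit now.",
                            "Warning", JOptionPane.WARNING_MESSAGE);
                } else {
                    System.exit(0);
                }
            }
        };
        exitAction.putValue(Action.NAME, "Exit");
        exitAction.putValue(Action.SHORT_DESCRIPTION, "Exit application");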

    Read the article

  • ESXi Server with 12 physical cores maxed out with only 8 cores assigned in virtual machines

    - by Sam
    I have an ESXi 5 server running on a 2-processor, 12-core system with hyperthreading enabled. So: 12 physical cores, 24 logical ones. On this server are 4 Windows 7 VMs, each configured for 2 processors, each running VMware Tools. Looking at my stats in vSphere, my "core utilization" is constantly maxed out. Yes, these machines are working hard, but only 8 cores have been allocated. How is this possible? Should I look into reducing the processor count per machine as in this post: VMware ESX server? I checked to ensure that hardware virtualization is enabled in the BIOS of the machine (a DELL R410). I've also started reading up on configuration, but being a newbie there's a lot of material to catch up on. It also seems I should only bother with advanced settings and pools if I'm really pushing the load, and I don't think that I should be pushing it with so few VMs. I suspect that I have some basic, incorrect configuration setting, but it's also possible that I have some giant misconceptions about virtualization. Any pointers? EDIT: Given the responses I've gotten so far, it seems that this is a measurement problem and not a configuration problem, making this less critical. Perhaps the real question is: How does the core utilization of the server reach a higher percentage than all individual cores' core utilization, and given that this possibility makes the metric useless for overall server load, what is the best global metric for measuring CPU load on hyper-threaded systems?

    Read the article

  • Huge or minimal performance hit running game servers on a Virtual Machine? [closed]

    - by Damainman
    I have two dedicated servers to choose from, depending on which one would do a better job. I plan on upgrading the hard drive space and RAM at a later date, depending on how I move forward.

        Server 1: 500GB hard drive, 8GB RAM, 2x 64-bit Intel Xeon L5420 (quad core) @ 2.50GHz
        Server 2: 500GB hard drive, 8GB RAM, 2x 64-bit Intel Xeon E5420 (quad core) @ 2.50GHz

    I want to run a virtual machine that will host about 10 game servers, with about 16 active slots per server. It will be a mix and match from: Minecraft, Counter-Strike (1.6, Source, Global Offensive), Battlefield, Team Fortress. I know the general consensus is that virtualization is a horrible idea if you plan to run game servers on virtual machines. The issue is that the discussions I read do not clearly state whether they are talking about a virtual server running inside an OS (i.e. VMware Player running on Windows with the game server in a VM) or a hypervisor such as Xen Cloud Platform. I am trying to get a definite answer on how feasible the above would be, and how much of a performance hit there might be if the VM running the game servers sits on a hypervisor such as Xen Cloud Platform. My initial research led me to believe that there wouldn't be a performance hit, since that kind of virtualization is different from running it inside an OS.

    Read the article

  • Shared firewall or multiple client specific firewalls?

    - by Tauren
    I'm trying to determine if I can use a single firewall for my entire network, including customer servers, or if each customer should have their own firewall. I've found that many hosting companies require each client with a cluster of servers to have their own firewall. If you need a web node and a database node, you also have to get a firewall, and pay another monthly fee for it. I have colo space with several KVM virtualization servers hosting VPS services to many different customers. Each KVM host is running a software iptables firewall that only allows specific ports to be accessed on each VPS. I can control which ports any given VPS has open, allowing a web VPS to be accessed from anywhere on ports 80 and 443, but blocking a database VPS completely to the outside and only allowing a certain other VPS to access it. The configuration works well for my current needs. Note that there is not a hardware firewall protecting the virtualization hosts in place at this time. However, the KVM hosts only have port 22 open, are running nothing except KVM and SSH, and even port 22 cannot be accessed except for inside the netblock. I'm looking at possibly rethinking my network now that I have a client who needs to transition from a single VPS onto two dedicated servers (one web and one DB). A different customer already has a single dedicated server that is not behind any firewall except iptables running on the system. Should I require that each dedicated server customer have their own dedicated firewall? Or can I utilize a single network-wide firewall for multiple customer clusters? I'm familiar with iptables, and am currently thinking I'll use it for any firewalls/routers that I need. But I don't necessarily want to use up 1U of space in my rack for each firewall, nor the power consumption each firewall server will take. So I'm considering a hardware firewall. Any suggestions on what is a good approach?
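
    For illustration, a hedged sketch of the kind of per-guest iptables rules described above (the addresses and the MySQL port are invented, and real rules would also match on the bridge interface): allow the web VPS to be reached on 80/443 from anywhere, let only that VPS reach the database VPS, and drop everything else destined for the database.

        # Web VPS (192.0.2.10): reachable from anywhere on HTTP/HTTPS.
        iptables -A FORWARD -d 192.0.2.10 -p tcp -m multiport --dports 80,443 -j ACCEPT
        # Database VPS (192.0.2.20): reachable only from the web VPS on the DB port.
        iptables -A FORWARD -s 192.0.2.10 -d 192.0.2.20 -p tcp --dport 3306 -j ACCEPT
        iptables -A FORWARD -d 192.0.2.20 -j DROP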

    Read the article

  • Upgrade an Ubuntu 8.04 installation with VMware Server 1.0.8 and lots of guest OSes to Something Else

    - by Glyph
    I have an Ubuntu 8.04 (Hardy Heron) host machine which is running a whole slew of virtual machines in VMware Server 1.0.8. Among other guest OSes, there is every release of Ubuntu since 6.06, OpenSolaris 2009.06, and Windows XP. Right now I access these VMs from a variety of client OSes as well: Linux and Windows via the VMware Server console, and Mac OS by X-forwarding the host machine's server console. I'd like to upgrade the host to Ubuntu 10.04 (Lucid Lynx), but from what I can tell, getting VMware Server 1.x to work on a more recent version of Linux is a real pain. While VMware Server 2.x is a bit easier, it's still not packaged as Debian packages, so installing security updates is a big chore. As long as I'm upgrading anyway, I'd like to move to a virtualization solution that will allow me to automate applying updates. The options I'm aware of right now are KVM (managed via virt-manager) and VirtualBox (managed by its own tools or via its own libvirt bindings), but I'm open to other suggestions. For each option, I'd like to know:

    - How do I convert my guest images to the new format?
    - Am I going to have to re-activate my Windows guests? (Alternatively: if the virtual hardware is different by default, can I avoid re-activation by changing some virtualization configuration to provide more similar virtual hardware?)
    - What are the management options like for each client OS (Mac, Linux, Windows)?

    Thanks.
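
    On the image-conversion question, a hedged example (file names invented; assumes qemu-img is available, as it usually is alongside KVM) of converting a VMware disk for use with KVM/libvirt. VirtualBox can generally attach VMDK files directly, so conversion may not be needed there at all.

        # Convert a VMware VMDK to qcow2 for KVM/libvirt.
        qemu-img convert -f vmdk -O qcow2 ubuntu-6.06.vmdk ubuntu-6.06.qcow2
        # Check the result before discarding the original.
        qemu-img info ubuntu-6.06.qcow2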

    Read the article

  • Desktop Provisioning for a Small Linux Software Development Team

    - by deakblue
    Goal: get a small team using a standard development image rather than 4 software devs setting up their own environments. Why: it takes a day or more to install a distro, build-specific libraries, tools like editors and IDEs, mysql, couchdb, java, maven, python, android-sdk, etc. It's a giant PITA that, when repeated 4 times by 4 developers (not sysadmins), wastes time and generates annoying divergences that crop up later (it-builds-on-my-box syndrome). There's no sharing of productivity, settings, tricks, scripts, or set-ups. Some of this is helped by segregating the build systems into headless VirtualBox images, but that doesn't really address tooling or the GUI desktop development that needs doing. So I see three basic strategies: ghosting, virtualization, and finally creating a kind of in-house Linux distro (I guess Google does something like this). The target dev environment is based on Debian with OpenBox, and must work single- and multi-head on a mix of 3rd-gen Core i7 notebooks with 8GB minimum. Importantly, the lappies are not all the same, but a mix of 2012 MacBooks and PCs. So:

    - Virtualization: is doing all of your work within a VM, like VirtualBox, practical on this hardware, or annoying?
    - Ghosting: will laptops from different manufacturers make this impractical?
    - DIY distro: short of scripting a bunch of package installs, I don't know if there's any "distro-maker" that could keep this from being an epic project.

    Any advice?

    Read the article

  • Host system resets (crashes) when using VMWare or VirtualBox and 64-bit guest systems

    - by sinni800
    I have been trying to install virtual systems in VMware for a while now and have encountered strange behaviour from my PC. The behaviour is as follows:

    - In "automatic" virtualization mode it either outputs a cryptic error message (can MAYBE give later, if I can reproduce) right on startup (before even the BIOS), or it resets the complete HOST system (black screen, BIOS...).
    - If I install Windows XP in it, it works well in "binary translation" mode.
    - If I try installing Linux, in "binary translation" mode it crashes 1 or 2 seconds after I hit Enter on the GRUB selection screen (after the first page of kernel messages rolled in).
    - Using VirtualBox it crashes right in the BIOS. It gave me a bluescreen though! 0x00000101: CLOCK_WATCHDOG_TIMEOUT: a clock interrupt was not received on a secondary processor within the allocated time interval.

    NEWS: I tried VirtualBox again and it did not completely crash the computer this time. It gave me a critical error and a log file: http://pastebin.com/yKZSDs91 In conclusion, it will crash instantly if VT-x is activated. If not, it seemingly only crashes if I try to install something 64-bit. Another update: yes, it ONLY crashes when the guest is 64-bit!

    What I tried: reinstalling Windows (my Windows installation was quite broken, so it seemed natural - didn't work though); a new BIOS.
    What I am certain of: virtualization extensions are activated in the BIOS.
    My computer specs: ASUS P8P67 LE mainboard with the newest BIOS/EFI firmware, Intel Core i5 2500K, ATI Radeon HD 5770, 16 GB Corsair 1333MHz DDR3 RAM (4 x 4 GB).

    Read the article

  • Running Windows 7 physical disk virtualized under Linux

    - by CajunLuke
    I have an existing Windows 7 installation that I'd like to virtualize under Linux. Windows boots fine on Disk A, Linux boots fine on Disk B. (Both disks are SATA.) I can mount the Windows disk when in Linux. I've tried VirtualBox and VMWare Player and neither will allow me to boot from the other disk. VirtualBox doesn't seem to have the option to do so. VMWare Player has the option to have an IDE drive exposed to the virtual environment as a SCSI disk. I've tried that, but it throws the error "Cannot connect virtual device ide1:0 because no corresponding device is available on the host." I've verified that it's pointing to the correct hard drive. I'm willing to try other virtualization products, and I'm not averse to spending a little money to get this to work. I've seen this other question, and it's not a duplicate, as I haven't gotten that far yet. I'm also interested in solutions going the other way (Linux on Windows), but that'd be lagniappe. Gory Hardware Details: Lenovo T410, 2.4 GHz Core i5 (has virtualization extensions), 4GiB RAM, 2x 320 GiB SATA HDD, one in optical bay. Fedora 14 2.6.35.10-74.fc14.x86_64, Windows 7 32-bit.
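
    One avenue worth noting, sketched under the assumption that the Windows disk is /dev/sda (adjust to the real device, and make sure the user running VirtualBox can read and write it): VirtualBox does support booting a physical disk, but only through a raw-access VMDK created on the command line rather than through the GUI.

        # Create a VMDK descriptor that points at the whole physical Windows disk.
        VBoxManage internalcommands createrawvmdk -filename ~/win7raw.vmdk -rawdisk /dev/sda
        # Then attach ~/win7raw.vmdk to the VM as its hard disk (in the GUI or via
        # VBoxManage storageattach) and boot the VM from it.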

    Read the article
