Search Results

Search found 8824 results on 353 pages for 'cloud virtualization vmware density scalability'.


  • Creating an email notification system based on polling database rows

    - by Ashish Sharma
    I have to design an email notification system based on the following requirements: The email notifications are created by polling rows in a MySQL 5.5 DB table for a particular 'Completed' state. The email notification should be sent out no more than 5 minutes after the row was created in the DB table (at the time of row creation, the state of the row might not yet be 'Completed'). If the 5 minutes expire without the row reaching the 'Completed' state, a separate email notification needs to be sent (basically telling the user that the original notification will be delayed), and the original notification is then sent as and when the row state reaches 'Completed'. The rest of the system requirements are: adding relevant checks to monitor the whole system via an MBeans interface, and making the system scalable, so that if the rate of DB table row creation increases, the email notification system can ramp up with it. So I request suggestions along the following lines: What approach should I take in solving the problem described, from a programming/design-pattern point of view? Any suggestions for third-party plugins/software that can be used to solve the problem described? What points should I take care of regarding scalability and monitoring the health of the system? Java is the language of preference, but I am open to using off-the-shelf components that can be interfaced with Java or that provide standard ports for communication. I currently have a home-grown system (written in Java) that caters to the specified requirements, but it's now crumbling under increased load, and I want to give the problem a fresh look. Thanks in advance, Ashish
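
    A minimal sketch of the polling approach, assuming a hypothetical notifications table with id, email, state, created_at, notified and delay_notice_sent columns (none of these names come from the question):

      import java.sql.*;
      import java.util.concurrent.*;

      // Illustrative only: schema, polling interval and pool sizes are assumptions.
      public class NotificationPoller {
          private final ScheduledExecutorService poller = Executors.newSingleThreadScheduledExecutor();
          private final ExecutorService mailers = Executors.newFixedThreadPool(4); // size to SMTP throughput
          private final String jdbcUrl;

          public NotificationPoller(String jdbcUrl) { this.jdbcUrl = jdbcUrl; }

          public void start() {
              // Poll well inside the 5-minute deadline so completed rows are picked up quickly.
              poller.scheduleWithFixedDelay(this::poll, 0, 30, TimeUnit.SECONDS);
          }

          private void poll() {
              try (Connection c = DriverManager.getConnection(jdbcUrl)) {
                  // Completed rows that have not been notified yet.
                  dispatch(c, "SELECT id, email FROM notifications WHERE state = 'Completed' AND notified = 0", false);
                  // Rows past the deadline and still not completed: send a one-off delay notice.
                  dispatch(c, "SELECT id, email FROM notifications WHERE state <> 'Completed' "
                            + "AND delay_notice_sent = 0 AND created_at < NOW() - INTERVAL 5 MINUTE", true);
              } catch (SQLException e) {
                  e.printStackTrace(); // hook MBean error counters in here
              }
          }

          private void dispatch(Connection c, String sql, boolean delayNotice) throws SQLException {
              try (Statement st = c.createStatement(); ResultSet rs = st.executeQuery(sql)) {
                  while (rs.next()) {
                      long id = rs.getLong("id");
                      String email = rs.getString("email");
                      markSent(c, id, delayNotice); // flag first so a retried poll never double-sends
                      mailers.submit(() -> sendMail(email, delayNotice));
                  }
              }
          }

          private void markSent(Connection c, long id, boolean delayNotice) throws SQLException {
              String col = delayNotice ? "delay_notice_sent" : "notified";
              try (Statement st = c.createStatement()) {
                  st.executeUpdate("UPDATE notifications SET " + col + " = 1 WHERE id = " + id);
              }
          }

          private void sendMail(String to, boolean delayNotice) { /* JavaMail send, omitted */ }
      }

    A single scheduler thread keeps the polling predictable while a separate pool absorbs the slow SMTP work; to scale out to several poller instances, the row claim would need to become atomic (e.g. SELECT ... FOR UPDATE or an UPDATE-then-read of a claimed flag), and the pool and queue sizes are natural things to expose through the MBean interface.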

    Read the article

  • RadBook for Silverlight now supports virtualization

    I am proud to announce that RadBook, along with RadGridView, RadTreeView, RadTreeListView, RadChart and RadScheduler, now supports virtualization. With previous versions, it would take up to 16 seconds to load 1000 pages, whereas now it takes just 2 seconds to load a set of 10,000,000 (10 million) items. The cause of the performance boost is the way RadBook handles the unnecessary (non-visible) elements. As you probably know, while turning a page, only four pages are visible at any given moment in time. Previous versions of RadBook would just collapse the unnecessary elements, which had a significant impact on the initial loading time. The new version of RadBook now takes advantage of the VirtualizingPanel and creates only as many elements as necessary for the book to render properly. Enjoy, and if you have comments or questions on the topic, let us know.

    Read the article

  • Centrally managing 100+ websites without bankrupting a small company

    - by palintropos
    I'm mainly interested in opinions on the trade-offs between having a single central server all the websites connect to as opposed to each website mirroring a subset of the master database with all the products in it. For example, will I run into severe performance issues (or even security issues, or restrictions) making queries to an offsite database? Will we hit scalability issues we can't handle early on from the sheer bandwidth required to maintain this? If we do go with something like a script that keeps smaller databases (each containing a subset of the central master data) in sync, what sorts of issues will we likely encounter there? I would really like the opinions of people far more knowledgeable than I am regarding the pros and cons of both setups and what headaches we are likely to encounter. CLARIFICATION: This should not be viewed as a question about whether we should implement one database vs multiple databases. This question has been answered numerous times. The question is regarding the pros and cons for a deployment like this having the ability to manage all the websites centrally (one server) vs trying to keep them all in sync if they each have their own db (multiple servers). REAL-WORLD EXAMPLE: We are a t-shirt company, and we have individual websites for our different kinds of t-shirts, but we're looking at a central order management integrated with our single shopping cart (which is ColdFusion + MySQL). Now, let's say we have a t-shirt that's on 10 of our websites and we change an image for it. Ideally we would change that in one place and the change would propagate, but how would we set this up?
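
    If you do go the multiple-database route, the usual shape is a change log on the master plus a periodic pull on each site, with a per-site watermark so no change is missed. Below is a rough sketch of that pattern in Java/JDBC; the question's stack is ColdFusion + MySQL, so treat it purely as pseudocode for the pattern, and note that the product_changes and sync_state tables are invented for illustration:

      import java.sql.*;

      // Hypothetical one-way sync: pull product changes from the master DB into one site's DB.
      public class ProductSync {
          public static void syncSite(Connection master, Connection site) throws SQLException {
              long lastSeen = fetchWatermark(site); // highest master change already applied here
              try (PreparedStatement changes = master.prepareStatement(
                       "SELECT change_id, product_id, image_url FROM product_changes "
                     + "WHERE change_id > ? ORDER BY change_id");
                   PreparedStatement upsert = site.prepareStatement(
                       "INSERT INTO products (product_id, image_url) VALUES (?, ?) "
                     + "ON DUPLICATE KEY UPDATE image_url = VALUES(image_url)")) {
                  changes.setLong(1, lastSeen);
                  try (ResultSet rs = changes.executeQuery()) {
                      while (rs.next()) {
                          upsert.setLong(1, rs.getLong("product_id"));
                          upsert.setString(2, rs.getString("image_url"));
                          upsert.executeUpdate();
                          storeWatermark(site, rs.getLong("change_id")); // advance after each change
                      }
                  }
              }
          }

          private static long fetchWatermark(Connection site) throws SQLException {
              try (Statement st = site.createStatement();
                   ResultSet rs = st.executeQuery("SELECT last_change_id FROM sync_state")) {
                  return rs.next() ? rs.getLong(1) : 0L;
              }
          }

          private static void storeWatermark(Connection site, long id) throws SQLException {
              try (Statement st = site.createStatement()) {
                  st.executeUpdate("UPDATE sync_state SET last_change_id = " + id);
              }
          }
      }

    Changing a t-shirt image then becomes one row in product_changes on the master, and every site catches up on its next pull; the trade-off against the single central database is exactly the staleness window between pulls.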

    Read the article

  • What are some good tips for a developer trying to design a scalable MySQL database?

    - by CFL_Jeff
    As the question states, I am a developer, not a DBA. I have experience designing good ER schemas and am fairly knowledgeable about normalization and good schema design. I have also worked with data warehouses that use dimensional modeling with fact and dimension tables. However, all of the database-driven applications I've developed at previous jobs have been internal applications on the company's intranet, never receiving "real-world traffic". Furthermore, at previous jobs I have always had a DBA or someone who knew much more than me about these things. At the new job I just started, I've been asked to develop a public-facing application with a MySQL backend, and the data stored by this application is expected to grow very rapidly. Oh, and we don't have a DBA. Well, I guess I am the DBA. ;) As far as designing a database to be scalable, I don't even know where to start. Does anyone have any good tips, or know of any good educational materials, for a developer who has been shoved into a DBA/database-designer role and tasked with designing a scalable database to support an application like this? Have any other developers been through this sort of thing? What did you do to quickly become good at this role? I've found some good slides on the subject here, but it's hard to glean details from slides; I wish I could've attended that guy's talk. I also found a good blog entry called 5 Ways to Boost MySQL Scalability, which had some good information, though some of it was over my head. tl;dr: I just want to make sure the database doesn't have to be completely redesigned when it scales up, and I'm looking for tips to get it right the first time. The answer I'm looking for is a "list of things every developer should know about making a scalable MySQL database so your application doesn't perform like crap when the data gets huge".

    Read the article

  • CMS and Databases vs. DIY

    - by hozza
    I have been programming for many years now, primarily in PHP and the like, and would consider myself an intermediate programmer. Some of my online projects have now gone global and are very widely used, so I am now in deep thought about scalability etc. All of my systems so far are written in PHP, with no known database system such as MySQL; instead, our databases use an 'operating system style' method of storing information: files and folders, if you will. We also do not use any outside/third-party software or CMS, and so far this has worked out extremely well. Most people, when they hear about the way we do things, criticize it and say it is an idiotic idea, but normally, after seeing our systems in more depth, they are converted to our way of doing things. Is it really that bad not to use a standard database system and to use only the one (slightly heavier than others) language of PHP? How well, on the face of it, will this kind of setup scale? N.B. Our systems include things such as account and user management, documentation development, and task/project management.

    Read the article

  • Store scores for players and produce a high score list

    - by zrvan
    This question is derived from an interview question that I got for a job I was declined. I have asked for a code review of my solution at the dedicated Stack Exchange site, but I hope this question is sufficiently rephrased, and asked with a different motivation, not to be a duplicate of the other question. Consider the following scenario: You should store player scores in the server back end of a game. The server is written in Java. Every score should be registered; that is, one player may have any number of scores for any number of levels. A high score list should be produced with the fifteen top scores for a given level, but only one score per player (to the effect that even if player X has the two highest scores for level Y, only the first position is counted and player Z takes second place). No information should be persisted, and only Java 1.7+ standard libraries should be used; no third-party libraries or frameworks are acceptable. With the number of players as the primary factor, what would be the best data structure in terms of scalability and concurrency? How would you access the structure to register a single score given a level and a player id? How would you access the structure to compile the high score list?
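
    One candidate structure, sketched below using only java.util.concurrent (this assumes Java 8 counts under the "1.7+" constraint; on strict 1.7 you would replace merge/computeIfAbsent with a small compare-and-set loop):

      import java.util.*;
      import java.util.concurrent.*;
      import java.util.stream.Collectors;

      // One concurrent map per level: playerId -> that player's best score.
      public class ScoreBoard {
          private final ConcurrentMap<Integer, ConcurrentMap<Long, Integer>> levels =
                  new ConcurrentHashMap<>();

          // Register a score: lock-free, keeps only the player's personal best for the level.
          public void register(int level, long playerId, int score) {
              levels.computeIfAbsent(level, l -> new ConcurrentHashMap<>())
                    .merge(playerId, score, Math::max);
          }

          // Top fifteen for a level, one entry per player. O(n log n) in the number of
          // players on the level, which is fine while reads are rare relative to writes.
          public List<Map.Entry<Long, Integer>> highScores(int level) {
              ConcurrentMap<Long, Integer> scores = levels.get(level);
              if (scores == null) return Collections.emptyList();
              return scores.entrySet().stream()
                           .sorted(Map.Entry.<Long, Integer>comparingByValue().reversed())
                           .limit(15)
                           .collect(Collectors.toList());
          }
      }

    Writes scale with the player count because each register touches a single map entry; if high-score reads turn out to dominate, a small sorted top-15 snapshot per level, rebuilt only on scores that beat the current cutoff, would trade a little write work for O(1) reads.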

    Read the article

  • scalability with nginx, passenger, ruby on rails setup

    - by Dani Cela
    Hey guys, I have a question regarding scalability for my RoR application. We have been optimizing our application over the last few days, and after running blitz.io we noticed that after maybe 1000 hits in 30 seconds our application experienced massive timeouts. In the 1-minute test, apparently 74% of users would have timed out. Look at the performance of my website: http://blitz.io/report/1c8eb2f395a5eadeabd62fd831ada9e5 I'm not saying that our website will experience this load now, but I wish to design the infrastructure to handle it. What is normally done in this situation? Currently we have one web server and one database server. Would load balancing be the route to go?

    Read the article

  • Linear Performance Scalability with HP SAN Solutions

    - by Berzemus
    Hi all, I need a SAN solution with linear scalability in size as well as in performance. From what I know, with a Modular Smart Array solution such as the P2000/MSA-class solutions from HP, even with a dual-controller initial node, I can only increase its size, as added nodes come controller-less, so overall performance tends to decrease. On the other hand, each node in the P4000 (LeftHand) family of solutions has its own controller, so when a node is added, storage capacity as well as performance increases. Am I right in all that I say? Is the P4000 the only solution, or have I forgotten something?

    Read the article

  • VMware vSphere Hypervisor 5 with Intel S5000PSL in RAID 0: no boot from DVD?

    - by Richard
    I hope this is the correct Stack Exchange site, since I only use Stack Overflow for web development, but I need some help with my server configuration. I would like to install VMware vSphere Hypervisor 5 on my server here at home and run a few machines on it, such as Windows Server 2008 and Red Hat. I used to have either openSUSE or Windows Server 2008 installed, but I would like to get into VMware Hypervisor. My hardware configuration: an Intel S5000PSL board with BIOS version S5000.86B.10.60.0091, build date 10/09/2008 as read out of the BIOS; an E5420 @ 2.5GHz Intel Xeon CPU (Intel Virtualization Technology is enabled in the BIOS); a DVD DH20A4P DVD writer; and 8GB of ECC RAM. I have configured RAID 0 on my two WD 2TB SATA drives. I have burned Hypervisor 5 to an empty DVD and it is bootable; I tested it on my client PC. The main problem is that I cannot boot the DVD on my server. I have set the boot option to the DVD drive, and I have also booted from the BIOS straight into the DVD drive, and it does not work. I do not see any error messages; the only thing I see are the PXE error messages when it tries booting from the network and other devices, obviously without any result. Does anybody know why I cannot boot the DVD? What could cause the problem? I successfully installed Windows Server 2008 from an original DVD about a year ago, so the DVD drive can read and does work. The DVD drive is visible in the BIOS, I have checked all cables and none of them is loose in any way, and I even see the light flashing, but it does not want to boot from the DVD. I am looking forward to suggestions and things that I should check. Thank you very much.

    Read the article

  • Can a folder on a NAS be made available as a physical drive in VMWare?

    - by asbjornu
    We are currently in the process of moving from a single web server to two load-balanced web servers, and are facing some challenges we don't quite know how to fix. One of these is that the current single server hosts applications that write stuff to disk. The applications running on the server expect that when something is written to disk, it will in fact exist later, so it's important that this premise is fulfilled with the dual-server architecture as well. The dual-server setup is a couple of VMware instances with Windows Server 2008 R2 as the guest operating system. Out of the box, these instances do not share any kind of file system, so just moving the applications over would break them, since one instance would write something to the file system that doesn't exist on the other. Thus we need to share a file system between the two virtual servers. Our host has proposed creating a network share on a SAN and mapping this share individually on each virtual machine. This doesn't work too well due to NTFS permissions, etc., because the share needs to be accessed by several independent web applications that won't even be in the same application pool. The only solution that kind of works is to hard-code an "identity" for each web application into its web.config file, but this means passwords in clear text, which doesn't sit well with me. Since the servers are virtual, I'm thinking: wouldn't it be possible to make a NAS area available as a physical disk in the guest operating system somehow? Since VMware has full control of the virtual hardware, you'd think it would be able to "fake" a local hard drive in the virtual machine that in reality is a folder on a NAS, but so far I haven't found anything that states how and if this is possible. So I have to ask the wonderful Server Fault community: can a folder on a NAS be made available as a physical drive (typically D:) in both of the virtual machines?

    Read the article

  • Building a Distributed Commerce Infrastructure in the Cloud using Azure and Commerce Server

    - by Lewis Benge
    One of the biggest questions I routinely get asked is how scalable Commerce Server is. Of course, the textbook answer is that the product has been around for 10 years and powers some of the largest e-Commerce websites in the world, so it scales horizontally extremely well. One question, however, is what to do if you can't predict the growth in demand on your Commerce platform, or need the ability to scale up during busy seasons such as Christmas in a retail environment, but are hesitant about maintaining the infrastructure on a year-round basis. The obvious answer is to utilise one of the many elastic cloud infrastructure providers establishing themselves in the ever-growing market; the problem, however, is that Commerce Server is still a product with a legacy, tightly coupled dependency on Windows and IIS components. Commerce Server 2009, codename "R2", however, introduced the concept of an n-tier deployment of Microsoft Commerce Server, meaning you are no longer tied to the core objects API but instead have serializable Commerce Entity objects and business logic, allowing Commerce Server to be built into a WCF-based SOA architecture. Presentation layers no longer need to remain on the same physical machine as the application server, meaning you can now build the user experience in multiple technologies and host it in multiple places, leveraging the transport benefits that a WCF service may bring, such as message queuing, security, and multiple endpoints. All of this logic will still need to remain in your internal infrastructure, for two reasons: firstly, cloud-based computing infrastructure does not support PCI security requirements, and secondly, even though many of the legacy Commerce Server dependencies have been abstracted away in this version of the application, it is still not fully supported to deploy it exclusively into the cloud. If you do wish to benefit from the scalability of the cloud, however, you can still achieve a great Commerce Server and Azure setup by utilising the Azure AppFabric, in terms of the service bus and authentication services, together with Windows Azure to host any online presence you may require. The architecture would be something similar to this (architecture diagram omitted in this excerpt). This setup would allow you to construct your Commerce Services as part of your on-site infrastructure. These services would contain all of the channel's custom business logic and provide the overall interface back into the underlying Commerce Server components. It is recommended that services are constructed around the specific business domains of the application, which, based on your business model, would usually consist of separate services around Catalogue, Orders, Search, Profiles, and Marketing. The AppFabric service bus is then used to abstract and aggregate the services further, making them available to the cloud and subsequently secured by AppFabric's authentication services. These services are now available for consumption by any client, using any supported technology, not just .NET; you are now able to construct apps for iPhone, integrate with Java-based POS devices, and serve many other potential uses. This aggregation is useful and forms the basis of the further strategy around diversifying and enhancing the e-Commerce experience, but it also provides the foundation for the scalability we want to gain from utilising a cloud-based application platform.
    The Windows Azure application platform is Microsoft's solution for benefiting from the true economies of scale and elasticity of the cloud. Just before the launch of the Azure platform, Domino's Pizza actually managed to run their whole Super Bowl operation on the scalability of Windows Azure, simply switching back to their traditional operation the next day with no residual infrastructure costs. The platform can also natively subscribe to services and messages exposed within the AppFabric service bus, making it an ideal solution for building and deploying a presentation layer that needs to sit on scalable infrastructure, such as a high-demand, public-facing e-Commerce portal or the promotional element of a brand. Windows Azure has excellent support for ASP.NET, including its own caching providers, meaning expensive operations such as catalogue queries can persist in memory on the application server, reducing the demand on internal infrastructure and prioritising it for more business-critical operations such as receiving orders and processing payments. Windows Azure supports other languages too, meaning that with this approach you can technically build a Commerce Server presentation layer in Java, PHP, or Ruby, or equally in ASP.NET or Silverlight, without having to change any of the underlying business or Commerce Server implementation. This SOA-style architecture is one of the primary differentiators for Commerce Server as a product in the e-Commerce market, and now, with the introduction of a WCF capability in Commerce Server 2009/2009 R2, the opportunities for extending both the user experience and integration with third parties are drastically increased, all with no effect on the underlying channel logic. So if you are looking at deployment options for your e-Commerce application to help support demand in a cost-effective way, I would highly recommend you consider looking at Windows Azure, and if you have any questions in particular about this style of deployment, please feel free to get in touch!

    Read the article

  • How can I get Virtual Server 2005 R2 running on Windows Server 2008 R2?

    - by Bret Fisher
    For various reasons (old VT-less hardware, and .vhd support) we need to still run Virtual Server 2005 R2. It's just for lab/demo work but we'd like to run the host on the newest Windows OS possible. It's documented and at least partially supported to run the old Virtual Server 2005 R2 SP1 on Windows Server 2008 (non-R2). I've done that before. I'm wondering if anyone has gotten the scenario in the title above to work. This post says it's possible but has anyone here actually done it before I go through that process: http://blogs.infosupport.com/blogs/ericd/archive/2009/08/31/running-virtual-server-2005-r2-sp1-on-windows-server-2008-r2.aspx

    Read the article

  • Bridging Virtual Networking into Real LAN on a OpenNebula Cluster

    - by user101012
    I'm running OpenNebula with 1 cluster controller and 3 nodes. I registered the nodes at the front-end controller, and I can start an Ubuntu virtual machine on one of the nodes. However, from my network I cannot ping the virtual machine, and I am not quite sure if I have set up the virtual machine correctly. The nodes all have a br0 interface which is bridged with eth0; the IP address is in the 192.168.1.x range. The template file I used for the vmnet is:

      NAME = "VM LAN"
      TYPE = RANGED
      BRIDGE = br0                      # Replace br0 with the bridge interface from the cluster nodes
      NETWORK_ADDRESS = 192.168.1.128   # Replace with corresponding IP address
      NETWORK_SIZE = 126
      NETMASK = 255.255.255.0
      GATEWAY = 192.168.1.1
      NS = 192.168.1.1

    However, I cannot reach any of the virtual machines, even though Sunstone says that the virtual machine is running and onevm list also states that the VM is running. It might be helpful to know that we are using KVM as the hypervisor, and I am not quite sure whether the virbr0 interface, which was automatically created when installing KVM, might be a problem.

    Read the article

  • KVM slow guest i/o

    - by Akarot
    Host: Debian 6.0 (squeeze) with qemu-kvm and libvirt from squeeze-backports:

      ii  qemu-kvm     1.0+dfsg-8~bpo60+1
      ii  libvirt-bin  0.9.8-2~bpo60+2

    It has 3TB SATA drives with software RAID and LVM, and a sequential write speed of ~140MB/s, measured with:

      dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync

    The elevator is set to cfq. Guest: Debian 6.0 (squeeze), using LVM as storage; the drivers are virtio with cache='none', and the elevator is set to noop. Sequential write speed in the guest is considerably slower, at only 25-50MB/s. I'm kind of running out of ideas for further tweaks, but I'm sure the I/O speed should be much faster, because many people report almost native performance with LVM.

    Read the article

  • What’s New from the Oracle Marketing Cloud at Oracle OpenWorld 2014

    - by Kathryn Perry
    A Guest Post by Laura Vogel, Director, Oracle Marketing Cloud Events (pictured left). Marketing—CX Central is your hub for all things Marketing related at OpenWorld in San Francisco, September 28-October 2, 2014. Learn how to personalize the modern marketing journey to improve customer loyalty. We're hosting more than 60 breakout sessions, half of which will highlight customer success stories from marquee brands including Bizo, Comcast, Dell, Epson, John Deere, Lane Bryant, ReadyTalk and Shutterfly.

    Moscone West, Levels 2 and 3
    To learn more about how modern marketing works, visit Moscone West, levels 2 and 3, for exciting demos of each of the Oracle Marketing Cloud solutions (BlueKai, Compendium, Eloqua, Push I/O, and Responsys). You also can check out our stations for Vertical Marketing Best Practices, the Markie Awards, and more!

    CX Spotlight Sessions
    "Accelerating Big Profits in Big Data," Jeff Tanner, Baylor University
    "Using Content Marketing to Impact Every Stage of the Buyer's Journey," Jennifer Agustin, Bizo
    "Expanding Your Marketing with Proven Testing and Optimization," Brian Border, Shutterfly, and Matthew Balthazor, Epson
    "Modern Marketing: The New Digital Dialogue," Cory Treffiletti, Oracle

    A Special Marquee Session
    Dell's Hayden Mugford will speak on "The Digital Ecosystem: Driving Experience Through Contact Engagement." She will highlight how the organization built a digital ecosystem that supports a behaviorally driven, multivehicle nurturing campaign. The Dell 1:1 Global Marketing team worked with multiple partners to innovate integrations with Oracle Eloqua, Oracle Real-Time Decisions for real-time decision logic, and a content management system (CMS) that enables 100 percent customized e-mails. The program doubled average order values for nurtured contacts versus non-nurtured, and tripled open and click-through rates versus push e-mail.

    It Wouldn't Be an Oracle Marketing Cloud Event Without a Party!
    We're hosting CX Central Fest: a unique customer experience specifically designed for attendees of CX Central. It will include a chance to rock out at a private concert featuring the Los Angeles indie electronic pop group Capital Cities! Join us Tuesday, September 30, from 7-9 p.m.

    Other Oracle Marketing Cloud Session Highlights
    Thought leadership by role; exploring the benefits of moving to the Cloud; product line roadmaps and innovations in Marketing; technical deep dives for product lines within Marketing; best practices and impactful business measurements; and solutions that are integrated across CX.

    Target Audience
    Session content is geared toward professionals in Marketing, Marketing Operations, Marketing Demand Generation, and Social: Chief Marketing Officers, Vice Presidents, Directors, and Managers.

    Outcomes
    Customers attending Marketing—CX Central @ OpenWorld will be able to: gain insight into delivering consistent cross-channel marketing; discover how to provide the right information to the right customer at the right time and with the right channel; get answers to burning questions and advice on business challenges; hear from other Oracle customers about recommended best practices to help their organization move forward; and network and share ideas to help create a strategy for connecting with customers in better ways.

    Resources At a Glance: Register Now | Track Site—View Marketing Sessions | Focus On Session Doc | Downloadable Justification Email

    OpenWorld is a fabulous way for you to see all that Oracle Marketing Cloud has to offer. Register today.

    Read the article

  • Is Microsoft's Cloud Bet Placed on the Ground?

    - by andrewbrust
    Today at the University of Washington, Steve Ballmer gave a speech on Microsoft's cloud strategy. Significantly, Azure was only briefly mentioned and was not shown. Instead, Ballmer spoke about what he called the five "dimensions" of the cloud, and used that as the basis for an almost philosophical discussion. Ballmer opined on how the cloud should be distinguished from the Internet, as well as on what the cloud will and should enable. Ballmer worked hard to portray the cloud not as a challenger to Windows and PCs (as Google would certainly suggest it is) but really as just the latest peripheral that adds value to PCs and devices. At one point during his speech, Ballmer said "We start with Windows at Microsoft. It's the most popular smart device on the planet. And our design center for the future of Windows is to make it one of those smarter devices that the cloud really wants." I'm not sure I agree with Ballmer's ambition here, but I must admit he's taken the "software + services" concept and expanded on it in more consumer-friendly fashion. There were demos too. For example, Blaise Aguera y Arcas reprised his Bing Maps demo from the TED conference held last month. And Simon Atwell showed how Microsoft has teamed with Sky TV in the UK to turn Xbox into something that looks uncannily like Windows Media Center. Specifically, an Xbox console app called Sky Player provides full access to Sky's on-demand programming, but also live TV access to an array of networks carried on its home TV service, complete with an on-screen programming guide. Windows Phone 7 Series was shown quickly, and Ballmer told us that while Windows Mobile/Phone 6.5 and earlier were designed for voice and legacy functionality, Windows Phone 7 Series is designed for the cloud. Over and over during Ballmer's talk (and those of his guest demo presenters), the message was clear: Microsoft believes that client ("smart") devices, and not mere HTML terminals, are the technologies that best deliver on the promise of the cloud. The message was that PCs running Windows, game consoles, and smart phones whose native interfaces are Internet-connected offer the most effective way to utilize cloud capabilities. Even the Bing Maps demo conveyed this message, because the advanced technology shown in the demo uses Silverlight (and thus the PC's computing power), and not AJAX (which relies only upon the browser's native scripting and rendering capabilities), to produce the impressive interface shown to the audience. Microsoft's new slogan with respect to the cloud is "we're all in." Just as a Texas Hold 'em player bets his entire stash of chips when he goes all in, so too is Microsoft "betting the company" on the cloud. But it would seem that Microsoft's bet isn't on the cloud in a pure sense, and is instead on the power of the cloud to fuel new growth in PCs and other client devices, Microsoft's traditional comfort zone. Is that a bet or a hedge? If the latter, is Microsoft truly all in? I don't really know. I think many people would say this is a sucker's bet. But others would say it's suckers who bet against Microsoft. No matter what, the burden is on Microsoft to prove this contrarian view of the cloud is a sensible one. To do that, they'll need to deliver on cloud-connected device innovation. And to do that, the whole company will need to feel that victory is crucial. Time will tell. And I expect to present progress reports in future posts.

    Read the article

  • Cannot run a VM with more than three network interfaces with KVM

    - by Bostonvaulter
    I'm running KVM on top of Ubuntu 10.10 Server. I can create VMs (virtual machines) and network interfaces fine, but I cannot seem to add more than three network interfaces. As soon as a VM has four network interfaces, it gets stuck on startup at the SeaBIOS page with this message:

      Starting SeaBIOS (version pre-0.6.1-20100702_143500-palmer)

    So far I've verified this with two VMs, an Ubuntu 10.10 desktop and a Vyatta router. The specific network hardware I assign to the VMs doesn't seem to matter. I'm trying to have one bridged interface and three private networks, using Vyatta to route between them. Does anyone know why I can't run a VM with more than three network interfaces? Edit: Additionally, the KVM thread responsible for the specific VM hangs using ~100% CPU (i.e. one core). Here's the command line of the process that is hanging:

      /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 512 -smp 1,sockets=1,cores=1,threads=1 -name vyatta -uuid 6dff7c94-6810-423e-5fea-fec10da0e9b7 -nodefaults -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/vyatta.monitor,server,nowait -mon chardev=monitor,mode=readline -rtc base=utc -boot c -drive file=/home/rams/virtual-machines/vyatta.img,if=none,id=drive-ide0-0-0,boot=on,format=raw -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -device rtl8139,vlan=0,id=net0,mac=00:54:00:be:cc:4b,bus=pci.0,addr=0x3 -net tap,fd=97,vlan=0,name=hostnet0 -device rtl8139,vlan=1,id=net1,mac=52:54:00:da:59:ed,bus=pci.0,addr=0x5 -net tap,fd=98,vlan=1,name=hostnet1 -device rtl8139,vlan=2,id=net2,mac=52:54:00:ce:22:b6,bus=pci.0,addr=0x6 -net tap,fd=99,vlan=2,name=hostnet2 -device rtl8139,vlan=3,id=net3,mac=52:54:00:1e:bc:46,bus=pci.0,addr=0x7 -net tap,fd=101,vlan=3,name=hostnet3 -chardev pty,id=serial0 -device isa-serial,chardev=serial0 -usb -vnc 127.0.0.1:0 -k en-us -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4

    Edit: I've also found an error in dmesg that might be related (it also shows up when running virtd in verbose mode):

      14:47:24.399: warning : qemudParsePCIDeviceStrs:1422 : Unexpected exit status '1', qemu probably failed

    I've also tried disabling AppArmor, but that doesn't seem to make a difference.

    Read the article

  • kvm memory changes via virsh not propagating to vm

    - by kevintmckay
    Hi, I just started using KVM on RHEL 6, and after creating a VM I tried to increase its memory. Why do the changes I made in the XML file not propagate to the VM, even after bouncing the VM and restarting libvirt?

      [root@kvm01 qemu]# virsh dominfo dev-kvm01
      Id:             2
      Name:           dev-kvm01
      UUID:           9b2bf581-2807-3116-b176-60e9c0559943
      OS Type:        hvm
      State:          running
      CPU(s):         2
      CPU time:       1975.3s
      Max memory:     7864320 kB
      Used memory:    7864320 kB
      Persistent:     yes
      Autostart:      disable
      Security model: selinux
      Security DOI:   0
      Security label: system_u:system_r:svirt_t:s0:c47,c760 (enforcing)

      [iknowmed@dev-kvm01 ~]$ free
                   total       used       free     shared    buffers     cached
      Mem:       3632284    3614508      17776          0       3980    3491676
      -/+ buffers/cache:     118852    3513432
      Swap:      5668856          0    5668856

    Read the article

  • KVM CLI install for CentOS 6.3 defaults to Minimal Install

    - by i.h4d35
    So now I've installed KVM (and its associated tools and packages: libvirt, VMM, etc.). In the GUI (i.e. using VMM), installation works as it's supposed to. However, when I try to create a VM using the command-line interface, the OS (I am working with CentOS 6.3) defaults to a minimal install instead of giving me options to choose from at installation time. I am trying to install using the following command:

      virt-install \
        --connect qemu:///system \
        --virt-type kvm \
        --name testVM2 \
        --ram 512 \
        --disk path=/var/lib/libvirt/images/testVM2.img,size=8 \
        --vnc \
        --cdrom /media/db18de8e-0853-49fb-80de-5c794d58a46f/CentOS-6.3-x86_64-bin-DVD1.iso \
        --network network=default

    Specifying the os-type or os-variant parameters doesn't make a difference. Is there something I'm missing, or some other parameter that I must specify? Thanks in advance.

    Read the article

  • Poor Write Performance in VM inside Proxmox PVE 2.0

    - by sorsenne
    I am running PVE 2.0 on decent hardware (2 SATA HDDs as RAID 1, 12GB RAM, i7 CPU), but the I/O performance is very poor inside the VM (Ubuntu 11.10 Server). The very same VM was copied to another server running plain Ubuntu Server with KVM and had better I/O performance. This is how the HDD is shown in the guest:

      ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
      ata1.00: ATA-8: ST3000DM001-9YN166, CC49, max UDMA/133
      ata1.00: 5860533168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
      ata1.00: configured for UDMA/133
      scsi 0:0:0:0: Direct-Access     ATA      ST3000DM001-9YN1 CC49 PQ: 0 ANSI: 5
      sd 0:0:0:0: [sda] 5860533168 512-byte logical blocks: (3.00 TB/2.72 TiB)
      sd 0:0:0:0: [sda] 4096-byte physical blocks
      sd 0:0:0:0: [sda] Write Protect is off
      sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
      sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA

    I tested with dd:

      $ dd bs=1M count=128 if=/dev/zero of=test conv=fdatasync
      128+0 records in
      128+0 records out
      134217728 bytes (134 MB) copied, 19.2222 s, 7.0 MB/s

    On the host, this same test averages 156 MB/s. PS: I am using virtio and see no errors in dmesg.

    Read the article

  • Performance of ClearCase servers on VMs?

    - by Garen
    Where I work, we need to upgrade our ClearCase servers, and it's been proposed that we move them into a new (yet-to-be-deployed) VMware system. In the past I've not noticed a significant problem with performance for most applications when running in VMs, but given that ClearCase "speed" (i.e. dynamic-view response times) is so latency-sensitive, I am concerned that this will not be a good idea. VMware has numerous white papers detailing performance-related issues based on network traffic patterns that reinforce my hypothesis, but nothing particularly concrete for this particular use case that I can see. What I can find are various forum posts online, but they are somewhat dated, e.g.: ClearCase clients are supported on VMWare, but not for performance issues. I would never put a production server on VM. It will work but will be slower. The more complex the slower it gets. accessing or building from a local snapshot view will be the fastest, building in a remote VM stored dynamic view using clearmake will be painful..... VMWare is best used for test environments (via http://www.cmcrossroads.com/forums?func=view&catid=31&id=44094&limit=10&start=10) and: VMware + ClearCase = works but SLUGGISH!!!!!! (windows)(not for production environment) My company tried to mandate that all new apps or app upgrades needed to be on/moved VMware instances. The VMware instance could not handle the demands of ClearCase. (come to find out that I was sharing a box with a database server) Will you know what else would be on that box besides ClearCase? Karl (via http://www.cmcrossroads.com/forums?func=view&id=44094&catid=31) and: ... are still finding we can't get the performance using dynamic views to below 2.5 times that of a physical machine. Interestingly, speaking to a few people with much VMWare experience and indeed from running builds, we are finding that typically, VMWare doesn't take that much longer for most applications and about 10-20% longer has been quoted. (via http://www.cmcrossroads.com/forums?func=view&catid=31&id=44094&limit=10&start=10) Which brings me to the more direct question: does anyone have any more recent experience with ClearCase servers on VMware (if not any specific, relevant performance advice)?

    Read the article

  • KVM network bridge and public static IP for both host and guests

    - by Javier Martinez
    I have a Debian server with 4 public static addresses, and a KVM guest (also Debian) installed and running. What I want is to give the guest one of the host's IPs, so that both machines have public IPs.

      IP 1: 188.165.A.B
      IP 2: 178.33.CCC.D
      IP 3: 178.33.CCC.E
      IP 4: 178.33.CCC.F

    What should I do so that both host and guest have connectivity? This is the network configuration:

      # ifconfig
      br0       Link encap:Ethernet  HWaddr e8:40:f2:0a:cc:28
                inet addr:188.165.A.B  Bcast:188.165.255.255  Mask:255.255.255.0
                inet6 addr: fe80::ea40:f2ff:fe0a:cc28/64 Scope:Link
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:3618 errors:0 dropped:4 overruns:0 frame:0
                TX packets:4853 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:599562 (585.5 KiB)  TX bytes:1693443 (1.6 MiB)

      eth0      Link encap:Ethernet  HWaddr e8:40:f2:0a:cc:28
                inet6 addr: fe80::ea40:f2ff:fe0a:cc28/64 Scope:Link
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:4274 errors:0 dropped:0 overruns:0 frame:0
                TX packets:4879 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:720045 (703.1 KiB)  TX bytes:1715641 (1.6 MiB)
                Interrupt:20 Memory:fe500000-fe520000

      eth0:0    Link encap:Ethernet  HWaddr e8:40:f2:0a:cc:28
                inet addr:178.33.CCC.D  Bcast:178.33.255.255  Mask:255.255.255.255
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                Interrupt:20 Memory:fe500000-fe520000

      eth0:1    Link encap:Ethernet  HWaddr e8:40:f2:0a:cc:28
                inet addr:178.33.CCC.E  Bcast:178.33.255.255  Mask:255.255.255.255
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                Interrupt:20 Memory:fe500000-fe520000

      eth0:2    Link encap:Ethernet  HWaddr e8:40:f2:0a:cc:28
                inet addr:178.33.CCC.F  Bcast:178.33.255.255  Mask:255.255.255.255
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                Interrupt:20 Memory:fe500000-fe520000

      lo        Link encap:Local Loopback
                inet addr:127.0.0.1  Mask:255.0.0.0
                inet6 addr: ::1/128 Scope:Host
                UP LOOPBACK RUNNING  MTU:16436  Metric:1
                RX packets:27932 errors:0 dropped:0 overruns:0 frame:0
                TX packets:27932 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:1820862 (1.7 MiB)  TX bytes:1820862 (1.7 MiB)

      vnet0     Link encap:Ethernet  HWaddr fe:54:00:87:40:ec
                inet6 addr: fe80::fc54:ff:fe87:40ec/64 Scope:Link
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:18 errors:0 dropped:0 overruns:0 frame:0
                TX packets:204 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:500
                RX bytes:1452 (1.4 KiB)  TX bytes:16958 (16.5 KiB)

      # route
      Kernel IP routing table
      Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
      default         aa.bb.cc.eu     0.0.0.0         UG    0      0        0 br0
      188.165.255.0   *               255.255.255.0   U     0      0        0 br0

      # brctl show
      bridge name     bridge id               STP enabled     interfaces
      br0             8000.e840f20acc28       no              eth0
                                                              vnet0

    There is no firewall enabled, and DNS is configured properly. What I want to achieve (the ASCII diagram in the original post is garbled; in summary): the host keeps 188.165.A.B on br0, which bridges eth0 and vnet0, along with the eth0:0 and eth0:1 aliases, while the guest hangs off vnet0 and takes over eth0:2's address, 178.33.CCC.F. Thank you.

    Read the article

  • My uncle is the family historian. We need to host about 5-15 TB of images and video. Any inexpensive

    - by Citizen
    Basically, we have high-quality scans of thousands of old family photos, plus tons of family video. We want to host them somewhere where we can still have total control over the content and restrict access. I'm a PHP programmer, so the security is not an issue. What is an issue is finding a host that can store 10 TB of data without us paying a ton of money. We really are not planning on a lot of traffic: maybe 1-10 visitors a day, family only. Kind of like an online library.

    Read the article

  • Oracle OpenWorld Live 2012 Videos

    - by Chris Kawalek
    The Oracle virtualization team is back from a very successful Oracle OpenWorld! Hopefully you were able to come to the show and talk with our virtualization experts at the demo booths or in our sessions. But if you didn't, you can get a summary of what we talked about from a number of short videos. In this post, we're going to highlight the Oracle OpenWorld Live videos, and in a future post we'll cover the videos we shot ourselves (once we get them all posted!). If you missed it, Oracle OpenWorld Live carried keynotes and interviews with all kinds of folks during the show. They also archived these segments so you can watch them at your leisure. I've gone through the videos and selected some that highlight virtualization:

    Edward Screven on mission-critical clouds
    Wim Coekaerts talks virtualization
    Rex Wang on Oracle Cloud
    Ronen Kofman on Oracle VM Templates
    Chris Kawalek on Oracle's desktop virtualization software
    Chris Kawalek discusses Oracle Sun Ray Clients

    If we missed you this year, we hope to see you at OpenWorld 2013! -Chris

    Read the article

  • Cloudify: bootstrap-localcloud: operation failed?

    - by quanta
    OS: Gentoo, CentOS. Version: 2.1.0. Following the quick start guide, I got the error below when running bootstrap-localcloud:

      cloudify@default> bootstrap-localcloud
      STARTING CLOUDIFY MANAGEMENT
      2012-05-30 14:55:50,396 WARNING [org.cloudifysource.shell.commands.AbstractGSCommand] - ;
      Caused by: org.cloudifysource.shell.commands.CLIException: Error while starting agent.
      Please make sure that another agent is not already running.
      Operation failed.

    What port is Cloudify using to check that the agent is running? PS: it works fine when running on Windows.

    UPDATE: Wed May 30 22:37:30 ICT 2012
    Reply to @tamirkorem and @Itai Frenkel: I'm pretty sure, because this is the first time I've run that command on the 2 servers. More clearly, here's the output:

      cloudify@default> teardown-localcloud
      Teardown will uninstall all of the deployed services. Do you want to continue [y/n]?
      2012-05-30 22:43:33,145 WARNING [org.cloudifysource.shell.commands.AbstractGSCommand] - Teardown failed.
      Failed to fetch the currently deployed applications list. For force teardown use the -force flag.
      Operation failed.

      cloudify@default> teardown-localcloud -force
      Teardown will uninstall all of the deployed services. Do you want to continue [y/n]?
      Failed to fetch the currently deployed applications list. Continuing teardown-localcloud.
      .2012-05-30 22:46:39,040 WARNING [org.cloudifysource.shell.commands.AbstractGSCommand] - Teardown aborted,
      an agent was not found on the local machine.
      Operation failed.

    and this one is the detailed result:

      cloudify@default> bootstrap-localcloud --verbose
      NIC Address=127.0.0.1
      Lookup Locators=127.0.0.1:4172
      Lookup Groups=localcloud
      Starting agent and management processes:
      gs-agent.sh gsa.global.lus 0 gsa.lus 0 gsa.gsc 0 gsa.global.gsm 0 gsa.gsm_lus 1 gsa.global.esm 0 gsa.esm 1 >/dev/null 2>&1
      STARTING CLOUDIFY MANAGEMENT
      2012-05-30 22:36:12,870 WARNING [org.cloudifysource.shell.commands.AbstractGSCommand] - ;
      Caused by: org.cloudifysource.shell.commands.CLIException: Error while starting agent.
      Please make sure that another agent is not already running.
      Command executed: /usr/local/src/gigaspaces-cloudify-2.1.0-ga/bin/gs-agent.sh gsa.global.lus 0 gsa.lus 0 gsa.gsc 0 gsa.global.gsm 0 gsa.gsm_lus 1 gsa.global.esm 0 gsa.esm 1 >/dev/null 2>&1

    Reply to @Eliran Malka: there is no such process listening on port 4172:

      # netstat --protocol=inet -nlp
      Active Internet connections (only servers)
      Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
      tcp        0      0 127.0.0.1:9050          0.0.0.0:*               LISTEN      2363/tor
      tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      2331/mysqld
      tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN      2293/cupsd

    Read the article
