Search Results

Search found 3590 results on 144 pages for 'ben guest'.


  • Stop Time Sync Between Host and Guest - VirtualBox

    - by KoopaTroopa
    So I've been googling and I've tried the following commands. I want to stop the virtual machine from syncing to the correct time and have it stick with whatever time it last recorded. The host operating system is Ubuntu 12.04 and the guest is Windows XP. I've turned time syncing off in XP so it won't do that when I connect to the Internet. However, it does appear to take the time from Ubuntu and set it as its own. VBoxManage setextradata XP11 “VBoxInternal/TM/WarpDrivePercentage” 200 vboxmanage setextradata XP11 "VBoxInternal/Devices/VMMDev/0/Config/GetHostTimeDisabled" "1" Neither command has worked; the time is always set to exactly match Ubuntu's. I've located the extradata entries in the VirtualBox XML file. It records both changes that the commands above are meant to make, but of course it still hasn't stopped updating the time. <ExtraData> <ExtraDataItem name="GUI/LastGuestSizeHint" value="1152,864"/> <ExtraDataItem name="GUI/LastNormalWindowPosition" value="74,52,1152,911"/> <ExtraDataItem name="VBoxInternal/Devices/VMMDev/0/Config/GetHostTimeDisabled" value="1"/> <ExtraDataItem name="&#x201C;VBoxInternal/TM/WarpDrivePercentage&#x201D;" value="200"/> </ExtraData>
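
    One detail worth noting in the ExtraData dump above is that the WarpDrivePercentage key was stored with curly "smart" quotes embedded in its name (the &#x201C;/&#x201D; entities), so that command most likely landed on the wrong key. A hedged first step, assuming the VM is named XP11 and is powered off, is to re-run both commands with plain ASCII quotes and then verify what was stored:

        VBoxManage setextradata XP11 "VBoxInternal/Devices/VMMDev/0/Config/GetHostTimeDisabled" 1
        VBoxManage setextradata XP11 "VBoxInternal/TM/WarpDrivePercentage" 200
        VBoxManage getextradata XP11 enumerate

    The enumerate output should then list both keys without any stray quote characters in their names.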

    Read the article

  • Need help to install persistent Ubuntu on USB drive

    - by Junior
    I am a new user of Ubuntu. I would like to install a persistent Ubuntu 11.04 to my USB stick, and it should be able to work as a guest OS running on Windows so that I can boot it on computers other than the one I used for the installation. I have used several creators such as UNetbootin; however, from my understanding they can only create a live Linux system on which I am unable to save my configurations and files. If possible, I would like to bypass the BIOS, that is to say, load it from a virtual machine without having to restart the computer. Thanks in advance!

    Read the article

  • Windows 8 with LiveID login authenticates as Guest to remote SQL Server

    - by Tim Long
    I have a network where several users are using Office Accounting 2009 in multi-user client/server mode. OA is built on SQL Server. One PC acts as the 'server' and has the SQL Server instance; the others have only the application installed and no SQL instance, and all of the apps connect remotely to the SQL instance on the 'server'. I'm using the term 'server' loosely here; it is just a normal workstation that happens to be designated as the server and runs the SQL instance. There is no NT domain, all user accounts are local accounts. The way that OA works in multi-user mode is that each user is required to have a local account with the same username and password on both the client and 'server' PCs. This has been working well; now along comes Windows 8. I use my 'Microsoft Account' aka LiveID to log into Windows 8. Office Accounting runs fine and attempts to connect to the database, but fails: 'you do not have permission to perform this operation'. In the SQL logs, I get this error: 2012-10-28 17:54:01.32 Logon Error: 18456, Severity: 14, State: 11. 2012-10-28 17:54:01.32 Logon Login failed for user 'SERVER\Guest'. Reason: Token-based server access validation failed with an infrastructure SERVER is the hostname of the server. So it seems to be authenticating as 'Guest'?? To verify this, I enabled the Guest account on the 'server' PC and then added Guest as an allowed user within Office Accounting (this simply creates the user in SQL and gives it an appropriate database role). Sure enough, my Windows 8 PC was then able to connect to the database when using Office Accounting. Clearly, having users authenticate as 'Guest' stinks from a security and auditing standpoint. So what I need are some ideas for how to work around this. I've tried switching the Windows 8 PC to a 'local account' and that works too, but requires giving up significant functionality on the Windows 8 PC. What I really need is a way to force the Windows 8 PC to use a specific set of credentials when connecting to the remote SQL instance. Office Accounting takes the logged-in username, which is my LiveID and doesn't correspond to any Windows user name. Anyone solved this issue?
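
    One generic approach for forcing a remote connection to use specific Windows credentials, offered here only as a hedged sketch rather than a confirmed fix for Office Accounting, is to launch the application through runas /netonly so that network connections present the matching local account while the interactive session stays on the LiveID. The executable path below is a placeholder, not the real install path:

        runas /netonly /user:SERVER\sameuser "C:\Path\To\OfficeAccounting.exe"

    Whether OA then passes those credentials through to the SQL connection would need testing; if it builds the login from the interactive username rather than from the process token, this will not help.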

    Read the article

  • Suspected network performance issue on VirtualBox Ubuntu guest on Win7 host

    - by Adam
    I set up Ubuntu 12.04 in VirtualBox on the Win7 machine I was allocated on my new project. I am running Java, Eclipse, Tomcat to develop a large data-intensive application and I noticed that this application runs at half the speed of my colleague's identical machine, where he runs it all under Windows. I think I have narrowed down the performance issue to the network, after comparing and equalising all the Java VM settings with my colleague. Is there a ping test I can do or some other network diagnostic test to flag up any problems? To give some background, the network performance is confusing. Running a network speed test to my colleague's machine with iperf shows speeds of 6 Mb/s from my Ubuntu guest, and 90 Mb/s from the Win7 host. Large downloads, e.g. the Java SDK, come down at about 1.2 MB/s on both the guest and the host. Pings are sub-1ms on the host, but 1.5ms on the guest. I also did a broadband speed test, and got 10Mb/s download speed on both, but while the host has an upload speed of 10Mb/s the guest only uploads at 3Mb/s. I've been trying to diagnose any MTU problems with ping -M do to identify any kind of packet fragmentation problem, but it's progressing very slowly because I don't have much experience in this area. From what I read on other people's networking issues with VB and Linux guests on Win7 hosts, I should be able to get the speed on the guest up to the same level as the host. I installed a fresh VM with Ubuntu again to see if I'd foobar'd it somehow, but I'm getting the same readings with iperf on the virgin installation. My setup is: Adapter 1: Intel PRO/1000 MT Desktop (NAT) Adapter 2: ditto (host-only adapter) eth0 Link encap:Ethernet HWaddr 08:00:27:0b:76:bf inet addr:10.0.2.15 Bcast:10.0.2.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:fe0b:76bf/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:86236 errors:0 dropped:0 overruns:0 frame:0 TX packets:49369 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:69163946 (69.1 MB) TX bytes:3530535 (3.5 MB) eth2 Link encap:Ethernet HWaddr 08:00:27:a3:26:b8 inet addr:192.168.56.101 Bcast:192.168.56.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:fea3:26b8/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:59 errors:0 dropped:0 overruns:0 frame:0 TX packets:57 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:9148 (9.1 KB) TX bytes:7648 (7.6 KB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:701 errors:0 dropped:0 overruns:0 frame:0 TX packets:701 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:66321 (66.3 KB) TX bytes:66321 (66.3 KB)
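
    Since the post already mentions iperf and ping -M do, a small, hedged test plan along those lines (the address 192.168.56.1 is assumed to be the host's host-only adapter; substitute whatever the host actually uses) would be:

        # on the Win7 host (or the colleague's machine): run the iperf server
        iperf -s
        # on the Ubuntu guest: measure sustained TCP throughput for 30 seconds
        iperf -c 192.168.56.1 -t 30 -i 5
        # MTU / fragmentation check from the guest: 1472 bytes + 28 bytes of headers = 1500
        ping -M do -s 1472 -c 5 192.168.56.1

    If the ping with fragmentation prohibited fails at 1472 but succeeds at smaller sizes, that points at an MTU mismatch; if it passes but iperf stays slow, the adapter emulation (e.g. trying the paravirtualized virtio-net type instead of Intel PRO/1000) is the more likely place to look.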

    Read the article

  • Guest (and occasional co-host) on Jesse Liberty's Yet Another Podcast

    - by Jon Galloway
    I was a recent guest on Jesse Liberty's Yet Another Podcast talking about the latest Visual Studio, ASP.NET and Azure releases. Download / Listen: Yet Another Podcast #75–Jon Galloway on ASP.NET/ MVC/ Azure Co-hosted shows: Jesse's been inviting me to co-host shows and I told him I'd show up when I was available. It's a nice change to be a drive-by co-host on a show (compared with the work that goes into organizing / editing / typing show notes for Herding Code shows). My main focus is on Herding Code, but it's nice to pop in and talk to Jesse's excellent guests when it works out. Some shows I've co-hosted over the past year: Yet Another Podcast #76–Glenn Block on Node.js & Technology in China Yet Another Podcast  #73 - Adam Kinney on developing for Windows 8 with HTML5 Yet Another Podcast #64 - John Papa & Javascript Yet Another Podcast #60 - Steve Sanderson and John Papa on Knockout.js Yet Another Podcast #54–Damian Edwards on ASP.NET Yet Another Podcast #53–Scott Hanselman on Blogging Yet Another Podcast #52–Peter Torr on Windows Phone Multitasking Yet Another Podcast #51–Shawn Wildermuth: //build, Xaml Programming & Beyond And some more on the way that haven't been released yet. Some of these I'm pretty quiet, on others I get wacky and hassle the guests because, hey, not my podcast so not my problem. Show notes from the ASP.NET / MVC / Azure show: What was just released Visual Studio 2012 Web Developer features ASP.NET 4.5 Web Forms Strongly Typed data controls Data access via command methods Similar Binding syntax to ASP.NET MVC Some context: Damian Edwards and WebFormsMVP Two questions from Jesse: Q: Are you making this harder or more complicated for Web Forms developers? Short answer: Nothing's removed, it's just a new option History of SqlDataSource, ObjectDataSource Q: If I'm using some MVC patterns, why not just move to MVC? Short answer: This works really well in hybrid applications, doesn't require a rewrite Allows sharing models, validation, other code between Web Forms and MVC ASP.NET MVC Adaptive Rendering (oh, also, this is in Web Forms 4.5 as well) Display Modes Mobile project template using jQuery Mobile OAuth login to allow Twitter, Google, Facebook, etc. login Jon (and friends') MVC 4 book on the way: Professional ASP.NET MVC 4 Windows 8 development Jesse and Jon announce they're working on a new book: Pro Windows 8 Development with XAML and C# Jon and Jesse agree that it's nice to be able to write Windows 8 applications using the same skills they picked up for Silverlight, WPF, and Windows Phone development. Compare / contrast ASP.NET MVC and Windows 8 development Q: Does ASP.NET and HTML5 development overlap? Jon thinks they overlap in the MVC world because you're writing HTML views without controls Jon describes how his web development career moved from a preoccupation with server code to a focus on user interaction, which occurs in the browser Jon mentions his NDC Oslo presentation on Learning To Love HTML as Beautiful Code Q: How do you apply C# / XAML or HTML5 skills to Windows 8 development? Q: If I'm a XAML programmer, what's the learning curve on getting up to speed on ASP.NET MVC? Jon describes the difference in application lifecycle and state management Jon says it's nice that web development is really interactive compared to application development Q: Can you learn MVC by reading a book? Or is it a lot bigger than that? What is Azure, and why would I use it? 
Jon describes the traditional Azure platform mode and how Azure Web Sites fits in Q: Why wouldn't Jesse host his blog on Azure Web Sites? Domain names on Azure Web Sites File hosting options Q: Is Azure just another host? How is it different from any of the other shared hosting options? A: Azure gives you the ability to scale up or down whenever you want A: Other services are available if or when you want them

    Read the article

  • Some guest networking and VMware Tools functionality broken with Sprint SmartView on the host

    - by Mads
    Using VMware Workstation 6.5.3 on Vista 64-bit. I started having problems with VMware networking about 6 months ago after upgrades to Sprint SmartView. I did not have problems previously, but I don't know if that is because I was lucky. The main symptoms of the problem when SmartView is installed are: I can no longer drag files from the host to copy them to the guest. When they are dragged, the disallowed cursor (the circle with a slash) shows in the guest. If I try to enable shared folders in the guest while it is running, I will not be able to see the shared files and will be informed that networking is not working. I can still ping guests from the host and I can still access network services via NAT most of the time when connected via my USB broadband adapter. When I configure shared folders so they are "always enabled" (with a mapped drive), I can access files via the mapped folders. I can also copy a file on the host and then paste it in the guest, as was suggested in some other threads concerning drag-and-drop problems that I found. The VMware Tools icon is showing in all cases and I don't see any obvious errors in the host's event viewer. If I uninstall SmartView, the problems disappear. If SmartView (current version is 2.28.0082) is reinstalled, I experience the same problems. I have tried uninstalling/reinstalling VMware and SmartView in various ways, but it appears that these problems are consistent when SmartView is installed (not just when it is running or connected, but when it is present on the system). I'm wondering if this is a combination of software (WS 6.5.3, Vista64, and SmartView) that works for other people, which would indicate a problem that is peculiar to my configuration.

    Read the article

  • Running Debian as guest operating system on a Hyper-V VM

    - by kce
    Hello. Layer-9 considerations are prompting a migration from Citrix XenServer to Hyper-V as our shop's virtualization platform of choice. This will require me to migrate our existing virtual machines from XenServer to Hyper-V. A handful of these VMs are running Debian. Unfortunately, Debian does not seem to be on the list of approved/supported guest operating systems. In fact it seems that running Debian as a guest operating system on Hyper-V is rather difficult, although apparently not impossible. I have two interrelated questions: Does anyone have any experience running a Debian guest on Hyper-V? Is it one of those things where it just will not work at all, or is it more along the lines of "it will probably work fine, but we won't support it"? Any experience here, positive or negative, would be helpful. How much of a bad idea is it to deviate from Hyper-V's list of supported guest operating systems? Again, is it basically asking for Bad Things (TM) to happen, or is it just another instance of "it will probably work fine, but we won't support it"? Or is it somewhere in the middle? Thank you.

    Read the article

  • Bad Mumble control channel performance in KVM guest

    - by aef
    I'm running a Mumble server (Murmur) on a Debian Wheezy Beta 4 KVM guest which runs on a Debian Wheezy Beta 4 KVM hypervisor. The guest machines are attached to a bridge device on the hypervisor system through Virtio network interfaces. The hypervisor is attached to a 100Mbit/s uplink and does IP routing between the guest machines and the rest of the Internet. In this setup we're experiencing a clearly recognizable lag between double-clicking a channel in the client and the channel joining action happening. This happens with a lot of different clients between 1.2.3 and 1.2.4, on Linux and Windows systems. Voice quality and latency seem to be completely unaffected by this. Most of the time the client's information dialog states a 16ms latency for both the voice and control channel. The deviation for the control channels is usually a lot higher than that of the voice channels. In some situations the control channel is displayed with a 100ms ping and about 1000 deviation. It seems that TCP performance is the problem here. We had no problems on an earlier setup which was in principle quite like the new one. We used a Debian Lenny based Xen hypervisor with a soft-virtualised guest machine instead, and an earlier version of the Mumble 1.2.3 series. The current murmurd --version says: 1.2.3-349-g315b5f5-2.1
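
    For reference, the guest-to-bridge attachment described above usually corresponds to a libvirt domain definition along these lines (a sketch only; the bridge name br0 is an assumption, and the snippet would be edited into the guest definition with virsh edit):

        <interface type='bridge'>
          <source bridge='br0'/>
          <model type='virtio'/>
        </interface>

    Since only the TCP control channel seems affected, temporarily switching the model to e1000 on one guest is a low-risk way to check whether the virtio-net path is involved.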

    Read the article

  • Windows VirtualBox failed to attach USB device to Linux Guest

    - by joltmode
    I have a Windows 7 64-bit host system, and I am using VirtualBox 4.1.18 (r78361). I have an Arch Linux guest OS. I have installed the VirtualBox Extension Pack (to enable USB2 support) and added my USB device filter to the VM. I have also installed the Guest Additions provided by Arch: virtualbox-archlinux-additions (but I have no idea whether it's actually needed for my environment). I can see my USB device from the VirtualBox Devices menu. Whenever I try to access it, I end up with: Failed to attach the USB device Kingston DT 100 G2 [0100] to the virtual machine Archlinux. USB device 'Kingston DT 100 G2' with UUID {a836ec33-0f41-4ca7-a31d-09cceaf5d173} is busy with a previous request. Please try again later. Details ? Result Code:    E_INVALIDARG (0x80070057) Component:      HostUSBDevice Interface:      IHostUSBDevice {173b4b44-d268-4334-a00d-b6521c9a740a} Callee:         IConsole {1968b7d3-e3bf-4ceb-99e0-cb7c913317bb} From what I have googled, most guides show how to solve this the other way around - Linux host to Windows guest. How do I resolve this? Update: I have tried to eject (virtually, not physically) the device from my Windows host system and then tried to access the device from the guest. Same error.
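
    One hedged thing to try is performing the same attach from the host command line instead of the Devices menu, using the UUID reported in the error message (the VM name Archlinux is taken from that message as well):

        VBoxManage list usbhost
        VBoxManage controlvm Archlinux usbattach a836ec33-0f41-4ca7-a31d-09cceaf5d173

    If the "busy with a previous request" error persists there too, that at least confirms the problem is in the host-side USB capture rather than in the GUI.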

    Read the article

  • How to set up ping between an XP guest and a Win8 host using a Hyper-V virtual switch

    - by rism
    Hyper-V client is installed on a Win8 Pro 64-bit box and a VM running XP has been created within it with an internal virtual switch. The VM can be booted and accessed, and there is a default virtual NIC within it with a dynamic IP of 169.254.x.x, which I have changed to a static IP of 192.168.0.12/255.255.255.0, confirmed via ipconfig on the XP guest. The host has an IP of 192.168.0.7/255.255.255.0. Both host and guest have their firewalls disabled for simplicity. I can't ping the guest from the host nor the host from the guest; TTL timeout. With regard to Hyper-V and VMs I don't know what to do next. Both are in the same workgroup (as per the name), but since they can't ping each other I guess that means nothing. My objective is to share a folder on the VM so I can install a 32-bit accountancy app that won't run on Win8/7, so if there is a simpler way then I'm all ears, but typically peer-to-peer is very simple.
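
    One detail worth checking, offered as a hedged sketch rather than a confirmed fix: an internal virtual switch only reaches the host through the vEthernet adapter that Hyper-V creates for that switch, so that adapter, not the physical NIC, needs an address in the 192.168.0.x subnet. The switch name "InternalSwitch" and the address 192.168.0.20 below are placeholders:

        # in an elevated PowerShell prompt on the Win8 host
        Get-NetAdapter                     # look for the "vEthernet (InternalSwitch)" adapter
        New-NetIPAddress -InterfaceAlias "vEthernet (InternalSwitch)" -IPAddress 192.168.0.20 -PrefixLength 24
        ping 192.168.0.12                  # the XP guest's static address from above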

    Read the article

  • SQL SERVER – Beginning of SQL Server Architecture – Terminology – Guest Post

    - by pinaldave
    SQL Server Architecture is a very deep subject. Covering it in a single post is an almost impossible task. However, this subject is a very popular topic among beginners and advanced users. I have requested my friend Anil Kumar, who is an expert in the SQL domain, to help me write a simple post about Beginning SQL Server Architecture. As stated earlier, this is a very deep subject, and in this first article of the series he has covered basic terminology. In future articles he will explore the subject further. Anil Kumar Yadav is a Trainer, SQL Domain, at Koenig Solutions. Koenig is a premier IT training firm that provides several IT certifications, such as Oracle 11g, Server+, RHCA, SQL Server Training, Prince2 Foundation etc. In this article we will discuss MS SQL Server architecture. The major components of SQL Server are: Relational Engine Storage Engine SQL OS Now we will discuss and understand each one of them. 1) Relational Engine: Also called the query processor, the Relational Engine includes the components of SQL Server that determine what your query exactly needs to do and the best way to do it. It manages the execution of queries as it requests data from the storage engine and processes the results returned. Different tasks of the Relational Engine: Query Processing Memory Management Thread and Task Management Buffer Management Distributed Query Processing 2) Storage Engine: The Storage Engine is responsible for storage and retrieval of the data onto the storage system (Disk, SAN etc.). To understand more, let's focus on the following diagram. When we talk about any database in SQL Server, there are 2 types of files that are created at the disk level – Data file and Log file. The data file physically stores the data in data pages. Log files, also known as write-ahead logs, are used for storing transactions performed on the database. Let's understand the data file and log file in more detail: Data File: The Data File stores data in the form of Data Pages (8KB) and these data pages are logically organized in extents. Extents: Extents are logical units in the database. They are a combination of 8 data pages, i.e. 64 KB forms an extent. Extents can be of two types, Mixed and Uniform. Mixed extents hold different types of pages like index, System, Object data etc. On the other hand, Uniform extents are dedicated to only one type. Pages: To know what types of data pages can be stored in SQL Server, below mentioned are some of them: Data Page: It holds the data entered by the user but not the data which is of type text, ntext, nvarchar(max), varchar(max), varbinary(max), image and xml data. Index: It stores the index entries. Text/Image: It stores LOB (Large Object data) like text, ntext, varchar(max), nvarchar(max), varbinary(max), image and xml data. GAM & SGAM (Global Allocation Map & Shared Global Allocation Map): They are used for saving information related to the allocation of extents. PFS (Page Free Space): Information related to page allocation and unused space available on pages. IAM (Index Allocation Map): Information pertaining to extents that are used by a table or index per allocation unit. BCM (Bulk Changed Map): Keeps information about the extents changed in a Bulk Operation. DCM (Differential Change Map): This is the information about extents that have been modified since the last BACKUP DATABASE statement, per allocation unit. Log File: It is also known as the write-ahead log. It stores modifications to the database (DML and DDL).
Sufficient information is logged to be able to: Roll back transactions if requested Recover the database in case of failure Write Ahead Logging is used to create log entries Transaction logs are written in chronological order in a circular way Truncation policy for logs is based on the recovery model SQL OS: This lies between the host machine (Windows OS) and SQL Server. All the activities performed on the database engine are taken care of by SQL OS. It is a highly configurable operating system with a powerful API (application programming interface), enabling automatic locality and advanced parallelism. SQL OS provides various operating system services, such as memory management (which deals with the buffer pool and log buffer) and deadlock detection using the blocking and locking structure. Other services include exception handling and hosting for external components like the Common Language Runtime (CLR), etc. I guess this brief article gives you an idea about the terminology related to SQL Server architecture. In future articles we will explore it further. Guest Author: The author of the article, Anil Kumar Yadav, is a Trainer, SQL Domain, at Koenig Solutions. Koenig is a premier IT training firm that provides several IT certifications, such as Oracle 11g, Server+, RHCA, SQL Server Training, Prince2 Foundation etc. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, SQL Training, T SQL, Technology
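
    As a quick, hedged illustration of the data file / log file split described above (the database name SampleDB is a placeholder), the files behind a database can be listed with:

        -- size is reported in 8 KB pages, so convert it to MB
        USE SampleDB;
        SELECT name, type_desc, physical_name, size * 8 / 1024 AS size_mb
        FROM sys.database_files;

    type_desc comes back as ROWS for the data file(s) and LOG for the transaction log, matching the two file types discussed in the article.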

    Read the article

  • SQL SERVER – Guest Post by Sandip Pani – SQL Server Statistics Name and Index Creation

    - by pinaldave
    Sometimes something very small, or a common error which we observe in daily life, teaches us new things. SQL Server Expert Sandip Pani (winner of Joes 2 Pros Contests) has come across a similar experience. Sandip has written a guest post on an error he faced in his daily work. Sandip is working for QSI Healthcare as an Associate Technical Specialist and has more than 5 years of total experience. He blogs at SQLcommitted.com and contributes to various forums. His social media handles are LinkedIn, Facebook and Twitter. I once faced the following error when I was working on a performance tuning project and attempted to create an index. Msg 1913, Level 16, State 1, Line 1 The operation failed because an index or statistics with name 'Ix_Table1_1' already exists on table 'Table1'. The immediate reaction to the error was that I might have created that index earlier, and when I researched it further I found that the index was indeed created two times. This totally makes sense. This can happen for many reasons, for example if the user carelessly executes the same code two times, or attempts to create an index without checking whether an index already exists on the object. However, when I paid attention to the details of the error, I realized that the error message also talks about statistics along with the index. I got curious whether the same would happen if I attempted to create an index with the same name as statistics that were already created. A few other questions were also prompted in my mind. I decided to do a small demonstration of the subject and built the following demonstration script. The goal of my experiment is to find out the relation between statistics and the index. Statistics are one of the important input parameters for the optimizer during the query optimization process. If the query is nontrivial, only then does the optimizer use statistics to perform a cost-based optimization to select a plan. For accuracy and further learning I suggest reading MSDN. Now let's find out the relationship between index and statistics. We will do the experiment in two parts. i) Creating Index ii) Creating Statistics We will be using the following T-SQL script for our example. IF (OBJECT_ID('Table1') IS NOT NULL) DROP TABLE Table1 GO CREATE TABLE Table1 (Col1 INT NOT NULL, Col2 VARCHAR(20) NOT NULL) GO We will be using the following two queries to check if there are any indexes or statistics on our sample table Table1. -- Details of Index SELECT OBJECT_NAME(OBJECT_ID) AS TableName, Name AS IndexName, type_desc FROM sys.indexes WHERE OBJECT_NAME(OBJECT_ID) = 'table1' GO -- Details of Statistics SELECT OBJECT_NAME(OBJECT_ID) TableName, Name AS StatisticsName FROM sys.stats WHERE OBJECT_NAME(OBJECT_ID) = 'table1' GO When I ran the above two scripts on the table right after it was created, they did not give us any result, which was expected. Now let us begin our test. 1) Create an index on the table Create the following index on the table. CREATE NONCLUSTERED INDEX Ix_Table1_1 ON Table1(Col1) GO Now let us use the above two scripts and see their results. We can see that when we created the index, it created statistics with the same name at the same time. Before continuing to the next part of the demo, drop the table using the following script and re-create the table using the script provided at the beginning of the article. DROP TABLE table1 GO 2) Create a statistic on the table Create the following statistics on the table. CREATE STATISTICS Ix_table1_1 ON Table1 (Col1) GO Now let us use the above two scripts and see their results.
We can see that when we created the statistics, no index was created. The behavior of this experiment is different from that of the earlier experiment. Clean up the table setup using the following script: DROP TABLE table1 GO The above two experiments teach us the very valuable lesson that when we create an index, SQL Server generates the index and statistics (with the same name as the index name) together. Because of this, if we already have statistics with the same name but no index, it is quite possible that we will face this error when creating the index, even though there is no index with that name. A Quick Check: To validate that creating statistics first and then an index with the same name throws an error, let us run the following script in SSMS. Make sure to drop the table and clean up our sample table at the end of the experiment. -- Create sample table CREATE TABLE TestTable (Col1 INT NOT NULL, Col2 VARCHAR(20) NOT NULL) GO -- Create Statistics CREATE STATISTICS IX_TestTable_1 ON TestTable (Col1) GO -- Create Index CREATE NONCLUSTERED INDEX IX_TestTable_1 ON TestTable(Col1) GO -- Check error /*Msg 1913, Level 16, State 1, Line 2 The operation failed because an index or statistics with name 'IX_TestTable_1' already exists on table 'TestTable'. */ -- Clean up DROP TABLE TestTable GO Creating the index throws the error shown in the comment above, as statistics with the same name have already been created. In simple words, when we create an index its name should be different from that of any existing index or statistics. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Index, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Statistics

    Read the article

  • 7-Eleven Improves the Digital Guest Experience With 10-Minute Application Provisioning

    - by MichaelM-Oracle
    By Vishal Mehra - Director, Cloud Computing, Oracle Consulting Making the Cloud Journey Matter There’s much more to cloud computing than cutting costs and closing data centers. In fact, cloud computing is fast becoming the engine for innovation and productivity in the digital age. Oracle Consulting Services contributes to our customers’ cloud journey by accelerating application provisioning and rapidly deploying enterprise solutions. By blending flexibility with standardization, our Middleware as a Service (MWaaS) offering is ensuring the success of many cloud initiatives. 10-Minute Application Provisioning Times at 7-Eleven As a case in point, 7-Eleven recently highlighted the scope, scale, and results of a cloud-powered environment. The world’s largest convenience store chain is rolling out a Digital Guest Experience (DGE) program across 8,500 stores in the U.S. and Canada. Everyday, 7-Eleven connects with tens of millions of customers through point-of-sale terminals, web sites, and mobile apps. Promoting customer loyalty, targeting promotions, downloading digital coupons, and accepting digital payments are all part of the roadmap for a comprehensive and rewarding customer experience. And what about the time required for deploying successive versions of this mission-critical solution? Ron Clanton, 7-Eleven's DGE Program Manager, Information Technology reported at Oracle Open World, " We are now able to provision new environments in less than 10 minutes. This includes the complete SOA Suite on Exalogic, and Enterprise Manager managing both the SOA Suite, Exalogic, and our Exadata databases ." OCS understands the complex nature of innovative solutions and has processes and expertise to help clients like 7-Eleven rapidly develop technology that enhances the customer experience with little more than the click of a button. OCS understood that the 7-Eleven roadmap required careful planning, agile development, and a cloud-capable environment to move fast and perform at enterprise scale. Business Agility Today’s business-savvy technology leaders face competing priorities as they confront the digital disruptions of the mobile revolution and next-generation enterprise applications. To support an innovation agenda, IT is required to balance competing priorities between development and operations groups. Standardization and consolidation of computing resources are the keys to success. With our operational and technical expertise promoting business agility, Oracle Consulting's deep Middleware as a Service experience can make a significant difference to our clients by empowering enterprise IT organizations with the computing environment they seek to keep up with the pace of change that digitally driven business units expect. Depending on the needs of the organization, this environment runs within a private, public, or hybrid cloud infrastructure. Through on-demand access to a shared pool of configurable computing resources, IT delivers the standard tools and methods for developing, integrating, deploying, and scaling next-generation applications. Gold profiles of predefined configurations eliminate the version mismatches among databases, application servers, and SOA suite components, delivered both by Oracle and other enterprise ISVs. These computing resources are well defined in business terms, enabling users to select what they need from a service catalog. 
Striking the Balance between Development and Operations As a result, development groups have the flexibility to choose among a menu of available services with descriptions of standard business functions, service level guarantees, and costs. Faced with the consumerization of enterprise IT, they can deliver the innovative customer experiences that seamlessly integrate with underlying enterprise applications and services. This cloud-powered development and testing environment accelerates release cycles to ensure agile development and rapid deployments. At the same time, the operations group is relying on certified stacks and frameworks, tuned to predefined environments and patterns. Operators can maintain a high level of security, and continue best practices for applications/systems monitoring and management. Moreover, faced with the challenges of delivering on service level agreements (SLAs) with the business units, operators can ensure performance, scalability, and reliability of the infrastructure. The elasticity of a cloud-computing environment – the ability to rapidly add virtual machines and storage in response to computing demands -- makes a difference for hardware utilization and efficiency. Contending with Continuous Change What does it take to succeed on the promise of the cloud? As the engine for innovation and productivity in the digital age, IT must face not only the technical transformations but also the organizational challenges of the cloud. Standardizing key technologies, resources, and services through cloud computing is only one part of the cloud journey. Managing relationships among multiple department and projects over time – developing the management, governance, and monitoring capabilities within IT – is an often unmentioned but all too important second part. In fact, IT must have the organizational agility to contend with continuous change. This is where a skilled consulting services partner can play a pivotal role as a trusted advisor in the successful adoption of cloud solutions. With a lifecycle services approach to delivering innovative business solutions, Oracle Consulting Services has expertise and a portfolio of services to help enterprise customers succeed on their cloud journeys as well as other converging mega trends .

    Read the article

  • SQL SERVER – Extending SQL Azure with Azure worker role – Guest Post by Paras Doshi

    - by pinaldave
    This is a guest post by Paras Doshi. Paras Doshi is a research intern at SolidQ.com and a Microsoft Student Partner. He is currently working in the domain of SQL Azure. SQL Azure is nothing but SQL Server in the cloud. SQL Azure provides benefits such as on-demand rapid provisioning, cost-effective scalability, high availability and reduced management overhead. To see an introduction on SQL Azure, check out the post by Pinal here. In this article, we are going to discuss how to extend SQL Azure with the Azure worker role. In other words, we will attempt to write custom code and host it in the Azure worker role; the aim is to add some features that are not available with SQL Azure currently or features that need to be customized for flexibility. This way we extend the SQL Azure capability by building some solutions that run on Azure as worker roles. To understand the Azure worker role, think of it as a Windows service in the cloud. An Azure worker role can perform background processes, and to handle processes such as synchronization and backup, it becomes our ideal tool. First, we will focus on writing worker role code that synchronizes SQL Azure databases. Before we do so, let's see some scenarios in which synchronization between SQL Azure databases is beneficial: Scaling out access over multiple databases enables us to handle workload efficiently. As of now, a SQL Azure database can be hosted in any one of six datacenters; by synchronizing databases located in different data centers, one can extend the data by enabling access to geographically distributed data. Let us also see some scenarios in which SQL Server to SQL Azure database synchronization is beneficial: To back up a SQL Azure database on local infrastructure. Rather than investing in local infrastructure for increased workloads, such workloads could be handled by the cloud. The ability to extend data to different datacenters located across the world to enable efficient data access from remote locations. Now, let us develop a cloud-based app that synchronizes SQL Azure databases. For an introduction to developing cloud-based apps, click here. In this article, I aim to provide a bird's-eye view of what code that synchronizes SQL Azure databases looks like and then list resources that can help you develop the solution from scratch. If you newly add a worker role to the cloud-based project, this is how the code will look. (Note: I have added comments to the skeleton code to point out the modifications that will be required in the code to carry out the SQL Azure synchronization. Note the placement of the Setup() and Sync() functions.) Click here (http://parasdoshi1989.files.wordpress.com/2011/06/code-snippet-1-for-extending-sql-azure-with-azure-worker-role1.pdf ) Enabling SQL Azure database synchronization through the Sync Framework is a two-step process. In the first step, the database is provisioned and the Sync Framework creates tracking tables, stored procedures, triggers, and tables to store metadata to enable synchronization. This is a one-time step. The code for the same is put in the Setup() function, which is called once when the worker role starts. The second step is continuous (or on-demand) synchronization of SQL Azure databases by propagating changes between databases. This is done on a continuous basis by calling the Sync() function in the while loop. The code logic to synchronize changes between SQL Azure databases should be put in the Sync() function. Discussing the coding part step by step is out of the scope of this article.
Therefore, let me suggest you a resource, which is given here. Also, note that before you start developing the code, you will need to install SYNC framework 2.1 SDK (download here). Further, you will reference some libraries before you start coding. Details regarding the same are available in the article that I just pointed to. You will be charged for data transfers if the databases are not in the same datacenter. For pricing information, go here Currently, a tool named DATA SYNC, which is built on top of sync framework, is available in CTP that allows SQL Azure <-> SQL server and SQL Azure <-> SQL Azure synchronization (without writing single line of code); however, in some cases, the custom code shown in this blogpost provides flexibility that is not available with Data SYNC. For instance, filtering is not supported in the SQL Azure DATA SYNC CTP2; if you wish to have such a functionality now, then you have the option of developing a custom code using SYNC Framework. Now, this code can be easily extended to synchronize at some schedule. Let us say we want the databases to get synchronized every day at 10:00 pm. This is what the code will look like now: (http://parasdoshi1989.files.wordpress.com/2011/06/code-snippet-2-for-extending-sql-azure-with-azure-worker-role.pdf) Don’t you think that by writing such a code, we are imitating the functionality provided by the SQL server agent for a SQL server? Think about it. We are scheduling our administrative task by writing custom code – in other words, we have developed a “Light weight SQL server agent for SQL Azure!” Since the SQL server agent is not currently available in cloud, we have developed a solution that enables us to schedule tasks, and thus we have extended SQL Azure with the Azure worker role! Now if you wish to track jobs, you can do so by storing this data in SQL Azure (or Azure tables). The reason is that Windows Azure is a stateless platform, and we will need to store the state of the job ourselves and the choice that you have is SQL Azure or Azure tables. Note that this solution requires custom code and also it is not UI driven; however, for now, it can act as a temporary solution until SQL server agent is made available in the cloud. Moreover, this solution does not encompass functionalities that a SQL server agent provides, but it does open up an interesting avenue to schedule some of the tasks such as backup and synchronization of SQL Azure databases by writing some custom code in the Azure worker role. Now, let us see one more possibility – i.e., running BCP through a worker role in Azure-hosted services and then uploading the backup files either locally or on blobs. If you upload it locally, then consider the data transfer cost. If you upload it to blobs residing in the same datacenter, then no transfer cost applies but the cost on blob size applies. So, before choosing the option, you need to evaluate your preferences keeping the cost associated with each option in mind. In this article, I have shown that Azure worker role solution could be developed to synchronize SQL Azure databases. Moreover, a light-weight SQL server agent for SQL Azure can be developed. Also we discussed the possibility of running BCP through a worker role in Azure-hosted services for backing up our precious SQL Azure data. Thus, we can extend SQL Azure with the Azure worker role. But remember: you will be charged for running Azure worker roles. 
So at the end of the day, you need to ask – am I willing to build a custom code and pay money to achieve this functionality? I hope you found this blog post interesting. If you have any questions/feedback, you can comment below or you can mail me at Paras[at]student-partners[dot]com Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Azure, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
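
    To make the BCP possibility above a little more concrete, the command a worker role would shell out to looks roughly like the following (a sketch only; the database, table, output file and credentials are placeholders, and a SQL Azure server normally expects the user@server login form):

        bcp MyAzureDb.dbo.Orders out orders.dat -n -S myserver.database.windows.net -U myuser@myserver -P myPassword

    The resulting native-format file could then be kept on local storage or pushed to blob storage, with the cost trade-offs between the two as described in the post.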

    Read the article

  • SQL SERVER – Guest Post – Architecting Data Warehouse – Niraj Bhatt

    - by pinaldave
    Niraj Bhatt works as an Enterprise Architect for a Fortune 500 company and has an innate passion for building / studying software systems. He is a top rated speaker at various technical forums including Tech·Ed, MCT Summit, Developer Summit, and Virtual Tech Days, among others. Having run a successful startup for four years, Niraj enjoys working on IT innovations that can impact an enterprise bottom line, streamlining IT budgets through IT consolidation, architecture and integration of systems, performance tuning, and review of enterprise applications. He has received the Microsoft MVP award for ASP.NET, Connected Systems and, most recently, Windows Azure. When he is away from his laptop, you will find him taking deep dives in automobiles, pottery, rafting, photography, cooking and financial statements, though not necessarily in that order. He is also a manager/speaker at BDOTNET, Asia's largest .NET user group. Here is the guest post by Niraj Bhatt. As data in your applications grows, it's the database that usually becomes a bottleneck. It's hard to scale a relational DB, and the preferred approach for large scale applications is to create separate databases for writes and reads. These databases are referred to as the transactional database and the reporting database. Though there are tools / techniques which can allow you to create a snapshot of your transactional database for reporting purposes, sometimes they don't quite fit the reporting requirements of an enterprise. These requirements typically are data analytics, effective schema (for an information worker to self-service herself), historical data, better performance (flat data, no joins) etc. This is where the need for a data warehouse or an OLAP system arises. A key point to remember is that a data warehouse is mostly a relational database. It's built on top of the same concepts like Tables, Rows, Columns, Primary keys, Foreign Keys, etc. Before we talk about how data warehouses are typically structured, let's understand the key components that can create a data flow between OLTP systems and OLAP systems. There are 3 major areas to it: a) The OLTP system should be capable of tracking its changes, as all these changes should go back to the data warehouse for historical recording. For example, if an OLTP transaction moves a customer from silver to gold category, the OLTP system needs to ensure that this change is tracked and sent to the data warehouse for reporting purposes. A report in context could be how many customers, divided by geographies, moved from silver to gold category. In data warehouse terminology this process is called Change Data Capture. There are quite a few systems that leverage database triggers to move these changes to corresponding tracking tables. There are also out-of-box features provided by some databases, e.g. SQL Server 2008 offers Change Data Capture and Change Tracking for addressing such requirements. b) After we make the OLTP system capable of tracking its changes, we need to provision a batch process that can run periodically, take these changes from the OLTP system and dump them into the data warehouse. There are many tools out there that can help you fill this gap – SQL Server Integration Services happens to be one of them. c) So we have an OLTP system that knows how to track its changes, and we have jobs that run periodically to move these changes to the warehouse. The question that remains, though, is how the warehouse will record these changes. This structural change in the data warehouse arena is often covered under something called Slowly Changing Dimension (SCD).
While we will talk about dimensions in a while, SCD can be applied to pure relational tables too. SCD enables a database structure to capture historical data. This would create multiple records for a given entity in relational database and data warehouses prefer having their own primary key, often known as surrogate key. As I mentioned a data warehouse is just a relational database but industry often attributes a specific schema style to data warehouses. These styles are Star Schema or Snowflake Schema. The motivation behind these styles is to create a flat database structure (as opposed to normalized one), which is easy to understand / use, easy to query and easy to slice / dice. Star schema is a database structure made up of dimensions and facts. Facts are generally the numbers (sales, quantity, etc.) that you want to slice and dice. Fact tables have these numbers and have references (foreign keys) to set of tables that provide context around those facts. E.g. if you have recorded 10,000 USD as sales that number would go in a sales fact table and could have foreign keys attached to it that refers to the sales agent responsible for sale and to time table which contains the dates between which that sale was made. These agent and time tables are called dimensions which provide context to the numbers stored in fact tables. This schema structure of fact being at center surrounded by dimensions is called Star schema. A similar structure with difference of dimension tables being normalized is called a Snowflake schema. This relational structure of facts and dimensions serves as an input for another analysis structure called Cube. Though physically Cube is a special structure supported by commercial databases like SQL Server Analysis Services, logically it’s a multidimensional structure where dimensions define the sides of cube and facts define the content. Facts are often called as Measures inside a cube. Dimensions often tend to form a hierarchy. E.g. Product may be broken into categories and categories in turn to individual items. Category and Items are often referred as Levels and their constituents as Members with their overall structure called as Hierarchy. Measures are rolled up as per dimensional hierarchy. These rolled up measures are called Aggregates. Now this may seem like an overwhelming vocabulary to deal with but don’t worry it will sink in as you start working with Cubes and others. Let’s see few other terms that we would run into while talking about data warehouses. ODS or an Operational Data Store is a frequently misused term. There would be few users in your organization that want to report on most current data and can’t afford to miss a single transaction for their report. Then there is another set of users that typically don’t care how current the data is. Mostly senior level executives who are interesting in trending, mining, forecasting, strategizing, etc. don’t care for that one specific transaction. This is where an ODS can come in handy. ODS can use the same star schema and the OLAP cubes we saw earlier. The only difference is that the data inside an ODS would be short lived, i.e. for few months and ODS would sync with OLTP system every few minutes. Data warehouse can periodically sync with ODS either daily or weekly depending on business drivers. Data marts are another frequently talked about topic in data warehousing. They are subject-specific data warehouse. Data warehouses that try to span over an enterprise are normally too big to scope, build, manage, track, etc. 
Hence they are often scaled down to something called Data mart that supports a specific segment of business like sales, marketing, or support. Data marts too, are often designed using star schema model discussed earlier. Industry is divided when it comes to use of data marts. Some experts prefer having data marts along with a central data warehouse. Data warehouse here acts as information staging and distribution hub with spokes being data marts connected via data feeds serving summarized data. Others eliminate the need for a centralized data warehouse citing that most users want to report on detailed data. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Business Intelligence, Data Warehousing, Database, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
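
    As a hedged illustration of the star schema described above (the table and column names are invented for this example, not taken from the post), the sales fact with its agent and time dimensions could be sketched in T-SQL like this:

        -- dimension tables provide context; the IDENTITY column acts as the surrogate key
        CREATE TABLE DimSalesAgent (
            AgentKey  INT IDENTITY(1,1) PRIMARY KEY,
            AgentName VARCHAR(100) NOT NULL
        );
        CREATE TABLE DimDate (
            DateKey      INT PRIMARY KEY,   -- e.g. 20120630
            CalendarDate DATE NOT NULL
        );
        -- the fact table holds the measures plus foreign keys to the dimensions
        CREATE TABLE FactSales (
            AgentKey    INT NOT NULL REFERENCES DimSalesAgent(AgentKey),
            DateKey     INT NOT NULL REFERENCES DimDate(DateKey),
            SalesAmount DECIMAL(18,2) NOT NULL
        );

    The surrogate key on the dimension is what makes the Slowly Changing Dimension technique mentioned earlier possible, since several historical rows for the same business entity can coexist under different keys.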

    Read the article

  • Evolution Of High Definition TV Viewing

    - by Gopinath
    The following guest post is written by Rob, who is also blogging on entertainment technology topics on iwantsky.com Gone are the days when you need to squint to be able to see the emotions on the faces of Humphrey Bogart and Ingrid Bergman as the lovers bid each other adieu in the classic film Casablanca. These days, watching an ordinary ant painstakingly carry a leaf in Animal Planet can be an exhilarating experience as you get to see not only the slightest movement but also the demarcation line between the insect’s head, thorax and abdomen. The crystal clear imagery was made possible by the sharp minds and the tinkering hands of the scientists that have designed the modern world’s HDTV. What is HDTV and what makes people so agog to have this new innovation in TV watching? HDTV stands for High Definition TV. Television viewing has indeed made a big leap. From the grainy black and whites, TV viewing had moved to colored TVs, progressed to SD TVs and now to HDTV. HDTV is the emerging trend in TV viewing as it delivers bigger and clearer pictures and better audio. Viewers can have a cinema-like TV viewing experience right in the comforts of their own home. With HDTV the viewer is allowed to have a better viewing range. With Standard (SD) TV, the viewer has to be at a distance that is from 3 to 6 times the size of the screen. HDTV allows the viewer to enjoy sharper and clearer images as it is possible to sit at a distance that is 1.5 or 3 times the size of the screen without noticing any image pixilation. Although HDTV appears to be a fairly new innovation, this system has actually existed in various forms years ago. Development of the HDTV was started in Europe as early as 1940s. However, the NTSC and the PAL/SECAM, the two analog TV standards became dominant and became popular worldwide. The analog TV was replaced by the digital TV platform in the 1990s. Even during the analog era, attempts have been made to develop HDTV. Japan has come out with MUSE system. However, due to channel bandwidth requirement concerns, the program was shelved. The entry of four organizations into the HDTV market spurred the development of a beneficial coalition. The AT&T, ATRC, MIT and Zenith HDTV combined forces. In 1993, a Grand Alliance was formed. This group is composed of researchers and HDTV manufacturers. A common standard for the broadcast system of HDTV was developed. In 1995, the system was tested and found successful. With the higher screen resolution of HDTV, viewing has never been more enjoyable. [Image courtesy: samsung] This article titled,Evolution Of High Definition TV Viewing, was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

    Read the article

  • Stretch VMWare Player guest OS to fullscreen

    - by Synetech
    I'm using VMware Player to play an old 16-bit Windows game. Unfortunately the game uses only 640x480 and I cannot figure out how to stretch the VM window to full-screen on the host. I set the guest OS to 640x480, but the screen is still small, in the middle of the screen as seen in figure 1. I even tried setting the compatibility mode to Windows 95 and 640x480, but it has no effect (figure 2) and looks exactly the same as when I set the VM to full-screen (1366x768 on the laptop) and start the game normally. There are a few references to stretching a VM. One page mentions setting a Stretch Guest option, but there is no such option, at least not in VMware Player 4.0.3. I know that VirtualBox has a stretching option, but I'm trying to find a solution for VMware (Player, not Workstation). Figure 1: Guest OS is pillar-boxed Figure 2: Using compatibility mode

    Read the article

  • Can't ping host from vmware guest using bridged networking

    - by user199421
    Host is Windows 7. Guest is Ubuntu 11.04. Network adapter is wireless. I can ping other computers on the network but not the host. No firewalls are involved. Sniffing the traffic with Wireshark, it looks like both the host and the guest are using the same MAC address. My guest simply doesn't receive a reply when asking for 192.168.1.101 (the host). My router has no problem giving both of them different IP addresses, but maybe the duplicate MAC address is the problem? It seems logical that both will have the same MAC address (from the host's point of view), but it is strange that there is no workaround for this, because otherwise I don't see how the host and guest are supposed to communicate.
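
    A small, hedged check from the guest side can confirm whether the duplicate-MAC theory holds (the interface name eth0 is an assumption):

        ip link show eth0            # the guest's own MAC address
        ping -c 3 192.168.1.101      # the host, per the question
        arp -n 192.168.1.101         # which MAC the guest resolved for the host

    With bridged networking over a wireless adapter the guest's frames are expected to go out under the host's wireless MAC, so if the ARP entry for the host never gets filled in, the wireless bridging itself is a more likely culprit than the addressing.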

    Read the article

  • win7 amd64 guest in kvm does not have sound

    - by davidshen84
Hi, my host system is Gentoo amd64 and the guest system is Windows 7 amd64. The guest works, except that it has no sound. I start KVM with -soundhw ac97 and QEMU_AUDIO_DRV='alsa', and after I get into the guest I can see a 'Multimedia Audio Controller' in Device Manager, but Windows 7 cannot find a driver for it. I searched the net for a long time and cannot find an Intel AC'97 driver for Windows 7 amd64. I also tried -soundhw sb16 and es1370; none of them work. Please help me fix this.
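A minimal sketch of one thing worth trying: Windows 7 x64 ships an in-box driver for the emulated Intel HD Audio device, unlike AC'97. The memory size and disk image name below are placeholders, not taken from the question:

    # Start the guest with the emulated Intel HDA device instead of AC'97
    QEMU_AUDIO_DRV=alsa kvm -m 2048 -soundhw hda -hda win7.img

If -soundhw hda is not accepted by the installed qemu-kvm build, running it with "-soundhw help" lists the sound devices it actually supports.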

    Read the article

  • VMware Server Guest OS Random Shutdown

    - by radioactive21
Has anyone encountered a problem in which the guest OSes in VMware Server randomly shut down? This has happened about once a month: VMware Server just shuts down ALL guest OSes. The host machine is fine, and you can still log into the VMware Server web management interface; it's just that all guest OSes get shut down for no apparent reason. Even the logs only say that guest OS so-and-so was shut down, with no details as to why. This is not just one machine; we have VMware Server installed on three different PowerEdge servers and they have all encountered the same problem. We've triple-checked all settings and nothing is out of the ordinary.
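A sketch of where to look for more detail than the web UI shows. The paths below are assumptions for a Linux host running VMware Server 2.x; on a Windows host the equivalent logs live next to the .vmx files and under the VMware program data directory:

    # Per-VM log, next to the .vmx file of an affected guest (placeholder path)
    less /path/to/vm/vmware.log

    # Host agent log around the time of the mass shutdown (assumed default path)
    grep -i "shut" /var/log/vmware/hostd.log

    # Rule out the host itself rebooting or the kernel killing the VM processes
    last -x | head
    dmesg | grep -i -E "oom|killed process"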

    Read the article

  • KVM/Libvirt bridged/routed networking not working on newer guest kernels

    - by SharkWipf
I have a dedicated server running Debian 6, with Libvirt (0.9.11.3) and Qemu-KVM (qemu-kvm-1.0+dfsg-11, Debian). I am having a problem getting bridged/routed networking to work in KVM guests with newer kernels (2.6.38). NATted networking works fine though. Older kernels work perfectly fine as well. The host kernel is at version 3.2.0-2-amd64; the problem was also there on an older host kernel.

The contents of the host's /etc/network/interfaces (ip removed):

    # Loopback device:
    auto lo
    iface lo inet loopback
    # bridge
    auto br0
    iface br0 inet static
      address 176.9.xx.xx
      broadcast 176.9.xx.xx
      netmask 255.255.255.224
      gateway 176.9.xx.xx
      pointopoint 176.9.xx.xx
      bridge_ports eth0
      bridge_stp off
      bridge_maxwait 0
      bridge_fd 0
      up route add -host 176.9.xx.xx dev br0 # VM IP
      post-up mii-tool -F 100baseTx-FD br0
      # default route to access subnet
      up route add -net 176.9.xx.xx netmask 255.255.255.224 gw 176.9.xx.xx br0

The output of ifconfig -a on the host:

    br0        Link encap:Ethernet HWaddr 54:04:a6:8a:66:13
               inet addr:176.9.xx.xx Bcast:176.9.xx.xx Mask:255.255.255.224
               inet6 addr: fe80::5604:a6ff:fe8a:6613/64 Scope:Link
               UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
               RX packets:20216729 errors:0 dropped:0 overruns:0 frame:0
               TX packets:19962220 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:0
               RX bytes:14144528601 (13.1 GiB) TX bytes:7990702656 (7.4 GiB)

    eth0       Link encap:Ethernet HWaddr 54:04:a6:8a:66:13
               UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
               RX packets:26991788 errors:0 dropped:12066 overruns:0 frame:0
               TX packets:19737261 errors:270082 dropped:0 overruns:0 carrier:270082
               collisions:1686317 txqueuelen:1000
               RX bytes:15459970915 (14.3 GiB) TX bytes:6661808415 (6.2 GiB)
               Interrupt:17 Memory:fe500000-fe520000

    lo         Link encap:Local Loopback
               inet addr:127.0.0.1 Mask:255.0.0.0
               inet6 addr: ::1/128 Scope:Host
               UP LOOPBACK RUNNING MTU:16436 Metric:1
               RX packets:6240133 errors:0 dropped:0 overruns:0 frame:0
               TX packets:6240133 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:0
               RX bytes:6081956230 (5.6 GiB) TX bytes:6081956230 (5.6 GiB)

    virbr0     Link encap:Ethernet HWaddr 52:54:00:79:e4:5a
               inet addr:192.168.100.1 Bcast:192.168.100.255 Mask:255.255.255.0
               UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
               RX packets:225016 errors:0 dropped:0 overruns:0 frame:0
               TX packets:412958 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:0
               RX bytes:16284276 (15.5 MiB) TX bytes:687827984 (655.9 MiB)

    virbr0-nic Link encap:Ethernet HWaddr 52:54:00:79:e4:5a
               BROADCAST MULTICAST MTU:1500 Metric:1
               RX packets:0 errors:0 dropped:0 overruns:0 frame:0
               TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:500
               RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

    vnet0      Link encap:Ethernet HWaddr fe:54:00:93:4e:68
               inet6 addr: fe80::fc54:ff:fe93:4e68/64 Scope:Link
               UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
               RX packets:607670 errors:0 dropped:0 overruns:0 frame:0
               TX packets:5932089 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:500
               RX bytes:83574773 (79.7 MiB) TX bytes:1092482370 (1.0 GiB)

    vnet1      Link encap:Ethernet HWaddr fe:54:00:ed:6a:43
               inet6 addr: fe80::fc54:ff:feed:6a43/64 Scope:Link
               UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
               RX packets:922132 errors:0 dropped:0 overruns:0 frame:0
               TX packets:6342375 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:500
               RX bytes:251091242 (239.4 MiB) TX bytes:1629079567 (1.5 GiB)

    vnet2      Link encap:Ethernet HWaddr fe:54:00:0d:cb:3d
               inet6 addr: fe80::fc54:ff:fe0d:cb3d/64 Scope:Link
               UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
               RX packets:9461 errors:0 dropped:0 overruns:0 frame:0
               TX packets:665189 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:500
               RX bytes:4990275 (4.7 MiB) TX bytes:49229647 (46.9 MiB)

    vnet3      Link encap:Ethernet HWaddr fe:54:cd:83:eb:aa
               inet6 addr: fe80::fc54:cdff:fe83:ebaa/64 Scope:Link
               UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
               RX packets:1649 errors:0 dropped:0 overruns:0 frame:0
               TX packets:12177 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:500
               RX bytes:77233 (75.4 KiB) TX bytes:2127934 (2.0 MiB)

The guest's /etc/network/interfaces, in this case running Ubuntu 12.04 (ip removed):

    # This file describes the network interfaces available on your system
    # and how to activate them. For more information, see interfaces(5).
    # The loopback network interface
    auto lo
    iface lo inet loopback
    auto eth0
    iface eth0 inet static
      address 176.9.xx.xx
      netmask 255.255.255.248
      gateway 176.9.xx.xx # Host IP
      pointopoint 176.9.xx.xx # Host IP
      dns-nameservers 8.8.8.8 8.8.4.4

The output of ifconfig -a on the guest:

    eth0       Link encap:Ethernet HWaddr 52:54:cd:83:eb:aa
               inet addr:176.9.xx.xx Bcast:0.0.0.0 Mask:255.255.255.255
               inet6 addr: fe80::5054:cdff:fe83:ebaa/64 Scope:Link
               UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
               RX packets:14190 errors:0 dropped:0 overruns:0 frame:0
               TX packets:1768 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:1000
               RX bytes:2614642 (2.6 MB) TX bytes:82700 (82.7 KB)

    lo         Link encap:Local Loopback
               inet addr:127.0.0.1 Mask:255.0.0.0
               inet6 addr: ::1/128 Scope:Host
               UP LOOPBACK RUNNING MTU:16436 Metric:1
               RX packets:954 errors:0 dropped:0 overruns:0 frame:0
               TX packets:954 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:0
               RX bytes:176679 (176.6 KB) TX bytes:176679 (176.6 KB)

Output of ping -c4 on the guest:

    PING google.nl (173.194.35.151) 56(84) bytes of data.
    64 bytes from muc03s01-in-f23.1e100.net (173.194.35.151): icmp_req=1 ttl=55 time=14.7 ms
    From static.174.82.xx.xx.clients.your-server.de (176.9.xx.xx): icmp_seq=2 Redirect Host(New nexthop: static.161.82.9.176.clients.your-server.de (176.9.82.161))
    64 bytes from muc03s01-in-f23.1e100.net (173.194.35.151): icmp_req=2 ttl=55 time=15.1 ms
    From static.198.170.9.176.clients.your-server.de (176.9.170.198) icmp_seq=3 Destination Host Unreachable
    From static.198.170.9.176.clients.your-server.de (176.9.170.198) icmp_seq=4 Destination Host Unreachable
    --- google.nl ping statistics ---
    4 packets transmitted, 2 received, +2 errors, 50% packet loss, time 3002ms
    rtt min/avg/max/mdev = 14.797/14.983/15.170/0.223 ms, pipe 2

The static.174.82.xx.xx.clients.your-server.de (176.9.xx.xx) is the host's IP. I have encountered this problem with every guest OS I've tried, that being Fedora, Ubuntu (server/desktop) and Debian with an upgraded kernel. I've also tried compiling the guest kernel myself, to no avail. I have no problem with recompiling a kernel, though the host cannot afford any downtime. Any ideas on this problem are very welcome. EDIT: I can ping the host from inside the guest.
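As a diagnostic sketch, run on the host while the guest pings out (interface names are taken from the configs above; the guest IP stays redacted as 176.9.xx.xx), to see whether the guest's frames actually cross br0 and whether any bridge/routing sysctls differ from what the older guest kernels saw:

    # Is the guest's vnet interface really attached to br0?
    brctl show br0

    # Watch the guest's ARP and ICMP on both sides of the bridge
    tcpdump -ni vnet3 arp or icmp
    tcpdump -ni br0 host 176.9.xx.xx

    # Sysctls that commonly bite routed / pointopoint setups
    sysctl net.ipv4.ip_forward net.ipv4.conf.br0.rp_filter net.ipv4.conf.br0.proxy_arp

If the ARP requests show up on vnet3 but never on br0 (or the replies never make it back), the problem is on the host side of the bridge rather than in the guest kernel itself.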

    Read the article

  • ICS as guest in Ubuntu 12.04 host

    - by oshirowanen
I have installed Android as a guest OS via VirtualBox, following this guide: http://www.android-x86.org/documents/virtualboxhowto and using the following ISO: android-x86-4.0-RC1-eeepc.iso. But I am unable to connect to the internet from within the Android virtual machine. The host OS is Ubuntu 12.04, where the internet works fine; I have internet access via a USB wireless connection to the home router. If I install Ubuntu 12.04 as a guest on the same Ubuntu 12.04 host, the guest's internet works fine out of the box, but for some reason I can't get the Android guest's internet to work out of the box. Anyone know what I am doing wrong?
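A sketch of two things commonly checked with android-x86 guests; these are assumptions drawn from general android-x86 troubleshooting, not from the linked guide, and the VM name is a placeholder:

    # On the Ubuntu host: use plain NAT and the PCnet-FAST III adapter,
    # which the android-x86 kernel has a driver for
    VBoxManage modifyvm "Android-x86" --nic1 nat --nictype1 Am79C973

    # Inside the Android guest (if a text console is available, e.g. in debug
    # mode, Alt-F1 switches to it and Alt-F7 back):
    netcfg            # does eth0 have an address at all?
    dhcpcd eth0       # try to obtain a DHCP lease manually

If eth0 never gets an address, the emulated NIC is the likely culprit; if it has one but browsing still fails, DNS inside the Android guest is the next thing to check.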

    Read the article

  • How to ping virtualbox guest machine by hostname? [migrated]

    - by Punit Soni
Here is the summary:
VirtualBox 4.2.18
Host OS: Windows 7
Guest OS: Ubuntu 12.04
Networking: Bridged Adapter
I can ping the host and other machines on the host's network from the Ubuntu guest using their hostnames, but from the host and other network machines I can only ping the guest machine by IP address. I have avahi-daemon running on the guest OS. I want to be able to ping/ssh to the guest machine from the host and other machines on the network using the guest machine's hostname. Please help.
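A sketch of the two usual routes, assuming the guest's hostname is ubuntu-guest (a placeholder): mDNS (.local) names for Linux/macOS peers, and a NetBIOS name for the Windows 7 host.

    # From Linux peers, avahi on the guest should already make this work:
    ping ubuntu-guest.local

    # For the Windows 7 host, either install an mDNS responder (e.g. Bonjour)
    # and use the .local name, or give the guest a NetBIOS name by installing
    # Samba on the guest:
    sudo apt-get install samba    # then, from Windows: ping ubuntu-guest

Installing Samba works because its nmbd service advertises the guest's hostname over NetBIOS, which Windows resolves natively; the mDNS route keeps things consistent for the Linux machines on the network.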

    Read the article

  • Can't get bridged networking to work between linux guest virtual machine and Mac host

    - by tgoneil
I'm trying to establish bridged networking from a Linux Mint 12 guest in VirtualBox to a Mac OS X Lion host.
Mac config:
Network setting: en3 configured by DHCP
Sharing setting: Internet Sharing selected, sharing the connection from en3 to computers using en3
VirtualBox Linux guest setting:
Network setting: Bridged Adapter, Name: en3
I can ping from the host (192.168.2.1) to the guest (192.168.2.2) and from the guest to the host, but I cannot ping from the Linux guest to the outside world. The connection on the host is up, because I can ping from the Mac host to the outside world. Something else that seems weird to me: in the Mac Network settings, the IP address generated by DHCP says 169.254.243.185. What the heck is that? When I open a terminal on the Mac, however, ifconfig shows its en3 inet address as 192.168.2.1.
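A quick diagnostic sketch (IPs taken from the question): a 169.254.x.x address is a self-assigned link-local address, which usually means that interface never got a real DHCP lease, so it is worth confirming what the guest actually uses as its gateway and DNS, and which Mac interface really carries the internet connection.

    # On the Linux Mint guest:
    ip route                      # where does the default route point?
    cat /etc/resolv.conf          # which DNS servers is the guest using?
    ping -c 1 8.8.8.8             # raw connectivity past the host
    nslookup google.com           # name resolution separately from routing

    # On the Mac host:
    ifconfig en3
    netstat -rn | head            # the host's own default route

If ping 8.8.8.8 works but nslookup fails, the problem is DNS rather than routing; if neither works, the sharing/bridging setup between en3 and the guest is the place to look.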

    Read the article
