Search Results


  • 3 Incredibly Useful Projects to jump-start your Kinect Development.

    - by mbcrump
    I’ve been playing with the Kinect SDK Beta for the past few days and have noticed a few projects on CodePlex worth checking out. I decided to blog about them to help spread awareness. If you want to learn more about the Kinect SDK, then check out my “Busy Developer’s Guide to the Kinect SDK Beta”. Let’s get started:

    KinectContrib is a set of VS2010 templates that will help you get started building a Kinect project very quickly. Once you have it installed you will have the option to select the following templates: KinectDepth, KinectSkeleton and KinectVideo. Please note that KinectContrib requires the Kinect for Windows SDK beta to be installed. [Image: Kinect templates after installing the template pack.] The reference to Microsoft.Research.Kinect is added automatically. Here is a sample of the code for the MainWindow.xaml in the “Video” template (the default presentation xmlns, dropped in the original listing, is restored here):

        <Window x:Class="KinectVideoApplication1.MainWindow"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="MainWindow" Height="480" Width="640">
            <Grid>
                <Image Name="videoImage"/>
            </Grid>
        </Window>

    and MainWindow.xaml.cs:

        using System;
        using System.Windows;
        using System.Windows.Media;
        using System.Windows.Media.Imaging;
        using Microsoft.Research.Kinect.Nui;

        namespace KinectVideoApplication1
        {
            public partial class MainWindow : Window
            {
                // Instantiate the Kinect runtime. Required to initialize the device.
                // IMPORTANT NOTE: You can pass the device ID here, in case more than
                // one Kinect device is connected.
                Runtime runtime = new Runtime();

                public MainWindow()
                {
                    InitializeComponent();
                    // Runtime initialization is handled when the window is opened. When
                    // the window is closed, the runtime MUST be uninitialized.
                    this.Loaded += new RoutedEventHandler(MainWindow_Loaded);
                    this.Unloaded += new RoutedEventHandler(MainWindow_Unloaded);
                    // Handle the content obtained from the video camera, once received.
                    runtime.VideoFrameReady += new EventHandler<ImageFrameReadyEventArgs>(runtime_VideoFrameReady);
                }

                void MainWindow_Unloaded(object sender, RoutedEventArgs e)
                {
                    runtime.Uninitialize();
                }

                void MainWindow_Loaded(object sender, RoutedEventArgs e)
                {
                    // Since only a color video stream is needed, RuntimeOptions.UseColor is used.
                    runtime.Initialize(RuntimeOptions.UseColor);
                    // You can adjust the resolution here.
                    runtime.VideoStream.Open(ImageStreamType.Video, 2, ImageResolution.Resolution640x480, ImageType.Color);
                }

                void runtime_VideoFrameReady(object sender, ImageFrameReadyEventArgs e)
                {
                    PlanarImage image = e.ImageFrame.Image;
                    BitmapSource source = BitmapSource.Create(image.Width, image.Height, 96, 96,
                        PixelFormats.Bgr32, null, image.Bits, image.Width * image.BytesPerPixel);
                    videoImage.Source = source;
                }
            }
        }

    You will find this template pack very handy, especially if you are new to Kinect development.

    Next up is the Coding4Fun Kinect Toolkit, which contains extension methods and a WPF control to help you develop with the Kinect SDK. After downloading the package, simply add a reference to the .dll using either the WPF or WinForms version. You will then have access to several extension methods that can, for example, help you save a frame as an image (one possible shape of such a call is sketched just before the conclusion below). For a full list of extension methods and properties, please visit the site at http://c4fkinect.codeplex.com/.

    Kinductor - This is a great application for just learning how to use the Kinect SDK. The project uses MVVM Light and is a great start for those looking at how to structure their first Kinect application.
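    As an illustration of the Coding4Fun extension methods mentioned above, the VideoFrameReady handler can shrink to something like the following. This is a minimal sketch from memory: the method names (ToBitmapSource, Save), the ImageFormat type and the namespace are assumptions and should be checked against the toolkit's documentation.

        using Coding4Fun.Kinect.Wpf; // assumed namespace for the WPF flavor of the toolkit

        void runtime_VideoFrameReady(object sender, ImageFrameReadyEventArgs e)
        {
            // The toolkit converts the raw Kinect frame straight to a WPF BitmapSource...
            BitmapSource source = e.ImageFrame.ToBitmapSource();
            videoImage.Source = source;
            // ...and can persist it to disk in one call (assumed toolkit helper).
            source.Save("snapshot.jpg", ImageFormat.Jpeg);
        }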
Conclusion: Things are already getting easier for those working with the Kinect SDK. I imagine that after a few more months we will see the SDK go out of beta and allow commercial applications to run using it. I am very excited and hope that you continue reading my blog for more Kinect, WPF and Silverlight news.  Subscribe to my feed


  • Capgemini Global Business Process Management Report

    - by JuergenKress
    Welcome to the Capgemini Global Business Process Management (BPM) Report. This report is an exploration of key trends in BPM as seen by CXOs across a broad selection of sectors and geographies. BPM is perhaps at a tipping point - it’s certainly at an exciting stage in its evolution. As both an engineer and an Operational Research practitioner in my early career, and subsequently as a consultant, I have seen BPM through its development over the last 26 years. BPM has its roots in management practices such as Total Quality Management, Business Process Reengineering & Model Based Development; but the advent of the new generation of sophisticated modelling and process execution technologies has greatly enhanced BPM’s power to truly transform businesses. This has created one of the most rapidly growing and attractive market sectors for both services and technology. We see BPM as a critical management discipline that, when executed against clear, cross-organizational business objectives, can deliver exceptional value to that organization. However, we also see that the potential for BPM is not well understood. Our decision to conduct this global survey stemmed from discussions with our clients. We sought to gain a better impression of their understanding of BPM, how they measure its value, and how far it is prioritized within their Business and Technology Transformation efforts. This research confirms our belief that BPM needs to be a jointly owned Business and IT discipline. It also demonstrates that it is starting to gain significant traction in the market and investments are starting to pay dividends to the early adopters. At Capgemini we are being asked by our clients to help them simplify and improve their business models and the technology that supports them, and we are already seeing BPM become an integral and key part of this proposition. Business Process Management is becoming ever more relevant to both large and small organizations in the current economic climate. At a time when many different market sectors are facing slow revenue growth, customer churn and increased pressures on costs, BPM becomes a critical weapon in the battle for efficiency and effectiveness in processes. Furthermore, in a challenging and changing business environment that is characterized by uncertainty, it allows organizations to adapt, be more agile and fleet of foot. Capgemini is seeing strong demand for BPM services in markets such as the USA, the UK, the Netherlands and France; and there are clear signs of increased interest in other geographies such as Germany, Sweden, Spain, Italy and Australia. In sector terms, the financial services industry has led the way in BPM adoption over the recent past, driven by increased focus on customer-centricity and regulatory compliance. Other sectors - public sector, utilities, telco, retail and manufacturing - are now not only catching up, but are starting to use BPM in new ways to create new business models to serve customers and outsmart the competition. The research findings also show, however, that this is a complex landscape, and we are not seeing adoption of BPM in a clear and consistent way. This report also looks at some of the barriers to adoption, with organizational silos being a major obstacle. Waters are further muddied by fragmented budgets, lack of clear governance and ownership, and internal politics.
The objective of our investment in this research project was to shed some light on these elements with a view to assisting organizations to create strategies that avoid, or at least mitigate, some of these barriers to success. Management of change in such endeavours is a key part in enabling the appropriate alignment of business and technology to support their transformation efforts. I hope that you find this report of benefit in the further adoption of Business Process Management. Get the full report here. SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Technorati Tags: Capgemini,bpm report,bpm market,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress


  • Projective texture and deferred lighting

    - by Vodácek
    In my previous question, I asked whether it is possible to do projective texturing with deferred lighting. Now (more than half a year later) I have a problem with my implementation of the same thing. I am trying to apply this technique in the light pass (my projector doesn't affect albedo). I have this projector view and projection matrix:

        Matrix projection = Matrix.CreateOrthographicOffCenter(-halfWidth * Scale, halfWidth * Scale,
                                                               -halfHeight * Scale, halfHeight * Scale,
                                                               1, 100000);
        Matrix view = Matrix.CreateLookAt(Position, Target, Vector3.Up);

    where halfWidth and halfHeight are half of the texture's width and height, Position is the projector's position and Target is the projector's target. This seems to be OK. I am drawing a full-screen quad with this shader:

        float4x4 InvViewProjection;

        texture2D DepthTexture;
        texture2D NormalTexture;
        texture2D ProjectorTexture;

        float4x4 ProjectorViewProjection;

        sampler2D depthSampler = sampler_state
        {
            texture = <DepthTexture>;
            minfilter = point;
            magfilter = point;
            mipfilter = point;
        };

        sampler2D normalSampler = sampler_state
        {
            texture = <NormalTexture>;
            minfilter = point;
            magfilter = point;
            mipfilter = point;
        };

        sampler2D projectorSampler = sampler_state
        {
            texture = <ProjectorTexture>;
            AddressU = Clamp;
            AddressV = Clamp;
        };

        float viewportWidth;
        float viewportHeight;

        // Calculate the 2D screen position of a 3D position
        float2 postProjToScreen(float4 position)
        {
            float2 screenPos = position.xy / position.w;
            return 0.5f * (float2(screenPos.x, -screenPos.y) + 1);
        }

        // Calculate the size of one half of a pixel, to convert
        // between texels and pixels
        float2 halfPixel()
        {
            return 0.5f / float2(viewportWidth, viewportHeight);
        }

        struct VertexShaderInput
        {
            float4 Position : POSITION0;
        };

        struct VertexShaderOutput
        {
            float4 Position : POSITION0;
            float4 PositionCopy : TEXCOORD1;
        };

        VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
        {
            VertexShaderOutput output;
            output.Position = input.Position;
            output.PositionCopy = output.Position;
            return output;
        }

        float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
        {
            float2 texCoord = postProjToScreen(input.PositionCopy) + halfPixel();

            // Extract the depth for this pixel from the depth map
            float4 depth = tex2D(depthSampler, texCoord);
            //return float4(depth.r, 0, 0, 1);

            // Recreate the position with the UV coordinates and depth value
            float4 position;
            position.x = texCoord.x * 2 - 1;
            position.y = (1 - texCoord.y) * 2 - 1;
            position.z = depth.r;
            position.w = 1.0f;

            // Transform position from screen space to world space
            position = mul(position, InvViewProjection);
            position.xyz /= position.w;

            // Compute projection
            float3 projection = tex2D(projectorSampler,
                postProjToScreen(mul(position, ProjectorViewProjection)) + halfPixel());
            return float4(projection, 1);
        }

    In the first part of the pixel shader the position is recovered from the G-buffer (I am using this code in other shaders without any problem) and then transformed into the projector's view-projection space. The problem is that the projection doesn't appear. [Image: the green lines are the rendered projector frustum.] Where is my mistake hidden? I am using XNA 4. Thanks for advice and sorry for my English. EDIT: The shader above is working, but the projection was too small. When I changed the Scale property to a large value (e.g. 100), the projection appears. But when the camera moves toward the projection, the projection expands, as can be seen in this YouTube video.
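    For anyone hitting the same symptom: one likely culprit (an assumption from reading the shader above, not a verified fix) is the perspective divide. Only position.xyz is divided by position.w, so w keeps its clip-space value; when that position is then multiplied by ProjectorViewProjection, the matrix's translation row is scaled by the stale w, which varies with camera depth - matching a projection that grows as the camera approaches. Dividing all four components leaves w at exactly 1:

        // Transform position from screen space to world space
        position = mul(position, InvViewProjection);
        position /= position.w;   // divide xyzw so that position.w == 1

        // Re-project into the projector's clip space with a clean w
        float3 projection = tex2D(projectorSampler,
            postProjToScreen(mul(position, ProjectorViewProjection)) + halfPixel());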


  • PowerShell Script To Find Where SharePoint 2007 Features Are Activated

    - by Brian T. Jackett
    Recently I posted a script to find where SharePoint 2010 Features Are Activated.  I built the original version to use SharePoint 2010 PowerShell commandlets as that saved me a number of steps for filtering and gathering features at each level.  If there was ever demand for a 2007 version I could modify the script to handle that by using the object model instead of commandlets.  Just the other week a fellow SharePoint PFE Jason Gallicchio had a customer asking about a version for SharePoint 2007.  With a little bit of work I was able to convert the script to work against SharePoint 2007.

    Solution

    Below is the converted script that works against a SharePoint 2007 farm.  Note: There appears to be a bug with the 2007 version that does not give accurate results against a SharePoint 2010 farm.  I ran the 2007 version against a 2010 farm and got fewer results than my 2010 version of the script.  Discussing with some fellow PFEs I think the discrepancy may be due to sandboxed features, a new concept in SharePoint 2010.  I have not had enough time to test or confirm.  For the time being only use the 2007 version script against SharePoint 2007 farms and the 2010 version against SharePoint 2010 farms.

    Note: This script is not optimized for medium to large farms.  In my testing it took 1-3 minutes to recurse through my demo environment.  This script is provided as-is with no warranty.  Run this in a smaller dev / test environment first.

        function Get-SPFeatureActivated
        {
            # see full script for help info, removed for formatting
            [CmdletBinding()]
            param(
                [Parameter(position = 1, valueFromPipeline=$true)]
                [string]
                $Identity
            )#end param
            Begin
            {
                # load SharePoint assembly to access object model
                [void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")

                # declare empty array to hold results. Will add custom member for Url
                # to show where activated at on objects returned from Get-SPFeature.
                $results = @()

                $params = @{}
            }
            Process
            {
                if([string]::IsNullOrEmpty($Identity) -eq $false)
                {
                    $params = @{Identity = $Identity}
                }

                # create hashtable of farm features to lookup definition ids later
                $farm = [Microsoft.SharePoint.Administration.SPFarm]::Local

                # check farm features
                $results += ($farm.FeatureDefinitions |
                             Where-Object {$_.Scope -eq "Farm"} |
                             Where-Object {[string]::IsNullOrEmpty($Identity) -or ($_.DisplayName -eq $Identity)} |
                             % {Add-Member -InputObject $_ -MemberType noteproperty -Name Url -Value ([string]::Empty) -PassThru} |
                             Select-Object -Property Scope, DisplayName, Id, Url)

                # check web application features
                $contentWebAppServices = $farm.services | ? {$_.typename -like "Windows SharePoint Services Web Application"}

                foreach($webApp in $contentWebAppServices.WebApplications)
                {
                    $results += ($webApp.Features |
                                 Select-Object -ExpandProperty Definition |
                                 Where-Object {[string]::IsNullOrEmpty($Identity) -or ($_.DisplayName -eq $Identity)} |
                                 % {Add-Member -InputObject $_ -MemberType noteproperty -Name Url -Value $webApp.GetResponseUri(0).AbsoluteUri -PassThru} |
                                 Select-Object -Property Scope, DisplayName, Id, Url)

                    # check site collection features in current web app
                    foreach($site in ($webApp.Sites))
                    {
                        $results += ($site.Features |
                                     Select-Object -ExpandProperty Definition |
                                     Where-Object {[string]::IsNullOrEmpty($Identity) -or ($_.DisplayName -eq $Identity)} |
                                     % {Add-Member -InputObject $_ -MemberType noteproperty -Name Url -Value $site.Url -PassThru} |
                                     Select-Object -Property Scope, DisplayName, Id, Url)

                        # check site features in current site collection
                        foreach($web in ($site.AllWebs))
                        {
                            $results += ($web.Features |
                                         Select-Object -ExpandProperty Definition |
                                         Where-Object {[string]::IsNullOrEmpty($Identity) -or ($_.DisplayName -eq $Identity)} |
                                         % {Add-Member -InputObject $_ -MemberType noteproperty -Name Url -Value $web.Url -PassThru} |
                                         Select-Object -Property Scope, DisplayName, Id, Url)

                            $web.Dispose()
                        }
                        $site.Dispose()
                    }
                }
            }
            End
            {
                $results
            }
        } #end Get-SPFeatureActivated

        Get-SPFeatureActivated

    Conclusion

    I have posted this script to the TechNet Script Repository (click here).  As always I appreciate any feedback on scripts.  If anyone is motivated to run this 2007 version script against a SharePoint 2010 farm to see if they find any differences in the number of features reported versus what they get with the 2010 version script, I’d love to hear from you.

    -Frog Out
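    A quick postscript on usage: because the function writes its result objects to the pipeline, the output composes with the standard cmdlets. The feature name below is purely hypothetical:

        # find everywhere one feature is activated (hypothetical display name)
        Get-SPFeatureActivated -Identity "MyCompany.Branding"

        # or summarize the whole farm's activations by scope
        Get-SPFeatureActivated | Group-Object Scope | Sort-Object Count -Descending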


  • JavaOne 2012: Nashorn Edition

    As with my JavaOne 2012: OpenJDK Edition post a while back (now updated to reflect the schedule of the talks), I find it convenient to have my JavaOne schedule ordered by subjects of interest. Beside OpenJDK in all its flavors, another subject I find very exciting is Nashorn. I blogged about the various material on Nashorn in the past, and we interviewed Jim Laskey, the Project Lead on Project Nashorn, in the Java Spotlight podcast. So without further ado, here are the JavaOne 2012 talks and BOFs with Nashorn in their title or abstract:

    CON5390 - Nashorn: Optimizing JavaScript and Dynamic Language Execution on the JVM - Monday, Oct 1, 8:30 AM - 9:30 AM
    There are many implementations of JavaScript, meant to run either on the JVM or standalone as native code. Both approaches have their respective pros and cons. The Oracle Nashorn JavaScript project is based on the former approach. This presentation goes through the performance work that has gone on in Oracle’s Nashorn JavaScript project to date in order to make JavaScript-to-bytecode generation for execution on the JVM feasible. It shows that the new invokedynamic bytecode gets us part of the way there but may not quite be enough. What other tricks did the Nashorn project use? The presentation also discusses future directions for increased performance for dynamic languages on the JVM, covering proposed enhancements to both the JVM itself and to the bytecode compiler.

    CON4082 - Nashorn: JavaScript on the JVM - Monday, Oct 1, 3:00 PM - 4:00 PM
    The JavaScript programming language has been experiencing a renaissance of late, driven by the interest in HTML5. Nashorn is a JavaScript engine implemented fully in Java on the JVM. It is based on the Da Vinci Machine (JSR 292) and will be available with JDK 8. This session describes the goals of Project Nashorn, gives a top-level view of how it all works, provides the current status, and demonstrates examples of JavaScript and Java working together.

    BOF4763 - Meet the Nashorn JavaScript Team - Tuesday, Oct 2, 4:30 PM - 5:15 PM
    Come to this session to meet the Oracle JavaScript (Project Nashorn) language team.

    BOF6661 - Nashorn, Node, and Java Persistence - Tuesday, Oct 2, 5:30 PM - 6:15 PM
    With Project Nashorn, developers will have a full and modern JavaScript engine available on the JVM. In addition, they will have support for running Node applications with Node.jar. This unique combination of capabilities opens the door for best-of-breed applications combining Node with Java SE and Java EE. In this session, you’ll learn about Node.jar and how it can be combined with Java EE components such as EclipseLink JPA for rich Java persistence. You’ll also hear about all of Node.jar’s mapping, caching, querying, performance, and scaling features.

    CON10657 - The Polyglot Java VM and Java Middleware - Thursday, Oct 4, 12:30 PM - 1:30 PM
    In this session, Red Hat and Oracle discuss the impact of polyglot programming from their own unique perspectives, examining non-Java languages that utilize Oracle’s Java HotSpot VM. You’ll hear a discussion of topics relating to Ruby, Lisp, and Clojure and the intersection of other languages where they may touch upon individual frameworks and projects, and you’ll get perspectives on JavaScript via the Nashorn Project, an upcoming JavaScript engine, developed fully in Java.

    CON5251 - Putting the Metaobject Protocol to Work: Nashorn’s Java Bindings - Thursday, Oct 4, 2:00 PM - 3:00 PM
    Project Nashorn is Oracle’s new JavaScript runtime in Java 8. Being a JavaScript runtime running on the JVM, it provides integration with the underlying runtime by enabling JavaScript objects to manipulate Java objects, implement Java interfaces, and extend Java classes. Nashorn is invokedynamic-based, and for its Java integration, it does away with the concept of wrapper objects in favor of direct virtual machine linking to Java objects’ methods provided by a metaobject protocol, providing much higher performance than what could be expected from a scripting runtime. This session looks at the details of the integration, a topic of interest to other language implementers on the JVM and a wider audience of developers who want to understand how Nashorn works.

    That's 6 sessions tooting the Nashorn this year at JavaOne, up from 2 last year.
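    For a small taste of the Java integration these sessions describe, here is a minimal sketch using the standard javax.script API as it ships in JDK 8 builds where Nashorn is registered under the name "nashorn" (an assumption about your JDK build):

        import javax.script.ScriptEngine;
        import javax.script.ScriptEngineManager;
        import javax.script.ScriptException;

        public class NashornTaste {
            public static void main(String[] args) throws ScriptException {
                // Nashorn registers itself with the standard javax.script API in JDK 8.
                ScriptEngine js = new ScriptEngineManager().getEngineByName("nashorn");
                // JavaScript driving a Java object: Java.type exposes Java classes to scripts.
                js.eval("var ArrayList = Java.type('java.util.ArrayList');"
                      + "var list = new ArrayList();"
                      + "list.add('hello from Nashorn');"
                      + "print(list.get(0));");
            }
        }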


  • Seeking advice on tools and technology for my new game [closed]

    - by k.k. slider
    I'm a C# developer who has been programming a game in my spare time using XNA and Visual Studio. The game's logic is mostly done and I've completed a prototype that has most of the functionality of (what I envision to be) the final game. However, having heard about the uncertain future and (possibly) limited audience for XNA games, I'm looking to switch platforms... but I don't know what technology would best suit my needs. Below are some specifics about my game and what exactly I'm looking for, if you're interested: The game is a 2D turn-based tactical RPG (strategy game) for two players. It is a basic sprite and tile based game with animations and sound. 3D capabilities are not necessary. I'd like to allow players to compete with others online, and have a basic ranking/matchmaking system. I will probably need something that can interact with a server and a database (the game is turn-based and has no RNG, so cheating would be easy to detect even if most computation is done client-side and minimal data is sent to the server). Ideally, I would be able to release an early version of the game and have people give feedback as I develop additional features (similar to Minecraft). I'd prefer to have a way to release periodic updates to the game instead of releasing an absolute final product. To reach the widest possible audience, I'd prefer technology that allows me to release on PC, Android, iOS, and (maybe) Mac. This is a game with simple mouse inputs which can fit on a mobile touch screen. The game should be monetizable. If I find success with this game, then I may consider becoming a full-time indie game developer. I have several other game ideas and have learned quite a bit from my first attempt at game development. My first thought was an F2P/microtransaction model, but I'm open to other suggestions. Language isn't a primary concern of mine, since I have a decent amount of experience using several languages to program large projects. I'm willing to spend money (e.g. on a developer's license), but the more expensive it gets, the more hesitant I am to use it. I've looked into the following solutions... there are a LOT of tools out there... if anyone has experience with any of these and would like to recommend/reject any of them, it would be helpful. C#/.NET (XNA/MonoGame/SDL/SlimDX/Xamarin/ExEn/ANX?) HTML5/JS (AppMobi/PhoneGap/Marmalade/FlashCanvas/Cordova/libRocket?) Python (Pyglet/Pygame/Kivy?) Java (JavaFX/libGDX?) Unity/Construct 2/Cocos2D/NME/Corona/other game creation software? I'd like something that can do 2D and isn't limited by being too high-level. Other languages (Lua/LOVE? Moai?) Thanks for answering this rather long and tedious question...


  • My LAN USB NIC is not working in ubuntu 11.10?

    - by Gaurav_Java
    Today I started my system and it seems that my LAN port is not working, so I bought a USB-to-LAN adapter and plugged it in; on my Ubuntu system it doesn't connect automatically. When I check with lsusb, it shows that there is a DM9601 Ethernet adapter connected. When I click on the network information in the panel, it shows "Wired Network (Broadcom NetLink BCM5784M Gigabit Ethernet PCIe)". I think it wants some driver for that; I don't have any idea how it can be used. Here is the output of sudo lspci -nn:

        00:00.0 Host bridge [0600]: Intel Corporation Mobile 4 Series Chipset Memory Controller Hub [8086:2a40] (rev 07)
        00:02.0 VGA compatible controller [0300]: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller [8086:2a42] (rev 07)
        00:02.1 Display controller [0380]: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller [8086:2a43] (rev 07)
        00:1a.0 USB Controller [0c03]: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 [8086:2937] (rev 03)
        00:1a.1 USB Controller [0c03]: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 [8086:2938] (rev 03)
        00:1a.7 USB Controller [0c03]: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 [8086:293c] (rev 03)
        00:1b.0 Audio device [0403]: Intel Corporation 82801I (ICH9 Family) HD Audio Controller [8086:293e] (rev 03)
        00:1c.0 PCI bridge [0604]: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 [8086:2940] (rev 03)
        00:1c.1 PCI bridge [0604]: Intel Corporation 82801I (ICH9 Family) PCI Express Port 2 [8086:2942] (rev 03)
        00:1c.2 PCI bridge [0604]: Intel Corporation 82801I (ICH9 Family) PCI Express Port 3 [8086:2944] (rev 03)
        00:1c.4 PCI bridge [0604]: Intel Corporation 82801I (ICH9 Family) PCI Express Port 5 [8086:2948] (rev 03)
        00:1d.0 USB Controller [0c03]: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 [8086:2934] (rev 03)
        00:1d.1 USB Controller [0c03]: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 [8086:2935] (rev 03)
        00:1d.2 USB Controller [0c03]: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 [8086:2936] (rev 03)
        00:1d.3 USB Controller [0c03]: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 [8086:2939] (rev 03)
        00:1d.7 USB Controller [0c03]: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 [8086:293a] (rev 03)
        00:1e.0 PCI bridge [0604]: Intel Corporation 82801 Mobile PCI Bridge [8086:2448] (rev 93)
        00:1f.0 ISA bridge [0601]: Intel Corporation ICH9M LPC Interface Controller [8086:2919] (rev 03)
        00:1f.2 SATA controller [0106]: Intel Corporation ICH9M/M-E SATA AHCI Controller [8086:2929] (rev 03)
        00:1f.3 SMBus [0c05]: Intel Corporation 82801I (ICH9 Family) SMBus Controller [8086:2930] (rev 03)
        02:00.0 Ethernet controller [0200]: Broadcom Corporation NetLink BCM5784M Gigabit Ethernet PCIe [14e4:1698] (rev 10)
        04:00.0 Network controller [0280]: Intel Corporation WiFi Link 5100 [8086:4232]

    and sudo lshw -class network:

        *-network
             description: Ethernet interface
             product: NetLink BCM5784M Gigabit Ethernet PCIe
             vendor: Broadcom Corporation
             physical id: 0
             bus info: pci@0000:02:00.0
             logical name: eth0
             version: 10
             serial: 00:1f:16:9a:56:98
             capacity: 1Gbit/s
             width: 64 bits
             clock: 33MHz
             capabilities: pm vpd msi pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.119 firmware=sb v2.19 latency=0 link=no multicast=yes port=twisted pair
             resources: irq:47 memory:f4500000-f450ffff
        *-network DISABLED
             description: Wireless interface
             product: WiFi Link 5100
             vendor: Intel Corporation
             physical id: 0
             bus info: pci@0000:04:00.0
             logical name: wlan0
             version: 00
             serial: 00:22:fa:09:02:00
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
             configuration: broadcast=yes driver=iwlagn driverversion=3.0.0-17-generic firmware=8.83.5.1 build 33692 latency=0 link=no multicast=yes wireless=IEEE 802.11abgn
             resources: irq:46 memory:f4600000-f4601fff
        *-network
             description: Ethernet interface
             physical id: 4
             logical name: eth1
             serial: 00:60:6e:00:f1:7d
             size: 100Mbit/s
             capacity: 100Mbit/s
             capabilities: ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=dm9601 driverversion=22-Aug-2005 duplex=full firmware=Davicom DM9601 USB Ethernet ip=192.168.1.34 link=yes multicast=yes port=MII speed=100Mbit/s

    I am using a WiMAX internet connection, which I have to connect to from the browser. At that time my system does not show that I am connected to any wired connection; but when I connect to the internet from another system and then plug in my USB LAN again, it shows that I am connected to a wired connection. [Screenshots: connecting WiMAX from the browser, and the network connections shown after connecting to the internet.]
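    For anyone debugging a similar setup, a few generic checks with standard tools (nothing here is specific to this exact adapter) can confirm the dm9601 interface is alive and show which interface currently owns the default route:

        lsusb | grep -i 9601        # confirm the USB NIC is enumerated
        dmesg | tail                # driver messages after plugging the adapter in
        ip addr show eth1           # eth1 is the dm9601 interface in the lshw output above
        ip route                    # which interface currently has the default route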


  • How much is a subscriber worth?

    - by Tom Lewin
    This year at Red Gate, we’ve started providing a way to back up SQL Azure databases and Azure storage. We decided to sell this as a service, instead of a product, which means customers only pay for what they use. Unfortunately for us, it makes figuring out revenue much trickier. With a product like SQL Compare, a customer pays for it, and it’s theirs for good. Sure, we offer support and upgrades, but, fundamentally, the sale is a simple, upfront transaction: we’ve made this product, you need this product, we swap product for money and everyone is happy. With software as a service, it isn’t that easy. The money and product don’t change hands up front. Instead, we provide a service in exchange for a recurring fee. We know someone buying SQL Compare will pay us $X, but we don’t know how long service customers will stay with us, or how much they will spend. How do we find this out? We use lifetime value analysis. What is lifetime value? Lifetime value, or LTV, is how much a customer is worth to the business. For Entrepreneurs has a brilliant write up that we followed to conduct our analysis. Basically, it all boils down to this equation: LTV = ARPU x ALC To make it a bit less of an alphabet-soup and a bit more understandable, we can write it out in full: The lifetime value of a customer equals the average revenue per customer per month, times the average time a customer spends with the service Simple, right? A customer is worth the average spend times the average stay. If customers pay on average $50/month, and stay on average for ten months, then a new customer will, on average, bring in $500 over the time they are a customer! Average spend is easy to work out; it’s revenue divided by customers. The problem comes when we realise that we don’t know exactly how long a customer will stay with us. How can we figure out the average lifetime of a customer, if we only have six months’ worth of data? The answer lies in the fact that: Average Lifetime of a Customer = 1 / Churn Rate The churn rate is the percentage of customers that cancel in a month. If half of your customers cancel each month, then your average customer lifetime is two months. The problem we faced was that we didn’t have enough data to make an estimate of one month’s cancellations reliable (because barely anybody cancels)! To deal with this data problem, we can take data from the last three months instead. This means we have more data to play with. We can still use the equation above, we just need to multiply the final result by three (as we worked out how many three month periods customers stay for, and we want our answer to be in months). Now these estimates are likely to be fairly unreliable; when there’s not a lot of data it pays to be cautious with inference. That said, the numbers we have look fairly consistent, and it’s super easy to revise our estimates when new data comes in. At the very least, these numbers give us a vague idea of whether a subscription business is viable. As far as Cloud Services goes, the business looks very viable indeed, and the low cancellation rates are much more than just data points in LTV equations; they show that the product is working out great for our customers, which is exactly what we’re looking for!
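    The whole calculation fits in a few lines. Here is an illustrative sketch (the numbers are made up for the example, not Red Gate's actual figures):

        def lifetime_value(monthly_arpu, churn_3mo):
            # 1 / churn = average customer lifetime in 3-month periods;
            # multiply by 3 to convert that lifetime to months
            avg_lifetime_months = 3.0 / churn_3mo
            return monthly_arpu * avg_lifetime_months

        # $50/month average revenue, 15% of customers cancelling per quarter:
        # lifetime_value(50, 0.15) == 50 * 20 == $1000 per new customer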


  • Move over DFS and Robocopy, here is SyncToy!

    - by andywe
    Ever since Windows 2000, I have always had the need to replicate data to multiple endpoints with the same content. Until DFS was introduced, the method of thinking was to either manually copy the data location by location, or to batch script it with xcopy and schedule a task. Even though this worked (and still does today), it was cumbersome and intensive on the network, especially when dealing with larger amounts of data. Then along came robocopy, as an internal tool written by an enterprising programmer at Microsoft. We used it quite a bit, especially when we could not use DFS in the early days. It was received so well, it made it into the public realm. At least now we could have the ability to determine what files had changed and only replicate those. Well, over time there has been evolution of this ideal. DFS is obviously the Windows enterprise-class service to do this, along with BranchCache... however you don't always need or want the power of DFS, especially when it comes to small datacenter installations or remote offices. I have specific data sets that are on closed or restricted networks that either have a security need for this, or are in remote countries where bandwidth is at a premium. For this, I use the latest evolution for one-off replication, named SyncToy. SyncToy is from Microsoft, seemingly released in 2009, and wraps a nice GUI around setting up a paired set of folders (remember the mobile briefcase from Windows 98?), allowing you the choice of synchronization methods: one-way or two-way. Simply create a paired set of folders on the source and destination, choose your options for content, exclude any file types you don't want to replicate, and click run. Scheduling is even easier. MS has included a wrapper for doing just this, so all you enter in your scheduled task is SyncToyCmd.exe, a -R as an argument, and the time schedule. No more complicated command lines or scripts.

    I find this especially useful when I use MS Backup to back up a system volume, but only want subsets of backup information of a data share, and ONLY when that dataset has changed - not relying on full and incremental backups. An example of this is my application installation master share. I back this up with SyncToy because I do not need multiple backup copies; one copy elsewhere suffices. At home, it is very useful for your pictures, videos, music, etc.: the backup is online and ready to access, not waiting for you to restore a backup file, and there is no need to institute a domain simply to have DFS.

    Do note there is a risk: if you accidentally delete a file and do not catch this before the next sync, then depending on your SyncToy settings you can indeed lose that file as the destination updates - so due diligence applies. I make it a rule to sync mainly one way: I use my master share for making changes, and allow the schedule to follow suit. Any really important file I lock down as read-only through file permissions so it cannot be deleted unless I intervene.

    Check out the tool and have some fun! http://www.microsoft.com/en-us/download/details.aspx?DisplayLang=en&id=15155
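    As a concrete example of that scheduling step, a nightly run of all folder pairs can be registered with schtasks. The install path below is the usual default for SyncToy 2.1, but treat it as an assumption and point it at wherever SyncToyCmd.exe landed on your machine:

        schtasks /create /tn "SyncToy Nightly" ^
            /tr "\"C:\Program Files\SyncToy 2.1\SyncToyCmd.exe\" -R" ^
            /sc daily /st 02:00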


  • Cleaner ClientID's with ASP.NET 4.0

    - by amaniar
    A common complaint we have had when using ASP.NET Web Forms is the inability to control the ID attributes being rendered in the HTML markup when using server controls. Our interface engineers want to be able to predict the IDs of controls, thereby having more control over their client-side code for selecting/manipulating elements by ID or using CSS to target them. While playing with the just-released VS2010 and .NET 4.0 I discovered some real cool improvements. One of them is the ability to now have full control over the ID being rendered for server controls. ASP.NET 4.0 controls now have a new ClientIDMode property which gives the developer complete control over the IDs being rendered, making it easy to write JavaScript and CSS against the rendered HTML. By default the ClientIDMode is set to Predictable, which results in clean and predictable IDs by concatenating the IDs of the parent and child controls. So the following markup:

        <asp:Content ID="ParentContainer" ContentPlaceHolderID="MainContentPlaceHolder" runat="server">
            <asp:Label runat="server" ID="MyLabel">My Label</asp:Label>
        </asp:Content>

    will render:

        <span id="ParentContainer_MyLabel">My Label</span>

    instead of something like this (current behavior):

        <span id="ctl00_ParentContainer_MyLabel">My Label</span>

    Other modes include AutoID (renders IDs like it currently does in .NET 3.5), Static (renders the ID exactly as specified in the code) and Inherit (defers the mode to the parent control). So now I can write my jQuery selector as:

        $("#ParentContainer_MyLabel").text("My new Text");

    instead of:

        $('#<%=this.MyLabel.ClientID%>').text("My new Text");

    Scott Mitchell has a great article about this new feature: http://bit.ly/ailEJ2 Am excited about this and some other improvements. Many thanks to the ASP.NET team for listening!
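    And for cases where even the concatenated parent IDs are unwanted, here is a quick sketch of the Static mode using the same Label as above:

        <asp:Label runat="server" ID="MyLabel" ClientIDMode="Static">My Label</asp:Label>

        <%-- renders exactly: <span id="MyLabel">My Label</span>
             Static hands you the ID verbatim, so keeping it unique on the page is up to you. --%>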


  • Olympics data available for all on Windows Azure SQL Database and Power View

    - by jamiet
    Are you looking around for some decent test data for your BI demos? Well, if so, Microsoft have provided some data about all medals won at the Olympics Games (1900 to 2008) at OlympicsData workbook - Excel, SSIS, Azure sample; it provides analysis over athletes, countries, medal type, sport, discipline and various other dimensions. The data has been provided in an Excel workbook along with instructions on how to load the data into a Windows Azure SQL Database using SQL Server Integration Services (SSIS). Frankly though, the rigmarole of standing up your own Windows Azure SQL Database ok, SQL Azure database, is both costly (SQL Azure isn’t free) and time consuming (the provided instructions aren’t exactly an idiot’s guide and getting SSIS to work properly with Excel isn’t a barrel of laughs either). To ease the pain for all you BI folks out there that simply want to party on the data I have loaded it all into the SQL Azure database that I use for hosting AdventureWorks on Azure. You can read more about AdventureWorks on Azure below however I’ll summarise here by saying it is a SQL Azure database provided for the use of the SQL Server community and which is supported by voluntary donations. To view the data the credentials you need are: Server mhknbn2kdz.database.windows.net  Database AdventureWorks2012 User sqlfamily Password sqlf@m1ly Type those into SSMS and away you go, the data is provided in four tables [olympics].[Sport], [olympics].[Discipline], [olympics].[Event] & [olympics].[Medalist]: I figured this would be a good candidate for a Power View report so I fired up Excel 2013 and built such a report to slice’n’dice through the data – here are some screenshots that should give you a flavour of what is available: A view of all the available data Where do all the gymastics medals go? Which countries do top ten all-time medal winners come from? You get the idea. There is masses of information here and if you have Excel 2013 handy Power View provides a quick and easy way of surfing through it. To save you the bother of setting up the Power View report yourself you can have the one that I took these screenshots from, it is available on my SkyDrive at OlympicsAnalysis.xlsx so just hit the link and download to play to your heart’s content. Party on, people! As I said above the data is hosted on a SQL Azure database that I use for hosting “AdventureWorks on Azure” which I first announced in March 2013 at AdventureWorks2012 now available for all on SQL Azure. I’ll repeat the pertinent parts of that blog post here: I am pleased to announce that as of today … [AdventureWorks2012] now resides on SQL Azure and is available for anyone, absolutely anyone, to connect to and use for their own means. This database is free for you to use but SQL Azure is of course not free so before I give you the credentials please lend me your ears eyes for a short while longer. AdventureWorks on Azure is being provided for the SQL Server community to use and so I am hoping that that same community will rally around to support this effort by making a voluntary donation to support the upkeep which, going on current pricing, is going to be $119.88 per year. If you would like to contribute to keep AdventureWorks on Azure up and running for that full year please donate via PayPal to [email protected] Any amount, no matter how small, will help. If those 50+ people that retweeted me beforehand all contributed $2 then that would just about be enough to keep this up for a year. 
If the community contributes more than we need then there are a number of additional things that could be done:

    - Host additional databases (Northwind anyone??)
    - Host in more datacentres (this first one is in Western Europe)
    - Make a charitable donation

    That last one, a charitable donation, is something I would really like to do. The SQL community have proved before that they can make a significant contribution to charitable organisations through purchasing the SQL Server MVP Deep Dives book, and I harbour hopes that AdventureWorks on Azure can continue in that vein. So please, if you think AdventureWorks on Azure is something that is worth supporting, please make a contribution. I’d like to emphasize that last point. If my hosting this Olympics data is useful to you, please support this initiative by donating. Thanks in advance. @Jamiet
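    If you want to poke at the Olympics tables directly before building anything in Power View, a couple of exploratory queries need nothing beyond the schema and table names given above (no column names are assumed here):

        -- peek at the shape of each table
        SELECT TOP (10) * FROM [olympics].[Sport];
        SELECT TOP (10) * FROM [olympics].[Medalist];

        -- how much data is there to play with?
        SELECT COUNT(*) AS MedalistRows FROM [olympics].[Medalist];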


  • TechEd 2010 Important Events

If you'll be attending TechEd in New Orleans in a couple of weeks, make sure the following are all on your calendar:

    Party with Palermo: TechEd 2010 Edition
    Sunday 6 June 2010, 7:30-9:30pm Central Time
    RSVP and see who else is coming here. The party takes place from 7:30pm to 9:30pm Central (Local) Time, and includes a full meal, free swag, and prizes. The event is being held at Jimmy Buffett's Margaritaville, located at 1104 Decatur Street.

    Developer Practices Session: DPR304 FAIL: Anti-Patterns and Worst Practices
    Monday 7 June 2010, 4:30pm-5:45pm Central Time, Room 276
    Come to my session and hear about what NOT to do on your software project. Hear my own and others' war stories and lessons learned. You'll laugh, you'll cry, you'll realize you're a much better developer than a lot of folks out there. Here's the official description: Everybody likes to talk about best practices, tips, and tricks, but often it is by analyzing failures that we learn from our own and others' mistakes. In this session, Steve describes various anti-patterns and worst practices in software development that he has encountered in his own experience or learned about from other experts in the field, along with advice on recognizing and avoiding them. View DPR304 in TechEd Session Catalog >>

    Exhibition Hall Reception
    Monday 7 June 2010, 5:45pm-9pm
    Immediately following my session, come meet the show's exhibitors, win prizes, and enjoy plenty of food and drink. Always a good time.

    Party: Geekfest
    Tuesday 8 June, 8pm-11pm Central Time, Pat O'Brien's
    Let's face it, going to a technical conference is good for your career but it's not a whole lot of fun. You need an outlet. You need to have fun. Cheap beer and lousy pizza (with a New Orleans twist). We are bringing back GeekFest! Join us at Pat O'Brien's for a night of gumbo, beer and hurricanes. There are limited invitations available, so what are you waiting for? If you are attending the TechEd 2010 conference and you are a developer, you are invited. To register, pick up your "duck" ticket (and wristband) in the TechEd Technical Learning Center (TLC) at the Developer Tools & Languages (DEV) information desk. You must have a wristband to get in. Tuesday, June 8th from 8pm-11pm, Pat O'Brien's New Orleans, 624 Bourbon Street, New Orleans, LA 70130.

    Closing Party at Mardi Gras World
    Thursday 10 June, 7:30pm-10pm Central Time
    Join us for the Closing Party and enjoy great food, beverages, and the excitement of New Orleans at Mardi Gras World. The colors, the lights, the music, the joie de vivre... it's all here. Learn more >>


  • Oracle ZS3 Contest for Partners: Share an unforgettable experience at the Teatro Alla Scala in Milan

    - by Claudia Caramelli-Oracle
    Dear valued Partner, We are pleased to launch a contest exclusive to our partners dedicated to promoting and selling Oracle Systems! You are essential to the success of Oracle and we want to recognize your contribution and effort in driving Oracle Storage to the market. To show our appreciation we are delighted to announce a contest, giving the winners the opportunity to attend a roundtable chaired by Senior Oracle Executives and spend an unforgettable evening at the magnificent Teatro Alla Scala in Milan, followed by a stay at the Grand Hotel et de Milan, courtesy of Oracle. Recognition will be given to 12 partner companies (10 VARs & 2 VADs) who will be recognized for their ZFS storage booking achievement in the broad market between June 1st and July 18th 2014.

    Criteria of Eligibility
    - A minimum deal value of $30k is required for qualification
    - Partners who are wholly or partially owned by a public sector organization are not eligible for participation

    Winners
    The winning VARs will be:
    - The highest ZS3 or ZBA bookings achievers by COB on July 18th, 2014 in each Oracle EMEA region (1)
    - The highest Oracle on Oracle (2) ZS3 or ZBA bookings achievers by COB on July 18th, 2014 in each Oracle EMEA region
    The winning VADs (3) will be:
    - The highest ZS3 or ZBA bookings achiever by COB on July 18th 2014 in EMEA
    - The highest Oracle on Oracle (2) ZS3 or ZBA bookings achiever by COB on July 18th 2014 in EMEA

    The Prize
    Winners will be invited to participate in a roundtable chaired by Oracle on Monday September 8th 2014 in Milan and to be guests of Oracle in the evening of September 8th, 2014 at the Teatro Alla Scala. The evening will comprise a private tour of the Scala museum, a cocktail reception in the elegant museum rooms, and the performance by the renowned soprano Maria Agresta. Our guests will then retire for the evening to the Grand Hotel et de Milan, courtesy of Oracle. Oracle shall be the final arbiter in selecting the winners and all winners will be notified via their Oracle account manager. Full details about the contest, expenses covered by Oracle and the timetable of events can be found on the Oracle EMEA Hardware (Servers & Storage) Partner Community workspace (FY15 Q1 ZFS Partner Contest). Remember: access to the community workspace requires membership. If you are not a member please register here. Good luck!! For more information, please contact Sasan Moaveni.

    (1) Two VAR winners for each EMEA region - Eastern Europe & CIS, Middle East & Africa, South Europe, North Europe, UK/Ireland & Israel - as per the criteria outlined above.
    (2) Oracle on Oracle, in this instance, means ZS3 or ZBA storage attached to DB or DB options, Engineered Systems or Sparc servers sold to the same customer by the same partner within the contest timelines.
    (3) Two VAD winners, one for each of the criteria outlined above, will be selected from across EMEA.


  • Perfect End to a Bad Day

    - by TehGrumpyCoder
    Yesterday's post about A Bad Day at Work actually had an addendum to it. There were apparently a bunch of guys on ice skates last night competing in some sport way the hell and gone over on the other side of the valley, and enough people couldn't live without seeing them that they had all major arteries heading west honked. I mean honked... the traffic guy reported the 101 had 16 miles of backup... yikes. Since I worked downtown for a number of years, my fallback is to cut across the city on surface streets to get to one of my old 'haunts' and just drive it home from there. Of course with the 101 backed up, then I17 would logically be as well, so I kept the news on rather than my Zune and heard where the bad stuff was going North. I popped out on the freeway about 7 miles south of my exit. Got to the exit which is about a mile from the house without killing or maiming me or anyone else. Waited patiently at the light in the inside lane to make a left and go under the freeway proceeding West. The light changed, I had full green, I started through and whoa... I've got someone in a little rat car crossing my bow! A little explanation... I drive a 3/4 ton pickup with a V-10, extended cab and shell on the back. It's not jacked up, but it sits up pretty good and is longer than any parking place I've ever tried to put it into. I consider this truck to be the consolation prize for paying uninsured motorist coverage for 45 years and having Pilar Martinez totally destroy a 3/4 ton Silverado on March 1, 2007 by plowing into me at traffic speed while I was stopped at a light. If you pay for uninsured motorist coverage, ask your insurance agent *exactly* what that means... I bet it's different than what you think it means. But I digress, sorry... So here I am with a car that is shorter from top to road than the hood on my truck, and the driver thought it would be safe to run a red light and see if they could get past me before I got into the lane. The right side of my front bumper was almost into the driver's window when I hit the brakes and wheeled it left. Fortunately for all involved, I saw it soon enough, and pulled into the 2nd lane for making a left to go back South. I looked in my mirror, signalled a move, then moved over behind the yuck in the rat car. I then punched it, and the future hood ornament and I both made it through the next light. I pulled alongside to let her know that she was DEFINITELY Number 1 in my book, and it's a middle-age woman looking at me with a "sorry, it was an accident" show of pouty face and arms held up. Tough $hit lady... that may have worked when you were 18, but it's not working anymore, and it wasn't an accident... you ran a freakin' red light and almost got yourself killed. That just about put a bow on the day... I was home later than usual, pissed off about work stuff, pissed off at traffic, and now that. I ate dinner, watched a little TV, and was asleep about 9:30 exhausted. Hope today is better.


  • Experimenting with other search engines

    - by Bill Graziano
    I’ve been a Google user so long I can hardly remember what I used before it.  Alta Vista maybe?  Or Yahoo.  I’ve tried Bing off and on but it never really stuck.  I probably care more about search engines than your average user because of their impact on SQLTeam.com.  Lately I’ve been trying two other search engines and actually switched to one of them.

    I’ve played with Blekko a little in the past.  They have some interesting ways to “slice up” your results.  For example, searching on “SQL Server /blogs /date” should just search all the recently updated blogs.  Those two extra words on the search are slashtags.  The full list of slashtags runs from /forums to just see forums to /twitter to /nikon to /reviews and on and on and on.  I laughed when I saw they had slashtags for both liberal and conservative.  I’d hate to find any search results that don’t match my existing worldview :)  You can also create your own slashtags.  I created a mini-search engine for the SQL Server blogs that I read.  You can search it for “backup” at http://blekko.com/ws/backup+/billgraziano/sql-sites.  I uploaded my OPML and it limited the search to just those sites.  It seems like the site is focusing more on curating results and less on algorithms.  This is an interesting site for those power searchers.  There are some great ways to curate results using slashtags.  For 99% of my searches (type words, click on one of the first few links) slashtags are overkill.  They do have some good information on page and site ranking though so I’ll probably spend some time looking through that.  Blekko recently got my attention again when they said they were banning “content farms” - and that includes eHow and experts-exchange.  I always feel used when I click on a link to EE and find myself scrolling all the way to the bottom to see if I can find the answer.  Sometimes it’s there but sometimes it tells me I need to pay first.  I’ve longed for a way to always exclude certain sites.  Blekko might be taking a hammer to a problem that needs a scalpel but it’s an interesting choice.  (And some of the comments in the TechCrunch link are interesting if you’re a search nerd.)

    DuckDuckGo is an odd name for a search engine.  Their big hook is that they don’t have search history.  If you wade through your Google account you can probably find the page where it stores your search history.  It was pretty enlightening to find mine.  It was easy to disable but that got me started looking at other search engines.  DDG (or DukGo) just feels like Google used to in the old days.  The results are good enough and the site is fast.  Searches will return a snippet from Wikipedia or other sites (like StackOverflow) at the top.  I think the idea is to answer the question without needing to visit the site.  I’m not sure that’s a good thing for SQLTeam.com.  The only thing I really miss is image search.  You can add a “!i” at the end of any search and it will search the images on Bing.  Bing doesn’t have a great image search but it works for most of what I need.  They call these exclamation marks “!bangs” and they are kinda, sorta like slashtags.  I’ve been using DuckDuckGo now for a few weeks and I’m pretty happy with it.  I use Chrome for my browser and it was an easy switch to make.  It’s still a little surprising seeing my search results come up in a different format.  I’m starting to get used to it though.


  • trying to setup wireless

    - by JohnMerlino
    I'm trying to set up wireless on a Dell Vostro 1520 laptop, with the latest Ubuntu install. Here's the output of some of the commands that I was told to run:

    lshw -C network

        viggy@ubuntu:~$ lshw -C network
        WARNING: you should run this program as super-user.
          *-network
               description: Ethernet interface
               product: RTL8111/8168B PCI Express Gigabit Ethernet controller
               vendor: Realtek Semiconductor Co., Ltd.
               physical id: 0
               bus info: pci@0000:08:00.0
               logical name: eth0
               version: 03
               serial: 00:24:e8:da:84:25
               size: 100Mbit/s
               capacity: 1Gbit/s
               width: 64 bits
               clock: 33MHz
               capabilities: bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
               configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8168d-1.fw ip=192.168.2.6 latency=0 multicast=yes port=MII speed=100Mbit/s
               resources: irq:47 ioport:3000(size=256) memory:f6004000-f6004fff memory:f6000000-f6003fff memory:f6020000-f603ffff
          *-network
               description: Network controller
               product: BCM4312 802.11b/g LP-PHY
               vendor: Broadcom Corporation
               physical id: 0
               bus info: pci@0000:0e:00.0
               version: 01
               width: 64 bits
               clock: 33MHz
               capabilities: bus_master cap_list
               configuration: driver=b43-pci-bridge latency=0
               resources: irq:18 memory:fa000000-fa003fff
          *-network DISABLED
               description: Wireless interface
               physical id: 1
               logical name: wlan0
               serial: 0c:60:76:05:ee:74
               capabilities: ethernet physical wireless
               configuration: broadcast=yes driver=b43 driverversion=3.2.0-29-generic firmware=N/A multicast=yes wireless=IEEE 802.11bg

    lspci

        00:00.0 Host bridge: Intel Corporation Mobile 4 Series Chipset Memory Controller Hub (rev 07)
        00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07)
        00:02.1 Display controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07)
        00:1a.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 03)
        00:1a.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 03)
        00:1a.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 03)
        00:1a.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 03)
        00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 03)
        00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 03)
        00:1c.1 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 2 (rev 03)
        00:1c.2 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 3 (rev 03)
        00:1c.3 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 4 (rev 03)
        00:1c.4 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 5 (rev 03)
        00:1c.5 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 6 (rev 03)
        00:1d.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 03)
        00:1d.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 03)
        00:1d.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 03)
        00:1d.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 03)
        00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev 93)
        00:1f.0 ISA bridge: Intel Corporation ICH9M LPC Interface Controller (rev 03)
        00:1f.2 SATA controller: Intel Corporation 82801IBM/IEM (ICH9M/ICH9M-E) 4 port SATA Controller [AHCI mode] (rev 03)
        00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 03)
        08:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 03)
        0e:00.0 Network controller: Broadcom Corporation BCM4312 802.11b/g LP-PHY (rev 01)
        1a:00.0 FireWire (IEEE 1394): O2 Micro, Inc. Device 10f7 (rev 01)
        1a:00.1 SD Host controller: O2 Micro, Inc. Device 8120 (rev 01)
        1a:00.2 Mass storage controller: O2 Micro, Inc. Device 8130 (rev 01)

    iwconfig

        lo        no wireless extensions.

        wlan0     IEEE 802.11bg  ESSID:off/any
                  Mode:Managed  Access Point: Not-Associated   Tx-Power=0 dBm
                  Retry long limit:7   RTS thr:off   Fragment thr:off
                  Power Management:on

        eth0      no wireless extensions.

    At this point in time, I don't have wireless.
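    A hedged next step, not part of the original post: lshw shows the BCM4312 bound to the b43 driver with firmware=N/A, and iwconfig reports Tx-Power=0 dBm, which usually means the card's proprietary firmware is missing and/or the radio is blocked. On Ubuntu releases of that era (the 3.2.0-29 kernel suggests 12.04), something like the following typically brings the LP-PHY variant up; the package name and steps are an assumption on my part, not from the post:

        rfkill list                                  # check for a hard/soft block first
        sudo apt-get install firmware-b43-installer  # fetch the proprietary b43 firmware
        sudo modprobe -r b43 && sudo modprobe b43    # reload the driver so it picks the firmware up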

    Read the article

  • Perspective Is Everything

    - by juanlarios
    Sitting in a window seat on my way back from Seattle, I looked out the window and saw the large body of water. I was reminded of childhood memories of running as hard as I could through burning hot sand with the anticipation of the splash of the ocean. Looking out the window, the water appeared like a sheet draped over the land. I couldn’t help but ponder how perspective changes everything. Over the last several days I had a chance to attend the MVP Summit in Redmond. I had a great time with fellow MVPs and the SharePoint Product Group. Although I can’t say much about what was discussed and what is coming in the future, I want to share some realizations I had while experiencing the MVP Summit. The SharePoint product is ever-improving and full of innovation, but it is also a reactionary embodiment of MVP, client, and market feedback. There are several features that come to mind that clients complain about, where I have felt helpless in informing them that the features are not as mature as they would like. Together, we figure out a way to make it work and deal with the limitations. It became clear that there are features that have taken on a different purpose in the marketplace from the original vision. The SP Product Group is working hard to react to these changes in vision and make SharePoint better for real-life implementations. It is easy to think that SharePoint should be all things to all people. In reality there are products that are very detailed in specific composites; they do this one thing well but severely lack in other areas. It’s easy sometimes to say, “What was Microsoft thinking with this feature?” The Product Group is doing all they can to make the moving pieces better and is dealing with the challenges of having all of them work together. Sometimes the features don’t fully embody the vision because of the many challenges, but trust me when I say the Product Group is really focused on delivery and innovation. As I was speaking with a fellow MVP throughout the session, we spoke about the iPad 2 (ironically announced this past week during the MVP Summit) and Microsoft’s possible product answer; I realized the days of reactionary products from MS are over. There are many users that will remember Vista and the painful execution in that product, but there has been a lot of success in Windows 7. There was no rush for a reactionary answer to the Nintendo Wii; as a result, a groundbreaking and game-changing product was brought to market, the XBOX Kinect! I can’t say much here, but it’s safe to say: expect innovation, and execution of products and technology that will change the market instead of reacting to it! There are many things I learned that I would love to share that have to do with perspective, technology, etc... but this is as far as I can go in details. This might not be new to you or specifically the message that was shared during the summit. These are just my impressions of the event and the spirit of future vision. Great things ahead!

    Read the article

  • What common interface would be appropriate for these game object classes?

    - by Jefffrey
    Question

    A component-based system's goal is to solve the problems that derive from inheritance: for example, the fact that some parts of the code (called components) are reused by very different classes that, hypothetically, would lie in very different branches of the inheritance tree. That's a very nice concept, but I've found out that CBS is often hard to accomplish without using ugly hacks. Implementations of this system are often far from clean. But I don't want to discuss this any further. My question is: how can I solve the same problems a CBS tries to solve with a very clean interface? (Possibly with examples; there is a lot of abstract talk about the "perfect" design already.)

    Context

    Here's an example I was going for before realizing I was just reinventing inheritance again:

        class Human {
        public:
            Position position;
            Movement movement;
            Sprite sprite;
            // other human specific components
        };

        class Zombie {
            Position position;
            Movement movement;
            Sprite sprite;
            // other zombie specific components
        };

    After writing that I realized I needed an interface, otherwise I would have needed N containers for N different types of objects (or to use boost::variant to gather them all together). So I thought of polymorphism (moving what systems do in a CBS design into class-specific functions):

        class Entity {
        public:
            virtual void on_event(Event) {} // not pure virtual on purpose
            virtual void on_update(World) {}
            virtual void on_draw(Window) {}
        };

        class Human : public Entity {
        private:
            Position position;
            Movement movement;
            Sprite sprite;
        public:
            virtual void on_event(Event) { ... }
            virtual void on_update(World) { ... }
            virtual void on_draw(Window) { ... }
        };

        class Zombie : public Entity {
        private:
            Position position;
            Movement movement;
            Sprite sprite;
        public:
            virtual void on_event(Event) { ... }
            virtual void on_update(World) { ... }
            virtual void on_draw(Window) { ... }
        };

    Which was nice, except for the fact that now the outside world cannot even know where a Human is positioned (it does not have access to its position member). That would be useful for tracking the player position for collision detection, or if on_update the Zombie wanted to track down its nearest human to move towards him. So I added const Position& get_position() const; to both the Zombie and Human classes. And then I realized that both pieces of functionality were shared, so they should have gone to the common base class: Entity. Do you notice anything? Yes, with that methodology I would end up with a god Entity class full of common functionality (which is the thing I was trying to avoid in the first place).

    Meaning of "hacks" in the implementation I'm referring to

    I'm talking about the implementations that define entities as simple IDs to which components are dynamically attached. Their implementation can vary from C-stylish:

        int last_id;
        Position* positions[MAX_ENTITIES];
        Movement* movements[MAX_ENTITIES];

    where positions[i], movements[i], component[i], ... make up the entity, to more C++-style:

        int last_id;
        std::map<int, Position> positions;
        std::map<int, Movement> movements;

    from which systems can detect whether an entity/id has particular components attached.
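    One cleaner route worth sketching (my own illustration, not from the question): make Entity a type-indexed bag of components rather than a behavior base class. Outside code can then ask any entity for its Position without casts and without a god class, and systems become plain functions over whichever components they need. A minimal C++ sketch, with all names hypothetical:

        #include <memory>
        #include <typeindex>
        #include <typeinfo>
        #include <unordered_map>
        #include <vector>

        struct Position { float x = 0, y = 0; };
        struct Movement { float dx = 0, dy = 0; };

        class Entity {
        public:
            // Attach a component; the entity owns it.
            template <typename C, typename... Args>
            C& add(Args&&... args) {
                auto c = std::make_shared<C>(std::forward<Args>(args)...);
                C& ref = *c;
                components_[std::type_index(typeid(C))] = std::move(c);
                return ref;
            }

            // Query a component; nullptr if this entity doesn't carry one.
            template <typename C>
            C* get() {
                auto it = components_.find(std::type_index(typeid(C)));
                return it == components_.end() ? nullptr : static_cast<C*>(it->second.get());
            }

        private:
            // shared_ptr<void> remembers the concrete deleter, so destruction stays safe.
            std::unordered_map<std::type_index, std::shared_ptr<void>> components_;
        };

        // A "system" is just a function over the components it cares about.
        void move_all(std::vector<Entity>& world, float dt) {
            for (Entity& e : world) {
                Position* p = e.get<Position>();
                Movement* m = e.get<Movement>();
                if (p && m) { p->x += m->dx * dt; p->y += m->dy * dt; }
            }
        }

    A Human and a Zombie then differ only in which components they are given at construction time, and collision code can call get<Position>() on anything without knowing its concrete type.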

    Read the article

  • ADF Mobile - Update through Web Service (with ADF Business Components)

    - by Shay Shmeltzer
    In my previous blog entry I went over the basics of exposing ADF Business Components through service interfaces, and developing a simple ADF Mobile application that accesses and fetches data from those services. In this entry we'll dive a bit deeper and address an update scenario through these web service interfaces. You can see the full demo video at the end of the post. In the first steps I show how to add an explicit method execution to fetch a specific record we want to update on the second page of a flow. For an update you'll be invoking a service method and passing the record you want to update as a parameter. As in many other Web service scenarios, we need to provide a complete object of a specific type to the method. The ADF Web service data control helps you here by offering an object of this type that you can drag and drop into your page. The next step is to make sure to fill that object with the values you want to update. In the demo we do this through coding in a backing bean that shows how to use the AdfmfJavaUtilities utility. The code gets the value from one field, gets a pointer to the parallel update field, and then copies from one to the other. At the end of the bean we manually execute the call to the update method on the Web service. Here is the demo: (embedded demo video; see the original post). Here is the code used in the backing bean in the demo above:

        package a.mobile;

        import javax.el.MethodExpression;
        import javax.el.ValueExpression;
        import oracle.adfmf.amx.event.ActionEvent;
        import oracle.adfmf.framework.api.AdfmfJavaUtilities;
        import oracle.adfmf.framework.model.AdfELContext;

        public class backing {

            public backing() {
            }

            public void copyAndUpdate(ActionEvent actionEvent) {
                AdfELContext adfELContext = AdfmfJavaUtilities.getAdfELContext();

                ValueExpression ve = AdfmfJavaUtilities.getValueExpression("#{bindings.DepartmentName.inputValue}", String.class);
                ValueExpression ve3 = AdfmfJavaUtilities.getValueExpression("#{bindings.DepartmentName1.inputValue}", String.class);
                ve3.setValue(adfELContext, ve.getValue(adfELContext));

                ve = AdfmfJavaUtilities.getValueExpression("#{bindings.DepartmentId.inputValue}", int.class);
                ve3 = AdfmfJavaUtilities.getValueExpression("#{bindings.DepartmentId1.inputValue}", int.class);
                ve3.setValue(adfELContext, ve.getValue(adfELContext));

                ve = AdfmfJavaUtilities.getValueExpression("#{bindings.ManagerId.inputValue}", int.class);
                ve3 = AdfmfJavaUtilities.getValueExpression("#{bindings.ManagerId1.inputValue}", int.class);
                ve3.setValue(adfELContext, ve.getValue(adfELContext));

                ve = AdfmfJavaUtilities.getValueExpression("#{bindings.LocationId.inputValue}", int.class);
                ve3 = AdfmfJavaUtilities.getValueExpression("#{bindings.LocationId1.inputValue}", int.class);
                ve3.setValue(adfELContext, ve.getValue(adfELContext));

                MethodExpression me = AdfmfJavaUtilities.getMethodExpression("#{bindings.updateDepartmentsView1.execute}", Object.class, new Class[] {});
                me.invoke(adfELContext, new Object[] {});
            }
        }

    Read the article

  • std::vector::size with glDrawElements crashes?

    - by NoobScratcher
    ( win32 / OpenGL 3.3 / GLSL 330 ) After a long time of trying to build a graphical user interface using just OpenGL graphics, I decided to go back to a GUI toolkit, and so in the process I have had to port a lot of my code to win32. But I have a problem with my glDrawElements function. My program compiles and runs fine until it gets to glDrawElements, then crashes... which is rather annoying, right? So I was trying to figure out why, and I found out it's the std::vector::size member not returning the correct number of faces in the unsigned integer vector, e.g. "vector<unsigned int> faces;". So when I use cout << faces.size() << endl; I get 68 elements???? instead of 24, as you can see here in this .obj file:

        # Blender v2.61 (sub 0) OBJ File: ''
        # www.blender.org
        v 1.000000 -1.000000 -1.000000
        v 1.000000 -1.000000 1.000000
        v -1.000000 -1.000000 1.000000
        v -1.000000 -1.000000 -1.000000
        v 1.000000 1.000000 -0.999999
        v 0.999999 1.000000 1.000001
        v -1.000000 1.000000 1.000000
        v -1.000000 1.000000 -1.000000
        s off
        f 1 2 3 4
        f 5 8 7 6
        f 1 5 6 2
        f 2 6 7 3   <--- 24 faces, not 68?
        f 3 7 8 4
        f 5 1 4 8

    I'm using a parser I created to get the faces/vertexes in my .obj file:

        char modelbuffer [20000];
        int MAX_BUFF = 20000;
        unsigned int face[3];
        FILE * pfile;
        pfile = fopen(szFileName, "rw");
        while(fgets(modelbuffer, MAX_BUFF, pfile) != NULL)
        {
            if('v')
            {
                Point p;
                sscanf(modelbuffer, "v %f %f %f", &p.x, &p.y, &p.z);
                points.push_back(p);
                cout << " p.x = " << p.x << " p.y = " << p.y << " p.z = " << p.x << endl;
            }
            if('f')
            {
                sscanf(modelbuffer, "f %d %d %d %d", &face[0], &face[1], &face[2], &face[3]);
                cout << "face[0] = " << face[0] << " face[1] = " << face[1] << " face[2] = " << face[2] << " face[3] = " << face[3] << "\n";
                faces.push_back(face[0] - 1);
                faces.push_back(face[1] - 1);
                faces.push_back(face[2] - 1);
                faces.push_back(face[3] - 1);
                cout << face[0] - 1 << face[1] - 1 << face[2] - 1 << face[3] - 1 << endl;
            }
        }

    I'm using this struct to store the x,y,z positions, and this vector is used with Point:

        struct Point
        {
            float x, y, z;
        };

        vector<Point> points;

    If someone could tell me why it's not working and how to fix it, that would be awesome. I also provide a pastebin with the full source code if you want a closer look: http://pastebin.com/gznYLVw7
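    A hedged reading of the bug (my diagnosis, not from the post): if('v') and if('f') test nonzero character constants, so both branches execute for every line read. The .obj file above has 17 lines (2 comments, 8 vertices, s off, 6 faces), and 17 lines times 4 pushed indices gives exactly the 68 the poster sees; the expected 6 x 4 = 24 only counts the f lines. Note also that face is declared unsigned int face[3] but four elements are written, which is out of bounds, and the vertex cout prints p.x where it means p.z. A corrected sketch of the loop:

        unsigned int face[4]; // a quad face has four indices, not three

        while (fgets(modelbuffer, MAX_BUFF, pfile) != NULL)
        {
            // Dispatch on the line's first character; if('v') is always true.
            if (modelbuffer[0] == 'v' && modelbuffer[1] == ' ')
            {
                Point p;
                sscanf(modelbuffer, "v %f %f %f", &p.x, &p.y, &p.z);
                points.push_back(p);
            }
            else if (modelbuffer[0] == 'f' && modelbuffer[1] == ' ')
            {
                // %u matches unsigned int; .obj indices are 1-based.
                sscanf(modelbuffer, "f %u %u %u %u", &face[0], &face[1], &face[2], &face[3]);
                for (int i = 0; i < 4; ++i)
                    faces.push_back(face[i] - 1);
            }
        }

    (And since GL_QUADS is gone from the OpenGL 3.3 core profile, those quads will eventually need splitting into triangles before glDrawElements anyway.)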

    Read the article

  • Inside Red Gate - Exercising Externally

    - by simonc
    Over the next few weeks, we'll be performing experiments on SmartAssembly to confirm or refute various hypotheses we have about how people use the product, what is stopping them from using it to its full extent, and what we can change to make it more useful and easier to use. Some of these experiments can be done within the team, some within Red Gate, and some need to be done with external users.

    External testing

    Some external testing can be done by standard usability tests and surveys; however, there are some hypotheses that can only be tested by building a version of SmartAssembly with some things in the UI or implementation changed. We'll then be able to look at how the experimental build is used compared to the 'mainline' build, which forms our baseline or control group, and use this data to confirm or refute the relevant hypotheses. However, there are several issues we need to consider before running experiments using separate builds:

    Ideally, the user wouldn't know they're running an experimental SmartAssembly. We don't want users to use the experimental build like it's an experimental build, we want them to use it like it's the real mainline build. Only then will we get valid, useful, and informative data concerning our hypotheses.

    There's no point running the experiments if we can't find out what happens after the download. To confirm or refute some of our hypotheses, we need to find out how the tool is used once it is installed. Fortunately, we've applied feature usage reporting to the SmartAssembly codebase itself to provide us with that information. Of course, this then makes the experimental data conditional on the user agreeing to send that data back to us in the first place. Unfortunately, even though this does limit the amount of useful data we'll be getting back, and possibly skews the data, there's not much we can do about this; we don't collect feature usage data without the user's consent. Looks like we'll simply have to live with this.

    What if the user tries to buy the experiment? This is something that isn't really covered by the Lean Startup book; how do you support users who give you money for an experiment? If the experiment is a new feature, and the user buys a license for SmartAssembly based on that feature, then what do we do if we later decide to pivot & scrap that feature? We've either got to spend time and money bringing that feature up to production quality and into the mainline anyway, or we've got disgruntled customers. Either way is bad. Again, there's not really any good solution to this.

    Similarly, what if we've removed some features for an experiment and a potential new user downloads the experimental build? (As I said above, there's no indication the build is an experimental build, as we want to see what users really do with it.) The crucial feature they need is missing, causing a bad trial experience, a lost potential customer, and a lost chance to help the customer with their problem. Again, this is something not really covered by the Lean Startup book, and something that doesn't have a good solution.

    So, some tricky issues there, not all of them with nice easy answers. Turns out the practicalities of running Lean Startup experiments are more complicated than they first seem! Cross posted from Simple Talk.

    Read the article

  • ReSharper 8.0 EAP now available

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2013/06/28/resharper-8.0-eap-now-available.aspx

    JetBrains have just released ReSharper 8.0 Beta on their Early Access Programme at http://www.jetbrains.com/resharper/whatsnew/?utm_source=resharper8b&utm_medium=newsletter&utm_campaign=resharper&utm_content=customers

    ReSharper 8.0 comes with the following new features:

    - Support for Visual Studio 2013 Preview. Yes, ReSharper is known to work well with the fresh preview of Visual Studio 2013, and if you have already started digging into it, ReSharper 8.0 Beta is ready for the challenge.
    - Faster code fixes. Thanks to the new Fix in Scope feature, you can choose to batch-fix some of the code issues that ReSharper detects in the scope of a project or the whole solution. Supported fixes include removing unused directives and redundant casts.
    - Project dependency viewer. ReSharper is now able to visualize a project dependency graph for a bird's eye view of dependencies within your solution, all without compiling anything!
    - Multifile templates. ReSharper's file templates can now be expanded to generate more than one file. For instance, this is handy for generating pairs of a main logic class and a class for extensions, or sets of partial files.
    - Navigation improvements. These include a new action called Go to Everything to let you search for a file, type or method name from the same input box; support for line numbers in navigation actions; a new tool window called Assembly Explorer for browsing through assemblies; and two more contextual navigation actions: Navigate to Generic Substitutions and Navigate to Assembly Explorer.
    - New solution-wide refactorings. The set of fresh refactorings is headlined by the highly requested Move Instance Method to move methods between classes without making them static. In addition, there are Inline Parameter and Pull Parameter. Last but not least, we're also introducing 4 new XAML-specific refactorings!
    - Extraordinary XAML support. A plethora of new and improved functionality for all developers working with XAML code includes dedicated grid inspections and quick-fixes; Extract Style, Extract, Move and Inline Resource refactorings; atomic renaming of dependency properties; and a lot more.
    - More accessible code completion. ReSharper 8 makes more of its IntelliSense magic available in automatic completion lists, including extension methods and an option to import a type. We're also introducing double completion, which gives you additional completion items when you press the corresponding shortcut for the second time.
    - A new level of extensibility. With the new NuGet-based Extension Manager, discovering, installing and uninstalling ReSharper extensions becomes extremely easy in Visual Studio 2010 and higher. When we say extensions, we mean not only full-fledged plug-ins but also sets of templates or SSR patterns that can now be shared much more easily.
    - CSS support improvements. Smarter usage search for CSS attributes, new CSS-specific code inspections, configurable support for CSS3 and earlier versions, compatibility checks against popular browsers - that's a rough outline of what's new for CSS in ReSharper 8.
    - A command-line version of ReSharper. ReSharper 8 goes beyond Visual Studio: we now provide a free standalone tool with hundreds of ReSharper inspections, and additionally a duplicate code finder, that you can integrate with your CI server or version control system.
    - Multiple minor improvements in areas such as decompiling and code formatting, as well as support for the Blue Theme introduced in Visual Studio 2012 Update 2.

    Read the article

  • Approach for packing 2D shapes while minimizing total enclosing area

    - by Dennis
    Not sure on my tags for this question, but in short .... I need to solve a problem of packing industrial parts into crates while minimizing the total containing area. These parts are motors, or pumps, or custom-made components, and they have quite unusual shapes. For some, it may be possible to assume that a part === rectangular cuboid, but some are not so simple, i.e. they have a shape more like that of a hammer or the letter T. With those (assuming a 2D shape), by alternating the direction of top & bottom, one can pack more objects into the same space than if all tops were in the same direction. Crude example below with letter "T"-shaped parts: ***** xxxxx ***** x ***** *** ooo * x vs * x vs * x vs * x o * x * xxxxx * x * x o xxxxx xxx

    Right now we are solving the problem by something like this:
    1. using CAD software, make actual models of how things fit in crate boxes
    2. make estimates of actual crate dimensions & write them into an Excel file

    (1) is a crazy amount of work, and as the result we have just a limited number of possible entries in (2), the Excel file. The good thing is that programming this is relatively easy. Given a combination of products to go into crates, we do a lookup, and if an entry exists in the Excel file (or database), we bring it out. If it doesn't, we say "sorry, no data!". I don't necessarily want to go full force on making up some crazy algorithm that, given a geometrical part description, can align, rotate, and figure out the best part packing into a crate given its shape, but maybe I do...

    Question

    Well, here is my question: assuming that I can represent my parts as 2D (to be determined how), and that some parts look like the letter T and some look like rectangles, which algorithm can I use to give me a good estimate of the dimensions of the encompassing area, while ensuring that the parts are packed into a minimal possible area, to minimize crating/shipping costs? Are there approximation algorithms? Seeing how this can get complex, is there an existing library I could use?

    My thought / Approach

    My naive approach would be to define a way to describe the position of parts, place the first part, and compute the total enclosing area & dimensions. Then place the 2nd part in 0-degree orientation, repeat; place it at 180-degree orientation, repeat (for my case I don't think 90-degree rotations will be meaningful, due to the long lengths of parts). Proceed by brute force, "tacking on" other parts to the enclosing area until all parts are processed. I may have to shift some parts a tad (see the 3rd pictorial example above with the letter T). This adds a layer of 2D complexity rather than 1D. I am not sure how to approach this. One idea I have is genetic algorithms, but I think those will take up too much processing power and time. I will need to look out for shape collisions, as well as add extra padding space, since we are talking about real parts with irregularities rather than perfect imaginary blocks. I'm afraid this can get geometrically messy fairly fast, and I'd rather keep things simple, if I can. But what if the best (practical) solution is to pack things into different crate boxes rather than just one? That can get a bit more tricky. There is a human element involved as well: like parts can go into the same box and are thus a constraint to be considered. Some parts that are not the same are sometimes grouped together for shipping and can be considered as a common grouped item. Sometimes customers want things shipped their way, which adds a human element to the constraints, so there will have to be some customization.
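    As a first cut, here is a sketch of a shelf (strip) packing heuristic in C++ (my own illustration, under the big simplifying assumption that every part is reduced to its 2D bounding box, so it ignores the T-interlocking trick above): sort parts by decreasing height, fill shelves left to right, and try several candidate strip widths, keeping the smallest enclosing area. Interlocking irregular shapes properly is the territory of nesting algorithms built on no-fit polygons, which would be the next step up in fidelity:

        #include <algorithm>
        #include <iostream>
        #include <vector>

        struct Rect { double w, h; };

        // First-fit decreasing-height shelf packing into a strip of fixed width.
        // Assumes every part is no wider than the strip. Returns the packed height;
        // the enclosing area is stripWidth * height.
        double shelfPackHeight(std::vector<Rect> parts, double stripWidth) {
            std::sort(parts.begin(), parts.end(),
                      [](const Rect& a, const Rect& b) { return a.h > b.h; });
            double shelfY = 0, shelfH = 0, cursorX = 0;
            for (const Rect& p : parts) {
                if (cursorX + p.w > stripWidth) { // part doesn't fit: start a new shelf
                    shelfY += shelfH;
                    shelfH = 0;
                    cursorX = 0;
                }
                cursorX += p.w;
                shelfH = std::max(shelfH, p.h);
            }
            return shelfY + shelfH;
        }

        int main() {
            // Hypothetical part bounding boxes (width x height).
            std::vector<Rect> parts = { {3, 5}, {4, 2}, {2, 2}, {5, 1} };
            double bestArea = 1e300, bestW = 0;
            for (double w = 5; w <= 14; w += 1) { // candidate crate widths
                double area = w * shelfPackHeight(parts, w);
                if (area < bestArea) { bestArea = area; bestW = w; }
            }
            std::cout << "best width " << bestW << ", enclosing area " << bestArea << "\n";
        }

    The 0/180-degree rotations from the approach above slot in by also trying each part with width and height swapped; the multi-crate and "must ship together" constraints would then sit a layer above this, choosing which parts go into which strip before packing each one.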

    Read the article

  • The Growing Importance of Network Virtualization

    - by user12608550
    The Growing Importance of Network Virtualization We often focus on server virtualization when we discuss cloud computing, but just as often we neglect to consider some of the critical implications of that technology. The ability to create virtual environments (or VEs [1]) means that we can create, destroy, activate and deactivate, and more importantly, MOVE them around within the cloud infrastructure. This elasticity and mobility has profound implications for how network services are defined, managed, and used to provide cloud services. It's not just servers that benefit from virtualization, it's the network as well. Network virtualization is becoming a hot topic, and not just for discussion but for companies like Oracle and others who have recently acquired net virtualization companies [2,3]. But even before this topic became so prominent, Solaris engineers were working on technologies in Solaris 11 to virtualize network services, known as Project Crossbow [4]. And why is network virtualization so important? Because old assumptions about network devices, topology, and management must be re-examined in light of the self-service, elasticity, and resource sharing requirements of cloud computing infrastructures. Static, hierarchical network designs, and inter-system traffic flows, need to be reconsidered and quite likely re-architected to take advantage of new features like virtual NICs and switches, bandwidth control, load balancing, and traffic isolation. For example, traditional multi-tier Web services (Web server, App server, DB server) that share net traffic over Ethernet wires can now be virtualized and hosted on shared-resource systems that communicate within a larger server at system bus speeds, increasing performance and reducing wired network traffic. And virtualized traffic flows can be monitored and adjusted as needed to optimize network performance for dynamically changing cloud workloads. Additionally, as VEs come and go and move around in the cloud, static network configuration methods cannot easily accommodate the routing and addressing flexibility that VE mobility implies; virtualizing the network itself is a requirement. Oracle Solaris 11 [5] includes key network virtualization technologies needed to implement cloud computing infrastructures. It includes features for the creation and management of virtual NICs and switches, and for the allocation and control of the traffic flows among VEs [6]. Additionally it allows for both sharing and dedication of hardware components to network tasks, such as allocating specific CPUs and vNICs to VEs, and even protocol-specific management of traffic. So, have a look at your current network topology and management practices in view of evolving cloud computing technologies. And don't simply duplicate the physical architecture of servers and connections in a virtualized environment…rethink the traffic flows among VEs and how they can be optimized using Oracle Solaris 11 and other Oracle products and services. [1] I use the term "virtual environment" or VE here instead of the more commonly used "virtual machine" or VM, because not all virtualized operating system environments are full OS kernels under the control of a hypervisor…in other words, not all VEs are VMs. In particular, VEs include Oracle Solaris zones, as well as SPARC VMs (previously called LDoms), and x86-based Solaris and Linux VMs running under hypervisors such as OEL, Xen, KVM, or VMware. 
[2] Oracle follows VMware into network virtualization space with Xsigo purchase; http://www.mercurynews.com/business/ci_21191001/oracle-follows-vmware-into-network-virtualization-space-xsigo [3] Oracle Buys Xsigo; http://www.oracle.com/us/corporate/press/1721421 [4] Oracle Solaris 11 Networking Virtualization Technology, http://www.oracle.com/technetwork/server-storage/solaris11/technologies/networkvirtualization-312278.html [5] Oracle Solaris 11; http://www.oracle.com/us/products/servers-storage/solaris/solaris11/overview/index.html [6] For example, the Solaris 11 'dladm' command can be used to limit the bandwidth of a virtual NIC, as follows: dladm create-vnic -l net0 -p maxbw=100M vnic0

    Read the article

  • Is this Hybrid of Interface / Composition kosher?

    - by paul
    I'm working on a project in which I'm considering using a hybrid of interfaces and composition as a single thing. What I mean by this is having a contain*ee* class be used as a front for functionality implemented in a contain*er* class, where the container exposes the containee as a public property. Example (pseudocode):

        class Visibility(lambda doShow, lambda doHide, lambda isVisible)
            public method Show() {...}
            public method Hide() {...}
            public property IsVisible
            public event Shown
            public event Hidden

        class SomeClassWithVisibility
            private member visibility = new Visibility(doShow, doHide, isVisible)
            public property Visibility with get() = visibility
            private method doShow() {...}
            private method doHide() {...}
            private method isVisible() {...}

    There are three reasons I'm considering this:
    1. The language in which I'm working (F#) has some annoyances w.r.t. implementing interfaces the way I need to (unless I'm missing something), and this will help avoid a lot of boilerplate code.
    2. The containee classes could really be considered properties of the container class(es); i.e. there seems to be a fairly strong has-a relationship.
    3. The containee classes will likely implement code which would have been pretty much the same when implemented in all the container classes, so why not do it once in one place? In the above example, this would include managing and emitting the Shown/Hidden events.

    Does anyone see any issues with this Composiface/Intersition method, or know of a better way? (See the sketch after this post.)

    EDIT 2012.07.26 - It seems a little background information is warranted: Where I work, we have a bunch of application front-ends that have limited access to system resources -- they need access to these resources to fully function. To remedy this we have a back-end application that can access the needed resources, with which the front-ends can communicate. (There is an API written for the front-ends for accessing back-end functionality as though it were part of the front-end.) The back-end program is out of date and its functionality is incomplete. It has made the transition from company to company a couple of times and we can't even compile it anymore. So I'm trying to rewrite it in my spare time. I'm trying to update things to make a nice(r) interface/API for the front-ends (while allowing for backwards compatibility with older front-ends), hopefully something full of OOPy goodness. The thing is, I don't want to write the front-end API after I've written pretty much the same code in F# for implementing the back-end; so, what I'm planning on doing is applying attributes to the classes/methods/properties that I would like to have code for in the API, then generating this code from the F# assembly using reflection. The method outlined in this question is a possible alternative I'm considering instead of implementing straight interfaces on the classes in F#, because they're kind of a bear: in order to access something of an interface that has been implemented in a class, you have to explicitly cast an instance of that class to the interface type. This would make things painful when getting calls from the front-ends. If you don't want to have to do this, you have to call out all of the interface's methods/properties again in the class, outside of the interface implementation (which is separate from regular class members), and call the implementation's members. This is basically repeating the same code, which is what I'm trying to avoid!
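    For what it's worth, here is a minimal F# sketch of the pattern described above (my own illustration; the names come from the pseudocode, but the implementation details are assumptions). The containee owns the event plumbing, so none of that boilerplate is repeated in each container:

        type Visibility(doShow: unit -> unit, doHide: unit -> unit, isVisible: unit -> bool) =
            let shown = Event<unit>()
            let hidden = Event<unit>()
            member this.Show() =
                doShow ()
                shown.Trigger(())
            member this.Hide() =
                doHide ()
                hidden.Trigger(())
            member this.IsVisible = isVisible ()
            member this.Shown = shown.Publish
            member this.Hidden = hidden.Publish

        type SomeClassWithVisibility() =
            let mutable visible = false
            // The containee fronts for functionality implemented here in the container.
            let doShow () = visible <- true
            let doHide () = visible <- false
            let visibility = Visibility(doShow, doHide, (fun () -> visible))
            member this.Visibility = visibility

    Callers write instance.Visibility.Show() and subscribe to instance.Visibility.Shown; no interface casts are needed anywhere, which sidesteps the explicit-cast annoyance described in the edit.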

    Read the article

< Previous Page | 469 470 471 472 473 474 475 476 477 478 479 480  | Next Page >