Search Results

Search found 19458 results for 'interface implementation'.


  • How do I reconfigure my GLES frame buffer after a rotation?

    - by Panda Pajama
    I am implementing interface rotation for my GLES-based game for iOS, written in Xamarin.iOS with OpenTK. I detect the rotation by overriding WillRotate in my UIViewController, and I correctly re-set up all of my projection matrices. However, when drawing a sprite, the image looks a bit blurrier in the landscape version than in the portrait version, as you can see in the following close-ups magnified 10x: Portrait (before rotating), Landscape (after rotating). In both cases I'm using the same texture with the same sampler, the same shader, and the same GL state. I just changed the order of the parameters in the projection matrix, so the resulting sizes should be exactly the same pixel-wise. Since this could be thought of as a window resize, I suppose that the framebuffer has to be recreated at the new size. When working on desktop apps in Direct3D 11 (SharpDX), I would call swapChain.ResizeBuffers() to do this. I have tried setting AutoResize = true in my iPhoneOSGameView, but then the framebuffer gets clipped as I rotate the interface, and everything disappears when rotating the interface again. I'm not doing anything strange; my framebuffer initialization is pretty vanilla:

        int scaling = (int)UIScreen.MainScreen.Scale;
        DeviceWidth = (int)UIScreen.MainScreen.Bounds.Width * scaling;
        DeviceHeight = (int)UIScreen.MainScreen.Bounds.Height * scaling;
        Size = new System.Drawing.Size(DeviceWidth, DeviceHeight);
        Bounds = new System.Drawing.RectangleF(0, 0, DeviceWidth, DeviceHeight);
        Frame = new System.Drawing.RectangleF(0, 0, DeviceWidth, DeviceHeight);
        ContextRenderingApi = EAGLRenderingAPI.OpenGLES2;
        AutoResize = true;
        LayerRetainsBacking = true;
        LayerColorFormat = EAGLColorFormat.RGBA8;

    I get inconsistent results when changing Size, Bounds and Frame in my CreateFrameBuffer override, but since the documentation is so incomplete (it has nothing on Bounds and Frame), I have resorted to randomly changing things here and there without really knowing what is going on. There is a similar question which has no answers; however, I don't know whether they're experiencing the same problem as I am. Is my supposition that the framebuffer needs to be recreated correct? If so, does anybody know how to do it correctly in OpenTK for Xamarin.iOS?
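    A minimal sketch of the recreate-on-rotation approach the question describes, assuming OpenTK's iPhoneOSGameView exposes the protected virtual DestroyFrameBuffer()/CreateFrameBuffer() pair (the question's own CreateFrameBuffer override suggests it does; verify the names against your OpenTK build). RecreateFrameBuffer is a hypothetical helper that the UIViewController would call from its WillRotate override, after re-setting the projection matrices:

        using System.Drawing;
        using OpenTK.Platform.iPhoneOS;

        public class GameView : iPhoneOSGameView
        {
            public GameView(RectangleF frame) : base(frame) { }

            // Hypothetical helper: drop the renderbuffer storage sized for the
            // old orientation, then rebuild it against the rotated CAEAGLLayer.
            public void RecreateFrameBuffer()
            {
                DestroyFrameBuffer();
                CreateFrameBuffer();
            }
        }

    Whether this cures the blurriness depends on whether the backing store really is stale after rotation, so treat it as a starting point rather than a confirmed fix.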

    Read the article

  • Producing a smooth mesh from density cloud and marching cubes

    - by Wardy
    Based on my results from this question, I decided to build myself a 3D noise map containing float values in place of my existing boolean point values. The effect I'm trying to produce is something like this, rather than typical rolling hills, which should explain the "missing cubes" in the image below. If I render my density map in normal "minecraft mode" (one block per point in the density map), varying the size of the cube based on the value in my density map (floats in the range 0 to 1), I get something like this: I'm now happy that I can produce a density map for the marching cubes algorithm (which will need a little tweaking), but for some reason when I run it through my implementation it's not producing what I expect. My problem is that I'm getting something like the first image in this answer to my previous question, when I want to achieve the effect in the second image. Upon further investigation I can't see how marching cubes does the "move vertex along the edge" type logic (i.e. the difference between the two images in my previous link). I see that it does do some interpolation, but I'm not convinced I have the correct understanding of what it should do, because the code in question appears to give the same result regardless of whether I use boolean or float values. I took the code from here, which is a C# implementation of marching cubes, but instead of using the MarchingCubesPrimitive I modified it to accept an object of type IDrawable containing lists for the various collections (vertices, normals, UVs, indices); the logic was otherwise untouched. My understanding is that given a very low isovalue, the accuracy of the surface being rendered should increase; in short, "fewer 45-degree slopes, more rolling hills" in the mesh output. However, this isn't what I'm seeing. Have I missed something, or is the implementation flawed and in need of fixing? EDIT: A little more detail on what I am seeing when I "marching cube" the data. Firstly, ignore the fact that the meshes created by the chunks don't "connect" (I'll probably raise another question about this later). Then look at the shaping of the island: it's too square. From the voxels rendered as boxes you get the impression there's a clean, soft, gradual hill, and yet in the mesh there are sharp falling edges even in the most central areas, where the gradient in the first image looks the most smooth. The data is regenerated each time I run this, so no two islands come out the same, and it's purely random rather than noise-based; but still, how can it look so smooth in one image and so jagged in the other?
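    For reference, the "move vertex along the edge" step being asked about is classically implemented as Paul Bourke's VertexInterp: the output vertex is placed where the isolevel crosses between the two corner densities, rather than at the edge midpoint. A minimal C# sketch, assuming an XNA/OpenTK-style Vector3 with the usual arithmetic operators:

        using System;
        using Microsoft.Xna.Framework; // assumption: any Vector3 with +, - and float * works

        static class MarchingCubesMath
        {
            // Classic marching-cubes edge interpolation (after Paul Bourke).
            // p1/p2 are the two cube-corner positions on the edge, v1/v2 their densities.
            internal static Vector3 VertexInterp(float isolevel, Vector3 p1, Vector3 p2, float v1, float v2)
            {
                const float eps = 1e-5f;
                if (Math.Abs(isolevel - v1) < eps) return p1; // surface passes through corner 1
                if (Math.Abs(isolevel - v2) < eps) return p2; // surface passes through corner 2
                if (Math.Abs(v1 - v2) < eps) return p1;       // degenerate edge: corners equal
                float mu = (isolevel - v1) / (v2 - v1);       // fractional crossing point in [0,1]
                return p1 + mu * (p2 - p1);                   // slide the vertex along the edge
            }
        }

    With boolean corner data, every crossing yields the same mu, so the mesh can never get smoother than the midpoint case; an implementation that produces identical output for boolean and float densities is very likely skipping (or flattening) this interpolation.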

    Read the article

  • WebLogic Partner Community Newsletter June 2013

    - by JuergenKress
    Dear WebLogic Partner Community member, this month's newsletter contains tons of Java material with the availability of Java EE 7, including many articles in the May/June Java Magazine and the Duke Award nomination! Thanks for all your efforts to become certified and Specialized. All the experts who achieved the WebLogic Server 12c Certified Implementation Specialist or ADF 11g Certified Implementation Specialist can download a logo for their blog or business card at the Competence Center. All the companies who achieved a WebLogic and ADF Specialization can request a nice plaque for their office. Summer time is training time: make sure you attend our Fusion Middleware Summer Camps or one of our upcoming Exalogic Implementation Workshops by PTS in Spain, Turkey and the Middle East. For those who cannot make it, we offer plenty of online courses, like the new ADF Academy, or the webcast The value of Engineered Systems for SAP. At our WebLogic Community Workspace (WebLogic Community membership required) you can find the Engineered Systems Strategy presentation. Thanks to the community for all the WebLogic best-practice papers and tools, like Automated provision of Oracle WebLogic Server Platform & Frank's Coherence videos on YouTube & Create and deploy applications on the Oracle Java Cloud & WebLogic Application redeployment using shared libraries - without downtime & Oracle Traffic Director. The first test deployments for WebLogic on Oracle Database Appliance are on the way; thanks to all who are sharing their experience: Sizing & Configuration and Traffic Director. Virtualised Oracle Database Appliance Proof of Concept - #1 Planning from Simon Haslam. It would be great if you could also share your experience via Twitter @wlscommunity! In the ADF section of the newsletter you will find Tuning Application Module Pools and Connection Pools & List View - Cool Looking ADF PS6 Component for Collections & Insider Essential & User Interface & Skinning & Oracle Forms to ADF Modernization reference. Have a great summer! Jürgen Kress, Oracle WebLogic Partner Adoption EMEA. To read the newsletter please visit http://tinyurl.com/WebLogicnewsJune2013 (OPN account required). To become a member of the WebLogic Partner Community please register at http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: WebLogic Community newsletter,newsletter,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • User connection management in Reporting Services configuration

    - by Testas
    IT professionals use Reporting Services Configuration Manager to perform post-installation tasks for SQL Server Reporting Services. Introduced in SQL Server 2005, Reporting Services Configuration Manager provides an intuitive interface for tasks including specifying the report server database and the Report Manager URL; indeed, one of the first post-installation tasks that should be performed is backing up the encryption keys that are used to protect the sensitive information within the RDL files. Many of the options selected within Reporting Services Configuration Manager are written to a number of configuration files, including the rsreportserver.config file located in the C:\Program Files\Microsoft SQL Server\Report Server InstanceName\Reporting Services\ReportServer folder. When opening this file you will notice that there are more configuration settings within rsreportserver.config than are available through the Reporting Services Configuration Manager interface. As a result, there are additional configuration options that can be defined within this file. A customer was having a problem performing stress tests against a new report server that would be going live for an enterprise reporting system. One aspect of the stress test was to fire 50 connections from a single user account. When performing the stress test, an error reported that the maximum number of active requests had been exceeded. Within rsreportserver.config, there is a key that can be added to the file:

        <Add Key="MaxActiveReqForOneUser" Value="20"/>

    Changing the value from 20 to 50 accommodated the needs of the stress test; however, a wider question should be asked about this setting when implementing Reporting Services in a production environment. Within an intranet environment the default setting is appropriate: network bandwidth is high, users are known, and demand for reports is particularly high from a group of users. However, when deploying a report server solution to an extranet, or the internet, you may want to consider reducing this setting, to limit the number of connections a single user can acquire and avoid placing unnecessary pressure on the report server. I do hope that Reporting Services Configuration Manager evolves to include an advanced page with an intuitive interface for changing configuration settings such as MaxActiveReqForOneUser, configuring rendering and data extensions, and defining secure connection levels to the report server. All these options can be configured within the rsreportserver.config file, and these are settings that customers would like to see in Reporting Services Configuration Manager in the future. If you think that the SQL community would benefit from this addition, you can vote for it on Microsoft Connect: https://connect.microsoft.com/SQLServer/feedback/details/565575/extending-reporting-services-configuration-manager-rscm

    Read the article

  • Is Java free/open source or not?

    - by user1598390
    On November 13, 2006, Sun released much of Java as free and open source software (FOSS) under the terms of the GNU General Public License (GPL). On May 8, 2007, Sun finished the process, making all of Java's core code available under free software/open-source distribution terms, aside from a small portion of code to which Sun did not hold the copyright. OpenJDK (Open Java Development Kit) is a free and open source implementation of the Java programming language. It is the result of an effort Sun Microsystems began in 2006. The implementation is licensed under the GNU General Public License (GNU GPL) with a linking exception. Why are there still people who say Java is not open source, or not free as in free speech? Am I missing something? Is Java still proprietary?

    Read the article


  • Mocking property sets

    - by mehfuzh
    In this post I will show how you can mock property sets with expected values, or even actions, using JustMock. To begin, we have a sample interface:

        public interface IFoo
        {
            int Value { get; set; }
        }

    Now we can create a mock that will throw on any call other than the one expected; generally that is a strict mock, and we can do it like:

        bool expected = false;
        var foo = Mock.Create<IFoo>(BehaviorMode.Strict);
        Mock.ArrangeSet(() => { foo.Value = 1; }).DoInstead(() => expected = true);

        foo.Value = 1;

        Assert.True(expected);

    Here, the method for setting up our expectation for a set is Mock.ArrangeSet, where we can directly set our expectations or can even use matchers, like:

        var foo = Mock.Create<IFoo>(BehaviorMode.Strict);

        Mock.ArrangeSet(() => foo.Value = Arg.Matches<int>(x => x > 3));

        foo.Value = 4;
        foo.Value = 5;

        Assert.Throws<MockException>(() => foo.Value = 3);

    In the example, any set of Value not satisfying the matcher expression will throw a MockException, as this is a strict mock. But what about loose mocks, where we also have to assert the set? Here, let's take an interface with an indexed property. Indexers are treated in the same way as properties; basic indexers let you access your class as if it were an array:

        public interface IFooIndexed
        {
            string this[int key] { get; set; }
        }

    We want to set up a value for a particular index, then pass that mock to some implementer where it will actually be called. Once done, we want to assert that it has been invoked properly:

        var foo = Mock.Create<IFooIndexed>();

        Mock.ArrangeSet(() => foo[0] = "ping");

        foo[0] = "ping";

        Mock.AssertSet(() => foo[0] = "ping");

    In the example above both values are user-defined. It might happen that we want to make it more dynamic; in this example I set it up for a set with any value, and finally check whether it was set with the one I am looking for:

        var foo = Mock.Create<IFooIndexed>();

        Mock.ArrangeSet(() => foo[0] = Arg.Any<string>());

        foo[0] = "ping";

        Mock.AssertSet(() => foo[0] = Arg.Matches<string>(x => string.Compare("ping", x) == 0));

    This is more or less it for mocking property sets, but we can further have a set throw an exception, or perform our own task for a particular set, like:

        Mock.ArrangeSet(() => foo.MyProperty = 10).Throws(new ArgumentException());

    Or:

        bool expected = false;
        var foo = Mock.Create<IFoo>(BehaviorMode.Strict);
        Mock.ArrangeSet(() => { foo.Value = 1; }).DoInstead(() => expected = true);

        foo.Value = 1;

        Assert.True(expected);

    Or call the original setter; in this example it will throw a NotImplementedException:

        var foo = Mock.Create<FooAbstract>(BehaviorMode.Strict);
        Mock.ArrangeSet(() => { foo.Value = 1; }).CallOriginal();
        Assert.Throws<NotImplementedException>(() => { foo.Value = 1; });

    Finally, try all these, find issues, post them to the forum and make it work for you :-). Hope that helps,

    Read the article


  • Composite-like pattern and SRP violation

    - by jimmy_keen
    Recently I've noticed myself implementing a pattern similar to the one described below. Starting with an interface:

        public interface IUserProvider
        {
            User GetUser(UserData data);
        }

    GetUser's only job is to somehow return a user (that would be a leaf operation, speaking in composite terms). There might be many implementations of IUserProvider, which all do the same thing: return a user based on input data. It doesn't really matter, as they are only leaves in composite terms, and that's fairly simple. Now, my leaves are used by one own-them-all composite class, which at the moment follows this implementation:

        public interface IUserProviderComposite : IUserProvider
        {
            void RegisterProvider(Predicate<UserData> predicate, IUserProvider provider);
        }

        public class UserProviderComposite : IUserProviderComposite
        {
            public User GetUser(UserData data) ...
            public void RegisterProvider(Predicate<UserData> predicate, IUserProvider provider) ...
        }

    The idea behind UserProviderComposite is simple. You register providers, and this class acts as a reusable entry point. When calling GetUser, it will use whichever registered provider matches the predicate for the requested user data (if that helps, it stores a key-value map of predicates and providers internally). Now, what confuses me is whether the RegisterProvider method (which brings to mind the composite's add operation) should be part of that class. It kind of expands its responsibilities from providing a user to also managing the providers collection. As far as my understanding goes, this violates the Single Responsibility Principle... or am I wrong here? I thought about extracting the register part into a separate entity and injecting it into the composite. As much as it looks decent on paper (in terms of SRP), it feels a bit awkward because:

    - I would essentially be injecting a Dictionary (or other key-value map), or a silly wrapper around it doing nothing more than adding entries
    - This won't follow composite anymore (as add won't be part of the composite)

    What exactly is the presented pattern called? Composite felt natural to compare it with, but I realize it's not exactly that, and nothing else rings any bells. Which approach would you take: stick with SRP, or stick with the "composite" pattern? Or is the design here flawed, and given the problem, can this be done in a better way?
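    For illustration, a minimal sketch of how the described class might look internally, using the predicate-to-provider map the question mentions (User and UserData are the question's own types; the first-match rule and the exception choice are assumptions):

        using System;
        using System.Collections.Generic;

        public class UserProviderComposite : IUserProviderComposite
        {
            // Registered predicate/provider pairs, checked in registration order.
            private readonly List<KeyValuePair<Predicate<UserData>, IUserProvider>> providers =
                new List<KeyValuePair<Predicate<UserData>, IUserProvider>>();

            public void RegisterProvider(Predicate<UserData> predicate, IUserProvider provider)
            {
                providers.Add(new KeyValuePair<Predicate<UserData>, IUserProvider>(predicate, provider));
            }

            public User GetUser(UserData data)
            {
                // Delegate to the first provider whose predicate accepts the data.
                foreach (var entry in providers)
                    if (entry.Key(data))
                        return entry.Value.GetUser(data);
                throw new InvalidOperationException("No provider registered for the given user data.");
            }
        }

    Seen this way, the class reads more like a strategy selector (or a chain of responsibility with a registration API) than a true composite, which is arguably why the add operation feels out of place.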

    Read the article

  • CISCO 2911 Router configuration

    - by bala
    Cisco 2911 router configuration support is required, please. I have Exchange Server 2010 configured and working without any errors. The problem is in the Cisco router configuration: when the Exchange server sends emails out, the receiver sees the WAN IP, not the public IP. I have configured rDNS lookups with our MX record IP addresses that match the FQDN, but all our emails are rejected because the source does not match the public IP. Receiving mail is not a problem; all mail comes through. I am sure I am missing something in the router configuration that keeps it from sending with the public IP. Can anyone help me solve this issue? Note: I've got 1 WAN IP and 8 public IPs from the ISP. Find the running configuration below.

        Building configuration...

        Current configuration : 2734 bytes
        !
        ! Last configuration change at 06:32:13 UTC Tue Apr 3 2012
        ! NVRAM config last updated at 06:32:14 UTC Tue Apr 3 2012
        !
        version 15.1
        service timestamps debug datetime msec
        service timestamps log datetime msec
        service password-encryption
        !
        hostname BSBG-LL
        !
        boot-start-marker
        boot-end-marker
        !
        enable secret 5 $x$xHrxxxxx5ox0
        enable password 7 xx23xx5FxxE1xx044
        !
        no aaa new-model
        !
        no ipv6 cef
        ip source-route
        ip cef
        !
        ip flow-cache timeout active 1
        ip domain name yourdomain.com
        ip name-server 213.42.20.20
        ip name-server 195.229.241.222
        multilink bundle-name authenticated
        !
        crypto pki token default removal timeout 0
        !
        license udi pid CISCO2911/K9
        !
        username bsbg
        !
        interface Embedded-Service-Engine0/0
         no ip address
         shutdown
        !
        interface GigabitEthernet0/0
         ip address 192.168.0.9 255.255.255.0
         ip flow ingress
         ip nat inside
         ip virtual-reassembly in
         duplex auto
         speed 100
         no cdp enable
        !
        interface GigabitEthernet0/1
         ip address 213.42.xx.x2 255.255.255.252
         ip nat outside
         ip virtual-reassembly in
         duplex auto
         speed auto
         no cdp enable
        !
        interface GigabitEthernet0/2
         no ip address
         shutdown
         duplex auto
         speed auto
        !
        ip forward-protocol nd
        !
        no ip http server
        no ip http secure-server
        !
        ip nat inside source list 120 interface GigabitEthernet0/1 overload
        ip nat inside source static tcp 192.168.0.4 25 94.56.89.100 25 extendable
        ip nat inside source static tcp 192.168.0.4 53 94.56.89.100 53 extendable
        ip nat inside source static udp 192.168.0.4 53 94.56.89.100 53 extendable
        ip nat inside source static tcp 192.168.0.4 110 94.56.89.100 110 extendable
        ip nat inside source static tcp 192.168.0.4 443 94.56.89.100 443 extendable
        ip nat inside source static tcp 192.168.0.4 587 94.56.89.100 587 extendable
        ip nat inside source static tcp 192.168.0.4 995 94.56.89.100 995 extendable
        ip nat inside source static tcp 192.168.0.4 3389 94.56.89.100 3389 extendable
        ip nat inside source static tcp 192.168.0.4 443 94.56.89.101 443 extendable
        ip nat inside source static tcp 192.168.0.12 80 94.56.89.102 80 extendable
        ip nat inside source static tcp 192.168.0.12 443 94.56.89.102 443 extendable
        ip nat inside source static tcp 192.168.0.12 3389 94.56.89.102 3389 extendable
        ip route 0.0.0.0 0.0.0.0 213.42.69.41
        !
        access-list 120 permit ip 192.168.0.0 0.0.0.255 any
        !
        control-plane
        !
        line con 0
         exec-timeout 5 0
        line aux 0
        line 2
         no activation-character
         no exec
         transport preferred none
         transport input all
         transport output pad telnet rlogin lapb-ta mop udptn v120 ssh
         stopbits 1
        line vty 0 4
         password 7 xx64xxD530D26086Dxx
         login
         transport input all
        !
        scheduler allocate 20000 1000
        end

    Read the article

  • External USB 3 drive not recognized

    - by ilan123
    Ubuntu 12.10 64 bit seems not to recognize my external hard disk. It is a Vantec NST-310S3 external disk enclosure with a WD 3TB drive. The disk has two NTFS partitions. My PC is a dual boot system. Under Windows 7 the hard disk works fine but I can't make it work with Ubuntu. When the drive is connected to the PC then the command sudo fdisk -l seems to hang forever. Below are the output of lsusb and cat /proc/partitions without the external drive and then with it connected. I added also the last lines of the dmesg command at the end. First without the drive: ilan@linux:~$ lsusb Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 001 Device 003: ID 13ba:0017 Unknown PS/2 Keyboard+Mouse Adapter Bus 001 Device 004: ID 046d:c50e Logitech, Inc. Cordless Mouse Receiver Bus 001 Device 005: ID 0ac8:3420 Z-Star Microelectronics Corp. Venus USB2.0 Camera ilan@linux:~$ cat /proc/partitions major minor #blocks name 8 0 1953514584 sda 8 1 102400 sda1 8 2 629043200 sda2 8 3 367001600 sda3 8 4 1 sda4 8 5 471859200 sda5 8 6 157286400 sda6 8 7 324115456 sda7 8 8 4101120 sda8 11 0 1048575 sr0 Second with the USB 3 drive: ilan@linux:~$ lsusb Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 004 Device 002: ID 174c:55aa ASMedia Technology Inc. Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 001 Device 003: ID 13ba:0017 Unknown PS/2 Keyboard+Mouse Adapter Bus 001 Device 004: ID 046d:c50e Logitech, Inc. Cordless Mouse Receiver Bus 001 Device 005: ID 0ac8:3420 Z-Star Microelectronics Corp. Venus USB2.0 Camera ilan@linux:~$ cat /proc/partitions major minor #blocks name 8 0 1953514584 sda 8 1 102400 sda1 8 2 629043200 sda2 8 3 367001600 sda3 8 4 1 sda4 8 5 471859200 sda5 8 6 157286400 sda6 8 7 324115456 sda7 8 8 4101120 sda8 11 0 1048575 sr0 8 16 2930266584 sdb ilan@linux:~$ lsusb -v -s 004:002 Bus 004 Device 002: ID 174c:55aa ASMedia Technology Inc. Couldn't open device, some information will be missing Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 3.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 9 idVendor 0x174c ASMedia Technology Inc. 
idProduct 0x55aa bcdDevice 1.00 iManufacturer 2 iProduct 3 iSerial 1 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 44 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xc0 Self Powered MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 2 bInterfaceClass 8 Mass Storage bInterfaceSubClass 6 SCSI bInterfaceProtocol 80 Bulk-Only iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0400 1x 1024 bytes bInterval 0 bMaxBurst 15 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x02 EP 2 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0400 1x 1024 bytes bInterval 0 bMaxBurst 15 ilan@linux:~$ sudo fdisk -l [sudo] password for ilan: Disk /dev/sda: 2000.4 GB, 2000398934016 bytes 255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xf1b4f1ee Device Boot Start End Blocks Id System /dev/sda1 * 2048 206847 102400 7 HPFS/NTFS/exFAT /dev/sda2 206848 1258293247 629043200 7 HPFS/NTFS/exFAT /dev/sda3 1258293248 1992296447 367001600 7 HPFS/NTFS/exFAT /dev/sda4 1992298494 3907028991 957365249 f W95 Ext'd (LBA) /dev/sda5 1992298496 2936016895 471859200 7 HPFS/NTFS/exFAT /dev/sda6 2936018944 3250591743 157286400 7 HPFS/NTFS/exFAT /dev/sda7 3250593792 3898824703 324115456 83 Linux /dev/sda8 3898826752 3907028991 4101120 82 Linux swap / Solaris dmesg output after connecting the external drive: [ 23.740567] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx [ 23.740786] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready [ 49.144673] usb 4-1: >new SuperSpeed USB device number 2 using xhci_hcd [ 49.163039] usb 4-1: >Parent hub missing LPM exit latency info. Power management will be impacted. [ 49.166789] usb 4-1: >New USB device found, idVendor=174c, idProduct=55aa [ 49.166793] usb 4-1: >New USB device strings: Mfr=2, Product=3, SerialNumber=1 [ 49.166796] usb 4-1: >Product: AS2105 [ 49.166799] usb 4-1: >Manufacturer: ASMedia [ 49.166801] usb 4-1: >SerialNumber: 0123456789ABCDEF [ 49.206372] usbcore: registered new interface driver uas [ 49.228891] Initializing USB Mass Storage driver... [ 49.229042] scsi6 : usb-storage 4-1:1.0 [ 49.229115] usbcore: registered new interface driver usb-storage [ 49.229116] USB Mass Storage support registered. [ 64.045528] scsi 6:0:0:0: >Direct-Access WDC WD30 EZRX-00MMMB0 80.0 PQ: 0 ANSI: 0 [ 64.046224] sd 6:0:0:0: >Attached scsi generic sg2 type 0 [ 64.046881] sd 6:0:0:0: >[sdb] Very big device. Trying to use READ CAPACITY(16). [ 64.047610] sd 6:0:0:0: >[sdb] 5860533168 512-byte logical blocks: (3.00 TB/2.72 TiB) [ 64.048368] sd 6:0:0:0: >[sdb] Write Protect is off [ 64.048373] sd 6:0:0:0: >[sdb] Mode Sense: 23 00 00 00 [ 64.048984] sd 6:0:0:0: >[sdb] No Caching mode page present [ 64.048987] sd 6:0:0:0: >[sdb] Assuming drive cache: write through [ 64.049297] sd 6:0:0:0: >[sdb] Very big device. Trying to use READ CAPACITY(16). 
[ 64.050942] sd 6:0:0:0: >[sdb] No Caching mode page present [ 64.050944] sd 6:0:0:0: >[sdb] Assuming drive cache: write through [ 94.245006] usb 4-1: >reset SuperSpeed USB device number 2 using xhci_hcd [ 94.262553] usb 4-1: >Parent hub missing LPM exit latency info. Power management will be impacted. [ 94.263805] xhci_hcd 0000:03:00.0: >xHCI xhci_drop_endpoint called with disabled ep ffff8800d37d1c00 [ 94.263808] xhci_hcd 0000:03:00.0: >xHCI xhci_drop_endpoint called with disabled ep ffff8800d37d1c40 [ 125.262722] usb 4-1: >reset SuperSpeed USB device number 2 using xhci_hcd [ 125.280304] usb 4-1: >Parent hub missing LPM exit latency info. Power management will be impacted. [ 125.281511] xhci_hcd 0000:03:00.0: >xHCI xhci_drop_endpoint called with disabled ep ffff8800d37d1c00 [ 125.281516] xhci_hcd 0000:03:00.0: >xHCI xhci_drop_endpoint called with disabled ep ffff8800d37d1c40

    Read the article

  • Enablement 2.0 Get Specialized!

    - by mseika
    Enablement 2.0 Get Specialized! The Oracle PartnerNetwork Specialized program is releasing new certifications on our latest products, and partners are invited to be the first candidates.

    Oracle Taleo Enterprise Cloud Service 2012 Specialization

    New Specialist Guided Learning Paths available:
    · Oracle Taleo Cloud Service 2012 Sales Specialist
    · Oracle Taleo Cloud Service 2012 PreSales Specialist
    · Oracle Taleo Cloud Service 2012 Support Specialist

    New Specialist Assessments available:
    · Oracle Taleo Cloud Service 2012 Sales Specialist Assessment
    · Oracle Taleo Cloud Service 2012 PreSales Specialist Assessment
    · Oracle Taleo Cloud Service 2012 Support Specialist Assessment

    Coming soon - new Certified Implementation Specialist exam:
    · Oracle Taleo Cloud Service 2012 Recruiting Certified Implementation Specialist

    Contact Us: Please direct any inquiries you may have to the Oracle Partner Enablement team at [email protected].

    Read the article


  • Setting up Ubuntu Server as a Router with DHCPD and 3 Ethernet devices

    - by cengbrecht
    My configuration: Ubuntu 12.04, dhcp3-server, eth0, eth1, eth2. Edit: removed br0 & br1. eth0 is the external connection; eth1 & eth2 are the internal network. eth1 and eth2 are supposed to be separate networks for students and teachers respectively. What I would like is the internet from the external device bridged to devices 1 and 2, with the DHCP server controlling the two internal devices. It's already working with DHCP; the part I am stuck on is the bridging for internet. I have set up a script that I found here: Router. With the original script he linked here: Ubuntu Router Guide.

        echo -e "\n\nLoading simple rc.firewall-iptables version $FWVER..\n"
        IPTABLES=/sbin/iptables
        #IPTABLES=/usr/local/sbin/iptables
        DEPMOD=/sbin/depmod
        MODPROBE=/sbin/modprobe
        EXTIF="eth0"
        INTIF="eth1"
        INTIF2="eth2"
        echo "  External Interface: $EXTIF"
        echo "  Internal Interface: $INTIF"
        echo "  Internal Interface: $INTIF2"
        EXTIP=`ifconfig $EXTIF | grep 'inet addr:' | sed 's#.*inet addr\:\([0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\).*#\1#g'`
        echo "  External IP: $EXTIP"
        #======================================================================
        #== No editing beyond this line is required for initial MASQ testing ==

    The rest of the script below this is as-is. I can get an IP from the eth1 & eth2 devices, and my computer can see them, and they it; however, internet is not being passed through. If you need more information please just let me know.

    EDIT: So I had a 255.255.254.0 network; I believe that was causing the issue. Not sure if it will matter on the second card, I will test later. After changing the subnet to 255.255.255.0 the pings will pass through; however, I cannot get DNS requests to pass. My new config for the firewall rules:

        # /etc/iptables.up.rules
        # Generated by iptables-save v1.4.12 on Wed Nov 28 19:43:28 2012
        *mangle
        :PREROUTING ACCEPT [39:4283]
        :INPUT ACCEPT [39:4283]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [12:4884]
        :POSTROUTING ACCEPT [13:5145]
        COMMIT
        # Completed on Wed Nov 28 19:43:28 2012
        # Generated by iptables-save v1.4.12 on Wed Nov 28 19:43:28 2012
        *filter
        :FORWARD ACCEPT [0:0]
        :INPUT ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        -A FORWARD -j LOG
        -A FORWARD -m state -i eth1 -o eth0 --state NEW,ESTABLISHED,RELATED -j ACCEPT
        -A FORWARD -m state -i eth2 -o eth0 --state NEW,ESTABLISHED,RELATED -j ACCEPT
        -A FORWARD -m state -i eth0 -o eth1 --state NEW,ESTABLISHED,RELATED -j ACCEPT
        -A FORWARD -m state -i eth0 -o eth2 --state NEW,ESTABLISHED,RELATED -j ACCEPT
        COMMIT
        # Completed on Wed Nov 28 19:43:28 2012
        # Generated by iptables-save v1.4.12 on Wed Nov 28 19:43:28 2012
        *nat
        :INPUT ACCEPT [0:0]
        :PREROUTING ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        :POSTROUTING ACCEPT [0:0]
        -A POSTROUTING -o eth0 -j MASQUERADE
        -A POSTROUTING -o eth0 -j SNAT --to-source 192.168.1.25
        COMMIT
        # Completed on Wed Nov 28 19:43:28 2012

    Not sure what else you may need, but I am using Webmin to control the server (needed for the operators on site to know how to use it). If you could explain it as standard CLI commands, or edits to this file directly, then we should be OK. :) And thanks again Erik, I do believe your edits did help.

    Read the article

  • Make Your Menu Item Highlighted

    - by Shaun
    When I was working on the TalentOn project (a promotion on MSDN China) I was asked to implement functionality that makes the top menu items highlighted when the currently viewed page is in that section. This might be a common scenario in web application development, I think.

    Simple Example

    When thinking about the solution for highlighted menu items, the biggest problem is how to define the sections (menu items) and the pages that belong to them, rather than how to make the menu highlighted. With the ASP.NET MVC framework we can use the controller-action infrastructure to achieve it. Each controller would normally have a related menu item on the master page. The menu item would be highlighted if any of the views under this controller are being shown. Some specific menu items would be highlighted only when a particular action is invoked, for example the home page, the about page, etc. The check rule can be specified on demand. For example, I can define that the LogOn and Register actions of the Account controller should make the Account menu item highlighted, while ChangePassword should make the Profile menu item highlighted. I'm going to use an HtmlHelper extension to render the highlight-able menu item. The key point is that I need to pass in the predicate that checks whether the current view belongs to this menu item, which determines whether the menu item should be highlighted or not. Hence I need a delegate as its parameter. The simplest code would be like this:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.Mvc;
        using System.Web.Mvc.Html;

        namespace ShaunXu.Blogs.HighlighMenuItem
        {
            public static class HighlightMenuItemHelper
            {
                public static MvcHtmlString HighlightMenuItem(this HtmlHelper helper,
                    string text, string controllerName, string actionName, object routeData, object htmlAttributes,
                    string highlightText, object highlightHtmlAttributes,
                    Func<HtmlHelper, bool> highlightPredicate)
                {
                    var shouldHighlight = highlightPredicate.Invoke(helper);
                    if (shouldHighlight)
                    {
                        return helper.ActionLink(string.IsNullOrWhiteSpace(highlightText) ? text : highlightText,
                            actionName, controllerName, routeData, highlightHtmlAttributes == null ? htmlAttributes : highlightHtmlAttributes);
                    }
                    else
                    {
                        return helper.ActionLink(text, actionName, controllerName, routeData, htmlAttributes);
                    }
                }
            }
        }

    There are three groups of parameters. The first group is the same as the parameters of the built-in ActionLink method: it takes the link text, controller name, action name, etc., so that I can render a valid link for the menu item. The second group focuses on the highlighted link text and HTML attributes; I use them to render the highlighted menu item. The third group, which contains one parameter, is the predicate that tells me whether this menu item should be highlighted, based on the user's definition.

    I then changed the master page of the sample MVC application. I let the Home and About menus be highlighted only when the Index and About actions are invoked, and I added a new menu named Account which should be highlighted for all actions/views under the Account controller. So my master page is like this:

        <div id="menucontainer">
            <ul id="menu">
                <li><%: Html.HighlightMenuItem(
                        "Home", "Home", "Index", null, null,
                        "[Home]", null,
                        helper => helper.ViewContext.RouteData.Values["controller"].ToString() == "Home"
                               && helper.ViewContext.RouteData.Values["action"].ToString() == "Index") %></li>
                <li><%: Html.HighlightMenuItem(
                        "About", "Home", "About", null, null,
                        "[About]", null,
                        helper => helper.ViewContext.RouteData.Values["controller"].ToString() == "Home"
                               && helper.ViewContext.RouteData.Values["action"].ToString() == "About") %></li>
                <li><%: Html.HighlightMenuItem(
                        "Account", "Account", "LogOn", null, null,
                        "[Account]", null,
                        helper => helper.ViewContext.RouteData.Values["controller"].ToString() == "Account") %></li>
            </ul>
        </div>

    Note: you need to add an import section for the namespace "ShaunXu.Blogs.HighlighMenuItem" to make the extension method created above available.

    So let's see the result. When the home page is shown, the Home menu is highlighted, since at this moment the controller is Home and the action is Index. If I click the About menu you can see it turn highlighted, as now the action is About. And if I navigate to the register page, the Account menu is highlighted, since that should happen when any action under the Account controller is invoked.

    Fluent Language

    Up to now this is a full example of the highlighted menu item, but it's not perfect yet. Since the most common scenarios are "highlighted when this action is invoked" and "highlighted when any action under this controller is invoked", we can create two shortcut methods for them, so that normally the developer has no need to specify the delegate. Another place we can improve is to make the method more user-friendly, or I should say developer-friendly. As you can see, when we want to add a highlighted menu item we need to specify eight parameters and remember what they all mean. In fact we can make the method more "fluent", so that the developer gets hints from Visual Studio IntelliSense while using it. Below is the full code for it:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.Mvc;
        using System.Web.Mvc.Html;

        namespace Ethos.Xrm.HR
        {
            #region Helper

            public static class HighlightActionMenuHelper
            {
                public static IHighlightActionMenuProviderAfterCreated HighlightActionMenu(this HtmlHelper helper)
                {
                    return new HighlightActionMenuProvider(helper);
                }
            }

            #endregion

            #region Interfaces

            public interface IHighlightActionMenuProviderAfterCreated
            {
                IHighlightActionMenuProviderAfterOn On(string actionName, string controllerName);
            }

            public interface IHighlightActionMenuProviderAfterOn
            {
                IHighlightActionMenuProviderAfterWith With(string text, object routeData, object htmlAttributes);
            }

            public interface IHighlightActionMenuProviderAfterWith
            {
                IHighlightActionMenuProviderAfterHighlightWhen HighlightWhen(Func<HtmlHelper, bool> predicate);
                IHighlightActionMenuProviderAfterHighlightWhen HighlightWhenControllerMatch();
                IHighlightActionMenuProviderAfterHighlightWhen HighlightWhenControllerAndActionMatch();
            }

            public interface IHighlightActionMenuProviderAfterHighlightWhen
            {
                IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(object highlightHtmlAttributes, string highlightText);
                IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(object highlightHtmlAttributes);
                IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(string cssClass, string highlightText);
                IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(string cssClass);
            }

            public interface IHighlightActionMenuProviderAfterApplyHighlightStyle
            {
                MvcHtmlString ToActionLink();
            }

            #endregion

            public class HighlightActionMenuProvider :
                IHighlightActionMenuProviderAfterCreated,
                IHighlightActionMenuProviderAfterOn, IHighlightActionMenuProviderAfterWith,
                IHighlightActionMenuProviderAfterHighlightWhen, IHighlightActionMenuProviderAfterApplyHighlightStyle
            {
                private HtmlHelper _helper;

                private string _controllerName;
                private string _actionName;
                private string _text;
                private object _routeData;
                private object _htmlAttributes;

                private Func<HtmlHelper, bool> _highlightPredicate;

                private string _highlightText;
                private object _highlightHtmlAttributes;

                public HighlightActionMenuProvider(HtmlHelper helper)
                {
                    _helper = helper;
                }

                public IHighlightActionMenuProviderAfterOn On(string actionName, string controllerName)
                {
                    _actionName = actionName;
                    _controllerName = controllerName;
                    return this;
                }

                public IHighlightActionMenuProviderAfterWith With(string text, object routeData, object htmlAttributes)
                {
                    _text = text;
                    _routeData = routeData;
                    _htmlAttributes = htmlAttributes;
                    return this;
                }

                public IHighlightActionMenuProviderAfterHighlightWhen HighlightWhen(Func<HtmlHelper, bool> predicate)
                {
                    _highlightPredicate = predicate;
                    return this;
                }

                public IHighlightActionMenuProviderAfterHighlightWhen HighlightWhenControllerMatch()
                {
                    return HighlightWhen((helper) =>
                    {
                        return helper.ViewContext.RouteData.Values["controller"].ToString().ToLower() == _controllerName.ToLower();
                    });
                }

                public IHighlightActionMenuProviderAfterHighlightWhen HighlightWhenControllerAndActionMatch()
                {
                    return HighlightWhen((helper) =>
                    {
                        return helper.ViewContext.RouteData.Values["controller"].ToString().ToLower() == _controllerName.ToLower() &&
                               helper.ViewContext.RouteData.Values["action"].ToString().ToLower() == _actionName.ToLower();
                    });
                }

                public IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(object highlightHtmlAttributes, string highlightText)
                {
                    _highlightText = highlightText;
                    _highlightHtmlAttributes = highlightHtmlAttributes;
                    return this;
                }

                public IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(object highlightHtmlAttributes)
                {
                    return ApplyHighlighStyle(highlightHtmlAttributes, _text);
                }

                public IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(string cssClass, string highlightText)
                {
                    return ApplyHighlighStyle(new { @class = cssClass }, highlightText);
                }

                public IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(string cssClass)
                {
                    return ApplyHighlighStyle(new { @class = cssClass }, _text);
                }

                public MvcHtmlString ToActionLink()
                {
                    if (_highlightPredicate.Invoke(_helper))
                    {
                        // should be highlighted
                        return _helper.ActionLink(_highlightText, _actionName, _controllerName, _routeData, _highlightHtmlAttributes);
                    }
                    else
                    {
                        // should not be highlighted
                        return _helper.ActionLink(_text, _actionName, _controllerName, _routeData, _htmlAttributes);
                    }
                }
            }
        }

    So in the master page, when I need a highlighted menu item I can "tell" the helper how it should behave, just like this:

        <li>
            <%: Html.HighlightActionMenu()
                    .On("Index", "Home")
                    .With(SiteMasterStrings.Home, null, null)
                    .HighlightWhenControllerMatch()
                    .ApplyHighlighStyle(new { style = "background:url(../../Content/Images/topmenu_bg.gif) repeat-x;text-decoration:none;color:#feffff;" })
                    .ToActionLink() %>
        </li>

    While I'm typing the code, IntelliSense advises me that I need a highlighted action menu, on the Index action of the Home controller, with "Home" as its link text and no additional route data or HTML attributes; it should be highlighted when the controller is "Home", and if it's highlighted, the style should be like this; finally, render it. This is what we call a "fluent language" (or fluent interface). If you have been using Moq you will know this style is very developer-friendly, self-documenting and easy to read.

    Summary

    In this post I demonstrated how to implement a highlighted menu item in ASP.NET MVC by using its controller-action infrastructure. We can see that ASP.NET MVC helps us organize our web application better. I also said a little more about the "fluent language" style and showed how it can make our code better and easier to use.

    Hope this helps, Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Partner Blog Series: PwC Perspectives - The Gotchas, The Do's and Don'ts for IDM Implementations

    - by Tanu Sood
    It is generally accepted
among business communities that technology by itself is not a silver bullet to all problems, but when it is combined with leading practices, strategy, careful planning and execution, it can create a recipe for success. This post attempts to highlight some of the best practices along with dos & don’ts that our practice has accumulated over the years in the identity & access management space in general, and also in the context of R2, in particular. Best Practices The following section illustrates the leading practices in “How” to plan, implement and sustain a successful OIM deployment, based on our collective experience. Planning is critical, but often overlooked A common approach to planning an IAM program that we identify with our clients is the three step process involving a current state assessment, a future state roadmap and an executable strategy to get there. It is extremely beneficial for clients to assess their current IAM state, perform gap analysis, document the recommended controls to address the gaps, align future state roadmap to business initiatives and get buy in from all stakeholders involved to improve the chances of success. When designing an enterprise-wide solution, the scalability of the technology must accommodate the future growth of the enterprise and the projected identity transactions over several years. Aligning the implementation schedule of OIM to related information technology projects increases the chances of success. As a baseline, it is recommended to match hardware specifications to the sizing guide for R2 published by Oracle. Adherence to this will help ensure that the hardware used to support OIM will not become a bottleneck as the adoption of new services increases. If your Organization has numerous connected applications that rely on reconciliation to synchronize the access data into OIM, consider hosting dedicated instances to handle reconciliation. Finally, ensure the use of clustered environment for development and have at least three total environments to help facilitate a controlled migration to production. If your Organization is planning to implement role based access control, we recommend performing a role mining exercise and consolidate your enterprise roles to keep them manageable. In addition, many Organizations have multiple approval flows to control access to critical roles, applications and entitlements. If your Organization falls into this category, we highly recommend that you limit the number of approval workflows to a small set. Most Organizations have operations managed across data centers with backend database synchronization, if your Organization falls into this category, ensure that the overall latency between the datacenters when replicating the databases is less than ten milliseconds to ensure that there are no front office performance impacts. Ingredients for a successful implementation During the development phase of your project, there are a number of guidelines that can be followed to help increase the chances for success. Most implementations cannot be completed without the use of customizations. If your implementation requires this, it’s a good practice to perform code reviews to help ensure quality and reduce code bottlenecks related to performance. We have observed at our clients that the development process works best when team members adhere to coding leading practices. Plan for time to correct coding defects and ensure developers are empowered to report their own bugs for maximum transparency. 
Many organizations struggle with defining a consistent approach to managing logs. This is particularly important given the amount of information that can be logged by OIM. We recommend Oracle Diagnostic Logging (ODL) for this purpose (a sample configuration sketch appears at the end of this section). ODL allows log files to be formatted in XML for easy parsing and does not require a server restart when log levels are changed during troubleshooting.

Testing is a vital part of any large project, and an OIM R2 implementation is no exception. We suggest that at least one lower environment use production-like data and connectors, with configurations matching production as closely as possible. For example, use secure channels between OIM and target platforms in pre-production environments to test the configurations, the migration processes of certificates, and the additional overhead that encryption could impose.

Finally, we ask our clients to perform database backups regularly and before any major change event, such as a patch or a migration between environments. In the lower environments, we recommend at least a weekly backup in order to prevent significant loss of time and effort. Similarly, if your organization is using virtual machines for one or more of the environments, it is recommended to take frequent snapshots so that rollbacks can occur in the event of improper configuration.

Operate & sustain the solution to derive maximum benefits

When migrating OIM R2 to production, it is important to perform certain activities that will help achieve a smoother transition. At our clients, we have seen that splitting the OIM tables into their own tablespaces by category (physical tables, indexes, etc.) can help manage database growth effectively. If we notice that a client hasn't enabled the Oracle-recommended indexing in the applicable database, we strongly suggest doing so to improve performance. Additionally, we work with our clients to make sure that the audit level is set to fit the organization's auditing needs, and sometimes we even allocate the UPA tables and indexes into their own tablespace for easier maintenance. Finally, many of our clients have set up schedules for reconciliation tables to be archived at regular intervals in order to keep the size of the database(s) reasonable and maintain optimal database performance.

For clients that anticipate availability issues with target applications, we strongly encourage the use of the offline provisioning capabilities of OIM R2. This decouples the provisioning process from the availability of a given target application and helps avoid broken workflows. To account for this and other abnormalities, we also advocate that OIM's monitoring controls be configured to alert administrators of any abnormal situations.

Within OIM R2, we have begun advising our clients to utilize the 'profile' feature to encapsulate multiple commonly requested accounts, roles, and/or entitlements into a single item. By setting up a number of profiles that can be searched for and used, users will spend less time performing the same steps for common tasks.

We advise our clients to follow the Oracle-recommended guides for database and application server tuning, which provide a good baseline configuration. They offer guidance on database connection pools, connection timeouts, user interface threads and proper handling of adapters/plug-ins. All of these are important configurations that will allow faster provisioning and better web page response times.
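Returning to the ODL recommendation above, the fragment below is a minimal sketch of what a logger entry in an ODL logging.xml file can look like. The logger name, level and handler name are illustrative assumptions rather than values taken from any particular environment:

    <!-- Sketch of an ODL logger definition (logger/handler names and level are assumed). -->
    <logger name="oracle.iam" level="NOTIFICATION:1" useParentHandlers="false">
        <!-- "odl-handler" stands in for whichever ODL log handler is defined elsewhere in the file. -->
        <handler name="odl-handler"/>
    </logger>

Raising the level (for example to TRACE:32 while troubleshooting) can typically be done at runtime through Enterprise Manager or WLST, which is what makes the no-restart behavior mentioned above possible.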
Many of our clients have begun to recognize the value of data mining and a remediation process during the initial phases of an implementation (to help ensure high-quality data gets loaded) and beyond (to support ongoing maintenance and business-as-usual processes). A successful program always begins by identifying the data elements and assigning a classification level based on criticality, risk, and availability, and it should finish by following through with a remediation process.

Dos & Don'ts

Here are the most common dos and don'ts that we socialize with our clients, derived from our experience implementing the solution.

Dos:
· Scope the project into phases with realistic goals. Look for quick wins to show success and value to the stakeholders.
· Establish an enterprise ID (a universal unique ID across the enterprise) early in the program.
· Have a plan in place to patch during the project, which helps alleviate any major issues or roadblocks (product and database).
· Assess your current state and prepare a roadmap that addresses your operational, tactical and strategic goals; align it with your business priorities.
· Defer complex integrations to later phases and take advantage of lessons learned from previous phases.
· Have an identity and access data quality initiative built into your plan to identify and remediate data-related issues early on.
· Identify an owner of the identity systems with fair IdM knowledge, and empower them with the authority to make product-related decisions. This will help overcome any design hurdles.
· Shadow your internal or external consulting resources during the implementation to build the product skills needed to operate and sustain the solution.

Don'ts:
· Avoid "boiling the ocean" and trying to integrate all enterprise applications in the first phase.
· Avoid major UI customizations that require code changes.
· Avoid publishing all the target entitlements if you don't anticipate their usage during access requests.
· Avoid integrating non-production environments with your production target systems.
· Avoid creating multiple accounts for the same user on the same system wherever possible.
· Avoid creating complex approval workflows that would negatively impact productivity and SLAs.
· Avoid creating complex designs that are not sustainable long term and would need a major overhaul during upgrades.
· Avoid treating IAM as a point solution; have an appropriate level of communication and a training plan for IT and business users alike.

Conclusion

In our experience, identity programs often struggle with scope, proper resourcing, and more. We suggest that companies consider the practices discussed in this post and leverage them to help enable their identity and access program. This concludes the PwC blog series on R2 for the month, and we sincerely hope that the information we have shared thus far has been beneficial. For more information, or if you have questions, you can reach out to Rex Thexton, Senior Managing Director, PwC, and/or Dharma Padala, Director, PwC. We look forward to hearing from you.
Meet the Writers:

Dharma Padala is a Director in the Advisory Security practice within PwC. He has been implementing medium to large scale Identity Management solutions across multiple industries, including the utility, health care, entertainment, retail and financial sectors. Dharma has 14 years of experience in delivering IT solutions, and has been implementing Identity Management solutions for the past 8 years.

Praveen Krishna is a Manager in the Advisory Security practice within PwC. Over the last decade Praveen has helped clients plan, architect and implement Oracle identity solutions across diverse industries. His experience includes delivering security across diverse topics like network, infrastructure, application and data, where he brings a holistic point of view to problem solving.

Scott MacDonald is a Director in the Advisory Security practice within PwC. He has consulted for several clients across multiple industries, including financial services, health care, automotive and retail. Scott has 10 years of experience in delivering Identity Management solutions.

John Misczak is a member of the Advisory Security practice within PwC. He has experience implementing multiple Identity and Access Management solutions, specializing in Oracle Identity Manager and Business Process Execution Language (BPEL).

    Read the article

  • Enablement 2.0 Get Specialized!

    - by mseika
    The Oracle PartnerNetwork Specialized program is releasing new certifications on our latest products, and partners are invited to be the first candidates.

    Oracle Taleo Enterprise Cloud Service 2012 Specialization

    New Specialist Guided Learning Paths Available!
    · Oracle Taleo Cloud Service 2012 Sales Specialist
    · Oracle Taleo Cloud Service 2012 PreSales Specialist
    · Oracle Taleo Cloud Service 2012 Support Specialist

    New Specialist Assessments Available!
    · Oracle Taleo Cloud Service 2012 Sales Specialist Assessment
    · Oracle Taleo Cloud Service 2012 PreSales Specialist Assessment
    · Oracle Taleo Cloud Service 2012 Support Specialist Assessment

    Coming Soon! - New Certified Implementation Specialist Exam!
    · Oracle Taleo Cloud Service 2012 Recruiting Certified Implementation Specialist

    Contact Us: Please direct any inquiries you may have to the Oracle Partner Enablement team at [email protected].

    Read the article


  • Can't get Wireless RT2x00usb driver to work, and can't blacklist it

    - by TheLQ
    After a two-year hiatus from Linux, I'm trying it out again. And then I run into driver issues... I have an old Linksys WUSB54G v4 Wireless USB Adapter. In previous versions I had to use a combination of Ndiswrapper and Wicd in the hope of getting it working. In 10.10, apparently there are built-in drivers for it. Unfortunately they don't work: it fails to connect to my WPA network, and fails to connect to my open unencrypted network. Wicd fails at "Obtaining IP address", or when using static IPs it fails at verifying connectivity to the network.

    Getting fed up, I tried the ndiswrapper approach. Installed and configured, but still not working, even when blacklisting the rt2570 module. So for some debugging I added some lines to my /etc/modprobe.d/blacklist.conf file:

        blacklist rt2570
        blacklist prism54usb
        blacklist rt2x00lib
        blacklist rt2x00usb

    Restart and find this:

        lordquackstar@quackbeast:/etc/modprobe.d$ lsmod | grep rt2
        rt2500usb              18049  0
        rt2x00usb               9779  1 rt2500usb
        rt2x00lib              27275  2 rt2500usb,rt2x00usb
        led_class               2633  1 rt2x00lib
        mac80211              231541  2 rt2x00usb,rt2x00lib
        cfg80211              144470  2 rt2x00lib,mac80211

    Seems to be ignored... Tried this:

        lordquackstar@quackbeast:/etc/modprobe.d$ sudo rmmod -f rt2x00usb
        ERROR: Removing 'rt2x00usb': Resource temporarily unavailable
        lordquackstar@quackbeast:/etc/modprobe.d$ sudo rmmod -f rt2x00lib
        ERROR: Removing 'rt2x00lib': Resource temporarily unavailable

    and still couldn't connect. Restarted and was back to the same modules loading. Maybe there's something in the log:

        lordquackstar@quackbeast:/etc/modprobe.d$ tail -n100000 /var/log/syslog | grep rt2
        Dec 13 19:01:15 quackbeast kernel: [   23.698056] Registered led device: rt2500usb-phy0::radio
        Dec 13 19:01:15 quackbeast kernel: [   23.698140] Registered led device: rt2500usb-phy0::quality
        Dec 13 19:01:15 quackbeast kernel: [   23.701680] usbcore: registered new interface driver rt2500usb
        Dec 13 19:01:15 quackbeast NetworkManager[855]: <info> (wlan0): new 802.11 WiFi device (driver: 'rt2500usb' ifindex: 4)
        Dec 13 19:17:47 quackbeast kernel: [   23.521759] Registered led device: rt2500usb-phy0::radio
        Dec 13 19:17:47 quackbeast kernel: [   23.521824] Registered led device: rt2500usb-phy0::quality
        Dec 13 19:17:47 quackbeast kernel: [   23.524740] usbcore: registered new interface driver rt2500usb
        Dec 13 19:17:47 quackbeast NetworkManager[798]: <info> (wlan0): new 802.11 WiFi device (driver: 'rt2500usb' ifindex: 4)

    Seems to be autoloading. So this means that even if I pull it out, remove the module, and get it working, it still won't work when it's plugged in all the time.

    More info:

        lordquackstar@quackbeast:/etc/modprobe.d$ sudo lshw -C Network
        *SNIP*
        *-network
           description: Wireless interface
           physical id: 1
           bus info: usb@1:2
           logical name: wlan0
           serial: 00:12:17:9b:f3:1e
           capabilities: ethernet physical wireless
           configuration: broadcast=yes driver=rt2500usb driverversion=2.6.35-24-generic firmware=N/A link=no multicast=yes wireless=IEEE 802.11bg

    USB:

        lordquackstar@quackbeast:/etc/modprobe.d$ lsusb | grep -i rt
        Bus 001 Device 003: ID 13b1:000d Linksys WUSB54G v4 802.11g Adapter [Ralink RT2500USB]

    Any suggestions on how to either fix the rt2x00usb driver or permanently block it from loading? Note that I already have ndiswrapper installed.
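    One approach worth trying (a sketch, assuming the stock modprobe tooling on 10.10; the file name below is hypothetical): pair each blacklist entry with an install directive. A blacklist entry only stops alias-based autoloading, while install replaces the command modprobe runs, so dependency resolution cannot pull the module in either:

        # /etc/modprobe.d/disable-rt2x00.conf (hypothetical file name)
        # "blacklist" only blocks loading by device alias; "install" overrides
        # the load command itself, so nothing can bring the module in as a dependency.
        blacklist rt2500usb
        install rt2500usb /bin/false
        install rt2x00usb /bin/false
        install rt2x00lib /bin/false

    After writing the file, rebuilding the initramfs (sudo update-initramfs -u) and rebooting should keep the built-in driver from ever claiming the device, leaving it free for ndiswrapper.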

    Read the article

  • Basic WCF Unit Testing

    - by Brian
    Coming from someone who loves the KISS method, I was surprised to find that I was making something entirely too complicated. I know, shocker, right? Now I'm no unit testing ninja, and not really a WCF ninja either, but I had a desire to test service calls without a) going to a database, or b) making sure that the entire WCF infrastructure was tip top. Who does? It's not the environment I want to test, just the logic I've written, to ensure there aren't any side effects. So, for the K.I.S.S. method: assuming that you're using a WCF service library (you are using service libraries, correct?), it's really as easy as referencing the service library, then building out some stubs for bunking up data.

    The service contract

    We'll use a very basic service contract, just for getting and updating an entity. I've used the default "CompositeType" that is in the template, handy only for examples like this. I've added an Id property and overridden ToString and Equals.

        [ServiceContract]
        public interface IMyService
        {
            [OperationContract]
            CompositeType GetCompositeType(int id);

            [OperationContract]
            CompositeType SaveCompositeType(CompositeType item);

            [OperationContract]
            CompositeTypeCollection GetAllCompositeTypes();
        }

    The implementation

    When I implement the service, I want to be able to send known data into it so I don't have to fuss around with database access or the like. To do this, I first have to create an interface for my data access:

        public interface IMyServiceDataManager
        {
            CompositeType GetCompositeType(int id);
            CompositeType SaveCompositeType(CompositeType item);
            CompositeTypeCollection GetAllCompositeTypes();
        }

    For the purposes of this we can ignore our implementation of the IMyServiceDataManager interface inside of the service. Pretend it uses LINQ to Entities to map its data, or maybe it goes old school and uses EntLib to talk to SQL. Maybe it talks to a tape spool on a mainframe on the third floor. It really doesn't matter. That's the point. So here's what our service looks like in its most basic form:

        public CompositeType GetCompositeType(int id)
        {
            // sanity checks
            if (id == 0) throw new ArgumentException("id cannot be zero.");
            return _dataManager.GetCompositeType(id);
        }

        public CompositeType SaveCompositeType(CompositeType item)
        {
            return _dataManager.SaveCompositeType(item);
        }

        public CompositeTypeCollection GetAllCompositeTypes()
        {
            return _dataManager.GetAllCompositeTypes();
        }

    But what about the data manager? The constructor takes care of that. I don't want to expose any testing ability in release (or the ability for someone to swap out my data manager), so this is what we get:

        IMyServiceDataManager _dataManager;

        public MyService()
        {
            _dataManager = new MyServiceDataManager();
        }

        #if DEBUG
        public MyService(IMyServiceDataManager dataManager)
        {
            _dataManager = dataManager;
        }
        #endif

    The Stub

    Now it's time for the rubber to meet the road... Like most guys that ever talk about unit testing, here's a sample that is painting in *very* broad strokes. The important part, however, is that within the test project I've created a bunk (unit testing purists would say stub, I believe) object that implements my IMyServiceDataManager so that I can deal with known data.
    Here it is:

        internal class FakeMyServiceDataManager : IMyServiceDataManager
        {
            internal FakeMyServiceDataManager()
            {
                Collection = new CompositeTypeCollection();
                Collection.AddRange(new CompositeTypeCollection
                {
                    new CompositeType { Id = 1, BoolValue = true,  StringValue = "foo 1", },
                    new CompositeType { Id = 2, BoolValue = false, StringValue = "foo 2", },
                    new CompositeType { Id = 3, BoolValue = true,  StringValue = "foo 3", },
                });
            }

            CompositeTypeCollection Collection { get; set; }

            #region IMyServiceDataManager Members

            public CompositeType GetCompositeType(int id)
            {
                if (id <= 0) return null;
                return Collection.SingleOrDefault(m => m.Id == id);
            }

            public CompositeType SaveCompositeType(CompositeType item)
            {
                var existing = Collection.SingleOrDefault(m => m.Id == item.Id);
                if (null != existing)
                {
                    Collection.Remove(existing);
                }

                if (item.Id == 0)
                {
                    item.Id = Collection.Count > 0 ? Collection.Max(m => m.Id) + 1 : 1;
                }

                Collection.Add(item);
                return item;
            }

            public CompositeTypeCollection GetAllCompositeTypes()
            {
                return Collection;
            }

            #endregion
        }

    So it's tough to see in this example why any of this is necessary, but in a real world application you would/should/could be applying much more logic within your service implementation. This all serves to ensure that, between refactorings and the like, it doesn't send sparking cogs all about or let the blue smoke out. Here's a simple test that brings it all home; remember, broad strokes:

        [TestMethod]
        public void MyService_GetCompositeType_ExpectedValues()
        {
            FakeMyServiceDataManager fake = new FakeMyServiceDataManager();
            MyService service = new MyService(fake);

            CompositeType expected = fake.GetCompositeType(1);
            CompositeType actual = service.GetCompositeType(1);

            Assert.AreEqual<CompositeType>(expected, actual,
                "Objects are not equal. Expected: {0}; Actual: {1};", expected, actual);
        }

    Summary

    That's really all there is to it. You could use software x or framework y to do the exact same thing, but in my case I just didn't really feel like it. This speaks volumes to my not-yet-ninja unit testing prowess.

    Read the article

  • How To Log Into The Desktop, Add a Start Menu, and Disable Hot Corners in Windows 8

    - by Chris Hoffman
    If you don’t have a touchscreen computer and spend all your time on the desktop, Windows 8’s new interface can seem intrusive. Microsoft won’t allow you to disable the new interface, but Classic Shell provides the options Microsoft didn’t. In addition to providing a Start button, Classic Shell can take you straight to the desktop when you log in and disable the hot corners that activate the charms and Metro app switcher. There are other programs that do this, but Classic Shell is free and open source. Many of the alternatives, such as Start8 and RetroUI, are commercial apps that cost money. We’ve covered Classic Shell in the past, but it’s come a long way since then.

    Read the article

  • Oracle University New Courses (Week 10)

    - by swalker
    Among this month's new offerings from Oracle University, you will find:

    Database
    · RAC & Grid Infrastructure for Oracle Solaris System Administration (1 day)
    · Oracle Database 11g: Performance Tuning (Training On Demand)

    Development Tools
    · Oracle Database: Program with PL/SQL (Training On Demand)

    MySQL
    · MySQL for Database Administrators (Training On Demand)

    Fusion Middleware
    · Oracle SOA Suite 11g : Concepts de base (3 days)
    · Oracle WebCenter Portal 11g: Build Portals With Spaces (3 days)
    · Oracle WebCenter Content 11g: Site Studio Essentials (5 days)
    · Oracle BPM 11g Modeling (3 days)

    Business Intelligence & Data Warehousing
    · Oracle BI 11g : Mise à niveau et nouvelles fonctionnalités (2 days)
    · Oracle BI Applications 7.9.6: Implementation for Oracle EBS (4 days)
    · Oracle BI Applications 7.9.6: Implementation for Siebel CRM (4 days)
    · Oracle BI 11g R1: Build Repositories (Training On Demand)

    Fusion Applications
    · Fusion Applications: Extend Applications with ADF (5 days)

    E-Business Suite
    · R12.x Extend Oracle Applications: Building OA Framework Applications (Training On Demand)

    PeopleSoft
    · PeopleSoft Integration Tools Rel 8.50 (Training On Demand)

    Contact your local Oracle University team for information and course dates.

    Read the article

  • When to use SOAP over REST

    So, how do REST-based services differ from SOAP-based services, and when should you use SOAP?

    Representational State Transfer (REST) implements standard HTTP/HTTPS as an interface, allowing clients to obtain access to resources based on requested URIs. An example URI may look like this: http://mydomain.com/service/method?parameter=var1&parameter=var2. It is important to note that REST-based services are stateless because HTTP/HTTPS is natively stateless. One of the many benefits of implementing HTTP/HTTPS as an interface can be found in caching: a web service's responses can be cached much like requested web pages are, which reduces web server processing and improves response times because content is already processed and stored for immediate access. Typical actions performed by REST-based services include generic CRUD (Create, Read, Update, and Delete) operations and operations that do not require state. (A minimal client sketch follows the references below.)

    Simple Object Access Protocol (SOAP), on the other hand, uses a generic interface in order to transport messages. Unlike REST, SOAP can use HTTP/HTTPS, SMTP, JMS, or any other standard transport protocol. Furthermore, SOAP utilizes XML in the following ways:
    · Defines a message
    · Defines how a message is to be processed
    · Defines the encoding of a message
    · Lays out procedure calls and responses

    Where REST aligns more with a resource view, SOAP aligns more with a method view, in that business logic is typically exposed as methods through a SOAP web service, which can retain state. In addition, SOAP requests are not cached, so every request will be processed by the server. As stated before, SOAP can retain state, and this gives it a special advantage over REST for services that need to perform transactions where multiple calls to a service are needed in order to complete a task. Additionally, SOAP is better suited for enterprise-level services that implement standard exchange formats in the form of contracts, since REST does not currently support this. A real-world example of where SOAP is preferred over REST can be seen in the banking industry, where money is transferred from one account to another. SOAP would allow a bank to perform a transaction on an account, and if the transaction failed, SOAP would automatically retry the transaction, ensuring that the request was completed. With REST, by contrast, failed service calls must be handled manually by the requesting application.

    References:
    Francia, S. (2010). SOAP vs. REST. Retrieved November 20, 2011, from spf13: http://spf13.com/post/soap-vs-rest
    Rozlog, M. (2010). REST and SOAP: When Should I Use Each (or Both)? Retrieved November 20, 2011, from InfoQ: http://www.infoq.com/articles/rest-soap-when-to-use-each
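    To make the REST half concrete, here is a minimal client sketch in C#. The endpoint is the hypothetical URI from the example above, and the code assumes nothing beyond the standard System.Net.Http library:

        using System;
        using System.Net.Http;
        using System.Threading.Tasks;

        class RestClientSketch
        {
            static async Task Main()
            {
                using (var client = new HttpClient())
                {
                    // With REST, the resource is addressed by URI and the standard
                    // HTTP verb (GET) conveys the intent, so any intermediary that
                    // understands HTTP can cache the response.
                    var response = await client.GetAsync(
                        "http://mydomain.com/service/method?parameter=var1&parameter=var2");
                    response.EnsureSuccessStatusCode();
                    Console.WriteLine(await response.Content.ReadAsStringAsync());
                }
            }
        }

    Because the call is a plain HTTP GET, no SOAP envelope is built or parsed; the trade-off, as described above, is that the client itself must decide what to do when the call fails.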

    Read the article

< Previous Page | 162 163 164 165 166 167 168 169 170 171 172 173  | Next Page >