Search Results

Search found 22357 results on 895 pages for 'virtual member manager'.


  • How do I bridge my wireless to my wired connection?

    - by gatoatigrado
    I want to bridge the wireless connection with the wired connection. The wireless is the host and the wired is the client, so to speak: internet sharing (inet <--> wifi <--> ethernet). I tried to bridge my ethernet connection by going to Network Manager > Edit Connections > Wired > Edit > IPv4 Settings > "Shared to other computers" (screenshots). However, it seems to automatically disconnect half a second after it says "connection established"!

    Edit 2: I got the Network Manager logs, and it seems the address is already in use. See http://pastebin.com/DjqRshxW, line 45. nm-tool output is here: http://pastebin.com/x5Aci5V1. I tried Firestarter, as mentioned in another thread, and had no luck. I don't have time to bother with a dozen command-line tricks, unless it's copy-and-pasting a shell script... so please suggest ways that use GUIs and/or won't leave my computer in a confused state (e.g. disabling Network Manager, manually connecting to a WPA network, installing brutils, etc.).

    Edit: one idea that would work, if it's possible -- is there a way to share connections via SSH and SOCKS5? I'd need to do this at a system-wide level, though; I only know how to do it through the browser now. Then I could run ifconfig eth0 192.168.4.1 on the computer sharing the internet connection, and ifconfig eth0 192.168.4.2 on the computer I'm trying to share with; I know this works for host-to-host transfers.

    Edit 3: if I run sudo killall dnsmasq, then nothing is using the 10.42.43.1 address that Network Manager's sharing wants to use. But now it just takes longer to die, with the error "NetworkManager[5935]: dnsmasq died with signal 9" [ http://pastebin.com/4FNtpugi ]. Looking at just the commands [ http://pastebin.com/1vrtQeWk ], maybe it's trying to route eth0 to itself? I'm not that familiar with networking, though.
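    For diagnosing the address conflict mentioned in edit 2, a minimal sketch (not from the original post; assumes the ss and lsof utilities are installed) to see which process is holding the address NetworkManager's sharing mode wants:

        # List any listening socket bound to 10.42.43.1 and the owning process
        sudo ss -lntup | grep 10.42.43.1

        # Alternative: ask lsof which processes have sockets on that address
        sudo lsof -nP -i @10.42.43.1

    If the owner turns out to be a system-wide dnsmasq instance rather than the one NetworkManager spawns itself, disabling that service (instead of killall) would let connection sharing claim the address cleanly.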

    Read the article

  • Why is a software development life-cycle so inefficient?

    - by user87166
    Currently, the software development lifecycle followed at the IT company I work for is:

    1. The "business" works with a solution manager to build a Business Requirement document.
    2. The solution manager works with the program manager (PM) to build a Functional Spec.
    3. The PM works with the engineering lead to develop a release plan, and with the engineering team to develop technical specifications.
    4. If any clarifications are required, developers contact the PM, who contacts the solution manager, who contacts the business, and all the way back -- introducing a latency of nearly 24 hours and massive email chains for any clarification.
    5. By the time the tech spec is done, nearly a month has passed in back and forth.
    6. Two weeks then go to development while test writes test cases.
    7. Code is dropped formally to test, and test starts raising bugs. Even if there is one root cause for ten different issues, and it's easily fixed, developers are not allowed to give fresh code to test for the next week.
    8. After 2-3 such drops to test, the code is given to the ops team as a "golden drop" (two months have now passed since the beginning).
    9. The ops team deploys the code in a staging environment. If it runs stable for a week, it is promoted to UAT, and after two weeks of that it is promoted to prod. If any bugs are found here -- well, applying for a visa requires less paperwork.

    This entire process is followed even if a single SSRS report is to be released. How do other companies handle such requirements? I wonder why the business cannot just hand the requirements to developers, have developers build and deploy to UAT themselves, expose it to the business who raise functional bugs, and after fixing those, promote to prod (even for more complex work).

    Read the article

  • Disable touchpad tap-to-click on Oneiric Ocelot

    - by AWE
    You've heard this a million times, but "tap to click" is a pain in the behind, and I want to disable it. There is no touchpad tab in gpointing-device-settings, nor in Mouse and Touchpad in System Settings. I've tried some commands in the terminal, but it's all crap. dconf-editor doesn't react. How about solving this once and for all? xinput list shows:

        Virtual core pointer                        id=2    [master pointer (3)]
          Virtual core XTEST pointer                id=4    [slave pointer (2)]
          PS/2 Generic Mouse                        id=13   [slave pointer (2)]
        Virtual core keyboard                       id=3    [master keyboard (2)]
          Virtual core XTEST keyboard               id=5    [slave keyboard (3)]
          Power Button                              id=6    [slave keyboard (3)]
          Video Bus                                 id=7    [slave keyboard (3)]
          Video Bus                                 id=8    [slave keyboard (3)]
          Power Button                              id=9    [slave keyboard (3)]
          Sleep Button                              id=10   [slave keyboard (3)]
          Laptop_Integrated_Webcam_HD               id=11   [slave keyboard (3)]
          AT Translated Set 2 keyboard              id=12   [slave keyboard (3)]
          Dell WMI hotkeys                          id=14   [slave keyboard (3)]
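    A sketch of two common fixes on 11.10, assuming the Synaptics driver is handling the touchpad (the xinput output above lists only a PS/2 Generic Mouse, so the touchpad may not be recognized as a touchpad at all, in which case these will have no effect):

        # Disable one-finger tap-to-click for the current session
        synclient TapButton1=0

        # Persist the setting through the GNOME settings daemon, if the schema exists
        gsettings set org.gnome.settings-daemon.peripherals.touchpad tap-to-click false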

    Read the article

  • ALPS touchpad on DELL Inspiron I15RN-3647BK with Ubuntu 11.10 x64

    - by Miguel
    I can't get the multitouch features of the ALPS touchpad on my Dell Inspiron I15RN-3647BK working with Ubuntu 11.10 x64, kernel 3.0.0-17-generic. I have tried this driver, but it doesn't work: http://people.canonical.com/~sforshee/alps-touchpad/psmouse-alps-0.10/psmouse-alps-dkms_0.10_all.deb. This is the output of the following command:

        user@laptop:~$ xinput list
        Virtual core pointer                        id=2    [master pointer (3)]
          Virtual core XTEST pointer                id=4    [slave pointer (2)]
          PS/2 Generic Mouse                        id=12   [slave pointer (2)]
        Virtual core keyboard                       id=3    [master keyboard (2)]
          Virtual core XTEST keyboard               id=5    [slave keyboard (3)]
          Power Button                              id=6    [slave keyboard (3)]
          Video Bus                                 id=7    [slave keyboard (3)]
          Power Button                              id=8    [slave keyboard (3)]
          Sleep Button                              id=9    [slave keyboard (3)]
          Laptop_Integrated_Webcam_HD               id=10   [slave keyboard (3)]
          AT Translated Set 2 keyboard              id=11   [slave keyboard (3)]
          Dell WMI hotkeys

    That is the same with or without the ALPS driver from the Canonical repository; the touchpad just works like a PS/2 generic mouse. Any ideas? Thanks in advance.
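    For reference, the usual way to install and activate a DKMS package like the one linked above is roughly this (a sketch; the module reload step assumes the ALPS code lives in psmouse, as the package name suggests):

        # Install the DKMS package
        sudo dpkg -i psmouse-alps-dkms_0.10_all.deb

        # Reload the psmouse module so the rebuilt driver takes effect
        sudo rmmod psmouse
        sudo modprobe psmouse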

    Read the article

  • How can I configure the touchpad and keyboard settings on a Dell Inspiron 5110?

    - by Robik
    I am using Ubuntu 11.10 on a Dell Inspiron 5110 laptop, and I want the following three things:

    1. There is a button at the top right corner of the laptop which can be used to turn the screen off. It works in Windows, but it does not work in Ubuntu 11.10. Even the laptop's manual says the button is supported only in Windows. Is there a way to activate it in Ubuntu 11.10?
    2. Some of the keys, such as Break, are missing. Can I use other keys (or combinations of other keys) to function as those missing keys?
    3. In the Mouse and Touchpad program, there is no tab for the touchpad. I want to enable vertical and horizontal scrolling. How do I do that?

    The command xinput list shows:

        Virtual core pointer                        id=2    [master pointer (3)]
          Virtual core XTEST pointer                id=4    [slave pointer (2)]
          PS/2 Generic Mouse                        id=13   [slave pointer (2)]
        Virtual core keyboard                       id=3    [master keyboard (2)]
          Virtual core XTEST keyboard               id=5    [slave keyboard (3)]
          Power Button                              id=6    [slave keyboard (3)]
          Video Bus                                 id=7    [slave keyboard (3)]
          Video Bus                                 id=8    [slave keyboard (3)]
          Power Button                              id=9    [slave keyboard (3)]
          Sleep Button                              id=10   [slave keyboard (3)]
          Laptop_Integrated_Webcam_HD               id=11   [slave keyboard (3)]
          AT Translated Set 2 keyboard              id=12   [slave keyboard (3)]
          Dell WMI hotkeys

    Please help!
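    For the scrolling part of question 3, a sketch of the usual approach, assuming the Synaptics driver is actually driving the touchpad (the xinput output above shows only a PS/2 Generic Mouse, so this may not apply until the touchpad is properly recognized):

        # Enable vertical and horizontal edge scrolling for the current session
        synclient VertEdgeScroll=1 HorizEdgeScroll=1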

    Read the article

  • Partner Spotlight: Compasso

    - by Joe Diemer
    EVERY MONDAY at 2 PM Eastern Time (USA)

    Join Compasso, one of Oracle's Enterprise Manager partners that is Specialized in Application Quality Management, for a series of presentations focused on Oracle end-to-end performance management. The objective is to provide a comprehensive overview of Oracle's Enterprise Manager 12c software. Attendees will learn how Oracle can provide a complete end-to-end management solution, not only for Oracle Databases but also for other aspects of the Oracle technology stack, such as middleware and Oracle applications. Key areas include:

    1. Next-Generation Management Framework, which covers:
       - Better performance and scalability
       - Modular, extensible architecture
       - Easier to manage and diagnose
       - Enhanced security
       - Web 2.0 UI
    2. Enhanced Application-to-Disk Management:
       - End-to-end application performance management
       - Fusion Applications management
       - Exadata and Exalogic management
    3. Complete Lifecycle Management for Enterprise Private Cloud:
       - Self-service provisioning
       - Policy-based resource and workload management
       - Chargeback

    There are two ways you can sign up:
    1. Call Nicole Wakefield of Compasso at +1 (971) 245-5042
    2. Email [email protected] with the subject line "Monday Enterprise Manager Presentations"

    WebEx conference details will be provided with your confirmation. For more information about partner opportunities with Enterprise Manager, including becoming Specialized, click here. For more information about Compasso, click here.

    Read the article

  • ???????????????????1(?????? ???)

    - by rika.tokumichi
    [The Japanese text of this post was lost to character-encoding damage; only fragments are recoverable. It was an OTN Japan report, in two parts, on Developers Summit (Devsumi) 2010 sessions about server technology for cloud environments, covering Oracle JRockit Virtual Edition (the Oracle JRockit JVM running directly on a hypervisor, without a guest OS) and Oracle Automatic Storage Management (Oracle ASM), with pointers to the session slides on SlideShare and to the Developers Summit 2010 website.]

    Read the article

  • NHibernate fatal error

    - by Afif Lamloumi
    I get an error (System.InvalidCastException: Unable to cast object of type 'AccountProxy' to type 'System.String'.) with the following code. I mapped the tables (Account, AccountString, EventData, ...) of the OpenGTS database (open source), and the error occurs when I call a function from EventData.cs:

        IQuery query = session.CreateQuery("FROM Eventdata");
        IList pets = query.List();
        return pets;

    The stack trace:

        [InvalidCastException: Impossible d'effectuer un cast d'un objet de type 'AccountProxy' en type 'System.String'.]
        (Object , Object[] , SetterCallback ) +431
        NHibernate.Bytecode.Lightweight.AccessOptimizer.SetPropertyValues(Object target, Object[] values) +20
        NHibernate.Tuple.Component.PocoComponentTuplizer.SetPropertyValues(Object component, Object[] values) +49
        NHibernate.Type.ComponentType.SetPropertyValues(Object component, Object[] values, EntityMode entityMode) +34
        NHibernate.Type.ComponentType.ResolveIdentifier(Object value, ISessionImplementor session, Object owner) +150
        NHibernate.Type.ComponentType.NullSafeGet(IDataReader rs, String[] names, ISessionImplementor session, Object owner) +42
        NHibernate.Loader.Loader.GetKeyFromResultSet(Int32 i, IEntityPersister persister, Object id, IDataReader rs, ISessionImplementor session) +93
        NHibernate.Loader.Loader.GetRowFromResultSet(IDataReader resultSet, ISessionImplementor session, QueryParameters queryParameters, LockMode[] lockModeArray, EntityKey optionalObjectKey, IList hydratedObjects, EntityKey[] keys, Boolean returnProxies) +92
        NHibernate.Loader.Loader.DoQuery(ISessionImplementor session, QueryParameters queryParameters, Boolean returnProxies) +675
        NHibernate.Loader.Loader.DoQueryAndInitializeNonLazyCollections(ISessionImplementor session, QueryParameters queryParameters, Boolean returnProxies) +129
        NHibernate.Loader.Loader.DoList(ISessionImplementor session, QueryParameters queryParameters) +116

        [GenericADOException: could not execute query
        [ select eventdata0_.deviceID as deviceID5_, eventdata0_.timestamp as timestamp5_, eventdata0_.statusCode as statusCode5_, eventdata0_.accountID as accountID5_, eventdata0_.latitude as latitude5_, eventdata0_.longitude as longitude5_, eventdata0_.gpsAge as gpsAge5_, eventdata0_.speedKPH as speedKPH5_, eventdata0_.heading as heading5_, eventdata0_.altitude as altitude5_, eventdata0_.transportID as transpo11_5_, eventdata0_.inputMask as inputMask5_, eventdata0_.outputMask as outputMask5_, eventdata0_.address as address5_, eventdata0_.DataSource as DataSource5_, eventdata0_.rawdata as rawdata5_, eventdata0_.distanceKM as distanceKM5_, eventdata0_.odometerKM as odometerKM5_, eventdata0_.geozoneIndex as geozone19_5_, eventdata0_.geozoneID as geozoneID5_, eventdata0_.creationTime as creatio21_5_ from eventdata eventdata0_ ]
        [SQL: select eventdata0_.deviceID as deviceID5_, eventdata0_.timestamp as timestamp5_, eventdata0_.statusCode as statusCode5_, eventdata0_.accountID as accountID5_, eventdata0_.latitude as latitude5_, eventdata0_.longitude as longitude5_, eventdata0_.gpsAge as gpsAge5_, eventdata0_.speedKPH as speedKPH5_, eventdata0_.heading as heading5_, eventdata0_.altitude as altitude5_, eventdata0_.transportID as transpo11_5_, eventdata0_.inputMask as inputMask5_, eventdata0_.outputMask as outputMask5_, eventdata0_.address as address5_, eventdata0_.DataSource as DataSource5_, eventdata0_.rawdata as rawdata5_, eventdata0_.distanceKM as distanceKM5_, eventdata0_.odometerKM as odometerKM5_, eventdata0_.geozoneIndex as geozone19_5_, eventdata0_.geozoneID as geozoneID5_, eventdata0_.creationTime as creatio21_5_ from eventdata eventdata0_]]
        NHibernate.Loader.Loader.DoList(ISessionImplementor session, QueryParameters queryParameters) +213
        NHibernate.Loader.Loader.ListIgnoreQueryCache(ISessionImplementor session, QueryParameters queryParameters) +18
        NHibernate.Loader.Loader.List(ISessionImplementor session, QueryParameters queryParameters, ISet`1 querySpaces, IType[] resultTypes) +79
        NHibernate.Hql.Ast.ANTLR.Loader.QueryLoader.List(ISessionImplementor session, QueryParameters queryParameters) +51
        NHibernate.Hql.Ast.ANTLR.QueryTranslatorImpl.List(ISessionImplementor session, QueryParameters queryParameters) +231
        NHibernate.Engine.Query.HQLQueryPlan.PerformList(QueryParameters queryParameters, ISessionImplementor session, IList results) +369
        NHibernate.Impl.SessionImpl.List(String query, QueryParameters queryParameters, IList results) +317
        NHibernate.Impl.SessionImpl.List(String query, QueryParameters parameters) +282
        NHibernate.Impl.QueryImpl.List() +163
        DATA1.EventdataExtensions.GetEventdata() in C:\Users\HP\Desktop\our_project\DATA1\Queries\Eventdata.cs:33
        MvcApplication7.Controllers.HistoriqueController.Index() in C:\Users\HP\Desktop\our_project\MvcApplication7\Controllers\HistoriqueController.cs:17
        lambda_method(Closure , ControllerBase , Object[] ) +62
        System.Web.Mvc.ActionMethodDispatcher.Execute(ControllerBase controller, Object[] parameters) +17
        System.Web.Mvc.ReflectedActionDescriptor.Execute(ControllerContext controllerContext, IDictionary`2 parameters) +208
        System.Web.Mvc.ControllerActionInvoker.InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary`2 parameters) +27
        System.Web.Mvc.<>c__DisplayClass15.<InvokeActionMethodWithFilters>b__12() +55
        System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodFilter(IActionFilter filter, ActionExecutingContext preContext, Func`1 continuation) +263
        System.Web.Mvc.<>c__DisplayClass17.<InvokeActionMethodWithFilters>b__14() +19
        System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodWithFilters(ControllerContext controllerContext, IList`1 filters, ActionDescriptor actionDescriptor, IDictionary`2 parameters) +191
        System.Web.Mvc.ControllerActionInvoker.InvokeAction(ControllerContext controllerContext, String actionName) +343
        System.Web.Mvc.Controller.ExecuteCore() +116
        System.Web.Mvc.ControllerBase.Execute(RequestContext requestContext) +97
        System.Web.Mvc.ControllerBase.System.Web.Mvc.IController.Execute(RequestContext requestContext) +10
        System.Web.Mvc.<>c__DisplayClassb.<BeginProcessRequest>b__5() +37
        System.Web.Mvc.Async.<>c__DisplayClass1.<MakeVoidDelegate>b__0() +21
        System.Web.Mvc.Async.<>c__DisplayClass8`1.<BeginSynchronous>b__7(IAsyncResult _) +12
        System.Web.Mvc.Async.WrappedAsyncResult`1.End() +62
        System.Web.Mvc.<>c__DisplayClasse.<EndProcessRequest>b__d() +50
        System.Web.Mvc.SecurityUtil.<GetCallInAppTrustThunk>b__0(Action f) +7
        System.Web.Mvc.SecurityUtil.ProcessInApplicationTrust(Action action) +22
        System.Web.Mvc.MvcHandler.EndProcessRequest(IAsyncResult asyncResult) +60
        System.Web.Mvc.MvcHandler.System.Web.IHttpAsyncHandler.EndProcessRequest(IAsyncResult result) +9
        System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +8841105
        System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +184

    Any suggestions? How can I correct this error?

    Data entity class (outtake from comment):

        public class MyClass
        {
            public virtual string DeviceID { get; set; }
            public virtual int Timestamp { get; set; }
            public virtual string Account { get; set; }
            public virtual int StatusCode { get; set; }
            public virtual double Latitude { get; set; }
            public virtual double Longitude { get; set; }
            public virtual int GpsAge { get; set; }
            public virtual double SpeedKPH { get; set; }
            public virtual double Heading { get; set; }

            public override bool Equals(object obj) { return true; }
            public override int GetHashCode() { return 0; }
        }
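    The exception suggests the accountID column is mapped to an Account entity somewhere (hence the AccountProxy) while the class declares it as a string. A minimal sketch of the likely fix (property and mapping names here are assumptions, not taken from the original post):

        // If the mapping declares a many-to-one to Account, the property must
        // be typed as the entity rather than as string:
        public virtual Account Account { get; set; }

        // If it should remain a plain string column instead, map it as a simple
        // <property> (not a component/many-to-one) and keep:
        // public virtual string AccountID { get; set; }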

    Read the article

  • How can I include multiple tables in my LINQ to Entities eager loading using MVC 4 and C#?

    - by EBENEZER CURVELLO
    I have six classes, and I am trying to use LINQ to Entities to get the SiglaUF information from the deepest table (in the view - MVC). The problem is that I receive the following error: "The ObjectContext instance has been disposed and can no longer be used for operations that require a connection."

    The view looks like this:

        @model IEnumerable<DiskPizzaDelivery.Models.EnderecoCliente>
        @foreach (var item in Model) {
            @Html.DisplayFor(modelItem => item.CEP.Cidade.UF.SiglaUF)
        }

    The query that I use:

        var cliente = context.Clientes
            .Include(e => e.Enderecos)
            .Include(e1 => e1.Enderecos.Select(cep => cep.CEP))
            .SingleOrDefault();

    The question is: how can I improve this query to eager-load "Cidade" and "UF" as well? See the classes below:

        public partial class Cliente
        {
            [Key]
            [DatabaseGeneratedAttribute(DatabaseGeneratedOption.Identity)]
            public int IdCliente { get; set; }
            public string Email { get; set; }
            public string Senha { get; set; }
            public virtual ICollection<EnderecoCliente> Enderecos { get; set; }
        }

        public partial class EnderecoCliente
        {
            public int IdEndereco { get; set; }
            public int IdCliente { get; set; }
            public string CEPEndereco { get; set; }
            public string Numero { get; set; }
            public string Complemento { get; set; }
            public string PontoReferencia { get; set; }
            public virtual Cliente Cliente { get; set; }
            public virtual CEP CEP { get; set; }
        }

        public partial class CEP
        {
            public string CodCep { get; set; }
            public string Tipo_Logradouro { get; set; }
            public string Logradouro { get; set; }
            public string Bairro { get; set; }
            public int CodigoUF { get; set; }
            public int CodigoCidade { get; set; }
            public virtual Cidade Cidade { get; set; }
        }

        public partial class Cidade
        {
            public int CodigoCidade { get; set; }
            public string NomeCidade { get; set; }
            public int CodigoUF { get; set; }
            public virtual ICollection<CEP> CEPs { get; set; }
            public virtual UF UF { get; set; }
            public virtual ICollection<UF> UFs { get; set; }
        }

        public partial class UF
        {
            public int CodigoUF { get; set; }
            public string SiglaUF { get; set; }
            public string NomeUF { get; set; }
            public int CodigoCidadeCapital { get; set; }
            public virtual ICollection<Cidade> Cidades { get; set; }
            public virtual Cidade Cidade { get; set; }
        }

    The full query with the login filters:

        var cliente = context.Clientes
            .Where(c => c.Email == email)
            .Where(c => c.Senha == senha)
            .Include(e => e.Enderecos)
            .Include(e1 => e1.Enderecos.Select(cep => cep.CEP))
            .SingleOrDefault();

    Thanks!
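    A sketch of one way to pull the whole chain in a single query with the Select-based form of Include (assuming EF 4.1 or later and the classes above):

        // Requires "using System.Data.Entity;" for the lambda form of Include.
        // Eager-loads Enderecos -> CEP -> Cidade -> UF in one query, so the
        // view never touches a lazy-load proxy after the context is disposed.
        var cliente = context.Clientes
            .Where(c => c.Email == email && c.Senha == senha)
            .Include(c => c.Enderecos.Select(e => e.CEP.Cidade.UF))
            .SingleOrDefault();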

    Read the article

  • SPARC T5-4 LDoms for RAC and WebLogic Clusters

    - by Jeff Taylor-Oracle
    I wanted to use two Oracle SPARC T5-4 servers to simultaneously host both Oracle RAC and a WebLogic Server Cluster, and I chose to use Oracle VM Server for SPARC to create a cluster like this. There are plenty of trade-offs and decisions that need to be made, for example:

    - Rather than configuring the system by hand, you might want to use an Oracle SuperCluster T5-8.
    - My configuration is similar to jsavit's "Availability Best Practices - Example configuring a T5-8", but I chose to ignore some of the advice.
    - Maybe I should have included an alternate service domain, but I decided that I already had enough redundancy.

    Both Oracle SPARC T5-4 servers were to be configured like this:

        Domain    CPUs   Cores   Memory
        Control   0.25   4       64 GB
        App LDom  2.75   44      704 GB
        DB LDom   1      16      256 GB

    The systems started with everything in the primary domain: # ldm list NAME STATE FLAGS CONS VCPU MEMORY UTIL NORM UPTIME primary active -n-c-- UART 512 1023G 0.0% 0.0% 11m # ldm list-spconfig factory-default [current] primary # ldm list -o core,memory,physio NAME primary CORE CID CPUSET 0 (0, 1, 2, 3, 4, 5, 6, 7) 1 (8, 9, 10, 11, 12, 13, 14, 15) 2 (16, 17, 18, 19, 20, 21, 22, 23) -- SNIP 62 (496, 497, 498, 499, 500, 501, 502, 503) 63 (504, 505, 506, 507, 508, 509, 510, 511) MEMORY RA PA SIZE 0x30000000 0x30000000 255G 0x80000000000 0x80000000000 256G 0x100000000000 0x100000000000 256G 0x180000000000 0x180000000000 256G # Give this memory block to the DB LDom IO DEVICE PSEUDONYM OPTIONS pci@300 pci_0 pci@340 pci_1 pci@380 pci_2 pci@3c0 pci_3 pci@400 pci_4 pci@440 pci_5 pci@480 pci_6 pci@4c0 pci_7 pci@300/pci@1/pci@0/pci@6 /SYS/RCSA/PCIE1 pci@300/pci@1/pci@0/pci@c /SYS/RCSA/PCIE2 pci@300/pci@1/pci@0/pci@4/pci@0/pci@c /SYS/MB/SASHBA0 pci@300/pci@1/pci@0/pci@4/pci@0/pci@8 /SYS/RIO/NET0 pci@340/pci@1/pci@0/pci@6 /SYS/RCSA/PCIE3 pci@340/pci@1/pci@0/pci@c /SYS/RCSA/PCIE4 pci@380/pci@1/pci@0/pci@a /SYS/RCSA/PCIE9 pci@380/pci@1/pci@0/pci@4 /SYS/RCSA/PCIE10 pci@3c0/pci@1/pci@0/pci@e /SYS/RCSA/PCIE11 pci@3c0/pci@1/pci@0/pci@8 /SYS/RCSA/PCIE12 pci@400/pci@1/pci@0/pci@e /SYS/RCSA/PCIE5 pci@400/pci@1/pci@0/pci@8 /SYS/RCSA/PCIE6 pci@440/pci@1/pci@0/pci@e /SYS/RCSA/PCIE7 pci@440/pci@1/pci@0/pci@8 /SYS/RCSA/PCIE8 pci@480/pci@1/pci@0/pci@a /SYS/RCSA/PCIE13 pci@480/pci@1/pci@0/pci@4 /SYS/RCSA/PCIE14 pci@4c0/pci@1/pci@0/pci@8 /SYS/RCSA/PCIE15 pci@4c0/pci@1/pci@0/pci@4 /SYS/RCSA/PCIE16 pci@4c0/pci@1/pci@0/pci@c/pci@0/pci@c /SYS/MB/SASHBA1 pci@4c0/pci@1/pci@0/pci@c/pci@0/pci@4 /SYS/RIO/NET2 Added an additional service processor configuration: # ldm add-spconfig split # ldm list-spconfig factory-default primary split [current] And removed many of the
resources from the primary domain: # ldm start-reconf primary # ldm set-core 4 primary # ldm set-memory 32G primary # ldm rm-io pci@340 primary # ldm rm-io pci@380 primary # ldm rm-io pci@3c0 primary # ldm rm-io pci@400 primary # ldm rm-io pci@440 primary # ldm rm-io pci@480 primary # ldm rm-io pci@4c0 primary # init 6 Needed to add resources to the guest domains: # ldm add-domain db # ldm set-core cid=`seq -s"," 48 63` db # ldm add-memory mblock=0x180000000000:256G db # ldm add-io pci@480 db # ldm add-io pci@4c0 db # ldm add-domain app # ldm set-core 44 app # ldm set-memory 704G  app # ldm add-io pci@340 app # ldm add-io pci@380 app # ldm add-io pci@3c0 app # ldm add-io pci@400 app # ldm add-io pci@440 app Needed to set up services: # ldm add-vds primary-vds0 primary # ldm add-vcc port-range=5000-5100 primary-vcc0 primary Needed to add a virtual network port for the WebLogic application domain: # ipadm NAME              CLASS/TYPE STATE        UNDER      ADDR lo0               loopback   ok           --         --    lo0/v4         static     ok           --         ...    lo0/v6         static     ok           --         ... net0              ip         ok           --         ...    net0/v4        static     ok           --         xxx.xxx.xxx.xxx/24    net0/v6        addrconf   ok           --         ....    net0/v6        addrconf   ok           --         ... net8              ip         ok           --         --    net8/v4        static     ok           --         ... # dladm show-phys LINK              MEDIA                STATE      SPEED  DUPLEX    DEVICE net1              Ethernet             unknown    0      unknown   ixgbe1 net0              Ethernet             up         1000   full      ixgbe0 net8              Ethernet             up         10     full      usbecm2 # ldm add-vsw net-dev=net0 primary-vsw0 primary # ldm add-vnet vnet1 primary-vsw0 app Needed to add a virtual disk to the WebLogic application domain: # format Searching for disks...done AVAILABLE DISK SELECTIONS:        0. c0t5000CCA02505F874d0 <HITACHI-H106060SDSUN600G-A2B0-558.91GB>           /scsi_vhci/disk@g5000cca02505f874           /dev/chassis/SPARC_T5-4.AK00084038/SYS/SASBP0/HDD0/disk        1. c0t5000CCA02506C468d0 <HITACHI-H106060SDSUN600G-A2B0-558.91GB>           /scsi_vhci/disk@g5000cca02506c468           /dev/chassis/SPARC_T5-4.AK00084038/SYS/SASBP0/HDD1/disk        2. c0t5000CCA025067E5Cd0 <HITACHI-H106060SDSUN600G-A2B0-558.91GB>           /scsi_vhci/disk@g5000cca025067e5c           /dev/chassis/SPARC_T5-4.AK00084038/SYS/SASBP0/HDD2/disk        3. 
c0t5000CCA02506C258d0 <HITACHI-H106060SDSUN600G-A2B0-558.91GB>           /scsi_vhci/disk@g5000cca02506c258           /dev/chassis/SPARC_T5-4.AK00084038/SYS/SASBP0/HDD3/disk Specify disk (enter its number): ^C # ldm add-vdsdev /dev/dsk/c0t5000CCA02506C468d0s2 HDD1@primary-vds0 # ldm add-vdisk HDD1 HDD1@primary-vds0 app Add some additional spice to the pot: # ldm set-variable auto-boot\\?=false db # ldm set-variable auto-boot\\?=false app # ldm set-var boot-device=HDD1 app Bind the logical domains: # ldm bind db # ldm bind app At the end of the process, the system is set up like this: # ldm list -o core,memory,physio NAME             primary          CORE     CID    CPUSET     0      (0, 1, 2, 3, 4, 5, 6, 7)     1      (8, 9, 10, 11, 12, 13, 14, 15)     2      (16, 17, 18, 19, 20, 21, 22, 23)     3      (24, 25, 26, 27, 28, 29, 30, 31) MEMORY     RA               PA               SIZE                0x30000000       0x30000000       32G IO     DEVICE                           PSEUDONYM        OPTIONS     pci@300                          pci_0               pci@300/pci@1/pci@0/pci@6        /SYS/RCSA/PCIE1     pci@300/pci@1/pci@0/pci@c        /SYS/RCSA/PCIE2     pci@300/pci@1/pci@0/pci@4/pci@0/pci@c /SYS/MB/SASHBA0     pci@300/pci@1/pci@0/pci@4/pci@0/pci@8 /SYS/RIO/NET0   ------------------------------------------------------------------------------ NAME             app              CORE     CID    CPUSET     4      (32, 33, 34, 35, 36, 37, 38, 39)     5      (40, 41, 42, 43, 44, 45, 46, 47)     6      (48, 49, 50, 51, 52, 53, 54, 55)     7      (56, 57, 58, 59, 60, 61, 62, 63)     8      (64, 65, 66, 67, 68, 69, 70, 71)     9      (72, 73, 74, 75, 76, 77, 78, 79)     10     (80, 81, 82, 83, 84, 85, 86, 87)     11     (88, 89, 90, 91, 92, 93, 94, 95)     12     (96, 97, 98, 99, 100, 101, 102, 103)     13     (104, 105, 106, 107, 108, 109, 110, 111)     14     (112, 113, 114, 115, 116, 117, 118, 119)     15     (120, 121, 122, 123, 124, 125, 126, 127)     16     (128, 129, 130, 131, 132, 133, 134, 135)     17     (136, 137, 138, 139, 140, 141, 142, 143)     18     (144, 145, 146, 147, 148, 149, 150, 151)     19     (152, 153, 154, 155, 156, 157, 158, 159)     20     (160, 161, 162, 163, 164, 165, 166, 167)     21     (168, 169, 170, 171, 172, 173, 174, 175)     22     (176, 177, 178, 179, 180, 181, 182, 183)     23     (184, 185, 186, 187, 188, 189, 190, 191)     24     (192, 193, 194, 195, 196, 197, 198, 199)     25     (200, 201, 202, 203, 204, 205, 206, 207)     26     (208, 209, 210, 211, 212, 213, 214, 215)     27     (216, 217, 218, 219, 220, 221, 222, 223)     28     (224, 225, 226, 227, 228, 229, 230, 231)     29     (232, 233, 234, 235, 236, 237, 238, 239)     30     (240, 241, 242, 243, 244, 245, 246, 247)     31     (248, 249, 250, 251, 252, 253, 254, 255)     32     (256, 257, 258, 259, 260, 261, 262, 263)     33     (264, 265, 266, 267, 268, 269, 270, 271)     34     (272, 273, 274, 275, 276, 277, 278, 279)     35     (280, 281, 282, 283, 284, 285, 286, 287)     36     (288, 289, 290, 291, 292, 293, 294, 295)     37     (296, 297, 298, 299, 300, 301, 302, 303)     38     (304, 305, 306, 307, 308, 309, 310, 311)     39     (312, 313, 314, 315, 316, 317, 318, 319)     40     (320, 321, 322, 323, 324, 325, 326, 327)     41     (328, 329, 330, 331, 332, 333, 334, 335)     42     (336, 337, 338, 339, 340, 341, 342, 343)     43     (344, 345, 346, 347, 348, 349, 350, 351)     44     (352, 353, 354, 355, 356, 357, 358, 359)     45     (360, 361, 362, 363, 364, 365, 366, 367)     46     
(368, 369, 370, 371, 372, 373, 374, 375)     47     (376, 377, 378, 379, 380, 381, 382, 383) MEMORY     RA               PA               SIZE                0x30000000       0x830000000      192G     0x4000000000     0x80000000000    256G     0x8080000000     0x100000000000   256G IO     DEVICE                           PSEUDONYM        OPTIONS     pci@340                          pci_1               pci@380                          pci_2               pci@3c0                          pci_3               pci@400                          pci_4               pci@440                          pci_5               pci@340/pci@1/pci@0/pci@6        /SYS/RCSA/PCIE3     pci@340/pci@1/pci@0/pci@c        /SYS/RCSA/PCIE4     pci@380/pci@1/pci@0/pci@a        /SYS/RCSA/PCIE9     pci@380/pci@1/pci@0/pci@4        /SYS/RCSA/PCIE10     pci@3c0/pci@1/pci@0/pci@e        /SYS/RCSA/PCIE11     pci@3c0/pci@1/pci@0/pci@8        /SYS/RCSA/PCIE12     pci@400/pci@1/pci@0/pci@e        /SYS/RCSA/PCIE5     pci@400/pci@1/pci@0/pci@8        /SYS/RCSA/PCIE6     pci@440/pci@1/pci@0/pci@e        /SYS/RCSA/PCIE7     pci@440/pci@1/pci@0/pci@8        /SYS/RCSA/PCIE8 ------------------------------------------------------------------------------ NAME             db               CORE     CID    CPUSET     48     (384, 385, 386, 387, 388, 389, 390, 391)     49     (392, 393, 394, 395, 396, 397, 398, 399)     50     (400, 401, 402, 403, 404, 405, 406, 407)     51     (408, 409, 410, 411, 412, 413, 414, 415)     52     (416, 417, 418, 419, 420, 421, 422, 423)     53     (424, 425, 426, 427, 428, 429, 430, 431)     54     (432, 433, 434, 435, 436, 437, 438, 439)     55     (440, 441, 442, 443, 444, 445, 446, 447)     56     (448, 449, 450, 451, 452, 453, 454, 455)     57     (456, 457, 458, 459, 460, 461, 462, 463)     58     (464, 465, 466, 467, 468, 469, 470, 471)     59     (472, 473, 474, 475, 476, 477, 478, 479)     60     (480, 481, 482, 483, 484, 485, 486, 487)     61     (488, 489, 490, 491, 492, 493, 494, 495)     62     (496, 497, 498, 499, 500, 501, 502, 503)     63     (504, 505, 506, 507, 508, 509, 510, 511) MEMORY     RA               PA               SIZE                0x80000000       0x180000000000   256G IO     DEVICE                           PSEUDONYM        OPTIONS     pci@480                          pci_6               pci@4c0                          pci_7               pci@480/pci@1/pci@0/pci@a        /SYS/RCSA/PCIE13     pci@480/pci@1/pci@0/pci@4        /SYS/RCSA/PCIE14     pci@4c0/pci@1/pci@0/pci@8        /SYS/RCSA/PCIE15     pci@4c0/pci@1/pci@0/pci@4        /SYS/RCSA/PCIE16     pci@4c0/pci@1/pci@0/pci@c/pci@0/pci@c /SYS/MB/SASHBA1     pci@4c0/pci@1/pci@0/pci@c/pci@0/pci@4 /SYS/RIO/NET2   Start the domains: # ldm start app LDom app started # ldm start db LDom db started Make sure to start the vntsd service that was created, above. # svcs -a | grep ldo disabled        8:38:38 svc:/ldoms/vntsd:default online          8:38:58 svc:/ldoms/agents:default online          8:39:25 svc:/ldoms/ldmd:default # svcadm enable vntsd Now use the MAC address to configure the Solaris 11 Automated Installation. 
Database Logical Domain # telnet localhost 5000 {0} ok devalias screen                   /pci@4c0/pci@1/pci@0/pci@c/pci@0/pci@7/display@0 disk7                    /pci@4c0/pci@1/pci@0/pci@c/pci@0/pci@c/scsi@0/disk@p3 disk6                    /pci@4c0/pci@1/pci@0/pci@c/pci@0/pci@c/scsi@0/disk@p2 disk5                    /pci@4c0/pci@1/pci@0/pci@c/pci@0/pci@c/scsi@0/disk@p1 disk4                    /pci@4c0/pci@1/pci@0/pci@c/pci@0/pci@c/scsi@0/disk@p0 scsi1                    /pci@4c0/pci@1/pci@0/pci@c/pci@0/pci@c/scsi@0 net3                     /pci@4c0/pci@1/pci@0/pci@c/pci@0/pci@4/network@0,1 net2                     /pci@4c0/pci@1/pci@0/pci@c/pci@0/pci@4/network@0 virtual-console          /virtual-devices/console@1 name                     aliases {0} ok boot net2 Boot device: /pci@4c0/pci@1/pci@0/pci@c/pci@0/pci@4/network@0  File and args: 1000 Mbps full duplex Link up Requesting Internet Address for xx:xx:xx:xx:xx:xx Requesting Internet Address for xx:xx:xx:xx:xx:xx WLS Logical Domain # telnet localhost 5001 {0} ok devalias hdd1                     /virtual-devices@100/channel-devices@200/disk@0 vnet1                    /virtual-devices@100/channel-devices@200/network@0 net                      /virtual-devices@100/channel-devices@200/network@0 disk                     /virtual-devices@100/channel-devices@200/disk@0 virtual-console          /virtual-devices/console@1 name                     aliases {0} ok boot net Boot device: /virtual-devices@100/channel-devices@200/network@0  File and args: Requesting Internet Address for xx:xx:xx:xx:xx:xx Requesting Internet Address for xx:xx:xx:xx:xx:xx Repeat the process for the second SPARC T5-4, install Solaris, RAC and WebLogic Cluster, and you are ready to go. Maybe buying a SuperCluster would have been easier.

    Read the article

  • Why does JAXB not create a member variable in its generated code when an XML schema base type and subtype have the same element declared in them?

    - by belltower
    I have a question regarding JAXB-generated classes. As you can see, I have a complex type, DG_PaymentIdentification1, declared in my schema. It's a restriction of PaymentIdentification1 and is structurally identical to it. I also have a type called DG_CreditTransferTransactionInformation10, which has the base type CreditTransferTransactionInformation10 and is likewise identical to it. I have included the relevant XML schema snippets below.

        <xs:complexType name="DG_PaymentIdentification1">
          <xs:complexContent>
            <xs:restriction base="PaymentIdentification1">
              <xs:sequence>
                <xs:element name="InstrId" type="DG_Max35Text_REF" minOccurs="0"/>
                <xs:element name="EndToEndId" type="DG_Max35Text_REF" id="DG-41"/>
              </xs:sequence>
            </xs:restriction>
          </xs:complexContent>
        </xs:complexType>

        <xs:complexType name="PaymentIdentification1">
          <xs:sequence>
            <xs:element name="InstrId" type="Max35Text" minOccurs="0"/>
            <xs:element name="EndToEndId" type="Max35Text"/>
          </xs:sequence>
        </xs:complexType>

        <xs:complexType name="DG_CreditTransferTransactionInformation10">
          <xs:complexContent>
            <xs:restriction base="CreditTransferTransactionInformation10">
              <xs:sequence>
                <xs:element name="PmtId" type="DG_PaymentIdentification1"/>
                ...

        <xs:simpleType name="DG_Max35Text_REF">
          <xs:restriction base="DG_NotEmpty35">
            <xs:pattern value="[\-A-Za-z0-9\+/\?:\(\)\.,'&#x20;]*"/>
          </xs:restriction>
        </xs:simpleType>

        <xs:simpleType name="Max35Text">
          <xs:restriction base="xs:string">
            <xs:minLength value="1"/>
            <xs:maxLength value="35"/>
          </xs:restriction>
        </xs:simpleType>

    JAXB generates the following Java class for DG_CreditTransferTransactionInformation10:

        @XmlAccessorType(XmlAccessType.FIELD)
        @XmlType(name = "DG_CreditTransferTransactionInformation10")
        public class DGCreditTransferTransactionInformation10
            extends CreditTransferTransactionInformation10
        {
        }

    My question is: why doesn't the generated DGCreditTransferTransactionInformation10 class have a variable of type DG_PaymentIdentification1 in it? The base class CreditTransferTransactionInformation10 does have a PaymentIdentification1 declared in it. Is there any way of ensuring that DGCreditTransferTransactionInformation10 will have a DG_PaymentIdentification1 in it?
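    A short sketch of the practical consequence (accessor names are assumptions based on default JAXB naming, not taken from the actual generated code): the restricted DG_ type adds no Java members of its own, so you work with the property inherited from the base class, and the tighter DG_ rules are enforced only during schema validation:

        // The subclass declares no PmtId field; it inherits the base-class
        // property, which is typed as PaymentIdentification1.
        DGCreditTransferTransactionInformation10 cti =
            new DGCreditTransferTransactionInformation10();

        PaymentIdentification1 pmtId = new PaymentIdentification1();
        pmtId.setEndToEndId("E2E-0001"); // must also satisfy DG_Max35Text_REF when validated
        cti.setPmtId(pmtId);             // inherited setter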

    Read the article

  • From Binary to Data Structures

    - by Cédric Menzi
    Table of Contents

    - Introduction
    - PE file format and COFF header
    - COFF file header
    - BaseCoffReader
    - Byte4ByteCoffReader
    - UnsafeCoffReader
    - ManagedCoffReader
    - Conclusion
    - History

    This article is also available on CodeProject.

    Introduction

    Sometimes you want to parse well-formed binary data and bring it into your objects to do some dirty stuff with it. In the Windows world, most data structures are stored in a special binary format. Either we call a WinAPI function, or we want to read from special files like images, spool files, executables or maybe the previously announced Outlook Personal Folders File. Most specifications for these files can be found in the MSDN Library: Open Specification. In my example, we are going to get the COFF (Common Object File Format) file header from a PE (Portable Executable). The exact specification can be found here: PECOFF.

    PE file format and COFF header

    Before we start, we need to know how this file is formatted. The figure referenced in the original article shows an overview of the Microsoft PE executable format (source: Microsoft). Our goal is to get the PE header. As we can see, the image starts with an MS-DOS 2.0 header, which is not important for us. From the documentation we can read: "...After the MS DOS stub, at the file offset specified at offset 0x3c, is a 4-byte...". With this information we know our reader has to jump to location 0x3c and read the offset to the signature. The signature is always 4 bytes and ensures that the image is a PE file; it is: PE\0\0. To prove this, we first seek to offset 0x3c and check whether the file contains the signature. We need to declare some constants, because we do not want magic numbers:

        private const int PeSignatureOffsetLocation = 0x3c;
        private const int PeSignatureSize = 4;
        private const string PeSignatureContent = "PE";

    Then a method for moving the reader to the correct location to read the offset of the signature. With this method we always move the underlying Stream of the BinaryReader to the start location of the PE signature:

        private void SeekToPeSignature(BinaryReader br)
        {
            // seek to the offset for the PE signature
            br.BaseStream.Seek(PeSignatureOffsetLocation, SeekOrigin.Begin);
            // read the offset
            int offsetToPeSig = br.ReadInt32();
            // seek to the start of the PE signature
            br.BaseStream.Seek(offsetToPeSig, SeekOrigin.Begin);
        }

    Now we can check whether it is a valid PE image by reading whether the next 4 bytes contain the content "PE":

        private bool IsValidPeSignature(BinaryReader br)
        {
            // read 4 bytes to get the PE signature
            byte[] peSigBytes = br.ReadBytes(PeSignatureSize);
            // convert it to a string and trim \0 at the end of the content
            string peContent = Encoding.Default.GetString(peSigBytes).TrimEnd('\0');
            // check if PE is in the content
            return peContent.Equals(PeSignatureContent);
        }

    With this basic functionality we have a good base reader class for trying out the different methods of parsing the COFF file header.
    COFF file header

    The COFF header has the following structure:

        Offset  Size  Field
        0       2     Machine
        2       2     NumberOfSections
        4       4     TimeDateStamp
        8       4     PointerToSymbolTable
        12      4     NumberOfSymbols
        16      2     SizeOfOptionalHeader
        18      2     Characteristics

    If we translate this table to code, we get something like this:

        [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
        public struct CoffHeader
        {
            public MachineType Machine;
            public ushort NumberOfSections;
            public uint TimeDateStamp;
            public uint PointerToSymbolTable;
            public uint NumberOfSymbols;
            public ushort SizeOfOptionalHeader;
            public Characteristic Characteristics;
        }

    BaseCoffReader

    All readers do the same thing, so we go to the patterns library in our head and see that the Strategy pattern and the Template Method pattern stick out on the bookshelf. I have decided to use the Template Method pattern in this case, because Parse() should handle the IO for all implementations and the concrete parsing should be done in the derived classes:

        public CoffHeader Parse()
        {
            using (var br = new BinaryReader(File.Open(_fileName, FileMode.Open, FileAccess.Read, FileShare.Read)))
            {
                SeekToPeSignature(br);
                if (!IsValidPeSignature(br))
                {
                    throw new BadImageFormatException();
                }
                return ParseInternal(br);
            }
        }

        protected abstract CoffHeader ParseInternal(BinaryReader br);

    First we open the BinaryReader and seek to the PE signature, then we check that it contains a valid PE signature, and the rest is done by the derived implementations.

    Byte4ByteCoffReader

    The first solution uses the BinaryReader. It is the general way to get the data. We only need to know the order, the data type and its size. If we read byte for byte, we could comment out the first line in the CoffHeader structure, because we have control over the order of the member assignments:

        protected override CoffHeader ParseInternal(BinaryReader br)
        {
            CoffHeader coff = new CoffHeader();
            coff.Machine = (MachineType)br.ReadInt16();
            coff.NumberOfSections = (ushort)br.ReadInt16();
            coff.TimeDateStamp = br.ReadUInt32();
            coff.PointerToSymbolTable = br.ReadUInt32();
            coff.NumberOfSymbols = br.ReadUInt32();
            coff.SizeOfOptionalHeader = (ushort)br.ReadInt16();
            coff.Characteristics = (Characteristic)br.ReadInt16();
            return coff;
        }

    If the structure is as short as the COFF header here and the specification will never change, there is probably no reason to change the strategy. But if a data type is changed, a new member is added or the ordering of members is changed, the maintenance costs of this method are very high.

    UnsafeCoffReader

    Another way to bring the data into this structure is a "magical" unsafe trick. As above, we know the layout and order of the data structure. Now we need the StructLayout attribute, because we have to ensure that the .NET runtime allocates the structure in the same order as it is specified in the source code. We also need to enable "Allow unsafe code (/unsafe)" in the project's build properties. Then we need to add the following constructor to the CoffHeader structure:

        [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
        public struct CoffHeader
        {
            public CoffHeader(byte[] data)
            {
                unsafe
                {
                    fixed (byte* packet = &data[0])
                    {
                        this = *(CoffHeader*)packet;
                    }
                }
            }
        }

    The "magic" trick is in the statement this = *(CoffHeader*)packet;. What happens here? We have a fixed block of data somewhere in memory, and because a struct in C# is a value type, the assignment operator = copies the whole data of the structure and not only the reference.
    To fill the structure with data, we need to pass the data as bytes into the CoffHeader structure. This can be achieved by reading the exact size of the structure from the PE file:

        protected override CoffHeader ParseInternal(BinaryReader br)
        {
            return new CoffHeader(br.ReadBytes(Marshal.SizeOf(typeof(CoffHeader))));
        }

    This solution is the fastest way to parse the data and bring it into the structure, but it is unsafe, and it could introduce some security and stability risks.

    ManagedCoffReader

    In this solution we use the same structure-assignment approach as above, but we replace the unsafe part of the constructor with the following managed part:

        [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
        public struct CoffHeader
        {
            public CoffHeader(byte[] data)
            {
                IntPtr coffPtr = IntPtr.Zero;
                try
                {
                    int size = Marshal.SizeOf(typeof(CoffHeader));
                    coffPtr = Marshal.AllocHGlobal(size);
                    Marshal.Copy(data, 0, coffPtr, size);
                    this = (CoffHeader)Marshal.PtrToStructure(coffPtr, typeof(CoffHeader));
                }
                finally
                {
                    Marshal.FreeHGlobal(coffPtr);
                }
            }
        }

    Conclusion

    We saw that we can parse well-formed binary data into our data structures using different approaches. The first is probably the clearest, because we know each member, its size and its ordering, and we have control over reading the data for each member. But if a member is added or the structure changes for some reason, we need to change the reader. The two other solutions use the structure-assignment approach. In the unsafe implementation we need to compile the project with the /unsafe option: we gain performance, but take on some security risks. The managed variant trades a little of that speed for marshalling through unmanaged memory without requiring unsafe code.
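    A quick usage sketch to round things off (hypothetical: the article does not show how _fileName is supplied, so the constructor below is an assumption):

        // Parse the COFF header of a PE file with one of the readers above
        var reader = new ManagedCoffReader(@"C:\Windows\System32\notepad.exe");
        CoffHeader header = reader.Parse();
        Console.WriteLine("Sections: {0}", header.NumberOfSections);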

    Read the article

  • Dynamic Types and DynamicObject References in C#

    - by Rick Strahl
    I've been working a bit with C# custom dynamic types for several customers recently, and I've seen some confusion in understanding how dynamic types are referenced. This discussion specifically centers on types that implement IDynamicMetaObjectProvider or subclass DynamicObject, as opposed to arbitrary type casts of standard .NET types. IDynamicMetaObjectProvider types are treated specially when they are cast to the dynamic type. Assume for a second that I've created my own implementation of a custom dynamic type called DynamicFoo, which is about as simple a dynamic class as I can think of:

        public class DynamicFoo : DynamicObject
        {
            Dictionary<string, object> properties = new Dictionary<string, object>();

            public string Bar { get; set; }
            public DateTime Entered { get; set; }

            public override bool TryGetMember(GetMemberBinder binder, out object result)
            {
                result = null;
                if (!properties.ContainsKey(binder.Name))
                    return false;
                result = properties[binder.Name];
                return true;
            }

            public override bool TrySetMember(SetMemberBinder binder, object value)
            {
                properties[binder.Name] = value;
                return true;
            }
        }

    This class has an internal dictionary member, and I'm exposing this dictionary through a dynamic by implementing DynamicObject. This implementation exposes the properties dictionary so the dictionary keys can be referenced like properties (foo.NewProperty = "Cool!"). I override TryGetMember() and TrySetMember(), which are fired at runtime every time you access a 'property' on a dynamic instance of this DynamicFoo type.

    Strong Typing and Dynamic Casting

    I now can instantiate and use DynamicFoo in a couple of different ways.

    Strong typing:

        DynamicFoo fooExplicit = new DynamicFoo();
        var fooVar = new DynamicFoo();

    These two commands are essentially identical and use strong typing. The compiler generates identical code for both of them. The var statement is merely a compiler directive to infer the type of fooVar at compile time, so the type of fooVar is DynamicFoo, just like that of fooExplicit. This is very static - nothing dynamic about it - and it completely ignores the IDynamicMetaObjectProvider implementation of my class above, as it's never used. Using either of these I can access the native properties:

        DynamicFoo fooExplicit = new DynamicFoo();

        // static typing assignments
        fooVar.Bar = "Barred!";
        fooExplicit.Entered = DateTime.Now;

        // echo back static values
        Console.WriteLine(fooVar.Bar);
        Console.WriteLine(fooExplicit.Entered);

    but I have no access whatsoever to the properties dictionary. Basically, this creates a strongly typed instance of the type with access only to the strongly typed interface. You get no dynamic behavior at all. The IDynamicMetaObjectProvider features don't kick in until you cast the type to dynamic. If I try to access a non-existing property on fooExplicit, I get a compilation error telling me that the property doesn't exist. Again, clearly and utterly non-dynamic.

    Dynamic:

        dynamic fooDynamic = new DynamicFoo();

    fooDynamic, on the other hand, is created as a dynamic type, and it's a completely different beast. I can also create a dynamic by simply casting any type to dynamic, like this:

        DynamicFoo fooExplicit = new DynamicFoo();
        dynamic fooDynamic = fooExplicit;

    Note that dynamic typically doesn't require an explicit cast, as the compiler automatically performs the cast, so there's no need to use "as dynamic". Dynamic functionality works at runtime and allows the dynamic wrapper to look up and call members dynamically.
    A dynamic type will look for members to access or call in two places:

    1. The strongly typed members of the object
    2. The IDynamicMetaObjectProvider interface methods used to access members

    So rather than statically linking and calling a method or retrieving a property, the dynamic type looks up - at runtime - where the value actually comes from. It's essentially late binding, which allows a runtime determination of what action to take when a member is accessed, *if* the member you are accessing does not exist on the object. Class members are checked first, before the IDynamicMetaObjectProvider interface methods kick in. All of the following works with the dynamic type:

        dynamic fooDynamic = new DynamicFoo();

        // dynamic typing assignments
        fooDynamic.NewProperty = "Something new!";
        fooDynamic.LastAccess = DateTime.Now;

        // dynamically assigning static properties
        fooDynamic.Bar = "dynamic barred";
        fooDynamic.Entered = DateTime.Now;

        // echo back dynamic values
        Console.WriteLine(fooDynamic.NewProperty);
        Console.WriteLine(fooDynamic.LastAccess);
        Console.WriteLine(fooDynamic.Bar);
        Console.WriteLine(fooDynamic.Entered);

    The dynamic type can access the native class properties (Bar and Entered) and create and read new ones (NewProperty, LastAccess), all using a single type instance, which is pretty cool. As you can see, it's pretty easy to create an extensible type this way that can add members at runtime.

    The Alter Ego of IDynamicObject

    The key point here is that all three statements - explicit, var and dynamic - declare a new DynamicFoo(), but the dynamic declaration results in completely different behavior than the first two, simply because the type has been cast to dynamic. Dynamic binding means that the type loses its typical strong-typing, compile-time features. You can see this easily in the Visual Studio code editor: as soon as you assign a value to a dynamic, you lose IntelliSense, meaning there's no IntelliSense and no compiler type checking on any members you apply to this instance. If you're new to the dynamic type, it might seem really confusing that a single type can behave differently depending on how it is cast, but that's exactly what happens when you use a type that implements IDynamicMetaObjectProvider. Declare the type as its strong type name and you only get to access the native instance members of the type. Declare or cast it to dynamic and you get dynamic behavior, which accesses native members plus uses the IDynamicMetaObjectProvider implementation to handle any missing member definitions by running custom code. You can easily cast objects back and forth between dynamic and the original type:

        dynamic fooDynamic = new DynamicFoo();
        fooDynamic.NewProperty = "New Property Value";

        DynamicFoo foo = fooDynamic;
        foo.Bar = "Barred";

    Here the code starts out with a dynamic cast and a dynamic assignment. The code then casts the value back to DynamicFoo. Notice that when casting from dynamic to DynamicFoo and back, we typically do not have to specify the cast explicitly - the compiler can infer the type, so I don't need to specify "as dynamic" or "as DynamicFoo".

    Moral of the Story

    This easy interchange between dynamic and the underlying type is actually super useful, because it allows you to create extensible objects that can expose non-member data stores through an object interface.
    You can create an object that hosts a number of strongly typed properties, then cast the object to dynamic and add additional dynamic properties to the same instance at runtime. You can easily switch back and forth between the strongly typed instance, to access the well-known strongly typed properties, and dynamic, for the properties added at runtime. Keep in mind that dynamic member access has quite a bit of overhead and is definitely slower than strongly typed binding, so if you're accessing the strongly typed parts of your objects, you definitely want to use a strongly typed reference. Reserve dynamic for the dynamic members to optimize your code. The real beauty of dynamic is that with very little effort you can build expandable objects, or objects that expose different data stores through an object interface. I'll have more on this in my next post, when I create a customized and extensible Expando object based on DynamicObject.

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in CSharp, .NET.

    Read the article

  • How to deal with different programming styles in a team?

    - by user3287
    We have a small dev team (only 3 developers) and we recently got a new team member. While he is a smart coder, his coding style is completely different from ours. Our existing code base contains mostly readable, clean and maintainable code, but the new team member is quickly changing many files, introducing ugly hacks and shortcuts, using defines all over the place, adding functions in the wrong places, etc. My question is if others have experienced such a situation before, and if anyone has tips on how to talk to him.

    Read the article

  • Unit Testing with NUnit and Moles Redux

    - by João Angelo
    Almost two years ago, when Moles was still being packaged alongside Pex, I wrote a post on how to run NUnit tests supporting moled types. A lot has changed since then: Moles is now distributed independently of Pex, but it maintains support for integration with NUnit and other testing frameworks. For NUnit, the support is provided by an addin class library (Microsoft.Moles.NUnit.dll) that you need to reference in your test project so that you can decorate your tests with the MoledAttribute. The addin DLL must also be placed in the addins folder inside the NUnit installation directory.

    There is, however, a downside. Since Moles and NUnit follow different release cycles and the addin DLL must be built against a specific NUnit version, you may find that the release included with the latest version of Moles does not work with your version of NUnit. Fortunately, the code for building the NUnit addin is supplied in the archive (moles.samples.zip) that you can find in the Documentation folder inside the Moles installation directory. By rebuilding the addin against your specific version of NUnit, you are able to support any version.

    Note also that in Moles 0.94.51023.0 the addin code did not support the use of TestCaseAttribute in your moled tests. If you need this support, you only need to make a couple of changes. Change the ITestDecorator.Decorate method in the MolesAddin class:

        Test ITestDecorator.Decorate(Test test, MemberInfo member)
        {
            SafeDebug.AssumeNotNull(test, "test");
            SafeDebug.AssumeNotNull(member, "member");

            bool isTestFixture = true;
            isTestFixture &= test.IsSuite;
            isTestFixture &= test.FixtureType != null;

            bool hasMoledAttribute = true;
            hasMoledAttribute &= !SafeArray.IsNullOrEmpty(
                member.GetCustomAttributes(typeof(MoledAttribute), false));

            if (!isTestFixture && hasMoledAttribute)
            {
                return new MoledTest(test);
            }
            return test;
        }

    Change the Tests property in the MoledTest class:

        public override System.Collections.IList Tests
        {
            get
            {
                if (this.test.Tests == null)
                {
                    return null;
                }

                var moled = new List<Test>(this.test.Tests.Count);
                foreach (var test in this.test.Tests)
                {
                    moled.Add(new MoledTest((Test)test));
                }
                return moled;
            }
        }

    Disclaimer: I only tested this implementation against NUnit version 2.5.10.11092.

    Finally, you just need to run the NUnit console runner through the Moles runner. A quick example follows:

        moles.runner.exe [Tests.dll] /r:nunit-console.exe /x86 /args:[NUnitArgument1] /args:[NUnitArgument2]
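    For completeness, a sketch of what a moled NUnit test looks like (assuming a moles assembly has been generated for mscorlib so that the MDateTime type exists; this example is not from the original post):

        [Test, Moled]
        public void Now_IsDetoured()
        {
            // Detour the static DateTime.Now property for this moled test
            MDateTime.NowGet = () => new DateTime(2012, 1, 1);

            Assert.AreEqual(2012, DateTime.Now.Year);
        }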

    Read the article

  • Don't Miss At Devoxx!!!

    - by Yolande Poirier
Come by the IoT Hack Fest, which starts with the session "Kickstart your Raspberry Pi and/or Leap Motion project, part II" on Tuesday from 9:30am to 12:00pm, to learn how to start a project with the Raspberry Pi and Leap Motion. In the afternoon, you can join an existing project or create your own with the help of experts on the Raspberry Pi, Leap Motion and other boards.

At the Oracle booth, Java experts will be available to answer your questions and demo the new features of the Java Platform, including Java Embedded, JavaFX, Java SE and Java EE. This year, the chess game that was first demoed at the JavaOne keynotes last September will be showcased at Devoxx. Duke is coming to Devoxx this year: you can get your picture taken with Duke on Tuesday, Wednesday and Thursday (Nov. 12-14) from 12:00 to 18:00. The beer bash will be Tuesday from 17:30 to 19:30, and Wednesday/Thursday from 18:00 to 20:00, at the booth. Oracle is raffling off five Raspberry Pis and a number of books every day; make sure to stop by and get your badge scanned to enter the raffle. Raffles are Tuesday at 19:15 and Wednesday/Thursday at 19:45 at the Oracle booth.

The main conference sessions from Oracle Java experts are:

Wednesday 13 November
- Beyond Beauty: JavaFX, Parallax, Touch, Raspberry Pi, Gyroscopes, and Much More. Angela Caicedo, Senior Member, Technical Staff, Oracle. Room 7, 12:00–13:00
- Lambda: A Peek Under the Hood. Brian Goetz, Software Architect, Oracle. Room 8, 12:00–13:00
- In Full Flow: Java 8 Lambdas in the Stream. Paul Sandoz, Software Developer, Oracle. Room 8, 14:00–15:00
- The Modular Java Platform and Project Jigsaw. Mark Reinhold, Chief Architect, Java Platform Group, Oracle. Room 8, 15:10–16:10
- The Curious Case of JavaScript on the JVM. Attila Szegedi, Principal Member, Technical Staff, Oracle. Room 5, 16:40–17:40
- Is It a Car? Is It a Computer? No, It's a Raspberry Pi JavaFX Informatics System. Simon Ritter, Principal Technology Evangelist, Oracle. Room 7, 16:40–17:40

Thursday 14 November
- Java EE 7: What's New in the Java EE Platform. Linda DeMichiel, Consulting Member, Technical Staff, Oracle. Room 8, 10:50–11:50
- Java Microbenchmark Harness: The Lesser of the Two Evils. Aleksey Shipilev, Principal Member, Technical Staff, Oracle. Room 6, 14:00–15:00
- Practical RESTful Persistence. Shaun Smith, Senior Principal Product Manager, Oracle. Room 8, 17:50–18:50

Friday 15 November
- Avatar.js, Server-Side JavaScript on the Java Platform. Jean-Francois Denise, Software Developer, Oracle. Room 8, 11:50–12:50

    Read the article

  • Get your TFS 2012 task board demo ready in under 1 minute

    - by Tarun Arora
Release Notes – http://tfsdemosetup.codeplex.com/ | Download | Source Code | Report a Bug | Ideas

In this blog post, I'll show you how to use the 'TfsDemoSetup' application to configure and set up the TFS 2012 task board for a demo in well under 1 minute.

Step 1 – Note what you get with a newly created Team Project

1. Create a new Team Project on TFS Preview.
2. Click Create Project.
3. The project creation completes.
4. Open the team web access and have a look at the home page.

Note – Since I created the project, I am the only team member. A default team by the name AdventureWorks Team has been created. A few sprints have been assigned to the default team, but no dates for sprint start and end have been specified, and a default area path for the team is missing.

Step 2 – Download the TFS Demo Setup console application from CodePlex

1. Navigate to the TFS Demo Setup project on CodePlex: https://tfsdemosetup.codeplex.com/
2. Download Instructions and TFSDemo_<version>.
3. Follow the steps in the Instructions.txt file.
4. Unzip TFSDemo_<version> and open the target folder. There are two important files in this folder:
   - DemoDictionary.xml – contains the settings using which the demo environment will be set up
   - SetupTfsDemo.exe – runs the TFS demo environment setup application

Step 3 – Configure the setup (i.e. team name, members, sprint dates, etc.)

Open up DemoDictionary.xml and walk through its sections (a minimal example is sketched at the end of this post):

a. Basic team details
   - <Name> – the name of the team
   - <Description> – a description to go with the team
   - <SetAsDefaultTeam> – accepts "true/false"; when set to true, the newly created team will be set as the default team in the project
   - <BacklogIterationPath> – the backlog iteration path for the team

b. Iterations – the iterations you specify here will be set as the team's iterations
   - <Iterations> – accepts multiple <Iteration> nodes
   - <Iteration> – the most granular level of an iteration
   - <Path> – the path to the sprint, e.g. Release 1\Sprint 1 or Release 2\Sprint 2
   - <StartDate> – the sprint start date, in the format yyyy-MM-dd
   - <FinishDate> – the sprint finish date, in the format yyyy-MM-dd

c. Team members – team members that need to be added to the newly created team
   - <TeamMembers> – accepts multiple <TeamMember> nodes
   - <TeamMember> – the most granular level of a team member
   - <User> – accepts the username; against TFSPreview, pass the user's Live ID, and against TFS Server, pass the user ID, i.e. Domain\UserName
   - <Team> – the name of the team that you want the user to be assigned to

d. Work items – adds work items (product backlog items and linked tasks) to the current sprint of the team
   - <WorkItems> – accepts multiple <WorkItem> nodes
   - <WorkItem> – accepts one <ProductBacklogItem> and multiple <Task> nodes
   - <ProductBacklogItem> – creates a Product Backlog Item type work item, with:
     <Title> – the title of the product backlog item;
     <Description> – the description of the product backlog item;
     <AssignedTo> – assigns the work item to a team member (the team member name or email address can be passed);
     <Effort> – the total effort required to complete the product backlog item
   - <Task> – creates a task linked to the product backlog item, with:
     <Title> – the title of the task;
     <Description> – the description of the task;
     <AssignedTo> – assigns the work item to a team member (the team member name or email address can be passed);
     <RemainingWork> – the remaining effort to complete the task

Step 4 – Set up the demo environment against the newly created Team Project

1. Run SetupTfsDemo.exe.
2. Enter Y or y at the prompt to continue setting up the TFS demo environment.
3. Select the newly created Team Project; for this blog post I had created the Team Project AdventureWorks, so that is what I'll select in the Connect to TFS Server pop-up.
4. Click Connect and follow the messages that are written to the console application.

Step 5 – Validate that the demo environment is set up as per the configuration

1. The team web access is all lit up: you have a sprint, a burn down chart, team members...
2. The team Demo has been added and has been set up as the default team.
3. The sprint backlog iteration path, sprints, and sprint start and finish dates have been set.
4. The default area path has been set up.
5. Taskboard – backlog items view.
6. Taskboard – team members view.

Step 6 – Exception handling!

1. This solution has been tested against TFS 2012 Service/Server for the Scrum 2.1 process template.
2. You are likely to run into an exception if you mess up the config file.
3. If the team already exists and you run the console app to set up a team with the same name, you will run into exceptions.

Please remember this is just an alpha release; if you have any feedback please leave a comment! Didn't I say that it would take just 1 minute? Enjoy!
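As promised in Step 3, here is a hypothetical minimal DemoDictionary.xml. The element names come from the walkthrough above, but the root element name, the exact nesting and all the values are illustrative guesses, so treat the sample file shipped with the download as authoritative:

    <?xml version="1.0" encoding="utf-8"?>
    <!-- Hypothetical sketch: root name, nesting and values are illustrative. -->
    <DemoDictionary>
      <Team>
        <Name>Demo</Name>
        <Description>Team used for the task board demo</Description>
        <SetAsDefaultTeam>true</SetAsDefaultTeam>
        <BacklogIterationPath>AdventureWorks</BacklogIterationPath>
      </Team>
      <Iterations>
        <Iteration>
          <Path>Release 1\Sprint 1</Path>
          <StartDate>2012-10-01</StartDate>
          <FinishDate>2012-10-14</FinishDate>
        </Iteration>
        <Iteration>
          <Path>Release 1\Sprint 2</Path>
          <StartDate>2012-10-15</StartDate>
          <FinishDate>2012-10-28</FinishDate>
        </Iteration>
      </Iterations>
      <TeamMembers>
        <TeamMember>
          <User>someone@example.com</User>
          <Team>Demo</Team>
        </TeamMember>
      </TeamMembers>
      <WorkItems>
        <WorkItem>
          <ProductBacklogItem>
            <Title>As a user I can sign in</Title>
            <Description>Basic sign-in story</Description>
            <AssignedTo>someone@example.com</AssignedTo>
            <Effort>5</Effort>
          </ProductBacklogItem>
          <Task>
            <Title>Build the sign-in page</Title>
            <Description>UI work for the story</Description>
            <AssignedTo>someone@example.com</AssignedTo>
            <RemainingWork>4</RemainingWork>
          </Task>
        </WorkItem>
      </WorkItems>
    </DemoDictionary>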

    Read the article

  • Appointment & Booking Calendar for 2,500 Members

    - by D. K. Mason
For this job I need a booking or appointment calendar for the WP website, so that each of the 2,500+ members can manage his/her own appointments calendar. Members will set their schedules and available hours. Buyers can select one member, book when they want to visit the member, and reserve time online. Let's say our WP membership site has thousands of members, each offering one service. We have 25 categories. On a member's profile page is his appointment or booking calendar, along with his personally created and uploaded (into S3) profile video. Website visitors should be able to easily book the days and hours they want. For now all members have a free membership. We earn money by bringing customers to the registered members and collecting one dollar for our service. Therefore, we need an affordable script, and one that handles our members' needs. We already have a paid copy of Jrox. What are those needs? Well, please register for a test account in the ENTERTAINMENT category at http://asianhighway26.com/?page_id=140 Next, pretend you are a buyer: start on the index page and select your ENTERTAINMENT category. Click on image #3 and you will receive a list of others in your traveling area. When you click VIEW DETAILS, that is where a potential customer will see the calendar and whether you are available on the dates and times they are interested in. We know that you, the member, will want to list your unavailable dates and hours. There may be other features desired, but we believe this is the most important. Our WP administrator (me) can install the main script, but each member must have some login or dashboard to manage his/her own calendar and work hours. Can you create or modify any appointment calendar software to do this? Can the jam.jrox.com software we own do this? As you can see here, we offer to pay for ideas you present that we use: asianhighway26.com/?page_id=126 D. K. Mason, Chief Development Officer, Asian Highway Network, Tourism Division, Asia-Pacific Region (http://www.forums.doctormason.us/b-ah/)

    Read the article

  • To SYNC or not to SYNC – Part 3

    - by AshishRay
I can't believe it has been almost a year since my last blog post. I know, that's an absolute no-no in the blogosphere. And I know that "I have been busy" is not a good excuse. So, without trying to come up with an excuse, let me state this: my apologies for taking such a long time to write the next part. Without further ado, here goes.

This is Part 3 of a multi-part blog article where we are discussing various aspects of setting up Data Guard synchronous redo transport (SYNC). In Part 1 of this article, I debunked the myth that Data Guard SYNC is similar to a two-phase commit operation. In Part 2, I discussed the various ways that network latency may or may not impact a Data Guard SYNC configuration. In this article, I will talk in detail about why Data Guard SYNC is a good thing, and about the distance implications of setting up such a configuration.

So, Why Good?

Why is Data Guard SYNC a good thing? Because, at the end of the day, it gives you the assurance of zero data loss, no matter what outage may befall your primary system. Befall! Boy, that sounds theatrical. But seriously, think about this: it minimizes your data risks. That's a big deal. Whether you have an outage due to bad disks, faulty hardware components, hardware/software bugs, physical data corruptions, power failures, lightning that takes out a significant part of your data center, fire that melts your assets, water leakage from the cooling system, or human errors such as accidental deletion of online redo log files, it doesn't matter: you can have that "Om - peace" look on your face, fail over to the standby system, and not lose a single bit of data in your Oracle database. You will be a hero, as shown in this not-so-imaginary conversation:

IT Manager: Well, what's the status?
You: John is doing the trace analysis on the storage array.
IT Manager: So? How long is that gonna take?
You: Well, he is stuck, waiting for a response from <insert your not-so-favorite storage vendor here>.
IT Manager: So, no root cause yet?
You: I told you, he is stuck. We have escalated with their Support, but you know how long these things take.
IT Manager: Darn it - the site is down!
You: Not really ...
IT Manager: What do you mean?
You: John is stuck, but Sreeni has already done a failover to the Data Guard standby.
IT Manager: Whoa, whoa - wait! Failover means we lost some data. Why did you do this without letting the Business group know?
You: We didn't lose any data. Remember, we had set up Data Guard with SYNC? So now, any problems on the production side - we just fail over. No data loss, and we are up and running in minutes. The Business guys don't need to know.
IT Manager: Wow! Are we great or what!!
You: I guess ...

Ok, so you get it - SYNC is good. But as my dear friend Larry Carpenter says, "TANSTAAFL", or "There ain't no such thing as a free lunch". Yes, of course - investing in Data Guard SYNC means that you have to invest in a low-latency network, you have to monitor your applications and database especially in peak load conditions, and you cannot under-provision your standby systems. But all these are good and necessary things if you are supporting mission-critical apps that are supposed to be running 24x7. The peace of mind that this investment will give you is priceless, especially if you are serious about HA.

How Far Can We Go?

Someone may say at this point - well, I can't use Data Guard SYNC over my coast-to-coast deployment. Most likely - true. So how far can you go?
Well, we have customers who have deployed Data Guard SYNC over 300+ miles! Does this mean that you can also deploy over similar distances? Duh - no! I am going to say something here that most IT managers don't like to hear: "It depends!" It depends on your application design, application response time and throughput requirements, network topology, and so on. However, because of the optimal way we do SYNC, customers have been able to stretch Data Guard SYNC deployments over longer distances compared to traditional, storage-centric ways of doing this. The MAA Database 10.2 best practices paper Data Guard Redo Transport & Network Configuration, and the Oracle Database 11.2 High Availability Best Practices manual, discuss some of these SYNC-related metrics. For example, a test deployment of Data Guard SYNC over 330 miles with 10ms latency showed an impact of less than 5% for a busy OLTP application. Even if you can't deploy Data Guard SYNC over your WAN distance, or if you already have an ASYNC standby located thousands of miles away, here's another nifty way to boost your HA: have a local standby, configured SYNC. How local is "local"? Again - it depends. One customer runs a local SYNC standby across the campus. Another customer runs it across 15 miles in another data center. Both of these customers are running Data Guard SYNC as their HA standard. If a localized outage affects their primary system, no problem! They have all the data available on the standby, to which they can fail over. Very fast. In seconds. Wait - did I say "seconds"? Yes, Virginia, there is a Santa Claus. But you have to wait till the next blog article to find out more. I assure you tho' that this time you won't have to wait for another year for it.
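For readers who want to see the concrete knob behind "set up Data Guard with SYNC": at its core it is a change to the redo transport destination, along the lines of the following sketch. The attribute syntax is the documented LOG_ARCHIVE_DEST_n form, but the service name and DB_UNIQUE_NAME are placeholders for your environment:

    -- On the primary: ship redo to the standby synchronously (SYNC),
    -- acknowledging only after the standby confirms the write (AFFIRM).
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 =
      'SERVICE=standby_tns SYNC AFFIRM
       VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
       DB_UNIQUE_NAME=standby_db' SCOPE=BOTH;

    -- Then raise the protection mode, e.g. to Maximum Availability:
    ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;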

    Read the article

  • What's the best way to expose a Model object in a ViewModel?

    - by Angel
In a WPF MVVM application, I exposed my Model object in my ViewModel by creating an instance of the Model class inside the ViewModel (which creates a dependency). Instead of creating separate VM properties, I wrap the Model properties inside my ViewModel properties. My model is just an Entity Framework generated proxy class:

    public partial class TblProduct
    {
        public TblProduct()
        {
            this.TblPurchaseDetails = new HashSet<TblPurchaseDetail>();
            this.TblPurchaseOrderDetails = new HashSet<TblPurchaseOrderDetail>();
            this.TblSalesInvoiceDetails = new HashSet<TblSalesInvoiceDetail>();
            this.TblSalesOrderDetails = new HashSet<TblSalesOrderDetail>();
        }

        public int ProductId { get; set; }
        public string ProductCode { get; set; }
        public string ProductName { get; set; }
        public int CategoryId { get; set; }
        public string Color { get; set; }
        public Nullable<decimal> PurchaseRate { get; set; }
        public Nullable<decimal> SalesRate { get; set; }
        public string ImagePath { get; set; }
        public bool IsActive { get; set; }

        public virtual TblCompany TblCompany { get; set; }
        public virtual TblProductCategory TblProductCategory { get; set; }
        public virtual TblUser TblUser { get; set; }
        public virtual ICollection<TblPurchaseDetail> TblPurchaseDetails { get; set; }
        public virtual ICollection<TblPurchaseOrderDetail> TblPurchaseOrderDetails { get; set; }
        public virtual ICollection<TblSalesInvoiceDetail> TblSalesInvoiceDetails { get; set; }
        public virtual ICollection<TblSalesOrderDetail> TblSalesOrderDetails { get; set; }
    }

Here is my ViewModel:

    public class ProductViewModel : WorkspaceViewModel
    {
        public ProductViewModel()
        {
            StartApp();
        }

        private IProductDataService _dataService;
        public IProductDataService DataService
        {
            get
            {
                if (_dataService == null)
                {
                    if (IsInDesignMode)
                        _dataService = new ProductDataServiceMock();
                    else
                        _dataService = new ProductDataService();
                }
                return _dataService;
            }
        }

        // Get and set the Model object
        private TblProduct _product;
        public TblProduct Product
        {
            get { return _product ?? (_product = new TblProduct()); }
            set { _product = value; }
        }

        public int ProductId
        {
            get { return Product.ProductId; }
            set
            {
                if (Product.ProductId == value) return;
                Product.ProductId = value;
                RaisePropertyChanged("ProductId");
            }
        }

        public string ProductName
        {
            get { return Product.ProductName; }
            set
            {
                if (Product.ProductName == value) return;
                Product.ProductName = value;
                RaisePropertyChanged(() => ProductName);
            }
        }

        private ObservableCollection<TblProduct> _productRecords;
        public ObservableCollection<TblProduct> ProductRecords
        {
            get { return _productRecords; }
            set
            {
                _productRecords = value;
                RaisePropertyChanged("ProductRecords");
            }
        }

        // Selected product
        private TblProduct _selectedProduct;
        public TblProduct SelectedProduct
        {
            get { return _selectedProduct; }
            set
            {
                _selectedProduct = value;
                if (_selectedProduct != null)
                {
                    this.ProductId = _selectedProduct.ProductId;
                    this.ProductCode = _selectedProduct.ProductCode;
                }
                RaisePropertyChanged("SelectedProduct");
            }
        }

        private ICommand _newCommand;
        public ICommand NewCommand
        {
            get { return _newCommand ?? (_newCommand = new RelayCommand(() => ResetAll())); }
        }

        private ICommand _saveCommand;
        public ICommand SaveCommand
        {
            get { return _saveCommand ?? (_saveCommand = new RelayCommand(() => Save())); }
        }

        private ICommand _deleteCommand;
        public ICommand DeleteCommand
        {
            get { return _deleteCommand ?? (_deleteCommand = new RelayCommand(() => Delete())); }
        }

        private void StartApp()
        {
            LoadProductCollection();
        }

        private void LoadProductCollection()
        {
            var q = DataService.GetAllProducts();
            this.ProductRecords = new ObservableCollection<TblProduct>(q);
        }

        private void Save()
        {
            if (SelectedOperateMode == OperateModeEnum.OperateMode.New)
            {
                // Pass the Model object into the DataService for save
                DataService.SaveProduct(this.Product);
            }
            else if (SelectedOperateMode == OperateModeEnum.OperateMode.Edit)
            {
                // Pass the Model object into the DataService for update
                DataService.UpdateProduct(this.Product);
            }
            ResetAll();
            LoadProductCollection();
        }
    }

Here is my service class:

    class ProductDataService : IProductDataService
    {
        // Context object of the Entity Framework model
        private MaizeEntities Context { get; set; }

        public ProductDataService()
        {
            Context = new MaizeEntities();
        }

        public IEnumerable<TblProduct> GetAllProducts()
        {
            using (var context = new R_MaizeEntities())
            {
                var q = from p in context.TblProducts
                        where p.IsDel == false
                        select p;
                return new ObservableCollection<TblProduct>(q);
            }
        }

        public void SaveProduct(TblProduct _product)
        {
            using (var context = new R_MaizeEntities())
            {
                _product.LastModUserId = GlobalObjects.LoggedUserID;
                _product.LastModDttm = DateTime.Now;
                _product.CompanyId = GlobalObjects.CompanyID;
                context.TblProducts.Add(_product);
                context.SaveChanges();
            }
        }

        public void UpdateProduct(TblProduct _product)
        {
            using (var context = new R_MaizeEntities())
            {
                context.TblProducts.Attach(_product);
                context.Entry(_product).State = EntityState.Modified;
                _product.LastModUserId = GlobalObjects.LoggedUserID;
                _product.LastModDttm = DateTime.Now;
                _product.CompanyId = GlobalObjects.CompanyID;
                context.SaveChanges();
            }
        }

        public void DeleteProduct(int _productId)
        {
            using (var context = new R_MaizeEntities())
            {
                var product = (from c in context.TblProducts
                               where c.ProductId == _productId
                               select c).First();
                product.LastModUserId = GlobalObjects.LoggedUserID;
                product.LastModDttm = DateTime.Now;
                product.IsDel = true;
                context.SaveChanges();
            }
        }
    }

I exposed my Model object in my ViewModel by creating an instance of it with the new keyword, and I also instantiate my DataService class in the VM; I know this creates a strong dependency. So:

1. What's the best way to expose a Model object in a ViewModel?
2. What's the best way to use the DataService in the VM?

    Read the article

  • Multi-Part Map Troubleshooting

    - by Michael Stephenson
Scenario

I came across a nice little one with multi-part maps the other day. I had an orchestration where I needed to combine 4 input messages into one output message:

Input messages: Company Details, Member Details, Event Message, Member Search
Output message: Member Import

I thought my orchestration was working fine, but for some reason when I tried to send my message it had no content under the root node, like below:

    <ns0:ImportMemberChange xmlns:ns0="http://---------------/"></ns0:ImportMemberChange>

I knew that the Member Search message might not have any elements under it, but its root element would always exist. The rest of the messages were expected to be fully populated. I tried a number of different things, and when testing my map outside of the orchestration it always worked fine.

The Eureka Moment

The eureka moment came when I was looking at the XSLT produced by the map. Even though I'd tried swapping the order of the messages in the input of the map, the first part of the processing (pictured in the original post) does a for-each over the GetCompanyDetailsResult element within the GetCompanyDetailsResponse message. This is because the processing is driven by the output message format, and the first element to output is the OrganisationID, which comes from the GetCompanyDetailsResponse message. At this point I could focus my attention on this message: the XSLT shows that if this xpath statement doesn't return an element from the GetCompanyDetailsResponse message, then the whole body of the output message will not be produced, and the output from the map will look exactly like the empty message I was getting. I was quickly able to prove this in my map test, which confirmed this was a likely candidate for the problem. I revisited the orchestration, focusing on the creation of the GetCompanyDetailsResponse message, and there was actually a bug in the orchestration which resulted in the message being incorrectly created. Once this was fixed, everything worked as expected.

Conclusion

Originally I thought it was a problem with the map itself, and looking online there wasn't really much in the way of content around troubleshooting multi-part map problems, so I thought I'd write this up. I guess technically it isn't a multi-part map problem, but I spent a good couple of hours the other day thinking it was.
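To illustrate the shape of the failure, here is a hypothetical simplification of the kind of XSLT such a map generates; the element names are taken from the post, but the real generated stylesheet is more involved:

    <xsl:template match="/">
      <ns0:ImportMemberChange>
        <!-- The whole output body hangs off this for-each: if the xpath
             matches nothing, only the empty root element is emitted. -->
        <xsl:for-each select="//GetCompanyDetailsResponse/GetCompanyDetailsResult">
          <OrganisationID>
            <xsl:value-of select="OrganisationID/text()" />
          </OrganisationID>
          <!-- ...the rest of the mapped output elements... -->
        </xsl:for-each>
      </ns0:ImportMemberChange>
    </xsl:template>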

    Read the article

  • libgdx - #iterator() cannot be used nested

    - by TimSim
I'm getting this error when I try to check if any of the targets overlap each other:

    iterTargets = targets.iterator();
    while (iterTargets.hasNext()) {
        Target target = iterTargets.next();
        for (Target otherTarget : targets) {
            if (target.rectangle.overlaps(otherTarget.rectangle)) {
                // do something
            }
        }
    }

So I can't do that? How am I supposed to check each member of an array to see if it overlaps any other member?
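For reference, libgdx's Array returns the same reused iterator instance from every iterator() call, which is why nesting two iterations over the same array fails. A hypothetical workaround sketch, using plain index loops instead (Target and targets as in the question):

    for (int i = 0; i < targets.size; i++) {
        Target target = targets.get(i);
        for (int j = 0; j < targets.size; j++) {
            if (i == j) continue;  // don't compare a target with itself
            Target otherTarget = targets.get(j);
            if (target.rectangle.overlaps(otherTarget.rectangle)) {
                // do something
            }
        }
    }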

    Read the article

  • DNS protocol message example

    - by virtual-lab
Hello there, I am trying to figure out how to send DNS messages from an application socket adapter to a DNSBL. I spent the last two days understanding the basics, including experimenting with Wireshark to capture an example of the messages exchanged. Now I would like to query the DNS without using the dig or host commands (I'm using Ubuntu); how can I perform this action at a low level, without the help of these tools wrapping the request in the proper DNS message format? How should the message be sent: as hex or as a string? Thanks in advance for any help. Regards, Alessandro Ilardo

Comment added: I am investigating JDev and Oracle SOA. The platform provides a socket adapter which simply applies a transformation (XSLT) and sends the message straight to the socket. How the payload parameters (e.g. the host I'm looking up) are wrapped within the message is left to the developer. So basically I have an idea of how the whole DNS message is structured, but rather than put everything into JDev straight away I'd like to run some tests on my own, just to make sure I produce a valid message format. So, I am not using any specific language (I don't even understand why they moved my question from serverfault), and I don't want to use any tools which would hide part of the message, such as the header. I know they work well, by the way. I guess this stuff has something to do with packet injection. Someone suggested I use telnet, but I've only used that for SMTP or HTTP; I haven't got a clue how it works for DNS requests. Does it make more sense now?
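For illustration, here is a minimal sketch of building and sending a raw A-record query by hand; Java is chosen only because the poster's platform is Java-based, the resolver address and example.com name are placeholders, and the byte layout follows RFC 1035:

    import java.io.ByteArrayOutputStream;
    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;

    public class RawDnsQuery {
        public static void main(String[] args) throws Exception {
            ByteArrayOutputStream msg = new ByteArrayOutputStream();
            // Header (12 bytes), RFC 1035 section 4.1.1
            msg.write(new byte[] { 0x12, 0x34 });  // ID (arbitrary)
            msg.write(new byte[] { 0x01, 0x00 });  // flags: standard query, recursion desired
            msg.write(new byte[] { 0x00, 0x01 });  // QDCOUNT = 1
            msg.write(new byte[] { 0x00, 0x00 });  // ANCOUNT = 0
            msg.write(new byte[] { 0x00, 0x00 });  // NSCOUNT = 0
            msg.write(new byte[] { 0x00, 0x00 });  // ARCOUNT = 0
            // Question: QNAME as length-prefixed labels, then QTYPE and QCLASS
            for (String label : "example.com".split("\\.")) {
                msg.write(label.length());
                msg.write(label.getBytes("US-ASCII"));
            }
            msg.write(0);                          // end of QNAME
            msg.write(new byte[] { 0x00, 0x01 });  // QTYPE = A
            msg.write(new byte[] { 0x00, 0x01 });  // QCLASS = IN

            byte[] query = msg.toByteArray();
            DatagramSocket socket = new DatagramSocket();
            InetAddress server = InetAddress.getByName("8.8.8.8"); // placeholder resolver
            socket.send(new DatagramPacket(query, query.length, server, 53));

            byte[] buf = new byte[512];
            DatagramPacket reply = new DatagramPacket(buf, buf.length);
            socket.receive(reply);
            System.out.println("Got " + reply.getLength() + " bytes back");
            socket.close();
        }
    }

On the hex-or-string question: the DNS wire format is binary, so what goes onto the socket is raw bytes; the hex you see in Wireshark is only a display rendering. A DNSBL lookup is then just an ordinary A query for a name built from the reversed IP address plus the blocklist zone (for example, 127.0.0.2 becomes 2.0.0.127.<dnsbl zone>).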

    Read the article

< Previous Page | 212 213 214 215 216 217 218 219 220 221 222 223  | Next Page >