Search Results

Search found 6699 results on 268 pages for 'dual layer'.

  • Dispatcher.CheckAccess() isn't working from my console application. Is there a better way?

    - by zimmer62
    I wrote an application in WPF / VB and separated the business logic and UI into different projects. The business layer uses a serial port which runs on a different thread. Now that I'm trying to write a command line interface for the same business layer, it seems to fail when .Invoke() is called (no error, it just doesn't work). I'm pretty sure the reason I had to add in CheckAccess and .Invoke was because I have collections that would be changed while processing the serial port data, and I wanted the NotifyCollectionChanged to be handled by WPF data binding. (The reason I'm not 100% sure is that it was months ago that I wrote that part and it all worked great from the GUI; now adding the console app has made me rethink some of this.) I would like my business layer to run these processes on the thread they were created on, and I need this to work from both my GUI version and the command line version. Am I misusing the Dispatcher in my business layer? Is there a better way to handle an event from the serial port and then return to the main thread to process the data?
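
    One common alternative is to capture a SynchronizationContext instead of referencing the WPF Dispatcher from the business layer; the sketch below uses hypothetical class and method names (SerialWorker, HandleData, the COM1 port), but the framework calls are standard:

        using System.IO.Ports;
        using System.Threading;

        public class SerialWorker
        {
            private readonly SynchronizationContext _syncContext;
            private readonly SerialPort _port = new SerialPort("COM1"); // hypothetical port name

            public SerialWorker()
            {
                // Capture the creating thread's context. In WPF this is a
                // DispatcherSynchronizationContext (posts back to the UI thread);
                // in a console app Current is null, so fall back to the default
                // context, which posts to the thread pool.
                _syncContext = SynchronizationContext.Current ?? new SynchronizationContext();
                _port.DataReceived += OnDataReceived;
            }

            private void OnDataReceived(object sender, SerialDataReceivedEventArgs e)
            {
                string data = _port.ReadExisting();
                // Marshal back without referencing the Dispatcher at all:
                _syncContext.Post(_ => HandleData(data), null);
            }

            private void HandleData(string data)
            {
                // Update the observable collections here; in the GUI this
                // runs on the UI thread, so data binding stays happy.
            }
        }

    In WPF the captured context posts back to the UI thread, so NotifyCollectionChanged still fires where data binding expects it; in the console app the default context simply posts to the thread pool, so Dispatcher-style CheckAccess/Invoke calls are never needed.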

  • Why are they putting "processors" on hard drives?

    - by Celeritas
    What does it mean when they have a processor on the hard drive, how does it work, and what benefit does it have? I don't understand - the CPU is the processor, and the hard drive transfers its contents to RAM. Do they have additional processors that preprocess the data somehow? Here are some examples: Western Digital WD Black WD1002FAEX 1TB, "Dual processor speed"; NETGEAR ReadyNAS 312 2-Bay Diskless Network Attached Storage, "Dual-core Intel 2.1GHz processor and 2GB on-board memory". And routers now have processors too - why is that necessary? I guess it sort of makes sense: some logic needs to happen for the packets to be read in, to know which ports to send them out on, but why did old routers not need them? Example of a wireless router with a processor: "Dual-core processor".

  • Which type of DVI cable will connect the 9400GT with this display?

    - by Senthil
    I am going to buy a BenQ G2220HD 22" LCD display. I already have an EVGA GeForce 9400GT. Both support DVI, but the confusion came when I wanted to buy a DVI cable. I am confused by DVI vs. DVI-D vs. DVI-I, and then single link vs. dual link! Can someone tell me whether this cable can be used to connect my 9400GT to my BenQ display? If not, what kind of DVI cable should I buy - DVI, DVI-D, single link, dual link, DVI-I? Basically, the GeForce docs say the card is dual-link DVI capable, and my BenQ display docs say it is DVI-D. What are my options now? What kind of DVI cable should I get? Please don't give me brands, because most of them won't be available here in India. Just tell me the TYPE of DVI cable.

  • Performance difference between MacBook Pro (2.8 GHz) vs Air (1.7 GHz)?

    - by jonathanconway
    I'm comparing these two Apple laptops:

    MacBook Pro (13", 2011 model):
    - 2.8GHz dual-core Intel Core i7 processor with 4MB shared L3 cache
    - 4GB (two 2GB SO-DIMMs) of 1333MHz DDR3 SDRAM
    - AMD Radeon HD 6770M graphics processor with 1GB of GDDR5 memory on 2.4GHz configuration

    MacBook Air (13", 2011 model):
    - 1.7GHz dual-core Intel Core i5 with 3MB shared L3 cache
    - 4GB of 1333MHz DDR3 onboard memory
    - Intel HD Graphics 3000 processor with 384MB of DDR3 SDRAM shared with main memory

    There's definitely a gap between them in terms of CPU speed and graphics, but what practical difference would this make on a day-to-day basis? On the one hand, I love the sleek, thin appearance of the Air. On the other hand, I don't want a machine that's going to be dog-slow when doing tasks such as running virtual machines, dual-booting to Windows and running multiple instances of Visual Studio, and maybe some light gaming. Is there going to be a major difference that makes the MacBook Pro a more attractive purchase?

  • DVI-D Splitter Not Working with GeForce 8400gs

    - by jimdrang
    I have a GeForce 8400gs and it has a DVI and a VGA port on the back. I was using dual monitors with one VGA and one DVI cable. I wanted both displays to be digital, so I bought a DVI-D splitter, put one DVI cable in each monitor, connected them to the splitter, and put the single merged connection into the card's DVI port. It will not recognize the second monitor (I'm not even sure how it determined which one was the first monitor). The tech specs state that it supports "Two dual-link DVI outputs for digital flat panel display resolutions up to 2560x1600" http://www.nvidia.com/object/geforce_8400_tech_specs.html. Do I need a different converter, or is my only option for dual monitors with this card one VGA and one DVI?

  • Difference between “system-on-chip” and “CPU”

    - by Tim
    I'm very confused. Some websites have this line:

        iPhone 5s CPU: Apple A7

    Other websites say:

        iPhone 5s System-on-chip: Apple A7
        CPU: 1.3 GHz 64-bit dual-core

    Other sources say:

        iPhone 5s System-on-chip: Apple A7
        CPU: 1.3 GHz 64-bit dual-core Apple A7

    Wikipedia says: "The Apple A7 is a 64-bit system on a chip (SoC) designed by Apple Inc. It first appeared in the iPhone 5S, which was introduced on September 10, 2013. Apple states that it is up to twice as fast and has up to twice the graphics power compared to its predecessor, the Apple A6. While not the first 64-bit ARM CPU, it is the first to ship in a consumer smartphone or tablet computer."

    There are 2 sentences here: "The Apple A7 is a 64-bit system on a chip (SoC)" and "While not the first 64-bit ARM CPU". Wikipedia also says "The A7 features an Apple-designed 64-bit 1.3–1.4 GHz ARMv8-A dual-core CPU, called Cyclone". So is a system on a chip also a CPU? Very confused.

  • Network config / gear question

    - by mcgee1234
    I have been tasked with setting up a fairly straightforward rack in a data center (we do not even need a whole rack, but this is the smallest allotment available). In a nutshell, 4 to 6 servers need to be able to reach 2 (maybe 3) vendors, and the servers need to be reachable over the internet. A little more detail: the networks the servers need to reach are inside the data center and are "trusted". Connections to these networks will be achieved through intra-data-center cross connects. It is kind of like a manufacturing line where we receive data from one vendor (burstable up to 200 Mbits), churn through it on the servers, and then send out data to another vendor (bursts up to 20 Mbits). This series of events is very latency-sensitive, so much so that it is common practice not to use NAT or a firewall on these segments (or so I hear). To reach the servers over the internet, I plan to use a site-to-site VPN. (This part is only relevant as far as hardware selection goes.)

    I have 2 configurations in mind:

    1. A Cisco 2911 (2921) (with the additional WAN ports module) and a layer 2 switch - in this scenario, I would also use the router for VPN.
    2. A Cisco 3560 layer 3 switch to interconnect the networks inside the data center, and an ASA 5510 (which is total overkill, but the 5505 is not rack-mountable) as a firewall for the WAN side (internet) and VPN. I envision this setup to be as follows:

        Internet - ASA - 3560
        Vendors - 3560 - Servers

    The general idea is that the ASA acts as the firewall and VPN device and the 3560 does all the heavy lifting. The first is a fairly traditional setup, but my concern is performance. The second is somewhat unorthodox in that the vendors are directly connected to the layer 3 switch without passing through a firewall. Based on my understanding, however, a layer 3 switch will perform substantially better as it will do hardware (ASIC) vs. software switching. (Note that number 2 is a little over the budget, but not unworkable (double negative, ugh).) Since this is my first time dealing with a data center, I am not sure what the IP space is going to look like. I suspect I will retain a block (or blocks) of public IPs, VLAN them to individual interfaces for the vendor connections and the servers (which will not be reachable from the WAN side, of course) and set up routing on the switch.

    So here are my questions:

    1. Is there a substantial performance difference between 1 and 2, i.e. hardware-based switching on a layer 3 switch vs. software-based switching on the 2911? I have trawled the internet and found a lot of Cisco literature, but nothing that I could really use to get a good handle on it.
    2. The vendors we connect to are secure and trusted (famous last words) and, as I understand it, it is common practice not to NAT or firewall these connections (because of the aforementioned latency sensitivity). But what kind of latency are we really talking about if I push the data through a router (or even the ASA, for that matter)? For our purposes, 5 ms will not kill us; 20 or 30 can be very costly. Others measure in microseconds, but they are out of our league.
    3. Are there any issues with using public IPs on a layer 3 switch?

    I am certainly not married to either of these configs, and I am totally open to any ideas. My knowledge (and I use the term loosely) is largely from books, so I welcome any advice / insight. Thanks in advance.
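
    For concreteness, a minimal sketch of how option 2 might look on the 3560 - the interface numbers are hypothetical and the addresses are documentation ranges; this only illustrates a routed port per vendor cross connect, an SVI for the server VLAN, and ip routing turned on, not a recommended design:

        ip routing
        !
        interface GigabitEthernet0/1
         description Cross connect to vendor A (routed port, no firewall in path)
         no switchport
         ip address 203.0.113.2 255.255.255.252
        !
        interface GigabitEthernet0/10
         description Server-facing access port
         switchport access vlan 10
        !
        interface Vlan10
         description Server segment (public block, not reachable from the WAN side)
         ip address 198.51.100.1 255.255.255.240

    Because forwarding between these interfaces happens in the switch's ASICs, the added latency is typically in the low microseconds, which is the usual argument for this layout over routing in software through the 2911.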

  • Oracle Linux Delivers Top CPU Benchmark Results on Sun Blades

    - by sergio.leunissen
    From the Performance and Best Practices blog: fresh SPEC CPU2006 results for Sun Blade X6275 M2 Server Modules running Oracle Linux 5.5. The highlights:

    - The dual-node Sun Blade X6275 M2 server module, equipped with two Intel Xeon X5670 2.93 GHz processors per node and running the Oracle Enterprise Linux 5.5 operating system, delivered the best SPECint_rate2006 and SPECfp_rate2006 benchmark results for all systems with the Intel Xeon processor 5000 sequence.
    - With a SPECint_rate2006 benchmark result of 679, the Sun Blade X6275 M2 server module, with two compute nodes per blade, delivers maximum performance for space-constrained environments.
    - Comparing Oracle's dual-node blade to HP's dual-node blade server, based on their single-node performance, the Sun Blade X6275 M2 server module's SPECfp_rate2006 score of 241 outperforms the best published HP ProLiant BL2X220c G5 server score by 3.2x.
    - A single node of a Sun Blade X6275 M2 server module using 2.93 GHz Intel Xeon X5670 processors delivered a 37% improvement in SPECint_rate2006 results and a 22% improvement in SPECfp_rate2006 results compared to the previous-generation Sun Blade X6275 server module.
    - Both nodes of a Sun Blade X6275 M2 server module using 2.93 GHz Intel Xeon X5670 processors delivered a 59% improvement on the SPECint_rate2006 benchmark and a 40% improvement on the SPECfp_rate2006 benchmark compared to the previous-generation Sun Blade X6275 server module.

  • Huawei E220 broadband dongle not working

    - by Roshnal
    The problem is, I have used Ubuntu from 6.10 up to 11.10, and up to 11.04 I used the same USB 3G dongle to connect to broadband and it worked fine. But 2 days ago I upgraded to Ubuntu 11.10 and broadband is not working. It detects my dongle and creates a connection without a problem, but when connecting it just keeps on and on for a long time and then says I'm offline. So I did a clean install and the same thing occurred. But I also have a netbook, and it's got Ubuntu 11.10. I tried using the same dongle for internet on that netbook and it worked fine without any issues. This isn't a problem with the USB port on my main machine or something like that, because I'm also using Windows on my main machine (dual-boot) and it's working fine. My hardware:

    Main computer (the one that I'm trying to connect):
    - 2.8GHz dual-core Intel
    - 2GB RAM
    - 500GB SATA II HDD
    - 384MB video memory (Intel G31/G33 chipset)
    - Ubuntu 11.10 (32bit)

    My netbook (broadband working fine):
    - 1.6GHz Intel Atom (dual-core)
    - 1GB DDR3 RAM
    - 250GB HDD
    - 128MB video memory (Intel something-I-can't-remember)
    - Ubuntu 11.10 (32bit)

    My dongle is a Huawei E220 and my ISP is Dialog GSM (I'm in Sri Lanka). So any idea why this is? I really love Ubuntu and this is bugging me. Any help greatly appreciated. Regards, Roshnal

  • I receive the error 'grub-install /dev/sda failed' while attempting to install Ubuntu as the computer's only OS.

    - by Liath
    I am attempting to install Ubuntu on a box which was previously running Windows 7, and I have also experienced the dreaded "Unable to install GRUB" error. I am not attempting to dual boot. I have previously run a Windows boot disk and removed all existing partitions. If I run the Ubuntu 12.04 install CD and click install after the config screens, I get the error:

        Executing 'grub-install /dev/sda' failed. This is a fatal error.

    (It is the same error as in this question: Unable to install GRUB.) All the questions I've read while looking for a solution relate to dual boot. I'm not interested in dual boot; I'm after a clean, out-of-the-box Ubuntu install. How can I achieve this? (For my sanity, please use very simple instructions when responding. I don't claim to have any talent either for Linux or as a sysadmin.)

    Additional details copied from comments, dated 2012-05-29 ~15:19Z: After booting from the CD, clicking Try Ubuntu, and then running sudo fdisk /dev/sda, I get:

        fdisk: unable to seek on /dev/sda: Invalid argument

    sudo fdisk /dev/sdb gives:

        Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel.
        Building a new DOS disklabel with disk identifier 0x15228d1d.
        Changes will remain in memory only until you decide to write them.
        After that of course, the previous content won't be recoverable.

        Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite).

        Command (m for help):

    I should add that the Live CD desktop is graphically bad. I've got missing parts of programs, and the terminal occasionally reflects to the bottom of the screen. But I can't imagine this is related.

  • Ubuntu 12.04 installer does not recognize drive partitions

    - by endless forms
    I recently purchased a new HP Pavilion HPE desktop running Windows 7. I am trying to install a dual-boot system with 12.04. However, when I run the LiveCD I only get as far as the "Install" window where you can select the partitions for your drives. On the bottom, where it says "device for boot loader installation", I have "/dev/sda" and cannot select any other devices. All the options to change the drives are greyed out, most likely because there are no drives in the window. I partitioned my largest drive using the tools within Windows, then booted into the CD, but nothing shows up. I then used GParted to change the new space from unallocated to ext2, and still nothing shows up. The installer does not recognize anything, but when I go into an Ubuntu session and use the disk utility manager I can see the partitions I made. Anything I do has to be done outside of the installer. I have no files on this new computer, so this is the perfect time to install a parallel OS. I would like to avoid completely reinstalling Windows, however. I've been over the forums many times, but all the answers I've found have not worked for me. I also tried flagging the new, empty partition as boot, but that screwed Windows up. Also, the Wubi installer hits the same point and quits. I know that the disk itself is fine because I just made another dual-boot system on a Gateway PC. This makes me think something within this computer is preventing the installer from "seeing" the drives. Any help would be much appreciated!

    Edit in response: The main part of the partitioning window shows no partitions; everything is blank. There is no way to add partitions, and all the buttons are useless. I've tried defragging my drive multiple times, and I also used the same disk to dual-boot another PC with no problems, so it's not the disk - it's definitely the computer.

  • Get vertex colors from FBX (OpenGL, FBX SDK)

    - by instancedName
    I'm kinda stuck with this one. I managed to get vertex positions, indices, and normals, but I don't quite understand how to get vertex colors. I need them to fill my buffer. I tried the function mesh->GetElementVertexColorCount() and then iterating through all of them, but it returns zero. I also tried to get the layer and then use layer->GetVertexColors(), but it returns a NULL pointer. Can anyone help me with this one?

  • ANGLE wined3d in reverse

    Wine-Reviews: "We're happy to announce a new open source project called Almost Native Graphics Layer Engine, or ANGLE for short. The goal of ANGLE is to layer WebGL's subset of the OpenGL ES 2.0 API over DirectX 9.0c API calls."

  • Network Balancing Act

    - by listey
    Next up in our popular Oracle Solaris How To series is the Integrated Load Balancer, part of the suite of network facilities built in to Oracle Solaris 11. Providing Layer 3 and Layer 4 load-balancing capabilities, it can simulate or even replace your hardware-based network infrastructure. Read more about the capabilities, and how to get a basic configuration working, in How to Set Up a Load-Balanced Application Across Two Oracle Solaris Zones.

  • What is the current "standard" for setting up a development environment that supports remote collaboration as well as secure version control?

    - by Andrew
    What is the current "standard" for setting up a development environment that supports remote collaboration as well as secure version control? We are considering a virtual dedicated solution with a VM for a web layer and a data layer, using a VPN for each programmer. We're a small start-up that does both Microsoft and open-source development. Is there a set of software tools or packages that is appropriate for a small shop and yet scalable? Thanks.

  • Are factors such as Intellisense support and strong typing enough to justify the use of an 'Anaemic Domain Model'?

    - by David Osborne
    It's easy to accept that objects should be used in all layers except a layer nominated as a data layer. However, it's just as easy to end up with an 'anaemic domain model' that is just an object representation of data with no real functionality (http://martinfowler.com/bliki/AnemicDomainModel.html). However, using objects in this fashion brings benefits such as Intellisense support, strong typing, readability, discoverability, etc. Are these factors strong arguments for an otherwise anaemic domain model?
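
    For illustration, a small hedged contrast (all type and member names below are made up): both shapes get Intellisense, strong typing, and discoverability, but only the second also owns its invariants, which is what moves it past "anaemic".

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class OrderLine
        {
            public decimal Price { get; set; }
            public int Quantity { get; set; }
        }

        // Anemic: a strongly typed data bag; any rule about lines or totals
        // must live in some external "manager" class.
        public class AnemicOrder
        {
            public List<OrderLine> Lines { get; set; } = new List<OrderLine>();
        }

        // Richer: the same typing benefits, but the invariant and the
        // derived value live with the data.
        public class Order
        {
            private readonly List<OrderLine> _lines = new List<OrderLine>();
            public IReadOnlyList<OrderLine> Lines => _lines;
            public decimal Total => _lines.Sum(l => l.Price * l.Quantity);

            public void AddLine(OrderLine line)
            {
                if (line.Quantity <= 0)
                    throw new ArgumentException("Quantity must be positive.");
                _lines.Add(line);
            }
        }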

  • Superclass Sensitive Actions

    - by Geertjan
    I've created a small piece of functionality that enables you to create actions for Java classes in the IDE. When the user right-clicks on a Java class, they will see one or more actions depending on the superclass of the selected class. To explain this visually, here I have "BlaTopComponent.java". I right-click on its node in the Projects window and I see "This is a TopComponent". Indeed, when you look at the source code of "BlaTopComponent.java", you'll see that it implements the TopComponent class. Next, in the screenshot below, you see that I have right-clicked a different class. In this case, there's an action available because the selected class implements the ActionListener class. Then, take a look at this one: here both TopComponent and ActionListener are superclasses of the current class, hence both of the actions are available to be invoked. Finally, here's a class that subclasses neither TopComponent nor ActionListener, hence neither of the actions that I created for doing something that relates to TopComponents or ActionListeners is available, since those actions are irrelevant in this context.

    How does this work? Well, it's a combination of my blog entries "Generic Node Popup Registration Solution" and "Showing an Action on a TopComponent Node". The cool part is that the definition of the two actions that you see above is remarkably trivial:

        import java.awt.event.ActionEvent;
        import java.awt.event.ActionListener;
        import javax.swing.JOptionPane;
        import org.openide.loaders.DataObject;
        import org.openide.util.Utilities;

        public class TopComponentSensitiveAction implements ActionListener {

            private final DataObject context;

            public TopComponentSensitiveAction() {
                context = Utilities.actionsGlobalContext().lookup(DataObject.class);
            }

            @Override
            public void actionPerformed(ActionEvent ev) {
                //Do something with the context:
                JOptionPane.showMessageDialog(null,
                        "TopComponent: " + context.getNodeDelegate().getDisplayName());
            }

        }

    The above is the action that will be available if you right-click a Java class that extends TopComponent. This, in turn, is the action that will be available if you right-click a Java class that implements ActionListener:

        import java.awt.event.ActionEvent;
        import java.awt.event.ActionListener;
        import javax.swing.JOptionPane;
        import org.openide.loaders.DataObject;
        import org.openide.util.Utilities;

        public class ActionListenerSensitiveAction implements ActionListener {

            private final DataObject context;

            public ActionListenerSensitiveAction() {
                context = Utilities.actionsGlobalContext().lookup(DataObject.class);
            }

            @Override
            public void actionPerformed(ActionEvent ev) {
                //Do something with the context:
                JOptionPane.showMessageDialog(null,
                        "ActionListener: " + context.getNodeDelegate().getDisplayName());
            }

        }

    Indeed, the classes are, at this stage, the same. But, depending on what I want to do with TopComponents or ActionListeners, I now have a starting point, which includes access to the DataObject, from where I can get down into the source code, as shown here.
    This is how the two ActionListeners that you see defined above are registered in the layer, which could ultimately be done via annotations on the ActionListeners, of course:

        <folder name="Actions">
            <folder name="Tools">
                <file name="org-netbeans-sbas-impl-TopComponentSensitiveAction.instance">
                    <attr stringvalue="This is a TopComponent" name="displayName"/>
                    <attr name="instanceCreate" methodvalue="org.netbeans.sbas.SuperclassSensitiveAction.create"/>
                    <attr name="type" stringvalue="org.openide.windows.TopComponent"/>
                    <attr name="delegate" newvalue="org.netbeans.sbas.impl.TopComponentSensitiveAction"/>
                </file>
                <file name="org-netbeans-sbas-impl-ActionListenerSensitiveAction.instance">
                    <attr stringvalue="This is an ActionListener" name="displayName"/>
                    <attr name="instanceCreate" methodvalue="org.netbeans.sbas.SuperclassSensitiveAction.create"/>
                    <attr name="type" stringvalue="java.awt.event.ActionListener"/>
                    <attr name="delegate" newvalue="org.netbeans.sbas.impl.ActionListenerSensitiveAction"/>
                </file>
            </folder>
        </folder>
        <folder name="Loaders">
            <folder name="text">
                <folder name="x-java">
                    <folder name="Actions">
                        <file name="org-netbeans-sbas-impl-TopComponentSensitiveAction.shadow">
                            <attr name="originalFile" stringvalue="Actions/Tools/org-netbeans-sbas-impl-TopComponentSensitiveAction.instance"/>
                            <attr intvalue="150" name="position"/>
                        </file>
                        <file name="org-netbeans-sbas-impl-ActionListenerSensitiveAction.shadow">
                            <attr name="originalFile" stringvalue="Actions/Tools/org-netbeans-sbas-impl-ActionListenerSensitiveAction.instance"/>
                            <attr intvalue="160" name="position"/>
                        </file>
                    </folder>
                </folder>
            </folder>
        </folder>

    The most important parts of the layer registration are the "instanceCreate" and "type" attributes. Those attributes connect the layer to the generic action that delegates back to the action listeners defined above, as follows:

        public final class SuperclassSensitiveAction extends AbstractAction implements ContextAwareAction {

            private final Map map;

            //This method is called from the layer, via "instanceCreate",
            //magically receiving a map, which contains all the attributes
            //that are defined in the layer for the file:
            static SuperclassSensitiveAction create(Map map) {
                return new SuperclassSensitiveAction(Utilities.actionsGlobalContext(), map);
            }

            public SuperclassSensitiveAction(Lookup context, Map m) {
                super(m.get("displayName").toString());
                this.map = m;
                String superclass = m.get("type").toString();
                //Enable the menu item only if
                //we're dealing with a class of type superclass:
                JavaSource javaSource = JavaSource.forFileObject(
                        context.lookup(DataObject.class).getPrimaryFile());
                try {
                    javaSource.runUserActionTask(new ScanTask(this, superclass), true);
                } catch (IOException ex) {
                    Exceptions.printStackTrace(ex);
                }
                //Hide the menu item if it isn't enabled:
                putValue(DynamicMenuContent.HIDE_WHEN_DISABLED, true);
            }

            @Override
            public void actionPerformed(ActionEvent ev) {
                ActionListener delegatedAction = (ActionListener) map.get("delegate");
                delegatedAction.actionPerformed(ev);
            }

            @Override
            public Action createContextAwareInstance(Lookup actionContext) {
                return new SuperclassSensitiveAction(actionContext, map);
            }

            private class ScanTask implements Task<CompilationController> {
                private SuperclassSensitiveAction action = null;
                private String superclass;
                private ScanTask(SuperclassSensitiveAction action, String superclass) {
                    this.action = action;
                    this.superclass = superclass;
                }
                @Override
                public void run(final CompilationController info) throws Exception {
                    info.toPhase(Phase.ELEMENTS_RESOLVED);
                    new EnableIfGivenSuperclassMatches(info, action, superclass).scan(
                            info.getCompilationUnit(), null);
                }
            }

            private static class EnableIfGivenSuperclassMatches extends TreePathScanner<Void, Void> {
                private CompilationInfo info;
                private final AbstractAction action;
                private final String superclassName;
                public EnableIfGivenSuperclassMatches(CompilationInfo info,
                        AbstractAction action, String superclassName) {
                    this.info = info;
                    this.action = action;
                    this.superclassName = superclassName;
                }
                @Override
                public Void visitClass(ClassTree t, Void v) {
                    Element el = info.getTrees().getElement(getCurrentPath());
                    if (el != null) {
                        TypeElement te = (TypeElement) el;
                        List<? extends TypeMirror> interfaces = te.getInterfaces();
                        if (te.getSuperclass().toString().equals(superclassName)) {
                            action.setEnabled(true);
                        } else {
                            action.setEnabled(false);
                        }
                        for (TypeMirror typeMirror : interfaces) {
                            if (typeMirror.toString().equals(superclassName)) {
                                action.setEnabled(true);
                            }
                        }
                    }
                    return null;
                }
            }
        }

    This is a pretty cool solution and, as you can see, very generic. Create a new ActionListener, register it in the layer so that it maps to the generic class above, and make sure to set the type attribute, which defines the superclass to which the action should be sensitive.

  • Suggestions for connecting .NET WPF GUI with Java SE Server

    - by Sam Goldberg
    BACKGROUND

    We are building a Java (SE) trading application which will be monitoring market data and sending trade messages based on the market data, and also on user-defined configuration parameters. We are planning to provide the user with a thin client, built in .NET (WPF), for managing the parameters, controlling the server behavior, and viewing the current state of the trading. The client doesn't need real-time updates; it will instead update the view once every few seconds (or whatever interval is configured by the user). The client has about 6 different operations it needs to perform with the server, for example:

    - CRUD with configuration parameters
    - query a subset of the data
    - receive updates of current positions from the server

    It is possible that most of the different operations (except for receiving data) are just different flavors of managing the configuration parameters, but it's too early in our analysis for us to be sure. To connect the client with the server, we have been considering:

    - a SOAP web service
    - a RESTful service
    - building a custom TCP/IP-based API (text or XML) (least preferred - but we use this approach with other applications we have)

    As best as I understand, the pros and cons of the different web service flavors are:

    SOAP
    - pro: totally automated in .NET (and Java); modifying the server-side interface requires no code changes in the communication layer, just running a refresh on the web service reference to regenerate the classes.
    - con: more overhead in the communication layer, sending more text, etc. We're not using a J2EE container, so maybe it doesn't work so well with J2SE.

    REST
    - pro: lighter weight, less data. Has good .NET and Java support. (I don't have any real experience with this, so I don't know what other benefits it has.)
    - con: the client will not automatically be aware if any new operations or properties are added (?), so the communication layer needs to be updated by the developer if the server interface changes.
    - con (both approaches): the server cannot really push updates to the client at regular intervals (?). (However, we won't mind if the client polls the server to get updates.)

    QUESTION

    What are your opinions on the above options, or suggestions for other ways to connect the 2 parts? (Ideally, we don't want to put much work into the communication layer, because it's not the significant part of the application, so the more off-the-shelf and automated the better.)
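
    Since polling is acceptable here, a minimal sketch of what the REST option could look like from the WPF side - the URL, port, and payload shape are hypothetical, and only standard System.Net.Http calls are used:

        using System;
        using System.Net.Http;
        using System.Threading.Tasks;

        class PositionsPoller
        {
            private static readonly HttpClient Client = new HttpClient();

            static async Task Main()
            {
                // Hypothetical endpoint exposed by the Java SE server.
                const string url = "http://trading-server.example:8080/api/positions";
                while (true)
                {
                    string json = await Client.GetStringAsync(url);
                    Console.WriteLine(json); // deserialize and refresh the view model here
                    await Task.Delay(TimeSpan.FromSeconds(5)); // user-configured interval
                }
            }
        }

    The same polling structure would work against a generated SOAP proxy as well, by swapping the GetStringAsync call for a call on the service reference.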

  • State / Screen management in Entity Component Systems

    - by David Lively
    My entity/component system is happily humming along and, despite some performance concerns I initially had, everything is working fine. However, I've realized that I missed a crucial point when starting this thing: how do you handle different screens?

    At the moment, I have a GameManager class which owns a component manager and entity manager. When I create an entity, the entity manager assigns it an ID and makes sure it's tracked. When I modify the components that are assigned to an entity, an UpdateEntity method is called, which alerts each of the systems that they may need to add or remove the entity from their respective entity lists.

    A problem with this is that the collection of entities operated on by each system is determined solely by the individual systems, typically based on a "required component" filter. (An entity has to have a Renderable component to be rendered, for instance.) In this situation, I can't just keep collections of entities per screen and only Update/Draw those collections. They'd have to either be added and removed depending on their applicability to the current screen, which would cause their associated components to be removed, or enabled/disabled in a group per screen to hide what's not supposed to be visible. These approaches seem like really, really crappy kludges.

    What's a good way to handle this? A pretty straightforward way that comes to mind is to create a separate GameManager (which in my implementation owns all of the systems, entities, etc.) per screen, which means that everything outside of the device context would be duplicated. That's bothersome because some things are always visible, or I might want to continue to display the game under a translucent menu window.

    Another option would be to add a "layer" key to the GameManager class, which could be checked against a displayable layer stack held by the game manager. *System.Draw() would be called for each active layer, in the required order as determined by the stack. When the systems request an iterator for their respective entity collections, it would be pre-filtered to a (cached) set of those entities that participate in the active layer. Those collections could be updated from the same UpdateEntity event that's already used to maintain each system's entity collections. Still, it kinda feels like a hack.

    If I've coded myself into a corner, feel free to throw tomatoes as long as they're labeled with a helpful suggestion. Hooray for learning curves.
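
    A minimal sketch of the "layer key" option under hypothetical names (the Entity shape, the cache, and the rebuild-on-UpdateEntity strategy are all assumptions, not a worked-out design):

        using System.Collections.Generic;
        using System.Linq;

        class Entity
        {
            public int Id;
            public int Layer; // which screen/layer this entity participates in
        }

        class GameManager
        {
            private readonly List<Entity> _entities = new List<Entity>();
            private readonly List<int> _layerStack = new List<int>(); // bottom..top, e.g. game then menu
            private Dictionary<int, List<Entity>> _byLayer = new Dictionary<int, List<Entity>>();

            public void AddEntity(Entity e) { _entities.Add(e); UpdateEntity(e); }

            public void PushLayer(int layer) => _layerStack.Add(layer);
            public void PopLayer() => _layerStack.RemoveAt(_layerStack.Count - 1);

            // Reuse the existing UpdateEntity hook to keep the per-layer cache fresh
            // (here a full rebuild; an incremental patch would also work).
            public void UpdateEntity(Entity e)
            {
                _byLayer = _entities.GroupBy(x => x.Layer)
                                    .ToDictionary(g => g.Key, g => g.ToList());
            }

            // Systems iterate this pre-filtered view instead of the raw entity list,
            // walking the stack bottom-up so lower layers draw first.
            public IEnumerable<Entity> ActiveEntities()
            {
                foreach (int layer in _layerStack)
                    if (_byLayer.TryGetValue(layer, out var list))
                        foreach (var e in list)
                            yield return e;
            }
        }

    Systems would then apply their usual "required component" filter on top of ActiveEntities(), so the game layer can keep drawing under a translucent menu layer simply by leaving both on the stack.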

  • As a tooling/automation developer, can I be making better use of OOP?

    - by Tom Pickles
    My time as a developer (~8 yrs) has been spent creating tooling/automation of one sort or another. The tools I develop usually interface with one or more APIs. These APIs could be Win32, WMI, VMware, a help-desk application, LDAP - you get the picture. The apps I develop could be just to pull back data and store/report it. It could be to provision groups of VMs to create live-like mock environments, update a trouble ticket, etc. I've been developing in .NET and I'm currently reading into design patterns and trying to think about how I can improve my skills to make better use of and increase my understanding of OOP. For example, I've never used an interface of my own making in anger (which is probably not a good thing), because I honestly cannot identify where using one would benefit later on when modifying my code. My classes are usually very specific and I don't create similar classes with similar properties/methods which could use a common interface (like perhaps a car dealership or shop application might). I generally use an n-tier approach to my apps, having a presentation layer and a business logic/manager layer which interfaces with layer(s) that make calls to the APIs I'm working with. My business entities are always just method-less container objects, which I populate with data and pass back and forth between my API-interfacing layer using static methods to proxy/validate between the front and the back end. My code, by nature of my work, has few common components, at least from what I can see. So I'm struggling to see how I can better make use of OOP design and perhaps reusable patterns. Am I right to be concerned that I could be being smarter about how I work, or is what I'm doing now right for my line of work? Or am I missing something fundamental in OOP?

    EDIT: Here is some basic code to show how my mgr and API-facing layers work. I use static classes as they do not persist any data, only facilitate moving it between layers.

        public static class MgrClass
        {
            public static bool PowerOnVM(string VMName)
            {
                // Perform logic to validate or apply biz logic,
                // then call APIClass to do the work
                return APIClass.PowerOnVM(VMName);
            }
        }

        public static class APIClass
        {
            public static bool PowerOnVM(string VMName)
            {
                // Calls to a 3rd-party API to power on a virtual machine;
                // returns true or false depending on whether it was successful
                return true; // placeholder for the real API call
            }
        }

  • Need help identifying what resources (e.g. in MIT OpenCourseWare) can help me prepare for a test [closed]

    - by jiewmeng
    I am entering uni soon. I can sit for a placement test to see if I am eligible for exemptions. The details are at http://www.comp.nus.edu.sg/undergraduates/TestScope11_12.html

    CS2100 Computer Organisation (please click title): The objective of this module is to familiarise students with the fundamentals of computing devices. Through this module students will understand the basics of data representation, and how the various parts of a computer work, separately and with each other. This allows students to understand the issues in computing devices, and how these issues affect the implementation of solutions. Topics covered include data representation systems, combinational and sequential circuit design techniques, assembly language, processor execution cycles, pipelining, memory hierarchy and input/output systems.

    Recommended textbooks:
    - Digital Design: Principles and Practices [DDPP] by John F. Wakerly, Prentice-Hall. ISBN 0-13-324500-4.
    - Computer Organizations and Design (The Hardware/Software Interface) by David A. Patterson and John L. Hennessy.

    CS2105 Introduction to Computer Networks (please click title): This course aims to provide a broad introduction to computer networks and some appreciation of network application programming. It covers a range of topics including basic data communication and computer network concepts, protocols, networked computing concepts and principles, network applications development and network security. The emphasis of teaching is on the working principles and application of computer networks. As an integral part of the course, tutorials and practical assignments reinforcing learning will also be given. These assignments provide an early exposure to network application programming, and students should be able to complete them using personal computers and the school's network facilities. Topics included:
    - An overview of computer networks and the Internet
    - Basic data communications
    - Application layer
    - Transport layer
    - Network layer and routing
    - Link layer and local area networks

    Recommended textbook:
    - James F. Kurose & Keith W. Ross, Computer Networking: A Top-Down Approach Featuring the Internet, Addison Wesley, 2001.

    I am wondering what resources, e.g. MIT OpenCourseWare or other universities' resources, are available to help me prepare for these particular modules. I am thinking: does the networking one look like CCNA? The computer organisation one - is it like electronics, assembly, etc.? I learnt some electronics in Poly, but looking at the sample papers, uni looks very different... I have about 1 month to prepare if I want any chance of being exempted from these modules :) any help?

  • ASP.NET 4.0 and the Entity Framework 4 - Part 4 - A 3 Layered Approach to the Entity Framework

    In this article, Vince suggests a pattern to use when developing a three layered application using the Entity Framework 4. After providing a short introduction he demonstrates the creation of the database, data access layer, business logic layer, and a web form. He does so with the help of detailed explanations, source code examples and related screenshots. He also examines how to select records to load a Drop Down List, including adding, editing and deleting records.
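
    A minimal sketch of the kind of layering the article walks through - the entity, context, and method names here are hypothetical stand-ins, not the article's actual code:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public bool IsActive { get; set; }
        }

        // Stand-in for the EF-generated context (the article generates this
        // from the database with the Entity Framework 4 designer):
        public class StoreEntities : IDisposable
        {
            private readonly List<Customer> _customers = new List<Customer>();
            public IQueryable<Customer> Customers => _customers.AsQueryable();
            public void Dispose() { }
        }

        // Data access layer: the only layer that touches the EF context.
        public class CustomerDal
        {
            public List<Customer> GetCustomers()
            {
                using (var ctx = new StoreEntities())
                {
                    return ctx.Customers.ToList();
                }
            }
        }

        // Business logic layer: rules live here, not in the page.
        public class CustomerBll
        {
            private readonly CustomerDal _dal = new CustomerDal();

            public List<Customer> GetCustomersForDropDown()
            {
                return _dal.GetCustomers().Where(c => c.IsActive).ToList();
            }
        }

        // Presentation layer (web form code-behind) would then bind the result:
        // ddlCustomers.DataSource = new CustomerBll().GetCustomersForDropDown();
        // ddlCustomers.DataBind();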

  • Is Innovation Dead?

    - by wulfers
    My question is: has innovation died? For large businesses that do not have vibrant and fearless leadership (see Apple under Steve Jobs), I think it has. If you look at the organizational charts of many of the large corporate megaliths, you will see a plethora of middle managers who are so risk-averse that innovation (any change involves risk) is choked off, since there are no innovation champions in the middle layers. And innovation driven top-down can only happen when you have a visionary in the top ranks, and that is also very rare.

    So where is actual innovation happening? At the bottom layer - the people who live in the trenches, the people who live for a challenge. So how can big business leverage this innovation layer? Remove the middle management layer. Provide an innovation champion who has an R&D budget and is tasked with working with the bottom layer of a company: the engineers, developers and business analysts who live on the edge (where the corporate tires meet the road).

    Here are two innovation failures I will tell you about, both at a company so risk-averse it is starting to fail in its primary business ventures:

    This company initiated an innovation process several years ago. The process was driven companywide, with team managers being the central points of collection for innovative ideas. These managers were given no budget to do anything with these ideas, and there was no process or incentive for them to drive the program within their teams. This lasted close to a year, and the innovation program slowly slipped into oblivion...

    A second example: this same company failed in an attempt to market a consumer product in a line where there was already a major market leader. The product was under development for several years and needed to provide some major device differentiation from the current market leader. This same company had a large Lead Technologist community made up of real innovators in all areas of technology. Did this same company leverage the skills and experience of this internal community? NO!!!

    So to wrap this up: if large companies really want to survive, they need to start acting like a small company. Support those innovators and risk takers! Reward them by implementing their innovative ideas. Champion (from the top down) innovation (found at the bottom) in your companies. Remember, if you stand still you are really falling behind. Do it now! Take a risk!
