Search Results

Search found 9122 results on 365 pages for 'cartesian product'.


  • Weblogic WLST classpath

    - by user43736
    When I run the WLST .sh script to set the env as follows, why can't I see the updated path when I do echo?

    [linbox2 bin]$ ./setWLSEnv.sh
    CLASSPATH=/directory/ols_wls/patch_wlss1032/profiles/default/sys_manifest_classpath/weblogic_patch.jar: /directory/ols_wls/patch_wls1032/profiles/default/sys_manifest_classpath/weblogic_patch.jar: /directory/ols_wls/patch_oepe1032/profiles/default/sys_manifest_classpath/weblogic_patch.jar: /directory/ols_wls/patch_ocm1031/profiles/default/sys_manifest_classpath/weblogic_patch.jar: /directory/ols_wls/jrockit_160_14_R27.6.5-32/lib/tools.jar: /directory/ols_wls/utils/config/10.3/config-launch.jar: /directory/ols_wls/wlserver_10.3/server/lib/weblogic_sp.jar: /directory/ols_wls/wlserver_10.3/server/lib/weblogic.jar: /directory/ols_wls/modules/features/weblogic.server.modules_10.3.2.0.jar: /directory/ols_wls/wlserver_10.3/server/lib/webservices.jar: /directory/ols_wls/modules/org.apache.ant_1.7.0/lib/ant-all.jar: /directory/ols_wls/modules/net.sf.antcontrib_1.0.0.0_1-0b2/lib/ant-contrib.jar:
    PATH=/directory/ols_wls/wlserver_10.3/server/bin: /directory/ols_wls/modules/org.apache.ant_1.7.0/bin: /directory/ols_wls/jrockit_160_14_R27.6.5-32/jre/bin: /directory/ols_wls/jrockit_160_14_R27.6.5-32/bin: /usr/kerberos/bin: /usr/local/bin: /bin: /usr/bin: /usr/X11R6/bin: /usr/java/j2sdk1.4.2_11/bin/bin: /home/oracle/bin: /directory/wls_olwcs/jdk160_14_R27.6.5-32/bin: /directory/ccanywhere81/bin:/directory/oracle/oracle/product/10.2.0/client_1/bin
    Your environment has been set.
    [linbox2 bin]$ export CLASSPATH
    [linbox2 bin]$ export PATH
    [linbox2 bin]$ echo $PATH
    /usr/kerberos/bin: /usr/local/bin: /bin: /usr/bin: /usr/X11R6/bin: /usr/java/j2sdk1.4.2_11/bin/bin: /home/oracle/bin: /directory/wls_olwcs/jdk160_14_R27.6.5-32/bin: /directory/ccanywhere81/bin: /directory/oracle/oracle/product/10.2.0/client_1/bin
    [linbox2 bin]$

    Read the article

  • Mysql InnoDB and quickly applying large updates

    - by Tim
    Basically my problem is that I have a large table of about 17,000,000 products that I need to apply a bunch of updates to really quickly. The table has 30 columns, with the id set as int(10) AUTO_INCREMENT. All of the updates for this table are stored in another table; these updates have to be pre-calculated because they take a couple of days to compute. That table is in the format [ product_id int(10), update_value int(10) ]. The strategy I'm taking to issue these 17 million updates quickly is to load all of them into memory in a Ruby script and group them in a hash of arrays, so that each update_value is a key and each value is a sorted array of product_ids:

    { 150 => [1,2,3,4,5,6], 160 => [7,8,9,10] }

    Updates are then issued in the form:

    UPDATE product SET update_value = 150 WHERE product_id IN (1,2,3,4,5,6);
    UPDATE product SET update_value = 160 WHERE product_id IN (7,8,9,10);

    I'm pretty sure I'm doing this correctly, in the sense that issuing the updates on sorted batches of product_ids should be the optimal way to do it with MySQL/InnoDB. I'm hitting a weird issue though: when I was testing with ~13 million records, the updates took only around 45 minutes. Now I'm testing with more data, ~17 million records, and the updates are taking closer to 120 minutes. I would have expected some slowdown, but not to the degree I'm seeing. Any advice on how I can speed this up, or on what could be slowing me down with this larger record set? As far as server specs go, they're pretty good: heaps of memory and CPU, and the whole DB should fit into memory with plenty of room to grow.
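
    For illustration, here is a minimal sketch of the grouping-and-batching step described above, written in C# rather than Ruby (the type name, method name, and the 1,000-id batch size are made up for the example, not part of the original setup). It groups pending (product_id, update_value) pairs by value, sorts the ids, and emits UPDATE ... IN statements in bounded chunks:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class UpdateBatcher
    {
        // Groups (productId, updateValue) pairs by value and emits batched
        // UPDATE statements with sorted id lists, capped at batchSize ids each.
        public static IEnumerable<string> BuildUpdateStatements(
            IEnumerable<(int ProductId, int UpdateValue)> pendingUpdates,
            int batchSize = 1000)
        {
            var byValue = pendingUpdates.GroupBy(u => u.UpdateValue, u => u.ProductId);

            foreach (var group in byValue)
            {
                var sortedIds = group.OrderBy(id => id).ToList();
                for (int i = 0; i < sortedIds.Count; i += batchSize)
                {
                    var chunk = sortedIds.Skip(i).Take(batchSize);
                    yield return
                        $"UPDATE product SET update_value = {group.Key} " +
                        $"WHERE product_id IN ({string.Join(",", chunk)});";
                }
            }
        }
    }

    Each generated statement can then be executed in moderately sized transactions; keeping the id lists sorted and bounded is the same idea the question's Ruby strategy relies on, since touching rows in primary-key order keeps InnoDB's page access largely sequential.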

    Read the article

  • Can't connect to wi-fi hotspot in Ubuntu 11.10

    - by ht3t
    I'm new to Ubuntu, and I'm having a wireless network problem in Ubuntu 11.10. I made a hotspot using Connectify on a computer running Windows 7. I can access it from Windows 7 but not from Ubuntu 11.10; every time I try to connect, I get a "disconnected" message. I'm using an MSI FX 400 notebook with an Intel Centrino Wireless-N 1000 card. The Ubuntu version is 11.10 with the KDE desktop.

    $ sudo lshw -c network
    [sudo] password for ht3t:
    *-network description: Wireless interface product: Centrino Wireless-N 1000 vendor: Intel Corporation physical id: 0 bus info: pci@0000:06:00.0 logical name: wlan0 version: 00 serial: 00:26:c7:56:b8:f0 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=iwlagn driverversion=3.0.0-12-generic firmware=39.31.5.1 build 35138 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn resources: irq:44 memory:e7400000-e7401fff
    *-network description: Ethernet interface product: RTL8111/8168B PCI Express Gigabit Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:07:00.0 logical name: eth0 version: 06 serial: 40:61:86:b6:b1:a2 size: 100Mbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8168e-2.fw IP=192.168.21.107 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s resources: irq:41 ioport:9000(size=256) memory:e6004000-e6004fff memory:e6000000-e6003fff

    I can't do anything without an internet connection. How can I fix this?

    Read the article

  • Error when Eclipse started and now my package explorer is empty!

    - by carpenteri
    Friends, Just a quick introduction, I'm currently learning Java, using a combination of the Head First Java book and Eclipse. Everything was going well until tonight! When I started up Eclipse tonight, I saw an error message which I didn't pay attention to (I know! I know!) and acknowledged after which the project explorer was empty where it used to contain my Head First project! After a quick "google" I found the workspace.metadata.log and the errors are shown below. The version of Eclipse I am using is: 20100218-1602 and the only plugin that I use is egit. Any help would be much appreciated. Thanks !SESSION 2010-06-08 19:24:33.841 ----------------------------------------------- eclipse.buildId=unknown java.version=1.5.0_22 java.vendor=Sun Microsystems Inc. BootLoader constants: OS=win32, ARCH=x86, WS=win32, NL=en_GB Framework arguments: -product org.eclipse.epp.package.java.product Command-line arguments: -os win32 -ws win32 -arch x86 -product org.eclipse.epp.package.java.product !ENTRY org.eclipse.ui.workbench 4 2 2010-06-08 19:24:36.475 !MESSAGE Problems occurred when invoking code from plug-in: "org.eclipse.ui.workbench". !STACK 1 org.eclipse.ui.WorkbenchException: Content is not allowed in prolog. at org.eclipse.ui.XMLMemento.createReadRoot(XMLMemento.java:121) at org.eclipse.ui.XMLMemento.createReadRoot(XMLMemento.java:64) at org.eclipse.ui.internal.Workbench$49.run(Workbench.java:1895) at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42) at org.eclipse.ui.internal.Workbench.restoreState(Workbench.java:1890) at org.eclipse.ui.internal.WorkbenchConfigurer.restoreState(WorkbenchConfigurer.java:183) at org.eclipse.ui.application.WorkbenchAdvisor$1.run(WorkbenchAdvisor.java:781) Caused by: org.xml.sax.SAXParseException: Content is not allowed in prolog. at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:264) at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:292) at org.eclipse.ui.XMLMemento.createReadRoot(XMLMemento.java:94) ... 6 more !SUBENTRY 1 org.eclipse.ui 4 0 2010-06-08 19:24:36.475 !MESSAGE Content is not allowed in prolog. !STACK 0 org.xml.sax.SAXParseException: Content is not allowed in prolog. at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:264) at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:292) at org.eclipse.ui.XMLMemento.createReadRoot(XMLMemento.java:94) at org.eclipse.ui.XMLMemento.createReadRoot(XMLMemento.java:64) at org.eclipse.ui.internal.Workbench$49.run(Workbench.java:1895) at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42) at org.eclipse.ui.internal.Workbench.restoreState(Workbench.java:1890) at org.eclipse.ui.internal.WorkbenchConfigurer.restoreState(WorkbenchConfigurer.java:183) at org.eclipse.ui.application.WorkbenchAdvisor$1.run(WorkbenchAdvisor.java:781) !SUBENTRY 1 org.eclipse.ui 4 0 2010-06-08 19:24:36.475 !MESSAGE Content is not allowed in prolog. !STACK 0 org.xml.sax.SAXParseException: Content is not allowed in prolog. 
at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:264) at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:292) at org.eclipse.ui.XMLMemento.createReadRoot(XMLMemento.java:94) at org.eclipse.ui.XMLMemento.createReadRoot(XMLMemento.java:64) at org.eclipse.ui.internal.Workbench$49.run(Workbench.java:1895) at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42) at org.eclipse.ui.internal.Workbench.restoreState(Workbench.java:1890) at org.eclipse.ui.internal.WorkbenchConfigurer.restoreState(WorkbenchConfigurer.java:183) at org.eclipse.ui.application.WorkbenchAdvisor$1.run(WorkbenchAdvisor.java:781) !ENTRY org.eclipse.jdt.ui 4 10001 2010-06-08 19:24:41.442 !MESSAGE Internal Error !STACK 1 org.eclipse.jdt.internal.ui.JavaUIException: Problems reading information from XML 'OpenTypeHistory.xml' at org.eclipse.jdt.internal.corext.util.History.createException(History.java:70) at org.eclipse.jdt.internal.corext.util.History.load(History.java:257) at org.eclipse.jdt.internal.corext.util.History.load(History.java:166) at org.eclipse.jdt.internal.corext.util.OpenTypeHistory.<init>(OpenTypeHistory.java:199) at org.eclipse.jdt.internal.corext.util.OpenTypeHistory.getInstance(OpenTypeHistory.java:185) at org.eclipse.jdt.internal.ui.JavaPlugin.initializeAfterLoad(JavaPlugin.java:381) at org.eclipse.jdt.internal.ui.InitializeAfterLoadJob$RealJob.run(InitializeAfterLoadJob.java:36) at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55) Caused by: org.xml.sax.SAXParseException: Content is not allowed in prolog. at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:264) at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:292) at org.eclipse.jdt.internal.corext.util.History.load(History.java:255) ... 6 more !SUBENTRY 1 org.eclipse.jdt.ui 4 4 2010-06-08 19:24:41.442 !MESSAGE Problems reading information from XML 'OpenTypeHistory.xml' !STACK 0 org.xml.sax.SAXParseException: Content is not allowed in prolog. 
at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:264) at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:292) at org.eclipse.jdt.internal.corext.util.History.load(History.java:255) at org.eclipse.jdt.internal.corext.util.History.load(History.java:166) at org.eclipse.jdt.internal.corext.util.OpenTypeHistory.<init>(OpenTypeHistory.java:199) at org.eclipse.jdt.internal.corext.util.OpenTypeHistory.getInstance(OpenTypeHistory.java:185) at org.eclipse.jdt.internal.ui.JavaPlugin.initializeAfterLoad(JavaPlugin.java:381) at org.eclipse.jdt.internal.ui.InitializeAfterLoadJob$RealJob.run(InitializeAfterLoadJob.java:36) at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55) !ENTRY org.eclipse.jdt.ui 4 10001 2010-06-08 19:24:50.435 !MESSAGE Internal Error !STACK 1 org.eclipse.jdt.internal.ui.JavaUIException: Problems reading information from XML 'QualifiedTypeNameHistory.xml' at org.eclipse.jdt.internal.corext.util.History.createException(History.java:70) at org.eclipse.jdt.internal.corext.util.History.load(History.java:257) at org.eclipse.jdt.internal.corext.util.History.load(History.java:166) at org.eclipse.jdt.internal.corext.util.QualifiedTypeNameHistory.<init>(QualifiedTypeNameHistory.java:33) at org.eclipse.jdt.internal.corext.util.QualifiedTypeNameHistory.getDefault(QualifiedTypeNameHistory.java:26) at org.eclipse.jdt.internal.ui.JavaPlugin.stop(JavaPlugin.java:602) at org.eclipse.osgi.framework.internal.core.BundleContextImpl$2.run(BundleContextImpl.java:843) at java.security.AccessController.doPrivileged(Native Method) at org.eclipse.osgi.framework.internal.core.BundleContextImpl.stop(BundleContextImpl.java:836) at org.eclipse.osgi.framework.internal.core.BundleHost.stopWorker(BundleHost.java:474) at org.eclipse.osgi.framework.internal.core.AbstractBundle.suspend(AbstractBundle.java:546) at org.eclipse.osgi.framework.internal.core.Framework.suspendBundle(Framework.java:1098) at org.eclipse.osgi.framework.internal.core.StartLevelManager.decFWSL(StartLevelManager.java:593) at org.eclipse.osgi.framework.internal.core.StartLevelManager.doSetStartLevel(StartLevelManager.java:261) at org.eclipse.osgi.framework.internal.core.StartLevelManager.shutdown(StartLevelManager.java:216) at org.eclipse.osgi.framework.internal.core.InternalSystemBundle.suspend(InternalSystemBundle.java:266) at org.eclipse.osgi.framework.internal.core.Framework.shutdown(Framework.java:685) at org.eclipse.osgi.framework.internal.core.Framework.close(Framework.java:583) at org.eclipse.core.runtime.adaptor.EclipseStarter.shutdown(EclipseStarter.java:409) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:200) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:592) at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:559) at org.eclipse.equinox.launcher.Main.basicRun(Main.java:514) at org.eclipse.equinox.launcher.Main.run(Main.java:1311) Caused by: org.xml.sax.SAXParseException: Content is not allowed in prolog. at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:264) at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:292) at org.eclipse.jdt.internal.corext.util.History.load(History.java:255) ... 
25 more !SUBENTRY 1 org.eclipse.jdt.ui 4 4 2010-06-08 19:24:50.435 !MESSAGE Problems reading information from XML 'QualifiedTypeNameHistory.xml' !STACK 0 org.xml.sax.SAXParseException: Content is not allowed in prolog. at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:264) at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:292) at org.eclipse.jdt.internal.corext.util.History.load(History.java:255) at org.eclipse.jdt.internal.corext.util.History.load(History.java:166) at org.eclipse.jdt.internal.corext.util.QualifiedTypeNameHistory.<init>(QualifiedTypeNameHistory.java:33) at org.eclipse.jdt.internal.corext.util.QualifiedTypeNameHistory.getDefault(QualifiedTypeNameHistory.java:26) at org.eclipse.jdt.internal.ui.JavaPlugin.stop(JavaPlugin.java:602) at org.eclipse.osgi.framework.internal.core.BundleContextImpl$2.run(BundleContextImpl.java:843) at java.security.AccessController.doPrivileged(Native Method) at org.eclipse.osgi.framework.internal.core.BundleContextImpl.stop(BundleContextImpl.java:836) at org.eclipse.osgi.framework.internal.core.BundleHost.stopWorker(BundleHost.java:474) at org.eclipse.osgi.framework.internal.core.AbstractBundle.suspend(AbstractBundle.java:546) at org.eclipse.osgi.framework.internal.core.Framework.suspendBundle(Framework.java:1098) at org.eclipse.osgi.framework.internal.core.StartLevelManager.decFWSL(StartLevelManager.java:593) at org.eclipse.osgi.framework.internal.core.StartLevelManager.doSetStartLevel(StartLevelManager.java:261) at org.eclipse.osgi.framework.internal.core.StartLevelManager.shutdown(StartLevelManager.java:216) at org.eclipse.osgi.framework.internal.core.InternalSystemBundle.suspend(InternalSystemBundle.java:266) at org.eclipse.osgi.framework.internal.core.Framework.shutdown(Framework.java:685) at org.eclipse.osgi.framework.internal.core.Framework.close(Framework.java:583) at org.eclipse.core.runtime.adaptor.EclipseStarter.shutdown(EclipseStarter.java:409) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:200) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:592) at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:559) at org.eclipse.equinox.launcher.Main.basicRun(Main.java:514) at org.eclipse.equinox.launcher.Main.run(Main.java:1311)

    Read the article

  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
    Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it’s not enough to have a test red or green, but it’s also important to have it red or green for the right reasons. While for me, it’s sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he’s right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense, see the rest of this article). This made me think deeply for some days now. In the end I found out that the ‘right reason’ changes in my understanding depending on what development phase I’m in. To make this clear (at least I hope it becomes clear…) I started to describe my way of working in some detail, and then something strange happened: The scope of the article slightly shifted from focusing ‘only’ on the ‘right reason’ issue to something more general, which you might describe as something like  'Doing real-world TDD in .NET , with massive use of third-party add-ins’. This is because I feel that there is a more general statement about Test-driven development to make:  It’s high time to speak about the ‘How’ of TDD, not always only the ‘Why’. Much has been said about this, and me myself also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run, it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I’m somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don’t want to spent my time exclusively on stating the obvious… So, again, let’s say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. – I know that there are many people out there who will disagree with this radical statement, and I also know that it’s not a description of the real world but more of a mission statement or something. But nevertheless I’m absolutely sure that in some years this statement will be nothing but a platitude. Side note: Some parts of this post read as if I were paid by Jetbrains (the manufacturer of the ReSharper add-in – R#), but I swear I’m not. Rather I think that Visual Studio is just not production-complete without it, and I wouldn’t even consider to do professional work without having this add-in installed... The three parts of a software component Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts shown below:   First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer’s brain of what might be needed, or anything in between. In either way, there has to be some sort of requirement, be it explicit or not. 
– At the C# micro-level, the best way that I found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments. The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. - For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice. The third part then finally is the production code itself. It’s development is entirely driven by the requirements and their executable formulation. This is the delivery, the two other parts are ‘only’ there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how  the product is developed, he is only interested in the fact that it is developed as cost-effective as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer’s craftsmanship, and this is what I want to talk about during the remainder of this article… An example To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here… The requirement As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question “intf or not” doesn’t even come to mind. I need them for my usual workflow and using them automatically produces high componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with: namespace Calculator {     /// <summary>     /// Defines a very simple calculator component for demo purposes.     /// </summary>     public interface ICalculator     {         /// <summary>         /// Gets the result of the last successful operation.         /// </summary>         /// <value>The last result.</value>         /// <remarks>         /// Will be <see langword="null" /> before the first successful operation.         /// </remarks>         double? LastResult { get; }       } // interface ICalculator   } // namespace Calculator So, I’m not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here: Starting this way gives me a method signature, which allows to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process. In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation than about coding. The documentation must completely describe the behavior of the documented element. I normally use an IoC container or some sort of self-written provider-like model in my architecture. 
In either case, I need my components defined via service interfaces anyway. - I will use the LinFu IoC framework here, for no other reason as that is is very simple to use. The ‘Red’ (pt. 1)   First I create a folder for the project’s third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template), and add references to the Calculator project and the LinFu dll. Finally I’m ready to write the first test, which will look like the following: namespace Calculator.Test {     [TestFixture]     public class CalculatorTest     {         private readonly ServiceContainer container = new ServiceContainer();           [Test]         public void CalculatorLastResultIsInitiallyNull()         {             ICalculator calculator = container.GetService<ICalculator>();               Assert.IsNull(calculator.LastResult);         }       } // class CalculatorTest   } // namespace Calculator.Test       This is basically the executable formulation of what the interface definition states (part of). Side note: There’s one principle of TDD that is just plain wrong in my eyes: I’m talking about the Red is 'does not compile' thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that, it just makes no sense to me. (Or, in Derick’s terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: Your code is incorrect, but nothing more.  Instead, the ‘Red’ part of the red-green-refactor cycle has a clearly defined meaning to me: It means that the test works as intended and fails only if its assumptions are not met for some reason. Back to our Calculator. When I execute the above test with R#, the Gallio plugin will give me this output: So this tells me that the test is red for the wrong reason: There’s no implementation that the IoC-container could load, of course. So let’s fix that. With R#, this is very easy: First, create an ICalculator - derived type:        Next, implement the interface members: And finally, move the new class to its own file: So far my ‘work’ was six mouse clicks long, the only thing that’s left to do manually here, is to add the Ioc-specific wiring-declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces: This is what my Calculator class looks like as of now: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult         {             get             {                 throw new NotImplementedException();             }         }     } } Back to the test fixture, we have to put our IoC container to work: [TestFixture] public class CalculatorTest {     #region Fields       private readonly ServiceContainer container = new ServiceContainer();       #endregion // Fields       #region Setup/TearDown       [FixtureSetUp]     public void FixtureSetUp()     {        container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");     }       ... Because I have a R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more… The ‘Red’ (pt. 2) Now, the execution of the above test gives the following result: This time, the test outcome tells me that the method under test is called. 
And this is the point, where Derick and I seem to have somewhat different views on the subject: Of course, the test still is worthless regarding the red/green outcome (or: it’s still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I’m not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that’s the case, I will happily go on to the ‘Green’ part… The ‘Green’ Making the test green is quite trivial. Just make LastResult an automatic property:     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult { get; private set; }     }         One more round… Now on to something slightly more demanding (cough…). Let’s state that our Calculator exposes an Add() method:         ...   /// <summary>         /// Adds the specified operands.         /// </summary>         /// <param name="operand1">The operand1.</param>         /// <param name="operand2">The operand2.</param>         /// <returns>The result of the additon.</returns>         /// <exception cref="ArgumentException">         /// Argument <paramref name="operand1"/> is &lt; 0.<br/>         /// -- or --<br/>         /// Argument <paramref name="operand2"/> is &lt; 0.         /// </exception>         double Add(double operand1, double operand2);       } // interface ICalculator A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That’s certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. And using that, it looks like this:   Apart from that, I’m heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location.  This way, a team always has first class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding up things and avoiding typos: You have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…).     Back to our Calculator again: Two more R# – clicks implement the Add() skeleton:         ...           public double Add(double operand1, double operand2)         {             throw new NotImplementedException();         }       } // class Calculator As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let’s start implementing that. Here’s the test: [Test] [Row(-0.5, 2)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); } As you can see, I’m using a data-driven unit test method here, mainly for these two reasons: Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose. Rather, I only will have to add another Row attribute to the existing one. From the test report below, you can see that the argument values are explicitly printed out. 
This can be a valuable documentation feature even when everything is green: One can quickly review what values were tested exactly - the complete Gallio HTML-report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example). Back to our Calculator development again, this is what the test result tells us at the moment: So we’re red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here’s the test and the method implementation at the end of the second cycle: // in CalculatorTest:   [Test] [Row(-0.5, 2)] [Row(295, -123)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); }   // in Calculator: public double Add(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }     if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }     throw new NotImplementedException(); } So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is regarded here in more detail). Now we can think about the method’s successful outcomes. First let’s write another test for that: [Test] [Row(1, 1, 2)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } Again, I’m regularly using row based test methods for these kinds of unit tests. The above shown pattern proved to be extremely helpful for my development work, I call it the Defined-Input/Expected-Output test idiom: You define your input arguments together with the expected method result. There are two major benefits from that way of testing: In the course of refining a method, it’s very likely to come up with additional test cases. In our case, we might add tests for some edge cases like ‘one of the operands is zero’ or ‘the sum of the two operands causes an overflow’, or maybe there’s an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need of testing against additional values. In all these scenarios we only have to add another Row attribute to the test. Remember that the argument values are written to the test report, so as a side-effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven). 
So your test method might look something like that in the end: [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 2)] [Row(0, 999999999, 999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, double.MaxValue)] [Row(4, double.MaxValue - 2.5, double.MaxValue)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } And this will produce the following HTML report (with Gallio):   Not bad for the amount of work we invested in it, huh? - There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review… The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don’t show this here, it’s trivial enough and brings nothing new… And finally: Refactor (for the right reasons) To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here’s the code (tests and production): // CalculatorTest.cs:   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtract(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, result); }   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, calculator.LastResult); }   ...   // ICalculator.cs: /// <summary> /// Subtracts the specified operands. /// </summary> /// <param name="operand1">The operand1.</param> /// <param name="operand2">The operand2.</param> /// <returns>The result of the subtraction.</returns> /// <exception cref="ArgumentException"> /// Argument <paramref name="operand1"/> is &lt; 0.<br/> /// -- or --<br/> /// Argument <paramref name="operand2"/> is &lt; 0. /// </exception> double Subtract(double operand1, double operand2);   ...   // Calculator.cs:   public double Subtract(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }       if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }       return (this.LastResult = operand1 - operand2).Value; }   Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of code lines of the production code, we do an Extract Method refactoring. 
One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#: Having done that, our production code finally looks like that: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         #region ICalculator           public double? LastResult { get; private set; }           public double Add(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 + operand2).Value;         }           public double Subtract(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 - operand2).Value;         }           #endregion // ICalculator           #region Implementation (Helper)           private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)         {             if (operand1 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand1");             }               if (operand2 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand2");             }         }           #endregion // Implementation (Helper)       } // class Calculator   } // namespace Calculator But is the above worth the effort at all? It’s obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It’s not immediately clear how this refactoring work adds value to the project. Derick puts it like this: STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if your done with your requirements after making the test green, you are not required to refactor the code. I know… I’m speaking heresy, here. Toss me to the wolves, I’ve gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for you class at this point, what value does refactoring add? Derick immediately answers his own question: So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern’s intentions, less architecturally sound, less DRY, etc, then you should refactor it. I couldn’t state it more precise. From my personal perspective, I’d add the following: You have to keep in mind that real-world software systems are usually quite large and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It’s the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic on the individual, seemingly trivial cases. My job regularly requires the reading and understanding of ‘foreign’ code. 
So code quality/readability really makes a HUGE difference for me – sometimes it can be even the difference between project success and failure… Conclusions The above described development process emerged over the years, and there were mainly two things that guided its evolution (you might call it eternal principles, personal beliefs, or anything in between): Test-driven development is the normal, natural way of writing software, code-first is exceptional. So ‘doing TDD or not’ is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I’ve never seen high-quality code – and high-quality code is code that stood the test of time and causes low maintenance costs – that was produced code-first…) It’s the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to go into the right direction…). The test code serves ‘only’ to make the production code work. But it’s the number of delivered features which solely counts at the end of the day - no matter how much test code you wrote or how good it is. With these two things in mind, I tried to optimize my coding process for coding speed – or, in business terms: productivity - without sacrificing the principles of TDD (more than I’d do either way…).  As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code (This might sound heavy, but that is mainly due to the fact that software development standards only begin to evolve. The entire software development profession is very young, historically seen; only at the very beginning, and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio sounds no longer extraordinary…) Although the above might look like very much unnecessary work at first sight, it’s not. With the aid of the mentioned add-ins, doing all the above is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines or the lack of a tool or something like that - for ‘saving’ a few 100 bucks -  is just not acceptable and a very bad decision in business terms (though I quite some times have seen and heard that…). Production of high-quality products needs the usage of high-quality tools. This is a platitude that every craftsman knows… The here described round-trip will take me about five to ten minutes in my real-world development practice. I guess it’s about 30% more time compared to developing the ‘traditional’ (code-first) way. But the so manufactured ‘product’ is of much higher quality and massively reduces maintenance costs, which is by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of software development… But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The here described development method might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it’s not - e.g. 
if time-to-market is crucial for a software project. So this is a business decision in the end. It’s just that you have to know what you’re doing and what consequences this might have… Some last words: First, I’d like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn’t have done that without this inspiration. I really enjoy that kind of discussion… I agree with him in all respects. But I don’t know (yet?) how to bring his insights into the described production process without slowing things down. The method described above has proved to be “good enough” in my practical experience. But of course, I’m open to suggestions here… My rationale for now is: if the test is initially red during the red-green-refactor cycle, the ‘right reason’ is that it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, ‘red’ certainly must occur for the ‘right reason’: in this phase, ‘red’ MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!

    Read the article

  • Ubuntu 10.04 recognizing USB 2.0 external HD as USB 1.1

    - by btucker
    When I connect the USB 2.0 drive I see this: usb 1-4.3: new full speed USB device using ohci_hcd and address 5 so I know it's getting seen as USB 1.1. usb-devices shows that it really is USB 2.0 and connected to a USB 2.0 hub: T: Bus=01 Lev=01 Prnt=01 Port=03 Cnt=01 Dev#= 2 Spd=12 MxCh= 4 D: Ver= 2.00 Cls=09(hub ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1 P: Vendor=05e3 ProdID=0608 Rev=77.61 S: Product=USB2.0 Hub C: #Ifs= 1 Cfg#= 1 Atr=e0 MxPwr=100mA I: If#= 0 Alt= 0 #EPs= 1 Cls=09(hub ) Sub=00 Prot=00 Driver=hub T: Bus=01 Lev=02 Prnt=02 Port=01 Cnt=01 Dev#= 4 Spd=12 MxCh= 0 D: Ver= 2.00 Cls=00(>ifc ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1 P: Vendor=13fd ProdID=1340 Rev=02.10 S: Manufacturer=Generic S: Product=External C: #Ifs= 1 Cfg#= 1 Atr=c0 MxPwr=2mA I: If#= 0 Alt= 0 #EPs= 2 Cls=08(stor.) Sub=06 Prot=50 Driver=usb-storage It seems the problem is that root hub is: T: Bus=01 Lev=00 Prnt=00 Port=00 Cnt=00 Dev#= 1 Spd=12 MxCh=10 D: Ver= 1.10 Cls=09(hub ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1 P: Vendor=1d6b ProdID=0001 Rev=02.06 S: Manufacturer=Linux 2.6.32-25-server ohci_hcd S: Product=OHCI Host Controller S: SerialNumber=0000:00:02.0 C: #Ifs= 1 Cfg#= 1 Atr=e0 MxPwr=0mA I: If#= 0 Alt= 0 #EPs= 1 Cls=09(hub ) Sub=00 Prot=00 Driver=hub And there's no mention of ehci_hcd. lsusb -t gives me: /: Bus 01.Port 1: Dev 1, Class=root_hub, Driver=ohci_hcd/10p, 12M |__ Port 4: Dev 2, If 0, Class=hub, Driver=hub/4p, 12M |__ Port 2: Dev 4, If 0, Class=stor., Driver=usb-storage, 12M |__ Port 3: Dev 5, If 0, Class=stor., Driver=usb-storage, 12M |__ Port 6: Dev 3, If 0, Class=stor., Driver=usb-storage, 12M It seems like I'm missing something which would allow the OS to see USB 2.0 devices. Can anyone point me in the right direction? EDIT Full lsusb -v output: Bus 001 Device 005: ID 13fd:1340 Initio Corporation Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 idVendor 0x13fd Initio Corporation idProduct 0x1340 bcdDevice 2.10 iManufacturer 1 Generic iProduct 2 External iSerial 3 57442D574341595930323337 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 32 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xc0 Self Powered MaxPower 2mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 2 bInterfaceClass 8 Mass Storage bInterfaceSubClass 6 SCSI bInterfaceProtocol 80 Bulk (Zip) iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0040 1x 64 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x02 EP 2 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0040 1x 64 bytes bInterval 0 Device Qualifier (for other device speed): bLength 10 bDescriptorType 6 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 bNumConfigurations 1 Device Status: 0x0001 Self Powered Bus 001 Device 002: ID 05e3:0608 Genesys Logic, Inc. USB-2.0 4-Port HUB Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 0 Full speed (or root) hub bMaxPacketSize0 64 idVendor 0x05e3 Genesys Logic, Inc. 
idProduct 0x0608 USB-2.0 4-Port HUB bcdDevice 77.61 iManufacturer 0 iProduct 1 USB2.0 Hub iSerial 0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 100mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0001 1x 1 bytes bInterval 255 Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 4 wHubCharacteristic 0x00e0 Ganged power switching Ganged overcurrent protection Port indicators bPwrOn2PwrGood 50 * 2 milli seconds bHubContrCurrent 100 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff Hub Port Status: Port 1: 0000.0100 power Port 2: 0000.0103 power enable connect Port 3: 0000.0103 power enable connect Port 4: 0000.0100 power Device Qualifier (for other device speed): bLength 10 bDescriptorType 6 bcdUSB 2.00 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 1 Single TT bMaxPacketSize0 64 bNumConfigurations 1 Device Status: 0x0001 Self Powered Bus 001 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 1.10 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 0 Full speed (or root) hub bMaxPacketSize0 64 idVendor 0x1d6b Linux Foundation idProduct 0x0001 1.1 root hub bcdDevice 2.06 iManufacturer 3 Linux 2.6.32-25-server ohci_hcd iProduct 2 OHCI Host Controller iSerial 1 0000:00:02.0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0002 1x 2 bytes bInterval 255 Hub Descriptor: bLength 11 bDescriptorType 41 nNbrPorts 10 wHubCharacteristic 0x0002 No power switching (usb 1.0) Ganged overcurrent protection bPwrOn2PwrGood 1 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 0x00 PortPwrCtrlMask 0xff 0xff Hub Port Status: Port 1: 0000.0100 power Port 2: 0000.0100 power Port 3: 0000.0100 power Port 4: 0000.0103 power enable connect Port 5: 0000.0100 power Port 6: 0000.0103 power enable connect Port 7: 0000.0100 power Port 8: 0000.0100 power Port 9: 0000.0100 power Port 10: 0000.0100 power Device Status: 0x0003 Self Powered Remote Wakeup Enabled

    Read the article

  • Diagramming Software for a Developer/Designer

    - by Craig Walker
    For a long time I've been looking for a good diagramming/vector-based drawing program that meets my needs as a developer. I'd like to:
    - Draw database diagrams
    - Draw flow charts
    - Draw object-modeling diagrams (UML being the standard)
    - Draw other free-form diagrams (basically boxes & arrows with the occasional clipart)
    - Draw mockups of user interfaces and web pages
    EDIT: I want good-looking electronic-format diagrams that I can show to 3rd parties, not just something for my own internal use. EDIT 2: I'm also looking for Windows software, although I'm toying with the idea of switching to Mac, so a really good Mac-only product might get me to switch. Basically I need a good vector graphics program (with decent grouping, connecting lines, and ideally auto-routing). I'd prefer a diagramming tool that can also be used for drawing (for the UI mockups) rather than a drawing tool that can also be used for diagrams. I've tried Visio on several occasions, and every time I've been disappointed. The interface always seems to get in my way at some point. It's pretty close to what I want, and the latest version (I got the trial from MS) seems to be better than previous ones in terms of usability, but I really don't want to plunk down that sort of cash for a mediocre product. I've tried Dia and Inkscape, and while both were initially promising and had the right price tag, I found them lacking in several ways (including some recurring bugs). I've toyed with getting Adobe Illustrator, but I've never used it before, and I have a feeling that it wouldn't handle the diagramming aspect very well, and I don't want to buy a copy just to find out it doesn't meet my needs. So far, the product that I've had the most success with is, sadly, OpenOffice Draw. It's free of course (which lowers my expectations and thus improves my view of it) and its usability is pretty good, but in the end I'd like something more suited to diagramming. I'm willing to spend real money (in the $500-$1K range) for a really good piece of software if it does everything I want it to. The front-runner is of course Visio, but I'm hoping for more. Does anybody have any recommendations? CONCLUSION: @dlamblin had the most informative post, but the part I gained the most from was his/her (and others') mention of OmniGraffle, not Gliffy. I gave Gliffy a try, and it seemed neat for occasional use, but since it's a Flash app (note: not AJAX as dlamblin mentioned) it's still a bit of a pain to use (no keyboard shortcuts for copy/paste was pretty much a deal breaker for me). I also tried SmartDraw, but it had three strikes against it:
    1. The trial period was only 7 days long.
    2. It used some nonstandard (and visually jarring) GUI widget toolkit for its UI. At the very least it makes me suspicious (how do I know it will actually work & support the standard Windows features?)
    3. It crashed on me early into my trial.
    OmniGraffle looks like exactly what I want... except that it's Mac-only (so I couldn't give it a try). However, it got good reviews from my Mac-owning coworker, and I hope to try it on a friend's Mac soon. If it's good enough then I might spring for a new MacBook.

    Read the article

  • Raytracing (LoS) on 3D hex-like tile maps

    - by herenvardo
    Greetings, I'm working on a game project that uses a 3D variant of hexagonal tile maps. Tiles are actually cubes, not hexes, but are laid out just like hexes (because a square can be turned into a cube to extrapolate from 2D to 3D, but there is no 3D version of a hex). Rather than a verbose description, here is an example of a 4x4x4 map. (I have highlighted an arbitrary tile (green) and its adjacent tiles (yellow) to help describe how the whole thing is supposed to work; but the adjacency functions are not the issue, that's already solved.) I have a struct type to represent tiles, and maps are represented as a 3D array of tiles (wrapped in a Map class to add some utility methods, but that's not very relevant). Each tile is supposed to represent a perfectly cubic space, and they are all exactly the same size. Also, the offset between adjacent "rows" is exactly half the size of a tile. That's enough context; my question is: given the coordinates of two points A and B, how can I generate a list of the tiles (or, rather, their coordinates) that a straight line between A and B would cross? That would later be used for a variety of purposes, such as determining line-of-sight, charge path legality, and so on. BTW, this may be useful: my maps use (0,0,0) as a reference position. The 'jagging' of the map can be defined as offsetting each tile ((y+z) mod 2) * tileSize/2.0 to the right from the position it'd have in a "sane" Cartesian system. For the non-jagged rows, that yields 0; for rows where (y+z) mod 2 is 1, it yields 0.5 tiles. I'm working with C# 4 targeting the .NET Framework 4.0, but I don't really need specific code, just the algorithm to solve this weird geometric/mathematical problem. I have been trying for several days to solve this, to no avail, and trying to draw the whole thing on paper to "visualize" it didn't help either :( . Thanks in advance for any answer.
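
    A common starting point for this kind of query is a grid traversal in the style of Amanatides & Woo's voxel walking: step from cell boundary to cell boundary along the segment, advancing one axis at a time. The sketch below is only that starting point: it assumes a plain axis-aligned grid of cubes, uses newer C# tuple/local-function syntax for brevity (not C# 4), and deliberately ignores the half-tile "jag" described above.

    using System;
    using System.Collections.Generic;

    static class GridRaycast
    {
        // Yields the (x, y, z) cell indices that the segment from a to b passes
        // through, for an axis-aligned grid of cubic cells of size tileSize.
        // Assumes both endpoints lie inside the grid.
        public static IEnumerable<(int X, int Y, int Z)> CellsOnSegment(
            (double X, double Y, double Z) a, (double X, double Y, double Z) b, double tileSize)
        {
            int x = (int)Math.Floor(a.X / tileSize), y = (int)Math.Floor(a.Y / tileSize), z = (int)Math.Floor(a.Z / tileSize);
            int xEnd = (int)Math.Floor(b.X / tileSize), yEnd = (int)Math.Floor(b.Y / tileSize), zEnd = (int)Math.Floor(b.Z / tileSize);

            double dx = b.X - a.X, dy = b.Y - a.Y, dz = b.Z - a.Z;
            int stepX = Math.Sign(dx), stepY = Math.Sign(dy), stepZ = Math.Sign(dz);

            // Parametric distance (0..1 along the segment) to the first cell boundary
            // on each axis, and the parametric distance between successive boundaries.
            double First(double start, int cell, int step, double d) =>
                step == 0 ? double.PositiveInfinity
                          : (step > 0 ? (cell + 1) * tileSize - start : start - cell * tileSize) / Math.Abs(d);
            double Delta(int step, double d) =>
                step == 0 ? double.PositiveInfinity : tileSize / Math.Abs(d);

            double tMaxX = First(a.X, x, stepX, dx), tMaxY = First(a.Y, y, stepY, dy), tMaxZ = First(a.Z, z, stepZ, dz);
            double tDeltaX = Delta(stepX, dx), tDeltaY = Delta(stepY, dy), tDeltaZ = Delta(stepZ, dz);

            yield return (x, y, z);
            while (x != xEnd || y != yEnd || z != zEnd)
            {
                // Advance whichever axis has the nearest boundary crossing.
                if (tMaxX <= tMaxY && tMaxX <= tMaxZ) { x += stepX; tMaxX += tDeltaX; }
                else if (tMaxY <= tMaxZ) { y += stepY; tMaxY += tDeltaY; }
                else { z += stepZ; tMaxZ += tDeltaZ; }
                yield return (x, y, z);
            }
        }
    }

    Handling the half-tile offset is the tricky part that this sketch leaves out: because the x boundaries differ from row to row, after each step in y or z the x-related values would need to be recomputed in the new row's shifted frame (subtracting the ((y+z) mod 2) * tileSize/2.0 offset before dividing by tileSize), rather than assuming one global set of x planes.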

    Read the article

  • ASP.net roles and Projects

    - by Zyphrax
    EDIT - Rewrote my original question to give a bit more information Background info At my work I'm working on a ASP.Net web application for our customers. In our implementation we use technologies like Forms authentication with MembershipProviders and RoleProviders. All went well until I ran into some difficulties with configuring the roles, because the roles aren't system-wide, but related to the customer accounts and projects. I can't name our exact setup/formula, because I think our company wouldn't approve that... What's a customer / project? Our company provides management information for our customers on a yearly (or other interval) basis. In our systems a customer/contract consists of: one Account: information about the Company per Account, one or more Products: the bundle of management information we'll provide per Product, one or more Measurements: a period of time, in which we gather and report the data Extranet site setup Eventually we want all customers to be able to access their management information with our online system. The extranet consists of two sites: Company site: provides an overview of Account information and the Products Measurement site: after selecting a Measurement, detailed information on that period of time The measurement site is the most interesting part of the extranet. We will create submodules for new overviews, reports, managing and maintaining resources that are important for the research. Our Visual Studio solution consists of a number of projects. One web application named Portal for the basis. The sites and modules are virtual directories within that application (makes it easier to share MasterPages among things). What kind of roles? The following users (read: roles) will be using the system: Admins: development users :) (not customer related, full access) Employees: employees of our company (not customer related, full access) Customer SuperUser: top level managers (full access to their account/measurement) Customer ContactPerson: primary contact (full access to their measurement(s)) Customer Manager: a department manager (limited access, specific data of a measurement) What about ASP.Net users? The system will have many ASP.Net users, let's focus on the customer users: Users are not shared between Accounts SuperUser X automatically has access to all (and new) measurements User Y could be Primary contact for Measurement 1, but have no role for Measurement 2 User Y could be Primary contact for Measurement 1, but have a Manager role for Measurement 2 The department managers are many individual users (per Measurement), if Manager Z had a login for Measurement 1, we would like to use that login again if he participates in Measurement 2. URL structure These are typical urls in our application: http://host/login - the login screen http://host/project - the account/product overview screen (measurement selection) http://host/project/1000 - measurement (id:1000) details http://host/project/1000/planning - planning overview (for primary contact/superuser) http://host/project/1000/reports - report downloads (manager department X can only access report X) We will also create a document url, where you can request a specific document by it's GUID. The system will have to check if the user has rights to the document. The document is related to a Measurement, the User or specific roles have specific rights to the document. What's the problem? (finally ;)) Roles aren't enough to determine what a user is allowed to see/access/download a specific item. 
It's not enough to say that a certain navigation item is accessible to Managers. When the user requests Measurement 1000, we have to check that the user not only has a Manager role, but a Manager role for Measurement 1000. Summarized: How can we limit users to their accounts/measurements? (Remember, superusers see all measurements; some managers see only specific measurements.) How can we apply roles at a product/measurement level? (User X could be primary contact for measurement 1, but just a manager for measurement 2.) How can we limit manager access to the reports screen, and only to their department's reports? All with the magic of ASP.NET classes, perhaps with a custom RoleProvider implementation. A similar Stack Overflow question/problem: http://stackoverflow.com/questions/1367483/asp-net-how-to-manage-users-with-different-types-of-roles
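    For what it's worth, the usual shape of a solution is to store role assignments scoped to a measurement, i.e. (user, measurement, role) triples, and check that pair on every request, with superusers short-circuiting the check. A language-neutral sketch of that check, written in Python purely for illustration (all names below are invented, not part of the question's codebase):

        # role assignments scoped to a measurement: (user_id, measurement_id) -> role
        ASSIGNMENTS = {
            ("userY", 1000): "PrimaryContact",
            ("userY", 2000): "Manager",
            ("managerZ", 1000): "Manager",
        }
        SUPERUSERS = {"superuserX"}  # superusers see every measurement of their account

        def has_access(user_id, measurement_id, allowed_roles):
            """True if the user may open this measurement's screen."""
            if user_id in SUPERUSERS:
                return True
            return ASSIGNMENTS.get((user_id, measurement_id)) in allowed_roles

        # e.g. the planning screen of measurement 1000 allows primary contacts only:
        print(has_access("userY", 1000, {"PrimaryContact"}))     # True
        print(has_access("managerZ", 1000, {"PrimaryContact"}))  # False

    In ASP.NET terms the same idea usually ends up as a custom RoleProvider, or an explicit permission check in the controller, keyed on both the user and the measurement id taken from the URL.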

    Read the article

  • MATLAB intersection of 2 surfaces

    - by caglarozdag
    Hi everyone, I consider myself a beginner in MATLAB, so bear with me if the answer to my question is an obvious one. Phi=0:pi/100:2*pi; Theta=0:pi/100:2*pi; [PHI,THETA]=meshgrid(Phi,Theta); R=(1 + cos(PHI).*cos(PHI)).*(1 + cos(THETA).*cos(THETA)); [X,Y,Z]=sph2cart(THETA,PHI,R); surf(X,Y,Z); %display hold on; x1=-4:.1:4; [X1,Y1] = meshgrid(x1); a=1.8; b=0; c=3; d=0; Z1=(d- a * X1 - b * Y1)/c; shading flat; surf(X1,Y1,Z1); I have written this code, which plots a 3D Cartesian plot of a plane intersecting a peanut-shaped object at an angle. I need to get the intersection of these in 2D (it's going to be the outline of a peanut, but a bit skewed since the intersection happens at an angle), but I don't know how. Thanks.
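    One common trick for this kind of problem: evaluate the plane's signed distance a*x + b*y + c*z - d at every point of the peanut's surface and extract the zero level set over the (THETA, PHI) parameter grid; that contour, mapped back through the parameterization, is the intersection curve. A sketch of the idea in Python/NumPy, purely illustrative (in MATLAB the analogous call would be a contour of the same field):

        import numpy as np
        import matplotlib.pyplot as plt

        phi_vals = np.arange(0, 2 * np.pi + 1e-9, np.pi / 100)
        theta_vals = np.arange(0, 2 * np.pi + 1e-9, np.pi / 100)
        PHI, THETA = np.meshgrid(phi_vals, theta_vals)
        R = (1 + np.cos(PHI) ** 2) * (1 + np.cos(THETA) ** 2)

        # sph2cart(THETA, PHI, R): azimuth = THETA, elevation = PHI
        X = R * np.cos(PHI) * np.cos(THETA)
        Y = R * np.cos(PHI) * np.sin(THETA)
        Z = R * np.sin(PHI)

        a, b, c, d = 1.8, 0.0, 3.0, 0.0        # the plane a*x + b*y + c*z = d
        F = a * X + b * Y + c * Z - d          # signed distance of each surface point

        plt.contour(THETA, PHI, F, levels=[0]) # zero level set in (theta, phi) space
        plt.xlabel("theta"); plt.ylabel("phi")
        plt.show()

    The vertices of that zero contour give (theta, phi) pairs; plugging them back into the same R/sph2cart formulas yields the 3D curve, which can then be projected onto the plane for the skewed 2D outline.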

    Read the article

  • NHibernate returning duplicate object in child collections when using Fetch

    - by UpTheCreek
    When doing a query like this (using NHibernate 2.1.2): ICriteria criteria = session.CreateCriteria<MyRootType>() .SetFetchMode("ChildCollection1", FetchMode.Eager) .SetFetchMode("ChildCollection2", FetchMode.Eager) .Add(Restrictions.IdEq(id)); I am getting multiple duplicate objects in some Cartesian fashion. E.g. if ChildCollection1 has 3 elements and ChildCollection2 has 2 elements, then I get results with each element in ChildCollection1 duplicated, and each element in ChildCollection2 triplicated! This was a bit of a WTF moment for me... So how do I do this correctly? Is using SetFetchMode like this only supported when specifying one collection? Am I just using it wrong? (I've seen some references to result transformers, but imagined this would be simpler.) Is this something that's different in NH3? Update: As per Felice's suggestion, I tried using the DistinctRootEntity transformer, but this is still returning duplicates. Code: ICriteria criteria = session.CreateCriteria<MyRootType>() .SetFetchMode("ChildCollection1", FetchMode.Eager) .SetFetchMode("ChildCollection2", FetchMode.Eager) .Add(Restrictions.IdEq(id)); criteria.SetResultTransformer(Transformers.DistinctRootEntity); return criteria.UniqueResult<MyRootType>();

    Read the article

  • Is the Cloud ready for an Enterprise Java web application? Seeking a JEE hosting advice.

    - by Jakub Holý
    Greetings to all the smart people around here! I'd like to ask whether it is feasible or a good idea at all to deploy a Java enterprise web application to a cloud such as Amazon EC2. More exactly, I'm looking for infrastructure options for an application that shall handle a few hundred users with long but neither CPU- nor memory-intensive sessions. I'm considering dedicated servers, virtual private servers (VPSs) and EC2. I've noticed that there is a project called JBoss Cloud, so people are working on enabling such a deployment; on the other hand, it doesn't seem to be mature yet, and I'm not sure that the cloud is ready for this kind of application, which differs from typical cloud-based applications like Twitter. Would you recommend deploying it to the cloud? What are the pros and cons? The application is a Java EE 5 web application whose main function is to enable users to compose their own customized Product by combining the available Parts. It uses stateless and stateful session beans and JPA for persistence of entities to an RDBMS, and fetches information about Parts from the company's inventory system via a web service. Aside from external users, it's also used by a few internal ones, who are authenticated against the company's LDAP. The application should handle around 300-400 concurrent users building their product, and should be reasonably scalable and available, though these qualities are only of medium importance at this stage. I've proposed an architecture consisting of a firewall (FW) and a load balancer supporting sticky sessions and HTTPS (in the cloud this would be replaced with EC2's Elastic Load Balancing service and a FW on the app servers; in a physical architecture the load balancer would be a hardware appliance), then two physical clustered application servers combined with web servers (so that if one fails, a user doesn't lose his/her long-built product) and finally a database server. The DB server would need a slave backup instance that can replace the master instance if it fails. This should provide reasonable availability and fault tolerance, and provide good scalability as long as a single RDBMS can keep up with the load, which should be OK for quite a while because most of the operations are done in memory using a stateful bean and only occasionally stored or retrieved from the DB, and the amount of data is low too. A problematic part could be the dependency on the remote inventory system web service, but with good caching of its outputs in the application it should be OK too. Unfortunately I have only a vague idea of the system resources (memory size, number and speed of CPUs/cores) that such an "average Java EE application" for a few hundred users needs. My rough and mostly unfounded estimate, based on actual Amazon offerings, is that 1.7GB and a single, 2-core "modern CPU" at around 2.5GHz (the High-CPU Medium Instance) should be sufficient for either of the two application servers (since we can handle higher load by provisioning more of them). Alternatively I would consider using the Large instance (64-bit, 7.5GB RAM, 2 cores at 1GHz). So my question is whether such a deployment to the cloud is technically and financially feasible, or whether dedicated/VPS servers would be a better option, and whether there are some real-world experiences with something similar. Thank you very much!
    /Jakub Holy PS: I've found the JBoss EAP in a Cloud Case Study, which shows that it is possible to deploy a real-world Java EE application to the EC2 cloud, but unfortunately there are no details regarding topology, instance types, or anything :-(
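    A rough sanity check on the 1.7 GB figure (every number below is an assumption, not a measurement): if an in-progress product configuration held in a stateful session bean costs on the order of 1-2 MB, then 400 concurrent sessions need well under 1 GB of heap on top of the application server's own footprint, which makes the High-CPU Medium instance at least plausible before any load testing. The arithmetic, sketched:

        concurrent_users = 400
        session_mb = 2               # assumed per-session footprint of the stateful bean
        appserver_baseline_mb = 512  # assumed JVM + JBoss baseline

        heap_needed_mb = concurrent_users * session_mb + appserver_baseline_mb
        instance_ram_mb = int(1.7 * 1024)  # EC2 High-CPU Medium
        print(heap_needed_mb, "MB estimated vs", instance_ram_mb, "MB available")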

    Read the article

  • How to optimize this JSON/JQuery/Javascript function in IE7/IE8?

    - by melaos
    hi guys, i'm using this function to parse this json data but i find the function to be really slow in IE7 and slightly slow in IE8. basically the first listbox generate the main product list, and upon selection of the main list, it will populate the second list. this is my data: [{"ProductCategoryId":209,"ProductCategoryName":"X-Fi","ProductSubCategoryId":668,"ProductSubCategoryName":"External Solutions","ProductId":15913,"ProductName":"Creative Xmod","ProductServiceLifeId":1},{"ProductCategoryId":209,"ProductCategoryName":"X-Fi","ProductSubCategoryId":668,"ProductSubCategoryName":"External Solutions","ProductId":15913,"ProductName":"Creative Xmod","ProductServiceLifeId":1},{"ProductCategoryId":209,"ProductCategoryName":"X-Fi","ProductSubCategoryId":668,"ProductSubCategoryName":"External Solutions","ProductId":18094,"ProductName":"Sound Blaster Wireless Receiver","ProductServiceLifeId":1},{"ProductCategoryId":209,"ProductCategoryName":"X-Fi","ProductSubCategoryId":668,"ProductSubCategoryName":"External Solutions","ProductId":16185,"ProductName":"Xdock Wireless","ProductServiceLifeId":1},{"ProductCategoryId":209,"ProductCategoryName":"X-Fi","ProductSubCategoryId":668,"ProductSubCategoryName":"External Solutions","ProductId":16186,"ProductName":"Xmod Wireless","ProductServiceLifeId":1}] and these are the functions that i'm using: //Three Product Panes function function populateMainPane() { $.getJSON('/Home/ThreePaneProductData/', function(data) { products = data; alert(JSON.stringify(products)); var prodCategory = {}; for (i = 0; i < products.length; i++) { prodCategory[products[i].ProductCategoryId] = products[i].ProductCategoryName; } //end for //take only unique product category to be used var id = 0; for (id in prodCategory) { if (prodCategory.hasOwnProperty(id)) { $(".LBox1").append("<option value='" + id + "'>" + prodCategory[id] + "</option>"); //alert(prodCategory[id]); } } var url = document.location.href; var parms = url.substring(url.indexOf("?") + 1).split("&"); for (var i = 0; i < parms.length; i++) { var parm = parms[i].split("="); if (parm[0].toLowerCase() == "pid") { $(".PanelProductReg").show(); var nProductIds = parm[1].split(","); for (var k = 0; k < nProductIds.length; k++) { var nProductId = parseInt(nProductIds[k], 10); for (var j = 0; j < products.length; j++) { if (nProductId == parseInt(products[j].ProductId, 10)) { addProductRow(nProductId, products[j].ProductName); j = products.length; } } //end for } } } }); } //end function function populateSubCategoryPane() { var subCategory = {}; for (var i = 0; i < products.length; i++) { if (products[i].ProductCategoryId == $('.LBox1').val()) subCategory[products[i].ProductSubCategoryId] = products[i].ProductSubCategoryName; } //end for //clear off the list box first $(".LBox2").html(""); var id = 0; for (id in subCategory) { if (subCategory.hasOwnProperty(id)) { $(".LBox2").append("<option value='" + id + "'>" + subCategory[id] + "</option>"); //alert(prodCategory[id]); } } } //end function is there anything i can do to optimize this or is this a known browser issue?

    Read the article

  • How to get image's coordinate on JPanel

    - by Jessy
    This question is related to my previous question http://stackoverflow.com/questions/2376027/how-to-generate-cartesian-coordinate-x-y-from-gridbaglayout I have successfully get the coordinate of each pictures, however when I checked the coordinate through (System.out.println) and the placement of the images on the screen, it seems to be wrong. e.g. if on the screen it was obvious that the x point of the first picture is on cell 2 which is on coordinate of 20, but the program shows x=1. Here is part of the code: public Grid (){ setPreferredSize(new Dimension(600,600)); .... setLayout(new GridBagLayout()); GridBagConstraints gc = new GridBagConstraints(); gc.weightx = 1d; gc.weighty = 1d; gc.insets = new Insets(0, 0, 0, 0);//top, left, bottom, and right gc.fill = GridBagConstraints.BOTH; JLabel[][] label = new JLabel[ROWS][COLS]; Random rand = new Random(); // fill the panel with labels for (int i=0;i<IMAGES;i++){ ImageIcon icon = createImageIcon("myPics.jpg"); int r, c; do{ //pick random cell which is empty r = (int)Math.floor(Math.random() * ROWS); c = (int)Math.floor(Math.random() * COLS); } while (label[r][c]!=null); //randomly scale the images int x = rand.nextInt(50)+30; int y = rand.nextInt(50)+30; Image image = icon.getImage().getScaledInstance(x,y, Image.SCALE_SMOOTH); icon.setImage(image); JLabel lbl = new JLabel(icon); // Instantiate GUI components gc.gridx = r; gc.gridy = c; add(lbl, gc); //add(component, constraintObj); label[r][c] = lbl; } I checked the coordinate through this code: Component[] components = getComponents(); for (Component component : components) { System.out.println(component.getBounds()); }

    Read the article

  • NHibernate CreateSqlQuery and object graph

    - by magellings
    Hello, I'm a newbie to NHibernate. I'd like to make one SQL query to the database using joins to my three tables. I have an Application with many Roles with many Users. I'm trying to get NHibernate to properly form the object graph starting with the Application object. For example, if I have 10 application records, I want 10 application objects, and those objects have their roles, which have their users. What I'm getting, however, resembles a Cartesian product in which I have as many Application objects as total User records. I've looked into this quite a bit and am not sure if it is possible to form the application hierarchy correctly. I can only get the flattened objects to come back. It seems "maybe" possible, as in my research I've read about "grouped joins" and "hierarchical output" with an upcoming LINQ to NHibernate release. Again, though, I'm a newbie. [Update: Based on Frans' comment in Ayende's post here, I'm guessing what I want to do is not possible: http://ayende.com/Blog/archive/2008/12/01/solving-the-select-n1-problem.aspx ] Thanks for your time in advance. Session.CreateSQLQuery(@"SELECT a.ID, a.InternalName, r.ID, r.ApplicationID, r.Name, u.UserID, u.RoleID FROM dbo.[Application] a JOIN dbo.[Roles] r ON a.ID = r.ApplicationID JOIN dbo.[UserRoleXRef] u ON u.RoleID = r.ID") .AddEntity("app", typeof(RightsBasedSecurityApplication)) .AddJoin("role", "app.Roles") .AddJoin("user", "role.RightsUsers") .List<RightsBasedSecurityApplication>().AsQueryable();

    Read the article

  • How to include the login form on the Home index page in MVC

    - by Bernard Larouche
    Hi guys, I really need your help with this. I am relatively new to programming and I need help with something that might be easy for an experienced programmer. I would like to get the login form that we get for free in an MVC application on the left sidebar of my Home index page, instead of the usual Account/Login page. I am facing some problems. First, I need a product object to be displayed on my Home Index page as well. What I did is add a product object to the LogOnModel that they provide in the AccountModels class, and I created a UserControl (partial view) copying the content of the LogOn.aspx view. Now my Home index.aspx as well as my partial view inherits the LogOnModel class. I can see the login form on my Home Index page as well as my product object, BUT the login form is never empty. The last username and password always appear there. I know I must have forgotten something, or done something wrong, or the way I did it is completely wrong! Please could you give me some advice? Thanks. <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<CoderForTradersSite.Models.LogOnModel>" %> <h4>Login Form</h4> <p> Please enter your username and password. <%= Html.ActionLink("Register", "Register") %> if you don't have an account. </p> <% using (Html.BeginForm()) { %> <%= Html.ValidationSummary(true, "Login was unsuccessful. Please correct the errors and try again.") %> <div> <fieldset> <legend>Account Information</legend> <div class="editor-label"> <%= Html.LabelFor(m => m.UserName) %> </div> <div class="editor-field"> <%= Html.TextBoxFor(m => m.UserName) %> <%= Html.ValidationMessageFor(m => m.UserName) %> </div> <div class="editor-label"> <%= Html.LabelFor(m => m.Password) %> </div> <div class="editor-field"> <%= Html.PasswordFor(m => m.Password) %> <%= Html.ValidationMessageFor(m => m.Password) %> </div> <div class="editor-label"> <%= Html.CheckBoxFor(m => m.RememberMe) %> <%= Html.LabelFor(m => m.RememberMe) %> </div> <p> <input type="submit" value="Log On" /> </p> </fieldset> </div> <% } %>

    Read the article

  • A ResetBindings on a BindingSource of a Grid also resets ComboBox

    - by Tetsuo
    Hi there, I have a DataGridView with a BindingSource of products. This products have an enum (Producer). For the most text fields (to edit the product) below the DataGridView I have a method RefreshProduct which does a ResetBindings in the end to refresh the DataGridView. There is a ComboBox (cboProducer), too. If I run over the _orderBs.ResetBindings(false) it will reset my cboProducer outside the DataGridView, too. Could you please help me to avoid this? Here follows some code; maybe it is then better to understand. public partial class SelectProducts : UserControl { private AutoCompleteStringCollection _productCollection; private ProductBL _productBL; private OrderBL _orderBL; private SortableBindingList<ProductBE> _listProducts; private ProductBE _selectedProduct; private OrderBE _order; BindingSource _orderBs = new BindingSource(); public SelectProducts() { InitializeComponent(); if (_productBL == null) _productBL = new ProductBL(); if (_orderBL == null) _orderBL = new OrderBL(); if (_productCollection == null) _productCollection = new AutoCompleteStringCollection(); if (_order == null) _order = new OrderBE(); if (_listProducts == null) { _listProducts = _order.ProductList; _orderBs.DataSource = _order; grdOrder.DataSource = _orderBs; grdOrder.DataMember = "ProductList"; } } private void cmdGetProduct_Click(object sender, EventArgs e) { ProductBE product = _productBL.Load(txtProductNumber.Text); _listProducts.Add(product); _orderBs.ResetBindings(false); } private void grdOrder_SelectionChanged(object sender, EventArgs e) { if (grdOrder.SelectedRows.Count > 0) { _selectedProduct = (ProductBE)((DataGridView)(sender)).CurrentRow.DataBoundItem; if (_selectedProduct != null) { txtArticleNumber.Text = _selectedProduct.Article; txtPrice.Text = _selectedProduct.Price.ToString("C"); txtProducerNew.Text = _selectedProduct.ProducerText; cboProducer.DataSource = Enum.GetValues(typeof(Producer)); cboProducer.SelectedItem = _selectedProduct.Producer; } } } private void txtProducerNew_Leave(object sender, EventArgs e) { string property = CommonMethods.GetPropertyName(() => new ProductBE().ProducerText); RefreshProduct(((TextBoxBase)sender).Text, property); } private void RefreshProduct(object value, string property) { if (_selectedProduct != null) { double valueOfDouble; if (double.TryParse(value.ToString(), out valueOfDouble)) { value = valueOfDouble; } Type type = _selectedProduct.GetType(); PropertyInfo info = type.GetProperty(property); if (info.PropertyType.BaseType == typeof(Enum)) { value = Enum.Parse(info.PropertyType, value.ToString()); } try { Convert.ChangeType(value, info.PropertyType, new CultureInfo("de-DE")); info.SetValue(_selectedProduct, value, null); } catch (Exception ex) { throw new WrongFormatException("\"" + value.ToString() + "\" is not a valid value.", ex); } var produktFromList = _listProducts.Single(p => p.Position == _selectedProduct.Position); info.SetValue(produktFromList, value, null); _orderBs.ResetBindings(false); } } private void cboProducer_SelectedIndexChanged(object sender, EventArgs e) { var selectedIndex = ((ComboBox)(sender)).SelectedIndex; switch ((Producer)selectedIndex) { case Producer.ABC: txtProducerNew.Text = Constants.ABC; break; case Producer.DEF: txtProducerNew.Text = Constants.DEF; break; case Producer.GHI: txtProducerNew.Text = Constants.GHI; break; case Producer.Another: txtProducerNew.Text = String.Empty; break; default: break; } string property = CommonMethods.GetPropertyName(() => new ProductBE().Producer); RefreshProduct(selectedIndex, 
property); } }

    Read the article

  • Advice on Factory Method

    - by heath
    Using PHP 5.2, I'm trying to use a factory to return a service to the controller. My request URI would be of the format www.mydomain.com/service/method/param1/param2/etc. My controller would then call a service factory using the token sent in the URI. From what I've seen, there are two main routes I could go with my factory. Single method: class ServiceFactory { public static function getInstance($token) { switch($token) { case 'location': return new StaticPageTemplateService('location'); break; case 'product': return new DynamicPageTemplateService('product'); break; case 'user': return new UserService(); break; default: return new StaticPageTemplateService($token); } } } or multiple methods: class ServiceFactory { public static function getLocationService() { return new StaticPageTemplateService('location'); } public static function getProductService() { return new DynamicPageTemplateService('product'); } public static function getUserService() { return new UserService(); } public static function getDefaultService($token) { return new StaticPageTemplateService($token); } } So, given this, I will have a handful of generic services to which I will pass that token (for example, StaticPageTemplateService and DynamicPageTemplateService) and which will probably implement another factory method just like this to grab templates, domain objects, etc. And some will be specific services (for example, UserService) that are 1:1 to that token and not reused. So this seems to be an OK approach (please give suggestions if it is not) for a small number of services. But what about when, over time, my site grows and I end up with hundreds of possibilities? This no longer seems like a good approach. Am I just way off to begin with, or is there another design pattern that would be a better fit? Thanks. UPDATE: @JSprang - the token is actually sent in the URI, e.g. mydomain.com/location would want a service specific to location, and mydomain.com/news would want a service specific to news. Now, for a lot of these, the service will be generic. For instance, a lot of pages will call a StaticTemplatePageService to which the token is passed. That service in turn will grab the "location" template or "links" template and just spit it back out. Some will need DynamicTemplatePageService, to which the token gets passed in, like "news", and that service will grab a NewsDomainObject, determine how to present it and spit that back out. Others, like "user", will be specific to a UserService which will have methods like Login, Logout, etc. So basically, the token will be used to determine which service is needed AND, if it is a generic service, that token will be passed to that service. Maybe token isn't the correct terminology, but I hope you get the purpose. I wanted to use the factory so I can easily swap out which Service I need in case my needs change. I just worry that after the site grows larger (both pages and functionality) the factory will become rather bloated. But I'm starting to feel like I just can't get away from storing the mappings in an array (like Stephen's solution). That just doesn't feel OOP to me, and I was hoping to find something more elegant.
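    One middle ground between the ever-growing switch and a method per service is a registry the factory consults, with a default fallback, so new mappings become data instead of new code paths. A sketch of that idea, written in Python only to show the shape (the service class names come from the question; the registry and the stub classes are invented):

        class StaticPageTemplateService:
            def __init__(self, token): self.token = token

        class DynamicPageTemplateService:
            def __init__(self, token): self.token = token

        class UserService:
            pass

        class ServiceFactory:
            # token -> callable; anything absent falls through to the default below
            _registry = {
                "location": lambda: StaticPageTemplateService("location"),
                "product":  lambda: DynamicPageTemplateService("product"),
                "user":     lambda: UserService(),
            }

            @classmethod
            def get_instance(cls, token):
                make = cls._registry.get(token)
                return make() if make else StaticPageTemplateService(token)

            @classmethod
            def register(cls, token, make):
                """Lets a new service register itself without editing the factory."""
                cls._registry[token] = make

    The registry is still "mappings in an array", but because services can register themselves, the factory stays the same size no matter how many tokens the site eventually handles.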

    Read the article

  • How do I left join tables in unidirectional many-to-one in Hibernate?

    - by jbarz
    I'm piggy-backing off of http://stackoverflow.com/questions/2368195/how-to-join-tables-in-unidirectional-many-to-one-condition. If you have two classes: class A { @Id public Long id; } class B { @Id public Long id; @ManyToOne @JoinColumn(name = "parent_id", referencedColumnName = "id") public A parent; } B - A is a many to one relationship. I understand that I could add a Collection of Bs to A however I do not want that association. So my actual question is, Is there an HQL or Criteria way of creating the SQL query: select * from A left join B on (b.parent_id = a.id) This will retrieve all A records with a Cartesian product of each B record that references A and will include A records that have no B referencing them. If you use: from A a, B b where b.a = a then it is an inner join and you do not receive the A records that do not have a B referencing them. I have not found a good way of doing this without two queries so anything less than that would be great. Thanks.

    Read the article

  • Matlab matrix translation and rotation multiple times

    - by pinnacler
    I have a map of individual trees from a forest stored as x,y points in a matrix. I call it fixedPositions. It's Cartesian, and (0,0) is the origin. I would like 0/360 degrees to be the top of the screen and 90 degrees to be to the right. Given a velocity and a heading, e.g. 0.5 m/s and 60 degrees (the 2 o'clock position on a watch), how do I rotate those x,y points so that the new origin is centered at (.5cos(60),.5sin(60)) and 60 degrees is now at the top of the screen? Then, if I were to give you another heading and speed, e.g. 0 degrees and 2 m/s, it should calculate it from the last point, not the original fixedPositions origin. I've wasted my day trying to figure this out. I wish I had taken matrix algebra, but I'm at a loss. I tried doing cos(30) and even that wouldn't compute correctly, which after an hour I realized was because the trig functions work in radians.
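    A sketch of the bookkeeping, in Python/NumPy purely for illustration (the same two formulas translate directly to MATLAB): with 0 degrees at the top of the screen and 90 degrees to the right, a heading h moves the observer by speed*(sin h, cos h); to draw the map relative to the observer with the heading pointing up, subtract the observer's position and project onto the observer's right/forward axes. The function names and the one-second time step are assumptions.

        import numpy as np

        def step(position, heading_deg, speed, dt=1.0):
            """One dead-reckoning update: heading 0 = top of screen (+y), 90 = right (+x)."""
            h = np.radians(heading_deg)
            return position + speed * dt * np.array([np.sin(h), np.cos(h)])

        def world_to_view(points, position, heading_deg):
            """Express map points relative to the observer, with the heading pointing up."""
            h = np.radians(heading_deg)
            right = np.array([np.cos(h), -np.sin(h)])
            forward = np.array([np.sin(h), np.cos(h)])
            d = points - position
            return np.column_stack((d @ right, d @ forward))

        fixed_positions = np.array([[1.0, 2.0], [-3.0, 0.5]])  # stand-in tree map
        pos = step(np.zeros(2), heading_deg=60, speed=0.5)     # 0.5 m/s at 60 degrees
        pos = step(pos, heading_deg=0, speed=2.0)              # next leg starts from there
        view = world_to_view(fixed_positions, pos, heading_deg=0)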

    Read the article

  • How to refresh ListAdapter/ListView in android

    - by user2463990
    I have a database with 2 tables, a custom layout for my ListView, and I'm using a ListAdapter to display all the data in the ListView - this works fine. But I have a problem when I want to display something else from my database in the ListView. The data is just appended to my ListView - I don't want this! How do I refresh/update the ListAdapter? This is my MainActivity: ListAdapter adapter; protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); search = (EditText) findViewById(R.id.search); lista = (ListView) findViewById(android.R.id.list); sqlite = new Sqlite(MainActivity.this); //When the app starts, the listView is populated with data adapter = new SimpleAdapter(this, sqlite.getAllData(), R.layout.lv_layout, new String[]{"ime","naziv","kolicina","adresa"}, new int[]{R.id.kupac,R.id.proizvod,R.id.kolicina,R.id.adresa}); setListAdapter(adapter); search.addTextChangedListener(new TextWatcher() { @Override public void onTextChanged(CharSequence s, int start, int before, int count) { // TODO Auto-generated method stub String tekst = s.toString(); ArrayList<HashMap<String, String>> rez = Sqlite.getFilterData(tekst); adapter = new SimpleAdapter(MainActivity.this, rez, R.layout.lv_layout, new String[]{"ime","naziv","kolicina","adresa"}, new int[]{R.id.kupac,R.id.proizvod,R.id.kolicina,R.id.adresa}); lista.setAdapter(adapter); } } The problem is when the method onTextChanged is called: I get all the data, but the data is just appended to my ListView. How do I fix it? And this is my Sqlite class with the needed method: ArrayList<HashMap<String,String>> results_criteria = new ArrayList<HashMap<String,String>>(); public ArrayList<HashMap<String, String>> getFilterData(String criteria) { String ime_kupca = ""; String product = ""; String adress = ""; String kolicina = ""; SQLiteDatabase myDb = this.getReadableDatabase(); Cursor cursor = myDb.rawQuery("SELECT " + DB_COLUMN_IME + "," + DB_COLUMN_NAZIV + "," + DB_COLUMN_KOLICINA + "," + DB_COLUMN_ADRESA + " FROM " + DB_TABLE1_NAME + "," + DB_TABLE2_NAME + " WHERE kupac.ID = proizvod.ID AND " + DB_COLUMN_IME + " LIKE '%" + criteria + "%'", null); if(cursor != null){ if(cursor.moveToFirst()){ do{ HashMap<String,String> map = new HashMap<String,String>(); ime_kupca = cursor.getString(cursor.getColumnIndex(DB_COLUMN_IME)); product = cursor.getString(cursor.getColumnIndex(DB_COLUMN_NAZIV)); kolicina = cursor.getString(cursor.getColumnIndex(DB_COLUMN_KOLICINA)); adress = cursor.getString(cursor.getColumnIndex(DB_COLUMN_ADRESA)); map.put("ime", ime_kupca); map.put("naziv", product); map.put("kolicina", kolicina); map.put("adresa", adress); results_criteria.add(map); }while(cursor.moveToNext()); //cursor.close(); } cursor.close(); } Log.i("Rez", "" + results_criteria); return results_criteria;

    Read the article

  • How to fix this simple SQL query?

    - by morpheous
    I have a database with three tables: user_table, country_table and city_table. I want to write ANSI SQL which will allow me to fetch all the user data (i.e. user details including the name of the country of the last school and the name of the city they live in now). The problem I am having is that I have to use a self join, and I am getting slightly confused. The schema is shown below: CREATE TABLE user_table (id int, first_name varchar(16), last_school_country_id int, city_id int); CREATE TABLE country_table (id int, name varchar(32)); CREATE TABLE city_table (id int, country_id int, name varchar(32)); This is the query I have come up with so far, but the results are wrong, and sometimes the DB engine (MySQL) asks me if I want to show all [HUGE NUMBER HERE] results - which makes me suspect that I am unintentionally creating a Cartesian product somewhere. Can someone explain what is wrong with this SQL statement, and what I need to do to fix it? SELECT usr.id AS id, usr.first_name, ctry1.name as loc_country_name, ctry2.name as school_country_name, city.name as loc_city_name FROM user_table usr, country_table ctry1, country_table ctry2, city_table city WHERE usr.last_school_country_id=ctry2.id AND usr.city_id=city.id AND city.country_id=ctry1.id AND ctry1.id=ctry2.id;
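    For what it's worth, the last predicate (ctry1.id=ctry2.id) looks like the likely culprit: it forces the school country and the current city's country to be the same row, which the description does not require. A sketch of the query without it, using explicit JOIN syntax and shown from Python with SQLite only as a stand-in driver (the fix itself is my reading of the intent and worth checking against the expected results):

        import sqlite3  # stand-in for any DB-API driver; MySQL would use a different module

        QUERY = """
        SELECT usr.id           AS id,
               usr.first_name,
               ctry_loc.name    AS loc_country_name,
               ctry_school.name AS school_country_name,
               city.name        AS loc_city_name
        FROM   user_table    usr
        JOIN   country_table ctry_school ON usr.last_school_country_id = ctry_school.id
        JOIN   city_table    city        ON usr.city_id = city.id
        JOIN   country_table ctry_loc    ON city.country_id = ctry_loc.id
        """

        conn = sqlite3.connect("example.db")  # assumed local copy of the schema
        for row in conn.execute(QUERY):
            print(row)

    The self join on country_table is still there (two aliases of the same table); dropping the equality between the two aliases just stops the query from insisting that people live in the country where they last went to school.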

    Read the article

  • Uniform grid Rows and Columns

    - by Carlo
    I'm trying to make a grid based on a UniformGrid to show the coordinates of each cell, and I want to show the values on the X and Y axes like so: _A_ _B_ _C_ _D_ 1 |___|___|___|___| 2 |___|___|___|___| 3 |___|___|___|___| 4 |___|___|___|___| Anyway, in order to do that I need to know the number of columns and rows in the Uniform grid, and I tried overriding the 3 most basic methods where the arrangement / drawing happens, but the columns and rows in there are 0, even though I have some controls in my grid. What method can I override so my Cartesian grid knows how many columns and rows it has? C#: public class CartesianGrid : UniformGrid { protected override Size MeasureOverride(Size constraint) { Size size = base.MeasureOverride(constraint); int computedColumns = this.Columns; // always 0 int computedRows = this.Rows; // always 0 return size; } protected override Size ArrangeOverride(Size arrangeSize) { Size size = base.ArrangeOverride(arrangeSize); int computedColumns = this.Columns; // always 0 int computedRows = this.Rows; // always 0 return size; } protected override void OnRender(DrawingContext dc) { int computedColumns = this.Columns; // always 0 int computedRows = this.Rows; // always 0 base.OnRender(dc); } } XAML: <local:CartesianGrid> <Label Content="Hello" /> <Label Content="Hello" /> <Label Content="Hello" /> <Label Content="Hello" /> <Label Content="Hello" /> <Label Content="Hello" /> </local:CartesianGrid> Any help is greatly appreciated. Thanks!
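    One detail worth knowing here: when Rows and Columns are left at 0, UniformGrid keeps the values it computes during layout in private fields and never writes them back to the Rows/Columns properties, which is why they read as 0 inside MeasureOverride and ArrangeOverride. You can recompute them yourself from the number of visible children; the sketch below (Python, purely illustrative) follows my reading of the layout algorithm - a near-square grid when both are 0 - and is worth verifying against your WPF version.

        import math

        def computed_grid(child_count, rows=0, columns=0):
            """Approximate UniformGrid's computed rows/columns when the properties are 0."""
            n = max(child_count, 1)
            if rows == 0 and columns == 0:
                rows = math.isqrt(n)
                if rows * rows < n:
                    rows += 1
                columns = rows            # both-zero case: near-square layout
            elif rows == 0:
                rows = math.ceil(n / columns)
            elif columns == 0:
                columns = math.ceil(n / rows)
            return rows, columns

        print(computed_grid(6))  # (3, 3) under this reading, for the six labels in the XAML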

    Read the article

  • Vector deltas and moving in unknown areas

    - by dekz
    Hi all, I'm in need of a little math help that I can't seem to find the answer to; any links to documentation would be greatly appreciated. Here's my situation: I have no idea where I am in this maze, but I need to move around and find my way back to the start. I was thinking of implementing a waypoint list of places I've been, offset from my start at (0,0). This is a 2D Cartesian plane. I've been given 2 properties: my translation speed from 0 to 1 and my rotation speed from -1 to 1, where -1 is hard left and +1 is hard right. These are speeds, not angles, so that's where my problem lies. If I'm given 0 as a translation speed and 0.2 as a rotation speed, I will continually turn to my right slowly. How do I figure out the offsets given these 2 variables? I can store them every time I take a 'step'. I just need to figure out the offsets in x and y terms given the translation and rotation speeds, and the rotation needed to get to those points. Any help is appreciated.
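    If the rotation speed is read as a turn rate (so the -1..1 range maps to some maximum angular speed) and the translation speed as forward speed, the offsets fall out of integrating heading first and position second at each step. A sketch under those assumptions (the max-turn-rate constant is invented and would have to come from the actual robot/game spec):

        import math

        MAX_TURN_RATE = math.pi / 2  # assumed: rotation speed of 1.0 = 90 degrees per second

        def step(x, y, heading, translation, rotation, dt=1.0):
            """Integrate one dead-reckoning step; heading in radians, 0 = facing +y."""
            heading += rotation * MAX_TURN_RATE * dt
            x += translation * math.sin(heading) * dt
            y += translation * math.cos(heading) * dt
            return x, y, heading

        # record a waypoint each step so the path back to (0, 0) can be replayed in reverse
        waypoints = [(0.0, 0.0)]
        x, y, heading = 0.0, 0.0, 0.0
        for translation, rotation in [(0.0, 0.2), (1.0, 0.0), (0.5, -0.3)]:
            x, y, heading = step(x, y, heading, translation, rotation)
            waypoints.append((x, y))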

    Read the article

  • Infinite sharing system (PHP/MySQLi)

    - by Toine Lille
    I'm working on a discount system for customers who share a product and bring in new customers. Each unique visit = $0.05 off, each new customer = $0.50 off (it's a cheap product, so yeah, no big numbers). When a new customer shares the site, the customer initially responsible for that new customer (if any) gets half of the new customer's discount as well. The initial customer would get a fourth for the next level and the new customer half of that, etc., creating a tree or pyramid that could be infinitely deep. Initial customer ($1.35 discount: 2 new+3 visits + half of 1 new+2 visits) Visitor ($0) Visitor ($0) New customer ($0.60) Visitor ($0) Visitor ($0) Newer customer ($0) New customer ($0) Visitor ($0) The customers are saved along with their IP addresses (bin2hex(inet_pton)) in a database table (customers) with info like a unique id, e-mail address and the first date/time they purchased a product (= time of registration). The shares are saved in a separate table within the same database (sharing). Each unique IP address that visits the site creates a new row featuring the IP address (also saved as bin2hex(inet_pton)), the id of the customer who shared it and the date/time of the visit. Sharing goes via URL, featuring a GET parameter containing the customer's id. Visits and new customers overlap, as visits will always occur before the new customer does. That's fine. The date/times are used just to make it a little more secure (I also use the IP along with cookies to see if people cheat the system). If an IP is already in the sharing or customer tables, it does not count and will not create a new entry. Now the problem is: how do I make the infinite depth work and apply the different values at each level? That's all I'd need to know. It needs to calculate the discount for each customer separately, but also allow for monitoring altogether (though that's just a matter of passing all IDs through it). I figured I'd start (after the database connection) with $stmt = $con->prepare('SELECT ip,datetime FROM sharing WHERE sender=?'); $stmt->bind_param('i',$customerid); $stmt->execute(); $stmt->store_result(); $discount = $discount + ($stmt->num_rows * 0.05); $stmt->bind_result($ip,$timeofsharing); to translate all the visits to $0.05 of discount each. To check for the new customers that came from these visits, I wrote the following: while ($stmt->fetch()) { $stmt2 = $con->prepare("SELECT datetime FROM users WHERE ip=?"); $stmt2->bind_param('s',$ip); $stmt2->execute(); $stmt2->store_result(); $stmt2->bind_result($timeofpurchase); Followed by a little more security comparing the datetimes: while ($stmt2->fetch()) { if (strtotime($timeofpurchase) < strtotime($timeofsharing)) { $discount = $discount + 0.50; } But this is just for the initial customer's direct results. If I want to check the next level, I'd basically have to nest the exact same check and loop inside itself, checking for each new customer the newer customers they brought to the site, and then again for the level after that, etc. What should I do? / Where do I go from here? / What would be the correct practice for this? Thanks!
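    A sketch of the recursive part, in Python purely to show the shape of the algorithm (the data-access callbacks are placeholders for the two prepared statements above): each customer's own discount is visits*0.05 + signups*0.50, and whatever a referred customer earns is passed up at half value, which automatically produces the 1/2, 1/4, 1/8... weighting per level.

        VISIT_VALUE = 0.05
        SIGNUP_VALUE = 0.50

        def discount_for(customer_id, get_visit_count, get_signups, depth=0, max_depth=20):
            """Discount earned by customer_id, including halved contributions from referrals.

            get_visit_count(id) -> number of unique shared-link visits credited to id
            get_signups(id)     -> ids of customers who purchased via id's link
            max_depth guards against cycles or runaway recursion.
            """
            if depth > max_depth:
                return 0.0
            signups = get_signups(customer_id)
            own = get_visit_count(customer_id) * VISIT_VALUE + len(signups) * SIGNUP_VALUE
            passed_up = sum(
                discount_for(c, get_visit_count, get_signups, depth + 1, max_depth)
                for c in signups
            )
            return own + passed_up / 2.0

    In PHP the same walk works with the two queries already shown; the only structural change is that the loop becomes a function that calls itself for every new customer it finds.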

    Read the article

< Previous Page | 156 157 158 159 160 161 162 163 164 165 166 167  | Next Page >