Search Results

Search found 11246 results on 450 pages for 'power supply unit'.

Page 31/450 | < Previous Page | 27 28 29 30 31 32 33 34 35 36 37 38  | Next Page >

  • .net mvc2 custom HtmlHelper extension unit testing

    - by alex
    My goal is to be able to unit test some custom HtmlHelper extensions - which use RenderPartial internally. http://ox.no/posts/mocking-htmlhelper-in-asp-net-mvc-2-and-3-using-moq I've tried using the method above to mock the HtmlHelper. However, I'm running into a null value exception: "Parameter name: view". Does anyone have any ideas? Thanks. Below is the idea of the code: [TestMethod] public void TestMethod1() { var helper = CreateHtmlHelper(new ViewDataDictionary()); helper.RenderPartial("Test"); // supposedly this line is within a method to be tested Assert.AreEqual("test", helper.ViewContext.Writer.ToString()); } public static HtmlHelper CreateHtmlHelper(ViewDataDictionary vd) { Mock<ViewContext> mockViewContext = new Mock<ViewContext>( new ControllerContext( new Mock<HttpContextBase>().Object, new RouteData(), new Mock<ControllerBase>().Object), new Mock<IView>().Object, vd, new TempDataDictionary(), new StringWriter()); var mockViewDataContainer = new Mock<IViewDataContainer>(); mockViewDataContainer.Setup(v => v.ViewData) .Returns(vd); return new HtmlHelper(mockViewContext.Object, mockViewDataContainer.Object); }
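
    The "Parameter name: view" message is the kind of ArgumentNullException RenderPartial can raise when the mocked ViewContext hands it nulls, and RenderPartial also resolves the partial through ViewEngines.Engines. A common workaround (a sketch, not from the original question; StubView and StubViewEngine are hypothetical names) is to register a stub view engine for the test and make sure the mock exposes non-null View, Writer, ViewData and TempData values:

        using System.IO;
        using System.Web.Mvc;

        // Stub view that writes predictable output so the test can assert on it.
        public class StubView : IView
        {
            public void Render(ViewContext viewContext, TextWriter writer)
            {
                writer.Write("test");
            }
        }

        // Stub view engine that always resolves to the stub view, so
        // RenderPartial("Test") never has to touch the real view engines.
        public class StubViewEngine : IViewEngine
        {
            public ViewEngineResult FindPartialView(ControllerContext controllerContext, string partialViewName, bool useCache)
            {
                return new ViewEngineResult(new StubView(), this);
            }

            public ViewEngineResult FindView(ControllerContext controllerContext, string viewName, string masterName, bool useCache)
            {
                return new ViewEngineResult(new StubView(), this);
            }

            public void ReleaseView(ControllerContext controllerContext, IView view) { }
        }

        // In test setup, before exercising the helper under test:
        //   ViewEngines.Engines.Clear();
        //   ViewEngines.Engines.Add(new StubViewEngine());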

    Read the article

  • Unit test approach for generic classes/methods

    - by Greg
    Hi, What's the recommended way to cover unit testing of generic classes/methods? For example (referring to my example code below), would it be a case of having 2 or 3 times the tests to cover testing the methods with a few different types of TKey, TNode classes? Or is just one class enough? public class TopologyBase<TKey, TNode, TRelationship> where TNode : NodeBase<TKey>, new() where TRelationship : RelationshipBase<TKey>, new() { // Properties public Dictionary<TKey, NodeBase<TKey>> Nodes { get; private set; } public List<RelationshipBase<TKey>> Relationships { get; private set; } // Constructors protected TopologyBase() { Nodes = new Dictionary<TKey, NodeBase<TKey>>(); Relationships = new List<RelationshipBase<TKey>>(); } // Methods public TNode CreateNode(TKey key) { var node = new TNode {Key = key}; Nodes.Add(node.Key, node); return node; } public void CreateRelationship(NodeBase<TKey> parent, NodeBase<TKey> child) { . . .
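
    One common approach (a sketch, not from the original question) is to close the generic over a single representative set of type arguments and write the tests against that, adding further closed types only where behaviour could genuinely differ (for example reference versus value-type keys). The sketch below uses NUnit and hypothetical IntNode/IntRelationship/IntTopology stand-ins, and assumes NodeBase<TKey> and RelationshipBase<TKey> can be subclassed with parameterless constructors:

        using NUnit.Framework;

        // Hypothetical closed types used only by the tests.
        public class IntNode : NodeBase<int> { }
        public class IntRelationship : RelationshipBase<int> { }
        public class IntTopology : TopologyBase<int, IntNode, IntRelationship> { }

        [TestFixture]
        public class TopologyBaseTests
        {
            [Test]
            public void CreateNode_StoresNodeUnderItsKey()
            {
                var topology = new IntTopology();

                var node = topology.CreateNode(42);

                // The node comes back with its key set and is registered in the dictionary.
                Assert.AreEqual(42, node.Key);
                Assert.AreSame(node, topology.Nodes[42]);
            }
        }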

    Read the article

  • Unit-testing a directive with isolated scope and bidirectional value

    - by unludo
    I want to unit test a directive which looks like this: angular.module('myApp', []) .directive('myTest', function () { return { restrict: 'E', scope: { message: '='}, replace: true, template: '<div ng-if="message"><p>{{message}}</p></div>', link: function (scope, element, attrs) { } }; }); Here is my failing test: describe('myTest directive:', function () { var scope, compile, validHTML; validHTML = '<my-test message="message"></my-test>'; beforeEach(module('myApp')); beforeEach(inject(function($compile, $rootScope){ scope = $rootScope.$new(); compile = $compile; })); function create() { var elem, compiledElem; elem = angular.element(validHTML); compiledElem = compile(elem)(scope); scope.$digest(); return compiledElem; } it('should have a scope on root element', function () { scope.message = 'not empty'; var el = create(); console.log(el.text()); expect(el.text()).toBeDefined(); expect(el.text()).not.toBe(''); }); }); Can you spot why it's failing? The corresponding jsFiddle Thanks :)

    Read the article

  • What *exactly* gets screwed when I kill -9 or pull the power?

    - by Mike
    Set-Up I've been a programmer for quite some time now but I'm still a bit fuzzy on deep, internal stuff. Now. I am well aware that it's not a good idea to either: kill -9 a process (bad) spontaneously pull the power plug on a running computer or server (worse) However, sometimes you just plain have to. Sometimes a process just won't respond no matter what you do, and sometimes a computer just won't respond, no matter what you do. Let's assume a system running Apache 2, MySQL 5, PHP 5, and Python 2.6.5 through mod_wsgi. Note: I'm most interested in Mac OS X here, but an answer that pertains to any UNIX system would help me out. My Concern Each time I have to do either one of these, especially the second, I'm very worried for a period of time that something has been broken. Some file somewhere could be corrupt -- who knows which file? There are over 1,000,000 files on the computer. I'm often using OS X, so I'll run a "Verify Disk" operation through the Disk Utility. It will report no problems, but I'm still concerned about this. What if some configuration file somewhere got screwed up? Or even worse, what if a binary file somewhere is corrupt? Or a script file somewhere is corrupt now? What if some hardware is damaged? What if I don't find out about it until next month, in a critical scenario, when the corruption or damage causes a catastrophe? Or, what if valuable data is already lost? My Hope My hope is that these concerns and worries are unfounded. After all, after doing this many times before, nothing truly bad has happened yet. The worst is that I've had to repair some MySQL tables, but I don't seem to have lost any data. But, if my worries are not unfounded, and real damage could happen in either situation 1 or 2, then my hope is that there is a way to detect it and protect against it. My Question(s) Could this be because modern operating systems are designed to ensure that nothing is lost in these scenarios? Could this be because modern software is designed to ensure that nothing is lost? What about modern hardware design? What measures are in place when you pull the power plug? My question is, for both of these scenarios, what exactly can go wrong, and what steps should be taken to fix it? I'm under the impression that one thing that can go wrong is that some programs might not have flushed their data to the disk, so any highly recent data that was supposed to be written to the disk (say, a few seconds before the power pull) might be lost. But what about beyond that? And can this very issue of 5-second data loss screw up a system? What about corruption of random files hiding somewhere in the huge forest of files on my hard drives? What about hardware damage? What Would Help Me Most Detailed descriptions about what goes on internally when you either kill -9 a process or pull the power on the whole system. (it seems instant, but can someone slow it down for me?) Explanations of all things that could go wrong in these scenarios, along with (rough of course) probabilities (i.e., this is very unlikely, but this is likely)... Descriptions of measures in place in modern hardware, operating systems, and software, to prevent damage or corruption when these scenarios occur. (to comfort me) Instructions for what to do after a kill -9 or a power pull, beyond "verifying the disk", in order to truly make sure nothing is corrupt or damaged somewhere on the drive. 
Measures that can be taken to fortify a computer setup so that if something has to be killed or the power has to be pulled, any potential damage is mitigated. Thanks so much!

    Read the article

  • Unit Testing - Am I doing it right?

    - by baron
    Hi everyone, Basically I have been programming for a little while, and after finishing my last project I can fully understand how much easier it would have been if I'd done TDD. I guess I'm still not doing it strictly, as I am still writing code then writing a test for it. I don't quite get how the test comes before the code if you don't know what structures you're using and how you're storing data etc... but anyway... Kind of hard to explain, but basically let's say for example I have Fruit objects with properties like id, color and cost. (All stored in a text file; ignore any database logic etc.) FruitID FruitName FruitColor FruitCost 1 Apple Red 1.2 2 Apple Green 1.4 3 Apple HalfHalf 1.5 This is all just for example. But let's say I have a collection of Fruit objects (it's a List<Fruit>) in this structure. And my logic will say to reorder the fruit ids in the collection if a fruit is deleted (this is just how the solution needs to be). E.g. if 1 is deleted, object 2 takes fruit id 1, object 3 takes fruit id 2. Now I want to test the code I've written which does the reordering, etc. How can I set this up to do the test? Here is where I've gotten so far. Basically I have a FruitManager class with all the methods, like deletefruit, etc. It usually holds the list, but I've changed the method to test it so that it accepts a list, and the info on the fruit to delete, then returns the list. Unit-testing wise: Am I basically doing this the right way, or have I got the wrong idea? And then I test deleting different valued objects / datasets to ensure the method is working properly. [Test] public void DeleteFruit() { var fruitList = CreateFruitList(); var fm = new FruitManager(); var resultList = fm.DeleteFruitTest("Apple", 2, fruitList); //Assert that fruitobject with x properties is not in list ? how } private static List<Fruit> CreateFruitList() { //Build test data var f01 = new Fruit {Name = "Apple",Id = 1, etc...}; var f02 = new Fruit {Name = "Apple",Id = 2, etc...}; var f03 = new Fruit {Name = "Apple",Id = 3, etc...}; var fruitList = new List<Fruit> {f01, f02, f03}; return fruitList; }
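
    One way to phrase the missing assertions is to check what should and should not be in the returned list, rather than inspecting the deleted object directly. Here is a hedged NUnit sketch (not from the original post), assuming DeleteFruitTest returns the updated List<Fruit>, that Fruit exposes Id and Name as above, and that ids are renumbered 1..n after a delete:

        using System.Collections.Generic;
        using System.Linq;
        using NUnit.Framework;

        [TestFixture]
        public class FruitManagerTests
        {
            [Test]
            public void DeleteFruit_RemovesFruitAndRenumbersIds()
            {
                var fruitList = CreateFruitList();
                var fm = new FruitManager();

                var resultList = fm.DeleteFruitTest("Apple", 2, fruitList);

                // One fruit is gone...
                Assert.AreEqual(2, resultList.Count);
                // ...and, per the renumbering rule, the survivors now carry ids 1 and 2.
                CollectionAssert.AreEqual(new[] { 1, 2 },
                    resultList.Select(f => f.Id).ToArray());
            }

            // Same shape as the helper in the question, minus the elided properties.
            private static List<Fruit> CreateFruitList()
            {
                return new List<Fruit>
                {
                    new Fruit { Name = "Apple", Id = 1 },
                    new Fruit { Name = "Apple", Id = 2 },
                    new Fruit { Name = "Apple", Id = 3 },
                };
            }
        }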

    Read the article

  • Java unit test coverage numbers do not match.

    - by Dan
    Below is a class I have written in a web application I am building using Java Google App Engine. I have written Unit Tests using TestNG and all the tests pass. I then run EclEmma in Eclipse to see the test coverage on my code. All the functions show 100% coverage but the file as a whole is showing about 27% coverage. Where is the 73% uncovered code coming from? Can anyone help me understand how EclEmma works and why I am getting the discrepancy in numbers? package com.skaxo.sports.models; import javax.jdo.annotations.IdGeneratorStrategy; import javax.jdo.annotations.IdentityType; import javax.jdo.annotations.PersistenceCapable; import javax.jdo.annotations.Persistent; import javax.jdo.annotations.PrimaryKey; @PersistenceCapable(identityType= IdentityType.APPLICATION) public class Account { @PrimaryKey @Persistent(valueStrategy=IdGeneratorStrategy.IDENTITY) private Long id; @Persistent private String userId; @Persistent private String firstName; @Persistent private String lastName; @Persistent private String email; @Persistent private boolean termsOfService; @Persistent private boolean systemEmails; public Account() {} public Account(String firstName, String lastName, String email) { super(); this.firstName = firstName; this.lastName = lastName; this.email = email; } public Account(String userId) { super(); this.userId = userId; } public void setId(Long id) { this.id = id; } public Long getId() { return id; } public String getUserId() { return userId; } public void setUserId(String userId) { this.userId = userId; } public String getFirstName() { return firstName; } public void setFirstName(String firstName) { this.firstName = firstName; } public String getLastName() { return lastName; } public void setLastName(String lastName) { this.lastName = lastName; } public String getEmail() { return email; } public void setEmail(String email) { this.email = email; } public boolean acceptedTermsOfService() { return termsOfService; } public void setTermsOfService(boolean termsOfService) { this.termsOfService = termsOfService; } public boolean acceptedSystemEmails() { return systemEmails; } public void setSystemEmails(boolean systemEmails) { this.systemEmails = systemEmails; } } Below is the test code for the above class. 
package com.skaxo.sports.models; import static org.testng.Assert.assertEquals; import static org.testng.Assert.assertNotNull; import static org.testng.Assert.assertTrue; import static org.testng.Assert.assertFalse; import org.testng.annotations.BeforeTest; import org.testng.annotations.Test; public class AccountTest { @Test public void testId() { Account a = new Account(); a.setId(1L); assertEquals((Long) 1L, a.getId(), "ID"); a.setId(3L); assertNotNull(a.getId(), "The ID is set to null."); } @Test public void testUserId() { Account a = new Account(); a.setUserId("123456ABC"); assertEquals(a.getUserId(), "123456ABC", "User ID incorrect."); a = new Account("123456ABC"); assertEquals(a.getUserId(), "123456ABC", "User ID incorrect."); } @Test public void testFirstName() { Account a = new Account("Test", "User", "[email protected]"); assertEquals(a.getFirstName(), "Test", "User first name not equal to 'Test'."); a.setFirstName("John"); assertEquals(a.getFirstName(), "John", "User first name not equal to 'John'."); } @Test public void testLastName() { Account a = new Account("Test", "User", "[email protected]"); assertEquals(a.getLastName(), "User", "User last name not equal to 'User'."); a.setLastName("Doe"); assertEquals(a.getLastName(), "Doe", "User last name not equal to 'Doe'."); } @Test public void testEmail() { Account a = new Account("Test", "User", "[email protected]"); assertEquals(a.getEmail(), "[email protected]", "User email not equal to '[email protected]'."); a.setEmail("[email protected]"); assertEquals(a.getEmail(), "[email protected]", "User email not equal to '[email protected]'."); } @Test public void testAcceptedTermsOfService() { Account a = new Account(); a.setTermsOfService(true); assertTrue(a.acceptedTermsOfService(), "Accepted Terms of Service not true."); a.setTermsOfService(false); assertFalse(a.acceptedTermsOfService(), "Accepted Terms of Service not false."); } @Test public void testAcceptedSystemEmails() { Account a = new Account(); a.setSystemEmails(true); assertTrue(a.acceptedSystemEmails(), "System Emails is not true."); a.setSystemEmails(false); assertFalse(a.acceptedSystemEmails(), "System Emails is not false."); } }

    Read the article

  • Oracle SCM at APICS Denver Oct 14-16

    - by Stephen Slade
    Join us in Denver, October 14–16, 2012, for the 2012 APICS International Conference & Expo. One of the world's largest gatherings of supply chain and operations management professionals, APICS provides an annual interactive learning environment for operations and supply chain professionals to lead and apply best practices. For those of you considering attending APICS next month, be sure to keep Oracle Supply Chain applications on your radar. Oracle will again have a prominent position at the annual global conference. Our product booth will have supply chain demonstrations for manufacturing, value chain planning, value chain execution and Agile product lifecycle management offerings. Stop by our booth to register for one of numerous prizes and awards and chat with one of our supply chain product experts. Oracle customers will be presenting at various sessions throughout the event. One of the great stories to be shared is the SUN supply chain transformation. For those interested in moving costs down to the bottom line, this is the session you should attend. http://www.apics.org/sites/conference/2012/home

    Read the article

  • During Spring unit test, data written to db but test not seeing the data

    - by richever
    I wrote a test case that extends AbstractTransactionalJUnit4SpringContextTests. The single test case I've written creates an instance of class User and attempts to write it to the database using Hibernate. The test code then uses SimpleJdbcTemplate to execute a simple select count(*) from the user table to determine if the user was persisted to the database or not. The test always fails though. I was suspect because in the Spring controller I wrote, the ability to save an instance of User to the db is successful. So I added the Rollback annotation to the unit test and sure enough, the data is written to the database since I can even see it in the appropriate table -- the transaction isn't rolled back when the test case is finished. Here's my test case: @ContextConfiguration(locations = { "classpath:context-daos.xml", "classpath:context-dataSource.xml", "classpath:context-hibernate.xml"}) public class UserDaoTest extends AbstractTransactionalJUnit4SpringContextTests { @Autowired private UserDao userDao; @Test @Rollback(false) public void teseCreateUser() { try { UserModel user = randomUser(); String username = user.getUserName(); long id = userDao.create(user); String query = "select count(*) from public.usr where usr_name = '%s'"; long count = simpleJdbcTemplate.queryForLong(String.format(query, username)); Assert.assertEquals("User with username should be in the db", 1, count); } catch (Exception e) { e.printStackTrace(); Assert.assertNull("testCreateUser: " + e.getMessage()); } } } I think I was remiss by not adding the configuration files. context-hibernate.xml <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd> <bean id="namingStrategy" class="org.springframework.beans.factory.config.FieldRetrievingFactoryBean"> <property name="staticField"> <value>org.hibernate.cfg.ImprovedNamingStrategy.INSTANCE</value> </property> </bean> <bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean" destroy-method="destroy" scope="singleton"> <property name="namingStrategy"> <ref bean="namingStrategy"/> </property> <property name="dataSource" ref="dataSource"/> <property name="mappingResources"> <list> <value>com/company/model/usr.hbm.xml</value> </list> </property> <property name="hibernateProperties"> <props> <prop key="hibernate.dialect">org.hibernate.dialect.PostgreSQLDialect</prop> <prop key="hibernate.show_sql">true</prop> <prop key="hibernate.use_sql_comments">true</prop> <prop key="hibernate.query.substitutions">yes 'Y', no 'N'</prop> <prop key="hibernate.cache.provider_class">org.hibernate.cache.EhCacheProvider</prop> <prop key="hibernate.cache.use_query_cache">true</prop> <prop key="hibernate.cache.use_minimal_puts">false</prop> <prop key="hibernate.cache.use_second_level_cache">true</prop> <prop key="hibernate.current_session_context_class">thread</prop> </props> </property> </bean> <bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager"> <property name="sessionFactory" ref="sessionFactory"/> <property name="nestedTransactionAllowed" value="false" /> </bean> <bean id="transactionInterceptor" class="org.springframework.transaction.interceptor.TransactionInterceptor"> <property name="transactionManager"> <ref local="transactionManager"/> </property> <property 
name="transactionAttributes"> <props> <prop key="create">PROPAGATION_REQUIRED</prop> <prop key="delete">PROPAGATION_REQUIRED</prop> <prop key="update">PROPAGATION_REQUIRED</prop> <prop key="*">PROPAGATION_SUPPORTS,readOnly</prop> </props> </property> </bean> </beans> context-dataSource.xml <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd"> <bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close"> <property name="driverClass" value="org.postgresql.Driver" /> <property name="jdbcUrl" value="jdbc\:postgresql\://localhost:5432/company_dev" /> <property name="user" value="postgres" /> <property name="password" value="postgres" /> </bean> </beans> context-daos.xml <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd"> <bean id="extendedFinderNamingStrategy" class="com.company.dao.finder.impl.ExtendedFinderNamingStrategy"/> <bean id="finderIntroductionAdvisor" class="com.company.dao.finder.impl.FinderIntroductionAdvisor"/> <bean id="abstractDaoTarget" class="com.company.dao.impl.GenericDaoHibernateImpl" abstract="true" depends-on="sessionFactory"> <property name="sessionFactory"> <ref bean="sessionFactory"/> </property> <property name="namingStrategy"> <ref bean="extendedFinderNamingStrategy"/> </property> </bean> <bean id="abstractDao" class="org.springframework.aop.framework.ProxyFactoryBean" abstract="true"> <property name="interceptorNames"> <list> <value>transactionInterceptor</value> <value>finderIntroductionAdvisor</value> </list> </property> </bean> <bean id="userDao" parent="abstractDao"> <property name="proxyInterfaces"> <value>com.company.dao.UserDao</value> </property> <property name="target"> <bean parent="abstractDaoTarget"> <constructor-arg> <value>com.company.model.UserModel</value> </constructor-arg> </bean> </property> </bean> </beans> Some of this I've inherited from someone else. I wouldn't have used the proxying that is going on here because I'm not sure it's needed but this is what I'm working with. Any help much appreciated.

    Read the article

  • Visual Studio 2008 Unit test does not pick up code changes unless I build the entire solution

    - by Orion Edwards
    Here's the scenario: Change my code: Change my unit test for that code With the cursor inside the unit test class/method, invoke VS2008's "Run tests in current context" command The Visual Studio "Output" window indicates that the code dll and the test dll both successfully build (in that order) The problem, however, is that the unit test does not use the latest version of the dll which it has just built. Instead, it uses the previously built dll (which doesn't have the updated code in it), so the test fails. When adding a new method, this results in a MethodNotImplementedException, and when adding a class, it results in a TypeLoadException, both because the unit test thinks the new code is there, and it isn't! If I'm just updating an existing method, then the test just fails due to incorrect results. I can 'work around' the problem by doing this: Change my code: Change my unit test for that code Invoke VS2008's 'Build Solution' command With the cursor inside the unit test class/method, invoke VS2008's "Run tests in current context" command The problem is that doing a full build solution (even though nothing has changed) takes upwards of 30 seconds, as I have approx 50 C# projects, and VS2008 is not smart enough to realize that only 2 of them need to be looked at. Having to wait 30 seconds just to change 1 line of code and re-run a unit test is abysmal. Is there anything I can do to fix this? None of my code is in the GAC or anything funny like that, it's just ordinary old DLLs (building against .NET 3.5 SP1 on a Win7/64-bit machine). Please help!

    Read the article

  • How do you tell that your unit tests are correct?

    - by Jacob Adams
    I've only done minor unit testing at various points in my career. Whenever I start diving into it again, it always troubles me how to prove that my tests are correct. How can I tell that there isn't a bug in my unit test? Usually I end up running the app, proving it works, then using the unit test as a sort of regression test. What is the recommended approach and/or what is the approach you take to this problem? Edit: I also realize that you could write small, granular unit tests that would be easy to understand. However, if you assume that small, granular code is flawless and bulletproof, you could just write small, granular programs and not need unit testing. Edit2: For the arguments "unit testing is for making sure your changes don't break anything" and "this will only happen if the test has the exact same flaw as the code", what if the test overfits? It's possible to pass both good and bad code with a bad test. My main question is what good is unit testing since if your tests can be flawed you can't really improve your confidence in your code, can't really prove your refactoring worked, and can't really prove that you met the specification?

    Read the article

  • On a dual-GPU laptop, is using the discrete GPU ever more power efficient?

    - by Mahmoud Al-Qudsi
    Given a laptop with a dual integrated/discrete GPU configuration, is it ever more power efficient to use the discrete GPU instead of the integrated? Obviously when writing an email or working on a spreadsheet, the integrated GPU will always use less power. But let's say you're doing something graphics-medium but not graphics-intensive/heavy - is there a point where it actually makes sense to fire up the discrete GPU, not for performance but for power-saving reasons? Off the top of my head, I can think of a scenario where the discrete GPU supports hardware decoding of a particular video codec - I'd imagine there is a "price point" where using the GPU saves more energy than decoding that fully in software would. But I think most GPUs, integrated or discrete, pretty much decode just the plain-Jane H.264. But maybe there is something more complicated, perhaps if you're doing something like desktop/windowing animations or a flash animation on a website (not an embedded flash video) - maybe the discrete GPU will use enough less power to make up for switching to it? I guess this question can be summed up as: can you say beyond doubt that if you don't care about performance on a laptop with two GPUs, you should always use the integrated GPU for maximum battery life?

    Read the article

  • High Power Consumption and Wakeups on my Asus X54H with 12.04

    - by Marogian
    So I've been using powertop to try and reduce the power consumption on my laptop as I only seem to get about 3 hours of battery. From reading other threads on here it seems my power consumption and wakeups are strangely high, here's a summary: The battery reports a discharge rate of 10.2 W Summary: 651.8 wakeups/second, 0.0 GPU ops/second and 0.0 VFS ops/sec The things which stand out as odd: 1.31 W 4.0 ms/s 166.7 Interrupt PS/2 Touchpad / Keyboard / Mouse So more than 10% of my battery is being consumed by my touchpad/keyboard? That doesn't seem right. 548 mW 34.3 ms/s 45.9 Process compiz 5% from Compiz. Is this correct? 376 mW 1.8 ms/s 47.5 Interrupt [51] i915 298 mW 1.4 ms/s 37.7 Timer tick_sched_timer Another few percent from these things- not quite sure what they are. For reference I've installed Laptop Mode Tools, Jupiter (on power save), the CPU governor is definitely on powersave and brightness is on minimum. What else can I do/Any ideas? I've seen other posts on here reporting laptop battery lives of ~8 hours and power consumption of 4W rather than my 10W... Thanks!

    Read the article

  • Opposite method of math power adding numbers

    - by adopilot
    I have a method for converting an array of Booleans to an integer. It looks like this: class Program { public static int GivMeInt(bool[] outputs) { int data = 0; for (int i = 0; i < 8; i++) { data += ((outputs[i] == true) ? Convert.ToInt32(Math.Pow(2, i)) : 0); } return data; } static void Main(string[] args) { bool[] outputs = new bool[8]; outputs[0] = false; outputs[1] = true; outputs[2] = false; outputs[3] = true; outputs[4] = false; outputs[5] = false; outputs[6] = false; outputs[7] = false; int data = GivMeInt(outputs); Console.WriteLine(data); Console.ReadKey(); } } Now I want to make the opposite method, returning an array of Boolean values. As I am short on .NET and C# knowledge, so far all I can think of is hardcoding a switch statement or if conditions for every possible int value. public static bool[] GiveMeBool(int data) { bool[] outputs = new bool[8]; if (data == 0) { outputs[0] = false; outputs[1] = false; outputs[2] = false; outputs[3] = false; outputs[4] = false; outputs[5] = false; outputs[6] = false; outputs[7] = false; } //After a thousand lines of code if (data == 255) { outputs[0] = true; outputs[1] = true; outputs[2] = true; outputs[3] = true; outputs[4] = true; outputs[5] = true; outputs[6] = true; outputs[7] = true; } return outputs; } I know that there must be an easier way.
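
    A minimal sketch (not from the original post) of the reverse conversion using bit masking, which avoids hardcoding every one of the 256 cases:

        // Bit i of 'data' decides outputs[i]; this is the inverse of GivMeInt above.
        public static bool[] GiveMeBool(int data)
        {
            bool[] outputs = new bool[8];
            for (int i = 0; i < 8; i++)
            {
                outputs[i] = (data & (1 << i)) != 0;   // is bit i set?
            }
            return outputs;
        }

    With the example in the question (outputs[1] and outputs[3] set), GivMeInt returns 10 and GiveMeBool(10) reproduces the same array; the forward method could likewise use data |= 1 << i instead of Math.Pow.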

    Read the article

  • Interview with Koen Aben, Supply Chain Director of WE Fashion

    - by user801960
    We recently spoke to Koen Aben, the Supply Chain Director of WE Fashion, who gave us some insight into how Oracle supported the international fashion retailer through the completion of a large scale integration project across its 340 European stores. Koen explains the reasoning behind the project, which was to create a common retail foundation and to integrate and align working processes to drive insight and enable continued growth. It is always good to hear from someone of Koen's experience who can articulate the benefits of partnering with the right company for such an extensive project as this. Koen explains that a crucial element of such a project is to unify business applications into a common platform, adding that for successful growth, retailers really need to achieve enterprise-wide alignment. At the start of the three year project, WE Fashion's application platform was fragmented, impacting the company's ability to support sustained growth. In light of this, WE Fashion invested in its processes, systems, teams and partnerships to build the needed retail foundation. Now, after successfully completing the project, the basis is in place to ensure that growth is unimpeded. In the video, Koen Aben highlights some of the factors necessary for the success of the project as: Having an understanding that the process of creating a growth platform for a company is a long journey Accepting that during a lengthy project such as this, there will be high and low points experienced within the project team and the business, but that the relationship with your partners is crucial to the success of the project. Having the correct team in place will prove to be the "lynchpin" of any successful project. Oracle supported Koen and his team in implementing this project, and is recognised for the role it played during this development in partnership with the company. On his experience of working with the Oracle team, Koen points out that in the critical situations, Oracle was there to ensure that the right people were in place whenever needed, and this was key to ensuring the project's success. Since Oracle is one of the few providers that can offer an enterprise-wide retail platform, our best practice approach is key to connecting interactions throughout the business to enable insight and optimise operations. This is a great example of a large scale international retail project, where the true success of its completion is reflected in how proud the company is about what has been achieved, and the fact that results are already being seen.

    Read the article

  • Your Supply Chain May Be At Risk for Counterfeiting!

    - by stephen.slade(at)oracle.com
    Competition has driven unscrupulous participants into the supply chain. And your supply chain is exposed along with your customers. 60 Minutes had a terrific segment on fraud in the supply chain. This is a 13-minute clip on our global supply chains, and it pertains to us not only as supply chain professionals and participants in supply chain economics, but as consumers and users of the various products that enter our shopping cart. It's a must-see news clip worth sharing: http://www.cbsnews.com/video/watch/?id=7359537n&tag=related;photovideo If you take medicines, you'll want to see this. Dr. Sanjay Gupta participated in this 9-month review of 'bad drugs' and reports for 60 Minutes on CBS.

    Read the article

  • Oracle HCM Cloud Customer Q&A with WAXIE Sanitary Supply

    - by HCM-Oracle
    At this year’s Oracle HCM User Group (OHUG) Global conference, we had the opportunity to sit down with Oracle HCM Cloud customers for a short Q&A. We got to hear about what brought them to the OHUG conference, some of the benefits they are receiving from their Oracle HCM Cloud solutions, and advice they would give other businesses looking to move to the cloud.  Below is a discussion we had with Melissa Halverson, Benefits & HRIS Manager at WAXIE Sanitary Supply.  Q: What made you attend the OHUG Global Conference this year? Halverson: The biggest reason is networking. It allows me to connect with others in the Oracle HCM Cloud community. I was able to speak at the HCM Cloud SIG (Special Interest Group) on the first day and share my experiences as well as hear the experiences of other Oracle HCM Cloud users. It also allows me to get face-time with key people within Oracle.  Q: What Oracle HCM solutions are you currently using? Halverson: Global HR, Benefits, Workforce Compensation, and Performance Management. Q: Do you plan to invest further in Oracle HCM? Halverson: Yes, we are interested in Time and Labor. We would also like to get Recruiting at some point in the future. Q: What would you say is the most significant benefit you’ve realized from your use of Oracle HCM solutions? Halverson: First and foremost would be process improvement. Before we had Oracle HCM Cloud we relied on a paper process where something as simple as an employee address change required changes to be made manually in 9 different systems. Obviously that was extremely inefficient, but also increased the likelihood of errors being made.  The other huge benefit we have seen was in making information visible to the people that need it. Prior to implementing Oracle HCM Cloud, it was very difficult for anyone to access and make use of the information in our systems. Now, we can provide this information to those who need it to make better decisions.  Q: What advice would you give an organization looking to move their HR systems to the cloud? Halverson: One thing I think many organizations don't spend enough time doing is thoroughly vetting their implementation partner. I believe you should be vetting your implementation partner as much as you did the system itself. Also, manpower is so important. Involve as large a team as possible because you don’t want to get stuck having too few bodies to help out. And set realistic time frames. Biting off more than you can chew will inevitably result in failure. Having a phased approach is always best rather than trying to do everything at once. Thanks for the tips Melissa. Enjoy the rest of the conference!

    Read the article

  • Can I run C# built-in unit tests on a build machine?

    - by 5YrsLaterDBA
    Can I run C# built-in unit tests on a build machine which doesn't have Visual Studio installed? We are thinking of adding unit tests to our Visual Studio 2008 C# project. Our build machine doesn't have VS installed and we want to integrate the new unit tests with our auto-build system. Is MSTest the executable to launch the Team Test unit tests?

    Read the article

  • Which part of the computer needs all the power from the PSU?

    - by Xeoncross
    A couple of years ago I was building a new Core 2 Quad system and after reading all the reviews was convinced that I would need at least a 400 watt power supply unit (PSU). I bought a 500W Antec EarthWatts. However, last year I bought a Kill-A-Watt power meter to test some things around our house and found that my PC was only using 80W of power while idle! (C2Q, 4GB RAM, SATA HD, & DVD burner) Well, here I am building another computer with a 65 watt Core 2 CPU in it, and I'm wondering if I can skimp out this time and get a 300 watt or so unit, since my usage doesn't seem to be what everyone claims it is. I'm sure that the people in the reviews who exhausted a 500 watt PSU weren't lying - so what is it that uses all that? The high-end dual SLI video cards? Lots of SATA drives? Overclocking?

    Read the article

  • Can a power failure or forceful shutdown damage hardware?

    - by Vilx-
    In an unrelated Internet forum I got into a discussion about hardware damage from forceful shutdowns (holding the power button for 5 seconds) and power failures. I was of the opinion that normal PC hardware does not suffer from this - after all, it's not much different than what they experience under a standard shutdown. But another person thought that it could do physical harm to the hard drive and possibly other components as well. He also said that the journaling features of filesystems are useless in the face of power failures and were intended to help mitigate damage from system crashes. Now... I think this is nonsense, but then again I lack the experience and knowledge to say it with certainty. Perhaps someone else is more knowledgeable in this area and can shed light on this burning issue? :)

    Read the article

  • Can a power failure or forceful shutdown damage hardware?

    - by Vilx-
    Can computer hardware suffer damage from forceful shutdowns (holding the power button for five (5) seconds) or power failures? I believe that normal PC hardware does not suffer from this - after all, it's not much different than what they experience under a standard shutdown. But elsewhere I've read that another person thought that it could do physical harm to the hard drive and possibly other components as well. He also said that the journaling features of filesystems are useless in the face of power failures and were intended to help mitigate damage from system crashes. I think this is nonsense, but then again I lack the experience and knowledge to say it with certainty.

    Read the article

  • How can I guess if a USB cable will power my devices?

    - by rsanchez
    I've had problems with one long (4 meter) USB Mini-B to USB Type-A cable not being able to boot a 2.5'' external hard disk due to not supplying enough current. On top of that, the cable used a Type-A to Mini-B adapter for the Mini-B part, which probably made things worse. Three different shorter cables I had around made the hard disk work without extra current, so it was definitely the cable's fault. However, if I plugged the hard disk into the power and used the long cable just for data, it worked. Here is some related information on powering through USB cables: http://www.girr.org/mac_stuff/usb_stuff.html I don't have any long cables without an intermediary Type-A to Mini-B adapter to try out. My question is: is there a way to guess whether a cable will provide enough power for charging or for powering a disk drive? Is it related to the length of the cable, to the build quality of the cable, or to the fact that it uses intermediary adapters?

    Read the article

  • Power outage during disk wipe. What do I do now?

    - by Mark Trexler
    I was using Roadkil Diskwipe on an external hard drive and the power went out. I had removed it from any outlet connection by the time power was restored to prevent power-spike damage (it's on a surge protector, but I didn't want to rely on that). My question is, where do I go from here? Obviously I don't care about preserving any data currently on it; I just want to make sure the drive itself is not terminally damaged. I'm running chkdsk (full), but I don't know if that's the correct step for assessing any damage. If it makes any difference, the hard drive was unallocated at the time of the outage, as Diskwipe requires that for it to run. Also, could something like this cause latent problems with the drive itself (i.e. serious issues that I won't be aware of when testing it now)? I'd appreciate any program recommendations if chkdsk is not the most appropriate diagnostic route. Thank you.

    Read the article

  • Suspend only works once after full power cycle with ASUS P7P55D-E Pro

    - by John Chadwick
    This one is strange. I can't seem to get suspend working more than once per power cycle. When I say "power cycle," I mean the only way to get one proper suspend is to cut power from the power supply and boot back up cold. After the proper suspend, I get a failed suspend, and after all reboots or cold boots until power is cut, suspends fail. I'm using an ASUS P7P55D-E Pro with a Sandy Bridge Core i7, running on Ubuntu Precise repositories and UEFI. I'm running Nouveau from repository (And Gallium3d compiled from git, but that does not come into this since I can avoid OpenGL and it still happens the same way) with a GTX 285 (nv50.) I had to build a custom kernel (3.3) in order for ACPI 5.0 to be supported and make suspend work at all. I compiled it using the latest Ubuntu kernel's config file with the additional entries set to the default options. All packages are up to date. I know these are relatively exotic settings, but I'm hoping maybe I can get some help anyways. The behavior when suspend fails is strange. Upon a proper suspend, all fans turn off and the only led left on, the power led, is blinking. Upon a failed suspend, 1. USB power remains. 2. The power led stays on solid. 3. All fans seem to still be on. 4. I can hear what I believe is the primary harddrive shutting off. 5. Despite USB power remaining, the USB powered keyboard does not respond to anything, and the indicator leds on it shut off. Pressing the power button does nothing, and of course I have not to date found a way to wake it up. When trouble shooting the first round of issues I got with suspend not too long ago, I ended up building a list of modules to disable upon sleeping. Here's my config file for them: In /etc/pm/config.d/01modules: SUSPEND_MODULES="uhci_hd ehci_hd button" All of my other pm configuration files are stock. In case it's any help, here are my relevant BIOS settings. Thanks.

    Read the article

  • Hard Disk Spins Down as long as Battery is in Laptop

    - by Brock Dute
    Hi, I just figured out today that as long as the battery is in my laptop, it doesn't matter if it's fully charged while plugged in; Ubuntu always spins down my hard drive. I noticed this because there was a huge difference in speed when I removed the battery. My settings for power management are basically: on AC power, don't spin down the hard drive, don't suspend or anything; on battery power, save as much power as possible. I assumed that if I plug in my laptop, it'll use the On AC Power settings no matter what, but apparently this isn't so. Is there a way to "fix" this?

    Read the article

  • Power button missing

    - by Christophe De Troyer
    An almost clean install of Ubuntu 13.10 keeps hiding the power button. I noticed before that my clock and power button were missing from the top right corner. I rebooted my system and they were back. Now only my power button is missing. How do I go about fixing this? I can't keep rebooting, can I? The latest thing I did was removing Ubuntu One (see: Removing Ubuntu One). I don't think I did anything that might've tinkered with these settings, though.

    Read the article
