Search Results

Search found 18191 results on 728 pages for 'single board'.


  • Picking a Linux-compatible motherboard

    - by Chris
    Last time I bought a new computer (I build them myself) I got a motherboard that had really poor Linux support for a long time, specifically the audio: I had to wait months before the kernel supported the on-board audio chipset. That is exactly the situation I'm trying to avoid this time around. I have some specific questions about "server motherboards". I looked at a few models of server motherboards by Intel, and some random models on Newegg, and I wasn't able to see much of a difference from regular desktop motherboards other than that most had two sockets and support for much more RAM. These boards seem more popular with Linux users. Why? AMD and Intel both have server CPUs as well; same question there: what's the difference? To make this question more concrete, I was looking at this motherboard. The main questions about it that I can't answer are: Can I get a motherboard without on-board RAID and audio? I wanted to get a hardware RAID controller and a PCI audio card; I thought a server motherboard would be cheaper and not have these "extras", since who wants an audio card on a server? Where can I find out about Linux support for the components on this board ("Intel ICH10R", "Realtek ALC889", "Marvell 88E8056")? I'm buying this computer to work as a Linux desktop for a lot of compiling, coding and audio/video work, but I don't want to rule out the possibility of installing Windows and playing some games at some point (even if the last game I got has been sitting in its box unopened for almost a year). Is it a good idea to buy a "server motherboard" and play games on it, or are desktop boards better value for this? The ultimate solution for me would be a motherboard that had GPL drivers for on-board LAN, a single CPU socket, lots of PCI Express and PCI, USB 3.0, and no fancy hard disk controllers, since I'll be getting a separate one.

    Read the article

  • RAID 0 data recovery?

    - by Fred
    Hi all, I have two identical Seagate 7200.9 500GB drives configured as a RAID 0 spanned disk in Windows. One of the drives has lost power and won't spin up at all. I know this normally means death for the data on both drives, but I have a cunning plan: DISK 1 - no power, RAID 0 disk; DISK 2 - fully functional RAID 0 disk; DISK 3 - fully functional spare disk. Copy the working drive's (disk 2) data to the third 500GB disk (disk 3), remove the logic board from the working disk (disk 2) and use it to replace the non-working logic board on the broken drive (disk 1), then hopefully recreate the RAID 0 with disk 1 and disk 3 just long enough to get the data off it. Hope this makes sense; here are my questions: Windows Disk Management currently recognises disk 2 but won't let me access it in any way, so copying the data off it (or getting a disk image) can't be done in Windows. Does anyone know of any software (in Linux, or self-booting) that would allow me to access this disk? Does anyone know of any software that will recreate the spanned drive off two disk images? Am I missing any key information that means I definitely shouldn't even bother starting this? I know it's a long shot anyway, but it's worth a try unless I definitely can't do it. The irritating thing is that I am sure it's a logic board failure on disk 1, as it simply won't power up at all - suddenly no signs of life - so I am sure the data is intact! Any help would be really appreciated! Thanks
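    For the imaging step, GNU ddrescue is the usual tool for copying a drive that may develop read errors. For illustration only, here is a minimal Python sketch of the same idea (raw chunked reads from a block device, zero-filling anything unreadable); the device and file names are placeholders:

        import os

        CHUNK = 1024 * 1024   # copy 1 MiB at a time

        def image_disk(src, dst):
            """Copy a block device into an image file, zero-filling unreadable chunks."""
            fd = os.open(src, os.O_RDONLY)
            size = os.lseek(fd, 0, os.SEEK_END)   # block device size in bytes
            with open(dst, "wb") as out:
                pos = 0
                while pos < size:
                    want = min(CHUNK, size - pos)
                    try:
                        os.lseek(fd, pos, os.SEEK_SET)
                        data = os.read(fd, want)
                    except OSError:
                        data = b""                 # unreadable chunk: skip it
                    out.write(data.ljust(want, b"\x00"))
                    pos += want
            os.close(fd)

        # needs root, e.g.: image_disk("/dev/sdb", "disk2.img")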

    Read the article

  • My client's solution of a Windows SBS 2011 VM on an Ubuntu host and VirtualBox is pinning the host CPU

    - by Scott Stamp
    Here's my situation: I've got a client hosting two servers (one of them a VM), with the host itself running VMware Zimbra and the guest running Windows Small Business Server 2011. Unfortunately, the person before me configured this setup as follows. Host: Ubuntu Desktop Edition 10.04 (I know - again, not my choice) running VMware Zimbra; 8GB of RAM; on-board RAID1 of two 320GB Seagate Barracuda drives for the OS; software RAID5 of four 500GB WD Caviar Black drives on mdadm for bulk storage (sorry, I don't know the model #); a relatively competent quad-core Intel Core i7 CPU from the Nehalem architecture (I'm not suspicious of this as the bottleneck). Guest: Windows Small Business Server 2011; 4GB of RAM; host-equivalent CPU allocation; VDI file for the OS hosted on the on-board RAID, VDI file for storage hosted on the on-board RAID. For some reason, when running, the VM locks up while sitting nearly idle, and the VirtualBox process reports values of 240%+ in top (how is that even possible?!). Anyone have any ideas or suggestions? I'm totally stumped on this one. Happy to provide whatever logs you'd like to take a look at. Ideally I'd drop VirtualBox and provision this with VMware Workstation, but the client has objected to the (very nominal) costs involved. If hardware needs to be purchased to help, it will be, but we're considering upgrades a last resort at this time. Thanks in advance! *fingers crossed*

    Read the article

  • Ubuntu dmidecode is not functioning properly

    - by Alaa Alomari
    dmidecode is giving inconsistent and conflicting results: it shows that I have two memory slots, while the correct number is 8 (the board is a Tyan S5350).

        uname -a
        Linux synd01 3.0.0-16-server #29-Ubuntu SMP Tue Feb 14 13:08:12 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

        root@synd01:/home/badmin# dmidecode -t 16
        dmidecode 2.9
        SMBIOS 2.33 present.
        Handle 0x0011, DMI type 16, 15 bytes
        Physical Memory Array
            Location: System Board Or Motherboard
            Use: System Memory
            Error Correction Type: None
            Maximum Capacity: 4 GB
            Error Information Handle: Not Provided
            Number Of Devices: 2

    while

        root@synd01:/home/badmin# dmidecode -t 17 | grep Size
            Size: No Module Installed
            Size: No Module Installed
            Size: 1024 MB
            Size: 1024 MB
            Size: No Module Installed
            Size: No Module Installed
            Size: 1024 MB
            Size: 1024 MB

    Also, lshw shows:

        *-memory
            description: System Memory
            physical id: 11
            slot: System board or motherboard
            size: 4GiB
          *-bank:0
            description: DIMM DDR Synchronous 166 MHz (6.0 ns) [empty]
            physical id: 0
            slot: J3B1
            clock: 166MHz (6.0ns)
          *-bank:1
            description: DIMM DDR Synchronous 166 MHz (6.0 ns) [empty]
            physical id: 1
            slot: J3B3
            clock: 166MHz (6.0ns)
          *-bank:2
            description: DIMM DDR Synchronous 166 MHz (6.0 ns)
            physical id: 2
            slot: J2B2
            size: 1GiB
            width: 64 bits
            clock: 166MHz (6.0ns)
          *-bank:3
            description: DIMM DDR Synchronous 166 MHz (6.0 ns)
            physical id: 3
            slot: J2B4
            size: 1GiB
            width: 64 bits
            clock: 166MHz (6.0ns)
          *-bank:4
            description: DIMM DDR Synchronous 166 MHz (6.0 ns) [empty]
            physical id: 4
            slot: J3B2
            clock: 166MHz (6.0ns)
          *-bank:5
            description: DIMM DDR Synchronous 166 MHz (6.0 ns) [empty]
            physical id: 5
            slot: J2B1
            clock: 166MHz (6.0ns)
          *-bank:6
            description: DIMM DDR Synchronous 166 MHz (6.0 ns)
            physical id: 6
            slot: J2B3
            size: 1GiB
            width: 64 bits
            clock: 166MHz (6.0ns)
          *-bank:7
            description: DIMM DDR Synchronous 166 MHz (6.0 ns)
            physical id: 7
            slot: J1B1
            size: 1GiB
            width: 64 bits
            clock: 166MHz (6.0ns)

    What might cause this conflict, and how can I fix it? Thanks

    Read the article

  • Windows Server 2008 - RAID 5 Fails on Reboot

    - by Adam
    Hey, I've got an install of Windows Server 2008 Enterprise. It's running software RAID-5 with five disks. The disks were originally formatted under Windows Server 2003, but came up fine once I installed Windows Server 2008. The issue I'm having is that every time I reboot the server, the RAID comes up with a "Failed Redundancy" - the data stays available. I have four disks on a PCI SATA controller and one of the disks connected to the motherboard's on-board SATA ports. (The other on-board port has the system disk connected.) I was having disk #4 fail consistently, so I tried swapping the cables on the controller end: I swapped the on-board RAID disk with one on the PCI controller. Same issue now, except with disk #1. Once the system's up, I can reactivate the RAID; it will resync for a while, then go to "Healthy", and will stay that way for an indefinite amount of time - until I reboot. As soon as I reboot, the disk drops again. I've ruled out disk + cable with the recabling. I don't believe it would be the controller, as it seems to work fine most of the time - only failing on reboot - and the other port on the same controller connects the system disk, which is clearly working. I did look in the event log, but didn't see anything particularly relevant (although I didn't know what I was looking for - just looked for anything with a "Warning" or "Error" symbol that looked disk-related :)). I'm not particularly familiar with RAID on Windows; does anyone have any idea why this might be doing this? Any idea how to fix it? Any suggestions appreciated! -- Adam

    Read the article

  • Intel HD Graphics vs NVIDIA Quadro FX 380 PCI-E

    - by Michael
    I recently purchased an Acer Veriton which has an i5-650 processor, Windows 7 Pro (64-bit) and Intel HD Graphics listed as the video card. I also purchased a PNY NVIDIA Quadro FX 380 PCI-E card for improved picture and home video viewing and editing. I have already replaced the original 300 watt power supply with a 430 watt Antec TruePower I had on hand, and boosted the RAM to 8 GB from the original 4. Question 1) Am I getting any improvement in visual quality or system speed with the Quadro, or is it a waste of money and I should just save up to buy a bigger video card? This card was on sale for $115. If I am getting improvement, then I need to ask another question. Question 2) Instructions for the Quadro installation are as follows: 1 - Uninstall the existing VGA driver: remove the existing display driver via "Add or Remove Programs", then shut down your computer. 2 - Remove your existing graphics board (or disable the integrated 3D graphics controller). [Skipping the instructions on how to remove an existing graphics board.] "Systems with integrated (also known as on-board) 3D graphics may require you to disable the integrated 3D graphics system. Consult the owner's or vendor manual that came with your PC on how to properly do this." So is the Intel HD Graphics considered a 3D graphics controller? If so, should I just contact Acer, or can anyone give me instructions? Thanks in advance for any help.

    Read the article

  • Installing a new motherboard in an HP xw6200 case

    - by thing2k
    I have an HP xw6200 workstation that is rather long in the tooth, and with two physical CPUs it is quite inefficient. So the plan was to upgrade the internals. Nothing special: an AMD Athlon II X4 640, an ASUS M4A78KT-M LE (mATX), and 2x 2GB of DDR3 1333MHz RAM - 3p under my £150 budget. The issues: The pin connector for the front panel isn't a good fit, but I can trim it to size. The PSU has an 8-pin power connector; unsurprisingly, the new board has a 4-pin socket. The pins do line up, but I would have to cut the connector in half to make it fit. Finally, due to the weight of the heat-sinks, they are screwed directly into the case. It turns out that these screws also lock the motherboard in place, so to remove it, you remove the heat-sinks, slide the motherboard across and lift it out. I tested the new board for fit, and while it slots in fine, it's not secure. There is nowhere to screw the board down; it is just held in place with plastic standoffs. The only idea I had was to wedge something between the side of the motherboard and part of the case. Any suggestions?

    Read the article

  • HDD dead forever???

    - by Roberto
    Yesterday I turned on my computer and it couldn't boot. I found out the HD (a 320GB SATA Seagate Momentus 7200.3 for notebooks) was broken; it couldn't be recognized by the BIOS. I have another of the same HD, so I exchanged the boards. I found out that there is a problem on its board, since my good HD didn't work with it. But the broken HD doesn't work with the good board either: it can be recognized, but when I insert a Windows installation DVD it says the HD is 0GB. I put it in a case and used it on another computer via USB, but it doesn't show up in My Computer. I used file-recovery software called "GetDataBack for NTFS"; it recognized the HD, but with the wrong size (2TB). When I try to make it read the HD, it gets an I/O error reading sectors. It tries to read, and the HD spins... So, since I'm using a good board on it, the problem seems to be internal. Is there anything someone could do to recover the files from it?

    Read the article

  • SQL Server: Network pauses after installing cheap SATA card: Is there a solution?

    - by samsmith
    At the risk of being assigned to the "bad DBA" club... I did something desperate, and may have to undo it. Problem: After installing a low-cost eSATA board, my SQL Server is intermittently unresponsive (seemingly when there is a lot of IO to the eSATA drive). Questions: 1) Is there a solution to the intermittent unresponsiveness that allows me to keep the eSATA in place? 2) Whether or not (1==true): What is a decent, low-cost way to add 1-3 TB of storage to SQL Server for non-critical SQL DBs? Detail: Our SAN is full, and expanding it is costly and will take a month. I have a pressing need to add 1-3 TB for some development DBs (i.e. not mission critical; data loss is OK). As a band-aid, I threw a $20 eSATA PCI board in the Dell 1950 server and attached an external 2TB eSATA drive. This seemed to work fine, but I notice that our production SQL DBs, and even Remote Desktop, now experience network "pauses" that they never did before (with both SQL client apps and Remote Desktop throwing "networking problem" errors). This SQL Server has lots of memory, and runs an instance of SQL 2005 (where all line-of-business apps reside) and an instance of SQL 2008 (for development DBs). SQL Server RAM has been appropriately configured, and this setup has run great for years. The server is: Dell 1950, Win2003 x64, 14GB RAM, PERC controller with two mirrored internal HDs, Dell SAN over Gbit Ethernet (dual homed), 2 PCIx slots (1 used by the NIC for the SAN, 1 now in use for the eSATA board). Thank you for suggestions!

    Read the article

  • Setting up SSO in ADF Security-enabled application

    - by Dmitry Nefedkin
    I'm continuing a series of posts/videos regarding setting up ADF applications in the real world. This time I'm going to present how to set up Single Sign-On (SSO) and Single Logout (SLO) for an ADF application using Oracle Access Manager 11g. In this 40-minute video we are going to explore the following topics: review the demo environment; install an Oracle HTTP Server 11g (OHS) instance as a reverse proxy for Oracle WebLogic Server; install the OAM 11g WebGate inside OHS; modify and redeploy the ADF application for use with OAM; configure the OAM Identity Asserter in the ADF domain; configure single logout (SLO); test SSO and SLO.

    Read the article

  • Using idle time in turn-based (RPG) games for updating

    - by The Communist Duck
    In any turn-based RPG there will be large periods of time when nothing is happening, because the game is looping over wait_for_player_input. Naturally, it seems sensible to use this time to update things. However, this immediately seems to suggest that it would need to be threaded. Is this sort of design possible in a single thread?

        loop:
            if not check_something_pressed:
                update_a_very_small_amount
            else:
                keep_going

    But if update_a_very_small_amount only updates a single object each loop, it's going to be very slow at updating. How would you go about this, preferably in a single thread? EDIT: I've tagged this language-agnostic as that seems the sensible thing, though anything more specific to Python would be great. ;-)
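    One single-threaded approach is to give idle updates a small time budget per pass rather than a single object, so input stays responsive while background work keeps pace. A minimal sketch in Python; poll_input and update are placeholder stubs standing in for the real game's input check and per-object work:

        import collections, time

        pending = collections.deque(range(10_000))  # stand-ins for objects needing updates

        def poll_input():      # non-blocking check; a real game would ask its UI library
            return None

        def update(obj):       # whatever incremental work each object needs
            pass

        def game_loop():
            while pending:     # a real loop runs forever; this one stops when caught up
                if poll_input() is not None:
                    break      # player acted: go process the turn
                # idle: spend a 5 ms slice updating as many objects as fit,
                # then check for input again
                deadline = time.monotonic() + 0.005
                while pending and time.monotonic() < deadline:
                    update(pending.popleft())

        game_loop()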

    Read the article

  • Informed TDD – Kata “To Roman Numerals”

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspx

    In a comment on my article on what I call Informed TDD (ITDD), reader gustav asked how this approach would apply to the kata "To Roman Numerals", and whether ITDD wasn't a violation of TDD's principle of leaving out "advanced topics like mocks". I'd like to respond to his questions with this article; there's more to say than fits into a comment.

    Mocks and TDD: I don't see how TDD avoids or is opposed to mocks. TDD and mocks are orthogonal: TDD is about process, mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps there is less need for mocks. But then... if the functionality you need to implement requires "expensive" resource access, you can't avoid using mocks, because you don't want to constantly run all your tests against the real resource. True, in ITDD mocks seem to be in almost inflationary use. That's not what you usually see in TDD demonstrations. However, there's a reason for that, as I tried to explain. I don't use mocks as proxies for "expensive" resources; rather, they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion. But if you think of mocks as "advanced", or if you don't want to use a tool like JustMock, then you don't need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by that by doing a kata.

    ITDD for "To Roman Numerals": gustav asked for the kata "To Roman Numerals". I won't explain the requirements again; you can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines. Now here is how I would do this kata differently.

    1. Analyse. A demonstration of TDD should never skip the analysis phase. It should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. "Formalization" in this case to me means describing the API of the required functionality. "[D]esign a program to work with Roman numerals", as written in this "requirement document", is not enough to start software development. Coding should only begin if the interface between the "system under development" and its context is clear. If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer, even if that's your fellow developer. Designing the interface is a task of its own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations: TDD is used to explore the API and implement it at the same time. To me that's a violation of the Single Responsibility Principle (SRP), which should hold not only for software functional units but also for tasks or activities. In the case of this kata the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions. Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but no specific test cases from the point of view of a customer. So I just "invent" some acceptance test cases by picking Roman numerals from a Wikipedia article.
    They are supposed to be just "typical examples" without special meaning. Given the acceptance test cases, I then try to develop an understanding of the problem domain. I'll spare you that; the domain is trivial and is explained in almost all kata descriptions. How Roman numerals are built is not difficult to understand. What's more difficult, though, might be to find an efficient solution to convert into them automatically.

    2. Solve. The usual TDD demonstration skips a solution-finding phase. Like the interface exploration, it's mixed in with the implementation. But I don't think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD. They're simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody. Before you code, you'd better have a plan for what to code. This does not mean you have to do "Big Design Up-Front". It just means: have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem. If that's limited, your solution will be limited, too. Fortunately, in the case of this kata your understanding does not need to be limited, so the logical solution does not need to be limited or preliminary or tentative either. That does not mean you need to know every line of code in advance; it just means you know the rough structure of your implementation beforehand, because it should mirror the process described by the logical or conceptual solution.

    Here's my solution approach: The Arabic "encoding" of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The "encoding" 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}, and the number is the sum of the set members. The Roman "encoding" is different: there is no base (like 10 for Arabic numbers); there are just digits of different values, and they have to be written in descending order. The "encoding" XVI is short for [10, 5, 1], and the number is still the sum of the members of this list. The Roman "encoding" thus is simpler than the Arabic: each "digit" can be taken at face value, with no multiplication by a base required. But what about IV, which looks like a contradiction to the above rule? It is not - if you accept that Roman "digits" are not limited to single characters. Usually I, V, X, L, C, D, M are viewed as "digits", and IV, IX etc. are viewed as nuisances preventing a simple solution. Everything looks different, though, once IV, IX etc. are taken as "digits" too. Then MCMLIV is just a sum: M+CM+L+IV, which is 1000+900+50+4. Whereas before it would have been understood as M-C+M+L-I+V, which is more difficult because here some "digits" get subtracted. Here's the list of Roman "digits" with their values: {1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M}. Since I take IV, IX etc. as "digits", translating an Arabic number becomes trivial: I just need to find the values of the Roman "digits" making up the number, e.g. 1954 is made up of 1000, 900, 50, and 4. I call those "digits" factors. If I move from the highest factor (M=1000) to the lowest (I=1), then translation is a two-phase process: find all the factors, then translate the factors found and compile the Roman representation. Translation is just a look-up.
    Finding, though, needs some calculation: find the highest remaining factor fitting into the value, remember it and subtract it from the value, then repeat with the remaining value and the remaining factors. Please note: this is just an algorithm, not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem; in more realistic examples the conceptual solution would be on a higher level of abstraction. With this solution in hand I finally can do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are three aspects to test: the translation, the compilation, and finding the factors. Testing the translation primarily means checking that the map of factors and digits is comprehensive. That's simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps: First check if an Arabic number equal to a factor is processed correctly (e.g. 1000=M). Then check if an Arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly. Then check if a number consisting of the same factor twice is processed correctly (e.g. 2000=[M,M]). Finally check if an Arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly. I feel I can start an implementation now. If something becomes more complicated than expected, I can slow down and repeat this process.

    3. Implement. First I write a test for the acceptance test cases. It's red because there's no implementation yet, not even of the API. That's in conformance with "TDD lore", I'd say. Next I implement the API. The acceptance test now is formally correct, but still red of course. This will not change even now that I zoom in, because my goal is not to satisfy these tests as quickly as possible, but to implement my solution in a stepwise manner. That I do by "faking" it: I just "assume" three functions to represent the transformation process of my solution. My hypothesis is that those three functions in conjunction produce correct results on the API level; I just have to implement them correctly. That's what I'm trying now - one by one. I start with a simple "detail function", Translate(), and I start with all the test cases in the obvious equivalence partition. As you can see, I dare to test a private method. Yes, that's a white-box test. But as you'll see, it won't make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. The implementation to satisfy the test is as simple as possible - just how TDD wants me to do it: KISS. Now for the second equivalence partition: translating multiple factors. (It's a pattern: if you need to do something repeatedly, separate the tests for doing it once and doing it multiple times.) In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science. Usually I would have implemented the final code right away; splitting it into two steps is just for "educational purposes" here. How small your implementation steps are is a matter of your programming competency: some "see" the final code right away before their mental eye, others need to work their way towards it. Having two tests I find more important. Now for the next low-hanging fruit: compilation. It's even simpler than translation. A single test is enough, I guess.
    And normally I would not even have bothered to write that one, because the implementation is so simple. I don't need to test .NET framework functionality. But again: if it serves the educational purpose... Finally the most complicated part of the solution: finding the factors. There are several equivalence partitions, but still I decide to write just a single test, since the structure of the test data is the same for all partitions. Again, I'm faking the implementation first: I focus on just the first test case. No looping yet. Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with details of how to actually accomplish the feat. That's left for a drill-down with a test of the fake function. There are two main equivalence partitions, I guess: either the first factor is appropriate, or some later one is. The implementation seems easy. Both test cases are green. (Of course this only works on the premise that there's always a matching factor, which is the case since the smallest factor is 1.) And the first of the equivalence partitions on the higher level also is satisfied. Great, I can move on. Now for more than a single factor. Interestingly, not just one test becomes green now, but all of them. Great! You might say that I then must not have done the simplest thing possible. And I would reply: I don't care. I did the most obvious thing. But I also find this loop very simple - even simpler than a recursion, of which I had thought briefly during the problem-solving phase. And by the way: the acceptance tests also went green. Mission accomplished, at least functionality-wise. Now I have to tidy things up a bit. TDD calls for refactoring. Not much refactoring is needed, because I wrote the code in top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren't perfected yet. But this way I saved myself from refactoring tediousness. At the end, though, some refactoring is required - but maybe in a different way than you would expect. That's why I rather call it "cleanup". First I remove duplication. There are two places where factors are defined: in Translate() and in Find_factors(). So I factor the map out into a class constant, which leads to a small adjustment in Find_factors(). And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me; they only have temporary value; they are brittle. Only acceptance tests need to remain. However, I carry over the single-"digit" tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all Roman "digits". This then is my final test class, and this is the final production code. Test coverage as reported by NCrunch is 100%.

    Reflexion: Is this the smallest possible code base for this kata? Surely not. You'll find more concise solutions on the internet. But LOC are of relatively little concern, as long as I can understand the code quickly. So-called "elegant" code, however, often is not easy to understand. The same goes for KISS code, especially if left unrefactored, as is often the case. That's why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top-down according to my design. I also could have implemented it bottom-up, since I knew some of the bottom of the solution: the leaves of the functional decomposition tree.
    Where things became fuzzy, because the design did not cover any more details (as with Find_factors()), I repeated the process in the small, so to speak: fake some top level and endure red high-level tests while first solving a simpler problem. Using scaffolding tests (to be thrown away at the end) brought two advantages: First, encapsulation of the implementation details was not compromised; private methods could naturally stay private, and I did not need to make them internal or public just to be able to test them. Second, I was able to write focused tests for small aspects of the solution, with no need to test everything through the solution root, the API. The bottom line thus for me is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: the Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development (being a researcher, being an engineer, being a craftsman) are represented as different phases. First find out what there is. Then devise a solution. Then code the solution; manifest the solution in code. Writing tests first is a good practice, but it should not be taken dogmatically, and above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus is almost inevitable, rather than left to a refactoring step at the end, which is often skipped for various reasons.

    PS: Yes, I have done this kata several times. But that only has an impact on the time needed for phases 1 and 2. I won't skip them because of that, and there are no shortcuts during implementation because of that.
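    For reference, the conversion algorithm the article describes (greedy factor finding plus look-up) fits in a few lines. Here is a sketch in Python rather than the article's C#:

        # Roman "digits" with their values, highest first, IV/IX etc. included
        FACTORS = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
                   (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
                   (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

        def to_roman(arabic):
            digits = []
            for value, digit in FACTORS:
                while arabic >= value:      # find the highest remaining factor...
                    digits.append(digit)    # ...translate it...
                    arabic -= value         # ...and repeat with the remainder
            return "".join(digits)          # compile the representation

        print(to_roman(1954))   # MCMLIV = M + CM + L + IV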

    Read the article

  • Multiple Object Instantiation

    - by Ricky Baby
    I am trying to get my head around object-oriented programming as it pertains to web development (more specifically PHP). I understand inheritance and abstraction etc., and know all the "buzz-words" like encapsulation and single purpose, and why I should be doing all this. But my knowledge falls short with actually creating objects that relate to the data I have in my database. Creating a single object that represents a single entity makes sense, but what are the best practises when creating 100, 1,000 or 10,000 objects of the same type? For instance, when trying to display a list of the items, ideally I would like to be consistent with the objects I use, but where exactly should I run the query/get the data to populate the object(s), as running 10,000 queries seems wasteful? As an example, say I have a database of cats and I want a list of all black cats: do I need to set up a FactoryObject which grabs the data needed for each cat from my database, then passes that data into each individual CatObject and returns the results in an array/object - or should I pass each CatObject its identifier and let it populate itself in a separate query?
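    The usual pattern is the first option: one query fetches every matching row, and a factory hydrates one object per row, so the cost is a single round trip rather than one per cat. A small sketch of the idea (in Python with sqlite3 for brevity; the shape is the same in PHP with PDO, and all names here are illustrative):

        import sqlite3

        class Cat:
            def __init__(self, cat_id, name, colour):
                self.id, self.name, self.colour = cat_id, name, colour

        def black_cats(conn):
            # one query hydrates every object; no per-cat round trip
            rows = conn.execute(
                "SELECT id, name, colour FROM cats WHERE colour = ?", ("black",))
            return [Cat(*row) for row in rows]

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE cats (id INTEGER PRIMARY KEY, name TEXT, colour TEXT)")
        conn.executemany("INSERT INTO cats (name, colour) VALUES (?, ?)",
                         [("Felix", "black"), ("Garfield", "orange")])
        print([c.name for c in black_cats(conn)])   # ['Felix']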

    Read the article

  • Storing attendance data in a database

    - by Ali Abbas
    So I have to store the daily attendance of the employees of my organisation from my application. The part where I need some help is the efficient way to store attendance data. After some research and brainstorming I came up with some approaches; could you point out which one is best, and any unobvious ill effects of them? The approaches are as follows: 1) Create a single table for the whole organisation and store empid, date, presentstatus as a row for every employee, every day. 2) Create a single table for the whole organisation and store a single row for each day with a comma-delimited string of the empids which are absent (I would generate the string in my application). 3) Create different tables for each department and follow the first method. Please share your views, and do mention any other good methods.
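    Of these, the first is the conventional design: one row per employee per day keeps every question ("who was absent on date X?", "how many days did employee Y miss?") a plain indexed query, whereas the comma-delimited string in the second approach can't be indexed or joined against. A minimal sketch, using sqlite purely for illustration:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""
            CREATE TABLE attendance (
                emp_id  INTEGER NOT NULL,
                day     TEXT    NOT NULL,              -- ISO date, e.g. '2012-06-01'
                present INTEGER NOT NULL CHECK (present IN (0, 1)),
                PRIMARY KEY (emp_id, day)              -- one row per employee per day
            )""")
        conn.execute("INSERT INTO attendance VALUES (?, ?, ?)", (42, "2012-06-01", 0))
        absent = conn.execute(
            "SELECT emp_id FROM attendance WHERE day = ? AND present = 0",
            ("2012-06-01",)).fetchall()
        print(absent)   # [(42,)]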

    Read the article

  • The New My Oracle Support User Interface (HTML-based)

    - by user793553
    A single source for learning about the latest enhancements to the My Oracle Support user interface... On January 27, 2012, we launched a new My Oracle Support HTML-based user interface (UI). The new user interface is built using Oracle's Application Development Framework and is our first step towards providing a single online support portal for our customers and partners - one that all users will transition to in the coming months. Further enhancements to the HTML-based user interface are planned for April 13, 2012. We will transition users of the standard Flash-based interface in the coming months. To help facilitate a smooth transition, we invite you to preview and begin using the new My Oracle Support interface by going to supporthtml.oracle.com and signing in with your Single Sign-On username and password. For full information regarding functionality, supported browsers, and links to quick and easy videos on how to navigate the new UI, please check out Doc ID 1385682.1.

    Read the article

  • When designing a job queue, what should determine the scope of a job?

    - by Stuart Pegg
    We've got a job queue system that'll cheerfully process any kind of job given to it. We intend to use it to process jobs that each contain two tasks: a Job (pass information from one server to another) consists of a Fetch task (get the data, slowly) and a Send task (send the data, comparatively quickly). The difficulty we're having is that we don't know whether to break the tasks into separate jobs or process the job in one go. Are there any best practices or useful references on this subject? Is there some obvious benefit to a method that we're missing? So far we can see these benefits for each method. Split: the job lease length reflects the task length, rather than the total of the two; finer granularity on recovery (if we lose outgoing connectivity we can tell all the send jobs to retry); and the starting state of the second task is saved to the job history, which helps with debugging (although similar logging could be added in the single-task method). Single: only a single job to be scheduled, so less processing overhead; and data is not stale on recovery (if the outgoing downtime is quite long, the pending send jobs could be outdated).
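    For concreteness, here is a minimal sketch of the split variant (hypothetical names, with an in-process queue standing in for the real system): the fetch job finishes by enqueuing an independent send job, so each job's lease matches its own task length and a send retry never repeats the expensive fetch:

        import queue
        from dataclasses import dataclass

        @dataclass
        class Job:
            kind: str        # "fetch" or "send"
            payload: dict
            lease_s: int     # lease reflects this task's expected length

        jobs = queue.Queue()

        def fetch(source):     # stand-in for the slow data pull
            return f"data from {source}"

        def send(dest, data):  # stand-in for the quick transfer
            print(f"sent to {dest}: {data!r}")

        def worker():
            while not jobs.empty():
                job = jobs.get()
                if job.kind == "fetch":
                    data = fetch(job.payload["source"])
                    jobs.put(Job("send", {"dest": job.payload["dest"], "data": data},
                                 lease_s=10))
                else:
                    send(job.payload["dest"], job.payload["data"])

        jobs.put(Job("fetch", {"source": "server-a", "dest": "server-b"}, lease_s=120))
        worker()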

    Read the article

  • 12.04 x64 Server Login Failure

    - by ThoughtCoder
    After some serious GRUB issues following routine kernel updates, forcing a grub reinstall via chroot and some single-user mode maintenance, I now cannot log in to my server (except via single-user mode). When attempting to SSH to the server, my connection is reset immediately after entering the username - no password prompt is presented. I've plugged my monitor and keyboard into the headless server, and when trying to log in I receive: login: "PAM Failure, aborting: Critical error - immediate abort" immediately after entering my username - again, no password prompt is displayed. I am able to gain access via kernel recovery mode and log in as root through single-user mode with networking. I've attempted a dpkg reconfigure, believing I may have had some incomplete/borked packages, but to no avail. Looking through /etc/pam.d/login doesn't seem to lead me in any obvious direction, and I'm afraid I'm out of ideas. Googling doesn't help me much: one guy reinstalled (I really don't want to do this), and the rest I could find were old Gentoo-related bugs. Any tips?

    Read the article

  • Modern workflow / project architecture with PHP

    - by Sprottenwels
    I wonder how a professional developer would design the backend part of his application. I try to use PHP as seldom as possible and only for data serving/saving purposes. To do so, I would create a module/class for every greater part of my application, then write some single-purpose script files which are called by JavaScript. E.g.: the user clicks a "retrieve data" button; JavaScript calls retrieveData.php; retrieveData.php creates an instance of the needed class, then executes myInstance->retrieve(); the data is returned to my JS by retrieveData.php. However, I see some disadvantages in this practice: I will need a single script file for every action (e.g. retrieveData.php, saveData.php etc.), and I need a man-in-the-middle step: from JS to my single script, to my classes. How are modern applications designed to accomplish what I need to do?
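    The common answer is a front controller: one entry point receives every request and dispatches on a resource/action parameter, so there is one script in total instead of one per action. A sketch of the dispatch idea (in Python for brevity; in PHP the entry point would be a single api.php reading $_GET and calling the mapped class method - all names here are illustrative):

        # single entry point: resource/action names map to handler classes,
        # instead of one script file per action
        class CatHandler:
            def retrieve(self, params):
                return {"cats": ["Felix"]}   # would call the model layer
            def save(self, params):
                return {"saved": True}

        ROUTES = {"cat": CatHandler}

        def dispatch(resource, action, params):
            handler = ROUTES[resource]()     # e.g. /api.php?resource=cat&action=retrieve
            return getattr(handler, action)(params)

        print(dispatch("cat", "retrieve", {}))   # {'cats': ['Felix']}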

    Read the article

  • How can I convince a project manager that there is no way to solve all the compatibility issues?

    - by SAFAD
    I have been working on this project for more than a year now, and we are close to release. The project manager wants the product to be perfect and working in every single aspect. I like that, and I love working under the perfection idea, but it seems he is delaying the launch too much because of compatibility issues: he wants the product to work in every single installation and every single configuration possible, and in most cases the product just works without issues once it's in the hands of the client. UPDATE: Yes, the product doesn't work properly when there are conflicts - for example, other products that follow neither guidelines nor standards when loading libraries (which causes a double library load, which leads to failure); caching is another example, and so on - but we warn the clients about the conflict before purchase and help them fix it afterwards. I've tried to explain it by giving some examples from major products. He understands the situation, but cannot believe that it is near impossible (if not impossible) to do what he wants. Hope this is clarified enough for the community to answer.

    Read the article

  • Google Analytics account setup for multiple personal websites?

    - by User
    I have multiple personal websites that I develop, and I plan to develop more over time. The number of websites is currently greater than one but less than 50. Currently I have a single Google account with a single Analytics account that has a web property for each of my sites. My understanding is that you can have up to 25 Analytics accounts attached to a single Google account, and each of those 25 accounts can have up to 50 web properties in it, which would allow me to track up to 1,250 sites. I don't think I'll be hitting that number anytime soon; however, are there other reasons to structure accounts differently, such as using a separate Google account for each site and then adding myself as an administrator?

    Read the article

  • Oracle EBS R12 on Exadata V2, MAA and High Performance

    - by longchun.zhu
    A hands-on session covering: 1. Oracle EBS R12 on Database Machine MAA & Performance Architecture. 2. Oracle EBS R12 Single Instance Node Deployment Procedure. 3. Start Rapid Install Wizard. 4. Oracle EBS R12 Single Instance Node Chinese Patch Update. 5. Applying Patches. 6. Upgrade Application Database Version to 11g Release 2. 7. Database Upgrade. 8. Deploy Clone Application Database to Sun Oracle Database Machine. 9. Migrate Application Database File System to Exadata ASM Storage. 10. Convert Application Database Single Instance to RAC. 11. Configure High Availability & High Performance Architecture with Exadata.

    Read the article

  • Testing a chess game

    - by mousey
    There is a piece of chess game software, and we need to test the following method: boolean canMoveTo(int x, int y). Here x and y are the coordinates on the chess board, and the method returns true or false according to whether the piece can move to that position. We need to test this method for a pawn piece; you can set up the board any way you like prior to running a test case. The source code is not provided.
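    Since the source isn't available, the tests can only pin down the pawn's movement rules from the outside. A sketch of what such cases might look like, written in Python against a deliberately simplified stand-in for the real canMoveTo (single step, double step from the home row, diagonal only when capturing; no en passant or board-edge checks):

        import unittest

        class Pawn:
            def __init__(self, x, y, occupied=frozenset()):
                self.x, self.y = x, y      # white pawn; moves toward larger y
                self.occupied = occupied   # squares held by other pieces

            def can_move_to(self, x, y):
                if x == self.x and (x, y) in self.occupied:
                    return False                            # blocked straight ahead
                if x == self.x and y == self.y + 1:
                    return True                             # single step
                if x == self.x and y == self.y + 2 and self.y == 1:
                    return (x, y - 1) not in self.occupied  # double step from home row
                if abs(x - self.x) == 1 and y == self.y + 1:
                    return (x, y) in self.occupied          # diagonal only to capture
                return False

        class PawnTest(unittest.TestCase):
            def test_single_step(self):
                self.assertTrue(Pawn(4, 1).can_move_to(4, 2))

            def test_double_step_only_from_home_row(self):
                self.assertTrue(Pawn(4, 1).can_move_to(4, 3))
                self.assertFalse(Pawn(4, 2).can_move_to(4, 4))

            def test_diagonal_requires_a_capture(self):
                self.assertFalse(Pawn(4, 1).can_move_to(5, 2))
                self.assertTrue(Pawn(4, 1, {(5, 2)}).can_move_to(5, 2))

        if __name__ == "__main__":
            unittest.main()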

    Read the article

  • TicTacToe AI Making Incorrect Decisions

    - by Chris Douglass
    A little background: as a way to learn multinode trees in C++, I decided to generate all possible TicTacToe boards and store them in a tree, such that the branch beginning at a node contains all boards that can follow from that node, and the children of a node are the boards that follow in one move. After that, I thought it would be fun to write an AI to play TicTacToe using that tree as a decision tree. TTT is a solvable problem where a perfect player will never lose, so it seemed an easy AI to code for my first time trying an AI. Now when I first implemented the AI, I went back and added two fields to each node upon generation: the number of times X will win and the number of times O will win in all children below that node. I figured the best solution was to simply have my AI, on each move, choose and go down the subtree where it wins the most times. Then I discovered that while it plays perfectly most of the time, I found ways to beat it. It wasn't a problem with my code, simply a problem with the way I had the AI choose its path. Then I decided to have it choose the subtree with either the maximum wins for the computer or the maximum losses for the human, whichever was greater. This made it perform BETTER, but still not perfectly: I could still beat it. So I have two ideas, and I'm hoping for input on which is better. 1) Instead of maximizing the wins or losses, I could assign values of 1 for a win, 0 for a draw, and -1 for a loss. Then choosing the subtree with the highest value will be the best move, because that next node can't be a move that results in a loss. It's an easy change in the board generation, but it retains the same search space and memory usage. Or... 2) During board generation, if there is a board such that either X or O will win on their next move, only the child that prevents that win will be generated. No other child nodes will be considered, and generation will proceed as normal after that. It shrinks the size of the tree, but then I have to implement an algorithm to determine whether there is a one-move win, and I think that can only be done in linear time (making board generation a lot slower, I think?). Which is better, or is there an even better solution?
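    The first idea is essentially minimax with scores propagated up the tree (+1 forced win, 0 draw, -1 forced loss), which does play TicTacToe perfectly. A compact sketch in Python, searching on the fly rather than from a pregenerated tree; the board is a 9-character string:

        LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

        def winner(b):
            for i, j, k in LINES:
                if b[i] != " " and b[i] == b[j] == b[k]:
                    return b[i]
            return None

        def best_move(b, me, opp):
            """Return (score, move) for `me`: +1 forced win, 0 draw, -1 forced loss."""
            if winner(b) == opp:
                return -1, None          # the previous move ended the game
            moves = [i for i, c in enumerate(b) if c == " "]
            if not moves:
                return 0, None           # board full: draw
            best = (-2, None)
            for m in moves:
                child = b[:m] + me + b[m+1:]
                score = -best_move(child, opp, me)[0]   # opponent's best is our worst
                if score > best[0]:
                    best = (score, m)
            return best

        print(best_move("XX O O   ", "X", "O"))   # -> (1, 2): completing the top row wins

    Maximizing raw win counts fails exactly as described, because a subtree can contain many winning leaves and still allow the opponent one forced line; propagating the minimax value instead guarantees the chosen child is never a losing one.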

    Read the article

  • Visual Assist inserts extra spaces?

    - by Kugel
    I'm using a Visual Assist X trial on VS2010 Pro. When I do "extract method" or "modify method signature" refactorings, it gives me this: void Solver::Work( Stack &s, Board &b ) However, I would really appreciate it if it gave me this: void Solver::Work(Stack &s, Board &b) No extra spaces. Is there a way to set this?

    Read the article
