Search Results

Search found 5509 results on 221 pages for 'sound effects'.


  • Thunderbird: export email account settings

    - by zpea
    I'd like to create a new Thunderbird profile using the same mail accounts I already configured in my old profile. As there are quite a number of accounts, it would be great to have a way to export/import them instead of writing down the settings just to fill them in again in the new profile. Searching the web and this site, I mainly found the following suggestions, none of which match what I need: Copy the whole profile: not possible for me, as I don't want to copy other settings, the downloaded mail data, etc., and the old profile broke when it ran out of space in the home folder anyway. Use MozBackup: there seem to be several programs by that name (forks?); in any case it's Windows-only and hence not an option (I am mostly on Linux and prefer platform-independent solutions anyway). Use Accountex: seems to do what I want, but it is not compatible with the current Thunderbird version (it supports only up to version 3.1). Posts with various tips from four years ago: top results in a Google search, but they do not work in current versions of Thunderbird either. Did I overlook anything? After all, it doesn't sound like I'm looking for something nobody has ever looked for.
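
    For reference, Thunderbird keeps account configuration in prefs.js inside the profile directory, under keys such as mail.account.*, mail.server.*, mail.identity.* and mail.smtpserver.*. The sketch below (Python; the profile path is a placeholder) simply filters those lines out of an old prefs.js so they can be reviewed or transferred by hand. It is not an official export mechanism, and passwords are stored elsewhere, so treat it as a starting point only.

    ```python
    # Rough sketch: list account-related settings from a Thunderbird prefs.js
    # so they can be reviewed or copied into a new profile by hand.
    # Assumes the standard user_pref("key", value); format; the path is a placeholder.
    import re

    PREFS = "/home/you/.thunderbird/old-profile/prefs.js"   # placeholder path
    PREFIXES = ("mail.account", "mail.server", "mail.identity",
                "mail.accountmanager", "mail.smtpserver", "mail.smtp")

    pref_re = re.compile(r'user_pref\("([^"]+)",\s*(.+)\);')

    with open(PREFS, encoding="utf-8") as f:
        for line in f:
            m = pref_re.match(line.strip())
            if m and m.group(1).startswith(PREFIXES):
                print(line.rstrip())
    ```

    Pasting the printed lines into a fresh profile's prefs.js (with Thunderbird closed) is not officially supported, so it is a review aid rather than a guaranteed import.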

    Read the article

  • Rendering a frame is producing noise from speakers in Windows and Linux

    - by Robber
    When any hardware accelerated application is rendering a frame (or many of them) a very short noise is coming from my speakers. This can be a game, a WebGL application or XBMC. When the application/game is rendering many frames per second (like most of them do) the noise is a continuous buzzing that gets higher pitched with higher framerates. This applies to Linux and Windows, so I'd assume it's a hardware problem. The current hardware in the PC is: CPU: Core2Quad Q9550 GPU: Radeon HD 5770 RAM: 2x2GB DDR2 Motherboard: Asus P5QLD PRO PSU: be quiet! Pure Power 530W Screen and speakers: Old 720p LCD TV connected via VGA and aux cable Muting the TV stops the noise, muting Windows doesn't. I tried replacing the PSU first (used a Tagan 700W PSU before) because I thought it was a power problem. It wasn't. I tried replacing the motherboard (used a ASUS P5B SE before) next because I thought it was a sound card problem. It wasn't. I tried the GPU in a different PC because I thought it was a broken graphics card. It worked perfectly fine in the other PC. I thought it might be interference, but moving the audio cable around changes absolutely nothing. I tried using an HDMI cable instead and that did work, but is not an option since my TV has only one HDMI input and I need that for my PS3.

    Read the article

  • Windows won't boot after moving house. How do I solve this?

    - by James
    I've just moved house and tried to set up my desktop after packing it away. Now when I power it on, the BIOS boots and no errors are found, but when the computer tries to boot into Windows 7 it makes a continuous fast beeping sound and displays a black screen. What I've done so far: reset to UEFI defaults; played about with the RAM - I had 4x4 GB sticks, took them all out to test for a motherboard error, and am now using only one 4 GB stick; changed my GPU - I took my GTX 580 out and am now using the onboard Intel 3000 graphics, and since the BIOS/UEFI screens display correctly I no longer think it's a GPU-based error; checked all of the connections, and nothing seems to be loose. My HDD setup is: 2x 128 GB SSDs in RAID 0 as my main C: drive (possible cause of the error?), 1x 1 TB games drive, 1x 2 TB data drive. I've also got a Blu-ray drive connected. After searching the internet I'm pretty much out of suggestions, but I'm currently downloading a live CD to see if it will boot and whether I can access some files on my drives.

    Read the article

  • Could replacing an old hard drive's circuit board make it work again?

    - by oscilatingcretin
    I have a 12-year-old, 10gb Maxtor drive that died on me around 7 years ago, but I have not had the heart to throw it away. When the computer powers on, it whirrs silently as it tries to spin up and then it stops. So, a few years ago, I sent it off for professional data recovery. They were able to retrieve quite a bit from it, but I know there's a bunch more there. It only cost $700, so I just chalked up the lackluster recovery effort to "you get what you pay for" considering that most companies will charge you several thousands of dollars for this kind of data recovery. When they sent the drive back, I couldn't help but plug it back in just to see if maybe they unjammed something in the process of disassembling/reassembling the drive. To my surprise, the drive had a much healthier spin-up sound and actually stayed spinning for several minutes before winding down to a halt. Windows is even able to detect and interact with the drive, but I get I/O errors after so many minutes of waiting for it to mount. Before I start doing stupid stuff with it like dropping it on the ground, freezing it, crapping on it, etc, I decided to buy the exact same model off Ebay so that I could swap the circuit boards as a last-ditch effort. While it's en route, I thought I'd come here to ask if this is even a worthwhile effort and, if even remotely so, what should I know before ripping off the old board and slapping on the new?

    Read the article

  • Linux on MacBook Air

    - by enduser
    I'm thinking of getting a MacBook Air. The answers to this post will help me make my decision. My questions and my understanding of current solutions are: How difficult is it to install a Linux-based OS (like Fedora or Ubuntu)? I've heard a little about rEFIt, but am not sure what to make of it. Is it completely necessary? Do I still need it if I don't plan to dual boot with Mac OS X? Also a dual-boot isn't necessary, I'd just like to run Fedora/Ubuntu by itself, but I'm curious to know if a dual boot is simple. Does everything 'just work'? In my current laptop I need to add a wireless driver (Broadcom card). I've heard Macs use Broadcom wireless cards. Will this be an issue? How about graphics/touchpad (& multitouch)/sound? I'm aware there are tutorials out there on how to install some older version of some os on your Mac, but my questions are a bit more general: Will it be easy to use (install and configure drivers for) recent Linux distributions with a new MacBook Air? Note: I don't mind extra configuration, but would like to know where it'll be necessary, because if it's too much of a hassle I'll look at other hardware.

    Read the article

  • Laptop seemingly randomly "freezes" to the point of no longer executing applications

    - by Aierou
    After upgrading to Windows 8 Pro on my Samsung Series 7 Chronos NP700Z5C-S04US (may be relevant, I'm not sure), my computer started refusing to run any service or application, and the clock stops updating, until a hard shutdown is performed. This seems to occur randomly after periods of inactivity and I have no idea of the cause. Measures I have already taken to try to stop this: -Googling potential answers to this problem, obviously -Updating all drivers -Researching all events that occurred around the time of the failure to respond (with no results) -Applying "bcdedit /set disabledynamictick no", which was a hotfix for what seemed to be the same error but was not. Here is some more, potentially related, information about the error: -No BSOD (actually, I haven't experienced a BSOD at all with Windows 8) -The computer seems to have a problem shutting down/restarting most of the time (it hangs at the point where it should completely turn off) -New sound instances cannot play, but previously loaded sounds keep playing -As mentioned before, the clock freezes at the time of the error -USB devices function properly -Servers that I was running fail to respond on my end, but stay online. If you require more information, please request it specifically and I will be happy to oblige. Thanks.

    Read the article

  • System randomly freezes yet mouse still moves

    - by user784446
    This problem has persisted for the past 48 hours. The first time it happened, a program I was running stopped responding, so I tried to end it from Task Manager. The processes were listed fine at first, until I hovered over them. Eventually, although the mouse could still move, after a few more clicks it finally stopped moving too, and the screen went blank shortly thereafter. The second time it occurred, items on the screen stopped responding - hovering over the taskbar and the like wouldn't elicit a response - though sound would still play. Eventually the mouse became unresponsive and the system restarted itself. I suspect it may be a problem with my SSD. After looking through some Google search results, I downloaded HD Tune Pro to determine whether there's a problem with the drive. The results reported a problem with the reallocated sector count, and an error scan also revealed 48 bad sectors. An attempt to back up the contents of the most important areas of the drive returned a few Explorer "Error: cannot read source from disk" errors. Should I ditch the drive and use another, or is there anything that can be done to repair it? SSD: OCZ Petrol 64 GB CPU: AMD Athlon II X4 640 RAM: Generic 3 GB DDR2 Motherboard: Gigabyte MA74GM-S2H OS: Windows 7 Ultimate x64 Thanks!
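
    For a second opinion on those counters outside HD Tune, smartmontools can read the same SMART attributes directly. A minimal sketch (Python wrapping smartctl; the device path is a placeholder and naming differs between platforms, and the attribute names assume typical ATA SMART output):

    ```python
    # Minimal sketch: print the SMART attributes most relevant to bad/reallocated
    # sectors via smartctl (smartmontools). Device path is a placeholder; needs
    # to run with sufficient privileges.
    import subprocess

    DEVICE = "/dev/sda"  # placeholder
    WATCH = ("Reallocated_Sector_Ct", "Current_Pending_Sector",
             "Offline_Uncorrectable", "Reported_Uncorrect")

    out = subprocess.run(["smartctl", "-A", DEVICE],
                         capture_output=True, text=True, check=False).stdout

    for line in out.splitlines():
        if any(attr in line for attr in WATCH):
            print(line)
    ```

    Non-zero reallocated or pending sector counts that keep growing generally point toward replacing the drive rather than trying to repair it.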

    Read the article

  • What should I encrypt in Debian during install?

    - by ianfuture
    I have seen various guides and recommendations on the web about how best to do this, but nothing that clearly explains the best way and why. I understand that part of Debian needs to remain unencrypted on its own partition during install so that the system can boot; most of the information I have seen calls this /boot and sets the boot flag. Next, I believe the best approach is to create another partition out of all the remaining disk space, encrypt it, create LVM on top of that, and then within the LVM create my various volumes, naming them and choosing their size and file system type. Can I include swap in the encrypted LVM part? Is this approach sound? If so, what partitions should I use (this is going to be a minimal server install, with a view to installing what I need for a dev server as and when I need it)? Finally, how does the installer know what to put in each partition I define? I appreciate there is more than one question here, but any help and suggestions would be appreciated. If further clarification is needed, please mention it in the comments. EDIT, 16/3/2010: After Richard Holloway's reply I thought it relevant to add this info: the reason I want to do this is to explore maximising security on any server install and setup, due to my interest in Computer Security and Forensics. I am also trying to perform the task as if it were being done in an enterprise situation. On a technical note, once it is set up and configured with minimal packages and SSH, this server will not be physically easy to access, so I will only be logging in via SSH. (Yes, I know: why encrypt something no one will ever be able to get their hands on? Because I can and I want to is the simple answer, but see above too.)
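
    For reference, the layout described above (a small unencrypted /boot, one big LUKS container, LVM inside it, and swap as a logical volume inside the encrypted volume group) corresponds roughly to the following sequence of standard cryptsetup/LVM2 commands, which is essentially what the Debian installer's guided "encrypted LVM" option does for you. This is only an illustrative sketch: device names and sizes are placeholders, and the commands are printed rather than executed because running them would destroy data on the target disk.

    ```python
    # Sketch of the partition/encryption layout discussed above, expressed as the
    # cryptsetup/LVM2 commands that Debian's "encrypted LVM" option effectively
    # performs. Device names and sizes are placeholders; commands are only printed.
    DISK = "/dev/sda"          # placeholder
    BOOT = DISK + "1"          # small unencrypted partition that becomes /boot
    CRYPT = DISK + "2"         # the rest of the disk, becomes the LUKS container

    cmds = [
        ["cryptsetup", "luksFormat", CRYPT],             # encrypt everything except /boot
        ["cryptsetup", "luksOpen", CRYPT, "cryptroot"],  # exposes /dev/mapper/cryptroot
        ["pvcreate", "/dev/mapper/cryptroot"],           # LVM physical volume on top of LUKS
        ["vgcreate", "vg0", "/dev/mapper/cryptroot"],
        ["lvcreate", "-L", "2G", "-n", "swap", "vg0"],   # swap lives inside the encrypted VG
        ["lvcreate", "-l", "100%FREE", "-n", "root", "vg0"],
        ["mkfs.ext4", BOOT],                             # /boot stays unencrypted
        ["mkfs.ext4", "/dev/vg0/root"],
        ["mkswap", "/dev/vg0/swap"],
    ]

    for cmd in cmds:
        print(" ".join(cmd))
    ```

    Because swap sits inside the encrypted volume group, swapped-out memory is protected along with everything else, so including it there is both possible and sensible.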

    Read the article

  • Is this DVD drive broken? Brand new, I need help convincing

    - by acidzombie24
    I am asking because I know Dell is going to give me a problem. How do I know if the DVD drive in my laptop is broken? I burnt 4 DL discs and they ALL failed. I called, and Dell suggested Roxio; I used it and burnt one disc without error and a second disc with an error. With both apps there were no 'problems' during the burning process - the discs only failed during verification. Some of these bad discs don't work on other PCs, and one locks up Windows when I click a specific file. Does that sound like a broken burner to you? When I called Dell they told me that since the drive can read discs properly 100% of the time, and the software doesn't fail during the burning process, it's not a broken drive. They forwarded me to software support, who demand a fee (I think $100) to help me fix my software. I am annoyed because I don't want to be on the phone just so they can watch me burn a DVD, and since I burned one correctly once, I don't want to happen to burn correctly again and have them say they solved my problem (doing nothing) and charge me while refusing a refund. -edit- The errors I got were 1) the request could not be performed because of an I/O device error, 2) Windows locking up when opening one specific file, 3) Cannot copy: Data error (CRC). NOTE: the file that causes the problems is random on every disc.

    Read the article

  • Problem with Amiga 1200 accelerator board

    - by cc0
    I recently walked past a dump where, out of the corner of my eye, I spotted something that looked like a huge keyboard. I went to take a closer look and found out that it was an Amiga 1200 with an 030 accelerator board and a Scala dongle. Jackpot! So anyway: I dried it, cleaned it, and it works, but the floppy drive was not powering on, and the same went for the hard drive. I am using an old Amiga 1200 PSU that made a strange high-pitched noise when I tried to boot the Amiga with the hard drive installed. I removed the hard drive and it booted fine, with the PSU not emitting any detectable noise. However, when I have the 030 installed it sometimes reboots and shows a red "Software Error" screen. I tried removing the memory on the board; same effect. Sometimes it does not boot at all and just gives a black screen. Someone suggested the card had problems with 3.1 ROMs, but this Amiga has only 3.0 ROMs installed. Does anyone have any theories as to why it seems unstable? I don't have any other Amiga parts to swap in to test things, so I'd really appreciate some sound input here so I know what to look for in order to try to fix it. And merry Christmas, everyone :]

    Read the article

  • How can I get DVDs playing after a Vista to XP change?

    - by Liath
    I replaced my vista install on a Dell Inspiron 1525 with XP and have managed to get most things up and running again however I'm having trouble with playing DVDs. When I try and play a DVD I get the following message: Windows Media Player cannot play this DVD because there is a problem with digital copy protection between your DVD drive, decoder, and video card. Try installing an updated driver for your video card. I have ensured that my drive is configured to play Region 2 discs (I'm in the UK), I've installed the most up to date XP codec pack which makes me think it's a driver issue. In device manager I have got my DVD drivers up to date however under "Other Devices" I'm missing several which sound key: Audio Device on High Definition Audio Bus Modem Device on High Definition Audio Bus Video Controller Video Controller (VGA Compatible) However I've installed all the relevant drivers I can find on the Dell website. The drive itself is working - I've run software from the drive. I'm afraid I am far from a sys-admin so I'm struggling on this one. How can I get my DVDs playing again?

    Read the article

  • Graphics card ATI Asus 9250 128 MB AGP problem. Monitor switching off.

    - by Dominick1978
    I have an old system with these specs: Motherboard: VIA P4X266A (with AGP 4x) CPU: P4 1.7 GHz Memory: 1152 MB DDR SDRAM Graphics card: Voodoo 3 2000 16 MB PSU: 300 W. It also has 1 DVD-ROM, 1 DVD-RW and 2 hard drives (all 4 connected via Molex), plus 1 SoundBlaster sound card and 1 Ethernet card (both connected via PCI). OS: XP Pro. Recently I bought an Asus 9250 128 MB AGP card to replace the Voodoo. When I switch on the PC, the initial screens (up to and including the XP logo) are sometimes distorted with blurred colours. Once XP is loaded there is some flickering, but the rest is OK. XP can't recognise the card and sees it as just a VGA adapter. I downloaded the latest XP drivers from the ATI website and installed them. After the restart everything is fine (no distorted image or blurring) until after the XP logo; after that, the monitor turns off while the PC is still running. I have tried many drivers but the problem persists (of course I removed the Voodoo drivers from the display adapter properties first). Only once did I manage to get into XP (after changing the BIOS setting for the graphics card from 256 MB to 128 MB), but the driver in the control panel had an exclamation mark (ATI 9250!) with an 'ATI 9250 Secondary' entry without one under the display adapter tab, and the ATI Catalyst program said that it couldn't find the card. That was the only time I got beyond the XP logo; now the monitor switches itself off. So, what do you think? 1) Is this a broken card? 2) Is it the card's drivers? 3) Is it a PSU fault? 4) Anything else? Thanks for your help, and excuse my English!

    Read the article

  • Windows 7 host with Ubuntu Guest and a performance hit, memory locks?

    - by Cyrylski
    I have a brand new Lenovo T510 with a Core i5 and 4 GB of RAM, running Windows 7. I installed Ubuntu 10.10 in VirtualBox. For some reason the system gets really slow with this setup, which makes me really angry. The video card is shared with full 3D support enabled, and 1 GB of RAM is allocated to the Ubuntu machine. It may sound stupid, but WHY is the whole memory consumed in an instant when I run VirtualBox? I struggled for about 10 minutes restraining myself from a brutal reset, and now everything runs smoothly, but memory "in use" in Resource Monitor sits at a flat 3 GB with only Chrome running. I'm new to Windows 7, but I'm really disappointed with the performance at this point... I used to work in a different environment with much slower hardware and there was no such problem (Windows XP as a guest on Ubuntu, 1 GB out of 2 GB allocated to the WinXP guest, on Intel GMA graphics) - that is, until I totally clogged the RAM there. But there I was able to run Chrome, Firefox and an Apache server in Ubuntu on 1 GB of RAM, plus Photoshop CS4 on Windows XP, and it worked. Here I can't even get beyond setting up Ubuntu properly. I bet I'm doing something wrong.

    Read the article

  • Non-volatile cache RAID controllers: what kind of protection is there against NVCACHE failure?

    - by astrostl
    The battery back-up (BBU) model: admin enables write-back cache with BBU writes are cached to the RAID controller's RAM (major performance benefit) the battery saves uncommitted and cached data in the event of a power loss (reliability) If I lose power and come back within a day or so, my data should be both complete and uncorrupted. The downside to this is that, if the battery is dead or low, OR EVEN IF IT IS IN A RELEARN CYCLE (drain/charge loops to ensure the battery's health), the controller reverts to write-through mode and performance will suffer. What's more, the relearn cycles are usually automated on a schedule which may or may not happen in the middle of big traffic. So, that has to be manually disabled and manually scheduled for off-hours if it's a concern. Annoying either way. NV caches have capacitors with a sufficient charge to commit any uncommitted-to-disk data to flash. Not only is that more survivable in longer loss situations, but you don't have to concern yourself with battery death, wear-out, or relearning. All of that sounds great to me. What doesn't sound great to me is the prospect of that flash module having an issue, though. What if it's completely hosed? What if it's only partially hosed? A bit corrupted at the edges? Relearn cycles can tell when something like a simple battery is failing, but is there a similar process to verify that the flash is functional? I'm just far more trusting of a battery, warts and all. I know the card's RAM can fail, the card itself can fail - that's common territory, though. In case you didn't guess, yeah, I've experienced a shocking-to-me amount of flash/SSD/etc. failure :)

    Read the article

  • What is the best multi-server configuration with OpenVPN?

    - by sebut
    We have a number of database servers running MongoDB on Debian, plus a number of application servers, also on Debian. The DB servers hold replicating DB clusters, so they need to talk to each other. Application servers need to talk to all DB servers (for reasons of fault tolerance). The servers are potentially spread across multiple hosting centers, so we need secure channels between all servers. The number of servers is bound to grow, so we need a VPN solution that's easy to maintain and expand. This is why I feel that the SSH tunnels we use for testing might not be up to the task, and OpenVPN seems the way to go. I have ruled out TAP, since I understand that it would mean all traffic going to all servers - though perhaps this is a misunderstanding and TAP acts more like a switch? With TUN devices, I imagine all DB servers would live in their own separate subnet; they would also each need a client configured to be able to connect to each of their peers. The application servers could live in a common subnet range with only a client config. Does this sound like a reasonable setup? Strangely, I did not find anything on the web about multi-server setups with OpenVPN. Thanks for all insights and ideas!
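
    One way to read this in concrete terms: with TUN and a single routed VPN subnet, `topology subnet` plus `client-to-client` on one OpenVPN server lets every DB and application server reach every other over the tunnel without per-peer client configs. The sketch below just writes out a minimal server-side config to show which directives are involved; the port, subnet, and certificate paths are placeholders, not a tested production setup.

    ```python
    # Sketch: render a minimal TUN-mode OpenVPN server config for a flat
    # "every server can reach every other server" VPN. All values are placeholders.
    conf_lines = [
        "port 1194",
        "proto udp",
        "dev tun",
        "topology subnet",
        "# one routed VPN subnet shared by DB and application servers",
        "server 10.8.0.0 255.255.255.0",
        "# let connected clients reach each other directly",
        "client-to-client",
        "# keep client addresses stable across restarts",
        "ifconfig-pool-persist ipp.txt",
        "ca ca.crt",
        "cert server.crt",
        "key server.key",
        "dh dh2048.pem",
        "keepalive 10 120",
        "persist-key",
        "persist-tun",
    ]

    with open("server.conf", "w") as f:
        f.write("\n".join(conf_lines) + "\n")
    ```

    client-to-client routes traffic between connected clients inside the OpenVPN process itself, so the DB replication peers do not each need their own point-to-point configuration.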

    Read the article

  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
    Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it’s not enough to have a test red or green, but it’s also important to have it red or green for the right reasons. While for me, it’s sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he’s right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense, see the rest of this article). This made me think deeply for some days now. In the end I found out that the ‘right reason’ changes in my understanding depending on what development phase I’m in. To make this clear (at least I hope it becomes clear…) I started to describe my way of working in some detail, and then something strange happened: The scope of the article slightly shifted from focusing ‘only’ on the ‘right reason’ issue to something more general, which you might describe as something like  'Doing real-world TDD in .NET , with massive use of third-party add-ins’. This is because I feel that there is a more general statement about Test-driven development to make:  It’s high time to speak about the ‘How’ of TDD, not always only the ‘Why’. Much has been said about this, and me myself also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run, it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I’m somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don’t want to spent my time exclusively on stating the obvious… So, again, let’s say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. – I know that there are many people out there who will disagree with this radical statement, and I also know that it’s not a description of the real world but more of a mission statement or something. But nevertheless I’m absolutely sure that in some years this statement will be nothing but a platitude. Side note: Some parts of this post read as if I were paid by Jetbrains (the manufacturer of the ReSharper add-in – R#), but I swear I’m not. Rather I think that Visual Studio is just not production-complete without it, and I wouldn’t even consider to do professional work without having this add-in installed... The three parts of a software component Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts shown below:   First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer’s brain of what might be needed, or anything in between. In either way, there has to be some sort of requirement, be it explicit or not. 
– At the C# micro-level, the best way that I found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments. The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. - For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice. The third part then finally is the production code itself. It’s development is entirely driven by the requirements and their executable formulation. This is the delivery, the two other parts are ‘only’ there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how  the product is developed, he is only interested in the fact that it is developed as cost-effective as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer’s craftsmanship, and this is what I want to talk about during the remainder of this article… An example To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here… The requirement As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question “intf or not” doesn’t even come to mind. I need them for my usual workflow and using them automatically produces high componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with: namespace Calculator {     /// <summary>     /// Defines a very simple calculator component for demo purposes.     /// </summary>     public interface ICalculator     {         /// <summary>         /// Gets the result of the last successful operation.         /// </summary>         /// <value>The last result.</value>         /// <remarks>         /// Will be <see langword="null" /> before the first successful operation.         /// </remarks>         double? LastResult { get; }       } // interface ICalculator   } // namespace Calculator So, I’m not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here: Starting this way gives me a method signature, which allows to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process. In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation than about coding. The documentation must completely describe the behavior of the documented element. I normally use an IoC container or some sort of self-written provider-like model in my architecture. 
In either case, I need my components defined via service interfaces anyway. - I will use the LinFu IoC framework here, for no other reason as that is is very simple to use. The ‘Red’ (pt. 1)   First I create a folder for the project’s third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template), and add references to the Calculator project and the LinFu dll. Finally I’m ready to write the first test, which will look like the following: namespace Calculator.Test {     [TestFixture]     public class CalculatorTest     {         private readonly ServiceContainer container = new ServiceContainer();           [Test]         public void CalculatorLastResultIsInitiallyNull()         {             ICalculator calculator = container.GetService<ICalculator>();               Assert.IsNull(calculator.LastResult);         }       } // class CalculatorTest   } // namespace Calculator.Test       This is basically the executable formulation of what the interface definition states (part of). Side note: There’s one principle of TDD that is just plain wrong in my eyes: I’m talking about the Red is 'does not compile' thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that, it just makes no sense to me. (Or, in Derick’s terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: Your code is incorrect, but nothing more.  Instead, the ‘Red’ part of the red-green-refactor cycle has a clearly defined meaning to me: It means that the test works as intended and fails only if its assumptions are not met for some reason. Back to our Calculator. When I execute the above test with R#, the Gallio plugin will give me this output: So this tells me that the test is red for the wrong reason: There’s no implementation that the IoC-container could load, of course. So let’s fix that. With R#, this is very easy: First, create an ICalculator - derived type:        Next, implement the interface members: And finally, move the new class to its own file: So far my ‘work’ was six mouse clicks long, the only thing that’s left to do manually here, is to add the Ioc-specific wiring-declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces: This is what my Calculator class looks like as of now: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult         {             get             {                 throw new NotImplementedException();             }         }     } } Back to the test fixture, we have to put our IoC container to work: [TestFixture] public class CalculatorTest {     #region Fields       private readonly ServiceContainer container = new ServiceContainer();       #endregion // Fields       #region Setup/TearDown       [FixtureSetUp]     public void FixtureSetUp()     {        container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");     }       ... Because I have a R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more… The ‘Red’ (pt. 2) Now, the execution of the above test gives the following result: This time, the test outcome tells me that the method under test is called. 
And this is the point, where Derick and I seem to have somewhat different views on the subject: Of course, the test still is worthless regarding the red/green outcome (or: it’s still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I’m not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that’s the case, I will happily go on to the ‘Green’ part… The ‘Green’ Making the test green is quite trivial. Just make LastResult an automatic property:     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult { get; private set; }     }         One more round… Now on to something slightly more demanding (cough…). Let’s state that our Calculator exposes an Add() method:         ...   /// <summary>         /// Adds the specified operands.         /// </summary>         /// <param name="operand1">The operand1.</param>         /// <param name="operand2">The operand2.</param>         /// <returns>The result of the additon.</returns>         /// <exception cref="ArgumentException">         /// Argument <paramref name="operand1"/> is &lt; 0.<br/>         /// -- or --<br/>         /// Argument <paramref name="operand2"/> is &lt; 0.         /// </exception>         double Add(double operand1, double operand2);       } // interface ICalculator A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That’s certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. And using that, it looks like this:   Apart from that, I’m heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location.  This way, a team always has first class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding up things and avoiding typos: You have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…).     Back to our Calculator again: Two more R# – clicks implement the Add() skeleton:         ...           public double Add(double operand1, double operand2)         {             throw new NotImplementedException();         }       } // class Calculator As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let’s start implementing that. Here’s the test: [Test] [Row(-0.5, 2)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); } As you can see, I’m using a data-driven unit test method here, mainly for these two reasons: Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose. Rather, I only will have to add another Row attribute to the existing one. From the test report below, you can see that the argument values are explicitly printed out. 
This can be a valuable documentation feature even when everything is green: One can quickly review what values were tested exactly - the complete Gallio HTML-report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example). Back to our Calculator development again, this is what the test result tells us at the moment: So we’re red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here’s the test and the method implementation at the end of the second cycle: // in CalculatorTest:   [Test] [Row(-0.5, 2)] [Row(295, -123)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); }   // in Calculator: public double Add(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }     if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }     throw new NotImplementedException(); } So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is regarded here in more detail). Now we can think about the method’s successful outcomes. First let’s write another test for that: [Test] [Row(1, 1, 2)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } Again, I’m regularly using row based test methods for these kinds of unit tests. The above shown pattern proved to be extremely helpful for my development work, I call it the Defined-Input/Expected-Output test idiom: You define your input arguments together with the expected method result. There are two major benefits from that way of testing: In the course of refining a method, it’s very likely to come up with additional test cases. In our case, we might add tests for some edge cases like ‘one of the operands is zero’ or ‘the sum of the two operands causes an overflow’, or maybe there’s an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need of testing against additional values. In all these scenarios we only have to add another Row attribute to the test. Remember that the argument values are written to the test report, so as a side-effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven). 
So your test method might look something like that in the end: [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 2)] [Row(0, 999999999, 999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, double.MaxValue)] [Row(4, double.MaxValue - 2.5, double.MaxValue)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } And this will produce the following HTML report (with Gallio):   Not bad for the amount of work we invested in it, huh? - There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review… The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don’t show this here, it’s trivial enough and brings nothing new… And finally: Refactor (for the right reasons) To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here’s the code (tests and production): // CalculatorTest.cs:   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtract(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, result); }   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, calculator.LastResult); }   ...   // ICalculator.cs: /// <summary> /// Subtracts the specified operands. /// </summary> /// <param name="operand1">The operand1.</param> /// <param name="operand2">The operand2.</param> /// <returns>The result of the subtraction.</returns> /// <exception cref="ArgumentException"> /// Argument <paramref name="operand1"/> is &lt; 0.<br/> /// -- or --<br/> /// Argument <paramref name="operand2"/> is &lt; 0. /// </exception> double Subtract(double operand1, double operand2);   ...   // Calculator.cs:   public double Subtract(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }       if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }       return (this.LastResult = operand1 - operand2).Value; }   Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of code lines of the production code, we do an Extract Method refactoring. 
One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#: Having done that, our production code finally looks like that: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         #region ICalculator           public double? LastResult { get; private set; }           public double Add(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 + operand2).Value;         }           public double Subtract(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 - operand2).Value;         }           #endregion // ICalculator           #region Implementation (Helper)           private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)         {             if (operand1 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand1");             }               if (operand2 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand2");             }         }           #endregion // Implementation (Helper)       } // class Calculator   } // namespace Calculator But is the above worth the effort at all? It’s obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It’s not immediately clear how this refactoring work adds value to the project. Derick puts it like this: STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if your done with your requirements after making the test green, you are not required to refactor the code. I know… I’m speaking heresy, here. Toss me to the wolves, I’ve gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for you class at this point, what value does refactoring add? Derick immediately answers his own question: So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern’s intentions, less architecturally sound, less DRY, etc, then you should refactor it. I couldn’t state it more precise. From my personal perspective, I’d add the following: You have to keep in mind that real-world software systems are usually quite large and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It’s the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic on the individual, seemingly trivial cases. My job regularly requires the reading and understanding of ‘foreign’ code. 
So code quality/readability really makes a HUGE difference for me – sometimes it can be even the difference between project success and failure… Conclusions The above described development process emerged over the years, and there were mainly two things that guided its evolution (you might call it eternal principles, personal beliefs, or anything in between): Test-driven development is the normal, natural way of writing software, code-first is exceptional. So ‘doing TDD or not’ is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I’ve never seen high-quality code – and high-quality code is code that stood the test of time and causes low maintenance costs – that was produced code-first…) It’s the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to go into the right direction…). The test code serves ‘only’ to make the production code work. But it’s the number of delivered features which solely counts at the end of the day - no matter how much test code you wrote or how good it is. With these two things in mind, I tried to optimize my coding process for coding speed – or, in business terms: productivity - without sacrificing the principles of TDD (more than I’d do either way…).  As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code (This might sound heavy, but that is mainly due to the fact that software development standards only begin to evolve. The entire software development profession is very young, historically seen; only at the very beginning, and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio sounds no longer extraordinary…) Although the above might look like very much unnecessary work at first sight, it’s not. With the aid of the mentioned add-ins, doing all the above is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines or the lack of a tool or something like that - for ‘saving’ a few 100 bucks -  is just not acceptable and a very bad decision in business terms (though I quite some times have seen and heard that…). Production of high-quality products needs the usage of high-quality tools. This is a platitude that every craftsman knows… The here described round-trip will take me about five to ten minutes in my real-world development practice. I guess it’s about 30% more time compared to developing the ‘traditional’ (code-first) way. But the so manufactured ‘product’ is of much higher quality and massively reduces maintenance costs, which is by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of software development… But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The here described development method might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it’s not - e.g. 
if time-to-market is crucial for a software project. So this is a business decision in the end. It’s just that you have to know what you’re doing and what consequences this might have… Some last words First, I’d like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend for reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn’t have done that without this inspiration. I really enjoy that kind of discussions… I agree with him in all respects. But I don’t know (yet?) how to bring his insights into the described production process without slowing things down. The above described method proved to be very “good enough” in my practical experience. But of course, I’m open to suggestions here… My rationale for now is: If the test is initially red during the red-green-refactor cycle, the ‘right reason’ is: it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, ‘red’ certainly must occur for the ‘right reason’: in this phase, ‘red’ MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!

    Read the article

  • PostgreSQL to Data-Warehouse: Best approach for near-real-time ETL / extraction of data

    - by belvoir
    Background: I have a PostgreSQL (v8.3) database that is heavily optimized for OLTP. I need to extract data from it on a semi-real-time basis (someone is bound to ask what semi-real-time means; the answer is as frequently as I reasonably can, but I will be pragmatic - as a benchmark, let's say we are hoping for every 15 min) and feed it into a data warehouse. How much data? At peak times we are talking approx 80-100k rows per minute hitting the OLTP side; off-peak this drops significantly to 15-20k. The most frequently updated rows are ~64 bytes each, but there are various tables etc., so the data is quite diverse and can range up to 4000 bytes per row. The OLTP is active 24x5.5. Best solution? From what I can piece together, the most practical solution is as follows: create a TRIGGER to write all DML activity to a rotating CSV log file; perform whatever transformations are required; use the native DW data pump tool to efficiently pump the transformed CSV into the DW. Why this approach? TRIGGERS allow selective tables to be targeted rather than being system-wide, the output is configurable (i.e. into a CSV), and they are relatively easy to write and deploy. SLONY uses a similar approach and the overhead is acceptable. CSV is easy and fast to transform, and easy to pump into the DW. Alternatives considered: Using native logging (http://www.postgresql.org/docs/8.3/static/runtime-config-logging.html) - the problem with this is that it looked very verbose relative to what I needed and was a little trickier to parse and transform. However it could be faster, as I presume there is less overhead compared to a TRIGGER. Certainly it would make the admin easier as it is system-wide, but again, I don't need some of the tables (some are used for persistent storage of JMS messages which I do not want to log). Querying the data directly via an ETL tool such as Talend and pumping it into the DW - the problem is the OLTP schema would need to be tweaked to support this, and that has many negative side-effects. Using a tweaked/hacked SLONY - SLONY does a good job of logging and migrating changes to a slave, so the conceptual framework is there, but the proposed solution just seems easier and cleaner. Using the WAL. Has anyone done this before? Want to share your thoughts?
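
    To make the TRIGGER option concrete: rather than writing CSV from inside the trigger, a common variation is a generic AFTER trigger that stages changed rows in an audit table, with a scheduled job exporting and clearing that table via COPY every 15 minutes. A rough sketch follows (Python/psycopg2; the connection string, audit table, and target table "orders" are placeholders, and the cost of one extra insert per modified row would need testing at 80-100k rows/min).

    ```python
    # Sketch: stage DML changes in an audit table via a generic trigger, then
    # export the staged rows to CSV with COPY on a schedule. Table names,
    # connection details, and the target table ("orders") are placeholders.
    import psycopg2

    DDL = """
    CREATE TABLE etl_audit (
        id          bigserial PRIMARY KEY,
        src_table   text        NOT NULL,
        op          text        NOT NULL,
        changed_at  timestamptz NOT NULL DEFAULT now(),
        row_data    text        NOT NULL
    );

    CREATE OR REPLACE FUNCTION etl_audit_fn() RETURNS trigger AS $$
    BEGIN
        IF TG_OP = 'DELETE' THEN
            INSERT INTO etl_audit (src_table, op, row_data)
            VALUES (TG_TABLE_NAME, TG_OP, OLD::text);
            RETURN OLD;
        END IF;
        INSERT INTO etl_audit (src_table, op, row_data)
        VALUES (TG_TABLE_NAME, TG_OP, NEW::text);
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER orders_etl_audit
        AFTER INSERT OR UPDATE OR DELETE ON orders
        FOR EACH ROW EXECUTE PROCEDURE etl_audit_fn();
    """

    conn = psycopg2.connect("dbname=oltp user=etl")   # placeholder DSN
    conn.autocommit = True
    cur = conn.cursor()
    cur.execute(DDL)

    # Scheduled export step (e.g. every 15 minutes): dump staged rows to CSV,
    # then delete only what was exported so rows added in between are not lost.
    cur.execute("SELECT coalesce(max(id), 0) FROM etl_audit")
    high_water = cur.fetchone()[0]
    with open("/var/tmp/etl_batch.csv", "w") as f:
        cur.copy_expert(
            "COPY (SELECT * FROM etl_audit WHERE id <= %d ORDER BY id) "
            "TO STDOUT WITH CSV" % high_water, f)
    cur.execute("DELETE FROM etl_audit WHERE id <= %s", (high_water,))
    ```

    This keeps the OLTP tables themselves untouched and leaves something trivially COPY-able for the DW pump, at the price of one extra insert per modified row.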

    Read the article

  • What is the effect of this order_by clause?

    - by bread
    I don't understand what this order_by clause is doing and whether I need it or not: select c.customerid, c.firstname, c.lastname, i.order_date, i.item, i.price from items_ordered i, customers c where i.customerid = c.customerid group by c.customerid, i.item, i.order_date order by i.order_date desc; This produces this data: 10330 Shawn Dalton 30-Jun-1999 Pogo stick 28.00 10101 John Gray 30-Jun-1999 Raft 58.00 10410 Mary Ann Howell 30-Jan-2000 Unicycle 192.50 10101 John Gray 30-Dec-1999 Hoola Hoop 14.75 10449 Isabela Moore 29-Feb-2000 Flashlight 4.50 10410 Mary Ann Howell 28-Oct-1999 Sleeping Bag 89.22 10339 Anthony Sanchez 27-Jul-1999 Umbrella 4.50 10449 Isabela Moore 22-Dec-1999 Canoe 280.00 10298 Leroy Brown 19-Sep-1999 Lantern 29.00 10449 Isabela Moore 19-Mar-2000 Canoe paddle 40.00 10413 Donald Davids 19-Jan-2000 Lawnchair 32.00 10330 Shawn Dalton 19-Apr-2000 Shovel 16.75 10439 Conrad Giles 18-Sep-1999 Tent 88.00 10298 Leroy Brown 18-Mar-2000 Pocket Knife 22.38 10299 Elroy Keller 18-Jan-2000 Inflatable Mattress 38.00 10438 Kevin Smith 18-Jan-2000 Tent 79.99 10101 John Gray 18-Aug-1999 Rain Coat 18.30 10449 Isabela Moore 15-Dec-1999 Bicycle 380.50 10439 Conrad Giles 14-Aug-1999 Ski Poles 25.50 10449 Isabela Moore 13-Aug-1999 Unicycle 180.79 10101 John Gray 08-Mar-2000 Sleeping Bag 88.70 10299 Elroy Keller 06-Jul-1999 Parachute 1250.00 10438 Kevin Smith 02-Nov-1999 Pillow 8.50 10101 John Gray 02-Jan-2000 Lantern 16.00 10315 Lisa Jones 02-Feb-2000 Compass 8.00 10449 Isabela Moore 01-Sep-1999 Snow Shoes 45.00 10438 Kevin Smith 01-Nov-1999 Umbrella 6.75 10298 Leroy Brown 01-Jul-1999 Skateboard 33.00 10101 John Gray 01-Jul-1999 Life Vest 125.00 10330 Shawn Dalton 01-Jan-2000 Flashlight 28.00 10298 Leroy Brown 01-Dec-1999 Helmet 22.00 10298 Leroy Brown 01-Apr-2000 Ear Muffs 12.50 While if I remove the order_by clause completely, as in this query: select c.customerid, c.firstname, c.lastname, i.order_date, i.item, i.price from items_ordered i, customers c where i.customerid = c.customerid group by c.customerid, i.item, i.order_date; I get these results: 10101 John Gray 30-Dec-1999 Hoola Hoop 14.75 10101 John Gray 02-Jan-2000 Lantern 16.00 10101 John Gray 01-Jul-1999 Life Vest 125.00 10101 John Gray 30-Jun-1999 Raft 58.00 10101 John Gray 18-Aug-1999 Rain Coat 18.30 10101 John Gray 08-Mar-2000 Sleeping Bag 88.70 10298 Leroy Brown 01-Apr-2000 Ear Muffs 12.50 10298 Leroy Brown 01-Dec-1999 Helmet 22.00 10298 Leroy Brown 19-Sep-1999 Lantern 29.00 10298 Leroy Brown 18-Mar-2000 Pocket Knife 22.38 10298 Leroy Brown 01-Jul-1999 Skateboard 33.00 10299 Elroy Keller 18-Jan-2000 Inflatable Mattress 38.00 10299 Elroy Keller 06-Jul-1999 Parachute 1250.00 10315 Lisa Jones 02-Feb-2000 Compass 8.00 10330 Shawn Dalton 01-Jan-2000 Flashlight 28.00 10330 Shawn Dalton 30-Jun-1999 Pogo stick 28.00 10330 Shawn Dalton 19-Apr-2000 Shovel 16.75 10339 Anthony Sanchez 27-Jul-1999 Umbrella 4.50 10410 Mary Ann Howell 28-Oct-1999 Sleeping Bag 89.22 10410 Mary Ann Howell 30-Jan-2000 Unicycle 192.50 10413 Donald Davids 19-Jan-2000 Lawnchair 32.00 10438 Kevin Smith 02-Nov-1999 Pillow 8.50 10438 Kevin Smith 18-Jan-2000 Tent 79.99 10438 Kevin Smith 01-Nov-1999 Umbrella 6.75 10439 Conrad Giles 14-Aug-1999 Ski Poles 25.50 10439 Conrad Giles 18-Sep-1999 Tent 88.00 10449 Isabela Moore 15-Dec-1999 Bicycle 380.50 10449 Isabela Moore 22-Dec-1999 Canoe 280.00 10449 Isabela Moore 19-Mar-2000 Canoe paddle 40.00 10449 Isabela Moore 29-Feb-2000 Flashlight 4.50 10449 Isabela Moore 01-Sep-1999 Snow Shoes 45.00 10449 Isabela Moore 13-Aug-1999 
Unicycle 180.79 I'm not sure what the ORDER BY clause is doing here or whether it is having the intended effect.
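    For what it's worth, without an ORDER BY the rows of a grouped query come back in whatever order the engine happens to produce, so the apparent customerid ordering of the second result set is not guaranteed. A hedged sketch of what the query might look like if the intent is one row per customer/item/date, sorted by customer and then by most recent order first (the extra GROUP BY columns are there only to satisfy engines that require every selected column to be grouped or aggregated; table and column names are taken from the question):

    select c.customerid, c.firstname, c.lastname,
           i.order_date, i.item, i.price
    from items_ordered i
    join customers c on i.customerid = c.customerid
    group by c.customerid, c.firstname, c.lastname,
             i.order_date, i.item, i.price
    order by c.customerid, i.order_date desc;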

    Read the article

  • Javascript Noob: How to emulate slideshow on front page by automatically cycling through existing hover effects

    - by Zildjoms
    Hey everyone, I hope you can help me out. I'm working on this website and I've finished all the hover effects I like - they're exactly how I want them to be: http://s5ent.brinkster.net/beta3.asp - try hovering over the four links and you'll see a very simple fade effect at work, which degrades into a regular CSS hover without JavaScript. What I want to do is make the page look like it has a fancy slideshow running once it has loaded and while it is idle, and I'd like to achieve that by capitalizing on the existing hover styling/behavior of the main page links instead of using another script to create the effect from scratch. To do this I imagine I'll need a script that emulates a hover action on each link at regular intervals once the page has loaded, going from left to right (footcare, lawn & equipment, about us, contact us) and looping through all four links indefinitely (footcare, lawn & equipment, about us, contact us, footcare, lawn & equipment, etc.), but pausing when any of them is actually hovered over by a visitor and resuming from wherever the user left off on mouseout. I also want to achieve this without unnecessarily disrupting the current HTML, so I guess everything will have to be done by scripting as much as possible. I'm very new to JavaScript and jQuery. As you can see at s5ent.brinkster.net/beta3.1-autohover.asp, the following script I wrote works incorrectly: it hovers over all of the links at the same time and never hovers out, and when you try to actually hover into and out of each link, the link just comes back on: <script type="text/javascript"> $(document).ready(function () { var speed = 5000; var run = setInterval('rotate()', speed); }); function rotate() { $('.lilevel1 a').each(function(i) { $(this).mouseover(); }); } </script> On top of that, this last bit of script isn't even working in IE. Could you please help me make this happen, or suggest a better way to go about it? Thanks!
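    One possible approach, offered only as a minimal untested sketch: keep an index into the links, trigger the same mouseover/mouseout events the fade is presumably bound to, and guard the pause flag so that programmatic triggers don't flip it. The 5-second interval and variable names are illustrative, and this assumes the fade is bound to mouseover/mouseout on the .lilevel1 a elements as in the snippet above:

    <script type="text/javascript">
    $(document).ready(function () {
        var $links = $('.lilevel1 a');   // the four main page links
        var index = 0;                   // which link gets the fake hover next
        var paused = false;              // true while a real hover is in progress
        var auto = false;                // true only while we trigger events ourselves
        var speed = 5000;

        setInterval(function () {
            if (paused) { return; }
            auto = true;
            $links.trigger('mouseout');              // clear the previous fake hover
            $links.eq(index).trigger('mouseover');   // fake-hover the next link
            auto = false;
            index = (index + 1) % $links.length;     // loop back around
        }, speed);

        // Real hovers pause the cycle; mouseout resumes it from the next link.
        // The auto flag keeps our own triggered events from changing the pause state.
        $links.bind('mouseover', function () { if (!auto) { paused = true; } });
        $links.bind('mouseout', function () { if (!auto) { paused = false; } });
    });
    </script>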

    Read the article

  • Which functions in the C standard library commonly encourage bad practice?

    - by Ninefingers
    Hello all. This is inspired by this question and the comments on one particular answer, in that I learnt that strncpy is not a very safe string-handling function in C and that it pads with zeros until it reaches n, something I was unaware of. Specifically, to quote R..: strncpy does not null-terminate, and does null-pad the whole remainder of the destination buffer, which is a huge waste of time. You can work around the former by adding your own null padding, but not the latter. It was never intended for use as a "safe string handling" function, but for working with fixed-size fields in Unix directory tables and database files. snprintf(dest, n, "%s", src) is the only correct "safe strcpy" in standard C, but it's likely to be a lot slower. By the way, truncation in itself can be a major bug and in some cases might lead to privilege elevation or DoS, so throwing "safe" string functions that truncate their output at a problem is not a way to make it "safe" or "secure". Instead, you should ensure that the destination buffer is the right size and simply use strcpy (or better yet, memcpy if you already know the source string length). And from Jonathan Leffler: Note that strncat() is even more confusing in its interface than strncpy() - what exactly is that length argument, again? It isn't what you'd expect based on what you supply strncpy() etc - so it is more error prone even than strncpy(). For copying strings around, I'm increasingly of the opinion that there is a strong argument that you only need memmove() because you always know all the sizes ahead of time and make sure there's enough space ahead of time. Use memmove() in preference to any of strcpy(), strcat(), strncpy(), strncat(), memcpy(). So, I'm clearly a little rusty on the C standard library. Therefore, I'd like to pose the question: which C standard library functions are used inappropriately, or in ways that may lead to security problems, code defects, or inefficiencies? In the interests of objectivity, I have a number of criteria for an answer: please, if you can, cite the design reasons behind the function in question, i.e. its intended purpose; please highlight the misuse to which the code is currently put; and please state why that misuse may lead to a problem. I know that should be obvious, but it prevents soft answers. Please avoid debates over naming conventions of functions (except where this unequivocally causes confusion), and "I prefer x over y" answers - preference is fine, we all have preferences, but I'm interested in actual unexpected side effects and how to guard against them. As this is likely to be considered subjective and has no definite answer, I'm flagging for community wiki straight away. I am also working to C99.
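    To make the contrast concrete, here is a small self-contained sketch of the three approaches mentioned above (nothing in it comes from the question itself; the buffer size and strings are purely illustrative):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *src = "a string longer than the destination buffer";
        char dst[16];

        /* strncpy: does not guarantee termination, and zero-pads the rest
           of dst, so the terminator has to be added by hand. */
        strncpy(dst, src, sizeof dst);
        dst[sizeof dst - 1] = '\0';

        /* snprintf: always terminates (for n > 0) and returns how much space
           the full string would have needed, so truncation is detectable. */
        int needed = snprintf(dst, sizeof dst, "%s", src);
        if (needed >= (int) sizeof dst)
            puts("truncated");

        /* memcpy: when the length is already known and checked, a plain copy
           of length + 1 bytes (including the terminator) is enough. */
        size_t len = strlen(src);
        if (len < sizeof dst)
            memcpy(dst, src, len + 1);

        printf("%s\n", dst);
        return 0;
    }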

    Read the article

  • How to get crossSlide and lightbox2 working together on the same page.

    - by imHavoc
    (CrossSlide) (LightBox) This is my header: <script type="text/javascript" src="<?php echo ROOT.'sources/js/jquery.js'; ?>"></script> <script type="text/javascript" src="<?php echo ROOT.'sources/js/contentSlider/jquery.cross-slide.js'; ?>"></script> <link rel="stylesheet" href="<?php echo ROOT.'sources/css/lightbox.css'; ?>" type="text/css" media="screen" /> <script type="text/javascript" src="<?php echo ROOT.'sources/js/lightbox/prototype.js'; ?>"></script> <script type="text/javascript" src="<?php echo ROOT.'sources/js/lightbox/scriptaculous.js?load=effects,builder'; ?>"></script> <script type="text/javascript" src="<?php echo ROOT.'sources/js/lightbox/lightbox.js'; ?>"></script> This is my body: <script type="text/javascript"> $(function() { $('#imgHold').crossSlide({ sleep: 3, fade: .5 }, [ { src: 'images/featured/ftcont_img1.png' }, { src: 'images/featured/ftcont_img2.png' }, { src: 'images/featured/ftcont_img3.png' }, { src: 'images/featured/ftcont_img4.png' } ]); }); </script> <div id="ftIMG"><div id="imgHold">Loading...</div></div> Nothing on this page uses the Lightbox script, but I want to keep it in the header so that in PHP I only have to call up one header. The Lightbox "manual" says to add "initLightbox();" to the onload attribute of the body tag, so I did that and nothing changed. I have also read about jQuery.noConflict(), and I'm wondering whether that is the way to proceed, or whether there is another way to fix this problem. Also, if I wanted to use ThickBox 3.1 on the same page with everything else, would that be possible, and how exactly would I do it? Finally, sorry about not posting these as links; apparently new users are not allowed to post more than one link.
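    One common way out of this kind of clash, offered as an untested sketch rather than a confirmed fix: Lightbox2 depends on Prototype, which also claims the $ alias, so hand $ back with jQuery.noConflict() and keep using jQuery through a private alias. This assumes the include order is changed so that prototype.js, scriptaculous.js, and lightbox.js come before jquery.js and the crossSlide plugin; $j below is just an illustrative alias name, and the crossSlide call is copied from the question:

    <script type="text/javascript">
    // Prototype keeps $, jQuery lives on as $j for the rest of the page.
    var $j = jQuery.noConflict();

    $j(function() {
        $j('#imgHold').crossSlide({ sleep: 3, fade: .5 }, [
            { src: 'images/featured/ftcont_img1.png' },
            { src: 'images/featured/ftcont_img2.png' },
            { src: 'images/featured/ftcont_img3.png' },
            { src: 'images/featured/ftcont_img4.png' }
        ]);
    });
    </script>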

    Read the article

  • What should I do or not do to avoid the Delphi "push dword" bug?

    - by Maksee
    I have found that Delphi 5 generates invalid assembly code in specific cases, and I can't work out which cases in general. The example below produces an access violation because a very strange optimization occurs. For a byte in a record or array, Delphi generates push dword [...], pop ebx, mov .., bl, which works correctly if there is data after this byte (we need at least three more bytes to push a dword correctly) but fails if that data is inaccessible. I emulated the strict boundaries here with the Win32 Virtual* functions. Specifically, the error occurs when the last byte of the block is accessed inside the FeedBytesToClass procedure, and if I change something, such as using a data array instead of the object property or removing the actionFlag variable, Delphi generates correct assembly instructions. const BlockSize = 4096; type TSomeClass = class private fBytes: PByteArray; public property Bytes: PByteArray read fBytes; constructor Create; destructor Destroy;override; end; constructor TSomeClass.Create; begin inherited Create; GetMem(fBytes, BlockSize); end; destructor TSomeClass.Destroy; begin FreeMem(fBytes); inherited; end; procedure FeedBytesToClass(SrcDataBytes: PByteArray; Count: integer); var j: integer; Ofs: integer; actionFlag: boolean; AClass: TSomeClass; begin AClass:=TSomeClass.Create; try actionFlag:=true; for j:=0 to Count-1 do begin Ofs:=j; if actionFlag then begin AClass.Bytes[Ofs]:=SrcDataBytes[j]; end; end; finally AClass.Free; end; end; procedure TForm31.Button1Click(Sender: TObject); var SrcDataBytes: PByteArray; begin SrcDataBytes:=VirtualAlloc(Nil, BlockSize, MEM_COMMIT, PAGE_READWRITE); try if VirtualLock(SrcDataBytes, BlockSize) then try FeedBytesToClass(SrcDataBytes, BlockSize); finally VirtualUnLock(SrcDataBytes, BlockSize); end; finally VirtualFree(SrcDataBytes, MEM_DECOMMIT, BlockSize); end; end; Initially the error occurred when I accessed the RGB data of bitmap bits, but the code there is too complex, so I narrowed it down to this fragment. So the question is: what is so specific here that it makes Delphi produce the push/pop/mov optimization? I need to know this in order to avoid such side effects in general.
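    One workaround worth trying, offered purely as a speculative sketch (it has not been tested against Delphi 5 and may or may not dodge this particular code-generation path): turn optimization off for the routine and copy the byte through a local variable, which gives the compiler less reason to emit the push dword sequence:

    {$O-}  // optimization off for the whole routine
    procedure FeedBytesToClassSafe(SrcDataBytes: PByteArray; Count: integer);
    var
      j: integer;
      b: Byte;
      AClass: TSomeClass;
    begin
      AClass := TSomeClass.Create;
      try
        for j := 0 to Count - 1 do
        begin
          b := SrcDataBytes[j];      // read through a local Byte first
          AClass.Bytes[j] := b;
        end;
      finally
        AClass.Free;
      end;
    end;
    {$O+}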

    Read the article

  • What's the best way to structure this Linq-to-Events Drag & Drop code?

    - by Rob Fonseca-Ensor
    I am trying to handle a drag & drop interaction, which involves mouse down, mouse move, and mouse up. Here is a simplified repro of my solution that: on mouse down, creates an ellipse and adds it to a canvas on mouse move, repositions the ellipse to follow the mouse on mouse up, changes the colour of the canvas so that it's obvious which one you're dragging. var mouseDown = Observable.FromEvent<MouseButtonEventArgs>(canvas, "MouseLeftButtonDown"); var mouseUp = Observable.FromEvent<MouseButtonEventArgs>(canvas, "MouseLeftButtonUp"); var mouseMove = Observable.FromEvent<MouseEventArgs>(canvas, "MouseMove"); Ellipse ellipse = null; var q = from start in mouseDown.Do(x => { // handle mousedown by creating a red ellipse, // adding it to the canvas at the right position ellipse = new Ellipse() { Width = 10, Height = 10, Fill = Brushes.Red }; Point position = x.EventArgs.GetPosition(canvas); Canvas.SetLeft(ellipse, position.X); Canvas.SetTop(ellipse, position.Y); canvas.Children.Add(ellipse); }) from delta in mouseMove.Until(mouseUp.Do(x => { // handle mouse up by making the ellipse green ellipse.Fill = Brushes.Green; })) select delta; q.Subscribe(x => { // handle mouse move by repositioning ellipse Point position = x.EventArgs.GetPosition(canvas); Canvas.SetLeft(ellipse, position.X); Canvas.SetTop(ellipse, position.Y); }); the XAML is simply <Canvas x:Name="canvas"/> There's a few things I don't like about this code, and I need help refactoring it :) First of all: the mousedown and mouseup callbacks are specified as side effects. If two subscriptions are made to q, they will happen twice. Second, the mouseup callback is specified before the mousemove callback. This makes it a bit hard to read. Thirdly, the reference to the ellipse seems to be in a silly place. If there's two subscriptions, that variable reference will get overwritten quite quickly. I'm sure that there should be some way we can leverage the let keyword to introduce a variable to the linq expression that will mean the correct ellipse reference is available to both the mouse move and mouse up handlers How would you write this code?
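    A rough, uncompiled sketch of one way to address the third point: project the ellipse inside the query so each drag carries its own reference instead of sharing an outer variable. CreateEllipseAt is a hypothetical helper that wraps the Do block above (new red ellipse, positioned at the point, added to the canvas); the sketch does not remove the side effects themselves, which are inherent in creating UI elements:

    var q =
        from start in mouseDown
        // Hypothetical helper: builds the red ellipse, positions it, adds it to canvas.
        let ellipse = CreateEllipseAt(start.EventArgs.GetPosition(canvas))
        from move in mouseMove.Until(mouseUp.Do(_ => ellipse.Fill = Brushes.Green))
        select new { Ellipse = ellipse, Position = move.EventArgs.GetPosition(canvas) };

    q.Subscribe(x =>
    {
        // Each notification carries the ellipse that belongs to its own drag.
        Canvas.SetLeft(x.Ellipse, x.Position.X);
        Canvas.SetTop(x.Ellipse, x.Position.Y);
    });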

    Read the article

  • CSS formatting problems

    - by Davey
    So I wanted to mess around with Sass, and I created this tiny, incredibly simple page. For reasons unknown to me, when I add a top margin to #content it affects my wrapper div instead. Any information as to why this is happening would be greatly appreciated. Thanks! Here is the page live: http://cheapramen.com/omg/ html: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <link rel="stylesheet" href="stylesheets/screen.css" type="text/css" media="screen" /> <title>YO</title> </head> <body> <div id="wrapper"> <div id="content"> <div class="dog"> Bury me with my money </div> </div> </div> </body> </html> Sass: html body background-color: #000 padding: 0 margin: 0 height: 100% #wrapper background-color: #fff width: 900px height: 700px margin: 0 auto #content width: 500px margin: 150px 0 0 50px .dog color: #FF3836 font-size: 25px text-shadow: 2px 2px 2px #000 width: 500px height: 500px -moz-border-radius: 5px -webkit-border-radius: 5px border: 18px solid #FF3836 CSS that the Sass spits out: html body { background-color: #000; padding: 0; margin: 0; height: 100%; } #wrapper { background-color: #fff; width: 900px; height: 700px; margin: 0 auto; } #content { width: 500px; margin: 150px 0 0 50px; } .dog { color: #FF3836; font-size: 25px; text-shadow: 2px 2px 2px #000; width: 500px; height: 500px; -moz-border-radius: 5px; -webkit-border-radius: 5px; border: 18px solid #FF3836; }
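    For what it's worth, this looks like ordinary CSS margin collapsing rather than a Sass issue: #wrapper has no top border, padding, or content before #content, so the child's top margin collapses through it and moves the wrapper itself. A hedged sketch of two common fixes, either of which should be enough on its own (the pixel values are copied from the question):

    /* Option 1: keep the margin on #content, but give the wrapper its own
       block formatting context so the child margin can no longer collapse
       through it (a top border or 1px of top padding works as well). */
    #wrapper {
      overflow: hidden;
    }

    /* Option 2: move the 150px offset onto the wrapper as padding and drop
       the top margin from the child. */
    #wrapper {
      padding-top: 150px;
    }
    #content {
      margin: 0 0 0 50px;
    }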

    Read the article

< Previous Page | 186 187 188 189 190 191 192 193 194 195 196 197  | Next Page >