Search Results

Search found 40339 results on 1614 pages for 'best settings'.

  • What are the best practices for unit testing properties with code in the setter?

    - by nportelli
    I'm fairly new to unit testing and we are actually attempting to use it on a project. There is a property like this:

        public TimeSpan CountDown
        {
            get { return _countDown; }
            set
            {
                long fraction = value.Ticks % 10000000;
                value -= TimeSpan.FromTicks(fraction);
                if (fraction > 5000000)
                    value += TimeSpan.FromSeconds(1);

                if (_countDown != value)
                {
                    _countDown = value;
                    NotifyChanged("CountDown");
                }
            }
        }

    My test looks like this:

        [TestMethod]
        public void CountDownTest_GetSet_PropChangedShouldFire()
        {
            ManualRafflePresenter target = new ManualRafflePresenter();
            bool fired = false;
            string name = null;
            target.PropertyChanged += new PropertyChangedEventHandler((o, a) =>
            {
                fired = true;
                name = a.PropertyName;
            });
            TimeSpan expected = new TimeSpan(0, 1, 25);
            TimeSpan actual;
            target.CountDown = expected;
            actual = target.CountDown;
            Assert.AreEqual(expected, actual);
            Assert.IsTrue(fired);
            Assert.AreEqual("CountDown", name);
        }

    The question is: how do I test the code in the setter? Do I break it out into a method? If I do, it would probably be private, since no one else needs to use it, but they say not to test private methods. Do I make a class if this is the only case? Would two uses of this code make a class worthwhile? What is wrong with this code from a design standpoint, and what is correct?
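
    A minimal sketch of one option (the TimeSpanRounding and RoundToSeconds names below are invented for illustration, not taken from the question): pull the rounding rule out into a pure helper that tests can call directly, so the setter is reduced to assignment plus notification.

        // Hypothetical helper: the rounding rule from the setter as a pure,
        // independently testable static method (10,000,000 ticks == 1 second).
        public static class TimeSpanRounding
        {
            public static TimeSpan RoundToSeconds(TimeSpan value)
            {
                long fraction = value.Ticks % TimeSpan.TicksPerSecond;
                TimeSpan rounded = value - TimeSpan.FromTicks(fraction);
                if (fraction > TimeSpan.TicksPerSecond / 2)
                    rounded += TimeSpan.FromSeconds(1);
                return rounded;
            }
        }

        // Inside the presenter class, the setter then shrinks to:
        public TimeSpan CountDown
        {
            get { return _countDown; }
            set
            {
                value = TimeSpanRounding.RoundToSeconds(value);
                if (_countDown != value)
                {
                    _countDown = value;
                    NotifyChanged("CountDown");
                }
            }
        }

    The helper can be covered by plain value-based unit tests, while the property test above keeps checking only that PropertyChanged fires.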

  • DataSet size best practices - are there any general rules?

    - by Galwegian
    I'm working on a desktop application that will produce several in-memory datasets as an intermediary before being committed to a database. Obviously I'm going to try to keep the size of these to a minimum, but are there any guidelines on thresholds I shouldn't cross for good functionality on an 'average' machine? Thanks for any help.

  • Preprocessor #define vs. function pointer - best practice?

    - by Dustin
    I recently started a small personal project (an RGB-to-BGR value conversion program) in C, and I realised that a function converting RGB to BGR can also perform the inverse conversion. Obviously that means I don't really need two functions, rgb2bgr and bgr2rgb. However, does it matter whether I use a function pointer instead of a macro? For example:

        int rgb2bgr (const int rgb);

        /*
         * Should I do this because it allows the compiler to issue
         * appropriate error messages using the proper function name,
         * not to mention possible debugging benefits?
         */
        int (*bgr2rgb) (const int bgr) = rgb2bgr;

        /*
         * Or should I do this since it is merely a convenience
         * and they're really the same function anyway?
         */
        #define bgr2rgb(bgr) (rgb2bgr (bgr))

    I'm not necessarily looking for a change in execution efficiency; it's more of a subjective question out of curiosity. I am well aware that type safety is neither lost nor gained with either method. Would the function pointer merely be a convenience, or are there more practical benefits to be gained of which I am unaware?
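
    A third option worth keeping in mind is a static inline wrapper, which gives the alias a real function name for diagnostics and debugging without a file-scope pointer or a macro. A minimal sketch, assuming a C99-or-later compiler and a 0xRRGGBB integer packing (neither of which is stated in the question):

        #include <stdio.h>

        /* The single conversion: swapping the R and B bytes is its own inverse. */
        int rgb2bgr (const int rgb)
        {
            return ((rgb & 0xFF) << 16) | (rgb & 0xFF00) | ((rgb >> 16) & 0xFF);
        }

        /* Wrapper with a proper symbol name; the compiler can usually inline it away. */
        static inline int bgr2rgb (const int bgr)
        {
            return rgb2bgr (bgr);
        }

        int main (void)
        {
            printf ("%06X\n", bgr2rgb (0x112233)); /* prints 332211 */
            return 0;
        }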

  • .NET immutable object usage best practices? Should I be using them as much as possible?

    - by Daniel
    Say I have a simple object such as:

        class Something
        {
            public int SomeInt { get; set; }
        }

    I have read that immutable objects are faster and a better means of using business objects. If this is so, should I strive to make all my objects like this:

        class ImmutableSomething
        {
            public int SomeInt { get { return m_someInt; } }

            private int m_someInt = 0;

            public void ChangeSomeInt(int newValue)
            {
                m_someInt = newValue;
            }
        }

    What do you reckon?
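
    For contrast, a minimal sketch of what a genuinely immutable version would look like (not from the question; note that the class above is still mutable, since ChangeSomeInt rewrites the field): the field is readonly, set once in the constructor, and a "change" returns a new instance.

        class ImmutableSomething
        {
            private readonly int m_someInt;

            public ImmutableSomething(int someInt)
            {
                m_someInt = someInt;
            }

            public int SomeInt { get { return m_someInt; } }

            // Instead of mutating, hand back a new instance with the changed value.
            public ImmutableSomething WithSomeInt(int newValue)
            {
                return new ImmutableSomething(newValue);
            }
        }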

  • What is the best approach to creating a login system?

    - by Starx
    I always wonder whether the login systems I have created are vulnerable to attack or not. Like many other programmers, I use sessions to hold a specific token that records the login status, and cookies to hold the username or even, sometimes, the saved login state. What I am wondering is: is this the right way? Is there any approach better than this?

  • Best practice: How to persist simple data without a database in Django?

    - by Infinity
    I'm building a website that doesn't require a database because a REST API "is the database". (Except you don't want to be putting site-specific things in there, since the API is used mostly by mobile clients.) However, there are a few things that would normally be put in a database, for example the "jobs" page. You have the master list view and the detail views for each job, and it should be easy to add new job entries (not necessarily via a CMS, but that would be awesome), e.g. example.com/careers/ and example.com/careers/77/. I could just hardcode this stuff in templates, but that's not DRY: you have to update the master template and the detail template every time. What do you guys think? Maybe a YAML file? Or any better ideas? Thx
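
    A minimal sketch of the YAML idea (the file name, view names and template paths here are invented for illustration, and it assumes PyYAML is installed): keep a jobs.yaml next to the app and let both views read from it, so adding a job is a one-file edit.

        # views.py - jobs.yaml acts as the "database" for the careers pages
        import os
        import yaml
        from django.http import Http404
        from django.shortcuts import render

        JOBS_FILE = os.path.join(os.path.dirname(__file__), "jobs.yaml")

        def _load_jobs():
            # jobs.yaml is assumed to be a list of mappings with "id", "title", "description"
            with open(JOBS_FILE) as f:
                return yaml.safe_load(f) or []

        def job_list(request):
            return render(request, "careers/list.html", {"jobs": _load_jobs()})

        def job_detail(request, job_id):
            for job in _load_jobs():
                if job["id"] == int(job_id):
                    return render(request, "careers/detail.html", {"job": job})
            raise Http404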

  • Best way to make an attribute always an array?

    - by Shadowfirebird
    I'm using my MOO project to teach myself Test Driven Design, and it's taking me interesting places. For example, I wrote a test that said an attribute on a particular object should always return an array, so:

        t = Thing.new("test")
        p t.names    #-> ["test"]
        t.names = nil
        p t.names    #-> []

    The code I have for this is okay, but it doesn't seem terribly ruby to me:

        class Thing
          def initialize(names)
            self.names = names
          end

          def names=(n)
            n = [] if n.nil?
            n = [n] unless n.instance_of?(Array)
            @names = n
          end

          attr_reader :names
        end

    Is there a more elegant, Ruby-ish way of doing this? (NB: if anyone wants to tell me why this is a dumb test to write, that would be interesting too...)
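
    A minimal sketch of one idiom that keeps the same behaviour for these cases (not from the question): Kernel#Array collapses both conditionals into a single conversion call, since Array(nil) is [], Array("test") is ["test"], and an array passes through unchanged.

        class Thing
          attr_reader :names

          def initialize(names)
            self.names = names
          end

          def names=(n)
            @names = Array(n)   # nil -> [], "test" -> ["test"], [1, 2] -> [1, 2]
          end
        end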

  • What's the best practice in case something goes wrong in Perl code?

    - by Geo
    I saw code which works like this:

        do_something($param) || warn "something went wrong\n";

    and I also saw code like this:

        eval {
            do_something_else($param);
        };
        if ($@) {
            warn "something went wrong\n";
        }

    Should I use eval/die in all my subroutines? Should I write all my code based on stuff returned from subroutines? Isn't eval'ing the code (over and over) gonna slow me down?
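
    A minimal sketch of a common middle ground (not from the question; it assumes the Try::Tiny module from CPAN is installed): wrap only the calls that can die, and let the module deal with the usual pitfalls of checking $@ by hand.

        use strict;
        use warnings;
        use Try::Tiny;

        sub do_something_else {
            my ($param) = @_;
            die "bad parameter\n" unless defined $param;
            return $param * 2;
        }

        my $param;   # left undefined on purpose, to hit the failure path
        try {
            my $result = do_something_else($param);
            print "got $result\n";
        }
        catch {
            # Try::Tiny passes the exception in $_ (and @_), not in $@
            warn "something went wrong: $_";
        };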

  • Separating code logic from the actual data structures. Best practices?

    - by Patrick
    I have an application that loads lots of data into memory (it needs to perform mathematical simulations on big data sets). This data comes from several database tables that all refer to each other. The consistency rules on the data are rather complex, and looking up all the relevant data requires quite a few hashes and other additional data structures built on top of it. The problem is that this data may also be changed interactively by the user in a dialog. When the user presses the OK button, I want to perform all the checks to see that he didn't introduce inconsistencies into the data. In practice all the data needs to be checked at once, so I cannot update my data set incrementally and perform the checks one by one. However, all the checking code works on the actual data set loaded in memory and uses the hashes and other data structures. This means I have to do the following:

        1. Take the user's changes from the dialog
        2. Apply them to the big data set
        3. Perform the checks on the big data set
        4. Undo all the changes if the checks fail

    I don't like this solution, since other threads are continuously using the data set and I don't want to halt them while performing the checks. Also, the undo means that the old situation needs to be put aside, which is not possible either. An alternative is to separate the checking code from the data set (and let it work on explicitly given data, e.g. coming from the dialog), but this means the checking code cannot use the hashes and other additional data structures, because they only work on the big data set, making the checks much slower. What is a good practice for checking a user's changes to complex data before applying them to the application's data set?

  • Is it the best practice to extract an interface for every class?

    - by the_drow
    I have seen code where every class has an interface that it implements. Sometimes there is no common interface for them all. They are just there, and they are used instead of concrete objects. They do not offer a generic interface for two classes and are specific to the domain of the problem that the class solves. Is there any reason to do that?

  • Uncheck Automatically detect proxy for Terminal Server users via GPO

    - by Chris
    Good morning. I have a registry key that changes local users' Internet Explorer LAN settings to uncheck the "Automatically detect settings" tickbox. When I add this policy to the Terminal Servers user group it has no effect. I exported this key from my own registry after unticking the box. My computer runs Windows Vista Business Edition. Environment: we are using a Server 2008 RC2 environment - two terminal servers with a session broker. Any ideas on how to get this working?

  • How to diff a BIOS or back up/restore all BIOS settings

    - by sfonck
    I'm using a Dell M90 Precision laptop which has an NVidia Quadro FX 2500M graphics card and is running Windows XP. The laptop had been running fine, but a few weeks ago the screen went 'white'. After restarting the computer, the BIOS and startup screens showed weird green dots and stripes, and normal startup only showed a black screen; only VGA mode could display anything. I tried removing and reinstalling the correct drivers downloaded from Dell's website - no solution. I gave up and reinstalled XP, and everything was working perfectly again. Two weeks later, the white screen again; I tried everything again (including flashing a new BIOS) and nothing worked. I reinstalled XP, everything was working again, so I made a DriveSnapshot image of the partition. Today, the white screen again. OK, no problem... I figured all I needed to do was restore the DriveSnapshot backup. A few minutes later the backup was restored... but guess what: the video driver does not work correctly. Since DriveSnapshot restored the complete partition exactly as it was when everything was working perfectly, this would mean my driver problems are due to 'settings' in the BIOS or on the graphics card itself, and these 'settings' can get overridden by doing a fresh XP install. I'm out of options; can somebody help me find a solution to this problem: Is there some way to back up and restore a BIOS after problems appear? Is there some way to find out what is causing this, like a BIOS diff utility? Thanks!

  • Zimbra MTA settings

    - by user192702
    I have some questions about Zimbra v8.0.6 GA. Under Configure - MTA - Network, I'm seeing a few settings and am not very clear what to do with them.

    Web mail MTA Host name: Is this for delivering local mail only (i.e. not for external mail)? According to this link, it says the following - that's a mouthful, but what are "composed messages"? Is this for a multi-server deployment where the Postfix server for Zimbra isn't installed on the same box as the rest of the servers?

        Webmail MTA is used by the Zimbra server for composed messages and must be
        the location of the Postfix server in the Zimbra MTA.

    Relay MTA for external delivery: My understanding after reading the doc is that if my ISP doesn't force me to relay outgoing mail through them, and I have enabled DNS lookup, I can leave this blank?

    Inbound SMTP host name: Sorry, I know this is explained as "If your MX records point to a spam-relay or any other external non-Zimbra server, enter the name of that server in the Inbound SMTP host name field", but I'm not following. Can someone provide an example?

    MTA Trusted Networks: The admin doc says "To set up MTA trusted networks on a per server basis, make sure that MTA trusted networks have been set up as global settings and then go to the Configure > Servers > MTA page and in the MTA Trusted Networks field enter the trusted network addresses for the server." However, I see that out of the box it has default networks set up for the server, whereas on the global level it's blank. Does this mean there is a bug in the install software and I have to copy the setting from the server to the global setting?

  • Broken UAC: can't edit files/folders or change settings in user accounts

    - by Antoros
    It appears that UAC is broken: I can't move/delete some files/folders. When asked to use administrative rights (and answering yes), it shows the loading bar and then nothing gets moved or deleted, with no error whatsoever. I opened Control Panel > User Accounts, and when I click on any option with the shield (administrative rights), the mouse changes to loading and then goes back to normal, without opening any menu or showing any error. I have already done an sfc /scannow: no errors found. I already used Microsoft's Fix It; a broken Recycle Bin was repaired, but I still have the same error. I used the microsoftaccounts tool, and these are the errors I got:

        - Problem with Microsoft account policy <- this is the problem (it didn't fix it)
        - Trust this PC <- loop of redirections, can't get to "trust this PC" (my only Windows 8 machine)
        - Problems with system registration <- I think this is because of the soft system reset
        - Some settings have sync turned off <- I never configured anything to sync
        - Root causes found and logs created <- I would like to know where the logs are saved...

    I had to use a ".reg" file to change the UAC setting to "never notify", thinking it would fix this. It didn't, though it did stop asking; I can still open a cmd with administrative rights, but I can't access the UAC settings. I activated the Administrator account (net user administrator /active:yes), and even with that account I couldn't change any settings. So there it is; I don't know what else to do at the moment. (This PC broke with the 8.1 update and was restored to factory configuration, which broke several drivers and kept most of the registry entries; I can't find the cause of this problem.) Other info: I tried to delete a file in a program folder and couldn't, so I downloaded Unlocker to check first whether it was a permissions issue, but no: it showed me a message telling me there wasn't any error and asking if I would like to delete the file. I clicked yes, and it did delete it. What amuses me is that I can't do this without the tool, not even using the feature that takes over ownership. Edit: wow, on a non-crappy PC chkdsk is fast; completed with no errors found :/

  • Prestashop is not saving Memcached settings

    - by ianenri
    I have an issue with the admin site of PrestaShop. I'm trying to activate the caching system, but it doesn't save the setting. When I add a server (I'm using an Amazon ElastiCache server) it saves it, but when I select the enable option and click Save, it redirects me to "Back Office > Preferences > Performance" with a blank page and only the admin tabs visible. When I go back to those settings, I see that the caching option is still disabled. There is also a warning: "To use Memcached, you must install the Memcache PECL extension on your server. http://www.php.net/manual/en/memcache.installation.php", even though I already installed memcached via yum. I also tried modifying the settings.inc.php file, changing

        define('_PS_CACHE_ENABLED_', '0');

    to:

        define('_PS_CACHE_ENABLED_', '1');

    but then I get a 500 Internal Server Error on every page, so I prefer to leave it as before. Any ideas? I'm using PrestaShop 1.4.6.2 with Nginx 1.0.11 and PHP-FPM 5.3.8 on a CentOS 5.7 system.

  • Would like to change audio codec, but keep video settings with ffmpeg

    - by Craig Tataryn
    I have a video for which I'd like to convert the audio codec to AAC at 320 kbps / 44.100 kHz. What ffmpeg switches would I use so that all the video settings and the video codec remain the same, but only the audio codec and settings change? Here's my video:

        $ ffmpeg -i Winnipeg.rb\ Scala-Talk.mov
        FFmpeg version SVN-r25375, Copyright (c) 2000-2010 the FFmpeg developers
          built on Oct  6 2010 13:02:41 with gcc 4.2.1 (Apple Inc. build 5664)
          configuration: --enable-libmp3lame --enable-shared --disable-mmx --arch=x86_64
          libavutil     50.32. 2 / 50.32. 2
          libavcore      0. 9. 1 /  0. 9. 1
          libavcodec    52.92. 0 / 52.92. 0
          libavformat   52.80. 0 / 52.80. 0
          libavdevice   52. 2. 2 / 52. 2. 2
          libavfilter    1.48. 0 /  1.48. 0
          libswscale     0.12. 0 /  0.12. 0
        Seems stream 0 codec frame rate differs from container frame rate: 2000.00 (2000/1) -> 10.00 (10/1)
        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'Winnipeg.rb Scala-Talk.mov':
          Metadata:
            major_brand     : qt
            minor_version   : 537199360
            compatible_brands: qt
          Duration: 01:10:53.00, start: 0.000000, bitrate: 283 kb/s
            Stream #0.0(eng): Video: h264, yuv420p, 800x598, 94 kb/s, 10 fps, 10 tbr, 1k tbn, 2k tbc
            Stream #0.1(eng): Audio: adpcm_ima_qt, 22050 Hz, 1 channels, s16
            Stream #0.2(eng): Audio: adpcm_ima_qt, 22050 Hz, 1 channels, s16
        At least one output file must be specified

    Many thanks in advance! One thing with ffmpeg I've never been able to grok is how to just "tweak" files without having to regurgitate every little setting for things you don't want changed.
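
    For reference, a sketch of the usual shape of the answer (not from the post; which AAC encoder is available depends on how this particular ffmpeg build was configured, so the encoder name is an assumption): stream-copy the video and re-encode only the audio.

        # Copy the video stream untouched; re-encode audio to AAC 320 kbps / 44.1 kHz.
        # -vcodec/-acodec/-ab/-ar are the option spellings of this 2010-era build
        # (newer builds write them as -c:v, -c:a, -b:a, -ar).
        ffmpeg -i "Winnipeg.rb Scala-Talk.mov" \
               -vcodec copy \
               -acodec libfaac -ab 320k -ar 44100 \
               Winnipeg-aac.mov
        # If libfaac is not compiled in, the built-in encoder can be tried instead:
        #   -acodec aac -strict experimental -ab 320k -ar 44100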

  • VPS Memory Exhausted Even With Light Settings

    - by user101570
    Linux noob here. I have a 256MB VPS running Ubuntu 11.04 Server, and when I run "free -m" the result shows all memory being used (including the second line re: buffers/cache). I found this very strange, considering I only have 5 Apache processes running, each chewing up about 20MB, and MySQL taking up 30MB. To my knowledge, and according to "top", I have no other memory hogs running. Settings that may be relevant:

        PHP memory_limit = 32M
        MySQL key_buffer = 16M
        Prefork MPM MaxClients = 10

    When I reviewed these settings, I naturally thought MaxClients was too high, so I tried switching it to 5. Now not only does my memory still show as 100% used, my website loads much, much slower, despite not getting any traffic aside from mine at the moment. I don't understand this. I thought a single Apache process handles all requests from a client received within the "KeepAliveTimeout" window, which I've set to 2 seconds. With my initial config of 10 MaxClients, my page load times are around .3ms, so a single process should handle that no problem, correct? So next I went to an extreme level of 1 for MaxClients. My memory is still at 100% usage and my site loads painfully slowly. I'm a noob at a complete loss here. According to the many tutorials I've read on basic server setup, I should be good to go. Help! Please!

    Edit:

                     total       used       free     shared    buffers     cached
        Mem:           256        256          0          0          0          0
        -/+ buffers/cache:        256          0
        Swap:            0          0          0
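
    For what it's worth, the usual back-of-the-envelope prefork sizing with the figures quoted above looks roughly like this (a sketch only; the OS overhead number is a guess, not from the post):

        RAM available to Apache  ~  total - MySQL - OS/other
                                 ~  256 MB - 30 MB - ~60 MB  =  ~165 MB
        MaxClients               ~  165 MB / 20 MB per Apache process  =  ~8

    so a MaxClients in the 8-10 range is the right ballpark for a 256 MB VPS.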

  • Thunderbird: export email account settings

    - by zpea
    I'd like to create a new profile for Thunderbird using the same mail accounts I already configured in my old profile. As it is quite a number of accounts, it would be great to have a way to export/import them instead of writing down the settings just to fill them in again in the new profile. Using web search and search here, I mainly found the following suggestions, which do not match what I need:

        - Copy the whole profile: not possible for me, as I don't want to copy other settings, the downloaded mail data etc., and the old profile broke when running out of space in the home folder anyway.
        - Use MozBackup: there seem to be several programs by that name (forks?). In any case, it's Windows-only and hence no option (I am mostly on Linux and prefer platform-independent solutions anyway).
        - Use accountex: seems to do what I want, but it is not compatible with the current Thunderbird version (it supports only up to version 3.1).
        - Posts with various tips from 4 years ago: top results in the web search with the G, but they do not work in current versions of Thunderbird either.

    Did I overlook anything? After all, it doesn't sound like I am looking for something nobody ever looked for.

  • User Group Policy in Server 2008 to set Default Profile settings

    - by Chris
    I have computers to deploy and want to apply changes to the default user policy on these PCs automatically. What's the best way to do this? Our current procedure is:

        1. Create the computer account in an OU called "Deployment" on our server
        2. Unbox the PC
        3. Log in as the user who will be receiving the PC
        4. Change settings (pre-configure Outlook, authorize Office, etc.)
        5. Move the computer account to the correct OU
        6. Place the PC on the user's desk

    I would like to make as many of the changes in step 4 as possible with Group Policies applied to the Deployment OU, since they're largely repeated for every computer. There are a dozen policies created, and the computer policies apply correctly, but the user policies do not. I understand this is because the end user is not in our "Deployment" OU. I don't want to apply these settings to the user at their current workstation, just on the new PC I'm working on. I believe I get the desired effect with Group Policy Loopback (Replace) enabled on policies that need user settings changed, but this just feels wrong/inefficient/complicated to maintain. Am I doing this correctly? Is Group Policy Loopback the only way to change user settings on one computer? What do you do to set up a user on a new PC?

  • Apache 2 settings for high traffic website

    - by Harry
    I'm having problems with the load on my website. It's an Amazon EC2 server with 15GB RAM and 4 CPUs behind a load balancer. apachetop says I'm getting around 80 requests per second, which seems really low for this kind of server, and the load (given by top) is usually around 15 but does increase to about 150 over 24 hrs. I'm seeing about 100 active Apache processes at any time. Apache is in prefork mode. MySQL is used very little on the server and there are almost no static files. Here are my Apache settings:

        Timeout 20
        KeepAlive Off
        MaxKeepAliveRequests 0
        KeepAliveTimeout 3

        <IfModule mpm_prefork_module>
            StartServers          40
            MinSpareServers       25
            MaxSpareServers       40
            ServerLimit          400
            MaxClients           400
            MaxRequestsPerChild    4
        </IfModule>

    Can anyone advise on how to tweak the settings? Thanks!

    Edit: The config was arrived at by trial and error. Any change to these lines - and I mean any change, by any amount - makes the load skyrocket within about 5 minutes. It literally jumps to 200-300 in a matter of minutes, especially with MaxRequestsPerChild: I've tried 10, 15, 100 and 1000, and the load just skyrockets. About PHP: there are actually only a few PHP files, which aren't really expensive at all; they just spit some simple stuff out. If I turn on KeepAlive, the load also goes to space.

  • Directory service unavailable, new hardware, same settings

    - by Alex
    I'm working on a project with two sites connected by a VPN. Site 1 has the main server, and there is a secondary server at site 2 which I am trying to replace. The current setup works perfectly; however, I can't for the life of me get the replacement server at site 2 up and running. I'm trying to replace like for like, just with upgraded hardware. I have installed the OS (all Server 2003 Standard SP2) and used exactly the same settings as the old server. I have set up Active Directory, DNS Server, DHCP Server and WINS Server, all configured with the same settings as the old server (except IP address and name). I can access Active Directory, but I can't do anything; add, edit and delete all return "the directory service is unavailable". No one can log in on any of the computers at site 2, and the internet is down. Plugging the old server back in and connecting it to the network rectifies the issue (so both new and old are connected at site 2): everyone can log in and the internet is back (curious, since the modem connects directly to the switch, and even with the new server online I can connect to the router via IP but not to the net). I really don't have much experience, but I've been roped into doing this because my company is too cheap to hire a real network admin. Any suggestions on where I can start troubleshooting this? It's driving me crazy and I only have a day before all the users are back on site.

  • Best Practices for Handing over Legacy Code

    - by PersonalNexus
    In a couple of months a colleague will be moving on to a new project and I will be inheriting one of his projects. To prepare, I have already ordered Michael Feathers' Working Effectively with Legacy Code. But this book, as well as most questions on legacy code I have found so far, is concerned with the case of inheriting code as-is. In this case I actually have access to the original developer, and we do have some time for an orderly hand-over. Some background on the piece of code I will be inheriting:

        - It's functioning: there are no known bugs, but as performance requirements keep going up, some optimizations will become necessary in the not too distant future.
        - Undocumented: there is pretty much zero documentation at the method and class level. What the code is supposed to do at a higher level, though, is well understood, because I have been writing against its API (as a black box) for years.
        - Only higher-level integration tests: there are only integration tests checking proper interaction with other components via the API (again, black box).
        - Very low-level, optimized for speed: because this code is central to an entire system of applications, a lot of it has been optimized several times over the years and is extremely low-level (one part has its own memory manager for certain structs/records).
        - Concurrent and lock-free: while I am very familiar with concurrent and lock-free programming and have actually contributed a few pieces to this code, this adds another layer of complexity.
        - Large codebase: this particular project is more than ten thousand lines of code, so there is no way I will be able to have everything explained to me.
        - Written in Delphi: I'm just going to put this out there, although I don't believe the language to be germane to the question, as I believe this type of problem to be language-agnostic.

    I was wondering how the time until his departure would best be spent. Here are a couple of ideas:

        - Get everything to build on my machine: even though everything should be checked into source code control, who hasn't forgotten to check in a file once in a while, so this should probably be the first order of business.
        - More tests: while I would like more class-level unit tests so that any bugs I introduce when making changes can be caught early on, the code as it is now is not testable (huge classes, long methods, too many mutual dependencies).
        - What to document: I think for starters it would be best to focus documentation on those areas of the code that would otherwise be difficult to understand, e.g. because of their low-level/highly optimized nature. I am afraid there are a couple of things in there that might look ugly and in need of refactoring/rewriting but are actually optimizations that were put there for a good reason that I might miss (cf. Joel Spolsky, Things You Should Never Do, Part I).
        - How to document: I think some class diagrams of the architecture and sequence diagrams of critical functions, accompanied by some prose, would be best.
        - Who should document: I was wondering what would be better - to have him write the documentation, or to have him explain it to me so that I can write the documentation. I am afraid that things that are obvious to him but not to me would otherwise not be covered properly.
        - Refactoring using pair programming: this might not be possible due to time constraints, but maybe I could refactor some of his code to make it more maintainable while he is still around to provide input on why things are the way they are.
Please comment on and add to this. Since there isn't enough time to do all of this, I am particularly interested in how you would prioritize.
