Search Results

Search found 32011 results on 1281 pages for 'chris good'.


  • Is it a good idea to write tests for environments other than development?

    - by jcollum
    Let's say I have a (fairly typical) set of environments: PROD, UAT, QA, DEV. Is it a good idea to run your tests across all environments? Here's what I'm thinking of. I have a proc in SQL that my code depends on; I'll call it proc_getActiveCustomers. If that proc isn't present my app will go south real fast. So I write a test that checks for the existence of this proc in the database. Nothing new here. But when I then deploy my app to the QA environment, would I also want a test that checks that environment for the existence of proc_getActiveCustomers? I think this is a good idea, but I've never heard much about testing in environments outside of development, which makes me wonder if there's some downside I'm not aware of. The direction I'm going is to have a list of environments in code and then pass the target environment into my unit test, as in the sketch below.
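
    A minimal sketch of that direction, assuming SQL Server reached via pyodbc; the connection strings and environment mapping are hypothetical, and only proc_getActiveCustomers comes from the question:

        import unittest

        import pyodbc

        # Hypothetical per-environment connection strings.
        CONNECTION_STRINGS = {
            "DEV": "DRIVER={ODBC Driver 17 for SQL Server};SERVER=dev-db;DATABASE=app;Trusted_Connection=yes",
            "QA":  "DRIVER={ODBC Driver 17 for SQL Server};SERVER=qa-db;DATABASE=app;Trusted_Connection=yes",
        }

        class ProcExistenceTest(unittest.TestCase):
            ENVIRONMENT = "DEV"  # the environment passed into the test run

            def test_proc_getActiveCustomers_exists(self):
                conn = pyodbc.connect(CONNECTION_STRINGS[self.ENVIRONMENT])
                try:
                    # OBJECT_ID(..., 'P') returns NULL when no stored procedure
                    # by that name exists in the target database.
                    row = conn.execute(
                        "SELECT OBJECT_ID(?, 'P')", "dbo.proc_getActiveCustomers"
                    ).fetchone()
                    self.assertIsNotNone(row[0])
                finally:
                    conn.close()

        if __name__ == "__main__":
            unittest.main()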

    Read the article

  • Python: What's a correct and good way to implement __hash__()?

    - by random-name
    What's a correct and good way to implement __hash__()? I am talking about the function that returns a hashcode that is then used to insert objects into hashtables, aka dictionaries. As __hash__() returns an integer and is used for "binning" objects into hashtables, I assume that the values of the returned integer should be uniformly distributed for common data (to minimize collisions). What's a good practice to get such values? Are collisions a problem? In my case I have a small class which acts as a container class holding some ints, some floats and a string.
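
    One common convention, shown here as a minimal sketch: make __eq__() and __hash__() derive from the same tuple of attributes, so that objects comparing equal always hash equally. The class and attribute names are made up for illustration:

        class Container:
            """Small container holding some ints, floats and a string."""

            def __init__(self, count: int, weight: float, label: str):
                self.count = count
                self.weight = weight
                self.label = label

            def _key(self):
                # The attributes that define equality, and therefore the hash.
                return (self.count, self.weight, self.label)

            def __eq__(self, other):
                if not isinstance(other, Container):
                    return NotImplemented
                return self._key() == other._key()

            def __hash__(self):
                # tuple hashing mixes the component hashes well enough for
                # dict/set use; occasional collisions are fine, since dicts
                # resolve them by also comparing keys for equality.
                return hash(self._key())

    Collisions only cost a little lookup time; correctness requires only that equal objects return the same hash.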

    Read the article

  • What are some good practice assignments for learning Java?

    - by HW
    Hello, I am a computer science student in my second year. I already know a good deal about C++, data structures, file structures, OOP, etc. I decided to learn Java. I have read a couple of books, but I know that it takes practice to master any programming language. I was wondering if anyone knew of some assignments or problems that helped them become good at programming. I am looking for something more challenging than "hello world" and "3+2=5" exercises. Thanks, ~HW

    Read the article

  • What is a good layout for a somewhat advanced home network and storage solution?

    - by Shaun
    My home network/storage needs are changing and I am searching for some opinions and starting points on what a good network/storage layout would be that can serve my needs for a few years into the future. I think I have a decent starting point for equipment, but I am also willing to invest fairly heavily in a solution that can last me for a while. I am a bit of a tech nerd and I have a moderate tolerance for setup of the solution. I would prefer if maintenance of the system were somewhat low once it is set up, but I am willing to accept some tradeoffs.

    Existing equipment:

    - Router - Netgear WNDR3700 (gigabit)
    - Router - DLink Gamerlounge DGL-4300 (gigabit)
    - Switch - 16 port Trendnet green switch (gigabit)
    - Switch - 5 port Trendnet green (gigabit)
    - Computer - i7-950 office computer (gigabit ethernet)
    - Computer - Q6600 quad core media center, hooked up to TV, records shows (gigabit ethernet)
    - Computer - Acer 1810T ultraportable laptop (gigabit and N ethernet)
    - NAS - Intel SS4200-E (gigabit)
    - External hard drive - 2TB WD Green drive (esata)
    - All kinds of miscellaneous network-connected TV, Bluray, Verizon network extender, HDHomeRun TV tuners, etc.

    Requirements:

    - Robust backup solution for a growing collection of huge family picture files and personal files, around 1.5TB (including offsite backup)
    - Central location for all users' files, while also keeping them secure from each other
    - Storage for terabytes of movie backups and recorded TV, and access to them from all computers (maybe around 4TB eventually)
    - Possibility to host files to friends and family easily

    Nice to have:

    - Backup of terabytes of movie backups

    Intriguing possibilities:

    - Capability to have users' Windows desktops and files look the same from all network computers

    I am not sure if the new Windows Home Server 2011 would fit into this well, whether I need a domain server, how best to organize my backups, or how to most effectively use RAID. Currently I am simply backing up all computers to a RAID 1 on the NAS box, which I was thinking could prevent a situation where I reach for a backup and find that the disk is corrupt. One possibility I am thinking about now is simply using my media center PC with a huge RAID of hard drives on which all files are stored. Pseudo-backup of all files would be present because of the RAID, but important files would also be backed up offsite via carrying hard drives to work. But what if corruption seeps into the files and the corrupted data is then backed up? Does RAID protect against this? I really want to take next to zero risks with the irreplaceable files. I can handle some degree of risk with the movies and other files. I'm looking for critiques on this idea as well as other possibilities. To summarize, my goals are high functionality, media capability, and robust backup of irreplaceable files.

    Read the article

  • What Apache/PHP configurations do you know and how good are they?

    - by FractalizeR
    Hello. I wanted to ask you about the PHP/Apache configuration methods you know, with their pros and cons. I will start myself:

    ---------------- PHP as Apache module ----------------
    Pros: good speed, since you don't need to start an exe every time, especially in mpm-worker mode. You can also use various PHP accelerators in this mode, like APC or eAccelerator.
    Cons: if you are running Apache in mpm-worker mode, you may face stability issues, because every glitch in any php script will lead to instability in the whole thread pool of that Apache process. Also, in this mode all scripts are executed on behalf of the apache user, which is bad for security. The mpm-worker configuration requires PHP compiled in thread-safe mode. At least the CentOS and RedHat default repositories don't have a thread-safe PHP version, so on these OSes you need to compile PHP yourself (there is a way to activate the worker mpm on Apache). The use of thread-safe PHP binaries is considered experimental and unstable. Plus, many PHP extensions do not support thread-safe mode or were not well tested in thread-safe mode.

    ---------------- PHP as CGI ----------------
    This seems to be the slowest default configuration, which seems to be a "con" in itself ;)

    ---------------- PHP as CGI via mod_suphp ----------------
    Pros: suphp allows you to execute php scripts on behalf of the script file's owner. This way you can securely separate different sites on the same machine. Also, suphp allows different php.ini files per virtual host.
    Cons: PHP in CGI mode means less performance. In this mode you can't use php accelerators like APC, because each time a new process is spawned to handle a script, the cache of the previous process is rendered useless. BTW, do you know a way to apply some accelerator in this config? I heard something about using shm for a php bytecode cache. Also, you cannot configure PHP via .htaccess files in this mode. You will need to install PECL htscanner if you need to set various per-script options via .htaccess (php_value / php_flag directives).

    ---------------- PHP as CGI via suexec ----------------
    This configuration looks the same as with suphp, but I heard that it's slower and less safe. Almost the same pros and cons apply.

    ---------------- PHP as FastCGI ----------------
    Pros: the FastCGI standard allows a single php process to handle several scripts before the process is killed. This way you gain performance, since there is no need to spin up a new php process for each script. You can also use PHP accelerators in this configuration (see the cons section for a comment). Also, FastCGI, almost like suphp, allows php processes to be executed on behalf of some user. mod_fcgid seems to have the most complete fcgi support and flexibility for Apache.
    Cons: the use of a php accelerator in fastcgi mode will lead to high memory consumption, because each PHP process will have its own bytecode cache (unless there is some accelerator that can use shared memory for the bytecode cache; is there such a thing?). FastCGI is also a little bit complex to configure. You need to create various configuration files and make some configuration modifications.

    It seems that fastcgi is the most stable, secure, fast and flexible PHP configuration, though a bit difficult to configure. But maybe I missed something? Comments are welcome!

    Read the article

  • What is a good alternative to the WPF WebBrowser Control?

    - by VoidDweller
    I have an MDI WPF app that I need to add web content to. At first glance it looks like I have two options built into the framework: the Frame control and the WebBrowser control. Given that this is an MDI app, it doesn't take long to discover that neither of these will work. The WPF WebBrowser control wraps up the IE WebBrowser ActiveX control, which uses the Win32 graphics pipeline. The "Airspace" issue pretty much sums this up as "Sorry, the layouts will not play nice together". Yes, I have thought about taking snapshots of the web content, rendering these, and mapping the mouse and keyboard events back to the browser control, but I can't afford the performance penalty and I really don't have time to write and thoroughly test it. I have looked for third-party controls, but so far I have only found Chris Cavanagh's WPF Chromium Web Browser control, which wraps up Awesomium 1.5. Together these are very cool, and they play nice with the WPF layouts. But they do not meet my performance requirements: they are VERY HEAVY on memory consumption and not too friendly with CPU usage either. Not to mention still quite buggy. I'll elaborate if you are interested. So, do any of you know of a stable, performant WPF web browser control? Thanks.

    Read the article


  • Manchester UG Presentation Video

    In July I was invited to speak at the UK SQL Server UG event in Manchester. I spoke about Excel being a good data mining client. I was a little rushed at the end, as Chris Testa-O'Neill told me I had only 5 minutes to go when I had only been talking for 10 minutes. Apparently I have a reputation for running over my time allocation. At the event we also had a product demo from SQL Sentry of their BI monitoring dashboard solution. This includes SSIS, but the main thrust was SSAS. Then came Chris with a look at Analysis Services. If you have never heard Chris talk, then take the opportunity now; he is a top-class presenter and I am often found sat at the back of his classes. Here is the video link.

    Read the article


  • Agile Testing Days 2012 – Day 2 – Learn through disagreement

    - by Chris George
    I think I was in the right place! During Day 1 I kept on reading tweets about the Lean Coffee that had happened earlier that morning. It intrigued me, and I figured in for a penny, in for a pound, and set my alarm for 6:45am. Following the award night the night before, it was _really_ hard getting up when it went off, but I did, and after a very early breakfast set off for the 10 min walk to the Dorint. With Lean Coffee due to start at 07:30, I arrived at the hotel and made my way to one of the hotel bars. I soon realised I was in the right place, as although the bar was empty, there was a table with post-its and pens! This MUST be the place!

    The premise of Lean Coffee is to have several small timeboxed discussions. Everyone writes down what they would like to discuss on post-its, which are then briefly explained and submitted to the pile. Once everyone is done, the group dot-votes on the topics. The topics are then sorted by the dot-vote counts and the discussions begin. Each discussion had 8 mins to start with, which prevented the discussions getting too far off topic. After the time elapsed, the group voted on whether to extend the discussion by a further 4 mins or move on. Several discussions were had around training, soft skills, etc. The conversations were really interesting and there were quite a few good ideas. Overall it was a very enjoyable experience, certainly worth the early start!

    Make Melly Happy

    Following Lean Coffee was real coffee, and much needed it was! The first keynote of the day was "Let's help Melly (Changing Work into Life)" by Jurgen Appelo.

    Draw lines to track happiness

    This was a very interesting presentation, and it set up the day nicely. The theme of the keynote was that projects are about the people, more so than the actual tasks. He started by showing a photo of an employee, 'Melly', who looked happy enough. He then stated that she looked happy but actually hated her job. In fact 50% of Americans hate their jobs. He went on to say that, the world over, 50% of people hate their jobs. Jurgen talked about many ways to reduce the feedback cycle, not only of the project, but of the people management: ideas such as happiness doors, happiness tracking (drawing lines on a wall indicating your happiness for that day), and kudo boxes (to compliment a colleague for good work). All of these (and more) ideas stimulate conversation amongst the team and lead to early detection of issues and investigation of solutions. I've massively simplified Jurgen's keynote and have certainly not done it justice, so I will post a link to the video once it's available.

    Following more coffee, the next talk was "How releasing faster changes testing" by Alexander Schwartz. This is a topic very close to our hearts at the moment, so I was eager to find out any juicy morsels that could help us achieve more frequent releases, and Alex did not disappoint. He started off by confirming something that I have been a firm believer in for a number of years now: adding more people can do more harm than good when trying to release. This is for a number of reasons, but just adding new people to a team at such a critical time can be more of a drain on resources than a help. The alternative is to have the whole team share responsibility for faster delivery, so the whole team is responsible for quality and testing. Obviously you will have the test engineers on the project who have the specialist skills, but there is no reason that the entire team cannot do exploratory testing on the product. This links nicely with the developer exploratory testing presented by Sigge on Day 1, and it is certainly something that my team are really striving towards. Focus on cycle time: what can be done to reduce the time between dev cycles and release cycles? What stops a release? What delays a release? All good solid questions that can be answered. Alex suggested that perhaps the product doesn't need to be fully tested. Doing less testing will reduce the cycle time and therefore get the release out faster. He suggested a risk-based approach to planning what testing needs to happen. Reducing testing could have an impact on revenue if it causes harm to customers, so test the 'right stuff'! Determine a set of tests that are 'face saving' or 'smoke' tests; these tests cover the core functionality of the product and aim to prevent major embarrassment if these areas were to fail! Amongst many other very good points, Alex suggested that a good approach would be to release after every new feature is added: do a bit of work -> release, do some more work -> release. By releasing small increments of work, the impact on the customer of bugs being introduced is reduced.

    Red Pill, Blue Pill

    The second keynote of the day was "Adaptation and improvisation – but your weakness is not your technique" by Markus Gartner, and it proved to be another very good presentation. It started off quoting lines from The Matrix which relate to adapting, improvising, realisation and mastery. It had a lot of nerds in the room smiling! Markus went on to explain how, through deliberate practice (and a lot of it!), you can achieve mastery, but then you never stop learning. Through methods such as code retreats, testing dojos and workshops you can continually improve and learn. The code retreat idea was one that interested me. It involved pairing to write an automated test for, say, 45 mins, then deleting all the code, finding a different partner and writing the same test again! This is another keynote where the video will speak louder than anything I can write here! Markus did elaborate on something that Lisa and Janet had touched on yesterday whilst busting the myth that "Testers Must Code". Whilst it is true that to be a tester you don't need to code, it is becoming more common that there is a crossover happening, where more testers are coding and more programmers are testing. Markus made a special distinction between programmers and developers, as testers develop test code, so this helped to make that clear.

    "Extending Continuous Integration and TDD with Continuous Testing" by Jason Ayers was my next talk after lunch. We already do CI and a bit of TDD on my project team, so I was interested to see what this continuous testing thing was all about and whether it would actually work for us. At the start of the presentation I was of the opinion that it just would not work for us, because our tests are too slow, and that would be the case for many people. Jason started off by setting the scene and saying that those doing TDD spend between 10-15% of their time waiting for tests to run. This can be reduced by testing less often or reducing the test time, but this then increases the risk of introduced bugs not being spotted quickly. Therefore, in comes Continuous Testing (CT). CT systems run your unit tests whenever you save some code and run them in the background so you can continue working. This is a really nice idea, but to do this your tests must be fast, independent and reliable. The latter two should be the case anyway, and the first is ideal, but hard! Jason made several suggestions to make tests fast. Firstly, keep the scope of the test small; secondly, spin off any expensive tests into a suite which is run, perhaps, overnight, or outside of the CT system at any rate. So this started to change my mind: perhaps we could re-engineer our tests and continuously run the quick ones to give an element of coverage. This talk was very interesting and I've already tried a couple of the tools mentioned on our product (Mighty Moose and NCrunch). Sadly, due to the way our solution is built, it currently doesn't work, but we will look at whether we can make this work, because this has the potential to be a mini-game-changer for us.

    Using the wrong data

    Gojko's Hierarchy of Quality

    The final keynote of the day was "Reinventing software quality" by Gojko Adzic. He opened the talk with the statement "We've got quality wrong because we are using the wrong data"! Gojko then went on to explain that we should judge a bug by whether the customer cares about it, not by whether we think it's important. Why spend time fixing issues that the customer just wouldn't care about, and release months later because of this? Surely it's better to release now and get customer feedback? This was another reference to the idea that it's better to build the right thing wrong than the wrong thing right. Get feedback early to make sure you're making the right thing. Gojko then showed something very analogous to Maslow's hierarchy of needs:

    - Successful – does it contribute to the business?
    - Useful – does it do what the user wants?
    - Usable – does it do what it's supposed to without breaking?
    - Performant/Secure – is it secure / is the performance acceptable?
    - Deployable/Functionally ok – can it be deployed without breaking?

    He then explained that user stories should focus on change. In other words, they should focus on the user's needs, not the user's process. Describe what the change will be and how that change will happen, then measure it!

    Networking and Beer

    Following the day's closing keynote, there were drinks and nibbles for the 'Networking' evening. This was a great opportunity to talk to people. I find approaching strangers very uncomfortable, but once again, when in Rome! Pete Walen and I had a long conversation about only fixing issues that the customer cares about versus fixing issues that make you proud of your software! Without saying much, and by asking the right questions, Pete made me re-evaluate my thoughts on the matter. Clever, very clever! Oh, and he 'bought' me a beer!

    My Takeaway Triple from Day 2:

    - Release small and release often, to minimize issues creeping in and get faster feedback from 'the real world'.
    - Focus on issues that the customers care about, not what we think is important.
    - It's okay to disagree with someone, even if they are well-respected agile testing gurus; that's how discussion and learning happens!

    Read the article

  • Udacity: Teaching thousands of students to program online using App Engine

    Join Fred Sauer & Iein Valdez as they talk with Steve Huffman, founder of Reddit and Hipmunk, and Chris Chew, senior software engineer at Udacity. Steve will share his experience teaching a course on web development using App Engine at Udacity, and Chris will talk about his experience building Udacity itself using App Engine. Submit your questions for Steve and Chris to answer live on air. From: GoogleDevelopers

    Read the article

  • What is a good standard exercise to learn the OO features of a language?

    - by FarmBoy
    When I'm learning a new language, I often program some mathematical functions to get used to the control flow syntax. After that, I like to implement some sorting algorithms to get used to the array/list constructs. But I don't have a standard exercise for exploring a language's OO features. Does anyone have a stock exercise for this? A good answer would naturally lend itself to inheritance, polymorphism, etc., for a programmer already comfortable with these concepts. An ideal answer would be one that could be communicated in a few words, without ambiguity, in the way that "implement mergesort" is completely unambiguous. (As an example, answering "design a game" is so vague as to be useless.) Any ideas? EDIT: I have to remark that the results here are somewhat ironic. 10 upvotes and (originally) 5 favorites suggest that this is a question others are interested in. Yet the most upvoted answer is one that says there is no good answer. Oh well. I think I'll look at the textbook below; I've found games useful in the past for OO.

    Read the article

  • Is this a good KVO-compliant way to model a mutable to-many relationship?

    - by andyvn22
    Say I'd like a mutable, unordered to-many relationship. For internal optimization reasons, it'd be best to store this in an NSMutableDictionary rather than an NSMutableSet. But I'd like to keep that implementation detail private. I'd also like to provide some KVO-compliant accessors, so:

        - (NSSet*)things;
        - (NSUInteger)countOfThings;
        - (void)addThings:(NSSet*)someThings;
        - (void)removeThings:(NSSet*)someThings;

    Now, it'd be convenient and less evil to provide accessors (private ones, of course, in my implementation file) for the dictionary as well, so:

        @interface MYClassWithThings ()
        @property (retain) NSMutableDictionary* keyedThings;
        @end

    This seems good to me! I can use accessors to mess with my keyedThings within the class, but other objects think they're dealing with a mutable, unordered (unkeyed!) to-many relationship. I'm concerned that several things I'm doing may be "evil" though, according to good style and Apple approval and whatnot. Have I done anything evil here? (For example, is it wrong not to provide setThings, since the things property is supposedly mutable?)

    Read the article

  • What's a good Java API for creating Word documents?

    - by Bill James
    I have a new app I'll be working on where I have to generate a Word document that contains tables, graphs, a table of contents and text. What's a good API to use for this? How sure are you that it supports graphs, ToCs, and tables? What are some hidden gotchas in using them? Some clarifications:

    - I can't output a PDF; they want a Word doc.
    - They're using MS Word 2003 (or 2007), not OpenOffice.
    - The application is running on a *nix app server.
    - It'd be nice if I could start with a template doc and just fill in some spaces with tables, graphs, etc.

    Thanks for the help. Edit: Several good answers below, each with their own faults as far as my current situation goes. Hard to pick a "final answer" from them. Think I'll leave it open, and hope for better solutions to be created. Edit: The OpenOffice UNO project does seem to be closest to what I asked for. While POI is certainly more mainstream, it's too immature for what I want.

    Read the article

  • Silverlight vs. WPF vs. WinForms: what is good specifically for my purpose?

    - by Cyril Gupta
    I am about to start a new Windows application, and the contenders for the platform are:

    - Windows Forms
    - WPF
    - Silverlight

    Now, my experience with WPF, at least in my last application, was not very encouraging (the app failed to run on the deployment machines and I had to re-do it in WinForms), so my confidence is shaken here. My app is for mass distribution (the last version had some 100,000+ installations), so I want to make absolutely sure that my users will be able to use it and enjoy it without any problems. I would love to create a nice interface, going the next step like a Flex or Silverlight or iPhone app, with animations and effects. So I would really like to go with WPF or Silverlight if I can. My needs are:

    - Good support for visuals and animation effects
    - Support for database connectivity
    - Support for printing (is there an equivalent of PrintDocument in Silverlight?)
    - Must not suffer from deployment troubles

    Silverlight is universal, but does it have printing support and a good controls toolset? WPF has printing support and a nice toolset, but can I depend on it? WinForms is dated already and is not so impressive, but should I go with it anyway? Your advice would be appreciated.

    Read the article

  • What is a good programming language/environment for Linux database applications?

    - by Dkellygb
    I could use some advice on my move from the Windows world to Linux. For my business, I have used VB6 and Microsoft Access, with both Access databases and SQL Server, in the past. The easy-to-use forms, report writers and programming language were perfect for CRUD apps and analysis for our small hotel/restaurant business. After using Linux at home for some time, I would like to convert our small business. Our server is already a Linux box using Samba. I am happy with the OpenOffice.org applications instead of Microsoft Office. The only thing holding me back is a desktop database application in which I can develop the forms and reports we require. Base does not seem to be up to the job yet, from my experience. I would like something like VB.NET with Visual Studio (Express), but I would like to avoid Mono; I just don't see the point of it. (You can correct me if I'm wrong.) A good collection of forms, controls and a good report writer would be ideal. I have looked at web-based stuff like Ruby on Rails, but I think a web server for our 5-PC network is overkill. I don't mind running a proper database on our Ubuntu 9.10 server. I may have exposed a few prejudices above, but my mind is open. Any thoughts?

    Read the article

  • What are some good ways to store performance statistics in a database for querying later?

    - by Nathan
    Goal: store arbitrary performance statistics of stuff that you care about (how many customers are currently logged on, how many widgets are being processed, etc.) in a database so that you can understand how your servers are doing over time.

    Assumptions: a database is already available, and you already know how to gather the information you want and are capable of putting it in the database however you like.

    Some ideal attributes of a solution:

    - Causes no noticeable performance hit on the server being monitored
    - Has a very high precision of measurement
    - Does not store useless or redundant information
    - Is easy to query (lends itself to gathering/displaying useful information)
    - Lends itself to being graphed easily
    - Is accurate
    - Is elegant

    Primary questions:

    1) What is a good design/method/scheme for triggering the storing of statistics?
    2) What is a good database design for how to actually store the data?

    Example answers... that are sort of vague and lame...

    1) I could, once per [fixed time interval], store a row of data with all the performance measurements I care about in each column of one big flat table, indexed by timestamp and/or server.
    2) I could have a daemon monitoring performance stuff I care about, and add a row whenever something changes (instead of at fixed time intervals) to a flat table as in #1.
    3) I could trigger either as in #2, but store information about each aspect of performance that I'm measuring in separate tables, opening up the possibility of adding tons of rows for often-changing items, and few rows for seldom-changing items. Etc.

    In the end, I will implement something, even if it's some super-braindead approach I make up myself, but I'm betting there are some really smart people out there willing to share their experiences and bright ideas!
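
    For concreteness, a minimal sketch of one narrow-table variant of these ideas: one row per sample, keyed by timestamp, server and metric name. SQLite is used here only for brevity, and all table, column and metric names are illustrative:

        import sqlite3
        import time

        conn = sqlite3.connect("stats.db")
        conn.execute("""
            CREATE TABLE IF NOT EXISTS metric_sample (
                recorded_at REAL NOT NULL,  -- unix timestamp of the sample
                server      TEXT NOT NULL,  -- which host produced it
                metric      TEXT NOT NULL,  -- e.g. 'customers_logged_on'
                value       REAL NOT NULL
            )""")
        # Index chosen for the common query: one metric graphed over time.
        conn.execute("CREATE INDEX IF NOT EXISTS ix_metric_time"
                     " ON metric_sample (metric, recorded_at)")

        def record(server: str, metric: str, value: float) -> None:
            # One small insert per sample keeps the load on the monitored box low.
            conn.execute("INSERT INTO metric_sample VALUES (?, ?, ?, ?)",
                         (time.time(), server, metric, value))
            conn.commit()

        record("web01", "customers_logged_on", 42)

    The narrow layout trades a wider row for schema stability: adding a new statistic is just a new metric name, not an ALTER TABLE.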

    Read the article

  • Is BinaryFormatter in C# a good way to read files?

    - by mr-pac
    I want to read a binary file which was created outside of my program. One obvious way in C# to read a binary file is to define a class representing the file, then use a BinaryReader, read from the file via the Read* methods, and assign the return values to the class properties. What I don't like about this approach is that I manually have to write code that reads the file, even though the defined structure already represents how the file is stored. I also have to keep the order correct when I read. After looking around a bit, I came across the BinaryFormatter, which can automatically serialize and deserialize objects in binary format. One great advantage would be that I can read and also write the file without creating additional code. However, I wonder if this approach is good for files created by other programs, and not just for serialized .NET objects. Take, for example, a graphics format file like BMP. Would it be a good idea to read the file with a BinaryFormatter, or is it better to read and write manually via BinaryReader and BinaryWriter? Or are there other approaches which suit better? I'm not looking for concrete examples, just advice on the best way to implement this.
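
    For reference, the manual field-by-field approach the question describes looks like this in outline; Python's struct module stands in for C#'s BinaryReader just to keep the sketch short, and the file format (a 4-byte magic, a 32-bit count, then that many floats) is entirely hypothetical:

        import struct

        def read_records(path: str):
            with open(path, "rb") as f:
                magic = f.read(4)
                if magic != b"DEMO":  # hypothetical magic number
                    raise ValueError("not a DEMO file")
                # '<i' = little-endian 32-bit signed int; field order must
                # match the file layout exactly, which is the chore the
                # question wants to avoid.
                (count,) = struct.unpack("<i", f.read(4))
                # Read `count` little-endian 32-bit floats in one call.
                return struct.unpack("<%df" % count, f.read(4 * count))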

    Read the article

  • HTTP POST with URL query parameters -- good idea or not?

    - by Steven Huwig
    I'm designing an API to go over HTTP and I am wondering if using the HTTP POST method, but with URL query parameters only and no request body, is a good way to go. Considerations:

    - "Good Web design" requires non-idempotent actions to be sent via POST. This is a non-idempotent action.
    - It is easier to develop and debug this app when the request parameters are present in the URL.
    - The API is not intended for widespread use.
    - It seems like making a POST request with no body will take a bit more work, e.g. a Content-Length: 0 header must be explicitly added.
    - It also seems to me that a POST with no body is a bit counter to most developers' and HTTP frameworks' expectations.

    Are there any more pitfalls or advantages to sending parameters on a POST request via the URL query rather than the request body? Edit: The reason this is under consideration is that the operations are not idempotent and have side effects other than retrieval. See the HTTP spec:

    In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe". This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested. ... Methods can also have the property of "idempotence" in that (aside from error or expiration issues) the side-effects of N > 0 identical requests is the same as for a single request. The methods GET, HEAD, PUT and DELETE share this property. Also, the methods OPTIONS and TRACE SHOULD NOT have side effects, and so are inherently idempotent.
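
    A minimal sketch of what this looks like from a client, using Python's third-party requests library; the endpoint and parameter names are hypothetical:

        import requests

        resp = requests.post(
            "https://api.example.com/widgets/frobnicate",  # hypothetical endpoint
            params={"id": "1234", "mode": "fast"},  # serialized into the URL query string
            data=b"",  # empty body; requests sends the Content-Length: 0 header itself
        )
        print(resp.status_code, resp.url)  # resp.url shows the final URL with its query string

    Server frameworks generally parse query parameters the same way regardless of the HTTP method, which is what makes this pattern workable in the first place.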

    Read the article

  • Is there "good" PRNG generating values without hidden state?

    - by actual
    I need a good pseudo-random number generator that can be computed like a pure function of its previous output, without any hidden state. By "good" I mean:

    1. I must be able to parametrize the generator in such a way that running it for 2^n iterations with any parameters covers all or almost all values between 0 and 2^n - 1, where n is the number of bits in the output value.
    2. The combined generator output of n + p bits must cover all or almost all values between 0 and 2^(n + p) - 1 if I run it for 2^n iterations for every possible combination of its parameters, where p is the number of bits in the parameters.

    For example, an LCG can be computed like a pure function and it can meet the first condition, but it cannot meet the second one. Say we have a 32-bit generator, m = 2^32 and it is constant, and our p = 64 (two 32-bit parameters a and c), so n + p = 96 and we must take the output three ints at a time to meet the second condition. Unfortunately, that condition cannot be met, because of the strictly alternating sequence of odd and even ints in the output. To overcome this, hidden state must be introduced, but that makes the function not pure and breaks the first condition (the period becomes much longer). Am I wanting too much?
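
    To make the LCG example concrete, here is a minimal sketch of the pure-function form that satisfies the first condition; the parameter values shown are the well-known Numerical Recipes constants, used purely for illustration:

        M = 2 ** 32  # fixed modulus, n = 32 output bits

        def lcg_next(prev: int, a: int, c: int) -> int:
            # Pure function: the next value depends only on the previous
            # output and the parameters (a, c); there is no hidden state.
            return (a * prev + c) % M

        # By the Hull-Dobell theorem the period is the full 2**32 (condition 1)
        # whenever c is odd and a % 4 == 1, as with these constants:
        x = 0
        for _ in range(5):
            x = lcg_next(x, a=1664525, c=1013904223)
            print(x)

    Note that with a and c both odd, the low bit of successive outputs strictly alternates, which is exactly the structure the question points to as the reason the second condition fails.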

    Read the article
