Search Results

  • What to do before trying to benchmark

    - by user23950
    What should I do before trying to benchmark my computer? I've got these tools for benchmarking: 3DMark, Cinebench, Geekbench, Juarez DX10, and Open Source Mark. Do I need to run a full spyware and virus scan before proceeding? What else should I do in order to get accurate readings?

    Read the article

  • Security considerations in providing VPN access to non-company issued computers [migrated]

    - by DKNUCKLES
    There have been a few people at my office who have requested the installation of DropBox on their computers so they can synchronize files and work on them at home. I have always been wary about cloud computing, mainly because we are a Canadian company and value our privacy and the fact that we are outside the reach of the Patriot Act. The policy before I started was that employees with company-issued notebooks could be issued a VPN account, and everyone else had to use a remote desktop connection. The theory behind this logic (as I understand it) was that we had the ability to lock down the notebooks, whereas the employees' home computers were outside of our grasp. We had no way to ensure they weren't running as administrator all the time or that they were running AV, so they were at a higher risk of being infected with malware and could compromise network security. With the increase in people wanting DropBox, I'm curious whether this policy is too restrictive and overly paranoid. Is it generally safe to provide VPN access to an employee without knowing what their computing environment looks like?

    Read the article

  • powershell vs GPO for installation, configuration, maintenance

    - by user52874
    My question is about using PowerShell scripts to install, configure, update and maintain Windows 7 Pro/Ent workstations in a 2008 R2 domain, versus using GPO/ADMX/MSI. Here's the situation: because of a comedy of cumulative corporate bumpfuggery, we suddenly found ourselves having to design, configure and deploy a full Windows Server 2008 R2 and Windows 7 Pro/Enterprise environment on a very short notice and delivery schedule. Of course, I'm not a Windows expert by any means, and we're so understaffed that our buzzword bingo includes 'automate' and 'one-button' and 'it needs to Just Work'. (FWIW, I started with DEC, then moved on to Solaris and Cisco, then Linux of various flavors with a smattering of BSD nowadays. I use Windows for email and to fill out forms.) So we decided to bring in a contractor to do this for us, and they met the deadline. The system is up and mostly usable, and this is good. We would not have been able to do this ourselves. But it's the 'mostly' part that is proving to be the PIMA now, and I'm having to learn Microsoft stuff anyway until/if we can get a new contract with these guys for ongoing operations. Here's my question: the contractor used PowerShell almost exclusively for deployment, configuration and updating. My intensive reading over the last week leads me to think that the generally accepted practice for deploying, configuring and updating Microsoft environments uses elements of GPOs and ADMX templates, along with maybe some third-party stuff like PolicyPak. Are there solid reasons that I've not found yet why PowerShell scripts would be preferred over the GPO methods? I'm going to discuss this with the contractor lead when he gets back from his vacation, and he'll be straight with me (I don't think they set us up). But I can also see this might be a religious issue, so I would still like some background on it. Thoughts or weblinks? Thanks!

    Read the article

  • How to give write permissions to multiple users?

    - by Daniel Rikowski
    I have a web server and I'm uploading files using an FTP client. Because of that, the owner and group of each uploaded file are set to the account used for the upload. Now I have to make these files writable by the web server (apache/apache). One way would be to just change the owner and group of the uploaded file to apache/apache, but then I can no longer modify the file using the FTP account. Another way would be to give the file 777 permissions. Neither approach seems very professional, and both are a little risky. Are there any other options? In Windows I can just add another user to the file. Can something similar be done with Linux?
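
    One common answer is either a shared group (put the FTP account and apache into the same group and make the files group-writable) or a POSIX ACL that grants apache write access without touching the owner or group at all. Below is a minimal Python sketch of the group approach; the group name "webedit" and the file path are made up for illustration, and it assumes the script runs with enough privilege to chown.

        import os
        import grp
        import stat

        UPLOAD_PATH = "/var/www/uploads/report.html"   # hypothetical uploaded file
        SHARED_GROUP = "webedit"                        # hypothetical group containing the FTP user and apache

        gid = grp.getgrnam(SHARED_GROUP).gr_gid

        # Keep the FTP user as owner (-1 = leave unchanged), switch only the group.
        os.chown(UPLOAD_PATH, -1, gid)

        # Add group read/write on top of whatever mode the upload already has.
        mode = os.stat(UPLOAD_PATH).st_mode
        os.chmod(UPLOAD_PATH, mode | stat.S_IRGRP | stat.S_IWGRP)

    The ACL route (setfacl -m u:apache:rw file) leaves ownership alone entirely, and a setgid bit on the upload directory keeps newly uploaded files in the shared group automatically.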

    Read the article

  • Getting a Cross-Section from Two CSV Files

    - by Jonathan Sampson
    I have two CSV files that I am working with. One is massive, with about 200,000 rows. The other is much smaller, with about 12,000 rows. Both fit the same format of names and email addresses (everything is legit here, no worries). Basically I'm trying to get only a subset of the second list by removing all values that presently exist in the larger file. So, List A has ~200k rows, and List B has ~12k. These lists overlap a bit, and I'd like to remove all entries from List B if they also exist in List A, leaving me with new and unique values only in List B. I've got a few tools at my disposal that I can use. OpenOffice is loaded on this machine, along with MySQL (queries are alright). What's the easiest way to create a third CSV containing just that data (the rows of List B that don't appear in List A)?
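
    If OpenOffice or MySQL feel like overkill for a one-off job, a short script is another option. Here is a minimal Python sketch, assuming both files put the email address in the second column (adjust the index if your layout differs) and that matching should be case-insensitive on email; the file names are placeholders.

        import csv

        def emails(path):
            """Collect the lowercased email column from a CSV file."""
            with open(path, newline="") as f:
                return {row[1].strip().lower() for row in csv.reader(f) if len(row) > 1}

        already_in_a = emails("list_a.csv")             # the ~200k master list

        with open("list_b.csv", newline="") as src, \
             open("list_b_unique.csv", "w", newline="") as out:
            writer = csv.writer(out)
            for row in csv.reader(src):                 # the ~12k list
                if len(row) > 1 and row[1].strip().lower() not in already_in_a:
                    writer.writerow(row)                # keep only rows not present in List A

    The same set-difference idea works in MySQL with a LEFT JOIN ... WHERE b_side.email IS NULL, if you'd rather load both files into tables.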

    Read the article

  • RAIDs with a lot of spindles - how to safely put to use the "wasted" space

    - by kubanczyk
    I have a fairly large number of RAID arrays (server controllers as well as midrange SAN storage) that all suffer from the same problem: barely enough spindles to keep up with peak I/O, and tons of unused disk space. I guess it's a universal issue, since vendors now offer 300 GB as their smallest drive capacity, but random I/O performance hasn't really grown much since the days when the smallest drives were 36 GB. One example is a database that holds 300 GB and needs random performance of 3200 IOPS, so it gets 16 disks (4800 GB minus 300 GB, and we have 4.5 TB of wasted space). Another common example is the redo logs for an OLTP database that is sensitive to response time. The redo logs get their own 300 GB mirror but take 30 GB: 270 GB wasted. What I would like to see is a systematic approach for both Linux and Windows environments. How do you set up the space so the sysadmin team is reminded of the risk of hindering the performance of the main db/app? Or, even better, is protected from that risk? The typical situation that comes to my mind is "oh, I have this very large zip file, where do I uncompress it? Umm, let's see the df -h and we'll figure something out in no time..." I don't put the emphasis on strictness of security (sysadmins are trusted to act in good faith), but on the overall simplicity of the approach. For Linux, it would be great to have a filesystem customized to cap the I/O rate at a very low level - is this possible?
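
    On Linux, one way to approximate that "capped" filesystem without a special filesystem is to mount the leftover space normally but throttle whoever uses it with the blkio cgroup controller (cgroup v1), which can limit read/write bytes per second per block device. A rough sketch follows; the cgroup name, device and limit are invented for illustration, and it assumes a v1 blkio hierarchy mounted at /sys/fs/cgroup/blkio.

        import os

        CGROUP = "/sys/fs/cgroup/blkio/scratch_lowprio"   # hypothetical cgroup for scratch-space users
        DEVICE = "/dev/sdb"                               # hypothetical disk holding the leftover space
        LIMIT_BPS = 5 * 1024 * 1024                       # 5 MB/s cap; pick whatever won't hurt the main DB

        os.makedirs(CGROUP, exist_ok=True)

        # blkio throttling is keyed on the whole disk's major:minor numbers.
        st = os.stat(DEVICE)
        majmin = "%d:%d" % (os.major(st.st_rdev), os.minor(st.st_rdev))

        for knob in ("blkio.throttle.read_bps_device", "blkio.throttle.write_bps_device"):
            with open(os.path.join(CGROUP, knob), "w") as f:
                f.write("%s %d" % (majmin, LIMIT_BPS))

        # Any shell that should be throttled then gets its PID added to the cgroup:
        #   echo $$ > /sys/fs/cgroup/blkio/scratch_lowprio/cgroup.procs

    On cgroup v2 the equivalent knob is io.max, and ionice with the idle class is a coarser but simpler alternative; a separate mount point with a telling name (e.g. /scratch-slow) covers the "remind the sysadmin" half of the question.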

    Read the article

  • Best way to copy large amount of data between partitions

    - by skinp
    I'm looking to transfer data between two logical volumes on an HP-UX server. I have a couple of those transfers to do, some of which are mostly binary (Oracle tablespaces...) and some of which are more text files (logs...). The used data size of the volumes is between 100 GB and 1 TB. Also, I will be changing the block size from 1K to 8K on some of these partitions... Things I'm looking for:

    - Guarantees data integrity
    - Fastest data transfer speed
    - Keeps file ownership and permissions

    Right now, I've thought about dd, cp and rsync, but I'm not sure which is the best one to use or the best way to use them...
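
    Whichever tool does the copy, the "guarantees data integrity" requirement is easy to bolt on afterwards: hash both trees and compare. A small Python sketch is below; the mount points are placeholders, it assumes both volumes are mounted and readable, and it only verifies file content, not ownership or permissions.

        import hashlib
        import os

        def tree_hashes(root):
            """Map each file's path (relative to root) to its SHA-256 digest."""
            digests = {}
            for dirpath, _dirs, files in os.walk(root):
                for name in files:
                    full = os.path.join(dirpath, name)
                    h = hashlib.sha256()
                    with open(full, "rb") as f:
                        for chunk in iter(lambda: f.read(1024 * 1024), b""):
                            h.update(chunk)
                    digests[os.path.relpath(full, root)] = h.hexdigest()
            return digests

        src = tree_hashes("/mnt/old_lv")   # placeholder mount points
        dst = tree_hashes("/mnt/new_lv")

        print("missing on target:", sorted(set(src) - set(dst)))
        print("content mismatch:", sorted(p for p in src if p in dst and src[p] != dst[p]))

    For the copy itself, rsync -aH already keeps permissions, ownership and hard links (ownership requires running as root), and re-running it with --checksum is another way to get a verification pass.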

    Read the article

  • What command line tools for monitoring host network activity on linux do you use?

    - by user27388
    What command line tools are good for reliably monitoring network activity? I have used ifconfig, but an office colleague said that its statistics are not always reliable. Is that true? I have recently used ethtool, but is it reliable? What about just looking at /proc/net 'files'? Is that any better?

    Edit: I'm interested in packets Tx/Rx, bytes Tx/Rx, but most importantly drops or errors and why the drop/error might have occurred.
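
    For per-interface counters (including drops and errors), /proc/net/dev is the same source most of those tools read, so polling it directly is a reasonable sanity check. A small Python sketch that prints RX/TX bytes, packets, errors and drops per interface, following the standard /proc/net/dev column layout:

        def read_net_dev(path="/proc/net/dev"):
            stats = {}
            with open(path) as f:
                for line in f.readlines()[2:]:          # the first two lines are headers
                    iface, data = line.split(":", 1)
                    cols = [int(x) for x in data.split()]
                    # columns 0-3 are rx bytes/packets/errs/drop, 8-11 are the tx equivalents
                    stats[iface.strip()] = {
                        "rx_bytes": cols[0], "rx_packets": cols[1],
                        "rx_errs": cols[2], "rx_drop": cols[3],
                        "tx_bytes": cols[8], "tx_packets": cols[9],
                        "tx_errs": cols[10], "tx_drop": cols[11],
                    }
            return stats

        if __name__ == "__main__":
            for iface, counters in sorted(read_net_dev().items()):
                print(iface, counters)

    It won't tell you why a packet was dropped; for that, ethtool -S <interface> exposes the driver's own, usually more detailed, counters.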

    Read the article

  • running commands as other users - best method

    - by linuxrawkstar
    When running commands as other users from the command line, what is the recommended best practice? In the past I've used sudo like so:

        sudo -u username command [args]

    I've been told (with no specific reasons why) that using sudo for this purpose is wrong. I'd like to know why. Is there some "best way" to accomplish this? For example, I've also used the su command like so:

        su username - -c "command [args]"

    I can't imagine why either of these methods would be "bad". Your thoughts?

    Read the article

  • Suspicious activity in access logs - someone trying to find phpmyadmin dir - should I worry?

    - by undefined
    I was looking over the access logs for a server that we are running on Amazon Web Services. I noticed that someone was obviously trying to find the phpMyAdmin directory; they (or a bot) were trying different paths, e.g. admin/phpmyadmin/, db_admin, ... and the list goes on. Actually, there isn't a database on this server, so this was not a problem (they were never going to find it), but should I be worried about such snooping? Is this just a really basic attempt at getting into our system? Our database is actually held on another managed server, which I assume is protected from such intrusions. What are your views on such sneaky activity?
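
    This kind of probing is constant background noise on any public IP, so by itself it is usually nothing to lose sleep over, but it is cheap to keep an eye on. A small Python sketch that tallies requests for common admin paths per client IP; the log path is a placeholder and the script assumes a combined-format access log where the client IP is the first field.

        import re
        from collections import Counter

        LOG = "/var/log/httpd/access_log"        # adjust to wherever your vhost logs
        PROBE = re.compile(r"phpmyadmin|pma/|db_admin|mysqladmin", re.I)

        hits = Counter()
        with open(LOG) as f:
            for line in f:
                if PROBE.search(line):
                    hits[line.split()[0]] += 1   # combined log format starts with the client IP

        for ip, count in hits.most_common(10):
            print("%-15s %d probe requests" % (ip, count))

    If the noise ever becomes more than noise, tools like fail2ban can automate banning IPs that repeatedly 404 on admin paths.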

    Read the article

  • Best Linux Distribution [closed]

    - by kamalbhai
    Hi, I am currently on Windows 7 on a newly bought Dell laptop, and I want to install Linux too. I have used Ubuntu 10.10 before; now I want to try a different flavour of Linux which has good audio/video options and is security-enhanced. Right now I have the following distributions: Ubuntu 10.10, OpenSUSE 11.0, and Fedora 13. Among the three mentioned above, which might be the best for learning things and getting closer to Linux? I am a student and eager to learn a lot of new things, so which of the above would be the best for me?

    Read the article

  • How do you keep your business rules DRY?

    - by Mario
    I periodically ponder how to best design an application whose every business rule exists in just a single location. (While I know there is no proverbial “best way” and that designs are situational, people must have a leaning toward one practice or another.) I work for a shop where they prefer to house as much of the business rules as possible in the database. This requires developers in many cases to perform identical front-end validations to avoid sending data to the database that will result in an exception—not very DRY. It grates me anytime I find myself duplicating any kind of logic—even lowly validation logic. I am a single-point-of-truth purist to an anal degree. On the other end of the spectrum, I know of shops that create dumb databases (the Rails community leans in this direction) and handle all of the business logic in a separate tier (in Rails the models would house “most” of this). Note the word “most”, which implies that some business logic does end up spilling into other places (in Rails it might spill over into the controllers). In a way, a clean separation of concerns where all business logic exists in a single core location is a Utopian fantasy that’s hard to uphold (n-tiered architecture or not). Furthermore, I see the “database as a fortress” view and would agree that the database should be built on constraints that cause it to reject bad data. As such, I hold principles that cause a degree of angst as I attempt to balance them. How do you balance the database-as-a-fortress view with the desire to have a single point of truth?

    Read the article

  • Correctly use dependency injection

    - by Rune
    Two colleagues and I are trying to understand how to best design a program. For example, I have an interface ISoda and multiple classes that implement that interface, like Coke, Pepsi, DrPepper, etc. My first colleague is saying that it's best to put these items into a database as key/value pairs. For example:

        Key      | Name
        ---------+----------------------------------
        Coke     | my.namespace.Coke, MyAssembly
        Pepsi    | my.namespace.Pepsi, MyAssembly
        DrPepper | my.namespace.DrPepper, MyAssembly

    ... then have XML configuration files that map the input to the correct key, query the database for the key, then create the object. I don't have any specific reasons, but I just feel that this is a bad design; I don't know what to say or how to correctly argue against it. My second colleague is suggesting that we micro-manage each of these classes, so basically the input would go through a switch statement, something similar to this:

        ISoda soda;
        switch (input)
        {
            case "Coke":
                soda = new Coke();
                break;
            case "Pepsi":
                soda = new Pepsi();
                break;
            case "DrPepper":
                soda = new DrPepper();
                break;
        }

    This seems a little better to me, but I still think there is a better way to do it. I've been reading up on IoC containers for the last few days and they seem like a good solution. However, I'm still very new to dependency injection and IoC containers, so I don't know how to correctly argue for them. Or maybe I'm the wrong one and there's a better way to do it? If so, can someone suggest a better method? What kind of arguments can I bring to the table to convince my colleagues to try another method? What are the pros/cons? Why should we do it one way? Unfortunately, my colleagues are very resistant to change, so I'm trying to figure out how I can convince them.
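
    For what it's worth, the middle ground between the database table and the switch statement is some kind of in-code registry, which is essentially what an IoC container formalizes: register a name-to-type mapping once, and resolve by key everywhere else. A tiny sketch of the idea in Python, purely for illustration; in C# the same shape would be a Dictionary<string, Func<ISoda>> or a container registration.

        class Soda:                        # stand-in for the ISoda interface
            def drink(self):
                raise NotImplementedError

        class Coke(Soda):
            def drink(self):
                return "drinking a Coke"

        class Pepsi(Soda):
            def drink(self):
                return "drinking a Pepsi"

        # The "registration" step happens exactly once, in one place.
        SODA_REGISTRY = {
            "Coke": Coke,
            "Pepsi": Pepsi,
        }

        def create_soda(name):
            """Resolve a soda by key; unknown keys fail loudly instead of silently."""
            try:
                return SODA_REGISTRY[name]()
            except KeyError:
                raise ValueError("no soda registered under %r" % name)

        print(create_soda("Pepsi").drink())

    The argument against both the database lookup and the switch is the same: adding a new soda should mean adding one registration (or one class the container can discover), not editing a shared table, an XML file and a switch statement in several places.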

    Read the article

  • Reading component parameters and setting defaults

    - by donut
    I'm pulling my hair out over this because it should be simple, but I can't get it to do the right thing. What's the best-practice way of doing the following? I've created a custom component that extends <s:Label>, so the Label now has the additional properties color2, color3 and value, which can be set like this and are used in the skin:

        <s:CustomLabel text="Some text" color2="0x939202" color3="0x999999" value="4.5" />

    If I don't supply these parameters, I'd like some defaults to be set. So far I've had some success, but it doesn't work 100% of the time, which leads me to think that I'm not following best practices when setting those defaults. I do it now like this:

        [Bindable] private var myColor2:uint = 0x000000;
        [Bindable] private var myColor3:uint = 0x000000;
        [Bindable] private var myValue:Number = 10.0;

    Then in the init function I do a variation of this to set the default:

        myValue = hostComponent.value;
        myValue = (hostComponent.value) ? hostComponent.value : 4.5;

    Sometimes it works, sometimes it doesn't, depending on the type of variable I'm trying to set. I eventually decided to read them as Strings and then convert them to the desired type, but that also only seems to work half the time.

    Read the article

  • GAE modeling relationship options

    - by Sway
    Hi there, I need to model the following situation and I can't seem to find a consistent example of how to do it "correctly" for Google App Engine. Suppose I've got a simple situation like the following:

        [Company] 1 ----- M [Store]

    A company has one to many stores. Each store has an address made up of an address line 1, city, state, country, postcode, etc. OK. Let's say we need to create an "Audit". An Audit is for a company and can be across one to many stores. So something like:

        [Audit] 1 ------ 1 [Company] 1 ------ M [Store]

    Now we need to query all of the "audits" based on the Store "addresses" in order to send the "Auditors" to the right locations. There seem to be numerous articles like this one: http://code.google.com/appengine/articles/modeling.html which give examples of creating a "ContactCompany" model class. However, they also say that you should use this kind of relationship only when you "really need to" and with "care" for performance. I've also read, frequently, that you should denormalize as much as possible, thereby moving all of the query-able data into the Audit class. So what would you suggest as the best way to solve this? I've seen that there is an Expando class, but I'm not sure if that is the "best" option for this. Any help or thoughts on this would be totally appreciated. Thanks in advance, Matt
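
    One common shape for this on the (legacy) App Engine db API is reference properties for the natural relationships plus denormalised list properties on Audit holding just the address fields you actually filter on; list properties match when any element matches, so one equality filter finds every audit that touches a given city. The sketch below is only an illustration of that idea, not a drop-in model.

        from google.appengine.ext import db

        class Company(db.Model):
            name = db.StringProperty(required=True)

        class Store(db.Model):
            company = db.ReferenceProperty(Company, collection_name='stores')
            address_line1 = db.StringProperty()
            city = db.StringProperty()
            state = db.StringProperty()
            country = db.StringProperty()
            postcode = db.StringProperty()

        class Audit(db.Model):
            company = db.ReferenceProperty(Company, collection_name='audits')
            store_keys = db.ListProperty(db.Key)        # the one-to-many stores being audited
            # denormalised copies of the queryable address fields
            cities = db.StringListProperty()
            postcodes = db.StringListProperty()

        # "Which audits involve a store in Melbourne?" becomes a single filter:
        melbourne_audits = Audit.all().filter('cities =', 'Melbourne').fetch(100)

    The cost of the denormalisation is that the Audit entity has to be rewritten whenever a store's address changes, which is the usual trade-off the modeling article alludes to; Expando doesn't really buy anything here since the property set is fixed.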

    Read the article

  • Rails: Create method available in all views and all models

    - by smotchkkiss
    I'd like to define a method that is available in both my views and my models. Say I have a view helper:

        def foo(s)
          "hello #{s}"
        end

    A view might use the helper like this:

        <div class="data"><%= foo(@user.name) %></div>

    However, this <div> will be updated with a repeating ajax call. I'm using a to_json call in a controller that returns data like so:

        render :text => @item.to_json(:only => [...], :methods => [:foo])

    This means that I have to have foo defined in my Item model as well:

        class Item
          def foo
            "hello #{name}"
          end
        end

    It'd be nice if I could have a DRY method that could be shared by both my views and my models. Usage might look like this:

    Helper:

        def say_hello(s)
          "hello #{s}"
        end

    User.rb model:

        def foo
          say_hello(name)
        end

    Item.rb model:

        def foo
          say_hello(label)
        end

    View:

        <div class="data"><%= item.foo %></div>

    Controller:

        def observe
          @items = item.find(...)
          render :text => @items.to_json(:only=>[...], :methods=>[:foo])
        end

    IF I'M DUMB, please let me know. I don't know the best way to handle this, but I don't want to completely go against best practices here. If you can think of a better way, I'm eager to learn!

    Read the article

  • How to Manage CSS Explosion

    - by Jason
    I have been heavily relying on CSS for a website that I am working on (currently, everything is done as property values within each tag on the website, and I'm trying to get away from that to make updates significantly easier). The problem I am running into is that I'm starting to get a bit of "CSS explosion" going on. It is becoming difficult for me to decide how to best organize and abstract data within the CSS file. For example, I am using a large number of div tags within the website (previously it was completely table-based), so I'm starting to get a lot of CSS that looks like this...

        div.title {
            background-color: Blue;
            color: White;
            text-align: center;
        }

        div.footer {
            /* Stuff Here */
        }

        div.body {
            /* Stuff Here */
        }

    etc. It's not too bad yet, but since I am learning here, I was wondering if recommendations could be made on how best to organize the various parts of a CSS file. What I don't want to get to is a point where I have a separate CSS attribute for every single thing on my website (which I have seen happen), and I always want the CSS file to be fairly intuitive. (P.S. I do realize this is a very generic, high-level question. My ultimate goal is to make it easy to use the CSS files and demonstrate their power to increase the speed of web development, so other individuals that may work on this site in the future will also get into the practice of using them rather than hard-coding values everywhere.)

    Read the article

  • How large a role does subjectiveness play in programming?

    - by Bob
    I often read about the importance of readability and maintainability. Or, I read very strong opinions about which syntax features are bad or good. Or discussions about the values of certain paradigms, like OOP. Aside from that, this same question floats about in my mind whenever I read debates on SO or Meta about subjective questions. Or read questions about best practices and sometimes find myself or others disagreeing. What role does subjectiveness play within the programming realm? Sometimes I think it plays a large role. Software developers are engineers in a way, but also people. A large part of programming is dealing with code that's human readable. This is very different from Math or Physics or other disciplines with very exact and structured rules. Here the exact structure and rules are largely up in the air, changeable on a whim, and hence the amount of languages in existence. And one person may find one language very readable, and another person may find their own language the most comforting. The same with practices. One person may not like certain accepted practices. I myself find splitting classes into different files very unreadable, for instance. But, I can't say rules haven't helped in general. Certain practices have and do make life easier. And new languages have given rise to syntax and structure that make life easier. There's certainly been a progression towards code that is easier to read and maintain even given a largely diverse group of people. So maybe these things aren't as subjective as I thought. It reminds me, in a way, of UI design. Certainly it's subjective, but then there's an entire discipline involved in crafting good UI and it tends to work. Is there something non-subjective about the ideas behind maintainability, readability, and other best practices? Is there something tangible to grasp when one develops a new language or thinks of new practices?

    Read the article

  • Extension methods for encapsulation and reusability

    - by tzaman
    In C++ programming, it's generally considered good practice to "prefer non-member non-friend functions" instead of instance methods. This has been recommended by Scott Meyers in this classic Dr. Dobbs article, and repeated by Herb Sutter and Andrei Alexandrescu in C++ Coding Standards (item 44); the general argument being that if a function can do its job solely by relying on the public interface exposed by the class, it actually increases encapsulation to have it be external. While this confuses the "packaging" of the class to some extent, the benefits are generally considered worth it. Now, ever since I've started programming in C#, I've had a feeling that here is the ultimate expression of the concept that they're trying to achieve with "non-member, non-friend functions that are part of a class interface". C# adds two crucial components to the mix - the first being interfaces, and the second extension methods: Interfaces allow a class to formally specify their public contract, the methods and properties that they're exposing to the world. Any other class can choose to implement the same interface and fulfill that same contract. Extension methods can be defined on an interface, providing any functionality that can be implemented via the interface to all implementers automatically. And best of all, because of the "instance syntax" sugar and IDE support, they can be called the same way as any other instance method, eliminating the cognitive overhead! So you get the encapsulation benefits of "non-member, non-friend" functions with the convenience of members. Seems like the best of both worlds to me; the .NET library itself providing a shining example in LINQ. However, everywhere I look I see people warning against extension method overuse; even the MSDN page itself states: In general, we recommend that you implement extension methods sparingly and only when you have to. So what's the verdict? Are extension methods the acme of encapsulation and code reuse, or am I just deluding myself?

    Read the article

  • Project management: Implementing custom errors in VS compilation process

    - by David Lively
    Like many architects, I've developed coding standards through years of experience to which I expect my developers to adhere. This is especially a problem with the crowd that believes that three or four years of experience makes you a senior-level developer. Approaching this as a training and code review issue has generated limited success. So I was thinking that it would be great to be able to add custom compile-time errors to the build process to more strictly enforce this and other guidelines. For instance, we use stored procedures for ALL database access, which provides procedure-level security, db encapsulation (table structure is hidden from the app), and other benefits. (Note: I am not interested in starting a debate about this.) Some developers prefer inline SQL or parametrized queries, and that's fine - on their own time and own projects. I'd like a way to add a compilation check that finds, say, anything that looks like

        string sql = "insert into some_table (col1,col2) values (@col1, @col2);";

    and generates an error or, in certain circumstances, a warning, with a message like "Inline SQL and parametrized queries are not permitted." Or, if they use the var keyword

        var x = new MyClass();

    the message would be "Variable definitions must be explicitly typed." Do Visual Studio and MSBuild provide a way to add this functionality? I'm thinking that I could use a regular expression to find unacceptable code and generate the correct error, but I'm not sure what, from a performance standpoint, is the best way to integrate this into the build process. We could add a pre- or post-build step to run a custom EXE, but how can I return line- and file-specific errors? Also, I'd like this to run after compilation of each file, rather than post-link. Is a regex the best way to perform this type of pattern matching, or should I go crazy and run the code through a C# parser, which would allow node-level validation via the parse tree? I'd appreciate suggestions and tales of prior experience.
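
    As a sketch of the regex route: a pre-build step can run a script over the source tree and print findings in MSBuild's canonical "file(line): error CODE: message" format, which Visual Studio's Error List generally picks up from the output of an Exec task (worth verifying on your setup). The patterns and error codes below are invented placeholders, and a plain regex will of course also flag matches inside comments and strings, which is exactly the regex-versus-parser trade-off mentioned above.

        import os
        import re
        import sys

        RULES = [
            (re.compile(r'"\s*(insert|select|update|delete)\b', re.I),
             "CC0001", "Inline SQL and parametrized queries are not permitted."),
            (re.compile(r'\bvar\s+\w+\s*='),
             "CC0002", "Variable definitions must be explicitly typed."),
        ]

        def check_tree(root):
            failed = False
            for dirpath, _dirs, files in os.walk(root):
                for name in files:
                    if not name.endswith(".cs"):
                        continue
                    path = os.path.join(dirpath, name)
                    with open(path, encoding="utf-8", errors="replace") as f:
                        for lineno, line in enumerate(f, start=1):
                            for pattern, code, message in RULES:
                                if pattern.search(line):
                                    # canonical MSBuild-style error line
                                    print("%s(%d): error %s: %s" % (path, lineno, code, message))
                                    failed = True
            return failed

        if __name__ == "__main__":
            sys.exit(1 if check_tree(sys.argv[1] if len(sys.argv) > 1 else ".") else 0)

    A non-zero exit code makes an Exec task fail the build by default, which gives the hard "error" behaviour; downgrading a rule to a warning is just a matter of printing "warning" instead of "error".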

    Read the article

  • Checking for nil in view in Ruby on Rails

    - by seaneshbaugh
    I've been working with Rails for a while now, and one thing I find myself constantly doing is checking whether some attribute or object is nil in my view code before I display it. I'm starting to wonder if this is always the best idea. My rationale so far has been that since my application(s) rely on user input, unexpected things can occur. If I've learned one thing from programming in general, it's that users inputting things the programmer didn't think of is one of the biggest sources of run-time errors. By checking for nil values I'm hoping to sidestep that and have my views gracefully handle the problem. The thing is, though, that I typically (for various reasons) have similar nil or invalid-value checks in either my model or controller code. I wouldn't call it code duplication in the strictest sense, but it just doesn't seem very DRY. If I've already checked for nil objects in my controller, is it okay if my view just assumes the object truly isn't nil? For attributes that can be nil and are displayed, it makes sense to me to check every time, but for the objects themselves I'm not sure what the best practice is. Here's a simplified but typical example of what I'm talking about.

    Controller code:

        def show
          @item = Item.find_by_id(params[:id])
          @folders = Folder.find(:all, :order => 'display_order')
          if @item == nil or @item.folder == nil
            redirect_to(root_url) and return
          end
        end

    View code:

        <% if @item != nil %>
          display the item's attributes here
          <% if @item.folder != nil %>
            <%= link_to @item.folder.name, folder_path(@item.folder) %>
          <% end %>
        <% else %>
          Oops! Looks like something went horribly wrong!
        <% end %>

    Is this a good idea or is it just silly?

    Read the article

  • How to check for mip-map availability in OpenGL?

    - by Xavier Ho
    Recently I bumped into a problem where my OpenGL program would not render textures correctly on a 2-year-old Lenovo laptop with an nVidia Quadro 140 card. It runs OpenGL 2.1.2 and GLSL 1.20, but when I turned on mip-mapping, the whole screen was black, with no warnings or errors. This is my texture filter code:

        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);

    After 40 minutes of fiddling around, I found out mip-mapping was the problem. Turning it off fixed it:

        // glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        // glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);

    I get a lot of aliasing, but at least the program is visible and runs fine. Finally, two questions: What's the best or standard way to check if mip-mapping is available on a machine, aside from checking OpenGL versions? If mip-mapping is not available, what's the best work-around to avoid aliasing?

    Read the article

  • Have you dealt with space hardening?

    - by Tim Post
    I am very eager to study best practices when it comes to space hardening. For instance, I've read (though I can't find the article any longer) that some core parts of the Mars rovers did not use dynamic memory allocation, in fact it was forbidden. I've also read that old fashioned core memory may be preferable in space. I was looking at some of the projects associated with the Google Lunar Challenge and wondering what it would feel like to get code on the moon, or even just into space. I know that space hardened boards offer some sanity in such a harsh environment, however I'm wondering (as a C programmer) how I would need to adjust my thinking and code if I was writing something that would run in space? I think the next few years might show more growth in private space companies, I'd really like to at least be somewhat knowledgeable regarding best practices. Can anyone recommend some books, offer links to papers on the topic or (gasp) even a simulator that shows you what happens to a program if radiation, cold or heat bombards a board that sustained damage to its insulation? I think the goal is keeping humans inside of a space craft (as far as fixing or swapping stuff) and avoiding missions to fix things. Furthermore, if the board maintains some critical system, early warnings seem paramount.

    Read the article
