Search Results

Search found 35442 results on 1418 pages for 'best approach'.

Page 15/1418 | < Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >

  • Best practice question

    - by sid_com
    Hello! Which version would you prefer?

        #!/usr/bin/env perl
        use warnings;
        use strict;
        use 5.010;

        my $p = 7; # 33
        my $prompt = ' : ';
        my $key = 'very important text';
        my $value = 'Hello, World!';

        my $length = length $key . $prompt;
        $p -= $length;

    Option 1:

        $key = $key . ' ' x $p . $prompt;

    Option 2:

        if ( $p > 0 ) {
            $key = $key . ' ' x $p . $prompt;
        }
        else {
            $key = $key . $prompt;
        }

        say "$key$value"

    Read the article

  • Localization in php, best practice or approach?

    - by sree
    I am localizing my PHP application. I have a dilemma in choosing the best method to accomplish this. Method 1: currently I am storing the words to be localized in an array in a PHP file, <?php $values = array ( 'welcome' => 'bienvenida' ); ?>, and I am using a function to extract and return each word as required. Method 2: should I instead use a file that stores each string in its own variable, <?php $welcome = 'bienvenida'; ?>? My question is which is the better method, in terms of speed and the effort needed to develop it, and why? Edit: I would like to know which of the two methods responds faster, and why that would be. Also, any improvement on the above code would be appreciated!
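
    As a rough illustration of Method 1 (a minimal sketch in Python rather than PHP, with made-up strings), the whole mechanism is just a per-language map plus a lookup function that falls back to the key when a translation is missing:

        # Sketch of the array/map approach: one dictionary per language,
        # loaded once, then O(1) lookups per word.
        translations = {
            "es": {"welcome": "bienvenida"},
            "en": {"welcome": "welcome"},
        }

        def translate(key, lang="es"):
            # Fall back to the raw key so missing entries are easy to spot.
            return translations.get(lang, {}).get(key, key)

        print(translate("welcome"))   # bienvenida
        print(translate("goodbye"))   # goodbye (no translation yet)

    Loading one array per language keeps every string behind a single include and a constant-time lookup, which is roughly why the map approach tends to scale better than defining one variable per string; in practice the response-time difference between the two methods is likely to be negligible next to the rest of the request.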

    Read the article

  • Approach to retrieve files from server

    - by Aerus
    I'm in the process of making a Java application with a corresponding update application. At any given time the user may want to update the application, and the updater will ask for a list of files of the latest release. Based on this list, the updater can determine which files need to be downloaded to complete the update. I now have 2 approaches to solve this, but I would like to know which approach will put the least stress on my application and server: 1) I could send a list of the files I want to download to my server, and the server zips the files and simply returns this compressed file to the application. 2) The updater sends a request for each separate file to the server, which simply returns the file. The application will be used mainly in Belgium and The Netherlands, and connections/bandwidth tend to be pretty decent here. The average size of a single file should be around 100Kb and at most 1Mb. I expect an update to have anywhere from 10 to 50 new files. I expect at most 100 persons/day to update the application, i.e. in the week when a new version is released. I hope this is enough information to sketch my problem, and any advice is welcome. If there is another common way to tackle this, I'd be glad to hear it.
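
    Both approaches start from the same step - diffing a local manifest against the release manifest. A minimal sketch of that diff (in Python rather than Java, with made-up file names and checksums) might look like this; whether the resulting list is then fetched as one server-built zip or as individual requests is exactly the trade-off the question weighs:

        import hashlib, os

        def manifest(root):
            # Map each file's path (relative to root) to a SHA-256 of its contents.
            result = {}
            for folder, _, files in os.walk(root):
                for name in files:
                    path = os.path.join(folder, name)
                    with open(path, "rb") as f:
                        digest = hashlib.sha256(f.read()).hexdigest()
                    result[os.path.relpath(path, root)] = digest
            return result

        def files_to_download(local, release):
            # Anything missing locally, or present with a different checksum, is fetched.
            return [path for path, digest in release.items() if local.get(path) != digest]

        # Tiny demonstration with made-up manifests:
        print(files_to_download({"app.jar": "aaa", "lib/a.jar": "bbb"},
                                {"app.jar": "ccc", "lib/a.jar": "bbb", "lib/b.jar": "ddd"}))
        # ['app.jar', 'lib/b.jar']

    At roughly 10-50 files of 100Kb-1Mb and about 100 updates a day, either transport is light work; the single zip saves per-file connection overhead at the cost of some CPU and temporary storage on the server, while per-file requests are simpler and make resuming a partial update easier.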

    Read the article

  • Which approach is the most maintainable?

    - by 2rs2ts
    When creating a product which will inherently suffer from regression due to OS updates, which of these is the preferable approach when trying to reduce maintenance cost and the likelihood of needing refactoring, when considering the task of interpreting system state and settings for a lay user? Delegate the responsibility of interpreting the results of inspecting the system to the modules which perform these tasks, or, Separate the concerns of interpretation and inspection into two modules? The first obviously creates a blob in which a lot of code would be verbose, redundant, and hard to grok; the second creates a strong coupling in which the interpretation module essentially has to know what it expects from inspection routines and will have to adapt to changes to the OS just as much as the inspection will. I would normally choose the second option for the separation of concerns, foreseeing the possibility that inspection routines could be re-used, but a developer updating the product to deal with a new OS feature or something would have to not only write an inspection routine but also write an interpretation routine and link the two correctly - and it gets worse for a developer who has to change which inspection routines are used to get a certain system setting, or worse yet, has to fix an inspection routine which broke after an OS patch. I wonder, is it better to have to patch one package a lot or two packages, each somewhat less so?
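
    For what it's worth, here is a minimal sketch (in Python, with a hypothetical firewall check) of the second option being weighed: the inspection module reports only raw facts, the interpretation module owns the lay-user wording, and the coupling the question worries about is confined to the keys the two modules agree on.

        def inspect_firewall():                 # inspection module: raw system state only
            return {"firewall_enabled": False}

        MESSAGES = {                            # interpretation module: lay-user wording
            ("firewall_enabled", False): "Your firewall is turned off.",
            ("firewall_enabled", True):  "Your firewall is on.",
        }

        def interpret(facts):
            return [MESSAGES[(key, value)] for key, value in facts.items()]

        print(interpret(inspect_firewall()))    # ['Your firewall is turned off.']

    Under that split, an OS patch that changes how a setting is read touches only the inspection side, while wording changes touch only the interpretation side, which is the maintenance argument for paying the cost of the extra linking work.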

    Read the article

  • Is browser and bot whitelisting a practical approach?

    - by Sn3akyP3t3
    With blacklisting it takes plenty of time to monitor events to uncover undesirable behavior and then take corrective action. I would like to avoid that daily drudgery if possible. I'm thinking whitelisting would be the answer, but I'm unsure if that is a wise approach due to the nature of deny all, allow only a few. My fear is that eventually someone out there will be blocked unintentionally. Even so, whitelisting would also block plenty of undesired traffic to pay-per-use items such as the Google Custom Search API, as well as preserve bandwidth and my sanity. I'm not running Apache, but the idea would be the same, I'm assuming. I would essentially be depending on the User Agent identifier to determine who is allowed to visit. I've tried to account for accessibility, because some web browsers are more geared toward those with disabilities, although I'm not aware of any specific ones at the moment. The need to not depend on whitelisting alone to keep the site away from harm is fully understood. Other means to protect the site still need to be in place. I intend to have a honeypot, a checkbox CAPTCHA, use of OWASP ESAPI, and blacklisting of previously known bad IP addresses.
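
    For concreteness, the User-Agent whitelist boils down to a pattern check like the sketch below (Python standing in for whatever the server runs; the patterns are made-up examples, not a recommendation of which agents to trust):

        import re

        ALLOWED_AGENTS = [
            r"Mozilla/5\.0 .*(Firefox|Chrome|Safari|Edg)/",  # mainstream browsers
            r"Googlebot",                                    # crawlers you still want
        ]

        def is_whitelisted(user_agent):
            return any(re.search(p, user_agent or "") for p in ALLOWED_AGENTS)

        print(is_whitelisted("Mozilla/5.0 (X11; Linux x86_64; rv:115.0) Gecko/20100101 Firefox/115.0"))  # True
        print(is_whitelisted("RandomScraper/1.0"))                                                       # False

    Since the User-Agent header is trivially forged, a check like this mostly controls politeness and pay-per-use spend rather than security, which lines up with the question's point that other protections still need to be in place.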

    Read the article

  • Approach to Authenticate Clients to TCP Server

    - by dab
    I'm writing a Server/Client application where clients will connect to the server. What I want to do is make sure that the client connecting to the server is actually using my protocol, so that I can "trust" the data being sent from the client to the server. What I thought about doing is creating a sort of hash on the client's machine that follows a particular algorithm. What I did in a previous version was to take their IP address, the client version, and a few other attributes of the client and send it as a calculated hash to the server, which then took their IP, and the version of the protocol the client claimed to be using, and calculated that number to see if they matched. This works OK until you get clients that connect from within a router environment where their internal IP is different from their external IP. My fix for this was to pass the client's internal IP, used to calculate this hash, along with the authentication protocol. My fear is that this approach is not secure enough, since I'm passing the data used to create the "auth hash". Here's an example of what I'm talking about: Client IP: 192.168.1.10, Version: 2.4.5.2, hash = 2*4*5*1 * (1+9+2) * (1+6+8) * (1) * (1+0). The client connects to the server; the client sends: auth hash, ip, version; the server calculates that info, and accepts or denies the hash. Before I go and come up with another algorithm to prove a client can provide data to a server (or use this existing algorithm), I was wondering if there are any existing, proven, and secure systems out there for generating a hash that both sides can generate with general knowledge. The server won't know about the client until the very first connection is established. The protocol's intent is to manage a network of clients who will be contributing data to the server periodically. New clients will be added simply by connecting the client to the server and "registering" with the server. So a client connects to the server for the first time, and registers their info (MAC address or some other kind of unique computer identifier), then when they connect again, the server will recognize that client as a returning client and associate them with their data in the database.
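
    On the "existing, proven" question: one widely used pattern for this situation is an HMAC challenge-response keyed by a per-client secret handed out at registration, so nothing that crosses the wire reveals the key and the client's IP never enters the calculation. A minimal sketch (Python standard library, illustrative flow and names, not a drop-in protocol):

        import hmac, hashlib, secrets

        # At registration the server generates a per-client secret, stores it with
        # that client's record, and gives the client its own copy once.
        client_secret = secrets.token_bytes(32)

        def server_issue_challenge():
            return secrets.token_bytes(16)               # fresh random nonce per connection

        def client_response(secret, challenge):
            return hmac.new(secret, challenge, hashlib.sha256).digest()

        def server_verify(secret, challenge, response):
            expected = hmac.new(secret, challenge, hashlib.sha256).digest()
            return hmac.compare_digest(expected, response)   # constant-time comparison

        challenge = server_issue_challenge()
        response = client_response(client_secret, challenge)
        print(server_verify(client_secret, challenge, response))   # True

    Because the challenge changes on every connection, a captured response can't be replayed, and NAT stops mattering because no address is part of the hash; the remaining problem is protecting the secret on the client, which is where this scheme (like any other) has its limits.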

    Read the article

  • Live search/filter as you type in client approach

    - by Pinoniq
    As an exercise for myself to practice my JavaScript "skills" I'm trying to write a client-side filter. It should be able to filter "content blocks" as the user types. By "content block", I mean a list of DomElements that each contain at least one text node - it is possible that they contain more, and even a different number of text nodes, nested inside other nodes, etc. I've thought of 2 approaches: 1) On page initialization, scan all nodes and store all the text in some kind of Map or a tree. 2) Simply iterate over every item and check whether it has the string to search/filter for. One could improve performance here by caching, only filtering the current remaining items if text is added, etc. Obviously, if the number of nodes is really big, option 1 will take a while to build the 'index' but it will perform faster once it is built. Option 2 however will be available right on page load since no initialization is performed. But of course it will take longer to search. So my question is: what is the best approach here? And how would one implement 'caching' and/or 'index'?
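
    A minimal sketch of option 1 (in Python standing in for the JavaScript, with made-up block contents): scan once, cache each block's lowercased text, and filtering on every keystroke becomes a cheap substring test against the cache rather than a fresh DOM walk.

        blocks = {
            "block-1": "Shopping list: apples, flour, coffee",
            "block-2": "Meeting notes from Tuesday",
            "block-3": "Coffee machine repair manual",
        }

        # Built once at "page load": block id -> normalised text. This is the 'index'.
        index = {block_id: text.lower() for block_id, text in blocks.items()}

        def visible_blocks(query):
            q = query.lower()
            return [block_id for block_id, text in index.items() if q in text]

        print(visible_blocks("coffee"))   # ['block-1', 'block-3']

    The 'caching' refinement the question mentions falls out naturally: when the new query merely extends the previous one, only the blocks that were visible last time need re-testing, since adding characters can never grow the match set.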

    Read the article

  • The 50 Best Ways to Disable Built-in Windows Features You Don’t Want

    - by The Geek
    Over the years, we’ve written about loads of ways to disable features, tweak things that don’t work the way you want, and remove other things entirely. Here’s the list of the 50 best ways to do just that. Just in case you missed some of our recent roundup articles, here’s a couple of roundups of our very best articles for you to check out: The 50 Best Registry Hacks that Make Windows Better The 20 Best How-To Geek Explainer Topics for 2010 The 20 Best Windows Tweaks that Still Work in Windows 7 The 50 Best How-To Geek Windows Articles of 2010 The 10 Cleverest Ways to Use Linux to Fix Your Windows PC If you’ve already been through those, keep reading for how to disable loads of Windows features you might not want.

    Read the article

  • The Minimalist Approach to Content Governance - Retire Phase

    - by Kellsey Ruppel
    Originally posted by John Brunswick. Good news - the Retire Phase is actually more fun than the Manage Phase. During the Retire Phase our content management team should not have to track down content creators if the Request Phase of this process was completed successfully. The ownership meta data, success criteria and time stamp that were applied to the original content submission will help to manage content at the end of the content life cycle. The Retire Phase will provide the opportunity for us to prune irrelevant content items through archiving or deletion, keeping the content system clear of irrelevant information and streamlining users' ability to browse and search for content. 1. Act on Metrics Established during the Request Phase Why - Some information is only relevant for a given amount of time. In Content Platform Migration Strategy - Artifacts vs Perishable Content we examined two content types - Artifacts and Perishable content. Understanding the differences between Artifacts and Perishable content will allow us to explicitly respect their various lifespans. Additionally, some content may have been part of a project that failed to meet the success criteria outlined in the Request Phase. Any content that did not meet the metrics outlined in the Request Phase should be considered for deletion. How - Thankfully, by adhering to The Minimalist Approach to Content Governance our content should have some level of meta data associated with it that will allow us to quickly sort and understand how to deal with it. Content Management Systems like Oracle's Universal Content Management (UCM) natively allow you to create and save advanced searches that can use content meta data like folders, author, expiration date, security settings and custom meta data to pull back listings of content for examination. Additionally, analytics are available for all content items that allow us to determine if the usage is meeting the success criteria that may have been previously outlined during the Request Phase. The lists that are produced from these approaches can be quickly reviewed for each project with the content owners and, based on the nature of the content and success criteria, undergo archiving or deletion. Impact - Retiring content that is no longer relevant will allow end users to have fast and relevant access to information across your enterprise. As we mentioned in our first post in this series - it is easy to quickly start producing content, but the challenge is ensuring that the environment is easy to navigate and use in the third week and during the third year. The light level of effort that was placed into the Request Phase of this process will set us up to keep content clean and relevant for a long time to come. With an up-to-date content repository users will be able to quickly find the information that is critical to their work processes. You might not get a holiday named in your honor for managing the content system, but users will appreciate their quick access to quality information.

    Read the article

  • PHP Browser Game Question - Pretty General Language Suitability and Approach Question

    - by JimBadger
    I'm developing a browser game, using PHP, but I'm unsure if the way I'm going about doing it is to be encouraged anymore. It's basically one of those MMOs where you level up various buildings and what have you, but you then commit some abstract fighting entity that the game gives you to an automated battle with another player (producing a textual, but hopefully amusing and varied, combat report). Basically, as soon as two players agree to fight, PHP functions on the "fight.php" page run queries against a huge MySQL database, looking up all sorts of complicated fight moves and outcomes. There are about three hundred thousand combinations of combat stance, attack, move and defensive stance, so obviously this is quite a resource-hungry process, and, on the super cheapo hosted server I'm using for development, it rapidly runs out of memory. The PHP script for the fight logic currently has about a thousand lines of code in it, and I'd say it's about half-finished as I try to add a bit of AI into the fight script. Is there a better way to do something this massive than simply having some functions in a PHP file calling the MySQL database? I taught myself a modicum of PHP a while ago, and most of the stuff I read online (ages ago) about similar games was all PHP-based. But a) am I right to be using PHP at all, and b) am I missing some clever way of doing things that will somehow reduce server resource requirements? I'd consider non-PHP alternatives but, if PHP is suitable, I'd rather stick to that, so there's no overhead of learning something new. I think I'd bite that bullet if it's the best option for a better game, though.
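
    One resource-saving idea, sketched minimally in Python (illustrative names; a PHP equivalent would use an in-process or external cache): resolve each combination once and memoise it, instead of re-querying MySQL for every move of every fight.

        from functools import lru_cache

        def query_outcome_from_db(stance, attack, move, defence):
            # Placeholder for the real SELECT against the combinations table.
            return f"{attack} vs {defence}: glancing blow"

        @lru_cache(maxsize=50_000)
        def outcome(stance, attack, move, defence):
            return query_outcome_from_db(stance, attack, move, defence)

        print(outcome("aggressive", "slash", "lunge", "parry"))
        print(outcome.cache_info())   # hit count grows as the same combinations recur

    With only a few hundred thousand short rows, another option worth measuring is loading the whole outcome table into memory once (it is likely only tens of megabytes) and doing pure lookups per fight; the language is probably less of a bottleneck than the per-move database round trips.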

    Read the article

  • Software Design for Product Verticals and Service Verticals

    - by Rachel
    In every industry there are two verticals, a Product Vertical and a Service Vertical, so my question is: how does the design approach change when designing software for a Product Vertical as compared to developing software for a Service Vertical? What are the pros and cons for each case? Also, in the case of a Product Vertical, how do you go about designing the product or features, and what steps are involved? Lastly, I was reading the How Facebook Ships Code article, and it appears that Product Managers have very little influence on how the product is developed and responsibility lies mainly with the developer for the feature. So is this good practice, and why would one go for this approach? What would be your comment on this kind of approach?

    Read the article

  • Table Variables: an empirical approach.

    - by Phil Factor
    It isn’t entirely a pleasant experience to publish an article only to have it described on Twitter as ‘Horrible’, and to have it criticized on the MVP forum. When this happened to me in the aftermath of publishing my article on Temporary tables recently, I was taken aback, because these critics were experts whose views I respect. What was my crime? It was, I think, to suggest that, despite the obvious quirks, it was best to use Table Variables as a first choice, and to use local Temporary Tables if you hit problems due to these quirks, or if you were doing complex joins using a large number of rows. What are these quirks? Well, table variables have advantages if they are used sensibly, but this requires some awareness by the developer about the potential hazards and how to avoid them. You can be hit by a badly-performing join involving a table variable. Table Variables are a compromise, and this compromise doesn’t always work out well. Explicit indexes aren’t allowed on Table Variables, so one cannot use covering indexes or non-unique indexes. The query optimizer has to make assumptions about the data rather than using column distribution statistics when a table variable is involved in a join, because there aren’t any column-based distribution statistics on a table variable. It assumes a reasonably even distribution of data, and is likely to have little idea of the number of rows in the table variables that are involved in queries. However complex the heuristics that are used might be in determining the best way of executing a SQL query, and they most certainly are, the Query Optimizer is likely to fail occasionally with table variables, under certain circumstances, and produce a Query Execution Plan that is frightful. The experienced developer or DBA will be on the lookout for this sort of problem. In this blog, I’ll be expanding on some of the tests I used when writing my article to illustrate the quirks, and include a subsequent example supplied by Kevin Boles. A simplified example. We’ll start out by illustrating a simple example that shows some of these characteristics. We’ll create two tables filled with random numbers and then see how many matches we get between the two tables. We’ll forget indexes altogether for this example, and use heaps. We’ll try the same Join with two table variables, two table variables with OPTION (RECOMPILE) in the JOIN clause, and with two temporary tables. It is all a bit jerky because of the granularity of the timing that isn’t actually happening at the millisecond level (I used DATETIME). However, you’ll see that the table variable is outperforming the local temporary table up to 10,000 rows. Actually, even without a use of the OPTION (RECOMPILE) hint, it is doing well. What happens when your table size increases? The table variable is, from around 30,000 rows, locked into a very bad execution plan unless you use OPTION (RECOMPILE) to provide the Query Analyser with a decent estimation of the size of the table. However, if it has the OPTION (RECOMPILE), then it is smokin’. Well, up to 120,000 rows, at least. It is performing better than a Temporary table, and in a good linear fashion. What about mixed table joins, where you are joining a temporary table to a table variable? You’d probably expect that the query analyzer would throw up its hands and produce a bad execution plan as if it were a table variable. After all, it knows nothing about the statistics in one of the tables so how could it do any better? 
Well, it behaves as if it were doing a recompile. And an explicit recompile adds no value at all. (we just go up to 45000 rows since we know the bigger picture now) Now, if you were new to this, you might be tempted to start drawing conclusions. Beware! We’re dealing with a very complex beast: the Query Optimizer. It can come up with surprises. What if we change the query very slightly to insert the results into a Table Variable? We change nothing else and just measure the execution time of the statement as before. Suddenly, the table variable isn’t looking so much better, even taking into account the time involved in doing the table insert. OK, if you haven’t used OPTION (RECOMPILE) then you’re toast. Otherwise, there isn’t much in it between the Table variable and the temporary table. The table variable is faster up to 8000 rows and then not much in it up to 100,000 rows. Past the 8000 row mark, we’ve lost the advantage of the table variable’s speed. Any general rule you may be formulating has just gone for a walk. What we can conclude from this experiment is that if you join two table variables, and can’t use constraints, you’re going to need that Option (RECOMPILE) hint. Count Dracula and the Horror Join. These tables of integers provide a rather unreal example, so let’s try a rather different example, and get stuck into some implicit indexing, by using constraints. What unusual words are contained in the book ‘Dracula’ by Bram Stoker? Here we get a table of all the common words in the English language (60,387 of them) and put them in a table. We put them in a Table Variable with the word as a primary key, a Table Variable Heap and a Table Variable with a primary key. We then take all the distinct words used in the book ‘Dracula’ (7,558 of them). We then create a table variable and insert into it all those uncommon words that are in ‘Dracula’, i.e. all the words in Dracula that aren’t matched in the list of common words. To do this we use a left outer join, where the right-hand value is null. The results show a huge variation, between the sublime and the gorblimey. If both tables contain a Primary Key on the columns we join on, and both are Table Variables, it took 33 Ms. If one table contains a Primary Key, and the other is a heap, and both are Table Variables, it took 46 Ms. If both Table Variables use a unique constraint, then the query takes 36 Ms. If neither table contains a Primary Key and both are Table Variables, it took 116383 Ms. Yes, nearly two minutes!! If both tables contain a Primary Key, one is a Table Variable and the other is a temporary table, it took 113 Ms. If one table contains a Primary Key, and both are Temporary Tables, it took 56 Ms. If both tables are temporary tables and both have primary keys, it took 46 Ms. Here we see table variables which are joined on their primary key again enjoying a slight performance advantage over temporary tables. Where both tables are table variables and both are heaps, the query suddenly takes nearly two minutes! So what if you have two heaps and you use option Recompile? If you take the rogue query and add the hint, then suddenly, the query drops its time down to 76 Ms. If you add unique indexes, then you've done even better, down to half that time. Here are the text execution plans. So where have we got to? Without drilling down into the minutiae of the execution plans we can begin to create a hypothesis.
If you are using table variables, and your tables are relatively small, they are faster than temporary tables, but as the number of rows increases you need to do one of two things: either you need to have a primary key on the column you are using to join on, or else you need to use option (RECOMPILE). If you try to execute a query that is a join, and both tables are table variable heaps, you are asking for trouble, well, slow queries, unless you give the table hint once the number of rows has risen past a point (30,000 in our first example, but this varies considerably according to context). Kevin’s Skew. In describing the table size, I used the term ‘relatively small’. Kevin Boles produced an interesting case where a single-row table variable produces a very poor execution plan when joined to a very, very skewed table. In the original, pasted into my article as a comment, a column consisted of 100000 rows in which the key column was one number (1). To this was added eight rows with sequential numbers up to 9. When this was joined to a single-row Table Variable with a key of 2 it produced a bad plan. This problem is unlikely to occur in real usage, and the Query Optimiser team probably never set up a test for it. Actually, the skew can be slightly less extreme than Kevin made it. The following test showed that once the table had 54 sequential rows, it adopted exactly the same execution plan as for the temporary table and then all was well. Undeniably, real data does occasionally cause problems for the performance of joins in Table Variables due to the extreme skew of the distribution. We've all experienced Perfectly Poisonous Table Variables in real live data. As in Kevin’s example, indexes merely make matters worse, and the OPTION (RECOMPILE) trick does nothing to help. In this case, there is no option but to use a temporary table. However, one has to note that once the slight de-skew had taken place, then the plans were identical across a huge range. Conclusions. Where you need to hold intermediate results as part of a process, Table Variables offer a good alternative to temporary tables when used wisely. They can perform faster than a temporary table when the number of rows is not great. For some processing with huge tables, they can perform well when only a clustered index is required, and when the nature of the processing makes an index seek very effective. Table Variables are scoped to the batch or procedure and are unlikely to hang about in the TempDB when they are no longer required. They require no explicit cleanup. Where the number of rows in the table is moderate, you can even use them in joins as ‘Heaps’, unindexed. Beware, however, since, as the number of rows increases, joins on Table Variable heaps can easily become saddled by very poor execution plans, and this must be cured either by adding constraints (UNIQUE or PRIMARY KEY) or by adding the OPTION (RECOMPILE) hint if this is impossible. Occasionally, the way that the data is distributed prevents the efficient use of Table Variables, and this will require using a temporary table instead. Table Variables require some awareness by the developer about the potential hazards and how to avoid them. If you are not prepared to do any performance monitoring of your code or fine-tuning, and just want to pummel out stuff that ‘just runs’ without considering namby-pamby stuff such as indexes, then stick to Temporary tables.
If you are likely to slosh about large numbers of rows in temporary tables without considering the niceties of processing just what is required and no more, then temporary tables provide a safer and less fragile means-to-an-end for you.

    Read the article

  • Best approach to accessing multiple data source in a web application

    - by ced
    I have a base web application developed with .NET technologies (ASP.NET) used on our LAN by 30 users simultaneously. From this web application I've developed two verticalizations used by online users. In the future I expect hundreds of users simultaneously. Our company has different locations, and each site uses its own database. The web application needs to retrieve information from all existing databases. Currently there are 3 databases, but future expansion to new offices is not excluded. My question then is: what is the best strategy for a web application to retrieve information from different databases (which have the same schema), where the main objectives are data-access performance and high fault tolerance? Are there case studies in the literature that I can take as an example? Do you know some good documents to study? Do you have any tips to implement this task efficiently? Intuitively I would say that two possible strategies are: perform queries against the different sources in real time and aggregate the data on the fly; or create a repository that contains the union of the entities of interest and perform queries directly on the repository.
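
    As a rough illustration of the first strategy (a Python sketch with hypothetical site names and a stubbed query function, not a finished data-access layer): fan the same query out to every site's database in parallel, merge the rows on the fly, and treat a site that fails to answer as degraded rather than fatal.

        from concurrent.futures import ThreadPoolExecutor

        SITES = ["site-a", "site-b", "site-c"]   # one connection per office database

        def query_site(site, sql):
            # Placeholder for a real connection; every site shares the same schema.
            return [{"site": site, "row": 1}]

        def query_all(sql):
            merged, failed = [], []
            with ThreadPoolExecutor(max_workers=len(SITES)) as pool:
                futures = {pool.submit(query_site, site, sql): site for site in SITES}
                for future, site in futures.items():
                    try:
                        merged.extend(future.result(timeout=5))
                    except Exception:
                        failed.append(site)   # degrade gracefully if one office is down
            return merged, failed

        print(query_all("SELECT * FROM orders WHERE status = 'open'"))

    The second strategy (a consolidated repository kept up to date by replication or scheduled loads) trades some freshness for faster queries and better fault isolation, since the web application no longer depends on every office being reachable at request time.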

    Read the article

  • How can I improve my Animation

    - by sharethis
    My first approaches to animation for my game relied mostly on sine and cosine functions with time as the parameter. Here is an example of a very basic jump I implemented:

        if(jumping) {
            height = sin(now() - time);   // elapsed time since the jump started
            if(height < 0)
                jumping = false; // player landed
            player.position.z = height;
        }
        if(keydown(SPACE) && !jumping) {
            jumping = true;
            time = now(); // store the starting time
        }

    So my player jumped along a perfect sine curve. That seems quite natural, because he slows down when he reaches the top position and speeds up again in the fall. But patching every animation together out of sine and cosine soon reaches its limits. So how can I improve my animation and provide a more abstract layer?
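
    One common way to get that abstract layer (a minimal sketch in Python for the game loop, with made-up durations): treat an animation as an easing curve evaluated over a normalised 0..1 progress value, so the jump arc, a fade or a sliding menu all share the same driver code and only the curve differs.

        import math

        def ease_jump(t):        # 0..1 -> height; the sine arc from the question
            return math.sin(t * math.pi)

        def ease_out_quad(t):    # decelerating motion, e.g. a recoiling weapon
            return 1 - (1 - t) ** 2

        class Animation:
            def __init__(self, duration, curve):
                self.duration, self.curve, self.elapsed = duration, curve, 0.0

            def advance(self, dt):
                # Clamp so the curve is never sampled past the end of the animation.
                self.elapsed = min(self.elapsed + dt, self.duration)
                return self.curve(self.elapsed / self.duration)

            @property
            def finished(self):
                return self.elapsed >= self.duration

        jump = Animation(duration=0.8, curve=ease_jump)
        while not jump.finished:
            height = jump.advance(dt=0.1)   # in a real loop dt comes from the frame clock
            print(round(height, 2))

    Swapping curves (or chaining and blending several animations) then becomes data rather than new trigonometry, which is essentially the stepping stone toward keyframe- and skeleton-based systems.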

    Read the article

  • The Minimalist Approach to Content Governance - Create Phase

    - by Kellsey Ruppel
    Originally posted by John Brunswick. In this installment of our Minimalist Approach to Content Governance we finally get to the fun part of the content creation process! Once the content requester has addressed the items outlined in the Request Phase it is time to set up and begin the production of content. For this to be done correctly it is important that the content be assigned appropriate workflow and security information. As in our prior phase, let's take a look at what can be done to streamline this process - as contributors are focused on getting information to their end users as quickly as possible. This often means that details around how to ensure that the materials are properly managed can be overlooked, but fortunately there are some techniques that leverage our content management system's native capabilities to automatically take care of some of the details. 1. Determine Access Why - Even if content is not something that needs to be restricted due to security reasons, it is helpful to apply access rights so that the content ends up being visible only to users that it relates to. This will greatly improve user experience. For instance, if your team is working on a group project many of your fellow company employees do not need to see the content that is being worked on for that project. How - Make use of native content features that allow propagation of security and meta data from parent folders within your content system that have been set up for your particular effort. This makes it painless to enforce security, as well as meta data policies for even the most unorganized users. The default settings at a parent level can be set once the content creation request has been accepted and a location in the content management system is assigned for your specific project. Impact - Users can find information with less effort, as they will only be exposed to what they need for their work and can leverage advanced search features to take advantage of meta data assigned to content. The combination of default security and meta data will also help in running reports against the content in the Manage and Retire stages that we will discuss in the next 2 posts. 2. Assign Workflow (optional depending on nature of content) Why - Every case for workflow is going to be a bit different, but it generally involves ensuring that content conforms to management, legal and/or editorial requirements. How - Oracle's Universal Content Management offers two ways of helping to workflow content without much effort. Workflow can be applied to content based on Criteria acting on meta data or explicitly assigned to content with a Basic workflow. Impact - Any content that needs additional attention before release is addressed, allowing users to comment and version until a suitable result is reached. By using inheritance from parent folders within the content management system, content can automatically be given the right security, meta data and workflow information for a particular project's content. This relieves the burden of doing this for every piece of content from management teams and content contributors. We will cover more about the management phase within the content lifecycle in our next installment.

    Read the article

  • In concept how is Animation done?

    - by sharethis
    My first approaches to animation for my game relied mostly on sine and cosine functions with time as the parameter. For a jump a perfect sine function is acceptable, but for motions of arms, weapons or faces it would look quite unnatural. Moreover, patching every animation together out of sine and cosine soon reaches its limits. I have heard of skeletons and rigging already. Although I could not implement skeletal animation myself, I can't imagine that the quite natural animations in major games are made of static predefined motion states. So how, in general, is animation done today?
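
    The short answer in most modern engines is keyframes on a bone hierarchy: artists author a few poses per bone and the engine interpolates between them every frame, so natural motion does not have to be a closed-form sine or cosine expression. A minimal sketch of that interpolation idea (in Python, with made-up times and angles for one bone):

        def lerp(a, b, t):
            return a + (b - a) * t

        # (time in seconds, elbow-bone rotation in degrees) authored by an animator
        elbow_keyframes = [(0.0, 0.0), (0.25, 45.0), (0.5, 90.0), (1.0, 10.0)]

        def sample(keyframes, t):
            # Find the pair of keyframes surrounding t and blend between them.
            for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
                if t0 <= t <= t1:
                    return lerp(v0, v1, (t - t0) / (t1 - t0))
            return keyframes[-1][1]

        for t in (0.0, 0.1, 0.3, 0.75):
            print(t, round(sample(elbow_keyframes, t), 1))

    A full skeletal system does this per bone with rotations (usually quaternions) propagated down the hierarchy, and blends several clips at once (walk plus aim, for example); the sine-based jump is just the special case of a curve with a single authored shape.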

    Read the article

  • How should I track approval workflow when users at every security level can create a request?

    - by Eric Belair
    I am writing a new application that allows users to enter requests. Once a request is entered, it must follow an approval workflow to be finally approved by a user at the highest security level. So, let's say a user at Security Level 1 enters a request. This request must be approved by his superior - a user at Security Level 2. Once the Security Level 2 user approves it, it must be approved by a user at Security Level 3. Once the Security Level 3 user approves it, it is considered fully approved. However, users at any of the three Security Levels can enter requests. So, if a Security Level 3 user enters a request, it is automatically considered "fully approved". And, if a Security Level 2 user enters a request, it must only be approved by a Security Level 3 user. I'm currently storing each approval status in a Database Log Table, like so:

        STATUS_ID (PK)  REQUEST_ID  STATUS           STATUS_DATE
        --------------  ----------  ---------------  -----------------------
        1               1           USER_SUBMIT      2012-09-01 00:00:00.000
        2               1           APPROVED_LEVEL2  2012-09-01 01:00:00.000
        3               1           APPROVED_LEVEL3  2012-09-01 02:00:00.000
        4               2           USER_SUBMIT      2012-09-01 02:30:00.000
        5               2           APPROVED_LEVEL2  2012-09-01 02:45:00.000

    My question is, which is a better design: 1) record all three statuses for every request, or 2) record only the statuses needed according to the Security Level of the user submitting the request. In Case 2, the data might look like this for two requests - one submitted by a Security Level 2 user and another submitted by a Security Level 3 user:

        STATUS_ID (PK)  REQUEST_ID  STATUS           STATUS_DATE
        --------------  ----------  ---------------  -----------------------
        1               3           APPROVED_LEVEL2  2012-09-01 01:00:00.000
        2               3           APPROVED_LEVEL3  2012-09-01 02:00:00.000
        3               4           APPROVED_LEVEL3  2012-09-01 02:00:00.000
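
    Since the approvals a request still needs follow directly from the submitter's security level, Case 2 only ever records events that actually happened; a small sketch of that derivation (Python, with the three levels from the question):

        TOP_LEVEL = 3

        def required_approvals(submitter_level):
            # A Level 3 submitter needs none; a Level 1 submitter needs Levels 2 and 3.
            return [f"APPROVED_LEVEL{level}"
                    for level in range(submitter_level + 1, TOP_LEVEL + 1)]

        print(required_approvals(1))   # ['APPROVED_LEVEL2', 'APPROVED_LEVEL3']
        print(required_approvals(3))   # [] - automatically fully approved

    The flip side is that "fully approved" then has to be computed against this rule instead of read off a fixed set of three rows, so whichever design is chosen, the rule belongs in one shared place.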

    Read the article

  • How should I ethically approach user password storage for later plaintext retrieval?

    - by Shane
    As I continue to build more and more websites and web applications I am often asked to store users' passwords in a way that they can be retrieved if/when the user has an issue (either to email a forgotten password link, walk them through over the phone, etc.). When I can I fight bitterly against this practice and I do a lot of ‘extra’ programming to make password resets and administrative assistance possible without storing their actual password. When I can’t fight it (or can’t win) then I always encode the password in some way so that it at least isn’t stored as plaintext in the database—though I am aware that if my DB gets hacked it won’t take much for the culprit to crack the passwords as well—so that makes me uncomfortable. In a perfect world folks would update passwords frequently and not duplicate them across many different sites—unfortunately I know MANY people that have the same work/home/email/bank password, and have even freely given it to me when they need assistance. I don’t want to be the one responsible for their financial demise if my DB security procedures fail for some reason. Morally and ethically I feel responsible for protecting what can be, for some users, their livelihood even if they are treating it with much less respect. I am certain that there are many avenues to approach and arguments to be made for salting hashes and different encoding options, but is there a single ‘best practice’ when you have to store them? In almost all cases I am using PHP and MySQL if that makes any difference in the way I should handle the specifics. Additional Information for Bounty I want to clarify that I know this is not something you want to have to do and that in most cases refusal to do so is best. I am, however, not looking for a lecture on the merits of taking this approach; I am looking for the best steps to take if you do take this approach. In a note below I made the point that websites geared largely toward the elderly, mentally challenged, or very young can become confusing for people when they are asked to perform a secure password recovery routine. Though we may find it simple and mundane in those cases some users need the extra assistance of either having a service tech help them into the system or having it emailed/displayed directly to them. In such systems the attrition rate from these demographics could hobble the application if users were not given this level of access assistance, so please answer with such a setup in mind. Thanks to Everyone This has been a fun question with lots of debate and I have enjoyed it. In the end I selected an answer that both retains password security (I will not have to keep plain text or recoverable passwords), but also makes it possible for the user base I specified to log into a system without the major drawbacks I have found from normal password recovery. As always there were about 5 answers that I would like to have marked correct for different reasons, but I had to choose the best one--all the rest got a +1. Thanks everyone!
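
    Not an endorsement of recoverable storage, but if it genuinely cannot be avoided for the user base described here, a minimal precaution is to encrypt rather than merely encode, with a key that never lives in the same database. A sketch using Python and the third-party cryptography package (illustrative only; a PHP setup would use a comparable authenticated-encryption library):

        from cryptography.fernet import Fernet

        key = Fernet.generate_key()    # keep in app config or a key store, never in the DB
        vault = Fernet(key)

        stored_value = vault.encrypt(b"correct horse battery staple")
        # ...later, for the assisted-recovery scenario the question describes...
        print(vault.decrypt(stored_value).decode())

    That way a dumped table on its own is not enough to recover passwords, though anyone holding both the key and the database still can, which is why the hashed-password-plus-assisted-reset route remains preferable wherever the audience can manage it.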

    Read the article

  • What are industry standards and professional best practices in network hosts naming? [closed]

    - by Ivan
    Possible Duplicate: Naming convention for computers. How to name network hosts seems an important and difficult dilemma to me - routers, servers (a server can be a router and host diverse services at the same time), virtual machines (they host important services and can migrate), workstations and notebooks (using pc-username is not the best idea as users may change), printers & MFUs, surveillance IP cameras, etc. Are there known and accepted best practices for this task? Excuse me if there was already a similar question here (I think there probably was); I haven't found it.

    Read the article

  • How to change hardcoded database paths in a Lotus Approach Database

    - by asoltys
    One of our Lotus Approach files (file extension .apr) has hardcoded paths to additional underlying database files (file extension .dbf). We've moved the Approach database and all the underlying files to a new network drive, so these paths need to be updated. You can see the old paths by opening the File menu and selecting "Approach File Properties", under the "Databases" heading. How can you change these paths?

    Read the article

  • A*, Tile costs and heuristic; How to approach

    - by Kevin Toet
    I'm doing exercises in tile games and AI to improve my programming. I've written a highly unoptimised pathfinder that does the trick, and a simple tile class. The first problem I ran into was that the heuristic was rounded to ints, which resulted in very straight paths. Resorting to a Euclidean heuristic seemed to fix it, as opposed to using the Manhattan approach. The 2nd problem I ran into was when I tried adding tile costs. I was hoping to use the values of the flags that I set on the tiles, but the values were too small to make the pathfinder consider them a huge obstacle, so I increased their values, but that breaks the flags in a certain way and no paths were found anymore. So my questions, before posting the code, are: What am I doing wrong that the Manhattan heuristic isn't working? What ways can I store the tile costs? I was hoping to (ab)use the enum flags for this. The pathfinder isn't considering the chance that no path is available; how do I check this? Any code optimisations are welcome as I'd love to improve my coding.

        public static List<Tile> FindPath( Tile startTile, Tile endTile, Tile[,] map )
        {
            return FindPath( startTile, endTile, map, TileFlags.WALKABLE );
        }

        public static List<Tile> FindPath( Tile startTile, Tile endTile, Tile[,] map, TileFlags acceptedFlags )
        {
            List<Tile> open = new List<Tile>();
            List<Tile> closed = new List<Tile>();
            open.Add( startTile );
            Tile tileToCheck;
            do
            {
                tileToCheck = open[0];
                closed.Add( tileToCheck );
                open.Remove( tileToCheck );
                for( int i = 0; i < tileToCheck.neighbors.Count; i++ )
                {
                    Tile tile = tileToCheck.neighbors[ i ];
                    //has the node been processed
                    if( !closed.Contains( tile ) && ( tile.flags & acceptedFlags ) != 0 )
                    {
                        //Not in the open list?
                        if( !open.Contains( tile ) )
                        {
                            //Set G
                            int G = 10;
                            G += tileToCheck.G;
                            //Set Parent
                            tile.parentX = tileToCheck.x;
                            tile.parentY = tileToCheck.y;
                            tile.G = G;
                            //tile.H = Math.Abs(endTile.x - tile.x ) + Math.Abs( endTile.y - tile.y ) * 10;
                            //TODO omg wtf and other incredible stories
                            tile.H = Vector2.Distance( new Vector2( tile.x, tile.y ), new Vector2(endTile.x, endTile.y) );
                            tile.Cost = tile.G + tile.H + (int)tile.flags;
                            //Calculate H; Manhattan style
                            open.Add( tile );
                        }
                        //Update the cost if it is
                        else
                        {
                            int G = 10; //cost of going to non-diagonal tiles
                            G += map[ tile.parentX, tile.parentY ].G;
                            //If this path is shorter (G cost is lower) then change
                            //the parent cell, G cost and F cost.
                            if ( G < tile.G ) //if G cost is less,
                            {
                                tile.parentX = tileToCheck.x; //change the square's parent
                                tile.parentY = tileToCheck.y;
                                tile.G = G; //change the G cost
                                tile.Cost = tile.G + tile.H + (int)tile.flags; // add terrain cost
                            }
                        }
                    }
                }
                //Sort costs
                open = open.OrderBy( o => o.Cost ).ToList();
            } while( tileToCheck != endTile );

            closed.Reverse();
            List<Tile> validRoute = new List<Tile>();
            Tile currentTile = closed[ 0 ];
            validRoute.Add( currentTile );
            do
            {
                //Look up the parent of the current cell.
                currentTile = map[ currentTile.parentX, currentTile.parentY ];
                currentTile.renderer.material.color = Color.green;
                //Add tile to list
                validRoute.Add( currentTile );
            } while ( currentTile != startTile );
            validRoute.Reverse();
            return validRoute;
        }

    And my Tile class:

        [Flags]
        public enum TileFlags: int
        {
            NONE = 0,
            DIRT = 1,
            STONE = 2,
            WATER = 4,
            BUILDING = 8,
            //handy
            WALKABLE = DIRT | STONE | NONE,
            endofenum
        }

        public class Tile : MonoBehaviour
        {
            //Tile Properties
            public int x, y;
            public TileFlags flags = TileFlags.DIRT;
            public Transform cachedTransform;

            //A* properties
            public int parentX, parentY;
            public int G;
            public float Cost;
            public float H;
            public List<Tile> neighbors = new List<Tile>();

            void Awake()
            {
                cachedTransform = transform;
            }
        }
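
    On the two smaller questions (storing tile costs and detecting "no path"), a stripped-down sketch (Python standing in for the C#, with made-up weights, and deliberately not the full A*) shows both ideas: keep terrain cost in a lookup keyed by the flag instead of reusing the flag's bit value, and treat an exhausted open list as "no path exists".

        # Terrain cost lives in its own lookup, so the bit flags stay pure bit flags.
        TILE_COST = {"DIRT": 10, "STONE": 12, "WATER": 40}   # made-up weights

        def move_cost(flag_name):
            return TILE_COST.get(flag_name, 10)

        def find_path(start, goal, neighbours):
            # Minimal open/closed loop: if the open list empties out before the goal
            # is reached, there is no path, which is the missing check in the question.
            open_list, closed = [start], set()
            while open_list:
                current = open_list.pop(0)
                if current == goal:
                    return True          # path exists (reconstruction omitted here)
                closed.add(current)
                open_list += [n for n in neighbours(current)
                              if n not in closed and n not in open_list]
            return False                 # open list exhausted: no path

        print(move_cost("WATER"))                                             # 40
        print(find_path(1, 4, lambda n: {1: [2], 2: [3], 3: [4], 4: []}[n]))  # True

    In the posted code the same check would mean guarding the "tileToCheck = open[0]" line: if open is empty at the top of the loop, return an empty route instead of indexing into it, and add the terrain cost through a lookup like the one above when G is computed, rather than casting the flags to int.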

    Read the article

  • Cox Communications' Strategic Approach to Enterprise User Experience: How Change Management and Usab

    - by Applications User Experience
    Author: Anna Wichansky, Senior Director, Applications User Experience, and Chair, Oracle Usability Advisory Board As part of our work in the User Experience group, our teams often go to Customer events such as the Higher Education User Group (HEUG) conference, Alliance 2010. This year's event was held in San Antonio, Texas, and was attended by hundreds of higher education, government, and public sector users of Oracle applications. The User Assistance team used this opportunity to reach out to customers in the Educational and Government sectors to better understand how their organizations are currently approaching help, messages, and other forms of user assistance. What is User Assistance? For us, user assistance is more than the old books of users' manuals and documentation. User assistance is anything that helps users get their jobs done quickly and efficiently. Instead of expecting users to stop and look through a guide or manual, we have been developing solutions that are embedded within the interface. We know that when people are having difficulty with a task, they want to be able to search efficiently for solutions and collaborate with coworkers. We know that they want to find their answers right there, right then, so that they can get on with their work. In our interviews at Alliance, we wanted to learn what the participants could tell us about what was happening on their campuses and in their institutions. Figure 1. For Oracle User Assistance, it's not just about books any more. So what did we do? Off to Texas, we recruited 10 people from nine different government and education organizations to come to our Oracle User Experience Onsite Usability Labs. We conducted one-hour interviews with these folks and asked them all about User Assistance--what people are doing, what they would like to do, what technologies they are using, what they would like to use, and ultimately what should we as a company be planning for our future products. We used this as an opportunity also to show them some of our design concepts for Fusion User Assistance, our next generation of user assistance based on the best of our user assistance in other products. Figure 2. Interviewing a technical user at Alliance. What we learned... People are not using paper or online manuals anymore. They don't want to see a manual that is written for technical users and that doesn't make sense to the ordinary end user. They really don't want to have to flip through a manual trying to find an answer to their question. Even when the answer might be tailored to their organization, they don't want to dig through documentation. When they need an answer now, they don't have the patience to dig for something that might or might not be clearly written. What does it mean to an organization when users don't want to deal with documentation? In many cases, it means that frustrated users make phone calls to try to find the answers that they need immediately. Phone calls are expensive to an organization and frustrating to the technical support staff who have provided documentation that no one wants to read anymore. If they don't call, they email for help often, and many users are asking for the same information. The bottom line is that if they could get that help immediately in the interface, they wouldn't have to make those calls or send those emails -- and that saves time and money. 
Our Fusion User Assistance options to customize help and get help for the task immediately were seen as an opportunity by these technical users to build the solutions that their users need and want. Figure 3. Joyce Ohgi and Laurie Pattison of Applications UX. Chicken Fried Steak. That was huge. But then, this was Texas, where we discovered a lot of things come very big. Drinks are served in quart-size glasses and dishes like Chicken Fried Steaks are served on platters not plates. We saw three-pound cinnamon rolls that you down with tea sweet enough to curl your hair. Deep in the heart of Texas, we learned a lot, and we ate even more.

    Read the article

  • Best approach to depth streaming via existing codec

    - by Kevin
    I'm working on a development system (and game) intended for games set mostly in static third-person views. We produce our scenery by CG and photographic techniques. Our background art is rendered off-line by a production-grade renderer. To allow the runtime imagery to properly interact with the background art, I wrote a program to convert from depth output by Mental Ray into a texture, and a pixel shader to draw a quad such that the Z data comes from the texture. This technique is working out very well, but now we've decided that some of the camera angle changes between scenes should be animated. The animation itself is straightforward to produce from our CG models. We intend to encode it to some HD video codec such as H.264. The problem is that in order to maintain our runtime imagery on the screen, the depth buffer will need to be loaded for each video frame. Due to the bandwidth, the video's depth data will need to be compressed efficiently. I've looked into methods for performing temporal compression of depth info and found an interesting research paper here: http://web4.cs.ucl.ac.uk/staff/j.kautz/publications/depth-streaming.pdf The method establishes a mapping between 16-bit depth values and YCbCr values. The mapping is tuned to the properties of existing video codecs in order to maximize precision of the decoded depths after the YCbCr has undergone video compression. It allows an existing, unmodified video codec to be used on the backend. I'm looking at how to pull this off with the least possible work. (This design change was unplanned.) Our game engine itself is native C++, presently for Win32 and DirectX, although we've worked hard to keep platform dependence segregated because we intend other ports. We don't have motion video facilities in the engine yet but will ultimately need that anyway for cinematics. I was planning on using some off-the-shelf motion video solution we can plug into our engine, and haven't chosen one yet. This new added requirement makes selecting one harder since, among other things, we'll now need to bypass colourspace conversion on one of the streams, and also will need to be playing two streams simultaneously in lockstep, on top of in some cases audio on one of them (for the cinematics). I'm also wondering if it's possible (or even useful) to do the conversion from YCbCr to depth in a pixel shader, or if it's better to just do it in CPU and separately load the resulting depth values into a locked tex. The conversion unfortunately does involve branching logic per-pixel. (There are more naive mappings that don't need branching, but they produce inferior results.) It could be reduced to a table lookup but the table would be 32MB. Programming is second-nature to me but I'm not that experienced with pix shaders and have zero knowledge of off-the-shelf video solutions. I'd therefore be interested in advice from others who may have dealt more with depth streaming, pixel shaders, and/or off-the-shelf codecs, regarding how feasible the proposed application is and what off-the-shelf video systems out there would best get along with this usage case.
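
    To make the packing idea concrete, here is the naive 16-bit-to-two-8-bit-channels split (a Python sketch, not the paper's tuned mapping) that the question alludes to with "more naive mappings"; it is exactly this version's fragility under lossy compression (an error of one in the coarse channel shifts the decoded depth by 256) that motivates the paper's more elaborate scheme.

        def pack(depth16):
            # Split a 16-bit depth into a coarse and a fine 8-bit channel.
            return depth16 >> 8, depth16 & 0xFF

        def unpack(coarse, fine):
            return (coarse << 8) | fine

        d = 51234
        print(pack(d))                  # (200, 34)
        print(unpack(*pack(d)) == d)    # True: lossless until the codec perturbs the channels

    Whether the decode step ends up in a pixel shader or on the CPU, prototyping the CPU path first and measuring before committing to a shader version (or to the 32MB lookup table) seems a reasonable way to decide.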

    Read the article

  • Design best practice - best way to handle user selection

    - by user1457227
    I'm an experienced developer (WPF) moving over to Android development. My question: an app I am developing allows the user to browse their local storage (such as SDCARD) and select a file. Now, should I simply create a new Activity (after the user has made a selection) to handle what I want to have the app do with that chosen file, -or- is the better approach to pass the path/name of the selected file back to the main Activity and let IT launch the next Activity? In other words, is the better practice to have the main Activity launch other (support) activities, or is it perfectly ok and normal to have one activity chain to another and on and on? Thanks!

    Read the article

  • Best Practices to Accelerate Oracle VM Server Deployments

    - by Honglin Su
    IOUG (Independent Oracle User Group) Virtualization SIG is hosting the webcast on the best practices of Oracle VM server virtualization. July 11, 2012 - Best Practices to Accelerate Oracle VM Server on SPARC Deployments. Register here. To learn the best practices on Oracle VM Server for x86,  watch the session replay here. For more white paper about best practices, visit Oracle VM OTN page here.

    Read the article

< Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >