Search Results

Search found 2210 results on 89 pages for 'techniques'.


  • Computer Networks UNISA - Chap 14 – Ensuring Integrity & Availability

    - by MarkPearl
    After reading this section you should be able to:
    - Identify the characteristics of a network that keep data safe from loss or damage
    - Protect an enterprise-wide network from viruses
    - Explain network and system level fault tolerance techniques
    - Discuss issues related to network backup and recovery strategies
    - Describe the components of a useful disaster recovery plan and the options for disaster contingencies

    What are integrity and availability?
    Integrity is the soundness of a network's programs, data, services, devices, and connections. Availability is how consistently and reliably a file or system can be accessed by authorized personnel. A number of phenomena can compromise both, including security breaches, natural disasters, malicious intruders, power flaws, and human error. Although you cannot predict every type of vulnerability, you can take measures to guard against the most damaging events. Some guidelines:
    - Allow only network administrators to create or modify NOS and application system users
    - Monitor the network for unauthorized access or changes
    - Record authorized system changes in a change management system
    - Install redundant components
    - Perform regular health checks on the network
    - Check system performance, error logs, and the system log book regularly
    - Keep backups
    - Implement and enforce security and disaster recovery policies
    These are just some of the basics.

    Malware
    Malware refers to any program or piece of code designed to intrude upon or harm a system or its resources. Types of malware include boot sector viruses, macro viruses, file infector viruses, worms, Trojan horses, network viruses, and bots. Common characteristics of malware include encryption, stealth, polymorphism, and time dependence.

    Malware protection
    Various tools, collectively called anti-malware software, monitor your system for indications that a program is performing potentially malicious operations. Detection techniques include signature scanning, integrity checking, and monitoring for unexpected file changes or virus-like behaviours. It is important to decide where anti-malware tools will be installed and to find a balance between performance and protection. General-purpose malware policies that can be implemented to protect your network include:
    - Every computer in the organization should be equipped with malware detection and cleaning software that runs regularly
    - Users should not be allowed to alter or disable the anti-malware software
    - Users should know what to do if the anti-malware program detects malware
    - Users should be prohibited from installing any unauthorized software on their systems
    - System-wide alerts should be issued to network users when a serious piece of malware has been detected

    Fault tolerance
    Besides guarding against malware, another key factor in maintaining the availability and integrity of data is fault tolerance: the ability of a system to continue performing despite an unexpected hardware or software malfunction. Fault tolerance can be realized in varying degrees; the optimal level for a system depends on how critical its services and files are to productivity. Generally, the more fault tolerant the system, the more expensive it is. Areas to consider include environment (temperature and humidity), power, topology and connectivity, servers, and storage.

    Power
    Typical power flaws include:
    - Surge: a brief increase in voltage due to lightning strikes, solar flares, or some idiot at City Power
    - Noise: fluctuation in voltage levels caused by other devices on the network or by electromagnetic interference
    - Brownout: a momentary sag in voltage
    - Blackout: a complete loss of power
    There are various alternate power sources to consider, including UPSs and generators. UPSs come in two categories:
    - Standby UPS: supplies power from its battery when the mains goes down, with a brief switch-over period
    - Online UPS: the device receives power through the UPS at all times, and the UPS is charged continuously

    Servers
    There are various fault tolerance techniques for servers. Server mirroring, where one device or component duplicates the activities of another, is generally an expensive option. Clustering links multiple servers together to appear as a single server; they share processing and storage responsibilities, and if one unit in the cluster goes down, another can be brought in to replace it.

    Storage
    Available techniques include RAID arrays, NAS (Network Attached Storage), and SANs (Storage Area Networks).

    Data backup
    A backup is a copy of data or program files created for archiving or safekeeping. Backup media vary in cost and speed and include optical media, tape backup, external disk drives, and network backups.

    Backup strategy
    After selecting the appropriate tool for performing your server's backup, devise a backup strategy to guide you through performing reliable backups that provide maximum data protection. Questions that should be answered include:
    - What data must be backed up?
    - At what time of day or night will the backups occur?
    - How will you verify the accuracy of the backups?
    - Where and for how long will backup media be stored?
    - Who will take responsibility for ensuring that backups occurred?
    - How long will you save backups?
    - Where will backup and recovery documentation be stored?
    Different backup methods provide varying levels of certainty and corresponding labour cost:
    - Full backup: all data on all servers is copied to storage media
    - Incremental backup: only data that has changed since the last full or incremental backup is copied
    - Differential backup: only data that has changed since the last full backup is copied
    (See the sketch at the end of this post for the incremental/differential distinction.)

    Disaster recovery
    Disaster recovery is the process of restoring your critical functionality and data after an enterprise-wide outage has occurred. A disaster recovery plan is for extreme scenarios (fire, line fault, etc.). A cold site is a place where the computers, devices, and connectivity necessary to rebuild a network exist, but are not appropriately configured. A warm site is one where they exist with some devices appropriately configured. A hot site is one where they exist and all are appropriately configured.
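    To make the incremental/differential distinction concrete, here is a minimal sketch in C++17 (the timestamps 'lastFull' and 'lastAnyBackup' are hypothetical bookkeeping a backup tool would maintain; they are not from the article):

```cpp
#include <filesystem>
#include <vector>

namespace fs = std::filesystem;

// Hypothetical timestamps maintained by the backup tool.
fs::file_time_type lastFull;       // when the last FULL backup ran
fs::file_time_type lastAnyBackup;  // when the last backup of any kind ran

// Incremental: copy files changed since the last backup of ANY kind.
// Differential: copy files changed since the last FULL backup.
std::vector<fs::path> selectFilesToBackUp(const fs::path& root, bool differential)
{
    const fs::file_time_type cutoff = differential ? lastFull : lastAnyBackup;
    std::vector<fs::path> toCopy;
    for (const auto& entry : fs::recursive_directory_iterator(root))
        if (entry.is_regular_file() && entry.last_write_time() > cutoff)
            toCopy.push_back(entry.path());
    return toCopy;  // a full backup would simply copy every file
}
```

    Note the trade-off this encodes: a differential backup grows until the next full backup but restores from just two sets, while an incremental backup stays small but requires the whole chain of increments to restore.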

    Read the article

  • Where to find clients who are willing to pay top dollar for highly reliable code?

    - by Robin Green
    I'm looking to find clients who are willing to pay a premium above usual contractor rates for software that is developed with advanced tools and techniques to eliminate certain classes of bugs. However, I have little experience of contracting and relatively few contacts. It's important to state that the kind of tools and techniques I'm thinking of (e.g. formal verification) are, as far as I'm aware, used extremely rarely in commercial work. There is a continuum of approaches to higher reliability, with basic testing and basic static typing at one end and full-blown formal verification at the other; the methods I'm thinking of are towards the latter end of the spectrum.

    Read the article

  • What happened to .fx files in D3D11?

    - by bobobobo
    It seems they completely ruined .fx file loading/parsing in D3D11. In D3D9, loading an entire effect file was D3DXCreateEffectFromFile( .. ), and you got an ID3DXEffect, which had great methods like SetTechnique and BeginPass, making it easy to load and execute a shader with multiple techniques. Is this completely manual now in D3D11? The highest-level functionality I can find is loading a SINGLE shader from an FX file using D3DX11CompileFromFile. Does anyone know if there's an easier way to load FX files and choose a technique? With the level of functionality D3D11 now provides, it seems like you're better off just writing .hlsl files and forgetting about the whole idea of techniques.
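    For what it's worth, technique/pass handling didn't vanish entirely: the Effects11 helper library (shipped as source with the DirectX SDK rather than as part of the runtime) restores an ID3DXEffect-like flow on top of D3D11. A minimal sketch, assuming a blob compiled with the "fx_5_0" profile; the technique name "RenderScene" is illustrative:

```cpp
#include <d3dx11effect.h>  // Effects11 helper library, built from DX SDK source

// Sketch: create an effect from a compiled fx_5_0 blob (e.g. produced by
// D3DX11CompileFromFile), select a technique by name, and apply its passes,
// much like SetTechnique/BeginPass in D3D9.
void drawWithEffect(ID3D11Device* device, ID3D11DeviceContext* context,
                    ID3DBlob* fxBlob)
{
    ID3DX11Effect* effect = nullptr;
    D3DX11CreateEffectFromMemory(fxBlob->GetBufferPointer(),
                                 fxBlob->GetBufferSize(),
                                 0, device, &effect);

    ID3DX11EffectTechnique* tech = effect->GetTechniqueByName("RenderScene");
    D3DX11_TECHNIQUE_DESC desc;
    tech->GetDesc(&desc);
    for (UINT p = 0; p < desc.Passes; ++p)
    {
        tech->GetPassByIndex(p)->Apply(0, context);  // binds shaders and state
        // ... issue draw calls for this pass ...
    }
    effect->Release();
}
```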

    Read the article

  • What is the practical use of IBOs / degenerate vertex in OpenGL?

    - by 0xFAIL
    Vertices in 3D models CAN get merged in the process of optimizing 3D geometry (degenerate vertices) by 3D graphics software (Blender, ...) when exporting, because a vertex can be reused for multiple triangles. (In the current case, 3D data is exported from Blender as .ply and read by a simple application that displays the 3D model.) Every vertex has a few attributes like position, color, normal, tangent, ... But when vertices are shared this way, the per-corner data for each merged vertex is lost and is missing in the vertex shader. Modern shader techniques like bump or normal mapping require normals/tangents per vertex, which are also cut. Does this mean IBOs must not be used with complex shader techniques? Or is there a way to use IBOs and retain the per-vertex data that was originally lost?
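    A common compromise keeps the IBO and simply duplicates any vertex whose per-corner attributes (normal, tangent, UV) must differ between the triangles that share its position; the index buffer still removes the remaining redundancy. A minimal OpenGL 3+ sketch (names are illustrative):

```cpp
#include <GL/glew.h>
#include <cstddef>   // offsetof
#include <vector>

struct Vertex { float pos[3], normal[3], tangent[3], uv[2]; };

// 'vertices' must contain one entry per unique position+normal+tangent+uv
// combination: a position shared by triangles with different normals
// simply appears more than once, so no attribute data is lost.
GLuint uploadMesh(const std::vector<Vertex>& vertices,
                  const std::vector<GLuint>& indices)
{
    GLuint vao, vbo, ibo;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);

    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(Vertex),
                 vertices.data(), GL_STATIC_DRAW);

    glGenBuffers(1, &ibo);                         // the IBO
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(GLuint),
                 indices.data(), GL_STATIC_DRAW);

    // Position attribute; normal/tangent/uv are declared the same way.
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                          (void*)offsetof(Vertex, pos));
    return vao;  // draw with glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_INT, 0)
}
```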

    Read the article

  • Microsoft Access 2010: How to Customize Form Settings

    Since users have varying situations and needs when it comes to storing data, Microsoft equipped Access 2010 with capabilities that allow you to adjust specific settings for forms you want to create. These settings adjustments take place via the handy Property Sheet. That is where our focus will be for this tutorial, so let's get started and begin customizing some form settings. Keep in mind, we are using a distinct sample for this tutorial, so just follow along as best as you can to see how the techniques are applied. You can then copy these simple techniques to your own samples to put them...

    Read the article

  • What knowledge would I need to make a good simulation game?

    - by Skeith
    I have an idea for a game like Theme Park but don't know how simulation games are made. I am not some noob on his first game, so I'd appreciate constructive answers instead of "it's hard, don't do it". What I want to know is how simulation game mechanics are put together. I figure it would be heavier on the AI than normal games, and since I don't know much about AI, I'd like to know some programming techniques I should look into for this style of game. Specific techniques please, not just a book on AI. What sort of architecture would be used? I guess it would have some sort of probability engine with pre-designed events that are triggered based on the AI state. Would it use an FSM or be purely event driven? Any information on how a sim game functions would be cool.
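    As one possible starting point, here is a minimal sketch of the FSM-plus-random-events pattern the question describes, applied to a hypothetical theme-park guest (all names and numbers are illustrative, not from any engine):

```cpp
#include <cstdlib>

// A tiny guest AI: a finite state machine whose transitions are driven
// by needs (hunger, fun) and by probabilistic "pre-designed event" rolls.
enum class GuestState { Wandering, Queueing, Riding, Eating, Leaving };

struct Guest {
    GuestState state = GuestState::Wandering;
    float hunger = 0.0f;  // 0..1, grows over time
    float fun    = 0.5f;  // 0..1, drained by queues, restored by rides

    static bool roll(float p) {           // the "probability engine"
        return std::rand() / float(RAND_MAX) < p;
    }

    void update(float dt) {
        hunger += 0.01f * dt;
        switch (state) {
        case GuestState::Wandering:
            if (hunger > 0.8f)           state = GuestState::Eating;
            else if (roll(0.3f * dt))    state = GuestState::Queueing;
            break;
        case GuestState::Queueing:
            fun -= 0.02f * dt;           // pre-designed event: queues drain fun
            if (fun < 0.1f)              state = GuestState::Leaving;
            else if (roll(0.2f * dt))    state = GuestState::Riding;
            break;
        case GuestState::Riding:
            fun += 0.3f * dt;
            if (roll(0.5f * dt))         state = GuestState::Wandering;
            break;
        case GuestState::Eating:
            hunger = 0.0f;               state = GuestState::Wandering;
            break;
        case GuestState::Leaving:
            break;                       // walk to exit, then despawn
        }
    }
};
```

    Many such agents updated per tick, plus park-wide scripted events, is already enough for emergent crowd behaviour; a purely event-driven design inverts this so agents react to queued messages instead of polling every frame.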

    Read the article

  • How to document and teach others "optimized beyond recognition" computationally intensive code?

    - by rwong
    Occasionally there is the 1% of code that is computationally intensive enough that it needs the heaviest kind of low-level optimization. Examples are video processing, image processing, and all kinds of signal processing in general. The goals are to document and to teach the optimization techniques, so that the code does not become unmaintainable and prone to removal by newer developers. (*) (*) Notwithstanding the possibility that the particular optimization is completely useless on some unforeseeable future CPUs, such that the code will be deleted anyway. Considering that software offerings (commercial or open-source) retain their competitive advantage by having the fastest code and making use of the newest CPU architecture, software writers often need to tweak their code to make it run faster while getting the same output for a certain task, whilst tolerating a small amount of rounding error. Typically, a software writer can keep many versions of a function as documentation of each optimization/algorithm rewrite that takes place. How does one make these versions available for others to study their optimization techniques?
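    One concrete pattern (a sketch with hypothetical names, not a prescription): keep the readable reference implementation in the codebase as executable documentation, and pin each optimized rewrite to it with a test that tolerates the agreed rounding error.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// v0: the readable reference implementation, kept as living documentation.
float sumSquares_reference(const std::vector<float>& v) {
    float s = 0.0f;
    for (float x : v) s += x * x;
    return s;
}

// v1: an optimized rewrite -- unrolled with four accumulators so the
// compiler can vectorize. Each rewrite documents WHY it is faster.
float sumSquares_optimized(const std::vector<float>& v) {
    float s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    std::size_t i = 0, n = v.size();
    for (; i + 4 <= n; i += 4) {
        s0 += v[i] * v[i];         s1 += v[i+1] * v[i+1];
        s2 += v[i+2] * v[i+2];     s3 += v[i+3] * v[i+3];
    }
    float s = (s0 + s1) + (s2 + s3);
    for (; i < n; ++i) s += v[i] * v[i];
    return s;
}

// The test encodes "same output for the same task, whilst tolerating a
// small amount of rounding error" -- here a relative tolerance of 1e-4.
void testAgainstReference(const std::vector<float>& input) {
    float a = sumSquares_reference(input);
    float b = sumSquares_optimized(input);
    assert(std::fabs(a - b) <= 1e-4f * (std::fabs(a) + 1e-6f));
}
```

    Newer developers then study the diff between v0 and v1 (and its commit message) rather than reverse-engineering the optimized code cold.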

    Read the article

  • Flowchart for solving programming problems

    - by nurne
    I noticed that every developer implements a somewhat different flowchart for solving programming problems. By flowchart I mean a defined system of techniques that the developer goes through in a certain sequence, trying to solve the problem at hand. Some example techniques:
    - Google "how to..." or "... tutorial"
    - Search the Java/MSDN/Apple/etc. API docs for the specific class or method
    - Search Stack Overflow for the exact problem, with tags like [iphone]/[java]
    - Take a nap and let the subconscious work
    - Debug
    - Draw the algorithm or system
    - Google the logged error message
    - Ask a colleague or manager
    - Ask a new question on Stack Overflow
    From your experience, what is the best flowchart for solving a programming problem?

    Read the article

  • Google I/O 2012 - Making Good Apps Great: More Advanced Topics for Expert Android Developers

    Reto Meier: In a follow-up to last year's session, I'll demonstrate how to use advanced Android techniques to take a good app and transform it into a polished product, without being a resource hog. Features advanced coding tips and tricks, bandwidth-saving techniques, implementation patterns, exposure to some of the lesser-known API features, and insight into how to minimize battery drain by ensuring your app is a good citizen on the carrier network. For all I/O 2012 sessions, go to developers.google.com

    Read the article

  • When does one give up on programming challenges and look at the solutions?

    - by snowpolar
    Recently, I have been trying to learn programming and improve my ability to write method-level code through practice on websites such as Codingbat.com. However, in recent weeks I have been stuck for weeks at the last 2-3 questions of String-2/Array-2 and the early String/Array-3 problems. It feels really tempting to give up and google the solutions, but I'm afraid that by doing so I may end up not improving my ability at all. I wonder if this is common, and when faced with such situations, how long does one wait before giving up and looking at the solutions, rather than spending more weeks trying to solve the problems oneself? How does one engage in effective deliberate practice to improve programming ability and attain the necessary problem-solving techniques? Are there any formal techniques for tackling never-seen-before problems?

    Read the article

  • What is the proper way to create a cross-fade effect? [closed]

    - by Starx
    When creating an image slider, a cross-fade is one of the more popular effects, and various sliders use differing techniques to achieve it. Two techniques I've found so far are: Use an overlay and an underlay <div> and fade each one's opacity in and out. Create a <div> matching the exact size of the slider during initialization, play with its z-index property, and then fade between the two. Is there a better way to create this effect?

    Read the article

  • Getting started with object detection - Image segmentation algorithm

    - by Dev Kanchen
    Just getting started on a hobby object-detection project. My aim is to understand the underlying algorithms, and to this end the overall accuracy of the results is (currently) more important than actual run-time. I'm starting by trying to find a good image segmentation algorithm that provides a good jumping-off point for the object detection phase. The target images would be "real-world" scenes. I found two techniques which mirrored my thoughts on how to go about this: Graph-based Image Segmentation: http://www.cs.cornell.edu/~dph/papers/seg-ijcv.pdf Contour and Texture Analysis for Image Segmentation: http://www.eng.utah.edu/~bresee/compvision/files/MalikBLS.pdf The first was really intuitive to understand and seems simple enough to implement, while the second was closer to my initial thoughts on how to go about this (combine color/intensity and texture information to find regions), but it's an order of magnitude more complex (at least for me). My question is: are there any other algorithms I should be looking at that provide the kind of results these two specific papers have arrived at? Are there updated versions of these techniques already floating around? Like I mentioned earlier, the goal is relative accuracy of image segmentation (with an eventual aim of achieving a degree of accuracy in object detection) over runtime, with the algorithm being able to segment an image into "naturally" or perceptually important components, as these two algorithms do (each to varying extents). Thanks! P.S.1: I found these two papers after a couple of days of refining my search terms and learning new ones relevant to the exact kind of techniques I was looking for. :) I have just about reached the end of my personal Google creativity, which is why I am finally here! Thanks for the help. P.S.2: I couldn't find good tags for this question. If some relevant ones exist, @mods please add them. P.S.3: I do not know if this is a better fit for cstheory.stackexchange (or even cs.stackexchange). I looked, but cstheory seems more appropriate for intricate algorithmic discussions than a broad question like this. Also, I couldn't find any relevant tags there either! But please do move it if appropriate.
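    For a feel of the first paper's approach, here is a heavily simplified sketch: pixels as graph nodes, 4-neighbour edges weighted by intensity difference, merged cheapest-edge-first with a union-find. It uses a fixed merge threshold k, whereas Felzenszwalb and Huttenlocher adapt the threshold per component, so treat it as a toy:

```cpp
#include <algorithm>
#include <cmath>
#include <numeric>
#include <vector>

// Simplified graph-based segmentation over a grayscale image (values 0..1).
struct Edge { int a, b; float w; };

struct DisjointSet {
    std::vector<int> parent;
    explicit DisjointSet(int n) : parent(n) {
        std::iota(parent.begin(), parent.end(), 0);
    }
    int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }
    void unite(int a, int b) { parent[find(a)] = find(b); }
};

// Returns one component label per pixel.
std::vector<int> segment(const std::vector<float>& gray, int w, int h, float k)
{
    auto id = [w](int x, int y) { return y * w + x; };
    std::vector<Edge> edges;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            if (x + 1 < w) edges.push_back({id(x, y), id(x + 1, y),
                std::fabs(gray[id(x, y)] - gray[id(x + 1, y)])});
            if (y + 1 < h) edges.push_back({id(x, y), id(x, y + 1),
                std::fabs(gray[id(x, y)] - gray[id(x, y + 1)])});
        }
    std::sort(edges.begin(), edges.end(),
              [](const Edge& l, const Edge& r) { return l.w < r.w; });

    DisjointSet ds(w * h);
    for (const Edge& e : edges)          // merge similar neighbouring regions
        if (e.w < k) ds.unite(e.a, e.b);

    std::vector<int> label(w * h);
    for (int i = 0; i < w * h; ++i) label[i] = ds.find(i);
    return label;
}
```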

    Read the article

  • Should I continue my self-taught coding practice or learn how to code professionally?

    - by G1i1ch
    Lately I've been getting professional work, hanging out with other programmers, and making friends in the industry. The only thing is, I'm 100% self-taught, which has caused my style to deviate considerably from that of properly trained developers. It's the techniques and organization of my code that differ, a mixture of several things I do. I tend to blend several programming paradigms together, like functional and OO. I lean to the functional side more than OO, but I see the use of OO when something would make more sense as an abstract entity, like a game object. Next, I take the simple route when doing something; in contrast, it sometimes seems like the code I see from professional programmers is complicated for the sake of it! I use lots of closures. And lastly, I'm not the best commenter. I find it easier just to read through my code than to read the comments, and in most cases I end up reading the code even when there are comments. Plus I've been told that, because of how simply I write my code, it's very easy to read. I hear professionally trained programmers go on and on about things like unit tests, something I've never used, so I haven't the faintest idea what they are or how they work. Lots and lots of underscores "_", which aren't really to my taste. Most of the techniques I use are straight from me, or from a few books I've read. I don't know anything about MVC; I've heard a lot about it with things like backbone.js, and I think it's a way to organize an application, but it confuses me because by now I've made my own organizational structures. It's a bit of a pain: I can't use template applications at all when learning something new, like with Ubuntu's Quickly, and I have trouble understanding code that I can tell is from someone trained. Complete OO programming really leaves a bad taste in my mouth, yet that seems to be what EVERYONE else is strictly using. It's left me not that confident in the look of my code, and wondering whether I'll cause sparks when joining a company or contributing to open source projects. In fact I'm rather scared that people will eventually be checking out my code. Is this just something normal any programmer goes through, or should I really look to change my techniques?
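    On the unit-test point specifically: a unit test is nothing more than code that calls one of your functions with known input and fails loudly if the output is wrong. A bare-bones sketch without any framework (the function and cases are invented for illustration; real projects would use a test library):

```cpp
#include <cassert>
#include <cctype>
#include <string>

// The function under test.
std::string capitalize(std::string s) {
    if (!s.empty()) s[0] = (char)std::toupper((unsigned char)s[0]);
    return s;
}

// The unit tests: known inputs, expected outputs, loud failure.
int main() {
    assert(capitalize("hello")   == "Hello");
    assert(capitalize("")        == "");        // edge case: empty string
    assert(capitalize("Already") == "Already"); // already capitalized
    return 0;  // reaching here means every assertion passed
}
```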

    Read the article

  • +2.4% more spam in France in February, making up 83.9% of emails exchanged, according to the latest monthly Symantec report

    Symantec has just released the results of its monthly MessageLabs Intelligence security report. Since the end of January 2011, MessageLabs Intelligence has observed large volumes of coordinated attacks using precise and highly targeted techniques: several malware families were used very aggressively to mount simultaneous attacks via shared propagation techniques, reinforcing the likelihood of a common origin for these infected emails. Symantec's analysis reveals that in February 2011, 1 in 290.1 emails contained malware, compared with 1 in 364.8...

    Read the article

  • Which optional features would you recommend for a raytracer? [closed]

    - by locks
    I'm developing a basic triangle-mesh raytracer on a short deadline. This means I can't implement every feature I come across, so I'm looking for some feedback about which features you think are most important, taking into consideration the performance cost of each feature and how much punch it packs. I'm especially looking for optimization techniques that allow for faster rendering, and simple techniques that make a big impact on the final scene quality. Is there any chance of making it fast enough to run in realtime? Here are some examples of features I've read about: anti-aliasing, bounding boxes, sky boxes.
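    Of the features listed, a bounding-box test (or a full bounding volume hierarchy) typically buys the most speed for a triangle-mesh raytracer, since it rejects whole groups of triangles per ray before any triangle test runs. The standard ray-AABB "slab" test, as a sketch:

```cpp
#include <algorithm>

struct Ray  { float o[3], d[3]; };      // origin, direction (d[i] != 0 assumed)
struct AABB { float min[3], max[3]; };  // axis-aligned bounding box

// Slab test: clip the ray against each pair of axis-aligned planes and
// check that the three parameter intervals still overlap.
bool hitsBox(const Ray& r, const AABB& b)
{
    float tmin = 0.0f, tmax = 1e30f;
    for (int axis = 0; axis < 3; ++axis) {
        float inv = 1.0f / r.d[axis];
        float t0 = (b.min[axis] - r.o[axis]) * inv;
        float t1 = (b.max[axis] - r.o[axis]) * inv;
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
        if (tmin > tmax) return false;  // intervals disjoint: the ray misses
    }
    return true;                        // only now run per-triangle tests
}
```

    As for realtime: plausible for simple scenes at low resolution once an acceleration structure like a BVH is in place; without one, almost certainly not.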

    Read the article
