Search Results

Search found 1797 results on 72 pages for 'bandwidth measuring'.


  • How should I measure Concurrent Licence Usage

    - by Andrew Wood
    Hi, I have detailed stats on user access to my system, detailing login and logout times as well as the machine used, network username, etc. I need to measure what I would term the concurrent user licence level based on this information. I could take the maximum logged in on any one day in a three-month period (say 170), or I could take the average (say 133). Does anyone have or know of a formula for working this out, or is it as simple as the high-water mark, which is 170 in my example? A client has recently gone from an unlimited licence to a concurrent licence, so I am faced with the task of setting the initial licence level. There is potential for more licence sales in the future, so I don't want it set too high, and I do want it based on the historical data that the system collects rather than guesswork.
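
    A minimal sketch of one way to get the high-water mark, assuming the login/logout pairs are already loaded from the access stats (all names are illustrative): sort the session boundaries and sweep, adding one at each login and subtracting one at each logout; the running maximum is the true peak concurrency. Run it per day and you can also take a percentile of daily peaks instead of the absolute maximum.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static int PeakConcurrentUsers(IEnumerable<(DateTime Login, DateTime Logout)> sessions)
        {
            var events = sessions
                .SelectMany(s => new[] { (Time: s.Login, Delta: +1), (Time: s.Logout, Delta: -1) })
                .OrderBy(e => e.Time)
                .ThenBy(e => e.Delta);    // at a tie, count the logout before the login

            int current = 0, peak = 0;
            foreach (var e in events)
            {
                current += e.Delta;
                peak = Math.Max(peak, current);   // running high-water mark
            }
            return peak;
        }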

    Read the article

  • Why does a better isolation level mean better performance in SQL Server?

    - by Oleg Zhylin
    When measuring the performance of my query I came across a dependency between isolation level and elapsed time that surprised me:

        READUNCOMMITTED - 409024
        READCOMMITTED   - 368021
        REPEATABLEREAD  - 358019
        SERIALIZABLE    - 348019

    The left column is the table hint; the right column is the elapsed time in microseconds (sys.dm_exec_query_stats.total_elapsed_time). Why does a stricter isolation level give better performance? This is a development machine and no concurrency whatsoever happens. I would expect READUNCOMMITTED to be the fastest due to less locking overhead. Update: I did measure this with DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE issued, and Profiler confirms there are no cache hits happening. Update 2: The query in question is an OLAP one and we need to run it as fast as possible. Closing the production server off from the outside world to get the computation done is not out of the question if it gives a performance benefit.

    Read the article

  • C# - What's the fastest way to make an integer positive?

    - by maxima120
    I asked the wrong question previously and was swamped with negative votes... Let me try again: what is the absolutely fastest way to make an int positive (given a 50/50 distribution of positive/negative over time)? To be nominated for an answer I will require MSIL analysis, not a guess or timing with granny's watch... P.S. As one of the variations I proposed i * i, not because I wanted to do Sqrt(i * i) afterwards but because i will be used only once, to be compared to a const. And if i * i wins the competition I simply square the const. Hence the following solution is valid:

        int trigger = realTrigger * realTrigger;
        i = SomeCalcs();
        i = i * i;
        if (i < trigger) DoSomething();

    P.P.S. Pointless rants are not acceptable, like: "Why do you need this? It's BS! C# cannot tolerate developers like you!"
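
    For reference, a sketch of the usual branchless trick for 32-bit ints (no claim about which variant wins at the MSIL level on a given JIT): an arithmetic right shift by 31 yields 0 for non-negative values and -1 for negative ones, so two cheap operations replace the branch. Unlike Math.Abs it quietly returns int.MinValue for int.MinValue instead of throwing, and note that the i * i variant overflows once |i| exceeds 46340.

        static int FastAbs(int i)
        {
            int mask = i >> 31;        // 0 if i >= 0, -1 (all bits set) if i < 0
            return (i ^ mask) - mask;  // no-op for i >= 0, two's-complement negation otherwise
        }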

    Read the article

  • PHP Thumbnail image generator works, but consistently spends 90% of its load time waiting! Why?

    - by Sam
    Hi folks, this is the follow-up to this general page-load-speed question, and it focuses only, entirely, and specifically on the thumbnail generator's PHP code, with the question: what's causing the 97% delay on these tiny thumbnails? Measuring only 3-5 KB each, this is strange indeed! Even after seven or so bounty rounds and 100 upvotes, the images still do nothing for over 97% of their load time. It works all right, but it could be so much faster! ZAM, my pet robot, was so annoyed by not knowing why that he pulled out some cords! Silly metal box... (I have asked permission from the original author to temporarily put online the segments of the code that I think could be the problem, after which I will remove all the code except the several lines that were the direct cause of the issue, to match the answer. This is a free opportunity to perfect/speed up his otherwise, in my opinion, very versatile and user-friendly thumbnail generator.)

    Read the article

  • Running out of memory but not seeing excessive object allocation in Instruments

    - by Scotty Allen
    I have an iPad app that's crashing due to low memory. However, Instruments doesn't show any significant amount of memory allocated: ObjectAlloc stays under 1 MB for the lifetime of the application, and Leaks shows less than 1 kB leaked over the course of the run. Meanwhile, Memory Monitor shows the device's free memory dropping significantly with use, eventually to the point that it's out of memory. Here's a screenshot from Instruments: I'm totally stumped. As far as I can tell, this basically says that my app never uses more than about 750 kB, yet the device still runs out of physical memory, which causes my app to crash/force exit. I'm new to debugging memory issues with Xcode. Am I measuring this wrong? Is there another way to see where this memory is going?

    Read the article

  • .NET - Textbox control: wait till user is done typing

    - by Cj Anderson
    Greetings all, is there a built-in way to know when a user is done typing into a textbox (before hitting Tab or moving the mouse)? I have a database query that occurs on the TextChanged event and everything works perfectly. However, I noticed there is a bit of lag, of course, because if a user types quickly the program is busy doing a query for each character. So what I was hoping for is a way to see whether the user has finished typing. If they type "a" and stop, an event fires; but if they type "all the way", the event fires only after the final KeyUp. I have some ideas floating around my head, but I'm sure they aren't the most efficient - like measuring the time since the last TextChanged event and, if it is more than a certain value, proceeding to run the rest of my procedures. Let me know what you think. Language: VB.NET. Framework: .NET 2.0. -- Edited to clarify "done typing"
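
    A minimal sketch of that timer idea in WinForms (works on .NET 2.0; idleTimer, textBox1, and RunQuery are hypothetical names): restart a short System.Windows.Forms.Timer on every TextChanged, so the query fires only once the user has paused.

        private Timer idleTimer;   // System.Windows.Forms.Timer

        private void Form1_Load(object sender, EventArgs e)
        {
            idleTimer = new Timer();
            idleTimer.Interval = 500;   // "done typing" threshold in ms; tune to taste
            idleTimer.Tick += new EventHandler(idleTimer_Tick);
        }

        private void textBox1_TextChanged(object sender, EventArgs e)
        {
            idleTimer.Stop();           // every keystroke resets the countdown
            idleTimer.Start();
        }

        private void idleTimer_Tick(object sender, EventArgs e)
        {
            idleTimer.Stop();           // fire once per pause, not repeatedly
            RunQuery(textBox1.Text);    // hypothetical: the existing database query
        }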

    Read the article

  • Signed angle between two 3D vectors with the same origin within the same plane? Recipe?

    - by Advanced Customer
    I was looking through the web for an answer, but it seems there is no clear recipe for it. What I need is the signed angle of rotation between two vectors Va and Vb lying in the same 3D plane and having the same origin, knowing that: the plane containing both vectors is arbitrary and not parallel to XY or any other cardinal plane; Vn is the plane normal; both vectors along with the normal have the same origin O = { 0, 0, 0 }; and Va is the reference for measuring the left-handed rotation about Vn. The angle should be measured in such a way that, if the plane were the XY plane, Va would stand for its X-axis unit vector. I guess I should perform a kind of coordinate-space transformation using Va as the X axis and the cross product of Vb and Vn as the Y axis, and then just use some 2D method like atan2() or something. Any ideas? Formulas?
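
    For what it's worth, one standard closed form (a sketch, assuming Vn is a unit normal): the cross product projected onto the normal carries the sign, and atan2 combines it with the dot product:

        \theta = \operatorname{atan2}\bigl( (V_a \times V_b) \cdot V_n,\ V_a \cdot V_b \bigr)

    This yields an angle in (-pi, pi], positive for counter-clockwise rotation about Vn; negate Vn (or the result) for the left-handed convention described above.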

    Read the article

  • Multiple Producers Single Consumer Queue

    - by Talguy
    I am new to multithreading and have designed a program that receives data from two microcontrollers measuring various temperatures (ambient and water) and draws the data to the screen. Right now the program is single-threaded and its performance SUCKS A BIG ONE. I understand the basic design approaches well enough to create a thread for a task, but what I don't get is how to have threads perform separate tasks and place their data into a shared data pool. I figured I need a queue with one consumer and multiple producers (I would like to use std::queue). I have seen some code in the gtkmm threading docs that shows a single producer/consumer queue: they lock the queue object, produce data, and signal the sleeping consumer when finished. For what I need: would I have to sleep a thread, would there be data conflicts if I didn't sleep any of the threads, and would sleeping a thread cause a significant data delay? (I need real-time data drawn at 30 frames a second.) How would I go about coding such a queue using the gtkmm/glibmm library?
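
    A minimal sketch of the lock-and-signal pattern (written in C# for brevity; the same shape maps onto std::queue guarded by Glib::Threads::Mutex and Glib::Threads::Cond): producers lock, enqueue, and signal, and the consumer sleeps only while the queue is empty. There is no busy-waiting, and the wake-up latency is on the order of microseconds, far below a 33 ms frame budget, so sleeping the consumer does not threaten 30 fps.

        using System.Collections.Generic;
        using System.Threading;

        class ReadingQueue<T>
        {
            private readonly Queue<T> queue = new Queue<T>();
            private readonly object gate = new object();

            public void Produce(T item)        // called by each microcontroller thread
            {
                lock (gate)
                {
                    queue.Enqueue(item);
                    Monitor.Pulse(gate);       // wake the drawing thread if it is waiting
                }
            }

            public T Consume()                 // called only by the single consumer
            {
                lock (gate)
                {
                    while (queue.Count == 0)
                        Monitor.Wait(gate);    // releases the lock while sleeping
                    return queue.Dequeue();
                }
            }
        }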

    Read the article

  • How can I speed up line by line reading of an ASCII file? (C++)

    - by Jon
    Here's a bit of code that is a considerable bottleneck after doing some measuring:

        //-----------------------------------------------------------------------------
        // Construct dictionary hash set from dictionary file
        //-----------------------------------------------------------------------------
        void constructDictionary(unordered_set<string> &dict)
        {
            ifstream wordListFile;
            wordListFile.open("dictionary.txt");

            string word;
            while( wordListFile >> word )
            {
                if( !word.empty() )
                {
                    dict.insert(word);
                }
            }

            wordListFile.close();
        }

    I'm reading in ~200,000 words and this takes about 240 ms on my machine. Is the use of ifstream here efficient? Can I do better? I'm reading about mmap() implementations but I'm not understanding them 100%. The input file is simply text strings with *nix line terminations.

    Read the article

  • Replace text in code with counting numbers

    - by Gpx
    Hi, due to testing and time measuring I have to add some kind of log to existing C# WinForms code in Visual Studio 2010. I want to keep the changes and the work very small, so my question is about replacing text in my calls with counting numbers. Let's say I want to paste a line like Log.WriteLine(position) many times in the code and then replace "position" with numbers counting from 1 to n in one pass. I can't use a counter in this case because, with the many loops, I wouldn't get the right position. Thanks for any suggestions, Gpx
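
    If a one-off post-processing pass over the source is acceptable, here is a small sketch (the file name is hypothetical, and it assumes the placeholder is the literal token "position"): Regex.Replace with a MatchEvaluator increments a counter once per occurrence, in file order, which sidesteps the loop problem entirely.

        using System.IO;
        using System.Text.RegularExpressions;

        class NumberPlaceholders
        {
            static void Main()
            {
                string source = File.ReadAllText("Form1.cs");   // hypothetical target file
                int n = 0;
                string numbered = Regex.Replace(
                    source,
                    @"Log\.WriteLine\(position\)",
                    m => "Log.WriteLine(" + (++n) + ")");       // 1, 2, 3, ... in one pass
                File.WriteAllText("Form1.cs", numbered);
            }
        }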

    Read the article

  • Android OpenGL deltaTime measurement issue

    - by droidmachine
    I'm measuring deltaTime like this:

        deltaTime = (System.nanoTime() - startTime) / 1000000000.0f;
        startTime = System.nanoTime();

    But I'm always getting different float values, and because of that my game stutters. What could be the problem?

        09-03 03:51:59.219: D/a(21807): 0.017184043
        09-03 03:51:59.234: D/a(21807): 0.016405167
        09-03 03:51:59.249: D/a(21807): 0.018071748
        09-03 03:51:59.269: D/a(21807): 0.015293334
        09-03 03:51:59.284: D/a(21807): 0.016080335
        09-03 03:51:59.299: D/a(21807): 0.018669458
        09-03 03:51:59.314: D/a(21807): 0.014720625
        09-03 03:51:59.334: D/a(21807): 0.01605596
        09-03 03:51:59.349: D/a(21807): 0.017086169
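
    The jitter itself is normal (the OS scheduler, the GC, and display sync all move frame boundaries around), so the usual cure is not a steadier clock but a fixed simulation timestep fed by an accumulator. A sketch of the classic pattern (written in C# for brevity; UpdateWorld and Render are hypothetical stand-ins for the game's own methods):

        class FixedStepLoop
        {
            const float Step = 1f / 60f;   // simulate at a constant 60 Hz
            float accumulator;
            long lastNanos;                // previous frame's timestamp

            public void Frame(long nowNanos)
            {
                if (lastNanos == 0) lastNanos = nowNanos;             // first frame
                accumulator += (nowNanos - lastNanos) / 1000000000f;  // seconds elapsed
                lastNanos = nowNanos;
                if (accumulator > 0.25f) accumulator = 0.25f;         // clamp pause/GC spikes

                while (accumulator >= Step)
                {
                    UpdateWorld(Step);     // simulation always advances in uniform steps
                    accumulator -= Step;
                }
                Render();                  // draw at whatever rate frames actually arrive
            }

            void UpdateWorld(float dt) { /* hypothetical game logic */ }
            void Render() { /* hypothetical draw call */ }
        }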

    Read the article

  • 16 millisecond quantization when sending/receiving TCP packets

    - by MKZ
    Hi, I have a C++ application running on a 32-bit Windows XP system, sending and receiving short TCP/IP packets. Measuring the arrival times accurately, I see them quantized to 16-millisecond units (meaning all arriving packets are separated from each other by 16 x N milliseconds). To avoid packet aggregation I tried to disable the Nagle algorithm by setting the IPPROTO_TCP option TCP_NODELAY on the socket, but it did not help. I suspect the problem is related to the Windows scheduler, which also has a 16-millisecond clock. Any idea of a solution to this problem? Thanks
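
    That suspicion matches the default ~15.6 ms system timer tick on XP. One common workaround is to raise the multimedia timer resolution with timeBeginPeriod from winmm.dll while the latency-sensitive code runs. The question's code is C++, where the same calls apply directly; here is the idea as a C# P/Invoke sketch. Note the setting is machine-wide, so scope it as tightly as possible.

        using System.Runtime.InteropServices;

        static class TimerResolution
        {
            [DllImport("winmm.dll")] static extern uint timeBeginPeriod(uint ms);
            [DllImport("winmm.dll")] static extern uint timeEndPeriod(uint ms);

            public static void RunWithHighResolution()
            {
                timeBeginPeriod(1);       // raise the system timer to ~1 ms resolution
                try
                {
                    // ... run the send/receive and timestamping loop here ...
                }
                finally
                {
                    timeEndPeriod(1);     // always restore the default resolution
                }
            }
        }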

    Read the article

  • Detecting and reloading updated application parameters at runtime

    - by VeeKayBee
    I am working on an ASP.NET web application (using .NET 4.5 and C#). The application deals with a lot of units of measure (KG, litre, KM, etc.), and based on the selected unit we have to enforce an allowed range. These values should be configurable without much effort. We identified two solutions: 1. Keeping a configuration XML file. If the values are in the XML, does changing the file to adjust a validation require an iisreset or anything else that could take the site down for some time? 2. Keeping the values in the DB and using SQL dependency caching, so that an update to the DB refreshes the cached values. How complex is this, and does it affect performance? It would be a great help if there is some other method to achieve this. Thanks in advance.
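
    On option 1: editing an ordinary content file such as an XML under App_Data does not normally recycle the application the way touching web.config does, so no iisreset is needed. A sketch of self-refreshing reads via the ASP.NET cache (the file name and location are hypothetical): inserting the parsed document with a CacheDependency on the file makes the cached entry drop out automatically whenever the file changes. Option 2 works the same way, with SqlCacheDependency in place of CacheDependency.

        using System.Web;
        using System.Web.Caching;
        using System.Xml.Linq;

        public static class UnitLimits
        {
            public static XDocument Current
            {
                get
                {
                    var doc = HttpRuntime.Cache["UnitLimits"] as XDocument;
                    if (doc == null)
                    {
                        string path = HttpContext.Current.Server.MapPath("~/App_Data/Units.xml");
                        doc = XDocument.Load(path);
                        // evicted the moment the file changes on disk
                        HttpRuntime.Cache.Insert("UnitLimits", doc, new CacheDependency(path));
                    }
                    return doc;
                }
            }
        }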

    Read the article

  • Methods for breaking up content client-side into multiple "pages"

    - by Tom Genoni
    I have a long, text-only HTML article formatted with paragraph tags. What I'd like to do is break this content into N divs so that I can create individual pages. For instance, on an iPad/iPhone, instead of reading one long page, the user could swipe right/left to navigate between pages. My initial JavaScript attempts have been somewhat convoluted: creating an array of the text, measuring line heights and device window heights, and adding closing/opening paragraph tags at the end/beginning of pages. Thoughts on a good way to approach this? I will not have access to server-side processing; this has to be a client-side solution.

    Read the article

  • Eclipse plugin to measure programmer performance/stats

    - by trenki
    Does anyone know of an Eclipse plugin that can give me some stats about my behavior/usage of the Eclipse IDE? There are quite a few things I would like to know: How often/when do I invoke the "Build All" command (through Ctrl+B)? How often does compilation fail/succeed (plus the number of errors/warnings)? How often do I hit Backspace? (I do that way too often; if pressing that key made a nasty sound, I would in time learn to type correctly in the first place.) How many characters/lines of code that I typed do I delete (often almost immediately)? How effective/efficient is my mouse/keyboard/IDE usage? (Kind of like measuring APM in StarCraft; this could be fun.) If there is no such Eclipse plugin around, how complex and time-consuming would it be to write a plugin that can accomplish the above?

    Read the article

  • Best neural network for a certain type of pattern analysis?

    - by fred basset
    Hi all, I'm working on a system that will send telemetry data on machine operation back to a central server for analysis. One of the machine parameters we're measuring is motor current drawn vs. time. After an operation finishes, we plan to send back an array of currents vs. time to the server. A successful operation would have a pattern like a trapezoid; problematic operations would have a completely different pattern, more like a large spike in values. Can anyone recommend a type of neural network that would be good at classifying these 1D vectors of current values into a pass/fail type output? Thanks, Fred

    Read the article

  • Willy Rotstein on Analytics and Social Media in Retail

    - by sarah.taylor(at)oracle.com
    Recently I came across a presentation from Dan Zarrella on "The Science of Retweets" (http://www.slideshare.net/HubSpot/the-science-of-retweets-with-dan-zarrella). It is an insightful, fact-based analysis of how tweets propagate and what makes them successful. The analysis is of course very interesting for those of us interested in tweeting. However, what really caught my attention is how well it illustrates, from a very different angle, some of the issues I am discussing with retailers these days - in particular, the opportunities that e-commerce and social media open to those retailers with the appetite and vision to tackle the associated analytical challenges. And these challenges are of course not straightforward. In his presentation Dan introduces the concept of Observability; I haven't had the opportunity to discuss his specific definition of the term with him. However, in practical retail terms, I would say it means that through social media (and other web channels such as search) we can analyze and track processes by measuring indicators that were not measurable before. The focus is on identifying patterns across a large number of consumers rather than what a particular individual "Likes". The potential impact for retailers is huge. It opens the opportunity to monitor changes in consumer preference and plan the business accordingly, and you can do this almost in real time rather than through infrequent surveys that provide a "rear-view" picture of your consumers' behaviour. For instance, you could envision identifying when a particular set of fashion styles is breaking out from the pack and committing a re-buy, or monitoring when the preference for a specific mobile device has declined and hence markdowns should be considered, or how demand for a specific ready-made food typically flows across regions, and managing the inventory accordingly. Search, blogging, website, and store data may all need to be considered in identifying these trends. The data volumes involved are huge (check Andrea Morgan's recent post on "Big Data" in retail), but so are the benefits. As Andrea says, for the first time we can start getting insight into "why" the business is performing in a certain way rather than just reporting on what is happening. And it is not just about the data volumes. Tackling the challenge also calls for integrated planning systems that can bring data and insight into the context of the decision-making process that buyers, merchandisers, and supply chain managers are following. I strongly believe that only when data and process come together can you move from the anecdotal to systematically improving business performance. I would love to hear your opinions on these trends and where you think retail is heading to exploit these topics - please email me: [email protected]

    Read the article

  • New Pluralsight Course: HTML5 Canvas Fundamentals

    - by dwahlin
    I just finished up a new course for Pluralsight titled HTML5 Canvas Fundamentals that I had a blast putting together. It’s all about the client and involves a lot of pixel manipulation and graphics creation, which is challenging and fun at the same time. The goal of the course is to walk you through the fundamentals, start a gradual jog into the API functions, and then start sprinting as you learn how to build a business-chart canvas application from scratch that uses many of the available APIs. It’s fun stuff and very useful in a variety of scenarios, including Web (desktop or mobile) and even Windows 8 Metro applications. Here’s a sample video from the course that talks about building a simple bar chart using the HTML5 Canvas. Additional details about the course are shown next.

    HTML5 Canvas Fundamentals
    The HTML5 Canvas provides a powerful way to render graphics, charts, and other types of visual data without relying on plugins such as Flash or Silverlight. In this course you’ll be introduced to key features available in the canvas API and see how they can be used to render shapes, text, video, images, and more. You’ll also learn how to work with gradients, perform animations, transform shapes, and build a custom charting application from scratch. If you’re looking to learn more about using the HTML5 Canvas in your Web applications then this course will break down the learning curve and give you a great start!

    Getting Started with the HTML5 Canvas
        Introduction
        HTML5 Canvas Usage Scenarios
        Demo: Game Demos
        Demo: Engaging Applications
        Demo: Charting
        HTML5 Canvas Fundamentals
        Hello World Demo
        Overview of the Canvas API
        Demo: Canvas API Documentation
        Summary

    Drawing with the HTML5 Canvas
        Introduction
        Drawing Rectangles and Ellipses
        Demo: Simple Bar Chart
        Demo: Simple Bar Chart with Transforms
        Demo: Drawing Circles
        Demo: Using arcTo()
        Drawing Lines and Paths
        Demo: Drawing Lines
        Demo: Simple Line Chart
        Demo: Using bezierCurveTo()
        Demo: Using quadraticCurveTo()
        Drawing Text
        Demo: Filling, Stroking, and Measuring Text
        Demo: Using Canvas Transforms with Text
        Drawing Images
        Demo: Using Image Functions
        Drawing Videos
        Demo: Syncing Video with a Canvas
        Summary

    Manipulating Pixels
        Introduction
        Rendering Gradients
        Demo: Creating Linear Gradients
        Demo: Creating Radial Gradients
        Using Transforms
        Demo: Getting Started with Transform Functions
        Demo: Using transform() and setTransform()
        Accessing Pixels
        Demo: Creating Pixels Dynamically
        Demo: Grayscale Pixels
        Animation Fundamentals
        Demo: Getting Started with Animation
        Demo: Using Gradients, Transforms, and Animations
        Summary

    Building a Custom Data Chart
        Introduction
        Creating the CanvasChart Object
        Creating the CanvasChart Shell Code
        Rendering Text and Gradients
        Rendering Data Points Text and Guide Lines
        Connecting Data Point Lines
        Rendering Data Points
        Adding Animation
        Adding Overlays and Interactivity
        Summary

    Read the article

  • BigData and Customer Experience: Happy Together

    - by Isabel F. Peñuelas
    The two big buzzes of the year may lie closer together than it appears; the concepts intersect at various points. BigData and the return on investment of marketing campaigns: in a recent post, Big Data Is The Future Of Marketing, Jeff Dachis explains very clearly how "Big data analytics finally allows marketers to identify, measure, and manage what is positively impacting their Brand". Regression analysis applied to big data volumes coming from social media will replace the failed attempts to justify marketing investments in social media in terms of followers and likes. "The measurement models applied by marketers on TV campaigns don't work on social," he continues; we need to study the data with fresh eyes, and maybe then we will start understanding and measuring brand engagement. Social CRM and BigData: the real value of Social CRM starts with analyzing masses of big data from social media in order to apply social-intelligence techniques that allow us to classify new customer niches and communities and define appropriate strategies for contacting potential customers. Gartner says the market for Social CRM is on pace to surpass $1 billion in revenue by year-end 2012, but in the words of Zach Hofer-Shall, analyst at Forrester Research, "Social customer relationship management is hard" (The Social CRM Arms Race Heats). To succeed, brands need three things: investment in new social tools, investment in consultancy, and investment in infrastructure for massive data storage and analysis. Neither CeX nor BigData is an easy or cheap win. But what are the customer benefits of such investments? BigData and brand engagement: time is the most valuable asset of today's consumers - tired of information overload, exhausted by the terabytes of offerings, anxious because they do not get the same fast multichannel experience from their preferred goods and service providers that they find on their social media. Yes, I know you have read this before; me too. But it is real. The motto of the Customer Experience philosophy - providing a consistent experience through multiple touchpoints that makes the customer/brand relationship easier and more valuable - finds its basis in understanding customers' preferences and context, for which BigData analysis is another imperative. In summary, I believe that using BigData analysis in combination with appropriate CeX strategies and technologies is a promising direction for achieving efficiency and marketing cost savings, growing the customer base, and increasing customer conversion and retention. In a word: the direction of future marketing.

    Read the article

  • Gauging Maturity of your BPM Strategy - part 1 / 2

    - by Sanjeev Sharma
    In this post I will discuss the essence of maturity assessment and the business imperative for doing the same in the context of BPM. Social psychology purports that an individual progresses from being a beginner to an expert in a given activity or task along four stages of self-awareness:

    Unconscious Incompetence, where the individual does not understand or know how to do something, does not necessarily recognize the deficit, and may even deny the usefulness of the skill.
    Conscious Incompetence, where the individual recognizes the deficit, as well as the value of a new skill in addressing it.
    Conscious Competence, where the individual understands or knows how to do something, but demonstrating the skill requires explicit concentration.
    Unconscious Competence, where the individual has had so much practice with a skill that it has become "second nature" and serves as a basis for developing other complementary skills.

    We can extend the above thinking to an organization as a whole by measuring an organization's level of competence in a specific area or capability as an aggregate of the competence levels of the individuals it comprises. After all, organizations, like individuals, evolve through experience, developing "memory" and capabilities that are shaped through a constant cycle of learning, un-learning, and re-learning. Hence the key to organizational success lies in developing these capabilities to enable execution of its strategy in line with the external environment, i.e. demand, competition, economy, etc. However, developing a capability merits establishing a baseline in order to: assess the magnitude of improvement from past investments; identify gaps and shortcomings; and prioritize future investments in the right areas. A maturity assessment is essentially an organizational self-awareness check aimed at depicting the "as-is" snapshot of an existing capability, in order to guide future investments to develop that capability in line with business goals. This, effectively, is the essence of a maturity assessment. Organizational capabilities stem from an organization's architecture, routines, culture, and intellectual resources, which are implicitly and explicitly embedded in its business processes. The fact that business processes underpin the realization of organizational capabilities is what has prompted business transformation and process management efforts. Thus, the BPM capability of an organization needs to be measured on an ongoing basis to ensure delivery of its planned benefits. In my next post I will describe Oracle's BPM Maturity assessment methodology.

    Read the article

  • Oracle Customer Success Forum - Batesville - Oracle Sales Cloud - June 24th, 5pm CET

    - by Richard Lefebvre
    Batesville uses Oracle Sales Cloud to create a common platform and standardize processes for business transformation across field sales and telesales. Using real-time KPI dashboards, they are measuring their business success with consistency across their sales reps. We are pleased to invite you to a discussion with Batesville on industry trends, why sales automation is important, the reasons for choosing Oracle Sales Cloud, and the vendor evaluation process. Please click on the register button to confirm your attendance by 5:00 p.m. Pacific Time on June 23, 2014.

    Speakers:
    Diane Kinker, Director CRM Program
    Chris Haven, Senior Director Product Management, Oracle (Moderator)

    Organization Profile:
    Batesville (www.Batesville.com), a wholly owned subsidiary of Hillenbrand, Inc. (NYSE: HI), is the leader in the North American death care industry. For more than 125 years, Batesville has been dedicated to helping families honor the lives of those they love®. Batesville's innovation has changed the face of funeral service, from advancements in manufacturing and quality to patented features and memorialization offerings, technology and web-based solutions, and profit-enhancing merchandising systems and room displays. Our history of manufacturing excellence, product innovation, superior customer service, and reliable delivery has helped Batesville become, and remain, a market leader.

    Event Description:
    In this informal reference call, you will have the opportunity to hear Batesville discuss industry trends, why sales automation is important, the decision-making process for choosing Oracle Sales Cloud, and the vendor evaluation process. The call will open with a brief overview, followed by discussion and an open question-and-answer session. Please allow one hour for the call.

    Why Oracle:
    Batesville looked to transform its sales automation processes. Oracle Sales Cloud met these needs and Batesville's requirements for:
    Standardized end-to-end sales processes, including Sales Performance Management (territory management, quota management, and incentive compensation)
    Mobile capabilities, with integration to Microsoft Outlook and smartphones
    Creation of the WIG (Wildly Important Goal) Dashboard using reporting and analytics

    Click the Register Now button to confirm your attendance for this informative event. Registration will close at 5:00 p.m. Pacific Time on June 23, 2014. After you register, your information will be forwarded through an approval process. Once your registration request has been validated against the invitation database, you will receive an email confirmation with your registration details, as long as there is availability. Please be advised that Batesville will review the registrant list and may dismiss registrations as they see fit. Register Now!

    Read the article

  • Does the method of adjustment matter, or just the final calibration?

    - by Steve
    A company produces software (and hardware) that is used both to perform automatic adjustments on electronic test equipment and to perform calibrations of the same equipment. The results of the calibrations are put onto a certificate of calibration that is sent to the customer along with the equipment. This calibration certificate states various conditions of the calibration, such as what hardware (models/serial numbers) and software (version) was used to perform the calibration, as well as things like environmental conditions, etc. Assuming that the software used to produce the data on the certificate of calibration (and listed on it) must have gone through a "test/release" process and must be considered "released" software - does this also mean that the software used for adjustment must be released? I believe that the method (software/environmental conditions/etc.) used or present during adjustment doesn't matter; all that really matters is the end result of the calibration, the conditions present during the calibration, and whether or not the equipment was within its specifications. The real question I'm hoping to get answered: is there a reputable source (e.g. NIST or somewhere similar) that addresses this question? (I have searched...) The thinking is that during high-volume production runs, the "unreleased" system can be used to perform adjustments, as long as a released system is used to perform the calibrations, since the time required to perform the adjustments is much longer than the calibration. This unreleased system will eventually be released for use, but currently is not. Also, please note that there is a distinction between "adjustment" and "calibration". The definition from the BIPM International Vocabulary of Metrology, 2.39: an operation that, under specified conditions, in a first step, establishes a relation between the quantity values with measurement uncertainties provided by measurement standards and corresponding indications with associated measurement uncertainties (of the calibrated instrument or secondary standard) and, in a second step, uses this information to establish a relation for obtaining a measurement result from an indication. Followed by NOTE 2 (emphasis in original text): Calibration should not be confused with adjustment of a measuring system, often mistakenly called "self-calibration", nor with verification of calibration. As a side note, I'm not sure why this got downvoted. It's regarding software and its use before and after release. I believe there is a best practice that can be applied, and this is (hopefully) not primarily opinion-based.

    Read the article

  • Duplicity Full Backup Lifetime and Efficiency

    - by Tim Lytle
    I'm trying to work up a backup strategy for some clients, and am leaning towards duplicity for remote backup (we already use rdiff-backup for internal/on-location backups). Is it reasonable to want a full backup every so often? Since duplicity increments forward, each incremental backup relies on the previous increment, and all rely heavily on the last full backup; should that become corrupt, bad things happen. A related question: does duplicity test the incremental backups for consistency? Assuming I do want a full backup every so often, how efficiently does duplicity create it? Can/does it check file signatures and copy unchanged data from previous full backups/increments - basically creating a new "full" archive by transferring new/changed data and merging existing unchanged data? Right now my concern is that running a full backup is needed, but the consistently large bandwidth use of full backups would make this unreasonable for some clients.

    Read the article

  • Cisco ASA bonding/teaming/port-channel capabilities

    - by Antoine Benkemoun
    Hello, this seemed to me like a really simple question that I would be able to answer by myself, but I have not been able to find any info on the subject. I have a Cisco ASA 5510, which has four FastEthernet interfaces. I was wondering if it would be possible to use two or three of these interfaces as a port-channel in order to aggregate bandwidth for multiple VLANs. I have found no info on the Cisco website nor on Google. Is this just a stupid/crazy idea, or am I missing something? Thank you in advance for your help, Antoine

    Read the article

  • How should I force-enable BIND's persistent cache, or Unbound's persistent cache

    - by Jacob Rabinsun
    I am trying to run a local DNS server on my home computer to both speed up DNS lookups and reduce bandwidth use, so that both my laptop and my PC benefit. I have BIND 9 running very smoothly; there is only one simple problem: BIND is not a persistent DNS cache, and if I restart its service the whole cache is wiped out. So, is there a way I could make BIND 9 keep its cache across a system restart? Also, which one is better, Unbound or BIND? Which would you suggest? Does Unbound have a persistent cache, or can one be enabled?

    Read the article
