Search Results

Search found 2210 results on 89 pages for 'techniques'.

Page 50/89

  • Who uses syslog for logging their web application?

    - by user137246
    I was wondering if somebody uses syslog to log their web application's errors/warnings/info? It could be quite useful in a deployment environment with a lot of servers. If yes, what kind of client visualisation do you use to watch errors and group the same errors into batches? Do you use techniques other than syslog to achieve this kind of logging functionality?
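
    A minimal sketch of what this can look like with Python's standard library, assuming a syslog collector reachable on UDP port 514 of a host called loghost (the hostname, facility and message format are placeholders, not a recommendation):

        import logging
        from logging.handlers import SysLogHandler

        # Ship application logs to a central syslog server over UDP.
        # "loghost" and LOG_LOCAL0 are assumptions; substitute your
        # collector's address and the facility your infrastructure uses.
        handler = SysLogHandler(address=("loghost", 514),
                                facility=SysLogHandler.LOG_LOCAL0)
        handler.setFormatter(logging.Formatter("webapp: %(levelname)s %(message)s"))

        logger = logging.getLogger("webapp")
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)

        logger.warning("payment gateway timeout, retrying")  # appears in syslog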

    Read the article

  • Is strict mode more performant?

    - by sje397
    Does executing JavaScript within a browser in 'strict mode' make it more performant, in general? Do any of the major browsers do additional optimisation or use any other techniques that will improve performance in strict mode? Edit: Since none of the major engines actually implements strict mode yet, I'll rephrase slightly: Is strict mode intended, amongst its other goals, to allow browsers to introduce additional optimisations or other performance enhancements?

    Read the article

  • Printf example in bash does not create a newline

    - by WolfHumble
    Working with printf in a bash script, adding no space after "\n" does not create a newline, whereas adding a space does, e.g.:

    1. No space after "\n":

        NewLine=`printf "\n"`
        echo -e "Firstline${NewLine}Lastline"

       Result:

        FirstlineLastline

    2. Space after "\n":

        NewLine=`printf "\n "`
        echo -e "Firstline${NewLine}Lastline"

       Result:

        Firstline
         Lastline

    Question: Why doesn't 1. create the following result:

        Firstline
        Lastline

    I know that this specific issue could have been worked around using other techniques, but I want to focus on why 1. does not work.
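
    For what it's worth, the stripping behaviour in case 1 can be reproduced directly in the shell: command substitution removes all trailing newlines from the captured output, so a bare printf "\n" yields an empty variable, while printf "\n " keeps its newline because the trailing space is no longer at the end. A quick sketch:

        a=`printf "\n"`     # command substitution strips trailing newlines -> empty
        b=`printf "\n "`    # trailing space protects the newline
        echo "${#a}"        # prints 0
        echo "${#b}"        # prints 2 (newline plus space)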

    Read the article

  • Session attacks: what are the new breeds of attacks?

    - by user352321
    Hello, I am collecting as much information as possible about HTTP(S) session attacks. There is plenty of information about existing attacks, but I would like to know if some new breeds of attacks are now made possible, either by security flaws in popular software or technologies, or by new, smarter security engineering. Do you have some recommendations about new techniques or tools? Thanks.

    Read the article

  • C# 4: The Curious ConcurrentDictionary

    - by James Michael Hare
    In my previous post (here) I did a comparison of the new ConcurrentQueue versus the old standard of a System.Collections.Generic Queue with simple locking. The results were exactly what I would have hoped: the ConcurrentQueue was faster with multi-threading in almost all situations. In addition, concurrent collections have the added benefit that you can enumerate them even if they're being modified.

    So I set out to see what the improvements would be for the ConcurrentDictionary: would it have the same performance benefits as the ConcurrentQueue did? Well, after running some tests and multiple tweaks and tunes, I have good and bad news.

    But first, let's look at the tests. Obviously there are many things we can do with a dictionary. One of the most notable uses, of course, in a multi-threaded environment is for a small, local in-memory cache. So I set about to do a very simple simulation of a cache where I would create a test class that I'll just call an Accessor. This accessor will attempt to look up a key in the dictionary, and if the key exists, it stops (i.e. a cache "hit"). However, if the lookup fails, it will then try to add the key and value to the dictionary (i.e. a cache "miss"). So here's the Accessor that will run the tests:

        internal class Accessor
        {
            public int Hits { get; set; }
            public int Misses { get; set; }
            public Func<int, string> GetDelegate { get; set; }
            public Action<int, string> AddDelegate { get; set; }
            public int Iterations { get; set; }
            public int MaxRange { get; set; }
            public int Seed { get; set; }

            public void Access()
            {
                var randomGenerator = new Random(Seed);

                for (int i = 0; i < Iterations; i++)
                {
                    // give a wide spread so we will have some duplicates and some uniques
                    var target = randomGenerator.Next(1, MaxRange);

                    // attempt to grab the item from the cache
                    var result = GetDelegate(target);

                    // if the item doesn't exist, add it
                    if (result == null)
                    {
                        AddDelegate(target, target.ToString());
                        Misses++;
                    }
                    else
                    {
                        Hits++;
                    }
                }
            }
        }

    Note that so I could test different implementations, I defined a GetDelegate and an AddDelegate that will call the appropriate dictionary methods to add or retrieve items in the cache using various techniques. So let's examine the three techniques I decided to test:

    - Dictionary with mutex - just your standard generic Dictionary with a simple lock construct on an internal object.
    - Dictionary with ReaderWriterLockSlim - the same Dictionary, but now using a lock designed to let multiple readers access simultaneously and then locked when a writer needs access.
    - ConcurrentDictionary - the new ConcurrentDictionary from System.Collections.Concurrent that is supposed to be optimized to allow multiple threads to access safely.

    So the approach to each of these is also fairly straightforward. Let's look at the GetDelegate and AddDelegate implementations for the Dictionary with the mutex lock:

        Action<int, string> addDelegate = (key, val) =>
        {
            lock (_mutex)
            {
                _dictionary[key] = val;
            }
        };
        Func<int, string> getDelegate = key =>
        {
            lock (_mutex)
            {
                string val;
                return _dictionary.TryGetValue(key, out val) ? val : null;
            }
        };

    Nothing new or fancy here, just your basic lock on a private object and then a query/insert into the Dictionary.
    Now, for the Dictionary with ReaderWriterLockSlim it's a little more complex:

        Action<int, string> addDelegate = (key, val) =>
        {
            _readerWriterLock.EnterWriteLock();
            _dictionary[key] = val;
            _readerWriterLock.ExitWriteLock();
        };
        Func<int, string> getDelegate = key =>
        {
            string val;
            _readerWriterLock.EnterReadLock();
            if (!_dictionary.TryGetValue(key, out val))
            {
                val = null;
            }
            _readerWriterLock.ExitReadLock();
            return val;
        };

    And finally the ConcurrentDictionary, which, since it does all its own concurrency control, is remarkably elegant and simple:

        Action<int, string> addDelegate = (key, val) =>
        {
            _concurrentDictionary[key] = val;
        };
        Func<int, string> getDelegate = key =>
        {
            string s;
            return _concurrentDictionary.TryGetValue(key, out s) ? s : null;
        };

    Then I set up a test harness that would simply ask the user for the number of concurrent Accessors to attempt to access the cache (as specified in Accessor.Access() above), let them fly, and see how long it took them all to complete. Each of these tests was run with 10,000,000 cache accesses divided among the available Accessor instances. All times are in milliseconds.

        Dictionary with Mutex Locking
        ---------------------------------------------------
        Accessors    Mostly Misses    Mostly Hits
        1            7916             3285
        10           8293             3481
        100          8799             3532
        1000         8815             3584

        Dictionary with ReaderWriterLockSlim Locking
        ---------------------------------------------------
        Accessors    Mostly Misses    Mostly Hits
        1            8445             3624
        10           11002            4119
        100          11076            3992
        1000         14794            4861

        ConcurrentDictionary
        ---------------------------------------------------
        Accessors    Mostly Misses    Mostly Hits
        1            17443            3726
        10           14181            1897
        100          15141            1994
        1000         17209            2128

    The first test I did across the board is the Mostly Misses category. The mostly-misses case (more adds, because the data requested was not in the dictionary) shows an interesting trend: in both cases the Dictionary with the simple mutex lock is much faster, and the ConcurrentDictionary is the slowest solution. But this got me thinking, and a little research seemed to confirm it: maybe the ConcurrentDictionary is more optimized for concurrent "gets" than "adds". Since the ratio of misses to hits was 2 to 1, I decided to reverse that and see the results.

    So I tweaked the data so that the number of keys was much smaller than the number of iterations, to give me about a 2 to 1 ratio of hits to misses (twice as likely to already find the item in the cache as to need to add it). And yes, indeed here we see that the ConcurrentDictionary is faster than the standard Dictionary. I have a strong feeling that as the ratio of hits to misses gets higher and higher, these numbers get even better as well. This makes sense, since the ConcurrentDictionary is read-optimized.

    Also note that I tried the tests with capacity and concurrency hints on the ConcurrentDictionary but saw very little improvement. I think this is largely because on the 10,000,000-access test it quickly ramped up to the correct capacity and concurrency, and thus the impact was limited to the first few milliseconds of the run.

    So what does this tell us? Well, as in all things, ConcurrentDictionary is not a panacea. It won't solve all your woes and it shouldn't be the only Dictionary you ever use. So when should we use each?

    Use System.Collections.Generic.Dictionary when:

    - You need a single-threaded Dictionary (no locking needed).
    - You need a multi-threaded Dictionary that is loaded only once at creation and never modified (no locking needed).
    - You need a multi-threaded Dictionary to store items where writes are far more prevalent than reads (locking needed).

    And use System.Collections.Concurrent.ConcurrentDictionary when:

    - You need a multi-threaded Dictionary where reads are far more prevalent than writes.
    - You need to be able to iterate over the collection without locking it, even if it's being modified.

    Both dictionaries have their strong suits. I have a feeling this is just one of those cases where you need to know from the design what you hope to use it for, and make your decision based on that criterion.
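
    As a footnote to the cache pattern above: the Accessor's check-then-add sequence is not atomic, so two threads can both miss on the same key and both add it. If that matters for your cache, ConcurrentDictionary's GetOrAdd folds the lookup and the conditional insert into a single thread-safe call. A minimal sketch (the class and key are illustrative, not part of the original test harness):

        using System;
        using System.Collections.Concurrent;

        class CacheSketch
        {
            static void Main()
            {
                var cache = new ConcurrentDictionary<int, string>();

                // GetOrAdd does the lookup and the conditional insert in one
                // thread-safe call; the value factory may run more than once
                // under contention, but only one result is ever stored.
                string value = cache.GetOrAdd(42, key => key.ToString());
                Console.WriteLine(value);
            }
        }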

    Read the article

  • How To Remove People and Objects From Photographs In Photoshop

    - by Eric Z Goodnight
    You might think that it's a complicated process to remove objects from photographs. But really, Photoshop makes it quite simple, even when removing all traces of a person from digital photographs. Read on to see just how easy it is.

    Photoshop was originally created to be an image editing program, and it excels at it. With hardly any Photoshop experience, any beginner can begin removing objects or people from their photos. Have some friends that photobombed an otherwise great pic? Tell them to say their farewells, because here's how to get rid of them with Photoshop!

    Tools for Removing Objects

    Removing an object is not really "magical" work. Your goal is basically to cover up the information you don't want in an image with information you do want. In this sample image, we want to remove the cigar-smoking man and leave the geisha. Here are a couple of the tools that can be useful when attempting this kind of task.

    - Clone Stamp and Pattern Stamp Tool: Sample parts of your image from your background, and allow you to paint into your image with your mouse or stylus.
    - Eraser and Brush Tools: Paint flat colors and shapes, and erase cloned layers of image information. Basic, down-and-dirty photo editing tools.
    - Pen, Quick Selection, Lasso, and Crop tools: Select, isolate, and remove parts of your image with these selection tools. All useful in their own way. Some, like the Pen tool, are nightmarishly tough on beginners.

    Remove a Person with the Clone Stamp Tool (Video)

    The video above uses the Clone Stamp tool to sample and paint with the background texture. It's a simple tool to use, although it can be confusing, possibly counter-intuitive. Here are some pointers, in addition to the video above.

    - Press shortcut key S to choose the Clone Stamp tool from the Tools Panel.
    - Always create a copy of your background layer before doing heavy edits by right-clicking on the background in your Layers Panel and selecting "Duplicate."
    - Hold Alt with the Clone Stamp tool selected, and click anywhere in your image to sample that area.
    - When you're sampling an area, your cursor is "Aligned" with your sample area. When you paint, your sample area moves. You can turn the "Aligned" setting off in the Options Panel at the top of your screen if you want.
    - Change your brush size and hardness as shown in the video by right-clicking in your image.
    - Use your Lasso to copy and paste pieces of your image in order to cover up any parts that seem appropriate.

    Photoshop Magic with the "Content-Aware Fill"

    One of the hallmark features of CS5 is the "Content-Aware Fill." Content-Aware Fill can be an excellent shortcut to removing objects and even people in Photoshop, but it is somewhat limited and can get confused. Here's a basic rundown of how it works.

    - Select an object using your Lasso tool, shortcut key L. The Lasso works fine, as this selection can be rough.
    - Navigate to Edit > Fill, and select "Content-Aware," as illustrated above, from the pull-down menu.
    - It's surprisingly simple. After some processing, Photoshop has done the work of removing the object for you. It takes a few moments, and it is not perfect, so be prepared to touch it up with some copy-paste or some Clone Stamp action.

    Content-Aware Fill Has Its Limits

    Keep in mind that Content-Aware Fill is meant to be used alongside other techniques. It doesn't always perform perfectly, but it can give you a great starting point. Take this image, for instance.
    It is actually plausible to hide this figure and make this image look like he was never there at all. With a selection made with the Lasso tool, navigate to Edit > Fill and select "Content-Aware" again. The result is surprisingly good, but as you can see, worthy of some touch-up. With a result like this one, you'll have to get your hands dirty with copy-paste to create believable lines in the background. With many photographs, Content-Aware Fill will simply get confused and give you results you won't be happy with.

    Additional Touch-Up for Bad Background Textures with the Pattern Stamp Tool

    For the perfectionist, cleaning up the lumpy-looking textures that the Clone Stamp can leave is fairly simple using the Pattern Stamp Tool.

    - Sample a piece of your image with your Marquee Tool, shortcut key M.
    - Navigate to Edit > Define Pattern to create a new Pattern from your selection. Click OK to continue.
    - Click and hold down on the Clone Stamp tool in your Tools Panel until you can select the Pattern Stamp Tool.
    - Pick your new pattern from the Options at the top of your screen, in the Options Panel.
    - Then simply right-click in your image in order to pick as soft a brush as possible to paint with.
    - Paint into your image until your background is as smooth as you want it to be, making your painted-out object more and more invisible. If you get lines from your repeated texture, experiment with turning the "Aligned" option on and off and paint over them.

    In addition to this, simple use of the Crop Tool, shortcut key C, can recompose an image, making it look as if it never had another object in it at all. Combine these techniques to find a method that works best for your images.

    Have questions or comments concerning Graphics, Photos, Filetypes, or Photoshop? Send your questions to [email protected], and they may be featured in a future How-To Geek Graphics article.

    Image Credits: Geisha Kyoto Gion by Todd Laracuenta via Wikipedia, used under Creative Commons. Moai Rano raraku by Aurbina, in Public Domain. Chris Young visits Wrigley by TonyTheTiger, via Wikipedia, used under Creative Commons.

    Read the article

  • SQL SERVER – 3 Online SQL Courses at Pluralsight and Free Learning Resources

    - by pinaldave
    Usain Bolt is an inspiration for all. He broke his own record multiple times because he wanted to do better! Read more about him on Wikipedia. He is great, and indeed the fastest man on the planet.

    Usain Bolt – World's Fastest Man

    "Can you teach me SQL Server Performance Tuning?" This is one of the most popular questions which I receive all the time. The answer is YES. I would love to do performance tuning training for anyone, anywhere. It is my favorite thing to do, and it is my favorite thing to train others in. If possible, I would love to do training 24 hours a day, 7 days a week, 365 days a year. To me, it doesn't feel like a job.

    Of course, as much as I would love to do performance tuning 24/7/365, obviously I am just one human being and can only be in one place at one time. It is also very difficult to train more than one person at a time, especially when the trainees are at different levels. I am also limited by geography. I live in India, and adjust to my own time zone. Trying to teach a live course from India to someone whose time zone is 12 or more hours off of mine is very difficult. If I am trying to teach at 2 am, I am sure I am not at my best!

    There was only one solution to scale – online training. I have built 3 different courses on SQL Server Performance Tuning with Pluralsight. Now I have no problem – I am 100% scalable and available 24/7, 365 days a year. You can make me say the same things again and again till you find it right. I am on your mobile and PC, as well as on Xbox. This is why I am such a big fan of online courses. I have recorded many performance tuning classes and you can easily access them online, at your own time. And don't think that just because these aren't live classes you won't be able to get any feedback from me. I encourage all my viewers to go ahead and ask me questions by e-mail, Twitter, Facebook, or whatever way you can get a hold of me.

    Here are details of three of my courses with Pluralsight. I suggest you go over the description of each course. As the author of the courses, I have a few FREE codes for watching them. Please leave a comment with your valid email address, and I will send a few of them to random winners.

    SQL Server Performance: Introduction to Query Tuning

    SQL Server performance tuning is an art to master – for developers and DBAs alike. This course takes a systematic approach to planning, analyzing, debugging and troubleshooting common query-related performance problems. This includes an introduction to understanding execution plans inside SQL Server. In this almost four-hour course we cover the following important concepts:

    - Introduction 10:22
    - Execution Plan Basics 45:59
    - Essential Indexing Techniques 20:19
    - Query Design for Performance 50:16
    - Performance Tuning Tools 01:15:14
    - Tips and Tricks 25:53
    - Checklist: Performance Tuning 07:13

    The duration of each module is mentioned beside the name of the module.

    SQL Server Performance: Indexing Basics

    This course teaches you how to master the art of performance tuning SQL Server by better understanding indexes. In this almost two-hour course we cover the following important concepts:

    - Introduction 02:03
    - Fundamentals of Indexing 22:21
    - Practical Indexing Implementation Techniques 37:25
    - Index Maintenance 16:33
    - Introduction to Columnstore Index 08:06
    - Indexing Practical Performance Tips and Tricks 24:56
    - Checklist: Index and Performance 07:29

    The duration of each module is mentioned beside the name of the module.
    SQL Server Questions and Answers

    This course is designed to help you better understand how to use SQL Server effectively. The course presents many of the common misconceptions about SQL Server, and then carefully debunks those misconceptions with clear explanations and short but compelling demos, showing you how SQL Server really works. In this almost 2 hours and 15 minutes course we cover the following important concepts:

    - Introduction 00:54
    - Retrieving IDENTITY value using @@IDENTITY 08:38
    - Concepts Related to Identity Values 04:15
    - Difference between WHERE and HAVING 05:52
    - Order in WHERE clause 07:29
    - Concepts Around Temporary Tables and Table Variables 09:03
    - Are stored procedures pre-compiled? 05:09
    - UNIQUE INDEX and NULLs problem 06:40
    - DELETE VS TRUNCATE 06:07
    - Locks and Duration of Transactions 15:11
    - Nested Transaction and Rollback 09:16
    - Understanding Date/Time Datatypes 07:40
    - Differences between VARCHAR and NVARCHAR datatypes 06:38
    - Precedence of DENY and GRANT security permissions 05:29
    - Identify Blocking Process 06:37
    - NULLS usage with Dynamic SQL 08:03
    - Appendix: Tips and Tricks with Tools 20:44

    The duration of each module is mentioned beside the name of the module. You will have to log in and subscribe to the courses to view them.

    SQL in Sixty Seconds

    Here are my free video learning resources, SQL in Sixty Seconds. These are 60-second videos which I have built on various subjects related to SQL Server. Do let me know what you think about them. Here are three of my latest videos:

    - Identify Most Resource Intensive Queries – SQL in Sixty Seconds #028
    - Copy Column Headers from Resultset – SQL in Sixty Seconds #027
    - Effect of Collation on Resultset – SQL in Sixty Seconds #026

    You can watch and learn at your own pace. Then you can easily ask me any questions you have. E-mail is easiest, but for really tough questions I'm willing to talk on Skype, Gtalk, or even Facebook chat. Please do watch and then talk with me, I am always available on the internet!

    Here is the video of the world's fastest man. Usain St. Leo Bolt inspires us to do better than our best. We can go to the next level of our own record. We all can improve if we have the will and dedication. Watch the video from the 5:00 mark.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, SQLServer, T SQL, Technology, Video

    Read the article

  • Organization &amp; Architecture UNISA Studies &ndash; Chap 4

    - by MarkPearl
    Learning Outcomes

    - Explain the characteristics of memory systems
    - Describe the memory hierarchy
    - Discuss cache memory principles
    - Discuss issues relevant to cache design
    - Describe the cache organization of the Pentium

    Computer Memory Systems

    There are key characteristics of memory…

    - Location – internal or external
    - Capacity – expressed in terms of bytes
    - Unit of Transfer – the number of bits read out of or written into memory at a time
    - Access Method – sequential, direct, random or associative

    From a user's perspective the two most important characteristics of memory are…

    - Capacity
    - Performance – access time, memory cycle time, transfer rate

    The trade-off for memory happens along three axes…

    - Faster access time, greater cost per bit
    - Greater capacity, smaller cost per bit
    - Greater capacity, slower access time

    This leads to people using a tiered approach in their use of memory. As one goes down the hierarchy, the following occurs…

    - Decreasing cost per bit
    - Increasing capacity
    - Increasing access time
    - Decreasing frequency of access of the memory by the processor

    The use of two levels of memory to reduce average access time works in principle, but only if conditions 1 to 4 apply. A variety of technologies exist that allow us to accomplish this. Thus it is possible to organize data across the hierarchy such that the percentage of accesses to each successively lower level is substantially less than that of the level above.

    A portion of main memory can be used as a buffer to hold data temporarily that is to be read out to disk. This is sometimes referred to as a disk cache and improves performance in two ways…

    - Disk writes are clustered. Instead of many small transfers of data, we have a few large transfers of data. This improves disk performance and minimizes processor involvement.
    - Some data destined for write-out may be referenced by a program before the next dump to disk. In that case the data is retrieved rapidly from the software cache rather than slowly from disk.

    Cache Memory Principles

    Cache memory is substantially faster than main memory. A caching system works as follows…

    - When a processor attempts to read a word of memory, a check is made to see if this is in cache memory.
    - If it is, the data is supplied.
    - If it is not in the cache, a block of main memory, consisting of a fixed number of words, is loaded into the cache.

    Because of the phenomenon of locality of reference, when a block of data is fetched into the cache, it is likely that there will be future references to that same memory location or to other words in the block.

    Elements of Cache Design

    While there are a large number of cache implementations, there are a few basic design elements that serve to classify and differentiate cache architectures…

    - Cache Addresses
    - Cache Size
    - Mapping Function
    - Replacement Algorithm
    - Write Policy
    - Line Size
    - Number of Caches

    Cache Addresses

    Almost all non-embedded processors support virtual memory. Virtual memory in essence allows a program to address memory from a logical point of view without needing to worry about the amount of physical memory available. When virtual addresses are used, the designer may choose to place the cache between the MMU (memory management unit) and the processor, or between the MMU and main memory.
    The disadvantage of virtual memory is that most virtual memory systems supply each application with the same virtual memory address space (each application sees virtual memory starting at memory address 0), which means the cache memory must be completely flushed with each application context switch, or extra bits must be added to each line of the cache to identify which virtual address space the address refers to.

    Cache Size

    We would like the size of the cache to be small enough so that the overall average cost per bit is close to that of main memory alone, and large enough so that the overall average access time is close to that of the cache alone. Also, larger caches are slightly slower than smaller ones.

    Mapping Function

    Because there are fewer cache lines than main memory blocks, an algorithm is needed for mapping main memory blocks into cache lines. The choice of mapping function dictates how the cache is organized. Three techniques can be used…

    - Direct – the simplest technique; maps each block of main memory into only one possible cache line
    - Associative – permits each main memory block to be loaded into any line of the cache
    - Set Associative – exhibits the strengths of both the direct and associative approaches while reducing their disadvantages

    For detailed explanations of each approach – read the text book (page 148 – 154)

    Replacement Algorithm

    For associative and set-associative mapping, a replacement algorithm is needed to determine which of the existing blocks in the cache must be replaced by a new block. There are four common approaches…

    - LRU (Least recently used)
    - FIFO (First in first out)
    - LFU (Least frequently used)
    - Random selection

    Write Policy

    When a block resident in the cache is to be replaced, there are two cases to consider…

    - If no writes to that block have happened in the cache – discard it
    - If a write has occurred, a process needs to be initiated where the changes in the cache are propagated back to the main memory

    There are several approaches to achieve this, including…

    - Write Through – all writes to the cache are done to the main memory as well at the point of the change
    - Write Back – when a block is replaced, all dirty bits are written back to main memory

    The problem is complicated when we have multiple caches; there are techniques to accommodate this, but I have not summarized them.

    Line Size

    When a block of data is retrieved and placed in the cache, not only the desired word but also some number of adjacent words are retrieved. As the block size increases from very small to larger sizes, the hit ratio will at first increase because of the principle of locality, which states that data in the vicinity of a referenced word are likely to be referenced in the near future. As the block size increases, more useful data are brought into the cache. The hit ratio will begin to decrease, however, as the block becomes even bigger and the probability of using the newly fetched information becomes less than the probability of reusing the information that has to be replaced. Two specific effects come into play…

    - Larger blocks reduce the number of blocks that fit into a cache. Because each block fetch overwrites older cache contents, a small number of blocks results in data being overwritten shortly after it is fetched.
    - As a block becomes larger, each additional word is farther from the requested word and therefore less likely to be needed in the near future.

    The relationship between block size and hit ratio is complex, and no set approach is judged to be the best in all circumstances.
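
    As a footnote to the mapping discussion above, here is a small Python sketch (the cache parameters are illustrative, not from the textbook) of how a direct-mapped cache splits an address into tag, line index and byte offset:

        # Direct-mapped cache address breakdown with illustrative parameters:
        # 64-byte lines and 1024 lines give a 64 KB cache. The low bits select
        # the byte within a line, the middle bits select the cache line, and
        # the remaining high bits form the tag stored alongside the line.
        LINE_SIZE = 64    # bytes per line
        NUM_LINES = 1024  # lines in the cache

        OFFSET_BITS = LINE_SIZE.bit_length() - 1   # 6 bits
        INDEX_BITS = NUM_LINES.bit_length() - 1    # 10 bits

        def split(address):
            offset = address & (LINE_SIZE - 1)
            index = (address >> OFFSET_BITS) & (NUM_LINES - 1)
            tag = address >> (OFFSET_BITS + INDEX_BITS)
            return tag, index, offset

        # Two addresses exactly one cache-size apart share a line (same index,
        # different tag), so in a direct-mapped cache they evict each other.
        print(split(0x12345))                            # (1, 141, 5)
        print(split(0x12345 + NUM_LINES * LINE_SIZE))    # (2, 141, 5)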
    Pentium 4 and ARM cache organizations

    The processor core consists of four major components:

    - Fetch/decode unit – fetches program instructions in order from the L2 cache, decodes these into a series of micro-operations, and stores the results in the L1 instruction cache
    - Out-of-order execution logic – schedules execution of the micro-operations subject to data dependencies and resource availability; thus micro-operations may be scheduled for execution in a different order than they were fetched from the instruction stream. As time permits, this unit schedules speculative execution of micro-operations that may be required in the future
    - Execution units – these units execute micro-operations, fetching the required data from the L1 data cache and temporarily storing results in registers
    - Memory subsystem – this unit includes the L2 and L3 caches and the system bus, which is used to access main memory when the L1 and L2 caches have a cache miss, and to access the system I/O resources

    Read the article

  • How to run Firefox in Protected Mode? (i.e. at low integrity level)

    - by Ian Boyd
    I noticed that Firefox, unlike Chrome and Internet Explorer, doesn't run at the Low Mandatory Level (aka Protected Mode, low integrity). (Screenshots for Google Chrome, Microsoft Internet Explorer, and Mozilla Firefox omitted.)

    Following Microsoft's instructions, I can manually force Firefox into Low Integrity Mode by using:

        icacls firefox.exe /setintegritylevel Low

    But Firefox doesn't react well to not running with enough rights. I like the security of knowing that my browser is running with fewer rights than I have.

    Is there a way to run Firefox in low rights mode? Is Mozilla planning on adding "protected mode" sometime? Has someone found a workaround to Firefox not handling low rights mode?

    Update

    From a July 2007 interview with Mike Schroepfer, VP of Engineering at the Mozilla Foundation:

        ...we also believe in defense in depth and are investigating protected mode along with many other techniques to improve security for future releases.

    After a year and a half it doesn't seem like it is a priority.

    Read the article

  • Apache error_log repeated attempts to access forum.php

    - by bMon
    About every two seconds I am getting:

        [Sat Feb 19 19:00:01 2011] [error] [client 69.239.204.217] script '/var/www/html/forum.php' not found or unable to stat
        [Sat Feb 19 19:00:04 2011] [error] [client 69.239.204.217] File does not exist: /var/www/html/404.shtml

    ...in my /var/log/httpd/error_log file. Sometimes the request will be for forum_asp.php. I'm assuming it's a bot trying to access insecure forum files, but I'm not so sure, since it appears each is a unique IP and not just a few rogue IPs hitting it consecutively. And the whois results of the IPs aren't all the classic ISPs in Russia or China; they are more end-user addresses (Comcast, etc.). Any insight into what's going on here would be appreciated.

    Also, any techniques people use to do a "live monitor" of web traffic would be appreciated. Right now I'm doing a:

        tail -f error_log

    Thanks.
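
    One lightweight way to go beyond tail -f is to summarise the log as it grows. A rough Python sketch (the log path and the regular expression are assumptions matched to the error format quoted above; adjust both for your setup):

        import re
        from collections import Counter

        # Count how often each client IP probes for missing scripts, using
        # the "script '...' not found" lines shown in the question.
        pattern = re.compile(r"\[client ([\d.]+)\] script '([^']+)' not found")
        hits = Counter()

        with open("/var/log/httpd/error_log") as log:
            for line in log:
                match = pattern.search(line)
                if match:
                    hits[match.group(1)] += 1

        for ip, count in hits.most_common(20):
            print(f"{count:6d}  {ip}")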

    Read the article

  • DNA and Quantum computing

    - by Jacques
    I recently (a couple of weeks ago) read an article about the future of processing and how quantum processors and DNA processors (DNA computing) are the future competitors of computing, since both will completely outperform the computers of this era. In terms of processing speeds, what do we expect from these two different processing techniques? Personally, I believe that DNA processing will be a major step towards AI. For labs and office work, I think quantum processing will be more logical. I'm quite excited that I'm still so young - to see what the future of technology holds! Then again, my parents will soon find out what the afterlife holds... just as bloody exciting, if not more.

    Read the article

  • How to temporarily stop web pages intercepting a right-click?

    - by callum
    Some websites like Google Drive intercept right-clicks and provide their own context menu. This is usually what I want, but sometimes I still want to do a normal, native browser right-click. Is there any modifier key or something like that so a right-click doesn't trigger a click event on the web page, and it can't show its own context menu? I want to do this per-click, not by changing a setting somewhere. If there are different techniques depending on browser/OS, please list them.

    Read the article

  • What causes winsock 10055 errors? How should I troubleshoot?

    - by Tom Kerr
    I'm investigating some issues with winsock 10055 errors on a chain of custom applications (some of which we control, some not) and was hoping to get some advice on techniques to troubleshoot the problem.

        No buffer space available. An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full.

    From research, non-paged pool and ports seem to be the only resources which can cause this error. Is there another resource which might cause 10055 errors? Currently, we have perfmon counters set up on the applications, and non-paged pool usage looks low in most circumstances. Open TCP connections look low, and I am unaware of another way to monitor ports. Since it only happens in production, we are unable to use more invasive counters. Is there some other tool or procedure you would recommend to diagnose which application is causing the issue?

    Read the article

  • Agile sysadmin and devops - How to accomplish?

    - by Marco Ramos
    Nowadays, agile systems administration and devops are some of the most trending topics in systems administration and operations. Both of these concepts are mainly focused on bridging the gap between operations/sysadmins and the projects (developers, business, etc.). Even if you have never heard of the devops concept, I'm sure this topic is your concern too. So, what tools and techniques do you use to accomplish devops in your companies? I'm particularly interested in topics like change management, continuous integration and automation, but not only those. Please share your thoughts. I'm looking forward to reading your answers/opinions :)

    Read the article

  • Capture live streaming

    - by acidzombie24
    I want to capture an RTMP stream. The videos are live and different every day, and usually I can't tune in because I am busy at work doing something :(. I would like to capture the stream, however they use anti-capture techniques (it's live and free, so I don't understand why). I tried Orbit Downloader without any luck. The URL seems kind of weird (judging by Grab++). It has || in it and other URLs. What applications can I use to capture this? I am open to using Linux.

    Read the article

  • Thinkpad CPU temperature reaches 100°C after 10 minutes of heavy use

    - by Nicolas Raoul
    Less than two years ago, I bought a Lenovo Thinkpad R500. When using the CPU heavily, it sometimes decides to reboot. I am using Linux, and a widget shows me the CPU temperature. The PC usually reboots when the temperature of the first core gets over 100°C. Since I switched from Ubuntu 10.10 32-bit to Ubuntu 11.04 64-bit, it crashes much more often. I survive using these techniques: when in the office, I use a laptop cooler; when outside, I underclock to 800 MHz, and by doing so it does not get too warm. QUESTION: Is it normal that a Thinkpad crashes if the CPU is at 100% for more than 10 minutes? How can I use the full power of my computer, for instance when showing CPU-hungry business intelligence tools to clients? The computer does not really feel that hot. Maybe the sensor itself is reporting wrong temperatures? Can I tell the BIOS to ignore it?

    Read the article

  • Cannot run ifconfig-like commands via browser

    - by savruk
    Hi, the problem is I cannot run "ifconfig" or similar commands via the browser.

    Environment:

    - Programming language: Python
    - Server: lighttpd (CGI), running on BusyBox

    The machine is really small, so I am really restricted.

    Techniques tried: chown every script to root, but it makes no difference. Why? Because lighttpd runs under another user, not under root. As it is not root, when I try to run a script from the browser it always calls the Python file with its own uid. So it is impossible to run commands like "ifconfig eth0 192.168.2.123" via the web browser; I get an "ifconfig: SIOCSIFADDR: Permission denied" error. What can I do? I do not have any sudoers file, so I cannot modify the sudo configuration. Well, I don't even have a "sudo" command :) Thanks for your help

    Read the article

  • Reviewing firewall rules

    - by chmeee
    I need to review the firewall rules of a CheckPoint firewall for a customer (with 200+ rules). I have used FWDoc in the past to extract the rules and convert them to other formats, but there were some errors with exclusions. I then analyze them manually to produce an improved version of the rules (usually in OOo Calc) with comments. I know there are several visualization techniques, but they all come down to analyzing the traffic, and I want static analysis. So I was wondering: what process do you follow to analyze firewall rules? What tools do you use (not only for CheckPoint)?

    Read the article

  • Backup strategies for linux based file servers

    - by iceman
    I want to know some enterprise-wide backup strategies used for Linux-based file servers, and the tools and techniques used when making a backup. For example, when a backup fails on a machine, it should email the admin about the failure along with a log file. This won't happen in case the HDD fails and the system is completely out of work, but in other cases where a backup didn't take place, the admin should be able to know. What tools/scripts can be used for these particular scenarios?
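
    As a sketch of the notify-on-failure piece (tool-agnostic; the rsync command, log path, hostnames and addresses below are all placeholders for whatever backup tool is actually in use), a small Python wrapper run from cron can mail the admin only when the job exits non-zero:

        import smtplib
        import subprocess
        from email.message import EmailMessage

        # Run the backup, keep a log, and e-mail the admin on failure.
        result = subprocess.run(
            ["rsync", "-a", "--delete", "/srv/data/", "backuphost::data/"],
            capture_output=True, text=True)

        with open("/var/log/backup.log", "a") as log:
            log.write(result.stdout + result.stderr)

        if result.returncode != 0:
            msg = EmailMessage()
            msg["Subject"] = "Backup FAILED on fileserver01"
            msg["From"] = "backup@example.com"
            msg["To"] = "admin@example.com"
            msg.set_content(result.stderr or "Backup exited non-zero; see log.")
            with smtplib.SMTP("localhost") as smtp:
                smtp.send_message(msg)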

    Read the article

  • How to avoid printing nearly blank pages?

    - by joelarson
    How many times have you printed an email just to have the last page be 2 or 3 lines of a person's signature (or worse, the "This is confidential" notice inserted by corporate mail servers)? How many times has the last page contained just the footer of the website? Does anyone know of a utility or print driver that can help avoid printing blank or nearly-blank pages? I am not looking for techniques for avoiding this in specific programs - if I take the effort of doing a print preview and then adjusting the pages to be printed, then of course I can avoid it. What I want is something I can install so that, whenever I push "Print" to any of the various printers I print to with my laptop, it automatically says "hmmm... I bet he doesn't really want that page which is 95% empty" and possibly prompts me with "do you really want to waste paper on this?"

    Read the article

  • Allowing non-admins to run programs as admins on Windows 7

    - by Josh
    On *nix, admins can use the setuid flag to allow non-admins to run certain programs that would otherwise require admin privileges. Is there any way to do something similar in Windows 7? This question has been asked here before for Windows XP, and the answers were generally unsatisfying. I'm wondering if Windows 7 provides a better way. One idea I can think of would be to use Microsoft's Subsystem for UNIX Applications, but I'd rather not install that on every user's system if I can avoid it. Another idea I can think of (which would work on XP too, but I haven't seen it mentioned anywhere) would be to create a RunAsAdmin application that runs as a service, that takes a whitelist of "safe" apps and can be asked (from a command line, batch file or script) to run any program on the list as LocalSystem or whatever account the service uses. Is this possible? Are there any solutions that aren't as clunky as those? Or, has anyone implemented either of the above techniques successfully?

    Read the article
