Search Results

Search found 31421 results on 1257 pages for 'software performance'.


  • I've created a software RAID on SUSE EL 10 and now I need to monitor it

    - by Thomas B.
    I have created a software RAID 5 using the yast2 GUI on SUSE ES 10/11, and it works great. The five "drives" are cheap 2TB enclosures, each holding two 1TB Serial ATA drives, connected to the motherboard via eSATA. The problem with this "cheap" storage is that when one of the five drives fails, I seem to have no logs of any issue; it just gets harder and harder to write to the array until it dies. I use Samba to share the 4TB partition to the PCs in my home over a gigabit network. My question is this: are there any good (free) tools in Linux to monitor a RAID, or the drives in it, and detect problems? I haven't found any yet and was wondering if some exist.
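
    A minimal monitoring sketch, assuming the array was assembled with mdadm (YaST uses md/mdadm under the hood); /dev/md0 and the mail address are placeholders:

        # one-off health check
        cat /proc/mdstat
        mdadm --detail /dev/md0

        # run mdadm's built-in monitor as a daemon and get mail when a drive drops
        mdadm --monitor --scan --daemonise --mail=root@localhost

        # check the physical drives themselves with smartmontools
        smartctl -H /dev/sda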

    Read the article

  • Commercial NAS RAID1 disks moved to Software Raid system?

    - by Rolnik
    I've got a couple of commercial NAS boxes and I'm wondering whether they (ReadyNAS Duo, D-Link DNS-323), or any other NAS, are suitable for having their RAIDed disks moved to a software-based NAS. To be specific, I'm a big fan of the (largely) Debian-based Ubuntu. Can the disks from the aforementioned NAS units be migrated to Ubuntu (e.g. using the mdadm Linux command)? Secondly, is there any commercial NAS whose disks can be migrated over? Incidentally, here is a link to somebody who succeeded in a migration: http://www.linuxquestions.org/questions/slackware-14/moving-raid1-drives-into-computer-with-same-md-numbers-862312/ The specific scenario I'd like to prepare for is the eventual (sudden) death of one of the NAS motherboards.
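
    For what it's worth, a hedged sketch of the usual first steps with mdadm, assuming the NAS wrote standard Linux md metadata (verify before changing anything; the device names are placeholders):

        # inspect the disks first: does mdadm recognise the metadata?
        mdadm --examine /dev/sdb1
        mdadm --examine /dev/sdc1

        # if the metadata looks sane, assemble and check state
        mdadm --assemble --scan
        cat /proc/mdstat

        # mount read-only first while you verify the filesystem
        mount -o ro /dev/md0 /mnt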

    Read the article

  • Does the <script> tag position in HTML affect the performance of the webpage?

    - by Rahul Joshi
    If the script tag is above or below the body in an HTML page, does it matter for the performance of the website? And what if it's used in between, like this:

        <body>
            ..blah..blah..
            <script language="JavaScript" src="JS_File_100_KiloBytes">
                function f1() {
                    // some logic required for manipulating contents in the webpage
                }
            </script>
            ... some text here too ...
        </body>

    Or is this better?

        <script language="JavaScript" src="JS_File_100_KiloBytes">
            function f1() {
                // some logic required for manipulating contents in the webpage
            }
        </script>
        <body>
            ..blah..blah..
            ..call the above functions on events like onclick, onfocus, etc...
        </body>

    Or this one?

        <body>
            ..blah..blah..
            ..call the above functions on events like onclick, onfocus, etc...
            <script language="JavaScript" src="JS_File_100_KiloBytes">
                function f1() {
                    // some logic required for manipulating contents in the webpage
                }
            </script>
        </body>

    (Needless to say, everything is wrapped in the <html> tag.) How does the script's position affect performance while the page loads? Does it really? Which of these three is best, or is there some other placement you know of? I googled a bit and found Best Practices for Speeding Up Your Web Site, which suggests putting scripts at the bottom, but traditionally many people put them in the <head> tag, above <body>. I know it's NOT a rule, but many prefer it that way. If you don't believe it, just view the source of this page! Tell me which style gives the best performance.
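
    For reference, a small sketch of the placement most guides recommend today: either put the script at the end of <body>, or keep it in <head> with the defer attribute so it downloads in parallel but runs only after the document is parsed (app.js is a placeholder name):

        <!DOCTYPE html>
        <html>
        <head>
            <!-- defer: fetched in parallel, executed after parsing, order preserved -->
            <script src="app.js" defer></script>
        </head>
        <body>
            <p>Content renders without waiting for the 100 KB script.</p>
        </body>
        </html>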

    Read the article

  • How do large companies handle software updates for users without administrative rights?

    - by CT
    I just started working for a small-to-medium-sized company doing IT support; maybe 150 users or fewer. Right now every user has administrative rights to their own machine, which lets them install updates or whatever else they like. I'm tired of working on users' machines that are bloated with crap they installed themselves, so my first thought is to take away administrative rights. That would have other advantages too, such as preventing a lot of drive-by malware from the web. The problem is that users would then be unable to install updates (even though I find most ignore them anyway). How do large companies handle software updates on all client machines? EDIT: Windows environment. Most servers are Windows Server 2003 Enterprise. Clients are all Windows: XP, Vista, and 7.

    Read the article

  • Need to detect the same application open on another computer on the network. Any software around that can do this?

    - by Joe Schmoe
    I have a time management application that I use at home quite a lot and have running most of the time. At home I have a desktop PC and a couple of laptops scattered around the house, all networked together. Unfortunately, the application is not multi-user, and I risk losing or corrupting data if it has been left running on one computer inadvertently while I start using it on another in a different part of the house. I use Live Mesh to automatically keep the application's database synced across the computers; I just need some way of making sure I don't start using the application on one computer before closing it down on the previous one. Does anyone know of Windows software that can detect whether an application is running simultaneously on different computers on my network and warn me if I am about to have two open at the same time?
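
    One rough workaround, purely a sketch of my own (not from the post): keep a lock file in the Mesh-synced folder and check it at launch. It only helps if Mesh propagates the lock file faster than you move between machines. In Python, with a hypothetical path:

        import os
        import socket
        import sys

        LOCK = r"C:\MeshFolder\timeapp.lock"   # hypothetical path inside the synced folder

        def acquire_or_warn():
            """Refuse to start if another machine's name is in the lock file."""
            host = socket.gethostname()
            if os.path.exists(LOCK):
                with open(LOCK) as f:
                    owner = f.read().strip()
                if owner and owner != host:
                    sys.exit("Application appears to be open on " + owner + "; close it there first.")
            with open(LOCK, "w") as f:
                f.write(host)

        def release():
            if os.path.exists(LOCK):
                os.remove(LOCK)

        if __name__ == "__main__":
            acquire_or_warn()
            try:
                print("launch the real application here")
            finally:
                release()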

    Read the article

  • Is there performance to be gained by moving storage allocation local to a member function to its class?

    - by neuviemeporte
    Suppose I have the following C++ class:

        class Foo {
            double bar(double sth);
        };

        double Foo::bar(double sth) {
            double a, b, c, d, e, f;
            a = b = c = d = e = f = 0;
            /* do stuff with a..f and sth */
        }

    The function bar() will be called millions of times in a loop. Obviously, each time it's called, the variables a..f have to be allocated. Will I gain any performance by making a..f members of the Foo class and just initializing them at the function's point of entry? On the other hand, the values of a..f would then be accessed through this->, so I'm wondering if that isn't actually a possible performance degradation. Is there any overhead to accessing a value through a pointer? Thanks!
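
    For what it's worth, a minimal benchmark sketch of both variants (the class and variable names are mine, not from the post). Local doubles live in registers or on the stack, so there is no per-call allocation to save, while members are reached through this:

        #include <chrono>
        #include <cstdio>

        struct FooLocals {
            double bar(double sth) {
                double a = 0, b = 0;     // stack "allocation": effectively free
                a = sth * 2; b = a + sth;
                return a + b;
            }
        };

        struct FooMembers {
            double a = 0, b = 0;         // accessed through `this` on every use
            double bar(double sth) {
                a = sth * 2; b = a + sth;
                return a + b;
            }
        };

        int main() {
            FooLocals fl; FooMembers fm;
            double s1 = 0, s2 = 0;
            auto t0 = std::chrono::steady_clock::now();
            for (int i = 0; i < 10000000; ++i) s1 += fl.bar(i);
            auto t1 = std::chrono::steady_clock::now();
            for (int i = 0; i < 10000000; ++i) s2 += fm.bar(i);
            auto t2 = std::chrono::steady_clock::now();
            auto ms = [](auto d) {
                return std::chrono::duration_cast<std::chrono::milliseconds>(d).count();
            };
            std::printf("locals: %lld ms, members: %lld ms (sums %.0f %.0f)\n",
                        (long long)ms(t1 - t0), (long long)ms(t2 - t1), s1, s2);
        }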

    Read the article

  • What software works well for viewing massive TIFF images on Windows 7?

    - by nhinkle
    Today I saw an article about a half-gigabyte, 24,000-pixel-square high-res composite image of the moon. (This is a much smaller version of the image.) I find astronomy interesting, so I thought I'd download it and take a look. With 4GB of RAM and an i5 processor, I figured my computer could handle it. Unfortunately, the built-in Windows Picture Viewer didn't do such a great job. While it opened the file without a problem, zooming in was ineffective: the zoomed-out image loaded, but zooming in just showed a scaled-up version of it, not any detail. Closing the picture viewer also took a very long time, and the whole process used much more RAM than the 500MB of the picture (usage went from 1.3GB to 3.8GB). What other software would work better for this? I would prefer something free and fairly simple. I don't really want an editor (like Photoshop or GIMP), just a nice lightweight viewer. Any suggestions?

    Read the article

  • Does it make a difference in performance if I use self.fooBar instead of fooBar?

    - by mystify
    Note: I know exactly what a property is. This question is about performance. Using self.fooBar for READ access seems a waste of time to me; unnecessary Objective-C messaging is going on. The getters typically just return the ivar, so as long as I'm fairly sure no nontrivial getter will ever be written, I think it's perfectly fine to bypass this heavy guy. Objective-C messaging is about 20 times slower than a direct call. So if there is high-performance, high-frequency code with hundreds of properties in use, maybe it does help a lot to avoid unnecessary Objective-C messaging? Or am I wasting my time thinking about this?

    Read the article

  • Software RAID 1 broken, how do I fix this?

    - by Edward
    I'm running CentOS 6 x86_64. There is a software RAID 1 being used on the two internal 80GB drives. I got the following e-mail sent to me:

        A DegradedArray event had been detected on md device /dev/md1.

        Faithfully yours, etc.

        P.S. The /proc/mdstat file currently contains the following:

        Personalities : [raid1]
        md0 : active raid1 sda1[0]
              511988 blocks super 1.0 [2/1] [U_]

        md1 : active raid1 sda2[0]
              8190968 blocks super 1.1 [2/1] [U_]
              bitmap: 1/1 pages [4KB], 65536KB chunk

        md4 : active raid1 sdc1[0] sdb1[1]
              1953512400 blocks super 1.2 [2/2] [UU]

        md3 : active raid1 sdd5[1] sda5[0]
              61224892 blocks super 1.1 [2/2] [UU]
              bitmap: 1/1 pages [4KB], 65536KB chunk

        md2 : active raid1 sdd3[1] sda3[0]
              8190968 blocks super 1.1 [2/2] [UU]

        unused devices: <none>

    The system appears to have booted fine and is working. The two drives' content did not change at all; I only removed and reinstalled them while booted from the CentOS Live DVD. How do I get the array working again?
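
    A hedged repair sketch: confirm first which partitions actually dropped out (the sdd1/sdd2 names below are my guess from the mdstat layout, not from the post):

        # see which member is listed as removed or faulty
        mdadm --detail /dev/md0
        mdadm --detail /dev/md1

        # put the partner partitions back; md resyncs automatically
        # (if --re-add is refused, use --add to trigger a full resync)
        mdadm /dev/md0 --re-add /dev/sdd1
        mdadm /dev/md1 --re-add /dev/sdd2

        # watch the rebuild progress
        watch cat /proc/mdstat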

    Read the article

  • How do you optimize database performance when providing results for autocomplete/iterative search?

    - by Howiecamp
    Note: In this question I'm using the term "autocomplete" (or "iterative search") to refer to returning search-as-you-type results, like Google Search gives you. Also, my question is not specific to web applications vs. fat-client apps. How are SQL SELECT queries normally constructed to give decent performance for this type of query, especially over arbitrarily large data sets? In the case where the search only matches on the first n characters (the easiest case), am I still issuing a new SELECT result FROM sometable WHERE entry LIKE ... on each keypress? Even with various forms of caching, this seems like it might perform poorly. In cases where you want the search string to return prefix matches, substring matches, etc., it's an even harder problem. Looking at the case of searching a list of contacts, you might return results that match FirstName + LastName, LastName + FirstName, or any other substring.
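
    A common starting point (a sketch; the table and column names are made up): anchor the LIKE pattern to a prefix so an ordinary B-tree index can seek on it, and cap the rows returned:

        -- 'smi%' prefix matches can use this index; '%smi%' substring matches cannot
        CREATE INDEX IX_Contacts_LastName ON Contacts (LastName);

        DECLARE @prefix nvarchar(50) = N'smi';
        SELECT TOP (10) FirstName, LastName
        FROM Contacts
        WHERE LastName LIKE @prefix + N'%'
        ORDER BY LastName;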

    Read the article

  • Why are there performance differences when a SQL function is called from a .Net app vs. when the same call is made from SSMS?

    - by Dan Snell
    We are having a problem in our test and dev environments with a function that at times runs quite slowly when called from a .Net application. When we call the same function directly from Management Studio it works fine. Here are the profiled differences:

        From the application:  CPU: 906   Reads: 61853   Writes: 0   Duration: 926
        From SSMS:             CPU: 15    Reads: 11243   Writes: 0   Duration: 31

    We have determined that when we recompile the function, performance returns to what we expect, and the profile when run from the application matches what we get from SSMS. It will start slowing down again at what appear to be random intervals. We have not seen this in prod, but that may be in part because everything is recompiled there on a weekly basis. What might cause this sort of behavior?
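
    One classic culprit worth ruling out (my suggestion; the post does not confirm it): the application and SSMS typically connect with different SET options, most often ARITHABORT, so each gets its own cached plan, and the application's plan may have been compiled from unrepresentative parameters. A diagnostic sketch:

        -- compare session options between the app's connection and SSMS
        SELECT session_id, program_name, arithabort, ansi_nulls, quoted_identifier
        FROM sys.dm_exec_sessions
        WHERE is_user_process = 1;

        -- evict the current plan so the next call compiles fresh
        EXEC sp_recompile N'dbo.MyFunction';   -- hypothetical object name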

    Read the article

  • How to Audit Database Activity without Performance and Scalability Issues?

    - by GotoError
    I need to audit all database activity, regardless of whether it came from an application or from someone issuing SQL via other means, so the auditing must be done at the database level. The database in question is Oracle. I looked at doing it via triggers and also via something called Fine-Grained Auditing that Oracle provides. In both cases, we turned on auditing for specific tables and specific columns. However, we found that performance really suffers with either of these methods. Since auditing is an absolute must due to data-privacy regulations, I am wondering what the best way is to do this without significant performance degradation. Oracle-specific experience would be helpful, but general practices around auditing database activity are welcome as well.
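
    For reference, a minimal Fine-Grained Auditing sketch (the schema, table, and policy names are placeholders); narrowing audit_condition and audit_column is the main lever for containing overhead, since rows outside the condition generate no audit record:

        BEGIN
          DBMS_FGA.ADD_POLICY(
            object_schema   => 'HR',               -- placeholder schema
            object_name     => 'EMPLOYEES',        -- placeholder table
            policy_name     => 'audit_salary',
            audit_column    => 'SALARY',           -- audit only when this column is touched
            audit_condition => 'SALARY > 100000',  -- and only when this predicate holds
            statement_types => 'SELECT,UPDATE');
        END;
        /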

    Read the article

  • Java - Can I get faster performance from this loop?

    - by Brad
    I am reading a book and deleting a number of words from it. My problem is that the process takes a long time, and I want to improve its performance (less time). Example:

        Vector<String> pages = new Vector<String>();         // about 1500 pages, each with about 1000 words
        Vector<String> wordsToDelete = new Vector<String>(); // about 50000 words

        for (String page : pages) {
            String pageInLowerCase = page.toLowerCase();
            for (String wordToDelete : wordsToDelete) {
                if (pageInLowerCase.contains(wordToDelete))
                    page = page.replaceAll("(?i)\\b" + wordToDelete + "\\b", "");
            }
            // Do some stuff with the final page that does not take much time.
        }

    This code takes around 3 minutes to execute. If I skip the replaceAll(...) loop I can save more than 2 minutes. So is there a way to do the same work with better performance?
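
    One way to cut the cost (a sketch; the sample words are placeholders): build a single case-insensitive alternation up front, so the pattern is compiled once instead of on every replaceAll call, and each page is scanned in one pass instead of once per word:

        import java.util.List;
        import java.util.regex.Pattern;
        import java.util.stream.Collectors;

        public class Scrubber {
            public static void main(String[] args) {
                List<String> wordsToDelete = List.of("foo", "bar", "baz"); // placeholders
                // quote each word, join with '|', compile once
                String alternation = wordsToDelete.stream()
                        .map(Pattern::quote)
                        .collect(Collectors.joining("|"));
                Pattern p = Pattern.compile("(?i)\\b(?:" + alternation + ")\\b");

                String page = "Foo went to the bar.";
                System.out.println(p.matcher(page).replaceAll(""));
            }
        }

    With 50,000 words the alternation gets large; splitting it into a few smaller patterns, or reaching for an Aho-Corasick style matcher, are the usual next steps.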

    Read the article

  • Software Architecture: Quality Attributes

    Quality is what all software engineers should strive for when building a new system or adding new functionality. Dictionary.com ambiguously defines quality as a grade of excellence. Unfortunately, quality must be defined within the context of a situation: each engineer must extract quality attributes from a project's requirements. Because quality is defined by project requirements, the meaning of quality changes from project to project.

    Software architecture factors that indicate relevance and effectiveness

    The relevance and effectiveness of an architecture can vary based on the context in which it was conceived and the quality attributes it is required to meet. Typically, when evaluating an architecture for a specific system, the following questions should be asked.

    Architectural relevance and effectiveness questions:

      • Does the architectural concept meet the needs of the system for which it was designed?
      • Out of the competing architectures for a system, which one is the most suitable?

    Consider the first question, regarding meeting the needs of the system for which it was designed. A system that answers yes to this question must meet all of its quality goals: it consistently meets or exceeds performance goals, and it satisfies all the other required system attributes derived from the system's requirements. The suitability of a system rests on several factors; for a project to be suitable, the necessary resources must be available to complete the task.

    Standard project resources:

      • Money
      • Trained staff
      • Time

    Life cycle factors that affect the system and design

    The development life cycle used on a project can drastically affect how a system's architecture is created, and can also influence its design. In a waterfall software development life cycle (SDLC), each phase must be completed before the next can begin, which leaves no room for changes to a system's architecture after the architecture phase is finished. This can lead to major system issues when the architecture turns out to be suboptimal because quality attributes were missed, for example on a project with poor requirements that made misguided architectural decisions. Once the architectural phase is complete, the concepts established in it bind the design phase, regardless of any quality attributes that were missed. If issues with the selected architectural concepts arise during the design phase, they cannot be corrected within the current project; the design must use the existing architectural concepts regardless of their missing quality properties. Those design decisions then fall through to the implementation phase, where the actual system is coded on top of the approved architectural concepts, whatever their quality.

    Conversely, projects using a more iterative or agile methodology have more flexibility to correct architectural decisions based on missing quality attributes, because each phase of the SDLC is executed more than once: issues identified in the architecture can be corrected in the next architectural pass, and the corresponding changes adjusted in the following design pass, so that by the time the project completes, the optimal architectural and design decisions are applied to the solution.

    Architecture factors that indicate functional suitability

    Systems with functional shortcomings do not provide the functionality defined by the project's driving quality attributes; in plain English, the system does not live up to what the stakeholders require of it. One way to prevent functional shortcomings is to test the project's architecture, design, and implementation against the project's driving quality attributes, to ensure none of the attributes were missed in any phase. Another is to verify that all requirements are fully articulated, so there is no room for misconception or misinterpretation by stakeholders. This helps prevent problems in interpreting the system requirements during the initial architectural concept phase, the design phase, and the implementation phase.

    Consider the applicability of other architectural models

    When considering an architectural model for a project, it is also important to weigh alternative architectural models, to ensure the selected model will deliver the required functionality and quality attributes. Recently, while discussing a project I was working on, a coworker suggested an architectural approach I had never considered. The new model allowed the same functionality as the existing model but yielded a higher-quality project because it fulfilled more quality attributes. It is always important to seek alternatives before committing to an architectural model.

    Factors used to identify high-risk components

    A high-risk component can be defined as a component that fulfills two or more quality attributes for a system. An example can be seen in a web application that uses a remote database. One high-risk component in such a system is the TCP/IP stack: it allows HTTP connections to be handled by the web server, and it also lets the server connect to the remote database server to import data into the system. This component supports both the data-assurance quality attribute and the accessibility quality attribute, because the system is available on the network. If the TCP/IP component were to fail, the web application would fail on both attributes: the web site would not be accessible, and data could not be updated as needed.

    Summary

    As stated previously, quality is what all software engineers should strive for when building a new system or adding new functionality. The quality of a system can be directly determined by how closely its implementation matches its desired quality attributes. One way to ensure a higher-quality system is to insist that all project requirements are fully articulated, so that no assumptions or misunderstandings can be made by any of the stakeholders. By doing this, a system has a better chance of becoming a high-quality system as measured by its quality attributes.

    Read the article

  • Does AMD Cool n Quiet Slow Down Your System?

    - by Software Monkey
    I discovered today that having AMD Cool'n'Quiet enabled in my BIOS appears to slow down my Windows XP SP2 system by about 29% on memory- and CPU-intensive workloads. I was wondering whether (a) anyone else has encountered this, (b) anyone can offer an explanation, and (c) there are any negatives I need to be aware of if I keep Cool'n'Quiet disabled. With some superficial testing so far, I don't notice any immediate difference with CnQ off (other than performance being what I expected from this new hardware). It ramps up the CPU fan a little as my program maxes out one core, but that's the same as with CnQ on. And when I let the system idle, the CPU fan slows down and the system is as quiet as a mouse (after years of six small fans churning like they wanted to go into orbit, it's nice to again have a system where I can hear the HDDs seeking). Bonus question: does CnQ cause issues with system stability? I ask because the reason I disabled it was that I have had a few freezes and one spontaneous reboot with my new hardware.

    Read the article

  • Abstraction, Politics, and Software Architecture

    Abstraction can be defined as a general concept and/or idea that lacks concrete details. Throughout history this type of thinking has led to an array of new ideas and innovations, as well as to increased confusion and conspiracy. Looking back through history, abstraction has been used in various forms. When I was growing up, I cannot count how many times I heard politicians say "Leave no child behind" or "No child left behind" as a major part of their campaign rhetoric on education. Their slogan is a perfect example of abstraction: it offers only a very general concept about improving our education system without mentioning how they would do it. If they did, they would be adding concrete details to their abstraction, turning it into an actual working plan for how we as a society can help children succeed in school and in life; but then they would not be using abstraction.

    By now I am sure you are wondering what abstraction has to do with software architecture. You are valid in thinking this way, but abstraction is a wonderful tool used in information technology, especially in the world of software architecture. Abstraction is one method of extracting the concepts of an idea so that it can be understood and discussed by others of varying technical abilities and backgrounds. One way in which I tend to extract my architectural design thoughts is through the use of basic diagrams that convey an idea for a system or a new feature of an existing application. This allows me to generically model an architectural design through the use of views and the Unified Modeling Language (UML). UML is a standard notation for expressing the 4+1 Architectural View Model, which consists of four views, typically drawn in UML, plus a general description of the concept being expressed by the model.

    The 4+1 Architectural View Model:

      • Logical View: models a system's end-user functionality.
      • Development View: models a system as a collection of components and connectors to illustrate how it is intended to be developed.
      • Process View: models the interaction between system components and connectors to indicate the activities of a system.
      • Physical View: models the placement of the system's components and connectors within a physical environment.

    Recently I had to use abstraction to express an idea for implementing a new security framework on an existing website. My concept would add session-based management in order to properly secure pages and allow access based on valid user credentials and the last user activity. I created a basic Process View, using UML diagrams, to communicate the basic process flow of my changes so that all of the project's stakeholders would be able to understand my idea. Additionally, I sketched a Logical View on a whiteboard while walking a few stakeholders through the process workflow, to show how end users would be affected by the new framework and to gather additional input about the design. After my Logical and Process Views were accepted, I started on a more detailed Development View to map how the system would be built, in terms of components and connectors, based on the previously defined interactions. I did not really need to create a Physical View for this idea because we were updating an existing system that was already deployed based on an existing Physical View.
What do you think about the use of abstraction in the development of software architecture? Please let me know.

    Read the article

  • Valuing "Working Software over Comprehensive Documentation"

    - by tom.spitz
    I subscribe to the tenets put forth in the Manifesto for Agile Software Development - http://agilemanifesto.org. As Oracle's chief methodologist, that might seem a self-deprecating attitude. After all, the agile manifesto tells us that we should value "individuals and interactions" over "processes and tools," and my job includes process development. I also subscribe to ideas put forth in a number of subsequent works, including Balancing Agility and Discipline: A Guide for the Perplexed (Boehm/Turner, Addison-Wesley) and Agile Project Management: Creating Innovative Products (Highsmith, Addison-Wesley). Both of these books talk about finding the right balance between "agility and discipline," or between a "predictive and adaptive" project approach. So there still seems to be a place for us in creating the Oracle Unified Method (OUM) to become the "single method framework that supports the successful implementation of every Oracle product." After all, the real idea is to apply just enough ceremony and produce just enough documentation to suit the needs of the particular project that supports an enterprise in moving toward its desired future state.

    The thing I've been struggling with, and the thing I'd like to hear from you about right now, is the prevalence of an ongoing obsession with "documents." OUM provides a comprehensive set of guidance for an iterative and incremental approach to engineering and implementing software systems. Our intent is first to support the information technology system implementation and, as necessary, support the creation of documentation. OUM, therefore, includes a supporting set of document templates. Our guidance is to employ those templates, sparingly, as needed; not to create piles of documentation that you're not gonna (sic) need. In other words, don't serve the method; make the method serve you.

    Yet there seems to be a "gimme" mentality in some circles: if you give me a sample document, or better yet a repository of samples, then I will be able to do anything cheaply and quickly. The notion is certainly appealing, and reuse can save time. Plus, documents are a lowest-common-denominator way of packaging reusable stuff. However, without sustained investment and management I've seen "reuse repositories" turn quickly into garbage heaps. So I remain a skeptic. I agree that providing document examples that promote consistency is helpful, but there may be too much emphasis on the documents themselves and not enough on creating a system that meets the evolving needs of the business. How can we shift the emphasis toward working software and away from our dependency on documents, especially on large, complex implementation projects, while still supporting the need for documentation? I'd like to hear your thoughts.

    Read the article

  • Are there any good references comparing Software Development CM best practices to IT CM best practices?

    - by dkackman
    I have spent my career on the software development side of things, and in the latter part have become more and more involved in the realm of Software Configuration Management. Now I am moving into an IT group and need to ramp up on CM practices from that standpoint. Are there any good references (books, websites, blogs, whatever) out there comparing software CM practices to IT CM practices? Basically I'm in learning mode and am trying to compare things I already know from the software development side to things on the IT side.

    Read the article

  • Looking for software to harden Windows machines

    - by MosheH
    I'm the network administrator of a small-to-medium network, and I'm looking for software (free or not) to harden Windows computers (XP and Win7), specifically standalone desktop computers that are not in a domain network. Note: the computers are completely isolated (standalone), so I can't use Active Directory group policy. Moreover, there are too many restrictions I need to apply for it to be practical to set them up manually, one by one. Basically, what I'm looking for is software that can restrict and disable access for specific user accounts on the system. For example: user John can open only one application and nothing else; he sees no icons on the desktop or start menu except the one or two applications I want to allow; he can't right-click on the desktop; the taskbar icons are not shown; there is no Folder Options; and so on. User Mary can open one specific application and copy data to one folder on the D drive. User Dan has access to all drives but cannot install software. And so on... So far I've found only the following solutions, but they all seem to miss one or more features.

    Desktop restriction software:

      1. Faronics WINSelect: seems to answer most of our needs except one feature that is very important to us, restriction per profile. WINSelect only allows restrictions that are applied system-wide; if I have multiple user accounts on the system and want different restrictions for each user, I can't do it.
      2. Deskman: same thing, no restriction per profile.
      3. Desktop Security Rx: not relevant, no Win7 support.

    The only software I've found that offers restriction per profile is "1st Security Agent", but its GUI is very complicated and not very intuitive. It's worth mentioning that I'm not looking for "Internet kiosk software", although such packages share some features with what I need. All I need is software (like http://www.faronics.com/standard/winselect/) that offers a way to restrict the Windows user interface. So if anybody knows of hardening software that allows per-user restrictions on Windows systems, it will be a big, big, big help for me! Thanks to you all.

    Read the article

  • MAPS software on colocated server

    - by Tomislav
    As a participant in the MAPS program, are we allowed to place the software installation on a colocated server, or does it have to be installed in the office itself? Also, is this scenario viable: using MAPS Windows as an OS, are we allowed to make our own software publicly reachable? (We have a MAPS-Windows-driven server, and we would like to present software we made, which is installed on it, to our customers as a trial; to reach the software, the client would have to connect to Windows by RDP.)

    Read the article

  • What is good open source software to manage my Tripp-Lite UPS (uninterruptible power supply)?

    - by Beatle
    I have a Tripp-Lite Smart150rmxl2ua UPS. The software from their website doesn't seem to work properly. I also tried apcupsd, an open source package that I'm supposed to be able to use to manage my UPS, and I had no luck with that either, since Windows 7 did not want to install and use the drivers ("incompatible"). Is there another good, working open source option out there? It sucks that Tripp-Lite doesn't supply its customers with working software.

    Read the article

  • Automatic backup software?

    - by Mehrdad
    Every backup program I've seen (even the ones that claim "continuous" protection) only backs up files periodically; say, every 5 minutes. What I'm looking for is true continuous backup software, i.e. software that can transparently back up files immediately before they are written to, so that I can be certain I have every version of a file that ever existed. Is there any software that does this?

    Read the article

  • SQL SERVER – Server Side Paging in SQL Server 2011 – A Better Alternative

    - by pinaldave
    Paging has improved considerably from SQL Server 2000 to SQL Server 2005/2008 to SQL Server 2011. Here is the blog article where I wrote about the SQL Server 2005/2008 paging method: SQL SERVER – 2005 T-SQL Paging Query Technique Comparison (OVER and ROW_NUMBER()) – CTE vs. Derived Table. One can achieve paging there using the OVER clause and the ROW_NUMBER() function. Now SQL Server 2011 has come up with new syntax for paging. Here is how one can easily achieve it:

        USE AdventureWorks2008R2
        GO
        DECLARE @RowsPerPage INT = 10, @PageNumber INT = 5
        SELECT *
        FROM Sales.SalesOrderDetail
        ORDER BY SalesOrderDetailID
        OFFSET @PageNumber*@RowsPerPage ROWS
        FETCH NEXT @RowsPerPage ROWS ONLY
        GO

    I consider this a good enhancement in terms of T-SQL. I am sure many developers have been waiting for this feature for a long time. We will consider the performance differences in future posts.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, SQL, SQL Authority, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
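
    For comparison, a sketch of the older ROW_NUMBER() approach from the 2005/2008 post referenced above, returning the same slice:

        DECLARE @RowsPerPage INT = 10, @PageNumber INT = 5;
        WITH Ordered AS (
            SELECT *, ROW_NUMBER() OVER (ORDER BY SalesOrderDetailID) AS rn
            FROM Sales.SalesOrderDetail
        )
        SELECT *
        FROM Ordered
        WHERE rn BETWEEN @PageNumber * @RowsPerPage + 1
                     AND (@PageNumber + 1) * @RowsPerPage;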

    Read the article

  • ADNOC talks about 50x increase in performance

    - by KLaker
    If you are still wondering how Exadata can revolutionise your business, I recommend watching this great video, which was recorded at this year's OpenWorld. First a little background... The Abu Dhabi National Oil Company for Distribution (ADNOC) is an integrated energy company founded in 1973. ADNOC Distribution markets and distributes petroleum products and services within the United Arab Emirates and internationally. As one of the largest and most innovative government-owned petroleum companies in the Arab Gulf, ADNOC Distribution is renowned and respected for the exceptional quality and reliability of its products and services. Its five corporate divisions include more than 200 filling stations (a number growing at 8% annually), more than 150 convenience stores, 10 vehicle inspection stations, as well as wholesale and retail sales of bulk fuel, gas, oil, diesel, and lubricants.

    ADNOC selected Oracle Exadata Database Machine after extensive research because it provided a single platform that can run mixed workloads in a single unified machine: "We chose Oracle Exadata Database Machine because it offered a fully integrated and highly engineered system that was ready to deploy. With our infrastructure running all the same technology, we can operate any type of Oracle Database without restrictions and be prepared for business growth," said Ali Abdul Aziz Al-Ali, IT division manager, ADNOC Distribution. "...we could consolidate our transaction processing and business intelligence onto one platform. Competing solutions are just not capable of doing that," added Awad Ahmed Ali El-Sidiq, Senior Database Administrator, ADNOC Distribution.

    In this new video, Awad Ahmed Ali El-Sidiq talks about the impact that Exadata has had on his team and the whole business. ADNOC is using our engineered systems to drive and manage all their workloads: from transaction systems to payment systems to the data warehouse to the BI environment. A true Disk-to-Dashboard revolution using Engineered Systems. This engineered approach is delivering a 50x improvement in performance, with some queries running 100x faster! The IT team has even revolutionised some of their data-warehouse processes with the help of Exadata, and jobs that used to take over 4 hours now run in a few minutes. To watch the video, click on the image below, which will take you to our Oracle YouTube page (if the link does not work, click here: http://www.youtube.com/watch?v=zcRpxc6u5Ic).

    Now that queries are running 100x faster and jobs are completing in minutes, not hours, what is next for the IT team at ADNOC? Like many of our customers, ADNOC is now looking to take advantage of big data to better align their business operations with customer behaviour and customer insights. To help deliver this next level of insight, the IT team is looking at new features in Oracle Database 12c, such as the new in-memory feature, to deliver even more performance gains. The great news is that Awad Ahmed Ali El-Sidiq was awarded DBA of the Year - EMEA within our Data Warehouse Global Leaders programme; you can see the badge for this award pop up at the start of the video. Well done to everyone at ADNOC, and thanks for spending the time with us at OOW to create this great video.

    Read the article
