Search Results

Search found 1466 results on 59 pages for 'reliable'.

Page 2/59 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Reliable Storage Systems for SQL Server

    By validating the IO path before commissioning the production database system, and performing ongoing validation through page checksums and DBCC checks, you can hopefully avoid data corruption altogether, or at least nip it in the bud. If corruption occurs, then you have to take the right decisions fast to deal with it. Rod Colledge explains how a pessimistic mindset can be an advantage.
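    The two ongoing checks the summary mentions - page checksums and DBCC consistency checks - can be scripted. Below is a minimal sketch, assuming pyodbc and a hypothetical connection string and database name; it illustrates the idea rather than reproducing anything from the article.

    ```python
    # Sketch: enable page checksums and run an integrity check on one database.
    # pyodbc is assumed; the connection string and database name are placeholders.
    import pyodbc

    CONN_STR = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;Trusted_Connection=yes"
    DATABASE = "Sales"  # hypothetical database name

    conn = pyodbc.connect(CONN_STR, autocommit=True)
    cur = conn.cursor()

    # Page checksums: every page written from now on carries a checksum that is
    # verified on read, surfacing IO-path corruption early.
    cur.execute(f"ALTER DATABASE [{DATABASE}] SET PAGE_VERIFY CHECKSUM")

    # DBCC CHECKDB reads and validates every allocated page, catching corruption
    # that ordinary query activity has not touched yet.
    cur.execute(f"DBCC CHECKDB ([{DATABASE}]) WITH NO_INFOMSGS")
    while cur.nextset():
        pass  # drain any remaining result sets

    conn.close()
    ```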

    Read the article

  • Join Hands With a Reliable Web Development Company

    A website is basically a tool for communicating with your online audience. It is a multi-functional, interactive platform for an organization or business. Your online presence gives you a way to spread the word about your services and products over the Internet.

    Read the article

  • The architecture and technologies to use for a secure, fast, reliable and easily scalable web application

    - by DSoul
    ^ For the actual questions, skip to the lists below.

    I understand that this is a vague topic, but please hear me out before you turn the other way and disregard me. I am currently doing research for a web application (I don't know if "application" is the correct word for it, but I will proceed with that for now) that one day might need to be everything mentioned in the title. I am bound by nothing: every language, OS and framework is acceptable, but only if it proves its usefulness. If you are going to say that scalability and speed depend on the code I write for this application, then I agree, but I am just trying to find something that won't stand in my way later on. I have done quite a bit of reading on this subject, but I still don't have a clear picture of what suits my needs, so I come to you, StackOverflow, for directions. I know you must all be wondering what I'm building, but I assure you that it doesn't matter. I have heard of the twelve-factor app, though; if you have any similar guidelines to suggest, please go ahead. To keep your answers as open as possible, I'm not going to describe my experience with anything mentioned in this question.

    ^ Skippers, start here.

    First off, the weights of the requirements are probably something like this (on a scale of 10):

    - Security - 10
    - Speed - 5
    - Reliability (concurrency) - 7.5
    - Scalability - 10

    Speed and concurrency are not a top priority, in the sense that the program can be CPU-intensive (and therefore slow) and only accept a modest number of concurrent users, but both of these factors must be improvable by scaling the system. Anyway, here are my questions:

    1. How many layers should the application have so that it is future-proof and can best fulfill the aforementioned requirements? For now, what I have in mind is the most common version: a completely separated front end, which might be a web page, an MMI application, or both; some middleware handling communication between the front end and the back end, probably a server that talks to the front end via HTTP (how it should talk to the back end probably depends on the back end); and the back end itself, something that handles data through resources like a database and does various computations with the data. The back end, as the highest-priority part of the software, must be easy to spread across multiple computers later on and have no known security holes. I think ideally the middleware should send a request to a queue, from which one of the back-end processes takes the request, chops it up into smaller parts, and puts those parts back onto the same queue, after which the parts are handled by other back-end processes - something map-reduce-y, so to say (a rough sketch of this queue idea follows after the question).

    2. What frameworks, languages and so on should these layers use? The technologies used here are not that important at this moment, so you can ignore this part for now. I've been pointed to node.js for the middleware part. Do you know any better alternatives, or have any reasons why I should (or should not) use node.js for this particular job? I actually have no good idea what to use here - there are too many options out there - so please direct me. This part (and the back end too, I think) depends a lot on the OS, so suggest OSs alongside the technologies/frameworks. Initially, all computers hosting the back end (or one, for starters) are going to be virtual machines.

    Please give suggestions on any part of the question that you feel you have comprehensive knowledge of or experience with, and point out if any part of the current set-up means instant (or even distant) failure, or if I missed a very important aspect to consider. I'm not looking for a definitive answer on how to achieve my goals, because there certainly isn't one - I haven't given you all the required information. I'm just looking for recommendations and directions on what to look into. Also, bear in mind that this isn't something I have to get done quickly, to sell and let it be rewritten by the new owner (which, I've been told multiple times, is what I should aim for). I have all the time in the world, and I really just want to learn by doing something really high-end. Also, excuse me if my language isn't the best; I'm not a native speaker. Thanks in advance to anyone who takes the time to help me out here. PS. When I do come up with a good architecture/design for this project, I will certainly make it an open project and keep you up to date with its development. The very same question got closed on SO for obvious reasons, but could you still help me?
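    As an illustration of the queue idea in question 1, here is a minimal, single-process sketch in Python (standard library only) of the pattern described: the middleware enqueues a request, and back-end workers chop it into parts that go back onto the same queue. All names and the payload are made up, and a real deployment would use a networked broker rather than an in-process queue.

    ```python
    # Sketch of the "middleware enqueues, back-end workers split and re-enqueue" idea.
    # Standard library only; a real system would use a networked broker instead.
    import queue
    import threading

    work = queue.Queue()

    def handle_request(payload):
        """Middleware: accept a front-end request and put it on the queue untouched."""
        work.put(("request", payload))

    def worker():
        while True:
            kind, payload = work.get()
            if kind == "request":
                # Chop the request into smaller parts and put them back on the
                # same queue - the "map-reduce-y" step from the question.
                for part in payload.split():
                    work.put(("part", part))
            else:
                print(f"processed part: {payload}")
            work.task_done()

    for _ in range(4):  # a small pool of back-end workers (threads here)
        threading.Thread(target=worker, daemon=True).start()

    handle_request("resize encode upload notify")
    work.join()  # block until the request and every part derived from it are done
    ```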

    Read the article

  • A more reliable and more flexible sp_MSforeachdb

    - by AaronBertrand
    I've complained about sp_MSforeachdb before. In part of my "Bad Habits to Kick" series in 2009-10, I described how I worked around its sporadic inability to actually process all of the databases on an instance: http://sqlblog.com/blogs/aaron_bertrand/archive/2010/02/08/bad-habits-to-kick-relying-on-undocumented-behavior.aspx I lumped this in a "Bad Habit" category of relying on undocumented behavior, since - while the procedure does have rampant usage - it is, in fact, both undocumented and unsupported....(read more)

    Read the article

  • Canon Inkjet Cartridges - Are They Truly Reliable?

    Indeed, Canon inkjet cartridges are very well known for their dependability, color accuracy, and speed. They also maintain printed image quality no matter how long you use the printer. That simply m... [Author: Obinna Heche - Computers and Internet - April 09, 2010]

    Read the article

  • Making a more reliable and flexible sp_MSforeachdb

    While the system procedure sp_MSforeachdb is neither documented nor officially supported, most SQL Server professionals have used it at one time or another. This is typically for ad hoc maintenance tasks, but many people (myself included) have used this type of looping activity in permanent routines. Sadly, I have discovered instances where, under heavy load and/or with a large number of databases, the procedure can actually skip multiple catalogs with no error or warning message. Since this situation is not easily reproducible, and since Microsoft typically has no interest in fixing unsupported objects, this may be happening in your environment right now.
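    One way to sidestep the undocumented procedure entirely is to drive the loop from a client, enumerating sys.databases yourself so a silently skipped catalog cannot go unnoticed. The sketch below is illustrative only - it assumes pyodbc, a placeholder connection string, and DBCC CHECKDB as the per-database task - and is not the replacement procedure the article describes.

    ```python
    # Sketch: run one maintenance command against every online database without
    # sp_MSforeachdb. Connection string and the per-database command are placeholders.
    import pyodbc

    CONN_STR = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;Trusted_Connection=yes"
    PER_DB_COMMAND = "DBCC CHECKDB WITH NO_INFOMSGS"  # example ad hoc task

    conn = pyodbc.connect(CONN_STR, autocommit=True)
    cur = conn.cursor()

    # Build the full list up front, so a skipped catalog is impossible to miss.
    cur.execute("SELECT name FROM sys.databases WHERE state_desc = 'ONLINE'")
    databases = [row.name for row in cur.fetchall()]

    for db in databases:
        cur.execute(f"USE [{db}]; {PER_DB_COMMAND}")
        while cur.nextset():
            pass  # drain any result sets the command produces
        print(f"completed: {db}")

    conn.close()
    ```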

    Read the article

  • Reliable Niche Research is Essential

    Completing market and keyword research is surely a critical activity for anybody who works online. If you get the analysis wrong and your website is concentrating on the wrong keywords, you will not see any benefit from search engine optimization at all.

    Read the article

  • Is an NTBackup of a MySQL data directory reliable?

    - by Justin Dearing
    This question was asked on the MySQL forums in 2004 with no answers. I'm installing MySQL 5.0.x on a Windows 2003 Server for use with Drupal. I began to configure the backup with mysqldump when it occurred to me that an ntbackup taken using shadow copying should be reliable enough for backing up the database. Is there any flaw in my logic?
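    For comparison, here is a minimal sketch of the mysqldump route the poster started with, driven from Python; the account, password and output path are placeholders. A logical dump taken with --single-transaction is consistent for InnoDB tables regardless of ongoing writes, which is the usual argument for it over a file-level copy.

    ```python
    # Sketch: a logical backup with mysqldump; account, password and path are placeholders.
    import subprocess
    from datetime import datetime

    outfile = f"C:/backups/drupal-{datetime.now():%Y%m%d-%H%M}.sql"

    with open(outfile, "w") as dump:
        subprocess.run(
            [
                "mysqldump",
                "--user=backup",         # placeholder account
                "--password=secret",     # placeholder; an option file is safer in practice
                "--single-transaction",  # consistent snapshot for InnoDB tables only
                "--all-databases",
            ],
            stdout=dump,
            check=True,
        )
    ```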

    Read the article

  • Reliable file copy (move) process - mostly Unix/Linux

    - by mfinni
    Short story: we need a rock-solid, reliable file-mover process. We have source directories, often still being written to, that we need to move files from. The files come in pairs - a big binary and a small XML index - and a CTL file defines these file bundles. A process operates on the files once they are in the destination directory and gets rid of them when it's done. Would rsync do the best job, or do we need something more complex?

    Long story: we have multiple sources to pull from. One set of directories is on a Windows machine (which does have Cygwin and an SSH daemon), and a whole pile of directories are on a set of SFTP servers (most of these are also Windows). Our destinations are a list of directories on AIX servers. We used to use a very reliable Perl script on the Windows/Cygwin machine when it was our only source, but we're working on getting rid of that machine, and there are other sources now - the SFTP servers - on which we cannot presently run our own scripts. For security reasons, we can't run the copy jobs on our AIX servers; they have no access to the source servers. We currently have a homegrown Java program on a Linux machine that uses SFTP to pull from the various new SFTP source directories, copies to a local tmp directory, verifies that everything is present, copies that to the AIX machines, and then deletes the files from the source. However, we keep finding bugs and poorly-handled error cases, and none of us are Java experts, so fixing or improving it may be difficult.

    Concerns for us: with a remote source (SFTP), will rsync leave alone any file still being written? Some of these files are large. From reading the docs, it seems like rsync is very good about not removing the source until the destination is reliably written. Does anyone have experience confirming or disproving this?

    Additional info: we also care about the ingestion process that operates on the files once they are in the destination directory. We don't want it operating on files while we are still copying them; it waits until the small XML index file is present, so our current copy jobs are supposed to copy the XML file last (a sketch of that ordering follows below). Sometimes the network has problems, sometimes the SFTP source servers crap out on us, and sometimes we typo the config files so a destination directory doesn't exist. We never want to lose a file to that sort of error, and we need good logs.

    If you were presented with this, would you just script up some rsync? Or would you build or buy a tool - and if so, what would it be, or what technologies would it use? I (and others on my team) are decent with Perl.
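    Whatever transport ends up winning, the one invariant described above - the consumer only acts once the small XML index appears, so the index must land last and the source must only be removed after both copies exist - is easy to encode. A rough sketch with hypothetical paths, using shutil as a stand-in for the real SFTP/rsync transfer:

    ```python
    # Sketch: move a binary/XML pair so the XML index only ever appears last.
    # Paths are hypothetical; shutil stands in for the real SFTP/rsync transport.
    import shutil
    from pathlib import Path

    def move_bundle(binary: Path, index_xml: Path, dest: Path) -> None:
        # 1. Copy the large binary first, under a temporary name, so the consumer
        #    never sees a half-written file.
        tmp_binary = dest / (binary.name + ".part")
        shutil.copy2(binary, tmp_binary)
        tmp_binary.rename(dest / binary.name)

        # 2. Only now copy the small XML index; its appearance signals that the
        #    bundle is complete and may be ingested.
        shutil.copy2(index_xml, dest / index_xml.name)

        # 3. Remove the sources only after both destination files exist.
        if (dest / binary.name).exists() and (dest / index_xml.name).exists():
            binary.unlink()
            index_xml.unlink()
    ```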

    Read the article

  • simple and reliable centralized logging inside Amazon VPC

    - by Nakedible
    I need to set up centralized logging for a set of servers (10-20) in an Amazon VPC. The logging should be set up so that no log messages are lost if any single server goes offline, or if an entire availability zone goes offline. It should tolerate packet loss and other normal network conditions without losing or duplicating messages. It should store the messages durably, at minimum on two different EBS volumes in two availability zones, though S3 is a good place as well. It should also be realtime, so that messages arrive in two different availability zones within seconds of their generation. I also need to sync logfiles not generated via syslog, so a syslog-only centralized logging solution would not fulfill all the needs, although I guess that limitation could be worked around. I have already reviewed a few solutions, listed here:

    Flume to Flume to S3: I could set up two logservers as Flume hosts which would store log messages either locally or in S3, and configure all the servers with Flume to send all messages to both servers, using the end-to-end reliability options. That way the loss of a single server shouldn't cause lost messages, and all messages would arrive in two availability zones in realtime. However, the logs of the two servers would need to be joined, deduplicating all the messages delivered to both. This could be done by adding a unique id to each message on the sending side and then running manual deduplication passes over the logfiles (a small sketch of this id-tagging idea follows below). I haven't found an easy solution to the duplication problem.

    Logstash to Logstash to ElasticSearch: I could install Logstash on the servers and have them deliver to a central server via AMQP, with the durability options turned on. However, for this to work I would need to use one of the clustering-capable AMQP implementations, or fan out the delivery just as in the Flume case. AMQP seems to be yet another moving part with several implementations and no real guidance on what works best for this sort of setup. And I'm not entirely convinced that I could get actual end-to-end durability from Logstash to ElasticSearch, assuming crashing servers in between. The fan-out solutions run into the deduplication problem again. The best solution that would seem to handle all the cases would be Beetle, which seems to provide high availability and deduplication via a Redis store. However, I haven't seen any guidance on how to set this up with Logstash, and Redis is one more moving part for something that shouldn't be terribly difficult.

    Logstash to ElasticSearch: I could run Logstash on all the servers, keep all the filtering and processing rules in the servers themselves, and just have them log directly to a remote ElasticSearch server. I think this should give me reliable logging, and I can use the ElasticSearch clustering features to share the database transparently. However, I am not sure whether the setup actually survives Logstash restarts and intermittent network problems without duplicating messages in a failover case or similar. But this approach sounds pretty promising.

    rsync: I could just rsync all the relevant log files to two different servers. The reliability aspect should be perfect here, as the files should be identical to the source files after a sync is done. However, doing an rsync several times per second doesn't sound fun. Also, I need the logs to be untamperable after they have been sent, so the rsyncs would need to be in append-only mode, and log rotations mess things up unless I'm careful.

    rsyslog with RELP: I could set up rsyslog to send messages to two remote hosts via RELP and have a local queue to store the messages. There is the deduplication problem again, and RELP itself might also duplicate some messages. However, this would only handle the things that log via syslog.

    None of these solutions seem terribly good, and they still have many unknowns, so I am asking for more information from people who have set up centralized reliable logging as to what the best tools are to achieve that goal.
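    The deduplication step that keeps recurring above - tag every message with a unique id on the sending side, drop repeats on the receiving side - is straightforward to sketch. The snippet below is illustrative only (standard library, in-memory seen-set); a durable store such as the Redis instance Beetle is described as using would replace the set in practice.

    ```python
    # Sketch: sender-side unique ids plus receiver-side deduplication.
    # In-memory only; a real deployment would persist the seen ids (e.g. in Redis).
    import json
    import uuid

    def tag(message: str) -> str:
        """Sender: wrap a log line with a globally unique id before fanning it out."""
        return json.dumps({"id": str(uuid.uuid4()), "msg": message})

    class Deduplicator:
        """Receiver: accept each id exactly once, however many copies arrive."""
        def __init__(self):
            self.seen = set()

        def accept(self, wire_record: str):
            record = json.loads(wire_record)
            if record["id"] in self.seen:
                return None              # duplicate delivery from the second path
            self.seen.add(record["id"])
            return record["msg"]

    # Both delivery paths carry the same tagged record; only one copy survives.
    dedup = Deduplicator()
    record = tag("disk sda is at 91% capacity")
    print(dedup.accept(record))  # the message
    print(dedup.accept(record))  # None - duplicate dropped
    ```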

    Read the article

  • Memory upgrade - is the site reliable?

    - by Yuval
    Hi, I have a late-2008 unibody MacBook with 2 GB of RAM, and I am looking to upgrade to 4 GB. I looked about a month ago at Other World Computing's 4 GB upgrade kit and remember it being around $80. When I finally went to buy it today, it had gone up to almost $100. I found another site, memoryupgrade.pro, which calls itself "Pro memory upgrade" and looks legitimate - it sells the memory for around $80 under its own brand. The only thing is, I haven't been able to find any reviews of it, and I'm not sure whether it is actually reliable. Does anybody have any experience with this site? Does anybody have any other suggestions for buying MacBook memory? I have friends who bought from OWC and were happy; should I just spend the extra $30 (including shipping) and buy from them? Thanks!

    Read the article

  • How to setup a reliable SMTP server on Windows Server 2008 R2

    - by everwicked
    I know there are SMTP services out there which you can pay to send e-mails with, but surely it's not that difficult to set up one of your own. How can I set up an SMTP server on Windows Server 2008 R2 that is:

    - Secure; only authorized users/hostnames/etc. can send mail
    - Reliable; e-mails don't get lost
    - Not treated as spam; when e-mails are received by, say, Gmail/Outlook/Hotmail, they don't go straight to junk

    I understand this last point depends both on the server and e-mail headers AND on e-mail content - I'm looking to safeguard the server part. Thanks!

    Read the article

  • How reliable is HDD SMART data?

    - by andahlst
    Based on SMART data, you can judge the health of a disk - at least that is the idea. If I, for instance, run sudo smartctl -H /dev/sda on my ArchLinux laptop, it says that the hard drive passed the self-tests and that it should be "healthy" based on this. My question is how reliable this information is, or more specifically:

    - If, according to the SMART data, the disk is healthy, what are the odds of it suddenly failing anyway? This assumes the failure is not due to some catastrophic event that could not possibly have been predicted, such as the laptop falling on the floor and the drive heads hitting the platters.
    - If the SMART data says the disk is not in good shape, what are the odds of it failing within some amount of time? Is it possible that there will be false positives, and how common are they?

    Of course, I keep backups no matter what. I am mostly curious.
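    For what it is worth, the same check can be scripted and trended over time; watching attributes such as the reallocated-sector count is generally considered more informative than the single PASSED/FAILED verdict. A rough sketch, where the device path is a placeholder and the parsing is deliberately naive:

    ```python
    # Sketch: read the overall SMART verdict and one attribute via smartctl.
    # The device path is a placeholder and the text parsing is deliberately naive.
    import subprocess

    DEVICE = "/dev/sda"  # placeholder

    out = subprocess.run(
        ["smartctl", "-H", "-A", DEVICE],
        capture_output=True, text=True, check=False,  # smartctl uses nonzero exit codes freely
    ).stdout

    healthy = "PASSED" in out  # the same verdict `smartctl -H` prints interactively

    reallocated = None
    for line in out.splitlines():
        if "Reallocated_Sector_Ct" in line:
            reallocated = int(line.split()[-1])  # raw value is the last column

    print(f"overall verdict healthy: {healthy}, reallocated sectors: {reallocated}")
    ```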

    Read the article

  • Recommendations for stable, reliable flash drives

    - by Josh Kelley
    We're looking to purchase some flash drives for use in some embedded devices. Most of our requirements aren't too different from the generic "good, fast" flash drive: reliability is very important, speed is good, and the case shouldn't be too large so that the drive will fit (so no OCZ Throttles). Consistency is also a major priority; we'd like to be able to buy more or less the same product a year or two from now without having to worry about the manufacturer swapping drive components for less reliable or slower parts. (We've been burned already by our previous manufacturer doing this.) Any recommendations, especially regarding consistency? I can read Ars Technica to get an overview of current models, but which models are consistently good?

    Read the article

  • Will NTP work in an isolated network (in the absence of a reliable time source)

    - by Anand
    Hi, I am investigating a typical NTP problem. The setup is as follows: FreeBSD is being compiled and run on OpenSolaris. The config file on OpenSolaris lists a Linux machine and another OpenSolaris machine as servers, and these server machines are syncing time only with themselves (local clock). The server machines in this case have NTP running on them as well. Within a few minutes of starting the NTP daemon, the client starts syncing time with itself only and remains in that state afterwards; all servers are discarded and no time syncing is done with them. My question is: is there any fundamental problem with this setup? Will NTP work in such an isolated network that has no direct or indirect connection to a reliable internet time source?

    Read the article

  • Most reliable split character

    - by JL
    Update: if you were forced to use a single char with a split method, which char would be the most reliable? Definition of reliable: a split character that is not part of the individual substrings being split.
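    A common answer is one of the ASCII control characters that were set aside for exactly this purpose, such as the unit separator (0x1F), on the assumption that it never occurs in ordinary text. A small Python illustration of that assumption:

    ```python
    # Sketch: use the ASCII "unit separator" control character as the delimiter,
    # assuming it never appears inside the substrings themselves.
    US = "\x1f"  # ASCII 31, unit separator

    fields = ["Smith, John", "123 Main St | Apt 4", "NY"]
    record = US.join(fields)          # safe even though the data contains ',' and '|'
    assert record.split(US) == fields
    print(record.split(US))
    ```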

    Read the article

  • I want to "image" 40+ laptops quickly... I welcome suggestions on reliable software

    - by Joldfield101
    I often have batches of laptops/PCs to re-image and have tried various methods, but each of them has been problematic and often takes more time to troubleshoot than it would have taken to image the machines individually! For example, I have tried to use Ghost - I installed GhostCast Server on my laptop, but the clients never seem to boot to LAN successfully, or it takes an hour to get everything sorted (drivers, LAN, DHCP, etc.). I want a reliable tool that makes imaging quick and easy - and I don't mind paying for it if it's going to work (but obviously free is always good!)

    Read the article

  • most reliable linux terminal app / general procedures for process stability

    - by intuited
    I've been using konsole (KDE 4.2) for a while now but it crashed recently. Konsole is efficiently designed to use one instance for all of the windows for your entire X session. Extra-unfortunately, because of this ingenuity the crash brought down all the humpty-dumptys and their bashes and their bashes' applications and all the begattens' begattens all the way down to Jebodiah Springfield into one big flat nonexistent omelette. The fact that this app is capable of crashing under any circumstances is pretty disappointing. Although KDE 4.2 is not expected to be entirely stable -- and yes, I know, I should update my distro -- it's still a no-sell for me, since if at all possible, this sort of thing Shouldn't Happen to something that's likely to be a foundation for an entire working environment. Maybe this is arrogant and unrealistic, but if it's possible to have something more stable, I want it. So other than running under screen -- which is fun, nifty, and thus far flawless in its reliability, but which has some issues with not understanding certain keycodes -- I'm looking for ways to improve my environment's reliability. The most obvious strategy is to cast about for a more reliable console app. A standard featureset -- which to me includes tabbed windows, Unicode support, and a decent level of keyboard shortcut configuration -- is pretty much essential. I'm currently running gnome-terminal and roxterm, both of which have acceptable featuresets (pretty much identical, actually; I think rox is actually the superset), and neither of which have provided me with extensive, objective reliability data. Not that they were expected to. Other strategies are also welcome. Were I responding to this question I would perhaps suggest backgrounding critical tasks with & and/or disowning them so they don't come down with the global pandemic. And stuff like that.

    Read the article

  • Reliable router with good VPN and WAN Throughput [closed]

    - by Asdande
    I have two Cisco RV180 VPN routers. These routers are giving me lots of problems: webpages won't load correctly or are slow to load, plus many other issues. I have several cases pending with Cisco, and I give up on these routers. I would like to know if you can recommend a reliable router for our three branches (NY - main, SC and FL). In NY, the main office, we have 55 users; in the SC branch, 6 users; in Florida we have only 1 (it will grow soon). I need a router that supports:

    - 3 site-to-site VPN connections
    - VPN throughput of at least 40-50 Mbps
    - WAN throughput of at least 100 Mbps
    - A PPTP server for at least 5 PPTP users
    - Web filtering - all users need access to the internet
    - A good firewall
    - Port forwarding for an FTP server, able to show the public IPs of FTP users (the RV180 cannot do that; it just shows me the router's LAN interface IP - I opened a case with Cisco, now escalated to level 2, still no answer or workaround)
    - Dual WAN ports for load balancing or backup internet
    - Gigabit WAN/LAN ports
    - Price in the $400-$500 range

    I was thinking of the TP-LINK TL-ER6120 or TL-ER6020, according to the review on smallnetbuilder.com: http://www.smallnetbuilder.com/lanwan/lanwan-reviews/31983-tp-link-tl-er6020-safestream-gigabit-dual-wan-vpn-router-reviewed. But I don't want to make another mistake as I did when I bought the Cisco RV180. Thank you in advance.

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >