Search Results

Search found 42453 results on 1699 pages for 'question'.

Page 307/1699 | < Previous Page | 303 304 305 306 307 308 309 310 311 312 313 314  | Next Page >

  • Embedded Model Designing -- top down or bottom up?

    - by Jeff
    I am trying to learn RoR and develop a webapp. I have a few models I have thought of for this app, and they are fairly embedded. For example (please excuse my lack of RoR syntax):

    Model: textbook
      title:string
      type:string
      has_many :chapters

    Model: chapter
      content:text
      has_one :review_section

    Model: review_section
      title:string
      has_many :questions
      has_many :answers, through: :questions

    Model: questions ...

    Model: answers ...

    My question is, with the example I gave, should I start at the top model (textbook) or the bottom-most (answers)?

    Read the article

  • Calculating 2D (screen) coordinates from 3D positions in XNA 4.0

    - by NDraskovic
    I have a program that draws some items to the scene by loading their positions from a file. Now I want to place a Ray at the same location where the items are drawn. So my question is: how can I calculate the position of the ray (its 2D components) from the 3D coordinates of each particular item? The items don't move anywhere, so once they are placed they stay until the end of the program's execution. Thanks.
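    For reference, the math usually behind this: XNA's Viewport.Project helper takes a world-space Vector3 plus the projection, view and world matrices and returns screen coordinates by doing exactly the transform sketched below. This is a minimal NumPy sketch of the world -> view -> projection -> viewport pipeline; the matrices, viewport size and sample point are illustrative assumptions, not values from the question.

        import math
        import numpy as np

        def perspective(fov_y, aspect, z_near, z_far):
            # Right-handed perspective projection (same convention XNA uses).
            f = 1.0 / math.tan(fov_y / 2.0)
            return np.array([
                [f / aspect, 0.0, 0.0, 0.0],
                [0.0, f, 0.0, 0.0],
                [0.0, 0.0, z_far / (z_near - z_far), z_near * z_far / (z_near - z_far)],
                [0.0, 0.0, -1.0, 0.0],
            ])

        def project_to_screen(world_pos, view, proj, width, height):
            clip = proj @ view @ np.append(world_pos, 1.0)   # homogeneous clip space
            ndc = clip[:3] / clip[3]                         # perspective divide
            x = (ndc[0] * 0.5 + 0.5) * width                 # [-1, 1] -> [0, width]
            y = (1.0 - (ndc[1] * 0.5 + 0.5)) * height        # flip: screen y grows downward
            return x, y

        view = np.eye(4)                                     # assume camera at origin looking down -Z
        proj = perspective(math.radians(60), 800 / 600, 0.1, 100.0)
        print(project_to_screen(np.array([1.0, 0.5, -5.0]), view, proj, 800, 600))

    The resulting (x, y) is where a 2D marker for that item should be drawn; XNA's Viewport.Unproject does the reverse when you need a picking ray from a screen position.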

    Read the article

  • Why Is Vertical Monitor Resolution so Often a Multiple of 360?

    - by Jason Fitzpatrick
    Stare at a list of monitor resolutions long enough and you might notice a pattern: many of the vertical resolutions, especially those of gaming or multimedia displays, are multiples of 360 (720, 1080, 1440, etc.). But why exactly is this the case? Is it arbitrary or is there something more at work? Today's Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    The Question

    SuperUser reader Trojandestroy recently noticed something about his display interface and needs answers: YouTube recently added 1440p functionality, and for the first time I realized that all (most?) vertical resolutions are multiples of 360. Is this just because the smallest common resolution is 480×360, and it's convenient to use multiples? (Not doubting that multiples are convenient.) And/or was that the first viewable/conveniently sized resolution, so hardware (TVs, monitors, etc.) grew with 360 in mind? Taking it further, why not have a square resolution? Or something else unusual? (Assuming it's usual enough that it's viewable.) Is it merely a pleasing-the-eye situation? So why have the display be a multiple of 360?

    The Answer

    SuperUser contributor User26129 offers us not just an answer as to why the numerical pattern exists but a history of screen design in the process:

    Alright, there are a couple of questions and a lot of factors here. Resolutions are a really interesting field of psychooptics meeting marketing.

    First of all, why are the vertical resolutions on YouTube multiples of 360? This is of course just arbitrary; there is no real reason this is the case. The reason is that resolution here is not the limiting factor for YouTube videos – bandwidth is. YouTube has to re-encode every video that is uploaded a couple of times, and tries to use as few re-encoding formats/bitrates/resolutions as possible to cover all the different use cases. For low-res mobile devices they have 360×240, for higher-res mobile there's 480p, and for the computer crowd there is 360p for 2xISDN/multiuser landlines, 720p for DSL and 1080p for higher-speed internet. For a while there were some codecs other than h.264, but these are slowly being phased out, with h.264 having essentially 'won' the format war and all computers being outfitted with hardware codecs for this.

    Now, there is some interesting psychooptics going on as well. As I said: resolution isn't everything. 720p with really strong compression can and will look worse than 240p at a very high bitrate. But on the other side of the spectrum: throwing more bits at a certain resolution doesn't magically make it better beyond some point. There is an optimum here, which of course depends on both resolution and codec. In general: the optimal bitrate is actually proportional to the resolution.

    So the next question is: what kind of resolution steps make sense? Apparently, people need about a 2x increase in resolution to really see (and prefer) a marked difference. Anything less than that and many people will simply not bother with the higher bitrates; they'd rather use their bandwidth for other stuff. This was researched quite a long time ago and is the big reason why we went from 720×576 (415 kpix) to 1280×720 (922 kpix), and then again from 1280×720 to 1920×1080 (2 MP). Stuff in between is not a viable optimization target. And again, 1440p is about 3.7 MP, another ~2x increase over HD. You will see a difference there. 4K is the next step after that.
    Next up is that magical number of 360 vertical pixels. Actually, the magic number is 120 or 128. All resolutions are some kind of multiple of 120 pixels nowadays; back in the day they used to be multiples of 128. This is something that just grew out of the LCD panel industry. LCD panels use what are called line drivers, little chips that sit on the sides of your LCD screen that control how bright each subpixel is. Because historically, for reasons I don't really know for sure, probably memory constraints, these multiple-of-128 or multiple-of-120 resolutions already existed, the industry standard line drivers became drivers with 360 line outputs (1 per subpixel). If you were to tear down your 1920×1080 screen, I would put money on there being 16 line drivers on the top/bottom and 9 on one of the sides. Oh hey, that's 16:9. Guess how obvious that resolution choice was back when 16:9 was 'invented'.

    Then there's the issue of aspect ratio. This is really a completely different field of psychology, but it boils down to: historically, people have believed and measured that we have a sort of wide-screen view of the world. Naturally, people believed that the most natural representation of data on a screen would be in a wide-screen view, and this is where the great anamorphic revolution of the '60s came from, when films were shot in ever wider aspect ratios. Since then, this kind of knowledge has been refined and mostly debunked. Yes, we do have a wide-angle view, but the area where we can actually see sharply – the center of our vision – is fairly round. Slightly elliptical and squashed, but not really more than about 4:3 or 3:2. So for detailed viewing, for instance for reading text on a screen, you can utilize most of your detail vision by employing an almost-square screen, a bit like the screens up to the mid-2000s.

    However, again this is not how marketing took it. Computers in ye olden days were used mostly for productivity and detailed work, but as they commoditized and the computer evolved into a media consumption device, people didn't necessarily use their computer for work most of the time. They used it to watch media content: movies, television series and photos. And for that kind of viewing, you get the most 'immersion factor' if the screen fills as much of your vision (including your peripheral vision) as possible. Which means widescreen.

    But there's more marketing still. When detail work was still an important factor, people cared about resolution. As many pixels as possible on the screen. SGI was selling almost-4K CRTs! The most optimal way to get the maximum number of pixels out of a glass substrate is to cut it as square as possible. 1:1 or 4:3 screens have the most pixels per diagonal inch. But with displays becoming more consumery, inch-size became more important, not the number of pixels. And this is a completely different optimization target. To get the most diagonal inches out of a substrate, you want to make the screen as wide as possible. First we got 16:10, then 16:9, and there have been moderately successful panel manufacturers making 22:9 and 2:1 screens (like Philips). Even though pixel density and absolute resolution went down for a couple of years, inch-sizes went up and that's what sold. Why buy a 19″ 1280×1024 when you can buy a 21″ 1366×768?

    Eh… I think that about covers all the major aspects here.
    There's more of course; bandwidth limits of HDMI, DVI, DP and of course VGA played a role, and if you go back to the pre-2000s, graphics memory, in-computer bandwidth and simply the limits of commercially available RAMDACs played an important role. But for today's considerations, this is about all you need to know. Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.
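    As a quick sanity check of the line-driver arithmetic above (a sketch assuming 3 subpixels per pixel and 360 outputs per driver chip, both figures taken from the answer):

        # 1920x1080 panel, RGB subpixels, 360-output line drivers
        width, height, subpixels_per_pixel, outputs_per_driver = 1920, 1080, 3, 360
        print(width * subpixels_per_pixel // outputs_per_driver)    # 16 column drivers
        print(height * subpixels_per_pixel // outputs_per_driver)   # 9 row drivers -> 16:9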

    Read the article

  • How does rc job work / order of (contradicting) "start on ..." and "stop on ..." stanzas

    - by Binarus
    Hi, I just can't understand how Upstart's rc job definition in Natty 11.04 works. To illustrate the problem, here is the definition (empty lines and comments are left out):

    start on runlevel [0123456]
    stop on runlevel [!$RUNLEVEL]
    export RUNLEVEL
    export PREVLEVEL
    console output
    env INIT_VERBOSE
    task
    exec /etc/init.d/rc $RUNLEVEL

    Let's suppose we currently are in runlevel 2 and the rc job is stopped (that is exactly the situation after booting my box and logging in via SSH). Now, let's assume that the system switches to runlevel 3, for example due to a command like "telinit 3" given by root. What will happen to the rc job? Obviously, the rc job will be started since it is currently stopped and the event runlevel 3 matches the start events. But from now on, things are unclear to me: According to the manual, $RUNLEVEL evaluates to the new runlevel when the job is started (that means 3 in our example). Therefore, the next stanza "stop on runlevel [!$RUNLEVEL]" translates to "stop on runlevel [!3]"; that means we have a first stanza which will trigger the job, but the second stanza will never stop the job and seems to be useless. Since I know that the Ubuntu / Upstart people won't do useless things, I must be heavily misunderstanding something. I would be grateful for any explanation.

    While trying to understand this, an additional question came to my mind. If I had contradicting start and stop triggers, for example

    start on foo
    stop on foo

    what would happen? I swear I never will do that, but I am nevertheless very interested in how Upstart handles that on the theoretical level. Thank you very much!

    Editing the question as a reaction to geekosaur's first answer: I can see the parallelism, but it is not that easy (at least, not to me). Let's assume the job currently is still running, and a new runlevel event comes in (of course, the new runlevel is different from the current one). Then, the following should happen:

    1) The job is single instance. That means that "start on ..." won't be triggered since the job is currently running; $RUNLEVEL is not touched.
    2) "stop on ..." will be triggered since the new runlevel is different from $RUNLEVEL, so the job will be aborted.
    3) Now, the job is stopped and waiting. I can't see how it is restarted with the new runlevel. AFAIK, initctl emits events only once, so "start on ..." won't be triggered and the new runlevel won't be entered.

    I know that I am still misunderstanding something, and I am grateful for explanations. Thank you very much!

    Read the article

  • Who is the enemy of FLOSS today?

    michuk says: "The question in the title of this post was raised by myself recently, after I decided to accept a paid Microsoft ad on one of my Linux websites: http://jakilinux.org and (obviously) I was called a traitor."

    Read the article

  • Basis of definitions

    - by Yttrill
    Let us suppose we have a set of functions which characterise something: in the OO world, methods characterising a type. In mathematics these are propositions and we have two kinds: axioms and lemmas. Axioms are assumptions, lemmas are easily derived from them. In C++, axioms are pure virtual functions. Here's the problem: there's more than one way to axiomatise a system. Given a set of propositions or methods, a subset of the propositions which is necessary and sufficient to derive all the others is called a basis. So too, for methods or functions, we have a desired set which must be defined, and typically every one has one or more definitions in terms of the others, and we require the programmer to provide instance definitions which are sufficient to allow all the others to be defined and, if there is an overspecification, that it is consistent. Let me give an example (in Felix; Haskell code would be similar):

    class Eq[t] {
      virtual fun ==(x:t, y:t):bool => eq(x,y);
      virtual fun eq(x:t, y:t) => x == y;
      virtual fun != (x:t, y:t):bool => not (x == y);
      axiom reflex(x:t): x == x;
      axiom sym(x:t, y:t): (x == y) == (y == x);
      axiom trans(x:t, y:t, z:t): implies(x == y and y == z, x == z);
    }

    Here it is clear: the programmer must define either == or eq or both. If both are defined, the definitions must be equivalent. Failing to define one doesn't cause a compiler error, it causes an infinite loop at run time. Defining both inequivalently doesn't cause an error either, it is just inconsistent. Note that the axioms specified constrain the semantics of any definition. Given a definition of ==, either directly or via a definition of eq, then != is defined automatically, although the programmer might replace the default with something more efficient; clearly such an overspecification has to be consistent. Please note, == could also be defined in terms of !=, but we didn't do that.

    A characterisation of a partial or total order is more complex. It is much more demanding, since there is a combinatorial explosion of possible bases. There is a reason to desire overspecification: performance. There is also another reason: choice and convenience. So here, there are several questions: one is how to check that the semantics are obeyed, and I am not looking for an answer here (way too hard!). The other question is: How can we specify, and check, that an instance provides at least a basis? And a much harder question: how can we provide several default definitions which depend on the basis chosen?
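    A rough Python analogue of the "provide at least a basis, derive the rest" idea (not the Felix mechanism above, just an illustration): functools.total_ordering treats __eq__ plus any one ordering method as a basis, derives the remaining comparisons from it, and fails at class-creation time if no ordering method is supplied - one library-level answer to the "check that an instance provides a basis" question.

        from functools import total_ordering

        @total_ordering
        class Version:
            def __init__(self, major, minor):
                self.major, self.minor = major, minor

            # The basis the instance must supply:
            def __eq__(self, other):
                return (self.major, self.minor) == (other.major, other.minor)

            def __lt__(self, other):
                return (self.major, self.minor) < (other.major, other.minor)

        # __le__, __gt__ and __ge__ are filled in by the decorator (Python already
        # derives __ne__ from __eq__); any of them may still be overridden with a
        # faster definition, as long as it stays consistent with the basis.
        print(Version(1, 2) <= Version(1, 3))   # True, via the derived __le__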

    Read the article

  • SQL Server Begin Try

    - by Derek Dieter
    The try catch methodology of programming is a great innovation for SQL 2005+. The first question you should ask yourself before using Try/Catch should be “why?”. Why am I going to use Try/Catch? Personally, I have found a few uses, however I must say I do fall into the category of not [...]

    Read the article

  • Should UTF-16 be considered harmful?

    - by Artyom
    I'm going to ask what is probably quite a controversial question: "Should one of the most popular encodings, UTF-16, be considered harmful?" Why do I ask this question? How many programmers are aware of the fact that UTF-16 is actually a variable-length encoding? By this I mean that there are code points that, represented as surrogate pairs, take more than one element. I know; lots of applications, frameworks and APIs use UTF-16, such as Java's String, C#'s String, Win32 APIs, Qt GUI libraries, the ICU Unicode library, etc. However, with all of that, there are lots of basic bugs in the processing of characters outside of the BMP (characters that should be encoded using two UTF-16 elements). For example, try to edit one of these characters:

    𝄞 (U+1D11E) MUSICAL SYMBOL G CLEF
    𝕥 (U+1D565) MATHEMATICAL DOUBLE-STRUCK SMALL T
    𝟶 (U+1D7F6) MATHEMATICAL MONOSPACE DIGIT ZERO
    𠂊 (U+2008A) Han Character

    You may miss some, depending on what fonts you have installed. These characters are all outside of the BMP (Basic Multilingual Plane). If you cannot see these characters, you can also try looking at them in the Unicode Character reference. For example, try to create file names in Windows that include these characters; try to delete these characters with a "backspace" to see how they behave in different applications that use UTF-16. I did some tests and the results are quite bad:

    Opera has problems with editing them (deleting requires 2 presses on backspace)
    Notepad can't deal with them correctly (deleting requires 2 presses on backspace)
    File name editing in Windows dialogs is broken (deleting requires 2 presses on backspace)
    All QT3 applications can't deal with them - they show two empty squares instead of one symbol.
    Python encodes such characters incorrectly when used directly: u'X'!=unicode('X','utf-16') on some platforms when X is a character outside of the BMP.
    Python 2.5 unicodedata fails to get properties of such characters when Python is compiled with UTF-16 Unicode strings.
    StackOverflow seems to remove these characters from the text if edited directly as Unicode characters (these characters are shown using HTML Unicode escapes).
    WinForms TextBox may generate an invalid string when limited with MaxLength.

    It seems that such bugs are extremely easy to find in many applications that use UTF-16. So... Do you think that UTF-16 should be considered harmful?
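    A minimal way to see the variable-length behaviour for yourself (a Python 3 sketch; the specific character is the G clef from the list above):

        import struct

        ch = "\U0001D11E"                     # MUSICAL SYMBOL G CLEF, outside the BMP
        data = ch.encode("utf-16-le")
        print(len(ch))                        # 1 code point
        print(len(data) // 2)                 # 2 UTF-16 code units, not 1
        print([hex(u) for u in struct.unpack("<2H", data)])   # ['0xd834', '0xdd1e'] - the surrogate pair

    Any code that treats one UTF-16 element as one character (a naive backspace handler, a MaxLength check that counts code units, etc.) will cut this pair in half, which is exactly the class of bug listed above.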

    Read the article

  • Ubuntu 12.04.3 - Graphics Driver: Default vs Nvidia 319-recommended vs Nvidia 319-updated

    - by Navraj
    Background: I switched from the default driver to Nvidia-319-recommended. I am guessing that this update has caused issues with keyboard shortcuts, the battery status icon disappearing, as well as power management issues, as speculated by others. Closing the laptop lid no longer suspends the laptop - it has to be done manually by clicking 'suspend' before closing the lid. Question: How do you restore the original/default graphics driver? Thanks for your help. Regards

    Read the article

  • Controlling text that appears in google search results

    - by Mick
    I have recently made a simple (pure HTML) website. The most important key phrase that I want to capture is "full reserve banking". Currently, if I type "full reserve banking" (without quotes) into google, then my site appears as the 7th item on the first page. I am reasonably happy with this as the site is so new. But one frustration is that the text that google displays in relation to my site is rather misleading. The main message I would like to get across is that my site is "A collection of resources for anyone interested in this alternative monetary system." and I have this as the first line of text on the page. Unfortunately, this important sentence is nowhere to be seen in the google search result. So my question is - is there anything I can do to fix this error? Edit: I noticed that someone edited this question to remove the name of the website. I was very keen to leave it in because being able to look at it makes it far easier to diagnose what I did wrong. Indeed the answer suggested by "Su" clearly shows that they looked at my website and analyzed what it was doing which helped them give a clearer answer. If I am breaking some policy by including the name then please explain what this policy is in a comment. Edit: I have now made a series of changes to my meta descriptions as inspired by the answers given here. On the homepage I now have the text: <META NAME="description" CONTENT="A collection of resources for anyone interested in Full Reserve Banking. What it is, how it works, web resources, organisations, research papers etc."> I am now very excited to see what will happen after the next visit by the google robots. Edit: Result! I just did a google search for "full reserve banking", and the text that appeared was: Full Reserve Banking: The definitive resource. A collection of resources for anyone interested in Full Reserve Banking. What it is, how it works, web resources, organisations, research papers etc. www.fullreservebanking.com/ - Cached By the way, I did originally have a meta description - but it was too short, it just said "full reserve banking". Google obviously assumed this was too little and so chose to use its own algorithms to cook up a different sentence from the main text.

    Read the article

  • Can't make a bootable Ubuntu 13.10 USB

    - by Eli Chévere
    Hey, this is my second question. Sorry for my English! I tried to make a bootable USB with Ubuntu 13.10. The problem is that every time I tried to boot it, I got an error: syslinux no default or ui configuration directive found. Before posting an answer, note that I already tried renaming isolinux (the folder, the cfg and the bin) to syslinux, and I also tried using YUMI, Universal Boot and Live Linux... Thanks for your help!

    Read the article

  • A new Mozilla browser for iOS? It could take inspiration from Opera Mini

    A new Mozilla browser for iOS? It could take inspiration from Opera Mini, but nothing seems certain. Matt Brubeck, one of the developers of the Firefox Mobile browser (also known as Fennec), commented on the situation of the Firefox browser in Apple's iOS environments and, in doing so, started a rumour about the possible development of another Mozilla browser for Apple's mobile systems. In response to a question asked on the Q&A platform Quora, Brubeck...

    Read the article

  • Does Hard Drive Orientation Affect Its Lifespan?

    - by Jason Fitzpatrick
    Many cases allow you to mount drives in vertical or horizontal configurations, and external drives can be easily repositioned. Does the orientation of the hard drive affect the performance and longevity of the drive? Today's Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    Read the article

  • should I learn html/css before php even for using a database? [on hold]

    - by Sadegh
    I saw lots of questions about this topic, and all of them were asking whether someone who wants to use PHP for "building web pages" should learn HTML first or not, and most of them said yes, because most of the time you make a web page with both PHP and HTML (and maybe CSS). But if I just want to use PHP for connecting to my database (for example MySQL) and nothing more, should I learn any HTML or CSS first or not?

    Read the article

  • airplane operating systems and choice of programming language

    - by adhg
    I was wondering if anyone knows what operating system is used in commercial airplanes (say Boeing or Airbus). Also, what is the (preferred) real-time programming language? I heard that Ada is used at Boeing, so my question is - why Ada? What were the criteria the Boeing guys used to choose this language? (I guess Java wouldn't be a great choice if the garbage collector wakes up exactly at lift-off.) Thanks!

    Read the article

  • How to Choose a Good PHP Developer

    Hiring PHP developers is turning out to be the best option for getting effective, fast and polished PHP development. However, the key question remains - "how to choose a good PHP developer". Today there seem to be lots of PHP developers easily available. In this article, learn about a few essential tips that can help you in choosing a good PHP developer.

    Read the article

  • XNA Moddable Game - Architecture Design and Reflection

    - by David K
    I've decided to embark on an XNA moddable game project in a simple rogue style. For all purposes of this question, I'm not going to be using a scripting engine, but rather allow modders to directly compile assemblies that are loaded by the game at run time. I know about the security problems this may raise.

    So in order to expose the moddable content, I have gone about creating a generic project in XNA called MyModel. This contains a number of interfaces that all inherit from IPlugin, such as IGameSystem, IRenderingSystem, IHud, IInputSystem, etc. Then I've created another project called MyRogueModel. This references the MyModel project, and holds interfaces such as IMonster, IPlayer, IDungeonGenerator, IInventorySystem. More rogue-specific interfaces, but again, all interfaces in this project inherit from IPlugin. Then finally, I've created another project called MyRogueGame, which references both the MyModel and MyRogueModel projects. This project will be the game that you run and play. Here I have put the actual implementations of the Monster, DungeonGenerator, InputSystem and RenderingSystem classes. This project will also scan the mods directory at run time, load any IPlugins it finds using reflection, and override anything it finds over the default. For example, if it finds a new implementation of the DungeonGenerator it will use that one instead.

    Now my question is: in order to get this far, I have effectively 2 projects that contain nothing but interfaces... which seems a little... strange? For people to create mods for the game, I would give them both the MyModel and MyRogueModel assemblies, which they would reference. I'm not sure whether this is the right way to do it, but my reasoning goes as follows: If I write 1 input system, I can use it in any game I write. If I create 3 rogue-like games, and a modder writes 1 rendering system, that modder could use the rendering system for all 3 games, because it all comes from the MyModel project.

    I come from a more web-based C# role, so having empty interface projects doesn't seem wrong, it's just something I haven't done before. Before I embark on something that might be crazy, I'd just like to know whether this is a foolish idea and whether there's a better (or established) design principle I should be following?
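    For what it's worth, the "scan a directory, load whatever implements the plugin interface, fall back to the built-in default" pattern described above looks roughly like this sketch (written in Python rather than C#/XNA; the mods folder name and the DungeonGenerator types are illustrative assumptions, not the poster's actual API):

        import importlib.util
        import inspect
        from pathlib import Path

        class IDungeonGenerator:
            # Stand-in for an interface living in the shared "model" assembly;
            # in a real layout this would sit in a module that mods can import.
            def generate(self, width, height): ...

        class DefaultDungeonGenerator(IDungeonGenerator):
            def generate(self, width, height):
                return [["." for _ in range(width)] for _ in range(height)]

        def load_plugins(mods_dir="mods"):
            """Return every class found in mods/*.py that implements IDungeonGenerator."""
            found = []
            for path in Path(mods_dir).glob("*.py"):
                spec = importlib.util.spec_from_file_location(path.stem, path)
                module = importlib.util.module_from_spec(spec)
                spec.loader.exec_module(module)
                for _, cls in inspect.getmembers(module, inspect.isclass):
                    if issubclass(cls, IDungeonGenerator) and cls is not IDungeonGenerator:
                        found.append(cls)
            return found

        # Use a modder's generator if one was found, otherwise the built-in default.
        plugins = load_plugins()
        generator = (plugins[0] if plugins else DefaultDungeonGenerator)()
        print(type(generator).__name__)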

    Read the article

  • SQL Server Manageability Series: how to change the default path of .cache files of a data collector? #sql #mdw #dba

    - by ssqa.net
    How do you change the default path of the .cache files of a data collector after the Management Data Warehouse (MDW) has been set up? This was the question asked by one of the DBAs at a client's site. I instantly enquired whether any folder had been specified while setting up the MDW, and the obvious answer was no, as the settings were left at their defaults. This means all the .CACHE files are stored under the %C\TEMP directory, which may pose an out-of-disk-space problem on the server where the MDW is set up to collect. Going back...(read more)

    Read the article

  • how do I set up a double domain?

    - by kdavis8
    I would like to set up a server similar to Google's. Their domain acts like a double domain, in that you can use URLs like "play.Google.com" or "apps.Google.com" to go to different sites. For example, my domain would now be "my_domain.com", but I would like another one to be "domain2.my_domain.com". My question is, what is this officially called and how do I set it up? I'm not sure if you need two servers or just one.

    Read the article

  • How many programmers are female? [closed]

    - by Cawas
    Let's just assume gender and sex do matter and this question isn't as pointless as some may say. I believe gender distribution does say a lot about any given job, although I find it very hard to explain why. So, is there any source on the web we can use to get a plain, high-level number comparing female versus male programmers in any given space (country, community, company, etc.)? Not asking why nor anything else. Just statistical numbers.

    Read the article

  • DBA Best Practices - A Blog Series: Episode 1 - Backups

    - by Argenis
    This blog post is part of the DBA Best Practices series, in which various topics of concern for daily database operations are discussed. Your feedback and comments are very much welcome, so please drop by the comments section and be sure to leave your thoughts on the subject.

    Morning Coffee

    When I was a DBA, the first thing I did when I sat down at my desk at work was checking that all backups had completed successfully. It really was more of a ritual, since I had a dual system in place to check for backup completion: 1) the scheduled agent jobs to back up the databases were set to alert the NOC on failure, and 2) I had a script run from a central server every so often to check for any backup failures. Why the redundancy, you might ask. Well, for one I was once bitten by the fact that database mail doesn't work 100% of the time. Potential causes for failure include issues on the SMTP box that relays your server email, firewall problems, DNS issues, etc. And so to be sure that my backups completed fine, I needed to rely on a mechanism other than having the servers do the talking - I needed to interrogate the servers and ask each one if an issue had occurred. This is why I had a script run every so often. Some of you might have monitoring tools in place like Microsoft System Center Operations Manager (SCOM) or similar 3rd-party products that would track all these things for you. But at that moment, we had no recourse but to write our own Powershell scripts to do it. Now it goes without saying that if you don't have backups in place, you might as well find another career. Your most sacred job as a DBA is to protect the data from a disaster, and only properly safeguarded backups can offer you peace of mind here.

    "But, we have a cluster...we don't need backups"

    Sadly I've heard this line more than I would have liked to. You need to understand that a cluster is comprised of shared storage, and that is precisely your single point of failure. A cluster will protect you from an issue at the Operating System level, and also under an outage of any SQL-related service or dependent devices. But it will most definitely NOT protect you against corruption, nor will it protect you against somebody deleting data from a table - accidentally or otherwise.

    Backup, fine. How often do I take a backup?

    The answer to this is something you will hear frequently when working with databases: it depends. What does it depend on? For one, you need to understand how much data your business is willing to lose. This is what's called the Recovery Point Objective, or RPO. If you don't know how much data your business is willing to lose, you need to have an honest and realistic conversation about data loss expectations with your customers, internal or external. From my experience, their first answer to the question "how much data loss can you withstand?" will be "zero". In that case, you will need to explain how zero data loss is very difficult and very costly to achieve, even in today's computing environments. Do you want to go ahead and take full backups of all your databases every hour, or even every day? Probably not, because of the impact that taking a full backup can have on a system. That's what differential and transaction log backups are for. Have I answered the question of how often to take a backup? No, and I did that on purpose. You need to think about how much time you have to recover from any event that requires you to restore your databases. This is what's called the Recovery Time Objective.
    Again, if you go ask your customer how long of an outage they can withstand, at first you will get a completely unrealistic number - and that will be your starting point for discussing a solution that is cost effective. The point that I'm trying to get across is that you need to have a plan. This plan needs to be practiced, and tested. Like a football playbook, you need to rehearse the moves you'll perform when the time comes. How often is up to you, and the objective is that you feel better about yourself and the steps you need to follow when emergency strikes.

    A backup is nothing more than an untested restore

    Backups are files. Files are prone to corruption. Put those two together and realize how you feel about those backups sitting on that network drive. When was the last time you restored any of those? Restoring your backups on another box - that, by the way, doesn't have to match the specs of your production server - will give you two things: 1) peace of mind, because now you know that your backups are good, and 2) a place to offload your consistency checks with DBCC CHECKDB or any of the other DBCC commands like CHECKTABLE or CHECKCATALOG. This is a great strategy for VLDBs that cannot withstand the additional load created by the consistency checks. If you choose to offload your consistency checks to another server though, be sure to run DBCC CHECKDB WITH PHYSICALONLY on the production server, and if you're using SQL Server 2008 R2 SP1 CU4 and above, be sure to enable trace flags 2562 and/or 2549, which will speed up the PHYSICALONLY checks further - you can read more about this enhancement here.

    Back to the "How Often" question for a second. If you have the disk, and the network latency, and the system resources to do so, why not back up the transaction log often? As in, every 5 minutes, or even less than that? There's not much downside to doing it, as you will have to clear the log with a backup sooner rather than later, lest you risk running out of space on your tlog, or even your drive. The one drawback to this approach is that you will have more files to deal with at restore time, and processing each file will add a bit of extra time to the entire process. But it might be worth that time knowing that you minimized the amount of data lost. Again, test your plan to make sure that it matches your particular needs.

    Where to back up to? Network share? Locally? SAN volume?

    This is another topic where everybody has a favorite choice. So, I'll stick to mentioning what I like to do and what I consider to be the best practice in this regard. I like to back up to a SAN volume, i.e., a drive that actually lives in the SAN and can be easily attached to another server in a pinch, saving you valuable time - you wouldn't need to restore files over the network (slow) or pull drives out of a dead server (been there, done that, it's also slow!). The key is to have a copy of those backup files made quickly, and, if at all possible, to a remote target in a different datacenter - or even the cloud. There are plenty of solutions out there that can help you put such a solution together. That right there is the first step towards a practical Disaster Recovery plan. But there's much more to DR, and that's material for a different blog post in this series.

    Read the article

  • Preferred way to render text in OpenGL

    - by dukeofgaming
    Hi, I'm about to pick up computer graphics once again for a university project. For a previous project I used a library called ftgl that didn't leave me quite satisfied, as it felt kind of heavy (I tried all the rendering techniques; text rendering didn't scale very well). My question is: is there a good and efficient library for this? If not, what would be the way to implement fast but nice-looking text? Some intended uses are: floating object/character labels, dialogues, menus, HUD. Regards and thanks in advance. EDIT: Preferably one that can load fonts.

    Read the article

  • How do you explain refactoring to a non-technical person?

    - by Benjol
    (This question was inspired by the most-voted answer here) How do you go about explaining refactoring (and technical debt) to a non-technical person (typically a PHB or customer)? ("What, it's going to cost me a month of your work with no visible difference?!") UPDATE Thanks for all the answers so far, I think this list will provide several useful analogies to which we can point the appropriate people (though editing out references to PHBs may be wise!)

    Read the article

  • Grid Favicon on [Scammy] Websites [closed]

    - by Kevin Dolan
    I've seen this grid favicon show up on a lot of sites, most of which tend to be for scams like "oxytocin accelerator" or "make $300 a day posting links to Google"-type sites. My question is: what is this icon and where does it come from? Is there some organization whose goal is to make terrible websites like this and which associates them with this icon, or does it belong to some server software that scammy sites for some reason like to use? Does anybody know the origins of this icon?

    Read the article

  • Software Architecture and MEF composition location

    - by Leonardo
    Introduction

    My software (a bunch of WebAPIs) consists of 4 projects: Core, FrontWebApi, Library and Administration.

    Library is a code-library project that consists of only interfaces and enumerators. All my classes in other projects inherit from at least one interface, and this interface is in the Library. Generally speaking, my interfaces define either Entities, Repositories or Controllers. This project references no other project or any special dlls... just the regular .Net stuff...

    Core is a class-library project with the concrete implementations of Entities and Repositories. In some cases I have more than 1 implementation for a Repository (ex: one for Azure table storage and one for regular Sql). This project handles the intelligence (business rules mostly) and persistence, and it references only the Library.

    FrontWebApi is an ASP.NET MVC 4 WebApi project that implements the controller interfaces to handle web requests (from a mobile native app)... It references the Core and the Library.

    Administration is a code-library project that represents an "optional module", meaning: if it is present, it provides extra features (such as Access Control Lists) to the application, but if it's not, no problem. Administration also references only the Library, implementing concrete classes of a few interfaces such as "IAccessControlEntry"... I intend to make this available with a "setup" that will create any required database table or anything like that. But it is important to notice that the Core has no reference to this project...

    Development

    Now, in order to have decoupled code I decided to use IoC, and because this is a small project, I decided to do it using MEF, especially because of its advertised "composition" capabilities. I arranged all the imports/exports and constructors and everything, but something is not quite perfect in my "mental visualisation":

    Main Question

    Where should I "compose" the objects? I mean: technically, the only place where access to the real implementations is required is in the Repositories, because in order to retrieve data from wherever, entity instances will be necessary. The repositories could also provide a public "GetCleanInstanceOf()", right? Then all other places will be just fine working with the interfaces instead of concrete classes...

    Secondary Question

    Should "Administration" implement the concrete object for "IAccessControlGeneralRepository", or should the Core?
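    One way to picture the layering described above is a single "composition root": only the application entry point knows concrete types, while controllers and repositories talk to interfaces. The sketch below is a language-neutral illustration in Python, not MEF code, and the names only loosely mirror the projects in the question:

        from abc import ABC, abstractmethod

        class IRepository(ABC):                     # would live in "Library"
            @abstractmethod
            def get(self, key): ...

        class SqlRepository(IRepository):           # concrete, would live in "Core"
            def get(self, key):
                return {"id": key, "source": "sql"}

        class AzureTableRepository(IRepository):    # alternative concrete implementation
            def get(self, key):
                return {"id": key, "source": "azure"}

        class ItemController:                       # "FrontWebApi": depends only on the interface
            def __init__(self, repo: IRepository):
                self._repo = repo
            def handle(self, key):
                return self._repo.get(key)

        def compose(use_azure: bool) -> ItemController:
            # The composition root: the one place where concrete classes are chosen and wired.
            repo = AzureTableRepository() if use_azure else SqlRepository()
            return ItemController(repo)

        print(compose(use_azure=False).handle(42))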

    Read the article
