Search Results

Search found 68825 results on 2753 pages for 'problem'.


  • Wi-Fi connection frequently dropping in hotspots and on university campus; home Wi-Fi works fine.

    - by Olivier Lalonde
    For some reason, my Wi-Fi connection frequently drops everywhere except at home. I didn't have this problem with Windows 7, so I guess it's not a hardware problem. My best guess so far is that my connection timeout is very low, so if my connection isn't able to reach the router within a few seconds, the connection drops. Is that likely to be the problem? If so, how could I fix this? Otherwise, what would be an alternative cause for this strange behavior?

    Read the article

  • Connection Pooling is Busted

    - by MightyZot
    A few weeks ago we started getting complaints about performance in an application that has performed very well for many years.  The application is an n-tier application that uses ADODB with the SQLOLEDB provider to talk to a SQL Server database.  Our object model is written in such a way that each public method validates security before performing requested actions, so a significant number of queries are executed to get information about file cabinets, retrieve images, create workflows, etc.  (PaperWise is a document management and workflow system.)  A common factor for these customers is that they have remote offices connected via MPLS networks.

    Naturally, the first thing we looked at was the query performance in SQL Profiler.  All of the queries were executing within expected timeframes; most of them were so fast that the duration in SQL Profiler was zero.  After getting nowhere with SQL Profiler, the situation was escalated to me.  I decided to take a peek with Process Monitor.  Procmon revealed some “gaps” in the TCP/IP traffic.  There were notable delays between send and receive pairs.  The send and receive pairs themselves were quite snappy, but quite often there was a notable delay between a receive and the next send.  You might expect some delay because, presumably, the application is doing some thinking in-between the pairs.  But comparing the procmon data at the remote locations with the procmon data for workstations on the local network showed that the remote workstations were significantly delayed.  Procmon also showed a high number of disconnects.

    Wireshark traces showed that connections to the database were taking between 75ms and 150ms.  Not only that, but connections to a file share containing images were taking 2 seconds!  So, I asked about a trust.  Sure enough, there was a trust between two domains and the file share was on the second domain.  Joining a remote workstation to the domain hosting the share containing images alleviated the time delay in accessing the file share.  Removing the trust had no effect on the connections to the database.

    Microsoft Network Monitor includes filters that parse TDS packets.  TDS is the protocol that SQL Server uses to communicate.  There is a certificate exchange and some SSL that occurs during authentication.  All of this was evident in the network traffic.  After staring at the network traffic for a while, and examining packets, I decided to call it a night.  On the way home that night, something about the traffic kept nagging at me.  Then it dawned on me: at the beginning of the dance of packets between the client and the server, all was well.  Connection pooling was working and I could see multiple queries getting executed on the same connection and ephemeral port.  After a particular query, connecting to two different servers, I noticed that ADODB and SQLOLEDB started making repeated connections to the database on different ephemeral ports.  SQL Server would execute a single query and respond on a port, then open a new port and execute the next query.  Connection pooling appeared to be broken.

    The next morning I wrote a test to confirm my hypothesis.  Turns out that the sequence causing the connection nastiness goes something like this:
    1. Make a connection to the database.
    2. Open a result set that returns enough records to require multiple roundtrips to the server.
    3. For each result, query for some other data in the database (this will open a new implicit connection).
    4. Close the inner result set and repeat for every item in the original result set.
    5. Close the original connection.

    Provided that the first result set returns enough data to require multiple roundtrips to the server, ADODB and SQLOLEDB will start making new connections to the database for each query executed in the loop.  Originally, I thought this might be due to Microsoft’s denial of service (DoS) attack protection.  After turning those features off to no avail, I eventually thought to switch my queries to client-side cursors instead of server-side cursors.  Server-side cursors are the default, by the way.  Voila!  After switching to client-side cursors, the disconnects were gone and the above sequence yielded two connections as expected.

    While the real problem is the amount of time it takes to make connections over these MPLS networks (100ms on average), switching to client-side cursors made the problem go away.  Believe it or not, this is actually documented by Microsoft, and rather difficult to find.  (At least it was while we were trying to troubleshoot the problem!)  So, if you’re noticing performance issues on slower networks, or networks with slower switching, take a look at the traffic in a tool like Microsoft Network Monitor.  If you notice a high number of disconnects, and you’re using fire-hose or server-side cursors, then try switching to client-side cursors and you may see the problem go away.

    Most likely, Microsoft believes this to be appropriate behavior, because ADODB can’t guarantee that all of the data has been retrieved when you execute the inner queries.  I’m not convinced, though, because the problem remains even after replacing all of the implicit connections with explicit connections and closing those connections in-between each of the inner queries.  In that case, there doesn’t seem to be a reason why ADODB can’t use a single connection from the connection pool to make the additional queries, bringing the total number of connections to two.  Instead, ADO appears to make an assumption about the state of the connection.  I’ve reported the behavior to Microsoft and am waiting to hear from the appropriate team, so that I can demonstrate the problem.  Maybe they can explain to us why this is appropriate behavior.  :)
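
    To make the fix concrete, here is a minimal sketch of the client-side cursor setting, written against the ADODB COM interop types from C# (the PaperWise code itself isn't shown here, and the connection string, table, and column names are placeholders of my own; the essential line is the CursorLocation assignment before Recordset.Open):

      using ADODB;

      class ClientCursorSketch
      {
          static void Main()
          {
              var conn = new Connection();
              conn.Open("Provider=SQLOLEDB;Data Source=.;Initial Catalog=Sample;Integrated Security=SSPI", "", "", 0);

              var rs = new Recordset();
              rs.CursorLocation = CursorLocationEnum.adUseClient;   // the fix: client-side cursor
              rs.Open("SELECT CabinetId FROM FileCabinets", conn,
                      CursorTypeEnum.adOpenStatic, LockTypeEnum.adLockReadOnly,
                      (int)CommandTypeEnum.adCmdText);

              while (!rs.EOF)
              {
                  // Inner queries issued per row here no longer force ADODB to open a
                  // brand new connection on a fresh ephemeral port for every statement.
                  rs.MoveNext();
              }

              rs.Close();
              conn.Close();
          }
      }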

    Read the article

  • Simultaneously calling multiple methods on a WCF service from Silverlight

    - by ola karlsson
    A while back I had to debug some performance issues in an existing Silverlight app. As the problem / solution was a bit obscure and finding info about it was quite tricky, I thought I’d share; maybe it can help the next person with this problem.

    The App
    On start, the app would do a number of calls to different methods on a WCF service to populate the UI with the necessary data. Recently one of those services had been changed and was now taking quite a bit longer than it used to. This was resulting in quite a long loading time for the whole UI, which was set up so it wouldn’t let the user interact with anything until all the service calls had finished. First I broke out the longer running service call from the others, then removed the constraint that it had to be loaded for the UI in general to become responsive. I also added a loading indicator just on that area of the UI, thinking that the main UI would load while this particular section could keep loading independently.

    The Problem
    However, this is where things started to get a bit strange. I found that even after these changes, the main UI wouldn’t activate until the long running call returned. So now I did what I should have done to start with: I got Fiddler out and had a look at what was really happening. What I found was that, once the call to the long running service method was placed, all subsequent calls were waiting for that one to return before executing. Not having really worked with WCF previously or knowing much about it in general, I was stumped. I knew of the issues where Silverlight is restricted by the browser's networking features in regards to the number of simultaneous connections etc. However, that just didn’t seem to be the issue here; you can clearly see in Fiddler that there are numerous calls, but they’re just not returning. I thought the problem might be in the WCF service, but the calls were really not that complicated and surely the service should be able to handle a lot more than what I was throwing at it! So I did what every developer does in this type of scenario: I hit the search engines. I did a whole bunch of searching on things like “multiple simultaneous WCF calls from Silverlight” and “Calling long running WCF services from Silverlight” etc. This, however, pretty much got me nowhere; I found a whole heap of resources on how to do WCF calls from Silverlight, but most of them were very basic and of no use whatsoever.

    The fog is clearing
    It wasn’t until I came across the term “WCF blocking calls” and started incorporating that in my searches that I started to get somewhere. Those searches quite quickly brought me to the following thread in the Silverlight forum, “Long-running WCF call blocking subsequent calls”, which discussed the exact problem I was facing, and, the best part, one of the guys there had the solution! The short answer is in the forum post, and the guy answering has also done a more extensive blog post about it called “Silverlight, WCF, and ASP.Net Configuration Gotchas” which covers it very well. So come on, what’s the solution?! I hear you ask, unless you’ve already gone to the links and looked it up ;)

    The Solution
    Well, it turns out that the issue is founded in a mix of Silverlight, ASP.Net and WCF: basically, if you’re doing multiple calls to a single WCF web service and you have ASP.Net session state enabled, the calls will be executed sequentially by the service, hence any long running calls will block subsequent ones.

    So why is ASP.Net session state affecting us? We’re working in Silverlight, right? Well, as mentioned earlier, by default Silverlight uses the browser's networking stack when doing service calls, hence to the WCF service the call looks like it might as well be coming from a normal ASP.Net application. To get around this, we look to a feature introduced in Silverlight 3, namely the Client HTTP Stack.

    The Client HTTP Stack to the rescue
    By using the following syntax (for example in our App.xaml.cs, Application_Startup method) WebRequest.RegisterPrefix("http://", WebRequestCreator.ClientHttp); we can set our Silverlight application to use the Client HTTP Stack, which incidentally solves our problem! By using Silverlight's own networking stack, rather than that of the browser, we get around the ASP.Net - WCF session state issue. The above code specifies that all calls to addresses starting with “http://” should go through the client stack; this can actually be set more granularly, and you can specify it to be used only for certain domains etc.

    Summary
    The actual solution is well covered in the forum and blog posts I link to above. This post is more about sharing my experience, hopefully helping to spread the word about this and maybe making it a bit easier for the next poor guy with this issue to find the solution. Until next time, Ola
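
    For context, here is roughly where that one registration line sits in a Silverlight project; this is a minimal sketch of an App.xaml.cs startup handler (MainPage stands in for whatever your root visual is), not the author's actual code:

      using System.Net;
      using System.Net.Browser;
      using System.Windows;

      public partial class App : Application
      {
          private void Application_Startup(object sender, StartupEventArgs e)
          {
              // Route all http:// requests through Silverlight's client HTTP stack
              // instead of the browser stack, so WCF calls are no longer serialized
              // behind the ASP.Net session state lock.
              WebRequest.RegisterPrefix("http://", WebRequestCreator.ClientHttp);

              this.RootVisual = new MainPage();
          }
      }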

    Read the article

  • How to get more detailed reason for Thunderbird crashing?

    - by Nick
    I have a computer (10.04) that was previously running Thunderbird 12.0.1 just fine (installed via apt-get via the official PPA). I don't know what happened at the time the problem started since this is a multi-user computer and I was not here. However, every time we try to launch TB, we immediately get a dialog that says: "We're Sorry. Thunderbird had a problem and crashed."
    Things I've tried:
    - Running thunderbird from a terminal produces no output.
    - I tried apt-get remove thunderbird --purge and then reinstalled.
    - Deleting the user's .thunderbird folder and launching still results in a crash.
    - Attempting to run thunderbird -safe-mode still results in a crash.
    This problem occurs for all users of Thunderbird on this computer. Is there any way to get more details on why the program is crashing? For example, the specific error that TB is encountering? I tried thunderbird -g but I'm not sure what to do with the debugger.

    Read the article

  • Best practices when creating/modeling databases?

    - by Oscar Mederos
    I learned at university some steps to model a database:
    1. Model the problem using the Extended Entity-Relationship Model.
    2. Extract the functional dependencies.
    3. Apply some algorithms to normalize the database (3NF or Boyce-Codd).
    4. Create the database.
    I'm studying Computer Science, and since I took that course I've been wondering if I always need to do those steps when creating a complex database for a specified problem. For example, do PHP / .NET / .. programmers always do that? Or are there tools to simplify that process, maybe using another way of representing the problem instead of the EERM?
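
    As a toy illustration of steps 2 and 3 (my own example, not from the original question): suppose the model yields a relation Enrollment(StudentID, CourseID, Grade, Instructor) with the functional dependencies {StudentID, CourseID} -> Grade and CourseID -> Instructor. Because Instructor depends on only part of the key, the relation is not normalized; the standard algorithms split it into Enrollment(StudentID, CourseID, Grade) and Course(CourseID, Instructor), and step 4 then turns those relations into tables.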

    Read the article

  • Ubuntu 11.10 cannot boot. It gets stuck at BusyBox

    - by Ivan Dokov
    I am using Ubuntu 11.10. An hour ago I had my laptop, a Sony Vaio VPCEB1S1E, running. I saw there were updates to install and I installed them. I turned off the laptop, and now when I turn it on it loads until BusyBox v1.18.4 appears. I've seen what people suggest in other askubuntu topics. I've booted Puppy Linux from USB and repaired the partition where Ubuntu is installed. I rebooted and nothing changed. I saw other suggestions like writing "exit" at the command line when BusyBox comes up. This didn't help either. I love Ubuntu, but these days I keep getting similar problems with not being able to boot the OS. The last few times I could repair it with Gparted, but then it wasn't a problem with BusyBox; it was something missing in the OS, like "cannot boot /". The same problem occurred on an older version, Ubuntu 10.10, and there I repaired it again with Gparted.

    Read the article

  • Error 255 with Samba simple file sharing on Ubuntu 14.04

    - by Rose Offthorns
    I have been using simple file sharing on Ubuntu 12.04 for several years with no problem. Now that I have upgraded to 14.04 I get error 255; I've tried all the sites' fixes for the problem and nothing works, and I even went back to 12.04 and still get the same error 255:
    'net usershare' returned error 255: net usershare add: cannot convert name "Everyone" to a SID. The connection was refused. Maybe smbd is not running.
    There appears to be a bug with the new upgrade, or has there been a new update? Thanks, any help would be appreciated.

    Read the article

  • How to program for constraints/rules

    - by Gaurav
    First, the background: during interviews in the past, I have many times been asked to design some variation of a card game as a programming puzzle, and I have tried to design it in an OO way, but I have never been satisfied with my solutions. It was not until recently that I realized I had been approaching the problem from the wrong direction. Specifically, I was trying to solve the problem by modeling each individual card as an object. The problem with this is that individual cards don't have any non-trivial intrinsic behavior and therefore are not suitable (or primary) candidates to be objects. What is interesting and important about cards are the rules and constraints, such as there being only four suits, or only thirteen cards in each suit. And of course, then there are any number of rules for games. So my questions are:
    1. Are there any idioms/constructs/patterns for programming with rules and constraints?
    2. How many of those in (1) can be applied in conjunction with the OO paradigm?
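
    One idiom that fits this (a minimal C# sketch of my own, often discussed under the names Specification pattern or rules engine, rather than anything from the original question) is to keep the card as a dumb value and promote each rule or constraint to a first-class object that can be tested and composed:

      using System.Collections.Generic;
      using System.Linq;

      public record Card(string Suit, int Rank);

      public interface IRule
      {
          bool IsSatisfied(IReadOnlyList<Card> deck);
      }

      public class FourSuitsRule : IRule
      {
          public bool IsSatisfied(IReadOnlyList<Card> deck) =>
              deck.Select(c => c.Suit).Distinct().Count() <= 4;
      }

      public class ThirteenRanksPerSuitRule : IRule
      {
          public bool IsSatisfied(IReadOnlyList<Card> deck) =>
              deck.GroupBy(c => c.Suit).All(g => g.Count() <= 13);
      }

      public class AllRules : IRule                     // rules compose like any other object
      {
          private readonly IRule[] _rules;
          public AllRules(params IRule[] rules) { _rules = rules; }
          public bool IsSatisfied(IReadOnlyList<Card> deck) => _rules.All(r => r.IsSatisfied(deck));
      }

    Game-specific rules then become more IRule implementations, which speaks to the second question: the rules live comfortably inside the OO paradigm as long as they, and not the cards, are the objects.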

    Read the article

  • Can't Log in to Lubuntu 12.04 X Server

    - by isomorphismes
    As of rebooting yesterday, I can't log in as myself to the X server part of 64-bit Lubuntu 12.04. Same problem as "Can not get passed the login screen", but that solution didn't work for me. Troubleshooting steps I already took:
    - I can log in as guest (with whatever window manager) to the graphical (X) session of Lubuntu.
    - I can log in as myself on a virtual terminal. (In fact I'm writing this from w3m for that reason.)
    So I know my password is correct and that most aspects of the system are working. One of the top Google results for "can't log into lubuntu" mentioned a disk-full problem on netbooks; I don't have that problem. Let me know if I need to paste any messages or config files to make this question clearer and I'll do so.
    $ ls -l /home
    total 12
    drwxr-xr-x 99 me me 12288 May 26 14:16 me
    $ ls -ld /tmp
    drwxrwxrwt 16 root root 4096 May 26 15:46 /tmp

    Read the article

  • What Counts For a DBA: Simplicity

    - by Louis Davidson
    Too many computer processes do an apparently simple task in a bizarrely complex way. They remind me of this strip by one of my favorite artists: Rube Goldberg. In order to keep the boss from knowing one was late, a process is devised whereby the cuckoo clock kisses a live cuckoo bird, who then pulls a string, which triggers a hat flinging, which in turn lands on a rod that removes a typewriter cover…and so on.

    We rely on creating automated processes to keep on top of tasks. DBAs have a lot of tasks to perform: backups, performance tuning, data movement, system monitoring, and of course, avoiding being noticed.  Every day, there are many steps to perform to maintain the database infrastructure, including: checking physical structures, re-indexing tables where needed, backing up the databases, checking those backups, running the ETL, and preparing the daily reports. And yes, all of these processes have to complete before you can call it a day, and probably before many others have started that same day.

    Some of these tasks are just naturally complicated on their own. Other tasks become complicated because the database architecture is excessively rigid, and we often discover during “production testing” that certain processes need to be changed because the written requirements barely resembled the actual customer requirements.  Then, with no time to change that rigid structure, we are forced to heap layer upon layer of code onto the problematic processes. Instead of a slight table change and a new index, we end up with 4 new ETL processes, 20 temp tables, 30 extra queries, and 1000 lines of SQL code.  Report writers then need to build reports and make magical numbers appear from those toxic data structures that are overly complex and probably filled with inconsistent data. What starts out as a collection of fairly simple tasks turns into a Goldbergian nightmare of daily processes that are likely to cause your dinner to be interrupted by the smartphone doing the vibration dance that signifies trouble at the mill.

    So what to do? Well, if it is at all possible, simplify the problem by either going into the code and refactoring the complex code to simple, or taking all of the processes and simplifying them into small, independent, easily-tested steps.  The former approach usually requires an agreement on changing underlying structures that requires countless mind-numbing meetings, while the latter can generally be done to any complex process without the same frustration or anger. Though it will still leave you with lots of steps to complete, the ability to test each step independently will definitely increase the quality of the overall process (and with each step reporting status back, finding an actual problem within the process will definitely be less unpleasant).

    We all know the principle behind simplifying a sequence of processes because we learned it in math classes in our early years of attending school, starting with elementary school. In my 4 years (ok, 9 years) of undergraduate work, I remember pretty much one thing from my many math classes that I apply daily to my career as a data architect, data programmer, and as an occasional indentured DBA: “show your work”. This process of showing your work was my first lesson in simplification. Each step in the process was, in fact, far simpler than the entire process.

    When you were working an equation that took both sides of 4 sheets of paper, showing your work was important because the teacher could see every step, judge it, and mark it accordingly.  So often I would make an error in the first few lines of a problem, which meant that the rest of the work was actually moving me closer to a very wrong answer, no matter how correct the math was in the subsequent steps. Yet, when I got my grade back, I would sometimes be pleasantly surprised. I passed, yet missed every problem on the test. But why? While I got the fact that 1+1=2 wrong in every problem, the teacher could see that I was using the right process.

    In a computer process, the situation is very similar. We take complex processes, show our work by storing intermediate values, and test each step independently. When a process has 100 steps, each step becomes a simple step that is tested and verified, such that there will be 100 places where data is stored, validated, and can be checked off as complete. If you get step 1 of 100 wrong, you can fix it and be confident (if you did your job of testing the other steps better than the one you had to repair) that the rest of the process works. If you have 100 steps and store the state of the process exactly once, the resulting testable chunk of code will be far more complex, and finding the error will require checking all 100 steps as one; it would usually be easier to find a specific needle in a stack of similarly shaped needles.

    The goal is to strive for simplicity either in the solution, or at least by simplifying every process down to as many independent, testable, simple tasks as possible.  For the tasks that really can’t be done completely independently, minimally take those tasks and break them down into simpler steps that can be tested independently.  Like working out division problems longhand, have each step of the larger problem verified and tested.

    Read the article

  • Are there other ways to install Ubuntu? (not Wubi, not live CD)

    - by Mauricio Andrés
    I had problems while installing Ubuntu 12.04 on a Samsung laptop; the problem is the AHCI system. After a lot of searching, I found that this is almost impossible to "fix" and the only way I found is too much work. I want to install Ubuntu in the 110GB free partition of my hard drive, along with Windows. I have a 150GB Windows partition, a 200GB documents partition, and I want to use 110GB for Ubuntu. The problem is that with the live CD, the installer and GParted show that my entire hard drive is unallocated (the AHCI problem). The only way to fix this is to do a lot of work, with a lot of risk, so the question is whether I can install Ubuntu without using either the live CD or Wubi.

    Read the article

  • Android Cocos2DX using C++ in Eclipse Helios Windows XP

    - by 25061987
    I have used Eclipse Helios 3.6.1 for Java development. I wanted to start C++ development in the same IDE, so I installed the Autotools Support For CDT, C/C++ Development Tools, and C/C++ Library API Documentation Hover Help plugins. I have included #include "cocos2d.h" in my HelloWorldScene.h file. Now, when writing the statement below:
    cocos2d::CCSprite * ccSprite;
    I am not getting the auto-completion bar (template proposals) when typing something like "coco" and pressing Ctrl + Space. What can be the problem? This might help you solve my problem: please check here. This is what I got after right-clicking the project - Index - Search for Unresolved Index. But I have added all includes, check here. I think this is causing a problem in Content Assist. What should I do in this case? Inclusion seems proper.

    Read the article

  • Inspiron N7110 Ubuntu 12.04 Poor WiFi Signal

    - by Joseph Risley
    Sorry if this is a repeat, I have been Googling possible answers and have not found one yet. I find my wireless signal is never 100%. Speed is fine, it's the actual signal strength that is the issue. I thought my router was the issue, but the problem was also present at the public library today. I asked the Windows and Mac users around me about their signal strength and they had full signal while mine was medium to low according to WiFiRadar. Is this a Dell problem (Realtek), or an Ubuntu problem I can fix in the terminal?

    Read the article

  • How does a website like Mathway work?

    - by Bob
    I recently found a website called Mathway. Basically, it works by allowing you to choose your "level of math" (which it uses to determine what tools it should provide to you) and then allows you to input a math problem, which it then solves for you, giving you detailed solutions (you have to try it, it's really cool). I was wondering how it works on two levels. First off, how would they parse the math problem (and all the sometimes foreign mathematical operators)? How do they get from text to numbers, variables, and operators? Second, how do they generate the explanations? While you have to pay for the detailed solutions (which are explanations of how they solved the problem), I've seen their preview screenshots, and it looks very detailed. The explanations are given in full, accurate sentences. How would they generate something like that?
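
    As a rough illustration of both questions (my own toy sketch in C#, certainly not Mathway's actual design), a recursive-descent parser turns the text into numbers and operators, and recording a sentence every time an operation is applied is one simple way to produce a step-by-step explanation:

      using System;
      using System.Collections.Generic;
      using System.Globalization;

      class TinySolver
      {
          private readonly string _s;
          private int _pos;
          public List<string> Steps { get; } = new List<string>();

          public TinySolver(string expression) { _s = expression.Replace(" ", ""); }

          public double Parse() => ParseAddSub();

          private double ParseAddSub()                 // lowest precedence: + and -
          {
              double left = ParseMulDiv();
              while (_pos < _s.Length && (_s[_pos] == '+' || _s[_pos] == '-'))
              {
                  char op = _s[_pos++];
                  double right = ParseMulDiv();
                  double result = op == '+' ? left + right : left - right;
                  Steps.Add($"{left} {op} {right} = {result}");   // the explanation trail
                  left = result;
              }
              return left;
          }

          private double ParseMulDiv()                 // higher precedence: * and /
          {
              double left = ParseAtom();
              while (_pos < _s.Length && (_s[_pos] == '*' || _s[_pos] == '/'))
              {
                  char op = _s[_pos++];
                  double right = ParseAtom();
                  double result = op == '*' ? left * right : left / right;
                  Steps.Add($"{left} {op} {right} = {result}");
                  left = result;
              }
              return left;
          }

          private double ParseAtom()                   // a number or a parenthesized expression
          {
              if (_s[_pos] == '(')
              {
                  _pos++;                              // skip '('
                  double inner = ParseAddSub();
                  _pos++;                              // skip ')'
                  return inner;
              }
              int start = _pos;
              while (_pos < _s.Length && (char.IsDigit(_s[_pos]) || _s[_pos] == '.')) _pos++;
              return double.Parse(_s.Substring(start, _pos - start), CultureInfo.InvariantCulture);
          }

          static void Main()
          {
              var solver = new TinySolver("2*(3+4)-5");
              Console.WriteLine(solver.Parse());       // 9
              solver.Steps.ForEach(Console.WriteLine); // 3 + 4 = 7, then 2 * 7 = 14, then 14 - 5 = 9
          }
      }

    A real system would presumably build an expression tree instead of evaluating on the fly, handle variables and many more operators, and apply rewrite rules (factor, expand, isolate x, ...) where each rule fires a template sentence, but the shape of the problem is the same.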

    Read the article

  • Project Euler 52: Ruby

    - by Ben Griswold
    In my attempt to learn Ruby out in the open, here’s my solution for Project Euler Problem 52.  Compared to Problem 51, this problem was a snap. Brute force and pretty quick… As always, any feedback is welcome.

      # Euler 52
      # http://projecteuler.net/index.php?section=problems&id=52
      # It can be seen that the number, 125874, and its double,
      # 251748, contain exactly the same digits, but in a
      # different order.
      #
      # Find the smallest positive integer, x, such that 2x, 3x,
      # 4x, 5x, and 6x, contain the same digits.
      timer_start = Time.now

      def contains_same_digits?(n)
        value = (n*2).to_s.split(//).uniq.sort.join
        3.upto(6) do |i|
          return false if (n*i).to_s.split(//).uniq.sort.join != value
        end
        true
      end

      i = 100_000
      answer = 0
      while answer == 0
        answer = i if contains_same_digits?(i)
        i += 1
      end

      puts answer
      puts "Elapsed Time: #{(Time.now - timer_start)*1000} milliseconds"

    Read the article

  • fat32 partition lock

    - by gsedej
    Hi! I am asking about a problem with a USB data stick (that uses the FAT32 file system). If you unplug the USB stick without unmounting it (safely remove), the data may become locked the next time you mount the stick (you can't make changes to files). If you unmount and mount the partition a few times, the data becomes normally accessible. The problem is that I cannot reproduce (force) this problem now, but it has happened many times, even recently. Has this been happening to someone else?

    Read the article

  • DDD: Service or Repository

    - by tikhop
    I am developing an app in a DDD manner, and I have a little problem with it. I have a Fare (airline fare) and a FareRepository object. At some point I need to load additional fare information and apply that information to an existing Fare. I guess that I need to create an Application Service (FareAdditionalInformationService) that will deal with obtaining the data from the server and then updating the existing Fare. However, some people told me that it is necessary to use FareRepository for this problem. I don't know which place is better for my problem: Service or Repository.
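
    Here is one common way to carve it up, as a minimal C# sketch (the Fare/FareRepository/FareAdditionalInformationService names come from the question; the gateway interface and the BaggageAllowance field are my own stand-ins for "obtaining data from the server"): the repository only loads and saves the aggregate, the service orchestrates the remote fetch, and the Fare itself owns the behavior of applying the new information.

      public class AdditionalFareInfo
      {
          public string BaggageAllowance { get; set; } = "";   // placeholder payload
      }

      public class Fare
      {
          public string Id { get; }
          public AdditionalFareInfo Details { get; private set; }
          public Fare(string id) { Id = id; }
          public void ApplyAdditionalInformation(AdditionalFareInfo info) => Details = info;
      }

      public interface IFareRepository
      {
          Fare GetById(string fareId);
          void Save(Fare fare);
      }

      public interface IFareInformationGateway
      {
          AdditionalFareInfo FetchAdditionalInfo(string fareId);   // the remote call lives here
      }

      public class FareAdditionalInformationService
      {
          private readonly IFareRepository _fares;
          private readonly IFareInformationGateway _gateway;

          public FareAdditionalInformationService(IFareRepository fares, IFareInformationGateway gateway)
          {
              _fares = fares;
              _gateway = gateway;
          }

          public void Enrich(string fareId)
          {
              Fare fare = _fares.GetById(fareId);                    // repository: persistence only
              fare.ApplyAdditionalInformation(_gateway.FetchAdditionalInfo(fareId));
              _fares.Save(fare);
          }
      }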

    Read the article

  • PASS Summit 2011 – Part III

    - by Tara Kizer
    Well, we’re about a month past PASS Summit 2011, and yet I haven’t finished blogging my notes! Between work and home life, I haven’t been able to come up for air in a bit.  Now on to my notes…

    On Thursday of the PASS Summit 2011, I attended Klaus Aschenbrenner’s (blog|twitter) “Advanced SQL Server 2008 Troubleshooting”, Joe Webb’s (blog|twitter) “SQL Server Locking & Blocking Made Simple”, Kalen Delaney’s (blog|twitter) “What Happened? Exploring the Plan Cache”, and Paul Randal’s (blog|twitter) “More DBA Mythbusters”.  I think my head grew two times in size from the Thursday sessions.  Just WOW!

    I took a ton of notes in Klaus' session.  He took a deep dive into how to troubleshoot performance problems.  Here is how he goes about solving a performance problem:
    1. Start by checking the wait stats DMV
    2. System health
    3. Memory issues
    4. I/O issues
    I normally start with blocking and then hit the wait stats.  Here’s the wait stats query (Paul Randal’s) that I use when working on a performance problem.  He highlighted a few waits to be aware of, such as WRITELOG (indicates an I/O subsystem problem), SOS_SCHEDULER_YIELD (indicates a CPU problem), and PAGEIOLATCH_XX (indicates an I/O subsystem problem or a buffer pool problem).  Regarding memory issues, Klaus recommended that, as a bare minimum, one should set “max server memory (MB)” in sp_configure to 2GB or 10% reserved for the OS (whichever comes first).  This is just a starting point though! Regarding I/O issues, Klaus talked about disk partition alignment, which can improve SQL I/O performance by up to 100%.  You should use a 64KB NTFS cluster size, and it’s automatic in Windows 2008 R2.

    Joe’s locking and blocking presentation was a good session to really clear up the fog in my mind about locking.  One takeaway that I had no idea could be done was that you can set a timeout in T-SQL code via LOCK_TIMEOUT.  If you do this via the application, you should trap error 1222.

    Kalen’s session went into execution plans.  The minimum size of a plan is 24k.  This adds up fast, especially if you have a lot of plans that don’t get reused much.  You can use sys.dm_exec_cached_plans to check how often a plan is being reused by checking the usecounts column.  She said that we can use DBCC FLUSHPROCINDB to clear out the stored procedure cache for a specific database.  I didn’t know we had this available, so this was great to hear.  This will be less intrusive than when an emergency comes up where I’ve needed to run DBCC FREEPROCCACHE.  Kalen said one should enable “optimize for ad hoc workloads” if you have an ad hoc workload.  This stores only a 300-byte stub of the first plan, and if it gets run again, it’ll store the whole thing.  This helps with plan cache bloat.  I have a lot of systems that use prepared statements, and Kalen says we can simulate those calls by using sp_executesql.  Cool!

    Paul did a series of posts last year to debunk various myths and misconceptions around SQL Server.  He continues to debunk things via “DBA Mythbusters”.  You can get a PDF of a bunch of these here.  One of the myths he went over is the number of tempdb data files that you should have.  Back in 2000, the recommendation was to have as many tempdb data files as there are CPU cores on your server.  This no longer holds true due to the numerous cores we have on our servers.  Paul says you should start out with 1/4 to 1/2 the number of cores and work your way up from there.  BUT!  Paul likes what Bob Ward (twitter) says on this topic:
    - 8 or fewer cores: set the number of files equal to the number of cores
    - More than 8 cores: start with 8 files and increase in blocks of 4

    One common myth out there is to set your MAXDOP to 1 for an OLTP workload with high CXPACKET waits.  Instead of that, dig deeper first.  Look for missing indexes, out-of-date statistics, increase the “cost threshold for parallelism” setting, and perhaps set MAXDOP at the query level.  Paul stressed that you should not plan a backup strategy but instead plan a restore strategy.  What are your recoverability requirements?  Once you know that, now plan out your backups.

    As Paul always does, he talked about DBCC CHECKDB.  He said how fabulous it is.  I didn’t want to interrupt the presentation, so after his session had ended, I asked Paul about the need to run DBCC CHECKDB on your mirror systems.  You could have data corruption occur at the mirror and not at the principal server.  If you aren’t checking for data corruption on your mirror systems, you could be failing over to a corrupt database in the case of a disaster or even a planned failover.  You can’t run DBCC CHECKDB against the mirrored database, but you can run it against a snapshot off the mirrored database.

    Read the article

  • Bug unsubscribing from Ubuntu One Mobile

    - by rhino
    Hi guys, I have an Ubuntu One Mobile subscription, which I can see in my subscriptions page: one.ubuntu.com/account/subscription/756082 I no longer need my Ubuntu One Mobile subscription, so I click the link to cancel the Mobile service subscription: one.ubuntu.com/account/cancel/756082/ Then I confirm that request to cancel: one.ubuntu.com/account/cancel/756082/confirm/ But the process ends there, showing a "Something has gone wrong" page, and my subscription remains active :( The same problem occurred when I attempted this a few weeks back, so it's not a temporary problem, I'm thinking. Any input gratefully received. I would like to report this problem directly to the maintainer of this part of the Ubuntu site but cannot see how to do that.

    Read the article

  • T-SQL Jokes

    - by Tomaz.tsql
    A SQL table walks to a psychiatrist, Dr. Index.
    Table: "Doctor, I have a problem."
    Dr: "What kind of a problem?"
    Table: "I'm a mess. I have things all over the place; I always look for my stuff."
    Dr: "No problem. I will get you in order."
    Index and Table are reading the book "Index-Sutra".
    Table: "Oh, baby, tonight we can try a clustered position."
    Index: "Yeah baby, we can also try a covered position."
    Table: "Or maybe a multiple clustered position"...(read more)

    Read the article

  • Mouse and Keyboard Freeze

    - by kev
    I installed Ubuntu 10.10 today and have had mouse problems since. Symptoms: at some arbitrary point in time (frequency: 2-3 times per hour), the mouse and keyboard stop working forever (maybe). I started System Monitor and found out the network was shut down just before the mouse froze. Sometimes my keyboard keeps typing one key. For example: 77777777777777777777777777777777777777777777777777777..... (it keeps typing for 20 sec). I found a script that solves the freeze problem (I hit the power button):
    -----------------/etc/acpi/powerbtn.sh------------------------
    event=button[ /]power
    action=/usr/sbin/fix_mouse.sh
    -----------------/usr/sbin/fix_mouse.sh------------------------
    rmmod psmouse
    modprobe psmouse
    Yesterday I installed Ubuntu 10.04, which FAILED and also had the mouse problem. When I switched back to Windows XP, the network card was down; it kept connecting and disconnecting once per second.
    CPU: i5
    Motherboard: ASUS P7P55D
    OS: Windows XP + Ubuntu 10.10
    Video Card: ATI 5770
    Mouse, Keyboard: PS/2

    Read the article

  • Theoretically bug-free programs

    - by user2443423
    I have read a lot of articles which state that code can't be bug-free, and they are talking about these theorems:
    - Halting problem
    - Gödel's incompleteness theorem
    - Rice's theorem
    Actually, Rice's theorem looks like an implication of the halting problem, and the halting problem is closely related to Gödel's incompleteness theorem. Does this imply that every program will have at least one unintended behavior? Or does it mean that it's not possible to write code to verify it? What about recursive checking? Let's assume that I have two programs. Both of them have bugs, but they don't share the same bug. What will happen if I run them concurrently? And of course, most of the discussions talk about Turing machines. What about linear-bounded automata (real computers)?
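
    To make the halting-problem link concrete, here is the classic diagonal argument as a small C# sketch; WouldHalt is hypothetical by design, because the argument shows that no general implementation of it (that is, no perfectly general termination or bug checker) can exist:

      static class HaltingSketch
      {
          // Hypothetical oracle: true iff running `program` on `input` would eventually halt.
          static bool WouldHalt(string program, string input) =>
              throw new System.NotImplementedException("cannot exist in general");

          // Feed a program its own source. If the oracle says it halts, loop forever;
          // if the oracle says it loops, halt immediately. Running Diagonal on its own
          // source contradicts whatever the oracle answers, so the oracle is impossible.
          static void Diagonal(string program)
          {
              if (WouldHalt(program, program))
                  while (true) { }
          }
      }

    Note that this rules out a universal verifier for arbitrary programs; it does not say every individual program must contain a bug, and for a fixed finite machine (a linear-bounded automaton rather than a Turing machine) halting is decidable in principle, just hopelessly expensive.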

    Read the article

  • embedding LEFT OUTER JOIN within INNER JOIN

    - by user3424954
    I am having some problems with one of the questions answered in the book "SQL FOR MERE MORTALS".
    Here is the problem statement
    Here is the Database Structure
    Here is the answer which I am unable to comprehend
    Here is an answer which looks perfect to me
    Now the problem I am having with the first answer is: we first use a LEFT OUTER JOIN for recipe classes and recipes, so it selects all recipe class rows but only matching recipes. Perfectly fine, as the question demands. Let's call this result set R. Now, in the next step, when we use an INNER JOIN to join Recipe_Ingredients, it should filter out the rows of R in which the RecipeID doesn't match a RecipeID in Recipe_Ingredients, and hence filter out the related recipe class and recipe description as well (since it filters out the entire row of R). So this contradicts the problem, which demands that all RecipeID and RecipeDescription values from the Recipe_Classes table be displayed in this very step. How can it be correct? Or am I missing some concept?

    Read the article

  • Designing Algorithm Flowchart Application

    - by l46kok
    I need to develop a GUI application in C# where users can freely add conditional/statement blocks to the algorithm flowchart like the one shown below. By freely, I mean users can add a block wherever the arrows are. I'm having some problems brainstorming how to approach this, especially what to choose for my data structure to store the blocks. I was thinking of a LinkedList, since everything follows a linear fashion and every node always has a head and tail, but the If/Else block (ba) has two branches (heads) to store, so this complicates things a little bit. How would a smart one approach problems like this? My apologies if this question isn't suited for Programmers Stack Exchange, but this is more of a conceptual problem than an implementation problem, so I figured this place was appropriate for the question.
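
    One way around the LinkedList limitation (a sketch of one possible model with my own names, not the only approach) is to store the flowchart as a small graph in which every block owns its outgoing arrows: statement blocks have one successor, If/Else blocks have two, and "adding a block on an arrow" is just rewiring that edge:

      public abstract class Block
      {
          public string Label { get; set; }
          protected Block(string label) { Label = label; }
      }

      public class StatementBlock : Block
      {
          public Block Next { get; set; }              // the single outgoing arrow
          public StatementBlock(string label) : base(label) { }
      }

      public class ConditionBlock : Block              // If/Else
      {
          public Block WhenTrue { get; set; }          // "yes" branch
          public Block WhenFalse { get; set; }         // "no" branch
          public ConditionBlock(string label) : base(label) { }
      }

      public static class Flowchart
      {
          // Insert a new block on the arrow that currently leaves `anchor`.
          public static void InsertAfter(StatementBlock anchor, StatementBlock newBlock)
          {
              newBlock.Next = anchor.Next;
              anchor.Next = newBlock;
          }
      }

    Loops and joins fall out of the same representation, since nothing stops two blocks from pointing at the same successor.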

    Read the article
