Search Results

Search found 7651 results on 307 pages for 'execution plan'.


  • Shared SQL Server 2008

    - by nazaf
    Hi, I have a Windows Hyper-V VPS plan with 1024 MB of RAM. After installing SQL Server 2008 Express, my memory usage went up to 75% without even running my site yet. I know that SQL Server consumes a lot of memory, so I decided to host my DB on a shared server instead. Which of the following is more scalable: installing my DB on my VPS, or on a shared server? If the latter, can you recommend a good shared host? Thanks.
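
    Before giving up on the VPS, it may be worth noting that SQL Server Express grabs memory greedily by default but can be capped. A minimal T-SQL sketch, assuming admin access to the instance (the 256 MB figure is an example, not a recommendation):

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        -- Cap the buffer pool (value in MB) so Express stops eating the VPS:
        EXEC sp_configure 'max server memory', 256;
        RECONFIGURE;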

    Read the article

  • Episode-by-episode DVD rip to AVI

    - by ProfKaos
    I'm looking for a tool to rip DVD files to AVI files. VidCode looks cool, but it wants to convert the whole DVD (all files in the VIDEO_TS folder) to one AVI. I would like to pull each episode on the DVD into its own AVI. Is there a way to do that with VidCode, or with another ripper? My Plan B for VidCode is to manually separate the files for each episode into their own VIDEO_TS folders. Is this possible? Is there an easy, lazy way to automate this?
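
    For what it's worth, a sketch using HandBrakeCLI (a different ripper, offered only as an assumed alternative) relies on each episode showing up as its own DVD title. Paths and title numbers are hypothetical, and AVI container support varies by HandBrake version, so MKV output is shown:

        # Scan first to see how many titles the disc has:
        HandBrakeCLI -i /media/dvd -t 0
        # Then rip each title/episode to its own file:
        for t in 1 2 3 4; do
            HandBrakeCLI -i /media/dvd -t "$t" -o "episode_$t.mkv"
        done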

    Read the article

  • How to reduce Entity Framework 4 query compile time?

    - by Rup
    Summary: we're having problems with EF4 query compilation times of 12+ seconds. Cached queries will only get us so far; are there any ways we can actually reduce the compilation time? Is there anything we might be doing wrong that we can look for? Thanks!

    We have an EF4 model which is exposed over WCF services. For each of our entity types we expose a method to fetch and return the whole entity for display / edit, including a number of referenced child objects. For one particular entity we have to .Include() 31 tables / sub-tables to return all relevant data. Unfortunately this makes the EF query compilation prohibitively slow: it takes 12-15 seconds to compile and builds a 7,800-line, 300K query. This is the back-end of a web UI which will need to be snappier than that.

    Is there anything we can do to improve this? We can CompiledQuery.Compile this - that doesn't do any work until first use, and so helps the second and subsequent executions, but our customer is nervous that the first usage shouldn't be slow either. Similarly, if the IIS app pool hosting the web service gets recycled we'll lose the cached plan, although we can increase lifetimes to minimise this. Also, I can't see a way to precompile this ahead of time and / or to serialise out the EF compiled query cache (short of reflection tricks); the CompiledQuery object only contains a GUID reference into the cache, so it's the cache we really care about. (Writing this out, it occurs to me that I can kick off something in the background from app_startup to execute all queries to get them compiled - is that safe?)

    However, even if we do solve that problem, we build up our search queries dynamically with LINQ-to-Entities clauses based on which parameters we're searching on. I don't think the SQL generator does a good enough job that we can move all that logic into the SQL layer, so I don't think we can pre-compile our search queries. This is less serious because the search results use fewer tables, so it's only a 3-4 second compile rather than 12-15, but the customer thinks that still won't really be acceptable to end-users. So we really need to reduce the query compilation time somehow.

    Any ideas? Profiling points to ELinqQueryState.GetExecutionPlan as the place to start, and I have attempted to step into that, but without the real .NET 4 source available I couldn't get very far, and the source generated by Reflector won't let me step into some functions or set breakpoints in them. The project was upgraded from .NET 3.5, so I have tried regenerating the EDMX from scratch in EF4 in case there was something wrong with it, but that didn't help. I have tried the EFProf utility advertised here, but it doesn't look like it would help with this; my large query crashes its data collector anyway. I have run the generated query through SQL performance tuning and it already has 100% index usage, so I can't see anything wrong with the database that would cause the query generator problems. Is there something O(n^2) in the execution plan compiler - is breaking this down into blocks of separate data loads, rather than all 32 tables at once, likely to help? Setting EF to lazy-load didn't help. I've bought the pre-release O'Reilly Julie Lerman EF4 book, but I can't find anything in there to help beyond 'compile your queries'. I don't understand why it's taking 12-15 seconds to generate a single select across 32 tables, so I'm optimistic there's some scope for improvement!

    Thanks for any suggestions!
    We're running against SQL Server 2008, in case that matters, with XP / 7 / Server 2008 R2 clients using RTM VS2010.
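
    On the background pre-warm question, a minimal sketch of the app_startup idea, assuming an EDMX-generated ObjectContext. MyEntities, Orders and the -1 probe value are hypothetical names; EF4's query cache is per-AppDomain, so a warm-up done once benefits all later requests:

        using System;
        using System.Data.Objects;
        using System.Linq;
        using System.Threading;

        static class QueryWarmup
        {
            public static readonly Func<MyEntities, int, IQueryable<Order>> OrderById =
                CompiledQuery.Compile((MyEntities ctx, int id) =>
                    ctx.Orders.Where(o => o.Id == id));

            // Call from Application_Start so the first real user doesn't
            // pay the 12-15s compilation cost.
            public static void WarmUp()
            {
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    using (var ctx = new MyEntities())
                    {
                        OrderById(ctx, -1).ToList();  // forces compile + one cheap execution
                    }
                });
            }
        }

    Separately, EF's view generation cost (distinct from plan compilation) can be pre-generated at build time with EdmGen.exe /mode:ViewGeneration, which is the only ahead-of-time compilation EF4 officially offers.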

    Read the article

  • Help with running crontab from root

    - by user242065
    I'm using OS X and having trouble getting a cron job to run. I type the following:

    $ sudo -i
    $ crontab -e

    I then enter:

    * * * * * root ifconfig en0 down > /dev/null
    0 19 * * * root ifconfig en0 down > /dev/null
    0 7 * * * root ifconfig en0 up > /dev/null

    ...and no success. The first line is for testing; I want it to shut off my internet. The next two lines I plan to leave in once I get this working. If I type this into the terminal, the internet goes off:

    ifconfig en0 down

    Why is my cron job not shutting down the internet? FYI: this is a follow-up question from http://stackoverflow.com/questions/3027362/how-can-i-write-a-cron-job-that-will-block-my-internet-from-7pm-to-7am-so-i-can - most of the comments there are people making fun of me, and a few attempts to solve the problem without cron jobs.
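
    The likely culprit: a per-user crontab (which is what crontab -e edits, even under sudo) has no user column - the six-field format with a username belongs only in /etc/crontab. As written, cron tries to run a command called "root" with "ifconfig en0 down" as its arguments. A sketch of the corrected entries, assuming /sbin is where OS X keeps ifconfig:

        # root's crontab via `sudo crontab -e` - note: no "root" column
        0 19 * * * /sbin/ifconfig en0 down > /dev/null 2>&1
        0 7  * * * /sbin/ifconfig en0 up   > /dev/null 2>&1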

    Read the article

  • Changing Recovery Model in Replicated Database

    - by Rob
    I am now the proud owner of two servers that replicate with each other. I had nothing to do with the install, but (of course) now I have to support the databases. Both databases are in the Simple recovery model, but the users want to ensure as little data loss as possible, so I'm thinking that I should change the recovery model over to Full and start doing transaction log backups. I wasn't planning on backing up the subscribing database, only the publisher. Is this the right plan? Do I need to switch both the subscriber and the publisher to Full, or can I leave the subscriber in Simple and have the publisher in Full? When I change the recovery model in one (or both), do the databases need to be offline? Thanks
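
    On the offline question, the switch itself is an online operation. A hedged T-SQL sketch (database and paths are hypothetical) - note that the log backup chain only starts after the next full backup taken in the Full model:

        ALTER DATABASE PubDb SET RECOVERY FULL;   -- online; no downtime needed
        BACKUP DATABASE PubDb TO DISK = 'D:\Backups\PubDb.bak';
        -- From here on, schedule these regularly to limit data loss
        -- and keep the log from growing unchecked:
        BACKUP LOG PubDb TO DISK = 'D:\Backups\PubDb.trn';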

    Read the article

  • Temporary "Backup" of SharePoint Content During Feature and Solution Deployment

    - by ccomet
    I need to decide on a method for storing a subset of the content in a SharePoint site, so that when I delete and recreate certain lists as part of a feature activation, I can re-insert all of this content back where it belongs. I have an idea myself, but I don't know if it's the only method and, more importantly, the right method.

    My client has me creating a SharePoint system for them to communicate with their clients. The business process has maybe 5 stages in it (maybe more; I don't even know, because they don't tell me everything), and the current system I've written over the past months is maybe 2 stages through. This meets our deadline of completing those systems by Monday next week... but at that point my client is planning on making the site live. In effect, their work with their clients will run parallel with my work for them. As I complete each following stage of the process on a separate test server, I'll push it onto the live server; scheduled downtimes during non-business times (like a weekend) will be available for me to perform these pushes. Keeping pace so that my development is faster than the actual business process is my own problem and off-topic... so let's get back to the problem I stated at the start of this post.

    In this system, we have sets of features which create lists for their associated content types and field types when activated, and delete these lists when the feature is deactivated. Most updates don't need to deactivate and reactivate these features - workflow changes, custom actions, custom forms, and similar ilk. But some parts do require it. On my test server it's okay for me to obliterate lists, but once the site is live and there's real correspondence data, that's absolutely unacceptable. So when I need to implement a new change in functionality, I need to be able to store the data currently present in several lists, deactivate the feature, reactivate the feature, and restore all of the data. Perhaps I have hoisted myself by my own petard with the feature system I implemented; unfortunately, the need to later create several of these "project sites" meant I had to write a lot of my code with "can be deployed repeatedly" in mind.

    My current plan is to run through the lists and libraries which will be affected by the particular feature that is to be reset. Files and all of their versions will be saved in a directory on the server. Then a set of text files will be used to store all of the important field values for the items. This includes a lot of cross-list reference lookups that will need to be maintained, but that's simple enough. Then I deactivate the feature, deploy the new solution, and reactivate the feature. We upload all of the files in the order specified by their versions and update them with the stored fields for those versions, so that we retain the version structure. As each one is first uploaded, the new ID is picked out, and all relevant lookups in the rest of the files are updated (in some manner that ensures I don't re-update them later with an incorrect value, of course). After that, we run through the rest of the items in the order most conducive to keeping the relational data correct. That roughly summarizes the current plan.

    To my advantage, there are no long-running workflows in the system that will be affected by this, so I won't have to worry about anything "still running" when I do this stuff. I don't really know all the cons of this approach... I can imagine they're quite hefty. But I'm unsure what other choices I even have, and my searches haven't turned up anything. Is there anyone who can think of a better idea? Or will anyone just tell me that I really have no other choice? Thanks in advance!
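
    One alternative worth weighing against hand-rolled text files: SharePoint's Content Migration (deployment) API can export a list complete with versions and re-import it after the feature is reactivated. A hedged sketch - URLs and paths are hypothetical, and cross-list lookups still break unless object identity is retained on import:

        using Microsoft.SharePoint.Deployment;

        var settings = new SPExportSettings
        {
            SiteUrl = "http://server/sites/project",      // hypothetical
            ExportMethod = SPExportMethods.ExportAll,
            FileLocation = @"C:\Temp\ListBackup",
            FileCompression = false
        };
        settings.ExportObjects.Add(new SPExportObject
        {
            Type = SPDeploymentObjectType.List,
            Url = "http://server/sites/project/Lists/Correspondence"
        });
        new SPExport(settings).Run();
        // After reactivating the feature, SPImport with matching
        // SPImportSettings restores the exported content.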

    Read the article

  • Why do I get "Sequence contains no elements"?

    - by Gary McGill
    NOTE: see edits at bottom. I am an idiot. I had the following code to process a set of tag names and identify/process new ones:

    IEnumerable<string> tagNames = GetTagNames();
    List<Tag> allTags = GetAllTags();
    var newTagNames = tagNames.Where(n => !allTags.Any(t => t.Name == n));
    foreach (var tagName in newTagNames)
    {
        // ...
    }

    ...and this worked fine, except that it failed to deal with cases where there's a tag called "Foo" and the list contains "foo". In other words, it wasn't doing a case-insensitive comparison. I changed the test to use a case-insensitive comparison, as follows:

    var newTagNames = tagNames.Where(n => !allTags.Any(t => t.Name.Equals(n, StringComparison.InvariantCultureIgnoreCase)));

    ...and suddenly I get an exception thrown when the foreach runs (and calls MoveNext on) newTagNames. The exception says: "Sequence contains no elements". I'm confused by this. Why would foreach insist on the sequence being non-empty? I'd expect to see that error if I was calling First(), but not when using foreach.

    EDIT: more info. This is getting weirder by the minute. Because my code is in an async method, and I'm superstitious, I decided that there was too much "distance" between the point at which the exception is raised and the point at which it's caught and reported. So I put a try/catch around the offending code, in the hope of verifying that the exception being thrown really was what I thought it was. Now I can step through in the debugger to the foreach line, verify that the sequence is empty, and step right up to the bit where the debugger highlights the word "in". One more step, and I'm in my exception handler. But not the exception handler I just added, no! It lands in my outermost exception handler, without visiting my recently-added one! It doesn't match catch (Exception ex), and nor does it match a plain catch. (I did also put in a finally, and verified that it does visit that on the way out.) I've always taken it on faith that exception handlers such as those would catch any exception. I'm scared now. I need an adult.

    EDIT 2: OK, so, um, false alarm... The exception was not being caught by my local try/catch simply because it was not being raised by the code I thought. As I said above, I watched the execution in the debugger jump from the "in" of the foreach straight to the outer exception handler, hence my (wrong) assumption that that was where the error lay. However, with the empty enumeration, that was simply the last statement executed within the function, and for some reason the debugger did not show me the step out of the function or the execution of the next statement at the point of call - which was in fact the one causing the error. Apologies to all those who responded, and if you would like to create an answer saying that I am an idiot, I will gladly accept it. That is, if I ever show my face on SO again...
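
    Incidentally, the original LINQ can be made case-insensitive without the nested scan over allTags on every name. A sketch using StringComparer (same InvariantCultureIgnoreCase semantics, but set-based):

        var existing = new HashSet<string>(allTags.Select(t => t.Name),
                                           StringComparer.InvariantCultureIgnoreCase);
        var newTagNames = tagNames.Where(n => !existing.Contains(n));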

    Read the article

  • Samba - Is my server vulnerable to CVE-2008-1105?

    - by Joao Heleno
    Hi! I have a CentOS server that is running Samba and I want to verify the vulnerability addressed by CVE-2008-1105. What scenarios can I build in order to run the exploit that is mentioned in http://secunia.com/advisories/cve_reference/CVE-2008-1105/? http://secunia.com/secunia_research/2008-20/advisory/ says that "Successful exploitation allows execution of arbitrary code by tricking a user into connecting to a malicious server (e.g. by clicking an "smb://" link) or by sending specially crafted packets to an "nmbd" server configured as a local or domain master browser." More info: http://www.samba.org/samba/security/CVE-2008-1105.html http://secunia.com/secunia_research/2008-20/advisory/
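
    Since CentOS backports security fixes without bumping the upstream version number, checking the package changelog is usually more reliable than reading the version from smbd alone. A sketch, assuming the stock CentOS samba RPM:

        smbd -V
        rpm -q --changelog samba | grep -i CVE-2008-1105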

    Read the article

  • How can I synchronize Google contacts and calendar with Palm Centro and Outlook 2000?

    - by meomaxy
    Seeking a free solution to synchronize my calendar and contacts between my Palm Centro and Google contacts and calendar. I currently sync my Palm contacts and calendar with Outlook 2000 on Windows XP. Google Calendar Sync does not support Outlook 2000, and I would rather not pay to upgrade Outlook just for this, as I don't even use Outlook much anymore. I wouldn't mind migrating my calendar and contacts off of it, as long as I can still sync with the Palm. I don't have an unlimited data plan for my Centro, so I can't use a solution that wirelessly syncs directly to Google. I have set up GCALDaemon to sync up my Google calendar with multiple machines running Rainlendar, but I have not yet figured out a way to synchronize that data with the Palm.

    Read the article

  • How can I download django-1.2 and use it across multiple sites when the system default is 1.1?

    - by meder
    I'm on Debian Lenny, and the latest backports django is 1.1.1 final. I don't want to use sid, so I probably have to download django myself. I have my sites located at /www/, and I plan on using mod_wsgi with Apache2 as a reverse proxy behind nginx. Now that I've downloaded pip and virtualenv (through pip), can someone explain how I can get my /www/ sites - which are yet to be made - to all use django-1.2? Question 1.1: Where do you suggest I put django-1.2? I know you can store it anywhere, but where would you store it? Question 1.2: After installing it, how do you actually tie that django-1.2, instead of the system default django, to the reverse-proxied Apache conf? I would prefer answers that are specific rather than vague and that include examples of setups.
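
    A sketch of one common setup, since virtualenv and pip are already in place - the paths, Python version and Apache directive are illustrative assumptions, not the only way to wire it:

        # One virtualenv shared by the /www/ sites keeps 1.2 out of the system packages:
        virtualenv --no-site-packages /www/env-django12
        /www/env-django12/bin/pip install Django==1.2

        # Then point mod_wsgi's daemon process at that environment in each vhost:
        #   WSGIDaemonProcess mysite python-path=/www/env-django12/lib/python2.5/site-packages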

    Read the article

  • keeping URL domain the same when pointing A record to a hosting account

    - by kwight
    Hello, I have a new WordPress website and a legacy billing system. For technical reasons, they cannot be on the same hosting plan. The hosting account for billing (and the original abc.com website) also manages DNS and mail. I'm trying to incorporate the new website under the same domain, e.g. abc.com (website, on a different hosting account) and billing.abc.com (billing). I assume the answer is having a different A record for abc.com. I currently have a cPanel shared hosting account to use for the website (but can upgrade if necessary). How would I set this up in cPanel so that the URLs work properly? Do I need a dedicated IP, and then add the domain as an addon domain? Thanks
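
    The assumption is right: only the apex A record needs to move. A hypothetical zone sketch (203.0.113.x are documentation addresses; the MX line is shown only to stress that mail should keep pointing at the old box):

        abc.com.            IN A      203.0.113.10   ; new WordPress hosting account
        billing.abc.com.    IN A      203.0.113.20   ; legacy billing host
        abc.com.            IN MX 10  mail.abc.com.
        mail.abc.com.       IN A      203.0.113.20   ; mail stays on the old account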

    Read the article

  • European wireless data

    - by drewk
    I am on my way to Europe for 20 days with the wife and teenage kids. Among us, we have 3 iPhones, 1 Blackberry, 2 iPad 3G and 2 Macbook Pro laptops. Each has Skype. I am very concerned about AT&T data charges, and I am looking for a sensible alternative. For example, the AT&T data plans for the phones and iPads are $200 EACH for 200 MB per device. That is $1,200 and does not even cover the laptops. Ouch!!! I am thinking about a prepaid data plan from a European cell system of some sort combined with a GSM data modem and a Cradlepoint router, such as the CTR500. So, questions: 1) Anyone know of a GSM data source in Europe where I can buy a GSM Modem compatible with the CTR500 at a reasonable prepaid rate? 2) Experience with the CTR500?

    Read the article

  • Need advice on choosing AWS EC2

    - by Mayank
    I'm planning to host a website where, in the first phase, I would target 30,000 users. It is in PHP and runs on an Apache server. I'm assuming 8,000 users can be online in the worst-case scenario, and 1,000 of them will be uploading photographs. A photograph will be resized to around 1MB at the client side, and one HTTP request uploads only one photograph. My plan:

    - 2 Small EC2 instances to run Apache httpd
    - 2 Small EC2 instances for the DB (PostgreSQL): one to write data, the other as its read replica
    - EBS volumes for the DBs
    - Last, Amazon S3 for the uploaded photographs

    My questions: Is a Small EC2 instance more than what I require - I mean, should I go for Micro? Is 8,000 simultaneous users the right number to size against for a new website? Or should I go for Small instances to make the site capable of handling spikes?

    Read the article

  • How to get both PIPESTATUS and output in bash script

    - by Mustafa Serdar Sanli
    I'm trying to get the last modification date of a file with this command:

    TM_LOCAL=`ls -l --time-style=long-iso ~/.vimrc | awk '{ print $6" "$7 }'`

    TM_LOCAL has a value like "2012-05-16 23:18" after execution of this line. I'd also like to check PIPESTATUS to see if there was an error. For example, if the file does not exist, ls returns 2, but $? has the value 0, as it holds the return value of awk. If I run the pipeline alone, I can check the return value of ls by looking at ${PIPESTATUS[0]}:

    ls -l --time-style=long-iso ~/.vimrc | awk '{ print $6" "$7 }'

    But $PIPESTATUS does not work as I expected if I assign the output to a variable, as in the first example. In that case, the $PIPESTATUS array has only 1 element, which is the same as $?. So, the question is: how can I get both $PIPESTATUS and assign the output to a variable at the same time?
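
    A sketch of one way out, assuming bash: with pipefail set, a pipeline's exit status is that of the last failing command, so ls's failure survives the command substitution even though awk exits 0:

        set -o pipefail
        TM_LOCAL=$(ls -l --time-style=long-iso ~/.vimrc | awk '{ print $6" "$7 }')
        rc=$?                # 2 if ls failed, 0 if the whole pipeline succeeded
        set +o pipefail
        if [ "$rc" -ne 0 ]; then
            echo "ls failed with status $rc" >&2
        fi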

    Read the article

  • proxy.pac file performance optimization

    - by Tuinslak
    I reroute certain websites through a proxy with a proxy.pac file. It basically looks like this:

    if (shExpMatch(host, "www.youtube.com")) { return "PROXY proxy.domain.tld:8080; DIRECT" }
    if (shExpMatch(host, "youtube.com")) { return "PROXY proxy.domain.tld:8080; DIRECT" }

    At the moment about 125 sites are rerouted using this method. However, I plan on adding quite a few more domains to it, and I'm guessing it will eventually be a list of 500-1000 domains. It's important not to reroute all traffic through the proxy. What's the best way to keep this file optimized, performance-wise? Thanks
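
    A minimal sketch of one approach: a chain of shExpMatch() calls is evaluated linearly on every request, while a single object lookup stays effectively constant-time however many domains are listed (the proxy address is copied from the question; everything else is illustrative):

        var proxied = {
            "youtube.com": true,
            "www.youtube.com": true
            // ... the remaining domains, one property each
        };

        function FindProxyForURL(url, host) {
            if (proxied[host])
                return "PROXY proxy.domain.tld:8080; DIRECT";
            return "DIRECT";
        }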

    Read the article

  • Dump output from REPL

    - by Ankit Soni
    I'm writing SML programs, and I'd like a way to quickly see the output from running a program in the REPL without actually running the REPL interactively (to quickly see if a program has syntax errors - I plan to use this as a make program for .sml files in vim, to view the output inside vim). Currently, I have this:

    sml file.sml | echo -e "\004"

    So it runs the program and then echoes Ctrl-D to exit the REPL. The problem is that it's too quick to send the Ctrl-D key, so there is no output. I tried this too:

    sml file.sml | sleep 2 ; echo -e "\004"

    But that isn't doing it either. Any ideas on how I can get a dump of the output from the REPL?
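
    A sketch of the likely fix: the pipe sends sml's output into echo (which ignores its stdin), so the Ctrl-D never reaches the REPL's input at all. Redirecting stdin from /dev/null instead gives the REPL an immediate EOF once the file has loaded:

        sml file.sml < /dev/null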

    Read the article

  • Android tethering question

    - by Solignis
    Hi there, I have a Motorola Droid 1 with Android 2.2 on it. I found something odd that I don't understand. If I plug my phone into my home computer, which is Windows 7, and turn on the tethering ability, I am met with a Verizon captive portal telling me I need to pay for a tethering plan (which I don't have). Now the weird thing: if I plug the same phone into my Ubuntu 10.10 laptop and enable tethering, it works and I can get on the internet with no captive portal showing up. The only difference I noticed about the connections was that Windows connected to the phone with an NDIS driver, and Linux connected to the phone with what I think is a raw device mapping. Would that have anything to do with it?

    Read the article

  • How to create a Linux Media Server using Ubuntu?

    - by Thomas
    Hello all: I'm an intermediate Linux user and a relative beginner to servers. I would like some help finding resources on setting up a basic server. I have Googled, and am a member of the Ubuntu Forums, but figure it can't hurt to ask the Stack Overflow community for help as well. I plan on installing on an old laptop (Lenovo Thinkpad R61i or Toshiba Satellite A105). I have downloaded the latest Ubuntu (9.10) but don't know how to do any of the configuration. I just want a server to store my files, where I can access them (download and/or stream) from a browser. Any help you can give is greatly appreciated. Thanks! Thomas
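
    For the "files in a browser" requirement alone, a stock web server's directory listing may be all that's needed. A minimal sketch, assuming Ubuntu 9.10's default Apache layout and a hypothetical media path:

        sudo apt-get update
        sudo apt-get install apache2
        sudo ln -s /home/thomas/media /var/www/media
        # Then browse to http://<server-ip>/media to download or stream files.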

    Read the article

  • MSSQL: Ultimate configuration for rebuilding indexes + statistics

    - by Niels Bosma
    My database is getting slower as it grows, even though I have a bunch of indexes set up. Yesterday I figured out that I need to set up a maintenance plan to rebuild the indexes etc. So my question is: what's the ultimate configuration for this? Do I need all of "Rebuild index task", "Reorganize index task" and "Update statistics task"? Anything else I need to set up - shrink database? (Today, the only maintenance plan I have is backup.) Does it matter in what order I run them? Any configuration options I should be aware of? I've read of problems with the log growing wild; how do I fix that? My transaction log is quite small and is usually a problem for me.
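
    A hedged T-SQL sketch of the usual division of labour (table names are illustrative). The common rule of thumb is to reorganize at 5-30% fragmentation and rebuild above 30%; note that a rebuild already refreshes statistics on the rebuilt indexes, so running all three tasks against every table mostly duplicates work:

        -- Find fragmented indexes in the current database:
        SELECT OBJECT_NAME(object_id) AS table_name, index_id,
               avg_fragmentation_in_percent
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED')
        WHERE avg_fragmentation_in_percent > 5;

        ALTER INDEX ALL ON dbo.Orders REORGANIZE;    -- light fragmentation (5-30%)
        ALTER INDEX ALL ON dbo.Orders REBUILD;       -- heavy fragmentation (> 30%)
        UPDATE STATISTICS dbo.Orders;                -- for tables that were only reorganized

    As for the log growing wild: in the Full recovery model that is normally tamed by scheduling frequent transaction log backups, not by shrinking the database.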

    Read the article

  • Spoof user agent for GoGo Inflight Internet?

    - by AndyL
    Is it possible to trick the GoGo Inflight WiFi on airlines into thinking that you have a mobile device instead of a laptop? It seems like most airlines that offer in-flight wireless these days use GoGo, and they offer different pricing for mobile devices and laptops. It seems like they are checking the browser's user agent. Out of curiosity, is it possible to use a Firefox extension like this one to spoof the user-agent and allow a laptop to access the internet under a GoGo mobile plan? And how would GoGo handle something like an IMAP email client, such as Thunderbird? Do IMAP clients have a user-agent field as well that would normally identify whether the mail client is running on a laptop or mobile device?
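
    For what it's worth, Firefox can override its user agent without any extension via a hidden preference - a sketch, with an example mobile UA string that is purely illustrative (no claim that GoGo accepts it):

        // about:config -> right-click -> New -> String
        general.useragent.override = "Mozilla/5.0 (iPhone; U; CPU iPhone OS 4_0 like Mac OS X) AppleWebKit/532.9 Mobile"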

    Read the article

  • How to copy all recovery points to another drive? - Norton Ghost

    - by chobo2
    Hi, I have Norton Ghost, and I have 2 drives dedicated to backing up my files: one is an external drive, the other is internal. Now my internal drive has filled up with backups, and I want to copy all those backups to my external drive. However, it seems to want me to do it one by one. Can I do it as a batch or some mass copy, so it does one after another and I can, say, leave my computer on all night? My plan was to fill up my internal drive, copy it to my external drive, fill up the internal drive again, and copy that over too. Once my external drive filled up, I would then start deleting the oldest backups, which would probably still leave me about 6 months of backups that I could go back through if I ever needed to.
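
    Since recovery points are ordinary files on disk, a plain file copy outside Ghost should handle the move in one unattended pass. A sketch using robocopy (built into Windows Vista/7, a free Resource Kit download for XP/2003); the paths are hypothetical, and .v2i/.iv2i are assumed to be the recovery-point extensions in use:

        robocopy D:\Backups E:\Backups *.v2i *.iv2i /E /Z /R:2 /W:5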

    Read the article

  • Is ODBC on Windows 2003 slower than on Windows 7?

    - by nbolton
    I am seeing some MSSQL 2005 performance issues, and I am trying to diagnose the cause. I am using SQL Profiler to gather query execution times. Both the client (using ODBC) and the SQL server are running on Windows 2003. I am also using a Windows 7 client with a different Windows 2003 server to compare results.

    Windows 7 client / Windows 2003 server:
    - SQL Management Studio: 393ms
    - Through ODBC: 215ms

    Windows 2003 client / Windows 2003 server:
    - SQL Management Studio: approx 155ms
    - Through ODBC: 3145ms

    ...in both cases, I'm running SQL Management Studio on the client. To me, these figures suggest there's something wrong with the ODBC client on the Windows 2003 server. On Windows 7, I see that the ODBC "SQL Server" driver is version 6.01.7600.16385, but on Windows 2003 it is 2000.86.3959.00 (by default). Could this be the problem? Is it possible to update an ODBC driver?

    Read the article

  • DC on Hyper-V Host

    - by Saif Khan
    I've read a few similar questions but am still not clear. I have a small office (13 users) and recently purchased a new single server (this is all I have to work with for now). Server specs:

    - Memory: 16GB
    - Drives: 6 SCSI (2TB)
    - Processor: dual quad-core

    I plan on making the Hyper-V host the DC and then running 2 VMs, one as a file server and the other as an application server. Question: since I am restricted to this single server, would it be a big issue making the Hyper-V host the domain controller? Your input is greatly appreciated.

    Read the article

  • How to rebuild a Li Ion laptop battery?

    - by spoulson
    I have an aging Gateway NX560XL laptop. The battery is toast, and a new one, even aftermarket, starts at $130. So, to experiment, I began tearing apart the old battery to see what could be done. I found it used 8 standard-size 18650 Li-Ion cells, arranged two cells in parallel and then in series (like: ====). Some online shopping revealed ~$7-13/ea replacements, depending on mAh output. My plan is to load-test to determine the bad cells and replace only those, as I've read that typically only 1 or 2 may be bad. I'm proficient with soldering; however, these cells are attached with welded tabs. Some of them broke during disassembly, and I'm not sure how to reattach them. What I found online are cells like these that have solder tabs pre-welded to the ends, so I can solder wires onto them. Is there any guide available that provides the instructions and parts to do this kind of rebuild?

    Read the article

  • Is it reasonable to use my Time Machine backup to migrate to a new primary hard drive?

    - by Michael Haren
    I'm planning to upgrade my MacBook's hard drive. I already use Time Machine to back up the system to an external drive. Is it reasonable to use Time Machine to restore my system to the new laptop drive once I install it? I mean, a restore like this really ought to be fine, right? That's the point of it, after all! I know imaging the drive would be more appropriate, but this plan seems a whole lot easier (albeit probably slower), with practically no risk since my original drive won't be involved. A second question would then be: are there any considerations to be made when doing a Time Machine restore?

    Read the article
