Search Results

Search found 10543 results on 422 pages for 'big bang theory'.

Page 210/422 | < Previous Page | 206 207 208 209 210 211 212 213 214 215 216 217  | Next Page >

  • SQL Server 2008 R2 Express: what is the user limit in a real-world scenario?

    - by PressPlayOnTape
    I know that SQL Server Express has no hard user limit, and every application loads/stresses the server differently. But let's take "a typical accounting application", where users enter records, retrieve data and from time to time run some big custom queries. Can someone share their own experience and tell me roughly how many users can realistically work against a SQL Server Express instance in this scenario? I am looking for an indicative figure, like (as an example): "I had a company with an average of 40 users logged in and the application worked fine on SQL Server Express, but when the users grew to 60 the application started to seem unresponsive" (please note this sentence is pure imagination, I just wrote it as an example).

    Read the article

  • Windows 7 can't copy file - Error 0x800700DF: The file size exceeds the limit allowed and cannot be saved

    - by JJGroover
    Any attempt to copy files larger than about 40 MB from a network share (a SAN running Openfiler / Samba) to my local machine running Windows 7 always results in the following error and the copy fails: Error 0x800700DF: The file size exceeds the limit allowed and cannot be saved. I've tried copying to my C: drive and to a USB drive with the same results. Smaller files copy just fine. Clearly 40 MB is not that big of a file, so I'm assuming it is some buggy interaction between Windows 7 and Samba, perhaps. Google has so far turned up nothing. Can anyone point me in the right direction?
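
    A hedged aside: error 0x800700DF is normally raised by the Windows WebDAV client, whose default transfer limit is roughly 47 MB, so the share may be getting accessed over WebDAV rather than SMB. If that turns out to be the case, the commonly cited fix is to raise the WebClient limit (a sketch; the value shown is the DWORD maximum):

        REM raise the WebDAV client file size limit, then restart the service
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\WebClient\Parameters" /v FileSizeLimitInBytes /t REG_DWORD /d 4294967295 /f
        net stop WebClient
        net start WebClient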

    Read the article

  • SQL clustering on Hyper-V - is a cluster within a cluster a benefit?

    - by Chris W
    This is a re-hash of a question I asked a while back - after a consultant came in firing ideas into other teams in the department, the whole issue has been raised again, hence I'm looking for more detailed answers. We're intending to set up a multi-instance SQL cluster across a number of physical blades, which will run a variety of different systems across the SQL instances. In general use there will be one virtual SQL instance running on each VM host, and each VM host will run on a dedicated underlying blade. The set-up should give us lots of flexibility for maintenance of any individual VM or underlying blade, with all the SQL instances able to fail over as required. My original plan had been to: (1) install 2008 R2 on each blade, (2) add Hyper-V to each blade, (3) install a 2008 R2 VM on each blade, and (4) within the VMs, create a failover cluster and then install SQL Server clustering. The consultant has suggested that we instead: (1) install 2008 R2 on each blade, (2) add Hyper-V to each blade, (3) install a 2008 R2 VM on each blade, (4) create a cluster on the HOST machines which will host all the VMs, and (5) within the VMs, create a failover cluster and then install SQL Server clustering. The big difference is the addition of step 4, whereby we also cluster the hosts so that the guest VMs themselves become clustered resources. The argument is that it improves maintenance further since we have no ties at all between the SQL cluster and the physical hardware. We can in theory live-migrate the guest VMs around the hosts without affecting the SQL cluster at all, so for routine maintenance of physical blades we can move the SQL instances around without interruption and without needing to fail over. It sounds like a nice idea, but I've not come across anything on the internet where people say they've done this and it works OK. Can I actually live-migrate the guests without the SQL cluster hosted within them getting upset? Does anyone have any experience of this set-up, good or bad? Are there some pros and cons that I've not considered? I appreciate that mirroring is also a valuable option to consider - in this case we're favouring clustering since it covers the whole of each instance and we have a good number of databases. Some DBs are for lumbering 3rd-party systems that may not even take kindly to mirroring (and my understanding of clustering is that failovers are completely transparent to the clients). Thanks.

    Read the article

  • Solaris TCP/IP performance tuning

    - by Andy Faibishenko
    I am trying to tune a high message traffic system running on Solaris. The architecture is a large number (600) of clients which connect via TCP to a big Solaris server and then send/receive relatively small messages (0.5 to 1K payload) at high rates. The goal is to minimize the latency of each message processed. I suspect that the TCP stack of the server is getting overwhelmed by all the traffic. What are some commands/metrics that I can use to confirm this, and if it is true, what is the best way to alleviate this bottleneck? P.S. I posted this on Stack Overflow originally. One person suggested snoop and dtrace. dtrace seems pretty general-purpose - are there any additional pointers on how to use it to diagnose TCP issues?
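
    Not an answer from the thread, just a sketch of the first things worth checking on a stock Solaris box (the tunable names are the traditional ndd ones and may vary by release):

        # listen-queue overflows and retransmits are the usual smoking guns
        netstat -s -P tcp | egrep -i 'listendrop|retrans'
        # current accept-backlog limits
        ndd -get /dev/tcp tcp_conn_req_max_q
        ndd -get /dev/tcp tcp_conn_req_max_q0
        # default socket buffer sizes
        ndd -get /dev/tcp tcp_xmit_hiwat
        ndd -get /dev/tcp tcp_recv_hiwat
        # example of raising a limit (not persistent across reboots)
        ndd -set /dev/tcp tcp_conn_req_max_q 1024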

    Read the article

  • How to segment a network to separate traffic

    - by Student_CVO
    At the moment all our computers are in one big LAN. The intention is to separate the admin and edu networks (it's in a school), mainly for traffic and less for security. How do I do this best? I have a drawing, but can't post it (I can send it by mail). A firewall? VLANs? IPCop (which doesn't allow two green zones)? pfSense? ... Should there be two scopes on the DHCP server (Windows 2008 R2), one for admin and one for edu, or is one scope enough? I would like your advice; I am a student in training with this task as a project. Thanks

    Read the article

  • Suggested benchmark for testing CPU footprint of antivirus software

    - by Alex Chernavsky
    Our organization is currently running Symantec Corporate Antivirus, which is rumored to be a big resource hog. I know that we do have a lot of older machines that are running slow. Our PCs are all running Windows XP Pro and are used only for business applications (mostly Microsoft Office), e-mail, and web surfing. They're not used for gaming (one would hope not, anyway). I'd like to take one of the old PCs and run a speed benchmark while it's running Symantec AV, then another test with no antivirus, and a third test with ESET NOD32. As I said, I don't care much about graphics performance. What would be an appropriate benchmarking program to use? Freeware is best, of course. Thank you for considering my question.

    Read the article

  • Server location moved - how can I move the files?

    - by Bernhard
    Hello, I have a big problem. I have to move data from an old webspace which is only accessible by FTP. Now we have a new root server which is accessible by SSH, of course :-) I need to move all the data from the old space, but there are many GB of files. Is there a way to fetch all files directly from the old FTP onto the new server's storage and not via a third station (my local machine)? I've tried it with ftp but without success - I think I used the wrong commands. Is there a way to do something like this, including all files and directories? Thank you in advance, Bernhard
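
    One way to do exactly this (a sketch, assuming lftp can be installed on the new server and the old host is reachable from it; host name, credentials and target path are placeholders):

        # run on the new root server: recursively pull the whole old FTP space
        lftp -u olduser,oldpass ftp.old-host.example -e "mirror --verbose / /srv/old-webspace; quit"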

    Read the article

  • Filesystem to quickly get recent modifications

    - by liori
    Hello, I've got a relatively big filesystem (ext4) with lots of small files and I'd like to back it up. Making full backups often is not feasible for me, so I want a way to make differential/incremental backups (differential preferred). But... this is a laptop, and scanning for changed files takes a lot of time. My questions: 1) Is it possible to get a list of files changed since some date from ext4's journal? I know it wasn't designed with this idea in mind, and it might be too small for bigger timespans, but maybe it is somehow possible? 2) Is it possible to monitor filesystem modifications and maintain a list of changed files reliably? I think I could use inotify, but this might be too slow to monitor the full filesystem and might be unreliable (by reliable I mean either I get all modifications since the last backup, with nothing missing from the list, or an error message). The laptop runs Debian unstable.
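
    For question 2, a minimal sketch of what inotify-based tracking could look like (assuming the inotify-tools package; note that fs.inotify.max_user_watches usually has to be raised to watch a whole tree, and anything that changes while the watcher is not running is missed - so on its own it cannot give the hard reliability guarantee asked for):

        # append one line per changed path to a journal consumed by the next backup
        inotifywait -m -r -e modify,create,delete,move --format '%w%f' /home \
            >> /var/log/changed-files.log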

    Read the article

  • How can I create multiple identical AWS EC2 server instances with large amounts of persistent data?

    - by mojones
    I have a CPU-intensive data-processing application that I want to run across many (~100,000) input files. The application needs a large (~20GB) data file in order to run. What I would like to do is: (1) create an EC2 machine image that has my application and associated data files installed, (2) boot up a large number (e.g. 100) of instances of this image, and (3) split my input files into 100 batches and send one batch to be processed on each instance. I am having trouble figuring out the best way to ensure that each instance has access to the large data file. The data file is too big to fit on the root filesystem of an AMI. I could use block storage (EBS), but a given volume can only be attached to a single instance, so I would need 100 clones. Is there some way to create a custom image that has more space on the root filesystem so that I can include my large data file? Or is there a better way to tackle this problem?
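
    One pattern often suggested for this (a sketch only, with placeholder IDs and the modern AWS CLI): snapshot an EBS volume that holds the 20GB file once, then have each instance create and attach its own volume from that snapshot at boot - a single snapshot can be cloned into any number of volumes.

        # one-time: snapshot the volume containing the reference data
        aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "20GB reference data"

        # per instance (e.g. from a startup script): clone the snapshot and attach it
        aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --availability-zone us-east-1a
        aws ec2 attach-volume --volume-id vol-0fedcba9876543210 --instance-id i-0123456789abcdef0 --device /dev/xvdf
        mount -o ro /dev/xvdf /data    # mount read-only inside the instance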

    Read the article

  • Moving files from the Public folder to C: takes a minute, even though they are on the same hard drive and same partition

    - by Jian Lin
    I have a big file, about 2GB, and would like to move it from Network -> Bookroom -> Users -> Public (this is the computer in the bookroom of the house) to c:\myfiles, and they are actually on the SAME hard drive (and same partition). But the move still takes a minute or so? I thought that on the same hard drive and partition it counts as a "move" and should take only 2 or 3 seconds. That public folder is also \\Bookroom\Users\Public. Update: Sorry, I actually mean "move" all the way... so it is not a copy but a move. That's why I thought it should take only 2 or 3 seconds.
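
    If Bookroom is indeed the local machine, a plausible explanation is that going through the UNC path (\\Bookroom\Users\Public) routes the operation through the network redirector, which turns it into a copy-and-delete; using the local path keeps it a same-volume rename, which is near-instant. A small sketch (the file name is just an example):

        REM run in a local Command Prompt; a same-volume move is only a metadata update
        move "C:\Users\Public\bigfile.dat" "C:\myfiles\bigfile.dat"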

    Read the article

  • Sharing settings between a few virtual hosts in Apache

    - by Ivan Virabyan
    There are many virtual hosts in my Apache configuration, each with quite a big set of settings. The problem is that most of the virtual hosts have the same settings, so the config file is huge, full of identical virtual hosts that differ only by the ServerName directive. To change or add a setting, I need to go through all of these vhosts. Is it possible to somehow share settings between virtual hosts while still letting a few of them keep their own? I hoped dynamic vhosts would be a good solution, but as I understand it, that doesn't fit my problem, because there is no way to set specific settings for some of the vhosts. Furthermore, I don't want my vhosts to be dynamic, because I have a fixed number of them. The ServerAlias directive is also not a solution, because I need to know which URL the user came from.
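
    One common way to factor this out (a sketch; paths and host names are placeholders) is to move the shared directives into a file and Include it inside each <VirtualHost>, so every vhost shrinks to a few lines; mod_macro is a more parameterised variant of the same idea.

        # /etc/apache2/common-vhost.conf - directives every vhost shares
        DirectoryIndex index.php index.html
        ErrorLog /var/log/apache2/error.log
        # ...any other shared settings...

        # main configuration: one short stanza per host
        <VirtualHost *:80>
            ServerName site1.example.com
            DocumentRoot /var/www/site1
            Include /etc/apache2/common-vhost.conf
        </VirtualHost>

        <VirtualHost *:80>
            ServerName site2.example.com
            DocumentRoot /var/www/site2
            Include /etc/apache2/common-vhost.conf
            # host-specific overrides can still be added here
        </VirtualHost>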

    Read the article

  • Inexpensive degaussers or HDD shredders?

    - by Nicholas Knight
    I do a lot of work for a small cash-strapped business that has a lot of active hard drives, most of which are consumer-grade SATA of about five years of age, and predictably they are dying at an increasing rate; a lot of the time they can't even be detected, let alone complete a zero-out cycle. Right now those drives are just being stored, but that can't continue forever. We've got a couple of bad LTO tapes it'd be nice to deal with, too. There are very real security and legal issues that make dropping them off with someone who claims they'll be properly destroyed a gamble. I've looked around at degaussers and HDD shredders, and the ones that don't look like they come from some guy in his basement all seem to be $3000+, which is hard to swallow right now. Is there anything out there in the $500-1500 range that you would recommend? (Speed isn't a big issue; if it takes several minutes or even hours per drive, that's completely OK, we've only got 10 or so thus far.)

    Read the article

  • 504 Gateway Time-out after php fatal error

    - by tiagojsag
    I'm using nginx and php-fpm to develop a symfony2-based website under Ubuntu 12.10 (yes, I know I'm using a beta OS). Everything was working fine until, due to an error in my code, I called a nonexistent function and got the following: Fatal error: Call to a member function (....) This isn't a problem (it's a bug in my code, easily fixable), but after this, no other page loads. My browser just keeps trying to load the page from the webserver until nginx times out (after +- 30s, which should be some default timeout) and returns: 504 Gateway Time-out. Restarting php-fpm solves the issue. The nginx logs show a timeout message, and nothing appears in the php-fpm logs, even if I set them to debug level. I tried switching from fpm to fastcgi, and the same thing happens. I've looked around, but all the similar errors are related to big requests/file handling, which isn't the case here. All the pages on my website load in a few seconds, even under development conditions (no caching, etc).
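
    Not a diagnosis, but a few standard php-fpm pool directives worth checking while debugging this (on Ubuntu usually /etc/php5/fpm/pool.d/www.conf; the values here are only examples):

        ; kill a worker that stays stuck on a single request for too long
        request_terminate_timeout = 30s
        ; surface worker stdout/stderr (including fatal errors) in the fpm log
        catch_workers_output = yes
        ; make sure more than one child is available, otherwise a single
        ; wedged worker blocks every following request
        pm = dynamic
        pm.max_children = 10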

    Read the article

  • What are some cheap CDNs for origin pull?

    - by DucDigital
    I've read several threads around Server Fault about this, but I am still not satisfied with the answers, so I am posting a question here. I need an origin-pull CDN that supports big files (more than 200MB). I don't need storage space, since the included storage is too small anyway - I just need the CDN to relay from my server. Also, the price should be affordable, certainly not more than $150 a month for the smallest plan. I also need to pay by credit card, since I do not work or stay in the US, so it's hard for me to do a bank wire. Thank you very much

    Read the article

  • Transfer many Gigabytes between two servers

    - by Bernhard
    Hello, I have a big problem. I have to move data from an old webspace which is only accessible by FTP. The new root server is accessible by SSH, of course :-) I need to move all the data from the old space, but the amount is just huge. Is there a way to move all the files directly from the old FTP onto the new server's storage and not via a third station (my local machine)? I've tried it with ftp but it didn't work - I think I used the wrong commands. Is there a way to do this? Thank you in advance, Bernhard
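
    Same idea as the lftp suggestion further up this page: run the transfer on the new server so nothing passes through a third machine. A sketch using wget's mirror mode (host and credentials are placeholders):

        # on the new server: recursively mirror the whole FTP account
        wget -m ftp://olduser:oldpass@ftp.old-host.example/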

    Read the article

  • PostgreSQL RAID configuration

    - by Yoldar-Zi
    I'm stuck on how best to configure a disk array. We have an HP P2000 G3 disk array with 24 SAS physical disks of 300GB each. We need to configure this array to host 2 copies of PostgreSQL 9.2, because there are two different systems. As we know, it's recommended to store the database and the transaction logs (pg_xlog) on separate disks. So we would set up 4 logical disks: 2 for transaction logs with RAID 1, and 2 for databases with RAID 10. Is this the right distribution scheme? Or maybe it is best to just make one big RAID 10 with 4 logical disks?
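
    Whichever layout wins, moving pg_xlog onto its own volume is straightforward; a sketch for an already-initialised 9.2 cluster (paths are examples and the data directory depends on the distribution):

        # stop the cluster, relocate the WAL directory, leave a symlink behind
        pg_ctl -D /var/lib/pgsql/9.2/data stop
        mv /var/lib/pgsql/9.2/data/pg_xlog /raid1_wal/pg_xlog
        ln -s /raid1_wal/pg_xlog /var/lib/pgsql/9.2/data/pg_xlog
        pg_ctl -D /var/lib/pgsql/9.2/data start
        # (a brand-new cluster can place WAL elsewhere from the start with initdb -X)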

    Read the article

  • Font rendering in Internet Explorer vs other browsers on Windows XP

    - by Ben McCormack
    I have four browsers installed on my Windows XP SP3 machine: Internet Explorer 8, Firefox 3.5, Safari 4, and Google Chrome. For whatever reason, fonts appear to be rendered differently in IE than in the other browsers. It seems the fonts are anti-aliased in IE but not in the others. Why might this be? Is this an issue with the browsers or with my operating system? I've noticed this issue on several Windows XP machines that I've used. While it may seem like no big deal, the lack of font smoothing in the other browsers keeps me from using them as my primary browser. Most importantly, what can I do to get the other browsers to render fonts smoothly?
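
    A plausible explanation worth checking: IE8 has its own ClearType switch and turns it on regardless of the Windows setting, while the other browsers follow the system-wide font smoothing setting, which is off by default on XP. Enabling ClearType under Display Properties > Appearance > Effects should fix all of them; the same change expressed as registry values (log off and back on afterwards):

        reg add "HKCU\Control Panel\Desktop" /v FontSmoothing /t REG_SZ /d 2 /f
        reg add "HKCU\Control Panel\Desktop" /v FontSmoothingType /t REG_DWORD /d 2 /f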

    Read the article

  • Web Folder size/quota reporting tool?

    - by nctrnl
    I am currently using a Visual Basic script to determine how big the web folders are and what quota is assigned to each folder. The quota is in no way a physical limit, just a value I set to decide whether a user is using too much space or not. The script does the job quite neatly and sends an HTML report by mail on a regular basis. The problem is that it's such a hassle to insert new quotas, since I have to fiddle around with the code. A central "control panel" with an overview and the ability to set new quotas would be more suitable. Is there any software that can do the following: scan specified folders/subfolders, report the size and present it in some sort of interface (could be a PHP/MySQL solution), and let me specify a quota and see the difference? It is really important that the quota handling is simple enough that a non-technician can handle it.
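
    Not a packaged product, but to illustrate how small the data-gathering side is, a minimal Python sketch that reads per-folder quotas from a CSV (one "folder,quota_in_MB" row per line, no header - a made-up format) and reports usage against them; a simple web front end could then just edit that CSV:

        import csv, os

        def folder_size(path):
            """Total size in bytes of all files under path."""
            total = 0
            for root, _dirs, files in os.walk(path):
                for name in files:
                    try:
                        total += os.path.getsize(os.path.join(root, name))
                    except OSError:
                        pass  # skip files that vanished or are unreadable
            return total

        with open("quotas.csv", newline="") as fh:
            for folder, quota_mb in csv.reader(fh):
                used_mb = folder_size(folder) / (1024 * 1024)
                print(f"{folder}: {used_mb:.0f} MB used of {quota_mb} MB allowed "
                      f"({used_mb - float(quota_mb):+.0f} MB)")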

    Read the article

  • How can I reduce the CPU usage of Offline Files?

    - by Diego
    Whenever I have the Offline Files service running, there is a constant 25% CPU usage on svchost.exe (this is a quad core, so that means it's using up one core). This, in turn, triples the power consumption and keeps the machine hot... I do have several GB synchronized (a music collection), but the files are not changing at all, on either side. Am I misusing this feature? Is there anything I can configure to keep it quiet when there's nothing to do? Or should I forget about it and synchronize big folders manually?

    Read the article

  • Is SSL to the proxy good enough?

    - by Josh Smeaton
    We are currently trying to decide how best to handle SSL traffic in our environment. We have an externally facing Apache proxy server that is responsible for directing all traffic into our environment, and it is also doing the SSL work for the majority of our servers. There are one or two IIS servers in particular that do their own SSL, but they are also behind the proxy. I'm wondering: is SSL to the proxy good enough? It would mean that traffic within our network is identifiable, but is that such a big deal?
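
    If terminating TLS at the proxy ever stops being acceptable, Apache can re-encrypt towards the backends instead of speaking plain HTTP to them; a sketch (mod_ssl and mod_proxy assumed, host names are placeholders):

        SSLProxyEngine On
        ProxyPass        /app/ https://internal-app.example.local/
        ProxyPassReverse /app/ https://internal-app.example.local/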

    Read the article

  • Is there a RAR extractor (for multiple rar files like .r00 etc.) that will use all of my quad cores?

    - by Christopher Done
    I've got a quad-core Intel processor. I've got a big file split into little RAR files (foo.r00, foo.r01, etc.) which the RAR program extracts into one file/directory. Is there a RAR program where I can specify something like "use four cores" for the extraction process? At the moment it sits there using 100% of one core. I recognise the bottleneck might be my hard drive anyway, but I don't see a lot of HD usage and suspect the decompression process is more intensive than waiting on I/O. For example, GNU Make accepts an argument (-j, I think) to tell it how many cores to use, which I used to compile PHP 6 really quickly.

    Read the article

  • Uninstalled Ubuntu, no GRLDR?

    - by user32965
    So I'm a big fat idiot. I installed Ubuntu 11.04 on my school's laptop, and now the time has come to turn it back in. I wrote GRUB to the Master Boot Record, thinking it wasn't going to be permanent. Fast forward to yesterday: I decided to hell with this, popped in my Windows 7 CD, deleted the whole partition, formatted it to NTFS, and installed Windows 7 on it. Then, while I was surfing the web, the computer overheated [totally typical]. Now when I boot up, I get this:
        Try (hd0,0): FAT32: No GRLDR
        Try (hd0,1): invalid or null
        Try (hd0,2): invalid or null
        Try (hd0,3): invalid or null
        Try (hd1,0): NTFS5: No grldr
        Try (hd1,1): invalid or null
        Try (hd1,2): invalid or null
        Try (hd1,3): invalid or null
        Cannot find GRLDR. Press space bar to hold the screen, any other key to boot previous MBR... Timeout: 5
    The timeout just counts down from 5 to 0. I need to turn this thing in before tomorrow, please can someone help me out?
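
    Assuming the fresh Windows 7 install itself is intact and only the boot code in the MBR is stale, the usual way out is to boot the Windows 7 DVD, choose Repair your computer > Command Prompt, and rewrite it:

        bootrec /fixmbr
        bootrec /fixboot
        REM only if Windows then fails to show up in the boot menu:
        bootrec /rebuildbcd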

    Read the article

  • Apple TV photo fail

    - by Tony
    I just bought the new (2nd gen) Apple TV. Everything works beautifully, except for Photos. I have 25,000 pictures on my computer, which creates 3 issues with Apple TV: 1) It takes roughly an HOUR to load my photos, every time! (if I navigate anywhere else, it has to reload all over again next time) 2) It condenses all of my folders/sub-folders/sub-sub-folders into just the top folder. Having only 5 top folders with 5,000 pictures each is pretty much useless. 3) It has an artificial cap of 20,000 pictures, after which it won’t load more. The first two are the big ones, since they pretty much make the product unusable. I have called Apple support (they just said “sorry… too bad”) and have checked all the online forums I can. Also, it is definitely not a connection issue, as it streams HD Netflix movies with ease. Does anyone else here have this issue, and/or hopefully some solutions?

    Read the article

  • Timeouts with Apache & PHP where each virtual host has its own user process

    - by acemtp
    I have 10 Unix users in /home/. Each user is for a specific subdomain; for example, user www in /home/www/public_html is for www.mywebsite, and blog in /home/blog/public_html is for blog.mywebsite. 90% of the code is PHP and 10% RoR. At the moment I use Apache + FastCGI with SuexecUserGroup to run each site's processes as the right user. It seems to work, but I have a strange behaviour where, after a few hours/days, the server stops answering (timeouts) even though the CPU load is still very low (it's a big server). The Apache status page displays a lot of "W" (Sending Reply) states, but there are still 50 idle workers, so it should be able to answer. On the older server (a lot slower) we had only one user and used mod_php, and we never had this issue. Is there another way to do this without FastCGI and SuexecUserGroup, or do you know what's going wrong?
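
    One alternative that keeps one Unix user per vhost without FastCGI or SuexecUserGroup is apache2-mpm-itk, which lets mod_php handle each vhost's requests under a user of your choice (a sketch; it does a setuid per request instead of reusing workers, so the performance trade-off is worth testing):

        <VirtualHost *:80>
            ServerName blog.mywebsite
            DocumentRoot /home/blog/public_html
            # mpm-itk directive: serve this vhost as user/group "blog"
            AssignUserId blog blog
        </VirtualHost>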

    Read the article

< Previous Page | 206 207 208 209 210 211 212 213 214 215 216 217  | Next Page >