Search Results

Search found 40999 results on 1640 pages for 'duplicate files'.

Page 274/1640

  • dns queries not using nscd for caching

    - by xenoterracide
    I'm trying to use nscd (Name Service Cache Daemon) to cache DNS locally so I can stop using BIND to do it. I've gotten it started, and ntpd seems to attempt to use it, but everything else seems to ignore it for hosts lookups. For example, if I run dig apache.org three times, none of the queries hit the cache. I'm viewing the cache statistics using nscd -g to determine whether it's been used. I've also turned the debug log level up to watch for hits, and the queries don't even reach nscd.

    nsswitch.conf:

        # Begin /etc/nsswitch.conf
        passwd: files
        group: files
        shadow: files
        publickey: files
        hosts: cache files dns
        networks: files
        protocols: files
        services: files
        ethers: files
        rpc: files
        netgroup: files
        # End /etc/nsswitch.conf

    nscd.conf:

        #
        # /etc/nscd.conf
        #
        # An example Name Service Cache config file. This file is needed by nscd.
        #
        # Legal entries are:
        #
        #   logfile <file>
        #   debug-level <level>
        #   threads <initial #threads to use>
        #   max-threads <maximum #threads to use>
        #   server-user <user to run server as instead of root>
        #   server-user is ignored if nscd is started with -S parameters
        #   stat-user <user who is allowed to request statistics>
        #   reload-count unlimited|<number>
        #   paranoia <yes|no>
        #   restart-interval <time in seconds>
        #
        #   enable-cache <service> <yes|no>
        #   positive-time-to-live <service> <time in seconds>
        #   negative-time-to-live <service> <time in seconds>
        #   suggested-size <service> <prime number>
        #   check-files <service> <yes|no>
        #   persistent <service> <yes|no>
        #   shared <service> <yes|no>
        #   max-db-size <service> <number bytes>
        #   auto-propagate <service> <yes|no>
        #
        # Currently supported cache names (services): passwd, group, hosts, services
        #

        logfile /var/log/nscd.log
        threads 4
        max-threads 32
        server-user nobody
        # stat-user somebody
        debug-level 9
        # reload-count 5
        paranoia no
        # restart-interval 3600

        enable-cache passwd yes
        positive-time-to-live passwd 600
        negative-time-to-live passwd 20
        suggested-size passwd 211
        check-files passwd yes
        persistent passwd yes
        shared passwd yes
        max-db-size passwd 33554432
        auto-propagate passwd yes

        enable-cache group yes
        positive-time-to-live group 3600
        negative-time-to-live group 60
        suggested-size group 211
        check-files group yes
        persistent group yes
        shared group yes
        max-db-size group 33554432
        auto-propagate group yes

        enable-cache hosts yes
        positive-time-to-live hosts 3600
        negative-time-to-live hosts 20
        suggested-size hosts 211
        check-files hosts yes
        persistent hosts yes
        shared hosts yes
        max-db-size hosts 33554432

        enable-cache services yes
        positive-time-to-live services 28800
        negative-time-to-live services 20
        suggested-size services 211
        check-files services yes
        persistent services yes
        shared services yes
        max-db-size services 33554432

    resolv.conf:

        # Generated by dhcpcd from eth0
        nameserver 127.0.0.1
        domain westell.com
        nameserver 192.168.1.1
        nameserver 208.67.222.222
        nameserver 208.67.220.220

    As a side note, I'm using Arch Linux.
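
    A note on the test methodology (this is how glibc name resolution works in general, not something stated in the question): dig talks to the name servers in resolv.conf directly and never goes through the nsswitch/nscd stack, so it can never hit the hosts cache. getent does resolve through NSS, so a fairer probe would be:

        # getent resolves via nsswitch.conf, so repeated lookups should show
        # up in the hosts cache statistics; dig bypasses NSS entirely.
        getent hosts apache.org
        getent hosts apache.org
        nscd -g     # then check the "hosts cache" counters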

    Read the article

  • org.apache.cxf.interceptor.Fault: Unmarshalling Error: Duplicate default namespace declaration.

    - by JohnC
    I'm not sure why I am receiving this after the web service has run, while trying to return back to my client-side bean. The web service works perfectly outside of my web server, in SoapUI.

        org.apache.cxf.interceptor.Fault: Unmarshalling Error: Duplicate default namespace declaration.
         at [row,col {unknown-source}]: [1,321]
            at org.apache.cxf.jaxb.JAXBEncoderDecoder.unmarshall(JAXBEncoderDecoder.java:764)
            at org.apache.cxf.jaxb.JAXBEncoderDecoder.unmarshall(JAXBEncoderDecoder.java:623)
            at org.apache.cxf.jaxb.io.DataReaderImpl.read(DataReaderImpl.java:128)
            at org.apache.cxf.interceptor.DocLiteralInInterceptor.handleMessage(DocLiteralInInterceptor.java:101)
            at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:236)
            at org.apache.cxf.endpoint.ClientImpl.onMessage(ClientImpl.java:671)
            at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.handleResponseInternal(HTTPConduit.java:2177)
            at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.handleResponse(HTTPConduit.java:2057)
            at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.close(HTTPConduit.java:1982)
            at org.apache.cxf.transport.AbstractConduit.close(AbstractConduit.java:66)
            at org.apache.cxf.transport.http.HTTPConduit.close(HTTPConduit.java:637)
            at org.apache.cxf.interceptor.MessageSenderInterceptor$MessageSenderEndingInterceptor.handleMessage(MessageSenderInterceptor.java:62)
            at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:236)
            at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:483)
            at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:309)
            at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:261)
            at org.apache.cxf.frontend.ClientProxy.invokeSync(ClientProxy.java:73)
            at org.apache.cxf.jaxws.JaxWsClientProxy.invoke(JaxWsClientProxy.java:124)
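
    A hedged workaround sketch (not from the question): since the fault is thrown while unmarshalling the raw response, one common approach is an inbound CXF interceptor that rewrites the stream before the StAX reader sees it. The class name, the regex, and the use of commons-io below are all assumptions, not a verified fix for this particular payload:

        import java.io.ByteArrayInputStream;
        import java.io.InputStream;

        import org.apache.commons.io.IOUtils; // any stream-to-string helper works
        import org.apache.cxf.interceptor.Fault;
        import org.apache.cxf.message.Message;
        import org.apache.cxf.phase.AbstractPhaseInterceptor;
        import org.apache.cxf.phase.Phase;

        public class DuplicateNamespaceFixInterceptor extends AbstractPhaseInterceptor<Message> {

            public DuplicateNamespaceFixInterceptor() {
                super(Phase.RECEIVE); // run before the XML is parsed
            }

            @Override
            public void handleMessage(Message message) throws Fault {
                try {
                    InputStream is = message.getContent(InputStream.class);
                    String xml = IOUtils.toString(is, "UTF-8");
                    // Collapse an immediately repeated default namespace declaration
                    // (hypothetical pattern; adjust to the actual payload).
                    String fixed = xml.replaceAll("(xmlns=\"[^\"]*\")\\s+xmlns=\"[^\"]*\"", "$1");
                    message.setContent(InputStream.class,
                            new ByteArrayInputStream(fixed.getBytes("UTF-8")));
                } catch (Exception e) {
                    throw new Fault(e);
                }
            }
        }

    It would be registered on the client with ClientProxy.getClient(port).getInInterceptors().add(new DuplicateNamespaceFixInterceptor()). The real fix normally belongs on the producer side; this only masks a malformed response.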

    Read the article

  • SQL: How to select rows from a table while ignoring the duplicate field values?

    - by Maxxon
    How do I select rows from a table while ignoring duplicate field values? Here is an example:

        id   user_id   message
        1    Adam      "Adam is here."
        2    Peter     "Hi there this is Peter."
        3    Peter     "I am getting sick."
        4    Josh      "Oh, snap. I'm on a boat!"
        5    Tom       "This show is great."
        6    Laura     "Textmate rocks."

    What I want to achieve is to select the recently active users from my db; let's say I want the 5 most recently active. The problem is that the following query selects Peter twice:

        mysql_query("SELECT * FROM messages ORDER BY id DESC LIMIT 5");

    What I want is to skip the row when it gets to Peter again, and select the next result, in this case Adam. I don't want to show my visitors that the recently active users were Laura, Tom, Josh, Peter, and Peter again; that does not make any sense. Instead I want to show them: Laura, Tom, Josh, Peter, (skipping Peter) and Adam. Is there an SQL command I can use for this problem?
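
    A grouping sketch (table and column names are taken from the question; the exact output columns are an assumption about what the page needs):

        -- One row per user, ordered by each user's most recent message id.
        SELECT user_id, MAX(id) AS last_message_id
        FROM messages
        GROUP BY user_id
        ORDER BY last_message_id DESC
        LIMIT 5;

    If the message text itself is also needed, join messages back to this result on id = last_message_id.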

    Read the article

  • How to delete duplicate vectors within a multidimensional vector?

    - by David
    I have a vector of vectors:

        vector< vector<int> > BigVec;

    It contains an arbitrary number of vectors, each of an arbitrary size. I want to delete not the duplicate elements within each vector, but any vector that is exactly the same as another. I don't need to preserve the order of the vectors, so I can sort, etc. It should be a really simple problem to solve, but I'm new to this. My (not-working) best effort:

        for (int i = 0; i < BigVec.size(); i++) {
            for (int j = 1; j < BigVec.size(); j++) {
                if (BigVec[i][0] == BigVec[j][i]);  // stray semicolon and odd indexing, as posted
                {
                    BigVec.erase(BigVec.begin() + j);
                    i = 0;  // because I get the impression deleting a
                    j = 1;  // vector messes up a simple iteration through
                }
            }
        }

    I think there might be a solution using std::unique(), but I can't get that to work either.
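
    For reference, a sketch of the standard-library idiom: std::vector<int> already defines operator< and operator== element-wise, so whole inner vectors can be sorted to make duplicates adjacent and then erased in one pass:

        #include <algorithm>
        #include <vector>

        // Remove inner vectors that are exact duplicates of another; order is
        // not preserved, which the question explicitly allows.
        void removeDuplicates(std::vector< std::vector<int> >& bigVec) {
            std::sort(bigVec.begin(), bigVec.end());                 // duplicates become adjacent
            bigVec.erase(std::unique(bigVec.begin(), bigVec.end()),  // shift unique runs forward
                         bigVec.end());                              // chop the leftover tail
        }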

    Read the article

  • How can I remove an "ALMOST" Duplicate using LINQ? ( OR SQL? )

    - by Atomiton
    This should be an easy one for the LINQ gurus out there. I'm doing a complex query using UNIONs and CONTAINSTABLE in my database to return ranked results to my application, and I'm getting duplicates in the returned data. This is expected: I'm using CONTAINSTABLE and CONTAINS to get all the results I need. CONTAINSTABLE is ranked by SQL, and CONTAINS (which is run only on the Keywords field) is hard-code-ranked by me. (Sorry if that doesn't make sense.) Anyway, because the tuples aren't identical (their rank is different), a duplicate is returned. I figure the best way to deal with this is to use LINQ. I know I'll be using the Distinct() extension method, but do I have to implement the IEqualityComparer interface? I'm a little fuzzy on how to do this. For argument's sake, say my result set is structured like this class:

        class Content
        {
            public int ContentID { get; set; } // key
            public int Rank { get; set; }
            public string Description { get; set; }
        }

    If I have a List<Content>, how would I write the Distinct() call to exclude Rank? Ideally I'd like to keep each Content's highest Rank: if one Content's Rank is 112 and the other's is 76, I'd like to keep the 112. Hopefully I've given enough information.
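
    For what it's worth, a sketch that avoids Distinct() entirely: an IEqualityComparer keyed on ContentID would drop the duplicates, but it gives no control over which copy survives, whereas grouping does. Here results is assumed to be the IEnumerable<Content> coming back from the query:

        var deduped = results
            .GroupBy(c => c.ContentID)                             // duplicates share a ContentID
            .Select(g => g.OrderByDescending(c => c.Rank).First()) // keep the highest Rank
            .ToList();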

    Read the article

  • Need help fixing unique key in Rails; Rails is adding id, causing duplicate key

    - by railsnew
    I need some help in fixing the issue below. I had transaction blocks in my Rails code like this:

        @sqlcontact = "INSERT INTO contacts (id, \"cid\", \"hphone\", mphone, provider, cemail, email, sms, mail, phone) VALUES ('" + @id1 + "','" + @id1 + "', '" + params[:hphone] + "', '" + params[:mphone] + "', '" + params[:provider] + "', '" + params[:cemail] + "', '" + @varemail + "', '" + @varsms + "', '" + @varmail + "', '" + @varphone + "')"

    My app was deployed to Heroku, so I was advised by them to remove the transaction blocks. I changed the above to:

        @cont = Contact.new(:id => @id1, :cid => @id1, :hphone => params[:hphone],
                            :mphone => params[:mphone], :provider => params[:provider],
                            :cemail => params[:cemail], :email => @varemail, :sms => @varsms,
                            :mail => @varmail, :phone => @varphone)
        @cont.save

    My app also already had data stored. Now the problem is that when I try to save a record, I keep getting the error:

        duplicate key value violates unique constraint "contacts_pkey"

    The error also shows the SQL query trying to insert the data; however, in that query I do not see the id value. As you can see from my code, I am passing the id, so why is Rails not accepting it? Does it always include its own sequential id? Can I not override the default Rails magic? And if it does that, does it not look at the data that is already in the DB? I am really stuck here. What should I do? Should I just go back to my transaction block?
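
    For context (a common PostgreSQL behaviour, not something confirmed by the question): inserting rows with explicit id values does not advance the id sequence, so the next auto-generated id can collide with one that already exists. If that is the cause here, realigning the sequence clears it; the sequence name below follows the default <table>_<column>_seq convention and is an assumption:

        -- Bump the sequence past the highest id already in the table.
        SELECT setval('contacts_id_seq', (SELECT MAX(id) FROM contacts));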

    Read the article

  • Django unit testing: South-migrated DB works in MySQL, throws duplicate PK error in PostgreSQL. Am I

    - by unclaimedbaggage
    Hi folks, (worth starting off with a disclaimer: I'm very new to PostgreSQL.) I have a Django site which involves a standard app/tests.py testing file. If I migrate the DB to MySQL (through South), the tests all pass. However, in PostgreSQL I'm getting the following error:

        IntegrityError: duplicate key value violates unique constraint "business_contact_pkey"

    Note this happens while unit testing only; the actual page runs fine in both MySQL and PostgreSQL. I'm really having a heckuva time figuring this one out. Anyone have ideas? Below are the PostgreSQL "\d business_contact" output and the offending tests.py method, if they help. No changes were made to either DB except the (same) South migrations. Thanks.

        first_name   | character varying(200)   | not null
        mobile_phone | character varying(100)   |
        surname      | character varying(200)   | not null
        business_id  | integer                  | not null
        created      | timestamp with time zone | not null
        deleted      | boolean                  | not null default false
        updated      | timestamp with time zone | not null
        slug         | character varying(150)   | not null
        phone        | character varying(100)   |
        email        | character varying(75)    |
        id           | integer                  | not null default nextval('business_contact_id_seq'::regclass)
        Indexes:
            "business_contact_pkey" PRIMARY KEY, btree (id)
            "business_contact_slug_key" UNIQUE, btree (slug)
            "business_contact_business_id" btree (business_id)
        Foreign-key constraints:
            "business_id_refs_id_772cc1b7b40f4b36" FOREIGN KEY (business_id) REFERENCES business(id) DEFERRABLE INITIALLY DEFERRED
        Referenced by:
            TABLE "business" CONSTRAINT "primary_contact_id_refs_id_dfaf59c4041c850" FOREIGN KEY (primary_contact_id) REFERENCES business_contact(id) DEFERRABLE INITIALLY DEFERRED

    The test method:

        def test_add_business_contact(self):
            """
            Add a business contact
            """
            contact_slug = 'test-new-contact-added-new-adf'
            business_id = 1
            business = Business.objects.get(id=business_id)
            postdata = {
                'first_name': 'Test',
                'surname': 'User',
                'business': '1',
                'slug': contact_slug,
                'email': '[email protected]',
                'phone': '12345678',
                'mobile_phone': '9823452',
                'business': 1,
                'business_id': 1,
            }
            # Test to ensure contacts that should not exist are not returned
            contact_not_exists = Contact.objects.filter(slug=contact_slug)
            self.assertFalse(contact_not_exists)
            # Add the contact and ensure it is present in the DB afterwards
            contact_add_url = '%s%s/contact/add/' % (settings.BUSINESS_URL, business.slug)
            self.client.post(contact_add_url, postdata)
            added_contact = Contact.objects.filter(slug=contact_slug)
            print added_contact
            try:
                self.assertTrue(added_contact)
            except:
                formset = ContactForm(postdata)
                print formset.errors
                self.assertFalse(True, "Contact not found in the database - most likely, the post values in the test didn't validate against the form")
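
    A hedged diagnosis, not something the question confirms: MySQL bumps AUTO_INCREMENT when rows are inserted with explicit ids, while PostgreSQL sequences stay put, which is exactly the kind of difference that only surfaces under one backend. If the test database is seeded with hard-coded primary keys, Django can emit the statements to realign the sequences ("business" is the app name inferred from the table names above):

        # Print the sequence-reset SQL and run it against the current DB.
        python manage.py sqlsequencereset business | python manage.py dbshell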

    Read the article

  • Rails: Duplicate functionality across controllers? A humble plea.

    - by Alex
    So I'm working with Authlogic, and I'm trying to duplicate the login functionality on the welcome page, so that you can log in by RESTful URL or by just going to the main page. No, I don't know if we'll keep that feature, but I want to test it out anyway. Here's the error message:

        RuntimeError in Welcome#index
        Called id for nil, which would mistakenly be 4 -- if you really wanted the id of nil, use object_id

    The code is below. Basically, what's happening is the index view (the first code snippet) is sending the information from the form to the create method of the user_sessions controller. At this point, in theory, create should just pick up, but it doesn't. PLEASE help. Please. I've been doing this for about 8 hours. I checked Google. I checked IRC. I checked every book I could find. You don't even have to answer; I can do the grunt work if you just point me in the right direction.

        <% form_for @user_session, :url => user_sessions_path do |f| %>
          <%= f.text_field :email %><br />
          <%= f.password_field :password %>
          <%= submit_tag 'Login' %>
        <% end %>

        class ApplicationController < ActionController::Base
          helper :all          # include all helpers, all the time
          protect_from_forgery # See ActionController::RequestForgeryProtection for details

          # Scrub sensitive parameters from your log
          # filter_parameter_logging :password

          helper_method :current_user_session, :current_user
          before_filter :new_session_object

          protected

          def new_session_object
            unless current_user
              @user_session = UserSession.new(params[:user_session])
            end
          end

          private

          def current_user_session
            return @current_user_session if defined?(@current_user_session)
            @current_user_session = UserSession.find
          end

          def current_user
            return @current_user if defined?(@current_user)
            @current_user = current_user_session && current_user_session.record
          end
        end
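
    A hedged guess, not a confirmed diagnosis: form_for raises exactly this "Called id for nil" message when @user_session is nil, and the before_filter above only builds the object when current_user is falsy. Making the builder unconditional would rule that path out (a logged-in visitor simply never submits the form):

        def new_session_object
          @user_session ||= UserSession.new(params[:user_session])
        end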

    Read the article

  • How to duplicate an object in a list and update a property of the duplicated objects?

    - by user359706
    Hello. What would be the best way to duplicate an object placed in a list of items and change a property of the duplicated objects? I thought of proceeding in the following manner:

    - get the object in the list by "ref" + "article"
    - clone the found object as many times as desired (n times)
    - remove the object found
    - add the clones to the list

    What do you think? A concrete example:

        private List<Product> listProducts;
        listProducts = new List<Product>();

        Product objProduct_1 = new Product();
        objProduct_1.ref = "001";
        objProduct_1.article = "G900";
        objProduct_1.quantity = 30;
        listProducts.Add(objProduct_1);

        Product objProduct_2 = new Product();
        objProduct_2.ref = "002";
        objProduct_2.article = "G900";
        objProduct_2.quantity = 35;
        listProducts.Add(objProduct_2);

    Desired method:

        public void updateProductsList(List<Product> paramListProducts, Product objProductToUpdate,
                                       int nbrDuplication, int newQuantity)
        {
            ...
        }

    Example call:

        updateProductsList(listProducts, objProduct_1, 2, 15);

    Expected result: replace the following object:

        ref = "001"; article = "G900"; quantity = 30;

    with:

        ref = "001"; article = "G900"; quantity = 15;
        ref = "001"; article = "G900"; quantity = 15;

    Is the algorithm correct? Would you have an idea of how to implement the "updateProductsList" method? Thank you in advance for your help.
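
    A possible implementation sketch. It assumes Product exposes Ref, Article and Quantity as public properties (capitalised here, with "ref" renamed, because ref is a reserved word in C#); this is one reading of the algorithm above, not a definitive version:

        public void UpdateProductsList(List<Product> products, Product productToUpdate,
                                       int duplicationCount, int newQuantity)
        {
            // Find the original by its identifying fields.
            Product original = products.Find(
                p => p.Ref == productToUpdate.Ref && p.Article == productToUpdate.Article);
            if (original == null)
                return;

            products.Remove(original);

            // Add the requested number of clones, each carrying the new quantity.
            for (int i = 0; i < duplicationCount; i++)
            {
                products.Add(new Product
                {
                    Ref = original.Ref,
                    Article = original.Article,
                    Quantity = newQuantity
                });
            }
        }

    Called as UpdateProductsList(listProducts, objProduct_1, 2, 15), this yields the two quantity-15 clones shown in the expected result.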

    Read the article

  • What is the syntax for DSynchronize's "exclude filter" on files' full paths, to exclude bin\* and obj\* in a C# solution?

    - by Nam G. VU
    DSynchronize is a great free tool to sync two folders. I'm using it to sync two solutions checked out from two different TFS team collections. I want to exclude the following:

    - all files in bin folders
    - all files in obj folders

    I tried bin\*; obj\*, but it doesn't work. How can I do that?

    P.S. Trying *.g.* and *cache* does exclude the files whose names match the filter, so it seems the filter is applied to the file name only, NOT to the full path of the file.

    Read the article

  • How to copy lots of files between two computers, without network?

    - by Steve Bennett
    I want to copy around 50 GB of files from my desktop to my work laptop. For some reason, the laptop won't connect to my home network. I haven't had any luck with a direct Ethernet connection either, and I'm not willing to change any of the laptop's network configuration (last time I did that, I couldn't get onto the network at work, making me Not Very Popular). So... what else is there? The obvious route is copying via SD card, but my largest card is 8 GB and I can't find a good workflow. Is there a tool designed for this, where I could just repeatedly move the card back and forth without having to select files? I've tried using TeraCopy, but you end up missing a few files. I guess I could zip everything up into multi-volume .rar archives or something... but is there a more elegant way?
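
    One hedged idea (robocopy ships with Windows Vista/7; the paths are placeholders, and since /MOV deletes each source file once it has copied, rehearse on disposable data first):

        :: Each pass moves as much as fits on the card; /R:0 /W:0 makes files
        :: that fail (e.g. card full) be skipped instead of retried. Empty the
        :: card on the laptop, rerun the same command, repeat until the source
        :: directory is empty.
        robocopy C:\Users\me\outbox E:\transfer /E /MOV /R:0 /W:0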

    Read the article

  • How can I set audit controls on files owned by TrustedInstaller using Powershell?

    - by Drise
    I am trying to set audit controls on a number of files (listed in ACLsWin.txt) located in %Windows%\System32 (for example, aaclient.dll) using the following PowerShell script:

        $FileList = Get-Content ".\ACLsWin.txt"
        $ACL = New-Object System.Security.AccessControl.FileSecurity
        $AccessRule = New-Object System.Security.AccessControl.FileSystemAuditRule("Everyone", "Delete", "Failure")
        $ACL.AddAuditRule($AccessRule)
        foreach($File in $FileList)
        {
            Write-Host "Changing audit on $File"
            $ACL | Set-Acl $File
        }

    Whenever I run the script, I get the error PermissionDenied [Set-Acl] UnauthorizedAccessException. This seems to come from the fact that the owner of these files is TrustedInstaller. I am running the script as Administrator (even though I'm on the built-in Administrator account) and it still fails. I can set these audit controls by hand using the Security tab, but there are at least 200 files, and doing it by hand may lead to human error. How can I get around TrustedInstaller and set these audit controls using PowerShell?
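
    A hedged sketch of one way around it: take ownership first (using the built-in takeown and icacls tools), write the audit rule, then hand ownership back to TrustedInstaller. Treat this as an assumption about the fix rather than a tested script:

        foreach ($File in $FileList) {
            takeown /F $File /A | Out-Null      # give ownership to the Administrators group
            $ACL | Set-Acl $File                # the SACL write should now succeed
            icacls $File /setowner "NT SERVICE\TrustedInstaller" | Out-Null  # restore the owner
        }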

    Read the article

  • Windows 7 won't recognize backup set; can I script extracting the files in some other way?

    - by datatoo
    The Windows 7 Backup/Restore tool created multiple backup sets, and I was able to restore the oldest version but not the most recent, which is not seen by the application. I do see all of the zip files, and there are hundreds in the later versions. Is there a way to extract each of these correctly outside of the regular restoration method? Perhaps by scripting an extract of each day, one after another?

    Further clarification: the backup files were all made to an external drive. The original computer died completely: power supply, drives, everything. I am trying to reconstruct as much as possible, and the only backup set recognized is six months older. That set was recovered over a new install, but unzipping thousands of zip files is not really a simple unzip-and-copy project, as the original paths are not a simple thing to reconstruct.
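
    A hedged scripting idea: the zips inside a Windows Backup set store relative paths, so extracting every archive into one target tree should rebuild the original layout, with later sets overwriting earlier copies of the same file. 7-Zip is assumed to be installed, and the paths below are placeholders:

        :: Run from a .cmd file; %%Z iterates over every zip under the set.
        :: -aoa overwrites existing files, so extract sets oldest-first.
        for /R "E:\MYPC\Backup Set 2012-06-01 120000" %%Z in (*.zip) do (
          "C:\Program Files\7-Zip\7z.exe" x "%%Z" -o"C:\Restored" -aoa
        )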

    Read the article

  • Fastest way to move files from a guest VM to the host?

    - by iTayb
    Hey there. I'm looking for the fastest way to copy files from a VM to physical servers. Setting up a network between them isn't something I'd like to do; I believe it is much more secure not having one. VMware suggests using the Copy-VMGuestFile cmdlet from their PowerCLI interface, but I find it slow (running at approximately 1.5 MB/s). I thought of the following:

    - Creating a new virtual hard drive, moving the files in, downloading the .vmdk file from the server, and extracting it locally. This is possible, but it won't work with running VMs, and I don't want to shut down the VM every time I want to move files.
    - Using the virtual floppy device and downloading the .flp file. This works even if the VM is running, but it is limited to 2.8 MB.

    Do I have any other way? I'm using ESXi 4.1. Thanks.

    Read the article

  • How to configure Linux to open files by extension?

    - by Gregory MOUSSAT
    The various Linux desktops open files according to their MIME type. This is a very nice feature, but I also need to open files by extension (as with Windows). For instance, I want to open every xxxxx.vnc file with a specific program when I double-click on it. I use Xfce, but I don't think it differs from GNOME or KDE, because all of them use the same configuration files (defaults.list and mimeapps.list). If possible, the settings should be user-specific, not system-wide. I've found only some very poor information about this, and all of it is system-wide, so it may be wiped out by updates.
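
    A user-local sketch of the freedesktop approach, which matches the per-user requirement: register a glob for *.vnc as a custom MIME type, then bind a desktop entry to it (vncviewer.desktop is a hypothetical name; use whatever .desktop file your viewer ships):

        cat > x-vnc.xml <<'EOF'
        <?xml version="1.0" encoding="UTF-8"?>
        <mime-info xmlns="http://www.freedesktop.org/standards/shared-mime-info">
          <mime-type type="application/x-vnc">
            <comment>VNC connection file</comment>
            <glob pattern="*.vnc"/>
          </mime-type>
        </mime-info>
        EOF
        xdg-mime install --mode user x-vnc.xml   # writes under ~/.local/share/mime
        xdg-mime default vncviewer.desktop application/x-vnc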

    Read the article

  • Windows 7 Explorer keyboard shortcut: set focus to files/folders/content area?

    - by Pup
    Is there a Windows 7 Explorer keyboard shortcut to set focus to the files/folders/content area (depicted below)? This has bothered me for so long... I want to set my Explorer window's focus to the files pane (shown below). What's the most efficient way to do that with a keyboard? Here's what I've been doing:

    - Tab / Shift+Tab to move focus through interactive window elements until it looks like a selection rectangle appears over one of the files in my window.
    - Alt+V, Alt+D to change the appearance setting of a folder's content icons. This doesn't always work, depending on what's selected at the time.

    Read the article

  • If using eMule, how to keep current downloading files while adding a hard drive?

    - by the searcher
    If there are still files downloading (ones that will need an extra two weeks, or an unknown amount of time, because they are rare files), but I need to use a new hard drive because no space is left on the current one, is there a way to use the new hard drive while keeping the existing downloads going? If we simply change the folder in eMule from G: to H:, then all existing downloads disappear too...

    Update: I can move the completed files over to the new hard drive... but that is going to be a never-ending task (old hard drive gets full... move some... and repeat).
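
    One hedged workaround (an NTFS feature, not an eMule one): with eMule closed, move the folder that is filling the disk to the new drive and leave a junction at the old path, so eMule keeps reading and writing what it thinks is the same location. The paths are assumptions; adapt them to your setup:

        :: Run from an elevated prompt, with eMule closed. /MOVE deletes the
        :: source tree after copying; rmdir mops up if the empty root remains.
        robocopy "G:\eMule\Incoming" "H:\eMule\Incoming" /E /MOVE
        rmdir "G:\eMule\Incoming"
        mklink /J "G:\eMule\Incoming" "H:\eMule\Incoming"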

    Read the article

  • Transferring files from computer to Android Simulator SD Card?

    - by mgpyone
    I've tried the Android simulator for Mac and can use it well; I've also set 100 MB of SD storage for that simulator. However, I haven't found a way of transferring files from my Mac to the simulator's SD storage. My current solution is to send the files to my mail, access it via the simulator, and download them there. That isn't available for all formats, though; something like an image file (.img) is not allowed to be downloaded inside the simulator. I've searched for an SD card folder within the Android folder I extracted and found nothing. I want to transfer files from my HD to the Android simulator's SD card storage. Is there any effective solution that supports this? I'm on Mac OS X 10.6.2.
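
    The standard route is adb, which ships with the Android SDK (run from the SDK's tools directory; the file name below is just an example):

        ./adb devices                               # confirm the emulator instance is listed
        ./adb push ~/Documents/example.img /sdcard/ # copy straight onto the emulated SD card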

    Read the article

  • What kind of RAID should I choose when planning to host a video streaming application? [duplicate]

    - by facebook-100005613813158
    This question already has an answer here:
        What are the different widely used RAID levels and when should I consider them? (2 answers)

    Which RAID level would you recommend for a company that plans to host a video streaming application? We have four candidates: RAID 1, RAID 3, RAID 5 and RAID 6. Which one is the best? In my opinion, a video streaming application doesn't have a very strict demand for data correctness, so RAID 1 would be OK? But on the other hand, RAID 1 seems very capacity-consuming.

    Read the article

  • How to rename files in a folder using the ls command output as a pipe?

    - by user1179459
    I am using GNU/Linux and the bash shell. What I want to do on the server is download the files starting with B* and D*, and then rename them to ~B* and ~D* (same file name, just ~ in front). I wrote the following, which works fine for the downloading part; ideally I would like it to use the ls command output as well, but I don't know how to do that.

        cd inbox
        get D*
        get B*
        ls B* | rename $0 ~B.*
        bye

    Any ideas? Ideally, what I would like is for the ls command to send the list of files one by one to the get command, and then, once each get has completed, to have a rename command executed on the server files.
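
    For the local half, a plain bash sketch that skips ls entirely: the shell's own globbing already produces the list, and the quotes matter because an unquoted ~ would expand to $HOME:

        for f in B* D*; do
            mv -- "$f" "~$f"    # literal tilde prefix, same name otherwise
        done

    Renaming the server-side copies would still need the ftp client's own rename command after each get, as the session script suggests.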

    Read the article

  • File History - Unable to scan user libraries for changes and perform backup of modified files for configuration

    - by azl
    When trying to run the File History tool in Windows 8, it runs for about 2 seconds and then stops; no files are backed up to the selected drive. In the Event Viewer, the only error that appears is:

        Unable to scan user libraries for changes and perform backup of modified files for configuration C:\Users\win8User\AppData\Local\Microsoft\Windows\FileHistory\Configuration\Config

    I've tried deleting both the configuration files and the FileHistory directory on the target drive; setting up File History again results in the same error. Is there a better way to track down what is causing the failure? Or can I somehow get the File History tool to create a more verbose log file that shows what is causing the problem?

    Read the article

  • Maximum number of files in one ext3 directory while still getting acceptable performance?

    - by knorv
    I have an application writing to an ext3 directory which over time has grown to roughly three million files. Needless to say, reading the file listing of this directory is unbearably slow. I don't blame ext3. The proper solution would have been to let the application code write to sub-directories such as ./a/b/c/abc.ext rather than using only ./abc.ext. I'm changing to such a sub-directory structure, and my question is simply: roughly how many files should I expect to store in one ext3 directory while still getting acceptable performance? What's your experience? Or in other words: assuming that I need to store three million files in the structure, how many levels deep should the ./a/b/c/abc.ext structure be? Obviously this is a question that cannot be answered exactly, but I'm looking for a ballpark estimate.
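
    As a worked example of the arithmetic (the two-hex-character scheme is an assumption, not a recommendation): two levels of 256 directories give 65,536 leaves, so three million files come to roughly 46 per directory, far below anything ext3 struggles with. A sketch of placing a file by a hash prefix:

        f="abc.ext"
        h=$(printf '%s' "$f" | md5sum)   # stable, evenly distributed prefix
        dir="${h:0:2}/${h:2:2}"          # e.g. ./d4/1d/
        mkdir -p "$dir" && mv "$f" "$dir/"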

    Read the article

  • Forgot to unmount/eject external hdd, lost moved files. OSX

    - by balupton
    So I was using my Mac with my external hard drive connected via USB. I moved about 10 GB of data to it (via drag and drop while holding Command, to move the files rather than copy them). They moved to the drive all right, but as I was having some issues and Finder crashed after the transfer, I was unable to eject the volume, and later everything froze, so I had to do a hard restart (hold the power button). When I remounted the volume (plugged the external hdd back in), it no longer had any of the files I had moved onto it. How can I recover these files? It was a lot of data! Cheers.

    Read the article

  • Can I easily use a VPN to duplicate SSH Tunneling functionality?

    - by Steve V.
    Right now, when I want to use an unsecured wireless connection with my (Linux) laptop, I secure my connection using a variation of the method provided here. However, to the best of my knowledge, the (non-jailbroken) iPad does not allow applications to tunnel traffic through local ports, though it does seem to allow certain VPN traffic. I have never set up, or even used, a VPN before. I'm looking for confirmation that I'm not barking up the wrong tree before I invest significant effort into setting up my own VPN server. If I want to secure my iPad's wireless traffic over an unsecured wireless connection, would I be on the right track by looking at a VPN?

    Read the article

  • Does multiple files in SQL Server when using RAID help reduce conflicts in growth and file-locking?

    - by Dr Giles M
    I've been reading around and get the impression that if you are using RAID, then using multiple SQL Server files within a filegroup won't yield any further improvements, and the benefits are purely administrative (if you started to run out of space, or wanted to partition data off into manageable chunks for backups or for balancing the data around your big server room). However, being a reasonably savvy software person, it's not unthinkable to hypothesise that, even for smaller databases, SQL Server will perform growth and locking operations (for writes) on a LOGICAL file basis, so even if you are using RAID, it seems to make sense to have multiple files in a filegroup to balance I/O. Or does the time taken to reconstruct the data from distributed filegroups outweigh the benefit of reduced locking? I'm also aware that the behaviour and benefits may be different for tables/indexes/logs. Is there a good site that distinguishes the benefits of multiple files when RAID is already in place?

    Read the article
