Search Results

Search found 5021 results on 201 pages for 'limit'.

  • Rails: three most recent comments with unique users

    - by Dennis Collective
    What would I put in the named scope :by_unique_users so that I can do Comment.recent.by_unique_users.limit(3) and only get one comment per user?

        class User
          has_many :comments
        end

        class Comment
          belongs_to :user
          named_scope :recent, :order => 'comments.created_at DESC'
          named_scope :limit, lambda { |limit| { :limit => limit } }
          named_scope :by_unique_users
        end

    On SQLite, named_scope :by_unique_users, :group => "user_id" works, but it makes PostgreSQL, which is deployed in production, freak out: PGError: ERROR: column "comments.id" must appear in the GROUP BY clause or be used in an aggregate function
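
    One Postgres-friendly direction is SELECT DISTINCT ON, which keeps the first row per user_id. A minimal, untested sketch assuming PostgreSQL and Rails 2.3 named_scope syntax; note that DISTINCT ON forces the ORDER BY to lead with user_id, so :recent cannot be chained in front of it:

        # Sketch only: picks each user's newest comment; the overall
        # recency ordering then has to be applied on top (e.g. a subquery).
        named_scope :by_unique_users,
          :select => "DISTINCT ON (comments.user_id) comments.*",
          :order  => "comments.user_id, comments.created_at DESC"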

    Read the article

  • rails named_scope ignores eager loading

    - by Craig
    Two models (Rails 2.3.8): User (username and disabled properties; User has_one :profile) and Profile (full_name and hidden properties). I am trying to create a named_scope that eliminates the disabled=1 and hidden=1 user-profiles. The User model is usually used in conjunction with the Profile model, so I attempt to eager-load the Profile model (:include => :profile). I created a named_scope on the User model called 'visible':

        named_scope :visible, {
          :joins      => "INNER JOIN profiles ON users.id = profiles.user_id",
          :conditions => ["users.disabled = ? AND profiles.hidden = ?", false, false]
        }

    I've noticed that when I use the named_scope in a query, the eager-loading instruction is ignored.

    Variation 1 - User model only:

        # UserController
        @users = User.find(:all)

        # User's index view
        <% for user in @users %>
          <p><%= user.username %></p>
        <% end %>

        # generates a single query:
        SELECT * FROM `users`

    Variation 2 - use the Profile model in the view; lazy-load the Profile model:

        # UserController
        @users = User.find(:all)

        # User's index view
        <% for user in @users %>
          <p><%= user.username %></p>
          <p><%= user.profile.full_name %></p>
        <% end %>

        # generates multiple queries:
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 1) ORDER BY full_name ASC LIMIT 1
        SHOW FIELDS FROM `profiles`
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 2) ORDER BY full_name ASC LIMIT 1
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 3) ORDER BY full_name ASC LIMIT 1
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 4) ORDER BY full_name ASC LIMIT 1
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 5) ORDER BY full_name ASC LIMIT 1
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 6) ORDER BY full_name ASC LIMIT 1

    Variation 3 - eager-load the Profile model:

        # UserController
        @users = User.find(:all, :include => :profile)
        # view: no changes

        # two queries:
        SELECT * FROM `users`
        SELECT `profiles`.* FROM `profiles` WHERE (`profiles`.user_id IN (1,2,3,4,5,6))

    Variation 4 - use the named_scope, including the eager-loading instruction:

        # UserController
        @users = User.visible(:include => :profile)
        # view: no changes

        # generates multiple queries:
        SELECT `users`.* FROM `users` INNER JOIN profiles ON users.id=profiles.user_id WHERE (users.disabled = 0 AND profiles.hidden = 0)
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 1) ORDER BY full_name ASC LIMIT 1
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 2) ORDER BY full_name ASC LIMIT 1
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 3) ORDER BY full_name ASC LIMIT 1
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 4) ORDER BY full_name ASC LIMIT 1

    Variation 4 does return the correct number of records, but it also appears to ignore the eager-loading instruction. Is this an issue with cross-model named scopes? Perhaps I'm not using it correctly. Is this sort of situation handled better by Rails 3?
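
    A plain (non-lambda) named_scope silently ignores arguments passed at call time, which would explain Variation 4. A hedged, untested sketch of one workaround in Rails 2.3 syntax: chain the :include onto the scoped finder instead of passing it to the scope.

        # Sketch: the scope definition stays as-is; the eager-load rides on
        # the chained finder call rather than as an argument to visible().
        @users = User.visible.all(:include => :profile)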

    Read the article

  • How to create a dynamic SQL statement with Python and MySQLdb

    - by Elias Bachaalany
    I have the following code:

        def sql_exec(self, sql_stmt, args = tuple()):
            """
            Executes an SQL statement and returns a cursor.
            An SQL exception might be raised on error.
            @return: SQL cursor object
            """
            cursor = self.conn.cursor()
            if self.__debug_sql:
                try:
                    print "sql_exec: %s" % (sql_stmt % args)
                except:
                    print "sql_exec: %s" % sql_stmt
            cursor.execute(sql_stmt, args)
            return cursor

        def test(self, limit = 0):
            result = self.sql_exec("""
                SELECT *
                FROM table
                """ + ("LIMIT %s" if limit else ""), (limit, ))
            while True:
                row = result.fetchone()
                if not row:
                    break
                print row
            result.close()

    How can I nicely write test() so it works with or without 'limit', without having to write two queries?
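
    A minimal sketch of one way to do it, assuming MySQLdb's %s paramstyle: build the statement and the argument tuple together, so the placeholder and its value always travel as a pair.

        # Sketch (names follow the question; Python 2 / MySQLdb assumed).
        def test(self, limit = 0):
            sql = "SELECT * FROM table"
            args = tuple()
            if limit:
                sql += " LIMIT %s"
                args = (limit, )
            cursor = self.sql_exec(sql, args)
            for row in iter(cursor.fetchone, None):  # stop when fetchone() returns None
                print row
            cursor.close()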

    Read the article

  • How can I create an “su” only user (no SSH or SFTP) and limit who can “su” into that account in RHEL5? [closed]

    - by Beaming Mel-Bin
    Possible duplicate: "How can I allow one user to su to another without allowing root access?" We have a user account that our DBAs use (oracle). I do not want to set a password on this account, and I want to allow only users in the dba group to run su - oracle. How can I accomplish this? I was thinking of just giving them sudo access to the su - oracle command, but I wouldn't be surprised if there is a more polished/elegant/secure way.
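
    The sudo route the asker suggests can be expressed directly in sudoers. A sketch, assuming a local "dba" group and the stock RHEL5 su path; always edit with visudo:

        # /etc/sudoers fragment (sketch): members of dba may become oracle,
        # and nothing else, via "sudo su - oracle".
        %dba ALL = (root) NOPASSWD: /bin/su - oracle

    Locking the oracle password (passwd -l oracle) then keeps direct logins out without breaking the su path.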

    Read the article

  • How do I limit one connection per user for L2TP/IPSec using OpenSwan?

    - by Han
    I've successfully set up a VPN server with openswan, pppd, and xl2tpd on Ubuntu. Everything works great, but I'm having trouble finding out how to allow only one VPN connection per user listed in the /etc/ppp/chap-secrets file. Right now a user can have unlimited connections, which worries me: I've shared access to the VPN with some friends, but they might keep spreading the username/password.
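
    Neither xl2tpd nor pppd has a built-in per-user session cap, but pppd does run the /etc/ppp/auth-up and /etc/ppp/auth-down scripts around each authenticated session, which allows a lock-file workaround. A rough, untested sketch (the lock path and argument positions are assumptions worth checking against the pppd man page):

        #!/bin/sh
        # /etc/ppp/auth-up (sketch) -- pppd passes interface, peer, user, tty, speed.
        # Refuse a second session for the same user via a per-user lock file.
        LOCK="/var/run/l2tp-session-$3.lock"
        if [ -e "$LOCK" ]; then
            kill $PPID    # parent is the pppd instance for the new session
        else
            touch "$LOCK"
        fi

    The matching /etc/ppp/auth-down would remove the lock file; stale locks left by crashes would need handling too.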

    Read the article

  • Apache has a 2GB file limit on a CIFS network drive?

    - by netvope
    Setup: a Windows server and an Ubuntu server are hosted in VMware ESXi. I have a 6GB file on a Windows share. The Windows share is mounted on Ubuntu with smbmount, and a symlink pointing to the 6GB file is created in a public_html directory, which is readable by Apache.

    The problem: wget gets "Connection closed at byte 2130706432. Retrying." after downloading 2130706432 bytes (exactly 2032 MiB, the same every time), and Apache returns 206 Partial Content without logging any errors. The same error occurs even if I download from localhost, and a similar error occurs when Firefox is used instead of wget. There is no error if I md5sum or cp the file on Ubuntu, suggesting that smbmount and the Windows server are OK with 6GB files, and no error if Apache serves a 6GB file from the local disk, suggesting that Apache has no problem handling 6GB files.

    Any ideas why Apache/symlink/smbmount/Windows would cause an error when used together? How can I fix the problem?

    Software used: VMware ESXi 4 Update 1; Windows Server 2008 R2; Ubuntu 8.04 Server, vmxnet3; Apache 2.2.8; mount.cifs 1.10-3.0.28a
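
    One thing worth trying, under the assumption that the clean 2 GiB cutoff comes from Apache using sendfile()/mmap() on the CIFS mount (a known rough combination) rather than from a hard Apache limit:

        # httpd.conf sketch: make Apache fall back to ordinary reads
        # for the CIFS-backed content.
        EnableSendfile Off
        EnableMMAP Off

    Both directives exist in Apache 2.2 and can be scoped inside a <Directory> block covering just the share.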

    Read the article

  • Limit which processes a user can restart with supervisor?

    - by dvcolgan
    I have used supervisor to manage a Gunicorn process running a Django site, though this question could pertain to anything being managed by supervisor. Previously I was the only person managing and using our server, and supervisor just ran as root and I would use sudo to run supervisorctl restart myapp when needed. Now our server has to support multiple users working on different sites, and each project needs to be able to restart their own gunicorn processes without being able to restart other users' processes. I followed this blog post: http://drumcoder.co.uk/blog/2010/nov/24/running-supervisorctl-non-root/ and was able to allow non-root users to use supervisorctl, but now anyone can restart anyone else's processes. From the looks of it, supervisor doesn't have a way of doing per-user access control. Anyone have any ideas on how to allow users to restart only their own processes without root?
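
    Supervisor's control socket is all-or-nothing, so one common workaround is to run a separate supervisord per user, each owning its own config and socket. A sketch; the paths and the username "alice" are illustrative, not from the question:

        ; ~alice/supervisord.conf (sketch) -- a per-user supervisord instance.
        [unix_http_server]
        file=/home/alice/run/supervisor.sock
        chmod=0700                     ; only alice can talk to this socket

        [supervisord]
        logfile=/home/alice/log/supervisord.log
        pidfile=/home/alice/run/supervisord.pid

        [rpcinterface:supervisor]
        supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

        [supervisorctl]
        serverurl=unix:///home/alice/run/supervisor.sock

        [program:mysite]
        command=/home/alice/env/bin/gunicorn mysite.wsgi

    Each user then runs supervisorctl -c ~/supervisord.conf restart mysite against their own instance and cannot reach anyone else's socket.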

    Read the article

  • Do servers costing up to $1000 really have a memory limit of 4GB?

    - by Sandra
    Hi, I am looking for a server, and when I look at the specs, some of the servers can only handle 4GB. Others can handle 8GB or 16GB, and others 64GB. Can that really be true? Is this really a hardware limitation, or are they disabling it in the BIOS, so there is no way to use 16GB on a 4GB-supported server? An example is the Dell PowerEdge SC 440: only 4GB supported, they say. Would Linux allow me to use 16GB on a 4GB server? Sandra

    Read the article

  • Is there any USB2.0 data transfer chunk size limit?

    - by goldenmean
    With one read() or write() at a time, can we increase the bulk data size over the USB interface? For example, I want to transfer a chunk of 1024 (1K) bytes of data; if the device has a limitation of only 64 bytes, is there any way I can increase the packet size for the read() and write() system calls over USB? Is there any limitation on the size of a data transfer over USB in a host-device environment?

    Read the article

  • SharePoint 2007 / 2010 Content Indexing “The file reached the maximum download limit. Check that the full text of the document can be meaningfully crawled.”

    - by Stacy Vicknair
    If you have large files in a content source that is being indexed by SharePoint, you might run into the following error message: "The file reached the maximum download limit. Check that the full text of the document can be meaningfully crawled." This is usually caused by SharePoint's MaxDownloadSize setting being set lower than the size of the file you are attempting to index. You can increase this value, restart the service, and kick off a full crawl to fix the issue, but SharePoint 2007 and 2010 have different methods for accomplishing this task.

    SharePoint 2007: open the Registry Editor and increase the MaxDownloadSize value to a number (in MB) higher than the largest file being indexed. You can find it at:

        HKEY_LOCAL_MACHINE\Software\Microsoft\Search\1.0\Gathering Manager

    After you increase the size, cycle the search service and kick off a full crawl of the content source in question.

    SharePoint 2010: with SharePoint 2010 you can use PowerShell via the SharePoint 2010 console to change the MaxDownloadSize. Execute the following commands to update the value:

        $ssa = Get-SPEnterpriseSearchServiceApplication
        $ssa.SetProperty("MaxDownloadSize", <new size in MB>)
        $ssa.Update()

    References: http://support.microsoft.com/kb/287231 and http://blogs.technet.com/b/brent/archive/2010/07/19/sharepoint-server-2010-maxdownloadsize-and-maxgrowfactor.aspx

    Read the article

  • How to test KVM guest CPU maximum allocation limit?

    - by Ace
    Running an Ubuntu 13.04 host and VM guest, using virtio for the HDD and NIC. The maximum CPU-core allocation is 6; the current allocation is 2. I've made a VM with virt-manager just to play with and to test out KVM. I have a decent understanding of how the memory balloon driver works, but I still don't know how to test whether the guest OS can utilize the maximum setting for CPU cores. From what I gather, the host will start one qemu thread for each core allocated to the VM. When I run htop inside the guest, it only shows two cores (here is the output of cat /proc/cpuinfo: https://gist.github.com/anonymous/93a361545130923537da). How can I "force" the guest to allocate the other 4 cores so that it shows 6 cores in htop? Is there a way to do this?
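
    With libvirt, the current vCPU count can be raised at runtime up to the configured maximum, after which the new CPUs have to be onlined inside the guest. A sketch, assuming the domain is named "vm1" (illustrative) and CPU hotplug support in the guest kernel:

        # on the host: grow the live vCPU count to the 6-core maximum
        virsh setvcpus vm1 6 --live

        # inside the guest: online any CPUs that arrived offline
        for c in /sys/devices/system/cpu/cpu[0-9]*/online; do echo 1 > "$c"; done

    After that, htop and /proc/cpuinfo should show all six cores.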

    Read the article

  • Does MySQL have some kind of DoS protection or per-user query limit?

    - by Ghostrider
    I'm a bit at a loss. I'm running a MySQL database that's roughly 1GB of data and indices combined, on a dedicated Linux server. The DB version is '5.0.89-community'. Configuration is controlled via cPanel; PHP actually runs elsewhere on shared hosting. IP addresses are static and don't change, and access from the remote IP address is properly configured. The website gets around 10K hits per day, with each hit generating a database query. Some of these queries are expensive (~1 sec execution time). All is fine and well until, at some point, the DB server starts refusing connections from the client, claiming that the specific user can't access the server from that IP. Restarting the server always fixes the problem for a day or two, and then the same thing happens. There are some other DBs on that server, some of which are hit pretty hard on occasion, but not constantly. One of the apps maintains several persistent connections since it does a couple of updates per minute, though I don't think that's related. What's driving me mad is that I can't figure out why the server would start refusing connections. There is nothing in the logs. This is a hosted dedicated server, so the hosting company created the OS image, and I didn't write or go over every line of configuration. I'd do it, but I'm at a loss as to where to start looking. Any advice is appreciated.
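
    The symptom (a specific host refused until a restart) matches MySQL's max_connect_errors behavior: after that many interrupted connection attempts from one host, the host is blocked until FLUSH HOSTS. A hedged sketch of what to check, assuming that is the cause:

        -- Current threshold (the default is quite low on 5.0):
        SHOW VARIABLES LIKE 'max_connect_errors';

        -- Raise it, and clear any host currently blocked:
        SET GLOBAL max_connect_errors = 10000;
        FLUSH HOSTS;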

    Read the article

  • Why does redis report limit of 1024 files even after update to limits.conf?

    - by esilver
    I see this error at the top of my redis.log file:

        Current maximum open files is 1024. maxclients has been reduced to 4064 to compensate for low ulimit.

    I have followed these steps to the letter (and rebooted). Moreover, I see this when I run ulimit:

        ubuntu@ip-XX-XXX-XXX-XXX:~$ ulimit -n
        65535

    Is this error specious? If not, what other steps do I need to perform? I am running redis 2.8.13 (tip of the tree) on Ubuntu LTS 14.04.1 (again, tip of the tree). Here is the user info:

        ubuntu@ip-XX-XXX-XXX-XXX:~$ ps aux | grep redis
        root    1027  0.0  0.0   66328    2112 ?  Ss  20:30  0:00 sudo -u ubuntu /usr/local/bin/redis-server /etc/redis/redis.conf
        ubuntu  1107 19.2 48.8 7629152 7531552 ?  Sl  20:30  2:21 /usr/local/bin/redis-server *:6379

    The server is therefore running as ubuntu. Here is my limits.conf file without comments:

        ubuntu@ip-XX-XXX-XXX-XXX:~$ cat /etc/security/limits.conf | sed '/^#/d;/^$/d'
        ubuntu soft nofile 65535
        ubuntu hard nofile 65535
        root soft nofile 65535
        root hard nofile 65535

    And here is the output of sysctl fs.file-max:

        ubuntu@ip-XX-XXX-XXX-XXX:~$ sysctl -a | grep fs.file-max
        sysctl: permission denied on key 'fs.protected_hardlinks'
        sysctl: permission denied on key 'fs.protected_symlinks'
        fs.file-max = 1528687
        sysctl: permission denied on key 'kernel.cad_pid'
        sysctl: permission denied on key 'kernel.usermodehelper.bset'
        sysctl: permission denied on key 'kernel.usermodehelper.inheritable'
        sysctl: permission denied on key 'net.ipv4.tcp_fastopen_key'

    And as sudo:

        ubuntu@ip-10-102-154-226:~$ sudo sysctl -a | grep fs.file-max
        fs.file-max = 1528687

    Also, I see this at the top of the redis.log file; I'm not sure if it's related. It makes sense that the ubuntu user isn't allowed to change max open files, but given the high ulimits I have tried to set, it shouldn't need to:

        [1050] 23 Aug 21:00:43.572 # You requested maxclients of 10000 requiring at least 10032 max file descriptors.
        [1050] 23 Aug 21:00:43.572 # Redis can't set maximum open files to 10032 because of OS error: Operation not permitted.
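
    Since pam_limits only applies to PAM sessions, a daemon launched at boot never sees limits.conf at all, and a process already running as ubuntu cannot raise its own hard limit. A sketch of one fix, assuming redis is started from an init script (paths from the question):

        # in the startup script, while still root, before dropping privileges:
        ulimit -n 65536
        sudo -u ubuntu /usr/local/bin/redis-server /etc/redis/redis.conf

    On 14.04 an Upstart job could instead declare "limit nofile 65536 65536" in its stanza; either way, the point is to raise the limit in the daemon's own startup path rather than in limits.conf.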

    Read the article

  • Do eSATA HDD docking stations have a capacity limit?

    - by Michael Kjörling
    I'm looking at perhaps buying an eSATA docking station to be able to easily plug in and unplug hard disk drives, particularly but not necessarily only for backup purposes. Note: this is not a hardware shopping recommendation question; please don't vote to close it as such. Looking at different models, I find, for example, this page detailing the Deltaco SI-7908SUS, which specifically states "storage capacity: 1.5 TB" as well as "pictured hard disk not included, only for illustration". A customer review specifically mentions that it does not work with 3 TB drives, although it does not go into any detail such as OS, drive model, etc. From a brief glance, the vendor's web site does not appear to say either way. Then there is the quite similar Deltaco SI-7908B3, which boasts on the box "all 2.5" and 3.5" HDD/SSD compatible". My question is: why would what basically amounts to a SATA/eSATA adapter have any say in what storage capacities are supported? Does it? Assuming the OS supports the full capacity of the drive, why should introducing another (not even a different, really) connector change anything? Bonus question: might it make a difference if the docking station exposes multiple interfaces (such as the SI-7908SUS exposing USB 2.0 and eSATA)? (I still think it shouldn't, but it'd be nice to have it confirmed.)

    Read the article

  • How do I limit the users a specific user can run commands as in Linux?

    - by user8571
    I have two user accounts, foo and bar. I want to allow user foo to execute commands as root and as any other user, i.e.:

        sudo su root -c './run-my-script'
        sudo su bar -c './another-script'
        sudo su another -c './yet-another-script'

    I also want to allow user bar to execute commands as other users, but only a subset, and not root, i.e.:

        sudo su bar -c './run-my-script'

    but not:

        sudo su root -c './run-my-script'

    Is this possible?
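
    sudo's Runas specification can express exactly this split. A sudoers sketch (edit with visudo; "another" is the example user from the question):

        # /etc/sudoers fragment (sketch):
        # foo may run anything as anyone, including root
        foo  ALL = (ALL) ALL
        # bar may run anything, but only as the listed users -- root is not one of them
        bar  ALL = (bar, another) ALL

    With this in place, the invocations become sudo -u another ./yet-another-script rather than sudo su; granting su itself would hand over the target account wholesale.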

    Read the article

  • iptables 1.4 and passive FTP on custom port

    - by Cracky
    After the upgrade from Debian squeeze to wheezy I've got a problem with passive FTP connections. I could narrow it down to being iptables-related, as I could connect via FTP without problems after adding my IP to an iptables ACCEPT rule. Before the upgrade I was able just to do:

        modprobe nf_conntrack_ftp ports=21332

    and add:

        iptables -A THRU -p tcp --dport 21332 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT

    Now it doesn't help anymore. The INPUT rule is being triggered, as I can see in the counter, but the directory listing is the last thing it does. Setting up a passive-port range is the last thing I want to do; I dislike open ports. I also tried the trick with the helper module by adding the following rule before the actual rule for 21332:

        iptables -A THRU -p tcp -i eth0 --dport 21332 -m state --state NEW -m helper --helper ftp-21332 -j ACCEPT

    but it doesn't help and is not even being triggered, according to the counter. The rule in the next line (without the helper) is being triggered. Here is some info:

        # iptables --version
        iptables v1.4.14

        # lsmod | grep nf_
        nf_nat_ftp         12460  0
        nf_nat             18242  1 nf_nat_ftp
        nf_conntrack_ftp   12605  1 nf_nat_ftp
        nf_conntrack_ipv4  14078  32 nf_nat
        nf_defrag_ipv4     12483  1 nf_conntrack_ipv4
        nf_conntrack       52720  7 xt_state,nf_conntrack_ipv4,xt_conntrack,nf_conntrack_ftp,nf_nat,nf_nat_ftp,xt_helper

        # uname -a
        Linux loki 3.2.0-4-amd64 #1 SMP Debian 3.2.46-1 x86_64 GNU/Linux

        # iptables-save
        # Generated by iptables-save v1.4.14 on Sun Jun 30 03:54:28 2013
        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        :BLACKLIST - [0:0]
        :LOGDROP - [0:0]
        :SPAM - [0:0]
        :THRU - [0:0]
        :WEB - [0:0]
        :fail2ban-dovecot-pop3imap - [0:0]
        :fail2ban-pureftpd - [0:0]
        :fail2ban-ssh - [0:0]
        -A INPUT -p tcp -m multiport --dports 110,995,143,993 -j fail2ban-dovecot-pop3imap
        -A INPUT -p tcp -m multiport --dports 21,21332 -j fail2ban-pureftpd
        -A INPUT -p tcp -m multiport --dports 22 -j fail2ban-ssh
        -A INPUT -p tcp -m multiport --dports 110,995,143,993 -j fail2ban-dovecot-pop3imap
        -A INPUT -i lo -j ACCEPT
        -A INPUT -i eth0 -p tcp -m tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG NONE -j DROP
        -A INPUT -i eth0 -p tcp -m tcp --tcp-flags FIN,SYN FIN,SYN -j DROP
        -A INPUT -i eth0 -p tcp -m tcp --tcp-flags SYN,RST SYN,RST -j DROP
        -A INPUT -i eth0 -p tcp -m tcp --tcp-flags FIN,RST FIN,RST -j DROP
        -A INPUT -i eth0 -p tcp -m tcp --tcp-flags FIN,ACK FIN -j DROP
        -A INPUT -i eth0 -p tcp -m tcp --tcp-flags ACK,URG URG -j DROP
        -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -j BLACKLIST
        -A INPUT -j THRU
        -A INPUT -j LOGDROP
        -A OUTPUT -j ACCEPT
        -A OUTPUT -s 93.223.38.223/32 -j ACCEPT
        -A BLACKLIST -s 38.113.165.0/24 -j LOGDROP
        -A BLACKLIST -s 202.177.216.0/24 -j LOGDROP
        -A BLACKLIST -s 130.117.190.0/24 -j LOGDROP
        -A BLACKLIST -s 117.79.92.0/24 -j LOGDROP
        -A BLACKLIST -s 72.47.228.0/24 -j LOGDROP
        -A BLACKLIST -s 195.200.70.0/24 -j LOGDROP
        -A BLACKLIST -s 195.200.71.0/24 -j LOGDROP
        -A LOGDROP -m limit --limit 5/sec -j LOG --log-prefix drop_packet_ --log-level 7
        -A LOGDROP -p tcp -m tcp --dport 25 -m limit --limit 2/sec -j LOG --log-prefix spam_blacklist --log-level 7
        -A LOGDROP -p tcp -m tcp --dport 80 -m limit --limit 2/sec -j LOG --log-prefix web_blacklist --log-level 7
        -A LOGDROP -p tcp -m tcp --dport 22 -m limit --limit 2/sec -j LOG --log-prefix ssh_blacklist --log-level 7
        -A LOGDROP -j REJECT --reject-with icmp-host-prohibited
        -A THRU -p icmp -m limit --limit 1/sec -m icmp --icmp-type 8 -j ACCEPT
        -A THRU -i eth0 -p tcp -m tcp --dport 25 -j ACCEPT
        -A THRU -i eth0 -p udp -m udp --dport 53 -j ACCEPT
        -A THRU -i eth0 -p tcp -m tcp --dport 80 -j ACCEPT
        -A THRU -i eth0 -p tcp -m tcp --dport 110 -j ACCEPT
        -A THRU -i eth0 -p tcp -m tcp --dport 143 -j ACCEPT
        -A THRU -i eth0 -p tcp -m tcp --dport 465 -j ACCEPT
        -A THRU -i eth0 -p tcp -m tcp --dport 585 -j ACCEPT
        -A THRU -i eth0 -p tcp -m tcp --dport 993 -j ACCEPT
        -A THRU -i eth0 -p tcp -m tcp --dport 995 -j ACCEPT
        -A THRU -i eth0 -p tcp -m tcp --dport 2008 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
        -A THRU -i eth0 -p tcp -m tcp --dport 10011 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
        -A THRU -i eth0 -p tcp -m tcp --dport 21332 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
        -A THRU -i eth0 -p tcp -m tcp --dport 30033 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
        -A fail2ban-dovecot-pop3imap -j RETURN
        -A fail2ban-dovecot-pop3imap -j RETURN
        -A fail2ban-pureftpd -j RETURN
        -A fail2ban-pureftpd -j RETURN
        -A fail2ban-ssh -j RETURN
        -A fail2ban-ssh -j RETURN
        COMMIT
        # Completed on Sun Jun 30 03:54:28 2013

    So, as I said, I have no problem connecting when adding my IP as a pass-through, but that's not a solution, as no one except me can connect anymore. If someone has an idea what the problem is, please help me! Thanks, Cracky
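
    The wheezy kernel (3.2) changed how conntrack helpers get attached relative to squeeze, so it's worth verifying that the FTP helper is actually picking up the control connection on 21332. A hedged sketch of checks (port from the question):

        # reload the helper with the custom control port
        modprobe -r nf_conntrack_ftp 2>/dev/null
        modprobe nf_conntrack_ftp ports=21332

        # if this sysctl exists on the running kernel, make sure automatic
        # helper assignment is enabled
        sysctl net.netfilter.nf_conntrack_helper=1

        # with an FTP session open, the control connection should show the helper
        grep 21332 /proc/net/nf_conntrack

    If the helper never attaches, the data connections are not marked RELATED, so the existing conntrack ACCEPT rule in INPUT cannot match them; that would fit the "login works, listing hangs" symptom.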

    Read the article

  • Does Apache 2.2 (windows) have any default bandwidth limit?

    - by igino manfre'
    I'm running Apache on a server in the cloud (Windows Server 2008 R2 on VMware, 1 Gbps of bandwidth, http://95.110.164.61 ). I'm streaming many live DVB MPEG transport streams, precompressed in a loop (not Flash), generated by VLC on ports 640xx and then reverse-proxied by Apache on port 80. The server's firewall is open for VLC and Apache on all ports. Above 1.5 Mbps the playback is affected by continuous stop & go. Please note that if you request a stream generated by VLC directly at http://95.110.164.61:64087/mpg2_6.4 you see a correct stream, while if you request http://95.110.164.61/mpg2_6.4 you do not. I know that Flash streaming server uses Apache to stream on port 80 (and it works). I'm not an expert with Apache; can anyone tell me if any "special" module is required to increase the bandwidth?
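
    Apache 2.2 applies no default per-connection bandwidth cap, so the stutter is more likely buffering in the reverse-proxy path than a limit. For reference, a minimal mod_proxy setup for one of the streams would look like this (stream path and port taken from the question; this is a sketch, not the poster's actual config):

        # httpd.conf sketch -- requires mod_proxy and mod_proxy_http
        ProxyPass        /mpg2_6.4 http://127.0.0.1:64087/mpg2_6.4
        ProxyPassReverse /mpg2_6.4 http://127.0.0.1:64087/mpg2_6.4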

    Read the article

  • In Windows XP Professional, is there a limit on the number of files that can be contained in a single folder? [duplicate]

    - by Andrew
    This question already has an answer here: "How many files can a windows folder contain?" I am running Windows XP Professional, Service Pack 3. Right now I have 4,398 files in a single folder, and Windows XP seems to read it fine. How many more files can I place in this same folder, either theoretically or practically? Thanks for your time.

    Read the article

  • Is there a limit on the maximum number of files in an external hard drive folder?

    - by tfs
    I have a FAT32 external hard drive where I keep backups downloaded from a webserver. I have a directory with 30 subdirectories. One of the subdirectories contains 21,381 files, and when I try to copy more files into it I get error 0x80070052. However, it is possible to copy one more file (only one) into this directory if I make its name shorter (8 characters instead of its original 22). How do I solve this problem? Right now I cannot synchronize the external hard disk files with the server files, which is very important to me.

    Read the article

  • Is data transfer response related to cable bandwidth limit?

    - by John Paku
    Hello. Before this, the server was on shared 100 Mbps bandwidth, which was fast enough. Now the server runs on dedicated 10 Mbps bandwidth, and it takes more time to completely load the same page. The server's bandwidth usage is small, averaging less than 5 Mbps. (I can see some websites hosted at the same data center load very fast.)

    Read the article
