Search Results

Search found 16731 results on 670 pages for 'memory limit'.

Page 88/670 | < Previous Page | 84 85 86 87 88 89 90 91 92 93 94 95  | Next Page >

  • Limit of a DVD-ROM drive

    - by user23950
    I have a Lite-On DVD-ROM drive, and I'm going to copy lots of files from maybe 40-70 DVDs. I've been using this drive for about 4-7 months now, and I have also burned lots of DVDs with it. While copying the second DVD, the drive started making a sound that I do not hear frequently. Does this depend on the DVDs I'm reading, or is my drive getting old? How many DVDs do you think the drive can copy without threatening its health?

    Read the article

  • Why Hekaton In-Memory OLTP Truly is Revolutionary

    - by merrillaldrich
    I just returned from the PASS Summit in Charlotte, NC – which was excellent, among the best I have attended – and I have had Dr. David DeWitt’s talk rolling around in my head since he gave it on Thursday. (Dr. DeWitt starts at 27:00 at that link.) I probably cannot do it justice, but I wanted to recap why Hekaton really is revolutionary, and not just a marketing buzzword. I am normally skeptical of product announcements, and I find too often that real technical innovation can be overwhelmed by the...(read more)

    Read the article

  • Learning about BIOS memory, instructions and code origins

    - by m3taspl0it
    I'm learning about the BIOS and have a few questions. What is meant by, "This is the last 16 bytes of memory at the end of the first megabyte of memory"? The first instruction of the BIOS is a jump, which jumps to the main BIOS program, but where does it jump to? Where does the original BIOS code come from? I'm also interested in POST: how are the POST routines executed by the processor?

    Read the article

  • Removing files on a limited-access backup server

    - by Bart van Heukelom
    I have an account on a backup server, but it's full, so I need to clear it. The problem is that it's only accessible via FTP, SFTP and rsync (no shell). Deleting lots of small files (as in, multiple full Linux installations), which I have to do, is impractical over FTP/SFTP because neither can recursively delete directories in one command. (Yes, most clients will fake this by issuing all the separate commands for you, but the overhead is huge and the process takes several days... well, it crashes before that.) What do I do?
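
    Since rsync access is available, one workaround often suggested for this situation (my suggestion, not something from the question) is to let rsync's --delete flag do the recursive removal by syncing an empty local directory over the full remote one:

        # sketch: 'backup:/full/dir' is a placeholder for the crowded remote path
        mkdir empty
        rsync -a --delete empty/ backup:/full/dir/   # or backup::module/dir/ for an rsync daemon

    Because the deletions happen inside one rsync session, this avoids the per-file round trips that make recursive SFTP deletes take days.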

    Read the article

  • Increasing screen resolution beyond the limit and scaling to the original

    - by Akshat Mittal
    I have a laptop with a 1366x768 screen (the most common resolution), and I want to increase the resolution to 1600x900 (or higher) in the same ratio, scaling the higher resolution down to fit my current screen. I found xrandr, and the command xrandr --output LVDS1 --scale 1.4x1.4 worked, but it led to another problem: it does the scaling, but the cursor is still confined to the native screen resolution and I cannot move it any further. I found that this bug is already filed here. Also, this command is Linux-only, and I want to do this on both Linux and Windows (including Windows 8). I'm looking for similar software that is bug-free (at least without a major bug like this) and that supports Windows as well (or two separate programs for Windows and Linux). Any help is appreciated, and thanks in advance.
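
    On the Linux side, a workaround sometimes suggested for the confined-cursor problem (an assumption on my part, not verified against every driver) is to tell xrandr the size of the enlarged framebuffer explicitly, so the pointer is allowed to roam over all of it:

        # 1366x768 scaled by 1.4 gives a 1912x1075 framebuffer (rounded down)
        xrandr --output LVDS1 --scale 1.4x1.4 --panning 1912x1075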

    Read the article

  • Setting charging limit on MBP?

    - by Giannis
    Is it possible to tweak settings so that the battery only charges up to 90-95% and the machine then runs off the power adapter? A friend told me he was able to set such limits on his non-Mac PC, so I was hoping there would be a way to do it. From the reading I have done online, it appears this is better than keeping the battery at 100% most of the time. I have a 13" MBP (2012) running the latest version of OS X ML (10.8.2).

    Read the article

  • telnet - is there a maximum line limit?

    - by benc
    I am working on several servers that use HTTP as a command transport. Some of the commands I am trying to issue by hand are very long GETs spanning several lines, and when I telnet from my Mac to my Solaris system, I cannot paste such a line successfully: I get a couple of beeps (which I assume are Control-G bells) and the paste never completes. From breaking it up into smaller pieces, I get the impression that telnet, or my bundled telnet client or server, has a maximum line length that I had never bumped into before. I did some googling and superusering but found nothing definitive.
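
    One way to sidestep the terminal's input editing entirely (a sketch assuming netcat is available on the Mac; the host name and request path are placeholders) is to build the request with printf and pipe it over the connection instead of pasting:

        printf 'GET /very/long/command?with=many&params HTTP/1.0\r\n\r\n' \
            | nc solarishost 80

    Since the bytes never pass through the telnet client's line editor, any line-length limit in the terminal driver no longer applies.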

    Read the article

  • PHP registration form - limit emails [closed]

    - by Daniel
    I want to restrict registration to certain email providers; for example, I only want people with Gmail accounts to be able to register on my website. This is what I have so far to check that the address is valid:

        {
            /* Check if valid email address (note: eregi is deprecated in
               modern PHP; preg_match is the current equivalent) */
            $regex = "^[_+a-z0-9-]+(\.[_+a-z0-9-]+)*"
                   . "@[a-z0-9-]+(\.[a-z0-9-]{1,})*"
                   . "\.([a-z]{2,}){1}$";
            if (!eregi($regex, $subemail)) {
                $form->setError($field, "* Email invalid");
            }
            $subemail = stripslashes($subemail);
        }
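
    To go from "valid address" to "allowed provider", a minimal sketch (reusing $subemail, $form and $field from the snippet above; the whitelist contents are an assumption) could compare the domain part against a whitelist:

        /* hypothetical check: everything after the last '@', lowercased */
        $allowed = array('gmail.com');
        $domain  = strtolower(substr(strrchr($subemail, '@'), 1));
        if (!in_array($domain, $allowed)) {
            $form->setError($field, "* Only Gmail addresses may register");
        }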

    Read the article

  • Limit user to one CPU core in Windows 7

    - by Dean Ward
    I let my family members use my PC over RDP to play their Flash-based games, since their laptops overheat when they play directly on them. I have it set up so that I can use the PC at the same time as them. The PC has a quad-core CPU, and I would like to assign one of those cores to the user logged in via RDP so that the other three cores are left alone. Is this possible? They log in via a specific user account set up for the purpose. Thanks for any advice!
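
    Windows 7 has no built-in per-user core quota, but individual processes can be pinned to a core when they are launched. A sketch using the start command's affinity mask (the browser path is only a placeholder; whatever hosts the Flash games would go there):

        rem /affinity takes a hex bitmask of allowed cores; 1 = core 0 only
        start "" /affinity 1 "C:\Program Files\Internet Explorer\iexplore.exe"

    Child processes inherit the mask, so launching the game host this way from the RDP account keeps its work on a single core.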

    Read the article

  • Copy to USB memory stick really slow?

    - by Eloff
    When I copy files to the USB device, it takes much longer than in Windows (same USB device, same port). It's faster than USB 1.0 speeds (1 MB/s) but much slower than USB 2.0 speeds (12 MB/s); copying 1.8 GB takes me over 10 minutes (it should be under 3). I have two identical SanDisk Cruzer 8GB sticks and see the same problem with both. A Super Talent 32GB USB SSD in the neighboring port works at the expected speeds. What I see in the GUI is that the progress bar goes to 90% almost instantly, creeps to 100% a little slower, and then hangs there for 10 minutes. Interrupting the copy at that point seems to corrupt the tail end of the file; if I wait for it to complete, the copy is successful. Any ideas? dmesg output below:

        [64059.432309] usb 2-1.2: new high-speed USB device number 5 using ehci_hcd
        [64059.526419] scsi8 : usb-storage 2-1.2:1.0
        [64060.529071] scsi 8:0:0:0: Direct-Access SanDisk Cruzer 1.14 PQ: 0 ANSI: 2
        [64060.530834] sd 8:0:0:0: Attached scsi generic sg4 type 0
        [64060.531925] sd 8:0:0:0: [sdd] 15633408 512-byte logical blocks: (8.00 GB/7.45 GiB)
        [64060.533419] sd 8:0:0:0: [sdd] Write Protect is off
        [64060.533428] sd 8:0:0:0: [sdd] Mode Sense: 03 00 00 00
        [64060.534319] sd 8:0:0:0: [sdd] No Caching mode page present
        [64060.534327] sd 8:0:0:0: [sdd] Assuming drive cache: write through
        [64060.537988] sd 8:0:0:0: [sdd] No Caching mode page present
        [64060.537995] sd 8:0:0:0: [sdd] Assuming drive cache: write through
        [64060.541290] sdd: sdd1
        [64060.544617] sd 8:0:0:0: [sdd] No Caching mode page present
        [64060.544619] sd 8:0:0:0: [sdd] Assuming drive cache: write through
        [64060.544621] sd 8:0:0:0: [sdd] Attached SCSI removable disk
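
    The progress bar jumping to 90% instantly suggests the data is landing in the Linux page cache first and only then trickling out to the stick. A small diagnostic sketch ('bigfile' and the mount point are placeholders) that measures the copy including the final flush:

        # time the copy plus the sync, so cached-but-unwritten data is counted
        time sh -c 'cp bigfile /media/usbstick/ && sync'
        # watch the writeback backlog drain while the copy "hangs" at 100%
        watch -n1 'grep -e Dirty: -e Writeback: /proc/meminfo'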

    Read the article

  • Memory allocation strategy for vertex buffers (DirectX 10/11)

    - by Alex
    I have the following question. I am writing a CAD system, so I have a 3D scene containing many different objects (walls, doors, windows and so on), and the user can add or delete objects. The question is: how should I organize the storage of vertices for all my objects? I could create a vertex buffer for every object, but I think drawing and switching from one buffer to another would carry a performance penalty. Alternatively, I could create several big buffers, one per object type, but I don't understand how to update such buffers: it is too expensive to update a whole buffer (for example, the buffer for all walls), and what do I do if I want to delete an object from the middle of the buffer? I actually have a similar question here: http://stackoverflow.com/questions/5515700/how-to-properly-update-vertex-buffers-in-directx-10 Most examples I've found work with very static models; they tend to create a single vertex buffer from their list of points and then manipulate it only through matrix transformations. I, on the other hand, will be updating the scene very often.
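
    One common pattern for frequently edited scenes (a sketch only, assuming D3D11 with an existing device and context and a WallVertex type; none of these names come from the question) is one large D3D11_USAGE_DYNAMIC buffer per object type, orphaned and rewritten whenever the object list changes, so deleting an object is just omitting it from the CPU-side list:

        // create one dynamic vertex buffer sized for the worst case
        D3D11_BUFFER_DESC desc = {};
        desc.ByteWidth      = maxWallVerts * sizeof(WallVertex);
        desc.Usage          = D3D11_USAGE_DYNAMIC;
        desc.BindFlags      = D3D11_BIND_VERTEX_BUFFER;
        desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
        device->CreateBuffer(&desc, nullptr, &wallBuffer);

        // on any edit: rebuild the CPU-side vertex list (deletion = skip the
        // object), then orphan the GPU buffer and refill it in one Map/Unmap
        D3D11_MAPPED_SUBRESOURCE mapped;
        context->Map(wallBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
        memcpy(mapped.pData, wallVerts.data(), wallVerts.size() * sizeof(WallVertex));
        context->Unmap(wallBuffer, 0);

    WRITE_DISCARD hands the driver a fresh memory region, so the GPU can keep drawing from the old contents without a stall; for CAD-scale vertex counts, a full rewrite of one type's buffer is usually cheaper than patching holes in the middle.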

    Read the article

  • Triple-boot + the 4-partition limit

    - by dsimcha
    I just bought a new hard drive so that I could convert my XP-only machine into an XP/Ubuntu/Windows 7 triple-boot machine. Since the drive is absurdly huge (1 TB), I wouldn't mind throwing ReactOS into the mix too. I just found out that master boot records are limited to 4 entries, meaning 4 primary partitions. I had Windows XP set up on my old drive as a boot partition, a program-files partition and a media partition, and since I really didn't want to install XP from scratch, I cloned this setup onto the new drive. That leaves one MBR partition entry for installing Windows 7, Ubuntu and ReactOS. I'd like to avoid reinstalling XP like the plague, partly because it's supposed to be a safety net in case things go wrong with my other OSes, and partly because I've invested a lot of time getting it set up exactly the way I like it. Here are the options I've considered and why I don't like them:
    - Install Windows 7 on my media partition. This would work, but I prefer to keep my media partition completely separate from any OS, so that I can reformat an OS partition without affecting the media at all.
    - Use wubi or something similar to install Ubuntu inside another partition. Again, this is brittle.
    - Move all my media to a logical drive on an extended partition and create another logical drive there for Ubuntu. The problem is that extended partitions are rather brittle: if you nuke one, it renders the rest useless.
    - Put the old drive back in my computer and run XP off it, using the new one for the other OSes. The problem is that the old drive is slower, uses extra power, generates extra heat, etc.
    Can anyone suggest any other possibilities that I may have overlooked?

    Read the article

  • How to limit reverse SSH tunneling ports?

    - by funktku
    We have a public server that accepts SSH connections from multiple clients behind firewalls. Each of these clients creates a reverse SSH tunnel using the ssh -R command from their web server at port 80 to our public server. The destination port (on the client side) of the reverse SSH tunnel is 80, and the source port (on the public-server side) depends on the user. We are planning to maintain a map of port addresses for each user; for example, client A would tunnel their web server at port 80 to our port 8000, client B from 80 to 8001, and client C from 80 to 8002.

        Client A: ssh -R 8000:internal.webserver:80 clienta@publicserver
        Client B: ssh -R 8001:internal.webserver:80 clientb@publicserver
        Client C: ssh -R 8002:internal.webserver:80 clientc@publicserver

    Basically, we are trying to bind each user to one port and not allow them to tunnel to any other ports. If we were using SSH's forward-tunneling feature with ssh -L, we could control which ports may be tunneled with the permitopen=host:port configuration option, but there is no equivalent for reverse SSH tunnels. Is there a way of restricting reverse-tunneling ports per user?
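
    For what it's worth, newer OpenSSH releases (7.8 and up) did add an equivalent: a permitlisten option in authorized_keys that restricts which ports -R may bind on the server. A sketch, assuming the public server can run such a version:

        # /home/clienta/.ssh/authorized_keys on publicserver
        # 'restrict' turns everything off; permitlisten re-allows only -R 8000
        restrict,permitlisten="8000" ssh-rsa AAAA... clienta-key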

    Read the article

  • How to tune system settings for MongoDB on Linux?

    - by jsh
    Trying to squeeze a lot out of one question here -- please bear with me. Although the MongoDB man pages make several useful recommendations about system settings like ulimit (http://docs.mongodb.org/manual/reference/ulimit/) and other production factors (http://docs.mongodb.org/manual/administration/production-notes/), they seem mysteriously silent on things like virtual memory and swap settings. The closest we get to a hint is that "...the operating system's virtual memory subsystem manages MongoDB's memory..." (http://docs.mongodb.org/manual/faq/fundamentals/#does-mongodb-require-a-lot-of-ram). Running the same job (high writes and high reads on about 10,000,000 records in a single collection) on my 4-processor, 4 GB RAM MacBook and on an 8-core Ubuntu box with 64 GB RAM, I saw dramatically WORSE read performance on the Linux box with factory settings, and could hear the disk constantly spinning, indicating high I/O and presumably swapping. Yes, other things were happening on the box, but there was plenty of free RAM, disk space, etc.; furthermore, I did not see evidence that Mongo was expanding to take advantage of all that free RAM, as it is touted to do. The Linux box's default settings were as follows:

        vm.swappiness = 60
        vm.dirty_background_ratio = 10
        vm.dirty_ratio = 20
        vm.dirty_expire_centisecs = 3000
        vm.dirty_writeback_centisecs = 500

    I hazarded some guesses from docs and blogs for other types of databases (Oracle, MySQL, etc.), experimented, and adjusted as below:

        vm.swappiness = 10
        vm.dirty_background_ratio = 5
        vm.dirty_ratio = 5
        vm.dirty_writeback_centisecs = 250
        vm.dirty_expire_centisecs = 500

    I saw some immediate apparent improvement in read times. However, when I ran my test jobs again, read performance continued to be painfully sluggish during heavy writes. Then I REBUILT the collection from an available data source, and suddenly I can read at 1 ms or less per record WHILE the write job is running! So the question is really two-fold:

    1) What are appropriate VM settings for MongoDB on Linux?

    2) (bonus) Does Mongo do some checking or optimization with the OS while data is being built? In other words, if I have built a large data set with suboptimal VM or I/O settings, does Mongo make assumptions during the memory-mapping process that will fail to take advantage of optimizations down the road? Obviously I don't fully grok memory mapping under the hood (I was hoping I wouldn't have to). Any help appreciated... thanks! -j
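
    For reference, a sketch of how those adjusted values can be made persistent across reboots (assuming a stock sysctl setup; the values are the ones from the question, not a recommendation):

        # append to /etc/sysctl.conf (or a file under /etc/sysctl.d/):
        vm.swappiness = 10
        vm.dirty_background_ratio = 5
        vm.dirty_ratio = 5
        vm.dirty_writeback_centisecs = 250
        vm.dirty_expire_centisecs = 500
        # then load the new values without rebooting:
        sysctl -p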

    Read the article

  • NK2 file doesn't keep email addresses in memory

    - by r0ca
    When I send an email to someone outside the firm, typing just the first letters of the contact's name used to bring up the auto-suggestion of addresses I had already sent to. For the past few days, though, new addresses are no longer being remembered by Outlook (in the NK2 file). I see that the file is only 2 KB, while on my old machine it is almost 200 KB (so it holds far more addresses). Should I rebuild just the Outlook profile, or the whole Windows profile? Would a simple Outlook reinstall do, or does this call for setting up a new PC?

    Read the article

  • Fail2ban memory usage

    - by ltsstar
    Since my server is under a sustained DNS amplification attack (DDoS), I configured fail2ban, and initially my outgoing traffic dropped markedly. However, after a few hours (mostly 10+), fail2ban uses about 75% of RAM and seems to have crashed in some way, because outgoing traffic rises again immediately afterwards. When I searched the web for the memory problem, I found other people complaining about high fail2ban memory usage as well, but the recommended solution, inserting an ulimit command into a fail2ban config file, did not change much for me.
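
    For completeness, the workaround usually being referenced looks like this (a sketch assuming a Debian-style /etc/default/fail2ban; the path differs per distro). It shrinks the stack size reserved per thread before the daemon starts, since each fail2ban thread otherwise claims the large default stack:

        # /etc/default/fail2ban
        # cap each thread's stack at 256 KB instead of the default
        ulimit -s 256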

    Read the article

  • Limit a Unix user's file access

    - by Michael
    Hello, I just created a new user on my server, but I only want this user to have access to /var/www/ and all the files and folders inside it. They should not be able to access any other files on the server. How would I do this? Thanks!
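
    If the account only needs to read and write files (an assumption; the question doesn't say whether a shell is required), one sketch is to jail it to /var/www with OpenSSH's built-in SFTP chroot ('newuser' is a placeholder account name):

        # /etc/ssh/sshd_config
        Match User newuser
            ChrootDirectory /var/www
            ForceCommand internal-sftp

    Note that sshd requires the chroot directory itself to be owned by root and not group-writable, so the user would work inside subdirectories of /var/www rather than at its top level.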

    Read the article

  • Limit access on Apache 2.4 to an LDAP group

    - by jakobbg
    I've upgraded from Ubuntu 12.04 LTS to 14.04 LTS, and suddenly my Apache 2.4 (previously Apache 2.2) lets everybody in to my virtual host, which is unfortunate :-). What am I doing wrong? Anything with the Order/Allow lines? Any help is greatly appreciated! Here's my current config:

        <VirtualHost *:443>
            DavLockDB /etc/apache2/var/DavLock
            ServerAdmin [email protected]
            ServerName foo.mydomain.com
            DocumentRoot /srv/www/foo
            Include ssl-vhosts.conf
            <Directory /srv/www/foo>
                Order allow,deny
                Allow from all
                Dav On
                Options FollowSymLinks Indexes
                AllowOverride None
                AuthBasicProvider ldap
                AuthType Basic
                AuthName "Domain foo"
                AuthLDAPURL "ldap://localhost:389/dc=mydomain,dc=com?uid" NONE
                AuthLDAPBindDN "cn=searchUser, dc=mydomain, dc=com"
                AuthLDAPBindPassword "ThisIsThePwd"
                require ldap-group cn=users,dc=mydomain,dc=com
                <FilesMatch '^\.[Dd][Ss]_[Ss]'>
                    Order allow,deny
                    Deny from all
                </FilesMatch>
                <FilesMatch '\.[Dd][Bb]'>
                    Order allow,deny
                    Deny from all
                </FilesMatch>
            </Directory>
            ErrorLog /var/log/apache2/error-foo.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access-foo.log combined
        </VirtualHost>
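
    A likely culprit (an educated guess, not a confirmed diagnosis): under Apache 2.4 the old Order/Allow directives are handled by mod_access_compat, and "Allow from all" can satisfy the access check on its own, so the LDAP requirement never gets a chance to deny anyone. The 2.4-native idiom drops the old pair and lets Require be authoritative:

        <Directory /srv/www/foo>
            Dav On
            Options FollowSymLinks Indexes
            AllowOverride None
            AuthBasicProvider ldap
            AuthType Basic
            AuthName "Domain foo"
            AuthLDAPURL "ldap://localhost:389/dc=mydomain,dc=com?uid" NONE
            AuthLDAPBindDN "cn=searchUser, dc=mydomain, dc=com"
            AuthLDAPBindPassword "ThisIsThePwd"
            Require ldap-group cn=users,dc=mydomain,dc=com
        </Directory>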

    Read the article

  • Limit WSUS replication to only certain product classifications

    - by MDMarra
    I have four WSUS 3.0 SP2 servers that are geographically distributed. The server at our main site (we'll call it WSUS1) is the main WSUS server; all manual and auto-approvals happen there. The other three WSUS servers are replicas of it. Currently we only control desktop OS updates through WSUS, and I would like to control server OS updates through WSUS as well. There is no need for all of these server updates to be on the WSUS servers at the remote sites; the only server that needs a copy of them is WSUS1. Is there a way to keep my current infrastructure as-is and add server OS updates only to WSUS1, even though the others are set up as replicas, or will I need to configure an additional WSUS server that is not replicated?

    Read the article

  • Memory upgrade for Toshiba P20 S203?

    - by pjc50
    I've had an offer of a 256MB PC2700 SODIMM, apparently from an iBook, to upgrade a Toshiba laptop. Is that suitable? I've seen "DDR 266 SODIMM" on sale as the official upgrade memory. How, in general, should I work this out? I've long since lost track of which memory goes with which system.

    Read the article

  • Limit NFS block size from the server side?

    - by paulw1128
    Is it possible to enforce a maximum rsize/wsize in nfsd? I'm having issues related to IP fragmentation (yes, I'm stuck with NFS-over-UDP, contrary to the warnings in the manpage), and I have no practical access to the client mount command (it's buried in one of many TFTP boot images). http://nfs.sourceforge.net/nfs-howto/ar01s05.html mentions a kernel source parameter limiting the maximum block size, but I'm not going to get away with recompiling the nfsd kernel module, so that's not really an option either :-(
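
    On reasonably recent kernels there is a runtime knob that may achieve this without recompiling (assuming the kernel in question exposes it): nfsd's max_block_size file, which can only be written while nfsd is stopped.

        # sketch: cap the rsize/wsize the server will offer at 8 KB
        service nfs stop
        echo 8192 > /proc/fs/nfsd/max_block_size
        service nfs start
        cat /proc/fs/nfsd/max_block_size   # verify the new cap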

    Read the article

  • Enabling Squid delay pools eats up all available memory

    - by Supratik
    I am using "squid-3.1.8-1.el5" in my CentOS 5 32 bit system. In normal condition Squid uses 85m - 90m, but when I enable the delay pool parameters the memory usage suddenly rise up 2GB. The memory keeps on increasing until the system is out of resource. The following are my delay pool settings: delay_pools 1 delay_class 1 1 delay_access 1 allow all delay_parameters 1 192000/192000 Is there anything I am missing here or is it a bug with Squid ?

    Read the article
