Search Results

  • When adding second processor to SQL Server, will it automatically balance the load?

    - by ddavis
    We have a SQL Server 2008 R2 (10.5) on a dedicated box with a single 2.4 GHz processor, which regularly runs at 70-80% CPU. We are going to be adding a significant number of users to the application and therefore want to add a second processor to the box (scale up). Will SQL Server automatically use the second processor to balance threads, or is there additional configuration that will need to be done? In other words, will adding the second processor drop my CPU usage to 35-40% per CPU, automatically balancing the load? Based on what I read here, it seems that it will: http://msdn.microsoft.com/en-us/library/ms181007.aspx However, I've read elsewhere that CPU performance gains can be made by assigning database tables to different filegroups, but I'm not sure we want to get that complicated at this point.
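
    By default SQL Server spreads its schedulers across every CPU it can see (the affinity mask defaults to 0, meaning "use all processors"), so a second processor should be used automatically. A quick way to confirm that after the upgrade, as a sketch run against the local instance:

      REM list the schedulers SQL Server is actually using; two CPUs should show two rows
      sqlcmd -S . -Q "SELECT scheduler_id, cpu_id, status FROM sys.dm_os_schedulers WHERE status = 'VISIBLE ONLINE';"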

  • Why can't I install MySQL on my computer?

    - by Bea
    I have read a lot of tutorials, but I am still having problems. What I tried: I downloaded mysql-5.5.9-winx64. Everything I read says that I can run Setup.exe, but there is no such file in the download. The other option I know of is adding \mysql-5.5.9-winx64\bin to the PATH variable and then trying to execute the mysql command. When I do that, the error I get is:

      ERROR 2003 (HY000): Can't connect to MySQL server on 'localhost' (10061)

    I then downloaded mysql-5.5.9-winx64.msi, which is easier to install, but once I followed the instructions and it was installed, I got the same error when executing the mysql command. How can I use MySQL? EDIT: I've now removed everything I installed, and I want to start from scratch.
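
    For what it's worth, ERROR 2003 (10061) just means nothing is listening on the MySQL port, i.e. the server process was never started; the mysql command in bin is only the client. A sketch of getting the ZIP version going (paths and service name are assumptions):

      REM from the unpacked zip, start the server in the foreground
      cd \mysql-5.5.9-winx64\bin
      mysqld --console

      REM then, in a second window, connect with the client
      mysql -u root -p

      REM if the MSI installed MySQL as a Windows service instead:
      net start MySQL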

  • check_postgres_checkpoint plugin error

    - by Iliyas
    I am using the check_postgres.pl plugin for Nagios. I am trying to monitor how long it has been since the last checkpoint was run, using the check_postgres_checkpoint option. When I run the command from the CLI as root, I get the output, but I am not able to get the output in the Nagios web interface. The error it shows is:

      ERROR: pg_controldata could not read the given data directory: "/opt/PostgreSQL/9.1/data"

    It is trying to access the pg_control file in the 'global' directory beneath the data directory, which is readable only by the postgres user. Can anyone suggest how this can be resolved? Thanks.
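
    The likely cause is that, from Nagios, the plugin runs pg_controldata as the nagios user, which cannot traverse the data directory. A sketch of one fix (assuming the plugin runs as user nagios and the filesystem supports POSIX ACLs):

      # let the nagios user traverse the directories and read pg_control
      setfacl -m u:nagios:x /opt/PostgreSQL/9.1/data
      setfacl -m u:nagios:x /opt/PostgreSQL/9.1/data/global
      setfacl -m u:nagios:r /opt/PostgreSQL/9.1/data/global/pg_control

      # then verify it works as the nagios user
      sudo -u nagios ./check_postgres.pl --action=checkpoint --datadir=/opt/PostgreSQL/9.1/data --warning='1 hour'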

  • Bacula stops writing to disk volume after 2GB

    - by m.list
    Bacula Version: 5.2.5. I have configured Bacula to write volumes to disk, but it stops writing to the volume as soon as it reaches 2 GB. The file system is not the issue, as I have stored files larger than 2 GB on it.

      06-Dec 17:22 backup-sd JobId 8421: End of Volume "Full-Monthly-0005" at 0:2147475577 on device "FileStorage" (/nfs/backup-pool). Write of 64512 bytes got 8069.
      06-Dec 17:22 backup-sd JobId 8421: End of medium on Volume "Full-Monthly-0005" Bytes=2,147,475,578 Blocks=33,288 at 06-Dec-2012 17:22.

      backup1@backup:/nfs/backup-pool$ ls -alh Full-Monthly-0005
      -rw-r----- 1 bacula tape 2.0G Dec 3 16:14 Full-Monthly-0005

    bacula-dir.conf:

      Pool {
        Name = Full-Monthly
        Pool Type = Backup
        Recycle = yes
        Volume Retention = 5 months
        Volume Use Duration = 1 day
        Maximum Volumes = 5
        Maximum Volume Bytes = 12gb
      }

    bacula-sd.conf:

      Device {
        Name = FileStorage
        Media Type = File
        Archive Device = /nfs/backup-pool
        LabelMedia = yes                  # lets Bacula label unlabeled media
        Random Access = Yes
        RemovableMedia = no
        AlwaysOpen = no
        Label media = yes
        Maximum Volume Size = 12gb
      }

    In my original configuration, Maximum Volume Bytes and Maximum Volume Size were not set at all and so should have defaulted to no maximum, but that did not work either.
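
    Since the volume lives on NFS, it may be worth ruling out a 2 GiB file-size limit on the mount itself (old NFS protocol versions impose one); a quick sketch:

      # 64 here means paths on this filesystem support files larger than 2 GiB
      getconf FILESIZEBITS /nfs/backup-pool

      # check the NFS version and mount options
      mount | grep backup-pool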

  • ACL permissions not behaving as expected

    - by Yarin
    I set the following ACL on my web directory:

      setfacl -R -d -m mask:002 /var/www

    and then created a file as root that I expected to be readable by the default (apache) group:

      -rw--w-r--+ 1 root apache 0 Dec 17 22:32 newfile.py

    When I run getfacl on the file, I get:

      # file: newfile.py
      # owner: root
      # group: apache
      user::rw-
      group::rwx          #effective:-w-
      mask::-w-
      other::r--

    I'm not sure how to read this, but all I know is that the web server is throwing a permissions error because apache can't read the file. Can anyone explain what is going on here?
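
    A possible culprit, as a sketch: setfacl expects symbolic permissions (r, w, x) in the mask, and "002" appears to have been taken as a write-only permission set, so the resulting mask of -w- filters read out of the group entry. Something like this should give the apache group effective read access (path and group are assumptions):

      # grant the group read (and traverse on directories), now and by default
      setfacl -R -m g:apache:rX /var/www
      setfacl -R -d -m g:apache:rX /var/www

      # open the mask back up so the group entry takes effect
      setfacl -R -m mask:rX /var/www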

  • Do I have a bad SD card?

    - by User1
    I'm trying to copy data from my computer to an SD card. After a few hundred megs, I keep getting the following errors in dmesg:

      [34542.836192] end_request: I/O error, dev mmcblk0, sector 855936
      [34542.836284] FAT: unable to read inode block for updating (i_pos 13694981)
      [34542.836306] MMC: killing requests for dead queue
      [34542.836310] end_request: I/O error, dev mmcblk0, sector 9280
      [34542.837035] FAT: unable to read inode block for updating (i_pos 148486)
      [34542.837062] MMC: killing requests for dead queue
      [34542.837066] end_request: I/O error, dev mmcblk0, sector 1
      [34542.837074] FAT: bread failed in fat_clusters_flush
      [34542.837085] MMC: killing requests for dead queue

    These were all files I copied from a smaller SD card. I just want to transfer them to my new, larger card for my phone. I tried the same experiment with different files on a different machine, and the card failed again. Reading data from the old card went fine. My computers are older, but the SD card is new (16 GB, Class 4). Could it be that my computers are too old? Is there a definitive test to verify whether my SD card is bad?
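
    One reasonably definitive test is badblocks against the card itself, as a sketch (assuming the card appears as /dev/mmcblk0; the write-mode test destroys all data on it):

      # non-destructive read-only scan
      sudo badblocks -sv /dev/mmcblk0

      # thorough read-write test -- wipes the card!
      sudo badblocks -wsv /dev/mmcblk0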

  • Tell the Linux kernel to put a file in the disk cache?

    - by Rory
    Is there any command to force a file to be read in and loaded into the Linux disk cache? This is on an up-to-date Debian system. I know that in the general case it's better to let the Linux kernel figure this out. But I have an edge case: I have a laptop with an NFS directory mounted, and I want to play a long video file, but I don't want a network problem to interrupt the playing. I know that (largeish) file will be read in its entirety later on. I know that nothing else (really) will be running while the video plays. There is enough free memory to store this file. (I know I could just copy the file into a new tmpfs filesystem, but I'm curious whether there's an even shorter way to do it.)
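
    Because the page cache keeps whatever was last read, simply reading the file once should be enough; a sketch (path assumed; a tool like vmtouch offers more control, if available):

      # read the whole file once so the page cache holds it
      cat /mnt/nfs/video.mkv > /dev/null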

  • Why is my rsync so slow?

    - by iblue
    My laptop and my workstation are both connected to a Gigabit switch. Both are running Linux. But when I copy files with rsync, it performs badly: I get about 22 MB/s. Shouldn't I theoretically get about 125 MB/s? What is the limiting factor here?

    EDIT: I conducted some experiments.

    Write performance on the laptop: the laptop has an xfs filesystem with full-disk encryption. It uses the aes-cbc-essiv:sha256 cipher mode with a 256-bit key. Disk write performance is 58.8 MB/s.

      iblue@nerdpol:~$ LANG=C dd if=/dev/zero of=test.img bs=1M count=1024
      1073741824 Bytes (1.1 GB) copied, 18.2735 s, 58.8 MB/s

    Read performance on the workstation: the files I copied are on a software RAID-5 over 5 HDDs. On top of the RAID is LVM. The volume itself is encrypted with the same cipher. The workstation has an FX-8150 CPU with the native AES-NI instruction set, which speeds up encryption. Disk read performance is 256 MB/s (the cache was cold).

      iblue@raven:/mnt/bytemachine/imgs$ dd if=backup-1333796266.tar.bz2 of=/dev/null bs=1M
      10213172008 bytes (10 GB) copied, 39.8882 s, 256 MB/s

    Network performance: I ran iperf between the two clients. Network performance is 939 Mbit/s.

      iblue@raven $ iperf -c 94.135.XXX
      ------------------------------------------------------------
      Client connecting to 94.135.XXX, TCP port 5001
      TCP window size: 23.2 KByte (default)
      ------------------------------------------------------------
      [  3] local 94.135.XXX port 59385 connected with 94.135.YYY port 5001
      [ ID] Interval       Transfer     Bandwidth
      [  3]  0.0-10.0 sec  1.09 GBytes  939 Mbits/sec
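
    Since rsync here presumably runs over ssh, both machines also pay for ssh encryption on top of the disk crypto. A sketch for isolating that cost: push the same file over a raw TCP connection with netcat and compare the throughput (port and hostname are placeholders; some netcat variants omit the -p):

      # on the workstation: discard whatever arrives on port 5000
      nc -l -p 5000 > /dev/null

      # on the laptop: send the file over plain TCP
      dd if=bigfile bs=1M | nc workstation 5000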

  • Which user is the website host?

    - by Kossel
    I'm learning about servers, and I'm configuring nginx, MySQL, PHP and WordPress. The server distro is Debian 6. I created a new user, and I wish for each user to own their own site folder, /var/www/site.one, so I ran chown -R kossel:kossel site.one. My problem is that my WordPress site only works if I chmod 644 wp-config.php, so that everyone can read it, while WordPress suggests that file should be 640. My question: when someone opens mydomain.com, WordPress has to access the wp-config.php file, but which user is it actually using to "read" that file? root? The user kossel? Someone else? How can I properly set its permissions or owner?
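
    A sketch for finding out (the worker user is commonly nginx or www-data, but that is an assumption):

      # see which user the web server and PHP processes actually run as
      ps aux | egrep 'nginx|php' | grep -v grep

      # then keep 640, but hand the group to that user, e.g. for www-data:
      chown kossel:www-data /var/www/site.one/wp-config.php
      chmod 640 /var/www/site.one/wp-config.php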

  • Share folder with active directory group permissions

    - by Hihui
    I have a Debian box as a member of our AD domain (which is 2k3). I want to share two folders from the Debian machine: one with full access for everyone, the second readable only by the groups "ADM" and "PROD". Part of smb.conf:

      [global]
        workgroup = MYDOMAIN
        realm = MYDOMAIN.LOCAL
        netbios name = SERV-FTP
        wins server = "IP serv 2k3"
        security = domain

      [JUKEBOX]   ; full access
        path = /media/JUKEBOX/JUKEBOX
        comment = sharing
        writable = yes
        browsable = yes
        public = yes
        read only = no
        valid users = @ASYLUM\prod_std
        admin users = @ASYLUM\ADM

      [SOFTWARE]
        comment = Software
        path = /media/JUKEBOX/SOFTWARE
        valid users = @ASYLUM\prod_adv, @ASYLUM\ADM
        writable = yes
        read only = no

    My log:

      [2013/10/25 09:24:37.316643, 0] smbd/service.c:1055(make_connection_snum)
        canonicalize_connect_path failed for service SOFTWARE, path /media/JUKEBOX/SOFTWARE

    And from my Windows client, when I try to access that folder: Windows can't access \\serv-ftp\software. Where is the problem? Thanks!
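
    canonicalize_connect_path usually fails because the path does not exist or cannot be traversed by the connecting user; a quick sketch for checking both the directory and the config:

      # the directory must exist and be reachable
      ls -ld /media/JUKEBOX /media/JUKEBOX/SOFTWARE

      # validate smb.conf and show the effective settings
      testparm -s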

  • How can I make a non-destructive copy of a (NTFS) partition?

    - by violet313
    I want to recover some deleted files from a healthy NTFS partition on an undamaged hard disk. In order to leave the partition undisturbed, I plan to use dd to clone the partition to a raw image file and then attempt recovery from that mounted clone. Will dd if=/dev/sd<xn> of=/path/to/output.img perform a non-destructive copy? Is attempting a restore from a clone made with dd the best approach? [edit, w.r.t. Deltik's answer, I need to be a bit clearer about what I'm asking] E.g.: is there software that can do something more with the original sectors? E.g.: if it were a damaged hard disk, I am aware that any kind of read is potentially destructive. But assuming my disk head is not going to suddenly spaz out etc., am I reducing my chances of a successful recovery (at any cost) by using an apparently non-destructive single read of my undamaged hard disk? (BTW: I am planning on using ntfsundelete and testdisk for the recovery.)
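
    A sketch of that workflow (device name and paths are placeholders; dd only reads the source, so the partition itself stays untouched):

      # clone the partition to an image, carrying on past any read errors
      dd if=/dev/sdXn of=/path/to/output.img bs=4M conv=noerror,sync

      # mount the image read-only over a loop device and recover from there
      mkdir -p /mnt/clone
      mount -o ro,loop /path/to/output.img /mnt/clone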

  • Linux/OS X dualboot on a Macbook Pro with RAID

    - by GaretJax
    I'd like to install Gentoo Linux on my MacBook Pro while keeping my current OS X installation. I currently have OS X installed on a RAID 0 across two 160 GB Intel SSDs, and I'd like to create a new partition for Gentoo alongside OS X without losing the RAID setup. But from what I read on the net, Apple's software RAID is poorly (read "not at all") supported:

    - Boot Camp refuses to create a Windows partition on a RAID volume
    - rEFIt is not able to boot an OS from a software RAID
    - even Apple's recovery partition for Lion can't be created on a RAID volume

    Is there a way to dual-boot my MacBook while keeping the RAID setup?

  • BASH_ENV ignored on Solaris?

    - by Peeter Joot
    In my .bash_profile, which is executed for both my interactive and non-interactive logins, I have:

      BASH_ENV=$HOME/.myinteractivestuff
      export BASH_ENV

    Doing this with bash on Linux works fine, but on Solaris the file is not sourced:

      bash --version
      GNU bash, version 3.00.16(1)-release (sparc-sun-solaris2.10)

    Curiously, if I invoke screen within my login shell, BASH_ENV is then read. Are there any restrictions on when $BASH_ENV is respected on Solaris? In my case I'm logging in over ssh using PuTTY, but I also tried Unix-to-Unix ssh and telnet and see the same thing. Note that I know my BASH_ENV variable assignment is being executed, since I can echo the variable after login without any trouble (i.e. ruling out the obvious possibility that my .bash_profile is also not being read).
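
    A quick way to check whether a given bash honours the variable at all, as a sketch: BASH_ENV is only consulted by non-interactive shells, so start one explicitly and watch for the file being sourced:

      # drop a marker into the file, then run a non-interactive shell
      echo 'echo BASH_ENV was sourced' >> "$HOME/.myinteractivestuff"
      BASH_ENV="$HOME/.myinteractivestuff" bash -c 'true'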

  • How to show images in outline view in Word 2010?

    - by Zonder
    I use Word a lot in outline view. In that view it is not possible to see images (while it was possible in 2007). When I paste an image in outline view, it automatically changes the view to Print Preview. Is this a limitation introduced in 2010? If not, how do I get rid of it? I tried to read through all the options, but I didn't find a matching checkbox. NOTE FOR BOUNTY: I started a bounty because this problem is really annoying for me. Please read the existing answer(s) and comment(s) before answering. Thanks.

  • .htaccess with addondomain and https ssl

    - by admon
    I have a main domain and an addon domain.

    1) When surfing to ftp.addondomain.com or mail.addondomain.com, for some reason you end up at the main domain. (Normally this would not be a problem, but I still want a complete separation.) Do you know the .htaccess syntax to redirect (.*).addondomain.com to addondomain.com, and where do I put the code: in the addon domain's .htaccess or in the main domain's .htaccess? I.e. any_words.addondomain.com should be forwarded to addondomain.com, so all of these: dsdhf.addondomain.com, ftp.addondomain.com, mail.addondomain.com ... will be forwarded to addondomain.com (i.e. without the prefix).

    2) Same question for https://. The main domain has SSL; the addon domain does not. For some reason, when surfing to https:// addondomain.com you get http:// maindomain.com (the address bar shows https:// addondomain.com, but the pages you see are the pages of the main domain). I would like a user who surfs to https:// addondomain.com to end up (since there is no SSL for the addon domain) at http:// addondomain.com, or alternatively to get an error message. I do not want him redirected to the main domain.

    Please, if you can, write what I should add to the .htaccess, and also let me know where to put the code: in the addon domain's .htaccess or in the main domain's .htaccess. Thanks.
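
    An untested sketch for the addon domain's .htaccess (mod_rewrite assumed to be available; domain names are placeholders). Note that for question 2 the certificate warning itself cannot be avoided, since the TLS handshake happens before mod_rewrite runs; and because the https request is actually served out of the main domain's virtual host, the second rule may need to live in the main domain's .htaccess instead:

      RewriteEngine On

      # 1) send any subdomain of addondomain.com to the bare domain
      RewriteCond %{HTTP_HOST} ^(.+)\.addondomain\.com$ [NC]
      RewriteRule ^(.*)$ http://addondomain.com/$1 [R=301,L]

      # 2) send https requests for the addon domain back to plain http
      RewriteCond %{HTTPS} on
      RewriteCond %{HTTP_HOST} ^addondomain\.com$ [NC]
      RewriteRule ^(.*)$ http://addondomain.com/$1 [R=301,L]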

  • Windows Server 2003 SP2 as LDAP replica master for Mac OS X 10.6

    - by FrancoR
    Hello there, we have a single domain controller on Windows 2003 with a few child domains. All the users are in the main DC. We have already created a connection from AD to a Mac Xserve 10.6 and can read all the users, but:

    1. If the DC (or the network) goes down, the Mac loses all the users, so no file access, no email, no nothing.
    2. The users are read-only: the Mac admin cannot reset passwords, change attributes and so on.

    What we need is a stable environment where both the AD admins and the LDAP admins can manage the users; if one server goes offline, the users of the other server should still work (email, shared folders) just fine. Thanks in advance.

    P.S. We already tried to connect Mac OS X to Windows LDAP instead of AD, but we're unable to do it: Mac OS X requires a DNS IP (gotcha), an admin user and password (OK), and a root LDAP password, and we're unable to find any reference to the latter in Windows 2003.

  • Cross-platform centralized desktop password manager

    - by Dave
    I have been using KeePass as a desktop password manager on Windows for many years. Love it! However, I now need to work on different platforms much of my day (Windows 7, Windows XP, Mac OS X, Ubuntu, and openSUSE). I'm looking for a password manager I can share across all these platforms. My ideal solution would:

    - Run natively (not in a virtual machine) on all platforms.
    - Store the "official" copy of the password data on the local network, so I can get to it from any and all machines. It is OK if it locks (or becomes read-only) while one client is accessing it.
    - Keep a local cached copy (read-only is fine) so I can still get to my passwords when disconnected from the network.

    Does any such beast exist?

  • Experience with Intel X25-M 160GB and Oracle

    - by derobert
    We're considering building an Oracle database with 12 Intel X25-M G2 160 GB drives in software RAID 10, running Linux. The database gets some very heavy write activity during the early-morning data load; other than that it is mostly read-only (and the read load is fairly minimal). We're currently running on 11 150 GB VelociRaptors (also Linux software RAID 10), and are hoping the X25-Ms will speed up the data load. We currently have redo on different disks than the rest of the data. I'm wondering a few things:

    - Any experience with using X25-M drives for databases? The X25-E is unfortunately beyond our budget.
    - Would it hurt to separate redo off to some magnetic (non-SSD) drives, say 2 (RAID 1) or 4 (RAID 10) Seagate Constellations?

  • Access MacZFS via network from XBMC

    - by AreusAstarte
    I have a ZFS RAID (a zpool with three drives) hooked up to my Mac that I want to share on my LAN, so that the XBMC client on the OUYA console hooked up to the television can read the drive and use it to stream my movies and television shows to the television set. I've searched around for a bit but so far haven't found anything that helped. I know that when connecting to the Mac with SSH, I can't just access the drive, due to the different formatting. What do I have to do so that XBMC will be able to read it? How do I share it?

  • Is there a simple but good To Do Manager app for the Mac?

    - by Another Registered User
    Every morning I think about what I am going to do today. So I take a piece of paper and start to write things like:

      [ ] Call Mr. XYZ
      [ ] Answer support e-mails
      [ ] Reduce website header height by 20 px
      [ ] Create new navigation bar icons

    And every time I'm done with something, I paint a checkmark in the square. On paper. It would be fun to have something like this as an application. But I don't want a heavy project-management tool or e-mail integration. It should be download, install, use, without fat configuration or a steep learning curve. Usually I don't schedule my to-dos; I just write down every day what I want to accomplish today. In my experience it doesn't make sense to plan what to do next week, because next week everything looks totally different. It would be cool if such a simple utility existed. At the moment I just use TextEdit and delete rows which are done. With a nice interface, this would be much more fun.

  • What method of MySQL mirroring should I use for this?

    - by user45745
    I'm running a web application hosting service (basically hosting forums for free), and I have two remote servers at my disposal. The code for the application is stored on both servers and isn't a problem, but I'm wondering how to deal with the databases. When someone goes to a site *.example-host.com, they are sent to one of the two servers, and both must be capable of loading the forums from a database. The database must also have write access, for when new members register or post topics, etc. The main requirement is speed, but uptime is also important (if a server goes down, the site should still work). I have a few options, but I'm inexperienced and not sure which to go with:

    1) [PHP] Split the forum records 50:50 between the two servers. If a server does not have the record for a requested forum, it can fetch it from the other server over remote MySQL and load it. This idea sounded okay, until I realised that 50% of the time users would be waiting significantly longer for pages to load. I also realised that if one of the servers went down, half the forums would be inaccessible and registrations would have to be disabled.

    2) [MySQL] Dual-master replication. This would mirror the two databases and sounds perfect, but I've heard that it can be very problematic. I don't know how fast it is.

    3) [MySQL] Use standard replication, distribute read-only queries across both nodes and send read/write queries to the master. This sounds like a good option, but again, I'm not sure about speed. I also don't know what would happen if the master server went down.

    If you have any other suggestions, please post them :)
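
    For option 3, the usual starting point is standard asynchronous replication; a minimal sketch of the my.cnf pieces (server IDs and log names are placeholders, and the slave still needs a CHANGE MASTER TO statement with the coordinates from SHOW MASTER STATUS on the master):

      # master my.cnf
      [mysqld]
      server-id = 1
      log-bin   = mysql-bin

      # slave my.cnf
      [mysqld]
      server-id = 2
      relay-log = mysql-relay-bin
      read_only = 1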

  • How do I get these permissions working right so Apache can work with the files?

    - by cosmicbdog
    I am having a go at setting up my own Apache server and can't seem to get my head around the permissions. Let's say I grab a file from somewhere off the web and it has permissions of 600. I then upload this file via FTP to a user directory, which is also an Apache virtual site, so the file retains its permissions of 600. This means that the user can read the file, but Apache can't: it will be forbidden. What is the simplest solution so that Apache can read and write whatever files end up in the user's directory? Can Apache be granted some sort of root power over files in a directory?
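
    One common arrangement, as a sketch (user name, group name and path are assumptions; adjust to the distro): keep the files owned by the user but group-owned by Apache, so they stay private to everyone else:

      # hand the group to apache, then 640 for files and 750 for directories
      chown -R someuser:apache /var/www/site
      find /var/www/site -type f -exec chmod 640 {} +
      find /var/www/site -type d -exec chmod 750 {} +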

  • How can I get (g)Vim to display the character count of the current file?

    - by OwenP
    I like to write tutorials and articles for a programming forum I frequent. This forum has a character limit per post. I've used Notepad++ in the past to write posts and it keeps a live character count in the status bar. I'm starting to use gVim more and I really don't want to go back to Notepad++ at this point, but it is very useful to have this character count. If I go over the count, I usually end up pasting the post into Notepad++ so I can see when I've trimmed enough to get by the limit. I've seen suggestions that :set ruler would help, but this only gives the character count via the current column index on the current line. This would be great if I didn't use paragraph breaks, but I'm sure you'd agree that reading several thousand characters in one paragraph is not comfortable. I read the help and thought that rulerformat would work, but after looking over the statusline format it uses I didn't see anything that gives a character count for the current buffer. I've seen that there are plugins that add this, but I'm still dipping my toes into gVim and I'm not sure I want to load random plugins before I understand what they do. I'd prefer to use something built in to vim, but if it doesn't exist it doesn't exist. What should I do to accomplish my goal? If it involves a plugin, do you use it and how well does it work?
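
    For what it's worth, two built-in routes, as a sketch: in normal mode, g CTRL-G already prints byte, character and word counts for the buffer (or for the selection in visual mode). For a live count, newer Vims expose wordcount(), which can be wired into the ruler (the function's availability is an assumption about your build):

      " show column info plus a live character count for the buffer
      set ruler
      set rulerformat=%15(%c%V\ %{wordcount().chars}\ chars%)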
