Search Results

Search found 10622 results on 425 pages for 'shared hosting'.

  • How to enable Printer Sharing on Web Server 2008?

    - by FarrEver
    I am installing Web Server 2008 for my home network. I have two USB printers connected to this machine and want to share them so that my other machines can print to them. (I previously had Win Server 2003 on this machine and was able to share both printers fine.) The File and Printer Sharing inbound rule for my Private network is enabled, but when I go into Network and Sharing Center and try to turn printer sharing ON, it never sticks; it always stays OFF. When I go to my installed printers and try to share them, I get the following error message: "Printer settings could not be saved. Remote connections to the Print Spooler are blocked by a policy set on your machine." I have not been able to find the policy on my machine that is causing this. I have searched a lot over the past few days; most of the results say that what I have done should work, while a number of others say printer sharing on Web Server 2008 is not allowed and you have to hack it. Has anyone installed Web Server 2008 and shared printers from it? If so, what detailed steps did you take to get this to work?
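
    The quoted error is the wording produced when the "Allow Print Spooler to accept client connections" Group Policy is disabled, so that is one concrete place to look (gpedit.msc, under Computer Configuration > Administrative Templates > Printers). A minimal sketch of enabling it via its registry value, assuming the standard policy location:

        rem Enable remote connections to the Print Spooler, then restart it
        rem (run from an elevated prompt; verify the policy in gpedit.msc).
        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Printers" ^
            /v RegisterSpoolerRemoteRpcEndPoint /t REG_DWORD /d 1 /f
        net stop spooler && net start spooler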

  • Java VM problem in OpenVZ

    - by Ginnun
    Hi, I bought a VPS for hosting my Java needs, but I can't run Java on it. Everything about Java is correctly installed, but when I try to run it ("java -version", for example) I get this error:

        Error occurred during initialization of VM
        Could not reserve enough space for object heap
        Could not create the Java virtual machine.

    I don't think this is a Java-centred problem; it's out of memory for sure. I contacted the VPS admin, but he says everything is fine: "you have 2 GB RAM, expandable to 4 GB!" I did a bit of searching on the subject; here is my BEANS file (numbers converted to human-readable form using a script). By the way, do JVM heap allocations count against kmemsize or privvmpages? How much RAM does this configuration allow me to allocate with the JVM for a single process?

        resource      held       maxheld    barrier     limit       failcnt
        kmemsize      2.25 mb    2.35 mb    13.71 mb    14.10 mb    0
        lockedpages   0          0          1024.00 kb  1024.00 kb  0
        privvmpages   20.54 mb   21.33 mb   256.00 mb   272.00 mb   156
        shmpages      5.00 mb    5.00 mb    84.00 mb    84.00 mb    0
        numproc       13         14         240         240         0
        physpages     9.36 mb    9.45 mb    0           MAX_ULONG   0
        vmguarpages   0          0          132.00 mb   MAX_ULONG   0
        oomguarpages  9.36 mb    9.45 mb    MAX_ULONG   MAX_ULONG   0
        numtcpsock    3          3          360         360         0
        numflock      3          3          188         206         0
        numpty        2          2          16          16          0
        numsiginfo    0          1          256         256         0
        tcpsndbuf     69.17 kb   69.17 kb   1.64 mb     2.58 mb     0
        tcprcvbuf     48.00 kb   48.00 kb   1.64 mb     2.58 mb     0
        othersockbuf  6.80 kb    6.80 kb    1.07 mb     2.00 mb     0
        dgramrcvbuf   0.00 kb    0.00 kb    256.00 kb   256.00 kb   0
        numothersock  9          10         360         360         0
        dcachesize    0.00 kb    0.00 kb    3.25 mb     3.46 mb     0
        numfile       704        746        9312        9312        0
        numiptent     10         10         128         128         0

    Thanks in advance!
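
    A hedged reading of the table above: the only counter with failures is privvmpages (failcnt 156), whose barrier is 256 MB, nowhere near the advertised 2 GB. On OpenVZ the JVM's up-front heap reservation is virtual memory and is charged against privvmpages, which is why even "java -version" dies. A minimal sketch of forcing the JVM under that barrier:

        # Cap the heap (and thread stacks) explicitly so the startup
        # reservation fits inside the 256 MB privvmpages barrier:
        java -Xmx128m -Xss256k -version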

  • Libvirt / QEmu Machine Fails and Refuses Restart Because of Memory Allocation Errors

    - by Elmar Weber
    I'm having a problem with libvirt. On a system restart all virtual machines (VMs) start without a problem and keep running. Then at some point in time a set of machines shuts down, according to their logs. When I try to restart a machine, I get an error that memory allocation failed, although more than enough memory is free:

        server ~ # free
                     total       used       free     shared    buffers     cached
        Mem:      16176648   16025476     151172          0     285432     950300
        -/+ buffers/cache:   14789744    1386904
        Swap:            0          0          0

        server ~ # virsh start zimbra
        error: Failed to start domain zimbra
        error: Unable to read from monitor: Connection reset by peer

        server ~ # tail -n 4 /var/log/libvirt/qemu/zimbra.log
        LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 3072 -smp 2,sockets=2,cores=1,threads=1 -name zimbra -uuid d05ddb7a-83c4-a77b-d8bc-a322648520cf -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/zimbra.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -drive file=/var/lib/libvirt/images/zimbra.img,if=none,id=drive-ide0-0-0,format=raw -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -netdev tap,fd=19,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:21:a9:ad,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -usb -vnc 192.168.1.2:25 -k de -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4
        char device redirected to /dev/pts/2
        Failed to allocate 3221225472 B: Cannot allocate memory
        2012-07-06 08:42:56.076+0000: shutting down

        server ~ # uname -a
        Linux server 3.2.0-26-generic #41-Ubuntu SMP Thu Jun 14 17:49:24 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    The system is an Ubuntu 12.04 server. The problem seems to have started with the last restart, which was due to a number of package upgrades and a kernel upgrade. I tried booting with the previous kernel; the problem persists. I have not been able to pinpoint an exact event when the machines fail, but they fail at nearly the same time. The last time, a duplicity job was running, though this was not always the case. Any suggestions on how to debug this? Best regards, elm
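
    One hedged observation on the free output above: once buffers/cache are excluded, the host actually has only about 1.3 GB available (the -/+ buffers/cache line), so a guest asking for 3 GB (-m 3072, the 3221225472-byte allocation in the log) can legitimately fail while "free" still looks healthy at first glance. A short sketch for confirming where the memory went:

        # Who is holding the host's 16 GB?
        free -m                          # note the -/+ buffers/cache line
        ps aux --sort=-rss | head        # largest resident processes
        grep -i commit /proc/meminfo     # overcommit accounting (no swap here)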

  • Different approaches to sharing files over a local network & playlist collaboration

    - by exTyn
    I know that I can use Google to find methods of sharing files over a local network [1]. But I have never shared files over a local network before, and I want to do it in a good, professional way. Also, this could be a good community wiki, I think. What I am asking is: what are the pros and cons of the different methods of sharing files over a local network? In my case, I need to share files between Linux & Win 7, and I want it to be secure (= without access for anyone but me & the people in my room). Another question (connected with the above topic) is about playing music over the local network. Let's say I live with 2 other guys in a room, one of us has speakers, and we want to collaborate on playlists (e.g. everyone chooses 3 songs to be played). Is it possible? How would we do this? I am asking this question on SuperUser because it is connected with both hardware & software (the network, connecting computers, software for managing playlists over the network, etc.); I think it is the most accurate place for such a question (I have considered SO and SF). [1] And I have already done this! But I do not have experience in this field (sharing files over a local network), so I am asking about pros and cons.
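
    For the Linux-to-Win 7 case specifically, Samba is the usual baseline the other methods get compared against (native Windows support, per-user setup on the Linux side). A minimal hedged sketch of a share restricted to named people, added to /etc/samba/smb.conf (share name, path and user names are placeholders):

        [roomshare]
            path = /srv/share
            browseable = yes
            read only = no
            valid users = me roommate1 roommate2

        # each allowed user then needs a Samba password:
        #   smbpasswd -a me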

  • How to replace the domain name in a Wordpress database?

    - by Cristian
    I have a Wordpress database which was installed in a development environment; thus, all references to the site itself use a fixed IP address (say 192.168.16.2). Now I have to migrate that database to a new Wordpress installation on a hosting provider. The problem is that the SQL dump contains a lot of references to the IP address, and I have to replace them with my_domain.com. I could use sed or some other command to change that from the command line; the problem is that a lot of the configuration data is stored as serialized arrays. As you know, serialized arrays use markers like s:4: to record how many characters an element has, and thus, if I just replace the IP with the domain name, the configuration data will get corrupted. I used an app for Windows some years ago that allows changing values in a database while taking care of the serialized arrays. Unfortunately, I forgot the name of the app... so the question is: do you know any app that allows me to do what I want?
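
    If WP-CLI is available on the new host (an assumption), its search-replace command recomputes the length prefixes of serialized strings during the replacement, which is exactly the corruption problem described above. A minimal sketch:

        # Preview first, then run for real (URLs are placeholders):
        wp search-replace 'http://192.168.16.2' 'http://my_domain.com' --dry-run
        wp search-replace 'http://192.168.16.2' 'http://my_domain.com'

    Failing that, interconnect/it's "Search Replace DB" PHP script is a commonly used serialization-aware alternative.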

  • Why do my backups fail when I target a network share hosted by a Synology DS211 disk station?

    - by Larry
    My backups fail when I target a network share hosted by a Synology DS211 disk station. They work fine if I target a different network share (e.g. \\server1\data\larry). When I run the following command:

        wbadmin start backup -backupTarget:\\diskstation\backup-larry -include:C:

    this is what I get:

        wbadmin 1.0 - Backup command-line tool
        (C) Copyright 2004 Microsoft Corp.

        Note: The backed up data cannot be securely protected at this destination.
        Backups stored on a remote shared folder might be accessible by other people
        on the network. You should only save your backups to a location where you
        trust the other users who have access to the location or on a network that
        has additional security precautions in place.

        Retrieving volume information...
        This will back up volume WIN7(C:) to \\diskstation\backup-larry.
        Do you want to start the backup operation?
        [Y] Yes [N] No y

        Note: The list of volumes included for backup does not include all the
        volumes that contain operating system components. This backup cannot be
        used to perform a system recovery. However, you can recover other items
        if the destination media type supports it.

        The backup operation to \\diskstation\backup-larry is starting.
        Creating a shadow copy of the volumes specified for backup...
        Creating a shadow copy of the volumes specified for backup...
        The backup operation stopped before completing.

        Summary of the backup operation:
        ------------------
        The backup operation stopped before completing.
        Detailed error: Access is denied.
        Windows Backup failed to write the file:
        '<backup location>\WindowsImageBackup\<Computer Name>\MediaId'.
        Access is denied.

    The backup creates the path \\diskstation\backup-larry\WindowsImageBackup\LARRY-MYDOMAIN\ but it is empty. I definitely have read/write access on the target directory (\\diskstation\backup-larry); I have verified this by looking at the permissions and by actually copying files to this location. Any suggestions?
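
    One hedged thing to try: when no credentials are given, wbadmin writes to the share in a machine context that a non-domain NAS may refuse even though interactive copies under your own account work. The command accepts explicit share credentials (the user name here is a placeholder for a DiskStation account with write access):

        wbadmin start backup -backupTarget:\\diskstation\backup-larry ^
            -include:C: -user:diskstation-user -password:secret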

  • Cisco ASA dropping IPsec VPN between itself and a CentOS server

    - by sebelk
    Currently we're trying to set up an IPsec VPN between a Cisco ASA (version 8.0(4)) and a CentOS Linux server. The tunnel comes up successfully, but for some reason we can't figure out, the firewall is dropping packets from the VPN. The IPsec settings on the ASA are as follows:

        crypto ipsec transform-set up-transform-set esp-3des esp-md5-hmac
        crypto ipsec transform-set up-transform-set2 esp-3des esp-sha-hmac
        crypto ipsec transform-set up-transform-set3 esp-aes esp-md5-hmac
        crypto ipsec transform-set up-transform-set4 esp-aes esp-sha-hmac
        crypto ipsec security-association lifetime seconds 28800
        crypto ipsec security-association lifetime kilobytes 4608000
        crypto map linuxserver 10 match address filtro-encrypt-linuxserver
        crypto map linuxserver 10 set peer linuxserver
        crypto map linuxserver 10 set transform-set up-transform-set2 up-transform-set3 up-transform-set4
        crypto map linuxserver 10 set security-association lifetime seconds 28800
        crypto map linuxserver 10 set security-association lifetime kilobytes 4608000
        crypto map linuxserver interface outside
        crypto isakmp enable outside
        crypto isakmp policy 1
         authentication pre-share
         encryption aes
         hash sha
         group 2
         lifetime 28800
        crypto isakmp policy 2
         authentication pre-share
         encryption aes-256
         hash sha
         group 2
         lifetime 86400
        crypto isakmp policy 3
         authentication pre-share
         encryption aes-256
         hash md5
         group 2
         lifetime 86400
        crypto isakmp policy 4
         authentication pre-share
         encryption aes-192
         hash sha
         group 2
         lifetime 86400
        crypto isakmp policy 5
         authentication pre-share
         encryption aes-192
         hash md5
         group 2
        group-policy linuxserverip internal
        group-policy linuxserverip attributes
         vpn-filter value filtro-linuxserverip
        tunnel-group linuxserverip type ipsec-l2l
        tunnel-group linuxserverip general-attributes
         default-group-policy linuxserverip
        tunnel-group linuxserverip ipsec-attributes
         pre-shared-key *

    Does anyone know where the problem is and how to fix it?
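
    One hedged suspect in the configuration above is the vpn-filter applied by group-policy linuxserverip: vpn-filter ACLs are evaluated against the decrypted traffic with the remote network as the source, and an entry written the "wrong way round" silently drops everything. A sketch for checking it (the network addresses are placeholders):

        show access-list filtro-linuxserverip
        ! vpn-filter entries are written remote-source -> local-destination:
        access-list filtro-linuxserverip extended permit ip 10.1.1.0 255.255.255.0 10.2.2.0 255.255.255.0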

  • How to fake an IP at localhost without a loopback adapter

    - by sexer
    How can I fake an IP on my own PC? For example, suppose there is an IP address, let's say 201.91.81.71; that host is somewhere outside of my network and is hosting a web server. How can I set up a website on my own PC so that when I point a browser at 201.91.81.71, it actually opens the website on my own PC? PS: I need this to work with IP addresses, not domain names, since I need to implement it for a non-web service. My first guess was installing a loopback adapter with 201.91.81.71 as its IP, but since the subnet sometimes works and sometimes doesn't, that isn't a stable solution. My second guess was adding a route to the routing table: route add 201.91.81.71 mask 255.255.255.255 192.168.1.2 (192.168.1.2 is the IP address of my NIC). If I could add this route it would work, but Windows doesn't let me do so. With route add 201.91.81.71 mask 255.255.255.255 127.0.0.1, it doesn't let me set 127.0.0.1 as the gateway when 201.91.81.71 isn't set on a NIC; that's why I sometimes set up the loopback adapter, after which this route is added automatically, but it needs a subnet mask which doesn't match the IP, and I cannot set 255.255.255.255. I'm in real trouble here. Can I get some help? Thanks.
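
    A hedged sketch for the loopback-adapter variant: the /32 mask that the GUI refuses can usually be set from the command line, which avoids the unstable-subnet problem altogether ("Loopback" is whatever the adapter is named in Network Connections):

        netsh interface ipv4 add address "Loopback" 201.91.81.71 255.255.255.255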

  • Xen domain migration locking problem

    - by brodie
    I am trying to live-migrate a VM (domain) between two Xen servers. I have Xen locking (xend-domain-lock = yes) configured with ocfs2 shared storage between them. The locking is working fine: if I try to start the VM on the secondary server, it refuses to start (which is correct). The problem I am having is with live migration: it seems to be trying to remove the lock twice. The first lock it removes is for "domain test"; the second is for "migrating-test", which does not exist. Should there be a lock for this "migrating-test" VM? These are the relevant options in the Xen config file:

        (xend-relocation-server yes)
        (xend-relocation-port 8002)
        (xend-relocation-address '')
        (xend-relocation-hosts-allow '')
        (xend-domain-lock yes)
        (xend-domain-lock-path /var/lib/xen/lock)

    This is the relevant section of the log:

        [2010-06-10 10:45:57 14488] DEBUG (XendDomainInfo:4054) Releasing lock for domain test
        [2010-06-10 10:45:57 14488] INFO (XendCheckpoint:474) SUSPEND shinfo 000c6ceb
        [2010-06-10 10:45:57 14488] INFO (XendCheckpoint:474) delta 21ms, dom0 95%, target 0%, sent 57Mb/s, dirtied 173Mb/s 111 pages
        4: sent 111, skipped 0, delta 6ms, dom0 100%, target 0%, sent 606Mb/s, dirtied 606Mb/s 111 pages
        [2010-06-10 10:45:57 14488] INFO (XendCheckpoint:474) Total pages sent= 131295 (0.99x)
        [2010-06-10 10:45:57 14488] INFO (XendCheckpoint:474) (of which 0 were fixups)
        [2010-06-10 10:45:57 14488] INFO (XendCheckpoint:474) All memory is saved
        [2010-06-10 10:45:57 14488] INFO (XendCheckpoint:474) Save exit rc=0
        [2010-06-10 10:45:57 14488] INFO (XendCheckpoint:123) Domain 22 suspended.
        [2010-06-10 10:45:57 14488] DEBUG (XendDomainInfo:2757) XendDomainInfo.destroy: domid=22
        [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:2227) Destroying device model
        [2010-06-10 10:45:58 14488] INFO (image:567) migrating-test device model terminated
        [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:2234) Releasing devices
        [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:2247) Removing vif/0
        [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:1137) XendDomainInfo.destroyDevice: deviceClass = vif, device = vif/0
        [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:2247) Removing vkbd/0
        [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:1137) XendDomainInfo.destroyDevice: deviceClass = vkbd, device = vkbd/0
        [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:2247) Removing console/0
        [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:1137) XendDomainInfo.destroyDevice: deviceClass = console, device = console/0
        [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:2247) Removing vbd/51712
        [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:1137) XendDomainInfo.destroyDevice: deviceClass = vbd, device = vbd/51712
        [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:2247) Removing vfb/0
        [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:1137) XendDomainInfo.destroyDevice: deviceClass = vfb, device = vfb/0
        [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:4054) Releasing lock for domain migrating-test
        [2010-06-10 10:45:59 14488] ERROR (XendDomainInfo:4070) Failed to remove unmanaged directory /var/lib/xen/lock/b01515ae-9173-03cb-0cb7-06f3dfbede8b.

  • Backing up a server and multiple clients

    - by inquam
    I'm running an Amahi server; it's basically a Fedora 14 x64 installation. I'm looking for a good solution to back up my 200 GB system drive on the server to an external USB/eSATA drive every night. I looked into using dd, but since other things might be running on the server at the same time, it didn't feel quite safe. I would like the backups to be incremental, so that the backups following the initial one are quite fast. The backup should also be bootable, or perhaps able to produce a bootable disk after booting from a CD or something. I would also like the server to be able to do similar backups of my clients running Ubuntu, Windows 7 x64, Windows 7 Starter, OS X Lion, Windows XP and so on; no applications backing up only shared folders or anything like that. My guess is that a client daemon would have to exist to lock the system and allow backup of a Windows system drive, which can otherwise be quite cranky. Booting a CD on a crashed client, connecting to the server, restoring the latest backup, and being up and running again is my ideal goal. Is there anything out there that would fit these needs?
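
    The nightly incremental part, at least, is cheap to sketch with rsync hard-link snapshots: unchanged files are hard-linked to the previous night, so every snapshot is a complete browsable tree but only costs the changed data. This does not address bare-metal Windows restores; the paths are placeholders:

        #!/bin/sh
        # Nightly snapshot of / to the external drive, hard-linking against
        # the previous run (-x stays on the root filesystem):
        TODAY=$(date +%F)
        rsync -aHx --delete \
              --link-dest=/mnt/usbdisk/backup/latest \
              / /mnt/usbdisk/backup/$TODAY/
        ln -sfn /mnt/usbdisk/backup/$TODAY /mnt/usbdisk/backup/latest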

  • How do I setup routing for 2 companies with different Internet connections on the same LAN?

    - by Clint Miller
    Here's the setup: two companies (A & B) share office space and a LAN. A second ISP is brought in; company A wants its own Internet connection (ISP A) and company B wants its own Internet connection (ISP B). VLANs are deployed internally to separate the two companies' networks (company A: VLAN 1, company B: VLAN 2, shared VoIP: VLAN 3). With separate VLANs it's simple enough to use separate DHCP servers (or separate scopes on the same server) to hand each company the default gateway for its own Internet connection. Static routes can be created on each gateway pointing at the other company's VLAN and at the voice VLAN, so that all nodes are reachable as expected. However, I think this is a form of asymmetric routing, right? (The path from node A1 to node B1 is not the same as the path back from node B1 to node A1.) Can I set up policy-based routing to correct this? In that case, can I assign the same default gateway to every device on all VLANs and create a routing policy on an L3 switch that looks at the source address and forwards traffic to the appropriate next hop? I want the routing logic to go like this: if the destination address is known, forward the traffic (traffic destined for a different VLAN); if the destination address is unknown, forward the traffic to ISP A's gateway if the source address is on VLAN A, or to ISP B's gateway if the source address is on VLAN B. Am I thinking about this problem in the correct way? Is there another way to solve this that I am overlooking?
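
    That source-based logic maps directly onto policy-based routing on a Cisco-style L3 switch. A hedged sketch (subnets and next hops are placeholders; the deny lines keep inter-VLAN and VoIP traffic on the switch's normal routing table):

        ip access-list extended COMPANY-A-SRC
         deny   ip any 10.0.0.0 0.255.255.255     ! internal destinations: route normally
         permit ip 10.1.0.0 0.0.255.255 any       ! VLAN 1 sources headed to the internet
        ip access-list extended COMPANY-B-SRC
         deny   ip any 10.0.0.0 0.255.255.255
         permit ip 10.2.0.0 0.0.255.255 any
        route-map INTERNET-SPLIT permit 10
         match ip address COMPANY-A-SRC
         set ip next-hop 203.0.113.1              ! ISP A gateway
        route-map INTERNET-SPLIT permit 20
         match ip address COMPANY-B-SRC
         set ip next-hop 198.51.100.1             ! ISP B gateway
        interface Vlan1
         ip policy route-map INTERNET-SPLIT
        interface Vlan2
         ip policy route-map INTERNET-SPLIT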

  • Setting up Windows network on Xen

    - by samyboy
    I'm trying to install a Windows XP server in a Xen environment. The OS boots fine; unfortunately, I can't figure out how to set up the network. Dom0 is a Debian Lenny host currently running around 10 Linux virtual servers. Windows tells me I have a "limited connection": it gets no DHCP response, nor can it reach other hosts on the network. Here is the Xen client config file:

        kernel = '/usr/lib/xen-3.2-1/boot/hvmloader'
        builder = 'hvm'
        memory = '1024'
        device_model='/usr/lib/xen-3.2-1/bin/qemu-dm'
        acpi=1
        apic=1
        pae=1
        vcpus=1
        name = 'winexchange'

        # Disks
        disk = [ 'phy:/dev/wnghosts/exchange-disk,ioemu:hda,w', 'file:/mnt/freespace/ISO/DVD1_Installation.iso,ioemu:hdc:cdrom,r' ]

        # Networking
        vif = [ 'mac=00:16:3E:0A:D0:1B, type=ioemu, bridge=xenbr0']

        # Video
        stdvga=0
        serial='pty'
        ne2000=0

        # Behaviour
        boot='c'
        sdl=0

        # VNC
        vfb = [ 'type=vnc' ]
        vnc=1
        vncdisplay=1
        vncunused=1
        usbdevice='tablet'

    Server config (/etc/xen/xend-config.sxp):

        (network-script network-bridge)
        (network-script network-dummy)
        (vif-script vif-bridge)
        (dom0-min-mem 512)
        (dom0-cpus 0)
        (vnc-listen '0.0.0.0')

    Since I use Debian, I had to create a link like this: /etc/xen/qemu-ifup -> /etc/xen/scripts/qemu-ifup. What did I do wrong? Please tell me if you need more info (logs, etc.).
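
    A hedged guess from the xend-config.sxp above: it declares network-script twice, and only one of the two lines can take effect; if the network-dummy line is the one being honoured, no bridge ever gets wired up for new domains, which would match the "limited connection" symptom. A sketch keeping only the bridge script:

        (network-script network-bridge)
        (vif-script vif-bridge)
        # after editing, restart xend and confirm the bridge exists:
        #   /etc/init.d/xend restart
        #   brctl show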

  • Configuring CakePHP on HostGator

    - by yaeger
    I have absolutely no idea what I am doing wrong here. I have followed just about every guide there is to installing CakePHP on shared hosting, and I am still having problems; I have also started over each time when following a new guide. Maybe someone can help me out, as I am out of options. Here is my current layout:

        /
          app
            webroot
          vendors
          lib
          cake
          public_html
            .htaccess
            index.php
          plugins

    I have configured the index.php file in public_html to point to the correct files, and I have done the same in the index.php located in the webroot folder. I am getting an Internal 500 server error, and it says to check my logs for what the error specifically is; however, no logs are being generated. When I remove the .htaccess file from the public_html folder, I get the following errors:

        Warning: require(/app/webroot/index.php) [function.require]: failed to open stream: No such file or directory in /home/user/public_html/index.php on line 40

        Fatal error: require() [function.require]: Failed opening required '/app/webroot/index.php' (include_path='.:/usr/lib/php:/usr/local/lib/php') in /home/user/public_html/index.php on line 40

    Line 40 is:

        require APP_DIR . DS . WEBROOT_DIR . DS . 'index.php';

    with DS = "/" and WEBROOT_DIR = "app". Anyone have any suggestions? I am at a loss as to what I am doing wrong.
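
    One hedged observation: in the warning above, the require resolves to /app/webroot/index.php (an absolute path from the filesystem root), which suggests ROOT is empty or relative in public_html/index.php. A sketch of the defines for a layout like this one, CakePHP 1.x style (the home directory is a placeholder):

        // in public_html/index.php:
        define('ROOT', DS.'home'.DS.'user');        // directory containing app/
        define('APP_DIR', 'app');
        define('CAKE_CORE_INCLUDE_PATH', DS.'home'.DS.'user');  // directory containing cake/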

  • Switching DNS server providers

    - by Yoav Aner
    I'm trying to wrap my head around something that I thought I kind of understood, but clearly there's a piece missing. We're currently using Zerigo as our primary DNS, with slave DNS running on Linode. This works quite well; however, recent DDoS attacks on Zerigo meant that while DNS queries were still being resolved, we were unable to make any DNS changes. Since we rely on DNS changes in our own infrastructure, I'm looking to improve this somehow. I'd rather not ditch Zerigo completely, and I realise that this or a similar problem can happen with ANY primary DNS hosting provider. It might not be DDoS; it could be a bug on their servers, or anything else that means we can no longer issue updates. For this I want a fallback option: a completely independent (primary) DNS provider (maybe AWS), which we will keep in sync manually and switch over to when there's a problem. This brings me to my question: how do I make sure we can switch providers quickly enough? Specifically, our registrar holds a list of name servers, but no settings like TTL etc. How do DNS clients know to use the newly updated name server records? Is this configured in the SOA? But the SOA itself is hosted with the DNS provider, and we might not be able to update it... This is not a question about a one-time move, which can be planned, scheduled and tested, but about being able to switch when things are half-broken.
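
    One point that can be checked concretely: the delegation NS records that resolvers follow live in the parent zone (the registry's servers, not yours), and carry the parent's TTL rather than anything in your SOA; for .com delegations that TTL is commonly 172800 seconds (48 hours). A sketch of inspecting it (example.com is a placeholder):

        dig NS example.com @a.gtld-servers.net +norecurse
        # the number in the authority section is the parent-side TTL, i.e. the
        # worst-case time for resolvers to notice a nameserver switch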

  • Sharp MX2600N driver cannot see all the paper trays

    - by Frank R
    I am wondering if someone has experienced this problem and has a solution. I took over a network about a month ago, so it is hard to know what truly worked and didn't work before. However, the end users report that they used to be able to choose the tray they wanted in the printing preferences of the driver for the Sharp MX2600N and then send the print job. Now they only have two trays to choose from instead of the four that exist. The only thing I have changed that might relate to this network device is the admin password on the domain controller. I hadn't made any changes to the Sharp device or the driver, until recently, when I tried updating the driver for this device, but that didn't fix anything. The domain controller is a Server 2008 VM running on Hyper-V, and it is pushing the shared printer out to the domain. In the web interface used to manage the device I can see all four paper trays; however, from the driver interface I can only see the top two, and there should be four. This is the same with the admin account on the server or on a workstation inside the domain, and also with a standard user account. I have even tried updating the tray status within the driver to show more trays, but that doesn't help. I am hoping there is someone out there who has experienced this problem and knows a solution.

  • How to choose optimal RAID settings on a PE2950

    - by javano
    I have some Dell PowerEdge 2950s with 4x 15k 150 GB Cheetah SAS drives in them. They are going to be VM hosts, CentOS running ESXi with Windows Server 2k8 guests. Some guests will be hosting IIS servers, others MSSQL servers. I am trying to choose the RAID virtual disk settings and can't decide which is more optimal given this situation.

    Read policy: the options are Read-Ahead, No-Read-Ahead and Adaptive Read-Ahead; the default is Read-Ahead. I will be making large sequential writes initially, writing out blank images for virtual machine hard drives (let's say 30 GB from /dev/zero, for example), so Read-Ahead seems good at first. But within the virtual machines, reads could be random from anywhere within their file systems, as they are IIS and MSSQL servers, so perhaps No-Read-Ahead is a better idea? Then again, Adaptive Read-Ahead sounds like a compromise between the two, but I don't know much about this option. How does it compare in performance to the others?

    Write policy: the options are write-back caching and write-through caching; the default is write-back. Write-through caching is safer than write-back, but at a performance expense. My thinking here is that in the event of a power loss, for example, it seems more likely (this is why I need some clarification!) that damage will occur to a guest VM with write-back caching enabled, so should I favour write-through? I have searched around and there is obviously no definitive answer, so I would like to find out what is best for my situation.
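
    For reference, both policies can be changed after the virtual disk is created, so they can be benchmarked per workload rather than decided once. A sketch using Dell OpenManage (OMSA), assuming it is installed and that the controller and virtual disk are both ID 0 (check with the report command first):

        omreport storage vdisk controller=0
        omconfig storage vdisk action=changepolicy controller=0 vdisk=0 \
            readpolicy=ara writepolicy=wb
        # readpolicy: ra | nra | ara    writepolicy: wb | wt
        # wb is generally only advisable with a healthy controller battery (BBU)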

  • Corrupt install of Ruby?

    - by wilhelmtell
    I'm having issues requiring 'digest/sha1':

        ~$ ./configure --prefix=$HOME/usr --program-suffix=19 --enable-shared
        ~$ make
        ~$ make install
        ~$ irb19
        irb(main):001:0> require 'digest/sha1'
        LoadError: dlopen(/Users/matan/usr/lib/ruby19/1.9.1/i386-darwin9.8.0/digest/sha1.bundle, 9): Symbol not found: _rb_Digest_SHA1_Finish
          Referenced from: /Users/matan/usr/lib/ruby19/1.9.1/i386-darwin9.8.0/digest/sha1.bundle
          Expected in: flat namespace
         - /Users/matan/usr/lib/ruby19/1.9.1/i386-darwin9.8.0/digest/sha1.bundle
                from (irb):1:in `require'
                from (irb):1
                from /Users/matan/usr/bin/irb19:12:in `<main>'
        irb(main):002:0>

    I know some standard modules require fine while others don't; if I require 'digest', for instance, that works fine. Just a couple of days ago I uninstalled Ruby and re-installed it. When I first installed Ruby, I installed it in a similar manner, from source, prefixed to my local $HOME/usr directory. I tried removing each and every file that make install installs and then re-installing, but that didn't help. Do you have an idea what the issue is and how to resolve it?
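
    A hedged suggestion: an undefined _rb_Digest_SHA1_Finish at dlopen time usually means sha1.bundle was linked against a stale digest library left over from the earlier build, so a fully clean rebuild is worth trying before anything more exotic:

        # Clear the previously installed extensions and all build products:
        rm -rf ~/usr/lib/ruby19/1.9.1/i386-darwin9.8.0/digest
        make distclean
        ./configure --prefix=$HOME/usr --program-suffix=19 --enable-shared
        make && make install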

  • PC can't detect second RAM module

    - by kulwinder
    I have a PC with 512 MB of RAM installed (motherboard manufacturer Micro-Star, chipset P4M800). The PC was running very slowly, so I decided to upgrade the RAM. I installed CPU-Z to check the RAM installed in the machine and also had a look at the stick itself: 512 MB PC3200 400 MHz DDR. My motherboard supports 200 MHz, but it was working OK. So I bought a 2 GB stick, after checking in the manual that the board supports up to 2 GB of RAM. I installed the new 2 GB PC3200 400 MHz stick, the same type as the old one, and plugged both in (even though the motherboard only supports up to 2 GB), but the system spec only shows 512 MB (minus 64 MB of shared VGA memory). I checked in CPU-Z and it detects both: slot 1 has 512 MB, slot 2 has 2048 MB. Comparing the screens for both slots, they are the same: voltage 2.5, frequencies 166 MHz and 200 MHz. The only difference is that the 2 GB stick's timings table shows 133 MHz, 166 MHz and 200 MHz, while the 512 MB stick shows only 166 MHz and 200 MHz. I checked on Google and can't seem to figure out what's wrong. If I plug in only the 2 GB stick, the PC doesn't boot, as if the RAM weren't working; with only the 512 MB stick plugged in, it seems OK. Please help.

  • Write permissions LAMP (Debian Lenny)

    - by letseatfood
    I am working on a PHP script that transfers files using FTP functions. It has always worked on my production server (which is a hosting service). The development server I have just set up (I am a novice with servers) is Debian Lenny with Apache2, PHP5, and MySQL5. The file transfer works correctly, but once a file has been written to the server, it has permissions of 600. This makes it impossible for me to view the file (a JPEG) in the web browser, as permission is denied. I have scoured the internet and even broken my server installation and reinstalled it trying to figure this out (which has been fun, nonetheless!). I know it is unwise to set 777 permissions on publicly accessible files, but even that will not solve the problem. The only thing that works is to chmod 777 thefile.jpg after it has been transferred, which is not a workable solution. I tried changing the owner of my site files to www-data per this post, but that also does not work. My user is mike, and it does not work whether the owner of the files is mike or root. Would somebody point me in the right direction? Thanks! And, of course, let me know if I can clarify anything.
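
    A hedged sketch of a fix from the script side: set the permissions explicitly after the upload with ftp_chmod, so the FTP daemon's restrictive default umask (the likely source of the 600 mode) stops mattering. Paths and credentials are placeholders:

        <?php
        // Upload, then make the file world-readable (rw-r--r--):
        $conn = ftp_connect('localhost');
        ftp_login($conn, 'mike', 'password');
        ftp_put($conn, '/var/www/uploads/thefile.jpg', 'thefile.jpg', FTP_BINARY);
        ftp_chmod($conn, 0644, '/var/www/uploads/thefile.jpg');
        ftp_close($conn);

    Alternatively, if the box runs vsftpd, local_umask=022 in /etc/vsftpd.conf changes the default for every upload; other FTP daemons have equivalent settings.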

  • Debian: What are these files in /sys/devices/pci0000:00/ for?

    - by muhuk
    I am running Debian Squeeze on an MSI M670 laptop. I have the following files on my root drive, each 256 MB:

        file:///sys/devices/pci0000:00/0000:00:05.0/resource1
        file:///sys/devices/pci0000:00/0000:00:05.0/resource1_wc

    Here is my lspci output:

        muhuk@debian:~$ lspci
        00:00.0 RAM memory: nVidia Corporation C51 Host Bridge (rev a2)
        00:00.2 RAM memory: nVidia Corporation C51 Memory Controller 1 (rev a2)
        00:00.3 RAM memory: nVidia Corporation C51 Memory Controller 5 (rev a2)
        00:00.4 RAM memory: nVidia Corporation C51 Memory Controller 4 (rev a2)
        00:00.5 RAM memory: nVidia Corporation C51 Host Bridge (rev a2)
        00:00.6 RAM memory: nVidia Corporation C51 Memory Controller 3 (rev a2)
        00:00.7 RAM memory: nVidia Corporation C51 Memory Controller 2 (rev a2)
        00:03.0 PCI bridge: nVidia Corporation C51 PCI Express Bridge (rev a1)
        00:05.0 VGA compatible controller: nVidia Corporation C51 [GeForce Go 6100] (rev a2)
        00:09.0 RAM memory: nVidia Corporation MCP51 Host Bridge (rev a2)
        00:0a.0 ISA bridge: nVidia Corporation MCP51 LPC Bridge (rev a3)
        00:0a.1 SMBus: nVidia Corporation MCP51 SMBus (rev a3)
        00:0a.3 Co-processor: nVidia Corporation MCP51 PMU (rev a3)
        00:0b.0 USB Controller: nVidia Corporation MCP51 USB Controller (rev a3)
        00:0b.1 USB Controller: nVidia Corporation MCP51 USB Controller (rev a3)
        00:0d.0 IDE interface: nVidia Corporation MCP51 IDE (rev a1)
        00:0e.0 IDE interface: nVidia Corporation MCP51 Serial ATA Controller (rev a1)
        00:0f.0 IDE interface: nVidia Corporation MCP51 Serial ATA Controller (rev a1)
        00:10.0 PCI bridge: nVidia Corporation MCP51 PCI Bridge (rev a2)
        00:10.1 Audio device: nVidia Corporation MCP51 High Definition Audio (rev a2)
        00:14.0 Bridge: nVidia Corporation MCP51 Ethernet Controller (rev a3)
        00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration
        00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map
        00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller
        00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control
        04:04.0 FireWire (IEEE 1394): O2 Micro, Inc. Firewire (IEEE 1394) (rev 02)
        04:04.2 SD Host controller: O2 Micro, Inc. Integrated MMC/SD Controller (rev 01)
        04:04.3 Mass storage controller: O2 Micro, Inc. Integrated MS/xD Controller (rev 01)
        04:09.0 Network controller: RaLink RT2561/RT61 rev B 802.11g

    I am speculating these have something to do with the shared RAM my GPU is using. But why a file on disk? And why two of them?
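
    A hedged clarification of the premise: /sys is a virtual filesystem, so these entries occupy no disk space at all. ls reports 256 MB because resource1 exposes the GeForce Go 6100's 256 MB PCI BAR for mmap, and resource1_wc is the same region mapped with write-combining enabled, hence exactly two of them. This is easy to confirm:

        du -h /sys/devices/pci0000:00/0000:00:05.0/resource1   # reports 0
        df -h /sys                                             # sysfs, no backing disk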

  • HTTP Error: 413 Request Entity Too Large

    - by Torben Gundtofte-Bruun
    What I have: an iPhone app that sends HTTP POST requests (XML format) to a web service written in PHP. This is on a hosted virtual private server, so I can edit httpd.conf and other files on the server and restart Apache.

    The problem: the web service works perfectly as long as the request is not too large, but around 1 MB is the limit. After that, the server responds with:

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>413 Request Entity Too Large</title>
        </head><body>
        <h1>Request Entity Too Large</h1>
        The requested resource<br />/<br />
        does not allow request data with POST requests, or the amount of data
        provided in the request exceeds the capacity limit.
        </body></html>

    The web service writes its own log file, and I can see that small messages are processed fine. Larger messages are not logged at all, so I guess something in Apache rejects them before they even reach the web service? Things I've tried without success (I restarted Apache after every change; these steps are incremental):

    1. Hosting provider's web-based configuration panel: disable mod_security.
    2. httpd.conf: LimitXMLRequestBody 0 and LimitRequestBody 0
    3. httpd.conf: LimitXMLRequestBody 100000000 and LimitRequestBody 100000000
    4. httpd.conf: SecRequestBodyLimit 100000000

    At this stage, Apache's error.log contains the message:

        ModSecurity: Request body no files data length is larger than the configured limit (1048576)

    It looks like my step #4 didn't really take, which would be consistent with step #1, but that does not explain why mod_security appears to be active after all. What more can I try, to get the web service to receive large messages?
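
    A hedged pointer: the error names the "no files" body limit, which is a separate ModSecurity directive from SecRequestBodyLimit and defaults to exactly the 1048576 bytes quoted in the log. If mod_security stays enabled, raising it should lift the 1 MB ceiling:

        SecRequestBodyNoFilesLimit 100000000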

  • Possible to IPSec VPN Tunnel Public IP Addresses?

    - by caleban
    A customer uses an IBM SAS product over the internet. Traffic flows from the IBM hosting data center to the customer network through Juniper VPN appliances. IBM says they're not tunneling private IP addresses; they say they're tunneling public IP addresses. Is this possible? What does it look like in the VPN configuration and in the packets? I'd like to know what the source/destination IPs and ports would look like in the encrypted, tunneled IPsec payload and in the IP packet carrying the IPsec payload:

        IPsec payload: source 1.1.1.101:1001   destination 2.2.2.101:2001
        IP packet:     source 1.1.1.1:101      destination 2.2.2.1:201

    Is it possible to send public IP addresses through an IPsec VPN tunnel? Is it possible for IBM to send a print job from a server on their network, using the server's static-NAT public address, over a VPN to a printer on a customer network, using the printer's static-NAT public address? Or can a VPN not do this? Can a VPN only work with interesting traffic to and from private IP addresses?
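
    For what it's worth, IPsec proxy IDs are just address ranges; nothing restricts "interesting traffic" to RFC 1918 space, so public addresses tunnel exactly the same way. A hedged sketch in Cisco-style syntax (the question's appliances are Junipers, but the idea is identical), using the example addresses above:

        ! public hosts selected as interesting traffic:
        access-list VPN-TRAFFIC extended permit ip host 1.1.1.101 host 2.2.2.101
        crypto map MYMAP 10 match address VPN-TRAFFIC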

  • Can any postfix guru help me determine how emails are still being sent via my server from unauthorized sources?

    - by Dave
    Hi all, I'm getting a little concerned, as I run a small server hosting a number of websites and managing the email for a few dozen people. Just recently I've had a couple of notifications from SpamCop alerting me that spam has been sent from my server, and when I look over the logs from time to time I can indeed see many repeated attempts of mail being sent from my server. Most of the time it gets knocked back by the destination servers, but sometimes it gets through. Unfortunately I'm no Linux or postfix expert; I can get by, but I had thought I had my machine locked down quite securely. I don't allow relaying, and when I check the online DNS/MX tools they tend to report my server as being OK, so I'm not sure where to take it now, and I'm hoping someone might be able to throw me a few pointers. I get lots of entries like this in my MAIL.INFO log:

        Jan 2 08:39:34 Debian-50-lenny-64-LAMP postfix/qmgr[15993]: 66B88257C12F: from=<>, size=3116, nrcpt=1 (queue active)
        Jan 2 08:39:34 Debian-50-lenny-64-LAMP postfix/qmgr[15993]: 614C2257C1BC: from=<[email protected]>, size=2490, nrcpt=3 (queue active)

    and:

        Jan 7 16:09:37 Debian-50-lenny-64-LAMP postfix/error[6471]: 0A316257C204: to=<[email protected]>, relay=none, delay=384387, delays=384384/3/0/0.01, dsn=4.0.0, status=deferred (delivery temporarily suspended: host mx.fakemx.net[46.4.35.23] refused to talk to me: 421 mx.fakemx.net Service Unavailable)
        Jan 7 16:09:37 Debian-50-lenny-64-LAMP postfix/error[6470]: 5848C257C20D: to=<[email protected]>, relay=none, delay=384373, delays=384370/3/0/0.01, dsn=4.0.0, status=deferred (delivery temporarily suspended: host mx.fakemx.net[46.4.35.23] refused to talk to me: 421 mx.fakemx.net Service Unavailable)

    Then there tend to be connection timeouts. From what I see, even though I have relaying disabled, something is getting by and trying to send. Any help will be greatly appreciated, and I can supply further logging/config info. Thanks.
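
    A hedged next step: the from=<> entries are bounces, which on a box that refuses relaying often means the mail is being injected locally (for example by a compromised web script) rather than relayed from outside. The queue itself records where a message entered the system:

        postqueue -p                 # list what is sitting in the queue
        postcat -q 0A316257C204      # full headers of one queued message;
                                     # the Received: chain (and, if present,
                                     # an X-PHP-Script header) shows the origin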

  • bind9 named.conf zones size limit

    - by mox601
    I am trying to set up a test environment on my local machine, and I am trying to start a DNS daemon that loads its configuration from a named.conf.custom file. As long as that file contains only 3-4 zones, the bind9 daemon loads fine, but when I use the config file I actually need (around 10,000 lines long), bind can't start up, and in the syslog I find this message:

        starting BIND 9.7.0-P1 -u bind
        Jun 14 17:06:06 cibionte-pc named[9785]: built with '--prefix=/usr' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--sysconfdir=/etc/bind' '--localstatedir=/var' '--enable-threads' '--enable-largefile' '--with-libtool' '--enable-shared' '--enable-static' '--with-openssl=/usr' '--with-gssapi=/usr' '--with-gnu-ld' '--with-dlz-postgres=no' '--with-dlz-mysql=no' '--with-dlz-bdb=yes' '--with-dlz-filesystem=yes' '--with-dlz-ldap=yes' '--with-dlz-stub=yes' '--with-geoip=/usr' '--enable-ipv6' 'CFLAGS=-fno-strict-aliasing -DDIG_SIGCHASE -O2' 'LDFLAGS=-Wl,-Bsymbolic-functions' 'CPPFLAGS='
        Jun 14 17:06:06 cibionte-pc named[9785]: adjusted limit on open files from 1024 to 1048576
        Jun 14 17:06:06 cibionte-pc named[9785]: found 1 CPU, using 1 worker thread
        Jun 14 17:06:06 cibionte-pc named[9785]: using up to 4096 sockets
        Jun 14 17:06:06 cibionte-pc named[9785]: loading configuration from '/etc/bind/named.conf'
        Jun 14 17:06:06 cibionte-pc named[9785]: /etc/bind/named.conf.saferinternet:1: unknown option 'zone'
        Jun 14 17:06:06 cibionte-pc named[9785]: loading configuration: failure
        Jun 14 17:06:06 cibionte-pc named[9785]: exiting (due to fatal error)

    Are there any limits on the size of the file bind9 is allowed to load?
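
    A hedged reading of the log above: the failure is a parse error, not a size limit. "unknown option 'zone'" at named.conf.saferinternet:1 means the parser hit a zone keyword where it wasn't expected, which with generated files is usually a missing semicolon or brace just before that point. Each zone must be a complete top-level statement:

        zone "example.com" {
            type master;
            file "/etc/bind/db.example.com";
        };

    Running named-checkconf /etc/bind/named.conf before starting the daemon pinpoints the first offending line without going through syslog.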
