Search Results

Search found 24201 results on 969 pages for 'andrew case'.


  • 3dmark score abnormal

    - by Sean
    I just bought a new laptop: a Core i7-3610QM (Ivy Bridge, 4 cores plus Hyper-Threading), 8 GB RAM, and an NVIDIA GT 610M with 2 GB, running Windows 7 64-bit. I ran 3DMark 11, but the score is frustrating: at 1280x720 it is P730. That is too low, right? From what I found online, the score should be at least above 1000. I have never used this software before. I know NVIDIA has Optimus, so I set the laptop to its high-performance state and whitelisted the 3DMark program, but that didn't help. I am guessing 3DMark is using the i7's integrated graphics and is not switching to the NVIDIA card. In 3DMark's run details, the graphics card cannot be identified (the row remains blank). Can anybody tell me whether this is normal? If not, can I use some other software to test whether my NVIDIA card is working properly? If the NVIDIA card doesn't work, I will return the new laptop ASAP. Thanks.

    Read the article

  • Terminal Server CPU usage at 100%

    - by Light1c3
    I'm running a terminal server with around 50-60 users, and every so often the server will go from 40% CPU usage to 100%. I took a closer look and it seems that every time this happens, a different user or two are caught in a loop and end up using under 30% each, while the rest of the users only use a maximum of 5%. The company behind the software we use claims it's due to the server's inadequate hardware (it's a VM running on a dual quad-core setup), which to me sounds like BS! I'm fairly new to this level of IT, so if I misspoke I apologize. I have no way to prove it, but I believe adding more raw hardware power won't do me any good, as this seems like a bug in their software that will suck up as much (or as little) CPU as it's given. The VM in question has 4 vCPU cores and 12 GB RAM available, and is running Windows Server 2008, 64-bit. Thanks in advance for your help! Note: I have the same question posted on Stack Overflow, but was pointed in this direction, so just in case, here is a link to the post: http://stackoverflow.com/questions/17276602/termserver-cpu-at-100

    Read the article

  • Custom PCI bracket with support, for custom PCB?

    - by newbiez
    I am considering putting a custom PCB that I made into my computer. It won't go into any PCI connector; it plugs into a USB connector on the motherboard via a ribbon cable. I do, however, need to plug a device into it, which means that either I leave the PCB outside the case, hanging by the ribbon (bad idea), or I put it in a PCI slot using a bracket. The issue is that the brackets I have do not have tabs, so I have no way to screw the PCB onto them. I was hoping to find something that would let me mount the PCB on it and then just fit it in the PCI bracket opening, like this: http://www.idotpc.com/TheStore/pc/viewPrd.asp?idproduct=1203&idcategory=0 This one won't fit the bill since its holes are too close together compared to the ones already on my PCB (and I can't make more holes). Do you know if there is a place that makes universal PCI bracket mounting systems for custom PCBs? I just need one, so I can't really order a custom one (they ask 120 dollars for one). Thanks in advance!

    Read the article

  • vSphere education - What are the downsides of configuring virtual machines with *too* much RAM?

    - by ewwhite
    VMware memory management seems to be a tricky balancing act. With cluster RAM, resource pools, VMware's management techniques (TPS, ballooning, host swapping), in-guest RAM utilization, swapping, reservations, shares and limits, there are a lot of variables. I'm in a situation where clients are using dedicated vSphere cluster resources. However, they are configuring the virtual machines as though they were on physical hardware. In turn, this means a standard VM build may have 4 vCPUs and 16 GB or more of RAM. I come from the school of starting small (1 vCPU, minimal RAM), checking real-world use and adjusting up as necessary. Some examples from a "problem" cluster: the resource pool summary looks almost 4:1 overcommitted, with a high amount of ballooned RAM; the resource allocation view's Worst Case Allocation column shows that these VMs would have access to less than 50% of their configured RAM under constrained conditions; and the real-time memory utilization graph of the top VM in that listing (4 vCPU and 64 GB RAM allocated) averages under 9 GB of use. What are the downsides of overcommitting and overconfiguring resources (specifically RAM) in vSphere environments? Assuming that the VMs can run in less RAM, is it fair to say that there's overhead to configuring virtual machines with more RAM than they need? What is the counter-argument to "if a VM has 16GB of RAM allocated, but only uses 4GB, what's the problem?"? E.g. do customers need to be educated? What specific metric should be used to meter RAM usage? Tracking the peaks of "Active" over time?

    Read the article

  • PAM with KRB5 to Active Directory - How to prevent update of AD password?

    - by Ex Umbris
    I have a working Fedora 9 system that's set up to authenticate users via PAM -> krb5 -> Active Directory. I'm migrating this to Fedora 14, and everything works, but it's working too well :-) On Fedora 9, if a Linux user updated their password, it did not propagate to their Active Directory account. On Fedora 14, it is changing their A/D password. The problem is I don't want A/D to be updated. Here's my password-auth-ac:

      auth        required      pam_env.so
      auth        sufficient    pam_unix.so nullok try_first_pass
      auth        requisite     pam_succeed_if.so uid >= 500 quiet
      auth        sufficient    pam_krb5.so use_first_pass
      auth        required      pam_deny.so
      account     required      pam_unix.so
      account     sufficient    pam_localuser.so
      account     sufficient    pam_succeed_if.so uid < 500 quiet
      account     [default=bad success=ok user_unknown=ignore] pam_krb5.so
      account     required      pam_permit.so
      password    requisite     pam_cracklib.so try_first_pass retry=3 type=
      password    sufficient    pam_unix.so sha512 shadow nullok try_first_pass use_authtok
      password    sufficient    pam_krb5.so use_authtok
      password    required      pam_deny.so
      session     optional      pam_keyinit.so revoke
      session     required      pam_limits.so
      -session    optional      pam_systemd.so
      session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
      session     required      pam_unix.so
      session     optional      pam_krb5.so

    I tried removing the line

      password    sufficient    pam_krb5.so use_authtok

    but then, when attempting to change the Linux password, if the user provides their A/D password at the authentication prompt, they get the error:

      passwd: Authentication token manipulation error

    What I want to achieve is: (1) allow authentication with either the A/D or the Linux password (the Linux password is a fall-back for certain sysadmin users in case A/D is unavailable for some reason); this is working now. (2) Allow users to change their Linux passwords without affecting their A/D passwords. Is this possible?

    Read the article

  • What are the advantages of registered memory?

    - by odd parity
    I'm browsing for a few low-end servers for a startup and I'm a bit confused about the different memory types. The advantage of ECC is clear: single-bit error correction. When it comes to registered memory it seems more vague, especially in systems that support both registered and unbuffered memory. A Google search mostly finds copies of the Wikipedia article, which states that registered memory chips "...place less electrical load on the memory controller and allow single systems to remain stable with more memory modules than they would have otherwise". However, I can't find any quantification of this. What I'm wondering about is: is registered memory an improvement over unbuffered when it comes to soft error rate, or is it purely about the maximum number of modules supported? If it is, at what point (number of modules or GB of memory) do these improvements start to become noticeable? For a specific example, the HP ProLiant DL 120 G6 server manual states that the maximum supported memory configuration is 16 GB unbuffered (4x4GB) or 12 GB registered (6x2GB). In this case I'd rather have the extra 4 GB of memory if the reliability difference is negligible.

    Read the article

  • Prevent rmdir -p from traversing above a certain directory

    - by thepurplepixel
    I hacked together this script to rsync some files over ssh. The --remove-source-files option of rsync removes the files it transfers, which is what I want. However, I also want the directories those files were placed in to be gone as well. The current part of the find command, -exec rmdir -p {} \;, tries to remove the parent directory (in this case, /srv/torrents), but fails because it doesn't have the right permissions. What I'd like to do is stop rmdir from traversing above the directory find is run in, or find another solution to get rid of all the empty folders. I've thought of using some kind of loop with find and running rmdir without the -p switch, but I figured it wouldn't work out. Essentially, is there an alternative way to remove all the empty directories under the parent directory? Thanks in advance!

      #!/bin/bash
      HOST='<hostname>'
      USER='<username>'
      DIR='<destination directory>'
      SOURCE='/srv/torrents/'
      rsync -e "ssh -l $USER" --remove-source-files -h -4 -r --stats -m --progress -i $SOURCE $HOST:$DIR
      find $SOURCE -mindepth 1 -type d -empty -prune -exec rmdir -p \{\} \;
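
    One possible alternative, sketched here on the assumption that GNU find is available (as it normally is on Linux): let find delete the empty directories itself, so nothing above $SOURCE is ever touched.

      # Sketch only: GNU find's -delete implies depth-first processing, so nested
      # empty directories collapse too, and find never ascends above $SOURCE.
      SOURCE='/srv/torrents/'
      find "$SOURCE" -mindepth 1 -type d -empty -delete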

    Read the article

  • Server 2012 - transparent SMB failover without shared discs, possible?

    - by TomTom
    Here is the scenario: there is a small set of data (around 200 GB) that I HAVE to keep available. These are basically shared VHD images that serve as master images for a lot of our VMs; the VMs then run on differencing disks off those. The whole set is "mostly read only". In more detail: a file that IS there and IS used will NEVER change. I may delete files (when absolutely not in use) and add new files, but once a file is there it gets read protection set, and that's it until it is retired. Obviously, I need as much uptime as possible. So far we run this by keeping the directory local on every Hyper-V server. Now I am thinking of moving it into our storage fabric. Because of the "it HAS to be there" requirement, I pretty much want a share-nothing architecture. DFS would be perfect for this: a file never changes, so replication would work nicely. Folders could be replicated to a number of servers, and everything would reference them from there. Now that Hyper-V supports SMB, it could be a good idea to isolate these on a number of servers; we are trying to move to a scenario where storage is more centralized. Server 2012 supports continuously available shares, but it seems that this only works with a clustered disk behind them. Is there any way around this for read-only file stores? All documentation points to something like a shared JBOD, but that would leave me open to file system corruption. I really plan to go quite separate here, vertically: 2 servers, both with SSDs dedicated to this, both with their own separate 2000 W UPS, both with enough bandwidth to handle everything thrown at them (note to everyone thinking this is 10G: that would be SLOW and EXPENSIVE compared to a nice InfiniBand backbone). The real crux is that this is obviously an edge case, as the files are read-only once in use.

    Read the article

  • openvpn in a bridge?

    - by sebelk
    I have a somewhat tricky problem to solve. We have a wireless link between two buildings. One of them has a MikroTik router, and below it there are some VLANs. Some machines on one VLAN need to use OpenVPN to connect to a remote private LAN. I put in a TP-Link WR1043ND (which those machines connect to) running OpenWrt, with ebtables installed just in case I need it. I've configured OpenWrt so that all ports belong to the same VLAN. My idea was to make things as transparent as I can. It has a bridge as follows:

      # /usr/sbin/brctl-full show br-lan
      bridge name     bridge id               STP enabled     interfaces
      br-lan          8000.f8d111565716       no              eth0.1
                                                              eth0.2

    I've also added an ebtables rule:

      ebtables -t broute -A BROUTING -p ipv4 -j DROP

    So the "bridge" has only one IP address. I've installed OpenVPN and I'm trying to bring up the tunnel, but I still can't get it working. Sure, someone may ask why I don't run the VPN on the MikroTik; there are some reasons, the first one being that I have little experience with MikroTik and I'd like to have the VPN at hand :) The problem is that OpenVPN is not working because it is complaining that I have only one IP address on the server side. So I set up an alias interface with another IP address, but it is not working either:

      Rejected connection attempt from IP-Client-Side:37801 due to --remote setting

    Is there a way to make it work?

    Read the article

  • Fixing Windows install by connecting its hard drive via USB to a different laptop

    - by Jason
    I tried to upgrade a laptop to SP3, which broke it. I later found out SP3 doesn't work on that 2002 laptop. I can't uninstall SP3 or repair SP2, because the hard drive is now not detected during setup (I've read that's the problem you get). I put the hard drive in a USB drive case and plugged it into my other laptop, and I can read (and write to) the disk okay. (The hard drive won't fit in my other laptop, so I'm using USB.) I need to get that disk back to SP2, or fix whatever files got screwed up that are causing the disk not to be recognized. I don't want to do a re-install, as there are 80 GB of files on it I need that won't fit on the HD of my other laptop, and also because I no longer have some of the install CDs for the software on it. What do I need to do to fix that drive from my other laptop? (I don't want my working laptop (XP SP3) to get screwed with by putting an SP2 disc in the CD drive, or the non-OS data on the other hard drive screwed with.)

    Read the article

  • Setting up a home network

    - by Mat Richardson
    I'm trying to get my head around how to do this with the equipment I have. Here's how it is: I have an old laptop with Server 2003 installed on it. It has no wireless NIC, but I have a spare wireless router that I can wire to it. We have a Virgin wireless router which provides the home with internet access. What I would like to do is make resources on the Server 2003 machine available to the rest of the devices in the home, specifically a couple of laptops running Windows 7. Ideally I'd like to be able to use the laptop as a file server, but maybe also offer application virtualisation. In the first instance, however, I'd like to create a file server with the laptop and allow connections to it from the other machines. Virtualisation is just a 'nice to have'... Given the hardware I have, is this possible? If so, can you point me in the right direction on how to do it?

    Read the article

  • Disable all the idiot-checking in Mac OS X

    - by Fake Name
    I am a Windows/Linux user who is learning Mac OS X out of interest in doing dev work for the iPad, which I recently purchased. However, OS X is driving me nuts by trying to protect all its system files, hiding all of the important OS components I want to tweak, and generally making it impossible to modify the OS to make it more usable. Therefore, is there a way to turn off all the idiot-checking in Finder? On XP, I can disable "Hide protected operating system files" and enable "Show hidden files". On Linux, there really aren't many hidden files, and changing the configuration for dotfiles is easy enough in GNOME and Xfce. How can I set up OS X in a similar way? I am not new to computers, and I am fully aware that deleting system files can damage or even irreparably disable an OS install. Therefore, if I intentionally try to delete a file or move something, it's probably intentional, and I am willing to accept the consequences in any case. At this point, I have fallen back to doing everything through the command line (which takes forever), because Finder is practically unusable. (As for what I am attempting to do, I also asked about GUI changes here.)
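
    For the specific Finder piece, a commonly used sketch (assuming a 10.6-10.8 era Finder; the defaults key shown is the standard one for this preference) is to flip the hidden-files setting and relaunch Finder:

      # Sketch: make Finder show hidden and protected system files, then relaunch it.
      defaults write com.apple.finder AppleShowAllFiles TRUE
      killall Finder
      # Revert with:
      # defaults write com.apple.finder AppleShowAllFiles FALSE && killall Finder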

    Read the article

  • Apache intermittently aborting requests

    - by Adam Phillips
    I have just been dealing with a problem whereby HTTP requests are being aborted, seemingly at random. When you opened any particular page on the website, a number of the assets (images, CSS, etc.) failed to load. If you refreshed, the page might work fine, the same set of assets might fail to load, or different assets might fail to load. The Net tab in Firefox was showing 'Aborted' in the HTTP status code column for the failed assets, even though, in the case of images, the image previews were still working. There was nothing in any of the Apache logs about the requests that failed; however, since it seemed to point to an Apache issue, we restarted Apache. The first time we tried, it made no difference, but about 10 minutes later, in the absence of a better solution, we tried again. Bizarrely, the problem disappeared immediately. So now the site seems to be running fine again, but it's rather unsettling, both the intermittent nature of the problem and the lack of an explanation for its resolution. Has anyone seen anything like this before, and if so, did you find out the reason behind it? Many thanks

    Read the article

  • CentOS server. What does it mean when the total used RAM does not equal the sum of RES?

    - by Michael Green
    I'm having a problem with a virtual hosted server running CentOS. In the past month a process (Java based) that had been running fine started having problems getting memory when the JVM was started. One strange thing I've noticed is that when I start the process, its PID says it is using 470 MB of RAM while the free memory immediately drops by over 1 GB. If I run 'top', the total RES used across all processes falls short of the 'used' listed at the top by almost 700 MB. The support person says this means I have a memory leak in my process. I don't know what to believe, because I would expect a memory leak to simply waste the memory the process is allocated, not to consume additional memory that doesn't show up in 'top'. I'm a developer and not a server guy, so I'm appealing to the experts. To me, if the total RES memory doesn't add up to the total 'used', it indicates that something is wrong with my virtual server set-up. Would you also suspect a memory-leaking Java process in this case? If I use free before starting the process:

                        total       used       free     shared    buffers     cached
      Mem:            2097152     149264    1947888          0          0          0
      -/+ buffers/cache:           149264    1947888
      Swap:                 0          0          0

    and free after:

                        total       used       free     shared    buffers     cached
      Mem:            2097152    1094116    1003036          0          0          0
      -/+ buffers/cache:          1094116    1003036
      Swap:                 0          0          0

    So it looks as though the process is using (or causing to be used) nearly 1 GB of RAM. Since the process (based on top) is only using 452 MB, does that mean the kernel is all of a sudden using an additional 500 MB?
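
    For reference, a quick sketch of checks that show where memory is going on a box like this (standard procps/procfs tools; the OpenVZ line only applies if the VPS is container-based, which is an assumption):

      # Sketch: account for memory that doesn't show up as process RSS.
      ps aux --sort=-rss | head -n 15           # largest resident processes
      cat /proc/meminfo                         # kernel's view: Slab, PageTables, Mapped, ...
      slabtop -o | head -n 20                   # kernel slab caches (if slabtop is installed)
      cat /proc/user_beancounters 2>/dev/null   # OpenVZ/Virtuozzo containers: host-enforced limits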

    Read the article

  • SSL Mail server connection times out on send()

    - by Jivan
    When trying to programmatically send an email from a website of mine, using the PHP PEAR Mail package with an SSL connection, PEAR::Mail replies with the following:

      Failed to connect to example.blabla.net:PORT
      [SMTP: Failed to connect socket: connection timed out (code: -1, response: )]

    I looked for similar questions on SO and SF; all the answers ask the OP to test the connection with telnet or ssh on the command line. So that is what I did, and here is what happens:

      $ ssh -l myusername -p PORT example.blablabla.net
      _

    Here, '_' on the second line means that NOTHING happens, indefinitely, which seems consistent with the timeout message I had from PEAR::Mail. So PEAR::Mail seems to be out of the question as the cause. But what I have to tell you is that yesterday it just worked: the connection was properly established, mails were properly sent, etc. Just today it doesn't work anymore, and I absolutely don't know why. I restarted Apache (in case an extension was broken), restarted mail services, etc. Still no effect. Between yesterday (when it worked) and today (when it doesn't anymore), I didn't touch the server and did nothing on it, simply because I took a day off to write a blog post! Has anyone encountered a similar problem? The problem seems quite common, judging by some googling, but the solution doesn't. Thanks for any help! (Note on config: CentOS 6.4 x86_64 with cPanel/WHM)
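
    As an aside, a test that also exercises the SSL/TLS handshake the mailer needs (a sketch; the ports shown are the usual SMTPS and submission ports, and the real PORT above is unknown) would be:

      # Sketch: probe the SMTP-over-SSL endpoint directly.
      openssl s_client -connect example.blabla.net:465 -crlf -quiet
      # or, for STARTTLS on the submission port:
      openssl s_client -connect example.blabla.net:587 -starttls smtp -crlf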

    Read the article

  • How to Access User Directory shared by Apache on OS X Mountain Lion?

    - by schluchc
    When trying to access the local user web page at localhost/~username, I get a "403 Forbidden". The system web page in /Library/WebServer/Documents is accessible at localhost/, though, so I assume Apache is working fine. I know this problem has been discussed several times, also on Super User. I implemented and checked everything I could find, but I still couldn't solve the problem and would be glad if someone had a suggestion for this particular case. sudo apachectl -t returns Syntax OK. I have a username.conf file in /etc/apache2/users/:

      <Directory "/Users/username/Sites/">
          Options Indexes MultiViews FollowSymLinks
          AllowOverride AuthConfig Limit
          Order allow,deny
          Allow from all
      </Directory>

    as proposed here [SuperUser] and in several other tutorials. The permissions of the username.conf file are -rw-r--r-- root wheel, as they should be. The httpd.conf is unchanged and therefore contains the line Include /private/etc/apache2/extra/httpd-userdir.conf. That file in turn contains:

      UserDir Sites
      Include /private/etc/apache2/users/*.conf
      <IfModule bonjour_module>
          RegisterUserSite customized-users
      </IfModule>

    So the httpd*.conf files should be OK. The permissions of /Users/username/Sites are drwxr-xr-x 10 username staff, and -rw-r--r--@ 1 username staff for the index.html. In the error log I simply get:

      [Sun Nov 25 22:14:32 2012] [error] [client 127.0.0.1] (13)Permission denied: access to /~username/ denied

    And yes, after each change I ran sudo apachectl restart. Any help on how to solve the problem or how to analyze it further would be highly appreciated!
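
    One check that isn't covered above, offered as a sketch rather than a confirmed fix: the Apache user (_www) must be able to traverse every directory on the path down to ~/Sites, so a restrictive mode on /Users/username itself produces exactly this (13)Permission denied even when Sites and the .conf files are correct.

      # Sketch: test traversal as the Apache user, then grant search permission if needed.
      sudo -u _www ls /Users/username/Sites        # reproduces the permission error if traversal fails
      ls -led /Users /Users/username /Users/username/Sites
      chmod o+x /Users/username                    # execute (search) bit is enough; no read access required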

    Read the article

  • Can't access certain web sites - reset router, any ideas?

    - by IniTech
    EDIT: This problem was resolved by my ISP; it had to do with damaged fiber in one of their locations. Thanks to everyone who helped. Not sure if this is the right site (I'm a Stack Overflow user), so I thought I'd give it a shot. I'm having trouble connecting to certain sites on any of the 3 machines on my LAN. The following sites are returning "Problem Loading Page - The connection has timed out": Sourceforge.net, CNet.com, Microsoft.com, OpenDNS.com, and even my company's website. I was worried about possible malware/viruses, but I don't think that is the case (given the inability to access my company's site and the fact that all 3 machines are having the same issues). I've tried with IE8, Firefox, and Chrome. I have reset my router (a WRT54G) and my machines multiple times. EDIT: It is also worth noting that this page spins constantly and no avatars show up (I'm assuming it is trying to access gravatar.com with no success). EDIT: I have the same issues directly connected to the modem, so the router config is probably not the issue. I'm a programmer, not a network guy; any ideas?

    Read the article

  • Virtual Host Configuration and mod_rewrite - Removing PHP Extension and Adding Forward Slash

    - by nicorellius
    On my production server things are fine: the PHP-extension-removal and trailing-slash rules are in place in my .htaccess file. But locally this is only partially working. I'm running Apache 2 with a virtual host for the site in question. I decided not to use the .htaccess file in this case and just add the rules to the httpd-vhosts.conf file instead, which, I've heard, is a better way to go if your server allows it. The virtual host is working, and the URL I use for my site is like this: devserver:9090. Here is my httpd-vhosts.conf file:

      NameVirtualHost *:9090

      # for stuff other than this site
      <VirtualHost *:9090>
          ServerAdmin admin@localhost
          DocumentRoot "/opt/lampstack/apache2/htdocs"
          ServerName localhost
      </VirtualHost>

      # for site in question
      <VirtualHost *:9090>
          ServerAdmin admin@localhost
          DocumentRoot "/opt/lampstack/apache2/htdocs/devserver"
          ServerName devserver

          <Directory "/opt/lampstack/apache2/htdocs/devserver">
              Options Indexes FollowSymLinks Includes
              AllowOverride None
              Order allow,deny
              Allow from all
          </Directory>

          <IfModule rewrite_module>
              RewriteEngine ON

              # remove PHP extension and add trailing slash
              # note - this doesn't work for directories, and throws 404
              # TODO - fix so directories use index.php
              RewriteCond %{REQUEST_FILENAME} !-d
              RewriteCond %{THE_REQUEST} ^GET\ /[^?\s]+\.php
              RewriteRule (.*)\.php$ /$1/ [R=302,L]

              RewriteCond %{REQUEST_FILENAME} !-d
              RewriteRule (.*)/$ /$1.php [L]

              RewriteCond %{REQUEST_FILENAME}.php -f
              RewriteCond %{REQUEST_FILENAME} !-d
              RewriteRule .*[^/]$ /$0/ [R=302,L]
          </IfModule>

          # error docs
          ErrorDocument 404 /errors/404.php
      </VirtualHost>

    The problem I'm facing is that when I go to directories on the site, I get a 404 error. So, for example, devserver:9090/page.php goes to devserver:9090/page/, but going to a directory that has an index.php, such as devserver:9090/dir/, throws the 404 error page. If I type in devserver:9090/dir/index.php I get devserver:9090/dir/index/ and the contents I want appear... Can anyone help me with my rewrite rules?
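
    One possible explanation, offered as a sketch to test rather than a confirmed answer: in virtual-host context mod_rewrite runs before the URL has been mapped to the filesystem, so %{REQUEST_FILENAME} still holds the URI and the !-d test never recognizes a real directory; checking against %{DOCUMENT_ROOT}%{REQUEST_URI} instead makes the directory test meaningful, for example:

      # Sketch: directory-aware condition for vhost context, so /dir/ is left
      # alone and served by DirectoryIndex instead of being rewritten to /dir.php.
      RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-d
      RewriteRule (.*)/$ /$1.php [L]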

    Read the article

  • How to deal with ssh's "WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!"?

    - by Vi.
    I often need to log in to multiple remote stations that just happen to be placed at the same static IPs for me. SSH complains about changed keys in this case:

      $ ssh [email protected]
      @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
      @    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
      @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
      ...
      Offending RSA key in /home/vi/.ssh/known_hosts:70
      ...

    I usually just run vim /home/vi/.ssh/known_hosts +70, then dd and :wq, and re-run the SSH command. How can I do this more simply? Requirements: the warning should still be displayed, not reduced to the plain "The authenticity of host '172.1.2.3 (172.1.2.3)' can't be established." prompt, and it should be easy to accept the key change. I expect something like this:

      $ ssh [email protected]
      @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
      @    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
      @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
      ...
      The fingerprint for the RSA key sent by the remote host is
      82:cd:be:7a:ae:1b:91:2c:23:c1:74:4d:8a:38:10:32.
      Change the host key in /home/vi/.ssh/known_hosts (yes/no)? yes
      Warning: Changed host key for '172.1.2.3' (RSA) in the list of known hosts.
      [email protected]'s password:

    Simple, and distinct from the usual "The authenticity of host ... can't be established." message.
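
    For what it's worth, a shorter sketch of the manual cleanup step (it removes the stale key but still leaves the ordinary "authenticity can't be established" prompt, so it does not satisfy the requirement above by itself; the username is a placeholder):

      # Sketch: drop all stored keys for that address, then reconnect.
      ssh-keygen -R 172.1.2.3        # edits ~/.ssh/known_hosts and keeps a .old backup
      ssh user@172.1.2.3             # "user" is a placeholder for the real account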

    Read the article

  • ASA 5505 VPN setup. VPN works but still unable to reach devices in the inside network.

    - by chickenloop
    I've set up a remote-access VPN on my Cisco ASA 5505. I'm able to connect to the ASA via my phone or the Cisco client, but I'm unable to reach devices on my inside LAN when connected via VPN. The setup is the following: inside network 10.0.0.0/24, VPN pool (VPNPOOL) 172.16.0.0/24, outside network 192.168.1.0/24. The ASA is not the perimeter router; there is another device on the 192.168.1.0/24 network which is connected to my cable provider. Obviously, UDP ports 500 and 4500 are forwarded to the ASA's outside interface. Everything works perfectly besides the VPN. Config:

      interface Vlan1
       nameif inside
       security-level 100
       ip address 10.0.0.254 255.255.255.0

      interface Vlan2
       description Outside Interface
       nameif outside
       security-level 0
       ip address 192.168.1.254 255.255.255.0

      object network VPNPOOL
       subnet 172.16.0.0 255.255.255.0

      object network INSIDE_LAN
       subnet 10.0.0.0 255.255.255.0

    Then the NAT exemption rule:

      nat (inside,outside) source static INSIDE_LAN INSIDE_LAN destination static VPNPOOL VPNPOOL

    I don't think the problem is with the VPN config, as I can successfully establish the VPN connection, but just in case I post it here:

      group-policy ZSOCA_ASA internal
      group-policy ZSOCA_ASA attributes
       vpn-tunnel-protocol ikev1
       split-tunnel-policy tunnelspecified
       split-tunnel-network-list value Split-Tunnel
       default-domain value default.domain.invalid

      tunnel-group ZSOCA_ASA type remote-access
      tunnel-group ZSOCA_ASA general-attributes
       address-pool VPNPOOL
       default-group-policy ZSOCA_ASA
      tunnel-group ZSOCA_ASA ipsec-attributes
       ikev1 pre-shared-key *****

    Any ideas are welcome. Regards.

    Read the article

  • XFS: No space left on device

    - by beketa
    I am using XFS on a small HDD (/dev/sdb1, less than 1 TB) and storing many small files (under 32 KB). df -h and df -i show that it has space available:

      # df -hv
      Filesystem            Size  Used Avail Use% Mounted on
      /dev/sda3             127G   19G  102G  16% /
      tmpfs                  16G     0   16G   0% /lib/init/rw
      udev                   16G  168K   16G   1% /dev
      tmpfs                  16G     0   16G   0% /dev/shm
      /dev/sda1              99M   20M   75M  21% /boot
      /dev/sdb1             136G  123G   14G  91% /mnt/sdb1

      # df -iv
      Filesystem             Inodes    IUsed     IFree IUse% Mounted on
      /dev/sda3             8421376    36199   8385177    1% /
      tmpfs                 4126158        5   4126153    1% /lib/init/rw
      udev                  4124934      671   4124263    1% /dev
      tmpfs                 4126158        1   4126157    1% /dev/shm
      /dev/sda1               26112      222     25890    1% /boot
      /dev/sdb1            24905120 11076608  13828512   45% /mnt/sdb1

    However, I get a "No space left on device" error:

      # touch /mnt/sdb1/test
      touch: cannot touch `/mnt/sdb1/test': No space left on device

    I think the inode64 issue is not related to this case, because the drive is less than 1 TB and df -i shows that there are free inodes. I unmounted and remounted with -o inode64 but got the same error. xfs_repair does not report any problems. xfs_info shows the following:

      # xfs_info /dev/sdb1
      meta-data=/dev/sdb1              isize=1024   agcount=16, agsize=2227764 blks
               =                       sectsz=512   attr=2
      data     =                       bsize=4096   blocks=35644210, imaxpct=25
               =                       sunit=0      swidth=0 blks
      naming   =version 2              bsize=4096   ascii-ci=0
      log      =internal               bsize=4096   blocks=17404, version=2
               =                       sectsz=512   sunit=0 blks, lazy-count=1
      realtime =none                   extsz=4096   blocks=0, rtextents=0

    Any ideas? Thanks!
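
    One more check worth running, offered as a sketch (it assumes xfs_db from xfsprogs is available; -r opens the device read-only, so it is safe): XFS can return "No space left on device" while df still shows free blocks when the remaining free space is too fragmented for new allocations, and the free-space summary makes that visible.

      # Sketch: summarize free-space fragmentation per allocation group (read-only).
      umount /mnt/sdb1                      # xfs_db wants the filesystem quiescent
      xfs_db -r -c "freesp -s" /dev/sdb1
      mount /dev/sdb1 /mnt/sdb1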

    Read the article

  • How to copy VirtualBox VDI contents to a partition and dual boot the OS from it?

    - by Calmarius
    I'm a Linux user, but I keep a compressed Windows XP ISO with me on a pen drive in case I absolutely need Windows to do something. This works in VirtualBox most of the time. But now I want to play some games, so I would like to run the Windows image natively. My computer doesn't have a CD drive, so I cannot just burn the ISO and install normally. What I am trying to do is move the installed Windows image to a physical NTFS partition on my HDD and set up GRUB to let me dual-boot it. I found many tutorials that deal with converting a VDI to a physical drive, but they assume I want to overwrite my entire drive. Copying the raw disk image to the partition with dd resulted in a corrupt partition. I also tried the VMDK trick of pointing the VM at that empty partition and installing Windows onto it. Although the text-mode phase of the installation finishes without problems, the VM won't work: it either crashes and keeps rebooting, or freezes immediately (depending on how I created the VMDK, with -rawdisk /dev/sda3 or -rawdisk /dev/sda -partition 3).
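
    A sketch of why the straight dd likely produced a corrupt partition, under the assumption that the VDI holds a whole virtual disk (MBR plus partition table) rather than a bare filesystem: the NTFS filesystem inside the image has to be carved out at its own partition offset before being written to /dev/sda3. File names and partition numbers below are illustrative only.

      # Sketch: convert the VDI to a raw disk image, then copy just the NTFS
      # partition out of it onto the target partition (names are illustrative).
      VBoxManage clonehd WindowsXP.vdi winxp-raw.img --format RAW
      kpartx -av winxp-raw.img                      # exposes e.g. /dev/mapper/loop0p1
      dd if=/dev/mapper/loop0p1 of=/dev/sda3 bs=4M conv=fsync
      kpartx -d winxp-raw.img
      # The NTFS boot sector records where the partition started on the original
      # virtual disk, so a boot-sector fix-up on /dev/sda3 may still be needed.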

    Read the article

  • Setting up a global MySQL Cluster in the cloud

    - by GregB
    I'm giving the question an overhaul to identify more specifically where I need help. I use two tools to manage a bunch of cloud servers: Puppet and Rundeck. Both of these can be configured to use a MySQL backend. I'd like to set up an instance of each application in both the U.S. and the U.K., treating the U.K. servers as hot standbys in case of failure in the U.S. I want to use a MySQL cluster so that the data is automatically replicated from the U.S. to the U.K. Because these are hot standbys, high performance is not a goal; redundancy and data integrity are most important. My question revolves around the setup of the MySQL cluster. I want to run three servers, each one running a data node, a SQL node, and a management node. Is this a valid configuration for MySQL Cluster? If so, could someone point me in the right direction for creating such a setup? I've downloaded the official tarball and the official Debian package, and the documentation for them contradicts many of the online tutorials. I'm installing on Ubuntu 10.04.

    Read the article

  • How to troubleshoot this memory usage?

    - by Camran
    I have a classifieds website. I use PHP, MySQL, and Solr. Solr runs in a servlet container, in my case Jetty, which is a Java application. I just noticed that something was terribly wrong on my website. I opened the terminal, ran the "top" command, and noticed that Java was EATING all the CPU and memory. Now I thought, "OK, maybe I need more memory and CPU," so I increased them. But along with the increase, the Java app started eating more. This has never happened before, and it is either a bug or a hack of some kind. Anyway, I need to troubleshoot this now, so I wonder: how do I do this? Can I somehow pinpoint exactly when the memory usage started to go up from some error log? How does one troubleshoot this? How do I prevent it? Is it possible to somehow block too many requests if they arrive within a given time window? Thanks
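
    As a starting point, a small monitoring sketch (the process name, interval, and log path are assumptions) that timestamps the JVM's CPU and memory use, so the moment usage starts climbing can be lined up against Solr's request logs:

      #!/bin/bash
      # Sketch: log Jetty/Solr JVM resource usage once a minute (names are assumptions).
      LOG=/var/log/solr-usage.log
      while true; do
          PID=$(pgrep -f jetty | head -n 1)
          [ -n "$PID" ] && echo "$(date '+%F %T') $(ps -o pcpu=,rss=,vsz= -p "$PID")" >> "$LOG"
          sleep 60
      done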

    Read the article
