Search Results

Search found 13727 results on 550 pages for 'target platform'.


  • What's going on with traceroute?

    - by Kevin
    The following is what happens when I run traceroute from a certain location:

        # traceroute google.com
        traceroute to google.com (74.125.227.39), 30 hops max, 60 byte packets
         1  gateway.local.enactpc.com (10.0.0.1)  0.138 ms  0.101 ms  0.084 ms
         2  * * *
         3  * * *
         (hops 4 through 30 are likewise * * *)

    Absolutely nothing of interest... Now, originally I thought this was just a fact of the location's network setup (I assume they block pings or something...). However, watch what happens when I use nmap to run a traceroute:

        # nmap -sP --traceroute google.com

        Starting Nmap 5.21 ( http://nmap.org ) at 2012-09-25 22:18 CDT
        Nmap scan report for google.com (74.125.227.40)
        Host is up (0.034s latency).
        Hostname google.com resolves to 11 IPs. Only scanned 74.125.227.40
        rDNS record for 74.125.227.40: dfw06s06-in-f8.1e100.net

        TRACEROUTE (using proto 1/icmp)
        HOP  RTT       ADDRESS
        1    0.19 ms   gateway.local.enactpc.com (10.0.0.1)
        2    1.93 ms   99-20-92-1.lightspeed.austtx.sbcglobal.net (99.20.92.1)
        3    25.61 ms  99-20-92-2.lightspeed.austtx.sbcglobal.net (99.20.92.2)
        4 ... 6
        7    23.68 ms  12.83.68.137
        8    31.30 ms  gar23.dlstx.ip.att.net (12.122.85.73)
        9 ...
        10   31.82 ms  72.14.233.65
        11   32.27 ms  209.85.250.77
        12   32.98 ms  dfw06s06-in-f8.1e100.net (74.125.227.40)

        Nmap done: 1 IP address (1 host up) scanned in 3.29 seconds

    When using nmap I get a lot more results than with traceroute -- why? Note, I checked, and the difference in target IP addresses is not related...
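
    (A side note, offered as a hedged guess rather than a confirmed answer: the stock traceroute uses UDP probes by default, while the nmap run above used ICMP, as its "proto 1/icmp" line shows. If the upstream network drops the UDP probes or their replies, forcing traceroute to use ICMP or TCP probes may produce output much closer to nmap's:)

        # ICMP echo probes instead of the default UDP probes (needs root)
        traceroute -I google.com
        # TCP SYN probes to port 80, in case ICMP is also filtered
        traceroute -T -p 80 google.com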


  • "iostat" command different in two equal machines

    - by Oz.
    We have several machines on Amazon (EC2) of the type c1.xlarge with 8 CPUs, running the Amazon AMI. Details on the machine:

        7 GB of memory
        20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each)
        1690 GB of instance storage
        64-bit platform
        I/O Performance: High
        API name: c1.xlarge

    One of the machines has been showing a high load average since we ran the last yum upgrade a couple of weeks ago. We have not yet updated the other machines, and everything looks normal on them. The strange thing is that the top command shows no hint of the cause of the load. CPUs are 4.8%us, 1.1%sy, 0.0%ni, 94.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st, and about 1.5 GB of memory is free. Any idea what it could be, or where else we can check?

    iostat output on a healthy machine:

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                   8.97    0.03    4.46    0.19    0.14   86.23

        Device:   tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
        xvdap1   1.60         0.69        55.38     587620   47254184
        xvdfp2   2.64         1.10        61.04     934786   52091056
        xvdfp4   0.86         0.19        41.72     163866   35601920
        xvdfp1   4.37        36.59        73.89   31220810   63051504
        xvdfp3   8.03         7.08        94.63    6045402   80749184

    iostat output on the problematic machine:

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                   9.29    0.04    5.55    0.26    0.11   84.74

        Device:   tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
        xvdap1   2.13         3.34        68.85     246244    5077888
        xvdfp1   7.60        74.31       104.88    5480362    7734840
        xvdfp3  13.22        73.67       125.00    5433386    9218600
        xvdfp4   1.11         0.76        65.08      55762    4799248
        xvdfp2   4.16         3.31        99.17     243818    7313264

    Many thanks for the help.
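
    (A few hedged suggestions for narrowing this down, since top alone won't show short-lived or per-process I/O activity -- this assumes the sysstat package is available on the Amazon AMI:)

        # extended per-device stats (await / %util), refreshed every 5 seconds
        iostat -x 5
        # per-process disk activity, to see which process generates the I/O
        pidstat -d 5
        # run queue, context switches and steal over time
        vmstat 5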


  • Syntax error at '{'; expected '}' when using nagios in puppet

    - by jiangchengwu
    It's a big problem for me, because I'm not familiar with Puppet.

    Error on the puppetmaster:

        debug: importing '/etc/puppet/manifests/nodes/group-1.pp'
        err: Could not parse for environment production: Syntax error at '{'; expected '}' at /etc/puppet/manifests/nodes/group-1.pp:6

    Error on the puppet client:

        err: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not parse for environment production: Syntax error at '{'; expected '}' at /etc/puppet/manifests/nodes/group-1.pp:6

    In group-1.pp:

        node 'group1' {
            include ntp
            class { 'nagios::host':   # this is line 6
                nodename => $clientcert,
                appname  => 'test',
            }
        }

    The nagios::host class, in module/nagios/host.pp, is here:

        class nagios::host($nodename, $hostgroup) {
            file { '/usr/lib/nagios/plugins':
                mode = "755",
                require = Package["nagios-plugins"],
            }
            ...
            @@nagios_service { "${nodename}_check_ssh":
                ensure                 => present,
                use                    => 'generic-service',
                host_name              => "${nodename}",
                notification_interval  => 60,
                flap_detection_enabled => 0,
                service_description    => "SSH",
                check_command          => "check_ssh",
                target                 => "/etc/nagios3/services.d/${nodename}.cfg",
            }
        }

    The file module/nagios/init.pp is blank. How can I fix it?
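
    (Two hedged observations rather than a confirmed diagnosis: the parameterized-class syntax used at line 6 -- class { 'nagios::host': ... } -- only parses on Puppet 2.6 or newer, so an older puppetmaster would report exactly this kind of syntax error; and, separately, the file resource in host.pp uses '=' where Puppet attribute syntax wants '=>', roughly like this:)

        file { '/usr/lib/nagios/plugins':
            mode    => "755",
            require => Package["nagios-plugins"],
        }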


  • How to diagnose storage system scaling problems?

    - by Unknown
    We are currently testing the maximum sequential read throughput of a storage system (48 disks in total, behind two HP P2000 arrays) connected to an HP DL580 G7 running RHEL 5 with 128 GB of memory. Initial testing has mainly been done by running dd commands like this, in parallel for each disk:

        dd if=/dev/mapper/mpath1 of=/dev/null bs=1M count=3000

    However, we have been unable to scale the results from one array (maximum throughput of 1.3 GB/s) to two (almost the same throughput). Each array is connected to a dedicated host bus adapter, so they should not be the bottleneck. The disks are currently in a JBOD configuration, so each disk can be addressed directly. I have two questions:

    Is running multiple dd commands in parallel really a good way to test maximum read throughput? We have noticed very high SWAPIN % numbers in iotop, which I find hard to explain because the target is /dev/null.

    How should we proceed in trying to find the reason for the scaling problem? Do you think the server itself is the bottleneck here, or could there be some Linux parameters that we have overlooked?
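
    (For reference, a minimal sketch of the kind of parallel dd run described above -- the device glob and sizes are placeholders, and a purpose-built tool such as fio would give more control over queue depth and direct I/O, but this reproduces the test as stated:)

        #!/bin/sh
        # start one sequential reader per multipath device, then wait for all of them
        for dev in /dev/mapper/mpath*; do
            dd if="$dev" of=/dev/null bs=1M count=3000 &
        done
        wait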


  • Installing 64-bit Ubuntu Server 12.04 LTS, on a VM with VMWare Player, on a 64-bit Windows 7 PC

    - by WannaBeAGeek
    I'm trying to create a VM, using VMWare Player, with an ISO image of Ubuntu Server 12.04 (LTS). The machine I'm doing the installation on has an Intel(R) Core(TM) i5 CPU and runs 64-bit Windows 7. I managed to create the VM (gave username, password, configured the network, etc.), but I can't install Ubuntu Server. First I get this alert:

        Binary translation is incompatible with long mode on this platform. Disabling long mode. Without long mode support, the virtual machine will not be able to run 64-bit code. For more details see http://vmware.com/info?id=152.

    When I click OK, I get another alert:

        This virtual machine is configured for 64-bit guest operating systems. However, 64-bit operation is not possible. This host supports Intel VT-x, but Intel VT-x is disabled.
        Intel VT-x might be disabled if it has been disabled in the BIOS/firmware settings or the host has not been power-cycled since changing this setting.
        (1) Verify that the BIOS/firmware settings enable Intel VT-x and disable 'trusted execution.'
        (2) Power-cycle the host if either of these BIOS/firmware settings have been changed.
        (3) Power-cycle the host if you have not done so since installing VMware Player.
        (4) Update the host's BIOS/firmware to the latest version.
        For more detailed information, see http://vmware.com/info?id=152.

    Then, when I click OK, my VM exits and I get back to the VMWare Player home screen. I don't know much about hardware and virtualisation, so there might be some necessary info I'm not giving. Please don't hesitate to let me know what is missing from my post. Thanks :)


  • Why are my Hg Workbench & Adobe Reader toolbar icons blank white pages?

    - by Zasurus
    On my work PC (Windows 7, 32-bit), all Adobe Reader and Hg Workbench icons on the toolbar (and general shortcuts for Hg Workbench) are just white pages (with the top left corner folded over). If I right-click the toolbar icon, then right-click "TortoiseHg Workbench" (for Hg Workbench) and open "Properties", the "Change Icon..." button is greyed out (same for "Adobe Reader X"). The "Target:" field reads "TortoiseHg 2.4.0 (x86)" and is greyed out, and "Open File Location" is greyed out too. This is the same for both Hg Workbench and Adobe Reader, and it seems to be the only thing that connects them. I have tried reinstalling, to no avail, and I have local admin rights on the PC. Finally, I have tried the following questions without any luck (they aren't quite relevant to my issue, but close): "Adobe Reader program icon and pdf icons don't appear in Windows 7" and "Adobe PDF icon disappeared?". In case my description isn't enough to understand the issue, here is a screenshot: the Hg Workbench icon (far left) and a PDF open in Adobe Reader X (second icon from the left) are both blank, and the shortcut properties are mostly greyed out.
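
    (A hedged thing to try before anything more drastic: blank white icons are often just a corrupted per-user icon cache, which on Windows 7 can be rebuilt by deleting IconCache.db and restarting Explorer, roughly like this from a command prompt:)

        taskkill /f /im explorer.exe
        del /a "%userprofile%\AppData\Local\IconCache.db"
        start explorer.exe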


  • Best way to build / implement a corporate developer Linux distro with multiple kernels?

    - by Garen
    At work we have Linux users who understandably prefer using Ubuntu. The problem is, we also have developer tools that only work with 'officially' supported Linux distributions that use much older 2.6.18-based kernels. (And even if they worked with newer ones, the vendors could always say they won't "support" the software unless it's on one of their 'officially' supported platforms.) We could of course just tell them to use CentOS or something else 2.6.18-based, and I'm sure their response would be something like: "you can take Ubuntu from our cold, dead hands." :) Which brings me to some questions: is there any good/easy/recommended way to run something like Ubuntu as a host and CentOS 5.x as a guest OS (and with which system -- Xen, KVM, VMWare, ...?), and then roll that into our own custom internal distribution that could be easily installed? KVM looks like a good high-performance option, just recently included in RHEL 5.4, but if hardware support for virtualization like Intel VT or AMD-V is necessary, then I'd guess only those folks with fairly new PCs will be able to do it. I would be very interested to hear how anyone else has addressed this kind of issue.

    EDIT: The target audience / users of this kind of system would be developers, each of whom needs to run locally licensed commercial software, so building out separate beefy central machines isn't an option, unfortunately, due to license restrictions. Even if that weren't the case, a couple of developers could quickly eat up the resources with parallel builds. :) Ideally, I was hoping there was some step-by-step guide out there for building our own pre-built distribution that had, e.g., CentOS 5.x as a guest on an Ubuntu Desktop host.
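
    (Regarding the hardware-virtualization question, a quick hedged check that can be run on each candidate laptop -- a non-zero count means the CPU advertises Intel VT-x or AMD-V, though it can still be disabled in the BIOS; on Ubuntu, the cpu-checker package's kvm-ok does the same check and also reports whether the BIOS has it turned off:)

        egrep -c '(vmx|svm)' /proc/cpuinfo
        sudo kvm-ok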


  • What is going on when I can't access an SMB server share (not accessible error) until I run cmdkey to delete the credential?

    - by Warren P
    I have a network share connection issue. The first connection works, and seems to stay connected for at least a few hours. However, each time my Windows 7 PC reboots, it can no longer connect to the shared folder, nor browse to it, until I not only unmap and remap the mapped drive, but also use cmdkey to delete the stored credentials, like this:

        cmdkey /delete:Domain:target=HOSTNAME

    My work PC is on a domain, and I am not the IT administrator, but I'm curious whether there is anything I can do to investigate this issue. Are there any settings in the registry or group policy that I could examine to see why the first connection works, but each subsequent attempt (once a stored credential exists) to browse or use the connection fails with an error saying the share is "not accessible"? I do not even get any error until several minutes go by; the first thing I see is a frozen, empty window, and then the error appears. This has happened when connecting to a share on a DROBO device, and on a share which is not on the domain but was a Microsoft Home Server. I wonder if there's something broken in Windows 7 Professional with regard to connecting to non-domain shares when an Active Directory domain controller exists and the workstation is joined to a domain? The problem only occurs if I click "remember credentials". It is not fixed by any amount of working with net use. Using cmdkey to delete all stored credentials for the host is the only way to get back in, and it affects all non-domain shared folders.

    Update: I'm hoping there are some registry locations I could check that could be misconfigured in some way that might explain why SMB/CIFS stored credentials for non-domain systems seem to be auto-invalidated in this weird way. Knowing how wacko Microsoft Windows domain and security handling is sometimes, this could be some kind of stupid "feature".
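
    (A couple of hedged commands that may help with the investigation -- listing exactly which credential entries exist after a reboot, and seeing what the SMB client thinks its connections look like before and after the failure:)

        cmdkey /list
        net use
        net statistics workstation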


  • How to make an X.509 certificate from a PEM one?

    - by Ken
    I'm trying to test a script, locally, which involves uploading a file using a Java-based program to a FileZilla FTPES server. For the real thing, there is a real certificate on the FZ server, and the upload step (tested alone) seems to work fine. I've installed FileZilla Server on my dev box (so it'll test uploading from localhost to localhost). I don't have a real certificate for it, of course, so I used the "Generate new certificate..." button in FZ. It works fine from an interactive FTPES program (as long as I OK the unknown cert), but from my Java program it throws a javax.net.ssl.SSLHandshakeException ("unable to find valid certification path to requested target"). So how do I tell Java that this certificate is OK with me? (I know there's a way to change the Java program to accept any certificate, but I don't want to go down that route. I want to test it just as it will happen in production, and I don't want to ignore unknown certificates in production.) I found that Java has a program called "keytool" that seems to be for managing this sort of thing, but it complains that the certificate file that FZ generated is not an "x.509" file. A posting from the FZ side said it was "PEM encoded". I have "openssl" here, which looks like it's perfect for converting between certificate formats, but I think my understanding of certificate formats is wrong because I'm not seeing anything obvious. My knowledge of security certificates is a bit shaky, so if my title is stupidly wrong, please help by fixing that. :-)
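
    (A hedged sketch of the conversion and import, since this is exactly what openssl and keytool are for -- the file names and the alias are placeholders, and keytool can often accept the PEM file directly, making the conversion step unnecessary:)

        # convert the PEM certificate FileZilla generated into DER (binary X.509)
        openssl x509 -in fz_cert.pem -outform DER -out fz_cert.der

        # import it into a trust store for the test environment
        keytool -importcert -alias filezilla-dev -file fz_cert.der -keystore dev_truststore.jks

        # tell the JVM to use that trust store when validating the server
        java -Djavax.net.ssl.trustStore=dev_truststore.jks -jar uploader.jar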


  • Application Virtualization

    - by Peter Versnee
    I have been looking for a solution to virtualize my desktop application as a SaaS. After picking Citrix XenApp as the best solution for my case, and trying forever to get it to work (with some more experienced people), I can't help but conclude that Citrix has too many problems for my case: every time I fix one thing, some other thing pops up. As I can't imagine Citrix XenApp is the only way to deliver my application to my clients, I'd like to ask you experienced guys whether you think there is another way to deliver it whilst meeting my requirements. I've got no money issues -- time, on the other hand... My requirements:

        My application must run on a Windows server;
        The application should be virtualized with the application window only (no desktop);
        Content redirection (my application is the only one virtualized, so PDF, DOC(X) etc. stay on the users' systems);
        Easy user management;
        SSO;*
        Cross-platform clients (Windows, Linux, Mac OS X).*

    I truly hope I don't annoy you with another of the same question. I've tried to find the same question here and -- obviously -- didn't find it.

    * These I can live without; however, they would be very much appreciated.


  • How do I stop Linux from trying to mount an Android phone as USB storage?

    - by user1160711
    When I plug my Motorola Triumph into a USB port on my Fedora 17 Linux box, I get an endless series of errors on the Linux box as it desperately attempts to mount the phone as a USB drive. Stuff like this:

        Jun 23 10:26:00 zooty kernel: [528926.714884] end_request: critical target error, dev sdg, sector 4
        Jun 23 10:26:00 zooty kernel: [528926.715865] sd 16:0:0:1: [sdg] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        Jun 23 10:26:00 zooty kernel: [528926.715869] sd 16:0:0:1: [sdg] Sense Key : Illegal Request [current]
        Jun 23 10:26:00 zooty kernel: [528926.715872] sd 16:0:0:1: [sdg] Add. Sense: Invalid field in cdb
        Jun 23 10:26:00 zooty kernel: [528926.715876] sd 16:0:0:1: [sdg] CDB: Read(10): 28 20 00 00 00 00 00 00 04 00

    If I go ahead and tell the phone to allow Linux to mount the USB storage, the messages stop and I get a mounted drive, but if all I want to do is use the debug bridge, my log on Linux will continue to fill with this junk. Is there some udev magic I can do to make the system ignore this particular device as far as USB storage goes? I just noticed that if I tell the phone to enable USB storage, let Linux recognize the new disk, then tell the phone to disable USB storage again, I get one additional log message about the capacity changing to zero, but the endless spew of messages stops, so I guess one workaround is to enable and disable USB storage right away.
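
    (A hedged starting point for the udev side -- this only tells udisks not to probe or automount the device, so the kernel's own sd probe messages may well persist; the vendor ID should be whatever lsusb reports for the phone, 22b8 being Motorola's usual one, and the rule file name is arbitrary:)

        # /etc/udev/rules.d/99-ignore-triumph.rules  (hypothetical file name)
        SUBSYSTEM=="block", ATTRS{idVendor}=="22b8", ENV{UDISKS_IGNORE}="1", ENV{UDISKS_PRESENTATION_HIDE}="1"

    After adding the rule, a udevadm control --reload-rules followed by replugging the phone should be enough to test it.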


  • OpenVPN-based VPN server on same system it's "protecting": feasible?

    - by Johnny Utahh
    Scenario: hosted machine (typically a VPS) serving wiki, svn, git, forums, email lists (e.g. GNU Mailman), Bugzilla, etc. privately to < 20 people. People not on the team are not allowed access. Seeking VPN-restricted access to said server. Have good user experience with OpenVPN-based servers/clients, but have yet to server-admin such systems. Otherwise, experienced Linux sysadmin. Target system: Ubuntu, probably 12.04.

    Seeking to put an OpenVPN process on the above server to "protect" all the above-mentioned services, enabling only OpenVPN-authorized clients/processes to access them. (Can easily acquire additional IP address(es) as needed for this setup.) Option: if absolutely needed, can employ an additional, dedicated "VPN server" VPS simply to be my VPN "front end". But I would prefer to have all server processes (the VPN server plus the other server apps) running on the same machine, if possible. Will consider further if a dedicated-VPN-machine setup enables 1. easier installation/administration, 2. a better/easier end-user experience, and/or 3. a significantly more secure system. Is any of the above feasible?

    The main intention: create a VPN from purely hosted resources, and not spend all the effort to make a non-VPN, secure site -- which typically means "SSL wrapping" + all the continual webserver-application-update management. Let the VPN server deal with access security, and spend less time pushing said security "down" into the other apps/Apache.
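
    (For what a single-box setup might look like, a hedged minimal sketch of the OpenVPN server side -- the paths, tunnel subnet and key material are all placeholders. The essential design point is then to bind the protected services to the tunnel address, or firewall the public interface so that only the OpenVPN port and SSH are reachable from outside:)

        # /etc/openvpn/server.conf -- minimal sketch
        port 1194
        proto udp
        dev tun
        ca   /etc/openvpn/ca.crt
        cert /etc/openvpn/server.crt
        key  /etc/openvpn/server.key
        dh   /etc/openvpn/dh2048.pem
        server 10.8.0.0 255.255.255.0   # clients get 10.8.0.x, the server end is 10.8.0.1
        keepalive 10 120
        persist-key
        persist-tun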


  • Desktop Provisioning for a Small Linux Software Development Team

    - by deakblue
    Goal: get a small team using a standard development image rather than four software devs setting up their own environments.

    Why: it takes a day or more to install a distro, build-specific libraries, tools like editors and IDEs, mysql, couchdb, java, maven, python, android-sdk, etc. It's a giant PITA that, when repeated four times by four developers (not sysadmins), wastes time and generates annoying divergences that crop up later (it-builds-on-my-box syndrome). There's no sharing of productivity, settings, tricks, scripts, set-ups. Some of this is helped by segregating the build systems into headless VirtualBox images, but that doesn't really address tooling, or the GUI-desktop development that needs doing.

    So I see three basic strategies: ghosting, virtualization, and finally creating a kind of in-house Linux distro (I guess Google does something like this). The target dev environment is based on Debian OpenBox and must allow a mix of 3rd-gen Core i7 notebooks with 8 GB minimum to work both single- and multi-head. Importantly, the laptops are not the same, but a mix of 2012 MacBooks and PCs. So:

        Virtualization: is doing all of your work within a VM, like VirtualBox, practical on this hardware, or annoying?
        Ghosting: will laptops from different manufacturers make this impractical?
        DIY distro: short of scripting a bunch of package installs, I don't know if there's any "distro-maker" that could keep this from being an epic project of scripting package installs.

    Any advice?
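
    (On the "scripting a bunch of package installs" point, a hedged sketch of what the baseline script tends to look like -- the package names here are illustrative, not a tested list; the same list can be fed to a Debian preseed late_command or a kickstart %post section so it runs as part of an automated install:)

        #!/bin/sh
        # dev-baseline.sh -- install the shared toolchain on a fresh Debian/OpenBox machine
        set -e
        apt-get update
        apt-get install -y openbox build-essential git subversion maven \
            openjdk-6-jdk python mysql-server couchdb vim
        # per-project SDKs (android-sdk etc.) and shared dotfiles would follow here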


  • Basic connectivity issues between Win 7 and XP mixed wired/wireless network.

    - by Pulse
    Setup:

        Windows 7 x64 Ultimate desktop, hard-wired to an Asus WL500gp router (WL500gpv2-1.9.2.7-d-r1445 firmware)
        Several bridged VirtualBox VMs running XP, 7, Ubuntu Server 10.04, Mint 9 and SuSE 11.2
        Win XP Pro SP3 notebook with a D-Link AirPlus wireless network card
        No firewall or other security software currently running on either platform (at least for the duration of the test)

    Situation:

        The router is acting as DHCP server
        Clients are receiving correct addresses and additional parameters
        Internet connectivity is available from all clients
        Windows 7 sharing is set to network type = Work (not HomeGroup)
        NetBT is disabled on all clients, using SMB over TCP

    What I can do:

        I can ping the router and Internet addresses from the wireless XP notebook
        I can ping the Win 7 desktop and any VM from the XP wireless notebook
        I can ping all devices from the router
        All the VMs and the Win 7 desktop can ping each other and the router, as well as Internet addresses

    What I can't do:

        I cannot ping the XP wireless notebook from either the Win 7 desktop or the VMs; it always returns "destination host unreachable". Tracert resolves the name of the XP notebook but also returns "destination host unreachable".

    From the above, it would seem that something is blocking connectivity in a single direction only (from the Win 7 box to the Win XP notebook), yet the router can ping the XP notebook. Some fresh input would be most welcome, as this is beginning to drive me batty. Thanks.


  • virtual machines, dual booting and data disks on SSD

    - by stevemarvell
    This is in planning, so if I've got the strategy wrong, please let me know. There are multiple questions here, but I think they all degenerate to the same answers. The hardware is a laptop with a single SSD, and I'm trying not to lose the performance of the SSD. I plan a natively dual-booting Windows (plus Cygwin) and Linux machine, which is my BYOD and represents the development environment. I keep the codebase on a shared partition (though sometimes this is an external Thunderbolt SSD) which can be natively "mounted" by whichever OS is in operation. I boot into one or the other environment depending on the task at hand. Sometimes I have to develop with Windows tools, but generally Linux is my preferred development environment. It would be ideal if I could VM the other OS and run either in either. I'm going to assume, because I've not found a sensible VM-based solution, that I have to get Samba involved to share the code partition between VMs. Is this going to blow my SSD performance in the VM? The client also supplies me with a VM for the target environment, usually Linux. This is not often suited to development and is used for testing only. I normally keep two copies of this, one as a sandbox and one which I deploy to using the client's preferred method. I keep these VM snapshots on the shared partition. The latter is interacted with over the network and so has no disk-sharing requirements. However, it would be useful for the sandbox to be able to "mount" the codebase from the natively running OS. Is this Samba or NFS again, depending on the native OS? Am I missing a trick which allows this to all work smoothly with all four environments running at once, without losing the SSD performance?
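
    (On the "Samba or NFS" question, a hedged sketch of the NFS variant for when the native OS is Linux -- the export path and the host-only subnet are placeholders; when Windows is the native OS, a plain Windows file share plus CIFS mounts in the VMs would be the rough equivalent:)

        # on the natively booted Linux host: /etc/exports
        /data/code  192.168.56.0/24(rw,sync,no_subtree_check)

        # inside the sandbox VM
        sudo mount -t nfs 192.168.56.1:/data/code /mnt/code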


  • How do I repartition an SDHC card in Windows?

    - by Peter Mortensen
    How do I repartition an SDHC card (4 GB or more)? Do I need third-party tools or Linux (a live CD solution would be OK)? In Windows' Disk Management the Delete Partition option is dimmed out: I can reformat the card as FAT32, copy files to and from the card, and even change the file system to NTFS using the command-line command CONVERT, but not repartition it. The article "How to Partition an SD Card in Windows XP" talks about using "a Windows enabler program", which sounds rather dubious to me. I have tried changing from “Optimize for quick removal” to “Optimize for performance”; the option to format as NTFS appeared, but the Delete Partition option is still dimmed out.

    Platform: Windows XP 64-bit.
    SD card reader: USB 2.0 device, LogiLink® CR0005C Cardreader 3,5' USB 2.0 intern 54-in-1 mit USB Front.
    Card: Kingston 16 GB SDHC, speed class 4. (It could be formatted as FAT32 and successfully used in a 4 GB ReadyBoost setup on Windows 7.)

    I have also tried on different versions of Windows and with different cards, with the same result:

        Kingston 4 GB SDHC card, speed class 4 (the one shown in the screenshot)
        Transcend 2 GB (not marked as SDHC, but SD)
        Windows 7 32-bit (albeit with a somewhat older card reader) and Windows XP 32-bit on an EliteBook 8730w
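
    (A hedged alternative to Disk Management: the diskpart command-line tool will often operate on removable media that the GUI refuses to touch, particularly on Windows 7; whether the XP-era diskpart accepts the card is less certain. The disk number below is only an example -- selecting the wrong disk is destructive, so double-check it against the listed sizes:)

        diskpart
        DISKPART> list disk
        DISKPART> select disk 2
        DISKPART> clean
        DISKPART> create partition primary
        DISKPART> format fs=fat32 quick
        DISKPART> assign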


  • Is it possible to use rsync over sftp (without an ssh shell)?

    - by Tom Feiner
    Rsync over ssh works great every time. However, trying to rsync to a host which allows only sftp logins, but not ssh logins, produces the following error:

        rsync -av -e ssh /source user@remotehost:/target/

        protocol version mismatch -- is your shell clean?
        (see the rsync man page for an explanation)
        rsync error: protocol incompatibility (code 2) at compat.c(171) [sender=3.0.6]

    Here's the relevant section from the rsync man page:

        This message is usually caused by your startup scripts or remote shell facility producing unwanted garbage on the stream that rsync is using for its transport. The way to diagnose this problem is to run your remote shell like this:

            ssh remotehost /bin/true > out.dat

        then look at out.dat. If everything is working correctly then out.dat should be a zero length file. If you are getting the above error from rsync then you will probably find that out.dat contains some text or data. Look at the contents and try to work out what is producing it. The most common cause is incorrectly configured shell startup scripts (such as .cshrc or .profile) that contain output statements for non-interactive logins.

    Trying this on my system produced the following in out.dat:

        ssh-dummy-shell: Command not allowed.

    As I thought, the host is not allowing ssh logins. The following link shows that it is possible to accomplish this task using FUSE with sshfs -- however, it is extremely slow and not fit for production use. Is there any chance of getting rsync over sftp to work?
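
    (rsync itself needs either a remote shell it can run rsync across, or an rsync daemon on the far end, so a pure-sftp host is a poor fit for it. As a hedged alternative that stays within sftp, lftp's mirror mode can do incremental uploads -- no rsync-style delta transfer, but it only re-sends files whose size or timestamp changed:)

        lftp -u user sftp://remotehost -e "mirror -R --only-newer --verbose /source /target; quit"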


  • PST backup with Volume Shadow Copy Service

    - by NoMadMan
    I was asked to implement the task of backing up 35 PST files ranging from 800Mb to 2000Mb. Windows XP and Windows 2000 workstations are assigned to the users and we have a Windows 2000 domain controller we use to back up files on 3x 500Gb external hard drives. I found several methods from applications to scripts. Local or remote applications would be my last resort. I came across this script based on Volume Shadow Copy Service. CopyWithVss I wanted to know if there would be a problem if the path had spaces. Would mounting the destination path of each PST folder with a drive letter be more practical? My concern with mounting option is that i would eventually run out of letters since I have 35 and possibly more workstations to back up. Lastly, can someone give me an example of CopyWithVss if it were run on a production network? The script is a bit cryptic even after reading several times. Where in the script do I enter the source and the destination? I'm a Mac user so please excuse my ignorance with Windows platform.


  • Equivalent of scp -l bandwidth_cap for .ssh/config?

    - by Mark Bennett
    Short form: you can limit the bandwidth scp uses with the -l switch, passing a number in kbit/s. I'd rather set this in my .ssh/config file for certain named machines. What's the equivalent named setting for -l? I haven't been able to find it.

    Follow-up question: generally, I'm not sure how to map back and forth between ssh command-line options and config names, short of doing Google searches or manually comparing man pages on a case-by-case basis. Is there a table that directly equates the two?

    Longer form of the first question, with context: I've started using ssh config quite a bit, especially now that I need to go through a proxy and do lots of port mappings. I even define the same machine more than once depending on what type of tunneling I need. However, when uploading a large file, it's difficult to do anything else on my machine. Even though I have more download bandwidth than upload, I think scp saturates the link so that even my small requests can't reach the Internet. There's a fix for this, using the -l bandwidth command-line switch for scp:

        scp -l 1000 bigfile.zip titan:

    I'd like to use this in my config instead, so I'd create an additional named entry called "titan-upload" and use that as the target whenever I upload. So instead of:

        scp bigfile.zip titan:

    I'd say:

        scp bigfile.zip titan-upload

    Or even set different caps depending on where I am:

        scp bigfile.zip titan-upload-from-home
        scp bigfile.zip titan-upload-from-work

    I'm generally on Mac and Linux.
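
    (A hedged note on what the config half can and can't do: Host aliases like the ones below handle the naming, but as far as I can tell ssh_config has no keyword for a transfer-rate cap, so the -l limit itself would still have to live on the scp command line, in a shell alias, or in a small wrapper script. Hostname and user are placeholders:)

        # ~/.ssh/config
        Host titan titan-upload
            HostName titan.example.com
            User mark

        # wrapper idea: a shell alias that always applies the cap
        alias scpup='scp -l 1000'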


  • Unable to set Xmx beyond 4 GB on a system with 8 GB of RAM

    - by Arun
    I need to set:

        ANT_OPTS=-Xms1024m -Xmx6144m -XX:PermSize=1024m -XX:MaxPermSize=1024m
        JAVA_OPTS=-Xms1024m -Xmx6144m -XX:PermSize=1024m -XX:MaxPermSize=1024m

    I have a system with 8 GB of RAM (recently upgraded from 4 GB), but once I set the ant opts to the values above I am not able to run any of my ant targets, and I get the following error:

        [ERROR] Argument error: -Xmx6144m
        [ERROR] Specified maximum heap size (6144 MB) is larger than the address space on this platform (4 GB).
        [WARN ] -XX:PermSize=1024m is not a valid VM option. Ignoring
        [WARN ] -XX:MaxPermSize=1024m is not a valid VM option. Ignoring
        Could not create the Java virtual machine.

    This is the Java that I have on my system:

        java version "1.6.0_20"
        Java(TM) SE Runtime Environment (build 1.6.0_20-b02)
        Oracle JRockit(R) (build R28.1.0-123-138454-1.6.0_20-20101014-1351-windows-x86_64, compiled mode)

    I am running Windows 7 on an Intel Core 2 Duo 3 GHz processor with 8 GB of RAM. Any pointers towards solving this issue would be of great help.

    PS: I did google for the error, and it was one of the first such occurrences where I did not get any links pointing to a specific solution. Maybe no one has encountered such a scenario.
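
    (Two hedged observations: the "address space on this platform (4 GB)" message is the kind a 32-bit JVM prints, so it's worth confirming that the JVM ant actually launches is the 64-bit JRockit shown above and not some other java earlier in the PATH; and the PermSize warnings are expected under JRockit, since it has no permanent generation. A quick check from a command prompt:)

        where java
        echo %JAVA_HOME%
        "%JAVA_HOME%\bin\java" -version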


  • Pushing Windows Store apps silently

    - by seagull
    As part of a requirement I have been issued, I seek the ability to push apps from the Windows Store (.appx) to remote users. The framework surrounding the data transmission is sorted; What I need to know is if there is a script that can be sent out along with a URI or the package proper to facilitate installation on the user-side of the Windows Store app. I am well aware that numerous guides exist from Microsoft on pushing out Line-of-Business (LOB) (AKA Enterprise) applications -- that is, apps that have been developed in-house and are not for the consumption of the Windows store. This is inappropriate for my requirement, however; the customer wants their clients to receive apps that appear currently on the Windows Store, and for them to be installed in a silent manner. I've seen this guide; http://blogs.technet.com/b/keithmayer/archive/2013/02/25/step-by-step-deploying-windows-8-apps-with-system-center-2012-service-pack-1.aspx, which details doing exactly this, but it is only applicable to machines that are administered by a system running Windows Server 2012 R2 with the 'System Center 2012' bundle installed. The systems I target are considerably more decentralised than this, making this guide inappropriate. I have a hunch that Microsoft have deliberately designed the Windows Store to be this way, but I figured I ought to ask around before I resign myself to the requirement. Much obliged


  • Why does BitLocker need a minimum volume size of 64 MB?

    - by Iszi
    Since the future of TrueCrypt appears to be still unclear, I figured I'd try to get my stuff migrated into BitLocker at least for the time being. I nearly never have to access my encrypted data from anything that's not BitLocker-capable, so cross-platform compatibility isn't a big deal to me at this time. However, I am having a bit of an issue understanding the minimum requirement of a 64 MB volume. With TrueCrypt, I was able to protect small files (and most of my protected files are fairly small) in containers down to 300 KB or even less. When I finally created a VHD of an appropriate size last night (100 MB), it seemed the file system itself only took up about 3 MB and encrypting it with BitLocker didn't appear to take up any more. While 3 MB is still an order of magnitude larger than the smallest volume I could make with TrueCrypt, it's still relatively reasonable in comparison to 64 MB. This is an especially large amount of overhead (and largely wasted at that, since it's mostly empty space for now) when I consider that some of these volumes will be stored and synced in the cloud. What possible reasons could BitLocker have for needing volumes to be 64 MB large, when it's not even appearing to use that space? BitLocker FAQ on TechNet


  • Does SNI represent a privacy concern for my website visitors?

    - by pagliuca
    Firstly, I'm sorry for my bad English. I'm still learning it. Here it goes: When I host a single website per IP address, I can use "pure" SSL (without SNI), and the key exchange occurs before the user even tells me the hostname and path that he wants to retrieve. After the key exchange, all data can be securely exchanged. That said, if anybody happens to be sniffing the network, no confidential information is leaked* (see footnote). On the other hand, if I host multiple websites per IP address, I will probably use SNI, and therefore my website visitor needs to tell me the target hostname before I can provide him with the right certificate. In this case, someone sniffing his network can track all the website domains he is accessing. Are there any errors in my assumptions? If not, doesn't this represent a privacy concern, assuming the user is also using encrypted DNS? Footnote: I also realize that a sniffer could do a reverse lookup on the IP address and find out which websites were visited, but the hostname travelling in plaintext through the network cables seems to make keyword based domain blocking easier for censorship authorities.


  • Creating a partitioned raid1 array for booting a debian squeeze system

    - by gucki
    I'd like to have the following raid1 (mirror) setup: /dev/md0 consists of /dev/sda and /dev/sdb. I created this raid1 device using:

        mdadm --create --verbose /dev/md0 --auto=yes --level=1 --raid-devices=2 /dev/sda /dev/sdb

    This gave a warning that the metadata is 1.2 and my system might not boot. I cannot use 0.9 because it restricts the size of the raid to 2 TB, and I assume the grub shipped with the latest Debian (squeeze) should be able to handle metadata 1.2. So then I created the needed partitions like this:

        # creating new label (partition table)
        parted -s /dev/md0 mklabel 'msdos'

        # creating partitions
        sfdisk -uM /dev/md0 << EOF
        0,4096
        ,1024,S
        ;
        EOF

        # making root filesystem
        mkfs -t ext4 -L boot -m 0 /dev/md0p1

        # making swap
        mkswap /dev/md0p2

        # making data filesystem
        mkfs -t ext4 -L data /dev/md0p3

    Then I mounted the root partition, copied a minimal Debian install inside, and temporarily mounted /dev, /proc and /sys. After this I chrooted into the new root folder and executed:

        grub-install --no-floppy --recheck /dev/md0

    However, this fails badly with:

        /usr/sbin/grub-probe: error: unknown filesystem.
        Auto-detection of a filesystem of /dev/md0p1 failed.
        Please report this together with the output of "/usr/sbin/grub-probe --device-map=/boot/grub/device.map --target=fs -v /boot/grub" to

    I don't think it's a bug in grub (so I didn't report it yet) but a fault of mine. So I really wonder how to properly set up my raid1; everything I tried so far has failed.
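
    (A hedged alternative layout, in case the partitioned-md route keeps fighting grub: partition the raw disks first and mirror the partitions, which is what the Debian installer itself produces. Sizes and the partition layout below are illustrative; metadata 1.0 keeps the superblock at the end of the member, so the partition holding /boot still looks like a plain filesystem to the boot loader:)

        # identical partition tables on both disks (after partitioning /dev/sda as desired)
        sfdisk -d /dev/sda | sfdisk /dev/sdb

        # one mirror per partition
        mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 /dev/sda1 /dev/sdb1   # /
        mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2                  # swap
        mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3                  # data

        # install the boot loader on both disks so either one can boot
        grub-install /dev/sda
        grub-install /dev/sdb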


  • pcfg_openfile: unable to check htaccess file, ensure it is readable

    - by rxt
    After moving a website folder on my local development machine to another drive, then moving it back, I got a 403 error. Most of this probably had to do with rights that got messed up. After deleting the code and restoring it from SVN, the rights seemed all right; the error stayed, however. The setup is a bit complex, as follows:

        I have Ubuntu 10.04 as the development machine, trying to mimic the server as much as possible
        We use Eclipse + SVN, and I create all projects in a local folder under my user account
        In /var/www-vhosts I create folders for each vhost, like this one: test.localhost
        test.local/index.php includes the index file of the project
        test.local/.htaccess is a symbolic link to the htaccess file in a project subfolder

    I get the following error in the Apache error log:

        [Thu Jul 08 15:55:56 2010] [crit] [client 127.0.0.1] (13)Permission denied: /var/www-vhosts/test.localhost/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable

    The problem seems to be the .htaccess file, or the link to it:

        When I empty the htaccess, nothing changes
        When I remove the link, the index-include produces some output (in the Apache error log)
        When I remove the link and replace it with the actual file, I get another error:

        [Thu Jul 08 16:47:54 2010] [error] [client 127.0.0.1] Symbolic link not allowed or link target not accessible: /var/www-vhosts/test.localhost/test

    I'm lost here and don't know what to do next. Do you have any ideas what I can try? This setup has worked before, but I don't know what is different now.
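
    (Two hedged things worth checking, given those two errors: whether every directory component of both the vhost path and the link target is readable and traversable by the Apache user (www-data on Ubuntu), and whether the vhost allows symlinks to be followed at all:)

        # show owner/permissions of every component of the path
        namei -l /var/www-vhosts/test.localhost/.htaccess

        # in the vhost configuration: Apache will not follow the link without this
        <Directory /var/www-vhosts/test.localhost>
            Options +FollowSymLinks
            AllowOverride All
        </Directory>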

