Search Results

Search found 23613 results on 945 pages for 'query parameters'.

  • Top causes of slow ssh logins

    - by Peter Lyons
    I'd love for one of you smart and helpful folks to post a list of common causes of delays during an ssh login. Specifically, there are two spots where I see anything from an instantaneous response to multi-second delays: between issuing the ssh command and getting a login prompt, and between entering the passphrase and having the shell load. I'm looking at ssh details only here; obviously network latency, the speed of the hardware and OSes involved, complex login scripts, etc. can cause delays. For context, I ssh to a vast multitude of Linux distributions and some Solaris hosts, using mostly Ubuntu, CentOS, and Mac OS X as my client systems. Almost all of the time, the ssh server configuration is unchanged from the OS's default settings. What ssh server configuration options should I be interested in? Are there OS/kernel parameters that can be tuned? Login shell tricks? Etc.?
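
    A quick way to see where the time goes is to run the client verbosely; two sshd_config settings that commonly cause multi-second stalls are reverse-DNS lookups and GSSAPI negotiation. A minimal sketch (host name is a placeholder):

        # watch the timing of each handshake step
        ssh -vvv user@somehost

        # in /etc/ssh/sshd_config on the server, then restart sshd:
        UseDNS no                  # skip reverse-DNS lookup of the client
        GSSAPIAuthentication no    # skip Kerberos/GSSAPI negotiation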

  • How to fix Ubuntu 10.10 black screen from terminal?

    - by none
    I'm trying to install Ubuntu Desktop 10.10 on an Intel Atom mainboard (Intel D945GCLF2) with a CRT that was previously running Ubuntu 9.x. Both the Desktop live CD/installer and the alternate install CD cause the screen to go black (and the status LED blinks). I was able to get a bit further into the boot process with nomodeset as a parameter on the live CD; unfortunately, now that I have installed from the alternate CD, I can't pass GRUB any parameters by pressing 'e', it just boots. So now I have Ubuntu installed and I can get a terminal with CTRL-ALT-F1, but I don't know what to do next, or how to adjust the resolution or video settings from the command line.
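
    One approach, assuming Ubuntu 10.10's GRUB 2, is to make nomodeset permanent from the virtual terminal you can already reach:

        sudo nano /etc/default/grub
        # change this line:
        #   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
        # to:
        #   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"
        sudo update-grub     # regenerate /boot/grub/grub.cfg
        sudo reboot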

  • "Cannot allocate memory " error whle copying data from window to ubuntu

    - by John
    I have Ubuntu 9.10 installed inside a VM on Server 2008. When I try to copy data from the network and paste it inside Ubuntu, I get an error that says "Cannot allocate memory". I have 3GB of RAM attached to the Ubuntu VM. I set the following registry key to '1': HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\LargeSystemCache, and the following registry key to '3': HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters\Size, but I am still unable to copy the file from my host machine (Windows XP) to my Ubuntu virtual machine. I am trying to copy jdk-1_5_0_22-linux-i586.bin, whose size is 47.4 MB. Is there any other workaround for this problem?
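
    For reference, those two registry changes can be applied from an elevated Windows command prompt as below (a sketch; reboot, or restart the Server service, afterwards):

        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v LargeSystemCache /t REG_DWORD /d 1 /f
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v Size /t REG_DWORD /d 3 /f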

  • What is your approach to drawing a representation of your network?

    - by Kartoch
    Hello, I'm looking to the community to see how people draw their networks, i.e. use symbols to represent complex topology. You can take a hardware approach, where every hardware unit is represented. You can also take an "entity" approach, where each "service" is shown. Both are interesting, but it is difficult to have both on the same schema (and this is needed, especially in a virtualization environment). Furthermore, it is difficult to put complex information on such a representation, for instance security parameters (encrypted links, the need for authentication) or specific details (protocol type, ports, encapsulation). So my question is: when you draw a representation of your network, what is your approach? Do you use a methodology and/or specific software? What are your recommendations for information to include (or not)? How do you deal with the complexity when the network becomes large and/or you want to put a lot of information on it? Examples and links to good references will be appreciated.

  • Is there any trick to join and use Windows 8/8.1 with Samba 4 (4.1.6)?

    - by tenshimsm
    It seems that Samba doesn't like Windows 8 at all. I've followed various tutorials and I can't get Windows 8 to work properly with an Ubuntu server as domain controller. This week I downloaded Ubuntu 14.04 LTS and set up a quick domain configuration. As usual, all other Windows versions (XP and 7) work, but the newest M$ nightmare doesn't. This time it doesn't even join the domain; it keeps saying that my username or password is wrong. My /etc/samba/smb.conf:

        # Global parameters
        [global]
            workgroup = DOMAIN
            realm = DOMAIN.LAN
            netbios name = DOM
            server role = active directory domain controller
            dns forwarder = 8.8.8.8
            idmap_ldb:use rfc2307 = yes

        [netlogon]
            path = /var/lib/samba/sysvol/domain.lan/scripts
            read only = No

        [sysvol]
            path = /var/lib/samba/sysvol
            read only = No

        [test]
            directory mode = 0750
            path = /SHARES/test
            read only = no

    Does anyone have a tutorial that really works? Every one I've tried uses a different configuration that only seems to work for the person who wrote it. And is there a way to import my old AD users, computers, and IDs so that I won't need to rejoin all the computers?
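
    Before blaming Windows 8, it may be worth confirming that DNS and Kerberos are healthy on the DC, since failed joins reported as "wrong username or password" are often DNS or Kerberos problems. A hedged checklist (the realm mirrors the smb.conf above):

        host -t SRV _ldap._tcp.domain.lan          # should point at the DC
        host -t SRV _kerberos._udp.domain.lan
        kinit administrator@DOMAIN.LAN             # must succeed; realm in caps
        smbclient -L localhost -U administrator    # should list netlogon/sysvol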

  • Redirect SSL pages with a header statement based on port

    - by bob's your brother
    I found this in the header.php file of an e-commerce site. Is this better done in a .htaccess file? Also, what would happen to any POST parameters that get caught by the header statement?

        // flip between secure and non-secure pages
        $uri = $_SERVER['REQUEST_URI'];
        // move to secure SSL pages if required
        if (substr($uri,1,12) == "registration") {
            if ($_SERVER['SERVER_PORT'] != 443) {
                header("HTTP/1.1 301 Moved Permanently");
                header("Location: https://".$_SERVER['HTTP_HOST'].$_SERVER['REQUEST_URI']);
                exit();
            }
        }
        // otherwise use regular non-SSL pages
        else {
            if ($_SERVER['SERVER_PORT'] == 443) {
                header("HTTP/1.1 301 Moved Permanently");
                header("Location: http://".$_SERVER['HTTP_HOST'].$_SERVER['REQUEST_URI']);
                exit();
            }
        }
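
    On the POST question: after a 301, browsers re-request the Location with GET, so any POST body is dropped (a 307/308 would preserve the method). A mod_rewrite sketch of the same logic for .htaccess, assuming the same "registration" prefix:

        RewriteEngine On
        # force HTTPS for /registration...
        RewriteCond %{SERVER_PORT} !=443
        RewriteRule ^registration https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
        # force plain HTTP everywhere else
        RewriteCond %{SERVER_PORT} =443
        RewriteCond %{REQUEST_URI} !^/registration
        RewriteRule ^ http://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]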

  • Ways to improve completeness of files for data recovery and scanning?

    - by SteveO
    I am using R-Studio for data recovery on one of my NTFS partitions. There is a PDF file of about 16 MB, but the software can only recover 15 MB of it. So I am wondering what can be done to improve the quality of the scan and recovery. I am looking around its preferences, but I am not sure whether there are adjustable scanning and recovery parameters that can be fine-tuned to improve the results. R-Studio has a free demo version, for which scanning is free but recovery isn't; it is downloadable from http://www.data-recovery-software.net/Data_Recovery_Download.shtml and its manual is at http://www.r-tt.com/downloads/Recovery_Manual.pdf. I have tried my best to search for answers in the manual, but failed to find one. Their technical support is not as good as their software, and usually unhelpful in my opinion. Thanks!

  • Suhosin per-URL exceptions?

    - by STATUS_ACCESS_DENIED
    I am using SimpleID as my OpenID provider, and it turns out that when I log in via pages like those on StackExchange, one of the parameters of the GET request gets dropped by Suhosin. The name of the variable is s, and I presume it's responsible for the "return to URL" part after login. None of this is a problem as long as I am already logged into SimpleID. However, as soon as the site on which I want to log in via OpenID lands me at the SimpleID login screen, the redirect back to the originating site no longer works because of the dropped variable. Is there a way to configure Suhosin, on a per-virtual-host or per-URL basis, to allow GET parameters such as s to exceed the (globally) set length limit? I'm using Apache 2.2, so I was wondering whether a mechanism similar to setting PHP ini variables from within the server configuration exists for Suhosin.
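
    Since Suhosin's limits are ordinary PHP ini settings (the relevant one here is likely suhosin.get.max_value_length, whose default is 512), one avenue is to raise it for just the SimpleID path in the vhost config. A sketch, assuming mod_php and that Suhosin allows this directive to be changed outside php.ini:

        <Location /simpleid>
            php_admin_value suhosin.get.max_value_length 2048
        </Location>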

  • Having two FTP ports for the user

    - by user1663896
    I'm running vsftpd on Red Hat 6.4 using TLS/SSL on port 990, and it works great. I have been tasked with having the vsftpd server listen on the unencrypted port 21 as well, giving my users the choice of clear-text FTP on port 21 or TLS/SSL on port 990. I tried the following in my vsftpd.conf file, which did not work:

        listen_port=990
        listen_port=21

    My config file has the following SSL parameters:

        chroot_local_user=YES
        ssl_enable=YES
        allow_anon_ssl=NO
        anonymous_enable=NO
        anon_world_readable_only=NO
        force_local_data_ssl=NO
        force_local_logins_ssl=NO
        require_ssl_reuse=NO

    Can vsftpd run on ports 21 and 990 at the same time? Thanks in advance.
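
    listen_port takes a single value, so a common approach (a sketch; paths are placeholders) is to run two vsftpd instances, each with its own config file:

        cp /etc/vsftpd/vsftpd.conf /etc/vsftpd/vsftpd-plain.conf
        # in vsftpd-plain.conf set:
        #   listen_port=21
        #   ssl_enable=NO
        vsftpd /etc/vsftpd/vsftpd-plain.conf &   # vsftpd accepts a config path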

  • Error 800 while connecting to VPN

    - by Aamir
    I am trying to connect to my office network through VPN. It used to work fine, but now as soon as I hit connect, I get Error 800: "Unable to establish VPN Connection. The VPN server may be unreachable, or security parameters may not be configured properly for this connection." I am using Windows XP and I can ping the VPN server successfully. I have Symantec Endpoint Protection installed (if it matters); I have tried disabling it as well, but nothing changes.

  • Kernel compile with -j2 or higher ends prematurely with no error message or bzImage output

    - by Minix
    I noticed quite a while ago that compiling a kernel with the -j parameter set to 2 or more doesn't produce a bzImage. Instead, the build ends prematurely without any error message. I have reproduced the same behavior on both my netbook and my home server. As far as I can tell, the point where the compilation stops is random; compiling twice with the same parameters will stop at different files. However, when I run make with no -j parameter, the compilation finishes fine and outputs a working bzImage. Both machines run Intel Atom CPUs (an N270 in the netbook, a 330 in the server) and I've compiled for these processors; if I recall correctly, I've tried building both with the Atom options and with generic x86_64 options. The kernel version I'm building is 2.6.34.1. I've always compiled fine with those options on my Core2Duo and Pentium Dual Core machines. Has anyone experienced this issue? Any ideas why this happens? Is there a fix or workaround?
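
    Two hedged ways to catch what a parallel build is hiding: serialize the output into a log, and check whether the kernel's OOM killer silently ended a compiler job (a random stopping point on a low-RAM Atom box would be consistent with that):

        make -j2 2>&1 | tee build.log      # keep every job's output
        grep -n -B2 'Error ' build.log     # find the first failing rule
        dmesg | tail                       # look for OOM-killer messages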

  • Java and Sendmail: HELO requires domain address

    - by ealgestorm
    I am trying to set up emailing from a Java web application hosted in Apache on a Linux server (CentOS). Sendmail works fine from the command line as root on localhost, but when the Java web app (on the same server, also via localhost) tries to send mail, the following exception is thrown: "501 5.0.0 HELO requires domain address". EDIT: I have read that this can be due to an incorrect hosts entry. Currently the hosts file contains:

        127.0.0.1 Centos-VPS localhost.localdomain localhost

    I'm not sure what the Centos-VPS bit at the start is for, but this is a client's hosted server, so I don't really want to break stuff. EDIT: see, the RFC is so helpful ... "501 Syntax error in parameters or arguments". Now I know what the problem is! (Note the sarcasm, people.)
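
    The 501 means the name sent in HELO ("Centos-VPS") has no domain part. Two hedged fixes: give the hostname a domain in /etc/hosts, or tell the mail library what to say in HELO (the FQDN below is a placeholder):

        # /etc/hosts
        127.0.0.1   localhost.localdomain localhost
        127.0.0.1   Centos-VPS.example.com Centos-VPS

    or, in the Java code, assuming the app uses JavaMail:

        props.put("mail.smtp.localhost", "Centos-VPS.example.com");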

  • WGet a Page that Requires Logging in

    - by Synetech inc.
    I’m trying to figure out a way to use wget or a similar tool so that I can schedule a web page to be downloaded regularly, as a sort of updating log. The problem is that the page requires that I be logged in, otherwise I get a different, generic page. Further, the page does not take login information as GET parameters in the URL; it uses POST to log in on the login page, and cookies to save the login information that’s read by the regular page. I’m currently using GNU Wget 1.10.2 for Windows. I’ve tried using wget’s cookie functionality but have had mixed results, usually skewing towards it not working. Can anyone please advise on a way to accomplish this? Thanks a lot.
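
    A sketch of the usual two-step approach (the URLs and form-field names are placeholders; they must match the site's actual login form, and your wget build must support --keep-session-cookies):

        wget --save-cookies cookies.txt --keep-session-cookies --post-data "user=me&password=secret" http://example.com/login
        wget --load-cookies cookies.txt http://example.com/logpage

    The --keep-session-cookies flag matters: without it, session cookies (which carry no expiry date) are not written to cookies.txt at all.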

  • Fastest security check of file tree on NFS

    - by fungs
    I am currently experiencing very bad performance using the following on an NFS network folder:

        time find . | while read f; do
            test -L "$f" && f=$(readlink -m "$f")
            grp="$(stat -c %G "$f")"
            perm="$(stat -c %A "$f")"
        done

    Question 1) Within the loop, permissions are checked using the variables grp and perm. Is there a way to lower the amount of disk I/O for these kinds of checks over the network (e.g. read all metadata at once using find)? Question 2) It seems like the NFS mount isn't tuned very well; the same operation on a similar network link via SSHFS takes only one third of the time. All parameters are auto-negotiated. Any suggestions?
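
    On question 1, GNU find can print the same metadata itself, one lstat per file instead of spawning two extra stat processes per file; a sketch:

        # %M = symbolic permissions, %g = group name, %p = path;
        # -L follows symlinks, like the readlink -m branch above
        find -L . -printf '%M %g %p\n'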

  • Why run a Linux shell command with &?

    - by George2
    I am using Red Hat Enterprise Linux version 5. Sometimes I notice people run a command with a couple of &-signs. For example, the command below contains two of them. What is the function of each? Are they always used together with nohup?

        nohup foo.sh <parameters to the script> >& <log_file_name> &

    Thanks in advance, George
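
    For reference, the two serve different purposes; >& redirects both stdout and stderr to the log file, while the trailing & runs the job in the background. In bash the same line can be written with the explicit redirection form:

        # nohup keeps the job alive after the terminal hangs up;
        # > foo.log 2>&1 sends stdout and stderr to the log;
        # the final & puts the job in the background
        nohup foo.sh arg1 arg2 > foo.log 2>&1 &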

  • Setting Boot and Mirror Disks correctly at the Solaris OBP

    - by Shaun Dewberry
    I am recovering a domain that was lost due to a power outage on a Sun Fire E25K server. I know how to set the appropriate parameters at the OpenBoot prompt using nvalias/devalias, boot, etc. However, I do not understand how one gets from the output of show-disks:

        {1a0} ok show-disks
        a) /pci@1dd,600000/SUNW,qlc@1/fp@0,0/disk
        b) /pci@1dd,700000/SUNW,qlc@1/fp@0,0/disk
        c) /pci@1dc,700000/pci@1/pci@1/scsi@2,1/disk
        d) /pci@1dc,700000/pci@1/pci@1/scsi@2/disk
        e) /pci@1bd,600000/SUNW,qlc@1/fp@0,0/disk
        f) /pci@1bd,700000/SUNW,qlc@1/fp@0,0/disk
        g) /pci@1bc,700000/pci@1/pci@1/scsi@2,1/disk
        h) /pci@1bc,700000/pci@1/pci@1/scsi@2/disk
        q) NO SELECTION
        Enter Selection, q to quit:

    to the correct full disk path. I know it is basically one of the pci/scsi paths listed above, but in all the instructions and examples, a string of additional characters is appended to the path to specify targets and units, and the explanation of how that path is constructed is never given. Could someone please explain how to construct the disk path correctly?
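
    The appended part is the disk's unit address: disk@target,lun for parallel SCSI, or disk@wWWN,lun behind a fibre-channel (qlc) controller; the probe commands at the ok prompt list the available targets. A hedged sketch (the targets below are placeholders, not your actual disks):

        ok probe-scsi-all                ( lists targets behind each controller )
        ok nvalias bootdisk /pci@1dc,700000/pci@1/pci@1/scsi@2/disk@0,0
        ok nvalias mirrordisk /pci@1bc,700000/pci@1/pci@1/scsi@2/disk@1,0
        ok setenv boot-device bootdisk mirrordisk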

  • Virtualise Excel in a browser

    - by Macros
    Is it possible to give users access to a virtualised instance of Excel? I don't want to give them access to a full OS (although one will clearly be running in the background); all they can access is Excel, and they never see any other screens. Secondly, if that is possible, can it be done within a browser? Edit: I am building a system designed to test candidates' skills in Excel, and for this reason it needs to use the full desktop version and not a web app. I don't want to have to ensure Excel is installed on the client machine, as there would be issues with differing versions, and security issues because the workbook(s) used in the test rely heavily on VBA to customise and mark the exercises. Ideally my web app would open a session to the server that puts the user straight into an instance of Excel without ever seeing a desktop. I would also need to be able to pass in command-line parameters to define which workbook to open, and a unique token to identify the user.

  • How to create an rpm without a build step

    - by infra.user
    I'm trying to create an rpm of some code that doesn't need to be built. It just needs to run a script when it's installed on the destination system (i.e. I only really need the %install portion of the spec file). I've left both the %build and %configure sections of my rpm spec file empty, yet rpmbuild continues to try to execute ./configure with a bunch of parameters. Does anyone know how I can have rpmbuild create the rpm without trying to run ./configure? Thanks.
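
    A minimal sketch of a script-only spec (names and paths are placeholders; note that %build is present but deliberately empty, and the %configure macro does not appear anywhere):

        Name:      mypkg
        Version:   1.0
        Release:   1
        Summary:   Script-only package
        License:   GPL
        BuildArch: noarch

        %description
        Installs a script; nothing is compiled.

        %prep
        # intentionally empty

        %build
        # intentionally empty

        %install
        mkdir -p %{buildroot}/usr/local/bin
        install -m 0755 %{_sourcedir}/myscript.sh %{buildroot}/usr/local/bin/

        %files
        /usr/local/bin/myscript.sh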

  • Speeding up the fix of an OpenSSL bug with 8192-bit keys [on hold]

    - by rubo77
    This is related to this bug report: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=747453. OpenSSL contains a set of arbitrary limitations on the size of accepted key parameters that make unrelated software fail to establish secure connections. The problem was found while debugging an XMPP s2s connection issue where two servers with long certificate keys (8192-bit RSA) failed to establish a secure connection because OpenSSL rejected the handshake. This seems like a small problem to fix, but although an easy patch is available in that bug report, there has been no reaction so far. The last patch that broke the 2048-bit barrier took two years to be implemented and only resulted in an increase to 4096 bits, which seems like a bad joke. Where would we have to report this to speed up the fix for such an issue?

  • Skydrive unable to create directory

    - by blam3161
    I have upgraded a desktop PC from Windows 8 Pro to 8.1 Pro. In 8, I had SkyDrive for desktop and the SkyDrive Modern UI (MUI) app. In 8.1, file syncing seems to be OK (as far as I could test it), but I cannot start the SkyDrive app; it says "unable to display files on this PC". In addition, I cannot turn on "offline file access" in the settings tab; the option stays greyed out. Is there a way to fix it? Thanks.

  • How to extend a file definition from an existing module in the node?

    - by c33s
    I use an older version of the example42 mysql module, which defines the mysql.conf file but not its content. My goal is to just include the mysql module and add a content definition in the node:

        class mysql {
            ...
            file { "mysql.conf":
                path    => "${mysql::params::configfile}",
                mode    => "${mysql::params::configfile_mode}",
                owner   => "${mysql::params::configfile_owner}",
                group   => "${mysql::params::configfile_group}",
                ensure  => present,
                require => Package["mysql"],
                notify  => Service["mysql"],
            }
            ...
        }

        node xyz {
            include mysql
            File["mysql.conf"] { content => template("mymodule/mysql.conf.erb") }
        }

    The above code produces an "Only subclasses can override parameters" error. What is the correct way to just add a content definition to an existing file definition?
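
    As the error message hints, resource overrides are only allowed from a class that inherits the defining class; a sketch of that pattern (the wrapper class name is a placeholder):

        class mysql_custom inherits mysql {
            File["mysql.conf"] {
                content => template("mymodule/mysql.conf.erb"),
            }
        }

        node xyz {
            include mysql_custom
        }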

  • Lighttpd Rewrites & Blank page

    - by Stathis
    Hello, I have configured some lighttpd rewrites, one of which does not work. This is the line that fails and causes a blank (white) page to be served:

        url.rewrite-once = (
            ...
            "^/search/([^\/]+)*/([^\/]+)*/([0-9]+)$" => "search.php?t=$1&k=$2&p=$3",
            ...
        )

    Note that it is the only rule with three parameters; all the rest in the section have 0-2. I found this error in the lighttpd error.log:

        2011-01-07 17:13:09: (mod_rewrite.c.374) execution error while matching: -8

    Can someone help? Thanks.
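
    For what it's worth, PCRE return code -8 is PCRE_ERROR_MATCHLIMIT, meaning the regex engine hit its backtracking limit, and the nested quantifier ([^\/]+)* is a classic trigger for that. A hedged fix that keeps the same three captures:

        "^/search/([^/]+)/([^/]+)/([0-9]+)$" => "search.php?t=$1&k=$2&p=$3",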

  • In terms of load handling, which is better: one server or two of equivalent power?

    - by seldary
    My goal is to figure out whether I'm better off with one strong server or multiple weaker servers behind a load balancer. Does splitting the load between servers affect the total load my website can take? It's hard to isolate that, because many parameters affect the results, so some assumptions: putting failover considerations aside (I know it matters, but for the sake of the question's simplicity, let's assume nothing fails), the servers in the multiple-server option have an accumulated "power" equivalent to the single server (about the same number of cores and amount of RAM). If that is too theoretical, here is a concrete question that could help: suppose I have several instances of exactly the same server, call it S, and suppose server S can serve a load of up to X calls per time unit. Will two S servers behind a load balancer serve 2X calls per time unit? Significantly more? Significantly less?

  • We have a Solaris 9 server running Oracle 10G and have been getting memory consumption errors for a few weeks now

    - by another_netadmin
    We recently upgraded our enterprise application, and everything worked OK until one weekend when we rebooted the server; ever since then we have run into memory errors. The server has 4 GB of physical memory installed, and the kernel parameters in /etc/system are set to the following. I'm not an Oracle guy, so I'm not sure where to start looking, but any information is greatly appreciated. Thanks in advance. There are two databases running on this server: one production and one pre-production.

        [root@bandb /]# cat /etc/system | grep seminfo
        set semsys:seminfo_semmni=100
        set semsys:seminfo_semmns=2048
        set semsys:seminfo_semmsl=400
        set semsys:seminfo_semopm=100
        set semsys:seminfo_semvmx=32767
        [root@bandb /]# cat /etc/system | grep shminfo
        set shmsys:shminfo_shmmax=4294967295
        set shmsys:shminfo_shmmin=1
        set shmsys:shminfo_shmmni=100
        set shmsys:shminfo_shmseg=10
        [root@bandb /]#
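
    With shmmax set to the full 4 GB and two instances sharing the box, a reasonable first step (a sketch) is to see how much shared memory and swap the instances are actually consuming:

        ipcs -mb                 # shared memory segments and sizes (the SGAs)
        swap -s                  # swap allocation summary
        prtconf | grep Memory    # confirm installed physical memory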

  • What is a realistic average time difference between servers in the same LAN?

    - by monster
    Until recently, we had at work a small cluster of about 20 small Windows servers (which have now all been virtualized). They were all configured to synchronize with the local time server, on a 1 Gb sub-network in our own DC. I never got them to be less than about 100 ms away from each other, which I consider an incredibly big difference. Is that a normal value? What is a realistic expectation of the time difference between machines running on a 1 Gb network, all connected to the same time server and updating frequently, say every 5 minutes? I would like to know this because setting timeouts and other parameters in a distributed application requires taking that difference into consideration.
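
    For what it's worth, the default Windows Time service targets Kerberos' five-minute tolerance rather than millisecond accuracy, so ~100 ms is not unusual there. To measure the offset directly from a Windows machine, w32tm can sample the difference against a reference host (the hostname is a placeholder):

        w32tm /stripchart /computer:timeserver.local /samples:5 /dataonly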
