Search Results

Search found 13104 results on 525 pages for 'malcolm box'.


  • Compiz & Linux compositing: how does it fit into the X architecture?

    - by Latanius
    Not really a "how to solve stuff" question, but... I was wondering how the modern X architecture works, with Compiz and all. What I know about it:

      - In the beginning there was the X server; clients connected (presumably over TCP) and sent messages to the server to instruct it to show windows, etc.
      - Because this didn't work (at all? or just not fast enough?) for OpenGL and 3D acceleration, additional APIs were created for direct rendering (DRI? And, besides the X server, what do the X clients talk to in order to render things, and through which interfaces?).
      - Finally, enter Compiz: X clients end up (somehow) rendering to OpenGL textures, which are then put together to form a fancy-looking screen with translucent windows and rendered to the display.

    What I'm especially interested in is which components the system has and how they connect to each other. If there is a box labelled "compiz" in the system, is it inside the X server? If it isn't, how do the rendered images from the apps end up in it? And where does it render to? Another X server? Or DRI? Of course, I'd be equally happy to be pointed to docs capable of clearing up the confusion described above (provided they are significantly shorter than book-sized entities).

  • Which scripting language to use to asynchronously ssh into equipment, run several commands, parse the output, and save to a file on my computer?

    - by Fujin
    There are several points I'd like to stress in my question:

      - I'd like to log in by SSH'ing into our infrastructure equipment asynchronously. That is, I do not want to connect to one device, do all the tasks I need, disconnect, and then connect to the next device; I want to connect to several devices at once to make the process as fast as possible.
      - By equipment I mean 'infrastructure equipment', not servers. I say this because I will not have the luxury of saving files on the device and then transferring them to myself with scp or some other method: the output of the scripts has to be saved directly to my computer.
      - The output of the commands will need to be cleaned up and parsed, and I want the outputs from all the devices combined into one nice, neat file rather than a separate file per device.

    This will all be done from a Linux box, over SSH, into devices that all run Linux-ish proprietary OSes. My guess is the answer will be a Bash, Perl, or Python script, but I figured it wouldn't hurt to ask and to hear the reasons why one way is better than another. Thanks everyone. EXTRA CREDIT: with your answer, include links to resources that will help create the script I described in the language you suggested.
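
    A minimal Bash sketch of the idea (not from the original post; the hostnames, login and remote commands are placeholders, and key-based authentication is assumed): fan the SSH sessions out in parallel, tag and clean each device's output, then merge everything into one file.

        #!/bin/bash
        # Sketch: connect to every device at once, prefix each output line with
        # the device name, and combine the results into a single file.
        hosts="switch1 switch2 router1"          # placeholder device list
        outdir=$(mktemp -d)

        for h in $hosts; do
            # Each session runs in the background, so they all overlap.
            ssh -o BatchMode=yes "admin@$h" 'show version; show interfaces' \
                | sed "s/^/$h: /" \
                > "$outdir/$h.out" &
        done
        wait                                     # block until every session is done

        cat "$outdir"/*.out > all-devices.txt    # one combined, per-host tagged file
        rm -r "$outdir"

    Perl and Python follow the same pattern with more comfortable parsing (e.g. Net::OpenSSH::Parallel or paramiko), which tends to be the deciding factor once the cleanup is more involved than a sed prefix.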

  • All USB ports on my laptop are dead, any options via Ethernet, SD/MMC or HDMI?

    - by carbontracking
    My son's laptop has taken a lot of pain at his school over the last few months, and he and his buddies have succeeded in breaking both USB ports. I've opened the box, unsoldered the USB ports and replaced them with new components, but no joy - the ports seem dead. If I assume that the insertion of LEGO pieces, etc. into the USB ports has rendered them unsalvageable, do I have any other options for restoring USB access to the laptop? The laptop has an Ethernet port, an HDMI port and an SD/MMC port. I've trawled the web for a magic adapter, i.e. Ethernet-to-USB, HDMI-to-USB or SD/MMC-to-USB, but to no avail; lots of options for going the other way, though. Does anyone have any ideas on the feasibility of an Ethernet-to-USB cable? Ethernet doesn't seem to carry +5V or GND, but I could run a cable from the motherboard to provide those. Amazing how many functions of a laptop just disappear when you have no USB ports.

  • Easiest way to find out if user has either Windows 7 or Vista (through telephone support)?

    - by Rabarberski
    If you have to provide some initial troubleshooting support by phone (or email) and you don't have access to the PC itself, what is the easiest and most foolproof question to find out whether the 'dumb' user is running Windows 7 or Windows Vista? For example, determining whether the user has Windows XP or Windows Vista/7 is easy: just ask whether the button in the bottom-left corner is (a) square with the word 'Start' on it, or (b) a round button. But how do you tell the difference between Vista and 7? Edit: with all the existing answers the user has to type something, and type it correctly; sometimes even that is already hard for a computer-illiterate user. My XP example just requires looking. If it exists (although I am afraid it doesn't), a solution based purely on something that is visually different between Vista and 7 would stand above all the others (which makes Dan's suggestion to turn the box over and look at the label not so stupid). Perhaps the small 'show desktop' rectangle at the right side of the taskbar (was that present in Vista)?

  • IPtables and Remote Desktop with Proxy

    - by Sebastian
    So I set up a Windows Web Server 2008 R2 VM in VirtualBox, currently using a bridged network. I can remote desktop to the machine hosting the VM (10.0.0.183) but cannot remote desktop to the VM itself (10.0.0.195). The remote desktop port on the VM is set to 5003, and the VM is configured to accept remote connections (Windows side). We also use a proxy for our internet access, and I added these rules under NAT on our proxy box (CentOS 5):

        -A INPUT -p tcp --dport 3389 -j ACCEPT
        -A REROUTING -i ppp0 -p tcp --dport 3389 -j REDIRECT --to-port 5003
        -A FORWARD -d 10.0.0.195 --dport 5003 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT

    I've been trying for hours and hours and just cannot get it to work. I also used FreeDNS so that we can use a domain name to connect to this VM over the internet (the DNS points to our external IP address). If we don't get this right we will have to purchase a PPPoE connection from an ISP to connect to this VM remotely, but I know there is an alternative route if I can just get this port forwarding right!
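
    For reference, a hedged sketch of the shape such a forward usually takes (the IPs come from the question; "ppp0" is assumed to be the internet-facing interface and "eth0" the LAN-facing one, so adjust both): redirect-style rules normally live in the nat table's PREROUTING chain rather than a "REROUTING" chain.

        # Sketch only: forward inbound RDP (3389) arriving on ppp0 to the VM at
        # 10.0.0.195:5003, and let the kernel forward/NAT the traffic.
        iptables -t nat -A PREROUTING -i ppp0 -p tcp --dport 3389 \
                 -j DNAT --to-destination 10.0.0.195:5003
        iptables -A FORWARD -p tcp -d 10.0.0.195 --dport 5003 \
                 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE   # replies return via the proxy
        sysctl -w net.ipv4.ip_forward=1                        # forwarding must be enabled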

  • Is it possible to recover from the Windows 8 refresh feature?

    - by Warren P
    The intention of the Windows 8 RTM (released version) Refresh feature is to restore the system to the way it was when first installed. It didn't, though. Almost everything that came on the Start screen (not a menu any more) is gone: not just the third-party apps I installed, but everything other than the Internet Explorer icon, the Store icon and the Desktop was wiped. Out of the box, Windows 8 had a pretty large list of things installed, and it seems the Refresh feature wipes all of them out. Is it possible to really get the system back to a fresh-install state, or should I just re-install from the DVD I made? (I have access to Windows 8 RTM, legally, through the MS Action Pack subscription.) I suspect that if I create a new account I might get a new desktop with a default set of icons, but I'm hoping it might be possible to do this without using a different login. The second problem with Windows 8 Refresh is that it seems to trash the ACLs on my C: drive, taking away all permissions and making nothing on the drive visible or readable to my primary and only login account. I believe it might be possible to undo that damage with some judicious use of icacls from the command prompt.

  • Windows Vista Context Menu>New... does not find entries

    - by Paul
    I was trying to remove a virus and foolishly did not back up the registry keys I deleted, because I thought I had only deleted entries belonging to programs I did not care about. However, I think I have done something wrong here: now when I open a context menu (right click) in any location and hover over the "New..." option I don't get any options, just a greyed-out box saying "(Empty)". So far I have found out that the entries themselves are still there (using the locations provided here: Windows 7 - Add an item to 'new' context menu). I have also used a program recommended in that thread, which also finds the entries intact and enabled. So it seems I may have deleted the entry that tells Vista where to look for the list of files that can be created. How can I restore this so the entries are shown again? I know System Restore is an option, but as I said, I did this while removing a (very stubborn) virus, so that is the last resort.

  • TGT validation fails, but only for one user

    - by wzzrd
    I'm seeing the weirdest thing here. I have a couple of RHEL 3, 4 and 5 machines that validate user credentials through Kerberos with an Active Directory domain controller as their KDC. This works for all of my users, save one. There is one account that is unable to log into the RHEL 3 Linux machines and generates the following errors there:

        May 31 13:53:19 mybox sshd(pam_unix)[7186]: authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.0.0.1 user=user
        May 31 13:53:20 mybox sshd[7186]: pam_krb5: TGT verification failed for `user'
        May 31 13:53:20 mybox sshd[7186]: pam_krb5: authentication fails for `user'

    Other accounts, like my own, are fine:

        May 31 17:25:30 mybox sshd(pam_unix)[12913]: authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.0.0.1 user=myuser
        May 31 17:25:31 mybox sshd[12913]: pam_krb5: TGT for myuser successfully verified
        May 31 17:25:31 mybox sshd[12913]: pam_krb5: authentication succeeds for `myuser'
        May 31 17:25:31 mybox sshd(pam_unix)[12915]: session opened for user myuser by (uid=0)

    As you can see, TGT validation fails, and only for this specific account, not for any other. The failing account's password has been reset, and I inspected both user objects in Active Directory, but I see nothing out of the ordinary. If I have the failing account log into a RHEL 4 or 5 box there is no problem, so it must be RHEL 3 specific; but the fact that only one account suffers from this eludes me. Has anyone seen this before?
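
    A hedged way to narrow this down from the RHEL 3 box itself (realm and user names below are placeholders): request tickets for the failing account and for a working one by hand and compare the encryption types, and check that the host keytab used for TGT verification looks sane. Enctype differences between accounts and stale host keys are worth ruling out with the old pam_krb5.

        kinit baduser@EXAMPLE.COM     # the account that fails
        klist -e                      # note the ticket's encryption types
        kdestroy

        kinit gooduser@EXAMPLE.COM    # an account that works
        klist -e
        kdestroy

        klist -ke /etc/krb5.keytab    # host keys pam_krb5 can verify the TGT against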

  • Could an external hard drive case with a built-in RAID controller have destroyed the partitioning scheme/filesystem of my HDDs?

    - by th3m3s
    I recently bought a Fantec QB-35US3R to have a nice box on my desk to make backups to. Along with the HDD bay I ordered some 4 TB HDDs to run in RAID 5, handled by the hardware RAID controller of the Fantec bay. The QB-35US3R arrived a few days before the hard drives, so I got impatient and had the idea of putting three old 1 TB disks into the Fantec device, just to test it... Long story short: I had made a backup of the most important data on these three disks before they broke. I had set the configuration scheme on the Fantec device to RAID 3. It seems the Fantec RAID controller has "somehow" destroyed the partitioning scheme or the file system, because when the disks are put into an HDD docking station they get recognized by the OS (Ubuntu/Linux) but are no longer mountable. I tried to recover the data from one HDD via gParted (parted), which ran for some hours without success. I stopped there, before trying other tools, because I read that the longer a hard drive keeps running after its partitioning is destroyed, the worse it gets. What could the HDD bay have done to my hard drives? Is there some routine a RAID controller executes when it creates a RAID system, like erasing the partition table (which seems implausible to me) or writing some information to every hard drive in the RAID (which seems more likely)? Is there a chance to recover the data from these HDDs, or is the change a RAID controller makes so significant that no software can help?
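
    A hedged recovery sketch (the device name and file names are placeholders; it assumes GNU ddrescue and testdisk are available): since each disk held its own independent filesystem before going into the bay, working on a read-only image of one disk is the safest next step.

        ddrescue -n /dev/sdX disk.img disk.map   # copy the raw disk once, read-only
        testdisk disk.img                        # interactive search for the lost partition/filesystem
        # photorec (same package as testdisk) can carve files out of the image
        # even if no partition table can be rebuilt.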

  • Windows 7, network transmit (send) not working

    - by user326287
    My Windows 7 box worked for two years without problems, but now it can't transmit (send) large amounts of data over the LAN/Internet.

    I can:

      - ping anything
      - browse the Internet and download files at full speed
      - send e-mails with very small attachments
      - measure a stable, full-speed download on Speedtest.net

    I can't:

      - run the upload test on Speedtest.net (the upload gets stuck)
      - save/send e-mail messages with a big (128 KB) attachment, regardless of the e-mail provider or mailbox

    This is NOT a hardware/cable/card or other network device problem: when I boot from a Linux live CD, without ANY hardware change, all data sending and testing works correctly, at full speed.

    I have already tried the following in Windows 7:

      - disabling the Windows/3rd-party firewall completely
      - resetting the IP stack parameters (netsh int ip reset c:\resetlog.txt)
      - System Restore
      - reinstalling the LAN driver

    When I inspect the packets in Wireshark on Windows, I see lots of (maybe 60% of sent packets) "TCP Retransmission", and sometimes "TCP Dup Ack" or "TCP Out-of-Order". Linux doesn't do this. Thank you for the help.

  • How can a web page change my mouse speed?

    - by Tekaholic
    I usually have many tabs open in Firefox, and I haven't been able to find one specific website that causes this because I don't seem to notice it right away: I go to click on something on my desktop and find myself lifting the mouse several times just to get across the screen. It doesn't seem to matter what program I might be using, because this happens on all desktops and in Firefox too. So I go into my settings and turn the mouse speed all the way up, and it's still not really acceptable. It doesn't matter if I click on different tabs, but when I close the browser my mouse is way too sensitive, as I'd expect at the max setting. Then I go back to Control Center and return my mouse speed and acceleration to normal, and when I restart the browser the mouse stays normal. So is there something to this, before I start wasting my time hunting through my history to discover which website or sites are having this effect? And if it is a specific site and I locate it, what can I change to stop its effect on my mouse besides not visiting it? I am using Linux Mint 13 on a box with an AMD Athlon processor and 2 GB of RAM. I never installed another browser because everything works for me.

  • How to export VARs from a subshell to a parent shell?

    - by webwesen
    I have a Korn shell script:

        #!/bin/ksh
        # set the right ENV
        case $INPUT in
            abc) export BIN=${ABC_BIN} ;;
            def) export BIN=${DEF_BIN} ;;
            *)   export BIN=${BASE_BIN} ;;
        esac
        # exit 0 <- bad idea for sourcing the file

    Right now these VARs are exported only in a subshell, but I want them to be set in my parent shell as well, so that when I am back at the prompt those vars are still set correctly. I know about ". .myscript.sh", but is there a way to do it without sourcing? My users often forget to source.

    EDIT 1: removed the "exit 0" part - this was just me typing without thinking first.

    EDIT 2: more detail on why I need this. My developers write code for (for simplicity's sake) two apps: ABC and DEF. Each app is run in production by a separate user, usrabc and usrdef, which therefore has its own $BIN, $CFG, $ORA_HOME and so on, specific to its app. So:

        ABC's $BIN = /opt/abc/bin   # $ABC_BIN in the above script
        DEF's $BIN = /opt/def/bin   # $DEF_BIN

    Now, on the dev box the developers can work on both ABC and DEF at the same time under their own user account 'justin_case', and I make them source the file above so that they can switch their ENV var settings back and forth ($BIN should point to $ABC_BIN at one moment, and then I need to switch to $BIN=$DEF_BIN). The script should also create new sandboxes for parallel development of the same app, which makes me do it interactively, asking for a sandbox name, etc.:

        /home/justin_case/sandbox_abc_beta2
        /home/justin_case/sandbox_abc_r1
        /home/justin_case/sandbox_def_r1

    The other option I have considered is writing aliases and adding them to every user's profile:

        alias 'setup_env=. .myscript.sh'

    and running it with "setup_env parameter1 ... parameterX". That makes more sense to me now.
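
    A hedged sketch of the two usual workarounds (the function name and paths are made up): a child process can never modify its parent's environment, so whatever sets the variables has to run inside the user's own shell.

        # Option 1: ship a wrapper function in each user's profile (or in
        # /etc/profile.d/), so "setup_env abc" always sources in the current shell:
        setup_env() {
            . /opt/envscripts/myscript.sh "$@"
        }

        # Option 2: have the script print export statements and eval its output;
        # the script would write lines such as:  export BIN=/opt/abc/bin
        eval "$(/opt/envscripts/myscript.sh abc)"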

  • Performance of external USB disk with ESXi5

    - by PeterMmm
    I have a new HP DL120 G7 server with ESXi 5. One VM is a Windows 2003 installation, and I have an external USB 2.0 drive attached via USB Controller and USB Device. Copying a 4 GB file from the external USB drive to the server disk takes up to 10 minutes in the VM; on a native Windows 2003 box it takes approx. 3 minutes. I have no explanation for that difference: in either case the bottleneck is the USB connection, which is much slower than the disks (SAS, RAID 1). If the USB connection in the VM were USB 1.1 rather than USB 2.0, it would take much more time. (The disk performance between server partitions on the VM is fine - see update.) Could it be that my native box is extremely fast and the VM is the normal case?

    Update: I tried with passthrough, and a first run copied the same data in approx. 7 minutes - still more than twice as slow as the native connection. I also did another measurement: a copy between partitions on the same VM takes 3 minutes.

  • Adding user groups from a remote domain server to permissions of a remote desktop terminal server fails. Why?

    - by doveyg
    I have three computers: two are servers running Windows Server 2008 and one runs Windows 7. One of the servers has the following roles installed: Active Directory, DHCP and DNS. The other server has the Terminal Server role installed. I am trying to log on to the Terminal Server via Remote Desktop from the Windows 7 machine with credentials from the Active Directory server. Sounds simple enough, right? Well, no. Whenever I try to add users or groups from the Active Directory domain server to the Terminal Server's permissions for RDP, it seems to ignore, or forget, them. With the various methods I was able to find, it either adds a strange string of numbers after the user group, or the icon to the left gets a question mark on it, and reopening the dialogue box replaces the user group with the name of the domain. I am confident I have the domain set up correctly, as I am able to log on as Active Directory users from other computers I have put in the domain, and when I attempt to browse the user objects from the domain I am prompted for a username and password and am able to view the structure of the Active Directory objects. Please advise.

  • Executing a git command using remote PowerShell results in a NativeCommandError

    - by user204777
    I am getting an error while executing a remote PowerShell script. From my local machine I am running a PowerShell script that uses Invoke-Command to cd into a directory on a remote Amazon Windows Server instance, and a subsequent Invoke-Command to execute a script that lives on that server instance. The script on the server is trying to git clone a repository from GitHub. I can successfully do things in the server script like "ls" or even "git --version". However git clone, git pull, etc. result in the following error:

        Cloning into 'MyRepo'...
            + CategoryInfo          : NotSpecified: (Cloning into 'MyRepo'...:String) [], RemoteException
            + FullyQualifiedErrorId : NativeCommandError

    This is my first time using PowerShell or a Windows Server. Can anyone provide some direction on this problem?

    The client script:

        $s = new-pssession -computername $server -credential $user
        invoke-command -session $s -scriptblock { cd C:\Repos; ls }
        invoke-command -session $s -scriptblock { param ($repo, $branch) & '.\clone.ps1' -repository $repo -branch $branch } -ArgumentList $repository, $branch
        exit-pssession

    The server script:

        param([string]$repository = "repository", [string]$branch = "branch")
        git --version
        start-process -FilePath git -ArgumentList ("clone", "-b $branch https://github.com/MyGithub/$repository.git") -Wait

    I've changed the server script to use Start-Process and it no longer throws the exception. It creates the new repository directory and the .git directory, but doesn't write any of the files from the GitHub repository. This smells like a permissions issue. Once again, invoking the script manually (remote desktop into the Amazon box and executing it from PowerShell) works like a charm.

  • User's failed-login count does not reset after a successful login at the console, but works fine with SSH

    - by jnbbender
    The title says it all. I have my unsuccessful login attempts limit set to three. I purposefully fail logging in twice, and when I then SSH into the box successfully on the third attempt my count drops back to zero; exactly what should happen. But at the console I get failed login attempts recorded EVEN for my successful logins. I am using RHEL 5.6 and no, I am not able to upgrade. Here is my system-auth file:

        auth        required      pam_env.so
        auth        required      pam_tally.so onerr=fail deny=3 per_user
        auth        sufficient    pam_unix.so nullok try_first_pass
        auth        requisite     pam_succeed_if.so uid >= 500 quiet
        auth        required      pam_deny.so
        account     required      pam_unix.so
        account     required      pam_tally.so
        account     sufficient    pam_succeed_if.so uid < 500 quiet
        account     required      pam_permit.so
        password    requisite     pam_cracklib.so try_first_pass retry=3
        password    sufficient    pam_unix.so md5 shadow nullok try_first_pass use_authtok
        password    required      pam_deny.so
        session     optional      pam_keyinit.co revoke
        session     required      pam_limits.so
        session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
        session     required      pam_unix.so

    I have tried adding "reset" after and in place of "per_user" on the "auth required pam_tally.so" line. Nothing seems to work, and I don't know why SSH is working just fine. Any ideas?
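
    A hedged way to watch the counter while testing (the user name is a placeholder, and it assumes the pam_tally helper that ships with RHEL 5's pam package): check the tally right after a console login and right after an SSH login to see which service increments it without resetting it.

        pam_tally --user jnbbender            # show the current failure count
        pam_tally --user jnbbender --reset    # clear it manually
        faillog -u jnbbender                  # the same counter via the faillog tool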

  • PPTP VPN on Server 2008 Enterprise

    - by Mike K
    I asked this question on Server Fault and was told that it was not allowed there, so I'm moving it here. I am running Windows Server 2008 Enterprise in my HOME network inside of VMware Workstation, in order to set up a PPTP VPN connection at home. I have correctly set up everything I needed to make it work, including opening all the ports: 1723 and 43 (GRE). I am able to connect just fine, but when I connect I don't have internet unless I uncheck "use remote gateway". The thing is, I want to use the remote gateway to route all my traffic through that connection. Can someone tell me why this isn't working and how to get it to work? When I have the remote gateway checked and I run ipconfig, I don't get a remote gateway for the VPN connection; it's 0.0.0.0, when I'd assume that if connected properly it should be 192.168.1.254 (my AT&T home router). Also, if I can't get the remote gateway issue to work and I have to uncheck that box to get internet, does this mean my VPN session is no longer encrypted? I am fully aware that PPTP is the weakest VPN encryption out there, but still having that extra layer of security when I'm on an unsecured Wi-Fi connection makes me feel a bit better. Thank you for all your help in advance. Someone told me I need to have a gateway or router configured on the server. If that's the case, how do I go about telling the remote co

  • Nginx vhost configuration

    - by user101494
    I am attempting to set up a new server with Nginx 1.0.10 on Debian 6. The config below works perfectly on a server with nginx 0.8.36 on Ubuntu 10.04.3, but not on the new box. The desired result is to:

      - redirect non-www requests on the TLD to www, but not subdomains
      - use the folder structure /var/www/[domain]/htdocs and /var/www/[domain]/subdomains/[subdomain]/htdocs
      - serve files for any host for which files exist in this structure

    On the new server, domains are matching correctly but subdomains are matching to /var/www/[subdomain].[domain]/htdocs, not /var/www/[domain]/subdomains/[subdomain]/htdocs.

        server {
            listen 80;
            server_name _________ ~^[^.]+\.[^.]+$;
            rewrite ^(.*)$ $scheme://www.$host$1 permanent;
        }

        server {
            listen 80;
            server_name _ ~^www\.(?<domain>.+)$;
            server_name_in_redirect off;

            location / {
                root /var/www/$domain/htdocs;
                index index.html index.htm index.php;
                fastcgi_index index.php;
            }

            location ~ \.php$ {
                include /etc/nginx/fastcgi_params;
                keepalive_timeout 0;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }

            location ~ /\.ht { deny all; }
        }

        server {
            listen 80;
            server_name __ ~^(?<subdomain>\.)?(?<domain>.+)$$;
            server_name_in_redirect off;

            location / {
                root /var/www/$domain/subdomains/$subdomain/htdocs;
                index index.html index.htm index.php;
                fastcgi_index index.php;
            }

            location ~ \.php$ {
                include /etc/nginx/fastcgi_params;
                keepalive_timeout 0;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }

            location ~ /\.ht { deny all; }
        }

  • Reasons for Ajax navigation breaking on Coldfusion/Apache when running an app in iOS fullscreen mode? [closed]

    - by frequent
    Not sure if this belongs on SO or here. I'm running a web app using jQuery, jQuery Mobile and RequireJS, with Apache, ColdFusion 8 and MySQL 5.0.88 on the server side. The app works fine until I try to run it in fullscreen mode on iOS (add the icon to the home screen and launch the app from there, with <meta name="apple-mobile-web-app-capable" content="yes" /> specified). This meta tag breaks the jQuery Mobile Ajax navigation: the Ajax request fails and the requested page is loaded as a new page, thereby restarting the app on every page change. I have chased this through the whole front end, starting from RequireJS, through jQuery Mobile's Ajax navigation, down to the Ajax request being made in jQuery:

        xhr.send( ( s.hasContent && s.data ) || null );

    In a regular browser this works fine. In fullscreen mode it fails (readyState = 0, empty response). I have found this article, which argues that fullscreen mode behaves like a separate browser instance with different HTTP strings; on ASP.NET this results in the browser not being identified by the server and only basic browser capabilities being assumed (e.g. no JavaScript). I'm a little lost as to where to start looking for possible causes server-side. I have not written any server code for handling Ajax page navigation, so this must be something handled out of the box by ColdFusion or Apache? Question: where could I start looking for probable causes of fullscreen mode breaking Ajax navigation, if I assume ColdFusion or Apache are the culprits? Is there a setting I'm missing in httpd.conf? What else could be the problem? Thanks for any input!

  • Private subnet for VM server host-only network

    - by Derek Pressnall
    At my current job, we distribute a product based on a Linux server with multiple VMs defined (using KVM / libvirt). We are planning to expose limited ports to the customer's network, and use iptables to direct inbound traffic to the appropriate internal VM. My question: is there a class of private subnets that I can use for the internal host-only network that is least likely to conflict with a client IP subnet? Specifically, if I choose a /24 out of any of the RFC-1918 defined private subnets (such as 192.168.x.x), there is a chance of conflicting with a customer-used range. I noticed that several current VM implementations default to 192.168.122.x -- is this due to an RFC that I'm not familiar with, and therefore this is a safe range to use (that most network admins would avoid)? Or did the various VM vendors just pick that range randomly? I guess I'm looking for an IP range that is more private than the existing private (RFC1918) addresses. The only other thought I had was to use one of the "Test Net" IP ranges reserved for documentation purposes (RFC 5737). Note, that I'm not worried about a customer's network blocking these IPs, as this is only internal to our server (packets get NATted before leaving the box). However this does seem more unorthodox than just sticking with the default 192.168.122.x/24 subnet.

  • Where are the default ulimit values set? (Linux, CentOS)

    - by nomercysir
    I have two CentOS (5) servers with nearly identical specs. When I log in and run:

        ulimit -u

    on one machine I get "unlimited", and on the other, 77824. When I run a cron job like:

        * * * * * ulimit -u > ulimit.txt

    I get the same results (unlimited, 77824). I am trying to determine where these are set so that I can alter them. They are not set in any of my profiles (.bashrc, /etc/profile, etc.; those wouldn't affect cron anyway) nor in /etc/security/limits.conf (which is empty). I have scoured Google and even gone so far as to run "grep -Ir 77824 /", but nothing has turned up so far. I don't understand how these machines could have come preset with different limits.

    I am actually wondering not for these machines, but for a different (CentOS 6) machine, which has a limit of 1024 - far too small. I need to run cron jobs with a higher limit, and the only way I know how to set that is in the cron job itself. That's OK, but I'd rather set it system-wide so it's not as hacky. Thanks for any help. This seems like it should be easy (NOT).

    EDIT - SOLVED: OK, I figured this out. It seems to be an issue either with CentOS 6 or perhaps with my machine configuration. On the CentOS 5 configuration, I can set the following in /etc/security/limits.conf:

        * - nproc unlimited

    and that effectively updates the account and cron limits. However, this does not work on my CentOS 6 box. Instead, I must do:

        myname1 - nproc unlimited
        myname2 - nproc unlimited
        ...

    and then things work as expected. Maybe the UID specification works too, but the wildcard (*) definitely does NOT here. Oddly, wildcards DO work for the 'nofile' limit. I still would love to know where the default values are actually coming from, because by default this file is empty, and I couldn't see why I had different defaults on the two CentOS 5 boxes, which had identical hardware and were from the same provider.
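
    A hedged pointer for the CentOS 6 case (the file name below is the one stock CentOS 6 ships): per-user limits can also come from drop-in files under /etc/security/limits.d/, which would explain a 1024 nproc cap despite an empty limits.conf; limits that nothing sets explicitly are inherited from init, whose nproc default the kernel derives from installed RAM at boot, which may be why the numeric defaults differ between boxes.

        cat /etc/security/limits.d/90-nproc.conf   # stock file capping nproc on CentOS 6
        grep -r nproc /etc/security/limits.d/      # any other drop-ins overriding limits.conf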

  • Allow more websocket connections

    - by Switz
    I want to load balance my Node.js (DerbyJS, to be specific) application on a basic Linode (512 MB RAM). It can probably take more than one process running at once. The queries/database do not concern me, as I'm not doing anything intensive. The problem at the moment is that it can only handle up to ~40 websocket connections at once; I would love to get that number into the few-hundred-plus range. I anticipate a lot of traffic at launch because it's a highly niche community with an engaged audience, but afterwards it should be fine with just ~20-40 connections at once, which it handles perfectly as of now. I don't mind spending a bit of money for a week or two worth of running, but I also don't want to switch production environments. How can I test the process to see how many instances I am able to run on the box? Will increasing the number of processes increase the number of websockets I can handle, or is that a limitation of the server's network? I have an old MacBook Pro running Linux sitting next to me that has 2 GB of RAM and a 2.8 GHz dual-core processor. Could I use it to handle some of the extra load? I could probably load balance with nginx to its IP; I'm on a FiOS home network. If you have any suggestions, I'd really appreciate it. Thanks
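
    A hedged first check before any load testing (the port and process match below are placeholders): make sure the ~40-connection ceiling isn't simply a per-process file descriptor limit, and watch the live connection count while clients pile on.

        ulimit -n                                                   # fd limit a new shell would inherit
        grep 'open files' /proc/$(pgrep -f node | head -1)/limits   # fd limit of the running node process
        ss -t state established '( sport = :80 )' | wc -l           # rough count of current connections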

  • Deploying Memcached as 32bit or 64bit?

    - by rlotun
    I'm curious about how people deploy memcached on 64-bit machines. Do you compile a 64-bit (standard) memcached binary and run that, or do people compile it in 32-bit mode and run N instances (where N = machine_RAM / 4GB)? Consider a recommended deployment of Redis (from the Redis FAQ):

        "Redis uses a lot more memory when compiled for 64 bit target, especially if the dataset is composed of many small keys and values. Such a database will, for instance, consume 50 MB of RAM when compiled for the 32 bit target, and 80 MB for 64 bit! That's a big difference. You can run 32 bit Redis binaries in a 64 bit Linux and Mac OS X system without problems. For OS X just use make 32bit. For Linux instead, make sure you have libc6-dev-i386 installed, then use make 32bit if you are using the latest Git version. Instead for Redis <= 1.2.2 you have to edit the Makefile and replace "-arch i386" with "-m32". If your application is already able to perform application-level sharding, it is very advisable to run N instances of Redis 32bit against a big 64 bit Redis box (with more than 4GB of RAM) instead than a single 64 bit instance, as this is much more memory efficient."

    Would not the same recommendation also apply to a memcached cluster?
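
    For what it's worth, a sketch of the multiple-instance layout the question describes (ports, sizes and the user are illustrative; whether each binary is a 32-bit or 64-bit build is a separate decision):

        # Three instances on one box, each capped well below 4 GB, with the
        # client library hashing keys across the three ports.
        memcached -d -u memcache -m 3072 -p 11211
        memcached -d -u memcache -m 3072 -p 11212
        memcached -d -u memcache -m 3072 -p 11213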

  • Linux RHEL: Making a disk image efficiently

    - by TheProfoundGeek
    I have a Linux box running RHEL. Its disk (hda1) has about 25 GB of free space. I have another disk (hda2), which is 250 GB, partitioned for 200 GB, and holding another RHEL instance; the data on it occupies about 21 GB. An image of hda2 needs to be taken and restored onto another disk with the same specs. What is the best way to make an image file of hda2? Ideally the image should be around 25 GB, since the actual data on the disk is just 21 GB. I am aware of the following two methods.

    Method 1: raw image

        dd if=/dev/hda2 of=/path/to/image
        dd if=/path/to/image of=/dev/hda3

    Question 1: Will the above method make a gigantic image of 250 GB? Is it efficient?

    Method 2: compressed image

        dd if=/dev/hda2 | gzip > /path/to/image.gz
        gzip -dc /path/to/image.gz | dd of=/dev/hda2

    Question 2: I tried method 2 and it's taking too long. What are the pitfalls of these methods? Which of the two is more efficient, and why? Is there any other Linux utility that can do the job? Third-party tools are a no-no.
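
    A hedged sketch that stays within plain dd and gzip (the mount point and paths are placeholders): zeroing the unused blocks first lets the compressed image shrink to roughly the size of the real data, and a larger block size plus a light compression level speeds both methods up considerably.

        # Fill the free space with zeros (run while hda2 is mounted, e.g. at /mnt/hda2);
        # dd stops with "No space left on device", which is expected here.
        dd if=/dev/zero of=/mnt/hda2/zerofill bs=1M; rm /mnt/hda2/zerofill

        # Compressed image: the zeroed free space compresses to almost nothing.
        dd if=/dev/hda2 bs=1M | gzip -1 > /path/to/image.gz

        # Restore onto the target disk:
        gzip -dc /path/to/image.gz | dd of=/dev/hda3 bs=1M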

  • NFS server hangs after 3 minutes

    - by John P
    I have a VPS running CentOS 6.3 with a fully updated NFS (nfs-utils-1.2.3-26.el6.x86_64). When I mount the NFS directory from the client, everything works perfectly for approximately 3 minutes, then the client hangs when trying to see the directory.

        # service nfs status
        rpc.svcgssd is stopped
        rpc.mountd (pid 2544) is running...
        nfsd (pid 2609 2608 2607 2606 2605 2604 2603 2602) is running...
        rpc.rquotad (pid 2540) is running...

        # cat /etc/exports
        /home/user XX.XX.XX.20(rw,async,no_root_squash)

    The client is running CentOS 5.8, and the directory is mounted using:

        mount x.x.x.6:/home/user /mnt

    When everything is working, I get the following on the client:

        /usr/sbin/rpcinfo -p X.X.X.6 | grep mountd
        100005    1   udp    892  mountd
        100005    1   tcp    892  mountd
        100005    2   udp    892  mountd
        100005    2   tcp    892  mountd
        100005    3   udp    892  mountd
        100005    3   tcp    892  mountd

    When it stops working, rpcinfo just hangs on the client; however, running the same command on the server does return data. There are no logs on the NFS server side that would indicate an issue. On the client side, /var/log/messages shows:

        kernel: nfs: server X.X.X.6 not responding, still trying

    The client and server are plugged into the same switch, but they are on different networks. The server is a VPS while the client is a dedicated box. SELinux is in permissive mode on both client and server, and I've turned iptables off on the server to make sure that wasn't causing the issue. Any ideas would be helpful - right now I'm restarting NFS every two minutes in a cron job to keep it semi-working. Thanks
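
    A hedged set of things to try from the client side (the interface name and mount options are assumptions, not from the thread): pin the mount to TCP with hard retries, and capture traffic on both ends while it hangs to see where the packets stop.

        mount -o tcp,nfsvers=3,hard,intr,timeo=600,retrans=2 x.x.x.6:/home/user /mnt

        # While it is hung:
        tcpdump -ni eth0 host XX.XX.XX.20 and port 2049   # on the server: do requests still arrive?
        rpcinfo -t x.x.x.6 nfs 3                          # on the client: is NFS still reachable over TCP?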
