Search Results

Search found 8389 results on 336 pages for 'shared calendar'.


  • Page allocation failures on iSCSI storage

    - by Dave
    We have a CentOS 6.3 iscsi server (16GB RAM) running on Infiniband bus (ipoib). When the load is high I can see multiple errors: Sep 3 23:22:20 stor4 kernel: tgtd: page allocation failure. order:2, mode:0x20 Sep 3 23:22:20 stor4 kernel: Pid: 3637, comm: tgtd Not tainted 2.6.32 #1 Sep 3 23:22:20 stor4 kernel: Call Trace: Sep 3 23:22:20 stor4 kernel: [] ? __alloc_pages_nodemask+0x77f/0x940 Sep 3 23:22:20 stor4 kernel: [] ? kmem_getpages+0x62/0x170 Sep 3 23:22:20 stor4 kernel: [] ? fallback_alloc+0x1ba/0x270 Sep 3 23:22:20 stor4 kernel: [] ? cache_grow+0x2cf/0x320 Sep 3 23:22:20 stor4 kernel: [] ? ____cache_alloc_node+0x99/0x160 Sep 3 23:22:20 stor4 kernel: [] ? pskb_expand_head+0x64/0x270 Sep 3 23:22:20 stor4 kernel: [] ? __kmalloc+0x189/0x220 Sep 3 23:22:20 stor4 kernel: [] ? pskb_expand_head+0x64/0x270 Sep 3 23:22:20 stor4 kernel: [] ? __pskb_pull_tail+0x2aa/0x360 Sep 3 23:22:20 stor4 kernel: [] ? tcp_init_tso_segs+0x37/0x50 Sep 3 23:22:20 stor4 kernel: [] ? dev_queue_xmit+0x4bb/0x6f0 Sep 3 23:22:20 stor4 kernel: [] ? neigh_connected_output+0xbd/0x100 Sep 3 23:22:20 stor4 kernel: [] ? ip_finish_output+0x237/0x310 Sep 3 23:22:20 stor4 kernel: [] ? ip_output+0xb8/0xc0 Sep 3 23:22:20 stor4 kernel: [] ? __ip_local_out+0x9f/0xb0 Sep 3 23:22:20 stor4 kernel: [] ? ip_local_out+0x25/0x30 Sep 3 23:22:20 stor4 kernel: [] ? ip_queue_xmit+0x190/0x420 Sep 3 23:22:20 stor4 kernel: [] ? sock_aio_write+0x167/0x180 Sep 3 23:22:20 stor4 kernel: [] ? tcp_transmit_skb+0x3fe/0x7b0 Sep 3 23:22:20 stor4 kernel: [] ? tcp_write_xmit+0x1fb/0xa20 Sep 3 23:22:20 stor4 kernel: [] ? __tcp_push_pending_frames+0x30/0xe0 Sep 3 23:22:20 stor4 kernel: [] ? tcp_push_pending_frames+0x33/0x40 Sep 3 23:22:20 stor4 kernel: [] ? do_tcp_setsockopt+0x3d6/0x480 Sep 3 23:22:20 stor4 kernel: [] ? tcp_setsockopt+0x2a/0x30 Sep 3 23:22:20 stor4 kernel: [] ? sock_common_setsockopt+0x14/0x20 Sep 3 23:22:20 stor4 kernel: [] ? sys_setsockopt+0x7f/0xe0 Sep 3 23:22:20 stor4 kernel: [] ? 
system_call_fastpath+0x16/0x1b Sep 3 23:22:20 stor4 kernel: Mem-Info: Sep 3 23:22:20 stor4 kernel: Node 0 DMA per-cpu: Sep 3 23:22:20 stor4 kernel: CPU 0: hi: 0, btch: 1 usd: 0 Sep 3 23:22:20 stor4 kernel: CPU 1: hi: 0, btch: 1 usd: 0 Sep 3 23:22:20 stor4 kernel: CPU 2: hi: 0, btch: 1 usd: 0 Sep 3 23:22:20 stor4 kernel: CPU 3: hi: 0, btch: 1 usd: 0 Sep 3 23:22:20 stor4 kernel: Node 0 DMA32 per-cpu: Sep 3 23:22:20 stor4 kernel: CPU 0: hi: 186, btch: 31 usd: 183 Sep 3 23:22:20 stor4 kernel: CPU 1: hi: 186, btch: 31 usd: 23 Sep 3 23:22:20 stor4 kernel: CPU 2: hi: 186, btch: 31 usd: 183 Sep 3 23:22:20 stor4 kernel: CPU 3: hi: 186, btch: 31 usd: 181 Sep 3 23:22:20 stor4 kernel: Node 0 Normal per-cpu: Sep 3 23:22:20 stor4 kernel: CPU 0: hi: 186, btch: 31 usd: 171 Sep 3 23:22:20 stor4 kernel: CPU 1: hi: 186, btch: 31 usd: 29 Sep 3 23:22:20 stor4 kernel: CPU 2: hi: 186, btch: 31 usd: 32 Sep 3 23:22:20 stor4 kernel: CPU 3: hi: 186, btch: 31 usd: 32 Sep 3 23:22:20 stor4 kernel: active_anon:1875 inactive_anon:2473 isolated_anon:0 Sep 3 23:22:20 stor4 kernel: active_file:1243637 inactive_file:2505055 isolated_file:0 Sep 3 23:22:20 stor4 kernel: unevictable:0 dirty:268338 writeback:0 unstable:0 Sep 3 23:22:20 stor4 kernel: free:86050 slab_reclaimable:132377 slab_unreclaimable:23744 Sep 3 23:22:20 stor4 kernel: mapped:1293 shmem:222 pagetables:720 bounce:0 Sep 3 23:22:20 stor4 kernel: Node 0 DMA free:15732kB min:124kB low:152kB high:184kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15332kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes Sep 3 23:22:20 stor4 kernel: lowmem_reserve[]: 0 2172 16060 16060 Sep 3 23:22:20 stor4 kernel: Node 0 DMA32 free:107544kB min:18268kB low:22832kB high:27400kB active_anon:468kB inactive_anon:2364kB active_file:566208kB inactive_file:976112kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:2224900kB mlocked:0kB dirty:96816kB writeback:0kB mapped:908kB shmem:12kB slab_reclaimable:176940kB slab_unreclaimable:968kB kernel_stack:64kB pagetables:192kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no Sep 3 23:22:20 stor4 kernel: lowmem_reserve[]: 0 0 13887 13887 Sep 3 23:22:20 stor4 kernel: Node 0 Normal free:220924kB min:116772kB low:145964kB high:175156kB active_anon:7032kB inactive_anon:7528kB active_file:4408340kB inactive_file:9044108kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:14220800kB mlocked:0kB dirty:976536kB writeback:0kB mapped:4264kB shmem:876kB slab_reclaimable:352568kB slab_unreclaimable:94008kB kernel_stack:2048kB pagetables:2688kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? 
    no
    Sep 3 23:22:20 stor4 kernel: lowmem_reserve[]: 0 0 0 0
    Sep 3 23:22:20 stor4 kernel: Node 0 DMA: 1*4kB 0*8kB 1*16kB 1*32kB 1*64kB 0*128kB 1*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB = 15732kB
    Sep 3 23:22:20 stor4 kernel: Node 0 DMA32: 16305*4kB 4381*8kB 353*16kB 8*32kB 1*64kB 1*128kB 0*256kB 1*512kB 1*1024kB 0*2048kB 0*4096kB = 107900kB
    Sep 3 23:22:20 stor4 kernel: Node 0 Normal: 14548*4kB 14808*8kB 2420*16kB 31*32kB 5*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 1*4096kB = 220784kB
    Sep 3 23:22:20 stor4 kernel: 3748822 total pagecache pages
    Sep 3 23:22:20 stor4 kernel: 0 pages in swap cache
    Sep 3 23:22:20 stor4 kernel: Swap cache stats: add 0, delete 0, find 0/0
    Sep 3 23:22:20 stor4 kernel: Free swap = 975864kB
    Sep 3 23:22:20 stor4 kernel: Total swap = 975864kB
    Sep 3 23:22:20 stor4 kernel: 4194303 pages RAM
    Sep 3 23:22:20 stor4 kernel: 126915 pages reserved
    Sep 3 23:22:20 stor4 kernel: 3753534 pages shared
    Sep 3 23:22:20 stor4 kernel: 213500 pages non-shared
    TCP stack and VM config:
      net.core.rmem_max = 83886080
      net.core.wmem_max = 83886080
      net.core.rmem_default = 65536
      net.core.wmem_default = 65536
      net.ipv4.tcp_rmem = 40960 1048560 4194304
      net.ipv4.tcp_wmem = 40960 196608 4194304
      net.ipv4.tcp_mem = 16388608 16388608 16388608
      vm.min_free_kbytes=135168
    Additional tweaks:
      /sbin/blockdev --setra 16384 /dev/sdb
      echo 2048 > /sys/block/sdb/queue/nr_requests
    Where might the problem be? Thank you.
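    A quick way to confirm whether order-2 (16 KB contiguous) allocations are being starved is to watch the buddy allocator and the reclaim watermarks while the box is under load; the commands below are only a sketch (device and sysctl names come from the question, the raised threshold is hypothetical):
      # columns are free blocks per order; very few entries in column 3 and up means fragmentation
      cat /proc/buddyinfo
      # verify the tuning above actually took effect
      sysctl vm.min_free_kbytes net.ipv4.tcp_rmem net.ipv4.tcp_wmem
      # raising min_free_kbytes is a common mitigation for atomic allocation failures
      sysctl -w vm.min_free_kbytes=262144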

    Read the article

  • cURL looking for CA in the wrong place

    - by andrewtweber
    On Red Hat Linux, in a PHP script I am setting cURL options like this:
      curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, True);
      curl_setopt($ch, CURLOPT_CAINFO, '/home/andrew/share/cacert.pem');
    Yet I am getting this error when trying to send data (curl error 77):
      error setting certificate verify locations: CAfile: /etc/pki/tls/certs/ca-bundle.crt CApath: none
    Why is it looking for the CAfile in /etc/pki/tls/certs/ca-bundle.crt? I don't know where that path is coming from, as I don't set it anywhere. Shouldn't it be looking in the place I specified, /home/andrew/share/cacert.pem? I don't have write permission to /etc/, so simply copying the file there is not an option. Am I missing some other cURL option that I should be using? (This is on shared hosting - is it possible that it's disallowing me from setting a different path for the CAfile?)
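    A quick sanity check from the shell narrows down whether the bundle itself or the PHP environment is the problem; this is just a sketch (paths are from the question, the test URL is hypothetical):
      # confirm the file is readable by the account the PHP script runs as
      ls -l /home/andrew/share/cacert.pem
      # if the CLI accepts the same bundle, the file is fine and the issue sits in PHP
      # (on shared hosting, open_basedir restrictions are one thing to rule out)
      curl --cacert /home/andrew/share/cacert.pem -I https://www.example.com/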

    Read the article

  • Make backups of Dropbox folder every week

    - by ilansch
    I have a Dropbox folder which is shared by a couple of users. I would like to make a backup of this folder that will occur every week and store this backup on another hard drive. I can simply copy the entire folder each time and this will be the backup, but I would like to copy only the files that have been changed or created during that week. I thought of creating a batch script that will check each file in the Dropbox folder recursively and see its modified date. If that date is later than a given one (the current backup date) it will copy the file to a folder named BackUP[Date]. Do you think this solution is OK?
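    As a rough sketch (source and destination paths are hypothetical), Windows' built-in robocopy can already do the age-based filtering, which avoids checking each file's modified date by hand:
      rem copy only files changed within the last 7 days, preserving the folder structure
      robocopy "C:\Users\me\Dropbox" "E:\Backups\Dropbox-weekly" /E /MAXAGE:7 /R:1 /W:1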

    Read the article

  • Check if folders exist in Git repository... testing if a sub-string exists in bash with NULL as a separator

    - by Craig Francis
    I have a common git "post-receive" script for several projects, and it needs to perform different actions if an /app/ or /public/ folder exists in the root. Using: FOLDERS=`git ls-tree -d --name-only -z master`; I can see the directory listing, and I would like to use the RegExp support in bash to run something like: if [[ "$FOLDERS" =~ app ]]; then ... fi But that won't work if there was something like an "app lication" folder... I specified the "-z" option in the git "ls-tree" command so I could use the \0 (null) character as a separator, but not sure how to test for that in the bash RegExp. Likewise I know there is support for specifying a particular path in the ls-tree command, and could then pipe that to "wc -l", but I'd have thought it was quicker to get a full directory listing of the root (not recursive) then test for the 2 (or more) folders with the returned output. Possibly related to: http://stackoverflow.com/questions/7938094/git-how-to-check-which-files-exist-and-their-content-in-a-shared-bare-repos
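    For what it's worth, a minimal sketch of an exact-name test on the NUL-separated output, using GNU grep rather than bash's regex operator:
      # -z treats records as NUL-delimited, -x matches the whole record, -q is quiet
      if git ls-tree -d --name-only -z master | grep -qzx 'app'; then
          echo "app exists at the root of master"
      fi
      # repeat with 'public', or loop with: while IFS= read -r -d '' dir; do ... done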

    Read the article

  • Shell script for replacing string in all PHP-files, for each user

    - by Mads Skjern
    Each user has some PHP files using a shared database commondb. I want to iterate over all users (in users.csv), and in their home folder (e.g. /home/joe) find all PHP files recursively and replace each occurrence of "commondb" with their own database name, e.g. "joedb" for "joe". I have tried the following:
      #!/bin/bash
      # Execute like this:
      # bash localize.bash users.csv
      OLDIFS=$IFS
      IFS=","
      while read name dummy
      do
        echo $name
        find /home/${name} -name '*.php' -exec sed -i '' 's/commondb/${name}db/g' "{}" \;
      done < $1
      IFS=$OLDIFS
    for users.csv:
      joe, Joe J
      george, George G
    It does not fail, but the files are unchanged. I am quite weak in bash, and I can't figure out how to debug it :/ Can my script be fixed to work?
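    In case it helps future readers, a likely culprit is the quoting: inside single quotes ${name} never expands, so sed searches for the literal text ${name}db, and the separate '' after -i is BSD-sed syntax, whereas GNU sed on Linux expects any backup suffix attached directly to -i. A minimal corrected sketch under those assumptions:
      #!/bin/bash
      # usage: bash localize.bash users.csv
      while IFS=, read -r name dummy; do
        echo "$name"
        # double quotes let ${name} expand; GNU sed takes no separate suffix argument after -i
        find "/home/${name}" -name '*.php' -exec sed -i "s/commondb/${name}db/g" {} +
      done < "$1"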

    Read the article

  • How do I create certificates for both ends of an stunnel connection?

    - by unixman83
    Hi. I am using stunnel to authenticate RDP (Remote Desktop) and I need to verify that a client possesses the proper credentials, so people cannot brute-force their way into the machine. I am also using a bad (outdated) version of RDP that has security vulnerabilities, so stunnel is a must. I will pre-share the necessary .pem files between machines. What are the openssl commands I need to create the right .pem files on both the client and on the server? What files need to be shared?
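    A minimal sketch of self-signed key pairs for both ends (file names are hypothetical):
      # server side: private key plus self-signed certificate
      openssl req -new -x509 -days 3650 -nodes -newkey rsa:2048 \
          -keyout server.pem -out server-cert.pem
      # client side: the same for the client identity
      openssl req -new -x509 -days 3650 -nodes -newkey rsa:2048 \
          -keyout client.pem -out client-cert.pem
      # each side keeps its own key private and exchanges only the *-cert.pem files
    With verify = 2 in stunnel.conf, each end could then use the peer's certificate as its CAfile, so only the pre-shared identities are accepted.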

    Read the article

  • wildcard in httpd conf file?

    - by Joe
    Here is an example httpd config I'm currently using:
      <VirtualHost 123.123.123.123:80>
        ServerName mysite.com
        ServerAlias www.mysite.com
        DocumentRoot /home/folder
      </VirtualHost>
    I'm wondering, is it possible to have a wildcard for the ServerName & ServerAlias directives? The reason for asking is that I have some software that is shared among multiple URLs, all controlled in a CMS, and it's kind of a pain to add new domains via SSH every time. And before someone points out a security hole, the software does check the current URL before serving any pages :)
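    For reference, Apache accepts wildcards in ServerAlias (ServerName itself must be a single name), so a catch-all vhost is one plausible sketch here (IP and paths copied from the question):
      <VirtualHost 123.123.123.123:80>
        ServerName mysite.com
        # '*' and '?' wildcards are allowed in ServerAlias; '*' matches any Host header
        ServerAlias *
        DocumentRoot /home/folder
      </VirtualHost>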

    Read the article

  • Can you shrink the sparse disk image of a Mac OS X guest OS in VMWare Fusion?

    - by Paul D. Waite
    I use VMware Fusion on my Mac to run a virtual Windows 7 machine, and the Microsoft IE compatibility Windows XP virtual machines. In VMware Tools on the Windows guest OSes, there’s a “Shrink” option that lets you reduce the size of the sparse disk image used by the guest OS, to save hard drive space on your host OS. I’ve recently created another virtual machine, this time running Snow Leopard Server. I was wondering if I could shrink the sparse disk image used by this machine too, but I can’t find a VMware Tools app on the Mac guest OS, even though VMware Tools have been installed (as VMware’s Shared Folders feature is working). Is there any way to shrink the sparse disk image used by Mac OS X guest OSes in VMware Fusion?
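    One approach that is sometimes suggested (a sketch only, not a documented VMware feature for Mac guests; paths and file names are assumptions) is to zero out free space inside the guest and then shrink the disk from the host with the vmware-vdiskmanager utility bundled with Fusion:
      # inside the Mac OS X guest: fill free space with zeros, then delete the file
      dd if=/dev/zero of=~/zerofill bs=1m || true; rm ~/zerofill; sync
      # on the host, with the VM shut down: -k shrinks a sparse virtual disk in place
      "/Applications/VMware Fusion.app/Contents/Library/vmware-vdiskmanager" -k "Snow Leopard Server.vmwarevm/disk.vmdk"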

    Read the article

  • Drag & Drop using jQuery-ui

    - by Dhruva Sagar
    Hi, I am currently working on a project where I have to create a custom calendar sort of application to display and manage appointments easily. I need to be able to drag and reschedule appointments appropriately. jQuery-ui is pretty neat and I am able to achieve almost everything except that I require that no appointments (divs) may overlap. I am not able to figure out how to achieve this. If someone could guide me into the right direction for this, it would be great. Thanking you in advance for your time and patience.

    Read the article

  • IPtables Traffic Quota - up and down

    - by Nick
    I've been trying to set up traffic quotas for users on a shared server and I believe [with my limited knowledge] that iptables --quota and ports which have been selected for each user [--dport] is the way to do this...
      iptables -A OUTPUT --dport 1,2,3,4... --quota 123412341234 -j ACCEPT
      iptables -A OUTPUT --dport 1,2,3,4... -j DROP
    I think something like this would work to limit the traffic [and reset every month] but it's only for traffic going out. Is there something I could do to combine -A OUTPUT and -A INPUT into one quota? Or is there a different method I could use to achieve the same thing more efficiently? OS is Debian Squeeze. Thanks.
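    One way to share a single byte counter between directions is a user-defined chain that both INPUT and OUTPUT jump into; a sketch with hypothetical ports and quota (note that multiple ports need -m multiport, and --quota needs -m quota):
      iptables -N USER1_QUOTA
      # both directions feed the same chain, so the quota rule below counts up and down together
      iptables -A INPUT  -m multiport --dports 10001,10002 -j USER1_QUOTA
      iptables -A OUTPUT -m multiport --sports 10001,10002 -j USER1_QUOTA
      iptables -A USER1_QUOTA -m quota --quota 123412341234 -j ACCEPT
      iptables -A USER1_QUOTA -j DROP
    A monthly cron job that flushes and re-adds the USER1_QUOTA rules would handle the reset.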

    Read the article

  • Remove Sync Center icon

    - by Edward Brey
    I accidentally marked a shared folder as "Available Offline" in Windows Explorer on a Windows 8.1 computer. This seems to have "woken up" the Sync Center and caused the Sync Center icon to be displayed in the system notification area. Even though I've undone that by marking the folder as not available offline, and furthermore have reset CSC and disabled Offline Files, the Sync Center icon still appears in the overflow section of the system notification area. How do I remove the Sync Center icon and preferably disable the process that is displaying it? Debugging info: the registry shows that stuff is enabled, even though the Sync Center and Offline Files dialogs don't indicate that anything is active.
      HKEY_CURRENT_USER\Software\Classes\Local Settings\Software\Microsoft\Windows\CurrentVersion\SyncMgr\HandlerInstances\{750FDF10-2A26-11D1-A3EA-080036587F03}
        SyncTime REG_BINARY F6DDC46CBB76CF01
        Connected REG_DWORD 0x1
        Enabled REG_DWORD 0x0
        Active REG_DWORD 0x1
        NotifiedOnFirstActivation REG_DWORD 0x0
      HKEY_CURRENT_USER\Software\Classes\Local Settings\Software\Microsoft\Windows\CurrentVersion\SyncMgr\HandlerInstances\{750FDF10-2A26-11D1-A3EA-080036587F03}\SyncItems
      HKEY_CURRENT_USER\Software\Classes\Local Settings\Software\Microsoft\Windows\CurrentVersion\SyncMgr\HandlerInstances\{750FDF10-2A26-11D1-A3EA-080036587F03}\SyncItems\{CBA95344-4284-48CB-8083-3BDE1FDB29A7}
        SyncTime REG_BINARY F6DDC46CBB76CF01
        Connected REG_DWORD 0x1
        Enabled REG_DWORD 0x1

    Read the article

  • Suggestions on providing HA access to an external (fibre) RAID subsystem

    - by user145198
    We are looking at upgrading our storage capacity with an external RAID subsystem that has redundant (2) fibre controllers, each controller has 4 x 8 Gbps fibre ports. I would like to make access to this storage system occur via HA Linux. Ideally I would connect 2 fibre ports from each controller into each Linux server, and then export either NFS or iSCSI via a 10 Gbe interface. I have seen plenty of references to DRBD, however all of those references tend to use block storage that is solely attached to each machine, rather than having a shared block storage device, so I am unsure if DRBD could (or should) be used in this case. Ideas?

    Read the article

  • Reverse DNS for two ADs in the same subnet

    - by SpacemanSpiff
    I currently have two separate AD forests that exist within the same subnet. The two forests have independent copies of the reverse lookup zone for that subnet. Example:
      Domain A DC1: 10.1.1.1/24
      Domain A DC2: 10.1.1.2/24
      Domain A AppServer1: 10.1.1.3/24
      Domain B DC1: 10.1.1.11/24
      Domain B DC2: 10.1.1.12/24
      Domain B Appserver1: 10.1.1.13/24
    What I'm after is a configuration that allows this reverse zone to be shared between them so that both sets of DNS servers can make updates to the zone. This kind of thing is a little far from my everyday work, so a kick in the right direction is a welcome suggestion as well. Decoupling one AD into new segments is a possibility I'm open to but would like to avoid if possible. If there is a DNS-related solution I'd prefer that.

    Read the article

  • Putting our OLTP and OLAP services on the same cluster

    - by Dynamo
    We're currently in a bit of a debate about what to do with our scattered SQL environment. We are setting up a cluster for our data warehouses for sure and are now in the process of deciding if our OLTP databases should go on the same one. The cluster will be active/active with database services running on one node and reporting and analytical services on the other node. From a technical standpoint I don't see an issue here. With the services being run on different nodes they shouldn't compete too heavily for resources. The only physical resource that may be an issue would be the shared disk space. Our environment is also quite small. Our biggest OLAP database at the moment is only about 40GB and our OLTP are all under 10GB. I see a potential political issue here as different groups are involved but I'm just strictly wondering if there would be any major technical issues that could arise from this setup.

    Read the article

  • How to configure auto-logon in Active Directory

    - by Jonas Stensved
    I need to improve our account management (using Active Directory) for a customer support site with 50+ computers. The default "AD" way is to give each user their own account. This adds up to a lot of administration with adding/disabling/enabling user accounts. To avoid this, supervisors have started to use shared "general" accounts like domain\callcenter2, and I don't like the idea of everyone knowing and sharing accounts and passwords. Our ideal solution would be to create a group of computers which requires no login by the user, i.e. the users just have to start the computer. Should I configure auto-logon with a single user account like domain\agentAccount? Is there anything else to consider if I use the same account for all users? How do I configure the actual auto-logon with a GPO on the group? Is there a "Microsoft way" without 3rd-party plugins? Or is there a better solution?
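    For reference, the classic auto-logon mechanism is a handful of Winlogon registry values, which could be pushed to the machine group with Group Policy Preferences (Registry items) rather than a 3rd-party tool; the account name below is hypothetical, and note the password ends up stored in cleartext on each machine:
      reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v AutoAdminLogon /t REG_SZ /d 1
      reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultDomainName /t REG_SZ /d DOMAIN
      reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultUserName /t REG_SZ /d agentAccount
      reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultPassword /t REG_SZ /d "the-password"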

    Read the article

  • Why are people so afraid of using clone() (on collections and JDK classes)?

    - by Bozho
    A number of times I've argued that using clone() isn't such a bad practice. Yes, I know the arguments. Bloch said it's bad. He indeed did, but he said that implementing clone() is bad. Using clone on the other hand, especially if it is implemented correctly by a trusted library, such as the JDK, is OK. Just yesterday I had a discussion about an answer of mine that merely suggests that using clone() for ArrayList is OK (and got no upvotes for that reason, I guess). If we look at the @author of ArrayList, we can see a familiar name - Josh Bloch. So clone() on ArrayList (and other collections) is perfectly fine. (Just look at the implementation). Same goes for Calendar and perhaps most of the java.lang and java.util classes. So, give me a reason why not to use clone() with JDK classes?

    Read the article

  • Converting DOCX files to PDF via SSH without losing formatting

    - by Reado
    I'm struggling to find a solution that will allow me to convert a DOCX file to a PDF without losing or malforming the formatting of the document on CentOS 5.7. I have tried CUPS-PDF but it doesn't work; spool files appear in the /var/spool folder but nothing happens after that. OpenOffice and LibreOffice converted a DOCX to PDF but the formatting was all wrong. However if I print the DOCX to a Windows PDF printer from my Windows 7 workstation, it outputs to PDF absolutely fine. So why can't Linux do the same? I tried to print via CUPS to the Windows PDF printer (shared) but the document appears in the queue as "Remote Downlevel Document" and doesn't print. This only happens when I print from Linux.

    Read the article

  • Cannot figure out how to get rid of memory leak

    - by Mark S.
    I'm trying to test for memory leaks in my iPhone app and I'm not having much luck getting rid of this one. Here is the code that is leaking:
      - (id)initWithManagedObjectContext:(NSManagedObjectContext *)aMoc delegate:(id)aDelegate runSync:(BOOL)aRunSync {
          if (self = [super init]) {
              self.moc = aMoc;
              self.settingsManager = [[VacaCalcSettingsManager alloc] initWithManagedObjectContext:self.moc];
              self.delegate = aDelegate;
              calendar = [[NSCalendar alloc] initWithCalendarIdentifier:NSGregorianCalendar];
              self.runSync = aRunSync;
          }
          return self;
      }
    It is leaking on the self.settingsManager = [[VacaCalcSettingsManager alloc] initWithManagedObjectContext:self.moc]; line. The self.settingManager instance variable is released in the dealloc method of the class. I'm not sure what other information would be pertinent. Please let me know and I can provide it. Thanks for any assistance. -Mark

    Read the article

  • Best way to block a country by IP address?

    - by George Edison
    I have a website that needs to block a particular country based on IP address. I am more than aware that IP-based blocking is not a foolproof method for blocking visitors, but it is a necessary step in the right direction. Since I'm using PHP, what I would do is use a GeoIP database like geoplugin.net. However, I'm curious to know if there's a better way of doing this. The website is on a shared webserver (I don't have root access) and it is running Apache on CentOS. I guess my question is "can an .htaccess file be configured to block by IP using an external source to look up IP addresses?"
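    If the host happens to have mod_geoip loaded, the lookup can be done entirely in .htaccess; this is only a sketch under that assumption (country code is hypothetical), and a shared host without the module would need the PHP/GeoIP approach instead:
      # mod_geoip exposes the visitor's country as an environment variable
      GeoIPEnable On
      SetEnvIf GEOIP_COUNTRY_CODE CN BlockedCountry
      Order Allow,Deny
      Allow from all
      Deny from env=BlockedCountry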

    Read the article

  • Taking over and Moving a PHP site

    - by KCavon
    I have an internal-use PHP site at my new position. It only runs a few days a year off site, so we keep it on laptops. The hardware it has been on, an 8-year-old IBM ThinkPad running Fedora, is dying. I have new Lenovo ThinkPads running the latest and greatest Ubuntu. I have copied the contents of /var to a shared drive, renamed the old www folder in /var on the new machine, and copied over the old www folder. I can get to the login page and into the site, but when I look something up it returns Cannot Open. I know I cannot get to the MySQL instance on the new machine because users and passwords don't match. The version of PHP from the old machine is before the setup script was included. I know very little about PHP. I am looking for input on the proper way to link the old PHP files to my MySQL instance. Any help, much appreciated.
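    Assuming the old laptop still boots long enough to run a dump, the usual way to carry the database and its credentials across is roughly as follows (database and user names are hypothetical; they would come from whatever the PHP config file references):
      # on the old machine: export the application's database
      mysqldump -u root -p appdb > appdb.sql
      # on the new machine: recreate the database and the user the PHP code expects
      mysql -u root -p -e "CREATE DATABASE appdb; CREATE USER 'appuser'@'localhost' IDENTIFIED BY 'samepassword'; GRANT ALL PRIVILEGES ON appdb.* TO 'appuser'@'localhost';"
      mysql -u root -p appdb < appdb.sql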

    Read the article

  • Sharing wireless internet connection between 2 Ubuntu 10.04 systems using cross-over cable

    - by Gary
    I have a Lenovo T60 with Ubuntu 10.04 connected with a cross-over cable to a Dell Vostro 1400, also running Ubuntu 10.04. My internet is coming into the Lenovo through an external wireless antenna, and I want to share the internet with the Dell.
    On the Lenovo:
      - the eth0 connection has IPv4 Settings 'Shared to other computers'
      - I can ping the Dell (10.42.43.10) successfully
      - I can use mtr to trace to www.google.com successfully
    On the Dell:
      - the eth0 connection has IPv4 Settings 'Automatic DHCP'
      - I can ping the Lenovo (10.42.43.1) successfully
      - when I use mtr to trace to www.google.com, I can only reach 10.42.43.1
    I must be missing some setting, but cannot see what it is; can anyone help me?
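    If NetworkManager's 'Shared to other computers' mode has set everything up, forwarding and NAT should already be in place on the Lenovo; a quick sketch to verify (the wireless interface name is an assumption):
      # 1 means the Lenovo will forward packets arriving on eth0 out the wireless link
      cat /proc/sys/net/ipv4/ip_forward
      # there should be a MASQUERADE rule covering the 10.42.43.0/24 subnet
      sudo iptables -t nat -L POSTROUTING -n -v
      # if either is missing, this adds them by hand (wlan0 assumed to be the uplink)
      sudo sysctl -w net.ipv4.ip_forward=1
      sudo iptables -t nat -A POSTROUTING -s 10.42.43.0/24 -o wlan0 -j MASQUERADE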

    Read the article

  • Can domain "masking" be set up in BIND\cPanel?

    - by ServerAdminGuy45
    I am supporting a client, let's say he has the domain "acme.com". He registered with GoDaddy and set the name servers to point to his crappy Hostgator shared account. He uses cPanel on the Hostgator account to set up his subdomains. Is it possible to set up some kind of domain masking so that when someone connects to "application.acme.com", it really forwards to "cloud-solution-provider.com"? I mean the actual domain "cloud-solution-provider.com", because it resolves to different IPs based upon geolocation. For this reason I can't just set application.acme.com to point to the IP that cloud-solution-provider.com resolves to. I want the ability for a user to RDP to "application.acme.com" and be sent to the desktop served by "cloud-solution-provider.com", whatever that IP may be. Perhaps I can have GoDaddy be the nameserver? I have a feeling this would break Hostgator since there is a website at acme.com and shop.acme.com
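    What is being described sounds less like masking and more like a plain CNAME; as a sketch, a record along these lines in the acme.com zone (added through cPanel's DNS editor, or at whichever provider ends up authoritative) would have clients resolve the provider's name themselves, so the geolocation-based answer still applies per user:
      application.acme.com.    IN    CNAME    cloud-solution-provider.com.
    An RDP client pointed at application.acme.com would then land on whatever IP cloud-solution-provider.com resolves to for that particular client, and the existing acme.com and shop.acme.com records stay untouched.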

    Read the article

  • txt file descriptor in lsof

    - by wfaulk
    In my experience, files that have the file descriptor of txt in lsof output are the executable file itself and shared objects. The lsof man page says that it means "program text (code and data)". While debugging a problem, I found a large number of data files (specifically, ElasticSearch database index files) that lsof reported as txt. These are definitely not executable files. The process was ElasticSearch itself, which is a java process, if that helps point someone in the right direction. I want to understand how this process is opening and using these files that gets it to be reported in this way. I'm trying to understand some memory utilization, and I suspect that these open files are related to some metrics I'm seeing in some way. The system is Solaris 10 x86.
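    One possibility worth ruling out is that the JVM has memory-mapped these index files (Lucene, which ElasticSearch builds on, can use mmap for index access), since that would also bear on the memory-utilization metrics; a rough check on Solaris (the PID is a placeholder) is to see whether the same files show up in the process's address-space map:
      # files that appear in the mapping list were mmap()ed by the JVM
      pmap -x <pid> | grep -i index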

    Read the article

  • UNC vs. SFTP vs. SSH for uploading to a Windows server

    - by apollodude217
    I understand that UNC, SFTP, and SSH are, of course, different interfaces (protocols?). But feature-wise, how do they differ? Are there things you can do with one that you cannot do with another? Is one more secure than another? The situation I want to fix is one where we have several Windows servers and VPC's, some of which have SFTP servers and some of which don't. For those that don't we use UNC over a VPN shared by the entire enterprise. What I want to do is either use all UNC, all SFTP, or all SSH (unless a real need to vary on a case-by-case basis presents itself). Links would be excellent. My biggest problem here is that my googling brings up irrelevant results. :(

    Read the article

  • How to connect with MySQL server if it won't connect via the socket?

    - by cwd
    I have an account on a shared server. I have jailshell access and also phpMyAdmin. I want to run mysql commands via SSH but I'm getting an error:
      $ mysql -u mySqlUser -p mySqlPw
      Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock'
    I can connect with PHP and phpMyAdmin, so would it be possible to call mysql from the shell and have it connect via an IP and port instead of the socket? The file /var/lib/mysql/mysql.sock does not exist - maybe that is intentional - and the only thing in /etc/my.cnf is:
      [mysqld]
      skip-innodb
    More info: I don't have access to change system settings. I did a search in /var for mysql.sock but found nothing. However, phpMyAdmin might be connecting via a socket somehow. Really it would just be great if I could connect via IP. Also tried these two syntaxes:
      $ mysql -u mySqlUser -p mySqlPw -h localhost
      $ mysql -u mySqlUser -p mySqlPw -h localhost -P 3306
    Both with the same result:
      ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)
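    A sketch of forcing a TCP connection: the MySQL client treats the host name localhost as "use the socket", so pointing it at 127.0.0.1 (or passing --protocol) sidesteps the missing socket file. Note also that -p takes no space before the password, otherwise the next word is read as a database name:
      mysql -u mySqlUser -p'mySqlPw' -h 127.0.0.1 -P 3306 --protocol=TCP
    Whether the shared host allows TCP connections to its MySQL instance is a separate question; the socket or host phpMyAdmin actually uses could also be checked in its config.inc.php.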

    Read the article
