Search Results

Search found 18803 results on 753 pages for 'c link'.

Page 594/753 | < Previous Page | 590 591 592 593 594 595 596 597 598 599 600 601  | Next Page >

  • Nvidia Linux Driver Huge Resolution

    - by darxsys
    I'm trying to set up a working CUDA SDK on Linux Mint. I'm new to Linux and everything connected with it, so I tried following some steps on how to install CUDA. First, I downloaded the Linux driver (version 295.41) from http://developer.nvidia.com/cuda/cuda-downloads. After that, I barely found a way to run it. I did it like this:
        1. Typed sudo init 1 in a terminal and switched to root.
        2. Typed service mdm stop.
        3. Ran the *.run file downloaded from the link above.
    It then started installing the driver. It gave some warning messages, but I ignored them. After installation I typed init 5 and it came back to the GUI, but everything is huge. I restarted; still huge. My screen resolution is 640x480 on a 17-inch laptop display. I tried running Nvidia X Server Settings, but it says: "You do not appear to be using the NVIDIA X driver. Please edit your X configuration file." I tried that; nothing happened. I can't change the resolution because the Nvidia Settings tool gives no options. Then I googled around and installed some packages - nothing. The biggest problem is that I don't understand what's really going on. My laptop is a Samsung with an i7 and an Nvidia GT 650M with Optimus. I can't even install Bumblebee, but that is something I will try once I get my resolution back to normal. Please help!
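    For context, a sketch of what "edit your X configuration file" usually means with this driver; nvidia-xconfig ships with the 295.xx installer, and the Device section shown is an assumption about what it generates, not a fix verified on Optimus hardware (Optimus laptops generally need Bumblebee instead of the plain driver):
        sudo nvidia-xconfig            # writes /etc/X11/xorg.conf with the nvidia driver selected
        # the relevant part of the generated file typically looks like:
        #   Section "Device"
        #       Identifier "Device0"
        #       Driver     "nvidia"
        #   EndSection
        sudo service mdm restart       # restart the display manager to pick up the new configuration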

    Read the article

  • Installing Trac on Windows under Apache 2.2?

    - by Warren P
    Trac is a Python-powered bug-tracking and project-management app. According to Trac's wiki, there are several options for installing Trac: a standalone server (tracd), or a dedicated web server using one of these options:
        FastCGI - not available on Windows.
        mod_wsgi - no version of mod_wsgi available for Apache 2.2.22 and Python 2.7.3-amd64 actually runs on my system.
        mod_python - no longer recommended, as mod_python is not actively maintained anymore.
        CGI - should not be used, as the performance is far from optimal.
    That leaves me with zero ways to run Trac on Windows. Apache 2.2.22 with mod_wsgi loading crashes the Apache 2.2 service on startup without any error logs; disabling the line in the Apache configuration that loads mod_wsgi restores sanity. I just want an installation of Trac on Windows with authentication enabled. I am unable to get authentication to work using basic tracd like this:
        tracd -p 8000 --basic-auth="c:\tmp,c:\tmp\Passwords.md5.txt,mycompany" c:\tmp\RootFolder
    And I am unable to get mod_wsgi installed. I'm going to keep trying to figure out a combination that works; I suspect I should have installed 32-bit Python instead of 64-bit Python to start with. Did I do wrong to install 64-bit Python 2.7.3? I tried again with all 32-bit components and still can't get mod_wsgi to work with Apache 2.2.22. I'm going to try to compile mod_wsgi myself with Visual C++ Express 2010, but it seems to me that it ought to be easier than this to get Trac running on Windows with authentication. Is there a way to run Trac on Windows, under Apache, with authentication? The last "Trac on Windows" article died in 2008, leaving only this Internet Archive link for "Trac on Windows" setup.
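    For reference, the mod_wsgi deployment described in Trac's own documentation boils down to a small Apache block; a minimal sketch, assuming mod_wsgi actually loads, with the Windows paths below taken from the question and otherwise hypothetical:
        WSGIScriptAlias /trac "C:/tmp/deploy/cgi-bin/trac.wsgi"
        <Location /trac>
            AuthType Basic
            AuthName "mycompany"
            AuthUserFile "C:/tmp/Passwords.md5.txt"
            Require valid-user
        </Location>
    The trac.wsgi script itself is generated by running trac-admin <environment> deploy <target-dir>; the directory names above are placeholders.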

    Read the article

  • Ubuntu Server mdadm drbd ocfs2 kvm hangs under heavy file reading

    - by Stefano Annese
    I have deployed four Ubuntu 10.04 servers. They are coupled two by two in a cluster scenario. On both sides we have software RAID1 disks, drbd8 and OCFS2, and on top of it some KVM machines run with qcow2 disks. I followed this: Link. Corosync is just used for DRBD and OCFS2; the KVM machines are run "manually". When it works it is fine: good performance, good I/O. But at a given time one of the two clusters started hanging. Then we tried with just one server turned on, and it hangs the same way. It seems to happen when a heavy READ occurs in one of the virtual machines, that is, during an rsync backup. When it happens, the virtual machines are not reachable any more; the physical server responds to ping with long delays, but no screen and no SSH are available. All we can do is force a shutdown (hold the power button) and restart, and when it comes back up the RAID array that DRBD relies on is resyncing. We see this every time it hangs. After a couple of weeks of pain on one side, this morning the other cluster hung as well, even though it has a different motherboard, RAM and KVM instances. What is similar is the rsync read scenario and the Western Digital RAID Edition disks on both sides. Can anybody give me some input to solve such an issue?
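    Not a fix, but for context, the DRBD resource backing a dual-primary OCFS2 stack like the one described typically looks roughly like the sketch below (drbd8 syntax; host names, device names and addresses are assumptions):
        resource r0 {
            protocol C;
            net { allow-two-primaries; }        # required when OCFS2 is mounted on both nodes
            startup { become-primary-on both; }
            on node1 {
                device    /dev/drbd0;
                disk      /dev/md2;             # the mdadm RAID1 device (assumed name)
                address   10.0.0.1:7789;
                meta-disk internal;
            }
            on node2 {
                device    /dev/drbd0;
                disk      /dev/md2;
                address   10.0.0.2:7789;
                meta-disk internal;
            }
        }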

    Read the article

  • Why is rdiff-backup not compatible with encfs --reverse?

    - by user330273
    I'm trying to use encfs with rdiff-backup to ensure that my backups to a remote server are encrypted. The easiest way to do this would be to use encfs --reverse, which means encfs creates a virtual encrypted view of the file system, which I can then back up using rdiff-backup. Except that it doesn't work: rdiff-backup fails every time with an "input/output error" on the encfs virtual filesystem. It seems I'm not the only one with this problem, but no one has said what the problem is: this person reported the same issue, but was just told to use sshfs instead (see below on that); in this question on Server Fault, one of the answers just states that "rdiff-backup seems to have trouble accessing the EncFS-reverse filesystem." There's an open bug report on the Debian bug tracker (bug 731413, I can't post the link), but it's been open since December 2013 with no response. Does anyone know what the problem actually is? Is there a workaround? I can't use the two most commonly suggested alternatives - sshfs with encfs on top of it, or Duplicity - as both require a much higher bandwidth connection than I have access to (Duplicity requires regular full backups).
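    For concreteness, the setup being described is roughly the following; the directory names and host are placeholders:
        # expose an encrypted view of the plaintext directory
        encfs --reverse /home/user/data /tmp/data-encrypted
        # back up the encrypted view to the remote server
        rdiff-backup /tmp/data-encrypted user@backuphost::/backups/data-encrypted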

    Read the article

  • Alternatives to using email (in particular, Outlook) as a knowledge store?

    - by Umber Ferrule
    I suspect that, like many people, I use my work email account (accessed via Outlook 2007) to store information. I generally try to group similar things in folders and sub-folders, but with a multitude of folders this gets very unwieldy. In particular, it can be a bind to locate things using Outlook's tree structure. (As an aside: I've yet to come across a good free search add-on for Outlook.) I realise Outlook is not the best place to store all my information and I'd prefer not to. In an ideal world I'd like to be able to organise all of the information stored in Outlook in a MindMap (my software of choice being Freemind) or Wiki. To maintain an email audit-trail, I've considered saving individual emails as files using a MindMap or Wiki to link them. What do people think of this? (I can't say I relish the thought of the exporting process!) Whatever I do is going to involve some pain (i.e. setting up a Wiki/MindMap) or sticking with what Outlook provides currently. Has anyone been in the same position? Has anyone mass-migrated information from Outlook? If so, what was the best way? Any ideas or alternative proposals?

    Read the article

  • Gmail.com detects mail as spam, but the server is not on any blacklist

    - by Tomer W
    I have an issue with Google (Gmail, to be exact). About a month ago we had a security breach, and mail was relayed through our servers. We got listed in almost ALL blacklists. We fixed the problem and requested removal from the blacklists, which was granted easily. Currently (for over 3 weeks) we are not sending any spam anymore. Furthermore, we are clear of all the blacklists (MxToolBox Black-List Search Result). But Gmail still refuses to accept anything from the server, stating '550 Spam'. Following is a telnet attempt to send to Gmail:
        220 mx.google.com ESMTP g47si45436208eep.123
        helo megatec.co.il
        250 mx.google.com at your service
        mail from: <[email protected]>
        250 2.1.0 OK g47si45436208eep.123
        rcpt to: <[email protected]>
        250 2.1.5 OK g47si45436208eep.123
        Data
        354 Go ahead g47si45436208eep.123
        Test123
        .
        550-5.7.1 [62.219.123.33 11] Our system has detected that this message is
        550-5.7.1 likely unsolicited mail. To reduce the amount of spam sent to Gmail,
        550-5.7.1 this message has been blocked. Please visit
        550-5.7.1 http://support.google.com/mail/bin/answer.py?hl=en&answer=188131 for
        550 5.7.1 more information. g47si45436208eep.123
        Connection to host lost.
    I tried filling in the form at Gmail - Report Delivery Problem. I also tried reaching Google by phone, but the message was to go to the link mentioned above. I checked reverse DNS and it is OK. We don't have TLS, but that shouldn't be a problem, should it? Note: we are not a bulk sender. Does anyone have an idea what could be blocking our IP? Does anyone know who can be contacted in order to resolve this listing?
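    One thing Gmail weighs heavily besides blacklists is sender authentication. A minimal SPF record for the sending domain would look like the sketch below, assuming 62.219.123.33 really is the only host that should send mail for megatec.co.il (DKIM signing would be the natural next step):
        megatec.co.il.    IN TXT    "v=spf1 ip4:62.219.123.33 -all"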

    Read the article

  • Incorrect deployment of WSGI app to AWS using Elastic Beanstalk

    - by Dzmitry Zhaleznichenka
    cross-link to AWS forums
    I have developed a simple Python web service using WSGI and would like to deploy it to the AWS cloud using Elastic Beanstalk. My problem is that I cannot get all the options I specify in the Elastic Beanstalk configuration to be correctly applied in the cloud. For deployment I use the Elastic Beanstalk CLI utility. I ran the eb init command and set up the required parameters. After this, a directory named .elasticbeanstalk was created in my source tree. It has two config files that are used for deployment, namely config and optionsettings. The latter, among other options, contains the WSGI configuration that has to update /etc/httpd/conf.d/wsgi.conf on the instances. After some of my adjustments the file has the following settings:
        [aws:elasticbeanstalk:application:environment]
        DJANGO_SETTINGS_MODULE =
        PARAM1 =
        PARAM2 =
        PARAM4 =
        PARAM3 =
        PARAM5 =

        [aws:elasticbeanstalk:container:python]
        WSGIPath = handler.py
        NumProcesses = 2
        StaticFiles = /static=
        NumThreads = 10

        [aws:elasticbeanstalk:container:python:staticfiles]
        /static = static/

        [aws:elasticbeanstalk:hostmanager]
        LogPublicationControl = false

        [aws:autoscaling:launchconfiguration]
        InstanceType = t1.micro
        EC2KeyName = zmicier-aws

        [aws:elasticbeanstalk:application]
        Application Healthcheck URL =

        [aws:autoscaling:asg]
        MaxSize = 10
        MinSize = 1
        Custom Availability Zones =

        [aws:elasticbeanstalk:monitoring]
        Automatically Terminate Unhealthy Instances = true

        [aws:elasticbeanstalk:sns:topics]
        Notification Endpoint =
        Notification Protocol = email
    It turns out that not all of these options are honored when I start or update the environment. When I update NumThreads or NumProcesses, the respective parameters get changed in wsgi.conf as expected. But whatever I write to the WSGIPath and StaticFiles parameters, the respective values in wsgi.conf never change; they remain
        Alias /static /opt/python/current/app/
        WSGIScriptAlias / /opt/python/current/app/application.py
    which drives me nuts. Moreover, when I deploy my application using git aws.push with the following contents in the .ebextensions/python.config file, none of the options I specify in it affects the deployment:
        option_settings:
          - namespace: aws:elasticbeanstalk:container:python
            option_name: WSGIPath
            value: mysite/wsgi.py
          - namespace: aws:elasticbeanstalk:container:python
            option_name: NumProcesses
            value: 5
          - namespace: aws:elasticbeanstalk:container:python
            option_name: NumThreads
            value: 25
          - namespace: aws:elasticbeanstalk:container:python:staticfiles
            option_name: /static/
            value: app/static/
    I wonder what I should do to force AWS to use all the parameters I specify in the configuration, namely the WSGI path and the path to my static data.

    Read the article

  • django: Serving static files through nginx

    - by PlanetUnknown
    I'm using apache+mod_wsgi for Django, and all css/js/images are served through nginx. For some odd reason, when others (friends/colleagues) try accessing the site, jQuery/css is not getting loaded for them, so the page looks jumbled up. My HTML files use code like this:
        <link rel="stylesheet" type="text/css" href="http://x.x.x.x:8000/css/custom.css"/>
        <script type="text/javascript" src="http://1x.x.x.x:8000/js/custom.js"></script>
    My nginx configuration in sites-available is like this:
        server {
            listen 8000;
            server_name localhost;

            access_log /var/log/nginx/aa8000.access.log;
            error_log /var/log/nginx/aa8000.error.log;

            location / {
                index index.html index.htm;
            }

            location /static/ {
                autoindex on;
                root /opt/aa/webroot/;
            }
        }
    There is a directory /opt/aa/webroot/static/ which has the corresponding css and js directories. The odd thing is that the pages show fine when I access them. I have cleared my cache etc., but the page loads fine for me, from various browsers. Also, I don't see any 404 or other errors in the nginx log files. Actually the logs for nginx are not getting refreshed at all. I restarted the nginx server as root; is that incorrect? There is a user www-data defined in the nginx configuration file. Any pointers would be great.
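    For what it's worth, the HTML requests /css/... and /js/..., but the server block above only maps /static/. A sketch of a location that would cover those prefixes, assuming the files really live under /opt/aa/webroot/static/:
        location ~ ^/(css|js)/ {
            root /opt/aa/webroot/static;
        }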

    Read the article

  • Cause of slow download speed on a particular EC2 instance?

    - by James
    I have a networking issue I'm trying to solve. I have two EC2 instances, same zone, same type. On one of the two (the 'bad' instance) the download speed is really poor (200k/s), while on the other (the 'good' instance) the download speed is fine, comfortably at 30M/s+. To clarify, I'm talking about downloading files to the EC2 instance while ssh'd into the server, e.g. running wget with a large file. I've tried different files, including S3 objects and a large Linux ISO from elsewhere. Running ethtool eth0 only returns 'Link detected: yes' for both. When running ifconfig, both return much the same, except that the good instance shows no error packets while the bad instance shows many:
        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        RX packets:168372370 errors:5075643 dropped:0 overruns:0 frame:0
        TX packets:122116480 errors:0 dropped:0 overruns:0 carrier:0
    Both servers are configured the same, or at least were supposed to be. How can I go about diagnosing the cause of the slow download speed? Is there anything particular to EC2 instances that could cause this? Having trouble knowing where to start. Thanks for any help!
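    A few low-level counters that usually narrow down where RX errors come from; a sketch, assuming a reasonably stock Linux AMI:
        ethtool -S eth0          # per-driver statistics, e.g. crc/length/missed error counters
        netstat -s | head -40    # protocol-level counters such as retransmits and bad segments
        dmesg | grep -i eth0     # driver messages about the interface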

    Read the article

  • pgpoolAdmin keeps going straight back to its login page

    - by user705142
    I'm trying to run pgpoolAdmin through nginx. It seems to be working properly, at least initially: I've gone through the initial set-up, which works fine, but now, after logging in, every link takes me straight back to the login page. It also shows Japanese text instead of English, despite picking English during installation. It seems as if it is unable to save any user data, session information etc. I have JavaScript/cookies enabled, so it's not that. The folder is owned by nginx, and so is pgmgt.conf.php, so it shouldn't be a problem with permissions. One potential issue is that I can't see any confirmation that PHP PostgreSQL support is enabled on the PHP info screen, despite the correct package being installed and in the config line. Any ideas as to what's happening here? The nginx rules are pretty standard:
        server {
            # pg-pool admin
            listen 997;
            server_name localhost;
            root /opt/pgpooladmin;
            index index.php;

            location ~ \.php$ {
                fastcgi_pass_header Set-Cookie;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                fastcgi_index index.php;
                include fastcgi_params;
            }
        }
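    A login loop like this is often PHP failing to persist its session. A couple of hedged checks; the paths below are typical defaults and may differ on this system:
        # which user the php-fpm workers run as
        grep -E '^(user|group)' /etc/php-fpm.conf /etc/php-fpm.d/*.conf 2>/dev/null
        # where sessions are written, and whether that user can write there
        php -i | grep session.save_path
        ls -ld /var/lib/php/session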

    Read the article

  • How to set JS source directory in apache2?

    - by highBandWidth
    I am trying to run a very basic webserver for development/debugging. The static HTML seems to be delivered correctly, but it seems that the JavaScript libraries are not being delivered to the browser. The page HTML says something like:
        <html>
        <head>
        <script type='text/javascript' src="/lib/json.js"></script>
        ...
    Now, I have set up a link for /lib/ in my httpd.conf as:
        ScriptAlias /lib/ "/SomeFolder/lib/"
    When I do this, it can't fetch the files, because this is what I see in my Apache error log:
        ... [error] [client ::1] client denied by server configuration: /SomeFolder/lib/json.js, referer: http://localhost/SomeSite
    It seems that Apache is not allowing access to the folder, so I add this to httpd.conf:
        <Directory "/SomeFolder/lib/">
            Allow from all
        </Directory>
    After this, browsing the page still does not run the JS; instead I see the following error in my Apache error log:
        [error] [client ::1] (13)Permission denied: exec of '/SomeFolder/lib/json.js' failed, referer: http://localhost/SomeSite
    So now it seems that Apache is trying to run the JS files on the server like a CGI script or something. But I have not made that folder a cgi-bin folder. The only lines where SomeFolder is mentioned by name are these lines in httpd.conf:
        ScriptAlias /lib/ "/SomeFolder/lib/"
        <Directory "/SomeFolder/lib/">
            Allow from all
        </Directory>
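    ScriptAlias both maps the URL and marks the target directory as CGI, which is exactly what the "(13)Permission denied: exec of ..." error reflects. A sketch of serving the directory as plain static files instead, using Apache 2.2 syntax:
        Alias /lib/ "/SomeFolder/lib/"
        <Directory "/SomeFolder/lib/">
            Options None
            Order allow,deny
            Allow from all
        </Directory>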

    Read the article

  • ASA Slow IPSec Performance with Inconsistent Window Size

    - by Brent
    I have an IPSec link between two sites over ASA 5520s running 8.4(3), and I am getting extremely poor performance when traffic passes over the IPSec VPN. CPU on the devices is ~13%, memory is at 408 MB, and there are 2 active VPN sessions; the load on both devices is particularly low. Latency between the two sites is ~40ms.
    Screenshot of a Wireshark capture of a file transfer between the two hosts over the firewall IPSec VPN, performing at 10 Mbps (note the changing window size): http://imgur.com/wGTB8Cr
    Screenshot of a Wireshark capture of a file transfer between the two hosts over the firewall, not going over IPSec, performing at 55 Mbps (constant window size): http://imgur.com/EU23W1e
    I'm seeing an inconsistent window size when transferring over the IPSec VPN, ranging from 46,796 to 65,535. When performing at 55+ Mbps, the window size is consistently 65,535. Does this point to a problem in my IPSec VPN configuration on the ASA, or to a Layer 1/2 issue? Using ping xxxxxx -f -l I finally get a non-fragmented reply at 1418 bytes, so 1418 + 28 for the IP/ICMP headers = 1446. I know that I have 1500 set on the ASA and Ethernet. I do have "Force Maximum segment size for TCP proxy connection to be" set to "1380" bytes under Configuration > Advanced > TCP Options on the ASA. Using iperf, I am getting a "TCP Window Full" every few seconds and ~3 Mbps performance: http://imgur.com/elRlMpY
    Show run on the ASA: http://pastebin.com/uKM4Jh76
    Show crypto accelerator stats: http://pastebin.com/xQahnqK3
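    For reference, the CLI equivalents of the knobs usually adjusted for IPSec MTU/MSS trouble on an ASA; a sketch, assuming the tunnel leaves via the interface named outside:
        ! clamp TCP MSS for through-the-box connections
        sysopt connection tcpmss 1380
        ! clear the DF bit and fragment before encryption on the tunnel-facing interface
        crypto ipsec df-bit clear-df outside
        crypto ipsec fragmentation before-encryption outside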

    Read the article

  • Symlinks are inaccessible by their full path on OS X

    - by Computer Guru
    Hi, I have symlinks pointing to applications placed in /usr/local/bin, which is in the path. However, I can't run these applications from other folders. Even more weirdly, I can't access them by the full path to the symlink.
        [mqudsi@iqudsi:Desktop/EasyBCD]$ echo $path                                (03-26 13:42)
        /opt/local/bin /opt/local/sbin /usr/local/bin /usr/local/sbin/ /usr/local/CrossPack-AVR/bin /usr/bin /bin /usr/sbin /sbin /usr/local/bin /usr/X11/bin
        [mqudsi@iqudsi:local/bin]$ ls -l /usr/local/bin                            (03-26 13:47)
        total 24280
        -rwxr-xr-x  1 mqudsi  wheel  18464 May 14  2009 ascii-xfr
        -rwxr-xr-x  1 mqudsi  wheel  12567 Mar 25 04:50 brew
        -rwxr-xr-x  1 mqudsi  wheel  17768 Dec 11 12:41 bsdiff
        -rwxr-xr-x  1 mqudsi  wheel  43024 Mar 28  2009 dumpsexp
        -rwxr-xr-x  1 mqudsi  wheel    280 Sep 10  2009 easy_install
        -rwxr-xr-x  1 mqudsi  wheel    288 Sep 10  2009 easy_install-2.6
        -rwxr-xr-x  1 mqudsi  wheel  39696 Apr  5  2009 fuse_wait
        lrwxr-xr-x  1 mqudsi  wheel     29 Mar 25 04:53 git -> ../Cellar/git/1.7.0.3/bin/git
        [mqudsi@iqudsi:local/bin]$ /usr/local/bin/git                              (03-26 13:47)
        zsh: no such file or directory: /usr/local/bin/git
    Clearly the link is there, but I'm not able to get to it.
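    zsh's "no such file or directory" for a path that ls shows usually means the symlink's target is what is missing. A couple of hedged checks; the Cellar path is taken from the listing above:
        # does the relative target actually resolve from /usr/local/bin?
        ls -l /usr/local/Cellar/git/1.7.0.3/bin/git
        # -L follows the link, so this fails loudly if the target is gone
        ls -lL /usr/local/bin/git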

    Read the article

  • Should I use Evernote or Org-mode for taking notes?

    - by tobeannounced
    I am looking for an app that will help me manage my notes, and after coming across Org-mode, I was wondering whether Org-mode's functionality is strong enough that it can remove the need for me to use another note-taking app (because Org is more of a task-management app), such as Evernote. My wishes for a note-taking app are:
        Can be accessed offline in some form, e.g. through an iPhone app or desktop client. Org-mode and Evernote can both do this; however, it seems like MobileOrg is aimed more at tasks than at notes. If this is the case, I would probably use Evernote in addition to MobileOrg.
        I can clip web content into it easily for research. Evernote has the browser extension; how is it with Org-mode? I know I can use C-c C-l, but how suited is it really for taking notes on stuff I am browsing in Chrome/Firefox?
        Has voice notes on the iPhone and computer too, if possible. Org-mode cannot do this on the iPhone; on the computer, could I record audio externally and then link the files in? (See the sketch below.)
        I can add notes on my iPhone and computer while not connected to the internet. Both can do this.
    The types of notes I am likely to have include: howtos/things I have learnt, documentation on my setup/stuff, research on things I may do in the future, ideas, and task-specific notes. I have thought about where I would want to access each of these notes and will post that here if you think it would help. So, is Org-mode strong enough in note-taking and the requirements I listed that I can avoid the need to use a separate tool for taking notes?
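    On the voice-notes point, a sketch of how externally recorded audio could be linked from an Org file; the heading and file path are made up for illustration:
        * Research
        ** Voice memo from the planning call
           [[file:~/notes/audio/memo-2014-05-01.m4a][voice memo recording]]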

    Read the article

  • Integration of SharePoint 2010 with TFS2010

    - by Kabir Rao
    We have performed the following steps so far:
        1. Installed TFS 2010 10.0.30319.1 (RTM) on Windows Server 2008 R2 Enterprise (app tier).
        2. Installed SQL 2008 SP1 with Cumulative Update 2 on Windows Server 2008 R2 Enterprise (data tier).
        3. Reporting Services is installed on the app tier.
    After this installation worked fine, we installed SharePoint 2010 on the app tier. We then followed http://blogs.msdn.com/b/team_foundation/archive/2010/03/06/configuring-sharepoint-server-2010-beta-for-dashboard-compatibility-with-tfs-2010-beta2-rc.aspx for configuration. We are not able to perform the last step described in the link, as the following error occurred:
        TF249063: The following Web service is not available: http://apptier:31254/_vti_bin/TeamFoundationIntegrationService.asmx. This Web service is used for the Team Foundation Server Extensions for SharePoint Products. The underlying error is: The remote server returned an error: (404) Not Found.. Verify that the following URL points to a valid SharePoint Web application and that the application is available: http://apptier:31254. If the URL is correct and the Web application is operating normally, verify that a firewall is not blocking access to the Web application.
    We have also noticed that the Documents folder in the team project has a red X. Please help. Thanks up front.

    Read the article

  • Setting my NIC to full duplex

    - by David
    I am trying to optimize the network speed of my Solaris x86 server, and have discovered that the Cisco 3548 that it is connected to has issues with the NIC in my server. The NIC appears not to have been configured fully, and is coming up at 100 half-duplex. The 3548 ports are all set to 100 full. Ideally I'd like to have the server set for 100 full, and have been attempting to configure it using ndd commands, but I have had no results. The following command shows:
        -bash-3.00# dladm show-dev
        rtls0           link: unknown   speed: 100 Mbps   duplex: unknown
    The NIC shows up as:
        pci bus 0x0001 cardnum 0x06 function 0x00: vendor 0x10ec device 0x8139
        Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+
    which should be configurable. I have modified the configuration file from auto config (5) to 100 fdx (4) to no avail. If there is no other choice, I could alter the Cisco 3548 to be 100 half-duplex. However, this solution causes a huge performance loss. Currently throughput is about 500 Kbps, when it should be around 40 Mbps.
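    For context, forcing 100 full duplex with ndd normally looks like the sketch below; whether the rtls driver actually exposes these standard link parameters is an assumption (many third-party drivers do not, which would explain ndd having no effect here):
        ndd -set /dev/rtls0 adv_autoneg_cap 0
        ndd -set /dev/rtls0 adv_100fdx_cap 1
        ndd -set /dev/rtls0 adv_100hdx_cap 0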

    Read the article

  • Microsoft Arc Mouse OS X

    - by meepz
    I recently bought a new MacBook Pro with Mountain Lion 10.8 on it. The only portable mouse I have is my Microsoft Arc Mouse. I wanted to use it with the laptop, so I installed IntelliPoint 8.2 for Mac from Microsoft's website. According to their website, this driver is for OS X 10.4-10.7. I thought that wouldn't be too much of an issue, but unfortunately for me, the driver installs fine and the mouse is detected, yet I get no movement, and when I click the buttons nothing happens. I took the mouse with me on a business trip to the EU, and before I left I checked that the mouse worked with my desktop, which is running Windows 7; it worked without any hiccups. I'm not too sure where the OS differs from 10.7 to 10.8. I found an article online, but it doesn't pertain to my mouse, although it could be of assistance. I have tried my version of their adjustment, but I am not too knowledgeable about low-level hardware/software modifications, so I may have done it wrong. Here's the link: http://refluxions.wordpress.com/2008/08/18/mac-os-x-mouse-madnessfixed/
    I get the following details when I check mouse info in the IntelliPoint preferences pane:
        The following Microsoft mouse devices are currently connected to your Macintosh driven by the IntelliPoint software.
        Arc Mouse
        Vendor name: Microsoft
        Product name: Microsoft AE 2.4GHz Transceiver 5.0
        Vendor ID: 045E
        Product ID: 074F
        Device version: 0140
    If anyone has any suggestions on how to fix this, it would be greatly appreciated! I love the mouse and I'm here in the EU for another two weeks. Thanks

    Read the article

  • ajax.googleapis.com stopping my Firefox

    - by Oscar Reyes
    Today, for some strange reason, Firefox stops working properly because it is trying to fetch something from ajax.googleapis.com. Is there something I can do to avoid this? Safari and Chrome work just fine. I tried uninstalling Firebug and clearing the cache. The only thing that worked was disabling JavaScript altogether. This seems to be the culprit link: http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js What can I do?
    EDIT: I think I have found where the problem is. My proxy is serving the file one byte at a time, so Firefox consumes it at that pace. What I don't understand is why Safari and Chrome take it right away. What I did last night was leave Firefox open all night to give it a chance to load the file; my hope was that it would get cached so the next time there would be no need to fetch it. This morning the page loaded successfully, but it was not cached, because the next request failed in the same way. Here's a video showing the problem:

    Read the article

  • How do I resolve certificate errors on HP blade center

    - by Martin Hilton
    I'm trying to sort out the SSL certificate errors that we get when trying to manage our HP c7000 blade enclosures. To that end I have created a signing certificate and imported it into the browser. In Onboard Administrator I created a certificate signing request, signed it with my CA, and then uploaded the certificate. This worked perfectly, and I no longer get any SSL errors when connecting to Onboard Administrator. The problem comes when trying to connect through Onboard Administrator to the iLO on the blades themselves, which is done by clicking on the "Web Administration" link. Onboard Administrator links to the blade with its IP address rather than its host name, but the certificate signing request that iLO creates uses the host name. So even when this certificate is signed, the browser still complains it is for the wrong domain. I either need to get Onboard Administrator to connect to the blades using the host name rather than the IP address, or get a certificate signing request that contains the IP address as the CN rather than the host name; it doesn't particularly matter which. Does anybody know how to configure this?
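    A third option browsers accept is to keep the host name as the CN but add the iLO's IP address as a subjectAltName when signing the CSR. A sketch with openssl (run in bash for the process substitution); the blade host name, IP and file names are hypothetical:
        openssl x509 -req -in ilo-blade01.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
          -days 365 -out ilo-blade01.crt \
          -extfile <(printf "subjectAltName=DNS:ilo-blade01,IP:10.0.0.51")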

    Read the article

  • Advanced Linux file permission question (ownership change during write operation)

    - by Kent
    By default the umask is 0022:
        usera@cmp$ touch somefile; ls -l
        total 0
        -rw-r--r-- 1 usera usera 0 2009-09-22 22:30 somefile
    The directory /home/shared/ is meant for shared files and should be owned by root and the shared group. Files created there by usern (any user) are automatically owned by the shared group. There is a cron job taking care of changing the owning user and owning group (of any moved files) once per day:
        usera@cmp$ cat /etc/cron.daily/sharedscript
        #!/bin/bash
        chown -R root:shared /home/shared/
        chmod -R 770 /home/shared/
    I was writing a really large file to the shared directory. It had me (usera) as the owning user and the shared group as the group owner. During the write operation the cron job ran, and I still had no problem completing the write process. You see, I thought this would happen:
        1. I am writing the file. The file permissions and ownership data for the file look like this: -rw-r--r-- usera shared
        2. The cron job kicks in! The chown line is processed, and now the file is owned by the root user and the shared group.
        3. As the owning group only has read access to the file, I get a file write error! Boom! End of story.
    Why did the operation succeed? A link to some kind of reference documentation to back up the reason would be very welcome (as I could use it to study more details).
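    The behaviour described hinges on the fact that Unix permission checks happen when a file descriptor is opened, not on every write: an already-open descriptor keeps the access it was granted at open time. A small sketch that reproduces this, assuming a user in the shared group with sudo available and a hypothetical file name:
        # open fd 3 for appending while we still have write access
        exec 3>> /home/shared/bigfile
        sudo chown root:shared /home/shared/bigfile
        sudo chmod 740 /home/shared/bigfile      # group now has read-only access
        echo "still writable through the already-open descriptor" >&3
        exec 3>&-                                # closing and reopening for write would now fail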

    Read the article

  • BackupExec 12 + RALUS - VERY slow backups

    - by LVDave
    We use Backup Exec 12 and the Remote Agent for Linux/Unix Servers (RALUS) to back up a large RHEL5 system. For various reasons we need to do a daily working-set job. These working-set jobs run abysmally slowly. The link between the target machine and the BE server is gigabit, and any other type of job runs at 1-3 GB/min. These working-set jobs start out at perhaps 40 MB/min and, over the course of the backup job, slowly drop so low that the BE job-rate display in "current jobs" goes blank. Since we usually only back up changed files for one day, the job is usually small and finishes overnight, so we don't worry about the slowness. But we had some issues with the backup server and missed about 6 days of fairly heavy work on the Linux box, so this working-set job will be a doozy. We have support with Symantec, and I've pestered them a lot about this; they've had me run RALUS in debug mode and send them that log plus a VXgather from the BE host, and they had no fix or workaround. To give an idea, the working-set job mentioned has been running for the last 3 1/2 hours and has backed up just under 10 megabytes. I'm posting this here to see if anybody in the "real world" has seen this and/or has any ideas what might be causing these abysmally slow jobs, since Symantec seems to be clueless.

    Read the article

  • Why do I often have to refresh pages I navigate to once for them (or content in them) to load?

    - by GetOutOfBox
    I have noticed a bizarre pattern when using my PC: when I open a link to a website, it often takes a very long time to load, or times out. Sometimes content on the website will be drawn, but again, it seems to get "stuck" for an unusual amount of time before finishing. Most affected is YouTube; almost every time I navigate to a YouTube video from another website such as Google, the video will not begin playing, instead just displaying the player controls with a black screen where the video should be and the buffering symbol, usually before showing an error such as "The video failed to load". The unusual part is that whenever this happens, refreshing the page causes it to load almost immediately, without any problems. Note that I'm not talking about how some browsers briefly show whatever has been cached when the page is refreshed or loading is stopped, but about the second load of the website actually being faster. I have done my best to rule out some of the obvious causes:
        My Windows 7 desktop computer is the only device that seems to be affected. I use Firefox on it (latest version, Flash updated, etc.).
        My connection has more than enough bandwidth (30 megabits down, 4 up), and I've even tried QoSing all other devices to make sure this isn't happening due to usage spikes.
        Wireshark is not showing any clearly unusual network activity (e.g. frequently dropped packets).

    Read the article

  • PFSense VPN Routing

    - by SvrGuy
    We use pfSense firewalls at three installations, with the following LAN networks:
        1. Datacenter #1: 10.0.0.0/16
        2. Datacenter #2: 10.1.0.0/16
        3. HQ: 10.2.0.0/16
    All of these locations are linked via IPsec tunnels that work properly: hosts in any of the above networks can communicate with hosts in any other of the above networks. Now, for our laptops etc. we established a road-warrior network, 10.3.0.0/16, and have implemented OpenVPN to link the laptops to Datacenter #1. This works great too, so our laptops can connect to and communicate with any host in Datacenter #1 (anything on 10.0.0.0/16). The problem is the laptops can't communicate with any hosts that Datacenter #1 reaches over its IPsec tunnels to Datacenter #2 (and/or the HQ, for that matter). Does anyone know what to do, configuration-wise, on the pfSense box in Datacenter #1 to route packets received on the OpenVPN tunnel to Datacenter #2 over the IPsec tunnel? It could be a setting in OpenVPN or some sort of static route or some such. Any ideas?
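    A sketch of the two pieces this usually needs, offered as an assumption about this setup rather than a verified fix: the OpenVPN server has to hand the remote-site routes to the clients, and each IPsec tunnel needs a Phase 2 entry covering the 10.3.0.0/16 road-warrior network.
        # in the OpenVPN server's advanced/custom options on the Datacenter #1 pfSense
        push "route 10.1.0.0 255.255.0.0";
        push "route 10.2.0.0 255.255.0.0";
    On the IPsec side this would mean an additional Phase 2 entry with local network 10.3.0.0/16 and remote network 10.1.0.0/16 (and 10.2.0.0/16), mirrored on the far ends.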

    Read the article

  • Need info on the hidden SET switch "/S" and how to implement it

    - by ttyl
    I am having some problems doing a proper search for "SET/S" or "SET /S" on Google and other search providers. The difficulty arises with the slash "/": it is commonly used in search engines to add a "nearness" to the search parameter, and I have found no way to escape the slash when searching for one. For those on this community, try searching this domain with the two search terms listed above; it just doesn't work, and ends up looking for SET S instead. But I digress. So I'm asking the uber-gurus on this board to help me find documentation of /S and how to implement SET /S in a batch file. SET is an internal DOS/cmd command and allows many things, including prompting the user, integer math, and writing environment strings. Looking at this link, it appears that /S is only for OS/2: http://www.robvanderwoude.com/os2set.php But I'm thinking that this might not be the case, due to this thread, where it is apparently used with substrings and macros: http://www.dostips.com/forum/viewtopic.php?t=2704
    Any help is much appreciated.

    Read the article

  • Optimal Networking Setup for a 2-Story unit?

    - by user29336
    I am moving into a 4-bedroom, two-story unit, roughly 2,200 sq ft. I want the absolute maximum throughput possible at all focal points. We're all in internet-related industries; between gaming and web development, latency and throughput are major factors for us. Here are our main focal points:
        1. Garage (office), downstairs
        2. Each bedroom (x4), upstairs
        3. Living room, downstairs
    The fastest line we can get is Comcast 50 Mb down / 5 Mb up (Wideband). I am looking for the best way to achieve wireless and wired performance for our setup. Our gaming computers may be in our bedrooms, and we also may bring them down to the office every now and then for "LAN" sessions. Most wireless use will happen downstairs with our laptops, but since we may do LAN sessions, hard-wired latency may be important there too. My concerns:
        If we do only wireless, there would be too much latency for gaming. I don't know if placing one D-Link DGL-4500 on the top floor would be enough; I currently own one (http://dlink.com/us/en/home-solutions/support/product/dgl-4500-xtreme-n-gaming-router). As far as I'm aware, wireless signals transfer best top-down. Would this wireless router on the top floor be enough, and that's it?
        My second strategy was a combination of wiring and wireless, but I'm not sure what the easiest way to do this is. This is a place we're renting, so I'm not sure how much leeway we have with wiring, but we're all pretty competent... if we can't drill through a wall we can probably "stitch" the cables across the edges wherever needed.
    Thoughts on the optimal way to do this?

    Read the article
