Search Results

Search found 8364 results on 335 pages for 'unable'.


  • Xdebug 2.0.5 with Zend Server CE PHP5.2.12 possible?

    - by notbrain
    I'm using Zend Server CE with PHP 5.2.12 on OS X Snow Leopard and want to use Xdebug. I've turned off Zend Data Cache, Zend Optimizer+, and Zend Debugger in the console. When I run

        $ cd ~/Downloads/xdebug-2.0.5
        $ /usr/local/zend/bin/phpize

    I get

        Configuring for:
        PHP Api Version:        20041225
        Zend Module Api No:     20060613
        Zend Extension Api No:  220060519

    The PHP API Version, 20041225, seems to be off from (i.e. wrong according to) the documentation. When I continue the installation with

        $ ./configure ---with-php-config=/usr/local/zend/bin/php-config
        $ make
        $ sudo make install

    the installed xdebug.so seems to be the wrong one. Which version of Xdebug do I need for this PHP API version? The Zend API numbers are OK; I'm just confused as to why the PHP API Version doesn't match.

        PHP Warning:  PHP Startup: Unable to load dynamic library '/usr/local/zend/lib/php_extensions/xdebug.so' - (null) in Unknown on line 0
        PHP 5.2.12 (cli) (built: Feb 17 2010 13:39:36)
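
    As a quick sanity check (a sketch, not from the post above; paths assume the default Zend Server CE layout), you can compare the API version the Zend-bundled PHP binary reports with what phpize configured against, and make sure the phpize/php-config being run are really the Zend copies and not an older system copy earlier in $PATH:

        # API version of the PHP binary that will actually load the extension
        /usr/local/zend/bin/php -i | grep 'PHP API'

        # Extension directory the Zend php-config points at
        /usr/local/zend/bin/php-config --extension-dir

        # Which phpize/php-config are first in $PATH?
        which phpize php-config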

    Read the article

  • Frustration with superuser.com user interface (please migrate this to "meta")

    - by Randolf Richardson
    Can someone please migrate this question to "meta" for me? I'm unable to post there while I wait for OpenID to fix my password problems. Thanks. I'm having problems with superuser.com's interface -- when I provide an answer to a question, sometimes the buttons get locked and then I find out that the question was migrated. Usually I can go back and copy-and-paste my answer at whatever site it goes to, but on occasion my answer is lost and I have to re-type it. This is very time-consuming, and makes it quite frustrating to use the system. In addition, I find that I'm wasting a lot of time dealing with having to re-register on the other sites. My suggestion is to not de-activate the "Submit answer" button but to just forward that along to the migrated site automatically, thus ensuring that answers that people put a lot of effort into don't get lost. Thanks in advance.

    Read the article

  • How to delete protected files in NTFS?

    - by Balchev
    Hello, I want to delete an old Windows directory from my system drive (C:), but I am unable to due to NTFS permissions. I tried from Win 7 and Win 2003, but can't do it. I tried safe mode as well, with the same result. Is there any way to work around this (other than formatting the drive)? Perhaps changing the owner or something? It errors at files like "oldwin/bfsvc.exe". Is there some "superuser" in Windows similar to the Linux root account? Thanks
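
    A commonly suggested approach (sketched here, with C:\oldwin standing in for the actual folder; not from the original post) is to take ownership of the tree from an elevated command prompt, grant yourself full control, and then delete it:

        rem Take ownership of the folder tree (answer Yes to any prompts)
        takeown /F C:\oldwin /R /D Y

        rem Grant the Administrators group full control, recursively
        icacls C:\oldwin /grant Administrators:F /T

        rem Remove the tree
        rd /S /Q C:\oldwin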

    Read the article

  • How to quickly switch monitors on multi-monitor setup? (W7)

    - by Saxtus
    At my desktop PC, I have 3 displays hooked up on my Palit GeForce GTX 480 to extend my desktop:

        DVI:  Samsung SyncMaster T220 (22" 16:10, 1680x1050, as my main)
        DVI:  Samsung SyncMaster 940BF (19" 4:3, 1280x1024)
        HDMI: Sony Bravia KDL-32D3000 (32" 16:9, 1360x768)

    As expected, the GFX card can only utilize 2 of the displays at any given time, so I have to manually switch using Windows' "Display Properties" dialog each time I want to switch between them. I've tried UltraMon, DisplayFusion and PowerStrip (so please don't refer me to a question where those are given as solutions) but they are unable to switch monitors. Instead they can only change display resolutions and settings on the existing active monitors, with especially weird results when I've saved the monitor states and then tried to switch monitors using the saved states! Any idea on how to bypass the awful "Display Properties" dialog and its "confirm your change" timer? Thanks in advance.

    Read the article

  • Host not visible in ECC after pushing master agent on server

    - by wildchild
    Hi all! The ECC Master Agent has been pushed to one of the servers. Once a Master Agent is installed, the host should appear within ECC; however, I am unable to see the host. I asked the server team to stop and start the Master Agent, and to check whether the required port is enabled, so that the server can talk to the ECC server. After restarting the service I get an error. I even checked whether the port is used by any other service; it is not. Could somebody please help with what should be done here? It's a Windows server. Here's the error:

        "The EMC ControlCenter Master Agent service on Local Computer started and then stopped. Some services stop automatically if they have no work to do, for example, the Performance Logs and Alerts service."

    Thanks in advance!

    Read the article

  • yum list installed including version of all installed packages CentOS 5.4

    - by Andy
    I have a list of packages installed with yum on CentOS 5.4 [root@server ~]# yum list installed ... Installed Packages GConf2.x86_64 2.14.0-9.el5 installed ImageMagick.x86_64 6.2.8.0-4.el5_1.1 installed MAKEDEV.x86_64 3.23-1.2 installed MySQL-python.x86_64 1.2.1-1 installed I would like to download these rpms locally using yumdownloader --resolve MySQL-python-1.2.1-1.x86_64 etc. However the package formatting is different (MySQL-python.x86_64 1.2.1-1 vs MySQL-python-1.2.1-1.x86_64) so I am unable to download them using the above command. I don't want to have to parse the output of yum list installed, and I also don't want to use the contents of /var/log/yum.log* as I'll have to account for erased packages and version discrepancies. However /var/log/yum.log* does have the formatting I require... May 25 14:58:15 Installed: groff-1.18.1.1-11.1.x86_64 May 25 14:58:15 Installed: bzip2-1.0.3-4.el5_2.x86_64 Any suggestions?
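
    One approach (a sketch, not from the question) is to skip parsing yum's output entirely and have rpm print the installed packages already in the name-version-release.arch form that yumdownloader expects:

        # Every installed package as name-version-release.arch,
        # e.g. MySQL-python-1.2.1-1.x86_64
        rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' | sort > installed.txt

        # Feed that list to yumdownloader
        yumdownloader --resolve $(cat installed.txt)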

    Read the article

  • Bulk Deleting All Messages in a Folder in Microsoft Outlook Web Access

    - by Chris S
    How do you delete all messages in a folder in Outlook, preferably through Web Access? I left my Outlook account unattended for several days (on vacation) and when I got back I found several folders with over 5k emails, mostly error logging or spam. When I try to open the Outlook client, it just locks up, presumably unable to download that many emails. I can view at most 100 emails at a time, but I can't select all emails to delete or permanently delete them immediately, so manually deleting this many emails is going to take a while. Gmail has a similar feature to select and delete all emails in a folder, and that's free, so I figure that, being a quality non-free product from Microsoft, Outlook should have a similar feature (yes, that's sarcasm). I've Googled, but I'm not finding anything. Is this possible?

    Read the article

  • Can't connect to YouTube from specific network

    - by Tyilo
    Using my current network, I am unable to connect to http://www.youtube.com/. It doesn't matter which browser I use or whether I use a CLI command (wget, curl).

    Error in Google Chrome:

        Oops! Google Chrome could not connect to www.youtube.com

    Error using curl:

        curl: (7) couldn't connect to host

    If I use nslookup to get the IP address of YouTube, I get 173.194.32.32. If I go to http://173.194.32.32/ in my browser it can connect, but as Google is probably checking the Host HTTP header, it shows Google's front page instead. There are no blocked websites on the router, and other devices on the network seem to work. My computer only has this problem on this specific network. I am using Mac OS X 10.8.2 on a MacBook (mid 2009).
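
    To narrow down whether the failure is DNS or the connection itself, one hedged check (reusing the IP nslookup returned above) is to request YouTube by IP while sending the proper Host header, and to compare the system resolver against a public one:

        # Does the TCP connection work when we bypass this machine's DNS answer?
        curl -v -H 'Host: www.youtube.com' http://173.194.32.32/

        # What do the system resolver and a public resolver each return?
        nslookup www.youtube.com
        nslookup www.youtube.com 8.8.8.8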

    Read the article

  • Could not open IPv6 address in web browser

    - by vito
    I have the IPv6 address fe80::21f:a4ff:fe91:2e44%4. This is the address of the modem/router. I am unable to open this address in a browser to view the web configuration. I am able to telnet to fe80::21f:a4ff:fe91:2e44%4, view the console UI and change settings. I can also open 192.168.1.1 (the IPv4 address of the modem/router) in a browser. I tried [fe80::21f:a4ff:fe91:2e44] in all browsers (IE, Firefox and Chrome); it was not possible.
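
    Link-local (fe80::) addresses need the zone index even inside the URL, and browsers generally don't accept it; as a hedged workaround you can at least test the web interface from the command line, percent-encoding the zone (%4 becomes %254, and curl's -g switch turns off bracket globbing):

        curl -g -v 'http://[fe80::21f:a4ff:fe91:2e44%254]/'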

    Read the article

  • Error running phusion passenger in standalone mode

    - by msidell
    I'm trying to run standalone phusion passenger so that I can run different ruby rvm configurations on the same host. I already have ruby and passenger running fine on this host. I am following the instructions here. When I run standalone passenger the first time, it appears to successfully install nginx. But then when it tries to run, I get this error: [root@clark directra]# passenger start -a 127.0.0.1 -p 3001 -d --user dweb *** ERROR *** Could not start Passenger Nginx core: nginx: [alert] could not open error log file: open() "/tmp/passenger-standalone.16757/logs/error.log" failed (2: No such file or directory) nginx: [alert] Unable to start the Phusion Passenger watchdog (/var/lib/passenger-standalone/3.0.11-x86-ruby1.9.3-linux-gcc4.1.2-1002/support/ agents/PassengerWatchdog): Permission denied (13) (13: Permission denied) Stopping web server... done FWIW, /tmp is writeable. Any idea what's wrong?
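
    Not from the original post, but two things worth checking when the watchdog dies with "Permission denied" are whether /tmp (where passenger-standalone writes its logs) or /var/lib/passenger-standalone (where the agent binaries live) sit on filesystems mounted noexec, and whether the --user account can actually execute the watchdog:

        # Are /tmp or /var mounted with noexec/nosuid?
        mount | grep -E ' /tmp | /var '

        # Can the dweb user reach and execute the watchdog binary?
        ls -ld /tmp /var/lib/passenger-standalone
        sudo -u dweb test -x /var/lib/passenger-standalone/3.0.11-x86-ruby1.9.3-linux-gcc4.1.2-1002/support/agents/PassengerWatchdog \
          && echo "executable for dweb" || echo "not executable for dweb"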

    Read the article

  • Need updated glibc package version 3.4.15 or later for RHEL 6

    - by Tejas
    I want to upgrade my currently running applications to the latest versions, but due to a package issue I am unable to install them. I get a common error in all of them:

        /usr/lib64/libstdc++.so.6: version 'GLIBCXX_3.4.15' not found

    When I tried to update the glibc package I got the following output:

        [root@agastya ~]# yum install glibc
        Loaded plugins: refresh-packagekit, rhnplugin
        epel/metalink                        | 3.8 kB     00:00
        epel                                 | 4.3 kB     00:00
        epel/primary_db                      | 5.0 MB     01:33
        epel-testing/metalink                | 3.8 kB     00:00
        epel-testing                         | 4.3 kB     00:00
        epel-testing/primary_db              | 295 kB     00:03
        rhel-x86_64-server-6                 | 1.8 kB     00:00
        rhel-x86_64-server-6/primary         |  11 MB     02:02
        rhel-x86_64-server-6                              8816/8816
        Setting up Install Process
        Package glibc-2.12-1.80.el6_3.6.x86_64 already installed and latest version
        Nothing to do
        [root@agastya ~]#

    Do I need to add some more repositories? If yes, how?
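
    Worth noting (not stated in the question): GLIBCXX_3.4.15 is a symbol version of libstdc++ (shipped by the gcc/libstdc++ packages), not of glibc, which is why updating glibc changes nothing here. A quick way to see which versions the installed libstdc++ actually provides and which package owns it:

        # GLIBCXX symbol versions exposed by the installed libstdc++
        strings /usr/lib64/libstdc++.so.6 | grep '^GLIBCXX'

        # Which package provides that library?
        rpm -qf /usr/lib64/libstdc++.so.6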

    Read the article

  • Disable disk caches in AWS EBS for PostgreSQL?

    - by Alexandr Kurilin
    It's my understanding that, without correctly disabling OS-level and drive-level caching, there is a chance that in case of system failure the Write-Ahead Log might not be saved correctly and in fact might get corrupted, possibly preventing data recovery. I've already made sure that wal_sync_method=fdatasync, however I was unable to make any configuration changes with hdparm since I get the following:

        $ sudo hdparm -I /dev/xvdf
        /dev/xvdf:
        HDIO_DRIVE_CMD(identify) failed: Invalid argument

    Looks like that option is not available in the kind of setup you get on EC2. Am I missing anything here? Are there any other obvious caches I have to disable to ensure the WAL's safety?
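
    Not part of the question, but one hedged way to gain some confidence in the chosen wal_sync_method on a given volume is PostgreSQL's contrib utility pg_test_fsync (called test_fsync in older contrib packages), which exercises the different sync methods against a test file on that disk; an implausibly high fdatasync rate usually means something in the path is acknowledging flushes from cache. The mount point below is an assumption:

        # Run from a directory on the EBS volume so the test file lands on that device
        cd /mnt/ebs-volume
        pg_test_fsync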

    Read the article

  • Can't update Scala on Gentoo

    - by xhochy
    As I wanted to test Scala 2.9.2 on my gentoo system I tried updated the package but ended up with this error. I can't figure out where the problem may be: Calculating dependencies ...... done! >>> Verifying ebuild manifests >>> Jobs: 0 of 1 complete, 1 running Load avg: 0.23, 0.16, 0.20 >>> Emerging (1 of 1) dev-lang/scala-2.9.2 >>> Jobs: 0 of 1 complete, 1 running Load avg: 0.23, 0.16, 0.20 >>> Failed to emerge dev-lang/scala-2.9.2, Log file: >>> Jobs: 0 of 1 complete, 1 running Load avg: 0.23, 0.16, 0.20 >>> '/var/tmp/portage/dev-lang/scala-2.9.2/temp/build.log' >>> Jobs: 0 of 1 complete, 1 running Load avg: 0.23, 0.16, 0.20 >>> Jobs: 0 of 1 complete, 1 running, 1 failed Load avg: 0.23, 0.16, 0.20 >>> Jobs: 0 of 1 complete, 1 failed Load avg: 0.23, 0.16, 0.20 * Package: dev-lang/scala-2.9.2 * Repository: gentoo * Maintainer: [email protected] * USE: amd64 elibc_glibc kernel_linux multilib userland_GNU * FEATURES: sandbox [01m[31;06m!!! ERROR: Couldn't find suitable VM. Possible invalid dependency string. Due to jdk-with-com-sun requiring a target of 1.7 but the virtual machines constrained by virtual/jdk-1.6 and/or this package requiring virtual(s) jdk-with-com-sun[0m * Unable to determine VM for building from dependencies: NV_DEPEND: virtual/jdk:1.6 java-virtuals/jdk-with-com-sun !binary? ( dev-java/ant-contrib:0 ) app-arch/xz-utils >=dev-java/java-config-2.1.9-r1 source? ( app-arch/zip ) >=dev-java/ant-core-1.7.0 dev-java/ant-nodeps >=dev-java/javatoolkit-0.3.0-r2 >=dev-lang/python-2.4 * ERROR: dev-lang/scala-2.9.2 failed (setup phase): * Failed to determine VM for building. * * Call stack: * ebuild.sh, line 93: Called pkg_setup * scala-2.9.2.ebuild, line 43: Called java-pkg-2_pkg_setup * java-pkg-2.eclass, line 53: Called java-pkg_init * java-utils-2.eclass, line 2187: Called java-pkg_switch-vm * java-utils-2.eclass, line 2674: Called die * The specific snippet of code: * die "Failed to determine VM for building." * * If you need support, post the output of `emerge --info '=dev-lang/scala-2.9.2'`, * the complete build log and the output of `emerge -pqv '=dev-lang/scala-2.9.2'`. !!! When you file a bug report, please include the following information: GENTOO_VM= CLASSPATH="" JAVA_HOME="" JAVACFLAGS="" COMPILER="" and of course, the output of emerge --info * The complete build log is located at '/var/tmp/portage/dev-lang/scala-2.9.2/temp/build.log'. * The ebuild environment file is located at '/var/tmp/portage/dev-lang/scala-2.9.2/temp/die.env'. * Working directory: '/var/tmp/portage/dev-lang/scala-2.9.2' * S: '/var/tmp/portage/dev-lang/scala-2.9.2/work/scala-2.9.2-sources' * Messages for package dev-lang/scala-2.9.2: * Unable to determine VM for building from dependencies: * ERROR: dev-lang/scala-2.9.2 failed (setup phase): * Failed to determine VM for building. * * Call stack: * ebuild.sh, line 93: Called pkg_setup * scala-2.9.2.ebuild, line 43: Called java-pkg-2_pkg_setup * java-pkg-2.eclass, line 53: Called java-pkg_init * java-utils-2.eclass, line 2187: Called java-pkg_switch-vm * java-utils-2.eclass, line 2674: Called die * The specific snippet of code: * die "Failed to determine VM for building." * * If you need support, post the output of `emerge --info '=dev-lang/scala-2.9.2'`, * the complete build log and the output of `emerge -pqv '=dev-lang/scala-2.9.2'`. * The complete build log is located at '/var/tmp/portage/dev-lang/scala-2.9.2/temp/build.log'. * The ebuild environment file is located at '/var/tmp/portage/dev-lang/scala-2.9.2/temp/die.env'. 
* Working directory: '/var/tmp/portage/dev-lang/scala-2.9.2' * S: '/var/tmp/portage/dev-lang/scala-2.9.2/work/scala-2.9.2-sources' The following eix output may help: % eix java-virtuals/jdk-with-com-sun [I] java-virtuals/jdk-with-com-sun Available versions: 20111111 {{ELIBC="FreeBSD"}} Installed versions: 20111111(16:08:51 18/04/12)(ELIBC="-FreeBSD") Homepage: http://www.gentoo.org Description: Virtual ebuilds that require internal com.sun classes from a JDK Both virtual jdks 1.6 and 1.7 are installed: % eix virtual/jdk [I] virtual/jdk Available versions: (1.4) ~1.4.2-r1[1] (1.5) 1.5.0 ~1.5.0-r3[1] (1.6) 1.6.0 1.6.0-r1 (1.7) (~)1.7.0 Installed versions: 1.6.0-r1(1.6)(23:22:48 10/11/12) 1.7.0(1.7)(23:21:09 10/11/12) Description: Virtual for JDK [1] "java-overlay" /var/lib/layman/java-overlay
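
    The log's complaint boils down to the build needing a 1.7-capable JDK while the system VM is still pinned to the 1.6 slot. A hedged way to check and, if appropriate, switch the build VM on Gentoo (the VM name below is only an example) is:

        # Which VMs does java-config know about, and which is the system VM?
        java-config --list-available-vms
        eselect java-vm list

        # Point the system VM at the installed 1.7 JDK (example name)
        eselect java-vm set system icedtea-bin-7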

    Read the article

  • Permission forbidden on localhost with apache2

    - by N Alex
    Here is what I am trying to do. I tried to add another folder to apache and I get the following error when trying to acces testing/index.html. The idea is that I would like to have for every customer a folder like /home/neagoe/Work/InterWebs/Projects/[PROJECT NAME]/CustomerProjects/website/dist. Forbidden You don't have permission to access /index.html on this server. Apache/2.2.22 (Ubuntu) Server at testing Port 80 Here are the steps that I followed: Step1: sudo chmod a+x /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist Step2: sudo chown -R www-data:www-data /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist sudo chmod -R 775 /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist Step3: sudo adduser $USER www-data Step4: sudo a2enmod userdir Step5: sudo cp /etc/apache/sites-available/default /etc/apache/sites-available/testing I edited the file /etc/apache/sites-available/testing so it looks like this: <VirtualHost *:80> ServerAdmin webmaster@localhost ServerName testing DocumentRoot /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist/ > Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined </VirtualHost> Step6: I edited hosts ("/etc/hosts") so it looks like this: 127.0.0.1 localhost 127.0.0.1 testing # The following lines are desirable for IPv6 capable hosts ::1 ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters Step7: sudo a2ensite testing sudo service apache2 restart I searched for about 2 hours on the internet but I can't figure out what went wrong. All the pages that I found following the same steps as described above. I know there are similar questions here on the internet, but the answer is to change permission to the directory which I did on Step2. I am sorry if this is really a duplicate but I could't find the right answer. Thank you! PS. I asked this also on AskUbuntu but didn't get any answers so I'm trying my luck here. Edit: There isn't much on the error log or the access log. 
On the access.log: ::1 - - [10/Aug/2013:11:23:28 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:29 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:31 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:32 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:33 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:34 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:35 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" 127.0.0.1 - - [10/Aug/2013:11:23:23 +0300] "POST /wordpress-testing/wp-cron.php?doing_wp_cron=1376123003.7026669979095458984375 HTTP/1.0" 200 705 "-" "WordPress/3.6; http://localhost/wordpress-testing" ::1 - - [10/Aug/2013:11:23:36 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:37 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:38 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" 127.0.0.1 - - [10/Aug/2013:11:31:32 +0300] "GET /index.html HTTP/1.1" 200 485 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:23.0) Gecko/20100101 Firefox/23.0" And the last line repeats for about 200 rows. On the error.log: 1. This lines repeat from time to time. PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525 /msql.so' - /usr/lib/php5/20100525/msql.so: cannot open shared object file: No such file or directory in Unknown on line 0 [Sat Aug 10 13:06:42 2013] [notice] Apache/2.2.22 (Ubuntu) PHP/5.4.9-4ubuntu2.2 configured -- resuming normal operations [Sat Aug 10 13:07:36 2013] [notice] caught SIGTERM, shutting down PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525/msql.so' - /usr/lib/php5/20100525/msql.so: cannot open shared object file: No such file or directory in Unknown on line 0 [Sat Aug 10 13:07:37 2013] [notice] Apache/2.2.22 (Ubuntu) PHP/5.4.9-4ubuntu2.2 configured -- resuming normal operations 2. And this is the predominant error. (hundreds of lines) [Sat Aug 10 13:07:40 2013] [error] [client 127.0.0.1] (13)Permission denied: access to /index.html denied
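
    One thing the steps above don't cover (offered as a guess, not a confirmed fix): www-data must be able to traverse every directory on the way to the DocumentRoot, not just the final dist folder, and home directories are often 700/750. namei shows the owner and permissions of each path component in one shot:

        # Owner and permissions of every directory on the way to the DocumentRoot
        namei -mo /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist

        # If a component (e.g. /home/neagoe) lacks execute for others, grant traversal only
        sudo chmod o+x /home/neagoe /home/neagoe/Work /home/neagoe/Work/InterWebs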

    Read the article

  • DFS replication initial step problem

    - by vn.
    I just set up DFS on my network and it's working fine. Now I'm trying to set up DFS-R on a test folder, but at the end of the procedure (everything went fine: I selected my 2 folders, the primary folder, the replication topology and such) I get this error message (roughly translated from French): "Unable to define security on the replicated folder. The administrative share does not exist." I'm also wondering if there is any security required on the folders to replicate so that DFS-R can access them. I was trying to add SYSTEM to the security, but it won't find it / allow me. The folder has many, many files and folders on the primary DFS pointer, but none on the second; I just created it with roughly the same rights. Note that the primary DFS pointer is on a 2008 server, and the DFS service and the secondary DFS pointer are on a 2008 R2 server. Any help is very appreciated, thanks.
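
    If the message really is about the administrative share (a guess based on the translated error), it's worth confirming that the hidden drive shares exist on the replica server, since the setup wizard appears to rely on them when applying security; a quick check from an elevated prompt:

        rem List all shares, including the hidden administrative ones (C$, D$, ADMIN$)
        net share

        rem Recreate a missing drive share if needed (example for drive D:)
        net share D$=D:\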

    Read the article

  • Bluetooth radio device is not available

    - by Mike
    I reinstalled my Windows 7 operating system, and have since been unable to detect my Logitech M555b Bluetooth mouse. Here is some info about my system: I have a working connection with a Bluetooth printer. I have a message in the "My Bluetooth" section of Explorer stating "Bluetooth radio device is not available". The Device Manager indicates that the Generic Bluetooth Radio is working correctly. The Bluetooth "F12" switch is on. Mouse batteries are new and the right way around. I'm presuming that I need a Bluetooth driver which is non-generic. The computer is a Clevo P150HMx (the version with the GTX 485 GPU) with the default WLAN/Bluetooth combo card manufactured by Realtek (I think it's a Realtek RTL8188CE card). I think I have the drivers installed, but I still get the generic driver in the Device Manager. I'm confused. Please help, I'm going mad on the touchpad. (Thanks for the touch-up, wizlog.)

    Read the article

  • Dell EqualLogic vs. EMC VNXe [closed]

    - by Untalented
    We've been looking into SMB SANs, and based on the competitive pricing I've been getting we're really liking these two arrays. There are some pros to both solutions, but I've been unable to decide which to choose. The EMC offers better expandability, since you can buy an additional shelf (roughly $1200) and can then add drives to the array. However, the Dell unit is still very nice. Can anyone comment on their experiences with the two and thoughts on this? Also, to get the VMware Storage API support you need VMware Enterprise. How much additional performance does this provide? It's roughly $15k more than the Essentials Plus bundle we're looking at (this is a small environment: 3 hosts, 1 array).

    Read the article

  • Is it possible to back up an SQL database hosted on an Azure VM to our internal DPM 2012?

    - by Florent Courtay
    I've got an SQL database on an Azure VM (non-domain) that I'd like to back up to our internal DPM 2012 server. I've installed the DPM agent on the Azure VM, set up DCOM to use only ports 5000 to 5025 on both the VM and the DPM server, and created the 135, 5000-5025, 5718 and 5719 endpoints on Azure and on the VM's firewall. When trying to add this agent to the DPM server, I end up with an error: "Unable to contact the protection Agent on server .cloudapp.net". I know there is some sort of connection between them, as using a wrong password gives me an Invalid Credentials error. The error seems to be DCOM-related: when trying to connect to the Azure VM from the DPM server using wbemtest, I get the error "0x800706ba The RPC server is unavailable", but access is denied when using wrong credentials. What am I missing? Has someone been able to achieve this kind of setup? Thanks for your help!

    Read the article

  • IIS7 response size thresholds

    - by DanielM
    I have a customer who is attempting video playback via HTTP progressive download of very large files (over 1 GB). There is no problem once a file is cached at the edge via my CDN, but hits to my origin (first hits prior to edge-cache population) experience stalling and loss of sync between audio and video about an hour and a half into playback. This occurs pretty reliably at that point, suggesting that some threshold somewhere is getting hit. Are there IIS configuration knobs governing HTTP response size? Other data points: I am unable to replicate this problem. I am looking at client bandwidth and last-mile issues. I am looking at possible encoding recipe dependencies. But this problem never came up when we were using a "push" cache configuration (CDN-hosted origin), so something funky server-side at my origin seems like a likely culprit. Thanks ...
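
    Not a confirmed cause, but for stalls that reliably appear after a fixed amount of playback time it can help to dump the server-wide limits IIS applies to long-running responses (connection timeout, minimum transfer rate, global bandwidth) and the per-site limits; a hedged way to inspect them with appcmd:

        rem Server-wide limits (minBytesPerSecond, connectionTimeout, maxGlobalBandwidth)
        %windir%\system32\inetsrv\appcmd.exe list config -section:system.applicationHost/webLimits

        rem Per-site limits live in the sites section
        %windir%\system32\inetsrv\appcmd.exe list config -section:system.applicationHost/sites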

    Read the article

  • "sh: /usr/sbin/xenstored: not found" - But it's there?

    - by Matt H
    What would cause running the file /usr/sbin/xenstored to print sh: /usr/sbin/xenstored: not found However, the file /usr/sbin/xenstored is there and is not a symbolic link. Actually I should be running this as root. That prints a similarly odd message. sudo: unable to execute /usr/sbin/xenstored: No such file or directory By the way, xenstored is not a script, it's an ELF executable. My guess is that it's because I haven't gotten all the dependent libraries installed. However, I would expect it to say something like this: ./xenstored: error while loading shared libraries: libxenctrl.so.4.0: cannot open shared object file: No such file or directory Which is true of running xenstored on a system that doesn't have all the required libraries. Why do I get "not found" vs the much more useful "cannot open shared object file"?
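
    A hedged explanation for this symptom: when the shell reports "not found" for an ELF binary that plainly exists, it is often the binary's ELF interpreter (the dynamic loader named in its program headers) that is missing, e.g. a 32-bit /lib/ld-linux.so.2 on a 64-bit-only install. Two quick checks:

        # What architecture is the binary, and which loader does it ask for?
        file /usr/sbin/xenstored
        readelf -l /usr/sbin/xenstored | grep interpreter

        # Does that loader actually exist on this system?
        ls -l /lib/ld-linux.so.2 /lib64/ld-linux-x86-64.so.2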

    Read the article

  • Creating a FAT file system and saving it into a file in GNU/Linux?

    - by RubenT
    I'll tell you my problem: I want to create a FAT file system and save it into a file, so I can mount it in Linux using something like:

        sudo mount -t msdos <file> <dest_folder>

    Maybe I'm wrong and this cannot be done. Anyway, the problem is this: I'm trying to create the file containing a FAT file system, and I'm running this command:

        sudo mkfs.vfat -F 32 -r 112 -S 512 -v -C "test.fat" 100

    That, according to the mkfs man page, will create a FAT32 file system with 112 root-directory entries, a logical sector size of 512 bytes and 100 blocks in total, and save it into "test.fat". But it fails, and bash tells me:

        mkfs.vfat: unable to create test.fat

    What is going on? I think I am misunderstanding how mkfs works and how to use it. Is it possible to write a filesystem into a file?
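
    Writing a filesystem into a plain file is certainly possible; as a sketch (sizes and mount point are arbitrary), one common route is to create the file first and then format and loop-mount it, which also sidesteps any problem with -C and a block count too small for FAT32:

        # Create an empty 64 MiB image file
        dd if=/dev/zero of=test.fat bs=1M count=64

        # Format it as FAT32 (no -C needed because the file already exists)
        mkfs.vfat -F 32 -v test.fat

        # Loop-mount it
        sudo mkdir -p /mnt/fatimg
        sudo mount -o loop -t vfat test.fat /mnt/fatimg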

    Read the article

  • Configuring Linux Network

    - by Reiler
    Hi, I'm working on some software that runs on a CentOS 5.x installation. Our customers are not allowed to log in to Linux; everything is done from Windows applications developed by us. So we have built a frontend for the user to configure the network setup: static/DHCP, IP address, gateway, DNS, hostname. Right now I let the user enter the information in the Windows app, and then write it on the Linux server like this:

        Write to /etc/resolv.conf: nameserver
        Write to /etc/sysconfig/network: gateway and hostname
        Write to /etc/sysconfig/network-scripts/ifcfg-eth0: IP address, netmask, BOOTPROTO (DHCP or static)

    I also (after some time) found out that I was unable to send mail unless I wrote in /etc/hosts:

        127.0.0.1 Hostname

    All this seems to work, but is there a better/easier way to do this? Also, I read the network configuration in nearly the same way, but if I use DHCP I miss some information, for instance the IP address. I know that I can get some information from the command line (ifconfig), but I don't get, for instance, the hostname, gateway and DNS. Is there a command-line tool that will display all of this?
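
    As a hedged sketch of the read-back side on CentOS 5 (eth0 assumed), the values that DHCP fills in can be pulled from standard tools rather than from the config files:

        # Hostname
        hostname

        # Current IP address and netmask on eth0
        ip addr show eth0

        # Default gateway
        ip route show | awk '/^default/ {print $3}'

        # DNS servers currently in use
        grep '^nameserver' /etc/resolv.conf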

    Read the article

  • Installing Linux on PowerEdge R410 via USB

    - by Bill Johnson
    I’m hoping someone can help me with the following issue. I have a Dell PowerEdge R410 and basically the Optical Drive has failed when I have been given the server. I have installed 2 SATA drives and want to install Ubuntu 11.04; however, each time I have tried i.e. using bootable .iso on USB it failed. I assume it's failing as with a lot of releases they all look at the CD drive. Ubunutu has failed on installation with the error message unable to mount CD. I have tried installing Microsoft Hyper-v and that also fails as during installation it asks for CD/DVD drivers. Tried embedding ISO's from various distro's (Linux and Windows) with drivers and that hasn't worked out either. Does anyone have any idea on how I can get Ubuntu on this server? Should I look towards an old distro perhaps?

    Read the article

  • Backup Exec backup-to-disk folder creation - Access denied

    - by ewwhite
    I'm having a difficult time creating a backup-to-disk folder in Symantec Backup Exec 12.5 and Backup Exec 2010. The backend storage is a Nexenta/ZFS-based NAS filer sharing the volume via CIFS. I've also seen the issue on other *nix-based NAS devices. I've attempted mapping the drive, providing the full paths to the folder, etc. I can browse to the share just fine from within Windows, but Backup Exec fails to create the B2D folder with different variants of an "Unable to create new backup folder. Access denied" error. I've attempted creating service accounts in Backup Exec to handle the authentication, but nothing seems to work. What's the key to making this work?

    Read the article

  • Making fdisk see software RAID 0

    - by unknownthreat
    I am following http://grub.enbug.org/Grub2LiveCdInstallGuide and I am using software RAID 0. I am using an Ubuntu 10.10 LiveCD and am trying to restore GRUB 2 after installing Windows 7 on another partition. Here is the console output:

        ubuntu@ubuntu:~$ sudo fdisk -l
        Unable to seek on /dev/sda
        ubuntu@ubuntu:~$ sudo dmraid -r
        /dev/sdb: nvidia, "nvidia_acajefec", stripe, ok, 488397166 sectors, data@ 0
        /dev/sda: nvidia, "nvidia_acajefec", stripe, ok, 488397166 sectors, data@ 0

    So do you have an idea how to make fdisk see my RAID array? How do I make fdisk detect the software RAID the way dmraid does?
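
    Not from the question, but with a firmware/fakeRAID stripe like this the usual route is to activate the set through device-mapper and point fdisk at the mapped device rather than at the raw disks; a hedged sketch using the set name dmraid reported above:

        # Activate all detected RAID sets via device-mapper
        sudo dmraid -ay

        # The assembled array (and its partitions) appear under /dev/mapper
        ls /dev/mapper/
        sudo fdisk -l /dev/mapper/nvidia_acajefec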

    Read the article
