Search Results

Search found 13697 results on 548 pages for 'linking errors'.


  • Sharepoint/WSS Reporting Services Integration woes

    - by mhollers
    After a number of failed attempts I seem to have successfully installed the Reporting Services add-in to my WSS farm. However, I seem to be missing most of the enhanced functionality: no report library template, no Report Center site template. The only additional functionality available is the Report Viewer web part.

    Background: a 2-server WSS 3.0 farm, with CA (Central Administration), the WFE (web front end) and the Reporting Services add-in installed on one server, and SQL Server 2005 SP2 with Reporting Services (RS) and all databases installed on the other. I have a VM environment set up and have rolled this back and repeated the install a number of times. I have configured RS within CA and activated the 'Report Server Integration Feature'. Within 'Site Settings' I have a 'Reporting Services' heading with a 'Manage shared schedules' item/link; I'm not sure if there should be other options.

    I was under the impression that to view reports within SharePoint I could either create a new site using the 'Report Center' template or add a report library to an existing site, neither of which seems available. I am at a loss as to what to do, as all the information online seems to deal with installation issues/errors, which I seem to have eventually got past.

    Read the article

  • Apache server configuration name resolution (virtual host naming + security)

    - by Homunculus Reticulli
    I have just set up a minimal (hopefully secure? - comments welcome) Apache website using the following configuration file:

        <VirtualHost *:80>
            ServerName foobar.com
            ServerAlias www.foobar.com
            ServerAdmin [email protected]
            DocumentRoot /path/to/websites/foobar/web
            DirectoryIndex index.php

            # CustomLog with format nickname
            LogFormat "%h %l %u %t \"%r\" %>s %b" common
            CustomLog "|/usr/bin/cronolog /var/log/apache2/%Y%m.foobar.access.log" common
            LogLevel notice
            ErrorLog "|/usr/bin/cronolog /var/log/apache2/%Y%m.foobar.errors.log"

            <Directory />
                AllowOverride None
                Order Deny,Allow
                Deny from all
            </Directory>

            <Directory /path/to/websites/>
                Options -Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    I am able to access the website by using www.foobar.com, but when I type foobar.com I get the error 'Server not found' - why is this?

    My second question concerns the security implications of the second <Directory> block in the configuration above. What exactly is it doing, and is it necessary? From my (admittedly limited) understanding of Apache configuration files, it means that anyone will be able to access (write to?) the /path/to/websites/ folder. Is my understanding correct? And if yes, how is this not a security risk?
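    A hedged aside, not from the original post: 'Server not found' is a browser-side resolution failure, so Apache never even sees the request - which points at DNS rather than the vhost config. Assuming a client with dig installed, a quick check might look like this:

        # Compare DNS answers for the bare domain and the www host.
        # If foobar.com returns no A record, the error is DNS, not Apache.
        dig +short foobar.com A
        dig +short www.foobar.com A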

    Read the article

  • Using npm install as a MS-Windows system account

    - by Guss
    I have a Node application running on Windows which I want to be able to update automatically. When I run npm install -d as the Administrator account it works fine, but when I try to run it through my automation software (which runs as Local System), I get errors when I try to install a private module from a private git repository:

        npm ERR! git clone [email protected]:team/repository.git fatal: Could not change back to 'C:/Windows/system32/config/systemprofile/AppData/Roaming/npm-cache/_git-remotes/git-bitbucket-org-team-repository-git-06356f5b': No such file or directory
        npm ERR! Error: Command failed: fatal: Could not change back to 'C:/Windows/system32/config/systemprofile/AppData/Roaming/npm-cache/_git-remotes/git-bitbucket-org-team-repository-git-06356f5b': No such file or directory
        npm ERR!
        npm ERR!     at ChildProcess.exithandler (child_process.js:637:15)
        npm ERR!     at ChildProcess.EventEmitter.emit (events.js:98:17)
        npm ERR!     at maybeClose (child_process.js:735:16)
        npm ERR!     at Socket.<anonymous> (child_process.js:948:11)
        npm ERR!     at Socket.EventEmitter.emit (events.js:95:17)
        npm ERR!     at Pipe.close (net.js:451:12)
        npm ERR! If you need help, you may report this log at:
        npm ERR!     <http://github.com/isaacs/npm/issues>
        npm ERR! or email it to:
        npm ERR!     <[email protected]>
        npm ERR! System Windows_NT 6.1.7601
        npm ERR! command "C:\\Program Files\\nodejs\\\\node.exe" "C:\\Program Files\\nodejs\\node_modules\\npm\\bin\\npm-cli.js" "install" "-d"
        npm ERR! cwd D:\nodeapp
        npm ERR! node -v v0.10.8
        npm ERR! npm -v 1.2.23
        npm ERR! code 128

    Just running git clone under the same system account works fine. Any ideas?
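    A sketch of one workaround, assuming the failure is the Local System profile's npm-cache path shown in the log: point npm at a cache directory the service account can actually create and chdir into, using npm's standard --cache option (the D:\npm-cache path is a placeholder):

        REM Create a writable cache location, then tell npm to use it.
        mkdir D:\npm-cache
        npm install -d --cache D:\npm-cache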

    Read the article

  • Nginx Not Passing URL Parameters

    - by jmccartie
    Messing around with Nginx... for some reason it looks like none of my URL parameters are being passed. My homepage loads fine, but a URL like "http://mysite.com/more.php?id=101" throws errors saying that the ID is an undefined index. I'm assuming this is something basic I'm missing in a conf file. Some info:

    conf.d/virtual.conf:

        server {
            listen       80;
            server_name  dev.mysite.com;
            index        index.php;
            root         /var/www/dev.mysite.com_html;

            location / {
                root /var/www/dev.mysite.com_html;
            }

            location ~ \.php(.*)$ {
                fastcgi_pass   127.0.0.1:9000;
                fastcgi_param  SCRIPT_FILENAME /var/www/wap/dev.mysite.com_html/$fastcgi_script_name;
                fastcgi_index  index.php;
                include        /etc/nginx/fastcgi_params;
            }
        }

    Error log:

        2009/06/22 11:44:21 [notice] 16319#0: start worker process 16322
        2009/06/22 11:44:28 [error] 16320#0: *1 FastCGI sent in stderr: "PHP Notice: Undefined index: id in /var/www/dev.mysite.com_html/more.php on line 10

    Thanks in advance.
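    A hedged guess, not part of the original post: if the included fastcgi_params file doesn't define QUERY_STRING, PHP never receives the "?id=101" part and $_GET stays empty. A minimal check is to pass it explicitly (note also that the /var/www/wap/ prefix in SCRIPT_FILENAME above differs from the declared root, which may or may not be intentional - the path below assumes the root is correct):

        location ~ \.php(.*)$ {
            fastcgi_pass   127.0.0.1:9000;
            # QUERY_STRING is what carries "?id=101" through to PHP.
            fastcgi_param  QUERY_STRING    $query_string;
            fastcgi_param  SCRIPT_FILENAME /var/www/dev.mysite.com_html/$fastcgi_script_name;
            fastcgi_index  index.php;
            include        /etc/nginx/fastcgi_params;
        }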

    Read the article

  • Drobo FS vs Lime Technology unRAID vs FreeNAS

    - by elluca
    I had already decided to buy a Drobo FS until I found these two tests:

        http://www.digitalversus.com/data-robotics-drobo-fs-p889_9543_487.html
        http://www.digitalversus.com/lime-technology-unraid-p889_8992_473.html

    The two cons against the Drobo for me: loudness and price. What disadvantages does the unRAID setup have against the Drobo FS? Does it also have that ease of use, like swapping drives on the go, simply extending capacity by plugging in new drives, notifying me of drive errors, disk failure protection, dynamic space for "partitions", better/worse effective capacity, etc.? Which is more secure? Am I able to simply replace a bad drive with a new one on unRAID? What happens if my PC fails - let's say the CPU overheats? Since I have a complete PC which is going to be replaced, I only have to pay for the software to use unRAID.

    I am going to use my NAS for:

        - music library (how well does it integrate with iTunes?)
        - picture library
        - movie library
        - development (I need to be able to use Time Machine)

    I am going to use this NAS with a MacBook Pro. My current disks: 2x 500GB, 1x 1.5TB, 1x 2TB. On a Drobo FS I would have 2.26TB of space. What would it be on unRAID (see the arithmetic sketch below)? Is FreeNAS also an alternative?
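    Not an answer from the original post, but assuming unRAID's usual scheme of dedicating the single largest disk to parity, the capacity arithmetic for these four disks would be:

        parity drive:  2 TB                       (largest disk, not counted)
        usable space:  0.5 + 0.5 + 1.5 = 2.5 TB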

    Read the article

  • Gerrit ssh key setup on windows server

    - by hotpotato
    I am attempting to configure Google's 'Gerrit' code review web app on a Windows Server 2008 virtual machine on our internal network. We are using Apache Tomcat (6.0.36) to host the web app and have deployed gerrit.war to Tomcat's webapp folder, and set up the context.xml, web.xml etc. for the web app correctly, I believe. However, when I start Tomcat using $CATALINA_HOME/bin/startup.bat I get the following message in the Tomcat logs:

        Dec 07, 2012 1:03:54 PM org.apache.catalina.core.StandardContext listenerStart
        SEVERE: Exception sending context initialized event to listener instance of class com.google.gerrit.httpd.WebAppInitializer
        com.google.inject.CreationException: Guice creation errors:

        1) No SSH keys under C:\Gerrit\config\etc
           while locating com.google.gerrit.sshd.HostKeyProvider
           at com.google.gerrit.sshd.SshModule.configure(SshModule.java:90)

    I have created an is_rsa.pub SSH key and placed it in the specified directory, to no avail. I have been googling this for about a week now and can't seem to find any information about the file or format it is expecting... documentation on setting Gerrit up on Windows seems hard to come by! Can anyone provide useful information about how to correctly configure a host SSH key in this context?
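    A sketch of one thing worth trying - an assumption, not from the original post: Gerrit's HostKeyProvider expects a server host key pair, conventionally named ssh_host_rsa_key / ssh_host_rsa_key.pub in the site's etc directory, rather than a user key like is_rsa.pub. With OpenSSH's ssh-keygen (e.g. from Cygwin or msysgit) that would be:

        # Generate an unencrypted RSA host key pair where Gerrit is looking.
        # The target file name is an assumption based on Gerrit's usual
        # site layout (site_path/etc/ssh_host_rsa_key).
        ssh-keygen -t rsa -N "" -f /c/Gerrit/config/etc/ssh_host_rsa_key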

    Read the article

  • Why are SMART error rates going down?

    - by Jeff Shattock
    I have a hard drive that's part of a Linux software RAID5 array. SMART reported that its multi_zone_error_rate was 0, then 1, then 3, so I figured I had better start backing up more frequently and prepare to replace the drive. Now, today, the multi_zone_error_rate of that very same drive is back down to 1. It seems that 2 errors unhappened while I wasn't looking. I've also seen similar behaviour by inspecting the syslog on the server:

        Jun  7 21:01:17 FS1 smartd[25593]: Device: /dev/sdc, SMART Usage Attribute: 7 Seek_Error_Rate changed from 200 to 100
        Jun  7 21:01:17 FS1 smartd[25593]: Device: /dev/sde, SMART Usage Attribute: 7 Seek_Error_Rate changed from 200 to 100
        Jun  7 21:01:18 FS1 smartd[25593]: Device: /dev/sdg, SMART Usage Attribute: 7 Seek_Error_Rate changed from 200 to 100
        Jun  8 02:31:18 FS1 smartd[25593]: Device: /dev/sdg, SMART Usage Attribute: 7 Seek_Error_Rate changed from 100 to 200
        Jun  8 03:01:17 FS1 smartd[25593]: Device: /dev/sdc, SMART Usage Attribute: 7 Seek_Error_Rate changed from 100 to 200
        Jun  8 03:01:17 FS1 smartd[25593]: Device: /dev/sde, SMART Usage Attribute: 7 Seek_Error_Rate changed from 100 to 200

    These are raw values, not the human-useful values that smartctl -a produces, but the behaviour is similar: error rates changing, then undoing the change. None of these is the drive that had the multi_zone weirdness. I haven't seen any problems from the RAID; its most recent scrub (< 24 hours ago) came back totally clean. The only thing I can think of is that the SMART reporting circuitry on the drive isn't working properly all the time. The cables are in tight on the drive and board. What's going on here?
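    A small aside, assuming smartmontools is installed and the device names above: comparing the normalized VALUE/WORST columns against RAW_VALUE for attribute 7 makes it easier to see whether the attribute is really oscillating or just being renormalized by the drive firmware:

        # Show the full attribute table for one member disk. VALUE is the
        # normalized figure smartd logs; RAW_VALUE is the drive's
        # underlying counter.
        smartctl -A /dev/sdc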

    Read the article

  • cron doesn't execute any task, but writes into log as executed

    - by FractalizeR
    I have a strange problem on one of my servers. Cron does not execute any task, but it writes to its log that the task has been executed successfully - as if some simulation mode were activated...

        Apr 30 03:03:08 nd-10049 crond[13387]: (root) CMD (php /usr/local/frb/backup.php)
        Apr 30 03:05:01 nd-10049 crond[13397]: (root) CMD (php /home/support/public_html/cron/cron_hourly.php>/home/support/public_html/cron/hourly.log)
        Apr 30 03:09:01 nd-10049 crond[19108]: (root) CMD (/etc/webmin/cron/tempdelete.pl )
        Apr 30 03:10:01 nd-10049 crond[19467]: (root) CMD (php /home/support/public_html/cron/cron_hourly.php>/home/support/public_html/cron/hourly.log)
        Apr 30 03:14:44 nd-10049 crontab[21154]: (root) BEGIN EDIT (root)
        Apr 30 03:15:01 nd-10049 crond[21309]: (root) CMD (php /home/support/public_html/cron/cron_hourly.php>/home/support/public_html/cron/hourly.log)
        Apr 30 03:15:38 nd-10049 crontab[21154]: (root) REPLACE (root)
        Apr 30 03:15:38 nd-10049 crontab[21154]: (root) END EDIT (root)
        Apr 30 03:16:01 nd-10049 crond[14961]: (root) RELOAD (cron/root)
        Apr 30 03:20:02 nd-10049 crond[22620]: (root) CMD (php /home/support/public_html/cron/cron_hourly.php)

    There are no errors about cron in the common log (messages). The OS is CentOS. What can I do to check what the problem is? What can the problem be?
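    One diagnostic sketch, assuming the paths above: crond only logs that it spawned the command, and the redirects in the crontab capture stdout but not stderr, so a PHP invocation that fails would leave no trace. Redirecting both streams shows whether the jobs really run and die:

        # Capture stderr as well as stdout in the crontab entry; if the
        # jobs do run, their errors land in the log instead of vanishing.
        php /home/support/public_html/cron/cron_hourly.php >/home/support/public_html/cron/hourly.log 2>&1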

    Read the article

  • SuPHP custom php.ini doesn't get read

    - by Mathieu Dumoulin
    It took me about 4 hours to get FastCGI + suPHP running on Ubuntu 11.10, and I'm now happy that it works mighty fine - except for ONE big problem: custom php.ini files don't seem to load. I tried changing some options and then firing off a phpinfo(), and nothing changes in the output, which leads me to think there is definitely a problem with the loading of the configuration file.

        <IfModule mod_suphp.c>
            AddHandler x-httpd-php .php
            <Location />
                SuPHP_AddHandler x-httpd-php
            </Location>
            suPHP_ConfigPath /home/mdumoulin/Documents/tests/tests
            suPHP_Engine on
        </IfModule>

    As you can see, I took great care to make sure I wasn't referencing the php.ini file itself but the directory of the vhost. The php.ini located at /home/mdumoulin/Documents/tests/tests/php.ini contains:

        [PHP]
        error_reporting = E_ALL & ~E_DEPRECATED & ~E_NOTICE
        display_errors = Off

    And the log in /var/log/suphp/suphp.log doesn't contain anything relevant (only old errors that occurred before this post, while I was testing suPHP). So I'm stumped, dunno what more I can do! Anyone got an idea?

    EDIT: Finally got time to work on this. I disabled FastCGI and enabled only suPHP, but after restarting I still see "Server API: CGI/FastCGI". Is this what I should be getting or not? I believe it's normal to get CGI since suPHP works through a CGI... but I'm not too sure anymore...
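    A tiny probe that might help here - a sketch using two standard PHP functions, php_sapi_name() and php_ini_loaded_file(): dropped into the vhost, it reports which SAPI handled the request and which php.ini that SAPI actually read, which tells you whether suPHP or the old FastCGI handler served the page:

        <?php
        // Which handler served this request (e.g. "cgi" vs "cgi-fcgi")?
        echo php_sapi_name(), "\n";
        // Which php.ini did that handler actually load?
        echo php_ini_loaded_file(), "\n";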

    Read the article

  • Software mirroring (RAID1) versus "Fake Raid" for new Windows 7 install

    - by kquinn
    I've just ordered two new hard drives for my main desktop and a copy of Windows 7 Professional 64-bit. I'd like to do a clean install of Win7 onto the new drives (leaving my old XP Pro boot partition around for a while in case something goes disastrously wrong, etc.). I want to have them set up in mirrored (RAID-1) mode. My understanding is that Win7 Pro can do software mirroring, but can I set this up directly at install time? If so, how? Note that I'd like the disk to be split into three partitions (OS / Apps & Data / Bulk data), all of which should be mirrored.

    Would it be better (more reliable or faster) to use my motherboard's hardware RAID support? My motherboard is an older nVidia nForce 680i SLI, which is not the most stable of motherboards, and I'm not sure how trustworthy its RAID1 configuration might be (or if Win7 could even detect and install onto a hardware-mirrored volume). Also, the performance characteristics of RAID1 are rather different than RAID0 or RAID5, and I'm wondering if Win7's software mirroring might actually be faster than hardware RAID1. (For example, I'm more of a Unix admin when I have to wear the sysadmin hat, and I've had great success deploying ZFS; most hardware RAID1 implementations have to read both disks and compare results to look for data errors, but ZFS can read from only one disk in the mirror and just use the built-in checksum, meaning it can have up to 2x the number of reads in-flight, as long as there's no data corruption.)

    Edit: Okay, my question about whether Windows 7 can do software mirroring has been answered, and it can. I'm still unsure whether Windows software RAID or my motherboard's hardware "fake RAID" function is a better choice, though. Remember, I'm only interested in mirroring - not the more complicated striping or parity operations that generally show the poor performance of crappy motherboard RAID solutions.
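    Not from the original thread, but as a sketch of the post-install route (assuming two dynamic disks, with drive letters as placeholders): Windows 7's software mirror can be added after setup with diskpart by converting both disks to dynamic and mirroring each simple volume onto the second disk:

        rem Convert both disks to dynamic, then mirror volume C onto disk 1.
        rem Repeat "add disk=1" with each of the three volumes selected.
        select disk 0
        convert dynamic
        select disk 1
        convert dynamic
        select volume c
        add disk=1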

    Read the article

  • Fresh Proxmox VE 2.1 installation with defaults can't be reached or pinged

    - by Damainman
    I am using the latest Proxmox VE 2.1. My server has two NICs, with an uplink connected only to eth0. It is a co-located server using public IPv4 IPs; it is not behind a firewall or any system which monitors traffic. Via IP-KVM I did a fresh install of Proxmox, entering the correct IP, mask, gateway and DNS information. The install went perfectly fine with no errors. Upon completion and rebooting the system:

        - I am unable to reach the web GUI via the browser; it just times out.
        - I am unable to ping the server.
        - I am unable to ping out to the Internet from within the server (tried 4.2.2.2 and yahoo.com).
        - I tried rebooting the server and restarting the network service.

    IFCONFIG shows my IP information under vmbr0, which also has the same MAC address as the eth0 device. eth0 only displays an IPv6 Scope:Link address, which I did not set up myself. This is my first time installing Proxmox, but after searching for a few hours it doesn't seem like anyone else is having the same issue from a fresh install with just the defaults. So far the only thing I did was install it. Also, I know the network cable is good and the IP is good, because I was running a Xen XCP server with the same network settings prior to wiping it to install Proxmox.

    Some additional information. pveversion -v (installed proxmox-ve_2.1-f9b0f63a-26.iso):

        pve-manager: 2.1-1 (pve-manager/2.1/f9b0f63a)
        running kernel: 2.6.32-11-pve
        proxmox-ve-2.6.32: 2.0-66

    netstat -nr (note: .136 is my network, and .137 is my gateway):

        Destination        Gateway            Genmask
        xxx.xxx.xxx.136    0.0.0.0            255.255.255.248
        0.0.0.0            xxx.xxx.xxx.137    0.0.0.0

    /etc/network/interfaces:

        auto lo
        iface lo inet loopback

        auto vmbr0
        iface vmbr0 inet static
            address xxx.xxx.xxx.138
            netmask 255.255.255.248
            gateway xxx.xxx.xxx.137
            bridge_ports eth0
            bridge_stp off
            bridge_fd 0
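    A diagnostic sketch, assuming the standard bridge-utils/iproute2 tools present on a Proxmox host: the config above looks sane, so the first thing to verify is that eth0 is actually enslaved to the bridge and its link is up:

        # Is eth0 attached to the bridge, and is its link up?
        brctl show vmbr0
        ip link show eth0
        # Does the bridge carry the address, and is the default route present?
        ip addr show vmbr0
        ip route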

    Read the article

  • undelete big files - mission impossible?

    - by johnrembo
    Hi, I've accidentally deleted an outlook.pst file (6.7GB) while there was only 400MB of free space left on the primary NTFS partition (WinXP). I've tried several recovery tools to get this file back. "Ontrack Easy Recovery Pro" found 0 pst files (complete scan mode), while "Recover My Files" in sector scan mode found 5 pst's - but 4 of them were between 3 and 28 KB, while the 5th was 1GB. I managed to successfully recover the 1GB pst file, which was a 1-year-old copy (the one used after the latest Windows reinstall).

    Now I'm frustrated and confused. Why was a 1-year-old file successfully recovered if there were only 400MB left on the primary partition? Where has the 6.7GB file gone? I did some reading (i.e. here), and it seems there's almost no probability of retrieving the file I'm looking for - but wait: none of the recovery tools I've used found a zero-sized pst file. Moreover, even if the file were corrupted due to fragmentation, we could use scanpst.exe to fix some errors and survive with 10 or 100 emails missing - whatever. Could you please recommend some more sophisticated recovery tools for this particular task? Appreciate your help - thanks in advance.

    Read the article

  • MediaWiki migrated from Tiger to Snow Leopard throwing an exceptions

    - by Matt S
    I had an old laptop running Mac OS X 10.4 with MacPorts for web development: Apache 2, PHP 5.3.2, MySQL 5, etc. I got a new laptop running Mac OS X 10.6, installed MacPorts, and installed the same web development apps - Apache 2, PHP 5.3.2, MySQL 5, etc. - all the same versions as on my old laptop. A MediaWiki site (version 1.15) was copied over from my old system (via the Migration Assistant). Having a fresh MySQL setup, I dumped my old database and imported it on the new system. When I try to browse to MediaWiki's "Special" pages, I get the following exception thrown:

        Invalid language code requested
        Backtrace:
        #0 /languages/Language.php(2539): Language::loadLocalisation(NULL)
        #1 /includes/MessageCache.php(846): Language::getFallbackFor(NULL)
        #2 /includes/MessageCache.php(821): MessageCache->processMessagesArray(Array, NULL)
        #3 /includes/GlobalFunctions.php(2901): MessageCache->loadMessagesFile('/Users/matt/Sit...', false)
        #4 /extensions/OpenID/OpenID.setup.php(181): wfLoadExtensionMessages('OpenID')
        #5 [internal function]: OpenIDLocalizedPageName(Array, 'en')
        #6 /includes/Hooks.php(117): call_user_func_array('OpenIDLocalized...', Array)
        #7 /languages/Language.php(1851): wfRunHooks('LanguageGetSpec...', Array)
        #8 /includes/SpecialPage.php(240): Language->getSpecialPageAliases()
        #9 /includes/SpecialPage.php(262): SpecialPage::initAliasList()
        #10 /includes/SpecialPage.php(406): SpecialPage::resolveAlias('UserLogin')
        #11 /includes/SpecialPage.php(507): SpecialPage::getPageByAlias('UserLogin')
        #12 /includes/Wiki.php(229): SpecialPage::executePath(Object(Title))
        #13 /includes/Wiki.php(59): MediaWiki->initializeSpecialCases(Object(Title), Object(OutputPage), Object(WebRequest))
        #14 /index.php(116): MediaWiki->initialize(Object(Title), NULL, Object(OutputPage), Object(User), Object(WebRequest))
        #15 {main}

    I tried to step through MediaWiki's code, but it's a mess - there are global variables everywhere. If I change the code slightly to get around the exception, the page comes up blank and there are no errors (implying there are multiple problems). Has anyone else got MediaWiki 1.15 working on OS X 10.6 with MacPorts? Anything in the migration from Tiger that could cause a problem? Any clues where to look for answers?
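    Reading the backtrace - an interpretation, not from the original post: frame #0 shows Language::loadLocalisation(NULL), i.e. the OpenID extension triggers message loading before any language code is set. One hedged thing to check in LocalSettings.php is that $wgLanguageCode (a standard MediaWiki setting) is assigned before the extension is included:

        // A sketch of the ordering to check in LocalSettings.php.
        // The require path follows the layout shown in the backtrace.
        $wgLanguageCode = 'en';
        require_once("$IP/extensions/OpenID/OpenID.setup.php");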

    Read the article

  • Excel not properly recalculating values

    - by gms8994
    I have an Excel sheet with values in it (the sheet is generated by a custom Perl script, but I don't think that's where the problem lies). In it, I have a formula:

        =SUM(INDIRECT(CONCATENATE(ADDRESS(6,COLUMN()),":",ADDRESS(17,COLUMN()))))

    The purpose of this formula is to give me the SUM() of the cells in the current column, between rows 6 and 17. In the Gnumeric spreadsheet this works as soon as I open the file, but in Excel (both 2003 and 2007) opening the file gives #VALUE! errors in the fields with this formula, stating that the INDIRECT call with the values $B$6:$B$17 will result in an error.

    Here's the kink in the issue: if I edit the field (via F2), make no changes, and hit Enter, the values update. Also, it seems that if I save the file as .xlsx (Excel 2007 format), the values update upon opening. Unfortunately, I'm not sure that creating an .xlsx is a possibility with the modules I'm using, and many of our clients probably wouldn't be able to use it anyway. Any suggestions? Editing 200+ files every month for each client isn't going to be feasible, so if there's something I'm missing, please let me know.
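    A hedged alternative, on the assumption that it's the text-built reference (CONCATENATE fed into INDIRECT) that trips Excel's load-time recalculation - not a confirmed fix: OFFSET can address "rows 6-17 of the current column" without assembling the reference as a string:

        =SUM(OFFSET($A$6,0,COLUMN()-1,12,1))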

    Read the article

  • Windows XP corrupts registry every several hours

    - by Ilya Kazakevich
    There is a Dell XPS 400 with Windows Media Center installed. It is installed on RAID (Intel Matrix Storage), which is built into the chipset south bridge. The RAID has two 150GB WDC drives connected as a mirror. All drivers and updates are installed (SP3 and so on). A week ago the PC changed its video mode to 256 colors (like a VESA mode) and after several moments I got a BSOD: c000021a: 0xc0000005. Dr. Watson did not create a dump, although it is installed as the default debugger.

    After reboot it said that the config file is missing or corrupted. So I booted to the recovery console and found that the registry file (config) was very small. I replaced it with one from a restore point and Windows booted successfully. But after about 3 hours it crashed again in the same way! I looked in the event viewer: it said that Explorer.exe failed to open \global??\DLIAFS. I looked in WinObj and found that it is a device. I set a "deny from everyone" ACL for this device, and after several hours my Windows crashed. I restored the registry, booted again, and there was no error about DLIAFS.

    I did a full chkdsk and it did not find anything bad. But I found an event about an error paging to \Harddrive1\D. I do not have a pagefile there, but I thought I should check my disk again. Unfortunately I cannot use SMART tools on RAID, but I downloaded the latest software from Intel (it can do the same things the RAID BIOS can, but from Windows). It verified my disks, found some errors, fixed them, then I rebooted. And it crashed again. I am lost. What (except kernel debugging) could be done here? Thanks

    Read the article

  • Shadow copy referencing invalid volume from symboliclink

    - by ccook
    I recently replaced my motherboard after the last one failed (it was shorting and causing random reboots). I'm sure this was not healthy for the machine, and a clean install would do wonders, but I'd like to fix the current install. That aside, I've been tracking down a pair of errors in the application log:

        Volume Shadow Copy Service error: Error calling a routine on a Shadow Copy Provider {b5946137-7b9f-4925-af80-51abd60b20d5}. Routine details IVssSnapshotProvider::QueryVolumesSupportedForSnapshots(ProviderId,29,...) [hr = 0x80042302, A Volume Shadow Copy Service component encountered an unexpected error. Check the Application event log for more information. ].

        Operation: Query volumes supported by this provider
        Context: Provider ID: {b5946137-7b9f-4925-af80-51abd60b20d5}
        Snapshot Context: 29

    Followed by:

        Volume Shadow Copy Service error: Unexpected error calling routine Error calling CreateFile on volume '\?\Volume{f4bda86e-049d-11e1-9255-bcaec56690a1}\'. hr = 0x80070020, The process cannot access the file because it is being used by another process.

    This error is reproducible at the command line, creating the two event log entries:

        C:\Windows\system32>vssadmin list volumes
        vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
        (C) Copyright 2001-2005 Microsoft Corp.

        Error: The shadow copy provider had an unexpected error while trying to process the specified command.

    Using WinObj from Sysinternals, I have tracked down the global object:

        '\?\Volume{f4bda86e-049d-11e1-9255-bcaec56690a1}\' - SymbolicLink - '\Device\HarddiskVolume8'

    Running DISKPART and executing "list volume" within it lists volumes 0 through 6; there is no HarddiskVolume8. How can I remove this reference to HarddiskVolume8 and get shadow copy up and running?
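    Not from the original question, but one hedged avenue: mountvol can enumerate the volume GUID names the system still remembers, and its /R switch removes mount-point directories and registry settings for volumes that are no longer in the system:

        REM List all volume GUID paths Windows knows about; compare against
        REM the stale {f4bda86e-...} entry from the event log.
        mountvol
        REM Remove mount point entries for volumes no longer present.
        mountvol /R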

    Read the article

  • Postfix does not work after setting server hostname from plesk

    - by Michael
    I have recently set my server hostname from localhost.localdomain to xx.mydomain.com from the Plesk control panel. However, after this change Postfix has stopped working. I tried restarting it and regenerating the config files, but to no avail. I am not familiar with Postfix, but I believe there is a setting to be changed in main.cf. Here are the relevant errors I receive:

        postfix/postfix-script: starting the Postfix mail system
        postfix/master: daemon started -- version 2.8.4, configuration /etc/postfix
        postfix/cleanup: fatal: host/service localhost/12768 not found: Name or service not known
        postfix/pickup: warning: maildrop/E5559996219: error writing 4FA2C996217: queue file write error
        postfix/master: warning: process /usr/libexec/postfix/cleanup pid 15334 exit status 1
        postfix/master: warning: /usr/libexec/postfix/cleanup: bad command startup -- throttling

    Any ideas?

    EDIT: Setting it back to localhost.localdomain makes it work again. The only references to localhost/12768 I can find in main.cf are:

        smtpd_milters = inet:localhost:12768
        non_smtpd_milters = inet:localhost:12768

    Should something be changed here? These two lines stay the same when I change the hostname.

    EDIT: If I comment out the two mail filter lines (the ones shown above) and restart, Postfix works with the new hostname. However, this is obviously not the ideal solution...
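    A hedged check, not from the original post: "Name or service not known" suggests the hostname change left the box unable to resolve localhost, so the milter address inet:localhost:12768 no longer parses to a host. Something like:

        # Does localhost still resolve on this box?
        getent hosts localhost
        # If it prints nothing, /etc/hosts likely lost a line such as:
        #   127.0.0.1   localhost localhost.localdomain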

    Read the article

  • User directive in nginx generates error despite running as UID root

    - by Joost Schuur
    I'm running nginx on a Mac OS X machine, installed with brew, and when I launch nginx, even with sudo, I get the following warning in my log file over and over again:

        4/21/11 2:03:42 AM org.nginx[3788] nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /usr/local/etc/nginx/conf/nginx.conf:2

    From nginx.conf:

        user jschuur staff;

    I'm already launching nginx with sudo, since I want it to listen on port 80. Shouldn't that be enough to give it the proper super-user privileges? The nginx binary as it's installed:

        jschuur@Glenna:sbin ? master ls -la
        total 4544
        drwxr-xr-x   3 jschuur staff     102 Apr 12 20:53 .
        drwxrwxr-x  15 jschuur staff     510 Apr 12 15:25 ..
        -rwxr-xr-x   1 jschuur staff 2325648 Apr 12 20:39 nginx

    FWIW, I recompiled the binary to set Passenger up and moved it from its original location into /usr/local/sbin.

    Update: As it turns out, Mac OS X was restarting nginx after I'd stopped it, because the launchd plist in ~/Library/LaunchAgents had it set to 'KeepAlive'. However, because I installed this plist into my local user's LaunchAgents folder, as opposed to /Library/LaunchAgents (or better yet /Library/LaunchDaemons, which run before you even log on), it wasn't executed as root. Because of an error about not having permission to use port 80, it actually exited right away, but it still wrote to the same log file as the nginx process I started with sudo. I had thought the errors stemming from the automatic restart were actually coming from my manual restart via sudo.

    So, bottom line, problem solved. The real problem here was the Homebrew instructions specifically asking you to install the plist file into a location that wouldn't allow a local site to use port 80.
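    As a sketch of the fix the update describes - the plist name is an assumption based on a typical Homebrew nginx install:

        # Stop the per-user agent, then register the job system-wide so
        # launchd starts nginx as root (required for port 80).
        launchctl unload ~/Library/LaunchAgents/homebrew.mxcl.nginx.plist
        sudo cp ~/Library/LaunchAgents/homebrew.mxcl.nginx.plist /Library/LaunchDaemons/
        sudo launchctl load -w /Library/LaunchDaemons/homebrew.mxcl.nginx.plist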

    Read the article

  • Cannot click send button in Outlook (+ Exchange) for unknown addresses

    - by Graphain
    Hi, I have a very unusual problem. I have Outlook 2010 connected to Exchange 2010. It can send emails perfectly to known addresses (that is, addresses in the address book or ones that have been sent to previously). However, if I put in an address that is unknown, I cannot actually click the Send button in Outlook (it simply does nothing). Corresponding to this, I get errors in the Event Log for each Send click stating "The connection to Microsoft Exchange is unavailable. Outlook must be online or connected to complete this action." However, Outlook shows as connected the whole time, pings do not break, and I have no reason to suspect it has lost connection.

    To further complicate matters, Outlook is fine on all other PCs, and this was all perfect until I installed BitDefender on the PC in question and on the Exchange server. Outlook was still fine on the other PCs while BitDefender was installed, but I have removed it from the PC in question and the server just in case (no success).

    Summary: Outlook encounters Exchange connectivity issues when sending to unknown (new) email addresses, which prevent the Send button from working at all. This is isolated to one machine and occurred after installation of AV/firewall software which has since been thoroughly removed. If you have any potential solutions I'd love to hear them, as otherwise I will be resorting to reformatting the PC in question, and probably removing Exchange because I'm sick of its issues if I cannot resolve this soon. Big thanks for any help.

    Read the article

  • "unrecognized options" while installing php

    - by user1692333
    I want to compile PHP 5.4.8 on my Mac (10.8.2), but I get some errors which I can't solve by myself, so I need your help. First I got the default PHP options with php -i | head, then ran this command:

        ./configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --disable-dependency-tracking --sysconfdir=/private/etc --with-apxs2=/usr/sbin/apxs --enable-cli --with-config-file-path=/etc --with-libxml-dir=/usr --with-openssl=/usr --with-kerberos=/usr --with-zlib=/usr --enable-bcmath --with-bz2=/usr --enable-calendar --disable-cgi --with-curl=/usr --enable-dba --enable-ndbm=/usr --enable-exif --enable-fpm --enable-ftp --with-gd --with-freetype-dir=/BinaryCache/apache_mod_php/apache_mod_php-79~4/Root/usr/local --with-jpeg-dir=/BinaryCache/apache_mod_php/apache_mod_php-79~4/Root/usr/local --with-png-dir=/BinaryCache/apache_mod_php/apache_mod_php-79~4/Root/usr/local --enable-gd-native-ttf --with-icu-dir=/usr --with-iodbc=/usr --with-ldap=/usr --with-ldap-sasl=/usr --with-libedit=/usr --enable-mbstring --enable-mbregex --with-mysql=mysqlnd --with-mysqli=mysqlnd --without-pear --with-pdo-mysql=mysqlnd --with-mysql-sock=/var/mysql/mysql.sock --with-readline=/usr --enable-shmop --with-snmp=/usr --enable-soap --enable-sockets --enable-sqlite-utf8 --enable-suhosin --enable-sysvmsg --enable-sysvsem --enable-sysvshm --with-tidy --enable-wddx --with-xmlrpc --with-iconv-dir=/usr --with-xsl=/usr --enable-zend-multibyte --enable-zip --with-pcre-regex --with-pgsql=/usr --with-pdo-pgsql=/usr

    But I get this error:

        config.status: creating Makefile
        config.status: creating jconfig.h
        config.status: jconfig.h is unchanged
        config.status: executing depfiles commands
        config.status: executing libtool commands
        configure: WARNING: unrecognized options: --enable-cli, --with-config-file-path, --with-libxml-dir, --with-openssl, --with-kerberos, --with-zlib, --enable-bcmath, --with-bz2, --enable-calendar, --disable-cgi, --with-curl, --enable-dba, --enable-ndbm, --enable-exif, --enable-fpm, --enable-ftp, --with-gd, --with-freetype-dir, --with-jpeg-dir, --with-png-dir, --enable-gd-native-ttf, --with-icu-dir, --with-iodbc, --with-ldap, --with-ldap-sasl, --with-libedit, --enable-mbstring, --enable-mbregex, --with-mysql, --with-mysqli, --without-pear, --with-pdo-mysql, --with-mysql-sock, --with-readline, --enable-shmop, --with-snmp, --enable-soap, --enable-sockets, --enable-sqlite-utf8, --enable-suhosin, --enable-sysvmsg, --enable-sysvsem, --enable-sysvshm, --with-tidy, --enable-wddx, --with-xmlrpc, --with-iconv-dir, --with-xsl, --enable-zend-multibyte, --enable-zip, --with-pcre-regex, --with-pgsql, --with-pdo-pgsql

    Maybe someone has some suggestions on this?
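    One hedged observation - an inference from the output, not from the original post: "config.status: creating jconfig.h" is a tell, since jconfig.h belongs to libjpeg's build rather than PHP's, so the configure script being run may not be PHP's top-level one. A quick sanity check from the PHP source root:

        # Run from the directory containing PHP's own configure script;
        # a genuinely supported option will appear in the help text.
        cd php-5.4.8
        ./configure --help | grep -- --with-openssl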

    Read the article

  • Rsync: Only preserve meta (time, group, etc) on files and sub-directories, not root directory

    - by Svish
    I am copying some files (all except hidden ones) using rsync from one place to another using this command:

        rsync -Cav --delete --exclude=.* /Some/Directory/ other-host:/Other/Directory

    It works nicely, except that I get the following errors:

        rsync: chgrp "/Other/Directory/." failed: Operation not permitted (1)
        rsync: failed to set times on "/Other/Directory/.": Permission denied (13)

    That is understandable, because I do in fact not have those permissions, and I also do not want to change the group of that directory. I only want to do this for all the files and directories that are in that directory. Is there any way to solve this? I tried --exclude=. and --exclude=./, but those didn't work. Any ideas? I have no idea how to fix this...

    More details: This is on Mac OS X, and the directories I am syncing are from a locally mounted volume to the /Users/Shared/ directory on the other host. That directory has user root and group wheel. The files inside it have user admin and group staff, and so does the local source directory.
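    A hedged sketch of a tradeoff rather than an exact fix: rsync's -O (--omit-dir-times) flag stops it from setting times on directories - including the destination root - and --no-group turns off group preservation; both apply to everything transferred, not only the root:

        # -O skips directory mtimes; --no-group skips group preservation.
        # Both silence the root-directory errors at the cost of that
        # metadata on all directories/files.
        rsync -Cav -O --no-group --delete --exclude=.* /Some/Directory/ other-host:/Other/Directory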

    Read the article

  • uWSGI cannot find "application" using Flask and Virtualenv

    - by skyler
    Using uWSGI to serve a simple WSGI app (a simple "Hello, World") my configuration works, but when I try to run a Flask app, I get this in uWSGI's error logs:

        current working directory: /opt/python-env/coefficient/lib/python2.6/site-packages
        writing pidfile to /var/run/uwsgi.pid
        detected binary path: /opt/uwsgi/uwsgi
        setuid() to 497
        your memory page size is 4096 bytes
        detected max file descriptor number: 1024
        lock engine: pthread robust mutexes
        uwsgi socket 0 bound to TCP address 127.0.0.1:3031 fd 3
        Python version: 2.6.6 (r266:84292, Jun 18 2012, 14:18:47) [GCC 4.4.6 20110731 (Red Hat 4.4.6-3)]
        Set PythonHome to /opt/python-env/coefficient/
        *** Python threads support is disabled. You can enable it with --enable-threads ***
        Python main interpreter initialized at 0xbed3b0
        your server socket listen backlog is limited to 100 connections
        *** Operational MODE: single process ***
        added /opt/python-env/coefficient/lib/python2.6/site-packages/ to pythonpath.
        unable to find "application" callable in file /var/www/coefficient/flask.py
        unable to load app 0 (mountpoint='') (callable not found or import error)
        *** no app loaded. going in full dynamic mode ***
        *** uWSGI is running in multiple interpreter mode ***

    Note in particular this part of the log:

        unable to find "application" callable in file /var/www/coefficient/flask.py
        unable to load app 0 (mountpoint='') (callable not found or import error)
        *** no app loaded. going in full dynamic mode ***

    This is my Flask app:

        from flask import Flask
        app = Flask(__name__)

        @app.route("/")
        def hello():
            return "Hello, World, from Flask!"

    Before I added my virtualenv's pythonpath to my configuration file, I was getting an ImportError for Flask. I believe I've solved that, though (I'm not receiving errors about it anymore). Here is my complete configuration file:

        uwsgi:
          #socket: /tmp/uwsgi.sock
          socket: 127.0.0.1:3031
          daemonize: /var/log/uwsgi.log
          pidfile: /var/run/uwsgi.pid
          master: true
          vacuum: true
          #wsgi-file: /var/www/coefficient/coefficient.py
          wsgi-file: /var/www/coefficient/flask.py
          processes: 1
          virtualenv: /opt/python-env/coefficient/
          pythonpath: /opt/python-env/coefficient/lib/python2.6/site-packages

    This is how I start uWSGI, from an rc script:

        /opt/uwsgi/uwsgi --yaml /etc/uwsgi/conf.yaml --uid uwsgi

    And if I try to view the Flask program in a browser, I get this:

        uWSGI Error
        Python application not found

    Any help is appreciated.
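    Two hedged observations suggest themselves from the log (neither is from the original post). First, uWSGI looks for a callable named "application" by default, while the Flask object above is named app; uWSGI's callable option covers that. Second, naming the file flask.py shadows the flask package itself on the import path, so renaming it is probably wise. A sketch of the adjusted yaml (the new file name is a placeholder):

        uwsgi:
          socket: 127.0.0.1:3031
          daemonize: /var/log/uwsgi.log
          pidfile: /var/run/uwsgi.pid
          master: true
          vacuum: true
          # renamed from flask.py so "from flask import Flask" can't
          # accidentally import this file (an assumption worth testing)
          wsgi-file: /var/www/coefficient/coefficient_app.py
          # tell uWSGI the WSGI callable is "app", not "application"
          callable: app
          processes: 1
          virtualenv: /opt/python-env/coefficient/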

    Read the article

  • setting up git on cygwin - openssl

    - by user23020
    I'm trying to get git running in Cygwin on a Windows 7 machine. I have git unpacked in the directory git-1.7.1.1, and when I run make install from within that directory, I get:

        CC fast-import.o
        In file included from builtin.h:4,
                         from fast-import.c:147:
        git-compat-util.h:136:19: iconv.h: No such file or directory
        git-compat-util.h:140:25: openssl/ssl.h: No such file or directory
        git-compat-util.h:141:25: openssl/err.h: No such file or directory
        In file included from builtin.h:6,
                         from fast-import.c:147:
        cache.h:9:21: openssl/sha.h: No such file or directory
        In file included from fast-import.c:156:
        csum-file.h:10: error: parse error before "SHA_CTX"
        csum-file.h:10: warning: no semicolon at end of struct or union
        csum-file.h:15: error: 'crc32' redeclared as different kind of symbol
        /usr/include/zlib.h:1285: error: previous declaration of 'crc32' was here
        csum-file.h:15: error: 'crc32' redeclared as different kind of symbol
        /usr/include/zlib.h:1285: error: previous declaration of 'crc32' was here
        csum-file.h:17: error: parse error before '}' token
        fast-import.c: In function `store_object':
        fast-import.c:995: error: `SHA_CTX' undeclared (first use in this function)
        fast-import.c:995: error: (Each undeclared identifier is reported only once
        fast-import.c:995: error: for each function it appears in.)
        fast-import.c:995: error: parse error before "c"
        fast-import.c:1000: warning: implicit declaration of function `SHA1_Init'
        fast-import.c:1000: error: `c' undeclared (first use in this function)
        fast-import.c:1001: warning: implicit declaration of function `SHA1_Update'
        fast-import.c:1003: warning: implicit declaration of function `SHA1_Final'
        fast-import.c: At top level:
        fast-import.c:1118: error: parse error before "SHA_CTX"
        fast-import.c: In function `truncate_pack':
        fast-import.c:1120: error: `to' undeclared (first use in this function)
        fast-import.c:1126: error: dereferencing pointer to incomplete type
        fast-import.c:1127: error: dereferencing pointer to incomplete type
        fast-import.c:1128: error: dereferencing pointer to incomplete type
        fast-import.c:1128: error: `ctx' undeclared (first use in this function)
        fast-import.c: In function `stream_blob':
        fast-import.c:1140: error: `SHA_CTX' undeclared (first use in this function)
        fast-import.c:1140: error: parse error before "c"
        fast-import.c:1154: error: `pack_file_ctx' undeclared (first use in this function)
        fast-import.c:1154: error: dereferencing pointer to incomplete type
        fast-import.c:1160: error: `c' undeclared (first use in this function)
        make: *** [fast-import.o] Error 1

    I'm guessing that most of these errors are due to the iconv.h and openssl headers, which apparently are missing, but I can't figure out how I'm supposed to install those (if I am), or if there is some other way to get around this.
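    A hedged pointer, not from the original post: the missing headers normally come from Cygwin's libiconv-devel and openssl-devel packages (installable via Cygwin's setup.exe). Alternatively, git's own Makefile has switches to build without those dependencies:

        # Build git without the OpenSSL and iconv dependencies entirely;
        # NO_OPENSSL and NO_ICONV are standard git Makefile knobs.
        make NO_OPENSSL=1 NO_ICONV=1 install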

    Read the article

  • FastCGI and Apache 500 error intermittently

    - by benkorn1
    I have a FastCGI (mod_fastcgi) problem. It happens every once in a while, and does not cause a complete server meltdown - just 500 errors. Here are a couple of things. First, I am using APC, so PHP is in control of its own processes, not FastCGI. Also, I have the web root set as /var/www/html, and the fcgi-bin inside it at /var/www/html/fcgi-bin.

    First off, here is the Apache error_log:

        [Fri Jan 07 10:22:39 2011] [error] [client 50.16.222.82] (4)Interrupted system call: FastCGI: comm with server "/var/www/html/fcgi-bin/php.fcgi" aborted: select() failed, referer: http://www.domain.com/

    I also ran strace on the 'fcgi-pm' process. Here is a snippet from the trace around the time it bombs out:

        21725 gettimeofday({1294420603, 14360}, NULL) = 0
        21725 read(14, "C /var/www/html/fcgi-bin/php.fcgi - - 6503 38*", 16384) = 46
        21725 alarm(131) = 0
        21725 select(15, [14], NULL, NULL, NULL) = 1 (in [14])
        21725 alarm(0) = 131
        21725 gettimeofday({1294420603, 96595}, NULL) = 0
        21725 read(14, "C /var/www/html/fcgi-bin/php.fcgi - - 6154 23*C /var/www/html/fcgi-bin/php.fcgi - - 6483 28*", 16384) = 92
        21725 alarm(131) = 0
        21725 select(15, [14], NULL, NULL, NULL) = 1 (in [14])
        21725 alarm(0) = 131
        21725 gettimeofday({1294420603, 270744}, NULL) = 0
        21725 read(14, "C /var/www/html/fcgi-bin/php.fcgi - - 5741 38*", 16384) = 46
        21725 alarm(131) = 0
        21725 select(15, [14], NULL, NULL, NULL) = 1 (in [14])
        21725 alarm(0) = 131
        21725 gettimeofday({1294420603, 311502}, NULL) = 0
        21725 read(14, "C /var/www/html/fcgi-bin/php.fcgi - - 6064 32*", 16384) = 46
        21725 alarm(131) = 0
        21725 select(15, [14], NULL, NULL, NULL) = 1 (in [14])
        21725 alarm(0) = 131
        21725 gettimeofday({1294420603, 365598}, NULL) = 0
        21725 read(14, "C /var/www/html/fcgi-bin/php.fcgi - - 6179 33*C /var/www/html/fcgi-bin/php.fcgi - - 5906 59*", 16384) = 92
        21725 alarm(131) = 0
        21725 select(15, [14], NULL, NULL, NULL) = 1 (in [14])
        21725 alarm(0) = 131
        21725 gettimeofday({1294420603, 454405}, NULL) = 0

    I noticed that the select() calls look the same throughout, while the read() return value changes from 46 to some other number while it is bombing out. Has anyone seen anything like this? Could this be some sort of file locking? Thanks, Ben
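    Not from the original post, but one hedged knob worth looking at: mod_fastcgi's per-application -idle-timeout option governs how long mod_fastcgi waits for the application to respond; if php.fcgi occasionally stalls past it, communication is aborted much like the log shows. A sketch (the path matches the log above; the timeout value is an arbitrary example):

        FastCgiServer /var/www/html/fcgi-bin/php.fcgi -idle-timeout 120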

    Read the article
