Search Results

Search found 20933 results on 838 pages for 'jeff post'.

  • mod_cache not working

    - by Pistos
    I have a PHP site with many dynamically generated pages. I'm trying to use mod_cache to help boost performance, because in most cases content does not change within a given day. I have configured mod_cache as best I could, following examples around the web, including the mod_cache page on apache.org. With LogLevel debug, I see a bit of information about the caching that is [not] happening. There are plenty of pairs of lines like this:

        [Fri Jun 01 17:28:18 2012] [debug] mod_cache.c(141): Adding CACHE_SAVE filter for /foo/bar
        [Fri Jun 01 17:28:18 2012] [debug] mod_cache.c(148): Adding CACHE_REMOVE_URL filter for /foo/bar

    Which is fine, because I've set CacheEnable disk /foo to indicate that I want everything under /foo cached. I'm new to mod_cache, but my understanding of these lines is that mod_cache has acknowledged the URL is supposed to be cached; there should then be more lines indicating that it is saving the data to the cache, and later retrieving it on subsequent hits to the same URL. I can hit the same URL till I'm blue in the face, whether with F5 refreshing or not, with different browsers, or from different computers. It's always that pair of lines that shows in the logs, and nothing else. When I set CacheEnable disk /, I see more activity. But I don't want to cache the entire site, and there are many, many different subpaths, so I don't want to have to modify code to set no-cache headers in all the necessary places. I'll mention that mod_rewrite is in use here, rewriting /foo/bar to something like index.php?baz=/foo/bar, but my understanding is that mod_cache uses the pre-rewrite URL, not the post-rewrite URL. As far as I can tell, the response headers are not getting in the way of caching. Here's an example from one hit:

        Cache-Control: must-revalidate, max-age=3600
        Connection: Keep-Alive
        Content-Encoding: gzip
        Content-Length: 16790
        Content-Type: text/html
        Date: Fri, 01 Jun 2012 21:43:09 GMT
        Expires: Fri, 1 Jun 2012 18:43:09 -0400
        Keep-Alive: timeout=15, max=100
        Pragma:
        Server: Apache
        Vary: Accept-Encoding

    The mod_cache config is as follows:

        CacheRoot /var/cache/apache2/
        CacheDirLevels 3
        CacheDirLength 2
        CacheEnable disk /foo

    What is getting in the way of mod_cache doing its job of caching?
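
    A hedged place to start (these are standard mod_cache/mod_disk_cache directives, but whether any of them is the culprit here is an assumption) is to loosen mod_cache's cacheability checks one directive at a time and watch the debug log. Note too that the Expires header above is not in RFC 1123 format (single-digit day, non-GMT offset), which a strict cache may treat as invalid or already stale:

        # cache-test.conf (an experiment, not a known fix) -- e.g. dropped
        # into /etc/apache2/conf.d/ alongside the existing directives:
        CacheEnable disk /foo
        CacheIgnoreNoLastMod On      # cache responses lacking a Last-Modified header
        CacheIgnoreQueryString On    # treat /foo/bar?x=1 and /foo/bar as one entity
        CacheIgnoreCacheControl On   # ignore no-cache/no-store on incoming requests

        # Then reload and check whether CACHE_SAVE actually writes entries:
        #   sudo apache2ctl graceful
        #   curl -s -D - -o /dev/null http://localhost/foo/bar   (run it twice)
        #   grep -i cache /var/log/apache2/error.log | tail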

  • CUPS causes printer to click and doesn't print

    - by Pez Cuckow
    I'm suffering a strange problem with my Canon iP4850 when trying to use CUPS on a Raspberry Pi (this is not RPi-specific, please do not vote to move it). When I plug the printer into my laptop (OS X) or my desktop (Windows 7) it identifies as an iP4800 and prints perfectly. So I plug it into the Pi (running Debian), set it up in CUPS, enable sharing, and can now see the iP4800 series shared on the network. However, if I print to it (using AirPrint etc.), the file gets to CUPS safely (it shows in the queue), but when it tries to print, the printer clicks (like a loud thunk) 3 or 4 times and then gives in, with a double amber flashing light. In CUPS it shows as job completed. Do you know why using the Pi and CUPS would cause what appears to be a hardware fault, and what I can do to fix the problem or to provide further debug info? Thanks for your time!

        Description: Canon iP4800 series
        Location: Lounge
        Driver: Canon PIXMA iP4800 - CUPS+Gutenprint v5.2.9 (color, 2-sided printing)
        Connection: usb://Canon/iP4800%20series?serial=2239B2

    Note: I've tried deleting and re-adding the printer on the laptop, the desktop and the Pi, and the results are always the same. Log for plugging in the printer and printing (attempting to) something until the printer turned off again:

        pi@pezpi /var/log $ dmesg
        [ 7284.176336] usb 1-1.2: new high speed USB device number 8 using dwc_otg
        [ 7284.279703] usb 1-1.2: New USB device found, idVendor=04a9, idProduct=10d5
        [ 7284.279750] usb 1-1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
        [ 7284.279771] usb 1-1.2: Product: iP4800 series
        [ 7284.279786] usb 1-1.2: Manufacturer: Canon
        [ 7284.279800] usb 1-1.2: SerialNumber: 2239B2

    Setting CUPS to verbose (change LogLevel in cupsd.conf to debug or debug2):

        pi@pezpi /var/log $ sudo vim /etc/cups/cupsd.conf
        pi@pezpi /var/log $ sudo /etc/init.d/cups restart
        [ ok ] Restarting Common Unix Printing System: cupsd.

    The log from /var/log/cups/error_log is at http://pastebin.com/7VZMRMrG (too large to post here). The log contains, in order (I deleted the log and then did the following): restarting the cups server, attempting to print a test page x2, printing from 192.168.1.90 via AirPrint, printing from 192.168.1.90 via network print, turning the printer off and on again.
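
    A sketch of how one might gather more signal on the Pi before blaming hardware (standard CUPS tooling; the queue name is an assumption) - the interesting question is what the Gutenprint filter sends compared to the vendor drivers on OS X and Windows:

        # Turn on CUPS debug logging without hand-editing cupsd.conf
        sudo cupsctl --debug-logging
        # Confirm the queue and which PPD/driver it actually uses
        lpstat -p -d
        grep -i 'NickName\|cupsFilter' /etc/cups/ppd/*.ppd
        # Send a tiny job and watch which filters and backend run
        echo "test" | lp -d iP4800_series        # queue name is illustrative
        sudo tail -f /var/log/cups/error_log | grep -i 'filter\|backend\|usb'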

  • Karmic iptables missing kernel modules on OpenVZ container

    - by luison
    After an unsuccessful P2V migration of my Ubuntu server to an OpenVZ container, which I am stuck with, I thought I would give a reinstall a try, based on a clean OpenVZ template for Ubuntu 9.10 (from the OpenVZ wiki). When I try to load my iptables rules on the VM I get errors which I believe are related to kernel modules not being loaded on the VM from the /vz/XXX.conf template model. I've been testing with a few posts I've found, but I was stuck with the error:

        WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
        FATAL: Could not load /lib/modules/2.6.24-10-pve/modules.dep: No such file or directory
        iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw'
        Error occurred at line: 2
        Try `iptables-restore -h' or 'iptables-restore --help' for more information.

    I read about the template not loading all iptables modules, so I added modules to the XXX.conf of the VZ virtual machine like this:

        IPTABLES="ip_tables iptable_filter iptable_mangle ipt_limit ipt_multiport ipt_tos ipt_TOS ipt_REJECT ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_LOG ipt_length ip_conntrack ip_conntrack_ftp ip_conntrack_irc ipt_conntrack ipt_state ipt_helper iptable_nat ip_nat_ftp ip_nat_irc"

    As the error remained, I read that I should rebuild dependencies on the virtual machine with depmod -a, but this returned an error:

        WARNING: Couldn't open directory /lib/modules/2.6.24-10-pve: No such file or directory
        FATAL: Could not open /lib/modules/2.6.24-10-pve/modules.dep.temp for writing: No such file or directory

    So I read about creating the directory empty and running "depmod -a" again. I no longer get the dependencies error, but I now get the following, and I don't have a clue how to proceed:

        WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
        FATAL: Module ip_tables not found.
        iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw'
        Error occurred at line: 2
        Try `iptables-restore -h' or 'iptables-restore --help' for more information.

    I understand that iptables rules have to be different on the VM, and perhaps some of the rules we are trying to apply (from our physical server) are not compatible, but these are just source IP and destination port checks that I would like to have available. I've heard that there are no such issues with the CentOS template, so I understand it is to do with the VM config. Any help would be greatly appreciated.
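
    For what it's worth, under OpenVZ the netfilter modules live in the hardware node's kernel, not the container's, so depmod inside the container can't help. A sketch of the host-side sequence (container ID 101 and the module list are illustrative):

        # On the hardware node (host), not in the container:
        modprobe -a ip_tables iptable_filter iptable_nat ipt_state
        # Grant the container access to those tables, then restart it
        vzctl set 101 --iptables 'ip_tables,iptable_filter,iptable_nat,ipt_state' --save
        vzctl restart 101
        # Inside the container, iptables-restore should now find the tables
        vzctl enter 101
        iptables-restore < /etc/iptables.rules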

  • How can I debug solutions in Visual Studio 2010 from a network share?

    - by alastairs
    I've recently got a new Mac laptop and am running VS2010 in a Parallels virtual machine. It's mostly working out well for me, but I'm having some problems with debugging specific project types, related to the fact that the projects are being accessed via a network share. Test projects don't run because the test runner can't load the tests' DLL. Web projects fail to run in the Visual Studio mini web server, throwing the following exception: 'An error occurred loading a configuration file: Failed to start monitoring changes to path\to\web.config'. I've spent the evening trawling the web with little luck on this. After reading these two posts, I tried out the usual CasPol changes, but then found this post from one of the early VS2010 betas indicating that CasPol is no longer needed/supported in .NET 4.0 and VS2010. The network share is accessible via both a mapped drive and the UNC path. The virtual machine runs its applications under the administrator account, which appears to have all the necessary permissions on the network share to create, read, write and delete files and folders. I say "appears to have" as I can't view the Security Properties of the appropriate folder via Explorer: the Security tab just isn't present. Has anyone managed to successfully load and debug web and test projects from a network share in VS2010?
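
    One .NET 4.0-specific angle that replaced the old CasPol dance: assemblies loaded from a network location run with full trust only if you opt in. A hedged fragment (the element is standard .NET 4.0; that it cures this particular failure is an assumption):

        <!-- Added under <configuration> in devenv.exe.config (and/or the test
             runner's .config). Assumption: the failure is the .NET 4 restriction
             on remote-loaded assemblies rather than share permissions. -->
        <runtime>
          <loadFromRemoteSources enabled="true" />
        </runtime>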

  • Blinking power button

    - by Mike Ramsey
    A friend asked me to look at his Gateway DX4640 desktop. When he presses the power button, power goes to the mobo (NVIDIA nForce 630i MCP73PV, GeForce 7100 chipset) and the CPU fan starts spinning. The power button slowly blinks on and off (blue) and the screen briefly says "no signal" and then goes black. And nothing else; no POST code beeps. My initial two conjectures were:

    1. Vista was stuck in sleep/hibernation mode, or
    2. A power-off had left the mobo in a bad state.

    The fix for both is to:

    a) Unplug the AC power cord
    b) Hold the power button for 30 seconds to fully discharge the mobo

    It didn't help. I left the system unplugged from AC power for an hour. No change. I am out of ideas. Has anybody seen anything like this before? What does a blinking blue power button mean? How can I get more data points to guide troubleshooting? --Thank you, --Mike

  • Error - "IR Hardware not detected" - but it's installed/working

    - by Robert
    I am trying to do Settings - TV - Set up TV signal. During this process I get the error "IR Hardware not detected." With the remote, I can select the "try again" button (to re-detect) and it tries again, so the remote works. Plugging in the IR blaster doesn't change anything. (I wouldn't expect any difference, but I read a post which said you needed that. I will get Media Center to change channels if I can get that working - but first things first.) I was able to do the setup months ago when I had cable, and everything was fine. I just got DirecTV. (BTW - during the above process, Media Center detects the signal coming in on channel 3. Windows XP Media Center SP3. The TV tuner card is a Pinnacle TCTV HD PCI. Everything - and I mean everything - has the latest firmware and drivers, as of 4 months ago when I fixed a different problem. So I DON'T WANT TO HEAR the standard answer to check drivers/firmware. THANK YOU.) Thanks for any help.

  • Is there a Windows 7 compatible IPSec VPN client that allows protocol and port specific rules?

    - by Sani Huttunen
    As the title says, I need to find an IPSec VPN client for Windows 7. On XP and Vista we've used SafeNet SoftRemote, in which you can set up rules for specific protocols and ports. But SoftRemote isn't compatible with Windows 7.

        172.xxx.xxx.1  TCP 1433
        172.xxx.xxx.2  TCP 1433
        172.xxx.xxx.10 ALL
        ...

    Since the VPN gateway is configured this way, the client must mirror these settings. I've tried TheGreenBow, NCP Secure Entry, Cisco VPN Client and Shrew Soft VPN, but none of these allows you to configure by protocol and port. Does anyone have any other suggestions? EDIT: Forgot to mention that aggressive mode is also a requirement. --UPDATE-- I've got some news... I've managed to get SoftRemote to work on Windows 7 x64 through Windows XP Mode. After scouring all corners of the Internet for ideas, I had enough information to construct a working solution. This solution will probably benefit other clients as well! You'll find a post here with detailed instructions on how I went about it.

  • Debian, Apache2, CGI: paths issue

    - by Bubnoff
    I have a Perl form email script in the server's cgi-bin directory (/usr/lib/cgi-bin). From /etc/apache2/sites-enabled/000-default:

        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory "/usr/lib/cgi-bin">
            AllowOverride None
            Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
            AddHandler cgi-script .cgi .pl
            Order allow,deny
            Allow from all
        </Directory>

    The issue is with paths. The HTML calls the script here:

        <form name="Request" method="post" action="http://server-test.local/cgi-bin/formprocessorpro.pl" onsubmit="return checkWholeForm49874(this)">

    The directory with the templates and configs is passed here:

        <input type="hidden" name="base_path" value="../contact" />

    The path to this form is http://server-test.local/formstest/contact.htm. No matter what variation I try for the base_path, I get an error from the formprocessor script that it can't find the directory:

        An error occurred when opening the Form Configuration File (../contact/form.cfg): No such file or directory.

    I need to move this script from an old server, configured by a previous sysadmin, to a new server. Since cgi-bin is automatically linked to /usr/lib/cgi-bin, and linked such that the script resides at http://server-test.local/cgi-bin/formprocessorpro.pl, I would imagine that, given that the templates are in the webroot in a directory called contact, the correct path would be ../contact. Any ideas? It's been a while since I've messed with CGI. Bubnoff
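
    One detail worth checking (a sketch; the real DocumentRoot is an assumption): a CGI script's relative paths resolve against its working directory, which is normally the script's own directory, so from /usr/lib/cgi-bin the value ../contact points at /usr/lib/contact, not at the webroot:

        # Confirm the real webroot (assumed /var/www below):
        grep -i DocumentRoot /etc/apache2/sites-enabled/000-default
        # What "../contact" actually resolves to from the script's directory:
        cd /usr/lib/cgi-bin && readlink -f ../contact   # -> /usr/lib/contact
        # Probable fix: pass an absolute base_path in the form instead:
        #   <input type="hidden" name="base_path" value="/var/www/contact" />
        ls -ld /var/www/contact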

  • "Password Server: Stopped" on Mac OS Lion Server. Stops with error -1 during startup

    - by V1ru8
    Since I restored Open Directory from an archive (my server crashed and the DB was corrupt), the password server does not start anymore. The log looks like this:

        Feb 14 2012 21:41:20 156746us Mac OS X Password Service version 376.1 (pid = 2438) was started at: Tue Feb 14 21:41:20 2012.
        Feb 14 2012 21:41:20 156801us RunAppThread Created
        Feb 14 2012 21:41:20 156852us RunAppThread Started
        Feb 14 2012 21:41:20 156879us Initializing Server Globals ...
        Feb 14 2012 21:41:20 163094us Initializing Networking ...
        Feb 14 2012 21:41:20 163196us Initializing TCP ...
        Feb 14 2012 21:41:20 191790us SASL is using realm "SERVER.HOME.POST-NET.CH"
        Feb 14 2012 21:41:20 191847us Starting Central Thread ...
        Feb 14 2012 21:41:20 191860us Starting other server processes ...
        Feb 14 2012 21:41:20 191873us StartCentralThreads: 1 threads to stop
        Feb 14 2012 21:41:20 191905us Initializing TCP ...
        Feb 14 2012 21:41:20 191954us Starting TCP/IP Listener on ethernet interface, port 106
        Feb 14 2012 21:41:20 192012us Starting TCP/IP Listener on ethernet interface, port 3659
        Feb 14 2012 21:41:20 192048us Starting TCP/IP Listener on interface lo0, port 106
        Feb 14 2012 21:41:20 192082us Starting TCP/IP Listener on interface lo0, port 3659
        Feb 14 2012 21:41:20 192117us StartCentralThreads: Created 4 TCP/IP Connection Listeners
        Feb 14 2012 21:41:20 192132us Starting UNIX domain socket listener /var/run/passwordserver
        Feb 14 2012 21:41:20 193034us CRunAppThread::StartUp: caught error -1.
        Feb 14 2012 21:41:20 193056us ** ERROR: The Server received an error during startup. See error log for details.
        Feb 14 2012 21:41:20 193075us RunAppThread::StartUp() returned: 4294967295
        Feb 14 2012 21:41:20 193107us Stopping server processes ...
        Feb 14 2012 21:41:20 193119us Stopping Network Processes ...
        Feb 14 2012 21:41:20 193131us Deinitializing networking ...
        Feb 14 2012 21:41:20 193149us Server Processes Stopped ...
        Feb 14 2012 21:41:20 193165us RunAppThread Stopped
        Feb 14 2012 21:41:20 193202us Aborting Password Service. See error log.

    The error log repeats the following:

        Feb 14 2012 21:41:50 409022us Server received error -1 during startup.
        Feb 14 2012 21:41:50 409141us Aborting Password Service.

    Anyone have an idea what's wrong here and how I can fix this?
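
    Reading the trace, startup dies immediately after the UNIX domain socket listener line, so a stale socket file or an orphaned process holding it is one guess worth ruling out (a sketch, not a known fix; the launchd label is an assumption):

        # Is anything still holding the password service socket or its ports?
        sudo lsof /var/run/passwordserver
        sudo lsof -i :106 -i :3659
        ls -l /var/run/passwordserver
        # If the file exists but nothing owns it, moving it aside before a
        # restart is a low-risk experiment:
        sudo mv /var/run/passwordserver /var/run/passwordserver.stale
        sudo launchctl stop com.apple.PasswordService    # label is an assumption
        sudo launchctl start com.apple.PasswordService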

  • Ubuntu 12.04 - HP ProLiant DL380 G4 - Load Maxes Out / Unresponsive

    - by Brian S
    Trying to post this question on here; I've posted it on the Ubuntu forums as well, with no replies. Recently I upgraded an HP ProLiant DL380 G4 server from Ubuntu 10.04 server to Ubuntu 12.04 server. Upon doing so, the server will now - at random times - get to a load of 400+ and then become completely unresponsive. I use an SNMP graphing program (Cacti) and the load steadily increases by about 10 every five minutes until it gets over 400 and the graphing stops. The graphs may not be accurate, but the CPU load averages about 3% before this happens - and right when the load starts increasing, it jumps to about 25% for 15 minutes and dips dramatically down to less than 1% (about 0.3%) until the graphing stops. I'm not able to open an SSH tunnel to the server to do anything. I've checked /var/log/syslog and all logging stops at that time as well, with nothing else in there. The odd thing is, the server still responds to DNS queries for the zones it is authoritative on during this time - and at normal speed. I'm just not sure what the next step would be to find out what is going on, and how this issue can be corrected. The server cannot stay with Ubuntu 10.04 Server and needs to stay upgraded.
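
    Since the box stops responding before anything can be observed, leaving lightweight samplers running beforehand is one way to get data out of the next hang. A sketch (package names are standard Ubuntu; the cron.d path and log file are illustrative):

        # Install history samplers so there's a record of the climb
        sudo apt-get install sysstat atop
        sudo sed -i 's/ENABLED="false"/ENABLED="true"/' /etc/default/sysstat
        sudo service sysstat restart
        # A cron.d entry logging uninterruptible (D-state) tasks once a minute;
        # soaring load with ~0% CPU usually means tasks stuck in I/O wait
        echo '* * * * * root ps -eo state,pid,comm,wchan= | grep "^D" >> /var/log/dstate.log' \
          | sudo tee /etc/cron.d/dstate
        # After the next hang, review with: sar -q, atop -r, cat /var/log/dstate.log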

  • Kickstart CentOS 6 prompting for TCP/IP with network set to DHCP

    - by Andy Shinn
    I am trying to stop my kickstart CentOS install prompting me for TCP/IP information. After I click through this prompt (keeping IPv4 and IPv6 at their defaults) the installation continues and completes just fine. Below is my kickstart file:

        # Andy's super awesome VM kickstart file
        install
        url --url=http://mirrors.kernel.org/centos/6/os/x86_64
        lang en_US.UTF-8
        keyboard us
        text
        %include /tmp/network.ks
        rootpw --iscrypted $6$RA8DyrNTsVJkGIgY$ohZ62HHiOjNnn1yDMZlIu3lQ63D3plGPcbVZtPKE8Oq6Z.IGUgN.kNLkxs/ZymZuluRDWsW2eey5zLOl2G3mp.
        firewall --service=ssh
        authconfig --enableshadow --passalgo=sha512
        selinux --disabled
        timezone America/Los_Angeles
        bootloader --location=mbr --driveorder=vda --append="crashkernel=auto rhgb quiet"
        # The following is the partition information you requested
        # Note that any partitions you deleted are not expressed
        # here so unless you clear all partitions first, this is
        # not guaranteed to work
        zerombr
        clearpart --all --drives=vda --initlabel
        part /boot --fstype=ext4 --size=500
        part pv.253002 --grow --size=1
        volgroup vg1 --pesize=4096 pv.253002
        logvol / --fstype=ext4 --name=lv_root --vgname=vg1 --grow --size=1024 --maxsize=51200
        logvol swap --name=lv_swap --vgname=vg1 --grow --size=4032 --maxsize=4032
        repo --name="CentOS" --baseurl=http://mirrors.kernel.org/centos/6/os/x86_64 --cost=100
        repo --name="Puppet Labs Products" --baseurl=http://yum.puppetlabs.com/el/6/products/x86_64
        repo --name="Puppet Labs Dependencies" --baseurl=http://yum.puppetlabs.com/el/6/dependencies/x86_64
        repo --name="EyeFi" --baseurl=http://flexo.eye.fi/6/eye-fi-api

        %packages
        @core
        @server-policy
        puppet
        facter
        %end

        %pre --erroronfail
        #!/bin/bash
        for x in `cat /proc/cmdline`; do
          case $x in
            SERVERNAME*)
              eval $x
              echo "network --onboot yes --device eth0 --bootproto dhcp --hostname ${SERVERNAME}.eye.fi" > /tmp/network.ks
              ;;
          esac;
        done
        %end

        %post
        puppet agent --waitforcert 10 --onetime --no-daemon --pluginsync --server puppet.eye.fi
        %end

        reboot

    My kernel arguments are in the following virt-install command that I use to start the install:

        virt-install -n zabbix -r 2048 --vcpus=2 -l http://mirrors.kernel.org/centos/6/os/x86_64 \
          --disk /dev/vg_inf1/zabbix --network bridge=br85 \
          --initrd-inject=/home/ashinn/vm_kickstart \
          --extra-args "ks=file:/vm_kickstart SERVERNAME=zabbix" --autostart

    During the install, I can pull up a console on the second terminal and verify the contents of /tmp/network.ks:

        network --onboot=yes --bootproto=dhcp --ipv6=auto --hostname=jenkins2.mydomain.com

    Why might Anaconda be prompting for the TCP/IP settings when they are already set to DHCP?
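
    One hedged explanation: the network --bootproto dhcp line only takes effect once Anaconda parses the kickstart, but with a network url install the loader has to bring eth0 up earlier to fetch the install tree, and it prompts when nothing on the kernel command line answers for it. A sketch of the same virt-install call with loader-stage answers added (ksdevice/ip are the RHEL6-era loader's flags; untested here):

        virt-install -n zabbix -r 2048 --vcpus=2 \
          -l http://mirrors.kernel.org/centos/6/os/x86_64 \
          --disk /dev/vg_inf1/zabbix --network bridge=br85 \
          --initrd-inject=/home/ashinn/vm_kickstart \
          --extra-args "ks=file:/vm_kickstart SERVERNAME=zabbix ksdevice=eth0 ip=dhcp" \
          --autostart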

  • YUM error. Is this a cert error?

    - by Julia Roberts
        Nov 13 13:38:57 host abrt: detected unhandled Python exception in '/usr/bin/yum'
        Nov 13 13:38:57 host abrtd: New client connected
        Nov 13 13:38:57 host abrt-server[3508]: Saved Python crash dump of pid 3151 to /var/spool/abrt/pyhook-2012-11-13-13:38:57-3151
        Nov 13 13:38:57 host abrtd: Directory 'pyhook-2012-11-13-13:38:57-3151' creation detected
        Nov 13 13:38:57 host abrtd: Can't load public GPG key /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta
        Nov 13 13:38:57 host abrtd: Can't load public GPG key /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-legacy-former
        Nov 13 13:38:57 host abrtd: Can't load public GPG key /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-legacy-release
        Nov 13 13:38:57 host abrtd: Can't load public GPG key /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-legacy-rhx
        Nov 13 13:38:57 host abrtd: Can't load public GPG key /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
        Nov 13 13:38:57 host abrtd: Package 'yum' isn't signed with proper key
        Nov 13 13:38:57 host abrtd: 'post-create' on '/var/spool/abrt/pyhook-2012-11-13-13:38:57-3151' exited with 1
        Nov 13 13:38:57 host abrtd: Corrupted or bad directory /var/spool/abrt/pyhook-2012-11-13-13:38:57-3151, deleting

    There is also nothing in the crash dump file. Ideas?

        yum update
        Loaded plugins: fastestmirror, rhnplugin, security
        An error has occurred: Internal Server Error
        See /var/log/up2date for more information

    Is yum broken?
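
    The abrt GPG-key warnings are usually cosmetic; the "Internal Server Error" from rhnplugin is the more telling part. A sketch of an order to check things in (standard RHEL/CentOS tooling; the RHN Classic assumption is flagged inline):

        # The Internal Server Error comes from the RHN side; check the client log
        sudo tail -n 50 /var/log/up2date
        # Re-import the release key abrt complained about (path from the log)
        sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
        # Rule out stale metadata, then retry
        sudo yum clean all
        sudo yum update
        # If RHN itself is the problem, re-registering is the usual next step:
        sudo rhn_register    # assumption: RHN Classic rather than RHSM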

  • Epson Artisan 800 on Ubuntu/Linux

    - by Tim Lytle
    Update for Ubuntu 10.04: printing should work out of the box; scanning still needs the newer SANE backend.

    I'm looking for a known-good way to set up an Epson Artisan 800 on Ubuntu specifically, or any Linux box in general. It is a printer/scanner with ethernet/wifi/USB. I'd like to use it as a network printer/scanner, being able to do both from my Windows and Ubuntu machines; however, if it needs to be physically connected to a computer (preferably the Ubuntu machine), that is doable (again, then sharing print/scan functions to the network). Basically, I'm looking for someone who has used this printer/scanner (or similar) in a multi-platform environment to share how they set it up and how well it worked.

    Updated: A little more information. Like most printers (I expect), the documentation basically says "don't use plug-n-play, run our setup CD from your Windows/Mac system" to do anything (even set it up for network use). I guess that's to make it easy for anyone else to set up, but when you're looking to use it with an unsupported (by Epson's documentation) OS, you're just stuck on your own. What I was hoping for was someone who could say, "Forget the bundled software, do [this] to set it up on wifi manually, install [this] to connect to the scanner from [os], printing works with [this] driver - at least that's how I set it up." I will (and have so far) use the information here, and post my own setup when I'm done, if there's no one else out there with that experience.
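
    For the record, a hedged sketch of the CUPS-plus-SANE route on Ubuntu (the IP address is illustrative, and the driver string must come from lpinfo's output):

        # Printing: find the device URI and a matching Gutenprint driver
        lpinfo -v | grep -i -e epson -e socket
        lpinfo -m | grep -i 'artisan 800'
        # IP and driver string below are placeholders; substitute what lpinfo printed
        sudo lpadmin -p artisan800 -E -v socket://192.168.1.50:9100 -m 'MODEL-FROM-LPINFO'
        # Scanning: the epson2 SANE backend supports a network connection
        echo 'net 192.168.1.50' | sudo tee -a /etc/sane.d/epson2.conf
        scanimage -L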

  • Domain names timing out after VPS IP change

    - by Fourjays
    I rent a CentOS 5 VPS from a UK-based provider, with DirectAdmin also installed. On Thursday night, they carried out planned maintenance to change the two IPs I had been assigned to two new ones. On Friday, after the change had taken place, I updated my domain name records to reflect the IP change. Since then, all of the domains pointing to the VPS are timing out. Additionally, DirectAdmin was also not responding, but that was resolved by running the ipswap scripts as found in the DirectAdmin knowledgebase. It did not fix my domains, though. I have contacted the VPS provider but have been waiting for a response for some time now. I have checked again, and again, and all the IPs referenced in DirectAdmin are correct. If I go to the server IP in my browser, it responds with "Apache is functioning normally." Email accounts on the server are also functioning correctly. But if I access a domain itself, it times out. Running a ping and a DNS lookup, I can confirm the nameserver IPs are correct. If I run a traceroute, it reaches an IP that is similar to my VPS IPs (the last two blocks are different) before timing out (it never shows my server IP). I am relatively new to VPS management, so I don't have a vast wealth of experience with troubleshooting problems on them. I have checked all of the httpd configuration files, which don't seem to have any IP references in them at all. Looking in the Apache error logs, what errors there are do not coincide with times I have tried to access the site. Is this issue at my provider's end? Is there anything else I can check or test to rule out post-IP-change problems with my server configuration? It was all running fine prior to the IP change.
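
    The traceroute dying one hop short of the server is consistent with the provider not yet routing the new IPs, but a few checks (hostname, old IP and paths below are placeholders) help rule out the server side first:

        # Does the world resolve the domain to the *new* address?
        dig +short example.com              # via your resolver
        dig +short example.com @8.8.8.8     # via a public resolver (cache check)
        # Does the VPS actually hold and listen on the new IP?
        ip addr show | grep inet
        sudo netstat -plnt | grep ':80'
        # Any leftover references to the old IPs in DirectAdmin or Apache?
        sudo grep -r "OLD.IP.HERE" /etc/httpd /usr/local/directadmin/data 2>/dev/null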

  • How to download a Vim script on the command line?

    - by HaiYuan Zhang
    Whenever I want to install a new Vim script on the Linux server I'm working on, my typical workflow is the following:

    1. Surf to the plugin's homepage on Vim Online using FireXXXX
    2. Download the right version of the plugin to my laptop by clicking some highlighted link
    3. Upload the downloaded plugin from my laptop to the Linux server using WinSCP

    which is really inconvenient. I don't know what the magic behind this is: for the same hyperlink, clicking it in a web browser lets me download the file, but using Wget plus the hyperlink on the Linux command line ends up with nothing but an error indication. I try cool new Vim scripts quite often, so you can imagine my dismay when I have to repeat this tedious routine all the time. What are some tips that would let me download Vim scripts in a more "professional" way? Post edit: My problem is not finding a tool like Wget or cURL. The problem I have is quite specific: using these tools to download a Vim script. Let's take http://www.vim.org/scripts/script.php?script_id=30 as an example. It's the normal place where one can get the script, at least for me. But I can't find a working URL on this page that I can feed to Wget.
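
    On vim.org specifically, the visible page is script.php, but the files live behind a separate endpoint keyed by src_id rather than script_id, which is why feeding the page URL to Wget returns the page's HTML instead of the script. A sketch (the src_id value is illustrative - read it off the script page first):

        # List the download link(s) embedded in the script page
        curl -s 'http://www.vim.org/scripts/script.php?script_id=30' \
          | grep -o 'download_script.php?src_id=[0-9]*' | head -n 3
        # Fetch one of them, giving the output a sensible name
        wget -O matchit.zip 'http://www.vim.org/scripts/download_script.php?src_id=8196'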

  • Redmine plug-in fails at rake db:migrate_plugins

    - by Drew
    Hey, first post, so I hope I'm in the right place. While trying to install the Redmine plug-in 'Wiki Extensions', I keep getting stuck when I try to run the command:

        rake db:migrate_plugins RAILS_ENV=production

    I am moving servers and I'm a bit in over my head. I haven't found anything on Google that has helped me much, though I might have missed something. I have pasted in the output with --trace:

        (in /srv/www/vastpark.org/redmine)
        ** Invoke db:migrate_plugins (first_time)
        ** Invoke environment (first_time)
        ** Execute environment
        ** Execute db:migrate_plugins
        Migrating engines...
        Migrating acts_as_activity_provider...
        Migrating acts_as_attachable...
        Migrating acts_as_customizable...
        Migrating acts_as_event...
        Migrating acts_as_list...
        Migrating acts_as_searchable...
        Migrating acts_as_tree...
        Migrating acts_as_versioned...
        Migrating acts_as_watchable...
        Migrating awesome_nested_set...
        Migrating classic_pagination...
        Migrating coderay-0.9.2...
        Migrating gravatar...
        Migrating open_id_authentication...
        Migrating prepend_engine_views...
        Migrating redmine_wiki_extensions...
        == CreateWikiExtensionsComments: migrating ===================================
        -- create_table(:wiki_extensions_comments)
        rake aborted!
        An error has occurred, all later migrations canceled:
        Mysql::Error: Table 'wiki_extensions_comments' already exists: CREATE TABLE 'wiki_extensions_comments' ('id' int(11) DEFAULT NULL auto_increment PRIMARY KEY, 'wiki_page_id' int(11), 'key_word' varchar(255), 'user_id' int(11), 'comment' text, 'created_at' datetime, 'updated_at' datetime) ENGINE=InnoDB
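
    If the stray table is left over from an earlier aborted run of the same migration, the usual way out is to back it up, drop it, and re-run; a sketch assuming the database is named redmine and the table holds nothing you need:

        # Back up the table first, just in case
        mysqldump -u root -p redmine wiki_extensions_comments > wiki_comments.backup.sql
        # Drop it so the migration can recreate it cleanly
        mysql -u root -p redmine -e "DROP TABLE wiki_extensions_comments;"
        # Re-run the plugin migrations
        rake db:migrate_plugins RAILS_ENV=production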

  • Remote search system for samba shares

    - by fostandy
    I have several shares residing on a Samba server in a small business environment that I would like to provide search facilities for. Ideally this would be something like Google Desktop with some extra features (see below), but failing that, the idea is to take what I can get, or at least get an idea of what is out there. Using Google Desktop Search as a reference model, the principal additional requirement is that it is usable from clients over the network. In addition, some other notes (none of these are hard requirements):

    - The content is always files, residing on a single server, accessible from Samba shares.
    - Standard MS Office document fare.
    - Also a lot of rars and zips which it is necessary to search inside.
    - Permissions support, allowing for user-based control to reflect current permission access in Samba shares. The userbase will remain fairly static, so manual management of users is fine.
    - The majority of users will be Windows-based.

    I know there are plenty of search indexers out there: Beagle and Tracker seem to be the most popular. Most do not seem to offer access control, and web-based/remote search does not seem to be a high priority. I've also seen a recent post on the Samba mailing list asking for pretty much the exact same thing. (They mention a product called IBM OmniFind Yahoo! Edition, and while their initial reception seems positive, I am pretty skeptical. RHEL 4? Firefox 2? Updated much?) Edit: similar question here. What else is out there? Are you in a similar situation? What do you use?

  • BTrFS crashhhh?

    - by bumbling fool
    I created a new BTrFS raid10 file system using two 250GB drives and the second partition on a third 80GB drive. I created a subvol and snapshot, mounted the snapshot, and started copying 8GB of data to it. It gets to around 1GB, and then the desktop disappears and what looks like a non-interactive terminal comes up with dump/crash information. I don't have a camera handy or I'd take a picture and post it; it basically looks like stack trace info. CTRL-ALT-F7 will eventually bring back the desktop, but the entire BTrFS portion of the OS is hung and unresponsive until I reboot. I've reformatted and reproduced this problem 3 times now, and I'm about to give up :( I realize it is possible this problem is not entirely BTrFS' fault, because I'm on Natty, which is still alpha. More granular details in case I'm an idiot:

    1) Create the FS: sudo mkfs.btrfs -m raid10 -d raid10 /dev/sda2 /dev/sdb /dev/sdc
    2) Initial temporary mount: mkdir /btrfs && sudo mount -t btrfs /dev/sda2 /btrfs
    3) Create subvol: btrfs s c /btrfs/vm
    4) Create initial snapshot (optional): btrfs s sn /btrfs/cantremember.snap.something
    5) Unmount /btrfs and mount /btrfs/vm: sudo mount -t btrfs -o subvol=vm /dev/sda2 /btrfs/vm
    6) Copy data to the subvolume.
    7) Balance data across drives (optional): btrfs f bal <path> (never get to this step 7...)

    Am I doing something wrong?
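
    Since the trace scrolls past with no way to capture it, one option is to ship kernel messages to a second machine with netconsole so the oops can be posted in a bug report; a sketch with placeholder addresses:

        # On the crashing box: send kernel messages over UDP to another machine
        sudo modprobe netconsole netconsole=@/,6666@192.168.1.10/
        # On the receiving machine (192.168.1.10 here): listen and log
        nc -u -l 6666 | tee btrfs-oops.log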

  • The Server Fault Wiki of recommended practices [migrated]

    - by Avery Payne
    So I've noticed that there are several recommendations on basic practices on Server Fault, but there doesn't seem to be a cohesive view as to how those recommendations would all fit together. So I thought I would lump these together as a kind of mental exercise to see what the "Server Fault Community IT Department" would look like if it were implemented. This would give a few things: it would make a reasonable wiki (in the true wiki spirit of many contributions), it would provide several links to well-vetted practices, and it would be kind of fun to see what the amalgamation would look like. And who knows, it may even point out some interesting issues between different forms of "best practices", although I would be stunned if there was a conflict hidden in there someplace... Add your favorites from Server Fault as answers, and I'll re-edit this section with the results. Here are a few categories to collect different ideas together.

    Hardware Configuration(s)
    - Server room configuration
    - Server room temperature
    - Firmware Updates and Scheduling

    Storage Configuration(s)
    - Selecting a NAS box
    - Linux: Dealing with /tmp
    - Linux: Install apps in /var or /opt?

    Network Configuration(s)
    - Checking DNS health and compliance

    Security Practice(s)
    - Password (General) Best Practices
    - Password sharing methods
    - Windows Update
    - Updating Windows Servers that are hosts for VMs

    Network Service(s)

    User Service(s)
    - User Naming & Deletion

    Upgrade Process(es)

    Disaster Recovery
    - Checking Backups
    - Documenting an outage for a post-mortem review

    Last Edit: 2010-02-17

  • Hard drive recognized by BIOS but not by Windows

    - by tehgeekmeister
    I'm adding a new hard drive (a Seagate ST31000340NS; I had links in here but I don't have enough reputation to post them. Interestingly, the BIOS recognizes it as a ST31000340AS, but it was bought as the other number...) to a friend's HP Pavilion d4650e (mobo specs: google the model if you want the rest of the info; I can't do more than one link). I have had a hell of a time with it. I finally figured out that the hard drive needed a jumper set to limit the speed to 1.5Gbps so the mobo would recognize it, and the BIOS DOES recognize it now. But not Windows (using Windows 7), via either Add New Hardware or diskmgmt.msc. According to my friend, who was at the computer when it first booted after adding the jumper, a "new hardware found" dialog popped up saying something about RAID, but I can't provide more info than that since I didn't see it. An Ubuntu live CD recognized the drive before we changed the jumper; I haven't checked since then. XP didn't recognize it - that's the OS we started with; we upgraded to 7 hoping it might fix the problem. The only other info I can think of that might be immediately relevant is that the drive is plugged into the fifth SATA channel, and the first channel is empty. Is this a problem? I assume not, because the two other drives (in a RAID 0) and the CD and DVD drives are also on channels past the first one, and are recognized. Ask questions and I'll update with info!
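
    The "something about RAID" dialog hints the nForce storage driver may have claimed the port, which is worth ruling out in the board's RAID utility; separately, if the BIOS sees the disk, it may simply be uninitialized. A hedged diskpart sketch (the disk number is illustrative, and the commands after the rem warning wipe that disk):

        rem diskpart script (run as: diskpart /s newdisk.txt) from an elevated prompt
        rescan
        list disk
        select disk 2
        rem everything below erases THAT disk only -- double-check the number first
        clean
        create partition primary
        format fs=ntfs quick
        assign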

  • System freezes during boot process

    - by slugster
    Hi everyone, I have a machine running Win7 Ultimate. It was running fine, then it just froze - all the stuff I was doing was still on the screen, but mouse and keyboard input was ignored, and any animation happening on the screen stopped; the machine literally just froze. So I rebooted (power-off button), and from then on the machine will reboot, but it ultimately freezes again. The point at which this happens varies - I have made it as far as the Windows login screen, but mostly it will do the POST, then give me the option to press F1 to continue or Del to enter BIOS settings (but of course pressing a key has no effect - it's frozen!). I have disconnected everything not necessary for the boot process; the only peripheral that remains attached is the keyboard (even the network cable is disconnected). Prior to this the machine was operating fine. The install of Win7 is only 2 days old, and it was a fresh reinstall (i.e. not an upgrade or repair). Can anyone give me an indication of what may be wrong here? I'm not sure if this question should be here or on SuperUser; please migrate it if I have chosen the wrong board.

  • How to pipe internet radio into a tuner?

    - by JW
    UPDATE: Thanks everyone for the ideas! This was an area I knew very little about, but now I can talk about it with a little more expertise. Much appreciated!

    I visited my dad this weekend, and he wants to pipe some internet radio he's found down to a tuner quite a distance away in the house. He uses computers for only very basic things: e-mail, getting the Post crossword, checking Yahoo!, checking recipes, etc. There's currently one computer in the house (no router). My initial suggestion (without any research whatsoever) was to get a wireless router and a netbook for downstairs near the tuner, but he initially wasn't too keen on having another computer down there. Anyway, is there any computer hardware that could magically pipe the audio output from the computer down to one set of (RCA) audio inputs on the tuner? Wireless isn't necessary, but it probably would be easier. Anyway, thanks for your suggestions!

    UPDATE: Thanks everyone! Voted up all of your suggestions now that I have 15 rep. Much appreciated.

  • Can I use IIS to do Active Directory single-sign-on for another website?

    - by brofield
    I'm trying to add Active Directory single-sign-on support to an existing SOAP server. The server can be configured to accept a trusted reverse proxy and use the X-Remote-User HTTP header for the authenticated user. I want to configure IIS to be the trusted proxy for this service, so that it handles all of the Active Directory authentication for the SOAP server. Basically IIS would have to accept HTTP connections on port X and URL Y, do all the authentication, and then proxy the connection to a different server (most likely the same X and Y). Unfortunately, I have no knowledge of IIS or AD (so I am trying my best to learn enough to build this solution), so please be gentle. I would assume that this is not an uncommon scenario, so is there some easy way to do this? Is this sort of functionality built into IIS, or do I need to build some sort of IIS proxy program myself? Is there a better option for getting the authentication done and the X-Remote-User HTTP header set than requiring IIS? Update: For example, what I am trying to create is:

        [CLIENT]        [IIS]        [AD]        [SOAP-SERVER]
        1. |------------->|
        2. |<------------>|<---------->|
        3.                |------------------------->|
        4.                |<-------------------------|
        5. |<-------------|

        1. POST to http://example.com/foo/bar.cgi
        2. Client is not authenticated, so do authentication
        3. Once validated, send request to server (X-Remote-User: {userid})
        4. Process request, send response
        5. Forward response to client

    I need to know how to configure IIS to do the automatic authentication of the user using AD, and then to proxy the request to the actual server, sending the userid in the X-Remote-User HTTP header.
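
    If I were sketching this on IIS 7 or later, the usual building blocks are Windows Authentication on the site plus the Application Request Routing (ARR) and URL Rewrite modules to proxy onward with the authenticated name attached. A hedged fragment (a direction rather than a tested config; HTTP_X_REMOTE_USER must also be whitelisted under URL Rewrite's allowedServerVariables, and the backend URL is illustrative):

        <!-- web.config fragment; assumes ARR + URL Rewrite are installed and
             Windows Authentication is enabled (anonymous auth disabled) -->
        <system.webServer>
          <rewrite>
            <rules>
              <rule name="proxy-to-soap" stopProcessing="true">
                <match url="^foo/(.*)" />
                <serverVariables>
                  <!-- {LOGON_USER} carries the AD-authenticated user -->
                  <set name="HTTP_X_REMOTE_USER" value="{LOGON_USER}" />
                </serverVariables>
                <action type="Rewrite" url="http://soap-backend:8080/foo/{R:1}" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>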

  • Ubuntu 64bit Xen DomU Issues after upgrade from Karmic to Lucid

    - by Shoaibi
    I was upgrading my servers today and it all went fine except the last machine, which has the following issues:

    1. No login prompt on console [resolved using http://www.ndchost.com/wiki/server-administration/upgrade-ubuntu-pre-10.04#post-1004-upgradefinal-steps]:

        Done.
        Begin: Mounting root file system... ...
        Begin: Running /scripts/local-top ... Done.
        [ 0.545705] blkfront: xvda: barriers enabled
        [ 0.546949] xvda: xvda1
        [ 0.549961] blkfront: xvde: barriers enabled
        [ 0.550619] xvde: xvde1 xvde2
        Begin: Running /scripts/local-premount ... Done.
        [ 0.870385] kjournald starting. Commit interval 5 seconds
        [ 0.870449] EXT3-fs: mounted filesystem with ordered data mode.
        Begin: Running /scripts/local-bottom ... Done.
        Done.
        Begin: Running /scripts/init-bottom ... Done.

    I also tried pressing ENTER and CTRL+C many times; no use.

    2. Errors when I try to re-install udev in single user mode [resolved: /tmp was mounted as noexec; changing that fixed it]:

        Unpacking replacement udev ...
        Processing triggers for ureadahead ...
        ureadahead will be reprofiled on next reboot
        Processing triggers for man-db ...
        Setting up udev (151-12.1) ...
        udev start/running, process 1003
        Removing `local diversion of /sbin/udevadm to /sbin/udevadm.upgrade'
        update-initramfs: deferring update (trigger activated)
        Processing triggers for initramfs-tools ...
        update-initramfs: Generating /boot/initrd.img-2.6.32-25-server
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/local-premount/fixrtc: Permission denied
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/local-premount/ntfs_3g: Permission denied
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/local-premount/resume: Permission denied
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/nfs-top/udev: Permission denied
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/panic/console_setup: Permission denied
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/init-top/all_generic_ide: Permission denied
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/init-top/blacklist: Permission denied
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/init-top/udev: Permission denied
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/init-bottom/udev: Permission denied
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/local-bottom/ntfs_3g: Permission denied
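
    For anyone hitting the same wall of Permission denied lines, the shape of the fix alluded to above is roughly this (a sketch; it assumes /tmp is a separate mount carrying the noexec option):

        # Check whether /tmp is mounted noexec
        mount | grep /tmp
        # Temporarily allow exec so mkinitramfs can run its helper scripts
        sudo mount -o remount,exec /tmp
        sudo update-initramfs -u
        # Re-apply the stricter mount afterwards if desired
        sudo mount -o remount,noexec /tmp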

  • Looking for a recommendation on measuring a high availability app that is using a CDN.

    - by T Reddy
    I work for a Fortune 500 company that struggles with accurately measuring performance and availability for high-availability applications (i.e., apps that are up 99.5% with 5-second page-to-page navigation). We factor in both scheduled and unscheduled downtime to determine this availability number. However, we recently added a CDN into the mix, which complicates our metrics a bit. The CDN now handles about 75% of our traffic, while sending the remainder to our own servers. We attempt to measure what we call a "true user experience" (i.e., our testing scripts emulate a typical user clicking through the application). These monitoring scripts sit outside of our network, which means we're hitting the CDN about 75% of the time. Management has decided that we take the worst-case scenario to measure availability. So if our origin servers are having problems, yet the CDN is serving content just fine, we still take a hit on availability. The same is true the other way around. My thought is that as long as the "user experience" is successful, we should not unnecessarily punish ourselves. After all, a CDN is there to improve performance and availability! I'm just wondering if anyone has any knowledge of how other Fortune 500 companies calculate their availability numbers. Look at apple.com, for instance: a storefront that uses a CDN and never seems to be down (unless there is about to be a major product announcement). It would be great to have some hard, factual data, because I don't believe that we need to unnecessarily hurt ourselves on these metrics. We are making business decisions based on these numbers. I can say, however, that since these metrics are visible to management, issues get addressed and resolved pretty fast (read: we cut through the red tape pretty quickly). Unfortunately, as a developer, I don't want management to think that the application is up or down because some external factor (i.e., the CDN) is influencing the numbers. Thoughts? (I mistakenly posted this question on StackOverflow; sorry in advance for the cross-post.)
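
    To make the gap between the two policies concrete, a toy calculation with assumed numbers (99.9% CDN availability, 99.5% origin availability, 75/25 traffic split):

        Worst-case rule:        A = min(A_cdn, A_origin) = min(0.999, 0.995) = 99.5%
        Traffic-weighted rule:  A = 0.75 * A_cdn + 0.25 * A_origin
                                  = 0.75 * 0.999 + 0.25 * 0.995 = 99.8%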
