Search Results

Search found 12017 results on 481 pages for 'no root'.


  • Adventures in Drupal multisite config with mod_rewrite and clean urls

    - by moexu
    The university where I work is planning to offer Drupal hosting to staff/faculty who want a Drupal site. We've set up Drupal multisite with clean URLs and it's mostly working, except for some weird redirects. If you have two sites where one is a substring of the other, you'll randomly be redirected to the other site. I tracked the problem to how mod_rewrite does path matching, with a config file like this:

        RewriteCond %{REQUEST_URI} ^/drupal
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupal/index.php?q=$1 [last,qsappend]

        RewriteCond %{REQUEST_URI} ^/drupaltest
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupaltest/index.php?q=$1 [last,qsappend]

    /drupaltest will match the /drupal line, and all of the links on the /drupaltest page will be rewritten to point to /drupal. If you put the end-of-string character ($) at the end of each rewrite condition, it will always match the correct site and the links will always be rewritten correctly. That breaks down as soon as a user logs in, though, because the query string is appended to the URL, so the bare base URL no longer matches. You can also fix the problem by ordering the sites in the config file so that the shortest substring always comes last. I suggested storing all of the sites in a table and then querying, sorting, and rewriting the config file every time a Drupal site is requested so that we could guarantee the order. The system administrator thought that was kludgy and didn't address the root problem. Disabling clean URLs should also fix the problem, but the users really want them, so I'd prefer to keep them if possible. I think we could also fix it by using an .htaccess file in each site to handle the clean URL rewriting, but that also seems suboptimal, since it would generate a higher load on the server and the server is intended to host the majority of the university's external-facing web content. Is there some magic I can do with mod_rewrite to get it to work? Would another solution be better? Am I doing something the wrong way to begin with?
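
    One variant I haven't tried yet (sketched below, untested) is to anchor each condition on the directory boundary rather than on end-of-string, so /drupal matches /drupal and /drupal/anything but never /drupaltest:

        # Hypothetical variant of the config above: "(/|$)" requires either a
        # slash or end-of-string right after the site prefix, so /drupaltest
        # can no longer satisfy the /drupal condition.
        RewriteCond %{REQUEST_URI} ^/drupal(/|$)
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupal/index.php?q=$1 [last,qsappend]

        RewriteCond %{REQUEST_URI} ^/drupaltest(/|$)
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupaltest/index.php?q=$1 [last,qsappend]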

    Read the article

  • Where is '/host' declared for mount in Wubi (Ubuntu 9.10)?

    - by Pedro
    Hi! I'm using Wubi (ubuntu 9.10), and I couldn't find where '/host' mountpoint is declared for mounting. There's no entry in fstab, but it's listed in /proc/mount and mounted at boot time. Any ideas?

        pedroel@ubuntu:~$ cat /proc/mounts
        rootfs / rootfs rw 0 0
        none /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
        none /proc proc rw,nosuid,nodev,noexec,relatime 0 0
        udev /dev tmpfs rw,relatime,mode=755 0 0
        /dev/sda1 /host fuseblk rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other,blksize=4096 0 0
        /dev/loop0 / ext4 rw,relatime,errors=remount-ro,barrier=1,data=ordered 0 0
        none /sys/kernel/security securityfs rw,relatime 0 0
        none /sys/fs/fuse/connections fusectl rw,relatime 0 0
        none /sys/kernel/debug debugfs rw,relatime 0 0
        none /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
        none /dev/shm tmpfs rw,nosuid,nodev,relatime 0 0
        none /var/run tmpfs rw,nosuid,relatime,mode=755 0 0
        none /var/lock tmpfs rw,nosuid,nodev,noexec,relatime 0 0
        none /lib/init/rw tmpfs rw,nosuid,relatime,mode=755 0 0
        /dev/loop1 /home/pedroel/Downloads ext4 rw,relatime,errors=remount-ro,barrier=1,data=ordered 0 0
        binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,nosuid,nodev,noexec,relatime 0 0
        gvfs-fuse-daemon /home/pedroel/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 0 0
        /dev/mapper/isw_efhafcifi_RAID_Volume01 /media/RAID_D fuseblk rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096 0 0

        pedroel@ubuntu:~$ cat /etc/fstab
        # /etc/fstab: static file system information.
        #
        # Use 'blkid -o value -s UUID' to print the universally unique identifier
        # for a device; this may be used with UUID= as a more robust way to name
        # devices that works even if disks are added and removed. See fstab(5).
        #
        # <file system> <mount point> <type> <options> <dump> <pass>
        proc /proc proc defaults 0 0
        /host/ubuntu/disks/root.disk / ext4 loop,errors=remount-ro 0 1
        /host/ubuntu/disks/pedro.disk /home/pedroel/Downloads ext4 loop,errors=remount-ro 0 1
        /host/ubuntu/disks/swap.disk none swap loop,sw 0 0
        /dev/fd0 /media/floppy0 auto rw,user,noauto,exec,utf8 0 0

    Thanks in advance, Pedro
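
    For what it's worth, the root filesystem in the fstab above is itself a loop file under /host, so /host presumably has to be mounted by the initramfs before fstab is ever read, which would explain why it isn't declared there. A hedged way to check where that happens (assuming the usual Ubuntu initrd naming):

        # list what the initramfs for the running kernel contains and look for
        # the Wubi/lupin scripts that set up /host (image name is assumed to
        # follow the usual /boot/initrd.img-<version> pattern)
        zcat /boot/initrd.img-$(uname -r) | cpio -t 2>/dev/null | grep -iE 'lupin|wubi|host'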

    Read the article

  • lsof not showing what port a proc is listening on

    - by ericslaw
    I have many processes on a box listening on several ports, and I am trying to map ports to PIDs. The problem is that lsof is not telling me which ports belong to which process. Given an apache listening on port 80, I can see it listening via netstat:

        user@host% netstat -an | grep LISTEN | grep 80
        *.80 *.* 0 0 49152 0 LISTEN

    But when I try to map port 80 to a pid I get nothing:

        user@host% lsof -iTCP:80 -t

    When I try seeing what sockets that specific pid is using I get:

        user@host% lsof -lnP -p31 -a -i
        COMMAND   PID USER  FD  TYPE        DEVICE SIZE/OFF NODE NAME
        libhttpd.  31    0 15u  IPv4 0x6002d970b80      0t0  TCP *:65535 (LISTEN)

    Notice the *:65535 in the NAME column. Does anyone know why lsof is not reporting the port in use? I am running as root. I am using a mix of lsof and OS versions:

        lsof v4.77 on Solaris 10 sparc
        lsof v4.72 on Redhat 4.2
        etc.

    I know that Linux solutions can use "netstat -p", so I guess I'm only looking for why Solaris isn't working, but I find lsof is frequently silent and not showing me expected data.
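
    On the Solaris box, a hedged fallback is walking /proc with pfiles and grepping for the port (pfiles briefly halts each process it inspects, so this needs care on a busy machine):

        # run as root with ksh or bash
        cd /proc
        for pid in [0-9]*; do
            if pfiles "$pid" 2>/dev/null | grep 'port: 80$' >/dev/null; then
                echo "port 80 appears to be held by PID $pid"
                ps -o pid,comm -p "$pid"
            fi
        done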

    Read the article

  • fail2ban regex working but no action being taken

    - by fpghost
    I have the following snippet of fail2ban configuration on an Ubuntu 13.10 server:

        # jail.conf
        [apache-getphp]
        enabled  = true
        port     = http,https
        filter   = apache-getphp
        action   = iptables-multiport[name=apache-getphp, port="http,https", protocol=tcp]
                   mail-whois[name=apache-getphp, dest=root]
        logpath  = /srv/apache/log/access.log
        maxretry = 1

        # filter.d/apache-getphp.conf
        [Definition]
        failregex = ^<HOST> - - (?:\[[^]]*\] )+\"(GET|POST) /(?i)(PMA|phptest|phpmyadmin|myadmin|mysql|mysqladmin|sqladmin|mypma|admin|xampp|mysqldb|mydb|db|pmadb|phpmyadmin1|phpmyadmin2|cgi-bin)
        ignoreregex =

    I know the regex is good, because if I run the test command on my access.log:

        fail2ban-regex /srv/apache/log/access.log /etc/fail2ban/filter.d/apache-getphp.conf

    I get a SUCCESS result with multiple hits, and in my log I see entries like:

        187.192.89.147 - - [13/Apr/2014:11:36:03 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 301 585 "-" "-"
        187.192.89.147 - - [13/Apr/2014:11:36:03 +0100] "GET /phpMyAdmin/scripts/setup.php HTTP/1.1" 301 593 "-" "-"

    Secondly, I know email is configured correctly, as each time I run service fail2ban restart I get an email for each of the filters stopping/starting. However, despite all this, no action seems to be taken when one of these requests comes in: no email with whois, and no entries in iptables. What could possibly be preventing fail2ban from taking action? (Everything looks in order in fail2ban-client -d, and I can see the chains have loaded with iptables -L.)
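
    A couple of hedged checks that might narrow it down - whether the jail is actually loaded, and whether the ban action works when triggered by hand (the chain name below assumes the iptables-multiport naming convention):

        # is the jail loaded and is it counting hits from the log?
        fail2ban-client status apache-getphp

        # does the action itself work? ban a test address manually, then look
        # for it in the jail's iptables chain
        fail2ban-client set apache-getphp banip 192.0.2.1
        iptables -L fail2ban-apache-getphp -n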

    Read the article

  • Missing MB on a GPT-partitioned SSD

    - by pisswillis
    I recently installed Arch Linux on an Intel 40GB SSD. I used GPT for partitioning (via GNU parted) and created the following partitions:

        /dev/sda1 : 1 MB, no FS, flag=bios_grub
        /dev/sda2 : 30MB, /boot, ext2, flag=boot
        /dev/sda3 : 20GB, /home, ext4
        /dev/sda4 : ~20GB, /, ext4

    After struggling to install grub2 from the livecd environment (which I finally did via grub-install /dev/sda --root-directory=/mnt/ --no-floppy --force) I got a working system. However, when I was inspecting disk usage with df I noticed that my home partition had around 170MB of used space on it. This surprised me, because the only things on /home were one user's .bashrc, .bash_history, and .lesshst. du confirmed that there were only a few KB of space being used on /home. Why does df report approximately 170MB being used when du does not? Is this space "gone forever", or can I regain it by repartitioning and/or reinstalling?

    When I installed grub2 it said something along the lines of "your embed area is too small", and that I could "use BLOCKLISTS, but BLOCKLISTS are UNRELIABLE". In the end the only way I could get a system booting from the SSD was to use blocklists via the grub-install --force flag. Is this related to the mysterious missing 170MB? Thanks
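
    For reference, a hedged check worth running: a freshly made ext4 filesystem already has allocated blocks for its journal and other metadata, which df counts as used but du (which only sees files) never will. tune2fs shows the relevant numbers for the /home partition:

        # /dev/sda3 is the /home partition from the layout above
        sudo tune2fs -l /dev/sda3 | grep -iE 'journal|reserved block count|block size'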

    Read the article

  • Where is my problem? The P6X58D Premium Mobo, Windows 7, or other?

    - by Dylan Yaga
    I was having problems with my USB devices for an hour last night, and I am unable to determine the root cause of the problem. The two symptoms are:

    1. At seemingly random times (not consistently spaced by time or caused by any detectable event) my USB devices become "detached". Windows will play the USB disconnect sound and then the reconnect sound; the devices disconnect and then reconnect.
    2. My USB keyboard will "stick" on one key for several seconds before processing any other keystroke made. The mouse also does not respond to clicks, although I do not lose mouse movement or USB device connectivity. After a moment of this, several beeps are emitted from the speakers.

    Hardware specs:

        GFX Card: EVGA GeForce GTX 470 Superclocked 1280MB DDR5 PCIe
        Motherboard: ASUS P6X58D Premium Intel X58 Socket LGA1366 MB
        Processor: Intel Core i7-920 2.66Ghz 8M LGA1366 CPU
        Memory: Corsair Dominator 6144MB PC12800 DDR3
        Storage: Hitachi 1TB Serial ATA HD 1600MHz 7200/32MB/SATA-3G
        Cooling: Corsair Hydro H50 CPU Liquid Cooler
        Case: Corsair Obsidian 800D Full Tower Case
        Power Supply: Corsair HX1000W 1000W Modular Power Supply

    Steps I have taken to narrow down the problem:

        Restarted the computer. - No change
        Changed the USB port the hub was connected to on the computer. - No change
        Removed all devices from the USB hub and connected them directly to the computer. - No change
        Used a different USB keyboard, both in the USB hub and connected directly. - No change
        Disconnected and reconnected all cables. - No change
        Disassembled the tower and checked that the USB headers were firmly connected. - No change
        Checked Device Manager for errors on all USB devices. - Nothing flagged

    After an hour of frustration trying to narrow down the problem, it appeared to disappear. But I am torn between it being a mobo problem or an OS problem. Is there anything else I can do to narrow down the problem before a reformat and then eventually exchanging the mobo?

    Read the article

  • Server getting overloaded

    - by taras
    Hi, I have two setups: Tomcat 5.5.20 on Red Hat and MySQL 4.1.22 on another Red Hat server. Recently my webserver started getting overloaded, up to 80-90%. After checking, I found errors repeating every second in catalina.out. Could they be causing the server overload, or where else could the root of the problem be?

    catalina.out:

        DBCP object created 2010-12-22 13:33:12 by the following code was never closed:
        java.lang.Exception
            at org.apache.tomcat.dbcp.dbcp.AbandonedTrace.init(AbandonedTrace.java:96)

    I have to restart Tomcat once a day when the server load reaches 80-90%. The catalina.out file is also growing so fast that I need to clear the logs every few hours.

    My datasource config:

        <bean id="myDataSource" class="org.apache.tomcat.dbcp.dbcp.BasicDataSource" destroy-method="close">
            <property name="driverClassName">
                <value>com.mysql.jdbc.Driver</value>
            </property>
            jdbc:mysql://XXX/XXX?autoReconnect=true 20 20
            <property name="maxIdle">
                <value>50</value>
            </property>
            <property name="maxActive">
                <value>50</value>
            </property>
            <property name="removeAbandoned">
                <value>false</value>
            </property>
            <property name="removeAbandonedTimeout">
                <value>2400</value>
            </property>
            <property name="username">
                <value>XXX</value>
            </property>
            <property name="password">
                <value>XXX</value>
            </property>
        </bean>

    Thanks for any direction.
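
    For reference, a hedged tweak to the pool config above that would at least surface which code path is leaking connections (these are standard Commons DBCP BasicDataSource properties; the "was never closed" warnings suggest the web app is opening connections it never returns to the pool):

        <!-- reclaim leaked connections and log where they were opened -->
        <property name="removeAbandoned">
            <value>true</value>
        </property>
        <property name="removeAbandonedTimeout">
            <value>300</value>
        </property>
        <property name="logAbandoned">
            <value>true</value>
        </property>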

    Read the article

  • Setting cfengine3 class based on command output

    - by gnomie
    This question is very similar to "How can I use the output of a command in cfengine3", but I believe the answer there does not apply in my case. I want to update a git repository via "git pull" and, based on whether that led to changes, trigger some follow-up action. Simplified: if there were something like "match output and set class" via some body if_output_matches, I would want to use something like this:

        bundle agent updateRepo
        {
        commands:
            "/usr/bin/git pull"
                contain => setuidgiddir_sh("$(globals.user)","$(globals.group)","$(target)"),
                classes => if_output_matches("Already up-to-date.","no_update");

        reports:
            no_update::
                "nothing updated";
        }

        body contain setuidgiddir_sh(owner,group,folder)
        {
        exec_owner => "$(owner)";
        exec_group => "$(group)";
        useshell => "true";
        chdir => "$(folder)";
        }

    So, is it possible to use the output of a - possibly expensive - command and base some decision on that? The execresult function is no good choice for me because a) the pull may become expensive at times (not recommended, per the cfengine3 reference) and b) it does not allow me to specify the user, group, or working dir - which is important in my case. The repository is in user space and not owned by root.
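
    If no clean cfengine-level hook exists, a hedged fallback is a small wrapper script whose exit code encodes whether the pull changed anything, so a returncode-based class could be raised from it (the path and argument handling below are placeholders):

        #!/bin/sh
        # /usr/local/bin/git-pull-check (hypothetical helper)
        # exits 0 only when "git pull" actually brought in changes,
        # 1 when already up to date, 2 on error
        cd "$1" || exit 2
        out=$(git pull 2>&1) || exit 2
        echo "$out" | grep -q 'Already up-to-date.' && exit 1
        exit 0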

    Read the article

  • Copy files from subdirectories into one directory

    - by Derek Organ
    OK, I have a bunch of files in this file structure format:

        /backup/daily/database1/database1-2011-01-01.sql
        /backup/daily/database1/database1-2011-01-02.sql
        /backup/daily/database1/database1-2011-01-03.sql
        /backup/daily/database1/database1-2011-01-04.sql
        /backup/daily/database1/database1-2011-01-05.sql
        /backup/daily/database1/database1-2011-01-06.sql
        /backup/daily/database1/database1-2011-01-07.sql
        /backup/daily/anotherdb/anotherdb-2011-01-01.sql
        /backup/daily/anotherdb/anotherdb-2011-01-02.sql
        /backup/daily/anotherdb/anotherdb-2011-01-03.sql
        /backup/daily/anotherdb/anotherdb-2011-01-04.sql
        /backup/daily/anotherdb/anotherdb-2011-01-05.sql
        /backup/daily/anotherdb/anotherdb-2011-01-06.sql
        /backup/daily/anotherdb/anotherdb-2011-01-07.sql
        /backup/daily/stuff/stuff-2011-01-01.sql
        /backup/daily/stuff/stuff-2011-01-02.sql
        /backup/daily/stuff/stuff-2011-01-03.sql
        /backup/daily/stuff/stuff-2011-01-04.sql
        /backup/daily/stuff/stuff-2011-01-05.sql
        /backup/daily/stuff/stuff-2011-01-06.sql
        /backup/daily/stuff/stuff-2011-01-07.sql

    And there are lots, lots more. Ultimately I want to import all the 2011-01-07.sql files into my MySQL database. This works for one:

        mysql -u root -ppassword < /backup/daily/database1/database1-2011-01-07.sql

    That will nicely restore that database from this backup file. I want to run a process that does this for all databases. So my plan is to first cp all 2011-01-07 sql files into a tmp dir, e.g.:

        cp /backup/daily/*/*2011-01-07*.sql /tmp/all

    The command above unfortunately isn't working; I get an error:

        cp: cannot stat ..... No such file or directory

    So can you guys help me out with this? For bonus points, if you can tell me how to do the next step - importing all databases in one command, one at a time - that would be great too. I really want to do these as two separate steps, because I need to delete a few sql files manually from the tmp dir before I run the restore command. So I need:

        1) a command to copy all 2011-01-07 sql files to a tmp dir
        2) a command to import all those files in that dir into mysql

    I know it's possible to do in one, but for lots of reasons I'd really prefer to do it in two steps.
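
    For reference, a hedged sketch of both steps (it assumes the target directory should exist first - a missing /tmp/all is one common cause of that cp error - and it reuses the exact mysql invocation that already works for a single file):

        # step 1: collect that day's dumps into one directory
        mkdir -p /tmp/all
        find /backup/daily -type f -name '*2011-01-07*.sql' -exec cp {} /tmp/all/ \;

        # (prune unwanted files from /tmp/all by hand here)

        # step 2: import whatever is left, one file at a time
        for f in /tmp/all/*.sql; do
            mysql -u root -ppassword < "$f"
        done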

    Read the article

  • Install VirtualBox on Ubuntu 12.04.1 (on [Samsung] Chromebook)

    - by iphonedev7
    I have dual-booted Ubuntu Linux 12.04.1 LTS on my Samsung Series 5 ChromeBook, and am trying to run/install Oracle VirtualBox (from the generic .run file downloaded from their website). However, every time I try to run it (as root from the command line), the following error occurs:

        Please install the build and header files for your current Linux kernel.
        The current kernel version is 3.4.0
        Problems were found which would prevent VirtualBox from installing.

    I have tried the version from the Software Center, as well as the command line installation, both of which gave me errors based on my linux-headers/linux-kernel/linux-[kernel]-image. Here's an error I keep getting (on the command line):

        First Installation: checking all kernels...
        It is likely that 3.4.0 belongs to a chroot's host
        Building only for 3.5.0-18-generic
        Building initial module for 3.5.0-18-generic
        ERROR (dkms apport): kernel package linux-headers-3.5.0-18-generic is not supported
        Error! Bad return status for module build on kernel: 3.5.0-18-generic (x86_64)
        Consult /var/lib/dkms/virtualbox/4.1.12/build/make.log for more information.
        Setting up virtualbox-qt (4.1.12-dfsg-2ubuntu0.2) ...
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place

    ...and one of the more cryptic errors I get when trying to start any virtual machine:

        Result Code: NS_ERROR_FAILURE (0x80004005)
        Component: Machine
        Interface: IMachine {5eaa9319-62fc-4b0a-843c-0cb1940f8a91}
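
    For reference, a hedged sketch of the packages the installer is asking for (these are the stock Ubuntu names; if the running 3.4.0 kernel is a ChromeOS-derived build from outside the Ubuntu archive, matching headers may simply not be packaged and a stock Ubuntu kernel would be needed instead):

        sudo apt-get update
        sudo apt-get install dkms build-essential linux-headers-$(uname -r)
        # assumption: the .run installer placed the vboxdrv init script; this
        # rebuilds the kernel module once headers are present
        sudo /etc/init.d/vboxdrv setup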

    Read the article

  • setting up phpmyadmin with nginx within ubuntu 11.04

    - by Patrick
    I have nginx and php5-fpm running on Ubuntu 11.04. I have installed phpMyAdmin but I'm having trouble accessing it. I would like to access it via http://localhost/phpmyadmin. I've used all the default locations for the nginx, php5, and phpmyadmin installs. I'm being directed to use the block below by the blog guide I'm following, but I'm not sure what to change to get it to point the way I want it to.

        server {
            listen 80;
            server_name php.example.com; # <- I know I need to edit this, but not sure to what.
            access_log /var/log/nginx/localhost.access.log;
            root /usr/share/phpmyadmin;
            index index.php;

            location / {
                try_files $uri $uri/ @phpmyadmin;
            }

            location @phpmyadmin {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME /usr/share/phpmyadmin/index.php;
                include /etc/nginx/fastcgi_params;
                fastcgi_param SCRIPT_NAME /index.php;
            }

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /usr/share/phpmyadmin$fastcgi_script_name;
                include fastcgi_params;
            }
        }
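
    For reference, a hedged sketch of serving phpMyAdmin at /phpmyadmin from the existing localhost server block instead of a separate hostname (paths and the php5-fpm address are the defaults mentioned above; adjust if yours differ, and place the regex location before any generic \.php$ handler):

        # inside the existing "server { listen 80; server_name localhost; ... }" block
        location /phpmyadmin {
            alias /usr/share/phpmyadmin;
            index index.php;
        }

        location ~ ^/phpmyadmin/(.+\.php)$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_param SCRIPT_FILENAME /usr/share/phpmyadmin/$1;
            include /etc/nginx/fastcgi_params;
        }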

    Read the article

  • CSS and JS files not being updated, supposedly because of Nginx Caching

    - by Alberto Elias
    I have my web app working with AppCache, and I would like that, when I modify my html/css/js files and then update my cache manifest, users accessing my web app get an updated version of those files. If I change an HTML file, it works perfectly, but when I change CSS and JS files, the old version is still being used. I've been checking everything out and I think it's related to my nginx configuration. I have a cache.conf file that contains the following:

        gzip on;
        gzip_types text/css application/x-javascript text/x-component text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon;

        location ~ \.(css|js|htc)$ {
            expires 31536000s;
            add_header Pragma "public";
            add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate";
        }

        location ~ \.(html|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml)$ {
            expires 3600s;
            add_header Pragma "public";
            add_header Cache-Control "max-age=3600, public, must-revalidate, proxy-revalidate";
        }

    And in default.conf I have my locations. I would like to have this caching working on all locations except one; how could I configure this? I've tried the following and it isn't working:

        location /dir1/dir2/ {
            root /var/www/dir1;
            add_header Pragma "no-cache";
            add_header Cache-Control "private";
            expires off;
        }

    Thanks
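
    For reference, a hedged sketch of one way to carve out the exception in default.conf: a ^~ prefix location stops nginx from ever reaching the regex locations in cache.conf for that path, so the no-cache headers actually take effect (directory names are taken from the snippet above):

        location ^~ /dir1/dir2/ {
            root /var/www/dir1;
            add_header Pragma "no-cache";
            add_header Cache-Control "no-cache, no-store, must-revalidate";
            expires off;
        }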

    Read the article

  • WMI Sensors monitoring

    - by DmitrySemenov
    The Paessler monitoring tool has stopped monitoring WMI Windows sensors. Paessler was updated to version 12.4.5.3165 (10/30/2012 1:44:11 PM). The Paessler Windows sensors (against a Windows Server 2008 R2 Web edition host) stopped working - no changes have been made on the server that we monitor - with the message:

        Connection could not be established (80070005: Access is denied - Host: 192.168.2.10, User: Administrator, Password: **, Domain: ntlmdomain:) (code: PE015)

    However, if I go to the virtual machine used to run Paessler, the following cscript runs successfully:

        strComputer = "192.168.2.10"
        Set objSWbemLocator = CreateObject("WbemScripting.SWbemLocator")
        Set objSWbemServices = objSWbemLocator.ConnectServer _
            (strComputer, "root\cimv2", _
            "Administrator", "pass")
        Set colProcessList = objSWbemServices.ExecQuery( _
            "Select * From Win32_Processor")
        For Each objProcess in colProcessList
            Wscript.Echo "Process Name: " & objProcess.Name
        Next

    I'm getting the output:

        C:\>cscript test.vbs
        Microsoft (R) Windows Script Host Version 5.8
        Copyright (C) Microsoft Corporation. All rights reserved.

        Process Name: Intel(R) Xeon(R) CPU X5680 @ 3.33GHz
        Process Name: Intel(R) Xeon(R) CPU X5680 @ 3.33GHz

    So WMI works.

    a. I gave Administrator credentials for the device to monitor in the Paessler settings, the same ones used in the script above.
    b. I restarted the Windows server (the one with the broken sensors), but this didn't help.
    c. I restarted the Paessler probe service - no effect.

    Any ideas?

    Read the article

  • Serve a specific set of error pages for different subdirectories

    - by navitronic
    I am currently trying to set up two different sets of error documents for separate folders within a website. I have two folders within the root of the site:

        demo/
        live/

    Any requests that return 404s or 403s within the demo folder need to load one set of pages for the Apache ErrorDocuments, e.g.:

        ErrorDocument 404 /statuses/demo-404.html
        ErrorDocument 403 /statuses/demo-403.html

    And the live folder needs to go to similarly named files:

        ErrorDocument 404 /statuses/live-404.html
        ErrorDocument 403 /statuses/live-403.html

    So far I have tried placing an .htaccess file in both directories with the ErrorDocument directives set up to point to the specific files. The 404 works fine and references the correct page. However, the 403s do not work and revert to the server default when trying to access forbidden folders within the demo directory; the logs indicate the following:

        [Wed Jun 16 04:47:44 2010] [crit] [client 115.64.131.144] (13)Permission denied: /home/abstract/public_html/demo/xxx/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable

    Is this correct? Would Apache revert to the default because it is trying to look for the .htaccess in a folder it doesn't have permission in? Why wouldn't it work its way back through the folder tree? Can I make it do this?
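
    For reference, a hedged workaround: since the 403 comes from a directory whose .htaccess Apache cannot even read, moving the directives up into the vhost avoids depending on per-directory files entirely (paths below are assumptions based on the folders and log line above):

        <Directory /home/abstract/public_html/demo>
            ErrorDocument 404 /statuses/demo-404.html
            ErrorDocument 403 /statuses/demo-403.html
        </Directory>

        <Directory /home/abstract/public_html/live>
            ErrorDocument 404 /statuses/live-404.html
            ErrorDocument 403 /statuses/live-403.html
        </Directory>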

    Read the article

  • Set usergroup and permissions of FTP folder back to default

    - by OrangeTux
    I tried to create a new FTP user via the command line, but I did something wrong, and now I can access the server via FTP but I can't see any files. It doesn't matter which user I'm using.

        ls -la
        drwxr-xr-x 13 root ftp 4096 2012-03-30 09:47 .
        drwxr-xr-x  7 web6 ftp 4096 2012-03-26 09:28 ..
        drwxr-xr-x  4 web6 ftp 4096 2012-03-26 13:31 actions
        drwxr-xr-x  2 web6 ftp 4096 2012-03-26 11:46 bin
        -rwxr-xr-x  1 web6 ftp 1520 2012-03-24 23:32 changelog.txt
        drwxr-xr-x  2 web6 ftp 4096 2012-03-26 13:30 css
        drwxr-xr-x  8 web6 ftp 4096 2012-03-24 22:43 external
        -rwxr-xr-x  1 web6 ftp  333 2012-03-26 15:12 .htaccess
        drwxr-xr-x  3 web6 ftp 4096 2012-02-27 15:07 images
        -rwxr-xr-x  1 web6 ftp 1606 2012-03-26 21:25 index.php
        drwxr-xr-x  2 web6 ftp 4096 2012-02-18 13:20 js
        drwxr-xr-x  2 web6 ftp 4096 2012-02-03 00:34 layout
        drwxr-xr-x  2 web6 ftp 4096 2012-03-29 23:35 library
        drwxr-xr-x  2 web6 ftp 4096 2012-03-30 09:47 log
        -rwxr-xr-x  1 web6 ftp  396 2012-03-24 15:04 menu.php
        drwxr-xr-x  2 web6 ftp 4096 2012-03-30 12:01 python
        drwxr-xr-x  2 web6 ftp 4096 2012-03-23 10:51 todo

    I can't see any dirs and files because I changed the group owner, or the rights of the group owner, of the FTP dir. How can I set the ownership of the files back to default so I can access the files via FTP again?
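
    For reference, a hedged sketch (not a verified fix): in the listing above the directory itself ('.') is owned by root while its contents belong to web6, so if web6 is the FTP login, handing the directory back to that account is the obvious first step. Run from inside the affected directory; adjust the user/group if your FTP account differs.

        chown web6:ftp .
        chmod 755 .
        # if ownership inside the tree is also suspect:
        chown -R web6:ftp .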

    Read the article

  • Deploying ASP.NET MVC to Windows Server 2003

    - by pete the pagan-gerbil
    Hi, I have a problem with an MVC 2 website on Windows Server 2003 running IIS 6. It is externally hosted, but we have a 2003 server internally for testing. The internal server runs the website fine; the external server gives a 403 ("website declined to show this page") error when navigating to the root of the site, and a 404 if I try to navigate directly to a page resource. I have tried the wildcard ISAPI mapping and extension mapping, and a couple of other common checks (I forget exactly which now; most of them were already set correctly), but so far no joy. All the settings can be replicated on our internal server and the pages return properly. IIS logs just show exactly what the browser shows: 404 errors and 403s. I've read about a different level of trust required for an MVC application compared to a WebForms application. How can I check permissions and trust levels on the external and internal servers (assuming I am able to check that), and if that could cause these errors, what are the minimum levels that MVC requires? Failing that, what else might be causing this error for me to try out?

    Read the article

  • Installing checkinstall on x86_64 bit

    - by SephMerah
    I downloaded the source for checkinstall (checkinstall-1.6.2.tar.gz), unpacked it with tar -xzvf checkinstall-1.6.2.tar.gz, and then ran make. It prints this error:

        [root@ip-50-63-180-135 checkinstall-1.6.2]# make
        for file in locale/checkinstall-*.po ; do \
            case ${file} in \
                locale/checkinstall-template.po) ;; \
                *) \
                    out=`echo $file | sed -s 's/po/mo/'` ; \
                    msgfmt -o ${out} ${file} ; \
                    if [ $? != 0 ] ; then \
                        exit 1 ; \
                    fi ; \
                    ;; \
            esac ; \
        done
        make -C installwatch
        make[1]: Entering directory `/home/sofiane/checkinstall-1.6.2/installwatch'
        gcc -Wall -c -D_GNU_SOURCE -DPIC -fPIC -D_REENTRANT -DVERSION=\"0.7.0beta7\" installwatch.c
        installwatch.c:2942: error: conflicting types for 'readlink'
        /usr/include/unistd.h:828: note: previous declaration of 'readlink' was here
        installwatch.c:3080: error: conflicting types for 'scandir'
        /usr/include/dirent.h:252: note: previous declaration of 'scandir' was here
        installwatch.c:3692: error: conflicting types for 'scandir64'
        /usr/include/dirent.h:275: note: previous declaration of 'scandir64' was here
        make[1]: *** [installwatch.o] Error 1
        make[1]: Leaving directory `/home/sofiane/checkinstall-1.6.2/installwatch'
        make: *** [all] Error 2

    I searched extensively on this issue and this solution looks promising. Should I attempt to install checkinstall as an fpm? What would be the best way to go about that? CentOS 6.3 x86_64.

    Read the article

  • Prevent SSL certificate being returned for a specific domain

    - by jezmck
    Apologies for a long question: We've taken on a new client whose web hosting was previously on their in-house server, which still has their Exchange/Outlook email. We now host their domain (and many others) on our server. They're complaining that they're getting errors in Outlook. I don't understand the AutoDiscover stuff at the root of the problem, but believe that I just need to stop the SSL certificate on our server being returned when requested at a particular domain:

        Yes it is, the issue lies with "{newclient}.com" being pointed to your server IP and that
        server has Port 443 open with an SSL certificate associated to it. So when Outlook/ActiveSync
        use autodiscover to find the mailbox settings it find your SSL (because 443 is open) and
        flags it as an error. The solution is to close 443 so its not discovered, Autodiscover will
        then proceed to mail.{newclient}.com via the MX / ServiceRecords and discover the correct SSL.

    I'm new here and there was no hand-over, so I don't know whether other currently hosted sites need to accept SSL connections, though I suspect some will, or may in future. This is a live server, so I can't risk trying loads of options in case I take the server offline! I feel like I should be adding something like the following to vhosts.conf:

        <VirtualHost *:443>
            ServerName {newclient}.com
            ServerAlias www.{newclient}.com
            SSLEngine Off
            SSLCertificateFile {NONE}
            SSLCertificateKeyFile {NONE}
        </VirtualHost>

    Apologies for the fact that I don't know enough about this subject to be able to ask the question more clearly!

    Read the article

  • Sharing a folder with Nautilus and NTFS external drive gets errors

    - by TheLQ
    I am trying to share a folder in Lubuntu over a network; the folder is on an external NTFS drive. Due to the system that I have (rotating backup disks), this is probably only the second time that the drive has been mounted. It's manually mounted with a simple (for example):

        mount /dev/sdb1 /media/BACKUP

    On an internal NTFS disk I have successfully set up a network share and can access it. However, on the external disk I can't, from any other Windows computer. When setting up the share, Nautilus said that it needs to change the permissions for others so that other users can write, but afterwards the setting is still blank. Changing it to Read and Write just changes back to blank. Chowning the entire /media folder recursively and trying again didn't work. Running PCManFM as root and changing it didn't work. Adding "public=yes" to smb.conf and restarting didn't work. I'm out of ideas on what to do. What's weird is that it worked just fine on an internal NTFS disk, so why not the external one? Any solution needs to be manageable from a GUI (preferably Nautilus), as the person managing the machine isn't as tech savvy. Thanks
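
    For reference, a hedged sketch of remounting the disk with permissive ownership, since ntfs-3g decides ownership and permissions at mount time and Nautilus/chown cannot change them afterwards (uid/gid 1000 is assumed to be the main desktop user):

        sudo umount /media/BACKUP
        # world-readable/writable, owned by the desktop user, so the Samba
        # share has something it is allowed to write to
        sudo mount -t ntfs-3g -o uid=1000,gid=1000,umask=000 /dev/sdb1 /media/BACKUP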

    Read the article

  • How to automatically restart Apache service after HTTP 503 error?

    - by Gnanam
    Our production server is running Apache v2.2.4 on CentOS 5.2, with Mono v1.2.4 integrated into Apache. Recently we faced a problem on this server. In Apache's access_log, I found an HTTP 500 internal server error for one HTTP request, and all subsequent HTTP requests also failed, but with an HTTP 503 service unavailable error. From then on, none of the requests were successful. Only some time later did we realize that our application was not working because of this error, and we then restarted the Apache service. My questions are:

    1. In this kind of situation, how do I automatically restart the Apache service when an HTTP 503 error is encountered? Is there any Apache directive available to set?
    2. In general, what would cause an HTTP 503 error in Apache?

    NOTE: Mono helps in running applications developed in .NET on a Linux-based OS.

    EDIT: I agree on finding the root cause of this problem; in fact, we've been analyzing that too. Until we resolve it, I'm trying to find out whether Apache could be restarted immediately on its own, without any downtime/service disruption for application users.
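
    Until the root cause is found, a hedged stopgap is a small watchdog run from cron; the URL, service name, and interval below are assumptions, not anything Apache provides natively:

        #!/bin/sh
        # poll a known-good URL and restart httpd if it answers 503;
        # run from cron, e.g. "* * * * * /usr/local/bin/httpd-watchdog.sh"
        STATUS=$(curl -s -o /dev/null -w '%{http_code}' http://localhost/)
        if [ "$STATUS" = "503" ]; then
            logger "httpd-watchdog: got 503, restarting Apache"
            /sbin/service httpd restart
        fi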

    Read the article

  • Overriding HOMEDRIVE and HOMEPATH as a Windows 7 user

    - by MikeC
    My employer has an Active Directory group policy which sets my Windows 7 laptop's HOMEDRIVE to "M:" (a mapped network drive) and my HOMEPATH to "\". Since I have read-only permissions for the root of that shared drive, I cannot create files or directories in my Windows home directory. My attempts to work with the IT department have been unsuccessful. Is there a way for me to globally change these envars at boot or login time? I need all applications to use alternate values (such as "C:" and "\Users\myname"). I have some installed utilities (like gvim and others) that store preference files in the user's home directory. IMPORTANT: Changing these envars under "System Properties > Environment Variables" does not work; I have tried setting them as both User and System variables (including a reboot), and typing SET HOME in a DOS window clearly shows that my settings are ignored. Also, using "Start in" in a Windows shortcut will not solve this either, as I need things like Explorer context menu items (like "Edit with Vim") to operate correctly. I do have admin rights on this company laptop, but I am not a Win7 guru. Back in the day, a boot script would have solved this in a minute. Is it even possible today? Thanks.
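
    For reference, a hedged partial workaround for the Unix-heritage tools specifically: most of them (gvim included) look for a plain HOME variable before falling back to HOMEDRIVE/HOMEPATH, and HOME is not one of the variables the logon process overwrites, so a per-user value tends to stick:

        REM run once from a normal cmd prompt; newly started programs pick it up
        setx HOME "C:\Users\myname"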

    Read the article

  • 2011 i7 Macbook Pro unable to boot from any Windows CD?

    - by Craig Otis
    I'm encountering issues installing Windows alongside my Lion install. I'm attempting to install from the internal SuperDrive, after using Boot Camp to partition what was a single, HFS+ volume. When holding down Option at boot, the CD appears in the startup list, but upon selecting it, I get a gray screen for 5 minutes, then a flashing white folder. I tried installing rEFIt and using this to boot the CD, but I receive an error about "Not Found" being returned from the "LocateDevicePath", and a mention of the firmware not supporting booting using legacy methods. In the Console, when opening the StartupDisk preference pane (which never presents the CD as a selectable option), I see:

        11/25/11 4:39:31.159 PM System Preferences: isCDROM: 0 isDVDROM:1
        11/25/11 4:39:31.159 PM System Preferences: mountable disk appeared: /Volumes/GRMCPRFRER_EN_DVD
        11/25/11 4:39:33.214 PM System Preferences: - So far so good, passing disk to System Searcher.
        11/25/11 4:39:33.218 PM System Preferences: OSXCheck: No boot.efi in System Folder or volume root.
        11/25/11 4:39:33.220 PM System Preferences: WinCheck: Not a valid windows filesystem: /Volumes/GRMCPRFRER_EN_DVD
        11/25/11 4:39:33.220 PM System Preferences: WinCheck: Not a valid windows filesystem: /Volumes/GRMCPRFRER_EN_DVD

    I'm at a loss here. I've done my research, but it sounds like most of the rEFIt errors of this nature are caused by installing from a thumbdrive, or an external drive. I'm using the internal SuperDrive. Also, I've tried this with two different disks:

        A Windows XP SP2 CD
        A Windows 7 x86 DVD

    Both are disks I've had around for years, and I've used them reliably in the past. The system is an early 2011 15" Macbook Pro, all firmware updates installed.

    Read the article

  • problems installing mysql and phpmyadmin to localhost

    - by Joel
    Hi guys, I know there have been many similar questions, but as far as I can tell, most of the other people have gotten further than I have... I'm trying to get a WAMP setup happening. I've got PHP and Apache running and talking to each other. PHP is in C:\PHP, Apache is in its default Program Files folder, and MySQL is in its default install location. I have localhost set up at D:\public_html\. I'm able to navigate to localhost and see HTML and PHP files. But I have a simple MySQL test file:

        <?php
        // hostname or ip of server (for local testing, localhost should work)
        $dbServer='localhost';
        // username and password to log onto db server
        $dbUser='root';
        $dbPass='';
        // name of database
        $dbName='test';

        $link = mysql_connect("$dbServer", "$dbUser", "$dbPass") or die("Could not connect");
        print "Connected successfully<br>";
        mysql_select_db("$dbName") or die("Could not select database");
        print "Database selected successfully<br>";

        // close connection
        mysql_close($link);
        ?>

    When I try to open this, I get "Could not connect". Now, I haven't even created a database yet, because I can't log into MySQL with phpMyAdmin, so I think I've done something wrong in my MySQL install - they aren't talking to each other. I guess my main question is: how do I first create a database in MySQL, to be sure I have even installed it correctly?
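
    For reference, a hedged first check from a command prompt (it assumes the MySQL bin directory is on the PATH, the Windows service got the default name "MySQL", and the root password is still blank, which are the installer defaults):

        net start MySQL
        mysqladmin -u root status
        mysqladmin -u root create test
        mysql -u root -e "SHOW DATABASES;"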

    Read the article

  • ntpdate cannot receive data

    - by Hengjie
    I have a problem where running ntpdate on my server doesn't return any data, therefore I get the following error:

        [root@server etc]# ntpdate -d -u -v time.nist.gov
        12 Apr 01:10:09 ntpdate[32072]: ntpdate [email protected] Fri Nov 18 13:21:21 UTC 2011 (1)
        Looking for host time.nist.gov and service ntp
        host found : 24-56-178-141.co.warpdriveonline.com
        transmit(24.56.178.141)
        transmit(24.56.178.141)
        transmit(24.56.178.141)
        transmit(24.56.178.141)
        transmit(24.56.178.141)
        24.56.178.141: Server dropped: no data
        server 24.56.178.141, port 123
        stratum 0, precision 0, leap 00, trust 000
        refid [24.56.178.141], delay 0.00000, dispersion 64.00000
        transmitted 4, in filter 4
        reference time:      00000000.00000000  Thu, Feb  7 2036 14:28:16.000
        originate timestamp: 00000000.00000000  Thu, Feb  7 2036 14:28:16.000
        transmit timestamp:  d3303975.1311947c  Thu, Apr 12 2012  1:10:13.074
        filter delay:  0.00000  0.00000  0.00000  0.00000  0.00000  0.00000  0.00000  0.00000
        filter offset: 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
        delay 0.00000, dispersion 64.00000
        offset 0.000000
        12 Apr 01:10:14 ntpdate[32072]: no server suitable for synchronization found

    I have tried Googling the 'no server suitable for synchronization found' error online and I have tried disabling my firewall (running iptables -L returns no rules). I have also confirmed with my DC that there are no rules that are blocking ntp (port 123). Does anyone have any ideas on how I may fix this? Btw, this is what the output should look like on a working server in another DC:

        11 Apr 19:01:24 ntpdate[725]: ntpdate [email protected] Fri Nov 18 13:21:17 UTC 2011 (1)
        Looking for host 184.105.192.247 and service ntp
        host found : 247.conarusp.net
        transmit(184.105.192.247)
        receive(184.105.192.247)
        transmit(184.105.192.247)
        receive(184.105.192.247)
        transmit(184.105.192.247)
        receive(184.105.192.247)
        transmit(184.105.192.247)
        receive(184.105.192.247)
        transmit(184.105.192.247)
        receive(184.105.192.247)
        transmit(184.105.192.247)
        server 184.105.192.247, port 123
        stratum 2, precision -20, leap 00, trust 000
        refid [184.105.192.247], delay 0.18044, dispersion 0.00006
        transmitted 4, in filter 5
        reference time:      d330364e.e956694f  Wed, Apr 11 2012 18:56:46.911
        originate timestamp: d3303765.8702d025  Wed, Apr 11 2012 19:01:25.527
        transmit timestamp:  d3303765.73b213e3  Wed, Apr 11 2012 19:01:25.451
        filter delay:  0.18069  0.18044  0.18045  0.18048  0.18048  0.00000  0.00000  0.00000
        filter offset: -0.00195 -0.00197 -0.00211 -0.00202 -0.00202 0.000000 0.000000 0.000000
        delay 0.18044, dispersion 0.00006
        offset -0.001970

    Read the article

  • Cisco access switch is dropping a large number of endpoints

    - by user135458
    This afternoon, with no changes to the network, a switch suddenly started dropping a large number of connections. These connections would come back up a few minutes later, then another area connected to the switch would drop off. This is an older 4006 chassis switch, which could in and of itself be a problem, but I'm looking to see what else you all would look for in trying to find a root cause. The switch is connected via ports 1/1 and 1/2 in an EtherChannel to a VSS core, 1/1/42 and 2/1/42. Both sides are up and working; however, the CPU on the switch will spike up to 99%, and that's when CRC errors start to hit the VSS core on one of those interfaces and endpoints start dropping off. We tried new transceivers and SFPs on each side of the link, same result. When we swapped the fiber patch cables on the access switch, the CRC errors did not follow the fiber cables - they stayed with port 1/2 on the access switch. So port 1/2 on the supervisor module looks like the culprit. We actually tried to create a new member of the EtherChannel by taking a fiber-to-cat5 media converter and making that a member of the port-channel, but when we plugged it in you couldn't even reach the switch. I'm guessing that's unrelated and a problem with the media converter. As of right now we have left it in a state of only one fiber cable running to one side of the VSS core (1/1 on the access switch to 2/1/42). I've sent some info to TAC and they are looking into the situation, but does anyone else have any commands I could run or some troubleshooting I could look into in the meantime?

    Read the article
