Search Results

Search found 1898 results on 76 pages for 'fedora ds'.

  • Deployed a Zend App on EC2 and it's not working

    - by Gublooo
    Hey guys, I'm confused about this problem and not sure if I can provide enough details. I have a PHP app built on the Zend Framework which I've successfully deployed with other hosting companies, and I am now trying to move to Amazon EC2. I moved all my code and set my domain to point to the IP address. So far so good.

    When I access my home page, say www.example.com, everything looks good: the home page opens, which means the IndexController is being called and the index method is properly executed, pulling data from the database and displaying it in the index.phtml page. That leads me to believe everything is working. But every link I click on the home page - whether it's a simple "contact us" link or any other action I try to call, even directly through the URL - results in:

        404 Not Found
        The requested URL /user/add was not found on this server.
        Apache/2.2.9 (Fedora) Server at www.example.com Port 80

    The interesting thing is that the home page opens fine when I call www.example.com, but when I give the full path, www.example.com/index/index, I get the same error. I have checked the logs and there are no errors. Has anyone encountered something similar, or any idea if I'm missing a simple step - a rewrite rule maybe? It's running on LAMP. Any ideas? Thanks
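
    A plausible first thing to check here (the post doesn't confirm it) is whether mod_rewrite is sending every request to the Zend Framework front controller; only the home page working is the classic symptom of the rewrite rules being ignored. A minimal sketch of the stock ZF .htaccess, assuming the vhost's DocumentRoot is the public/ directory and AllowOverride All is set for it:

        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} -s [OR]
        RewriteCond %{REQUEST_FILENAME} -l [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^.*$ - [NC,L]
        # anything that is not a real file, link or directory goes to the front controller
        RewriteRule ^.*$ index.php [NC,L]

    If .htaccess files are disabled on the EC2 box, the same rules can live inside the <VirtualHost> block instead; that is often the difference from the previous hosts.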

  • mysql-python stopped working

    - by MAC
    This is a rather dumb question, but I'm looking at a bizarre situation. I am running Fedora and have Python 2.6.5 installed. The other day I installed MySQL-python using yum (because I do not have the setuptools module, so I cannot build it from source). Yesterday I wrote my entire data access layer in Python and it was running fine - I did test it. Today, however, it gives me:

        ImportError: No module named MySQLdb

    The only thing I changed was that I installed Eclipse and PyDev. Any ideas on what went wrong and how I fix it? I tried removing and re-installing MySQL-python, but that did not help. I did the following:

        import sys
        print sys.path

    and it shows me all the paths, which basically all point to /usr/local/lib/python2.6. However, when I tried to find where the MySQLdb module is installed, it seems to be in /usr/lib/python2.5/site-packages. I have no idea why it got installed there, why it was working earlier, and why it stopped working now. Any ideas on how I should fix it? I did try copying the site-packages folder over to the python2.6 folder, but that did not work. Help!
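
    A hedged diagnostic sketch: the symptoms suggest two interpreters are involved (the distro Python that yum installed MySQLdb for, and the /usr/local Python 2.6 that the shell or PyDev is now launching). Run something like this from both the terminal and the PyDev console to see which interpreter and search path each uses; the appended path below is simply the one reported in the post, not a recommendation:

        import sys
        print sys.executable              # which python binary is actually running
        print sys.version
        for p in sys.path:
            print p                       # /usr/local/... here means the locally built 2.6

        # temporary experiment only:
        sys.path.append('/usr/lib/python2.5/site-packages')
        import MySQLdb

    Note that a C extension built for Python 2.5 may refuse to import under 2.6 anyway, in which case installing MySQL-python for whichever interpreter PyDev uses is the real fix.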

  • Eclipse on Windows doesn't start

    - by sap
    I usually do all my Java development on Linux; using the Fedora package manager, setting up a development environment is easy and fast. Now I have to start using Windows, but I have never used it for Java development and I'm having a few difficulties getting it set up. I downloaded and installed the Java 6 JDK (just the Standard Edition, not EE). Next I downloaded the Eclipse Classic package, which doesn't have an installer - you just unzip it and run it. I had to add the Java bin directory to the PATH variable, which I did. But when I start eclipse.exe I get this: http://img02.imagefra.me/img/img02/1/12/12/f_12c33ivd2m_c79c09f.jpg I already made a new environment variable called CLASSPATH and added the d:/java sdk/lib directory to it, but it's the same thing. Am I missing something? Thanks.

    UPDATE: I wrote the path to java.exe in the eclipse.ini file (linking to jvm.dll didn't work) and now it just opens a console window for a few seconds and then closes (it doesn't output anything). Launching it like this:

        java -jar plugins/org.eclipse.equinox.launcher_1.0.0.v20070208a.jar

    makes the VM run for about 1-2 seconds and then it returns, with no output.

    UPDATE 2: I didn't know it was writing a log file; I found it and read it, and it said I was using GWT x32 libraries on an x64 VM, so I downloaded an Eclipse x64 version and it worked. I still had to use the .ini trick to say where the JVM is installed. Thanks a lot for the help.
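
    For reference, the eclipse.ini edit mentioned in the update has a specific required form: -vm goes on its own line, the full path to javaw.exe on the next line, and both must appear before -vmargs. A sketch, with a hypothetical JDK path:

        -vm
        C:\Program Files\Java\jdk1.6.0_20\bin\javaw.exe
        -vmargs
        -Xms40m
        -Xmx512m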

  • Tomcat reporting 404 error on all of newly deployed WAR files?

    - by dacracot
    I deployed a WAR file into $TOMCAT_HOME/webapps by copying the file into the directory, just like I've done a thousand times before. Tomcat detects the WAR and inflates it, and I can traverse the directory tree on my server at the command line (it's Fedora). But when I address the webapp from my client machine's browser, I get nothing but 404 errors.

    This has happened to the last two deployments of completely separate WARs. The first was a replacement of an existing WAR: I deleted the WAR and its inflated directory, then copied in the new WAR, which inflated... 404. I deleted everything again and put back the previously working WAR from backup; it inflated and worked. The second was a completely new, never-before-deployed WAR... nothing but 404. Other WARs are working, but now I'm afraid to change anything until I know what is going on. Any clues?

    Edit: From my comment you can see that the logs included "SEVERE: Error listenerStart" after the WAR was deployed by Tomcat. There were no stack traces or other errors reported.
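
    "SEVERE: Error listenerStart" usually means a ServletContextListener threw an exception during startup, and by default Tomcat swallows the stack trace. A commonly suggested way to surface it (a sketch, not verified against this exact Tomcat version) is to drop a logging.properties into the failing webapp's WEB-INF/classes:

        handlers = org.apache.juli.FileHandler, java.util.logging.ConsoleHandler
        org.apache.juli.FileHandler.level = FINE
        org.apache.juli.FileHandler.directory = ${catalina.base}/logs
        org.apache.juli.FileHandler.prefix = failing-app.
        java.util.logging.ConsoleHandler.level = FINE
        java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter

    After redeploying, the listener's exception should appear in logs/failing-app.<date>.log (or catalina.out), which normally names the missing class or bad configuration directly.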

  • no such file to load -- rubygems (LoadError)

    - by Vineeth
    Hello, I recently installed Rails on Fedora 12. I'm new to Linux as well. Everything works fine on Windows 7, but I'm facing a lot of problems on Linux - help please! I've installed all the essentials, to my knowledge, to get the basic script/server up and running, but this error from boot.rb pops up when I try script/server. Some details: the directories where rails, ruby and gem are installed are

        [vineeth@localhost my_app]$ which ruby
        /usr/local/bin/ruby
        [vineeth@localhost my_app]$ which rails
        /usr/bin/rails
        [vineeth@localhost my_app]$ which gem
        /usr/bin/gem

    and when I run script/server, this is the error:

        [vineeth@localhost my_app]$ script/server
        ./script/../config/boot.rb:9:in `require': no such file to load -- rubygems (LoadError)
            from ./script/../config/boot.rb:9
            from script/server:2:in `require'
            from script/server:2

    My ~/.bash_profile looks like this:

        [vineeth@localhost my_app]$ cat ~/.bash_profile
        # .bash_profile

        # Get the aliases and functions
        if [ -f ~/.bashrc ]; then
                . ~/.bashrc
        fi

        # User specific environment and startup programs
        PATH=$PATH:$HOME/bin
        export PATH="/usr/local/bin:/usr/local/sbin:/usr/bin/ruby:$PATH"

    I suppose it is something to do with the PATH. Let me know what I need to change here, and if there are other changes I should make, please let me know. Thanks
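
    The paths above hint at two Ruby installations: a locally built /usr/local/bin/ruby that script/server picks up, and Fedora's /usr/bin/ruby that the rubygems package was installed for. A hedged diagnostic sketch to confirm which interpreter and gem setup each command uses:

        which -a ruby                                    # list every ruby on the PATH
        ruby -v && gem -v                                # do these report matching installs?
        ruby -e 'require "rubygems"; puts "rubygems OK"' # fails with the same LoadError if this ruby has no rubygems
        gem env | grep -i -e 'ruby executable' -e 'installation directory'

    If /usr/local/bin/ruby turns out to have no rubygems of its own, either install rubygems for it or reorder PATH so the distro ruby in /usr/bin runs the app.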

  • Filtering most out of XML with XSL?

    - by Gnudiff
    I need to transform a lot of XML files (a Fedora export) into a different kind of XML. I'm trying to do it with XSL stylesheets and checking the result with the msxsl transformer. Suppose I have an XML file like this (assuming there are actually other nodes inside AAA, OBJ, and all the other nodes), Source.xml:

        <DOC>
          <AAA>
            <STUFF>example</STUFF>
            <OBJ>
              <OBJVERS id="A1" CREATED="2008-02-18T13:28:08.245Z"/>
              <OBJVERS id="A2" CREATED="2008-02-19T10:42:41.965Z"/>
              <OBJVERS id="A13" CREATED="2009-03-16T12:43:11.703Z"/>
            </OBJ>
          </AAA>
          <FFF/>
          <GGG/>
          <DDD>
            <FILE />
          </DDD>
        </DOC>

    which I need to look something like this (Target.xml):

        <MYOBJ>
          <ELEM>contents of the OBJVERS with the biggest id OR creation date (whichever is easier) go here</ELEM>
          <IMAGE>contents of the <FILE> node go here</IMAGE>
        </MYOBJ>

    The main problem I have (since I am new to XSL and for this particular task do not have enough time to learn it properly) is that I can't understand how to tell the XSL processor not to process anything else; I keep getting output from , for example.

    Update: basically, I solved this problem meanwhile. I will post my own answer and close the question.

    Update 2: OK, Andrew's answer works too, so I am just accepting it. :)
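
    For later readers, a minimal sketch of the usual approach (assumptions: XSLT 1.0 and element names exactly as in the sample). Overriding the built-in rule for text() stops stray text from unwanted nodes leaking into the output, and sorting on CREATED picks the newest OBJVERS; lexicographic sorting works here because the dates are ISO 8601 strings.

        <?xml version="1.0"?>
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:output method="xml" indent="yes"/>

          <!-- silence the default rule that copies text from every node we do not handle -->
          <xsl:template match="text()"/>

          <xsl:template match="/DOC">
            <MYOBJ>
              <xsl:for-each select="AAA/OBJ/OBJVERS">
                <xsl:sort select="@CREATED" order="descending"/>
                <xsl:if test="position() = 1">
                  <!-- copy the attributes and contents of the newest OBJVERS -->
                  <ELEM><xsl:copy-of select="@*|node()"/></ELEM>
                </xsl:if>
              </xsl:for-each>
              <IMAGE><xsl:copy-of select="DDD/FILE/node()"/></IMAGE>
            </MYOBJ>
          </xsl:template>
        </xsl:stylesheet>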

  • FILE* issue in PPU-side code

    - by Cristina
    We are working on a homework assignment on Cell programming for college, and their feedback to our questions is kind of slow, so I thought I could get some faster answers here.

    I have PPU-side code which tries to open a file passed in through char* argv[]; however, this doesn't work - the assignment of the pointer fails and I get NULL. My first idea was that the file isn't in the correct directory, so I copied it to every possible and logical place; my second idea is that maybe the PPU wants this pointer in its LS area, but I can't tell whether that's the bug or not. So... my question is: what am I doing wrong? I am working with the Fedora 7 Cell SDK, with Eclipse as an IDE. Maybe my argument setup is wrong, though it gets the name of the file correctly. Code on request:

        images_t *read_bin_data(char *name)
        {
            FILE *file;
            images_t *img;
            uint32_t *buffer;
            uint8_t buf;
            unsigned long fileLen;
            unsigned long i;

            //Open file
            file = (FILE*)malloc(sizeof(FILE));
            file = fopen(name, "rb");
            printf("[Debug]Opening file %s\n", name);
            if (!file) {
                fprintf(stderr, "Unable to open file %s", name);
                return NULL;
            }
            //.......
        }

    Main launch:

        int main(int argc, char* argv[])
        {
            int i, img_width;
            int modif_this[4] __attribute__ ((aligned(16))) = {1,2,3,4};
            images_t *faces, *nonfaces;
            spe_context_ptr_t ctxs[SPU_THREADS];
            pthread_t threads[SPU_THREADS];
            thread_arg_t arg[SPU_THREADS];

            //initialize img_width
            img_width = atoi(argv[1]);
            printf("[Debug]Img size is %i\n", img_width);

            faces = read_bin_data(argv[3]);
            //.......
        }

    Thanks for the help.
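
    A hedged observation on the snippet itself: fopen() allocates its own FILE structure, so the malloc() is unnecessary (and leaked when the pointer is overwritten), and when fopen() returns NULL the reason is sitting in errno. A small sketch of the open path with that cleaned up and the OS error reported:

        #include <stdio.h>
        #include <errno.h>
        #include <string.h>

        static FILE *open_bin(const char *name)
        {
            FILE *file = fopen(name, "rb");   /* no malloc needed; fopen manages the FILE */
            if (file == NULL) {
                /* strerror(errno) says why: missing file, permissions, bad path, ... */
                fprintf(stderr, "Unable to open %s: %s\n", name, strerror(errno));
                return NULL;
            }
            return file;
        }

    Printing strerror(errno), or passing an absolute path as argv[3], usually settles quickly whether this is a path/working-directory problem rather than anything Cell-specific; it is also worth checking that argc is at least 4 before touching argv[3].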

  • Could access the repository at first, but now cannot

    - by Banani
    Hi! I have configured svnserve (1.6.5, plain, without Apache) on Fedora 12. After configuration I could run subcommands that access the repository, such as commit, update, checkout and list. But the next time (after stopping svnserve with Ctrl-C and then starting it again) I tried the same commands, I could not access the repository. This happens both from the local machine and from a remote machine. I ran svn and svnserve as below:

        svn commit svn://127.0.0.1/myrepository/        (from a local client)
        svnserve -d --foregorund --listen-port=3690 -r /path-to-repository/mypository/

    To understand the problem better, I created another repository and found similar behavior: at first I could access the repository, and then I could not. I tried running strace on svnserve, but I don't understand much of it. Below is partial output:

        accept(3, {sa_family=AF_INET, sin_port=htons(54425), sin_addr=inet_addr("127.0.0.1")}, [16]) = 4
        fcntl64(4, F_GETFD) = 0
        fcntl64(4, F_SETFD, FD_CLOEXEC) = 0
        waitpid(-1, 0xbfcdf31c, WNOHANG|WSTOPPED) = -1 ECHILD (No child processes)
        clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0xb7743758) = 9737
        close(4) = 0
        accept(3, 0xbfcdf2bc, [128]) = ? ERESTARTSYS (To be restarted)
        --- SIGCHLD (Child exited) @ 0 (0) ---
        sigreturn() = ? (mask now [])
        waitpid(-1, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], WNOHANG|WSTOPPED) = 9737
        waitpid(-1, 0xbfcdf31c, WNOHANG|WSTOPPED) = -1 ECHILD (No child processes)

    My questions: why can't users access the repository any more? What does the strace output say about this problem? Any help is much appreciated. Thanks. Banani
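
    Two hedged things worth checking (neither is confirmed by the post): the command above spells --foreground as "--foregorund", which svnserve would reject, so the process answering on the second attempt may be a stale instance started with a different -r root; and repository db files created by an earlier access as a different user can end up unreadable to the svnserve user. A quick sketch:

        ps aux | grep svnserv[e]                    # is an old instance still running with another -r root?
        svnserve -d --foreground --listen-port=3690 -r /path-to-repository/
        ls -l /path-to-repository/myrepository/db   # files should be readable/writable by the svnserve user
        svn info svn://127.0.0.1/myrepository/      # retry from a client while the foreground instance runs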

  • X-Forwarded-For causing Undefined index in PHP

    - by bateman_ap
    Hi, I am trying to integrate some third-party tracking code into one of my sites; however, it is throwing up some errors, and their support isn't being much use, so I want to try and fix their code myself. Most of it I have fixed, but this function is giving me problems:

        private function getXForwardedFor() {
            $s =& $this;
            $xff_ips = array();
            $headers = $s->getHTTPHeaders();
            if ($headers['X-Forwarded-For']) {
                $xff_ips[] = $headers['X-Forwarded-For'];
            }
            if ($_SERVER['REMOTE_ADDR']) {
                $xff_ips[] = $_SERVER['REMOTE_ADDR'];
            }
            return implode(', ', $xff_ips); // will return blank if not on a web server
        }

    In my dev environment, where I am showing all errors, I am getting:

        Notice: Undefined index: X-Forwarded-For in /sites/webs/includes/OmnitureMeasurement.class.php on line 1129

    Line 1129 is:

        if ($headers['X-Forwarded-For']) {

    If I print out $headers I get:

        Array
        (
            [Host] => www.domain.com
            [User-Agent] => Mozilla/5.0 (Windows; U; Windows NT 6.1; en-GB; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
            [Accept] => text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
            [Accept-Language] => en-gb,en;q=0.5
            [Accept-Encoding] => gzip,deflate
            [Accept-Charset] => ISO-8859-1,utf-8;q=0.7,*;q=0.7
            [Keep-Alive] => 115
            [Connection] => keep-alive
            [Referer] => http://www10.toptable.com/
            [Cookie] => PHPSESSID=nh9jd1ianmr4jon2rr7lo0g553; __utmb=134653559.30.10.1275901644; __utmc=134653559
            [Cache-Control] => max-age=0
        )

    I can't see X-Forwarded-For in there, which I think is causing the problem. Is there something I should add to the function to take this into account? I am using PHP 5.3 and Apache 2 on Fedora.
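
    The notice is just PHP complaining that the array key is read without checking that it exists; the header only appears when a proxy adds it. A sketch of the same function with the lookups guarded (behaviour otherwise unchanged):

        private function getXForwardedFor() {
            $xff_ips = array();
            $headers = $this->getHTTPHeaders();
            // isset() keeps the notice away when no proxy added the header
            if (isset($headers['X-Forwarded-For']) && $headers['X-Forwarded-For'] !== '') {
                $xff_ips[] = $headers['X-Forwarded-For'];
            }
            if (!empty($_SERVER['REMOTE_ADDR'])) {
                $xff_ips[] = $_SERVER['REMOTE_ADDR'];
            }
            return implode(', ', $xff_ips); // still blank when not on a web server
        }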

  • Can the Subversion client (svn) dereference symbolic links as if they were files?

    - by Ryan B. Lynch
    I have a directory on a Linux system that mostly contains symlinks to files on a different filesystem. I'd like to add the directory to a Subversion repository, dereferencing the symlinks in the process (treating them as the files they point to, rather than as links). Generally, I'd like to be able to handle any working-copy operations with this behavior, but the 'svn add' command is where it starts, I think.

    The SVN client utility doesn't appear to have any options related to symlink dereferencing in the working copy. I didn't find any references to this in the manual (http://svnbook.red-bean.com/en/1.5/index.html), either. I found a poster on the SVN users mailing list who asked the same question but never received an answer, here: http://markmail.org/message/ngchfnzlmm43yj7h (That poster ended up using hard links instead of symlinks. That technique is not an option in my case, because the real underlying files reside on a separate filesystem.)

    I'm using Subversion v1.6.1 on Fedora 11. For what it's worth, I know that there are alternative tools/techniques that could help approximate this behavior, but which I have to discard for various reasons. I've already considered (and dust-binned) these possibilities:

        - a "union" mount, merging all of the directories containing the real files, with the SVN working-copy directory as the "top" layer in the union;
        - copying/moving the real files to the same filesystem as the SVN working copy, and using hard links instead of symlinks;
        - non-SVN version control systems.

    These were all neat ideas, and I'm sure they are good solutions to other problems, but they won't work given the constraints of this environment and situation.
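
    One workaround not on the list above (a sketch only, with hypothetical paths): since svn itself won't dereference, materialize dereferenced copies into the working copy before each commit, for example with rsync's --copy-links:

        # copy the link *targets* into the working copy, then version those copies
        rsync -rL --delete /data/with-symlinks/ ~/wc/snapshot/
        svn add --force ~/wc/snapshot
        svn commit -m "snapshot of dereferenced files" ~/wc/snapshot

    Files that disappear from the source would still need an svn delete pass, so this is a scheduled-snapshot approach rather than true dereferencing on every working-copy operation.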

  • Building a custom Linux Live CD

    - by Mike Heinz
    Can anyone point me to a good tutorial on creating a bootable Linux CD from scratch? I need help with a fairly specialized problem: my firm sells an expansion card that requires custom firmware. Currently we use an extremely old live CD image of RH7.2 that we update with current firmware. Manufacturing puts the cards in a machine, boots off the CD, the CD writes the firmware, they power off and pull the cards. Because of this cycle, it's essential that the CD boot and shut down as quickly as possible. The problem is that with the next generation of cards, I have to update the CD to a 2.6 kernel. It's easy enough to acquire a pre-existing live CD, but those are all designed for showing off Linux on the desktop, which means they take forever to boot. Can anyone fix me up with a current How-To?

    Update: So, just as a final update for anyone reading this later - the tool I ended up using was "livecd-creator". My reason for choosing this tool was that it is available for Red Hat-based distributions like CentOS, Fedora and RHEL, which are all distributions that my company supports already. In addition, while the project is very poorly documented, it is extremely customizable. I was able to create a minimal live CD and edit the boot sequence so that it booted directly into the firmware updater instead of a bash shell. The whole job would have only taken an hour or two if there had been a README explaining the configuration file!
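
    For later readers, a minimal sketch of the livecd-creator workflow described in the update (the kickstart file name and contents are assumptions, not the poster's actual configuration):

        yum install livecd-tools

        # firmware-updater.ks (hypothetical): a minimal package set plus a %post
        # section that starts the firmware updater at boot instead of a shell
        livecd-creator --config=firmware-updater.ks --fslabel=Firmware-Updater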

  • How do you fix loading plugins in Eclipse 3.5.1 on Linux?

    - by Jay R.
    I have two Linux boxes, both Fedora 11 x64. On one, I downloaded eclipse-java-galileo-SR1-linux-gtk-x86_64.tar.gz, unpacked it to /opt/eclipse-3.5.1/ and used the "Install New Software..." item to install the SVN team provider and the Polarion SVN connectors. Everything works.

    On the second, I copied the same Eclipse tar.gz there and then tried to follow the same steps. When I get to installing the SVN team provider, Eclipse downloads it, claims to install it and asks to restart. I restart and there is no SVN support. The software installer knows it's there because I can't reinstall it without uninstalling it. So the questions:

    Why isn't the plugin/feature loading for the SVN team support? Is there a checkbox that I forgot about that enables the plugin? Is there a command-line option that will force-reload all of the features on the disk? I've tried to install other things like FindBugs, but I get the same result. I have no messages in the log file indicating an exception or anything like that.
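
    On the "command-line option that will force-reload all of the features" question: the Eclipse launcher does have a -clean flag that flushes the OSGi bundle caches and re-resolves everything on disk, and -consolelog echoes the platform log to the terminal. A sketch of how it is usually run (worth trying, though not guaranteed to fix this particular install):

        cd /opt/eclipse-3.5.1
        ./eclipse -clean -consolelog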

  • Make Errors: Missing Includes in C++ Script?

    - by Abs
    Hello all, I just got help with compiling this script a few minutes ago on SO, but I have managed to get errors. I am only a beginner in C++ and have no idea what the errors below mean or how to fix them. This is the script in question. I have read comments from some users suggesting they changed the #include parts, but it seems to be exactly what the script has; see this comment.

        [root@localhost wkthumb]# qmake-qt4 && make
        g++ -c -pipe -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fstack-protector --param=ssp-buffer-size=4 -m32 -march=i386 -mtune=generic -fasynchronous-unwind-tables -Wall -W -D_REENTRANT -DQT_NO_DEBUG -DQT_GUI_LIB -DQT_CORE_LIB -I/usr/lib/qt4/mkspecs/linux-g++ -I. -I/usr/include/QtCore -I/usr/include/QtGui -I/usr/include -I. -I. -I. -o main.o main.cpp
        main.cpp:5:20: error: QWebView: No such file or directory
        main.cpp:6:21: error: QWebFrame: No such file or directory
        main.cpp:8: error: expected constructor, destructor, or type conversion before ‘*’ token
        main.cpp:11: error: ‘QWebView’ has not been declared
        main.cpp: In function ‘void loadFinished(bool)’:
        main.cpp:18: error: ‘view’ was not declared in this scope
        main.cpp:18: error: ‘QWebSettings’ has not been declared
        main.cpp:19: error: ‘QWebSettings’ has not been declared
        main.cpp:20: error: ‘QWebSettings’ has not been declared
        main.cpp: In function ‘int main(int, char**)’:
        main.cpp:42: error: ‘view’ was not declared in this scope
        main.cpp:42: error: expected type-specifier before ‘QWebView’
        main.cpp:42: error: expected `;' before ‘QWebView’
        make: *** [main.o] Error 1

    I have the Qt packages on my Fedora Core 10 machine:

        qt-4.5.3-9.fc10.i386
        qt-devel-4.5.3-9.fc10.i386

    Thanks all for any help.
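
    The missing headers (QWebView, QWebFrame, QWebSettings) live in the QtWebKit module, which qmake only adds to the include and link paths when the project file asks for it. A sketch of the .pro file (the file and target names are assumptions; the QT += webkit line is the point):

        # wkthumb.pro (hypothetical name)
        QT      += webkit
        TARGET   = wkthumb
        SOURCES += main.cpp

    After editing the project file, re-run qmake-qt4 before make so the regenerated Makefile picks up -I/usr/include/QtWebKit and -lQtWebKit.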

  • [RPM Building] How to take user input during install

    - by Sam
    When I create a Debian package, I am able to write a post-installation shell script that runs just fine. Currently mine is configured to do:

        echo "Please enter your MySQL Database user (default root)"
        read MYSQL_USER
        echo "Please enter the MySQL Database user password (default root)"
        read -s MYSQL_PASS

        DBEXIST=0
        CMD="create database lportal;use lportal;"
        (mysql -u$MYSQL_USER -p$MYSQL_PASS -e "$CMD") || ((DBEXIST++))
        if [ $DBEXIST -ne 0 ]; then
            echo "Setup finished, but MySQL already has an lportal table. This could be from a previous installation of Liferay. If you want a fresh installation of this bundle, please remove the lportal table and reinstall this package."
        fi

    This works fine for Ubuntu. However, I can't seem to get user input to work with RPMs for Fedora. Is there a good way to take user input? From what I understand, RPMs were designed not to allow interactive installs, but I can't see a better way to do this. Is there possibly a way to automatically find local MySQL settings without asking the user? Otherwise, what's the best way to ask for user input?
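
    RPM scriptlets are indeed meant to run unattended (stdin is not guaranteed to be a terminal), so the usual pattern is to read settings from a file instead of prompting. A hedged sketch of a %post fragment; the config file path is hypothetical:

        # %post sketch: defaults, optionally overridden by a file the admin drops in place
        MYSQL_USER=root
        MYSQL_PASS=root
        [ -r /etc/sysconfig/liferay-install ] && . /etc/sysconfig/liferay-install

        CMD="create database lportal;"
        if ! mysql -u"$MYSQL_USER" -p"$MYSQL_PASS" -e "$CMD" >/dev/null 2>&1; then
            echo "lportal database already exists (or the credentials in /etc/sysconfig/liferay-install are wrong); leaving it untouched." >&2
        fi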

  • Configure Django project in a subdirectory using mod_python. Admin not working.

    - by David
    Hi guys. I was trying to configure my Django project in a subdirectory of the root, but I didn't get things working (locally it works perfectly). I followed the official Django documentation to deploy a project with mod_python. The real problem is that I get "Page not found" errors whenever I try to go to the admin or to any view of my apps. Here is my python.conf file, located in /etc/httpd/conf.d/ on Fedora 7:

        LoadModule python_module modules/mod_python.so
        SetHandler python-program
        PythonHandler django.core.handlers.modpython
        SetEnv DJANGO_SETTINGS_MODULE mysite.settings
        PythonOption django.root /mysite
        PythonDebug On
        PythonPath "['/var/www/vhosts/mysite.com/httpdocs','/var/www/vhosts/mysite.com/httpdocs/mysite'] + sys.path"

    I know /var/www/ is not the best place to put my Django project, but I just want to send a demo of my work in progress to my customer; later I will change the location. For example: if I go to www.domain.com/mysite/ I get the index view I configured in mysite.urls, but I cannot access my app's URLs (www.domain.com/mysite/app/) or any of the admin URLs (www.domain.com/mysite/admin/). Here is mysite.urls:

        urlpatterns = patterns('',
            url(r'^admin/password_reset/$', 'django.contrib.auth.views.password_reset', name='password_reset'),
            (r'^password_reset/done/$', 'django.contrib.auth.views.password_reset_done'),
            (r'^reset/(?P<uidb36>[0-9A-Za-z]+)-(?P<token>.+)/$', 'django.contrib.auth.views.password_reset_confirm'),
            (r'^reset/done/$', 'django.contrib.auth.views.password_reset_complete'),
            (r'^$', 'app.views.index'),
            (r'^admin/', include(admin.site.urls)),
            (r'^app/', include('mysite.app.urls')),
            (r'^photologue/', include('photologue.urls')),
        )

    I also tried changing admin.site.urls to 'django.contrib.admin.urls', but it didn't work. I googled a lot and read how other developers configure their Django projects, but didn't find much information on deploying Django in a subdirectory. I have the admin enabled in INSTALLED_APPS and settings.py is OK. Please, if you have any guide or can tell me what I am doing wrong, it will be much appreciated. Thanks.
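
    One thing that stands out (an assumption on my part: the angle-bracket lines were probably stripped when the config was pasted) is that the mod_python directives need to sit inside a <Location> block matching the subdirectory, which is also what PythonOption django.root expects. The deployment form from the Django docs of that era looks roughly like this:

        LoadModule python_module modules/mod_python.so

        <Location "/mysite">
            SetHandler python-program
            PythonHandler django.core.handlers.modpython
            SetEnv DJANGO_SETTINGS_MODULE mysite.settings
            PythonOption django.root /mysite
            PythonDebug On
            PythonPath "['/var/www/vhosts/mysite.com/httpdocs'] + sys.path"
        </Location>

    With django.root set to /mysite, the prefix is stripped before URL resolution, so the urlpatterns above stay unprefixed exactly as written.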

  • How to check successful copy() function execution in PHP?

    - by OM The Eternity
    How do I check that the copy() function executed successfully in PHP? I am using the following code:

        <?
        function full_copy( $source, $target ) {
            if ( is_dir( $source ) ) {
                @mkdir( $target );
                $d = dir( $source );
                while ( FALSE !== ( $entry = $d->read() ) ) {
                    if ( $entry == '.' || $entry == '..' ) {
                        continue;
                    }
                    $Entry = $source . '/' . $entry;
                    if ( is_dir( $Entry ) ) {
                        full_copy( $Entry, $target . '/' . $entry );
                        continue;
                    }
                    copy( $Entry, $target . '/' . $entry );
                }
                $d->close();
            } else {
                copy( $source, $target );
            }
        }

        $source = '.';
        $destination = '/html/parth/';
        full_copy($source, $destination);
        ?>

    I do not get anything in my parth folder. Why? I am working on Windows; the script is executed on a Fedora system (it's my server)...
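
    copy(), mkdir() and dir() all return FALSE on failure, and the @ in front of mkdir() hides the warning that would explain why nothing arrives (a likely suspect being that the web server user cannot write to /html/parth/). A sketch of the relevant lines with the return values checked and the last PHP error reported (error_get_last() needs PHP 5.2+):

        if ( !@mkdir( $target ) && !is_dir( $target ) ) {
            $e = error_get_last();
            die( "Cannot create $target: " . $e['message'] );
        }
        // ... inside the while loop ...
        if ( !copy( $Entry, $target . '/' . $entry ) ) {
            $e = error_get_last();
            echo "Failed to copy $Entry -> $target/$entry: " . $e['message'] . "\n";
        }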

  • PHP bcompiler install

    - by dobs
    How do I install bcompiler on Fedora? I get this error (parts of the compiler output are localized and came through garbled):

        [root@server server]# pecl install channel://pecl.php.net/bcompiler-0.9.1
        downloading bcompiler-0.9.1.tgz ...
        Starting to download bcompiler-0.9.1.tgz (47,335 bytes)
        .............done: 47,335 bytes
        10 source files, building
        running: phpize
        Configuring for:
        PHP Api Version:         20090626
        Zend Module Api No:      20090626
        Zend Extension Api No:   220090626
        building in /var/tmp/pear-build-server/bcompiler-0.9.1
        running: /var/tmp/bcompiler/configure
        checking for grep that handles long lines and -e... /bin/grep
        checking for egrep... /bin/grep -E
        checking for a sed that does not truncate output... /bin/sed
        checking for cc... cc
        checking whether the C compiler works... yes
        checking for C compiler default output file name... a.out
        checking for suffix of executables...
        checking whether we are cross compiling... no
        checking for suffix of object files... o
        checking whether we are using the GNU C compiler... yes
        ....
        /var/tmp/bcompiler/bcompiler.c:2174: expected 'struct zend_arg_info *' but argument is of type 'const struct _zend_arg_info *'
        /var/tmp/bcompiler/bcompiler.c: In function 'apc_serialize_zend_class_entry':
        /var/tmp/bcompiler/bcompiler.c:3100: [garbled localized warning]
        /var/tmp/bcompiler/bcompiler.c:3107: [garbled localized warning about argument 1 of 'apc_serialize_zend_function_entry']
        /var/tmp/bcompiler/bcompiler.c:2874: expected 'struct zend_function_entry *' but argument is of type 'const struct _zend_function_entry *'
        /var/tmp/bcompiler/bcompiler.c: In function 'apc_deserialize_zend_class_entry':
        /var/tmp/bcompiler/bcompiler.c:3324: [garbled localized warning about argument 1 of 'apc_deserialize_zend_function_entry']
        /var/tmp/bcompiler/bcompiler.c:2900: expected 'struct zend_function_entry *' but argument is of type 'const struct _zend_function_entry *'
        make: *** [bcompiler.lo] Error 1
        ERROR: 'make' failed

    yum install bzip2-libs bzip2-devel - that fixed it...

  • Errors when switching to specific static IP

    - by michaelc
    I had a Fedora box running using my static IP 69.169.136.6, etc., all configured according to what the ISP required. Just recently the hard drive failed (and I should have been keeping better backups). While it is being recovered I would like to put up a web page on my Arch Linux PC explaining the problem; I presently do not have sufficient access to change the DNS record assigned to the domain.

    When I change my IP address to 69.169.136.6 while the system is running, ifconfig reports the new IP address, but http://whatismyip.com/ does not. When I change it and reboot, I can't ping - the message I receive is "connect: Network is unreachable" (when given one of google.com's IP addresses - hostnames give me "ping: unknown host xxx"). Until I have access to the DNS system, what can I do to make this work?

    Edit: With a new IP address, same problem; the IP is now 69.169.136.29. Some commands might be useful:

        #ping 69.169.136.1
        PING 69.169.136.1 (69.169.136.1) 56(84) bytes of data.
        64 bytes from 69.169.136.1: icmp_seq=1 ttl=64 time=0.377 ms

        #ping 69.169.190.211
        connect: Network is unreachable

        #ping 208.72.160.67
        connect: Network is unreachable

        #ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:E0:4D:97:23:9B
                  inet addr:69.169.136.29  Bcast:69.169.137.255  Mask:255.255.254.0
                  inet6 addr: fe80::2e0:4dff:fe97:239b/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:132091 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:17 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:9635179 (9.1 Mb)  TX bytes:1322 (1.2 Kb)
                  Interrupt:29 Base address:0x6000

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:48 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:48 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:2480 (2.4 Kb)  TX bytes:2480 (2.4 Kb)

        #ip route
        69.169.136.0/23 dev eth0  proto kernel  scope link  src 69.169.136.29

        #cat /etc/resolv.conf
        # Generated by dhcpcd
        #nameserver 208.67.222.222
        #nameserver 208.67.220.220
        nameserver 69.169.190.211
        nameserver 208.72.160.67
        # /etc/resolv.conf.tail can replace this line

    Update: I have new static IP addresses, verified to work in Windows... Relevant portions of /etc/rc.conf below:

        #Static IP example
        #eth0="eth0 69.169.136.6 netmask 255.255.254.0 broadcast 69.169.136.1"
        #eth0="eth0 69.169.136.29 netmask 255.255.254.0 broadcast 69.169.137.255"
        eth0="eth0 69.169.136.32 netmask 255.255.254.0 broadcast 69.169.137.255"
        #eth0="dhcp"
        INTERFACES=(eth0)

        # Routes to start at boot-up (in this order)
        # Declare each route then list in ROUTES
        # - prefix an entry in ROUTES with a ! to disable it
        #
        #gateway="default gw 192.168.0.1"
        gateway="default gw 69.169.136.1"
        #gateway="69.169.136.1"
        ROUTES=(!gateway)
        #ROUTES=()
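
    The ip route output above has only the link route and no default route, and the rc.conf comment itself says a leading ! disables an entry, so ROUTES=(!gateway) never installs the gateway. A sketch of both the immediate fix and the rc.conf change:

        # one-off, to get online right now
        ip route add default via 69.169.136.1 dev eth0

        # persistent, in /etc/rc.conf: drop the ! so the gateway route is applied at boot
        gateway="default gw 69.169.136.1"
        ROUTES=(gateway)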

  • Can't ssh tunnel to access a remote MySQL server

    - by hobbes3
    I can't seem to figure out why I can't use an ssh tunnel to connect to my remote MySQL server. I open the tunnel with

        [hobbes3@hobbes3] ~ $ ssh linode -L 3307:localhost:3306

    Then on another terminal I try

        [hobbes3@hobbes3] ~ $ mysql -h localhost -P 3307 -u root --protocol=tcp -p
        Enter password:
        ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 2

    On the server, it shows this:

        root@li534-120 ~ # channel 4: open failed: connect failed: Connection refused

    Here is my.cnf on the server:

        [mysqld]
        # Settings user and group are ignored when systemd is used (fedora >= 15).
        # If you need to run mysqld under different user or group,
        # customize your systemd unit file for mysqld according to the
        # instructions in http://fedoraproject.org/wiki/Systemd
        user=mysql
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        # Disabling symbolic-links is recommended to prevent assorted security risks
        symbolic-links=0
        # Semisynchronous Replication
        # http://dev.mysql.com/doc/refman/5.5/en/replication-semisync.html
        # uncomment next line on MASTER
        ;plugin-load=rpl_semi_sync_master=semisync_master.so
        # uncomment next line on SLAVE
        ;plugin-load=rpl_semi_sync_slave=semisync_slave.so
        # Others options for Semisynchronous Replication
        ;rpl_semi_sync_master_enabled=1
        ;rpl_semi_sync_master_timeout=10
        ;rpl_semi_sync_slave_enabled=1
        # http://dev.mysql.com/doc/refman/5.5/en/performance-schema.html
        ;performance_schema

        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid

        [mysqld]
        port = 3306
        socket=/var/lib/mysql/mysql.sock
        skip-external-locking
        key_buffer_size = 64M
        max_allowed_packet = 128M
        sort_buffer_size = 512K
        net_buffer_length = 8K
        read_buffer_size = 256K
        read_rnd_buffer_size = 512K
        myisam_sort_buffer_size = 8M
        thread_cache = 8
        max_connections = 25
        query_cache_size = 16M
        table_open_cache = 1024
        table_definition_cache = 1024
        tmp_table_size = 32M
        max_heap_table_size = 32M
        bind-address = 0.0.0.0

    Not sure if this helps, but here is the MySQL user list:

        mysql> select * from mysql.user;
        (wide output trimmed to the relevant columns; every privilege column is Y for all three rows)
        +-----------+------+-------------------------------------------+
        | Host      | User | Password                                  |
        +-----------+------+-------------------------------------------+
        | localhost | root | *664328D3C5E263F4FB25185681AAE7E92B01B2B0 |
        | 127.0.0.1 | root | *664328D3C5E263F4FB25185681AAE7E92B01B2B0 |
        | ::1       | root | *664328D3C5E263F4FB25185681AAE7E92B01B2B0 |
        +-----------+------+-------------------------------------------+
        3 rows in set (0.00 sec)

    I read about how MySQL treats localhost vs 127.0.0.1 as connecting via a socket or TCP, respectively, but I'm starting to get confused about what's really going on and whether socket vs TCP is even the issue. Thanks in advance; I'm open to any tips and suggestions!

    Some more info: my MySQL client, running OS X 10.8.4, is

        mysql Ver 14.14 Distrib 5.6.10, for osx10.8 (x86_64) using EditLine wrapper

    My MySQL server, running on CentOS 6.4 32-bit, is

        mysql> SHOW VARIABLES LIKE "%version%";
        +-------------------------+--------------------------------------+
        | Variable_name           | Value                                |
        +-------------------------+--------------------------------------+
        | innodb_version          | 1.1.8                                |
        | protocol_version        | 10                                   |
        | slave_type_conversions  |                                      |
        | version                 | 5.5.28                               |
        | version_comment         | MySQL Community Server (GPL) by Remi |
        | version_compile_machine | i686                                 |
        | version_compile_os      | Linux                                |
        +-------------------------+--------------------------------------+
        7 rows in set (0.00 sec)
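
    "channel 4: open failed: connect failed: Connection refused" means sshd on the server could not connect to localhost:3306, so the tunnel itself is fine and the question is what mysqld is actually listening on. A hedged diagnostic sketch, first on the server and then retrying the tunnel against 127.0.0.1 explicitly:

        # on the server
        netstat -lnt | grep 3306                 # is mysqld listening on TCP at all?
        mysql -h 127.0.0.1 -P 3306 -u root -p    # does a local TCP connection work?

        # from the client, name the loopback address explicitly on both ends
        ssh linode -L 3307:127.0.0.1:3306
        mysql -h 127.0.0.1 -P 3307 -u root -p

    If netstat shows nothing on 3306, the effective configuration (possibly another included config file) is disabling TCP, for example via skip-networking.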

  • How can I install things in Linux with *no yum* and *no wget*?

    - by e9t
    I'm a newbie to Linux (I mainly use Windows and Mac OS X) and I need some advice. I was trying to install git on a Linux machine today and ran into some problems. Not knowing the version of the installed OS, I opened /proc/version, which said:

        Linux version 2.6.9-42.0.2.ELsmp ([email protected]) (gcc version 3.4.6 20060404 (Red Hat 3.4.6-3)) #1 SMP Thu Aug 17 17:57:31 EDT 2006

    Then, as described in the git documentation (http://git-scm.com/download/linux), I assumed I could use the yum install git command for Fedora, but got the following result:

        [root@myserver ~]# yum install git
        -bash: yum: command not found

    So I tried installing yum using wget, but wasn't so lucky:

        [root@myserver ~]# wget http://linux.duke.edu/projects/yum/download/2.0/yum-2.0.7.tar.gz
        -bash: wget: command not found

    I googled and found this page and this page, so I tried installing yum with rpm, but only got a result full of question marks (possibly an encoding problem, hmm...):

        [root@myserver ~]# rpm -Uvh http://www.eomy.net/linux/install-yum-x86_64/wget-1.10.2-0.40E.x86_64.rpm
        (localized rpm output, garbled; it mentions "V3 DSA signature: NOKEY, key ID 443e1821", shows the progress bar reach 100%, and refers to wget-1.10.2-0.40E, /usr/bin/wget and /usr/share/man/man1/wget.1.gz)

    Finally, when I typed rpm --version in the terminal, I got:

        [root@myserver ~]# rpm --version
        RPM version 4.3.3

    I would like to know what I can do or try now. Is it not possible to wget or yum anything in my situation? Or is there any magical tool like Homebrew (http://mxcl.github.com/homebrew/) that I can use? Any comments or advice would be appreciated. Thanks in advance!
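
    For what it's worth, that kernel string (2.6.9-42.EL, gcc "Red Hat 3.4.6") points at a RHEL/CentOS 4-era system rather than Fedora, so Fedora package instructions won't apply directly. A hedged sketch of ways to get files onto the box without wget; the URLs and versions are placeholders, not recommendations:

        # curl often exists where wget does not
        curl -O http://example.com/git-1.7.x.tar.gz

        # or download on another machine and push it over ssh
        scp git-1.7.x.tar.gz root@myserver:~/

        # building git into your home directory avoids touching the package system
        tar xzf git-1.7.x.tar.gz && cd git-1.7.x
        make prefix=$HOME/local all && make prefix=$HOME/local install

    The build still needs the usual development packages (gcc, zlib headers, and so on), so on a locked-down box the scp route plus a prebuilt binary from an identical system may be the path of least resistance.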

  • Why do I see a large performance hit with DRBD?

    - by BHS
    I see a much larger performance hit with DRBD than its user manual says I should get. I'm using DRBD 8.3.7 (Fedora 13 RPMs). I've set up a DRBD test and measured throughput of disk and network without DRBD:

        dd if=/dev/zero of=/data.tmp bs=512M count=1 oflag=direct
        536870912 bytes (537 MB) copied, 4.62985 s, 116 MB/s

    (/ is a logical volume on the disk I'm testing with, mounted without DRBD.)

        iperf:
        [  4]  0.0-10.0 sec  1.10 GBytes   941 Mbits/sec

    According to Throughput overhead expectations, the bottleneck would be whichever is slower, the network or the disk, and DRBD should have an overhead of 3%. In my case network and I/O seem to be pretty evenly matched, so it sounds like I should be able to get around 100 MB/s. With the raw DRBD device, I get

        dd if=/dev/zero of=/dev/drbd2 bs=512M count=1 oflag=direct
        536870912 bytes (537 MB) copied, 6.61362 s, 81.2 MB/s

    which is slower than I would expect. Then, once I format the device with ext4, I get

        dd if=/dev/zero of=/mnt/data.tmp bs=512M count=1 oflag=direct
        536870912 bytes (537 MB) copied, 9.60918 s, 55.9 MB/s

    This doesn't seem right. There must be some other factor playing into this that I'm not aware of.

    global_common.conf:

        global {
            usage-count yes;
        }
        common {
            protocol C;
        }
        syncer {
            al-extents 1801;
            rate 33M;
        }

    data_mirror.res:

        resource data_mirror {
            device    /dev/drbd1;
            disk      /dev/sdb1;
            meta-disk internal;
            on cluster1 {
                address 192.168.33.10:7789;
            }
            on cluster2 {
                address 192.168.33.12:7789;
            }
        }

    For hardware I have two identical machines: 6 GB RAM, quad-core AMD Phenom 3.2 GHz, motherboard SATA controller, 7200 RPM 64 MB cache 1 TB WD drive. The network is 1 Gb connected via a switch. I know that a direct connection is recommended, but could it make this much of a difference?

    Edited: I just tried monitoring the bandwidth used to see what's happening. I used ibmonitor and measured average bandwidth while I ran the dd test 10 times. I got:

        avg ~450 Mbits writing to ext4
        avg ~800 Mbits writing to raw device

    It looks like with ext4, DRBD is using about half the bandwidth it uses with the raw device, so there's a bottleneck that is not the network.
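
    Not a diagnosis, but the settings most often examined in this situation are the flush/barrier behaviour (metadata flushes hurt filesystem workloads much more than a single raw sequential write) and the activity-log extent count. A hedged sketch of the 8.3-style options, only appropriate when the write cache is battery-backed or disabled; the al-extents value is just an example:

        common {
            protocol C;
            disk {
                no-disk-barrier;   # assumption: the drive/controller cache is safe without barriers
                no-disk-flushes;
            }
            syncer {
                al-extents 3389;   # larger activity log, fewer metadata updates per write burst
            }
        }

    Also note that the existing "rate 33M" line only caps resynchronisation traffic, so it should not by itself explain slow application writes.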

  • Apache Simple Configuration Issue: Setting up per-user directory permission denied problem

    - by Huckphin
    Hello. I am just getting Apache 2.2 running on Fedora 13 Beta 64-bit, and I am running into issues setting up my per-user directories. The goal is to make localhost/~user map to /home/user/public_html. I think the permissions are right, because I have 755 on /home/user, 755 on /home/user/public_html, and 777 on everything inside /home/user/public_html, set recursively. My mod_userdir configuration looks like this:

        <IfModule mod_userdir.c>
            #
            # UserDir is disabled by default since it can confirm the presence
            # of a username on the system (depending on home directory
            # permissions).
            #
            UserDir disabled root
            UserDir enabled huckphin

            #
            # To enable requests to /~user/ to serve the user's public_html
            # directory, remove the "UserDir disabled" line above, and uncomment
            # the following line instead:
            #
            UserDir public_html
        </IfModule>

    The error that I am seeing in the error log is this:

        [Sat May 15 09:54:29 2010] [error] [client 127.0.0.1] (13)Permission denied: access to /~huckphin/index.html denied

    When I log in as the apache user, I know that /~huckphin does not exist, and this is not what I want. I want it to be accessing ~huckphin, not /~huckphin. What do I need to change in my configuration for this to work?

    [Added after comments] Hi Andol, thank you for your suggestions. First off, you said that you assume I have the userdir module enabled, but I am not sure what that means exactly; that could be part of the problem. I do have the module loaded, using the LoadModule directive:

        LoadModule userdir_module modules/mod_userdir.so

    I also did a find for mod_userdir and found it here:

        [huckphin@crhyner-workbox]/% find / . -name '*mod_userdir.so*' 2> /dev/null
        /usr/lib64/lighttpd/mod_userdir.so
        /usr/lib64/httpd/modules/mod_userdir.so

    Is there something else I need to enable? My Directory configuration was also mentioned. I have uncommented the default configuration given; I have not looked into what all of the configuration means, and this could probably explain the issue. Here is the Directory section I have for the user directories:

        <Directory "/home/*/public_html">
            AllowOverride FileInfo AuthConfig Limit
            Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec
            <Limit GET POST OPTIONS>
                Order allow,deny
                Allow from all
            </Limit>
            <LimitExcept GET POST OPTIONS>
                Order deny,allow
                Deny from all
            </LimitExcept>
        </Directory>
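
    A frequent cause of exactly this "(13)Permission denied" on Fedora, even with correct file modes, is SELinux rather than the Apache configuration. A hedged sketch of what is usually checked; the boolean and type names are the standard Fedora ones, but verify them against your policy:

        # is SELinux enforcing, and is home-directory serving allowed?
        getenforce
        getsebool httpd_enable_homedirs

        # allow httpd to read home directories and label the content accordingly
        setsebool -P httpd_enable_homedirs on
        chcon -R -t httpd_user_content_t /home/huckphin/public_html

        # the home directory itself also needs the search (execute) bit for "other"
        chmod 711 /home/huckphin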

  • RHEL5: Can't create sparse file bigger than 256GB in tmpfs

    - by John Kugelman
    /var/log/lastlog gets written to when you log in. The size of this file is based off of the largest UID in the system: the larger the maximum UID, the larger this file is. Thankfully it's a sparse file, so the size on disk is much smaller than the size ls reports (ls -s reports the size on disk).

    On our system we're authenticating against an Active Directory server, and the UIDs users are assigned end up being really, really large. Like, say, UID 900,000,000 for the first AD user, 900,000,001 for the second, etc. That's strange but should be okay. It results in /var/log/lastlog being huuuuuge, though - once an AD user logs in, lastlog shows up as 280GB. Its real size is still small, thankfully.

    This works fine when /var/log/lastlog is stored on the hard drive on an ext3 filesystem. It breaks, however, if lastlog is stored in a tmpfs filesystem. Then it appears that the max file size for any file on the tmpfs is 256GB, so the sessreg program errors out trying to write to lastlog. Where is this 256GB limit coming from, and how can I increase it?

    As a simple test for creating large sparse files I've been doing:

        dd if=/dev/zero of=sparse-file bs=1 count=1 seek=300GB

    I've tried Googling for "tmpfs max file size", "256GB filesystem limit", "linux max file size", things like that. I haven't been able to find much. The only mention of 256GB I can find is that ext3 filesystems with 2KB blocks are limited to 256GB files. But our hard drives are formatted with 4K blocks, so that doesn't seem to be it - not to mention this is happening in a tmpfs mounted ON TOP of the hard drive, so the ext3 partition shouldn't be a factor.

    This is all happening on a 64-bit Red Hat Enterprise Linux 5.4 system. Interestingly, on my personal development machine, which is a 32-bit Fedora Core 6 box, I can create 300GB+ files in tmpfs filesystems no problem. On the RHEL 5.4 systems it is no go.

  • Debugging IO limitation

    - by Martin F
    I have a Fedora box with some severe IO limitations which I have no idea how to debug. The server has an Areca Technology Corp. ARC-1130 12-Port PCI-X to SATA RAID Controller with 12 7200 RPM 1.5 TB disks and a Marvell Technology Group Ltd. 88E8050 PCI-E ASF Gigabit Ethernet Controller. uname -a output:

        2.6.32.11-99.fc12.x86_64 #1 SMP Mon Apr 5 19:59:38 UTC 2010 x86_64 x86_64 x86_64 GNU/Linux

    The server is a file server running Nginx with the stub status module enabled, so I can see the current number of connections. The problem presents itself when I have a high number of simultaneous connections in a writing state - usually around 350; at this very moment it's at 590 and the server is almost unusable and stuck at 230 mbit/s.

    If I run top and hit 1 to see per-core CPU usage, all 4 cores show around 99% iowait. If I run iotop, the nginx workers are the only processes producing any read load, currently at around 25 MB/s. I have each of the workers bound to its own core.

    Initially I figured it was just the disks. But I've run fsck and smartmontools checks and found no errors. I also ran an iozone test, which you can see the result of here: http://www.pastie.org/951667.txt?key=fimcvljulnuqy2dcdxa

    Additionally, when the number of connections is low I have no problem getting a good speed: if I wget over the local network it easily hits 60 MB/sec. Right now I just tried putting a file in /dev/shm, symlinked a file from the public dir to it and used wget over the local network, and only got 50 KB/s. Also, if I try to cp /dev/shm/test /root/test it quickly copies around 740 MB and then slows down HEAVILY, again with iotop reporting 99% iowait.

    I'm not really sure how to go about figuring out what the problem is. It could be a natural disk limitation, but then the file served from /dev/shm ought to transfer fine, so there seems to be a network limit; yet that's fine when there aren't many connections. Perhaps it's a TCP stack problem, but I really have no idea how to check that. Any suggestions on how to proceed with debugging would be very welcome. If additional information is required, let me know and I'll try to get it. Thanks.
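
    A hedged next step for narrowing this down: per-device statistics make it easier to tell "array saturated by concurrent, seek-bound reads" apart from a network or TCP problem. The device name below is an assumption for whatever the Areca volume shows up as:

        # sysstat's iostat: watch latency (await) and %util per device while the stall is happening
        iostat -xm 2 sda

        # how much read-ahead the block device does; many concurrent streams plus low
        # read-ahead on a RAID volume often looks exactly like this workload
        blockdev --getra /dev/sda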

  • Configuring gmail for use on mailing lists

    - by reemrevnivek
    This is really two questions in one. First, are netiquette guidelines still accurate in their restrictions on ASCII vs. HTML, posting style, and line length? (Here's a recent MetaFilter discussion of the topic.) Second, if they are not, should these guidelines be respected? If they are (or if they should still be respected), how can modern mail programs be configured to work properly with them?

    Most mailing list etiquette statements appear to have been written by sysadmins who loved their command lines and refuse to change anything. Many still reference RFC 1855, written in 1995. Just reading that paginated TXT should give you an idea of the climate at the time. Here's a short, fairly random list of mailing list etiquette statements with some extracted formatting guidelines:

        Mozilla - HTML discouraged, interleaved posting.
        FreeBSD - No HTML, don't top post, line length at 75 characters.
        Fedora - No HTML, bottom-post.

    You get the idea; you've all seen etiquette statements before. So, assuming that the rules should be obeyed (usually a good idea), what can be done to allow me to still use a modern mail program and exchange mail with friends who use the same programs? We like to format our mail: bold headings, code snippets (sometimes syntax-highlighted, if the copy-paste pulls RTF text as from Xcode and Eclipse), free line breaks determined by your browser width, and the (very) occasional image make a message easier to read. Threaded conversations are a wonderful thing. Broadband connections are, I'm sure, the rule for most users of SU and of developer mailing lists, disk space is cheap, and so the overhead of HTML is laughable.

    However, I don't want to post a question to a mailing list and have the guru who can answer my question automatically delete it, or come off as uncaring. Until I hear otherwise, I'll continue to respect the rules as best I can. For a common example of the problem: Gmail, by default, sends HTML-formatted messages with bottom-posted quotes (which are folded in - just read the last message immediately above), and uses the frame width to wrap lines rather than a character count. ASCII can be selected, and quotes can be moved and reversed, but line wraps of quotes don't work, and line breaks are tedious to add (and more tedious to read if they're tiny compared to the width of the frame). Is there a forwarding, free mail program which can help with this exercise? Should an "RFC 1855 mode" lab be written? Or do I have to go to the command line for my mailing lists, and Gmail for my other mail?
