Search Results

Search found 8408 results on 337 pages for 'cgi bin'.

  • Why does Rails with Passenger/nginx only work in development mode? No logs available

    - by Michael W.
    Hey folks, I have a serious problem with one of our webservers. After an internal alpha test with a mongrel/haproxy cluster that worked well, we wanted to use nginx with Passenger for our first production server (customers will access this server). However, I can only run the Rails app in development mode with Passenger/nginx. The app itself runs perfectly with mongrel or WEBrick in production mode. My biggest problem with this case is that I don't find ANY information in the nginx or Rails logs (only when I use mongrel or WEBrick). Permissions are correct. passenger-status shows that the app is running, but I always get the static 500.html error page. It would be so nice if you guys could give me a hint and help me solve the problem. I put the config at the bottom of the post. This exact config works with rails_env development; but I'd like to use the production mode ;-) Thank you very much for your help!

    Version: Ubuntu 8.04.2 64bit / nginx-0.7.64 (compiled and installed via passenger-2.2.11)

        cat /opt/nginx/conf/nginx.conf
        user www-data;
        worker_processes 4;
        error_log logs/error.log;
        #pid logs/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            passenger_root /usr/lib/ruby/gems/1.8/gems/passenger-2.2.11;
            passenger_ruby /usr/bin/ruby1.8;
            passenger_log_level 3;
            include mime.types;
            default_type application/octet-stream;
            #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
            #                '$status $body_bytes_sent "$http_referer" '
            #                '"$http_user_agent" "$http_x_forwarded_for"';
            access_log logs/access.log;
            sendfile on;
            #tcp_nopush on;
            #keepalive_timeout 0;
            keepalive_timeout 65;
            #gzip on;
            server {
                listen 80;
                server_name <<servername>>;
                root /srv/app01/public;
                passenger_enabled on;
            }
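
    A hedged sketch, assuming the environment selection itself is the variable worth ruling out: Passenger 2.x honors an explicit rails_env directive in the server block, so pinning it (instead of relying on the default) at least removes one unknown while debugging. The directive name is from the Passenger nginx docs; the paths simply mirror the config above.

        server {
            listen 80;
            server_name <<servername>>;
            root /srv/app01/public;
            passenger_enabled on;
            rails_env production;   # explicit, rather than relying on the default
        }

    If production mode still 500s silently, the app-side log/tmp directory permissions for the user Passenger runs the app as are the other usual suspect.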

  • Read from a file in Eclipse

    - by Buzkie
    I'm trying to read from a text file to input data to my Java program. However, Eclipse continuously gives me a "Source not found" error no matter where I put the file. I've made an additional sources folder in the project directory; the file in question is in both it and the bin folder for the project, and it still can't find it. I even put a copy of it on my desktop and tried pointing Eclipse there when it asked me to browse for the source lookup path. No matter what I do it can't find the file. Here's my code in case it's pertinent:

        System.out.println(System.getProperty("user.dir"));
        File file = new File("file.txt");
        Scanner scanner = new Scanner(file);

    In addition, it says the user directory is the project directory, and there is a copy there too. I have no clue what to do. Thanks, Alex

    After attempting the suggestion below and refreshing again, I was greeted by a host of errors:

        FileNotFoundException(Throwable).<init>(String) line: 195
        FileNotFoundException(Exception).<init>(String) line: not available
        FileNotFoundException(IOException).<init>(String) line: not available
        FileNotFoundException.<init>(String) line: not available
        URLClassPath$JarLoader.getJarFile(URL) line: not available
        URLClassPath$JarLoader.access$600(URLClassPath$JarLoader, URL) line: not available
        URLClassPath$JarLoader$1.run() line: not available
        AccessController.doPrivileged(PrivilegedExceptionAction<T>) line: not available [native method]
        URLClassPath$JarLoader.ensureOpen() line: not available
        URLClassPath$JarLoader.<init>(URL, URLStreamHandler, HashMap) line: not available
        URLClassPath$3.run() line: not available
        AccessController.doPrivileged(PrivilegedExceptionAction<T>) line: not available [native method]
        URLClassPath.getLoader(URL) line: not available
        URLClassPath.getLoader(int) line: not available
        URLClassPath.access$000(URLClassPath, int) line: not available
        URLClassPath$2.next() line: not available
        URLClassPath$2.hasMoreElements() line: not available
        ClassLoader$2.hasMoreElements() line: not available
        CompoundEnumeration<E>.next() line: not available
        CompoundEnumeration<E>.hasMoreElements() line: not available
        ServiceLoader$LazyIterator.hasNext() line: not available
        ServiceLoader$1.hasNext() line: not available
        LocaleServiceProviderPool$1.run() line: not available
        AccessController.doPrivileged(PrivilegedExceptionAction<T>) line: not available [native method]
        LocaleServiceProviderPool.<init>(Class<LocaleServiceProvider>) line: not available
        LocaleServiceProviderPool.getPool(Class<LocaleServiceProvider>) line: not available
        NumberFormat.getInstance(Locale, int) line: not available
        NumberFormat.getNumberInstance(Locale) line: not available
        Scanner.useLocale(Locale) line: not available
        Scanner.<init>(Readable, Pattern) line: not available
        Scanner.<init>(ReadableByteChannel) line: not available
        Scanner.<init>(File) line: not available

    Code used:

        System.out.println(System.getProperty("user.dir"));
        File file = new File(System.getProperty("user.dir") + "/file.txt");
        Scanner scanner = new Scanner(file);
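
    One hedged observation: "Source not found" is the Eclipse debugger failing to show JRE source for the FileNotFoundException it stopped on, not Eclipse failing to see file.txt. A minimal sketch that checks existence first, so the debugger never dives into JRE internals (file name taken from the question; everything else is illustrative):

        import java.io.File;
        import java.util.Scanner;

        public class ReadCheck {
            public static void main(String[] args) throws Exception {
                // Resolve relative to the working directory Eclipse actually launches with
                File file = new File(System.getProperty("user.dir"), "file.txt");
                if (!file.exists()) {
                    System.err.println("Not found: " + file.getAbsolutePath());
                    return;
                }
                Scanner scanner = new Scanner(file);
                while (scanner.hasNextLine()) {
                    System.out.println(scanner.nextLine());
                }
                scanner.close();
            }
        }

    Printing getAbsolutePath() also shows exactly where Eclipse is looking, which is usually the project root rather than the sources or bin folder.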

  • DLL configuration file in an ASP.NET site

    - by Tominator
    Hi, I've made a .NET 2.0 library project that results in a DLL. I've made an app.config file in my project, with settings used in the DLL, with the intention that they can be changed later. I'm attempting to use the DLL in an ASP.NET web application now, so I made the reference to my other project's output, and I see that the DLL is copied over to the site's bin folder, and everything works. However, the configuration file is not copied. When I manually copy the app.config and rename it to myDll.config, it has no influence. The contents of the config file are approximately this:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <configSections>
            <sectionGroup name="applicationSettings" type="System.Configuration.ApplicationSettingsGroup, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" >
              <section name="myDLL.My.MySettings" type="System.Configuration.ClientSettingsSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
            </sectionGroup>
          </configSections>
          <applicationSettings>
            <myDLL.My.MySettings>
              <setting name="myDLL_webservice_Service" serializeAs="String">
                <value>https://myhost/Service.asmx</value>
              </setting>
              <setting name="ID" serializeAs="String">
                <value>6</value>
              </setting>
            </myDLL.My.MySettings>
          </applicationSettings>
        </configuration>

    And I use its settings in the DLL with this (VB.NET code):

        Private _id As Long = My.Settings.ID

    How can I put my config information somewhere so it can be used? In the web.config of the site application? That has only the appSettings section, and it uses the <add key="..." value="..."/> syntax. It doesn't appear to work though. In a custom file format that I create and use? Not that pretty...
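
    A hedged sketch of the usual fix: a class library never reads its own myDll.config at runtime; the host process's configuration file wins. Copying the configSections declaration and the applicationSettings group from the library's app.config into the site's web.config (section names exactly as in the question) lets My.Settings resolve without any code change:

        <configuration>
          <configSections>
            <sectionGroup name="applicationSettings" type="System.Configuration.ApplicationSettingsGroup, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
              <section name="myDLL.My.MySettings" type="System.Configuration.ClientSettingsSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
            </sectionGroup>
          </configSections>
          <applicationSettings>
            <myDLL.My.MySettings>
              <setting name="ID" serializeAs="String">
                <value>6</value>
              </setting>
            </myDLL.My.MySettings>
          </applicationSettings>
          <!-- the site's existing web.config content continues here -->
        </configuration>

    The plain appSettings key/value block is a different mechanism and won't feed My.Settings, which is why it appeared not to work.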

  • jQuery - Form input return confirm to delete

    - by bruno
    Hello guys. I struggled a lot before posting here :) Now, I want to replace my default JavaScript confirmation for deleting a file. I saw a lot of examples here, but no example with form input. I have this form:

        <form action="delete.php" method="post">
          <input type="hidden" name="id" value="<{$pid}>" />
          <input type="hidden" name="picture" value="<{$lang_del_pic}>" />
          <input type="image" src="<{xoImgUrl}>img/del-icon.gif" width="16" height="16" align="bottom" border="0" alt="Delete media" name="pictured" value="<{$lang_del_pic}>" onclick="javascript: return confirm('<{$lang_confirm_del}>');" />
        </form>

    Now, I did everything. I have this div:

        <div id="dialog-confirm" title="Empty the recycle bin?">
          <p><span class="ui-icon ui-icon-alert" style="float:left; margin:0 7px 20px 0;"></span>These items will be permanently deleted and cannot be recovered. Are you sure?</p>
        </div>

    This JavaScript:

        <script type="text/javascript">
        $(document).ready(function() {
            $("#dialog").dialog({
                autoOpen: false,
                modal: true
            });
        });
        $(".confirmLink").click(function(e) {
            e.preventDefault();
            var targetUrl = $(this).attr("href");
            $("#dialog").dialog({
                buttons: {
                    "Confirm": function() {
                        window.location.href = targetUrl;
                    },
                    "Cancel": function() {
                        $(this).dialog("close");
                    }
                }
            });
            $("#dialog").dialog("open");
        });
        </script>

    And this new form:

        <form name="dialog-confirm" id="dialog-confirm" method="post">
          <input type="hidden" name="id" value="<{$pid}>" />
          <input type="hidden" name="picture" value="<{$lang_del_pic}>" />
          <input type="image" src="<{xoImgUrl}>img/del-icon.gif" width="16" height="16" align="bottom" border="0" alt="Delete media" name="pictured" value="" id="opener" />
        </form>

    On press, I successfully call the jQuery modal dialog, and everything works, but somehow, when I press 'delete all' the script tells me: "the script is called without the necessary parameters". I guess I am failing to send the pic ID to be deleted with the jQuery, but I don't know how to fix it. Any ideas?
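
    A hedged sketch of one way to keep the POST parameters: redirecting with window.location.href drops the hidden id and picture fields, so the Confirm button can instead submit the original form. (Also note the form and the dialog div both use id="dialog-confirm"; duplicate ids will confuse selectors, so the form is selected by name below. The delete.php action is assumed from the first form.)

        $("#opener").click(function(e) {
            e.preventDefault();   // keep the image button from submitting immediately
            $("#dialog").dialog({
                modal: true,
                buttons: {
                    "Confirm": function() {
                        $(this).dialog("close");
                        // posts the hidden id + picture fields to delete.php
                        $("form[name='dialog-confirm']").attr("action", "delete.php")[0].submit();
                    },
                    "Cancel": function() {
                        $(this).dialog("close");
                    }
                }
            });
            $("#dialog").dialog("open");
        });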

  • Add class to elements which already have a class

    - by bwstud
    I have a group of divs which I'm dynamically generating when a button is clicked, with the class "brick". This gives them dimension and a starting position of top: 0. I'm trying to get them to animate to the bottom of the view using a CSS transition with a second class assignment which gives them a position of bottom: 0. I can't figure out the syntax for adding a second class to elements with a pre-existing class; on inspection they only show the original class, "brick". A working variant is sketched after my code below.

    HTML:

        <!DOCTYPE html>
        <html>
        <head>
          <script src="http://code.jquery.com/jquery-2.1.0.min.js"></script>
          <meta charset="utf-8">
          <title>JS Bin</title>
        </head>
        <body>
          <div id="container">
            <div id="button">Click Me</div>
          </div>
        </body>
        </html>

    CSS:

        #container {
          width: 100%;
          height: 100vh;
          padding: 10vmax;
        }
        #button {
          position: fixed;
        }
        .brick {
          position: relative;
          top: 0;
          height: 10vmax;
          width: 20vmax;
          background: white;
          margin: 0;
          padding: 0;
          transition: all 1s;
        }
        .drop {
          transition: all 1s;
          bottom 0;
        }

    The offending JS:

        var brickCount = function() {
          var count = prompt("How many boxes you lookin' for?");
          for (var i = 0; i < count; i++) {
            var newBrick = document.createElement("div");
            newBrick.className = "brick";
            document.querySelector("#container").appendChild(newBrick);
          }
        };
        var getBricks = function() {
          document.getElementByClass("brick");
        };
        var changeColor = function() {
          getBricks.style.backgroundColor = '#' + Math.floor(Math.random() * 16777215).toString(16);
        };
        var addDrop = function() {
          getBricks.brick = "getBricks.brick" + " drop";
        };
        var multiple = function() {
          brickCount();
          getBricks();
          changeColor();
          addDrop();
        };
        document.getElementById("button").onclick = function() { multiple(); };

    Thanks!
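
    A hedged sketch of the fix: the DOM method is document.getElementsByClassName (plural; getElementByClass doesn't exist), it returns a collection rather than one node and must be returned from the helper, and a second class is added per element via classList.add. Function names mirror the question's:

        var getBricks = function() {
          return document.getElementsByClassName("brick");   // live HTMLCollection
        };
        var addDrop = function() {
          var bricks = getBricks();
          for (var i = 0; i < bricks.length; i++) {
            bricks[i].classList.add("drop");   // element now has class="brick drop"
          }
        };

    The .drop rule also needs a colon ("bottom: 0;"), and bottom only moves the bricks if they are positioned relative to something with height.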

  • Can I have a workspace that is both a git workspace and an svn workspace?

    - by Troy
    I have now checked out a local working copy of a codebase that lives in an svn repo. It's a big Java project that I use Eclipse to develop in. Eclipse of course builds everything on the fly, in its own way, with all the binaries ending up in [project root]/bin. That's perfectly fine with me for development, but when the build runs on the build server it looks quite a lot different (maven build, binaries end up in a different directory structure, etc.). Sometimes I need to recreate the build server environment on my local development system to debug the build or what have you, so I usually end up downloading an entirely new working copy into a new workspace and running the build from there (prevents cluttering my development workspace with all the build artifacts and dirtying up the working copy). Of course, sometimes I'm interested in running the full build on code that I don't want to check in yet, so I will manually copy the "development" workspace over the "build" workspace. Besides taking a lot of extra time copying a lot of files that I don't actually need (just overlaying the new over the old), this also screws up my svn metadata, meaning that I can't check in changes from that "build workspace" working copy, and I often end up having to re-download the code to get it back into a known state.

    So I'm thinking I make my svn working copy a local git repo, then "check out" the in-development code from the svn working copy/git master into the local build workspace. Then I can build, revert my changes, and have all the advantages of a version-controlled working copy in the build workspace. Then if I need to make changes to the build, push those back into the git master (which is also an svn working copy), then check them into the main svn repo.

        |-------------|          |---------------------|          |--------------------|
        |main svn repo| <------- |svn working copy     | <------- | non-svn-versioned  |
        |-------------|          | (svn dev workspace/ |          | build workspace    |
                                 | git master)         |          | (git working copy) |
                                 |---------------------|          |--------------------|

    Just switching everything to git would obviously be better, but: big company, too many people using svn, too costly to change everything, etc. We're stuck with svn as the main repo for now. BTW, I know there is a maven plugin for Eclipse and everything; I'm mainly interested to know if there is a way to maintain a workspace that is both a git working copy and an svn working copy. Actually, any distributed version control system would probably work (hg possibly?). Advice? How does everybody else handle this situation of having to manage both a "development" build process and a "production" build process?
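
    A hedged sketch of the git-svn route, one common answer to exactly this shape of problem (the URL is a placeholder; requires the git-svn package; note the resulting tree is managed entirely through git-svn rather than carrying .svn metadata, so it is not a plain svn checkout in the strict sense):

        # clone the svn trunk as a git repo; this tree plays the "dev workspace / git master" role
        git svn clone https://svn.example.com/repo/trunk dev-workspace
        cd dev-workspace

        # pull fresh svn history into git
        git svn rebase

        # push local git commits back up to the central svn repo
        git svn dcommit

    The separate build workspace can then be an ordinary `git clone dev-workspace build-workspace`, which gives it full version control with nothing svn-related to corrupt.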

  • What do I need to distribute (keys, certs) for Python w/ SSL-socket connection?

    - by fandingo
    I'm trying to write a generic server-client application that will be able to exchange data amongst servers. I've read over quite a few OpenSSL documents, and I have successfully set up my own CA and created a cert (and private key) for testing purposes. I'm stuck with Python 2.3, so I can't use the standard "ssl" library. Instead, I'm stuck with PyOpenSSL, which doesn't seem bad, but there aren't many documents out there about it. My question isn't really about getting it working. I'm more confused about the certificates and where they need to go. Here are my two programs that do work:

    Server:

        #!/bin/env python
        from OpenSSL import SSL
        import socket
        import pickle

        def verify_cb(conn, cert, errnum, depth, ok):
            print('Got cert: %s' % cert.get_subject())
            return ok

        ctx = SSL.Context(SSL.TLSv1_METHOD)
        ctx.set_verify(SSL.VERIFY_PEER|SSL.VERIFY_FAIL_IF_NO_PEER_CERT, verify_cb) # ??????
        ctx.use_privatekey_file('./Dmgr-key.pem')
        ctx.use_certificate_file('Dmgr-cert.pem')
        # ??????
        ctx.load_verify_locations('./CAcert.pem')

        server = SSL.Connection(ctx, socket.socket(socket.AF_INET, socket.SOCK_STREAM))
        server.bind(('', 50000))
        server.listen(3)
        a, b = server.accept()
        c = a.recv(1024)
        print(c)

    Client:

        from OpenSSL import SSL
        import socket
        import pickle

        def verify_cb(conn, cert, errnum, depth, ok):
            print('Got cert: %s' % cert.get_subject())
            return ok

        ctx = SSL.Context(SSL.TLSv1_METHOD)
        ctx.set_verify(SSL.VERIFY_PEER, verify_cb) # ??????????
        ctx.use_privatekey_file('/home/justin/code/work/CA/private/Dmgr-key.pem')
        ctx.use_certificate_file('/home/justin/code/work/CA/Dmgr-cert.pem')
        # ?????????
        ctx.load_verify_locations('/home/justin/code/work/CA/CAcert.pem')

        sock = SSL.Connection(ctx, socket.socket(socket.AF_INET, socket.SOCK_STREAM))
        sock.connect(('10.0.0.3', 50000))
        a = Tester(2, 2)
        b = pickle.dumps(a)
        sock.send("Hello, world")
        sock.flush()
        sock.send(b)
        sock.shutdown()
        sock.close()

    I found this information from ftp://ftp.pbone.net/mirror/ftp.pld-linux.org/dists/2.0/PLD/i586/PLD/RPMS/python-pyOpenSSL-examples-0.6-2.i586.rpm which contains some example scripts. As you might gather, I don't fully understand the sections between the "# ????????" markers. I don't get why the certificate and private key are needed on both the client and server. I'm not sure where each should go, but shouldn't I only need to distribute one part of the key (probably the public part)? It undermines the purpose of having asymmetric keys if you still need both on each server, right? I tried alternately removing either the pkey or cert on either box, and I get the following error no matter which I remove:

        OpenSSL.SSL.Error: [('SSL routines', 'SSL3_READ_BYTES', 'sslv3 alert handshake failure'), ('SSL routines', 'SSL3_WRITE_BYTES', 'ssl handshake failure')]

    Could someone explain if this is the expected behavior for SSL? Do I really need to distribute the private key and public cert to all my clients? I'm trying to avoid any huge security problems, and leaking private keys would tend to be a big one... Thanks for the help!
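
    A hedged sketch of the usual split: the server always needs its own key + cert; clients need only the CA certificate to verify the server. A client-side key/cert pair is only required here because the server context sets VERIFY_FAIL_IF_NO_PEER_CERT, which demands mutual TLS. Dropping that flag on the server lets a client run with nothing but CAcert.pem (same PyOpenSSL API as above; file names from the question):

        # client context with no private key distributed at all
        from OpenSSL import SSL

        ctx = SSL.Context(SSL.TLSv1_METHOD)
        ctx.set_verify(SSL.VERIFY_PEER, verify_cb)   # verify the server's cert...
        ctx.load_verify_locations('./CAcert.pem')    # ...against the CA only
        # note: no use_privatekey_file / use_certificate_file on the client side

    If mutual authentication is actually wanted, each client gets its own key pair and its own CA-signed cert; the server's private key is never copied to clients.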

  • How can I pipe input to a Java app with Perl?

    - by user319479
    I need to write a Perl script that pipes input into a Java program. This is related to this, but that didn't help me. My issue is that the Java app doesn't get the print statements until I close the handle. What I found online was that $| needs to be set to something greater than 0, in which case newline characters will flush the buffer. This still doesn't work. This is the script:

        #! /usr/bin/perl -w
        use strict;
        use File::Basename;

        $| = 1;
        open(TP, "| java -jar test.jar") or die "fail";
        sleep(2);
        print TP "this is test 1\n";
        print TP "this is test 2\n";
        print "tests printed, waiting 5s\n";
        sleep(5);
        print "wait over. closing handle...\n";
        close TP;
        print "closed.\n";
        print "sleeping for 5s...\n";
        sleep(5);
        print "script finished!\n";
        exit

    And here is a sample Java app:

        import java.util.Scanner;

        public class test {
            public static void main(String[] args) {
                Scanner sc = new Scanner(System.in);
                int crashcount = 0;
                while (true) {
                    try {
                        String input = sc.nextLine();
                        System.out.println(":: INPUT: " + input);
                        if ("bananas".equals(input)) {
                            break;
                        }
                    } catch (Exception e) {
                        System.out.println(":: EXCEPTION: " + e.toString());
                        crashcount++;
                        if (crashcount == 5) {
                            System.out.println(":: Looks like stdin is broke");
                            break;
                        }
                    }
                }
                System.out.println(":: IT'S OVER!");
                return;
            }
        }

    The Java app should respond to receiving the test prints immediately, but it doesn't until the close statement in the Perl script. What am I doing wrong? Note: the fix can only be in the Perl script; the Java app can't be changed. Also, File::Basename is there because I'm using it in the real script.
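
    A hedged sketch of the usual culprit: $| = 1 only affects the currently selected output handle (STDOUT here), not the pipe handle. Flushing the pipe itself, for instance via IO::Handle's autoflush on a lexical handle, pushes each print through immediately (this swaps the bareword TP for a lexical $tp; command line from the question):

        use IO::Handle;   # gives filehandles an autoflush() method

        open(my $tp, '|-', 'java -jar test.jar') or die "fail";
        $tp->autoflush(1);   # equivalent to: select($tp); $| = 1; select(STDOUT);

        print $tp "this is test 1\n";   # now delivered without closing the handle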

  • Is there some way to make variables like $a and $b in regard to strict?

    - by Axeman
    In light of Michael Carman's comment, I have decided to rewrite the question. Note that 11 comments appear before this edit, and give credence to Michael's observation that I did not write the question in a way that made it clear what I was asking.

    Question: What is the standard, or cleanest, way to fake the special status that $a and $b have in regard to strict by simply importing a module?

    First of all, some setup. The following works:

        #!/bin/perl
        use strict;
        print "\$a=$a\n";
        print "\$b=$b\n";

    If I add one more line:

        print "\$c=$c\n";

    I get an error at compile time, which means that none of my dazzling print code gets to run. If I comment out use strict; it runs fine. Outside of strictures, $a and $b are mainly special in that sort passes the two values to be compared with those names:

        my @reverse_order = sort { $b <=> $a } @unsorted;

    Thus the main functional difference about $a and $b, even though Perl "knows their names", is that you'd better know this when you sort, or use some of the functions in List::Util. It's only when you use strict that $a and $b become special variables in a whole new way: they are the only variables that strict will pass over without complaining that they are not declared.

    Now, I like strict, but it strikes me that if TIMTOWTDI (There Is More Than One Way To Do It) is Rule #1 in Perl, this is not very TIMTOWTDI. It says that $a and $b are special and that's it. If you want to use variables you don't have to declare, $a and $b are your guys. If you want to have three variables by adding $c, suddenly there's a whole other way to do it. Never mind that in manipulating hashes $k and $v might make more sense:

        my %starts_upper_1_to_25
            = skim { $k =~ m/^\p{IsUpper}/ && ( 1 <= $v && $v <= 25 ) } %my_hash;

    Now, I use and I like strict. But I just want $k and $v to be visible to skim for the most compact syntax, and I'd like them to be visible simply by

        use Hash::Helper qw<skim>;

    I'm not asking this question to know how to black-magic it. My "answer" below should let you know that I know enough Perl to be dangerous. I'm asking if there is a way to make strict accept other variables, or what the cleanest solution is. The answer could well be no. If that's the case, it simply does not seem very TIMTOWTDI.
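
    A hedged sketch of the conventional mechanism: strict 'vars' accepts variables that were imported via glob assignment, so a module's import() can alias $k and $v into the caller's package, which is effectively how exported variables become strict-safe. Hash::Helper and skim are the question's own hypothetical names; the skim body is a minimal illustration, not a polished implementation:

        package Hash::Helper;
        use strict;
        use warnings;

        our ($k, $v);   # the module's package variables

        sub import {
            my $caller = caller;
            no strict 'refs';
            # alias $k/$v (and export skim) into the calling package;
            # glob assignment marks them as imported, so strict is satisfied
            *{"${caller}::k"}    = \$k;
            *{"${caller}::v"}    = \$v;
            *{"${caller}::skim"} = \&skim;
        }

        sub skim (&@) {
            my $code = shift;
            my %in   = @_;
            my %out;
            while ( my ($key, $val) = each %in ) {
                local ($k, $v) = ($key, $val);   # visible to the block, like sort's $a/$b
                $out{$key} = $val if $code->();
            }
            return %out;
        }

        1;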

  • Binding on a port with netpipes/netcat

    - by mindas
    I am trying to write a simple bash script that listens on a port and responds with a trivial HTTP response. My specific issue is that I am not sure if the port is available, and in case of bind failure I fall back to the next port until bind succeeds. So far the easiest way for me to achieve this was something like:

        for (( i=$PORT_BASE; i < $(($PORT_BASE+$PORT_RANGE)); i++ ))
        do
            if [ $DEBUG -eq 1 ] ; then
                echo trying to bind on $i
            fi
            /usr/bin/faucet $i --out --daemon echo test 2>/dev/null
            if [ $? -eq 0 ] ; then #success?
                port=$i
                if [ $DEBUG -eq 1 ] ; then
                    echo "bound on port $port"
                fi
                break
            fi
        done

    Here I am using faucet from the netpipes Ubuntu package. The problem with this is that if I simply print "test" to the output, curl complains about a non-standard HTTP response (error code 18). That's fair enough, as I don't print an HTTP-compatible response. If I replace echo test with echo -ne "HTTP/1.0 200 OK\r\n\r\ntest", curl still complains:

        user@server:$ faucet 10020 --out --daemon echo -ne "HTTP/1.0 200 OK\r\n\r\ntest"
        ...
        user@client:$ curl ip.of.the.server:10020
        curl: (56) Failure when receiving data from the peer

    I think the problem lies in how faucet is printing the response and handling the connection. For example, if I do the server side in netcat, curl works fine:

        user@server:$ echo -ne "HTTP/1.0 200 OK\r\n\r\ntest\r\n" | nc -l 10020
        ...
        user@client:$ curl ip.of.the.server:10020
        test
        user@client:$

    I would be more than happy to replace faucet with netcat in my main script, but the problem is that I want to spawn an independent server process so I can run the client from the same base shell. faucet has a very handy --daemon parameter, as it forks to the background and I can use $? (exit status code) to check if bind succeeded. If I were to use netcat for a similar purpose, I would have to fork it using & and $? would not work. Does anybody know why faucet isn't responding correctly in this particular case, and/or can suggest a solution to this problem? I am married to neither faucet nor netcat, but would like the solution to be implemented using bash or its utilities (as opposed to writing something in yet another scripting language, such as Perl or Python).
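
    A hedged bash sketch of the netcat route: probe the port first, then background nc with the canned response. nc -z only reports whether something is already listening, so there is a small race window between probe and bind, but it preserves the "check status, then daemonize" shape of the faucet version:

        for (( i=$PORT_BASE; i < $(($PORT_BASE+$PORT_RANGE)); i++ ))
        do
            if ! nc -z localhost "$i" 2>/dev/null ; then
                # nothing is listening, so the port is most likely free
                printf 'HTTP/1.0 200 OK\r\n\r\ntest\r\n' | nc -l "$i" &
                port=$i
                break
            fi
        done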

  • How to invoke an activity of a library project from an Android app

    - by Austin
    I have open source Android code that I need to use in my Android app. It has all the source code as well as resource files, manifest file and classpath; it can be compiled as a separate Android app. I have constraints for using the open source:

    1. I can't change a single line of code.
    2. I can't use it as a separate app.

    These constraints are non-negotiable. What I have done is compile the open source as a class library (in Eclipse: Project Properties - Android - tick the "Is Library" checkbox). This has resulted in the generation of .class files (in bin) for the java files and resource files. This open source has an Android activity that I want to open from my application, so I have linked the directory of these class files in the source section of my java build path (in .classpath). I have declared the activity in my manifest file with proper action intent filters. Now when I try to call the activity from my code, it's not working; cleaning and rebuilding doesn't help. However, if I build the open source project and my app in the same Eclipse workspace and link the open source in my app in the exact same manner, it works fine. I am not able to identify the difference. All settings seem to be the same (all files are identical in both cases), but only in the second case does it work. I have tried it as a jar file also: I built the open source as a project library and exported it into a jar file (excluding the manifest file). But in that case I am getting the following error:

        UNEXPECTED TOP-LEVEL EXCEPTION:
        java.lang.IllegalArgumentException: already added: ....
        Conversion to Dalvik format failed with error 1

    This, I guess, is coming because the Android library (2.2) has been included twice in my app (once for building my app and once for building the open source). I don't know how to avoid this; cleaning the project doesn't help. What I require is to use the open source and invoke its activities in my app without violating the constraints. If I can use the open source as a bunch of .class files, then great; or else any other way will do fine. Please look into it and let me know. Thanks
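
    A hedged sketch of the library-project route: once the open source project is marked "Is Library" and added as a proper Android library reference (Project Properties - Android - Add..., rather than a raw .classpath link), its activity is declared in the host app's manifest under its fully qualified name and started explicitly. Package and class names here are placeholders:

        <!-- in the host app's AndroidManifest.xml -->
        <activity android:name="com.example.opensource.SomeActivity" />

    and from the host app's code:

        // start the library project's activity directly
        Intent intent = new Intent(this, com.example.opensource.SomeActivity.class);
        startActivity(intent);

    Referencing it as a library project (instead of exporting a jar that bundles android.jar classes) also avoids the "already added" duplication during the Dalvik conversion.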

  • Get name mangling when I try to use exceptions [CodeBlocks, C++]

    - by Beetroot
    I am trying to use exceptions for the first time, but even though it is quite a simple example I just cannot get it to compile. I have looked at several examples and tried coding it in many, many different ways, but I am still not even sure exactly where the problem is, because I get name mangling when I introduce the catch/try/throw. Anyway, here is my code; hopefully it is something really stupid :)

        #include "Surface.h"
        #include "SDL_Image.h"

        using namespace std;

        SDL_Surface* surface::Load(string fileName){
            SDL_Surface* loadedSurface = IMG_Load(fileName.c_str());
            if(loadedSurface == 0)
                throw 0;
            //Convert surface to same format as display
            loadedSurface = SDL_DisplayFormatAlpha(loadedSurface);
            return loadedSurface;
        }

        #include "GameState.h"
        #include "Surface.h"
        #include <iostream>

        using namespace std;

        GameState::GameState(string fileName){
            try{
                stateWallpaper_ = surface::Load(fileName);
            }
            catch(int& e){
                cerr << "Could not load " << fileName << endl;
            }
        }

    Thanks in advance for any help!

    EDIT: Sorry, I forgot to post the error message. It is:

        In function `ZN14GameStateIntroC1Ev':|
        -undefined reference to `__gxx_personality_sj0'|
        -undefined reference to `_Unwind_SjLj_Register'|
        -undefined reference to `_Unwind_SjLj_Unregister'|
        In function `ZN14GameStateIntroC1Ev':|
        undefined reference to `_Unwind_SjLj_Resume'|
        In function `ZN14GameStateIntroC2Ev':|
        -undefined reference to `__gxx_personality_sj0'|
        -undefined reference to `_Unwind_SjLj_Register'|
        -undefined reference to `_Unwind_SjLj_Unregister'|
        obj\Release\GameStateIntro.o||In function `ZN14GameStateIntroC2Ev':|
        C:\Program Files (x86)\CodeBlocks\MinGW\bin\..\lib\gcc\mingw32\3.4.5\..\..\..\..\include\c++\3.4.5\ext\new_allocator.h|69|undefined reference to `_Unwind_SjLj_Resume'|
        C:\MinGW\lib\libSDLmain.a(SDL_win32_main.o)||In function `redirect_output':|
        \Users\slouken\release\SDL\SDL-1.2.15\.\src\main\win32\SDL_win32_main.c|219|undefined reference to `SDL_strlcpy'|
        \Users\slouken\release\SDL\SDL-1.2.15\.\src\main\win32\SDL_win32_main.c|220|undefined reference to `SDL_strlcat'|
        \Users\slouken\release\SDL\SDL-1.2.15\.\src\main\win32\SDL_win32_main.c|243|undefined reference to `SDL_strlcpy'|
        \Users\slouken\release\SDL\SDL-1.2.15\.\src\main\win32\SDL_win32_main.c|244|undefined reference to `SDL_strlcat'|
        C:\MinGW\lib\libSDLmain.a(SDL_win32_main.o)||In function `console_main':|
        \Users\slouken\release\SDL\SDL-1.2.15\.\src\main\win32\SDL_win32_main.c|296|undefined reference to `SDL_strlcpy'|
        \Users\slouken\release\SDL\SDL-1.2.15\.\src\main\win32\SDL_win32_main.c|301|undefined reference to `SDL_GetError'|
        \Users\slouken\release\SDL\SDL-1.2.15\.\src\main\win32\SDL_win32_main.c|312|undefined reference to `SDL_SetModuleHandle'|
        C:\MinGW\lib\libSDLmain.a(SDL_win32_main.o)||In function `WinMain@16':|
        \Users\slouken\release\SDL\SDL-1.2.15\.\src\main\win32\SDL_win32_main.c|354|undefined reference to `SDL_getenv'|
        \Users\slouken\release\SDL\SDL-1.2.15\.\src\main\win32\SDL_win32_main.c|386|undefined reference to `SDL_strlcpy'|
        C:\MinGW\lib\libSDLmain.a(SDL_win32_main.o)||In function `cleanup':|
        \Users\slouken\release\SDL\SDL-1.2.15\.\src\main\win32\SDL_win32_main.c|158|undefined reference to `SDL_Quit'|
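
    A hedged reading of those errors: the undefined __gxx_personality_sj0 / _Unwind_SjLj_* symbols are the SJLJ exception-handling runtime (so the code compiled; the link is failing), and the paths show two different MinGW installs in play (the Code::Blocks-bundled gcc 3.4.5 and C:\MinGW). Object files built by one toolchain appear to be linked against the other's runtime/libraries. A sketch of the shape of the fix, assuming the file names above, is to make a single g++ do both compile and link:

        g++ -c Surface.cpp GameState.cpp
        g++ Surface.o GameState.o -o game -lmingw32 -lSDLmain -lSDL -lSDL_image

    In Code::Blocks terms: point Settings - Compiler at one MinGW installation for both compiling and linking, and make sure C++ targets link with g++ (not gcc) so the exception runtime is pulled in automatically.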

  • Could not load file or assembly ... or one of its dependencies. An attempt was made to load a program with an incorrect format

    - by Dan
    I am getting the following error message when compiling or attempting to run my application on Windows 7 64-bit, using VS 2010. I've scoured the internet and many people have the same error message, but none of the solutions address my problem or situation.

        Error 38 Could not load file or assembly 'file:///D:/Projects/Windows Projects/Weld/Components/FileAttachments/FileAttachments/FileAttachments/bin/x86/Debug/FileAttaching.dll' or one of its dependencies. An attempt was made to load a program with an incorrect format. Line 1212, position 5. D:\Projects\Windows Projects\Weld\Weld.UI\frmMain.resx 1212 5 Weld.UI

    I have two projects: a UI project and a FileAttachment project. The UI project has a reference to the FileAttachment project. When I compile the UI project in "Any CPU" mode everything works fine and it runs; I assume 'Any CPU' runs in 64-bit mode when I compile, as that is the platform I am using. I want to run/compile as x86, so I change the configuration for all projects to x86 and verify that these configurations are compiling to x86. I compile and get the error stated above. I find it odd that it compiles and works fine in 64-bit but not 32-bit. However, when compiled and deployed to users as 'Any CPU', if those users have x86 it still works for them, no problem. I just can't compile or run as x86 on my PC. Again, I can compile as Any CPU and deploy to a 32-bit PC, no problem. Neither project references any 64-bit-only DLLs. Both projects are verified to target 32-bit DLLs and .NET Framework assemblies. I need to compile and run this locally in 32-bit mode: I need JIT edit/continue, among other things. Here is the line of code in the resx file that is causing the problem:

        </data>
        <data name="Appearance17.Image" type="System.Drawing.Bitmap, System.Drawing" mimetype="application/x-microsoft.net.object.bytearray.base64">
          <value>

    The resx file is verified to be generated for .NET 2.0 and is only referencing .NET 2.0 assemblies, not .NET 4.0 versions. Any ideas here? I've searched the net and have found hundreds of people with the same error message but a different problem.
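
    A hedged note: "An attempt was made to load a program with an incorrect format" is the BadImageFormatException text, which in practice means a bitness mismatch somewhere in the load chain, often the 32-bit resgen/designer step trying to load a 64-bit copy of a referenced DLL while compiling the .resx. One check worth running against the DLL the error names, using corflags from the Windows SDK:

        corflags "D:\Projects\Windows Projects\Weld\Components\FileAttachments\FileAttachments\FileAttachments\bin\x86\Debug\FileAttaching.dll"

    Its PE and 32BIT lines show whether the assembly is pure MSIL or hard-wired to one bitness; comparing that against what the x86 build of Weld.UI actually picks up from its reference paths is usually where the mismatch surfaces.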

  • Not seeing Sync Block in Object Layout

    - by bob-bedell
    It's my understanding that all .NET object instances begin with an 8-byte 'object header': a sync block (4-byte pointer into a SyncTableEntry table), and a type handle (4-byte pointer into the type's method table). I'm not seeing this in VS 2010 RC's (CLR 4.0) debugger memory windows. Here's a simple class that will generate a 16-byte instance, less the object header:

        class Program
        {
            short myInt = 2;               // 4 bytes
            long myLong = 3;               // 8 bytes
            string myString = "aString";   // 4 byte object reference
                                           // 16 byte instance
            static void Main(string[] args)
            {
                new Program();
                return;
            }
        }

    An SOS object dump tells me that the total object size is 24 bytes. That makes sense: my 16-byte instance plus an 8-byte object header.

        !DumpObj 0205b660
        Name: Offset_Test.Program
        MethodTable: 000d383c
        EEClass: 000d13f8
        Size: 24(0x18) bytes
        File: C:\Users\Bob\Desktop\Offset_Test\Offset_Test\bin\Debug\Offset_Test.exe
        Fields:
              MT    Field   Offset            Type VT     Attr    Value Name
        632020fc  4000001       10    System.Int16  1 instance        2 myInt
        632050d8  4000002        4    System.Int64  1 instance        3 myLong
        631fd2b8  4000003        c   System.String  0 instance 0205b678 myString

    Here's the raw memory:

        0x0205B660  000d383c 00000003 00000000 0205b678 00000002 ...

    And here are some annotations:

        offset 0    000d383c            ;TypeHandle (pointer to MethodTable), 4 bytes
        offset 4    00000003 00000000   ;myLong, 8 bytes
        offset 12   0205b678            ;myString, 4 byte reference to address of "myString" on GC Heap
        offset 16   00000002            ;myInt, 4 bytes

    My object begins at address 0x0205B660, but I can only account for 20 bytes of it: the type handle and the instance fields. There is no sign of a sync block pointer. The object size is reported as 24 bytes, but the debugger is showing that it only occupies 20 bytes of memory. I'm reading "Drill Into .NET Framework Internals to See How the CLR Creates Runtime Objects", and expected the first 4 bytes of my object to be a zeroed sync block pointer, as shown in Figure 8 of that article. Granted, this is an article about CLR 1.1. I'm just wondering if the difference between what I'm seeing and what this early article reports is a change in either the debugger's display of object layout, or in the way the CLR lays out objects in versions later than 1.1. Anyway, can anyone account for my 4 missing bytes?
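
    A hedged note on where those 4 bytes usually live: the sync block word sits at a negative offset, the 4 bytes immediately before the address an object reference points to, and the reference itself points at the MethodTable pointer. For this instance that would put it at 0x0205B65C, one word below the reported start. In WinDbg/SOS terms, something like:

        dd 0205b65c L1   ; the sync block word (expected 00000000 for this object)
        dd 0205b660 L5   ; MethodTable pointer + fields, as shown above

    That convention is consistent with !DumpObj reporting 24 bytes while a memory window opened at the object address only shows 20.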

  • How can I enable a debugging mode via a command-line switch for my Perl program?

    - by Michael Mao
    I am learning Perl in a "head-first" manner. I am absolutely a newbie in this language. I am trying to have a debug_mode switch from the CLI which can be used to control how my script works, by switching certain subroutines "on and off". Below is what I've got so far:

        #!/usr/bin/perl -s -w
        # purpose : make subroutine execution optional,
        # which is depending on a CLI switch flag
        use strict;
        use warnings;

        use constant DEBUG_VERBOSE              => "v";
        use constant DEBUG_SUPPRESS_ERROR_MSGS  => "s";
        use constant DEBUG_IGNORE_VALIDATION    => "i";
        use constant DEBUG_SETPPING_COMPUTATION => "c";

        our ($debug_mode);

        mainMethod();

        sub mainMethod # ()
        {
            if(!$debug_mode)
            {
                print "debug_mode is OFF\n";
            }
            elsif($debug_mode)
            {
                print "debug_mode is ON\n";
            }
            else
            {
                print "OMG!\n";
                exit -1;
            }
            checkArgv();
            printErrorMsg("Error_Code_123", "Parsing Error at...");
            verbose();
        }

        sub checkArgv #()
        {
            print ("Number of ARGV : ".(1 + $#ARGV)."\n");
        }

        sub printErrorMsg # ($error_code, $error_msg, ..)
        {
            if(defined($debug_mode) && !($debug_mode =~ DEBUG_SUPPRESS_ERROR_MSGS))
            {
                print "You can only see me if -debug_mode is NOT set".
                      " to DEBUG_SUPPRESS_ERROR_MSGS\n";
                die("terminated prematurely...\n") and exit -1;
            }
        }

        sub verbose # ()
        {
            if(defined($debug_mode) && ($debug_mode =~ DEBUG_VERBOSE))
            {
                print "Blah blah blah...\n";
            }
        }

    So far as I can tell, at least it works: the -debug_mode switch doesn't interfere with normal ARGV, and the following command lines work:

        ./optional.pl
        ./optional.pl -debug_mode
        ./optional.pl -debug_mode=v
        ./optional.pl -debug_mode=s

    However, I am puzzled when multiple debug_modes are "mixed", such as:

        ./optional.pl -debug_mode=sv
        ./optional.pl -debug_mode=vs

    I don't understand why the above lines "magically work". I see both the "DEBUG_VERBOSE" and "DEBUG_SUPPRESS_ERROR_MSGS" modes apply to the script, which is fine in this case. However, if there are some "conflicting" debug modes, I am not sure how to set the "precedence of debug_modes". Also, I am not certain if my approach is good enough for Perlists, and I hope I am getting my feet in the right direction. One big problem is that I now put if statements inside most of my subroutines just to control their behavior under different modes. Is this okay? Is there a more elegant way? I know there must be a debug module on CPAN or elsewhere, but I want a truly minimal solution that doesn't depend on any module beyond the "default", and I cannot have any control over the environment where this script will be executed...
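
    A hedged note on the "magic": $debug_mode =~ DEBUG_VERBOSE uses the constant's value as a regex, so -debug_mode=sv matches both /v/ and /s/ as substrings; order carries no meaning, hence there is no built-in precedence. A minimal sketch that keeps the no-extra-modules constraint (Getopt::Long is a core module shipped with Perl, not a CPAN install) and makes mode lookup explicit:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Getopt::Long;   # core module

        GetOptions('debug_mode:s' => \ my $debug_mode);

        # one hash lookup per mode letter instead of regex matches
        my %mode = map { $_ => 1 } split //, (defined $debug_mode ? $debug_mode : '');

        print "verbose on\n"         if $mode{v};
        print "suppressing errors\n" if $mode{s};

    Precedence between conflicting modes then becomes an explicit if/elsif over %mode rather than a property of pattern matching.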

  • Solution Output Directory

    - by L.E.O
    The project that I'm currently working on is being developed by multiple teams, where each team is responsible for a different part of the project. They have all set up their own C# projects and solutions, with configuration settings specific to their own needs. However, we now need to create another, global solution, which will combine and build all projects into the same output directory.

    The problem I have encountered, though, is that I have found only one way to make all projects build into the same output directory: modify the configurations of all of them. That is what we would like to avoid. We would prefer that these projects had no knowledge of the "global" solution; each team must retain the ability to work just with their own sub-solution. One possible workaround is to create a special configuration for all projects just for this global solution, but that could create extra problems, since you would then have to constantly sync that configuration's settings with the regular one used by the team. The last thing we want is to spend hours figuring out why something doesn't work under the global solution just because of some checkbox that developers checked in their configuration but forgot to set in the global configuration. So, to simplify: we need some sort of output directory setting or post-build event that is only present when building from the global, all-inclusive solution. Is there any way to achieve this without changing anything in the projects' configurations?

    Update 1: Some extra details I should mention: we need this global solution to be as close as possible to what the end user gets when installing our application, since we intend to use it for debugging the entire application when we need to figure out which part isn't working, before routing the bug to the team owning that part. This means that when building under the global solution, the output directory hierarchy should be the same as it would be in Program Files after installation. So if, for example, we have a Program Files/MyApplication/Addins folder which contains all the addins developed by different teams, we need the global solution to copy the binaries from the addin projects and place them in the output directory accordingly. The thing is, the team developing an addin doesn't necessarily know that it is an addin and that it should be placed in that folder, so they cannot change their relative output directory to be build/bin/Debug/Addins.
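
    A hedged sketch of one configuration-free route: MSBuild lets the output directory be overridden from outside the project files, so the "global" build can be a script rather than a solution-level setting. The paths are placeholders:

        msbuild Global.sln /p:Configuration=Debug /p:OutDir=C:\build\staging\

    OutDir applies to every project in the solution without touching any .csproj, so the team solutions stay untouched. The Addins-style layout would still need a post-build copy step in the global script, since individual projects don't know their install location.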

  • Socket connection to a telnet-based server hangs on read

    - by mixwhit
    I'm trying to write a simple socket-based client in Python that will connect to a telnet server. I can test the server by telnetting to its port (5007) and entering text. It responds with a NAK (error) or an ACK (success), sometimes accompanied by other text. Seems very simple.

    I wrote a client to connect and communicate with the server, but it hangs on the first attempt to read the response. The connection is successful. Queries like getsockname and getpeername are successful. The send command returns a value equal to the number of characters I'm sending, so it seems to be sending correctly. But in the end, it always hangs when I try to read the response. I've tried using both file-based objects like readline and write (via socket.makefile) as well as send and recv. With the file object I tried making it with "rw" and reading and writing via that one object, and later one object for "r" and another for "w" to separate them. None of these worked. I used a packet sniffer to watch what's going on. I'm not versed in all that I'm seeing, but during a telnet session I can see my typed text and the server's text coming back. During my Python socket connection, I can see my text going to the server, but the packets coming back don't seem to have any text in them. Any ideas on what I'm doing wrong, or any strategies to try? Here's the code I'm using (in this case, with send and recv):

        #!/usr/bin/python

        host = "localhost"
        port = 5007
        msg = "HELLO EMC 1 1"
        msg2 = "HELLO"

        import socket
        import sys

        try:
            skt = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        except socket.error, e:
            print("Error creating socket: %s" % e)
            sys.exit(1)

        try:
            skt.connect((host, port))
        except socket.gaierror, e:
            print("Address-related error connecting to server: %s" % e)
            sys.exit(1)
        except socket.error, e:
            print("Error connecting to socket: %s" % e)
            sys.exit(1)

        try:
            print(skt.send(msg))
            print("SEND: %s" % msg)
        except socket.error, e:
            print("Error sending data: %s" % e)
            sys.exit(1)

        while 1:
            try:
                buf = skt.recv(1024)
                print("RECV: %s" % buf)
            except socket.error, e:
                print("Error receiving data: %s" % e)
                sys.exit(1)
            if not len(buf):
                break
            sys.stdout.write(buf)
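
    A hedged guess at the hang: a telnet-style, line-oriented server usually waits for a line terminator before replying, and msg is sent without one, so both sides block on read. A small sketch of the send with an explicit CRLF plus a receive timeout, so a silent server fails loudly instead of hanging (Python 2 syntax, matching the question):

        skt.settimeout(10.0)           # raise socket.timeout instead of blocking forever
        skt.sendall(msg + "\r\n")      # telnet servers typically expect CRLF-terminated lines
        buf = skt.recv(1024)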

  • How can I edit the action of the buttons in the dialog box in jQuery?

    - by noob
    This code is from the modal confirmation demo on jQuery's site:

        <script type="text/javascript">
        $(function() {
            $("#dialog").dialog({
                bgiframe: true,
                resizable: false,
                height: 140,
                modal: true,
                overlay: {
                    backgroundColor: '#000',
                    opacity: 0.5
                },
                buttons: {
                    'Yes': function() {
                        $(this).dialog('close');
                    },
                    'No': function() {
                        $(this).dialog('close');
                    }
                }
            });
        });
        </script>

        <div class="demo">
          <div id="dialog" title="Empty the recycle bin?">
            <p><span class="ui-icon ui-icon-alert" style="float:left; margin:0 7px 20px 0;"></span>These items will be permanently deleted and cannot be recovered. Are you sure?</p>
          </div>

          <!-- Sample page content to illustrate the layering of the dialog -->
          <div class="hiddenInViewSource" style="padding:20px;">
            <p>Sed vel diam id libero <a href="http://example.com">rutrum convallis</a>. Donec aliquet leo vel magna. Phasellus rhoncus faucibus ante. Etiam bibendum, enim faucibus aliquet rhoncus, arcu felis ultricies neque, sit amet auctor elit eros a lectus.</p>
            <form>
              <input value="text input" /><br />
              <input type="checkbox" />checkbox<br />
              <input type="radio" />radio<br />
              <select>
                <option>select</option>
              </select><br /><br />
              <textarea>textarea</textarea><br />
            </form>
          </div><!-- End sample page content -->
        </div><!-- End demo -->

        <div class="demo-description">
          <p>Confirm an action that may be destructive or important. Set the <code>modal</code> option to true, and specify primary and secondary user actions with the <code>buttons</code> option.</p>
        </div><!-- End demo-description -->

    Can anyone tell me how to edit the action for the buttons? When Yes is clicked I want to be redirected to test.php, and when I hit No I want to be redirected to another page.
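
    A hedged sketch: each button callback is an ordinary function, so the redirect is just a window.location.href assignment inside it (test.php from the question; other.php is a placeholder for the second page):

        buttons: {
            'Yes': function() {
                $(this).dialog('close');
                window.location.href = 'test.php';
            },
            'No': function() {
                $(this).dialog('close');
                window.location.href = 'other.php';   // hypothetical second page
            }
        }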

  • SSH X11 not working

    - by azat
    I have a home and a work computer; the home computer has a static IP address. If I ssh from my work computer to my home computer, the ssh connection works but X11 applications are not displayed.

    In my /etc/ssh/sshd_config at home:

        X11Forwarding yes
        X11DisplayOffset 10
        X11UseLocalhost yes

    At work I have tried the following commands:

        xhost + home HOME_IP
        ssh -X home
        ssh -X HOME_IP
        ssh -Y home
        ssh -Y HOME_IP

    My /etc/ssh/ssh_config at work:

        Host *
            ForwardX11 yes
            ForwardX11Trusted yes

    My ~/.ssh/config at work:

        Host home
            HostName HOME_IP
            User azat
            PreferredAuthentications password
            ForwardX11 yes

    My ~/.Xauthority at work:

        -rw------- 1 azat azat 269 Jun  7 11:25 .Xauthority

    My ~/.Xauthority at home:

        -rw------- 1 azat azat 246 Jun  7 19:03 .Xauthority

    But it doesn't work. After I make an ssh connection to home:

        $ echo $DISPLAY
        localhost:10.0
        $ kate
        X11 connection rejected because of wrong authentication.
        (the same line repeated eight times)
        kate: cannot connect to X server localhost:10.0

    I use iptables at home, but I've allowed port 22. According to what I've read, that's all I need.

    UPD. With -vvv:

        ...
        debug2: callback start
        debug2: x11_get_proto: /usr/bin/xauth list :0 2>/dev/null
        debug1: Requesting X11 forwarding with authentication spoofing.
        debug2: channel 1: request x11-req confirm 1
        debug2: client_session2_setup: id 1
        debug2: fd 3 setting TCP_NODELAY
        debug2: channel 1: request pty-req confirm 1
        ...

    When I try to launch kate:

        debug1: client_input_channel_open: ctype x11 rchan 2 win 65536 max 16384
        debug1: client_request_x11: request from 127.0.0.1 55486
        debug2: fd 8 setting O_NONBLOCK
        debug3: fd 8 is O_NONBLOCK
        debug1: channel 2: new [x11]
        debug1: confirm x11
        debug2: X11 connection uses different authentication protocol.
        X11 connection rejected because of wrong authentication.
        debug2: X11 rejected 2 i0/o0
        debug2: channel 2: read failed
        debug2: channel 2: close_read
        debug2: channel 2: input open -> drain
        debug2: channel 2: ibuf empty
        debug2: channel 2: send eof
        debug2: channel 2: input drain -> closed
        debug2: channel 2: write failed
        debug2: channel 2: close_write
        debug2: channel 2: output open -> closed
        debug2: X11 closed 2 i3/o3
        debug2: channel 2: send close
        debug2: channel 2: rcvd close
        debug2: channel 2: is dead
        debug2: channel 2: garbage collecting
        debug1: channel 2: free: x11, nchannels 3
        debug3: channel 2: status: The following connections are open:
          #1 client-session (t4 r0 i0/0 o0/0 fd 5/6 cc -1)
          #2 x11 (t7 r2 i3/0 o3/0 fd 8/8 cc -1)
        (the same block repeated about 7 times)
        kate: cannot connect to X server localhost:10.0

    UPD2. Answers to questions from the comments:

    Q: Please provide your Linux distribution & version number. Are you using a default GNOME or KDE environment for X, or something you customized yourself?

        azat:~$ kded4 -version
        Qt: 4.7.4
        KDE Development Platform: 4.6.5 (4.6.5)
        KDE Daemon: $Id$

    Q: Are you invoking ssh directly on a command line from a terminal window? What terminal are you using (xterm, gnome-terminal, ...)? How did you start the terminal running in the X environment (menu, hotkey, ...)?

        From the terminal emulator yakuake; I manually press Ctrl+N and type the commands.

    Q: Can you run xeyes from the same terminal window where the ssh -X fails?

        xeyes is not installed, but kate or another KDE app runs.

    Q: Are you invoking the ssh command as the same user that you're logged into the X session as?

        Yes, the same user.

    UPD3. I also downloaded the ssh sources and, using debug2(), logged why it reports that the authentication protocol is different. It sees some cookies; one of them is empty, and another is MIT-MAGIC-COOKIE-1.
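
    A hedged lead from that last detail: an empty cookie on one side suggests xauth on the home machine never stored an entry for the forwarded display, so sshd's spoofed cookie and the client's MIT-MAGIC-COOKIE-1 never match. A sketch of checks on the home machine, inside the ssh session (xauth and mcookie are standard tools; treat this as a diagnostic, not a guaranteed fix):

        xauth list                                    # is there an entry for localhost:10 / unix:10 ?
        xauth add $DISPLAY MIT-MAGIC-COOKIE-1 $(mcookie)   # manually add a cookie for the forwarded display

    It is also worth verifying that the xauth binary exists at the path sshd expects (XAuthLocation in sshd_config), since a missing xauth produces exactly this silent cookie mismatch.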

  • FFmpeg extract clip - stream frame rate differs from container frame rate (x264, aac)

    - by fideli
    Summary: the H.264 video seems to have a really high frame rate that requires a scaling factor to be applied to the duration of video I'm trying to extract (900x lower).

    Body: I'm trying to extract a clip from a movie that I have in MP4 format (created using HandBrake). After trying mencoder and VLC, I decided to give FFmpeg a shot, since it was the least troublesome when it came to copying the codecs. That is, compared to mencoder and VLC, the resulting file was still playable in QuickTime (I know about Perian, etc.; I'm just trying to learn how all this works). Anyway, my command was as follows:

        ffmpeg -ss 01:15:51 -t 00:05:59 -i outofsight.mp4 \
            -acodec copy -vcodec copy clip.mp4

    During the copy, the following comes up:

        Seems stream 0 codec frame rate differs from container frame rate: 45000.00 (45000/1) -> 25.00 (25/1)
        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'outofsight.mp4':
          Duration: 01:57:42.10, start: 0.000000, bitrate: 830 kb/s
            Stream #0.0(und): Video: h264, yuv420p, 720x384, 25 tbr, 22500 tbn, 45k tbc
            Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16
        Output #0, mp4, to 'out.mp4':
            Stream #0.0(und): Video: libx264, yuv420p, 720x384, q=2-31, 90k tbn, 22500 tbc
            Stream #0.1(eng): Audio: libfaac, 48000 Hz, stereo, s16
        Stream mapping:
          Stream #0.0 -> #0.0
          Stream #0.1 -> #0.1
        Press [q] to stop encoding
        frame= 2591 fps=2349 q=-1.0 size= 8144kB time=101.60 bitrate= 656.7kbits/s
        ...

    Instead of a 5:59 clip, I get the entire rest of the movie. So, to test this, I ran the ffmpeg command with -t 00:00:01. What I got was exactly a 15:00 minute clip. So I did some black-box engineering and decided to scale my -t option by calculating what value to enter, given that 1 second was interpreted as 900 s. For my desired 359 s clip, I calculated 0.399 s, and so my ffmpeg command became:

        ffmpeg -ss 01:15.51 -t 00:00:00.399 -i outofsight.mp4 \
            -acodec copy -vcodec copy clip.mp4

    This works, but I have no idea why the duration is scaled by 900. Investigating further, each ffmpeg run has the line:

        Seems stream 0 codec frame rate differs from container frame rate: 45000.00 (45000/1) -> 25.00 (25/1)

    45000/25 = 1800. There must be a relation somewhere. Somehow, the obscenely high frame rate is causing issues with the timing. How is that frame rate so high? The best part is that the resulting clip.mp4 has the exact same feature (due to the copied video codec), and taking further clips from it needs the same scaling for the -t duration option. Therefore, I've made it available for anyone willing to check this out.

    Appendix: the preamble for ffmpeg on my system (built using the MacPorts ffmpeg port):

        FFmpeg version 0.5, Copyright (c) 2000-2009 Fabrice Bellard, et al.
          configuration: --prefix=/opt/local --disable-vhook --enable-gpl --enable-postproc --enable-swscale --enable-avfilter --enable-avfilter-lavf --enable-libmp3lame --enable-libvorbis --enable-libtheora --enable-libdirac --enable-libschroedinger --enable-libfaac --enable-libfaad --enable-libxvid --enable-libx264 --mandir=/opt/local/share/man --enable-shared --enable-pthreads --cc=/usr/bin/gcc-4.2 --arch=x86_64
          libavutil     49.15. 0 / 49.15. 0
          libavcodec    52.20. 0 / 52.20. 0
          libavformat   52.31. 0 / 52.31. 0
          libavdevice   52. 1. 0 / 52. 1. 0
          libavfilter    1. 4. 0 /  1. 4. 0
          libswscale     1. 7. 1 /  1. 7. 1
          libpostproc   51. 2. 0 / 51. 2. 0
          built on Jan  4 2010 21:51:51, gcc: 4.2.1 (Apple Inc. build 5646) (dot 1)
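
    A hedged observation on the factor: 900 is exactly tbn/tbr for this file (22500 / 25), which fits the symptom of this ffmpeg 0.5 build interpreting -t in the wrong time base when seeking on the input side. One workaround sketch is to move -ss and -t after the input, so they are applied against the output timeline; this is much slower because ffmpeg reads from the start, and its interaction with stream copy in 0.5 is untested here, so re-encoding is the safer variant if it misbehaves:

        ffmpeg -i outofsight.mp4 -ss 01:15:51 -t 00:05:59 \
            -acodec copy -vcodec copy clip.mp4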

  • How can I build pyv8 from source on FreeBSD against the v8 port?

    - by Utkonos
    I am unable to build pyv8 from source on FreeBSD. I have installed the /usr/ports/lang/v8 port, and I'm running into the following error. It seems that pyv8 wants to build v8 itself even though v8 is already built and installed. How can I point pyv8 to the already installed location of v8?

        # python setup.py build
        Found Google v8 base on V8_HOME, update it to the latest SVN trunk at
        running build
        ==================== INFO: Installing or updating GYP...
        -------------------- INFO: Check out GYP from SVN ...
        DEBUG: make dependencies
        ERROR: Check out GYP from SVN failed: code=2
        DEBUG: "Makefile", line 43: Missing dependency operator
        "Makefile", line 45: Need an operator
        "Makefile", line 46: Need an operator
        "Makefile", line 48: Need an operator
        "Makefile", line 50: Need an operator
        "Makefile", line 52: Need an operator
        "Makefile", line 54: Missing dependency operator
        "Makefile", line 56: Need an operator
        "Makefile", line 58: Missing dependency operator
        "Makefile", line 60: Need an operator
        "Makefile", line 62: Missing dependency operator
        "Makefile", line 64: Need an operator
        "Makefile", line 66: Missing dependency operator
        "Makefile", line 68: Need an operator
        "Makefile", line 70: Missing dependency operator
        "Makefile", line 72: Need an operator
        "Makefile", line 73: Missing dependency operator
        "Makefile", line 75: Need an operator
        "Makefile", line 77: Missing dependency operator
        "Makefile", line 79: Need an operator
        "Makefile", line 81: Missing dependency operator
        "Makefile", line 83: Need an operator
        "Makefile", line 85: Missing dependency operator
        "Makefile", line 87: Need an operator
        "Makefile", line 89: Need an operator
        "Makefile", line 91: Missing dependency operator
        "Makefile", line 93: Need an operator
        "Makefile", line 95: Need an operator
        "Makefile", line 97: Need an operator
        "Makefile", line 99: Missing dependency operator
        "Makefile", line 101: Need an operator
        "Makefile", line 103: Missing dependency operator
        "Makefile", line 105: Need an operator
        "Makefile", line 107: Missing dependency operator
        "Makefile", line 109: Need an operator
        "Makefile", line 111: Missing dependency operator
        "Makefile", line 113: Need an operator
        "Makefile", line 115: Missing dependency operator
        "Makefile", line 117: Need an operator
        Error expanding embedded variable.
        ==================== INFO: Patching the GYP scripts
        INFO: patch the Google v8 build/standalone.gypi file to enable RTTI and C++ Exceptions
        ==================== INFO: building Google v8 with GYP for x64 platform with release mode
        -------------------- INFO: build v8 from SVN ...
        DEBUG: make verifyheap=off component=shared_library visibility=on gdbjit=off liveobjectlist=off regexp=native disassembler=off objectprint=off debuggersupport=on extrachecks=off snapshot=on werror=on x64.release
        ERROR: build v8 from SVN failed: code=2
        DEBUG: (the same "Missing dependency operator" / "Need an operator" Makefile errors as above repeat here)

    The files installed by the v8 port are the following (in /usr/local):

        bin/d8
        include/v8.h
        include/v8-debug.h
        include/v8-preparser.h
        include/v8-profiler.h
        include/v8-testing.h
        include/v8stdint.h
        lib/libv8.so
        lib/libv8.so.1
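
    A hedged reading of those errors: the "Missing dependency operator" spew is FreeBSD's BSD make choking on v8's GNU makefile, so whatever pyv8's setup.py shells out to needs to be GNU make; and since setup.py mentions V8_HOME, that environment variable looks like the knob for pointing it at an existing v8 tree. A sketch, with the caveat that V8_HOME's exact semantics (install prefix vs. source checkout) are inferred from the build output above:

        # install GNU make (devel/gmake port) so the v8 sub-build can parse its Makefile
        cd /usr/ports/devel/gmake && make install clean

        # tell setup.py where v8 already lives, then retry
        export V8_HOME=/usr/local
        python setup.py build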

  • Can't install MySQL

    - by James Marthenal
    I have a Debian machine that I previously installed MySQL on. In an attempt to delete it, I stupidly deleted the directories/files /etc/mysql/, /etc/init.d/mysql, /usr/lib/mysql/, /var/lib/mysql/. I then later did sudo apt-get purge mysql-server mysql-server-5.0. Now, when I try to install mysql-server, I get:

        $ sudo apt-get install mysql-server
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will be installed:
          mysql-server-5.0
        The following NEW packages will be installed:
          mysql-server mysql-server-5.0
        0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
        Need to get 0B/27.4MB of archives.
        After this operation, 86.6MB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        WARNING: The following packages cannot be authenticated!
          mysql-server-5.0 mysql-server
        Authentication warning overridden.
        Preconfiguring packages ...
        Can't exec "/tmp/mysql-server-5.0.config.122781": Permission denied at /usr/share/perl/5.10/IPC/Open3.pm line 168.
        open2: exec of /tmp/mysql-server-5.0.config.122781 configure failed at /usr/share/perl5/Debconf/ConfModule.pm line 59
        mysql-server-5.0 failed to preconfigure, with exit status 255
        Selecting previously deselected package mysql-server-5.0.
        (Reading database ... 158138 files and directories currently installed.)
        Unpacking mysql-server-5.0 (from .../mysql-server-5.0_5.0.51a-24+lenny5_amd64.deb) ...
        Selecting previously deselected package mysql-server.
        Unpacking mysql-server (from .../mysql-server_5.0.51a-24+lenny5_all.deb) ...
        Processing triggers for man-db ...
        Setting up mysql-server-5.0 (5.0.51a-24+lenny5) ...
        Stopping MySQL database server: mysqld.
        110206 19:31:13 [ERROR] /usr/sbin/mysqld: Can't find file: './mysql/user.frm' (errno: 13)
        110206 19:31:13 [ERROR] /usr/sbin/mysqld: Can't find file: './mysql/user.frm' (errno: 13)
        ERROR: 1017  Can't find file: './mysql/user.frm' (errno: 13)
        110206 19:31:13 [ERROR] Aborting
        110206 19:31:13 [Note] /usr/sbin/mysqld: Shutdown complete
        /etc/init.d/mysql: WARNING: /etc/mysql/my.cnf cannot be read. See README.Debian.gz (warning).
        Starting MySQL database server: mysqld . . . . . . . . . . . . . . failed!
        invoke-rc.d: initscript mysql, action "start" failed.
        dpkg: error processing mysql-server-5.0 (--configure):
         subprocess post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of mysql-server:
         mysql-server depends on mysql-server-5.0; however:
          Package mysql-server-5.0 is not configured yet.
        dpkg: error processing mysql-server (--configure):
         dependency problems - leaving unconfigured
        Errors were encountered while processing:
         mysql-server-5.0
         mysql-server
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I have tried to search for a solution via Google and have found lots of suggestions for this problem, but ultimately it seems like the problem is that by deleting the files manually, I messed up the mysql-common package. I have tried sudo apt-get install --reinstall mysql-common followed by installing mysql-server, but it does the exact same thing. I previously had MySQL working great; I just want to get it back to that state. Thanks so much for your help.
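
    Two hedged leads from that transcript: "Can't exec /tmp/...: Permission denied" during preconfiguration is the classic symptom of /tmp mounted noexec (debconf must execute its config script from there), and errno 13 on ./mysql/user.frm points at ownership/permissions in the recreated /var/lib/mysql. A sketch of checks, assuming standard Debian paths:

        # is /tmp mounted noexec?
        mount | grep /tmp
        sudo mount -o remount,exec /tmp    # temporary, just for the install

        # after install, the data directory must belong to the mysql user
        sudo chown -R mysql:mysql /var/lib/mysql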

    Read the article

  • Unable to setup postgres on ubuntu (8.4 on 9.10)

    - by shabda
    I am trying to set up Postgres on Ubuntu, and I can't proceed because I can't find the location of pg_hba.conf (Postgres 8.4 on Ubuntu 9.10). What I did: set up Postgres via aptitude. This gave me:

    /usr/share/postgresql/8.4# aptitude install postgresql
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Reading extended state information
    Initializing package states... Done
    The following NEW packages will be installed:
      libreadline5{a} postgresql postgresql-8.4{a} postgresql-client-8.4{a} postgresql-client-common{a} postgresql-common{a}
    0 packages upgraded, 6 newly installed, 0 to remove and 0 not upgraded.
    Need to get 0B/5,159kB of archives. After unpacking 18.8MB will be used.
    Do you want to continue? [Y/n/?] Y
    Writing extended state information... Done
    Preconfiguring packages ...
    Selecting previously deselected package libreadline5.
    (Reading database ... 17490 files and directories currently installed.)
    Unpacking libreadline5 (from .../libreadline5_5.2-6_amd64.deb) ...
    Selecting previously deselected package postgresql-client-common.
    Unpacking postgresql-client-common (from .../postgresql-client-common_101_all.deb) ...
    Selecting previously deselected package postgresql-client-8.4.
    Unpacking postgresql-client-8.4 (from .../postgresql-client-8.4_8.4.1-1_amd64.deb) ...
    Selecting previously deselected package postgresql-common.
    Unpacking postgresql-common (from .../postgresql-common_101_all.deb) ...
    Selecting previously deselected package postgresql-8.4.
    Unpacking postgresql-8.4 (from .../postgresql-8.4_8.4.1-1_amd64.deb) ...
    Selecting previously deselected package postgresql.
    Unpacking postgresql (from .../postgresql_8.4.1-1_all.deb) ...
    Processing triggers for man-db ...
    Setting up libreadline5 (5.2-6) ...
    Setting up postgresql-client-common (101) ...
    Setting up postgresql-client-8.4 (8.4.1-1) ...
    update-alternatives: using /usr/share/postgresql/8.4/man/man1/psql.1.gz to provide /usr/share/man/man1/psql.1.gz (psql.1.gz) in auto mode.
    Setting up postgresql-common (101) ...
    Setting up postgresql-8.4 (8.4.1-1) ...
    update-alternatives: using /usr/share/postgresql/8.4/man/man1/postmaster.1.gz to provide /usr/share/man/man1/postmaster.1.gz (postmaster.1.gz) in auto mode.
    Setting up postgresql (8.4.1-1) ...
    Processing triggers for libc-bin ...
    ldconfig deferred processing now taking place
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Reading extended state information
    Initializing package states... Done
    Writing extended state information... Done

    I then tried to log in with su - postgres, followed by createdb mytestdb. This failed with:

    could not connect to database postgres: could not connect to server: No such file or directory
    Is the server running locally and accepting connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

    So I thought I needed to enable local connections in pg_hba.conf, which I can't find. So I created a new file with sudo vim /etc/postgresql/8.4/main/pg_hba.conf, containing:

    local all all ident

    But I still can't log in after restarting the service. What are the next steps I should take?
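    On Debian/Ubuntu, pg_hba.conf is generated per cluster under /etc/postgresql/8.4/main/ when the cluster is created, and the "No such file or directory" on the socket means the server simply is not running; hand-writing pg_hba.conf into an otherwise empty directory does not create a cluster. A minimal sketch using the postgresql-common wrapper tools (the cluster name main is the Debian/Ubuntu default assumption):

        pg_lsclusters                           # list clusters: version, name, port, status
        sudo pg_createcluster 8.4 main --start  # create and start; generates pg_hba.conf
        sudo pg_ctlcluster 8.4 main start       # start an existing cluster that is down
        sudo -u postgres createdb mytestdb      # should now reach the Unix socket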

    Read the article

  • NetBackup with VSS and Instant Recovery - Failing to delete old snapshots

    - by Jonathan Bourke
    We are attempting to implement Microsoft VSS for snapshotting in our NetBackup 6.5.3.1 environment. The clients are both 32- and 64-bit Windows Server 2003. Snapshot parameters are:

    Instant recovery is enabled
    Maximum snapshots = 1
    Provider type = 1 (System)
    Snapshot attribute = 1 (Differential)

    All backups complete successfully, and VSS shadows are successfully created both for the snapshot backup and for the open files (shadow copy components).

    The Issue: NetBackup is not clearing or overwriting old snapshots with each successive backup. When we list shadows and shadow storage, the totals keep increasing. It is not honouring the Maximum Snapshots setting.

    The Logs: The bpfis log doesn't really appear to show any errors other than for methods we are not employing (VxVM, FlashSnap, etc.). A section follows:

    11:54:10.744 [348.4724] <2> logparams: D:\Program Files\Veritas\NetBackup\bin\bpfis.exe delete -nbu -id htpststr001.san.mgmt.det_1248918143 -bpstart_to 300 -bpend_to 300 -clnt htpststr001.san.mgmt.det
    11:54:10.744 [348.4724] <4> bpfis: INF - BACKUP START 348
    11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: VfMS error 10; see following messages:
    11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: Non-fatal method error was reported
    11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: vfm_configure_fi_one: method: FlashSnap, type: FIM, function: FlashSnap_init
    11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: VfMS method error 3; see following message:
    11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: FlashSnap_init: Veritas Volume Manager not installed.
    11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: VfMS error 10; see following messages:
    11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: Non-fatal method error was reported
    11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: vfm_configure_fi_one: method: vxvm, type: FIM, function: vxvm_init
    11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: VfMS method error 3; see following message:
    11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: vxvm_init: Veritas Volume Manager not installed.
    11:54:11.713 [348.4724] <4> onlfi_thaw: Thawing C:\ using snapshot method VSS.
    11:54:11.713 [348.4724] <2> onlfi_vfms_logf: vfm_thaw: delete snapshot ...
    11:54:11.744 [348.4724] <2> onlfi_vfms_logf: snapshot services: emcclariionfi:Thu Jul 30 2009 11:54:11.744000 <Thread id - 4724> Unable to import any login credentials for any appliances.
    11:54:11.760 [348.4724] <2> onlfi_vfms_logf: snapshot services: hpevafi:Thu Jul 30 2009 11:54:11.760000 <Thread id - 4724> CHpEvaPlugin::init: CLI tool is not installed.
    11:54:11.760 [348.4724] <2> onlfi_vfms_logf: snapshot services: hpmsafi:Thu Jul 30 2009 11:54:11.760000 <Thread id - 4724> No array mangement credentials are available in configuration file.
    11:54:13.806 [348.4724] <4> onlfi_thaw: do_thaw return value: 0
    11:54:13.806 [348.4724] <4> onlfi_thaw: Thawing D:\ using snapshot method VSS.
    11:54:15.806 [348.4724] <4> onlfi_thaw: do_thaw return value: 0
    11:54:19.806 [348.4724] <2> fis_delete_id: removing D:\Program Files\Veritas\NetBackup\online_util\fi_cntl\bpfis.fim.htpststr001.san.mgmt.det_1248918143.0
    11:54:19.806 [348.4724] <2> fis_delete_id: removing D:\Program Files\Veritas\NetBackup\online_util\fi_cntl\bpfis.fim.htpststr001.san.mgmt.det_1248918143.0.fiid
    11:54:19.853 [348.4724] <4> bpfis: INF - EXIT STATUS 0: the requested operation was successfully completed

    The Question: Does anyone have experience with NetBackup/VSS failing to clear snapshots after backups? We will ultimately be using an HP EVA for the snapshots, but we want to ensure correct functioning at the VSS level before we go further. Regards, Jonathan (PS: question previously posted by my colleague on entsupport.symantec.com)
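    Before tuning NetBackup's snapshot parameters further, it can help to confirm from the client side what the system provider is actually holding, and to clear the backlog by hand; Windows Server 2003 ships vssadmin for exactly this. A hedged sketch from an administrative command prompt (the volume letters and size cap are examples, not recommendations):

        rem inspect existing shadows and the space their diff areas consume
        vssadmin list shadows
        vssadmin list shadowstorage

        rem manually delete stale shadows for one volume
        vssadmin delete shadows /for=C: /all

        rem optionally cap the diff area so runaway shadows cannot fill the disk
        vssadmin resize shadowstorage /for=C: /on=C: /maxsize=4GB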

    Read the article

  • Red Hat Yum not working out of the box?

    - by Tucker
    I have a server running Red Hat Enterprise Linux v5.6 in the cloud. My project constraints do not allow me to use another OS. When I created the cloud server, I was able to SSH into it and access the shell. I next ran the command sudo yum update, but the command failed. About a month ago I created another server with the same machine image and didn't have that error. Why is it failing now? The following is the terminal output:

    sudo yum update
    Loaded plugins: security
    Repository rhel-server is listed more than once in the configuration
    Traceback (most recent call last):
      File "/usr/bin/yum", line 29, in ?
        yummain.user_main(sys.argv[1:], exit_code=True)
      File "/usr/share/yum-cli/yummain.py", line 309, in user_main
        errcode = main(args)
      File "/usr/share/yum-cli/yummain.py", line 178, in main
        result, resultmsgs = base.doCommands()
      File "/usr/share/yum-cli/cli.py", line 345, in doCommands
        self._getTs(needTsRemove)
      File "/usr/lib/python2.4/site-packages/yum/depsolve.py", line 101, in _getTs
        self._getTsInfo(remove_only)
      File "/usr/lib/python2.4/site-packages/yum/depsolve.py", line 112, in _getTsInfo
        pkgSack = self.pkgSack
      File "/usr/lib/python2.4/site-packages/yum/__init__.py", line 662, in <lambda>
        pkgSack = property(fget=lambda self: self._getSacks(),
      File "/usr/lib/python2.4/site-packages/yum/__init__.py", line 502, in _getSacks
        self.repos.populateSack(which=repos)
      File "/usr/lib/python2.4/site-packages/yum/repos.py", line 260, in populateSack
        sack.populate(repo, mdtype, callback, cacheonly)
      File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 168, in populate
        if self._check_db_version(repo, mydbtype):
      File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 226, in _check_db_version
        return repo._check_db_version(mdtype)
      File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 1233, in _check_db_version
        repoXML = self.repoXML
      File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 1406, in <lambda>
        repoXML = property(fget=lambda self: self._getRepoXML(),
      File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 1398, in _getRepoXML
        self._loadRepoXML(text=self)
      File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 1388, in _loadRepoXML
        return self._groupLoadRepoXML(text, ["primary"])
      File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 1372, in _groupLoadRepoXML
        if self._commonLoadRepoXML(text):
      File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 1208, in _commonLoadRepoXML
        result = self._getFileRepoXML(local, text)
      File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 989, in _getFileRepoXML
        cache=self.http_caching == 'all')
      File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 826, in _getFile
        http_headers=headers,
      File "/usr/lib/python2.4/site-packages/urlgrabber/mirror.py", line 412, in urlgrab
        return self._mirror_try(func, url, kw)
      File "/usr/lib/python2.4/site-packages/urlgrabber/mirror.py", line 398, in _mirror_try
        return func_ref( *(fullurl,), **kwargs )
      File "/usr/lib/python2.4/site-packages/urlgrabber/grabber.py", line 936, in urlgrab
        return self._retry(opts, retryfunc, url, filename)
      File "/usr/lib/python2.4/site-packages/urlgrabber/grabber.py", line 854, in _retry
        r = apply(func, (opts,) + args, {})
      File "/usr/lib/python2.4/site-packages/urlgrabber/grabber.py", line 922, in retryfunc
        fo = URLGrabberFileObject(url, filename, opts)
      File "/usr/lib/python2.4/site-packages/urlgrabber/grabber.py", line 1010, in __init__
        self._do_open()
      File "/usr/lib/python2.4/site-packages/urlgrabber/grabber.py", line 1093, in _do_open
        fo, hdr = self._make_request(req, opener)
      File "/usr/lib/python2.4/site-packages/urlgrabber/grabber.py", line 1202, in _make_request
        fo = opener.open(req)
      File "/usr/lib64/python2.4/urllib2.py", line 358, in open
        response = self._open(req, data)
      File "/usr/lib64/python2.4/urllib2.py", line 376, in _open
        '_open', req)
      File "/usr/lib64/python2.4/urllib2.py", line 337, in _call_chain
        result = func(*args)
      File "/usr/lib64/python2.4/site-packages/M2Crypto/m2urllib2.py", line 82, in https_open
        h.request(req.get_method(), req.get_selector(), req.data, headers)
      File "/usr/lib64/python2.4/httplib.py", line 810, in request
        self._send_request(method, url, body, headers)
      File "/usr/lib64/python2.4/httplib.py", line 833, in _send_request
        self.endheaders()
      File "/usr/lib64/python2.4/httplib.py", line 804, in endheaders
        self._send_output()
      File "/usr/lib64/python2.4/httplib.py", line 685, in _send_output
        self.send(msg)
      File "/usr/lib64/python2.4/httplib.py", line 652, in send
        self.connect()
      File "/usr/lib64/python2.4/site-packages/M2Crypto/httpslib.py", line 47, in connect
        self.sock.connect((self.host, self.port))
      File "/usr/lib64/python2.4/site-packages/M2Crypto/SSL/Connection.py", line 174, in connect
        ret = self.connect_ssl()
      File "/usr/lib64/python2.4/site-packages/M2Crypto/SSL/Connection.py", line 167, in connect_ssl
        return m2.ssl_connect(self.ssl, self._timeout)
    M2Crypto.SSL.SSLError: certificate verify failed
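    The "certificate verify failed" is raised before any package work begins, while yum fetches repository metadata over HTTPS, so the usual suspects are a badly skewed system clock (cloud images often boot with one) or a stale or expired CA certificate for the RHN/RHUI endpoint; the "listed more than once" warning also points at a duplicated stanza under /etc/yum.repos.d worth cleaning up. A couple of hedged checks (the RHNS-CA-CERT path is the RHEL 5 default; a RHUI-based image keeps its CA elsewhere):

        date                        # a clock outside the cert's validity window fails verification
        sudo ntpdate pool.ntp.org   # rough one-shot resync; ntpdate ships with RHEL 5

        # inspect the CA certificate yum's SSL layer validates against;
        # a notAfter date in the past means it has expired
        openssl x509 -in /usr/share/rhn/RHNS-CA-CERT -noout -dates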

    Read the article

< Previous Page | 317 318 319 320 321 322 323 324 325 326 327 328  | Next Page >