Search Results

Search found 1693 results on 68 pages for 'hostname'.


  • Problem with non blocking fifo in bash

    - by timdel
    Hi! I'm running a few Team Fortress 2 servers and I want to write a little management script. Basically the TF2 servers are a foreground process which provides a server console, so I can start the server, type status and get an answer from it:

        ***@purple:~/tf2$ ./start_server_testing
        Auto detecting CPU
        Using AMD Optimised binary.
        Server will auto-restart if there is a crash.
        Console initialized.
        [bla bla bla]
        Connection to Steam servers successful.
        VAC secure mode is activated.
        status
        hostname: Team Fortress
        version : 1.0.6.1/15 3883 secure
        udp/ip  : ***.***.133.31:27600
        map     : ctf_2fort at: 0 x, 0 y, 0 z
        players : 0 (2 max)
        # userid name uniqueid connected ping loss state adr

    Great, now I want to create a script which sends the command sm_reloadadmins to all my servers. The best way I found to do this is using a named FIFO pipe. What I want is for the pipe to be read-only and non-blocking to the server process, so I can write into the pipe and the server executes it, but I still want to be able to type into the server console, so if I switch back to the foreground process of the server and type status, I get an answer printed. I tried this (assuming serverfifo was created with mkfifo serverfifo):

        ./start_server_testing < serverfifo

    Not working: the server won't start until something is written to the pipe.

        ./start_server_testing <> serverfifo

    That actually works pretty well: I can see the console output of the server, and I can write to the FIFO and the server executes the commands, but I can't type into the server console anymore. Also, if I write 'exit' to the pipe (which should end the server) while it's running in a screen, the screen window gets killed for some reason (wtf, why?). I only need the server to read the FIFO without blocking, AND all my keyboard input on the server itself should be sent to the server, AND all server output should be written to the console. Is that possible? If yes, how?
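    For the management-script side of this (sending sm_reloadadmins to every server), here is a rough Python sketch that writes into each server's FIFO without blocking; the FIFO paths are placeholders and it assumes each server was started reading its own pipe:

        import errno
        import os

        # Hypothetical FIFO locations; point these at each server's pipe.
        SERVER_FIFOS = [
            "/home/tf2/server1/serverfifo",
            "/home/tf2/server2/serverfifo",
        ]

        def send_command(fifo_path, command):
            """Write one console command to a server FIFO without blocking."""
            try:
                # O_NONBLOCK makes open() fail with ENXIO when no reader is
                # attached, instead of hanging the management script.
                fd = os.open(fifo_path, os.O_WRONLY | os.O_NONBLOCK)
            except OSError as e:
                if e.errno == errno.ENXIO:
                    print("no server reading %s, skipping" % fifo_path)
                    return
                raise
            try:
                os.write(fd, (command + "\n").encode())
            finally:
                os.close(fd)

        for fifo in SERVER_FIFOS:
            send_command(fifo, "sm_reloadadmins")

    This only covers the writer side; it doesn't change how the server itself reads the pipe alongside its own console input.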

    Read the article

  • Are Subversion 1.6 & Xcode 3.2 compatible?

    - by Meltemi
    Trying to get Xcode to work with a Subversion server.

        Server: Subversion upgraded to 1.6.9 (Mac OS X Leopard 10.5.8)
        Client: Xcode 3.2.1 (Snow Leopard 10.6.2 with Subversion 1.6.5, though not sure that matters)

    The repository on the server is set up and working fine via the command line. However, I get an error when trying to create the repository connection in Xcode:

        Error: 160043 (Unsupported FS format)
        Description: Expected FS format '2'; found format '4'

    A Google search seems to say that the server needs to be updated... but it's running 1.6.9, which is the most current version I'm aware of. Anyone know how to make this work? Is it even possible? I'm well aware of the command line usage but I would like to get Xcode & SVN talking...

    Revisiting this after some time. Using the command line:

        username$ svn+ssh://hostname/Library/Subversion/Repository/test

    yields the same result:

        Description: Expected FS format '2'; found format '4'

    Can anyone verify that I need to upgrade Subversion on the client machine to match the version on the server (1.6.9)?!? I was hoping I wouldn't have to unless it was a "major" revision (i.e. 1.5.x to 1.6.x).

    Read the article

  • Amazon EC2 and jbossws

    - by avjaz
    Hi - I've deployed a webservice to a JBoss instance running on Amazon EC2. The webservice works fine locally, but when I deploy on EC2 and go to the /jbossws/services page, the Endpoint Address for the webservice is the private DNS of the EC2 instance (domU-X-X-X-X etc...), not the public DNS (which I would like it to be).

    I've tried loading the WSDL by changing the private hostname to the public IP; that works, but when I try to call any of the operations I get a HostNotFoundException, I'm guessing due to the fact that the generated WSDL has the stanza:

        <service name='XXXService'>
          <port binding='tns:XXXBinding' name='XXXPort'>
            <soap:address location='http://domU-XX-XX-XX-XX-XX-XX.compute-1.internal:8080/xx/xx/xx'/>
          </port>
        </service>

    where http://domU-XX-XX-XX-XX-XX-XX.compute-1.internal is the internal DNS of the EC2 instance. The WSDL is auto-generated - is there a JAXB annotation I can use so that I can force the generated WSDL to use the public DNS of the EC2 instance?

    Many thanks -

    Read the article

  • How do I check if output stream of a socket is closed?

    - by Roman
    I have this code:

        public void post(String message) {
            output.close();
            final String mess = message;
            (new Thread() {
                public void run() {
                    while (true) {
                        try {
                            output.println(mess);
                            System.out.println("The following message was successfully sent:");
                            System.out.println(mess);
                            break;
                        } catch (NullPointerException e) {
                            try { Thread.sleep(1000); } catch (InterruptedException ie) {}
                        }
                    }
                }
            }).start();
        }

    As you can see, I close the socket at the very beginning of the code and then try to use it to send some information to another computer. The program writes "The following message was successfully sent", which means that the NullPointerException was not thrown. So, does Java throw no exception if it tries to use a closed output stream of a socket? Is there a way to check if a socket is closed or open?

    ADDED: I initialize the socket in the following way:

        clientSideSocket = new Socket(hostname, port);
        PrintWriter out = new PrintWriter(clientSideSocket.getOutputStream(), true);
        browser.output = out;

    Read the article

  • sendto: Invalid Argument

    - by Sylvain
    Hi, I have a list<struct sockaddr_in> _peers I'd like to use for sendto(). The list is filled this way:

        struct hostent *hp;
        hp = gethostbyname(hostname);

        sockaddr_in sin;
        bzero(&sin, sizeof(sin));
        sin.sin_family = AF_INET;
        sin.sin_port = htons(port);
        sin.sin_addr.s_addr = *(in_addr_t *)hp->h_addr;
        _peers.push_front(sin);

    and here's how I try to send:

        for (list<struct sockaddr_in>::iterator it = _peers.begin(); it != _peers.end(); ++it) {
            if (sendto(_s, "PING", 5, 0, (struct sockaddr *)&(*it), sizeof(struct sockaddr_in)) < 0)
                perror("sendto");
        }

    which outputs:

        sendto: Invalid argument

    If I create the struct sockaddr_in right before sendto(), everything works fine, so I guess I fail at using the list properly... I also tested using &_peers.front() directly and still get the same error... What am I doing wrong?

    Thanks in advance,

    Read the article

  • hg archive to Remote Directory

    - by Brett Daniel
    Is there any way to archive a Mercurial repository to a remote directory over SSH? For example, it would be nice if one could do the following:

        hg archive ssh://[email protected]/path/to/archive

    However, that does not appear to work. It instead creates a directory called ssh: in the current directory.

    I made the following quick-and-dirty script that emulates the desired behavior by creating a temporary ZIP archive, copying it over SSH, and unzipping it into the destination directory. However, I would like to know if there is a better way.

        if [[ $# != 1 ]]; then
            echo "Usage: $0 [user@]hostname:remote_dir"
            exit
        fi

        arg=$1
        arg=${arg%/}   # remove trailing slash
        host=${arg%%:*}
        remote_dir=${arg##*:}

        # zip named to match lowest directory in $remote_dir
        zip=${remote_dir##*/}.zip

        # root of archive will match zip name
        hg archive -t zip $zip

        # make $remote_dir if it doesn't exist
        ssh $host mkdir --parents $remote_dir

        # copy zip over ssh into destination
        scp $zip $host:$remote_dir

        # unzip into containing directory (will prompt for overwrite)
        ssh $host unzip $remote_dir/$zip -d $remote_dir/..

        # clean up zips
        ssh $host rm $remote_dir/$zip
        rm $zip

    Edit: clone-and-push would be ideal, but unfortunately the remote server does not have Mercurial installed.
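    One way to avoid the temporary zip entirely (assuming your hg version supports writing an archive to stdout with '-', and that tar is available on both ends) is to stream the archive straight over SSH. A rough Python sketch of that idea, with host and path as placeholders:

        import subprocess

        host = "user@hostname"           # placeholder
        remote_dir = "/path/to/archive"  # placeholder

        # Stream an uncompressed tar archive from hg straight into tar
        # running on the remote host. Use -p/--prefix on hg archive if you
        # want to control the top-level directory inside the archive.
        archive = subprocess.Popen(
            ["hg", "archive", "-t", "tar", "-"],   # "-" writes the archive to stdout
            stdout=subprocess.PIPE,
        )
        unpack = subprocess.Popen(
            ["ssh", host, "mkdir -p %s && tar xf - -C %s" % (remote_dir, remote_dir)],
            stdin=archive.stdout,
        )
        archive.stdout.close()  # let hg see a broken pipe if ssh exits early
        unpack.communicate()

    The same pipeline works directly in the shell too; the script above only wraps it so it can keep the [user@]hostname:remote_dir argument handling from the original script.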

    Read the article

  • Java Server Client Program I/O Exception

    - by AjayP
    I made this program: http://java.sun.com/docs/books/tutorial/networking/sockets/clientServer.html

    It works perfectly if I put the server's hostname as 127.0.0.1 or my computer's name (Ajay-PC). However, these two methods are LAN/local only, not internet. So I changed it to my internet IP, 70.128.xxx.xxx, etc. But it didn't work. I checked canyouseeme.org and it said 4444 was CLOSED. So I did a quick port forward:

        Name: My Java Program
        Start Port: 4444
        End Port: 4444
        Server IP: 10.0.0.12   <-- (yeah, this is my local IP, I checked)

    Then I tried canyouseeme.org AGAIN, and it said 4444 was OPEN. I ran my server/client program and it has yet to work. So my problem is that the client/server program is not working over the internet, just locally. Something is blocking it and I don't know what.

        Computer: Windows Vista x64
        Norton AntiVirus 2010

    Thanks! I'll give best answer or whatever to whoever answers the best ;) :)

    Read the article

  • Publish Maven artifacts on FTP with Hudson FTP Publisher Plugin

    - by jaguard
    I'm building a number of artefacts (zip files for different environments: test, dev) with the maven-assembly-plugin, using a specialized Maven profile. I want to copy/collect these artefacts on an FTP server, keeping the version (01.07.10.16.Wed-1626) as a folder, so I need to copy from test/build/01.07.10.16.Wed-1626/ to ftp://my-ftp-server:21/projects/myserver-1.7/01.07.10.16.Wed-1626/

    The layout of the Maven output is this:

        target/
          build/
            01.07.10.16.Wed-1626/
              my-server-01.07.10.16.Wed-1626-dev.zip
              my-server-01.07.10.16.Wed-1626-test.zip

    For copying the artefacts I'm using the FTP Publisher Plugin, but it seems I'm missing something: even though the build is OK and the artefacts are built without problems, the job finishes without copying them, and there is no log info in the console about copying the artefacts.

    My FTP publisher config (FTP repository hosts) is:

        Hostname: my-ftp-server
        Port: 21
        Timeout: 10000
        Root Repository Path: projects
        User Name: my-user
        Password: my-pass

    My Hudson job FTP publisher config (Publish artifacts to FTP) is:

        FTP site: my-ftp-server
        Files to upload
          Source: target/build/**
          Destination: myserver-1.7

    1: Is there any log to check whether there are any FTP copy errors?
    2: Is there any problem with the file pattern (source) or with the destination?
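    As a sanity check outside Hudson, a small script can confirm that the FTP account is allowed to create the version folder under projects/myserver-1.7 and upload the zips. A rough Python sketch, using the values from the config above as placeholders:

        from ftplib import FTP
        import os

        HOST, USER, PASSWORD = "my-ftp-server", "my-user", "my-pass"
        LOCAL_DIR = "target/build/01.07.10.16.Wed-1626"
        REMOTE_DIR = "projects/myserver-1.7/01.07.10.16.Wed-1626"

        ftp = FTP(HOST, USER, PASSWORD, timeout=10)

        # Create the remote version folder one level at a time
        # (mkd raises an error for directories that already exist).
        path = ""
        for part in REMOTE_DIR.split("/"):
            path = part if not path else path + "/" + part
            try:
                ftp.mkd(path)
            except Exception:
                pass

        for name in os.listdir(LOCAL_DIR):
            with open(os.path.join(LOCAL_DIR, name), "rb") as f:
                ftp.storbinary("STOR %s/%s" % (REMOTE_DIR, name), f)

        ftp.quit()

    If this works but the plugin doesn't, the problem is more likely on the Hudson side (pattern or destination) than with the FTP account itself.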

    Read the article

  • WCF - remote service without using IIS - base address?

    - by Mark Pim
    I'm trying to get my head around the addressing of WCF services. We have a client-server setup where the server occasionally (maybe once a day) needs to push data to each client. I want to have a lightweight WCF listener service on each client, hosted in an NT service, to receive that data. We already have such an NT service set up hosting some local WCF services for other tasks, so the overhead of this is minimal.

    Because of existing legacy code on the server, I believe the service needs to be exposed as ASMX and use basicHttpBinding to allow it to connect. Each client is registered on the server by the user (they need to configure them individually), so discovery is not the issue.

    My question is: how does the addressing work? I imagine the user entering the client's address on the server in the form

        http://0.0.0.0/MyService

    or even

        http://hostname/MyService

    If so, how do I configure the client service in its App.config? Do I use localhost? If not, then what is the recommended way of exposing the service to the server?

    Note: I don't want to host in IIS as that adds extra requirements to the hardware required for the client. The clients will almost certainly be located on LANs, not over the public internet.

    Read the article

  • How to route all subdomains to a single host using mDNS?

    - by John Mee
    I have a development webserver hosting as "myhost.local", which is found using Bonjour/mDNS. The server is running avahi-daemon. The webserver also wants to handle any subdomains of itself, e.g. "cat.myhost.local", "dog.myhost.local" and "guppy.myhost.local". Given that myhost.local is on a dynamic IP address from DHCP, is there still a way to route all requests for the subdomains to myhost.local?

    I'm starting to think it's not currently possible. From http://marc.info/?l=freedesktop-avahi&m=119561596630960&w=2:

        > You can do this with the /etc/avahi/hosts file. Alternatively you can use
        > avahi-publish-host-name.

        No, he cannot. Since he wants to define an alias, not a new hostname. I.e. he
        only wants to register an A RR, no reverse PTR RR. But if you stick something
        into /etc/avahi/hosts then it registers both, and detects a collision if the
        PTR RR is non-unique, which would be the case for an alias.

    Read the article

  • Problem with copying local data onto HDFS on a Hadoop cluster using Amazon EC2/S3

    - by Deepak Konidena
    Hi, I have set up a Hadoop cluster containing 5 nodes on Amazon EC2. Now, when I log in to the master node and submit the following command:

        bin/hadoop jar <program>.jar <arg1> <arg2> <path/to/input/file/on/S3>

    it throws the following errors (not at the same time). The first error is thrown when I don't replace the slashes with '%2F' and the second is thrown when I replace them with '%2F':

        1) Java.lang.IllegalArgumentException: Invalid hostname in URI
           S3://<ID>:<SECRETKEY>@<BUCKET>/<path-to-inputfile>

        2) org.apache.hadoop.fs.S3.S3Exception: org.jets3t.service.S3ServiceException:
           S3 PUT failed for '/' XML Error Message: The request signature we calculated
           does not match the signature you provided. Check your key and signing method.

    Note:

    1) When I ran jps to see what tasks were running on the master, it just showed

        1116 NameNode
        1699 Jps
        1180 JobTracker

    leaving out DataNode and TaskTracker.

    2) My secret key contains two '/' (forward slashes), and I replace them with '%2F' in the S3 URI.

    PS: The program runs fine on EC2 when run on a single node. It's only when I launch a cluster that I run into issues related to copying data to/from S3 from/to HDFS. And what does distcp do? Do I need to distribute the data even after I copy it from S3 to HDFS? (I thought HDFS took care of that internally.)

    If you could direct me to a link that explains running Map/Reduce programs on a Hadoop cluster using Amazon EC2/S3, that would be great.

    Regards, Deepak.

    Read the article

  • ASP.NET Emails blocked by Spam filter on Exchange

    - by Amadiere
    I'm trying to send an email via some C# ASP.NET code. This is being sent to our internal mail relay server, with our standard "from" address (e.g. [email protected]). In some instances this is getting through OK; in others, it's getting blocked by the spam filter.

    An example of our Web.config:

        <mailSettings>
          <smtp from="[email protected]">
            <network host="mailrelay.domain.com" defaultCredentials="true" />
          </smtp>
        </mailSettings>

    I've spoken with our Exchange Server team and they inform me that, on occasion, our mail looks sufficiently like spam and is automatically blocked. The algorithm appears to be points-based and blocks on a score of 45. 20 points are instantly added because our system is not sending the hostname with the domain name suffixed: the server is hoping for myServerName.domain.com, but despite being part of that domain, the server is sending from myServerName.

    I've been asked to look at altering the EHLO string that is sent and/or influencing the host so that it is its fully qualified name. However, this makes little sense to me, and although I understand the concept of what I need to change, I don't know where to begin looking for the fix.
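    For what it's worth, the EHLO name is just whatever string the client announces at the start of the SMTP session, so one way to see the effect is to drive a session by hand. A quick Python illustration (hostnames and addresses are placeholders; this only demonstrates the concept, not the ASP.NET fix):

        import smtplib

        # local_hostname controls the name sent in the EHLO/HELO greeting.
        # Left unset, smtplib guesses from the local machine, which may be the
        # short name rather than the fully qualified one.
        server = smtplib.SMTP("mailrelay.domain.com",
                              local_hostname="myservername.domain.com")
        server.ehlo()
        server.sendmail("sender@domain.com", ["recipient@domain.com"],
                        "Subject: EHLO test\r\n\r\nBody text")
        server.quit()

    If a message sent this way scores lower on the filter, that suggests the EHLO/hostname issue rather than the message content.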

    Read the article

  • Executing bat file and returning the prompt

    - by Lieven Cardoen
    I have a problem with CruiseControl, where an Ant script executes a bat file that doesn't give me the prompt back. As a result, the project in CruiseControl keeps on building forever until I restart CruiseControl. How can I resolve this? It's the startup.bat from Wowza (streaming server) that I'm executing:

        @echo off
        call setenv.bat
        if not %WMSENVOK% == "true" goto end

        set _WINDOWNAME="Wowza Media Server 2"
        set _EXESERVER=
        if "%1"=="newwindow" (
            set _EXESERVER=start %_WINDOWNAME%
            shift
        )

        set CLASSPATH="%WMSAPP_HOME%\bin\wms-bootstrap.jar"

        rem cacls jmxremote.password /P username:R
        rem cacls jmxremote.access /P username:R

        rem NOTE: Here you can configure the JVM's built in JMX interface.
        rem See the "Server Management Console and Monitoring" chapter
        rem of the "User's Guide" for more information on how to configure the
        rem remote JMX interface in the [install-dir]/conf/Server.xml file.

        set JMXOPTIONS=-Dcom.sun.management.jmxremote=true
        rem set JMXOPTIONS=%JMXOPTIONS% -Djava.rmi.server.hostname=192.168.1.7
        rem set JMXOPTIONS=%JMXOPTIONS% -Dcom.sun.management.jmxremote.port=1099
        rem set JMXOPTIONS=%JMXOPTIONS% -Dcom.sun.management.jmxremote.authenticate=false
        rem set JMXOPTIONS=%JMXOPTIONS% -Dcom.sun.management.jmxremote.ssl=false
        rem set JMXOPTIONS=%JMXOPTIONS% -Dcom.sun.management.jmxremote.password.file="%WMSCONFIG_HOME%/conf/jmxremote.password"
        rem set JMXOPTIONS=%JMXOPTIONS% -Dcom.sun.management.jmxremote.access.file="%WMSCONFIG_HOME%/conf/jmxremote.access"

        rem log interceptor com.wowza.wms.logging.LogNotify - see Javadocs for ILogNotify

        %_EXESERVER% "%_EXECJAVA%" %JAVA_OPTS% %JMXOPTIONS% -Dcom.wowza.wms.AppHome="%WMSAPP_HOME%" -Dcom.wowza.wms.ConfigURL="%WMSCONFIG_URL%" -Dcom.wowza.wms.ConfigHome="%WMSCONFIG_HOME%" -cp %CLASSPATH% com.wowza.wms.bootstrap.Bootstrap start

        :end

    Read the article

  • CFHost DNS Resolution - When is it OK to use synchronous API?

    - by Jasarien
    I went to the iPhone Developer Tech Talk a few months ago and asked one of the gurus there about the lack of NSHost on the iPhone. Some code I was porting to the iPhone made significant use of NSHost throughout its networking code. I was told that NSHost is on the iPhone, but it's private. I was also told that NSHost is a synchronous API and that I shouldn't use it anyway. (If anyone could elaborate on why it shouldn't be used, as a bonus, that'd be great.)

    I can see the caveats of using synchronous APIs on the main thread in that they'll block until complete, and that's never a good thing with network code because there are so many factors that could cause the API to block the thread for a significant amount of time. My solution was to write a wrapper around CFHost's asynchronous resolution functions. The result works quite well, and I'm considering releasing it into the public domain.

    But my question is this: say my app only resolves a hostname once per run, during the connecting phase, and then caches it for the rest of the session. During the time it is resolving, a modal screen is shown telling the user "Connecting" with a nice spinner. Does it really matter whether or not the resolution is asynchronous? The user has to wait to connect anyway, and the resolution is only done on the first connection. Subsequent connections use the cached result of the resolution.

    When is it OK to be synchronous and when should things be asynchronous?

    Read the article

  • Changing customErrors in web.config semi-dynamically

    - by Tom Ritter
    The basic idea is we have a test environment which mimics Production, so customErrors="RemoteOnly". We just built a test harness that runs against the Test environment and detects breaks. We would like it to be able to pull back the detailed error, but we don't want to turn customErrors="On" because then it doesn't mimic Production.

    I've looked around and thought a lot, and everything I've come up with isn't possible. Am I wrong about any of these points?

    - We can't turn customErrors on at runtime, because when you call configuration.Save() it writes the web.config to disk and now it's Off for every request.
    - We can't symlink the files into a new top-level directory with its own web.config, because we're on Windows and Subversion on Windows doesn't do symlinks.
    - We can't use URL mapping to make an empty folder dir2 with its own web.config and make the files in dir1 appear to be in dir2: the web.config doesn't apply.
    - We can't copy all the aspx files into dir2 with its own web.config, because none of the links would be consistent and it's a horrible, hacky solution.
    - We can't change customErrors in web.config based on hostname (e.g. add another DNS entry to the test server), because it's not possible/supported.
    - We can't do any virtual directory shenanigans to make it work.

    If I'm not, is there a way to accomplish what I'm trying to do? Turn on customErrors site-wide under certain circumstances (DNS name or even a querystring value)?

    Read the article

  • DNS-Based Environment Determination

    - by zvolkov
    Found the following here. The question is: where can I find more details on how exactly to implement this on Windows? Any guide or how-to, anybody? Or maybe you can provide your invaluable suggestions? Specifically, how do I make it so that "all QA servers would first resolve entries in qa.example.com first and then if that lookup failed they would try example.com"? (I'm a dev, not a DNS specialist, but our IT Support has refused to help on this :()

        Use DNS Based Environment Determination for your servers. Do this by initially
        splitting your top level domain into a number of sub domains depending on their
        function, and then creating DNS Service Names in each of the sub domains pointing
        to the relevant server for that service. Based on the list above we would then have:

            * clientdb.prod.example.com for Production
            * clientdb.perf.example.com for Performance Testing
            * clientdb.qa.example.com for QA
            * clientdb.dev.example.com for Development

        Servers then resolve entries in their relevant sub domain by function. That is, all
        QA servers would first resolve entries in qa.example.com first and then if that
        lookup failed they would try example.com. This allows you to have a single
        configuration entry for your client database hostname (clientdb) that would resolve
        correctly in all environments. This technique has the added advantage of still
        having global services defined in a common top level domain.

    Here's one related (but not equivalent) SO question: http://stackoverflow.com/questions/774490/dns-resolving-based-on-client-ip

    This seems to be related to providing "split horizon" DNS service. Reading that, I see that I will probably need a separate DNS server for each environment. Is this true, or does Windows support some form of "tagging" the records to be visible depending on the requestor's IP?

    Also, cross-posted on ServerFault.
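    On the application side, the fallback described in the quote can also be emulated without touching the DNS server at all (the usual Windows client knob is the adapter's DNS suffix search list, but that may need IT involvement too). A small Python sketch, using the example domains from the quote, that tries the environment-specific suffix first:

        import socket

        # Most specific suffix first; mirrors a search list of
        # "qa.example.com example.com" on a QA box.
        SEARCH_SUFFIXES = ["qa.example.com", "example.com"]

        def resolve_service(name, suffixes=SEARCH_SUFFIXES):
            """Resolve a short service name like 'clientdb' with suffix fallback."""
            for suffix in suffixes:
                fqdn = "%s.%s" % (name, suffix)
                try:
                    return fqdn, socket.gethostbyname(fqdn)
                except socket.gaierror:
                    continue
            raise socket.gaierror("no suffix resolved %s" % name)

        # fqdn, ip = resolve_service("clientdb")

    This only illustrates the lookup order; it does not answer how to configure the Windows DNS server itself.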

    Read the article

  • More than 100 connections to SQL Server 2008 in "sleeping" status - Solved

    - by Allende
    I have big trouble here, well, at my server. I have an ASP.NET web app (framework 4.x) running on my server; all the transactions/selects/updates/inserts are made with ADO.NET. My problem is that after it has been in use for a while (a couple of updates/selects/inserts), I sometimes get more than 100 connections in "sleeping" status when I check the connections on SQL Server with this query:

        SELECT spid, a.status, hostname, program_name, cmd, cpu, physical_io,
               blocked, b.name, loginame
        FROM master.dbo.sysprocesses a
        INNER JOIN master.dbo.sysdatabases b ON a.dbid = b.dbid
        WHERE program_name LIKE '%TMS%'
        ORDER BY spid

    I've been checking my code and closing every connection I open. I'm going to test the new class, but I'm afraid the problem won't be fixed. The connection pool is supposed to keep the connections around to re-use them, but from what I see it doesn't always re-use them. Any idea, besides checking that all the connections are closed after use?

    SOLVED (now I have just one, beautiful connection in "sleeping" status). Besides the answer of David Stratton, I would like to share this link, which helps explain really well how the connection pool works: http://dinesql.blogspot.com/2010/07/sql-server-sleeping-status-and.html

    To be short: you need to close every connection (SqlConnection objects) so that the connection pool can re-use them, and use the same connection string; to ensure this, it is highly recommended to use the one from the web.config. Be careful with DataReaders: you should close their connection too (that was what drove me out of my mind for a while).

    Read the article

  • WCF Discovery finds endpoint but host is "localhost"

    - by Flo
    I am trying to use the Discovery feature in WCF, using http://msdn.microsoft.com/en-us/library/dd456783(v=VS.100).aspx as a starting point. It works fine on my machine, but then I wanted to run the service on a different machine. The service was discovered properly, but the hostname of the found service is always "localhost", which is of course not much use.

    Service endpoint:

        var endpointAddress = new EndpointAddress(new UriBuilder {
            Scheme = Uri.UriSchemeNetTcp,
            Port = port
        }.Uri);
        var endpoint = new ServiceEndpoint(
            ContractDescription.GetContract(typeof(IServiceInterface)),
            new NetTcpBinding(),
            endpointAddress);

    Client:

        static EndpointAddress FindServiceAddress<T>()
        {
            Stopwatch stopwatch = new Stopwatch();
            stopwatch.Start();
            DiscoveryClient discoveryClient = new DiscoveryClient(new UdpDiscoveryEndpoint());

            // Find endpoints
            FindResponse findResponse = discoveryClient.Find(new FindCriteria(typeof(T)));

            Console.WriteLine(string.Format("Searched for {0} seconds. Found {1} Endpoint(s).",
                stopwatch.ElapsedMilliseconds / 1000, findResponse.Endpoints.Count));

            if (findResponse.Endpoints.Count > 0)
            {
                return findResponse.Endpoints[0].Address;
            }
            return null;
        }

    Should I simply set the Host to System.Environment.MachineName?

    Read the article

  • How to stop ejabberd from using mnesia

    - by Eldad Mor
    I'm trying to establish a procedure for restoring my database from a crashed server to a new server. My server is running ejabberd as an XMPP server, and I configured it to use PostgreSQL instead of Mnesia - or so I thought. My procedure goes something like "dump the contents of the original DB, run the new server, restore the contents of the DBs using psql, then run the system". However, when I try running ejabberd again I get a crash:

        =CRASH REPORT==== 3-Dec-2010::22:05:00 ===
          crasher:
            pid: <0.36.0>
            registered_name: []
            exception exit: {bad_return,{{ejabberd_app,start,[normal,[]]},
                                         {'EXIT',"Error reading Mnesia database"}}}
              in function  application_master:init/4

    Here I was thinking that my system was running on PostgreSQL, while it seems I was still using Mnesia. I have several questions:

    1. How can I make sure Mnesia is not being used?
    2. How can I divert all the ejabberd activities to PGSQL?

    This is the modules part of my ejabberd.cfg file:

        {modules,
         [
          {mod_adhoc, []},
          {mod_announce, [{access, announce}]}, % requires mod_adhoc
          {mod_caps, []},
          {mod_configure, []},                  % requires mod_adhoc
          {mod_ctlextra, []},
          {mod_disco, []},
          {mod_irc, []},
          {mod_last_odbc, []},
          {mod_muc, [
                     {access, muc},
                     {access_create, muc},
                     {access_persistent, muc},
                     {access_admin, muc_admin},
                     {max_users, 500}
                    ]},
          {mod_offline_odbc, []},
          {mod_privacy_odbc, []},
          {mod_private_odbc, []},
          {mod_pubsub, [   % requires mod_caps
                        {access_createnode, pubsub_createnode},
                        {plugins, ["default", "pep"]}
                       ]},
          {mod_register, [
                          {welcome_message, none},
                          {access, register}
                         ]},
          {mod_roster_odbc, []},
          {mod_stats, []},
          {mod_time, []},
          {mod_vcard_odbc, []},
          {mod_version, []}
         ]}.

    What am I missing? I am assuming the crash is due to the Mnesia DB being used by ejabberd, and since it's out of sync with the PGSQL DB, it cannot operate correctly - but maybe I'm totally off track here, and would love some direction.

    EDIT: One problem solved. Since I'm using the Amazon cloud, I needed to hardcode ERLANG_NODE so it won't be defined by the hostname (which changes on reboot). This got my ejabberd running, but I still wish to stop using Mnesia, and I wonder which part of ejabberd is still using it and how I can find it.

    Read the article

  • Problem executing script using Python and subprocess.call that works in Bash

    - by Antoine Benkemoun
    Hello, for the first time I am asking for a little bit of help over here, as I am more of a ServerFault person. I am doing some scripting in Python and I've been loving the language so far, yet I have this little problem which is keeping my script from working. Here is the code line in question:

        subprocess.call('xen-create-image --hostname '+nom+' --memory '+memory+
            ' --partitions=/root/scripts/part.tmp --ip '+ip+' --netmask '+netmask+
            ' --gateway '+gateway+' --passwd', shell=True)

    I have tried the same thing with os.popen. All the variables are correctly set. When I execute the command in question in my regular Linux shell, it works perfectly fine, but when I execute it from my Python script, I get bizarre errors. I even replaced subprocess.call() with the print function to make sure I am using the exact output of the command. I went looking into the environment variables of my shell, but they are pretty much the same...

    I'll post the error I am getting, but I'm not sure it's relevant to my problem:

        Use of uninitialized value $lines[0] in substitution (s///) at /usr/share/perl5/Config/IniFiles.pm line 614.
        Use of uninitialized value $_ in pattern match (m//) at /usr/share/perl5/Config/IniFiles.pm line 628.

    I am not a Python expert, so I'm most likely missing something here. Thank you in advance for your help, Antoine
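    One low-risk thing to try (just a sketch; the placeholder values stand in for the variables in the snippet above) is to hand subprocess the arguments as a list instead of building a shell command string, which takes the shell, and any quoting or environment differences it introduces, out of the picture:

        import subprocess

        # Placeholders for the question's variables.
        nom, memory = "vm01", "256"
        ip, netmask, gateway = "192.168.0.10", "255.255.255.0", "192.168.0.1"

        cmd = [
            "xen-create-image",
            "--hostname", nom,
            "--memory", memory,
            "--partitions=/root/scripts/part.tmp",
            "--ip", ip,
            "--netmask", netmask,
            "--gateway", gateway,
            "--passwd",
        ]
        ret = subprocess.call(cmd)   # no shell=True needed with an argument list
        if ret != 0:
            print("xen-create-image exited with status %d" % ret)

    If the errors persist even in this form, they are coming from xen-create-image's own environment (the Config::IniFiles warnings are emitted by its Perl internals) rather than from how Python invokes it.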

    Read the article

  • Apache multiple URL to one domain redirect

    - by Christian Moser
    For the last two days I've been spending a lot of time trying to solve my problem; maybe someone can help me.

    Problem: I need to redirect different URLs to one Tomcat web base dir used for Artifactory. The following URLs should point to the tomcat/artifactory webapp: maven-repo.example.local ; maven-repo.example.local/artifactory ; srv-example/artifactory, where maven-repo.example.local is the DNS name for the server hostname "srv-example". I'm accessing the Tomcat app through the mod_jk module. The webapp is in the ROOT directory.

    This is what I've got so far:

        <VirtualHost *:80>
            # If URL contains "artifactory", strip it down and redirect
            RewriteEngine on
            RewriteCond %{HTTP_HOST} ^\artifactory\$ [NC]
            # (how can I remove 'artifactory' from the redirected parameters?)
            RewriteRule ^(.*)$ http://maven-repo.example.local/$1 [R=301,L]

            ServerName localhost
            ErrorLog "logs/redirect-error_log"
        </VirtualHost>

        <VirtualHost *:80>
            ServerName maven-repo.example.local
            ErrorLog "logs/maven-repo.example.local-error.log"
            CustomLog "logs/maven-repo.example.local-access.log" common

            # calling tomcat webapp in ROOT
            JkMount /* ajp13w
        </VirtualHost>

    The webapp works with "maven-repo.example.local", but with "maven-repo.example.local/artifactory" Tomcat gives a 404: "The requested resource () is not available." It seems that mod_rewrite doesn't take any effect, even if I redirect to another page, e.g. google.com.

    I'm testing on Windows 7 with maven-repo.example.local added in the "system32/drivers/hosts" file.

    Thanks in advance!

    Read the article

  • Setting up ehcache replication - what multicast settings do I need?

    - by Darren Greaves
    I am trying to set up ehcache replication as documented here: http://ehcache.sourceforge.net/EhcacheUserGuide.html#id.s22.2

    This is on a Windows machine but will ultimately run on Solaris in production. The instructions say to set up a provider as follows:

        <cacheManagerPeerProviderFactory
            class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
            properties="peerDiscovery=automatic, multicastGroupAddress=230.0.0.1,
                        multicastGroupPort=4446, timeToLive=32"/>

    and a listener like this:

        <cacheManagerPeerListenerFactory
            class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
            properties="hostName=localhost, port=40001, socketTimeoutMillis=2000"/>

    My questions are:

    1. Are the multicast IP address and port arbitrary (I know the address has to live within a specific range, but do they have to be specific numbers)?
    2. Do they need to be set up in some way by our system administrator (I am on an office network)?
    3. I want to test it locally, so I am running two separate Tomcat instances with the above config. What do I need to change in each one? I know both listeners can't listen on the same port, but what about the provider? Also, are the listener ports arbitrary too?

    I've tried setting it up as above, but in my testing the caches don't appear to be replicated: the value added in one Tomcat's cache is not present in the other cache. Is there anything I can do to debug this situation (other than packet sniffing)? Thanks in advance for any help, been tearing my hair out over this one!
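    As a quick connectivity probe (independent of ehcache, just to confirm that multicast traffic on that group and port actually travels between the machines), something like this rough Python sketch can be run as a receiver on one box and a sender on the other:

        import socket
        import struct

        GROUP, PORT = "230.0.0.1", 4446   # same group/port as the provider config above

        def listen():
            """Join the multicast group and print the first datagram received."""
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            sock.bind(("", PORT))
            mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
            sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
            print(sock.recv(1024))

        def send():
            """Send one datagram to the multicast group with the same TTL as the config."""
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
            sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 32)
            sock.sendto(b"hello from peer", (GROUP, PORT))

    If the receiver never sees the datagram, the problem is the network (switch/VLAN multicast settings, firewalls) rather than the ehcache configuration.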

    Read the article

  • How to exclude a folder in svn checkout in maven?

    - by Udo Fholl
    Hi all, I'm using Maven to check out some projects. I don't want Maven to check out one particular folder, but it seems to ignore the excludes tag in the configuration. This is the SVN structure:

        trunk/
         |
         L_ folder_to_include
         |
         L_ folder_to_ignore

    And here is a sample of the pom.xml:

        <execution>
          <id>checkout_application</id>
          <configuration>
            <connectionUrl>hostname</connectionUrl>
            <checkoutDirectory>checkout_folder</checkoutDirectory>
            <excludes>folder_to_ignore</excludes>
          </configuration>
          <phase>process-resources</phase>
          <goals>
            <goal>checkout</goal>
          </goals>
        </execution>

    Sorry for the poor formatting and my English. I wasn't able to insert proper tabulation (four spaces?). Thank you! Udo.

    Read the article

  • Parsing email with Python

    - by Manuel Ceron
    I'm writing a Python script to process emails returned from Procmail. As suggested in this question, I'm using the following Procmail config:

        :0:
        |$HOME/process_mail.py

    My process_mail.py script is receiving an email via stdin like this:

        From hostname Tue Jun 15 21:43:30 2010
        Received: (qmail 8580 invoked from network); 15 Jun 2010 21:43:22 -0400
        Received: from mail-fx0-f44.google.com (209.85.161.44)
          by ip-73-187-35-131.ip.secureserver.net with SMTP; 15 Jun 2010 21:43:22 -0400
        Received: by fxm19 with SMTP id 19so170709fxm.3
          for <[email protected]>; Tue, 15 Jun 2010 18:47:33 -0700 (PDT)
        MIME-Version: 1.0
        Received: by 10.103.84.1 with SMTP id m1mr2774225mul.26.1276652853684;
          Tue, 15 Jun 2010 18:47:33 -0700 (PDT)
        Received: by 10.123.143.4 with HTTP; Tue, 15 Jun 2010 18:47:33 -0700 (PDT)
        Date: Tue, 15 Jun 2010 20:47:33 -0500
        Message-ID: <[email protected]>
        Subject: TEST 12
        From: Full Name <[email protected]>
        To: [email protected]
        Content-Type: text/plain; charset=ISO-8859-1

        ONE TWO THREE

    I'm trying to parse the message this way:

        >>> import email
        >>> msg = email.message_from_string(full_message)

    I want to get message fields like 'From', 'To' and 'Subject'. However, the message object does not contain any of these fields. What am I doing wrong?
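    For reference, a minimal sketch of how the email module exposes those headers once the full message text (headers included) reaches message_from_string; the sample message here is made up:

        import email

        raw = (
            "From: Full Name <sender@example.com>\r\n"
            "To: someone@example.com\r\n"
            "Subject: TEST 12\r\n"
            "Content-Type: text/plain; charset=ISO-8859-1\r\n"
            "\r\n"
            "ONE TWO THREE\r\n"
        )

        msg = email.message_from_string(raw)
        print(msg["From"])        # header lookup is by name, case-insensitive
        print(msg["To"])
        print(msg["Subject"])
        print(msg.get_payload())  # the body text

        # When fed by procmail on stdin, the equivalent would be roughly:
        #   import sys
        #   msg = email.message_from_file(sys.stdin)

    If the headers come back as None, the string handed to message_from_string most likely starts at the body rather than at the first header line.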

    Read the article

  • Why do I get "Error 6060" when I try to use DBD::Advantage with a 64-bit perl on Linux?

    - by WarheadsSE
    I realize that I am attempting to go beyond the "supported" behavior of the manufacturer's released drivers for Perl; after all, they have only released a package with x86 .so's. However, since I cannot use their package with x64 Perl on a RHEL 5.4 x86_64 box, and rather than maintaining a separate install of x86 Perl just for this one package, I have made an attempt to get this puppy working thanks to the released 64-bit .so's that accompany other driver packages for Advantage.

    What I have done to this point:

    - download the beta 10 DBI drivers, in 32-bit
    - download the beta 10 PHP extension (it contains 32-bit and x86_64)
    - copy the required DLLs into the ads-lib location (e.g. /usr/local/ads/lib64)
    - compile the Perl DBI driver with the path to the lib64 .so's

    Good compilation, good install, good use. The problem is that I always get:

        failed: [iAnywhere Solutions][Advantage SQL][ASA] Error 6060: Advantage Database Server
        not available on specified server. axServerConnect (SQL-HY000)(DBD: db_login/SQLConnect err=-1)

    Does anyone have any ideas?

    EDIT: fixed package name in post title.

    EDIT: Updated title. It appears that it's not just the x64 Perl, but the RHEL 5.4 underneath that may be interfering. As commented below, I managed to shoe-horn an x86 Perl onto the system and compile DBD::Advantage 9.99, later replacing that with 9.10, and neither of these x86 builds would connect either. Neither library (9.99 or 9.10), in either bit-edness, will connect from this x86_64 server to the Windows server's UNC path. I have successfully mounted this share without problems, but still I cannot seem to connect to the 9.1. I have tried:

        \\hostname\PATH
        \\FQDN\PATH
        \\IP\PATH

    and all of these variations with the (default) port 6262 included. My Windows machine connects fine, with both 9.1 and 9.99, from Strawberry Perl.

    Read the article
