Search Results

Search found 5264 results on 211 pages for 'erlang shell'.

Page 9/211 | < Previous Page | 5 6 7 8 9 10 11 12 13 14 15 16  | Next Page >

  • Erlang Edoc in Emacs

    - by Roberto Aloi
    Let's say that I have an Erlang function, with spec. -spec foo(integer(), string()) -> boolean(). foo(_Integer, _String) -> true. My dream would be to generate the edoc from this information within Emacs automatically. The generated code should look like: %%-------------------------------------------------------------------- %% @doc %% Your description goes here %% @spec foo(_Integer::integer(), _String::string()) -> %%% boolean() %% @end %%-------------------------------------------------------------------- -spec foo(integer(), string()) -> boolean(). foo(_Integer, _String) -> true. Does a similar feature already exist?

    Read the article

  • Accurate clock in Erlang

    - by buddhabrot
    I was thinking about how to implement a process that gives the number of discrete intervals in time that occurred since it started. Am I losing accuracy here? How do I implement this without loss of accuracy after a while and after heavy client abuse. I am kind of stumped how to do this in Erlang. -module(clock). -compile([export_all]). start(Time) -> register(clock, spawn(fun() -> tick(Time, 0) end)). stop() -> clock ! stop. tick(Time, Count) -> receive nticks -> io:format("~p ticks have passed since start~n", [Count]) after 0 -> true end, receive stop -> void after Time -> tick(Time, Count + 1) end.
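
    One drift-free alternative (a sketch, not the poster's code; the module and function names are made up, and erlang:monotonic_time/1 assumes OTP 18 or later): instead of counting receive timeouts, which accumulate scheduling delay, remember the start time and derive the tick count on demand.

        -module(clock2).
        -export([start/1, ticks/0, stop/0]).

        %% IntervalMs is the length of one tick in milliseconds.
        start(IntervalMs) ->
            register(clock2, spawn(fun() ->
                Start = erlang:monotonic_time(millisecond),
                loop(Start, IntervalMs)
            end)).

        %% Number of whole intervals elapsed since start/1 was called.
        ticks() ->
            clock2 ! {ticks, self()},
            receive {ticks, N} -> N end.

        stop() ->
            clock2 ! stop.

        loop(Start, IntervalMs) ->
            receive
                {ticks, From} ->
                    Now = erlang:monotonic_time(millisecond),
                    From ! {ticks, (Now - Start) div IntervalMs},
                    loop(Start, IntervalMs);
                stop ->
                    ok
            end.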

    Read the article

  • [Erlang - trace] How to trace for all functions in an Erlang module except for one?

    - by Dlf
    I wanted to trace for all functions in an erlang module, with dbg:tpl, but one of the internal functions took up 95% of the trace file. I then wanted to exclude only that single function and found that it was not as easy as I thought it would be. I know there are great pattern matching possibilities for arguments when tracing. Is there a similar possibility to apply pattern matching for functions? eg.: {'=/=', '$2', function_name} I am open for outside-the-box solutions as well! Thank You!
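
    One approach that works without match specs (a sketch; my_module and noisy_function are placeholders): set the local trace pattern on the whole module first, then clear it again for the one function you want to exclude. dbg:ctpl/2 clears the pattern for every arity of that function.

        %% In the shell:
        dbg:tracer(),
        dbg:p(all, c),                          % trace calls in all processes
        dbg:tpl(my_module, '_', '_', []),       % trace every function in the module
        dbg:ctpl(my_module, noisy_function).    % ...except this one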

    Read the article

  • Is there any functional-like unix shell?

    - by Caruccio
    I'm a (real) newbie to functional programming (in fact I've only had contact with it through Python), but it seems to be a good approach for some list-intensive tasks in a shell environment. I'd love to do something like this: $ [ git clone $host/$repo for repo in repo1 repo2 repo3 ] Is there any Unix shell with this kind of feature? Or maybe some feature to allow easy shell access (commands, env/vars, readline, etc...) from within Python (the idea is to use Python's interactive interpreter as a replacement for bash). EDIT: Maybe a comparative example will clarify. Let's say I have a list composed of dir/file: $ FILES=( build/project.rpm build/project.src.rpm ) And I want to do a really simple task: copy all files to dist/ AND install them in the system (it's part of a build process): Using bash: $ cp ${files[*]} dist/ $ cd dist && rpm -Uvh $(for f in ${files[*]}; do basename $f; done)) Using a "pythonic shell" approach (caution: this is imaginary code): $ cp [ os.path.join('dist', os.path.basename(file)) for file in FILES ] 'dist' Can you see the difference? THAT is what I'm talking about. How can a shell with this kind of stuff built in not exist yet? It's a real pain to handle lists in the shell, even though it's such a common task: lists of files, lists of PIDs, lists of everything. And a really, really important point: using syntax/tools/features everybody already knows: sh and python. IPython seems to be heading in a good direction, but it's bloated: if a var name starts with '$', it does this; if '$$', it does that. Its syntax is not "natural", so many rules and "workarounds" ([ ln.upper() for ln in !ls ] -- syntax error)

    Read the article

  • Handling Erlang inets http client errors

    - by Justin
    I have an Erlang app which makes a large number of http calls to external sites using inets, using the code below case http:request(get, {Url, []}, [{autoredirect, false}], []) of {ok, {{_, Code, _}, _, Body}}-> case Code of 200 -> HandlerFn(Body); _ -> {error, io:format("~s returned HTTP ~p", [Broker, Code])} end; Response -> %% block to handle unexpected responses from inets {error, io:format("~s returned ~p", [Broker, Response])} end. There is an explicit block to handle anything strange inets might return [Response]. Despite this, I still get what look like inets error reports dumped to the console [sample below]. What am I doing wrong here ? Do I need to configure some kind of inets error handler elsewhere ? Thanks. -- =ERROR REPORT==== 24-Apr-2010::06:49:47 === ** Generic server <0.6618.0 terminating ** Last message in was {connect_and_send, {request,#Ref<0.0.0.139358,<0.6613.0,0,http, {"***",80}, "****************", [],get, {http_request_h,undefined,"keep-alive", undefined,undefined,undefined,undefined, undefined,undefined,undefined,undefined, undefined,undefined,undefined,undefined, undefined,undefined,"news.bbc.co.uk", undefined,undefined,undefined,undefined, undefined,undefined,undefined,undefined, undefined,[],undefined,undefined,undefined, undefined,"0",undefined,undefined, undefined,undefined,undefined,undefined,[]}, {[],[]}, {http_options,"HTTP/1.1",infinity,false,[], undefined,false,infinity}, "*******************", [],none,[],1272088179114,undefined,undefined}} * When Server state == {state, {request,#Ref<0.0.0.139358,<0.6613.0,0,http, {"********",80}, "***************", [],get, {http_request_h,undefined,"keep-alive", undefined,undefined,undefined,undefined, undefined,undefined,undefined,undefined, undefined,undefined,undefined,undefined, undefined,undefined,"news.bbc.co.uk", undefined,undefined,undefined,undefined, undefined,undefined,undefined,undefined, undefined,[],undefined,undefined, undefined,undefined,"0",undefined, undefined,undefined,undefined,undefined, undefined,[]}, {[],[]}, {http_options,"HTTP/1.1",infinity,false,[], undefined,false,infinity}, "**********************", [],none,[],1272088179114,undefined,undefined}, undefined,undefined,undefined,undefined,undefined, {[],[]}, {[],[]}, undefined,[],nolimit,nolimit, {options, {undefined,[]}, 0,2,5,120000,2,disabled,false,inet,default, default,[]}, {timers,[],undefined}, httpc_manager,undefined} ** Reason for termination == ** {error,{connect_failed,{#Ref<0.0.0.139358,{error,nxdomain}}}} =ERROR REPORT==== 24-Apr-2010::06:49:47 === HTTPC-MANAGER handler (<0.6618.0, started) failed to connect and/or send request #Ref<0.0.0.139358 Result: {error,{connect_failed,{#Ref<0.0.0.139358,{error,nxdomain}}}}
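
    As far as I can tell from the report itself, the noise is not coming from the calling code: the case around http:request/4 does get an error tuple back, but the httpc handler process that inets spawned for the request also crashes (here on an nxdomain) and logs its own crash report. One pragmatic way to reduce the noise (a sketch; it assumes the Host string is available separately from Url) is to resolve the host before handing the URL to inets:

        case inet:gethostbyname(Host) of
            {ok, _HostEnt} ->
                http:request(get, {Url, []}, [{autoredirect, false}], []);
            {error, Reason} ->
                %% typically nxdomain; the request never reaches a httpc handler
                {error, Reason}
        end.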

    Read the article

  • erlang design of a card game [closed]

    - by user601836
    I would like to discuss with you a possible implementation for a card game in erlang. The only full example I found online is OpenPoker. I would like to create one myself, so here is the implementation I have in mind: A gen_server to represent a deck: when started creates a deck of cards (shuffled). And stores it in its state. provides an handle_call (draw_card) A gen_server to represent the chat room. Stores in its state the registered name of a player process (e.g. player1, player2, luke etc etc). Exports handle_cast to join the chat (executed by default when somebody joins successfully the game) and one to broadcast a chat message to all users by calling an handle_cast on the gen_server representing a player. a gen_fsm to represent a game instance. Has two states (wait_join, and turn). Exports join/1 to join the game, play_card/2 and send_msg/2. One parameter is the pid of the player process. a gen_server to represent the player. Exports only start_link/1 where the parameter is the name to use to register the process (inside the init I call join method of gen_fsm). Has different handle_calls (e.g. get_hand, draw_card) and handle_casts (e.g. play_card, deliver_msg, and send_msg) A gen_server to represent the main process. Exports (join_game/1 which calls player:start_link/1, send_msg/2 to call player's send_msg, play_card/3 to call player's play_card). What do you think of this architecture? Thanks in advance
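
    A minimal sketch of the game gen_fsm described above, just to make the two states concrete (the module name, record fields and the two-player limit are my own assumptions; gen_fsm is used because the question mentions it, although on a recent OTP release gen_statem would be the natural choice):

        -module(game_fsm).
        -behaviour(gen_fsm).

        -export([start_link/0, join/1, play_card/2]).
        -export([init/1, wait_join/2, turn/2,
                 handle_event/3, handle_sync_event/4, handle_info/3,
                 terminate/3, code_change/4]).

        -record(state, {players = []}).

        start_link() ->
            gen_fsm:start_link({local, ?MODULE}, ?MODULE, [], []).

        join(PlayerPid)            -> gen_fsm:send_event(?MODULE, {join, PlayerPid}).
        play_card(PlayerPid, Card) -> gen_fsm:send_event(?MODULE, {play_card, PlayerPid, Card}).

        init([]) ->
            {ok, wait_join, #state{}}.

        %% wait_join: collect players until two have joined, then start the game.
        wait_join({join, Pid}, #state{players = Ps} = S) when length(Ps) < 1 ->
            {next_state, wait_join, S#state{players = [Pid | Ps]}};
        wait_join({join, Pid}, #state{players = Ps} = S) ->
            {next_state, turn, S#state{players = [Pid | Ps]}}.

        %% turn: validate the move, update hands, notify the players, and so on.
        turn({play_card, _Pid, _Card}, S) ->
            {next_state, turn, S}.

        handle_event(_Event, StateName, S)             -> {next_state, StateName, S}.
        handle_sync_event(_Event, _From, StateName, S) -> {reply, ok, StateName, S}.
        handle_info(_Info, StateName, S)               -> {next_state, StateName, S}.
        terminate(_Reason, _StateName, _S)             -> ok.
        code_change(_OldVsn, StateName, S, _Extra)     -> {ok, StateName, S}.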

    Read the article

  • Cryptic Erlang Errors

    - by Jim
    Okay so I started learning erlang recently but am baffled by the errors it keeps returning. I made a bunch of changes but I keep getting errors. The syntax is correct as far as I can tell but clearly I'm doing something wrong. Have a look... -module(pidprint). -export([start/0]). dostuff([]) - receive begin - io:format("~p~n", [This is a Success]) end. sender([N]) - N ! begin, io:format("~p~n", [N]). start() - StuffPid = spawn(pidprint, dostuff, []), spawn(pidprint, sender, [StuffPid]). Basically I want to compile the script, call start, spawn the "dostuff" process, pass its process identifier to the "sender" process, which then prints it out. Finally I want to send a the atom "begin" to the "dostuff" process using the process identifier initially passed into sender when I spawned it. The errors I keep getting occur when I try to use c() to compile the script. Here they are.. ./pidprint.erl:6: syntax error before: '-' ./pidprint.erl:11: syntax error before: ',' What am I doing wrong?
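
    For what it's worth, here is a corrected sketch of what the snippet above seems to be aiming for: the clause arrows need to be ->, and begin is a reserved word in Erlang, so the message atom is renamed to go (the unquoted words inside io:format's argument list also have to become a string or a quoted atom):

        -module(pidprint).
        -export([start/0, dostuff/0, sender/1]).

        dostuff() ->
            receive
                go -> io:format("~p~n", ["This is a Success"])
            end.

        sender(Pid) ->
            Pid ! go,
            io:format("~p~n", [Pid]).

        start() ->
            StuffPid = spawn(pidprint, dostuff, []),
            spawn(pidprint, sender, [StuffPid]).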

    Read the article

  • Adding International support in Erlang Web 1.4

    - by Roberto Aloi
    I'm trying to add international support for a website based on the Erlang Web 1.4. I would like to have a couple of links on every page (the notorious Country flags) that allow the user to set his language session variable. What I have right now is a link like: <li><a href="/session/language/en">English</a></li> Where, in the session controller I do: language(Args) -> LanguageId = proplists:get_value(id, Args), case language_is_supported(LanguageId) of false -> ok; true -> wpart:fset("session:lang", LanguageId) end, {redirect, "/"}. The problem is that, after setting the preferred language, I would like the user to be redirected to the page he was visiting before changing the language. In this case the "__path" variable doesn't help because it contains the language request and not the "previous" one. How could I resolve this situation? I'm probably using the wrong approach but I cannot thing to anything else right now.

    Read the article

  • trap "" HUP v.s Nohup ? How can I run a portion of shell script in nohub mode?

    - by Alex
    I want to run a shell script over the weekend, but I want to make sure that if the terminal loses the connection, my script won't be terminated. I use nohup for the whole script invocation, but I also want to execute some portion of my shell script in such a way that if someone closes my terminal, my script still runs in the background. Here is a simple example: #!/bin/bash echo "Start the trap" trap " " HUP echo "Sleeping for 60 Seconds" sleep 60 echo "I just woke up!" Please suggest what I should do. The trap " " HUP does not seem to work when I close my terminal tab.

    Read the article

  • Using native MySQL driver in Erlang

    - by Mickey Shine
    I am using native MySQL driver (http://code.google.com/p/erlang-mysql-driver/) with mochiweb. When I tried that MySQL driver in shell mode, all woked fine. But when I write some code with Mochiweb, it reported me the following error: =CRASH REPORT==== 4-Jul-2009::04:44:29 === crasher: initial call: mochiweb_socket_server:acceptor_loop/1 pid: <0.61.0> registered_name: [] exception error: no function clause matching mysql:fetch(p1,<<"SELECT * FROM cdb_forums LIMIT 10">>) in function perly_web:loop/2 in call from mochiweb_http:headers/5 ancestors: [perly_web,perly_sup,<0.58.0>] messages: [] links: [<0.60.0>,#Port<0.965>] dictionary: [{mochiweb_request_body,undefined}, {mochiweb_request_qs,[]}, {mochiweb_request_post,[]}, {mochiweb_request_path,"/online"}, {mochiweb_request_cookie, [{"04c_sid","hG9Oyv"}, {"04c_visitedfid","2"}, {"kQx_cookietime","2592000"}, {"kQx_loginuser","admin"}, {"kQx_activationauth", "98b3mdX86fKT9dI4WyKuL61Tqxk%2BW1r6ACpHp9y8itH2xQ"}, {"smile","1D1"}]}] trap_exit: false status: running heap_size: 1597 stack_size: 24 reductions: 5188 neighbours: The code I write in Mochiweb is start(Options) -> {DocRoot, Options1} = get_option(docroot, Options), Loop = fun (Req) -> ?MODULE:loop(Req, DocRoot) end, % we’ll set our maximum to 1 million connections. (default: 2048) mochiweb_http:start([{max, 1000000}, {name, ?MODULE}, {loop, Loop} | Options1]), mysql:start_link(p1, "10.0.0.123", "root", "root", "test"). stop() -> mochiweb_http:stop(?MODULE). loop(Req, DocRoot) -> "/" ++ Path = Req:get(path), case Req:get(method) of Method when Method =:= 'GET'; Method =:= 'HEAD' -> case Path of "online" -> Result1 = mysql:fetch(p1, <<"SELECT * FROM cdb_forums LIMIT 10">>), Body1 = io:format("Result1: ~p~n", [Result1]), Req:ok({"text/plain", Body1}); The connection looks good but when I added Result1 = mysql:fetch(p1, <<"SELECT * FROM cdb_forums LIMIT 10">>), it crashed. Can someone help me? Thanks in advance~ //================================================== updated: I noticed the follwoing information. If that is correct? =PROGRESS REPORT==== 4-Jul-2009::05:49:32 === supervisor: {local,kernel_safe_sup} started: [{pid,<0.65.0>}, {name,inet_gethost_native_sup}, {mfa,{inet_gethost_native,start_link,[]}}, {restart_type,temporary}, {shutdown,1000}, {child_type,worker}] mysql_conn: greeting version "5.1.33-log" (protocol 10) salt "ne0_m'vA" caps 63487 serverchar <<8,2,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0>> salt2 "!|o;vabJ*4bt" mysql_auth send packet 1: <<5,162,0,0,64,66,15,0,8,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,114,111,111,116,0,20,52,235,78, 173,36,251,201,242,172,139,113,231,253,181,245,3, 91,198,111,135>> Link: {ok,<0.62.0>} =SUPERVISOR REPORT==== 4-Jul-2009::05:49:32 === Supervisor: {local,perly_sup} Context: start_error Reason: ok Offender: [{pid,undefined}, {name,perly_web}, {mfa, {perly_web,start, [[{ip,"0.0.0.0"}, {port,8000}, {docroot, "/work/mochiweb-read-only/scripts/perly/priv/www"}]]}}, {restart_type,permanent}, {shutdown,5000}, {child_type,worker}]

    Read the article

  • Cannot spawn an erlang supervisor from the shell.

    - by drfloob
    I've implemented a gen_server and supervisor: test_server and test_sup. I want to test them from the shell/CLI. I've written their start_link functions such that their names are registered locally. I've found that I can spawn the test_server from the command line just fine, but a spawned test_sup does nothing whatsoever. Why is this? For example, I can spawn a test_server by executing: 1> spawn(test_server, start_link, []). <0.39.0> 2> registered(). [...,test_server,...] I can interact with the server, and everything appears fine. However, if I try to do the same thing with test_sup, no new names/Pids are registered, and it looks like my test_server was not spawned at all. I'd assume I coded an error in my supervisor, but this method of starting my supervisor works perfectly fine: 1> {ok, Pid}= test_sup:start_link([]). {ok, <0.39.0>} 2> unlink(Pid). true 3> registered(). [...,test_server,test_sup,...] Why is it that I can spawn a gen_server but not a supervisor?

    Read the article

  • HTTP crawler in Erlang

    - by ctp
    I'm coding on a simple HTTP crawler but I have an issue running the code at the bottom. I'm requesting 50 URLs and get the content of 20+ back. I've generated few files with 150kB size each to test the crawler. So I think the 20+ responses are limited by the bandwidth? BUT: how to tell the Erlang snippet not to quit until the last file is not fetched? The test data server is online, so plz try the code out and any hints are welcome :) -module(crawler). -define(BASE_URL, "http://46.4.117.69/"). -export([start/0, send_reqs/0, do_send_req/1]). start() -> ibrowse:start(), proc_lib:spawn(?MODULE, send_reqs, []). to_url(Id) -> ?BASE_URL ++ integer_to_list(Id). fetch_ids() -> lists:seq(1, 50). send_reqs() -> spawn_workers(fetch_ids()). spawn_workers(Ids) -> lists:foreach(fun do_spawn/1, Ids). do_spawn(Id) -> proc_lib:spawn_link(?MODULE, do_send_req, [Id]). do_send_req(Id) -> io:format("Requesting ID ~p ... ~n", [Id]), Result = (catch ibrowse:send_req(to_url(Id), [], get, [], [], 10000)), case Result of {ok, Status, _H, B} -> io:format("OK -- ID: ~2..0w -- Status: ~p -- Content length: ~p~n", [Id, Status, length(B)]); Err -> io:format("ERROR -- ID: ~p -- Error: ~p~n", [Id, Err]) end. That's the output: Requesting ID 1 ... Requesting ID 2 ... Requesting ID 3 ... Requesting ID 4 ... Requesting ID 5 ... Requesting ID 6 ... Requesting ID 7 ... Requesting ID 8 ... Requesting ID 9 ... Requesting ID 10 ... Requesting ID 11 ... Requesting ID 12 ... Requesting ID 13 ... Requesting ID 14 ... Requesting ID 15 ... Requesting ID 16 ... Requesting ID 17 ... Requesting ID 18 ... Requesting ID 19 ... Requesting ID 20 ... Requesting ID 21 ... Requesting ID 22 ... Requesting ID 23 ... Requesting ID 24 ... Requesting ID 25 ... Requesting ID 26 ... Requesting ID 27 ... Requesting ID 28 ... Requesting ID 29 ... Requesting ID 30 ... Requesting ID 31 ... Requesting ID 32 ... Requesting ID 33 ... Requesting ID 34 ... Requesting ID 35 ... Requesting ID 36 ... Requesting ID 37 ... Requesting ID 38 ... Requesting ID 39 ... Requesting ID 40 ... Requesting ID 41 ... Requesting ID 42 ... Requesting ID 43 ... Requesting ID 44 ... Requesting ID 45 ... Requesting ID 46 ... Requesting ID 47 ... Requesting ID 48 ... Requesting ID 49 ... Requesting ID 50 ... 
OK -- ID: 49 -- Status: "200" -- Content length: 150000 OK -- ID: 47 -- Status: "200" -- Content length: 150000 OK -- ID: 50 -- Status: "200" -- Content length: 150000 OK -- ID: 17 -- Status: "200" -- Content length: 150000 OK -- ID: 48 -- Status: "200" -- Content length: 150000 OK -- ID: 45 -- Status: "200" -- Content length: 150000 OK -- ID: 46 -- Status: "200" -- Content length: 150000 OK -- ID: 10 -- Status: "200" -- Content length: 150000 OK -- ID: 09 -- Status: "200" -- Content length: 150000 OK -- ID: 19 -- Status: "200" -- Content length: 150000 OK -- ID: 13 -- Status: "200" -- Content length: 150000 OK -- ID: 21 -- Status: "200" -- Content length: 150000 OK -- ID: 16 -- Status: "200" -- Content length: 150000 OK -- ID: 27 -- Status: "200" -- Content length: 150000 OK -- ID: 03 -- Status: "200" -- Content length: 150000 OK -- ID: 23 -- Status: "200" -- Content length: 150000 OK -- ID: 29 -- Status: "200" -- Content length: 150000 OK -- ID: 14 -- Status: "200" -- Content length: 150000 OK -- ID: 18 -- Status: "200" -- Content length: 150000 OK -- ID: 01 -- Status: "200" -- Content length: 150000 OK -- ID: 30 -- Status: "200" -- Content length: 150000 OK -- ID: 40 -- Status: "200" -- Content length: 150000 OK -- ID: 05 -- Status: "200" -- Content length: 150000 Update: thanks stemm for the hint with the wait_workers. I've combined your and mine code but same behaviour :( -module(crawler). -define(BASE_URL, "http://46.4.117.69/"). -export([start/0, send_reqs/0, do_send_req/2]). start() -> ibrowse:start(), proc_lib:spawn(?MODULE, send_reqs, []). to_url(Id) -> ?BASE_URL ++ integer_to_list(Id). fetch_ids() -> lists:seq(1, 50). send_reqs() -> spawn_workers(fetch_ids()). spawn_workers(Ids) -> %% collect reference to each worker Refs = [ do_spawn(Id) || Id <- Ids ], %% wait for response from each worker wait_workers(Refs). wait_workers(Refs) -> lists:foreach(fun receive_by_ref/1, Refs). receive_by_ref(Ref) -> %% receive message only from worker with specific reference receive {Ref, done} -> done end. do_spawn(Id) -> Ref = make_ref(), proc_lib:spawn_link(?MODULE, do_send_req, [Id, {self(), Ref}]), Ref. do_send_req(Id, {Pid, Ref}) -> io:format("Requesting ID ~p ... ~n", [Id]), Result = (catch ibrowse:send_req(to_url(Id), [], get, [], [], 10000)), case Result of {ok, Status, _H, B} -> io:format("OK -- ID: ~2..0w -- Status: ~p -- Content length: ~p~n", [Id, Status, length(B)]), %% send message that work is done Pid ! {Ref, done}; Err -> io:format("ERROR -- ID: ~p -- Error: ~p~n", [Id, Err]), %% repeat request if there was error while fetching a page, do_send_req(Id, {Pid, Ref}) %% or - if you don't want to repeat request, put there: %% Pid ! {Ref, done} end. 
Running the crawler works fine for a handful of files, but then the code doesn't even fetch the entire files (file size 150000 bytes each) - the crawler fetches some files partially, see the following web server log :( 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /10 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /1 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /3 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /8 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /39 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /7 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /6 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /2 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /5 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /50 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /9 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /44 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /38 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /47 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /49 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /43 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /37 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /46 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /48 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /36 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /42 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /41 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /45 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /17 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /35 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /16 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /15 HTTP/1.1" 200 17020 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /21 HTTP/1.1" 200 120360 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /40 HTTP/1.1" 200 117600 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /34 HTTP/1.1" 200 60660 "-" "-" Any hints are welcome. I have no clue what's going wrong there :(
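
    One more way to keep the parent alive until the last response has arrived (a sketch meant to drop into the poster's module; do_send_req/1 refers to the first version above): spawn each worker with spawn_monitor/1 and wait for every 'DOWN' message, so there is no hand-rolled reference bookkeeping and a crashing worker cannot leave the parent waiting forever.

        spawn_and_wait(Ids) ->
            %% one monitor reference per worker
            Refs = [element(2, spawn_monitor(fun() -> do_send_req(Id) end)) || Id <- Ids],
            %% block until every worker has exited, normally or not
            [receive {'DOWN', Ref, process, _Pid, _Reason} -> ok end || Ref <- Refs],
            ok.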

    Read the article

  • Setting default working directory/drive in Emacs shell on Windows

    - by Victor K.
    Hello, how can I change the default working directory/drive for the shell in Emacs (on Windows)? Normally, the shell starts in the same directory as the file in the current buffer. However, when my current file is on the D: drive, it starts in C:. Manually changing the drive to D: in the shell brings me to my directory, of course, but I want to avoid that extra step. Is this possible?

    Read the article

  • Shell extension to display thumbnails of PDF files

    - by hamilton
    Foxit PDF doesn't have a shell extension to display thumbnails of PDF files in Windows Explorer (where the thumbnails are shown instead of PDF document icons). Is there a shell extension that does that, i.e. shows thumbnails instead of the PDF icon? BTW, PDFXchange and Adobe have shell extensions such that the thumbnails are shown instead of PDF document icons.

    Read the article

  • Override LDAP shell

    - by Incredible
    I have an LDAP server with a predefined shell (bash) set in it. But there are some machines on which I want a different shell to be used whenever a user logs in, instead of the shell stored in LDAP. How can I do this? Can someone give me some direction on this? Thank you

    Read the article

  • Shell Script to Start Mysql Server if not running

    - by user103373
    I have written a shell script to start the MySQL server and send a mail to the admin user if it is restarted via the script. The issue I am facing: if I run this shell script in a terminal it works perfectly, but if the same script runs via a cron job it only sends the mail to the user and the problem remains the same. Is this problem related to permissions, and how can I resolve it? Shell Script-------- #!/bin/bash EMAIL="[email protected]" SERVICE='mysql' if ps ax | grep -v grep | grep $SERVICE > /dev/null then echo "$SERVICE service running, everything is fine" else echo "$SERVICE is not running" /etc/init.d/mysql start cat <<EOF | msmtp -a gmail $EMAIL Subject: "Alert (Test Server) : Mysql Service is not running (Manually Restarted)" Mysql Server Restarted at: `date` EOF EXIT I am using msmtp to send mail to the user on an Ubuntu 12.04 server.

    Read the article

  • How do the environments of a standard Terminal command-line and a bash script differ?

    - by fred.bear
    I know there is something different about the environment of the Terminal command-line and the environment in a bash script, but I don't know what that difference is... Here is the example which finally led me to ask this question; it may flush out some of the differences. I am trying to strip leading '0's from a number, with this command. var="000123"; var="${var##+(0)}" ; echo $var When I run this command from the Terminal's command-line, I get: 123 However, when I run it from within a script, it doesn't work; I get: 000123 I'm using Ubuntu 10.04, and tried all the following with the same results: GNOME Terminal 2.30.2 Konsole 2.4.5 #!/bin/bash #!/bin/sh What is causing this difference? Even if some upgrade will make it work in scripts... I am trying to find out the what and why, so in the future I'll know what to look out for.

    Read the article

  • md5deep error with after compression and extraction

    - by Sai
    I am using md5deep to generate the checksum for the contents of an entire directory. After generation, if I use the checksum file and run a check, there are no errors. However, if I compress the file and extract it, it seems to generate an error about a missing file. The file is just called .md5. There is no file called .md5 inside the directory. It seems to create a file that is not existent. This problem exists when I use md5sum or md5deep. Any thoughts on why this is happening?

    Read the article

  • How to switch between the upper and lower pane in emacs?

    - by Anthony Kong
    I am using the erlang mode in Aquamacs. The mode, by default, creates a new pane and buffer "*erlang*" when I hit C-C C-K to compile an erlang file. (as seen in the attached screen shot) What is the easiest way to switch between these two panes? I do not think "C-x b" is applicable in this case because 'C-X b' then "*erlang" is slow considering I have to switch between my files and the erlang shell rather frequently.

    Read the article

  • How to make Processes Run Parallel in Erlang?

    - by Ankit S
    Hello, startTrains() -> TotalDist = 100, Trains = [trainA,trainB ], PID = spawn(fun() -> train(1,length(Trains)) end), [ PID ! {self(),TrainData,TotalDist} || TrainData <- Trains], receive {_From, Mesg} -> error_logger:info_msg("~n Mesg ~p ~n",[Mesg]) after 10500 -> refresh end. So I created two processes named trainA and trainB. I want to increment each of them by 5 until it reaches 100. I made separate processes so that each train (process) increments its position in parallel, but I was surprised to get the output sequentially, i.e. process trainA finishes and then process trainB starts. I want them to increment simultaneously, running like this: trainA 10 trainB 0 trainA 15 trainB 5 .... trainA 100 trainB 100 but I am getting trainA 0 .... trainA 90 trainA 95 trainA 100 trainA ends trainB 0 trainB 5 trainB 10 ..... trainB 100 How can I make the processes run in parallel/simultaneously? Hope you get my question. Please help me.
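
    As far as one can tell from the snippet, only a single train/2 process is spawned and both trains are sent to it as messages, which would explain the sequential output. A sketch of one way to get the interleaving described (the module name, function names and the 100 ms pause are my own choices):

        -module(trains).
        -export([start_trains/0]).

        start_trains() ->
            Total  = 100,
            Names  = [trainA, trainB],
            Parent = self(),
            %% one process per train, all advancing independently
            [spawn(fun() -> run(Name, 0, Total, Parent) end) || Name <- Names],
            %% wait until every train has reported in
            [receive {done, Name} -> ok end || Name <- Names],
            ok.

        run(Name, Pos, Total, Parent) when Pos > Total ->
            io:format("~p ends~n", [Name]),
            Parent ! {done, Name};
        run(Name, Pos, Total, Parent) ->
            io:format("~p ~p~n", [Name, Pos]),
            timer:sleep(100),
            run(Name, Pos + 5, Total, Parent).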

    Read the article

  • Erlang OTP application design

    - by Toby Hede
    I am struggling a little coming to grips with the OTP development model as I convert some code into an OTP app. I am essentially making a web crawler and I just don't quite know where to put the code that does the actual work. I have a supervisor which starts my worker: -behaviour(supervisor). -define(CHILD(I, Type), {I, {I, start_link, []}, permanent, 5000, Type, [I]}). init(_Args) -> Children = [ ?CHILD(crawler, worker) ], RestartStrategy = {one_for_one, 0, 1}, {ok, {RestartStrategy, Children}}. In this design, the Crawler Worker is then responsible for doing the actual work: -behaviour(gen_server). start_link() -> gen_server:start_link(?MODULE, [], []). init([]) -> inets:start(), httpc:set_options([{verbose_mode,true}]), % gen_server:cast(?MODULE, crawl), % ok = do_crawl(), {ok, #state{}}. do_crawl() -> % crawl! ok. handle_cast(crawl}, State) -> ok = do_crawl(), {noreply, State}; do_crawl spawns a fairly large number of processes and requests that handle the work of crawling via http. Question, ultimately is: where should the actual crawl happen? As can be seen above I have been experimenting with different ways of triggering the actual work, but still missing some concept essential for grokering the way things fit together. Note: some of the OTP plumbing is left out for brevity - the plumbing is all there and the system all hangs together
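
    One common pattern (a sketch of the idea, not necessarily the right final design for this app): let init/1 return immediately so the supervisor is not blocked, and kick off the crawl with a cast to self, essentially the line that is commented out above. The work then happens inside handle_cast, in the worker's own process.

        -module(crawler).
        -behaviour(gen_server).

        -export([start_link/0]).
        -export([init/1, handle_call/3, handle_cast/2, handle_info/2,
                 terminate/2, code_change/3]).

        -record(state, {}).

        start_link() ->
            gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

        init([]) ->
            inets:start(),
            %% return at once; the first crawl is triggered asynchronously
            gen_server:cast(self(), crawl),
            {ok, #state{}}.

        handle_cast(crawl, State) ->
            ok = do_crawl(),
            {noreply, State}.

        handle_call(_Request, _From, State) -> {reply, ok, State}.
        handle_info(_Info, State)           -> {noreply, State}.
        terminate(_Reason, _State)          -> ok.
        code_change(_OldVsn, State, _Extra) -> {ok, State}.

        do_crawl() ->
            %% spawn and collect the actual fetching processes here
            ok.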

    Read the article

  • Overuse of guards in Erlang?

    - by dagda1
    Hi, I have the following function that takes a number like 5 and creates a list of all the numbers from 1 to that number so create(5). returns [1,2,3,4,5]. I have over used guards I think and was wondering if there is a better way to write the following: create(N) -> create(1, N). create(N,M) when N =:= M -> [N]; create(N,M) when N < M -> [N] ++ create(N + 1, M). Thanks, Paul
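
    Two alternatives come to mind (a sketch): drop the first guard by matching the equal arguments directly, or skip the hand-written recursion entirely with lists:seq/2.

        create(N) -> create(1, N).

        %% the N =:= M guard becomes a plain pattern match
        create(M, M) -> [M];
        create(N, M) when N < M -> [N | create(N + 1, M)].

        %% or simply:
        create2(N) -> lists:seq(1, N).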

    Read the article

  • Erlang ODBC parameter query with null parameters

    - by Schlomer
    Is it possible to pass null values to parameter queries? For example Sql = "insert into TableX values (?,?)". Params = [{sql_integer, [Val1]}, {sql_float, [Val2]}]. % Val2 may be a float, or it may be the atom, undefined odbc:param_query(OdbcRef, Sql, Params). Now, of course odbc:param_query/3 is going to complain if Val2 is undefined when trying to match to a sql_float, but my question is... Is it possible to use a parameterized query, such as: Sql = "insert into TableY values (?,?,?,?,?,?,?,?,?)". with any null parameters? I have a use case where I am dumping a large number of real-time data into a database by either inserting or updating. Some of the tables I am updating have a dozen or so nullable fields, and I do not have a guarantee that all of the data will be there. Concatenating a SQL together for each query, checking for null values seems complex, and the wrong way to do it. Having a parameterized query for each permutation is simply not an option. Any thoughts or ideas would be fantastic! Thank you!

    Read the article

  • Decode JSON with mochijson2 in Erlang

    - by Jon Romero
    I have a var that has some JSON data: A = <<"{\"job\": {\"id\": \"1\"}}">>. Using mochijson2, I decode the data: Struct = mochijson2:decode(A). And now I have this: {struct,[{<<"job">>,{struct,[{<<"id">>,<<"1">>}]}}]} I am trying to read (for example), "job" or "id". I tried using struct.get_value but it doesn't seem to work. Any ideas?
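
    mochijson2 returns plain {struct, Proplist} tuples with binary keys, so one way to read the fields (a sketch) is ordinary proplists lookups:

        A = <<"{\"job\": {\"id\": \"1\"}}">>,
        {struct, Props}    = mochijson2:decode(A),
        {struct, JobProps} = proplists:get_value(<<"job">>, Props),
        Id = proplists:get_value(<<"id">>, JobProps).
        %% Id =:= <<"1">>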

    Read the article

  • Erlang: Find intersections in a ets table

    - by Yadira Suazo
    I have an ETS table with the following items: [at, {other_place}, me], [other_place, {place}, {other_place}], [at, {place}, me], [on, {surface}, {object}], [small, {object}] And I have the list [[at, door, me],[on, floor, chair],[small, bannanas]] I need to compare every item in the ETS table to an item in the list, and if the first element is the same atom, replace the items in curly brackets. So if I have [at, door, me], it matches [at, {other_place}, me], and I have to change {other_place} to the atom door throughout the ETS table.

    Read the article
