Search Results

Search found 22000 results on 880 pages for 'worker process'.


  • How to Deploy an ASP.NET Web API- and Browser-based Application to a Production Environment [closed]

    - by lmttag
    We have an ASP.NET Web API server that serves up a SQL Server data-driven website; the API uses JSON to transfer data from SQL Server to the front end. We need to move it to an internal production environment (nothing will be exposed on the public Internet) and we're having problems, or are just not understanding what needs to be done. There are two domains: the corporate domain, where all users log in normally, and the process domain, which contains the database the Web API needs to access. The IT staff wants to put a DMZ between the two domains to house the IIS app and shield the users on the corporate domain from direct access into the process domain. The ideal configuration is:

    corp domain (end users) <-> firewall (open port 80) <-> DMZ (web server running IIS) <-> firewall (open port 80 or 1433?) <-> process domain (IIS for Web API and SQL Server)

    We don't really understand how to deploy our browser/Web API application in this scenario. Do we need to break up the application so that all the client code is on the IIS server in the DMZ, while the Web API gets installed on the server in the process domain? Or does the entire app (client code and Web API) stay together on the IIS server in the DMZ, which then somehow accesses the SQL Server instance to get data? From the IIS server and app in the DMZ, would you simply access the Web API on the server in the process domain by going to http://server/appname/api/getitems? In the second firewall, between the DMZ and the process domain, would you have to open port 1433, or just port 80 since the Web API is an HTTP endpoint? Or is there some better way of deploying this (i.e., how are ASP.NET Web API single-page applications, written entirely in HTML5 and JavaScript, supposed to be deployed to production environments)? NB: The servers are Win2k8 R2, SQL Server 2k8 R2, and IIS 7.5.
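
    If the split design is chosen (client code in the DMZ, Web API in the process domain), here is a hedged sketch of the DMZ piece, assuming IIS 7.5 with the URL Rewrite and Application Request Routing (ARR) modules installed and proxying enabled, so the DMZ server forwards API calls over port 80; process-server is a placeholder for the Web API host:

        <!-- web.config fragment on the DMZ site: forward /api/* to the process domain -->
        <system.webServer>
          <rewrite>
            <rules>
              <rule name="ProxyApiToProcessDomain" stopProcessing="true">
                <match url="^api/(.*)" />
                <action type="Rewrite" url="http://process-server/appname/api/{R:1}" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>

    With that shape, only port 80 would need to be open on the second firewall; 1433 could stay closed, because nothing in the DMZ talks to SQL Server directly.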

  • Log backups "stalling" on SQL 2008?

    - by MattK
    I have inherited a box running SQL Server 2008 on Windows 2003, and have had a few events where largish (35 GB) log backups "stall", both before and after the installation of SQL 2008 SP1. The server log-ships to a standby, so regular log backups are taken at 15-minute intervals. However, after an index reorg causes the log to grow to about 35 GB (on a DB with about 17 GB of data), the next log backup runs to ~95% completion, then seems to stop. The process shows as suspended, with a wait state of BACKUPIO. CPU, read, and write activity on the SPID also does not change, and the process stays in this state for hours, when normally a backup of this size should complete in about 20 minutes. This server has a single RAID-1 volume, so the source database files and the destination backup files are on the same volume. However, I cannot determine whether another process is blocking the backup. The backup SPID cannot be killed, and the only way to terminate the log backup and clear the lock on the backup file is to cycle the SQL Server service. There was one event where the backup terminated completely, with an error that another process had locked the backup file, but no details about what that process was. Can anyone suggest a cause, or a diagnostic process for this situation?
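
    A hedged starting point for the diagnosis: while the backup sits in BACKUPIO, snapshot the request and any blocker from the dynamic management views (SQL Server 2008):

        -- what is the backup waiting on, and is any session blocking it?
        SELECT r.session_id, r.command, r.status, r.wait_type, r.wait_time,
               r.percent_complete, r.blocking_session_id
        FROM   sys.dm_exec_requests AS r
        WHERE  r.command LIKE 'BACKUP%';

    A non-zero blocking_session_id would identify the process holding the lock on the backup file's session, which is the detail missing from the error event.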

  • Linux CentOS strange memory readings

    - by user2008937
    I am actually a young junior sysadmin, and I have a question: I am trying to understand how Linux deals with memory. While playing around with different monitoring programs I found something strange. When I run top on my laptop, it shows me that the firefox process with pid 8778 takes 18.3% of memory (the %MEM column).

    grep "MemTotal" /proc/meminfo

    The above command gives me 1848336 kB / 1024 = 1805 MB of memory (that's right, I have 2 GB of RAM). So if the firefox process takes 18.3% of memory (according to top's %MEM column), then it takes 0.183 * 1805, which is approximately 325 MB of memory. Quite a lot for Firefox, but fair enough: in Linux there are lots of shared libraries that programs commonly use (like the famous libc), and those libraries are counted in the memory utilization of every process that uses them, even though they actually map the same file (a single object in memory). So top may overstate memory utilization because of those shared libraries. Well, then it is time to use pmap, which should show the real memory utilization of a process. But:

    pmap -d $(pidof firefox)
    mapped: 983460K    writeable/private: 757164K    shared: 66416K

    pmap shows that 983460 / 1024 = 993 MB of memory is mapped by this process, which is in fact much bigger than the utilization shown by top. What's wrong here? How can pmap show more than top, when top also counts the shared libraries (which are in fact single objects in memory) for each process that uses them, and pmap omits them?

    Regards, Krzysztof
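
    Part of the gap is that pmap's "mapped" total counts the virtual address space of every mapping, while top's %MEM is based on resident memory (RES), so the two are not measuring the same thing. For a fairer per-process number, here is a hedged sketch assuming a kernel new enough to expose Pss (proportional set size: shared pages divided by the number of processes sharing them) in /proc/<pid>/smaps, and a single firefox process:

        # sum firefox's proportional set size, in kB
        awk '/^Pss:/ { total += $2 } END { print total " kB" }' /proc/$(pidof firefox)/smaps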

  • use Amazon EC2 tools in Capistrano to get the servers to push the code

    - by APZ
    I am trying to use the EC2 tools to get all the machines with a particular tag into some kind of array in the /config/deploy/prod.rb file in Capistrano. Something like this (untested pseudocode) in prod.rb:

    workers-array[] = $(ec2-describe-instances -F vpc-id=1234 -F tag:Env=prod -F tag:SystemType=worker)
    for (i = 0; i < workers-array.len; i++) {
        role :worker-A, workers-array[i]
    }

    I am not sure how to do this in Capistrano; I am new to Ruby too. Any help on this would be really appreciated.
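
    A hedged Ruby sketch of the same idea, assuming the classic ec2-describe-instances CLI is on the PATH and emits one tab-separated INSTANCE line per machine (the column holding the DNS name varies between tool versions, so verify the index against your actual output):

        # prod.rb sketch: shell out to the EC2 tools and register each host
        output = `ec2-describe-instances -F vpc-id=1234 -F tag:Env=prod -F tag:SystemType=worker`
        hosts = output.split("\n").grep(/^INSTANCE/).map { |line| line.split("\t")[3] } # DNS column: verify
        hosts.each { |host| role :worker_A, host } # note: :worker-A is not a valid Ruby symbol literal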

  • Resque Runtime Error at /workers: wrong number of arguments for 'exists' command

    - by Superflux
    I'm having a runtime error when I'm looking at the "workers" tab in resque-web (localhost). Everything else works. Edit: when this error occurs, I also have some (3 or 4) unknown workers 'not working'. I think they are responsible for the error, but I don't understand how they got here. Can you help me with this? Did I do something wrong?

    Config: Resque 1.8.5 as a gem in a Rails 2.3.8 app on Snow Leopard; redis 1.0.7 / rack 1.1 / sinatra 1.0 / vegas 0.1.7

    The exception is raised at line 558 of redis-1.0.7/lib/redis/client.rb, in format_error_reply:

        def format_error_reply(line)
          raise "-" + line.strip
        end

    Condensed backtrace (innermost frames first, repeated source dumps trimmed):

        redis-1.0.7/lib/redis/client.rb:
          format_error_reply <- format_reply <- read_reply <- process_command
          <- raw_call_command <- maybe_lock <- call_command <- method_missing
        redis-namespace-0.4.4/lib/redis/namespace.rb: send <- method_missing
        resque-1.8.5/lib/resque/worker.rb: state
          -- redis.exists("worker:#{self}") ? :working : :idle
        resque-1.8.5/lib/resque/server/views/workers.erb: the workers table loop
          -- <% for worker in (workers = resque.workers.sort_by { |w| w.to_s }) %> ... worker.state per row
        sinatra-1.0/lib/sinatra/base.rb (route dispatch) -> rack-1.1.0 -> mongrel-1.1.5
          -> vegas-0.1.7 runner -> resque-1.8.5/bin/resque-web -> /usr/bin/resque-web
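
    If the 'unknown workers' are registrations left behind by dead worker processes, here is a hedged cleanup sketch to run from a console. This is an assumption about the cause, and unregister_worker permanently deletes worker state, so back up the Redis data first:

        require 'resque'

        Resque.workers.each do |worker|
          host, pid, _queues = worker.to_s.split(':')
          next unless host == `hostname`.chomp  # only judge processes on this machine
          begin
            Process.kill(0, pid.to_i)           # signal 0 just checks that the pid exists
          rescue Errno::ESRCH
            worker.unregister_worker            # no such process, so the entry is stale
          end
        end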

  • Task queue java

    - by user268515
    Hi, I'm new to the Task Queue Java API and I tried a simple example with it. My idea is to have the queued task hit a servlet and print some statement in the servlet, but it doesn't work. I mapped web.xml and used the default queue. I don't get any error, but the task never reaches the servlet. This is the code I followed:

    taskq.java:

    import java.io.IOException;
    import javax.servlet.http.*;
    import com.google.appengine.api.labs.taskqueue.Queue;
    import com.google.appengine.api.labs.taskqueue.QueueFactory;
    import static com.google.appengine.api.labs.taskqueue.TaskOptions.Builder.url;

    public class taskq extends HttpServlet {
        public void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            Queue queue = QueueFactory.getDefaultQueue();
            System.out.println("taskqueue");
            queue.add(url("/worker"));
        }
    }

    worker.java:

    import java.io.IOException;
    import javax.servlet.http.*;

    public class worker extends HttpServlet {
        private static final long serialVersionUID = 1L;

        public void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            String s = "crimsom";
            System.out.println(s);
        }
    }

    Please help me with this issue. Regards, Sharun.
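
    One likely culprit, offered as an assumption: the task queue invokes its worker URL with HTTP POST by default, so a servlet that only overrides doGet never sees the task. A minimal sketch:

        import java.io.IOException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class worker extends HttpServlet {
            // the queue POSTs to /worker, so the work belongs in doPost
            @Override
            public void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                System.out.println("crimsom");
            }
        }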

  • ruby on rails group by with null values problem

    - by winter sun
    I have an hours table in which I store user time-tracking information. The table consists of the following columns: project_id, task_id (optional, can be null), worker_id, reported_date, working_hours. Each worker enters several records per day, so generally the table looks like this:

    id | project_id | worker_id | task_id | reported_date | working_hours
     1 |          1 |         1 |       1 |    10/10/2011 |             4
     2 |          1 |         1 |       1 |    10/10/2011 |            14
     3 |          1 |         1 |         |    10/10/2011 |             4
     4 |          1 |         1 |         |    10/10/2011 |            14

    task_id is not a required field, so there can be times when the user does not select it and the task_id cell is empty. Now I need to display the data using a group by clause, so the result should be something like this:

    project_id | worker_id | task_id | working_hours
             1 |         1 |       1 |            18
             1 |         1 |         |            18

    I did the following grouping:

    @group_hours = Hour.group('project_id, worker_id, task_id').select('project_id, task_id, worker_id, sum(working_hours) as working_hours_sum')

    My view looks like this:

    <% @group_hours.each do |b| %>
      <%= b.project.name if b.project %>
      <%= b.worker.First_name if b.worker %>
      <%= b.task.name if b.task %>
      <%= b.working_hours_sum %>
    <% end %>

    This works, but only if task_id is not null. When task_id is null it presents all the records without grouping them, like this:

    project_id | worker_id | task_id | working_hours
             1 |         1 |       1 |            18
             1 |         1 |         |             4
             1 |         1 |         |            14

    I will appreciate any kind of solution to this problem.
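
    If the NULLs turn out to be the culprit, one hedged workaround sketch, assuming MySQL and that 0 is never a real task_id: collapse the NULL task_ids into one explicit bucket with COALESCE:

        @group_hours = Hour.group('project_id, worker_id, COALESCE(task_id, 0)').
          select('project_id, task_id, worker_id, SUM(working_hours) AS working_hours_sum')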

  • Java anonymous class efficiency implications

    - by Po
    Is there any difference in efficiency (e.g. execution time, code size, etc.) between these two ways of doing things? Below are contrived examples that create objects and do nothing, but my actual scenarios may be creating new Threads, Listeners, etc. Assume the following pieces of code happen in a loop, so that it might make a difference.

    Using anonymous classes:

    void doSomething() {
        for (/* assume some loop */) {
            final Object obj1 = ..., obj2 = ...; // some free variables
            IWorker anonymousWorker = new IWorker() {
                public void doWork() {
                    // do things that refer to obj1 and obj2
                }
            };
        }
    }

    Defining a class first:

    void doSomething() {
        for (/* assume some loop */) {
            Object obj1 = ..., obj2 = ...;
            IWorker worker = new Worker(obj1, obj2);
        }
    }

    static class Worker implements IWorker {
        private Object obj1, obj2;
        public Worker(Object obj1, Object obj2) { /* blah blah */ }
        @Override
        public void doWork() {}
    }

    Thank you :)

  • For Loop help In a Hash Cracker Homework.

    - by aaron burns
    On the homework I am working on, we are making a hash cracker. I am implementing it so that my Cracker.java calls Worker.java, and Worker.java implements Runnable. Worker takes the start and end of a slice of a list of chars, the hash it is to crack, and the max length of the password that made the hash. I know I want to do a loop in run(), BUT I cannot think of how to write it so it goes up to the given max password length. I have posted the code I have so far; any directions or areas I should look into would help. I thought there was a way to do this with a certain way of writing the loop, but I don't know or can't find the correct syntax. Oh, also: in main I divide up the work so a number of threads can be chosen, and I know that as of right now it only works for thread counts that evenly divide the 40 possible chars.

    package HashCracker;

    import java.util.*;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    public class Cracker {
        // Array of chars used to produce strings
        public static final char[] CHARS = "abcdefghijklmnopqrstuvwxyz0123456789.,-!".toCharArray();
        public static final int numOfChar = 40;

        /* Given a byte[] array, produces a hex String, such as "234a6f",
           with 2 chars for each byte in the array. (provided code) */
        public static String hexToString(byte[] bytes) {
            StringBuffer buff = new StringBuffer();
            for (int i = 0; i < bytes.length; i++) {
                int val = bytes[i];
                val = val & 0xff; // remove higher bits, sign
                if (val < 16) buff.append('0'); // leading 0
                buff.append(Integer.toString(val, 16));
            }
            return buff.toString();
        }

        /* Given a string of hex byte values such as "24a26f", creates a byte[]
           array of those values, one byte value -128..127 for each 2 chars. (provided code) */
        public static byte[] hexToArray(String hex) {
            byte[] result = new byte[hex.length() / 2];
            for (int i = 0; i < hex.length(); i += 2) {
                result[i / 2] = (byte) Integer.parseInt(hex.substring(i, i + 2), 16);
            }
            return result;
        }

        public static void main(String args[]) throws NoSuchAlgorithmException {
            if (args.length == 1) { // hash maker
                // put the password into a message digest and print the hash value
                byte[] myByteArray = args[0].getBytes();
                MessageDigest hasher = MessageDigest.getInstance("SHA-1");
                hasher.update(myByteArray);
                byte[] digestedByte = hasher.digest();
                String hashValue = Cracker.hexToString(digestedByte);
                System.out.println(hashValue);
            } else { // hash cracker
                ArrayList<Thread> myRunnables = new ArrayList<Thread>();
                int numOfThreads = Integer.parseInt(args[2]);
                int charPerThread = Cracker.numOfChar / numOfThreads;
                int start = 0;
                int end = charPerThread - 1;
                for (int i = 0; i < numOfThreads; i++) {
                    // creates, stores and starts threads
                    Runnable tempWorker = new Worker(start, end, args[1], Integer.parseInt(args[1]));
                    Thread temp = new Thread(tempWorker);
                    myRunnables.add(temp);
                    temp.start();
                    start = end + 1;
                    end = end + charPerThread;
                }
            }
        }
    }

    import java.util.*;

    public class Worker implements Runnable {
        private int charStart;
        private int charEnd;
        private String Hash2Crack;
        private int maxLength;

        public Worker(int start, int end, String hashValue, int maxPWlength) {
            charStart = start;
            charEnd = end;
            Hash2Crack = hashValue;
            maxLength = maxPWlength;
        }

        public void run() {
            byte[] myHash2Crack_ = Cracker.hexToArray(Hash2Crack);
            for (int i = charStart; i < charEnd + 1; i++) {
                Cracker.CHARS[i]; // <-- this is where I am stuck
            }
        }
    }
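
    A hedged sketch of the loop shape being asked about: recursive enumeration of every candidate of length 1..maxLength, where only the first character is limited to this worker's [charStart, charEnd] slice. hashOf is a hypothetical helper that SHA-1 hashes a candidate and hex-encodes it with Cracker.hexToString:

        // sketch of one possible run() shape for Worker
        private void crack(String prefix) {
            if (!prefix.isEmpty() && hashOf(prefix).equals(Hash2Crack)) {
                System.out.println("found: " + prefix);
            }
            if (prefix.length() == maxLength) {
                return; // never extend past the max password length
            }
            int from = prefix.isEmpty() ? charStart : 0;                 // partition only the first char
            int to   = prefix.isEmpty() ? charEnd   : Cracker.CHARS.length - 1;
            for (int i = from; i <= to; i++) {
                crack(prefix + Cracker.CHARS[i]);                        // extend and recurse
            }
        }

        public void run() {
            crack(""); // enumerate this worker's whole slice of the search space
        }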

  • Supervising multiple gen_servers with same module / different arguments

    - by Justin
    Hi, I have an OTP application comprising a single supervisor supervising a small number of gen_servers. A typical child specification is as follows:

    {my_server, {my_server, start_link, [123]}, permanent, 5000, worker, [my_server]}

    No problems so far. I now want to add an extra gen_server to the supervisor structure, using the same module/function as above but different arguments, e.g.:

    {my_server_2, {my_server, start_link, [123]}, permanent, 5000, worker, [my_server_2]}

    I thought this would work, but no:

    =SUPERVISOR REPORT==== 15-Apr-2010::16:50:13 ===
         Supervisor: {local,my_sup}
         Context:    start_error
         Reason:     {already_started,<0.179.0>}
         Offender:   [{pid,undefined},
                      {name,my_server_2},
                      {mfa,{my_server,start_link,[]}},
                      {restart_type,permanent},
                      {shutdown,5000},
                      {child_type,worker}]

    Do the module arguments in the second element of each child specification need to be different?

    Thanks, Justin
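
    The usual cause, offered as an assumption: my_server:start_link presumably calls gen_server:start_link({local, my_server}, ...), so the second child fails with already_started regardless of the child spec ID, because both instances race for the same registered name. A hedged sketch of a start_link that takes the registered name as a parameter:

        %% sketch: let the supervisor choose each instance's registered name
        start_link(Name, Arg) ->
            gen_server:start_link({local, Name}, ?MODULE, [Arg], []).

        %% child specs would then pass distinct names:
        %% {my_server,   {my_server, start_link, [my_server,   123]}, permanent, 5000, worker, [my_server]}
        %% {my_server_2, {my_server, start_link, [my_server_2, 456]}, permanent, 5000, worker, [my_server]}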

  • Better way to make a bash script self-tracing?

    - by Kevin Little
    I have certain critical bash scripts that are invoked by code I don't control, and where I can't see their console output. I want a complete trace of what these scripts did for later analysis. To do this I want to make each script self-tracing. Here is what I am currently doing:

    #!/bin/bash

    # if last arg is not '_worker_', relaunch with stdout and stderr
    # redirected to my log file...
    if [[ "$BASH_ARGV" != "_worker_" ]]; then
        $0 "$@" _worker_ >>/some_log_file 2>&1   # add tee if console output wanted
        exit $?
    fi

    # rest of script follows...

    Is there a better, cleaner way to do this?
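
    A commonly suggested alternative sketch, assuming the goal is capturing all output rather than command-level tracing: redirect the running shell's own streams with exec, which avoids the relaunch entirely:

        #!/bin/bash
        # redirect this shell's stdout and stderr for everything that follows
        exec >>/some_log_file 2>&1
        # set -x   # uncomment to also trace each command as it runs

        # rest of script follows...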

  • Using thread inter-communication to increase my server app's IO throughput; not sure how

    - by Howard Guo
    My server application creates a new thread for each incoming connection. Incoming requests are serialized in a BlockingQueue, and there is one worker thread taking items from the queue, producing a response, and sending the response through the socket. I have noticed a throughput issue: the worker thread is currently responsible for sending the response message through the socket, which severely wastes processing power and throughput. I am considering: rather than sending the response itself, why not tell the network IO threads to send it? However, when I think about thread inter-communication, I cannot yet figure out how to approach it. The worker thread will produce a response, but how will it hand the response message to the IO thread? Is there a standard/best practice? Thank you.
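
    One standard shape for this hand-off, sketched under assumptions about the surrounding code (socket and responseBytes stand in for your per-connection state and would need to be final variables or fields): give each connection its own outbound BlockingQueue; the worker enqueues the finished response, and a per-connection writer thread owns every write to that socket:

        import java.io.IOException;
        import java.io.OutputStream;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        // one outbound queue per connection; the worker never touches the socket
        final BlockingQueue<byte[]> outbound = new LinkedBlockingQueue<byte[]>();

        // per-connection writer thread: owns all writes for its socket
        Thread writer = new Thread(new Runnable() {
            public void run() {
                try {
                    OutputStream out = socket.getOutputStream();
                    while (true) {
                        out.write(outbound.take()); // blocks until a response is enqueued
                        out.flush();
                    }
                } catch (IOException e) {
                    // connection closed
                } catch (InterruptedException e) {
                    // shutdown requested
                }
            }
        });
        writer.start();

        // in the worker thread, after producing a response:
        outbound.put(responseBytes); // hand-off; the worker goes straight back to the queue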

  • Swing Timer in Conjunction with Possible Long-running Background Task

    - by javacavaj
    I need to perform a task repeatedly that affects both GUI-related and non-GUI-related objects. One caveat is that no action should be performed if the previous task has not completed when the next timer event fires. My initial thought is to use a Swing Timer in conjunction with a javax.swing.SwingWorker object. The general setup would look like this:

    class Poller implements ActionListener {
        private final Timer timer;

        Poller(int speed) {
            timer = new Timer(speed, this);
            timer.start();
        }

        public void actionPerformed(ActionEvent e) {
            SwingWorker<ImageIcon[], Void> worker = new SwingWorker<ImageIcon[], Void>() {
                @Override
                public ImageIcon[] doInBackground() {
                    // potential long running task
                    return null; // placeholder result
                }

                @Override
                public void done() {
                    // update GUI on event dispatch thread when complete
                }
            };
            worker.execute(); // start the background task
        }
    }

    Some potential issues I see with this approach are: 1) multiple SwingWorkers will be active if a worker has not completed before the next ActionEvent is fired by the timer; 2) a SwingWorker is only designed to be executed once, so holding a reference to the worker and reusing it is not a viable option. Is there a better way to achieve this?
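
    A hedged way to handle issue 1 while keeping this structure: hold a reference to the most recent worker and skip the tick while it is still running (isDone() comes from Future, which SwingWorker implements). makeWorker() below is a hypothetical factory that builds the anonymous SwingWorker shown above:

        private SwingWorker<ImageIcon[], Void> worker; // most recently started worker

        public void actionPerformed(ActionEvent e) {
            if (worker != null && !worker.isDone()) {
                return; // previous task still in flight: skip this tick
            }
            worker = makeWorker(); // build a fresh instance, as in the snippet above
            worker.execute();      // one new single-use instance per tick
        }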

  • Thread Question

    - by Polaris
    I have a method that creates a background thread to do some work. In this background thread I create an object, but creating it throws an exception at runtime: "The calling thread must be STA, because many UI components require this." I know that I must use a Dispatcher to reflect something back to the UI, but in this case I just create an object and don't interact with the UI. This is my code:

    public void SomeMethod()
    {
        BackgroundWorker worker = new BackgroundWorker();
        worker.DoWork += new DoWorkEventHandler(Background_Method);
        worker.RunWorkerAsync();
    }

    void Background_Method(object sender, DoWorkEventArgs e)
    {
        TreeView tv = new TreeView();
    }

    How can I create objects in a background thread? I am using a WPF application.
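
    If the object really must be a WPF control, one hedged sketch: build it on a thread you explicitly mark STA before starting it (BackgroundWorker runs on MTA thread-pool threads, whose apartment cannot be changed):

        // sketch: create the control on a dedicated STA thread
        Thread thread = new Thread(() =>
        {
            TreeView tv = new TreeView(); // legal here: the creating thread is STA
            // hand any results back to the UI via its Dispatcher if needed
        });
        thread.SetApartmentState(ApartmentState.STA); // must happen before Start()
        thread.IsBackground = true;
        thread.Start();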

  • Delayed Jobs is not finding Records and failing..

    - by Trip
    In my app, Delayed Job isn't running automatically on my server anymore. It used to. When I ssh in manually and run rake jobs:work, I get this back:

    * Starting job worker host:ip-(censored) pid:21458
    * [Worker(host:ip-(censored) pid:21458)] acquired lock on PhotoJob
    * [JOB] host:ip-(censored) pid:21458 failed with ActiveRecord::RecordNotFound: Couldn't find Photo with ID=9237 - 4 failed attempts

    The last line repeats roughly 20 times, across what I think is several jobs. Then I get a few of these:

    * [Worker(host:ip-(censored) pid:21458)] failed to acquire exclusive lock for PhotoJob

    And then finally one of these:

    12 jobs processed at 73.6807 j/s, 12 failed ...

    Any ideas what I should be mulling over? Thanks so much!

  • Using MySQL as a job queue

    - by user237815
    I'd like to use MySQL as a job queue. Multiple machines will be producing and consuming jobs. Jobs need to be scheduled; some may run every hour, some every day, etc. It seems fairly straightforward: for each job, have a "nextFireTime" column, have worker machines search for the job with the earliest nextFireTime, change the status of the record to "inProcess", and then update nextFireTime when the job ends. The problem comes in when a worker dies silently: it won't be able to update nextFireTime or set the status back to "idle". Unfortunately, jobs can be long-running, so a reaper thread that looks for jobs that have been inProcess too long isn't an option; there's no timeout value that would work. Can anyone suggest a design pattern that properly handles unreliable worker machines?
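
    One common pattern, offered as a sketch rather than a prescription: replace the idle/inProcess flag with a lease that a live worker must keep renewing. A worker that dies silently simply stops heartbeating, the lease lapses, and the job becomes claimable again; long-running jobs are fine because the lease measures heartbeats, not total runtime. Column and value names here are assumptions:

        -- claim one due job atomically (MySQL)
        UPDATE jobs
           SET status = 'inProcess',
               worker_id = 'worker-42',                 -- hypothetical worker identifier
               lease_expires = NOW() + INTERVAL 2 MINUTE
         WHERE status = 'idle' AND nextFireTime <= NOW()
         LIMIT 1;

        -- heartbeat: the worker renews its lease periodically while the job runs
        UPDATE jobs
           SET lease_expires = NOW() + INTERVAL 2 MINUTE
         WHERE worker_id = 'worker-42' AND status = 'inProcess';

        -- reaper: anything whose lease lapsed was owned by a dead worker
        UPDATE jobs
           SET status = 'idle', worker_id = NULL
         WHERE status = 'inProcess' AND lease_expires < NOW();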

  • What to use to wait on a indeterminate number of tasks?

    - by Scott Chamberlain
    I am still fairly new to parallel computing, so I am not too sure which tool to use for the job. I have a System.Threading.Tasks.Task that needs to wait for n tasks to finish before starting. The tricky part is that some of its dependencies may start after this task starts (you are guaranteed to never hit 0 dependent tasks until they are all done). Here is roughly what happens:

    1. The parent thread creates somewhere between 1 and (NUMBER_OF_CPU_CORES - 1) tasks.
    2. The parent thread creates a task to be run when all of the worker tasks are finished.
    3. The parent thread creates a monitoring thread.
    4. The monitoring thread may kill a worker task or spawn a new task, depending on load.

    I can figure out everything up to step 4. How do I get the task from step 2 to wait to run until any new worker tasks created in step 4 finish?
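
    One hedged fit for "the count can grow while we wait, but never reaches zero early" is System.Threading.CountdownEvent (.NET 4): AddCount is legal whenever the count is still above zero, which is exactly the guarantee described. initialWorkerCount is a placeholder for the step-1 task count:

        // sketch: dynamic fan-in over an unknown number of workers
        var countdown = new CountdownEvent(initialWorkerCount); // one unit per step-1 worker

        // step 4, monitoring thread, whenever it spawns an extra worker:
        countdown.AddCount(); // valid while the count is still above zero

        // every worker (original or spawned), as its last action:
        countdown.Signal();

        // step 2's task simply waits for the count to drain:
        var followUp = Task.Factory.StartNew(() =>
        {
            countdown.Wait(); // returns only after every worker has signaled
            // ... the dependent work
        });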

  • 40k Event Log Errors an hour Unknown Username or bad password

    - by ErocM
    I am getting about 200k of these an hour:

    An account failed to log on.
    Subject:
        Security ID: SYSTEM
        Account Name: TGSERVER$
        Account Domain: WORKGROUP
        Logon ID: 0x3e7
    Logon Type: 4
    Account For Which Logon Failed:
        Security ID: NULL SID
        Account Name: administrator
        Account Domain: TGSERVER
    Failure Information:
        Failure Reason: Unknown user name or bad password.
        Status: 0xc000006d
        Sub Status: 0xc0000064
    Process Information:
        Caller Process ID: 0x334
        Caller Process Name: C:\Windows\System32\svchost.exe
    Network Information:
        Workstation Name: TGSERVER
        Source Network Address: -
        Source Port: -
    Detailed Authentication Information:
        Logon Process: Advapi
        Authentication Package: Negotiate
        Transited Services: -
        Package Name (NTLM only): -
        Key Length: 0

    This event is generated when a logon request fails. It is generated on the computer where access was attempted. The Subject fields indicate the account on the local system which requested the logon; this is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe. The Logon Type field indicates the kind of logon that was requested; the most common types are 2 (interactive) and 3 (network). The Process Information fields indicate which account and process on the system requested the logon. The Network Information fields indicate where a remote logon request originated; workstation name is not always available and may be left blank in some cases. The authentication information fields provide detailed information about this specific logon request: transited services indicate which intermediate services have participated in this logon request, package name indicates which sub-protocol was used among the NTLM protocols, and key length indicates the length of the generated session key (0 if no session key was requested).

    On my server, I changed my administrative username to something else, and since then I've been inundated with these messages. I found at http://technet.microsoft.com/en-us/library/cc787567(v=WS.10).aspx that the 4 means "Batch logon type is used by batch servers, where processes may be executing on behalf of a user without their direct intervention", which really doesn't shed any light on it for me. I checked the services and they are all logging on as Local System or Network Service; nothing as administrator. Does anyone have any idea how I can tell where these are coming from? I would assume a program is failing to authenticate somewhere. Thanks in advance!
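
    One hedged diagnostic step: Caller Process ID is hex, so 0x334 is decimal 820. While the events are still occurring, map that PID to its process and hosted services (svchost.exe hosts many services, and a Scheduled Task saved with the old administrator name is a classic culprit for logon type 4):

        rem 0x334 -> 820; list what that PID is and which services it hosts
        tasklist /svc /FI "PID eq 820"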

  • Unable to copy a file from obj\Debug to bin\Debug

    - by M.H
    I have a project in C#, and I get this error every time I try to compile it: "Unable to copy file 'obj\Debug\Project1.exe' to 'bin\Debug\Project1.exe'. The process cannot access the file 'bin\Debug\Project1.exe' because it is being used by another process." So I have to close the process from Task Manager. My project is only one form and there is no multithreading. What is the solution (without restarting VS or killing the process)?

  • FindBugs: "may fail to close stream" - is this valid in case of InputStream?

    - by thSoft
    In my Java code, I start a new process, then obtain its input stream to read it:

    BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream()));

    FindBugs reports an error here:

    may fail to close stream
    Pattern id: OS_OPEN_STREAM, type: OS, category: BAD_PRACTICE

    Must I close the InputStream of another process? What's more, according to its Javadoc, InputStream#close() does nothing. So is this a false positive, or should I really close the input stream of the process when I'm done?
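
    For what it's worth, the stream returned by Process.getInputStream() is a concrete subclass backed by a pipe file descriptor, so the base InputStream.close() Javadoc ("does nothing") does not describe it. A hedged sketch of the pattern FindBugs is asking for:

        BufferedReader reader = new BufferedReader(
                new InputStreamReader(process.getInputStream()));
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                // consume the child's output here
            }
        } finally {
            reader.close(); // closes the chained InputStreamReader and the process pipe too
        }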

  • Splitting a test to a set of smaller tests

    - by mkorpela
    I want to be able to split a big test into smaller tests, so that when the smaller tests pass they imply that the big test would also pass (and there is no reason to run the original big test). I want to do this because smaller tests usually take less time and less effort, and are less fragile. I would like to know if there are test design patterns or verification tools that can help me achieve this test splitting in a robust way. I fear that the connection between the smaller tests and the original test is lost when someone changes something in the set of smaller tests. Another fear is that the set of smaller tests doesn't really cover the big test. An example of what I am aiming at:

    // Class under test
    class A {
        public void setB(B b) { this.b = b; }
        public Output process(Input i) {
            return b.process(doMyProcessing(i));
        }
        private InputFromA doMyProcessing(Input i) { .. }
        ..
    }

    // Another class under test
    class B {
        public Output process(InputFromA i) { .. }
        ..
    }

    // The big test
    @Test
    public void theBigTest() {
        A systemUnderTest = createSystemUnderTest(); // <-- expect that this is expensive
        Input i = createInput();
        Output o = systemUnderTest.process(i); // <-- .. or expect that this is expensive
        assertEquals(o, expectedOutput());
    }

    // The split tests
    @PartlyDefines("theBigTest") // <-- so something like this should come from the tool..
    @Test
    public void smallerTest1() { // this method is a bit too long but it's just an example..
        Input i = createInput();
        InputFromA x = expectedInputFromA(); // should be the same in both tests and ensured somehow
        Output expected = expectedOutput();  // should be the same in both tests and ensured somehow
        B b = mock(B.class);
        when(b.process(x)).thenReturn(expected);
        A classUnderTest = createInstanceOfClassA();
        classUnderTest.setB(b);
        Output o = classUnderTest.process(i);
        assertEquals(o, expected);
        verify(b).process(x);
        verifyNoMoreInteractions(b);
    }

    @PartlyDefines("theBigTest") // <-- so something like this should come from the tool..
    @Test
    public void smallerTest2() {
        InputFromA x = expectedInputFromA(); // should be the same in both tests and ensured somehow
        Output expected = expectedOutput();  // should be the same in both tests and ensured somehow
        B classUnderTest = createInstanceOfClassB();
        Output o = classUnderTest.process(x);
        assertEquals(o, expected);
    }

  • Handling User Authentication in C#.NET?

    - by Daniel
    Hi! I am new to .NET and don't have much experience in programming. What is the standard way of handling user authentication in .NET in the following situation?

    1. In Process A, the user inputs an ID/password.
    2. Process A sends the ID/password to Process B over a nonsecure public channel.
    3. Process B authenticates the user with the received ID/password.

    What are some of the standard cryptographic algorithms I can use in the above model? Thank you for your time!
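
    A hedged sketch of the usual answer: rather than picking a cipher yourself, wrap the channel in TLS with SslStream (System.Net.Security) and send the credentials only inside it. The host name, port, and id/password variables here are placeholders:

        using System.IO;
        using System.Net.Security;
        using System.Net.Sockets;

        // sketch: Process A authenticates Process B's certificate, then sends credentials
        using (TcpClient client = new TcpClient("process-b-host", 5000)) // hypothetical endpoint
        using (SslStream ssl = new SslStream(client.GetStream()))
        {
            ssl.AuthenticateAsClient("process-b-host"); // validates B's server certificate
            StreamWriter writer = new StreamWriter(ssl);
            writer.WriteLine(id + "\t" + password); // protected by the TLS channel
            writer.Flush();
        }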

  • How can I programmatically tell if a caught IOException is because the file is being used by another process?

    - by Paul K
    When I open a file, I want to know whether it is being used by another process so I can perform special handling; any other IOException I will bubble up. An IOException's Message property contains "The process cannot access the file 'foo' because it is being used by another process.", but that is unsuitable for programmatic detection. What is the safest, most robust way to detect a file being used by another process?
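
    A hedged sketch of a common approach: inspect the Win32 error code underneath the IOException instead of its message (32 = ERROR_SHARING_VIOLATION and 33 = ERROR_LOCK_VIOLATION per winerror.h; verify these constants for your scenario):

        using System.IO;
        using System.Runtime.InteropServices;

        // sketch: classify an IOException by its underlying Win32 error code
        private static bool IsSharingViolation(IOException ex)
        {
            int win32 = Marshal.GetHRForException(ex) & 0xFFFF; // low word carries the code
            return win32 == 32   // ERROR_SHARING_VIOLATION
                || win32 == 33;  // ERROR_LOCK_VIOLATION
        }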
