Search Results

Search found 7263 results on 291 pages for 'job boards'.

Page 244/291 | < Previous Page | 240 241 242 243 244 245 246 247 248 249 250 251  | Next Page >

  • Script to add user to MediaWiki

    - by Marquis Wang
    I'm trying to write a script that will create a user in MediaWiki, so that I can run a batch job to import a series of users. I'm using mediawiki-1.12.0. I got this code from a forum, but it doesn't look like it works with 1.12 (it's for 1.13):

      $name = 'Username';   # Username (MUST start with a capital letter)
      $pass = 'password';   # Password (plaintext, will be hashed later down)
      $email = 'email';     # Email (automatically gets confirmed after the creation process)
      $path = "/path/to/mediawiki";
      putenv( "MW_INSTALL_PATH={$path}" );
      require_once( "{$path}/includes/WebStart.php" );
      $pass = User::crypt( $pass );
      $user = User::createNew( $name, array( 'password' => $pass, 'email' => $email ) );
      $user->confirmEmail();
      $user->saveSettings();
      $ssUpdate = new SiteStatsUpdate( 0, 0, 0, 0, 1 );
      $ssUpdate->doUpdate();

    Thanks!
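
    For reference, a minimal sketch of the older pattern used by MediaWiki maintenance scripts of that era, which builds the account through User::newFromName() rather than User::createNew(). This is untested against 1.12 and the method names are assumptions to verify against includes/User.php in a 1.12 tree before running a batch import.

      <?php
      $path = "/path/to/mediawiki";
      putenv( "MW_INSTALL_PATH={$path}" );
      require_once( "{$path}/includes/WebStart.php" );

      $name  = 'Username';          # must start with a capital letter
      $pass  = 'password';          # plaintext; setPassword() hashes it
      $email = 'user@example.com';

      $user = User::newFromName( $name );
      if ( !$user ) {
          die( "Invalid username\n" );   # add your own "already exists" check here
      }
      $user->addToDatabase();            # inserts the row and assigns an ID
      $user->setPassword( $pass );
      $user->setEmail( $email );
      $user->confirmEmail();
      $user->saveSettings();

      # keep the site statistics in sync, as in the snippet above
      $ssUpdate = new SiteStatsUpdate( 0, 0, 0, 0, 1 );
      $ssUpdate->doUpdate();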

    Read the article

  • Hiring my first employee

    - by Ady
    A few years ago I moved to a new job having been programming for 2 years using C#; however, this new company was mainly using VB6. I made the case for .NET and won, but one of the concessions I had to make was to use VB.NET and not C# (understandable as most of the other developers were already using VB). Three years later it was time to move on, but when applying for jobs I couldn't get past the recruitment agents. I realised that when they were looking at the basic requirements (5 years experience) they could not add 2 and 3 together to make 5. They were looking for 5 years in VB or C#, not across both. Frustrated, I decided to combine my skills with a designer friend and start my own company. After two years of hard graft we are now looking for our first employee (a programmer), and this question has hit me again, but now I see the employer's perspective. Why take the risk of someone getting up to speed when you have thousands of applicants to choose from? So my question is this: if I define the requirements too narrowly, I could miss the really great candidates. But if they are too broad it's going to take ages to go through them all. This will be our first 'employee', so the choice needs to be good; I can't afford to make a mistake and employ someone naff. Another option would be to choose a bright university graduate and train them up (less of a risk because we can pay them less). What have others done in this situation, and what would you recommend I do?

    Read the article

  • acts_as_xapian jobs table

    - by Grnbeagle
    Hi, Can someone explain to me the inner workings of the acts_as_xapian_jobs table? I ran into an issue with the acts_as_xapian plugin recently, where I kept getting the following error when it creates an object with xapian-indexed fields:

      Mysql::Error: Duplicate entry 'String-2147483647' for key 2:
      INSERT INTO `acts_as_xapian_jobs` (`action`, `model`, `model_id`)
      VALUES ('update', 'String', 23730251831560)

    It turns out the model_id exceeded the max int value of 2147483647. The workaround was to update model_id to use bigint. Why would the model_id be so huge? By looking at the content of acts_as_xapian_jobs, it seems it creates a row for every field that is being indexed. Understanding how a job gets created in the table would help a great deal. Here's a sampling of the table:

      mysql> select * from acts_as_xapian_jobs limit 5\G
      *************************** 1. row ***************************
            id: 19
         model: String
      model_id: 23804037900560
        action: update
      *************************** 2. row ***************************
            id: 49
         model: String
      model_id: 23804037191200
        action: update
      *************************** 3. row ***************************
            id: 79
         model: String
      model_id: 23804037932180
        action: update
      *************************** 4. row ***************************
            id: 109
         model: String
      model_id: 23804037101700
        action: update
      *************************** 5. row ***************************
            id: 139
         model: String
      model_id: 23804037722160
        action: update

    Thanks in advance, Amie
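
    For what it's worth, the plugin normally writes one row per saved record of an indexed model (model = class name, model_id = that record's primary key), so rows with model "String" and 14-digit ids may mean something other than an ActiveRecord model is being queued for indexing. Below is a hedged sketch of the bigint workaround mentioned above, written as a Rails 2-style migration; verify the table and column names against your own schema before running it.

      class WidenActsAsXapianJobsModelId < ActiveRecord::Migration
        def self.up
          # :limit => 8 produces a BIGINT column on MySQL
          change_column :acts_as_xapian_jobs, :model_id, :integer, :limit => 8
        end

        def self.down
          change_column :acts_as_xapian_jobs, :model_id, :integer, :limit => 4
        end
      end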

    Read the article

  • Getting my foot in the SCADA door, how?

    - by bibby
    I keep hearing that I should learn SCADA and its PLC language to upgrade my career. While I enjoy currently being a web and mobile development privateer, the prospect of working for a municipality or industrial entity has its appeal (since I am trying to grow a family). Over the years, I've taught myself to skillfully use PHP, JavaScript, Java, Perl, awk and Bash. Surely these language skills can transfer somewhat to SCADA's logic controller language. Without any formal training in CS (music major!) other than at the workplace, I wouldn't have been able to pick up those languages and run with them had it not been for their open documentation and free-to-install or already-installed interpreters/compilers. I can't see that this is true with SCADA, and I'm hoping that I'm wrong. Ideally, I'd like to be able to apply for a job that requires [A,B,C] and suggest that they hire me because I already know [A & B]; that they wouldn't have to do ground-up training with someone that's never programmed before. So, finally, the question: how do I "learn" SCADA? Are there sites and docs? What's going to help me get my foot in the door? Any insight is appreciated. Thanks!

    Read the article

  • R: outlier cleaning for each column in a dataframe by using quantiles 0.05 and 0.95

    - by Rainer
    Hi, I am an R novice. I want to do some outlier cleaning and an overall scaling from 0 to 1 before putting the sample into a random forest.

      g <- c(1000,60,50,60,50,40,50,60,70,60,40,70,50,60,50,70,10)

    If I do a simple scaling from 0 to 1 the result would be:

      > round((g - min(g))/abs(max(g) - min(g)),1)
      [1] 1.0 0.1 0.0 0.1 0.0 0.0 0.0 0.1 0.1 0.1 0.0 0.1 0.0 0.1 0.0 0.1 0.0

    So my idea is to replace the values of each column that are greater than the 0.95 quantile with the next value smaller than the 0.95 quantile - and the same for the 0.05 quantile. So the pre-scaled result would be (note the first and last values):

      g <- c(70,60,50,60,50,40,50,60,70,60,40,70,50,60,50,70,40)

    and scaled:

      > round((g - min(g))/abs(max(g) - min(g)),1)
      [1] 1.0 0.7 0.3 0.7 0.3 0.0 0.3 0.7 1.0 0.7 0.0 1.0 0.3 0.7 0.3 1.0 0.0

    I need this formula for a whole dataframe, so the functional implementation within R should be something like (incomplete attempt):

      apply(c, 2, function(x) x[x < quantile(x, 0.95)] <- max(x[ ... max without the quantile(x, 0.95) ... ]))

    Can anyone help? As an aside: if there exists a function that does this job directly, please let me know. I already checked out cut and cut2. cut fails because of non-unique breaks; cut2 would work, but it only gives back string values or the mean value, and I need a numeric vector from 0 to 1. For trial:

      a <- c(100,6,5,6,5,4,5,6,7,6,4,7,5,6,5,7,1)
      b <- c(1000,60,50,60,50,40,50,60,70,60,40,70,50,60,50,70,10)
      c <- cbind(a,b)
      c <- as.data.frame(c)

    Regards and thanks for help, Rainer
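
    A minimal sketch of the per-column clip-and-scale step described above; the function name is mine, the 0.05/0.95 cut-offs come straight from the question, and na.rm is added defensively.

      clip_and_scale <- function(x) {
        q <- quantile(x, c(0.05, 0.95), na.rm = TRUE)
        x[x > q[2]] <- max(x[x <= q[2]])   # pull high outliers down to the largest "inlier"
        x[x < q[1]] <- min(x[x >= q[1]])   # pull low outliers up to the smallest "inlier"
        (x - min(x)) / (max(x) - min(x))   # rescale to [0, 1]
      }

      # applied column-wise to the trial data frame from the question
      # (named "dat" instead of "c", to avoid masking base::c):
      a <- c(100,6,5,6,5,4,5,6,7,6,4,7,5,6,5,7,1)
      b <- c(1000,60,50,60,50,40,50,60,70,60,40,70,50,60,50,70,10)
      dat <- data.frame(a, b)
      round(as.data.frame(lapply(dat, clip_and_scale)), 1)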

    Read the article

  • How to prevent the LINQ to SQL designer from undoing my changes

    - by anonim.developer
    Dear All, thanks for your attention in advance. I've met an issue with the LINQ to SQL designer in VS 2008 SP1 which has made me CRAZY. I use LINQ to SQL as my DAL. It seems LINQ to SQL speeds up coding at first, but lots of issues arise later, specifically with table or object inheritance. In this case I have a class Entity that all other entity classes generated by the LINQ to SQL designer inherit from:

      public abstract class Entity
      {
          public virtual Guid ID { get; protected set; }
      }

      public partial class User : monius.Data.Entity
      {
      }

    And the following generated by the L2S designer (DataModel.designer.cs):

      [Column(Storage = "_ID", AutoSync = AutoSync.OnInsert, DbType = "UniqueIdentifier NOT NULL", IsPrimaryKey = true, IsDbGenerated = true, UpdateCheck = UpdateCheck.Never)]
      [DataMember(Order = 1)]
      public System.Guid ID
      {
          get
          {
              return this._ID;
          }
          set
          {
              if ((this._ID != value))
              {
                  this.OnIDChanging(value);
                  this.SendPropertyChanging();
                  this._ID = value;
                  this.SendPropertyChanged("ID");
                  this.OnIDChanged();
              }
          }
      }

    When I compile the code VS warns me:

      Warning 1 'User.ID' hides inherited member 'Entity.ID'. To make the current member override that implementation, add the override keyword. Otherwise add the new keyword.

    That warning is obvious, and I have to change the code generated by the L2S designer (DataModel.designer.cs) to

      [...] public override System.Guid ID { ... protected set ... }

    and then the code compiles with no error or warning and everyone is happy. But that is not the end of the story. As soon as I make changes to entities on the diagram (dbml), or even just open the dbml file to view it, every change I made to the designer file by hand vanishes and POOF! I have to redo it AGAIN. That is a painful job. Now I wonder if there is a way to force the L2S designer not to change portions of the auto-generated code. I'd appreciate it if someone could kindly help me with this issue.
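
    Since the .designer.cs file is regenerated wholesale, a hedged alternative is to stop relying on the abstract base property and express the contract as an interface instead, attached through a partial class file the designer never touches; the generated ID property then satisfies the interface without hiding anything. The IEntity name below is my own, the rest mirrors the question.

      public interface IEntity
      {
          Guid ID { get; }   // implicitly satisfied by the generated System.Guid ID property
      }

      // in your own file, outside DataModel.designer.cs:
      public partial class User : IEntity
      {
          // nothing to add -- the designer-generated ID getter fulfils IEntity.ID
      }

    Code that used to accept Entity can accept IEntity instead, and because nothing in the generated file is edited, re-opening or changing the dbml no longer wipes anything out.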

    Read the article

  • Automatic testing of GUI related private methods

    - by Stein G. Strindhaug
    When it comes to GUI programming (at least for web) I feel that often the only things that would be useful to unit test are some of the private methods*. While unit testing makes perfect sense for back-end code, I feel it doesn't quite fit the GUI classes. What is the best way to add automatic testing of these? * Why I think the only methods useful to test are the private ones: Often when I write GUI classes they don't even have any public methods except for the constructor. The public methods, if any, are trivial, and the constructor does most of the job by calling private methods. They receive some data from the server, do a lot of trivial output, and feed data to the constructors of other classes contained inside it, adding listeners that (more or less directly) call the server... Most of it is pretty trivial (the hardest part is the layout: css, IE, etc.), but sometimes I create a private method that does some advanced tricks, which I definitely do not want to be publicly visible (because it's closely coupled to the implementation of the layout, and likely to change), but which is sufficiently complicated to break. These are often only called by the constructor or repeatedly by events in the code, not by any public methods at all. I'd like to have a way to test this type of method, without making it public or resorting to reflection trickery. (BTW: I'm currently using GWT, but I feel this applies to most languages/frameworks I've used when coding for GUI.)

    Read the article

  • java: can't use constructors in abstract class

    - by ufk
    Hi. I created the following abstract class for a job scheduler in red5:

      package com.demogames.jobs;

      import com.demogames.demofacebook.MysqlDb;
      import org.red5.server.api.IClient;
      import org.red5.server.api.IConnection;
      import org.red5.server.api.IScope;
      import org.red5.server.api.scheduling.IScheduledJob;
      import org.red5.server.api.so.ISharedObject;
      import org.apache.log4j.Logger;
      import org.red5.server.api.Red5;

      /**
       * @author ufk
       */
      abstract public class DemoJob implements IScheduledJob {

          protected IConnection conn;
          protected IClient client;
          protected ISharedObject so;
          protected IScope scope;
          protected MysqlDb mysqldb;

          protected static org.apache.log4j.Logger log = Logger.getLogger(DemoJob.class);

          protected DemoJob(ISharedObject so, MysqlDb mysqldb) {
              this.conn = Red5.getConnectionLocal();
              this.client = conn.getClient();
              this.so = so;
              this.mysqldb = mysqldb;
              this.scope = conn.getScope();
          }

          protected DemoJob(ISharedObject so) {
              this.conn = Red5.getConnectionLocal();
              this.client = this.conn.getClient();
              this.so = so;
              this.scope = conn.getScope();
          }

          protected DemoJob() {
              this.conn = Red5.getConnectionLocal();
              this.client = this.conn.getClient();
              this.scope = conn.getScope();
          }
      }

    Then I created a simple class that extends the previous one:

      public class StartChallengeJob extends DemoJob {
          public void execute(ISchedulingService service) {
              log.error("test");
          }
      }

    The problem is that my main application can only see the constructor without any parameters, which means I can only do new StartChallengeJob(). Why doesn't the main application see all the constructors? Thanks!
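
    Constructors are not inherited in Java: StartChallengeJob only gets a compiler-supplied no-arg constructor that calls super(), so the two parameterised DemoJob constructors are invisible from outside. A minimal sketch (assuming the same types as above) is to declare matching constructors in the subclass and chain to super:

      public class StartChallengeJob extends DemoJob {

          public StartChallengeJob() {
              super();
          }

          public StartChallengeJob(ISharedObject so) {
              super(so);
          }

          public StartChallengeJob(ISharedObject so, MysqlDb mysqldb) {
              super(so, mysqldb);
          }

          public void execute(ISchedulingService service) {
              log.error("test");
          }
      }

    Note the base constructors are protected, which is enough for the super(...) calls here, but they would not be callable directly from unrelated code even if constructors were inherited.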

    Read the article

  • C# webservice async callback not getting called on HTTP 407 error.

    - by Ben
    Hi, I am trying to test the use-case of a customer having a proxy with login credentials, trying to use our webservice from our client. If the request is synchronous, my job is easy: catch the WebException, check for the 407 code, and prompt the user for the login credentials. However, for async requests I seem to be running into a problem: the callback is never getting called! I ran a wireshark trace and did indeed see that the HTTP 407 error was being passed back, so I am bewildered as to what to do. Here is the code that sets up the callback and starts the request:

      TravelService.TravelServiceImplService svc = new TravelService.TravelServiceImplService();
      svc.Url = svcUrl;
      svc.CreateEventCompleted += CbkCreateEventCompleted;
      svc.CreateEventAsync(crReq, req);

    And the code that was generated when I consumed the WSDL:

      public void CreateEventAsync(TravelServiceCreateEventRequest CreateEventRequest, object userState) {
          if ((this.CreateEventOperationCompleted == null)) {
              this.CreateEventOperationCompleted = new System.Threading.SendOrPostCallback(this.OnCreateEventOperationCompleted);
          }
          this.InvokeAsync("CreateEvent", new object[] { CreateEventRequest}, this.CreateEventOperationCompleted, userState);
      }

      private void OnCreateEventOperationCompleted(object arg) {
          if ((this.CreateEventCompleted != null)) {
              System.Web.Services.Protocols.InvokeCompletedEventArgs invokeArgs = ((System.Web.Services.Protocols.InvokeCompletedEventArgs)(arg));
              this.CreateEventCompleted(this, new CreateEventCompletedEventArgs(invokeArgs.Results, invokeArgs.Error, invokeArgs.Cancelled, invokeArgs.UserState));
          }
      }

    Debugging the WS code, I found that even the SoapHttpClientProtocol.InvokeAsync method was not calling its callback as well. Am I missing some sort of configuration?
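
    One thing worth trying, sketched below under the assumption that the proxy address is known: ASMX proxies derive from HttpWebClientProtocol, which exposes a Proxy property, so the proxy credentials can be supplied up front and the 407 avoided entirely; the completed handler can still inspect e.Error for the asynchronous failure case. The prompting logic and variable names such as proxyAddress are placeholders of mine.

      TravelService.TravelServiceImplService svc = new TravelService.TravelServiceImplService();
      svc.Url = svcUrl;

      // supply the proxy credentials before the async call is started
      System.Net.WebProxy proxy = new System.Net.WebProxy(proxyAddress);        // proxyAddress assumed known
      proxy.Credentials = new System.Net.NetworkCredential(userName, password); // e.g. from a login prompt
      svc.Proxy = proxy;

      svc.CreateEventCompleted += CbkCreateEventCompleted;
      svc.CreateEventAsync(crReq, req);

      // ...

      private void CbkCreateEventCompleted(object sender, TravelService.CreateEventCompletedEventArgs e)
      {
          System.Net.WebException wex = e.Error as System.Net.WebException;
          if (wex != null)
          {
              System.Net.HttpWebResponse resp = wex.Response as System.Net.HttpWebResponse;
              if (resp != null && resp.StatusCode == System.Net.HttpStatusCode.ProxyAuthenticationRequired)
              {
                  // 407 surfaced asynchronously: prompt for credentials, set svc.Proxy, retry
                  return;
              }
          }
          // normal completion handling...
      }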

    Read the article

  • Looking for fast, minimal, preferably free disc cloning software [closed]

    - by Dave
    We have to test our application installation and functionality on many Windows operating system versions and languages (XP, Vista, Win7; English, Spanish, Portuguese, etc.; 32-bit & 64-bit). While we can do much of this in virtual machines, we have noticed that VMs sometimes hide problems, or raise false bugs. So, we need to do "bare metal" OS installation for much of our testing. I have been using Acronis True Image for the past year, and am not impressed. It often gives random errors which require a reboot, and is really slow. For example, when trying to restore an image, it goes through a "Locking partition" cycle about three times (once after you click OK on each step of the wizard), each of which can take 5 minutes to complete. This all happens BEFORE it actually starts the image copy, which is sometimes quick (3-5 minutes), sometimes long (hours). The size of all of our images is roughly the same, so that is not related. So, anyway, I'm looking to switch to something else. I only need very basic functionality--just creating images of entire discs, and then restoring those images onto the exact same hard drive at a later date. That's it. I'm not opposed to paying for a good piece of software, but if there is something free out there that does the job well, that would be a preference. My OS on which the imaging software would run is Windows Vista, but bootable media (into a Linux flavor) would be fine also, as long as it's quick to use and reliable. Recommendations? (Also, moderators, if this should be a CW, I'll be happy to mark it as such; unclear about the rules there.)

    Read the article

  • Passing Binary Data to a Stored Procedure in SQL Server 2008

    - by Joe Majewski
    I'm trying to figure out a way to store files in a database. I know it's recommended to store files on the file system rather than the database, but the job I'm working on would highly prefer using the database to store these images (files). There are also some constraints: I'm not an admin user, and I have to write stored procedures to execute all the commands. This hasn't been much of a difficulty so far, but I cannot for the life of me establish a way to store a file (image) in the database. When I try to use the BULK command, I get an error saying "You do not have permission to use the bulk load statement." The bulk utility seemed like the easy way to upload files to the database, but without permissions I have to figure out a workaround. I decided to use an HTML form with a file upload input type and handle it with PHP. The PHP calls the stored procedure and passes in the contents of the file. The problem is that now it's saying that the max length of a parameter can only be 128 characters. Now I'm completely stuck. I don't have permission to use the bulk command, and it appears that the max length of a parameter that I can pass to the SP is 128 characters. I expected to run into problems because binary data and ASCII characters don't mix well together, but I'm at a dead end... Thanks
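
    A hedged sketch of the stored-procedure side, with invented table and column names: a VARBINARY(MAX) parameter holds up to 2 GB, so a 128-character ceiling would not be coming from the procedure definition itself. If the message is SQL Server's "identifier ... maximum length is 128" error, it usually means the file bytes are being spliced into the SQL text rather than bound as a parameter, so on the PHP side the contents should be bound as a binary parameter instead of concatenated into the EXEC string.

      CREATE PROCEDURE dbo.usp_InsertImage
          @FileName NVARCHAR(260),
          @FileData VARBINARY(MAX)
      AS
      BEGIN
          SET NOCOUNT ON;
          INSERT INTO dbo.Images (FileName, FileData)
          VALUES (@FileName, @FileData);
      END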

    Read the article

  • Using JavaCC to infer semantics from a Composite tree

    - by Skice
    Hi all, I am programming (in Java) a very limited symbolic calculus library that manages polynomials, exponentials and expolinomials (sums of elements like "x^n * e^(c x)"). I want the library to be extensible in the sense of new analytic forms (trigonometric, etc.) or new kinds of operations (logarithm, domain transformations, etc.), so a Composite pattern that represents the syntactic structure of an expression, together with a bunch of Visitors for the operations, does the job quite well. My problem arises when I try to implement operations that depend on the semantics more than on the syntax of the Expression (like integrals, for instance: there are a lot of resolution methods for specific classes of functions, but these same classes can be represented with more than a single syntax). So I thought I need something to "parse" the Composite tree to infer its semantics, in order to invoke the right integration method (if any). Someone pointed me to JavaCC, but all the examples I've seen deal only with string parsing, so I don't know if I'm digging in the right direction. Any suggestions? (I hope to have been clear enough!)

    Read the article

  • Python and a "time value of money" problem.

    - by jamieb
    (I asked this question earlier today, but I did a poor job of explaining myself. Let me try again) I have a client who is an industrial maintenance company. They sell service agreements that are prepaid 20 hour blocks of a technician's time. Some of their larger customers might burn through that agreement in two weeks while customers with fewer problems might go eight months on that same contract. I would like to use Python to help model projected sales revenue and determine how many billable hours per month that they'll be on the hook for. If each customer only ever bought a single service contract (never renewed) it would be easy to figure sales as monthly_revenue = contract_value * qty_contracts_sold. Billable hours would also be easy: billable_hrs = hrs_per_contract * qty_contracts_sold. However, how do I account for renewals? Assuming that 90% (or some other arbitrary amount) of customers renew, then their monthly revenue ought to grow geometrically. Another important variable is how long the average customer burns through a contract. How do I determine what the revenue and billable hours will be 3, 6, or 12 months from now, based on various renewal and burn rates? I assume that I'd use some type of recursive function but math was never one of my strong points. Any suggestions please? Edit: I'm thinking that the best way to approach this is to think of it as a "time value of money" problem. I've retitled the question as such. The problem is probably a lot more common if you think of "monthly sales" as something similar to annuity payments.
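
    A minimal, purely illustrative sketch of one way to set the projection up in Python: every number below (contract value, burn time, renewal rate, sales volume) is an assumption to replace with real figures, and the model simply assumes each cohort burns its 20 hours evenly over burn_months and that renewal_rate of expiring contracts renew.

      def project(months, new_contracts_per_month, contract_value=2000.0,
                  hours_per_contract=20.0, burn_months=4, renewal_rate=0.9):
          active = []        # one entry per live contract: months of life remaining
          revenue = []
          billable = []
          for _ in range(months):
              # contracts sold to brand-new customers this month, plus renewals of expiring ones
              sold = new_contracts_per_month
              sold += sum(1 for life in active if life == 1) * renewal_rate
              revenue.append(sold * contract_value)

              # age the existing contracts and add the new cohort
              active = [life - 1 for life in active if life > 1]
              active.extend([burn_months] * int(round(sold)))

              # every live contract burns an even share of its hours each month
              billable.append(len(active) * hours_per_contract / burn_months)
          return revenue, billable

      rev, hrs = project(months=12, new_contracts_per_month=5)
      for month, (r, h) in enumerate(zip(rev, hrs), start=1):
          print("month %2d: revenue $%8.0f  billable %6.1f h" % (month, r, h))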

    Read the article

  • Data Flow Object Graph and An Execution Engine for it

    - by M Dotnet
    I would like to build a custom workflow engine from scratch. The input to this workflow is a data flow diagram, which is composed of a series of activities connected together through lines, each of which represents a data flow. Each activity can export multiple outputs. Activities are complex math functions but the logic is hidden from the user. My workflow engine's job is to execute the given data flow diagram. Each activity within the data flow diagram is a custom activity, and each activity can output different outputs. How do you suggest I model the data flow diagram object? I need to be able to construct the data flow diagram programmatically (no need for drag and drop) but I need to display the final result graphically (for display and debugging purposes). Are there any libraries out there that I could use? Should I keep the workflow representation as XML? I know that there are many projects out there essentially doing a similar thing by building such workflow engines, but I need something lightweight and open source. I do not need any state machine execution engine; mine is primarily a sequential workflow with fork and join capabilities. My activities are wrappable as basic C# classes and I do NOT want to use anything as heavy as .NET Workflow Foundation.

    Read the article

  • quartz.net - Can I not add a callback delegate method to JobExecutionContext?

    - by Greg
    Hi, BACKGROUND - I have a synchronisation function within my MainForm class. It gets called manually when the user pushes the SYNC button. I want to also call this synchronisation function when the scheduler triggers, so effectively I want the SchedulerJob:IJob.Execute() method to be able to call it. QUESTION - How do I access the MainForm.Synchronization() method from within the SchedulerJob:IJob.Execute() method? I tried creating a delegate for this method in the MainForm class and getting it added via jobDetail.JobDataMap. However, when I try, I'm not sure that JobDataMap has a method to pull out a Delegate type???

      private void Schedule(MainForm.SyncDelegate _syncNow)
      {
          var jobDetail = new JobDetail("MainJob", null, typeof(SchedulerJob));
          jobDetail.JobDataMap["CallbackMethod"] = _syncNow;

          // Trigger Setup
          var trigger = new CronTrigger("MainTrigger");
          string expression = GetCronExpression();
          trigger.CronExpressionString = expression;
          trigger.StartTimeUtc = DateTime.Now.ToUniversalTime();

          // Schedule Job & Trigger
          _scheduler.ScheduleJob(jobDetail, trigger);
      }

      public class SchedulerJob : IJob
      {
          public SchedulerJob() { }

          public void Execute(JobExecutionContext context)
          {
              JobDataMap dataMap = context.JobDetail.JobDataMap;
              MainForm.SyncDelegate CallbackFunction = dataMap.getDelegate["CallbackMethod"];
              // THIS METHOD DOESN'T EXIST - getDelegate()
              CallbackFunction();
          }
      }

    thanks
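
    JobDataMap is essentially a dictionary of objects, so there is no typed accessor for delegates; a hedged sketch of the retrieval is simply to index it by key (as the Schedule method above already does when storing it) and cast the entry back to the delegate type. One caveat to verify: this only makes sense with the in-memory RAMJobStore, since a persistent job store would try to serialise the map and a delegate pointing into a live form will not survive that.

      public class SchedulerJob : IJob
      {
          public void Execute(JobExecutionContext context)
          {
              JobDataMap dataMap = context.JobDetail.JobDataMap;

              // dataMap["CallbackMethod"] returns object; cast it back to the delegate type
              MainForm.SyncDelegate callback = dataMap["CallbackMethod"] as MainForm.SyncDelegate;
              if (callback != null)
              {
                  callback();   // runs on the scheduler's thread, so marshal to the UI thread if needed
              }
          }
      }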

    Read the article

  • Hadoop reduce task gets hung

    - by user806098
    I set up a hadoop cluster with 4 nodes. When running a map-reduce task, the map task finishes quickly, while the reduce task hangs at 27%. I checked the log; the problem is that the reduce task fails to fetch map output from the map nodes. The job tracker log of the master shows messages like this:

      2011-06-27 19:55:14,748 INFO org.apache.hadoop.mapred.JobTracker: Adding task (REDUCE) 'attempt_201106271953_0001_r_000000_0' to tip task_201106271953_0001_r_000000, for tracker 'tracker_web30.bbn.com.cn:localhost/127.0.0.1:56476'

    And the name node log of the master shows messages like this:

      2011-06-27 14:00:52,898 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 54310, call register(DatanodeRegistration(202.106.199.39:50010, storageID=DS-1989397900-202.106.199.39-50010-1308723051262, infoPort=50075, ipcPort=50020)) from 192.168.225.19:16129: error: java.io.IOException: verifyNodeRegistration: unknown datanode 202.106.199.39:50010

    However, neither "web30.bbn.com.cn" nor 202.106.199.39 / 202.106.199.3 is a slave node. I think such IPs/domains appear because hadoop fails to resolve a node (first against the Intranet DNS server), then goes to a higher-level DNS server, and on up to the top; it still fails, and then the "junk" IPs/domains are returned. But I checked my config, and it goes like this:

      /etc/hosts:
        127.0.0.1       localhost.localdomain localhost
        ::1             localhost6.localdomain6 localhost6
        192.168.225.16  master
        192.168.225.66  slave1
        192.168.225.20  slave5
        192.168.225.17  slave17

      conf/core-site.xml (property names and values):
        hadoop.tmp.dir  = /root/hadoop_tmp/hadoop_${user.name}
        fs.default.name = hdfs://master:54310
        io.sort.mb      = 1024

      conf/hdfs-site.xml:
        dfs.replication = 3

      masters:
        master

      slaves:
        master
        slave1
        slave5
        slave17

    Also, all firewalls (iptables) are turned off, and ssh between each 2 nodes is ok, so I don't know where exactly the error comes from. Please help. Thanks a lot.

    Read the article

  • Java Random Slowdowns on Mac OS cont'd

    - by javajustice
    I asked this question a few weeks ago, but I'm still having the problem and I have some new hints. The original question is here: http://stackoverflow.com/questions/1651887/java-random-slowdowns-on-mac-os Basically, I have a java application that splits a job into independent pieces and runs them in separate threads. The threads have no synchronization or shared memory items. The only resources they do share are data files on the hard disk, with each thread having an open file channel. Most of the time it runs very fast, but occasionally it will run very slow for no apparent reason. If I attach a CPU profiler to it, then it will start running quickly again. If I take a CPU snapshot, it says it's spending most of its time in "self time" in a function that doesn't do anything except check a few (unshared, unsynchronized) booleans. I don't know how this could be accurate because 1, it makes no sense, and 2, attaching the profiler seems to knock the threads out of whatever mode they're in and fix the problem. Also, regardless of whether it runs fast or slow, it always finishes and gives the same output, and it never dips in total cpu usage (in this case ~1500%), implying that the threads aren't getting blocked. I have tried different garbage collectors, different sizings of the parts of the memory space, writing data output to non-raid drives, and putting all data output in threads separate from the main worker threads. Does anyone have any idea what kind of problem this could be? Could it be the operating system (OS X 10.6.2)? I have not been able to duplicate it on a windows machine, but I don't have one with a similar hardware configuration.

    Read the article

  • Message queue proxy in Python + Twisted

    - by gasper_k
    Hi, I want to implement a lightweight Message Queue proxy. Its job is to receive messages from a web application (PHP) and send them to the Message Queue server asynchronously. The reason for this proxy is that the MQ isn't always available and is sometimes lagging, or even down, but I want to make sure the messages are delivered, and the web application returns immediately. So, PHP would send the message to the MQ proxy running on the same host. That proxy would save the messages to SQLite for persistence, in case of crashes. At the same time it would send the messages from SQLite to the MQ in batches when the connection is available, and delete them from SQLite. Now, the way I understand it, there are these components in this service: (1) a message listener (listens to the messages from PHP and writes them to an Incoming Queue); (2) a DB flusher (reads messages from the Incoming Queue and saves them to a database; needed due to SQLite's single-threadedness); (3) an MQ connection handler (keeps the connection to the MQ server online by reconnecting); (4) a message sender (collects messages from the SQLite db and sends them to the MQ server, then removes them from the db). I was thinking of using Twisted for #1 (TCPServer), but I'm having problems integrating it with the other points, which aren't event-driven. Intuition tells me that each of these points should be running in a separate thread, because all are IO-bound and independent of each other, but I could easily put them in a single thread. Even so, I couldn't find any good and clear (to me) examples on how to implement this worker thread alongside Twisted's main loop. The example I've started with is chatserver.py, which uses service.Application and internet.TCPServer objects. If I start my own thread prior to creating the TCPServer service, it runs a few times, but then it stops and never runs again. I'm not sure why this is happening, but it's probably because I don't use threads with Twisted correctly. Any suggestions on how to implement a separate worker thread and keep Twisted? Do you have any alternative architectures in mind?
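
    A minimal sketch of one way to keep all four pieces inside the reactor rather than hand-managing threads: the listener stays event-driven, SQLite work is pushed onto the reactor's thread pool with deferToThread, and a LoopingCall plays the role of the periodic flusher/sender. Everything except Twisted's own classes (the port, helper names, the 5-second interval) is a placeholder.

      from twisted.internet import reactor, task, threads
      from twisted.internet.protocol import Factory
      from twisted.protocols.basic import LineReceiver

      class MessageListener(LineReceiver):
          def lineReceived(self, line):
              # hand the SQLite write to the reactor's thread pool so it never blocks the loop
              threads.deferToThread(save_to_sqlite, line)

      def save_to_sqlite(message):
          # placeholder: INSERT the message into the local SQLite queue
          # (open the connection inside this function -- SQLite handles are thread-bound)
          pass

      def flush_to_mq():
          # placeholder: read a batch from SQLite, send it to the MQ if it is reachable,
          # and delete the rows only after the MQ has acknowledged them
          pass

      factory = Factory()
      factory.protocol = MessageListener

      reactor.listenTCP(10008, factory)
      task.LoopingCall(lambda: threads.deferToThread(flush_to_mq)).start(5.0)  # flush every 5 s
      reactor.run()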

    Read the article

  • How can I diagnose "Cannot determine peer address" in my Perl TCP script?

    - by MadBoy
    I have this little script which does its job pretty well, but sometimes it tends to fail. It fails in 2 cases: (1) with the error "send: Cannot determine peer address at ./tcp-new.pl line 52"; (2) with no output or anything -- it just fails to deliver what it got to the connected TCP client. Usually this happens after I disconnect from the server, go home and connect to it again. To fix it a restart is required, and then it starts working. Sometimes this problem is followed by the problem mentioned in point 1. Note: it's not a problem when I disconnect and reconnect to it again within a short amount of time (unless error no. 1 happens). So can anyone help me make this code a bit more stable so I don't have to restart it every day?

      #!/usr/bin/perl
      use strict;
      use warnings;
      use IO::Socket;
      use IO::Select;

      my $tcp_port = "10008";
      my $udp_port = "2099";

      my $tcp_socket = IO::Socket::INET->new(
          Listen    => SOMAXCONN,
          LocalPort => $tcp_port,
          Proto     => 'tcp',
          ReuseAddr => 1,
      );
      my $udp_socket = IO::Socket::INET->new(
          LocalPort => $udp_port,
          Proto     => 'udp',
      );

      my $read_select  = IO::Select->new();
      my $write_select = IO::Select->new();

      $read_select->add($tcp_socket);
      $read_select->add($udp_socket);

      while (1) {
          my @read = $read_select->can_read();
          foreach my $read (@read) {
              if ($read == $tcp_socket) {
                  my $new_tcp = $read->accept();
                  $write_select->add($new_tcp);
              }
              elsif ($read == $udp_socket) {
                  my $recv_buffer;
                  $udp_socket->recv($recv_buffer, 1024, undef);
                  my @write = $write_select->can_write();
                  foreach my $write (@write) {
                      $write->send($recv_buffer);
                  }
              }
          }
      }

    Read the article

  • Java: Initializing a public static field in a superclass that needs a different value in every subclass

    - by BinaryMuse
    Good evening, I am developing a set of Java classes in which a container class Box contains a List of a contained class Widget. A Widget needs to be able to specify relationships with other Widgets. I figured a good way to do this would be something like this:

      public abstract class Widget {

          public static class WidgetID {
              // implementation stolen from Google's GWT
              private static int nextHashCode;
              private final int index;

              public WidgetID() {
                  index = ++nextHashCode;
              }

              public final int hashCode() {
                  return index;
              }
          }

          public abstract WidgetID getWidgetID();
      }

    so subclasses of Widget could do:

      public class BlueWidget extends Widget {
          public static final WidgetID WIDGETID = new WidgetID();

          @Override
          public WidgetID getWidgetID() {
              return WIDGETID;
          }
      }

    Now, BlueWidget can do getBox().addWidgetRelationship(RelationshipTypes.SomeType, RedWidget.WIDGETID), and Box can iterate through its list comparing the second parameter to iter.next().getWidgetID(). Now, all this works great so far. What I'm trying to do is keep from having to declare the public static final WidgetID WIDGETID in all the subclasses, and implement it instead in the parent Widget class. The problem is, if I move that line of code into Widget, then every instance of a subclass of Widget appears to get the same static final WidgetID for their Subclassname.WIDGETID. However, making it non-static means I can no longer even call Subclassname.WIDGETID. So: how do I create a static WidgetID in the parent Widget class while ensuring it is different for every instance of Widget and subclasses of Widget? Or am I using the wrong tool for the job here? Thanks!
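
    A hedged sketch of one alternative: keep a registry in the base class keyed by the concrete class, and hand out one WidgetID per subclass lazily. The trade-off to weigh is that there is no longer a BlueWidget.WIDGETID constant to reference at compile time; callers would ask an instance via getWidgetID(), or look one up with Widget.idOf(RedWidget.class).

      import java.util.HashMap;
      import java.util.Map;

      public abstract class Widget {

          public static class WidgetID {
              private static int nextHashCode;
              private final int index;
              public WidgetID() { index = ++nextHashCode; }
              public final int hashCode() { return index; }
          }

          // one WidgetID per concrete Widget subclass, created on first use
          private static final Map<Class<?>, WidgetID> IDS = new HashMap<Class<?>, WidgetID>();

          public static WidgetID idOf(Class<? extends Widget> type) {
              synchronized (IDS) {
                  WidgetID id = IDS.get(type);
                  if (id == null) {
                      id = new WidgetID();
                      IDS.put(type, id);
                  }
                  return id;
              }
          }

          // no longer abstract and no longer overridden in each subclass
          public final WidgetID getWidgetID() {
              return idOf(getClass());
          }
      }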

    Read the article

  • How to motivate as a project leader?

    - by Zenzen
    Ok so some of you might think this is not stackoverflow related, but since I got a similar question during an interview (for a J2EE dev position) I think you guys will help me in the end. The situation is simple: you're working on a project in a small group (4-5 people), you're the project leader, and the guy who's the most competent (technology-wise) is also slacking the most. What do you do to motivate him? The problem is, I'm having the same issue at the university - this semester we have a lot of projects going on (6 to be exact), so we decided to divide the work so everyone will do 1-2 projects in a technology he knows/wants to learn. It's been working well for the most part; the problem is the person who we thought would finish his project way before us, as he's the most experienced among us. How am I supposed to make him do his share properly? Till now he was just half-assing his part by doing the bare minimum, with which the professor wasn't really pleased, to say the least... Now it's only a university project, but in the future I might have the same problem in a real job, and losing a really smart and experienced worker (as firing him is the only solution I can come up with now) would really be a waste. Aren't there any better ways? p.s. now that I think about it, are there any books that would help me in becoming a better project manager?

    Read the article

  • Elegant way of parsing Data files for Simulation

    - by sc_ray
    I am working on this project where I need to read in a lot of data from .dat files and use the data to perform simulations. The data in my .dat files looks as follows:

      DeviceID    InteractingDeviceID    InteractionStartTime    InteractionEndTime
      1           2                      1101                    1105

    The fields 1, 2, 1101 and 1105 are tab-delimited, and the line means Device 1 interacted with Device 2 starting at 1101 ms and ending the interaction at 1105 ms. I have trace data sets that compile thousands of such interactions, and my job is to analyze these interactions. The first step is to parse the file. The language of choice is C++. The approach I was thinking of taking was to read the file and, for every line that's read, create a Device object. This Device object will contain the property DeviceId and an array/vector of structs that will hold a list of all the devices the given DeviceId interacted with over the course of the simulation. The struct will contain the Interacting Device Id, Interaction Start Time and Interaction End Time. I have a two-fold question here: Is my approach correct? If I am on the right track, how do I rapidly parse these tab-delimited data files and create Device objects without excessive memory overhead using C++? A push in the right direction will be much appreciated. Thanks
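
    A minimal sketch of that approach, assuming exactly the layout shown above (one header line, then tab-separated DeviceID, InteractingDeviceID, start and end times in milliseconds); the file name and struct names are illustrative.

      #include <fstream>
      #include <iostream>
      #include <map>
      #include <sstream>
      #include <string>
      #include <vector>

      struct Interaction {
          int  otherDevice;
          long startMs;
          long endMs;
      };

      struct Device {
          int                      id;
          std::vector<Interaction> interactions;
      };

      int main() {
          std::ifstream in("trace.dat");
          std::string line;
          std::getline(in, line);                     // skip the header row

          std::map<int, Device> devices;              // DeviceID -> Device
          while (std::getline(in, line)) {
              if (line.empty()) continue;
              std::istringstream row(line);           // operator>> skips tabs and spaces
              int from, to;
              long start, end;
              if (!(row >> from >> to >> start >> end)) continue;   // skip malformed rows

              Device& d = devices[from];              // creates the entry on first use
              d.id = from;
              Interaction i = { to, start, end };
              d.interactions.push_back(i);
          }

          std::cout << "parsed " << devices.size() << " devices\n";
          return 0;
      }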

    Read the article

  • What should the standard be for ReSTful URLS?

    - by gargantaun
    Since I can't find a chuffing job, I've been reading up on ReST and creating web services. The way I've interpreted it, the future is all about creating a web service for all your data before you build the web app. Which seems like a good idea. However, there seems to be a lot of contradictory thoughts on what the best scheme is for ReSTful URLs. Some people advocate simple pretty urls http://api.myapp.com/resource/1 In addition, some people like to add the API version to the url like so http://api.myapp.com/v1/resource/1 And to make things even more confusing, some people advocate adding the content-type to get requests http://api.myapp.com/v1/resource/1.xml http://api.myapp.com/v1/resource/1.json http://api.myapp.com/v1/resource/1.txt Whereas others think the content-type should be sent in the HTTP header. Soooooooo.... That's a lot of variation, which has left me unsure of what the best URL scheme is. I personally see the merits of the most comprehensive URL that includes a version number, resource locator and content-type, but I'm new to this so I could be wrong. On the other hand, you could argue that you should do "whatever works best for you". But that doesn't really fit with the ReST mentality as far as I can tell since the aim is to have a standard. And since a lot of you people will have more experience than me with ReST, I thought I'd ask for some guidance. So, with all that in mind... What should the standard be for ReSTful URLS?

    Read the article

  • File Storage for Web Applications: Filesystem vs DB vs NoSQL engines

    - by El Yobo
    I have a web application that stores a lot of user-generated files. Currently these are all stored on the server filesystem, which has several downsides for me. When we move "folders" (as defined by our application) we also have to move the files on disk (although this is more due to strange design decisions on the part of the original developers than a requirement of storing things on the filesystem). It's hard to write tests for file system actions; I have a mock filesystem class that logs actions like move, delete etc. without performing them, which more or less does the job, but I don't have 100% confidence in the tests. I will be adding some other jobs which need to access the files from other services to perform additional tasks (e.g. indexing in Solr, generating thumbnails, movie format conversion), so I need to get at the files remotely. Doing this over network shares seems dodgy... Dealing with permissions on the filesystem has sometimes given us problems in the past, although now that we've moved to a pure Linux environment this should be less of an issue. What are the downsides of storing files as BLOBs in MySQL? I guess it would massively increase the database size and reduce the effectiveness of caches, but are there other problems? Do the same problems exist with NoSQL systems like Cassandra? Does anyone have any other suggestions that might be appropriate?

    Read the article

  • Asp.net renders string with wrong encoding, but PHP doesn't (MySQL)

    - by citronas
    I took over an old PHP application with MySQL as the database. Inside the database there are tables including content with localized strings (and therefore special chars). Currently there is a PHP application accessing that database. My job is to create an ASP.net (C# codebehind) application that accesses those strings as well. That works; but as far as encoding goes, if I try to access these strings I do get a kind of encoding problem, with values like 'Ändern' and 'Prüfzeichen', but only in the ASP.net application. The PHP app sets utf-8 as the charset and the strings are perfectly rendered. In the ASP.net application it's gibberish, regardless of the page encoding. In the MySQL database, the charset for the table in question, 'translations', is set to 'latin1 -- cp1252 West European' and the collation to 'latin1_swedish_ci'. I can't seem to figure out what PHP apparently does and ASP.net does not. I traced the PHP code and could not find any sign of special encoding while getting a string from the database. The question is, how can I ensure correct encoding inside the ASP.net application without modifying the database (big changes to the PHP code are not possible either)? Does anybody have a clue?
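
    One thing worth trying, sketched below with placeholder connection details and column names: MySQL Connector/Net lets the connection string declare which character set the driver should assume when decoding the stored bytes (the CharSet option), so the ASP.net side can be made to interpret the column the same way the PHP side effectively does, without touching the database. Whether latin1 or utf8 is the right value depends on what the PHP app actually sends (e.g. whether it issues SET NAMES utf8), so treat it as something to experiment with.

      using MySql.Data.MySqlClient;

      public static string ReadTranslation(int id)
      {
          // placeholder server/credentials; CharSet tells the driver how to decode the stored bytes
          const string connectionString =
              "Server=localhost;Database=legacy_db;Uid=appuser;Pwd=secret;CharSet=latin1;";
              // try CharSet=utf8 instead if the PHP side issues SET NAMES utf8

          using (MySqlConnection conn = new MySqlConnection(connectionString))
          using (MySqlCommand cmd = new MySqlCommand(
                     "SELECT label FROM translations WHERE id = @id", conn))   // column names are placeholders
          {
              cmd.Parameters.AddWithValue("@id", id);
              conn.Open();
              object result = cmd.ExecuteScalar();
              return result == null ? null : (string)result;   // already decoded to .NET Unicode
          }
      }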

    Read the article
