Search Results

Search found 8299 results on 332 pages for 'job hierarchy'.


  • Interfaces with Hibernate annotations

    - by molleman
    Hello, I am wondering how I can annotate an interface. Below are two classes that implement an interface called Hierarchy; the Folder class holds a list of Hierarchy objects, each being either a Folder or a FileInformation instance:

        @Entity
        @Table(name = "FOLDER_TABLE")
        public class Folder implements Serializable, Hierarchy {

            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            @Column(name = "folder_id", updatable = false, nullable = false)
            private int fId;

            @Column(name = "folder_name")
            private String folderName;

            @OneToMany(cascade = CascadeType.ALL)
            @JoinTable(name = "FOLDER_JOIN_FILE_INFORMATION_TABLE",
                       joinColumns = { @JoinColumn(name = "folder_id") },
                       inverseJoinColumns = { @JoinColumn(name = "file_information_id") })
            private List<Hierarchy> fileInformation = new ArrayList<Hierarchy>();
        }

        @Entity
        @Table(name = "FILE_INFORMATION_TABLE")
        public class FileInformation implements Serializable, Hierarchy {

            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            @Column(name = "file_information_id", updatable = false, nullable = false)
            private int ieId;

            @Column(name = "location")
            private String location;
        }

    The interface itself is simply this:

        public interface Hierarchy {
        }

    I have searched the web for some way to annotate this, or for a workaround, but I cannot map the interface. I get a MappingException on the Folder's List of Hierarchy objects, and I don't know how to map the class correctly.
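
    Hibernate cannot use a bare marker interface as the target of an association. A common workaround is to pull the shared identifier up into an abstract @Entity superclass and have both Folder and FileInformation extend it; the following is a minimal sketch of that idea (the class name HierarchyNode and column names are assumptions, not code from the original post):

        import java.io.Serializable;
        import java.util.ArrayList;
        import java.util.List;
        import javax.persistence.*;

        // Abstract superclass gives Hibernate a concrete, mappable target type
        // in place of the unmappable marker interface.
        @Entity
        @Inheritance(strategy = InheritanceType.TABLE_PER_CLASS)
        public abstract class HierarchyNode implements Serializable {
            @Id
            // TABLE generation, since identity-style AUTO keys do not combine
            // portably with TABLE_PER_CLASS inheritance.
            @GeneratedValue(strategy = GenerationType.TABLE)
            @Column(name = "node_id", updatable = false, nullable = false)
            private long id;
        }

        @Entity
        @Table(name = "FOLDER_TABLE")
        public class Folder extends HierarchyNode {
            @Column(name = "folder_name")
            private String folderName;

            // The association now targets the superclass instead of the interface.
            @OneToMany(cascade = CascadeType.ALL)
            @JoinTable(name = "FOLDER_JOIN_FILE_INFORMATION_TABLE",
                       joinColumns = { @JoinColumn(name = "folder_id") },
                       inverseJoinColumns = { @JoinColumn(name = "file_information_id") })
            private List<HierarchyNode> children = new ArrayList<HierarchyNode>();
        }

    FileInformation would extend HierarchyNode in the same way, and the Hierarchy interface can be kept purely as a Java-side marker if other code depends on it.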


  • Adding GestureOverlayView to my SurfaceView class, how to add to view hierarchy?

    - by Codejoy
    I was informed in a later answer that I have to add the GestureOverlayView I create in code to my view hierarchy, and I am not 100% sure how to do that. Below is the original question, for completeness. I want my game to be able to recognize gestures. I have this nice SurfaceView class whose onDraw draws my sprites, and a thread that drives it by calling onDraw. This all works great. I am trying to add a GestureOverlayView to it, and it just isn't working. I finally hacked it to where it doesn't crash, but this is what I have:

        public class Panel extends SurfaceView implements SurfaceHolder.Callback, OnGesturePerformedListener {

            public Panel(Context context) {
                theContext = context;
                mLibrary = GestureLibraries.fromRawResource(context, R.raw.myspells);
                GestureOverlayView gestures = new GestureOverlayView(theContext);
                gestures.setOrientation(GestureOverlayView.ORIENTATION_VERTICAL);
                gestures.setEventsInterceptionEnabled(true);
                gestures.setGestureStrokeType(GestureOverlayView.GESTURE_STROKE_TYPE_MULTIPLE);
                gestures.setLayoutParams(new LayoutParams(LayoutParams.FILL_PARENT, LayoutParams.FILL_PARENT));
                //GestureOverlayView gestures = (GestureOverlayView) findViewById(R.id.gestures);
                gestures.addOnGesturePerformedListener(this);
            }

            // ... onDraw, surfaceCreated(..), etc. ...

            public void onGesturePerformed(GestureOverlayView overlay, Gesture gesture) {
                ArrayList<Prediction> predictions = mLibrary.recognize(gesture);
                // We want at least one prediction
                if (predictions.size() > 0) {
                    Prediction prediction = predictions.get(0);
                    // We want at least some confidence in the result
                    if (prediction.score > 1.0) {
                        // Show the spell
                        Toast.makeText(theContext, prediction.name, Toast.LENGTH_SHORT).show();
                    }
                }
            }
        }

    onGesturePerformed is never called. The official example has the GestureOverlayView in XML; I am not using that. My activity is simple:

        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            requestWindowFeature(Window.FEATURE_NO_TITLE);
            Panel p = new Panel(this);
            setContentView(p);
        }

    So I am at a bit of a loss as to the missing piece of information here: onGesturePerformed is never called, and the nice pretty yellow "you are drawing a gesture" feedback never shows up.
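
    For reference, the usual wiring is to build the overlay in the activity, add the SurfaceView as its child, and set the overlay as the content view. A minimal sketch, assuming the Panel class from the question (the activity name GameActivity is an assumption):

        import android.app.Activity;
        import android.gesture.GestureOverlayView;
        import android.os.Bundle;
        import android.view.Window;

        public class GameActivity extends Activity {
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                requestWindowFeature(Window.FEATURE_NO_TITLE);

                Panel panel = new Panel(this);

                // GestureOverlayView is a FrameLayout, so it can wrap the game's
                // SurfaceView. The overlay must actually sit in the view hierarchy
                // to receive touch events and draw the gesture feedback.
                GestureOverlayView overlay = new GestureOverlayView(this);
                overlay.addView(panel);
                overlay.addOnGesturePerformedListener(panel);

                setContentView(overlay);
            }
        }

    With this arrangement, the overlay created inside the Panel constructor becomes unnecessary, since an overlay that is never attached to the hierarchy can never fire onGesturePerformed.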


  • SQL Server 2008 vs 2005 UDF XML performance problem

    - by user344495
    OK, we have a simple UDF that takes an XML integer list and returns a table:

        CREATE FUNCTION [dbo].[udfParseXmlListOfInt]
        (
            @ItemListXml XML (dbo.xsdListOfInteger)
        )
        RETURNS TABLE
        AS
        RETURN
        (
            --- parses the XML and returns it as an int table ---
            SELECT ListItems.ID.value('.','INT') AS KeyValue
            FROM @ItemListXml.nodes('//list/item') AS ListItems(ID)
        )

    In a stored procedure we populate a temp table using this UDF:

        INSERT INTO @JobTable (JobNumber, JobSchedID, JobBatID, StoreID, CustID,
                               CustDivID, BatchStartDate, BatchEndDate, UnavailableFrom)
        SELECT JOB.JobNumber, JOB.JobSchedID, ISNULL(JOB.JobBatID,0), STO.StoreID, STO.CustID,
               ISNULL(STO.CustDivID,0), AVL.StartDate, AVL.EndDate,
               ISNULL(AVL.StartDate, DATEADD(day, -8, GETDATE()))
        FROM dbo.udfParseXmlListOfInt(@JobNumberList) TMP
            INNER JOIN dbo.JobSchedule JOB ON (JOB.JobNumber = TMP.KeyValue)
            INNER JOIN dbo.Store STO ON (STO.StoreID = JOB.StoreID)
            INNER JOIN dbo.JobSchedEvent EVT ON (EVT.JobSchedID = JOB.JobSchedID AND EVT.IsPrimary = 1)
            LEFT OUTER JOIN dbo.Availability AVL ON (AVL.AvailTypID = 5 AND AVL.RowID = JOB.JobBatID)
        ORDER BY JOB.JobSchedID;

    For a simple list of 10 job numbers, SQL Server 2005 returns in less than 1 second; SQL Server 2008, run against the exact same data, takes 7 minutes. This is on a much faster machine with more memory. Any ideas?


  • A pseudo-listener for AlwaysOn Availability Groups for SQL Server virtual machines running in Azure

    - by MikeD
    I am involved in a project that is implementing SharePoint 2013 on virtual machines hosted in Azure. The back-end data tier consists of two Azure VMs running SQL Server 2012, with the SharePoint databases contained in an AlwaysOn Availability Group. I used the "Tutorial: AlwaysOn Availability Groups in Windows Azure (GUI)" to help me implement this setup.

    Because Azure DHCP will not assign multiple unique IP addresses to the same VM, having an AG listener in Azure is not currently supported. I wanted to figure out another mechanism to support a "pseudo listener" of some sort.

    First, I created a CNAME (alias) record in the DNS zone with a short TTL (time to live) of 5 minutes (I may yet make this even shorter). The record represents a logical name (let's say the alias is SPSQL) of the server to connect to for the databases in the availability group (AG). When Server1 was hosting the primary replica of the AG, I would set the CNAME of SPSQL to be SERVER1. When the AG failed over to Server2, I wanted to set the CNAME to SERVER2. Seemed simple enough. (It's important to point out that the connection strings for my SharePoint services should use the CNAME alias, and not the actual server name. This whole thing falls apart otherwise.)

    To accomplish this, I created identical SQL Agent jobs on Server1 and Server2, with two steps.

    Step 1: Determine if this server is hosting the primary replica. This is a T-SQL step using this script:

        declare @agName sysname = 'AGTest'
        set nocount on

        declare @primaryReplica sysname

        select @primaryReplica = agState.primary_replica
        from sys.dm_hadr_availability_group_states agState
            join sys.availability_groups ag on agState.group_id = ag.group_id
        where ag.name = @agName

        if not exists(
            select *
            from sys.dm_hadr_availability_group_states agState
                join sys.availability_groups ag on agState.group_id = ag.group_id
            where @@servername = agState.primary_replica
                and ag.name = @agName)
        begin
            raiserror ('Primary replica of %s is not hosted on %s, it is hosted on %s', 17, 1, @agName, @@servername, @primaryReplica)
        end

    This script determines whether the primary-replica value of the AG is the same as the server name, which means that our server is hosting the current primary replica (you should update the value of the @agName variable to the name of your AG). If this is true, I want the DNS alias to point to this server. If the current server is not hosting the primary replica, then the script raises an error. Also, if the script can't be executed because it cannot connect to the server, that also will generate an error. For the job step settings, I set the On Failure option to "Quit the job reporting success". The next step in the job will set the DNS alias to this server name, and I only want to do that if I know that it is the current primary replica; otherwise I don't want to do anything. I also include the step output in the job history so I can see the error message.

    Step 2: Update the CNAME entry in DNS with this server's name. I used a PowerShell script to accomplish this:

        $cname = "SPSQL.contoso.com"
        $query = "Select * from MicrosoftDNS_CNAMEType"
        $dns1 = "dc01.contoso.com"
        $dns2 = "dc02.contoso.com"

        if ((Test-Connection -ComputerName $dns1 -Count 1 -Quiet) -eq $true)
        {
            $dnsServer = $dns1
        }
        elseif ((Test-Connection -ComputerName $dns2 -Count 1 -Quiet) -eq $true)
        {
            $dnsServer = $dns2
        }
        else
        {
            $msg = "Unable to connect to DNS servers: " + $dns1 + ", " + $dns2
            Throw $msg
        }

        $record = Get-WmiObject -Namespace "root\microsoftdns" -Query $query -ComputerName $dnsServer | ? { $_.Ownername -match $cname }
        $thisServer = [System.Net.Dns]::GetHostEntry("LocalHost").HostName + "."
        $currentServer = $record.RecordData

        if ($currentServer -eq $thisServer)
        {
            $cname + " CNAME is up to date: " + $currentServer
        }
        else
        {
            $cname + " CNAME is being updated to " + $thisServer + ". It was " + $currentServer
            $record.RecordData = $thisServer
            $record.put()
        }

    This script does a few things:

    - finds a responsive domain controller (Test-Connection does a ping and returns a Boolean value if you specify the -Quiet parameter)
    - makes a WMI call to the domain controller to get the current CNAME record value (Get-WmiObject)
    - gets the FQDN of this server (GetHostEntry)
    - checks if the CNAME record is correct and updates it if necessary

    (You should update the values of the variables $cname, $dns1 and $dns2 for your environment.)

    Since my domain controllers are also hosted in Azure VMs, either one of them could be down at any point in time, so I need to find a DC that is responsive before attempting the DNS call. The other little thing here is that the CNAME record contains the FQDN of a machine, plus it ends with a period. So the comparison of the CNAME record has to take the trailing period into account.

    When I tested this step, I was getting ACCESS DENIED responses from PowerShell for the Get-WmiObject cmdlet that does a remote lookup on the DC. This occurred because the SQL Agent service account was not a member of the Domain Admins group, so I decided to create a SQL credential to store the credentials for a domain administrator account and use it as a PowerShell proxy (rather than give the service account Domain Admins membership). In SQL Server Management Studio, right-click on the Credentials node (under the server's Security node) and choose New Credential... Then, under SQL Agent --> Proxies, right-click on the PowerShell node and choose New Proxy... Finally, in the job step properties for the PowerShell step, select the new proxy in the Run As drop-down.

    I created this two-step job on both nodes of the Availability Group, but if you had more than two nodes, just create the same job on all the servers. I set the schedule for the job to execute every minute. When the server that is hosting the primary replica is running the job, the job history looks like this: (screenshot). The job history on the secondary server looks like this: (screenshot).

    When a failover occurs, the SQL Agent job on the new primary replica will detect that the CNAME needs to be updated within a minute. Based on the TTL of the CNAME (which I said at the beginning was 5 minutes), the SharePoint servers will get the new alias within five minutes and should be able to reconnect. I may want to shorten the TTL to reduce the time it takes for the client connections to use the new alias.

    Using a DNS CNAME and a SQL Agent job on all servers hosting AG replicas, I was able to create a pseudo-listener that automatically changes the name of the server hosting the primary replica, for a scenario where I cannot use a regular AG listener (in this case, because the servers are all hosted in Azure).


  • SQL Sentry First Impressions

    - by AjarnMark
    After struggling to defend my SQL Servers from a political attack recently, I realized that I needed better tools to back me up, and SQL Sentry is the leading candidate.

    A couple of weeks ago, seemingly from out of nowhere, complaints from the business users started coming in that one of the core internal applications was running dramatically slower than normal, and fingers were being pointed at the SQL Server. Unfortunately, we don't have a production DBA whose entire job is to monitor and maintain our SQL Servers. The responsibility falls to me to do the best I can, investing only a small portion of my time, because there are so many other responsibilities to take care of, and our industry is still deep in recession. I inherited these SQL Servers and have made significant improvements in process and procedure, but I had not yet made the time to take real baseline measurements or keep a really close eye on the performance. Like many DBAs, I wrote several of my own tools and used the "built-in tools" like Profiler, PerfMon, and sp_who2 (did I mention most of our instances are SQL Server 2000?). These have all served me well for in-the-moment troubleshooting and maintenance, but they really fell down on the job when I was called upon to "prove" that SQL Server performance was acceptable and, more importantly, had not degraded recently (i.e. historical comparisons). I really didn't have anything from a historical-comparison perspective, but I was able to show that current performance was acceptable, and deflect attention back onto other components (which in fact turned out to be the real culprit).

    That experience dramatically illustrated the need for better monitoring tools. Coincidentally, I had been talking recently to my boss about the mini nightmare of monitoring several critical and interdependent overnight jobs that operate on separate instances of SQL Server. Among other tools, I had been using Idera's SQL Job Manager, which is a free tool and did a nice job of showing me job schedules and histories in a nice calendar view. This worked fairly well, and for the money (did I mention it was free?) it couldn't be beat. But it is based on the stored job history in MSDB, and there were other performance problems that we ran into when we started changing the settings for how much job history to retain, in order to be able to look back a month or more in the calendar view. Another coincidence (if you believe in such things) was that when we had some of those performance challenges, I posted a couple of questions to the #sqlhelp hashtag on Twitter, and Greg Gonzalez (@SQLSensei) suggested I check out SQL Sentry's Event Manager. At the time, I just thought he worked there, but later found out that he founded the company. When I took a quick look at the features and benefits, the one that really jumped out at me is Chaining and Queueing, which sounded like it would really help with our "interdependent jobs on different servers" issue.

    I know that is a lot of background story and coincidences, but hopefully you have stuck with me so far, and now we have arrived at the point where last week I downloaded and installed the 30-day trial of the SQL Sentry Power Suite, which is Event Manager plus Performance Advisor. And I must say that I really like what I see so far. Here are a few highlights:

    - Great Support. I had two issues getting the trial set up and monitoring a handful of our servers. One was entirely my fault (I missed a security setting in SQL 2008) and the other was mostly my fault (a late change to some config settings that were apparently cached and did not get refreshed properly). In both cases, the support staff at SQL Sentry were very responsive and rather quickly figured out the cause and fix for each. This left me with a great impression of the company. Kudos to them!

    - Chaining and Queueing. While I have not yet activated this feature, I am very excited about the possibilities. We have jobs on three different instances of SQL Server that have to be run in a certain order, and each has to finish before the next can successfully begin, and I believe this feature will ensure just that. It has been a real pain in the backside when one of those jobs runs just a little too long and does not finish before the job on another instance starts, thus triggering a chain reaction of either outright job failures or, worse, successful completion of completely invalid processing.

    - Calendar View. I really, really like the Event Manager calendar view, where I can see all jobs and events across all instances and identify potential resource contention as well as windows of opportunity for maintenance activity. Very well done, and based on Event Manager's own database of accumulated historical information rather than querying the source instances every time.

    - Performance Advisor Dashboard History View. This view lets me quickly select a date and time range, and it displays graphs of key SQL Server and Windows metrics. This is exactly the thing I needed to answer the "has performance changed recently" question at the beginning of this post.

    - Reporting Services Subscription Jobs with Report Name. This was a big and VERY pleasant surprise. If you have ever looked at the list of SQL Server jobs that SQL Server Reporting Services creates when you make a subscription, you will notice that they all have some sort of GUID as the name of the job. This is really ugly, and really annoying, because when you are just looking at the SQL Agent and Job Activity Monitor, if you see that Job X failed, you really do not have any indication in the name or the properties of the job itself as to what report it was for. But with SQL Sentry Event Manager you do. The Jobs list in the Navigator pane in SQL Sentry, amazingly, displays the name of the report that the subscription job is for. And when you open it to see more details, it shows you the full Reporting Services path to that report, so you can immediately track it down in the Report Manager in case you want to identify/notify the owner or edit the subscription information. I did not expect this at all, but I sure do like it. HOORAY!

    That is just my first impressions from using the tools for a few days. And I haven't even gotten into how it showed me where I was completely mistaken about one aspect of my SQL Server disk configurations. I'll share that lesson in another blog entry. But I have to say it again, the combination of Event Manager and Performance Advisor working together has really made me a fan.


  • Does Ubuntu Server have any sort of cron job to automatically clear /tmp?

    - by DWilliams
    I know it clears out /tmp on reboots, but I haven't been able to find any sort of cron job on my server that clears /tmp. I recently set up a script that writes lots of files to /tmp and my server usually goes several months between reboots so I'm concerned about it being cluttered. I've seen several other distros that have a tmpwatch script installed by default. Ubuntu's repository seems to have replaced tmpwatch with tmpreaper. Is there any mechanism in place on Ubuntu (8.04 currently, soon to be upgraded to 10.04 when I get around to it) to clean up temp files on a server that doesn't regularly reboot or do I need to install tmpreaper?


  • How to run "mongodb --repair" if it's an Upstart job?

    - by Wolfram Arnold
    My MongoDB server died. The log says something about an unclean shutdown and an existing mongodb.lock file. It recommends removing the lock file, then restarting the MongoDB server with --repair. However, on my system (Ubuntu 10.10), I've installed MongoDB via an apt-get package, and it's set up as an Upstart job. If I run mongodb from the command line, it won't find the data; none of the paths are set correctly. Surely, I could read the man page, try to emulate what Upstart would do, and give it all the correct parameters plus --repair, but that seems like a lot of trouble. There must be a simpler way that's not fighting Upstart. What is it?


  • Cron job executes every minute but should be set up to execute every 4 hours

    - by Frank V
    Note: I've viewed "cron: can't lock /var/run/crond.pid, otherpid may be 3759", but I believe my question is different (though with the same resulting problem). I'm very new to cron. Using crontab, I set up a script to run a Python script every minute, to test that everything was working. It worked great, so I wanted to switch it to run every 4 hours. I changed my * * * * * {...} to * */4 * * * {...}, but the job continues to run every minute. It's been like this for the last hour or so. When I attempt to run cron restart (thinking that would solve the problem), I receive the following error message:

        cron: can't lock /var/run/crond.pid, otherpid may be 2311: Resource temporarily unavailable

    Is my cron syntax wrong? And why might I not be able to restart cron?


  • How can I run a job when the server load is low?

    - by jberryman
    I have a command that runs a disk snapshot (on EC2, freezing an XFS disk and running an EBS snapshot command), which is set to run on a regular schedule as a cron job. Ideally I would like to be able to have the command delayed for a period of time if the disk is being used heavily at the moment the task is scheduled to run. I'm afraid that using nice/ionice might not have the proper effect, as I would like the script to run with high priority while it is running (i.e. wait for a good time, then finish fast). Thanks.


  • How to reliably run a batch job every 5 seconds?

    - by Benjamin
    I'm building an application where the sending of all notifications (email, SMS, fax) will be asynchronous. The application will write the notifications to the database, and a batch job will read these notifications and send them with the appropriate transport. I first read about ways to run cron more often than every minute, and realized this was a bad idea. The batch scripts are written in PHP, and I guess that writing a proper daemon would be quite an overhead (though I'm open to any suggestion, as PHP can run indefinitely as well). What I have in mind is a solution that would:

    - Run the PHP script every 5 seconds
    - Check that the previous run has finished, or abort (never 2 concurrent batches running)
    - Kill the script if alive for more than x minutes (a safeguard in case it hangs)
    - Start with the system (if a reboot occurs)

    Any idea how to do this?


  • Rsync to take the newest file. And a cron job?

    - by user1704877
    I have a log file on two different servers. The servers are under a load balancer so half the traffic goes to one server, and half the traffic goes to the other server. I need to take the newest log file from one machine and transfer that log file to the other machine. So if one log file is changed on one server, it gets updated on the other server. I think I need to use rsync. And do I also need to put it in a cron job?


  • Strange results - I obtain the same value for all keys

    - by Pietro Luciani
    I have a problem with MapReduce. Given as input a list of songs ("Songname"#"UserID"#"boolean"), I must produce a song list that specifies how many different users listened to each song - so an output of ("Songname", "timelistening"). I used a Hashtable to allow only one pair per user. With short files it works well, but when I feed it a list of about 1,000,000 records it returns the same value (20) for all records.

    This is my mapper:

        public static class CanzoniMapper extends Mapper<Object, Text, Text, IntWritable> {

            private IntWritable userID = new IntWritable(0);
            private Text song = new Text();

            public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
                String[] caratteri = value.toString().split("#");
                if (caratteri[2].equals("1")) {
                    song.set(caratteri[0]);
                    userID.set(Integer.parseInt(caratteri[1]));
                    context.write(song, userID);
                }
            }
        }

    This is my reducer:

        public static class CanzoniReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

            private IntWritable result = new IntWritable();

            public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
                Hashtable<IntWritable, Text> doppioni = new Hashtable<IntWritable, Text>();
                for (IntWritable val : values) {
                    doppioni.put(val, key);
                }
                result.set(doppioni.size());
                //doppioni.clear();
                context.write(key, result);
            }
        }

    And this is the main:

        Configuration conf = new Configuration();
        Job job = new Job(conf, "word count");
        job.setJarByClass(Canzoni.class);
        job.setMapperClass(CanzoniMapper.class);
        //job.setCombinerClass(CanzoniReducer.class);
        //job.setNumReduceTasks(2);
        job.setReducerClass(CanzoniReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);

    Any idea?
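
    A likely cause (an educated guess, not something confirmed in the original post): Hadoop reuses a single IntWritable instance for every element of the reducer's values iterator, so each doppioni.put(val, key) call stores the same mutating object, and the table's final size no longer counts distinct users. A minimal sketch of a reducer that copies the primitive value out of the writable before collecting it, using a HashSet of plain ints instead of a Hashtable keyed by writables:

        import java.io.IOException;
        import java.util.HashSet;
        import java.util.Set;
        import org.apache.hadoop.io.IntWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Reducer;

        public class CanzoniReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

            private final IntWritable result = new IntWritable();

            @Override
            public void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                // val is the SAME IntWritable object on every iteration; copy the
                // int out of it instead of storing the writable itself.
                Set<Integer> listeners = new HashSet<Integer>();
                for (IntWritable val : values) {
                    listeners.add(val.get());
                }
                result.set(listeners.size());
                context.write(key, result);
            }
        }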


  • CakePHP: How can I change this find call to include all records that do not exist in the associated table?

    - by Stephen
    I have a few tables with the following relationships: Company hasMany Jobs, Employees, Trucks, and Users. I've got all my foreign keys set up properly, along with the tables' models, controllers, and views. Originally, the jobs table had a boolean field called "assigned". The following find operation (from the JobsController) successfully returns all employees, all trucks, and any jobs that are not assigned and fall on a certain day, for a single company (without returning users, by utilizing the containable behavior):

        $this->set('resources', $this->Job->Company->find('first', array(
            'conditions' => array(
                'Company.id' => $company_id
            ),
            'contain' => array(
                'Employee',
                'Truck',
                'Job' => array(
                    'conditions' => array(
                        'Job.assigned' => false,
                        'Job.pickup_date' => date('Y-m-d', strtotime('Today'))
                    )
                )
            )
        )));

    Now, since writing this code, I decided to do a lot more with the job assignments. So I've created a new model "Assignment" that belongsTo Truck and belongsTo Job, and added hasMany Assignments to both the Truck model and the Job model. I have both foreign keys in the assignments table, along with some other assignment fields. Now I'm trying to get the same information as above, only instead of checking the assigned field from the jobs table, I want to check the assignments table to ensure that the job does not exist there. I can no longer use the containable behavior if I'm going to use the "joins" feature of the find method, due to MySQL errors (according to the Cookbook). But the following query returns all jobs, even if they fall on different days:

        $this->set('resources', $this->Job->Company->find('first', array(
            'joins' => array(
                array(
                    'table' => 'employees',
                    'alias' => 'Employee',
                    'type' => 'LEFT',
                    'conditions' => array('Company.id = Employee.company_id')
                ),
                array(
                    'table' => 'trucks',
                    'alias' => 'Truck',
                    'type' => 'LEFT',
                    'conditions' => array('Company.id = Truck.company_id')
                ),
                array(
                    'table' => 'jobs',
                    'alias' => 'Job',
                    'type' => 'LEFT',
                    'conditions' => array('Company.id = Job.company_id')
                ),
                array(
                    'table' => 'assignments',
                    'alias' => 'Assignment',
                    'type' => 'LEFT',
                    'conditions' => array('Job.id = Assignment.job_id')
                )
            ),
            'conditions' => array(
                'Job.pickup_date' => $day,
                'Company.id' => $company_id,
                'Assignment.job_id IS NULL'
            )
        )));


  • What factors should a fresher (for a programmer job) consider and learn before saying yes to an employer's job offer?

    - by Senthil
    What factors should a fresher (applying for a programmer job) consider and learn about before saying yes to an employer's job offer, and to a contract? And most importantly, how should one get these details? How can I approach them? I know some employers don't want to give such details, right? I have been shortlisted by a software company that is a partner with Microsoft and works with technologies like VB, ADO.NET, some reporting tools, SQL Server, etc. Tell me about the scope of that, because they are asking me to sign a 2-year bond agreement. I want to be a great programmer and project leader after 5 years - advise me, guys. Language/OS is not a problem for me, as I am curious to learn more things. Most of the SO members are programmers, so your advice is greatly appreciated.


  • How do I control the output file names and content of a Hadoop streaming job?

    - by Eran Kampf
    Is there a way to control the output filenames of a Hadoop Streaming job? Specifically, I would like my job's output file names and content to be organized by the key the reducer outputs - each file would contain values for only one key, and its name would be the key.

    Update: Just found the answer - using a Java class that derives from MultipleOutputFormat as the job's output format allows control of the output file names:

        http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/mapred/lib/MultipleOutputFormat.html

    I haven't seen any samples for this out there... Can anyone point out a Hadoop Streaming sample that makes use of a custom output format Java class?
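
    For illustration, a minimal sketch of such a class (the class name KeyNameOutputFormat and the Text key/value types are assumptions, not from the original question). It extends MultipleTextOutputFormat, the text-based subclass of MultipleOutputFormat from the old mapred API; for streaming it would be compiled into a jar, shipped on the task classpath (e.g. via -libjars), and passed with streaming's -outputformat option:

        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat;

        // Routes each record to a file named after its key, and suppresses the
        // key inside the file so each file holds only the values.
        public class KeyNameOutputFormat extends MultipleTextOutputFormat<Text, Text> {

            @Override
            protected String generateFileNameForKeyValue(Text key, Text value, String name) {
                return key.toString();
            }

            @Override
            protected Text generateActualKey(Text key, Text value) {
                return null; // write only the value into the per-key file
            }
        }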


  • Hudson, is it possible to make a plugin configuration non-visible depending on job type?

    - by Haju
    The problem with the plugin (an SCM plugin) I'm working on is that it doesn't work in any job/project type other than a free-style project. I'd like to hide the plugin configuration on the project configuration page for the other job/project types (Maven, matrix, etc.), because it seems to distract people. I wonder if there's a "right" way of doing this, or any way at all? Currently the project type is checked as the first thing in the checkout method, and if it doesn't match, the build is failed instantly - but this is not a completely satisfactory solution, since it creates extra work for the end user.


  • Waiting for a submitted job to finish in Oracle PL/SQL?

    - by vicjugador
    I'm looking for the equivalent of Java's thread.join() in PL/SQL. I.e., I want to kick off a number of jobs (threads) and then wait for them to finish. How is this possible in PL/SQL? I'm thinking of using dbms_job.submit (I know it's deprecated); dbms_scheduler is also an alternative. My code:

        DECLARE
            jobno1 number;
            jobno2 number;
        BEGIN
            dbms_job.submit(jobno1, 'begin dbms_lock.sleep(10); dbms_output.put_line(''job 1 exit''); end;');
            dbms_job.submit(jobno2, 'begin dbms_lock.sleep(10); dbms_output.put_line(''job 2 exit''); end;');
            dbms_job.run(jobno1);
            dbms_job.run(jobno2);
            -- Need code to wait for jobno1 to finish
            -- Need code to wait for jobno2 to finish
        END;


  • Need help on a problem from a programming contest

    - by topher
    I attended a local programming contest in my country. The name of the contest is "ACM-ICPC Indonesia National Contest 2013". The contest ended on 2013-10-13 15:00:00 (GMT+7) and I am still curious about one of the problems. You can find the original version of the problem here.

    Brief problem explanation: There is a set of "jobs" (tasks) to be performed on several "servers" (computers). Each job must be executed strictly from its start time S(i) to its end time E(i). Each server can perform only one job at a time. (The complicated thing goes here:) It takes some time for a server to switch from one job to another. If a server finishes job Jx, then to start job Jy it needs an intermission time T(x,y) after Jx completes - the time required by the server to clean up Jx and load Jy. In other words, job Jy can run after job Jx on the same server if and only if E(x) + T(x,y) <= S(y). The problem is to compute the minimum number of servers needed to do all jobs.

    Example: let there be 3 jobs:

        S(1) = 3  and E(1) = 6
        S(2) = 10 and E(2) = 15
        S(3) = 16 and E(3) = 20

        T(1,2) = 2, T(1,3) = 5
        T(2,1) = 0, T(2,3) = 3
        T(3,1) = 0, T(3,2) = 0

    In this example, we need 2 servers: server 1 runs J(1) and J(2); server 2 runs J(3).

    Sample input (the first line is the number of test cases; each case gives the number of jobs, then S(i) and E(i) for each job, then the T matrix, sized by the number of jobs):

        3
        3
        3 6
        10 15
        16 20
        0 2 5
        0 0 3
        0 0 0
        4
        8 10
        4 7
        12 15
        1 4
        0 0 0 0
        0 0 0 0
        0 0 0 0
        0 0 0 0
        4
        8 10
        4 7
        12 15
        1 4
        0 50 50 50
        50 0 50 50
        50 50 0 50
        50 50 50 0

    Sample output:

        Case #1: 2
        Case #2: 1
        Case #3: 4

    Personal comments: The switch times can be represented as a graph matrix, so I suppose this is a directed-acyclic-graph problem. The methods I have tried so far are brute force and greedy, but I got Wrong Answer (unfortunately I don't have my code anymore). It could probably be solved by dynamic programming too, but I'm not sure. I really have no clear idea how to solve this problem, so a simple hint or insight would be very helpful to me.
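
    One standard way to model this (offered as a hedged hint; it is not taken from the original thread, though it reproduces all three sample outputs): build a graph with an edge x -> y whenever E(x) + T(x,y) <= S(y). With non-negative switch times every edge goes from an earlier-ending job to a later-starting one, so the graph is acyclic, and each server executes one path in this DAG. The answer is therefore the minimum number of vertex-disjoint paths covering all nodes, which for a DAG equals n minus the size of a maximum bipartite matching. A compact Java sketch using Kuhn's augmenting-path algorithm (0-based job indices):

        import java.util.Arrays;

        public class MinServers {
            static boolean[][] canFollow; // canFollow[x][y]: one server can run y after x
            static int[] matchedTo;       // matchedTo[y] = x matched into y, or -1
            static boolean[] visited;

            static boolean augment(int x, int n) {
                for (int y = 0; y < n; y++) {
                    if (canFollow[x][y] && !visited[y]) {
                        visited[y] = true;
                        if (matchedTo[y] == -1 || augment(matchedTo[y], n)) {
                            matchedTo[y] = x;
                            return true;
                        }
                    }
                }
                return false;
            }

            // s, e: start/end times; t: switch-time matrix. Returns minimum servers.
            static int minServers(int[] s, int[] e, int[][] t) {
                int n = s.length;
                canFollow = new boolean[n][n];
                for (int x = 0; x < n; x++)
                    for (int y = 0; y < n; y++)
                        canFollow[x][y] = x != y && e[x] + t[x][y] <= s[y];
                matchedTo = new int[n];
                Arrays.fill(matchedTo, -1);
                int matching = 0;
                for (int x = 0; x < n; x++) {
                    visited = new boolean[n];
                    if (augment(x, n)) matching++;
                }
                return n - matching; // minimum path cover of the DAG
            }

            public static void main(String[] args) {
                // Sample case 1 from the problem statement; expected answer: 2.
                int[] s = {3, 10, 16}, e = {6, 15, 20};
                int[][] t = {{0, 2, 5}, {0, 0, 3}, {0, 0, 0}};
                System.out.println(minServers(s, e, t));
            }
        }

    The other two sample cases check out the same way: with all-zero switch times the four jobs chain onto one server (answer 1), and with switch time 50 everywhere no job can follow another (answer 4).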


  • Tweaking Hudson memory usage

    - by rovarghe
    Hudson 3.1 has some performance optimizations that greatly reduce its memory footprint. Prior to this, Hudson always held the entire data model (all jobs and all builds) in memory, which affected scalability. Some installations configured heap sizes in excess of 1GB to counteract this. Hudson 3.1.x maintains an MRU cache and only loads jobs and builds as they are required. Because of the inability to change existing APIs and remain backward compatible with plugins, there were limits to how far we could go with this approach.

    Memory optimizations almost always come with a related cost, in this case additional I/O that has to be performed to load data on request. On a small site with frequent traffic this is usually not noticeable, since the MRU cache will usually hold on to all the data. A large site with infrequent traffic might experience some delays when the first request hits the server after a long gap. If you have a large heap and are able to allocate more memory, the cache settings can be adjusted to take advantage of this and even go back to pre-3.1 behavior.

    All the cache settings can be passed as options to the JVM container (Tomcat or the default Jetty container) using the -D option. There are two caches, independent of each other: one for jobs and the other for builds.

    For the jobs cache:

    - hudson.jobs.cache.evict_in_seconds (default=60): Seconds from last access (could be because of a servlet request or a background cron thread) after which a job should be purged from the cache. Set this to 0 to never purge based on time.
    - hudson.jobs.cache.initial_capacity (default=1024): Initial number of jobs the cache can accommodate. Setting this to the number of jobs you typically display on your Hudson landing page or home page will speed up consecutive access to that page. If the default is too large, you may consider downsizing and using that memory for the builds cache instead.
    - hudson.jobs.cache.max_entries (default=1024): Maximum number of jobs in the cache. The default is large enough for most installations, but if you find I/O activity whenever you access the Hudson home page you might consider increasing this - but first verify whether the I/O is caused by frequent eviction (see above) rather than by the cache not being large enough.

    For the builds cache:

    The builds cache is used to store Build objects as they are read from storage. Typically this happens when a user drills down into the details of a particular job from the Hudson home page. The cache is shared among builds for different jobs, since in most installations all jobs are not accessed with the same frequency, so a per-job builds cache would be a waste of memory.

    - hudson.job.builds.cache.evict_in_seconds (default=60): Same as the equivalent jobs-cache setting, applied to builds.
    - hudson.job.builds.cache.initial_capacity (default=512): Same as the equivalent jobs-cache setting. Note the smaller initial size. If your site stores a large number of builds and frequently accesses more of them, you might consider bumping this up.
    - hudson.job.builds.cache.max_entries (default=10240): The default max is large enough for most installations. The builds cache holds bigger objects, so be careful about increasing the upper limit on this. See the section on monitoring below.

    Sample usage:

        java -jar hudson-war-3.1.2-SNAPSHOT.war -Dhudson.jobs.cache.evict_in_seconds=300 \
            -Dhudson.job.builds.cache.evict_in_seconds=300

    Monitoring cache usage

    The 'jmap' tool that comes with the JDK can be used to monitor cache performance in an indirect way, by looking at the number of Job and Build objects in each cache. Find the PID of the Hudson instance and run:

        $ jmap -histo:live <pid> | grep 'hudson.model.*Lazy.*Key$'

    Here's a sample output:

        num     #instances         #bytes  class name
        523:            28            896  hudson.model.RunMap$LazyRunValue$Key
        1200:            3             96  hudson.model.LazyTopLevelItem$Key

    These are the keys to the jobs (LazyTopLevelItem$Key) and builds (RunMap$LazyRunValue$Key) in the caches, so counting the number of keys is a good indicator of the number of items in each cache at any given moment. The size in bytes can be ignored: it is just the size of the keys, not the actual sizes of the objects they hold. Those sizes can only be obtained with a profiler.

    With the output above we can conclude that there are 3 jobs and 28 builds in memory. The 28 builds could all be from 1 job or from all 3 jobs. Over time, on an idle system, these should get evicted and the caches should be empty. In practice, because of background cron threads and triggers, jobs rarely fall to zero. Access to a job or a build by a cron thread resets its eviction timer.


  • MySQL: job failed to start, mysqld.sock is missing

    - by Freefri
    How can I fix this and start mysql-server? After /etc/init.d/mysql start or service mysql start I get the message:

        start: Job failed to start

    And after running mysqld directly I get this:

        mysqld
        121123 11:33:33 [ERROR] Can't find messagefile '/usr/share/mysql/errmsg.sys'
        121123 11:33:33 [Note] Plugin 'FEDERATED' is disabled.
        mysqld: Unknown error 1146
        121123 11:33:33 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
        121123 11:33:33 InnoDB: The InnoDB memory heap is disabled
        121123 11:33:33 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        121123 11:33:33 InnoDB: Compressed tables use zlib 1.2.3.4
        121123 11:33:33 InnoDB: Initializing buffer pool, size = 128.0M
        121123 11:33:33 InnoDB: Completed initialization of buffer pool
        121123 11:33:33 InnoDB: highest supported file format is Barracuda.
        121123 11:33:33 InnoDB: Waiting for the background threads to start
        121123 11:33:34 InnoDB: 1.1.8 started; log sequence number 1595675
        121123 11:33:34 [ERROR] Aborting
        121123 11:33:34 InnoDB: Starting shutdown...
        121123 11:33:35 InnoDB: Shutdown completed; log sequence number 1595675
        121123 11:33:35 [Note]

    I tried to do what MySQL told me to do and ran mysql_upgrade:

        mysql_upgrade
        Looking for 'mysql' as: mysql
        Looking for 'mysqlcheck' as: mysqlcheck
        Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock'
        mysqlcheck: Got error: 2002: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) when trying to connect
        FATAL ERROR: Upgrade failed

    And yes, /var/run/mysqld is empty, so mysql_upgrade fails with the same socket error. This is my /etc/mysql/my.cnf:

        # cat /etc/mysql/my.cnf | grep sock
        # Remember to edit /etc/mysql/debian.cnf when changing the socket location.
        socket = /var/run/mysqld/mysqld.sock
        socket = /var/run/mysqld/mysqld.sock
        socket = /var/run/mysqld/mysqld.sock

    Then I tried to reinstall MySQL from scratch:

        apt-get purge mysql-client mysql-common mysql-server
        rm -R /var/lib/mysql
        rm -R /etc/mysql
        rm -R /var/run/mysqld
        userdel mysql
        apt-get install mysql-server mysql-client

    Then, after typing my root password for MySQL, I get this error:

        Unable to set password for the MySQL "root" user

        An error occurred while setting the password for the MySQL administrative
        user. This may have happened because the account already has a password, or
        because of a communication problem with the MySQL server.

        You should check the account's password after the package installation.

        Please read the /usr/share/doc/mysql-server-5.5/README.Debian file for more
        information.

    And again I can't start MySQL, getting the same messages.


  • Reading data from DDFS: ValueError: No JSON object could be decoded

    - by secumind
    I'm running dozens of map/reduce jobs for a number of different purposes using Disco. My data has grown enormous, and I thought I would try using DDFS for a change, rather than standard text files. I've followed the Disco map/reduce example "Counting Words" as a map/reduce job without too much difficulty, and with the help of others ("Reading JSON specific data into DISCO") I've gotten past one of my latest problems. I'm trying to read data in and out of DDFS to better chunk and distribute it, but am having a bit of trouble. Here's an example file, file.txt, with three JSON records, one per line:

        {"favorited": false, "in_reply_to_user_id": null, "contributors": null, "truncated": false, "text": "I'll call him back tomorrow I guess", "created_at": "Mon Feb 13 05:34:27 +0000 2012", "retweeted": false, "in_reply_to_status_id_str": null, "coordinates": null, "in_reply_to_user_id_str": null, "entities": {"user_mentions": [], "hashtags": [], "urls": []}, "in_reply_to_status_id": null, "id_str": "168931016843603968", "place": null, "user": {"follow_request_sent": null, "profile_use_background_image": true, "profile_background_image_url_https": "https://si0.twimg.com/profile_background_images/305726905/FASHION-3.png", "verified": false, "profile_image_url_https": "https://si0.twimg.com/profile_images/1818996723/image_normal.jpg", "profile_sidebar_fill_color": "292727", "is_translator": false, "id": 113532729, "profile_text_color": "000000", "followers_count": 78, "protected": false, "location": "With My Niggas In Paris!", "default_profile_image": false, "listed_count": 0, "utc_offset": -21600, "statuses_count": 6733, "description": "Made in CHINA., Educated && Making My Own $$. Fear GOD && Put Him 1st. #TeamFollowBack #TeamiPhone\n", "friends_count": 74, "profile_link_color": "b03f3f", "profile_image_url": "http://a2.twimg.com/profile_images/1818996723/image_normal.jpg", "notifications": null, "show_all_inline_media": false, "geo_enabled": true, "profile_background_color": "1f9199", "id_str": "113532729", "profile_background_image_url": "http://a3.twimg.com/profile_background_images/305726905/FASHION-3.png", "name": "Bee'Jay", "lang": "en", "profile_background_tile": true, "favourites_count": 19, "screen_name": "OohMyBEEsNice", "url": "http://www.bitchimpaid.org", "created_at": "Fri Feb 12 03:32:54 +0000 2010", "contributors_enabled": false, "time_zone": "Central Time (US & Canada)", "profile_sidebar_border_color": "000000", "default_profile": false, "following": null}, "in_reply_to_screen_name": null, "retweet_count": 0, "geo": null, "id": 168931016843603968, "source": "<a href=\"http://twitter.com/#!/download/iphone\" rel=\"nofollow\">Twitter for iPhone</a>"}
        {"favorited": false, "in_reply_to_user_id": 50940453, "contributors": null, "truncated": false, "text": "@LegaMrvica @MimozaBand makasi om artis :D kadoo kadoo", "created_at": "Mon Feb 13 05:34:27 +0000 2012", "retweeted": false, "in_reply_to_status_id_str": "168653037894770688", "coordinates": null, "in_reply_to_user_id_str": "50940453", "entities": {"user_mentions": [{"indices": [0, 11], "screen_name": "LegaMrvica", "id": 50940453, "name": "Lega_thePianis", "id_str": "50940453"}, {"indices": [12, 23], "screen_name": "MimozaBand", "id": 375128905, "name": "Mimoza", "id_str": "375128905"}], "hashtags": [], "urls": []}, "in_reply_to_status_id": 168653037894770688, "id_str": "168931016868761600", "place": null, "user": {"follow_request_sent": null, "profile_use_background_image": true, "profile_background_image_url_https": "https://si0.twimg.com/profile_background_images/347686061/Galungan_dan_Kuningan.jpg", "verified": false, "profile_image_url_https": "https://si0.twimg.com/profile_images/1803845596/Picture_20124_normal.jpg", "profile_sidebar_fill_color": "DDFFCC", "is_translator": false, "id": 48293450, "profile_text_color": "333333", "followers_count": 182, "protected": false, "location": "\u00dcT: -6.906799,107.622383", "default_profile_image": false, "listed_count": 0, "utc_offset": -28800, "statuses_count": 3052, "description": "Fashion design maranatha '11 // traditional dancer (bali) at sanggar tampak siring & Natya Nataraja", "friends_count": 206, "profile_link_color": "0084B4", "profile_image_url": "http://a3.twimg.com/profile_images/1803845596/Picture_20124_normal.jpg", "notifications": null, "show_all_inline_media": false, "geo_enabled": true, "profile_background_color": "9AE4E8", "id_str": "48293450", "profile_background_image_url": "http://a0.twimg.com/profile_background_images/347686061/Galungan_dan_Kuningan.jpg", "name": "nana afiff", "lang": "en", "profile_background_tile": true, "favourites_count": 2, "screen_name": "hasnfebria", "url": null, "created_at": "Thu Jun 18 08:50:29 +0000 2009", "contributors_enabled": false, "time_zone": "Pacific Time (US & Canada)", "profile_sidebar_border_color": "BDDCAD", "default_profile": false, "following": null}, "in_reply_to_screen_name": "LegaMrvica", "retweet_count": 0, "geo": null, "id": 168931016868761600, "source": "<a href=\"http://blackberry.com/twitter\" rel=\"nofollow\">Twitter for BlackBerry\u00ae</a>"}
        {"favorited": false, "in_reply_to_user_id": 27260086, "contributors": null, "truncated": false, "text": "@justinbieber u were born to be somebody, and u're super important in beliebers' life. thanks for all biebs. I love u. follow me? 84", "created_at": "Mon Feb 13 05:34:27 +0000 2012", "retweeted": false, "in_reply_to_status_id_str": null, "coordinates": null, "in_reply_to_user_id_str": "27260086", "entities": {"user_mentions": [{"indices": [0, 13], "screen_name": "justinbieber", "id": 27260086, "name": "Justin Bieber", "id_str": "27260086"}], "hashtags": [], "urls": []}, "in_reply_to_status_id": null, "id_str": "168931016856178688", "place": null, "user": {"follow_request_sent": null, "profile_use_background_image": true, "profile_background_image_url_https": "https://si0.twimg.com/profile_background_images/416005864/Captura.JPG", "verified": false, "profile_image_url_https": "https://si0.twimg.com/profile_images/1808883280/Captura6_normal.JPG", "profile_sidebar_fill_color": "f5e7f3", "is_translator": false, "id": 406750700, "profile_text_color": "333333", "followers_count": 1122, "protected": false, "location": "Adentro de una supra.", "default_profile_image": false, "listed_count": 0, "utc_offset": -14400, "statuses_count": 20966, "description": "Mi \u00eddolo es @justinbieber , si te gusta \u00a1genial!, si no, solo respetalo. El cambi\u00f3 mi vida completamente y mi sue\u00f1o es conocerlo #TrueBelieber . ", "friends_count": 1015, "profile_link_color": "9404b8", "profile_image_url": "http://a1.twimg.com/profile_images/1808883280/Captura6_normal.JPG", "notifications": null, "show_all_inline_media": false, "geo_enabled": false, "profile_background_color": "f9fcfa", "id_str": "406750700", "profile_background_image_url": "http://a3.twimg.com/profile_background_images/416005864/Captura.JPG", "name": "neversaynever,right?", "lang": "es", "profile_background_tile": false, "favourites_count": 22, "screen_name": "True_Belieebers", "url": "http://www.wehavebieber-fever.tumblr.com", "created_at": "Mon Nov 07 04:17:40 +0000 2011", "contributors_enabled": false, "time_zone": "Santiago", "profile_sidebar_border_color": "C0DEED", "default_profile": false, "following": null}, "in_reply_to_screen_name": "justinbieber", "retweet_count": 0, "geo": null, "id": 168931016856178688, "source": "<a href=\"http://yfrog.com\" rel=\"nofollow\">Yfrog</a>"}

    I load it into DDFS with:

        # ddfs chunk data:test1 ./file.txt
        created: disco://localhost/ddfs/vol0/blob/44/file_txt-0$549-db27b-125e1

    I test that the file is indeed loaded into DDFS with:

        # ddfs xcat data:test1

    which prints the same three JSON records back. At this point everything is great. I load up the script that resulted from a previous Stack post:

        from disco.core import Job, result_iterator
        import gzip

        def map(line, params):
            import unicodedata
            import json
            r = json.loads(line).get('text')
            s = unicodedata.normalize('NFD', r).encode('ascii', 'ignore')
            for word in s.split():
                yield word, 1

        def reduce(iter, params):
            from disco.util import kvgroup
            for word, counts in kvgroup(sorted(iter)):
                yield word, sum(counts)

        if __name__ == '__main__':
            job = Job().run(input=["tag://data:test1"], map=map, reduce=reduce)
            for word, count in result_iterator(job.wait(show=True)):
                print word, count

    NOTE: this script runs fine if the input is ["file.txt"]; however, when I run it with "tag://data:test1" I get the following error:

        # DISCO_EVENTS=1 python count_normal_words.py
        Job@549:db30e:25bd8: Status: [map] 0 waiting, 1 running, 0 done, 0 failed
        2012/11/25 21:43:26 master    New job initialized!
        2012/11/25 21:43:26 master    Starting job
        2012/11/25 21:43:26 master    Starting map phase
        2012/11/25 21:43:26 master    map:0 assigned to solice
        2012/11/25 21:43:26 master    ERROR: Job failed: Worker at 'solice' died: Traceback (most recent call last):
          File "/home/DISCO/data/solice/01/Job@549:db30e:25bd8/usr/local/lib/python2.7/site-packages/disco/worker/__init__.py", line 329, in main
            job.worker.start(task, job, **jobargs)
          File "/home/DISCO/data/solice/01/Job@549:db30e:25bd8/usr/local/lib/python2.7/site-packages/disco/worker/__init__.py", line 290, in start
            self.run(task, job, **jobargs)
          File "/home/DISCO/data/solice/01/Job@549:db30e:25bd8/usr/local/lib/python2.7/site-packages/disco/worker/classic/worker.py", line 286, in run
            getattr(self, task.mode)(task, params)
          File "/home/DISCO/data/solice/01/Job@549:db30e:25bd8/usr/local/lib/python2.7/site-packages/disco/worker/classic/worker.py", line 299, in map
            for key, val in self['map'](entry, params):
          File "count_normal_words.py", line 12, in map
          File "/usr/lib64/python2.7/json/__init__.py", line 326, in loads
            return _default_decoder.decode(s)
          File "/usr/lib64/python2.7/json/decoder.py", line 366, in decode
            obj, end = self.raw_decode(s, idx=_w(s, 0).end())
          File "/usr/lib64/python2.7/json/decoder.py", line 384, in raw_decode
            raise ValueError("No JSON object could be decoded")
        ValueError: No JSON object could be decoded
        2012/11/25 21:43:26 master    WARN: Job killed
        Status: [map] 1 waiting, 0 running, 0 done, 1 failed
        Traceback (most recent call last):
          File "count_normal_words.py", line 28, in <module>
            for word, count in result_iterator(job.wait(show=True)):
          File "/usr/local/lib/python2.7/site-packages/disco/core.py", line 348, in wait
            timeout, poll_interval * 1000)
          File "/usr/local/lib/python2.7/site-packages/disco/core.py", line 309, in check_results
            raise JobError(Job(name=jobname, master=self), "Status %s" % status)
        disco.error.JobError: Job Job@549:db30e:25bd8 failed: Status dead

    The error states: ValueError: No JSON object could be decoded. Again, this works fine using the text file as input, but not with DDFS. Any ideas? I'm open to suggestions.


  • How important is the programming language when you choose a new job?

    - by Luhmann
    We are currently hiring at the company where I work, and our codebase is in VB.NET. We are worried that we miss out on a lot of brilliant programmers who would never ever consider working with VB.NET. My own background is Java and C#, and I was somewhat sceptical as to whether it would work out with VB, as - to be honest - I didn't care much for VB. After a month or so I was completely fluent in VB, and a few months later I discovered, to my surprise, that I actually like VB. I still code my free-time projects in C# and Boo, though.

    So my question is, firstly: how important is the language for you when you choose a new programming job? Let's say it's a great company, the salary is good, and it's generally an attractive workplace. Would you say no to the perfect job if the language weren't your preferred dialect? VB or C# is one thing, but what about Java vs. C#, etc.? Secondly, if the best developers won't join your company because of your language or platform, would you consider changing in order to get the right people? (This is not a language-bashing thread, so please no religious language wars.)

    NB: This is Community Wiki.

