Search Results

Search found 6545 results on 262 pages for 'pig head'.

Page 3/262 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Error with git: remote HEAD is ambiguous, may be one of the following

    - by vfclists
    After branching and pushing to the remote, a git remote show origin gives the report "HEAD branch (remote HEAD is ambiguous, may be one of the following): master otherbranch". What does this imply? Is it a critical error? remote origin Fetch URL: [email protected]:/home/gituser/repos/csfsconf.git Push URL: [email protected]:/home/gituser/repos/csfsconf.git HEAD branch (remote HEAD is ambiguous, may be one of the following): master otherbranch
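
    This is informational rather than critical: git simply cannot tell which branch the remote's HEAD refers to, typically because more than one remote branch points at the same commit as the remote's HEAD. A minimal way to silence it on the client side (not from the post; the branch name is taken from the report above, everything else is standard git):

        $ git remote set-head origin master    # pin origin/HEAD to origin/master locally
        $ git remote set-head origin -a        # or ask the remote and set it automatically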

    Read the article

  • How do I change a Git remote HEAD to point to something besides "master"

    - by jhs
    Short version: How do I set a Git remote's HEAD ref to point to something besides "master"? My project has a policy not to use a "master" branch (all branches are to have meaningful names). Furthermore, the canonical master repository is only accessible via ssh://, with no shell access (like GitHub or Unfuddle). My problem is that the remote repository still has a HEAD reference to refs/heads/master, but I need it to point to a different branch. This is causing two problems: When cloning the repo, there is this warning: "warning: remote HEAD refers to nonexistent ref, unable to checkout." That's confusing and inconvenient. The web-based code browser depends on HEAD as a basis for browsing the tree. So I need HEAD to point to a valid branch.
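
    For reference (not from the post): the remote's HEAD is a symbolic ref stored in the remote repository itself, so it has to be changed there. A sketch of the usual command, where "release" is a placeholder branch name; with no shell access this only helps if the hosting side can run it for you or exposes it as a "default branch" setting:

        $ cd /path/to/remote/project.git
        $ git symbolic-ref HEAD refs/heads/release   # "release" stands in for the real branch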

    Read the article

  • How can I load a file into a DataBag from within a Yahoo PigLatin UDF?

    - by Cervo
    I have a Pig program where I am trying to compute the minimum center between two bags. In order for it to work, I found I need to COGROUP the bags into a single dataset. The entire operation takes a long time. I want to either open one of the bags from disk within the UDF, or to be able to pass another relation into the UDF without needing to COGROUP... Code: # **** Load files for iteration **** register myudfs.jar; wordcounts = LOAD 'input/wordcounts.txt' USING PigStorage('\t') AS (PatentNumber:chararray, word:chararray, frequency:double); centerassignments = load 'input/centerassignments/part-*' USING PigStorage('\t') AS (PatentNumber: chararray, oldCenter: chararray, newCenter: chararray); kcenters = LOAD 'input/kcenters/part-*' USING PigStorage('\t') AS (CenterID:chararray, word:chararray, frequency:double); kcentersa1 = CROSS centerassignments, kcenters; kcentersa = FOREACH kcentersa1 GENERATE centerassignments::PatentNumber as PatentNumber, kcenters::CenterID as CenterID, kcenters::word as word, kcenters::frequency as frequency; #***** Assign to nearest k-mean ******* assignpre1 = COGROUP wordcounts by PatentNumber, kcentersa by PatentNumber; assignwork2 = FOREACH assignpre1 GENERATE group as PatentNumber, myudfs.kmeans(wordcounts, kcentersa) as CenterID; Basically, my issue is that for each patent I need to pass the sub relations (wordcounts, kcenters). In order to do this, I do a cross and then a COGROUP by PatentNumber in order to get the set PatentNumber, {wordcounts}, {kcenters}. If I could figure out a way to pass a relation or open up the centers from within the UDF, then I could just GROUP wordcounts by PatentNumber and run myudfs.kmeans(wordcount) which is hopefully much faster without the CROSS/COGROUP. This is an expensive operation. Currently this takes about 20 minutes and appears to tax the CPU/RAM. I was thinking it might be more efficient without the CROSS. I'm not sure it will be faster, so I'd like to experiment. Anyway it looks like calling the Loading functions from within Pig needs a PigContext object which I don't get from an EvalFunc. And to use the Hadoop file system, I need some initial objects as well, which I don't see how to get. So my question is: how can I open a file from the Hadoop file system from within a Pig UDF? I also run the UDF via main for debugging. So I need to load from the normal filesystem when in debug mode. Another better idea would be if there was a way to pass a relation into a UDF without needing to CROSS/COGROUP. This would be ideal, particularly if the relation resides in memory, i.e. being able to do myudfs.kmeans(wordcounts, kcenters) without needing the CROSS/COGROUP with kcenters... But the basic idea is to trade IO for RAM/CPU cycles. Anyway, any help will be much appreciated; Pig UDFs aren't super well documented beyond the most simple ones, even in the UDF manual.
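
    One commonly used workaround, sketched below and not taken from the post: read the centers file directly through Hadoop's FileSystem API inside the UDF, caching it on first use. Inside a MapReduce task the default Configuration typically resolves to HDFS, while a plain run from main() falls back to the local filesystem (depending on what configuration is on the classpath), which matches the debugging requirement. The class name and file path here are placeholders.

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.util.ArrayList;
        import java.util.List;

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;
        import org.apache.pig.EvalFunc;
        import org.apache.pig.data.Tuple;

        public class KMeansAssign extends EvalFunc<String> {
            private List<String> centers;   // cached rows of the kcenters file

            private void loadCenters() throws IOException {
                // FileSystem.get() uses whatever configuration is on the classpath:
                // HDFS inside a Pig/MapReduce task, the local FS when run from main().
                FileSystem fs = FileSystem.get(new Configuration());
                centers = new ArrayList<String>();
                BufferedReader in = new BufferedReader(new InputStreamReader(
                        fs.open(new Path("input/kcenters/part-00000"))));   // placeholder path
                try {
                    String line;
                    while ((line = in.readLine()) != null) {
                        centers.add(line);
                    }
                } finally {
                    in.close();
                }
            }

            @Override
            public String exec(Tuple input) throws IOException {
                if (centers == null) {
                    loadCenters();   // load once per task, not once per tuple
                }
                // ... find the nearest center for the wordcounts bag in `input` ...
                return null;   // placeholder result (the CenterID)
            }
        }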

    Read the article

  • Java 8 Stream, getting head and tail

    - by lyomi
    Java 8 introduced a Stream class that resembles Scala's Stream, a powerful lazy construct using which it is possible to do something like this very concisely: def from(n: Int): Stream[Int] = n #:: from(n+1) def sieve(s: Stream[Int]): Stream[Int] = { s.head #:: sieve(s.tail filter (_ % s.head != 0)) } val primes = sieve(from(2)) primes takeWhile(_ < 1000) print // prints all primes less than 1000 I wondered if it is possible to do this in Java 8, so I wrote something like this: IntStream from(int n) { return IntStream.iterate(n, m -> m + 1); } IntStream sieve(IntStream s) { int head = s.findFirst().getAsInt(); return IntStream.concat(IntStream.of(head), sieve(s.skip(1).filter(n -> n % head != 0))); } IntStream primes = sieve(from(2)); PrimitiveIterator.OfInt it = primes.iterator(); for (int prime = it.nextInt(); prime < 1000; prime = it.nextInt()) { System.out.println(prime); } Fairly simple, but it produces java.lang.IllegalStateException: stream has already been operated upon or closed because both findFirst() and skip() operate on a stream that can only be consumed once. I don't really have to use up the stream twice since all I need is the first number in the stream and the rest as another stream, i.e. the equivalent of Scala's Stream.head and Stream.tail. Is there a method in Java 8's Stream with which I can achieve this? Thanks.
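
    A sketch (not from the post) of one way to split head and tail without consuming the stream twice: pull the stream's iterator once, take the head with nextInt(), and wrap the same iterator back into an IntStream for the tail. Note that the recursive sieve above would still be troublesome, since IntStream.concat evaluates both of its arguments up front; this only shows the head/tail split itself.

        import java.util.PrimitiveIterator;
        import java.util.Spliterators;
        import java.util.stream.IntStream;
        import java.util.stream.StreamSupport;

        public class HeadTail {
            public static void main(String[] args) {
                IntStream s = IntStream.iterate(2, m -> m + 1);

                // The iterator is the only consumer of the stream, so there is no
                // "already been operated upon" error.
                PrimitiveIterator.OfInt it = s.iterator();
                int head = it.nextInt();                       // 2
                IntStream tail = StreamSupport.intStream(
                        Spliterators.spliteratorUnknownSize(it, 0), false);

                System.out.println("head: " + head);
                tail.limit(5).forEach(System.out::println);    // 3 4 5 6 7
            }
        }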

    Read the article

  • Dual head setup for Ubuntu 10.04.1 and Windows XP Pro with same hardware configuration

    - by mejpark
    Hello. I have a Dell OptiPlex 360 workstation at work, with 2 x ATI RV280 [Radeon 9200 PRO] graphics cards installed, which are attached to two identical 19" HII flat panel monitors. I'm using the open source Radeon driver with Ubuntu, and the proprietary drivers with Windows. The good news is that dual head configuration works for both OSes. The bad news is, I have to use a different hardware configuration for each OS to achieve this. Hardware config #1: Dual monitors work for Windows XP Pro like this: First display -> external VGA port Second display -> DVI input on gfx card Hardware config #2: Dual monitors work for Ubuntu 10.04.1 like this: First display -> VGA port on gfx card Second display -> DVI input on gfx card I connected up the displays according to Config #2 and booted up Windows, which resulted in a mirror image on both screens. I was unable to login, as the login box was not visible. I unplugged the VGA lead from gfx card and plugged it into the external VGA port (Config #1) - Windows dual head works again, but the VGA-connected screen is not recognised by Ubuntu and remains in standby mode. Is it possible to configure a dual head setup for Ubuntu using Config #1, or am I missing something? I tried setting up dual monitors using Config #1, this morning which didn't work. By default, there is no xorg.conf file in Ubuntu 10.04.1, so I generated one using: $ sudo X :2 -configure X.Org X Server 1.7.6 Release Date: 2010-03-17 X Protocol Version 11, Revision 0 Build Operating System: Linux 2.6.24-27-server i686 Ubuntu Current Operating System: Linux harrier 2.6.32-24-generic #42-Ubuntu SMP Fri Aug 20 14:24:04 UTC 2010 i686 Kernel command line: BOOT_IMAGE=/boot/vmlinuz-2.6.32-24-generic root=UUID=a34c1931-98d4-4a34-880c-c227a2936c4a ro quiet splash Build Date: 21 July 2010 12:47:34PM xorg-server 2:1.7.6-2ubuntu7.3 (For technical support please see http://www.ubuntu.com/support) Current version of pixman: 0.16.4 Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. (==) Log file: "/var/log/Xorg.2.log", Time: Mon Sep 13 10:02:02 2010 List of video drivers: apm ark intel mach64 s3virge trident mga tseng ati nouveau neomagic i740 openchrome voodoo s3 i128 radeon siliconmotion nv ztv vmware v4l chips rendition savage sisusb tdfx geode sis r128 cirrus fbdev vesa (++) Using config file: "/home/michael/xorg.conf.new" (==) Using config directory: "/usr/lib/X11/xorg.conf.d" (II) [KMS] No DRICreatePCIBusID symbol, no kernel modesetting. Xorg detected your mouse at device /dev/input/mice. Please check your config if the mouse is still not operational, as by default Xorg tries to autodetect the protocol. Xorg has configured a multihead system, please check your config. Your xorg.conf file is /home/michael/xorg.conf.new To test the server, run 'X -config /home/michael/xorg.conf.new' ddxSigGiveUp: Closing log $ sudo X -config /home/michael/xorg.conf.new Fatal server error: Server is already active for display 0 If this server is no longer running, remove /tmp/.X0-lock and start again. Please consult the The X.Org Foundation support at http://wiki.x.org for help. ddxSigGiveUp: Closing log I then booted Ubuntu in failsafe mode, dropped into root shell, and executed $ X -config /home/michael/xorg.conf.new again. 
The screen went blank and turned off, so I reset the machine. There must be a way round this. Any help to set up a dual head config for Ubuntu using Config #1 would be hugely appreciated. TIA, Mike

    Read the article

  • Building Simple Workflows in Oozie

    - by dan.mcclary
    Introduction More often than not, data doesn't come packaged exactly as we'd like it for analysis. Transformation, match-merge operations, and a host of data munging tasks are usually needed before we can extract insights from our Big Data sources. Few people find data munging exciting, but it has to be done. Once we've suffered that boredom, we should take steps to automate the process. We want to codify our work into repeatable units and create workflows which we can leverage over and over again without having to write new code. In this article, we'll look at how to use Oozie to create a workflow for the parallel machine learning task I described on Cloudera's site. Hive Actions: Prepping for Pig In my parallel machine learning article, I use data from the National Climatic Data Center to build weather models on a state-by-state basis. NCDC makes the data freely available as gzipped files of day-over-day observations stretching from the 1930s to today. In reading that post, one might get the impression that the data came in handy, ready-to-model files with convenient delimiters. The truth of it is that I need to perform some parsing and projection on the dataset before it can be modeled. If I get more observations, I'll want to retrain and test those models, which will require more parsing and projection. This is a good opportunity to start building up a workflow with Oozie. I store the data from the NCDC in HDFS and create an external Hive table partitioned by year. This gives me the flexibility of Hive's query language when I want it, but lets me put the dataset in a directory of my choosing in case I want to treat the same data with Pig or MapReduce code. CREATE EXTERNAL TABLE IF NOT EXISTS historic_weather(column 1, column2) PARTITIONED BY (yr string) STORED AS ... LOCATION '/user/oracle/weather/historic'; As new weather data comes in from NCDC, I'll need to add partitions to my table. That's an action I should put in the workflow. Similarly, the weather data requires parsing in order to be useful as a set of columns. Because of their long history, the weather data is broken up into fields of specific byte lengths: x bytes for the station ID, y bytes for the dew point, and so on. The delimiting is consistent from year to year, so writing a SerDe or a parser for transformation is simple. Once that's done, I want to select columns on which to train, classify certain features, and place the training data in an HDFS directory for my Pig script to access. ALTER TABLE historic_weather ADD IF NOT EXISTS PARTITION (yr='2010') LOCATION '/user/oracle/weather/historic/yr=2011'; INSERT OVERWRITE DIRECTORY '/user/oracle/weather/cleaned_history' SELECT w.stn, w.wban, w.weather_year, w.weather_month, w.weather_day, w.temp, w.dewp, w.weather FROM ( FROM historic_weather SELECT TRANSFORM(...) USING '/path/to/hive/filters/ncdc_parser.py' as stn, wban, weather_year, weather_month, weather_day, temp, dewp, weather ) w; Since I'm going to prepare training directories with at least the same frequency that I add partitions, I should also add that to my workflow. Oozie is going to invoke these Hive actions using what's somewhat obviously referred to as a Hive action. Hive actions amount to Oozie running a script file containing our query language statements, so we can place them in a file called weather_train.hql. Starting Our Workflow Oozie offers two types of jobs: workflows and coordinator jobs. Workflows are straightforward: they define a set of actions to perform as a sequence or directed acyclic graph. 
Coordinator jobs can take all the same actions of Workflow jobs, but they can be automatically started either periodically or when new data arrives in a specified location. To keep things simple we'll make a workflow job; coordinator jobs simply require another XML file for scheduling. The bare minimum for workflow XML defines a name, a starting point, and an end point: <workflow-app name="WeatherMan" xmlns="uri:oozie:workflow:0.1"> <start to="ParseNCDCData"/> <end name="end"/> </workflow-app> To this we need to add an action, and within that we'll specify the Hive parameters. Also, keep in mind that actions require <ok> and <error> tags to direct the next action on success or failure. <action name="ParseNCDCData"> <hive xmlns="uri:oozie:hive-action:0.2"> <job-tracker>localhost:8021</job-tracker> <name-node>localhost:8020</name-node> <configuration> <property> <name>oozie.hive.defaults</name> <value>/user/oracle/weather_ooze/hive-default.xml</value> </property> </configuration> <script>ncdc_parse.hql</script> </hive> <ok to="WeatherMan"/> <error to="end"/> </action> There are a couple of things to note here: I have to give the FQDN (or IP) and port of my JobTracker and NameNode. I have to include a hive-default.xml file. I have to include a script file. The hive-default.xml and script file must be stored in HDFS. That last point is particularly important. Oozie doesn't make assumptions about where a given workflow is being run. You might submit workflows against different clusters, or have different hive-defaults.xml on different clusters (e.g. MySQL or Postgres-backed metastores). A quick way to ensure that all the assets end up in the right place in HDFS is just to make a working directory locally, build your workflow.xml in it, and copy the assets you'll need to it as you add actions to workflow.xml. At this point, our local directory should contain: workflow.xml hive-defaults.xml (make sure this file contains your metastore connection data) ncdc_parse.hql Adding Pig to the Ooze Adding our Pig script as an action is slightly simpler from an XML standpoint. All we do is add an action to workflow.xml as follows: <action name="WeatherMan"> <pig> <job-tracker>localhost:8021</job-tracker> <name-node>localhost:8020</name-node> <script>weather_train.pig</script> </pig> <ok to="end"/> <error to="end"/> </action> Once we've done this, we'll copy weather_train.pig to our working directory. However, there's a bit of a "gotcha" here. My pig script registers the Weka Jar and a chunk of jython. If those aren't also in HDFS, our action will fail from the outset -- but where do we put them? The Jython script goes into the working directory at the same level as the pig script, because pig attempts to load Jython files in the directory from which the script executes. However, that's not where our Weka jar goes. While Oozie doesn't assume much, it does make an assumption about the Pig classpath. Anything under working_directory/lib gets automatically added to the Pig classpath and no longer requires a REGISTER statement in the script. Anything that uses a REGISTER statement cannot be in the working_directory/lib directory. Instead, it needs to be in a different HDFS directory and attached to the pig action with an <archive> tag. Yes, that's as confusing as you think it is. You can get the exact rules for adding Jars to the distributed cache from Oozie's Pig Cookbook. Making the Workflow Work We've got a workflow defined and have collected all the components we'll need to run. 
But we can't run anything yet, because we still have to define some properties about the job and submit it to Oozie. We need to start with the job properties, as this is essentially the "request" we'll submit to the Oozie server. In the same working directory, we'll make a file called job.properties as follows: nameNode=hdfs://localhost:8020 jobTracker=localhost:8021 queueName=default weatherRoot=weather_ooze mapreduce.jobtracker.kerberos.principal=foo dfs.namenode.kerberos.principal=foo oozie.libpath=${nameNode}/user/oozie/share/lib oozie.wf.application.path=${nameNode}/user/${user.name}/${weatherRoot} outputDir=weather-ooze While some of the pieces of the properties file are familiar (e.g., JobTracker address), others take a bit of explaining. The first is weatherRoot: this is essentially an environment variable for the script (as are jobTracker and queueName). We're simply using them to simplify the directives for the Oozie job. The oozie.libpath piece is extremely important. This is a directory in HDFS which holds Oozie's shared libraries: a collection of Jars necessary for invoking Hive, Pig, and other actions. It's a good idea to make sure this has been installed and copied up to HDFS. The last two lines are straightforward: run the application defined by workflow.xml at the application path listed and write the output to the output directory. We're finally ready to submit our job! After all that work we only need to do a few more things: Validate our workflow.xml Copy our working directory to HDFS Submit our job to the Oozie server Run our workflow Let's do them in order. First validate the workflow: oozie validate workflow.xml Next, copy the working directory up to HDFS: hadoop fs -put working_dir /user/oracle/working_dir Now we submit the job to the Oozie server. We need to ensure that we've got the correct URL for the Oozie server, and we need to specify our job.properties file as an argument. oozie job -oozie http://url.to.oozie.server:port_number/ -config /path/to/working_dir/job.properties -submit We've submitted the job, but we don't see any activity on the JobTracker? All I got was this funny bit of output: 14-20120525161321-oozie-oracle This is because submitting a job to Oozie creates an entry for the job and places it in PREP status. What we got back, in essence, is a ticket for our workflow to ride the Oozie train. We're responsible for redeeming our ticket and running the job. oozie -oozie http://url.to.oozie.server:port_number/ -start 14-20120525161321-oozie-oracle Of course, if we really want to run the job from the outset, we can change the "-submit" argument above to "-run." This will prep and run the workflow immediately. Takeaway So, there you have it: the somewhat laborious process of building an Oozie workflow. It's a bit tedious the first time out, but it does present a pair of real benefits to those of us who spend a great deal of time data munging. First, when new data arrives that requires the same processing, we already have the workflow defined and ready to run. Second, as we build up a set of useful action definitions over time, creating new workflows becomes quicker and quicker.
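
    The post notes that a coordinator job only needs one more XML file for scheduling; a rough sketch of what such a file might look like (schema version, dates, and names here are placeholders, not taken from the article):

        <coordinator-app name="WeatherManCoord" frequency="${coord:days(1)}"
                         start="2012-06-01T00:00Z" end="2013-06-01T00:00Z" timezone="UTC"
                         xmlns="uri:oozie:coordinator:0.1">
          <action>
            <workflow>
              <!-- points at the same workflow.xml directory used above -->
              <app-path>${nameNode}/user/${user.name}/${weatherRoot}</app-path>
            </workflow>
          </action>
        </coordinator-app>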

    Read the article

  • Git command to display HEAD commit id?

    - by Andrew Arnott
    What command can I use to print out the commit id of HEAD? This is what I'm doing by hand: $ cat .git/HEAD ref: refs/heads/v3.3 $ cat .git/refs/heads/v3.3 6050732e725c68b83c35c873ff8808dff1c406e1 But I need a script that can reliably pipe the output of some command to a text file such that the text file contains exactly the commit id of HEAD (nothing more or less, and not just a ref). Can anyone help?
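
    Not part of the post, but the standard plumbing command for exactly this is git rev-parse, which resolves HEAD through any symbolic refs and prints only the commit id:

        $ git rev-parse HEAD > head-commit.txt    # full 40-character commit id, nothing else
        $ git rev-parse --short HEAD              # abbreviated form, if that is enough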

    Read the article

  • Retrieving ALL head elements in a document, including iframes

    - by Vishal Shah
    I'm trying to add style elements to ALL the head elements in a document, including those in an iframe. If I use var heads = document.getElementsByTagName('head'); it just returns the first head element and not the ones in the iframe. This is the complete code: var heads = document.getElementsByTagName("head"); var style = document.createElement("style"); style.type = "text/css"; style.appendChild(document.createTextNode(css)); for(var i=0;i<heads.length;i++) heads[i].appendChild(style); but this doesn't seem to work! Am I doing something wrong here?
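
    A sketch of one way to do it (not from the post): getElementsByTagName only sees the current document, so each same-origin iframe's head has to be reached through its own contentDocument, and because appendChild moves a node, every head needs its own style element.

        function addCssToAllHeads(css) {
          // Collect the top document's head plus each same-origin iframe's head.
          var heads = [document.getElementsByTagName('head')[0]];
          var iframes = document.getElementsByTagName('iframe');
          for (var i = 0; i < iframes.length; i++) {
            try {
              var doc = iframes[i].contentDocument || iframes[i].contentWindow.document;
              if (doc) { heads.push(doc.getElementsByTagName('head')[0]); }
            } catch (e) {
              // cross-origin iframes cannot be touched from the parent page
            }
          }

          // Build a fresh <style> node with each head's own document; a single node
          // appended in a loop would just keep moving to the last head.
          for (var j = 0; j < heads.length; j++) {
            if (!heads[j]) { continue; }
            var ownerDoc = heads[j].ownerDocument;
            var style = ownerDoc.createElement('style');
            style.type = 'text/css';
            style.appendChild(ownerDoc.createTextNode(css));
            heads[j].appendChild(style);
          }
        }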

    Read the article

  • Browser loading strategy, <head>...<body>

    - by Bin Chen
    I am inspecting some interesting browser behavior, and I don't know whether it's in the standard or not. If I put everything inside <head></head>, the browser will only begin to render the page after all the resources in the head are retrieved. So I am thinking that putting as little as possible into the head is one of the important website optimization techniques; is that right? My question is: if I put script/css in the body or other parts of the HTML, how can I know that a script has been loaded successfully so that I will not be calling an undefined function?
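
    A small illustration of the two usual answers, not taken from the post (jquery.js and other-lib.js are placeholders): plain script tags without async/defer run in document order, so later inline code can rely on them; dynamically injected scripts should be used from their load handler.

        <body>
          <!-- ... page content ... -->
          <script src="jquery.js"></script>
          <script>
            // A plain external script blocks parsing until it has executed,
            // so anything it defines is available to every later script.
            console.log(typeof jQuery);   // "function"

            // For scripts added dynamically, wait for the load event instead.
            var s = document.createElement('script');
            s.src = 'other-lib.js';
            s.onload = function () { /* safe to call its functions here */ };
            document.body.appendChild(s);
          </script>
        </body>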

    Read the article

  • ASP.NET Head Problem

    - by oraclee
    Hi all; I want to write to the head tag. I need to create it dynamically: <head id="htmlHead" runat="server"> </head> Code-behind: htmlHead.InnerText = "<script src=\"Scripts/jquery.ui.datepicker.js\" type=\"text/javascript\"></script>" This does not work. Please help.
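
    A likely reason, with a sketch of the usual fix (not from the post): InnerText HTML-encodes whatever it is given, so the script tag reaches the browser as literal text. Adding the markup as a child control keeps it raw.

        // Code-behind, e.g. in Page_Load:
        htmlHead.Controls.Add(new System.Web.UI.LiteralControl(
            "<script src=\"Scripts/jquery.ui.datepicker.js\" type=\"text/javascript\"></script>"));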

    Read the article

  • ASP.NET: Page HTML head rendering

    - by Fabian
    I've been trying to figure out the best way to custom render the <head> element of a page to get rid of the extra line breaks caused by <head runat="server">, so it's properly formatted. So far the only thing I've found which works is the following: protected override void Render(HtmlTextWriter writer) { StringWriter stringWriter = new StringWriter(); HtmlTextWriter htmlTextWriter = new HtmlTextWriter(stringWriter); base.Render(htmlTextWriter); htmlTextWriter.Close(); string html = stringWriter.ToString(); string newHTML = html.Replace("\r\n\r\n<!DOCTYPE", "<!DOCTYPE") .Replace("\r\n<html", "<html") .Replace("<title>\r\n\t", "<title>") .Replace("\r\n</title>", "</title>") .Replace("</head>", "\n</head>"); writer.Write(newHTML); } I define my head tag as described above. Now I have 2 questions: How does the above code affect the performance (so is this viable in a production environment)? Is there a better way to do this, for example a method which I can override to just custom render the <head>? Oh yeah, ASP.NET MVC is not an option.

    Read the article

  • Dual head setup for Ubuntu 10.04.1 and Windows XP Pro with same hardware configuration

    - by mejpark
    I have a Dell OptiPlex 360 workstation at work, with 2 x ATI RV280 [Radeon 9200 PRO] graphics cards installed, which are attached to two identical 19" HII flat panel monitors. I'm using the open source Radeon driver with Ubuntu, and the proprietary drivers with Windows. The good news is that dual head configuration works for both OSes. The bad news is, I have to use a different hardware configuration for each OS to achieve this. Hardware config #1: Dual monitors work for Windows XP Pro like this: First display -> external VGA port Second display -> DVI input on gfx card Hardware config #2: Dual monitors work for Ubuntu 10.04.1 like this: First display -> VGA port on gfx card Second display -> DVI input on gfx card I connected up the displays according to Config #2 and booted up Windows, which resulted in a mirror image on both screens. I was unable to login, as the login box was not visible. I unplugged the VGA lead from gfx card and plugged it into the external VGA port (Config #1) - Windows dual head works again, but the VGA-connected screen is not recognised by Ubuntu and remains in standby mode. Is it possible to configure a dual head setup for Ubuntu using Config #1, or am I missing something? I tried setting up dual monitors using Config #1, this morning which didn't work. By default, there is no xorg.conf file in Ubuntu 10.04.1, so I generated one using: $ sudo X :2 -configure X.Org X Server 1.7.6 Release Date: 2010-03-17 X Protocol Version 11, Revision 0 Build Operating System: Linux 2.6.24-27-server i686 Ubuntu Current Operating System: Linux harrier 2.6.32-24-generic #42-Ubuntu SMP Fri Aug 20 14:24:04 UTC 2010 i686 Kernel command line: BOOT_IMAGE=/boot/vmlinuz-2.6.32-24-generic root=UUID=a34c1931-98d4-4a34-880c-c227a2936c4a ro quiet splash Build Date: 21 July 2010 12:47:34PM xorg-server 2:1.7.6-2ubuntu7.3 (For technical support please see http://www.ubuntu.com/support) Current version of pixman: 0.16.4 Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. (==) Log file: "/var/log/Xorg.2.log", Time: Mon Sep 13 10:02:02 2010 List of video drivers: apm ark intel mach64 s3virge trident mga tseng ati nouveau neomagic i740 openchrome voodoo s3 i128 radeon siliconmotion nv ztv vmware v4l chips rendition savage sisusb tdfx geode sis r128 cirrus fbdev vesa (++) Using config file: "/home/michael/xorg.conf.new" (==) Using config directory: "/usr/lib/X11/xorg.conf.d" (II) [KMS] No DRICreatePCIBusID symbol, no kernel modesetting. Xorg detected your mouse at device /dev/input/mice. Please check your config if the mouse is still not operational, as by default Xorg tries to autodetect the protocol. Xorg has configured a multihead system, please check your config. Your xorg.conf file is /home/michael/xorg.conf.new To test the server, run 'X -config /home/michael/xorg.conf.new' ddxSigGiveUp: Closing log $ sudo X -config /home/michael/xorg.conf.new Fatal server error: Server is already active for display 0 If this server is no longer running, remove /tmp/.X0-lock and start again. Please consult the The X.Org Foundation support at http://wiki.x.org for help. ddxSigGiveUp: Closing log I then booted Ubuntu in failsafe mode, dropped into root shell, and executed $ X -config /home/michael/xorg.conf.new again. The screen went blank and turned off, so

    Read the article

  • Unusual HEAD requests to nonsense URLs from Chrome

    - by JeremyDWill
    I have noticed unusual traffic coming from my workstation the last couple of days. I am seeing HEAD requests sent to random character URLs, usually three or four within a second, and they appear to be coming from my Chrome browser. The requests repeat only three or four times a day, but I have not identified a particular pattern. The URL characters are different for each request. Here is an example of the request as recorded by Fiddler 2: HEAD http://xqwvykjfei/ HTTP/1.1 Host: xqwvykjfei Proxy-Connection: keep-alive Content-Length: 0 User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/534.13 (KHTML, like Gecko) Chrome/9.0.597.98 Safari/534.13 Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en;q=0.8 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 The response to this request is as follows: HTTP/1.1 502 Fiddler - DNS Lookup Failed Content-Type: text/html Connection: close Timestamp: 08:15:45.283 Fiddler: DNS Lookup for xqwvykjfei failed. No such host is known I have been unable to find any information through Google searches related to this issue. I do not remember seeing this kind of traffic before late last week, but it may be that I just missed it before. The one modification I made to my system last week that was unusual was adding the Delicious add-in/extension to both IE and Chrome. I have since removed both of these, but am still seeing the traffic. I have run virus scan (Trend Micro) and HiJackThis looking for malicious code, but I have not found any. I would appreciate any help tracking down the source of the requests, so I can determine if they are benign, or indicative of a bigger problem. Thanks.

    Read the article

  • Arguments passed on by shell to command in Unix

    - by Ryan Brown
    I've been going over this question and I can't for the life of me figure out why the answer is what it is. How many arguments are passed to the command by the shell on this command line: <pig pig -x " " -z -r" " >pig pig pig a. 8 b. 6 c. 5 d. 7 e. 9 The first symbol is the input-redirection operator. I looked at this question and said ok... arguments, not options, so the 2nd pig, then " ", then -r" ", the 4th pig and the 5th pig... -z and -x are options, so I count 5. The answer is b. 6. Where is the 6th argument that's being passed?

    Read the article

  • Client-Side Dynamic Removal of <script> Tags in <head>

    - by merv
    Is it possible to remove script tags in the <head> of an HTML document client-side and prior to execution of those tags? On the server-side I am able to insert a <script> above all other <script> tags in the <head>, except one, and I would like to be able to remove all subsequent scripts. I do not have the ability to remove <script> tags from the server side. What I've tried: (function (c,h) { var i, s = h.getElementsByTagName('script'); c.log("Num scripts: " + s.length); i = s.length - 1; while(i > 1) { h.removeChild(s[i]); i -= 1; } })(console, document.head); However, the logged number of scripts comes out to only 1, since (as @ryan pointed out) the code is being executed prior to the DOM being ready. Although wrapping the code above in a document.ready event callback does enable proper calculation of the number of <script> tags in the <head>, waiting until the DOM is ready fails to prevent the scripts from loading. Is there a reliable means of manipulating the HTML prior to the DOM being ready? Background If you want more context, this is part of an attempt to consolidate scripts where no option for server-side aggregation is available. Many of the JS libraries being loaded are from a CMS with limited configuration options. The content is mostly static, so there is very little concern about manually aggregating the JavaScript and serving it from a different location. Any suggestions for alternative applicable aggregation techniques would also be welcome.

    Read the article

  • Searching the head of CVS

    - by bobtheowl2
    I'm looking for a 'relatively' easy way to search through CVS to look for a particular string in the HEAD revisions. I realize the way CVS stores versions makes this difficult. But I'm trying to come up with some script to allow this search (performance is not expected here). Currently this command will output the contents of the head files: cvs co -r HEAD -p. stdout = file contents (to be grepped for the search string); stderr = the file name/header info (to be grepped for the line that signifies the file name). Ideally, I want to grep the contents and display the header + a few lines before and after the searched item (the output of this likely directed to some file). I found a way to grep the stdout and stderr using different values, and the resulting stdout/stderr displayed is in the right order. But any attempt to redirect it to a file messes up the order. { { cvs co -r HEAD -p myModule 4>&- | grep 'myString' 2>&4 4>&- } 4>&2 2>&1 >&3 3>&- | grep 'Check' >&2 3>&- } 3>&1 Question 1. Is there an easier way to do this altogether? Question 2. If not, how do I get the output of the code above to append to a file in the same order as displayed on the console?

    Read the article

  • Problem with Intel 4 Series chipset and CentOS dealing with dual head

    - by Antoine
    I have a Fujitsu Lifebook S7220, and I have been trying for a while to configure it to use a dual head setup with CentOS 5.4 x86_64. Every time I try, the X server crashes... I've got an Intel Mobile 4 Series chipset (GMA 4500MHD, if I recall correctly!). When I do lspci -v I get this: 00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07) (prog-if 00 [VGA controller]) Subsystem: Fujitsu Limited. Unknown device 1451 Flags: bus master, fast devsel, latency 0, IRQ 177 Memory at f2000000 (64-bit, non-prefetchable) [size=4M] Memory at d0000000 (64-bit, prefetchable) [size=256M] I/O ports at 1800 [size=8] Capabilities: [90] Message Signalled Interrupts: 64bit- Queue=0/0 Enable- Capabilities: [d0] Power Management version 3 00:02.1 Display controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07) Subsystem: Fujitsu Limited. Unknown device 1451 Flags: bus master, fast devsel, latency 0 Memory at f2400000 (64-bit, non-prefetchable) [size=1M] Capabilities: [d0] Power Management version 3 My question is: has anyone already had this problem, and how did you fix it? Thank you for your answer!

    Read the article

  • CVS: command-line diff on a remote CVS server between two HEAD revs

    - by Gugussee
    My CVS-fu is not very strong anymore (after years of SVN'ing and now Mercurial'ing). I'm trying to do a diff between two revisions of the HEAD branch (everything is in the HEAD anyway). I received an IDE already set up to use a :pserver:myname@cvsserver:port/cvs/project CVS. I'm on Windows XP. I do not want to use the IDE (the goal here is to learn CVS a bit more). Apparently I cannot login using SSH to the CVS server. How can I run a remote CVS diff between two HEAD revs using the command line? P.S: I am new here, mod me up so I can comment etc. :)
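
    A sketch of the usual commands, not taken from the post (revision numbers and paths are placeholders; the pserver root matches the one quoted above): cvs diff works from a checked-out working copy, while cvs rdiff talks to the repository directly.

        # from inside a working copy checked out over pserver
        $ cvs diff -u -r 1.40 -r 1.42 path/to/file.c

        # without a working copy, straight against the repository
        $ cvs -d :pserver:myname@cvsserver:port/cvs/project rdiff -r 1.40 -r 1.42 project/path/to/file.c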

    Read the article

  • Git remote has master but no HEAD

    - by dwynne
    I'm new to Git, so I suspect that I'm misunderstanding something here, but I'll ask anyway. Via TortoiseGit I do the following: Init a new Git repo locally Add a readme file to it and commit Add a new remote Push the new repo to the origin (remote) If I then Browse Refs I see the following: heads/master remotes/origin/master What I find odd is that I don't see a HEAD on the remotes. If I delete my local repo and then clone it from the server (the one I just pushed to above) and then browse the refs I see: heads/master remotes/origin/HEAD remotes/origin/master So why don't I see a remote HEAD after the initial push? NB. I've done the same via Git Bash commands (i.e. not TortoiseGit) and am seeing the same thing.

    Read the article

  • head detection from video

    - by Aman Kaushal
    I have to detect the heads of people in a crowd in real time. For that I detected edges from the video using MATLAB, but from the edge-detected video I am unable to work out how to identify heads. I used edge detection because it is easy to find circles in an edge-detected video, so head detection should be easy. Can anyone help me or suggest a method for head detection in real time? I have used the VGG head detector and the Viola-Jones algorithm, but they only detect faces for small videos and do not detect heads in a large crowd. Suggestions?

    Read the article

  • stop and split generated sequence at repeats - clojure

    - by fitzsnaggle
    I am trying to make a sequence that will only generate values until it finds the following conditions and return the listed results: case head = 0 - return {:origin [all generated except 0] :pattern 0} 1 - return {:origin nil :pattern [all-generated-values] } repeated-value - {:origin [values-before-repeat] :pattern [values-after-repeat] { ; n = int ; x = int ; hist - all generated values ; Keeps the head below x (defn trim-head [head x] (loop [head head] (if (> head x) (recur (- head x)) head))) ; Generates the next head (defn next-head [head x n] (trim-head (* head n) x)) (defn row [x n] (iterate #(next-head % x n) n)) ; Generates a whole row - ; Rows are a max of x - 1. (take (- x 1) (row 11 3)) Examples of cases to stop before reaching end of row: [9 8 4 5 6 7 4] - '4' is repeated so STOP. Return preceding as origin and rest as pattern. {:origin [9 8] :pattern [4 5 6 7]} [4 5 6 1] - found a '1' so STOP, so return everything as pattern {:origin nil :pattern [4 5 6 1]} [3 0] - found a '0' so STOP {:origin [3] :pattern [0]} :else if the sequences reaches a length of x - 1: {:origin [all values generated] :pattern nil} The Problem I have used partition-by with some success to split the groups at the point where a repeated value is found, but would like to do this lazily. Is there some way I can use take-while, or condp, or the :while clause of the for loop to make a condition that partitions when it finds repeats? Some Attempts (take 2 (partition-by #(= 1 %) (row 11 4))) (for [p (partition-by #(stop-match? %) head) (iterate #(next-head % x n) n) :while (or (not= (last p) (or 1 0 n) (nil? (rest p))] {:origin (first p) :pattern (concat (second p) (last p))})) # Updates What I really want to be able to do is find out if a value has repeated and partition the seq without using the index. Is that possible? Something like this - { (defn row [x n] (loop [hist [n] head (gen-next-head (first hist) x n) steps 1] (if (>= (- x 1) steps) (case head 0 {:origin [hist] :pattern [0]} 1 {:origin nil :pattern (conj hist head)} ; Speculative from here on out (let [p (partition-by #(apply distinct? %) (conj hist head))] (if-not (nil? (next p)) ; One partition if no repeats. {:origin (first p) :pattern (concat (second p) (nth 3 p))} (recur (conj hist head) (gen-next-head head x n) (inc steps))))) {:origin hist :pattern nil}))) }
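
    A sketch of one possible shape for this (not from the attempts above): walk the row lazily with loop/recur, stop at 0, at 1, or at the first value already seen, and split into :origin/:pattern per the cases listed; split-row and its arguments are hypothetical names.

        (defn split-row
          "Consume at most `limit` items of `xs`, stopping early at 0, 1, or a repeat."
          [xs limit]
          (loop [acc [] remaining (take limit xs)]
            (if-let [[head & more] (seq remaining)]
              (cond
                (zero? head) {:origin acc :pattern [0]}
                (= 1 head)   {:origin nil :pattern (conj acc 1)}
                (some #{head} acc)
                  (let [[origin pattern] (split-with #(not= % head) acc)]
                    {:origin (vec origin) :pattern (vec pattern)})
                :else (recur (conj acc head) more))
              ;; reached the length limit without hitting a stop condition
              {:origin acc :pattern nil})))

        ;; e.g. (split-row [9 8 4 5 6 7 4] 10) => {:origin [9 8], :pattern [4 5 6 7]}
        ;;      (split-row (row 11 3) 10)      ;; uses the lazy row from the post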

    Read the article

  • Tri-head linux system with Xmonad: is it possible to have HW acceleration

    - by progo
    What means exist to have three monitors, all controlled by Xmonad, and have hardware 3D acceleration as well? I had the pleasure of using three monitors earlier this year, and while Xmonad and Xinerama handle three monitors easily, I had to throw in an extra display driver, and also let go of Nvidia's own TwinView (which is a hack on Xinerama). This left me with no HW acceleration and some flickering, as double buffering wouldn't work with certain applications. However, the three monitors handle so beautifully that I had a hard time coming back to two. I understand the easiest way to achieve an HW-accelerated tri-head combo is to split into two Xorgs. I wouldn't be able to switch windows between the Xorgs, so I'm not really into this solution. What's more, having a cheap and old PCI card along with an even slightly better PCIe card seemed to slow things down. Even if I occasionally disabled the third monitor from the Xorg configuration, I couldn't get HW acceleration to work. Only after I physically disconnected the old PCI card could I get the games back in business. Would a Matrox DualHead2Go/TripleHead2Go and a powerful Nvidia GPU do the trick? I understand Xmonad can be configured to "believe" that a "single" 3360x1050 display (as the DualHead2Go will merge it) is actually two different ones, so that Xmonad's Mod-w and Mod-e would work properly there?

    Read the article

  • HEAD XMLHttpRequest on Chromium

    - by Treviño
    I'm trying to get the HEAD response with an XMLHttpRequest in Chromium to retrieve the Location URL of a shortened URL, but it fails: var ajax = new XMLHttpRequest(); ajax.onreadystatechange = function() { if (ajax.readyState == 4) alert(ajax.getResponseHeader("Location")) }; ajax.open('HEAD', "http://bit.ly/4Agih5", false); ajax.send(); // Refused to get unsafe header "Location" // Error: NETWORK_ERR: XMLHttpRequest Exception 101

    Read the article
