Search Results

Search found 832 results on 34 pages for 'plot'.


  • R programming function without ()

    - by Mark Kennedy
    I have the following very simple map.r file. I want the user to type "click" in interactive mode and have the corresponding function run. Since click is a function, the user normally has to type "click()". How can I make it so that they only have to type the word (without parentheses), and then have that function do something with the img? So the user types:

        mydist("image.pnm")
        click   # and then the function click does what it's supposed to

        mydist <- function(mapfile) {
          img <- read.pnm(mapfile)
          plot(img)
        }
        click <- function() {
          # Prompt user to click on img
        }
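
    One way to get a bare-word command (a sketch, not from the original thread): base R's makeActiveBinding() re-runs a function every time the bare name is evaluated, so typing click at the prompt can trigger the locator logic without parentheses. The read.pnm() call assumes the pixmap package.

        library(pixmap)   # assumed: provides read.pnm()

        mydist <- function(mapfile) {
          img <- read.pnm(mapfile)
          plot(img)
        }

        # An active binding is evaluated every time the bare name is looked up,
        # so typing  click  (no parentheses) at the prompt runs this function.
        makeActiveBinding("click", function() {
          cat("Click on the image...\n")
          pt <- locator(1)   # waits for one mouse click on the current plot device
          invisible(pt)
        }, .GlobalEnv)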

    Read the article

  • Looking for a specific python gui module to perform the following task

    - by Sadaf Amouz
    I am looking for a Python GUI module that is best suited for the following job: I am trying to plot a graph with many columns (perhaps hundreds), each column representing an individual. The user should be able to drag the columns around and drop them onto different columns to swap the two. Also, there are going to be additional dots drawn on the columns, and by hovering over those dots the user should see the values corresponding to those dots. What is the best way to approach this?
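
    For the hover part at least, plain matplotlib can get you close; below is a minimal, hypothetical sketch that shows a value annotation when the mouse moves over a dot (the draggable-column part would need additional pick/button event handling on top of this).

        import matplotlib.pyplot as plt
        import numpy as np

        xs = np.arange(5)              # one column per individual (toy data)
        ys = np.random.rand(5)

        fig, ax = plt.subplots()
        dots = ax.scatter(xs, ys)
        annot = ax.annotate("", xy=(0, 0), xytext=(10, 10),
                            textcoords="offset points",
                            bbox=dict(boxstyle="round", fc="w"))
        annot.set_visible(False)

        def on_move(event):
            # Show the value of the dot under the cursor, hide it otherwise.
            if event.inaxes != ax:
                return
            contains, info = dots.contains(event)
            if contains:
                i = info["ind"][0]
                annot.xy = (xs[i], ys[i])
                annot.set_text(f"{ys[i]:.2f}")
                annot.set_visible(True)
            else:
                annot.set_visible(False)
            fig.canvas.draw_idle()

        fig.canvas.mpl_connect("motion_notify_event", on_move)
        plt.show()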

    Read the article

  • Oracle - Count distinct values of a column

    - by Luciana Borela
    Hi, I have this table:

        Reason | Area_Code | Id
        x      | dig       | 1
        x      | dig       | 2
        y      | dig       | 3
        h      | util      | 4
        x      | dig       | 5

    I'm trying to write a SQL query that returns:

        Reason | Amount of distinct Reason | Area_Code
        x      | 3                         | dig
        y      | 1                         | dig
        h      | 1                         | util

    I will use this result to plot a chart. I don't have any idea how this SQL can be done. Could you help me?
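
    If I read the expected output correctly, this is a plain GROUP BY rather than a COUNT(DISTINCT ...); a sketch, with the table name assumed:

        SELECT reason,
               COUNT(*) AS amount,     -- number of rows per reason
               area_code
          FROM my_table                -- hypothetical table name
         GROUP BY reason, area_code
         ORDER BY amount DESC;

    If one reason could appear with several area codes and you still want a single row per reason, you would need to decide how to pick the area code (for example MAX(area_code)) instead of grouping by it.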

    Read the article

  • Is it possible to integrate Google Maps with the directions and the Transit layer into an iPhone app

    - by t4v
    Sorry if this question is obvious for some of you. I know we can link out to the existing Google Maps app, but I would like an app that does not exit and provides the directions within it. I intend to use GTFS for public transit. On the other hand, would it be possible to plot, inside the iPhone app, the route returned by Google Transit as a line (say, if I send it the departure and arrival addresses)? Thank you so much!
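
    The route-drawing part is doable with Apple's MapKit; here is a minimal, hypothetical sketch (modern Swift MapKit API, not what was available when this was asked) that plots a list of decoded coordinates as a polyline. The coordinates and outlet wiring are placeholders.

        import UIKit
        import MapKit

        final class RouteViewController: UIViewController, MKMapViewDelegate {
            @IBOutlet weak var mapView: MKMapView!   // assumed to be wired up in a storyboard

            // Coordinates decoded from the transit response (hypothetical values).
            let routeCoordinates = [
                CLLocationCoordinate2D(latitude: 25.774, longitude: -80.193),
                CLLocationCoordinate2D(latitude: 25.790, longitude: -80.130),
            ]

            override func viewDidLoad() {
                super.viewDidLoad()
                mapView.delegate = self
                let polyline = MKPolyline(coordinates: routeCoordinates,
                                          count: routeCoordinates.count)
                mapView.addOverlay(polyline)
            }

            // Tell MapKit how to draw the overlay.
            func mapView(_ mapView: MKMapView, rendererFor overlay: MKOverlay) -> MKOverlayRenderer {
                let renderer = MKPolylineRenderer(overlay: overlay)
                renderer.strokeColor = .blue
                renderer.lineWidth = 3
                return renderer
            }
        }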

    Read the article

  • How to begin? Windows 8 Development

    - by Dennis Vroegop
    Ok. I convinced you in my last post to do some Win8 development. You want a piece of that cake, or whatever your reasons may be. Good! Welcome to the club! Now let me ask you a question: what are you going to write?

    Ah. That's the big one, isn't it? What indeed? If you have been creating applications for computers before, you're in for quite a shock. The way people perceive apps on a tablet is quite different from what we know as applications. There's a reason we call them apps instead of applications! Yes, technically they are applications, but we don't call them apps only because it sounds cool. The abbreviated form of the word applications is itself a pointer. Apps are small. Apps are focused. Apps are more lightweight. Apps do one thing, but they do that one thing extremely well.

    In the 'old' days we wrote huge systems. We built ecosystems of services, screens, databases and more to create a system that provides value for the user. Think about it: what application do you use most at work? Can you describe in one sentence what it is, or what it does, and yet still distinctively describe its purpose? I doubt you can. Let's have a look at Outlook. We all know it and we all love or hate it. But what is it? A mail program? No, there's so much more there: calendar, contacts, RSS feeds and so on. Some call it a 'collaboration' application, but that's not really true either. After all, why should a collaboration application give me my schedule for the day? I think the best way to describe Outlook is "client for Exchange", although that isn't accurate either. Anyway: Outlook is a great application, but it's not an 'app' and therefore not very suitable for WinRT.

    Ok, disclaimer here: yes, you can write big applications for WinRT. Some will. But that's not what 99.9% of developers will do, so I am stating here that big applications are not meant for WinRT. If the remaining 0.1% of developers think this is nonsense, they are welcome to go ahead, but for the majority here this is not what we're talking about. So: apps are small, lightweight and good at what they do, but only at that. If you're a Phone developer you already know this: Phone apps on any platform fit the description above.

    If you've ever worked in a large corporation you might have seen one of these before: the Mission Statement. It's supposed to be a one-liner that sums up what the company is supposed to do. Funnily enough, although this doesn't work for large companies, it does work for defining your app. A mission statement for an app describes what it does. If it doesn't fit in the mission statement, your app is going to get too big and will fail. A statement like this should be in the following style: "<your app name> is the best app to <describe single task>".

    Fill in the blanks, write it and go! Mmm... not really. There are some things there we need to think about. But the statement is a very, very important one. If you cannot fit your app in that line, you're preparing to fail. Your app will become too big, its purpose will be unclear and it will be hard to use. People won't download it, and those who do will give it a bad rating, thereby preventing that huge success you've been dreaming about. Stick to the statement!

    Ok, let's give it a try: "PlanesAreCool" is the best app to do planespotting in the field. You might have seen these people along the runways of airports: taking photographs of airplanes and noting down their numbers and arrival and departure times. We are going to help them out with our great app!
    If you look at the statement, can you guess what it does? I bet you can. If you find out it isn't clear enough or if it's too broad, refine it. This is probably the most important step in the development of your app, so give it enough time!

    So. We've got the statement. Print it out, stick it to the wall and look at it. What does it tell you? If you see this, what do you think the app does? Write that down. Sit down with some friends and talk about it. What do they expect from an app like this? Write that down as well. Brainstorm. Make a list of features. This is mine:

      - Note planes
      - Look up aircraft carriers
      - Add pictures of that plane
      - Look up airfields
      - Notify friends of new spots
      - Look up details of a type of plane
      - Plot a graph with arrival and departure times
      - Share new spots on social media
      - Look up history of a particular aircraft
      - Compare your spots with friends
      - Write down arrival times
      - Write down departure times
      - Write down wind conditions
      - Write down the runway they take
      - Look up weather conditions for next spotting day
      - Invite friends to join you for a day of spotting

    Now, I must make it clear that I am not a planespotter, nor do I know what one does. So if the above list makes no sense, I apologize. There is a lesson: write apps for stuff you know about...

    First of all, let's look at our statement and then go through the list of features. Remove everything that has nothing to do with that statement! If you end up with an empty list, try again with both steps.

      - Note planes
      - Look up aircraft carriers
      - Add pictures of that plane
      - Look up airfields
      - Notify friends of new spots
      - Look up details of a type of plane
      - Plot a graph with arrival and departure times
      - Share new spots on social media
      - Look up history of a particular aircraft
      - Compare your spots with friends
      - Write down arrival times
      - Write down departure times
      - Write down wind conditions
      - Write down the runway they take
      - Look up weather conditions for next spotting day
      - Invite friends to join you for a day of spotting

    That's better. The things I removed could be pretty useful to a planespotter and could be fun to write. But do they match the statement? I said that the app is for spotting in the field, so "look up airfields" doesn't belong there: I know where I am, so why look it up? And the same goes for inviting friends or looking up the weather conditions for tomorrow. I am at the airfield right now, looking through my binoculars at the planes. I know the weather now and I don't care about tomorrow. If you feel the items you've crossed out are valuable, then why not write another app? One that says "SpotNoter" is the best app for preparing a day of spotting with my friends. That's a different app!

    Remember: Win8 apps are small and very good at doing ONE thing, and one thing only! If you have made that list, it's time to prepare the navigation of your app. The navigation is how users see your app and how they use it. We'll do that next time!

    Read the article

  • Using Microsoft's Chart Controls In An ASP.NET Application: Serializing Chart Data

    In most usage scenarios, the data displayed in a Microsoft Chart control comes from some dynamic source, such as from a database query. The appearance of the chart can be modified dynamically, as well; past installments in this article series showed how to programmatically customize the axes, labels, and other appearance-related settings. However, it is possible to statically define the chart's data and appearance strictly through the control's declarative markup. One of the demos examined in the Getting Started article rendered a column chart with seven columns whose labels and values were defined statically in the <asp:Series> tag's <Points> collection. Given this functionality, it should come as no surprise that the Microsoft Chart Controls also support serialization. Serialization is the process of persisting the state of a control or an object to some other medium, such as to disk. Deserialization is the inverse process, and involves taking the persisted data and recreating the control or object. With just a few lines of code you can persist the appearance settings, the data, or both to a file on disk or to any stream. Likewise, it takes just a few lines of code to reconstitute a chart from the persisted information. This article shows how to use the Microsoft Chart Control's serialization functionality by examining a demo application that allows users to create custom charts, specifying the data to plot and some appearance-related settings. The user can then save a "snapshot" of this chart, which persists its appearance and data to a record in a database. From another page, users can view these saved chart snapshots. Read on to learn more! Read More >
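
    As a rough illustration of the serializer described above (my own sketch, not the article's code), saving and restoring a chart's data and appearance to and from a byte array might look like this with the ASP.NET Chart control:

        using System.IO;
        using System.Web.UI.DataVisualization.Charting;

        public static class ChartSnapshots
        {
            // Persist the chart's appearance and data to a byte array
            // (e.g. for storage in a database column).
            public static byte[] Save(Chart chart)
            {
                using (var stream = new MemoryStream())
                {
                    chart.Serializer.Content = SerializationContents.All; // Data, Appearance or All
                    chart.Serializer.Save(stream);
                    return stream.ToArray();
                }
            }

            // Rebuild a chart from a previously saved snapshot.
            public static void Load(Chart chart, byte[] snapshot)
            {
                using (var stream = new MemoryStream(snapshot))
                {
                    chart.Serializer.Load(stream);
                }
            }
        }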

    Read the article

  • DC Comics Identifies Krypton on the Star Map

    - by Jason Fitzpatrick
    This week Action Comics Superman #14 hits the stands, and DC Comics reveals the actual location of Krypton, delivered by none other than beloved astrophysicist Neil Tyson. Phil Plait at Bad Astronomy reports on the resolution of fans' long-standing curiosity about the location of Krypton: Well, that's about to change. DC comics is releasing a new book this week – Action Comics Superman #14 – that finally reveals the answer to this stellar question. And they picked a special guest to reveal it: my old friend Neil Tyson. Actually, Neil did more than just appear in the comic: he was approached by DC to find a good star to fit the story. Red supergiants don't work; they explode as supernovae when they are too young to have an advanced civilization rise on any orbiting planets. Red giants aren't a great fit either; they can be old, but none is at the right distance to match the storyline. It would have to be a red dwarf: there are lots of them, they can be very old, and some are close enough to fit the plot. I won't keep you in suspense: the star is LHS 2520, a red dwarf in the southern constellation of Corvus (at the center of the picture here). It's an M3.5 dwarf, meaning it has about a quarter of the Sun's mass, a third its diameter, roughly half the Sun's temperature, and a luminosity of a mere 1% of our Sun's. It's only 27 light years away – very close on the scale of the galaxy – but such a dim bulb you need a telescope to see it at all (for any astronomers out there, the coordinates are RA: 12h 10m 5.77s, Dec: -15° 4m 17.9s).

    Read the article

  • Is there a measure of code rot?

    - by DarenW
    I'm dealing, again, with a messy C++ application: tons of classes with confusing names, objects holding pointers into each other and all over, long-winded Boost and STL data types, etc. (Pause and consider your favorite terror of messy legacy code. We probably have it.) The phrase "code rot" often comes to mind when I work on this project. Is there a quantitative way to measure code rot? I wouldn't expect anything highly meaningful or scientific, since no other measure of code productivity or quality is that precise. I'm not looking for a mere opposite of measures of code quality, but specifically a measure of how many bad things happened after a series of maintenance software "engineers" have had turns hacking at the code. A general measure applying to any language, or many languages, would be great; if there's no such thing, then at least one for C++, which is a better-than-average language for creating messes. Maybe something involving a measure of the topology of how objects connect during runtime, a count of chunks of commented-out code, how many files a typical variable's usage is scattered over... I don't know. But surely by now, a decade into the 21st century, someone has attempted to define some sort of rot measure. It would be especially interesting to automate a series of svn checkouts, measure the "rottenosity" of each, and plot the decay over time.
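
    The last idea automates fairly easily; below is a rough, hypothetical Python sketch that walks a list of svn revisions, counts one crude rot proxy (lines that look like commented-out code), and records the value per revision for plotting. The metric, file extensions and revision numbers are assumptions, not an established measure.

        import re
        import subprocess
        from pathlib import Path

        # Heuristic: a // or /* line that itself ends in ; or { is probably commented-out code.
        COMMENTED_CODE = re.compile(r"^\s*(//|/\*).*[;{]\s*(\*/)?\s*$")

        def commented_out_lines(root: Path) -> int:
            count = 0
            for path in root.rglob("*"):
                if path.suffix in {".cpp", ".cc", ".h", ".hpp"}:
                    for line in path.read_text(errors="ignore").splitlines():
                        if COMMENTED_CODE.match(line):
                            count += 1
            return count

        def rot_over_revisions(working_copy, revisions):
            scores = {}
            for rev in revisions:
                # Roll the working copy back to each historical revision in place.
                subprocess.run(["svn", "update", "-r", str(rev), working_copy], check=True)
                scores[rev] = commented_out_lines(Path(working_copy))
            return scores

        if __name__ == "__main__":
            # Hypothetical revision list; plot the values against the keys afterwards.
            print(rot_over_revisions("./messy-app", [100, 200, 300, 400]))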

    Read the article

  • How to get average of multiple time series

    - by Supun Kamburugamuva
    I have a few computers running. I want to get the average CPU usage of these computers and plot it as a graph. So I've collected CPU usage at regular intervals on these machines; for each computer I have a data set of time and CPU usage. But the times at which the CPU measurements are taken on different machines are not in sync. For example, on the first machine the CPU may be measured at times 1, 5, 9; on the second machine at times 2, 5, 8. I want to get an average data series from these different data sets. Could you point me to some resources? Thanks - Supun.
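
    One practical route (a sketch, assuming pandas is acceptable): put each machine's samples on a datetime index, join them into one frame, interpolate each series at the other machines' timestamps, then average across machines.

        import pandas as pd

        # Toy data: each machine sampled at its own, unsynchronised times.
        machine1 = pd.Series([20.0, 35.0, 30.0],
                             index=pd.to_datetime(["2010-05-01 10:01",
                                                   "2010-05-01 10:05",
                                                   "2010-05-01 10:09"]))
        machine2 = pd.Series([50.0, 45.0, 55.0],
                             index=pd.to_datetime(["2010-05-01 10:02",
                                                   "2010-05-01 10:05",
                                                   "2010-05-01 10:08"]))

        # Outer-join on time, fill each series at the other's timestamps by
        # time-based interpolation, then average across columns.
        combined = pd.concat({"m1": machine1, "m2": machine2}, axis=1)
        combined = combined.interpolate(method="time").ffill().bfill()
        average = combined.mean(axis=1)

        print(average)   # one averaged CPU series, ready to plot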

    Read the article

  • Fourier transform software

    - by CFP
    Hello everyone! After spending a lot of time searching for this, I thought that some SuperUser gurus might know the answer :) I'm searching for an open source application to compute an FFT that could:

    - Import a list of points from a text file (in any format, I could write conversion scripts if needed), for example 0,1; 1,2; 4,5
    - Compute the associated discrete transform and output the list of coefficients

    Ideally, it would also display the plot and the associated Fourier decomposition on the same graph to allow comparison, but this is not absolutely needed. It can be either on Windows or on Linux/Unix. Can you think of a solution? Thanks, CFP.
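
    If a small script would do instead of a packaged application, NumPy covers both steps; a sketch, assuming the input file holds one "x,y" pair per line (note that the discrete Fourier transform assumes uniformly spaced x values):

        import numpy as np
        import matplotlib.pyplot as plt

        # Assumed input format: "0,1" / "1,2" / "4,5" ... one x,y pair per line.
        data = np.loadtxt("points.txt", delimiter=",")
        x, y = data[:, 0], data[:, 1]

        # Discrete Fourier transform of the sampled values.
        coeffs = np.fft.rfft(y)
        np.savetxt("coefficients.txt",
                   np.column_stack([coeffs.real, coeffs.imag]),
                   header="real imag")

        # Optional: overlay the input points and the reconstruction from the coefficients.
        plt.plot(x, y, "o-", label="input points")
        plt.plot(x, np.fft.irfft(coeffs, n=len(y)), "--", label="inverse FFT")
        plt.legend()
        plt.show()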

    Read the article

  • Error with TextMate 2 --shell-escape and gnuplot 10.8.2

    - by Manuel
    I had TM 1.x for a long time, but a week ago I updated my system to Mountain Lion 10.8.2 and installed TM2. The problem is that I write with LaTeX, and sometimes I use gnuplot for the graphs (I installed gnuplot with MacPorts). But now it doesn't work, because --shell-escape doesn't work; this is the error message I get:

        Package pgfplots Error: Sorry, the gnuplot-result file '"untitled 2.pgf-plot.table"' could not be found. Maybe you need to enable the shell-escape feature? For pdflatex, this is 'pdflatex -shell-escape'. You can also invoke 'gnuplot .gnuplot' manually on the respective gnuplot file.

    And then, looking around, I discovered that it's not just gnuplot but everything that needs --shell-escape. Question: what happened? How can I get TM to have the correct rights so this works? It worked fine in Snow Leopard with TM 1.5.

    Read the article

  • MacVim + tmux or: The Copy Paste Riddle

    - by Konzepz
    Now this is weird. I copied a chunk of text from somewhere into the clipboard, ran mvim from a tmux buffer, opened a file in MacVim, and pasted the text. MacVim responds with the error "E353: Nothing in register +" and nothing gets pasted. The plot thickens: I try to copy this error -- the paste fails. I go over the same steps, this time running MacVim from a regular Terminal window (without tmux) -- everything is in its right place. Whaa?
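
    Not an answer from the thread, but the usual culprit for clipboard breakage under tmux on OS X at the time was that processes started inside tmux could not reach the system pasteboard. The commonly used workaround was the reattach-to-user-namespace wrapper (assuming Homebrew; adjust the shell name to taste):

        $ brew install reattach-to-user-namespace

        # ~/.tmux.conf
        set-option -g default-command "reattach-to-user-namespace -l bash"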

    Read the article

  • LM Sensors always returning same (invalid) value for one temp sensor

    - by pkaeding
    I am trying to monitor the temp sensors on a server, and plot them using Cacti. I have lm-sensors installed and working correctly. For example, here is the output from sensors:

        % sensors
        acpitz-virtual-0
        Adapter: Virtual device
        temp1:       +26.8 C  (crit = +100.0 C)
        temp2:       +32.0 C  (crit = +60.0 C)

        coretemp-isa-0000
        Adapter: ISA adapter
        Core 0:      +36.0 C  (high = +105.0 C, crit = +105.0 C)

        coretemp-isa-0001
        Adapter: ISA adapter
        Core 1:      +42.0 C  (high = +105.0 C, crit = +105.0 C)

    However, when I try to get this data via SNMP, I get only one sensor's temperature correctly, and another one always returns 100.000 C:

        % snmpwalk -Os -c public -v 1 10.8.0.18 -m ALL lmTempSensors
        lmTempSensorsIndex.1 = INTEGER: 0
        lmTempSensorsIndex.2 = INTEGER: 1
        lmTempSensorsDevice.1 = STRING: temp1
        lmTempSensorsDevice.2 = STRING: temp1
        lmTempSensorsValue.1 = Gauge32: 26800
        lmTempSensorsValue.2 = Gauge32: 100000

    So, my question is two-fold:

    - Why is the second sensor that is returned by SNMP giving a value of 100 C (when it should be 32 C)?
    - Why are my CPU core sensors not being returned by SNMP?

    Read the article

  • data handling with javascript

    - by Vincent Warmerdam
    Python has a very neat package called pandas which allows for quick data transformation: tables, aggregation, that sort of thing. A lot of this type of functionality can also be found in Python's itertools module, and the plyr package in R is very similar. Usually one would use this functionality to produce a table which is later visualized with a plot. I am personally very fond of d3, and I would like to allow the user to first indicate what type of data aggregation he wants on the dataset before it is visualized. The visualisation in question involves making a heatmap where the user gets to select the size of the bins of the heatmap beforehand (I want d3 to project this through Leaflet); I want to visually select the ideal size of the bins for the heatmap. The way I work now is that I take the dataset, aggregate it with Python and then manually load it into d3. This is a process that takes a lot of human effort, and I was wondering if the data aggregation could be done through the JavaScript of the browser. I couldn't find a package for JavaScript specifically built for data, suggesting (to me) that this is a bad idea and that one should not use JavaScript for the data handling. Is there a good module/package for JavaScript to handle data aggregation? Is it a good/bad idea to do the data aggregation in JavaScript (performance wise)?
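
    Doing the binning in the browser is workable for moderate data sizes; plain Array.prototype.reduce (or d3's own grouping helpers) is enough. A hypothetical sketch that bins points into a grid of user-chosen cell size and returns the array-of-objects shape d3 likes to bind to:

        // points: [{x: ..., y: ..., value: ...}, ...]; binSize chosen by the user.
        function binForHeatmap(points, binSize) {
          const bins = points.reduce((acc, p) => {
            const key = Math.floor(p.x / binSize) + "," + Math.floor(p.y / binSize);
            if (!acc[key]) acc[key] = { count: 0, total: 0 };
            acc[key].count += 1;
            acc[key].total += p.value;
            return acc;
          }, {});

          // Flatten the bin map into one object per heatmap cell.
          return Object.keys(bins).map(key => {
            const [bx, by] = key.split(",").map(Number);
            return {
              x: bx * binSize,
              y: by * binSize,
              count: bins[key].count,
              mean: bins[key].total / bins[key].count
            };
          });
        }

        // Example: re-bin on the fly when the user picks a new bin size.
        const heatmapData = binForHeatmap(
          [{ x: 1.2, y: 3.4, value: 10 }, { x: 1.3, y: 3.5, value: 14 }],
          0.5
        );
        console.log(heatmapData);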

    Read the article

  • SNMP - Value of CPU processor load not reflecting reality

    - by Ovesh
    I'm trying to plot CPU load on my server, a ProLiant DL360p Gen8 (same behavior on a ProLiant DL360 G7). The machine is running VMware ESXi 5.1. To create a CPU spike I run dd if=/dev/zero of=/dev/null, and I know the CPU is overloaded because I can see a correlating spike in the graphs displayed in vCenter. However, running this snmpwalk:

        snmpwalk -v 1 -c ******** 192.168.MY_IP 1.3.6.1.2.1.25.3.3.1.2

    shows the following results:

        iso.3.6.1.2.1.25.3.3.1.2.1 = INTEGER: 3
        iso.3.6.1.2.1.25.3.3.1.2.2 = INTEGER: 2
        iso.3.6.1.2.1.25.3.3.1.2.3 = INTEGER: 2
        iso.3.6.1.2.1.25.3.3.1.2.4 = INTEGER: 3

    Am I not looking at the right MIB? Should I be multiplying these by a constant? By the way, using HP Agentless Monitoring I was able to get some CPU stats, but not what I'm looking for, at least nothing I could find wading through those MIBs.

    Read the article

  • Issue in understanding how to compare performance of classifier using ROC

    - by user1214586
    I am trying to demystify pattern recognition techniques and have understood a few of them. I am trying to design a classifier M. A gesture is classified based on the hamming distance between the sample time series y and the training time series x, and the results of the classifier are probabilistic values. There are 3 classes/categories with labels A, B, C which classify hand gestures, with 100 samples per class to be classified (single feature, data length = 100). The data are different time series (x coordinate vs time). The training set is used to assign probabilities indicating which gesture has occurred how many times: out of 10 training samples, if gesture A appeared 6 times then the probability that a gesture falls under category A is P(A)=0.6, and similarly P(B)=0.3 and P(C)=0.1. Now, I am trying to compare the performance of this classifier with a Bayes classifier, k-NN, principal component analysis (PCA) and a neural network. On what basis, parameters and method should I do this if I use ROC or cross-validation, given that the features for my classifier are the probabilistic values for the ROC plot? What should the features be for k-NN, Bayes classification and PCA? Is there code for this that would be useful? What should the value of k be if there are 3 classes of gestures? Please help. I am in a fix.
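
    A common way to keep the comparison fair (a sketch, not tied to the original data): give every classifier the same raw feature vectors, take cross-validated class probabilities from each, and compare one-vs-rest ROC AUC. The probability outputs play the same role that the hamming-distance scores play for classifier M. The data below are random placeholders, and the multi_class option assumes scikit-learn 0.22 or newer.

        import numpy as np
        from sklearn.model_selection import cross_val_predict
        from sklearn.metrics import roc_auc_score
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.naive_bayes import GaussianNB

        # X: (300, 100) time-series samples, y: labels 'A'/'B'/'C' -- placeholders here.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 100))
        y = np.repeat(["A", "B", "C"], 100)

        classifiers = {
            "3-NN": KNeighborsClassifier(n_neighbors=3),
            "Naive Bayes": GaussianNB(),
        }

        for name, clf in classifiers.items():
            # Cross-validated probability estimates, so every sample is scored
            # by a model that never saw it during training.
            proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")
            auc = roc_auc_score(y, proba, multi_class="ovr")
            print(f"{name}: one-vs-rest ROC AUC = {auc:.3f}")

    Note that PCA by itself is a dimensionality-reduction step rather than a classifier; it is usually paired with one of the others (for example PCA followed by k-NN) before it can enter a comparison like this.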

    Read the article

  • How can I use structured references to a column in an Excel macro?

    - by Eshwar
    Here's an example that will explain things:

        Sheets("Plot Data July").Select
        ActiveSheet.ListObjects("tPDJuly").Range.AutoFilter Field:=2
        ActiveSheet.ListObjects("tPDJuly").Range.AutoFilter Field:=4

    As you can see above, Field:=2 is a relative reference to the second field in the table called "tPDJuly", so if I add more columns this number does not get updated. The field is actually called "Grade" in the table. So is there a way of coding this so that no matter which column it is in, "Grade" is always referenced correctly? I suppose one solution is to add a line that finds the column number for "Grade"?
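
    One way to do exactly that (a small sketch): resolve the field number from the column header via ListColumns, so the filter keeps working when columns are inserted or moved.

        Dim tbl As ListObject
        Dim gradeCol As Long

        Set tbl = Sheets("Plot Data July").ListObjects("tPDJuly")

        ' Resolve the field number from the column header instead of hard-coding 2.
        gradeCol = tbl.ListColumns("Grade").Index

        tbl.Range.AutoFilter Field:=gradeCol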

    Read the article

  • Why is Ubuntu's clock getting slower or faster?

    - by ændrük
    Ubuntu's clock is off by about a half hour. Where do I even start troubleshooting this? It's allegedly being set "automatically from the Internet". How can I verify that "the Internet" knows what time it is?

    Details

    Ubuntu has had plenty of time to communicate with the Internet:

        $ date; uptime
        Fri May 18 05:56:00 PDT 2012
         05:56:00 up 12 days, 10:48,  2 users,  load average: 0.61, 0.96, 1.15

    This time server I found via a web search does appear to know the correct time:

        $ date; ntpdate -q north-america.pool.ntp.org
        Fri May 18 05:56:09 PDT 2012
        server 208.38.65.37, stratum 2, offset 1752.625337, delay 0.10558
        server 46.166.138.172, stratum 2, offset 1752.648597, delay 0.10629
        server 205.189.158.228, stratum 3, offset 1752.672466, delay 0.11829
        18 May 05:56:18 ntpdate[29752]: step time server 208.38.65.37 offset 1752.625337 sec

    There aren't any reported errors related to NTP:

        $ grep -ic ntp /var/log/syslog
        0

    After rebooting, the time was automatically corrected and the following appeared in /var/log/syslog:

        May 18 17:58:12 aux ntpdate[1891]: step time server 91.189.94.4 offset 1838.497277 sec

    A log of the offset reported by ntpdate reveals that the clock is drifting by about 9 seconds every hour:

        $ while true; do ntpdate-debian -q | tail -n 1 >> 'drift.log'; sleep 16m; done
        ^C
        $ r -e '
          attach(read.table("drift.log", header=FALSE))
          clock <- as.POSIXct(paste(V1, V2, V3), format="%d %b %H:%M:%S")
          fit <- lm(V10~clock)
          png("drift.png")
          plot(clock, V10, xlab="Clock time", ylab="Time server offset (s)")
          abline(fit)
          mtext(sprintf("Drift rate: %.2f s/hr", fit$coefficients[[2]]*3600))
        '
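
    For the drift itself, one low-effort fix (not from the original report, just a common remedy) is to run a continuously disciplining NTP daemon instead of relying on the one-shot ntpdate at boot:

        $ sudo apt-get install ntp     # installs and starts ntpd, which keeps correcting the clock
        $ ntpq -p                      # after a few minutes, check peers and the current offset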

    Read the article

  • Very Cool – Miami 311 System for tracking citizen service requests (Windows Azure, Silverlight

    - by Jim Duffy
    Having grown up in South Florida, this short but very enlightening video explaining how the City of Miami has implemented a 311 citizen service request system using Windows Azure, Silverlight and Bing Maps definitely caught my attention. The Miami311 system is a Windows Azure/Silverlight-based solution which enables City of Miami citizens to report and track issues reported to city management. The system uses Bing Maps to plot the location and relevant information about each issue reported, so citizens now have the ability to easily see the status of an issue without having to call the city office. What I found interesting were a couple of benefits that a metropolitan area such as Miami can take advantage of in a Windows Azure cloud-based solution. For the City of Miami, both benefits center around the weather. Of course, the threat of a hurricane is a real issue in South Florida, and what better way to make sure your site stays up during a hurricane than to have the site hosted far away from the eye of the storm. Using a Windows Azure cloud-based architecture, the City of Miami is able to host the application within the Microsoft data centers, safely away from any hurricane passing through South Florida. The second benefit is the inherent scalability of a Windows Azure based solution. During a severe weather event like thunderstorms or, even worse, a hurricane, downed trees and power lines are a commonly reported problem. Being able to quickly scale up the computing resources required to handle the spike in citizens reporting these types of problems on the site is a huge benefit. Once the weather event has passed and downed-tree reports begin to subside, they can quickly reverse the process and scale the system back down to pre-storm levels. It's day-to-day kind of stuff but very cool stuff nonetheless. Have a day. :-|

    Read the article
