Search Results

Search found 77 results on 4 pages for 'bins'.

Page 2/4 | < Previous Page | 1 2 3 4  | Next Page >

  • how to define fill colours in ggplot histogram?

    - by Andreas
    I have the following simple data: data <- structure(list(status = c(9, 5, 9, 10, 11, 10, 8, 6, 6, 7, 10, 10, 7, 11, 11, 7, NA, 9, 11, 9, 10, 8, 9, 10, 7, 11, 9, 10, 9, 9, 8, 9, 11, 9, 11, 7, 8, 6, 11, 10, 9, 11, 11, 10, 11, 10, 9, 11, 7, 8, 8, 9, 4, 11, 11, 8, 7, 7, 11, 11, 11, 6, 7, 11, 6, 10, 10, 9, 10, 10, 8, 8, 10, 4, 8, 5, 8, 7), statusgruppe = c(0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, NA, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0)), .Names = c("status", "statusgruppe"), class = "data.frame", row.names = c(NA, -78L )) From that I'd like to make a histogram: ggplot(data, aes(status))+ geom_histogram(aes(y=..density..), binwidth=1, colour = "black", fill="white")+ theme_bw()+ scale_x_continuous("Status", breaks=c(min(data$status,na.rm=T), median(data$status, na.rm=T), max(data$status, na.rm=T)),labels=c("Low", "Middle", "High"))+ scale_y_continuous("Percent", formatter="percent") Now I'd like the bins to take colour according to value - e.g. bins with value 9 get dark grey, everything else light grey. I have tried with "fill=statusgruppe", scale_fill_grey(breaks=9) etc. - but I can't get it to work. Any ideas?
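
    A rough illustration of the underlying idea (not the ggplot2 answer the question asks for): compute the bins first, then map a condition on each bin's value to its fill colour. Sketched here in Python/matplotlib on the first values of the posted data; the bin edges and colour names are illustrative only.

      import numpy as np
      import matplotlib.pyplot as plt

      # First few of the posted status values, for illustration only.
      status = np.array([9, 5, 9, 10, 11, 10, 8, 6, 6, 7, 10, 10, 7, 11, 11, 7], dtype=float)
      counts, edges = np.histogram(status, bins=np.arange(3.5, 12.5, 1.0))
      # Dark grey for the bin that contains the value 9, light grey otherwise.
      colors = ["darkgrey" if lo <= 9 < hi else "lightgrey"
                for lo, hi in zip(edges[:-1], edges[1:])]
      plt.bar(edges[:-1], counts / counts.sum(), width=1.0, align="edge",
              color=colors, edgecolor="black")
      plt.show()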

    Read the article

  • Optimizing a bin-placement algorithm

    - by user258651
    Alright, I've got two collections, and I need to place elements from collection1 into the bins (elements) of collection2, based on whether their value falls within a given bin's range. For a concrete example, assume I have a sorted collection of objects (bins) which each have an int range ([1...4], [5..10], etc). I need to determine the range an int falls in, and place it in the appropriate bin. foreach(element n in collection1) { foreach(bin m in collection2) { if (m.inRange(n)) { m.add(n); break; } } } So the obvious NxM complexity algorithm is there, but I really would like to see Nxlog(M). To do this I'd like to use BinarySearch in place of the inner foreach loop. To use BinarySearch, I need to implement an IComparer class to do the searching for me. The problem I'm running into is that this approach would require me to make an IComparer.Compare function that compares two different types of objects (an element to its bin), and that doesn't seem possible or correct. So I'm asking, how should I write this algorithm?
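
    One way around the two-type comparer issue is to binary-search a list of keys extracted from the bins (their lower bounds) rather than the bin objects themselves. A minimal sketch in Python (the question is C#/.NET, where the analogue would be binary-searching an array of range starts); the bin ranges below are made up:

      import bisect

      # (low, high, contents) triples, sorted by their lower bound.
      bins = [(1, 4, []), (5, 10, []), (11, 20, [])]
      lows = [b[0] for b in bins]               # the keys we binary-search on

      for n in [3, 7, 15, 25]:
          i = bisect.bisect_right(lows, n) - 1  # index of the last bin whose low <= n
          if i >= 0 and n <= bins[i][1]:        # upper-bound check is still needed
              bins[i][2].append(n)

      print([b[2] for b in bins])               # [[3], [7], [15]]; 25 falls outside every bin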

    Read the article

  • Centering Divisions Around Zero

    - by Mark
    I'm trying to create something that sort of resembles a histogram. I'm trying to create buckets from an array. Suppose I have a random array of doubles between -10 and 10; this is very simplified. I then want to specify a center point, in this case 0, and the number of buckets. If I want 4 buckets the division would be -10 to -5, -5 to 0, 0 to 5 and 5 to 10. Not that complicated, right? Now if I change the min and max to -12 and 9 and ask for 4 divisions it's more complicated. I either want a division at -3 and 3, centered around 0, or one at -6 to 0 and 0 to 6. It's not that hard to find the division size = Math.Ceiling((Abs(Max) + Abs(Min)) / Divisions) Then you would basically have an if statement to determine whether you want it centered on 0 or on an edge. You then iterate out from either 0 or DivisionSize/2 depending on the situation. You may not ALWAYS end up with the specified number of divisions but it will be close. Then you iterate through the array and increment the bin count. Does this seem like a good way to go about this? This method would surely work but it does not seem to be the most elegant. I'm curious as to whether the creation of the bins and the counting from the list could be done in a clever class with LINQ in a more elegant way? Something like creating the bins and then having each bin be a property {get;} that returns list.Count(x=> x >= Lower && x < Upper).
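
    A quick sketch of the described approach in Python (the question is C#/LINQ, but the arithmetic is identical): pick the division size, grow the edges outward from zero (or from half a division either side of zero), then count values per bucket. The data and division count below are made up.

      import math

      def make_edges(lo, hi, divisions, center_on_zero=False):
          size = math.ceil((abs(lo) + abs(hi)) / divisions)
          edges = [-size / 2 if center_on_zero else 0]
          while edges[-1] < hi:                  # grow upward until the max is covered
              edges.append(edges[-1] + size)
          while edges[0] > lo:                   # grow downward until the min is covered
              edges.insert(0, edges[0] - size)
          return edges

      data = [-11.2, -7.5, -3.1, 0.4, 2.2, 6.8, 8.9]
      edges = make_edges(min(data), max(data), divisions=4)
      counts = [sum(1 for x in data if lo <= x < hi) for lo, hi in zip(edges, edges[1:])]
      print(edges, counts)                       # [-12, -6, 0, 6, 12] [2, 1, 2, 2]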

    Read the article

  • How to draw histogram in Processing

    - by theolc
    I have an arrayList and I would like to take the size of the ten arrayList elements and use the sizes to create a histogram. I understand Processing's coordinate system starts in the top-left corner. My question is, how do I use the arrayList.size() values to start at 450 (based on a 500x500 window) and go upwards from there to create my histogram? The following is my function for the histogram; it receives the arrayList called bins as a parameter. void histogram(ArrayList[] bins) { //set window background(0,0,0); size(500,500); background(255,255,255); line(50,0,50,500); line(0,450,500,450); int i; for (i = 50; i <= 500; i+=45) { line(i,450,i,480); } for (i = 50; i <= 450; i+=45) { line(50,i,20,i); } } Thanks in advance for any help and input!
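
    Not Processing code, but a sketch of the coordinate arithmetic the question is about: with the origin in the top-left corner, a bar that grows upward from the baseline at y = 450 is a rectangle whose top edge sits at 450 minus its height. The bin sizes and scale below are invented.

      bin_sizes = [3, 7, 12, 5, 9, 1, 14, 6, 2, 8]    # hypothetical arrayList sizes
      baseline_y = 450   # the x-axis drawn at y = 450
      axis_x = 50        # the y-axis drawn at x = 50
      bar_width = 45
      scale = 25         # pixels per element; chosen so the tallest bar stays on screen

      for i, size in enumerate(bin_sizes):
          h = size * scale
          x = axis_x + i * bar_width
          y = baseline_y - h                          # top-left corner of the bar
          print(f"rect({x}, {y}, {bar_width}, {h})")  # the call to make in draw()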

    Read the article

  • Efficient organization of spare cables and hardware

    - by Jake Wharton
    As many of you also likely do, I have a growing collection of cables, hardware, and spare parts (screws, connectors, etc.). I'm looking to find a good system of organization so that everything isn't a tangled mess, mismatched, and potentially able to be damaged. Since the three things listed above all have varying sizes and degrees of delicacy, this poses an interesting problem. Presently I have those cheap plastic storage bins you find at Wal-mart for everything. Cables that were once wrapped neatly have become tangled due to numerous "I know I have a cable for this" moments. Hardware is mixed in other bins with odds and ends with no protection from each other. NICs, CPUs, and HDDs are all interacting and likely causing damage. Finally, there are stray parts sprinkled amongst these two, both in plastic bags and loose. I'm looking to unify this storage into a controlled chaos. Here are my thoughts: Odds and ends are the easiest. Screws, connectors, and small electronic parts lend themselves perfectly to tackle boxes and jewelry boxes. Since these are usually dynamically compartmentalized I can adjust for the contents and label them on the outside or inside of the lid. Cables are easily wrangled with short velcro strips but that doesn't stop them from being all mixed in together. Hardware is the worst offender. Size, shape, and degree of delicacy changes with nearly every piece. I'm willing to sacrifice a bit of organization for the sake of efficiency. What are all your thoughts? What is the best type of tackle or jewelry box to use? Most of them are cheap and flimsy. Is there a better alternative? How can I organize cables to know exactly (within reason) where one is? What about associating cables with hardware (wall adapter to router, etc.)? What kind of storage unit lends itself to all shapes of hardware? Do I need to separate by size or degree of delicacy for better organization?

    Read the article

  • How do you change data from a QR code on a scanner [on hold]

    - by Malcolm Eaton
    I have a problem now with the QR bar codes on the Wheelie Bins we deliver. The scan was giving us the following: RL0313550. Now, due to some changes at the manufacturing plant, they have had to add more data, as follows: 1234567891,RL031550. We only need the "RL031550" - can anyone let me know how to fix this? We use an Intermec CN50 device with a 2D imager fitted and were hoping to fix this within the device settings.

    Read the article

  • /dev/sda1 at 100% - MySQL to blame?

    - by SJP
    I have an API running that receives raw binaries, processes them, and then stores metadata about the bins in a MySQL database. I have been running it for a couple of days on a VM. Today the API stopped processing the MySQL commands. After running df -h, the results were:
    root@mwdb1:/# df -h
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sda1       104G   99G     0 100% /
    udev             16G  4.0K   16G   1% /dev
    tmpfs           6.3G  364K  6.3G   1% /run
    none            5.0M     0  5.0M   0% /run/lock
    none             16G     0   16G   0% /run/shm
    /dev/sdb1       5.5T   42G  5.2T   1% /data
    sda1 is at 100%

    Read the article

  • Cepstral Analysis for pitch detection

    - by Ohmu
    Hi! I'm looking to extract pitches from a sound signal. Someone on IRC just explained to me how taking a double FFT achieves this. Specifically: take the FFT; take the log of the square of the absolute value (can be done with a lookup table); take another FFT; take the absolute value. I am attempting this using vDSP. I can't understand how I didn't come across this technique earlier - I did a lot of hunting and asking questions, several weeks' worth. More to the point, I can't understand why I didn't think of it. I am attempting to achieve this with the vDSP library; it looks as though it has functions to handle all of these tasks. However, I'm wondering about the accuracy of the final result. I have previously used a technique which scours the frequency bins of a single FFT for local maxima. When it encounters one, it uses a cunning technique (the change in phase since the last FFT) to more accurately place the actual peak within the bin. I am worried that this precision will be lost with the technique I'm presenting here. I guess the technique could be used after the second FFT to get the fundamental accurately, but it kind of looks like the information is lost in step 2. As this is a potentially tricky process, could someone with some experience just look over what I'm doing and check it for sanity? Also, I've heard there is an alternative technique involving fitting a quadratic over neighbouring bins. Is this of comparable accuracy? If so, I would favour it, as it doesn't involve remembering bin phases. So, questions: Does this approach make sense? Can it be improved? I'm a bit worried about the log-square component; there seems to be a vDSP function to do exactly that: vDSP_vdbcon. However, there is no indication it precalculates a log table -- I assume it doesn't, as the FFT function requires an explicit pre-calculation function to be called and passed into it, and this function doesn't. Is there some danger of harmonics being picked up? Is there any cunning way of making vDSP pull out the maxima, biggest first? Can anyone point me towards some research or literature on this technique? The main question: is it accurate enough? Can the accuracy be improved? I have just been told by an expert that the accuracy IS INDEED not sufficient. Is this the end of the line? Pi. PS: I get SO annoyed (npi) when I want to create tags, but cannot. :| I have suggested to the maintainers that SO keep track of attempted tags, but I'm sure I was ignored. We need tags for vDSP, the Accelerate framework, and cepstral analysis.
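
    Not vDSP, but a NumPy sketch of the four steps as listed (FFT, log of the squared magnitude, transform again, magnitude), just to make the pipeline concrete on a synthetic 220 Hz tone with harmonics; the peak quefrency corresponds to the fundamental period in samples.

      import numpy as np

      fs, N = 44100, 4096
      t = np.arange(N) / fs
      x = sum(np.sin(2 * np.pi * 220 * k * t) / k for k in range(1, 11))  # 220 Hz plus harmonics

      spectrum = np.fft.fft(x * np.hanning(N))
      log_power = np.log(np.abs(spectrum) ** 2 + 1e-10)   # the "log of square of abs" step
      cepstrum = np.abs(np.fft.ifft(log_power))

      q = np.argmax(cepstrum[50:N // 2]) + 50              # skip the low-quefrency envelope
      print("estimated pitch: %.1f Hz" % (fs / q))         # should land near 220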

    Read the article

  • Using pg_connect() with wamp server and postgresql

    - by northlandiguana
    Help! I am trying to connect to a Postgres database and can't get the server to connect. When I execute this PHP script: $conn = pg_connect("dbname=wikimap user=postgres password=postgis host=localhost port=54321"); if (!$conn) { echo "Not connected : " . pg_error(); exit; } I get this error: <b>Fatal error</b>: Call to undefined function pg_connect() in <b>C:\wamp\www\wikimap\php\pgis.php</b> on line <b>33</b><br /> I have made sure the php_pgsql and php_pdo_pgsql extensions are enabled in the WAMP menu and php.ini, and I've read through other topics in this forum and others about connecting WAMP to Postgres, messing with the httpd.config file and php.ini file and copying libpq.dll between bins, all to no avail. I've been working on this for hours and can't figure out how to get pg_connect to work. Any ideas???

    Read the article

  • Keeping my zsh or bash profile synced up on all my machines.

    - by Joseph Silvashy
    I work on several different machines, all of which are *nix. I have a lot of specific things I like my shell to do, how I like the prompt to look, aliases, etc. I'm sure all of you folks deal with this as well. What do you think is the best way to keep all my machines' shells acting the same? First off, I'm aware that different machines will need different paths to bins and other differences, so my first inclination is to just include a file at the end of my profile; this is the one that we'll keep in sync. What is the best way to keep files synced up? I can put the file on a remote system and perhaps use git to push, then pull, my changes every once in a while. However, isn't rsync better suited for this?

    Read the article

  • Running an app remotely from a shared folder with PsExec

    - by Stephane
    I am actually not sure that this is possible. Let's see: I have a script that runs on a Build server. Let's name this server A. It drops the bins to a shared folder on server B. And I want to run the program on server C. So using caspol I can allow the executable to be run remotely. That means from B I can run \\C\shared\my.exe. What I want to do is, from A, run \\C\shared\my.exe on B. SysInternals\PsExec.exe -u username -p password -accepteula \\ServerC -i 0 -d -w \\ServerB\Nightly\Server \\ServerB\Nightly\Server\server.exe The user has all the necessary rights. But the -w (working directory) option apparently wants a path relative to the server I point to. Any idea?

    Read the article

  • Should I format USB sticks and SD cards to FAT, FAT32, exFAT or NTFS? (Windows files, live Linux distros)

    - by superuser
    Does it depend on the media size which one to choose, or on some other parameters? In Windows 7, FAT16 is the default. In pendrivelinux.com's Universal USB Installer, it's FAT32. Which one to choose? How about NTFS for Windows use? How about exFAT? It is the Microsoft-designed filesystem for removable media. Is there a difference between USB sticks and SD cards in this regard? Edit: Seeing developments in the other thread, should I still use something like exFAT if I don't want Recycle Bins created on every single machine I plug my USB thumb drive in?

    Read the article

  • Inside Amazon’s Warehouses

    - by Jason Fitzpatrick
    If you’re expecting the inside of Amazon’s warehouses to be some sort of rigidly organized robot-filled warehouse of tomorrow, you’ll be quite surprised to find that the storage technique they employ is called “chaotic storage”. International Business Times paid a visit to a major Amazon warehouse and took a tour. Rather than finding robots, they found: Amazon must rely on barcodes and human hands to find the ordered items and drop them into the proper bins — without robots, Amazon utilizes a system known as “chaotic storage,” where products are essentially shelved at random. By storing items randomly instead of categorically, the warehouse has a much better flow of material. Even without robots or automation, Amazon can compile a “picking list” where each item needs to be taken off the shelf and scanned again before it can be shipped. The real advantage to chaotic storage is that it’s significantly more flexible than conventional storage systems. If there are big changes in a product range, the company doesn’t need to plan for more space, because the products or their sales volumes don’t need to be known or planned in advance if they’re simply being stored at random.

    Read the article

  • How can I make multiple displays work on my Asus UX32VD?

    - by oKtosiTe
    Original title: Why do I have two trash icons in the Unity Launcher? Whether I run Ubuntu as a live-USB or install it, I always have two trash bins on the Unity Launcher. Both work, and both open the same location. This seems a bit redundant; what could be done about it? Update: Turning auto-hide on made it obvious that I have multiple Launchers showing. With auto-hide off, they simply overlap, making it look like there's a double trash icon, but with auto-hide enabled, I can display one Launcher (and therefore one trash icon) at a time. Still, two are running simultaneously. Second update: This problem appears to be caused by the way Ubuntu handles multiple displays on my Asus UX32VD Ultrabook. Somehow, the laptop display cannot be used while my external display is connected. It is shown in the Displays list, but remains black no matter how I configure it. The external display runs at 1920x1200, the laptop monitor should run at 1920x1080. It therefore becomes obvious that the Launcher that's supposed to run on the laptop display, is actually displayed on the external monitor. Using nomodeset as a kernel parameter as indicated here makes the laptop display inaccessible altogether, detecting the external monitor as the laptop display and making resolutions other than 1920x1200 inaccessible. That is not an option.

    Read the article

  • Logarithmic spacing of FFT subbands

    - by Mykel Stone
    I'm trying to do the examples within the GameDev.net Beat Detection article ( http://archive.gamedev.net/archive/reference/programming/features/beatdetection/index.html ). I have no issue with performing an FFT and getting the frequency data and doing most of the article. I'm running into trouble, though, in section 2.B, Enhancements and beat decision factors. In this section the author gives 3 equations, numbered R10-R12, to be used to determine how many bins go into each subband: R10 - Linear increase of the width of the subband with its index; R11 - We can choose for example the width of the first subband; R12 - The sum of all the widths must not exceed 1024. He says the following in the article: "Once you have equations (R11) and (R12) it is fairly easy to extract 'a' and 'b', and thus to find the law of the 'wi'. This calculus of 'a' and 'b' must be made manually and 'a' and 'b' defined as constants in the source; indeed they do not vary during the song." However, I cannot seem to understand how these values are calculated... I'm probably missing something simple, but learning Fourier analysis in a couple of weeks has left me Decimated-in-Mind and I cannot seem to see it.
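
    For what it's worth, a sketch of the algebra under the usual reading of R10-R12 (assuming a linear law w_i = a*i + b, a freely chosen first width w1, and widths summing to the 1024 bins): R11 gives b = w1 - a, and substituting into R12 gives a = (1024 - N*w1) / (N*(N-1)/2). The N and w1 below are made-up values.

      N = 32        # hypothetical number of subbands
      w1 = 2        # hypothetical width chosen for the first subband (R11)
      total = 1024  # total number of FFT bins (R12)

      # R11: a*1 + b = w1  ->  b = w1 - a
      # R12: a*N*(N+1)/2 + b*N = total; substitute b and solve for a.
      a = (total - N * w1) / (N * (N - 1) / 2)
      b = w1 - a
      widths = [a * i + b for i in range(1, N + 1)]
      print(a, b, sum(widths))  # the widths sum to 1024 before any rounding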

    Read the article

  • CUDA: accumulate data into a large histogram of floats

    - by shoosh
    I'm trying to think of a way to implement the following algorithm using CUDA: Working on a large volume of voxels, for each voxel I calculate an index i and a value c. After the calculation I need to perform histogram[i] += c. c is a float value and the histogram can have up to 15,000 bins. I'm looking for a way to implement this efficiently using CUDA. The first obvious problem is that with compute capability 1.3, which is what I'm using, I can't even do an atomicAdd() of floats, so how can I accumulate anything reliably? This example by NVIDIA does something somewhat simpler. The histograms are saved in shared memory (which I can't do due to its size) and it only accumulates integers. Can this approach be generalized to my case?

    Read the article

  • Dynamically building and updating Histograms with JFreeChart

    - by job
    I've got a stream of incoming data that I would like to plot using a simple histogram. I don't know the range of values, or the proper resolution or bin width to use for the histogram. SimpleHistogramDataset provides some of this functionality, but I don't want to have to deal with catching exceptions in order to add new bins if the new value isn't covered. In addition, it doesn't easily allow me to rebuild the histogram using a different bin width (perhaps integer multiples of some initial set width). Is there an easy way to accomplish this with JFreeChart or some alternate charting library, or am I going to have to write my own class here?
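
    Not a JFreeChart answer, but a sketch of bookkeeping that sidesteps both problems: key counts by bin index so any new value gets a bin without exceptions, and rebin by regrouping indices when the new width is an integer multiple of the old one. Class and method names here are made up.

      from collections import Counter
      import math

      class DynamicHistogram:
          def __init__(self, width):
              self.width = width
              self.counts = Counter()

          def add(self, value):
              self.counts[math.floor(value / self.width)] += 1   # no pre-declared bins needed

          def rebinned(self, factor):
              # New histogram with bin width factor * width, built by regrouping indices.
              h = DynamicHistogram(self.width * factor)
              for idx, n in self.counts.items():
                  h.counts[math.floor(idx / factor)] += n
              return h

      h = DynamicHistogram(0.5)
      for x in (0.1, 0.4, 0.9, 3.2, -1.7):
          h.add(x)
      print(h.rebinned(2).counts)   # Counter({0: 3, 3: 1, -2: 1})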

    Read the article

  • Python virtualenv conflicting

    - by Fernando
    I'm trying to learn Django, so I started by reading about virtualenv. After installing it with pip (sudo pip install virtualenv), I end up with two virtualenv paths: virtualenv at /usr/local/bin/virtualenv and virtualenv-2.7 at /usr/local/bin/virtualenv-2.7. If I use virtualenv-2.7 it seems to work fine, but if I use virtualenv, new modules get added to /usr/local/bin instead of being inside the environment. Example:
    cd ~
    virtualenv v1
    source v1/bin/activate
    easy_install yolk
    which yolk # /usr/local/bin
    If I use virtualenv-2.7, yolk gets installed correctly inside v1. Did I mess up the installation? How can I fix this? (maybe uninstall virtualenv and start over). Thanks for any help! Edit: I figured out I have two easy_install bins, /usr/bin/easy_install-2.7 and /usr/bin/easy_install. easy_install --version gives distribute 0.6.24dev-r0, and easy_install-2.7 --version gives distribute 0.6.24dev-r0, so this may be the cause of the problems. More info: Python version 2.7.3, virtualenv version 1.10.1.

    Read the article

  • How can I convert docx or WordML XML files to XSL-FO?

    - by Jon Pastore
    I've been looking for a method to convert docx or WordML XML to XSL-FO. I read this post: http://stackoverflow.com/questions/156683/what-is-the-best-xslt-engine-for-perl but I'm having exceptional problems getting Apache FOP going. I was able to download the bins and run it locally, but the formatting was a little off and it didn't maintain the headers and footers, or section 1 or section 3 (17-page doc, 3 sections); it also overlapped the text over the outline numbers and did not maintain the font used. Trying a simpler test caused FOP to fail completely. I would like to find a way to create a PDF that is at least close to a 100% accurate reproduction of the original doc.

    Read the article

  • NHibernate Unique Constraint on Name and Parent Object fails because NH inserts Null

    - by James
    Hi, I have objects as follows:
    Public Class Bin
        Public Property Id As Integer
        Public Property Name As String
        Public Property Store As Store
    End Class
    Public Class Store
        Public Property Id As Integer
        Public Property Bins As IEnumerable(Of Bin)
    End Class
    I have a unique constraint in the database on Bin.Name and BinStoreID to ensure unique names within stores. However, when NHibernate persists the store, it first inserts the Bin records with a null StoreID before performing an update later to set the correct StoreID. This violates the unique key if I persist two stores with a Bin of the same name, because the Name columns are the same and the StoreID is null for both. Is there something I can add to the mapping to ensure that the correct StoreID is included in the INSERT rather than performing an update later? We are using HiLo identity generation, so we are not relying on DB-generated identity columns. Thanks, James

    Read the article

  • What's the big difference between those two binary files?

    - by Lela Dax
    These are two files (contained in the tar.bz2) that were generated using a just-in-time compiler for a game engine. The generated code in ui-linux.bin is from an x86_64 gcc compiler, and ui-windows.bin is from the same brand of compiler but targeting win x86_64 (mingw-w64). I've attempted to debug a problem that occurs only on the Windows version and I stumbled upon what seems to be different end-binary code. However, the input assembly code was virtually identical (the only difference being pointer representations as int). (There's theoretically no winabi/unixabi conflict since that's taken care of by an attribute flag on certain declarations involved.) Any idea what it might be that makes these two binary codes different? The C for the mini-compiler and the base assembly producing it appears compatible at first glance. http://www0.org/vm/bins.tar.bz2

    Read the article

  • how to subtract numbers from levels

    - by romunov
    Dear SOFers, I would like to cut a vector of values ranging from 0 to 70 into x categories, and would like the upper limit of each category. So far, I have tried this using cut() and am trying to extract the limits from the levels. I have a list of levels, from which I would like to extract the second number of each level. How can I extract the value between the space and "]" (which is the number I'm interested in)? I have:
    > levels(bins)
     [1] "(-0.07,6.94]" "(6.94,14]"   "(14,21]"   "(21,28]"   "(28,35]"
     [6] "(35,42]"      "(42,49]"     "(49,56]"   "(56,63.1]" "(63.1,70.1]"
    and would like to get:
    [1] 6.94 14 21 28 35 42 49 56 63.1 70.1
    Or is there a better way of calculating the upper bounds of the categories?
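
    The question is about R, but the string handling is the same in any language: grab the number sitting just before the closing "]". A small Python sketch of that extraction (level strings copied from the output above):

      import re

      levels = ["(-0.07,6.94]", "(6.94,14]", "(14,21]", "(63.1,70.1]"]
      upper = [float(re.search(r",\s*([-0-9.]+)\]$", s).group(1)) for s in levels]
      print(upper)   # [6.94, 14.0, 21.0, 70.1]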

    Read the article

  • Binning into timeslots - Is there a better way than using list comp?

    - by flyingcrab
    I have a dataset of events (tweets to be specific) that I am trying to bin / discretize. The following code seems to work fine so far (assuming 100 bins):
    HOUR = timedelta(hours=1)
    start = datetime.datetime(2009,01,01)
    z = [dt + x*HOUR for x in xrange(1, 100)]
    But then, I came across this fateful line in the Python docs: 'This makes possible an idiom for clustering a data series into n-length groups using zip(*[iter(s)]*n)'. The zip idiom does indeed work - but I can't understand how (what is the * operator, for instance?). How could I use it to make my code prettier? I'm guessing this means I should make a generator / iterable for time that yields the time in graduations of an HOUR?
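
    A short illustration of the zip(*[iter(s)]*n) idiom the docs mention: [iter(s)] * n repeats the same iterator n times, so each output tuple pulls n consecutive items, chunking s into n-length groups (any trailing remainder is dropped).

      s = list(range(10))
      n = 3
      it = iter(s)
      chunks = list(zip(*[it] * n))   # the same iterator object repeated, not n separate copies
      print(chunks)                   # [(0, 1, 2), (3, 4, 5), (6, 7, 8)]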

    Read the article

  • Algorithm design: can you provide a solution to the multiple knapsack problem?

    - by MalcomTucker
    I am looking for a pseudo-code solution to what is effectively the Multiple Knapsack Problem (the optimisation statement is halfway down the page). I think this problem is NP-complete, so the solution doesn't need to be optimal; rather, if it is fairly efficient and easily implemented, that would be good. The problem is this: I have many work items, each taking a different (but fixed and known) amount of time to complete. I need to divide these work items into groups so as to have the smallest number of groups (ideally), with each group of work items taking no longer than a given total threshold - say 1 hour. I am flexible about the threshold - it doesn't need to be rigidly applied, though it should be close. My idea was to allocate work items into bins where each bin represents 90% of the threshold, 80%, 70% and so on. I could then match items that take 90% to those that take 10%, and so on. Any better ideas?
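
    A pseudo-code-ish sketch (in Python) of first-fit decreasing, a standard heuristic for exactly this kind of grouping: sort items longest first and drop each into the first group that still has room under the threshold. It is not optimal, but it is simple and usually lands close to the minimum number of groups. The durations and threshold below are invented.

      def pack(durations, threshold):
          groups = []                          # each group: [remaining_capacity, items]
          for d in sorted(durations, reverse=True):
              for g in groups:
                  if g[0] >= d:                # first group with enough room left
                      g[0] -= d
                      g[1].append(d)
                      break
              else:                            # no existing group fits: open a new one
                  groups.append([threshold - d, [d]])
          return [items for _, items in groups]

      print(pack([50, 40, 30, 25, 20, 15, 10], threshold=60))
      # [[50, 10], [40, 20], [30, 25], [15]]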

    Read the article

< Previous Page | 1 2 3 4  | Next Page >