Search Results

Search found 239 results on 10 pages for 'plotting'.

Page 8/10

  • Statistics Question: Kernel Smoothing in R

    - by James Thompson
    I have data of this form: x y 1 0.19 2 0.26 3 0.40 4 0.58 5 0.59 6 1.24 7 0.68 8 0.60 9 1.12 10 0.80 11 1.20 12 1.17 13 0.39 I'm currently plotting a kernel-smoothed estimate of y against x using this code: smoothed = ksmooth( d$resi, d$score, bandwidth = 6 ) plot( smoothed ) I simply want a plot of the x values versus the smoothed y values. However, the documentation for ksmooth suggests that this isn't the best kernel smoother available: "This function is implemented purely for compatibility with S, although it is nowhere near as slow as the S function. Better kernel smoothers are available in other packages." Which kernel smoothers are better, and where can they be found?
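
    For reference, ksmooth is a Nadaraya-Watson kernel regression, and answers to this kind of question usually point to local-polynomial or loess-style smoothers in other packages. The same estimator can be written in a few lines; a minimal sketch in Python/NumPy using the data above, with an assumed Gaussian kernel and an arbitrarily chosen bandwidth:

        import numpy as np

        def kernel_smooth(x, y, grid, bandwidth):
            # Nadaraya-Watson estimator with a Gaussian kernel -- the same idea
            # as R's ksmooth(kernel = "normal"); bandwidth here is ad hoc.
            weights = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / bandwidth) ** 2)
            return (weights @ y) / weights.sum(axis=1)

        x = np.arange(1, 14, dtype=float)
        y = np.array([0.19, 0.26, 0.40, 0.58, 0.59, 1.24, 0.68,
                      0.60, 1.12, 0.80, 1.20, 1.17, 0.39])
        grid = np.linspace(1, 13, 100)
        smoothed = kernel_smooth(x, y, grid, bandwidth=2.0)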

    Read the article

  • Implementing a 'many-to-many' database

    - by Raven Dreamer
    Greetings, Stack Overflow. In my database, I already have one table, 'contacts', that contains records of individual people. I also have several other tables in my database which represent "skill sets" and contain records denoting a particular skill. 1) Am I correct in plotting this as a "many-to-many" relationship? (Each contact can have multiple skill sets, and each skill set can belong to multiple contacts.) 2) I'm new to databases -- do I want to link the tables? 3) Is there a way to implement this in my program (C# + Windows Forms) such that, for any given record in the 'contacts' table, either the names of all associated 'skill set' tables or all the 'skill' records associated with the 'contact' record could be retrieved? (The database is located on SQL Server Express 2008.)
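
    The standard answer to a many-to-many relationship is a junction (linking) table between contacts and skills. A minimal sketch of that schema and the lookup query, shown with Python's sqlite3 only for brevity; the table and column names are hypothetical, and the same SQL shape applies on SQL Server Express:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE contacts (contact_id INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE skills   (skill_id   INTEGER PRIMARY KEY, name TEXT);
            -- Junction table: one row per (contact, skill) pairing.
            CREATE TABLE contact_skills (
                contact_id INTEGER REFERENCES contacts(contact_id),
                skill_id   INTEGER REFERENCES skills(skill_id),
                PRIMARY KEY (contact_id, skill_id)
            );
        """)

        # All skills for a given contact: join through the junction table.
        skills_for_contact = conn.execute("""
            SELECT s.name
            FROM skills s
            JOIN contact_skills cs ON cs.skill_id = s.skill_id
            WHERE cs.contact_id = ?
        """, (1,)).fetchall()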

    Read the article

  • Determine outer boundaries of polygon from lat/lng point array

    - by DustinDavis
    I have a large array of lat/lng points. There could be up to 20k points. I'm plotting them using KML. What I want to do is take only the outermost points and use them to draw a polygon instead. I already know how to draw a polygon in KML; I just need to figure out how to select only the outermost points of the group. Any ideas? I'd like to have at least 5 points in the polygon but no more than 25 points total. So far I've come up with checking for the topmost and bottommost points (basically creating a square) using less-than and greater-than logic. The points will be in America and Canada only, if that matters. Thanks for any help.
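
    What this describes is the convex hull of the point set. A minimal sketch using SciPy; the coordinate array here is made up for illustration:

        import numpy as np
        from scipy.spatial import ConvexHull

        # Hypothetical (longitude, latitude) pairs covering the US and Canada.
        points = np.column_stack([
            np.random.uniform(-130, -60, 20000),   # longitudes
            np.random.uniform(25, 60, 20000),      # latitudes
        ])

        hull = ConvexHull(points)
        boundary = points[hull.vertices]           # outermost points, in ring order

        # Thin the ring evenly if more than 25 vertices come back.
        if len(boundary) > 25:
            keep = np.linspace(0, len(boundary) - 1, 25).astype(int)
            boundary = boundary[keep]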

    Read the article

  • misleading plot issue in mathematica

    - by Qiang Li
    I want to study some "strange" functions by plotting them out in Mathematica. One example is the following: mod2[x_] := Which[Mod[x, 2] >= 1, -2 + Mod[x, 2], True, Mod[x, 2]]; f[x_] := Which[-1 <= x <= 1, Abs[x], True, Abs[mod2[x]]]; fn[x_, n_] := Sum[(3/4)^i*f[4^n*x], {i, 0, n}] Plot[{fn[x, 0], fn[x, 1], fn[x, 2], fn[x, 5]}, {x, -2, 2}] However, the plot I get is misleading, in the sense that the maxima and minima of fn[x, 5] should lie on the same two levels. But due to the high oscillation of the function, and the fact that Plot clearly samples only a limited number of points, the plot exhibits strange behavior. Is there any option in Plot to remedy this? Many thanks.
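
    The usual remedy for highly oscillatory functions is to force denser sampling (in Mathematica, Plot's PlotPoints and MaxRecursion options). As a sanity check, the same construction can be evaluated on an explicit dense grid; a sketch in Python/NumPy reproducing the definitions above, with the grid size chosen arbitrarily:

        import numpy as np
        import matplotlib.pyplot as plt

        def mod2(x):
            m = np.mod(x, 2)
            return np.where(m >= 1, m - 2, m)

        def f(x):
            return np.where(np.abs(x) <= 1, np.abs(x), np.abs(mod2(x)))

        def fn(x, n):
            return sum((3 / 4) ** i * f(4 ** n * x) for i in range(n + 1))

        # Dense, explicit grid so every oscillation of fn(x, 5) is sampled.
        x = np.linspace(-2, 2, 200001)
        for n in (0, 1, 2, 5):
            plt.plot(x, fn(x, n), label="n = %d" % n)
        plt.legend()
        plt.show()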

    Read the article

  • R Plot Specify number of time tickmarks

    - by Cookie
    I was wondering how I could plot more tick marks when plotting time on the x-axis. Basically, I want a time equivalent of pretty(). pretty() obviously doesn't work so well with times, as it uses factors of 1, 2, 5 and 10; for times one probably wants e.g. hours, half hours, and so on. plot(as.POSIXct(x,origin="1960-01-01"),y,type="l",xlab="Time") gives far too few, widely spaced tick marks. zoox<-zoo(y,as.POSIXct(stats$Time,origin="1960-01-01")) plot(zoox) gives the same. Thanks
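
    In R the usual route is to suppress the default axis and draw one with axis.POSIXct over a seq() of times at the spacing you want. For comparison, the matplotlib equivalent uses explicit date locators; the data below is invented:

        import datetime as dt
        import numpy as np
        import matplotlib.pyplot as plt
        import matplotlib.dates as mdates

        t0 = dt.datetime(1960, 1, 1)
        times = [t0 + dt.timedelta(minutes=5 * i) for i in range(72)]
        y = np.cumsum(np.random.randn(72))

        fig, ax = plt.subplots()
        ax.plot(times, y)
        ax.xaxis.set_major_locator(mdates.HourLocator())                    # tick every hour
        ax.xaxis.set_minor_locator(mdates.MinuteLocator(byminute=[0, 30]))  # half hours
        ax.xaxis.set_major_formatter(mdates.DateFormatter("%H:%M"))
        ax.set_xlabel("Time")
        plt.show()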

    Read the article

  • second y-axis on pcolor plot

    - by user1155751
    Is it possible to produce a pcolor plot with two y-axes? Consider the following example: clear all temp = 1 + (20-1).*rand(365,12); depth = 1:12; time =1:365; data2 = 1 + (60-1).*rand(12,1); time2 = [28,56,84,124,150,184,210,234,265,288,312,342]; figure; pcolor(time,depth,temp');axis ij; shading interp hold on plot(time2,data2,'w','linewidth',3); Instead of plotting the second dataset on the same y-axis, I would like it to be placed on its own y-axis. Is this possible?
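
    In MATLAB this is normally done by overlaying a second transparent axes (or, in newer releases, with yyaxis). For comparison, the same layout sketched with matplotlib's twinx(), using random stand-in data like the example above:

        import numpy as np
        import matplotlib.pyplot as plt

        time  = np.arange(1, 366)
        depth = np.arange(1, 13)
        temp  = 1 + 19 * np.random.rand(365, 12)
        time2 = np.array([28, 56, 84, 124, 150, 184, 210, 234, 265, 288, 312, 342])
        data2 = 1 + 59 * np.random.rand(12)

        fig, ax1 = plt.subplots()
        ax1.pcolormesh(time, depth, temp.T, shading="auto")
        ax1.invert_yaxis()                 # same effect as MATLAB's "axis ij"
        ax1.set_ylabel("depth")

        ax2 = ax1.twinx()                  # second y-axis sharing the same x-axis
        ax2.plot(time2, data2, "w", linewidth=3)
        ax2.set_ylabel("data2")
        plt.show()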

    Read the article

  • Gnuplot, yerrorbar, text as x axis

    - by The Flying Dutchman
    I have the following data: t4.8k 1.84 1.86 1.83 t5.8k 1.82 1.84 1.8 t7.10k 1.79 1.8 1.77 t8.8k 1.8 1.84 1.76 I need to plot this in gnuplot using yerrorbars. Column 1 is the dataset name, which should provide the x-axis labels; column 2 is the y mean; column 3 is the y max; column 4 is the y min. Here is the plotting code that I use: plot "chameleonConfidence.dat" using xtic(1):2:4:3 title "Ratio of Time Taken" with yerrorbars But this gives me the following error: Warning: empty x range [4.94066e-324:4.94066e-324], adjusting to [4.94066e-324:4.94066e-324] "chameleonConfidence.gplot", line 15: x_min should not equal x_max! Can someone help me with this?
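
    The usual gnuplot fix is to plot against the row number (column 0) and attach the names as tick labels, e.g. using 0:2:4:3:xtic(1) with yerrorbars, so that x has a real numeric range. The same idea expressed in matplotlib, with the four rows above hard-coded for illustration:

        import matplotlib.pyplot as plt

        names = ["t4.8k", "t5.8k", "t7.10k", "t8.8k"]
        ymean = [1.84, 1.82, 1.79, 1.80]
        ymax  = [1.86, 1.84, 1.80, 1.84]
        ymin  = [1.83, 1.80, 1.77, 1.76]

        x = list(range(len(names)))                          # numeric positions for the text labels
        yerr = [[m - lo for m, lo in zip(ymean, ymin)],      # distance down to the minimum
                [hi - m for hi, m in zip(ymax, ymean)]]      # distance up to the maximum

        fig, ax = plt.subplots()
        ax.errorbar(x, ymean, yerr=yerr, fmt="o", capsize=4, label="Ratio of Time Taken")
        ax.set_xticks(x)
        ax.set_xticklabels(names)
        ax.legend()
        plt.show()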

    Read the article

  • What is so great about Visual Studio?

    - by Paperflyer
    In my admittedly somewhat short time as a programmer, I have used many development environments on many platforms. Most notably, Eclipse/Linux, XCode/OSX, CLI/editor/Linux, VisualDSP/Blackfin/Windows and MSVC/Windows. (I used each one for several months.) There are neat features in pretty much all of them. But somehow, I just can't find any in MSVC. Then again, so many people really seem to like it, so I am probably missing something here. So please tell me: what is so great about Visual Studio? Things I like: Refactoring tools in Eclipse Build error highlighting in XCode and Eclipse Edit-all-in-Scope in XCode Profiler in XCode Flexibility of Eclipse and CLI/editor Data plotting in VisualDSP Things I don't like: Build error display in MSVC (not highlighted in code) Honestly, this is not meant to be a rant. Of course I am a Mac-head and biased as hell, but I have to use MSVC on the job, so I really want to like it.

    Read the article

  • Is there any way that I can quickly draw a chart with many series? [on hold]

    - by Maryam Bagheri
    I have a project that needs to draw a chart with many series quickly, and I must be able to shift the starting position of the drawing to wherever the user wants. The .NET Chart control is slow with lots of series. I wanted to use this library: http://www.codeproject.com/Articles/32836/A-simple-C-library-for-graph-plotting?msg=4109664#xx4109664xx but its loading speed becomes too slow as the number of series increases. When I plot the chart I must invalidate it for updates, and the drawing speed will be too slow if there are too many series. I need at least 900 series in a chart for this project. I wanted to use multiple layered series to decrease the invalidation rate, following this idea: http://www.codeproject.com/Articles/84833/Drawing-multiple-layers-without-flicker but I do not know whether it would be helpful. I tested the LightningChart control, but it could not help me.

    Read the article

  • Writing a custom wpf 'rolling' plot control (are there any components with such functionality?)

    - by adrin
    I need a WPF plot control that would 'roll' (scroll) as data is fed to it (instead of stretching, etc.). Unfortunately I didn't find a working component, so now I am considering writing my own control (its plotting features need to be simple). Has anyone written a custom plot/chart control? Is it difficult? What container should I use (Canvas and Viewbox?) so that it is reasonably fast, easy to write, and scales as the window is resized?

    Read the article

  • Make a simulation using python

    - by user3727759
    I am new to programming and to Python. What I am trying to do is create a simulation of a thermostat system using Python. Is there a way to create a program into which I can input data, for example temperature and humidity values, and have Python constantly plot the data as I enter the values? This is to simulate a device gathering data, sending it to this program, and having it plotted. I have found ways to plot data using matplotlib, but I have not found a way to input the data and have the plot update continuously. Thanks, any advice is appreciated.
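
    A minimal sketch of one way to do this with matplotlib's interactive mode, reading hypothetical temperature/humidity pairs from the keyboard and redrawing after each entry:

        import matplotlib.pyplot as plt

        plt.ion()                      # interactive mode: the figure stays responsive
        fig, ax = plt.subplots()
        temps, hums = [], []

        while True:
            raw = input("temperature,humidity (blank line to stop): ")
            if not raw.strip():
                break
            t, h = (float(v) for v in raw.split(","))
            temps.append(t)
            hums.append(h)

            ax.clear()
            ax.plot(temps, label="temperature")
            ax.plot(hums, label="humidity")
            ax.legend()
            plt.pause(0.01)            # let the GUI event loop redraw the figure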

    Read the article

  • rails summing column values of rows with similar attributes

    - by butterywombat
    Hi all, I have a Sites table that has the columns name and time. The name does not have to be unique, so for example I may have the entries 'hi.com, 5', 'hi.com, 10', 'bye.com, 4'. I would like to sum up the times for each unique site, so that I get 'hi.com, 15' and 'bye.com, 4' for plotting purposes. How can I do that? (For some reference, I was looking at http://railscasts.com/episodes/223-charts but I couldn't get the following, translated to my table, to work: def self.total_on(date) where("date(purchased_at) = ?", date).sum(:total_price) end Nor do I really understand the syntax of the 'where("date(purchased_at) = ?", date)' part.) Thanks for helping a Rails newbie!
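
    This is a group-by-and-sum; in ActiveRecord it is usually written as something like Site.group(:name).sum(:time). The same aggregation sketched with pandas, purely to illustrate the shape of the result:

        import pandas as pd

        # Stand-in rows for the Sites table.
        sites = pd.DataFrame({"name": ["hi.com", "hi.com", "bye.com"],
                              "time": [5, 10, 4]})

        totals = sites.groupby("name")["time"].sum()
        print(totals)
        # bye.com     4
        # hi.com     15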

    Read the article

  • NVidia with Optimus conflicting in Ubuntu 12.04

    - by Humannoise
    I have recently installed Ubuntu 12.04 on an Intel Ivy Bridge machine with integrated graphics and an NVIDIA GPU with Optimus tech; however, I can't get it to work properly. I have already tried the Bumblebee project's solution, but I get the following message when I try to run anything with the NVIDIA card (e.g. with optirun firefox): [ERROR]The Bumblebee daemon has not been started yet or the socket path /var/run/bumblebee.socket was incorrect. [ERROR]Could not connect to bumblebee daemon - is it running? Since the NVIDIA card is not working properly, some software like Scilab, which makes use of the X11 system for graphics handling and plotting, won't work either. My BIOS has no option concerning the graphics card, and the daemon log returned: Jul 5 16:10:51 humannoise-W251ESQ-W270ESQ bumblebeed[980]: Module 'nvidia' is not found. Jul 5 16:10:51 humannoise-W251ESQ-W270ESQ kernel: [ 17.943272] init: bumblebeed main process (980) terminated with status 1 Jul 5 16:10:51 humannoise-W251ESQ-W270ESQ kernel: [ 17.943288] init: bumblebeed main process ended, respawning Jul 5 16:10:51 humannoise-W251ESQ-W270ESQ bumblebeed[1026]: Module 'nvidia' is not found. lspci -nn | grep '\[030[02]\]:' returned: 00:02.0 VGA compatible controller [0300]: Intel Corporation Ivy Bridge Graphics Controller [8086:0166] (rev 09) 01:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:0de9] (rev a1) For the command dpkg -l | grep '^ii' | grep nvidia I got: ii bumblebee-nvidia 3.0-2~preciseppa1 nVidia Optimus support using the proprietary NVIDIA driver ii nvidia-current 302.17-0ubuntu1~precise~xup1 NVIDIA binary Xorg driver, kernel module and VDPAU library ii nvidia-current-updates 295.49-0ubuntu0.1 NVIDIA binary Xorg driver, kernel module and VDPAU library ii nvidia-settings 302.17-0ubuntu1~precise~xup3 Tool of configuring the NVIDIA graphics driver ii nvidia-settings-updates 295.33-0ubuntu1 Tool of configuring the NVIDIA graphics driver After a full reinstallation, including the removal of any previous NVIDIA driver, lsmod | grep -E 'nvidia|nouveau' returned: nvidia 10888310 46 and dmesg | grep -C3 -E 'nouveau|NVRM' returned things like: [ 1875.607283] nvidia 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 1875.607289] nvidia 0000:01:00.0: setting latency timer to 64 [ 1875.607293] vgaarb: device changed decodes: PCI:0000:01:00.0,olddecodes=io+mem,decodes=none:owns=none [ 1875.607363] NVRM: loading NVIDIA UNIX x86_64 Kernel Module 302.17 Tue Jun 12 16:03:22 PDT 2012 [ 1884.830035] nvidia 0000:01:00.0: PCI INT A disabled [ 1884.832058] bbswitch: disabling discrete graphics [ 1884.832960] bbswitch: Result of Optimus _DSM call: 09000019 Some programs, like Scilab, are now working fine when run under optirun (e.g. optirun scilab). Thank you.

    Read the article

  • How employable am I as a programmer?

    - by dsimcha
    I'm currently a Ph.D. student in Biomedical Engineering with a concentration in computational biology and am starting to think about what I want to do after graduate school. I feel like I've accumulated a lot of programming skills while in grad school, but taken a very non-traditional path to learning all this stuff. I'm wondering whether I would have an easy time getting hired as a programmer and could fall back on that if I can't find a good job directly in my field, and if so whether I would qualify for a more prestigious position than "code monkey". Things I Have Going For Me Approximately 4 years of experience programming as part of my research. I believe I have a solid enough grasp of the fundamentals that I could pick up new languages and technologies pretty fast, and could demonstrate this in an interview. Good math and statistics skills. An extensive portfolio of open source work (and the knowledge that working on these projects implies): I wrote a statistics library in D, mostly from scratch. I wrote a parallelism library (parallel map, reduce, foreach, task parallelism, pipelining, etc.) that is currently in review for adoption by the D standard library. I wrote a 2D plotting library for D against the GTK Cairo backend. I currently use it for most of the figures I make for my research. I've contributed several major performance optimizations to the D garbage collector. (Most of these were low-hanging fruit, but it still shows my knowledge of low-level issues like memory management, pointers and bit twiddling.) I've contributed lots of miscellaneous bug fixes to the D standard library and could show the change logs to prove it. (This demonstrates my ability to read other people's code.) Things I Have Going Against Me Most of my programming experience is in D and Python. I have very little to virtually no experience in the more established, "enterprise-y" languages like Java, C# and C++, though I have learned a decent amount about these languages from small, one-off projects and discussions about language design in the D community. In general I have absolutely no knowledge of "enterprise-y" technologies. I've never used a framework before, possibly because most reusable code for scientific work and for D tends to call itself a "library" instead. I have virtually no formal computer science/software engineering training. Almost all of my knowledge comes from talking to programming geek friends, reading blogs, forums, StackOverflow, etc. I have zero professional experience with the official title of "developer", "software engineer", or something similar.

    Read the article

  • Multi-threaded JOGL Problem

    - by moeabdol
    I'm writing a simple OpenGL application in Java that implements the Monte Carlo method for estimating the value of PI. The method is pretty easy. Simply, you draw a circle inside a unit square and then plot random points over the scene. Now, for each point that is inside the circle you increment the counter of 'in' points. After determining, for all the random points, whether they are inside the circle or not, you divide the number of 'in' points by the total number of points you have plotted, and multiply by 4 to get an estimate of PI. It goes something like this: PI = (inPoints / totalPoints) * 4. This is because mathematically the ratio of a circle's area to a square's area is PI/4, so when we multiply it by 4 we get PI. My problem doesn't lie in the algorithm itself; however, I'm having problems trying to plot the points as they are being generated instead of just plotting everything at once when the program finishes executing. I want to give the application a sense of real-time display where the user would see the points as they are being plotted. I'm a beginner at OpenGL and I'm pretty sure there is a multi-threading feature built into it. Nonetheless, I tried to manually create my own thread. Each worker thread plots one point at a time. Following is the pseudo-code: /* this part of the code exists in display() method in MyCanvas.java which extends GLCanvas and implements GLEventListener */ // main loop for(int i = 0; i < number_of_points; i++){ RandomGenerator random = new RandomGenerator(); float x = random.nextFloat(); float y = random.nextFloat(); Thread pointThread = new Thread(new PointThread(x, y)); } gl.glFlush(); /* this part of the code exists in run() method in PointThread.java which implements Runnable */ void run(){ try{ gl.glPushMatrix(); gl.glBegin(GL2.GL_POINTS); if(pointIsIn) gl.glColor3f(1.0f, 0.0f, 0.0f); // red point else gl.glColor3f(0.0f, 0.0f, 1.0f); // blue point gl.glVertex3f(x, y, 0.0f); // coordinates gl.glEnd(); gl.glPopMatrix(); }catch(Exception e){ } } I'm not sure if my approach to solving this issue is correct. I hope you guys can help me out. Thanks.
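
    Setting the graphics threading aside, the estimator itself is compact; a sketch of just the Monte Carlo part in Python, sampling the unit square and counting points that fall inside the quarter circle:

        import random

        def estimate_pi(total_points):
            inside = 0
            for _ in range(total_points):
                x, y = random.random(), random.random()   # random point in the unit square
                if x * x + y * y <= 1.0:                  # falls inside the quarter circle
                    inside += 1
            return 4.0 * inside / total_points            # the area ratio is pi/4

        print(estimate_pi(1_000_000))                     # roughly 3.14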

    Read the article

  • SQLAuthority News – The Best Quotes of “Who Wrote This?” Contest

    - by pinaldave
    I am a frequent reader of Brent Ozar PLF, it is one of my favorite blogs. A recent post announced a “Who Wrote This?” contest to see if readers could tell their three contributors apart based on some writing samples. Here are my favorite lines from the sample paragraphs, from each of the three “mystery authors.” Topic 1: Working with Bad Managers Mystery Author A – “Working with bad managers means working against my own happiness, and I’ve come to learn that there’s no changing bad managers.” I love this line because, as anyone who has had a bad manager knows, often a lot of self-doubt rises up. We all have to remember that sometimes the problem is out of our control. Mystery Author B – “Mentor your manager just like you would mentor a junior DBA.” Having a bad manager can be extremely depressing, and we often feel out of control. But we all need to remember that our work is a two-way street, and that sometimes we can subtly influence those above us. Mystery Author C – “The trick to working for all bad managers is to remember that they aren’t your parent. Take charge of your career.” We all also need to learn not to play the blame game. Would you rather stay in a place where you are unhappy, or would you rather take charge of your life? I hope most people would pick the latter. Topic 2: Working with Remote Teams Mystery Author A – “Like almost anything else the key is to make sure that everyone on the team has an understanding of how and when communication will occur.” Communication is so important. I cannot over emphasize how much. And this one line captures how I feel and even communicates the idea clearly! Mystery Author B – “The key to remote team success is verifiable trust: feeling confident that invisible team members are doing the right amount of the right thing at the right time.” I think this line not only captures the key aspects of remote work – verifiable work and trust – but there were so many lines that followed that I loved and could not fit here. The whole paragraph is a list for successful remote work. Everyone could benefit from reading it. Mystery Author C – “What seems clear, precise, and specific in one time zone comes across as vague, soupy, and just plain weird in another.” You know what? I just love this description. The author is right – sometimes vague e-mails really do seem soupy and weird! Topic 3: Working with Your Nemesis Mystery Author A – “Every job is temporary, but your reputation stays with you.” Everyone needs to remember this. The workplace is meant to be a professional arena, and many people have the opinion that work is temporary and disposable. No one wants to work with co-worker like that. Mystery Author B – “Unhealthy conflict is going to lead to leaving three week old tuna fish sandwiches in someone’s desk drawer.” Sometimes humor really is the best policy! Mystery Author C – “Oh no, it’s that guy.” This might seem like a weird phrase to choose as my favorite from an entire paragraph. But the whole piece was written in the form of a story of co-workers getting drunk and plotting against a nemesis. It was too funny to overlook, but too long to post here. A must read! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • How to Use Sparklines in Excel 2010

    - by DigitalGeekery
    One of the cool features of Excel 2010 is the addition of Sparklines. A Sparkline is basically a little chart displayed in a cell representing your selected data set that allows you to quickly and easily spot trends at a glance. Inserting Sparklines on your Spreadsheet You will find the Sparklines group located on the Insert tab. Select the cell or cells where you wish to display your Sparklines. Select the type of Sparkline you’d like to add to your spreadsheet. You’ll notice there are three types of Sparklines: Line, Column, and Win/Loss. We’ll select Line for our example. A Create Sparklines dialog pops up and will prompt you to enter the Data Range you are using to create the Sparklines. You’ll notice that the location range (the range where the Sparklines will appear) is already filled in. You can type in the data range manually, or click and drag with your mouse to select the data range. This will auto-fill the data range for you. Click OK when you are finished. You will see your Sparklines appear in the desired cells. Customizing Sparklines Select one or more of the Sparklines to reveal the Design tab. You can display certain value points like high and low points, negative points, and first and last points by selecting the corresponding options from the Show group. You can also mark all value points by selecting Markers. Select your desired Sparklines and click one of the included styles from the Style group on the Design tab. Click the down arrow on the lower right corner of the box to display additional pre-defined styles, or select the Sparkline Color or Marker Color options to fully customize your Sparklines. The Axis options provide additional settings such as Date Axis Type, Plotting Data Left to Right, and displaying an axis point to represent the zero line in your data with Show Axis. Column Sparklines Column Sparklines display your data in individual columns as opposed to the Line view we’ve been using for our examples. Win/Loss Sparklines Win/Loss shows a basic positive or negative representation of your data set. You can easily switch between different Sparkline types by simply selecting the current cells (individually or the entire group), and then clicking the desired type on the Design tab. For those that may be more visually oriented, Sparklines can be a wonderful addition to any spreadsheet. Are you just getting started with Office 2010? Check out some of our other great Excel posts such as how to copy worksheets, print only selected areas of a spreadsheet, and how to share data with Excel in Office 2010.

    Read the article

  • First-Time GLSL Shadow Mapping Problems

    - by Locke
    I'm working on building out a 2.5D engine and having massive problems getting my shadows working. I'm at a point where I'm VERY close. So, let's see a picture to see what I have: As you can see above, the image has lighting -- but the shadow map is displaying incorrectly. The shadow map is shown in the bottom left hand side of the screen as a normal 2D texture, so we can see what it looks like at any given time. If you notice, it appears that the shadows are generating backwards in the wrong direction -- I think. But the problem is a little more deep -- I'm just plotting the shadow onto the screen, which I know is wrong -- I'm ignoring the actual test to see if we NEED to show a shadow. The incoming parameters all appear to be correct -- so there has to be something wrong with my shader code somewhere. Here's what my code looks like: VERTEX: uniform mat4 LightModelViewProjectionMatrix; varying vec3 Normal; // The eye-space normal of the current vertex. varying vec4 LightCoordinate; // The texture coordinate of the light of the current vertex. varying vec3 LightDirection; // The eye-space direction of the light. void main() { Normal = normalize(gl_NormalMatrix * gl_Normal); LightDirection = normalize(gl_NormalMatrix * gl_LightSource[0].position.xyz); LightCoordinate = LightModelViewProjectionMatrix * gl_Vertex; LightCoordinate.xy = ( LightCoordinate.xy * 0.5 ) + 0.5; gl_Position = ftransform(); gl_TexCoord[0] = gl_MultiTexCoord0; } FRAGMENT: uniform sampler2D DiffuseMap; uniform sampler2D ShadowMap; varying vec3 Normal; // The eye-space normal of the current vertex. varying vec4 LightCoordinate; // The texture coordinate of the light of the current vertex. varying vec3 LightDirection; // The eye-space direction of the light. void main() { vec4 Texel = texture2D(DiffuseMap, vec2(gl_TexCoord[0])); // Directional lighting //Build ambient lighting vec4 AmbientElement = gl_LightSource[0].ambient; //Build diffuse lighting float Lambert = max(dot(Normal, LightDirection), 0.0); //max(abs(dot(Normal, LightDirection)), 0.0); vec4 DiffuseElement = ( gl_LightSource[0].diffuse * Lambert ); vec4 LightingColor = ( DiffuseElement + AmbientElement ); LightingColor.r = min(LightingColor.r, 1.0); LightingColor.g = min(LightingColor.g, 1.0); LightingColor.b = min(LightingColor.b, 1.0); LightingColor.a = min(LightingColor.a, 1.0); LightingColor *= Texel; //Everything up to this point is PERFECT // Shadow mapping // ------------------------------ vec4 ShadowCoordinate = LightCoordinate / LightCoordinate.w; float DistanceFromLight = texture2D( ShadowMap, ShadowCoordinate.st ).z; float DepthBias = 0.001; float ShadowFactor = 1.0; if( LightCoordinate.w > 0.0 ) { ShadowFactor = DistanceFromLight < ( ShadowCoordinate.z + DepthBias ) ? 0.5 : 1.0; } LightingColor.rgb *= ShadowFactor; //gl_FragColor = LightingColor; //Yes, I know this is wrong, but the line above (gl_FragColor = LightingColor;) produces the wrong effect gl_FragColor = LightingColor * texture2D( ShadowMap, ShadowCoordinate.st ); } I wanted to make sure the coordinates were correct for the shadow map -- so that's why you see it applied to the image as it is below. But the depth for each point seems to be wrong -- the shadows SHOULD be opposite (look at how the image is -- the shaded areas from normal lighting are facing the opposite direction of the shadows). Maybe my matrices are bad or something going in? They're isolated and appear to be correct -- nothing else is going in unusual. 
When I view from the light's view and get the MVP matrices for it, they're correct. EDIT: Added an image so you can see what happens when I do the correct command at the end of the GLSL: That's the image when the last line is just glFragColor = LightingColor; Maybe someone has some idea of what I screwed up?

    Read the article

  • Future Of F# At Jazoon 2011

    - by Alois Kraus
    I was at the Jazoon 2011 in Zurich (Switzerland). It was a really cool event and it had many top-notch speakers, not only from the Microsoft universe. One of the most interesting talks was from Don Syme with the title: F# Today/F# Tomorrow. He showed how to use F# scripting to browse through open databases, OData web services, SharePoint, … interactively. It looked really easy with the help of F# Type Providers, which are the next big language feature in a future F# version. The object returned by a Type Provider is used to access the data like a usual strongly typed object model. No guessing how a property of an object is called; IntelliSense will show it just as you expect. There exists a range of Type Providers for various data sources where the schema of the stored data can somehow be dynamically extracted. For a free database, for example, it would then be let data = DbProvider(http://.....); data is then the object which contains all data from, e.g., a chemical database. It has an elements collection which contains an element which has the properties: Name, AtomicMass, Picture, …. You can browse the object returned by the Type Provider with full IntelliSense because the returned object is strongly typed, which makes this happen. The same can be achieved, of course, with code generators that take as input the schema of the input data (OData web service, database, SharePoint, JSON-serialized data, …) and spit out the necessary strongly typed objects as an assembly. This does work but has the downside that if the schema of your data source is huge you will quickly run against a wall with traditional code generators, since the generated “deserialization” assembly could easily become several hundred MB. *** The following part contains my guesses at how exactly this works, based on asking Don two questions *** Q: Can I use Type Providers within C#? D: No. Q: F# is after all a library. Can I reference the F# assemblies and use the contained Type Providers? D: F# annotates the generated types in a special way at runtime, which is not a static type that C# could use. The F# Type Providers seem to use a hybrid approach. At compilation time the Type Provider is instantiated with the URL of your input data. The obtained schema information is used by the compiler to generate static types as usual, but only for a small subset (the top-level classes up to a certain nesting level would make sense to me). To make this work you need to access the actual data source at compile time, which could be a problem if you want to keep the actual URL in a config file. OK, so this explains why it works at all. But in the demo we did see full IntelliSense support down to the deepest object level. It looks like if you navigate deeper into the object hierarchy, the Type Provider is instantiated in the background and attaches to a true static type the properties determined at run time while you were typing. So this type is not really static at all. It is static only if you define a static type as one whose properties show up in IntelliSense. But since this type information is determined while you are typing, it is not used to generate a true static type, and you cannot use these "intellistatic" types from C#. Nonetheless this is a very cool language feature. With the plotting libraries you can generate expressive charts from any data source within seconds, to quickly get an overview of any structured data storage. My favorite programming language, C#, will not get such features in the near future, but there is hope.
If you restrict yourself to OData sources, you can use LINQPad to query any OData-enabled data source with LINQ with ease. There you can query Stackoverflow, and the output is also nicely rendered, which makes it a very good tool to explore OData sources today.

    Read the article

  • add shapes to bing maps from locations stored in a database (bing maps ajax control)

    - by macou
    I am trying to use the Bing Maps Ajax Control to plot pins for locations stored in a database onto the Bing map on a web page. All the locations are geocoded and the lat/longs are stored in the database. I am using ASP.NET (C#), but can't figure out or find any tutorials on how to go about doing this. All I can find are articles on how to import shapes into a map from GeoRSS, Bing Maps, or KML. I have used (and paid for ;o) the excellent control from Simplovations to do a lot of what I need to do, namely working with my data as normal in the code-behind, getting a DataSet of my locations, and plotting the points on the map. It has been great, but I want to know how to do it without using a third-party control. My main reason for wanting this is to be able to cluster my pins and hopefully learn a bit of JavaScript along the way. Does anyone know of any tutorials or articles online that can help me on my way? I have been searching the net for hours now and can't find anything :(

    Read the article

  • Scrolling a Canvas smoothly in Android

    - by prepbgg
    I'm new to Android. I am drawing bitmaps, lines and shapes onto a Canvas inside the OnDraw(Canvas canvas) method of my view. I am looking for help on how to implement smooth scrolling in response to a drag by the user. I have searched but not found any tutorials to help me with this. The reference for Canvas seems to say that if a Canvas is constructed from a Bitmap (called bmpBuffer, say) then anything drawn on the Canvas is also drawn on bmpBuffer. Would it be possible to use bmpBuffer to implement a scroll ... perhaps copy it back to the Canvas shifted by a few pixels at a time? But if I use Canvas.drawBitmap to draw bmpBuffer back to Canvas shifted by a few pixels, won't bmpBuffer be corrupted? Perhaps, therefore, I should copy bmpBuffer to bmpBuffer2 then draw bmpBuffer2 back to the Canvas. A more straightforward approach would be to draw the lines, shapes, etc. straight into a buffer Bitmap then draw that buffer (with a shift) onto the Canvas but so far as I can see the various methods: drawLine(), drawShape() and so on are not available for drawing to a Bitmap ... only to a Canvas. Could I have 2 Canvases? One of which would be constructed from the buffer bitmap and used simply for plotting the lines, shapes, etc. and then the buffer bitmap would be drawn onto the other Canvas for display in the View? I should welcome any advice! Answers to similar questions here (and on other websites) refer to "blitting". I understand the concept but can't find anything about "blit" or "bitblt" in the Android documentation. Are Canvas.drawBitmap and Bitmap.Copy Android's equivalents?

    Read the article

  • matplotlib.pyplot/pylab not updating figure while isinteractive(), using ipython -pylab

    - by NumberOverZero
    There are a lot of questions about matplotlib, pylab, pyplot, ipython, so I'm sorry if you're sick of seeing this asked. I'll try to be as specific as I can, because I've been looking through people's questions and looking at documentation for pyplot and pylab, and I still am not sure what I'm doing wrong. On with the code: Goal: plot a figure every .5 seconds, and update the figure as soon as the plot command is called. My attempt at coding this follows (running on ipython -pylab): import time ion() x=linspace(-1,1,51) plot(sin(x)) for i in range(10): plot([sin(i+j) for j in x]) #see ** print i time.sleep(1) print 'Done' It correctly plots each line, but not until it has exited the for loop. I have tried forcing a redraw by putting draw() where ** is, but that doesn't seem to work either. Ideally, I'd like to have it simply add each line, instead of doing a full redraw. If redrawing is required however, that's fine. Additional attempts at solving: just after ion(), tried adding hold(True) to no avail. for kicks tried show() for ** The closest answer I've found to what I'm trying to do was at http://stackoverflow.com/questions/2310851/plotting-lines-without-blocking-execution, but show() isn't doing anything. I apologize if this is a straightforward request, and I'm looking past something so obvious. For what it's worth, this came up while I was trying to convert matlab code from class to some python for my own use. The original matlab (initializations removed) which I have been trying to convert follows: for i=1:time plot(u) hold on pause(.01) for j=2:n-1 v(j)=u(j)-2*u(j-1) end v(1)= pi u=v end Any help, even if it's just "look up this_method" would be excellent, so I can at least narrow my efforts to figuring out how to use that method. If there's any more information that would be useful, let me know.
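
    The usual fix in current matplotlib is to let the GUI event loop run between plot calls, for example with plt.pause() (or draw() followed by flushing the canvas events) instead of time.sleep() alone. A sketch of the loop above rewritten that way, as a plain script rather than under ipython -pylab:

        import numpy as np
        import matplotlib.pyplot as plt

        plt.ion()
        x = np.linspace(-1, 1, 51)
        fig, ax = plt.subplots()
        ax.plot(np.sin(x))

        for i in range(10):
            ax.plot([np.sin(i + j) for j in x])
            plt.pause(0.5)        # redraws the figure and waits half a second
            print(i)

        print('Done')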

    Read the article

  • Trend analysis using iterative value increments

    - by Dave Jarvis
    We have configured iReport to generate the following graph: The real data points are in blue, the trend line is green. The problems include: Too many data points for the trend line Trend line does not follow a Bezier curve (spline) The source of the problem is with the incrementer class. The incrementer is provided with the data points iteratively. There does not appear to be a way to get the set of data. The code that calculates the trend line looks as follows: import java.math.BigDecimal; import net.sf.jasperreports.engine.fill.*; /** * Used by an iReport variable to increment its average. */ public class MovingAverageIncrementer implements JRIncrementer { private BigDecimal average; private int incr = 0; /** * Instantiated by the MovingAverageIncrementerFactory class. */ public MovingAverageIncrementer() { } /** * Returns the newly incremented value, which is calculated by averaging * the previous value from the previous call to this method. * * @param jrFillVariable Unused. * @param object New data point to average. * @param abstractValueProvider Unused. * @return The newly incremented value. */ public Object increment( JRFillVariable jrFillVariable, Object object, AbstractValueProvider abstractValueProvider ) { BigDecimal value = new BigDecimal( ( ( Number )object ).doubleValue() ); // Average every 10 data points // if( incr % 10 == 0 ) { setAverage( ( value.add( getAverage() ).doubleValue() / 2.0 ) ); } incr++; return getAverage(); } /** * Changes the value that is the moving average. * @param average The new moving average value. */ private void setAverage( BigDecimal average ) { this.average = average; } /** * Returns the current moving average average. * @return Value used for plotting on a report. */ protected BigDecimal getAverage() { if( this.average == null ) { this.average = new BigDecimal( 0 ); } return this.average; } /** Helper method. */ private void setAverage( double d ) { setAverage( new BigDecimal( d ) ); } } How would you create a smoother and more accurate representation of the trend line?
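
    Note that the increment step above halves the previous average with each new point, so it is effectively an exponential smoother with alpha = 0.5 rather than a windowed moving average. A true moving average keeps the last N points; a sketch of both in Python, for comparison, with invented data:

        from collections import deque

        def moving_average(points, window=10):
            # Trailing windowed mean: average of the last `window` points.
            buf, out = deque(maxlen=window), []
            for p in points:
                buf.append(p)
                out.append(sum(buf) / len(buf))
            return out

        def exponential_smooth(points, alpha=0.5):
            # Roughly what the incrementer above computes when alpha = 0.5.
            out = [points[0]]
            for p in points[1:]:
                out.append(alpha * p + (1 - alpha) * out[-1])
            return out

        data = [1, 2, 3, 10, 10, 10, 3, 2, 1]
        print(moving_average(data, window=3))
        print(exponential_smooth(data))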

    Read the article

  • Plot in GNU PLOT

    - by guddi
    I have to plot many lines in gnuplot. There is no problem with the x-axis. The problem I am facing is that most of the plotted lines have y values in the range [0-0.05], a few in the range 60-70, and the rest at 600-700. These numbers correspond to the y-axis scale values. But after I plot, I can see only 3 sets of lines, all messed up. There is no clarity between the lines: the line at 0 and the line at 0.003 look like one single line. If I set yrange[0:0.05], the lines within this range are clearly visible, but I want all the lines in the same graph. I have heard of breaking axes and multiplotting. Can they be useful, and how do I implement them? Can anyone please help me? Below is the script: set terminal png size 1300,1200 enhanced font 'Verdana,20' set output 'output.png' set key font 'Verdana,16' set key bottom outside set yrange[500:1000] set xtics("25k" 25000,"50k" 50000,"75k" 75000,"100k" 100000) set grid set title 'Performance Metrics' set ylabel 'Metrices' set xlabel 'FES' plot 'input' using 1:2 title 'A' with linespoints linewidth 4, 'input' using 1:3 title 'B' with linespoints linewidth 4, 'input' using 1:4 title 'C' with linespoints linewidth 4, 'input' using 1:5 title 'D' with linespoints linewidth 4, 'input' using 1:6 title 'E' with linespoints linewidth 4, 'input' using 1:7 title 'F' with linespoints linewidth 4, 'input' using 1:8 title 'G' with linespoints linewidth 4, 'input' using 1:9 title 'H' with linespoints linewidth 4, 'input' using 1:10 title 'I Metric' with linespoints linewidth 4 set output set terminal windows input.dat is something like this: 25 0.002 0.05 899 455 444 0.08 0.00004 900 700 0.003 I have other rows in the same form; I have shown only the first one.
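
    With y values spread across three bands (0-0.05, 60-70, 600-700), a single linear y-axis cannot show them all clearly; the usual options are a logarithmic y-axis (set logscale y) or a broken axis built from stacked panels with set multiplot, each panel with its own yrange. The stacked-panel idea, sketched in matplotlib with made-up data standing in for the real metrics:

        import numpy as np
        import matplotlib.pyplot as plt

        x = np.linspace(0, 100000, 500)
        low  = 0.025 + 0.02 * np.sin(x / 1.5e4)      # lines in the 0-0.05 band
        mid  = 65 + 3 * np.cos(x / 2e4)              # lines in the 60-70 band
        high = 650 + 30 * np.sin(x / 3e4)            # lines in the 600-700 band

        # One panel per band, all sharing the x axis -- a "broken" y axis.
        fig, (ax_hi, ax_mid, ax_lo) = plt.subplots(3, 1, sharex=True)
        ax_hi.plot(x, high);  ax_hi.set_ylim(600, 700)
        ax_mid.plot(x, mid);  ax_mid.set_ylim(60, 70)
        ax_lo.plot(x, low);   ax_lo.set_ylim(0, 0.05)
        ax_lo.set_xlabel("FES")
        fig.suptitle("Performance Metrics")
        plt.show()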

    Read the article

  • Using R in Processing through rJava/JRI?

    - by tommy-o-dell
    Hi guys, Is it possible to run R in Processing through rJava/JRI? If I deployed a Processing app on the web, would the client need R on their system? I'm looking to create an interactive information dashboard that I can deploy on the web. It seems that Processing is probably my best bet for the interactive/web part of things. Unfortunately, it doesn't look like there are many math/stats functions built in, and there aren't any libraries for plotting data either. I've been using R and ggplot2 for a few months and am thrilled (amazed) at how easily it manipulates and plots data. So I'm wondering now if I can get the best of both worlds and run R through a Processing applet. From the JRI website: JRI is a Java/R Interface, which allows to run R inside Java applications as a single thread. Basically it loads R dynamic library into Java and provides a Java API to R functionality. It supports both simple calls to R functions and a full running REPL. In a sense JRI is the inverse of rJava and both can be combined (i.e. you can run R code inside JRI that calls back to the JVM via rJava). The JGR project makes the full use of both JRI and rJava to provide a full Java GUI for R. JRI uses native code, but it supports all platforms where Sun's Java (or compatible) is available, including Windows, Mac OS X, Sun and Linux (both 32-bit and 64-bit). Thanks for the advice :)

    Read the article
