Search Results

Search found 812 results on 33 pages for 'computational geometry'.

Page 16/33

  • Microsoft's new technical computing initiative

    - by Randy Walker
    I made a mental note from earlier in the year.  Microsoft literally buys computers by the truckload.  From what I understand, it’s a typical practice amongst large software vendors.  You plug a few wires in, you test it, and you instantly have mega tera tera flops (don’t hold me to that number).  Microsoft has been trying to plug away at their cloud services (named Azure).  Which, for the layman, means Microsoft runs your software on their computers, and as demand increases you can allocate more computing power on the fly. With this in mind, it doesn’t surprise me that I was recently sent an executive email concerning Microsoft’s new technical computing initiative.  I find it to be a great marketing idea with actual substance behind their real work.  From the programmer academic perspective, in college we dreamed about this type of processing power.  This has decades of computer science theory behind it. A copy of the email received.  (note that I almost deleted this email, thinking it was spam due to it’s length) We don't often think about how complex life really is. Take the relatively simple task of commuting to and from work: it is, in fact, a complicated interplay of variables such as weather, train delays, accidents, traffic patterns, road construction, etc. You can however, take steps to shorten your commute - using a good, predictive understanding of a few of these variables. In fact, you probably are already taking these inputs and instinctively building a predictive model that you act on daily to get to your destination more quickly. Now, when we apply the same method to very complex tasks, this modeling approach becomes much more challenging. Recent world events clearly demonstrated our inability to process vast amounts of information and variables that would have helped to more accurately predict the behavior of global financial markets or the occurrence and impact of a volcano eruption in Iceland. To make sense of issues like these, researchers, engineers and analysts create computer models of the almost infinite number of possible interactions in complex systems. But, they need increasingly more sophisticated computer models to better understand how the world behaves and to make fact-based predictions about the future. And, to do this, it requires a tremendous amount of computing power to process and examine the massive data deluge from cameras, digital sensors and precision instruments of all kinds. This is the key to creating more accurate and realistic models that expose the hidden meaning of data, which gives us the kind of insight we need to solve a myriad of challenges. We have made great strides in our ability to build these kinds of computer models, and yet they are still too difficult, expensive and time consuming to manage. Today, even the most complicated data-rich simulations cannot fully capture all of the intricacies and dependencies of the systems they are trying to model. That is why, across the scientific and engineering world, it is so hard to say with any certainty when or where the next volcano will erupt and what flight patterns it might affect, or to more accurately predict something like a global flu pandemic. So far, we just cannot collect, correlate and compute enough data to create an accurate forecast of the real world. But this is about to change. Innovations in technology are transforming our ability to measure, monitor and model how the world behaves. 
The implication for scientific research is profound, and it will transform the way we tackle global challenges like health care and climate change. It will also have a huge impact on engineering and business, delivering breakthroughs that could lead to the creation of new products, new businesses and even new industries. Because you are a subscriber to executive e-mails from Microsoft, I want you to be the first to know about a new effort focused specifically on empowering millions of the world's smartest problem solvers. Today, I am happy to introduce Microsoft's Technical Computing initiative. Our goal is to unleash the power of pervasive, accurate, real-time modeling to help people and organizations achieve their objectives and realize their potential. We are bringing together some of the brightest minds in the technical computing community across industry, academia and science at www.modelingtheworld.com to discuss trends, challenges and shared opportunities. New advances provide the foundation for tools and applications that will make technical computing more affordable and accessible where mathematical and computational principles are applied to solve practical problems. One day soon, complicated tasks like building a sophisticated computer model that would typically take a team of advanced software programmers months to build and days to run, will be accomplished in a single afternoon by a scientist, engineer or analyst working at the PC on their desktop. And as technology continues to advance, these models will become more complete and accurate in the way they represent the world. This will speed our ability to test new ideas, improve processes and advance our understanding of systems. Our technical computing initiative reflects the best of Microsoft's heritage. Ever since Bill Gates articulated the then far-fetched vision of "a computer on every desktop" in the early 1980's, Microsoft has been at the forefront of expanding the power and reach of computing to benefit the world. As someone who worked closely with Bill for many years at Microsoft, I am happy to share with you that the passion behind that vision is fully alive at Microsoft and is carried out in the creation of our new Technical Computing group. Enabling more people to make better predictions We have seen the impact of making greater computing power more available firsthand through our investments in high performance computing (HPC) over the past five years. Scientists, engineers and analysts in organizations of all sizes and sectors are finding that using distributed computational power creates societal impact, fuels scientific breakthroughs and delivers competitive advantages. For example, we have seen remarkable results from some of our current customers: Malaria strikes 300,000 to 500,000 people around the world each year. To help in the effort to eradicate malaria worldwide, scientists at Intellectual Ventures use software that simulates how the disease spreads and would respond to prevention and control methods, such as vaccines and the use of bed nets. Technical computing allows researchers to model more detailed parameters for more accurate results and receive those results in less than an hour, rather than waiting a full day. Aerospace engineering firm, a.i. solutions, Inc., needed a more powerful computing platform to keep up with the increasingly complex computational needs of its customers: NASA, the Department of Defense and other government agencies planning space flights. 
To meet that need, it adopted technical computing. Now, a.i. solutions can produce detailed predictions and analysis of the flight dynamics of a given spacecraft, from optimal launch times and orbit determination to attitude control and navigation, up to eight times faster. This enables them to avoid mistakes in any areas that can cause a space mission to fail and potentially result in the loss of life and millions of dollars. Western & Southern Financial Group faced the challenge of running ever larger and more complex actuarial models as its number of policyholders and products grew and regulatory requirements changed. The company chose an actuarial solution that runs on technical computing technology. The solution is easy for the company's IT staff to manage and adjust to meet business needs. The new solution helps the company reduce modeling time by up to 99 percent - letting the team fine-tune its models for more accurate product pricing and financial projections. Our Technical Computing direction Collaborating closely with partners across industry and academia, we must now extend the reach of technical computing even further to help predictive modelers and data explorers make faster, more accurate predictions. As we build the Technical Computing initiative, we will invest in three core areas: Technical computing to the cloud: Microsoft will play a leading role in bringing technical computing power to scientists, engineers and analysts through the cloud. Existing high- performance computing users will benefit from the ability to augment their on-premises systems with cloud resources that enable 'just-in-time' processing. This platform will help ensure processing resources are available whenever they are needed-reliably, consistently and quickly. Simplify parallel development: Today, computers are shipping with more processing power than ever, including multiple cores, but most modern software only uses a small amount of the available processing power. Parallel programs are extremely difficult to write, test and trouble shoot. However, a consistent model for parallel programming can help more developers unlock the tremendous power in today's modern computers and enable a new generation of technical computing. We are delivering new tools to automate and simplify writing software through parallel processing from the desktop... to the cluster... to the cloud. Develop powerful new technical computing tools and applications: We know scientists, engineers and analysts are pushing common tools (i.e., spreadsheets and databases) to the limits with complex, data-intensive models. They need easy access to more computing power and simplified tools to increase the speed of their work. We are building a platform to do this. Our development efforts will yield new, easy-to-use tools and applications that automate data acquisition, modeling, simulation, visualization, workflow and collaboration. This will allow them to spend more time on their work and less time wrestling with complicated technology. Thinking bigger There is so much left to be discovered and so many questions yet to be answered in the fascinating world around us. We believe the technical computing community will show us that we have not seen anything yet. Imagine just some of the breakthroughs this community could make possible: Better predictions to help improve the understanding of pandemics, contagion and global health trends. 
Climate change models that predict environmental, economic and human impact, accessible in real-time during key discussions and debates. More accurate prediction of natural disasters and their impact to develop more effective emergency response plans. With an ambitious charter in hand, this new team is ready to build on our progress to-date and execute Microsoft's technical computing vision over the months and years ahead. We will steadily invest in the right technologies, tools and talent, and work to bring together the technical computing community. I invite you to visit www.modelingtheworld.com today. We welcome your ideas and feedback. I look forward to making this journey with you and others who want to answer the world's biggest questions, discover solutions to problems that seem impossible and uncover a host of new opportunities to change the world we live in for the better. Bob

    Read the article

  • CodePlex Daily Summary for Wednesday, July 30, 2014

    CodePlex Daily Summary for Wednesday, July 30, 2014Popular ReleasesGhostscript.NET: Ghostscript.NET v.1.1.9.: v.1.1.9. fixed problem with the PDF invisible layers (the optional content groups which will be left unmarked if processtrailerattrs is not executed). fixed text rasterization problem for some pdf's, it seems that the 'pdfopen begin' did not initialize everything required to render pdf properly so we replaced it with the 'runpdfopen' method which corrects everything (problem reported by "xatabhk"). changed GhostscriptRasterizer methods to support Stream insted of the MemoryStream. fixed...Recaptcha for .NET: Recaptcha for .NET v1.5: What's NewMinor bug fixes Support for legacy .NET framework 4.0 and ASP.NET MVC 4. Support for .NET Framework 4.5.1.Azure Storage Explorer: Azure Storage Explorer 6 Preview 1: Welcome to Azure Storage Explorer 6 Preview 1 This is the first release of the latest Azure Storage Explorer, code-named Phoenix. What's New?Here are some important things to know about version 6: Open Source Now being run as a full open source project. Full source code on CodePlex. Collaboration encouraged! Updated Code Base Brand-new code base (WPF/C#/.NET 4.5) Visual Studio 2013 solution (previously VS2010) Uses the Task Parallel Library (TPL) for asynchronous background operat...Wsus Package Publisher: release v1.3.1407.29: Updated WPP to recognize the very latest console version. Some files was missing into the latest release of WPP which lead to crash when trying to make a custom update. Add a workaround to avoid clipboard modification when double-clicking on a label when creating a custom update. Add the ability to publish detectoids. (This feature is still in a BETA phase. Packages relying on these detectoids to determine which computers need to be updated, may apply to all computers).VG-Ripper & PG-Ripper: PG-Ripper 1.4.32: changes NEW: Added Support for 'ImgMega.com' links NEW: Added Support for 'ImgCandy.net' links NEW: Added Support for 'ImgPit.com' links NEW: Added Support for 'Img.yt' links FIXED: 'Radikal.ru' links FIXED: 'ImageTeam.org' links FIXED: 'ImgSee.com' links FIXED: 'Img.yt' linksAsp.Net MVC-4,Entity Framework and JQGrid Demo with Todo List WebApplication: Asp.Net MVC-4,Entity Framework and JQGrid Demo: Asp.Net MVC-4,Entity Framework and JQGrid Demo with simple Todo List WebApplication, Overview TodoList is a simple web application to create, store and modify Todo tasks to be maintained by the users, which comprises of following fields to the user (Task Name, Task Description, Severity, Target Date, Task Status). 
TodoList web application is created using MVC - 4 architecture, code-first Entity Framework (ORM) and Jqgrid for displaying the data.Waterfox: Waterfox 31.0 Portable: New features in Waterfox 31.0: Added support for Unicode 7.0 Experimental support for WebCL New features in Firefox 31.0:New Add the search field to the new tab page Support of Prefer:Safe http header for parental control mozilla::pkix as default certificate verifier Block malware from downloaded files Block malware from downloaded files audio/video .ogg and .pdf files handled by Firefox if no application specified Changed Removal of the CAPS infrastructure for specifying site-sp...SuperSocket, an extensible socket server framework: SuperSocket 1.6.3: The changes below are included in this release: fixed an exception when collect a server's status but it has been stopped fixed a bug that can cause an exception in case of sending data when the connection dropped already fixed the log4net missing issue for a QuickStart project fixed a warning in a QuickStart projectYnote Classic: Ynote Classic 2.8.5 Beta: Several Changes - Multiple Carets and Multiple Selections - Improved Startup Time - Improved Syntax Highlighting - Search Improvements - Shell Command - Improved StabilityTEBookConverter: 1.2: Fixed: Could not start convertion in some cases Fixed: Progress show during convertion was truncated Fixed: Stopping convertion didn't reset program titleSharePoint 2010 & 2013 Google Maps V3 WebPart: SPGoogleMap webpart - SharePoint 2013 - July 2014: Google API key support added. The webpart does not need it but if you have one you can use it.QuieNet: Version 2.0: Replaced autoplay prevention mechanism: instead of replacing the player itself, only the function that starts the player is replaced. This only works for video players for now, and live streams are handled as before.XamlImageConverter: Xaml Image Converter 3.11: Improvements: - ASP.NET 64bit support for html2pdf. - Attribute to suppress parallel execution. - Ghostscript rendering. - No need for a snapshot for a imagemap, you can use original svg image.Automatic Parallel Computing: APC SDK 2.1: Features: integration with Amazon DynamoDB. Includes: Investment Club Benchmark Investor Ranking BenchmarkCatchException (Manage Exception): CatchMe Exception Version 1.0: Code SourceDnnFoundation Skin for DnnCMS: DnnC DnnFoundation Skin: First release of the DnnC DnnFoundation SkinQND Operations Manager SNMP Monitoring: QND.SNMP.Library version 1.0.0.103: fixed bug #1815FlMML customized: FlMML customized c.s.30938: ????????LFO、?????LFO??????。 ???·Y·????LFO、??????????????????。 ???LFO????????????????????。CS-Script Source: Release v3.8.4: CSScript.Evaluator is migrated to Mono v3.3.0 Added aggregating //css_ignore_ns from the imported scripts cs-script.7z - CS-Script Suite (binaries, documentation, samples) cs-script.ExtensionPack.7z - CS-Script Extension Pack (additional binaries and samples) cs-scriptDocs.7z - CS-Script DocumentationDotSpatial: DotSpatial 1.7: DotSpatial.Full - includes all DotSpatial libraries, extensions and DemoMap application DotSpatial.Core - includes only DotSpatial core libraries Entire list of changes see in the issue tracker. Main changes: Improved common stability, optimized memory and speed when loading and rendering shapefiles, fixed some memory leaks in rasters and shape layers. Simplified plugin infrastructure. 
Now there are predefined implementations for all required components (IStatusControl, IDockManager, IHead...New ProjectsAdmin QuikView for Dynamics CRM 2013: Admin QuickView is a gives you a quick hierarchical snapshot of all active Business Units, Teams, Security Roles and Users in your Dynamics CRM Organization.All bots: Windows Phone app to chat with bots from http://www.pandorabots.com/Calculator Proj: Calculator with power hexadecimal binary option and more... Computational Network Toolkit (CNTK): Computational networks (CNs) generalize models that can be described as a series of computational steps such as DNN, CNN, RNN, LSTM, and maximum entropy models.DelegateExpressionizer: Library is intended to decompile delegate code at runtime and build and appropriate expression tree.Hystaspes: Logistics Management System: Logistics Management SystemKinect Experiments: This repository contains various code samples, proof-of-concepts and utilities for Kinect for Windows (v1.8 and v2)ppcs: Placement Project Version 1.0.0.0Reusable MVC Partial View with JQuery Datatables: Reusable jQuery DataTables integrated with in a Partial View of MVC 4.0 is a reusable control/view written entirely in C# and JQuery, my aim was to create a MVCShared Code Project Template for Visual Studio 2013 Update 2: This Project contains the source code for a Shared Project template for VS 2013 which extends the shareing of code to all Project types besides universal apps. Text word search: This project is a sample for searching words in a notepad or a word file through wpf platform. Trace And Watch: Trace And Watch is a Pintool developed to assist in finding 32-bit integer errors.Vtron Automatic tester screen parameters: VTRONWalli: Walli is an open source SalesForce integrated developer environment(IDE) application written in .NET.Windows Kirlian WiFi App: Illustrates the radio field generated by WiFiWOMPS - What's new On My PLEX Server: Purpose of this project is to send an user a weekly or daily email of all new content add to their Plex library.

    Read the article

  • Visual State Manager in WPF not working for me

    - by Román
    Hi In a wpf project I have this XAML code <Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d" xmlns:ic="clr-namespace:Microsoft.Expression.Interactivity.Core;assembly=Microsoft.Expression.Interactions" x:Class="WpfApplication1.MainWindow" xmlns:vsm="clr-namespace:System.Windows;assembly=WPFToolkit" x:Name="Window" Title="MainWindow" Width="640" Height="480"> <vsm:VisualStateManager.VisualStateGroups> <vsm:VisualStateGroup x:Name="VisualStateGroup"> <vsm:VisualState x:Name="Loading"> <Storyboard> <ObjectAnimationUsingKeyFrames BeginTime="00:00:00" Duration="00:00:00.0010000" Storyboard.TargetName="control" Storyboard.TargetProperty="(UIElement.Visibility)"> <DiscreteObjectKeyFrame KeyTime="00:00:00" Value="{x:Static Visibility.Visible}"/> </ObjectAnimationUsingKeyFrames> <ObjectAnimationUsingKeyFrames BeginTime="00:00:00" Duration="00:00:00.0010000" Storyboard.TargetName="button" Storyboard.TargetProperty="(UIElement.Visibility)"> <DiscreteObjectKeyFrame KeyTime="00:00:00" Value="{x:Static Visibility.Collapsed}"/> </ObjectAnimationUsingKeyFrames> <ObjectAnimationUsingKeyFrames BeginTime="00:00:00" Duration="00:00:00.0010000" Storyboard.TargetName="button1" Storyboard.TargetProperty="(UIElement.Visibility)"> <DiscreteObjectKeyFrame KeyTime="00:00:00" Value="{x:Static Visibility.Visible}"/> </ObjectAnimationUsingKeyFrames> </Storyboard> </vsm:VisualState> <VisualState x:Name="Normal"> <Storyboard> <ObjectAnimationUsingKeyFrames BeginTime="00:00:00" Duration="00:00:00.0010000" Storyboard.TargetName="control" Storyboard.TargetProperty="(UIElement.Visibility)"> <DiscreteObjectKeyFrame KeyTime="00:00:00" Value="{x:Static Visibility.Collapsed}"/> </ObjectAnimationUsingKeyFrames> </Storyboard> </VisualState> </vsm:VisualStateGroup> </vsm:VisualStateManager.VisualStateGroups> <Grid x:Name="LayoutRoot"> <Grid.Resources> <ControlTemplate x:Key="loadingAnimation"> <Image x:Name="content" Opacity="1"> <Image.Source> <DrawingImage> <DrawingImage.Drawing> <DrawingGroup> <GeometryDrawing Brush="Transparent"> <GeometryDrawing.Geometry> <RectangleGeometry Rect="0,0,1,1"/> </GeometryDrawing.Geometry> </GeometryDrawing> <DrawingGroup> <DrawingGroup.Transform> <RotateTransform x:Name="angle" Angle="0" CenterX="0.5" CenterY="0.5"/> </DrawingGroup.Transform> <GeometryDrawing Geometry="M0.9,0.5 A0.4,0.4,90,1,1,0.5,0.1"> <GeometryDrawing.Pen> <Pen Brush="Green" Thickness="0.1"/> </GeometryDrawing.Pen> </GeometryDrawing> <GeometryDrawing Brush="Green" Geometry="M0.5,0 L0.7,0.1 L0.5,0.2"/> </DrawingGroup> </DrawingGroup> </DrawingImage.Drawing> </DrawingImage> </Image.Source> </Image> <ControlTemplate.Triggers> <Trigger Property="Visibility" Value="Visible"> <Trigger.EnterActions> <BeginStoryboard x:Name="animation"> <Storyboard> <DoubleAnimation From="0" To="359" Duration="0:0:1.5" RepeatBehavior="Forever" Storyboard.TargetName="angle" Storyboard.TargetProperty="Angle"/> </Storyboard> </BeginStoryboard> </Trigger.EnterActions> <Trigger.ExitActions> <StopStoryboard BeginStoryboardName="animation"/> </Trigger.ExitActions> </Trigger> </ControlTemplate.Triggers> </ControlTemplate> </Grid.Resources> <Grid.ColumnDefinitions> <ColumnDefinition MinWidth="76.128" Width="Auto"/> <ColumnDefinition MinWidth="547.872" Width="Auto"/> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition 
Height="0.05*"/> <RowDefinition Height="0.95*"/> </Grid.RowDefinitions> <Button x:Name="button" Margin="0,0,1,0.04" Width="100" Content="Load" d:LayoutOverrides="Height" Click="Button_Click"/> <Button x:Name="button1" HorizontalAlignment="Left" Margin="0,0,0,0.04" Width="100" Content="Stop" Grid.Column="1" d:LayoutOverrides="Height" Click="Button2_Click" Visibility="Collapsed"/> <Control x:Name="control" Margin="10" Height="100" Grid.Row="1" Grid.ColumnSpan="2" Width="100" Template="{DynamicResource loadingAnimation}" Visibility="Collapsed"/> </Grid> </Window> and the following code behind on the window public partial class MainWindow : Window { public MainWindow() { this.InitializeComponent(); } private void Button1_Click(object sender, System.Windows.RoutedEventArgs e) { VisualStateManager.GoToState(this, "Loading", true); } private void Button2_Click(object sender, System.Windows.RoutedEventArgs e) { VisualStateManager.GoToState(this, "Normal", true); } } However, when I click the first button (button1) the state change is not being triggered. What am I doing wrong? Thanks in advance

    Read the article

  • fitBounds() in Google Maps API v3 does not fit bounds

    - by jul
    hi, I'm using the geocoder from Google API v3 to display a map of a country. I get the recommended viewport for the country but when I want to fit the map to this viewport, it does not work (see the bounds before and after calling the fitBounds function in the code below). What am I doing wrong? How can I set the viewport of my map to results[0].geometry.viewport? thanks jul var geocoder = new google.maps.Geocoder(); if(geocoder) { geocoder.geocode({'address': '{{countrycode}}'}, function(results, status) { var bounds = new google.maps.LatLngBounds(); bounds = results[0].geometry.viewport; console.log(bounds); //((35.173, -12.524), (45.244, 5.098)) console.log(map.getBounds()); //((34.628757195038844, -14.683750000000012), (58.28355197864712, 27.503749999999986)) map.fitBounds(bounds); console.log(map.getBounds()); //((25.740113512090183, -24.806750000000005), (52.44233664523719, 17.380749999999974)) }); } }
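
    A likely explanation, offered here as a sketch rather than a confirmed fix: fitBounds() adjusts the viewport asynchronously, so map.getBounds() read on the very next line still reports the old bounds. Reading the bounds inside an event listener after the map has settled shows the fitted viewport. The countryCode variable below stands in for the template placeholder used in the question.

      var geocoder = new google.maps.Geocoder();
      geocoder.geocode({ 'address': countryCode }, function(results, status) {
        if (status == google.maps.GeocoderStatus.OK) {
          // The recommended viewport is already a LatLngBounds object.
          var viewport = results[0].geometry.viewport;
          // fitBounds() is asynchronous: wait for the map to go idle
          // before reading the bounds back.
          google.maps.event.addListenerOnce(map, 'idle', function() {
            console.log(map.getBounds()); // now matches the fitted viewport
          });
          map.fitBounds(viewport);
        }
      });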

    Read the article

  • How to process images with paperclip on Heroku?

    - by Yuri
    I use Heroku for my app. I want to auto-orient image and then to resize it. So I do: class User < ActiveRecord::Base Paperclip.options[:swallow_stderr] = false has_attached_file :photo, :styles => { :square => "100%", :large => "100%" }, :convert_options => { :square => "-auto-orient -geometry 70X70#", :large => "-auto-orient -geometry X300" }, :storage => :s3, :s3_credentials => "#{RAILS_ROOT}/config/s3.yml", :path => ":attachment/:id/:style.:extension", :bucket => 'mybucket' validates_attachment_size :photo, :less_than => 5.megabyte end It does not work with error: There was an error processing the thumbnail for stream.20143 What am I doing wrong?

    Read the article

  • Trying to issue a mouseclick event on a QWebElement in QWebView QtWebKit

    - by Bad Man
    I'm trying to send a mouse click event on a certain element in the QtWebKit DOM but I'm obviously doing it wrong: QWebElement el=this->page()->mainFrame()->findFirstElement("input[type=submit]"); el.setFocus(); QMouseEvent pressEvent(QMouseEvent::MouseButtonPress, el.geometry().center(), Qt::MouseButton::LeftButton, Qt::LeftButton, Qt::NoModifier); QCoreApplication::sendEvent(this->page()->mainFrame(), &pressEvent); QMouseEvent releaseEvent(QMouseEvent::MouseButtonRelease, el.geometry().center(), Qt::MouseButton::LeftButton, Qt::LeftButton, Qt::NoModifier); QCoreApplication::sendEvent(this->page()->mainFrame(), &releaseEvent); Any ideas? (p.s. I am extending QWebView if it wasn't obvious)

    Read the article

  • UTF-8 formatting in SPARQL

    - by john
    How can I "say" to SPARQL that ?churchname is in UTF-8 formatting? because response is like:Pražský hrad PREFIX lgv: <http://linkedgeodata.org/vocabulary#> PREFIX abc: <http://dbpedia.org/class/yago/> SELECT ?churchname WHERE { <http://dbpedia.org/resource/Prague> geo:geometry ?gm . ?church a lgv:castle . ?church geo:geometry ?churchgeo . ?church lgv:name ?churchname . FILTER ( bif:st_intersects (?churchgeo,?gm, 10)) } GROUP BY ?churchname ORDER BY ?churchname

    Read the article

  • Rendering spatial data of GeoQuerySet in a custom view on GeoDjango

    - by dmytro
    Hello. I have just started my first project on GeoDjango. The GeoDjango-powered Admin application gives us a great way to view and edit the spatial data associated with the current object. The problem is that, once the objects have been populated, I need to render the geometry associated with several objects at once on a single map. I might implement it as a model action redirecting to a custom view. I just don't know how to include the OpenLayers widget in the view and how to render my compound geometry from my GeoQuerySet there. I would be very thankful for any hint from an experienced GeoDjango programmer.

    Read the article

  • Google's Geocoder returns wrong country

    - by 6bytes
    Hi I'm using Google's Geocoder to find lat lng coordinates for a given address. var geocoder = new google.maps.Geocoder(); geocoder.geocode( { 'address': address, 'region': 'uk' }, function(results, status) { if(status == google.maps.GeocoderStatus.OK) { lat: results[0].geometry.location.lat(), lng: results[0].geometry.location.lng() }); address variable is taken from an input field. I want to search locations only in UK. I thought that specifying 'region': 'uk' should be enough but it's not. When I type in "Boston" it founds Boston in US and I wanted the one in UK. How to restrict Geocoder to return locations only from one country or maybe from a certain lat lng range? Thanks
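
    A sketch of one possible approach (not a confirmed answer): the region parameter only biases results, it does not restrict them. Later revisions of the v3 geocoder accept componentRestrictions, which filters results to a single country; the GB country code and the variable names below are assumptions carried over from the question.

      var geocoder = new google.maps.Geocoder();
      geocoder.geocode({
        'address': address,
        'componentRestrictions': { 'country': 'GB' }  // restrict, not just bias
      }, function(results, status) {
        if (status == google.maps.GeocoderStatus.OK) {
          var lat = results[0].geometry.location.lat();
          var lng = results[0].geometry.location.lng();
          // "Boston" now resolves to Boston, Lincolnshire rather than Boston, MA
        }
      });

    Alternatively, passing a bounds option with a UK LatLngBounds biases results toward that viewport when componentRestrictions is not available.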

    Read the article

  • Javascript closures with google geocoder

    - by DaNieL
    Hi all, i still have some problems with javascript closures, and input/output variables. Im playing with google maps api for a no profit project: users will place the marker into a gmap, and I have to save the locality (with coordinates) in my db. The problem comes when i need to do a second geocode in order to get a unique pairs of lat and lng for a location: lets say two users place the marker in the same town but in different places, I dont want to have the same locality twice in the database with differents coords. I know i can do the second geocode after the user select the locality, but i want to understand what am i mistaking here: // First geocoding, take the marker coords to get locality. geocoder.geocode( { 'latLng': new google.maps.LatLng($("#lat").val(), $("#lng").val()), 'language': 'it' }, function(results_1, status_1){ // initialize the html var inside this closure var html = ''; if(status_1 == google.maps.GeocoderStatus.OK) { // do stuff here for(i = 0, geolen = results_1[0].address_components.length; i != geolen) { // Second type of geocoding: for each location from the first geocoding, // i want to have a unique [lat,lan] geocoder.geocode( { 'address': results_1[0].address_components[i].long_name }, function(results_2, status_2){ // Here come the problem. I need to have the long_name here, and // 'html' var should increment. coords = results_2[0].geometry.location.toUrlValue(); html += 'some html to let the user choose the locality'; } ); } // Finally, insert the 'html' variable value into my dom... //but it never gets updated! } else { alert("Error from google geocoder:" + status_1) } } ); I tryed with: // Second type of geocoding: for each location from the first geocoding, i want // to have a unique [lat,lan] geocoder.geocode( { 'address': results_1[0].address_components[i].long_name }, (function(results_2, status_2, long_name){ // But in this way i'll never get results_2 or status_2, well, results_2 // get long_name value, status_2 and long_name is undefined. // However, html var is correctly updated. coords = results_2[0].geometry.location.toUrlValue(); html += 'some html to let the user choose the locality'; })(results_1[0].address_components[i].long_name) ); And with: // Second type of geocoding: for each location from the first geocoding, i want to have // a unique [lat,lan] geocoder.geocode( { 'address': results_1[0].address_components[i].long_name }, (function(results_2, status_2, long_name){ // But i get an obvious "results_2 is not defined" error (same for status_2). coords = results_2[0].geometry.location.toUrlValue(); html += 'some html to let the user choose the locality, that can be more than one'; })(results_2, status_2, results_1[0].address_components[i].long_name) ); Any suggestion? EDIT: My problem is how to pass an additional arguments to the geocoder inner function: function(results_2, status_2, long_name){ //[...] } becose if i do that with a clousure, I mess with the original parameters (results_2 and status_2)
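
    A sketch of the closure pattern the second attempt was reaching for: instead of invoking the callback immediately, a small factory captures long_name and returns the real callback, so each inner geocode sees its own copy while html keeps accumulating. The option markup is purely illustrative; note that html must be written to the DOM only after all the inner geocoder calls have returned, since they complete asynchronously.

      function makeLocalityCallback(long_name) {
        // Returns the callback the geocoder will invoke later,
        // with long_name captured in this closure.
        return function(results_2, status_2) {
          if (status_2 == google.maps.GeocoderStatus.OK) {
            var coords = results_2[0].geometry.location.toUrlValue();
            html += '<option value="' + coords + '">' + long_name + '</option>';
          }
        };
      }

      // Inside the loop over results_1[0].address_components:
      geocoder.geocode(
        { 'address': results_1[0].address_components[i].long_name },
        makeLocalityCallback(results_1[0].address_components[i].long_name)
      );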

    Read the article

  • OpenLayers .containsPoint after pan

    - by Adrian
    I seem to have hit a bug, or I have overlooked something. I've written some code that enumerates all the vector features on an OpenLayers vector layer to check whether the mouse is inside a vector feature; if so, it displays some info based on that feature. I had to write my own methods for this because the existing OpenLayers controls (Select etc.) stop after finding one feature under the mouse, and I may have several features stacked on top of one another. My problem is that the .containsPoint method seems to be using coordinates from before a pan. After zooming in or out the geometry seems to be in the right place and .containsPoint works correctly when I wave the mouse over the map. Do I need to do something after the map has been panned to update the features' geometry?
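
    A hedged sketch of the kind of hit test that stays correct across pans, assuming OpenLayers 2 and a vector layer named vectorLayer: the event's viewport pixel is converted to map coordinates on every mouse move, so the test point always reflects the current extent. If positions still look stale after a pan or page reflow, clearing the cached mouse offsets with map.events.clearMouseCache() is worth trying.

      map.events.register('mousemove', map, function(evt) {
        // Convert the current viewport pixel to map coordinates each time.
        var lonLat = map.getLonLatFromViewPortPx(evt.xy);
        var point = new OpenLayers.Geometry.Point(lonLat.lon, lonLat.lat);
        for (var i = 0; i < vectorLayer.features.length; i++) {
          var geom = vectorLayer.features[i].geometry;
          // containsPoint() is implemented on polygon geometries.
          if (geom.containsPoint && geom.containsPoint(point)) {
            // display info for this feature; keep looping so stacked
            // features are all reported
          }
        }
      });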

    Read the article

  • How to get lng lat value from query results of geoalchemy2

    - by user2213606
    For example, class Lake(Base): __tablename__ = 'lake' id = Column(Integer, primary_key=True) name = Column(String) geom = Column(Geometry('POLYGON')) point = Column(Geometry('Point')) lake = Lake(name='Orta', geom='POLYGON((3 0,6 0,6 3,3 3,3 0))', point="POINT(2 9)") query = session.query(Lake).filter(Lake.geom.ST_Contains('POINT(4 1)')) for lake in query: print lake.point It returned <WKBElement at 0x2720ed0; '010100000000000000000000400000000000002240'> I also tried lake.point.ST_X(), but it didn't give the expected latitude either. What is the correct way to transform the value from a WKBElement into a readable and useful format, say (lng, lat)? Thanks

    Read the article

  • Adding ID's to google map markers

    - by Nick
    I have a script that loops and adds markers one at a time. I am trying to get the current marker to have an info window and to only have 5 markers on the map at a time (4 without info windows and 1 with). How would I add an id to each marker so that I can delete markers and close info windows as needed? This is the function I am using to set the marker: function codeAddress(address, contentString) { var infowindow = new google.maps.InfoWindow({ content: contentString }); if (geocoder) { geocoder.geocode( { 'address': address}, function(results, status) { if (status == google.maps.GeocoderStatus.OK) { map.setCenter(results[0].geometry.location); var marker = new google.maps.Marker({ map: map, position: results[0].geometry.location }); infowindow.open(map,marker); } else { alert("Geocode was not successful for the following reason: " + status); } }); } }
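
    A sketch of one way to do this, with the extra id parameter and the five-marker limit added as assumptions on top of the question's function: markers inherit set()/get() from MVCObject, so an arbitrary id can be attached, and keeping the markers in an array makes it easy to close older info windows and drop markers from the map.

      var markers = [];  // oldest first

      function codeAddress(address, contentString, id) {
        if (!geocoder) return;
        geocoder.geocode({ 'address': address }, function(results, status) {
          if (status != google.maps.GeocoderStatus.OK) {
            alert("Geocode was not successful for the following reason: " + status);
            return;
          }
          map.setCenter(results[0].geometry.location);
          var marker = new google.maps.Marker({
            map: map,
            position: results[0].geometry.location
          });
          marker.set('id', id);  // custom property, retrievable via marker.get('id')
          var infowindow = new google.maps.InfoWindow({ content: contentString });
          markers.push({ marker: marker, infowindow: infowindow });

          // Close the info windows on the older markers...
          for (var j = 0; j < markers.length - 1; j++) {
            markers[j].infowindow.close();
          }
          // ...keep at most five markers on the map...
          while (markers.length > 5) {
            markers.shift().marker.setMap(null);
          }
          // ...and open the window on the newest one only.
          infowindow.open(map, marker);
        });
      }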

    Read the article

  • Google maps spatial reference system

    - by JavaRocky
    What is Google map's spatial reference system using when you enter a lat, long into the maps search bar? I've found hints that it might be WGS84 but after converting to that coordinate system, nothing shows up when i paste the coordinates into the google maps search box. I am converting from GDA MGA 56. Sample: Input MGA56 coords: 336301, 6253363 Expected WGS86 coords: -33.8473340793201, 151.230631835944 I get: 16834916.928327594 -4008321.1020318186 Spatial coord systems: EPSG:28356 for MGA56 EPSG:900913 for WGS86 (google maps) I am using geotools to do the transform: CoordinateReferenceSystem crsMga56 = CRS.parseWKT(mga56); CoordinateReferenceSystem crsGmaps = CRS.parseWKT(gmaps); Coordinate coordinate = new Coordinate(336301, 6253363); Point point = new GeometryFactory().createPoint(coordinate); MathTransform transform = CRS.findMathTransform(crsMga56, crsGmaps); Geometry geometry = JTS.transform(point, transform); I know the transform is not correct, as when i use an online tool it gives me the correct coords. http://www.environment.gov.au/cgi-bin/transform/mga2geo%5Fgda.pl?east=336301&north=6253363&zone=56

    Read the article

  • Which is faster: creating a detailed mesh before execution or tessellating?

    - by Nick Udell
    For simplicity of the problem let's consider spheres. Let's say I have a sphere, and before execution I know the radius, the position and the triangle count. Let's also say the triangle count is sufficiently large (e.g. ~50k triangles). Would it be faster generally to create this sphere mesh before hand and stream all 50k triangles to the graphics card, or would it be faster to send a single point (representing the centre of the sphere) and use tessellation and geometry shaders to build the sphere on the GPU? Would it still be faster if I had 100 of these spheres in different positions? Can I use hull/geometry shaders to create something which I can then combine with instancing?

    Read the article

  • Include headers in header file?

    - by Mohit Deshpande
    I have several libraries made by myself (a geometry library, a linked list library, etc). I want to make a header file to include them all in one lib.h. Could I do something like this: #ifndef LIB_H_ #define LIB_H_ #include <stdio.h> #include <stdlib.h> #include <linkedlist.h> #include <geometry.h> .... #endif Then I could just reference this one library and actually reference multiple libraries. Is this possible? If not, is there a way around it?

    Read the article

  • How to determine the radius and center of a circle when only three noncollinear points are known?

    - by Bob
    I'm working on a C# program that deals with Oracle Spatial geometry. When circle data is stored in a geometry field only three non-collinear points are stored to represent the circle. The problem is that I need to use this data on a Google Maps web page and need the center point and radius of the circle (since my circle drawing function uses that information). Can anyone help with the math involved and translating said math to C#? I think this page may hold the answer, but I'm having a hard time following it. There are formulas for radius and center given three points, but then they define the variables as matrices and I get lost at that point. How would I solve that in code?
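
    The standard circumcircle construction handles this: the centre is the intersection of the perpendicular bisectors of two chords, and the radius is the distance from that centre to any of the three points. A sketch of the usual closed-form version is below, written in JavaScript for brevity; the arithmetic translates line-for-line to C#. Note it is planar geometry, so points taken from geographic (lat/long) data should be projected first.

      // Circumcircle of three non-collinear points a, b, c ({x, y} objects).
      function circleFromThreePoints(a, b, c) {
        var d = 2 * (a.x * (b.y - c.y) + b.x * (c.y - a.y) + c.x * (a.y - b.y));
        if (Math.abs(d) < 1e-12) return null;  // collinear: no unique circle
        var aSq = a.x * a.x + a.y * a.y;
        var bSq = b.x * b.x + b.y * b.y;
        var cSq = c.x * c.x + c.y * c.y;
        var cx = (aSq * (b.y - c.y) + bSq * (c.y - a.y) + cSq * (a.y - b.y)) / d;
        var cy = (aSq * (c.x - b.x) + bSq * (a.x - c.x) + cSq * (b.x - a.x)) / d;
        var r = Math.sqrt((a.x - cx) * (a.x - cx) + (a.y - cy) * (a.y - cy));
        return { center: { x: cx, y: cy }, radius: r };
      }

      // Example: (0,0), (2,0), (0,2) gives centre (1,1) and radius sqrt(2).
      var circle = circleFromThreePoints({ x: 0, y: 0 }, { x: 2, y: 0 }, { x: 0, y: 2 });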

    Read the article

  • Google maps api v3: geocoding multiple addresses and infowindow

    - by user2536786
    I am trying to get infowindow for multiple addresses. Its creating markers but when I click on markers, infowindow is not popping up. Please help and see what could be wrong in this code. Rest all info is fine only issue is with infowindow not coming up. <!DOCTYPE html> <html> <head> <meta http-equiv="content-type" content="text/html; charset=UTF-8" /> <title>Google Maps Multiple Markers</title> <script src="http://maps.google.com/maps/api/js?sensor=false" type="text/javascript"></script> </head> <body> <div id="map" style="height: 800px;"></div> <script type="text/javascript"> var locations = [ ['Bondi Beach', '850 Bay st 04 Toronto, Ont'], ['Coogee Beach', '932 Bay Street, Toronto, ON M5S 1B1'], ['Cronulla Beach', '61 Town Centre Court, Toronto, ON M1P'], ['Manly Beach', '832 Bay Street, Toronto, ON M5S 1B1'], ['Maroubra Beach', '606 New Toronto Street, Toronto, ON M8V 2E8'] ]; var map = new google.maps.Map(document.getElementById('map'), { zoom: 10, center: new google.maps.LatLng(43.253205,-80.480347), mapTypeId: google.maps.MapTypeId.ROADMAP }); var infowindow = new google.maps.InfoWindow(); var geocoder = new google.maps.Geocoder(); var marker, i; for (i = 0; i < locations.length; i++) { geocoder.geocode( { 'address': locations[i][1]}, function(results, status) { //alert(status); if (status == google.maps.GeocoderStatus.OK) { //alert(results[0].geometry.location); map.setCenter(results[0].geometry.location); marker = new google.maps.Marker({ position: results[0].geometry.location, map: map }); google.maps.event.addListener(marker, 'mouseover', function() { infowindow.open(marker, map);}); google.maps.event.addListener(marker, 'mouseout', function() { infowindow.close();}); } else { alert("some problem in geocode" + status); } }); } </script> </body> </html>
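
    Two details in the posted code are the likely culprits, shown as a hedged sketch rather than a verified fix: InfoWindow.open() takes the map first and the anchor second, and because the loop reuses a single marker variable, every listener ends up bound to whichever marker was created last. Wrapping the body in a function that captures the current location fixes both; setContent() lets one shared info window display the right name.

      for (i = 0; i < locations.length; i++) {
        (function(name, address) {
          geocoder.geocode({ 'address': address }, function(results, status) {
            if (status != google.maps.GeocoderStatus.OK) {
              alert("some problem in geocode " + status);
              return;
            }
            var marker = new google.maps.Marker({
              position: results[0].geometry.location,
              map: map
            });
            google.maps.event.addListener(marker, 'mouseover', function() {
              infowindow.setContent(name);
              infowindow.open(map, marker);  // map first, then the anchor
            });
            google.maps.event.addListener(marker, 'mouseout', function() {
              infowindow.close();
            });
          });
        })(locations[i][0], locations[i][1]);
      }

    Geocoding five addresses in a tight loop can also trip the OVER_QUERY_LIMIT status, so checking for it (or spacing the requests out) is worth considering.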

    Read the article

  • Paperclip "can't dump File" error when uploading video in Rails

    - by user3510728
    When I try to upload a video using Paperclip, I get the error message "can't dump File". My Video model: class Video < ActiveRecord::Base has_attached_file :avatar, :storage => :s3, :styles => { :mp4 => { :geometry => "640x480", :format => 'mp4' }, :thumb => { :geometry => "300x300>", :format => 'jpg', :time => 5 } }, :processors => [:ffmpeg] validates_attachment_presence :avatar validates_attachment_content_type :avatar, :content_type => /video/, :message => "Video not supported" end Why do I get this error when I try to create a video?

    Read the article

  • Fast Lightweight Image Comparisson Metric Algorithm

    - by gav
    Hi All, I am developing an application for the Android platform which contains 1000+ image filters that have been 'evolved'. When a user selects a photo I want to present the most relevant filters first. This 'relevance' should be dependent on previous use cases. I have already developed tools that register when a filtered image is saved; this combination of filter and image can be seen as the training data for my system. The issue is that the comparison must occur between selecting an image and the next screen coming up. From a UI point of view I need the whole process to take less than 4 seconds: select an image, obtain a metric to use for similarity, check against use cases, return the 6 closest matches. I figure with 4 seconds I can use animations and progress dialogs to keep the user happy. Due to platform constraints I am fairly limited in the computational expense of the algorithm. I have implemented a technique adapted from various online tutorials for running C code on the G1, so that language is available. Specific constraints: Qualcomm® MSM7201A™ 528 MHz processor; 320 x 480 pixel bitmap in 32-bit ARGB; ~2 seconds of computation for the native method to get the metric; ~2 seconds to compare the metric of the current image with the training data. This is an academic project so all ideas are welcome; anything you can think of or have heard about would be of interest to me. My ideas: I want to keep the complexity down (O(n*m)?) by using pixel data only rather than a neighbourhood function. I was looking at using the colour histogram/greyscale histogram/texture/entropy of the image, combining them to make the measure. There will be an obvious loss of information, but I need the resultant metric to be substantially smaller than the memory footprint of the image (~0.512 MB). As I said, any ideas to direct my research would be fantastic. Kind regards, Gavin

    Read the article

  • SQL SERVER – World Shapefile Download and Upload to Database – Spatial Database

    - by pinaldave
    During my recent training, I was asked by a student if I knew a place where he could download spatial files for all the countries around the world, as well as if there is a way to upload shape files to a database. Here is a quick tutorial for it. VDS Technologies has all the spatial files for every location for free. You can download the spatial file from here. If you cannot find the spatial file you are looking for, please leave a comment here, and I will send you the necessary details. Unzip the file to a folder and it will have the following content. Then, download the Shape2SQL tool from SharpGIS. This is one of the best tools available to convert shapefiles to SQL tables. Afterwards, run the .exe file. When the file is run for the first time, it will ask for the database properties. Provide your database details. Select the appropriate shape files and the tool will fill up the essential details automatically. If you do not want to create the index on the column, uncheck the box beside it. The screenshot below simply explains the procedure. You also have to be careful regarding your data, whether that is GEOMETRY or GEOGRAPHY. In this example, it is GEOMETRY data. Click "Upload to Database". It will show you the uploading process. Once the shape file is uploaded, close the application and open SQL Server Management Studio (SSMS). Run the following code in the SSMS Query Editor. USE Spatial GO SELECT * FROM dbo.world GO This will show the complete map of the world after you click on Spatial Results in the Spatial tab. In the Spatial Results Set, the Zoom feature is available. From the Select label column, choose the country name in order to show the country name overlaying the country borders. Let me know if this tutorial is helpful enough. I am planning to write a few more posts about this later. Note: the images displayed here do not reflect the original political boundaries. These data are pretty old and can probably draw incorrect maps as well. I have personally spotted several parts of the map where some countries are located a little bit inaccurately. Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Add-On, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Spatial, SQL Tips and Tricks, SQL Utility, T SQL, Technology

    Read the article

  • Basic Spatial Data with SQL Server and Entity Framework 5.0

    - by Rick Strahl
    In my most recent project we needed to do a bit of geo-spatial referencing. While spatial features have been in SQL Server for a while using those features inside of .NET applications hasn't been as straight forward as could be, because .NET natively doesn't support spatial types. There are workarounds for this with a few custom project like SharpMap or a hack using the Sql Server specific Geo types found in the Microsoft.SqlTypes assembly that ships with SQL server. While these approaches work for manipulating spatial data from .NET code, they didn't work with database access if you're using Entity Framework. Other ORM vendors have been rolling their own versions of spatial integration. In Entity Framework 5.0 running on .NET 4.5 the Microsoft ORM finally adds support for spatial types as well. In this post I'll describe basic geography features that deal with single location and distance calculations which is probably the most common usage scenario. SQL Server Transact-SQL Syntax for Spatial Data Before we look at how things work with Entity framework, lets take a look at how SQL Server allows you to use spatial data to get an understanding of the underlying semantics. The following SQL examples should work with SQL 2008 and forward. Let's start by creating a test table that includes a Geography field and also a pair of Long/Lat fields that demonstrate how you can work with the geography functions even if you don't have geography/geometry fields in the database. Here's the CREATE command:CREATE TABLE [dbo].[Geo]( [id] [int] IDENTITY(1,1) NOT NULL, [Location] [geography] NULL, [Long] [float] NOT NULL, [Lat] [float] NOT NULL ) Now using plain SQL you can insert data into the table using geography::STGeoFromText SQL CLR function:insert into Geo( Location , long, lat ) values ( geography::STGeomFromText ('POINT(-121.527200 45.712113)', 4326), -121.527200, 45.712113 ) insert into Geo( Location , long, lat ) values ( geography::STGeomFromText ('POINT(-121.517265 45.714240)', 4326), -121.517265, 45.714240 ) insert into Geo( Location , long, lat ) values ( geography::STGeomFromText ('POINT(-121.511536 45.714825)', 4326), -121.511536, 45.714825) The STGeomFromText function accepts a string that points to a geometric item (a point here but can also be a line or path or polygon and many others). You also need to provide an SRID (Spatial Reference System Identifier) which is an integer value that determines the rules for how geography/geometry values are calculated and returned. For mapping/distance functionality you typically want to use 4326 as this is the format used by most mapping software and geo-location libraries like Google and Bing. The spatial data in the Location field is stored in binary format which looks something like this: Once the location data is in the database you can query the data and do simple distance computations very easily. For example to calculate the distance of each of the values in the database to another spatial point is very easy to calculate. Distance calculations compare two points in space using a direct line calculation. For our example I'll compare a new point to all the points in the database. 
Using the Location field the SQL looks like this:-- create a source point DECLARE @s geography SET @s = geography:: STGeomFromText('POINT(-121.527200 45.712113)' , 4326); --- return the ids select ID, Location as Geo , Location .ToString() as Point , @s.STDistance( Location) as distance from Geo order by distance The code defines a new point which is the base point to compare each of the values to. You can also compare values from the database directly, but typically you'll want to match a location to another location and determine the difference for which you can use the geography::STDistance function. This query produces the following output: The STDistance function returns the straight line distance between the passed in point and the point in the database field. The result for SRID 4326 is always in meters. Notice that the first value passed was the same point so the difference is 0. The other two points are two points here in town in Hood River a little ways away - 808 and 1256 meters respectively. Notice also that you can order the result by the resulting distance, which effectively gives you results that are ordered radially out from closer to further away. This is great for searches of points of interest near a central location (YOU typically!). These geolocation functions are also available to you if you don't use the Geography/Geometry types, but plain float values. It's a little more work, as each point has to be created in the query using the string syntax, but the following code doesn't use a geography field but produces the same result as the previous query.--- using float fields select ID, geography::STGeomFromText ('POINT(' + STR (long, 15,7 ) + ' ' + Str(lat ,15, 7) + ')' , 4326), geography::STGeomFromText ('POINT(' + STR (long, 15,7 ) + ' ' + Str(lat ,15, 7) + ')' , 4326). ToString(), @s.STDistance( geography::STGeomFromText ('POINT(' + STR(long ,15, 7) + ' ' + Str(lat ,15, 7) + ')' , 4326)) as distance from geo order by distance Spatial Data in the Entity Framework Prior to Entity Framework 5.0 on .NET 4.5 consuming of the data above required using stored procedures or raw SQL commands to access the spatial data. In Entity Framework 5 however, Microsoft introduced the new DbGeometry and DbGeography types. These immutable location types provide a bunch of functionality for manipulating spatial points using geometry functions which in turn can be used to do common spatial queries like I described in the SQL syntax above. The DbGeography/DbGeometry types are immutable, meaning that you can't write to them once they've been created. They are a bit odd in that you need to use factory methods in order to instantiate them - they have no constructor() and you can't assign to properties like Latitude and Longitude. Creating a Model with Spatial Data Let's start by creating a simple Entity Framework model that includes a Location property of type DbGeography: public class GeoLocationContext : DbContext { public DbSet<GeoLocation> Locations { get; set; } } public class GeoLocation { public int Id { get; set; } public DbGeography Location { get; set; } public string Address { get; set; } } That's all there's to it. When you run this now against SQL Server, you get a Geography field for the Location property, which looks the same as the Location field in the SQL examples earlier. Adding Spatial Data to the Database Next let's add some data to the table that includes some latitude and longitude data. 
An easy way to find lat/long locations is to use Google Maps to pinpoint your location, then right click and click on What's Here. Click on the green marker to get the GPS coordinates. To add the actual geolocation data create an instance of the GeoLocation type and use the DbGeography.PointFromText() factory method to create a new point to assign to the Location property:[TestMethod] public void AddLocationsToDataBase() { var context = new GeoLocationContext(); // remove all context.Locations.ToList().ForEach( loc => context.Locations.Remove(loc)); context.SaveChanges(); var location = new GeoLocation() { // Create a point using native DbGeography Factory method Location = DbGeography.PointFromText( string.Format("POINT({0} {1})", -121.527200,45.712113) ,4326), Address = "301 15th Street, Hood River" }; context.Locations.Add(location); location = new GeoLocation() { Location = CreatePoint(45.714240, -121.517265), Address = "The Hatchery, Bingen" }; context.Locations.Add(location); location = new GeoLocation() { // Create a point using a helper function (lat/long) Location = CreatePoint(45.708457, -121.514432), Address = "Kaze Sushi, Hood River" }; context.Locations.Add(location); location = new GeoLocation() { Location = CreatePoint(45.722780, -120.209227), Address = "Arlington, OR" }; context.Locations.Add(location); context.SaveChanges(); } As promised, a DbGeography object has to be created with one of the static factory methods provided on the type as the Location.Longitude and Location.Latitude properties are read only. Here I'm using PointFromText() which uses a "Well Known Text" format to specify spatial data. In the first example I'm specifying to create a Point from a longitude and latitude value, using an SRID of 4326 (just like earlier in the SQL examples). You'll probably want to create a helper method to make the creation of Points easier to avoid that string format and instead just pass in a couple of double values. Here's my helper called CreatePoint that's used for all but the first point creation in the sample above:public static DbGeography CreatePoint(double latitude, double longitude) { var text = string.Format(CultureInfo.InvariantCulture.NumberFormat, "POINT({0} {1})", longitude, latitude); // 4326 is most common coordinate system used by GPS/Maps return DbGeography.PointFromText(text, 4326); } Using the helper the syntax becomes a bit cleaner, requiring only a latitude and longitude respectively. Note that my method intentionally swaps the parameters around because Latitude and Longitude is the common format I've seen with mapping libraries (especially Google Mapping/Geolocation APIs with their LatLng type). When the context is changed the data is written into the database using the SQL Geography type which looks the same as in the earlier SQL examples shown. Querying Once you have some location data in the database it's now super easy to query the data and find out the distance between locations. A common query is to ask for a number of locations that are near a fixed point - typically your current location and order it by distance. 
Using LINQ to Entities a query like this is easy to construct:[TestMethod] public void QueryLocationsTest() { var sourcePoint = CreatePoint(45.712113, -121.527200); var context = new GeoLocationContext(); // find any locations within 5 kilometers ordered by distance var matches = context.Locations .Where(loc => loc.Location.Distance(sourcePoint) < 5000) .OrderBy( loc=> loc.Location.Distance(sourcePoint) ) .Select( loc=> new { Address = loc.Address, Distance = loc.Location.Distance(sourcePoint) }); Assert.IsTrue(matches.Count() > 0); foreach (var location in matches) { Console.WriteLine("{0} ({1:n0} meters)", location.Address, location.Distance); } } This example produces: 301 15th Street, Hood River (0 meters)The Hatchery, Bingen (809 meters)Kaze Sushi, Hood River (1,074 meters)   The first point in the database is the same as my source point I'm comparing against so the distance is 0. The other two are within the 5 mile radius, while the Arlington location which is 65 miles or so out is not returned. The result is ordered by distance from closest to furthest away. In the code, I first create a source point that is the basis for comparison. The LINQ query then selects all locations that are within 5km of the source point using the Location.Distance() function, which takes a source point as a parameter. You can either use a pre-defined value as I'm doing here, or compare against another database DbGeography property (say when you have to points in the same database for things like routes). What's nice about this query syntax is that it's very clean and easy to read and understand. You can calculate the distance and also easily order by the distance to provide a result that shows locations from closest to furthest away which is a common scenario for any application that places a user in the context of several locations. It's now super easy to accomplish this. Meters vs. Miles As with the SQL Server functions, the Distance() method returns data in meters, so if you need to work with miles or feet you need to do some conversion. Here are a couple of helpers that might be useful (can be found in GeoUtils.cs of the sample project):/// <summary> /// Convert meters to miles /// </summary> /// <param name="meters"></param> /// <returns></returns> public static double MetersToMiles(double? meters) { if (meters == null) return 0F; return meters.Value * 0.000621371192; } /// <summary> /// Convert miles to meters /// </summary> /// <param name="miles"></param> /// <returns></returns> public static double MilesToMeters(double? miles) { if (miles == null) return 0; return miles.Value * 1609.344; } Using these two helpers you can query on miles like this:[TestMethod] public void QueryLocationsMilesTest() { var sourcePoint = CreatePoint(45.712113, -121.527200); var context = new GeoLocationContext(); // find any locations within 5 miles ordered by distance var fiveMiles = GeoUtils.MilesToMeters(5); var matches = context.Locations .Where(loc => loc.Location.Distance(sourcePoint) <= fiveMiles) .OrderBy(loc => loc.Location.Distance(sourcePoint)) .Select(loc => new { Address = loc.Address, Distance = loc.Location.Distance(sourcePoint) }); Assert.IsTrue(matches.Count() > 0); foreach (var location in matches) { Console.WriteLine("{0} ({1:n1} miles)", location.Address, GeoUtils.MetersToMiles(location.Distance)); } } which produces: 301 15th Street, Hood River (0.0 miles)The Hatchery, Bingen (0.5 miles)Kaze Sushi, Hood River (0.7 miles) Nice 'n simple. 
.NET 4.5 Only

Note that DbGeography and DbGeometry are exclusive to Entity Framework 5.0 (not 4.4, which ships in the same NuGet package or installer) and require .NET 4.5. That's because the new DbGeometry and DbGeography (and related) types are defined in the 4.5 version of System.Data.Entity, which is a CLR assembly and is only updated by major versions of .NET. Why the decision was made to add these types to System.Data.Entity rather than to the frequently updated EntityFramework assembly - which might have made this work in .NET 4.0 - is beyond me, especially given that there are no native .NET framework spatial types to begin with.

I also find it odd that there is no native CLR spatial type. The DbGeography and DbGeometry types are specific to Entity Framework and live in its assemblies. They will also work for general purpose, non-database spatial data manipulation, but then you are forced into having a dependency on System.Data.Entity, which seems a bit silly. There's also a System.Spatial assembly that's apparently part of WCF Data Services, which in turn doesn't work with Entity Framework. Another example of multiple teams at Microsoft not communicating and implementing the same functionality (differently) in several different places. Perplexed as I may be, for EF specific code the Entity Framework specific types are easy to use and work well.

Working with pre-.NET 4.5 Entity Framework and Spatial Data

If you can't go to .NET 4.5 just yet you can still use spatial features in Entity Framework, but it's a lot more work as you can't use the DbContext directly to manipulate the location data. You can still run raw SQL statements to write data into the database and retrieve results using the same T-SQL syntax I showed earlier, using Context.Database.ExecuteSqlCommand(). Here's code that you can use to add location data into the database:

[TestMethod]
public void RawSqlEfAddTest()
{
    string sqlFormat = @"insert into GeoLocations(Location, Address) values
                         (geography::STGeomFromText('POINT({0} {1})', 4326), @p0)";
    var sql = string.Format(sqlFormat, -121.527200, 45.712113);

    Console.WriteLine(sql);

    var context = new GeoLocationContext();
    Assert.IsTrue(context.Database.ExecuteSqlCommand(sql, "301 N. 15th Street") > 0);
}

Here I'm using the STGeomFromText() function to add the location data. Note that I'm using string.Format here, which usually would be a bad practice, but it's required in this case: I was unable to use ExecuteSqlCommand() and its named parameter syntax, as the longitude and latitude parameters are embedded inside a string literal. Rest assured it's required, as the following does not work:

string sqlFormat = @"insert into GeoLocations(Location, Address) values
                     (geography::STGeomFromText('POINT(@p0 @p1)', 4326), @p2)";
context.Database.ExecuteSqlCommand(sql, -121.527200, 45.712113, "301 N. 15th Street");

Explicitly assigning the point value with string.Format works, however.

There are a number of ways to query location data. You can't get the location data directly, but you can retrieve the point string (which can then be parsed to get latitude and longitude) and you can return calculated values like distance.
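Since the point string comes back as WKT text (for example "POINT (-121.5272 45.712113)"), a small parsing helper can pull latitude and longitude back out. This is a minimal sketch of my own - the ParsePoint name and the exact ToString() output format are assumptions, not part of the sample project - and it assumes System and System.Globalization are imported:

public static void ParsePoint(string wkt, out double latitude, out double longitude)
{
    // Expects something like "POINT (-121.5272 45.712113)":
    // longitude first, then latitude, inside the parentheses
    int start = wkt.IndexOf('(');
    int end = wkt.IndexOf(')');
    string body = wkt.Substring(start + 1, end - start - 1).Trim();

    var parts = body.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
    longitude = double.Parse(parts[0], CultureInfo.InvariantCulture);
    latitude = double.Parse(parts[1], CultureInfo.InvariantCulture);
}

You would call this against the GeoString value returned by the query in the next example.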
Here's an example of how to retrieve some geo data into a result set using EF's SqlQuery() method:

[TestMethod]
public void RawSqlEfQueryTest()
{
    var sqlFormat = @"
        DECLARE @s geography
        SET @s = geography::STGeomFromText('POINT({0} {1})', 4326);
        SELECT Address,
               Location.ToString() as GeoString,
               @s.STDistance(Location) as Distance
        FROM GeoLocations
        ORDER BY Distance";
    var sql = string.Format(sqlFormat, -121.527200, 45.712113);

    var context = new GeoLocationContext();
    var locations = context.Database.SqlQuery<ResultData>(sql);

    Assert.IsTrue(locations.Count() > 0);

    foreach (var location in locations)
    {
        Console.WriteLine(location.Address + " " +
                          location.GeoString + " " +
                          location.Distance);
    }
}

public class ResultData
{
    public string GeoString { get; set; }
    public double Distance { get; set; }
    public string Address { get; set; }
}

Hopefully you don't have to resort to this approach, as it's fairly limited. Using the new DbGeography/DbGeometry types makes this sort of thing so much easier. When I had to use code like this before, I typically ended up retrieving only the PKs and then running another query with just the PKs to retrieve the actual underlying DbContext entities. This was very inefficient and tedious, but it did work.

Summary

For the current project I'm working on we actually made the switch to .NET 4.5 purely for the spatial features in EF 5.0. This app relies heavily on spatial queries and it was worth taking a chance with pre-release code to get this ease of integration, as opposed to manually falling back to stored procedures or raw SQL string queries for spatial specific results. Using native Entity Framework code makes life a lot easier than the alternatives. It might be a late addition to Entity Framework, but it sure makes location calculations and storage easy. Where do you want to go today? ;-)

Resources

Download Sample Project

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in ADO.NET  Sql Server  .NET

    Read the article

  • How can I bend an object in OpenGL?

    - by mindnoise
    Is there a way one could bend an object, like a cylinder or a plane using OpenGL? I'm an OpenGL beginner (I'm using OpenGL ES 2.0, if that matters, although I suspect, math matters most in this case, so it's somehow version independent), I understand the basics: translate, rotate, matrix transformations, etc. I was wondering if there is a technique which allows you to actually change the geometry of your objects (in this case by bending them)? Any links, tutorials or other references are welcomed!
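    One common way to do this (a sketch of my own, not from the question itself) is to deform the vertex positions directly - either on the CPU before filling the vertex buffer, or with the same math in a vertex shader for animated bending. The Vec3 type and Bend helper below are made up for illustration; the idea is to wrap the X axis onto a circular arc of a chosen radius:

    using System;

    public struct Vec3
    {
        public float X, Y, Z;
        public Vec3(float x, float y, float z) { X = x; Y = y; Z = z; }
    }

    public static class MeshDeform
    {
        // Bends geometry that runs along the X axis onto a circular arc.
        // Smaller radius = stronger bend; a very large radius approaches no bend.
        public static Vec3 Bend(Vec3 v, float radius)
        {
            float theta = v.X / radius; // arc angle covered by this vertex
            float x = (float)Math.Sin(theta) * (radius - v.Y);
            float y = radius - (float)Math.Cos(theta) * (radius - v.Y);
            return new Vec3(x, y, v.Z);
        }
    }

    Applying Bend to every vertex of a tessellated plane or cylinder (and recomputing normals afterwards) produces the bent shape; the denser the tessellation, the smoother the bend.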

    Read the article

  • F# and ArcObjects, Part 3

    - by Marko Apfel
    Today I played a little bit with IFeature sequences and piping data. The result is a calculator for the bounding box around all features in a feature class. Maybe a little bit dirty, but it was OK for learning. ;-)

    open System;;

    // reference the ArcObjects assemblies
    #I "C:\Program Files\ArcGIS\DotNet";;
    #r "ESRI.ArcGIS.System.dll";;
    #r "ESRI.ArcGIS.DataSourcesGDB.dll";;
    #r "ESRI.ArcGIS.Geodatabase.dll";;
    #r "ESRI.ArcGIS.Geometry.dll";;

    open ESRI.ArcGIS.esriSystem;;
    open ESRI.ArcGIS.DataSourcesGDB;;
    open ESRI.ArcGIS.Geodatabase;;
    open ESRI.ArcGIS.Geometry;;

    // check out a license and connect to the SDE workspace
    let aoInitialize = new AoInitializeClass();;
    let status = aoInitialize.Initialize(esriLicenseProductCode.esriLicenseProductCodeArcEditor);;

    let workspacefactory = new SdeWorkspaceFactoryClass();;
    let connection = "SERVER=okul;DATABASE=p;VERSION=sde.default;INSTANCE=sde:sqlserver:okul;USER=s;PASSWORD=g";;
    let workspace = workspacefactory.OpenFromString(connection, 0);;
    let featureWorkspace = (box workspace) :?> IFeatureWorkspace;;
    let featureClass = featureWorkspace.OpenFeatureClass("Praxair.SFG.BP_L_ROHR");;

    let queryFilter = new QueryFilterClass();;
    let featureCursor = featureClass.Search(queryFilter, true);;

    // walk a feature cursor lazily as a sequence
    let featureCursorSeq (featureCursor : IFeatureCursor) =
        let actualFeature = ref (featureCursor.NextFeature())
        seq {
            while (!actualFeature) <> null do
                yield actualFeature
                do actualFeature := featureCursor.NextFeature()
        };;

    let min x y = if x < y then x else y;;
    let max x y = if x > y then x else y;;

    let info s (x : IEnvelope) =
        printfn "%s xMin:{%f} xMax: {%f} yMin:{%f} yMax: {%f}" s x.XMin x.XMax x.YMin x.YMax;;

    // union of two envelopes
    let con (env1 : IEnvelope) (env2 : IEnvelope) =
        let env = (new EnvelopeClass()) :> IEnvelope
        env.XMin <- min env1.XMin env2.XMin
        env.XMax <- max env1.XMax env2.XMax
        env.YMin <- min env1.YMin env2.YMin
        env.YMax <- max env1.YMax env2.YMax
        info "Intermediate" env
        env;;

    // seed the fold with the extent of an arbitrary feature
    let feature = featureClass.GetFeature(100);;
    let ext = feature.Extent;;

    let BoundingBox featureClassName =
        let featureClass = featureWorkspace.OpenFeatureClass(featureClassName)
        let queryFilter = new QueryFilterClass()
        let featureCursor = featureClass.Search(queryFilter, true)
        let featureCursorSeq (featureCursor : IFeatureCursor) =
            let actualFeature = ref (featureCursor.NextFeature())
            seq {
                while (!actualFeature) <> null do
                    yield actualFeature
                    do actualFeature := featureCursor.NextFeature()
            }
        featureCursorSeq featureCursor
        |> Seq.map (fun feature -> (!feature).Extent)
        |> Seq.fold (fun (acc : IEnvelope) a ->
               info "Intermediate" acc
               (con acc a)) ext;;

    let boundingBox = BoundingBox "Praxair.SFG.BP_L_ROHR";;
    info "Ende-Info:" boundingBox;;

    Read the article
