Search Results

Search found 3287 results on 132 pages for 'bing maps'.


  • A typical lifecycle of a Hibernate object in a web app - ?

    - by EugeneP
    Please describe a typical lifecycle of a Hibernate object (one that maps to a DB table) in a web app. Suppose you create a new instance of an object and persist it in the DB. During the app's lifetime you'll be working on a detached object, and finally you need to update it in the database, for example on exit. What does this look like with Hibernate and Spring? P.S. Can transactions and sessions live across servlet transitions, so that we open one session and use it in all servlets without needing to reopen it?

    Read the article

  • SQL SERVER – World Shapefile Download and Upload to Database – Spatial Database

    - by pinaldave
    During my recent training, I was asked by a student if I knew a place where he could download spatial files for all the countries around the world, and whether there is a way to upload shapefiles to a database. Here is a quick tutorial for it. VDS Technologies has spatial files for every location, available for free. You can download the spatial file from here. If you cannot find the spatial file you are looking for, please leave a comment here, and I will send you the necessary details. Unzip the file to a folder and it will have the following content. Then, download the Shape2SQL tool from SharpGIS. This is one of the best tools available to convert shapefiles to SQL tables. Afterwards, run the .exe file. When the file is run for the first time, it will ask for the database properties. Provide your database details. Select the appropriate shapefiles and the tool will fill in the essential details automatically. If you do not want to create the index on the column, uncheck the box beside it. The screenshot below simply explains the procedure. You also have to be careful regarding your data, whether it is GEOMETRY or GEOGRAPHY. In this example, it is GEOMETRY data. Click “Upload to Database”. It will show you the uploading process. Once the shapefile is uploaded, close the application and open SQL Server Management Studio (SSMS). Run the following code in the SSMS Query Editor.

        USE Spatial
        GO
        SELECT *
        FROM dbo.world
        GO

    This will show the complete map of the world after you click on Spatial Results in the Spatial tab. In the Spatial Results set, the Zoom feature is available. From the Select label column, choose the country name in order to show the country name overlaying the country borders. Let me know if this tutorial is helpful. I am planning to write a few more posts about this later. Note: Please note that the images displayed here do not reflect the original political boundaries. These data are pretty old and can probably draw incorrect maps as well. I have personally spotted several parts of the map where some countries are located a little bit inaccurately. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Add-On, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Spatial, SQL Tips and Tricks, SQL Utility, T SQL, Technology
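    If you'd rather read the uploaded shapes from application code instead of SSMS, a minimal C# sketch along these lines should work. The Spatial database and dbo.world table come from the steps above, but the NAME and geom column names and the connection string are assumptions - Shape2SQL derives column names from the shapefile, so verify them first with SELECT TOP 1 * FROM dbo.world.

        using System;
        using System.Data.SqlClient;

        class WorldShapeReader
        {
            static void Main()
            {
                // Assumed connection string - point it at the server where
                // you created the Spatial database from the tutorial.
                var connStr = "Server=.;Database=Spatial;Integrated Security=true";
                using (var conn = new SqlConnection(connStr))
                {
                    conn.Open();
                    // NAME and geom are hypothetical column names taken from
                    // the shapefile - check what Shape2SQL actually created.
                    var cmd = new SqlCommand(
                        "SELECT NAME, geom.ToString() FROM dbo.world", conn);
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // ToString() on a geometry value returns Well-Known Text
                            string wkt = reader.GetString(1);
                            Console.WriteLine("{0}: {1}...", reader.GetString(0),
                                wkt.Substring(0, Math.Min(40, wkt.Length)));
                        }
                    }
                }
            }
        }

    The Well-Known Text form printed here is handy because most mapping libraries can consume it directly.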

    Read the article

  • Using jQuery to POST Form Data to an ASP.NET ASMX AJAX Web Service

    - by Rick Strahl
    The other day I got a question about how to call an ASP.NET ASMX Web Service or Page Methods with the POST data from a Web Form (or any HTML form for that matter). The idea is that you should be able to call an endpoint URL, send it regular urlencoded POST data and then use Request.Form[] to retrieve the posted data as needed. My first reaction was that you can’t do it, because ASP.NET ASMX AJAX services (as well as Page Methods and WCF REST AJAX Services) require that the content POSTed to the server is posted as JSON and sent with an application/json or application/x-javascript content type. IOW, you can’t directly call an ASP.NET AJAX service with regular urlencoded data. Note that there are other ways to accomplish this. You can use ASP.NET MVC and a custom route, an HTTP Handler or separate ASPX page, or even a WCF REST service that’s configured to use non-JSON inputs. However, if you want to use an ASP.NET AJAX service (or Page Methods), with a little bit of setup work it’s actually quite easy to capture all the form variables on the client and ship them up to the server. The basic steps needed to make this happen are:

    1. Capture form variables into an array on the client with jQuery’s .serializeArray() function
    2. Use $.ajax() or my ServiceProxy class to make an AJAX call to the server to send this array
    3. On the server create a custom type that matches the .serializeArray() name/value structure
    4. Create extension methods on NameValue[] to easily extract form variables
    5. Create a [WebMethod] that accepts this name/value type as an array (NameValue[])

    This seems like a lot of work, but realize that steps 3 and 4 are a one-time setup that can be reused in your entire site or multiple applications. Let’s look at a short example with a base form of fields to ship to the server. The HTML for this form looks something like this:

        <div id="divMessage" class="errordisplay" style="display: none">
        </div>
        <div>
            <div class="label">Name:</div>
            <div><asp:TextBox runat="server" ID="txtName" /></div>
        </div>
        <div>
            <div class="label">Company:</div>
            <div><asp:TextBox runat="server" ID="txtCompany"/></div>
        </div>
        <div>
            <div class="label" ></div>
            <div>
                <asp:DropDownList runat="server" ID="lstAttending">
                    <asp:ListItem Text="Attending" Value="Attending"/>
                    <asp:ListItem Text="Not Attending" Value="NotAttending" />
                    <asp:ListItem Text="Maybe Attending" Value="MaybeAttending" />
                    <asp:ListItem Text="Not Sure Yet" Value="NotSureYet" />
                </asp:DropDownList>
            </div>
        </div>
        <div>
            <div class="label">Special Needs:<br />
                <small>(check all that apply)</small></div>
            <div>
                <asp:ListBox runat="server" ID="lstSpecialNeeds" SelectionMode="Multiple">
                    <asp:ListItem Text="Vegitarian" Value="Vegitarian" />
                    <asp:ListItem Text="Vegan" Value="Vegan" />
                    <asp:ListItem Text="Kosher" Value="Kosher" />
                    <asp:ListItem Text="Special Access" Value="SpecialAccess" />
                    <asp:ListItem Text="No Binder" Value="NoBinder" />
                </asp:ListBox>
            </div>
        </div>
        <div>
            <div class="label"></div>
            <div>
                <asp:CheckBox ID="chkAdditionalGuests" Text="Additional Guests" runat="server" />
            </div>
        </div>
        <hr />
        <input type="button" id="btnSubmit" value="Send Registration" />

    The form includes a few different kinds of form fields, including a multi-selection listbox to demonstrate retrieving multiple values. Setting up the Server Side [WebMethod] The [WebMethod] on the server we’re going to call is going to be very simple and will just capture the content of these values and echo them back as a formatted HTML string.
    Obviously this is overly simplistic but it serves to demonstrate the simple point of capturing the POST data on the server in an AJAX callback.

        public class PageMethodsService : System.Web.Services.WebService
        {
            [WebMethod]
            public string SendRegistration(NameValue[] formVars)
            {
                StringBuilder sb = new StringBuilder();
                sb.AppendFormat("Thank you {0}, <br/><br/>",
                    HttpUtility.HtmlEncode(formVars.Form("txtName")));
                sb.AppendLine("You've entered the following: <hr/>");
                foreach (NameValue nv in formVars)
                {
                    // strip out ASP.NET form vars like _ViewState/_EventValidation
                    if (!nv.name.StartsWith("__"))
                    {
                        if (nv.name.StartsWith("txt") || nv.name.StartsWith("lst") || nv.name.StartsWith("chk"))
                            sb.Append(nv.name.Substring(3));
                        else
                            sb.Append(nv.name);
                        sb.AppendLine(": " + HttpUtility.HtmlEncode(nv.value) + "<br/>");
                    }
                }
                sb.AppendLine("<hr/>");
                string[] needs = formVars.FormMultiple("lstSpecialNeeds");
                if (needs == null)
                    sb.AppendLine("No Special Needs");
                else
                {
                    sb.AppendLine("Special Needs: <br/>");
                    foreach (string need in needs)
                    {
                        sb.AppendLine("&nbsp;&nbsp;" + need + "<br/>");
                    }
                }
                return sb.ToString();
            }
        }

    The key feature of this method is that it receives a custom type called NameValue[], which is an array of NameValue objects that map the structure that the jQuery .serializeArray() function generates. There are two custom types involved in this: the actual NameValue type and a NameValueExtensions class that defines a couple of extension methods for the NameValue[] array type to allow for single (.Form()) and multiple (.FormMultiple()) value retrieval by name. The NameValue class simply maps the structure of the array elements of .serializeArray():

        public class NameValue
        {
            public string name { get; set; }
            public string value { get; set; }
        }

    The extension method class defines the .Form() and .FormMultiple() methods to allow easy retrieval of form variables from the returned array:

        /// <summary>
        /// Simple NameValue class that maps name and value
        /// properties that can be used with jQuery's
        /// $.serializeArray() function and JSON requests
        /// </summary>
        public static class NameValueExtensionMethods
        {
            /// <summary>
            /// Retrieves a single form variable from the list of
            /// form variables stored
            /// </summary>
            /// <param name="formVars"></param>
            /// <param name="name">formvar to retrieve</param>
            /// <returns>value or string.Empty if not found</returns>
            public static string Form(this NameValue[] formVars, string name)
            {
                var matches = formVars.Where(nv => nv.name.ToLower() == name.ToLower()).FirstOrDefault();
                if (matches != null)
                    return matches.value;
                return string.Empty;
            }

            /// <summary>
            /// Retrieves multiple selection form variables from the list of
            /// form variables stored.
            /// </summary>
            /// <param name="formVars"></param>
            /// <param name="name">The name of the form var to retrieve</param>
            /// <returns>values as string[] or null if no match is found</returns>
            public static string[] FormMultiple(this NameValue[] formVars, string name)
            {
                var matches = formVars.Where(nv => nv.name.ToLower() == name.ToLower()).Select(nv => nv.value).ToArray();
                if (matches.Length == 0)
                    return null;
                return matches;
            }
        }

    Using these extension methods it’s easy to retrieve individual values from the array:

        string name = formVars.Form("txtName");

    or multiple values:

        string[] needs = formVars.FormMultiple("lstSpecialNeeds");
        if (needs != null)
        {
            // do something with matches
        }

    Using these functions in the SendRegistration method it’s easy to retrieve a few form variables directly (txtName and the multiple selections of lstSpecialNeeds) or to iterate over the whole list of values. Of course this is an overly simple example – in a typical app you’d probably want to validate the input data, save it to the database and then return some sort of confirmation or possibly an updated data list back to the client. Since this is a full AJAX service callback, realize that you don’t have to return simple string values – you can return any of the supported result types (which are most serializable types) including complex hierarchical objects and arrays that make sense to your client code. POSTing Form Variables from the Client to the AJAX Service Calling the AJAX service method from the client is straightforward and requires only a little native jQuery plus JSON serialization functionality. To start, add jQuery and the json2.js library to your page:

        <script src="Scripts/jquery.min.js" type="text/javascript"></script>
        <script src="Scripts/json2.js" type="text/javascript"></script>

    json2.js can be found here (be sure to remove the first line from the file): http://www.json.org/json2.js It’s required to handle JSON serialization for those browsers that don’t support it natively. With those script references in the document, let’s hook up the button click handler and call the service:

        $(document).ready(function () {
            $("#btnSubmit").click(sendRegistration);
        });
        function sendRegistration() {
            var arForm = $("#form1").serializeArray();
            $.ajax({
                url: "PageMethodsService.asmx/SendRegistration",
                type: "POST",
                contentType: "application/json",
                data: JSON.stringify({ formVars: arForm }),
                dataType: "json",
                success: function (result) {
                    var jEl = $("#divMessage");
                    jEl.html(result.d).fadeIn(1000);
                    setTimeout(function () { jEl.fadeOut(1000) }, 5000);
                },
                error: function (xhr, status) {
                    alert("An error occurred: " + status);
                }
            });
        }

    The key feature in this code is the $("#form1").serializeArray(); call, which serializes all the form fields of form1 into an array. Each form var is represented as an object with a name/value property. This array is then serialized into JSON with: JSON.stringify({ formVars: arForm }) The format for the parameter list in AJAX service calls is an object with one property for each parameter of the method. In this case it’s a single parameter called formVars and we’re assigning the array of form variables to it. The URL to call on the server is the name of the Service (or ASPX Page for Page Methods) plus the name of the method to call. On return, the success callback receives the result from the AJAX callback, which in this case is the formatted string that is simply assigned to an element in the form and displayed.
    Remember the result type is whatever the method returns – it doesn’t have to be a string. Note that ASP.NET AJAX and WCF REST return JSON data as a wrapped object, so the result has a ‘d’ property that holds the actual response: jEl.html(result.d).fadeIn(1000); Slightly simpler: Using ServiceProxy.js If you want things slightly cleaner you can use the ServiceProxy.js class I’ve mentioned here before. The ServiceProxy class handles a few things for calling ASP.NET and WCF services more cleanly:

    - Automatic JSON encoding
    - Automatic fix up of the ‘d’ wrapper property
    - Automatic Date conversion on the client
    - Simplified error handling
    - Reusable and abstracted

    To add the service proxy add:

        <script src="Scripts/ServiceProxy.js" type="text/javascript"></script>

    and then change the code to this slightly simpler version:

        <script type="text/javascript">
            proxy = new ServiceProxy("PageMethodsService.asmx/");
            $(document).ready(function () {
                $("#btnSubmit").click(sendRegistration);
            });
            function sendRegistration() {
                var arForm = $("#form1").serializeArray();
                proxy.invoke("SendRegistration", { formVars: arForm },
                    function (result) {
                        var jEl = $("#divMessage");
                        jEl.html(result).fadeIn(1000);
                        setTimeout(function () { jEl.fadeOut(1000) }, 5000);
                    },
                    function (error) { alert(error.message); }
                );
            }
        </script>

    The code is not very different, but it makes the call as simple as specifying the method to call, the parameters to pass and the actions to take on success and error. No more remembering which content type and data types to use and manually serializing to JSON. This code also removes the “d” property processing in the response and provides more consistent error handling, in that the call always returns an error object regardless of a server error or a communication error, unlike the native $.ajax() call. Either approach works and both are pretty easy. The ServiceProxy really pays off if you use lots of service calls, and especially if you need to deal with date values returned from the server on the client. Summary Making Web Service calls and getting POST data to the server is not always the best option – ASP.NET and WCF AJAX services are meant to work with data in objects. However, in some situations it’s simply easier to POST all the captured form data to the server instead of mapping all properties from the input fields to some sort of message object first. For this approach the above POST mechanism is useful as it puts the parsing of the data on the server and leaves the client code lean and mean. It’s even easy to build a custom model binder on the server that can map the array values to properties on an object generically, with some relatively simple Reflection code and without having to manually map form vars to properties and do string conversions (a minimal sketch of this idea follows at the end of this article). Keep in mind though that other approaches also abound. ASP.NET MVC makes it pretty easy to create custom routes to data, and the built-in model binder makes it very easy to deal with inbound form POST data in its original urlencoded format. The West Wind Web Toolkit also includes functionality for AJAX callbacks using plain POST values. All that’s needed is a Method parameter passed as a query string or form value to specify the method to be called on the server. After that the content type is completely optional and up to the consumer. It’d be nice if the ASP.NET AJAX Service and WCF AJAX Services weren’t so tightly bound to the content type, so that you could more easily create open access service endpoints that can take advantage of urlencoded data that is everywhere in existing pages.
    It would make it much easier to create basic REST endpoints without complicated service configuration. Ah, one can dream! In the meantime I hope this article has given you some ideas on how you can transfer POST data from the client to the server using JSON – it might be useful in other scenarios beyond ASP.NET AJAX services as well. Additional Resources ServiceProxy.js A small JavaScript library that wraps $.ajax() to call ASP.NET AJAX and WCF AJAX Services. Includes date parsing extensions to the JSON object, a global dataFilter for processing dates on all jQuery JSON requests, provides cleanup for the .NET wrapped message format and handles errors in a consistent fashion. Making jQuery Calls to WCF/ASMX with a ServiceProxy Client More information on calling ASMX and WCF AJAX services with jQuery and some more background on ServiceProxy.js. Note the implementation has slightly changed since the article was written. ww.jquery.js The West Wind Web Toolkit also includes ServiceProxy.js in the West Wind jQuery extension library. This version is slightly different and includes embedded JSON encoding/decoding based on json2.js. © Rick Strahl, West Wind Technologies, 2005-2010. Posted in jQuery, ASP.NET, AJAX
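    Following up on the custom model binder idea mentioned in the summary: here is a minimal sketch of what such a generic mapper could look like, reusing the NameValue type from this article. The Bind<T> helper, the Registration example type and the naive Convert.ChangeType handling are illustrations only, not part of the West Wind toolkit.

        using System;
        using System.Reflection;

        public static class NameValueBinder
        {
            /// <summary>
            /// Copies posted NameValue pairs onto matching public properties
            /// of a new T instance. Property names must match the form var
            /// names (case-insensitive); values are converted from string.
            /// </summary>
            public static T Bind<T>(NameValue[] formVars) where T : new()
            {
                var target = new T();
                foreach (NameValue nv in formVars)
                {
                    PropertyInfo prop = typeof(T).GetProperty(nv.name,
                        BindingFlags.Public | BindingFlags.Instance | BindingFlags.IgnoreCase);
                    if (prop == null || !prop.CanWrite)
                        continue;  // skip form vars with no matching property

                    // Naive conversion - a production binder would handle
                    // nullables, enums and conversion failures explicitly.
                    object value = Convert.ChangeType(nv.value, prop.PropertyType);
                    prop.SetValue(target, value, null);
                }
                return target;
            }
        }

        // Hypothetical POCO whose property names match the posted form vars:
        public class Registration
        {
            public string txtName { get; set; }
            public string txtCompany { get; set; }
        }

        // Inside a [WebMethod]:
        // var reg = NameValueBinder.Bind<Registration>(formVars);

    The point of the sketch is simply that the one generic helper replaces per-form mapping code; the ASP.NET ViewState vars would still need to be filtered out as shown in SendRegistration.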

    Read the article

  • Watch YouTube in Windows 7 Media Center

    - by Mysticgeek
    Have you been looking for a way to watch your favorite viral videos from YouTube and Dailymotion from the couch? Today we take a look at an easy-to-use plugin which allows you to watch streaming video in Windows 7 Media Center. Install Macrotube The first thing we need to do is download and install the plugin called Macrotube (link below), following the defaults through the install wizard. After it’s installed, open Windows 7 Media Center and you’ll find Macrotube in the main menu. Currently there are three services available…YouTube, Dailymotion, and MSN Soapbox. Just select the service where you want to check out some videos. You can browse through different subjects or categories… Or you can search the service by typing in what you’re looking for…with your remote or keyboard. There is the ability to drill down your search results by date, rating, views, and relevance. There are a few settings available, such as the language beta, auto updates, and appearance. Now just kick back and browse through the different services and watch what you want from the comfort of your couch or on your computer. Conclusion This neat project is still in development and the developer is continuing to add changes through updates. It only works with Windows 7 Media Center, but there is a 32 & 64-bit version. Sometimes we experienced certain videos that wouldn’t play, and it did crash a few times, but that is to be expected with a work in progress. But overall, this is a cool plugin that will allow you to watch your favorite online content from WMC. Download Macrotube and get more details and troubleshooting help from the GreenButton forum.

    Read the article

  • The Importance of a Security Assessment - by Michael Terra, Oracle

    - by Darin Pendergraft
    Today's blog was written by Michael Terra, who was the Subject Matter Expert for the recently announced Oracle Online Security Assessment. You can take the Online Assessment here. Over the past decade, IT security has become a recognized and respected business discipline. Several factors have contributed to IT security becoming a core business and organizational enabler, including, but not limited to, increased external threats and increased regulatory pressure. Security is also viewed as a key enabler for strategic corporate activities such as mergers and acquisitions. Now, the challenge for senior security professionals is to develop an ongoing dialogue within their organizations about the importance of information security and how it can impact their organization's strategic objectives and mission. Conducting regular security assessments across the IT and physical infrastructure has become increasingly important. Security standards and frameworks, such as the international standard ISO 27001, are increasingly being adopted by organizations and their business partners as proof of their security posture, and security assessments are a great way to ensure continued alignment with these frameworks. Oracle offers a number of different security assessments covering a broad range of technologies. Some of these are short engagements conducted for free with our strategic customers and partners. Others are longer-term paid engagements delivered by Oracle Consulting Services or one of our partners. The goal of a security assessment (also known as a security audit or security review) is to ensure that necessary security controls are integrated into the design and implementation of a project, application or technology. A properly completed security assessment should provide documentation outlining any security gaps that exist in an infrastructure and the associated risks for those gaps. With that knowledge, an organization can choose to either mitigate, transfer, avoid or accept the risk. One example of an Oracle offering is a Security Readiness Assessment: The Oracle Security Readiness Assessment is a practical security architecture review focused on aligning an organization's enterprise security architecture to its business principles and strategic objectives. The service will establish a multi-phase security architecture roadmap focused on supporting new and existing business initiatives. Offering Overview The Security Readiness Assessment will:

    - Define an organization's current security posture and provide a roadmap to a desired future state architecture by mapping security solutions to business goals
    - Incorporate commonly accepted security architecture concepts to streamline an organization's security vision from strategy to implementation
    - Define the people, process and technology implications of the desired future state architecture

    The objective is to deliver cohesive, best-practice security architectures spanning multiple domains that are unique and specific to the context of your organization. Offering Details The Oracle Security Readiness Assessment is a multi-stage process with a dedicated Oracle Security team supporting your organization.
    During the course of this free engagement, the team will focus on the following:

    - Review your current business operating model and supporting IT security structures and processes
    - Partner with your organization to establish a future state security architecture leveraging Oracle's reference architectures, capability maps, and best practices
    - Provide guidance and recommendations on governance practices for the rollout and adoption of your future state security architecture
    - Create an initial business case for the adoption of the future state security architecture

    If you are interested in finding out more, ask your Sales Consultant or Account Manager for details.

    Read the article

  • [MISC GEEKERY] Lucid Lynx to Come Loaded with Ubuntu One Music Store

    - by Vivek
    Ubuntu 10.04 (code name Lucid Lynx) will come loaded with the Ubuntu One music store. Rhythmbox will have the Ubuntu One music store integrated in it. It’ll also allow users to download purchased music to their local machine. Ubuntu One Music Store Users will be able to access the Ubuntu One music store from the sidebar of Rhythmbox. The music store is a web page that opens in the Rhythmbox player. There are albums listed on the home page of the Ubuntu One music store page. The Ubuntu One music store is powered by 7digital, which is a leading digital B2B media delivery company based in London and operating globally. Canonical, the company behind Ubuntu, has partnered with 7digital to bring the music store to its users, integrating it with Rhythmbox and its cloud storage service Ubuntu One, which was launched last year. The home screen of the Ubuntu One music store displays popular albums and functionality to browse and search. You can search for Artists, Tracks, Albums, or a combination of all three. Users will also be able to browse the store alphabetically, or based on different music genres. Once you select a specific artist, all their available albums are arranged in a grid. Once an album is selected, you’ll be able to download specific songs or the whole album. You’ll also be allowed to preview different songs for 60 seconds. You’ll be able to buy tracks using a credit card or with PayPal. The purchased tracks will be visible under Library \ Purchased from Ubuntu One. The downloaded tracks are also synced with your Ubuntu One account. This means that you’ll be able to access your tracks from anywhere on the web. The default Ubuntu One account comes with 2 GB free storage; however, you can also purchase additional space if you need it. All the music is in MP3 format, which is not supported by default in Ubuntu. However, you can get MP3 playback functionality using the GStreamer multimedia framework. Conclusion All in all, the Ubuntu One music store is a positive move to enhance the user experience and increase the popularity of Canonical by bringing Ubuntu closer to regular users. This would also allow Canonical to make some revenue in collaboration with 7digital. Ubuntu One Music Store Wiki

    Read the article

  • Basic Spatial Data with SQL Server and Entity Framework 5.0

    - by Rick Strahl
    In my most recent project we needed to do a bit of geo-spatial referencing. While spatial features have been in SQL Server for a while, using those features inside of .NET applications hasn't been as straightforward as it could be, because .NET doesn't natively support spatial types. There are workarounds for this with a few custom projects like SharpMap, or a hack using the SQL Server specific geo types found in the Microsoft.SqlServer.Types assembly that ships with SQL Server. While these approaches work for manipulating spatial data from .NET code, they didn't work with database access if you're using Entity Framework. Other ORM vendors have been rolling their own versions of spatial integration. In Entity Framework 5.0 running on .NET 4.5, the Microsoft ORM finally adds support for spatial types as well. In this post I'll describe basic geography features that deal with a single location and distance calculations, which is probably the most common usage scenario. SQL Server Transact-SQL Syntax for Spatial Data Before we look at how things work with Entity Framework, let's take a look at how SQL Server allows you to use spatial data, to get an understanding of the underlying semantics. The following SQL examples should work with SQL 2008 and forward. Let's start by creating a test table that includes a Geography field and also a pair of Long/Lat fields that demonstrate how you can work with the geography functions even if you don't have geography/geometry fields in the database. Here's the CREATE command:

        CREATE TABLE [dbo].[Geo](
            [id] [int] IDENTITY(1,1) NOT NULL,
            [Location] [geography] NULL,
            [Long] [float] NOT NULL,
            [Lat] [float] NOT NULL
        )

    Now using plain SQL you can insert data into the table using the geography::STGeomFromText SQL CLR function:

        insert into Geo( Location, long, lat )
        values ( geography::STGeomFromText('POINT(-121.527200 45.712113)', 4326), -121.527200, 45.712113 )
        insert into Geo( Location, long, lat )
        values ( geography::STGeomFromText('POINT(-121.517265 45.714240)', 4326), -121.517265, 45.714240 )
        insert into Geo( Location, long, lat )
        values ( geography::STGeomFromText('POINT(-121.511536 45.714825)', 4326), -121.511536, 45.714825 )

    The STGeomFromText function accepts a string that points to a geometric item (a point here, but it can also be a line, path, polygon and many others). You also need to provide an SRID (Spatial Reference System Identifier), which is an integer value that determines the rules for how geography/geometry values are calculated and returned. For mapping/distance functionality you typically want to use 4326, as this is the format used by most mapping software and geo-location libraries like Google and Bing. The spatial data in the Location field is stored in binary format. Once the location data is in the database, you can query the data and do simple distance computations very easily. For example, calculating the distance of each of the values in the database to another spatial point is very easy. Distance calculations compare two points in space using a direct line calculation. For our example I'll compare a new point to all the points in the database.
    Using the Location field, the SQL looks like this:

        -- create a source point
        DECLARE @s geography
        SET @s = geography::STGeomFromText('POINT(-121.527200 45.712113)', 4326);

        --- return the ids
        select ID,
               Location as Geo,
               Location.ToString() as Point,
               @s.STDistance(Location) as distance
        from Geo
        order by distance

    The code defines a new point which is the base point to compare each of the values to. You can also compare values from the database directly, but typically you'll want to match a location to another location and determine the difference, for which you can use the geography::STDistance function. The STDistance function returns the straight-line distance between the passed in point and the point in the database field. The result for SRID 4326 is always in meters. Notice that the first value passed was the same point, so the difference is 0. The other two points are two points here in town in Hood River a little ways away - 808 and 1256 meters respectively. Notice also that you can order the result by the resulting distance, which effectively gives you results that are ordered radially out from closer to further away. This is great for searches of points of interest near a central location (YOU typically!). These geolocation functions are also available to you if you don't use the Geography/Geometry types, but plain float values. It's a little more work, as each point has to be created in the query using the string syntax, but the following code doesn't use a geography field and produces the same result as the previous query:

        --- using float fields
        select ID,
               geography::STGeomFromText('POINT(' + STR(long,15,7) + ' ' + STR(lat,15,7) + ')', 4326),
               geography::STGeomFromText('POINT(' + STR(long,15,7) + ' ' + STR(lat,15,7) + ')', 4326).ToString(),
               @s.STDistance(geography::STGeomFromText('POINT(' + STR(long,15,7) + ' ' + STR(lat,15,7) + ')', 4326)) as distance
        from geo
        order by distance

    Spatial Data in the Entity Framework Prior to Entity Framework 5.0 on .NET 4.5, consuming the data above required using stored procedures or raw SQL commands to access the spatial data. In Entity Framework 5, however, Microsoft introduced the new DbGeometry and DbGeography types. These immutable location types provide a bunch of functionality for manipulating spatial points using geometry functions, which in turn can be used to do common spatial queries like I described in the SQL syntax above. The DbGeography/DbGeometry types are immutable, meaning that you can't write to them once they've been created. They are a bit odd in that you need to use factory methods in order to instantiate them - they have no constructor() and you can't assign to properties like Latitude and Longitude. Creating a Model with Spatial Data Let's start by creating a simple Entity Framework model that includes a Location property of type DbGeography:

        public class GeoLocationContext : DbContext
        {
            public DbSet<GeoLocation> Locations { get; set; }
        }

        public class GeoLocation
        {
            public int Id { get; set; }
            public DbGeography Location { get; set; }
            public string Address { get; set; }
        }

    That's all there is to it. When you run this now against SQL Server, you get a Geography field for the Location property, which looks the same as the Location field in the SQL examples earlier. Adding Spatial Data to the Database Next let's add some data to the table that includes some latitude and longitude data.
    An easy way to find lat/long locations is to use Google Maps to pinpoint your location, then right click and click on What's Here. Click on the green marker to get the GPS coordinates. To add the actual geolocation data, create an instance of the GeoLocation type and use the DbGeography.PointFromText() factory method to create a new point to assign to the Location property:

        [TestMethod]
        public void AddLocationsToDataBase()
        {
            var context = new GeoLocationContext();

            // remove all
            context.Locations.ToList().ForEach(loc => context.Locations.Remove(loc));
            context.SaveChanges();

            var location = new GeoLocation()
            {
                // Create a point using native DbGeography Factory method
                Location = DbGeography.PointFromText(
                    string.Format("POINT({0} {1})", -121.527200, 45.712113), 4326),
                Address = "301 15th Street, Hood River"
            };
            context.Locations.Add(location);

            location = new GeoLocation()
            {
                Location = CreatePoint(45.714240, -121.517265),
                Address = "The Hatchery, Bingen"
            };
            context.Locations.Add(location);

            location = new GeoLocation()
            {
                // Create a point using a helper function (lat/long)
                Location = CreatePoint(45.708457, -121.514432),
                Address = "Kaze Sushi, Hood River"
            };
            context.Locations.Add(location);

            location = new GeoLocation()
            {
                Location = CreatePoint(45.722780, -120.209227),
                Address = "Arlington, OR"
            };
            context.Locations.Add(location);

            context.SaveChanges();
        }

    As promised, a DbGeography object has to be created with one of the static factory methods provided on the type, as the Location.Longitude and Location.Latitude properties are read only. Here I'm using PointFromText(), which uses a "Well Known Text" format to specify spatial data. In the first example I'm specifying to create a Point from a longitude and latitude value, using an SRID of 4326 (just like earlier in the SQL examples). You'll probably want to create a helper method to make the creation of Points easier, to avoid that string format and instead just pass in a couple of double values. Here's my helper called CreatePoint that's used for all but the first point creation in the sample above:

        public static DbGeography CreatePoint(double latitude, double longitude)
        {
            var text = string.Format(CultureInfo.InvariantCulture.NumberFormat,
                                     "POINT({0} {1})", longitude, latitude);
            // 4326 is most common coordinate system used by GPS/Maps
            return DbGeography.PointFromText(text, 4326);
        }

    Using the helper the syntax becomes a bit cleaner, requiring only a latitude and longitude respectively. Note that my method intentionally swaps the parameters around, because Latitude and Longitude is the common format I've seen with mapping libraries (especially Google Mapping/Geolocation APIs with their LatLng type). When the context's changes are saved, the data is written into the database using the SQL Geography type, which looks the same as in the earlier SQL examples. Querying Once you have some location data in the database it's now super easy to query the data and find out the distance between locations. A common query is to ask for a number of locations that are near a fixed point - typically your current location - ordered by distance.
    Using LINQ to Entities, a query like this is easy to construct:

        [TestMethod]
        public void QueryLocationsTest()
        {
            var sourcePoint = CreatePoint(45.712113, -121.527200);
            var context = new GeoLocationContext();

            // find any locations within 5 kilometers ordered by distance
            var matches = context.Locations
                .Where(loc => loc.Location.Distance(sourcePoint) < 5000)
                .OrderBy(loc => loc.Location.Distance(sourcePoint))
                .Select(loc => new
                {
                    Address = loc.Address,
                    Distance = loc.Location.Distance(sourcePoint)
                });

            Assert.IsTrue(matches.Count() > 0);

            foreach (var location in matches)
            {
                Console.WriteLine("{0} ({1:n0} meters)", location.Address, location.Distance);
            }
        }

    This example produces:

        301 15th Street, Hood River (0 meters)
        The Hatchery, Bingen (809 meters)
        Kaze Sushi, Hood River (1,074 meters)

    The first point in the database is the same as my source point I'm comparing against, so the distance is 0. The other two are within the 5 km radius, while the Arlington location, which is 65 miles or so out, is not returned. The result is ordered by distance from closest to furthest away. In the code, I first create a source point that is the basis for comparison. The LINQ query then selects all locations that are within 5km of the source point using the Location.Distance() function, which takes a source point as a parameter. You can either use a pre-defined value as I'm doing here, or compare against another database DbGeography property (say when you have two points in the same database for things like routes). What's nice about this query syntax is that it's very clean and easy to read and understand. You can calculate the distance and also easily order by the distance to provide a result that shows locations from closest to furthest away, which is a common scenario for any application that places a user in the context of several locations. It's now super easy to accomplish this. Meters vs. Miles As with the SQL Server functions, the Distance() method returns data in meters, so if you need to work with miles or feet you need to do some conversion. Here are a couple of helpers that might be useful (they can be found in GeoUtils.cs of the sample project):

        /// <summary>
        /// Convert meters to miles
        /// </summary>
        /// <param name="meters"></param>
        /// <returns></returns>
        public static double MetersToMiles(double? meters)
        {
            if (meters == null)
                return 0F;
            return meters.Value * 0.000621371192;
        }

        /// <summary>
        /// Convert miles to meters
        /// </summary>
        /// <param name="miles"></param>
        /// <returns></returns>
        public static double MilesToMeters(double? miles)
        {
            if (miles == null)
                return 0;
            return miles.Value * 1609.344;
        }

    Using these two helpers you can query on miles like this:

        [TestMethod]
        public void QueryLocationsMilesTest()
        {
            var sourcePoint = CreatePoint(45.712113, -121.527200);
            var context = new GeoLocationContext();

            // find any locations within 5 miles ordered by distance
            var fiveMiles = GeoUtils.MilesToMeters(5);
            var matches = context.Locations
                .Where(loc => loc.Location.Distance(sourcePoint) <= fiveMiles)
                .OrderBy(loc => loc.Location.Distance(sourcePoint))
                .Select(loc => new
                {
                    Address = loc.Address,
                    Distance = loc.Location.Distance(sourcePoint)
                });

            Assert.IsTrue(matches.Count() > 0);

            foreach (var location in matches)
            {
                Console.WriteLine("{0} ({1:n1} miles)", location.Address,
                    GeoUtils.MetersToMiles(location.Distance));
            }
        }

    which produces:

        301 15th Street, Hood River (0.0 miles)
        The Hatchery, Bingen (0.5 miles)
        Kaze Sushi, Hood River (0.7 miles)

    Nice 'n simple.
    .NET 4.5 Only Note that DbGeography and DbGeometry are exclusive to Entity Framework 5.0 (not 4.4, which ships in the same NuGet package or installer) and require .NET 4.5. That's because the new DbGeometry and DbGeography (and related) types are defined in the 4.5 version of System.Data.Entity, which is a CLR assembly and is only updated by major versions of .NET. Why the decision was made to add these types to System.Data.Entity, rather than to the frequently updated EntityFramework assembly that would have possibly made this work in .NET 4.0, is beyond me, especially given that there are no native .NET framework spatial types to begin with. I find it also odd that there is no native CLR spatial type. The DbGeography and DbGeometry types are specific to Entity Framework and live in those assemblies. They will also work for general purpose, non-database spatial data manipulation, but then you are forced into having a dependency on System.Data.Entity, which seems a bit silly. There's also a System.Spatial assembly that's apparently part of WCF Data Services, which in turn doesn't work with Entity Framework. Another example of multiple teams at Microsoft not communicating and implementing the same functionality (differently) in several different places. Perplexed as I may be, for EF-specific code the Entity Framework specific types are easy to use and work well. Working with pre-.NET 4.5 Entity Framework and Spatial Data If you can't go to .NET 4.5 just yet, you can also still use spatial features in Entity Framework, but it's a lot more work as you can't use the DbContext directly to manipulate the location data. You can still run raw SQL statements to write data into the database and retrieve results using the same TSQL syntax I showed earlier, using Context.Database.ExecuteSqlCommand(). Here's code that you can use to add location data into the database:

        [TestMethod]
        public void RawSqlEfAddTest()
        {
            string sqlFormat = @"insert into GeoLocations( Location, Address) values
                ( geography::STGeomFromText('POINT({0} {1})', 4326),@p0 )";
            var sql = string.Format(sqlFormat, -121.527200, 45.712113);
            Console.WriteLine(sql);

            var context = new GeoLocationContext();
            Assert.IsTrue(context.Database.ExecuteSqlCommand(sql, "301 N. 15th Street") > 0);
        }

    Here I'm using the STGeomFromText() function to add the location data. Note that I'm using string.Format here, which usually would be a bad practice but is required here. I was unable to use ExecuteSqlCommand() and its named parameter syntax, as the longitude and latitude parameters are embedded into a string. Rest assured it's required, as the following does not work:

        string sqlFormat = @"insert into GeoLocations( Location, Address) values
            ( geography::STGeomFromText('POINT(@p0 @p1)', 4326),@p2 )";
        context.Database.ExecuteSqlCommand(sql, -121.527200, 45.712113, "301 N. 15th Street")

    Explicitly assigning the point value with string.Format works, however. There are a number of ways to query location data. You can't get the location data directly, but you can retrieve the point string (which can then be parsed to get Latitude and Longitude) and you can return calculated values like distance.
    Here's an example of how to retrieve some geo data into a result set using EF's SqlQuery method:

        [TestMethod]
        public void RawSqlEfQueryTest()
        {
            var sqlFormat = @"
        DECLARE @s geography
        SET @s = geography::STGeomFromText('POINT({0} {1})', 4326);
        SELECT Address,
               Location.ToString() as GeoString,
               @s.STDistance(Location) as Distance
        FROM GeoLocations
        ORDER BY Distance";
            var sql = string.Format(sqlFormat, -121.527200, 45.712113);

            var context = new GeoLocationContext();
            var locations = context.Database.SqlQuery<ResultData>(sql);

            Assert.IsTrue(locations.Count() > 0);

            foreach (var location in locations)
            {
                Console.WriteLine(location.Address + " " + location.GeoString + " " + location.Distance);
            }
        }

        public class ResultData
        {
            public string GeoString { get; set; }
            public double Distance { get; set; }
            public string Address { get; set; }
        }

    Hopefully you don't have to resort to this approach, as it's fairly limited. Using the new DbGeography/DbGeometry types makes this sort of thing so much easier. When I had to use code like this before, I typically ended up retrieving only the data PKs and then running another query with just the PKs to retrieve the actual underlying DbContext entities. This was very inefficient and tedious, but it did work. Summary For the current project I'm working on, we actually made the switch to .NET 4.5 purely for the spatial features in EF 5.0. This app relies heavily on spatial queries and it was worth taking a chance with pre-release code to get this ease of integration, as opposed to manually falling back to stored procedures or raw SQL string queries to return spatial-specific results. Using native Entity Framework code makes life a lot easier than the alternatives. It might be a late addition to Entity Framework, but it sure makes location calculations and storage easy. Where do you want to go today? ;-) Resources Download Sample Project © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ADO.NET, SQL Server, .NET
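    As a side note to the point above about DbGeography working for general purpose, non-database spatial math: here is a minimal sketch with no DbContext involved, reusing the Hood River coordinates from this article. It assumes only a reference to the .NET 4.5 version of System.Data.Entity.

        using System;
        using System.Data.Entity.Spatial;  // EF 5.0 types on .NET 4.5

        class GeoScratch
        {
            static void Main()
            {
                // Same Hood River area points as in the article;
                // WKT puts longitude first, SRID 4326 as before
                DbGeography home = DbGeography.PointFromText(
                    "POINT(-121.527200 45.712113)", 4326);
                DbGeography hatchery = DbGeography.PointFromText(
                    "POINT(-121.517265 45.714240)", 4326);

                // Distance() returns meters for SRID 4326 (nullable double)
                double? meters = home.Distance(hatchery);
                Console.WriteLine("{0:n0} meters apart", meters);
            }
        }

    Run as-is this should print roughly 800 meters, in line with the STDistance results from the SQL examples, without ever touching a database.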

    Read the article

  • Collision in Tiled Map - LibGDX

    - by user43353
    I have collision code that deals with left or right or top or bottom. I am using a Tiled map with LibGDX. Question is: How do I detect collision with other cells on all 4 sides, and not specifically by left/right or top/bottom? Here is my top/bottom and left/right collision code:

        private boolean isCellBlocked(float x, float y) {
            Cell cell = collisionLayer.getCell(
                    (int) (x / collisionLayer.getTileWidth()),
                    (int) (y / collisionLayer.getTileHeight()));
            return cell != null && cell.getTile() != null
                    && cell.getTile().getProperties().containsKey(blockedKey);
        }

        public boolean collidesRight() {
            for (float step = 0; step < getHeight(); step += collisionLayer.getTileHeight() / 2)
                if (isCellBlocked(getX() + getWidth(), getY() + step))
                    return true;
            return false;
        }

        public boolean collidesLeft() {
            for (float step = 0; step < getHeight(); step += collisionLayer.getTileHeight() / 2)
                if (isCellBlocked(getX(), getY() + step))
                    return true;
            return false;
        }

        public boolean collidesTop() {
            for (float step = 0; step < getWidth(); step += collisionLayer.getTileWidth() / 2)
                if (isCellBlocked(getX() + step, getY() + getHeight()))
                    return true;
            return false;
        }

        public boolean collidesBottom() {
            for (float step = 0; step < getWidth(); step += collisionLayer.getTileWidth() / 2)
                if (isCellBlocked(getX() + step, getY()))
                    return true;
            return false;
        }

    What I'm trying to achieve is simple: I'm trying to make code that will detect on all 4 sides, collidesRight + collidesLeft + collidesTop + collidesBottom, in one boolean. For some reason, I can't seem to figure it out. I tried to use Rectangles (the Java class) on the specific tile I want detected, but it was messy and I have multiple maps. Having a Rectangle (from Java's API) around the player is no problem. It's just the tiles I want detected that are the main issue, as they cause messy code when used with the Rectangle class. I'm trying to minimize the amount of code....

    Read the article

  • Welcome to my geeks blog

    - by bconlon
    Hi and welcome! I'm Bazza and this is my geeks blog. I have 20 years of Visual Studio, mainly C++, MFC, ATL and now, thankfully, C#, and I am embarking on the new world (well, new to me) of WPF, so I thought I would try and capture my successful...and not so successful...WPF experiences with the geek world. So where to start? WPF? What I know so far... From wiki..."Windows Presentation Foundation (or WPF) is a graphical subsystem for rendering user interfaces in Windows-based applications." Hmm, great, but didn't MFC, ATL (my head hurt with that one), and .Net all have APIs to allow me to code against the Windows Graphical Device Interface (GDI)? "Rather than relying on the older GDI subsystem, WPF utilizes DirectX. WPF attempts to provide a consistent programming model for building applications and provides a separation between the user interface and the business logic." OK, different drawing code, same Windows, and weren't we always taught to separate our UI, Business Layer and Data Access Layer? "WPF employs XAML, a derivative of XML, to define and link various UI elements. WPF applications can be deployed as standalone desktop programs, or hosted as an embedded object in a website." Cool, now we're getting somewhere. So when they say separation they really mean separation. The crux of this appears to be that you can have creative people writing the UI and making it attractive and intuitive to use, whilst the geeks concentrate on writing the Business and Data Access stuff. XAML (eXtensible Application Markup Language) maps XML elements and attributes directly to Common Language Runtime (CLR) object instances, properties and events. True separation of the View and Model. WPF also provides logical separation of a control from its appearance. In a traditional Windows system, all Controls have a base class containing a Windows handle and each Control knows how to render itself. In WPF, the controls are more like those in a Web Browser using Cascading Style Sheets; they are not wrappers for standard Windows Controls. Instead, they have a default 'template' that defines a visual theme which can easily be replaced by a custom template. But it gets better. WPF concentrates heavily on Data Binding, where the client can bind directly to data on the server. I think this concept was first introduced in 'Classic' Visual Basic, where you could bind a list directly to data from an Access database, and you could do similar in ASP.NET. However, the WPF implementation is far superior to its predecessors. There are also other technologies that I want to look at like LINQ and the Entity Framework, but that's all for now.
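    To make the data binding point concrete, here is a minimal sketch of the idea in code rather than XAML. The Person class and the binding path are made up for illustration; a real app would declare the TextBlock in XAML as <TextBlock Text="{Binding Name}" /> and have the model implement INotifyPropertyChanged so the view tracks changes.

        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Data;

        // Hypothetical model - illustration only
        public class Person
        {
            public string Name { get; set; }
        }

        public class MainWindow : Window
        {
            public MainWindow()
            {
                // Code equivalent of XAML: <TextBlock Text="{Binding Name}" />
                var text = new TextBlock();
                text.SetBinding(TextBlock.TextProperty, new Binding("Name"));

                // The view only knows a property path; the model is supplied
                // through DataContext - that's the View/Model separation.
                DataContext = new Person { Name = "Bazza" };
                Content = text;
            }

            [System.STAThread]
            static void Main()
            {
                new Application().Run(new MainWindow());
            }
        }

    The key design point is that the TextBlock never references Person directly - swap a different object into DataContext and the same view renders it.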

    Read the article

  • Internet Explorer 9 RC Now Available: Here’s the Most Interesting New Stuff

    - by The Geek
    Yesterday Microsoft announced the release candidate of Internet Explorer 9, which is very close to the final product. Here's a screenshot tour of the most interesting new stuff, as well as answers to your questions. The most important question is: should you install this version? And the answer is absolutely yes. Even if you don't use IE, it's better to have a newer, more secure version on your PC. What's New Under the Hood in Release Candidate vs Beta? If you want to see the full list of changes with all the original marketing detail, you can read Microsoft's Beauty of the Web page, but here are the highlights that you might be interested in.

    - Improved Performance – they've made a lot of changes, and it really feels faster, especially when using more intensive web apps like Gmail.
    - Power Consumption Settings – since the JavaScript engine in any browser uses a lot of CPU power, they've now integrated it into the power settings, so if you're on battery it will use less CPU and save battery life. This is really a great change.
    - UI Changes – the tab bar can now be moved below the address bar (see below for more), they've shaved some pixels off the design to save space, and now you can toggle the Menu bar to be always on.
    - Pinned Sites – now you can pin multiple pages to a single taskbar button. Very useful if you always use a couple of web apps together. You can also pin a site in InPrivate mode.
    - FlashBlock and AdBlock are Integrated (sorta) – there's a new ActiveX filtering that lets you enable plug-ins only for sites you trust. There's also a tracking protection list that can block certain content (which can obviously be used to block ads).
    - Geolocation – while a lot of privacy conscious people might complain about this, if you use your laptop while traveling, it's really useful to have geo-located features when using Google Maps, etc. Don't worry, it won't leak your privacy by default.
    - WebM Video – yeah, Google recently removed H.264 from Chrome, but Microsoft has added Google's WebM video format to Internet Explorer.

    Keep reading for more about using the new features.

    Read the article

  • First impressions of Scala

    - by Scott Weinstein
    I have an idea that it may be possible to predict build success/failure based on commit data. Why Scala? It’s a JVM language, has lots of powerful type features, and it has a linear algebra library which I’ll need later. Project definition and build Neither Maven nor the Scala build tool (sbt) is completely satisfactory. This Maven **archetype** (what .Net folks would call a VS project template)

        mvn archetype:generate `
            -DarchetypeGroupId=org.scala-tools.archetypes `
            -DarchetypeArtifactId=scala-archetype-simple `
            -DremoteRepositories=http://scala-tools.org/repo-releases `
            -DgroupId=org.SW -DartifactId=BuildBreakPredictor

    gets you started right away with “hello world” code, unit tests demonstrating a number of different testing approaches, and even a ready-made `.gitignore` file - nice! But the Scala version is behind at v2.8, and more seriously, compiling and testing was painfully slow. So much so that a rapid edit – test – edit cycle was not practical. So Lab49 colleague Steve Levine tells me that I can either adjust my pom to use fsc – the fast scala compiler – or use sbt. Sbt has some nice features:

    - It’s fast – it uses fsc by default
    - It has a continuous mode, so `> ~test` will compile and run your unit test each time you save a file
    - It can consume (and produce) Maven 2 dependencies
    - The build definition file can be much shorter than the equivalent pom (about 1/5 the size, as repos and dependencies can be declared on a single line)

    And some real limitations:

    - Limited support for 3rd party integration – for instance out of the box, TeamCity doesn’t speak sbt, nor does IntelliJ IDEA
    - Steeper learning curve for build steps outside the default

    Side note: If a language has a fast compiler, why keep the slow compiler around? Even worse, why make it the default? I chose sbt, for the faster development speed it offers. Syntax Scala APIs really like to use punctuation – sometimes this works well, as in the following: map1 |+| map2 The `|+|` defines a merge operator which does addition on the `values` of the maps. It’s less useful here: http(baseUrl / url >- parseJson[BuildStatus] Sure, you can probably guess what `>-` does from the context, but how about `>~` or `>+`? Language features I’m still learning, so not much to say just yet. However, case classes are quite useful, implicits scare me, and type constructors have lots of power. Community A number of projects, such as https://github.com/scalala and https://github.com/scalaz/scalaz, are split between github and google code – github for the src, and google code for the docs. Not sure I understand the motivation here.

    Read the article

  • Virtual host is not working in Ubuntu 14 VPS using XAMPP 1.8.3

    - by viral4ever
    I am using XAMPP as server in ubuntu 14.04 VPS of digitalocean. I tried to setup virtual hosts. But it is not working and I am getting 403 error of access denied. I changed files too. My files with changes are /opt/lampp/etc/httpd.conf # # This is the main Apache HTTP server configuration file. It contains the # configuration directives that give the server its instructions. # See <URL:http://httpd.apache.org/docs/trunk/> for detailed information. # In particular, see # <URL:http://httpd.apache.org/docs/trunk/mod/directives.html> # for a discussion of each configuration directive. # # Do NOT simply read the instructions in here without understanding # what they do. They're here only as hints or reminders. If you are unsure # consult the online docs. You have been warned. # # Configuration and logfile names: If the filenames you specify for many # of the server's control files begin with "/" (or "drive:/" for Win32), the # server will use that explicit path. If the filenames do *not* begin # with "/", the value of ServerRoot is prepended -- so 'log/access_log' # with ServerRoot set to '/www' will be interpreted by the # server as '/www/log/access_log', where as '/log/access_log' will be # interpreted as '/log/access_log'. # # ServerRoot: The top of the directory tree under which the server's # configuration, error, and log files are kept. # # Do not add a slash at the end of the directory path. If you point # ServerRoot at a non-local disk, be sure to specify a local disk on the # Mutex directive, if file-based mutexes are used. If you wish to share the # same ServerRoot for multiple httpd daemons, you will need to change at # least PidFile. # ServerRoot "/opt/lampp" # # Mutex: Allows you to set the mutex mechanism and mutex file directory # for individual mutexes, or change the global defaults # # Uncomment and change the directory if mutexes are file-based and the default # mutex file directory is not on a local disk or is not appropriate for some # other reason. # # Mutex default:logs # # Listen: Allows you to bind Apache to specific IP addresses and/or # ports, instead of the default. See also the <VirtualHost> # directive. # # Change this to Listen on specific IP addresses as shown below to # prevent Apache from glomming onto all bound IP addresses. # #Listen 12.34.56.78:80 Listen 80 # # Dynamic Shared Object (DSO) Support # # To be able to use the functionality of a module which was built as a DSO you # have to place corresponding `LoadModule' lines at this location so the # directives contained in it are actually available _before_ they are used. # Statically compiled modules (those listed by `httpd -l') do not need # to be loaded here. 
# # Example: # LoadModule foo_module modules/mod_foo.so # LoadModule authn_file_module modules/mod_authn_file.so LoadModule authn_dbm_module modules/mod_authn_dbm.so LoadModule authn_anon_module modules/mod_authn_anon.so LoadModule authn_dbd_module modules/mod_authn_dbd.so LoadModule authn_socache_module modules/mod_authn_socache.so LoadModule authn_core_module modules/mod_authn_core.so LoadModule authz_host_module modules/mod_authz_host.so LoadModule authz_groupfile_module modules/mod_authz_groupfile.so LoadModule authz_user_module modules/mod_authz_user.so LoadModule authz_dbm_module modules/mod_authz_dbm.so LoadModule authz_owner_module modules/mod_authz_owner.so LoadModule authz_dbd_module modules/mod_authz_dbd.so LoadModule authz_core_module modules/mod_authz_core.so LoadModule authnz_ldap_module modules/mod_authnz_ldap.so LoadModule access_compat_module modules/mod_access_compat.so LoadModule auth_basic_module modules/mod_auth_basic.so LoadModule auth_form_module modules/mod_auth_form.so LoadModule auth_digest_module modules/mod_auth_digest.so LoadModule allowmethods_module modules/mod_allowmethods.so LoadModule file_cache_module modules/mod_file_cache.so LoadModule cache_module modules/mod_cache.so LoadModule cache_disk_module modules/mod_cache_disk.so LoadModule socache_shmcb_module modules/mod_socache_shmcb.so LoadModule socache_dbm_module modules/mod_socache_dbm.so LoadModule socache_memcache_module modules/mod_socache_memcache.so LoadModule dbd_module modules/mod_dbd.so LoadModule bucketeer_module modules/mod_bucketeer.so LoadModule dumpio_module modules/mod_dumpio.so LoadModule echo_module modules/mod_echo.so LoadModule case_filter_module modules/mod_case_filter.so LoadModule case_filter_in_module modules/mod_case_filter_in.so LoadModule buffer_module modules/mod_buffer.so LoadModule ratelimit_module modules/mod_ratelimit.so LoadModule reqtimeout_module modules/mod_reqtimeout.so LoadModule ext_filter_module modules/mod_ext_filter.so LoadModule request_module modules/mod_request.so LoadModule include_module modules/mod_include.so LoadModule filter_module modules/mod_filter.so LoadModule substitute_module modules/mod_substitute.so LoadModule sed_module modules/mod_sed.so LoadModule charset_lite_module modules/mod_charset_lite.so LoadModule deflate_module modules/mod_deflate.so LoadModule mime_module modules/mod_mime.so LoadModule ldap_module modules/mod_ldap.so LoadModule log_config_module modules/mod_log_config.so LoadModule log_debug_module modules/mod_log_debug.so LoadModule logio_module modules/mod_logio.so LoadModule env_module modules/mod_env.so LoadModule mime_magic_module modules/mod_mime_magic.so LoadModule cern_meta_module modules/mod_cern_meta.so LoadModule expires_module modules/mod_expires.so LoadModule headers_module modules/mod_headers.so LoadModule usertrack_module modules/mod_usertrack.so LoadModule unique_id_module modules/mod_unique_id.so LoadModule setenvif_module modules/mod_setenvif.so LoadModule version_module modules/mod_version.so LoadModule remoteip_module modules/mod_remoteip.so LoadModule proxy_module modules/mod_proxy.so LoadModule proxy_connect_module modules/mod_proxy_connect.so LoadModule proxy_ftp_module modules/mod_proxy_ftp.so LoadModule proxy_http_module modules/mod_proxy_http.so LoadModule proxy_fcgi_module modules/mod_proxy_fcgi.so LoadModule proxy_scgi_module modules/mod_proxy_scgi.so LoadModule proxy_ajp_module modules/mod_proxy_ajp.so LoadModule proxy_balancer_module modules/mod_proxy_balancer.so LoadModule proxy_express_module 
modules/mod_proxy_express.so LoadModule session_module modules/mod_session.so LoadModule session_cookie_module modules/mod_session_cookie.so LoadModule session_dbd_module modules/mod_session_dbd.so LoadModule slotmem_shm_module modules/mod_slotmem_shm.so LoadModule ssl_module modules/mod_ssl.so LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so LoadModule lbmethod_bytraffic_module modules/mod_lbmethod_bytraffic.so LoadModule lbmethod_bybusyness_module modules/mod_lbmethod_bybusyness.so LoadModule lbmethod_heartbeat_module modules/mod_lbmethod_heartbeat.so LoadModule unixd_module modules/mod_unixd.so LoadModule dav_module modules/mod_dav.so LoadModule status_module modules/mod_status.so LoadModule autoindex_module modules/mod_autoindex.so LoadModule info_module modules/mod_info.so LoadModule suexec_module modules/mod_suexec.so LoadModule cgi_module modules/mod_cgi.so LoadModule cgid_module modules/mod_cgid.so LoadModule dav_fs_module modules/mod_dav_fs.so LoadModule vhost_alias_module modules/mod_vhost_alias.so LoadModule negotiation_module modules/mod_negotiation.so LoadModule dir_module modules/mod_dir.so LoadModule actions_module modules/mod_actions.so LoadModule speling_module modules/mod_speling.so LoadModule userdir_module modules/mod_userdir.so LoadModule alias_module modules/mod_alias.so LoadModule rewrite_module modules/mod_rewrite.so <IfDefine JUSTTOMAKEAPXSHAPPY> LoadModule php4_module modules/libphp4.so LoadModule php5_module modules/libphp5.so </IfDefine> <IfModule unixd_module> # # If you wish httpd to run as a different user or group, you must run # httpd as root initially and it will switch. # # User/Group: The name (or #number) of the user/group to run httpd as. # It is usually good practice to create a dedicated user and group for # running httpd, as with most system services. # User root Group www </IfModule> # 'Main' server configuration # # The directives in this section set up the values used by the 'main' # server, which responds to any requests that aren't handled by a # <VirtualHost> definition. These values also provide defaults for # any <VirtualHost> containers you may define later in the file. # # All of these directives may appear inside <VirtualHost> containers, # in which case these default settings will be overridden for the # virtual host being defined. # # # ServerAdmin: Your address, where problems with the server should be # e-mailed. This address appears on some server-generated pages, such # as error documents. e.g. [email protected] # ServerAdmin [email protected] # # ServerName gives the name and port that the server uses to identify itself. # This can often be determined automatically, but we recommend you specify # it explicitly to prevent problems during startup. # # If your host doesn't have a registered DNS name, enter its IP address here. # #ServerName www.example.com:@@Port@@ # XAMPP ServerName localhost # # Deny access to the entirety of your server's filesystem. You must # explicitly permit access to web content directories in other # <Directory> blocks below. # <Directory /> AllowOverride none Require all denied </Directory> # # Note that from this point forward you must specifically allow # particular features to be enabled - so if something's not working as # you might expect, make sure that you have specifically enabled it # below. # # # DocumentRoot: The directory out of which you will serve your # documents. 
By default, all requests are taken from this directory, but # symbolic links and aliases may be used to point to other locations. # DocumentRoot "/opt/lampp/htdocs" <Directory "/opt/lampp/htdocs"> # # Possible values for the Options directive are "None", "All", # or any combination of: # Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews # # Note that "MultiViews" must be named *explicitly* --- "Options All" # doesn't give it to you. # # The Options directive is both complicated and important. Please see # http://httpd.apache.org/docs/trunk/mod/core.html#options # for more information. # #Options Indexes FollowSymLinks # XAMPP Options Indexes FollowSymLinks ExecCGI Includes # # AllowOverride controls what directives may be placed in .htaccess files. # It can be "All", "None", or any combination of the keywords: # Options FileInfo AuthConfig Limit # #AllowOverride None # since XAMPP 1.4: AllowOverride All # # Controls who can get stuff from this server. # Require all granted </Directory> # # DirectoryIndex: sets the file that Apache will serve if a directory # is requested. # <IfModule dir_module> #DirectoryIndex index.html # XAMPP DirectoryIndex index.html index.html.var index.php index.php3 index.php4 </IfModule> # # The following lines prevent .htaccess and .htpasswd files from being # viewed by Web clients. # <Files ".ht*"> Require all denied </Files> # # ErrorLog: The location of the error log file. # If you do not specify an ErrorLog directive within a <VirtualHost> # container, error messages relating to that virtual host will be # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. # ErrorLog "logs/error_log" # # LogLevel: Control the number of messages logged to the error_log. # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. # LogLevel warn <IfModule log_config_module> # # The following directives define some format nicknames for use with # a CustomLog directive (see below). # LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %b" common <IfModule logio_module> # You need to enable mod_logio.c to use %I and %O LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio </IfModule> # # The location and format of the access logfile (Common Logfile Format). # If you do not define any access logfiles within a <VirtualHost> # container, they will be logged here. Contrariwise, if you *do* # define per-<VirtualHost> access logfiles, transactions will be # logged therein and *not* in this file. # CustomLog "logs/access_log" common # # If you prefer a logfile with access, agent, and referer information # (Combined Logfile Format) you can use the following directive. # #CustomLog "logs/access_log" combined </IfModule> <IfModule alias_module> # # Redirect: Allows you to tell clients about documents that used to # exist in your server's namespace, but do not anymore. The client # will make a new request for the document at its new location. # Example: # Redirect permanent /foo http://www.example.com/bar # # Alias: Maps web paths into filesystem paths and is used to # access content that does not live under the DocumentRoot. # Example: # Alias /webpath /full/filesystem/path # # If you include a trailing / on /webpath then the server will # require it to be present in the URL. 
You will also likely # need to provide a <Directory> section to allow access to # the filesystem path. # # ScriptAlias: This controls which directories contain server scripts. # ScriptAliases are essentially the same as Aliases, except that # documents in the target directory are treated as applications and # run by the server when requested rather than as documents sent to the # client. The same rules about trailing "/" apply to ScriptAlias # directives as to Alias. # ScriptAlias /cgi-bin/ "/opt/lampp/cgi-bin/" </IfModule> <IfModule cgid_module> # # ScriptSock: On threaded servers, designate the path to the UNIX # socket used to communicate with the CGI daemon of mod_cgid. # #Scriptsock logs/cgisock </IfModule> # # "/opt/lampp/cgi-bin" should be changed to whatever your ScriptAliased # CGI directory exists, if you have that configured. # <Directory "/opt/lampp/cgi-bin"> AllowOverride None Options None Require all granted </Directory> <IfModule mime_module> # # TypesConfig points to the file containing the list of mappings from # filename extension to MIME-type. # TypesConfig etc/mime.types # # AddType allows you to add to or override the MIME configuration # file specified in TypesConfig for specific file types. # #AddType application/x-gzip .tgz # # AddEncoding allows you to have certain browsers uncompress # information on the fly. Note: Not all browsers support this. # #AddEncoding x-compress .Z #AddEncoding x-gzip .gz .tgz # # If the AddEncoding directives above are commented-out, then you # probably should define those extensions to indicate media types: # AddType application/x-compress .Z AddType application/x-gzip .gz .tgz # # AddHandler allows you to map certain file extensions to "handlers": # actions unrelated to filetype. These can be either built into the server # or added with the Action directive (see below) # # To use CGI scripts outside of ScriptAliased directories: # (You will also need to add "ExecCGI" to the "Options" directive.) # #AddHandler cgi-script .cgi # XAMPP, since LAMPP 0.9.8: AddHandler cgi-script .cgi .pl # For type maps (negotiated resources): #AddHandler type-map var # # Filters allow you to process content before it is sent to the client. # # To parse .shtml files for server-side includes (SSI): # (You will also need to add "Includes" to the "Options" directive.) # # XAMPP AddType text/html .shtml AddOutputFilter INCLUDES .shtml </IfModule> # # The mod_mime_magic module allows the server to use various hints from the # contents of the file itself to determine its type. The MIMEMagicFile # directive tells the module where the hint definitions are located. # #MIMEMagicFile etc/magic # # Customizable error responses come in three flavors: # 1) plain text 2) local redirects 3) external redirects # # Some examples: #ErrorDocument 500 "The server made a boo boo." #ErrorDocument 404 /missing.html #ErrorDocument 404 "/cgi-bin/missing_handler.pl" #ErrorDocument 402 http://www.example.com/subscription_info.html # # # MaxRanges: Maximum number of Ranges in a request before # returning the entire resource, or one of the special # values 'default', 'none' or 'unlimited'. # Default setting is to accept 200 Ranges. #MaxRanges unlimited # # EnableMMAP and EnableSendfile: On systems that support it, # memory-mapping or the sendfile syscall may be used to deliver # files. This usually improves server performance, but must # be turned off when serving from networked-mounted # filesystems or if support for these functions is otherwise # broken on your system. 
# Defaults: EnableMMAP On, EnableSendfile Off # EnableMMAP off EnableSendfile off # Supplemental configuration # # The configuration files in the etc/extra/ directory can be # included to add extra features or to modify the default configuration of # the server, or you may simply copy their contents here and change as # necessary. # Server-pool management (MPM specific) #Include etc/extra/httpd-mpm.conf # Multi-language error messages Include etc/extra/httpd-multilang-errordoc.conf # Fancy directory listings Include etc/extra/httpd-autoindex.conf # Language settings #Include etc/extra/httpd-languages.conf # User home directories #Include etc/extra/httpd-userdir.conf # Real-time info on requests and configuration #Include etc/extra/httpd-info.conf # Virtual hosts Include etc/extra/httpd-vhosts.conf # Local access to the Apache HTTP Server Manual #Include etc/extra/httpd-manual.conf # Distributed authoring and versioning (WebDAV) #Include etc/extra/httpd-dav.conf # Various default settings Include etc/extra/httpd-default.conf # Configure mod_proxy_html to understand HTML4/XHTML1 <IfModule proxy_html_module> Include etc/extra/proxy-html.conf </IfModule> # Secure (SSL/TLS) connections <IfModule ssl_module> # XAMPP <IfDefine SSL> Include etc/extra/httpd-ssl.conf </IfDefine> </IfModule> # # Note: The following must must be present to support # starting without SSL on platforms with no /dev/random equivalent # but a statically compiled-in mod_ssl. # <IfModule ssl_module> SSLRandomSeed startup builtin SSLRandomSeed connect builtin </IfModule> # XAMPP Include etc/extra/httpd-xampp.conf Include "/opt/lampp/apache2/conf/httpd.conf"

I used the commands shown in this example to create the group and fix ownership and permissions:

    Add group: groupadd www
    Add user to group: usermod -aG www root
    Change htdocs group: chgrp -R www /opt/lampp/htdocs
    Change site directory group: chgrp -R www /opt/lampp/htdocs/mysite
    Change htdocs permissions: chmod 2775 /opt/lampp/htdocs
    Change site directory permissions: chmod 2775 /opt/lampp/htdocs/mysite

And then I changed my vhosts.conf file:

# Virtual Hosts
#
# Required modules: mod_log_config
# If you want to maintain multiple domains/hostnames on your
# machine you can setup VirtualHost containers for them. Most configurations
# use only name-based virtual hosts so the server doesn't need to worry about
# IP addresses. This is indicated by the asterisks in the directives below.
#
# Please see the documentation at
# <URL:http://httpd.apache.org/docs/2.4/vhosts/>
# for further details before you try to setup virtual hosts.
#
# You may use the command line option '-S' to verify your virtual host
# configuration.
#
# VirtualHost example:
# Almost any Apache directive may go into a VirtualHost container.
# The first VirtualHost section is used for all requests that do not
# match a ServerName or ServerAlias in any <VirtualHost> block.
#
<VirtualHost *:80>
    ServerAdmin [email protected]
    DocumentRoot "/opt/lampp/docs/dummy-host.example.com"
    ServerName dummy-host.example.com
    ServerAlias www.dummy-host.example.com
    ErrorLog "logs/dummy-host.example.com-error_log"
    CustomLog "logs/dummy-host.example.com-access_log" common
</VirtualHost>

<VirtualHost *:80>
    ServerAdmin [email protected]
    DocumentRoot "/opt/lampp/docs/dummy-host2.example.com"
    ServerName dummy-host2.example.com
    ErrorLog "logs/dummy-host2.example.com-error_log"
    CustomLog "logs/dummy-host2.example.com-access_log" common
</VirtualHost>

NameVirtualHost *
<VirtualHost *>
    ServerAdmin [email protected]
    DocumentRoot "/opt/lampp/htdocs/mysite"
    ServerName mysite.com
    ServerAlias mysite.com
    ErrorLog "/opt/lampp/htdocs/mysite/errorlogs"
    CustomLog "/opt/lampp/htdocs/mysite/customlog" common
    <Directory "/opt/lampp/htdocs/mysite">
        Options Indexes FollowSymLinks Includes ExecCGI
        AllowOverride All
        Order Allow,Deny
        Allow from all
        Require all granted
    </Directory>
</VirtualHost>

But it is still not working; I get a 403 error on my IP and domain, although I can access phpMyAdmin. If anyone can help me, please help me.
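    For comparison, here is a minimal Apache 2.4-style vhost sketch (the domain and paths are taken from the question; everything else is an assumption). Note that Order/Allow from all are Apache 2.2 directives, and mixing them with the 2.4 Require directive is a common source of unexpected 403s; NameVirtualHost is also no longer needed in 2.4:

        # Hypothetical minimal vhost for the Apache 2.4 shipped with XAMPP 1.8.3.
        <VirtualHost *:80>
            ServerAdmin webmaster@mysite.com
            ServerName mysite.com
            ServerAlias www.mysite.com
            DocumentRoot "/opt/lampp/htdocs/mysite"

            # Point logs at files in an existing, writable log directory,
            # not at paths inside the DocumentRoot.
            ErrorLog "logs/mysite-error_log"
            CustomLog "logs/mysite-access_log" common

            <Directory "/opt/lampp/htdocs/mysite">
                Options Indexes FollowSymLinks
                AllowOverride All
                # Apache 2.4 access control only; no Order/Allow lines.
                Require all granted
            </Directory>
        </VirtualHost>

    If the 403 persists with a configuration like this, the error log named in the vhost usually says exactly which directory check failed.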

    Read the article

  • Now Shipping! NetAdvantage for .NET 2010 Volume 3!

    The new NetAdvantage Ultimate includes all four Line of Business user interface control sets for ASP .NET, Windows Forms, WPF and Silverlight plus two advanced Data Visualization UI control sets for WPF and Silverlight. With six NetAdvantage products in one robust package, Infragistics® gives you hundreds of controls and infinite development possibilities.
    Unified XAML Product Strategy - Share Code, Get More Controls: In the 10.3 release, Infragistics continues to deliver code parity between the XAML platforms, WPF and Silverlight. In the line of business toolsets, Infragistics introduces the new xamSchedule™, full-featured, Outlook® 2010-style schedule controls, and the new xamDataTree™, a data bound tree view that comfortably handles tens of thousands of tree nodes. Mimicking our Silverlight Drag and Drop Framework, the WPF Drag and Drop Framework CTP empowers you to add your own rich touches to your applications.
    Track Users' Behaviors: New to all NetAdvantage Silverlight controls is the Infragistics Analytics Framework (IGAF), which empowers you to track user behavior in RIAs running on Silverlight 4. Building on the Microsoft® Silverlight Analytics Framework, with IGAF you can analyze the user's behaviors to ensure the experience you want to deliver.
    NetAdvantage for Windows Forms - New Office® 2010 Ribbon and Application Menu 2010: Create new experiences with Windows Forms. Now with Office 2010 styling, NetAdvantage for Windows Forms has new features such as the Microsoft® Office 2010 ribbon and enhanced Infragistics.Excel to export the contents of the high performance WinGrid™ into Microsoft Excel® 2010. The new Windows Message Support enables Infragistics standalone editor controls to process numerous Windows® OS messages, allowing them to respond just like native controls to changes in the Windows environment.
    Create Faster Web 2.0 Experiences with NetAdvantage for ASP .NET: Infragistics continues to push the envelope to deliver the fastest ASP .NET WebForms controls available on the market. Our lightning fast ASP .NET grids are now enhanced with XPS/PDF Exporting and Summary Rows. This release also includes support for jQuery Templating (as a CTP) within our WebDataGrid™ and WebDataTree™ controls, allowing you to quickly cut down overall page size.
    Deliver Business Intelligence with Power, Flexibility and the Office 2010 Experience: NetAdvantage for WPF Data Visualization and NetAdvantage for Silverlight Data Visualization help you deliver flexible, powerful and usable end user experiences in Business Intelligence applications. Both suites include the Pivot Grid that delivers the full power of online analytical processing (OLAP) to present multi-dimensional data, sliced and diced in cross-tabulated form for end users to drill down into, interact with and easily extract meaning from the data.
    Mapping Made Easy: 10.3 marks the official release of the WPF Data Visualization xamMap™ control to map anything and everything from geographic to geo-spatial mapping data. Map layers allow you to add successive levels of detail, navigational panes for panning in all directions, color swatch panes that facilitate value scales like Choropleth shading, and scale panes allowing users to zoom in and out. Both toolsets introduce the first of many relationship maps! With the xamOrgChart™ CTP you can map out organizational charts of up to 50K employees, competitive brackets (think World Cup) and any other relational, organizational map your application needs.
http://www.infragistics.com

    Read the article

  • Windows Azure Learning Plan - Architecture

    - by BuckWoody
    This is one in a series of posts on a Windows Azure Learning Plan. You can find the main post here. This one deals with what an architect needs to know about Windows Azure.
    General Architectural Guidance - Overview and general information about Azure: what it is, how it works, and where you can learn more.
        Cloud Computing, A Crash Course for Architects (Video) - http://www.msteched.com/2010/Europe/ARC202
        Patterns and Practices for Cloud Development - http://msdn.microsoft.com/en-us/library/ff898430.aspx
        Design Patterns, Anti-Patterns and Windows Azure - http://blogs.msdn.com/b/ignitionshowcase/archive/2010/11/27/design-patterns-anti-patterns-and-windows-azure.aspx
        Application Patterns for the Cloud - http://blogs.msdn.com/b/kashif/archive/2010/08/07/application-patterns-for-the-cloud.aspx
        Architecting Applications for High Scalability (Video) - http://www.msteched.com/2010/Europe/ARC309
        David Aiken on Azure Architecture Patterns (Video) - http://blogs.msdn.com/b/architectsrule/archive/2010/09/09/arcast-tv-david-aiken-on-azure-architecture-patterns.aspx
        Cloud Application Architecture Patterns (Video) - http://blogs.msdn.com/b/bobfamiliar/archive/2010/10/19/cloud-application-architecture-patterns-by-david-platt.aspx
        10 Things Every Architect Needs to Know about Windows Azure - http://geekswithblogs.net/iupdateable/archive/2010/10/20/slides-and-links-for-windows-azure-platform-session-at-software.aspx
        Key Differences Between Public and Private Clouds - http://blogs.msdn.com/b/kadriu/archive/2010/10/24/key-differences-between-public-and-private-clouds.aspx
        Microsoft Application Platform at a Glance - http://blogs.msdn.com/b/jmeier/archive/2010/10/30/microsoft-application-platform-at-a-glance.aspx
        Windows Azure is not just about Roles - http://vikassahni.wordpress.com/2010/11/17/windows-azure-is-not-just-about-roles/
        Example Application for Windows Azure - http://msdn.microsoft.com/en-us/library/ff966482.aspx
    Implementation Guidance - Practical applications for the architect to consider.
        5 Enterprise steps for adopting a Platform as a Service - http://blogs.msdn.com/b/davidmcg/archive/2010/12/02/5-enterprise-steps-for-adopting-a-platform-as-a-service.aspx?wa=wsignin1.0
        Performance-Based Scaling in Windows Azure - http://msdn.microsoft.com/en-us/magazine/gg232759.aspx
        Windows Azure Guidance for the Development Process - http://blogs.msdn.com/b/eugeniop/archive/2010/04/01/windows-azure-guidance-development-process.aspx
        Microsoft Developer Guidance Maps - http://blogs.msdn.com/b/jmeier/archive/2010/10/04/developer-guidance-ia-at-a-glance.aspx
        How to Build a Hybrid On-Premise/In Cloud Application - http://blogs.msdn.com/b/ignitionshowcase/archive/2010/11/09/how-to-build-a-hybrid-on-premise-in-cloud-application.aspx
        A Common Scenario of Multi-instances in Windows Azure - http://blogs.msdn.com/b/windows-azure-support/archive/2010/11/03/a-common-scenario-of-multi_2d00_instances-in-windows-azure-.aspx
        Slides and Links for Windows Azure Platform Best Practices - http://geekswithblogs.net/iupdateable/archive/2010/09/29/slides-and-links-for-windows-azure-platform-best-practices-for.aspx
        AppFabric Architecture and Deployment Topologies guide - http://blogs.msdn.com/b/appfabriccat/archive/2010/09/10/appfabric-architecture-and-deployment-topologies-guide-now-available-via-microsoft-download-center.aspx
        Windows Azure Platform Appliance - http://www.microsoft.com/windowsazure/appliance/
    Integrating Cloud Technologies into Your Organization - Interoperability with Open Source and other applications; business and cost decisions.
        Interoperability Labs at Microsoft - http://www.interoperabilitybridges.com/
        Windows Azure Service Level Agreements - http://www.microsoft.com/windowsazure/sla/

    Read the article

  • Inverted schedctl usage in the JVM

    - by Dave
    The schedctl facility in Solaris allows a thread to request that the kernel defer involuntary preemption for a brief period. The mechanism is strictly advisory - the kernel can opt to ignore the request. Schedctl is typically used to bracket lock critical sections. That, in turn, can avoid convoying -- threads piling up on a critical section behind a preempted lock-holder -- and other lock-related performance pathologies. If you're interested, see the man pages for schedctl_start() and schedctl_stop() and the schedctl.h include file. The implementation is very efficient. schedctl_start(), which asks that preemption be deferred, simply stores into a thread-specific structure -- the schedctl block -- that the kernel maps into user-space. Similarly, schedctl_stop() clears the flag set by schedctl_start() and then checks a "preemption pending" flag in the block. Normally, this will be false, but if set schedctl_stop() will yield to politely grant the CPU to other threads. Note that you can't abuse this facility for long-term preemption avoidance, as the deferral is brief. If your thread exceeds the grace period the kernel will preempt it and transiently degrade its effective scheduling priority. Further reading: US05937187 and various papers by Andy Tucker.
    We'll now switch topics to the implementation of the "synchronized" locking construct in the HotSpot JVM. If a lock is contended then on multiprocessor systems we'll spin briefly to try to avoid context switching. Context switching is wasted work and inflicts various cache and TLB penalties on the threads involved. If context switching were "free" then we'd never spin to avoid switching, but that's not the case. We use an adaptive spin-then-park strategy. One potentially undesirable outcome is that we can be preempted while spinning. When our spinning thread is finally rescheduled the lock may or may not be available. If not, we'll spin and then potentially park (block) again, thus suffering a second context switch. Recall that the reason we spin is to avoid context switching. To avoid this scenario I've found it useful to enable schedctl to request deferral while spinning. But while spinning I've arranged for the code to periodically poll the "preemption pending" flag. If that's found set we simply abandon our spinning attempt and park immediately. This avoids the double context-switch scenario above. One annoyance is that the schedctl blocks for the threads in a given process are tightly packed on special pages mapped from kernel space into user-land. As such, writes to the schedctl blocks can cause false sharing on other adjacent blocks. Hopefully the kernel folks will make changes to avoid this by padding and aligning the blocks to ensure that one cache line underlies at most one schedctl block at any one time.
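    For reference, here is a minimal sketch of the classic bracketing pattern using the public Solaris API (error handling omitted and the critical-section body is a placeholder; this is not the HotSpot code itself):

        /* Compile on Solaris; schedctl_start()/schedctl_stop() live in libc. */
        #include <schedctl.h>
        #include <pthread.h>

        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

        void critical_section(schedctl_t *sc)
        {
            schedctl_start(sc);      /* advisory: please defer preemption */
            pthread_mutex_lock(&lock);
            /* ... brief critical section ... */
            pthread_mutex_unlock(&lock);
            schedctl_stop(sc);       /* re-allow preemption; politely yields
                                        if a preemption is already pending */
        }

        int main(void)
        {
            schedctl_t *sc = schedctl_init();  /* kernel maps the per-thread
                                                  schedctl block into user-space */
            critical_section(sc);
            schedctl_exit();
            return 0;
        }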

    Read the article

  • Convert DVDs and ISO Files to MKV with MakeMKV

    - by DigitalGeekery
    Looking for a quick and easy way to convert your DVDs or ISOs to MKV files? Today we take a look at the MakeMKV Beta, which gets the job done very well. Installing and Using MakeMKV: Download and install MakeMKV (see download link below). If converting a DVD, place it into your optical drive. When you open MakeMKV you will be greeted by its minimalistic interface. Click on the DVD to hard drive button to open the DVD, or the folder icon on the top menu to browse for an ISO file. MakeMKV will open the disc or file. Once the disc or file is opened, you’ll see the titles listed in the window on the left. Double-click on the titles to expand the tree structure. Remove any title or tracks you don’t want to convert by unselecting the check box to the left. On the right side of the window, click the folder icon to browse for your file output directory. When ready, click the MakeMKV button to begin the conversion process. Conversion will proceed. When the conversion is finished, click OK. That’s all there is to it! Your MKV file is ready to play. Conclusion: MakeMKV is currently still in beta, and during the beta phase it will rip both DVD and Blu-ray for free. However, the DVD ripping functionality will always remain free. After 30 days, if you want to continue ripping Blu-ray discs, you’ll need to purchase a license. DVD rips are very quick…typically around 15-20 minutes depending on the length of the movie. MakeMKV is available for Windows, Mac and Linux and will rip and convert DVDs to MKV files. Not all media players natively support MKV playback, so if you’re having trouble playing MKV files, try downloading VLC Media player, or the latest version of the DivX codec. Download MakeMKV

    Read the article

  • Should I be put off a junior role that uses an online development test?

    - by Ninefingers
    I've applied for a junior development role, or rather been found by a recruiter looking for a developer. In order to get to a telephone interview stage I've been asked to sit one of those online coding assessments. This wasn't quite what I expected. I consider myself a fairly good developer for my age and experience, but I've no illusions about being Don Knuth or anything. The test was a series of incredibly obtuse questions asking about the results of various obscure evaluations. About 30 minutes in I was thinking to myself I hadn't intended to enter an obfuscated code contest/code golf exercise. After my last telephone interview I was asked to build something. I did. That seemed fair. "Go away and work this out" is more my in-office experience of programming than "please evaluate this combination of lambdas, filters, maps, lists, tuples etc". So I'm a little put off, to be honest. I never claimed to know the language inside out or all the little corner cases. My questions, then:
        Should I be put off? Why? Why not?
        Are these kinds of tests what I should be expecting for junior roles?
        Should I learn stuff exam style? That seems to be the objective of these tests, for which you are timed and not supposed to use references or books.
    Normally, in the course of development I have a fairly good idea of basic types, rules, flow control and whatever. Occasionally I'll come up on something I need to use a regex for and have to go and remind myself of the exact piece of syntax I need if trying what I think should work doesn't. Or I'll come up against a module I've not used before and go and look it up. For example, if I wanted to write a server using sockets in C right now, I'd probably check the last piece of code I wrote doing that (and/or the various books I have) and work from there. Chances are I probably couldn't do it exactly from scratch and from memory, although I can tell you you'd need a socket(), bind(), listen() and accept() call, and you might also want select() depending on whether you intend to pthread_create or not (see the sketch below). What are your experiences if you are a recruiting manager? Are you after programmers who can quote you the API, or do you not mind if your programmers have a few books on their desk and google function calls every so often?
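    For what it's worth, here is roughly the skeleton referred to above; the port, backlog and response are arbitrary, and error handling is minimal:

        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <sys/socket.h>
        #include <netinet/in.h>

        int main(void)
        {
            int srv = socket(AF_INET, SOCK_STREAM, 0);
            if (srv < 0) { perror("socket"); return 1; }

            struct sockaddr_in addr;
            memset(&addr, 0, sizeof(addr));
            addr.sin_family      = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_ANY);
            addr.sin_port        = htons(8080);

            if (bind(srv, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
                perror("bind");
                return 1;
            }
            listen(srv, 16);  /* backlog of pending connections */

            for (;;) {
                int client = accept(srv, NULL, NULL);
                if (client < 0)
                    continue;
                /* hand off to a worker here: pthread_create(), fork(),
                   or fold the fd into a select() loop */
                write(client, "hello\n", 6);
                close(client);
            }
        }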

    Read the article

  • Space partitioning when everything is moving

    - by Roy T.
    Background: Together with a friend I'm working on a 2D game that is set in space. To make it as immersive and interactive as possible we want there to be thousands of objects freely floating around, some clustered together, others adrift in empty space.
    Challenge: To unburden the rendering and physics engine we need to implement some sort of spatial partitioning. There are two challenges we have to overcome. The first challenge is that everything is moving, so reconstructing/updating the data structure has to be extremely cheap, since it will have to be done every frame. The second challenge is the distribution of objects: as said before, there might be clusters of objects together and vast stretches of empty space, and to make it even worse there is no boundary to space.
    Existing technologies: I've looked at existing techniques like BSP trees, quadtrees, kd-trees and even R-trees, but as far as I can tell these data structures aren't a perfect fit, since updating a lot of objects that have moved to other cells is relatively expensive.
    What I've tried: I made the decision that I need a data structure that is geared more toward rapid insertion/update than toward returning the smallest possible set of hits for a query. For that purpose I made the cells implicit, so each object, given its position, can calculate in which cell(s) it should be. Then I use a HashMap that maps cell coordinates to an ArrayList (the contents of the cell). This works fairly well, since no memory is lost on 'empty' cells and it's easy to calculate which cells to inspect. However, creating all those ArrayLists (worst case N) is expensive, and so is growing the HashMap a lot of times (although that is slightly mitigated by giving it a large initial capacity).
    Problem: OK, so this works, but still isn't very fast. Now I can try to micro-optimize the Java code. However, I'm not expecting too much of that, since the profiler tells me that most time is spent in creating all those objects that I use to store the cells. I'm hoping that there are some other tricks/algorithms out there that make this a lot faster, so here is what my ideal data structure looks like:
        The number one priority is fast updating/reconstructing of the entire data structure.
        It's less important to finely divide the objects into equally sized bins; we can draw a few extra objects and do a few extra collision checks if that means that updating is a little bit faster.
        Memory is not really important (PC game).
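    To make the implicit-grid idea concrete, here is a minimal Java sketch with one common mitigation for the allocation cost described above: cell lists are cleared and reused between frames instead of being reallocated (all names are illustrative):

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        class SpatialHash {
            private final float cellSize;
            // One long key packs the 2D cell coordinate, avoiding a key object per lookup.
            private final Map<Long, List<Integer>> cells = new HashMap<Long, List<Integer>>();

            SpatialHash(float cellSize) { this.cellSize = cellSize; }

            private static long key(int cx, int cz) {
                return ((long) cx << 32) ^ (cz & 0xFFFFFFFFL);
            }

            // Clear contents but keep the ArrayLists, so a frame's rebuild
            // is mostly writes rather than allocations.
            void beginFrame() {
                for (List<Integer> cell : cells.values()) {
                    cell.clear();
                }
            }

            void insert(int objectId, float x, float z) {
                long k = key((int) Math.floor(x / cellSize), (int) Math.floor(z / cellSize));
                List<Integer> cell = cells.get(k);
                if (cell == null) {
                    cell = new ArrayList<Integer>();
                    cells.put(k, cell);
                }
                cell.add(objectId);
            }

            List<Integer> query(float x, float z) {
                List<Integer> cell = cells.get(key((int) Math.floor(x / cellSize),
                                                   (int) Math.floor(z / cellSize)));
                return cell != null ? cell : Collections.<Integer>emptyList();
            }
        }

    The trade-off is that cells which empty out keep their backing arrays around between frames, which fits the "memory is not really important" constraint stated above.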

    Read the article

  • What do you do when you encounter an idiotic interview question?

    - by Senthil
    I was interviewing with a "too proud of my java skills"-looking person. He asked me "What is your knowledge on Java IO classes.. say.. hash maps?" He asked me to write a piece of java code on paper - instantiate a class and call one of the instance's methods. When I was done, he said my program wouldn't run. After 5 minutes of serious thinking, I gave up and asked why. He said I didn't write a main function so it wouldn't run. ON PAPER. [I am too furious to continue with the stupidity...] Believe me, it wasn't trick questions or a psychic or anger management evaluation thing. I can tell from his face, he was proud of these questions. That "developer" was supposed to "judge" the candidates. I can think of several things:
        1. Hit him with a chair (which I so desperately wanted to) and walk out.
        2. Simply walk out.
        3. Ridicule him, saying he didn't make sense.
        4. Politely let him know that he didn't make sense and go on to try and answer the questions.
        5. Don't tell him anything, but simply go on to try and answer the questions.
    So far, I have tried just 4 and 5. It hasn't helped. Unfortunately many candidates seem to do the same and remain polite, but this lets these kinds of "developers" just keep ascending the corporate ladder, gradually getting the capacity to pi** off more and more people. How do you handle these interviewers without bursting your veins? What is the proper way to handle this, yet maintain your reputation if other potential employers were to ever get to know what happened here? Is there anything you can do or should you even try to fix this? P.S. Let me admit that my anger has been amplified many times by the facts: He was smiling like you wouldn't believe. I got so many (20 or so) calls from that company the day before, asking me to come to the interview, that I couldn't do any work that day. I wasted a paid day off.

    Read the article

  • BizTalk 2009 - Custom Functoid Categories

    - by StuartBrierley
    I recently had cause to code a number of custom functoids to aid with some maps that I was writing. Once these were developed and deployed to C:\Program Files\Microsoft BizTalk Server 2009\Developer Tools\Mapper Extensions, a quick refresh allowed them to appear in the toolbox. After dropping these on a map and configuring the appropriate inputs, I tested the map to check that they worked as expected. All but one of the functoids worked as expected, but the final functoid appeared not to be firing at all. I had already tested the code used in a simple test harness application, so I was confident in the code, but I still needed to figure out what the problem might be. Debugging the map helped me on the way; for some reason the functoid in question was not shown correctly - the functoid definition was wrong. After some investigation I found that the functoid type you assign when coding a custom functoid affects more than just the category it appears in; different functoid types have different capabilities, including what they can link to. For example, a Logical functoid can not provide content for an output element, it can only say whether the element exists. Map this via a Value Mapping functoid and the value of true or false can be seen in the output element. The functoid I was having problems with was one where I had used the XPath functoid type; this had seemed a good fit, as I was looking up content in a config file using XPath and I wanted it to appear in the advanced area. From the table below you can see that this functoid type is marked as "Internal Use Only", preventing it from being used for custom functoids. Changing my type to String allowed the functoid to function as expected.

    Category         | Description                                                                                                           | Toolbox Group
    Assert           | Internal Use Only                                                                                                     | Advanced
    Conversion       | Converts characters to and from numerics and converts numbers from one base to another.                              | Conversion
    Count            | Internal Use Only                                                                                                     | Advanced
    Cumulative       | Performs accumulations of the value of a field that occurs multiple times in a source document and outputs a single output. | Cumulative
    DatabaseExtract  | Internal Use Only                                                                                                     | Database
    DatabaseLookup   | Internal Use Only                                                                                                     | Database
    DateTime         | Adds date, time, date and time, or add days to a specified date, in output data.                                      | Date/Time
    ExistenceLooping | Internal Use Only                                                                                                     | Advanced
    Index            | Internal Use Only                                                                                                     | Advanced
    Iteration        | Internal Use Only                                                                                                     | Advanced
    Keymatch         | Internal Use Only                                                                                                     | Advanced
    Logical          | Controls conditional behavior of other functoids to determine whether particular output data is created.              | Logical
    Looping          | Internal Use Only                                                                                                     | Advanced
    MassCopy         | Internal Use Only                                                                                                     | Advanced
    Math             | Performs specific numeric calculations such as addition, multiplication, and division.                               | Mathematical
    NilValue         | Internal Use Only                                                                                                     | Advanced
    Scientific       | Performs specific scientific calculations such as logarithmic, exponential, and trigonometric functions.             | Scientific
    Scripter         | Internal Use Only                                                                                                     | Advanced
    String           | Manipulates data strings by using well-known string functions such as concatenation, length, find, and trim.         | String
    TableExtractor   | Internal Use Only                                                                                                     | Advanced
    TableLooping     | Internal Use Only                                                                                                     | Advanced
    Unknown          | Internal Use Only                                                                                                     | Advanced
    ValueMapping     | Internal Use Only                                                                                                     | Advanced
    XPath            | Internal Use Only                                                                                                     | Advanced

    Links:
    http://msdn.microsoft.com/en-us/library/microsoft.biztalk.basefunctoids.functoidcategory(BTS.20).aspx
    http://blog.eliasen.dk/CommentView,guid,d33b686b-b059-4381-a0e7-1c56e808f7f0.aspx
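    For context, a custom functoid skeleton looks something like the sketch below. The resource names, ID and lookup body are hypothetical placeholders; the category is String rather than XPath, per the fix described above, and custom functoid IDs must be 6000 or higher:

        // Hypothetical custom functoid sketch for BizTalk 2009.
        using System.Reflection;
        using Microsoft.BizTalk.BaseFunctoids;

        public class ConfigLookupFunctoid : BaseFunctoid
        {
            public ConfigLookupFunctoid() : base()
            {
                this.ID = 6001; // custom functoids must use IDs >= 6000

                // Resource assembly holding the name/tooltip/bitmap resources.
                SetupResourceAssembly("MyFunctoids.Resources",
                                      Assembly.GetExecutingAssembly());
                SetName("IDS_CONFIGLOOKUP_NAME");
                SetTooltip("IDS_CONFIGLOOKUP_TOOLTIP");
                SetDescription("IDS_CONFIGLOOKUP_DESCRIPTION");
                SetBitmap("IDB_CONFIGLOOKUP_BITMAP");

                // String, not XPath: several categories are internal-use only
                // and will silently break the map, as described above.
                this.Category = FunctoidCategory.String;

                this.SetMinParams(1);
                this.SetMaxParams(1);
                SetExternalFunctionName(GetType().Assembly.FullName,
                                        GetType().FullName, "ConfigLookup");

                AddInputConnectionType(ConnectionType.AllExceptRecord);
                this.OutputConnectionType = ConnectionType.AllExceptRecord;
            }

            // Called at map execution time with the input value.
            public string ConfigLookup(string key)
            {
                // look up 'key' in a config file here; hardcoded for the sketch
                return "value-for-" + key;
            }
        }

    Note that the resource assembly must actually contain the named string and bitmap resources, or the functoid will not appear in the toolbox at all.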

    Read the article

  • ArchBeat Link-o-Rama for July 3, 2013

    - by Bob Rhubart
    Industrial SOA Chapter 5: Enterprise Service Bus - Enterprise Service Bus, the fifth and latest addition to the Industrial SOA article series, answers some of the most important questions surrounding the use of an ESB.
    Industrial SOA Chapter 4: SOA Maturity - The fourth article in the Industrial SOA series, SOA Maturity offers "an exploration of the fundamentals of applying a factory approach to modern service-oriented software development."
    Using the Exalytics Summary Advisor and Oracle BI Apps 7.9.6.4 | Mark Rittman - Oracle ACE Director Mark Rittman's post revisits "the use of the Summary Advisor, with my BI Apps installation bumped-up to version 7.9.6.4, and the Exalytics environment patched up to 11.1.1.6.9, the latest patch release we’ve applied to that environment."
    Part 1 - 12c Database and WLS - Overview | Steve Felts - Steve Felts shares a handy table that "maps the Oracle 12c Database features supported with various combinations of currently available WLS releases, 11g and 12c Drivers, and 11g and 12c Databases."
    Developers WebCast: Deploy Highly-Available Custom Services on Your Data Grid Products - July 11 - Oracle Coherence Sr. Architect Brian Oliver hosts this free July 11 webcast for developers to show you how to "create and deploy customized, highly-available services for your data grid, and how real-time data processing will allow you to provide unmatched end-user experiences."
    A checklist for OIM go live | Daniel Gralewski - FMW A-Team solution architect Daniel Gralewski's list is intended to complement the Oracle Identity Manager documentation. His post "provides tips on a few topics that are not part of the documentation."
    How Many ODI Master Repositories Should We Have? | Christophe Dupupet - FMW solution architect Christophe Dupupet provides a simple answer along with best practices for the architecture of ODI repositories in a corporate environment.
    Distinguish EA from enterprise wide solution architecture | John Wu - My buddy Tony Meyer, who did a great presentation recently at the Cleveland-area Enterprise Architect / Solution Architect Meet-up, recommends this Toolbox article by John Wu.
    YouTube: Oracle Fusion Applications Developer Tips - If you work with Fusion Applications you'll want to check out the tips and tricks for building extensions, customizations, and integrations now available on the new Oracle Fusion Middleware Developer Relations YouTube channel.
    The CX Factor: Wooing and wowing customers in the digital age - "There was a time when 'customer experience' was limited to what happened to you when you walked into a store, restaurant, or other place of business or when you called a business on the telephone. But that was back when you could still smoke on airplanes."
    Thought for the Day - "If you had to identify, in one word, the reason why the human race has not achieved, and never will achieve, its full potential, that word would be 'meetings.' " — Dave Barry (Born July 3, 1947) Source: brainyquote.com

    Read the article

  • Configurable Objects - Introduction

    - by Anthony Shorten
    One of the interesting facilities in the framework is Configurable Object functionality (it is also known as Task Optimization and also known as Cool Tools). The idea is that any implementation can create their own views of the base product objects and services and implement functionality against those new views. For example, in Oracle Utilities Customer Care and Billing, there is a Person object. That object is used to store and manage information about individuals as well as companies. In the base product you would use the Person Maintenance screen and fill in some of the screen when you wanted to register or maintain an individual, and fill out other parts of the screen when you wanted to register or maintain a company. This can be somewhat confusing to some customers. Using Configurable Objects this can be simplified. A business object can be created that is a view of any object. For example, you could create a Human business object which would cover the aspects of the Person object pertaining to an individual, and a Company business object to cover the aspects unique to a company. Even the tag names (i.e. field names) in the object can be changed to be more what the implementation is familiar with. The business object can also restructure the underlying object. For example, a common identifier for an individual in the USA is the Social Security number; this value is a Person Identifier (as this varies in each country). In the new Human object you can remap the Person Identifier as a Social Security number. To define a business object you use a schema editor built into the browser user interface and use a mapping language to set up the business objects. An example of the language is shown below in an extract of the schema for the Human business object. As you can see, there are mapping as well as formatting and other tags. This information can be built manually or using a wizard which generates the base structure for you to alter. This is all stored as metadata when saved. Once a business object is built it can be used as a basis for code, other business objects (we support inheritance), called by a screen (called a UI Map) or even as a Web Service. This is just a start with Configurable Objects, as you can also create views of base services called Business Services, Service Scripts used for non-object or complex object processing (as well as other things), UI Maps used for screens and Data Areas to reuse definitions across multiple objects. Configurable Objects are powerful and I only really touched on them here. Over the next few months I hope to add lots more entries about them.
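    The schema extract itself did not survive the formatting of this post. Purely as an illustration, a business object mapping of this kind might look something like the following; every element and mapped field name here is invented, not taken from the product:

        <schema>
            <!-- Hypothetical sketch only: element and field names are invented. -->
            <firstName mapField="ENTITY_NAME"/>
            <birthDate mapField="BIRTH_DT" dataType="date"/>
            <socialSecurityNumber mapField="PER_ID_NBR" required="true"/>
        </schema>

    The key idea is the remapping shown on the last element: the generic Person Identifier field surfaces in the Human view under a name the implementation actually uses.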

    Read the article

  • How To Disable Individual Plug-ins in Google Chrome

    - by The Geek
    Have you ever wondered how to disable useless or insecure browser plug-ins in Google Chrome? Here’s the lowdown on how to get rid of Java, Acrobat, Silverlight, and the rest of the plugins you probably want to get rid of. Disabling Plugins in Google Chrome If you head to about:plugins in your address bar, you’ll probably see a list of plugins, but won’t be able to disable them yet. What you’ll need to do is switch over to the Dev channel of Chrome, which gives you access to all the latest features—though you might be warned that sometimes the dev channel might be less stable than the release or beta channels. Ready to proceed? Head to the Dev Channel page, and then click the link to run the installer. You’ll be prompted to restart Chrome when you’re done. Note that Mac and Windows users can both run an installer to switch. Linux users will have to install a package. Note: Once you’ve switched to the Dev channel, you can’t really switch to the stable channel. You’ll have to uninstall Chrome and then reinstall the regular version. Now that you’ve switched to the dev channel and restarted your browser, head to about:plugins in the address bar, and then just disable each plugin you really don’t need. Plugins you can generally live without?  Java, Acrobat, Microsoft Office, Windows Presentation Foundation, Silverlight. These will be on a case-by-case basis, of course, but the vast majority of large websites don’t require any of those. When it comes right down to it, the only plugin that most people require is Flash… and leave the “Default Plug-in” alone too. Special thanks to @jordanconway for pointing out the solution.

    Read the article

  • Is throwing an error in unpredictable subclass-specific circumstances a violation of LSP?

    - by Motti Strom
    Say I wanted to create a Java List<String> (see spec) implementation that uses a complex subsystem, such as a database or file system, for its store, so that it becomes a simple persistent collection rather than a basic in-memory one. (We're limiting it specifically to a List of Strings for the purposes of discussion, but it could be extended to automatically de-/serialise any object, with some help. We can also provide persistent Sets, Maps and so on in this way too.) So here's a skeleton implementation:

        class DbBackedList implements List<String> {
            private DbBackedList() {}

            /** Returns a list, possibly non-empty */
            public static List<String> getList() {
                return new DbBackedList();
            }

            public String get(int index) {
                return Db.getTable().getRow(index).asString(); // may throw DbExceptions!
            }

            // add(String), add(int, String), etc. ...
        }

    My problem lies with the fact that the underlying DB API may encounter connection errors that are not among the exceptions the List interface specifies it should throw. My problem is whether this violates Liskov's Substitution Principle (LSP). Bob Martin actually gives an example of a PersistentSet in his paper on LSP that violates LSP. The difference is that his newly-specified exception there is determined by the inserted value and so is strengthening the precondition. In my case the connection/read error is unpredictable and due to external factors, and so is not technically a new precondition, merely an error of circumstance, perhaps like OutOfMemoryError, which can occur even when unspecified. In normal circumstances, the new Error/Exception might never be thrown. (The caller could catch it if it is aware of the possibility, just as a memory-restricted Java program might specifically catch OOME.) Is this therefore a valid argument for throwing an extra error, and can I still claim to be a valid java.util.List (or pick your SDK/language/collection in general) and not be in violation of LSP? If this does indeed violate LSP and is thus not practically usable, I have provided two less-palatable alternative solutions as answers that you can comment on; see below.
    Footnote: Use Cases. In the simplest case, the goal is to provide a familiar interface for cases when (say) a database is just being used as a persistent list, and allow regular List operations such as search, subList and iteration. Another, more adventurous, use-case is as a slot-in replacement for libraries that work with basic Lists, e.g. if we have a third-party task queue that usually works with a plain List:

        new TaskWorkQueue(new ArrayList<String>()).start()

    which is susceptible to losing all its queue in the event of a crash. If we just replace this with:

        new TaskWorkQueue(new DbBackedList()).start()

    we get instant persistence and the ability to share the tasks amongst more than one machine. In either case, we could either handle connection/read exceptions that are thrown, perhaps retrying the connection/read first, or allow them to throw and crash the program (e.g. if we can't change the TaskWorkQueue code).
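    One possible shape for the "circumstantial error" option discussed above is a sketch like this, with stubs standing in for the question's hypothetical Db API: the connection failure surfaces as a documented unchecked exception that aware callers may catch, by analogy with OOME:

        // Stub standing in for the question's hypothetical checked DB failure:
        class DbException extends Exception {}

        // Documented, unchecked, circumstantial: not a strengthened precondition.
        class DbUnavailableException extends RuntimeException {
            DbUnavailableException(Throwable cause) { super(cause); }
        }

        class DbBackedStore {
            public String get(int index) {
                try {
                    return rowFromDb(index);           // may fail for environmental reasons
                } catch (DbException e) {
                    throw new DbUnavailableException(e);
                }
            }

            private String rowFromDb(int index) throws DbException {
                return "row-" + index;                 // placeholder for Db.getTable()...
            }
        }

    A caller that knows it is talking to the persistent variant can catch DbUnavailableException and retry; a caller written against plain List semantics simply crashes, exactly as it would on an unspecified OutOfMemoryError.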

    Read the article

  • How to implement efficient Fog of War?

    - by Cambrano
    I've asked a question about how to implement Fog of War (FOW) with shaders, and I've got that working. I use the vertex color to identify the alpha of a single vertex. I guess most of you know what the FOW of Age of Empires was like; anyway, I'll explain it shortly: You have a map. Everything is unexplored (solid black / 100% transparency) at the beginning. When your NPCs / other game units explore the world (by moving around, mostly) they unshadow the map. That means: everything in a specific radius (viewrange) around an NPC is visible (0% transparency). Anything that is out of viewrange but already explored is visible but shadowed (50% transparency). So yeah, AoE had relatively huge maps, and the hardware requirements were something around 100 MHz, etc. So it should be relatively easy to implement something to solve this problem - actually. Okay. I'm currently adding planes above my world and setting the color per vertex. Why do I use many planes? Unity has a vertex limit of 65,000 per mesh. According to the size of my tiles and the size of my map I need more than one plane. So I actually need a lot of planes. This is obviously a PITA for my FPS. Well, so my question is: what are simple (in the sense of performance) techniques to implement a FOW shader? Okay, here is some simplified code showing what I'm doing so far:

        // Setup
        for (int x = 0; x < (Map.Dimension / planeSize); x++) {
            for (int z = 0; z < (Map.Dimension / planeSize); z++) {
                CreateMeshAt(x * planeSize, 3, z * planeSize);
            }
        }

        // Explore (called from NPCs when walking, for example)
        for (int x = (int) from.x - radius; x < from.x + radius; x++) {
            for (int z = (int) from.z - radius; z < from.z + radius; z++) {
                if (from.Distance(x, 1, z) > radius) continue;
                _transparency[x / tileSize, z / tileSize] = 0.5f;
            }
        }

        // Update
        foreach (GameObject plane in planes) {
            foreach (Vector3 vertex in vertices) {
                Vector3 worldPos = GetWorldPos(vertex);
                vertex.Color = new Color(0, 0, 0, _transparency[worldPos.x / tileSize, worldPos.z / tileSize]);
            }
        }

    My shader just sets the transparency of the vertex, which comes from the vertex color channel.
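    Since the shader itself isn't shown, here is a rough ShaderLab sketch of the kind of vertex-color alpha overlay described; this is a guess at the setup rather than the asker's actual shader, and the blend mode and render queue are assumptions:

        Shader "Custom/FogOfWarOverlay" {
            SubShader {
                Tags { "Queue" = "Transparent" "RenderType" = "Transparent" }
                Blend SrcAlpha OneMinusSrcAlpha
                ZWrite Off
                Pass {
                    CGPROGRAM
                    #pragma vertex vert
                    #pragma fragment frag

                    struct appdata { float4 vertex : POSITION; fixed4 color : COLOR; };
                    struct v2f     { float4 pos : SV_POSITION; fixed4 color : COLOR; };

                    v2f vert(appdata v) {
                        v2f o;
                        o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                        o.color = v.color; // alpha written by the Update loop above
                        return o;
                    }

                    // Black overlay whose opacity is the per-vertex alpha
                    fixed4 frag(v2f i) : COLOR {
                        return fixed4(0, 0, 0, i.color.a);
                    }
                    ENDCG
                }
            }
        }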

    Read the article

< Previous Page | 104 105 106 107 108 109 110 111 112 113 114 115  | Next Page >