Search Results

Search found 5987 results on 240 pages for 'nested sets'.


  • How to remove seams from a tile map in 3D?

    - by Grimshaw
    I am using my custom OpenGL engine to render a tilemap made with Tiled, using a widely circulated tileset from the web. There is nothing fancy going on. I load the TMX file from Tiled and generate vertex arrays and index arrays to render the tilemap. I am rendering this tilemap as a wall in my 3D world, meaning that I move around with a fly camera and at Z=0 there is a plane showing me my tiles. Everything is working correctly, but I get ugly seams between the tiles. I've tried orthographic and perspective cameras, and with either I found particular sets of parameters for the projection and view matrices where the artifacts did not show, but otherwise they are there 99% of the time in multiple patterns, depending on the zoom and camera parameters like field of view. Here's a screenshot of the artifact: http://i.imgur.com/HNV1g4M.png Here's the tileset I am using (which Tiled also uses and renders correctly): http://i.imgur.com/SjjHK4q.png My tileset has no mipmaps and is set to GL_NEAREST and GL_CLAMP_TO_EDGE. I've looked through many articles on the internet and nothing helped. I tried UV correction so the UVs fall half a texel in, rather than at the edge of the texel, to prevent interpolating with the neighbouring value (which is transparency). I tried debugging my geometry and verified that with no texture and a random color in each tile I don't see any seams. All vertices have integer coordinates, i.e., the first tile is a quad from (0,0) to (1,1) and so on. I tried adding a little offset both to the UVs and to the vertices to see if the gaps cease to exist. I disabled multisampling too. Nothing fixed it so far. Thanks.
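
    For reference, the half-texel inset described above usually looks something like the sketch below (plain C#; tile and atlas sizes are in pixels, and all names are illustrative). If insetting the UVs alone doesn't remove the seams, another common remedy is to pad each tile in the atlas with a duplicated one- or two-pixel border so that any bleed samples the same colour.

        // Sketch: inset the UVs of one tile by half a texel on each side so
        // that sampling never interpolates with the neighbouring tile.
        static void TileUVs(int tileX, int tileY, int tileSize,
                            int atlasWidth, int atlasHeight,
                            out float u0, out float v0, out float u1, out float v1)
        {
            float halfTexelU = 0.5f / atlasWidth;
            float halfTexelV = 0.5f / atlasHeight;

            u0 = (tileX * tileSize) / (float)atlasWidth  + halfTexelU;
            v0 = (tileY * tileSize) / (float)atlasHeight + halfTexelV;
            u1 = ((tileX + 1) * tileSize) / (float)atlasWidth  - halfTexelU;
            v1 = ((tileY + 1) * tileSize) / (float)atlasHeight - halfTexelV;
        }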

    Read the article

  • Dependency Analyser

    - by tsutha
    I have been watching three videos put together by the SSIS team on the Dependency Analyser. I am happy Microsoft have made an effort to do something about it. I have been asking for this feature since the SQL 2005 TAP. There is still a long way to go before they catch up with the competition. It looks like it currently supports SSIS and SQL dependencies. The release notes state it is still in development and supports only a limited set of tasks. You still would not know the impact of dropping a column on cubes, reports, etc. I hope that changes by the time RTM comes out. I am struggling to understand why it is impossible to do it across the solution. Ideally, if you have a BI solution which holds DB, SSIS, SSAS, SSRS & PPS projects, I would like to right-click and execute a dependency analyser stating what impact it would have if I drop or rename a specific column. Has anyone else looked at it, and what are your thoughts? Thanks, Sutha

    Read the article

  • How to make an HLSL effect just for lighting, without texture mapping?

    - by naprox
    I'm new to XNA. I created an effect and just want to use lighting, but with the default effect that XNA creates you have to do texture mapping or the model appears red, because of these lines of code in the effect file: float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { float4 output = float4(1,0,0,1); return output; } If I want to see my model (looking the way it does when I use BasicEffect) I must do texture mapping with UV coordinates, but my model does not have UV coordinates assigned, or its UV coordinates were not exported, and if I do texture mapping I get an error. (I do the texture mapping by this line of code in the vertex shader function, plus the other necessary code: output.UV = input.UV) I have many of these models and want to work with them (my models are in .FBX format). When I use BasicEffect I have no problem and the model appears correctly. How can I use just lighting in my custom effects, without doing texture mapping (because I have no UV coordinates in my models), so that my model looks like it does when I use BasicEffect? If you need my complete code, here it is: http://www.mediafire.com/?4jexhd4ulm2icm2 Here is the inside of my model using BasicEffect: http://i.imgur.com/ygP2h.jpg?1 And this is my code for drawing with or without BasicEffect inside my Draw() method: Matrix baseWorld = Matrix.CreateScale(Scale) * Matrix.CreateFromYawPitchRoll(Rotation.Y, Rotation.X, Rotation.Z) * Matrix.CreateTranslation(Position); foreach(ModelMesh mesh in Model.Meshes) { Matrix localWorld = ModelTransforms[mesh.ParentBone.Index] * baseWorld; foreach(ModelMeshPart part in mesh.MeshParts) { Effect effect = part.Effect; if (effect is BasicEffect) { ((BasicEffect)effect).World = localWorld; ((BasicEffect)effect).View = View; ((BasicEffect)effect).Projection = Projection; ((BasicEffect)effect).EnableDefaultLighting(); } else { setEffectParameter(effect, "World", localWorld); setEffectParameter(effect, "View", View); setEffectParameter(effect, "Projection", Projection); setEffectParameter(effect, "CameraPosition", CameraPosition); } } mesh.Draw(); } setEffectParameter is another method that sets an effect parameter when I use my custom effect.
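
    If the custom effect exposes lighting parameters, the existing else branch can feed them exactly the way it already feeds CameraPosition. The sketch below assumes hypothetical parameter names ("LightDirection", "LightColor", "AmbientColor") that would have to be declared in the .fx file; inside that effect the pixel shader would then combine the interpolated vertex normal with the light direction (for example saturate(dot(normal, -lightDirection)) scaled by the light colour, plus the ambient term) instead of sampling a texture, so no UV coordinates are needed.

        // Sketch only: pass lighting data to the custom effect via the same
        // helper used above. The parameter names are assumptions and must
        // match whatever the custom .fx file actually declares.
        setEffectParameter(effect, "World", localWorld);
        setEffectParameter(effect, "View", View);
        setEffectParameter(effect, "Projection", Projection);
        setEffectParameter(effect, "LightDirection", Vector3.Normalize(new Vector3(1f, -1f, 0f)));
        setEffectParameter(effect, "LightColor", Color.White.ToVector3());
        setEffectParameter(effect, "AmbientColor", new Vector3(0.15f, 0.15f, 0.15f));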

    Read the article

  • MapReduce

    - by kaleidoscope
    MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set of  intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key. Many real world tasks are expressible in this model, as shown in the paper. Programs written in this functional style are automatically parallelized and executed on a large cluster of commodity machines. The run-time system takes care of the details of partitioning the input data,  scheduling the program's execution across a set of machines, handling machine failures, and managing the required inter-machine communication. This allows programmers without any experience with parallel and distributed systems to easily utilize the resources of a large distributed system. Example: A process to count the appearances of each different word in a set of documents void map(String name, String document):   // name: document name   // document: document contents   for each word w in document:     EmitIntermediate(w, 1); void reduce(String word, Iterator partialCounts):   // word: a word   // partialCounts: a list of aggregated partial counts   int result = 0;   for each pc in partialCounts:     result += ParseInt(pc);   Emit(result); Here, each document is split in words, and each word is counted initially with a "1" value by the Map function, using the word as the result key. The framework puts together all the pairs with the same key and feeds them to the same call to Reduce, thus this function just needs to sum all of its input values to find the total appearances of that word.   Sarang, K
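
    The same word count can be sketched in ordinary C# to make the map, shuffle and reduce phases concrete. This is a single-process illustration of the programming model only, not a distributed implementation; the real framework is what spreads the work across machines.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class WordCount
        {
            // Map: emit an intermediate (word, 1) pair for every word in a document.
            static IEnumerable<KeyValuePair<string, int>> Map(string document) =>
                document.Split(new[] { ' ', '\t', '\n', '.', ',' },
                               StringSplitOptions.RemoveEmptyEntries)
                        .Select(w => new KeyValuePair<string, int>(w.ToLowerInvariant(), 1));

            // Reduce: sum the partial counts collected for one word.
            static int Reduce(string word, IEnumerable<int> partialCounts) =>
                partialCounts.Sum();

            static void Main()
            {
                string[] documents = { "the quick brown fox", "the lazy dog", "the fox" };

                // Shuffle: group all intermediate pairs by key, as the framework would,
                // then hand each group to Reduce.
                var counts = documents
                    .SelectMany(Map)
                    .GroupBy(pair => pair.Key, pair => pair.Value)
                    .Select(g => new { Word = g.Key, Count = Reduce(g.Key, g) });

                foreach (var c in counts)
                    Console.WriteLine($"{c.Word}: {c.Count}");   // e.g. "the: 3", "fox: 2"
            }
        }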

    Read the article

  • What to choose for a multilingual site with support for Markdown and commenting

    - by Kent
    I want to publish articles on a multilingual site. I want to be able to write an article in two languages and have them available on separate URLs: thesite.foo/english-breakfast and thesite.com/engelsk-frukost. If the user's web browser is set to English, I'd like to show a small notice at the top of the Swedish version with a link to the English one. The link should have an appropriate rel attribute for a translation (search for hreflang at http://diveintohtml5.org/semantics.html). There should be a way to list all articles belonging to these sets: Swedish only, English only, Swedish versions + English only, English versions + Swedish only. I'd like to publish these as four RSS feeds. And I would like to have two versions of the main site, one in Swedish (showing Swedish versions + English only) and one in English (showing English versions). I want to be able to write the articles using Markdown, as that is the formatting language I find most convenient. There should be a way for users to comment, and some kind of way for me to protect myself against comment spam. I am leaning towards learning Drupal. I suspect I'll have to code this behavior myself as a module. To be frank, I'd rather work with Java. Is Drupal the way to go? Or is there something more suitable for this project?

    Read the article

  • How can I calculate a vertex normal for a hard edge?

    - by K.G.
    Here is a picture of a lovely polygon: Circled is a vertex, and numbered are its adjacent faces. I have calculated the normals of those faces as such (not yet normalized, 0-indexed): Vertex 1 normal 0: 0.000000 0.000000 -0.250000 Vertex 1 normal 1: 0.000000 0.000000 -0.250000 Vertex 1 normal 2: -0.250000 0.000000 0.000000 Vertex 1 normal 3: -0.250000 0.000000 0.000000 Vertex 1 normal 4: 0.250000 0.000000 0.000000 What I'm wondering is, how can I determine, taken as given that I want this vertex to represent a hard edge, whether its normal should be the normal of 1/2 or 3/4? My plan after I glanced at the sketch I used to put this together was "Ha! I'll just use whichever two faces have the same normal!" and now I see that there are two sets of two faces for which this is true. Is there a rule I can apply based on the face winding, angle of the adjacent edges, moon phase, coin flip, to consistently choose a normal direction for this box? For the record, all of the other polygons I plan to use will have their normals dictated in Maya, but after encountering this problem, it made me really curious.
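
    In practice the usual answer is not to choose a single winner at all: for a hard edge the vertex is duplicated in the vertex buffer, so faces 1/2 share one copy carrying their normal and faces 3/4 share another, and a smoothing-angle threshold decides which adjacent faces are allowed to blend into a given copy. A plain C# sketch of that rule (System.Numerics used for convenience; names are illustrative):

        using System;
        using System.Collections.Generic;
        using System.Numerics;

        static class NormalHelper
        {
            // Face normal of a triangle with counter-clockwise winding a, b, c.
            public static Vector3 FaceNormal(Vector3 a, Vector3 b, Vector3 c) =>
                Vector3.Normalize(Vector3.Cross(b - a, c - a));

            // Average only those adjacent face normals that lie within the
            // smoothing angle of the face being rendered. Faces outside the
            // threshold keep the edge hard and get their own vertex copy.
            public static Vector3 VertexNormal(Vector3 renderedFaceNormal,
                                               IEnumerable<Vector3> adjacentFaceNormals,
                                               float smoothingAngleDegrees)
            {
                float cosThreshold = (float)Math.Cos(smoothingAngleDegrees * Math.PI / 180.0);
                Vector3 sum = Vector3.Zero;
                foreach (var n in adjacentFaceNormals)
                    if (Vector3.Dot(renderedFaceNormal, n) >= cosThreshold)
                        sum += n;
                return Vector3.Normalize(sum);
            }
        }

    With a 45-degree threshold the box in the picture ends up with one flat normal per face, which is effectively what hardening the edges in a modelling package like Maya gives you on export.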

    Read the article

  • Comparing Dates in Oracle Business Rule Decision Tables

    - by James Taylor
    I have been working with decision tables for some time but have never had a scenario where I needed to compare dates. The use case was to check whether a person's membership had expired. I didn't think much of it till I started to develop it. The first trap I fell into was trying to create ranges and bucket sets. The other trap I fell into was not converting the date field to a complete date. This may seem obvious to most people, but my Google searches came up with nothing, so I thought I would create a quick post. I assume everyone knows how to create a decision table, so I'm not going to go through those steps. The prerequisite for this post is to have a decision table with a payload that has a date field. This field must have the date in the following format: YYYY-MM-DDThh:mm:ss. Create a new condition in your decision table. Right-click on the condition to edit it and select the expression builder. In the expression builder, select the Functions tab. Expand the CurrentDate entry and select date, and click the Insert Into Expression button. In the Expression Builder you need to create an expression that will return true or false, so add the <= operator after CurrentDate.date. In my scenario my date field is memberExpire. Navigate to your date field, expand it, and select toGregorianCalendar(). Your expression will look something like the one shown below; click OK to get back to the decision table. Now it's just a matter of checking whether the value is true or false. Simple when you know how :-)
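
    The screenshot of the finished expression isn't reproduced here, but following the steps above it comes out along these lines (the payload name is illustrative; memberExpire is the date field from the example):

        CurrentDate.date <= Membership.memberExpire.toGregorianCalendar()

    The condition is then true while the membership is still current and false once it has expired.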

    Read the article

  • HTG Explains: The Best and Worst Ways to Send a Resume

    - by Eric Z Goodnight
    With so many people looking for jobs, the slightest edge in your resume presentation has potential to make or break your chances. But not all filetypes or methods are created equal—read on to see the potential pitfalls your resume faces. In this article, we’ll explore what can go wrong in a resume submission, what can be done to counteract it, and also go into why a prospective employer might ignore your resume based on your method of sending a resume. Finally, we’ll cover the best filetypes and methods that can help get you that new job you’ve been looking for. What Sets Your Resume Apart?

    Read the article

  • Oracle Traffic Director – download and check out new cool features in 11.1.1.7.0 by Frances Zhao

    - by JuergenKress
    As Oracle's strategic layer-7 software load balancer product, Oracle Traffic Director is fast, reliable, secure, easy to use and scalable, and you can deploy it as the reliable entry point for all TCP, HTTP and HTTPS traffic to application servers and web servers in your network. The latest release, Oracle Traffic Director 11.1.1.7.0, is available for ExaLogic and Database Appliance! For download and details please visit the Traffic Director OTN website. In this release, we have introduced some major new functionality and improvements. Web application firewall: Oracle Traffic Director supports web application firewalls. A web application firewall (WAF) is a filter or server plugin that applies a set of rules, called rule sets, to an HTTP request. Using a web application firewall, users can inspect traffic and deny requests to protect back-end applications from CSRF vulnerabilities and common attacks such as cross-site scripting. WebSocket connections: Oracle Traffic Director handles WebSocket connections by default. WebSocket connections are long-lived and allow support for live content, real-time games, video chatting, and so on. Support for LDAP/T3 load balancing: Oracle Traffic Director now supports basic LDAP/T3 load balancing at layer 7, where requests are handled as generic TCP connections for traffic tunneling. It works in full-NAT mode. Please download and try it out. For more information, check out the data sheet and the documentation. For regular updates, become a member of the WebLogic Partner Community: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • Book: DevOps for Developers

    - by Tori Wieldt
    We all know development and operations often act like silos, with "Just throw it over the wall!" being the battle cry. Many organizations unwittingly contribute to gaps between teams, with management by (competing) objectives; a clash of Agile practices vs. more conservative approaches; and teams using different sets of tools, such as Nginx, OpenEJB, and Windows on developers' machines and Apache, Glassfish, and Linux on production machines. At best, you've got sub-optimal collaboration; at worst, you've got the Hatfields and the McCoys. The book DevOps for Developers helps bridge the gap between development and operations by aligning incentives and sharing approaches for processes and tools. It introduces DevOps as a modern way of bringing development and operations together. It also aims to broaden the use of Agile practices to operations, to foster collaboration and streamline the entire software delivery process in a holistic way. Some aspects of DevOps may not be new; for example, you may have used the tool Puppet for years already. But with a new mindset ("my job is not just to code, it's to serve the customer in the best way possible") and a complete set of recipes, you'll be well on your way to success. DevOps for Developers also provides real-world use cases (e.g., how to use Kanban or how to release software). It provides a way to be successful in the real development/operations world. DevOps for Developers is written by Michael Hutterman, Java Champion and founder of the Cologne Java User Group. "With DevOps for Developers, developers can learn to apply patterns to improve collaboration between development and operations as well as recipes for processes and tools to streamline the delivery process," Hutterman explains.

    Read the article

  • Don't Miss the Social Engagement Center -- See How Social Cloud Tools Can Work for You

    - by Oracle OpenWorld Blog Team
    Are you ready to get social at Oracle OpenWorld? Stop by the Oracle Social Engagement Center in the Moscone South Upper Lobby (near the South Meetup location) and see Oracle Cloud Social Services in action. Ask Oracle's social experts how they're using next-generation enterprise social tools to deliver extreme engagement. Watch in near real-time as Oracle reaches out to inform, inspire, and engage global communities. We're showing: Collective Intellect for specific data sets on 2 large screens; Vitrue analytics and Vitrue publishing on 2 large screens; and relative Twitter activity across the hashtags #OOW, #OOW12, #openworld, #oracle and the accounts @oracle and @openworld on 1 large screen. Plus we have 5 computers where we're actively working with the Collective Intellect and Vitrue technologies, so you can see how they function. So come visit the Social Engagement Center to learn how Oracle is using and engaging with these tools. And don't forget the Social Plaza @ OpenWorld event on Tuesday from noon to 8:00 p.m. Join us for food, drink, the afternoon keynote, and some cool libations on a hot afternoon.

    Read the article

  • SharePoint 2010 Video Training

    - by Sahil Malik
    Ad:: SharePoint 2007 Training in .NET 3.5 technologies (more information). Yes, the DVD is finally available. This is an exhaustive 14 hour video course that Carl and I recorded back in April. It is an end-to-end overview of SharePoint 2010. You can view more details including ordering information about the DVD here. And if you’re interested, a SharePoint 2007 video training version is also available. Carl and I worked quite hard on putting these together, so we hope you enjoy these. Detailed Table of Contents: Introduction (13:49) 30,000 Foot Overview (42:07) Application Management (43:35) User Experience (16:00) Writing Code Part 1 (1:07:49) Writing Code Part 2 (34:41) Simple Web Parts (14:01) Visual Web Parts (6:35) Pages (35:02) Putting it All Together (29:13) Client Side Technology (49:19) ADO.NET Data Services (51:29) Custom Data Services (43:30) Managing Data (29:02) Managing Data: Content Types (17:11) Managing Data: Events (19:22) Managing Data: List Scalability (35:51) Managing Data: Querying (20:07) Enterprise Content Management: DocumentIDs and Document Sets (16:44) Enterprise Content Management: Metadata Infrastructure (22:13) Enterprise Content Management: Record Management (26:27) Enterprise Content Management: Content Organizer (7:21) Enterprise Content Management: Enterprise Content Types (11:21) Business Connectivity Services (BCS) in the SharePoint Designer (26:09) BCS in Visual Studio (9:57) Workflows in the SharePoint Designer (22:07) Workflows in Visual Studio (19:01) Business Intelligence (21:14) Excel (15:25) Performance Point (24:37) Security: Claims-Based Authentication (27:13) Security: Secure Store Service (11:04) Security: The SharePoint Object Model (11:16) Comment on the article ....

    Read the article

  • Working with Timelines with LINQ to Twitter

    - by Joe Mayo
    When first working with the Twitter API, I thought that using SinceID would be an effective way to page through timelines. In practice it doesn’t work well for various reasons. To explain why, Twitter published an excellent document that is a must-read for anyone working with timelines: Twitter Documentation: Working with Timelines This post shows how to implement the recommended strategies in that document by using LINQ to Twitter. You should read the document in it’s entirety before moving on because my explanation will start at the bottom and work back up to the top in relation to the Twitter document. What follows is an explanation of SinceID, MaxID, and how they come together to help you efficiently work with Twitter timelines. The Role of SinceID Specifying SinceID says to Twitter, “Don’t return tweets earlier than this”. What you want to do is store this value after every timeline query set so that it can be reused on the next set of queries.  The next section will explain what I mean by query set, but a quick explanation is that it’s a loop that gets all new tweets. The SinceID is a backstop to avoid retrieving tweets that you already have. Here’s some initialization code that includes a variable named sinceID that will be used to populate the SinceID property in subsequent queries: // last tweet processed on previous query set ulong sinceID = 210024053698867204; ulong maxID; const int Count = 10; var statusList = new List<status>(); Here, I’ve hard-coded the sinceID variable, but this is where you would initialize sinceID from whatever storage you choose (i.e. a database). The first time you ever run this code, you won’t have a value from a previous query set. Initially setting it to 0 might sound like a good idea, but what if you’re querying a timeline with lots of tweets? Because of the number of tweets and rate limits, your query set might take a very long time to run. A caveat might be that Twitter won’t return an entire timeline back to Tweet #0, but rather only go back a certain period of time, the limits of which are documented for individual Twitter timeline API resources. So, to initialize SinceID at too low of a number can result in a lot of initial tweets, yet there is a limit to how far you can go back. What you’re trying to accomplish in your application should guide you in how to initially set SinceID. I have more to say about SinceID later in this post. The other variables initialized above include the declaration for MaxID, Count, and statusList. The statusList variable is a holder for all the timeline tweets collected during this query set. You can set Count to any value you want as the largest number of tweets to retrieve, as defined by individual Twitter timeline API resources. To effectively page results, you’ll use the maxID variable to set the MaxID property in queries, which I’ll discuss next. Initializing MaxID On your first query of a query set, MaxID will be whatever the most recent tweet is that you get back. Further, you don’t know what MaxID is until after the initial query. The technique used in this post is to do an initial query and then use the results to figure out what the next MaxID will be.  
Here’s the code for the initial query: var userStatusResponse = (from tweet in twitterCtx.Status where tweet.Type == StatusType.User && tweet.ScreenName == "JoeMayo" && tweet.SinceID == sinceID && tweet.Count == Count select tweet) .ToList(); statusList.AddRange(userStatusResponse); // first tweet processed on current query maxID = userStatusResponse.Min( status => ulong.Parse(status.StatusID)) - 1; The query above sets both SinceID and Count properties. As explained earlier, Count is the largest number of tweets to return, but the number can be less. A couple reasons why the number of tweets that are returned could be less than Count include the fact that the user, specified by ScreenName, might not have tweeted Count times yet or might not have tweeted at least Count times within the maximum number of tweets that can be returned by the Twitter timeline API resource. Another reason could be because there aren’t Count tweets between now and the tweet ID specified by sinceID. Setting SinceID constrains the results to only those tweets that occurred after the specified Tweet ID, assigned via the sinceID variable in the query above. The statusList is an accumulator of all tweets receive during this query set. To simplify the code, I left out some logic to check whether there were no tweets returned. If  the query above doesn’t return any tweets, you’ll receive an exception when trying to perform operations on an empty list. Yeah, I cheated again. Besides querying initial tweets, what’s important about this code is the final line that sets maxID. It retrieves the lowest numbered status ID in the results. Since the lowest numbered status ID is for a tweet we already have, the code decrements the result by one to keep from asking for that tweet again. Remember, SinceID is not inclusive, but MaxID is. The maxID variable is now set to the highest possible tweet ID that can be returned in the next query. The next section explains how to use MaxID to help get the remaining tweets in the query set. Retrieving Remaining Tweets Earlier in this post, I defined a term that I called a query set. Essentially, this is a group of requests to Twitter that you perform to get all new tweets. A single query might not be enough to get all new tweets, so you’ll have to start at the top of the list that Twitter returns and keep making requests until you have all new tweets. The previous section showed the first query of the query set. The code below is a loop that completes the query set: do { // now add sinceID and maxID userStatusResponse = (from tweet in twitterCtx.Status where tweet.Type == StatusType.User && tweet.ScreenName == "JoeMayo" && tweet.Count == Count && tweet.SinceID == sinceID && tweet.MaxID == maxID select tweet) .ToList(); if (userStatusResponse.Count > 0) { // first tweet processed on current query maxID = userStatusResponse.Min( status => ulong.Parse(status.StatusID)) - 1; statusList.AddRange(userStatusResponse); } } while (userStatusResponse.Count != 0 && statusList.Count < 30); Here we have another query, but this time it includes the MaxID property. The SinceID property prevents reading tweets that we’ve already read and Count specifies the largest number of tweets to return. Earlier, I mentioned how it was important to check how many tweets were returned because failing to do so will result in an exception when subsequent code runs on an empty list. The code above protects against this problem by only working with the results if Twitter actually returns tweets. 
Reasons why there wouldn’t be results include: if the first query got all the new tweets there wouldn’t be more to get and there might not have been any new tweets between the SinceID and MaxID settings of the most recent query. The code for loading the returned tweets into statusList and getting the maxID are the same as previously explained. The important point here is that MaxID is being reset, not SinceID. As explained in the Twitter documentation, paging occurs from the newest tweets to oldest, so setting MaxID lets us move from the most recent tweets down to the oldest as specified by SinceID. The two loop conditions cause the loop to continue as long as tweets are being read or a max number of tweets have been read.  Logically, you want to stop reading when you’ve read all the tweets and that’s indicated by the fact that the most recent query did not return results. I put the check to stop after 30 tweets are reached to keep the demo from running too long – in the console the response scrolls past available buffer and I wanted you to be able to see the complete output. Yet, there’s another point to be made about constraining the number of items you return at one time. The Twitter API has rate limits and making too many queries per minute will result in an error from twitter that LINQ to Twitter raises as an exception. To use the API properly, you’ll have to ensure you don’t exceed this threshold. Looking at the statusList.Count as done above is rather primitive, but you can implement your own logic to properly manage your rate limit. Yeah, I cheated again. Summary Now you know how to use LINQ to Twitter to work with Twitter timelines. After reading this post, you have a better idea of the role of SinceID - the oldest tweet already received. You also know that MaxID is the largest tweet ID to retrieve in a query. Together, these settings allow you to page through results via one or more queries. You also understand what factors affect the number of tweets returned and considerations for potential error handling logic. The full example of the code for this post is included in the downloadable source code for LINQ to Twitter.   @JoeMayo

    Read the article

  • Oracle Fusion Applications Design Patterns Now Available

    - by Frank Nimphius
    "The Oracle Fusion Applications user experience design patterns are published! These new, reusable usability solutions and best-practices, which will join Oracle dashboard patterns and guidelines that are already available online, are used by Oracle to artfully bring to life a new standard in the user experience, or UX, of enterprise applications. Now, the Oracle applications development community can benefit from the science behind the Oracle Fusion Applications user experience, too.These Oracle Fusion Applications UX Design Patterns, or blueprints, enable Oracle applications developers and system implementers everywhere to leverage professional usability insight when [...]  designing exciting, new, highly usable applications -- in the cloud or on-premise.  Based on the Oracle Application Development Framework (ADF) components, the Oracle Fusion Applications patterns and guidelines are proven with real users and in the Applications UX usability labs, so you can get right to work coding productivity-enhancing designs that provide an advantage for your entire business.  What’s the best way to get started? We’ve made that easy, too. The Design Filter Tool (DeFT) selects the best pattern for your user type and task. Simply adapt your selection for your own task flow and content, and you’re on your way to a really great applications user experience. More Oracle applications design patterns and training are coming your way in the future. To provide feedback on the sets that are currently available, let us know in the comments section or use the contact form provided."

    Read the article

  • AMD FX CPU drivers/patch

    - by Mubeen Shahid
    I have been using Ubuntu since 2009 on notebooks with Intel CPUs. However, now that I am using AMD's FX 6300, I am interested in knowing whether there exists anything from Ubuntu (specifically any kernel enhancements/drivers/patches) for AMD's FX "family 15h" Piledrivers. Reason: I would like to have a kernel which uses the hardware to its full capacity and is able to use the latest instruction sets, for maximum performance. I did some tests, starting with compiling stable 3.9.7 on my 12.04 LTS box. During compilation I chose processor vendor AMD (unchecked Intel/VIA/etc.), and when I started Ubuntu with this compiled kernel, in the section "System Settings - Additional Drivers" I found that, in addition to the graphics card drivers, there were AMD family 15h drivers also. However, I would prefer something in this regard tested/signed by Ubuntu developers. P.S.: 1) The kernel that I compiled has some issues with the Nvidia graphics drivers, so I deleted kernel 3.9.7 and installed signed 3.8.xx from the Ubuntu repositories. 2) In case somebody is planning to advise me to install "AMD64", I am not talking about AMD64 (which is in fact for the 64-bit platform).

    Read the article

  • Do ORMs enable the creation of rich domain models?

    - by Augusto
    After using Hibernate on most of my projects for about 8 years, I've landed at a company that discourages its use and wants applications to interact with the DB only through stored procedures. After doing this for a couple of weeks, I haven't been able to create a rich domain model of the application I'm starting to build, and the application just looks like a (horrible) transaction script. Some of the issues I've found are: I cannot navigate the object graph, as the stored procedures just load the minimum amount of data, which means that sometimes we have similar objects with different fields. One example: we have a stored procedure to retrieve all the data for a customer, and another to retrieve account information plus a few fields from the customer. Lots of the logic ends up in helper classes, so the code becomes more structured (with entities used as old C structs). More boring scaffolding code, as there's no framework that extracts result sets from a stored procedure and puts them into an entity. My questions are: Has anyone been in a similar situation and didn't agree with the stored procedure approach? What did you do? Is there an actual benefit to using stored procedures, apart from the silly point of "no one can issue a drop table"? Is there a way to create a rich domain using stored procedures? I know that there's the possibility of using AOP to inject DAOs/Repositories into entities to be able to navigate the object graph. I don't like this option as it's very close to voodoo.
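
    On the scaffolding point, a small generic helper can at least centralise the result-set-to-entity mapping so each repository only supplies a lambda. This is a plain ADO.NET sketch, not a recommendation of any particular framework; the procedure, parameter and entity names in the usage comment are made up.

        using System;
        using System.Collections.Generic;
        using System.Data;
        using System.Data.SqlClient;

        public static class ProcRunner
        {
            // Runs a stored procedure and maps each row with the supplied function.
            public static List<T> Query<T>(string connectionString,
                                           string procName,
                                           Func<IDataRecord, T> map,
                                           params SqlParameter[] parameters)
            {
                var results = new List<T>();
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(procName, connection))
                {
                    command.CommandType = CommandType.StoredProcedure;
                    command.Parameters.AddRange(parameters);
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                        while (reader.Read())
                            results.Add(map(reader));
                }
                return results;
            }
        }

        // Hypothetical usage:
        // var customers = ProcRunner.Query(connectionString, "usp_GetCustomer",
        //     r => new Customer { Id = (int)r["CustomerId"], Name = (string)r["Name"] },
        //     new SqlParameter("@CustomerId", 42));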

    Read the article

  • Was API hooking done as needed for Stuxnet to work? I don't think so

    - by The Kaykay
    Caveat: I am a political science student and I have tried my level best to understand the technicalities; if I still sound naive please overlook that. In the Symantec report on Stuxnet, the authors say that once the worm infects the 32-bit Windows computer which has a WINCC setup on it, Stuxnet does many things and that it specifically hooks the function CreateFileA(). This function is the route which the worm uses to actually infect the .s7p project files that are used to program the PLCs. ie when the PLC programmer opens a file with .s7p the control transfers to the hooked function CreateFileA_hook() instead of CreateFileA(). Once Stuxnet gains the control it covertly inserts code blocks into the PLC without the programmers knowledge and hides it from his view. However, it should be noted that there is also one more function called CreateFileW() which does the same task as CreateFileA() but both work on different character sets. CreateFileA works with ASCII character set and CreateFileW works with wide characters or Unicode character set. Farsi (the language of the Iranians) is a language that needs unicode character set and not ASCII Characters. I'm assuming that the developers of any famous commercial software (for ex. WinCC) that will be sold in many countries will take 'Localization' and/or 'Internationalization' into consideration while it is being developed in order to make the product fail-safe ie. the software developers would use UNICODE while compiling their code and not just 'ASCII'. Thus, I think that CreateFileW() would have been invoked on a WINCC system in Iran instead of CreateFileA(). Do you agree? My question is: If Stuxnet has hooked only the function CreateFileA() then based on the above assumption there is a significant chance that it did not work at all? I think my doubt will get clarified if: my assumption is proved wrong, or the Symantec report is proved incorrect. Please help me clarify this doubt. Note: I had posted this question on the general stackexchange website and did not get appropriate responses that I was looking for so I'm posting it here.

    Read the article

  • The Ideal Platform for Oracle Database 12c In-Memory and in-memory Applications

    - by Michael Palmeter (Engineered Systems Product Management)
    Oracle SuperCluster, Oracle's SPARC M6 and T5 servers, Oracle Solaris, Oracle VM Server for SPARC, and Oracle Enterprise Manager have been co-engineered with Oracle Database and Oracle applications to provide maximum In-Memory performance, scalability, efficiency and reliability for the most critical and demanding enterprise deployments. The In-Memory option for the Oracle Database 12c, which has just been released, has been specifically optimized for SPARC servers running Oracle Solaris. The unique combination of Oracle's M6 32 Terabytes Big Memory Machine and Oracle Database 12c In-Memory demonstrates 2X increase in OLTP performance and 100X increase in analytics response times, allowing complex analysis of incredibly large data sets at the speed of thought. Numerous unique enhancements, including the large cache on the SPARC M6 processor, massive 32 TB of memory, uniform memory access architecture, Oracle Solaris high-performance kernel, and Oracle Database SGA optimization, result in orders of magnitude better transaction processing speeds across a range of in-memory workloads. Oracle Database 12c In-Memory The Power of Oracle SuperCluster and In-Memory Applications (Video, 3:13) Oracle’s In-Memory applications Oracle E-Business Suite In-Memory Cost Management on the Oracle SuperCluster M6-32 (PDF) Oracle JD Edwards Enterprise One In-Memory Applications on Oracle SuperCluster M6-32 (PDF) Oracle JD Edwards Enterprise One In-Memory Sales Advisor on the SuperCluster M6-32 (PDF) Oracle JD Edwards Enterprise One Project Portfolio Management on the SuperCluster M6-32 (PDF)

    Read the article

  • Which is more important in a web application code promotion hierarchy? production environment to repo equivalence or unidirectional propagation?

    - by ghbarratt
    Let's say you have a code promotion hierarchy consisting of several environments, the polar ends of which are development (dev) and production (prod). Let's say you also have a web application where important (but not developer-controlled) files are created (and perhaps altered) in the production environment. Let's say that you (or someone above you) decided that the files which are controlled/created/altered/deleted in the production environment needed to go into the repository. Which of the following two sets of practices/approaches do you find best: 1. Committing these non-developed file modifications made in the production environment, so that the repository reflects the production environment as closely and as often as possible. 2. Generally ignoring the non-developed production environment alterations, placing confidence in backups to restore the production environment should it be harmed, and keeping a resolution to avoid pushing developments through the promotion hierarchy in the reverse direction (avoiding pushing from prod to dev), only committing the files found in the production environment if they were absolutely necessary in other environments for development. So, 1 or 2, and why? PS - I am currently slightly biased toward maintaining production-environment-to-repository equivalence (option 1), but I keep an open mind and would accept an answer supporting either.

    Read the article

  • SFTP permission denied on files owned by www-data

    - by Charles Roper
    I have a pretty standard server set up running Apache and PHP. An app I am running creates files and these are owned by the Apache user www-data. Files that I upload via SFTP are owned by my own user charlesr. All files are part of the www-data group. My problem is that I cannot modify or overwrite any of the files via SFTP which are owned by www-data, even though charlesr is part of the www-data group. I can modify the files no problem via a SSH session. So I'm not sure what to do. How do I give my SFTP session permissions to modify www-data owned files? For a bit of background, these are the notes I wrote for myself when setting-up the server: Now set up permissions on `/var/www` where your files are served from by default: $ sudo adduser $USER www-data $ sudo chgrp -R www-data /var/www $ sudo chmod -R g+rw /var/www $ sudo chmod -R g+s /var/www Now log out and log in again to make the changes take hold. The previous set of commands does the following: 1. adds the current user ($USER) to the `www-data` group; 2. changes `/var/www` to belong to the `www-data` group; 3. adds read/write permissions to the group that `/var/www` belongs to; 4. sets the SGID bit on `/var/www`; this final point bears some explaining. And then I go on to explain to myself what setting the SGID bit means (i.e. all files created in /var/www become part of the www-data group automatically). Btw, nothing feels sweeter than going back and reading your own detailed notes on the what, how and why of your own server set up when trying to troubleshoot like this - I recommend it highly to all beginners like myself :-)

    Read the article

  • How to create an Orthographic display in OpenGL (ES) that handles different screen sizes and orientations?

    - by Piku
    I'm trying to create an iPad/iPhone game using GLES2.0 that contains a 3D scene with a heads-up-display/GUI overlaid on the top. However, this problem would also apply if I were to port my game to a computer and run the game in a resizable window, or allow the user to change screen resolutions... When trying to make the 2D GUI/HUD work I've made the assumption that all I'm really doing is drawing a load of 2D textured 'quads' on the screen and am trying to treat the orthographic projection as an old-style 2D display with 0,0 in the upper left and screenWidth,ScreenHeight in the lower right. This causes me all sorts of confusion when I rotate my ipad into Landscape mode since I can't work out what to put into my projection and modelview matrices to turn everything around the right way. It also gets messy if I want to support the iPad's large screen, an iPhone or a Retina display since I have to then draw three sets of textures for everything and work out which ones to use. Should I be trying to map the 2D OpenGL co-ords 1:1 with the screen? While typing out this question it occurs to me that I could keep my origin in the centre, still running -1/+1 along the axes. This would let me scale my 2D content appropriately on the different screen sizes, but wouldn't I end up with the textures being scaled and possibly losing quality? I'm using OpenGLES 2.0 and have a matrix library that has equivalents to the GLES1.1 glOrthof() and glFrustrum() calls.
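
    For the pixel-coordinate approach, the matrix itself is straightforward: the sketch below builds a column-major orthographic projection equivalent to glOrthof(0, width, height, 0, -1, 1), which puts (0,0) at the top-left and (width, height) at the bottom-right. It is plain C# illustrating only the maths, not tied to any particular matrix library.

        // Column-major 4x4 orthographic projection, equivalent to
        // glOrthof(left, right, bottom, top, near, far). Pass
        // (0, screenWidth, screenHeight, 0, -1, 1) for a top-left-origin
        // pixel-space HUD.
        static float[] Ortho(float left, float right, float bottom, float top,
                             float near, float far)
        {
            var m = new float[16];
            m[0]  = 2f / (right - left);
            m[5]  = 2f / (top - bottom);
            m[10] = -2f / (far - near);
            m[12] = -(right + left) / (right - left);
            m[13] = -(top + bottom) / (top - bottom);
            m[14] = -(far + near) / (far - near);
            m[15] = 1f;
            return m;
        }

    Rebuilding this matrix from the current screen dimensions whenever the orientation or resolution changes keeps 2D coordinates 1:1 with pixels; a common way to cope with iPhone/iPad/Retina is then to position HUD elements relative to the current width and height (corners, centre, fractions of the screen) rather than at hard-coded pixel positions, and to pick the texture set closest to the native resolution so quads draw at or near 1:1 texel-to-pixel scale.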

    Read the article

  • XNA 4.0 2D sidescroller variable terrain heightmap for walking/collision

    - by JiminyCricket
    I've been fooling around with moving on sloped tiles in XNA and it is semi-working but not completely satisfactory. I also have been thinking that having sets of predetermined slopes might not give me terrain that looks "organic" enough. There is also the problem of having to construct several different types of tile for each slope when they're chained together (only 45 degree tiles will chain perfectly as I understand it). I had thought of somehow scanning for connected chains of sloped tiles and treating it as a new large triangle, as I was having trouble with glitching at the edges where sloped tiles connect. But, this leads back to the problem of limiting the curvature of the terrain. So...what I'd like to do now is create a simple image or texture of the terrain of a level (or section of the level) and generate a simple heightmap (of the Y's for each X) for the terrain. The player's Y position would then just be updated based on their X position. Is there a simple way of doing this (or a better way of solving this problem)? The main problem I can see with this method is the case where there are areas above the ground that can be walked on. Maybe there is a way to just map all walkable ground areas? I've been looking at this helpful bit of code: http://thirdpartyninjas.com/blog/2010/07/28/sloped-platform-collision/ but need a way to generate the actual points/vectors.
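
    A minimal sketch of the scan described above, assuming the terrain is available as an XNA Texture2D in which solid ground is any pixel with non-zero alpha (all names are illustrative):

        // using Microsoft.Xna.Framework;
        // using Microsoft.Xna.Framework.Graphics;

        // Build a per-column ground height: the first solid pixel from the top.
        static int[] BuildHeightmap(Texture2D terrain)
        {
            var pixels = new Color[terrain.Width * terrain.Height];
            terrain.GetData(pixels);

            var heights = new int[terrain.Width];
            for (int x = 0; x < terrain.Width; x++)
            {
                heights[x] = terrain.Height;        // default: no ground in this column
                for (int y = 0; y < terrain.Height; y++)
                {
                    if (pixels[y * terrain.Width + x].A > 0)
                    {
                        heights[x] = y;             // first solid pixel from the top
                        break;
                    }
                }
            }
            return heights;
        }

        // Hypothetical usage when moving the player:
        // int column = Math.Min(heights.Length - 1, Math.Max(0, (int)playerX));
        // playerY = heights[column] - playerHeight;

    Note that this records only one surface per column, so the walkable-areas-above-the-ground case mentioned in the question would need either a separate map per walkable layer or a list of solid spans per column instead of a single height.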

    Read the article

  • Ditch cPanel / WHM in favour of manual setup

    - by BWRic
    We currently use cPanel / WHM on a reseller account but are looking at getting a dedicated server. My first thought was to duplicate this setup on the dedicated box to allow us to quickly create new accounts. It'll be a managed server, so they'll have set up the LAMP stack. I'm curious whether I actually need cPanel and WHM. We don't use many of the features of cPanel / WHM, just creating accounts and databases; clients do not have FTP access. I'm no sysadmin and come from a Windows / GUI background, but have some knowledge of setting up development servers. WHM: Creating accounts. I presume this sets up the Apache virtual host, FTP access and DNS settings. I have some knowledge of editing the Apache files to create virtual hosts. Am I correct in thinking that as long as the DNS is pointing to the server IP and the virtual host is configured, the server can serve the (PHP) pages? I'm not sure I need per-site FTP access, as only we will have access, so I could have server-wide access to just the htdocs directories to view all the sites. The company that supplies the dedicated hosts would also provide their own DNS management tool, so I don't need the cPanel one. MySQL: Creating users and databases. We use cPanel to create the MySQL users and databases. As it's a dedicated box and I can have root access, I think this could be replaced by SQLyog for DB management and phpMyAdmin for user management. Do I need cPanel, or can I get by with editing a few text files to create the accounts and then use the MySQL tools for the databases? Or am I missing something major about how the sites are configured?
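
    On the virtual-host question above: yes, if DNS resolves to the server's IP and a matching vhost exists, Apache will serve the PHP pages with no control panel involved. A minimal name-based vhost looks roughly like this (Apache 2.4 syntax; the domain and paths are placeholders):

        # /etc/apache2/sites-available/example-client.conf
        <VirtualHost *:80>
            ServerName example-client.com
            ServerAlias www.example-client.com
            DocumentRoot /var/www/example-client/htdocs

            <Directory /var/www/example-client/htdocs>
                AllowOverride All
                Require all granted
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/example-client-error.log
            CustomLog ${APACHE_LOG_DIR}/example-client-access.log combined
        </VirtualHost>

    On Debian/Ubuntu you would enable it with a2ensite example-client and reload Apache; on Apache 2.2 the Require line becomes the older Order/Allow directives.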

    Read the article

  • Program instantly closing [migrated]

    - by Ben Clayton
    I made this program, and when I compiled it there were no errors, but the program just instantly closes. Any answers would be appreciated.

        #include <iostream>  // Main commands
        #include <string>    // String commands
        #include <windows.h> // Sleep

        using namespace std;

        int main()
        {
            // Declaring variables
            float a;
            bool end;
            std::string input;
            end = false; // Making sure program doesn't end instantly

            cout << "Enter start then the number you want to count down from." << ".\n";

            while (end = false) {
                cin >> input;
                cout << ".\n";

                if (input.find("end") != std::string::npos) // Ends the program if user types end
                    end = true;
                else if (input.find("start" || /* || is or operator*/ "restart") != std::string::npos) // Sets up the countdown timer if the user types start
                {
                    cin >> a;
                    cout << ".\n";
                    while (a > 0) {
                        Sleep(100);
                        a = a - 0.1;
                        cout << a << ".\n";
                    }
                    cout << "Finished! Enter restart and then another number, or enter end to close the program" << ".\n";
                }
                else // Tells user to start program
                    cout << "Enter start";
            }
            return 0; // Ends program when (end = true)
        }

    Read the article

  • When someone deletes a shared data source in SSRS

    - by Rob Farley
    SQL Server Reporting Services plays nicely. You can have things in the catalogue that get shared. You can have Reports that have Links, Datasets that can be used across different reports, and Data Sources that can be used in a variety of ways too. So if you find that someone has deleted a shared data source, you potentially have a bit of a horror story going on. And this works for this month’s T-SQL Tuesday theme, hosted by Nick Haslam, who wants to hear about horror stories. I don’t write about LobsterPot client horror stories, so I’m writing about a situation that a fellow MVP friend asked me about recently instead. The best thing to do is to grab a recent backup of the ReportServer database, restore it somewhere, and figure out what’s changed. But of course, this isn’t always possible. And it’s much nicer to help someone with this kind of thing, rather than to be trying to fix it yourself when you’ve just deleted the wrong data source. Unfortunately, it lets you delete data sources, without trying to scream that the data source is shared across over 400 reports in over 100 folders, as was the case for my friend’s colleague. So, suddenly there’s a big problem – lots of reports are failing, and the time to turn it around is small. You probably know which data source has been deleted, but getting the shared data source back isn’t the hard part (that’s just a connection string really). The nasty bit is all the re-mapping, to get those 400 reports working again. I know from exploring this kind of stuff in the past that the ReportServer database (using its default name) has a table called dbo.Catalog to represent the catalogue, and that Reports are stored here. However, the information about what data sources these deployed reports are configured to use is stored in a different table, dbo.DataSource. You could be forgiven for thinking that shared data sources would live in this table, but they don’t – they’re catalogue items just like the reports. Let’s have a look at the structure of these two tables (although if you’re reading this because you have a disaster, feel free to skim past). Frustratingly, there doesn’t seem to be a Books Online page for this information, sorry about that. I’m also not going to look at all the columns, just ones that I find interesting enough to mention, and that are related to the problem at hand. These fields are consistent all the way through to SQL Server 2012 – there doesn’t seem to have been any changes here for quite a while. dbo.Catalog The Primary Key is ItemID. It’s a uniqueidentifier. I’m not going to comment any more on that. A minor nice point about using GUIDs in unfamiliar databases is that you can more easily figure out what’s what. But foreign keys are for that too… Path, Name and ParentID tell you where in the folder structure the item lives. Path isn’t actually required – you could’ve done recursive queries to get there. But as that would be quite painful, I’m more than happy for the Path column to be there. Path contains the Name as well, incidentally. Type tells you what kind of item it is. Some examples are 1 for a folder and 2 a report. 4 is linked reports, 5 is a data source, 6 is a report model. I forget the others for now (but feel free to put a comment giving the full list if you know it). Content is an image field, remembering that image doesn’t necessarily store images – these days we’d rather use varbinary(max), but even in SQL Server 2012, this field is still image. 
It stores the actual item definition in binary form, whether it’s actually an image, a report, whatever. LinkSourceID is used for Linked Reports, and has a self-referencing foreign key (allowing NULL, of course) back to ItemID. Parameter is an ntext field containing XML for the parameters of the report. Not sure why this couldn’t be a separate table, but I guess that’s just the way it goes. This field gets changed when the default parameters get changed in Report Manager. There is nothing in dbo.Catalog that describes the actual data sources that the report uses. The default data sources would be part of the Content field, as they are defined in the RDL, but when you deploy reports, you typically choose to NOT replace the data sources. Anyway, they’re not in this table. Maybe it was already considered a bit wide to throw in another ntext field, I’m not sure. They’re in dbo.DataSource instead. dbo.DataSource The Primary key is DSID. Yes it’s a uniqueidentifier... ItemID is a foreign key reference back to dbo.Catalog Fields such as ConnectionString, Prompt, UserName and Password do what they say on the tin, storing information about how to connect to the particular source in question. Link is a uniqueidentifier, which refers back to dbo.Catalog. This is used when a data source within a report refers back to a shared data source, rather than embedding the connection information itself. You’d think this should be enforced by foreign key, but it’s not. It does allow NULLs though. Flags this is an int, and I’ll come back to this. When a Data Source gets deleted out of dbo.Catalog, you might assume that it would be disallowed if there are references to it from dbo.DataSource. Well, you’d be wrong. And not because of the lack of a foreign key either. Deleting anything from the catalogue is done by calling a stored procedure called dbo.DeleteObject. You can look at the definition in there – it feels very much like the kind of Delete stored procedures that many people write, the kind of thing that means they don’t need to worry about allowing cascading deletes with foreign keys – because the stored procedure does the lot. Except that it doesn’t quite do that. If it deleted everything on a cascading delete, we’d’ve lost all the data sources as configured in dbo.DataSource, and that would be bad. This is fine if the ItemID from dbo.DataSource hooks in – if the report is being deleted. But if a shared data source is being deleted, you don’t want to lose the existence of the data source from the report. So it sets it to NULL, and it marks it as invalid. We see this code in that stored procedure. UPDATE [DataSource]    SET       [Flags] = [Flags] & 0x7FFFFFFD, -- broken link       [Link] = NULL FROM    [Catalog] AS C    INNER JOIN [DataSource] AS DS ON C.[ItemID] = DS.[Link] WHERE    (C.Path = @Path OR C.Path LIKE @Prefix ESCAPE '*') Unfortunately there’s no semi-colon on the end (but I’d rather they fix the ntext and image types first), and don’t get me started about using the table name in the UPDATE clause (it should use the alias DS). But there is a nice comment about what’s going on with the Flags field. What I’d LIKE it to do would be to set the connection information to a report-embedded copy of the connection information that’s in the shared data source, the one that’s about to be deleted. I understand that this would cause someone to lose the benefit of having the data sources configured in a central point, but I’d say that’s probably still slightly better than LOSING THE INFORMATION COMPLETELY. 
Sorry, rant over. I should log a Connect item – I’ll put that on my todo list. So it sets the Link field to NULL, and marks the Flags to tell you they’re broken. So this is your clue to fixing it. A bitwise AND with 0x7FFFFFFD is basically stripping out the ‘2’ bit from a number. So numbers like 2, 3, 6, 7, 10, 11, etc, whose binary representation ends in either 11 or 10 get turned into 0, 1, 4, 5, 8, 9, etc. We can test for it using a WHERE clause that matches the SET clause we’ve just used. I’d also recommend checking for Link being NULL and also having no ConnectionString. And join back to dbo.Catalog to get the path (including the name) of broken reports are – in case you get a surprise from a different data source being broken in the past. SELECT c.Path, ds.Name FROM dbo.[DataSource] AS ds JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD AND ds.[Link] IS NULL AND ds.[ConnectionString] IS NULL; When I just ran this on my own machine, having deleted a data source to check my code, I noticed a Report Model in the list as well – so if you had thought it was just going to be reports that were broken, you’d be forgetting something. So to fix those reports, get your new data source created in the catalogue, and then find its ItemID by querying Catalog, using Path and Name to find it. And then use this value to fix them up. To fix the Flags field, just add 2. I prefer to use bitwise OR which should do the same. Use the OUTPUT clause to get a copy of the DSIDs of the ones you’re changing, just in case you need to revert something later after testing (doing it all in a transaction won’t help, because you’ll just lock out the table, stopping you from testing anything). UPDATE ds SET [Flags] = [Flags] | 2, [Link] = '3AE31CBA-BDB4-4FD1-94F4-580B7FAB939D' /*Insert your own GUID*/ OUTPUT deleted.Name, deleted.DSID, deleted.ItemID, deleted.Flags FROM dbo.[DataSource] AS ds JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD AND ds.[Link] IS NULL AND ds.[ConnectionString] IS NULL; But please be careful. Your mileage may vary. And there’s no reason why 400-odd broken reports needs to be quite the nightmare that it could be. Really, it should be less than five minutes. @rob_farley

    Read the article
