Search Results



  • Moving from Silverlight 4 Beta to RC - Part 1

    The other day I had finished up my Task-It webinar, written a few blog posts, and knew the time had come to move from my Silverlight 4 Beta environment up to the latest RC (release candidate) bits that were released last Monday. What disappointed me when I went to the Silverlight 4 Information Page is that it told me what to install, but not what to uninstall first.
    Uninstalling: I'm not entirely sure if I had to uninstall anything, or if installing the new stuff would just work, but in poking around the web I found posts stating that you must uninstall the following items first. Unfortunately I'm going by memory here and have not been able to find my way back to the magic page I found in the myriad of posts that I went through:
    - Microsoft Visual Studio Beta 2
    - Microsoft .NET Framework 4 Extended (apparently this must be done *before* the next one)
    - Microsoft .NET Framework Client Profile
    - Microsoft Silverlight 4 Tools for Visual Studio 2010
    - WCF RIA Services Preview for Visual Studio 2010
    While I was at it, I removed a bunch of other stuff, like Blend 3, Blend 4, the SDKs associated with them, and a bunch of other stuff that was old. Of course, I didn't really want/need to keep any Silverlight 3 stuff around as I am developing Task-It in Silverlight 4. If I need a Silverlight 3 environment at some point I'll set it up in a virtual environment. NOTE: One thing that I did not uninstall is the Microsoft Silverlight 4 Toolkit November 2009. The reason is that they have not released the March version yet, so if you uninstall this, you'll end up having to reinstall it.
    Installing: OK, now that I had all of that old stuff off my machine, it was time to get the new stuff. For this part I liked Tim Heuer's post, A Guide to What has Changed in Silverlight 4 RC, better than the Silverlight 4 Information Page.
    - Visual Studio 2010 RC - I installed Ultimate, but you may not need that version. No harm, it's free for now anyway. I downloaded the .exe and the 3 .rar files, then ran the .exe. I then extracted the contents of the ISO (using WinZip) to a new directory, and now had Setup.exe, a bunch of .cab files and some other assorted stuff. I simply ran Setup.exe and chose custom install (only because I wanted to uncheck Visual C++... I don't really have a need for that).
    - Silverlight 4 Tools for Visual Studio 2010 - As mentioned in Tim's blog post, this installs the Silverlight developer runtime, SDK, tools, and WCF RIA Services.
    - WCF RIA Services Toolkit March 2010 - I'm not sure if/when I'll need any of this stuff, but no harm in installing it anyway.
    - Expression Blend 4 beta - Only if you plan to use Blend, which I do.
    - Windows Phone Developer Tools - Only if you are interested in playing with Windows Phone 7 development.
    Wrap Up: Hopefully I got those steps right. If anyone finds anything I've missed, please just add a comment to this post and I'll update it accordingly.

    Read the article

  • Play or Lift: which one is more explicit?

    - by Andrea
    I am going to investigate web development with Scala, and the choice is between learning Lift or Play: probably I will not have enough time to try both, at least at first. Now, many comparisons between the two are available on the internet, but I would like to know how they compare with respect to being explicit and involving less magic. Let me explain what I mean by example. I have used, to various degrees, CakePHP, symfony2, Django and Grails. I feel a very clear distinction between Django and symfony2, which are very explicit about what you are doing, and Grails and CakePHP, which try to do their best to guess what you are trying to achieve and often feel "magical". Let me give some examples comparing Django and Grails. In Django, views are functions that take a request as input and return a response. You can explicitly instantiate an HttpResponse and populate its body with a string, or you can use shortcut functions to leverage the template system. In any case the return value from your view always has the same type. In contrast, the render method from Grails is highly polymorphic. You can throw a context at it and it will try to render a template which is found by convention using that context. Or you can pass it a pair of a template path and a context and that will work too. Or a string. Or XML. Grails tries hard to make sense of whatever you return from your controller. In the Django ORM, each model class has a static attribute representing the manager for that class. That manager exposes a fluent interface to build querysets. In Grails, you can have a similar functionality by composing detached criteria. Still, the most common way to query objects seems to be the use of runtime-generated methods like FindUserByEmailNotNull or FindPostByDateGreaterThan. I will not go further, but my point is that in Django-like frameworks you have control over the whole flow of the request/response process, while in Grails-like ones I feel I only have to fill in the blanks and the framework will manage the rest of the flow for me. This is not to criticize Grails or CakePHP; which of the two styles you prefer is mainly a matter of taste. In fact, I happen to like some aspects of Grails, but I feel more comfortable with a framework which does less for me. Back to the point of the question: which one among Play and Lift is more explicit about what you do and which one tries to simplify more what you have to do with a layer of "magic"?

    Read the article

  • Cloud hosted CI for .NET projects

    - by Scott Dorman
    Originally posted on: http://geekswithblogs.net/sdorman/archive/2014/06/02/cloud-hosted-ci-for-.net-projects.aspx Continuous integration (CI) is important. If you don’t have it set up…you should. There are a lot of different options available for hosting your own CI server, but they all require you to maintain your own infrastructure. If you’re a business, that generally isn’t a problem. However, if you have some open source projects hosted, for example on GitHub, there haven’t really been any options. That has changed with the latest release of AppVeyor, which bills itself as “Continuous integration for busy developers.” What’s different about AppVeyor is that it’s a hosted solution. Why is that important? By being a hosted solution, it means that I don’t have to maintain my own infrastructure for a build server. How does that help if you’re hosting an open source project? AppVeyor has a really competitive pricing plan. For an unlimited number of public repositories, it’s free. That gives you a cloud hosted CI system for all of your GitHub projects for the cost of some time to set them up, which actually isn’t hard to do at all. I have several open source projects (hosted at https://github.com/scottdorman), so I signed up using my GitHub credentials. AppVeyor fully supported my two-factor authentication with GitHub, so I never once had to enter my password for GitHub into AppVeyor. Once it was done, I authorized GitHub and it instantly found all of the repositories I have (both the ones I created and the ones I cloned from elsewhere). You can even add “build badges” to your markdown files in GitHub, so anyone who visits your project can see the status of the latest build. Out of the box, you can simply select a repository, add the build project, click New Build and wait for the build to complete. You now have a complete CI server running for your project. The best part of this, besides the fact that it “just worked” with almost zero configuration, is that you can configure it through a web-based interface which is very streamlined, clean and easy to use, or you can use an appveyor.yml file. This means that you can define your CI build process (including any scripts that might need to be run, etc.) in a standard file format (the YAML format) and store it in your repository. The benefits to that are huge. The file becomes a versioned artifact in your source control system, so it can be branched, merged, and is completely transparent to anyone working on the project. By the way, AppVeyor isn’t limited to just GitHub. It currently supports GitHub, BitBucket, Visual Studio Online, and Kiln. I did have a few issues getting one of my projects to build, but the same day I posted the problem to the support forum a fix was deployed, and I had a functioning CI build about 5 minutes after that. Since then, I’ve provided some additional feature requests and had a few other questions, all of which have seen responses within a 24-hour period. I have to say that it’s easily been one of the best customer support experiences I’ve seen in a long time. AppVeyor is still young, so it doesn’t yet have full feature parity with some of the older (more established) CI systems available, but it’s getting better all the time and I have no doubt that it will quickly catch up to those other CI systems and then pass them. The bottom line: if you’re looking for a good cloud-hosted CI system for your .NET-based projects, look at AppVeyor.

    Read the article

  • Oracle GoldenGate Active-Active Part 1

    - by Nick_W
    My name is Nick Wagner, and I'm a recent addition to the Oracle Maximum Availability Architecture (MAA) product management team.  I've spent the last 15+ years working on database replication products, and I've spent the last 10 years working on the Oracle GoldenGate product.  So most of my posting will probably be focused on OGG.  One question that comes up all the time is around active-active replication with Oracle GoldenGate.  How do I know if my application is a good fit for active-active replication with GoldenGate?   To answer that, it really comes down to how you plan on handling conflict resolution.  I will delve into topology and deployment in a later blog, but here is a simple architecture: The two most common resolution routines are host based resolution and timestamp based resolution. Host based resolution is used less often, but works with the fewest application changes.  Think of it like this: any transactions from SystemA always take precedence over any transactions from SystemB.  If there is a conflict on SystemB, then the record from SystemA will overwrite it.  If there is a conflict on SystemA, then it will be ignored.  It is quite a bit less restrictive, and in most cases, as long as all the tables have primary keys, host based resolution will work just fine.  Timestamp based resolution, on the other hand, is a little trickier. In this case, you can decide which record is overwritten based on timestamps. For example, does the older record get overwritten with the newer record?  Or vice-versa?  This method not only requires primary keys on every table, but it also requires every table to have a timestamp/date column that is updated each time a record is inserted or updated on the table.  Most homegrown applications can always be customized to include these requirements, but it's a little more difficult with 3rd party applications, and might even be impossible for large ERP type applications.  If your database has these features - whether it’s primary keys for host based resolution, or primary keys and timestamp columns for timestamp based resolution - then your application could be a great candidate for active-active replication.  But table structure is not the only requirement.  The other consideration applies when there is a conflict; i.e., do I need to perform any notification or track down the user that had their data overwritten?  In most cases, I don't think it's necessary, but if it is required, OGG can always create an exceptions table that contains all of the overwritten transactions so that people can be notified. It's a bit of extra work to implement this type of option, but if the business requires it, then it can be done. Unless someone is constantly monitoring this exception table or has an automated process in dealing with exceptions, there will be a delay in getting a response back to the end user. Ideally, when setting up active-active resolution we can include some simple procedural steps or configuration options that can reduce, or in some cases eliminate the potential for conflicts.  This makes the whole implementation that much easier and foolproof.  And I'll cover these in my next blog. 
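    To make the timestamp-based option concrete, here is a minimal, illustrative Java sketch of the "newer record wins" decision. The class, field names, and record shape are assumptions made up for this example; they are not Oracle GoldenGate configuration syntax, which handles this resolution declaratively.
        import java.time.Instant;

        // Illustrative only: decide which side's row survives a replication conflict.
        public class TimestampResolver {

            // A simplified view of a replicated row: primary key plus last-modified time.
            static class Row {
                final long primaryKey;
                final Instant lastModified;
                Row(long primaryKey, Instant lastModified) {
                    this.primaryKey = primaryKey;
                    this.lastModified = lastModified;
                }
            }

            // "Newer record wins": a site could just as easily prefer the older one.
            static Row resolve(Row fromSystemA, Row fromSystemB) {
                return fromSystemA.lastModified.isAfter(fromSystemB.lastModified)
                        ? fromSystemA
                        : fromSystemB;
            }

            public static void main(String[] args) {
                Row a = new Row(42L, Instant.parse("2013-05-01T10:15:30Z"));
                Row b = new Row(42L, Instant.parse("2013-05-01T10:15:31Z"));
                System.out.println(resolve(a, b) == b); // true: System B's change is newer
            }
        }
    Note how the whole scheme depends on every table carrying that timestamp column and a primary key, which is exactly the application requirement described above.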

    Read the article

  • What is the difference between development and R&D?

    - by MainMa
    I was asked by a colleague to explain clearly the difference between ordinary development and research and development (R&D) and was unable to do it. After reading Wikipedia, I still don't have the precise answer. According to Wikipedia (slightly modified): There are two primary models: In one model, the primary function is to develop new products; in the other model, the primary function is to discover and create new knowledge about scientific and technological topics for the purpose of uncovering and enabling development of valuable new products, processes, and services. The first model is confusing. Does it mean that development (not R&D) consists exclusively in adding new features to a product, solving bugs and doing maintenance? What if something which was previously developed as a new feature becomes a separate product? The second model is less confusing, but still, how to qualify whether something is new knowledge or existent knowledge which is just rediscovered? Later, Wikipedia adds that ordinary development is different from R&D because of its: nearly immediate profit or immediate improvement. It's still not clear enough. How to qualify "nearly immediate profit"? What if a task has an immediate profit but requires heavy research? Or if it is basic but has uncertain profit, like the enforcement of a common style over the codebase? For example, does it belong to development or R&D to: Develop an engine which abstracts the access to the database, simplifying and shortening enormously the code of other applications (existent or ones which will be written in future) which should access to the database? Establish a new service-oriented architecture for the entire organization of company resources, in order to move from a bunch of separate and autonomous applications to a set of well-organized, interconnected web services, like what is used by Amazon? Design a new communication protocol to allow faster replication of data between two data centers of the company? Conceive a new type of software testing while working on a specific product, knowing that this type of testing will improve/simplify the testing process? Prove that Functional programming is more appropriate than OOP for a specific application, based on evidence, logic and previous experience? Enhance the existent application by adding gestures on tactile screens, after doing studies and testing that shows that those gestures improve the productivity of the users by a ratio of at least 1.4 for a precise set of tasks? Find a way to strongly enhance the Power usage effectiveness (PUE) of a data center? Create a Domain-Specific Language (DSL)? In short, how could I determine whether I'm doing R&D while working on something?

    Read the article

  • Simple collision detection in Unity 2D

    - by N1ghtshade3
    I realise other posts exist with this topic yet none have gone into enough detail for me. I am attempting to create a 2D game in Unity using C# as my scripting language. Basically I have two objects, player and bomb. Both were created simply by dragging the respective PNG to the stage. I have set up touch controls to move player left and right; gravity of any kind is not needed as I only require it to move x units when I tap either the left or right side of the screen. This movement is stored in a script called playerController.cs and works just fine. I also have a variable health = 3 for player, which is stored in healthScript.cs. I am now at a point where I am stuck. I would like it so that when player collides with bomb, health decreases by one and the bomb object is destroyed. So what I tried doing is using a new script called playerPhysics.cs, I added the following: void OnCollisionEnter2D(Collision2D coll){ if(coll.gameObject.name=="bomb") GameObject.Destroy("bomb"); healthScript.health -= 1; } While I'm fairly sure I don't know the proper way to reference a variable in another script and that's why the health didn't decrease when I collided, bomb never disappeared from the stage so I'm thinking there's also a problem with my collision. Initially, I had simply attached playerPhysics.cs to player. After searching around though, it appeared as though player also needed a rigidBody attached to it, so I did that. Still no luck. I tried using a circleCollider (player is a circle), using a rigidBody2D, and using all manner of colliders on one and/or both of the objects. If you could please explain what colliders (if any) should be attached to which objects and whether I need to change my script(s), that would be much more helpful than pointing me to one of the generic documentation examples I've already read. Also, if it would be simple to fix the health thing not working that would be an added bonus but not exactly the focus of this question. Bear in mind that this game is 2D; I'm not sure if that changes anything. Thanks!

    Read the article

  • How do I get long command lines to wrap to the next line?

    - by BrianH
    Edit: It was my .bashrc file. I've copied the same profile from machine to machine, and I used special characters in my $PS1 that are somehow throwing it off. I'm now sticking with the standard bash variables for my $PS1. Thanks to @ændrük for the tip on the .bashrc! ...End Edit... Something I have noticed in Ubuntu for a long time that has been frustrating to me is when I am typing a command at the command line that gets longer (wider) than the terminal width, instead of wrapping to a new line, it goes back to column 1 on the same line and starts over-writing the beginning of my command line. (It doesn't actually overwrite the actual command, but visually, it is overwriting the text that was displayed.) It's hard to explain without seeing it, but let's say my terminal was 20 characters wide (mine is more like 120 characters - but for the sake of an example), and I want to echo the English alphabet. What I type is this: echo abcdefghijklmnopqrstuvwxyz But what my terminal looks like before I hit the Enter key is: pqrstuvwxyzghijklmno When I hit enter, it echoes abcdefghijklmnopqrstuvwxyz so I know the command was received properly. It just wrapped my typing after the "o" and started over on the same line. What I would expect to happen, if I typed this command in on a terminal that was only 20 characters wide, would be this: echo abcdefghijklmno pqrstuvwxyz Background: I am using bash as my shell, and I have this line in my ~/.bashrc: set -o vi to be able to navigate the command line with VI commands. I am currently using Ubuntu 10.10 server, and connecting to the server with PuTTY. In any other environment I have worked in, if I type a long command line, it will add a new line underneath the line I am working on when my command gets longer than the terminal width, and when I keep typing I can see my command on 2 different lines. But for as long as I can remember using Ubuntu, my long commands only occupy 1 line. This also happens when I am going back to previous commands in the history (I hit Esc, then 'K' to go back to previous commands) - when I get to a previous command that was longer than the terminal width, the command line gets mangled and I cannot tell where I am at in the command. The only work-around I have found to see the entire long command is to hit "Esc-V", which opens up the current command in a VI editor. I don't think I have anything odd in my .bashrc file. I commented out the "set -o vi" line, and I still had the problem. I downloaded a fresh copy of PuTTY and didn't make any changes to the configuration - I just typed in my host name to connect, and I still have the problem, so I don't think it's anything with PuTTY (unless I need to make some config changes). Has anyone else had this problem, and can anyone think of how to fix it? Thanks in advance! Brian

    Read the article

  • Business School graduate joins Oracle

    - by jessica.ebbelaar(at)oracle.com
    My name is Mathias, I work as an Applications Inside Sales Rep for the French market, and I’d like to give you a brief snapshot of my experience at Oracle. First things first, how did you hear about Oracle? Where have you seen the sharp and recognizable red logo? Was it in Charles de Gaulle Airport when your eyes crossed the 20-metre banner with a picture of a strange big machine in the middle? Was it through reading the Forbes 10 top IT companies worldwide ranking? Or is it because IT is your thing and you cannot but know one of the “big four”? Meeting with a Grenoble Alumnus My story is a little different. My plan was to work in sales, in the IT industry. I had heard about Oracle, but my opinion at the time was that this kind of multinational company was way out of reach for a young graduate, even with high enthusiasm and great excitement to be (finally) on the job market. So, I was really surprised when I had an interesting conversation with a top alumnus of my business school. We were at the Grenoble Ecole de Management graduation ceremony (our graduation!), and before the party got really started, I got to chat with her. She told me of the great experience she was getting by living and working in Dublin. She had already figured it all out: “you work with another 100 young people from 10 different nationalities across Europe, you can be based in Dublin, but then once you work really hard you can move to Malaga Spain or other BUs around the world, you can work with different lines of business and learn about new “techy” and business oriented products, move to the field in your home country or elsewhere, etc.” What, what, what? Moving around Europe, trained by the best sales coaches in the world, acquiring strong IT knowledge and getting on board with one of fastest-growing and most watched companies in the world? Well, I was in. The next day (OK, 3 days after, the time to recover), I sent her my CV, and 3 months later I started as a Business Development Consultant at Oracle in Dublin, representing the latest cloud based CRM across the French market. That was 15 months ago. Since then, I moved line of business twice, I’m always learning new things and working with different and senior stakeholders; I have attended hundreds of hours of sales and product training (priceless when you come from a business background); I passed the Dublin Institute of Technology Sales Certification through different trainings given onsite within Oracle; I’ve led projects based around social media and I’ve gotten involved within various sales deals going on my market. Despite all of these great things, two will remain in my spirit: the multiculturalism that I experience every day in the office, and the American style of management - more direct and open than what you can find in “regular French companies”. Sales Progression Board In May 2012, I passed what we call a ‘Sales Progression Board’ to be promoted to an Inside Sales position. I am now in charge of generating revenue through the sale of Oracle applications on my specific territory. Always keeping in my mind my personal ambition: going to the field one day. Interested to join Oracle in the same role as Mathias? Visit http://campus.oracle.com.

    Read the article

  • ASP.NET Web Forms is bad, or what am I missing?

    - by iveqy
    Being a PHP guy myself I recently had to write a spider for an asp.net site. I was really surprised by the different approach to ajax and form-handling. For example, in the PHP sites I've worked with, a deletion of a database entry would be something like: GET delete.php?id=&confirm=yes and get a "success" back in some form (in the ajax case, probably a json reply). In this asp.net application you would instead post a form, including all inputs on the page, with a huge __VIEWSTATE and __EVENTVALIDATION. This would be more than 10 times as big as above. The reply would be the complete page again, with a footer containing some structured data for javascript to parse and display the result. Again, the whole page is sent, and then thrown away(?) since it's already displayed. Why not just send the footer with the data to parse (it's not json nor xml but a | separated list)? I really can't see why you would design a system that way. Usually you have a fast client, and a somewhat fast server, but a really slow connection. Why not keep the data transfer to a minimum? Why those huge __VIEWSTATE and __EVENTVALIDATION? It seems that everything is done in a way that is far too chatty and far too complicated. I really can't see the point, and that usually means that I'm missing something. So please tell me, what are the reasons for this design and what benefits (and weaknesses) does it have? (Yes I know that __VIEWSTATE is used to tell what type of form configuration should be sent back to the server. But WHY is this needed?) Please keep this discussion strictly technical and avoid flamewars. Update: Please excuse the somewhat rantish question. I tried to explain my view to be able to get a better answer. I am not saying that asp.net is bad, I am saying that I don't understand the meaning of those concepts. Usually that means that I have things to learn rather than the concepts being wrong. I appreciate the explanations that "you don't have to do it this way in asp.net", and I'll read up on MVC and other .net technologies. However, there must be a reason for this site (the one I referred to) to be written the way it is. It's written by professionals for a big organisation with far more experience than I have. Any explanation about their (possible) design choice would be welcome.

    Read the article

  • OpenGLES GLSL Shader attributes always bound to 0

    - by codemonkey
    So I have a very simple vertex shader as follows #version 120 attribute vec3 position; attribute vec3 inColor; uniform mat4 mvp; varying vec3 fragColor; void main(void){ fragColor = inColor; gl_Position = mvp * vec4(position, 1.0); } Which I load, as well as the fragment shader: #version 120 varying vec3 fragColor; void main(void) { gl_FragColor = vec4(fragColor,1.0); } Which I then load, compile, and link to my shader program. I check for link status using glGetProgramiv(shaderProgram, GL_LINK_STATUS, &shaderSuccess); which returns GL_TRUE so I think its ok. However, when I query the active attributes and uniforms using #ifdef DEBUG int totalAttributes = -1; glGetProgramiv(shaderProgram, GL_ACTIVE_ATTRIBUTES, &totalAttributes); for(int i=0; i<totalAttributes; ++i) { int name_len=-1, num=-1; GLenum type = GL_ZERO; char name[100]; glGetActiveAttrib(shaderProgram, GLuint(i), sizeof(name)-1, &name_len, &num, &type, name ); name[name_len] = 0; GLuint location = glGetAttribLocation(shaderProgram, name); fprintf(stderr, "Attribute %s is bound at %d\n", name, location); } int totalUniforms = -1; glGetProgramiv(shaderProgram, GL_ACTIVE_UNIFORMS, &totalUniforms); for(int i=0; i<totalUniforms; ++i) { int name_len=-1, num=-1; GLenum type = GL_ZERO; char name[100]; glGetActiveUniform(shaderProgram, GLuint(i), sizeof(name)-1, &name_len, &num, &type, name ); name[name_len] = 0; GLuint location = glGetUniformLocation(shaderProgram, name); fprintf(stderr, "Uniform %s is bound at %d\n", name, location); } #endif I get: Attribute inColor is bound at 0 Attribute position is bound at 1 Uniform mvp is bound at 0 Which leads to failure when trying to use the shader to render the objects. I have tried switching the order of declaration of position & inColor, but still, only position is bound with the other two giving 0 Can someone please explain why this is happening? Thanks

    Read the article

  • Why do I need two Instances in Windows Azure?

    - by BuckWoody
    Windows Azure as a Platform as a Service (PaaS) means that there are various components you can use in it to solve a problem:
    - Compute “Roles” - Computers running an OS and optionally IIS - you can have more than one "Instance" of a given Role
    - Storage - Blobs, Tables and Queues for Storage
    - Other Services - Things like the Service Bus, Azure Connection Services, SQL Azure and Caching
    It’s important to understand that some of these services are Stateless and others maintain State. Stateless means (at least in this case) that a system might disappear from one physical location and appear elsewhere. You can think of this as a cashier at the front of a store. If you’re in line, a cashier might take his break, and another person might replace him. As long as the order proceeds, you as the customer aren’t really affected except for the few seconds it takes to change them out. The cashier function in this example is stateless.
    The Compute Role Instances in Windows Azure are Stateless. To upgrade hardware, because of a fault or many other reasons, a Compute Role's Instance might stop on one physical server, and another will pick it up. This is done through the controlling fabric that Windows Azure uses to manage the systems.
    It’s important to note that storage in Azure does maintain State. Your data will not simply disappear - it is maintained - in fact, it’s maintained three times in a single datacenter and all those copies are replicated to another for safety. Going back to our example, storage is similar to the cash register itself. Even though a cashier leaves, the record of your payment is maintained.
    So if a Compute Role Instance can disappear and re-appear, the things running on that first Instance would stop working. If you wrote your code in a Stateless way, then another Role Instance simply re-starts that transaction and keeps working, just like the other cashier in the example. But if you only have one Instance of a Role, then when the Role Instance is re-started, or when you need to upgrade your own code, you can face downtime, since there’s only one. That means you should deploy at least two of each Role Instance not only for scale to handle load, but so that the first “cashier” has someone to replace them when they disappear. It’s not just a good idea - to gain the Service Level Agreement (SLA) for our uptime in Azure it’s a requirement. We point this out right in the Management Portal when you deploy the application.
    When you deploy a Role Instance you can also set the “Upgrade Domain”. Placing Roles on separate Upgrade Domains means that you have a continuous service whenever you upgrade (more on upgrades in another post) - the process looks like this for two Roles. This example covers the scenario for upgrade, so you have four roles total - one Web and one Worker running the "older" code, and one of each running the new code. In all those Roles you want at least two instances, and this example shows that you're covered for High Availability and upgrade paths.
    The take-away is this - always plan for forward-facing Roles to have at least two copies. For Worker Roles that do background processing, there are ways to architect around this number, but it does affect the SLA if you have only one.

    Read the article

  • Google search results are invalid

    - by Rufus
    I'm writing a program that lets a user perform a Google search. When the result comes back, all of the links in the search results are links not to other sites but to Google, and if the user clicks on one, the page is fetched not from the other site but from Google. Can anyone explain how to fix this problem? My Google URL consists of this: http://google.com/search?q=gargle But this is what I get back when the user clicks on the Wikipedia search result, which was http://www.google.com/url?q=http://en.wikipedia.org/wiki/Gargling&sa=U&ei=_4vkT5y555Wh6gGBeOzECg&ved=0CBMQejAe&usg=AFQjeNHd1eRV8Xef3LGeH6AvGxt-AF-Yjw <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html lang="en" dir="ltr" class="client-nojs" xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Gargling - Wikipedia, the free encyclopedia</title> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> <meta http-equiv="Content-Style-Type" content="text/css" /> <meta name="generator" content="MediaWiki 1.20wmf5" /> <meta http-equiv="last-modified" content="Fri, 09 Mar 2012 12:34:19 +0000" /> <meta name="last-modified-timestamp" content="1331296459" /> <meta name="last-modified-range" content="0" /> <link rel="alternate" type="application/x-wiki" title="Edit this page" > <link rel="edit" title="Edit this page" > <link rel="apple-touch-icon" > <link rel="shortcut icon" > <link rel="search" type="application/opensearchdescription+xml" > <link rel="EditURI" type="application/rsd+xml" > <link rel="copyright" > <link rel="alternate" type="application/atom+xml" title="Wikipedia Atom feed" > <link rel="stylesheet" href="//bits.wikimedia.org/en.wikipedia.org/load.php?debug=false&amp;lang=en&amp;modules=ext.gadget.teahouse%7Cext.wikihiero%7Cmediawiki.legacy.commonPrint%2Cshared%7Cskins.vector&amp;only=styles&amp;skin=vector&amp;*" type="text/css" media="all" /> <style type="text/css" media="all">#mwe-lastmodified { display: none; }</style><meta name="ResourceLoaderDynamicStyles" content="" /> <link rel="stylesheet" href="//bits.wikimedia.org/en.wikipedia.org/load.php?debug=false&amp;lang=en&amp;modules=site&amp;only=styles&amp;skin=vector&amp;*" type="text/css" media="all" /> <style type="text/css" media="all">a:lang(ar),a:lang(ckb),a:lang(fa),a:lang(kk-arab),a:lang(mzn),a:lang(ps),a:lang(ur){text-decoration:none} /* cache key: enwiki:resourceloader:filter:minify-css:7:d5a1bf6cbd05fc6cc2705e47f52062dc */</style>
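    One common remedy, shown here only as a rough sketch since the exact parameter layout of Google's /url? wrapper can vary, is to pull the real destination out of the wrapper link's q parameter before fetching the page. The class name and the sample link are made up for illustration; only the Java standard library is used.
        import java.io.UnsupportedEncodingException;
        import java.net.URI;
        import java.net.URISyntaxException;
        import java.net.URLDecoder;

        public class GoogleRedirectUnwrapper {

            // If the link is a Google /url? wrapper, return the value of its q parameter
            // (the real target); otherwise return the link unchanged.
            static String unwrap(String link) throws URISyntaxException, UnsupportedEncodingException {
                URI uri = new URI(link);
                if (!"/url".equals(uri.getPath()) || uri.getRawQuery() == null) {
                    return link;
                }
                for (String pair : uri.getRawQuery().split("&")) {
                    String[] kv = pair.split("=", 2);
                    if (kv.length == 2 && kv[0].equals("q")) {
                        return URLDecoder.decode(kv[1], "UTF-8");
                    }
                }
                return link;
            }

            public static void main(String[] args) throws Exception {
                String wrapped = "http://www.google.com/url?q=http://en.wikipedia.org/wiki/Gargling&sa=U";
                System.out.println(unwrap(wrapped)); // prints http://en.wikipedia.org/wiki/Gargling
            }
        }
    Fetching the unwrapped URL directly then returns the target site's page rather than Google's redirect page.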

    Read the article

  • LUKOIL Overseas Holding Optimizes Oil Field Development Projects with Integrated Project Management

    - by Melissa Centurio Lopes
    LUKOIL Overseas Group is a growing oil and gas company that is an integral part of the vertically integrated oil company OAO LUKOIL. It is engaged in the exploration, acquisition, integration, and efficient development of oil and gas fields outside the Russian Federation to promote transforming LUKOIL into a transnational energy company. In 2010, the company signed a 20-year development project for the giant West Qurna 2 oil field in Iraq. Executing 10,000 to 15,000 project activities simultaneously on 14 major construction and drilling projects in Iraq for the West Qurna-2 project meant the company needed a clear picture, in real time, of dependencies between its capital construction, geologic exploration and sinking projects—required for its building infrastructure oil field development projects in Iraq. LUKOIL Overseas Holding deployed Oracle’s Primavera P6 Enterprise Project Portfolio Management to generate structured project management information and optimize planning, monitoring, and analysis of all engineering and commercial activities—such as tenders, and bulk procurement of materials and equipment—related to oil field development projects.
    A word from LUKOIL Overseas Holding Ltd.: “Previously, we created project schedules on desktop computers and uploaded them to the project server to be merged into one big file for each project participant to access. This was not scalable, as we’ve grown and now run up to 15,000 activities in numerous projects and subprojects at any time. With Oracle’s Primavera P6 Enterprise Project Portfolio Management, we can now work concurrently on projects with many team members, enjoy absolute security, and issue new baselines for all projects and project participants once a week, with ease.” – Sergey Kotov, Head of IT and the Communication Office, LUKOIL Mid-East Ltd.
    Oracle Primavera Solutions:
    · Facilitated managing dependencies between projects by enabling the general scheduler to reschedule all projects and subprojects once a week, realigning 10,000 to 15,000 project activities that the company runs at any time
    · Replaced Microsoft Project and a paper-based system with a complete solution that provides structured project data
    · Enhanced data security by establishing project management security policies that enable only authorized project members to edit their project tasks, while enabling each project participant to view all project data that are relevant to that individual’s task
    · Enabled the company to monitor project progress in comparison to the projected plan, based on physical project assets to determine if each project is on track to conclude within its time and budget limitations
    To view the full list of solutions, view here.
    “Oracle Gold Partner Parma Telecom was key to our successful Primavera deployment, implementing the software’s basic functionalities, such as project content, timeframes management, and cost management, in addition to performing its integration with our enterprise resource planning system and intranet portal within ten months and in accordance with budgets,” said Rafik Baynazarov, head of the master planning and control office, LUKOIL Mid-East Ltd. To read the full version of the customer success story, please view here.

    Read the article

  • Doubts about several best practices for REST API + service layer

    - by TheBeefMightBeTough
    I'm going to be starting a project soon that exposes a restful api for business intelligence. It may not be limited to a restful api, so I plan to delegate requests to a service layer that then coordinates multiple domain objects (each of which have business logic local to the object). The api will likely have many calls as it is a long-term project. While thinking about the design, I recalled a few best practices. 1) Use command objects at the controller layer (I'm using Spring MVC). 2) Use DTOs at the service layer. 3) Validate in both the controller and service layer, though for different reasons. I have my doubts about these recommendations. 1) Using command objects adds a lot of extra single-purpose classes (potentially one per request). What exactly is the benefit? Annotation based validation can be done using this approach, sure. What if I have two requests that take the same parameters, but have different validation requirements? I would have to have two different classes with exactly the same members but different annotations? Bleh. 2) I have heard that using DTOs is preferable to parameters because it makes for more maintainable code down the road (say, e.g., requirements change and the service parameters need to be altered). I don't quite understand this. Shouldn't an api be more-or-less set in stone? I would understand that in the early phases of a project (or, especially, an entire company) the domain itself will not be well understood, and thus core domain objects may change along with the apis that manipulate these objects. At this point however the number of api methods should be small and their dependents few, so changes to the methods could easily be tolerated from a maintainability standpoint. In a large api with many methods and a substantial domain model, I would think having a DTO for potentially each domain object would become unwieldy. Am I misunderstanding something here? 3) I see validation in the controller and service layer as redundant in most cases. Why would I validate that parameters are not null and are in general well formed in the controller if the service is going to do exactly the same (and more). Couldn't I just do all the validation in the service and throw a runtime exception with a list of bad parameters then catch that in the controller to make the error messages more presentable? Better yet, couldn't I just make the error messages user-friendly in the service and let the exception trickle up to a global handler (ControllerAdvice in spring, for example)? Is there something wrong with either of these approaches? (I do see a use case for controller validation if the input does not map one-to-one with the service input, but since the controllers are for a rest api and not forms, the api parameters will probably map directly to service parameters.) I do also have a question about unchecked vs checked exceptions. Namely, I'm not really sure why I'd ever want to use a checked exception. Every time I have seen them used they just get wrapped into general exceptions (DomainException, SystemException, ApplicationException, w/e) to reduce the signature length of methods, or devs catch Exception rather than dealing with the App1Exception, App2Exception, Sys1Exception, Sys2Exception. I don't see how either of these practices is very useful. Why not just use unchecked exceptions always and catch the ones you actually do care about? You could just document what unchecked exceptions the method throws.
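    For what it is worth, here is a bare-bones sketch of the "validate in the service, translate exceptions in one global handler" approach discussed above, using Spring MVC annotations on a recent Spring version. The class names, the endpoint, and the BadRequestException type are invented for illustration; this is one possible shape, not a prescription.
        import org.springframework.beans.factory.annotation.Autowired;
        import org.springframework.http.ResponseEntity;
        import org.springframework.stereotype.Service;
        import org.springframework.web.bind.annotation.*;

        // Controller: no validation of its own, it just delegates to the service.
        @RestController
        @RequestMapping("/reports")
        class ReportController {
            private final ReportService reportService;

            @Autowired
            ReportController(ReportService reportService) {
                this.reportService = reportService;
            }

            @GetMapping("/{id}")
            Report get(@PathVariable long id) {
                return reportService.findReport(id);
            }
        }

        // Service: owns the validation and throws an unchecked exception on bad input.
        @Service
        class ReportService {
            Report findReport(long id) {
                if (id <= 0) {
                    throw new BadRequestException("id must be positive, got " + id);
                }
                return new Report(id); // stand-in for the real lookup
            }
        }

        // One global handler turns the unchecked exception into a presentable 400 response.
        @ControllerAdvice
        class ApiExceptionHandler {
            @ExceptionHandler(BadRequestException.class)
            ResponseEntity<String> handleBadRequest(BadRequestException ex) {
                return ResponseEntity.badRequest().body(ex.getMessage());
            }
        }

        class BadRequestException extends RuntimeException {
            BadRequestException(String message) { super(message); }
        }

        class Report {
            private final long id;
            Report(long id) { this.id = id; }
            public long getId() { return id; }
        }
    With this shape the controller stays thin, the unchecked exception carries the user-friendly message, and the advice class is the single place where it becomes an HTTP status.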

    Read the article

  • The Buzz at the JavaOne Bookstore

    - by Janice J. Heiss
    I found my way to the JavaOne bookstore, a hub of activity. Who says brick and mortar bookstores are dead? I asked what was hot and got two answers: Hadoop in Practice by Alex Holmes was doing well. And Scala for the Impatient by noted Java Champion Cay Horstmann also seemed to be a fast seller.
    Hadoop in Practice
    Hadoop is a framework that organizes large clusters of computers around a problem. It is touted as especially effective for large amounts of data, and is used by such companies as Facebook, Yahoo, Apple, eBay and LinkedIn. Hadoop in Practice collects nearly 100 Hadoop examples and presents them in a problem/solution format with step-by-step explanations of solutions and designs. It’s very much a participatory book intended to make developers more at home with Hadoop. The author, Alex Holmes, is a senior software engineer with more than 15 years of experience developing large-scale distributed Java systems. For the last four years, he has gained expertise in Hadoop solving Big Data problems across a number of projects. He has presented at JavaOne and Jazoon and is currently a technical lead at VeriSign. At this year’s JavaOne, he is presenting a session with VeriSign colleague Karthik Shyamsunder called “Java: A Perfect Platform for Data Science” where they will explain how the Java platform has emerged as a perfect platform for practicing data science, and also talk about such technologies as Hadoop, Hive, Pig, HBase, Cassandra, and Mahout.
    Scala for the Impatient
    San Jose State University computer science professor and Java Champion Cay Horstmann is the principal author of the highly regarded Core Java. Scala for the Impatient is a basic, practical introduction to Scala for experienced programmers. Horstmann has a presentation summarizing the themes of his book at his website. On the final page he offers an enticing summary of his conclusions:
    * Widespread dissatisfaction with Java + XML + IDEs -- Don't make me eat Elephant again
    * A separate language for every problem domain is not efficient -- It takes time to master the idioms
    * “JavaScript Everywhere” isn't going to scale
    * Trend is towards languages with more expressive power, less boilerplate
    * Will Scala be the “one ring to rule them”?
    * Maybe -- If it succeeds in industry -- If student-friendly subsets and tools are created
    The popularity of both books echoed comments by IBM Distinguished Engineer Jason McGee who closed his part of the Sunday JavaOne keynote by pointing out that the use of Java in complex applications is increasingly being augmented by a host of other languages with strong communities around them – JavaScript, JRuby, Scala, Python and so forth. Java developers increasingly must know the strengths and weaknesses of such languages going forward.

    Read the article

  • Grub Rescue Unknown Filesystem Error. Grub Corrupted or Filesystem?

    - by nightcrawler
    Now it has happened twice and I have been pulling my hair out... I have installed xubuntu on my external hard disk and have been using it for about 3 months. It has three partitions: one of 500 MB mounted at /boot, a second of 48 GB mounted at /, and the rest (out of 160 GB) is an NTFS partition... used as normal external storage. That last partition supposedly acts as a buffer between Linux distributions and the Windows platform, buffer in the sense that it provides a universal channel for data transfers. I have constantly used this external hard disk for data transfers between my win7 laptop and xubuntu (on this external disk) without any hassle. However, on one of my desktops where I have Ubuntu I (for the first time) attached this external drive, which let me do data transfers with all three partitions properly mounted... but then the same nasty thing occurred that had occurred before. When I (as usual) tried booting via this external hard disk (the one having xubuntu, the one formerly used under Ubuntu), I got the grub rescue "unknown filesystem" error.
    Now I am totally devastated because a similar thing happened ~6 months before, when I had Fedora 17 on my external hard disk (instead of xubuntu), and after it was used under Ubuntu the same thing happened... I didn't report it because I had already planned to move towards Debian instead of RPM! The mystery is that as long as I don't attach this external hard disk under Ubuntu the data never** corrupts, whereas under Win XP/7 I can use it as normal USB storage; of course Linux partitions aren't available under Windows platforms...
    **By "corrupts" I mean the hard disk fails to boot with the error mentioned; however, I can't say whether the data within remains untouched.
    It seems that my grub and/or MBR is corrupted. Please guide me to solve this issue, and also explain why I can't attach and use Linux external hard disks under a Linux platform.
    Disk /dev/sdc: 160.0 GB, 160041884672 bytes
    255 heads, 63 sectors/track, 19457 cylinders, total 312581806 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x0004e7d0
    Device Boot Start End Blocks Id System
    /dev/sdc1 * 2048 976895 487424 83 Linux
    /dev/sdc2 978942 96874495 47947777 5 Extended
    /dev/sdc3 96874496 312575999 107850752 7 HPFS/NTFS/exFAT
    /dev/sdc5 978944 94726143 46873600 83 Linux
    /dev/sdc6 94728192 96874495 1073152 82 Linux swap / Solaris
    I recall for sure that I have seen a thread here where a similar problem occurred, and in response someone gave a solution for how to mount the (now invisible) partitions and recover the important data in them. I have misplaced that URL, so if anyone can point me to it, please do, because my important documents reside in the / partition.
    What I have already done: without success, I have tried this and related solutions.
    What I plan to do: I believe the filesystem has been corrupted, so would you recommend a solution like this? I can't recall whether my /boot (500 MB) partition was ext4 or ext2, though I am sure that my / (48 GB) partition was ext4.
    UPDATE 1: I attached my external hard disk under Ubuntu and ran the following command as root: grub-install /dev/sdc, where /dev/sdc was my external hard disk containing the corrupted xubuntu... it reported all done!
    I re-ran fdisk -l, but to my disappointment it reported:
    Disk /dev/sdc: 160.0 GB, 160041884672 bytes
    255 heads, 63 sectors/track, 19457 cylinders, total 312581806 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x1b6b9167
    Disk /dev/sdc doesn't contain a valid partition table
    ...and now I can't even access its NTFS partition (former /dev/sdc3). Please help?
    UPDATE 2: TestDisk (by cgsecurity) failed to find any partition table :(
    TestDisk 6.13, Data Recovery Utility, November 2011
    Christophe GRENIER <[email protected]>
    http://www.cgsecurity.org
    Disk /dev/sdc - 160 GB / 149 GiB - CHS 19457 255 63
    Partition Start End Size in sectors

    Read the article

  • What does the Spring framework do? Should I use it? Why or why not?

    - by sangfroid
    So, I'm starting a brand-new project in Java, and am considering using Spring. Why am I considering Spring? Because lots of people tell me I should use Spring! Seriously, any time I've tried to get people to explain what exactly Spring is or what it does, they can never give me a straight answer. I've checked the intros on the SpringSource site, and they're either really complicated or really tutorial-focused, and none of them give me a good idea of why I should be using it, or how it will make my life easier. Sometimes people throw around the term "dependency injection", which just confuses me even more, because I think I have a different understanding of what that term means. Anyway, here's a little about my background and my app: Been developing in Java for a while, doing back-end web development. Yes, I do a ton of unit testing. To facilitate this, I typically make (at least) two versions of a method: one that uses instance variables, and one that only uses variables that are passed in to the method. The one that uses instance variables calls the other one, supplying the instance variables. When it comes time to unit test, I use Mockito to mock up the objects and then make calls to the method that doesn't use instance variables. This is what I've always understood "dependency injection" to be. My app is pretty simple, from a CS perspective. Small project, 1-2 developers to start with. Mostly CRUD-type operations with a bunch of search thrown in. Basically a bunch of RESTful web services, plus a web front-end and then eventually some mobile clients. I'm thinking of doing the front-end in straight HTML/CSS/JS/JQuery, so no real plans to use JSP. Using Hibernate as an ORM, and Jersey to implement the webservices. I've already started coding, and am really eager to get a demo out there that I can shop around and see if anyone wants to invest. So obviously time is of the essence. I understand Spring has quite the learning curve, plus it looks like it necessitates a whole bunch of XML configuration, which I typically try to avoid like the plague. But if it can make my life easier and (especially) if it can make development and testing faster, I'm willing to bite the bullet and learn Spring. So please. Educate me. Should I use Spring? Why or why not?
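    Since the question keeps coming back to "dependency injection", here is a minimal sketch of what that looks like in Spring with plain Java annotations and no XML. The interface and class names are made up for illustration; the point is only that dependencies arrive through the constructor, so a unit test can pass a Mockito mock while the Spring container wires the real object at runtime.
        import org.springframework.beans.factory.annotation.Autowired;
        import org.springframework.stereotype.Service;

        // The dependency is expressed as an interface so a test can substitute a mock.
        interface UserRepository {
            String findEmail(long userId);
        }

        // Spring sees @Service, creates the bean, and hands a UserRepository
        // implementation to the constructor -- that is the "injection".
        @Service
        class WelcomeMailer {
            private final UserRepository users;

            @Autowired
            WelcomeMailer(UserRepository users) {
                this.users = users;
            }

            String buildGreeting(long userId) {
                return "Welcome, " + users.findEmail(userId);
            }
        }

        // In a unit test there is no container at all; hand in a Mockito mock yourself:
        //   UserRepository repo = org.mockito.Mockito.mock(UserRepository.class);
        //   org.mockito.Mockito.when(repo.findEmail(7L)).thenReturn("a@example.com");
        //   new WelcomeMailer(repo).buildGreeting(7L);  // "Welcome, a@example.com"
    In other words, it is close to the "pass the variables in" pattern already described in the question, except the wiring of the real implementations is done once by the container instead of by hand in every calling class.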

    Read the article

  • Creating a frozen bubble clone

    - by Vaughan Hilts
    This photo illustrates the environment: http://i.imgur.com/V4wbp.png I'll shoot the cannon, it'll bounce off the wall and it's SUPPOSED to stick to the bubble. It does at pretty much every other angle. The problem is always reproduced here, when the ball is hit off the wall into those bubbles. It also exists in other cases, but I'm not sure what triggers it. What actually happens: The ball will sometimes set to the wrong cell, and my "dropping" code will detect it as a loner and drop it off the stage. There are many implementations of "Frozen Bubble" on the web, but I can't for the life of me find a good explanation as to how the algorithm for the "Bubble Sticking" works. I see this: http://www.wikiflashed.com/wiki/BubbleBobble https://frozenbubblexna.svn.codeplex.com/svn/FrozenBubble/ But I can't figure out the algorithms... could anyone possibly explain the general idea behind getting the balls to stick? Code in question:
    //Construct our bounding rectangle for use
    var nX = currentBall.x + ballvX * gameTime;
    var nY = currentBall.y - ballvY * gameTime;
    var movingRect = new BoundingRectangle(nX, nY, 32, 32);
    var able = false;
    //Iterate over the cells and draw our bubbles
    for (var x = 0; x < 8; x++) {
        for (var y = 0; y < 12; y++) {
            //Get the bubble at this layout
            var bubble = bubbleLayout[x][y];
            var rowHeight = 27;
            //If this slot isn't empty, draw
            if (bubble != null) {
                var bx = 0, by = 0;
                if (y % 2 == 0) {
                    bx = x * 32 + 270;
                    by = y * 32 + 45;
                } else {
                    bx = x * 32 + 270 + 16;
                    by = y * 32 + 45;
                }
                //Check
                var targetBox = new BoundingRectangle(bx, by, 32, 32);
                if (targetBox.intersects(movingRect)) {
                    able = true;
                }
            }
        }
    }
    cellY = Math.round((currentBall.y - 45) / 32);
    if (cellY % 2 == 0)
        cellX = Math.round((currentBall.x - 270) / 32);
    else
        cellX = Math.round((currentBall.x - 270 - 16) / 32);
    Any ideas are very much welcome. Things I've tried:
    - Flooring and ceiling values
    - Changing the wall bounce to a lower value
    - Slowing down the ball
    None of these seem to affect it. Is there something in my math I'm not getting?
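    For what it is worth, here is one general way to think about the sticking step, written as an illustrative Java sketch (the question's code looks like JavaScript, but the arithmetic is the same). The grid origin (270, 45), the 32-pixel cell size and the 16-pixel odd-row offset are taken from the code above; everything else is assumed. The idea is to stop rounding the ball's coordinates directly and instead snap to the empty cell whose centre is closest to the ball's centre at the moment of contact; a common refinement is to restrict the search to cells adjacent to the bubble that was hit.
        /** Illustrative snap-to-grid for a Frozen Bubble style staggered layout. */
        public class BubbleSnap {
            static final int ORIGIN_X = 270, ORIGIN_Y = 45;   // from the layout code above
            static final int CELL = 32, ROW_OFFSET = 16;      // odd rows shifted half a cell

            /** Pixel centre of cell (x, y) in the staggered grid. */
            static double[] cellCenter(int x, int y) {
                double px = ORIGIN_X + x * CELL + (y % 2 == 1 ? ROW_OFFSET : 0) + CELL / 2.0;
                double py = ORIGIN_Y + y * CELL + CELL / 2.0;
                return new double[] { px, py };
            }

            /**
             * Returns {cellX, cellY} of the nearest *empty* cell to the ball's centre.
             * occupied[x][y] mirrors the bubbleLayout array in the question.
             */
            static int[] snap(double ballCenterX, double ballCenterY, boolean[][] occupied) {
                int bestX = -1, bestY = -1;
                double bestDist = Double.MAX_VALUE;
                for (int x = 0; x < occupied.length; x++) {
                    for (int y = 0; y < occupied[x].length; y++) {
                        if (occupied[x][y]) continue;       // never snap onto an existing bubble
                        double[] c = cellCenter(x, y);
                        double dx = c[0] - ballCenterX, dy = c[1] - ballCenterY;
                        double dist = dx * dx + dy * dy;
                        if (dist < bestDist) { bestDist = dist; bestX = x; bestY = y; }
                    }
                }
                return new int[] { bestX, bestY };
            }
        }
    Comparing centres rather than rounding a corner coordinate also avoids the off-by-half-cell errors that rounding (currentBall.x - 270) can produce near the walls.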

    Read the article

  • Java EE 7 Roadmap

    - by Linda DeMichiel
    The Java EE 6 Platform, released in December 2009, has seen great uptake from the community with its POJO-based programming model, lightweight Web Profile, and extension points. There are now 13 Java EE 6 compliant appserver implementations today! When we announced the Java EE 7 JSR back in early 2011, our plans were that we would release it by Q4 2012. This target date was slightly over three years after the release of Java EE 6, but at the same time it meant that we had less than two years to complete a fairly comprehensive agenda — to continue to invest in significant enhancements in simplification, usability, and functionality in updated versions of the JSRs that are currently part of the platform; to introduce new JSRs that reflect emerging needs in the community; and to add support for use in cloud environments. We have since announced a minor adjustment in our dates (to the spring of 2013) in order to accommodate the inclusion of JSRs of importance to the community, such as Web Sockets and JSON-P. At this point, however, we have to make a choice. Despite our best intentions, our progress has been slow on the cloud side of our agenda. Partially this has been due to a lack of maturity in the space for provisioning, multi-tenancy, elasticity, and the deployment of applications in the cloud. And partially it is due to our conservative approach in trying to get things "right" in view of limited industry experience in the cloud area when we started this work. Because of this, we believe that providing solid support for standardized PaaS-based programming and multi-tenancy would delay the release of Java EE 7 until the spring of 2014 — that is, two years from now and over a year behind schedule. In our opinion, that is way too long. We have therefore proposed to the Java EE 7 Expert Group that we adjust our course of action — namely, stick to our current target release dates, and defer the remaining aspects of our agenda for PaaS enablement and multi-tenancy support to Java EE 8. Of course, we continue to believe that Java EE is well-suited for use in the cloud, although such use might not be quite ready for full standardization. Even today, without Java EE 7, Java EE vendors such as Oracle, Red Hat, IBM, and CloudBees have begun to offer the ability to run Java EE applications in the cloud. Deferring the remaining cloud-oriented aspects of our agenda has several important advantages: It allows Java EE Platform vendors to gain more experience with their implementations in this area and thus helps us avoid risks entailed by trying to standardize prematurely in an emerging area. It means that the community won't need to wait longer for those features that are ready at the cost of those features that need more time. Because we have already laid some of the infrastructure for cloud support in Java EE 7, including resource definition metadata, improved security configuration, JPA schema generation, etc., it will allow us to expedite a Java EE 8 release. We therefore plan to target the Java EE 8 Platform release for the spring of 2015. This shift in the scope of Java EE 7 allows us to better retain our focus on enhancements in simplification and usability and to deliver on schedule those features that have been most requested by developers. 
These include the support for HTML 5 in the form of Web Sockets and JSON-P; the simplified JMS 2.0 APIs; improved Managed Bean alignment, including transactional interceptors; the JAX-RS 2.0 client API; support for method-level validation; a much more comprehensive expression language; and more. We feel strongly that this is the right thing to do, and we hope that you will support us in this proposed direction.

    Read the article

  • Type of computer for a developer on the road

    - by nabucosound
    Hi developers: I am planning to travel through Eurasia and Asia (Russia, China, Korea, Japan, South East Asia...) for a while and, although there are plenty of marvelous things to see and to do, I must keep on working :(. I am a Python developer, dedicated mainly to web projects. I use Django, sqlite3, browsers, and occasionally (only if I have no choice) I install Postgres, MySQL, Apache or any other servers commonly used on the internet. I do my coding in vim, use ssh to connect, lftp to transfer files, IRC, grep/ack... so I spend most of my time in terminal shells. But I also use IM or Skype to communicate with my clients and peers, as well as some other software (which, after all, is not mandatory for my day-to-day work). I currently work with a MacBook Pro (3 years old now) and so far I am very happy with the performance. But I don't want to carry it if I am going to be in transit for a long time; it is simply huge and heavy for what I am planning to load into my rather small backpack (while traveling, less is more, you know). So here I am, reading all kinds of opinions about netbooks, because at first sight this is the kind of computer I thought I had to choose. I am going to use Linux on it; Microsoft is not my cup of tea, and Mac OS is not available for them unless I were to buy a MacBook Air, something I won't do because if I am robbed, or rain/dust/truck loaders break it, I would burst into tears. I am concerned about wifi performance and connectivity; I am going to use one of those Linux distros/tools to hack/test on "open" networks (if you know what I mean) in case I am not in a place with real free wifi access and I find myself in an emergency. CPU speed should be acceptable, but since I don't plan to run Photoshop or heavyweight IDEs, I guess most of the time I won't be overloading the machine. Apart from this, maybe (surely) I am missing other features to consider. With that said (sorry about the length), here comes my question, raised from a deep ignorance regarding the war between netbooks and notebooks (I assume tablet PCs are not for programming yet): If I buy a netbook, will I have to throw it away after 1 month on the road and buy a notebook? Or will I be OK? Thanks! Hector Update: I have received great feedback so far! I would like to insist on the fact that I will be traveling through many different countries and scenarios. I am sure that while in Japan I will be more than fine with anything related to technology, connectivity, etc. But consider that I will be, for example, on a train through Russia (the Trans-Siberian) and will cross Mongolia as well. I will stay at friends' places sometimes, but most of the time I will have to work from hostel rooms, trains, buses, beaches (hey, this last one doesn't sound too bad, hehe!). I think some of your answers focus on the geek part but lose the point of this "on the road" fact. I am very aware, and agree, that netbooks suck compared to notebooks, but what I am trying to do here is find a balance and learn from your experiences with netbooks, to see first hand whether a netbook will fail me in the mid-to-long term of the trip.
    So I have summarized the main concerns on this small list, in no particular order:
    - keyboard/touchpad feel: I use vim, so there's no need to move a mouse pointer much unless I am browsing the web, but I make intensive use of the keyboard
    - screen real estate: again, terminal work most of the time
    - battery life: I think this is very important
    - weight/size: also very important
    - looks not worth stealing; I don't give a shit if it is lost/stolen/broken: this may depend on the kind of person you are, your finances, etc. Also, to prevent losing work, I will upload EVERYTHING to the cloud whenever I have a chance
    - wifi: I don't want to discover that my wifi is one of those that cannot deal with half the routers on this planet, or has poor connectivity
    Thanks again for your answers and comments!

    Read the article

  • How to remove the boundary effects arising due to zero padding in scipy/numpy fft?

    - by Omkar
    I have written some Python code to smooth a given signal using the Weierstrass transform, which is basically the convolution of a normalised Gaussian with the signal. The code is as follows:

    # Importing relevant libraries
    from __future__ import division
    from scipy.signal import fftconvolve
    import numpy as np

    def smooth_func(sig, x, t=0.002):
        N = len(x)
        x1 = x[-1]
        x0 = x[0]
        # defining a new array y which is symmetric around zero, to make the gaussian symmetric
        y = np.linspace(-(x1-x0)/2, (x1-x0)/2, N)
        # gaussian centered around zero
        gaus = np.exp(-y**(2)/t)
        # using fftconvolve to speed up the convolution; gaus.sum() is the normalization constant
        return fftconvolve(sig, gaus/gaus.sum(), mode='same')

    If I run this code on, say, a step function, it smooths the corner, but at the boundary it interprets another corner and smooths that too, giving spurious behaviour at the boundary. I explain this with the figure linked below: Boundary effects. This problem does not arise if we directly integrate to compute the convolution. Hence the problem is not in the Weierstrass transform; it is in scipy's fftconvolve function. To understand why this problem arises, we first need to understand how fftconvolve works in scipy. The fftconvolve function basically uses the convolution theorem to speed up the computation. In short it says: convolution(int1, int2) = ifft(fft(int1) * fft(int2)). If we apply this theorem directly we don't get the desired result. To get the desired result we need to take the fft over an array double the size of max(int1, int2), but this leads to the undesired boundary effects. This is because in the fft code, if the length over which the fft is taken is greater than size(int), the input is zero padded before the fft is taken. This zero padding is exactly what is responsible for the undesired boundary effects. Can you suggest a way to remove these boundary effects? I have tried to remove them with a simple trick: after smoothing the function I compare the value of the smoothed signal with the original signal near the boundaries, and if they don't match I replace the value of the smoothed function with the input signal at that point. It is as follows:

    i = 0
    eps = 1e-3
    while abs(smooth[i] - sig[i]) > eps:   # comparing the signals on the left boundary
        smooth[i] = sig[i]
        i = i + 1
    j = -1
    while abs(smooth[j] - sig[j]) > eps:   # comparing on the right boundary
        smooth[j] = sig[j]
        j = j - 1

    There is a problem with this method: because of the epsilon there are small jumps in the smoothed function, as shown here: jumps in the smooth func. Can any changes be made to the above method to solve this boundary problem?
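    One way to avoid the artifact, sketched below under the assumption that extending the signal by its edge values is acceptable for the data in question (the function name smooth_func_padded is mine, mirroring smooth_func above), is to pad the signal yourself before calling fftconvolve and trim the padding afterwards; the zero padding fftconvolve performs internally then only ever touches the artificial extension, never the real data:

    import numpy as np
    from scipy.signal import fftconvolve

    def smooth_func_padded(sig, x, t=0.002):
        N = len(x)
        y = np.linspace(-(x[-1] - x[0]) / 2, (x[-1] - x[0]) / 2, N)
        gaus = np.exp(-y**2 / t)
        gaus /= gaus.sum()
        pad = N // 2                          # enough to cover the kernel's half-width
        # Repeat the edge values ('reflect' is another common choice); the zeros
        # added inside fftconvolve now lie beyond this extension.
        sig_ext = np.pad(sig, pad, mode='edge')
        smooth_ext = fftconvolve(sig_ext, gaus, mode='same')
        return smooth_ext[pad:-pad]           # drop the artificial extension

    For a narrow Gaussian like the one above the contaminated region is only a few samples wide, so a much smaller pad would also work; the point is simply that the values the transform pads with are now chosen to match the signal rather than being zeros, so no artificial "corner" is created at the boundary.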

    Read the article

  • Driving Growth through Smarter Selling

    - by Samantha.Y. Ma
    With the proliferation of social media and mobile technologies, the world of selling and buying has drastically changed, as buyers now have access to more information than they did in the past. In fact, studies have shown that buyers complete 60 percent of the buying process before they even engage with a salesperson. The old models of selling no longer work effectively; and the new way of selling is driven by customer insights. To succeed, sales need to be proactive, not reactive. They need to engage with the customer early, sometimes even before the customer’s needs are fully understood. In fact, the best sales reps prescribe a solution that the customer doesn't even know they need, often by leveraging social media to listen, engage and collaborate with peers. And they fully tap into the power of analytics and data to drive results.  Let’s look at some stats regarding challenges facing sales today. According to recent studies, sales reps spend 78 percent of their time doing administrative things -- such as planning, searching for information, data entry -- and only 22 percent of the time actually selling. Furthermore, 40 percent of B2B sales reps miss their quota, and only 3 percent of companies can say with confidence that their forecasts are “always accurate.” How do you drive growth in this modern day and age? It's not just getting your sales teams to work harder; it's helping them work smarter and providing them with a solution they want to use, on the device(s) they already know, giving them critical insights and tools to be more productive, increase win rates, and close deals faster. Oracle Sales Cloud was designed to do exactly that. It enables smarter selling that allows reps to sell more, managers to know more, and companies to grow more.  Let’s face it—if all CRM solutions worked well, sales executives wouldn’t be having the same headaches as they had in the past. Join Oracle’s Thomas Kurian and Doug Clemmans on Tuesday, October 22 as they explain: • How today’s sales processes have rendered many CRM systems obsolete • The secrets to smarter selling, leveraging mobile, social, and big data • How Oracle Sales Cloud enables smarter selling—as proven by Oracle and its customers Take the first step down the path toward smarter selling. With Oracle Sales Cloud, reps sell more, managers know more, and companies grow more.

    Read the article

  • SharePoint Thoughts

    - by Tim Murphy
    I was listening to .NET Rocks episode #713 and it got me thinking about a number of SharePoint-related topics. I have been working with SharePoint since the 2001 product came out and have watched it evolve over the years.  Today SharePoint is one of the most powerful and flexible products in the market.  Of course that doesn't mean there isn't room for improvement (a lot of improvement, in fact), and with much power comes much responsibility. My main gripe these days is that you have to develop on a server instance.  This adds a real barrier to entry for developers.  You either have to run VMware or Hyper-V on your developer machine or actually develop on your dev server for most tasks.  Yes, there is a way to set up a Windows 7 machine with the SharePoint components, but it is very hackish. Beyond that, the tools in VS2010 are a great leap forward from past generations.  Not requiring a separate package creation tool is not the least of the improvements.  Better workflow and web part development have also eased the burden of many developers. The other thing the show brought up in my thoughts was more around usage.  Users want to be able to self-serve everything without regard to what effect that has on leveraging their data from a corporate perspective.  My coworkers who work on Lotus Notes ask why the user can't just do whatever they want. Part of the reason is that those features have not been built, but the other part is that giving them those features is often like giving an infant a loaded handgun.  You can do it, but that doesn't make it the smart thing to do. As with any tool that is going to be used in the enterprise, it should be subject to governance.  If controls are not in place then, as they said in the episode of DNR, document libraries (and I believe SharePoint in general) start to look as disarrayed and unusable as a shared drive.  Consider these factors before giving in to every whim of the users.  You should be able to explain to them the tradeoffs of giving them full control versus being able to leverage the information they collect to the benefit of the organization. These are just a couple of the thoughts that were triggered by the show.  I'm sure there are more discussions that can be had.  Feel free to leave your comments about the pros and cons of SharePoint. del.icio.us Tags: .NET Rocks, SharePoint, software development

    Read the article

  • Columnstore Case Study #1: MSIT SONAR Aggregations

    - by aspiringgeek
    Preamble This is the first in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014.  Many of these can be found in this deck along with details such as internals, best practices, caveats, etc.  The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative. Why Columnstore? If we’re looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we’re asking a question which by design needs to hit lots of rows—DW, reporting, aggregations, grouping, scans, etc., SQL Server has never had a good mechanism—until columnstore. Columnstore indexes were introduced in SQL Server 2012. However, they're still largely unknown. Some adoption blockers existed; yet columnstore was nonetheless a game changer for many apps.  In SQL Server 2014, potential blockers have been largely removed & they're going to profoundly change the way we interact with our data.  The purpose of this series is to share the performance benefits of columnstore & to document why columnstore is a compelling reason to upgrade to SQL Server 2014. App: MSIT SONAR Aggregations At MSIT, performance & configuration data is captured by SCOM. We archive much of the data in a partitioned data warehouse table in SQL Server 2012 for reporting via an application called SONAR.  By definition, this is a primary use case for columnstore—report queries requiring aggregation over large numbers of rows.  New data is refreshed each night by an automated table partitioning mechanism—a best practices scenario for columnstore. The Win Compared to performance using classic indexing, which resulted in the expected query plan selection including partition elimination, the SQL Server 2012 nonclustered columnstore index increased query performance significantly.  Logical reads were reduced by over a factor of 50; both CPU & duration improved by factors of 20 or more.  Other than creating the columnstore index, no special modifications or tweaks to the app or database schema were necessary to achieve the performance improvements.  Existing nonclustered indexes were rendered superfluous & were deleted, thus mitigating maintenance challenges such as defragging as well as conserving disk capacity. Details The table below provides the raw data & summarizes the performance deltas.

                                      Logical Reads (8K pages)    CPU (ms)    Duration (ms)
      Columnstore                     160,323                     20,360      9,786
      Conventional Table & Indexes    9,053,423                   549,608     193,903
      Improvement factor (Δ)          x56                         x27         x20

    The charts provide additional perspective on this data.  "Conventional vs. Columnstore Metrics" documents the raw data.  Note on this linear display the magnitude of the conventional index performance vs. columnstore.  The “Metrics (Δ)” chart expresses these values as a ratio. Summary For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing.  I have documented here, in the first of a series of reports on columnstore implementations, results from an initial implementation at MSIT in which logical reads were reduced by over a factor of 50; both CPU & duration improved by factors of 20 or more.  Subsequent posts in this series document performance enhancements that are even more significant.

    Read the article

  • Ideas for time-keeping in a webbased RPG?

    - by ashy_32bit
    I'm assigned the task of doing the preliminary research for a web-based MMO RPG. Now my biggest problem here is "web based" vs "MMO RPG". I did some research about time-keeping systems and I'm totally confused as to how exactly something as real-time as an MMO RPG can work on a pull-only (unidirectional) platform like HTTP. I know there is also a turn-based alternative to time keeping, but can it work in an MMO setting? EDIT: Take a battle for example: player A (human) wants to attack player B (also human) in the open. How does it work when player A issues the "attack" command on player B? How do I inform player B that he is being attacked? And then how exactly does the battle go on between the two over an HTTP-based communication channel? To my knowledge this is impossible unless you resort to another technology (HTTP is one-way, that is, you can just ask the server and get a response; the server can't update you unless asked to. This is very well known and simply explained). So I thought maybe I could somehow change the whole time-keeping model from real-time to a less real-time model (towards a turn-based RPG, for example) and somehow work around the whole problem of "interactivity". EDIT2: It is not that I don't want to use any server-side technologies. For sure it is not going to work client-side-only even for the most trivial of multiplayer games, let alone an RPG. So sure, there would be a (probably complex) server-side component to it (the so-called Game Engine, I suppose). The problem is not the technology that implements the logic (game mechanics) but the communication technology and how it limits what the game mechanics can do (like how real-time or turn-based the game is going to be). HTTP is a request-response protocol, meaning you get served only if you ask for it (explicitly send a GET or POST request to the server). An HTTP server cannot inform you when anything of interest happens in the game world unless you refresh the page (as some suggested) or you use some bi-directional tech (totally different animals) like Flash, WebSockets, HTML5, etc. So maybe the question is: Is it possible to implement an MMORPG using only HTML5/PHP and no periodic page refreshes? If so, what would be the rules to make it an MMO RPG? Can't explain it any clearer. Sorry :D
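    For what it's worth, the usual pattern on a plain request/response stack is to keep a per-player event queue on the server: the handler for player A's "attack" request resolves the attack and appends the outcome to player B's queue, and player B's client polls its own queue every second or two (or long-polls, holding the request open until something arrives). Below is a minimal Python sketch of the server-side piece; the class and method names are illustrative, not any specific framework's API:

    import time
    from collections import defaultdict, deque

    class EventHub:
        """Per-player event queues; each client periodically polls its own queue."""

        def __init__(self):
            self.queues = defaultdict(deque)

        def push(self, player_id, event):
            # Called from the handler of the attacker's request: the outcome
            # of the attack is queued for the defender to pick up later.
            self.queues[player_id].append({"at": time.time(), **event})

        def poll(self, player_id):
            # Called from the defender's periodic "anything new for me?" request.
            q = self.queues[player_id]
            events = list(q)
            q.clear()
            return events

    hub = EventHub()
    hub.push("player_b", {"type": "attacked", "by": "player_a", "damage": 12})
    print(hub.poll("player_b"))   # player B's next poll sees the attack
    print(hub.poll("player_b"))   # nothing new after that

    With plain polling the battle is effectively turn-based at the granularity of the poll interval; if that feels too coarse, long polling or WebSockets provide the push behaviour the question is after, at the cost of going beyond pure request/response HTTP.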

    Read the article
