Search Results

Search found 9366 results on 375 pages for 'common lisp'.


  • Bluetooth Dial-Up Networking using Blueman

    - by leemes
    I want to configure a dial-up network connection via Bluetooth to my phone in order to access the internet. I use Lubuntu 12.04 (Ubuntu with LXDE), which has the Network Manager applet and the Blueman applet installed. I guess these are the same tools as on an Ubuntu installation, hence I ask my question on this site. My phone is a Sony Ericsson W810i, my laptop is a Lenovo S10-2, and my mobile phone provider is o2 Germany. I scanned for my mobile phone using the Blueman applet. I connected the dial-up network via the context menu - Serial Ports - Dial-up Networking. A notification bubble says that the connection is available on the interface named ppp0, but ifconfig tells a different story: there is no ppp0 or anything similar. I only see my eth0 (wired ethernet), eth1 (wifi) and lo interfaces. Of course, I can't ping google.com, since the interface really does not seem to be present at all. When the dial-up network is being connected, my mobile phone says that it is connecting to the internet, and afterwards I see the active connection on the phone's screen. When I connect successfully with the phone from another computer, the phone behaves exactly the same, so I guess the phone isn't the problem. I don't know if I configured the dial-up correctly. I use the phone number *99#, which is very common for most mobile ISPs, and the APN which my ISP tells me to use. (I can't find the dial-up number on their support page, so I just use the default value *99#.) There are How-Tos out there which use the Network Manager applet to set up a Bluetooth dial-up connection, but I can't see any Bluetooth devices in its context menu as on the screenshots in those How-Tos. Do you have any suggestions what might be wrong or what I should try? EDIT: When choosing "Network Access Point" in the device's context menu instead of Serial Ports - Dial-up Networking, an interface bnep0 appears. However, no IPv4 address is assigned to that interface (only IPv6), and the phone does not connect to the internet. Am I missing something? Can I connect to the internet after setting up this network connection?

    Read the article

  • JavaOne Rock Star – Adam Bien

    - by Janice J. Heiss
    Among the most celebrated developers in recent years, especially in the domain of Java EE and JavaFX, is consultant Adam Bien, who, in addition to being a JavaOne Rock Star for Java EE sessions given in 2009 and 2011, is a Java Champion, the winner of Oracle Magazine’s 2011 Top Java Developer of the Year Award, and recently won a 2012 JAX Innovation Award as a top Java Ambassador. Bien will be presenting the following sessions: TUT3907 - Java EE 6/7: The Lean Parts; CON3906 - Stress-Testing Java EE 6 Applications Without Stress; CON3908 - Building Serious JavaFX 2 Applications; CON3896 - Interactive Onstage Java EE Overengineering. I spoke with Bien to get his take on Java today. He expressed excitement that the smallest companies and startups are showing increasing interest in Java EE. “This is a very good sign,” said Bien. “Only a few years ago J2EE was mostly used by larger companies -- now it is becoming interesting even for one-person shows. Enterprise Java events are also extremely popular. On the Java SE side, I'm really excited about Project Nashorn.” Nashorn is an upcoming JavaScript engine, developed fully in Java by Oracle and based on the Da Vinci Machine (JSR 292); it is expected to be available with Java 8. Bien expressed concern about a common misconception regarding Java's supposedly mediocre productivity. “The problem is not Java,” explained Bien, “but rather systems built with ancient patterns and approaches. Sometimes it really is ‘Cargo Cult Programming.’ Java SE/EE can be incredibly productive and lean without the unnecessary and hard-to-maintain bloat. The real problems are ‘Ivory Towers’ and not Java’s lack of productivity.” Bien remarked that if there is one thing he wanted Java developers to understand, it is that "Premature optimization is the root of all evil. Or at least of some evil. Modern JVMs and application servers are hard to optimize upfront. It is far easier to write simple code and measure the results continuously. Identify the hotspots first, then optimize.” He advised Java EE developers to “Rethink everything you know about Enterprise Java. Before you implement anything, ask the question: ‘Why?’ If there is no clear answer -- just don't do it. Most well-known best practices are outdated. Focus your efforts on the domain problem and not the technology.” Looking ahead, Bien remarked, “I would like to see open source application servers running directly on a hypervisor. Packaging the whole runtime in a single file would significantly simplify deployment and operations.” Check out a recent Java Magazine interview with Bien about his Java EE 6 stress monitoring tool here.

    Read the article

  • SQL SERVER – Iridium I/O – SQL Server Deduplication that Shrinks Databases and Improves Performance

    - by Pinal Dave
    Database performance is a common problem for SQL Server DBAs.  It seems like we spend more time on performance than just about anything else.  In many cases, we use scripts or tools that point out performance bottlenecks, but we don’t have any way to fix them.  For example, what do you do when you need to speed up a query that is already tuned as well as possible?  Or what do you do when you aren’t allowed to make changes to a database supporting a purchased application? Iridium I/O for SQL Server was originally built at Confio Software (makers of Ignite) because DBAs kept asking for a way to actually fix performance instead of just pointing out performance problems. The technology is certified by Microsoft and was so promising that it was spun out into a separate company that is now run by the Confio Founder/CEO and technology management team. Iridium uses deduplication technology to both shrink databases and boost I/O performance.  It is intriguing to see it work.  It will deduplicate a live database as it is running transactions.  You can watch the database get smaller while user queries are running. Iridium is a simple tool to use. After installing the software, you click an “Analyze” button, which will spend a minute or two on each database and estimate both your storage and performance savings.  Next, you click an “Activate” button to turn on Iridium I/O for your selected databases.  You don’t need to reboot the operating system or restart the database during any part of the process. As part of my test, I also wanted to see if there would be an impact on my databases when Iridium was removed.  The ‘revert’ process (bringing the files back to their SQL Server native format) was executed by a simple click of a button, and completed while the databases were available for normal processing. I was impressed and enjoyed playing with the software, and I encourage all of you to try it out.  Here is the link to the website to download Iridium for free. Reference: Pinal Dave (http://blog.sqlauthority.com). Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T-SQL

    Read the article

  • Framework 4 Features: User Propagation to the Database

    - by Anthony Shorten
    One of the features I mentioned in a previous entry was the ability of Oracle Utilities Application Framework V4 to automatically propagate the end user to the database connection. This bears more explanation. In past releases of the Oracle Utilities Application Framework, all database connections are pooled and shared within a channel of access. So, for example, the online connections on the Business Application Server share a common pool of connections, and the batch threads in a thread pool share a separate pool of connections. The connections are pooled for performance reasons (the most expensive part of a typical transaction is opening and closing connections, so we save time by having them ready beforehand). The idea is that when a business function needs some SQL to be executed, it takes a spare connection from the pool, executes the SQL, and then returns the connection back to the pool for reuse. Unfortunately, to support the pool being started and ready before the transactions arrive, you need to have a shared userid (as you don't know beforehand which users will need the connections). Therefore each connection uses the same database user to execute the SQL it needs. This is generally acceptable for executing transactions, but it does not allow the DBA or other tools to ascertain which end user is actually running the transaction. In Oracle Utilities Application Framework V4, we now set the CLIENT_IDENTIFIER to the end userid (not the Login Id) when the connection is taken from the pool and used, and reset it back to blank when it is returned to the pool. The CLIENT_IDENTIFIER is a feature of the Oracle Database connection information. From a monitoring perspective, when a connection to the database is actively running SQL, the end user can now be determined by querying the CLIENT_IDENTIFIER on the session object within the database. This can be done in the DBA's favorite monitoring tool (even just some SQL on the v$session table is enough; see the sketch below). This has other implications as well. Oracle sells a number of security add-ons for the database, and so do third parties. If a site wants additional levels of security or auditing in the database, then the CLIENT_IDENTIFIER, if supported, is now available to be recorded or used by those products. This facility was one of the most requested "nice to haves" that customers would ask us about, so we now allow it to be used for finer-grained monitoring and additional security facilities. Note: This facility is only available for customers using the Oracle Database versions of our products.
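    The framework sets the identifier internally (the framework itself is Java-based), so the following is purely an illustration of the database mechanism: a minimal C# sketch, assuming ODP.NET, of tagging a pooled connection through the standard Oracle DBMS_SESSION package (the helper class name is hypothetical). A DBA could then see the end user with a query as simple as "select username, client_identifier from v$session".

        // Minimal illustrative sketch (assumes ODP.NET, Oracle.DataAccess.Client).
        // Tags a pooled connection with the end user on checkout and clears
        // the tag when the connection goes back to the pool.
        using Oracle.DataAccess.Client;

        public static class SessionTagger
        {
            public static void Tag(OracleConnection conn, string endUserId)
            {
                using (OracleCommand cmd = conn.CreateCommand())
                {
                    cmd.CommandText = "BEGIN DBMS_SESSION.SET_IDENTIFIER(:id); END;";
                    cmd.Parameters.Add("id", endUserId);
                    cmd.ExecuteNonQuery();
                }
            }

            public static void Clear(OracleConnection conn)
            {
                using (OracleCommand cmd = conn.CreateCommand())
                {
                    cmd.CommandText = "BEGIN DBMS_SESSION.CLEAR_IDENTIFIER; END;";
                    cmd.ExecuteNonQuery();
                }
            }
        }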

    Read the article

  • Adding Attributes to Generated Classes

    ASP.NET MVC 2 adds support for data annotations, implemented via attributes on your model classes.  Depending on your design, you may be using an OR/M tool like Entity Framework or LINQ-to-SQL to generate your entity classes, and you may further be using these entities directly as your Model.  This is fairly common, and alleviates the need to map between POCO domain objects and such entities (though there are certainly pros and cons to using such entities directly). As an example, the current version of the NerdDinner application (available on CodePlex at nerddinner.codeplex.com) uses Entity Framework for its model.  Thus, there is a NerdDinner.edmx file in the project, and a generated NerdDinner.Models.Dinner class.  Fortunately, these generated classes are marked as partial, so you can extend their behavior via your own partial class in a separate file.  However, if for instance the generated Dinner class has a property Title of type string, you can't then add your own Title of type string for the purpose of adding data annotations to it, like this:

        public partial class Dinner
        {
            [Required]
            public string Title { get; set; }
        }

    This will result in a compilation error, because the generated Dinner class already contains a definition of Title.  How then can we add attributes to this generated code?  Do we need to go into the T4 template and add a special case that says if we're generating a Dinner class and it has a Title property, add this attribute?  Ick.

    MetadataType to the Rescue
    The MetadataType attribute can be used to define a type which contains attributes (metadata) for a given class.  It is applied to the class you want to add metadata to (Dinner), and it refers to a totally separate class to which you're free to add whatever methods and properties you like.  Using this attribute, our partial Dinner class might look like this:

        [MetadataType(typeof(Dinner_Validation))]
        public partial class Dinner {}

        public class Dinner_Validation
        {
            [Required]
            public string Title { get; set; }
        }

    In this case the Dinner_Validation class is public, but if you were concerned about muddying your API with such classes, it could instead have been created as a private class within Dinner.  Having the validation attributes specified in their own class (with no other responsibilities) complies with the Single Responsibility Principle and makes it easy for you to test that the validation rules you expect are in place via these annotations/attributes (a test sketch follows below). Thanks to Julie Lerman for her help with this.  Right after she showed me how to do this, I realized it was also already being done in the project I was working on.
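    As a quick illustration of that testability point, here is a minimal sketch (the test class and its failure mechanism are placeholders, not part of NerdDinner) that uses reflection to assert the Required rule is wired up through the buddy class:

        // Minimal sketch: verify via reflection that Dinner.Title carries
        // [Required] through the MetadataType buddy class.
        using System;
        using System.ComponentModel.DataAnnotations;
        using System.Linq;

        public static class DinnerValidationTest
        {
            public static void TitleIsRequired()
            {
                // Find the buddy class registered on Dinner.
                var metadata = (MetadataTypeAttribute)typeof(Dinner)
                    .GetCustomAttributes(typeof(MetadataTypeAttribute), false)
                    .Single();

                // Look up Title on the buddy class and check for [Required].
                bool isRequired = metadata.MetadataClassType
                    .GetProperty("Title")
                    .GetCustomAttributes(typeof(RequiredAttribute), false)
                    .Any();

                if (!isRequired)
                    throw new Exception("Expected [Required] on Dinner.Title");
            }
        }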

    Read the article

  • What Is Nuclear Meltdown?

    - by Gopinath
    Japan was first hit by a massive earthquake, then a ruthless tsunami washed away thousands of homes, and now they fear the worst – a meltdown of nuclear power stations in the quake-hit area. Nuclear meltdowns are horrifying – remember the Chernobyl incident in the Soviet Union? The Chernobyl reactor meltdown released 400 times more radioactive material than the atomic bombing of Hiroshima. The effects of a nuclear meltdown are beyond the imagination of the common man: thousands of people lose their lives, and many lakhs more suffer from radiation-related diseases for many years. Nuclear meltdowns are dangerous, but how do they happen? What causes a nuclear meltdown? In simple terms, a nuclear meltdown is an accident that happens due to severe overheating of a nuclear reactor and results in the release of nuclear radiation into the environment.  How Does A Nuclear Meltdown Happen? According to Wikipedia: A meltdown occurs when a severe failure of a nuclear power plant system prevents proper cooling of the reactor core, to the extent that the nuclear fuel assemblies overheat and melt. A meltdown is considered very serious because of the potential that radioactive materials could be released into the environment. The fuel assemblies in a reactor core can melt if heat is not removed. A nuclear reactor does not have to remain critical for a core damage incident to occur, because decay heat continues to heat the reactor fuel assemblies after the reactor has shut down, though this heat decreases with time. A core damage accident is caused by the loss of sufficient cooling for the nuclear fuel within the reactor core. The reason may be one of several factors, including a loss-of-pressure-control accident, a loss-of-coolant accident (LOCA), an uncontrolled power excursion or, in some types, a fire within the reactor core. Failures in control systems may cause a series of events resulting in loss of cooling. Contemporary safety principles of defense in depth ensure that multiple layers of safety systems are always present to make such accidents unlikely. Video – What Causes a Nuclear Meltdown: Al Jazeera news has a good analysis of the feared meltdown at Japan’s nuclear plants, and also an animation of what causes a nuclear meltdown. cc image credit: flickr/jtjdt. This article, titled "What Is Nuclear Meltdown?", was originally published at Tech Dreams.

    Read the article

  • Microsoft Introduces WebMatrix

    - by Rick Strahl
    originally published in CoDe Magazine Editorial

    Microsoft recently released the first CTP of a new development environment called WebMatrix, which, along with some of its supporting technologies, is squarely aimed at making the Microsoft Web Platform more approachable for first-time developers and hobbyists. But in the process, it also provides some updated technologies that can make life easier for existing .NET developers. Let’s face it: ASP.NET development isn’t exactly trivial unless you already have a fair bit of familiarity with sophisticated development practices. Stick a non-developer in front of Visual Studio .NET or even the Visual Web Developer Express edition and it’s not likely that the person in front of the screen will be very productive or feel inspired. Yet other technologies like PHP and even classic ASP did provide the ability for non-developers and hobbyists to become reasonably proficient in creating basic web content quickly and efficiently. WebMatrix appears to be Microsoft’s attempt to bring back some of that simplicity with a number of technologies and tools. The key is to provide a friendly and fully self-contained development environment that provides all the tools needed to build an application in one place, as well as tools that allow publishing of content and databases easily to the web server. WebMatrix is made up of several components and technologies:

    IIS Developer Express
    IIS Developer Express is a new, self-contained development web server that is fully compatible with IIS 7.5 and based on the same codebase that IIS 7.5 uses. This new development server replaces the much less compatible Cassini web server that’s been used in Visual Studio and the Express editions. IIS Express addresses a few shortcomings of the Cassini server, such as the inability to serve custom ISAPI extensions (things like PHP or classic ASP, for example), as well as its lack of support for advanced authentication. IIS Developer Express provides most of the IIS 7.5 feature set, giving much better compatibility between development and live deployment scenarios.

    SQL Server Compact 4.0
    Database access is a key component for most web-driven applications, but on the Microsoft stack this has mostly meant you have to use SQL Server or SQL Server Express. SQL Server Compact is not new - it’s been around for a few years, but it’s been severely hobbled in the past by terrible tool support and the inability to support more than a single connection, in Microsoft’s attempt to avoid losing SQL Server licensing revenue. The new release of SQL Server Compact 4.0 supports multiple connections, and you can run it in ASP.NET web applications simply by installing an assembly into the bin folder of the web application. In effect, you don’t have to install a special system configuration to run SQL Compact, as it is a drop-in database engine: copy the small assembly into your BIN folder (or load it from the GAC if installed fully), create a connection string against a local file-based database file, and then start firing SQL requests. Additionally, WebMatrix includes nice tools to edit the database tables and files, along with tools to easily upsize (and hopefully, in the future, downsize) to full SQL Server. This is a big win, pending compatibility and performance limits. In my simple testing the data engine performed well enough for small data sets. This is not only useful for web applications, but also for desktop applications for which a fully installed SQL engine like SQL Server would be overkill.
    Having a local data store in those applications that can potentially be accessed by multiple users is a welcome feature.

    ASP.NET Razor View Engine
    What? Yet another native ASP.NET view engine? We already have Web Forms and various different flavors of using that view engine with Web Forms and MVC. Do we really need another? Microsoft thinks so, and Razor is an implementation of a lightweight, script-only view engine. Unlike the Web Forms view engine, Razor works only with inline code, snippets, and markup; therefore, it is more in line with current thinking of what a view engine should represent. There’s no support for a “page model” or any of the other Web Forms features of the full-page framework - just a lightweight scripting engine that works with plain markup plus embedded expressions and code. The markup syntax for Razor is geared for minimal typing, plus some progressive detection of where a script block/expression starts and ends. This results in a much leaner syntax than the typical ASP.NET Web Forms alligator (<% %>) tags. Razor uses the @ sign plus standard C# (or Visual Basic) block syntax to delineate code snippets and expressions. Here’s a very simple example of what Razor markup looks like, along with some comment annotations:

        <!DOCTYPE html>
        <html>
            <head>
                <title></title>
            </head>
            <body>
            <h1>Razor Test</h1>

            <!-- simple expressions -->
            @DateTime.Now
            <hr />

            <!-- method expressions -->
            @DateTime.Now.ToString("T")

            <!-- code blocks -->
            @{
                List<string> names = new List<string>();
                names.Add("Rick");
                names.Add("Markus");
                names.Add("Claudio");
                names.Add("Kevin");
            }

            <!-- structured block statements -->
            <ul>
            @foreach(string name in names){
                <li>@name</li>
            }
            </ul>

            <!-- conditional code -->
            @if(true) {
                <!-- literal text embedding in code -->
                <text>
                true
                </text>
            }
            else
            {
                <!-- literal text embedding in code -->
                <text>
                false
                </text>
            }
            </body>
        </html>

    Like the Web Forms view engine, Razor parses pages into code, and then executes that run-time compiled code. Effectively a “page” becomes a code file, with markup becoming literal text written into the Response stream, code snippets becoming raw code, and expressions being written out with Response.Write(). The code generated from Razor doesn’t look much different from similar Web Forms code that only uses script tags; so although the syntax may look different, the operational model is fairly similar to the Web Forms engine, minus the overhead of the large Page object model. However, there are differences: Razor pages are based on a new base class, Microsoft.WebPages.WebPage, which is hosted in the Microsoft.WebPages assembly that houses all the Razor engine parsing and processing logic. Browsing through the assembly (in the generated ASP.NET Temporary Files folder or GAC) will give you a good idea of the functionality that Razor provides. If you look closely, a lot of the feature set matches ASP.NET MVC’s view implementation as well as many of the helper classes found in MVC. It’s not hard to guess the motivation for this sort of view engine: for beginning developers the simple markup syntax is easier to work with, although you obviously still need some understanding of the .NET Framework in order to create dynamic content.
    The syntax is easier to read and grok, and much shorter to type, than ASP.NET alligator tags (<% %>), and it is easier to understand aesthetically what’s happening in the markup code. Razor is also a better fit for Microsoft’s vision of ASP.NET MVC: it’s a new view engine without the baggage of Web Forms attached to it. The engine is more lightweight since it doesn’t carry all the features and object model of Web Forms with it, and it can be instantiated directly outside of the HTTP environment, which has been rather tricky to do with the Web Forms view engine. Having a standalone script parser is a huge win for other applications as well – it makes it much easier to create script- or metadata-driven output generators for many types of applications, from code/screen generators, to simple form letters, to data-merging applications with user customizability. For me personally this is a very useful side effect, and who knows, maybe Microsoft will actually standardize their scripting engines (die, T4, die!) on this engine. Razor also better fits the “view-based” approach, where the view is supposed to be mostly a visual representation that doesn’t hold much, if any, code. While you can still use code, the code you do write has to be self-contained. Overall I wouldn’t be surprised if Razor becomes the new standard view engine for MVC in the future – and in fact there have been announcements recently that Razor will become the default view engine in ASP.NET MVC 3.0. Razor can also be used in existing Web Forms and MVC applications, although that isn’t working currently unless you manually configure the script mappings and add the appropriate assemblies. It’s possible to do, but it’s probably better to wait until Microsoft releases official support for Razor scripts in Visual Studio. Once that happens, you can simply drop .cshtml and .vbhtml pages into an existing ASP.NET project and they will work side by side with classic ASP.NET pages.

    WebMatrix Development Environment
    To tie all three of these technologies together, Microsoft is shipping WebMatrix with an integrated development environment. An integrated gallery manager makes it easy to download and load existing projects, and then extend them with custom functionality. It seems to be a prominent goal to provide community-oriented content that can act as a starting point, be it via custom templates or complete standard applications. The IDE includes a project manager that works with a single project and provides an integrated IDE/editor for editing the .cshtml and .vbhtml pages. A Run button allows you to quickly run pages in the project manager in a variety of browsers. There’s no debugging support for code at this time. Note that Razor pages don’t require explicit compilation, so making a change, saving, and then refreshing your page in the browser is all that’s needed to see changes while testing an application locally. It’s essentially using the auto-compiling Web Project model that was introduced with .NET 2.0: all code is compiled at run time into dynamically created assemblies in the ASP.NET temp folder. WebMatrix also has PHP editing support with syntax highlighting. You can load various PHP-based applications from the WebMatrix Web Gallery directly into the IDE. Most of the Web Gallery applications are ready to install and run without further configuration, with wizards taking you through installation of tools, dependencies, and configuration of the database as needed.
    WebMatrix leverages the Web Platform Installer to pull the pieces down from websites, in a tight integration of tools that worked nicely for the four or five applications I tried this out on. Click a couple of check boxes, fill in a few simple configuration options, and you end up with a running application that’s ready to be customized. Nice! You can easily deploy completed applications via WebDeploy (to an IIS server) or FTP directly from within the development environment. The deploy tool can also handle automatically uploading and installing the database and all related assemblies required, making deployment a simple one-click install step.

    Simplified Database Access
    The IDE contains a database editor that can edit SQL Compact and SQL Server databases. There is also a Database helper class that facilitates database access by providing easy-to-use, high-level query execution and iteration methods:

        @{
            var db = Database.OpenFile("FirstApp.sdf");
            string sql = "select * from customers where Id > @0";
        }
        <ul>
        @foreach(var row in db.Query(sql, 1)){
            <li>@row.FirstName @row.LastName</li>
        }
        </ul>

    The Query function takes a SQL statement plus any number of positional (@0, @1, etc.) SQL parameters as simple values. The result is returned as a collection of rows, each of which is a row object with dynamic properties for each of the columns, giving easy (though untyped) access to each of the fields. Likewise, Execute and ExecuteNonQuery allow execution of more complex queries using similar parameter-passing schemes. Note that these queries use string-based SQL rather than LINQ or Entity Framework’s strongly typed LINQ queries. While this may seem like a step back, it’s also in line with the expectations of non-.NET script developers, who are quite used to writing and using SQL strings in code rather than OR/M frameworks. The only question is why something like this wasn’t included in .NET from the beginning, leaving developers to build custom implementations of these basic building blocks. The implementation looks a lot like a DataTable-style data access mechanism but, to be fair, this is a common approach in scripting languages. This type of syntax, which uses simple, static data object methods to perform simple data tasks with one line of code, is common in scripting languages and a good match for folks working in PHP, Python, etc. It seems Microsoft has taken great advantage of .NET 4.0’s dynamic typing to provide this sort of interface for row iteration, where each row has properties for each field. FWIW, all the examples demonstrate using local SQL Compact files - I was unable to get a SQL Server connection string to work with the Database class (the connection string wasn’t accepted). However, since the code in the page is still plain old .NET, you can easily use standard ADO.NET code (a minimal sketch follows below) or even LINQ or Entity Framework models created outside of WebMatrix in separate assemblies, as required.

    The Good, the Bad, the Obnoxious - It’s Still .NET
    The beauty (or curse, depending on how you look at it :)) of Razor and the compilation model is that, behind it all, it’s still .NET. Although the syntax may look foreign, it’s all .NET behind the scenes. You can easily access existing tools, helpers, and utilities simply by adding them to the project as references or to the bin folder. Razor automatically recognizes any assembly reference from assemblies in the bin folder.
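    Since the page code is plain .NET, here is the ADO.NET sketch promised above - the same customers query written against SQL Server with System.Data.SqlClient (the connection string and column types are assumptions for illustration):

        // Minimal sketch: the Database helper query above, rewritten in
        // plain ADO.NET. The connection string is a placeholder.
        using System.Data.SqlClient;

        string connStr = @"Data Source=.\SQLEXPRESS;Initial Catalog=FirstApp;Integrated Security=True";

        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(
            "select FirstName, LastName from customers where Id > @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", 1);
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Typed access instead of the helper's dynamic row properties.
                    string first = reader.GetString(0);
                    string last = reader.GetString(1);
                }
            }
        }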
    In the default configuration, Microsoft provides a set of helper functions in a Microsoft.WebPages assembly (check it out in the ASP.NET temp folder for your application), which includes a host of HTML Helpers. If you’ve used ASP.NET MVC before, a lot of the helpers should look familiar. Documentation at the moment is sketchy - there’s a very rough API reference you can check out here: http://www.asp.net/webmatrix/tutorials/asp-net-web-pages-api-reference

    Who Needs WebMatrix? Uhm… Good Question
    Clearly Microsoft is trying hard to create an environment with WebMatrix that is easy for newbie developers to use. The goal seems to be simplicity: a minimal development environment and an easy-to-use script engine/language that is easy to get started with. There’s also some focus on community features that can be used as starting points, such as Web Gallery applications and templates. The community features in particular are very nice, and something it would be nice to eventually see in Visual Studio as well. The question is whether this is too little, too late. Developers who have been clamoring for a simpler development environment on the .NET stack have mostly left for simpler platforms like PHP or Python, which cater to the down-and-dirty developer. Microsoft will be hard pressed to win those folks - and other hardcore PHP developers - back. Regardless of how much you dress up a script engine fronted by the .NET Framework, it’s still the .NET Framework and all the complexity that drives it. While .NET is a fine solution in its breadth and features once you get a basic handle on the core features, the bar of entry to being productive with the .NET Framework is still pretty high. The MVC-style helpers Microsoft provides are a good step in the right direction, but I suspect they’re not enough to shield new developers from having to delve much deeper into the Framework to get even basic applications built. Razor and its helpers are trying to make .NET more accessible, but the reality is that in order to do useful stuff that goes beyond the handful of simple helpers, you are still going to have to write some C# or VB or other .NET code. If the target is a hobby/amateur/non-programmer, the learning curve isn’t made any easier by WebMatrix; it’s just been shifted a bit further along in your development endeavor, to the point where you run out of the canned components supplied by Microsoft or the community. The database helpers are interesting, and I’ve actually heard a lot of discussion from various developers, who’ve been resisting .NET for a really long time, perking up at the prospect of easier data access in .NET than the ridiculous amount of code it takes to do even simple data access with raw ADO.NET. It seems sad that such a simple concept and implementation should trigger this sort of response (especially since it’s practically trivial to create helpers like these, or pick them up from countless available libraries), but there it is. It also shows that there are plenty of developers out there who are more interested in ‘getting stuff done’ easily than in necessarily following the latest and greatest practices, which are overkill for many development scenarios. Sometimes it seems that all of .NET is focused on the big, life-changing issues of development rather than the bread-and-butter scenarios that many developers care about to get their work accomplished.
    And that, in the end, may be WebMatrix’s main raison d'être: to bring some focus back at Microsoft on the fact that simpler and more high-level solutions are actually needed to appeal to non-high-end developers, as well as providing the necessary tools for the high-end developers who want to follow the latest and greatest trends. The current version of WebMatrix hits many sweet spots, but it also feels like it has a long way to go before it can really be a tool that either a beginning developer or an accomplished developer can feel comfortable with. Although there are some really good ideas in the environment (like the gallery for downloading apps and components), which would be a great addition for Visual Studio as well, the rest of the development environment just feels like crippleware, with required functionality missing - especially debugging and IntelliSense, but also general editor support. It’s not clear whether this is because the product is still in an early alpha release or whether it’s simply designed to be a really limited development environment. While simple can be good, nobody wants to feel left out when it comes to necessary tool support, and WebMatrix just has that left-out feeling to it. If anything, WebMatrix’s technology pieces (which are really independent of the WebMatrix product) are what is interesting to developers in general. The compact IIS implementation is a nice improvement for development scenarios, and SQL Compact 4.0 seems to address a lot of the concerns that people have had, and have complained about for some time, with previous SQL Compact implementations. By far the most interesting and useful technology, though, seems to be the Razor view engine, for its lightweight implementation and its decoupling from the ASP.NET/HTTP pipeline to provide a standalone scripting/view engine that is pluggable. The first winner of this is going to be ASP.NET MVC, which can now have a cleaner view model that isn’t made inconsistent by the baggage of non-implemented Web Forms features that don’t work in MVC. But I expect that Razor will eventually end up in many other applications as a scripting and code generation engine. Visual Studio integration for Razor is currently missing, but is promised for a later release. The ASP.NET MVC team has already mentioned that Razor will eventually become the default MVC view engine, which will guarantee continued growth and development of this tool along those lines. And the Razor engine and support tools actually inherit many of the features that MVC pioneered, so there’s some synergy flowing both ways between Razor and MVC. For an existing ASP.NET developer who’s already familiar with Visual Studio and ASP.NET development, the WebMatrix IDE doesn’t offer anything you need. The tools provided are minimal and provide nothing that you can’t get in Visual Studio today, except the minimal Razor syntax highlighting, so there’s little need to take a step back, and with Visual Studio integration coming later there’s little reason to look at WebMatrix for tooling. It’s good to see that Microsoft is giving some thought to the ease of use of .NET as a platform. For so many years, we’ve been piling on more and more new features without taking a step back to see how complicated the development/configuration/deployment process has become. Sometimes it’s good to take a step - or several steps - back, take another look, and realize just how far we’ve come.
    WebMatrix is one of those reminders, and one that will likely result in some positive changes on the platform as a whole. © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET, IIS7.

    Read the article

  • Back-sliding into Unmanaged Code

    - by Laila
    It is difficult to write about Microsoft's ambivalence to .NET without resorting to clichés about dog food.  In case you've been away a long time, you'll remember that Microsoft surprised everyone with the speed and energy with which it introduced and evangelised the .NET Framework for managed code. There was good reason for this. Once it became obvious that it had sleepwalked into third place as a provider of development languages, behind Borland and Sun, it reacted quickly, attracting the best talent in the industry to produce a Windows version of the Java runtime, with bounds-checking, automatic garbage collection, structured exception handling and common data types. To develop applications for this managed runtime, it produced several excellent languages, and more are being provided. The only thing Microsoft ever got wrong was giving it a stupid name. The logical step for Microsoft would be to base the entire operating system on the .NET framework, and to re-engineer its own applications. In 2002, Bill Gates, then Microsoft Chairman and Chief Software Architect, said about their plans for .NET: "This is a long-term approach. These things don't happen overnight." Now, eight years later, we're still waiting for signs of the 'long-term approach'. Microsoft's vision of an entirely managed operating system has subsided since the Vista fiasco, but stays alive, dormant, as Midori, still being developed by Microsoft Research. This is an Internet-centric fork of the Singularity operating system, a research project started in 2003 to build a highly-dependable operating system in which the kernel, device drivers, and applications are all written in managed code. Midori is predicated on the prevalence of connected systems, with provisions for distributed concurrency where application components exist 'in the cloud', and supports a programming model that can tolerate cancellation, intermittent connectivity and latency. It features an entirely new security model that sandboxes applications for increased security. So has Microsoft converted its existing applications to the .NET framework? It seems not. What Windows applications can run on Mono? Very few, it seems. We all thought that .NET spelt the end of DLL Hell and the need for COM interop, but it looks as if Bill Gates' idea of 'not overnight' might stretch to a decade or more. The operating system has shown only minimal signs of migrating to .NET. Even where the use of .NET has come to dominate, as with server applications on IIS, IIS itself is still entirely developed in unmanaged code. This is an irritation to Microsoft's greatest supporters, who committed themselves fully to the .NET framework, only to find parts of the ambivalent Microsoft empire quietly backsliding into unmanaged code and the awful C++. It is a strategic mistake that the reinvigorated Apple didn't make with the Mac OS X architecture. Cheers, Laila

    Read the article

  • OTN Latinoamérica Tour 2012

    - by Dana Singleterry
    Better late than never. Sorry for the delay in getting this content up for all of you, and thanks again for your attendance. A number of excellent questions came out of the sessions I delivered, and herein I'm providing you with the content, in PDF format, for those sessions. I'm also providing pointers to Forms-to-ADF integration/migration, as well as some details around OAF as used in E-Business Suite and ADF. Here are the sessions delivered by location. Click on any of the links to download the session content in PDF format. Montevideo, Uruguay: Is Oracle ADF Simpler than Oracle Forms? Understanding the Fusion Development Platform. Building Web Data Dashboards Without Coding. Buenos Aires, Argentina: Is Oracle ADF Simpler than Oracle Forms? Developing Cross Device Mobile Applications. Sao Paulo, Brazil: Understanding the Fusion Development Platform. Is Oracle ADF Simpler than Oracle Forms? A brief note on Forms integration & migration: Does your organization have an Oracle Forms application that you'd like to migrate to ADF? Or perhaps you're an Oracle Forms developer and want to modernize your application development skills? If so, you've come to the right place! This section strives to answer common questions that arise as you move from Forms to ADF. Our Oracle Forms Statement of Direction points out that Oracle is committed to the long-term support of Oracle Forms and Reports. However, many customers feel they are outgrowing their Forms applications. Users are demanding more sophisticated and interactive user interfaces. Executives are requiring SOA-enabled applications that integrate with peripheral services. Development leads are encouraging a more modern approach to application development, including adherence to design patterns like MVC. So even as Oracle still supports Forms, the list of reasons to move off of it is becoming more compelling, and is only gaining further momentum from the fact that Oracle's own Fusion Applications are built using ADF. Developers and organizations looking to align with both the technology stack and the look-and-feel of Fusion Applications are choosing ADF, and thus reaping the benefits of years of best practices in enterprise application development that are baked into the ADF framework. So, if you decide to migrate off of Forms for any of these reasons, ADF is the way to go. Grant Ronald has published a video of our position on the subject, along with an ODTUG article explaining our direction. These materials explain that there are other migration tools/frameworks/paths, but the best choice is usually to follow Gartner's recommendation: if you are going to migrate off of Oracle Forms, ADF is the least risky and least costly migration path. Please visit the Oracle Forms page here. For details around OAF as used in E-Business Suite (EBS), and when to use ADF with EBS, you can review the following blogs from Shay Shmeltzer: To ADF or to OAF? or Can I use ADF with Oracle E-Business Suite?

    Read the article

  • What Problems Are Better Solved By SOAP Over REST?

    SOAP and REST have been battling for web service supremacy for years. In my personal opinion this debate should never have existed. Yes, both can be used to create an interactive web service, but each was developed independently of the other to solve two different yet similar problems. Based on my research and experience, I would have to say that REST should be the preferred web service methodology, and SOAP should only be used in specific situations. Note, I am not against SOAP; in fact I actually like to use SOAP when it is needed. Criteria for using SOAP: Does the service need a guaranteed level of reliability and security? Have the provider and consumer of the service agreed on a standardized data exchange format? Does the service need data context and state management? If you answer yes to any of these questions, then you may want to consider SOAP as the format for the web service. Another way to look at the relationship between REST and SOAP is through the medical field. For most things, a general doctor or your family health care provider can acceptably treat most conditions, from a common cold to a broken bone. A general doctor aligns more with REST, in my opinion, because REST fulfills most projects' service requirements; but what happens if you need a more advanced examination? You would go to a specialist. A specialist already has experience dealing with the specific issues you are experiencing, giving them specific context for how best to treat you going forward. SOAP acts more like the specialist, in that it understands the context of an issue and can treat it based on the state of the other patients it has already treated. An example of where I would use SOAP over REST in real life is a single sign-on application. In these cases I need to validate a username and password for authentication and authorization of a web page request. This service would need to maintain state while it authenticates a user and while it validates access to a web page on a subsequent request. It must process every request for access, not allowing caching, to ensure that every request is processed and only the appropriate users are allowed to view selected web pages. (A rough sketch of such a contract follows below.) References: Rozlog, M. (2010). REST and SOAP: When Should I Use Each (or Both)? Retrieved November 20, 2011, from InfoQ: http://www.infoq.com/articles/rest-soap-when-to-use-each
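    To make the single sign-on example a bit more concrete, here is a rough sketch of what such a session-ful SOAP contract could look like in WCF - all names and the contract shape are hypothetical, not taken from a real system:

        // Minimal sketch: a session-based SOAP contract for single sign-on.
        // SessionMode.Required provides the state management discussed above.
        using System.ServiceModel;

        [ServiceContract(SessionMode = SessionMode.Required)]
        public interface ISingleSignOnService
        {
            // Starts the session: the user must authenticate first.
            [OperationContract(IsInitiating = true)]
            bool Authenticate(string userName, string password);

            // Subsequent calls reuse the authenticated session state.
            [OperationContract(IsInitiating = false)]
            bool IsAuthorizedFor(string pageUrl);

            // Ends the session explicitly.
            [OperationContract(IsInitiating = false, IsTerminating = true)]
            void SignOut();
        }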

    Read the article

  • Izenda Reports 6.3 Top 10 Features

    - by gt0084e1
    Izenda 6.3 Top 10 New Features and Capabilities
    1. Izenda Maps Add-On: The Izenda Maps add-on allows rapid visualization of geographic or geo-spatial data. It is fully integrated with the rest of the Izenda report package and adds a Maps tab which allows users to add interactive maps to their reports. Contact your representative or [email protected] for limited-time discounts. Izenda Maps even has rich drill-down capabilities that allow you to dive deeper with a simple hover (this also requires dashboards).
    2. Streamlined Pie Charts with "Other" Slices: The advanced properties of the pie chart now allow you to combine the smaller slices into a single "Other" slice. This reduces the visual complexity without throwing off the scale of the chart. Compare the difference below.
    3. Combined Bar + Line Charts: The bar chart now allows dual visualization of multiple metrics simultaneously by adding a line for secondary data. Enabled via AdHocSettings.AllowLineOnBar = true;
    4. Stacked Bar Charts: The stacked bar chart lets you see a breakdown of a measure based on categorical data. It is enabled with the following code: AdHocSettings.AllowStackedBarChart = true;
    5. Self-Joining Data Sources: The self-join feature allows parent-child relationships to be accessed from the Data Sources tab. The same table can be used as a secondary child table within the Report Designer.
    6. Report Design From Dashboard View: Dashboards now sport both view and design icons to allow quick access to both.
    7. Field Arithmetic on Dates: Differences between dates can now be used as measures with the arithmetic feature.
    8. Simplified Multi-Tenancy: Integrating with multi-tenant systems is now easier than ever. The following APIs have been added to facilitate common scenarios: AdHocSettings.CurrentUserTenantId = value; AdHocSettings.SchedulerTenantID = value; AdHocSettings.SchedulerTenantField = "AccountID";
    9. Support For SQL 2008 R2 and SQL Azure: Izenda now supports the latest version of Microsoft's database as well as the SQL Azure service.
    10. Enhanced Performance and Compatibility for Stored Procedures: Izenda now supports more stored procedures than ever, and runs them faster too. (The API settings above are collected into a single configuration sketch below.)
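    Pulling together the API calls named in the list above, a consolidated configuration sketch might look like the following - the setting names come straight from this post, while the placement at application startup and the tenant values are assumptions:

        // Consolidated sketch of the AdHocSettings calls named above,
        // typically run during application startup.
        AdHocSettings.AllowLineOnBar = true;          // 3. combined bar + line charts
        AdHocSettings.AllowStackedBarChart = true;    // 4. stacked bar charts
        AdHocSettings.CurrentUserTenantId = value;    // 8. tenant key of the current user (placeholder value)
        AdHocSettings.SchedulerTenantID = value;      // 8. tenant used by the scheduler (placeholder value)
        AdHocSettings.SchedulerTenantField = "AccountID";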

    Read the article

  • Oracle releases new Java Embedded products

    - by Henrik Stahl
    With less than one week to go until JavaOne 2012, we've spiced things up a little by releasing not one but two net new embedded Java products. This is an important step towards realizing the vision of Java as the standard platform for the Internet of Things that I outlined in a recent blog post. The two new products are: Java ME Embedded 3.2. Based on the same code as the widely deployed Oracle Java Wireless Client for feature phones, this new product provides a Java ME implementation optimized for very small microcontroller-based devices and adds - among other things - a new Device Access API that enables interaction with peripherals common in edge devices, such as various types of sensors. In addition to the new Java ME Embedded platform, we have also released an update of the Java ME SDK which adds support for the development of small embedded devices. Java Embedded Suite 7.0. This is an integrated middleware stack for embedded devices, incorporating Java SE Embedded and versions of JavaDB, GlassFish and a Web Services stack optimized for remote operation and small footprint. A typical Internet of Things (or M2M) infrastructure contains three types of compute nodes: The edge device, which is typically a sensor or control point of some kind. These devices can be connected directly to a backend through a mobile network if they are installed in - for example - a remote vending machine; or they can be part of a local short-range network and be connected to the backend through a more powerful gateway device. The gateway is the second type of compute node and acts as an aggregator and control point for a local network. A good example of this could be a generalized home Internet access point, or home gateway. Gateways mostly run on normal wall power and are used for multiple applications, deployed by multiple service providers. Finally, the last type of compute node is the normal enterprise or cloud backend. Java ME Embedded and Java Embedded Suite are perfect base software stacks for the edge device and the gateway respectively, providing the Java promise of a platform-independent runtime and a complete set of libraries, and allowing a programmer to focus on the business logic rather than the plumbing. We are very thrilled with these new releases, which open up exciting opportunities for Java developers to extend services and enterprise applications in ways that will make organizations more efficient and touch our daily lives. To find out more, come to the JavaOne conference (for technical content) and to the Java Embedded @ JavaOne subconference (for business content). There will be plenty of cool demos showing complete end-to-end applications, provided by Oracle and our partners, as well as keynotes and numerous sessions where you can learn more about the technology and business opportunities.

    Read the article

  • Most Unprofessional Workplace

    - by TehGrumpyCoder
    I've worked lots of places in lots of roles: Delivery Truck Driver, Boilermaker, Antenna Rigger, Professional Musician, Electronic Technician, Electrical Engineer, and, for most of my career, Software Turkey. I want to say this large company is the most unprofessional place I've ever worked, but then I think about other jobs, such as TTI, which stiffed us all for 10 months' salary - or had us work 2-1/2 years at 66%, however you want to look at it - or maybe NeoPlanet, with a cast from a bad sitcom running the show. I could go on, but I digress (as usual). So maybe this place isn't the *most* unprofessional, but the personnel rank up there. I'm in a small room off a factory. There are 3 managerial offices and 36 common folk of various skill sets in a variety of single to quad cubicles. No matter where you sit, though, because of the layout and location, you've got a hard wall as one wall of your cubicle. Because of that hard wall, everything echoes. I get off the phone, and the guy in the next cubicle makes a comment in response to my phone conversation... I hate that it can be heard, and I hate that they do that! These people have no problem yelling from cube to cube to carry on running conversations, some of which are actually work-related. There's a lady two cubes away who talks so loud I can clearly hear her every phone conversation... all work-related, but still... Then the one in the next cubicle must have been raised on a farm, because there's only one volume setting: LOUD... "HEY MARGE, CAN I GET IN FOR A QUICK APPOINTMENT AFTER WORK TONIGHT?" ... sigh. Also, that cube is the 'party cube', so that's where all the candy, cake, donuts, and leftovers sit. Anything MzLoud brings in has to have a verbal recipe associated with it, at least 10 times during the day, and of course at volume. I've had running conversations carried on over the top of my cube by people in the next one on each side. The weird thing is... the boss sits, with an open door, closer to this whole fiasco than me. So I wear a pair of Bose noise-cancelling headphones and crank up Kenny Burrell, Herb Ellis, Wes Montgomery, or Jimmy Smith to the point where I can't hear the racket... what the heck, I already have a hearing loss from playing guitar.

    Read the article

  • What is a reasonable workflow for designing webapps?

    - by Evan Plaice
    It has been a while since I have done any substantial web development, and I'd like to take advantage of the latest practices, but I'm struggling to visualize the workflow to incorporate everything. Here's what I'm looking to use: CakePHP framework, jsmin (JavaScript Minifier), SASS (Syntactically Awesome StyleSheets), Git. CakePHP: Pretty self-explanatory; make modifications and update the source. jsmin: When you modify a script, do you manually run jsmin to output the new minified code, or would it be better to run a pre-commit hook that automatically generates jsmin outputs of the JavaScript files that have changed? Assume that I have no knowledge of implementing commit hooks. SASS: I really like what SASS has to offer, but I'm also aware that SASS code isn't supported by browsers by default, so at some point the SASS code needs to be transformed to normal CSS. At what point in the workflow is this done? Git: I'm terrified to admit it, but the last time I did any substantial web development I didn't use SCM source control (i.e., I did use source control, but it consisted of a very detailed change log with backups). I have since had plenty of experience using Git (as well as Mercurial and SVN) for desktop development, but I'm wondering how best to implement it for web development. Is it common practice to implement a remote repository on the web host so I can push changes directly to the production server, or is there some cross-platform (Windows/Linux) tool that makes it easy to upload only changed files to the production server? Are there web hosting companies that make it easy to implement a remote repository? Do I need SSH access, etc.? I already know how to accomplish this on my own testing server with a remote repository and a separate remote tracking branch, but I've never done it on a remote production web hosting server before, so I'm not aware of the options yet. Extra: I was considering implementing a JavaScript framework where the separate JavaScript files used on a page are compiled into a single file for each page on the production server, to limit the number of file downloads needed per page. Does something like this already exist? Is there already an open source project out in the wild that implements something similar that I could use and contribute to? Considering how paranoid web devs are about performance (and the fact that the number of file requests on a website is a big performance hit), I'm guessing that some wizard hacker on the net has already addressed this issue.

    Read the article

  • Oracle ERP Cloud Solution Defines Revenue Recognition Software Market

    - by Steve Dalton
    Revenue is a fundamental yardstick of a company's performance, and one of the most important metrics for investors in the capital markets. So it’s no surprise that the accounting standards boards have devoted significant resources to this topic, with a key goal of ensuring that companies use a consistent method of recognizing revenue. Due to the myriad of revenue-generating transactions, and the divergent ways organizations recognize revenue today, the IASB and FASB have been working for 12 years on a common set of accounting standards that apply to all industries in virtually all countries. Through their joint efforts, on May 28, 2014 the FASB and IASB released the converged accounting standard IFRS 15 / ASU 2014-9 (Revenue from Contracts with Customers). This standard applies to revenue in all public companies, but heavily impacts organizations in any industry that might have complex sales contracts with multiple distinct deliverables (obligations). For example, an auto dealer who bundles free service with the sale of a car can only recognize the service revenue once the owner of the car brings it in for work. Similarly, high-tech companies that bundle software licenses, consulting, and support services on a sales contract will recognize bundled service revenue once the services are delivered. Now all companies need to review their revenue for hidden bundling and implicit obligations. Numerous time-consuming and judgmental activities must be performed to properly recognize revenue for complex sales contracts. To illustrate: after the contract is identified, organizations must identify and examine the distinct deliverables, determine the estimated selling price (ESP) for each deliverable, then allocate the total contract price to each deliverable based on the ESPs (a worked example of this allocation follows below). In terms of accounting, organizations must determine whether the goods or services have been delivered or performed to the customer’s satisfaction, then either book revenue in the current period or record a liability for the obligation if revenue will be recognized in a future accounting period. Oracle Revenue Management Cloud was architected and developed so organizations can simplify and streamline revenue recognition. Among other capabilities, the solution uses business rules to efficiently identify and examine contracts, intelligently calculate and allocate deliverable prices based on prescribed inputs, and accurately recognize revenue for each deliverable based on customer satisfaction. "Oracle works very closely with our customers, the Big 4 accounting firms, and the accounting standard boards to deliver an adaptive, comprehensive, new generation revenue recognition solution,” said Rondy Ng, Senior Vice President, Applications Development. “With the recently announced IFRS 15 / ASU 2014-9, Oracle is ready to support customer adoption of the new standard with our Revenue Management Cloud,” said Rondy. Oracle Revenue Management Cloud, an integral part of Oracle Financials Cloud, helps organizations comply with accounting standards, provides them with confidence that reported revenue is materially accurate, and simplifies the accounting process for revenue recognition. Stay tuned to this blog for regular updates on Oracle Revenue Management Cloud. We also invite you to review our new oracle.com ERP pages @ oracle.com/erp. We will be updating these pages very soon with more information about Oracle Revenue Management Cloud.
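    To make the allocation step concrete, here is a minimal worked sketch - all names and figures are invented for illustration. A $1,000 contract bundles a license, consulting, and support with ESPs of $700, $250, and $300; each deliverable receives its pro-rata share of the total contract price.

        // Minimal sketch of relative-ESP allocation; all figures hypothetical.
        using System;
        using System.Linq;

        class AllocationSketch
        {
            static void Main()
            {
                var deliverables = new[]
                {
                    new { Name = "License",    Esp = 700m },
                    new { Name = "Consulting", Esp = 250m },
                    new { Name = "Support",    Esp = 300m },
                };

                decimal contractPrice = 1000m;
                decimal espTotal = deliverables.Sum(d => d.Esp); // 1250

                foreach (var d in deliverables)
                {
                    // Pro-rata share of the total contract price.
                    decimal allocated = contractPrice * d.Esp / espTotal;
                    Console.WriteLine("{0}: {1}", d.Name, allocated);
                    // License: 560, Consulting: 200, Support: 240
                }
            }
        }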

    Read the article

  • Source of (programmer) inefficiency

    - by Daniel
    I am interested in gaining a better insight into the possible reasons for personal inefficiency as programmers (and only in programming) due, simply, to our own errors (because we are human - well, almost all of us). I am not interested in how productive we are, or in how many adjustments the customer asks for once the work is done, but in where and how each of us spends part of his or her time on tasks that are unproductive and where there is no one to blame but ourselves. Excluding ego-feeding and/or self-gratification, what I am trying to get (for all of us) is: what are the common issues eating our time; insight into the reasons for those issues; and simple ways for us, personally (not delegating actions to others or to our organizations), to correct our own problems. Please, do not think in academic terms, but aim at the opportunity to compare our daily experiences and understand what our personal deficiencies are and how we try to fix them. If you are interested in responding to this post, please: add to the list if you see something important (or obvious) missing; honestly highlight or name your first issue, telling us how you try to address and solve it by acting on yourself, and yourself only, in a sort of "continuous quality improvement". My criterion for accepting the answer is: the best solution (feasibility and utility) for fixing one (or more) of the problems on the list. Of course, selecting an error is not a vote on our skills: maybe we are hyper-professional programmers who lose only ten minutes a year, or maybe we are terribly inefficient, losing a couple of days a week; the reasons for the inefficiency could be exactly the same - just at a different scale. A possible list:
    Plain errors in names (variables, functions).
    Inability to see the obvious in your code.
    Misreading.
    Lack of concentration.
    Trying to use a technology you have not mastered.
    Errors with data types.
    Time required to understand your previous code or your documentation.
    Trying to do something more than requested because you enjoy it.
    Using solutions more complicated than required because you enjoy it.
    Plain logical errors.
    Errors due to your fault in communications.
    Distraction.
    My first personal issue: "Trying to use a technology you have not mastered." I have to use several technologies daily, and I often need to spend significant time correcting code because my assumptions were plainly wrong. The reason: production needs put us under high pressure and make it difficult to find the time to learn. I try to address this by reading technical books - as many as I can - even if this actually consumes a lot of time.

    Read the article

  • Please help me give this principle a name

    - by Brent Arias
    As a designer, I like providing interfaces that cater to a power/simplicity balance. For example, I think the LINQ designers followed that principle because they offered both dot-notation and query-notation. The first is more powerful, but the second is easier to read and follow. If you disagree with my assessment of LINQ, please try to see my point anyway; LINQ was just an example, my post is not about LINQ. I call this principle "dial-able power", but I'd like to know what other people call it. Certainly some will say "KISS" is the common term, but I see KISS as a superset, or a "consumerism" practice. Using LINQ as my example again: in my view, a team of programmers who always try to use query-notation over dot-notation are practicing KISS. Thus the LINQ designers practiced "dial-able power", whereas the LINQ consumers practice KISS. The two make beautiful music together. I'll give another example. Imagine a C# logging tool that has two signatures allowing two uses:

    void Write(string message);
    void Write(Func<string> messageCallback);

    The purpose of the two signatures is to fulfill these needs:

    // Every-day "simple" usage, nothing special.
    myLogger.Write("Something Happened" + error.ToString());

    // This is performance critical; do not call ToString() if logging is disabled.
    myLogger.Write(() => "Something Happened" + error.ToString());

    Having these overloads represents "dial-able power", because the consumer has the choice of a simple interface or a powerful interface. A KISS-loving consumer will use the simpler signature most of the time, and will allow the "busy"-looking signature when the power is needed. This also helps self-documentation, because usage of the powerful signature tells the reader that the code is performance critical. If the logger had only the powerful signature, there would be no "dial-able power". So this comes full circle. I'm happy to keep my own "dial-able power" coinage if none yet exists, but I can't help thinking I'm missing an obvious designation for this practice. p.s. Another example that is related, but is not the same as "dial-able power", is Scott Meyers' principle "make interfaces easy to use correctly, and hard to use incorrectly."
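    For reference, a minimal sketch of such a logger (illustrative, not a real library) shows why the callback overload matters:

    using System;

    class Logger
    {
        public bool Enabled { get; set; }

        // Simple overload: the caller builds the message even if logging is off.
        public void Write(string message)
        {
            if (Enabled) Console.WriteLine(message);
        }

        // Powerful overload: the message is built only when actually needed.
        public void Write(Func<string> messageCallback)
        {
            if (Enabled) Console.WriteLine(messageCallback());
        }
    }

    With Enabled set to false, the Func<string> overload never invokes the callback, so the string concatenation and ToString() calls never run; that deferred cost is exactly the "power" the busier signature dials up.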

    Read the article

  • Economic modelling - Resources for valuing goods

    - by Rushyo
    tl;dr: What economics/computer science books would you suggest for learning about the economic valuation of goods and simulations thereof? I'm looking to create an economic model for a game based on goods created procedurally. Every natural resource and produced good would be procedurally generated, with certain goods being assigned certain uses. Fakesium might be used for the production of Weapon A and produced from Fakesium factories which use Dilithium and Widgets as reagents, where Widgets are in turn the product of Foo and Bar. The problem is not creating the resources and their various production utilities - but getting the game's AI empires and merchants to correctly value the goods according to their scarcity, utility and production costs. I need to create a simulation of goods which allows the various game factions to assign a common value denominator (credits) to each resource, depending on how much it is worth to that empire. I see the simulation being something like: "I have a high requirement for Weapon A. Since I don't have much Fakesium, which is needed for Weapon A - I must have a high demand for Fakesium. If I can acquire Fakesium, devalue it. If not, increase its value - and also increase demand for Dilithium and Widgets too." This is very naive - because it may be much, much cheaper for the empire to simply purchase Dilithium and Widgets directly rather than purchasing Fakesium, for example. Another wrinkle is that two resources might allow the creation of Weapon A (Fakesium and Lieron), so we'd need to consider that too. I've been scratching my head over the problem and it keeps growing. By the time the player joins the world, I'd expect enough iterations of this process to have occurred that prices would have largely normalised - and it would then only trigger rarely, to compensate for major changes (e.g. if the player blows up the world's only Foo mine!). Could anyone suggest resources (books, largely) which outline this style of modelling, preferably in the context of simulations? Since this problem would never occur outside fantasy worlds, I figured this is probably the most likely place to find people who have encountered similar problems, and I'm sure there are people who know of good places for games developers to start looking at less specific economic theory too. Additionally, does anyone know of any developers with blogs whose games or research applications perform similar modelling?
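    One classic starting point for the price-settling loop described above is tâtonnement: nudge each good's price in proportion to its excess demand every tick. A toy C# sketch (all names and constants are illustrative, and the demand curve is deliberately simplistic):

    using System;
    using System.Collections.Generic;

    class Good
    {
        public string Name;
        public double Price;
        public double Supply;      // units produced per tick (held fixed here)
        public double BaseDemand;  // demand if the price were 1 credit
        public double Demand => BaseDemand / Price; // toy price-sensitive demand
    }

    class Market
    {
        // Excess demand raises a price, excess supply lowers it; rate damps the step.
        static void AdjustPrices(List<Good> goods, double rate)
        {
            foreach (var g in goods)
            {
                double excess = (g.Demand - g.Supply) / g.Supply;
                g.Price = Math.Max(g.Price * (1.0 + rate * excess), 0.01); // floor above zero
            }
        }

        static void Main()
        {
            var goods = new List<Good>
            {
                new Good { Name = "Fakesium", Price = 10, Supply = 50,  BaseDemand = 1200 },
                new Good { Name = "Widgets",  Price = 2,  Supply = 200, BaseDemand = 360 }
            };
            for (int tick = 0; tick < 100; tick++)
                AdjustPrices(goods, rate: 0.1);
            foreach (var g in goods)
                Console.WriteLine($"{g.Name}: {g.Price:F2} credits"); // ~24.00 and ~1.80
        }
    }

    Supply and base demand are held constant here for brevity; in the game they would be recomputed each tick from the production chains, which is what lets substitution effects (buying Dilithium and Widgets instead of Fakesium) emerge on their own.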

    Read the article

  • JWT Token Security with Fusion Sales Cloud

    - by asantaga
    When integrating Sales Cloud with a 3rd party application, you often need to pass the user's identity to the 3rd party application so that: the 3rd party application knows who the user is; and the 3rd party application can make WebService callbacks to Sales Cloud as that user. Until recently, without using SAML this wasn't easily possible, and one workaround was to pass the username, potentially even the password, from Sales Cloud to the 3rd party application using URL parameters. With Oracle Fusion R8 we now have a proper solution, called "JWT Token support". It is based on the industry-standard JSON Web Token (JWT). JWT works by letting the user generate a token (valid for a short period of time) for a specific application. This token is then passed to the 3rd party application as a GET parameter. The 3rd party application can then call into Sales Cloud and use this token for all WebService calls; the calls will be executed as the user who generated the token in the first place. Alternatively, it can call a special HR WebService (UserService - findSelfUserDetails()) with the token, and Fusion will respond with the user's details. Some more details. The following walks through the scenario where you want to embed a 3rd party application within a WebContent frame (iFrame) in the opportunity screen. 1. Define your application using the topology manager in Setup and Maintenance. See the documentation on topology manager. 2. From within the groovy script which defines the iFrame you wish to embed, write some code which looks like this:

    def thirdpartyapplicationurl = oracle.topologyManager.client.deployedInfo.DeployedInfoProvider.getEndPoint("My3rdPartyApplication")
    def crmkey = (new oracle.apps.fnd.applcore.common.SecuredTokenBean().getTrustToken())
    def url = thirdpartyapplicationurl + "?param1=" + OptyId + "&jwt=" + crmkey
    return (url)

    This snippet generates a URL which contains: the hostname/endpoint of the 3rd party application, and two parameters - the opportunity id stored in parameter "param1", and the JWT token stored in parameter "jwt". 3. From your 3rd party application you now have two options: execute a WebService call by first setting the header parameter "Authentication" to the JWT token, in which case the call will be executed against Fusion Applications "as" the user who generated the token; or, to find out "who you are", set the same header parameter and execute the special WebService call findSelfUserDetails() in the UserDetailsService. For more information: Oracle Sales Cloud Documentation, specifically the chapter on JWT Token; OTN samples, specifically the Rich UI With JWT Token sample; and the Oracle Fusion Applications general documentation.
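    On the 3rd party side, the callback amounts to echoing the token back in a header. A minimal C# sketch (hedged: the endpoint URL is a placeholder and the header name simply follows the description above - verify both against the Sales Cloud documentation):

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class SalesCloudCallback
    {
        static async Task Main()
        {
            // Values received as GET parameters on the inbound iFrame URL.
            string optyId = "300000123456789"; // param1 (hypothetical opportunity id)
            string jwt = "eyJhbGciOi...";      // jwt (truncated for the example)

            using (var client = new HttpClient())
            {
                // Pass the token in the "Authentication" header, as described above.
                client.DefaultRequestHeaders.TryAddWithoutValidation("Authentication", jwt);

                // Hypothetical service URL; substitute your pod's real endpoint.
                var response = await client.GetAsync(
                    "https://mypod.crm.example.com/userDetailsService?optyId=" + optyId);
                Console.WriteLine(response.StatusCode);
            }
        }
    }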

    Read the article

  • links for 2011-01-06

    - by Bob Rhubart
    Coming to your town: Oracle Enterprise Cloud Summit
    During these full-day events, cloud experts will share real-world best practices, reference architectures, detailed customer case studies, and more. Events scheduled in cities around the world. (tags: oracle otn cloud event)

    Webcast: Security and Compliance for Private Cloud Consolidation
    Roxana Bradescu, Senior Director for Oracle Database Security Products, discusses Oracle Database Security Solutions to securely consolidate data and meet compliance requirements within private cloud computing environments. Thursday, January 13, 2011. 10am PST | 1pm EST (tags: oracle cloud security)

    Answering Questions about Mobile Devices | The AppsLab
    "How do the numbers of Android and iOS users compare? How often are people switching? Where are all these BlackBerry and Nokia users? Do they plan to jump to Android or iOS? What about webOS? Is it relevant?" Some answers in this AppsLab survey. (tags: oracle otn enterprise2.0 mobilecomputing iphone blackberry android)

    Webcast: Achieve 24/7 Cloud Availability Without Expensive Redundancy
    Ashish Ray and Matthew Baier discuss Oracle's Maximum Availability Architecture and Oracle Database 11g. (tags: oracle cloud highavailability webcast)

    Converting a PV vm back into an HVM vm (Wim Coekaerts Blog)
    "I wanted to convert one of my VMs that was based on a paravirt kernel into a vm that just boots as a regular hardware virt VM with a standard x86-64 kernel... It took me a little while to figure out the fastest way, so now that I have it pretty much down I wanted to share the steps." - Wim Coekaerts (tags: oracle otn virtualization oraclevm)

    @OTN_Garage: Resources for VirtualBox 4.0
    Rick "@OTN_Garage" Ramsey shares links to several resources for those with a VirtualBox jones. (tags: oracle otn virtualization virtualbox)

    'Federal Service Bus' Helps Belgian Government Speak a Common Language - SOA in Action Blog
    "The first SOA-enabled application was developed in less than two months and was fully operational in approximately 10 weeks. In addition, new FSB modules are reusable for other Belgian e-government applications, saving both time and taxpayer dollars." - Joe McKendrick (tags: soa oracle)

    Show Notes: Architects in the Cloud (ArchBeat Podcast)
    The complete 4-part interview with Stephen G. Bennett and Archie Reed, the authors of "Silver Clouds, Dark Linings: A Concise Guide to Cloud Computing," is now available. (tags: oracle otn cloud podcast archbeat)

    Read the article

  • World Record Oracle E-Business Consolidated Workload on SPARC T4-2

    - by Brian
    Oracle set a world record for the Oracle E-Business Suite Standard Medium multiple-online module benchmark using Oracle's SPARC T4-2 and SPARC T4-4 servers, which ran the application and database. Oracle's SPARC T4 servers demonstrate performance leadership and world-record results on the Oracle E-Business Suite Applications R12 OLTP benchmark by publishing the first result using multiple concurrent online application modules with Oracle Database 11g Release 2 running on Solaris. This result shows that a multi-tier configuration of SPARC T4 servers running the Oracle E-Business Suite R12.1.2 application and Oracle Database 11g Release 2 is capable of supporting 4,100 online users with outstanding response times, executing a mix of complex transactions consolidating 4 Oracle E-Business modules (iProcurement, Order Management, Customer Service and HR Self-Service). The SPARC T4-2 server in the application tier was about 65% utilized and the SPARC T4-4 server in the database tier about 30%, providing significant headroom for additional Oracle E-Business Suite R12.1.2 processing modules, more online users, and future growth. Oracle E-Business Suite Applications were run in Oracle Solaris Containers on SPARC T4 servers, providing a consolidation platform for multiple E-Business instances.

    Performance Landscape
    Multiple Online Modules (Self-Service, Order Management, iProcurement, Customer Service), Medium Configuration

    System      Users  Average Response Time  90th Percentile Response Time
    SPARC T4-2  4,100  2.08 sec               2.52 sec

    Configuration Summary
    Application Tier Configuration:
    1 x SPARC T4-2 server
    2 x SPARC T4 processors, 2.85 GHz
    256 GB memory
    3 x 300 GB internal disks
    Oracle Solaris 10
    Oracle E-Business Suite 12.1.2

    Database Tier Configuration:
    1 x SPARC T4-4 server
    4 x SPARC T4 processors, 3.0 GHz
    256 GB memory
    2 x 300 GB internal disks
    Oracle Solaris 10
    Oracle Solaris Containers
    Oracle Database 11g Release 2

    Storage Configuration:
    1 x Sun Storage F5100 Flash Array (80 x 24 GB flash modules)

    Benchmark Description
    The Oracle R12 E-Business Suite Standard Benchmark combines online transaction execution by simulated users with multiple online concurrent modules to model a typical scenario for a global enterprise. The online component exercises the common UI flows which are most frequently used by a majority of our customers. This benchmark utilized four concurrent flows of OLTP transactions, for Order to Cash, iProcurement, Customer Service and HR Self-Service, and measured the response times. The selected flows model simultaneous business activities inclusive of managing customers, services, products and employees.
    See Also
    Oracle R12 E-Business Suite Standard Benchmark Results
    Oracle R12 E-Business Suite Standard Benchmark Overview
    Oracle R12 E-Business Benchmark Description
    E-Business Suite Applications R12 (R12.1.2) Online Benchmark - Using Oracle Database 11g on Oracle's SPARC T4-2 and Oracle's SPARC T4-4 Servers - oracle.com
    SPARC T4-2 Server - oracle.com, OTN
    SPARC T4-4 Server - oracle.com, OTN
    Oracle E-Business Suite - oracle.com, OTN
    Oracle Solaris - oracle.com, OTN
    Oracle Database 11g Release 2 Enterprise Edition - oracle.com, OTN

    Disclosure Statement
    Oracle E-Business Suite R12 medium multiple-online module benchmark: SPARC T4-2, SPARC T4, 2.85 GHz, 2 chips, 16 cores, 128 threads, 256 GB memory; SPARC T4-4, SPARC T4, 3.0 GHz, 4 chips, 32 cores, 256 threads, 256 GB memory; average response time 2.08 sec, 90th percentile response time 2.52 sec; Oracle Solaris 10, Oracle Solaris Containers, Oracle E-Business Suite 12.1.2, Oracle Database 11g Release 2. Results as of 9/30/2012.

    Read the article

  • Unable to add users to Microsoft Dynamics CRM 4.0 after database restore

    - by Wes Weeks
    Working with a client in our multi-tenant CRM environment who was doing a database migration into CRM: as part of the process, a backup of their Organization_MSCRM database was taken just prior to starting the migration, in case it needed to be restored and run a second time. In this case it did, so I restored the database and let the client know he should be good to go. A few hours later I received a call that they were unable to add some new users: the users would appear as available when using the add-multiple-users wizard, but anyone added would not actually be added to CRM. It was also discussed that these users had been added to CRM initially AFTER the database backup had been taken. I turned on tracing and tried to add the users through both the single user form and the multiple user interface, and was unable to do so. The error message in the logs wasn't much help:

    Unexpected error adding user [email protected]: Microsoft.Crm.CrmException: INVALID_WRPC_TOKEN: Validate WRPC Token: WRPCTokenState=Invalid, TOKEN_EXPIRY=4320, IGNORE_TOKEN=False

    Searching on Google or Bing didn't offer any assistance. Apparently it is not a very common problem, or no one has been able to resolve it. I did some searching in the MSCRM_CONFIG database and found that there are several user tables there; after getting my head around the structure, I found entries for users that were not part of the restored DB. It seems that new users are added to both Organization_MSCRM and MSCRM_CONFIG, and after the restore these were out of sync. I needed to remove the extra entries in order to address this. Restoring the MSCRM_CONFIG database was not an option, as other clients could have been adding users at this point and a restore would risk breaking their instances of CRM. Long story short, I was finally able to put together a script to remove the bad entries, and when I tried to add the users again, I was successful. In case someone else out there finds themselves in a similar situation, here is the script I used to delete the bad entries:

    DECLARE @UsersToDelete TABLE
    (
      UserId uniqueidentifier
    )

    Insert Into @UsersToDelete(UserId)
    Select UserId from [MSCRM_CONFIG].[dbo].[SystemUserOrganizations]
    Where CrmUserId Not In (Select SystemUserId from Organization_MSCRM.dbo.SystemUserBase)
    And OrganizationId = '00000000-643F-E011-0000-0050568572A1' -- Id from the Organization table for this instance

    Delete From [MSCRM_CONFIG].[dbo].[SystemUserAuthentication]
    Where UserId in (Select UserId From @UsersToDelete)

    Delete From [MSCRM_CONFIG].[dbo].[SystemUserOrganizations]
    Where UserId in (Select UserId From @UsersToDelete)

    Delete From [MSCRM_CONFIG].[dbo].[SystemUser]
    Where Id in (Select UserId From @UsersToDelete)

    Read the article

  • How the Oracle Database Optimizer Chooses an Execution Plan: Query Transformation and Access Path Analysis

    - by Yusuke.Yamamoto
    For an RDBMS, the optimizer determines how every SQL statement is actually executed, so understanding how it works is central to tuning Oracle Database. Broadly speaking, the Oracle Database optimizer processes a SQL statement in two phases: query transformation and access path analysis.

    1. Query Transformation
    Query transformation rewrites the submitted SQL into a semantically equivalent statement that can be executed more efficiently. It includes Predicate Transformation as well as Common Sub-expression Elimination (CSE), Order-By Elimination (OBYE), Outer Join Elimination (OJE), Simple View Merging (SVM), Predicate Move Around (PM), Complex View Merging (CVM), Sub-query Unnesting (SU), Join Predicate Push Down (JPPD), OR Expansion and Star Transformation (ST). A representative example of Predicate Transformation is Transitive Predicate Generation. Consider the following SQL, which returns the employees in department 10:

    select e.ename, d.loc from emp e, dept d where e.deptno=d.deptno and e.deptno=10;

    As written, the predicate deptno=10 filters only emp; nothing restricts dept before the join. If 10 emp rows match and dept has 20 rows, the join must consider 10 x 20 = 200 row combinations (depending on the join method and other factors). With Transitive Predicate Generation, the optimizer derives d.deptno=10 from e.deptno=d.deptno and e.deptno=10, internally rewriting the query as:

    select e.ename, d.loc from emp e, dept d where e.deptno=d.deptno and e.deptno=10 and d.deptno=10;

    Because dept.deptno is unique, dept is now reduced to a single row before the join, so only 10 x 1 = 10 combinations remain: 1/20 of the work. Note that once dept is known to be a 1-row table, the optimizer can also choose dept as the driving (outer) table and emp as the probe (inner) table of a nested loop join, in which case only 1 x 10 = 10 combinations are examined. In this way, query transformation can dramatically reduce the work a query performs before any physical decisions are made.

    2. Access Path Analysis
    Access path analysis takes the SQL produced by query transformation and chooses the access path for each table (full table scan, ROWID access, index scan, and so on), the join method (nested loop join, hash join, sort/merge join) and the join order. Within Oracle Database, the component that performs query transformation is called the Logical Optimizer, and the component that performs access path analysis is called the Physical Optimizer.

    - Oracle Sustaining Engineering

    Read the article

  • Impulsioned jumping

    - by Mutoh
    There's one thing that has been puzzling me, and that is how to implement a 'faux-impulsed' jump in a platformer. If you don't know what I'm talking about, think of the jumps of Mario, Kirby, and Quote from Cave Story. What do they have in common? The height of your jump is determined by how long you keep the jump button pressed. These characters' 'impulses' are built not before their jump, as in actual physics, but while in mid-air - that is, you can lift your finger midway to the max height and the rise will stop, even if with some deceleration between it and the full stop; this is why you can simply tap for a hop and hold for a long jump. Given that, I am mesmerized by how they keep their trajectories as arcs. My current implementation works as follows: While the jump button is pressed, gravity is turned off and the avatar's Y coordinate is decremented by the constant value of the gravity. For example, if things fall at Z units per tick, it will rise Z units per tick. Once the button is released or the limit is reached, the avatar decelerates by an amount that would make it cover X units until its speed reaches 0; once it does, it accelerates until its speed matches gravity - sticking to the example, I could say it accelerates from 0 to Z units/tick while still covering X units. This implementation, however, makes jumps too diagonal, and unless the avatar's speed is faster than the gravity - which would make it way too fast in my current project (it moves at about 4 pixels per tick and gravity is 10 pixels per tick, at a framerate of 40 FPS) - it also makes them more vertical than horizontal. Those familiar with platformers will have noticed that a character's arced jump almost always lets it jump further even when it isn't as fast as the game's gravity, and that when it doesn't, if not tuned right, it proves very counter-intuitive. I know this because I can attest that my implementation is very annoying. Has anyone ever attempted similar mechanics, and maybe even succeeded? I'd like to know what's behind this kind of platformer jumping. If you haven't had any experience with this beforehand and want to give it a go, then please don't try to correct or enhance my implementation unless I was on the right track - try to build your solution from scratch. I don't care if you use gravity, physics or whatnot; as long as it shows how these pseudo-impulses work, it does the job. Also, I'd like the presentation to avoid language-specific code - say, a C++ example, or Delphi... As much as I'm using the XNA framework for my project and wouldn't mind C# stuff, I don't have much patience for reading others' code, and I'm certain game developers working in other languages would be interested in what we achieve here, so don't mind sticking to pseudo-code. Thank you beforehand.
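    For reference, the approach most platformers take is velocity-based rather than position-based: give the avatar an initial upward velocity the moment the button is pressed, apply gravity every tick without exception, and cut the remaining upward velocity when the button is released. A minimal sketch (illustrative, not from any particular engine; the constants are made up):

    using System;

    class Avatar
    {
        const float Gravity = 0.5f;    // added to vertical speed every tick
        const float JumpSpeed = 10f;   // initial upward speed on the press
        const float ReleaseCut = 0.4f; // fraction of upward speed kept per tick after release

        float y;          // vertical position; ground is y = 0, up is negative
        float velocityY;
        bool onGround = true;

        // Call once per tick with the current state of the jump button.
        public void Update(bool jumpHeld)
        {
            if (jumpHeld && onGround)
            {
                velocityY = -JumpSpeed; // the whole "impulse" happens here
                onGround = false;
            }
            else if (!jumpHeld && velocityY < 0)
            {
                velocityY *= ReleaseCut; // releasing early shortens the jump
            }

            velocityY += Gravity; // gravity always applies; this is what makes the arc
            y += velocityY;

            if (y >= 0) { y = 0; velocityY = 0; onGround = true; }
        }
    }

    Because horizontal speed is untouched and gravity is constant, the trajectory is a parabola no matter when the button is released: a tap gives a hop, a hold gives the full jump, and the jump stays arced even when the avatar's horizontal speed is far below the gravity constant.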

    Read the article

  • xsltproc killed, out of memory

    - by David Parks
    I'm trying to split up a 13GB XML file into small ~50MB XML files with the XSLT style sheet below. But the process is killed after I watch xsltproc take up over 1.7GB of memory (that's the total on the system). Is there any way to deal with huge XML files in xsltproc? Can I change my style sheet? Or should I use a different processor? Or am I just S.O.L.?

    <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0"
        xmlns:exsl="http://exslt.org/common"
        extension-element-prefixes="exsl"
        xmlns:fn="http://www.w3.org/2005/xpath-functions">
      <xsl:output method="xml" indent="yes"/>
      <xsl:strip-space elements="*"/>
      <xsl:param name="block-size" select="75000"/>

      <xsl:template match="/">
        <xsl:copy>
          <xsl:apply-templates select="mysqldump/database/table_data/row[position() mod $block-size = 1]"/>
        </xsl:copy>
      </xsl:template>

      <xsl:template match="row">
        <exsl:document href="chunk-{position()}.xml">
          <add>
            <xsl:for-each select=". | following-sibling::row[position() &lt; $block-size]">
              <doc>
                <xsl:for-each select="field">
                  <field>
                    <xsl:attribute name="name"><xsl:value-of select="./@name"/></xsl:attribute>
                    <xsl:value-of select="."/>
                  </field>
                  <xsl:text>&#xa;</xsl:text>
                </xsl:for-each>
              </doc>
            </xsl:for-each>
          </add>
        </exsl:document>
      </xsl:template>
    </xsl:stylesheet>
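    Since XSLT 1.0 processors like xsltproc build the entire input tree in memory, a 13GB document is likely out of reach no matter how the stylesheet is written. A streaming pass is one way around it; here is a hedged C# sketch using XmlReader/XmlWriter (the file names and the "row" element name are assumptions taken from the stylesheet above, and rows are copied verbatim rather than renamed to doc/field):

    using System.Xml;

    class XmlSplitter
    {
        static void Main()
        {
            const int blockSize = 75000;
            int rowCount = 0, chunk = 0;
            XmlWriter writer = null;

            using (var reader = XmlReader.Create("dump.xml"))
            {
                while (reader.Read())
                {
                    if (reader.NodeType == XmlNodeType.Element && reader.LocalName == "row")
                    {
                        if (rowCount % blockSize == 0) // start a new chunk file
                        {
                            CloseChunk(writer);
                            writer = XmlWriter.Create("chunk-" + ++chunk + ".xml");
                            writer.WriteStartElement("add");
                        }
                        // Copies one <row> subtree; only that row is ever in memory.
                        writer.WriteNode(reader.ReadSubtree(), false);
                        rowCount++;
                    }
                }
            }
            CloseChunk(writer);
        }

        static void CloseChunk(XmlWriter writer)
        {
            if (writer == null) return;
            writer.WriteEndElement(); // </add>
            writer.Close();
        }
    }

    Memory stays flat regardless of input size, because nothing beyond the current row is ever materialized.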

    Read the article

< Previous Page | 250 251 252 253 254 255 256 257 258 259 260 261  | Next Page >