Search Results

Search found 13955 results on 559 pages for 'easy angel'.

  • String contains trailing zeroes when converted from decimal [migrated]

    - by Locke
    I've run into an unusual quirk in a program I'm writing, and I was trying to figure out if anyone knew the cause. Note that fixing the issue is easy enough; I just can't figure out why it is happening in the first place. I have a WinForms program written in VB.NET that displays a subset of data. It contains a few labels that show numeric values (the .Text property of each label is assigned directly from the Decimal values). These numbers are returned by a DLL I wrote in C#. The DLL calls a webservice which initially returns the values in question. It returns one as a string, the other as a decimal (I don't have any control over the webservice, I just consume it). The DLL assigns these to properties on an object (both of which are decimals), then returns that object back to the WinForms program that called the DLL. Obviously, there's a lot of other data being consumed from the webservice, but no other operations are happening which could modify these properties. So, the short version is:
        WinForm requests a new Foo from the DLL.
        DLL creates object Foo.
        DLL calls the webservice, which returns SomeOtherFoo.
        //Both Foo.Bar1 and Foo.Bar2 are decimals
        Foo.Bar1 = decimal.Parse(SomeOtherFoo.Bar1); //SomeOtherFoo.Bar1 is a string equal to "2.9000"
        Foo.Bar2 = SomeOtherFoo.Bar2;                //SomeOtherFoo.Bar2 is a decimal equal to 2.9D
        DLL returns Foo to WinForm.
        WinForm.lblMockLabelName1.Text = Foo.Bar1    //Inspecting Foo.Bar1 indicates the value is 2.9D
        WinForm.lblMockLabelName2.Text = Foo.Bar2    //Inspecting Foo.Bar2 also indicates 2.9D
    So, what's the quirk? WinForm.lblMockLabelName1.Text displays as "2.9000", whereas WinForm.lblMockLabelName2.Text displays as "2.9". Now, everything I know about C# and VB indicates that the format of the string which was initially parsed into the decimal should have no bearing on the outcome of a later decimal.ToString() operation called on the same decimal. I would expect that decimal.Parse(someDecimalString).ToString() would return the string without any trailing zeroes. Everything I find online seems to corroborate this (there are countless Stack Overflow questions asking exactly the opposite: how to keep the formatting from the initial parsing). At the moment, I've just removed the trailing zeroes from the initial string that gets parsed, which has hidden the quirk. However, I'd love to know why it happens in the first place.
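
    For context on why this can happen: System.Decimal stores a scale (the number of digits after the decimal point) alongside the value, and decimal.Parse keeps the scale implied by the input string, so the trailing zeroes of "2.9000" survive a later ToString(). A minimal C# sketch of this behavior (my own illustration, not the code from the question):

        using System;

        class DecimalScaleDemo
        {
            static void Main()
            {
                decimal fromString = decimal.Parse("2.9000"); // carries scale 4
                decimal fromLiteral = 2.9m;                   // carries scale 1

                Console.WriteLine(fromString);                 // "2.9000"
                Console.WriteLine(fromLiteral);                // "2.9"
                Console.WriteLine(fromString == fromLiteral);  // True: equal values, different scale

                // The scale travels inside the decimal itself (bits 16-23 of the flags word):
                int scale = (decimal.GetBits(fromString)[3] >> 16) & 0xFF;
                Console.WriteLine(scale);                      // 4

                // One way to normalize for display is the "G29" format,
                // which drops insignificant trailing zeroes:
                Console.WriteLine(fromString.ToString("G29")); // "2.9"
            }
        }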

    Read the article

  • New Oracle VM RAC template with support for Oracle VM 3 built-in

    - by wcoekaer
    The RAC team did it again (thanks Saar!) - another awesome set of Oracle VM templates published and uploaded to My Oracle Support. You can find the main page here. What's special about the latest version of DeployCluster is that it integrates tightly with Oracle VM 3 Manager. It basically is an Oracle VM frontend that helps start VMs and pass arguments down automatically; there is absolutely no need to log into the Oracle VM servers or the guests. Once it completes, you have an entire Oracle RAC database setup ready to go. Here's a short summary of the steps:
    - Set up an Oracle VM 3 server pool
    - Download the Oracle VM RAC template from oracle.com
    - Import the template into Oracle VM using Oracle VM Manager (repository - import)
    - Create a public and a private network in Oracle VM Manager in the network tab
    - Configure the template with the right public and private virtual networks
    - Create a set of shared disks (physical or virtual) to assign to the VMs you want to create (for ASM; at least 5)
    - Clone a set of VMs from the template (as many RAC nodes as you plan to configure). With Oracle VM 3.1 you can clone with a number, so one clone command for, say, 8 VMs is easy.
    - Assign the shared devices/disks to the cloned VMs
    - Create a netconfig.ini file on your manager node or a client where you plan to run DeployCluster. This little text file just contains the IP addresses, hostnames etc. for your cluster. It is a very simple, small text file.
    - Run deploycluster.py with the VM names as argument
    Done. At this point, the tool will connect to Oracle VM Manager, start the VMs and configure each one:
    - Configure the OS (Oracle Linux)
    - Configure the disks with ASM
    - Configure the clusterware (CRS)
    - Configure ASM
    - Create database instances on each node
    Now you are ready to log in and use your x-node database cluster. No need to download various products from various websites, click on trial licenses for the OS, or go to a Virtual Machine store with sample and test versions only - this is production ready and supported. Software. Complete.
    Example netconfig.ini:
        # Node specific information
        NODE1=racnode1
        NODE1VIP=racnode1-vip
        NODE1PRIV=racnode1-priv
        NODE1IP=192.168.1.2
        NODE1VIPIP=192.168.1.22
        NODE1PRIVIP=10.0.0.22
        NODE2=racnode2
        NODE2VIP=racnode2-vip
        NODE2PRIV=racnode2-priv
        NODE2IP=192.168.1.3
        NODE2VIPIP=192.168.1.23
        NODE2PRIVIP=10.0.0.23
        # Common data
        PUBADAP=eth0
        PUBMASK=255.255.255.0
        PUBGW=192.168.1.1
        PRIVADAP=eth1
        PRIVMASK=255.255.255.0
        RACCLUSTERNAME=raccluster
        DOMAINNAME=mydomain.com
        DNSIP=
        # Device used to transfer network information to second node
        # in interview mode
        NETCONFIG_DEV=/dev/xvdc
        # 11gR2 specific data
        SCANNAME=racnode12-scan
        SCANIP=192.168.1.50

    Read the article

  • Creating a Strong Bridge to the Post PC World

    - by Webgui
    Moving from location to location requires strong roads. When crossing a barrier though, like a body of water or a valley, we are required to build a strong bridge to get us from point A to point B in a way that is fast, safe, and easy. Yet we are not talking here about driving a car or riding a bus. As we in the computing world are evidencing the move to the post-PC era, modernizing and migrating legacy applications to harness the power of HTML5 web, cloud and mobile is one of the most difficult challenges enterprises have faced. Constant technological changes have weakened the business value of legacy systems, which have been developed over the years through huge investments. There are several risks of course in this move. Do you choose to simply rewrite the code of legacy apps and transform them to HTML5 one by one? This is quite expensive (according to research firm Gartner, the cost is $6 - $26 per line of code). Of course, the pace of the rewriting process is very slow - around 170 lines per day for each developer - which slows down business productivity in a world in which no organization can afford to fall behind. Other questions include whether the new cloud-based apps will have the same functionality as the trusted applications that worked for you for years. How will the user experience be affected? And of course, what about data security? So we are faced with the challenge of building a sturdy bridge to stabilize our move, in order to allow us to confidently and easily move our legacy applications into the post-PC era.
    We at Gizmox are excited to release the first downloadable Community Technology Preview (CTP) of our Instant CloudMove Transposition Studio. Developers: To download the tool and try it out for yourself, please visit http://www.visualwebgui.com/download.aspx.
    The CTP is the first and only tool-based solution allowing any Microsoft Visual Studio developer to extend VB6 and .NET enterprise client/server applications into HTML5 web, cloud and mobile applications, including the ability to upgrade their code and UI while doing so. It is the only solution to fully replicate enterprise desktop application behavior in the post-PC era. With Instant CloudMove, the transposed application is available on any mobile or tablet device, browser, and across any client operating system. Moreover, the extended application logic and data remain on the server behind the firewall, and therefore the application's front end is secured by design. We would love for you to try out the tool for yourselves and let us know what you think. How are you finding the move?

    Read the article

  • Developing for 2005 using VS2008!

    - by Vincent Grondin
    I joined a fairly large project recently and it has a particularity... Once finished, everything has to be sent to the client under VS2005 using VB.Net and can target either framework 2.0 or 3.0. A long time ago, the decision to use VS2008 and to target framework 3.0 was taken, but people knew they would need to establish a few rules to ensure that each dev would use VS2008 as if it was VS2005. Why is that so? Well, simply because the compiler in VS2005 is different from the compiler inside VS2008. I thought it might be a good idea to note the things that you cannot use in VS2008 if you plan on going back to VS2005. Who knows, this might save someone the headache of going over all their code to fix errors...
    - Do not use LINQ keywords (from, in, select, orderby...).
    - Do not use LINQ standard operators under the form of extension methods.
    - Do not use type inference (in VB.Net you can switch it OFF in each project's properties). This also means you cannot use XML literals.
    - Do not use nullable types under the following declarative form: Dim myInt As Integer? But using Dim myInt As Nullable(Of Integer) is perfectly fine.
    - Do not test nullable types with Is Nothing; use myInt.HasValue instead.
    - Do not use lambda expressions (there are no lambda statements in VB9), so you cannot use the keyword "Function".
    - Pay attention not to use relaxed delegates, because this one is easy to miss in VS2008.
    - Do not use object initializers.
    - Do not use the "ternary If operator"... not the IIf method, but this one: If(condition, truepart, falsepart).
    As a side note, I talked about not using LINQ keywords nor the extension methods, but this doesn't mean not to use LINQ in this scenario. LINQ is perfectly accessible from inside VS2005. All you need to do is reference System.Core, use namespace System.Linq and use the class "Enumerable" as a helper class (see the sketch below). This is one of the many classes containing various methods that VS2008 sees as extensions. The trick is you can use them too! Simply remember that the first parameter of the method is the object you want to query on and then pass in the other parameters needed... That's pretty much all I see, but I could have missed a few... If you know other things that are specific to the VS2008 compiler and which do not work under VS2005, feel free to leave a comment and I'll modify my list accordingly (and notify our team here...)! Happy coding all!
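
    To make the "Enumerable as a helper class" tip concrete, here is a small sketch of my own (in C# 2.0-compatible syntax for brevity; the VB 2005 equivalent passes New Func(Of Integer, Boolean)(AddressOf IsEven) instead). It assumes only what the post states: a reference to System.Core and the System.Linq namespace.

        using System;
        using System.Collections.Generic;
        using System.Linq;   // requires a reference to System.Core.dll

        class EnumerableAsHelperClass
        {
            static bool IsEven(int n) { return n % 2 == 0; }

            static void Main()
            {
                List<int> numbers = new List<int>(new int[] { 1, 2, 3, 4, 5, 6 });

                // Instead of the extension-method form numbers.Where(...),
                // call the static method and pass the sequence as the first argument:
                IEnumerable<int> evens = Enumerable.Where(numbers, new Func<int, bool>(IsEven));

                foreach (int n in evens)
                {
                    Console.WriteLine(n);   // 2, 4, 6
                }
            }
        }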

    Read the article

  • Should UTF-16 be considered harmful?

    - by Artyom
    I'm going to ask what is probably quite a controversial question: "Should one of the most popular encodings, UTF-16, be considered harmful?" Why do I ask this question? How many programmers are aware of the fact that UTF-16 is actually a variable-length encoding? By this I mean that there are code points that, represented as surrogate pairs, take more than one element. I know; lots of applications, frameworks and APIs use UTF-16, such as Java's String, C#'s String, Win32 APIs, Qt GUI libraries, the ICU Unicode library, etc. However, with all of that, there are lots of basic bugs in the processing of characters outside the BMP (characters that should be encoded using two UTF-16 elements). For example, try to edit one of these characters:
    - 𝄞 (U+1D11E) MUSICAL SYMBOL G CLEF
    - 𝕥 (U+1D565) MATHEMATICAL DOUBLE-STRUCK SMALL T
    - 𝟶 (U+1D7F6) MATHEMATICAL MONOSPACE DIGIT ZERO
    - 𠂊 (U+2008A) Han Character
    You may miss some, depending on what fonts you have installed. These characters are all outside of the BMP (Basic Multilingual Plane). If you cannot see these characters, you can also try looking at them in the Unicode Character reference. For example, try to create file names in Windows that include these characters; try to delete these characters with a "backspace" to see how they behave in different applications that use UTF-16. I did some tests and the results are quite bad:
    - Opera has problems with editing them (delete requires 2 presses on backspace)
    - Notepad can't deal with them correctly (delete requires 2 presses on backspace)
    - File name editing in Windows dialogs is broken (delete requires 2 presses on backspace)
    - All Qt3 applications can't deal with them - they show two empty squares instead of one symbol
    - Python encodes such characters incorrectly when used directly: u'X'!=unicode('X','utf-16') on some platforms when X is a character outside the BMP
    - Python 2.5 unicodedata fails to get properties of such characters when Python is compiled with UTF-16 Unicode strings
    - StackOverflow seems to remove these characters from the text if edited directly in as Unicode characters (these characters are shown using HTML Unicode escapes)
    - WinForms TextBox may generate an invalid string when limited with MaxLength
    It seems that such bugs are extremely easy to find in many applications that use UTF-16. So... Do you think that UTF-16 should be considered harmful?
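
    As a side note for readers who want to see the surrogate-pair mechanics directly, here is a short C# sketch (my own illustration, not from the question) showing how a single code point outside the BMP occupies two UTF-16 code units, and how to walk a string by code point instead of by char:

        using System;

        class SurrogatePairDemo
        {
            static void Main()
            {
                string clef = "\U0001D11E";   // U+1D11E MUSICAL SYMBOL G CLEF

                Console.WriteLine(clef.Length);                    // 2 (UTF-16 code units, not characters)
                Console.WriteLine(char.IsHighSurrogate(clef[0]));  // True
                Console.WriteLine(char.IsLowSurrogate(clef[1]));   // True

                // Per-char processing (like a naive backspace) splits the pair;
                // stepping by code point keeps it intact:
                for (int i = 0; i < clef.Length; i += char.IsSurrogatePair(clef, i) ? 2 : 1)
                {
                    int codePoint = char.ConvertToUtf32(clef, i);
                    Console.WriteLine("U+{0:X}", codePoint);       // U+1D11E
                }
            }
        }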

    Read the article

  • Robust line of sight test on the inside of a polygon with tolerance

    - by David Gouveia
    Foreword: This is a followup to this question and the main problem I'm trying to solve. My current solution is a hack which involves inflating the polygon, and doing most calculations on the inflated polygon instead. My goal is to remove this step completely, and correctly solve the problem with calculations only.
    Problem: Given a concave polygon, and treating all of its edges as if they were walls in a level, determine whether two points A and B are in line of sight of each other, while accounting for some degree of floating point error. I'm currently basing my solution on a series of line-segment intersection tests. In other words:
    - If either of the end points is outside the polygon, they are not in line of sight.
    - If both end points are inside the polygon, and the line segment from A to B crosses any of the edges of the polygon, then they are not in line of sight.
    - If both end points are inside the polygon, and the line segment from A to B does not cross any of the edges of the polygon, then they are in line of sight.
    But the problem is dealing correctly with all the edge cases. In particular, it must be able to deal with all the situations depicted below, where red lines are examples that should be rejected, and green lines are examples that should be accepted. I probably missed a few other situations, such as when the line segment from A to B is collinear with an edge, but one of the end points is outside the polygon. One point of particular interest is the difference between 1 and 9. In both cases, both end points are vertices of the polygon, and there are no edges being intersected, but 1 should be rejected while 9 should be accepted. How to distinguish these two? I could check some middle point within the segment to see if it falls inside or not, but it's easy to come up with situations in which it would fail. Point 7 was also pretty tricky and I had to treat it as a special case, which checks if the two points are adjacent vertices of the polygon directly. But there are also other chances of line segments being collinear with the edges of the polygon, and I'm still not entirely sure how I should handle those cases. Is there any well-known solution to this problem?
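
    For reference, a hedged C# sketch (not the poster's code) of the kind of primitive such tests tend to be built on: an orientation test with an epsilon band so that near-collinear cases are classified consistently, plus a "proper crossing" check that deliberately excludes shared endpoints and collinear overlaps, since those are exactly the edge cases that need separate handling:

        using System;

        struct Vec2
        {
            public double X, Y;
            public Vec2(double x, double y) { X = x; Y = y; }
        }

        static class Geometry
        {
            const double Epsilon = 1e-9;   // tolerance; scale it to the size of your level

            // >0: c is left of a->b, <0: right of it, 0: treated as collinear within tolerance
            static double Orient(Vec2 a, Vec2 b, Vec2 c)
            {
                double cross = (b.X - a.X) * (c.Y - a.Y) - (b.Y - a.Y) * (c.X - a.X);
                return Math.Abs(cross) < Epsilon ? 0.0 : cross;
            }

            // True only for a proper crossing of segments ab and cd. Shared endpoints and
            // collinear overlaps report false here and must be classified separately,
            // which is where most of the edge cases in the question live.
            public static bool ProperlyIntersects(Vec2 a, Vec2 b, Vec2 c, Vec2 d)
            {
                double d1 = Orient(c, d, a);
                double d2 = Orient(c, d, b);
                double d3 = Orient(a, b, c);
                double d4 = Orient(a, b, d);
                return ((d1 > 0 && d2 < 0) || (d1 < 0 && d2 > 0)) &&
                       ((d3 > 0 && d4 < 0) || (d3 < 0 && d4 > 0));
            }
        }

        class LineOfSightDemo
        {
            static void Main()
            {
                // A segment crossing a wall is rejected; a segment that only touches
                // the wall's endpoint is not reported as a proper crossing.
                Vec2 wallStart = new Vec2(0, 0), wallEnd = new Vec2(0, 2);
                Console.WriteLine(Geometry.ProperlyIntersects(new Vec2(-1, 1), new Vec2(1, 1), wallStart, wallEnd)); // True
                Console.WriteLine(Geometry.ProperlyIntersects(new Vec2(-1, 2), new Vec2(0, 2), wallStart, wallEnd)); // False
            }
        }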

    Read the article

  • TFS Hosting: discountasp.net TFS

    - by Enrique Lima
    In the last month or so I have been able to test and experience first hand the offering from discountasp.net for hosted TFS 2010. This first part is a description of the setup process for the account itself and getting some additional information on what you will find through the portal on their site. Not long ago, I posted a little tidbit on hosting TFS.  Through it I also did a shameless plug to my employer, our services and the type of hosting we recommend.  So, wouldn’t me running on discountasp.net be an issue?  Actually? NO. Ok, enough rambling.  Let’s get some details here. It is a Software as a Service model.  Through it we get Source Control, Version Control, Work Item Tracking and such.  What about Build?  If your need includes Build Management and such, you may need to look at some other options.  But, still this is a great offering for those that are moving from SourceSafe.  Or organizations who have 3 to 5 developers on staff, and do not foresee getting larger anytime soon.  Can it support more than 5 developers?  Yes, but then we need to get into how are you using TFS.  Do you need more than just Basic?  For example, SharePoint and Reporting Services integration. The signup process was seamless! Very easy to follow, complete and transition to Visual Studio to start working. An email followed the signup process, it contained details on how to get to the Team Foundation Server Control Panel login.  Once there, here is what I saw after the initial setup process of naming my Team Project Collection: So, moving on … once I clicked the area to get my server info, I got the following: Then it was a matter of getting the first user in there: Then on to connecting Visual Studio to my hosted TFS. Getting the server information, and the user account created I will configure those options in Visual Studio. Using Team Explorer, I am adding a new server configuration. Once this is provided, click OK, I will be challenged for a username and password, provide them and you will land on the following screen. Then Click Close. You will now be connected to your server and Team Project Collection. Since this will likely be the first time connecting, you will have no Projects (I already have 2 going). Click Connect, and you will be back in Team Explorer. My next post in the topic will be on Creating your First Team Project and uploading a Project Template to the server.

    Read the article

  • How does rc job work / order of (contradicting) "start on ..." and "stop on ..." stanzas

    - by Binarus
    Hi, I just can't understand how Upstart's rc job definition in Natty 11.04 works. To illustrate the problem, here is the definition (empty lines and comments are left out):
        start on runlevel [0123456]
        stop on runlevel [!$RUNLEVEL]
        export RUNLEVEL
        export PREVLEVEL
        console output
        env INIT_VERBOSE
        task
        exec /etc/init.d/rc $RUNLEVEL
    Let's suppose we currently are in runlevel 2 and the rc job is stopped (that is exactly the situation after booting my box and logging in via SSH). Now, let's assume that the system switches to runlevel 3, for example due to a command like "telinit 3" given by root. What will happen to the rc job? Obviously, the rc job will be started since it is currently stopped and the event runlevel 3 matches the start events. But from now on, things are unclear to me: According to the manual, $RUNLEVEL evaluates to the new runlevel when the job is started (that means 3 in our example). Therefore, the next stanza "stop on runlevel [!$RUNLEVEL]" translates to "stop on runlevel [!3]"; that means we have a first stanza which will trigger the job, but the second stanza will never stop the job and seems to be useless. Since I know that the Ubuntu / Upstart people won't do useless things, I must be heavily misunderstanding something. I would be grateful for any explanation. While trying to understand this, an additional question came to my mind. If I had contradicting start and stop triggers, for example:
        start on foo
        stop on foo
    what would happen? I swear I never will do that, but I am nevertheless very interested in how Upstart handles that on the theoretical level. Thank you very much!
    Edit (in reaction to geekosaur's first answer): I can see the parallelism, but it is not that easy (at least, not to me). Let's assume the job currently is still running, and a new runlevel event comes in (of course, the new runlevel is different from the current one). Then, the following should happen:
    1) The job is single instance. That means that "start on ..." won't be triggered since the job is currently running; $RUNLEVEL is not touched.
    2) "stop on ..." will be triggered since the new runlevel is different from $RUNLEVEL, so the job will be aborted.
    3) Now, the job is stopped and waiting. I can't see how it is restarted with the new runlevel. AFAIK, initctl emits events only once, so "start on ..." won't be triggered and the new runlevel won't be entered.
    I know that I am still misunderstanding something, and I am grateful for explanations. Thank you very much!

    Read the article

  • SOASuite 11.1.1.4 : Error Logging into BPM11g Composer?

    - by angelo.santagata
    Hey all, I’ve just installed SOA Suite 11.1.1.4 and noticed a few funnies which people might hit, thankfully each of them have an easy solution.   1. Some applications are installed but dont appear to work? If when you install SOASuite you may notice that the following applications dont appear to work, however they do appear as deployments in Weblogic Server Console e.g.  SOA Composer (composer), FMW Welcome Page Application (11.1.0.0.0) and some of the adaptors. If they appear in the deployments list state as “installed” and not Active, then its likely that they haven't been targeted to a specific server.   The solution is to target the application the desired managed server , e.g. AdminServer in a development environment. This is done by selecting the application, tab “Targets”, select all components, Button[change Targets] and select the appropriate server. This change can be done without restarting the Weblogic Server 2. You might find that when you try to log into the BPM Composer at http://machine:7001/bpm/composer , the login screen will appear but you cant log in. The error log might mention the following   The solution to this is two fold, a) When creating the domain, avoid using 127.0.0.1 as the Listener address, or “Any Addresses”, if this is a development machine create an alias in your /etc/hosts file and then use this alias in the domain creation wizard. e.g.   my host file contains an entry     mypc     127.0.0.1   And in the Fusion middleware Configuration wizard   Twill then work!   if it still doesnt you can try setting the ServerURL attribute to http://mypc in the SoaInfraMBean instead of blank. This is accomplished by using Enterprise Manager. Use the System MBean Browser to navigate to Application Defined MBeans->oracle.as.soainfra.config->[ server]->SoaInfraConfig->soa-infra. Then changing the value of 'ServerURL' to http://mypc   Failing that give support a call….

    Read the article

  • Where Are You on the Visualization Maturity Curve?

    - by Celine Beck
    The old phrase “A picture is worth a thousand words” is as true now as ever. Providing the right users with access to the right product data, at the right time, can provide significant benefits to a business. This is especially evident with increasing technical and product complexities, elongated supply chains, and growing pressure to bring innovative products to market faster. With this in mind, it is easy to understand why visualization is an integral part of any successful product lifecycle management (PLM) strategy. At a bare minimum, knowledge workers use multiple individual documents of different formats and structure, and leverage visualization solutions to access information; but the real value of visualization can be fully reaped when it is connected to enterprise applications like PLM and tied to the appropriate business context. The picture below illustrates this visualization maturity curve, as we presented during the last Oracle Open World and the transformational effect that visualization can have on PLM processes and performance (check out the post about AutoVue Key Highlights from Oracle Open World 2012 for more information). Organizations are likely to see greater positive impact on business performance when visualization is connected to enterprise systems, allowing access to information coming from multiple sources, such as PLM, supply chain management (SCM) and enterprise resource planning (ERP). This allows organizations to reach higher levels of collaboration and optimize decision-making capacity as users can benefit from in-context access to visual information. For instance, within a PLM system, a design engineer can access a product assembly and review digital annotations added by other users specific to the engineering change request he is reviewing rather than all historical annotations. The last stage on the curve is what we call augmented business visualization (ABV).  ABV is an innovative framework which lets structured data (from Oracle’s Agile PLM for instance) interact with unstructured data (documents, design, 3D models, etc). With this new level of integration, information coming from multiple sources can be presented in a highly visual fashion; color displays can be used in order to identify parts with specific characteristics (for example pending quality issues) and you can take actions directly from within the context of documents and designs, maximizing user productivity. Those who had the chance to attend our PLM session during Oracle Open World already got a sneak peek of our latest augmented business visualization for Oracle’s Agile PLM. The solution generated a lot of wows. Stephen Porter, CEO at Zero Wait State, indicated in a post entitled “The PLM State: the Manhattan Project-Oracle’s Next Big Secret Weapon” that “this kind of synergy between visualization and PLM could qualify as a powerful weapon differentiating Agile PLM from other solutions.” If you are interested in learning more about ABV for Oracle’s Agile PLM and hear about real examples of usage of visualization at all stages of the visualization maturity curve, don’t miss our Visual Decision Making to Optimize New Product Development and Introduction session during the Oracle Value Chain Summit (Feb. 4-6, 2013, San Francisco). We look forward to seeing you there!

    Read the article

  • Updates about Multidimensional vs Tabular #ssas #msbi

    - by Marco Russo (SQLBI)
    I recently read the blog post from James Serra, Tabular model: Not ready for prime time? (read also the comments because there are discussions about a few points raised by James), and the following post from Christian Wade, Multidimensional or Tabular. In the last 2 years I worked with many companies adopting Tabular in different scenarios and I agree with some of the points expressed by James in his post (especially about missing features in Tabular if compared to Multidimensional), but I strongly disagree on others. In general, Tabular is a good choice for a new project when:
    - the development team does not have a good knowledge of Multidimensional and MDX (DAX is faster to learn; not as easy as it is sold by MS, but definitely easier than MDX)
    - you don't need calculations based on hierarchies (common in certain financial applications, but not as common as it could seem)
    - there are important calculations based on distinct count measures
    - there are complex calculations based on many-to-many relationships
    Until now, I never suggested migrating an existing Multidimensional model to a Tabular one. There should be very important reasons for that, such as performance issues in distinct count and many-to-many relationships that cannot be easily solved by optimizing the Multidimensional model, but I still never encountered this scenario. I would say that in 80% of new projects, you might use either Multidimensional or Tabular and the real difference is the time-to-market depending on the skills of the development team. So it's not strange that whoever is used to Multidimensional is not moving to Tabular, not getting a particular benefit from the new model unless specific requirements exist. The recent DAXMD feature that allows using SharePoint Power View on Multidimensional is a really important one, even if I'd like to have Excel Power View enabled for this scenario as well (this should be just a question of time). Another scenario in which I'm seeing a growing adoption of Tabular is in companies that create models for their product/service and do that by using XMLA or Tabular AMO 2012. I am used to calling them ISVs, even if those providing services cannot really be defined in this way. These companies are facing the multitenancy challenge with Tabular, and even if this is a niche market, I see some potential here, because adopting Tabular seems a much more natural choice than Multidimensional in those scenarios where an analytical engine has to be embedded to deliver one of the features of a larger product/service delivered to customers. I'd like to see other feedback in the comments: tell your story of choosing between Tabular and Multidimensional in a BI project you started with SQL Server 2012, thanks!

    Read the article

  • Transform Your Application Integration with Best Practices from Oracle Customers

    - by Lionel Dubreuil
    You want to transform your application integration into an environment based on a service-oriented architecture (SOA). You also want to utilize business process management (BPM) to improve efficiency, deliver business agility, lower total cost of ownership, and increase business visibility. And you want to hear directly from like-minded professionals who have made those types of transformations. Easy enough. Attend this Webcast series to learn from customers who have successfully integrated with Oracle SOA and BPM solutions. Join us for this series and discover how to:
    - Use a single unified platform for all types of processes
    - Increase real-time process visibility
    - Improve efficiency of existing IT investments
    - Lower up-front costs and achieve faster time to market
    - Gain greater benefit from SOA with the addition of BPM
    Here's the list of upcoming webcasts:
    - "Migrating to SOA at Choice Hotels" on Thurs., June 21, 2012 - 10 a.m. PT / 1 p.m. ET. Hear how Choice Hotels successfully made the transition from a complex legacy environment into a SOA-based shared services infrastructure that accelerated time to market as the company implemented its event-driven Google API project.
    - "San Joaquin County - Optimizing Justice and Public Safety with Oracle BPM and Oracle SOA" on Thurs., July 26, 2012 - 10 a.m. PT / 1 p.m. ET. Learn how San Joaquin County moved to a service-oriented architecture foundation and business process management platform to gain efficiency and greater visibility into mission-critical information for public safety.
    - "Streamlining Order to Cash with SOA at Eaton" on Thurs., August 23, 2012 - 10 a.m. PT / 1 p.m. ET. Discover how Eaton transitioned from a legacy TIBCO infrastructure. Learn about the company's reference architecture for a SOA-based Oracle Fusion Distributed Order Orchestration (DOO).
    - "Fast BPM Implementation with Fusion: Production in Five Months" on Thurs., September 13, 2012 - 10 a.m. PT / 1 p.m. ET. Learn how Nets Denmark A/S implemented Oracle Unified Business Process Management Suite in just five months. The Webcast will cover the implementation from start to production, including integration with legacy systems.
    - "SOA Implementation at Farmers Insurance" on Thurs., October 18, 2012 - 10 a.m. PT / 1 p.m. ET. Learn how Farmers Insurance Group lowered application infrastructure costs, reduced time to market, and introduced flexibility by transforming to a SOA-based infrastructure with SOA governance.
    Register today!

    Read the article

  • In a multidisciplinary team, how much should each member's skills overlap?

    - by spade78
    I've been working in embedded software development for this small startup and our team is pretty small: about 3-4 people. We're responsible for all engineering, which involves an RF device controlled by an embedded microcontroller that connects to a PC host which runs some sort of data collection and analysis software. I have come to develop these two guidelines when I work with my colleagues:
    - Define a clear separation of responsibilities and make sure each person's contribution to the final product doesn't overlap.
    - Don't assume your colleagues know everything about their responsibilities. I assume there is some sort of technology that I will need to be competent at to properly interface with the work of my colleagues.
    The first point is pretty easy for us. I do firmware, one guy does the RF, another does the PC software, and the last does the DSP work. Nothing overlaps in terms of two people's work being mixed into the final product. For that to happen, one guy has to hand off work to another guy who will vet it and integrate it himself. The second point is the heart of my question. I've learned the hard way not to trust the knowledge of my colleagues absolutely, no matter how many years of experience they claim to have. At least not until they've demonstrated it to me a couple of times. So given that, whenever I develop a piece of firmware, if it interfaces with some technology that I don't know, then I'll try to learn it and develop a piece of test code that helps me understand what they're doing. That way, if my piece of the product comes into conflict with another piece, then I have some knowledge about possible causes. For example, the PC guy has started implementing his GUIs in .NET WPF (C#) and using LibUSBdotNET for USB access. So I've been learning C# and the .NET USB library that he uses, and I build a little console app to help me understand how that USB library works. Now all this takes extra time and energy, but I feel it's justified as it gives me a foothold to confront integration problems. Also, I like learning this new stuff, so I don't mind. On the other hand, I can see how this can turn into a time sink for work that won't make it into the final product and may never turn into a problem. So how much experience/skills overlap do you expect in your teammates relative to your own skills? Does this issue go away as the teams get bigger and more diverse?

    Read the article

  • Three Master Data Management Deployment Tips

    - by david.butler(at)oracle.com
    MDM is all about data quality and data governance. We now know that improved data quality raises all operational and analytical boats. But it's not just about deploying data quality tools. It's about deploying data quality tools within and across the IT landscape - from a thousand points of data entry to a single version of the truth. Here are three tips to deploying MDM across your applications and enterprise.
    #1: Identify a tactical, high-value business problem where MDM can materially help.
    - Support a customer acquisition and retention program with a 'customer' master data solution.
    - Accelerate new products and services to market with a 'product' master data solution.
    - Reduce supplier exceptions or support spend control initiatives with a 'supplier' master data solution.
    - Support new store (branch, campus, restaurant, hospital, office, well head) location analysis with a 'site' master data solution.
    - Fix long-standing Chart of Accounts and Cost Center problems with a 'financial' master data solution.
    - Support M&A activity, application upgrades, an SOA initiative, a cloud computing program, or a new business intelligence deployment by implementing a mix of master data solutions.
    #2: Incrementally expand to a full information architecture. Quite often, the measurable return on investment from tactical MDM initiatives will fund future deployments. Over time, the MDM solution expands into its full architecture to cover the entire IT landscape. Operations and analytics are united, IT flexibility is restored, and sustainable competitive advantage is achieved.
    #3: Bring business into every MDM deployment. To be successful, MDM must work hand in hand with data governance. In fact, Oracle MDM incorporates data governance tools for business users. IT can ensure data quality, but only after the business side has defined what quality means. The business establishes the rules for governing the master data, and then IT enforces the rules via the MDM applications. Without this business/IT collaboration, MDM initiatives seldom achieve their full potential.
    It is not very often that a technology comes along that can measurably assist organizations across a wide variety of top IT initiatives. Reducing costs, increasing flexibility, getting more out of existing assets, and aligning business and IT are not easy tasks for any CIO. But with MDM, success is achievable. IT can regain its place as a center for innovation. For more information on this topic, take a look at my article Master Data Management Deployment Tips in the Opinion section of Oracle's Profit Online magazine.

    Read the article

  • SQL SERVER – Backup SQL databases to Box or SkyDrive

    - by Pinal Dave
    To ensure your SQL Server or Azure databases remain safe, you should back up your databases periodically. And it is important to store the backups in a reliable location. Microsoft SkyDrive currently offers 7GB free, Box offers 5GB free - both are reliable, and it is simple to send your backups there. SQLBackupAndFTP, in its latest version 9, added the option to back up to SkyDrive and Box (in addition to local/network folder, NAS drive, FTP, Dropbox, Google Drive and Amazon S3). Just select the databases that you'd like to back up and choose to store the backups in SkyDrive or Box. Below I will show you how to do it in detail.
    Select databases to back up: First connect to your SQL Server or Azure SQL Database. Then select the databases you'd like to back up.
    Connect to SkyDrive or Box cloud: If you have a free version of SQLBackupAndFTP, the Box destination is included, but the SkyDrive destination will be disabled as it is available in the Standard version or above. Click "Try now" to get a 30-day trial of all options. On the "SkyDrive Settings" form you'll need to authorize SQLBackupAndFTP to access your SkyDrive. Click "Authorize..." to open the SkyDrive authorization page in your browser, sign in to your SkyDrive account and click "Allow". On the next page you will see the field with the authorization code. Copy it to the clipboard. Box operation is just the same. After that, return to SQLBackupAndFTP, paste the authorization code and click "OK". After you are authorized, you can enter the path to a backup folder. SQLBackupAndFTP will create the folder if it does not exist. That's all that has to be done to back up to SkyDrive or Box cloud. You can now click the "Run Now" button to test this job.
    Conclusion: Whatever your preference for storing SQL backups, it is easy with SQLBackupAndFTP. Note that at the time of this writing they are running a very rare promotion on volume licenses:
    - 5-9 licenses: 20% off
    - 10-19 licenses: 35% off
    - more than 20 licenses: 50% off
    Please let me know your favorite options for storing the backups. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Are You an IT Geek? Why Not Write for How-To Geek?

    - by The Geek
    Are you a geek in the IT field that wants to share your skills with the world? We're looking for an experienced Sysadmin / IT Admin / Webmaster geek with writing skills that wants to join our team on a part-time basis. Please apply if you have the following qualities:
    - You must be a geek at heart, willing to try and make the boring world of IT sound glamorous and sexy. If that's not possible, at least be willing to share your wisdom and skills to help other IT geeks save time and become better at what they do.
    - You must be able to write articles that are easy to understand. Either Windows or Linux writers are welcome to apply.
    - You must be able to follow our style guide.
    - You must be creative. You must generate ideas for articles on your own, and take suggestions like a pro.
    - You're ambitious, looking to build your skills and your name, and are prepared to work hard. If you aren't willing to work hard, put some dedication and pride into your work, or aren't really interested in the topic, this job might not be for you.
    We're looking for serious individuals that want to grow with us, and as we grow, you'll grow as well.
    How To Apply: If you think this job is a good fit for you, send an email to [email protected] and include some background information about yourself, why you'd be a good fit, some topic areas you are familiar with, and hopefully some examples of your work. Bonus points if you have a ninja costume and a keyboard strapped to your back.

    Read the article

  • Why is it necessary to install EFI/rEFInd/UEFI/... on a SD Card since the Macbook Pro seems to already have it?

    - by user170794
    Dear askubuntu members, I own a MacBook Pro (late 2009), and when I boot the laptop and hold the alt key, there is an EFI screen, so EFI is installed on... the firmware? I had a few troubles with my hard disk, so I had to change it, but I haven't installed OS X, I have only installed Ubuntu, and still the EFI screen is there, which is surely a good thing. As the new hard disk is making troubles again, I am using Puppy Linux, booting from a CD each time, which is uncomfortable. So I am trying to have Ubuntu installed on an SD card. After having spent many months on the internet grabbing information anywhere I can and trying several things, I applied this method: http://www.weihermueller.de/mac/
    I succeeded in making one SD card recognizable by the EFI of my laptop (holding the alt key at boot), but nothing is installed on it yet as I fear losing the recognizable-by-EFI part. I haven't succeeded in producing the same result on another SD card. I have a bootable USB key of Ubuntu (yipee) which works like a live CD, made with the help of Universal Linux UDF Creator, found here: http://www.pendrivelinux.com/universal-usb-installer-easy-as-1-2-3/ on which I have put Ubuntu 13.04 64bit, retrieved from the official repositories. Even though I have to add the "nouveau.noaccel=1" option to the grub command line launching Linux, it works (yipee again) properly as a live CD. When installing Ubuntu I come across the "where do I wanna put Ubuntu" window, and I partition another SD card into:
    - the EFI part (40MB)
    - the Linux part (between 15GB and 16GB)
    The installation works fine and finishes with no problem. But at reboot, the SD card where Linux is installed is not recognized by the EFI; the icons are: the CD (Puppy Linux), the USB stick (from Linux UDF Creator), the hard drive (the formerly-working Ubuntu 12), but no fourth icon for the SD card whatsoever. As the title of this thread suggests, I am wondering:
    - why is there a need for EFI to be installed on the SD card, since EFI seems to be on my laptop anyway?
    - why does EFI have to be on a different partition than the Linux one? How do both parts communicate?
    - why isn't the EFI part of the SD card made with the help of the live-USB key recognized?
    - on the EFI partition, there is a folder named "EFI" which contains another folder named "ubuntu" which contains a file named "grubx64.efi"; why is there a thing called grub? Is it the Linux grub where one can choose either to boot, to boot in safe mode, etc.?
    Thank you for your patience, looking forward to any kind of answer, Julien

    Read the article

  • C++ Accelerated Massive Parallelism

    - by Daniel Moth
    At AMD's Fusion conference Herb Sutter announced in his keynote session a technology that our team has been working on that we call C++ Accelerated Massive Parallelism (C++ AMP), and during the keynote I showed a brief demo of an app built with our technology. After the keynote, I go deeper into the technology in my breakout session. If you read both those abstracts, you'll get some information about what C++ AMP is, without being too explicit since we published the abstracts before the technology was announced. You can find the official online announcement at Soma's blog post. Here, I just wanted to capture the key points about C++ AMP that can serve as an introduction and an FAQ. So, in no particular order, C++ AMP:
    - lowers the barrier to entry for heterogeneous hardware programmability and brings performance to the mainstream, without sacrificing developer productivity or solution portability.
    - is designed not only to help you address today's massively parallel hardware (i.e. GPUs and APUs), but it also future-proofs your code investments with a forward-looking design.
    - is part of Visual C++. You don't need to use a different compiler or learn different syntax.
    - is modern C++. Not C or some other derivative.
    - is integrated and supported fully in Visual Studio vNext. Editing, building, debugging, profiling and all the other goodness of Visual Studio work well with C++ AMP.
    - provides an STL-like library as part of the existing concurrency namespace, delivered in the new amp.h header file.
    - makes it extremely easy to work with large multi-dimensional data on heterogeneous hardware, in a manner that exposes parallelization.
    - introduces only one core C++ language extension.
    - builds on DirectX (and DirectCompute in particular), which offers a great hardware abstraction layer that is ubiquitous and reliable. The architecture is such that this point can be thought of as an implementation detail that does not surface to the API layer.
    Stay tuned on my blog for more over the coming months, where I will switch from just talking about C++ AMP to showing you how to use the API with code examples... Comments about this post welcome at the original blog.

    Read the article

  • SQL Community – stronger than ever

    - by Rob Farley
    I posted a few hours ago about a reflection of the Summit, but I wanted to write another one for this month’s T-SQL Tuesday, hosted by Chris Yates. In January of this year, Adam Jorgensen and I joked around in a video that was used for the SQL Server 2012 launch. We were asked about SQLFamily, and we said how we were like brothers – how we could drive each other crazy (the look he gave me as I patted his stomach was priceless), but that we’d still look out for each other, just like in a real family. And this is really true. Last week at the PASS Summit, there was a lot going on. I was busy as always, as were many others. People told me their good news, their awful news, and some whinged to me about other people who were driving them crazy. But throughout this, people in the SQL Server community genuinely want the best for each other. I’m sure there are exceptions, but I don’t see much of this. Australians aren’t big on cheering for each other. Neither are the English. I think we see it as an American thing. It could be easy for me to consider that the SQL Community that I see at the PASS Summit is mainly there because it’s a primarily American organisation. But when you speak to people like sponsors, or people involved in several types of communities, you quickly hear that it’s not just about that – that PASS has something special. It goes beyond cheering, it’s a strong desire to see each other succeed. I see MVPs feel disappointed for those people who don’t get awarded. I see Summit speakers concerned for those who missed out on the chance to speak. I see chapter leaders excited about the opportunity to help other chapters. And throughout, I see a gentleness and love for people that you rarely see outside the church (and sadly, many churches don’t have it either). Chris points out that the M-W dictionary defined community as “a unified body of individuals”, and I feel like this is true of the SQL Server community. It goes deeper though. It’s not just unity – and we’re most definitely different to each other – it’s more than that. We all want to see each other grow. We all want to pull ourselves up, to serve each other, and to grow PASS into something more than it is today. In that other post of mine I wrote a bit about Paul White’s experience at his first Summit. His missus wrote to me on Facebook saying that she welled up over it. But that emotion was nothing about what I wrote – it was about the reaction that the SQL Community had had to Paul. Be proud of it, my SQL brothers and sisters, and never lose it.

    Read the article

  • Count unique visitors by group of visited places

    - by Mathieu
    I'm facing the problem of counting the unique visitors of groups of places. Here is the situation: I have visitors that can visit places. For example, these can be internet users visiting web pages, or customers going to restaurants. A visitor can visit as many places as he wishes, and a place can be visited by several visitors. A visitor can come to the same place several times. The places belong to groups. A group can obviously contain several places, and places can belong to several groups. Given that, for each visitor, we can have a list of visited places, how can I get the number of unique visitors per group of places? Example: I have visitors A, B, C and D; and I have places x, y and z. I have these visiting lists:

        [
          A -> [x,x,y,x],
          B -> [],
          C -> [z,z],
          D -> [y,x,x,z]
        ]

    Getting the number of unique visitors per place is quite easy:

        [
          x -> 2, // A and D visited x
          y -> 2, // A and D visited y
          z -> 2  // C and D visited z
        ]

    But if I have these groups:

        [
          G1 -> [x,y,z],
          G2 -> [x,z],
          G3 -> [x,y]
        ]

    How can I get this information?

        [
          G1 -> 3, // A, C and D visited x or y or z
          G2 -> 3, // A, C and D visited x or z
          G3 -> 2  // A and D visited x or y
        ]

    Additional notes: There are so many places that it is not possible to store information about every possible group. It's not a problem if approximations are made; I don't need 100% precision. A fast algorithm that tells me there were 12345 visitors in a group instead of 12543 is better than a slow algorithm telling me the exact number. Let's say there can be a ~5% deviation. Is there an algorithm or class of algorithms that addresses this type of problem?
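    One way to frame an answer (a hedged sketch, not part of the original question): keep, for every place, the set of distinct visitors, and compute each group's count as the size of the union of the sets of its member places. Names such as place_visitors and groups below are illustrative. When the exact per-place sets become too large, each one can be replaced by a mergeable cardinality sketch such as HyperLogLog; unioning the sketches of a group's places then gives an estimate that typically stays within a few percent, which fits the ~5% tolerance mentioned above.

        // Exact baseline (C++17): unique visitors per group = size of the union of
        // the visitor sets of its places. Data mirrors the example in the question.
        #include <iostream>
        #include <string>
        #include <unordered_map>
        #include <unordered_set>
        #include <vector>

        using Visitors = std::unordered_set<std::string>;

        int main()
        {
            // visitor -> list of visited places (duplicates allowed, as in the question)
            std::unordered_map<std::string, std::vector<std::string>> visits = {
                {"A", {"x", "x", "y", "x"}},
                {"B", {}},
                {"C", {"z", "z"}},
                {"D", {"y", "x", "x", "z"}},
            };

            // group -> places it contains
            std::unordered_map<std::string, std::vector<std::string>> groups = {
                {"G1", {"x", "y", "z"}},
                {"G2", {"x", "z"}},
                {"G3", {"x", "y"}},
            };

            // Invert the visit lists: place -> set of distinct visitors.
            std::unordered_map<std::string, Visitors> place_visitors;
            for (const auto& [visitor, places] : visits)
                for (const auto& place : places)
                    place_visitors[place].insert(visitor);

            // For each group, union the visitor sets of its places and report the size.
            for (const auto& [group, places] : groups) {
                Visitors seen;
                for (const auto& place : places) {
                    auto it = place_visitors.find(place);
                    if (it != place_visitors.end())
                        seen.insert(it->second.begin(), it->second.end());
                }
                // Expected counts: G1=3, G2=3, G3=2 (iteration order may vary).
                std::cout << group << " -> " << seen.size() << '\n';
            }
        }

    The exact version answers the question for small data; the approximate variant swaps Visitors for a sketch offering the same two operations (insert a visitor, merge two sketches), which keeps memory per place roughly constant no matter how many visitors there are.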

    Read the article

  • CMS vs Admin Panel?

    - by Bob
    Okay, so this probably seems like an unusual, more grammar-related question, but I was unsure of what to call it. If you use software such as vBulletin or MyBB or even Blogger and you're the administrator (or hold some other, lesser position such as moderator) of the forum, or publisher/author of the blog, you generally have access to something of an "admin panel". For example, vBulletin's admin panel looks like this and Blogger's admin panel looks something like this. While they both look different and do different things, the goal is fundamentally the same: to provide the user with a method for adding, modifying, or deleting content... to let them control and administer their forum or blog. Also, they're both made specifically by the company for use in a specific product. Now, there are also options like Drupal. It seems to offer quite a bit more and be quite a bit more generalized. How does something like this work? If you were freelancing, would you deploy a website with Drupal, or would it be something the client might already have installed on their own server? I've never really used Drupal, only heard about it, so please let me know. Also, there seem to be other options like cPanel, a sort of global CMS that allows you to administer your entire website. How do those work in comparison to Drupal, or the administrative panels with vBulletin? They seem to serve related, but different purposes. Basically, what is the norm? If I'm developing a web application for a group that needs to be able to edit their website without going into the code or the database (or rather, wants to work in a graphical, easy-to-use content-management/admin panel), would it also be necessary to write my own miniature admin panel? Or would I be able to send them off knowing that they have cPanel? Or could something like Drupal fill this void? Again, I'm a little new to web development, and I'm working on planning out my first, real, large website. So I need a little advice on the standards and expectations for web development - security and coding practices aside, what should I be looking for as far as usability and administration for the client (who will be running the site once I'm done creating the website)? Any extra tips would also be appreciated! Oh, and just a little bit: I'm writing the website using Ruby on the Sinatra framework (both Ruby and Sinatra are things I'm fairly comfortable with) and I'm not being paid to make the website (and I will also be a user, and one of the three administrators of the website) - it's being built for a club I'm in.

    Read the article

  • JavaOne: Parleys.com, Spring Vs. Java EE and HTML5 tooling

    - by delabassee
    Parleys.com, a 2012 Duke's Choice Award winner, is an e-learning platform that hosts content from different sources (conferences, JUG meetings, etc.). There is a lot of technical content available for online as well as offline consumption, including many sessions on Java EE. Parleys has just released, for free, all the Devoxx 2011 sessions (video and slides sync'ed!). From a technical point of view, Parleys.com is interesting as they have switched from Spring to Java EE 6 to avoid being locked into a proprietary framework. During the GlassFish Community BoF, Stephan Janssen (Parleys.com and Devoxx founder) also presented how GlassFish is used to support 2000 concurrent Parleys users over a cluster of 2 GlassFish instances. Talking about Java EE and/or Spring, Harshad Oak has posted an update on the 'Spring Vs. Java EE' panel discussion that took place on Tuesday. As Arun said, standards such as Java EE do not necessarily stifle innovation: "JBoss Forge & Arquillian from RedHat are great examples of innovation in the JavaEE community. Standardization is important but innovation does continue even within that framework." Simplicity and productivity, along with HTML5, are the driving themes of Java EE 7. In terms of simplicity and productivity, the developer experience can also be improved by the tooling. Every NetBeans release comes with a large set of improvements, and the just-released NetBeans 7.3 beta is no exception. The goal of NetBeans 7.3's Project Easel is to improve HTML5 development, something that will be handy for Java EE 7 developers. Project Easel can, for example, communicate directly with Chrome's WebKit engine; this feature was shown during Sunday's Technical Keynote at the end of the Java EE section. In this beta release, Chrome and the embedded JavaFX browser are the only supported browsers, but the NetBeans team plans to add support, over time, for other WebKit-based browsers. See also: NetBeans 7.3 beta and the NetBeans 7.3 screencasts. Today (i.e. Wednesday 3rd) is also the final exhibition day, so make sure to visit the Java EE and the GlassFish pods on the Java DEMOgrounds (Hilton Grand Ballroom, 9:30 am - 5:00 pm). Finally, here are some Java EE and GlassFish related activities worth attending today if you are at JavaOne:

    Wednesday October 3rd (Time | Title | Location)
    - 8:30-9:30am | What's New in Servlet 3.1: An Overview | Parc 55 Mission
    - 8:30-9:30am | Bean Validation 1.1: What's New Under the Hood | Parc 55 Cyril Magnin II/III
    - 10:00-11:00am | JSR 353: Java API for JSON Processing | Parc 55 Mission
    - 10:00-12:00pm | Tutorial: Integrating Your Service into the GlassFish PaaS Platform | Parc 55 Devisidero
    - 11:30-12:30pm | What's New in JSF: A Complete Tour of JSF 2.2 | Parc 55 Cyril Magnin I
    - 11:30-12:30pm | Best of Both Worlds: Java Persistence with NoSQL and SQL | Parc 55 Mission
    - 1:00-2:00pm | Sharding Middleware to Achieve Elasticity and High Availability in the Cloud | Parc 55 Market Street
    - 1:00-2:00pm | Pimp My RESTful Java Applications | Parc 55 Cyril Magnin I
    - 3:00-4:00pm | Migrating Spring to Java EE | Parc 55 Cyril Magnin II/III
    - 4:30-5:30pm | JavaEE.Next(): Java EE 7, 8, and Beyond | Parc 55 Cyril Magnin II/III
    - 4:30-5:30pm | HTML5 WebSocket and Java | Parc 55 Cyril Magnin I
    - 4:30-5:30pm | Easy Middleware for Your Embedded Device | Nikko Ballroom II/III

    Read the article

< Previous Page | 395 396 397 398 399 400 401 402 403 404 405 406  | Next Page >