Search Results

Search found 11759 results on 471 pages for 'isolation level'.

  • When to use SOAP over REST

    So, how do REST-based services differ from SOAP-based services, and when should you use SOAP? Representational State Transfer (REST) uses standard HTTP/HTTPS as its interface, allowing clients to access resources through requested URIs. A URI may look like this: http://mydomain.com/service/method?parameter=var1&parameter=var2. It is important to note that REST-based services are stateless because HTTP/HTTPS is natively stateless. One of the many benefits of using HTTP/HTTPS as the interface is caching. A web service can be cached much like a requested web page, which reduces web server processing and improves response times because the content has already been processed and stored for immediate access. Typical actions performed by REST-based services are generic CRUD (Create, Read, Update, and Delete) operations and other operations that do not require state.

    Simple Object Access Protocol (SOAP), on the other hand, uses a generic interface to transport messages. Unlike REST, SOAP can run over HTTP/HTTPS, SMTP, JMS, or any other standard transport protocol. Furthermore, SOAP utilizes XML in the following ways:

    - to define a message
    - to define how a message is to be processed
    - to define the encoding of a message
    - to lay out procedure calls and responses

    Where REST aligns with a Resource view, SOAP aligns with a Method view: business logic is exposed as methods, typically through a SOAP web service, because those methods can retain state. In addition, SOAP requests are not cached, so every request is processed by the server. Because SOAP retains state, it has a special advantage over REST for services that need to perform transactions in which multiple calls to a service are needed to complete a task. SOAP is also better suited to enterprise-level services that implement standard exchange formats in the form of contracts, because REST does not currently support this. A real-world example of where SOAP is preferred over REST can be seen in the banking industry, where money is transferred from one account to another. SOAP would allow a bank to perform a transaction on an account and, if the transaction failed, SOAP would automatically retry it, ensuring that the request was completed. With REST, unfortunately, failed service calls must be handled manually by the requesting application.
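
    To make the contrast concrete, here is a minimal sketch in Python (using the well-known requests library) of the same balance lookup made both ways. The endpoint, service, and message names are hypothetical, invented for illustration only.

        import requests

        # REST: the resource is addressed by the URI and the HTTP verb carries
        # the intent; a GET like this can be cached by intermediaries.
        resp = requests.get("http://mydomain.com/service/accounts/42/balance")
        print(resp.status_code, resp.text)

        # SOAP: a generic POST carries an XML envelope; the operation and its
        # arguments live inside the message body, not in the URI.
        envelope = """<?xml version="1.0"?>
        <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
          <soap:Body>
            <GetBalance xmlns="http://mydomain.com/banking">
              <AccountId>42</AccountId>
            </GetBalance>
          </soap:Body>
        </soap:Envelope>"""
        resp = requests.post("http://mydomain.com/service/soap",
                             data=envelope,
                             headers={"Content-Type": "text/xml; charset=utf-8"})
        print(resp.status_code)

    Note how the SOAP request is a POST regardless of what the operation does, which is one reason intermediaries cannot cache it the way they can cache the REST GET.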

  • New channels for Exadata 11.2.3.1.1

    - by Rene Kundersma
    With the release of Exadata 11.2.3.1.0 back in April 2012, Oracle deprecated the minimal pack for the Exadata Database Servers (compute nodes). From that release on, the Linux Database Server updates are done using ULN and YUM. For the 11.2.3.1.0 release, the ULN exadata_dbserver_11.2.3.1.0_x86_64_base channel was made available, and Exadata operators could subscribe their systems to it via linux.oracle.com. With the new 11.2.3.1.1 release, two additional channels are added:

    - a 'latest' channel (exadata_dbserver_11.2_x86_64_latest)
    - a 'patch' channel (exadata_dbserver_11.2_x86_64_patch)

    The patch channel has the packages that are new or updated in 11.2.3.1.1 relative to the base channel. The latest channel has all the packages from the 11.2.3.1.0 base and patch channels combined. From here, there are three possible situations a Database Server can be in before it is updated to 11.2.3.1.1:

    1. The Database Server is on an Exadata release earlier than 11.2.3.1.0
    2. The Database Server is patched to 11.2.3.1.0
    3. The Database Server is freshly imaged to 11.2.3.1.0

    In all three cases the same approach (using YUM) can be used to bring a Database Server to 11.2.3.1.1, with some minor differences. For Database Servers on a release earlier than 11.2.3.1.0, the following high-level steps need to be performed:

    - Subscribe to el5_x86_64_addons, ol5_x86_64_latest and exadata_dbserver_11.2_x86_64_latest
    - Create a local repository
    - Point the Database Server to the local repository (during this process a one-time action needs to be done; details are in the README)
    - Install the update

    For Database Servers patched to 11.2.3.1.0:

    - Subscribe to the patch channel exadata_dbserver_11.2_x86_64_patch
    - Create a local repository
    - Point the Database Server to the local repository
    - Update the system

    For Database Servers freshly imaged to 11.2.3.1.0, the steps are the same as for patched servers. The difference between situation 2 (patched to 11.2.3.1.0) and situation 3 (freshly imaged to 11.2.3.1.0) is that in situation 2 the existing Exadata-computenode.repo file needs to be edited, while in situation 3 this file does not exist and needs to be created or copied. Another difference is that you will end up with more OFA packages installed in situation 2, because none are removed during the update process.

    The YUM update functionality with the new channels is a great enhancement to the Database Server update procedure. As usual, the updates can be done in a rolling fashion, so no database service downtime is required. For detailed and up-to-date instructions, always see the patch READMEs (note 1466459.1, patch 13998727, note 888828.1).

    Rene Kundersma

  • Book review: Peopleware: Productive Projects and Teams

    - by DigiMortal
    Peopleware by Tom DeMarco and Timothy Lister is a golden classic that can be considered mandatory reading for software project managers, team leads, higher-level management, and board members of software companies. If you make decisions about people, then you cannot miss this book. If you are already good at managing developers, this book can make you even better - you will certainly learn new things about successful development teams.

    Why Peopleware? Peopleware gives you very good hints about how to build a working environment for project teams where people can really do their work. The book also covers team building, which is equally important reading. As a software developer I found practically all the points in this book to be accurate and valid. Many times I have found myself thinking about the same things, and Peopleware made me more confident in my opinions. Peopleware also covers time management and planning topics that help you make far better use of developers' time by minimizing the interruptions from phone calls, pointless meetings, and i-want-to-know-what-are-you-doing-right-now questions from managers who don't write code anyway. I think that if you follow the suggestions given in Peopleware, your developers will be very happy.

    I suggest you also read another great book - Death March by Edward Yourdon. Death March effectively describes what happens when the good advice given in Peopleware is totally ignored or, worse yet, people are treated in exactly the opposite way. I consider Death March a golden classic as well, and I strongly recommend you read it too.

    Table of Contents

    Acknowledgments
    Preface to the Second Edition
    Preface to the First Edition

    Part I: Managing the Human Resource
    Chapter 1: Somewhere Today, a Project Is Failing
    Chapter 2: Make a Cheeseburger, Sell a Cheeseburger
    Chapter 3: Vienna Waits for You
    Chapter 4: Quality - If Time Permits
    Chapter 5: Parkinson's Law Revisited
    Chapter 6: Laetrile

    Part II: The Office Environment
    Chapter 7: The Furniture Police
    Chapter 8: "You Never Get Anything Done Around Here Between 9 and 5"
    Chapter 9: Saving Money on Space
    Intermezzo: Productivity Measurement and Unidentified Flying Objects
    Chapter 10: Brain Time Versus Body Time
    Chapter 11: The Telephone
    Chapter 12: Bring Back the Door
    Chapter 13: Taking Umbrella Steps

    Part III: The Right People
    Chapter 14: The Hornblower Factor
    Chapter 15: Hiring a Juggler
    Chapter 16: Happy to Be Here
    Chapter 17: The Self-Healing System

    Part IV: Growing Productive Teams
    Chapter 18: The Whole Is Greater Than the Sum of the Parts
    Chapter 19: The Black Team
    Chapter 20: Teamicide
    Chapter 21: A Spaghetti Dinner
    Chapter 22: Open Kimono
    Chapter 23: Chemistry for Team Formation

    Part V: It's Supposed to Be Fun to Work Here
    Chapter 24: Chaos and Order
    Chapter 25: Free Electrons
    Chapter 26: Holgar Dansk

    Part VI: Son of Peopleware
    Chapter 27: Teamicide, Revisited
    Chapter 28: Competition
    Chapter 29: Process Improvement Programs
    Chapter 30: Making Change Possible
    Chapter 31: Human Capital
    Chapter 32: Organizational Learning
    Chapter 33: The Ultimate Management Sin Is
    Chapter 34: The Making of Community

    Notes
    Bibliography
    Index
    About the Authors

  • ArchBeat Link-o-Rama for 2012-08-28

    - by Bob Rhubart
    You may be tempted by IaaS, but you should PaaS on that or your database cloud journey will be a short one
    "The better option [to IaaS] is to rationalize the deployment stack so that VMs are needed only for exceptional cases," says B. R. Clouse. "By settling on a standard operating system and patch level, you create an infrastructure that potentially all of your databases can share. Now, the building block will be database instances or possibly schemas within databases. These components are the platforms on which you will deploy workloads, hence this is known as Platform as a Service (PaaS)."

    'Shadow IT' can be the cloud's best friend | David Linthicum
    "I do not advocate that IT give up control and allow business units to adopt any old technology they want," says InfoWorld cloud computing blogger David Linthicum. "However, IT needs to face reality: For the past three decades or so, corporate IT has been slow on the uptake around the use of productive new technologies." Do you agree?

    9 ways cloud will impact IT employment | ZDNet
    ZDNet blogger Joe McKendrick condenses information from a recent report on how cloud computing will impact IT jobs. Number one on the list: new categories of jobs arising from cloud computing, including "private cloud developers and administrators, departmental liaisons, integration specialists, cloud architects, and compliance specialists." Yeah, that's right, cloud architects. For more on cloud architects, including what you need to up your game to thrive in the cloud, check out "The Role of the Cloud Architect" on the OTN ArchBeat Podcast.

    Decisions, Decisions: The art, science, and politics of technology selection
    "When the time comes for a solution architect to make the final decision about the technologies, standards, and other elements that are to be incorporated into a particular project, what factors weigh most heavily on that decision? It comes as no surprise that among the architects I contacted, business needs top the list."

    Managing Oracle Exalogic Elastic Cloud with Oracle Enterprise Manager Ops Center
    Anand Akela's byline is on this post, but "Dr. Jürgen Fleischer, Oracle Enterprise Manager Ops Center Engineering" appears at the end of it, so it's anybody's guess as to who wrote the thing. But the content includes a complete listing of the Exalogic 2.0.1 Tea Break Snippets series, written by a member of the Exalogic team who goes by the name "The Old Toxophilist." So maybe the best thing to do here is ignore the names and focus on the very useful content.

    Boost your infrastructure with Coherence into the Cloud | Nino Guarnacci
    Nino Guarnacci describes a use case that involves managing a variety of data caches that process complex queries and parallel computational operations, in order to keep the caches in a consistent state across different server instances.

    Thought for the Day
    "No one hates software more than software developers." - Jeff Atwood
    Source: SoftwareQuotes

  • NVidia with Optimus conflicting in Ubuntu 12.04

    - by Humannoise
    I have recently installed Ubuntu 12.04 on an Intel Ivy Bridge machine with integrated graphics and an NVidia GPU with Optimus technology; however, I can't get it to work properly. I have already tried the Bumblebee project solution, but I get the following messages when trying to run anything with the nvidia card (e.g. with optirun firefox):

        [ERROR]The Bumblebee daemon has not been started yet or the socket path /var/run/bumblebee.socket was incorrect.
        [ERROR]Could not connect to bumblebee daemon - is it running?

    Since the nvidia card is not working properly, some software like Scilab, which makes use of the X11 system for graphics handling and plotting, won't work either. My BIOS has no option concerning the graphics card, and the daemon log returned:

        Jul 5 16:10:51 humannoise-W251ESQ-W270ESQ bumblebeed[980]: Module 'nvidia' is not found.
        Jul 5 16:10:51 humannoise-W251ESQ-W270ESQ kernel: [ 17.943272] init: bumblebeed main process (980) terminated with status 1
        Jul 5 16:10:51 humannoise-W251ESQ-W270ESQ kernel: [ 17.943288] init: bumblebeed main process ended, respawning
        Jul 5 16:10:51 humannoise-W251ESQ-W270ESQ bumblebeed[1026]: Module 'nvidia' is not found.

    lspci -nn | grep '\[030[02]\]:' returned:

        00:02.0 VGA compatible controller [0300]: Intel Corporation Ivy Bridge Graphics Controller [8086:0166] (rev 09)
        01:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:0de9] (rev a1)

    For the command dpkg -l | grep '^ii' | grep nvidia I got:

        ii bumblebee-nvidia 3.0-2~preciseppa1 nVidia Optimus support using the proprietary NVIDIA driver
        ii nvidia-current 302.17-0ubuntu1~precise~xup1 NVIDIA binary Xorg driver, kernel module and VDPAU library
        ii nvidia-current-updates 295.49-0ubuntu0.1 NVIDIA binary Xorg driver, kernel module and VDPAU library
        ii nvidia-settings 302.17-0ubuntu1~precise~xup3 Tool for configuring the NVIDIA graphics driver
        ii nvidia-settings-updates 295.33-0ubuntu1 Tool for configuring the NVIDIA graphics driver

    After a full reinstallation, including the removal of any previous nvidia driver, lsmod | grep -E 'nvidia|nouveau' returned:

        nvidia 10888310 46

    dmesg | grep -C3 -E 'nouveau|NVRM' returned things like:

        [ 1875.607283] nvidia 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
        [ 1875.607289] nvidia 0000:01:00.0: setting latency timer to 64
        [ 1875.607293] vgaarb: device changed decodes: PCI:0000:01:00.0,olddecodes=io+mem,decodes=none:owns=none
        [ 1875.607363] NVRM: loading NVIDIA UNIX x86_64 Kernel Module 302.17 Tue Jun 12 16:03:22 PDT 2012
        [ 1884.830035] nvidia 0000:01:00.0: PCI INT A disabled
        [ 1884.832058] bbswitch: disabling discrete graphics
        [ 1884.832960] bbswitch: Result of Optimus _DSM call: 09000019

    Some programs, like Scilab, are now working fine under optirun (e.g. optirun scilab). Thank you.

  • Becoming a "maintenance developer"

    - by anon
    So I've kind of been getting angry about the current position I'm in, and I'd love to get other developers' input on this.

    I've been at my current place of employment for about 11 months now. When I began, I was working on all new features. I basically worked on an entire new web project for the first 5-6 months I was here. After that, I was moved to more of a service-oriented role (which was still great - all new stuff for me), and I was in that role for about the past 5-6 months.

    Here's where the problem comes in. A couple of days ago I was made the support/maintenance guy. Now, we have an IT support team, so I'm not talking about that kind of support; I'm talking about more of a second-level support role (for when the guys on the surface can't really get to the root of the issue), coupled with working on maintenance issues that have been lingering in the backlog for a while. To me, a developer with about 3 years of experience, this is kind of disheartening. With the type of workplace this is, I wouldn't be surprised if these support issues take up most of my days and I barely make it to working on maintenance issues. Also, most of these support issues aren't even related to code; they are more or less just knowing the system architecture, making sure services are running/getting started properly, handling/fixing bad data, etc. I'm a developer, so this part sucks. Even when I do have time to work on maintenance, it's basically just bug fixes/improving bad code, so this sucks as well - although at least it's related to coding.

    Am I wrong for getting angry here? I don't want to really complain about it, but to be honest, I wasn't spoken to about this or anything; I was kind of just sent an e-mail letting me know I'm the guy for this type of thing, and that was that. The entire team took a few minutes to give me their "that sucks" talk, because they know how annoying it is to be on support for the type of work we do, so I know I'm not the only one who knows it's not that great of an opportunity.

    I'm just kind of on the fence about how to move forward. Obviously I'm just going to continue working for the time being - no point making a bad impression on anybody - but I'd like to know how you guys would approach this situation, or how you think I should be feeling about it / how you guys would feel. Thanks guys.

  • SQLAuthority News – Download Whitepaper – SQL Server Analysis Services to Hive

    - by pinaldave
    SQL Server Analysis Services is a very interesting subject and I have always enjoyed learning about it. You can read my earlier article over here. Big Data is my new interest and I have been exploring it recently. This weekend, this blog post caught my attention and I enjoyed reading it. Big Data is the next big thing, with growth predicted to be 60% per year until 2016. There is no single solution in the market right now for the growing needs of big data, nor is there one in the business-intelligence ecosystem, yet the need for a solution keeps increasing.

    I am personally a Klout user. You can see my Klout profile over here. I do understand what Klout is trying to achieve - a single place to measure a person's influence. However, it works a bit mysteriously. There are plenty of social media sites on the internet today, and the biggest problem they all face is that everybody opens an account but hardly anyone logs back in. To overcome this issue and get returning visitors, Klout has come up with a system where visitors can give 5/10 K+ to other users in a particular area. Looking at all these activities, Klout is indeed a big consumer of Big Data as well as an early adopter of Hadoop-based systems. Klout has close to 1 trillion rows of data to be analyzed and a nearly thousand-terabyte warehouse.

    Hive supports ad-hoc queries using HiveQL, but there are always better solutions. An alternative is to use SQL Server Analysis Services (SSAS) along with HiveQL. As there is no direct way to achieve this, a few common workarounds are already in place. A new ODBC driver from Klout has broken through the limitation, and the SQL Server Relational Engine can be used as an intermediate stage before SSAS. This white paper discusses these solutions in depth, covering the following important concepts:

    - The Klout Big Data solution
    - Big Data analytics based on Analysis Services
    - Hadoop/Hive and Analysis Services integration
    - Limitations of direct connectivity
    - Pass-through queries to linked servers
    - Best practices and lessons learned

    The white paper covers all the important concepts that have enabled Klout to go to the next level with its offerings and improve efficiency with a few out-of-the-box solutions. I personally enjoyed reading this white paper and I encourage all of you to do so: SQL Server Analysis Services to Hive.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, T SQL, Technology
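
    As a rough illustration of the ad-hoc query side of this, here is a minimal Python sketch that issues HiveQL through a Hive ODBC driver using the pyodbc library. The DSN, table, and column names are hypothetical; the actual driver configuration is covered in the white paper.

        import pyodbc

        # Assumes a Hive ODBC data source named "HiveDSN" has been configured.
        conn = pyodbc.connect("DSN=HiveDSN", autocommit=True)
        cur = conn.cursor()

        # An ad-hoc HiveQL aggregation; in the pass-through pattern, a result
        # set like this is staged in the SQL Server relational engine before
        # Analysis Services consumes it.
        cur.execute("SELECT user_id, COUNT(*) AS events "
                    "FROM activity GROUP BY user_id")
        for row in cur.fetchmany(10):
            print(row.user_id, row.events)
        conn.close()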

  • Ubuntu 10.10 forgets desktop theme.

    - by Marcelo Cantos
    (I posed this question on superuser.com and haven't received any answers or comments; then I came across this site, so my apologies to anyone who has seen it already.)

    I am running Ubuntu in VirtualBox (on a Windows 7 host). Several times now, the top-level menu bar, the task bar - and seemingly every system dialog - have forgotten the out-of-the-box "Ambiance" theme they conformed to when I first installed the system. Window captions still preserve the theme, but pretty much nothing else does. I have searched high and low on Google for assistance with this problem. Everything I've found suggests either running some gconf reset or deleting the .gconf*, .gnome*, and other similar directories. I have followed all this advice and nothing works. I still get a boring Windows-95-style gray 3D look and feel.

    On previous occasions, after much messing around I've given up and rebooted the VM instance, and been pleasantly surprised to see the original "Ambiance" theme restored throughout the UI. But invariably it disappears again some time later, usually after a reboot, so I can never figure out what I did that broke it. Here's a sample from Ubuntu's site of what I want it to look like. And here's a screenshot of my system as it currently looks. Also note that my GNOME Terminals normally have a nice purple semi-translucent look, and as can be seen from the screenshot, they are now just a solid matte white.

    This last time (just yesterday), trying numerous combinations of all the usual tricks and rebooting several times hasn't fixed it, so here I am on SU wondering: how do I recover the out-of-the-box theme for my GNOME/Ubuntu desktop, given that blowing away all config files - as suggested in many places online - fails to achieve this?

    It might help to know that it seems to fail either after I resize the VM instance, forcing the Ubuntu desktop to resize itself, or after I play around with Compiz settings. I haven't been able to figure out which of these it is, and it could be neither. Given the amount of pain I have had to go through to get things back to normal (and given that I am at a loss as to how to do so), it has proven difficult to definitively isolate the cause.

  • XAML RadControls Q1 2010 Official

    The Q1 2010 release focuses on strengthening three main aspects of RadControls for Silverlight and RadControls for WPF:

    - ensuring first-class performance for all data-centric controls through various techniques
    - enhancing and polishing the RadControls themes
    - providing highly advanced, enterprise-level features, especially for the data visualization controls

    We know that performance is crucial for line-of-business applications. Therefore, we always make sure that RadControls can help you achieve unmatched performance - this has always been our number one priority. RadControls achieve unbeatable performance through UI and data virtualization, data sampling, and built-in load-on-demand features. Several of the major controls in the bundles have been enhanced with UI virtualization support: Scheduler, CoverFlow, and Book.

    As part of Q1 2010 we also want to bring unparalleled visual richness to your applications. To achieve that, we have done a major rework of all our themes. We used a uniform templating approach across all controls, streamlined naming conventions for resources, and delivered a much more consistent look for the controls along the way. The RadControls for WPF bundle has been enriched with two new controls: Map and Book.

    Another new control is included in the Q1 2010 release; however, it remains in a CTP stage. This is the Transition control. We decided this is the better way to proceed, as we will need more input from our community on exactly how to develop this control further. We will therefore be blogging regularly on the development progress, so that we can clearly indicate the direction in which the control is evolving and gather your feedback on whether this is the best direction.

    Our charting controls for Silverlight and WPF have been advanced with major new features such as data sampling, zooming and scrolling, automatic SmartLabels positioning, sorting and filtering, and many more. The new built-in paging of the GridView control now allows you to page through your data, resulting in an even faster and more responsive grid that can easily handle enormously large datasets.

  • How to debug lag using Bluetooth connected mouse and A2DP headset?

    - by gertvdijk
    I own a Logitech M555b mouse (since a week ago) for use with my HP EliteBook 8570w laptop running Kubuntu 12.04. It works fine right after connecting via the KDE Bluetooth control module. However, after some seemingly random amount of time, it starts to lag: movements are delayed by roughly 500 ms for a short period of time. Usually it recovers after some time too, but it can take minutes. All actions are delayed - movements, clicks, scrolls - and the movements can be choppy during these times. A workaround that always works, for the same short period of time, is to disconnect and re-connect the mouse via the same KDE Bluetooth control module.

    What have I tried already?

    - Running this at boot time, to disable any power-saving features on the Bluetooth hci0 device:

        echo on > `readlink -f /sys/class/bluetooth/hci0`/../../../power/level

    - Checking the mouse's batteries (it's just a week old; other new batteries give the same result).
    - Checking logs and kernel messages for Bluetooth-related entries: none, aside from the expected messages at connect time.

    I'm running kernel 3.5.0-13-generic as provided in the xorg-edgers PPA. Booting the regular 3.2 Precise kernel results in the same behaviour.

    Some other information that may help:

    - It happens when no other Bluetooth connections are active on the machine.
    - Similar symptoms also occur on my Bluetooth stereo (A2DP) headset, but there it is audio lagging and skipping. Swapping Bluetooth profiles as described here helps in that case. Conclusion: it's not the mouse that's faulty.
    - The headset always worked fine with my now-dead ThinkPad T61p with built-in Bluetooth.
    - The Bluetooth module in my laptop is connected via USB and shows up as: Bus 002 Device 003: ID 0a5c:21e1 Broadcom Corp.
    - I'm mobile, and several people around me are using Bluetooth at work (mostly A2DP). It also occurs at home, where my neighbours are probably using Bluetooth as well. It could just be radio interference, but I think Bluetooth connections should just hop to another channel. Moreover, it works properly the instant I re-connect.

    Therefore I think it's a software driver issue, and I'd like to debug it. Is there any way to get more verbose logging from the Bluetooth(-hid) modules?

  • How To Deal With Terrible Design Decisions

    - by splatto
    I'm a consultant at one company. There is another consultant, who is a year older than me and has been here 3 months longer than I have, and a full-time developer. The full-time developer is great. My concern is that I see the other consultant making absolutely terrible design decisions. For example, M:M relationships are being stored in the database as a comma-delimited string rather than using a junction table to hold the relationships.

    For example, consider two tables, Car and Property. Car records: Camry, Volvo, Mercedes. Property records: Spare Tire, Satellite Radio, Ipod Support, Standard. Rather than making a CarProperties table to represent this, he has made a "Property" attribute on the Car table whose data looks like "1,3,7,13,19,25,".

    I hate how this decision and others are affecting the quality of my code. We have butted heads over this design three times in the two months since I've been here. He asked me why my suggestion was better, and I responded that our database would eliminate redundant data by converting to a higher normal form. I explained that this particular design flaw is discussed and discouraged in entry-level college programs, and he responded with a shot at me, saying that these comma-separated-value database properties are taught when you do your masters (which neither of us has). Needless to say, he became very upset and demanded I apologize for criticizing his work, which I did in the interest of not being the consultant who creates office drama.

    Our project manager is focused on delivering a product ASAP and is a very strong personality - suggesting to him at this point that we spend some time to do this right will set him off. There is a strong likelihood that both of our contracts will be extended to work on a second project coming up. How will I be able to exert dominant influence over the design of the system and the data model to ensure that such terrible mistakes are not repeated in the next project?

    A glimpse at the dynamics:

    - I can be a strong personality if I don't measure myself.
    - The other consultant is not a strong personality, is a poor communicator, is quite stubborn, and thinks he is better than everyone else.
    - The project manager is an extremely strong personality who is focused on releasing tomorrow's product yesterday.
    - The full-time developer is very laid back and easygoing, a very effective communicator, but someone who will accept bad design if it means not rocking the boat.

    Code reviews or anything else that takes "time" will be out of the question - there is no way our PM will be sold on such a thing by anybody.
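
    For contrast, here is a minimal, self-contained sketch of the junction-table design, using SQLite so it runs anywhere; the table and column names follow the Car/Property example above.

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
        CREATE TABLE Car      (CarId      INTEGER PRIMARY KEY, Name TEXT);
        CREATE TABLE Property (PropertyId INTEGER PRIMARY KEY, Name TEXT);
        -- One row per (car, property) pair replaces the comma-delimited string.
        CREATE TABLE CarProperty (
            CarId      INTEGER REFERENCES Car(CarId),
            PropertyId INTEGER REFERENCES Property(PropertyId),
            PRIMARY KEY (CarId, PropertyId)
        );
        """)
        db.execute("INSERT INTO Car VALUES (1, 'Camry')")
        db.executemany("INSERT INTO Property VALUES (?, ?)",
                       [(1, 'Spare Tire'), (3, 'Ipod Support')])
        db.executemany("INSERT INTO CarProperty VALUES (?, ?)", [(1, 1), (1, 3)])

        # The join that the comma-delimited design cannot express without
        # string parsing:
        query = """
        SELECT Car.Name, Property.Name
        FROM Car
        JOIN CarProperty USING (CarId)
        JOIN Property    USING (PropertyId)
        """
        for car, prop in db.execute(query):
            print(car, prop)

    The database can now enforce the relationship with keys, and a question like "which cars have Ipod Support" becomes an index-friendly join instead of a string scan.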

  • CIO's Corner: Achieving a Balance

    - by Michelle Kimihira
    Author: Rick Beers, Senior Director, Product Management, Oracle Fusion Middleware

    All too often, a CIO is unfairly characterized as either technology-focused or business-focused; as more concerned with either infrastructure performance or business excellence. It seems to me that this completely misses the point. I have long thought that a CIO holds probably the most complex C-level position in an enterprise, one that requires an artful balance among four entirely different constituencies, often with competing values and needs. How a CIO balances these is the single largest determinant of success.

    I was reminded of this while reading Maria Bartiromo's excellent interview of Mark Hurd in a recent issue of USA Today (Bartiromo: Oracle's Hurd is in tech sweet spot). The interview covers topics such as Big Data, leadership, and Oracle's growth strategy. But the topic that really got my interest, and reminded me of the need for balance, was IT spending trends, on which Mark Hurd observed: "...budgets are tight. What most of our customers have today is both an austerity plan to save money and at the same time a plan to reapply that money to innovation. There isn't a customer we have that doesn't have an austerity plan and an innovation plan."

    In an era of economic uncertainty and an accelerating pace of business change, this is probably the toughest balance a CIO must achieve. Yet for far too many IT organizations, operating costs consume over 75% of their budgets, leaving precious little for innovation and investment in business-critical technology programs. I have found that many CIOs are trapped by their enterprise systems platforms, which were originally architected for Standardization, Compliance, and tightly integrated linear Workflows. Yes, these traits are still required for specific reasons and cannot be compromised. But they are no longer enough. New demands are emerging: the explosion in the volume and diversity of Data, the Consumerization of IT, the rise of Social Media, and the need for continual Business Process Reengineering. These were simply not the design criteria for Enterprise 1.0, and attempting to meet them with current systems platforms results in an escalation in complexity and a resulting increase in operating costs for many IT organizations. This is the cost-versus-investment trap, and it is what most constrains CIOs from achieving the balance they need.

    But there is a way out of this trap. Enterprise 2.0 represents an entirely new enterprise systems architecture, one that is 'Business-Centric' rather than 'ERP-Centric', which defined the architecture of Enterprise 1.0. Oracle's best-in-class suite of Fusion Middleware products enables a layered approach to enterprise systems architectures that provides the balance an enterprise needs. The most exciting part of all this? The bottom two layers are focused on reducing costs, and the upper two layers provide business value and innovation. Finally, the balance a CIO needs.

    Additional Information
    - Product information on Oracle.com: Oracle Fusion Middleware
    - Follow us on Twitter and Facebook
    - Subscribe to our regular Fusion Middleware Newsletter

  • How do I import my first sprites?

    - by steven_desu
    Continuing from this question (new question - now unrelated).

    I have a thorough background in programming already (algorithms, math, logic, graphing problems, etc.); however, I've never attempted to code a game before. In fact, I've never had anything more than minimal input from a user during the execution of a program. Generally input was given from a file or passed through the console, all necessary functions were performed, and then the program terminated with an output. I decided to try to get in on the world of game development. From several posts I've seen around gamedev.stackexchange.com, XNA seems to be a favorite, and it was recommended to me when I asked where to start. I've downloaded and installed Visual Studio 2010 along with the XNA Framework, and now I can't seem to get moving in the right direction.

    I started out looking on Google for "xna game studio tutorial", "xna game development beginners", "my first xna game", etc. I found lots of crap. The official "Introduction to Game Studio 4.0" gave me this (plus my own train of thought happily pasted on top thanks to MSPaint): http://tinypic.com/r/2w1sgvq/7. The "Get Additional Help" link (my best guess, since there was no "Continue" or "Next" link) led me to this page: http://tinypic.com/r/2qa0dgx/7. I tried every page. The forum was the only thing that seemed helpful; however, searching for "beginner", "newbie", "getting started", "first project", and similar on the forums turned up many threads with specific questions that are a bit above my level ("beginner to collision detection", for instance).

    Disappointed, I returned to the XNA Game Studio home page. Surely their own website would have some introduction, tutorial, or at least a useful link to a community. EVERYTHING on their website was about coding Windows Phone 7... everything: http://tinypic.com/r/10eit8i/7 http://tinypic.com/r/120m9gl/7

    Giving up on any official documentation after a while, I went back to Google and managed to locate www.xnadevelopment.com. The website is built around XNA Game Studio 3.0, but how different can 3.0 be from 4.0?... Apparently different enough: http://tinypic.com/r/5d8mk9/7 http://tinypic.com/r/25hflli/7

    Figuring that this was the correct folder, I right-clicked: http://tinypic.com/r/24o94yu/7. Hmm... maybe by "Add Content Reference" they mean "Add a reference to an existing file (content)"? Let's try it (after all, it's my only option): http://tinypic.com/r/2417eqt/7

    At this point I gave up. I'm back. My original goal in my last question was to create a keyboard-navigable 3D world (no physics necessary, no logic or real game necessary). After my recent failures my goal has been revised: I want to display an image on the screen. Hopefully in time I'll be able to move it with the keyboard.

  • Telesharp – An Application Repository for .NET applications

    - by cibrax
    A year ago, we released SO-Aware as the first product from Tellago Studios. SO-Aware represented a new way to manage web services and all the related artifacts, like configuration, tests, or monitoring data, in the Microsoft stack. It was based on the idea of a lightweight SOA governance approach with a central repository exposed through RESTful services. At that point, we thought the same idea could be extended to enterprise applications in general by providing a generic repository for many of the runtime or design-time artifacts generated during development: configuration, application description or topology (a high-level view of the components that make up a system), logging information, or binaries. It took us several months to give that idea a form and implement it as a product, but it is finally here, and I am very proud to announce its release today under the name "TeleSharp".

    In a nutshell, TeleSharp provides the following features:

    1. Configure your application topology in a central repository. Application topology in this context means that you can decompose your application and describe it in terms of components and how they interact with each other. For example, you can say that the CRM system is made up of a couple of WCF services and an ASP.NET MVC front end.
    2. Centralize configuration for your applications and components. You can import existing .NET configuration sections into the repository and associate them with the different components. In addition, environment overrides are supported for the configuration sections. We provide tooling and extensions in Visual Studio for managing all the configuration, and a set of PowerShell commands for automating the configuration deployment.
    3. Browse all the assemblies and types in your application servers remotely, in a web browser, using an interface similar to any of the existing .NET reflection tools. This way you can easily determine whether the server is running the correct version of your applications.
    4. Centralize logging and exception management in the repository. You get different reports and a pivot viewer experience for browsing all the logging information generated by your applications. In addition, TeleSharp provides different providers for pushing logging information to the central repository using well-known frameworks like ELMAH, Log4Net, EntLib, or even Windows ETW.

    The central repository itself is implemented as a set of OData services that any application can easily consume over regular HTTP. You can read more details in this introductory post. If you think this product can be a good fit for your organization, you can request a trial version on our Tellago Studios website.
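
    Since the repository is exposed as OData over plain HTTP, any client that can issue a GET can query it. Here is a minimal Python sketch of that idea; the repository URL, entity set, and field names are hypothetical, and the exact shape of the JSON payload depends on the OData version exposed.

        import requests

        resp = requests.get(
            "https://repository.example.com/odata/LogEntries",
            params={"$filter": "Severity eq 'Error'", "$top": "10"},
            headers={"Accept": "application/json"},
        )
        resp.raise_for_status()
        for entry in resp.json().get("value", []):
            print(entry.get("Timestamp"), entry.get("Message"))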

  • Solita Oy Achieves Oracle PartnerNetwork Specialization

    - by michaela.seika(at)oracle.com
    Helsinki, February 2, 2011 - Solita Oy, a member of the Oracle® PartnerNetwork (OPN), is the first Finnish enterprise to achieve OPN Specialized status for customer-specific systems integration and software solutions.

    To achieve Specialized status, Oracle partners are required to meet a stringent set of requirements that are based on the needs and priorities of the customer and partner community. By achieving the Specialized distinction, Solita Oy has been recognized by Oracle for its expertise in customer-specific systems integration and software solutions, achieved through competency development and demonstrated by the company's business results and proven success in implementing customer projects.

    "Solita and Oracle have cooperated for a long time, and we have been an Oracle partner for many years. We believe that the renewed partner program and the new partnership level that we have achieved will open up new opportunities for closer collaboration with Oracle. Our increased focus on systems integration solutions and the stepping up of our specialized knowledge of SOA will enable us to provide even better solutions for our customers," said Jari Niska, Chief Executive Officer, Solita Oy.

    "Solita has shown trust and belief in Oracle's technology and in the business opportunities arising with it. They have contributed to building our cooperation in a consistent and systematic way. Achieving Specialized status in our partner program is a natural further step in our close and committed cooperation. It strengthens our trust in our ability to increase both turnover and profitability together," said Juha Kaskirinne, Alliances and Channel Leader, Oracle Finland Oy.

    About Oracle PartnerNetwork
    Oracle PartnerNetwork (OPN) Specialized is the latest version of Oracle's partner program that provides partners with tools to better develop, sell, and implement Oracle solutions. OPN Specialized offers resources to train and support specialized knowledge of Oracle products and solutions, and has evolved to recognize Oracle's growing product portfolio, partner base, and business opportunity. Key to the latest enhancements to OPN is the ability for partners to differentiate through Specializations. Specializations are achieved through competency development, business results, expertise, and proven success. To find out more, visit http://www.oracle.com/partners or connect with the Oracle Partner community at OPN on Twitter, OPN on Facebook, OPN on LinkedIn, and OPN on YouTube.

    About Solita Oy
    Solita Oy is a Finnish company dedicated to developing demanding information system solutions and IT professional services. Solita's customers include prominent Finnish companies and public organizations. Solita's turnover in 2010 was about 17 million euros. The company was founded in 1996 and has over 170 employees. Further information: www.solita.fi

    Further information:
    Jari Niska, CEO, Solita Oy, tel. +358 40 524 6400, [email protected]
    Juha Kaskirinne, A&C Leader Finland, Oracle Finland Oy, tel. +358 40 506 3592, [email protected]

  • 2011 - ALMs for your development team and the people they work with.

    - by David V. Corbin
    Welcome to 2011; it is already shaping up to be a very exciting year. The title of this post is not about charitable giving, although that is also a great topic. It is about Application Lifecycle Management and the systems that support that environment, and 2011 will be a year when I expect many teams to invest heavily in this area.

    For those not familiar with ALM, it can be simplified down to "a comprehensive view of all of the ideas, requirements, activities and artifacts that impact an application over the course of its lifecycle, from concept until decommissioning". Obviously, this encompasses a large number of different areas, even for relatively small and medium-sized projects. In recent years, many teams have adopted methodologies that address individual aspects of this, but the majority of this adoption has resulted in "islands of improvement" rather than the desired comprehensive outcome... until now!

    Last year Microsoft released Team Foundation Server 2010 along with Visual Studio 2010 Ultimate Edition, and with these two in combination the situation has drastically changed. At last there is a single environment that is capable of handling all aspects of ALM, and that is also capable of dealing with migration and integration with existing systems, making the transition to a single solution much easier. The possibilities (and practicalities) are nothing short of amazing:

    - Architecture through Testing integration? YES.
    - Being able to correlate specific requirement items (and their history) to actual code (and code history)? YES.
    - Identification of which tests will potentially be impacted by a given code change? YES.
    - Resilient automated testing of user interfaces? YES.
    - Automatic deployment management? YES.
    - Integration-level testing as part of (designated) builds? YES.

    I could easily double or triple the above list, but these items should be enough to get you thinking about the "pain points" your team and organization currently face and the fact that there IS a way to relieve the pain.

    Over the course of the year, I hope to bring together some of the "best of breed" information, along with hosting (and participating in) discussions with various experts in the field. There are already a number of groups (including many on LinkedIn) that have an ALM focus, and I encourage everyone to check them out. I will be posting a list of the ones I find most helpful in the not too distant future.

    As I said at the beginning, 2011 is shaping up to be a very interesting (and productive) year. Why wait to start investigating and adopting ALM?

    ps: For those interested in becoming an "alms giver" in the charitable sense, I highly recommend checking out GiveCamp. A group of developers, designers, and others get together to create a solution for a charity in just under 48 hours. I will be attending the GiveCamp in New York City on Jan 14-16; more information is available at nycgivecamp.org/

  • How do you manage extensibility in your multi-tenant systems?

    - by Brian MacKay
    I've got a few big web-based multi-tenant products now, and very soon I can see that there will be a lot of customizations that are tenant-specific. An extra field here or there, maybe an extra page or some extra logic in the middle of a workflow - that sort of thing. Some of these customizations can be rolled into the core product, and that's great. Some of them are highly specific and would get in everyone else's way.

    I have a few ideas in mind for managing this, but none of them seem to scale well. The obvious solution is to introduce a ton of client-level settings, allowing various 'features' to be enabled on a per-client basis. The downside with that, of course, is massive complexity and clutter. You could introduce a truly huge number of settings, and over time various types of logic (presentation, business) could get way out of hand. Then there's the problem of client-specific fields, which begs for something cleaner than just adding a bunch of nullable fields to the existing tables. (A sketch of both patterns follows at the end of this post.)

    So what are people doing to manage this? Force.com seems to be the master of extensibility; obviously they've created a platform from the ground up that is super extensible. You can add on to almost anything with their web-based UI. FogBugz did something similar, where they created a robust plugin model that, come to think of it, might have actually been inspired by Force. I know they spent a lot of time and money on it, and if I'm not mistaken the intention was to actually use it internally for future product development. Sounds like the kind of thing I could be tempted to build, but probably shouldn't. :)

    Is a massive investment in pluggable architecture the only way to go? How are you managing these problems, and what kind of results are you seeing?

    EDIT: It does look as though FogBugz handled the problem by building a fairly robust platform and then using that to put together their screens. To extend it you create a DLL containing classes that implement interfaces like ISearchScreenGridColumn, and that becomes a module. I'm sure it was tremendously expensive to build, considering that they have a large number of devs and they worked on it for months - plus their surface area is perhaps 5% of the size of my application. Right now I am seriously wondering if Force.com is the right way to handle this. And I am a hard-core ASP.NET guy, so this is a strange position to find myself in.
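
    To make the two patterns above concrete, here is a minimal Python sketch of per-tenant feature flags plus a keyed extension store for tenant-specific fields (instead of nullable columns on core tables). All names are illustrative, not from any of the products mentioned.

        from dataclasses import dataclass, field

        @dataclass
        class TenantConfig:
            tenant_id: str
            features: set = field(default_factory=set)  # enabled feature keys

        # Extension values live beside the core record rather than as
        # nullable columns: (tenant, entity, record_id, field_name) -> value
        custom_values = {}

        def set_custom(tenant, entity, record_id, field_name, value):
            custom_values[(tenant, entity, record_id, field_name)] = value

        # Core code branches on flags, never on tenant identity:
        acme = TenantConfig("acme", features={"extra-approval-step"})
        if "extra-approval-step" in acme.features:
            print("route the workflow through the extra approval step")

        # A tenant-specific field attached to a core Invoice record:
        set_custom("acme", "Invoice", 42, "PurchaseOrderRef", "PO-1001")
        print(custom_values)

    The flag set keeps the per-client settings in one place, and the keyed extension store is the usual alternative to sprinkling nullable columns - both trade schema purity for manageable tenant-specific growth.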

  • Using machine learning to aim mirrors in a solar array?

    - by Buttons840
    I've been thinking about solar collectors in which several independent mirrors focus the light on a central collector, similar to the following design from Energy Innovations. Because there will be flaws in the assembly of such a solar array, I am proceeding with the following assumptions (or lack thereof):

    - The software knows the "position" of each mirror, but doesn't know how this position relates to the real world or to the other mirrors. This accounts for poor mirror calibration or other environmental factors that may affect one mirror but not the others.
    - If a mirror moves 10 units in one direction, and then 10 units in the opposite direction, it will end up where it originally started.

    I would like to use machine learning to position the mirrors correctly and focus the light on the collector. I expect I would approach this as an optimization problem, optimizing the mirror positions to maximize the heat inside the collector and the power output. The problem is finding a small target in a noisy, high-dimensional space (considering that each mirror has 2 axes of rotation). Some of the problems I anticipate are:

    - cloudy days: even if you stumble upon the perfect mirror alignment, it might be cloudy at the time
    - noisy sensor data
    - the sun is a moving target: it moves along a path and follows a different path every day - although you could calculate the exact position of the sun at any time, you wouldn't know how that position relates to your mirrors

    My question isn't about the solar array, but about possible machine learning techniques that would help with this "small target in a noisy high-dimensional space" problem. I mentioned the solar array because it was the catalyst for this question and a good example. What machine learning techniques can find such a small target in a noisy high-dimensional space?

    EDIT: A few additional thoughts. Yes, you can calculate the sun's position in the real world, but you don't know how a mirror's position is related to the real world (unless you've learned it somehow). You might know the sun's azimuth is 220 degrees and its elevation is 60 degrees, and you might know a mirror is at position (-20, 42); now tell me, is that mirror correctly aligned with the sun? You don't know. Let's assume you have some very sophisticated heat measurements, and you know "with this heat level, there must be 2 mirrors correctly aligned". Now the question is: which two mirrors (out of 25 or more) are correctly aligned? One solution I considered was to approximate the correct "alignment function" using a neural network that would take the sun's azimuth and elevation as input and output a large array with 2 values for each mirror, corresponding to the mirror's 2 axes. I'm not sure what the best training method is, though.
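
    One gradient-free technique that fits the "noisy, high-dimensional, expensive to measure" shape of this problem is simultaneous perturbation stochastic approximation (SPSA), which estimates a search direction from just two noisy measurements per step. Below is a minimal Python sketch under toy assumptions: the "sensor reading" is a stand-in quadratic with noise, and 25 mirrors x 2 axes gives a 50-dimensional parameter vector.

        import numpy as np

        rng = np.random.default_rng(0)
        target = rng.uniform(-1, 1, size=50)   # unknown "correct" mirror angles

        def sensor_reading(angles):
            # Stand-in for collector heat: peaks when the angles match the
            # target, plus measurement noise.
            return -np.sum((angles - target) ** 2) + rng.normal(0, 0.5)

        theta = np.zeros(50)                   # 25 mirrors x 2 axes of rotation
        for k in range(1, 5001):
            a = 0.1 / k ** 0.602               # standard SPSA gain schedules
            c = 0.1 / k ** 0.101
            delta = rng.choice([-1.0, 1.0], size=theta.shape)
            # Two noisy measurements estimate the slope along a random direction.
            g = (sensor_reading(theta + c * delta) -
                 sensor_reading(theta - c * delta)) / (2 * c) * delta
            theta += a * g                     # ascend, since we maximize heat
        print("mean angle error:", np.abs(theta - target).mean())

    Because each step only needs two heat readings, the same loop works no matter how many mirrors there are, and the averaging built into the iteration tolerates sensor noise; a cloudy measurement just becomes a noisier sample.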

  • Diving into Scala with Cay Horstmann

    - by Janice J. Heiss
    A new interview with Java Champion Cay Horstmann, now up on otn/java and titled "Diving into Scala: A Conversation with Java Champion Cay Horstmann," explores Horstmann's ideas about Scala as reflected in his much-lauded new book, Scala for the Impatient. None other than Martin Odersky, the inventor of Scala, called it "a joy to read" and the "best introduction to Scala". Odersky was so enthused by the book that he asked Horstmann if the first section could be made available as a free download on the Typesafe website, something Horstmann graciously assented to.

    Horstmann acknowledges that some aspects of Scala are very complex, but he encourages developers to simply stay away from those parts of the language. He points to several ways Java developers can benefit from Scala: "For example," he says, "you can write classes with less boilerplate, file and XML handling is more concise, and you can replace tedious loops over collections with more elegant constructs. Typically, programmers at this level report that they write about half the number of lines of code in Scala that they would in Java, and that's nothing to sneeze at. Another entry point can be if you want to use a Scala-based framework such as Akka or Play; you can use these with Java, but the Scala API is more enjoyable."

    Horstmann observes that developers can do fine with Scala without grasping the theory behind it. He argues that most of us learn best through examples and not through trying to comprehend abstract theories. He also believes that Scala is the most attractive choice for developers who want to move beyond Java and C++. When asked about other choices, he comments: "Clojure is pretty nice, but I found its Lisp syntax a bit off-putting, and it seems very focused on software transactional memory, which isn't all that useful to me. And it's not statically typed. I wanted to like Groovy, but it really bothers me that the semantics seems under-defined and in flux. And it's not statically typed. Yes, there is Groovy++, but that's in even sketchier shape. There are a couple of contenders such as Kotlin and Ceylon, but so far they aren't real. So, if you want to do work with a statically typed language on the JVM that exists today, Scala is simply the pragmatic choice. It's a good thing that it's such a nice choice."

    Learn more about Scala by reading the full interview here.

  • Detect, Analyze, Act – Fast!

    - by Ajay Khanna
    In a fast-changing business environment, it becomes crucial to identify business opportunities and business issues as soon as possible. If they are identified at the right time, business managers can address issues before they escalate into serious problems and can take advantage of new opportunities before the competition does. Moreover, they have to do this efficiently, at the right cost. Success depends on how responsive the organization is to emerging events and a changing environment. These events can be customer issues, competitor moves, changes in regulations, or changes in company policies. To be responsive in such situations, organizations first need to identify and track these situations. They can do that via business activity monitoring (BAM) and complex event processing (CEP). A unified monitoring dashboard helps put together a comprehensive picture of the situation at hand and provides deep insight for taking the proper actions. With CEP, businesses can connect all the relevant events, detect event patterns, and take immediate action using a Business Process Management (BPM) system.

    So, to be responsive we need:

    Real-time visibility with business activity monitoring. You can use BAM technology to monitor progress, track performance, meet service-level agreements (SLAs), manage exceptions, and issue alerts to an employee or application when a process is not functioning properly - all in real time. A unified monitoring dashboard helps you maintain a complete picture of each situation so you can take action effectively. BAM works hand in hand with BPM software to discover the significant activities that drive business success.

    Real-time sense and respond. An event-driven BPM solution enables each step in a business process to be informed not only by the previous step, but also by any other step, data, and pattern of behavior deemed relevant to that step. This gives the company the ability to "sense and respond". You can describe interesting event patterns and event correlations and monitor the business in real time. Whenever a pre-defined pattern emerges, you can take actions such as raising alerts or notifications, or kicking off another business process. The synergy made possible by integrating activity monitoring, event processing, and BPM lets managers keep a finger on the pulse of their business. Business managers can now respond to customers faster, respond to competition faster, reduce fraud, and do more cross-selling. Read more about being responsive in the whitepaper "The Instantly Responsive Enterprise: Integrating BPM and Complex Event Processing" in the BPM Resource Kit.
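
    As a toy illustration of the pattern-detection side, here is a minimal Python sketch of sliding-window event correlation: when a pre-defined pattern (three failures within a minute) emerges, an alert fires, which in a real deployment would kick off a BPM process. Event names and thresholds are illustrative only.

        from collections import deque
        import time

        WINDOW_SECONDS = 60
        FAILURE_THRESHOLD = 3
        recent_failures = deque()

        def on_event(event_type, timestamp):
            if event_type != "payment_failed":
                return
            recent_failures.append(timestamp)
            # Drop events that have fallen out of the sliding window.
            while recent_failures and timestamp - recent_failures[0] > WINDOW_SECONDS:
                recent_failures.popleft()
            if len(recent_failures) >= FAILURE_THRESHOLD:
                print("ALERT: repeated payment failures - start exception process")

        now = time.time()
        for offset in (0, 10, 25):             # three failures in 25 seconds
            on_event("payment_failed", now + offset)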

  • How does rc job work / order of (contradicting) "start on ..." and "stop on ..." stanzas

    - by Binarus
    Hi, I just can't understand how Upstart's rc job definition in Natty 11.04 works. To illustrate the problem, here is the definition (empty lines and comments left out):

        start on runlevel [0123456]
        stop on runlevel [!$RUNLEVEL]
        export RUNLEVEL
        export PREVLEVEL
        console output
        env INIT_VERBOSE
        task
        exec /etc/init.d/rc $RUNLEVEL

    Let's suppose we are currently in runlevel 2 and the rc job is stopped (that is exactly the situation after booting my box and logging in via SSH). Now, let's assume that the system switches to runlevel 3, for example due to a command like "telinit 3" given by root. What will happen to the rc job? Obviously, the rc job will be started, since it is currently stopped and the runlevel 3 event matches the start events. But from here on, things are unclear to me: according to the manual, $RUNLEVEL evaluates to the new runlevel when the job is started (that means 3 in our example). Therefore, the next stanza, "stop on runlevel [!$RUNLEVEL]", translates to "stop on runlevel [!3]"; that means we have a first stanza that will trigger the job, but the second stanza will never stop the job and seems to be useless. Since I know that the Ubuntu / Upstart people won't do useless things, I must be heavily misunderstanding something. I would be grateful for any explanation.

    While trying to understand this, an additional question came to my mind. If I had contradicting start and stop triggers, for example:

        start on foo
        stop on foo

    what would happen? I swear I never will do that, but I am nevertheless very interested in how Upstart handles that on the theoretical level. Thank you very much!

    Editing the question as a reaction to geekosaur's first answer: I can see the parallelism, but it is not that easy (at least, not to me). Let's assume the job is currently still running, and a new runlevel event comes in (of course, the new runlevel is different from the current one). Then the following should happen:

    1) The job is single-instance. That means that "start on ..." won't be triggered, since the job is currently running; $RUNLEVEL is not touched.
    2) "stop on ..." will be triggered, since the new runlevel is different from $RUNLEVEL, so the job will be aborted.
    3) Now the job is stopped and waiting. I can't see how it is restarted with the new runlevel. AFAIK, initctl emits events only once, so "start on ..." won't be triggered and the new runlevel won't be entered.

    I know that I am still misunderstanding something, and I am grateful for explanations. Thank you very much!

    Read the article

  • Developer – Cross-Platform: Fact or Fiction?

    - by Pinal Dave
    This is a guest blog post by Jeff McVeigh. Jeff McVeigh is the general manager of Performance Client and Visual Computing within Intel’s Developer Products Division. His team is responsible for the development and delivery of leading software products for performance-centric application developers spanning the Android*, Windows*, and OS X* operating systems. During his 17-year career at Intel, Jeff has held various technical and management positions in the fields of media, graphics, and validation. He also served as the technical assistant to Intel’s CTO. He holds 20 patents and a Ph.D. in electrical and computer engineering from Carnegie Mellon University. It’s not a homogeneous world. We all know it. I have a Windows* desktop, a MacBook Air*, an Android phone, and my kids are 100% Apple. We used to have 2.5 kids; now we have 2.5 devices. And we all agree that diversity is great, unless you’re a developer trying to prioritize the limited hours in the day. Then it’s a series of trade-offs. Do we become brand loyalists for Google or Apple or Microsoft? Do we specialize in phones and tablets, or still consider the 300M+ PC shipments a year when we decide where to spend our time and resources? We weigh the platform options, monetization opportunities, APIs, and distribution models. Too often, I see developers choose one platform, or write to the lowest common denominator, which limits their reach and market success. But who wants to be “me too”? Cross-platform coding is possible in some environments, for some applications, for some level of innovation—but it’s not all-inclusive, yet. There are some tricks of the trade for developing cross-platform, including using languages and environments that “run everywhere.” HTML5 is today’s answer for web-enabled platforms. However, it’s not a panacea, especially if your app requires the ultimate performance or a native UI look and feel. There are other cross-platform frameworks that address the presentation layer of your application. But for those apps that have a preponderance of native code (e.g., highly-tuned C/C++ loops), there aren’t tons of solutions today to help with code reuse across these platforms using consistent tools and libraries. As we move forward with interim solutions, they’ll improve and become more robust, based, in no small part, on our input. What’s your answer to the cross-platform challenge? Are you fully invested in HTML5 now? What are your barriers? What’s your vision for navigating the cross-platform landscape? Here is the link where you can head next and learn more about how to answer the questions I have asked: https://software.intel.com/en-us Republished with permission from here. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Intel

    Read the article

  • Move Data into the grid for scalable, predictable response times

    - by JuergenKress
    CloudTran is pleased to introduce the availability of the CloudTran Transaction and Persistence Manager for creating scalable, reliable data services on the Oracle Coherence In-Memory Data Grid (IMDG). Use of IMDG architectures has been key to handling today’s web-scale loads because it eliminates database latency by storing important and frequently accessed data in memory instead of on disk. The CloudTran product lets developers easily use an IMDG for full ACID-compliant transactions without having to be concerned about the location or spread of data. The system has its own implementation of fast, scalable distributed transactions that does NOT depend on XA protocols but still guarantees all ACID properties. Plus, CloudTran asynchronously replicates data going into the IMDG to back-end datastores and back-up data centers, again ensuring ACID properties. CloudTran can be accessed through the Java Persistence API (JPA via TopLink Grid) and now through a new Low-Level API, or LLAPI. This is ideal for use in SOA applications that need data reliability, high availability, performance, and scalability. While still in its limited beta release, the LLAPI gives developers the ability to use the standard put/remove logic available in Coherence and then wrap that logic with simple Spring annotations or XML+AspectJ to start transactions. An important feature of the LLAPI is the ability to join transactions. This is a common requirement for SOA applications that need to reduce network traffic by aggregating data into single cache entries and then doing SOA service processing in the node holding the data, which results in the need to orchestrate transaction processing across multiple service calls. CloudTran can handle these “multi-client” transactions at speed with no loss of ACID properties. Developing software around an IMDG like Oracle Coherence is an important choice for today’s web-scale applications and services, but it introduces new architectural considerations for maintaining scalability in light of increased network loads and data movement. Without CloudTran, developers face an incredibly difficult task in ensuring data reliability, availability, performance, and scalability when working with an IMDG: highly distributed data that is entirely volatile while stored in memory presents numerous edge cases where failures can result in data loss. The CloudTran product takes care of all of this, leaving developers with the confidence and peace of mind that all data is processed correctly. For those interested in evaluating the CloudTran product and IMDGs, take a look at this link for more information: http://www.CloudTran.com/downloadAPI.ph, or send your questions to [email protected]. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: CloudTran,data grid,M,SOA Community,Oracle SOA,Oracle BPM,BPM,Community,OPN,Jürgen Kress
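
    To illustrate the "join transactions" idea in the abstract, here is a deliberately simplified Python sketch of several service calls joining one transaction whose staged writes are applied atomically on commit. This is a conceptual toy only: CloudTran's actual LLAPI is Java-based (Coherence put/remove wrapped with Spring annotations or XML+AspectJ), and none of the names below (Transaction, join, stage_put) come from the product.

        # Conceptual sketch of multiple service calls joining one transaction.
        # Writes are staged per transaction and applied to the cache only on
        # commit; voting, rollback and replication are omitted for brevity.
        import uuid

        class Transaction:
            def __init__(self):
                self.id = uuid.uuid4()
                self.participants = []  # service calls that joined
                self.writes = {}        # staged cache entries

            def join(self, service_name):
                self.participants.append(service_name)

            def stage_put(self, key, value):
                self.writes[key] = value

            def commit(self, cache):
                # All staged writes become visible together (all-or-nothing).
                cache.update(self.writes)
                return self.id

        cache = {}
        tx = Transaction()
        tx.join("order-service")
        tx.stage_put("order:1", {"status": "placed"})
        tx.join("billing-service")  # second service call joins the same tx
        tx.stage_put("invoice:1", {"amount": 99.0})
        tx.commit(cache)
        print(sorted(cache))        # ['invoice:1', 'order:1']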

    Read the article

  • Robust line of sight test on the inside of a polygon with tolerance

    - by David Gouveia
    Foreword This is a follow-up to this question and the main problem I'm trying to solve. My current solution is a hack which involves inflating the polygon and doing most calculations on the inflated polygon instead. My goal is to remove this step completely and correctly solve the problem with calculations only. Problem Given a concave polygon, and treating all of its edges as if they were walls in a level, determine whether two points A and B are in line of sight of each other, while accounting for some degree of floating-point error. I'm currently basing my solution on a series of line-segment intersection tests. In other words: If any of the end points are outside the polygon, they are not in line of sight. If both end points are inside the polygon, and the line segment from A to B crosses any of the edges of the polygon, then they are not in line of sight. If both end points are inside the polygon, and the line segment from A to B does not cross any of the edges of the polygon, then they are in line of sight. But the problem is dealing correctly with all the edge cases. In particular, it must be able to deal with all the situations depicted below, where red lines are examples that should be rejected and green lines are examples that should be accepted. I probably missed a few other situations, such as when the line segment from A to B is collinear with an edge, but one of the end points is outside the polygon. One point of particular interest is the difference between 1 and 9. In both cases, both end points are vertices of the polygon, and there are no edges being intersected, but 1 should be rejected while 9 should be accepted. How do I distinguish these two? I could check some middle point within the segment to see if it falls inside or not, but it's easy to come up with situations in which it would fail. Point 7 was also pretty tricky, and I had to treat it as a special case which checks directly whether the two points are adjacent vertices of the polygon. But there are also other chances of line segments being collinear with the edges of the polygon, and I'm still not entirely sure how I should handle those cases. Is there any well-known solution to this problem?
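
    For reference, here is a minimal Python sketch of the test structure described in the question: endpoint containment with an epsilon tolerance, proper-crossing checks against every edge, and a midpoint probe to separate cases like 1 and 9. It is only a starting point, and, as the question itself notes, a single midpoint probe is known to fail in some configurations; the collinear-overlap cases still need dedicated handling.

        # Sketch of the line-of-sight test outlined above, with an epsilon
        # tolerance. Polygons are lists of (x, y) vertex tuples in order.
        EPS = 1e-9

        def cross(ox, oy, ax, ay, bx, by):
            """2D cross product of OA and OB; its sign gives orientation."""
            return (ax - ox) * (by - oy) - (ay - oy) * (bx - ox)

        def point_in_polygon(px, py, poly):
            """Ray casting; points within EPS of an edge count as inside."""
            inside = False
            n = len(poly)
            for i in range(n):
                (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
                # Tolerant on-edge check so boundary points are accepted.
                if (abs(cross(x1, y1, x2, y2, px, py)) < EPS
                        and min(x1, x2) - EPS <= px <= max(x1, x2) + EPS
                        and min(y1, y2) - EPS <= py <= max(y1, y2) + EPS):
                    return True
                if (y1 > py) != (y2 > py):  # edge spans the horizontal ray
                    xint = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
                    if px < xint:
                        inside = not inside
            return inside

        def segments_cross(p1, p2, q1, q2):
            """True only for a proper crossing; touching at endpoints or
            running collinear along an edge does not count."""
            d1 = cross(*q1, *q2, *p1)
            d2 = cross(*q1, *q2, *p2)
            d3 = cross(*p1, *p2, *q1)
            d4 = cross(*p1, *p2, *q2)
            return (((d1 > EPS and d2 < -EPS) or (d1 < -EPS and d2 > EPS))
                    and ((d3 > EPS and d4 < -EPS) or (d3 < -EPS and d4 > EPS)))

        def in_line_of_sight(a, b, poly):
            if not (point_in_polygon(*a, poly) and point_in_polygon(*b, poly)):
                return False
            n = len(poly)
            for i in range(n):
                if segments_cross(a, b, poly[i], poly[(i + 1) % n]):
                    return False
            # Probe the midpoint to separate cases like 1 (reject) and 9
            # (accept), where both endpoints are vertices and nothing crosses.
            mid = ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)
            return point_in_polygon(*mid, poly)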

    Read the article

  • CodePlex Daily Summary for Thursday, September 20, 2012

    Popular Releases

    SiteMap Editor for Microsoft Dynamics CRM 2011: SiteMap Editor (1.1.2020.421): New features: disable a specific part of the SiteMap to keep the data without displaying it in the CRM application (it simply comments out that part of the sitemap XML; thanks to rboyers for this feature request). Right-click an item and click "Disable" to disable it; disabled items are greyed out and a "- disabled" suffix is added. Right-click an item and click "Enable" to enable it. Also refreshes the list of web resources in the web resources picker.

    WPF Animated GIF: WPF Animated GIF 1.2.1: Bug fixes: 1275: fixed rendering issues when DisposalMethod = 2 or 3.

    AJAX Control Toolkit: September 2012 Release (Version 60919): September 2012 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4.5 – AJAX Control Toolkit for .NET 4.5 and sample site (Recommended). AJAX Control Toolkit .NET 4 – AJAX Control Toolkit for .NET 4 and sample site (Recommended). AJAX Control Toolkit .NET 3.5 – AJAX Control Toolkit for .NET 3.5 and sample site (Recommended). Notes: the current version of the AJAX Control Toolkit is not compatible with ...

    Lib.Web.Mvc & Yet another developer blog: Lib.Web.Mvc 6.1.0: Lib.Web.Mvc is a library which contains helper classes for ASP.NET MVC, such as a strongly typed jqGrid helper, an XSL transformation HtmlHelper/ActionResult, a FileResult with range request support, custom attributes and more. The release contains: Lib.Web.Mvc.dll with its XML documentation file; standalone documentation in a chm file and a change log; the library source code. A sample application for the strongly typed jqGrid helper is available here. Sample application for XSL transformation HtmlHelper/ActionRe...

    Sense/Net CMS - Enterprise Content Management: SenseNet 6.1.2 Community Edition: Main new features: our current release brings a lot of bugfixes, including the resolution of js/css editing cache issues, xlsx file handling from Office, expense claim demo workspace fixes and much more. Besides fixes, 6.1.2 introduces workflow start options and other minor features, like a reusable Reject client button for approval scenarios and resource editor enhancements. We have also fixed an issue with our install package to bring you a flawless installation...

    WinRT XAML Toolkit: WinRT XAML Toolkit - 1.2.3: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For the compiled version use NuGet; you can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit. Features: AsyncUI extensions, controls and control extensions, converters, debugging helpers, imaging, IO helpers, VisualTree helpers, samples. Recent changes NOTE: namespace changes DebugConsol...

    Python Tools for Visual Studio: 1.5 RC: PTVS 1.5 RC available! We’re pleased to announce the release of Python Tools for Visual Studio 1.5 RC. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including CPython/IronPython support, Edit/Intellisense/Debug/Profile, Cloud, HPC, IPython, etc. The primary new feature for the 1.5 release is Django, including Azure support! The http://www.djangoproject.com is a pop...

    Launchbar: Lanchbar 4.0.0: First public release.

    AssaultCube Reloaded: 2.5.4: Linux has Ubuntu 11.10 32-bit precompiled binaries and Ubuntu 10.10 64-bit precompiled binaries, but you can compile your own as the release also contains the source. If you are using Mac or other operating systems, please wait while we try to package for those OSes; try to compile it, and if that fails, download a virtual machine. The server pack is ready for both Windows and Linux, but you might need to compile your own for Linux (source included). Changelog: new logo, improved airstrike! Reset nukes...

    Extended WPF Toolkit: Extended WPF Toolkit - 1.7.0: Want an easier way to install the Extended WPF Toolkit? The Extended WPF Toolkit is available on NuGet. What's new in the 1.7.0 release? New controls: Zoombox, Pie. New features / bug fixes: a PropertyGrid.ShowTitle property was added to allow showing/hiding the PropertyGrid title; modifications to the PropertyGrid.EditorDefinitions collection will now automatically be applied to the PropertyGrid; modifications to the PropertyGrid.PropertyDefinitions collection will now be reflected automatically...

    JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.2: JayData is a unified data access library for JavaScript to CRUD + Query data from different sources like OData, MongoDB, WebSQL, SqLite, Facebook or YQL. The library can be integrated with Knockout.js or Sencha Touch 2 and can be used on Node.js as well. See it in action in this 6-minute video: Sencha Touch 2 example app using JayData: Netflix browser. What's new in JayData 1.2: for detailed release notes check the release notes. JayData core: all async operations now support promises. JayDa...

    ????????API for .Net SDK: SDK for .Net ??? Release 4: 2012?9?17??? ?????,???????????????。 ?????Release 3??????,???????,???,??? ??????????????????SDK,????????。 ??,??????? That's all.

    VidCoder: 1.4.0 Beta: First Beta release! Catches up to HandBrake nightlies with SVN 4937. Added PGS (Blu-ray) subtitle support. Additional framerates available: 30, 50, 59.94, 60. Additional sample rates available: 8, 11.025, 12 and 16 kHz. Additional higher bitrates available for audio. Same as Source Constant Framerate available. Added Apple TV 3 preset. Added new Bob deinterlacing option. Introduced process isolation for encodes: now if HandBrake crashes, VidCoder will keep running and continue pro...

    DNN Metro7 style Skin package: Metro7 style Skin for DotNetNuke 06.02.01: Stabilization release that fixed these issues: links did not work on FF, Chrome and Safari; packaging was modified with its own manifest file for the install and source packages; the user image on the login was moved to the left side; the h2 font-size was moved to 24px. Note: this release comes without a source package, as we are still working on a solution; if you need the Visual Studio source files, please go to Source and download them from there. There are 16 known CSS issues related to the skin.css. All others are DNN default o...

    Visual Studio Icon Patcher: Version 1.5.1: This fixes a bug in the 1.5 release where it would crash when no language packs were installed for VS2010.

    VFPX: Desktop Alerts 1.0.2: This update to Desktop Alerts contains changes to the behavior for setting custom sounds for alerts. I have removed ALERTWAV.TXT from the project, and also removed DA_DEFAULTSOUND from the VFPALERT.H file. The AlertManager class and Alert class both have a "default" cSound of ADDBS(JUSTPATH(_VFP.ServerName))+"alert.wav" --- so, as long as you distribute a sound file with the file name "alert.wav" along with the EXE, that file will be used. You can set your own sound file globally by setti...

    MCEBuddy 2.x: MCEBuddy 2.2.15: Changelog for 2.2.15 (32-bit and 64-bit): 1. Added support for %originalfilepath% to get the source file's full path (used for custom commands only). 2. Added support for better parsing of Media Portal XML files to extract the ShowName and Episode Name and download additional details from TVDB (like Season No, Episode No, etc.). 3. Added support for the TVDB seriesID in metadata. 4. Added support for eMail non-blocking UI test.

    EmmaClient - Liveresults for Orienteering: EmmaClient 2012-09-13: Minor release with a small fix for producing OS2012 results (and the status of runners in the forest).

    Multiple Image choice custom field type: MultipleImageUpload V1.0: This is a custom field type which allows users to choose an image as a choice field. This custom field type is for SharePoint 2010; install the WSP through PowerShell or the Stsadm tool and enjoy the functionality...

    MDS Administration: Version 1.1.3: Fixed a rename issue.

    New Projects

    3dxia: bug3dxia
    Bitbucket Issue Tracker: A simple issue-tracking Windows client for your projects hosted on bitbucket.org.
    C++ thread-safe logging: Visual Studio C++ log library project; add it to your project for thread-safe logging capabilities.
    Caddies GeoNote: The work started from making a vision for a neighbourhood communication platform, and ended up in creating version 1.0 of a mobile application – GeoNotes – CodePlex
    GitHookForAzure: Test
    Commerce Server Pipeline Log Analyzer: This tool reads and analyzes pipeline logs under one selected folder. It applies to Microsoft Commerce Server 2002, 2007, 2009 and 2009 R2 pipeline logs.
    Contrib.Mod.ResetPassword: Sends a reset link as a shape.
    Contrib.Taxonomies.ViewExtension: Orchard module that adds a filter box to the taxonomies selector.
    EasierRdp: A remote desktop session management tool which provides an easy way to maintain multiple users' and servers' connections.
    Economic news grabber: WCF service for getting news from RSS feeds, news sites, etc., with a WPF client for presenting this data to end users.
    Eticaret Sitesi: eee tic
    Facebook Graph API SDK Helper Class Library: Facebook C# Graph API SDK helper class; under development.
    fxch01v14: hello
    Karned 2: Karned is a digital fishing logbook. This software lets you record your catches, for analysis or simply for the memories...
    lixotrash: SandBox and POC collections, not interesting here.
    LoggerLib: The project is a "Tracing Library" developed in a Borland C++ environment.
    LyncTalker: A simple tray application which will speak incoming Lync instant messages.
    MicroFrameWork: MicroFrameWork
    Nuzzle: 2.6.5 Dofus emulator.
    PDF Merge: PDF Merge is a simple, user-friendly application that allows you to merge multiple PDF documents, including scanned / imported documents and images, into one PDF.
    Pipeline: A library of several lightweight pipeline implementations ("pipes and filters" pattern).
    Prime Calculator: PrimeCalculator factorizes a number or a math expression into its prime factors or, if it is prime, displays its prime type [Unit, Prime, Additive, Pure].
    Racing: not ready yet
    Runtime DataSet/DataTable viewer: This component allows you to inspect the contents of any DataSet or DataTable at runtime without breaking into the debugger again and again.
    Service billing: Student group work for the College of West Anglia (UCWA).
    Snake!: A Snake game written in C#.
    SoccerBot: This is just a test project.
    SQL Server Trace File Import Utility: Command-line utility to import trace files into a data-warehouse-type structure. Currently it only handles Login events.
    testscenairo7onv14: hello
    ToQueryString: Serialize any object in C# to a query string with the .ToQueryString() extension method. Supports primitives, strings, arrays and collections.
    tyajz: tyajz project
    Windows Azure Table Storage: You can find all the details on my blog: hhaggan.wordpress.com; if you have any questions or inquiries, feel free to contact me at hhaggan@hotmail.com
    wtcms: wtcms

    Read the article
