Search Results

Search found 12211 results on 489 pages for 'industry standard'.


  • Do most companies not know how to write software?

    - by SnOrfus
    If you're an active reader here, think about how many times you've heard (and even agreed) when someone here has told someone else to start looking for a new job. Personally, I've seen it a lot more than I expected: it's almost starting to sound cliché. I get that there are bound to be a number of companies that are bad at developing software or managing a software project, but it almost seems like it's getting worse and more frequent - or maybe we're just hearing from them and not from all of the places that have decent work atmospheres/conditions. So I ask: in your experience, and through your developer friends, do you find it common that companies have bad development environments? And if so: why do you think it's common? What do you think could be done to fix it - as a developer, as a manager, as an industry? Do you think it's improving?

    Read the article

  • Anatomy of a .NET Assembly - PE Headers

    - by Simon Cooper
    Today, I'll be starting a look at what exactly is inside a .NET assembly - how the metadata and IL are stored, how Windows knows how to load it, and what all those bytes are actually doing. First of all, we need to understand the PE file format.

    PE files

    .NET assemblies are built on top of the PE (Portable Executable) file format that is used for all Windows executables and dlls, which itself is built on top of the MS-DOS executable file format. The reason for this is that when .NET 1 was released, it wasn't a built-in part of the operating system like it is nowadays. Prior to Windows XP, .NET executables had to load like any other executable: they had to execute native code to start the CLR, which would then read & execute the rest of the file. However, starting with Windows XP, the operating system loader knows natively how to deal with .NET assemblies, rendering most of this legacy code & structure unnecessary. It is still part of the spec, though, and so is part of every .NET assembly. The result of this is that there are a lot of structure values in the assembly that simply aren't meaningful in a .NET assembly, as they refer to features that aren't needed. These are either set to zero or to certain pre-defined values, specified in the CLR spec. There are also several fields that specify the size of other data structures in the file, which I will generally be glossing over in this initial post.

    Structure of a PE file

    Most of a PE file is split up into separate sections; each section stores different types of data. For instance, the .text section stores all the executable code; .rsrc stores unmanaged resources; .debug contains debugging information; and so on. Each section has a section header associated with it; this specifies whether the section is executable, read-only or read/write, whether it can be cached... When an exe or dll is loaded, each section can be mapped into a different location in memory as the OS loader sees fit. In order to reliably address a particular location within a file, most file offsets are specified using a Relative Virtual Address (RVA). An RVA specifies an offset relative to where the image is loaded in memory, rather than an offset within the executable file on disk, so the various sections can be moved around in memory without breaking anything. The mapping from RVA to file offset is done using the section headers, which specify the range of RVAs that are valid within that section. For example, if the .rsrc section header specifies that the base RVA is 0x4000, and the section starts at file offset 0xa00, then an RVA of 0x401d (offset 0x1d within the .rsrc section) corresponds to a file offset of 0xa1d. Because each section has its own base RVA, each valid RVA has a one-to-one mapping with a particular file offset.

    PE headers

    As I said above, most of the header information isn't relevant to .NET assemblies. To help show what's going on, I've created a diagram identifying all the various parts of the first 512 bytes of a .NET executable assembly, with the bytes that I will refer to in this post highlighted. Bear in mind that all numbers are stored in the assembly in little-endian format; the hex number 0x0123 will appear as 23 01 in the diagram. The first 64 bytes of every file is the DOS header. This starts with the magic number 'MZ' (0x4D, 0x5A in hex), identifying this file as an executable file of some sort (an .exe or .dll). Most of the rest of this header is zeroed out. The important part of this header is at offset 0x3C - this contains the file offset of the PE signature (0x80).
Between the DOS header & PE signature is the DOS stub - a stub program that simply prints 'This program cannot be run in DOS mode.\r\n' to the console. I will be having a closer look at this stub later on. The PE signature starts at offset 0x80, with the magic number 'PE\0\0' (0x50, 0x45, 0x00, 0x00), identifying this file as a PE executable. It is followed by the PE file header (also known as the COFF header). The relevant field in this header is in the last two bytes, and it specifies whether the file is an executable or a dll; bit 0x2000 is set for a dll. Next up are the PE standard fields, which start with a magic number of 0x010b for x86 and AnyCPU assemblies, and 0x20b for x64 assemblies. Most of the rest of these fields are to do with the CLR loader stub, which I will be covering in a later post. After the PE standard fields come the NT-specific fields; again, most of these are not relevant for .NET assemblies. The one that is relevant is the highlighted Subsystem field, which specifies whether this is a GUI or console app - 2 for a GUI app, 3 for a console app.

Data directories & section headers

After the PE and COFF headers come the data directories; each directory specifies the RVA (first 4 bytes) and size (next 4 bytes) of various important parts of the executable. The only relevant ones are the 2nd (Import table), 13th (Import Address table), and 15th (CLI header). The Import and Import Address tables are only used by the startup stub, so we will look at those later on. The 15th points to the CLI header, where the CLR-specific metadata begins. After the data directories come the section headers, one for each section in the file. Each header starts with the section's ASCII name, null-padded to 8 bytes. Again, most of each header is irrelevant, but the base RVA and file offset in each header are highlighted in the diagram. In the diagram, you can see the following sections:

.text: base RVA 0x2000, file offset 0x200
.rsrc: base RVA 0x4000, file offset 0xa00
.reloc: base RVA 0x6000, file offset 0x1000

The .text section contains all the CLR metadata and code, and so is by far the largest in .NET assemblies. The .rsrc section contains the data you see on the Details tab of the file properties dialog, but is otherwise unused. The .reloc section contains address relocations, which we will look at when we study the CLR startup stub.

What about the CLR?

As you can see, most of the first 512 bytes of an assembly are largely irrelevant to the CLR, and only a few bytes specify needed things like the bitness (AnyCPU/x86 or x64), whether this is an exe or dll, and the type of app this is. There are some bytes that I haven't covered that affect the layout of the file (e.g. the file alignment, which determines where in a file each section can start). These values are pretty much constant in most .NET assemblies, and don't affect the CLR data directly.

Conclusion

To summarize, the important data in the first 512 bytes of a file is:

DOS header. This contains a pointer to the PE signature.
DOS stub, which we'll be looking at in a later post.
PE signature.
PE file header (aka COFF header). This specifies whether the file is an exe or a dll.
PE standard fields. These specify whether the file is AnyCPU/32-bit or 64-bit.
PE NT-specific fields. These specify what type of app this is, if it is an app.
Data directories. The 15th entry (at offset 0x168) contains the RVA and size of the CLI header inside the .text section.
Section headers. These are used to map between RVA and file offset. The important one is .text, which is where all the CLR data is stored.

In my next post, we'll start looking at the metadata used by the CLR directly, which is all inside the .text section.
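
To make the structures described above concrete, here is a minimal C# sketch that reads these header fields from an assembly on disk. It assumes a well-formed PE file; the offsets follow the PE/COFF layout described in this post.

    using System;
    using System.IO;
    using System.Text;

    static class PeDump
    {
        static void Main(string[] args)
        {
            byte[] b = File.ReadAllBytes(args[0]);

            // DOS header: 'MZ' magic; offset 0x3C holds the file offset of the PE signature
            if (b[0] != 0x4D || b[1] != 0x5A) throw new BadImageFormatException("no MZ magic");
            int pe = BitConverter.ToInt32(b, 0x3C);                         // typically 0x80

            // PE signature 'PE\0\0', then the COFF header
            if (BitConverter.ToUInt32(b, pe) != 0x00004550) throw new BadImageFormatException("no PE signature");
            ushort numSections = BitConverter.ToUInt16(b, pe + 6);
            ushort optHeaderSize = BitConverter.ToUInt16(b, pe + 20);
            bool isDll = (BitConverter.ToUInt16(b, pe + 22) & 0x2000) != 0; // Characteristics

            // PE standard fields: magic 0x10b = x86/AnyCPU, 0x20b = x64
            ushort magic = BitConverter.ToUInt16(b, pe + 24);
            Console.WriteLine("dll: {0}, magic: 0x{1:x}", isDll, magic);

            // Section headers (40 bytes each) follow the optional header.
            // Base RVA is at offset 12 of each header, raw file offset at offset 20;
            // an RVA inside a section maps to file offset (rva - baseRva + rawOffset).
            for (int i = 0, s = pe + 24 + optHeaderSize; i < numSections; i++, s += 40)
            {
                string name = Encoding.ASCII.GetString(b, s, 8).TrimEnd('\0');
                uint baseRva = BitConverter.ToUInt32(b, s + 12);
                uint rawOffset = BitConverter.ToUInt32(b, s + 20);
                Console.WriteLine("{0}: base RVA 0x{1:x}, file offset 0x{2:x}", name, baseRva, rawOffset);
            }
        }
    }

Run against the assembly from the diagram, this would print the .text, .rsrc and .reloc entries listed above.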

    Read the article

  • Emphasize Some Comments - but not Dirty the Code

    - by Jon Sandness
    I'm having trouble structuring my comments at the moment. There are major sections of the code that I want to stand out when scrolling through the document. For example, this is a normal comment: int money = 100; // start out with 100 money. And this is a comment to emphasize a certain part of the code: /****** Set up all the money ******/. But I don't like it; it isn't very clean. Is there a standard way of setting up this type of comment?
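
    There is no universal standard, but a couple of conventions turn up regularly; here is an illustrative C# sketch (the names and divider style are arbitrary choices, not an established rule):

        public class Game
        {
            //=========================================================
            //  MONEY SETUP
            //=========================================================
            int money = 100;        // a normal comment: start out with 100 money

            // In C#, #region blocks are collapsible in Visual Studio's
            // outline, so big sections stand out without noisy dividers.
            #region Scoring
            int score = 0;
            #endregion
        }

    Banner comments stand out in any editor, while #region only helps in IDEs that honor it; many shops pick one style and keep the divider width fixed so sections line up when scrolling.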

    Read the article

  • EU Digital Agenda scores 85/100

    - by trond-arne.undheim
    If the Digital Agenda were a bottle of wine and I were wine critic Robert Parker, I would say the Digital Agenda has "a great bouquet, many good elements, with an astringent, dry and puckering mouthfeel that will not please everyone, but still displaying some finesse. A somewhat controlled effort with no surprises and a few noticeable flaws in the delivery. Noticeably shorter aftertaste than advertised by the producers. Score: 85/100. Enjoy now". The EU Digital Agenda states that "standards are vital for interoperability" and has a whole chapter on interoperability and standards. With this strong emphasis, there is hope the EU's outdated standardization system is finally headed for reform. It has been 23 years since the legal framework of standardisation was completed by Council Decision 87/95/EEC in the Information and Communications Technology (ICT) sector. Standardization is market driven. For several decades the IT industry has been developing standards and specifications in global open standards development organisations (fora/consortia), many of which have transparency procedures and practices far superior to the European Standards Organizations. The Digital Agenda rightly states: "reflecting the rise and growing importance of ICT standards developed by certain global fora and consortia". Some fora/consortia, of course, are distorted, influenced by single vendors, have a poor track record, and need constant vigilance, but they are the minority. Therefore, the recognition needs to be accompanied by eligibility criteria focused on openness. Will the EU reform its ICT standardization by the end of 2010? Possibly, and only if DG Enterprise takes on board that Information and Communications Technologies (ICTs) have driven half of the productivity growth in Europe over the past 15 years, a prominent fact in the EU's excellent Digital Competitiveness report 2010, published on Monday 17 May. It is OK to single out the ICT sector: it simply is the most important sector right now, as it fuels growth in all other sectors. Let's not wait for the entire standardization package, which may take another few years. Europe does not have time. The Digital Agenda is an umbrella strategy with deliveries from a host of actors across the Commission. For instance, the EU promises to issue "guidance on transparent ex-ante disclosure rules for essential intellectual property rights and licensing terms and conditions in the context of standard setting" by 2011, in the Horizontal Guidelines now out for public consultation by DG COMP, and to some extent in DG ENTR's standardization policy reform. This is important. The EU will issue procurement guidance as interoperability frameworks are put into practice. This is a joint responsibility of several DGs, and is likely to suffer coordination problems, controversy and delays. We have seen plenty of the latter already, and I have commented on the Commission's own interoperability elsewhere, with mixed luck. :( Yesterday, I watched the cartoonesque Korean western film The Good, the Bad and the Weird. In the movie (and I mean in the movie only), a bandit, a thief, and a bounty hunter, all excellent at whatever they do, fight for a treasure map. Whether that is a good analogy for the situation within the Commission, others are better judges of than I. However, as a movie fanatic, I still await the final shoot-out, and, as in the film, the only certainty is that "life is about chasing and being chased".

The missed opportunity (in this case not following up the push from Member States to better define open standards based interoperability) is a casualty of the chaos that ensued in the European Wild West (and I mean that in the most endearing sense, with my excuses beforehand to actors who possibly justifiably cannot bear being compared to fictional movie characters). Instead of exposing the ongoing fight, the EU opted for the legalistic use of the term "standards" throughout the document. This is a term that - to the EU - excludes most standards used by the IT industry worldwide. So, while it, for a moment, meant "weapons down", it will not lead to lasting peace. The Digital Agenda calls for the Member States to "Implement commitments on interoperability and standards in the Malmö and Granada Declarations by 2013". This is a far cry from the actual Ministerial Declarations, which called upon the Commission to help them with this implementation by recognizing and further defining open standards based interoperability. Unless there is more forthcoming from the Commission, the market's judgement will be: you simply fall short. Generally, I think the EU focus now should be "from policy to practice", and the Digital Agenda does indeed stop short of tackling some highly practical issues. There is need for progress beyond the Digital Agenda. Here are some suggestions that would help Europe re-take global leadership on openness, public sector reform, and economic growth: a strong European software strategy centred around open standards based interoperability, by 2011; an ambitious new eCommission strategy for 2011-15, focused on migration to open standards by 2015; aligning the IT portfolio across the Commission into one Digital Agenda DG by 2012; focusing all best practice exchange in eGovernment on one social networking site, epractice.eu (full disclosure: I had a role in getting that site up and running); and prioritizing public sector needs in global standardization over European standardization by 2014.

    Read the article

  • How Are Businesses Advancing with the Experience Revolution?

    - by Charles Knapp
    Businesses worldwide are operating in a new era. Customers are taking charge of their relationships with brands, and the customer experience has become the most important differentiator and driver of business value. Where is the experience heading? And how can businesses take advantage of the customer experience revolution? Find out from the experts at a one-of-a-kind event: the Oracle Customer Experience Summit at Oracle OpenWorld, San Francisco, October 3-5. Our featured speakers are global visionaries including Seth Godin, George Kembel from the Stanford d.school, Bruce Temkin, Kerry Bodine and Paul Hagen from Forrester, and Gene Alvarez from Gartner. Featured industry leaders will include speakers from Athene Group, Bazaarvoice, Comcast, Consortium of Service Innovation, Haworth, Intuit, KPN, Marriott, Nikon, Quicksilver, Royal Caribbean, SapientNitro, Southwest, Stryker, Stuart Concannon, and Twilio. Featured speakers from Oracle will include Oracle President Mark Hurd, Anthony Lye, David Vap, Brian Curran, John Kembel, and Matthew Banks. So, please join us at the Customer Experience Summit at Oracle OpenWorld.

    Read the article

  • SOA Suite HealthCare Integration Architecture

    - by Nitesh Jain
    Oracle SOA Suite for healthcare integration is an integrated, best-of-breed suite that helps healthcare organizations rapidly design, assemble, deploy, and manage highly agile and adaptable business applications. It helps the healthcare industry reduce operating costs and speeds time-to-market by delivering a consistent user interface, management console, and monitoring environment, as well as healthcare libraries and templates for healthcare customer projects. Oracle SOA Suite for healthcare integration is fully configurable and extensible, providing a highly flexible platform for collaboration across all healthcare domains. Healthcare message standards support: messaging standards - HL7, HIPAA, X12N, and custom; exchange standards - MLLP (v1.0, v2.0), TCP/IP, File, FTP, SFTP, and JMS. Simplified dashboards and customized reports give users advanced monitoring capabilities that support end-to-end healthcare message tracking. A toolkit for rapid HIPAA 5010 upgrade and compliance provides pre-defined healthcare integration mappings for HIPAA standards that are fully customizable and extensible. MLLP high availability (MLLP-HA) supports failover and disaster recovery, keeping the system running over long periods without interruption. Auditing keeps track of all system changes, while alerts and notifications (SMS, email, etc.) help users act quickly and provide real-time tracking.
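
    As background on one of the exchange standards mentioned above: MLLP v1.0 simply wraps each HL7 message in a one-byte start block (0x0B) and a two-byte end block (0x1C, 0x0D) before sending it over TCP/IP. A minimal, illustrative C# sketch of that framing (showing the protocol itself, not Oracle's implementation):

        using System.Text;

        static class Mllp
        {
            // Wrap an HL7 v2 message in MLLP v1 framing: <VT> message <FS><CR>
            public static byte[] Frame(string hl7Message)
            {
                byte[] payload = Encoding.ASCII.GetBytes(hl7Message);
                byte[] framed = new byte[payload.Length + 3];
                framed[0] = 0x0B;                       // start block (vertical tab)
                payload.CopyTo(framed, 1);
                framed[framed.Length - 2] = 0x1C;       // end block (file separator)
                framed[framed.Length - 1] = 0x0D;       // carriage return
                return framed;
            }
        }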

    Read the article

  • Process, Participate, Play: Oracle BPM and SOA at Oracle OpenWorld

    - by Oracle OpenWorld Blog Team
    Oracle OpenWorld 2012 provides a unique opportunity for BPM and SOA professionals to meet industry leaders and peers, and get insight into the latest product advancements that will help their companies gain a competitive advantage. Via a variety of sessions, hands-on labs, birds-of-a-feather sessions, and demos, attendees will learn how Oracle SOA Suite, Oracle BPM Suite, and Oracle SOA Governance provide a unified and collaborative environment for design and deployment of dynamic business processes. Topics include architecture, integration, implementation, and best practices for on-premises or cloud deployments. Participants will learn how new capabilities of BPM and SOA can help their enterprises gain unprecedented visibility, agility, and efficiencies. Maximize the value of attending Oracle OpenWorld by attending the sessions that best meet your needs and goals. This exciting series of SOA and BPM sessions is focused on three different audience segments. Business managers or business analysts, click here. IT executives or enterprise architects, click here. Developers looking to sharpen their SOA skills, click here. To stay in touch with the details and announcements for Oracle BPM Suite and Oracle SOA Suite, check out the BPM and SOA blogs.

    Read the article

  • Partner Webcast – Weblogic for Developers - 12 July 2012

    - by Thanos
    Oracle WebLogic Server is the industry's leading application server for deploying Java EE applications, with support for new features that lower the cost of operations, improve performance, and enhance scalability. But it's also a great choice for Java developers because of differentiating capabilities that facilitate integration with other tools and frameworks, and promote reusability and rapid redeployment of your applications. During the webinar we're going to explore these differentiating features in more detail. Agenda: Java EE standards support in different WebLogic Server versions; WebLogic classloading (how WebLogic loads classes, the filtering classloader, shared libraries, the Classloader Analysis Tool); Spring support in WebLogic; WebLogic integration with Apache Maven; advanced deployment features (FastSwap, side-by-side deployment); Q&A session. Delivery format: this FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Duration: 1 hour. Register Now! For any questions please contact us at [email protected]. Visit our ISV Migration Center blog regularly, or follow us @oracleimc to learn more about Oracle technologies as well as upcoming partner webcasts and events.

    Read the article

  • The Mozilla Foundation counters Microsoft's criticism of WebGL, arguing Silverlight 5 is exposed to the same risks

    Mozilla counters Microsoft's criticism of WebGL, arguing that Silverlight is exposed to the same risks. Update, 20 June 2011: Mozilla has responded to the controversy stirred up by Microsoft's recent scathing statement against WebGL, the web standard for GPU-accelerated 3D graphics. Mike Shaver, the foundation's Vice President of Technical Strategy, rejects outright the criticisms on the basis of which Microsoft announced it has no intention of implementing WebGL in Internet Explorer (see the earlier coverage for details on Microsoft's announcement)...

    Read the article

  • IBM "per core" comparisons for SPECjEnterprise2010

    - by jhenning
    I recently stumbled upon a blog entry from Roman Kharkovski (an IBM employee) comparing some SPECjEnterprise2010 results for IBM vs. Oracle. Mr. Kharkovski's blog claims that SPARC delivers half the transactions per core vs. POWER7. Prior to any argument, I should say that my predisposition is to like Mr. Kharkovski, because he says that his blog is intended to be factual; that the intent is to try to avoid marketing hype and FUD tactics; and mostly because he features a picture of himself wearing a bike helmet (me too). Therefore, in a spirit of technical argument, rather than FUD fight, there are a few areas in his comparison that should be discussed.

    Scaling is not free

    For any benchmark, if a small system scores 13k using quantity R1 of some resource, and a big system scores 57k using quantity R2 of that resource, then, sure, it's tempting to divide: is 13k/R1 > 57k/R2? It is tempting, but not necessarily educational. The problem is that scaling is not free. Building big systems is harder than building small systems. Scoring 13k/R1 on a little system provides no guarantee whatsoever that one can sustain that ratio when attempting to handle more than 4 times as many users.

    Choosing the denominator radically changes the picture

    When ratios are used, one can vastly manipulate appearances by the choice of denominator. In this case, lots of choices are available for the resource to be compared (R1 and R2 above). IBM chooses to put cores in the denominator. Mr. Kharkovski provides some reasons for that choice in his blog entry. And yet, it should be noted that the very concept of a core is: arbitrary (not necessarily comparable across vendors); fluid (modern chips shift chip resources in response to load); and invisible (unless you have a microscope, you can't see it). By contrast, one can actually see processor chips with the naked eye, and they are a bit easier to count. If we put chips in the denominator instead of cores, we get: 13161.07 EjOPS / 4 chips = 3290 EjOPS per chip for IBM vs. 57422.17 EjOPS / 16 chips = 3588 EjOPS per chip for Oracle. The choice of denominator makes all the difference in the appearance. Speaking for myself, dividing by chips just seems to make more sense, because: I can see chips and count them; I can accurately compare the number of chips in my system to the count in some other vendor's system; and the probability of being able to continue to accurately count them over the next 10 years of microprocessor development seems higher than the probability of being able to accurately and comparably count "cores".

    SPEC Fair Use requirements

    Speaking as an individual, not speaking for SPEC and not speaking for my employer, I wonder whether Mr. Kharkovski's blog article, taken as a whole, meets the requirements of the SPEC Fair Use rule (www.spec.org/fairuse.html, section I.D.2). For example, Mr. Kharkovski's footnote (1) begins: "Results from http://www.spec.org as of 04/04/2013 Oracle SUN SPARC T5-8 449 EjOPS/core SPECjEnterprise2010 (Oracle's WLS best SPECjEnterprise2010 EjOPS/core result on SPARC). IBM Power730 823 EjOPS/core (World Record SPECjEnterprise2010 EJOPS/core result)". The questionable tactic, from a Fair Use point of view, is that there is no such metric at the designated location. At www.spec.org, you can find the SPEC metric 57422.17 SPECjEnterprise2010 EjOPS for Oracle, and you can also find the SPEC metric 13161.07 SPECjEnterprise2010 EjOPS for IBM.
Despite the implication of the footnote, you will not find any mention of 449, nor anything that says 823. SPEC says that you can, under its Fair Use rule, derive your own values; but it emphasizes: "The context must not give the appearance that SPEC has created or endorsed the derived value."

Substantiation and transparency

Although SPEC disclaims responsibility for non-SPEC information (section I.E), it says that non-SPEC data and methods should be accurate, should be explained, and should be substantiated. Unfortunately, it is difficult or impossible for the reader to independently verify the pricing: Were like units compared to like (e.g. list price to list price)? Were all components (hw, sw, support) included? Were all fees included? Note that when tpc.org shows IBM pricing, there are often items such as "PROCESSOR ACTIVATION" and "MEMORY ACTIVATION". Without the transparency of a detailed breakdown, the pricing claims are questionable.

T5 claim for "Fastest Processor"

Mr. Kharkovski several times questions Oracle's claim for fastest processor, writing: "You see, when you publish industry benchmarks, people may actually compare your results to other vendor's results." Well, as we performance people always say, "it depends". If you believe in performance-per-core as the primary way of looking at the world, then yes, the POWER7+ is impressive, spending its chip resources to support up to 32 threads (8 cores x 4 threads). Or, it just might be useful to consider performance-per-chip. Each SPARC T5 chip allows 128 hardware threads to be simultaneously executing (16 cores x 8 threads). The industry-standard benchmark that focuses specifically on processor chip performance is SPEC CPU2006. For this very well known and popular benchmark, SPARC T5 provides better performance than both POWER7 and POWER7+: for 1 chip vs. 1 chip, for 8 chips vs. 8 chips, for integer (SPECint_rate2006) and floating point (SPECfp_rate2006), for Peak tuning and for Base tuning. For example, at the 8-chip level, integer throughput (SPECint_rate2006) is 3750 for SPARC vs. 2170 for POWER7+. You can find the details at the March 2013 BestPerf CPU2006 page. SPEC is a trademark of the Standard Performance Evaluation Corporation, www.spec.org. The two specific results quoted for SPECjEnterprise2010 are posted at the URLs linked from the discussion. Results for SPEC CPU2006 were verified at spec.org 1 July 2013, and can be rechecked here.
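
The denominator argument above is easy to check with a few lines of C#. Note that the core counts are inferred from the per-core figures quoted in the footnote (13161.07 / 823 ≈ 16, 57422.17 / 449 ≈ 128) and are assumptions of this sketch, not SPEC-published values:

    using System;

    class DenominatorCheck
    {
        static void Main()
        {
            double ibmEjops = 13161.07, oracleEjops = 57422.17;   // SPECjEnterprise2010 EjOPS
            int ibmChips = 4, oracleChips = 16;                   // chip counts from the post
            int ibmCores = 16, oracleCores = 128;                 // inferred - see lead-in

            // Per core, IBM looks better: ~823 vs. ~449
            Console.WriteLine("per core: IBM {0:F0}, Oracle {1:F0}",
                ibmEjops / ibmCores, oracleEjops / oracleCores);

            // Per chip, Oracle looks better: ~3290 vs. ~3589
            Console.WriteLine("per chip: IBM {0:F0}, Oracle {1:F0}",
                ibmEjops / ibmChips, oracleEjops / oracleChips);
        }
    }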

    Read the article

  • Did You Miss It? Replay of the Value Chain Transformation now available.

    - by Stephen Slade
    This very informative webcast on transformation of the value chain is now available for replay. Hear from leading authorities in business, journalism, and academia on how traditional supply chains have been converted into high-performance value chains. Jeff Moad of Managing Executive chairs this panel of experts, including Steve Tungate, VP at Toshiba Business, on how they overcame tremendous challenges in the globally competitive print industry. Dr. Larry Lapide of MIT discusses Strategic Demand Management from a consulting perspective, and Maha Muzumdar, VP of Supply Chain Apps Marketing at Oracle, presents the roadmap and tactical approaches that leading firms take. A case study on Sun's Supply Chain Transformation is highlighted. For those considering leveraging their supply chain and using it as a strategic tool, this 50-minute webcast will be very informative. Link for the webcast: https://thomaswebinar.webex.com/thomaswebinar/lsr.php?AT=pb&SP=EC&rID=5299632&rKey=10b6e6d17448c78d

    Read the article

  • SF Bay Area Event, November 1st: SPARC 25th Anniversary at Computer History Museum

    - by Larry Wake
    For those of you in the Bay Area, there's going to be what promises to be a very interesting event at the Computer History Museum on Thursday, November 1st at 11 AM: "SPARC at 25: Past, Present and Future". The panel event will feature Sun Microsystems founders Bill Joy and Andy Bechtolsheim, SPARC luminaries such as Anant Agrawal and David Patterson, former Sun VP Bernard Lacroute, plus Oracle executives Mark Hurd, John Fowler and Rick Hetherington. For those of you who can't attend, we expect to have video of the event afterward, but if you can make it in person, this is a unique opportunity to hear from industry pioneers, as well as get insights into future SPARC innovations. Plus, you can see SPARC (and non-SPARC) related exhibits from both the Computer History Museum and the personal collections of some of the panel participants. I hope you can join us; Register today.

    Read the article

  • Unlock the full potential of Oracle Retail with Oracle Retail Consulting

    - by user801960
    In this video, Maria Porretta, Engagement Director, introduces Oracle Retail Consulting, which supports Oracle Retail customers by unlocking the potential of the software solutions they are using. Oracle Retail Consulting comprises a global team of over 300 consultants, 70 of whom are EMEA-based. 90% of the team have a retail background in either IT or business, ensuring true industry expertise and maximum business benefit. Oracle Retail Consulting offers two primary streams of service: design authority, which covers analysis and design to ensure a guided process through to implementation, and delivery ownership, which runs throughout the implementation process. Further information is available on our website regarding Oracle Retail Consulting.

    Read the article

  • Software Design for Product Verticals and Service Verticals

    - by Rachel
    In every industry there are two verticals, Product and Service, so my question is: how does the design approach change when designing software for a Product Vertical as compared to a Service Vertical? What are the pros and cons in each case? Also, in the case of a Product Vertical, how do you go about designing a product or its features, and what steps are involved? Lastly, I was reading the How Facebook Ships Code article, and it appears that Product Managers have very little influence on how the product is developed, with responsibility lying mainly with the developer for the feature. Is this good practice, why would one go for it, and what would be your comment on this kind of approach?

    Read the article

  • Oracle is #1 in the RDBMS Sector for 2011

    - by jgelhaus
    Gartner 2011 Worldwide RDBMS Market Share reports a 48.8% revenue share for Oracle (*). Gartner has published its market share numbers for 2011, based on total software revenues. According to Gartner, Oracle: is #1 in worldwide RDBMS software revenue share; holds more revenue share than its seven closest competitors combined; and grew at 18.0%, exceeding both the industry average (16.3%) and the growth rates of its closest competitors. (*) Source: Market Share: All Software Markets, Worldwide 2011 by Colleen Graham, Joanne Correia, David Coyle, Fabrizio Biscotti, Matthew Cheung, Ruggero Contu, Yanna Dharmasthira, Tom Eid, Chad Eschinger, Bianca Granetto, Hai Hong Swinehart, Sharon Mertz, Chris Pang, Asheesh Raina, Dan Sommer, Bhavish Sood, Marianne D'Aquila, Laurie Wurster and Jie Zhang. - March 29, 2012

    Read the article

  • What's your advice on a potential legal suit? [closed]

    - by ohho
    I [xxx app developer] received an email from Apple saying that a developer [of yyy app] believes I am "infringing their copyright." Description of Issue: [xxx developer] copied my application (my application is [yyy]) feature by feature. Even their donation model is completely copied from my application. Their first release was significantly later than mine, which implies copying of the application rather than parallel development. I suffered significant financial losses because of their actions, in addition to promotion problems, as many people are confused with their product. My advertising was based around the idea of a "free [yyy] application for an iPhone" and they have just taken that as a title for their application. I would appreciate if someone takes a look at their release schedule and compares it to my releases. Additionally, please take a look at their functionality and how it point by point copies the functionality of my older releases. I am asking Apple to remove their application from the App Store, and ban them from resubmitting it. Thank you for your time! [yyy developer], the developer of the [yyy] application. My response was: The code of [xxx] is written by myself, using Apple public APIs. The graphics elements are designed by myself. The user interface and app controls are independently designed and different from other [similar type] apps (please judge for yourself). In-App Purchase is a standard Apple iOS API. iAd is a standard Apple iOS API. I don't think features can be owned exclusively. In fact, my app comes with fewer features, as I prefer minimalist design. I don't think an idea can be owned exclusively. Apple responded: Thank you for your response. Unfortunately, Apple cannot serve as arbiter for disputes among third parties. Please contact [yyy developer] directly regarding your actions. You can reach [yyy developer] through: [...]. We look forward to confirmation from both parties that this issue has been resolved. If this issue is not resolved shortly, Apple may be forced to pull your application(s) from the App Store. Then I sent my response above to [yyy developer]. [yyy developer] then asked me "to provide (my) legal address and contact details that (his) lawyer requires to file a copyright infringement suit." IMO, [yyy developer]'s claim of a "feature by feature" copy is not valid: I have fewer features and a completely different user interface design. However, I don't think I can afford legal action for an app with so little financial return. So what's your advice on this? Should I just let Apple pull my app? Or is there any alternative I can consider? FYI ... UI of [xxx app] and UI of [yyy app]:

    Read the article

  • What technologies are used for game development nowadays?

    - by Monika Michael
    Whenever I ask a question about game development in an online forum, I always get suggestions like learning line drawing algorithms, bit-level image manipulation, video decompression, etc. However, looking at games like God of War 3, I find it hard to believe that these games could be developed using such low-level techniques. The sheer awesomeness of such games defies any programming methodology comprehensible to me. Besides, gaming hardware is really a monster nowadays. So it stands to reason that the developers would work at a higher level of abstraction. What is the latest development methodology in the gaming industry? How is a team of 30-35 developers (most of which is management and marketing fluff) able to make such mind-boggling games? If the question seems too general, could you explain the architecture of God of War 3? Or how you would go about producing a clone? That, I think, should be objectively answerable.

    Read the article

  • Microsoft Introduces WebMatrix

    - by Rick Strahl
    Originally published in CoDe Magazine (Editorial).

    Microsoft recently released the first CTP of a new development environment called WebMatrix, which along with some of its supporting technologies is squarely aimed at making the Microsoft Web Platform more approachable for first-time developers and hobbyists. But in the process, it also provides some updated technologies that can make life easier for existing .NET developers. Let's face it: ASP.NET development isn't exactly trivial unless you already have a fair bit of familiarity with sophisticated development practices. Stick a non-developer in front of Visual Studio .NET or even the Visual Web Developer Express edition and it's not likely that the person in front of the screen will be very productive or feel inspired. Yet other technologies like PHP and even classic ASP did provide the ability for non-developers and hobbyists to become reasonably proficient in creating basic web content quickly and efficiently. WebMatrix appears to be Microsoft's attempt to bring back some of that simplicity with a number of technologies and tools. The key is to provide a friendly and fully self-contained development environment that provides all the tools needed to build an application in one place, as well as tools that allow publishing of content and databases easily to the web server. WebMatrix is made up of several components and technologies:

    IIS Developer Express

    IIS Developer Express is a new, self-contained development web server that is fully compatible with IIS 7.5 and based on the same codebase that IIS 7.5 uses. This new development server replaces the much less compatible Cassini web server that's been used in Visual Studio and the Express editions. IIS Express addresses a few shortcomings of the Cassini server, such as the inability to serve custom ISAPI extensions (i.e., things like PHP or classic ASP), as well as not supporting advanced authentication. IIS Developer Express provides most of the IIS 7.5 feature set, giving much better compatibility between development and live deployment scenarios.

    SQL Server Compact 4.0

    Database access is a key component for most web-driven applications, but on the Microsoft stack this has mostly meant you have to use SQL Server or SQL Server Express. SQL Server Compact is not new - it's been around for a few years, but it's been severely hobbled in the past by terrible tool support and the inability to support more than a single connection, in Microsoft's attempt to avoid losing SQL Server licensing. The new release of SQL Server Compact 4.0 supports multiple connections, and you can run it in ASP.NET web applications simply by installing an assembly into the bin folder of the web application. In effect, you don't have to install a special system configuration to run SQL Compact, as it is a drop-in database engine: copy the small assembly into your BIN folder (or use it from the GAC if installed fully), create a connection string against a local file-based database file, and then start firing SQL requests. Additionally, WebMatrix includes nice tools to edit the database tables and files, along with tools to easily upsize (and hopefully downsize in the future) to full SQL Server. This is a big win, pending compatibility and performance limits. In my simple testing the data engine performed well enough for small data sets. This is not only useful for web applications, but also for desktop applications for which a fully installed SQL engine like SQL Server would be overkill.
Having a local data store in those applications that can potentially be accessed by multiple users is a welcome feature.

ASP.NET Razor View Engine

What? Yet another native ASP.NET view engine? We already have Web Forms and various different flavors of using that view engine with Web Forms and MVC. Do we really need another? Microsoft thinks so, and Razor is an implementation of a lightweight, script-only view engine. Unlike the Web Forms view engine, Razor works only with inline code, snippets, and markup; therefore, it is more in line with current thinking of what a view engine should represent. There's no support for a "page model" or any of the other Web Forms features of the full-page framework - just a lightweight scripting engine that works with plain markup plus embedded expressions and code. The markup syntax for Razor is geared for minimal typing, plus some progressive detection of where a script block/expression starts and ends. This results in a much leaner syntax than the typical ASP.NET Web Forms alligator (<% %>) tags. Razor uses the @ sign plus standard C# (or Visual Basic) block syntax to delineate code snippets and expressions. Here's a very simple example of what Razor markup looks like, along with some comment annotations:

    <!DOCTYPE html>
    <html>
        <head>
            <title></title>
        </head>
        <body>
            <h1>Razor Test</h1>

            <!-- simple expressions -->
            @DateTime.Now
            <hr />

            <!-- method expressions -->
            @DateTime.Now.ToString("T")

            <!-- code blocks -->
            @{
                List<string> names = new List<string>();
                names.Add("Rick");
                names.Add("Markus");
                names.Add("Claudio");
                names.Add("Kevin");
            }

            <!-- structured block statements -->
            <ul>
            @foreach(string name in names){
                <li>@name</li>
            }
            </ul>

            <!-- conditional code, with literal text embedded in code -->
            @if(true) {
                <text>true</text>
            }
            else {
                <text>false</text>
            }
        </body>
    </html>

Like the Web Forms view engine, Razor parses pages into code, and then executes that run-time compiled code. Effectively a "page" becomes a code file, with markup becoming literal text written into the Response stream, code snippets becoming raw code, and expressions being written out with Response.Write(). The code generated from Razor doesn't look much different from similar Web Forms code that only uses script tags; so although the syntax may look different, the operational model is fairly similar to the Web Forms engine, minus the overhead of the large Page object model. However, there are differences: Razor pages are based on a new base class, Microsoft.WebPages.WebPage, which is hosted in the Microsoft.WebPages assembly that houses all the Razor engine parsing and processing logic. Browsing through the assembly (in the generated ASP.NET Temporary Files folder or GAC) will give you a good idea of the functionality that Razor provides. If you look closely, a lot of the feature set matches ASP.NET MVC's view implementation as well as many of the helper classes found in MVC. It's not hard to guess the motivation for this sort of view engine: for beginning developers the simple markup syntax is easier to work with, although you obviously still need to have some understanding of the .NET Framework in order to create dynamic content.
The syntax is easier to read and grok, and much shorter to type than ASP.NET alligator tags (<% %>); it's also easier to understand aesthetically what's happening in the markup code. Razor also is a better fit for Microsoft's vision of ASP.NET MVC: it's a new view engine without the baggage of Web Forms attached to it. The engine is more lightweight since it doesn't carry all the features and object model of Web Forms with it, and it can be instantiated directly outside of the HTTP environment, which has been rather tricky to do for the Web Forms view engine. Having a standalone script parser is a huge win for other applications as well - it makes it much easier to create script- or meta-driven output generators for many types of applications, from code/screen generators, to simple form letters, to data merging applications with user customizability. For me personally this is a very useful side effect, and who knows, maybe Microsoft will actually standardize their scripting engines (die T4 die!) on this engine. Razor also better fits the "view-based" approach, where the view is supposed to be mostly a visual representation that doesn't hold much, if any, code. While you can still use code, the code you do write has to be self-contained. Overall I wouldn't be surprised if Razor becomes the new standard view engine for MVC in the future - and in fact there have been announcements recently that Razor will become the default script engine in ASP.NET MVC 3.0. Razor can also be used in existing Web Forms and MVC applications, although that's not working currently unless you manually configure the script mappings and add the appropriate assemblies. It's possible to do it, but it's probably better to wait until Microsoft releases official support for Razor scripts in Visual Studio. Once that happens, you can simply drop .cshtml and .vbhtml pages into an existing ASP.NET project and they will work side by side with classic ASP.NET pages.

WebMatrix Development Environment

To tie all three of these technologies together, Microsoft is shipping WebMatrix with an integrated development environment. An integrated gallery manager makes it easy to download and load existing projects, and then extend them with custom functionality. It seems to be a prominent goal to provide community-oriented content that can act as a starting point, be it via custom templates or a complete standard application. The IDE includes a project manager that works with a single project and provides an integrated IDE/editor for editing the .cshtml and .vbhtml pages. A run button allows you to quickly run pages in the project manager in a variety of browsers. There's no debugging support for code at this time. Note that Razor pages don't require explicit compilation, so making a change, saving, and then refreshing your page in the browser is all that's needed to see changes while testing an application locally. It's essentially using the auto-compiling Web Project that was introduced with .NET 2.0. All code is compiled at run time into dynamically created assemblies in the ASP.NET temp folder. WebMatrix also has PHP editing support with syntax highlighting. You can load various PHP-based applications from the WebMatrix Web Gallery directly into the IDE. Most of the Web Gallery applications are ready to install and run without further configuration, with wizards taking you through installation of tools, dependencies, and configuration of the database as needed.
WebMatrix leverages the Web Platform Installer to pull the pieces down from websites, in a tight integration of tools that worked nicely for the four or five applications I tried this out on. Click a couple of check boxes and fill in a few simple configuration options and you end up with a running application that's ready to be customized. Nice! You can easily deploy completed applications via WebDeploy (to an IIS server) or FTP directly from within the development environment. The deploy tool can also handle automatically uploading and installing the database and all related assemblies required, making deployment a simple one-click install step.

Simplified Database Access

The IDE contains a database editor that can edit SQL Compact and SQL Server databases. There is also a Database helper class that facilitates database access by providing easy-to-use, high-level query execution and iteration methods:

    @{
        var db = Database.OpenFile("FirstApp.sdf");
        string sql = "select * from customers where Id > @0";
    }
    <ul>
    @foreach(var row in db.Query(sql, 1)){
        <li>@row.FirstName @row.LastName</li>
    }
    </ul>

The Query function takes a SQL statement plus any number of positional (@0, @1, etc.) SQL parameters as simple values. The result is returned as a collection of rows, each of which is a row object with dynamic properties for each of the columns, giving easy (though untyped) access to each of the fields. Likewise, Execute and ExecuteNonQuery allow execution of more complex queries using similar parameter passing schemes. Note these queries use string-based queries rather than LINQ or Entity Framework's strongly typed LINQ queries. While this may seem like a step back, it's also in line with the expectations of non-.NET script developers who are quite used to writing and using SQL strings in code rather than using OR/M frameworks. The only question is why something like this wasn't included in .NET from the beginning, leaving developers to build custom implementations of these basic building blocks. The implementation looks a lot like a DataTable-style data access mechanism, but to be fair, this is a common approach in scripting languages. This type of syntax, which uses simple, static data-object methods to perform simple data tasks with one line of code, is common in scripting languages and is a good match for folks working in PHP/Python, etc. It seems Microsoft has taken great advantage of .NET 4.0's dynamic typing to provide this sort of interface for row iteration, where each row has properties for each field. FWIW, all the examples demonstrate using local SQL Compact files - I was unable to get a SQL Server connection string to work with the Database class (the connection string wasn't accepted). However, since the code in the page is still plain old .NET, you can easily use standard ADO.NET code, or even LINQ or Entity Framework models that are created outside of WebMatrix in separate assemblies, as required.

The good, the bad, the obnoxious - it's still .NET

The beauty (or curse, depending on how you look at it :)) of Razor and the compilation model is that, behind it all, it's still .NET. Although the syntax may look foreign, it's all .NET behind the scenes. You can easily access existing tools, helpers, and utilities simply by adding them to the project as references or to the bin folder. Razor automatically recognizes any assembly reference from assemblies in the bin folder.
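
For comparison, here is what the Database helper query above looks like in plain ADO.NET against the same SQL Compact file - a sketch, assuming the SQL Server Compact 4.0 managed provider (System.Data.SqlServerCe.dll) is referenced from the bin folder:

    using System;
    using System.Data.SqlServerCe;

    class AdoNetEquivalent
    {
        static void Main()
        {
            // Same query as the Razor Database helper example, with an explicit
            // connection, command, and parameter instead of positional values.
            using (var conn = new SqlCeConnection(@"Data Source=|DataDirectory|\FirstApp.sdf"))
            using (var cmd = new SqlCeCommand("select * from customers where Id > @id", conn))
            {
                cmd.Parameters.AddWithValue("@id", 1);
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine("{0} {1}", reader["FirstName"], reader["LastName"]);
                }
            }
        }
    }

The helper collapses all of this ceremony into two lines, which is exactly the kind of bread-and-butter simplification discussed below.
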
In the default configuration, Microsoft provides a host of helper functions in a Microsoft.WebPages assembly (check it out in the ASP.NET temp folder for your application), which includes a number of HTML helpers. If you've used ASP.NET MVC before, a lot of the helpers should look familiar. Documentation at the moment is sketchy - there's a very rough API reference you can check out here: http://www.asp.net/webmatrix/tutorials/asp-net-web-pages-api-reference

Who needs WebMatrix? Uhm… good question

Clearly Microsoft is trying hard to create an environment with WebMatrix that is easy to use for newbie developers. The goal seems to be simplicity: a minimal development environment and an easy-to-use script engine/language that makes it easy to get started. There's also some focus on community features that can be used as starting points, such as Web Gallery applications and templates. The community features in particular are very nice and something that would be good to eventually see in Visual Studio as well. The question is whether this is too little, too late. Developers who have been clamoring for a simpler development environment on the .NET stack have mostly left for simpler platforms like PHP or Python, which cater to the down-and-dirty developer. Microsoft will be hard pressed to win those folks - and other hardcore PHP developers - back. Regardless of how much you dress up a script engine fronted by the .NET Framework, it's still the .NET Framework and all the complexity that drives it. While .NET is a fine solution in its breadth and features once you get a basic handle on the core features, the bar of entry to being productive with the .NET Framework is still pretty high. The MVC-style helpers Microsoft provides are a good step in the right direction, but I suspect they're not enough to shield new developers from having to delve much deeper into the Framework to get even basic applications built. Razor and its helpers try to make .NET more accessible, but the reality is that in order to do useful stuff that goes beyond the handful of simple helpers, you are still going to have to write some C# or VB or other .NET code. If the target is a hobby/amateur/non-programmer, the learning curve isn't made any easier by WebMatrix; it's just been shifted a bit further along in your development endeavor, to the point when you run out of canned components supplied either by Microsoft or the community. The database helpers are interesting, and actually I've heard a lot of discussion from various developers who've been resisting .NET for a really long time perking up at the prospect of easier data access in .NET than the ridiculous amount of code it takes to do even simple data access with raw ADO.NET. It seems sad that such a simple concept and implementation should trigger this sort of response (especially since it's practically trivial to create helpers like these or pick them up from countless libraries available), but there it is. It also shows that there are plenty of developers out there who are more interested in 'getting stuff done' easily than necessarily following the latest and greatest practices, which are overkill for many development scenarios. Sometimes it seems that all of .NET is focused on the big life-changing issues of development, rather than the bread-and-butter scenarios that many developers are interested in to get their work accomplished.
And that in the end may be WebMatrix's main raison d'être: to bring some focus back at Microsoft on the fact that simpler, more high-level solutions are actually needed to appeal to non-high-end developers, while still providing the necessary tools for the high-end developers who want to follow the latest and greatest trends. The current version of WebMatrix hits many sweet spots, but it also feels like it has a long way to go before it really can be a tool that a beginning developer or an accomplished developer can feel comfortable with. Although there are some really good ideas in the environment (like the gallery for downloading apps and components), which would be a great addition for Visual Studio as well, the rest of the development environment just feels like crippleware, with required functionality missing - especially debugging and IntelliSense, but also general editor support. It's not clear whether this is because the product is still in an early alpha release or whether it's simply designed to be a really limited development environment. While simple can be good, nobody wants to feel left out when it comes to necessary tool support, and WebMatrix just has that left-out feeling to it. If anything, WebMatrix's technology pieces (which are really independent of the WebMatrix product) are what is interesting to developers in general. The compact IIS implementation is a nice improvement for development scenarios, and SQL Compact 4.0 seems to address a lot of concerns that people have had, and have complained about for some time, with previous SQL Compact implementations. By far the most interesting and useful technology, though, seems to be the Razor view engine, for its lightweight implementation and its decoupling from the ASP.NET/HTTP pipeline to provide a standalone, pluggable scripting/view engine. The first winner of this is going to be ASP.NET MVC, which can now have a cleaner view model that isn't made inconsistent by the baggage of non-implemented Web Forms features that don't work in MVC. But I expect that Razor will eventually end up in many other applications as a scripting and code generation engine. Visual Studio integration for Razor is currently missing, but is promised for a later release. The ASP.NET MVC team has already mentioned that Razor will eventually become the default MVC view engine, which will guarantee continued growth and development of this tool along those lines. And the Razor engine and support tools actually inherit many of the features that MVC pioneered, so there's some synergy flowing both ways between Razor and MVC. As an existing ASP.NET developer who's already familiar with Visual Studio and ASP.NET development, the WebMatrix IDE doesn't give you anything that you want. The tools provided are minimal and offer nothing that you can't get in Visual Studio today, except the minimal Razor syntax highlighting, so there's little need to take a step back. With Visual Studio integration coming later, there's little reason to look at WebMatrix for tooling. It's good to see that Microsoft is giving some thought to the ease of use of .NET as a platform. For so many years, we've been piling on more and more new features without taking a step back to see how complicated the development/configuration/deployment process has become. Sometimes it's good to take a step - or several steps - back, take another look, and realize just how far we've come.
WebMatrix is one of those reminders, and one that likely will result in some positive changes on the platform as a whole. © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET, IIS7.

    Read the article

  • Cumulative Updates available for SQL Server 2008 SP2 & SP3

    - by AaronBertrand
    Today Microsoft has released two new cumulative updates for SQL Server 2008. Cumulative Update #10 for SQL Server 2008 Service Pack 2: Knowledge Base article KB #2696625; at the time of writing, there is one fix listed; the build number is 10.00.4332. Cumulative Update #5 for SQL Server 2008 Service Pack 3: Knowledge Base article KB #2696626; at the time of writing, there are four fixes listed; the build number is 10.00.5785. As usual, I'll post my standard disclaimer here: these updates are NOT for SQL...(read more)

    Read the article

  • Partner Spotlight: Deloitte

    - by kellsey.ruppel
    Deloitte is an Oracle Platinum level partner and has held the highest level of alliance relationship with Oracle for more than a decade. Deloitte has extensive experience implementing Oracle solutions across geographic and organizational boundaries. With more than 45,000 professionals worldwide, Deloitte has helped many Oracle WebCenter customers—including Land O’Lakes, Canadian Partnership Against Cancer, and Panda Security—deploy successful portal, collaboration, and composite application solutions. Deloitte was also the recipient of six Oracle North American Titan Awards for its deep industry experience and breadth of capabilities across Oracle’s stack of application, middleware, and hardware products. Learn more about the Deloitte/Oracle partnership in this brochure. 

    Read the article

  • Top 10 Reasons to Use MySQL and MySQL Cluster as an Embedded Database

    - by Rob Young
    If you are considering using MySQL and/or MySQL Cluster as the embedded database solution for your application, you should join us for today's webcast where we will discuss how you can cut costs, add flexibility and benefit from new performance and scalability enhancements that are now available in MySQL 5.6 and MySQL Cluster 7.2.  We will cover the top 10 reasons that make MySQL and MySQL Cluster the best solutions for embedding in both shrink wrapped and SaaS provided applications, how industry leaders leverage MySQL products and how you can get started with the latest innovations and support offerings across the MySQL product line. You can learn more and reserve your seat here. As always, thanks for your support of MySQL!

    Read the article

  • How could this diagram of the current most lucrative technologies be improved?

    - by Edward Tanguay
    I'm giving a talk in April 2011 on "Developer English", showing my non-developer audience, mostly English teachers, various diagrams to explain how developers see their industry, etc. One of these diagrams is "Hot Technologies": basically, if you want to become a developer, what technologies should you learn to have the highest chance of (1) getting a job, (2) making a good salary, and (3) working with the most exciting technology? This is a draft I made just to get some ideas out; basically, C#, PHP, and Java are where the bulk of the jobs are. Mobile development has a big future. JavaScript is becoming more and more important, and I want to list "minor technologies" such as Python and Ruby on Rails to the side; I assume, e.g., that in general there is a much smaller percentage of jobs in these technologies than in C#, PHP, and Java. How could this diagram be improved?

    Read the article

  • Customer-Centric Wealth Management

    - by michael.seback
    While the world continues to search its way out of the recent financial turmoil and recession, the crisis has undoubtedly exposed the inherent faults in the wealth management industry and the larger financial system. In order to counter these apprehensions, wealth management firms are now actively seeking and evaluating avenues to rebuild the lost trust. They are looking at engaging their customers in managing their investments in a more collaborative and transparent manner. At the same time, wealth managers are also seeking to empower themselves with complete and comprehensive customer information in order to provide the best advice and the best solution at the right time. Read your copy of this new global White Paper on Wealth Management.

    Read the article

  • Information Rights Management 11g Release Highlights

    - by andy.peet
    Broader Enterprise Reach - built on Fusion Middleware and Java EE; broad platform certifications; the standard 27 Oracle languages; SSO authentication (OAM, Windows auth, Basic auth to LDAP). Extensible, First-Class Security - extensible classification model for application integrations; FIPS 140-2 certification; Hardware Security Module for key storage. Usability and Templates - new Web-based management console; best-practice rights model with global roles and templates. For more information see the new material available on OTN, including the Developer Area and whitepaper, and of course the IRM Blog.

    Read the article

  • Good abbreviations for XML ... things

    - by Peter Turner
    I've never been very good at maintaining a coherent set of variable names for interfacing with XML files, because I never name the variables in my interfaces the same way across my source. There are Elements, Attributes, Documents, NodeLists, Nodes, DocumentFragments, and other stuff. What's a good scheme for keeping track of this stuff in variable names? Is there a standard in regard to Hungarian notation? Do you even put anything in the name signifying that the data is actually XML, or is this bad practice?
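
    For illustration, here is one suffix-based convention that turns up in C# codebases - a sketch of one possible scheme (the suffixes are arbitrary choices, not an established standard):

        using System.Xml;

        class XmlNamingSketch
        {
            static void Main()
            {
                XmlDocument configDoc = new XmlDocument();         // ...Doc   = XmlDocument
                configDoc.LoadXml("<servers><server port='80'/></servers>");

                XmlElement rootEl = configDoc.DocumentElement;     // ...El    = XmlElement
                XmlNodeList serverNodes =
                    rootEl.GetElementsByTagName("server");         // ...Nodes = XmlNodeList
                XmlAttribute portAttr =
                    ((XmlElement)serverNodes[0]).GetAttributeNode("port"); // ...Attr = XmlAttribute
                System.Console.WriteLine(portAttr.Value);
            }
        }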

    Read the article
