Search Results

Search found 22884 results on 916 pages for 'team build'.


  • Additional new material WebLogic Community

    - by JuergenKress
    Oracle Cloud Application Foundation 12c Helps Customers Deliver Next-Generation Applications on a Mission-Critical Cloud Platform - In a recent online event, Oracle and industry speakers introduced Oracle Cloud Application Foundation 12c, including Oracle WebLogic 12.1.2 and Oracle Coherence 12.1.2. Read More
    Team Spotlight: Mike Lehmann, Vice President of Product Management - Meet the team behind Oracle Fusion Middleware. In this edition, we speak to Mike Lehmann, Oracle's vice president of product management for Oracle Cloud Application Foundation, Oracle WebLogic Server, Oracle Coherence, Java Cloud Services, and Java Platform, Enterprise Edition. Read More
    New and Free: Learn Oracle Application Development Framework Mobile Online at Your Convenience - Are you ready to go mobile? Check out this new tutorial from Oracle's ADF Academy: Developing Applications with Oracle Application Development Framework Mobile.
    New: Oracle JDeveloper 12c and Oracle Application Development Framework 12c - Announcing Oracle JDeveloper 12c and Oracle Application Development Framework 12c. New capabilities include HTML5, better Maven support, Git support, new Oracle ADF Faces components, improved REST support, Enterprise JavaBeans/Java Persistence API, and the latest support for Oracle WebLogic Server 12.1.2. Get more details and download.
    New: Oracle Enterprise Pack for Eclipse 12c - The best Eclipse-based tools for Oracle WebLogic and Oracle Coherence continue to get better. Check out the latest Oracle WebLogic and Oracle Coherence support, improved Oracle Application Development Framework support, Maven, and more.
    Register: Oracle WebLogic Devcast Series - Join us for the upcoming Oracle WebLogic Devcast webcast.
    Oracle GlassFish Server 3.1.2 and 2.1.1 updates, An Overview of JSON-P, and a Comprehensive Free Java EE 6 Video Tutorial.
    Java ME Embedded 3.3 and Java ME Software Development Kit (SDK) 3.3 Now Available - Optimized for microcontrollers and other resource-constrained devices, this release reduces the "core plumbing" an app needs and includes more information about memory and network usage, critical for low-power apps.
    JDK 8 Early Access Releases now available.
    JDK 8 Early Access Developer Documentation - Get the latest documentation changes to the Java Developer Guides and the Java Tutorials.
    NetBeans IDE 7.4 Beta - This release extends HTML5 features to Java EE and PHP application development, introduces new support for hybrid HTML5 development on Android and iOS platforms, and adds preview support for JDK 8.
    WebLogic Partner Community - For regular information, become a member of the WebLogic Partner Community: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.
    Technorati Tags: WebLogic, WebLogic Community, Oracle, OPN, Jürgen Kress

    Read the article

  • FileStream and FileTable in SQL Server 2012

    SQL Server 2012 enhances the SQL Server 2008 FILESTREAM data type by introducing FileTable, which lets an application integrate its storage and data management components, allows non-transactional file-system access to the data, and provides integrated SQL Server services. Arshad Ali explains how.
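
    As a rough illustration of the non-transactional access a FileTable exposes, here is a minimal sketch (not from the article) that asks SQL Server for the file-system share backing a hypothetical FileTable named dbo.Documents and then reads it like an ordinary directory. The connection string, database, and table name are assumptions; it presumes pyodbc and a FILESTREAM-enabled SQL Server 2012 instance.

    ```python
    # Sketch: browse files stored in a hypothetical FileTable (dbo.Documents)
    # through the Windows share that SQL Server 2012 exposes for it.
    import os
    import pyodbc

    # Hypothetical connection details - adjust driver, server, and database.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=localhost;DATABASE=FileTableDemo;Trusted_Connection=yes;"
    )
    cursor = conn.cursor()

    # FileTableRootPath() returns the UNC path of the directory backing the table.
    cursor.execute("SELECT FileTableRootPath('dbo.Documents')")
    root = cursor.fetchone()[0]

    # Non-transactional access: the same rows appear as plain files on the share.
    for name in os.listdir(root):
        print(os.path.join(root, name))
    ```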

    Read the article

  • Does Ubuntu 11.10 support ns2.29?

    - by nasser
    I need your help. I'm a beginner with NS2, and I'm trying to install ns2.29 on Ubuntu 11.10 32-bit but I can't. This message appears and the installation stops:

    Build tcl8.4.11
    ============================================================
    loading cache ./config.cache
    checking whether to use symlinks for manpages... no
    checking whether to compress the manpages... no
    checking whether to add a package name suffix for the manpages... no
    checking for gcc... gcc
    checking whether the C compiler (gcc ) works... yes
    checking whether the C compiler (gcc ) is a cross-compiler... no
    checking whether we are using GNU C... yes
    checking whether gcc accepts -g... yes
    checking for building with threads... no (default)
    checking if the compiler understands -pipe... yes
    checking how to run the C preprocessor... gcc -pipe -E
    checking for sin... no
    checking for main in -lieee... yes
    checking for main in -linet... no
    checking for net/errno.h... no
    checking for connect... yes
    checking for gethostbyname... yes
    checking how to build libraries... static
    checking for ranlib... ranlib
    checking if 64bit support is requested... no
    checking if 64bit Sparc VIS support is requested... no
    checking system version (for dynamic loading)... ./configure: 1: Syntax error: Unterminated quoted string
    tcl8.3.2 configuration failed! Exiting ...
    Tcl is not part of the ns project. Please see www.Scriptics.com to see if they have a fix for your platform.

    Can anyone help me?

    Read the article

  • Current SPARC Architectures

    - by Darryl Gove
    Different generations of SPARC processors implement different architectures. The architecture that the compiler targets is controlled implicitly by the -xtarget flag and explicitly by the -xarch flag. If an application targets a recent architecture, then the compiler gets to play with all the instructions that the new architecture provides. The downside is that the application won't work on older processors that don't have the new instructions. So for developers there is a trade-off between performance and portability. The way we have solved this in the compiler is to assume a "generic" architecture, and we've made this the default behaviour of the compiler. The only flag that doesn't make this assumption is -fast, which tells the compiler to assume that the build machine is also the deployment machine - so the compiler can use all the instructions that the build machine provides. The -xtarget=generic flag tells the compiler explicitly to use this generic model. We work hard on making generic code work well across all processors, so in most cases this is a very good choice. It is also of interest to know which processors support the various architectures. The following Venn diagram attempts to show this. A textual description is as follows:
    The T1 and T2 processors, in addition to most other SPARC processors shipped in the last 10+ years, support V9b, or sparcvis2.
    The SPARC64 processors from Fujitsu, used in the M-series machines, added support for the floating point multiply accumulate instruction in the sparcfmaf architecture.
    Support for this instruction also appeared in the T3 - this is called sparcvis3.
    Later SPARC64 processors added the integer multiply accumulate instruction; this architecture is sparcima.
    Finally, the T4 includes support for both the integer and floating point multiply accumulate instructions in the sparc4 architecture.
    So the conclusion should be: floating point multiply accumulate is supported in both the T-series and M-series machines, so it should be a relatively safe bet to start using it. The T4 is a very good machine to deploy to because it supports all the current instruction sets.

    Read the article

  • Web Application Development - The Innovative Idea Helping Customers

    Web application development helps build websites on platforms that enhance the client's business and elevate its operational excellence. Web application development is highly popular and is used across the globe. It is the professional web design team that studies the client's requirements and brings out an innovative idea that will assist the client's business.

    Read the article

  • Games at Work Part 2: Gamification and Enterprise Applications

    - by ultan o'broin
    Gamification and Enterprise Applications
    In part 1 of this article, we explored why people are motivated to play games so much. Now, let's think about what that means for Oracle applications user experience. (Even the coffee is gamified. Acknowledgement @noelruane. Check out the Guardian article Dublin's Frothing with Tech Fever. Game development is big business in Ireland too.)
    Applying game dynamics (gamification) effectively in the enterprise applications space to reflect business objectives is now a hot user experience topic. Consider, for example, how such dynamics could solve applications users' problems such as:
    Becoming familiar or expert with an application or process
    Building loyalty, customer satisfaction, and branding relationships
    Collaborating effectively and populating content in the community
    Completing tasks or solving problems on time
    Encouraging teamwork to achieve goals
    Improving data accuracy and completeness of entry
    Locating and managing the correct resources or information
    Managing changes and exceptions
    Setting and reaching targets, quotas, or objectives
    Games' Incentives, Motivation, and Behavior
    I asked Julian Orr, Senior Usability Engineer in the Oracle Fusion Applications CRM User Experience (UX) team, for his thoughts on what potential gamification might offer Oracle Fusion Applications. Julian pointed to the powerful incentives offered by games as the starting place: "The biggest potential for gamification in enterprise apps is as an intrinsic motivator. Mechanisms include fun, social interaction, teamwork, primal wiring, adrenaline, financial, closed-loop feedback, locus of control, flow state, and so on. But we need to know what works best for a given work situation." For example, in CRM service applications, we might look at the motivations of typical service applications users (see figure 1) and then determine how we can 'gamify' these motivations with techniques to optimize the desired work behavior for the role (see figure 2).
    Involving Our Users
    Online game players are skilled collaborators as well as problem solvers. Erika Webb (@erikanollwebb), Oracle Fusion Applications UX Manager, has run gamification events for Oracle, including one on collaboration and gamification in Oracle online communities that involved Oracle customers and partners. Read more... However, let's be clear: gamifying a user interface that's poorly designed is merely putting the lipstick of gamification on the pig of work. Gamification cannot replace good design and killer content based on understanding how applications users really work and what motivates them.
    So, Let the Games Begin!
    Gamification has tremendous potential for the enterprise application user experience. The Oracle Fusion Applications UX team is innovating fast and hard in this area, researching with our users how gamification can make work more satisfying and enterprises more productive. If you're interested in knowing more about our gamification research, sign up for more information or check out how your company can get involved through the Oracle Usability Advisory Board. Your thoughts? Find those comments.

    Read the article

  • Is event sourcing ready for prime time?

    - by Dakotah North
    Event Sourcing was popularized by LMAX as a means to provide speed, scalability, transparent persistence, and transparent live mirroring. Before being rebranded as Event Sourcing, this type of architectural pattern was known as System Prevalence, yet I was never familiar with it before the LMAX team went public. Has this pattern proved itself in enough production systems that even conservative developers should feel empowered to embrace it, or is event sourcing / system prevalence an exotic pattern that is best left for the fearless?
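
    For readers new to the pattern, here is a minimal, hypothetical sketch of the core idea behind event sourcing: current state is never stored directly but is rebuilt by replaying an append-only log of events. The account and event names are invented for illustration; LMAX's actual implementation is far more involved.

    ```python
    # Minimal event-sourcing sketch: state is derived by replaying an append-only event log.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Deposited:
        amount: int

    @dataclass(frozen=True)
    class Withdrawn:
        amount: int

    class Account:
        def __init__(self):
            self.balance = 0
            self.log = []           # the event store; in production this would be durable

        def apply(self, event):
            # Pure state transition: the only place state changes.
            if isinstance(event, Deposited):
                self.balance += event.amount
            elif isinstance(event, Withdrawn):
                self.balance -= event.amount

        def record(self, event):
            self.log.append(event)  # append-only: events are never updated or deleted
            self.apply(event)

        @classmethod
        def replay(cls, events):
            # Rebuild current state (or any historical state) from the log.
            account = cls()
            for event in events:
                account.record(event)
            return account

    acct = Account()
    acct.record(Deposited(100))
    acct.record(Withdrawn(30))
    rebuilt = Account.replay(acct.log)
    assert rebuilt.balance == acct.balance == 70
    ```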

    Read the article

  • Updates about Multidimensional vs Tabular #ssas #msbi

    - by Marco Russo (SQLBI)
    I recently read the blog post from James Serra, Tabular model: Not ready for prime time? (read the comments too, because there are discussions about a few points raised by James), and the following post from Christian Wade, Multidimensional or Tabular. In the last two years I have worked with many companies adopting Tabular in different scenarios, and I agree with some of the points expressed by James in his post (especially about features missing in Tabular compared to Multidimensional), but I strongly disagree with others. In general, Tabular is a good choice for a new project when:
    the development team does not have a good knowledge of Multidimensional and MDX (DAX is faster to learn - not as easy as Microsoft claims, but definitely easier than MDX)
    you don't need calculations based on hierarchies (common in certain financial applications, but not as common as it might seem)
    there are important calculations based on distinct count measures
    there are complex calculations based on many-to-many relationships
    Until now, I have never suggested migrating an existing Multidimensional model to a Tabular one. There would have to be very important reasons for that, such as performance issues in distinct count and many-to-many relationships that cannot be easily solved by optimizing the Multidimensional model, but I have still never encountered this scenario. I would say that in 80% of new projects you might use either Multidimensional or Tabular, and the real difference is the time-to-market depending on the skills of the development team. So it's not strange that those who are used to Multidimensional are not moving to Tabular; they don't get a particular benefit from the new model unless specific requirements exist. The recent DAXMD feature that allows using SharePoint Power View on Multidimensional is a really important one, even if I'd also like to see Excel Power View enabled for this scenario (this should be just a question of time). Another scenario in which I'm seeing growing adoption of Tabular is in companies that create models for their product/service and do so using XMLA or Tabular AMO 2012. I am used to calling them ISVs, even if those providing services cannot really be defined that way. These companies are facing the multitenancy challenge with Tabular, and even if this is a niche market, I see some potential here, because adopting Tabular seems a much more natural choice than Multidimensional in those scenarios where an analytical engine has to be embedded to deliver one of the features of a larger product/service delivered to customers. I'd like to see other feedback in the comments: tell your story of choosing between Tabular and Multidimensional in a BI project you started with SQL Server 2012. Thanks!

    Read the article

  • SQL Server 2012 Service Pack 1 CTP4 is available

    - by AaronBertrand
    This morning the SQL Server team announced the release of Service Pack 1 CTP4 for SQL Server 2012. Back in July I talked about CTP3 and how the release contained BI features only; no fixes. The newer CTP does have fixes and other engine enhancements as well; there is even proper documentation in Books Online about the enhancements. The download page also lists them: http://www.microsoft.com/en-us/download/details.aspx?id=34700 The build # is 11.0.2845....(read more)

    Read the article

  • Next Generation Directory @ Oracle Open World

    - by Etienne Remillon
    Oracle OpenWorld 2012 is bigger, better, and more educational than ever before, and identity management activities are no exception. For all identity-related activities, check this entry, or this handy PDF. Do you focus more specifically on directory? Come and meet with the directory team at:
    Our session: Next Generation Directory: Oracle Unified Directory / session #CON946 / Tuesday Oct 2 5:00 pm / Moscone West L3, Room 3008
    Our demo pod: Oracle Directory Services Plus: Performant, Cloud-Ready demo / Moscone South, Right - S-222
    Demonstration Hours @ Moscone South: Mon 10:00 - 6:00 / Tues 09:45 - 6:00 / Wed 09:45 - 4:00

    Read the article

  • Customer won't decide, how to deal?

    - by Crazy Eddie
    I write software that involves the use of measured quantities, many input by the user, most displayed, that are fed into calculation models to simulate various physical thing-a-majigs. We have created a data type that allows us to associate a numeric value with a unit; we call these "quantities" (big duh). Quantities and units are unique to a dimension. You can't attach kilogram to a length, for example. Math on quantities does automatic unit conversion to SI, and the type is dimension safe (you can't assign a weight to a pressure, for example). Custom UI components have been developed that display the value and its unit and/or allow the user to edit them. Dimensionless quantities, having no units, are a single, custom case implemented within the system.
    There's a set of related quantities such that our target audience apparently uses them interchangeably. The quantities are used in special units that embed the conversion factors for the related quantity dimensions - in other words, when using these units, converting from one to another simply involves multiplying the value by 1 to the dimensional difference. However, conversion to/from the calculation system (SI) still involves these factors. One of these related quantities is a dimensionless one that represents a ratio.
    I simply can't get the "customer" to recognize the necessity of distinguishing these values and their use. They've picked one and want to use it everywhere, customizing the way we deal with it in special places. In this case they've picked one of the dimensions that has a unit... BUT, they don't want there to be a unit (GRR!!!). This of course is causing us to implement these special overrides for our UI elements and such. That of course is oftentimes forgotten and, worse, after a couple of months everyone forgets why it was necessary and why we're using this dimensional value, calling it the wrong thing, and disabling the unit.
    I could just ignore the "customer" and implement the type as the dimensionless quantity, which makes the most sense. However, that leaves the team responsible for figuring it out when they've given us a formula using one of the other quantities. We have to not only figure out that it's happening, we have to decide what to do. This isn't a trivial deal. The other option is just to say to hell with it, do it the customer's way, and let it waste continued time and effort because it's just downright confusing as hell. However, I can't count the number of times someone has said, "Why is this being done this way? It makes no sense at all," and the team goes off the deep end trying to figure it out.
    What would you do? Currently I'm still attempting to convince them that even if they use terms interchangeably, we at the least can't do that within the product discussion. I don't have high hopes, though.
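
    A minimal sketch of the kind of dimension-safe quantity type described above (the class name and the three-exponent dimension encoding are invented for illustration, not the poster's actual design): values carry a dimension, addition requires matching dimensions, and multiplication combines them.

    ```python
    # Minimal sketch of a dimension-safe quantity: a value plus a dimension signature.
    # Dimensions are hypothetical (mass, length, time) exponents; real systems track more.
    class Quantity:
        def __init__(self, value, dim):
            self.value = value
            self.dim = dim            # e.g. (1, 0, 0) == mass, (0, 1, 0) == length

        def __add__(self, other):
            if self.dim != other.dim:
                raise TypeError(f"cannot add {self.dim} to {other.dim}")
            return Quantity(self.value + other.value, self.dim)

        def __mul__(self, other):
            # Multiplication combines dimensions by adding exponents.
            dim = tuple(a + b for a, b in zip(self.dim, other.dim))
            return Quantity(self.value * other.value, dim)

        def __repr__(self):
            return f"Quantity({self.value}, dim={self.dim})"

    MASS = (1, 0, 0)
    LENGTH = (0, 1, 0)
    DIMENSIONLESS = (0, 0, 0)

    weight = Quantity(70, MASS)
    height = Quantity(1.8, LENGTH)

    print(weight + Quantity(5, MASS))   # fine: same dimension
    print(weight * height)              # dim becomes (1, 1, 0)
    # weight + height                   # would raise TypeError: dimension mismatch
    ```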

    Read the article

  • ADF Mobile @ Oracle Open World 2012 - A Look Back...

    - by Joe Huang
    Hi, everyone:
    It's been a little over two weeks since the end of Oracle Open World 2012, and I hope everyone has recovered sufficiently. We have seen a tremendous amount of coverage on Oracle ADF Mobile during this Oracle Open World. For starters, the ADF Mobile demo booth was positioned in the Oracle Red Lounge in Moscone North, where all new and innovative technologies are demonstrated. The booth is literally out front and the first booth in the area, and we had a lot of interested attendees talking to us. It feels like ADF Mobile has finally arrived on the big stage.
    There were numerous sessions and hands-on labs that covered ADF Mobile. Details can be found on the Oracle Open World page.
    The Oracle Cloud: Oracle's Cloud Platform and Application Strategy by Thomas Kurian (Keynote) - near the beginning of the keynote, showing a great analytics application built using ADF Mobile
    Oracle Fusion Middleware Strategies Driving Business Innovation by Hasan Rizvi (Keynote)
    The Future of Development for Oracle Fusion - From Desktop to Mobile to Cloud by Chris Tonas (General Session) - co-presented with Accenture, an ADF Mobile Beta Partner
    Extend Oracle Fusion Apps to Tablets/Smartphones with Oracle Mobile Technology (General Session)
    Extend Oracle Applications to Mobile Devices with Oracle's Mobile Technologies (General Session)
    Building Mobile Applications with Oracle Cloud (General Session)
    Mobile-Enable Oracle Fusion Middleware and Enterprise Applications with Oracle ADF (Conference Session) - co-presented with Infosys, an ADF Mobile Beta Partner
    Develop On-Device iPhone and iPad Apps Without Writing Any Objective-C Code (Oracle Develop Session)
    Mobile Apps for Oracle E-Business Suite with Oracle ADF Mobile and Oracle SOA Suite (Conference Session)
    Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile (Hands on Lab) - this lab was repeated 8 (!) times
    Build Mobile Applications for Oracle E-Business Suite (Hands on Lab)
    It was an extremely busy Open World for the team, and we were in the middle of trying to release ADF Mobile! By far, the most memorable event during Open World was the ADF Meet Up at the OTN Lounge, where beers were flowing (for a little while) and familiar names were finally matched with faces. We also appreciate the opportunity to interview the attendees from New Caledonia - sorry we probably surprised you with the video recording, and many thanks for coming through for us.
    I also want to thank my fellow ADF Mobile and Fusion Middleware team members - from product managers to engineers to product marketing, everyone worked extremely hard to make this Open World a great success for ADF Mobile. I really enjoyed meeting everyone at Oracle Open World, at the booth, sessions, etc. Now it's on to releasing ADF Mobile - for real!
    Thanks,
    Joe Huang
    PS: If this thread shows up on your RSS feed, please keep watching...

    Read the article

  • Finding a way to simplify complex queries on legacy application

    - by glenatron
    I am working with an existing application built on Rails 3.1/MySQL, with much of the work taking place in a JavaScript interface, although the actual platforms are not tremendously relevant here, except in that they give context. The application is powerful, handles a reasonable amount of data, and works well. As the number of customers using it and the complexity of the projects they create increase, however, we are starting to run into a few performance problems. As far as I can tell, the source of these problems is that the data represents a tree, and it is very hard for ActiveRecord to deterministically know what data it should be retrieving. My model has many relationships like this:
    Project has_many Nodes, has_many GlobalConditions
    Node has_one Parent, has_many Nodes, has_many WeightingFactors through NodeFactors, has_many Tags through NodeTags
    GlobalCondition has_many Nodes (referenced by Id, rather than replicating the tree)
    WeightingFactor has_many Nodes through NodeFactors
    Tag has_many Nodes through NodeTags
    The whole system has something in the region of thirty types which optionally hang off one or many nodes in the tree. My question is: what can I do to retrieve and construct this data faster? Having worked a lot with .NET, if I were in a similar situation there I would look at building a stored procedure to pull everything out of the database in one go, but I would prefer to keep my logic in the application, and from what I can tell it would be hard to take the queried data and build ActiveRecord objects from it without losing their integrity, which would cause more problems than it solves. It has also occurred to me that I could bunch the data up and send some of it across asynchronously, which would not improve performance but would improve the user's perception of performance. However, if sections of the data appeared after page load, that could also be quite confusing. I am wondering whether it would be a useful strategy to make everything aware of its parent project, so that one could pull all the records for that project and then build up the relationships later (see the sketch below), but given the ubiquity of complex trees in day-to-day programming life I wouldn't be surprised if there were some better design patterns or standard approaches to this type of situation that I am not well versed in.
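
    A minimal, framework-agnostic sketch of that last idea - fetch every node for a project in one flat query and stitch the parent/child links together in memory - might look like this (the row shape and field names are hypothetical):

    ```python
    # Hypothetical flat rows, e.g. the result of one "all nodes for project X" query.
    rows = [
        {"id": 1, "parent_id": None, "name": "root"},
        {"id": 2, "parent_id": 1,    "name": "child a"},
        {"id": 3, "parent_id": 1,    "name": "child b"},
        {"id": 4, "parent_id": 3,    "name": "grandchild"},
    ]

    # Build the tree in one pass over an id -> node index instead of per-row queries.
    nodes = {row["id"]: {**row, "children": []} for row in rows}
    roots = []
    for node in nodes.values():
        parent_id = node["parent_id"]
        if parent_id is None:
            roots.append(node)
        else:
            nodes[parent_id]["children"].append(node)

    def dump(node, depth=0):
        print("  " * depth + node["name"])
        for child in node["children"]:
            dump(child, depth + 1)

    for root in roots:
        dump(root)
    ```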

    Read the article

  • Launching Agile PLM 9.3.3!

    - by Shane Goodwin
    Ten months ago we announced the availability of Agile PLM 9.3.2. Today I have the great pleasure to announce availability of Agile PLM 9.3.3 and AutoVue for Agile PLM 20.2.2 - both are immediately available on Oracle Software Delivery Cloud. In this same timeframe our team has also published Oracle PLM Mobile 1.0, EC MCAD 3.1, and EC MCAD 3.2. Agile PLM 9.3.3 focuses on improving management business processes, improving management of intellectual property, and overall product improvements based on customer feedback. In this short timeframe, we have made very significant progress on all three fronts. The Agile PLM 9.3.3 What’s New Whitepaper discusses all of the new capabilities. Looking forward, we will continue to deliver new releases with laser focus on solving real business problems and making users more productive. With our release of Innovation Management, you will be seeing dramatic new capability to help manage the innovation funnel and the processes to determine what product projects to fund. You will also see us continue this accelerated cadence in releasing new features for Agile PLM. All Agile PLM 9.3.3 Documentation is now available, including an initial version of the Capacity Planning Guide (CPG). As usual, we will be updating the CPG in a few months when we complete our performance and breakpoint testing. Like with other recent Agile PLM versions, the Product Management team has recorded Transfer of Information (TOI) sessions to educate you about the new features. The TOI sessions can be accessed in My Oracle Support on note 1589164.1. As with all other releases, we have also published new versions (1.7.5) of Averify (Patch ID 17583605) and AUT (Patch ID 17583592) in My Oracle Support. Again this year I look forward to seeing many of you at the Oracle Value Chain Summit (February 3-5, San Jose, CA), to talk more about this new release and all of the fascinating ways our customers and partners are driving business value with Agile PLM.

    Read the article

  • Microsoft Releases TFS Scrum process template

    - by Matt deClercq
    Microsoft has announced and released a beta version of a new process template for Team Foundation Server 2010 that is based purely on Scrum methodology and terms. You can download the new template from the link below: TFS Scrum v1.0 Beta. For a more in-depth review, see the Visual Studio Magazine announcement HERE.

    Read the article

  • How to create and administer multi-architecture PPAs?

    - by maxschlepzig
    I have a program that needs to be recompiled for every Ubuntu version. Currently I am packaging it using Ubuntu's PPA just for the current distribution. Eventually, I have to provide packages for the previous Ubuntu version as well. I am not sure how to accomplish this. How does the Ubuntu PPA build server work - does it just look at the distribution field in the most recent changelog entry (in the debian/changelog file) to determine which distribution the package should be built for? The Debian specification allows adding multiple distributions to the distribution field, but this does not seem to help me. Some Ubuntu documents talk about encoding the distribution name into the version number (in the debian/changelog file). But how does this work in practice? A new version of the program is available - then what? Do I add a new changelog entry for each distribution, and does the PPA build server automatically build new packages for each distribution after I dput them up? Or does the PPA build server just look at the first changelog entry?

    Read the article

  • Can anyone tell me how to get the same Gnome desktop environment as the one in the photo?

    - by Elysium
    I have been using GNOME fallback for more than a year, but recently I have come across this image: However, when I switch to the GNOME 3 shell in a virtual machine (a fresh copy of Ubuntu 12.04), the desktop is not even similar to the one in the photo above. I am wondering if there are other things that I am missing or have to do to get the same (or a similar) result to the one in the image above. NOTE: Here is a screenshot from the virtual machine after I used these commands:
    sudo add-apt-repository ppa:gnome3-team/gnome3
    sudo apt-get update
    sudo apt-get install gnome-shell

    Read the article

  • Why CoffeeScript is tough to maintain

    - by Renso
    I recently started trying out CoffeeScript, only to find that it caused more headaches. The abstraction level of jQuery was perfect: it did not dictate to coders how to design their code, it just works. However, I recently posted a request to the CoffeeScript team to consider introducing curly braces to help control the flow of logic in more complex code. For example, an if-then-else with many nested levels can be near impossible to debug without tracing through it when using CoffeeScript. Also, with IDEs like Visual Studio, regular JavaScript IntelliSense and auto-formatting make it easy to appropriately indent nested levels without any work on the part of the developer, and reading it is not that hard, especially with extensions that show vertical lines in the code editor to help see what is nested within what part of the code.
    However, with CoffeeScript that is not the case. The samples given on the CoffeeScript web site are of course just simple examples to explain the features, and one gets excited pretty quickly over the powerful shortcuts. I tried to convert a piece of JavaScript over to CoffeeScript and gave up, since you need to first remove ALL non-CoffeeScript coding constructs for it to even compile. However, js2coffee can help with that. Still, keeping track of nested levels became something that was simply not manageable using CoffeeScript.
    Furthermore, any coding language that controls the flow of logic by indentation is extremely dangerous, for obvious reasons. I liked CoffeeScript a lot, but the fact that the logical flow of the code is controlled by how much you indent code, spaces or tabs, is not reliable, as there is no easy way for the programmer to know what parts of the code will get hit when the code spans a page.
    When I suggested introducing curly braces to the CoffeeScript team, one contributor advised me that my code needed to be redesigned! Needless to say, that is absurd. When I included a piece of the code, he asked me if it was legacy code. It's like saying to a Java programmer, sorry, you cannot use Java because we don't agree with how you write your code. jashkenas from the CoffeeScript blog gave some great suggestions and made the point that introducing curly braces would be very problematic for them, as they use them to denote objects. Makes sense, but I would still love to see some way to replace controlling code flow with spaces and indentation with something more concrete and human-readable.

    Read the article

  • Business Choices and Evony

    - by Robert May
    Recently, I've been playing a game called Evony, and I finally decided to quit the game and thought I should warn others who might be tempted. I also find a lot of insight in this game as an example. A few of the companies that I've worked with or worked for have been like this, and they are NOT good places to be. Evony is a joke designed to milk as much money out of people as possible. As a professional software developer who mentors teams on how to build better software, here's what I see:
    They obviously offshore all development and have little oversight over that offshore development, and they probably have a small team at that. Evidenced by the poor grammar throughout the game.
    They're seeking to maximize revenue and pushing to do as little development as possible, which would mean a small team.
    They're horribly understaffed in the customer support department, as evidenced by never replying to this forum and never responding to bug reports or help requests (I've had one open with no response AT ALL for over a month).
    They have way inadequate testing, no CI, and probably no automated unit tests. You can see this by the poor grammar throughout the game and the type of bugs that show up.
    They aren't following a formal development process (no Agile, Waterfall, or anything else), as evidenced by their lack of a predictable release cycle and lack of visibility.
    I'm guessing that the internal code base is terrible, otherwise there wouldn't be an "Age II" that had nothing more than a new visual interface and a few rule tweaks. This is also evidenced by the itty bitty scope of bug fixes and their inability to really fix bugs.
    Their Architect sucks. Really, 42k users is all you can handle on a single server? Could you REALLY not come up with a better way to scale to handle users? They've built isolated worlds instead of a single continuous world.
    Back to milking people for money - to really progress, you have to spend money.
    All of this adds up to knowing, deliberate actions on the part of management. They CHOOSE to do this (like AOL choosing to send more discs instead of improving quality). So, what can we learn?
    This game will never really improve, since the bosses don't care; they're only in it for the money.
    The game will never have good support. Again, the owners don't care.
    Giving them money only perpetuates this scam (and yes, I've given them money, way too much money. :()
    They don't care if you quit. There's a new sucker born every day.
    Don't EVER go to work for them. I've worked both with and for people like this and the culture is NEVER good.
    Ah well.
    Technorati Tags: Evony

    Read the article

  • Hyperic HQ says the server is down, but it is not!

    - by Diego Jancic
    Hi, I've been using HQ for a couple of months now, and everything worked fine. But since yesterday, all resources go down for a couple of hours, then everything returns to normal, and then they go down again without my doing anything. The server, of course, is working, the HQ server and agent are both working, and the IPs were not modified. I've tried to re-run the setup in the HQ agent, and it did not change anything. The agent is on Windows 2008, and the server is on Windows 2003. I'm using HQ Version 4.1.2 (build #1053 - May 06, 2009 - Release Build). Any hints? Thanks! Update: I guess (although I'm not sure) it stopped working when the disk on the server filled up, with 0 bytes of free space. Of course, I've since freed more than 15 GB and restarted the HQ server/database.

    Read the article

  • Installing PHP-GTK with PHP 5.3 on OS X

    - by Shabbyrobe
    I'm having trouble getting PHP-GTK installed with PHP 5.3 on OS X. I'm currently using MacPorts to do it, and when I try to install php-gtk it spews 'duplicate static' errors:
    Error: Target org.macports.build returned: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_php_php5-gtk/work/php-gtk-2.0.1" && /usr/bin/make -j2 all " returned error 2
    Command output:
    ext/gtk+/gen_pango.c:2951: error: duplicate 'static'
    ext/gtk+/gen_pango.c:2957: error: duplicate 'static'
    ext/gtk+/gen_pango.c:3097: error: duplicate 'static'
    ext/gtk+/gen_pango.c:3103: error: duplicate 'static'
    Is there a way to coerce it into building, or an alternative way to install it?

    Read the article

  • What is the easiest way to get MySQL's Archive Storage Engine working on CentOS 5.4

    - by tronda
    The Archive storage engine is not enabled in the default build of MySQL in CentOS/RHEL. I would like to enable it on our CentOS 5.4 server. My initial reaction was to modify the SPEC file in the SRPMS file, but this indicates that it might not be that easy. There's always the option to build from MySQL source, but I would prefer, if possible, to stay within the RPMS/Yum world. Does anybody have a successful approach to this using RPMS/SRPMS/Yum? Some patches which make this work flawlessly with SRPMS?

    Read the article

  • ASP.NET High CPU Bringing Servers to their Knees

    - by user880954
    OK, our new build is having 100% CPU spikes on each server at random intervals. For long durations it makes the site totally unresponsive - this happens at peak times as people in different countries log on to the site, etc. We've looked at perfmon, memory profilers, the CLR profiler, SQL profilers, and the Red Gate ANTS profiler, and tried load testing in UAT - but we cannot even reproduce the problem. This could mean the problem only happens when thousands of users hit the live site. One pattern we did notice was that the new code - the broken build - actually uses noticeably fewer threads. We are also using Spring for IoC - does this have a bad reputation? To make things worse, we cannot deploy to live due to the business impact - so we cannot narrow the problem down to a subset of the new features we've added. We truly are destroyed - has anyone got any battle scars that may save us a few lives?

    Read the article
