Search Results

Search found 1760 results on 71 pages for 'modern c'.

  • Problems Rendering Text in OpenGL Using FreeType

    - by Sean M.
    I've been following both the FreeType2 tutorial and the WikiBooks tutorial, trying to combine things from them both in order to load and render fonts using the FreeType library. I used the font loading code from the FreeType2 tutorial and tried to implement the rendering code from the WikiBooks tutorial (tried being the keyword, as I'm still trying to learn modern OpenGL; I'm using 3.2). Everything loads correctly and I have the shader program to render the text with working, but I can't get the text to render. I'm 99% sure that it has something to do with how I am passing data to the shader, or how I set up the screen. These are the code segments that handle OpenGL initialization, as well as font initialization and rendering:

        //Init glfw
        if (!glfwInit()) {
            fprintf(stderr, "GLFW Initialization has failed!\n");
            exit(EXIT_FAILURE);
        }
        printf("GLFW Initialized.\n");

        //Process the command line arguments
        processCmdArgs(argc, argv);

        //Create the window
        glfwWindowHint(GLFW_SAMPLES, g_aaSamples);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
        g_mainWindow = glfwCreateWindow(g_screenWidth, g_screenHeight, "Voxel Shipyard",
                                        g_fullScreen ? glfwGetPrimaryMonitor() : nullptr, nullptr);
        if (!g_mainWindow) {
            fprintf(stderr, "Could not create GLFW window!\n");
            closeOGL();
            exit(EXIT_FAILURE);
        }
        glfwMakeContextCurrent(g_mainWindow);
        printf("Window and OpenGL rendering context created.\n");

        glClearColor(0.2f, 0.2f, 0.2f, 1.0f);

        //Are these necessary for modern OpenGL (3.0+)?
        glViewport(0, 0, g_screenWidth, g_screenHeight);
        glOrtho(0, g_screenWidth, g_screenHeight, 0, -1, 1);

        //Init glew
        GLenum err = glewInit();
        if (err != GLEW_OK) {
            fprintf(stderr, "GLEW initialization failed!\n");
            fprintf(stderr, "%s\n", glewGetErrorString(err));
            closeOGL();
            exit(EXIT_FAILURE);
        }
        printf("GLEW initialized.\n");

    Here is the font file (it's slightly too big to post): CFont.h/CFont.cpp. Here is the solution zipped up: [solution](https://dl.dropboxusercontent.com/u/36062916/VoxelShipyard.zip), if anyone feels they need the entire solution. If anyone could take a look at the code, it would be greatly appreciated. Also, if someone has a tutorial that is a little more user friendly, that would also be appreciated. Thanks.
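    A note on a likely culprit, offered as an observation rather than the confirmed answer: glOrtho belongs to the fixed-function matrix stack that was removed from core profiles, so in a 3.2 core context it cannot set up the projection and the shader never receives an orthographic matrix. The usual modern replacement is to build the matrix on the CPU and upload it as a uniform. A minimal sketch follows; the uniform name "projection" is an assumption, not taken from the question's shaders:

        // Hedged sketch: a column-major orthographic matrix equivalent to
        // glOrtho(l, r, b, t, n, f), uploaded to a mat4 uniform named "projection".
        #include <GL/glew.h>

        void setOrthoUniform(GLuint program, float l, float r, float b, float t, float n, float f)
        {
            const float m[16] = {
                2.0f / (r - l),      0.0f,                0.0f,               0.0f,
                0.0f,                2.0f / (t - b),      0.0f,               0.0f,
                0.0f,                0.0f,               -2.0f / (f - n),     0.0f,
                -(r + l) / (r - l), -(t + b) / (t - b), -(f + n) / (f - n),  1.0f
            };
            glUseProgram(program);
            glUniformMatrix4fv(glGetUniformLocation(program, "projection"), 1, GL_FALSE, m);
        }

    The paired vertex shader would then compute something like gl_Position = projection * vec4(pos, 0.0, 1.0).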

    Read the article

  • WPF Control Toolkits Comparison for LOB Apps

    In preparation for a new WPF project I've been researching options for WPF control toolkits. While we want a lot of the benefits of WPF, the application is a fairly typical line-of-business (LOB) application. So we're not focused on things like media and animations, but instead on a simple, solid, intuitive, and modern user interface that allows for well-architected separation of business logic and presentation layers. While WPF is mature, it hasn't lived the long life that WinForms has yet, so there is still a lot of room for third-party and community control toolkits to fill the gaps between the controls that ship with the Framework. There are two such gaps I was concerned about. As this is an LOB app, we have needs for presenting lots of data, and not surprisingly much of it is in grid format with the need for high performance, grouping, inline editing, aggregation, printing and exporting - things that we've been doing with LOB apps for a long time. In addition we want a dashboard style for the UI in which the user can rearrange and shrink and grow tiles that house the content and functionality. From a cost perspective, building these types of well-performing controls from scratch doesn't make sense. So I evaluated what you get from the .NET Framework along with a few different options for control toolkits. I tried to be fairly thorough, but know that this isn't a detailed benchmarking comparison or intense evaluation. It's just meant to be a feature-set comparison to be used when thinking about building an LOB app in WPF. I tried to list important feature differences and notes based on my experience with the trial versions and what I found in documentation, reference materials, and samples. I've also listed the importance of the controls based on how I think they are needed in LOB apps. There are several toolkits available, but given I don't have unlimited time, I picked just a few. Maybe I'll add more later. The toolkits I compared are: Telerik's RadControls for WPF, since I had heard some good things about Telerik; Infragistics NetAdvantage WPF, since both I and the customer have some experience with the vendor's tools; the WPF Toolkit on CodePlex, since many of my colleagues have used it; and the Blacklight CodePlex project, which had WPF support for the Tile View control (with Release 4.3, WPF is not going to be supported in favor of focusing only on Silverlight controls, so I dropped that from the comparison). Click Here to Download the WPF Control Toolkits Comparison. Hopefully this helps someone out there. Feel free to post a comment on your experiences or if you think something I listed is incorrect or missing.

    Read the article

  • What tools exist for assessing an organisation's development capability?

    - by Eric Smith
    I have a bit of a challenge at work at the moment. Presently (and in fact, for some time now), we have been experiencing the following problems with some in-house maintained applications: Defects (sometimes quite serious) being released into production; The Customer (that is, the relevant business unit) perpetually changing their minds (or appearing to do so) about what issue to work on next; A situation where everyone seems to be in a "fire-fighting" mode a lot of the time; Development staff responding to operational requests from business users ("operational" here means something that needs to be done in order to continue with business, or perhaps just to make a business user's life a little less painful, as opposed to fixing a bug in the application, or enhancing the application). Now I'm sure this doesn't sound particularly new or surprising to most of the participants on this Q&A site, and no prizes for identifying the "usual suspects" when it comes to root causes. My challenge is that I have to persuade the higher-ups to do uncomfortable things in order to address all of this. The folk I need to persuade come from a mixture of the following two cultures: Accounting and IT Infrastructure. I have therefore opted for a strategy that draws on things with which folk from such cultures would be most comfortable (at least, in my estimation), namely numbers and tangibles. Of course modern development practitioners know all too well that this sort of thing isn't easily solved using an analytical mindset (some would argue that that mindset is, in fact, entirely inappropriate). Nevertheless, this is the dichotomy with which I am faced, so that's the stake that I've put in the ground. I would like to be able to do research and use the outputs to present findings in the form of metrics and measures. I am finding it quite difficult, though, to find an agreed-upon methodology and set of templates for assessing an organisation's development capability - the only thing that seems applicable is the Software Engineering Institute's Capability Maturity Model. The latter, however, seems dated and even then rather vague. So, the question is: do any tools or methodologies (free or commercial) exist that would assist me in completing this assessment?

    Read the article

  • RIM's current BB7 developer toolset is a joke

    - by mbrit
    tl;dr - RIM's current developer toolset is not fit for purpose. Background to this is that I'm currently working on a PhoneGap/Cordova project for a client that has to run on BlackBerry. The tooling is so ridiculous to use that even though I had a gentle dig at them in a Guardian piece, it's worth having a more full-on attack. At the moment, RIM's pitch is that apps are built for the current BBOS7 devices using WebWorks. This is an HTML-based toolset. Essentially a browser is spun up in a native app container and your app is powered by JavaScript. Specific JavaScript libraries exist that thunk down to native capabilities on the device. I happen to use PhoneGap/Cordova in combination with this. The tooling is non-existent. I'm using TextMate, Ant, and Terminal to develop the app. There's no "console.log" output, and no debugging. The only way to instrument the app is to put "alert" calls in your code. Apart from the fact that that's *not* fine in 2012, how about this… every time you deploy a new app to the device, the device has to reboot. This process takes six minutes on a relatively modern BlackBerry device. How about this as well - in order to get a file into the package it has to be signed. My small app over here has 100 different files (75 or so generated). Signing doesn't happen locally; it happens on RIM's servers in Waterloo. Thus whenever you deploy the app, the packaging utility has to call RIM's servers 100 times. More to the point, sometimes during the day these servers have "micro-downtime" moments where they're unreachable for five or ten minutes, normally two or three times a day. Oh yes, you'll also get an email sent to you per signing, on success or failure. 100 inbound emails, per deployment. (I started this post at the beginning of one of these cycles, by the way. That's how long it takes to build and deploy *once*. By the way, the change I made didn't work.) To clarify:
    * Change the script.
    * Build it using Ant.
    * Ant will spin up a Java app that talks to RIM's servers to sign it.
    * Receive 100 emails, assuming the server is up.
    * App deployed - takes about 30 seconds.
    * BlackBerry device restarts - takes about six minutes.
    * Find and open the app. Go through security prompts.
    * Test the app, with no "console.log" output and no debugger.
    "Why not use the simulator?" I hear you ask. Well, apart from the fact that the simulator refused to reach any network service over HTTPS that I happen to own? (Some people suggest changing DNS settings for this known issue.) Admittedly, the simulator does show you console.log, but you still have the "six minute" restart issue on the simulator. Developers will understand this problem. Breaking concentration for six-plus minutes every time you want to deploy an app turns developing into a nightmare. Combining that with no worthy debugging tools turns the toolset into a joke.

    Read the article

  • C++ Accelerated Massive Parallelism

    - by Daniel Moth
    At AMD's Fusion conference, Herb Sutter announced in his keynote session a technology that our team has been working on that we call C++ Accelerated Massive Parallelism (C++ AMP), and during the keynote I showed a brief demo of an app built with our technology. After the keynote, I went deeper into the technology in my breakout session. If you read both those abstracts, you'll get some information about what C++ AMP is, without being too explicit, since we published the abstracts before the technology was announced. You can find the official online announcement at Soma's blog post. Here, I just wanted to capture the key points about C++ AMP that can serve as an introduction and an FAQ. So, in no particular order, C++ AMP:
    lowers the barrier to entry for heterogeneous hardware programmability and brings performance to the mainstream, without sacrificing developer productivity or solution portability.
    is designed not only to help you address today's massively parallel hardware (i.e. GPUs and APUs), but also future-proofs your code investments with a forward-looking design.
    is part of Visual C++. You don't need to use a different compiler or learn different syntax.
    is modern C++. Not C or some other derivative.
    is integrated and supported fully in Visual Studio vNext. Editing, building, debugging, profiling, and all the other goodness of Visual Studio work well with C++ AMP.
    provides an STL-like library as part of the existing concurrency namespace, delivered in the new amp.h header file.
    makes it extremely easy to work with large multi-dimensional data on heterogeneous hardware, in a manner that exposes parallelization.
    introduces only one core C++ language extension.
    builds on DirectX (and DirectCompute in particular), which offers a great hardware abstraction layer that is ubiquitous and reliable. The architecture is such that this point can be thought of as an implementation detail that does not surface to the API layer.
    Stay tuned on my blog for more over the coming months, where I will switch from just talking about C++ AMP to showing you how to use the API with code examples… Comments about this post welcome at the original blog.
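    To make the "STL-like library in the existing concurrency namespace" point concrete, here is a small sketch of the programming model described above - illustrative code, not an official sample; the one core language extension it uses is restrict(amp):

        #include <amp.h>
        #include <vector>
        using namespace concurrency;

        void squareAll(std::vector<float>& data)
        {
            // Wrap host memory in an array_view so the runtime can move it
            // to and from the accelerator as needed.
            array_view<float, 1> av(static_cast<int>(data.size()), data);

            // The lambda runs once per element, on the GPU/APU where available;
            // restrict(amp) is the single language extension mentioned above.
            parallel_for_each(av.extent, [=](index<1> i) restrict(amp) {
                av[i] = av[i] * av[i];
            });

            av.synchronize(); // copy results back into the vector
        }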

    Read the article

  • Visual Studio 2010 Launch Events

    - by Jim Duffy
    Don’t miss out on the opportunity to learn about the new features in Visual Studio 2010. Check out the MSDN Events page and find out when the talented folks of the Developer & Evangelism group will be visiting your city to prove to you that /*Life Runs On Code*/. I’ll be attending the Raleigh event June 2, 2010 from 1:00 - 5:00 PM at North Carolina State University, Jane S. McKimmon Conference Center, 1101 Gorman St, Raleigh, North Carolina 27606, United States. From the Raleigh event page:
    Event Overview: Learn about the rich application platforms that Microsoft® Visual Studio® 2010 supports, including Windows® 7, the Web, SharePoint®, Windows Azure™, SQL®, and Windows® Phone 7 Series. From tighter tester and dev collaboration to new ALM tools, there’s a lot that’s new. Here’s what you can expect:
    Windows Development with Visual Studio 2010: Visual Studio has always been the best way to build compelling visual solutions for Windows. Visual Studio 2010 continues this trend with great new tooling support for Silverlight 4, WPF, and native development. In this demo-heavy session, you’ll see how you can build rich Windows applications with Silverlight 4 using new trusted application features including out-of-browser execution, saving to the file system, and even COM Automation. You’ll also see how you can use the new Task Parallel Library from within a WPF application to take advantage of all those cores in today’s modern computers.
    Web and Cloud Development with Visual Studio 2010: If you build solutions for the web, then this session is for you. Come see how your existing skills move forward with Visual Studio 2010, both for in-house ASP.NET development and the new frontier of the Cloud. In this session, you’ll see improved designers, new HTML and JavaScript snippets, Web Forms enhancements, and how you can quickly build great web sites using Dynamic Data. You’ll see the changes made to testable web sites with MVC 2.0 and how we’ve integrated jQuery support into the platform. You’ll then see how easy it is to leverage your existing code and move to the cloud with Windows Azure.
    Windows Phone 7 Developer Tools and Platform Overview: This session provides an overview of Visual Studio® 2010 for Windows Phone. Learn about the powerful capabilities of this new application platform and the developer tools experience, including basic IDE usage, debugging, packaging, and deployment. This session also shows how you can use Microsoft Expression® Blend™ for Windows Phone to build great Silverlight applications.
    Have a day. :-|

    Read the article

  • June 23, 1983: First Successful Test of the Domain Name System [Geek History]

    - by Jason Fitzpatrick
    Nearly 30 years ago the first Domain Name System (DNS) was tested, and it changed the way we interacted with the internet. Nearly impossible to remember numeric addresses became easy to remember names. Without DNS you’d be browsing a web where numbered addresses pointed to numbered addresses. Google, for example, would look like http://209.85.148.105/ in your browser window. That’s assuming, of course, that a numbers-based web ever gained enough traction to be popular enough to spawn a search giant like Google. How did this shift occur, and what did we have before DNS? From Wikipedia: The practice of using a name as a simpler, more memorable abstraction of a host’s numerical address on a network dates back to the ARPANET era. Before the DNS was invented in 1983, each computer on the network retrieved a file called HOSTS.TXT from a computer at SRI. The HOSTS.TXT file mapped names to numerical addresses. A hosts file still exists on most modern operating systems by default and generally contains a mapping of the IP address 127.0.0.1 to “localhost”. Many operating systems use name resolution logic that allows the administrator to configure selection priorities for available name resolution methods. The rapid growth of the network made a centrally maintained, hand-crafted HOSTS.TXT file unsustainable; it became necessary to implement a more scalable system capable of automatically disseminating the requisite information. At the request of Jon Postel, Paul Mockapetris invented the Domain Name System in 1983 and wrote the first implementation. The original specifications were published by the Internet Engineering Task Force in RFC 882 and RFC 883, which were superseded in November 1987 by RFC 1034 and RFC 1035. Several additional Requests for Comments have proposed various extensions to the core DNS protocols. Over the years it has been refined, but the core of the system is essentially the same. When you type “google.com” into your web browser, a DNS server is used to resolve that host name to the IP address of 209.85.148.105 - making the web human-friendly in the process. Domain Name System History [Wikipedia via Wired]
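    For the curious, that resolution step is a single library call away in most languages. A minimal sketch in C++ on a POSIX system (illustrative only; getaddrinfo() consults the hosts file and DNS according to the configured resolution order):

        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <arpa/inet.h>
        #include <netdb.h>
        #include <cstdio>

        int main()
        {
            addrinfo hints = {};
            hints.ai_family = AF_INET;       // IPv4 only, to keep the example short
            hints.ai_socktype = SOCK_STREAM;

            addrinfo* res = nullptr;
            if (getaddrinfo("google.com", nullptr, &hints, &res) == 0) {
                char ip[INET_ADDRSTRLEN];
                const sockaddr_in* addr = reinterpret_cast<sockaddr_in*>(res->ai_addr);
                inet_ntop(AF_INET, &addr->sin_addr, ip, sizeof ip);
                std::printf("google.com resolves to %s\n", ip); // an IP such as the one quoted above
                freeaddrinfo(res);
            }
            return 0;
        }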

    Read the article

  • Is CX a new concept?

    - by Isabel F. Peñuelas
    The Marketing Industry and the Web Industry have been talking about CX for some time. However, it is only very recently that the concept has reached some common meaning accepted by the analysts and the IT community. The new CX model depends on two previous facts: the expansion of social media, and the impact of the new advanced features of mobile devices regarding brand-customer interaction.
    CXers vs UXers: First there is some need of disambiguation between User Experience and Customer Experience. User Experience (UX) is a much better established concept, related to the design of user interactions for particular devices. UX people are interested in the multiple touch points of digital interfaces, while CX people are interested in all kinds of interfaces, including physical ones. UX is an evolution of Web Usability, while CX is a marketing concept. UX is an instrument of CX. CX, in fact, is all about Connections and Interactions.
    Connections: Don Draper, the creative director in Mad Men, understands very well that to market effectively means to connect with people, and the best way to connect to people is to use the connections people have with other people. Understanding social media connections and taking the pulse of customers on those media are strong facilitators of CX strategies.
    Interactions: We can very simply define CX as the relationship that a customer establishes with a brand through multiple touch points (interactions, channels) through the entire life cycle of his relationship - direct or indirect - with the brand. Interactions can be grouped into Customer Journeys through multiple touch points, defined as the path a customer follows to achieve a goal.
    Processes: A customer journey today usually starts at the moment he surfs the Web; then he takes a purchase decision, purchases the product, requests a particular service, and finally recommends or does not recommend the product. Customer Journeys are processes, and to analyze customer journeys there exists today a broad offering of modern Customer Journey tools, actually very similar to the use cases or UML activity diagrams used for IT systems design. As a summary, CX is nothing more and nothing less than applying process analysis methods to better understand how to create value through customer interactions across the multiple touch points the user has with the brand.

    Read the article

  • Updating My Online Boggle Solver Using jQuery Templates and WCF

    With WebForms, each ASP.NET page's rendered output includes a <form> element that performs a postback to the same page whenever a Button control within the form is clicked, or whenever the user modifies a control whose AutoPostBack property is set to True. This model simplifies web page development, but carries with it some costs - namely, the large amount of data exchanged between the client and the server during a postback. On postback the browser sends the values of all of its form fields (including hidden ones, like view state, which may be quite large) to the server; the server then sends back the entire contents of the web page. While there are some scenarios where this amount of information needs to be exchanged, in many cases the user has performed some action that requires far less information to be exchanged. With a little bit of forethought and code we can have the browser and server exchange much less data, which leads to more responsive web pages and an improved user experience. Over the past several weeks I've been writing an article series on accessing server-side data from client script. Rather than rely solely on forms and postbacks, many websites use JavaScript code to asynchronously communicate with the server in response to the page loading or some other user action. The server, upon receiving the JavaScript-initiated request, returns just the data needed by the browser, which the browser then seamlessly integrates into the web page. There are a variety of technologies and techniques that can be employed to provide both the needed server- and client-side functionality. Last week's article, Using WCF Services with jQuery and the ASP.NET Ajax Library, explored using the Windows Communication Foundation, or WCF, to serve data from the web server and showed how to consume such a service using both the ASP.NET Ajax Library and jQuery. In a previous 4Guys article, Creating an Online Boggle Solver, I built an application to find all solutions in a game of Boggle. (Boggle is a word game trademarked by Parker Brothers and Hasbro that involves several players trying to find as many words as they can in a 4x4 grid of letters.) This article takes the lessons learned in Using WCF Services with jQuery and the ASP.NET Ajax Library and uses them to update the user interface for my online Boggle solver, replacing the existing WebForms-based user interface with a more modern and responsive interface. I also used jQuery Templates, a JavaScript-based templating library that is useful for displaying the results from a server-side service. Read on to learn more! Read More >

    Read the article

  • SOA, Cloud & Service Technology Symposium 2012 London

    - by JuergenKress
    Registration is now open with special pricing: for the exclusive Oracle promotional discount, enter promo code DJMXZ370.
    OVERVIEW: The International SOA, Cloud + Service Technology Symposium is a yearly event that features the top experts and authors from around the world, providing a series of keynotes, talks, demonstrations, and panels, as well as training and certification workshops - all dedicated to empowering IT professionals to realize modern service technologies and practices in the real world. Click here for a two-page printable conference overview (PDF).
    KEYNOTES & SPEAKERS: More than 80 international subject matter experts will be speaking at the Symposium. Below are the keynotes and speakers confirmed so far; over 50% of the agenda has not yet been finalized, and many more speakers are to come. View the partial program calendars on the Conference Agenda page.
    Thomas Erl, Arcitura Education: "SOA, Cloud Computing & Semantic Web Technology: The Sequel - The Era of Intelligent Service Technology"
    Markus Zirn, Oracle: "Big Data with CEP and SOA"
    Clemens Utschig, Boehringer Ingelheim Pharma, and Manas Deb, Oracle: "The Successful Execution of the SOA and BPM Vision"
    Tim E. Hall, Oracle: "Community Management: The Next Wave of SOA Governance and API Management"
    SPONSORSHIP OPPORTUNITIES: The Symposium provides an excellent opportunity to promote your organization in the lead-up to the event, to delegates during the Symposium, and afterwards when the proceedings are made available on the Symposium web site. There are a limited number of premier sponsorship packages available, and a package can be tailored to your needs and budget. Download the Symposium Sponsorship Guide.
    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SOA Symposium, SOA Cloud Service Technology Symposium, Thomas Erl, SOA Community, Oracle SOA, Oracle BPM, BPM Community, OPN, Jürgen Kress

    Read the article

  • WebCenter Customer Spotlight: American Home Mortgage

    - by Michelle Kimihira
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter
    Solution Summary: American Home Mortgage Servicing Inc. (AHMSI) is a 3,000-employee company based in Coppell, Texas, that provides services to homeowners and loan investors. With a multibillion-dollar portfolio under management, AHMSI is one of the country's largest servicers of Alt-A and subprime loans. AHMSI implemented a public-facing secure Web portal using Oracle WebCenter Suite to help investors make informed decisions more quickly, and automated much of the investor approval process. AHMSI reduced the time needed to process loan modifications from approximately 30 days to one week. Using Oracle WebCenter Content, AHMSI can now share strategic and sensitive content in compliance with the various governance regulations.
    Company Overview: American Home Mortgage Servicing Inc. provides services to homeowners and loan investors. Whether a borrower holds a traditional, Alt-A, payment option, or subprime loan, the company's highly trained experts are committed to providing high levels of service as they work to address each customer's needs. AHMSI also carefully manages the loan portfolios of investors. With a multibillion-dollar portfolio under management, AHMSI is one of the country's largest servicers of Alt-A and subprime loans.
    Challenges: AHMSI's biggest challenge was to improve security by minimizing the use of e-mail and FTP sites to share sensitive mortgage loan data with third parties, including real estate investors.
    Solutions: AHMSI implemented Oracle WebCenter Suite to deploy a public-facing Web portal, enabling authorized external users to view content stored on the content server, and Oracle WebCenter Content to create a secure storage area for daily, weekly, and monthly reports. They leveraged the standard group spaces in Oracle WebCenter Portal to enable business users to collaborate more effectively.
    Results: By automating much of the investor approval process, they reduced the time needed to process loan modifications from approximately 30 days to one week and greatly minimized the use of e-mail and FTP sites to share information. Investors can now view supporting materials, including real-time loan information and call center data, to help them make more informed decisions more quickly. The implemented solution complies with various government regulations in dealings with real estate investors.
    "To maintain our commitment to providing customers with the highest possible levels of services while creating a competitive advantage for our business, we needed to be able to share strategic and sensitive content in a safe and secure manner. With Oracle WebCenter, we have a flexible and modern user experience platform that allows us to securely, reliably and efficiently manage our portfolio of sensitive data and share it with our business partners. This not only helps ensure compliance with various government regulations, it accelerates processes and supports more informed decision making." - Vince Holt, Manager, Application Management, American Home Mortgage Servicing, Inc.
    Additional Information: AHMSI Customer Snapshot, Oracle WebCenter Suite, Oracle WebCenter Content, Oracle WebCenter Portal, Oracle Fusion Middleware

    Read the article

  • Dealing with coworkers when developing, need advice [closed]

    - by Yippie-Kai-Yay
    I developed our current project architecture and started developing it on my own (reaching something like revision 40). We're developing a simple subway routing framework, and my design seemed to be done extremely well - several main models, corresponding views, main logic and data structures were modeled "as they should be" and fully separated from rendering; the algorithmic part was also implemented apart from the main models and had a minor number of intersection points. I would call that design scalable, customizable, easy to implement, interacting mostly based on "black box interaction" and, well, very nice. Now, what was done: I started some implementations of the corresponding interfaces, ported some convenient libraries, and wrote implementation stubs for some application parts. I had a document describing the coding style and examples of that coding style's usage (my own written code). I enforced the use of more or less modern C++ development techniques, including no-delete code (ownership wrapped in smart pointers) and so on. I documented the purpose of concrete interface implementations and how they should be used. I wrote unit tests (mostly integration tests, because there wasn't a lot of "actual" code) and a set of mocks for all the core abstractions. I was absent for 12 days. What do we have now (the project was developed by 4 other members of the team): 3 different coding styles all over the project (I guess two of them agreed to use the same style :), and the same applies to the naming of our abstractions (e.g. CommonPathData.h, SubwaySchemeStructures.h), which are basically headers declaring some data structures. Absolute lack of documentation for the recently implemented parts. What I could recently call a single-purpose abstraction now handles at least 2 different types of events, has tight coupling with other parts, and so on. Half of the used interfaces now contain member variables (sic!). Raw pointer usage almost everywhere. Unit tests disabled, because "(Rev. 57) They are unnecessary for this project". ... (that's probably not everything). The commit history shows that my design was interpreted as overkill, and people started combining it with their own pet approaches and reinvented wheels, and then had problems integrating the resulting code chunks. Now the project still does only a small amount of what it has to do, we have severe integration problems, and I assume some memory leaks. Is there anything possible to do in this case? I do realize that all my efforts didn't have any benefit, but the deadline is pretty soon and we have to do something. Did someone have a similar situation? Basically I thought that a good (well, I did everything that I could) start for the project would probably lead to something nice; however, I understand that I'm wrong. Any advice would be appreciated; sorry for my bad English.
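    For readers unfamiliar with the "no-delete code" convention mentioned above, here is a hedged sketch of the idea; the Station and Scheme names are made up for illustration, not taken from the project:

        #include <memory>
        #include <vector>

        struct Station { /* ... */ };

        class Scheme {
        public:
            // Ownership lives in smart pointers, so no explicit delete (and no
            // matching leak or double-free) ever appears in application code.
            Station& addStation() {
                stations_.push_back(std::make_unique<Station>());
                return *stations_.back();
            }
        private:
            std::vector<std::unique_ptr<Station>> stations_; // freed automatically
        };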

    Read the article

  • Wrong statistics in AUX_STATS$ might puzzle the optimizer

    - by Mike Dietrich
    We have recommended the creation of System Statistics for quite a long time. Since Oracle 9i the optimizer works with a CPU and IO cost based model, and in order to give the optimizer some knowledge about the IO subsystem's performance and throughput, System Statistics - once collected - get stored in AUX_STATS$. For this purpose, in the old Oracle 9i days, some default values were defined - and you'll still find those defaults in Oracle Database 11g Release 2 in AUX_STATS$. But these old values don't reflect the performance of modern IO systems. So it might be a good best practice, post upgrade, to create fresh System Statistics if you haven't done this before. You can collect System Statistics with:

        exec DBMS_STATS.GATHER_SYSTEM_STATS('start');

    and end it later by executing:

        exec DBMS_STATS.GATHER_SYSTEM_STATS('stop');

    You could also run DBMS_STATS.GATHER_SYSTEM_STATS('interval', interval=>N) instead, where N is the number of minutes after which statistics gathering is stopped automatically. Please make sure you do this during a real workload period; it won't make sense to gather these values while the database is in an idle state. You should ideally do this for several hours. It doesn't affect performance in a negative way, as the values are collected in V$SYSSTAT and V$SESSTAT anyway. And in case you'd like to delete the stats and revert to the old default values, you'd simply execute:

        exec DBMS_STATS.DELETE_SYSTEM_STATS;

    The tricky thing in Oracle Database 11.2 - and that's why I'm actually writing this blog post today - is bug 9842771. It leads to wrong values in AUX_STATS$ for SREADTIM and MREADTIM, off by a factor of 1000, sometimes guiding the optimizer in the totally wrong direction. The workaround is to overwrite these values manually, dividing them by 1000, using the DBMS_STATS.SET_SYSTEM_STATS procedure. See MOS Note 9842771.8 for the above bug for some further information. This issue is fixed in Oracle Database 11.2.0.3 and above. To get some background information about the statistics collected in AUX_STATS$, please read this section in the Oracle Database 11.2 Performance Tuning Guide. And gathering System Statistics might have some implications if you have mixed workloads - and it interacts with DB_FILE_MULTIBLOCK_READ_COUNT. For more information please read section 13.4.1.2.

    Read the article

  • What are the drawbacks of sending XML to browsers and letting them apply XSLT?

    - by MainMa
    Context: Working as a freelance developer, I have often made websites completely based on XSLT. In other words, on every request, an XML file is generated, containing everything we need to know about the page content: the name of the user currently logged in, the top menu entries, whether this menu is dynamic/configurable, the text to display in a specific area of the page, etc. Then XSL processes it (with caching, etc.) into an HTML/XHTML page to send to the browser. Its good point is that it makes it easier to create small-scale websites, especially with PHP. It is a sort of template engine, but one which I prefer to other template engines because it's much more powerful than most of them, and because I know it better and like it. It is also possible, when needed, to give access to the raw XML data on demand for automated access, without the need to create separate APIs. Of course, it will fail completely on any medium-scale or large-scale website, since, even with good caching techniques, XSL still degrades overall website performance and requires more CPU server-side.
    Question: Modern browsers have the ability to take an XML file and transform it with an associated XSL file declared in the XML like <?xml-stylesheet href="demo.xslt" type="text/xsl"?>. Firefox 3 can do it. Internet Explorer 8 can do it too. It means that it is possible to migrate XSL processing from the server to the client side for 50% of users (according to browser statistics on several websites where I may want to implement this). It means that those 50% of users will receive only the XML file on each request, thus reducing both their and the server's bandwidth (the XML file being much shorter than its processed HTML analog), and reducing the server's CPU usage. What are the drawbacks of this technique? I thought about several, but they don't apply in this situation:
    Difficult implementation and the need to choose, based on the browser request, when to send raw XML and when to transform it to HTML instead. Obviously, the system will not be much more difficult than the current one; the only changes to make are to add the XSL file link to every XML file, and to add a browser check.
    More IO and bandwidth usage, since the XSLT file will be downloaded by the browsers instead of staying cached on the server. I don't think it will be a problem, since the XSLT file will be cached by the browsers (like images, CSS, or JavaScript files are cached today).
    Possibly some problems on the client side, like maybe problems when saving a page in some browsers.
    Difficulty debugging code: it is impossible to obtain the HTML source the browser is actually using, since the only displayed source is the downloaded XML. On the other hand, I rarely look at HTML code on the client side, and in most cases it is unusable directly (whitespace being removed).

    Read the article

  • It's Here! Visual Studio 2010 and ASP.NET 4.0 Ship

    Today Microsoft released Visual Studio 2010 and ASP.NET 4.0. I've been using the RC version of Visual Studio 2010 quite a bit for the past couple of months and have really grown to like it. It has a host of features and enhancements that improve developer productivity, from improved IntelliSense to better multiple monitor support. Plus there's something about the user experience that, to me, makes it feel better than Visual Studio 2008. I don't know if it's the new blue color motif or what, but the IDE seems more modern looking and more responsive to my mouse movements and other input. Anyway, if you've not yet downloaded Visual Studio 2010 and ASP.NET 4.0, why not? As with previous versions of Visual Studio there's a free Express Edition, and VS2010 and ASP.NET 4.0 run side-by-side with earlier versions of Visual Studio and ASP.NET. And with Visual Studio 2010's multi-targeting you can even use VS2010 as your development editor for ASP.NET 2.0 and ASP.NET 3.5 web applications. (Although be forewarned, if you have multiple developers working on the application, that the project files in VS2010 and earlier versions of Visual Studio differ.) This week's article on 4Guys explores my favorite new features of Visual Studio 2010. Here's an excerpt: The Visual Studio 2010 user experience is noticeably different than with previous versions. Some of the changes are cosmetic - gone is the decades-old red and orange color scheme, having been replaced with blues and purples - while others are more substantial. For instance, the Visual Studio 2010 shell was rewritten from the ground up to use Microsoft's Windows Presentation Foundation (WPF). In addition to an updated user experience, Visual Studio introduces an array of new features designed to improve developer productivity. There are new tools for searching for files, types, and class members; it's now easier than ever to use IntelliSense; the Toolbox can be searched using the keyboard; and you can use a single editor - Visual Studio 2010 - to work on. This article explores some of the new features in Visual Studio 2010. It is not meant to be an exhaustive list, but rather highlights those features that I, as an ASP.NET developer, find most useful in my line of work. Read on to learn more! And, in closing, here are some helpful VS2010 and ASP.NET 4.0 links: One click installation for ASP.NET 4.0, Visual Web Developer 2010, .NET Framework 4.0, and ASP.NET MVC 2; Eight Quick Hit videos showing some of the cool new VS2010 features; VS2010 and ASP.NET 4.0 Release Announcement with some great info/links from none other than Scott Guthrie. Happy Programming!

    Read the article

  • Is there a pedagogical game engine?

    - by K.G.
    I'm looking for a book, website, or other resource that gives modern 3D game engines the same treatment as Operating Systems: Design and Implementation gave operating systems. I have read Jason Gregory's Game Engine Architecture, which I enjoyed. However, by intent the author treated components of the architecture as atomic units, whereas what I'm interested in is the plumbing between those units that makes a coherent whole out of ideally loosely coupled parts. In books such as these, one usually reads that "that's academic," but that's the point! I have also read Julian Gold's Object-oriented Game Development, which likewise was good, but I feel is beginning to show its age. Since even mobile platforms these days are multicore and have fast video memory, those kinds of things (concurrency, display item buffering) would ideally be covered. There are other resources, such as the Doom 3 source code, which is highly instructive for its being a shipped product. The problem with those is as follows:

        float Q_rsqrt( float number )
        {
            long i;
            float x2, y;
            const float threehalfs = 1.5F;

            x2 = number * 0.5F;
            y = number;
            i = * ( long * ) &y;                        // evil floating point bit level hacking
            i = 0x5f3759df - ( i >> 1 );                // what the f***?
            y = * ( float * ) &i;
            y = y * ( threehalfs - ( x2 * y * y ) );    // 1st iteration
            // y = y * ( threehalfs - ( x2 * y * y ) ); // 2nd iteration, this can be removed

            return y;
        }

    To wit, while brilliant, this kind of source requires more enlightenment than I can usually muster upon first read. In summary, here's my white whale:
    For an adult reader with experience in programming. I wish I could save all the trees killed by every. Single. Game Programming book ever devoting the first two chapters to "Now just what is a variable anyway?"
    In C or C++, very preferably C++. Languages that are more concise are fantastic for teaching, except for when what you want to learn is how to cope with a verbose language. There is also the benefit of the guardrails that C++ doesn't provide, such as garbage collection.
    Platform agnostic. I'm sincerely afraid that this book is out there and it's Visual C++/DirectX oriented. I'm a Linux guy, and I'd do what it takes, but I would very much like to be able to use OpenGL.
    Thanks for everything! Before anyone gets on my case about it, Fast inverse square root was from Quake III Arena, not Doom 3!

    Read the article

  • Memory read/write access efficiency

    - by wolfPack88
    I've heard conflicting information from different sources, and I'm not really sure which one to believe. As such, I'll post what I understand and ask for corrections. Let's say I want to use a 2D matrix. There are three ways that I can do this (at least that I know of).

    1:

        int i;
        char **matrix;
        matrix = malloc(50 * sizeof(char *));
        for (i = 0; i < 50; i++)
            matrix[i] = malloc(50);

    2:

        int i;
        int rowSize = 50;
        int pointerSize = 50 * sizeof(char *);
        int dataSize = 50 * 50;
        char **matrix;
        matrix = malloc(dataSize + pointerSize);
        char *pData = (char *)matrix + pointerSize - rowSize;
        for (i = 0; i < 50; i++) {
            pData += rowSize;
            matrix[i] = pData;
        }

    3:

        /* instead of accessing matrix[i][j] here, we would access matrix[i * 50 + j] */
        char *matrix = malloc(50 * 50);

    In terms of memory usage, my understanding is that 3 is the most efficient, 2 is next, and 1 is least efficient, for the reasons below:
    3: There is only one pointer and one allocation, and therefore minimal overhead.
    2: Once again, there is only one allocation, but there are now 51 pointers. This means there is 50 * sizeof(char *) more overhead.
    1: There are 51 allocations and 51 pointers, causing the most overhead of all options.
    In terms of performance, once again my understanding is that 3 is the most efficient, 2 is next, and 1 is least efficient. Reasons being:
    3: Only one memory access is needed. We will have to do a multiplication and an addition as opposed to two additions (as in the case of a pointer to a pointer), but memory access is slow enough that this doesn't matter.
    2: We need two memory accesses: one to get a char *, and then one to get the appropriate char. Only two additions are performed here (once to get to the correct char * pointer from the original memory location, and once to get to the correct char variable from wherever the char * points to), so multiplication (which is slower than addition) is not required. However, on modern CPUs, multiplication is faster than memory access, so this point is moot.
    1: Same issues as 2, but now the memory isn't contiguous. This causes cache misses and extra page table lookups, making it the least efficient of the lot.
    First and foremost: Is this correct? Second: Is there an option 4 that I am missing that would be even more efficient?
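    One candidate for an option 4, offered as a hedged sketch rather than a definitive answer: a pointer to an array of 50 chars gives a single contiguous allocation with no pointer table at all, while keeping the matrix[i][j] syntax (the compiler performs the i * 50 + j arithmetic for you):

        /* Sketch: one allocation, contiguous storage, no pointer table.
           The cast is unnecessary in C but keeps the line valid as C++ too. */
        char (*matrix)[50] = (char (*)[50])malloc(50 * sizeof *matrix);
        if (matrix) {
            matrix[2][3] = 'x';   /* same syntax as option 1, laid out like option 3 */
            free(matrix);
        }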

    Read the article

  • Partner Webcast – Oracle SOA Suite 12c: Connect 4 Cloud, Mobile, IoT with On-premise - August 28th 2014

    - by JuergenKress
    Thursday August 28th 2014: SOA Suite 12c Webcast. The pace of new business projects continues to grow, from increasing customer self-service to seamlessly connecting all your back office and in-the-field applications. At the same time, increased integration complexity may seem inevitable as organizations are suddenly faced with the requirement to support three new integration challenges:
    » Cloud Integration - integrate with the cloud; rapidly integrate a growing list of cloud applications with existing applications
    » Mobile Integration - the urgency to mobile-enable existing applications
    » IoT Integration - begin development on the latest trend of connecting Internet of Things (IoT) devices to your existing infrastructure
    Join this webcast to get an overview of Oracle SOA Suite 12c and how it positions you to address these integration challenges. Oracle SOA Suite 12c, the latest version of the industry's most complete and unified application integration and SOA solution, aims to simplify, accelerate, and optimize integrations. Oracle SOA Suite 12c and its associated products - Oracle Managed File Transfer, Oracle Cloud and Application Adapters, B2B and healthcare integration - offer the industry's most highly integrated platform for solving the increased integration challenges. Oracle SOA Suite 12c is a complete, integrated, best-of-breed platform. It enables next-generation integration capabilities through: a unified toolset for the development of services and composite applications; a standards-based platform that is service-enabled and easily consumable by modern web applications, allowing enterprises to quickly and easily adapt to changes in their business and IT environments; and greater visibility, controls, and analytics to govern how services and processes are deployed, reused, and changed across their entire lifecycle. Join us to find out more about the new features of Oracle SOA Suite 12c and how it enables you to reduce time to market for new project integration and to reduce integration cost and complexity. A key strength of Oracle SOA Suite is the ability to simplify by integrating the disparate requirements of cloud, mobile, and IoT devices with existing on-premise applications. Agenda: Oracle SOA Suite 12c new features; Cloud Integration; Mobile Enablement; Internet of Things (IoT); Summary - Q&A. For details please visit our registration page here. Thursday, Aug 28th 2014, 10am CET (9am GMT / 11am EEST). SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Technorati Tags: SOA Suite 12c, Community, Oracle SOA, Oracle BPM, OPN, Jürgen Kress, SOA

    Read the article

  • Why won't my vertex buffer render in GLFW3?

    - by sm81095
    I have started to try to learn OpenGL, and I decided to use GLFW to assist in window creation. The problem is, since GLFW3 is so new, there are no tutorials on it yet, or on how to use it with modern OpenGL (3.3, specifically). Using the GLFW3 tutorial found on the website, which uses older OpenGL rendering (glBegin(GL_TRIANGLES), glVertex3f(), and such), I can get a triangle to render to the screen. The problem is, using new OpenGL, I can't get the same triangle to render to the screen. I am new to OpenGL, and GLFW3 is new to most people, so I may be completely missing something obvious, but here is my code:

        static const GLuint g_vertex_buffer_data[] = {
            -1.0f, -1.0f, 0.0f,
             1.0f, -1.0f, 0.0f,
             0.0f,  1.0f, 0.0f
        };

        int main(void)
        {
            GLFWwindow* window;

            if (!glfwInit()) {
                fprintf(stderr, "Failed to initialize GLFW.");
                return -1;
            }

            glfwWindowHint(GLFW_SAMPLES, 4);
            glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
            glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
            glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
            glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

            window = glfwCreateWindow(800, 600, "Test Window", NULL, NULL);
            if (!window) {
                glfwTerminate();
                fprintf(stderr, "Failed to create a GLFW window");
                return -1;
            }
            glfwMakeContextCurrent(window);

            glewExperimental = GL_TRUE;
            GLenum err = glewInit();
            if (err != GLEW_OK) {
                glfwTerminate();
                fprintf(stderr, "Failed to initialize GLEW");
                fprintf(stderr, (char*)glewGetErrorString(err));
                return -1;
            }

            GLuint VertexArrayID;
            glGenVertexArrays(1, &VertexArrayID);
            glBindVertexArray(VertexArrayID);

            GLuint programID = LoadShaders("SimpleVertexShader.glsl", "SimpleFragmentShader.glsl");

            GLuint vertexBuffer;
            glGenBuffers(1, &vertexBuffer);
            glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
            glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW);

            while (!glfwWindowShouldClose(window)) {
                glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

                glUseProgram(programID);
                glEnableVertexAttribArray(0);
                glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
                glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
                glDrawArrays(GL_TRIANGLES, 0, 3);
                glDisableVertexAttribArray(0);

                glfwSwapBuffers(window);
                glfwPollEvents();
            }

            glDeleteBuffers(1, &vertexBuffer);
            glDeleteProgram(programID);
            glfwDestroyWindow(window);
            glfwTerminate();
            exit(EXIT_SUCCESS);
        }

    I know it is not my shaders; they are super simple and I've checked them against GLFW 2.7, so I know that they work. I'm assuming that I've missed something crucial to using the OpenGL context with GLFW3, so any help locating the problem would be greatly appreciated.
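    One thing worth double-checking in the listing above, offered as an observation rather than the confirmed answer: the vertex array is declared GLuint but initialized with float literals, so each value is converted to an unsigned integer and the buffer never contains the intended coordinates, even though glVertexAttribPointer declares GL_FLOAT. Declaring the array to match fixes that mismatch:

        // Observation: the data is floating point, so the array type should match
        // the GL_FLOAT passed to glVertexAttribPointer.
        static const GLfloat g_vertex_buffer_data[] = {
            -1.0f, -1.0f, 0.0f,
             1.0f, -1.0f, 0.0f,
             0.0f,  1.0f, 0.0f
        };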

    Read the article

  • Driving Growth through Smarter Selling

    - by Samantha.Y. Ma
    With the proliferation of social media and mobile technologies, the world of selling and buying has drastically changed, as buyers now have access to more information than they did in the past. In fact, studies have shown that buyers complete 60 percent of the buying process before they even engage with a salesperson. The old models of selling no longer work effectively; the new way of selling is driven by customer insights. To succeed, sales need to be proactive, not reactive. They need to engage with the customer early, sometimes even before the customer's needs are fully understood. In fact, the best sales reps prescribe a solution that the customer doesn't even know they need, often by leveraging social media to listen, engage and collaborate with peers. And they fully tap into the power of analytics and data to drive results. Let's look at some stats regarding challenges facing sales today. According to recent studies, sales reps spend 78 percent of their time doing administrative things - such as planning, searching for information, data entry - and only 22 percent of the time actually selling. Furthermore, 40 percent of B2B sales reps miss their quota, and only 3 percent of companies can say with confidence that their forecasts are "always accurate." How do you drive growth in this modern day and age? It's not just getting your sales teams to work harder; it's helping them work smarter and providing them with a solution they want to use, on the device(s) they already know, giving them critical insights and tools to be more productive, increase win rates, and close deals faster. Oracle Sales Cloud was designed to do exactly that. It enables smarter selling that allows reps to sell more, managers to know more, and companies to grow more. Let's face it - if all CRM solutions worked well, sales executives wouldn't be having the same headaches as they had in the past. Join Oracle's Thomas Kurian and Doug Clemmans on Tuesday, October 22 as they explain:
    • How today's sales processes have rendered many CRM systems obsolete
    • The secrets to smarter selling, leveraging mobile, social, and big data
    • How Oracle Sales Cloud enables smarter selling - as proven by Oracle and its customers
    Take the first step down the path toward smarter selling. With Oracle Sales Cloud, reps sell more, managers know more, and companies grow more.

    Read the article

  • ITT Corporation Goes Live on Oracle Sales and Marketing Cloud Service (Fusion CRM)!

    - by Richard Lefebvre
    Back in Q2 of FY12, a division of ITT invited Oracle to demo our CRM On Demand product while the group was considering Salesforce.com. Chris Porter, our Oracle Direct sales representative, learned the players and their needs and began to develop relationships. We lost that deal, but not Chris's persistence. A few months passed and Chris called on the ITT Shape Cutting Division's Director of Sales to see how things were going. Chris was told that the plan was for the division to buy more Salesforce.com. In fact, he informed Chris that he had just sent his team to Salesforce.com training. During the conversation, Chris mentioned that our new Oracle Sales Cloud Service could run with Outlook. This caused the ITT Sales Director to reconsider the plan to move forward with our competition. Oracle was invited back to demo the Oracle Sales and Marketing Cloud Service (Fusion CRM) and after it concluded, the Director stated, "That just blew your competition away." The deal closed on June 5th, 2012. Our Oracle Platinum Partner, Intelenex, began the implementation with ITT on July 30th. We are happy to report that on September 18th, the ITT Shape Cutting Division successfully went live on Oracle Sales and Marketing Cloud Service (Fusion CRM).
    About: ITT is a diversified leading manufacturer of highly engineered critical components and customized technology solutions for growing industrial end-markets in energy infrastructure, electronics, aerospace and transportation. Building on its heritage of innovation, ITT partners with its customers to deliver enduring solutions to the key industries that underpin our modern way of life. Founded in 1920, ITT is headquartered in White Plains, NY, with 8,500 employees in more than 30 countries and sales in more than 125 countries. The ITT Shape Cutting Division provides plasma cutting systems and controls under the Burny, Kaliburn, and AMC brands.
    Oracle Fusion Products: Oracle Sales and Marketing Cloud Service (Fusion CRM), including Fusion CRM Base, Fusion Sales Cloud, Fusion Mobile and Desktop Integration, and Automated Forecasting.
    Adoption Model: SaaS
    Partner: Intelenex
    Business Drivers: The ITT Shape Cutting Division wanted to better enable its sales force with email and mobile CRM capabilities, simplify and automate its complex sales processes, and centrally manage and maintain customer contact information.
    Why We Won: ITT was impressed with the feature-rich capabilities of Oracle Sales and Marketing Cloud Service (Fusion CRM), including sales performance management and integration. The company also liked the product's flexibility and scalability for future growth.
    Expected Benefits: Streamlined, accurate forecasting; increased customer manageability; improved sales performance; better visibility into customer information.

    Read the article

  • Writing or extending existing Emacs packages: is it worth it, or should I move to NetBeans/Eclipse?

    - by Andrea
    I'm finishing my master's degree in CS, and I've almost become addicted to Emacs. I've used it to write in C, LaTeX, Java, JSP, XML, Common Lisp, Ada and other languages no other editor supported, like AMPL. I'd like to improve the packages I've been using the most or create new ones, but, in practice, I find that the implementation of Emacs leaves a lot to be desired. There are a lot of poorly featured or poorly maintained packages with either overlapping functionality or obscure incompatibilities, and Elisp just seems to foster the situation by lacking the common features modern Lisps have. In contrast, Eclipse and NetBeans are actively improved, and it does seem they can be effective for non-mainstream languages. I tried Hibachi for Ada in Eclipse and it worked well, there's Cusp for Lisp in Eclipse, and LambdaBeans was built using NetBeans components. On the other hand, those plugins seem to be less active than their Emacs counterparts; for example, Hibachi was archived last year. What's your opinion on this? Which editor should I write extensions for?

    EDIT: To answer Larry Coleman (see comment below): I like Emacs as a user because it is efficient both for me and for the computer I'm using. It's fast, and the textual interface (i.e. the minibuffer) allows for quick interaction. It's solid, and packages are usually small and easy to manage. If I need to correct or remove something, I usually just have to change a row in my .emacs or an Elisp file, or delete a directory. Eclipse plugins rely on a more complicated process that screwed up my Eclipse configuration a couple of times, forcing me to do a clean reinstall. Emacs works as long as I use the basic packages; if I need something more complicated, the situation gets pretty hairy. As a "power user" I think that the best I can hope for is to write a severely crippled version of the extensions I'd actually like to have; in other words, it's not worth the trouble. I'd like to write extensions for the things I'd like to have automated in Emacs, for example project support with automated tag-table updates on file saves. There are a few projects on this that lack integration, documentation, extensibility and so forth. The best one is probably CEDET, to which I believe Greenspun's tenth rule can be applied.

    EDIT: To comment on Larry Coleman's answer: I'm pretty sure I can pick up Elisp programming, but the extensions I have in mind don't exist yet despite their relative simplicity and the effort more knowledgeable people have poured into related projects. This makes me wonder whether that is because of the way Emacs is developed, i.e. people tend to write their own little extensions without coordination, or because of its implementation, with an extension language that cannot keep up with the growing complexity.

    Read the article

  • Why is multithreading often preferred for improving performance?

    - by user1849534
    I have a question about why programmers seem to love concurrency and multi-threaded programs in general. I'm considering two main approaches here:
    • an async approach, basically based on signals (or just "async" as it is called by many papers and languages, like the new C# 5.0, for example), with a "companion thread" that manages the policy of your pipeline
    • a concurrent, or multi-threading, approach

    I will just say that I'm thinking about the hardware here and the worst-case scenario, and I have tested these two paradigms myself. The async paradigm is a winner, to the point that I don't get why people talk about multi-threading 90% of the time when they want to speed things up or make good use of their resources.

    I have tested multi-threaded and async programs on an old machine with an Intel quad-core that doesn't have a memory controller inside the CPU; the memory is managed entirely by the motherboard. In this case performance is horrible with a multi-threaded application: even a relatively low number of threads, like 3-5, can be a problem; the application is unresponsive and is just slow and unpleasant. A good async approach is, on the other hand, probably not faster, but it's not worse either: my application just waits for the result and doesn't hang; it's responsive, and the scaling is much better. I have also discovered that a context switch in the threading world is not that cheap in real-world scenarios; it's in fact quite expensive, especially when you have more than two threads that need to cycle and swap among each other to be computed.

    On modern CPUs the situation isn't really that different: the memory controller is integrated, but my point is that an x86 CPU is basically a serial machine, and the memory controller works the same way as on the old machine with an external memory controller on the motherboard. The context switch is still a relevant cost in my application, and the fact that the memory controller is integrated, or that newer CPUs have more than two cores, is no bargain for me.

    From what I have experienced, the concurrent approach is good in theory but not that good in practice: with the memory model imposed by the hardware, it's hard to make good use of this paradigm, and it also introduces a lot of issues, ranging from the use of my data structures to the joining of multiple threads. Also, neither paradigm offers any guarantee about when the task or job will be done at a certain point in time, making them really similar from a functional point of view.

    Given the x86 memory model, why do the majority of people suggest using concurrency with C++ and not just an async approach? And why not consider the worst-case scenario of a computer where the context switch is probably more expensive than the computation itself?
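
    Editor's note: for a concrete picture of the two styles the question contrasts, here is a minimal C++11 sketch using only the standard library. The sumRange workload is made up purely for illustration, and the sketch is not meant to settle the question's performance argument either way; it only shows who manages the threads in each style.

    #include <cstdio>
    #include <future>
    #include <thread>

    // Stand-in workload, purely illustrative.
    long long sumRange(long long lo, long long hi)
    {
        long long total = 0;
        for (long long i = lo; i < hi; ++i) total += i;
        return total;
    }

    int main()
    {
        // Multi-threading style: explicit threads, explicit joins,
        // results communicated through shared state.
        long long partA = 0, partB = 0;
        std::thread t1([&] { partA = sumRange(0, 50000000); });
        std::thread t2([&] { partB = sumRange(50000000, 100000000); });
        t1.join();
        t2.join();
        std::printf("threads: %lld\n", partA + partB);

        // Async/future style: the caller just waits on a result. With
        // std::launch::async each task gets its own thread; with the default
        // policy the runtime may instead defer the work until get() is called.
        auto fa = std::async(std::launch::async, sumRange, 0LL, 50000000LL);
        auto fb = std::async(std::launch::async, sumRange, 50000000LL, 100000000LL);
        std::printf("async:   %lld\n", fa.get() + fb.get());
        return 0;
    }

    Note that std::async still uses threads under the hood; the distinction the question draws is between managing them yourself and handing scheduling policy to the runtime.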

    Read the article

  • Eloqua Sales Awareness Training for Partners - London, November 28-29

    - by Richard Lefebvre
    We are pleased to invite you to the free-of-charge Oracle Eloqua Sales Awareness Training for Partners - London, November 28 & 29 - COME LEARN WHAT ALL THE BUZZ IS ABOUT!

    WHEN SALES & MARKETING BOND, REVENUE GROWS. It’s not one thing that makes a company successful with marketing automation – it’s the combination of the best implementation, flexible long-term support and access to a community of marketing innovation that ensures you have the assistance you need every step of the way. Our customers choose Oracle|Eloqua because they know that when sales and marketing work together, leads are called upon, quotas are crushed and revenues climb. Learn how Oracle|Eloqua can help you put marketing to work for you!

    COVERED IN THIS TRAINING
    • Eloqua and the Customer Experience Strategy
    • How Modern Marketing Works
    • Introduction to Eloqua – Whiteboard POV
    • Go To Market Playbook
    • Competitive Landscape
    • Integration Options
    • Service Offerings
    With case studies, workshops and product demos to help you on your way to a new world of marketing expertise.

    LOGISTICS
    If you are ready to join us on this journey, now’s the time to let us know!
    • Tell us a little about your experience – complete this form to help us know you better (please ignore this step if your firm has already completed it).
    • Complete this form to RSVP by Thursday, November 21, 2013.
    • Find out more about Oracle|Eloqua, join the Oracle Eloqua Marketing Cloud Service Knowledge Zone and keep up on all the news!

    QUESTIONS? Contact [email protected]

    Please note that similar events will take place in other cities (Brussels, Istanbul and more to come) later in 2014.

    Read the article
