Search Results

Search found 1614 results on 65 pages for 'emps (expensive managemen'.


  • dependency analysis from C# code thru to database tables/columns

    - by fpdave
    I'm looking for a tool to do system-wide dependency analysis across C# code and SQL Server databases. It's looking like the only tool available that does this might be CAST (CAST Software), which is expensive and does lots more besides what I really need. C# code through to database column dependency would be hugely useful for many reasons, including: determining the effects of database changes throughout the system, seeing hot spots in the database schema, finding dead stored procedures/tables/etc., and understanding the existing code base. Does anyone know of any such tools?

    Read the article

  • Space partitioning when everything is moving

    - by Roy T.
    Background: Together with a friend I'm working on a 2D game that is set in space. To make it as immersive and interactive as possible we want there to be thousands of objects freely floating around, some clustered together, others adrift in empty space. Challenge: To unburden the rendering and physics engine we need to implement some sort of spatial partitioning. There are two challenges we have to overcome. The first challenge is that everything is moving, so reconstructing/updating the data structure has to be extremely cheap since it will have to be done every frame. The second challenge is the distribution of objects: as said before, there might be clusters of objects together and vast stretches of empty space, and to make it even worse there is no boundary to space. Existing technologies: I've looked at existing techniques like BSP trees, quadtrees, kd-trees and even R-trees, but as far as I can tell these data structures aren't a perfect fit, since updating a lot of objects that have moved to other cells is relatively expensive. What I've tried: I decided that I need a data structure that is geared more toward rapid insertion/updates than toward returning the smallest possible set of hits for a query. For that purpose I made the cells implicit, so each object, given its position, can calculate in which cell(s) it should be. Then I use a HashMap that maps cell coordinates to an ArrayList (the contents of the cell). This works fairly well since there is no memory lost on 'empty' cells and it's easy to calculate which cells to inspect. However, creating all those ArrayLists (worst case N) is expensive, and so is growing the HashMap many times (although that is slightly mitigated by giving it a large initial capacity). Problem: OK, so this works but still isn't very fast. Now I can try to micro-optimize the Java code, but I'm not expecting too much of that, since the profiler tells me that most time is spent creating all those objects that I use to store the cells. I'm hoping there are some other tricks/algorithms out there that make this a lot faster, so here is what my ideal data structure looks like: the number one priority is fast updating/reconstructing of the entire data structure; it's less important to finely divide the objects into equally sized bins, as we can draw a few extra objects and do a few extra collision checks if that means updating is a little bit faster; memory is not really important (PC game).
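    The implicit-grid idea described above maps naturally onto a hash keyed by packed cell coordinates. Below is a rough C# sketch of the same structure (the poster's code is Java, but the shape is identical); the per-cell lists are pooled between frames, which targets the allocation cost the profiler flagged. All type and member names are illustrative, not taken from the post.

    ```csharp
    using System;
    using System.Collections.Generic;

    // A minimal sketch (not the poster's code, which is Java) of an implicit uniform
    // grid keyed by packed cell coordinates. Per-cell lists are pooled and reused
    // between frames so that rebuilding the structure allocates almost nothing.
    class SpatialHash<T>
    {
        private readonly float cellSize;
        private readonly Dictionary<long, List<T>> cells = new Dictionary<long, List<T>>(1 << 14);
        private readonly Stack<List<T>> pool = new Stack<List<T>>();

        public SpatialHash(float cellSize)
        {
            this.cellSize = cellSize;
        }

        // Pack two 32-bit cell coordinates into one 64-bit key (works for negative coordinates).
        private static long Key(int cx, int cy)
        {
            return ((long)cx << 32) | (uint)cy;
        }

        // Call once per frame before re-inserting everything.
        public void Clear()
        {
            foreach (List<T> list in cells.Values)
            {
                list.Clear();
                pool.Push(list);   // recycle instead of letting the list become garbage
            }
            cells.Clear();
        }

        public void Insert(T item, float x, float y)
        {
            long key = Key((int)Math.Floor(x / cellSize), (int)Math.Floor(y / cellSize));
            List<T> list;
            if (!cells.TryGetValue(key, out list))
            {
                list = pool.Count > 0 ? pool.Pop() : new List<T>(8);
                cells[key] = list;
            }
            list.Add(item);
        }

        // Everything in the cell containing (x, y) and its eight neighbours.
        public IEnumerable<T> Query(float x, float y)
        {
            int cx = (int)Math.Floor(x / cellSize);
            int cy = (int)Math.Floor(y / cellSize);
            for (int dx = -1; dx <= 1; dx++)
                for (int dy = -1; dy <= 1; dy++)
                {
                    List<T> list;
                    if (cells.TryGetValue(Key(cx + dx, cy + dy), out list))
                        foreach (T item in list)
                            yield return item;
                }
        }
    }
    ```

    In use you would call Clear() at the start of each frame, Insert() every object, and Query() around each object during culling or collision checks; because the cells are implicit, nothing needs rebalancing when objects move.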

    Read the article

  • Think It's Hard to Integrate the Front and Back Office? Think Again...

    - by ruth.donohue
    There's no doubt about it, fragmented customer information across application silos exists because integration isn't easy. It can be expensive. And it can be further complicated by proprietary architectures and vendors. But by leveraging Oracle Application Integration Architecture, Pillar Data Systems was able to integrate Oracle CRM On Demand with Oracle E-Business Suite in six weeks, reducing the time required to complete the integration by 50% and the maintenance by 20% to 25%, freeing IT resources to focus on strategic initiatives. Learn more...

    Read the article

  • HTG Explains: Should You Build Your Own PC?

    - by Chris Hoffman
    There was a time when every geek seemed to build their own PC. While the masses bought eMachines and Compaqs, geeks built their own more powerful and reliable desktop machines for cheaper. But does this still make sense? Building your own PC still offers as much flexibility in component choice as it ever did, but prebuilt computers are available at extremely competitive prices. Building your own PC will no longer save you money in most cases. The Rise of Laptops It’s impossible to look at the decline of geeks building their own PCs without considering the rise of laptops. There was a time when everyone seemed to use desktops — laptops were more expensive and significantly slower in day-to-day tasks. With the diminishing importance of computing power — nearly every modern computer has more than enough power to surf the web and use typical programs like Microsoft Office without any trouble — and the rise of laptop availability at nearly every price point, most people are buying laptops instead of desktops. And, if you’re buying a laptop, you can’t really build your own. You can’t just buy a laptop case and start plugging components into it — even if you could, you would end up with an extremely bulky device. Ultimately, to consider building your own desktop PC, you have to actually want a desktop PC. Most people are better served by laptops. Benefits to PC Building The two main reasons to build your own PC have been component choice and saving money. Building your own PC allows you to choose all the specific components you want rather than have them chosen for you. You get to choose everything, including the PC’s case and cooling system. Want a huge case with room for a fancy water-cooling system? You probably want to build your own PC. In the past, this often allowed you to save money — you could get better deals by buying the components yourself and combining them, avoiding the PC manufacturer markup. You’d often even end up with better components — you could pick up a more powerful CPU that was easier to overclock and choose more reliable components so you wouldn’t have to put up with an unstable eMachine that crashed every day. PCs you build yourself are also likely more upgradable — a prebuilt PC may have a sealed case and be constructed in such a way to discourage you from tampering with the insides, while swapping components in and out is generally easier with a computer you’ve built on your own. If you want to upgrade your CPU or replace your graphics card, it’s a definite benefit. Downsides to Building Your Own PC It’s important to remember there are downsides to building your own PC, too. For one thing, it’s just more work — sure, if you know what you’re doing, building your own PC isn’t that hard. Even for a geek, researching the best components, price-matching, waiting for them all to arrive, and building the PC just takes longer. Warranty is a more pernicious problem. If you buy a prebuilt PC and it starts malfunctioning, you can contact the computer’s manufacturer and have them deal with it. You don’t need to worry about what’s wrong. If you build your own PC and it starts malfunctioning, you have to diagnose the problem yourself. What’s malfunctioning, the motherboard, CPU, RAM, graphics card, or power supply? Each component has a separate warranty through its manufacturer, so you’ll have to determine which component is malfunctioning before you can send it off for replacement. Should You Still Build Your Own PC? 
Let’s say you do want a desktop and are willing to consider building your own PC. First, bear in mind that PC manufacturers are buying in bulk and getting a better deal on each component. They also have to pay much less for a Windows license than the $120 or so it would cost you to buy your own Windows license. This is all going to wipe out the cost savings you’ll see — with everything all told, you’ll probably spend more money building your own average desktop PC than you would picking one up from Amazon or the local electronics store. If you’re an average PC user who uses your desktop for the typical things, there’s no money to be saved from building your own PC. But maybe you’re looking for something higher end. Perhaps you want a high-end gaming PC with the fastest graphics card and CPU available. Perhaps you want to pick out each individual component and choose the exact components for your gaming rig. In this case, building your own PC may be a good option. As you start to look at more expensive, high-end PCs, you may start to see a price gap — but you may not. Let’s say you wanted to blow thousands of dollars on a gaming PC. If you’re looking at spending this kind of money, it would be worth comparing the cost of individual components versus a prebuilt gaming system. Still, the actual prices may surprise you. For example, if you wanted to upgrade Dell’s $2293 Alienware Aurora to include a second NVIDIA GeForce GTX 780 graphics card, you’d pay an additional $600 on Alienware’s website. The same graphics card costs $650 on Amazon or Newegg, so you’d be spending more money building the system yourself. Why? Dell’s Alienware gets bulk discounts you can’t get — and this is Alienware, which was once regarded as selling ridiculously overpriced gaming PCs to people who wouldn’t build their own. Building your own PC still allows you to get the most freedom when choosing and combining components, but this is only valuable to a small niche of gamers and professional users — most people, even average gamers, would be fine going with a prebuilt system. If you’re an average person or even an average gamer, you’ll likely find that it’s cheaper to purchase a prebuilt PC rather than assemble your own. Even at the very high end, components may be more expensive separately than they are in a prebuilt PC. Enthusiasts who want to choose all the individual components for their dream gaming PC and want maximum flexibility may want to build their own PCs. Even then, building your own PC these days is more about flexibility and component choice than it is about saving money. In summary, you probably shouldn’t build your own PC. If you’re an enthusiast, you may want to — but only a small minority of people would actually benefit from building their own systems. Feel free to compare prices, but you may be surprised which is cheaper. Image Credit: Richard Jones on Flickr, elPadawan on Flickr, Richard Jones on Flickr

    Read the article

  • 6 Facts About GlassFish Announcement

    - by Bruno.Borges
    Since Oracle announced the end of commercial support for future Oracle GlassFish Server versions, the Java EE world has started wondering what will happen to GlassFish Server Open Source Edition. Unfortunately, there's a lot of misleading information going around, so let me clarify some things with facts, not FUD. Fact #1 - GlassFish Open Source Edition is not dead. GlassFish Server Open Source Edition will remain the reference implementation of Java EE. The current trunk is where an implementation for Java EE 8 will flourish, and this will become the future GlassFish 5.0. Declaring "GlassFish is dead" does the Java EE ecosystem no good. The GlassFish community will remain strong towards the future of Java EE. Without a revenue-focused mindset, this might actually help the GlassFish community shape the next version, free from any ties to commercial decisions. Fact #2 - OGS support is not over. As I said before, GlassFish Server Open Source Edition will continue. The main change is that there will be no further commercial releases of Oracle GlassFish Server. New and existing OGS 2.1.x and 3.1.x commercial customers will continue to be supported according to the Oracle Lifetime Support Policy. In parallel, I believe there is no other company in the Java EE business that offers commercial support for more than one build of a Java EE application server. This new direction can actually help customers and partners by simplifying decisions during commercial negotiations. Fact #3 - WebLogic is not always more expensive than OGS. Oracle GlassFish Server ("OGS") is a build of GlassFish Server Open Source Edition bundled with a set of commercial features called GlassFish Server Control and license bundles such as Java SE Support. At the time of this writing, OGS is list-priced at US$ 5,000 per processor. One claim some bloggers are making is that WebLogic is more expensive than this. Fact 3.1: that is not necessarily the case. The entry edition of WebLogic is called "Standard Edition" and falls under a policy where some “Standard Edition” products are licensed on a per-socket basis; as of the current price list, US$ 10,000 per socket. If you do the math, you will realize that WebLogic SE can actually be significantly more cost effective than OGS, and a customer can save money if running on a CPU with 4 cores or more, for example. Quote from the price list: “When licensing Oracle programs with Standard Edition One or Standard Edition in the product name (with the exception of Java SE Support, Java SE Advanced, and Java SE Suite), a processor is counted equivalent to an occupied socket; however, in the case of multi-chip modules, each chip in the multi-chip module is counted as one occupied socket.” For more details speak to your Oracle sales representative - this is clearly at list price, every customer typically has a relationship with Oracle (as they do with other vendors), and different contractual details may apply. And although OGS has always been production-ready for Java EE applications, it is no secret that WebLogic has always been a more enterprise-grade, mission-critical application server than OGS, going back to the BEA days.
Different editions of WLS provide features and add-ons such as the WebLogic Diagnostic Framework, Work Managers, Side by Side Deployment, an ADF and TopLink bundled license, a Web Tier (Oracle HTTP Server) bundled license, Fusion Middleware stack support, Oracle DB integration features, Oracle RAC features (such as GridLink), Coherence management capabilities, Advanced HA (Whole Service Migration and Server Migration), Java Mission Control, Flight Recorder, Oracle JDK support, etc. Fact #4 - There’s no major vendor supporting community builds of Java EE app servers. There are no major vendors providing support for community builds of any open source application server. For example, IBM used to provide community support for builds of Apache Geronimo, but not anymore. Red Hat does not commercially support builds of WildFly and, if I remember correctly, never supported community builds of the former JBoss AS. Oracle has never commercially supported GlassFish Server Open Source Edition builds. Tomitribe appears to be the exception to the rule, offering commercial support for Apache TomEE. Fact #5 - WebLogic and GlassFish share several Java EE implementations. It has been no secret that although GlassFish and WebLogic share some JSR implementations (as stated in The Aquarium announcement: JPA, JSF, WebSockets, CDI, Bean Validation, JAX-WS, JAXB, and WS-AT) and WebLogic understands GlassFish deployment descriptors, they are not from the same codebase. Fact #6 - WebLogic is not for GlassFish what JBoss EAP is for WildFly. WebLogic is a closed-source offering. It is commercialized through a license-plus-support-fee model. OGS, although built from open source code, has had the same commercial model as WebLogic. Still, one cannot compare GlassFish/WebLogic to WildFly/JBoss EAP. It is simply not the same case, since Oracle has had two different products from different codebases. The comparison should be limited to GlassFish Open Source / Oracle GlassFish Server versus WildFly / JBoss EAP. But the message now is much clearer: Oracle will commercially support only the proprietary product WebLogic, and will invest in GlassFish Server Open Source Edition as the reference implementation for the Java EE platform and the future Java EE 8, as a developer-friendly community distribution, while encouraging community participation through Adopt a JSR and contributions to GlassFish. Oracle's decision has pretty much the same goal as when IBM killed support for WebSphere Community Edition, and when Red Hat decided to change the name of JBoss Community Edition to WildFly, simplifying and clarifying the marketing message and leaving the commercial field wide open to JBoss EAP only. Oracle can now, as other vendors have already been doing, focus on only one commercial offering. Some users are saying they will now move to WildFly, but it is important to note that Red Hat does not offer commercial support for WildFly builds. Although future JBoss EAP versions will come from the same codebase as WildFly, the builds will definitely not be the same, nor will they share 100% of their functionality and bug fixes. This means there will be no company running a WildFly build in production with support from Red Hat. This discussion has also raised an important and interesting point: Oracle offers a free-for-developers OTN license for WebLogic. For other environments this is different, but please note this is the same policy Red Hat applies to JBoss EAP, as stated on their download page and terms.
Oracle had the same policy for OGS. TL;DR: GlassFish Server Open Source Edition isn’t dead. Current and new OGS 2.x/3.x customers will continue to have support (respecting the Lifetime Support Policy). WebLogic is not necessarily more expensive than OGS. Oracle will focus on one commercially supported Java EE application server, just as other vendors limit themselves to supporting a single build/product. Community builds are rarely commercially supported. Commercially supported builds of open source products are not exactly the same codebase as community builds. What's next for GlassFish and the Java EE community? There are conversations in place to tackle some of the community's wishes, most of them stated by Markus Eisele in his blog post. We will keep you posted.

    Read the article

  • Hosting multiple low traffic websites on ec2

    - by Niko Sams
    We have around 30 websites with almost no traffic (<~10 visits/day) which are currently hosted on a dedicated server. We are evaluating hosting on Amazon EC2; however, I'm not sure how to do that properly. One (micro) instance per website is too expensive, while ~10 websites on one instance (using Apache virtual hosts) makes auto scaling impossible (or at least difficult). Or is cloud computing not suitable for such a use case?

    Read the article

  • Hardware and Software Working Together - What Does LJE say?

    - by Stephen Slade
    IDG News Service - Oracle CEO Larry Ellison said Oracle will continue to bet on selling high-end custom hardware for its software products, even amidst a growing trend toward roomfuls of cheap, generic servers. "You have to be in the hardware business and the software business, to get the best possible system," he said during a keynote speech at Oracle's OpenWorld conference in Tokyo. "We believe it's the right idea, we believe it's the next generation of computing, we believe all the pieces have to fit together." Ellison, as he has often done in the past, repeatedly referred to Apple as his "favorite example" of such tight integration. He was a close friend of Apple's co-founder Steve Jobs and previously served on Apple's board of directors. He said sales of Oracle's advanced servers were booming and generating around a billion dollars a year in revenue for the company, which has until recent years focused almost exclusively on its software offerings. With the explosion of popular online services and the increasing number of mobile devices that access them, demand is high for databases that can quickly respond to high numbers of relatively simple queries. While Oracle is pitching its expensive, finely-tuned machines to meet this requirement, Internet behemoths like Google, Facebook and Microsoft increasingly rely on armies of low-cost, easily replaceable servers. Ellison emphasized the high specifications of Oracle's servers, which come packed with multiple terabytes of RAM and flash-based storage for speed. Such machines are superior to large server farms, he said, because they require far less electricity and floor space, and are also cost competitive. When asked about whether purchasing such products would lock customers in to expensive hardware from Oracle, he promised that the company's software would always run on "multiple hardware sources." Ellison, who spoke from Kyoto, Japan's ancient capital, was shown live online via webcast. The Oracle founder has a fondness for Japanese architecture and is staying in his large garden residence in the city. Source: Ellison: Hardware-software integration key, Apple is best example. Oracle's founder and CEO reaffirmed his commitment to custom hardware for its software products. LINK to Computerworld article, Apr 5, 2012: http://www.computerworld.com/s/article/9225858/Ellison_Hardware_software_integration_key_Apple_is_best_example?source=CTWNLE_nlt_entsoft_2012-04-09&utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+computerworld%2Fs%2Ffeed%2Ftopic%2F173+%28Computerworld+Databases+News%29#disqus_thread

    Read the article

  • How to compare Shared versus VPS hosting? [closed]

    - by Itai
    Possible Duplicate: How to find web hosting that meets my requirements? While shopping around for a new hosting service, I have found that I have no idea how to decide between shared hosting (which I presently use for all my sites) and virtual (VPS) hosting, which is always much more expensive. The real question is: how do I determine when shared hosting is no longer an option for a site? PS: This question covers some similar ground but is too specific for my needs.

    Read the article

  • How to draw texture to screen in Unity?

    - by user1306322
    I'm looking for a way to draw textures to the screen in Unity in a similar fashion to XNA's SpriteBatch.Draw method. Ideally, I'd like to write a few helper methods to make all my XNA code work in Unity. This is the first issue I've faced on this seemingly long journey. I guess I could just use quads, but I'm not so sure that's the least expensive way performance-wise. I could have done that in XNA too, but they didn't create SpriteBatch without a reason, I believe.
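    For a single texture drawn in screen coordinates, Unity's immediate-mode GUI can stand in for a one-off SpriteBatch.Draw call. The sketch below is a minimal, hedged example (the component and field names are arbitrary, not from any answer); for many sprites per frame, engine-batched textured quads, or the built-in sprite support in newer Unity versions, are usually cheaper than OnGUI drawing.

    ```csharp
    using UnityEngine;

    // Minimal sketch: draw a texture at a pixel rectangle on screen from OnGUI,
    // roughly analogous to a single SpriteBatch.Draw call. Attach to any GameObject
    // and assign a texture in the inspector.
    public class ScreenSprite : MonoBehaviour
    {
        public Texture2D texture;
        public Rect screenRect = new Rect(10, 10, 128, 128); // x, y, width, height in pixels

        void OnGUI()
        {
            if (texture != null)
            {
                GUI.DrawTexture(screenRect, texture);
            }
        }
    }
    ```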

    Read the article

  • Beware of Link Pedlars Bearing Gifts!

    As more organisations become aware of the value of links, the number of link pedlars is increasing. On the face of it these people offer an easy, if sometimes expensive, way to improve your website's performance in the search engines.

    Read the article

  • Master-slave vs. peer-to-peer architecture: benefits and problems

    - by Ashok_Ora
    Almost two decades ago, I was a member of a database development team that introduced adaptive locking. Locking, the most popular concurrency control technique in database systems, is pessimistic. Locking ensures that two or more conflicting operations on the same data item don’t “trample” on each other’s toes, resulting in data corruption. In a nutshell, here’s the issue we were trying to address. In everyday life, traffic lights serve the same purpose. They ensure that traffic flows smoothly and when everyone follows the rules, there are no accidents at intersections. As I mentioned earlier, the problem with typical locking protocols is that they are pessimistic. Regardless of whether there is another conflicting operation in the system or not, you have to hold a lock! Acquiring and releasing locks can be quite expensive, depending on how many objects the transaction touches. Every transaction has to pay this penalty. To use the earlier traffic light analogy, if you have ever waited at a red light in the middle of nowhere with no one on the road, wondering why you need to wait when there’s clearly no danger of a collision, you know what I mean. The adaptive locking scheme that we invented was able to minimize the number of locks that a transaction held, by detecting whether there were one or more transactions that needed conflicting access; if there were none, you could get by without holding any lock at all. In many “well-behaved” workloads, there are few conflicts, so this optimization is a huge win. If, on the other hand, there are many concurrent, conflicting requests, the algorithm gracefully degrades to the “normal” behavior with minimal cost. We were able to reduce the number of lock requests per TPC-B transaction from 178 requests down to 2! Wow! This is a dramatic improvement in concurrency as well as transaction latency. The lesson from this exercise was that if you can identify the common scenario and optimize for that case so that only the uncommon scenarios are more expensive, you can make dramatic improvements in performance without sacrificing correctness. So how does this relate to the architecture and design of some of the modern NoSQL systems? NoSQL systems can be broadly classified as master-slave sharded or peer-to-peer sharded systems. NoSQL systems with a peer-to-peer architecture have an interesting way of handling changes. Whenever an item is changed, the client (or an intermediary) propagates the changes synchronously or asynchronously to multiple copies (for availability) of the data. Since the change can be propagated asynchronously, during some interval in time it will be the case that some copies have received the update and others haven’t. What happens if someone tries to read the item during this interval? The client in a peer-to-peer system will fetch the same item from multiple copies and compare them to each other. If they’re all the same, then every copy that was queried has the same (and up-to-date) value of the data item, so all’s good. If not, then the system provides a mechanism to reconcile the discrepancy and to update stale copies. So what’s the problem with this? There are two major issues: First, IT’S HORRIBLY PESSIMISTIC because, in the common case, it is unlikely that the same data item will be updated and read from different locations at around the same time! For every read operation, you have to read from multiple copies.
That’s pretty expensive, especially if the data are stored in multiple geographically separate locations and network latencies are high. Second, if the copies are not all the same, the application has to reconcile the differences and propagate the correct value to the out-dated copies. This means that the application program has to handle discrepancies in the different versions of the data item and resolve the issue (which can further add to cost and operation latency). Resolving discrepancies is only one part of the problem. What if the same data item was updated independently on two different nodes (copies)? In that case, due to the asynchronous nature of change propagation, you might land up with different versions of the data item in different copies. In this case, the application program also has to resolve conflicts and then propagate the correct value to the copies that are out-dated or have incorrect versions. This can get really complicated. My hunch is that there are many peer-to-peer-based applications that don’t handle this correctly, and worse, don’t even know it. Imagine having 100s of millions of records in your database – how can you tell whether a particular data item is incorrect or out of date? And what price are you willing to pay for ensuring that the data can be trusted? Multiple network messages per read request? Discrepancy and conflict resolution logic in the application, and potentially, additional messages? All this overhead, when all you were trying to do was to read a data item. Wouldn’t it be simpler to avoid this problem in the first place? A master-slave architecture like the Oracle NoSQL Database handles this very elegantly. A change to a data item is always sent to the master copy. Consequently, the master copy always has the most current and authoritative version of the data item. The master is also responsible for propagating the change to the other copies (for availability and read scalability). Client drivers are aware of master copies and replicas, and client drivers are also aware of the “currency” of a replica. In other words, each NoSQL Database client knows how stale a replica is. This vastly simplifies the job of the application developer. If the application needs the most current version of the data item, the client driver will automatically route the request to the master copy. If the application is willing to tolerate some staleness of data (e.g. a version that is no more than 1 second out of date), the client can easily determine which replica (or set of replicas) can satisfy the request, and route the request to the most efficient copy. This results in a dramatic simplification in application logic and also minimizes network requests (the driver will only send the request to exactly the right replica, not many). So, back to my original point. A well designed and well architected system minimizes or eliminates unnecessary overhead and avoids pessimistic algorithms wherever possible in order to deliver a highly efficient and high performance system. If you’ve ever programmed an Oracle NoSQL Database application, you’ll know the difference!
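    The routing idea described here (send writes and "must be current" reads to the master, let reads that tolerate some staleness go to the cheapest sufficiently fresh replica) can be sketched generically. The code below is an illustration only, not the Oracle NoSQL Database driver API; every name in it is hypothetical.

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Toy model of master/replica read routing based on how much staleness the
    // caller will tolerate. Illustrative only; not any vendor's actual driver API.
    class Replica
    {
        public string Address;
        public bool IsMaster;
        public TimeSpan Lag;          // how far behind the master this copy is known to be
        public double ReadLatencyMs;  // e.g. measured round-trip time from this client
    }

    class ReadRouter
    {
        private readonly List<Replica> replicas;

        public ReadRouter(List<Replica> replicas)
        {
            this.replicas = replicas;
        }

        // Pick a single copy to read from. maxStaleness == TimeSpan.Zero means
        // "absolute consistency required", which only the master can guarantee.
        public Replica ChooseForRead(TimeSpan maxStaleness)
        {
            if (maxStaleness == TimeSpan.Zero)
                return replicas.First(r => r.IsMaster);

            // Any sufficiently fresh copy will do; prefer the cheapest one to reach.
            return replicas
                .Where(r => r.IsMaster || r.Lag <= maxStaleness)
                .OrderBy(r => r.ReadLatencyMs)
                .First();
        }
    }
    ```

    The point of the article is that because the client knows each replica's currency, a read touches exactly one copy, rather than several copies plus reconciliation logic as in the peer-to-peer case.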

    Read the article

  • Guiding Management to the Correct Decision

    - by Blumer
    My supervisor (also a developer) and I have a running joke about writing a book called "Managing From Beneath: Subversively Guiding Management to the Right Decision" and including a number of "techniques" we've developed for helping those who make the decisions to make the right ones. So far, we've got (cynicism warning!): BIC It! BIC stands for "Bury In Committee." When a bad idea comes up that someone wants to champion, we try to get it deferred to a committee for input. Typically it will either get killed outright (especially if other members of the committee are competing for you as a resource), or it will be hung up long enough that the proponent forgets about it. Smart, Stupid, or Expensive? When someone gets a visionary idea, offer them three ways to do it: a smart way, a stupid way, and an expensive way. The hope is that you've at least got a 2/3 shot of not having to do it the way that makes a piece of your soul die. All-Pro. It's a preemptive pro/con list in which you get into the mind of the (pr)opponent and think what would be cons against doing it your way. Twist them into pros and present them in your pro list before they have a chance to present them as cons. Dependicitis. Link pending decisions together, ideally with the proponent's pet project as the final link in the chain. Use this leverage to force action on those that have been put off. Preemptive Acceptance. Sometimes it's clear that management is going to go a particular direction regardless of advice to the contrary, and it's time to make the best of it. Take the opportunity to get something else you need, though. Approach the sponsor out of the blue and take the first step: "You know, I've been thinking about it, and while it's not the route I would advise, as long as we can get the schedule and budget for Project Awesome loosened up, I can work some magic to make your project fly." So ... what techniques have you come up with to try to head off the problem projects or make the best of what may come?

    Read the article

  • Portal Server comparisons / TCoO

    - by Scott
    We have a client who is looking to incorporate Oracle Portal into our next release. I'm newer to this team, but the team is currently working with Apache, so whichever portal server we choose will likely incur a bit of a learning curve. Is there any comparison (not marketing) out there which discusses the differences between the servers and/or their total cost of ownership? With 5 developers, installing RAD becomes expensive, a cost I'd assume they'd wish to move onto us with the change to Oracle Portal and WebSphere.

    Read the article

  • Right-size IT Budgets with Windows Server 2012 "Storage Spaces"

    - by KeithMayer
    What is the Largest Single Cost Category in Your IT Hardware Budget? If you're like most of the enterprise customer organizations that were surveyed when we were designing Windows Server 2012, your answer is probably the same as theirs: STORAGE! For the organizations we surveyed, we found that as much as 60% of their annual hardware budgets were allocated to expensive hardware SAN solutions due to ever-increasing storage requirements. Wouldn't it be nice to have some of that budget back?

    Read the article

  • What are the advantages of mainframes?

    - by Scott Weinstein
    The downsides of mainframes are well-trodden ground: expensive, legacy, dwindling community, etc. I'm not particularly interested in the downsides, but I am curious whether there are any benefits to mainframe hardware/software over the current Intel/AMD and Linux/Windows environment. I've been told that MFs are particularly good (and better than current servers) at heavy I/O loads. Is this still true?

    Read the article

  • Local server updates for the network

    - by Brendon
    I have set up one computer on our network as the file server. Because Internet here in Tanzania is both slow and expensive, I would like that one system to download all the updates and then have the other 10 computers on the network get those update files from the server. I'm a bit of a noobie to Ubuntu, but I really want to learn how to get this working smoothly so as to help other NGOs and schools here in Tanzania. Brendon

    Read the article

  • Why is multithreading often preferred for improving performance?

    - by user1849534
    I have a question about why programmers seem to love concurrency and multi-threaded programs in general. I'm considering 2 main approaches here: an async approach, basically based on signals, or just an async approach as described by many papers and languages like the new C# 5.0 for example, with a "companion thread" that manages the policy of your pipeline; and a concurrent or multi-threading approach. I will just say that I'm thinking about the hardware here and the worst-case scenario, and I have tested these 2 paradigms myself. The async paradigm is a winner, to the point that I don't get why people talk about multi-threading 90% of the time when they want to speed things up or make good use of their resources. I have tested multi-threaded programs and an async program on an old machine with an Intel quad-core that doesn't offer a memory controller inside the CPU; the memory is managed entirely by the motherboard. In this case performance is horrible with a multi-threaded application: even a relatively low number of threads like 3-4-5 can be a problem, and the application is unresponsive, slow and unpleasant. A good async approach is, on the other hand, probably not faster but it's not worse either; my application just waits for the result and doesn't hang, it's responsive and there is much better scaling going on. I have also discovered that a context switch in the threading world is not that cheap in real-world scenarios; it's in fact quite expensive, especially when you have more than 2 threads that need to cycle and swap among each other to be computed. On modern CPUs the situation isn't really that different: the memory controller is integrated, but my point is that an x86 CPU is basically a serial machine and the memory controller works the same way as on the old machine with an external memory controller on the motherboard. The context switch is still a relevant cost in my application, and the fact that the memory controller is integrated or that newer CPUs have more than 2 cores isn't a bargain for me. From what I have experienced, the concurrent approach is good in theory but not that good in practice; with the memory model imposed by the hardware, it's hard to make good use of this paradigm, and it also introduces a lot of issues ranging from the use of my data structures to the joining of multiple threads. Also, both paradigms do not offer any guarantee about when the task or the job will be done at a certain point in time, making them really similar from a functional point of view. According to the x86 memory model, why do the majority of people suggest using concurrency with C++ and not just an async approach? Also, why not consider the worst-case scenario of a computer where the context switch is probably more expensive than the computation itself?
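    Since the question mentions C# 5.0, here is a minimal sketch of the async side of the comparison: for I/O-bound work, awaiting releases the calling thread while the operation is in flight, so no thread is parked per outstanding request. The URLs below are placeholders.

    ```csharp
    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class AsyncFetchDemo
    {
        // Downloads several pages concurrently without dedicating a thread to each one.
        // While the requests are in flight, no thread is blocked waiting on them.
        static async Task<int> TotalLengthAsync(string[] urls)
        {
            using (var client = new HttpClient())
            {
                Task<string>[] downloads = new Task<string>[urls.Length];
                for (int i = 0; i < urls.Length; i++)
                    downloads[i] = client.GetStringAsync(urls[i]);   // starts the request, returns immediately

                string[] pages = await Task.WhenAll(downloads);      // yields the thread until all complete

                int total = 0;
                foreach (string page in pages)
                    total += page.Length;
                return total;
            }
        }

        static void Main()
        {
            string[] urls = { "http://example.com/", "http://example.org/" }; // placeholder URLs
            Console.WriteLine(TotalLengthAsync(urls).Result);
        }
    }
    ```

    Multithreading, by contrast, pays off when the work is CPU-bound and can genuinely run on separate cores; the two approaches answer different problems, which is part of what the discussion under this question turns on.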

    Read the article

  • How Important is the Domain Name?

    The domain you use for your web site can have a huge impact on the way that humans and search engine spiders perceive it. Domain names were once so expensive that only those wanting to protect a br... [Author: John Anthony - Computers and Internet - May 28, 2010]

    Read the article

  • How To Find Affordable Web Agency In India

    Finding an affordable web agency in India is a problem that one should not even have to think about. This is because every web agency is either way too expensive or, if it makes a reasonable quote for ... [Author: John Anthony - Web Design and Development - May 11, 2010]

    Read the article

  • Best Practices in .NET XML Serialization of Complex Classes

    This article will show you that XML serialization, so simply added in code, is not a magic wand. Serialization must be planned in full detail when working with complex classes, rather than being expected to work by itself. Lack of planning leads to redesign work later on, when maintaining serialization of the original classes becomes too expensive or even hits the limit beyond which serialization of the original classes is not possible without loss of data.
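    As a small illustration of the kind of planning the article argues for, here is a hedged C# sketch: a parent/child pair where the child's back-reference would create a cycle that XmlSerializer cannot handle, so it is excluded from the XML and restored after deserialization. The class and property names are invented for the example.

    ```csharp
    using System.Collections.Generic;
    using System.IO;
    using System.Xml.Serialization;

    // Hypothetical classes: an Order owning OrderLines, where each line keeps a
    // back-reference to its order. XmlSerializer cannot serialize the cycle, so the
    // back-reference is excluded and rebuilt after deserialization.
    public class Order
    {
        public Order()
        {
            Lines = new List<OrderLine>();
        }

        public string Number { get; set; }
        public List<OrderLine> Lines { get; set; }

        public static Order Load(string path)
        {
            var serializer = new XmlSerializer(typeof(Order));
            using (var stream = File.OpenRead(path))
            {
                var order = (Order)serializer.Deserialize(stream);
                foreach (var line in order.Lines)
                    line.Order = order;   // restore the link the XML does not carry
                return order;
            }
        }

        public void Save(string path)
        {
            var serializer = new XmlSerializer(typeof(Order));
            using (var stream = File.Create(path))
                serializer.Serialize(stream, this);
        }
    }

    public class OrderLine
    {
        public string Product { get; set; }
        public int Quantity { get; set; }

        [XmlIgnore]   // the back-reference would create a cycle in the XML
        public Order Order { get; set; }
    }
    ```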

    Read the article

  • Using SOUNDEX and DIFFERENCE to Standardize Data in SQL Server

    My client wants to standardize address information for existing and future addresses collected for their customers, particularly the street suffixes. The application used to enter and collect address information has the street suffix separated from the address field, but it is a textbox instead of a drop-down list, so things are not standardized. I know there are some options out there to standardize data, but they would like a less expensive alternative. Are there any functions in SQL Server that I can use to standardize data?
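    As a sketch of the approach this tip describes: SQL Server's built-in SOUNDEX() and DIFFERENCE() functions can score a free-text suffix against a small reference table of standard suffixes (DIFFERENCE returns 0-4, with 4 the closest phonetic match). The dbo.StreetSuffix table and connection string below are hypothetical placeholders, not from the tip itself.

    ```csharp
    using System;
    using System.Data.SqlClient;

    class SuffixStandardizer
    {
        // Returns the standard street suffix that sounds closest to the raw input,
        // or null if nothing scores at least 3 out of 4 on DIFFERENCE().
        // dbo.StreetSuffix is a hypothetical one-column reference table
        // ('ST', 'AVE', 'BLVD', ...); adjust names to the real schema.
        static string Standardize(string connectionString, string rawSuffix)
        {
            const string sql =
                @"SELECT TOP (1) StandardSuffix
                  FROM dbo.StreetSuffix
                  WHERE DIFFERENCE(StandardSuffix, @raw) >= 3
                  ORDER BY DIFFERENCE(StandardSuffix, @raw) DESC;";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                command.Parameters.AddWithValue("@raw", rawSuffix);
                connection.Open();
                return command.ExecuteScalar() as string;
            }
        }

        static void Main()
        {
            // Example usage with a placeholder connection string.
            string result = Standardize(
                "Server=.;Database=Customers;Integrated Security=true", "Avenu");
            Console.WriteLine(result ?? "no confident match");
        }
    }
    ```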

    Read the article

  • dot42 vs Xamarin to develop in C# for Android phones [on hold]

    - by opt
    Does anyone have experience with both dot42 and Xamarin for developing C# apps for Android? Could you please say why one should prefer one over the other? I like the free Visual Studio integration offered by dot42 (while Xamarin requires a subscription for this, namely the Business edition, which is quite expensive). I would also like to know whether Visual Studio integration actually means that one could use all the libraries available for desktop apps. Thank you.

    Read the article
