Search Results

Search found 2214 results on 89 pages for 'significant figures'.


  • C question: Padding bits in unsigned integers and bitwise operations (C89)

    - by Anonymous Question Guy
    I have a lot of code that performs bitwise operations on unsigned integers. I wrote that code assuming the operations act on integers of fixed width with no padding bits — for example, an array of 32-bit unsigned integers in which all 32 bits of each integer are available. I'm now trying to make my code more portable, and I'm focused on making sure I'm C89 compliant (in this case). One of the issues I've come across is the possibility of padded integers. Take this extreme example, taken from the GMP manual:

        However on Cray vector systems it may be noted that short and int are always stored in 8 bytes (and with sizeof indicating that) but use only 32 or 46 bits. The nails feature can account for this, by passing for instance 8*sizeof(int)-INT_BIT.

    I've also read about this type of padding in other places. I read a post on SO last night (forgive me, I don't have the link and I'm citing something similar from memory) where if you have, say, a double with 60 usable bits, the other 4 could be used for padding, and those padding bits could serve some internal purpose, so they cannot be modified.

    So let's say, for example, my code is compiled on a platform where an unsigned int type is sized at 4 bytes, each byte being 8 bits, but the most significant 2 bits are padding bits. Would UINT_MAX in that case be 0x3FFFFFFF (1073741823)?

        #include <stdio.h>
        #include <stdlib.h>

        /* padding bits represented by underscores */
        int main( int argc, char **argv )
        {
            unsigned int a = 0x2AAAAAAA; /* __101010101010101010101010101010 */
            unsigned int b = 0x15555555; /* __010101010101010101010101010101 */
            unsigned int c = a ^ b;      /* ?? __111111111111111111111111111111 */
            unsigned int d = c << 5;     /* ?? __111111111111111111111111100000 */
            unsigned int e = d >> 5;     /* ?? __000001111111111111111111111111 */

            printf( "a: %X\nb: %X\nc: %X\nd: %X\ne: %X\n", a, b, c, d, e );

            return 0;
        }

    Is it safe to XOR two integers with padding bits? Wouldn't I XOR whatever the padding bits are? I can't find this behavior covered in C89. Furthermore, is the variable c guaranteed to be 0x3FFFFFFF, or, if for example the two padding bits were both set in a or b, would c be 0xFFFFFFFF? Same question for d and e — am I manipulating the padding bits by shifting? I would expect to see the output below, assuming 32 bits with the 2 most significant bits used for padding, but I want to know if something like this is guaranteed:

        a: 2AAAAAAA
        b: 15555555
        c: 3FFFFFFF
        d: 3FFFFFE0
        e: 01FFFFFF

    Also, are padding bits always the most significant bits, or could they be the least significant bits? Thanks guys.

    EDIT 12/19/2010 5PM EST: Christoph has answered my question. Thanks! I had also asked (above) whether padding bits are always the most significant bits. This is addressed in the rationale for the C99 standard, and the answer is no. I am playing it safe and assuming the same for C89. Here is specifically what the C99 rationale says for §6.2.6.2 (Representation of Integer Types):

        Padding bits are user-accessible in an unsigned integer type. For example, suppose a machine uses a pair of 16-bit shorts (each with its own sign bit) to make up a 32-bit int and the sign bit of the lower short is ignored when used in this 32-bit int. Then, as a 32-bit signed int, there is a padding bit (in the middle of the 32 bits) that is ignored in determining the value of the 32-bit signed int. But, if this 32-bit item is treated as a 32-bit unsigned int, then that padding bit is visible to the user's program. The C committee was told that there is a machine that works this way, and that is one reason that padding bits were added to C99. Footnotes 44 and 45 mention that parity bits might be padding bits. The committee does not know of any machines with user-accessible parity bits within an integer. Therefore, the committee is not aware of any machines that treat parity bits as padding bits.

    EDIT 12/28/2010 3PM EST: I found an interesting discussion on comp.lang.c from a few months ago:

        Bitwise Operator Effects on Padding Bits (VelocityReviews reader)
        Bitwise Operator Effects on Padding Bits (Google Groups alternate link)

    One point made by Dietmar which I found interesting: "Let's note that padding bits are not necessary for the existence of trap representations; combinations of value bits which do not represent a value of the object type would also do."
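    [Editor's note] One portable way to check whether unsigned int carries padding bits on a given platform is to count the value bits in UINT_MAX (which by definition has every value bit set and no padding) and compare against CHAR_BIT * sizeof(unsigned int). The sketch below is only an illustration of that idea, not part of the original question; the helper name count_value_bits is invented here:

        #include <limits.h>
        #include <stdio.h>

        /* Count the value bits of unsigned int by counting the 1-bits in UINT_MAX.
           UINT_MAX is 2^N - 1 where N is the number of value bits, so every value
           bit is set and padding bits do not contribute. */
        static int count_value_bits(void)
        {
            int bits = 0;
            unsigned int max = UINT_MAX;
            while (max != 0) {
                bits += (int)(max & 1u);
                max >>= 1;
            }
            return bits;
        }

        int main(void)
        {
            int value_bits  = count_value_bits();
            int object_bits = (int)(CHAR_BIT * sizeof(unsigned int));
            printf("value bits: %d, object bits: %d, padding bits: %d\n",
                   value_bits, object_bits, object_bits - value_bits);
            return 0;
        }

    On the hypothetical platform described above (UINT_MAX of 0x3FFFFFFF in a 32-bit object) this would report 30 value bits and 2 padding bits.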

    Read the article

  • Problems in Table of Contents formatting

    - by ChrisW
    Two questions about captions in Word (they are related, hence the same post):

    1. Using Word 2010 (and its inbuilt equation editor) I've got figure captions which contain equations (well, actually, they represent chemical equations, such as nitrate, for which the correct representation is NO3- where the 3 is subscript and the - is superscript, but in the same column). However, when I generate a figure list, the equation displays as NO3- (with no subscript or superscript) - Word knows it's an equation though (the Equation Tools design ribbon/tab is displayed when I click on the NO3-). I've tried changing it from Professional to Linear and similar other obvious options, but still can't get it to display correctly. File to show this problem in action: http://dl.dropbox.com/u/101867759/EqtnTest.docx - note how the (chemical) equation for nitrate is rendered correctly in the 'caption' on Page 2, but not in the ToC on page 1.

    2. I have another caption where the whole figure is included in my list of figures. When I double click on the caption in my text, the caption is highlighted (as expected), but so is the figure (this doesn't happen with any of my other figures), so I assume that the figure has been 'linked' in some way to the text - how do I remove this link?

    Read the article

  • Praise for Europe's Smart Metering & Conservation Efforts

    - by caroline.yu
    Recently, a writer at the Home Energy Team praised the UK for its efforts towards smart metering and energy conservation, with an article entitled UK Blazing A Trail With Smart Metering At Home? The article highlighted that the Department of Energy and Climate Change has announced that smart metering will be introduced in the next decade and that all UK households will have smart meters by the year 2020.

    In fact, the UK is not the only country striving to achieve carbon reduction targets, as many of its European counterparts have begun to take positive steps towards tackling the issue of energy conservation by implementing innovative new metering and billing technologies as well as promoting alternative energy solutions, such as wind and solar power. Since 1997, the states of the European Union, including France, Germany and Spain, have been working towards achieving a target of 12 percent renewable electricity by 2010. Germany in particular has made a significant achievement so far, having surpassed the target early in 2007. This success is largely due to the German Renewable Energy Act (EEG), which promoted the use of renewable energy.

    Recently, analysis from the European Wind Energy Association (EWEA) found that 21 of the EU Member States are meeting or exceeding their national target to achieve 20 percent renewable energy by 2020. However, six states - Belgium, Italy, Luxembourg, Malta, Bulgaria and Denmark - say they will not manage to reach their target through domestic action alone. Bulgaria and Denmark believe that with fresh national initiatives they could meet or exceed their targets, but others, including Italy, may need to import renewable energy from neighboring non-EU countries. Top achievers, according to the EWEA report, are Spain, which believes its renewable energy will reach 22.7 percent by 2020, as well as Germany, Estonia, Greece, Ireland, Poland, Slovakia and Sweden, who will all exceed their targets.

    "Importantly, the way that this renewable energy is controlled and distributed must be addressed in order to ensure its success," said Bastian Fischer, vice president and general manager EMEA, Oracle Utilities. "A smart grid infrastructure can enable utilities to deal with load distribution in times of increased need and ensure power is always available from these means. A smart grid also underpins the success of metering and billing technologies, such as smart metering, and allows utilities to deal with increased usage data and provide accurate billing."

    Outside of Europe, Australia has made significant steps towards improving water conservation. The Australian Department of Sustainability and Environment took some of the recent advancements made in the energy sector, including new metering and billing solutions, and applied them to the water industry, enhancing customer service and reducing consumption as a result.

    The adoption of smart metering in Europe is mainly driven by regulation, but significant technological improvements are being made the world over to change the way we use all kinds of energy. The developing markets, however, are lagging behind. One of the primary reasons for this is the lack of infrastructure to use as a foundation for setting up energy-saving solutions, which is slowing the adoption of technologies such as smart meters. On the other hand, these countries benefit from having less outdated infrastructure and fewer legacy systems, which others often cite as a difficult barrier to deploying new solutions. As a result, some countries should find new technologies easier to implement and adapt to in the immediate future, without this roadblock.

    Read the article

  • DotNetNuke 7.0 Only Weeks Away!

    - by sbwalker
    The software industry moves at a lightning pace, and it is only through constant focus and continuous investment that a software product can remain both stable and relevant over the long term. As we approach the 10 Year Anniversary of the DotNetNuke platform, it seems only fitting that we are on the verge of announcing yet another significant product milestone. DotNetNuke 7.0 is just around the corner and represents a bold step forward for our Content Management Platform, including substantial business productivity enhancements, investments in web platform relevance, and a significant overhaul and modernization of the user interface and user experience. It has been five months since I posted the announcement that the next major version of the platform was going to be DotNetNuke 7.0.  This announcement created tremendous excitement and anticipation in the DotNetNuke community, as major version increments have always been utilized as an opportunity  to introduce revolutionary new product features and capabilities. After months of intense product development, the finish line is finally in sight. With that, I am pleased to announce that we released a Release Candidate (RC) of DotNetNuke 7.0 yesterday. You can download the RC from our project page on Codeplex. A Release Candidate represents a software version which is very near to “release” quality. So although we will not be officially endorsing the RC for production use, or providing an official upgrade path, it does represent a significant milestone in our software development efforts ( if you are looking for a more detailed explanation of our software release terminology, I would encourage you to read the blog written by Co-Founder, Joe Brinkman titled "What's In A Name?" ). Modernizing a software platform does have its share of challenges from a backward compatibility perspective and, as usual, we are taking great care in ensuring a seamless upgrade path for our customers. In order to remain relevant and progressive, you need to be aware that DotNetNuke 7.0 has adopted a new set of baseline infrastructure requirements including ASP.NET 4.0.  As a result we are encouraging all major stakeholders in the ecosystem ( module developers, designers, partners, customers, etc... ) to take the opportunity to install the RC in their own local environments. This is the last opportunity to let us know about any final issues which may need to be addressed prior to final release. Mark your calendars now… the expected public release date (RTM) for DotNetNuke 7.0 will be Wednesday, November 28th. On a side note, we expect to release a 6.2.5 Maintenance version today. This release contains some high priority product quality improvements as well as security patches for some vulnerabilities reported through our standard ecosystem channels. As a result we will be encouraging all of our customers to upgrade to the 6.2.5 release as soon as it is available. I hope everyone is as excited as I am about the upcoming DotNetNuke 7.0 release. Please take the opportunity over the next week to put the new platform through its paces. Remember, only through our collective efforts can we ensure that this release has the greatest market impact of any DotNetNuke release to date.

    Read the article

  • Seizing the Moment with Mobility

    - by Kathryn Perry
    A guest post by Hernan Capdevila, Vice President, Oracle Fusion Apps

    Mobile devices are forcing a paradigm shift in the workplace – they’re changing the way businesses can do business and the type of cultures they can nurture. As our customers talk about their mobile needs, we hear them saying they want instant-on access to enterprise data so workers can be more effective at their jobs anywhere, anytime. They also are interested in being more cost effective from an IT point of view. The mobile revolution – with the idea of BYOD (bring your own device) – has added an interesting dynamic because previously IT was driving the employee device strategy and ecosystem. That's been turned on its head with the consumerization of IT. Now employees are figuring out how to use their personal devices for work purposes and IT has to figure out how to adapt.

    Blurring the Lines between Work and Personal Life

    My vision of where businesses will be five years from now is that our work lives and personal lives will be more interwoven. In turn, enterprises will have to determine how to make employees’ work lives fit more into the fabric of their personal lives. And personal devices like smartphones are going to drive significant business value because they let us accomplish things very incrementally. I can be sitting on a train or in a taxi and be productive. At the end of any meeting, I can capture ideas and tasks or follow up with people in real time. Mobile devices enable this notion of seizing the moment – capitalizing on opportunities that might otherwise have slipped away because we're not connected. For the industry shapers out there, this is game changing.

    The lean and agile workforce is definitely the future. This notion of the board sitting down with the executive team to lay out strategic objectives for a three- to five-year plan, bringing in HR to determine how they're going to staff the strategic activities, kicking off the execution, and then revisiting the plan in three to five years to create another three- to five-year plan is yesterday's model. Businesses that continue to approach innovation in that way are in the dinosaur age. Today it's about incremental planning and incremental execution, which requires a lot of cohesion and synthesis within the workforce. There needs to be this interweaving notion within the workforce about how ideas cascade down, how people engage, how they stay connected, and how insights are shared.

    How to Survive and Thrive in Today’s Marketplace

    The notion of Facebook isn’t new. We lived it in pre-Internet days with America Online and Prodigy – Facebook is just the renaissance of these services in a more viral and pervasive way. And given the trajectory of the consumerization of IT, with people bringing their personal tooling to work, the enterprise has no option but to adapt. The sooner that businesses realize this from a top-down point of view, the sooner they will be able to really drive significant innovation and adapt to the marketplace. There is a small number of companies right now (I think it's closer to 20% rather than 80%, but the number is expanding) that are able to really innovate in this incremental marketplace. So from a competitive point of view, there's no choice but to be social and stay connected.

    By far the majority of users on Facebook and LinkedIn are mobile users – people on iPhones, smartphones, Android phones, and tablets. It's not the couch people, right? It's the on-the-go people – those people at the coffee shops. When you're sitting at your desk on a big desktop computer, you typically have better things to do than to be on Facebook.

    This is a topic I'm extremely passionate about because I think mobile devices are game changing. Mobility delivers significant value to businesses – it also brings dramatic simplification from a functional point of view and transforms our work life experience.

    Hernan Capdevila
    Vice President, Oracle Applications Development

    Read the article

  • The Healthy Tension That Mobility Creates

    - by Kathryn Perry
    A guest post by Hernan Capdevila, Vice President, Oracle Fusion Apps

    In my previous post, I talked about the value of the mobile revolution for businesses and workers. Now let me put on a different hat and view the world from the IT department and the IT leader’s viewpoint. The IT leader has different concerns – around privacy, potential liability of information leakage, and intellectual property protection. These concerns and the leader’s goals create a healthy tension with the users. For example, effective device management becomes a must-have for the IT leader, especially if you look at the Android ecosystem as an example. There are benefits to the Android strategy, but there are also drawbacks, such as the lack of uniformity – in device management, in operating systems, and in the application taxonomy and capabilities. If you compare Android to iOS, Apple's operating system, iOS is more unified, more streamlined, and easier to manage. In either case, this is where mobile device management in the cloud makes good sense. I don't think IT departments should be hosting device management and managing that complexity. It should be a cloud service, and I predict it's going to be key for our customers.

    A New Focus for IT Departments

    So where does that leave the IT departments? I think their futures are in governance, which is a more strategic play than a tactical one. Device management is tactical and it's the “now” topic. But the mobile phenomenon, if you will, is going to drive significant change in terms of how IT plans, hosts, and deploys enterprise applications. For example, opening up enterprise applications for mobile users presents some challenges unless you deploy more complicated network topologies, such as virtual private networks and threat protection technology. If you really want employees to be mobile you need to remove those kinds of barriers. But I don’t think IT departments want to wrestle with exposing their private enterprise data centers and being responsible for hosted business applications – applications that they are, in a sense, making vulnerable to the public world. This opens up a significant need and a significant driver for cloud applications. However, it's not just about taking away the complexity – it's also about taking away the responsibility. Why should every business have to carry the responsibility and figure out all the nuts and bolts of how to protect themselves in this public, mobile world? When you use apps in the cloud, either your vendor or your hosting partner should have figured all that out. They need to assure the business that they are adhering to all sorts of security and compliance regulations so users can be connected and have access to information anywhere, anytime.

    More Ideas and Better Service

    What’s more interesting is the world of possibilities that the connected, cloud-based world enables. I believe that the one-size-fits-all, uber-best-practices, lowest-common-denominator-like capabilities will go away. IT will now be able to solve very specific business challenges for the different corporate functions it serves. In this new world, IT will play a key role in enabling different organizations within a company to be best in class and deliver greater value to the line of business managers. IT will actually help to differentiate. The net result is a more agile workforce and business, because each department is getting work done its own way.

    Read the article

  • TicTacToe strategic reduction

    - by NickLarsen
    I decided to write a small program that solves TicTacToe in order to try out the effect of some pruning techniques on a trivial game. The full game tree using minimax to solve it only ends up with 549,946 possible games. With alpha-beta pruning, the number of states required to evaluate was reduced to 18,297. Then I applied a transposition table that brings the number down to 2,592. Now I want to see how low that number can go. The next enhancement I want to apply is a strategic reduction. The basic idea is to combine states that have equivalent strategic value. For instance, on the first move, if X plays first, there is nothing strategically different (assuming your opponent plays optimally) about choosing one corner instead of another. In the same situation, the same is true of the center of the walls of the board, and the center is also significant. By reducing to significant states only, you end up with only 3 states for evaluation on the first move instead of 9. This technique should be very useful since it prunes states near the top of the game tree. This idea came from the GameShrink method created by a group at CMU, only I am trying to avoid writing the general form, and just doing what is needed to apply the technique to TicTacToe. In order to achieve this, I modified my hash function (for the transposition table) to enumerate all strategically equivalent positions (using rotation and flipping functions), and to only return the lowest of the values for each board. Unfortunately now my program thinks X can force a win in 5 moves from an empty board when going first. After a long debugging session, it became apparent to me the program was always returning the move for the lowest strategically significant move (I store the last move in the transposition table as part of my state). Is there a better way I can go about adding this feature, or a simple method for determining the correct move applicable to the current situation with what I have already done?
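    [Editor's note] As a rough illustration of the reduction the poster describes – encoding each board together with its rotations and reflections and keeping the lowest encoding, so that strategically equivalent positions collapse into a single transposition-table entry – a minimal sketch in C might look like the following. This is not the poster's code; the board representation (a 9-element array of 0/1/2 cells) and the name canonical_key are assumptions made purely for illustration:

        /* The 8 symmetries of a 3x3 board: identity, three rotations, and four
           reflections, each given as a permutation of cell indices 0..8. */
        static const int SYM[8][9] = {
            {0,1,2,3,4,5,6,7,8},  /* identity              */
            {6,3,0,7,4,1,8,5,2},  /* rotate 90 clockwise   */
            {8,7,6,5,4,3,2,1,0},  /* rotate 180            */
            {2,5,8,1,4,7,0,3,6},  /* rotate 270            */
            {2,1,0,5,4,3,8,7,6},  /* mirror left-right     */
            {6,7,8,3,4,5,0,1,2},  /* mirror top-bottom     */
            {0,3,6,1,4,7,2,5,8},  /* main diagonal flip    */
            {8,5,2,7,4,1,6,3,0}   /* anti-diagonal flip    */
        };

        /* Encode one orientation of the board as a base-3 number. */
        static unsigned long encode(const int board[9], const int perm[9])
        {
            unsigned long key = 0;
            int i;
            for (i = 0; i < 9; i++)
                key = key * 3 + (unsigned long)board[perm[i]];
            return key;
        }

        /* Canonical key: the smallest encoding over all 8 symmetries, so that
           strategically equivalent boards share one transposition-table entry. */
        static unsigned long canonical_key(const int board[9])
        {
            unsigned long best = encode(board, SYM[0]);
            int s;
            for (s = 1; s < 8; s++) {
                unsigned long k = encode(board, SYM[s]);
                if (k < best)
                    best = k;
            }
            return best;
        }

    Note that a table keyed this way stores which symmetry class a position belongs to, not which concrete orientation was searched – which is exactly why storing "the move" alongside the entry breaks, as the poster found: the stored move belongs to the canonical orientation and must be mapped back through the symmetry that produced the lowest key before it can be played on the actual board.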

    Read the article

  • VLOOKUP in Excel, part 2: Using VLOOKUP without a database

    - by Mark Virtue
    In a recent article, we introduced the Excel function called VLOOKUP and explained how it could be used to retrieve information from a database into a cell in a local worksheet.  In that article we mentioned that there were two uses for VLOOKUP, and only one of them dealt with querying databases.  In this article, the second and final in the VLOOKUP series, we examine this other, lesser known use for the VLOOKUP function. If you haven’t already done so, please read the first VLOOKUP article – this article will assume that many of the concepts explained in that article are already known to the reader. When working with databases, VLOOKUP is passed a “unique identifier” that serves to identify which data record we wish to find in the database (e.g. a product code or customer ID).  This unique identifier must exist in the database, otherwise VLOOKUP returns us an error.  In this article, we will examine a way of using VLOOKUP where the identifier doesn’t need to exist in the database at all.  It’s almost as if VLOOKUP can adopt a “near enough is good enough” approach to returning the data we’re looking for.  In certain circumstances, this is exactly what we need. We will illustrate this article with a real-world example – that of calculating the commissions that are generated on a set of sales figures.  We will start with a very simple scenario, and then progressively make it more complex, until the only rational solution to the problem is to use VLOOKUP.  The initial scenario in our fictitious company works like this:  If a salesperson creates more than $30,000 worth of sales in a given year, the commission they earn on those sales is 30%.  Otherwise their commission is only 20%.  So far this is a pretty simple worksheet: To use this worksheet, the salesperson enters their sales figures in cell B1, and the formula in cell B2 calculates the correct commission rate they are entitled to receive, which is used in cell B3 to calculate the total commission that the salesperson is owed (which is a simple multiplication of B1 and B2). The cell B2 contains the only interesting part of this worksheet – the formula for deciding which commission rate to use: the one below the threshold of $30,000, or the one above the threshold.  This formula makes use of the Excel function called IF.  For those readers that are not familiar with IF, it works like this: IF(condition,value if true,value if false) Where the condition is an expression that evaluates to either true or false.  In the example above, the condition is the expression B1<B5, which can be read as “Is B1 less than B5?”, or, put another way, “Are the total sales less than the threshold”.  If the answer to this question is “yes” (true), then we use the value if true parameter of the function, namely B6 in this case – the commission rate if the sales total was below the threshold.  If the answer to the question is “no” (false), then we use the value if false parameter of the function, namely B7 in this case – the commission rate if the sales total was above the threshold. As you can see, using a sales total of $20,000 gives us a commission rate of 20% in cell B2.  If we enter a value of $40,000, we get a different commission rate: So our spreadsheet is working. Let’s make it more complex.  Let’s introduce a second threshold:  If the salesperson earns more than $40,000, then their commission rate increases to 40%: Easy enough to understand in the real world, but in cell B2 our formula is getting more complex.  
If you look closely at the formula, you’ll see that the third parameter of the original IF function (the value if false) is now an entire IF function in its own right.  This is called a nested function (a function within a function).  It’s perfectly valid in Excel (it even works!), but it’s harder to read and understand. We’re not going to go into the nuts and bolts of how and why this works, nor will we examine the nuances of nested functions.  This is a tutorial on VLOOKUP, not on Excel in general. Anyway, it gets worse!  What about when we decide that if they earn more than $50,000 then they’re entitled to 50% commission, and if they earn more than $60,000 then they’re entitled to 60% commission? Now the formula in cell B2, while correct, has become virtually unreadable.  No-one should have to write formulae where the functions are nested four levels deep!  Surely there must be a simpler way? There certainly is.  VLOOKUP to the rescue! Let’s redesign the worksheet a bit.  We’ll keep all the same figures, but organize it in a new way, a more tabular way: Take a moment and verify for yourself that the new Rate Table works exactly the same as the series of thresholds above. Conceptually, what we’re about to do is use VLOOKUP to look up the salesperson’s sales total (from B1) in the rate table and return to us the corresponding commission rate.  Note that the salesperson may have indeed created sales that are not one of the five values in the rate table ($0, $30,000, $40,000, $50,000 or $60,000).  They may have created sales of $34,988.  It’s important to note that $34,988 does not appear in the rate table.  Let’s see if VLOOKUP can solve our problem anyway… We select cell B2 (the location we want to put our formula), and then insert the VLOOKUP function from the Formulas tab: The Function Arguments box for VLOOKUP appears.  We fill in the arguments (parameters) one by one, starting with the Lookup_value, which is, in this case, the sales total from cell B1.  We place the cursor in the Lookup_value field and then click once on cell B1: Next we need to specify to VLOOKUP what table to lookup this data in.  In this example, it’s the rate table, of course.  We place the cursor in the Table_array field, and then highlight the entire rate table – excluding the headings: Next we must specify which column in the table contains the information we want our formula to return to us.  In this case we want the commission rate, which is found in the second column in the table, so we therefore enter a 2 into the Col_index_num field: Finally we enter a value in the Range_lookup field. Important:  It is the use of this field that differentiates the two ways of using VLOOKUP.  To use VLOOKUP with a database, this final parameter, Range_lookup, must always be set to FALSE, but with this other use of VLOOKUP, we must either leave it blank or enter a value of TRUE.  When using VLOOKUP, it is vital that you make the correct choice for this final parameter. To be explicit, we will enter a value of true in the Range_lookup field.  It would also be fine to leave it blank, as this is the default value: We have completed all the parameters.  We now click the OK button, and Excel builds our VLOOKUP formula for us: If we experiment with a few different sales total amounts, we can satisfy ourselves that the formula is working. Conclusion In the “database” version of VLOOKUP, where the Range_lookup parameter is FALSE, the value passed in the first parameter (Lookup_value) must be present in the database.  
In other words, we’re looking for an exact match. But in this other use of VLOOKUP, we are not necessarily looking for an exact match.  In this case, “near enough is good enough”.  But what do we mean by “near enough”?  Let’s use an example:  When searching for a commission rate on a sales total of $34,988, our VLOOKUP formula will return us a value of 30%, which is the correct answer.  Why did it choose the row in the table containing 30% ?  What, in fact, does “near enough” mean in this case?  Let’s be precise: When Range_lookup is set to TRUE (or omitted), VLOOKUP will look in column 1 and match the highest value that is not greater than the Lookup_value parameter. It’s also important to note that for this system to work, the table must be sorted in ascending order on column 1! If you would like to practice with VLOOKUP, the sample file illustrated in this article can be downloaded from here.
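    [Editor's note] To spell out that approximate-match rule procedurally, here is a rough, hypothetical sketch in C (not part of the original article): the hard-coded table mirrors the worked example above, and the function name commission_rate is invented. The loop keeps the rate on the row with the highest first-column value that does not exceed the lookup value, which is exactly what VLOOKUP does when Range_lookup is TRUE and the table is sorted ascending.

        #include <stdio.h>

        /* Rate table sorted ascending on the threshold column, mirroring the
           example: $0 -> 20%, $30,000 -> 30%, $40,000 -> 40%, and so on. */
        static const double THRESHOLDS[] = {     0.0, 30000.0, 40000.0, 50000.0, 60000.0 };
        static const double RATES[]      = {    0.20,    0.30,    0.40,    0.50,    0.60 };
        static const int    ROWS         = 5;

        /* Approximate match: return the rate on the row with the highest
           threshold that is not greater than the sales total. */
        static double commission_rate(double sales)
        {
            double rate = RATES[0];
            int i;
            for (i = 0; i < ROWS; i++) {
                if (THRESHOLDS[i] <= sales)
                    rate = RATES[i];
                else
                    break; /* table is sorted ascending, so we can stop here */
            }
            return rate;
        }

        int main(void)
        {
            printf("%.0f%%\n", commission_rate(34988.0) * 100.0); /* prints 30% */
            return 0;
        }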

    Read the article

  • Social-Networking Startup, Hosting Plan

    - by pws5068
    I've created a social networking community which is soon ready to release, and I'm trying to decide on a type of hosting plan. I have considered options such as VPS and reseller plans. I anticipate (or at least hope for) a significant amount of traffic/bandwidth in the not-too-distant future. If I open a reseller account, will I see the same amount of server lag during busy hours that I do with a shared account? How significant is the profit margin with the reseller option? Aside from generalized "configurability", what advantages merit purchasing a VPS? Is there anything stopping me from reselling space on a VPS account? Features I need include: PHP, MySQL, unlimited domains, Ruby on Rails, and remote database connections.

    Read the article

  • What’s New in JD Edwards EnterpriseOne Release 9.1

    - by Breanne Cooley
    Oracle JD Edwards EnterpriseOne 9.1 offers customers significant updates to usability and business processes to improve productivity and bolster business value. This release addresses critical user needs, while delivering key enhancements in several areas, including:

    New User Experience - Release 9.1 offers significant enhancements to the user experience. New Web 2.0 features reduce task time and enable you to access meaningful information.

    Enhanced Reporting - Oracle’s JD Edwards EnterpriseOne One View Reporting is a breakthrough solution that allows business users to create interactive reports - without IT support.

    Industry Specific Functionality - This new release delivers key enhancements for the consumer goods, real estate management, and manufacturing and distribution industries.

    Enhanced Support for Global Operations - JD Edwards EnterpriseOne 9.1 supports global operations with several new features, including enhancements that consider the entire ERP business process associated with managing country of origin requirements.

    Productivity Features - This new release offers new, more tightly integrated business processes and other productivity advancements like improved data access and enhanced financial controls.

    Want to find out more?

    - Bookmark the JD Edwards EnterpriseOne web page
    - Listen to the Podcast: Announcing JD Edwards EnterpriseOne 9.1
    - Watch the Demo: JD Edwards EnterpriseOne 9.1 Features Demo
    - Watch the Demo: JD Edwards EnterpriseOne One View Reporting Demo
    - Review the Data Sheet: JD Edwards EnterpriseOne Tools 9.1

    New Training

    JD Edwards EnterpriseOne 9.1 training through Oracle University is designed to help you leverage these usability and productivity enhancements. The curriculum is aligned to the JD Edwards EnterpriseOne products and will teach your team how to efficiently and effectively implement and use your applications. Get started today! View available training and schedules.

    - Jim Vonick, Oracle University Market Development Manager

    Read the article

  • SOA Suite 11gR1 Patch Set 2 (PS2) released today!

    - by Demed L'Her
    We just released this morning SOA Suite 11gR1 Patch Set 2 (PS2)! You can download it as usual from:

    - OTN (main platforms only)
    - eDelivery (all platforms)

    11gR1 PS2 is delivered as a sparse installer, that is to say that it is meant to be applied on the latest full release (11gR1 PS1). The good part is that it’s great for existing PS1 users who simply need to apply the patch and run the patch assistant – the not so good part is that new users will first need to download PS1.

    What’s in that release? Bug fixes of course, but also several significant new features. Here is a short selection of the most significant features in PS2:

    - Spring component (for native Java extensibility and integration)
    - SOA Partitions (to organize and manage your composites)
    - Direct Binding (for transactional invocations to and from Oracle Service Bus)
    - HTTP binding (for those of you trying to do away with SOAP and looking for simple GET and POST)
    - Resequencer (for ordering out-of-order messages)
    - WS Atomic Transactions (WS-AT) support (for propagation of transactions across heterogeneous environments)

    Check out the complete list of new features in PS2 for more (including links to the documentation for the above)! But maybe even more importantly, we are also releasing Oracle Service Bus 11gR1 and BPM Suite 11gR1 at the same time – all on the same base platform (WebLogic Server 10.3.3)!

    (NB: it might take a while for all pages and caches to be updated with the new content, so if you don’t find what you need today, try again soon!)

    Read the article

  • Getting More Out of UPK

    - by [email protected]
    Are you getting the most out of UPK? Remember the idea of streamlining your content creation efforts? How about the concept of collaboration during development? How are you leveraging the System Process Documents or Test Scripts? Is your training team benefiting from the creation of process documentation? Is UPK linked into the help menu of your application, or even at the browser level (Smart Help)?

    Many customers underutilize UPK. Some customers just think of UPK as a training creation solution or just for creating documentation. To get the full value of UPK you need to first evaluate how the UPK developer is installed. Single user or multi user? If you have more than two developers of UPK, then there is a significant benefit to installing UPK in multi-user mode. This helps drive collaboration, automatic version control, and better facilitation of the workflow and state features with the use of customized views for the developers. Has your organization installed Usage Tracking? How are the outputs deployed, and for how many applications?

    If these questions have you thinking about your overall usage of UPK and you see significant room for improvement by using more of what UPK has to offer, then it could be time for a UPK Health Check. Contact your UPK Sales Consultant to help understand your environment and how to maximize the value of UPK and start getting more out of the product.

    Read the article

  • Oracle VM VirtualBox 4.0 Now Available

    - by Paulo Folgado
    Delivering on Oracle's commitment to open source, Oracle VM VirtualBox 4.0 is now available, further enhancing the popular, open source, cross-platform virtualization software.

    "Oracle VM VirtualBox 4.0 is the third major product release in just over a year, and adds to the many new product releases across the Oracle Virtualization product line, illustrating the investment and importance that Oracle places on providing a comprehensive desktop to datacenter virtualization solution," says Wim Coekaerts, senior vice president, Linux and Virtualization Engineering, Oracle. "With an improved user interface and added virtual hardware support, customers will find Oracle VM VirtualBox 4.0 provides a richer user experience."

    Part of Oracle's comprehensive portfolio of virtualization solutions, Oracle VM VirtualBox enables desktop or laptop computers to run multiple guest operating systems simultaneously, allowing users to get the most flexibility and utilization out of their PCs, and supports a variety of host operating systems, including Windows, Mac OS X, most popular flavors of Linux (including Oracle Linux), and Oracle Solaris. Oracle VM VirtualBox 4.0 delivers increased capacity and throughput to handle greater workloads, enhanced virtual appliance capabilities, and significant usability improvements. Support for the latest in virtual hardware, including chipsets supporting PCI Express, further extends the value delivered to customers, partners, and developers.

    Highlights of Oracle VM VirtualBox 4.0 include:

    New Open Architecture - Oracle and community developers can now create extensions that customize Oracle VM VirtualBox and add features not previously available.

    Enhanced Usability - A new scalable display mode enables users to view more virtual displays on their existing monitors. Improvements to VM management, including visual VM previews, an optional attributes display, and easy launch shortcut creation, enable administrators and power users to customize the interface to make it as simple or as comprehensive as required.

    Increased Capacity and Throughput - A new asynchronous I/O model for networked (iSCSI) and local storage delivers significant storage-related performance improvements, while new optimizations allow larger datacenter-class workloads, such as Oracle's middleware, to be run on 32-bit Windows hosts for testing and demo purposes.

    Powerful Virtual Appliance Sharing Capabilities - Enhanced support for standards-compliant OVF appliances and added support for OVA format descriptors. All information about a VM may be stored in a single folder to facilitate easier direct sharing among VMs.

    Support for Latest Virtual Hardware - A new, modern virtual chipset supporting PCI Express and other hardware enhancements, including high-definition audio devices, helps ensure support for the most demanding virtual workloads.

    Read the article

  • Why Oracle Data Integrator for Big Data?

    - by Mala Narasimharajan
    Big Data is everywhere these days - but what exactly is it? It’s data that comes from a multitude of sources – not only structured data, but unstructured data as well.  The sheer volume of data is mindboggling – here are a few examples of big data: climate information collected from sensors, social media information, digital pictures, log files, online video files, medical records or online transaction records.  These are just a few examples of what constitutes big data.   Embedded in big data is tremendous value and being able to manipulate, load, transform and analyze big data is key to enhancing productivity and competitiveness.  The value of big data lies in its propensity for greater in-depth analysis and data segmentation -- in turn giving companies detailed information on product performance, customer preferences and inventory.  Furthermore, by being able to store and create more data in digital form, “big data can unlock significant value by making information transparent and usable at much higher frequency." (McKinsey Global Institute, May 2011) Oracle's flagship product for bulk data movement and transformation, Oracle Data Integrator, is a critical component of Oracle’s Big Data strategy. ODI provides automation, bulk loading, and validation and transformation capabilities for Big Data while minimizing the complexities of using Hadoop.  Specifically, the advantages of ODI in a Big Data scenario are due to pre-built Knowledge Modules that drive processing in Hadoop. This leverages the graphical UI to load and unload data from Hadoop, perform data validations and create mapping expressions for transformations.  The Knowledge Modules provide a key jump-start and eliminate a significant amount of Hadoop development.  Using Oracle Data Integrator together with Oracle Big Data Connectors, you can simplify the complexities of mapping, accessing, and loading big data (via NoSQL or HDFS) but also correlating your enterprise data – this correlation may require integrating across heterogeneous and standards-based environments, connecting to Oracle Exadata, or sourcing via a big data platform such as Oracle Big Data Appliance. To learn more about Oracle Data Integration and Big Data, download our resource kit to see the latest in whitepapers, webinars, downloads, and more… or go to our website on www.oracle.com/bigdata

    Read the article

  • Library Organization in .NET

    - by Greg Ros
    I've written a .NET bitwise operations library as part of my projects (stuff ranging from "get MSB set" to some more complicated bitwise transformations) and I mean to release it as free software. I'm a bit confused about a design aspect of the library, though. Many of the methods/transformations in the library come with different endianness. A simple example is a getBitAt method that regards index 0 as the least significant bit, or the most significant bit, depending on the version used. In practice, I've found that using separate functions for different endianness results in much more comprehensible and reusable code than assuming all operations are little-endian or something.

    I'm really stumped regarding how best to package the library. Should I have methods that have LE and BE versions take an enum parameter in their signature, e.g. Endianness.Little, Endianness.Big? Should I have different static classes with identically named methods, such as MSB.GetBit and LSB.GetBit? On a much wider note, is there a standard I could use in cases like this? Some guide? Is my design issue trivial? I have a perfectionist bent, and I sometimes get stuck on tricky design issues like this...

    Note: I've sort of realized I'm using endianness somewhat colloquially to refer to the order/place value of digital component parts (be they bits, bytes, or words) in a larger whole, in any setting. I'm not talking about machine-level endianness or serial transmission endianness, just about place-value semantics in general. So there isn't a context of targeting different machines/transmission techniques or something.
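    [Editor's note] Purely to make the two conventions concrete – index 0 as the least significant bit versus index 0 as the most significant bit of a 32-bit word – here is a tiny illustrative sketch, in C rather than .NET and with invented names; it is not taken from the library being described:

        #include <stdint.h>

        /* Bit 0 is the least significant bit. */
        static int get_bit_lsb0(uint32_t value, int index)
        {
            return (int)((value >> index) & 1u);
        }

        /* Bit 0 is the most significant bit of the 32-bit word. */
        static int get_bit_msb0(uint32_t value, int index)
        {
            return (int)((value >> (31 - index)) & 1u);
        }

    Whether that distinction is best expressed as an enum parameter (Endianness.Little / Endianness.Big) or as two identically shaped entry points (MSB.GetBit / LSB.GetBit) is exactly the design question the poster raises.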

    Read the article

  • Fresh install on SSD with Ubuntu and Windows Vista, using whole disk encryption for Ubuntu

    - by nategator
    I would like to do a fresh install on an OCZ Vertex Plus R2 SSD 60GB drive I purchased on the cheap. Since the AES encryption looks like it may not work optimally for this drive, I would like to set up a dual-boot with Windows Vista (the only Windows copy I have for clean install purposes) and Ubuntu 12.04, using the best encryption scheme possible. My plan is to have Windows around just in case I need to use a program that won't work with Wine, and Ubuntu as my daily OS with all of my information secured in case the laptop is ever stolen or sold. Although this setup will not provide a lot of space, I think I can squeeze both OSes in and have enough for second-computer office tasks. So, my questions are:

    1. Which OS should I install first, Ubuntu or Vista?
    2. Any special considerations when partitioning the drive?
    3. How should I install Ubuntu to ensure full disk encryption for the Linux partition(s) and/or my daily computing?
    4. Is there a significant performance upgrade with doing a solo install of Ubuntu instead of a dual-boot setup? Will TRIM, for example, work correctly?
    5. Are there any significant security concerns with going the route of a dual-boot, other than the fact that any activity on Windows may be fully recoverable if the drive is stolen or sold?

    Thanks in advance!

    Read the article

  • JCP 2012 Award Nominations Announced

    - by heathervc
    The 10th Annual JCP Program Award Nominations have been posted on JCP.org. The community gets together every year during JavaOne to congratulate the winners and nominees at the JCP Community Party held in San Francisco. This year there are three awards: JCP Member/Participant of the Year, Outstanding Spec Lead, and Most Significant JSR.

    Member of the Year
    - Stephen Colebourne
    - Markus Eisele
    - Google
    - JUG Chennai
    - Werner Keil
    - London Java Community and SouJava
    - Antoine Sabot-Durand

    Outstanding Spec Lead
    - Michael Ernst, JSR 308, Annotations on Java Types
    - Victor Grazi, Credit Suisse, JSR 354, Money and Currency API
    - Nigel Deakin, Oracle, JSR 343, Java Message Service 2.0
    - Pete Muir, Red Hat, JSR 346, Contexts and Dependency Injection for Java EE 1.1

    Most Significant JSR
    - API for JSON Processing, JSR 353
    - Money and Currency API, JSR 354
    - Java State Management, JSR 350
    - Java Message Service 2, JSR 343
    - JCP.Next, JSR 348, JSR 355, and JSR 358

    Congratulations to the nominees; you can read the nomination text and more information about the awards here. And remember to join us on Tuesday, 2 October at the Infusion Lounge to celebrate with the winners and nominees!

    Read the article

  • “Query cost (relative to the batch)” <> Query cost relative to batch

    - by Dave Ballantyne
    OK, so that is quite a contradictory title, but unfortunately it is true that a common misconception is that the query with the highest percentage relative to the batch is the worst performing. Simply put, it is a lie, or more accurately, we don't understand what these figures mean. Consider the two simple queries below:

        SELECT *
          FROM Person.BusinessEntity
          JOIN Person.BusinessEntityAddress
            ON Person.BusinessEntity.BusinessEntityID = Person.BusinessEntityAddress.BusinessEntityID
        go
        SELECT *
          FROM Sales.SalesOrderDetail
          JOIN Sales.SalesOrderHeader
            ON Sales.SalesOrderDetail.SalesOrderID = Sales.SalesOrderHeader.SalesOrderID

    After executing these and looking at the plans, I see a 13% / 87% split. But 13% / 87% of WHAT? CPU? Duration? Reads? Writes? Or some magical weighted algorithm? In a Profiler trace of the two we can find the metrics we are interested in. CPU and duration are well out, but what about reads (210 and 1935)? To save you doing the maths, though you are more than welcome to, that's a 90.2% / 9.8% split. Close, but no cigar.

    Let's try a different tack. Looking at the execution plan, the "Estimated Subtree Cost" of query 1 is 0.29449 and of query 2 it is 1.96596. Again, to save you the maths, that works out to 13.03% and 86.97%; round those and that's the figures we are after. But what is the worrying word there? "Estimated". So these are not "actual" execution costs. So what's the problem in comparing the estimated costs to derive a meaning of "most costly"? Well, in the case of simple queries such as the above, probably not a lot. In more complicated queries, a fair bit.

    By modifying the second query to also show the total number of lines on each order:

        SELECT *, COUNT(*) OVER (PARTITION BY Sales.SalesOrderDetail.SalesOrderID)
          FROM Sales.SalesOrderDetail
          JOIN Sales.SalesOrderHeader
            ON Sales.SalesOrderDetail.SalesOrderID = Sales.SalesOrderHeader.SalesOrderID

    the split in percentages is now 6% / 94%, and the Profiler metrics show even more of a discrepancy.

    Estimates can be out with actuals for a whole host of reasons; scalar UDFs are a particular bugbear of mine, and in fact the cost of a UDF call is entirely hidden inside the execution plan. It always estimates to 0 (well, a very small number). Take for instance the following UDF:

        Create Function dbo.udfSumSalesForCustomer(@CustomerId integer)
        returns money
        as
        begin
            Declare @Sum money
            Select @Sum = SUM(SalesOrderHeader.TotalDue)
              from Sales.SalesOrderHeader
             where CustomerID = @CustomerId
            return @Sum
        end

    If we have two statements, one that fires the UDF and another that doesn't:

        Select CustomerID
          from Sales.Customer
         order by CustomerID
        go
        Select CustomerID, dbo.udfSumSalesForCustomer(Customer.CustomerID)
          from Sales.Customer
         order by CustomerID

    the costs relative to the batch are a 50/50 split, but there has to be an actual cost of firing the UDF. Indeed, Profiler shows us it is nowhere even remotely near 50/50!!!!

    Moving forward to the window framing functionality in SQL Server 2012, the optimizer sees ROWS and RANGE (see here for their functional differences) as the same 'cost' too:

        SELECT SalesOrderDetailID, SalesOrderId,
               SUM(LineTotal) OVER (PARTITION BY salesorderid ORDER BY Salesorderdetailid RANGE unbounded preceding)
          from Sales.SalesOrderdetail
        go
        SELECT SalesOrderDetailID, SalesOrderId,
               SUM(LineTotal) OVER (PARTITION BY salesorderid ORDER BY Salesorderdetailid Rows unbounded preceding)
          from Sales.SalesOrderdetail

    By now it won't be a great surprise that the Profiler trace reads a *tiny* bit different.

    So, moral of the story: "percentage relative to batch" can give a rough 'finger in the air' measurement, but don't rely on it as fact.

    Read the article

  • LaTeX limitation?

    - by Jayen
    Hi, I've hit an annoying problem in LaTeX. I've got a tex file of about 1000 lines. I've already got a few figures, but when I try to add another figure, it barfs with:

        ! Undefined control sequence.
        <argument> ... \sf@size \z@ \selectfont \@currbox
        l.937 \begin{figure}[t]

    If I move the figure to other parts of the file, I can get similar errors on different lines:

        ! Undefined control sequence.
        <argument> ... \sf@size \z@ \selectfont \@currbox
        l.657 \paragraph {A Centering Algorithm}

    If I comment out the figure, all is ok:

        %\begin{figure}[t]
        %  \caption{Example decision tree, from Reiter and Dale [2000]}
        %  \label{fig:relation-decision-tree}
        %  \centering
        %  \includegraphics[keepaspectratio=true]{./relation-decision-tree.eps}
        %\end{figure}

    If I keep just the begin and end, like:

        \begin{figure}%[t]
        %  \caption{Example decision tree, from Reiter and Dale [2000]}
        %  \label{fig:relation-decision-tree}
        %  \centering
        %  \includegraphics[keepaspectratio=true]{./relation-decision-tree.eps}
        \end{figure}

    I get:

        ! Undefined control sequence.
        <argument> ... \sf@size \z@ \selectfont \@currbox
        l.942 \end {figure}

    At first, I thought maybe LaTeX had hit some limit, and I tried playing with the ulimits, but that didn't help. Any ideas? I've got other figures with graphics already. My preamble looks like:

        \documentclass[acmcsur,acmnow]{acmtrans2n}
        \usepackage{array}
        \usepackage{lastpage}
        \usepackage{pict2e}
        \usepackage{amsmath}
        \usepackage{varioref}
        \usepackage{epsfig}
        \usepackage{graphics}
        \usepackage{qtree}
        \usepackage{rotating}
        \usepackage{tree-dvips}
        \usepackage{mdwlist}
        \makecompactlist{quote*}{quote}
        \usepackage{verbatim}
        \usepackage{ulem}

    Read the article

  • How to access CSS generated content with JavaScript

    - by Boldewyn
    I generate the numbering of my headers and figures with CSS's counter and content properties:

        img.figure:after {
            counter-increment: figure;
            content: "Fig. " counter(section) "." counter(figure);
        }

    This (appropriate browser assumed) gives a nice labelling "Fig. 1.1", "Fig. 1.2" and so on following any image.

    Question: How can I access this from JavaScript? The question is twofold in that I'd like to access either the current value of a certain counter (at a certain DOM node) or the value of the CSS-generated content (at a certain DOM node) or, obviously, both pieces of information.

    Background: I'd like to append the appropriate number to links back-referencing figures, like this:

        <a href="#fig1">see here</a>
        ------------------------^ " (Fig 1.1)" inserted via JS

    As far as I can see, it boils down to this problem: I could access content or counter via getComputedStyle:

        var fig_content = window.getComputedStyle(
            document.getElementById('fig-a'), ':after').content;

    However, this is not the live value, but the one declared in the stylesheet. I cannot find any interface to access the real live value.

    Read the article

  • Get percentiles of data-set with group by month

    - by Cylindric
    Hello, I have a SQL table with a whole load of records that look like this:

        | Date       | Score |
        +------------+-------+
        | 01/01/2010 |     4 |
        | 02/01/2010 |     6 |
        | 03/01/2010 |    10 |
        ...
        | 16/03/2010 |     2 |

    I'm plotting this on a chart, so I get a nice line across the graph indicating score-over-time. Lovely. Now, what I need to do is include the average score on the chart, so we can see how that changes over time, so I can simply add this to the mix:

        SELECT YEAR(SCOREDATE) 'Year', MONTH(SCOREDATE) 'Month',
               MIN(SCORE) MinScore,
               AVG(SCORE) AverageScore,
               MAX(SCORE) MaxScore
        FROM SCORES
        GROUP BY YEAR(SCOREDATE), MONTH(SCOREDATE)
        ORDER BY YEAR(SCOREDATE), MONTH(SCOREDATE)

    That's no problem so far. The problem is, how can I easily calculate the percentiles at each time-period? I'm not sure that's the correct phrase. What I need in total is:

    1. A line on the chart for the score (easy)
    2. A line on the chart for the average (easy)
    3. A line on the chart showing the band that 95% of the scores occupy (stumped)

    It's the third one that I don't get. I need to calculate the 5% percentile figures, which I can do singly:

        SELECT MAX(SubQ.SCORE) FROM
            (SELECT TOP 45 PERCENT SCORE
             FROM SCORES
             WHERE YEAR(SCOREDATE) = 2010 AND MONTH(SCOREDATE) = 1
             ORDER BY SCORE ASC) AS SubQ

        SELECT MIN(SubQ.SCORE) FROM
            (SELECT TOP 45 PERCENT SCORE
             FROM SCORES
             WHERE YEAR(SCOREDATE) = 2010 AND MONTH(SCOREDATE) = 1
             ORDER BY SCORE DESC) AS SubQ

    But I can't work out how to get a table of all the months:

        | Date       | Average | 45% | 55% |
        +------------+---------+-----+-----+
        | 01/01/2010 |      13 |  11 |  15 |
        | 02/01/2010 |      10 |   8 |  12 |
        | 03/01/2010 |       5 |   4 |  10 |
        ...
        | 16/03/2010 |       7 |   7 |   9 |

    At the moment I'm going to have to load this lot up into my app and calculate the figures myself, or run a larger number of individual queries and collate the results.

    Read the article
